\begin{document} \begin{center} {\Large \textbf{Multilevel Particle Filters for L\'{e}vy-driven stochastic differential equations }} BY AJAY JASRA $^{1}$, KODY J. H. LAW $^{2}$ \& PRINCE PEPRAH OSEI $^{1}$ $^{1}$ {\footnotesize Department of Statistics \& Applied Probability, National University of Singapore, Singapore, 117546, SG.}\\ {\footnotesize E-Mail:\,}\texttt{\emph{\footnotesize [email protected], [email protected]}}\\ $^{2}$ {\footnotesize School of Mathematics, University of Manchester, UK, AND Computer Science and Mathematics Division, Oak Ridge National Laboratory Oak Ridge, TN, 37831, USA.}\\ {\footnotesize E-Mail:\,}\texttt{\emph{\footnotesize [email protected]}} \end{center} \begin{abstract} We develop algorithms for computing expectations with respect to the laws of models associated to stochastic differential equations (SDEs) driven by pure L\'{e}vy processes. We consider filtering such processes as well as pricing of path dependent options. We propose a multilevel particle filter (MLPF) to address the computational issues involved in solving these continuum problems. We show via numerical simulations and theoretical results that, under suitable assumptions regarding the discretization of the underlying driving L\'{e}vy process, our proposed method achieves optimal convergence rates: the cost to obtain MSE $\mathcal{O}(\epsilon^2)$ scales like $\mathcal{O}(\epsilon^{-2})$ for our method, as compared with $\mathcal{O}(\epsilon^{-3})$ for the standard particle filter. \\ \textbf{Keywords}: L\'{e}vy-driven SDE; L\'{e}vy processes; Particle Filters; Multilevel Particle Filters; Barrier options. \end{abstract} \section{Introduction}\label{levy:intro} L\'{e}vy processes have recently become very useful in several scientific disciplines.
A non-exhaustive list includes physics, in the study of turbulence and quantum field theory; economics, for continuous time-series models; insurance mathematics, for the computation of insurance risk; and mathematical finance, for pricing path dependent options. An early application of L\'{e}vy processes in modeling financial instruments dates back to \cite{Madan_VGprocess}, where a variance gamma process is used to model market returns. A typical computational problem in mathematical finance is the computation of the quantity $\mathbb{E}\left[f(Y_t)\right]$, where $Y_t$ is the time $t$ solution of a stochastic differential equation driven by a L\'{e}vy process and $f\in\mathcal{B}_b(\mathbb{R}^d)$, a bounded Borel measurable function on $\mathbb{R}^d$. For instance $f$ can be a payoff function. Typically one uses the Black-Scholes model, in which the underlying price process is lognormal. However, often the asset price exhibits big jumps over the time horizon. The inconsistency of the assumptions of the Black-Scholes model with market data has led to the development of more realistic models for these data in the literature. General L\'{e}vy processes offer a promising alternative for describing the observed reality of financial market data, as compared to models that are based on standard Brownian motions. In the application of standard and multilevel particle filter methods to SDEs driven by general L\'{e}vy processes, in addition to pricing path dependent options, we will consider filtering of partially-observed L\'{e}vy processes with discrete-time observations. In the latter context, we will assume that the partially-observed data are regularly spaced observations $z_1,\dots,z_n$, where $z_k\in\mathbb{R}^d$ is a realization of $Z_k$ and $Z_k|(Y_{k\tau}=y_{k\tau})$ has density given by $g\left(z_k|y_{k\tau}\right)$, where $\tau$ is the time scale. Real S\&P $500$ stock price data will be used to illustrate our proposed methods as well as the standard particle filter.
We will show how both of these problems can be formulated as general Feynman-Kac type problems \cite{delm:04}, with time-dependent potential functions modifying the L{\'e}vy path measure. The multilevel Monte Carlo (MLMC) methodology was introduced in \cite{heinrich2001multilevel} and first applied to the simulation of SDEs driven by Brownian motion in \cite{Giles_mlmc}. Recently, \cite{Dereich_mlmcLevydriven} provided a detailed analysis of the application of MLMC to a L\'{e}vy-driven SDE. {This first work was extended in \cite{Dereich_gausscorrection} to a method with a Gaussian correction term which can substantially improve the rate for pure jump processes \cite{Asmusen_Rosinski_levyprocesses}.} The authors in \cite{castilla_mlmcLevy} use the MLMC method for general L\'{e}vy processes based on the Wiener-Hopf decomposition. We extend the methodology described in \cite{Dereich_mlmcLevydriven} to a particle filtering framework. This is challenging for the following reasons. First, one must choose a suitable weighting function to prevent the weights in the particle filter from being zero (or infinite). Next, one must control the jump part of the underlying L\'{e}vy process such that the path of the filter does not blow up as the time parameter increases. In pricing path dependent options, for example knock-out barrier options, we adopt the same strategy described in \cite{JayOption, Jay_smcdiff} for the computation of the expectation of functionals of the SDE driven by general L\'{e}vy processes. The rest of the paper is organised as follows. In Section \ref{levy:PODLevy}, we briefly review the construction of general L\'{e}vy processes, the numerical approximation of L\'{e}vy-driven SDEs, the MLMC method, and finally the construction of a coupled kernel for L\'{e}vy-driven SDEs which will allow MLMC to be used. Section \ref{levy:ML_Levy} introduces both the standard and multilevel particle filter methods and their application to L\'{e}vy-driven SDEs.
Section \ref{levy:numerics} features numerical examples of pricing barrier options and filtering of partially observed L\'{e}vy processes. The computational savings of the multilevel particle filter over the standard particle filter are illustrated in this section. \section{Approximating SDEs driven by L\'{e}vy Processes}\label{levy:PODLevy} In this section, we briefly describe the construction and approximation of a general $d^{\prime}$-dimensional L\'{e}vy process $\{X_t\}_{t\in[0,K]}$, and the solution $Y:=\{Y_t\}_{t\in[0,K]}$ of a $d$-dimensional SDE driven by $X$. Consider a stochastic differential equation given by \begin{align}\label{levy:eq1} \mathrm{d}Y_t&=a(Y_{t^{-}})\mathrm{d}X_t,\quad \mathrm{y}_0\in\mathbb{R}^d, \end{align} where $a:\mathbb{R}^d\rightarrow\mathbb{R}^{d\times d^{\prime}}$, and the initial value is $\mathrm{y}_0$ (assumed known). In particular, in the present work we are interested in computing the expectation of bounded and measurable functions $f:\mathbb{R}^d\rightarrow\mathbb{R}$, that is $\mathbb{E}[f(Y_t)]$. \subsection{L\'{e}vy Processes}\label{levy:levproc} For a detailed general description of L\'{e}vy processes and the analysis of SDEs driven by them, we refer the reader to the monographs of \cite{Bertoin_levyprocesses,Sato_levyprocesses} and \cite{Applebaum_levyprocesses,Protter_Isde}. L\'{e}vy processes are stochastic processes with stationary and independent increments, which begin almost surely from the origin and are stochastically continuous. Two important fundamental tools available to study the richness of the class of L\'{e}vy processes are the L\'{e}vy-Khintchine formula and the L\'{e}vy-It\^{o} decomposition. They respectively characterize the distributional properties and the structure of sample paths of the L\'{e}vy process. Important examples of L\'{e}vy processes include Poisson processes, compound Poisson processes and Brownian motions.
There is a strong interplay between L\'{e}vy processes and infinitely divisible distributions: for any $t>0$, the distribution of $X_t$ is infinitely divisible. Conversely, if $F$ is an infinitely divisible distribution then there exists a L\'{e}vy process $X$ such that the distribution of $X_1$ is given by $F$. This is a consequence of the L\'{e}vy-Khintchine formula for L\'{e}vy processes, which we describe below. Let $X$ be a L\'{e}vy process with a triplet $\left(\nu,\Sigma,b\right)$, $b\in\mathbb{R}^{d'}, 0\leq\Sigma=\Sigma^T \in\mathbb{R}^{d'\times d'}$, where $\nu$ is a measure satisfying $\nu(\{0\})=0$ and $\int_{\mathbb{R}^{d'}}(1\wedge|x|^2)\nu(\mathrm{d}x)<\infty$, such that \begin{align*} \mathbb{E}[e^{i\langle u, X_t\rangle }]&=\int_{\mathbb{R}^{d'}}e^{i\langle u, x\rangle }\pi(\mathrm{d}x)=e^{t\psi(u)} \end{align*} with $\pi$ the probability law of $X_t$, where \begin{align}\label{levy:eq2} \psi(u)&=i\langle u, b\rangle-\frac{\langle u, \Sigma u \rangle}{2}+ \int_{\mathbb{R}^{d'}\backslash\{0\}}\left(e^{i\langle u , x\rangle }-1-i\langle u , x\rangle \right)\nu(dx),\quad u\in\mathbb{R}^{d'}. \end{align} The measure $\nu$ is called the L\'{e}vy measure of $X$. The triplet of L\'{e}vy characteristics $\left(\nu,\Sigma,b\right)$ is simply called the L\'{e}vy triplet. Note that in general, the L\'{e}vy measure $\nu$ can be finite or infinite. If $\nu(\mathbb{R}^{d'})<\infty$, then almost all paths of the L\'{e}vy process have a finite number of jumps on every compact interval {and it can be represented as a compensated compound Poisson process.} On the other hand, if $\nu(\mathbb{R}^{d'})=\infty$, then the process has an infinite number of jumps on every compact interval almost surely. {Even in this case the third term in the integrand ensures that the integral is finite, and hence so is the characteristic exponent.} \subsection{Simulation of L\'{e}vy Processes}\label{levy:levsimulation} The law of the increments of many L\'{e}vy processes is not known explicitly.
This makes it more difficult to simulate a path of a general L\'{e}vy process than, for instance, standard Brownian motion. For the few L\'{e}vy processes where the distribution of the process is known explicitly, \cite{Cont_jumpprocesses,Schoutens_levyprocfinance} provide methods for exact simulation of such processes, which are applicable in financial modelling. For our purposes, the simulation of the path of a general L\'{e}vy process will be based on the L\'{e}vy-It\^{o} decomposition and we briefly describe the construction below. An alternative construction, used in \cite{castilla_mlmcLevy}, is based on the Wiener-Hopf decomposition. The L\'{e}vy-It\^{o} decomposition reveals much about the structure of the paths of a L\'{e}vy process. We can split the L\'{e}vy exponent, or the characteristic exponent of $X_t$ in $\left(\ref{levy:eq2}\right)$, into three parts \begin{align*} \psi&=\psi^{1}+\psi^2+\psi^3 \, , \end{align*} where \begin{align*} \psi^1(u)&=i\langle u , b\rangle ,\quad \psi^2(u)=-\frac{\langle u, \Sigma u \rangle}{2}, \\ \psi^3(u)&=\int_{\mathbb{R}^{d'}\backslash\{0\}}\left(e^{i\langle u , x\rangle }-1-i\langle u , x\rangle \right)\nu(dx),\quad u\in\mathbb{R}^{d'} \, . \end{align*} The first term corresponds to a deterministic drift process with parameter $b$, the second term to a Wiener process with covariance $\Sigma$, {where $\sqrt{\Sigma}$ denotes the symmetric square-root}, and the last part corresponds to a L\'{e}vy process which is a square integrable martingale. This term may either be a compensated compound Poisson process or the limit of such processes, and it is the hardest to handle when it arises from such a limit. Thus, any L\'{e}vy process can be decomposed into three independent L\'{e}vy processes thanks to the L\'{e}vy-It\^{o} decomposition theorem. In particular, let $\{W_t\}_{t\in[0,K]}$ denote a Wiener process independent of the process $\{L_t\}_{t\in[0,K]}$.
A L\'{e}vy process $\{X_t\}_{t\in[0,K]}$ can be \emph{constructed} as follows \begin{align}\label{levy:eq3} X_t&=\sqrt{\Sigma} W_t+L_t+bt \, . \end{align} The L\'{e}vy-It\^{o} decomposition guarantees that every square integrable L\'{e}vy process has a representation as $\left(\ref{levy:eq3}\right)$. We will assume that one cannot sample from the law of $X_t$, hence of $Y_t$, and rather we must numerically approximate the process with finite resolution. Such numerical methods have been studied extensively, for example in \cite{Jacod_levydrivensde,Rubenthaler_levyprocess}. It will be assumed that the L{\'e}vy process $X$ with characteristic exponent \eqref{levy:eq2}, and the L{\'e}vy-driven process $Y$ in \eqref{levy:eq1}, satisfy the following conditions. Let $|\cdot|$ denote the standard Euclidean $\ell_2$ norm for vectors, and the induced operator norm for matrices. \begin{assumption} \label{ass:main} There exists a $C>0$ such that \begin{itemize} \item[{\rm (i)}] $|a(y) - a(y')| \leq C |y-y'|$, and $|a(y)| \leq C$ for all $y\in \mathbb R^d$ ; \item[{\rm (ii)}] $0 < \int |x|^2 \nu(dx) \leq C^2$ ; \item[{\rm (iii)}] $|\Sigma|< C^2$ and $|b| \leq C$ \, . \end{itemize} \end{assumption} Item (i) provides continuity of the forward map, while (ii) controls the variance of the jumps, and (iii) controls the diffusion and drift components and is trivially satisfied. These assumptions are the same as in the paper \cite{Dereich_mlmcLevydriven}, with the exception of the second part of (i), which was not required there. As in that paper, we refer to the following general references on L{\'e}vy processes for further details \cite{Applebaum_levyprocesses, Bertoin_levyprocesses}. \subsection{Numerical Approximation of a L\'{e}vy Process and L\'{e}vy-driven SDE} \label{numApprox} Recall $\left(\ref{levy:eq1}\right)$ and $\left(\ref{levy:eq3}\right)$. Consider the evolution of the discretized L\'{e}vy process, and hence the L\'{e}vy-driven SDE, over the time interval $[0,K]$.
In order to describe the Euler discretization of the two processes for a given accuracy parameter $h_l$, we need some definitions. The meaning of the subscript will become clear in the next section. Let $\delta_l>0$ denote a jump threshold parameter, in the sense that jumps which are smaller than $\delta_l$ will be ignored. Let $B_{\delta_l}=\{x\in\mathbb{R}^{d'}:|x|<\delta_l\}$. Define $\lambda_l=\nu(B_{\delta_l}^{c})<\infty$, that is, the L\'{e}vy measure outside of the ball of radius $\delta_l$. We assume that the L\'evy component of the process is nontrivial, so that $\nu(B_1)=\infty$. First $h_l$ will be chosen, and then the parameter $\delta_l$ will be chosen such that the step-size of the time-stepping method satisfies $h_l=1/\lambda_l$. The jump time increments are exponentially distributed with parameter $\lambda_l$, so that the number of jumps before time $t$ is a Poisson process $N^l(t)$ with intensity $\lambda_l$. The jump times will be denoted by $\tilde{T}_j^l$. The jump heights $\Delta L_{\tilde{T}_j}^l$ are distributed according to \begin{align*} \mu^l(\mathrm{d}x)&:=\frac{1}{\lambda_l}\mathbbm 1_{B_{\delta_l}^{c}}(x)\nu(\mathrm{d}x). \end{align*} Define \begin{equation}\label{eq:efl} F_0^l=\int_{B_{\delta_{l}}^{c}}x\nu(\mathrm{d}x). \end{equation} The expected sum of the jump heights on an interval of length $t$ is $F_0^l t$, and the compensated compound Poisson process $L^{\delta}$ defined by $$L_t^{\delta}=\sum_{j=1}^{N^l(t)} \Delta L_{\tilde{T}_j}^l - F_0^l t$$ is an $L^2$ martingale which converges in $L^2$ to the L{\'e}vy process $L$ as $\delta_l \rightarrow 0$ \cite{Applebaum_levyprocesses,Dereich_mlmcLevydriven}. The Euler discretization of the L\'{e}vy process and the L\'{e}vy-driven SDE is given by Algorithm \ref{levy:DiscreteAlgo}.
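As an illustrative sketch (not the authors' implementation), the construction just described can be written in Python for a one-dimensional process: exponential waiting times with rate $\lambda_l$, jump heights from $\mu^l$, compensation by $F_0^l$, and an Euler step for $Y$ between refined times. The inputs \texttt{lam\_l} ($\lambda_l$), \texttt{sample\_jump} (a sampler from $\mu^l$) and \texttt{F0\_l} ($F_0^l$) are hypothetical stand-ins for a concrete choice of $\nu$ and $\delta_l$.

```python
import numpy as np

def euler_levy_sde(a, y0, lam_l, sample_jump, F0_l, b, sqrt_Sigma, T=1.0, rng=None):
    """One-dimensional sketch of the single-level Euler scheme for dY = a(Y-) dX.

    lam_l       : nu(B_delta^c), the intensity of the retained jumps
    sample_jump : draws a jump height from mu^l (hypothetical user input)
    F0_l        : int_{B_delta^c} x nu(dx), the compensator drift
    """
    rng = rng or np.random.default_rng()
    h_l = 1.0 / lam_l                       # step size tied to the jump intensity
    # Jump times as an Exp(lam_l) renewal sequence; heights drawn from mu^l.
    jump_times, heights = [], []
    s = rng.exponential(h_l)
    while s < T:
        jump_times.append(s)
        heights.append(sample_jump(rng))
        s += rng.exponential(h_l)
    # Refined grid: advance by at most h_l, stopping at each jump time.
    t, y, j = 0.0, y0, 0
    while t < T:
        if j < len(jump_times) and jump_times[j] <= t + h_l:
            t_next, dL = jump_times[j], heights[j]
            j += 1
        else:
            t_next, dL = min(t + h_l, T), 0.0
        dt = t_next - t
        dW = rng.normal(0.0, np.sqrt(dt))   # Brownian increment over the step
        dX = sqrt_Sigma * dW + dL + (b - F0_l) * dt
        y, t = y + a(y) * dX, t_next
    return y
```

With $a\equiv1$, $\Sigma=b=F_0^l=0$ and unit jump heights, the scheme simply counts the retained jumps, which is a quick sanity check of the renewal construction.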
Appropriate refinement of the original jump times $\{\tilde{T}^l_j\}$ to new jump times $\{T_{j}^{l}\}$ is necessary to control the discretization error arising from the Brownian motion component, the original drift process, and the drift component of the compound Poisson process. Note that $\Delta L_{{T}^l_j}^{l}$ is non-zero only when $T_{j}^{l}$ corresponds with $\tilde{T}_{m}^{l}$ for some $m$, as a consequence of the construction presented above. \begin{algorithm}[!ht] \caption{\textbf{: Discretization of L\'{e}vy process}} \label{levy:DiscreteAlgo} \begin{algorithmic} \STATE Initialization: Let $\tilde{T}_0^l=0$ and $j=1$; \begin{enumerate} \item[(A)] Generate jump times: $\tilde{T}_j^l=\min\{1,\tilde{T}_{j-1}^l+\xi_j^l\}$, $\xi_j^l\sim Exp(\lambda_l)$\, ; If $\tilde{T}_j^l=1, \tilde{k}_l=j$; Go to (B); Otherwise $j=j+1$; Go to start of (A). \item[(B)] Generate jump heights: For $j\in\{1,\dots,\tilde{k}_l-1\}$, $z_j^l\sim\mu^l$; $\Delta L_{\tilde{T}_j^l}^l=z_j^l$ and $\Delta L_{\tilde{T}_{\tilde{k}_l}}^l=0$; Set $j=1$, $T_0^l=0$. \item[(C)] Refinement of original jump times: $T_j^l=\min\bigg\{T_{j-1}^l+h_l, \min\Big\{\tilde{T}_k^l > T_{j-1}^l;k\in\{1,\dots,\tilde{k}_l\}\Big\}\bigg\}$ \, ; If $T_j^l=\tilde{T}_k^l$ for some $k\in \{1,\dots, \tilde{k}_l\}$, then $\Delta L_{T^l_j}^{l} = \Delta L_{\tilde{T}^l_k}^{l}$; otherwise $\Delta L_{T^l_j}^{l} =0$ \, ; If $T_j^l=1, k_l=j$; Go to (D); Otherwise $j=j+1$; Go to start of (C). \end{enumerate} \end{algorithmic} \end{algorithm} \begin{algorithm}[!ht] \begin{algorithmic} \STATE \begin{enumerate} \item[(D)] Recursion of the process: For $m\in\{0,\dots,k_l-1\}$, $X_0^l=x_0$; \begin{equation}\label{levy:eq4} X^l_{T^l_{m+1}} =X^l_{T^l_{m}}+\sqrt{\Sigma}\Big(W_{T^l_{m+1}}-W_{T^l_{m}}\Big)+ \Delta L_{{T}_{m+1}^l}^l +(b-F_0^l)(T^l_{m+1}-T^l_{m}) \, .
\end{equation} \end{enumerate} \end{algorithmic} \end{algorithm} The numerical approximation of the L\'{e}vy process described in Algorithm \ref{levy:DiscreteAlgo} gives rise to an approximation of the L\'{e}vy-driven SDE as follows. Given $Y^l_{T^l_{0}}$, for $m=0,\dots, k_l-1$ \begin{equation}\label{eq:euler_levyd} Y^l_{T^l_{m+1}} =Y^l_{T^l_{m}}+a(Y^l_{T^l_{m}})(\Delta X)^l_{T^l_{m+1}}, \end{equation} where $(\Delta X)^l_{T^l_{m+1}} = X^l_{T^l_{m+1}}-X^l_{T^l_{m}}$ is given by \eqref{levy:eq4}. In particular, the recursion in \eqref{eq:euler_levyd} gives rise to a transition kernel, denoted by $Q^l(u,dy)$, between observation times $t\in\{0,1,\dots,K\}$. This kernel is the measure of $Y^l_{T^l_{k_l}}$ given initial condition $Y^l_{T^l_0}=u$. Observe that the initial condition for $X$ is irrelevant for simulation of $Y$, since only the increments $(\Delta X)^l_{T^l_{m+1}}$ are required, which are simulated independently by adding a realization of $N\big ((b-F_0^l)(T^l_{m+1}-T^l_{m}), (T^l_{m+1}-T^l_{m})\Sigma \big)$ to $\Delta L_{T_{m+1}^l}^l$. \begin{rem} The numerical approximation of the L\'{e}vy process, and hence the L\'{e}vy-driven SDE $\left(\ref{levy:eq1}\right)$, in Algorithm \ref{levy:DiscreteAlgo} is the single-level version of a more general coupled discretization \cite{Dereich_mlmcLevydriven} which will be described shortly in Section \ref{levy:couplekernel}. This procedure will be used to obtain samples for the plain particle filter algorithm. \end{rem} \subsection{Multilevel Monte Carlo Method} \label{levy:mlmc} Suppose one aims to approximate the expectation of functionals of the solution of the L\'{e}vy-driven SDE in $\left(\ref{levy:eq1}\right)$ at time $1$, that is $\mathbb{E}[f(Y_1)]$, where $f:\mathbb{R}^d\rightarrow\mathbb{R}$ is a bounded and measurable function. Typically, one is interested in the expectation w.r.t.~the law of the exact solution of the SDE $\left(\ref{levy:eq1}\right)$, but computing this is not possible in practice.
Suppose that the law associated with $\left(\ref{levy:eq1}\right)$ with no discretization is $\pi_1$. Since we cannot sample from $\pi_1$, we use a biased version $\pi^L_1$ associated with a given level of discretization of SDE $\left(\ref{levy:eq1}\right)$ at time $1$. Given $L\geq1$, define $\pi_{1}^{L}(f):=\mathbb{E}[f(Y^L_{1})]$, the expectation with respect to the density associated with the Euler discretization $\left(\ref{levy:eq4}\right)$ at level $L$. The standard Monte Carlo (MC) approximation at time $1$ consists in obtaining i.i.d. samples $\Big(Y_{1}^{L,(i)}\Big)_{i=1}^{N_L}$ from the density $\pi_1^L$ and approximating $\pi_{1}^{L}(f)$ by its empirical average \begin{align*} {\pi}_1^{L,N_L}(f)&:=\frac{1}{N_L}\sum_{i=1}^{N_L}f(Y_1^{L,(i)}). \end{align*} The mean square error of the estimator is \begin{align*} e({\pi}_1^{L,N_L}(f))^2&:=\mathbb{E}\left[\left({\pi}_1^{L,N_L}(f)-\pi_{1}(f)\right)^2\right]. \end{align*} Since the MC estimator ${\pi}_1^{L,N_L}(f)$ is an unbiased estimator for $\pi_{1}^{L}(f)$, this can further be decomposed into \begin{equation}\label{eq:error_dec} e({\pi}_1^{L,N_L}(f))^2=\underbrace{N_L^{-1}\mathbb{V}[f(Y_1^L)]}_{\hbox{variance}} +(\underbrace{{\pi}_1^L(f)-\pi_{1}(f)}_{\hbox{bias}})^2. \end{equation} The first term on the right hand side of the decomposition is the variance of the MC simulation and the second term is the bias arising from the discretization. If we want \eqref{eq:error_dec} to be $\mathcal{O}(\epsilon^2)$, then it is clearly necessary to choose $N_L\propto \epsilon^{-2}$, and then the total cost is $N_L\times {\rm Cost}(Y_1^{L,(i)}) \propto \epsilon^{-2 - \gamma}$, where it is assumed that ${\rm Cost}(Y_1^{L,(i)}) \propto \epsilon^{-\gamma}$, for some $\gamma>0$, is the cost required to ensure the bias is $\mathcal{O}(\epsilon)$.
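The decomposition \eqref{eq:error_dec} can be checked numerically on a toy example; the bias value $0.1$ and the unit sampling variance below are purely hypothetical illustrations, not quantities from the model.

```python
import numpy as np

# Toy check of MSE = variance/N_L + bias^2, as in the error decomposition.
# Hypothetical setup: pi_1(f) = 0, while samples from the biased law have
# mean 0.1 (the bias) and unit variance.
rng = np.random.default_rng(0)
bias, N_L, reps = 0.1, 100, 2000
estimates = np.array([(bias + rng.standard_normal(N_L)).mean() for _ in range(reps)])
mse = np.mean(estimates ** 2)        # empirical MSE against pi_1(f) = 0
predicted = 1.0 / N_L + bias ** 2    # variance/N_L + bias^2
```

Over many replications, the empirical MSE matches the predicted value $1/N_L + 0.1^2 = 0.02$ to within Monte Carlo error.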
Now, in the multilevel Monte Carlo (MLMC) setting, one can observe that the expectation of the finest approximation ${\pi}_1^L(f)$ can be written as a telescopic sum starting from a coarser approximation ${\pi}_{1}^{0}(f)$, and the intermediate ones: \begin{align}\label{levy:eq5} {\pi}_{1}^L(f)&:={\pi}_{1}^{0}(f)+\sum_{l=1}^{L}\left({\pi}_{1}^{l}(f)-{\pi}_{1}^{l-1}(f)\right). \end{align} Now it is our hope that the variance of the increments decays with $l$, which is reasonable in the present scenario where they are finite resolution approximations of a limiting process. The idea of the MLMC method is to approximate the multilevel (ML) identity $\left(\ref{levy:eq5}\right)$ by independently computing each of the expectations in the telescopic sum by a standard MC method. This is possible by obtaining i.i.d. pairs of samples $\Big(Y_{1}^{l,(i)},Y_{1}^{l-1,(i)}\Big)_{i=1}^{N_l}$ for each $l$, from a suitably coupled joint measure $\bar{\pi}_1^l$ with the appropriate marginals $\pi_1^l$ and $\pi_1^{l-1}$, for example generated from a coupled simulation of the Euler discretization of SDE $\left(\ref{levy:eq1}\right)$ at successive refinements. The construction of such a coupled kernel is detailed in Section \ref{levy:couplekernel}. Suppose it is possible to obtain such coupled samples at time $1$. Then for $l=0,\dots,L$, one has independent MC estimates. Let \begin{align}\label{levy:eq6} {\pi}^{{N}_{0:L}}_{1}(f)&:=\frac{1}{N_0}\sum_{i=1}^{N_0}f(Y_1^{0,(i)})+\sum_{l=1}^{L}\frac{1}{N_l}\sum_{i=1}^{N_l}\left(f(Y_1^{l,(i)})-f(Y_1^{l-1,(i)})\right), \end{align} where ${N}_{0:L}:=\left\lbrace N_l\right\rbrace_{l=0}^{L}$.
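A minimal Python sketch of the estimator \eqref{levy:eq6}: the coupled sampler below is a hypothetical toy in which the pair shares a single noise draw, so that the increment variance decays geometrically in $l$; in our setting the pair would instead come from the coupled kernel of Section \ref{levy:couplekernel}.

```python
import numpy as np

def mlmc_estimate(coupled_sampler, f, N, rng):
    """Telescoping estimator: independent MC averages of coupled increments.
    coupled_sampler(l, rng) returns (Y^l, Y^{l-1}); the coarse term is dropped
    at l = 0, matching the convention f(Y^{-1}) = 0."""
    total = 0.0
    for l, N_l in enumerate(N):
        inc = [f(yf) - (f(yc) if l > 0 else 0.0)
               for yf, yc in (coupled_sampler(l, rng) for _ in range(N_l))]
        total += np.mean(inc)
    return total

# Toy coupled pair sharing one noise draw, so Var[f(Y^l) - f(Y^{l-1})] = O(4^{-l}).
def toy_coupled(l, rng):
    eps = rng.standard_normal()
    return 3.0 + 2.0 ** -l * eps, 3.0 + 2.0 ** -(l - 1) * eps

rng = np.random.default_rng(0)
est = mlmc_estimate(toy_coupled, lambda y: y, [4000, 2000, 1000, 500], rng)
```

By the telescoping identity the estimator is unbiased for the finest-level mean ($3$ in this toy), while the decreasing replication numbers exploit the decaying increment variance.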
Analogously to the single level Monte Carlo method, the mean square error for the multilevel estimator $\left(\ref{levy:eq6}\right)$ can be expanded to obtain \begin{align}\label{levy:eq7} e\left({\pi}_{1}^{{N}_{0:L}}(f)\right)^2&:=\underbrace{\sum_{l=0}^{L}N_l^{-1}\mathbb{V}[f(Y_1^{l})-f(Y_1^{l-1})]}_{\hbox{variance}}+(\underbrace{{\pi}_1^L(f)-\pi_{1}(f)}_{\hbox{bias}})^2, \end{align} with the convention that $f(Y_1^{-1})\equiv 0$. It is observed that the bias term remains the same; that is, we have not introduced any additional bias. However, by an optimal choice of $N_{0:L}$, one can possibly reduce the computational cost for any pre-selected tolerance of the variance of the estimator, or conversely reduce the variance of the estimator for a given computational effort. In particular, for a given user-specified error tolerance $\epsilon$, measured in the root mean square error, the highest level $L$ and the replication numbers $N_{0:L}$ are derived as follows. We make the following assumptions about the bias, variance and computational cost, based on the observation that there is an exponential decay of bias and variance as $L$ increases. Suppose that there exist constants $\alpha,\beta,\gamma>0$ and an accuracy parameter $h_l$ associated with the discretization of SDE $\left(\ref{levy:eq1}\right)$ at level $l$ such that \begin{itemize} \item[$\left(B_l\right)$] $|\mathbb{E}[f(Y^{l})-f(Y^{l-1})]|=\mathcal{O}(h_{l}^{\alpha})$, \item[$\left(V_l\right)$] $\mathbb{E}[|f(Y^{l})-f(Y^{l-1})|^2]=\mathcal{O}(h_{l}^{\beta})$, \item[$\left(C_l\right)$] $\hbox{cost}\left(Y^{l},Y^{l-1}\right) \propto h_{l}^{-\gamma}$, \end{itemize} where $\alpha,\beta,\gamma$ are related to the particular choice of the discretization method and cost is the computational effort to obtain one sample $\left(Y^l,Y^{l-1}\right)$. For example, the Euler-Maruyama discretization method for the solution of SDEs driven by Brownian motion gives orders $\alpha =\beta =\gamma=1$.
The accuracy parameter $h_l$ typically takes the form $h_l=S_{0}^{-l}$ for some integer $S_{0}\geq2$. Such estimates can be obtained for L\'evy-driven SDEs, and this point will be revisited in detail below. For the time being we take this as an assumption. The key observation from the mean-square error of the multilevel estimator $\left(\ref{levy:eq6}\right)-\left(\ref{levy:eq7}\right)$ is that the bias is given by the finest level, while the variance is decomposed into a sum of variances of the $l^{th}$ increments. By condition $\left(V_l\right)$ above, the variance of the $l^{th}$ increment is of the form $V_lN_l^{-1}$, and thus the total variance is of the form $\mathcal{V}=\sum_{l=0}^{L}V_lN_l^{-1}$. The total computational cost takes the form $\mathcal{C}=\sum_{l=0}^{L}C_lN_l$. In order to minimize the effort to obtain a given mean square error (MSE), one must balance the terms in $\left(\ref{levy:eq7}\right)$. Based on the condition $\left(B_l\right)$ above, a bias error proportional to $\epsilon$ will require the highest level \begin{align}\label{levy:eq8} L&\propto\frac{-\log(\epsilon)}{\log(S_{0})\alpha}. \end{align} In order to obtain an optimal allocation of resources $N_{0:L}$, one needs to solve a constrained optimization problem: minimize the total cost $\mathcal{C}=\sum_{l=0}^{L}C_lN_l$ for a given fixed total variance $\mathcal{V}=\sum_{l=0}^{L}V_lN_l^{-1}$ or vice versa. Based on the conditions $\left(V_l\right)$ and $\left(C_l\right)$ above, one obtains via the Lagrange multiplier method the optimal allocation $N_l\propto V_l^{1/2}C_l^{-1/2} \propto h_{l}^{(\beta+\gamma)/2}$. Now, targeting an error of size $\mathcal{O}(\epsilon)$, one sets $N_l\propto\epsilon^{-2}h_{l}^{(\beta+\gamma)/2}K(\epsilon)$, where $K(\epsilon)$ is chosen to control the total error for increasing $L$.
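The choice of $L$ in \eqref{levy:eq8} and the allocation $N_l\propto\epsilon^{-2}h_{l}^{(\beta+\gamma)/2}K(\epsilon)$ can be sketched as follows; all proportionality constants are set to one, an illustrative assumption (in practice they would be calibrated, e.g.~from pilot runs).

```python
import math

def mlmc_allocation(eps, alpha, beta, gamma, S0=2):
    """Sketch of the level choice L ~ -log(eps)/(alpha log S0) and the
    replication numbers N_l, with h_l = S0^{-l} and
    K(eps) = sum_l h_l^{(beta - gamma)/2} (variance O(eps^2)).
    Proportionality constants are set to 1, an illustrative assumption."""
    L = math.ceil(-math.log(eps) / (math.log(S0) * alpha))
    h = [S0 ** (-l) for l in range(L + 1)]
    K = sum(hl ** ((beta - gamma) / 2.0) for hl in h)
    N = [math.ceil(eps ** -2 * hl ** ((beta + gamma) / 2.0) * K) for hl in h]
    return L, N
```

For the Euler-type case $\alpha=\beta=\gamma=1$ with $S_0=2$, $K(\epsilon)=L+1$ and the replication numbers halve from one level to the next, concentrating the samples on the cheap coarse levels.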
Thus, for the multilevel estimator we obtain: \begin{align*} \hbox{variance}&:\mathcal{V}=\sum_{l=0}^{L}V_lN_l^{-1}=\epsilon^{2}K(\epsilon)^{-1}\sum_{l=0}^{L}h_{l}^{(\beta-\gamma)/2}\\ \hbox{cost}:&\thickspace\mathcal{C}=\sum_{l=0}^{L}C_lN_l=\epsilon^{-2}K(\epsilon)^{2}. \end{align*} One then sets $K(\epsilon)=\sum_{l=0}^{L}h_{l}^{(\beta-\gamma)/2}$ in order to have variance of $\mathcal{O}(\epsilon^2)$. We can identify three distinct cases \begin{itemize} \item[(i).] If $\beta=\gamma$, which corresponds to the Euler-Maruyama scheme, then $K(\epsilon)=L$. One can clearly see from the expression in $\left(\ref{levy:eq8}\right)$ that $L=\mathcal{O}(|\log(\epsilon)|)$. Then the total cost is $\mathcal{O}(\epsilon^{-2}\log(\epsilon)^2)$, compared with the single level cost $\mathcal{O}(\epsilon^{-3})$. \item[(ii).] If $\beta > \gamma$, which corresponds to the Milstein scheme, then $K(\epsilon)\equiv 1$, and hence the optimal computational cost is $\mathcal{O}(\epsilon^{-2})$. \item[(iii).] If $\beta < \gamma$, which is the worst case scenario, then it is sufficient to choose $K(\epsilon)=K_{L}(\epsilon)= h_{L}^{(\beta-\gamma)/2}$. In this scenario, one can easily deduce that the total cost is $\mathcal{O}(\epsilon^{-(\gamma/\alpha+\kappa)})$, where $\kappa=2-\beta/\alpha$, using the fact that $h_L\propto \epsilon^{1/\alpha}$. \end{itemize} One of the defining features of the multilevel method is that the realizations $(Y_1^l,Y_1^{l-1})$ for a given increment must be sufficiently coupled in order to obtain decaying variances $(V_l)$. It is clear how to accomplish this in the context of stochastic differential equations driven by Brownian motion introduced in \cite{Giles_mlmc} (see also \cite{Jay_mlpf}), where coarse increments are obtained by summing the fine increments, but it is non-trivial how to proceed in the context of SDEs purely driven by general L\'{e}vy processes.
A technique based on {Poisson thinning} has been suggested by \cite{GilesXia_mlmcjumpdiff} for pure-jump diffusions and by \cite{castilla_mlmcLevy} for general L\'{e}vy processes. In the next section, we explain an alternative construction of a coupled kernel based on the L\'{e}vy-It\^{o} decomposition, in the same spirit as in \cite{Dereich_mlmcLevydriven}. \subsection{Coupled Sampling for L\'{e}vy-driven SDEs}\label{levy:couplekernel} The ML methodology described in Section \ref{levy:mlmc} works by obtaining samples from a coupled kernel associated with the discretization of $\left(\ref{levy:eq1}\right)$. We now describe how one can construct such a kernel associated with the discretization of the L\'{e}vy-driven SDE. Let $u=(y,y')\in \mathbb R^{2d}$. Define a kernel, $M^l:[\mathbb{R}^d\times\mathbb{R}^d]\times[\sigma(\mathbb{R}^d)\times\sigma(\mathbb{R}^d)]\rightarrow \mathbb{R}_{+}$, where $\sigma(\cdot)$ denotes the $\sigma$-algebra of measurable subsets, such that for $A\in \sigma(\mathbb{R}^d)$ \begin{eqnarray}\label{eq:em} M^l(u,A) &:=& M^{l}(u,A\times\mathbb{R}^d) =\int_{A}Q^l\left(y,\mathrm{d}z\right)=Q^l(y,A), \\ \label{eq:em2} M^{l-1}(u,A) &:=& M^{l}(u,\mathbb{R}^d\times A) =\int_{A}Q^{l-1}\left(y',\mathrm{d}z\right)=Q^{l-1}(y',A). \end{eqnarray} The coupled kernel $M^l$ can be constructed using the following strategy. Using the same definitions as in Section \ref{numApprox}, let $\delta_l$ and $\delta_{l-1}$ be user-specified jump thresholds for the fine and coarse approximation, respectively. Define \begin{equation}\label{eq:effs} F_0^l=\int_{B_{\delta_{l}}^{c}}x\nu(\mathrm{d}x) \quad {\rm and} \quad F_0^{l-1}=\int_{B_{\delta_{l-1}}^{c}}x\nu(\mathrm{d}x). \end{equation} The objective is to generate a coupled pair $(Y_1^{l,l},Y_1^{l,l-1})$ given $(Y_0^{l},Y_0^{l-1})$, $h_l,h_{l-1}$ with $h_l<h_{l-1}$.
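The strategy can be sketched in Python before stating the algorithm (an illustration in its spirit, not the exact algorithm): the fine jumps are generated once, the coarse path keeps only those with modulus at least $\delta_{l-1}$, and both paths reuse the same Brownian increments, with the coarse path advanced only at its own refinement times. All inputs are hypothetical stand-ins for a concrete $\nu$.

```python
import numpy as np

def coupled_pair(a, y0, lam_f, lam_c, sample_jump, F0_f, F0_c, delta_c,
                 b, sqrt_Sigma, T=1.0, rng=None):
    """Sketch of a coupled (fine, coarse) Euler pair for dY = a(Y-) dX.

    Fine jumps (rate lam_f, heights from mu^l) are generated once; the coarse
    path keeps only jumps with |height| >= delta_c and is advanced only at its
    own refinement times, reusing the shared Brownian increments in between.
    """
    rng = rng or np.random.default_rng()
    h_f, h_c = 1.0 / lam_f, 1.0 / lam_c
    times, heights = [], []
    s = rng.exponential(h_f)
    while s < T:
        times.append(round(s, 12))
        heights.append(sample_jump(rng))
        s += rng.exponential(h_f)
    jumps = dict(zip(times, heights))
    big = {t for t, z in jumps.items() if abs(z) >= delta_c}   # coarse jumps
    # Refined grids (rounded to merge coincident points); the coarse times are
    # a subset of the fine times, so one Brownian path serves both levels.
    coarse = {round(k * h_c, 12) for k in range(1, int(T / h_c) + 1)} | big | {T}
    fine = ({round(k * h_f, 12) for k in range(1, int(T / h_f) + 1)}
            | set(times) | coarse)
    y_f = y_c = y0
    t, acc_W, acc_t = 0.0, 0.0, 0.0
    for t_next in sorted(fine):
        dt = t_next - t
        dW = rng.normal(0.0, np.sqrt(max(dt, 0.0)))
        y_f += a(y_f) * (sqrt_Sigma * dW + jumps.get(t_next, 0.0)
                         + (b - F0_f) * dt)
        acc_W, acc_t = acc_W + dW, acc_t + dt
        if t_next in coarse:        # coarse Euler step over the accumulated span
            dL_c = jumps[t_next] if t_next in big else 0.0
            y_c += a(y_c) * (sqrt_Sigma * acc_W + dL_c + (b - F0_c) * acc_t)
            acc_W = acc_t = 0.0
        t = t_next
    return y_f, y_c
```

The shared Brownian path and shared large jumps are what keep the two marginals close, which is the source of the decaying increment variances $(V_l)$.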
The parameter $\delta_\ell(h_\ell)$ will be chosen such that $h_\ell^{-1}=\nu(B_{\delta_\ell}^c)$, and these determine the value of $F_0^\ell$ in \eqref{eq:effs}, for $\ell\in\{l,l-1\}$. We now describe the construction of the coupled kernel $M^l$, and thus obtain the coupled pair, in Algorithm \ref{levy:coupledkernelAlgo}, which is the same as the one presented in \cite{Dereich_mlmcLevydriven}. \begin{algorithm}[!ht] \caption{\textbf{: Coupled kernel $M^l$ for L\'{e}vy-driven SDE}} \label{levy:coupledkernelAlgo} \begin{algorithmic} \STATE \begin{enumerate} \item[$(1)$] Generate fine process: Use parts (A) to (C) of Algorithm \ref{levy:DiscreteAlgo} to generate the fine process, yielding $\Big(\Delta L_{{T}_1^{l,l}}^{l,l},\dots, \Delta L_{{T}_{{k}_l^l}^{l,l}}^{l,l}\Big)$ and $\Big(T_1^{l,l},\dots,{T}_{k_l^l}^{l,l}\Big)$ \item[$(2)$] Generate coarse jump times and heights: Set $j_{l-1}=1$; for $j_l\in\{1,\dots,{k}_l^l\}$, if $|\Delta L_{{T}_{j_l}^{l,l}}^{l,l}|\geq\delta_{l-1}$, then $\Delta L_{\tilde{T}_{j_{l-1}}^{l,l-1}}^{l,l-1}=\Delta L_{{T}_{j_l}^{l,l}}^{l,l}$ and $\tilde{T}_{j_{l-1}}^{l,l-1}={T}_{j_l}^{l,l}$; $j_{l-1}=j_{l-1}+1$; finally, set $\tilde{k}_{l-1}^{l}=j_{l-1}-1$; \item[$(3)$] Refine jump times: Set $j_{l-1}=j_l=1$ and $T_0^{l,l-1}=\overline{T}^{l,l}_0=0$, (i) $T_{j_{l-1}}^{l,l-1}=\min\bigg\{1, T_{j_{l-1}-1}^{l,l-1}+h_{l-1}, \min\Big\{\tilde{T}_k^{l,l-1}> T_{j_{l-1}-1}^{l,l-1};k\in\{1,\dots,\tilde{k}_{l-1}^{l}\}\Big\}\bigg\}$. If $T_{j_{l-1}}^{l,l-1}=1$, set $k_{l-1}^l=j_{l-1}$; else $j_{l-1}=j_{l-1}+1$ and Go to (i). (ii) $\overline{T}_{j_l}^{l,l}=\min\bigg\{T > \overline{T}_{j_l-1}^{l,l}; T \in \{T_k^{l,l-1}\}_{k=1}^{k_{l-1}^l} \cup \{T_k^{l,l}\}_{k=1}^{k_{l}^l} \bigg \}$. If $\overline{T}_{j_l}^{l,l}=1$, set $k_l^l=j_l$, and redefine $T_i^{l,l} := \overline{T}_i^{l,l}$ for $i=1,\dots, k_l^l$; Else $j_l=j_l+1$ and Go to (ii).
\item[$(4)$] Recursion of the process: sample $W_{T_1^{l,l}},\dots, W_{T_{k_l^l}^{l,l}}$ (noting $\{T_{k}^{l,l-1}\}_{k=1} ^{k_{l-1}^l} \subset\{T_{k}^{l,l}\}_{k=1}^ {k_l^l}$); Let $m_l\in\{0,\dots,k_l^l-1\}$, $m_{l-1}\in\{0,\dots,k_{l-1}^l-1\}$, $Y_0^{l,l}=Y_0^l$ , and $Y_0^{l,l-1}=Y_0^{l-1}$; \end{enumerate} \begin{eqnarray}\label{levy:eq9} Y^{l,l}_{T^{l,l}_{m_l+1}}&=&Y^{l,l}_{T^{l,l}_{m_l}}+ a\Big(Y^{l,l}_{T^{l,l}_{m_l}}\Big)\Big(\sqrt{\Sigma}\Delta W_{T^{l,l}_{m_l+1}} + \Delta L_{{T}_{m_l+1}^{l,l}}^{l,l} + (b-F_0^l)\Delta T^{l,l}_{m_l+1} \Big ) \, , \\ \label{levy:eq10} Y^{l,l-1}_{T^{l,l-1}_{m_{l-1}+1}}&=&Y^{l,l-1}_{T^{l,l-1}_{m_{l-1}}}+a\Big(Y^{l,l-1}_{T^{l,l-1}_{m_{l-1}}}\Big) \Big(\sqrt{\Sigma}\Delta W_{T^{l,l-1}_{m_{l-1}+1}} + \Delta L_{{T}_{m_{l-1}+1}^{l,l-1}}^{l,l-1} + (b-F_0^{l-1})\Delta T^{l,l-1}_{m_{l-1}+1}\Big) \, , \end{eqnarray} where $\Delta W_{T^{l,\ell}_{m_\ell+1}}= W_{T^{l,\ell}_{m_\ell+1}}-W_{T^{l,\ell}_{m_\ell}}$ and $\Delta T^{l,\ell}_{m_\ell+1} = T^{l,\ell}_{m_\ell+1}-T^{l,\ell}_{m_\ell}$, for $\ell\in\{l,l-1\}$. \end{algorithmic} \end{algorithm} The construction of the coupled kernel $M^l$ outlined in Algorithm \ref{levy:coupledkernelAlgo} ensures that the paths of the fine and coarse processes are sufficiently correlated that the optimal convergence rate of the multilevel algorithm is achieved. \section{Multilevel Particle Filter for L\'{e}vy-driven SDEs}\label{levy:ML_Levy} In this section, the multilevel particle filter will be discussed for sampling from certain types of measures which have a density with respect to a L{\'e}vy process. We will begin by briefly reviewing the general framework and the standard particle filter, and then we will extend these ideas into the multilevel particle filtering framework. \subsection{Filtering and Normalizing Constant Estimation for L\'{e}vy-driven SDEs}\label{levy:FilterLevySDE} Recall the L\'{e}vy-driven SDE \eqref{levy:eq1}. We will use the notation $y_{1:n}:=[y_1, y_2,\dots,y_n]$.
It will be assumed that, for $n\geq 1$ and some given $y_0$, the probability density of interest is of the form \begin{align}\label{eq:target} \hat{\eta}^{\infty}_{n}(y_{1:n}) \propto \Big[\prod_{i=1}^{n}G_i(y_{i})Q^{\infty}(y_{i-1}, y_i)\Big], \end{align} where $Q^{\infty}(y_{i-1},y)$ is the transition density of the process $\left(\ref{levy:eq1}\right)$ as a function of $y$, i.e.~the density of the solution $Y_1$ at observation time point $1$ given the initial condition $Y_0=y_{i-1}$. It is assumed that $G_i(y_{i})$ is the conditional density (given $y_i$) of an observation at discrete time $i$, so observations (which are omitted from our notation) arrive regularly at times $1,2,\dots$. Note that the formulation discussed here, that is for $\hat{\eta}^{\infty}_{n}$, also allows one to consider general Feynman-Kac models (of the form \eqref{eq:target}), rather than just the filters that are focused upon in this section. The following assumptions will be made on the likelihood functions $\{G_i\}$. Note that these assumptions are needed for our later mathematical results and do not preclude the application of the algorithm to be described. \begin{assumption} There are $c>1$ and $C>0$, such that for all $n>0$, and $v, v' \in \mathbb R^{d}$, $G_n$ satisfies \begin{itemize} \item[{\rm (i)}] $c^{-1} < G_n(v) < c$ \, ; \item[{\rm (ii)}] $|G_n(v) - G_n(v')| \leq C |v - v'|$ \, . \end{itemize} \label{asn:g} \end{assumption} In practice, as discussed earlier, $Q^{\infty}$ is typically analytically intractable (and we further suppose that it is not known up to a non-negative unbiased estimator). As a result, we will focus upon targets associated to a discretization, i.e.~of the type \begin{align}\label{eq:target_l} \hat{\eta}^{l}_{n}(y_{1:n}) \propto \Big[\prod_{i=1}^{n}G_i(y_{i})Q^{l}(y_{i-1}, y_i)\Big], \end{align} for $l<\infty$, where $Q^l$ is defined by $k_l$ iterates of the recursion in \eqref{eq:euler_levyd}.
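To make the discretized transition concrete, the following is a minimal Python sketch of drawing once from $Q^l(y_0,\cdot)$ over one unit of time, for the scalar truncated symmetric-stable example used later in the numerical section (with $\Sigma=b=0$, $a(y)=y$ and $c=x^*=1$, so that $F_0^l=0$ by symmetry). The function name and structure are our own illustration, not a library routine.

```python
import numpy as np

def sample_Ql(y0, l, phi=0.5, c=1.0, xstar=1.0, rng=None):
    """One draw from the level-l Euler transition Q^l(y0, .) over unit time,
    for the scalar symmetric truncated-stable example (Sigma = b = 0, a(y) = y).
    By symmetry of the Levy measure, the drift correction F_0^l vanishes."""
    rng = np.random.default_rng() if rng is None else rng
    h = 2.0 ** (-l)
    # delta_l solves nu(B_delta^c) = 1/h for the truncated stable measure
    delta = (phi / (2 * c * h) + xstar ** (-phi)) ** (-1.0 / phi)
    n = rng.poisson(1.0 / h)            # number of jumps of size > delta in [0, 1]
    # jump magnitudes by inverse-CDF sampling from nu restricted to {|x| > delta}
    u = rng.uniform(size=n)
    mag = ((delta ** (-phi) - xstar ** (-phi)) * u + xstar ** (-phi)) ** (-1.0 / phi)
    jumps = rng.choice([-1.0, 1.0], size=n) * mag   # symmetric signs
    y = y0
    for dL in jumps:                    # Euler recursion: Y <- Y + a(Y) dL = Y (1 + dL)
        y = y * (1.0 + dL)
    return y
```

Since the jump distribution is symmetric and bounded by $x^*=1$, the draws have mean $y_0$, which provides a simple sanity check.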
Note that we will use $\hat{\eta}^{l}_{n}$, $l\in\{0,1,\dots\}\cup\{\infty\}$, to denote both the measure and its density, with the usage clear from the context. The objective is to compute expectations of functionals with respect to this measure, particularly of the last coordinate. For any bounded and measurable function $f:\mathbb{R}^d \rightarrow\mathbb{R}$, $n\geq 1$, we will use the notation \begin{align}\label{levy:eq12} \hat{\eta}_{n}^{l}(f)&:=\int_{\mathbb R^{dn}}f(y_{n})\hat{\eta}^{l}_{n}(y_{1:n})\mathrm{d}y_{1:n}. \end{align} Often of interest is the computation of the un-normalized measure. That is, for any bounded and measurable function $f:\mathbb{R}^d \rightarrow\mathbb{R}$ define, for $n\geq 1$ \begin{equation}\label{eq:marg_like} \hat{\zeta}^{l}_{n}(f) := \int_{\mathbb{R}^{dn}}f(y_n) \Big[\prod_{i=1}^{n}G_i(y_{i})Q^{l}(y_{i-1}, y_i)\Big]\mathrm{d}y_{1:n}. \end{equation} In the context of the model under study, $\hat{\zeta}^{l}_{n}(1)$ is the marginal likelihood. \red{Henceforth $Y^l_{1:n}$ will be used to denote a draw from $\hat{\eta}^{l}_{n}$. The vanilla case described earlier can be viewed as the special example in which $G_i\equiv1$ for all $i$. Following standard practice, realizations of random variables will be denoted by lower-case letters; so, after drawing $Y^{l,(i)}_n\sim \hat{\eta}^l_n$, the notation $y^{l,(i)}_n$ will be used in later references to the realized value. The randomness of the samples will be invoked again when computing MSEs over realizations.} \subsection{Particle Filtering}\label{levy:PF} We now describe the particle filter, which is capable of consistently approximating (that is, exactly in the limit as the number of Monte Carlo samples goes to infinity) terms of the form \eqref{levy:eq12} and \eqref{eq:marg_like}, for any fixed $l$. The particle filter has been studied and used extensively (see for example \cite{delm:04,doucet:01}) in many practical applications of interest.
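Before stating the precise algorithm, a minimal bootstrap particle filter illustrates how the estimates of \eqref{levy:eq12} and \eqref{eq:marg_like} are produced in practice. The Gaussian random-walk transition and Gaussian likelihood below are stand-ins of our choosing (not the L\'{e}vy model itself), and resampling is performed at every step ($H=1$).

```python
import numpy as np

def particle_filter(z, N, rng, sig_x=1.0, sig_z=1.0):
    """Bootstrap particle filter, resampling at every step (H = 1).
    Returns the filter estimates of E[Y_k | z_{1:k}] (f(y) = y) and the
    running log marginal-likelihood estimate (normalizing constant omitted)."""
    n = len(z)
    y = np.zeros(N)                       # particles, started at y_0 = 0
    loglik, means = 0.0, []
    for k in range(n):
        y = y + sig_x * rng.standard_normal(N)        # propagate through the transition
        g = np.exp(-0.5 * ((z[k] - y) / sig_z) ** 2)  # potential G_k, constant factor dropped
        loglik += np.log(np.mean(g))                  # factor (1/N) sum_j G_k(y^j)
        w = g / np.sum(g)
        means.append(np.sum(w * y))                   # weighted estimate of eta_k(f)
        y = rng.choice(y, size=N, p=w)                # multinomial resampling
    return np.array(means), loglik
```

The accumulated `loglik` corresponds to the product-form estimator of $\hat{\zeta}^l_n(1)$ discussed next, while `means` corresponds to the weighted estimates of $\hat{\eta}^l_k(f)$.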
For a given level $l$, Algorithm \ref{algo:particlefilter} gives the standard particle filter. The weights are defined, for $k\geq 1$, as \begin{align}\label{levy:eq13} w^{l,(i)}_{k} & = w^{l,(i)}_{k-1} \frac{G_k(y^{l,(i)}_{k})}{\sum_{j=1}^{N_l}w_{k-1}^{l,(j)} G_k(y^{l,(j)}_{k})} \end{align} with the convention that $w^{l,(i)}_{0}=1$. Note that the abbreviation $ESS$ stands for effective sample size, which measures the variability of the weights at time $k$ of the algorithm (other, more efficient procedures are also possible, but not considered). In the analysis to follow, $H=1$ in Algorithm \ref{algo:particlefilter} (or rather its extension in the next section), but this is not the case in our numerical implementations. \cite{delm:04} (along with many other authors) has shown that \red{for upper-bounded, non-negative, $G_i$, $f:\mathbb{R}^d\rightarrow\mathbb{R}$ bounded measurable (these conditions can be relaxed),} at step 3 of Algorithm \ref{algo:particlefilter}, the estimate $$ \sum_{i=1}^{N_l} w^{l,(i)}_{n}f(y^{l,(i)}_{n}) $$ will converge almost surely to \eqref{levy:eq12}. In addition, if $H=1$ in Algorithm \ref{algo:particlefilter}, $$ \Big[\prod_{i=1}^{n-1}\frac{1}{N_l}\sum_{j=1}^{N_l}G_i(y^{l,(j)}_{i})\Big]\frac{1}{N_l}\sum_{j=1}^{N_l}G_n(y^{l,(j)}_{n})f(y^{l,(j)}_{n}) $$ will converge almost surely to \eqref{eq:marg_like}. \begin{algorithm}[!ht] \caption{\textbf{: Particle filter}} \label{algo:particlefilter} \begin{algorithmic} \STATE \begin{enumerate} \item[0.] Set $k=1$; for $i=1,\dots, N_l$, draw $Y_1^{l,(i)}\sim Q^l(y_0,\cdot)$ \item[1.] Compute weights $\{w_1^{l,(i)} \}_{i=1}^{N_l}$ using $\left(\ref{levy:eq13}\right)$ \item[2.] Compute $ESS=\Big(\sum_{i=1}^{N_l}(w_k^{l,(i)})^{2}\Big)^{-1}$.\\ If {$ESS/N_l<H$} (for some threshold $H$), resample the particles $\{Y_{k}^{l,(i)} \}_{i=1}^{N_l}$ and set all weights to $w_k^{l,(i)}=1/N_l$.
Denote the resampled particles $\{\hat{Y}_{k}^{l,(i)} \}_{i=1}^{N_l}$.\\ Else set $\{\hat{Y}_{k}^{l,(i)} \}_{i=1}^{N_l}=\{Y_{k}^{l,(i)} \}_{i=1}^{N_l}$. \item[3.] Set $k=k+1$; if $k=n+1$ stop;\\ for $i=1,\dots,N_l$, draw $Y_{k}^{l,(i)}\sim Q^l(\hat{y}_{k-1}^{l,(i)},\cdot)$;\\ compute weights $\{w_k^{l,(i)}\}_{i=1}^{N_l}$ by using $\left(\ref{levy:eq13}\right)$. Go to 2. \end{enumerate} \end{algorithmic} \end{algorithm} \subsection{Multilevel Particle Filter}\label{levy:mlpf} We now describe the multilevel particle filter of \cite{Jay_mlpf} for the context considered here. The basic idea is to run $L+1$ independent algorithms: the first a particle filter as in the previous section, and the remaining $L$ coupled particle filters. The particle filter will sequentially (in time) approximate $\hat{\eta}_k^0$ and the coupled filters will sequentially approximate the couples $(\hat{\eta}^0_k,\hat{\eta}^1_k),\dots,(\hat{\eta}^{L-1}_k,\hat{\eta}^{L}_k)$. Each (coupled) particle filter will be run with $N_l$ particles. The most important step in the MLPF is the coupled resampling step, which maximizes the probability that the resampled indices are the same at the coarse and fine levels. Denote the coarse and fine particles at level $l\geq 1$ and step $k\geq 1$ as $\Big(Y_{k}^{l,(i)}(l),Y_{k}^{l-1,(i)}(l)\Big)$, for $i=1,\dots,N_l$. Equation \eqref{levy:eq13} is replaced by the following, for $k\geq 1$ \begin{align}\label{levy:mlweights} w^{l,(i)}_{k}(l) & = w^{l,(i)}_{k-1}(l) \frac{G_k(y^{l,(i)}_{k}(l))}{\sum_{j=1}^{N_l}w_{k-1}^{l,(j)}(l) G_k(y^{l,(j)}_{k}(l))} \\ w^{l-1,(i)}_{k}(l) & = w^{l-1,(i)}_{k-1}(l) \frac{G_k(y^{l-1,(i)}_{k}(l))}{\sum_{j=1}^{N_l}w_{k-1}^{l-1,(j)}(l) G_k(y^{l-1,(j)}_{k}(l))} \end{align} with the convention that $w^{l,(i)}_{0}(l)=w^{l-1,(i)}_{0}(l)=1$.
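The coupled resampling procedure that uses these weights (Algorithm \ref{algo:coupresamp}) is a maximal coupling of the two normalized weight vectors. One index pair can be drawn as in the following sketch (our own illustration, with multinomial sampling of the residuals):

```python
import numpy as np

def coupled_resample(wf, wc, rng):
    """Maximal-coupling resampling of one (fine, coarse) index pair.
    wf, wc are normalized weight vectors of equal length N_l."""
    m = np.minimum(wf, wc)
    alpha = m.sum()                      # probability of drawing a common ancestor
    idx = np.arange(len(wf))
    if rng.uniform() < alpha:
        j = rng.choice(idx, p=m / alpha)
        return j, j                      # same index at both levels
    rf = (wf - m) / (1.0 - alpha)        # residual weights, fine level
    rc = (wc - m) / (1.0 - alpha)        # residual weights, coarse level
    return rng.choice(idx, p=rf), rng.choice(idx, p=rc)
```

When the two weight vectors coincide, the coupling probability is $1$ and the fine and coarse indices always agree; as the vectors diverge, independent residual draws become more likely.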
\begin{algorithm}[!ht] \caption{\textbf{Coupled Resampling Procedure}} \label{algo:coupresamp} \begin{algorithmic} \STATE For $\ell=1,\dots, N_l$ \STATE With probability $\sum_{i=1}^{N_l} \min\{ w^{l,(i)}_{k}(l), w^{l-1,(i)}_{k}(l) \}$, \begin{enumerate} \item[(i)] Sample $J$ with probability proportional to $\min\{ w^{l,(i)}_{k}(l), w^{l-1,(i)}_{k}(l) \}$ for $i=1, \dots, N_l$, where the weights are computed according to \eqref{levy:mlweights}. \item[(ii)] Set $\Big (\hat{Y}^{l,(\ell)}_{k}(l), \hat{Y}^{l-1,(\ell)}_{k}(l)\Big)=\Big(Y_{k}^{l,(j)}(l),Y_{k}^{l-1,(j)}(l)\Big)$, where $j$ is the realized value of $J$. \end{enumerate} \STATE \STATE Else, with probability $1-\sum_{i=1}^{N_l} \min\{ w^{l,(i)}_{k}(l), w^{l-1,(i)}_{k}(l) \}$, \begin{enumerate} \item[(i)] Sample $J_l$ with probability proportional to $w^{l,(i)}_{k}(l) - \min\{ w^{l,(i)}_{k}(l), w^{l-1,(i)}_{k}(l) \}$ for $i=1, \dots, N_l$, \item[(ii)] Sample $J_{l-1}\perp J_l$ with probability proportional to $w^{l-1,(i)}_{k}(l) - \min\{ w^{l,(i)}_{k}(l), w^{l-1,(i)}_{k}(l) \}$ for $i=1, \dots, N_l$, \item[(iii)] Set $\hat{Y}^{l,(\ell)}_{k}(l)=Y_{k}^{l,(j_l)}(l)$, and $\hat{Y}^{l-1,(\ell)}_{k}(l)=Y_{k}^{l-1,(j_{l-1})}(l)$. \end{enumerate} \end{algorithmic} \end{algorithm} In the description below, we set $H=1$ (the resampling threshold, as in Algorithm \ref{algo:particlefilter}), but this need not be the case. Recall that the case $l=0$ is just a particle filter. For each $1\leq l \leq L$ the following procedure is run independently.\\ \begin{algorithm}[!ht] \caption{\textbf{Multilevel Particle filter}} \label{algo:mlpf} \begin{algorithmic} \STATE \begin{enumerate} \item[0.] Set $k=1$; for $i=1,\dots, N_l$, draw $\Big(Y_{1}^{l,(i)}(l),Y_{1}^{l-1,(i)}(l)\Big) {\sim}M^{l}\Big((y_{0},y_{0}), \cdot\Big)$. \item[1.] Compute weights $\{ (w_1^{l,(i)}(l), w_1^{l-1,(i)}(l)) \}_{i=1}^{N_l}$ using $\left(\ref{levy:mlweights}\right)$ \item[2.]
Compute $ESS=\min\Big \{ \Big(\sum_{i=1}^{N_l}(w_k^{l,(i)}(l))^{2}\Big)^{-1}, \Big(\sum_{i=1}^{N_l}(w_k^{l-1,(i)}(l))^{2}\Big)^{-1}\Big \}$.\\ If {$ESS/N_l<H$}, resample the particles according to Algorithm \ref{algo:coupresamp} to obtain $\Big\{\Big(\hat Y_{k}^{l,(i)}(l),\hat Y_{k}^{l-1,(i)}(l)\Big) \Big\}_{i=1}^{N_l}$, and set all weights to $w_k^{l,(i)}(l)=w_k^{l-1,(i)}(l)=1/N_l$. Else set $\Big\{\Big(\hat Y_{k}^{l,(i)}(l),\hat Y_{k}^{l-1,(i)}(l)\Big) \Big\}_{i=1}^{N_l} = \Big\{\Big(Y_{k}^{l,(i)}(l),Y_{k}^{l-1,(i)}(l)\Big) \Big\}_{i=1}^{N_l}$. \item[3.] Set $k=k+1$; if $k=n+1$ stop;\\ for $i=1,\dots,N_l$, draw $\Big(Y_{k}^{l,(i)}(l),Y_{k}^{l-1,(i)}(l)\Big) {\sim}M^{l}\Big((\hat{y}_{k-1}^{l,(i)}(l),\hat{y}_{k-1}^{l-1,(i)}(l)), \cdot\Big)$; \\ compute weights $\{(w_k^{l,(i)}(l), w_k^{l-1,(i)}(l))\}_{i=1}^{N_l}$ by using \eqref{levy:mlweights}. Go to 2. \end{enumerate} \end{algorithmic} \end{algorithm} The samples generated by the particle filter for $l=0$ at time $k$ are denoted $Y_{k}^{0,(i)}(0)$, $i\in\{1,\dots,N_0\}$ (we are assuming {$H=1$}).
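The weight update \eqref{levy:mlweights} and the adaptive-resampling criterion of step 2 can be sketched as follows (our own illustration; `gf` and `gc` denote $G_k$ evaluated at the fine and coarse particles, respectively):

```python
import numpy as np

def update_weights(w_prev, g):
    """One step of the weight recursion: w_k^i proportional to w_{k-1}^i * G_k(y_k^i),
    normalized to sum to one."""
    w = w_prev * g
    return w / w.sum()

def coupled_step(wf_prev, wc_prev, gf, gc, H=1.0):
    """Update both weight vectors and evaluate the criterion of step 2:
    ESS = min(fine-level ESS, coarse-level ESS); resample when ESS / N_l < H."""
    wf, wc = update_weights(wf_prev, gf), update_weights(wc_prev, gc)
    ess = min(1.0 / np.sum(wf ** 2), 1.0 / np.sum(wc ** 2))
    return wf, wc, ess / len(wf) < H
```

With $H=1$ (as assumed in the analysis) the criterion triggers whenever either weight vector is not perfectly uniform, so coupled resampling occurs at essentially every step.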
To estimate the quantities \eqref{levy:eq12} and \eqref{eq:marg_like} (with $l=L$), \cite{Jay_mlpf,Jay_mlnormconst} show that, in the case of discretized diffusion processes, $$ \hat{\eta}^{\rm ML,L}_n(f) = \sum_{l=1}^L \Big( \frac{\sum_{i=1}^{N_l}G_n(y_n^{l,(i)}(l))f(y_n^{l,(i)}(l))}{\sum_{i=1}^{N_l}G_n(y_n^{l,(i)}(l))} - \frac{\sum_{i=1}^{N_l}G_n(y_n^{l-1,(i)}(l))f(y_n^{l-1,(i)}(l))}{\sum_{i=1}^{N_l}G_n(y_n^{l-1,(i)}(l))} \Big) + $$ $$ \frac{\sum_{i=1}^{N_0}G_n(y_n^{0,(i)}(0))f(y_n^{0,(i)}(0))}{\sum_{i=1}^{N_0}G_n(y_n^{0,(i)}(0))} $$ and $$ \hat{\zeta}^{\rm ML,L}_n(f) =\sum_{l=1}^L \Big( \Big[\prod_{i=1}^{n-1}\frac{1}{N_l}\sum_{j=1}^{N_l}G_i(y^{l,(j)}_{i}(l))\Big]\frac{1}{N_l}\sum_{j=1}^{N_l}G_n(y^{l,(j)}_{n}(l))f(y^{l,(j)}_{n}(l)) - $$ $$ \Big[\prod_{i=1}^{n-1}\frac{1}{N_l}\sum_{j=1}^{N_l}G_i(y^{l-1,(j)}_{i}(l))\Big]\frac{1}{N_l}\sum_{j=1}^{N_l}G_n(y^{l-1,(j)}_{n}(l))f(y^{l-1,(j)}_{n}(l)) \Big) $$ \begin{equation}\label{eq:nc_est_ml} + \Big[\prod_{i=1}^{n-1}\frac{1}{N_0}\sum_{j=1}^{N_0}G_i(y^{0,(j)}_{i}(0))\Big]\frac{1}{N_0}\sum_{j=1}^{N_0}G_n(y^{0,(j)}_{n}(0))f(y^{0,(j)}_{n}(0)) \end{equation} converge almost surely to $\hat{\eta}^L_n(f)$ and $\hat{\zeta}^{L}_n(f)$ respectively, as $\min_l N_l \rightarrow \infty$. Furthermore, both can significantly improve over the particle filter, for $L$ and $\{N_l\}_{l=1}^L$ appropriately chosen to depend upon a target mean square error (MSE). By improve, we mean that the cost of achieving a given MSE with respect to the continuous-time limit is lower than that of the particle filter, under appropriate assumptions on the diffusion. We show how $N_0,\dots,N_L$ can be chosen in Section \ref{sec:theorem}. Note that, even for positive $f$, the estimator $\hat{\zeta}^{\rm ML,L}_n(f)$ above can take negative values with positive probability. We remark that the coupled resampling method can be improved as in \cite{Sen_coupling}. We also remark that the approaches of \cite{hous_ml, jacob} could potentially be used here.
However, none of these articles has sufficient supporting theory to verify a reduction in cost of the ML procedure. \subsubsection{Theoretical Result}\label{sec:theorem} We conclude this section with a technical theorem. We consider only $\hat{\eta}^{\rm ML,L}_n(f)$, but this can be extended to $\hat{\zeta}^{\rm ML,L}_n(f)$, similarly to \cite{Jay_mlnormconst}. The proofs are given in Section \ref{app:theo}. Define $\mathcal{B}_b(\mathbb{R}^d)$ as the set of bounded, measurable, real-valued functions on $\mathbb{R}^d$ and $\textrm{Lip}(\mathbb{R}^d)$ as the set of globally Lipschitz real-valued functions on $\mathbb{R}^d$. Define the space $\mathcal{A}=\mathcal{B}_b(\mathbb{R}^d)\cap\textrm{Lip}(\mathbb{R}^d)$ with the norm $\|\varphi\| = \sup_{x\in \mathbb{R}^d} |\varphi(x)| + \sup_{x,y \in \mathbb{R}^d} \frac{|\varphi(x) - \varphi(y)|}{|x-y|}$. \red{The following assumptions will be required. \begin{assumption}\label{asn:delh} For all $h_l>0$, there exists a solution $\delta_l(h_l)$ to the equation $h_l = 1/\nu(B_{\delta_l(h_l)}^c)$, and some $C,\beta_1>0$ such that $\delta_l(h_l) \leq C h_l^{\beta_1}$. \end{assumption} Denote by $\check{Q}^{l,l-1}((y,y'),\cdot)$ the coupling of the Markov transitions $Q^l(y,\cdot)$ and $Q^{l-1}(y',\cdot)$, $(y,y')\in\mathbb{R}^{2d}$, as in Algorithm \ref{levy:coupledkernelAlgo}. \begin{assumption} There is a $\gamma>0$ such that $\mathbb{E}[{\rm COST}(\check{Q}^{l,l-1})] = \mathcal{O}(h_l^{-\gamma})$, where $\mathbb{E}[{\rm COST}(\check{Q}^{l,l-1})]$ is the cost to simulate one sample from the kernel $\check{Q}^{l,l-1}$. \label{asn:mlrates} \end{assumption} } Below $\mathbb{E}$ denotes expectation w.r.t.~the law of the particle system. \begin{theorem} \label{thm:mlpf} Assume (\ref{ass:main}, \ref{asn:g}, \ref{asn:delh}, \ref{asn:mlrates}).
Then for any $n\geq 0$ there exists a $C<+\infty$ such that, for any given $\varepsilon>0$, there are an $L>0$ and $\{N_l\}_{l=0}^L$, depending only upon $\varepsilon$ and $h_{0:L}$, such that for all $f \in \mathcal{A}$, \[ \mathbb{E}\Bigg[ \Bigg(\hat{\eta}^{\rm ML,L}_n(f) - \hat{\eta}_n^\infty(f) \Bigg)^2 \Bigg] \leq C \varepsilon^2, \] at the cost $\mathcal{C}(\varepsilon) := \mathbb{E}[{\rm COST}(\varepsilon)]$ given in the second column of Table \ref{tab:mlpfcases}. \begin{table}[h] \begin{center} \begin{tabular}{ | c || c |} \hline CASE & $\mathcal{C}(\varepsilon)$ \\ \hline\hline $\beta>2\gamma$ & $\mathcal{O}(\varepsilon^{-2})$ \\ \hline $\beta=2\gamma$ & $\mathcal{O}(\varepsilon^{-2}\log(\varepsilon)^2)$ \\ \hline $\beta<2\gamma$ & $\mathcal{O}(\varepsilon^{-2+(\beta-2\gamma)/(\beta)})$ \\ \hline \end{tabular} \end{center} \caption{The three cases of MLPF, and the associated cost $\mathcal{C}(\varepsilon)$. Here $\beta$ is as in Lemma \ref{prop:uni}.} \label{tab:mlpfcases} \end{table} \end{theorem} \begin{proof} The proof is essentially identical to \cite[Theorem 4.3]{Jay_mlpf}. The only difference is to establish results analogous to \cite[Appendix D]{Jay_mlpf}; this is done in Section \ref{app:theo} of this article. \end{proof} \section{Numerical Examples}\label{levy:numerics} In this section, we compare our proposed multilevel particle filter with the vanilla particle filter. A target accuracy parameter $\epsilon$ will be specified, and the cost to achieve an error below this target accuracy will be estimated. The performance of the two algorithms will be compared in two applications of SDEs driven by general L\'{e}vy processes: filtering of a partially observed L\'{e}vy process (S\&P $500$ stock price data) and pricing of a path dependent option. In each of these two applications, we let $X=\{X_t\}_{t\in[0,K]}$ denote a symmetric stable L\'{e}vy process, i.e.
$X$ is a $\left(\nu,\Sigma, b \right)$-L\'{e}vy process, with the Lebesgue density of the L\'{e}vy measure given by \begin{align}\label{levy:eq15} \nu(\mathrm{d}x)&=c|x|^{-1-\phi}\mathbbm 1_{[-x^*,0)}(x)\mathrm{d}x+ c|x|^{-1-\phi}\mathbbm 1_{(0,x^*]}(x)\mathrm{d}x,\quad x\in\mathbb{R}\setminus\{0\}, \end{align} with $c>0$, $x^*>0$ (the truncation threshold) and index $\phi\in(0,2)$. The parameters $c$ and $x^*$ are both set to $1$ for all the examples considered. The L\'{e}vy-driven SDE considered here has the form \begin{align}\label{levy:eq16} \mathrm{d}Y_t=a(Y_{t^{-}})\mathrm{d}X_t,\quad Y_{0}=y_{0}, \end{align} with $y_0$ assumed known, and $a$ satisfying Assumption \ref{ass:main}(i). Notice that Assumptions \ref{ass:main}(ii)-(iii) are also satisfied by the L{\'e}vy process defined above. In the examples illustrated below, we take $a(Y_t)=Y_t$, $y_0=1$, and $\phi=0.5$. \begin{remark}[Symmetric Stable L\'{e}vy process of index $\phi\in(0,2)$] \label{levy:stable} In approximating the L\'{e}vy-driven SDE $\left(\ref{levy:eq16}\right)$, Theorem $2$ of \cite{Dereich_mlmcLevydriven} provides asymptotic error bounds for the strong approximation by the Euler scheme. If the driving L\'{e}vy process $X_t$ has no Brownian component, that is $\Sigma=0$, then the $L^2$-error, denoted $\sigma_{h_l}^2$, is bounded by \begin{align*} \sigma_{h_l}^2&\leq C(\sigma^2(\delta_l)+{ |b-F_0^l|^2}h_l^2), \end{align*} and for $\Sigma > 0$, \begin{align*} \sigma_{h_l}^2&\leq C(\sigma^2(\delta_l)+h_l|\log(h_l)|), \end{align*} for a fixed constant $C<\infty$ (depending on the Lipschitz constant of $a$), where $\sigma^2(\delta_l) := \int_{B_{\delta_l}} |x|^2 \nu(dx)$. Recall that $\delta_l(h_l)$ is chosen such that $h_l=1/{\nu(B_{\delta_l}^{c})}$. One obtains the analytical expression \begin{align}\label{levy:eq17} \sigma^2(\delta_l) & = \frac{2c}{2-\phi}\delta_l(h_l)^{2-\phi}\leq C\delta_l^{2-\phi}, \end{align} for some constant $C>0$.
One can also analytically compute \begin{align*} \nu(B_{\delta_l}^{c})&=\frac{2c(\delta_l^{-\phi}-{x^*}^{-\phi})}{\phi}. \end{align*} Now, setting $h_l=2^{-l}$, one obtains \begin{align}\label{levy:eq18} \delta_l&=\Big(\frac{2^{l}\phi}{2c}+{x^*}^{-\phi}\Big)^{-1/\phi}, \end{align} so that $\nu(B_{\delta_l}^{c})=2^l$, \red{hence verifying Assumption \ref{asn:delh} for this example.} One can then easily bound $\left(\ref{levy:eq18}\right)$ by \begin{align*} |\delta_l|\leq C2^{-l/\phi} \end{align*} for some constant $C>0$, so that $\delta_l=\mathcal{O}(h_{l}^{1/\phi})$. Using $\left(\ref{levy:eq17}\right)$-$\left(\ref{levy:eq18}\right)$ and the error bounds for $\Sigma=0$, one can straightforwardly obtain strong error rates for the approximation of SDEs driven by a stable L\'{e}vy process in terms of the single accuracy parameter $h_l$. This is given by \begin{align*} \sigma_{h_l}^2&\leq C (h_l^{(2-\phi)/\phi}+|b-F_0^l|^2h_l^{2}). \end{align*} {Thus, if $b-F_0^l \neq 0$, the strong error rate $\beta$ (as in Lemma \ref{prop:uni}) associated with a particular discretization level $h_l$ is given by \begin{align}\label{levy:eq19} \beta & = \min\Big(\frac{2-\phi}{\phi},2\Big). \end{align} Otherwise it is just given by $(2-\phi)/\phi$. } \end{remark} In the examples considered below, the original L\'{e}vy process has no drift or Brownian motion component, that is $\Sigma=b=0$. Due to the linear drift correction $F_0^l$ in the compensated compound Poisson process, the random jump times are refined such that the time differences between successive jumps are bounded by the accuracy parameter $h_l$ associated with the Euler discretization approximations in $\left(\ref{levy:eq4}\right)$ and $\left(\ref{levy:eq9}\right)$-$\left(\ref{levy:eq10}\right)$. However, since $F_0^l=0$ here, due to symmetry, this does not affect the rate, as described in Remark \ref{levy:stable}.
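The quantities in the remark above are easy to check numerically. The following sketch evaluates \eqref{levy:eq18} and \eqref{levy:eq17} for $\phi=0.5$ and $c=x^*=1$, and confirms both $\nu(B_{\delta_l}^c)=2^l$ and the $\mathcal{O}(h_l^{(2-\phi)/\phi})$ decay of $\sigma^2(\delta_l)$:

```python
import numpy as np

phi, c, xstar = 0.5, 1.0, 1.0          # parameters of the stable example

def delta_l(l):
    # delta_l = (2^l phi / (2c) + (x*)^{-phi})^{-1/phi}
    return (2.0 ** l * phi / (2 * c) + xstar ** (-phi)) ** (-1.0 / phi)

def nu_tail(delta):
    # nu(B_delta^c) = 2c (delta^{-phi} - (x*)^{-phi}) / phi
    return 2 * c * (delta ** (-phi) - xstar ** (-phi)) / phi

def sigma2(delta):
    # sigma^2(delta) = 2c delta^{2-phi} / (2 - phi)
    return 2 * c * delta ** (2 - phi) / (2 - phi)

# h_l^{-1} = nu(B_{delta_l}^c) = 2^l by construction of delta_l
for l in range(1, 12):
    assert abs(nu_tail(delta_l(l)) - 2.0 ** l) < 1e-6 * 2.0 ** l

# empirical decay rate of sigma^2(delta_l) approaches (2 - phi)/phi = 3
rate = np.log2(sigma2(delta_l(14)) / sigma2(delta_l(15)))
```

The value of `rate` tends to $3$ as $l$ grows, matching $\beta=(2-\phi)/\phi$ for $\phi=0.5$ when $F_0^l=0$.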
We start with verification of the weak and strong error convergence rates, $\alpha$ and $\beta$, for the forward model. To this end the quantities $|\mathbb{E}[Y^{l}_1-Y^{l-1}_1]|$ and $\mathbb{E}[|Y^{l}_1-Y^{l-1}_1|^2]$ are computed over increasing levels $l$. Figure \ref{levy:fig1} shows these computed values plotted against $h_l$ on base-$2$ logarithmic scales. A fit of a linear model gives rate $\alpha=1.3797$, and a similar simulation experiment gives $\beta=2.7377$. This is consistent with the rates $\beta=3$ and $\alpha=\beta/2$ from \eqref{levy:eq19} of Remark \ref{levy:stable}. \begin{figure} \caption{Empirical weak and strong error rates estimates} \label{levy:fig1} \end{figure} We begin our comparison of the MLPF and PF algorithms starting with the filtering of a partially observed L\'{e}vy-driven SDE, and then consider the knock-out barrier call option pricing problem. \subsection{Partially observed data} In this section we consider filtering a partially observed L\'{e}vy process. Recall that the L\'{e}vy-driven SDE takes the form $\left(\ref{levy:eq16}\right)$. In addition, partial observations $\{z_1,\dots,z_n\}$ are available, with $Z_k$ obtained at time $k$ and $Z_k|(Y_{k}=y_k)$ having density function $G_k(y_{k})$ (with the observation omitted from the notation and appearing only through the subscript $k$). The observation density is Gaussian with mean $y_k$ and variance 1. We aim to estimate $\mathbb{E}[f(Y_{k})|z_{1:k}]$ for some test function $f(y)$. In this application, we consider the real daily S\&P $500$ $\log$-return data (from August $3$, $2011$ to July $24$, $2015$, normalized to unit variance). We shall take the test function $f(y)=e^{y}$ for the example considered below, which we note does not satisfy the assumptions of Theorem \ref{thm:mlpf}, and hence challenges the theory.
In fact the results are roughly equivalent to the case $f(y)=e^{y}\mathbb{I}_{\{|y|<10\}}$, where $\mathbb{I}_A$ is the indicator function on the set $A$, which was also considered and does satisfy the required assumptions. \begin{figure} \caption{Mean square error against computational cost for filtering with partially observed data.} \label{levy:fig3} \end{figure} The error-versus-cost plots on base $10$ logarithmic scales for the PF and MLPF are shown in Figure \ref{levy:fig3}. The fitted linear model of $\log$ MSE against $\log$ Cost has slopes of $-0.667$ and $-0.859$ for the PF and MLPF, respectively. These results again verify numerically the expected theoretical asymptotic behaviour of computational cost as a function of MSE, for both the standard and multilevel methods. \subsection{Barrier Option} Here we consider computing the value of a discretely monitored knock-out barrier option (see e.g.~\cite{glasserman} and the references therein). Let $Y_0\in[a,b]$ for some known $0\leq a<b<+\infty$ and let $Q^{\infty}(y_{i-1},y)$ be the transition density of the process as in \eqref{levy:eq16}. Then the value of the barrier option (up to a known constant) is $$ \int_{\mathbb R^{n}} \max\{y_n-S,0\}\prod_{i=1}^n \mathbb{I}_{[a,b]}(y_i)Q^{\infty}(y_{i-1},y_i) dy_{1:n} $$ for $S>0$ given. As seen in \cite{JayOption}, the calculation of the barrier option is non-trivial, in the sense that even importance sampling may not work well. We consider the (time) discretized version $$ \int_{\mathbb R^{n}} \max\{y_n-S,0\}\prod_{i=1}^n \mathbb{I}_{[a,b]}(y_i)Q^{l}(y_{i-1},y_i) dy_{1:n}.
$$ Define a sequence of probability densities, $k\in\{1,\dots,n\}$ \begin{equation}\label{eq:barpos} \hat{\eta}_k^l(y_{1:k}) \propto \tilde{G}_k(y_k)\prod_{i=1}^k \mathbb{I}_{[a,b]}(y_i)Q^{l}(y_{i-1},y_i) = \prod_{i=1}^k \left( \frac{\tilde{G}_i(y_i)}{\tilde{G}_{i-1}(y_{i-1})} \right) \mathbb{I}_{[a,b]}(y_i) Q^{l}(y_{i-1},y_i) \end{equation} for some non-negative collection of functions $\tilde{G}_k(y_k)$, $k\in\{1,\dots,n\}$, to be specified. Recall that $\hat{\zeta}_n^l$ denotes the un-normalized density associated to $\hat{\eta}_n^l$. Then the value of the time discretized barrier option is exactly \begin{equation}\label{eq:barnc} \hat{\zeta}_{n}^l\Big(\frac{f}{\tilde{G}_n}\Big) = \int_{\mathbb R^{n}} \max\{y_n-S,0\}\prod_{i=1}^n \mathbb{I}_{[a,b]}(y_i)Q^{l}(y_{i-1},y_i) dy_{1:n} \end{equation} where $f(y_n)=\max\{y_n-S,0\}$. Thus, we can apply the MLPF targeting the sequence $\{\hat{\eta}_k^l\}_{ k\in\{1,\dots,n\},l \in\{0,\dots,L\} }$ and use our normalizing constant estimator \eqref{eq:nc_est_ml} to estimate \eqref{eq:barnc}. \red{If $\tilde{G}_n=|f|$, then we have an optimal importance distribution, in the sense that we are estimating the integral of the constant function $1$ and the variance is minimal \cite{rubinstein}. However, noting the form of the effective potential in \eqref{eq:barpos}, this can result in infinite weights (with adaptive resampling, as done here), and so some regularization is necessary. We bypass this issue by choosing $\tilde{G}_k(y_k) = |y_k-S|^{\kappa_k}$, where $\kappa_k$ is an annealing parameter with $\kappa_0 = 0$ and $\kappa_n = 1$. We make no claim that this is the best option, but it approximates the optimal choice while keeping the weights well behaved in practice. We also tried $\max\{y_n-S,\varepsilon\}$, with $\varepsilon=0.001$, and the results are almost identical.} For this example we choose $S=1.25$, $a=0$, $b=5$, $y_0=1$, $n=100$. The $N_l$ are chosen as in the previous example.
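The incremental potential implied by \eqref{eq:barpos} at step $k$ is $(\tilde{G}_k(y_k)/\tilde{G}_{k-1}(y_{k-1}))\,\mathbb{I}_{[a,b]}(y_k)$. The following sketch uses the linear schedule $\kappa_k = k/n$, one simple choice consistent with $\kappa_0=0$ and $\kappa_n=1$ (the text does not fix a particular schedule, so this is our assumption):

```python
import numpy as np

def barrier_potential(y_prev, y_curr, k, n, S=1.25, a=0.0, b=5.0):
    """Incremental potential (G~_k(y_k) / G~_{k-1}(y_{k-1})) * 1_{[a,b]}(y_k)
    with G~_k(y) = |y - S|^{kappa_k} and the (assumed) linear schedule kappa_k = k/n."""
    kappa = lambda j: j / n
    g_curr = np.abs(y_curr - S) ** kappa(k)
    g_prev = np.abs(y_prev - S) ** kappa(k - 1)   # equals 1 when k = 1 (kappa_0 = 0)
    inside = (a <= y_curr) & (y_curr <= b)        # knock-out barrier indicator
    return np.where(inside, g_curr / g_prev, 0.0)
```

Note that at the final step the estimator \eqref{eq:barnc} divides the payoff by $\tilde{G}_n$, so $f(y_n)/\tilde{G}_n(y_n)=\mathbb{I}_{\{y_n>S\}}$, which is bounded by construction.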
The error-versus-cost plots for the PF and MLPF are shown in Figure \ref{levy:fig2}. Note that the bullets in the graph correspond to different choices of $L$ (for both PF and MLPF, $2\leq L\leq8$). The fitted linear model of $\log$ MSE against $\log$ cost has slopes of $-0.6667$ and $-0.859$ for the PF and MLPF, respectively. These numerical results are consistent with the expected theoretical asymptotic behaviour of MSE$\propto$Cost$^{-1}$ for the multilevel method. The single level particle filter achieves the asymptotic behaviour of the standard Monte Carlo method, with MSE$\propto$Cost$^{-2/3}$. \begin{figure} \caption{Mean square error against computational cost for the knock-out barrier option example.} \label{levy:fig2} \end{figure} \section{Theoretical results} \label{app:theo} Our proof consists of following the proof of \cite{Jay_mlpf}. To that end, all the proofs of \cite[Appendices A-C]{Jay_mlpf} are the same for the approach in this article (note that one needs Lemma \ref{lem:lip_cont_Q} of this article along the way). One must verify the analogous results of \cite[Appendix D]{Jay_mlpf}, which is what is done in this section. The predictor at time $n$, level $l$, is denoted as $\eta^l_{n}$. Denote the total variation norm as $\|\cdot\|_{\textrm{tv}}$. For $\varphi\in\textrm{Lip}(\mathbb R^d)$, $\|\varphi\|_{\textrm{Lip}}:= \sup_{x,y \in \mathbb{R}^d} \frac{|\varphi(x) - \varphi(y)|}{|x-y|}$ is the Lipschitz constant. In a mild abuse of notation, $Q^l$, defined by $k_l$ iterates of the recursion in \eqref{eq:euler_levyd}, is also used as a Markov kernel below. We set for $\varphi\in\mathcal{B}_b(\mathbb R^d)$, $y\in\mathbb{R}^d$ $$ Q^l(\varphi)(y) := \int_{\mathbb{R}^d} \varphi(y')Q^l(y,y')dy'. $$ Recall that, for $l\geq 1$, $\check{Q}^{l,l-1}((y,y'),\cdot)$ is the coupling of the kernels $Q^l(y,\cdot)$ and $Q^{l-1}(y',\cdot)$ as in Algorithm \ref{levy:coupledkernelAlgo}.
For $\varphi\in\mathcal{B}_b(\mathbb R^{2d})$ we use the notation for $(y,y')\in\mathbb{R}^{2d}$: $$ \check{Q}^{l,l-1}(\varphi)(y,y') := \int_{\mathbb{R}^{2d}} \varphi(y^l,y^{l-1})\check{Q}^{l,l-1}((y,y'),d(y^l,y^{l-1})) $$ and note that for $\varphi\in\mathcal{B}_b(\mathbb R^d)$ $$ \check{Q}^{l,l-1}(\varphi\otimes 1)(y,y') = Q^l(\varphi)(y), \quad \check{Q}^{l,l-1}(1 \otimes \varphi)(y,y') = Q^{l-1}(\varphi)(y') $$ where $\otimes$ denotes the tensor product of functions, e.g.~$\varphi\otimes 1$ denotes $\varphi(y^l)$ in the integrand associated to $\check{Q}^{l,l-1}((y,y'),d(y^l,y^{l-1}))$. \red{ Let $T_j(t) = \max \{T_j \in \mathbb{T} ; T_j < t \}$, and let $(\Delta X)^l_{t} = X^l_t - X^l_{T_j(t)}$, where $X^l_t$ is the natural continuation of the discretized L\'evy process \eqref{levy:eq4}. Define the continuation of the discretized driven process by $$ Y^l_t= Y^l_{T_j(t)} + a(Y^l_{T_j(t)}) (\Delta X)^l_{t} \, . $$ } Let $Y^l_1\sim Q^l(y,\cdot)$ and independently $Y^{'l}_1\sim Q^l(y',\cdot)$. We denote expectations w.r.t.~these random variables as $\mathbb{E}$. \begin{lem}\label{prop:lip_strong} Assume (\ref{ass:main}). Then there exists a $C<+\infty$ such that for any $L\geq l\geq 0$, and $(y,y')\in \mathbb R^{2d}$ $$ \mathbb{E} |Y^l_1 - Y^{'l}_1|^2 \leq C | y - y' |^2 \, . $$ \end{lem} \begin{proof} Let $t\in [0,1]$. We have \[\begin{split} |Y^l_t - Y^{'l}_t|^2 = & ~~ |Y^l_{T_j(t)} - Y^{'l}_{T_j(t)}|^2 + 2 \left (Y^l_{T_j(t)} - Y^{'l}_{T_j(t)} \right)^T \left (a(Y^l_{T_j(t)}) - a(Y^{'l}_{T_j(t)}) \right) (\Delta X)^l_{t} \\ & + \left | \left(a(Y^l_{T_j(t)}) - a(Y^{'l}_{T_j(t)})\right) (\Delta X)^l_{t}\right|^2 \, . \end{split}\] Let $N = \# \{T_j \leq 1\}$ be the number of time-steps before time $1$, and denote $\mathbb{T} = \{\tilde{T}_1,\dots, \tilde{T}_{N}, N\}$, where $\tilde{T}_j$ and $T_j$ are generated by Algorithm \ref{levy:DiscreteAlgo}. The sigma algebra generated by these random variables is denoted $\sigma(\mathbb{T})$. 
By the conditional independence of $Y^l_{T_j(t)}$ and $(\Delta X)^l_{t}$ given $\sigma(\mathbb{T})$, we have \[\begin{split} \mathbb{E} \left[ |Y^l_t - Y^{'l}_t|^2 \Big | \sigma(\mathbb{T}) \right] & \leq \mathbb{E} \left[ |Y^l_{T_j(t)} - Y^{'l}_{T_j(t)}|^2 \Big | \sigma(\mathbb{T}) \right] \\ & + 2 \mathbb{E} \left[ \left (Y^l_{T_j(t)} - Y^{'l}_{T_j(t)} \right)^T \left (a(Y^l_{T_j(t)}) - a(Y^{'l}_{T_j(t)}) \right) \Big | \sigma(\mathbb{T}) \right] \mathbb{E} \left[ (\Delta X)^l_{t} | \sigma(\mathbb{T}) \right] \end{split}\] \begin{equation}\label{eq:la11} + \mathbb{E} \left[ \left | a(Y^l_{T_j(t)}) - a(Y^{'l}_{T_j(t)}) \right |^2 \Big | \sigma(\mathbb{T}) \right] \mathbb{E} \left[ |(\Delta X)^l_{t}|^2 \Big | \sigma(\mathbb{T}) \right]. \end{equation} The inequality arises from the last term, via the definition of the matrix $2$-norm. Note that $\mathbb{E}[W_t ]= \mathbb{E}[L_t] = 0$, so that $\mathbb{E}[(\Delta X)^l_{t} | \sigma(\mathbb{T})] = (b - F_0^l) (t - T_j(t))$. In addition, \eqref{eq:efl} and Jensen's inequality together imply that \begin{equation}\label{eq:eflin} | F_0^l |^2 \leq \int_{B_{\delta_l}^c} |x|^2 \nu(dx) \leq \int |x|^2 \nu(dx) \, .
\end{equation} We have $$ \mathbb{E} \left[ \left (Y^l_{T_j(t)} - Y^{'l}_{T_j(t)} \right)^T \left (a(Y^l_{T_j(t)}) - a(Y^{'l}_{T_j(t)}) \right) \Big | \sigma(\mathbb{T}) \right] \mathbb{E} \left[ (\Delta X)^l_{t} | \sigma(\mathbb{T}) \right] $$ $$ = \mathbb{E} \left[ \left (Y^l_{T_j(t)} - Y^{'l}_{T_j(t)} \right)^T \left (a(Y^l_{T_j(t)}) - a(Y^{'l}_{T_j(t)}) \right) \Big | \sigma(\mathbb{T}) \right] (b - F_0^l) (t - T_j(t)) $$ \begin{equation}\label{eq:l12} \leq C^2 h_l \mathbb{E} \left[ \left |Y^l_{T_j(t)} - Y^{'l}_{T_j(t)} \right |^2 \Big | \sigma(\mathbb{T}) \right] \end{equation} The inequality follows from Cauchy-Schwarz, definition of the matrix 2 norm, Assumption \ref{ass:main}(i), (iii), and (ii) in connection with \eqref{eq:eflin} and the definition of the construction of $\{T_j\}$ in Algorithm \ref{levy:DiscreteAlgo}, so that $|t-T_j(t)| \leq h_l$. Note also \begin{equation}\label{eq:square} \mathbb{E} \left[ |(\Delta X)^l_{t}|^2 \Big | \sigma(\mathbb{T}) \right] \leq C^2( |t-T_j(t)| + |t-T_j(t)|^2 ) \leq C^2h_l\, , \end{equation} by Assumption \ref{ass:main} (ii) and (iii), and since $h_l\leq 1$ by definition. Returning to \eqref{eq:la11}, and using \eqref{eq:square} and \eqref{eq:l12}, and Assumption \ref{ass:main} (i) again on the last term, we have \begin{equation}\label{eq:fin1step} \mathbb{E} \left[ |Y^l_t - Y^{'l}_t|^2 \Big | \sigma(\mathbb{T}) \right] \leq \mathbb{E} \left[ |Y^l_{T_j(t)} - Y^{'l}_{T_j(t)}|^2 \Big | \sigma(\mathbb{T}) \right] (1 + Ch_l) \, , \end{equation} where the value of the constant is different. Therefore, in particular $$ \mathbb{E} \left[ |Y^l_{T_{j+1}} - Y^{'l}_{T_{j+1}} |^2 \Big | \sigma(\mathbb{T}) \right] \leq \mathbb{E} \left[ |Y^l_{T_j} - Y^{'l}_{T_j}|^2 \Big | \sigma(\mathbb{T}) \right] (1 + Ch_l) \, . $$ By applying \eqref{eq:fin1step} recursively, we have $$ \mathbb{E} \left[ |Y^l_1 - Y^{'l}_1|^2 \Big | \sigma(\mathbb{T}) \right] \leq | y - y' |^2 (1+Ch_l)^{N} \, . 
$$ Note that $\mathbb{P}(N=n) = \frac{(\lambda^{\delta_l})^n}{n!} e^{-\lambda^{\delta_l}}$, and $\lambda^{\delta_l}=h_l^{-1}$ by design, as described in Section \ref{numApprox}. Taking expectation with respect to $\sigma(\mathbb{T})$ gives \[\begin{split} \mathbb{E} |Y^l_1 - Y^{'l}_1|^2 & \leq \left ( \sum_{n\geq 0} \frac{(h_l^{-1} (1+Ch_l) )^{n} }{n!} e^{-h_l^{-1}} \right ) | y - y' |^2 \\ & = e^{C} | y - y' |^2 \, . \end{split} \] The result follows by redefining $C$. \end{proof} \begin{lem}\label{lem:lip_cont_Q} Assume (\ref{ass:main}). Then there exists a $C<+\infty$ such that for any $L\geq l\geq 0$, $(y,y')\in \mathbb R^{2d}$, and $\varphi\in\mathcal{B}_b(\mathbb R^d)\cap\textrm{\emph{Lip}}(\mathbb R^d)$ $$ |Q^l(\varphi)(y) - Q^l(\varphi)(y')| \leq C\|\varphi\|_{\textrm{\emph{Lip}}}~|y-y'|. $$ \end{lem} \begin{proof} We have \begin{eqnarray*} |Q^l(\varphi)(y) - Q^l(\varphi)(y')| & = & | \mathbb{E} (\varphi(Y^l_1) - \varphi(Y^{'l}_1))| \\ & \leq & (\mathbb{E} |\varphi(Y^l_1) - \varphi(Y^{'l}_1)|^2)^{1/2} \\ & \leq & \|\varphi\|_{\textrm{Lip}} (\mathbb{E} |Y^l_1 - Y^{'l}_1|^2)^{1/2}\, , \end{eqnarray*} where Jensen's inequality has been applied to obtain the second line and the Lipschitz property of $\varphi$ to obtain the third. The proof is concluded via Lemma \ref{prop:lip_strong}. \end{proof} \begin{lem}\label{prop:uni} Assume (\ref{ass:main}, \ref{asn:delh}). Then there exists $C<+\infty$ such that for any $L\geq l\geq 1$ $$ \sup_{\varphi\in\mathcal{A}}\sup_{y\in\mathbb R^d}|Q^l(\varphi)(y)-Q^{l-1}(\varphi)(y)| \leq C h_{l}^{\frac{\beta}{2}}. $$ \end{lem} \begin{proof} We have $$ |Q^l(\varphi)(y)-Q^{l-1}(\varphi)(y)| = \Big|\int_{\mathbb{R}^{2d}}\varphi(y^l) \check{Q}^{l,l-1}((y,y),d(y^l,y^{l-1})) - \int_{\mathbb{R}^{2d}}\varphi(y^{l-1}) \check{Q}^{l,l-1}((y,y),d(y^l,y^{l-1}))\Big|. 
$$ Using Jensen's inequality yields $$ |Q^l(\varphi)(y)-Q^{l-1}(\varphi)(y)| \leq \Big(\int_{\mathbb{R}^{2d}}(\varphi(y^l) -\varphi(y^{l-1}))^2 \check{Q}^{l,l-1}((y,y),d(y^l,y^{l-1}))\Big)^{1/2}. $$ Recall that for $\varphi \in \mathcal{A}$ there exists $C<+\infty$ such that $|\varphi(y^l) -\varphi(y^{l-1})| \leq C |y^l -y^{l-1}|$. By \cite[Theorem 2]{Dereich_mlmcLevydriven}, there exists $C<+\infty$ such that for any $y\in\mathbb{R}^d$, $l\geq 1$ \begin{equation}\label{eq:strong} \int_{\mathbb{R}^{2d}}|y^l -y^{l-1}|^2 \check{Q}^{l,l-1}((y,y),d(y^l,y^{l-1})) \leq C h_l^\beta \, . \end{equation} The proof is then easily concluded. \end{proof} \begin{rem} \red{To verify \eqref{eq:strong} note that $\delta_l(h_l)$ is chosen as a function of $h_l$ here for simplicity, and by Assumption \ref{asn:delh} it can be bounded by $C h_l^{\beta_1}$ for some $\beta_1$. The bounds in Theorem 2 of \cite{Dereich_mlmcLevydriven} can therefore be written as the sum of two terms $C(h_l^{\beta_1} + h_l^{\beta_2})$, and $\beta = \min\{\beta_1,\beta_2\}$.} See Remark \ref{levy:stable} for the calculation of $\beta$ in the example considered in this paper. \end{rem} \begin{lem}\label{prop:strong_cor} Assume (\ref{ass:main},\ref{asn:delh}). Then there exist $C<+\infty$ and $\beta>0$ such that for any $L\geq l\geq 1$, and $(y,y')\in \mathbb R^{2d}$, $$ \Big(\int_{\mathbb{R}^{2d}}|y^l -y^{l-1}|^2 \check{Q}^{l,l-1}((y,y'),d(y^l,y^{l-1}))\Big)^{1/2} \leq C (|y-y'| + h_l^{\beta/2}) \, $$ where $\beta$ is as in Lemma \ref{prop:uni}. 
\end{lem} \begin{proof} We have $$ \Big(\int_{\mathbb{R}^{2d}}|y^l -y^{l-1}|^2 \check{Q}^{l,l-1}((y,y'),d(y^l,y^{l-1}))\Big)^{1/2} = $$ $$ \Big(\int_{\mathbb{R}^{3d}} |y^l -\bar{y}^l +\bar{y}^l -y^{l-1}|^2 \check{Q}^{l,l-1}((y,y'),d(y^l,y^{l-1})) Q^l(y',d\bar{y}^l)\Big)^{1/2} \leq $$ $$ \Big(\int_{\mathbb{R}^{2d}} |y^l -\bar{y}^l|^2 Q^{l}(y,dy^l) Q^l(y',d\bar{y}^l)\Big)^{1/2} + $$ $$ \Big(\int_{\mathbb{R}^{2d}}|\bar{y}^l -y^{l-1}|^2Q^{l}(y',d\bar{y}^l) Q^{l-1}(y',dy^{l-1}) \Big)^{1/2} \leq $$ $$ C| y - y' | + \Big(\int_{\mathbb{R}^{2d}}|\bar{y}^l -y^{l-1}|^2Q^{l}(y',d\bar{y}^l) Q^{l-1}(y',dy^{l-1})\Big)^{1/2} $$ where we have applied Minkowski's inequality to obtain the third line and Lemma \ref{prop:lip_strong} to obtain the final line. Now $$ \Big(\int_{\mathbb{R}^{2d}}|\bar{y}^l -y^{l-1}|^2Q^{l}(y',d\bar{y}^l) Q^{l-1}(y',dy^{l-1})\Big)^{1/2} = $$ $$ \Big(\int_{\mathbb{R}^{3d}}|\bar{y}^l- \tilde{y}^l+ \tilde{y}^l -y^{l-1}|^2Q^{l}(y',d\bar{y}^l) \check{Q}^{l,l-1}((y',y'),d(\tilde{y}^l,y^{l-1}))\Big)^{1/2} \leq $$ $$ \Big(\int_{\mathbb{R}^{2d}}|\bar{y}^l- \tilde{y}^l|^2Q^l(y',d\bar{y}^l)Q^l(y',d\tilde{y}^l)\Big)^{1/2} + \Big(\int_{\mathbb{R}^{2d}} |\tilde{y}^l -y^{l-1}|^2\check{Q}^{l,l-1}((y',y'),d(\tilde{y}^l,y^{l-1}))\Big)^{1/2} $$ where again we have applied Minkowski's inequality to obtain the third line. Then $$ \int_{\mathbb{R}^{2d}}|\bar{y}^l- \tilde{y}^l|^2Q^l(y',d\bar{y}^l)Q^l(y',d\tilde{y}^l) = 0 $$ and by \eqref{eq:strong} we have $$ \Big(\int_{\mathbb{R}^{2d}}|\bar{y}^l -y^{l-1}|^2Q^{l}(y',d\bar{y}^l) Q^{l-1}(y',dy^{l-1})\Big)^{1/2}\leq C h_l^{\frac{\beta}{2}}. $$ The argument is then easily concluded. \end{proof} \begin{proposition}\label{prop:tv} Assume (\ref{ass:main},\ref{asn:g},\ref{asn:delh}). Then there exists a $C<+\infty$ such that for any $L\geq l\geq 1$, $n\geq 0$, \begin{equation}\label{eq:tv} \|\eta^l_{n} - \eta^{l-1}_{n}\|_{\rm tv} \leq C h_l^{\frac{\beta}{2}} \, , \end{equation} where $\beta$ is as in Lemma \ref{prop:uni}. 
\end{proposition} \begin{proof} The result follows from the same calculations as in the proof of \cite[Lemma D.2]{Jay_mlpf}, along with our Lemma \ref{prop:uni}, which we note is analogous to (32) in \cite{Jay_mlpf} with $\alpha=\beta/2$. \end{proof} It is remarked that, given our above results, Lemmata D.3 and D.4 as well as Theorem D.5 (all of \cite{Jay_mlpf}) can be proved for our algorithm by the same arguments as in \cite{Jay_mlpf} and are hence omitted. Note that we have proved that: \begin{eqnarray*} \sup_{\varphi\in\mathcal{A}}\sup_{y\in\mathbb R^d}|Q^l(\varphi)(y)-Q^{l-1}(\varphi)(y)| & \leq & C h_{l}^{\frac{\beta}{2}} \, ,\\ \Big|\int_{\mathbb{R}^{2d}}\varphi(y^l) \check{Q}^{l,l-1}((y,y),d(y^l,y^{l-1})) - \int_{\mathbb{R}^{2d}}\varphi(y^{l-1}) \check{Q}^{l,l-1}((y,y),d(y^l,y^{l-1}))\Big| & \leq & C h_{l}^{\frac{\beta}{2}} \, ,\\ \int_{\mathbb{R}^{2d}}(\varphi(y^l) -\varphi(y^{l-1}))^2 \check{Q}^{l,l-1}((y,y'),d(y^l,y^{l-1})) & \leq & C h_l^{\beta} \, , \end{eqnarray*} for all $\varphi \in \mathcal{A}$. This provides \cite[Assumption 4.2.~(i) \& (ii)]{Jay_mlpf}, with $\alpha$ (as in \cite{Jay_mlpf}) equal to $\beta/2$. \end{document}
\begin{document} \title[On the connection problem for the $p$-Laplacian system]{On the connection problem for the $p$-Laplacian system for potentials with several global minima} \author{Nikolaos Karantzas} \address{Department of Mathematics\\ University of Athens\\ Panepistemiopolis\\ 15784 Athens\\ Greece} \email{\href{mailto:[email protected]}{\texttt{[email protected]}}} \thanks{The author was partially supported through the project PDEGE -- Partial Differential Equations Motivated by Geometric Evolution, co-financed by the European Union -- European Social Fund (ESF) and national resources, in the framework of the program Aristeia of the `Operational Program Education and Lifelong Learning' of the National Strategic Reference Framework (NSRF)} \date{} \begin{abstract} We study the existence of solutions to systems of ordinary differential equations that involve the $p$-Laplacian for potentials with several global minima. We consider the connection problem for potentials with two minima in arbitrary dimensions and with three or more minima on the plane. \end{abstract} \maketitle \section{Introduction} We consider the problem of existence of solutions to systems of ordinary differential equations that involve the $p$-Laplacian operator, that is, systems of the form \[ ( |u_{x}|^{p-2} u_{x})_{x} - \frac{1}{q} \nabla W(u) = 0 \] for vector-valued functions $u$ and potentials $W$ that possess several global minima. The corresponding problem was considered in the papers Alikakos and Fusco \cite{AF} and Alikakos, Betel\'u, and Chen \cite{ABC} for the standard Laplacian and here we provide extensions of these results for $p>1$. What is presented in the following is divided into two parts. In Section 2 we consider the problem in $\mathbb{R}^N$ for potentials with two global minima and state without proof an existence theorem together with a variational characterization of the connecting solutions (for the full proofs we refer to \cite{nikos-thesis}). 
The problem for potentials possessing three or more global minima (even for the case $p=2$) is significantly harder and essentially open. In Section 3, by restricting ourselves to $N=2$, we are able to exhibit a class of potentials for which we have reasonably complete results. In particular, we establish a uniqueness theorem and also give some examples of potentials that exhibit non-existence and non-uniqueness properties. \section{The connection problem for potentials possessing two global minima} Let $\Omega$ be an open and connected subset of $\mathbb{R}^{N}$ and $W: \Omega \to \mathbb{R}$ be a $C^{2}$ nonnegative potential function with two minima, that is, $W>0$ in $\mathbb{R}^{N} \setminus A$, with $W=0$ on $A=\{a^{+},a^{-}\}$. In \cite{AF}, Alikakos and Fusco analyze the existence of solutions to the Hamiltonian system \begin{equation} \label{one} u_{xx}-\frac{1}{2} \nabla W(u) = 0, \text{ with } \lim_{x \to \pm \infty} u(x) = a^{\pm}, \end{equation} where $u: \mathbb{R} \to \mathbb{R}^{N}$ is a vector-valued function. Such solutions are called {\em heteroclinic connections}. The system \eqref{one} represents the motion of $N$ material points of equal mass under the potential $-W(u)$, with $x$ standing for time and $u$ for position. The approach in \cite{AF} is variational and is based on Hamilton's principle of least action, that is, on the minimization of the action functional $A: W^{1,2}(\mathbb{R}, \mathbb{R}^{N}) \to \mathbb{R}$, defined as \[ A(u)=\frac{1}{2} \int_{\mathbb{R}} (|u_{x}|^{2} + W(u))\, dx. \] The method depends on the introduction of a constraint leading to the existence of local minimizers. The constraint can later be removed and therefore provide a solution to \eqref{one}. In this section, we state without proof an extension of the results in \cite{AF} to the $p$-Laplacian operator. 
To this end, we consider the system \begin{equation} \label{two} ( |u_{x}|^{p-2} u_{x})_{x} - \frac{1}{q} \nabla W(u) = 0, \text{ with } \lim_{x \to \pm \infty} u(x)=a^{\pm} \end{equation} where $u: \mathbb{R} \to \mathbb{R}^{N}$ is again a vector-valued function and $p, q > 1$ are H\"{o}lder conjugates, that is, $1/p + 1/q = 1$. Alternatively, by Hamilton's principle of least action, the motion from one minimum of the potential to another is a critical point of the action functional $A_{p} : W^{1,p} ( [ t_{1} , t_{2} ], \mathbb{R}^{N} ) \to \mathbb{R}$, defined as \begin{equation*} A_{p} (u, (t_{1},t_{2})):= \int_{t_{1}}^{t_{2}} \left( \frac{|u_{x}|^{p}}{p}+\frac{W(u)}{q} \right) dx. \end{equation*} Therefore, the system \eqref{two} is the associated Euler--Lagrange equation with $t_{1}=-\infty$ and $t_{2}=+\infty$. To state our results, we assume that the following hypothesis holds. \begin{hypothesis} \label{hone} The potential $W$ is such that $ \liminf_{|u| \to \infty} W(u) >0 $ and also there exists $R > 0$ such that the map $r \mapsto W(a^{\pm} + r\xi)$ has a strictly positive derivative for every $r \in (0 , R)$ and for every $\xi \in S^{N-1} := \{u \in \mathbb{R}^{N}: |u|=1 \}$, with $R < |a^{+} - a^{-}|$. \end{hypothesis} \noindent Then, under the above hypothesis, we have the following theorems. \begin{theorem} \label{thmone} Let $W: \mathbb{R}^{N} \to \mathbb{R}$ be a non-negative $C^{2}$ potential function and let $a^{-} \neq a^{+} \in \mathbb{R}^{N}$ be such that $W(a^{\pm}) = 0$. Also assume that Hypothesis \ref{hone} holds. Then, there exists a connection $U$ between $a^{-}$ and $a^{+}$. \end{theorem} \begin{theorem} \label{thmtwo} Let $U$ be the minimizer provided by Theorem \ref{thmone} above and let $R$ be as defined in Hypothesis \ref{hone}. 
Also let $\mathcal{A}$ be the set that consists of all functions $u \in W_{\text{loc}}^{1,p} (\mathbb{R}; \mathbb{R}^{N})$ for which there exist $x_{u}^{-} < x_{u}^{+}$ (depending on $u$) such that \begin{equation*} \begin{cases} | u (x) - a^{-} | \leq R / 2, &\text{for all } x \leq x_{u}^{-},\\ | u (x) - a^{+} | \leq R / 2, &\text{for all } x \geq x_{u}^{+}. \end{cases} \end{equation*} Then, \[ A_{p} (U) = \min_{u \in \mathcal{A}} A_{p} (u). \] \end{theorem} The above theorems yield the existence of the desired heteroclinic connection and also a variational characterization. Our approach (for full details, see \cite{nikos-thesis}) follows the lines of the method established in \cite{AF} and therefore it is based on the minimization of the action functional $A_{p}$ over the whole real line. The proof of existence of a connection is generally straightforward, since the test functions constructed in Lemmas 3.1 and 3.2 in \cite{AF} succeed in the pointwise reduction of both the gradient and the potential terms. \section{The connection problem on the complex plane for potentials possessing several global minima} In this section we extend to the $p$-Laplacian operator the existence and uniqueness results in Alikakos, Betel\'u, and Chen \cite{ABC}. Here, we consider potentials possessing three or more global minima and restrict ourselves to the planar case $N=2$ for which we identify $\mathbb{R}^{2}$ with the complex plane $\mathbb{C}$. We tackle the problem by utilizing Jacobi's principle, which deals with curves and detects geodesics. Specifically, one considers the length functional \begin{equation*} L_{p}(u)=\int_{t_{1}}^{t_{2}} \sqrt[q]{W(u)} |u_{x}|\, dx, \end{equation*} which is independent of parametrizations and hence is more properly denoted by \[ L_{p}(\Gamma)=\int_{\Gamma}\sqrt[q]{W(\Gamma)}\, d\Gamma . \] Notice that the functional $A_{p}$ is defined on functions while the functional $L_{p}$ is defined on curves. 
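A quantitative link between the two functionals (a standard observation, spelled out here for the reader's convenience rather than taken from \cite{AF} or \cite{ABC}) follows from Young's inequality for the conjugate exponents $p$ and $q$:

```latex
% Young's inequality ab \le a^p/p + b^q/q, with equality iff a^p = b^q,
% applied pointwise with a = |u_x| and b = W(u)^{1/q}:
\[
|u_{x}| \sqrt[q]{W(u)} \;\leq\; \frac{|u_{x}|^{p}}{p} + \frac{W(u)}{q},
\qquad \text{with equality iff } |u_{x}|^{p} = W(u).
\]
```

Integrating in $x$ gives $L_{p}(u) \leq A_{p}(u)$ for every admissible $u$, with equality exactly at an equipartition parametrization; this is one way to see why minimizing the parametrization-free functional $L_{p}$ and then reparametrizing recovers minimizers of $A_{p}$.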
The relationship between the two is that critical points of $L_{p}$ parametrized under the equipartition parametrization (that is, a parameter $t$ is such that $|u_{t}|^{p} = W (u)$, for all $t$ in an interval $(a, b)$) render critical points of $A_{p}$. In this case, we study the system of ordinary differential equations \[ (|u_{x}|^{p-2} u_{x}^{i})_{x} - \frac{1}{q} \frac{\partial W(u^{1}, u^{2})}{\partial u^{i}} = 0, \text{ for } i=1,2, \] where $u = (u^{1}, u^{2}): \mathbb{R} \to \mathbb{R}^{2}$. We identify $u = (u^{1}, u^{2})$ with the complex number $z = u^{1} + i u^{2}$, and similarly we write $W(u)$ as $W(z)$. Since $W(\cdot)$ is non-negative, we can write $W(z) = |f(z)|^{q}$ for some analytic function $f$. We can also verify that the equations in \eqref{two} are equivalent to \[ (|z_{x}|^{p-2} z_{x})_{x} = (f \overline{f})^{\frac{q-2}{2}} f \overline{f'}, \] where the bar represents complex conjugation. We begin by sketching the method for the standard triple-well potential $W(z) = |z^{3} - 1|^{q}$, the minima of which are taken at the points $1$, $e^{\frac{2 \pi i}{3}}$, and $e^{\frac{-2 \pi i}{3}}$. We will construct the connection between $a=1$ and $b=e^{\frac{2\pi i}{3}}$, by considering the variational problem \[ \min{ L_{p} (u)} \] along embeddings on the plane connecting $a$ to $b$. Since the functional $L_{p}$ is independent of parametrizations, we choose $u: (0,1) \to \mathbb{R}^{2}$ with $u(0) = a$, $u(1) = b$, and set $z (\tau) = u^{1} (\tau) + i u^{2} (\tau)$. Then, \[ L_{p} (u) = \int_{0}^{1} |z ' (\tau)| |z^{3} (\tau) - 1| \, d\tau = \int_{0}^{1} \left| \frac{d}{d\tau} g (z (\tau)) \right| d\tau = \int_{0}^{1} |w' (\tau)| \, d\tau, \] where $w = g(z) = z - z^{4}/4$. 
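As a quick numerical sanity check of this change of variables (our illustration, not part of the original argument; the straight test curve below is arbitrary), one can verify that the weighted length $\int_0^1 |z'(\tau)|\,|z^{3}(\tau)-1|\,d\tau$ coincides with the Euclidean length of the image curve $w = g(z)$, since $g'(z) = 1 - z^{3}$:

```python
import numpy as np

# Sanity check (illustrative, not from the paper): since g(z) = z - z^4/4 has
# g'(z) = 1 - z^3, any curve z(tau) satisfies |dw/dtau| = |z^3 - 1| |dz/dtau|
# for w = g(z), so the weighted length L_p equals the Euclidean length of w.
tau = np.linspace(0.0, 1.0, 20001)
a, b = 1.0, np.exp(2j * np.pi / 3)
z = (1 - tau) * a + tau * b           # an arbitrary test curve from a to b
w = z - z**4 / 4                      # image curve w = g(z)

dz = np.gradient(z, tau)
dw = np.gradient(w, tau)

def trapezoid(f, t):
    # simple trapezoidal rule
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(t)))

weighted_len = trapezoid(np.abs(dz) * np.abs(z**3 - 1), tau)  # L_p of z
image_len = trapezoid(np.abs(dw), tau)                        # length of g(z)
assert abs(weighted_len - image_len) < 1e-6
```

The identity holds for every curve from $a$ to $b$, which is precisely why the minimization transfers to straight line segments in the $w$-plane.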
It is clear that minimizing $L_{p}$ over the set of curves connecting $a$ to $b$ is reduced to the simple problem of minimizing the length functional on the $w$-plane for curves connecting $g(a) = 3/4$ to $g (b) = 3 e^{\frac{2 \pi i}{3}}/4$, which of course is minimized by the line segment connecting these image points. Now, by choosing the following parametrization for the line segment \[ g (z (\tau)) = \tau g(a) + (1- \tau) g(b) = \frac{3}{4} ( \tau + (1- \tau) e^{\frac{2 \pi i}{3}} ), \text{ for } 0 \leq \tau \leq 1, \] we can show that the curve $z (\tau) = r (\tau) e^{i \theta (\tau)}$ satisfies the parameter-free equation \[ 4 r \cos{ (\theta - \frac{\pi}{3} ) } = r^{4} \cos{ ( 4 \theta - \frac{\pi}{3} )} + 3 \cos{ \frac{\pi}{3}}, \text{ for } 0 \leq \theta \leq \frac{\pi}{3} \text{ and } 0<r<1, \] which is exactly the same equation presented in \cite{ABC} for $p=q=2$. The dependence on $p$ is through the parametrization \[ \frac{dt}{dx} = \frac{\sqrt[p]{W(z(t))}}{|z'(t)|}, \text{ with } t(0)=\frac{1}{2}, \] which leads to the connection $u(x) = z(t(x))$. So, naturally, one would expect that the theory presented in \cite{ABC} could be extended for any $p>1$, and although this is true, it is not without some extra technical effort. \begin{theorem} \label{thmthree} Let $W(u_{1},u_{2}) = |f(z)|^{q}$, where $f = g'$ is holomorphic in an open subset $D$ of $\mathbb{R}^{2}$, and let the point $(u_{1}, u_{2})$ be identified with the complex number $z(x) = u_{1} (x) + i u_{2} (x)$. Additionally, let $\gamma = \{ u(x): x \in (a,b) \}$ be a smooth curve in $D$ and $x$ an equipartition parameter, that is, $|u_{x}|^{p} = W(u)$. Also, set $\alpha = u(a)$ and $\beta = u(b)$. Then, $u$ is a solution to \begin{equation} \label{three} (|u_{x}|^{p-2} u_{x})_{x} - \frac{1}{q} \nabla W (u) = 0, \text{ in } (a,b) \end{equation} if and only if \begin{equation} \label{four} \text{\em Im}{ \left( \frac{g(z) - g(\alpha)}{g(\beta) - g(\alpha)} \right)} = 0, \text{ for all } z \in \gamma. 
\end{equation} \end{theorem} \begin{proof} Let $u = (u_{1}, u_{2}): (a,b) \to D$ be a solution to \eqref{three} with $|u_{x}|^{p} = W(u)$. Also, let $L$ be the total arclength and $l$ the arclength parameter defined by \begin{equation*} L = \int_{a}^{b} |u_{x}| \sqrt[q]{W(u)} \, dx \text{ and } l = \int_{a}^{x} |u_{x} (y) | \sqrt[q]{W(u(y))} \, dy. \end{equation*} We will show that $g(\gamma)$ is the line segment $[ g (\alpha), g (\beta) ]$. We begin by modifying equation \eqref{three}. Here we note that the equipartition relationship gives \begin{equation} \label{five} |z_{x}|^{p} = W(u) = (f \overline{f})^{\frac{q}{2}} \end{equation} and that since \begin{align*} & \frac{\partial W}{\partial u_{1}} = (|f(z)|^{q})_{u_{1}} = [ ( f \overline{f})^{\frac{q}{2}} ]_{u_{1}} = \frac{q}{2} (f \overline{f})^{\frac{q-2}{2}} (f' \overline{f} + f \overline{f'}),\\[4pt] & \frac{\partial W}{\partial u_{2}} = (|f(z)|^{q})_{u_{2}} = \frac{q}{2} (f \overline{f})^{\frac{q-2}{2}} (i f' \overline{f} - i f \overline{f'}), \end{align*} we have \begin{equation} \label{six} \frac{\partial W}{\partial u_{1}} + i \frac{\partial W}{\partial u_{2}} = \frac{q}{2} (f \overline{f})^{\frac{q-2}{2}} (f' \overline{f} + f \overline{f'} - f' \overline{f} + f \overline{f'}) = q (f \overline{f})^{\frac{q-2}{2}} f \overline{f'}. \end{equation} Hence, based on \eqref{six}, equation \eqref{three} can be written as \begin{align*} 0 & = (|u_{x}|^{p-2} u_{x})_{x} - \frac{1}{q} \nabla W (u) \\ & = \frac{p}{2} |z_{x}|^{p-2} z_{xx} + \left(\frac{p}{2} - 1\right) |z_{x}|^{p-4} z_{x}^{2} \overline{z_{xx}} - (f \overline{f})^{\frac{q-2}{2}} f \overline{f'}. \end{align*} Now, by multiplying the above equation by $| z_{x} |^{4 - p}$ and by further simplifying, we see that equation \eqref{three} is equivalent to \begin{equation} \label{seven} p |z_{x}|^{2} z_{xx} + (p-2) z_{x}^{2} \overline{z_{xx}} - 2 |z_{x}|^{2(3-p)} f \overline{f'} = 0. 
\end{equation} In addition, by differentiating the equipartition relationship $|z_{x}|^{p} = (f \overline{f})^{\frac{q}{2}}$, we obtain the equation \[ p |z_{x}|^{p-2} (z_{xx} \overline{z_{x}} + z_{x} \overline{z_{xx}}) = q (f \overline{f})^{\frac{q-2}{2}} (f' \overline{f} z_{x} + f \overline{f'} \overline{z_{x}}), \nonumber \] which simplified, is equivalent to \begin{equation} \label{eight} (p-1) |z_{x}|^{2(p-2)} (z_{xx} \overline{z_{x}} + z_{x} \overline{z_{xx}}) = f' \overline{f} z_{x} + f \overline{f'} \overline{z_{x}}. \end{equation} Finally, differentiating the function $g$, we have \[ \frac{dg(z)}{dl} = \frac{g'(z) z_{x}}{(f \overline{f})^{\frac{q}{2}}} = \frac{f z_{x}}{(f \overline{f})^{\frac{q}{2}}} = \frac{z_{x}}{f^{\frac{q}{2} - 1} \overline{f}^{\frac{q}{2}}} = \frac{z_{x}}{f^{\frac{q-2}{2}} \overline{f} ^{\frac{q}{2}}} \] and \begin{align} \frac{d^{2}g(z)}{dl^{2}} & = \frac{f \overline{f} z_{xx} + \frac{2-q}{2} \overline{f} f ' z_{x}^{2} - \frac{q}{2} |z_{x}|^{2} f \overline{f'}}{f^{q} \overline{f}^{q+1}} \nonumber \\ & =\frac{f \overline{f} z_{xx} + \frac{p-2}{2(p-1)} \overline{f} f ' z_{x}^{2} - \frac{p}{2(p-1)} |z_{x}|^{2} f \overline{f'}}{f^{q} \overline{f}^{q+1}} \nonumber \\ & =\frac{2 (p-1) f \overline{f} z_{xx} + (p-2) \overline{f} f ' z_{x}^{2} - p |z_{x}|^{2} f \overline{f'}}{2(p-1)f^{q} \overline{f}^{q+1}} \nonumber \\ & =\frac{2 (p-1) f \overline{f} z_{xx} + z_{x} ( (p-2) \overline{f} f ' z_{x} - p \overline{z_{x}} f \overline{f'} ) }{2 (p-1) f^{q} \overline{f}^{q+1}} \nonumber \\ \label{nine} & =\frac{2 (p-1) f \overline{f} z_{xx} + z_{x} ( (p-2) ( \overline{f} f ' z_{x} + \overline{z_{x}} f \overline{f'} ) - 2 (p-1) \overline{z_{x}} f \overline{f'} ) }{2 (p-1) f^{q} \overline{f}^{q+1}}. 
\end{align} Now utilizing \eqref{eight}, equation \eqref{nine} becomes \begin{align} \label{ten} \frac{d^{2}g(z)}{dl^{2}} & = \frac{2 f \overline{f} z_{xx} + (p-2) |z_{x}|^{2} |z_{x}|^{2p-4} z_{xx}}{2 f^{q} \overline{f}^{q+1}} \nonumber \\ & \quad + \frac{(p-2) |z_{x}|^{2 (p-2)} z_{x}^{2} \overline{z_{xx}} - 2 |z_{x}|^{2} f \overline{f'}}{2 f^{q} \overline{f}^{q+1}} \end{align} and by virtue of \eqref{five}, equation \eqref{ten} becomes \begin{align*} \frac{d^{2}g(z)}{dl^{2}} & = \frac{2 |z_{x}|^{2 (p-1)} z_{xx} + (p-2) |z_{x}|^{2 (p-1)} z_{xx}}{2 f^{q} \overline{f}^{q+1}} \\ & \quad + \frac{(p-2) |z_{x}|^{2 (p-2)} z_{x}^{2} \overline{z_{xx}} - 2 |z_{x}|^{2} f \overline{f'}}{2 f^{q} \overline{f}^{q+1}}, \end{align*} which after simplification can be written as \begin{align*} \frac{d^{2}g(z)}{dl^{2}} & = \frac{p |z_{x}|^{2 (p-1)} z_{xx} + (p-2) |z_{x}|^{2 (p-2)} z_{x}^{2} \overline{z_{xx}} - 2 |z_{x}|^{2} f \overline{f'}}{2 f^{q} \overline{f}^{q+1}} \\ & = \frac{p |z_{x}|^{2} z_{xx} + (p-2) z_{x}^{2} \overline{z_{xx}} - 2 |z_{x}|^{2 (3-p)} f \overline{f'}}{2 f^{q} \overline{f}^{q+1} |z_{x}|^{2 (2-p)}} \\ & = 0, \end{align*} as a result of \eqref{seven}. Thus $dg(z) / dl = C$ is constant. Integrating this equation and evaluating it at $l = L$ gives respectively $g(z) = g(\alpha) + C l$ and $C l = g(\beta) - g(\alpha)$. In addition, we have \[ \left| \frac{dg(z)}{dl} \right| = \left| \frac{z_{x}}{f^{\frac{q-2}{2}} \overline{f}^{\frac{q}{2}}} \right| = \frac{ |z_{x}| |z_{x}|^{\frac{p}{q}}}{ |z_{x}|^{p}} = \frac{|z_{x}| |z_{x}|^{p-1}}{|z_{x}|^{p}} = 1 = |C|, \] hence $L = |g (\beta) - g (\alpha)|$ and $C = \frac{g(\beta) - g(\alpha)}{|g(\beta) - g(\alpha)|}$, which means that \[ g (z (l)) = \frac{L-l}{L} g(\alpha) + \frac{l}{L} g(\beta). \] Thus, $g$ is the desired line segment. For the converse, assume that $\gamma = u ( (a,b) )$ satisfies \eqref{four} and the parameter $x$ is an equipartition parameter for $u$. 
Then, equation \eqref{four} can be written as \[ g (z) - g (\alpha) = s (x) ( g (\beta) - g (\alpha) ), \] where $s (x)$ is a real-valued function. Upon differentiation, we obtain \[ s_{x} ( g (\beta) - g (\alpha) ) = g' (z) z_{x} = f (z) z_{x}. \] This equation implies that \[ |s_{x}| = \frac{|f(z)| |z_{x}|}{ |g (\beta) - g (\alpha)|} = \frac{|f(z)|^{q}}{|g (\beta) - g (\alpha)|} > 0 \] and since $s (x) \in \mathbb{R}$, with $s (a) = 0$ and $s (b) = 1$, we must have $s_{x} (x) > 0$. Hence, \[ s_{x} (x) = \frac{|f(z)|^{q}}{|g (\beta) - g (\alpha)|}. \] Consequently, \begin{equation} \label{eleven} z_{x} = \frac{s_{x} ( g (\beta) - g (\alpha) ) }{f(z)} = C \frac{(f \overline{f})^{\frac{q}{2}}}{f}, \end{equation} where $C = \frac{g (\beta) - g (\alpha)}{|g (\beta) - g (\alpha)|}$. Lastly, utilizing \eqref{eleven}, we construct the differential equation \eqref{seven} as follows. First, we have \begin{align*} & |z_{x}|^{2} = (f \overline{f})^{q-1}, \\ & z_{xx} = \frac{q-2}{2} C^{2} f^{q-3} \overline{f}^{q} f ' + \frac{q}{2} f^{q-1} \overline{f}^{q-2} \overline{f'}, \\ & z_{x}^{2} = C^{2} f^{q-2} \overline{f}^{q}, \\ & \overline{z_{xx}} = \frac{q-2}{2} \overline{C}^{2} \overline{f}^{q-3} f^{q} \overline{f '} + \frac{q}{2} \overline{f}^{q-1} f^{q-2} f ', \end{align*} so from the above relations it follows that \begin{align} \label{twelve} p |z_{x}|^{2} z_{xx} & = \frac{p q - 2 p}{2} C^{2} f^{2 q - 4} \overline{f}^{2 q - 1} f ' + \frac{p q}{2} f^{2 q - 2} \overline{f}^{2 q - 3} \overline{f '} \\ \label{thirteen} (p - 2) z_{x}^{2} \overline{z_{xx}} & = \frac{p q - 2 p - 2 q + 4}{2} f^{2 q - 2} \overline{f}^{2 q - 3} \overline{f '} \\ & \quad + \frac{p q - 2 q}{2} C^{2} f^{2 q - 4} \overline{f}^{2 q - 1} f '. 
\nonumber \end{align} Adding \eqref{twelve} and \eqref{thirteen} gives \begin{align} \label{fourteen} p |z_{x}|^{2} z_{xx} + (p - 2) z_{x}^{2} \overline{z_{xx}} & = (p q - p - q ) C^{2} f^{2 q - 4} \overline{f}^{2 q - 1} f ' \\ & \quad + (p q - p - q + 2 ) f^{2 q - 2} \overline{f}^{2 q - 3} \overline{f '}, \nonumber \end{align} and utilizing the fact that $|z_{x}|^{2 (3 - p)} = (f \overline{f})^{2 q - 3}$, equation \eqref{fourteen} becomes \[ p |z_{x}|^{2} z_{xx} + (p - 2) z_{x}^{2} \overline{z_{xx}} = 2 (f \overline{f})^{2 q - 3} f \overline{f '} = 2 |z_{x}|^{2 (3 - p)} f \overline{f '}. \] This equation is equivalent to $u$ being a solution to $q (|u_{x}|^{p - 2} u_{x})_{x} = W_{u} (u)$ and the proof is complete. \end{proof} The proof implies that \eqref{three} is equivalent to the first-order ordinary differential equation \begin{equation} \label{fifteen} z_{x} = C \frac{(f \overline{f})^{\frac{q}{2}}}{f}, \text{ for } C \in \mathbb{C} \text{ with } |C| = 1. \end{equation} Multiplying \eqref{fifteen} by $\frac{f (z)}{C}$ gives \[ \frac{d}{dx} \frac{g(z)}{C} = |f(z)|^{q} = W > 0 \] and integrating this equation gives \[ \text{Im} \left( \frac{g (z) - g (\alpha)}{C} \right) = 0 \] and \[ \frac{g (z) - g (\alpha)}{C} = \int_{a}^{x} W (z (t)) dt = l. \] This in particular implies that the map $x \mapsto g (z (x))$ is a one-to-one map. Also, in the case of $u$ being a solution, the theorem states that the set $g(\gamma) = \{ g(z): z \in \gamma \} $ is a line segment with end points $g(\alpha)$ and $g(\beta)$ and that the partial transition energy is given by \begin{align*} \int_{a}^{y} \left( \frac{|u_{x}|^{p}}{p} + \frac{W(u)}{q} \right) dx & = \int_{a}^{y} |u_{x}| \sqrt[q]{W(u)} dx \\ & = \int_{a}^{y} \left| \frac{d}{dx} g(u) \right| dx \\ & = |g (u(y)) - g (\alpha)|, \text{ for all } y \in (a,b]. 
\end{align*} \begin{theorem} \label{thmfour} There exists at most one trajectory connecting any two minima of a holomorphic potential, that is, if $W(z)=|f(z)|^{q}$, where $f$ is holomorphic on $\mathbb{C}$, then there exists at most one solution of \eqref{two} that connects any two roots of $W(z)=0$. \end{theorem} \begin{proof} Let $g$ be an antiderivative of $f$ and suppose that $\gamma_{1}$ and $\gamma_{2}$ are two trajectories of \eqref{two} with the same end points $\alpha$, $\beta$. Since the energy $|g(\beta)-g(\alpha)|$ is positive, it follows that $g(\beta) \neq g(\alpha)$ and we can define the function \[ \hat{g} = \frac{|g(\beta)-g(\alpha)|}{g(\beta)-g(\alpha)}(g(z)-g(\alpha)), \text{ for all } z \in \mathbb{C}. \] Then, $\hat{g}$ is real on $\gamma_{1} \cup \gamma_{2}$. If $\gamma_{1} \neq \gamma_{2}$, then $\gamma_{1}$ and $\gamma_{2}$ will enclose an open domain $D$ in $\mathbb{C}$. As the imaginary part of $\hat{g}$ on $\partial D = \gamma_{1} \cup \gamma_{2} \cup \{ \alpha, \beta \}$ is zero, it has to be identically zero in $D$. This implies that $\hat{g}$ is a constant function in $\mathbb{C}$, which is impossible. Thus, $\gamma_{1} = \gamma_{2}$. \end{proof} Finally, we present some specific examples concerning existence and uniqueness, as well as non-existence and non-uniqueness, of connections between the minima of various potentials. Specifically, it can be proved that for both the potentials \[ W(z) = |z^{n} - 1|^{q}, \] where $n \geq 2$ is an integer, and \[ W(z) = |(1-z^{2})(z^{2} + \varepsilon^{2})|^{q}, \] where $0 < \varepsilon < \infty$, there always exists a unique connection between each pair of their minima. We also refer to a non-existence and non-uniqueness phenomenon for the potentials \[ W(z) = |(1 - z^{2})(z - i \varepsilon)|^{q}, \] where $0 \leq \varepsilon < \infty$, and \[ W(z) = |(z - 1)(z + a) / z|^{q}, \] where $0 < a < 1$, respectively. 
In the first case it can be proved that there exists a connection between $-1$ and $1$ if and only if $|\varepsilon | > \sqrt{2 \sqrt{3} - 3}$, while in the second case there exist exactly two connections between $-a$ and $1$, one in the upper half-plane and one in the lower half-plane. We conclude by stating that all examples given in \cite{ABC} have exact analogs since the only modification needed for the transformation to the $p$-case is the change from potentials of the form $W = |f|^{2}$ (cf.\ \cite{ABC}) to potentials of the form $W = |f|^{q}$. \section*{Acknowledgments} The author would like to thank Professor Nicholas D. Alikakos for his valuable help and guidance. \end{document}
\begin{document} \begin{frontmatter} \title{Optimal energy management for hybrid electric aircraft} \author[first]{Martin Doff-Sotta} \quad \author{Mark Cannon\,$\mbox{}^\ast$} \quad \author{Marko Bacic\,$\mbox{}^{\ast,1}$} \address[first]{Department of Engineering Science, University of Oxford, UK\\ Email: \{martin.doff-sotta, mark.cannon, marko.bacic\}@eng.ox.ac.uk} \thanks{On part-time secondment from Rolls-Royce PLC.} \begin{abstract} A convex formulation is proposed for optimal energy management in aircraft with hybrid propulsion systems consisting of gas turbine and electric motor components. By combining a point-mass aircraft dynamical model with models of electrical and mechanical powertrain losses, the fuel consumed over a planned future flight path is minimised subject to constraints on the battery, electric motor and gas turbine. The resulting optimisation problem is used to define a predictive energy management control law that takes into account the variation in aircraft mass during flight. A simulation study based on a representative 100-seat aircraft with a prototype parallel hybrid electric propulsion system is used to investigate the properties of the controller. We show that an optimisation-based control strategy can provide significant fuel savings over heuristic energy management strategies in this context. \end{abstract} \begin{keyword} Energy Management, Nonlinear Model Predictive Control, Convex Programming. \end{keyword} \end{frontmatter} \section{Introduction} Aviation currently contributes around 2\% of worldwide human-made CO2 emissions but the demand for air travel and transport is growing at a significant rate. The aviation industry is committed to realising this growth sustainably with a drastic reduction of CO2 emissions by 2050. One avenue identified to contribute to the required CO2 reduction is through hybridisation of aircraft propulsion systems. 
This refers to enabling technologies for boundary layer ingesting aircraft \citep{hall2017} as well as rotary/tilt wing aircraft configurations in the Urban Air Mobility markets \citep{nasa_urban}. Hybrid electric architectures require real-time dynamic power management in order to minimise CO2 output. This paper addresses an optimal energy management problem for a hybrid electric aircraft with a propulsion system consisting of a gas turbine and a battery-powered electric motor in a parallel configuration. Although we consider here a battery as a secondary energy source, the approach is equally applicable to other primary and secondary energy sources such as hydrogen powered reciprocating engines, fuel-cells and super-capacitors. Any optimisation methodology for primary power management must satisfy the basic requirements of determinism, convergence in finite time and verifiability. We propose a solution based on model predictive control (MPC) employing convex optimisation. Predicted performance, expressed in terms of the fuel consumption over a given future flight path, is optimised subject to constraints on power flow and stored energy, and subject to the nonlinear aircraft dynamics, which include nonlinear losses in powertrain components. The proposed convex formulation of the optimisation problem is made suitable for a real time nonlinear MPC implementation by introducing several key simplifying assumptions on the characteristics of powertrain components. Specifically, the gas turbine and the electric motor are modelled via sets of convex quasi-static power maps, battery losses are modelled using a time-invariant equivalent circuit, and the available data on the future flight path is assumed sufficient to determine powertrain shaft speeds across the prediction horizon. Supervisory control methodologies for energy management have been proposed in the context of hybrid electric ground vehicles~\citep[e.g.][]{sciarretta07}. 
Several approaches have been proposed for this problem, including methods based on indirect optimal control~\citep{kim11,onori15}, Dynamic Programming~\citep{lin03} and MPC~\citep{koot05,east19ieeetcst}. Optimal control of hybrid propulsion systems in aircraft is a new application area that poses a number of distinct challenges, perhaps the most significant of which are complex nonlinear flight dynamics and the effects of the time-varying aircraft weight due to the burning of fuel during flight. On the other hand, the future power demand is likely to be more reliably predictable in aircraft than in cars since a pre-planned flight path is generally available for aircraft, whereas the driving cycle is subject to greater uncertainty in route and traffic conditions~\citep{dicairano14,josevski17}. The contribution of this paper is to demonstrate that the optimal energy management problem for hybrid aircraft can be posed as a convex optimisation problem. To the authors' knowledge, this is the first attempt to address an important new application area of energy management. The paper is organised as follows. Section~\ref{sec:modelling} derives a continuous time hybrid electric aircraft model. This model is the basis of the discrete-time model and the MPC strategy that are proposed in Section~\ref{sec:dt_model}. Section~\ref{sec:convex} shows that the minimisation of fuel consumption can be expressed as a convex problem. Section~\ref{sec:simulation_results} describes simulation results and conclusions are drawn in Section~\ref{sec:conclusion}. \section{Modelling}\label{sec:modelling} We assume a parallel hybrid electric aircraft propulsion system in which a gas turbine producing power $P_{\text{gt}}(t)$ is combined with an electric motor with power output $P_{\text{em}}(t)$ (Fig.~\ref{fig:propulsion}). 
The net power output of the propulsion system, $P_{\text{drv}}(t)$, is produced by combining these two power sources via the relation $P_{\text{drv}}(t) = P_{\text{gt}}(t) + P_{\text{em}}(t)$ (assuming $100\%$ efficiency in drivetrain components). When the drive power is negative, which may occur for example while the aircraft is descending, it is assumed that the same powertrain could be used to generate electrical energy (i.e.\ it is capable of operating in a ``windmilling'' mode) in order to recharge the battery. In practice, a variable-pitch fan would be required, which increases complexity. The gas turbine and electric motor shaft rotation speeds are $\omega_{\text{gt}}(t)$ and $\omega_{\text{em}}(t)$ respectively. \begin{figure} \caption{Hybrid-electric propulsion architecture.} \label{fig:propulsion} \end{figure} The electric motor is powered by a battery with state of charge (SOC) $E(t)$ and power draw $P_{\text{b}}(t)$, and the state of charge dynamics are given by \[ \dot{E}=-P_{\text{b}}. \] The battery is modelled as an equivalent circuit with internal resistance $R$ and open-circuit voltage $U$, so that \begin{align*} P_{\text{b}}&=g\bigl(P_{\text{em}}\left(t\right), \omega_{\text{em}}\left(t\right)\bigr) \\ &=\frac{U^2}{2R}\left(1-\sqrt{1-\frac{4R}{U^2}h(P_{\text{em}}(t), \omega_{\text{em}}(t)) }\right) \end{align*} where $U$ and $R$ are assumed constant~\citep{east2018}. The map relating the mechanical power output of the electric motor $P_{\text{em}}$ to the electrical input power $P_{\text{c}}$ is $P_{\text{c}} = h(P_{\text{em}}, \omega_{\text{em}})$. 
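As a quick numerical sanity check, the equivalent-circuit relations above can be evaluated directly. The following Python sketch (illustrative only: the voltage, internal resistance and loss-map coefficients are assumed values, not data from this paper) computes the battery power draw for a given mechanical motor power at a fixed shaft speed:

```python
import math

# Assumed equivalent-circuit and loss-map parameters (illustrative values)
U = 800.0                          # open-circuit voltage [V]
R = 0.05                           # internal resistance [ohm]
k2, k1, k0 = 1e-8, 1.05, 500.0     # h(P_em) coefficients at a fixed omega_em

def h(P_em):
    """Electrical input power required to deliver mechanical power P_em."""
    return k2 * P_em**2 + k1 * P_em + k0

def g(P_em):
    """Battery power draw P_b implied by the equivalent-circuit model."""
    return (U**2 / (2 * R)) * (1 - math.sqrt(1 - 4 * R * h(P_em) / U**2))

P_em = 100e3                       # 100 kW mechanical output
# The battery supplies more than the electrical input power h(P_em),
# the difference being dissipated in the internal resistance R.
assert g(P_em) > h(P_em) > P_em
```

The square root is real only while $4Rh/U^2 < 1$, i.e.\ while the demanded electrical power is within what the battery can deliver.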
We assume that, for fixed $\omega_{\text{em}}$, $h(P_{\text{em}},\omega_{\text{em}})$ is non-decreasing and differentiable with respect to $P_{\text{em}}$ and $h(\cdot)$ is determined empirically from electric motor loss map data as \[ h(P_{\text{em}}, \omega_{\text{em}}) = \kappa_{2}(\omega_{\text{em}})P_{\text{em}}^2 + \kappa_{1}(\omega_{\text{em}})P_{\text{em}} + \kappa_{0}(\omega_{\text{em}}) \] for some functions $\kappa_2(\cdot)$, $\kappa_1(\cdot)$, $\kappa_0(\cdot)$, with $\kappa_{2}(\omega_{\text{em}}) \geq 0$ and $\kappa_{1}(\omega_{\text{em}}) > 0$ for all $\omega_{\text{em}}$ in the operating range. The aircraft motion is constrained by its dynamic equations. Assuming a point-mass model~\citep{stevens2015aircraft} and referring to Figure \ref{fig:aircraft}, the equilibrium of forces yields \begin{equation} m\frac{\mathrm{d}}{\mathrm{d}t}( \vvec{v}) =\vvec{T} + \vvec{L} + \vvec{D} + \vvec{W} \label{eq:T_t} \end{equation} where $ \vvec{v}$ is the velocity vector, $m$ the instantaneous mass of the aircraft, $\vvec{T}$ the vector of thrust, $\vvec{L}$ and $\vvec{D}$ are the lift and drag vectors and $\vvec{W}$ is the aircraft weight. \begin{figure} \caption{Aircraft forces and motion.} \label{fig:aircraft} \end{figure} Using the polar coordinates parametrisation ($v$,$\gamma$), the drive power is given by \[ P_{\text{drv}}=\vvec{T} \cdot \vvec{v} = m\frac{\mathrm{d}}{\mathrm{d}t}(\frac{1}{2} v^2) + \frac{1}{2}C_D\rho S v^3 + m g \sin{(\gamma)} v \] where $v$ is the magnitude of the velocity vector, $S$ is the wing area, $\rho$ the density of air, $g$ the acceleration due to gravity, $\gamma$ the flight path angle, $C_D=C_D(\alpha)$ the drag coefficient and $\alpha$ the angle of attack. Similarly, projecting equation \eqref{eq:T_t} along the lift vector $\vvec{L}$ yields \[ m v \frac{\mathrm{d}}{\mathrm{d}t}( \gamma) + mg \cos{(\gamma)}= T \sin{(\alpha)} + \frac{1}{2}C_L\rho S v^2 , \] where $C_L=C_L(\alpha)$ is the lift coefficient. 
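To make the power balance concrete, the sketch below evaluates the drive power expression for level and climbing flight; the mass, drag coefficient and wing parameters are illustrative values of the same order as those used later in the simulation study:

```python
import math

# Illustrative point-mass parameters (assumed values)
m = 42000.0        # aircraft mass [kg]
g = 9.81           # gravitational acceleration [m/s^2]
rho, S = 1.225, 77.3
C_D = 0.03         # drag coefficient at the current angle of attack

def drive_power(v, dv2_dt, gamma):
    """P_drv = kinetic-energy rate + drag power + climb power."""
    return (0.5 * m * dv2_dt
            + 0.5 * C_D * rho * S * v**3
            + m * g * math.sin(gamma) * v)

# Level cruise at constant speed: only the drag term remains
P_cruise = drive_power(v=190.0, dv2_dt=0.0, gamma=0.0)
# A 3-degree climb adds the potential-energy term m*g*sin(gamma)*v
P_climb = drive_power(v=190.0, dv2_dt=0.0, gamma=math.radians(3.0))
assert P_climb > P_cruise > 0.0
```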
The contribution of the thrust in the vertical direction being very small, the term $T \sin{(\alpha)}$ can be neglected (it can be checked a posteriori that $\alpha$ is small). The rate of change of the aircraft mass is given by \[ \dot{m} = \dot{m}_{\text{fuel}} = -f(P_{\text{gt}}(t), \omega_{\text{gt}}(t)) \] where $\dot{m}_{\text{fuel}}$ is the rate of fuel consumption and $f(P_{\text{gt}},\omega_{\text{gt}})$ is assumed to be convex, differentiable and non-decreasing with respect to $P_{\text{gt}}$ for fixed $\omega_{\text{gt}}$. We assume that $f(\cdot)$ can be determined empirically from fuel map data in the form \[ f(P_{\text{gt}},\omega_{\text{gt}}) = \beta_{2}(\omega_{\text{gt}})P_{\text{gt}}^2 + \beta_{1}(\omega_{\text{gt}})P_{\text{gt}} + \beta_{0}(\omega_{\text{gt}}) , \] with $\beta_{2}(\omega_{\text{gt}})\!\geq\! 0$, $\beta_{1}(\omega_{\text{gt}}) \!>\! 0$ in the operating range of $\omega_{\text{gt}}$. The problem at hand is to find the real-time optimal power split between the gas turbine and electric motor that minimises \begin{equation}\label{eq:contin-time_objective} J = \int_{0}^{T}{f(P_{\text{gt}}(t), \omega_{\text{gt}}(t))}\mathrm{d}t \end{equation} while satisfying constraints on the battery SOC, limits on power flows throughout the powertrain, and producing sufficient power to follow a prescribed flight path. \section{Discrete-time optimal control}\label{sec:dt_model} This section describes a discrete-time model that enables the optimisation of the power split between the electric motor and the gas turbine over a given future flight path to be formulated as a finite-dimensional optimisation problem. For a fixed sampling interval $\delta$, we consider a predictive control strategy that minimises, online at each sampling instant, the predicted fuel consumption over the remaining flight path. The minimisation is performed subject to the discrete-time dynamics of the aircraft mass and the battery SOC. 
The problem is also subject to bounds on the stored energy in the battery (to prevent deep discharging or overcharging), as well as limits on power flows that correspond to physical and safety constraints. The optimal solution to the fuel minimisation problem at the $k$th sampling instant is computed using estimates of the battery SOC $E(k\delta)$ and the aircraft mass $m(k\delta)$. The control law at time $k\delta$ is defined by the first time step of this optimal solution. The notation $\{ x_{0} , x_{1} , \ldots x_{N-1}\}$ is used for the sequence of future values of a variable $x$ predicted at the $k$th discrete-time step, so that $x_i$ denotes the predicted value of $x\bigl((k+i)\delta\bigr)$. The horizon $N$ is chosen so that $N = \lceil T/\delta \rceil - k$, and hence $N$ shrinks as $k$ increases and $k\delta$ approaches $T$. The discrete-time approximation of the objective (\ref{eq:contin-time_objective}) is \begin{equation} J = \sum_{i=0}^{N-1}{f(P_{\text{gt},i},\omega_{\text{gt}, i})} \, \delta \label{eq:obj} \end{equation} with, for $i=0,\dots,N-1$, \begin{align} \mbox{}\hspace{-3mm} f(P_{\text{gt}, i}, \omega_{\text{gt}, i}) &= \beta_{2}(\omega_{\text{gt}})P_{\text{gt},i}^2 \!+\! \beta_{1}(\omega_{\text{gt}, i})P_{\text{gt}, i} \!+\! \beta_{0}(\omega_{\text{gt}, i}) \label{eq:m_k0} \\ m_{i+1} &= m_{i} - f(P_{\text{gt},i}, \omega_{\text{gt},i}) \, \delta \label{eq:m_k} \end{align} where the forward Euler approximation has been used for derivatives. 
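A minimal sketch of the forward-Euler mass recursion (\ref{eq:m_k}), with an assumed affine fuel map, illustrates how the predicted fuel consumption in (\ref{eq:obj}) accumulates over the horizon (all numerical values are illustrative assumptions):

```python
# Forward-Euler propagation of aircraft mass (fuel map coefficients assumed)
delta = 10.0                                 # sampling interval [s]
beta2, beta1, beta0 = 0.0, 0.08e-6, 0.03     # fuel map per watt (illustrative)

def fuel_rate(P_gt):
    """Fuel burn rate f(P_gt) at a fixed shaft speed [kg/s]."""
    return beta2 * P_gt**2 + beta1 * P_gt + beta0

def simulate_mass(m0, P_gt_profile):
    """Return the predicted mass trajectory and total fuel burned."""
    masses = [m0]
    for P in P_gt_profile:
        masses.append(masses[-1] - fuel_rate(P) * delta)
    return masses, m0 - masses[-1]

# One hour (N = 360 steps) at a constant 4 MW gas turbine output
masses, fuel = simulate_mass(42000.0, [4e6] * 360)
assert len(masses) == 361 and 0.0 < fuel < 42000.0
```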
Using the same approach, the discrete-time battery model is \begin{align} E_{i+1} &= E_{i} - g(P_{\text{em},i}, \omega_{\text{em},i}) \, \delta \label{eq:E_k} \\ P_{\text{b},i} &= g (P_{\text{em},i}, \omega_{\text{em}, i}) \nonumber \\ &= \frac{U^2}{2R}\biggl[1-\sqrt{1-\frac{4R}{U^2}h(P_{\text{em}, i}, \omega_{\text{em}, i}) }\biggr] \label{eq:P_b} \end{align} for $i=0,\ldots,N-1$, where \begin{multline}\label{eq:h_k} h(P_{\text{em}, i},\omega_{\text{em}, i}) =\\ \kappa_{2}(\omega_{\text{em}, i})P_{\text{em}, i}^2 + \kappa_{1}(\omega_{\text{em}, i})P_{\text{em}, i} + \kappa_{0}(\omega_{\text{em}, i}) \end{multline} and the aircraft dynamics are given in discrete time by \begin{align} & m_{i} v_{i} \Delta_{i} \gamma = - m_{i} g \cos{(\gamma_{i})} + \tfrac{1}{2}C_L(\alpha_{i})\rho S v_{i}^2 \label{eq:vertical_k} \\ & P_{\text{drv},i} = \begin{aligned}[t] &\tfrac{1}{2} m_{i} \Delta_{i} (v^2) + m_{i} g \sin{(\gamma_{i})}v_{i}\\ &+\tfrac{1}{2}C_D(\alpha_{i})\rho S v_{i}^3, \end{aligned} \label{eq:drive} \\ & P_{\text{drv},i} =P_{\text{gt},i} + P_{\text{em},i}, \label{eq:P_link} \end{align} for $i=0,\ldots,N-1$, where \[ \Delta_{i} (v^2) = (v^2_{i+1} - v^2_{i})/\delta , \quad \Delta_{i} \gamma = (\gamma_{i+1} - \gamma_{i})/\delta. 
\] The problem to be solved at each time step $k$ is therefore: \begin{align} & \min_{\substack{P_{\text{gt}},\,P_{\text{em}},\,P_{\text{drv}}\\m,\,E,\,\omega_{\text{gt}},\,\omega_{\text{em}}}} & & \sum^{N-1}_{i=0} f(P_{\text{gt},i}, \omega_{\text{gt},i}) \label{eq:min} \\ & \qquad \text{ s.t.} & & P_{\text{drv},i} = P_{\text{gt},i} + P_{\text{em},i} \nonumber \\ & & & \begin{aligned} P_{\text{drv},i} &= \tfrac{1}{2} m_{i} \Delta_{i} v^2 + m_{i} g \sin{(\gamma_{i})}v_{i} \\ &\quad +\tfrac{1}{2}C_D(\alpha_{i})\rho S v_{i}^3 \end{aligned} \nonumber\\ & & & m_{i} v_{i} \Delta_{i} \gamma = - m_{i} g \cos{(\gamma_{i})} + \tfrac{1}{2}C_L(\alpha_{i})\rho S v_{i}^2 \nonumber\\ & & & m_{i+1} = m_{i} - f(P_{\text{gt},i}, \omega_{\text{gt},i}) \,\delta \nonumber\\ & & & E_{i+1} = E_{i} - g(P_{\text{em},i},\omega_{\text{em},i}) \, \delta \nonumber\\ & & & m_{0} = m(k\delta) \nonumber\\ & & & E_{0} = E(k\delta) \nonumber\\ & & & \underline{E} \leq E_{i} \leq \overline{E} \nonumber\\ & & & \underline{P}_{\text{gt}} \leq P_{\text{gt},i} \leq \overline{P}_{\text{gt}} \nonumber\\ & & & \underline{\omega}_{\text{gt}} \leq \omega_{\text{gt},i} \leq \overline{\omega}_{\text{gt}} \nonumber\\ & & & {\underline{P}_{\text{em}}} \leq P_{\text{em},i} \leq \overline{P}_{\text{em}} \nonumber\\ & & & {\underline{\omega}_{\text{em}}} \leq \omega_{\text{em},i} \leq \overline{\omega}_{\text{em}} \nonumber \end{align} where the constraints are imposed for $i=0,\ldots,N-1$. Here ${(\underline{E},\overline{E})}$ are the bounds on SOC that are required for normal battery operation, ${(\underline{P}_{\text{gt}},\overline{P}_{\text{gt}})}$ and ${(\underline{P}_{\text{em}}, \overline{P}_{\text{em}})}$ are the bounds on gas turbine power and electric motor power respectively, and ${(\underline{\omega}_{\text{gt}},\overline{\omega}_{\text{gt}})}$ and ${(\underline{\omega}_{\text{em}}, \overline{\omega}_{\text{em}})}$ are the bounds on the gas turbine and electric motor shaft rotation speeds, respectively. 
\section{Convex formulation}\label{sec:convex} The optimisation problem in (\ref{eq:min}) is nonconvex, which makes a real-time implementation of an MPC algorithm that relies on its solution computationally intractable. In this section a convex formulation is proposed that is suitable for an online solution. We assume that the aircraft speed $v_i$ and flight path angle $\gamma_i$ are chosen externally by a suitable guidance algorithm for $i=0,\ldots,N-1$. A convex formulation of the drive power is derived by expressing the drag and lift coefficients, $C_D$ and $C_L$, as functions of the angle of attack $\alpha$ and combining the equations that constrain the aircraft motion in the forward and vertical directions. Over a restricted domain and for given Reynolds and Mach numbers, the drag and lift coefficients can be expressed respectively as a quadratic non-decreasing function and a linear non-decreasing function \citep{abbott1945}: \begin{alignat}{2} C_D(\alpha_{i}) &= a_2 \alpha_{i}^2 + a_1 \alpha_{i} + a_0, &\qquad a_2 &> 0 \label{eq:C_D} \\ C_L(\alpha_{i}) &= b_1 \alpha_{i} + b_0, &\qquad b_1 &> 0 \label{eq:C_L} \end{alignat} for $\underline{\alpha} \leq \alpha_i \leq \overline{\alpha}$. 
Combining \eqref{eq:vertical_k}, \eqref{eq:drive}, \eqref{eq:C_D} and \eqref{eq:C_L}, the angle of attack can be eliminated from the expression for drive power, so that $P_{\text{drv},i}$ can be expressed as a quadratic function of the aircraft mass, $m_{i}$, as follows \begin{equation}\label{eq:P_drv} P_{\text{drv},i} = \eta_{2,i} m_{i}^2 + \eta_{1,i} m_{i} + \eta_{0,i} , \end{equation} where \begin{align*} & \eta_{2,i} = \frac{ 2 a_2 (v_{i} \Delta_{i} \gamma + g \cos{(\gamma_{i})} )^2 }{ b_1^2 \rho S v_{i}} , \\ & \eta_{1,i} = \tfrac{1}{2}\Delta_{i} v^2 + g \sin{(\gamma_{i})} v_{i} \\ &\qquad\quad - \frac{2 a_2 b_0 (v_{i} \Delta_{i} \gamma + g \cos{(\gamma_{i})} ) v_{i}}{b_1^2} \\ &\qquad\quad + \frac{a_1}{ b_1 } ( v_{i} \Delta_{i} \gamma + g \cos{(\gamma_{i})} ) v_{i} , \\ & \eta_{0,i} = \tfrac{1}{2} \rho S v_{i}^3 \Bigl( \frac{a_2 b_0^2}{b_1^2} - \frac{a_1 b_0}{b_1} + a_0 \Bigr) . \end{align*} Here the flight path angles $\gamma_i$ and speeds $v_i$ are assumed to be fixed and are not optimisation variables. Since $\eta_{2,i} > 0$ for all $i$, the drive power is a convex function of $m_{i}$. Note that there is no guarantee that satisfying equation \eqref{eq:P_drv} enforces equations \eqref{eq:vertical_k} and \eqref{eq:drive} individually. In practice, assuming that we have full control over the eliminated variable $\alpha$ (via the elevator and fans), both individual dynamical equations can be satisfied a posteriori. The inequality constraint on $\alpha$ also has to be checked a posteriori. For the given parallel hybrid configuration, we assume for simplicity that the electric motor and gas turbine share a common shaft rotation speed which is equal to the speed of rotation of the fan, i.e.\ $\omega_{\text{gt},i} = \omega_{\text{em},i}$ for all $i$. 
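The elimination of $\alpha$ can be checked numerically. The sketch below evaluates the coefficients $\eta_{2,i}$, $\eta_{1,i}$, $\eta_{0,i}$ for one assumed flight condition, using aerodynamic coefficients of the same form as (\ref{eq:C_D})--(\ref{eq:C_L}) (the numerical values are illustrative):

```python
import math

# Assumed aerodynamic and environment parameters (illustrative)
a2, a1, a0 = 5.3e-4, 0.004, 0.029   # quadratic drag polar
b1, b0 = 0.11, 0.43                 # linear lift curve
rho, S, g = 1.225, 77.3, 9.81

def eta_coeffs(v, dgamma_dt, dv2_dt, gamma):
    """Coefficients of P_drv = eta2*m^2 + eta1*m + eta0 after eliminating alpha."""
    q = v * dgamma_dt + g * math.cos(gamma)     # recurring factor
    eta2 = 2 * a2 * q**2 / (b1**2 * rho * S * v)
    eta1 = (0.5 * dv2_dt + g * math.sin(gamma) * v
            - 2 * a2 * b0 * q * v / b1**2
            + a1 * q * v / b1)
    eta0 = 0.5 * rho * S * v**3 * (a2 * b0**2 / b1**2 - a1 * b0 / b1 + a0)
    return eta2, eta1, eta0

eta2, eta1, eta0 = eta_coeffs(v=190.0, dgamma_dt=0.0, dv2_dt=0.0, gamma=0.0)
assert eta2 > 0.0   # P_drv is a convex quadratic in the mass m_i
```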
If the shaft speed is known at each discrete-time step of the prediction horizon, then the coefficients in \eqref{eq:m_k0} and \eqref{eq:h_k} can be estimated from a set of polynomial approximations of $h(\cdot)$ and $f(\cdot)$ at a pre-determined set of speeds. This allows $h(P_{\text{em},i},\omega_{\text{em},i})$ and $f(P_{\text{gt},i},\omega_{\text{gt},i})$ in (\ref{eq:P_b}) and (\ref{eq:min}) to be replaced by time-varying functions of the gas turbine power and electric motor power alone: \begin{align} h(P_{\text{em},i},\omega_{\text{em},i}) &= h_i(P_{\text{em},i}) = \kappa_{2,i}P_{\text{em},i}^2 + \kappa_{1,i}P_{\text{em},i} + \kappa_{0, i}, \label{eq:h_P} \\ f(P_{\text{gt},i},\omega_{\text{gt},i}) &= f_i(P_{\text{gt},i}) = \beta_{2,i} P_{\text{gt},i}^2 + \beta_{1,i} P_{\text{gt},i} + \beta_{0, i} \label{eq:f_P} \end{align} with $\kappa_{2,i} \geq 0$, $\kappa_{1,i} >0$ and $\beta_{2,i} \geq 0$, $\beta_{1,i} > 0$ for all $i$. In order to estimate the shaft speed $\omega_{\text{gt},i} = \omega_{\text{em},i}$, and hence determine the coefficients $\kappa_{2,i}$, $\kappa_{1,i}$, $\kappa_{0,i}$, $\beta_{2,i}$, $\beta_{1,i}$, $\beta_{0,i}$ in (\ref{eq:h_P}) and (\ref{eq:f_P}), we use a pre-computed look-up table relating the drive power to rotational speed of the fan, for a given altitude, Mach number and air conditions (temperature and specific heat at constant pressure). This enables the shaft speed to be determined as a function of the fan power output at each discrete-time step along the flight path. Although $P_{\text{drv},i}$ depends on the aircraft mass $m_i$, which is itself an optimisation variable, a prior estimate of the required power output can be obtained from (\ref{eq:P_drv}) assuming a constant mass $m_i=m_0$ for all $i$. The simulation results described in Section~\ref{sec:simulation_results} show that this assumption has a negligible effect on solution accuracy. 
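The shaft-speed estimation step amounts to a one-dimensional table lookup. A minimal interpolation sketch follows; the table entries are invented for illustration, and a real fan map would also be indexed by altitude and Mach number:

```python
import bisect

# Hypothetical (drive power [W], shaft speed [rad/s]) pairs at one flight condition
table_P = [1e6, 2e6, 3e6, 4e6, 5e6]
table_w = [180.0, 220.0, 250.0, 275.0, 295.0]

def shaft_speed(P_drv):
    """Piecewise-linear interpolation of shaft speed from estimated drive power."""
    P = min(max(P_drv, table_P[0]), table_P[-1])     # clamp to the table range
    j = min(bisect.bisect_right(table_P, P), len(table_P) - 1)
    i = j - 1
    t = (P - table_P[i]) / (table_P[j] - table_P[i])
    return table_w[i] + t * (table_w[j] - table_w[i])

assert shaft_speed(2.5e6) == 235.0       # midpoint of the second segment
assert shaft_speed(9e6) == table_w[-1]   # clamped at the top of the table
```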
We define $g_i(\cdot)$ in terms of $h_i(\cdot)$ as \[ g_i(P_{\text{em},i}) =\frac{U^2}{2R}\biggl[ 1-\sqrt{1-\frac{4R}{U^2}h_i(P_{\text{em}, i}) }\biggr] . \] Then $g_i(\cdot)$ is necessarily a convex, non-decreasing, one-to-one function if the lower bound on $P_{\text{em},i}$ is redefined as \[ \underline{P}_{\text{em},i} :=\max \Bigl\{ -\overline{P}_{\text{em}}, -\frac{\kappa_{1,i}}{2 \kappa_{2,i}}\Bigr\}, \] since this bound ensures that $h_i(\cdot)$ is a one-to-one non-decreasing convex function of $P_{\text{em},i}$. The dynamic constraints \eqref{eq:m_k}, \eqref{eq:E_k} and the power balance (\ref{eq:P_link}) can be expressed using (\ref{eq:P_drv}), (\ref{eq:h_P}) and (\ref{eq:f_P}) as \begin{align} & m_{i+1} = m_{i} - f_i(P_{\text{gt},i}) \, \delta \label{eq:m_k_P} \\ & E_{i+1} = E_{i} - g_i (P_{\text{em},i}) \, \delta \label{eq:E_k_P} \\ & P_{\text{gt},i} + P_{\text{em},i} = \eta_{2,i} m_i^2 + \eta_{1,i} m_i + \eta_{0,i} . \label{eq:drv_k_P} \end{align} These constraints are nonconvex due to their quadratic dependence on the optimisation variables $P_{\text{gt},i}$, $P_{\text{em},i}$ and $m_i$. To convexify these constraints, we first eliminate $P_{\text{em},i}$ from (\ref{eq:E_k_P}) and (\ref{eq:drv_k_P}) using $P_{\text{b},i} = g_i(P_{\text{em},i})$ and $P_{\text{em},i} = g_i^{-1}(P_{\text{b},i})$. Then \eqref{eq:E_k_P} becomes linear, \[ E_{i+1} = E_{i} - P_{\text{b},i} \delta . \] Moreover, under the assumptions on $g_i(\cdot)$ (convex, non-decreasing and one-to-one), the inverse mapping $g_i^{-1}(\cdot)$ is a concave, increasing function \citep[e.g.][]{east2018}. 
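Since $g_i(\cdot)$ is one-to-one on the restricted domain, its inverse follows by solving the quadratic $h_i(P_{\text{em}}) = P_{\text{b}} - R P_{\text{b}}^2/U^2$ for $P_{\text{em}}$. The round trip below checks this numerically for the case $\kappa_{2,i} > 0$; all parameter values are illustrative assumptions:

```python
import math

# Assumed battery and loss-map parameters (illustrative, kappa2 > 0)
U, R = 800.0, 0.05
k2, k1, k0 = 1e-8, 1.05, 500.0

def g(P_em):
    """Battery power draw for mechanical motor power P_em."""
    h = k2 * P_em**2 + k1 * P_em + k0
    return (U**2 / (2 * R)) * (1 - math.sqrt(1 - 4 * R * h / U**2))

def g_inv(P_b):
    """Closed-form inverse of g (positive root of the quadratic in P_em)."""
    return (-k1 / (2 * k2)
            + math.sqrt(-R * P_b**2 / (k2 * U**2)
                        + (P_b - k0) / k2
                        + k1**2 / (4 * k2**2)))

P_em = 150e3
assert abs(g_inv(g(P_em)) - P_em) < 1.0   # round trip recovers P_em to < 1 W
```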
Note that $g_i^{-1}(\cdot)$ is given explicitly as \[ g_i^{-1}(P_{\text{b},i})=-\frac{\kappa_{1,i}}{2\kappa_{2,i}} + \biggl[ -\frac{R P^2_{\text{b},i}}{\kappa_{2,i}U^2} + \frac{P_{\text{b},i} - \kappa_{0,i}}{\kappa_{2,i}} +\frac{\kappa^2_{1,i}}{4\kappa^2_{2,i}}\biggr]^{\frac{1}{2}} \] if $\kappa_{2,i} > 0$, and by \[ g_i^{-1}(P_{\text{b},i})=-\frac{1}{\kappa_{1,i}} \Bigl( \frac{R}{U^2}P^2_{\text{b},i} - P_{\text{b},i} + \kappa_{0,i} \Bigr) \] at any time steps $i$ such that $\kappa_{2,i}=0$. Therefore, by relaxing the equality constraints in (\ref{eq:m_k_P}) and (\ref{eq:drv_k_P}) to inequalities, the pair of convex constraints \begin{align} & m_{i+1} \leq m_{i} - f_i(P_{\text{gt},i}) \, \delta \label{eq:m_ineq_k_P} \\ & P_{\text{gt},i} \geq \eta_{2,i} m_i^2 + \eta_{1,i} m_i + \eta_{0,i} - g_i^{-1}(P_{\text{b},i}) \label{eq:drv_ineq_k_P} \end{align} is obtained, since $g_i^{-1}(\cdot)$ is concave and $f_i(\cdot)$ is convex. With these modifications, and noting that the objective in (\ref{eq:min}) is equivalent to minimising $m_0-m_N$, the optimisation to be solved to determine the optimal power profile at the $k$th time step can be expressed as the convex problem: \begin{align} & \min_{\substack{P_{\text{gt}},\,P_{\text{b}},\,P_{\text{drv}}\\m,\,E,\,\omega_{\text{gt}},\,\omega_{\text{em}}}} & & m_{0} - m_N \label{eq:min2} \\ & \qquad \text{ s.t.} & & P_{\text{gt},i} \geq \eta_{2,i} m_{i}^2 + \eta_{1,i} m_{i} + \eta_{0,i} - g_i^{-1}(P_{\text{b},i}) \nonumber \\ & & & m_{i+1} \leq m_{i} - f_i (P_{\text{gt},i}) \, \delta \nonumber \\ & & & E_{i+1} = E_{i} - P_{\text{b},i}\, \delta \nonumber \\ & & & m_{0} = m(k\delta) \nonumber \\ & & & E_{0} = E(k\delta) \nonumber \\ & & & \underline{E} \leq E_{i} \leq \overline{E} \nonumber \\ & & & \underline{P}_{\text{gt}} \leq P_{\text{gt},i} \leq \overline{P}_{\text{gt}} \nonumber \\ & & & \underline{P}_{\text{b},i} \leq P_{\text{b},i} \leq \overline{P}_{\text{b},i} \nonumber \end{align} where 
$\underline{P}_{\text{b},i}=g_i(\underline{P}_{\text{em},i})$, $\overline{P}_{\text{b},i}=g_i(\overline{P}_{\text{em},i})$, and the constraints are imposed for $i=0,\ldots,N-1$. The form of the objective in (\ref{eq:min2}) ensures that any feasible solution that does not satisfy the constraints in (\ref{eq:m_ineq_k_P}) and (\ref{eq:drv_ineq_k_P}) with equality is suboptimal. Thus the solutions of (\ref{eq:min2}) and (\ref{eq:min}) are necessarily equal if (\ref{eq:min}) is feasible. \section{Numerical results} \label{sec:simulation_results} This section uses the optimisation problem \eqref{eq:min2} to construct an energy management case study involving a representative hybrid-electric passenger aircraft. Solutions of \eqref{eq:min2} were computed using the general purpose convex optimisation solver CVX~\citep{cvx}. Since the minimisation in (\ref{eq:min2}) is convex, convergence of the solver to a global optimum is ensured. \subsection{Simulation scenario}\label{sec:sim_setup} The parameters of the model used in simulations are collected in Table \ref{tab:param}. These are based on publicly available data for the BAe 146 aircraft. The propulsion system is assumed to consist of four gas turbines and electric motors, each with the hybrid-parallel configuration shown in Fig.~\ref{fig:propulsion}. \begin{table}[h!] 
\begin{tabular}{llll} \hline \textbf{Parameter} & \textbf{Symbol} & \textbf{Value} & \textbf{Units} \\ \hline Mass (MTOW) & $m$ & $42000$ & \si{kg} \\ \hline Gravity acceleration & $g$ & $9.81$ & \si{m.s^{-2}} \\ \hline Wing area & $S$ & $77.3$ & \si{m^2} \\ \hline Density of air & $\rho$ & $1.225$ & \si{kg.m^{-3}} \\ \hline \multirow{2}*{Lift coefficients} & $b_0$ & $0.43$ & \si{-} \\ & $b_1$ & $0.11$ & \si{deg^{-1}} \\ \hline \multirow{3}*{Drag coefficients} & $a_0$ & $0.029$ & \si{-} \\ & $a_1$ & $0.004$ & \si{deg^{-1}} \\ & $a_2$ & $5.3\mathrm{e}{-4}$ & \si{deg^{-2}}\\ \hline Angle of attack range & $\left[\underline{\alpha}; \overline{\alpha}\right]$ & $\left[-3.9; 10\right]$ & \si{deg} \\ \hline Fuel mass & $m_{\text{fuel}}$ & $8000$ & \si{kg} \\ \hline \multirow{2}*{Fuel map coefficients} & $\beta_0$ & $ 0.03$ & \si{kg.s^{-1}} \\ & $\beta_1$ & $0.08$ & $\si{kg.{MJ}^{-1}}$ \\ \hline Battery SOC range & $\left[\underline{E}; \overline{E}\right]$ & $\left[221; 939\right]$ & \si{MJ} \\ \hline Gas turbine power range & $\left[\underline{P}_{\text{gt}}; \overline{P}_{\text{gt}}\right]$ & $\left[0; 5\right]$ & \si{MW} \\ \hline Motor power range & $\left[\underline{P}_{\text{em}}; \overline{P}_{\text{em}}\right]$ & $\left[0; 2\right]$ & \si{MW} \\ \hline $\#$ of arrangements & $n$ & $4$ & \si{-} \\ \hline Flight time & $T$ & $3600$ & \si{s} \\ \hline \end{tabular} \caption{Model parameters.} \label{tab:param} \end{table} For the purposes of this study it is assumed that velocity and height profiles are known a priori as a result of the fixed flight plan entered prior to take-off. We consider an exemplary 1-hour flight at a true airspeed (TAS) of $190$\,$\si{m/s}$ for a typical 100-seat passenger aircraft. The flight path (height and velocity profile) is shown in Figure \ref{fig:profile}. 
\begin{figure} \caption{Height and velocity profiles for the mission.} \label{fig:profile} \end{figure} The electric loss map coefficients $\kappa_{2,i},\kappa_{1,i},\kappa_{0,i}$ can be estimated in two steps from these profiles. First, the shaft rotation speed, $\omega_i$ ($=\omega_{\text{gt},i} = \omega_{\text{em},i}$), is interpolated from a precomputed look-up table relating measured shaft rotation speed, altitude and drive power at a given Mach number (Fig.~\ref{fig:fanmap}). Then, the coefficients are interpolated from a precomputed record of losses in the electric motor as a function of rotation speed. This procedure requires the drive power $P_{\text{drv}}$ to be approximated \textit{a priori}, e.g.~by assuming constant aircraft mass for the duration of the flight. This assumption is supported by Figure~\ref{fig:alpha}, which shows that the electric map coefficients are almost identical for the estimated drive power profile and for the actual drive power profile computed retrospectively. We also find that the $\kappa_{2, i}$ coefficients are negligible for all $i$. \begin{figure} \caption{Electric loss map coefficients computed with estimated drive power and actual drive power.} \label{fig:alpha} \end{figure} \begin{figure} \caption{Contour plot relating drive power, altitude and non-dimensional rotation speed ($\Omega$) for a Mach number of $0.55$. The shaft rotation speed $\omega$ is obtained by rescaling $\Omega$.} \label{fig:fanmap} \end{figure} The gas turbine fuel map used in this study is approximately linear ($\beta_{2,i} \approx 0 \, \forall i$) and furthermore the fuel consumption does not depend significantly on shaft rotation speed. Therefore the fuel map coefficients are given in Table \ref{tab:param} as constants (i.e.\ $\beta_{1,i} = \beta_1, \, \beta_{0,i} = \beta_0 \, \forall i$). 
\subsection{Results} The mission is simulated with sampling interval $\delta=10$ $\si{s}$ over a one-hour shrinking horizon by solving the optimisation problem \eqref{eq:min2} at each time step and implementing the first element of the optimal power split sequence as an MPC law. The closed loop energy management control strategy is shown in Figure \ref{fig:Psplit}, which gives the power split for a single coupled gas turbine and electric motor. Clearly the constraints on the gas turbine and electric motor power are respected. The evolution of the battery SOC and fuel consumption are shown in Figure \ref{fig:SOC}. The upper plot illustrates that the constraints on SOC are respected and that the SOC reaches a minimum when the drive power becomes negative, as expected. The lower plot in Fig.~\ref{fig:SOC} shows that, as expected, the rate of fuel consumption is greater during the initial climb phase when the gas turbine power output is high. The fuel consumption recorded for this simulation is $F^{\ast}=1799$ $\si{kg}$. In comparison, a fully gas turbine-powered flight with the same initial total aircraft weight has a fuel consumption of $F_{\text{gt}}=2034$ $\si{kg}$. We note however that this reduction is achieved at the expense of reduced available payload as a result of the weight of the electric components of the powertrain (battery storage and electric motors). \begin{figure} \caption{MPC power split strategy obtained by solving (\ref{eq:min2}).} \label{fig:Psplit} \end{figure} \begin{figure} \caption{Closed loop evolution of SOC and fuel mass.} \label{fig:SOC} \end{figure} In order to evaluate the optimality of the power split solution, we compare it with the strategy of supplementing the gas turbine with the maximum electric motor power ($\overline{P}_{\text{em}}$) until the battery is fully depleted, then switching to a sustaining mode in which only the gas turbine operates. 
In hybrid vehicles this is known as a Charge-Depleting-Charge-Sustaining (CDCS) strategy~\citep{onori15}. Using this strategy the power split is as shown in Figure \ref{fig:CDCS} and the fuel consumption is $F_{\text{CDCS}} = 1858$ $\si{kg}$. \begin{figure} \caption{Power split with a CDCS strategy.} \label{fig:CDCS} \end{figure} To investigate the potential for windmilling (energy recovery when the net power demand is negative), the lower bound on electric motor power is set to $\underline{P}_{\text{em}}=-2$ $\si{MW}$, to allow transmission of power from the fan to the battery with the electric motor acting as a generator. The optimisation problem (\ref{eq:min2}) is also modified by introducing a terminal term in the objective function so as to maximise the SOC of the battery at the end of the flight: $J = m_0 - m_N - \lambda E_N.$ The coefficient $\lambda$ should be small to avoid adversely affecting the main objective of minimising fuel consumption. Solving with this modified objective and $\lambda = 0.1$ gives the results shown in Figures~\ref{fig:Psplit4} and \ref{fig:SOC4}. The windmilling effect can be seen at the end of the flight and is characterised by negative electrical power and battery recharge. \begin{figure} \caption{Optimal power split with windmilling.} \label{fig:Psplit4} \end{figure} \begin{figure} \caption{Battery state of charge with windmilling.} \label{fig:SOC4} \end{figure} \subsection{Discussion} Intuitively, the optimal power split strategy might be expected to consume as much fuel as possible at the beginning of the flight so as to reduce the aircraft mass, and thus reduce the drive power needed during level flight and descent. However, the MPC strategy maintains an almost constant electric power over the whole flight (Fig.~\ref{fig:Psplit}). This is explained by the relatively short flight duration and the characteristics of the aircraft model, as a result of which the change in total mass of the aircraft is relatively small (less than $5\%$). 
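For reference, the CDCS benchmark can be sketched in a few lines; the power limits, SOC window and horizon below are illustrative, and battery losses are ignored for simplicity:

```python
# Charge-depleting-charge-sustaining (CDCS) heuristic (illustrative values)
delta = 10.0                 # sampling interval [s]
P_em_max = 2e6               # electric motor power limit [W]
E_min, E0 = 221e6, 939e6     # usable SOC floor and initial SOC [J]

def cdcs_split(P_drv_profile):
    """Full electric assist until the SOC floor is reached, then gas only."""
    E, split = E0, []
    for P_drv in P_drv_profile:
        depleting = E - P_em_max * delta >= E_min
        P_em = min(P_em_max, max(P_drv, 0.0)) if depleting else 0.0
        split.append((P_drv - P_em, P_em))     # (P_gt, P_em)
        E -= P_em * delta                      # lossless battery in this sketch
    return split

split = cdcs_split([5e6] * 360)                # 1 h at a constant 5 MW demand
assert split[0][1] == P_em_max and split[-1][1] == 0.0
assert all(P_gt + P_em == 5e6 for P_gt, P_em in split)
```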
Despite this, the MPC strategy achieves a non-negligible reduction in fuel consumption ($3.2\%$) over the CDCS strategy. More radical optimal power split solutions are obtained if the change in aircraft mass during flight is more significant. In particular, the MPC strategy allocates more electrical power at the end of the flight if the gas turbine fuel consumption is increased. For example, Figure \ref{fig:Psplit2} shows the power split solution for a situation in which the rate of fuel consumption is increased so that the change in aircraft total mass during flight is 15\% (with all other simulation parameters unchanged). The fuel consumption for the CDCS strategy in this case is about $4\%$ higher than that of the MPC strategy. \begin{figure} \caption{Optimal power split for the case of a fuel map with an increased rate of fuel consumption.} \label{fig:Psplit2} \end{figure} \begin{figure} \caption{Optimal power split with gas turbine saturation.} \label{fig:Psplit3} \end{figure} We next consider the case that the upper limit on the gas turbine power output is reduced to $\overline{P}_{\text{gt}}=3$ $\si{MW}$. The MPC energy management strategy for this case is shown in Figure \ref{fig:Psplit3}. Here the power demand is such that the gas turbine is at maximum power while the aircraft climbs. As a result, the electric motor is needed to meet the total power output requirement while the gas turbine power output is saturated. The fuel consumption for this scenario is increased slightly (by $0.1\%$) since the electrical power is mostly used at the beginning of the flight to compensate for the limit on the gas turbine power output. \section{Conclusions} \label{sec:conclusion} This paper proposes a model predictive control law for energy management in hybrid-electric aircraft. The main contribution of the work is a convex formulation of the problem of minimising fuel consumption for a given future flight path. 
We provide a simulation study to illustrate the approach, and demonstrate that significant fuel savings can be achieved relative to heuristic strategies. The convexity of the formulation is crucial for computational tractability and is expected to be a basic requirement for verification by the aviation industry. Future work will consider the design of bespoke solvers. In particular, first order solution methods are expected to provide computational savings by exploiting the high degree of separability in the problem, while also being suitable for real-time implementation. The modelling approach described in this paper provides a framework for optimising system design, and future work will explore flight path optimisation and evaluate alternative hybrid propulsion configurations. \end{document}
\begin{document} \title{Horseshoes for $\mathcal{C}^{1+\alpha}$ mappings} \begin{abstract}We present a construction of horseshoes for any $\mathcal{C}^{1+\alpha}$ mapping $f$ preserving an ergodic hyperbolic measure $\mu$ with $h_{\mu}(f)>0$, and deduce that the exponential growth rate of the number of periodic points of such a mapping is greater than or equal to $h_{\mu}(f)$. We also prove that the exponential growth rate of the number of hyperbolic periodic points equals the hyperbolic entropy, by which we mean the entropy resulting from hyperbolic measures. \end{abstract} \section{Introduction} In this paper, we build horseshoes for $\mathcal{C}^{1+\alpha}$ mappings (not necessarily invertible) preserving ergodic hyperbolic measures with positive measure-theoretic entropy, and then prove that the exponential growth rate of the number of periodic points is greater than or equal to the measure-theoretic entropy. This research is a natural generalization of Katok's argument \cite{Kat}. We also prove that the exponential growth rate of the number of hyperbolic periodic points with ``large" Lyapunov exponents equals the hyperbolic entropy, i.e. the entropy resulting from hyperbolic measures. Horseshoes are prototypical examples of systems that exhibit complicated dynamical behavior while allowing the dynamics to be modeled by a shift map over a finite alphabet. Thus, the existence of horseshoes is an interesting problem. Let $M$ be a compact manifold of dimension 2 and $f:M\rightarrow M$ a $\mathcal{C}^{1+\alpha}$ diffeomorphism with positive entropy. Katok's argument shows that positive entropy implies the existence of horseshoes, and that the entropy of these horseshoes can approximate $h_{\mu}(f)$; that is, the horseshoes exhibit nearly the same dynamical complexity as the whole system.
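To illustrate, in the simplest case, the link between horseshoes, entropy and the growth of periodic points exploited below: if $f$ restricted to a horseshoe $\Lambda$ is conjugate to the full shift $\sigma$ on $\{0,1\}^{\mathbb{Z}}$, then the $n$-periodic points of $\sigma$ are exactly the periodic sequences of period $n$, so $$P_n(\sigma)=2^n\quad\text{and}\quad \lim_{n\rightarrow+\infty}\frac{1}{n}\log P_n(\sigma)=\log 2=h_{top}(\sigma);$$ hence the horseshoe alone already certifies the lower bound $\limsup_{n\rightarrow+\infty}\frac{1}{n}\log P_n(f)\geq h_{top}(\Lambda,f)$.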
One might expect Katok's argument to hold for endomorphisms with all Lyapunov exponents nonzero. Gelfert \cite{Gelfert} proved the existence of horseshoes for mappings with only positive Lyapunov exponents, under some integrability conditions used to control the effect of critical points and singular points. We generalize from this case to that of all Lyapunov exponents nonzero, without any integrability assumption on critical points (Theorem \ref{main theorem}). Moreover, we also control the Lyapunov exponents of periodic points in the horseshoe. After completing this paper, we came upon a paper by Y.M. Chung \cite{Chung} dealing with the same problem; however, the starting point of our proof is the shadowing lemma for sequences of maps, which differs from the idea used in the proof of \cite{Chung}. Also, \cite{Chung} does not give results on controlling the Lyapunov exponents of periodic points, which we use in the proof of Theorem \ref{hyperbolic growth} of this paper. Now let us state our main results. Let $M$ be a closed $d$-dimensional Riemannian manifold. \begin{Def}For any continuous map $T$ on a metric space $N$, {\bf the inverse limit space} $N^T$ of $(N, T)$ is the subset of $N^{\mathbb{Z}}$ consisting of all full orbits, i.e. $$N^{T}=\{\tilde{x}=(x_i)_{i\in \mathbb{Z}}\,|\,x_i\in N, Tx_i=x_{i+1}, \forall i\in\mathbb{Z}\}.$$ \end{Def} There is a natural metric defined by $$d(\tilde{x},\tilde{y})=\sum_{i=-\infty}^{i=+\infty}\frac{d(x_i,y_i)}{2^{|i|}}$$ (the series converges since $N$ is compact, hence bounded). Thus $N^T$ is a metric space, and since $\sum_{i\in\mathbb{Z}}2^{-|i|}=3$ this metric satisfies $3\max_{i}d(x_i,y_i)\geq d(\tilde{x},\tilde{y})\geq d(x_0,y_0)$. Let $\tilde{T}$ be the shift map $\tilde{T}((x_i)_{i\in\mathbb{Z}})=(x_{i+1})_{i\in\mathbb{Z}}$ on $N^{T}$. By Lemma \ref{EM}, the invariant measures of $(\tilde{T}, N^T)$ and those of $(T,N)$ are in one-to-one correspondence. Denote by $\tilde{\mu}$ the extension of $\mu$. This extension also preserves entropy, i.e.
$h_{\tilde{\mu}}(\tilde{T})=h_{\mu}(T).$ \begin{Def}Fix a continuous map $T$ on a metric space $N$. We say that $T:N\rightarrow N$ has a {\bf topological horseshoe} if there exists a $T$-invariant compact set $\Lambda$ such that the restriction of $T$ to $\Lambda$ is topologically conjugate to a subshift of finite type $\sigma:\mathcal{A}^{\mathbb{Z}}\rightarrow \mathcal{A}^{\mathbb{Z}}$. \end{Def} \begin{Thm}\label{main theorem} Let $f:M\rightarrow M$ be a $\mathcal{C}^{1+\alpha}$ mapping preserving an ergodic hyperbolic probability measure $\mu$ with entropy $h_{\mu}(f)>0$ and let $\tilde{\mu}$ be the extension of $\mu$ to the inverse limit space $M^f$. For any constant $\delta>0$ and any weak-$*$ neighborhood $\tilde{\mathcal{V}}$ of $\tilde{\mu}$ in the space of $\tilde{f}$-invariant probability measures, there exists a horseshoe $\tilde{H}\subset M^f$ such that: \begin{enumerate} \item $h_{top}(\tilde{H},\tilde{f})>h_{\tilde{\mu}}(\tilde{f})-\delta=h_{\mu}(f)-\delta$. \item if $\tilde{\lambda}_1>\tilde{\lambda}_2>\cdots>\tilde{\lambda}_k$ are the distinct Lyapunov exponents of $\mu$, with multiplicities $n_1,\cdots,n_k\geq 1$, and $\tilde{\lambda}=\min_i|\tilde{\lambda}_i|$, then there exists a dominated splitting of $T_{\tilde{x}}M:=\sqcup_{n} T_{\pi(\tilde{f}^n\tilde{x})}M$, $\tilde{x}\in \tilde{H}$, where $\sqcup$ denotes the disjoint union, $$T_{\tilde{x}}M=E^u\oplus E^s,$$ and there exists $N\geq1$ such that for each $\tilde{x}\in\tilde{H}$ and all unit vectors $v\in E^u(\pi(\tilde{x}))$ and $u\in E^s(\pi(\tilde{x}))$, $$||Df^{-N}_{\pi(\tilde{x})}(v)||\leq \exp((-\tilde{\lambda}+\delta)N),\qquad||Df^{N}_{\pi(\tilde{x})}(u)||\leq \exp((-\tilde{\lambda}+\delta)N).$$ \item all the invariant probability measures supported on $\tilde{H}$ lie in $\tilde{\mathcal{V}}.$ \item $\tilde{H}$ is $\delta$-close to the support of $\tilde{\mu}$ in the Hausdorff distance.
\end{enumerate} \end{Thm} \begin{Cor}Let $f:M\rightarrow M$ be a $\mathcal{C}^{1+\alpha}$ mapping preserving an ergodic hyperbolic probability measure $\mu$. We have $$\limsup_{n\rightarrow+\infty}\frac{1}{n}\log P_n(f)\geq h_{\mu}(f)$$ where $P_n(f)$ denotes the number of periodic points of period $n$. \end{Cor} Theorem \ref{hyperbolic growth} below is a generalization of a result by Chung and Hirayama \cite{Chung2}. They proved that the topological entropy of a $\mathcal{C}^{1+\alpha}$ surface diffeomorphism is given by the growth rate of the number of periodic points of saddle type. We prove here that for any $\mathcal{C}^{1+\alpha}$ mapping on a manifold of any dimension, the growth rate of the number of hyperbolic periodic points equals the entropy coming from hyperbolic measures, the hyperbolic entropy (see Definition \ref{hyperbolic entropy}). We point out that there is a similar result concerning topological pressure for diffeomorphisms in Gelfert and Wolf's paper \cite{GW2}. They proved that, for $\mathcal{C}^{1+\alpha}$ diffeomorphisms, the topological pressure of potentials admitting only hyperbolic equilibrium states is completely determined by the values of the potential on saddle periodic points with ``large" Lyapunov exponents. \begin{Def}\label{hyperbolic entropy}Let $f:M\rightarrow M$ be a $\mathcal{C}^{1+\alpha}$ mapping. Let $\mathcal{HM}$ be the set of hyperbolic ergodic invariant measures of $f$ and let $H(f)=\sup_{\mu\in \mathcal{HM}}h_{\mu}(f).$ We call $H(f)$ the {\bf hyperbolic entropy} of $f$. \end{Def} For surface diffeomorphisms, all ergodic invariant measures with positive entropy are hyperbolic. So hyperbolic entropy equals topological entropy for surface diffeomorphisms. \begin{Thm}\label{hyperbolic growth}Let $f:M\rightarrow M$ be a $\mathcal{C}^{1+\alpha}$ mapping on a closed Riemannian manifold $M$.
We have $$\limsup_{n\rightarrow+\infty}\frac{1}{n}\log P_n(f)\geq H(f).$$ Moreover, we have $$\lim_{a\rightarrow 0^{+}}\lim_{K\rightarrow 0^{+}}\limsup_{n\rightarrow \infty}\frac{1}{n}\log \sharp PH(n,f, K,a)=H(f),$$ where $PH(n,f,K,a)$ denotes the collection of periodic points of period $n$ with uniform $(K,a)$-hyperbolicity (see Definition \ref{PH}). \end{Thm} Now we give a short discussion of the main techniques used in this paper. As stated before, the starting point of our proof of Theorem \ref{main theorem} is the shadowing lemma for sequences of mappings. In a recent paper, Avila, Crovisier and Wilkinson \cite{ACW} give a compact proof of the shadowing lemma using the shadowing lemma for sequences of mappings, and then establish a direct way to find a horseshoe by coding a suitable separated set. Inspired by their ideas, we establish a shadowing property for the extension map $\tilde{f}$ on the inverse limit space $M^{f}$, which inherits many properties of the mapping $f$, and then construct horseshoes in the inverse limit space. Finally, we end this section with a short note about critical points. The key issue caused by critical points is the switching of the unstable and stable directions, which destroys hyperbolicity. This occurs, for example, in snap-back repellers. This phenomenon is a serious obstruction to the existence of absolutely continuous invariant measures (acim), and many works concerning the existence of acim take critical points into account carefully (\cite{ABV,Led,BM}). Nevertheless, by Ma\~{n}\'{e}'s multiplicative ergodic theorem, the derivatives along the unstable directions of almost all orbits in a Pesin block are isomorphisms. So, as far as the shadowing lemma and the construction of horseshoes are concerned, the only collapse that may happen is along the stable direction, and this does not affect the shadowing lemma.
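To keep a concrete example in mind (a standard linear computation, not needed in the sequel), let $f_A$ be the endomorphism of the torus $\mathbb{T}^d$ induced by an integer matrix $A$ with no eigenvalue of modulus one. Then $$P_n(f_A)=|\det(A^n-Id)|=\prod_{i=1}^{d}|\lambda_i^n-1|,\qquad \lim_{n\rightarrow+\infty}\frac{1}{n}\log P_n(f_A)=\sum_{|\lambda_i|>1}\log|\lambda_i|,$$ and the latter quantity is the entropy of the Haar measure, which is ergodic and hyperbolic; so for these linear models the lower bound of Theorem \ref{hyperbolic growth} is attained.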
\section{Preliminaries} \subsection{Inverse limit space} First we give the definition of regular Anosov mappings, to illustrate some dynamical differences between diffeomorphisms and mappings. \begin{Def}\cite{Przytycki}A regular map $f\in \mathcal{C}^1 (M,M)$ is an {\bf Anosov mapping} if there exist constants $C>0,0<\lambda<1$ and a Riemannian metric $<\cdot,\cdot>$ on $TM$ such that for every $f$-orbit $(x_n)_{n\in\mathbb{Z}}\in M^{f}$, there is a splitting $\sqcup^{+\infty}_{-\infty}T_{x_n}M=E^s_{(x_n)_{n\in\mathbb{Z}}}\oplus E^u_{(x_n)_{n\in\mathbb{Z}}}=\sqcup^{+\infty}_{-\infty}E^s_{x_n}\oplus E^u_{x_n}$ that is preserved by the derivative $Df$ and satisfies the conditions: \begin{enumerate} \item $||Df^n(v)||\leq C \lambda^n||v||$, for $v\in E^s, n\geq 0$; \item $||Df^n(v)||\geq C^{-1} \lambda^{-n}||v||$, for $v\in E^u, n\geq 0$. \end{enumerate} \end{Def} \begin{Rem} Notice that we do not ask for a splitting of the whole tangent bundle $TM=E^s\oplus E^u$. It may happen that $E^u_{(x_n)_{n\in\mathbb{Z}}}\neq E^u_{(y_n)_{n\in\mathbb{Z}}}$ even though $x_0=y_0$. In \cite{Przytycki} there is a construction of a mapping, close to an algebraic Anosov mapping, that has many different local unstable manifolds at a single point. This construction can also be seen as an explanation of the non-stability of Anosov mappings. As for $E^s$, the space $E^s_{(x_n)_{n\in\mathbb{Z}}}$ depends only on $x_0$. Of course, there are special systems for which $E^u_{x}$ does not depend on the orbits containing $x$. A classical example of such a mapping is any algebraic mapping of the torus, such as $\left[\begin{matrix} n & 1 \\ 1 & 1 \\\end{matrix}\right]$ for $n\geq 2.$ \end{Rem} The following theorem is the classical Oseledec theorem, a version of the multiplicative ergodic theorem for differentiable mappings. \begin{Thm}\label{Ose}Let $f$ be a $\mathcal{C}^1$ mapping on $M$.
Then there exists a Borel subset $G\subset M$ with $f(G)\subset G$ and $\mu(G)=1$ for any $\mu\in \mathcal{M}_{inv}(M)$, such that the following properties hold. \begin{enumerate}\item There is a measurable integer function $r:G\rightarrow \mathbb{Z}^+$ with $r\circ f=r.$ \item For any $x\in G$, there are real numbers $$+\infty>\lambda_1(x)>\lambda_2(x)>\cdots>\lambda_{r(x)}(x)\geq-\infty,$$ where $\lambda_{r(x)}(x)$ could be $-\infty.$ \item If $x\in G,$ there are linear subspaces $$V^0(x)=T_xM\supset V^1(x)\supset\cdots\supset V^{r(x)}(x)=\{0\}$$ of $T_xM.$ \item If $x\in G$ and $1\leq i\leq r(x),$ then $$\lim_{n\rightarrow \infty}\frac{1}{n}\log |D_xf^n\xi|=\lambda_i(x)$$ for all $\xi\in V^{i-1}(x)\backslash V^i(x).$ Moreover, $$\lim_{n\rightarrow \infty}\frac{1}{n}\log|\det (D_xf^n)|=\sum_{i=1}^{r(x)}\lambda_i(x)m_i(x),$$ where $m_i(x)=\dim V^{i-1}(x)-\dim V^i(x)$ for all $1\leq i\leq r(x)$. \item $\lambda_i(x)$ is measurably defined on $\{x\in G|r(x)\geq i\}$ and $f$-invariant, i.e. $\lambda_i(fx)=\lambda_i(x).$ \item $D_xf(V^i(x))\subset V^i(f(x))$ if $i\geq 0$. \end{enumerate} The numbers $\{\lambda_i(x)\}_{i=1}^{r(x)}$ defined above are called the Lyapunov exponents of $f$ at the point $x$, and $m_i(x)$ is called the multiplicity of $\lambda_i(x)$. \end{Thm} \begin{Rem}By Oseledec's theorem \ref{Ose} for maps, we only have a filtration-type structure on the tangent space, which rules out many of the techniques of Pesin theory. Thus, only stable manifolds are well defined for mappings. Nevertheless, by Pugh and Shub's theorem (Proposition \ref{Oseledec} below), we can find a full measure set of orbits along which there exist well-defined invariant unstable manifolds. It is worth noting that what underlies this fact is the multiplicative ergodic theorem for non-invertible maps, due to Ruelle and Ma\~{n}\'{e} \cite{Rue, Mane}. \end{Rem} It is a common idea to consider the inverse limit space for mappings.
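A basic example to bear in mind: for the doubling map $f(x)=2x \,(\mathrm{mod}\, 1)$ on the circle $S^1$, the inverse limit $(S^1)^f$ is the dyadic solenoid. Each $x_0\in S^1$ has two preimages, so uncountably many full orbits $\tilde{x}$ project to the same $x_0$; moreover $(S^1)^f$ is a compact connected space which is not a manifold, and on it the shift $\tilde{f}$ is a homeomorphism even though $f$ itself is two-to-one.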
Let $\tilde{f}: M^f\rightarrow M^f$ be the induced map, where $\tilde{f}$ is the shift map. Let $\pi:M^f\rightarrow M$ be naturally defined by $\pi((x_n)_{n\in\mathbb{Z}})=x_0$; then $\pi\circ \tilde{f}=f\circ\pi$. The tangent bundle $TM$ of $M$ pulls back to a bundle $TM^f$ over $M^f$, and $Df$ extends to $D\tilde{f}$; i.e., $D\tilde{f}$ is a continuous bundle mapping, linear on each fiber, covering the homeomorphism $\tilde{f}$ of the compact base $M^f$. \begin{Def}\label{Coc}Let $f$ be a $\mathcal{C}^{1+\alpha}$ mapping on a compact manifold $M$ and let $\tilde{f}$, $M^f$ be defined as above. Let $L(d,\mathbb{R})$ denote the set of $d\times d$ matrices over $\mathbb{R}$. For any measurable function $A:M^f\rightarrow L(d,\mathbb{R})$, let $\mathcal{A}:M^f\times\mathbb{Z}\rightarrow L(d,\mathbb{R})$ be defined by $$\mathcal{A}(\tilde{x},m)=A(\tilde{f}^{m-1}(\tilde{x}))\cdots A(\tilde{x})\,\, for\,\, m>0,\qquad \mathcal{A}(\tilde{x},0)=Id,$$ $$\mathcal{A}(\tilde{x},m)=A(\tilde{f}^m\tilde{x})^{-1}\cdots A(\tilde{f}^{-1}\tilde{x})^{-1}\,\, for\,\, m<0. $$ Then it follows that \begin{equation}\label{cocycles}\mathcal{A}(\tilde{x},m+k)=\mathcal{A}(\tilde{f}^k(\tilde{x}),m)\mathcal{A}(\tilde{x},k). \end{equation} We call $\mathcal{A}:M^f\times \mathbb{Z}\rightarrow L(d,\mathbb{R})$ a {\bf measurable linear cocycle} over $f$, or simply a cocycle. \end{Def} Thus, there is a natural measurable cocycle over $f$ associated with $D\tilde{f}$. We abuse notation and write $D\tilde{f}: M^f\times \mathbb{Z}\rightarrow L(d,\mathbb{R})$, defined as follows. \begin{Def}The measurable cocycle $D\tilde{f}$ over $f$ is defined by $$ D\tilde{f}^m(\tilde{x})= \begin{cases} D_{x_0}f^m=D_{x_{m-1}}f\circ\cdots\circ D_{x_0}f, & \text{if }m>0; \\ Id, & \text{if }m=0;\\ (D_{x_m}f^{-m})^{-1}=(D_{x_m}f)^{-1}\circ\cdots\circ(D_{x_{-1}}f)^{-1}, & \text{if }m<0. \end{cases} $$ \end{Def} \begin{Rem}We should notice that the inverse limit space is not a manifold.
It is just a topological space, usually infinite dimensional, carrying the linear cocycle $D\tilde{f}$; and although $D\tilde{f}$ is not the derivative of $\tilde{f}$, it is a linear cocycle over $\tilde{f}$. \end{Rem} Invariant measures on $M^f$ can be projected down to invariant measures on $M$ by the projection $\pi$. The following lemma says that $\mathcal{M}_{inv}(M^f)$ is equivalent to $\mathcal{M}_{inv}(M)$. \begin{Lem}\cite{MXZ}\label{EM}Let $T$ be a continuous map on $M$. For any $T$-invariant Borel probability measure $\mu$ on $M$, there exists a unique $\tilde{T}$-invariant Borel probability measure $\tilde{\mu}$ on $M^T$ such that $\pi_{*}\tilde{\mu}=\mu$. Moreover, $h_{\tilde{\mu}}(\tilde{T})=h_{\mu}(T)$. \end{Lem} \begin{Prop}\cite{Shub1}\label{Oseledec}For any invariant measure $\mu$ of $f:M\rightarrow M$, there exists a Borel set $\tilde{\Lambda}\subset M^f$, such that \begin{enumerate} \item$\tilde{f} \tilde{\Lambda} =\tilde{\Lambda}$, \item $\tilde{\mu}(\tilde{\Lambda})=1$, \item for every $\tilde{x}=\{x_n\}_{n\in \mathbb{Z} }\in\tilde{\Lambda}$, there are splittings of the tangent space $T_{x_n}M$, $$T_{x_n}M=E_1(x_n)\oplus E_2(x_n)\oplus\cdots\oplus E_{\tilde{r}(\tilde{x})}(x_n)\oplus F_{\infty}(x_n)$$ and numbers $\infty>\tilde{\lambda}_1(\tilde{x})\geq\tilde{\lambda}_2(\tilde{x})\geq\cdots\geq\tilde{ \lambda}_{\tilde{r}(\tilde{x})} (\tilde{x})>-\infty$ and $\tilde{m}_i(\tilde{x})$, satisfying the following properties: \begin{enumerate} \item $D_{x_n}f|E_i(x_n)$ is an isomorphism, $\forall n\in \mathbb{Z}$. \item $\tilde{r}(\cdot),\tilde{\lambda}(\cdot)$ and $\tilde{m}(\cdot)$ are measurable and $\tilde{f}$-invariant, i.e.
$$\tilde{r}(\tilde{f}(\tilde{x}))=\tilde{r}(\tilde{x}),\,\,\tilde{\lambda}_i(\tilde{f}(\tilde{x}))=\tilde{\lambda}_i(\tilde{x})\,\, and\,\, \tilde{m}_i(\tilde{f}(\tilde{x}))=\tilde{m}_i(\tilde{x})$$ for each $i=1,\cdots,\tilde{r}(\tilde{x}).$ \item $\dim E_i(x_n)=\tilde{m}_i(\tilde{x})$ for all $n\in\mathbb{Z}$ and $1\leq i\leq \tilde{r}(\tilde{x}).$ \item $$\lim_{n\rightarrow +\infty}\frac{1}{n}\log ||D\tilde{f}^n|_{F_{\infty}(x_0)}||=-\infty.$$ \item $$\lim_{n\rightarrow \pm\infty}\frac{1}{n}\log |D\tilde{f}^n(v)|=\tilde{\lambda}_i(\tilde{x}),$$ for all $0\neq v\in E_{i}(x_0), 1\leq i\leq \tilde{r}(\tilde{x}).$ \item If $0\neq v_n\in F_{\infty}(x_n)$ and there are $v_m\in F_{\infty}(x_m)$ for $m<n$ such that $Df^{n-m}(v_m)=v_n$, then $\lim_{m\rightarrow -\infty}\log |v_m|=\infty.$ \item $F_{\infty}(x_n)=K(x_n)\oplus G_{\infty}(x_n)$, where $D_{x_n}f^{i}|_{K(x_n)}$ is identically $0$ for some $i$ and $Df|_{G_{\infty}(x_n)}$ is an isomorphism. \item The splitting is measurable with respect to $\tilde{x}$ and the angles between any two associated subspaces vary sub-exponentially under iteration, i.e. $$\lim_{n\rightarrow \pm \infty}\frac{1}{n}\log\angle(E_i(x_n),E_j(x_n))=0,1\leq i,j\leq\tilde{r}(\tilde{x}) \text{ and}$$ $$\lim_{n\rightarrow \pm \infty}\frac{1}{n}\log\angle(E_i(x_n),F_{\infty}(x_n))=0, 1\leq i\leq \tilde{r}(\tilde{x}).$$ \end{enumerate} \end{enumerate} \end{Prop} Although we do not use it in this paper, we state a result from \cite{Shub1} about the existence of unstable manifolds along orbits. Let $f:M\rightarrow M$ be a $\mathcal{C}^{1+\beta}$ mapping and let $\mu$ be an invariant measure for $f$ with no zero exponent; then for almost all full orbits of $f$ there are stable and unstable disc families which are Borel, vary sub-exponentially along orbits, and are invariant. \begin{Rem}It is worth noting that understanding the dynamics on the inverse limit space is quite different from understanding the original map.
For example, it is well known that a non-invertible mapping on a compact manifold is in general not structurally stable unless it is expanding \cite{Przytycki}. Even so, for Anosov mappings, the dynamical structure of the orbit space is stable under $\mathcal{C}^1$-small perturbations \cite{Liu1}\cite{Mane2}. \end{Rem} \subsection{Shadowing lemma for sequences of mappings} A lot of shadowing problems can be reduced to the following ``abstract" shadowing problem. Let $H_k$ be a sequence of Banach spaces ($k\in\mathbb{Z}$ or $k\in\mathbb{Z^{+}}$); we denote by $|\cdot|$ the norms in $H_k$ and by $||\cdot||$ the corresponding operator norms for linear operators. Let us emphasize that the spaces $H_k$ are not assumed to be isomorphic. Consider a sequence of mappings $$\phi_k:H_k\rightarrow H_{k+1}$$ of the form $$\phi_k(v)=A_kv+w_{k+1}(v),$$ where the $A_k$ are linear mappings. It is assumed that the values $|\phi_k(0)|$ are uniformly small, say, $|\phi_k(0)|\leq d.$ We are looking for a sequence $v_k\in H_k$ such that $\phi_k(v_k)=v_{k+1}$ and the values $|v_k|$ are uniformly small; for example, such that the inequalities $$\sup_{k}|v_k|\leq Ld$$ hold with a constant $L$ independent of $d$.
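Before stating the theorem, it may help to work out the simplest linear model: take $H_k=\mathbb{R}^2$ with the max norm, $A_k\equiv \mathrm{diag}(b,a)$ with $b>1>a>0$, and $w_{k+1}(v)\equiv c_{k+1}$ constant with $|c_{k+1}|\leq d$. The unique bounded solution of $v_{k+1}=A_kv_k+c_{k+1}$ is obtained by summing forward along the stable direction and backward along the unstable one, $$v^{s}_k=\sum_{j\geq 0}a^{j}c^{s}_{k-j},\qquad v^{u}_k=-\sum_{j\geq 1}b^{-j}c^{u}_{k+j},$$ which gives $\sup_k|v_k|\leq \max\{\tfrac{1}{1-a},\tfrac{1}{b-1}\}d$, a bound of the required form $\sup_k|v_k|\leq Ld$ with $L$ independent of $d$. The theorem below extracts the structure (projectors, contraction, right inverses on the unstable part) that makes this computation work in general.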
\begin{Thm}\cite{Pily} Assume that \begin{enumerate} \item there exist numbers $\lambda\in(0,1), N\geq 1,$ and projectors $P_k,Q_k:H_k\rightarrow H_k$ such that, with $S_k=P_kH_k$ and $U_k=Q_kH_k$, \begin{enumerate} \item $||P_k||,||Q_k||\leq N, P_k+Q_k=I;$ \item $||A_k|_{S_k}||\leq \lambda, A_kS_k\subset S_{k+1};$ \end{enumerate} \item if $U_{k+1}\neq\{0\}$, then there exist linear mappings $B_k:U_{k+1}\rightarrow U_k$ such that $$B_kU_{k+1}\subset U_k, ||B_k||\leq \lambda, A_kB_k|_{U_{k+1}}=I;$$ \item there exist numbers $\kappa,\Delta>0$ such that the inequalities $$|w_{k+1}(v)-w_{k+1}(v')|\leq \kappa|v-v'|\text{ for } |v|,|v'|\leq \Delta$$ and $\kappa N_1<1$ hold, where $N_1=N\frac{1+\lambda}{1-\lambda}.$ \end{enumerate} Set $L=\frac{N_1}{1-\kappa N_1}, d_0=\frac{\Delta}{L}.$ If for a sequence of mappings $\phi_k$ we have $|\phi_k(0)|\leq d \leq d_0$, then there exist points $v_k\in H_k$ such that $\phi_k(v_k)=v_{k+1}$ and $|v_k|\leq Ld$. \end{Thm} We give sufficient conditions for the uniqueness of the sequence $\{v_k\}$. \begin{Thm}\cite{Pily} \label{Shadowing}Assume that \begin{enumerate} \item there exist numbers $\lambda\in(0,1), N\geq 1,$ and projectors $P_k,Q_k:H_k\rightarrow H_k$ such that, with $S_k=P_kH_k$ and $U_k=Q_kH_k$, \begin{enumerate} \item $||P_k||,||Q_k||\leq N, P_k+Q_k=I;$ \item $||A_k|_{S_k}||\leq \lambda, A_kS_k\subset S_{k+1};$ \end{enumerate} \item $A_kU_k\subset U_{k+1}$ and $|A_kv|\geq\frac{1}{\lambda}|v|$ for all $v\in U_k$; \item there exist numbers $\kappa_0,\Delta>0$ such that $$|w_{k+1}(v)-w_{k+1}(v')|\leq \kappa_0|v-v'|\text{ for } |v|,|v'|\leq \Delta$$ and for $\kappa=N\kappa_0$ the inequalities $$\lambda+2\kappa<1, \quad\frac{1}{\lambda}-2\kappa\geq \gamma>1, \quad\frac{\lambda}{\gamma}+\frac{2\kappa}{\gamma}<1$$ are fulfilled. \end{enumerate} Then the relations $$\phi_k(v_k)=v_{k+1},\phi_k(u_k)=u_{k+1}, |v_k|,|u_k|\leq \Delta,k\in\mathbb{Z},$$ imply that $v_k=u_k,k\in\mathbb{Z}.$ \end{Thm} \section{Construction of Horseshoe} In this section, we focus our attention on the set $\tilde{\Lambda}$ given in Proposition \ref{Oseledec}.
\begin{Def}Let $\mu$ be an ergodic hyperbolic probability measure for a $\mathcal{C}^1$ mapping $f: M\rightarrow M$. Correspondingly, we have the inverse limit space $M^f$, the shift map $\tilde{f}$ and the ergodic measure $\tilde{\mu}$ corresponding to $\mu$. A compact set $\tilde{\Lambda}(\eta) \subset \tilde{\Lambda}$ of positive measure is called an {\bf $\eta$-uniformity block} for $\mu$ (with tolerance $\eta>0$) if there exist $K>0$ and a measurable map $C_{\eta}:\tilde{\Lambda}\rightarrow GL(d,\mathbb{R})$ which is continuous on the subset $\tilde{\Lambda}(\eta)$ such that: \begin{enumerate} \item$\max\{||C^{-1}_{\eta}(\tilde{f}^n(\tilde{x}))||,||C_{\eta}(\tilde{f}^n(\tilde{x}))||\}< K\exp (\eta |n|)$, for each $\tilde{x}\in\tilde{\Lambda}$ and $n\in\mathbb{Z}.$ \item Denote $\tilde{\lambda}^{+}= \min\{\tilde{\lambda}_i>0\}$, $\tilde{\lambda}^{-}= \max\{\tilde{\lambda}_i<0\}$ and $\tilde{\lambda}=\min\{\tilde{\lambda}^+,-\tilde{\lambda}^{-}\}$, where $\tilde{\lambda}_1>\tilde{\lambda}_2>\cdots>\tilde{\lambda}_k>0>\tilde{\lambda}_{k+1}>\cdots>\tilde{\lambda}_{s}$ are the distinct Lyapunov exponents of $\mu$, with multiplicities $n_1,\cdots,n_s\geq1$; then there exist $A_{1}=A_{1}(\tilde{x})\in GL(\sum_{1}^{k}n_i,\mathbb{R})$ and $A_{2}=A_{2}(\tilde{x})\in L(\sum_{k+1}^{s}n_i,\mathbb{R})$ such that $$||A_{1}(\tilde{x})^{-1}||^{-1}\geq e^{\tilde{\lambda}-\eta}, \,\,\, ||A_{2}(\tilde{x})||\leq e^{-\tilde{\lambda}+\eta}$$ and $C_{\eta}(\tilde{f}(\tilde{x}))\cdot Df(x_0)\cdot C_{\eta}^{-1}(\tilde{x})= diag(A_{1}(\tilde{x}), A_{2}(\tilde{x}))$. \end{enumerate} \end{Def} \begin{Rem}$C_{\eta}(\tilde{x})$ is a change of coordinates mapping the splitting $E^u(\tilde{x})\oplus E^s(\tilde{x})$ to the standard splitting $\mathbb{R}^{d_u}\oplus \mathbb{R}^{d_s}$, where $d_u$ and $d_s$ denote the dimensions of the unstable and stable bundles respectively. In these new coordinates, the derivative $Df$ exhibits enough hyperbolicity already at the first iteration.
So the norm of $C_{\eta}(\tilde{x})$ is determined by the angle between $E^s(\tilde{x})$ and $E^u(\tilde{x})$, together with how large $N$ must be taken for $||Df^N(v)||$ to exhibit enough hyperbolicity, which is determined by $K,\eta$ and $||Df||$. \end{Rem} \begin{Def}A $\mu$-measurable map $C:M^f\rightarrow GL(d,\mathbb{R})$ is said to be {\bf tempered} with respect to $\tilde{f}$, or simply tempered, if for $\mu$-almost every $\tilde{x}\in M^f$ $$\lim_{n\rightarrow +\infty}\frac{1}{n}\log ||C^{\pm}(\tilde{f}^n(\tilde{x}))||=0.$$ \end{Def} The following lemma is a technical but crucial lemma in smooth ergodic theory. \begin{Lem}\label{Tempering} \cite{KatMe} (Tempering-Kernel Lemma) Let $f:X\rightarrow X$ be a measurable transformation. If $K:X\rightarrow\mathbb{R}$ is a positive measurable tempered function, then for any $\epsilon>0$ there exists a positive measurable function $K_{\epsilon}:X\rightarrow\mathbb{R}$ such that $K(x)\leq K_{\epsilon}(x)$ and $$e^{-\epsilon}\leq \frac{K_{\epsilon}(f(x))}{K_{\epsilon}(x)}\leq e^{\epsilon}.$$ \end{Lem} \begin{Thm}\label{OPReduction}{\bf(Oseledec-Pesin $\epsilon$-Reduction theorem for mappings)} Suppose that $D\tilde{f}:\tilde{\Lambda}\rightarrow L(d,\mathbb{R})$ is the measurable cocycle over the shift $\tilde{f}:M^f\rightarrow M^f$, where $\tilde{\Lambda}$ is the full measure set given in Proposition \ref{Oseledec}.
Then there exist a measurable $\tilde{f}$-invariant function $r:\tilde{\Lambda} \rightarrow \mathbb{N}$ and numbers $\lambda_1(\tilde{x}),\cdots,\lambda_{r(\tilde{x})}(\tilde{x})\in\mathbb{R}$ and $l_1(\tilde{x}),\cdots,l_{r(\tilde{x})}(\tilde{x})\in\mathbb{N}$ depending only on $\tilde{x}$ with $\sum l_i(\tilde{x})=d$, such that for every $\epsilon>0$ there exists a tempered map $$C_{\epsilon}:\tilde{\Lambda}\rightarrow GL(d,\mathbb{R})$$ such that for almost every $\tilde{x}=\{x_i\}_{i\in\mathbb{Z}}\in \tilde{\Lambda}$ the cocycle $A_{\epsilon}(\tilde{x})=C_{\epsilon}(\tilde{f}(\tilde{x}))Df(x_{0})C^{-1}_{\epsilon}(\tilde{x})$ has the Lyapunov block form \[A_{\epsilon}(\tilde{x})=\begin{pmatrix} \ A^1_{\epsilon}(\tilde{x})\\ &\ A^2_{\epsilon}(\tilde{x}) \end{pmatrix},\] where $A^{1}_{\epsilon}(\tilde{x})$ is a $\sum_{j=1}^kl_j(\tilde{x})\times \sum_{j=1}^kl_j(\tilde{x})$ matrix, $A^{2}_{\epsilon}(\tilde{x})$ is a $\sum_{j=k+1}^r l_j(\tilde{x})\times\sum_{j=k+1}^r l_j(\tilde{x})$ matrix, and, with $\lambda(\tilde{x})=\min_i|\lambda_i(\tilde{x})|$, \[||(A^{1}_{\epsilon}(\tilde{x}))^{-1}||^{-1}\geq e^{\lambda(\tilde{x})-\epsilon} , ||A^{2}_{\epsilon}(\tilde{x})||\leq e^{-\lambda(\tilde{x})+\epsilon}.\] \end{Thm} \begin{proof}This result follows directly from the same proof as in the diffeomorphism case (see Theorem S.2.10 of \cite{KatMe}). We give a sketch here. If $\tilde{x}\in \tilde{\Lambda}$ then $T_{x_0}M=E_1(x_0)\oplus E_2(x_0)$, where $E_1(x_0)$ is the expanding part and $E_2(x_0)$ is the contracting part.
For $\epsilon>0$, define new scalar products on $E_1(x_0)$ and $E_2(x_0)$ as follows. If $u,v\in E_1(x_0)$ then $$<u,v>'_{\tilde{x},1}:=\sum_{m=0}^{+\infty}<D\tilde{f}^{-m}|_{E^1}(\tilde{x})u, D\tilde{f}^{-m}|_{E^1}(\tilde{x})v>e^{2m\lambda(\tilde{x})}e^{-2\epsilon m},$$ where $<\cdot,\cdot>$ denotes the standard scalar product on $\mathbb{R}^d;$ if $u,v\in E_2(x_0)$ then $$<u,v>'_{\tilde{x},2}:=\sum_{m=0}^{+\infty}<D\tilde{f}^{m}|_{E^2}(\tilde{x})u, D\tilde{f}^{m}|_{E^2}(\tilde{x})v>e^{2m\lambda(\tilde{x})}e^{-2\epsilon m}.$$ Now, according to the definition of the Lyapunov exponents $\tilde{\lambda}_i(\tilde{x}),$ for each $\tilde{x}\in\tilde{\Lambda}$ and $\epsilon>0$ there exists a constant $C(\tilde{x},\epsilon)$ such that $$||D\tilde{f}^{-m}(\tilde{x})v||\leq C(\tilde{x},\epsilon)e^{-m\lambda(\tilde{x})}e^{\epsilon m/2}||v||,\forall m\in \mathbb{N},\, v\in E^1;$$ $$||D\tilde{f}^m(\tilde{x})v||\leq C(\tilde{x},\epsilon)e^{-m\lambda(\tilde{x})}e^{\epsilon m/2}||v||,\forall m\in \mathbb{N},\, v\in E^2;$$ therefore $<v,v>'_{\tilde{x},i}\leq C^2(\tilde{x},\epsilon)||v||^2\sum_{m\in\mathbb{N}}e^{-m\epsilon},$ which implies that $<\cdot,\cdot>'_{\tilde{x},i}$ is well defined.
We recall that $D\tilde{f}^{m+1}(\tilde{x})=D\tilde{f}^{m}(\tilde{f}\tilde{x})Df(x_0)$ for all $m\in\mathbb{Z}$, whence \begin{eqnarray*} <Df(x_0)v,Df(x_0)v>'_{\tilde{f}\tilde{x},1}&=&\sum_{m=0}^{+\infty}||D\tilde{f}^{-m}(\tilde{f}(\tilde{x}))(Df(x_0)v)||^2e^{2\lambda(\tilde{x})m}e^{-2\epsilon m}\\ &=&\sum_{m=0}^{+\infty}||D\tilde{f}^{-m+1}(\tilde{x})v||^2e^{2\lambda(\tilde{x})m}e^{-2\epsilon m}\\ &\geq& e^{2(\lambda(\tilde{x})-\epsilon)}\sum^{+\infty}_{m=0}||D\tilde{f}^{-m}(\tilde{x})v||^2e^{2\lambda(\tilde{x})m}e^{-2\epsilon m}\\ &=&e^{2(\lambda(\tilde{x})-\epsilon)}<v,v>'_{\tilde{x},1}, \end{eqnarray*} for $v\in E^1$. So we have \begin{equation}\label{norm} \frac{<Df(x_0)v,Df(x_0)v>'_{\tilde{f}\tilde{x},1}}{<v,v>'_{\tilde{x},1}}\geq e^{2(\lambda(\tilde{x})-\epsilon)},\quad\forall\, 0\neq v\in E^1. \end{equation} Similarly, we have \begin{eqnarray*} <Df(x_0)v,Df(x_0)v>'_{\tilde{f}\tilde{x},2}&=&\sum_{m=0}^{+\infty}||D\tilde{f}^{m}(\tilde{f}(\tilde{x}))(Df(x_0)v)||^2e^{2\lambda(\tilde{x})m}e^{-2\epsilon m}\\ &=&\sum_{m=0}^{+\infty}||D\tilde{f}^{m+1}(\tilde{x})v||^2e^{2\lambda(\tilde{x})m}e^{-2\epsilon m}\\ &\leq& e^{2(-\lambda(\tilde{x})+\epsilon)}\sum^{+\infty}_{m=0}||D\tilde{f}^{m}(\tilde{x})v||^2e^{2\lambda(\tilde{x})m}e^{-2\epsilon m}\\ &=&e^{2(-\lambda(\tilde{x})+\epsilon)}<v,v>'_{\tilde{x},2}, \end{eqnarray*} for $v\in E^2$. So we have \begin{equation}\label{norm2} \frac{<Df(x_0)v,Df(x_0)v>'_{\tilde{f}\tilde{x},2}}{<v,v>'_{\tilde{x},2}}\leq e^{2(-\lambda(\tilde{x})+\epsilon)},\quad\forall\, 0\neq v\in E^2. \end{equation} To extend the scalar product to $T_{x_0}M$, consider $$<u,v>'_{\tilde{x}}=\sum^{2}_{i=1}<u_i,v_i>'_{\tilde{x},i},$$ where $u_i,v_i$ are the projections of $u,v$ to $E^i(\tilde{x})$.
Now let $C_{\epsilon}(\tilde{x})$ be the positive symmetric matrix such that if $u,v\in T_{x_0}M$ then $$<u,v>'_{\tilde{x}}=<C_{\epsilon}(\tilde{x})u,C_{\epsilon}(\tilde{x})v>$$ and define $$A_{\epsilon}(\tilde{x})=C_{\epsilon}(\tilde{f}(\tilde{x}))Df(x_0)C^{-1}_{\epsilon}(\tilde{x}).$$ Thus if $u,v\in E_i(\tilde{x}),$ then \begin{eqnarray*} <Df(x_0)u,Df(x_0)v>'_{\tilde{f}\tilde{x},i}&=&<C_{\epsilon}(\tilde{f}\tilde{x})Df(x_0)u,C_{\epsilon}(\tilde{f}\tilde{x})Df(x_0)v>\\ &=&<A_{\epsilon}(\tilde{x})C_{\epsilon}(\tilde{x})u,A_{\epsilon}(\tilde{x})C_{\epsilon}(\tilde{x})v>. \end{eqnarray*} Combining this with inequalities (\ref{norm}) and (\ref{norm2}), we get \[||(A^{1}_{\epsilon}(\tilde{x}))^{-1}||^{-1}\geq e^{\lambda(\tilde{x})-\epsilon} , ||A^{2}_{\epsilon}(\tilde{x})||\leq e^{-\lambda(\tilde{x})+\epsilon}.\] It remains to prove that $C_{\epsilon}(\tilde{x})$ is tempered. Since the angles between the different subspaces satisfy a subexponential lower estimate by Proposition \ref{Oseledec}, it is enough to consider just the block matrices. Set $B_N:=\{\tilde{x}\in\tilde{\Lambda}:||C_{\epsilon}^{\pm}(\tilde{x})||<N\}.$ For $N>0$ large enough, by the Poincar\'{e} Recurrence Theorem there exists a set $Y\subset B_N$ such that $\tilde{\mu}(B_N\backslash Y)=0$ and the orbit of every $\tilde{y}\in Y$ returns to $Y$ infinitely many times. Thus let $m_k$ be a sequence such that $\tilde{f}^{m_k}(\tilde{y})\in Y$ for all $k$. Then $$||A_{\epsilon}(\tilde{y},m_k)||=||A_{\epsilon}(\tilde{f}^{m_k}\tilde{y})\cdots A_{\epsilon}(\tilde{y})||\leq ||C_{\epsilon}(\tilde{f}^{m_k}\tilde{y})||\,||D\tilde{f}^{m_k}(\tilde{y})||\,||C^{-1}_{\epsilon}(\tilde{y})||,$$ and therefore for almost every point $\tilde{y}\in Y$ the spectra of $A_{\epsilon}$ and $D\tilde{f}$ are the same.
Since $N$ is chosen arbitrarily, this is true for almost every $\tilde{x}\in\tilde{\Lambda}.$ Observe that $$C^{-1}_{\epsilon}(\tilde{f}^n\tilde{x})=D\tilde{f}^n(\tilde{x})C^{-1}_{\epsilon}(\tilde{x})(A_{\epsilon}(\tilde{x},n))^{-1}\text{ and } D\tilde{f}^n(\tilde{x})=C_{\epsilon}^{-1}(\tilde{f}^n\tilde{x})A_{\epsilon}(\tilde{x},n)C_{\epsilon}(\tilde{x}),$$ so by taking growth rates in both equations we find that $$\lim_{n\rightarrow\infty}\frac{1}{n}\log ||C^{\pm}_{\epsilon}(\tilde{f}^n\tilde{x})||=0$$ for all $\tilde{x}\in\tilde{\Lambda}$ for which $A_{\epsilon}$ and $D\tilde{f}$ have the same spectrum. \end{proof} \begin{Thm} If $f:M\rightarrow M$ is a $\mathcal{C}^1$ mapping and $\mu$ is an ergodic hyperbolic probability measure for $f$, then for every $\eta >0$ there exists a uniformity block $\tilde{\Lambda}(\eta)$ of tolerance $\eta$ for $\mu$. Moreover, $\tilde{\Lambda}(\eta)$ can be chosen to have measure arbitrarily close to 1 with a suitable choice of $K$. \end{Thm} \begin{proof}This follows by applying the Oseledec-Pesin Reduction Theorem \ref{OPReduction} together with Lusin's theorem. \end{proof} The existence of an ergodic hyperbolic measure ensures hyperbolicity along almost every orbit. The norm used in the proof of Theorem \ref{OPReduction} is called the Lyapunov norm. In the non-uniformly hyperbolic case the Lyapunov metric is a powerful tool: with respect to it, the cocycle becomes uniformly hyperbolic. From the definition of uniformity blocks, we see that for any $\tilde{x}\in\tilde{\Lambda}(\eta)$ there exists a linearization and diagonalization of $f$ along the orbit $\tilde{x}$. In the following theorem we estimate, in local charts, the $\mathcal{C}^1$ distance between this diagonalization and the map itself; the theorem shows that on a uniformity block there is a uniform bound which depends only on the tolerance of the block.
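The role of the angle can be seen in a two-dimensional linear example: if $A$ has eigenvalues $2$ and $\frac{1}{2}$ with unit eigenvectors $v_1=(1,0)$ and $v_2=(\cos\alpha,\sin\alpha)$, and $C$ is the linear map with $Cv_1=e_1$ and $Cv_2=e_2$, then $$CAC^{-1}=diag(2,\tfrac{1}{2})\quad\text{while}\quad ||C||\sim\frac{1}{\sin\alpha}\,\,\text{ as }\alpha\rightarrow 0,$$ so the size of the conjugating matrix is governed by the angle between the invariant directions, and uniformity blocks are precisely the sets on which such angles, and hence $||C^{\pm 1}_{\eta}||$, stay uniformly controlled.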
We now need to pass from points to their neighborhoods, which requires the higher regularity $\mathcal{C}^{1+\alpha}.$ \begin{Thm}\label{Pesin} {\bf (Pesin's argument for mappings)} Let $f: M\rightarrow M$ be a $\mathcal{C}^{1+\alpha}$ mapping preserving the ergodic hyperbolic probability measure $\mu$. Let $\tilde{\Lambda}(\eta)$ be a uniformity block of tolerance $\eta$ for $\mu$. Then there exist $K>0$, $\xi_0>0$, a measurable map $C_{\eta}:\tilde{\Lambda}\rightarrow GL(d,\mathbb{R})$ which is continuous on $\tilde{\Lambda}(\eta)$, $$C_{\eta}(\tilde{x}):T_{x_0}M\rightarrow \mathbb{R}^d; \,\, \tilde{x}=\{x_n\}_{n\in \mathbb{Z}}\in \tilde{\Lambda},$$ and a measurable function $\xi:\tilde{\Lambda}\rightarrow \mathbb{R}^{+}$ which is also continuous on $\tilde{\Lambda}(\eta)$, such that: \begin{enumerate} \item$\max\{||C^{-1}_{\eta}(\tilde{f}^n(\tilde{x}))||,||C_{\eta}(\tilde{f}^n(\tilde{x}))||\}< K\exp (\eta |n|)$, for each $\tilde{x}\in\tilde{\Lambda}$ and $n\in\mathbb{Z}.$ \item Denote $\tilde{\lambda}^{+}= \min\{\tilde{\lambda}_i>0\}$, $\tilde{\lambda}^{-}= \max\{\tilde{\lambda}_i<0\}$ and $\tilde{\lambda}=\min\{\tilde{\lambda}^+,-\tilde{\lambda}^{-}\}$, where $\tilde{\lambda}_1>\tilde{\lambda}_2>\cdots>\tilde{\lambda}_k>0>\tilde{\lambda}_{k+1}>\cdots>\tilde{\lambda}_{s}$ are the distinct Lyapunov exponents of $\mu$, with multiplicities $n_1,\cdots,n_s\geq1$. Then there exist $A_{1}=A_{1}(\tilde{x})\in GL(\sum_{1}^{k}n_i,\mathbb{R})$ and $A_{2}=A_{2}(\tilde{x})\in L(\sum_{k+1}^{s}n_i,\mathbb{R})$ such that $$||A_{1}(\tilde{x})^{-1}||^{-1}\geq e^{\tilde{\lambda}-\eta}, \,\,\, ||A_{2}(\tilde{x})||\leq e^{-\tilde{\lambda}+\eta}$$ and $C_{\eta}(\tilde{f}(\tilde{x}))\cdot Df(x_0)\cdot C_{\eta}^{-1}(\tilde{x})= \mathrm{diag}(A_{1}(\tilde{x}), A_{2}(\tilde{x}))$. 
\item $\xi_0 e^{-\eta |n|}\leq \xi(\tilde{f}^n(\tilde{x}))\leq \xi_0e^{\eta |n|}$, for all $\tilde{x}\in \tilde{\Lambda}(\eta)$ and $n\in\mathbb{Z}$; \item $f(B(x_n, K^{-1}\xi(\tilde{f}^n(\tilde{x}))))\subset B(x_{n+1}, \xi(\tilde{f}^{n+1}(\tilde{x})))$; \item there are $\mathcal{C}^2$ charts $\phi_{\tilde{x}}: B(x_0, K^{-1}\xi(\tilde{x}))\rightarrow \mathbb{R}^d $ for any $\tilde{x}\in\tilde{\Lambda}$ such that \begin{enumerate} \item $\tilde{x}\longmapsto \phi_{\tilde{x}}$ is continuous in the $\mathcal{C}^1$ topology on $\tilde{\Lambda}(\eta);$ \item $\phi_{\tilde{x}}(x_0)=0$ and $D\phi_{\tilde{x}}(x_0)=C_{\eta}(\tilde{x});$ \item on the set $\phi_{\tilde{x}}(B(x_0,K^{-1}\xi(\tilde{x}))),$ we have $d_{\mathcal{C}^1}(\phi_{\tilde{f}\tilde{x}}\circ f\circ \phi^{-1}_{\tilde{x}}, C_{\eta}(\tilde{f}(\tilde{x}))\cdot Df(x_0)\cdot C^{-1}_{\eta}(\tilde{x}))<\eta;$ \item for all $\tilde{x}\in \tilde{\Lambda}(\eta)$ and for all $y,y'\in B(x_0,K^{-1}\xi(\tilde{x})),$ we have $K^{-1}d(y,y')\leq||\phi_{\tilde{x}}(y)-\phi_{\tilde{x}}(y')||\leq Ke^{\eta \tau(\tilde{x})}d(y,y')$, where $\tau(\tilde{x})$ is the first return time of $\tilde{x}$ to $\tilde{\Lambda}(\eta)$. \end{enumerate} \end{enumerate} \end{Thm} \begin{proof}Our aim is to construct, for almost every $\tilde{x}=\{x_n\}_{n\in\mathbb{Z}}\in\tilde{\Lambda}$, a neighborhood $N(x_n)$ such that $f$ acts on $N(x_n)$ very much like the linear map $A_{\eta}(\tilde{f}^n(\tilde{x}))$ in a neighborhood of the origin. This follows the techniques used in Pesin's original proof, Theorem S.3.1 of \cite{KatMe}. For $\eta>0$, let $\tilde{\Lambda}\subset M^f$ be the set on which the Oseledec--Pesin $\eta$-Reduction Theorem (Theorem \ref{OPReduction}) applies. 
For each $\tilde{x}\in\tilde{\Lambda}$ consider $(C_{\eta}(\tilde{f}^n(\tilde{x})))_{n\in\mathbb{Z}}$, linear maps from $T_{x_n}M$ to $\mathbb{R}^d,$ where $C_{\eta}$ is the Lyapunov change of coordinates given in Theorem \ref{OPReduction}, so that $A_{\eta}(\tilde{x})=C_{\eta}(\tilde{f}(\tilde{x}))Df(x_{0})C^{-1}_{\eta}(\tilde{x})$ has the following Lyapunov block form \[A_{\eta}(\tilde{x})=\begin{pmatrix} \ A^1_{\eta}(\tilde{x}) & \\ &\ A^2_{\eta}(\tilde{x}) \end{pmatrix}.\] For $\tilde{x}=\{x_n\}_{n\in\mathbb{Z}}$ and $ r>0$, let $T_{x_{0}}M(r):=\{w\in T_{x_{0}}M\,:\,||w||\leq r\}$; choose $r(\tilde{x})$ small enough so that for every $x_{0}\in M$ the exponential map $\exp_{x_{0}}:T_{x_{0}}M(r(\tilde{x}))\rightarrow M$ is an embedding with $||D_w\exp_{x_{0}}||\leq 2$, and $\exp_{f(x_{0})}$ is injective on $\exp^{-1}_{f(x_{0})}\circ f \circ \exp_{x_{0}}(T_{x_{0}}M(r(\tilde{x})))$. As the domains of the exponential maps have uniform radius, we can choose $r(\tilde{x})\equiv r.$ Define $$f_{\tilde{x}}:=C_{\eta}(\tilde{f}(\tilde{x}))\circ \exp^{-1}_{x_{1}}\circ f\circ \exp_{x_0}\circ C_{\eta}^{-1}(\tilde{x}),$$ so $f_{\tilde{x}}$ is defined on the ellipsoid $P(x_{0})=C_{\eta}(\tilde{x})(T_{x_{0}}M(r))\subset \mathbb{R}^d$. Let $p(\tilde{x})=r \min\{||C_{\eta}(\tilde{x})||^{-1},||C_{\eta}(\tilde{x})^{-1}||^{-1}\}$; thus if $w\in \mathbb{R}^d$ and $||w||\leq p(\tilde{x})$ then $w\in P(x_{0})$, that is, the Euclidean ball $B(0,p(\tilde{x}))$ is contained in $P(x_{0})$. Now write $f_{\tilde{x}}(w)=A_{\eta}(\tilde{x})w+h_{\tilde{x}}(w)$ and observe that $D_0f_{\tilde{x}}=A_{\eta}(\tilde{x})$ by the chain rule, so $D_0h_{\tilde{x}}=0$. Write $\exp^{-1}_{x_{1}}\circ f\circ \exp_{x_{0}}=Df(x_{0})+g_{x_{0}}$. 
Since $f\in \mathcal{C}^{1+\alpha}$, there exists $L>0$ such that $||D_ug_{x_{0}}||\leq L||u||^{\alpha}$; thus $$||D_wh_{\tilde{x}}||=||D_w(C_{\eta}(\tilde{f}(\tilde{x}))\circ g_{\tilde{x}}\circ C^{-1}_{\eta}(\tilde{x}))||\leq L ||C_{\eta}(\tilde{f}(\tilde{x}))||\,||C^{-1}_{\eta}(\tilde{x})||^{1+\alpha}||w||^{\alpha}.$$ Hence if $||w||$ is sufficiently small, the contribution of the nonlinear part of $f_{\tilde{x}}$ is negligible. In particular, $$||D_wh_{\tilde{x}}||<\eta\text{ for }||w||<\delta_{\eta}(\tilde{x}):=(L||C_{\eta}(\tilde{f}(\tilde{x}))||\,||C^{-1}_{\eta}(\tilde{x})||^{1+\alpha}/\eta)^{-1/\alpha}.$$ By the Mean Value Theorem, also $$||h_{\tilde{x}}(w)||<\eta\text{ for } ||w||<\delta_{\eta}(\tilde{x}).$$ From the definition of $\delta_{\eta}(\tilde{x})$ we have $$\lim_{m\rightarrow \infty}\frac{1}{m}\log \delta_{\eta}(\tilde{f}^m(\tilde{x}))=-\lim_{m\rightarrow\infty}\frac{1}{\alpha m}\log||C_{\eta}(\tilde{f}^m(\tilde{x}))||-\lim_{m\rightarrow\infty}\frac{1+\alpha}{\alpha m}\log||C_{\eta}^{-1}(\tilde{f}^m(\tilde{x}))||=0.$$ Applying the Tempering-Kernel Lemma \ref{Tempering} to $\delta_{\eta}(\tilde{x})$, we find a measurable $K_{\eta}:\tilde{\Lambda}\rightarrow \mathbb{R}^{+}$ such that $K_{\eta}(\tilde{x})\geq \delta_{\eta}^{-1}(\tilde{x})$ and $e^{-\eta}\leq K_{\eta}(\tilde{x})/K_{\eta}(\tilde{f}(\tilde{x}))\leq e^{\eta}$. Define $\xi_{\eta}(\tilde{x})=K_{\eta}(\tilde{x})^{-1}\leq \delta_{\eta}(\tilde{x}).$ Then $$e^{-\eta}\leq \xi_{\eta}(\tilde{x})/\xi_{\eta}(\tilde{f}(\tilde{x}))\leq e^{\eta}.$$ Now $$\phi_{\tilde{x}}:B(x_{0}, K^{-1}\xi_{\eta}(\tilde{x}))\rightarrow \mathbb{R}^d, \quad z\mapsto C_{\eta}(\tilde{x})\circ\exp^{-1}_{x_{0}}(z)$$ is clearly an embedding. Condition (a) follows from the continuity of $C_{\eta}$ on $\tilde{\Lambda}(\eta)$, and condition (b) follows from the definition. It remains to prove (c) and (d). 
Indeed, on the set $\phi_{\tilde{x}}(B(x_{0}, K^{-1}\xi_{\eta}(\tilde{x})))$, we have \begin{eqnarray*} &&d(\phi_{\tilde{f}\tilde{x}}\circ f\circ \phi^{-1}_{\tilde{x}}, C_{\eta}(\tilde{f}(\tilde{x}))\circ Df(x_{0})\circ C^{-1}_{\eta}(\tilde{x}))\\ &=&d(C_{\eta}(\tilde{f}\tilde{x})\circ\exp^{-1}_{x_{1}}\circ f \circ\exp_{x_0}\circ C^{-1}_{\eta}(\tilde{x}), C_{\eta}(\tilde{f}(\tilde{x}))\circ Df(x_{0})\circ C^{-1}_{\eta}(\tilde{x}))\\ &=&||f_{\tilde{x}}-A_{\eta}(\tilde{x})||\\ &\leq&||h_{\tilde{x}}||\\ &\leq&\eta. \end{eqnarray*} Now we give the proof of condition (d). From the definition of $C_{\eta}$, we only need to prove the second inequality. For all $\tilde{x}\in\tilde{\Lambda}(\eta)$ and any $y,y'\in B(x_{n}, K^{-1}\xi_{\eta}(\tilde{f}^{n}\tilde{x}))$, we have \begin{eqnarray*}&&||\phi_{\tilde{f}^n\tilde{x}}(y)-\phi_{\tilde{f}^n\tilde{x}}(y')||\\ &\leq&||C_{\eta}(\tilde{f}^n\tilde{x})||\,||y-y'||\\ &\leq&K \exp(\eta|n|)||y-y'||. \end{eqnarray*} The last inequality can be improved to the form stated in the theorem; this follows directly from the continuity of $C_{\eta}.$ \end{proof} The sets $B(x_n,K^{-1}\xi(\tilde{f}^n\tilde{x}))$ are called Lyapunov neighborhoods of the orbit $\tilde{x}$ at $x_n$. Although the restriction of $f$ to a Lyapunov neighborhood may fail to be invertible, the degeneration occurs only along the stable direction, which does not affect the shadowing mechanism. The sizes of the Lyapunov neighborhoods decay slowly (at rate at most $e^{-\eta}$) along the orbit $\tilde{x}$. \begin{Def}A sequence $(\tilde{x}_n)_{n\in\mathbb{Z}}\subset M^f$ is called an $\varepsilon$-pseudo orbit with jumps in the set $\tilde{\Lambda}(\eta)$ if $ d(\tilde{f}(\tilde{x}_n), \tilde{x}_{n+1})<\varepsilon$ and $ d(\tilde{f}(\tilde{x}_n), \tilde{x}_{n+1})>0 \Longrightarrow \tilde{f}(\tilde{x}_n),\tilde{x}_{n+1}\in\tilde{\Lambda}(\eta),$ for every $n\in\mathbb{Z}$. 
\end{Def} \begin{Thm}\label{shadowing}{\bf (Orbit Shadowing property for mappings)} Let $f$ be a $\mathcal{C}^{1+\alpha}$ mapping with an ergodic hyperbolic measure $\mu$ satisfying the integrability condition. Then for every sufficiently small $\eta>0$ and every uniformity block $\tilde{\Lambda}(\eta)$ of tolerance $\eta$ for $\mu$, there exist $C>0$ and $\varepsilon_0>0$ with the following property: for every $\varepsilon\in(0,\varepsilon_0),$ if $(\tilde{x}_n)_{n\in\mathbb{Z}}$ is an $\varepsilon$-pseudo orbit for $\tilde{f}$ with jumps in $\tilde{\Lambda}(\eta)$, then there exists a unique orbit $\tilde{y}\in\tilde{\Lambda}$ that $C\varepsilon$-shadows $(\tilde{x}_n)_{n\in\mathbb{Z}}$, i.e. $d(\tilde{f}^n\tilde{y},\tilde{x}_n)\leq C\varepsilon$ for all $n\in\mathbb{Z}$. \end{Thm} \begin{proof} Our goal is to construct sequences of mappings on $\mathbb{R}^d$ satisfying the conditions of Lemma \ref{Shadowing}. Let $(\tilde{x}_n)_{n\in\mathbb{Z}}$ be an $\varepsilon$-pseudo orbit for $\tilde{f}$ with jumps in $\tilde{\Lambda}(\eta)$, and denote $x_n=\pi(\tilde{x}_n)$; the bound $0<\varepsilon<\varepsilon_0$ will be determined later. Following the notation in the proof of Theorem \ref{Pesin}, we have a sequence of mappings $\tilde{g}_n=\phi_{\tilde{f}\tilde{x}_n}\circ f\circ \phi^{-1}_{\tilde{x}_n}:\mathbb{R}^d\rightarrow \mathbb{R}^d$ and a sequence of linear mappings $L_n:\mathbb{R}^d\rightarrow \mathbb{R}^d$, $$L_n=C_{\eta}(\tilde{f}\tilde{x}_n)\circ Df(\pi(\tilde{x}_n))\circ C_{\eta}^{-1}(\tilde{x}_n)= A_{\eta}(\tilde{x}_n)=\begin{pmatrix} \ A^1_{\eta}(\tilde{x}_n) & \\ &\ A^2_{\eta}(\tilde{x}_n) \end{pmatrix}.$$ From Theorem \ref{Pesin}, we have $d(\tilde{g}_n(v),L_n(v))\leq \eta$ for all $n\in \mathbb{Z}$ and any $v\in \phi_{\tilde{x}_n}(B(x_n,K^{-1}\xi(\tilde{x}_n))).$ Denote $$\Phi_n=\phi_{\tilde{x}_{n+1}}\circ f\circ \phi^{-1}_{\tilde{x}_n}:\mathbb{R}^d\rightarrow \mathbb{R}^d, \qquad \Phi_n=L_n+(\phi_{\tilde{x}_{n+1}}\circ f\circ \phi^{-1}_{\tilde{x}_n}-L_n).$$ Considering the jumping points, i.e. $0<d(\tilde{f}\tilde{x}_n,\tilde{x}_{n+1})\leq \varepsilon, $ by the continuity of $\phi_{\tilde{x}}$ on $\tilde{\Lambda}(\eta)$ and the smoothness of the charts, we have \begin{eqnarray*} &&|\phi_{\tilde{x}_{n+1}}\circ f\circ \phi^{-1}_{\tilde{x}_n}(0)-L_n(0)|\\ &=&|\phi_{\tilde{x}_{n+1}}(f(x_n))|\\ &=&|\phi_{\tilde{x}_{n+1}}(f(x_n))-\phi_{\tilde{f}(\tilde{x}_{n})}(f(x_n))|\\ &\leq&K\varepsilon. \end{eqnarray*} Moreover, \begin{eqnarray*} &&|\phi_{\tilde{x}_{n+1}}\circ f\circ \phi^{-1}_{\tilde{x}_n}(v)-L_n(v)-\phi_{\tilde{x}_{n+1}}\circ f\circ \phi^{-1}_{\tilde{x}_n}(v')+L_n(v')|\\ &\leq&|L_n(v)-L_n(v')|+|\phi_{\tilde{x}_{n+1}}\circ f\circ\phi^{-1}_{\tilde{x}_n}(v)-\phi_{\tilde{x}_{n+1}}\circ f\circ\phi^{-1}_{\tilde{x}_n}(v')|\\ &\leq&C|v-v'|^{\alpha}. \end{eqnarray*} It is easy to see that the mappings $\Phi_n$ also satisfy the other conditions of Lemma \ref{Shadowing}, so there is a unique sequence of points $z_n\in\mathbb{R}^d$ satisfying $\Phi_n(z_n)=z_{n+1}$. Then $\{\phi_{\tilde{x}_n}^{-1}z_n\}_{n\in\mathbb{Z}}$ is a genuine orbit of $f$. \end{proof} As $\tilde{f}:M^f\rightarrow M^f$ is invertible and measure-theoretic entropy can be captured on any subset of positive measure, invariant or not, we have the following version of Katok's argument, which asserts the existence of horseshoes in the inverse limit space $M^f$. In order to clarify the process of projecting the horseshoe from the inverse limit space to the original space, we briefly give the construction here. Our proof follows the idea used in \cite{ACW}. These ideas for constructing pseudo orbits also appeared in \cite{LLST, LST, LSY}. 
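Before stating Katok's argument, here is a minimal toy illustration of the shadowing mechanism (our own example in the simplest uniformly expanding setting, not part of the proofs above): for $f(x)=2x$ on $\mathbb{R}$, every $\varepsilon$-pseudo orbit is $\varepsilon$-shadowed by a true orbit.

```latex
\[
\text{Let } |2x_n-x_{n+1}|<\varepsilon \ (n\geq 0). \text{ Since }
\Big|\frac{x_{n+1}}{2^{n+1}}-\frac{x_n}{2^n}\Big|
=\frac{|x_{n+1}-2x_n|}{2^{n+1}}<\frac{\varepsilon}{2^{n+1}},
\quad y:=\lim_{n\to\infty}\frac{x_n}{2^n} \text{ exists,}
\]
\[
\text{and}\quad
|2^n y - x_n|
=\Big|\sum_{m\geq n}\frac{x_{m+1}-2x_m}{2^{\,m-n+1}}\Big|
<\varepsilon\sum_{j\geq 1}2^{-j}=\varepsilon .
\]
```

Here the shadowing constant is $C=1$; in Theorem \ref{shadowing} the constant $C$ instead absorbs the distortion of the Lyapunov charts along the orbit.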
\begin{Thm}\label{Kat}{\bf (Katok's argument for mappings)} Let $f:M\rightarrow M$ be any $\mathcal{C}^{1+\alpha}$ mapping on a closed Riemannian manifold $M$ with an ergodic hyperbolic probability measure $\mu$. Fix any small constant $\delta>0$ and a weak-$*$ neighborhood $\tilde{\mathcal{V}}$ of $\tilde{\mu}$ in the space of $\tilde{f}$-invariant probability measures on $\tilde{\Lambda}$. Then there exists a horseshoe $\tilde{H}\subset M^f$ such that: \begin{enumerate} \item $h_{top}(\tilde{H},\tilde{f})>h(\tilde{\mu},\tilde{f})-\delta=h(\mu,f)-\delta$. \item if $\tilde{\lambda}_1>\tilde{\lambda}_2>\cdots>\tilde{\lambda}_k$ are the distinct Lyapunov exponents of $\mu$, with multiplicities $n_1,\cdots,n_k\geq 1$, and $\tilde{\lambda}$ is as before, then there exists a dominated splitting on $T_{\tilde{x}}M=\sqcup T_{\pi(\tilde{f}^n\tilde{x})}M$, $\tilde{x}\in \tilde{H}$: $$T_{\tilde{x}}M=E^u\oplus E^s,$$ and there exists $N\geq1$ such that for each $\tilde{x}\in\tilde{H}$ and all unit vectors $v\in E^u(\pi(\tilde{x}))$, $u\in E^s(\pi(\tilde{x}))$, $$||Df^{-N}_{\pi(\tilde{x})}(v)||\leq \exp((-\tilde{\lambda}+\delta)N),\qquad ||Df^{N}_{\pi(\tilde{x})}(u)||\leq \exp((-\tilde{\lambda}+\delta)N).$$ \item all the invariant probability measures supported on $\tilde{H}$ lie in $\tilde{\mathcal{V}}.$ \item $\tilde{H}$ is $\delta$-close to the support of $\tilde{\mu}$ in the Hausdorff distance. \end{enumerate} \end{Thm} \begin{proof} {\bf Step 1:} For the sake of estimating distances between measures, we first fix a finite collection of continuous functions on $\tilde{\Lambda}$ that can be used in the weak-$*$ metric. Let $C(\tilde{\Lambda})$ be the space of real-valued continuous functions on $\tilde{\Lambda}$. 
Choose $\gamma\in(0,\delta)$ and let $\tilde{\phi}_1,\tilde{\phi}_2,\cdots,\tilde{\phi}_k$ be a finite collection in $C(\tilde{\Lambda})$ such that $\tilde{\mathcal{V}}$ contains the set of probability measures $ \tilde{\nu}$ satisfying $$\sum_{i=1}^{k}\frac{|\int_{\tilde{\Lambda}}\tilde{\phi}_i d\tilde{\mu}-\int_{\tilde{\Lambda}}\tilde{\phi}_i d\tilde{\nu}|}{2^i}<\gamma.$$ {\bf Step 2:} By Theorem 1.1 in \cite{Kat}, measure-theoretic entropy is the exponential growth rate of the minimal number of Bowen balls covering a set of positive measure. More specifically, given $\tilde{x} \in \tilde{\Lambda}$, $\rho > 0$ and $n\in \mathbb{N}$, the $(n,\rho)$ Bowen ball is defined as $$B(\tilde{x},n,\rho)=\{\tilde{y}\in \tilde{\Lambda} \,\,| \,\,d(\tilde{f}^i(\tilde{x}),\tilde{f}^i(\tilde{y}))\leq\rho,\ 0\leq i\leq n-1\}.$$ Define $$N(n,\rho,\xi)=\min\sharp\{\tilde{x}_1,\cdots,\tilde{x}_k\,|\,\tilde{\mu}(\cup_{i}B(\tilde{x}_i,n,\rho))>1-\xi\}.$$ For any positive number $\xi<1$, the measure entropy (which is independent of $\xi$) is given by $$h_{\tilde{\mu}}(\tilde{f})=\limsup_{n\rightarrow \infty}\frac{1}{n}\log N(n,\rho,\xi).$$ We also define $(n,\rho)$-separated sets: a set $S(n,\rho)\subset K$ is called an $(n,\rho)$-separated set for $K\subset \tilde{\Lambda}$ if any two distinct points $\tilde{x},\tilde{y}\in S(n,\rho)$ satisfy $d(\tilde{f}^i(\tilde{x}),\tilde{f}^i(\tilde{y}))\geq \rho$ for some $i\in [0,n-1].$ We will take advantage of the fact that the maximal cardinality of an $(n,\rho)$-separated set is at least the minimal number of $(n,2\rho)$ Bowen balls covering the same set. {\bf Step 3:} Now we are ready to fix the scale of the Bowen balls and the scale of separation, i.e. the $\rho$ in the definitions of the Bowen ball and the separated set. Assume $0<\varepsilon_1<\min\left(\frac{\gamma}{2h_{\tilde{\mu}}(\tilde{f})+4},\frac{\delta}{4}\right)$. 
Choose $\rho>0$ small enough and $N_0\in \mathbb{N}$ such that for any $n\geq N_0$ $$ N(n,\rho,\tilde{\mu}(\tilde{\Lambda}(\eta))/2)> e^{n(h_{\tilde{\mu}}(\tilde{f})-\varepsilon_1)}$$ and such that $$|\tilde{\phi}_i(\tilde{x})-\tilde{\phi}_i(\tilde{y})|\leq \frac{\gamma}{2}, \quad 1\leq i\leq k,$$ whenever $\tilde{x},\tilde{y}\in \tilde{\Lambda}$ satisfy $d(\tilde{x},\tilde{y})\leq \rho$. The small number $\rho$ here is a separation scale. {\bf Step 4:} We next fix a shadowing scale smaller than the separation scale $\rho$ above, and filter the recurrent points of $\tilde{\Lambda}(\eta)$. Fix $0<\varepsilon_2<\frac{\rho}{4C}$, where $C$ is the Lipschitz constant given in Theorem \ref{shadowing}. Let $\mathcal{U}=\{B(\tilde{x}_i,\varepsilon_2)\,\,|\,\, \tilde{x}_i\in\tilde{\Lambda}(\eta), 1\leq i\leq t\}$ be a cover of $\tilde{\Lambda}(\eta)$. For any $n\in\mathbb{N}$, let \begin{eqnarray*} \tilde{\Lambda}'(\eta)_n&=&\{\tilde{x}\in\tilde{\Lambda}(\eta):\exists i\in[n,(1+\varepsilon_2)n] \\ && \text{ s.t. } \tilde{x},\tilde{f}^i\tilde{x}\in B(\tilde{x}_k, \varepsilon_2) \text{ for some } k\in[1,t] \}. \end{eqnarray*} By the Poincar\'e Recurrence Theorem, $\tilde{\mu}(\tilde{\Lambda}'(\eta)_n)\rightarrow \tilde{\mu}(\tilde{\Lambda}(\eta))$ as $n\rightarrow +\infty$. Set $$\tilde{\Lambda}(\eta)_n=\{\tilde{x}\in\tilde{\Lambda}'(\eta)_n: \sup_{l\geq n}\max_{1\leq i\leq k}|\frac{1}{l}\sum^{l}_{j=1}\tilde{\phi}_i(\tilde{f}^j(\tilde{x}))-\int_{\tilde{\Lambda}}\tilde{\phi}_i\, d\tilde{\mu}|<\frac{\gamma}{2}\}. $$ The Birkhoff Ergodic Theorem implies that $\tilde{\mu}(\tilde{\Lambda}(\eta)_n)\rightarrow \tilde{\mu}( \tilde{\Lambda}(\eta))$ as $n\rightarrow \infty.$ {\bf Step 5:} In this step we choose an $(n,\rho)$-separated set covering $\tilde{\Lambda}(\eta)_n$. We need not only the separation property but also a common return time to $\tilde{\Lambda}(\eta)$, so that we can control the segments. 
We also use some combinatorial techniques to estimate the number of $(n,\rho)$-separated segments with a common return time to $\tilde{\Lambda}(\eta)$. For each $n\in\mathbb{N}$, let $S(n,\rho)$ be a maximal $(n,\rho)$-separated set in $\tilde{\Lambda}(\eta)_n$. Without loss of generality, we may assume that any two points in $S(n,\rho)$ come from different orbits (if two points lie on the same orbit, perturb one of them slightly). Then $$\tilde{\Lambda}(\eta)_n\subset\cup_{\tilde{x}\in S(n,\rho)}B(\tilde{x},n,2\rho),$$ and for $N_1$ large enough, for any $n\geq N_1$ we get $$\sharp S(n,\rho)\geq N(n,2\rho, \tilde{\mu}(\tilde{\Lambda}(\eta))/2)\geq e^{n(h_{\tilde{\mu}}(\tilde{f})-\varepsilon_1)}. $$ For $n\in[N_1,(1+\varepsilon_2)N_1]$, let $$V_n=S(N_1,\rho)\cap \{\tilde{x}\in \tilde{\Lambda}(\eta)\,|\, \tilde{x}, \tilde{f}^n(\tilde{x})\in B(\tilde{x}_k,\varepsilon_2) \text{ for some } k\in[1,t]\}$$ and let $N\in[N_1,(1+\varepsilon_2)N_1]$ be the value of $n$ maximizing $\sharp V_n$. Assuming $N_1$ is large enough, $$\sharp V_N\geq\frac{\sharp S(N_1,\rho)}{\varepsilon_2 N_1}\geq e^{N(h_{\tilde{\mu}}(\tilde{f})-2\varepsilon_1)}.$$ {\bf Step 6:} We now have many separated segments with a common return time. But in order to construct pseudo orbits from the segments of these separated points, we need to choose separated segments entering and leaving the same ball. Choose $k\in[1,t]$ such that $B(\tilde{x}_k,\varepsilon_2)\cap V_N$ has maximal cardinality and let $\tilde{Y}=\{\tilde{y}_1,\cdots,\tilde{y}_l\}=B(\tilde{x}_k,\varepsilon_2)\cap V_N$. From the choice of $\tilde{Y}$, the number $l$ is large in terms of the entropy, i.e. $$l\geq \frac{\sharp V_N}{t}\geq \frac{1}{t}e^{N(h_{\tilde{\mu}}(\tilde{f})-2\varepsilon_1)}.$$ {\bf Step 7:} Now we can construct pseudo orbits. Consider the set of all orbit segments of length $N$ that originate in $\tilde{Y}$ and end in $B(\tilde{x}_k,\varepsilon_2)$. 
Concatenating these strings defines a two-sided shift $\sigma_l$ on $l$ symbols, which has topological entropy $\log l\geq N(h(\tilde{\mu},\tilde{f})-2\varepsilon_1)-\log t.$ We will construct a horseshoe $\tilde{H} \subset M^f$ such that $\tilde{f}^N|_{\tilde{H}}$ has $\sigma_l$ as a topological factor. Consider the set $\tilde{\mathcal{Y}}$ of all $\varepsilon_2$-pseudo orbits of the form $$\cdots\tilde{y}_{i_{-1}}, \cdots, \tilde{f}^{N-1}(\tilde{y}_{i_{-1}}),\tilde{y}_{i_{0}}, \cdots, \tilde{f}^{N-1}(\tilde{y}_{i_{0}}),\tilde{y}_{i_{1}}, \cdots, \tilde{f}^{N-1}(\tilde{y}_{i_{1}}),\cdots$$ where $\tilde{y}_{i_j}\neq\tilde{y}_{i_{j+1}}\in \tilde{Y}.$ Note that these are indeed $\varepsilon_2$-pseudo orbits with jumps in $\tilde{\Lambda}(\eta),$ since $\tilde{f}^N(\tilde{y})\in \tilde{\Lambda}(\eta)$ for all $\tilde{y}\in\tilde{Y}$. Each element of $\tilde{\mathcal{Y}}$ can be naturally encoded as an element of $\{1,\cdots,l\}^{\mathbb{Z}}\times\{0,\cdots, N-1\}$. We define $\tilde{H}$ to be the set of $\tilde{x}\in M^f$ whose $\tilde{f}$-orbit $C\varepsilon_2$-shadows some pseudo orbit in $\tilde{\mathcal{Y}}$. Let $\mathcal{C}$ denote the Markov subshift of $\{1,\cdots,l\}^{\mathbb{Z}}\times\{0,\cdots, N-1\}$ whose transition graph is derived from that of $\{1,\cdots,l\}^{\mathbb{Z}}\times\{0,\cdots, N-1\}$ by dropping the edges from a vertex to itself. Theorem \ref{shadowing} and $\rho>4C\varepsilon_2$ imply there is a continuous bijection between $\tilde{H}$ and $\mathcal{C}$. Hyperbolicity of $\tilde{H}$ follows from the fact that the orbit of any $\tilde{x}\in \tilde{H}$ stays in the union of finitely many regular neighborhoods, on which $f$ stays close to a uniformly hyperbolic sequence in $\{A_{\eta}(\tilde{x}_{j_1}), A_{\eta}(\tilde{x}_{j_2}),\cdots,A_{\eta}(\tilde{x}_{j_N})\}$. The other conclusions follow directly from our construction. This finishes the proof. 
\end{proof} \begin{Cor}\label{Growth}Let $f:M\rightarrow M$ be a $\mathcal{C}^{1+\alpha}$ mapping preserving an ergodic hyperbolic probability measure $\mu$. We have $$\limsup_{n\rightarrow+\infty}\frac{1}{n}\log P_n(f)\geq h_{\mu}(f),$$ where $P_n(f)$ denotes the number of periodic points with period $n$. \end{Cor} \begin{proof}For the full shift $(\sigma,\Sigma_{l})$ we have, for every $m\geq 1$, $$h_{top}(\Sigma_l,\sigma)=\frac{1}{m}\log \sharp\{x\in\Sigma_{l}\,|\,\sigma^m(x)=x\}.$$ Then, by Theorem \ref{Kat}, the corollary follows. \end{proof} \begin{Def}\label{PH}For a map $f:M\rightarrow M$ and any constants $K>0, a>0$, we say a periodic point $p$ with period $P(p)$ has $(K,a)$-hyperbolicity if there exists an invariant splitting $T_{f^i(p)}M=E^s_{f^i(p)}\oplus E^u_{f^i(p)}, 0\leq i\leq P(p)-1,$ along the orbit $\{f^i(p)\}_{i=0}^{P(p)-1}$ such that $$||Df^j_{f^i(p)}(v)||\leq Ke^{-ja}||v||, \quad \forall v\in E^s_{f^i(p)},$$ and $$||Df^j_{f^i(p)}(v)||\geq Ke^{ja}||v||, \quad \forall v\in E^u_{f^i(p)}.$$ Let $PH(n,f,K,a)$ be the collection of periodic points with period $n$ and uniform $(K,a)$-hyperbolicity. \end{Def} \begin{Thm} \label{Growth2} For any $\mathcal{C}^{1+\alpha}$ mapping $f:M\rightarrow M$, we have $$\limsup_{n\rightarrow+\infty}\frac{1}{n}\log P_n(f)\geq H(f).$$ Moreover, we have $$\lim_{a\rightarrow 0^{+}}\lim_{K\rightarrow 0^{+}}\limsup_{n\rightarrow \infty}\frac{1}{n}\log \sharp PH(n,f, K,a)=H(f).$$ \end{Thm} \begin{proof}The first part is a direct consequence of Corollary \ref{Growth}. We only need to prove the second equality. As periodic points of $f$ can also be viewed as periodic points of $\tilde{f}$, we use the same notation. Assume the Lyapunov splitting over the orbit of a periodic point $p$ with period $P(p)$ is $$T_{f^i(p)}M=E^s_{f^i(p)}\oplus E^u_{f^i(p)}, \quad \forall 0\leq i\leq P(p)-1. $$ Let $I(p)$ be the index of $p$, i.e. the dimension of the stable bundle $E^s$. 
We define the following collection of periodic points with uniform hyperbolicity and the same index: \begin{eqnarray*}PH(n,f,a, K, I)&=&\{p\in P_n(f)\,|\, ||Df^{i}_{f^j(p)}v||\geq K e^{ia}||v||, \forall v\in E^u_{f^j(p)},\\ &&||Df^{i}_{f^j(p)}u||\leq K^{-1}e^{-ia}||u||, \forall u\in E^s_{f^j(p)}, \\ &&\forall 0\leq j\leq n-1, I(p)=I\}. \end{eqnarray*} As the splitting over the periodic points can be continuously extended to the closure, $\tilde{f}|\overline{\cup_nPH(n,f,a,K,I)}$ is uniformly hyperbolic. Thus $\tilde{f}|\overline{\cup_nPH(n,f,a,K, I)}$ is an expansive homeomorphism (by the unique shadowing property), and then $f|\pi(\overline{\cup_nPH(n,f,a,K,I)})$ is an expansive map. From the fact that $\pi(PH(n,f,a,K, I))$ is an $n$-separated set, one has $$ \lim_{n\rightarrow+\infty}\frac{1}{n}\log \sharp PH(n,f,a,K,I) \leq h(f|\overline{\pi(\cup_nPH(n,f,a,K, I))}).$$ By the variational principle, for any $\varepsilon>0$ there exists a hyperbolic measure $\mu$ supported on $\overline{\pi(\cup_nPH(n,f,a,K,I))}$ such that $h(f|\overline{\pi(\cup_nPH(n,f,a,K,I))})\leq h_{\mu}(f)+\varepsilon\leq H(f)+2\varepsilon. $ From the arbitrary choice of $\varepsilon$, we obtain $$\limsup_{n\rightarrow+\infty}\frac{1}{n}\log \sharp PH(n,f,a,K,I)\leq H(f).$$ Since $$PH(n,f,K,a)=\cup_{0\leq I\leq d}PH(n,f,a,K,I),$$ we have $$\lim_{K\rightarrow 0^{+}}\limsup_{n\rightarrow \infty}\frac{1}{n}\log \sharp PH(n,f,K,a)\leq H(f). $$ On the other hand, by Theorem \ref{Kat}, we have $$\lim_{a\rightarrow 0^{+}}\lim_{K\rightarrow 0^{+}}\limsup_{n\rightarrow \infty}\frac{1}{n}\log \sharp PH(n,f,K,a)\geq H(f),$$ and then $$\lim_{a\rightarrow 0^{+}}\lim_{K\rightarrow 0^{+}}\limsup_{n\rightarrow \infty}\frac{1}{n}\log \sharp PH(n,f,K,a)=H(f). $$ \end{proof} \section{Relation between the exponential growth rate of periodic points and the degree} The asymptotic growth rate of the complexity of the orbit structure has attracted attention for a long time. 
There are several points of view from which to describe the asymptotic behavior of dynamical systems, such as topology, measure and homology. Commonly, one cares about the growth rate of the number of periodic points, the measure-theoretic entropy, the topological entropy, the spectral radii of the action on homology, etc. Let $M$ be a compact connected $d$-dimensional manifold. For a $\mathcal{C}^1$ mapping $f:M\rightarrow M$, Misiurewicz and Przytycki \cite{MP} proved that \[h_{top}(f)\geq \log |\text{deg}(f)|.\] Let $P_n(f)$ denote the number of periodic points with period $n$. Katok \cite{Kat} proved that for any $\mathcal{C}^{\infty}$ surface diffeomorphism $f:M\rightarrow M$ we have \[\limsup_{n\rightarrow+\infty}\frac{1}{n}\log P_n(f)\geq h_{top}(f). \] Inspired by these two results, Shub posed an interesting case in Problem 3 of \cite{Shub3}. Let $f$ be a degree-two $\mathcal{C}^{1+\alpha}$ map of the 2-sphere $S^2$, where $\alpha>0$. {\bf Problem (Shub):} Does $\limsup_{n\rightarrow +\infty}\frac{1}{n}\log P_n(f)\geq \log2$ hold? In order to obtain periodic points, a standard technique is the closing lemma used by Katok in \cite{Kat}, which is based on the hyperbolicity of invariant measures. If we assume there exists a hyperbolic invariant ergodic measure $\mu$ of the $\mathcal{C}^{1+\alpha}$ map $f$ with $h_{\mu}(f)\geq \log \text{deg}(f)$, then from Corollary \ref{Growth} we get $\limsup_{n\rightarrow +\infty}\frac{1}{n}\log P_n(f)\geq \log \text{deg}(f).$ There is also a direct corollary of Theorem \ref{Growth2}, as follows. \begin{Cor} \label{Cor}Let $M$ be a compact connected $d$-dimensional manifold. For any $\mathcal{C}^{1+\alpha}$ mapping $f:M\rightarrow M$, if $H(f)\geq \log \text{deg}(f)$, then $$\limsup_{n\rightarrow+\infty}\frac{1}{n}\log P_n(f)\geq \log \text{deg}(f).$$ \end{Cor} We make some remarks for the case when $M$ is a surface. It is easy to see that if $f$ is a diffeomorphism, then every invariant measure with positive entropy is hyperbolic. 
But this might not be true for endomorphisms. For noninvertible mappings on surfaces, one may not be able to obtain hyperbolic invariant measures with measure-theoretic entropy approximating the topological entropy. It may well happen that for a noninvertible mapping $f$, $$H(f)<h_{top}(f), $$ as in the examples given by Pugh and Shub in their recent paper \cite{Shub4}. In other words, by the equality in Theorem \ref{Growth2}, it is quite possible that the growth rate of the number of saddle periodic points is strictly smaller than the topological entropy. But it is still possible that the growth rate of the number of periodic points with zero Lyapunov exponents is at least the logarithm of the degree. One may need some topological techniques to resolve Shub's question. \end{document}
\begin{document} \title{Multiway Cut, Pairwise Realizable Distributions, \\ and Descending Thresholds} \begin{abstract} We design new approximation algorithms for the Multiway Cut problem, improving the previously known factor of $1.32388$ \citep{BNS13}. We proceed in three steps. First, we analyze the rounding scheme of \citet{BNS13} and design a modification that improves the approximation to $\frac{3+\sqrt{5}}{4} \approx 1.309017$. We also present a tight example showing that this is the best approximation one can achieve with the type of cuts considered by \citet{BNS13}: (1) partitioning by exponential clocks, and (2) single-coordinate cuts with equal thresholds. Then, we prove that this factor can be improved by introducing a new rounding scheme: (3) single-coordinate cuts with descending thresholds. By combining these three schemes, we design an algorithm that achieves a factor of $\frac{10+4\sqrt{3}}{13} \approx 1.30217$. This is the best approximation factor that we are able to verify by hand. Finally, we show that by combining these three rounding schemes with the scheme of independent thresholds from \cite{KKSTY04}, the approximation factor can be further improved to $1.2965$. This approximation factor has been verified only by computer. \end{abstract} \section{Introduction} \label{sec:intro} The Multiway Cut problem is one of the classical graph optimization problems: Given a graph $G = (V,E)$ with edge weights $w:E \rightarrow {\mathbb R}_+$ and $k$ terminals $t_1,t_2,\ldots,t_k \in V$, we want to find a minimum-weight subset of edges $F \subseteq E$ such that no pair of terminals is connected in $(V, E \setminus F)$. Equivalently, we can search for a labeling of the vertices $\ell:V \rightarrow [k]$ so as to minimize the total weight of edges $(v,w)$ such that $\ell(v) \neq \ell(w)$. This is a natural generalization of the Minimum $s$-$t$-Cut problem, which is the $k=2$ case. 
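For reference, the geometric relaxation of this labeling view (the CKR relaxation discussed in what follows) can be written as a linear program over the simplex; this is the standard formulation, included here as a sketch:

```latex
\begin{align*}
\min\ \ & \frac{1}{2}\sum_{(u,v)\in E} w(u,v)\,\|x_u - x_v\|_1 \\
\text{s.t.}\ \ & x_u \in \Delta_k = \Big\{x\in\mathbb{R}_{\geq 0}^k :
  \textstyle\sum_{i=1}^k x_i = 1\Big\} \quad \forall u\in V, \\
& x_{t_i} = e_i \quad \forall i\in[k].
\end{align*}
```

Integral solutions place every $x_u$ at a simplex vertex $e_i$, and since $\frac{1}{2}\|e_i-e_j\|_1=1$ for $i\neq j$, the objective then equals the weight of the cut; a rounding scheme is a (random) map from fractional points of $\Delta_k$ back to the vertices of the simplex.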
The study of Multiway Cut goes back to \citet{DJPSY94} who proved that the problem is MAX SNP-hard for every $k \geq 3$, and gave a simple combinatorial algorithm using repeated applications of Min $s$-$t$-Cut that achieves a $(2-2/k)$-approximation. A novel technique for Multiway Cut was introduced by \citet{CKR01} who proposed a geometric relaxation for this problem. In this ``CKR relaxation'', the graph is embedded into a simplex with each terminal at a distinct vertex, and finding the optimal embedding can be formulated as a linear program. A partitioning of the graph then corresponds to a partitioning of the simplex that separates all the vertices. Using this relaxation, \cite{CKR01} designed a $(1.5 - 1/k)$-approximation for Multiway Cut. \citet{CT99}, and \citet{KKSTY04} independently provided a $12/11$-approximation for $k=3$ and showed that this is the best approximation achievable using the CKR relaxation for $k=3$, by presenting a matching integrality gap example. More generally, \citet{KKSTY04} provided improved approximation algorithms for all values of $k$, with approximation factors tending to $1.3438$ as $k \rightarrow \infty$. On the negative side, \citet{FK00} showed an integrality gap of $8/(7+1/(k-1))$ for each $k \geq 3$. The importance of the CKR relaxation was bolstered further by the work of \citet{MNRS08} who proved that assuming the Unique Games Conjecture, it is NP-hard to achieve an approximation for Multiway Cut better than the integrality gap of the CKR relaxation (for any fixed $k$). This means that $12/11$ is indeed the best possible approximation for $k=3$, and for every $k$ the CKR relaxation provides the optimal approximation factor (assuming the UGC). Recently, \citet{BNS13} made a new improvement on the algorithmic side and designed a $1.32388$ approximation for Multiway Cut (for arbitrary $k$). They introduced an interesting new rounding scheme for the CKR relaxation that they called ``partitioning using exponential clocks''. 
(In fact, they also showed that a threshold-based rounding scheme from \citet{KT02} could be used equivalently in place of the exponential clocks.) Combining this scheme with a modification of a thresholding scheme from \cite{CKR01}, they presented a very simple and elegant way to achieve a $4/3$-approximation. Then they modified the rounding scheme further, to improve the approximation factor to $1.32388$. \noindent{\bf Our contribution.} We build upon previous work and provide further improvements on the approximation factor for Multiway Cut. First, we study the rounding scheme of \cite{BNS13} and identify a modification of their scheme that leads to a factor of $\frac{3+\sqrt{5}}{4} \approx 1.309017$. (See Section~\ref{sec:1.309}.) We note that this number is equal to $\frac{1+\varphi}{2}$ where $\varphi = \frac{1+\sqrt{5}}{2}$ is the {\em golden ratio}. We also present a tight example showing that this is the best approximation factor that can be achieved by any combination of the techniques considered by \cite{BNS13}: the exponential clocks scheme, the Kleinberg-Tardos scheme, and thresholding schemes with equal thresholds for all terminals. (We provide more details in Appendix~\ref{sec:tight-example}.) Secondly, we improve this approximation factor by introducing a new rounding scheme that we call {\em descending thresholds}. This scheme can be combined with the rounding schemes above in a way that achieves an approximation factor of $\frac{10+4\sqrt{3}}{13} \approx 1.30217$. The analysis of this algorithm is still quite simple and can be verified easily by hand (see Section~\ref{sec:1.302}). This factor is tight for any combination of partitioning using exponential clocks and thresholding schemes with ``analysis based on two thresholds'' (see Appendix~\ref{sec:1.302-tight} for details). Finally, we show that this factor can be further improved by including another rounding scheme, the scheme of {\em independent thresholds} from \cite{KKSTY04}. 
However, in this case we are not able to analyze the approximation factor manually anymore. With the help of an LP solver, we find a set of parameters that leads to an approximation factor of $1.2965$ (see Section~\ref{sec:below-1.3}). The verification of this result reduces to finding the maximum of a certain function of $2$ variables (involving polynomials and exponentials), which we have done by computer.\footnote{We have used {\em IBM ILOG CPLEX} for linear programming and {\em Wolfram Mathematica} for analytical manipulations.} \noindent{\bf Pairwise realizable distributions.} While searching for possible extensions of the rounding schemes with a random threshold for each terminal (see Section~\ref{sec:1.302-pairwise} for more details), we encountered the following question: {\em Given a joint distribution $\rho$ of two random variables $(X,Y)$, can we design arbitrarily many random variables $X_1,\ldots,X_k$ such that $\forall i \neq j$, $(X_i,X_j)$ has the same distribution $\rho$?} If this is the case, we call such a distribution $\rho$ {\em pairwise realizable}. Not every distribution $\rho$ is pairwise realizable (see Appendix~\ref{sec:pairwise}). Here we present the following answer (in discrete domains): $\rho$ is pairwise realizable, if and only if $\rho$ is a convex combination of symmetric product distributions, or in other words $$ \Pr_{(X,Y) \sim \rho}[X=a,Y=b] = \sum_s \alpha_s p_s(a) p_s(b) $$ where $\alpha_s \geq 0, \sum_s \alpha_s = 1$ and $\forall s; p_s(a) \geq 0, \sum_a p_s(a) = 1$. We provide a short proof in Appendix~\ref{sec:pairwise}; this result also follows from \cite{TW98}. In particular, it is necessary that the matrix $P_{ab} = \Pr[X=a, Y=b]$ be {\em positive semidefinite}. We do not need this result directly for any of our algorithms, but this characterization is helpful in understanding what kinds of threshold distributions are worth considering. 
(See Section~\ref{sec:1.302-pairwise} and Appendix~\ref{sec:pairwise} for more details.) \noindent{\bf Discussion.} We have investigated several rounding schemes that might improve the approximation factor. While we have a good understanding of thresholding schemes that rely only on two relevant variables in the analysis (and we identify the best approximation factor in this setting; see Appendix~\ref{sec:1.302-tight}), the situation gets more complicated with the inclusion of additional variables as in the analysis of {\em independent thresholds}. Then the cut density is not a linear function of the underlying probability distributions anymore. Our approach in this case is a combination of intuition from tight examples and the use of an LP solver. The rounding scheme achieving a factor of $1.2965$ that we present in this paper is the best one that we are able to describe in a simple form. Further (small) improvements might be achieved by finding more exhaustive descriptions of the probability distributions returned by the LP solver. However, we do not think that this would improve our understanding of the Multiway Cut problem. It is interesting to note that as of now, all known approximation algorithms for Multiway Cut can be implemented using a sequence of label assignments based on a threshold condition ($x_{v,i} \geq \theta$). The only exception to our knowledge, the exponential clocks scheme of \cite{BNS13}, can be replaced by the Kleinberg-Tardos algorithm, which uses a sequence of thresholds (with repeated use of variables). In fact \cite{KKSTY04}, citing computational experiments, speculated that the optimal approximation for Multiway Cut might be achievable using ``sparcs'', which are sequences of $k$ threshold cuts, one for each variable. Our scheme of descending thresholds is of this type.
However, the exponential clocks scheme as well as the Kleinberg-Tardos scheme, one of which is still a necessary ingredient in our algorithm, are outside of this framework. \section{The CKR Relaxation} \citet{CKR01} proposed the following LP relaxation of the Multiway Cut problem, where $V$ is the set of vertices, $E$ is the set of edges with weights $w_{v,v'}$, and $T$ denotes the set of terminals, $|T|=k$. \begin{align} \min~ \frac{1}{2} \sum_{(v,v') \in E} w_{v,v'} \|{\bf x}_{v} - {\bf x}_{v'}\|_{1}: \\ \forall v \in V, ~ \|{\bf x}_{v}\|_{1} = 1,\\ \forall t \in T, ~ {\bf x}_{t} = {\bf 1}_{t},\\ \forall v \in V, ~ {\bf x}_{v} \ge {\bf 0}~. \end{align} Here, ${\bf x}_v \in {\mathbb R}^k$ for each vertex $v \in V$, and ${\bf 1}_t$ denotes the unit basis vector corresponding to terminal $t \in [k]$. The fractional solution can be viewed as an embedding of the graph in the unit simplex $\Delta = \{ {\bf x} \in {\mathbb R}^k: {\bf x} \geq 0, \|{\bf x}\|_1 = 1 \}$, with terminals at the vertices of $\Delta$. Given a fractional solution, the objective of any rounding scheme is to assign each of the vertices to one of the terminals without increasing the objective value $\frac{1}{2}\sum_{(v,v') \in E} w_{v,v'} \|{\bf x}_{v} - {\bf x}_{v'}\|_{1}$ by much. For a rounding scheme to achieve an $\alpha$-approximation to the LP ($\alpha \ge 1$), it suffices to show that for every edge $(v,v') \in E$, the probability that the edge is ``cut'' by the rounding scheme (that is $v$ and $v'$ are assigned to different terminals) is at most $\alpha \cdot \frac{1}{2} \|{\bf x}_{v} - {\bf x}_{v'}\|_{1}$. We call $\frac12 \|{\bf x}_{v} - {\bf x}_{v'}\|_1$ the {\em length} of the edge $(v,v')$. Moreover, as has been shown by \citet{CKR01}, it suffices to consider the case where for each edge $(v,v') \in E$, the two end-points $v$ and $v'$ are mapped so that their corresponding vectors differ in only two coordinates.
In other words, we can assume that ${\bf x}_{v} = (u_{1}, u_{2}, \cdots, u_{k})$ and ${\bf x}_{v'} = (u_{1}, u_{2}, \cdots, u_{i} + \epsilon, \cdots, u_{j} - \epsilon, \cdots, u_{k})$, where $i$ and $j$ are the two coordinates where the two vectors differ. Also, note that in this case, $\frac{1}{2}\|{\bf x}_{v} - {\bf x}_{v'}\|_{1} = \epsilon$. The probability of cutting such an edge should be at most $\alpha \epsilon$. In fact, $\epsilon$ can be made arbitrarily small by subdividing edges. Dividing the cut probability by the length of the edge and letting $\epsilon \rightarrow 0$, we obtain the notion of {\em cut density}. \begin{definition} A randomized rounding scheme is a probability distribution $\cal R$ over labelings $\ell:\Delta \rightarrow [k]$. An edge of type $(i,j)$ is an edge $(v,v')$ where ${\bf x}_v$ and ${\bf x}_{v'}$ differ only in coordinates $i,j$. For a randomized rounding scheme $\cal R$, the cut density for edges of type $(i,j)$ at ${\bf x} \in \Delta$ is \begin{align*}d_{ij}({\bf x}) = \limsup_{\epsilon \rightarrow 0} \frac{\Pr_{\ell \sim {\cal R}}[\ell({\bf x}) \neq \ell({\bf x}+\epsilon {\bf 1}_i - \epsilon {\bf 1}_j)]}{\epsilon}.\end{align*} \end{definition} As shown in \citet{KKSTY04}, to achieve an approximation factor of $\alpha$ for Multiway Cut it is sufficient to demonstrate a rounding scheme such that $d_{ij}({\bf x}) \leq \alpha$ for all $i,j \in [k]$ and ${\bf x} \in \Delta$.
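The relaxation property behind this LP is easy to check numerically: placing every vertex at the simplex corner of its label makes half the $\ell_1$ distance between endpoints exactly the 0/1 cut indicator, so the LP optimum lower-bounds the optimal cut value. A small self-contained check (our own illustration, not part of the paper's argument):

```python
def corner(i, k):
    """The unit basis vector 1_i, i.e., the simplex vertex of terminal i."""
    return [1.0 if j == i else 0.0 for j in range(k)]

def half_l1(x, y):
    """Half the l1 distance -- the 'length' of the edge (x, y)."""
    return 0.5 * sum(abs(a - b) for a, b in zip(x, y))

k = 5
for i in range(k):
    for j in range(k):
        # for an integral labeling, each edge contributes its weight iff it is cut
        assert half_l1(corner(i, k), corner(j, k)) == (0.0 if i == j else 1.0)

# The two-coordinate case from the text: endpoints differing by eps in
# coordinates i and j form an edge of length exactly eps.
eps, u = 0.01, [0.4, 0.3, 0.2, 0.1, 0.0]
v = [u[0] + eps, u[1] - eps] + u[2:]
assert abs(half_l1(u, v) - eps) < 1e-12
print("ok")
```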
\begin{algorithm}[H] \caption{} \label{alg:1.309} \begin{algorithmic} \STATE With probability $p$, choose the Exponential Clocks Rounding Scheme (Algorithm~\ref{alg:exp-clock}). \STATE With probability $1-p$, choose the Single Threshold Rounding Scheme (Algorithm~\ref{alg:single-threshold}). \end{algorithmic} \end{algorithm} \begin{algorithm}[H] \caption{Exponential Clocks Rounding Scheme} \label{alg:exp-clock} \begin{algorithmic}[1] \STATE Choose independent random variables $Z_{i}$ from the exponential distribution for $i=1,\cdots,k$. \STATE For each vertex $v$, assign $v$ to $\arg\min_{i \in [k]} Z_{i}/x_{v, i}$. \end{algorithmic} \end{algorithm} \begin{algorithm}[H] \caption{Single Threshold Rounding Scheme} \label{alg:single-threshold} \begin{algorithmic}[1] \STATE Choose a threshold $\theta \in (0,1]$ with probability density $\phi(\theta)$. \STATE Choose a random permutation $\sigma$ of the terminals. \FORALL{$i$ in $[k-1]$} \STATE For every vertex $v$ with ${\bf x}_v = (x_{v,1}, \cdots, x_{v,k})$ that has not been assigned yet, \STATE assign $v$ to terminal $\sigma(i)$ if $x_{v,\sigma(i)} \ge \theta$. \ENDFOR \STATE Assign all remaining unassigned vertices to terminal $\sigma(k)$. \end{algorithmic} \end{algorithm} Algorithm~\ref{alg:exp-clock} can be viewed as selecting a point ${\bf z}$ uniformly in the simplex $\Delta$ (by taking ${\bf z} = \frac{(Z_1,Z_2,\ldots,Z_k)}{Z_1+Z_2+\ldots+Z_k}$) and then partitioning the simplex into $k$ regions that meet at the point ${\bf z}$. \cite{BNS13} also observe that a rounding scheme from \cite{KT02} can be used in place of Algorithm~\ref{alg:exp-clock} and leads to the same cut density of edges. In Algorithm~\ref{alg:single-threshold}, a threshold $\theta$ is chosen from some distribution and the simplex is partitioned by going over a random permutation of the terminals and assigning all currently unassigned points ${\bf x}$ such that $x_{\sigma(i)} \ge \theta$ to terminal $\sigma(i)$.
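For concreteness, here is a minimal Python sketch of both schemes (our own rendering of Algorithms~\ref{alg:exp-clock} and \ref{alg:single-threshold}; the uniform threshold density used in the demo is only a placeholder, not the $\phi$ of Theorem~\ref{thm:1.309}). Terminals sit at simplex corners, so a basic sanity check is that both schemes always assign each terminal to itself.

```python
import random

def exponential_clocks(points, k, rng):
    """Exponential clocks sketch: Z_i ~ Exp(1) i.i.d.; assign x to argmin_i Z_i / x_i."""
    Z = [rng.expovariate(1.0) for _ in range(k)]
    def clock(x, i):
        # convention: coordinates with zero mass never win the argmin
        return Z[i] / x[i] if x[i] > 0 else float("inf")
    return {v: min(range(k), key=lambda i: clock(x, i)) for v, x in points.items()}

def single_threshold(points, k, sample_theta, rng):
    """Single threshold sketch: one theta, terminals taken in a random order."""
    theta, sigma = sample_theta(), list(range(k))
    rng.shuffle(sigma)
    labels = {}
    for i in sigma[:-1]:
        for v, x in points.items():
            if v not in labels and x[i] >= theta:
                labels[v] = i
    for v in points:                        # leftovers go to the last terminal
        labels.setdefault(v, sigma[-1])
    return labels

rng = random.Random(0)
k = 3
pts = {"t0": [1.0, 0, 0], "t1": [0, 1.0, 0], "t2": [0, 0, 1.0],
       "v": [0.5, 0.3, 0.2]}
for _ in range(100):
    for lab in (exponential_clocks(pts, k, rng),
                single_threshold(pts, k, lambda: rng.uniform(1e-9, 1.0), rng)):
        assert all(lab["t%d" % i] == i for i in range(k))  # terminals keep their label
        assert set(lab) == set(pts)                         # every vertex is assigned
```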
Let us recall the bounds on cut density for Algorithms~\ref{alg:exp-clock} and \ref{alg:single-threshold} from \cite{BNS13}. By symmetry, we focus in the following on edges of type $(1,2)$. \begin{lemma}[\cite{BNS13}] \label{lem:density} The cut density for edges of type $(1,2)$ under Algorithm~\ref{alg:exp-clock} is $$ d_{12}(u_1,u_2,\ldots) = 2 - u_1 - u_2.$$ The cut density for edges of type $(1,2)$ under Algorithm~\ref{alg:single-threshold}, assuming $u_1 \leq u_2$, is $$ d_{12}(u_1,u_2,\ldots) = \frac12 \phi(u_1) + \phi(u_2).$$ \end{lemma} In order to balance Algorithm~\ref{alg:exp-clock} and Algorithm~\ref{alg:single-threshold}, \cite{BNS13} define the probability distribution $\phi(u)$ as a certain power of $u$. Our rounding scheme differs in the choice of this probability distribution. (In fact we claim that we have identified the best possible distribution for this purpose --- see Section~\ref{sec:tight-example}). We prove the following. \begin{theorem} \label{thm:1.309} Algorithm~\ref{alg:1.309}, with $p = \frac{5 + 3\sqrt{5}}{20} \approx 0.58541$ and probability density function (for Algorithm~\ref{alg:single-threshold}) \begin{equation} \phi(u) = \left\{ \begin{array}{rl} a~u & \mbox{for } 0\le u\le b\\ \frac{a}{2}~(u + b) & \mbox{for } b \le u \le 1 \end{array} \right. \end{equation} where $a = \frac{4 + 2\sqrt{5}}{3}$ and $b = \sqrt{5}-2$, achieves a $\frac{3+\sqrt{5}}{4} \approx 1.309017$-approximation for Multiway Cut. \end{theorem} The intuition behind this construction is as follows. Assume that $u_1 < u_2$. Considering Lemma~\ref{lem:density}, we would ideally like to design $\phi$ so that $\frac12 \phi(u_1) + \phi(u_2) = c(u_1+u_2) + d$ for some constants $c,d$, in order to combine it with the cut density of $2-u_1-u_2$ for Algorithm~\ref{alg:exp-clock}.
Unfortunately, it is impossible to design $\phi$ in such a way: If such a function $\phi$ existed, it would have to satisfy $\frac12 \phi'(u_1) = \phi'(u_2)$ for every pair of values $u_1 < u_2$, which is not possible. Still, we can try to satisfy this property for many pairs, and our construction (Figure~\ref{fig:1.309-phi}) achieves this for all pairs such that $u_1 < b < u_2$. This means that our analysis is going to be tight for all such $(u_1,u_2)$ pairs. \input{1309-distribution-phi} Now we calculate the optimal values of $a$ and $b$. Since $\phi$ must be a probability density, $\int_{0}^{1}\phi(u) \,\mathrm{d}u=1$, which gives the constraint $\frac12 a b^2 + \frac{a}{2} (\frac12(1+b) + b)(1-b) = \frac14 a(-b^{2}+2b+1)=1$ (with $a\ge 0$ and $b\in [0,1]$). The following lemma gives the total cut density under Algorithm~\ref{alg:1.309}. \begin{lemma} The cut density for an edge of type $(1,2)$ at $(u_1,u_2,\ldots)$, $u_1 \leq u_2$ under Algorithm~\ref{alg:1.309} is at most $$ p \cdot (2 - u_{1} - u_{2}) + (1-p) \cdot \frac{a}{2} (u_{1}+ u_{2} + b).$$ \end{lemma} \begin{proof} Observe that for any $a, b \geq 0$, the density function $\phi(u)$ can be written equivalently as $\phi(u) = \min \{ au, \frac{a}{2} (u+b) \}$. Plugging this expression into Lemma~\ref{lem:density}, the cut density under Algorithm~\ref{alg:single-threshold} is at most $$ \frac12 \phi(u_1) + \phi(u_2) \leq \frac12 a u_1 + \frac{a}{2} (u_2 + b) = \frac{a}{2} (u_1 + u_2 + b).$$ Since we take Algorithm~\ref{alg:exp-clock} with probability $p$ and Algorithm~\ref{alg:single-threshold} with probability $1-p$, the lemma follows. \end{proof} Hence, we can upper-bound the cut density under Algorithm~\ref{alg:1.309} by $ p (2 - u_{1} - u_{2}) + (1-p) \frac{a}{2} (u_{1}+ u_{2} + b)$. To eliminate the dependence on $u_1+u_2$, we set $p = (1-p)\frac{a}{2}$, which means $p = a / (2+a)$. This makes the bound equal to $2p + (1-p) \frac{a}{2} b = 2p + p b = (2+b)a/(2+a)$.
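Both facts, the normalization of $\phi$ and the cancellation achieved by $p = a/(2+a)$, are easy to confirm numerically with the values from Theorem~\ref{thm:1.309}. The check below is our own sanity test, not part of the proof.

```python
from math import sqrt

a, b = (4 + 2 * sqrt(5)) / 3, sqrt(5) - 2
p = a / (2 + a)                          # equals (5 + 3*sqrt(5))/20

def phi(u):
    # the two linear pieces meet at u = b, where a*b = (a/2)*(b + b)
    return min(a * u, (a / 2) * (u + b))

# phi integrates to one over [0, 1] (midpoint Riemann sum)
n = 200_000
assert abs(sum(phi((i + 0.5) / n) for i in range(n)) / n - 1.0) < 1e-5

# p*(2 - u1 - u2) + (1-p)*(a/2)*(u1 + u2 + b) is constant in u1 + u2,
# because the choice p = a/(2 + a) cancels the linear term.
target = (3 + sqrt(5)) / 4
for s in [0.0, 0.3, 0.7, 1.0, 1.6, 2.0]:   # s plays the role of u1 + u2
    assert abs(p * (2 - s) + (1 - p) * (a / 2) * (s + b) - target) < 1e-12
assert abs(p - (5 + 3 * sqrt(5)) / 20) < 1e-12
print("constant bound (3 + sqrt(5))/4 ~ 1.309017 confirmed")
```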
Hence, the final expression that we would like to minimize is $(2+b)a/(2+a)$ subject to the constraint $\frac14 a(-b^{2}+2b+1)=1$. The minimum is achieved at $a = \frac23(2+\sqrt{5})$ and $b=\sqrt{5}-2$, where the bound on cut density is $(2+b) a / (2+a) = \frac14 (3+\sqrt{5})$. The probability $p$ in Algorithm~\ref{alg:1.309} is $p =a/(2+a)=\frac{1}{20}(5+3\sqrt{5})$. This proves Theorem~\ref{thm:1.309}. \section{Descending Thresholds: $1.30217$-approximation} \label{sec:1.302} As we show in Appendix~\ref{sec:tight-example}, the Exponential Clocks Rounding Scheme combined with the Single Threshold Rounding Scheme (under any threshold distribution) achieves exactly the factor of $\frac{3+\sqrt{5}}{4}$ and not better. In this section, we present an improved $\frac{10+4\sqrt{3}}{13} \approx 1.30217$-approximation for Multiway Cut. This is achieved by combining the techniques of \cite{BNS13} with a new rounding scheme that we call {\em descending thresholds}. Before we describe our algorithm, let us discuss the ideas and considerations that led us to this rounding scheme. \subsection{Pairwise realizable distributions} \label{sec:1.302-pairwise} The tight example in Appendix~\ref{sec:tight-example} serves as a litmus test for any candidate rounding technique: If it does not provide a factor better than $\frac{3+\sqrt{5}}{4}$ on this example, then it will not be helpful in improving the approximation factor. In particular, we know from Appendix~\ref{sec:tight-example} that if we want to use single-coordinate cuts in the form $\{v: x_{v,i} \geq \theta\}$, we cannot use the same threshold $\theta$ for all terminals. It is natural to consider different thresholds for different terminals, but the space of possibilities (all joint probability distributions of $(\theta_1, \theta_2, \ldots, \theta_k)$) seems too vast to explore.
However, let us make the following observation: for an edge of type $(i,j)$, only the thresholds $\theta_i, \theta_j$ corresponding to terminals $i,j$ can potentially cut this edge. Therefore, the cut density for edges of type $(i,j)$ is primarily determined by the distribution of the thresholds $\theta_i, \theta_j$. Assuming that threshold $\theta_i$ is applied before threshold $\theta_j$ and the joint distribution of $\theta_i, \theta_j$ is given by a density function $\pi(\theta_i, \theta_j)$, we can write the following bound on the cut density of an edge of type $(i,j)$ located at ${\bf u}$: $$ \int_0^1 \pi(u_i, u) \, du + \int_{u_i}^{1} \pi(u, u_j) \, du.$$ Note the asymmetry here: the edge is always cut in coordinate $u_i$ if $\theta_i$ cuts it, but it is cut in coordinate $u_j$ only if $\theta_j$ cuts it and the edge was not captured by terminal $i$ before. Eventually, we apply the cuts to the terminals in some order, either an independently random one or one correlated with the values of the thresholds in some way. Given such a rounding scheme, we can compute a bound on the cut density as above for edges of all types. Since the bound is linear as a function of $\pi$, one can formulate a linear program that searches for the best distribution $\pi$. As shown in \cite{KKSTY04}, the symmetry of the terminals implies that whatever rounding scheme we have, we can apply it after a random re-labeling of the terminals. Therefore, we can assume that overall, every pair of thresholds $(\theta_i, \theta_j)$ has the same joint distribution, which is a symmetrization of the distribution $\pi$ above: $\rho(\theta_i,\theta_j) = \frac12 (\pi(\theta_i,\theta_j) + \pi(\theta_j, \theta_i))$. A basic question that we encountered here is: What distributions $\rho$ are actually realizable for all pairs of terminals at the same time?
In Appendix~\ref{sec:pairwise}, we provide the following characterization, at least in a discrete setting: A distribution $\rho$ can be realized for all pairs of terminals simultaneously, if and only if $\rho$ is a convex combination of symmetric product distributions, in other words $\rho$ is in the form $$ P_{ab} = \Pr[X=a,Y=b] = \sum_s \alpha_s p_s(a) p_s(b) $$ where $\alpha_s, p_s(a) \geq 0$ and $\sum_s \alpha_s = 1, \sum_a p_s(a) = 1$ (see Theorem~\ref{thm:pairwise-realizable}). In particular, this implies that the matrix $P_{ab} = \Pr[X=a, Y=b]$ must be {\em positive semidefinite}. Going back to our original motivation, this characterization gives us a hint as to what kinds of distributions over thresholds are worth considering. It is sufficient to consider a combination of rounding schemes, where the thresholds in each scheme are chosen independently from a certain distribution, and then applied in a certain order (which is possibly correlated with their values; {\em this is an aspect independent of the notion of pairwise realizability}). Unfortunately, we do not know how to search efficiently over the space of all such distributions. As far as we know, the condition of being pairwise-realizable is not equivalent to $P_{ab}$ being positive semidefinite. However, when we computed the best distribution $\pi^*$ to be combined with the Exponential Clocks Rounding Scheme, without the restriction of being pairwise-realizable, the symmetrization of this distribution $\rho^* = \frac12 (\pi^* + {\pi^*}^T)$ just happened to be in the pairwise realizable form. Furthermore, we identified this optimal solution $\pi^*$ as a convex combination of two rounding schemes, the Single Threshold Rounding Scheme under a certain distribution $\phi$, and a ``Descending Thresholds Rounding Scheme'' under a certain distribution $\psi$ --- we describe this scheme in the following section. 
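To illustrate the characterization with a toy example of our own: sampling a mixture component $s$ once and then drawing all $X_i$ i.i.d.\ from $p_s$ realizes the same pairwise joint $\rho$ for every pair, while the perfectly anti-correlated distribution fails the positive-semidefiniteness condition.

```python
import random

# Mixture of two symmetric product distributions on {0, 1}:
# with prob 1/2 all coordinates are i.i.d. Bernoulli(0.9), else Bernoulli(0.1).
alphas, ps = [0.5, 0.5], [0.9, 0.1]

def sample_all(k, rng):
    s = 0 if rng.random() < alphas[0] else 1        # pick the component once
    return [int(rng.random() < ps[s]) for _ in range(k)]

def rho(x, y):
    """The common pairwise joint: sum_s alpha_s p_s(x) p_s(y)."""
    return sum(al * (q if x else 1 - q) * (q if y else 1 - q)
               for al, q in zip(alphas, ps))

rng, k, n = random.Random(2), 4, 200_000
counts = {(i, j): {} for i in range(k) for j in range(i + 1, k)}
for _ in range(n):
    X = sample_all(k, rng)
    for (i, j), c in counts.items():
        c[(X[i], X[j])] = c.get((X[i], X[j]), 0) + 1
for c in counts.values():                            # every pair matches rho
    for key in [(0, 0), (0, 1), (1, 0), (1, 1)]:
        assert abs(c.get(key, 0) / n - rho(*key)) < 0.01

# By contrast, P = [[0, 1/2], [1/2, 0]] has determinant -1/4 < 0, so it is
# not positive semidefinite and cannot be realized for three or more variables.
assert 0.0 * 0.0 - 0.5 * 0.5 < 0
```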
\subsection{Descending thresholds} Based on the discussion above and our computational experiments, we propose the following new rounding scheme. \begin{algorithm}[H] \caption{Descending Thresholds Rounding Scheme} \label{alg:descending} \begin{algorithmic}[1] \STATE For each $i \in [k]$, choose independently a threshold $\theta_i \in (0,1]$ with probability density $\psi(\theta)$. \STATE Let $\sigma$ be a permutation of $[k]$ such that $\theta_{\sigma(1)} \geq \theta_{\sigma(2)} \geq \ldots \geq \theta_{\sigma(k)}$. \FORALL{$i$ in $[k-1]$} \STATE For every vertex $v \in V$ that has not been assigned yet, \STATE if $x_{v,\sigma(i)} \ge \theta_{\sigma(i)}$ then assign $v$ to terminal $\sigma(i)$. \ENDFOR \STATE Assign all remaining unassigned vertices to terminal $\sigma(k)$. \end{algorithmic} \end{algorithm} This scheme is in fact quite easy to analyze. The cut density for an edge at a given location is given by the following lemma. \begin{lemma} \label{lem:desc-cut-density} The cut density under Algorithm~\ref{alg:descending} for edges of type $(1,2)$ at $(u_1,u_2,\ldots,u_k)$ such that $u_1 \leq u_2$ is at most $$ \left(1 - \int_{u_1}^{u_2} \psi(u) du \right) \psi(u_1) + \psi(u_2).$$ \end{lemma} \begin{proof} The edge can be cut in coordinate $u_1$ only if $\theta_2$ is not between $u_1$ and $u_2$: if $u_1 < \theta_2 < u_2$, threshold $\theta_2$ would be considered before $\theta_1$ and the entire edge would be assigned to terminal $2$. This accounts for the first term. The edge can also be cut in coordinate $u_2$, which accounts for the second term. We neglect the possibility of the entire edge being assigned to another terminal, which can only decrease the probability of being cut. \end{proof} This lemma provides another way to explain why descending thresholds might be advantageous.
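As an aside, the scheme itself is easy to implement; here is a minimal Python sketch of Algorithm~\ref{alg:descending} (our own rendering; the uniform density on $(0,b]$ with $b=0.5$ is only a placeholder for the demo, anticipating the shape of $\psi$ chosen later). Terminal self-assignment serves as a sanity check.

```python
import random

def descending_thresholds(points, k, sample_psi, rng):
    """Descending Thresholds sketch: i.i.d. thresholds, applied largest first."""
    theta = [sample_psi() for _ in range(k)]
    sigma = sorted(range(k), key=lambda i: -theta[i])   # descending order
    labels = {}
    for i in sigma[:-1]:
        for v, x in points.items():
            if v not in labels and x[i] >= theta[i]:
                labels[v] = i
    for v in points:                    # leftovers go to the last terminal
        labels.setdefault(v, sigma[-1])
    return labels

rng = random.Random(3)
k, b = 3, 0.5          # placeholder breakpoint for the demo density psi ~ U(0, b]
pts = {"t0": [1.0, 0, 0], "t1": [0, 1.0, 0], "t2": [0, 0, 1.0],
       "v": [0.2, 0.5, 0.3]}
for _ in range(200):
    lab = descending_thresholds(pts, k, lambda: rng.uniform(1e-9, b), rng)
    assert all(lab["t%d" % i] == i for i in range(k))   # terminals self-assign
    assert lab["v"] in range(k)                          # every vertex is labeled
```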
The Exponential Clocks Rounding Scheme cuts edges with probability density $2 - u_1 - u_2$, so our goal in the remaining schemes is to achieve a cut density bounded by a linear function of $u_1 + u_2$. The Single Threshold Rounding Scheme achieves this with some slack, because of the asymmetry in its analysis: it cuts edges with $u_1 = u_2$ less often than edges where $u_1$ and $u_2$ are far apart. The Descending Thresholds Rounding Scheme alleviates this problem, because its cut density is lower for edges where $u_1$ and $u_2$ are far apart due to the $\int_{u_1}^{u_2} \psi(u) du$ term. \subsection{The $1.30217$-approximate rounding scheme} It remains to describe how we combine the Descending Thresholds Rounding Scheme with the Exponential Clocks and the Single Threshold Rounding Scheme, in particular how we choose the probability distributions $\phi$ and $\psi$. Based on computational evidence, we determined that $\phi$ should be piecewise linear with a breakpoint at $b \in (0,1)$; then the cut density due to the Single Threshold Rounding Scheme, $\frac12 \phi(u_1) + \phi(u_2)$, is piecewise linear in $u_1$ and $u_2$. Ideally we want a function in the form $a(u_1 + u_2)$, but as we saw in Section~\ref{sec:1.309}, that is hard to achieve. This is where the scheme of Descending Thresholds comes in: We choose the probability distribution $\psi$ for decreasing thresholds uniform in the interval $[0,b]$, so that the cut density due to this scheme, $(1 - \int_{u_1}^{u_2} \psi(u) du) \psi(u_1) + \psi(u_2)$, is again piecewise linear in $u_1$ and $u_2$. Then we are able to balance the parameters so that the total cut density is a constant throughout most of the simplex. Our algorithm will be in the following form. \begin{center} \begin{algorithm}[H] \caption{} \label{alg:1.302} \begin{algorithmic} \STATE \begin{itemize} \item With probability $p_{1}$, choose the Exponential Clocks Rounding Scheme (Algorithm~\ref{alg:exp-clock}).
\item With probability $p_{2}$, choose the Single Threshold Rounding Scheme (Algorithm~\ref{alg:single-threshold}), where the threshold is chosen with the following probability density: \begin{compactitem} \item For $0 \leq u \leq b$, $\phi(u) = a~u$ \item For $b < u \leq 1$, $\phi(u) = c~u+d$ \end{compactitem} \item With probability $p_{3}$, choose the Descending Thresholds Rounding Scheme (Algorithm~\ref{alg:descending}), where the thresholds are chosen with the following probability density: \begin{compactitem} \item For $0 \leq u \leq b$, $\psi(u) = \frac{1}{b}$ \item For $b < u \leq 1$, $\psi(u) = 0$ \end{compactitem} \end{itemize} \end{algorithmic} \end{algorithm} \end{center} Using Lemma~\ref{lem:density} and Lemma~\ref{lem:desc-cut-density}, the cut density for edges of type $(1,2)$ located at $(u_1,u_2,\ldots)$ with $u_1 \leq u_2$ is at most $q = q_1 + q_2 + q_3$, where: \noindent {\bf Case 1:} $0\le u_{1}\le u_{2} \le b$. \begin{compactitem} \item Exponential Clocks: $q_1 = p_{1}~(2-u_{1}-u_{2})$. \item Single Threshold: $q_2 = p_{2} ~ (\frac{a}{2}~u_{1} + a~u_{2})$. \item Descending Thresholds: $q_3 = p_{3}~((1- \frac{u_{2}-u_{1}}{b})\frac{1}{b} + \frac{1}{b})$. \end{compactitem} \begin{align*}q = 2p_{1} + p_{3} \frac{2}{b} + \left(-p_{1} + p_{2} \frac{a}{2} + p_{3} \frac{1}{b^{2}}\right) u_{1} + \left(-p_{1} + p_{2} a -p_{3}\frac{1}{b^{2}}\right) u_{2}~.\end{align*} \noindent {\bf Case 2:} $0\le u_{1} \le b < u_{2} \le 1$. \begin{compactitem} \item Exponential Clocks: $q_1 = p_{1}~(2-u_{1}-u_{2})$. \item Single Threshold: $q_2 = p_{2}~(\frac{a}{2}~u_{1} + c~u_{2}+d)$. \item Descending Thresholds: $q_3 = p_{3}~\frac{u_{1}}{b^{2}}$. \end{compactitem} $$q = 2p_{1} + p_{2}d + \left(-p_{1} + p_{2} \frac{a}{2} + p_{3}\frac{1}{b^{2}}\right) u_{1} + \left(-p_{1} + p_{2}c \right) u_{2}~.$$ \noindent {\bf Case 3:} $b < u_{1} \le u_{2} \le 1$. \begin{compactitem} \item Exponential Clocks: $q_1 = p_{1}~(2-u_{1}-u_{2})$.
\item Single Threshold: $q_2 = p_{2}~(\frac{c}{2}u_{1} + c~u_{2} + \frac{3}{2}d)$. \item Descending Thresholds: $q_3 = 0$. \end{compactitem} $$ q = 2p_{1} + \frac{3}{2}p_{2}d + \left(-p_{1} + p_{2} \frac{c}{2} \right) u_{1} + \left(-p_{1} + p_{2} c \right) u_{2}~.$$ \noindent {\bf Optimizing the parameters.} Let us denote $\tilde{a} = p_{2}a$, $\tilde{c} = p_{2}c$ and $\tilde{d} = p_{2}d$. Given the cut density formulas above, we would like to minimize $z$ subject to the constraints \begin{eqnarray} \forall~0\le u_{1}\le u_{2} \le b; & & z \ge 2p_{1} + \frac{2}{b} p_{3} + (-p_{1} + \frac12 \tilde{a} + \frac{1}{b^{2}} p_{3}) u_{1} + (-p_{1} + \tilde{a} - \frac{1}{b^2} p_{3}) u_{2} \label{eq:zeroToB} \\ \forall~0\le u_{1} \le b \le u_{2} \le 1; & & z \ge 2p_{1} + \tilde{d} + (-p_{1} + \frac12 \tilde{a} + \frac{1}{b^{2}} p_{3}) u_{1} + (-p_{1} + \tilde{c}) u_{2} \label{eq:zeroToBToOne} \\ \forall~b\le u_{1} \le u_{2} \le 1; & & z \ge 2p_{1} + \frac{3}{2} \tilde{d} + (-p_{1} + \frac12 \tilde{c}) u_{1} + (-p_{1} + \tilde{c}) u_{2}\label{eq:bToOne} \\ & & p_{1} + \frac12 \tilde{a}b^{2} + \frac12 \tilde{c}(1-b^{2}) + (1-b) \tilde{d} + p_{3} = 1\label{eq:sum-to-one}\\ & & 0 \le b \le 1 \\ & & p_{1},p_{3}, \tilde{a}, \tilde{c}, \tilde{d} \ge 0 \end{eqnarray} Equation~(\ref{eq:sum-to-one}) follows from combining the probability normalization conditions $p_{1} + p_{2} + p_{3} = 1$ and $\int_0^1 \phi(u) du = \frac12 ab^{2} + \frac12 c(1-b^{2}) + (1-b)d=1$. In order to eliminate the variables $u_1, u_2$, we impose the following conditions: $p_1 = \tilde{c}$, $p_{3} = \frac{b^{2}}{3}\tilde{c}$, $\tilde{a} = \frac{4}{3}\tilde{c}$, and $\tilde{d} = \frac{2}{3}b\tilde{c}$. This replaces all the constraints on $z$ by $z \geq (2 + \frac{2}{3} b) \tilde{c}$. (In constraint~(\ref{eq:bToOne}), the right-hand side becomes $2 \tilde{c} + b \tilde{c} - \frac12 \tilde{c} u_1$ which is dominated by $2 \tilde{c} + \frac12 b \tilde{c}$ for $u_1 \geq b$.)
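This elimination, and the resulting bound, can be checked numerically with the optimal values $b = 2\sqrt{3}-3$ and $\tilde{c} = (6+5\sqrt{3})/26$ (a sanity check of our own, not part of the proof): on a grid over the simplex, the three case bounds never exceed $(2+\frac23 b)\tilde{c} = (10+4\sqrt{3})/13$.

```python
from math import sqrt

b = 2 * sqrt(3) - 3
tc = (6 + 5 * sqrt(3)) / 26                  # tilde-c; the relations set p1 = tc
p1, p3 = tc, (b * b / 3) * tc
ta, td = 4 * tc / 3, 2 * b * tc / 3          # tilde-a, tilde-d
z = (10 + 4 * sqrt(3)) / 13

# normalization constraint (eq:sum-to-one) holds exactly
assert abs(p1 + ta * b * b / 2 + tc * (1 - b * b) / 2 + (1 - b) * td + p3 - 1) < 1e-12

def q(u1, u2):
    """Case-wise cut density bound for u1 <= u2."""
    if u2 <= b:      # Case 1
        return (2 * p1 + 2 * p3 / b
                + (-p1 + ta / 2 + p3 / b ** 2) * u1 + (-p1 + ta - p3 / b ** 2) * u2)
    if u1 <= b:      # Case 2
        return 2 * p1 + td + (-p1 + ta / 2 + p3 / b ** 2) * u1 + (-p1 + tc) * u2
    return 2 * p1 + 1.5 * td + (-p1 + tc / 2) * u1 + (-p1 + tc) * u2   # Case 3

grid = [i / 500 for i in range(501)]
worst = max(q(u1, u2) for u1 in grid for u2 in grid if u1 <= u2 and u1 + u2 <= 1)
assert worst <= z + 1e-9 and abs(worst - z) < 1e-9
print("max cut density on the grid:", round(worst, 6))   # equals (10+4*sqrt(3))/13
```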
Constraint (\ref{eq:sum-to-one}) becomes $\tilde{c} (\frac32 + \frac23 b - \frac16 b^2) = 1$. Therefore, we want to minimize $(2 + \frac23 b) \tilde{c}$ subject to $\tilde{c} (\frac32 + \frac23 b - \frac16 b^2) = 1$, which can be done by hand. The optimal solution is $b = 2 \sqrt{3} - 3 ~(\approx 0.464102)$ and $\tilde{c} = (6 + 5 \sqrt{3})/26 ~(\approx 0.563856)$, which gives $z = (10+4\sqrt{3})/13 \leq 1.30217$. The value of the other parameters can be derived from the equations above: $p_{1} = (6+5\sqrt{3})/26~(\approx 0.563856)$, $p_2 = (19-8\sqrt{3})/13~(\approx 0.395661)$, $p_{3} = (11\sqrt{3} - 18)/26~(\approx 0.040483)$, $\tilde{a} = (12+10\sqrt{3})/39~(\approx 0.75181)$ and $\tilde{d} = (4-\sqrt{3})/13~(\approx 0.17446)$. It can be verified that the cut density in all cases is bounded by $z = (10+4\sqrt{3})/13$. We summarize in the following theorem. \begin{theorem} \label{thm:1.302} Algorithm~\ref{alg:1.302} with parameters $p_1 = (6+5\sqrt{3})/26$, $p_2 = (19-8\sqrt{3})/13$, $p_3 = (11\sqrt{3}-18)/26$, $\tilde{a} = (12+10\sqrt{3})/39$, $b = 2\sqrt{3}-3$, $\tilde{c} = (6+5\sqrt{3})/26$ and $\tilde{d} = (4-\sqrt{3})/13$ gives a $(10 + 4\sqrt{3})/13 \simeq 1.30217$-approximation for the Multiway Cut problem. \end{theorem} \iffalse \subsection{Optimization} Since all equations are linear in $u_{1}$ and $u_{2}$, the maximum value over each interval would occur only at the end points.
{\bf Original Optimization problem:} Hence, we would like to minimize the maximum of {\bf e}gin{align*} 2p_{1} + \frac{2}{b}p_{3}, ~~ 2p_{1} + \frac{2}{b}p_{3} + (-p_{1} + ap_{2} - p_{3}\frac{1}{b^{2}})b, ~~ 2p_{1} + \frac{2}{b}p_{3} + (-2p_{1} + \frac{3}{2}p_{2}a)b,\\ (2-b)p_{1}+p_{2}cb + p_{2}d ,~~ p_{1} + p_{2} (c+d), ~~2p_{1} + p_{2}d + (-2p_{1} + p_{2}(\frac{a}{2}+c) + p_{3}\frac{1}{b^{2}})b\\~~ p_{1} + p_{2} (c+d) + (-p_{1} + \frac{a}{2}p_{2} + \frac{1}{b^{2}}p_{3})b,\\ 2p_{1} + \frac{3}{2}p_{2}d + (-2p_{1} + \frac{3}{2}cp_{2})b,~~ p_{1} + (\frac{3}{2}d+c)p_{2} + (-p_{1} + \frac{c}{2}p_{2})b, ~~ \frac{3}{2}p_{2}(c+d) \end{align*} subject to {\bf e}gin{align} p_{1} + p_{2} + p_{3}=1\\ \frac{1}{2}ab^{2} + d(1-b) + \frac{1}{2}c(1-b^{2}) = 1\\ b \le 1\\ p_{1}, p_{2}, p_{3}, a,b,c,d \ge 0 \end{align} {\bf Optimization problem after replacement:} The above set of equations can be simplified by setting $\hat{a} = p_{2}a$, $\hat{c}=p_{2}c$ and $\hat{d}=p_{2}d$. After replacement, we would like to minimize {\bf e}gin{align*} 2p_{1}+ \frac{2}{b}p_{3}, ~~ 2p_{1}+ \frac{2}{b}p_{3} + (-p_{1} + \hat{a} - p_{3}\frac{1}{b^{2}})b, ~~ 2p_{1} + \frac{2}{b}p_{3} + (-2p_{1} + \frac{3}{2}\hat{a})b\\ (2-b)p_{1}+b\hat{c} + \hat{d}, ~~ p_{1} + \hat{c} + \hat{d}, ~~ 2p_{1} + \hat{d} + (-2p_{1} + \frac{\hat{a}}{2} + \hat{c} + p_{3}\frac{1}{b^{2}})b\\ \\~~ p_{1} + \hat{c} + \hat{d} + (-p_{1} + \hat{a}/2 + p_{3}/b^{2})b\\ 2p_{1}+\frac{3}{2}\hat{d} + (-2p_{1} + \frac{3}{2}\hat{c})b, ~~ p_{1} + \frac{3}{2}\hat{d} + \hat{c} + (-p_{1} + \hat{c}/2)b , ~~ \frac{3}{2}(\hat{c} + \hat{d}) \end{align*} subject to {\bf e}gin{align} p_{1} + \hat{a}b^{2}/2 + \hat{d}(1-b) + \hat{c}(1-b^{2})/2 + p_{3}=1\\ b\le 1\\ p_{1}, p_{3}, \hat{a}, \hat{c}, \hat{d}, b \ge 0 \end{align} {\bf Optimization problem after simplification:} Further simplifications of the equations yield minimizing the maximum of {\bf e}gin{align*} 2p_{1}+ \frac{2}{b}p_{3}, ~~ (2-b)p_{1} + \frac{1}{b}p_{3} + \hat{a}b, ~~ 2p_{1}(1-b) + 
\frac{2}{b}p_{3} + \frac{3}{2}\hat{a}b,\\ (2-b)p_{1}+b\hat{c} + \hat{d}, ~~ p_{1} + \hat{c} + \hat{d}, ~~ 2p_{1} (1-b) + \hat{d} + \hat{a}b/2 + \hat{c}b + p_{3}/b\\ ~~p_{1}(1-b) + \hat{c} + \hat{d} + \hat{a}b/2 + p_{3}/b\\ 2p_{1}(1-b) + \frac{3}{2}(b\hat{c} + \hat{d}), ~~ (1-b)p_{1} + \frac{3}{2}\hat{d} + \hat{c}(1+b/2), ~~ \frac{3}{2}(\hat{c}+\hat{d})
\end{align*}
subject to
\begin{align}
p_{1} + \hat{a}b^{2}/2 + \hat{d}(1-b) + \hat{c}(1-b^{2})/2 + p_{3}=1\\ b\le 1\\ p_{1}, p_{3}, \hat{a}, \hat{c}, \hat{d}, b \ge 0
\end{align}
For a fixed $b$, we have a linear program. At this point, we can probably do a numerical search over $b$. Equating the constraints $(1)$, $(2)$ and $(4)$--$(7)$ in the above linear program, we have $p_{1} = \hat{c}, p_{3} = \frac{b^{2}}{3}\hat{c}, \hat{a} = \frac{4}{3}\hat{c}, \hat{d} = \frac{2}{3}b\hat{c}$. Substituting the values of the various variables in terms of $\hat{c}$ in the last constraint, we get $\hat{c}(\frac{3}{2} + \frac{2}{3}b - \frac{1}{6}b^{2}) = 1$. The objective function in this case becomes $2 \hat{c}(1+b/3)$. Hence, we can minimize $2(1+b/3)/(\frac{3}{2} + \frac{2}{3}b - \frac{1}{6}b^{2})$ over the range $b\in [0,1]$. The function is minimized at $b = 2\sqrt{3} - 3$ and achieves a value of $2(5+2\sqrt{3})/13$. Hence, the minimum value is approximately $1.30217$, achieved at $b=0.464102$.
\subsection{Pairwise-realizable distributions and the tightness of $(10+4\sqrt{3})/13$}
\label{sec:1.302-pairwise}
A natural question is how much further we can push this approach and whether another combination of the rounding schemes above could lead to an improvement. Given that the present analysis for edges of type $(i,j)$ depends only on the variables $u_i, u_j$, one can ask --- what is the best factor that this type of analysis can achieve? Let us explore further the possibilities of thresholding schemes such as Single Threshold or Descending Thresholds. One can envision many possible variants, using dependent or independent thresholds.
However, provided that we want to analyze only the thresholds $\theta_i, \theta_j$ for edges of type $(i,j)$, what matters is the joint distribution of $(\theta_i, \theta_j)$ for each pair $(i,j)$. Moreover, as shown in \cite{KKSTY04}, the symmetry of the terminals implies that whatever rounding scheme we have, we can apply it without loss of generality to a random permutation of the terminals. Therefore, if $\theta_i$ denotes the threshold applied to terminal $i$, we can assume that for every $i,j$, the pair of thresholds $(\theta_i, \theta_j)$ has the same joint distribution. (There can still be a correlation between the random thresholds and the order in which they are applied, as in Descending Thresholds. But for now let us consider the distribution of thresholds when indexed by the respective terminals.) A natural question that arises in this context is the following. \ {\em Given a joint distribution $\rho$ of two random variables $(X,Y)$, is it possible to design arbitrarily many random variables $X_1,X_2,\ldots,X_k$ such that for every $i \neq j$, $(X_i,X_j)$ has the same distribution $\rho$?} \ We call such a distribution $\rho$ {\em pairwise realizable}. This is clearly not possible for every pairwise distribution $\rho$. For example, if $$ \Pr_{(X,Y) \sim \rho}[X=1, Y=0] = \Pr_{(X,Y) \sim \rho}[X=0, Y=1] = 1/2 $$ then we cannot have three random variables such that each pair is distributed according to $\rho$. Here we resolve this question, at least in a discrete setting: The above is possible, if and only if $\rho$ is a convex combination of symmetric product distributions, in other words in the form $$ P_{ab} = \Pr[X=a,Y=b] = \sum_s \alpha_s p_s(a) p_s(b) $$ where $\alpha_s, p_s(a) \geq 0$ and $\sum_s \alpha_s = 1, \sum_a p_s(a) = 1$. (See Appendix~\ref{sec:pairwise} for details.) In particular, this implies that the matrix $P_{ab} = \Pr[X=a, Y=b]$ must be {\em positive semidefinite}. 
Going back to our original motivation, this characterization gives us a hint as to what kinds of distributions over thresholds are worth considering. It is sufficient to consider a combination of rounding schemes, where the thresholds in each scheme are chosen independently from a certain distribution, and then applied in a certain order (possibly correlated with their values). In fact, as we verified numerically by an LP solver, the best pairwise distribution of thresholds $(\theta_i,\theta_j)$ to be combined with the Exponential Clocks Rounding Scheme turns out to be pairwise realizable, in fact exactly in the form that we analyzed above: a combination of the Single Threshold Rounding Scheme and the Descending Thresholds Rounding Scheme. This seems to be a pure coincidence --- it could be the case that the optimal pairwise distribution is not pairwise realizable, and then it would not be clear how to optimize over pairwise realizable distributions. However, it turns out that the optimum among all distributions on two variables is actually pairwise realizable, and thus we have found the best pairwise realizable distribution. To summarize, the factor of $(10+4\sqrt{3})/13$ is the best one can achieve with a combination of the Exponential Clocks Rounding Scheme and {\em any combination} of $k$ single-variable thresholds, as long as the analysis for edges of type $(i,j)$ is restricted to variables $u_i, u_j$.
\fi
\section{Independent Thresholds: Getting Below $1.3$}
\label{sec:below-1.3}
As we show in Appendix~\ref{sec:1.302-tight}, the approximation factor of $\frac{10+4\sqrt{3}}{13} \approx 1.30217$ is the best one we can achieve if we combine Exponential Clocks with any sequence of $k$ single-variable cuts, as long as the analysis for edges of type $(i,j)$ works only with variables $u_i,u_j$. An obvious question is what happens if we consider the role of coordinates other than $u_i, u_j$.
In particular, an edge of type $(i,j)$ can be ``captured'' by another terminal $\ell$, in the sense that both of its endpoints are labeled $\ell$ before coordinates $u_i, u_j$ are even considered. \cite{KKSTY04} rely on this argument to improve their analysis; they use a rounding scheme in the following form.
\begin{algorithm}[H]
\caption{Independent Thresholds Rounding Scheme}
\label{alg:independent}
\begin{algorithmic}[1]
\STATE For each $i \in [k]$, choose independently a threshold $\theta_i \in (0,1]$ with probability density $\xi(\theta)$.
\STATE Let $\sigma$ be a uniformly random permutation of $[k]$.
\FORALL{$i$ in $[k-1]$}
\STATE For every vertex $v \in V$ that has not been assigned yet,
\STATE if $x_{v,\sigma(i)} \ge \theta_{\sigma(i)}$ then assign $v$ to terminal $\sigma(i)$.
\ENDFOR
\STATE Assign all remaining unassigned vertices to terminal $\sigma(k)$.
\end{algorithmic}
\end{algorithm}
In this section, we pursue further improvements to the approximation ratio, using the Independent Thresholds Rounding Scheme. Unfortunately, the inclusion of further coordinates in the analysis leads to more involved expressions which are non-linear in the underlying distributions. In contrast to the results above, we are no longer able to find the best parameters and verify the approximation ratio by hand. Our algorithm in this section has been found with the help of an LP solver; we provide more details on our computational experiments below. Let us remark that the LP solution involves a discretized description of a probability distribution $\phi$, which makes it difficult to even present a description of the algorithm in a concise form. However, we have been able to find an (approximate) closed-form expression for $\phi$ and describe the algorithm in a compact form (see below). This incurs a small loss in the approximation factor.
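For concreteness, the Independent Thresholds Rounding Scheme can be sketched in Python. The dictionary-based representation of the LP solution (`x[v]` standing in for the simplex embedding $(x_{v,1},\ldots,x_{v,k})$ of vertex $v$) and the threshold sampler `sample_theta` are illustrative assumptions of this sketch, not notation from the text.

```python
import random

def independent_thresholds(x, k, sample_theta):
    """Sketch of the Independent Thresholds Rounding Scheme.

    x: dict mapping each vertex v to its simplex embedding, a length-k list of
       nonnegative coordinates summing to 1 (x[v][i] plays the role of x_{v,i}).
    sample_theta: draws one threshold from the density xi on (0, 1].
    Returns a dict assigning each vertex a terminal in 0..k-1.
    """
    theta = [sample_theta() for _ in range(k)]  # independent thresholds, one per terminal
    sigma = list(range(k))
    random.shuffle(sigma)                       # uniformly random order of terminals
    label = {}
    for i in sigma[:-1]:                        # process the first k-1 terminals in order
        for v in x:
            if v not in label and x[v][i] >= theta[i]:
                label[v] = i                    # v crosses terminal i's threshold
    for v in x:                                 # leftovers go to the last terminal
        label.setdefault(v, sigma[-1])
    return label
```

With thresholds drawn uniformly from $[0,b]$ (as the scheme is instantiated later in this section), `sample_theta` would be `lambda: random.uniform(0, b)`; a vertex sitting exactly at a terminal's corner of the simplex is always assigned to that terminal.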
In this section, we do not claim that our approximation is optimal for any particular class of rounding schemes; our aim is to demonstrate that the approximation factor can be pushed below $1.3$. {\em Why the gains are small.} Before we get into the details of our final algorithm and its analysis, let us comment on why it seems difficult to achieve very substantial gains at this point. The gain from considering coordinates other than $u_i, u_j$ comes from the fact that a threshold $\theta_\ell$ might capture an edge of type $(i,j)$ if the respective coordinate $u_\ell$ is above $\theta_\ell$. However, given $u_i$ and $u_j$, all the other coordinates could be very small, namely $u_\ell = (1-u_i-u_j) / (k-2)$. For the rounding schemes that we considered above (Single Threshold and Descending Thresholds), this means that we do not get any improvement, because for any $u_i,u_j>0$, we can have $u_\ell < u_i, u_j$ for all $\ell \neq i,j$; then, it never happens that terminal $\ell$ captures the edge before $i$ or $j$. In order to exploit this argument, we need to use another rounding scheme, such as Independent Thresholds. (Other schemes are possible but computational experiments show that the Independent Thresholds Rounding Scheme is the most effective one out of those we were able to analyze.) Moreover, the probability density $\xi(u)$ should be relatively high for $u \rightarrow 0$, to make the probability $\Pr[\theta_\ell \leq u_\ell]$ significant. On the other hand, this is somewhat contrary to the goal of balancing cut densities with the Exponential Clocks Rounding Scheme where the cut density is $2-u_i-u_j$,~i.e.~maximized for edges with $u_i, u_j$ close to $0$. In other words, we cannot use Independent Thresholds with a very high density near $0$ because this would have significant impact on the cut density of edges with $u_i,u_j$ close to $0$. 
This is our interpretation of why the benefit of the Independent Thresholds Rounding Scheme on top of the other techniques is rather limited.
\subsection{Our $1.2965$-approximation algorithm}
Based on computational experiments, we propose a rounding scheme in the following form, with a probability density $\phi$, parameter $b \in [0,1]$ and probabilities $p_1,p_2,p_3,p_4$ with $p_1+p_2+p_3+p_4 = 1$ to be determined.
\begin{center}
\begin{algorithm}[H]
\caption{}
\label{alg:1.3}
\begin{algorithmic}
\STATE
\begin{compactitem}
\item With probability $p_{1}$, choose the Exponential Clocks Rounding Scheme (Algorithm~\ref{alg:exp-clock}).
\item With probability $p_{2}$, choose the Single Threshold Rounding Scheme (Algorithm~\ref{alg:single-threshold}), where the threshold is chosen with probability density $\phi$.
\item With probability $p_{3}$, choose the Descending Thresholds Rounding Scheme (Algorithm~\ref{alg:descending}), where the thresholds are chosen uniformly in $[0,b]$.
\item With probability $p_{4}$, choose the Independent Thresholds Rounding Scheme (Algorithm~\ref{alg:independent}), where the thresholds are chosen uniformly in $[0,b]$.
\end{compactitem}
\end{algorithmic}
\end{algorithm}
\end{center}
We defer the proofs of the results in this section to the full version of the paper. For the following result, we appeal to \cite{KKSTY04} for the analysis of the Independent Thresholds Rounding Scheme.
\begin{lemma}
\label{lem:Karger-cut}
Given a point $(u_1,u_2,\ldots,u_k) \in \Delta$ and the parameter $b$ of Algorithm~\ref{alg:independent}, let $a = \frac{1-u_1-u_2}{b}$. If $a>0$, the cut density for an edge of type $(1,2)$ located at $(u_1,u_2,\ldots,u_k)$ under the Independent Thresholds Rounding Scheme with parameter $b$ is at most
\begin{itemize}
\item $\frac{2(1-e^{-a})}{ab} - \frac{(u_1+u_2)(1 - (1+a) e^{-a})}{a^2 b^2}$, if all the coordinates $u_1,\ldots,u_k$ are in $[0,b]$.
\item $\frac{a+e^{-a}-1}{a^2 b}$, if $u_1 \in [0,b], u_2 \in (b,1]$ and $u_\ell \in [0,b]$ for all $\ell \geq 3$.
\item $\frac{1}{b} - \frac{u_1+u_2}{6b^2}$, if $u_1,u_2 \in [0,b]$ and $u_\ell \in (b,1]$ for some $\ell \geq 3$.
\item $\frac{1}{3b}$, if $u_1 \in [0,b], u_2 \in (b,1]$ and $u_\ell \in (b,1]$ for some $\ell \geq 3$.
\item $0$, if $u_1, u_2 \in (b,1]$.
\end{itemize}
For $a=0$, the cut density is given by the limit of the expressions above as $a \rightarrow 0$.
\end{lemma}
\begin{proof}
For the first and second cases, we refer to the proof of Lemma 6.6 in \cite{KKSTY04}. They prove that if $u_\ell \in [0,b]$ for all $\ell \geq 3$, then the maximum cut density is achieved when $u_\ell = (1-u_1-u_2)/(k-2)$ for all $\ell \geq 3$, and it is bounded by the following formula:
\begin{align*} C_\infty(u_1,u_2) &= [F'(u_1) + F'(u_2)] \times \frac{1-e^{-a}}{a} - [F'(u_1) F(u_2) + F'(u_2) F(u_1)] \times \frac{1 - (1+a) e^{-a}}{a^2}. \end{align*}
Here, $F$ is the cumulative distribution function for picking the independent thresholds, namely $F(u) = \min \{u/b, 1\}$. Hence, we have $F'(u) = 1/b$ for $u \in [0,b]$ and $F'(u) = 0$ for $u \in (b,1]$. In the case where $u_1, u_2 \in [0,b]$, we obtain $$ C_\infty(u_1,u_2) = \frac{2}{b} \times \frac{1-e^{-a}}{a} - \frac{u_1 + u_2}{b^2} \times \frac{1-(1+a) e^{-a}}{a^2}. $$ In the case where $u_1 \in [0,b], u_2 \in (b,1]$, we obtain $$ C_\infty(u_1,u_2) = \frac{1}{b} \times \frac{1-e^{-a}}{a} - \frac{1}{b} \times \frac{1-(1+a) e^{-a}}{a^2} = \frac{1}{b} \times \frac{a+e^{-a}-1}{a^2}.$$ Let us now consider the case where $u_\ell > b$ for some $\ell \geq 3$, w.l.o.g.~$u_3 > b$.
As shown in Lemma 6.4 in \cite{KKSTY04}, the maximum cut density for such edges is achieved when $u_3 = 1-u_1-u_2 > b$, and then it is equal to
\begin{eqnarray*} C_3(u_1,u_2,1-u_1-u_2) & = & d_3(u_1,u_2,1-u_1-u_2) \\ & = & \frac16 ((F'(u_1) + (1-F(u_1)) F'(u_2)) + (F'(u_2) + (1-F(u_2)) F'(u_1)) \\ & & + (F'(u_1) + 0 \cdot F'(u_2)) + (F'(u_2) + 0 \cdot F'(u_1)) + 0 + 0) \end{eqnarray*}
from equation (1) in \cite{KKSTY04}. For $u_1, u_2 \in [0,b]$, we have $F'(u_1) = F'(u_2) = 1/b$ and $F(u_1) = u_1/b, F(u_2) = u_2/b$, hence
\begin{align*} C_3(u_1,u_2,1-u_1-u_2) &= \frac{1}{6} \left( \frac{1}{b} + \left(1 - \frac{u_1}{b} \right) \frac{1}{b} + \frac{1}{b} + \left(1 - \frac{u_2}{b} \right) \frac{1}{b} + \frac{1}{b} + \frac{1}{b}\right) = \frac{1}{b} - \frac{u_1+u_2}{6b^2}. \end{align*}
For $u_1 \in [0,b], u_2 \in (b,1]$ (which can only occur if $b < 1/2$), we get $F(u_1) = u_1/b, F'(u_1) = 1/b, F(u_2) = 1$ and $F'(u_2) = 0$, hence $$ C_3(u_1,u_2,1-u_1-u_2) = \frac16 \left( \frac{1}{b} + \frac{1}{b} \right) = \frac{1}{3b}.$$ If $u_1,u_2 \in (b,1]$ then the edge is never cut because thresholds are chosen only in $[0,b]$. Finally, the cut density for $a=0$ follows by continuity of the density function, or can be verified directly from the properties of the rounding scheme.
\end{proof}
We also refine the analysis of the Single Threshold and Descending Thresholds Rounding Schemes, depending on the value of the remaining coordinates.
\begin{lemma}
\label{lem:single-cut}
For an edge of type $(1,2)$ located at \\$(u_1,u_2,\ldots,u_k)$, the cut density under the Single Threshold Rounding Scheme is at most
\begin{itemize}
\item $\frac12 \phi(u_1) + \phi(u_2)$, if $u_\ell \leq u_1 \leq u_2$ for all $\ell \geq 3$.
\item $\frac13 \phi(u_1) + \phi(u_2)$, if $u_1 < u_\ell \leq u_2$ for some $\ell \geq 3$.
\item $\frac13 \phi(u_1) + \frac12 \phi(u_2)$, if $u_1 \leq u_2 < u_\ell$ for some $\ell \geq 3$.
\end{itemize}
\end{lemma}
\begin{proof}
The first case follows from Lemma~\ref{lem:density}. We use similar reasoning to handle the other two cases. If there is a coordinate $u_\ell$, $u_1 < u_\ell \leq u_2$, and one of the terminals $\{2,\ell\}$ is considered before $1$, then the edge cannot be cut in coordinate $u_1$. Hence, we get a contribution of $\phi(u_1)$ only if $1$ appears before both $2$ and $\ell$, which happens with probability $1/3$. If $u_1 \leq u_2 < u_\ell$ for some $\ell \geq 3$, then we use the same reasoning for cutting the edge in coordinate $u_1$. In addition, the edge can be cut in coordinate $u_2$ only if terminal $2$ is considered before $\ell$, which happens with probability $1/2$. Hence we get a contribution of $\phi(u_1)$ with probability $1/3$ and $\phi(u_2)$ with probability $1/2$.
\end{proof}
\begin{lemma}
\label{lem:descending-cut}
For an edge of type $(1,2)$ located at \\$(u_1,u_2,\ldots,u_k)$, the cut density under the Descending Thresholds Rounding Scheme is at most
\begin{itemize}
\item $(1 - \int_{u_1}^{u_2} \psi(u) du) \psi(u_1) + \psi(u_2)$, if $u_\ell \leq u_1 \leq u_2$ for all $\ell \geq 3$.
\item $(1 - \int_{u_1}^{u_2} \psi(u) du)(1 - \int_{u_1}^{u_\ell} \psi(u) du) \psi(u_1) + \psi(u_2)$, if $u_1 < u_\ell \leq u_2$ for some $\ell \geq 3$.
\item $(1 - \int_{u_1}^{u_2} \psi(u) du)(1 - \int_{u_1}^{u_\ell} \psi(u) du) \psi(u_1)$ \\$+ (1 - \int_{u_2}^{u_\ell} \psi(u) du) \psi(u_2)$, if $u_1 \leq u_2 < u_\ell$ for some $\ell \geq 3$.
\end{itemize}
\end{lemma}
\begin{proof}
The first case follows from Lemma~\ref{lem:desc-cut-density}. In the second case, we have $u_1 < u_\ell \leq u_2$ and hence the edge can be cut only if it is not captured by terminal $2$ or $\ell$ before terminal $1$ is considered. The edge is captured by terminal $2$ exactly when the threshold $\theta_2$ is between $u_1$ and $u_2$, which happens with probability $\int_{u_1}^{u_2} \psi(u) du$.
Independently, the edge is captured by terminal $\ell$ when $\theta_\ell$ is between $u_1$ and $u_\ell$, which happens with probability $\int_{u_1}^{u_\ell} \psi(u) du$. Therefore, the probability of cutting in coordinate $u_1$ is $(1 - \int_{u_1}^{u_2} \psi(u) du)(1 - \int_{u_1}^{u_\ell} \psi(u) du) \psi(u_1)$. The probability of cutting in coordinate $u_2$ remains bounded by $\psi(u_2)$. In the third case, we have $u_1 \leq u_2 < u_\ell$. The probability of cutting in coordinate $u_1$ remains the same. The probability of cutting in coordinate $u_2$ is now multiplied by the probability that the edge is not captured by terminal $\ell$ before terminal $2$, which is $(1 - \int_{u_2}^{u_\ell} \psi(u) du)$.
\end{proof}
Given these lemmas, we formulate the expressions for the total cut density under Algorithm~\ref{alg:1.3}.
\begin{corollary}
\label{cor:1.3-total-density}
Let $a(u_1,u_2) = (1-u_1-u_2) / b$. The cut density under Algorithm~\ref{alg:1.3} for an edge of type $(1,2)$ located at ${\bf u} = (u_1,u_2,\ldots)$ is $d_{12}({\bf u})$ where
\begin{compactitem}
\item {\bf Case I.} If $u_1 \leq u_2 \leq b$ and $u_\ell \leq b$ for all $\ell \geq 3$,
\begin{align*} d_{12}({\bf u}) \leq &~~ p_1 \left(2 - u_1 - u_2 \right) + p_2 \left(\frac12 \phi(u_1) + \phi(u_2) \right) \\ & + p_3 \left(2 - \frac{1}{b} (u_2-u_1) \right) \frac{1}{b} \\ & + p_4 \left( \frac{2(1-e^{-a(u_1,u_2)})}{a(u_1,u_2) b} - \frac{(u_1+u_2)(1 - (1+a(u_1,u_2)) e^{-a(u_1,u_2)})}{(a(u_1,u_2) b)^2} \right). \end{align*}
\item {\bf Case II.} If $u_1 \leq u_2 \leq b$ and $u_\ell > b$ for some $\ell \geq 3$,
\begin{align*} d_{12}({\bf u}) \leq &~~ p_1 \left(2 - u_1 - u_2 \right) + p_2 \left(\frac13 \phi(u_1) + \frac12 \phi(u_2) \right) \\ & + p_3 \left( \left(1 - \frac{1}{b}(u_2-u_1) \right) u_1 + u_2 \right) \frac{1}{b^2} \\ & + p_4 \left( 1 - \frac{1}{6b} (u_1+u_2) \right) \frac{1}{b}.
\end{align*}
\item {\bf Case III.} If $u_1 \leq b < u_2$ and $u_\ell \leq b$ for all $\ell \geq 3$,
\begin{align*} d_{12}({\bf u}) \leq &~~ p_1 \left(2 - u_1 - u_2 \right) + p_2 \left(\frac12 \phi(u_1) + \phi(u_2) \right) \\ & + p_3 \frac{u_1}{b^2} + p_4 \frac{a(u_1,u_2) + e^{-a(u_1,u_2)} - 1}{(a(u_1,u_2))^2 b}. \end{align*}
\item {\bf Case IV.} If $u_1 \leq b < u_2$ and $u_\ell > b$ for some $\ell \geq 3$,
\begin{align*} d_{12}({\bf u}) \leq &~~ p_1 \left(2 - u_1 - u_2 \right) + p_2 \left(\frac13 \phi(u_1) + \phi(u_2) \right) \\ & + p_3 \frac{u_1^2}{b^3} + p_4 \frac{1}{3b}. \end{align*}
\item {\bf Case V.} If $b < u_1 \leq u_2$,
\begin{align*} d_{12}({\bf u}) \leq &~~ p_1 \left(2 - u_1 - u_2 \right) + p_2 \left(\frac12 \phi(u_1) + \phi(u_2) \right). \end{align*}
\end{compactitem}
For $a(u_1,u_2)=0$, the expressions should be interpreted as limits when $a(u_1,u_2) \rightarrow 0$.
\end{corollary}
We remark that Case IV and Case V are applicable only when $b < 1/2$ (otherwise we cannot have two variables above $b$). These cases were necessary for us to explore different choices of $b$, but eventually we identified $b = 6/11$ as the optimal choice. (Curiously, \cite{KKSTY04} make the same choice, even though they use only two out of the four rounding schemes present here. We do not quite understand this coincidence.) Hence, Case IV and Case V do not arise in our final verification.

{\bf Formulation as an LP.} Given the analysis of Algorithm~\ref{alg:1.3} above, it remains to choose the parameters $b$, $p_1$, $p_2$, $p_3$, $p_4$ and the probability distribution $\phi(u)$ so that $d_{12}({\bf u})$ can be upper-bounded by a constant as small as possible. It is easy to see that, for a fixed $b$, this can be formulated as a linear program (after discretization). The only seemingly non-linear part of the problem is the product $p_2 \phi(u)$, but this can be easily folded into a single variable $\tilde{\phi}(u) = p_2 \phi(u)$.
Finally, we have the normalization constraint $p_1 + p_2 + p_3 + p_4 = p_1 + \int_0^1 \tilde{\phi}(u) du + p_3 + p_4 = 1$. We minimize an upper bound on $d_{12}({\bf u})$ over all values $0 \leq u_1 \leq u_2 \leq 1$, under a suitable discretization. We obtained a solution which involves a discretized description of a probability distribution $\phi$. In order to obtain a concise description, we approximated this probability distribution by a piecewise-polynomial density function which we describe here. Our solution is as follows.
\begin{itemize}
\item $b = 6/11$
\item $p_1 = 0.31052$
\item $p_2 = 0.305782$
\item $p_3 = 0.015338$
\item $p_4 = 0.36836$
\item $\tilde{\phi}(u) = p_2 \phi(u) \\ = 0.14957 u - 0.0478 u^2 + 0.45 u^3 \textrm{ for } 0 \leq u \leq 0.23 \\ = -0.00484 + 0.1995 u - 0.1067 u^2 + 0.158 u^3 \textrm{ for } 0.23 < u \leq 6/11\\ = 0.47639 + 0.21685 u - 0.02388 u^2 - 0.021 u^3 \textrm{ for } 6/11 < u \leq 0.61\\ = 0.47368 + 0.2816 u - 0.18365 u^2 + 0.079 u^3 \textrm{ for } 0.61 < u \leq 0.77\\ = 0.32195 + 0.75 u - 0.6476 u^2 + 0.2239 u^3 \textrm{ for } 0.77 < u \leq 1.$
\end{itemize}
\ \noindent Computational verification confirms that with these parameters, the cut density is upper-bounded in all cases by $1.296445$, for $0 \leq u_1 \leq u_2 \leq 1$ such that $u_1 + u_2 \leq 1$ and $u_1, u_2$ are multiples of $\delta = 1/2^{16}$. Finally, we use the following (standard) argument to bound the error arising from the discretization.
\begin{lemma}
\label{lem:disc-error}
For any function $f:[x_0,x_0+\delta] \times [y_0,y_0+\delta] \rightarrow {\mathbb R}$ such that $\secdiff{f}{x}, \secdiff{f}{y} \geq -d$,
\begin{align*} & {\rm{max}}_{x_0 \leq x \leq x_0+\delta, y_0 \leq y \leq y_0+\delta} f(x,y) \\ \leq & {\rm{max}} \{f(x_0,y_0), f(x_0+\delta,y_0), f(x_0,y_0+\delta), f(x_0+\delta,y_0+\delta)\} + \frac14 d \delta^2.\end{align*}
\end{lemma}
\begin{proof}
Suppose the maximum of $f$ is attained at $(x^*,y^*)$.
Consider the function $g(x,y) = f(x,y) + \frac12 d (x-x_0- \frac12 \delta)^2 + \frac12 d (y-y_0- \frac12 \delta)^2$. By assumption, $\secdiff{g}{x} = \secdiff{f}{x} + d \geq 0$ and $\secdiff{g}{y} = \secdiff{f}{y} + d \geq 0$, i.e.~$g$ is convex along both axis-parallel directions. By convexity, we have
\begin{align*} g(x^*,y^*) \leq & \frac{x_0+\delta-x^*}{\delta} g(x_0,y^*) + \frac{x^*-x_0}{\delta} g(x_0+\delta,y^*) \\ \leq & {\rm{max}} \{ g(x_0, y^*), g(x_0+\delta,y^*) \}.\end{align*}
Repeating the same argument in the $y$-coordinate, we obtain
\begin{align*} g(x^*,y^*) \leq & {\rm{max}} \{ g(x_0,y_0), g(x_0+\delta,y_0), g(x_0,y_0+\delta), g(x_0+\delta,y_0+\delta) \}. \end{align*}
By the definition of $g$, we have $f(x,y) \leq g(x,y) \leq f(x,y) + \frac14 d \delta^2$ for $(x,y) \in [x_0,x_0+\delta] \times [y_0,y_0+\delta]$. This implies that
\begin{align*} f(x^*,y^*) \leq & {\rm{max}} \{ f(x_0,y_0), f(x_0+\delta,y_0), f(x_0,y_0+\delta), f(x_0+\delta,y_0+\delta) \} + \frac14 d \delta^2.\end{align*}
\end{proof}
We apply this lemma to the function $d_{12}({\bf u})$ from Corollary~\ref{cor:1.3-total-density}. It can be verified that for all ${\bf u} \in [0,1]^2$, $\secdiff{d_{12}}{u_1}, \secdiff{d_{12}}{u_2} \geq -16$. (The second derivatives are mostly easy to evaluate, except for the expressions for the Independent Thresholds Rounding Scheme --- appearing with a multiplier of $p_4$; we have verified these by {\em Mathematica}.) By Lemma~\ref{lem:disc-error}, the discretization error can be bounded by $\frac14 d \delta^2 = 4 \cdot 2^{-32} = 2^{-30} < 10^{-9}$. This proves the following theorem.
\begin{theorem}
\label{thm:1.2965}
Algorithm~\ref{alg:1.3} with parameters as above provides a $1.2965$-approximation for the Multiway Cut problem.
\end{theorem}
\paragraph{Acknowledgment}
We would like to thank T. S.
Jayram for bringing the references to exchangeable sequences (\cite{dF37,D77,DF80}) to our attention, and David Aldous for a helpful discussion including pointing out the papers \cite{Kingman78} and \cite{TW98}.
\appendix
\section{Pairwise realizable distributions}
\label{sec:pairwise}
Here, we address a question that came up in our search for the best distribution of thresholds:
\ {\em Given a joint distribution $\rho$ of two random variables $(X,Y)$ over a domain ${\cal D}$, is it possible to design an arbitrarily large number of random variables $X_1,X_2,\ldots,X_k$ such that for every pair $i \neq j$, the distribution of $(X_i, X_j)$ is the same distribution $\rho$?}
\begin{definition}
A probability distribution $\rho$ over ${\cal D} \times {\cal D}$ is {\em pairwise realizable}, if for every integer $k$, there exist $k$ random variables $X_1,X_2,\ldots,X_k$ such that for each pair $i \neq j$, the joint distribution of $(X_i,X_j)$ is $\rho$.
\end{definition}
In particular, we require that the distribution of $(X_i,X_j)$ is the same as the distribution of $(X_j,X_i)$, so $\rho$ should be symmetric. But even for symmetric distributions, this is not always possible. For example, we cannot design 3 random variables $X_1,X_2,X_3$ such that for each pair, we have $(X_i=0,X_j=1)$ or $(X_i=1,X_j=0)$, each with probability $1/2$. Here, we present a necessary and sufficient condition for a distribution to be pairwise realizable. To avoid technical complications, we restrict ourselves to discrete domains ${\cal D}$.
\begin{theorem}
\label{thm:pairwise-realizable}
A probability distribution $\rho$ over a discrete domain ${\cal D} \times {\cal D}$ is pairwise realizable if and only if $\rho$ is a convex combination of symmetric product distributions: $$ \rho(a,b) = \sum_{s=1}^{r} \alpha_s p_s(a) p_s(b), $$ where $p_s: {\cal D} \rightarrow [0,1]$, $\sum_{a \in {\cal D}} p_s(a) = 1$, $\alpha_s \in [0,1]$, and $\sum_{s=1}^{r} \alpha_s = 1$.
\end{theorem}
As pointed out to us by David Aldous, this statement (although apparently not stated explicitly) can also be derived from the theory of {\em exchangeable sequences} \citep{dF37,D77}. An infinite sequence of random variables $X_1, X_2, X_3,\ldots$ is exchangeable if for every $n$ and a permutation $\pi$ on $[n]$, $(X_1, X_2,$ $\ldots,X_n)$ and $(X_{\pi(1)}, X_{\pi(2)},$ $\ldots,$ $X_{\pi(n)})$ have the same distribution. De Finetti's Theorem states that the distribution of every exchangeable sequence is a convex combination of distributions of sequences where $X_1,X_2,\ldots$ are identical and independent \citep{dF37}. Our statement can also be viewed as a result about infinite sequences of random variables: If $X_1,X_2,\ldots$ is an infinite sequence such that every {\em pair} has the same joint distribution, then this distribution is a convex combination of distributions of two identical and independent random variables. This can be derived from known results in the theory of exchangeability (using e.g., \citep{Kingman78,DF80,TW98}); here we give a direct proof using Farkas' lemma (or equivalently the convex separation theorem).
\begin{proof}[Proof of Theorem~\ref{thm:pairwise-realizable}]
The easy direction first: If $\rho(a,b) = \sum_{s=1}^{r} \alpha_s p_s(a) p_s(b)$ as above, we can generate arbitrarily many random variables as follows: With probability $\alpha_s$, we generate $k$ independent random variables $X_1,X_2,\ldots,X_k$, each with probability distribution $p_s$. It is easy to see that for each $i \neq j$, the joint distribution of $(X_i,X_j)$ is $\rho$. The reverse direction relies on the convex separation theorem.
Define ${\cal S}$ to be the convex hull of symmetric product distributions on ${\cal D} \times {\cal D}$, which is exactly the set of all distributions $\rho$ satisfying the conclusion of the theorem:
\begin{eqnarray*} {\cal S} & = & \Big\{ \rho: {\cal D} \times {\cal D} \rightarrow [0,1] \ \Big| \ \rho(a,b) = \sum_{s=1}^{r} \alpha_s p_s(a) p_s(b), \sum_{a \in {\cal D}} p_s(a) = 1, p_s(a) \geq 0, \sum_{s=1}^{r} \alpha_s = 1, \alpha_s \geq 0 \Big\}. \end{eqnarray*}
We note that ${\cal S}$ is a convex compact set in ${\mathbb R}^{{\cal D} \times {\cal D}}$. Hence, if $\tilde{\rho}: {\cal D} \times {\cal D} \rightarrow [0,1]$ is a distribution outside of ${\cal S}$, then by the convex separation theorem there is $y: {\cal D} \times {\cal D} \rightarrow {\mathbb R}$ and $\lambda_1 < \lambda_2$ such that
\begin{itemize}
\item $\sum_{a,b \in {\cal D}} y(a,b) \tilde{\rho}(a,b) = \lambda_1$,
\item $\forall \rho \in {\cal S}; \sum_{a,b \in {\cal D}} y(a,b) \rho(a,b) \geq \lambda_2$.
\end{itemize}
In particular, for every distribution $p: {\cal D} \rightarrow [0,1], \sum_{a \in {\cal D}} p(a) = 1$, we have
\begin{equation} \label{eq:lambda_2-bound} \sum_{a,b \in {\cal D}} y(a,b) p(a) p(b) \geq \lambda_2. \end{equation}
Assume for a contradiction that for any $k$, there are random variables $X_1,X_2,\ldots,X_k$ such that $ \forall i \neq j$ the distribution of $(X_i,X_j)$ is $\tilde{\rho}$. In particular, we choose $k > \frac{1}{\lambda_2-\lambda_1} ({\rm{max}}_{a \in {\cal D}} y(a,a) - \lambda_1)$. (The right-hand side is at least $1$ by equation (\ref{eq:lambda_2-bound}).) We consider the following quantity:
\begin{equation} \label{eq:magic} \sum_{a_1,a_2,\ldots,a_k \in {\cal D}} \Pr[X_1=a_1,X_2=a_2,\ldots,X_k=a_k] \sum_{i \neq j} y(a_i,a_j). \end{equation}
First, fix any choice of $a_1,a_2,\ldots,a_k \in {\cal D}$ and consider the sum $\sum_{i \neq j} y(a_i,a_j)$. Define $p(a) = \frac{1}{k} |\{ i \in [k]: a_i = a\}|$.
Then we have
\begin{eqnarray*} \sum_{i \neq j} y(a_i,a_j) & = & \sum_{i,j=1}^{k} y(a_i,a_j) - \sum_{i=1}^{k} y(a_i,a_i) = k^2 \sum_{a,b \in {\cal D}} p(a) p(b) y(a,b) - k \sum_{a \in {\cal D}} p(a) y(a,a). \end{eqnarray*}
By the properties of $y(a,b)$, we get
\begin{eqnarray*} \sum_{i \neq j} y(a_i,a_j) & = & k^2 \sum_{a,b \in {\cal D}} p(a) p(b) y(a,b) - k \sum_{a \in {\cal D}} p(a) y(a,a) \\ & \geq & k^2 \lambda_2 - k \sum_{a \in {\cal D}} p(a) y(a,a) \\ & \geq & k^2 \lambda_2 - k \, {\rm{max}}_{a \in {\cal D}} y(a,a) . \end{eqnarray*}
Now we use the fact that $k > \frac{1}{\lambda_2-\lambda_1} ({\rm{max}}_{a \in {\cal D}} y(a,a) - \lambda_1)$, and therefore $k \lambda_2 > k \lambda_1 + ({\rm{max}}_{a \in {\cal D}} y(a,a) - \lambda_1) = (k-1) \lambda_1 + {\rm{max}}_{a \in {\cal D}} y(a,a)$. We obtain
\begin{eqnarray*} \sum_{i \neq j} y(a_i,a_j) & > & k (k-1) \lambda_1. \end{eqnarray*}
To summarize, expression (\ref{eq:magic}), which is a convex combination of such terms, is strictly lower-bounded by
\begin{equation} \label{eq:magic-1} \sum_{a_1,a_2,\ldots,a_k \in {\cal D}} \Pr[X_1=a_1,X_2=a_2,\ldots,X_k=a_k] \sum_{i \neq j} y(a_i,a_j) > k (k-1) \lambda_1. \end{equation}
On the other hand, by switching the sums, we have
\begin{eqnarray*} & & \sum_{a_1,a_2,\ldots,a_k \in {\cal D}} \Pr[X_1=a_1,X_2=a_2,\ldots,X_k=a_k] \sum_{i \neq j} y(a_i,a_j) \\ &= & \sum_{i \neq j} \sum_{a_i,a_j \in {\cal D}} y(a_i,a_j) \hspace{-0.2in} \sum_{(a_\ell \in {\cal D}: \ell \neq i,j)} \Pr[X_1=a_1,X_2=a_2,\ldots,X_k=a_k] \\ &= & \sum_{i \neq j} \sum_{a_i,a_j \in {\cal D}} y(a_i,a_j) \Pr[X_i=a_i, X_j=a_j] \\ &= & \sum_{i \neq j} \sum_{a_i,a_j \in {\cal D}} y(a_i,a_j) \tilde{\rho}(a_i,a_j) = k(k-1) \lambda_1 \end{eqnarray*}
again by the properties of $y(a,b)$. This is a contradiction with (\ref{eq:magic-1}).
\end{proof}
\section{A tight example for $\frac{3+\sqrt{5}}{4}$}
\label{sec:tight-example}
Here we show the following lower bound.
\begin{theorem} No combination of the Single Threshold Rounding Scheme (with any distribution) and the Exponential Clocks Rounding Scheme can provide an approximation factor better than $(3+\sqrt{5})/4$. \end{theorem} We show this by considering a game between the algorithm and an adversary. The strategy space of the adversary is to pick an edge in the simplex. The strategy space of the algorithm is to pick a partition of the simplex into $k$ parts of the following type: \begin{enumerate} \item Exponential Clocks: a random partition generated by Algorithm~\ref{alg:exp-clock}, or \item Single Threshold: a partition generated by Algorithm~\ref{alg:single-threshold} for any fixed value of $\theta$ \end{enumerate} and any probability distribution over the strategies above. The game is a zero-sum game: if the endpoints of the edge picked by the adversary belong to two different parts of the partition picked by the algorithm, then the algorithm pays the adversary a cost of $1$ divided by the length of the edge (recall that the length of an edge is defined to be $\frac12 \times$ the $L_{1}$-distance between its endpoints). If the edge belongs to a single part of the partition, the payoff to both players is $0$. Clearly, if there is an algorithm in the strategy space that achieves a cut density bounded by $\alpha$, then this implies a strategy for the algorithm player that pays at most $\alpha$ in expectation against any adversary. We present a strategy for the adversary such that the algorithm has to pay at least $\frac{3+\sqrt{5}}{4} - O(\frac{1}{k})$ in expectation. That is, a cut density better than $\frac{3+\sqrt{5}}{4} - O(\frac{1}{k})$ cannot be achieved by this type of algorithm. \subsection{Probability distribution over edges} Here we define a strategy for the adversary. For each \emph{ordered} pair of terminals $(i,j)$ ($i, j \in [k]$, $i\ne j$), we have three sets of edges: $A_{ij}$, $B_{ij}$ and $C_{ij}$.
The edges in each of the sets $A_{ij}$, $B_{ij}$ and $C_{ij}$ have the property that their endpoints differ only in the coordinates $i$ and $j$. Hence, the edges ${\bf (v,v')}$ in these sets are of the form ${\bf v}=(u_{1}, \cdots, u_{k})$ and ${\bf v'}=(u_{1}, \cdots, u_{i}-\epsilon, \cdots, u_{j}+\epsilon, \cdots, u_{k})$. In all three sets, we shall have $\forall k\ne i,j, ~ u_{k}=(1-u_{i}-u_{j})/(k-2)$. Hence, each edge is determined by the values of $u_{i}$ and $u_{j}$. \begin{itemize}[noitemsep,topsep=0pt,parsep=0pt,partopsep=0pt] \item $A_{ij}$ consists of all edges such that $u_{i}=x$ for some $x\in[3b, 1]$ and $u_{j}=0$. \item $B_{ij}$ consists of all edges such that $u_{i}+2u_{j}=3b$ with $b \le u_{i} \le 3b$ (and hence, $0\le u_{j}\le b$). \item $C_{ij}$ consists of a single edge defined by $u_{i}=1$ and $u_{j}=0$. \end{itemize} This completes the description of the sets $A_{ij}$, $B_{ij}$ and $C_{ij}$ except for setting the values of the parameters $b$ and $\epsilon$. The three sets are depicted in Figure~\ref{fig:tight-example}. We shall set $b= \sqrt{5}-2$ (as in Section~\ref{sec:1.309}) and $\epsilon = (1-2b)/(k-2)$. \input{tight-example-golden-ratio-fig} We now define the probability distribution over $\bigcup_{(i,j)} (A_{ij} \cup B_{ij} \cup C_{ij})$ that defines the strategy of the adversary. Starting with $C_{ij}$, the single edge is chosen with probability $\frac{2(1-2b)}{(1-b)(k-2)}~\frac{1}{k(k-1)}$. Hence, the total probability mass over the edges in $\bigcup_{(i,j)}C_{ij}$ is $\frac{2(1-2b)}{(1-b)(k-2)}$. The adversary has a uniform distribution over edges in $\bigcup_{(i,j)}A_{ij}$. Similarly, the adversary has a uniform distribution over edges in $\bigcup_{(i,j)}B_{ij}$. The total probability mass of the edges in $\bigcup_{(i,j)}A_{ij}$ is $\frac{1-3b}{1-b}~(1-\frac{1}{k-2})$, and the total probability mass of the edges in $\bigcup_{(i,j)}B_{ij}$ is $(\frac{2b}{1-b} - \frac{1}{k-2})$.
Note that there are $k(k-1)$ ordered pairs $(i,j)$, hence for a particular ordered pair $(i,j)$, the sets $A_{ij}$ and $B_{ij}$ carry a total probability mass of $\frac{(1-3b)}{(1-b)}(1-\frac{1}{k-2}) \frac{1}{k(k-1)} $ and $(\frac{2b}{1-b} - \frac{1}{k-2}) \frac{1}{k(k-1)}$ respectively. We defer the analysis of the max-min value of the game to the full version of the paper. It can be verified that the probabilities add up to $$\frac{2(1-2b)}{(1-b)(k-2)} + \frac{1-3b}{1-b}\left(1-\frac{1}{k-2}\right) + \left(\frac{2b}{1-b} - \frac{1}{k-2}\right) = 1.$$ In the following two subsections, we show that any strategy adopted by the algorithm will incur a cost of at least $\frac{3+\sqrt{5}}{4} - O(\frac{1}{k})$. Note that the length of every edge used in the probability distribution by the adversary is $\epsilon$. Therefore, we need to show that the probability of the edge being cut by any strategy of the algorithm is at least $\left( \frac{3+\sqrt{5}}{4} - O(\frac{1}{k}) \right) \epsilon$. We remark that by our choice of $b = \sqrt{5}-2$, we have $\frac{3+\sqrt{5}}{4} = \frac{1}{1-b}$. \subsection{Performance of the Exponential Clocks \\ Rounding Scheme} For any edge $({\bf v,v'})$ which differ only on coordinates $i$ and $j$, and is of length $\epsilon$, we know from \cite{BNS13} that the probability that a random partition from the Exponential Clocks Rounding Scheme cuts the edge is $\frac{2-u_{i}-u_{j}+\epsilon}{1+\epsilon}~\epsilon \geq (2-u_i-u_j-\epsilon)~\epsilon$.
Hence, the probability of an edge being cut in the set $A_{ij}$ (for an ordered pair $(i,j)$) is at least \begin{align*}\frac{1-3b}{1-b} \left(1-\frac{1}{k-2}\right) \frac{1}{k(k-1)} \cdot \frac{1}{1-3b}~\int_{3b}^{1}(2-x-\epsilon) \mathrm{d}x \cdot \epsilon \\= \left( \frac{3}{2}(1-3b) - O\left(\frac{1}{k} \right) \right)~\frac{\epsilon}{k(k-1)}.\end{align*} Similarly, the probability of an edge being cut in the set $B_{ij}$ is at least \begin{align*}\left(\frac{2b}{1-b} - \frac{1}{k-2}\right)\frac{1}{k(k-1)} \cdot \frac{1}{b}~\int_{0}^{b}(2-(3b-x)-\epsilon) \mathrm{d}x \cdot \epsilon \\= \left(\frac{b}{1-b}~\left(4-5b\right) - O\left(\frac{1}{k}\right) \right)~\frac{\epsilon}{k(k-1)}~.\end{align*} The probability of an edge being cut in $C_{ij}$ is $\frac{2(1-2b)}{(1-b)(k-2)}\frac{\epsilon}{k(k-1)}$, which does not contribute significantly to the total probability; since we are interested in a lower bound, we can ignore the contribution of $C_{ij}$ here. Adding up over all $i \neq j$, we obtain that the total probability of an edge being cut by the algorithm is at least $$\left(\frac{3}{2}(1-3b) + \frac{b}{1-b}~\left(4-5b\right) - O\left(\frac{1}{k}\right) \right)~\epsilon$$ which is equal to $\left(\frac{3+\sqrt{5}}{4} - O(\frac{1}{k}) \right)~\epsilon$ after plugging in $b=\sqrt{5}-2$. \subsection{Performance of a partition induced by a single threshold} We now consider a partition induced by choosing a single threshold $\theta$ and a random permutation $\sigma$ of the terminals. Since the distribution of the edges over the terminals is symmetric, we can consider the case when $\sigma$ is the identity permutation. Define an edge to be \emph{captured} by a terminal if both endpoints of the edge are assigned to this terminal. Similarly, define an edge to be \emph{cut} by a terminal if one of the endpoints is assigned to the terminal but the other endpoint is not. Before we delve into the analysis, we would like to make an observation.
For any ordered pair $(i,j)$, in both $A_{ij}$ and $B_{ij}$, we know that $\forall k \ne i,j$, $u_{k}=(1-u_{i}-u_{j})/(k-2)$. Since $u_i+u_j \geq 2b$ for all edges in $A_{ij} \cup B_{ij}$, this means that the values of $u_{k}$ ($k\ne i,j$) among all edges in $A_{ij}$ and $B_{ij}$ are upper-bounded by $(1-2b)/(k-2) = \epsilon$. For the edges in $C_{ij}$, we have $u_i + u_j = 1$, and hence all the remaining coordinates are $0$. This implies the following two observations. \begin{observation} \label{obs:ABuntouchedbyk} For $\theta \in (\epsilon,1]$, for any ordered pair $(i,j)$, the probability that an edge in $A_{ij}$ or $B_{ij}$ is either captured or cut by a terminal $k\ne i,j$ is zero. This holds irrespective of the permutation chosen over the terminals. \end{observation} \begin{observation} \label{obs:Cuntouchedbyk} For $\theta \in (0,1]$, for any ordered pair $(i,j)$, the probability that an edge in $C_{ij}$ is either captured or cut by a terminal $k\ne i,j$ is zero. This holds irrespective of the permutation chosen over the terminals. \end{observation} We break the analysis into several cases depending on the value of $\theta$. We shall use the following in our analysis below. \begin{itemize} \item Observation~\ref{obs:ABuntouchedbyk} applies to the range of $\theta$ dealt with in Sections~\ref{sec:3bthru1}, \ref{sec:bthru3b}, and \ref{sec:lowthrub}, and hence while calculating the probability of an edge from $A_{ij}$ and $B_{ij}$ being cut, we can focus only on whether terminals $i$ or $j$ cut the edge. \item Observation~\ref{obs:Cuntouchedbyk} applies to the entire range of $\theta$, and in particular we shall use it in Sections~\ref{sec:lowthrub} and \ref{sec:verytop} by focusing only on whether terminals $i$ and $j$ cut the edge when dealing with the edge from $C_{ij}$.
\item In the algorithm, when using the strategy of a single threshold, we note that only the first $k-1$ terminals in the random permutation get assigned vertices according to the threshold. The last terminal gets assigned all the remaining unassigned vertices. Hence, in the following analysis, at several points, we shall sum over only $k-1$ terminals (instead of $k$). \end{itemize} Finally, we remind the reader that we shall assume that $\sigma$ is the identity permutation in the analysis below; since the distribution of the edges is symmetric over the terminals, the choice of the permutation does not make a difference in the analysis. We recall that $b = \sqrt{5}-2$ and $\frac{1}{1-b} = \frac{3+\sqrt{5}}{4}$, which will be useful below. \subsubsection{Case $1-\epsilon < \theta \leq 1$} \label{sec:verytop} In this case, for each $i,j\in [k], ~i\ne j$, the edge in $C_{ij}$ will be cut by terminal $i$. Since each $C_{ij}$ is chosen with probability $\frac{2(1-2b)}{(1-b)(k-2)} \frac{1}{k(k-1)}$, the total probability of an edge being cut is at least $\frac{2(1-2b)}{(1-b)(k-2)}$. Note that by the choice of $\epsilon$, this quantity is equal to $\frac{2}{1-b}~\epsilon = \frac{3+\sqrt{5}}{2} \epsilon$. \subsubsection{Case $3b < \theta \le 1-\epsilon$} \label{sec:3bthru1} It is easy to see in this case that for each $i \in [k-1]$, the terminal $i$ will cut the edges in $A_{ij}$ (for all $j\in [k] \setminus \{i\}$) where $u_{i}-\epsilon < \theta \le u_{i}$. This is an $\epsilon$-size interval among the edges in $A_{ij}$ parameterized by $u_i$. Since the probability density of choosing an edge with given $u_i$ in $A_{ij}$ is $\frac{1}{1-b} (1-\frac{1}{k-2}) \frac{1}{k(k-1)}$, the probability that terminal $i$ cuts an edge in $A_{ij}$ is $\frac{1}{1-b}(1-\frac{1}{k-2}) \frac{1}{k(k-1)}\cdot \epsilon$.
Summing over all $A_{ij}$'s ($i \in [k-1], j\in [k] \setminus \{i\}$), we get that the total probability of an edge being cut is $\frac{1}{1-b}(1-\frac{1}{k-2})(1-\frac{1}{k})~\epsilon = \frac{3+\sqrt{5}}{4}(1-O(\frac{1}{k}))~\epsilon$. \subsubsection{Case $b < \theta < 3b-\epsilon$} \label{sec:bthru3b} In this case, each $i \in [k-1]$ cuts the edges in $B_{ij}$ (for all $j\in [k] \setminus \{i\}$) where $u_{i}-\epsilon < \theta \le u_{i}$. Again, this is an $\epsilon$-size interval among the edges in $B_{ij}$, in terms of the parameter $u_i$. Since the probability density of choosing an edge with given $u_i$ in $B_{ij}$ is $(\frac{1}{1-b} - \frac{1}{2b(k-2)}) \frac{1}{k(k-1)}$, the probability of an edge being cut is $(\frac{1}{1-b} - \frac{1}{2(k-2)b})~\frac{1}{k(k-1)}~\epsilon$. Summing over all $B_{ij}$'s ($i \in [k-1], j\in [k] \setminus \{i\}$), we get that the total probability of an edge being cut is $(\frac{1}{1-b} - \frac{1}{2(k-2)b})~(1-\frac{1}{k})~\epsilon = (\frac{3+\sqrt{5}}{4} - O(\frac{1}{k}))~\epsilon$. \subsubsection{Case $3b-\epsilon \le \theta \le 3b$} This case is essentially a transition between Cases~\ref{sec:3bthru1} and~\ref{sec:bthru3b}. Here, both edges in $A_{ij}$ and $B_{ij}$ can be cut by terminal $i$, with probabilities that depend on the value of $\theta$. As $\theta$ moves from $3b-\epsilon$ to $3b$, the interval of edges cut in $A_{ij}$ increases and the interval of edges cut in $B_{ij}$ decreases linearly. It can be verified that the probability of being cut is a convex combination of the probabilities in Cases~\ref{sec:3bthru1} and \ref{sec:bthru3b}, with a linear transition from Case~\ref{sec:3bthru1} at $\theta=3b$ to Case~\ref{sec:bthru3b} at $\theta=3b-\epsilon$, hence always at least $(\frac{3+\sqrt{5}}{4} - O(\frac{1}{k}))~\epsilon$.
\subsubsection{Case $\epsilon < \theta \le b$} \label{sec:lowthrub} In this case, each terminal $i \in [k-1]$ cuts the edges in $B_{ji}$ (for all $j>i$) where $u_{i} \le \theta \le u_{i}+\epsilon$. Note the two ways in which this case differs from Section~\ref{sec:bthru3b}. First, the edges in $B_{ji}$ are cut by terminal $i$ (not by terminal $j$), and only when $j>i$ (i.e., terminal $j$ is considered after terminal $i$; otherwise the edge would have been captured by terminal $j$). Second, the probability density of choosing edges in $B_{ji}$ in terms of the parameter $u_i$ is $(\frac{2}{1-b} - \frac{1}{(k-2)b}) \frac{1}{k(k-1)}$, twice as large as the probability density in terms of the parameter $u_j$. Therefore, for any $B_{ji}$, with $i\in [k-1]$ and $j>i$, the probability of an edge being cut is $(\frac{2}{1-b} - \frac{1}{(k-2)b})~\frac{1}{k(k-1)}~\epsilon$. Now we are summing only over $i \in [k-1]$ and $j>i$, which leads again to a total probability of an edge being cut of $(\frac{1}{1-b} - \frac{1}{2(k-2)b})~\epsilon = (\frac{3+\sqrt{5}}{4} - O(\frac{1}{k}))~\epsilon$. \subsubsection{Case $0 < \theta \le \epsilon$} \label{sec:verybottom} This is the only case in which we have to consider the possibility of an edge being captured by a terminal other than $i,j$. (See Observations~\ref{obs:ABuntouchedbyk}~and~\ref{obs:Cuntouchedbyk} above.) Here we count only the contribution of edges in $C_{ij}$ (which can never be captured by $k \neq i,j$ by Observation~\ref{obs:Cuntouchedbyk}). In this case, for each $i\in [k]$ and $j>i$, the edge in $C_{ji}$ is cut by terminal $i$. Hence, the total probability of an edge being cut in $\bigcup_{(i,j):j>i}C_{ij}$ is $\frac{1-2b}{(1-b)(k-2)}$. Note that by the choice of $\epsilon$, this quantity is equal to $\frac{1}{1-b}~\epsilon = \frac{3+\sqrt{5}}{4}~\epsilon$.
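The constants used throughout this case analysis can be sanity-checked numerically. The following script is an illustration, not part of the proof: it verifies that the adversary's edge masses sum to one for every $k>2$, that $\frac{1}{1-b}=\frac{3+\sqrt{5}}{4}$ for $b=\sqrt{5}-2$, and that the two leading terms of the Exponential Clocks bound add up to exactly $\frac{3+\sqrt{5}}{4}$ in the $k\to\infty$ limit.

```python
import math

b = math.sqrt(5) - 2

# the identity 1/(1-b) = (3+sqrt(5))/4 used repeatedly in the case analysis
assert abs(1 / (1 - b) - (3 + math.sqrt(5)) / 4) < 1e-12

# the adversary's edge masses over the C, A, B sets sum to one for any k > 2
for k in (5, 10, 1000):
    mass_C = 2 * (1 - 2 * b) / ((1 - b) * (k - 2))
    mass_A = (1 - 3 * b) / (1 - b) * (1 - 1 / (k - 2))
    mass_B = 2 * b / (1 - b) - 1 / (k - 2)
    assert abs(mass_C + mass_A + mass_B - 1) < 1e-12

# Exponential Clocks bound in the k -> infinity limit (epsilon -> 0):
# A-edges contribute mass (1-3b)/(1-b) times the average cut density
# (1/(1-3b)) * integral_{3b}^{1} (2-x) dx
cost_A = ((2 - 1 / 2) - (6 * b - 9 * b**2 / 2)) / (1 - b)
# B-edges contribute mass 2b/(1-b) times the average cut density
# (1/b) * integral_{0}^{b} (2 - (3b - x)) dx
cost_B = 2 * ((2 - 3 * b) * b + b**2 / 2) / (1 - b)

assert abs(cost_A - 1.5 * (1 - 3 * b)) < 1e-12
assert abs(cost_B - b / (1 - b) * (4 - 5 * b)) < 1e-12
assert abs(cost_A + cost_B - (3 + math.sqrt(5)) / 4) < 1e-12
```

The two closed forms $\frac{3}{2}(1-3b)$ and $\frac{b}{1-b}(4-5b)$ match the displayed integrals, and their sum is the golden-ratio bound.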
\begin{table}[h] \begin{center} \begin{tabular}{|c|c|c|c|} \hline \multirow{2}{*}{Range of $\theta$}&\multicolumn{2}{|c|}{Edges cut}&\multirow{2}{*}{Cost}\\\cline{2-3} &in set&by terminal&\\\hline $[1-\epsilon, 1]$ & $C_{ij}$ & $i$ & $\frac{3+\sqrt{5}}{2}$\\\hline $[3b, 1-\epsilon]$ & $A_{ij}$ & $i$ & $\frac{3+\sqrt{5}}{4} - O(\frac{1}{k})$\\\hline $[b, 3b]$ & $B_{ij}$ & $i$ & $\frac{3+\sqrt{5}}{4} - O(\frac{1}{k})$\\\hline $[\epsilon, b]$ & $B_{ji}$ & $i$ ($j \succ i$)$^{\dagger}$ & $\frac{3+\sqrt{5}}{4} - O(\frac{1}{k})$\\\hline $(0,\epsilon]$ & $C_{ji}$ & $i$ & $\frac{3+\sqrt{5}}{4}$\\\hline \end{tabular} \end{center} \caption{This table summarizes the expected cost paid by the algorithm depending on the value of $\theta$ in the single-threshold strategy. \newline $^{\dagger}$This indicates that terminal $j$ must occur after terminal $i$ in the random permutation over terminals chosen by the algorithm.} \end{table} \section{A tight example for $\frac{10+4\sqrt{3}}{13}$} \label{sec:1.302-tight} Here we prove the following lower bound, matching our $\frac{10+4\sqrt{3}}{13}$-approximation from Section~\ref{sec:1.302}. \begin{theorem} For any combination of the Exponential Clocks Rounding Scheme and schemes with $k$ threshold cuts, one for each terminal (and an arbitrary joint distribution), as long as the analysis of threshold cuts considers only the thresholds $\theta_i, \theta_j$ for edges of type $(i,j)$ (and not the possibility of being captured by another terminal), it cannot achieve a factor better than $\frac{10+4\sqrt{3}}{13}$. \end{theorem} More precisely, what this means is that the analysis takes into account the possibility of cutting an edge $(i,j)$ by thresholds $\theta_i, \theta_j$ or allocating the edge fully to terminal $i$ or $j$, but not the possibility of allocating the edge fully to another terminal. This is the way we analyze our algorithm in Section~\ref{sec:1.302} where this factor is achieved.
As in Section~\ref{sec:tight-example}, we define a probability distribution of edges in the simplex such that any partition strategy of the following type has to pay at least $\frac{10+4\sqrt{3}}{13}$ in expectation: \begin{enumerate}[noitemsep,topsep=0pt,parsep=0pt,partopsep=0pt] \item Exponential Clocks: a random partition generated by Algorithm~\ref{alg:exp-clock}, or \item Simple Thresholds: any sequence of $k$ threshold cuts, one for each terminal (such as the Single Threshold or Descending Thresholds Rounding Scheme). \end{enumerate} Here, we modify the game to reflect the fact that we are considering analysis that depends only on the two coordinates $u_i,u_j$ for edges of type $(i,j)$: if the partition player uses thresholds $\theta_i,\theta_j$ in these coordinates, and say $\theta_i$ is applied first, he pays whenever the edge is cut in coordinate $u_i$ or the edge is cut in coordinate $u_j$ and {\em not captured} in coordinate $u_i$ (i.e. other coordinates are not considered for the purposes of determining the payment). \subsection{Probability distribution over edges} Let us define the distribution over edges of type $(1,2)$ (Figure~\ref{fig:tight-example-1-302}). Edges of type $(i,j)$ are distributed analogously. We have $\alpha = \frac{-3 + 4\sqrt{3}}{13}$, $\gamma = \frac{19 - 8\sqrt{3}}{26}$ and $b = 2 \sqrt{3} - 3$. The location of each edge satisfies $u_{3} = u_{4} = \cdots = u_{k} = (1 - u_{1} - u_{2})/(k-2)$, i.e. it is determined by $u_1$ and $u_2$. \begin{itemize}[noitemsep,topsep=0pt,parsep=0pt,partopsep=0pt,leftmargin=*] \item Region $R_{A}$: Uniform density in the region defined by $u_{1} \in [0,b]$ and $u_{2} \in [0,b]$ with a total probability mass of $\alpha$. \item Region $R_{B1}$: Uniform density in the region defined by $u_{1} - \frac{1-2b}{b} u_{2} \ge b$ on the simplex with a total probability mass of $\alpha/4$.
\item Region $R_{B2}$: Uniform density in the region defined by $u_{2} - \frac{1-2b}{b} u_{1} \ge b$ on the simplex with a total probability mass of $\alpha/4$. \item Region $R_{C1}$: Density proportional to $(u_{1} - (1-b))$ in the region defined by $u_{1} \in [1-b,1]$ and $u_{1} + u_{2} = 1$, with a total probability mass of $\frac{b}{1-b} (\alpha/4)$. \item Region $R_{C2}$: Density proportional to $(u_{2} - (1-b))$ in the region defined by $u_{2} \in [1-b,1]$ and $u_{1} + u_{2} = 1$, with a total probability mass of $\frac{b}{1-b} (\alpha/4)$. \item Region $R_{D1}$: Density proportional to $(1-b-u_{1})$ in the region defined by $u_{1} - \frac{1-2b}{b} u_{2} = b$ and $u_{1} \in [b,1-b]$, with a total probability mass of $\frac{1-2b}{1-b} (\alpha/4)$. \item Region $R_{D2}$: Density proportional to $(1-b-u_{2})$ in the region defined by $u_{2} - \frac{1-2b}{b} u_{1} = b$ and $u_{1} \in [b,1-b]$, with a total probability mass of $\frac{1-2b}{1-b} (\alpha/4)$. \item Region $R_{E1}$: Uniform density in the region defined by $u_{1} \in [b,1]$ and $u_{2}=0$, with a total probability mass of $\gamma$. \item Region $R_{E2}$: Uniform density in the region defined by $u_{2} \in [b,1]$ and $u_{1}=0$, with a total probability mass of $\gamma$. \end{itemize} We note that the total probability mass we have used is $\alpha + \frac{\alpha}{2} + \frac{b}{1-b} \frac{\alpha}{2} + \frac{1-2b}{1-b} \frac{\alpha}{2} + 2 \gamma = 2 \alpha + 2 \gamma = 1$. \input{tight-example-1-302-fig} We defer the analysis to the full version of the paper. \subsection{Performance of the Exponential Clocks \\Rounding Scheme} For an edge of location ${\bf u}$, the cut density is $2-u_1-u_2$ by Lemma~\ref{lem:density}. We compute the contribution of each region to the total expected cost of the partition. Since the expression is linear in $(u_1,u_2)$, the average over each region is equal to $2-c_1-c_2$ where $(c_1,c_2)$ is the center of mass of that region.
\subsubsection{Cut density in region $R_{A}$} The center of mass of this diamond-shaped region is $(\frac{b}{2}, \frac{b}{2})$, and its probability mass is $\alpha$, hence the expected cost is $\alpha (2 - \frac{b}{2} - \frac{b}{2}) = \alpha (2-b)$. \subsubsection{Cut density in regions $R_{B1}$ and $R_{B2}$} $R_{B1}$ is a triangle, with corners $(1,0), (b,0), (1-b,b)$. Hence, its center of mass is $(\frac{2}{3}, \frac{b}{3})$ and the expected cost is $\frac{\alpha}{4} (2 - \frac{1}{3} (2+b)) = \frac{\alpha}{12} (4 - b)$. By symmetry, the cost for region $R_{B2}$ is also $\frac{\alpha}{12} (4 - b)$. \subsubsection{Cut density in regions $R_{C1}$ and $R_{C2}$} Each edge in these regions has a cut density of 1. Hence, the cost for each of $R_{C1}, R_{C2}$ is $\frac{b}{1-b} \frac{\alpha}{4}$. \subsubsection{Cut density in regions $R_{D1}$ and $R_{D2}$} The region $R_{D1}$ is a line segment, with density linearly increasing from the endpoint $(1-b, b)$ (where it is $0$) towards the endpoint $(b,0)$. Therefore, its center of mass is located at $1/3$ of its length, at the point $(c_1,c_2) = \frac13 (1-b,b) + \frac23 (b,0) = (\frac{1+b}{3},\frac{b}{3})$. The cost for this region is $\frac{1-2b}{1-b} \frac{\alpha}{4} (2-c_1-c_2) = \frac{1-2b}{1-b} \frac{\alpha}{4} (2 - \frac13(1+2b)) = \frac{1-2b}{1-b} \frac{\alpha}{12} (5 - 2b)$. We get the same cost for region $R_{D2}$. \subsubsection{Cut density in regions $R_{E1}$ and $R_{E2}$} These regions are line segments with a uniform distribution, hence the center of mass is in the middle of the segment which is $(\frac{1+b}{2}, 0)$ in the case of $R_{E1}$ and $(0, \frac{1+b}{2})$ in the case of $R_{E2}$. Therefore, the cost for each region is $\gamma (2 - \frac{1+b}{2}) = \frac{\gamma}{2} (3 - b)$. 
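Both the mass bookkeeping of the adversary's distribution and the per-region costs just computed can be checked numerically before being summed formally; the sketch below is illustrative only and simply evaluates the closed forms derived above.

```python
import math

alpha = (-3 + 4 * math.sqrt(3)) / 13
gamma = (19 - 8 * math.sqrt(3)) / 26
b = 2 * math.sqrt(3) - 3

# the nine region masses add up to 2*alpha + 2*gamma = 1
masses = [alpha,                                # R_A
          alpha / 4, alpha / 4,                 # R_B1, R_B2
          b / (1 - b) * alpha / 4,              # R_C1
          b / (1 - b) * alpha / 4,              # R_C2
          (1 - 2 * b) / (1 - b) * alpha / 4,    # R_D1
          (1 - 2 * b) / (1 - b) * alpha / 4,    # R_D2
          gamma, gamma]                         # R_E1, R_E2
assert abs(sum(masses) - 1) < 1e-12

# per-region Exponential Clocks costs: mass times (2 - c1 - c2) at the
# center of mass, as computed region by region above
costs = [alpha * (2 - b),                                          # R_A
         2 * (alpha / 12) * (4 - b),                               # R_B1, R_B2
         2 * (b / (1 - b)) * (alpha / 4),                          # R_C1, R_C2
         2 * ((1 - 2 * b) / (1 - b)) * (alpha / 12) * (5 - 2 * b), # R_D1, R_D2
         2 * (gamma / 2) * (3 - b)]                                # R_E1, R_E2
assert abs(sum(costs) - (10 + 4 * math.sqrt(3)) / 13) < 1e-12
```

The last assertion anticipates the total computed next: the region costs add up exactly to the section's bound $\frac{10+4\sqrt{3}}{13}$.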
\subsubsection{Total cost} Adding up the costs of all regions, we obtain that the total cost paid by the partition is equal to \begin{align*} \E{cost} &= \alpha (2-b) + \frac{\alpha}{6} (4 - b) + \frac{b}{1-b} \frac{\alpha}{2} + \frac{1-2b}{1-b} \frac{\alpha}{6} (5- 2b) + \gamma (3-b) \\ &= \frac{\alpha}{6} \left( 6(2-b) + (4-b) + \frac{5-9b+4b^2}{1-b} \right) + \gamma (3-b) \\ &= \frac{\alpha}{6} \left( 6(2-b) + (4-b) + (5-4b) \right) + \gamma (3-b) \\ &= \frac{\alpha}{6} (21 - 11 b) + \gamma (3-b). \end{align*} Recall that $2 \alpha + 2 \gamma = 1$, hence $\gamma = \frac12 - \alpha$ and the total cost is $\E{cost} = \frac{\alpha}{6} (21 - 11b) + (\frac12 - \alpha) (3-b) = \frac{\alpha}{6} (3 - 5b) + \frac12 (3-b)$. We plug in $b = 2 \sqrt{3} - 3$ which gives $\E{cost} = \frac{10+4\sqrt{3}}{13}$. \subsection{Performance of threshold cuts} Here we consider two threshold cuts, say first $\{i: x_{i1} \geq \theta_1\}$ and then $\{i: x_{i2} \geq \theta_2\}$. As we mentioned before, we do not consider the possibility of capturing edges by other terminals here. Let us consider several cases depending on the values of $\theta_1, \theta_2$. \ {\bf Case 1.} {$\theta_1, \theta_2 \in (0,b]$} The first threshold cuts the edges in regions $R_A$, $R_{B2}$, $R_{C2}$ and $R_{D2}$. The cost of cutting region $R_A$ is $\frac{\alpha}{b}$. The cost of cutting regions $R_{B2}$, $R_{C2}$ and $R_{D2}$ depends on the value of $\theta_1$ in a linear fashion, and the expected cost across $\theta_1 \in (0,b]$ is $\frac{1}{b} (\frac{\alpha}{4} + \frac{b}{1-b} \frac{\alpha}{4} + \frac{1-2b}{1-b} \frac{\alpha}{4}) = \frac{1}{b} \frac{\alpha}{2}$. Therefore, the cost varies linearly between $\frac{1}{b} \alpha$ for $\theta_1 = 0$ and cost $0$ for $\theta_1 = b$; more precisely, the cost is $\frac{\alpha}{b^2} (b - \theta_1)$.
Since the region $\{{\bf u}: u_1 \geq \theta_1\}$ is allocated to terminal 1, the second threshold $\theta_2$ cuts only edges such that $u_1 < \theta_1$, in particular only in region $R_A$. The cost of this cut is $\frac{\alpha}{b} \cdot \frac{\theta_1}{b} = \frac{\alpha}{b^2} \theta_1$. Therefore, the combined cost of these two cuts is $\frac{\alpha}{b} + \frac{\alpha}{b^2} (b-\theta_1) + \frac{\alpha}{b^2} \theta_1 = \frac{2\alpha}{b} = \frac{10+4\sqrt{3}}{13}$. \ {\bf Case 2.} {$\theta_1, \theta_2 \in (b,1]$} In this case, there is no interaction between the two cuts. The first threshold $\theta_1$ cuts through $R_{B1}, R_{E1}$ and either $R_{D1}$ or $R_{C1}$. Similarly, the second threshold $\theta_2$ cuts through $R_{B2}, R_{E2}$ and either $R_{D2}$ or $R_{C2}$, regardless of the value of $\theta_1$. The cost of both cuts is the same, so we analyze just $\theta_1$. The cost of cutting region $R_{E1}$ is uniform in the interval $\theta_1 \in [b,1]$, and is equal to $\frac{1}{1-b} \gamma$. We claim that the combined cost of cutting region $R_{B1}$, $R_{C1}$ and $R_{D1}$ is also independent of $\theta_1$: the cost of cutting the triangle $R_{B1}$ increases linearly from $\theta_1 = b$ to $\theta_1 = 1-b$ and then decreases linearly from $\theta_1 = 1-b$ to $\theta_1 = 1$. Conversely, the cost of cutting $R_{D1}$ decreases linearly from $\theta_1 = b$ to $\theta_1 = 1-b$, and then is replaced by the cost of cutting $R_{C1}$ which increases linearly from $\theta_1 = 1-b$ to $\theta_1 = 1$. The probability mass in $R_{D1}$ is $\frac{1-2b}{1-b} \alpha/4$, equal to the portion of the triangle $R_{B1}$ cut by thresholds $\theta_1 \in [b,1-b]$. Similarly, the probability mass in $R_{C1}$ is $\frac{b}{1-b} \alpha/4$, equal to the portion of the triangle $R_{B1}$ cut by thresholds $\theta_1 \in [1-b,1]$.
Therefore these contributions balance each other out and add up to a uniform density between $b < \theta_1 \leq 1$, which is the total probability mass in these regions, $\alpha/2$, divided by $1-b$, hence $\frac{1}{2(1-b)} \alpha$. Together with the contribution of $R_{E1}$, this is $\frac{1}{1-b} \gamma + \frac{1}{2(1-b)} \alpha = \frac{5+2\sqrt{3}}{13}$ after substituting our parameters. Both cuts together have cost $\frac{10+4\sqrt{3}}{13}$. \ {\bf Case 3.} {$\theta_1 \in (b,1], \theta_2 \in (0,b]$} In this case, the first cut is analyzed as in Case 2 and has cost $\frac{1}{1-b} \gamma + \frac{1}{2(1-b)} \alpha = \frac{5+2\sqrt{3}}{13}$. The second threshold cuts at least through the region $R_A$ (and possibly some other edges depending on the value of $\theta_1$, but we ignore these). The cost of the second cut is at least $\frac{1}{b} \alpha = \frac{5+2\sqrt{3}}{13}$. Hence the cost of both cuts is at least $\frac{10+4\sqrt{3}}{13}$. \ {\bf Case 4.} {$\theta_1 \in (0,b], \theta_2 \in (b,1]$} The cost of the first cut determined by $\theta_1$ is as in Case 1, $\frac{\alpha}{b^2} (b - \theta_1)$. The cost of the second cut is similar to Case 2, but note that now $\theta_2$ cuts only edges with $u_1 \leq \theta_1$, due to the first cut. The reduction in cost compared to Case 2 depends on the value of $\theta_2$: we claim that the cheapest cut is obtained for $\theta_2 = b$. For all other $\theta_2 \in (b,1]$, the cost of the cut is either the same as the one for $\theta_2=b$ (if the point $(\theta_1, \theta_2)$ is inside $R_{B2}$), or the cost is the full cost of Case 2 (if the point $(\theta_1, \theta_2)$ is outside of $R_{B2}$). When $\theta_2 = b$, the cost of the cut is equal to $\frac{1}{1-b} \gamma$ (the cost of cutting $R_{E2}$) plus $\frac{\theta_1}{b} \cdot \frac{1}{2(1-b)} \alpha$, the cost of cutting $R_{B2}$ scaled by the fraction $\frac{\theta_1}{b}$ of the edges in $R_{B2}$ that are unassigned to terminal $1$.
Thus the total cost of both cuts is \begin{align*} &\frac{\alpha}{b} + \frac{\alpha}{b^2} (b - \theta_1) + \frac{\gamma}{1-b} + \frac{\theta_1}{b} \cdot \frac{1}{2(1-b)} \alpha \\ &\quad= \frac{2\alpha}{b} + \frac{\gamma}{1-b} + \left(-\frac{1}{b} + \frac{1}{2(1-b)} \right) \frac{\alpha}{b} \theta_1. \end{align*} Since the coefficient of $\frac{\alpha}{b} \theta_1$ is negative, the minimum cost is achieved for $\theta_1 = b$. Then we obtain $\frac{1}{b} \alpha + \frac{1}{1-b} \gamma + \frac{1}{2(1-b)} \alpha = \frac{10+4\sqrt{3}}{13}$. \end{document}
\begin{document} \title[Degeneracy loci of twisted differential forms and linear line complexes]{Degeneracy loci of twisted differential forms\\ and linear line complexes} \author{Fabio Tanturri} \address{Mathematik und Informatik\\ Geb\"aude E.2.4\\ Universit\"at des Saarlandes\\ D-66123 Saarbr\"ucken\\ Germany} \email{[email protected]} \thanks{Partially supported by the PRIN 2010/2011 ``Geometria delle variet\`a algebriche''} \subjclass{14C05; 14J10, 14N15} \keywords{degeneracy loci, Hilbert scheme, skew-symmetric matrices, linear complexes, differential forms} \date{\today} \begin{abstract} We prove that the Hilbert scheme of degeneracy loci of pairs of global sections of $\Omega_{\mathbb{P}^{n-1}}^{}(2)$, the twisted cotangent bundle on $\mathbb{P}^{n-1}$, is unirational and dominated by the Grassmannian of lines in the projective space of skew-symmetric forms over a vector space of dimension $n$. We provide a constructive method to find the fibers of the dominant map. In classical terminology, this amounts to giving a method to realize all the pencils of linear line complexes having a prescribed set of centers. In particular, we show that the previous map is birational when $n=4$. \end{abstract} \maketitle \section{Introduction} Degeneracy loci of morphisms of the form $\phi:\mappa{\mathcal{O}_{\mathbb{P}^{n-1}}^{m}} {\Omega_{\mathbb{P}^{n-1}}^{}(2)}$ arise naturally in algebraic geometry and have been extensively studied. In classical terminology, a general degeneracy locus of this form is the set of centers of complexes belonging to a general linear system of dimension $m-1$ of linear line complexes in $\mathbb{P}^{n-1}$, i.e.~hyperplane sections of the Grassmannian $\grapr(1,\mathbb{P}^{n-1})$ of lines in $\mathbb{P}^{n-1}=\gr{P}(V)$ embedded in $\gr{P}(\Lambda^{2}V)$ via the Pl\"ucker map (cfr.~Sect.~\ref{linlincom}). 
This nice geometric interpretation, together with the fact that many classical algebraic varieties arise this way, motivated the interest of many classical algebraic geometers such as Castelnuovo, Fano, and Palatini. These loci have also been studied more recently by several authors, for example Chang and Ottaviani. A more detailed historical account and a few classical examples can be found, for instance, in \cite{BazanMezzetti,FaenziFania}. In general, in order to parameterize the possible degeneracy loci $X_{\phi}$ as $\phi$ varies, it is useful to take a modern approach and introduce $\mathcal{H}$ as the union of the irreducible components, in the Hilbert scheme of subschemes of $\mathbb{P}^{n-1}$, containing $X_{\phi}$ for general $\phi$'s. Let $\gr{P}(V)$ be the projectivization of an $n$-dimensional vector space $V$. By the Euler sequence we can identify a morphism of the form above with a skew-symmetric matrix of linear forms in $m$ variables, or with an $m$-tuple of elements in $\Lambda^{2}V$; the natural $\GL_{m}$-action does not modify its degeneracy locus, so we get the rational map \begin{equation} \label{rhointro} \rho: \xymatrix{ \gra(m,\Lambda^{2}V) \ar@{-->}[r] & \mathcal{H} } \end{equation} sending $\phi$ to $X_{\phi}$. The behavior of the map $\rho$ is fully understood for most of the values of $(m,n)$. In \cite{TanturriDegeneracy} it is shown that it is birational whenever $4 \leq m < n-1$ or $(m,n)=(3,5)$; if $(m,n)=(3,6)$, $\rho$ is dominant but $4:1$, while for $m=3$ and $n>6$ it is generically injective but not dominant. Partial results in this line of thought were already provided in \cite{BazanMezzetti,FaniaMezzetti,FaenziFania}. \smallskip We are interested here in the case $m=2$. Our aim is to give a description of the behavior of the map $\rho$, as $n$ varies. The case $n=6$ has already been tackled in \cite{BazanMezzetti}, where $\rho$ was proved to be dominant and its fibers were described and shown to be two-dimensional.
Our main result is the following \begin{thm*} Let $n \in \mathbb{N}$ be such that $n \geq 4$, let $V$ be a vector space of dimension $n$ and let \[\rho: \xymatrix{ \gra(m,\Lambda^{2}V) \ar@{-->}[r] & \mathcal{H} }\] be the rational map introduced in \eqref{rhointro}, sending the class of a morphism $\phi:\mappa{\mathcal{O}_{\gr{P}(V)}^{2}} {\Omega^{}_{\gr{P}(V)}(2)}$ to its degeneracy locus $X_{\phi}$, considered as a point in the Hilbert scheme. Then $\rho$ is dominant. Moreover \begin{enumerate}[label=\textup{\roman{*}.}, ref=(\roman{*})] \item if $n$ is even, the general element of $\mathcal{H}$ is the union of $\frac{n}{2}$ lines spanning the whole $\gr{P}(V)$. The general fiber is the Grassmannian $\grapr(1,\sigma)$ of lines lying on a suitable $(\frac{n-2}{2})$-dimensional projective space $\sigma$. In particular, $\rho$ is birational for $n=4$; \item if $n$ is odd, the general element of $\mathcal{H}$ is the image in $\gr{P}(V)$ of a map \[ \xymatrix{ \mathbb{P}^{1} \ar[rr]^-{[f_{1}:\dotso:f_{n}]} && \gr{P}(V), } \] where $f_{1},\dotsc,f_{n}$ are forms of degree $\frac{n-1}{2}$ spanning the whole vector space $\Hh^{0}(\mathbb{P}^{1},\mathcal{O}_{\mathbb{P}^{1}}(\frac{n-1}{2}))$. The general fiber of $\rho$ has dimension $\frac{n^{2}-3n}{2}$. \end{enumerate} \end{thm*} When $n$ is even, the general $X_{\phi}$ is a union of $\frac{n}{2}$ lines. A crucial step in proving the first part of the statement is the reinterpretation of a morphism $\phi$ as a pencil $\ell_{\phi}$ of linear line complexes. The $\frac{n}{2}$ lines are the set of centers of the linear line complexes belonging to $\ell_{\phi}$; using a similar approach to the one adopted in \cite{BazanMezzetti}, we are able to generalize the existing result to all even values of $n$. When $n$ is odd, the general $X_{\phi}$ is a rational curve, image of the map $[f_{1}:\dotso:f_{n}]$ as in the statement.
A priori, the forms $f_{1},\dotsc,f_{n}$ have to be the Pfaffians of order $n-1$ of a linear $n \times n$ skew-symmetric matrix, but we will show that a general $n$-tuple of forms of degree $\frac{n-1}{2}$ arises as such. As pointed out later, an explicit construction of the preimages of a given degeneracy locus is possible. For the special case $n=4$, this amounts to the following: it is possible to construct the unique pencil of linear line complexes having two given skew lines as set of centers. These results complete the general picture, describing the behavior of the map $\rho$ for arbitrary values of $m$. For $m>3$, such a description had already been given in \cite{TanturriDegeneracy}; the above theorem, together with the previous contributions, shows that the only pairs $(m,n)$ with $2 \leq m < n-1$ for which $\rho$ is not generically injective are $(2,n)$ with $n\geq 5$ and $(3,6)$, while the only pairs for which $\rho$ is not dominant are $(3,n)$ with $n\geq 7$. We remark that the results contained in this paper hold over not necessarily algebraically closed fields of arbitrary characteristic. \section{Notation and preliminaries} \subsection{Notation and geometric interpretation} Let $\mathbf{k}$ be any field and let $n \in \mathbb{N}$ be such that $n \geq 4$. We will denote by $U, V$ two $\mathbf{k}$-vector spaces of dimensions $2$, $n$ respectively; by $\gr{P}(U)$ and $\gr{P}(V)$ we will mean the projective spaces of their 1-quotients, i.e.~$\Hh^{0}(\gr{P}(U),\mathcal{O}_{\PU}(1))\cong U$. We set $\{y_{0},y_{1}\}$ and $\{x_{0},\dotsc,x_{n-1}\}$ to be bases of $U$ and $V$ respectively. Let $\Omega_{\gr{P}(V)}$ be the rank $n-1$ vector bundle of differential forms on $\gr{P}(V)$ and let $\phi:U \otimes \mathcal{O}_{\gr{P}(V)}\rightarrow \Omega_{\gr{P}(V)}(2)$ be a general morphism. We can define $X=X_{\phi}$ to be the degeneracy locus associated to $\phi$, i.e.~the scheme cut out by the maximal minors of the matrix locally representing $\phi$.
By the Euler sequence, the map $\phi$ can be interpreted also as an $(n \times n)$ skew-symmetric matrix $N_{\phi}=y_{0}N_{0}+y_{1}N_{1}$ of linear forms in $y_{0},y_{1}$. A point $p$ in $V$ represents a point in $X_{\phi}$ if and only if $p \in \ker(b_{0}N_{0}+b_{1}N_{1})$ for some $(b_{0},b_{1})\neq (0,0)$ (cfr.~\cite[\textsection 3.2]{Ottaviani}). Thus, the geometry of $X_{\phi}$ strongly depends on the parity of $n$: when $n$ is even, it is a scroll over the hypersurface cut out by the Pfaffian of $N_{\phi}$, i.e.~a union of $\frac{n}{2}$ lines in $\gr{P}(V)$; when $n$ is odd, it is the blow-up of $\mathbb{P}^{1}$ along the set of points described by the $n$ Pfaffians of order $n-1$ of $N_{\phi}$, which is empty for a general $\phi$. For more details about this geometric interpretation, we refer to \cite[\textsection 2]{TanturriDegeneracy}. \subsection{Linear line complexes} \label{linlincom} \noindent Another interpretation (cfr.~\cite{BazanMezzetti}) of a morphism $\phi:\mappa{\mathcal{O}_{\gr{P}(V)}^{2}}{\Omega_{\gr{P}(V)}(2)}$ is the following: let $\mathbb{G}:=\grapr(1,\gr{P}(V))$ be the Grassmannian of lines in $\gr{P}(V)$, embedded in $\gr{P}(\Lambda^{2}V)$ via the Pl\"ucker map. The dual space $\gr{P}(\Lambda^{2}V^{*})$ parameterizes hyperplane sections of $\mathbb{G}$, or, in classical terminology, \emph{linear line complexes} in $\gr{P}(V)$. An element $A \in \Lambda^{2} V$, up to constants, may be regarded as an element of $\gr{P}(\Lambda^{2}V^{*})$, hence giving rise to a linear line complex $\Gamma$. A point $p \in \gr{P}(V)$ is called a \emph{center} of $\Gamma$ if all the lines through $p$ belong to $\Gamma$; the \emph{singular space} of $\Gamma$, i.e.~$\Deg(A):=\gr{P}(\ker(A))$, turns out to be the set of centers of $\Gamma$. Let $\ell \in \mathbb{G}$ and let $\mathbb{T}_{\ell} \mathbb{G}$ be the corresponding tangent space.
The hyperplane $\Vi(A)$ in $\gr{P}(\Lambda^{2}V)$ contains $\mathbb{T}_{\ell} \mathbb{G}$, i.e.~$A$ belongs to the dual variety $\check{\mathbb{G}}$, if and only if $\Deg(A) \supseteq \ell$. We distinguish the following two cases: \begin{itemize} \item if $n$ is even, then a general linear line complex $\Gamma$ has no center, as a general skew-symmetric matrix of even order has maximal rank. Linear line complexes $\Gamma$ corresponding to the points of $\check{\mathbb{G}}$ have at least a line as center and will be called \emph{special}; \item if $n$ is odd, then a general linear line complex $\Gamma$ has a point as center. $\Gamma$ will be said to be \emph{special} if it corresponds to a point of $\check{\mathbb{G}}$: in this case, the center of $\Gamma$ contains (at least) a $\mathbb{P}^{2}$. \end{itemize} We can therefore interpret the degeneracy locus of a general morphism $\phi:\mappa{\mathcal{O}_{\gr{P}(V)}^{2}}{\Omega_{\gr{P}(V)}(2)}$ as \[ \Deg(N_{\phi}):=\bigcup_{A \in N_{\phi}} \Deg(A), \] where the skew-symmetric matrix $N_{\phi}=y_{0}N_{0}+y_{1}N_{1}$ is regarded as a line in $\gr{P}(\Lambda^{2}V^{*})$. Special complexes on $N_{\phi}$ are parameterized by the intersection $N_{\phi} \cap \check{\mathbb{G}}$. \section{\texorpdfstring{The behavior of $\rho$: even case}{The behavior of rho: even case}} Throughout this section, we assume $n$ is even. We will show that the map $\rho$ \[\rho: \xymatrix{ \gra(m,\Lambda^{2}V) \ar@{-->}[r] & \mathcal{H} }\] defined in \eqref{rhointro} is dominant. This naturally raises the question of what the preimage of a general point $X_{\phi}$ in $\mathcal{H}$ is, i.e.~which lines $N \subset \gr{P}(\Lambda^{2}V^{*})$ have $\Deg(N)=X_{\phi}$. We will give a geometric description of the fibers, providing a constructive procedure to realize the elements of a general preimage. Recall that a linear line complex $A \in \gr{P}(\Lambda^{2}V^{*})$ is special if its center contains a $\mathbb{P}^{1}$.
We distinguish special complexes \emph{of the first type}, whose center is exactly a line, from special complexes \emph{of the second type}, whose center is at least a $\mathbb{P}^{3}$. A general matrix $N=N_{\phi}$ with linear forms in $\mathbf{k}[y_{0},y_{1}]$ as entries has corank two at $\frac{n}{2}$ distinct points of $\mathbb{P}^{1}$, corresponding to the roots of $\Pf(N)$; $N$ has maximal rank at any other point. This means that the general line of complexes $N \subset \gr{P}(\Lambda^{2}V^{*})$ does not contain any special complex of the second type, and that $\Deg(N)$ is the union of $\frac{n}{2}$ lines $\{\ell_{1},\dotsc,\ell_{\frac{n}{2}}\}$. We claim that these lines are general, in the sense that their span is the whole $\gr{P}(V)$. It is clear that this condition is open, so it is sufficient to exhibit a matrix $N$ satisfying it. We will do more, showing constructively in Proposition \ref{bohbohboh} that any set of lines $\{\ell_{1},\dotsc,\ell_{\frac{n}{2}}\}$ spanning $\gr{P}(V)$ is $\Deg(N)$ for some pencil of complexes $N$. Let us examine the Gauss map \[ \zeta:\xymatrix{\check{\mathbb{G}} \ar@{-->}[r] & \mathbb{G}} \] which sends a special complex of the first type $A$ to the point in $\mathbb{G}$ corresponding to the line $\Deg(A)$. Fixing a line $\ell \in \mathbb{G}$, the fiber $\zeta^{-1}(\ell)$ is a linear space. A complex $A \in \gr{P}(\Lambda^{2}V^{*})$ has $\ell$ as center if and only if the hyperplane $\Vi(A)$ in $\gr{P}(\Lambda^{2}V)$ contains $\mathbb{T}_{\ell}\mathbb{G}$, so the space of such $A$'s has dimension \[ \dim(\zeta^{-1}(\ell))= \dim(\gr{P}(\Lambda^{2}V))-\dim(\mathbb{G})-1= \frac{1}{2}(n-1)(n-4). \] We observe that, given a linear space $S$ of dimension $n-3$ in $\gr{P}(V)$, there is a unique complex $H$, up to constants, whose center is $S$. Indeed, all the lines in $\gr{P}(V)$ intersecting $S$ have to be contained in $\Gamma=\Vi(H)\cap \mathbb{G}$, but this is a linear condition in the Pl\"ucker coordinates.
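This last uniqueness is also easy to confirm by direct linear algebra: imposing the conditions $Aw=0$, for $w$ spanning a prescribed center, on the entries of a skew-symmetric matrix $A$ leaves a one-dimensional solution space. The following Python sketch (not part of the original text; the spanning vectors are chosen arbitrarily) carries out this count over the rationals:

```python
from fractions import Fraction
from itertools import combinations

def nullity(rows, ncols):
    # dimension of the solution space of the homogeneous system rows * x = 0,
    # computed by Gaussian elimination over the rationals
    mat = [[Fraction(x) for x in row] for row in rows]
    rank = 0
    for col in range(ncols):
        piv = next((r for r in range(rank, len(mat)) if mat[r][col] != 0), None)
        if piv is None:
            continue
        mat[rank], mat[piv] = mat[piv], mat[rank]
        for r in range(len(mat)):
            if r != rank and mat[r][col] != 0:
                f = mat[r][col] / mat[rank][col]
                mat[r] = [a - f * b for a, b in zip(mat[r], mat[rank])]
        rank += 1
    return ncols - rank

def complexes_with_center(n, span_vectors):
    # unknowns: the strictly upper-triangular entries a_{ij} (i < j) of a
    # skew-symmetric n x n matrix A; each vector v spanning the prescribed
    # center contributes the n linear conditions (A v)_i = 0
    idx = {pair: t for t, pair in enumerate(combinations(range(n), 2))}
    rows = []
    for v in span_vectors:
        for i in range(n):
            row = [0] * len(idx)
            for j in range(n):
                if i < j:
                    row[idx[(i, j)]] += v[j]
                elif j < i:
                    row[idx[(j, i)]] -= v[j]
            rows.append(row)
    return nullity(rows, len(idx))

# a center of projective dimension n-3 (linear dimension n-2) determines the
# complex up to constants: the solution space below is 1-dimensional
print(complexes_with_center(4, [(1, 0, 2, -1), (0, 1, 1, 3)]))  # 1
print(complexes_with_center(6, [(1, 0, 0, 0, 1, 0), (0, 1, 0, 0, 0, 2),
                                (0, 0, 1, 0, 3, 0), (0, 0, 0, 1, 0, 1)]))  # 1
```

In general the count equals $\binom{n-w}{2}$ for a prescribed kernel of linear dimension $w$ (skew forms on $V/W$), which is $1$ exactly when $w=n-2$, i.e.~when the center has projective dimension $n-3$.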
Fixing a set of lines $\{\ell_{1},\dotsc,\ell_{\frac{n}{2}}\}$ spanning $\gr{P}(V)$, for any $j$ such that $1 \leq j \leq \frac{n}{2}$ we will denote by $H_{j}\in \gr{P}(\Lambda^{2}V^{*})$ the unique complex having ${<}\ell_{i}{>}_{i\neq j}$ as center. We will denote by $F_{i}$ the $(\frac{n}{2}-2)$-dimensional linear span ${<}H_{j}{>}_{j\neq i}\subset\gr{P}(\Lambda^{2}V^{*})$; a point in $F_{i}$ is a complex having at least $\ell_{i}$ as center. \begin{rmk} \label{possoassumere} Let $\ell_{1},\dotsc,\ell_{\frac{n}{2}}$ be lines spanning $\gr{P}(V)$. Up to a projective transformation of $\gr{P}(V)$, we may assume that \[ \forall \, i, \quad \ell_{i}=\bigcap_{j \notin \{2i-2,2i-1\}} \Vi(x_{j}^{*}), \] where $\Vi(x_{j}^{*})$ is the hyperplane in $\gr{P}(V)$ whose points have $x_{j}$-coordinate zero. With this choice, $H_{i}$ is the complex represented by a skew-symmetric matrix with $(j,k)$-th entry \[ \left\{ \begin{array}{ll} \alpha & \mbox{if } j+1=2i=k\\ -\alpha & \mbox{if } k+1=2i=j\\ 0 & \mbox{otherwise} \end{array} \right. \] for some $\alpha \in \mathbf{k}\setminus\{0\}$. The elements of the fiber $\zeta^{-1}(\ell_{i})$ have zero entries in the $(2i-1)$-th and $(2i)$-th rows and columns. \end{rmk} \begin{propo} \label{bohbohboh} Let $L$ be the union of lines $\ell_{1},\dotsc,\ell_{\frac{n}{2}}$ spanning $\gr{P}(V)$. Let $\sigma$ be the $(\frac{n-2}{2})$-dimensional linear space ${{<}}H_{i}{{>}}_{1 \leq i \leq \frac{n}{2}} \subset \gr{P}(\Lambda^{2}V^{*})$. Then, for a line $N \subset \gr{P}(\Lambda^{2}V^{*})$, the following are equivalent: \begin{enumerate}[label=\textup{\roman{*}.}, ref=(\roman{*})] \item $N \subseteq \sigma$ and $N \cap F_{i} \cap F_{j}=\emptyset$ for any $1 \leq i < j \leq \frac{n}{2}$; \item $N$ contains no special complexes of the second type and ${\Deg(N)=L}$. \end{enumerate} \begin{proof} \spazio \begin{itemize} \item[i. $\Rightarrow$ ii.] Let $N \subseteq \sigma$.
For any $j$, the linear space $F_{j}$ has dimension $\frac{n}{2}-2$, so any line $N \subseteq \sigma$ intersects it; hence, ${\Deg(N)\supseteq L}$. Moreover, with the choice of coordinates of Remark \ref{possoassumere}, points of $\sigma$ are represented by skew-symmetric matrices with $(j,k)$-th entry \[ \left\{ \begin{array}{ll} \alpha_{i} & \mbox{if } \exists \,i \mbox{ such that } j+1=2i=k\\ -\alpha_{i} & \mbox{if } \exists \,i \mbox{ such that } k+1=2i=j\\ 0 & \mbox{otherwise} \end{array} \right. \] for some sequence $(\alpha_{i})$ in $\mathbf{k}$. It is easy to check that a point of $\sigma$ \begin{itemize} \item is a special complex if and only if $\alpha_{j}=0$ for some $j$; \item is special of the second type if and only if $\alpha_{j}=\alpha_{k}=0$ for some $j\neq k$, i.e.~if and only if it lies on $F_{j} \cap F_{k}$. \end{itemize} If $N \cap F_{j} \cap F_{k}=\emptyset$ for any $j < k$, then $N$ contains $\frac{n}{2}$ special complexes of the first type and no special complexes of the second type, hence the conclusion. \item[ii. $\Rightarrow$ i.] Let $N$ be as in ii. For any $i$ define $R_{i}:=N \cap \zeta^{-1}(\ell_{i})$, which is non-empty by hypothesis. We have $R_{i} \neq R_{j}$ for any $i \neq j$, as otherwise $N$ would contain a point having ${{<}}\ell_{i},\ell_{j}{{>}}$ as center. We claim that $N \subseteq \sigma$, for which it is sufficient to show that $R_{1}$ and $R_{2}$ are both contained in $\sigma$. Let us choose the coordinates as in Remark \ref{possoassumere}. For any $i \neq j$ we have $N = {<}R_{i},R_{j}{>}$, so \begin{equation} \label{Runodentro} R_{1} \cup R_{2}\subset N = \bigcap_{i < j} {<}R_{i},R_{j}{>}. \end{equation} Complexes in $R_{i} \subset \zeta^{-1}(\ell_{i})$ have zero $(2i-1)$-th and $(2i)$-th rows and columns, hence the entries $A_{k,l}$ of a complex $A$ in ${<}R_{i},R_{j}{>}$ are zero at least when \[ (k,l) \in \left(\{2i-1,2i\}\times\{2j-1,2j\}\right) \cup \left(\{2j-1,2j\}\times \{2i-1,2i\}\right).
\] From \eqref{Runodentro}, we deduce that any complex $A$ in $R_{1} \cup R_{2}$ has non-zero entries $A_{k,l}$ only if $\exists \,i$ such that $k+1=2i=l$ or $l+1=2i=k$, hence it belongs to $\sigma={{<}}H_{i}{{>}}_{1 \leq i \leq \frac{n}{2}}$. This is enough to show that $N \subseteq \sigma$. If $N \cap F_{i} \cap F_{j}\neq\emptyset$ for some $i \neq j$, then $\Deg(N)$ would contain ${<}\ell_{i},\ell_{j}{>}$, a contradiction. \qedhere \end{itemize} \end{proof} \end{propo} \begin{propo} \label{propodomcurvepari} If $m=2$, $n\geq 4$ and $n$ is even, then $\rho$ is dominant. The general element of $\mathcal{H}$ is the union of $\frac{n}{2}$ lines spanning $\gr{P}(V)$. The general fiber has dimension $n-4$; its general element is a general line $N \subset \gr{P}(\Lambda^{2}V^{*})$ lying on $\sigma$, as in Proposition \ref{bohbohboh}. In particular, $\rho$ is birational if $(m,n)=(2,4)$. \begin{proof} We can define a rational map \[ \xi:\xymatrix{ {\grapr(1,\gr{P}(V))}^{\frac{n}{2}} \ar@{-->}[r] & \mathcal{H}, } \] outside the closed subset corresponding to sets of $\frac{n}{2}$ lines not spanning $\gr{P}(V)$, sending $\frac{n}{2}$ lines to the corresponding point in $\mathcal{H}$. It is finite and its image $\im(\xi)$ is irreducible. The general morphism $\phi$ has $\frac{n}{2}$ lines spanning $\gr{P}(V)$ as degeneracy locus, so $\im(\xi)=\im(\rho)$. It remains to show that \begin{equation} \label{eqtoprove} \dim (\im(\xi)) = \dim \mathcal{H}, \end{equation} so that $\overline{\im(\rho)}$ is the unique irreducible component of $\mathcal{H}$, hence $\rho$ is dominant. On the one hand, $\dim (\im(\xi))=\dim({\grapr(1,\gr{P}(V))}^{\frac{n}{2}})=n^{2}-2n$. On the other hand, let $Y$ be the union of $\frac{n}{2}$ skew lines.
For a line $\ell \subset \gr{P}(V)$, we have $\mathcal{N}_{\ell/\gr{P}(V)}\cong \mathcal{O}_{\mathbb{P}^{1}}^{n-2}(1)$, hence \[ \hh^{0}(\mathcal{N}_{Y/\gr{P}(V)})=\frac{n}{2}(2(n-2)) \quad \mbox{and} \quad \hh^{1}(\mathcal{N}_{Y/\gr{P}(V)})=0, \] which imply equality \eqref{eqtoprove}. Finally, fixing a point $\cup_{i} \ell_{i}$ in $\mathcal{H}$, the general element of its preimage via $\rho$ is a general line $N \subseteq \sigma$, as shown in Proposition \ref{bohbohboh}. In particular, the space of such lines has dimension \[ \dim \grapr(1,\sigma)= n-4. \] When $n=4$, $\sigma$ is a line and the unique preimage is $\sigma$ itself. \end{proof} \end{propo} We remark here that the fibers of $\rho$ can be explicitly constructed. By means of a projective transformation we can send any set of $\frac{n}{2}$ general lines to the lines chosen in Remark \ref{possoassumere}; then, we just need to apply the inverse projective transformation to any line lying on $\sigma$ as in Proposition \ref{bohbohboh}. \section{\texorpdfstring{The behavior of $\rho$: odd case}{The behavior of rho: odd case}} Let $n$ be odd from now on. The general degeneracy locus $X_{\phi}$ is easy to describe: similarly to the case $m=3$ in \cite[\textsection 6]{TanturriDegeneracy}, the elements in $\im(\rho)$ are the images of maps \begin{equation} \label{mappamappa} \xymatrix{ \gr{P}(U) \ar[rr]^-{[f_{1}:\dotso:f_{n}]} && \gr{P}(V), } \end{equation} where $f_{1},\dotsc,f_{n}$ are forms of degree $\frac{n-1}{2}$ in $\mathbf{k}[y_{0},y_{1}]$, Pfaffians of a general $n \times n$ skew-symmetric matrix $N$ with entries in $\mathbf{k}[y_{0},y_{1}]_{1}$. These forms are general, in the sense that they generate the whole vector space $\mathbf{k}[y_{0},y_{1}]_{\frac{n-1}{2}}$. \begin{lem} \label{matrixnk} For a general $n \times n$ skew-symmetric matrix $N$ with entries in $\mathbf{k}[y_{0},y_{1}]_{1}$, its Pfaffians of order $n-1$ span the whole $\mathbf{k}[y_{0},y_{1}]_{\frac{n-1}{2}}$.
\begin{proof} Failing to span $\mathbf{k}[y_{0},y_{1}]_{\frac{n-1}{2}}$ is a closed condition on the $(n-1) \times (n-1)$ Pfaffians of $N$, so it is sufficient to exhibit, for any odd $k$, a matrix $N_{k}$ whose Pfaffians of order $k-1$ do span. For this sake, we consider the $k \times k$ matrix \[ N_{k} = \left( \begin{array}{ccccccc} 0 & y_{0} \\ -y_{0}& 0 & y_{1} \\ & -y_{1}& 0 & y_{0} \\ &&-y_{0}& 0 & y_{1} \\ &&&& \ddots & \ddots \\ &&&&& 0 & y_{1}\\ &&&&&-y_{1}& 0 \end{array} \right). \] If we denote by $\Pf_{i}(N_{k})$ the $(k-1) \times (k-1)$ Pfaffian obtained from $N_{k}$ by deleting the $i$-th row and column, it is easy to check that \begin{align*} & \Pf_{2i+1}(N_{k})= y_{0}^{i} y_{1}^{\frac{k-1}{2}-i} & \mbox{for any } 0 \leq i \leq \frac{k-1}{2},\\ & \Pf_{2i}(N_{k})=0 & \mbox{for any } 1 \leq i \leq \frac{k-1}{2}, \end{align*} and this concludes the proof. \end{proof} \end{lem} \begin{rmk} \label{tuttipfaff} As a consequence of the previous lemma, every sequence of general forms $f_{1},\dotsc,f_{n}$ of degree $\frac{n-1}{2}$ corresponds to the sequence of Pfaffians of a suitable skew-symmetric matrix. Indeed, these forms can be expressed as linear combinations of the Pfaffians of $N_{k}$ above, giving rise to $\beta \in \PGL(V)$ such that the diagram \[ \xymatrix{ & & Y_{1} \ar^-{\beta}[drr]\\ \gr{P}(U) \ar[urr]^-{[f_{1}:\dotso:f_{n}]\,}_-{\sim} \ar[rrrr]_-{[\Pf_{1}(N_{k}):\dotso:\Pf_{k}(N_{k})]\,}^-{\sim} & & & & Y_{2} } \] commutes. This produces an automorphism of $\gr{P}(U)$, hence a change of basis of $\mathbf{k}[y_{0},y_{1}]_{1}$. In terms of this new basis, $N_{k}$ has the desired Pfaffians $f_{1},\dotsc,f_{n}$. \end{rmk} \begin{propo} If $m=2$, $n \geq 5$ and $n$ is odd, then $\rho$ is dominant. The general element of $\mathcal{H}$ is the image in $\gr{P}(V)$ of a map \[ \xymatrix{ \gr{P}(U) \ar[rr]^-{[f_{1}:\dotso:f_{n}]} && \gr{P}(V), } \] where $f_{1},\dotsc,f_{n}$ are forms of degree $\frac{n-1}{2}$ spanning $\mathbf{k}[y_{0},y_{1}]_{\frac{n-1}{2}}$.
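Incidentally, the closed formula for the Pfaffians of $N_{k}$ in the proof of Lemma \ref{matrixnk} can be verified numerically by evaluating at sample values of $(y_{0},y_{1})$; the following minimal Python sketch (not part of the original text; indices are $1$-based as in the lemma, realized $0$-based in the code) does so:

```python
def pfaffian(A):
    # Pfaffian of an even-size skew-symmetric matrix, by expansion along the
    # first row: Pf(A) = sum_{j>=2} (-1)^j a_{1j} Pf(A with rows/cols 1, j removed)
    n = len(A)
    if n == 0:
        return 1
    total = 0
    for j in range(1, n):
        if A[0][j]:
            keep = [r for r in range(n) if r not in (0, j)]
            minor = [[A[r][c] for c in keep] for r in keep]
            total += (-1) ** (j - 1) * A[0][j] * pfaffian(minor)
    return total

def N_k(k, y0, y1):
    # the tridiagonal skew-symmetric matrix from the proof, evaluated at a
    # numeric point (y0, y1): superdiagonal entries alternate y0, y1, y0, ...
    A = [[0] * k for _ in range(k)]
    for i in range(k - 1):
        v = y0 if i % 2 == 0 else y1
        A[i][i + 1], A[i + 1][i] = v, -v
    return A

def order_minus_one_pfaffians(A):
    # Pf_i(A) for i = 1, ..., k: delete the i-th row and column
    k = len(A)
    return [pfaffian([[A[r][c] for c in keep] for r in keep])
            for keep in ([r for r in range(k) if r != i] for i in range(k))]

# at (y0, y1) = (2, 3) with k = 5 the lemma predicts
# [y1^2, 0, y0*y1, 0, y0^2] = [9, 0, 6, 0, 4]
print(order_minus_one_pfaffians(N_k(5, 2, 3)))  # [9, 0, 6, 0, 4]
```

The resulting monomials $y_{0}^{i}y_{1}^{\frac{k-1}{2}-i}$ visibly span $\mathbf{k}[y_{0},y_{1}]_{\frac{k-1}{2}}$, as the proof asserts.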
The general fiber of $\rho$ has dimension $\frac{n^{2}-3n}{2}$. \begin{proof} Let $r=\dim(\mathbf{k}[y_{0},y_{1}]_{\frac{n-1}{2}})-1=\frac{n-1}{2}$. We can define a rational map \[ \xi: \xymatrix{ \mathbb{A}^{(r+1)n} \ar@{-->}[r] & \mathcal{H} } \] sending an $n$-tuple of forms $f_{1},\dotsc,f_{n}$ to the image of the map \eqref{mappamappa}. It is defined on the $n$-tuples which span the whole linear space $\mathbf{k}[y_{0},y_{1}]_{\frac{n-1}{2}}$; its image $\im(\xi)$ is irreducible and its dimension is easily computable. Indeed, on the one hand, there is a natural $\GL_{2}$-action on $\mathbf{k}[y_{0},y_{1}]_{1}$, acting as a change of basis on $U$; this induces an action on $\mathbf{k}[y_{0},y_{1}]_{\frac{n-1}{2}}$ and therefore on $\mathbb{A}^{(r+1)n}$, and one can see that $\xi$ factors through this action. On the other hand, take two $n$-tuples of forms whose images $Y_{1},Y_{2}$ in $\im(\xi)$ satisfy $Y_{1}=Y_{2}$. By the commutativity of the diagram \[ \xymatrix{ & & Y_{1} \ar@{=}[dd]\\ \gr{P}(U) \ar[urr]^-{[f_{1}:\dotso:f_{n}]\,}_-{\sim} \ar[drr]_-{[g_{1}:\dotso:g_{n}]\,}^-{\sim}\\ & & Y_{2} } \] we get an automorphism of $\gr{P}(U)$, i.e.~the two maps $[f_{1}:\dotso:f_{n}]$ and $[g_{1}:\dotso:g_{n}]$ belong to the same class modulo $\GL_{2}$. Hence \begin{equation*} \dim (\im(\xi)) =\dim (\mathbb{A}^{(r+1)n}) - \dim (\GL_{2}) = \frac{n^{2}+n-8}{2}. \end{equation*} Since $n$ general forms are obtained as the Pfaffians of a suitable matrix $N$ by Remark \ref{tuttipfaff}, $\im(\rho)=\im(\xi)$. If we prove that \begin{equation} \label{eqtoprove2} \dim (\im(\xi)) = \dim \mathcal{H}, \end{equation} then $\overline{\im(\rho)}$ is the unique irreducible component of $\mathcal{H}$, hence $\rho$ is dominant. Let $Y \in \im(\xi)$.
From \[ \xymatrix{ 0 \ar[r] & \mathcal{T}_{Y} \ar[r] & \left.\mathcal{T}_{\gr{P}(V)}\right|_{Y} \ar[r] & \mathcal{N}_{Y/\gr{P}(V)}\ar[r] & 0 } \] and the Euler sequence restricted to $Y$, we get \[ \hh^{0}(\mathcal{N}_{Y/\gr{P}(V)})=n\frac{n+1}{2}-1-\chi(\mathcal{T}_{Y}) \quad \mbox{and} \quad \hh^{1}(\mathcal{N}_{Y/\gr{P}(V)})=0. \] The Riemann--Roch theorem yields $\chi(\mathcal{T}_{Y})=3$, so equality \eqref{eqtoprove2} holds. The dimension of the fibers is finally \[ \dim \gra(m,\Lambda^{2}V) - \dim \mathcal{H} = \frac{n^{2}-3n}{2}. \qedhere \] \end{proof} \end{propo} As in the even case, for odd values of $n$ it is also possible to construct the fibers of $\rho$ explicitly. Up to a projective transformation, we can assume that the degeneracy locus is the image of the Pfaffians of the matrix $N_{k}$ from Lemma \ref{matrixnk}. We can find by linear algebra all the skew-symmetric matrices having these as Pfaffians and apply the inverse projective transformation to get the elements of the desired fiber. \end{document}
\begin{document} \title{Central $L$-values of elliptic curves and local polynomials} \date{\today} \author{Stephan Ehlen} \address{Weyertal 86-90, Universit\"at zu K\"oln, Mathematisches Institut, D-50931 K\"oln, Germany} \email{[email protected]} \author{Pavel Guerzhoy} \address{Department of Mathematics\\ University of Hawaii\\ 2565 McCarthy Mall \\ Honolulu, HI 96822 } \email{[email protected]} \author{Ben Kane} \address{Mathematics Department, University of Hong Kong, Pokfulam, Hong Kong} \email{[email protected]} \author{Larry Rolen} \address{Department of Mathematics\\ Vanderbilt University \\ Nashville, TN 37240} \email{[email protected]} \subjclass[2010]{11F37,11F11,11E76,11M20} \keywords{locally harmonic Maass forms, vanishing of central $L$-values, congruent number problem} \thanks{Part of the research was carried out while the first author was a postdoctoral fellow at the CRM and McGill University in Montreal. The research of the third author was supported by grant project numbers HKU 27300314, 17302515, 17316416, and 17301317 of the Research Grants Council of Hong Kong SAR. Part of the research was carried out while the fourth author held postdoctoral positions at the University of Cologne and the Pennsylvania State University, an assistant professorship at Trinity College, Dublin, and a visiting position at the Georgia Institute of Technology. The fourth author thanks the University of Cologne and the DFG for their generous support via the University of Cologne postdoc grant DFG Grant D-72133-G-403-151001011. } \begin{abstract} Here we study the recently-introduced notion of a locally harmonic Maass form and its applications to the theory of $L$-functions. In particular, we find a criterion for vanishing of certain twisted central $L$-values of a family of elliptic curves, whereby vanishing occurs precisely when the values of two finite sums over canonical binary quadratic forms coincide.
This yields a vanishing criterion based on vastly simpler formulas than the formulas related to work of Birch and Swinnerton-Dyer to determine such $L$-values, and extends beyond their framework to special non-CM elliptic curves. \end{abstract} \maketitle \section{Introduction and Statement of Results} A celebrated result in analytic number theory, due to Dirichlet, is his so-called \emph{class number formula}. This formula gives a deep relation between values of quadratic $L$-functions and class numbers of quadratic fields, which enumerate equivalence classes of binary quadratic forms. Given the historical importance of this result, it is natural to ask if similar formulas exist for other $L$-functions. Natural candidates to search for such formulas are the $L$-functions associated to rational elliptic curves, and indeed such formulas exist in special cases. For example, in a famous paper \cite{Tunnell}, Tunnell gave a ``strange'' formula for the $L$-values of the quadratic twists of the strong Weil curve of conductor 32, which is well-known to be related to the study of congruent numbers. Moreover, beautiful formulas involving specializations of the Weierstrass $\zeta$-function were given for such $L$-values in the case of CM elliptic curves by Birch and Swinnerton-Dyer in \cite{BSDNotes}. Here, for rational elliptic curves with a special sequence of conductors and a natural family of fundamental discriminants $D$, we produce relatively simple formulas (taking the difference of the two sides of Theorem \ref{mainthm} below) which vanish precisely when the $L$-values of the $D$th quadratic twists of these elliptic curves vanish. These formulas mirror the classical Dirichlet class number formula in the sense that they relate (the vanishing of) central $L$-values of a set of elliptic curves to canonical sums over binary quadratic forms.
Our proof is very different from those previously obtained in the literature, as it uses in an essential way the burgeoning theory of locally harmonic Maass forms defined in \cite{BKK}. To describe this result, we first require some notation. For a discriminant $\delta$, we denote the set of binary quadratic forms of discriminant $\delta$ by $\mathcal{Q}_{\delta}$. Splitting $\delta=DD_0$ for two discriminants $D_0$ and $D$, we require certain canonical sums over quadratic forms given for $x\in \mathbb{R}$ by \[ F_{0,N,D,D_0}(x):=\sum_{\substack{Q=[a,b,c]\in \mathcal{Q}_{D_0D}\\ Q(x)>0>a\\ N\mid a}}\chi_{D_0}(Q), \] where $\chi_{D_0}$ is the genus character, and by abuse of notation we set $Q(x)$ to be $Q(x,1)$. For $x\in\mathbb{Q}$, this sum is furthermore finite. Our main result is the following, where we say that $N$ is a \emph{dimension one level} if $\dim(S_2(N))=1$ and the definition of a \emph{good} discriminant for a level $N$ is given in Section \ref{sec:gooddiscs} (see Table \ref{tab:discs}). \begin{remark} For $x\in \mathbb{Q}$, one can interpret $F_{0,N,D,D_0}$ as a weighted intersection number between the union of the geodesics associated to the quadratic forms \[ \mathcal{Q}_{N,\Delta}:=\left\{Q=[a,b,c]\in \mathcal{Q}_{\Delta}: Q(x)>0>a, N\mid a\right\} \] and the vertical geodesic from $x$ to $i\infty$, weighted with the genus character $\chi_{D_0}$. We denote the elements of $\mathcal{Q}_{N,\Delta}$ intersecting the vertical line from $x$ to $i\infty$ by $\mathcal{S}_{N,\Delta}(x)$ and note that the sum with trivial weighting yields \[ F_{0,N,DD_0,1}(x)=\#\mathcal{S}_{N,DD_0}(x). \] \end{remark} \begin{theorem}\label{mainthm} Let $N$ be a dimension one level, and suppose that $D$ is a good fundamental discriminant for $N$. Denote the elliptic curve corresponding to the unique normalized weight 2 newform of level $N$ by $E$ and let $E_D$ be its $D$th quadratic twist.
Let $D_0<0$ be given as in the row for level $N$ in Table \ref{tab:discs}. Then for each good fundamental discriminant $D$ in Table \ref{tab:discs} such that $|DD_0|$ is not a square, we have $L(E_D,1)=0$ if and only if the corresponding pair $(x_{N,1},x_{N,2})$ in Table \ref{tab:discs} satisfies \[ F_{0,N,D,D_0}\!\left(x_{N,1}\right)= F_{0,N,D,D_0}\!\left(x_{N,2}\right). \] \end{theorem} Before discussing some arithmetically interesting corollaries that arise from Theorem \ref{mainthm}, we give a broad overview of the strategy used to prove Theorem \ref{mainthm} with some of the technical issues removed. We use the theory of theta lifts (see Section \ref{sec:theta-general}) to construct a function $\mathcal{F}_{0,N,D,D_0}$ which is a $\Gamma_0(N)$-invariant locally harmonic Maass form (roughly speaking, this means that it is invariant under the action of $\Gamma_0(N)$ and it is harmonic in certain connected components of the upper half plane $\mathbb{H}$, but may be discontinuous across certain geodesics). Every locally harmonic Maass form splits naturally into three components: a continuous holomorphic part, a continuous non-holomorphic part, and a locally polynomial part. In the case of invariant forms, this polynomial is moreover locally constant. The function $F_{0,N,D,D_0}$ turns out to be the locally constant part of $\mathcal{F}_{0,N,D,D_0}$. The locally constant part of an invariant locally harmonic Maass form is itself invariant, in general, only when the other two parts vanish. By relating the continuous holomorphic and non-holomorphic parts of $\mathcal{F}_{0,N,D,D_0}$ to a special weight $2$ cusp form $f_{1,N,D,D_0}$ which was studied by Kohnen in \cite{KohnenCoeff} and shown to be related to central $L$-values, we obtain a link between the $\Gamma_0(N)$-invariance of $F_{0,N,D,D_0}$ and the vanishing of central $L$-values.
We finally determine a pair of rational numbers $x_{N,1}$ and $x_{N,2}$ that are related via $\Gamma_0(N)$ such that $F_{0,N,D,D_0}$ is invariant under $\Gamma_0(N)$ if and only if it agrees at these two points. The main technical difficulties arise from the fact that the naive definitions of $\mathcal{F}_{0,N,D,D_0}$ and $f_{1,N,D,D_0}$ do not converge absolutely, requiring an analytic continuation to $s=0$ in an auxiliary variable $s$. There are two standard approaches to construct this analytic continuation, one known as Hecke's trick and another using spectral parameters. Kohnen uses Hecke's trick in \cite{KohnenCoeff}, while theta lifts are better suited to the spectral theory (see Section \ref{sec:theta-lifts}), which we use to construct a parallel function $\mathbbm{f}_{1,N,D_0,D}$ via theta lifts. We are required to show that these two constructions agree at $s=0$ (see Lemma \ref{lem:f1fchi}) even though they do not agree for general $s$, because the properties of these functions differ for $s\neq 0$. \begin{remarks} \noindent \begin{enumerate}[leftmargin=*,label={\rm(\arabic*)}] \item It would be very interesting to extend the results of Theorem \ref{mainthm} to more general conductors. However, there are several technical issues which arise. It is important to note that since we restrict to dimension one levels, the results in Theorem \ref{mainthm} only yield formulas for finitely many $N$. Similar formulas exist for more general $N$, but one must apply projection operators into the corresponding eigenspaces, yielding more complicated formulas with Hecke operators acting on the formulas. Some of the issues related to non-dimension one levels are discussed by the third author's Ph.D. student Kar Lun Kong in \cite{KongPhD}.
We concentrate in this paper on dimension one levels in order to obtain explicit formulas and then mainly concentrate on certain combinatorial applications of Theorem \ref{mainthm} for which the level $N$ happens to be a dimension one level. \item Using Waldspurger's Theorem in the form of the Kohnen--Zagier formula \cite{KohnenZagier}, there is an analogous result for vanishing and non-vanishing of central values for $L$-series attached to weight $k>2$ Hecke eigenforms when $N$ is squarefree and odd. In that case, each term in the sum over $Q$ defining $F_{0,N,D,D_0}$ above is multiplied by $Q(x)^{k-1}$. The main aims of this paper are to extend this to include weight $2$ via Hecke's trick and also to consider other levels which are not necessarily squarefree and odd. \item The formulas $F_{0,N,D,D_0}(x_{N,j})$ may be written rather explicitly. To demonstrate the type of formulas that one obtains, we explicitly write the formulas for $N=32$. Firstly, for all $N$ we have \[ F_{0,N,D,D_0}(0)=\sum_{\substack{Q=[a,b,c]\in \mathcal{Q}_{D_0D}\\ c>0>a\\ N\mid a}}\chi_{D_0}(Q). \] In the case of $D_0=-3$, the genus character can be simplified to \[ \chi_{-3}([a,b,c])=\begin{cases} \left(\frac{-3}{a}\right)&\text{ if } 3\nmid a\\ \left(\frac{-3}{c}\right)&\text{ if }3|a. \end{cases} \] For $x_{32,2}=1/3$, the sum becomes \[ F_{0,32,D,-3}\!\left(\frac{1}{3}\right)=\sum_{\substack{Q=[a,b,c]\in \mathcal{Q}_{D_0D}\\ a+3b+9c>0>a\\ 32\mid a}}\chi_{-3}(Q). \] \end{enumerate} \end{remarks} To illustrate Theorem \ref{mainthm}, we describe explicit versions of it related to combinatorial questions. Firstly, recall that a squarefree integer $n$ is called \begin{it}congruent\end{it} if $n$ is the area of a right triangle with rational side lengths and \begin{it}noncongruent\end{it} otherwise.
The case $N=32$ of Theorem \ref{mainthm} is well-known to apply to the study of congruent numbers by Tunnell's work \cite{Tunnell}. \begin{corollary}\label{maincor} Let $D<0$ be a fundamental discriminant with $|D|\equiv 3\pmod{8}$ such that $3|D|$ is not a square, and let $E$ be the congruent number curve of conductor $32$. Then $F_{0,32,D,-3}\left(0\right)=F_{0,32,D,-3}(1/3)$ if and only if $L(E_D,1)=0$. In particular, if $|D|$ is congruent, then $F_{0,32,D,-3}\left(0\right)=F_{0,32,D,-3}(1/3)$, and the converse is true assuming the Birch and Swinnerton-Dyer conjecture (hereafter BSD). \end{corollary} \begin{remark} \noindent In \cite{Skoruppa}, Skoruppa used the theory of skew-holomorphic Jacobi forms to obtain a similar-looking condition for fundamental discriminants $D$ congruent to $1$ modulo $8$. One can use the theory here with a slightly modified genus character to obtain a condition equivalent to Skoruppa's result. However, his method appears to have an inherent dependence on the residue class of $D$ modulo $8$; a subtle generalization of his result may lead to a condition identical to that obtained in Corollary \ref{maincor}, but it does not seem to directly follow from his theory in its current form. \end{remark} Table \ref{tab:maincor} (constructed via GP/Pari code running for a couple of hours) illustrates the usage of Corollary \ref{maincor} in checking for congruent numbers. Note that the calculation for Corollary \ref{maincor} only proves that non-congruent numbers are not congruent (it would be an equivalence under the assumption of BSD); the congruent numbers may in practice be explicitly shown to be congruent, although this is a non-trivial exercise; see Figure 1.3 in \cite{Koblitz} for an example of Don Zagier's which shows that $157$ is congruent.
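The entries of Table \ref{tab:maincor} are finite sums and can be recomputed directly from the explicit descriptions of $F_{0,32,D,-3}(0)$ and $F_{0,32,D,-3}(1/3)$ given above. The following Python sketch is ours (the paper's computations used GP/Pari); the function name \texttt{F32} is hypothetical, the enumeration bound comes from the identity $-3Dq^2=(bq+2ap)^2+4|a|\left|ap^2+bpq+cq^2\right|$ for forms with $a<0<Q(p/q)$, and the genus character is evaluated via the simplified formula for $\chi_{-3}$ displayed above.

```python
from math import isqrt

def chi_m3(n):
    # Kronecker symbol (-3/n): the odd quadratic character modulo 3
    r = n % 3
    return 0 if r == 0 else 1 if r == 1 else -1

def F32(D, p, q):
    """F_{0,32,D,-3}(p/q): sum of chi_{-3}(Q) over forms Q = [a, b, c]
    with b^2 - 4ac = -3D, 32 | a, and Q(p/q) > 0 > a (here D < 0)."""
    disc = -3 * D
    bound = disc * q * q            # = (bq + 2ap)^2 + 4|a| * q^2 * Q(p/q)
    s = isqrt(bound)
    total = 0
    a = -32
    while 4 * (-a) <= bound:        # q^2 * Q(p/q) is a positive integer
        for b in range((-s - 2 * a * p) // q - 1, (s - 2 * a * p) // q + 2):
            if (b * q + 2 * a * p) ** 2 > bound:
                continue
            num = b * b - disc
            if num % (4 * a) != 0:  # c must be integral
                continue
            c = num // (4 * a)
            if a * p * p + b * p * q + c * q * q <= 0:  # require Q(p/q) > 0
                continue
            total += chi_m3(a) if a % 3 else chi_m3(c)
        a -= 32
    return total
```

For instance, \texttt{F32(-11, 0, 1)} and \texttt{F32(-11, 1, 3)} return $0$ and $1$, matching the first row of Table \ref{tab:maincor}.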
\begin{center} \begin{table}[th]\caption{Congruent numbers and Corollary \ref{maincor}\label{tab:maincor}} \begin{tabular}{|c|c|c|c|} \hline $D$ & $F_{0,32,D,-3}(0)$ & $F_{0,32,D,-3}\left(\frac{1}{3}\right)$ & $|D|$ congruent/non-congruent \\ &&&(assuming BSD)\\ \hline \hline -11&0 & 1&non-congruent\\ \hline -19&0 & 1 &non-congruent\\ \hline -35&0 & 2 &non-congruent\\ \hline -219& 2&2 & congruent\\ \hline -331& 0 & 3 & non-congruent\\ \hline -371& 4 & 4 & congruent\\ \hline -4219& 6 & 9 & non-congruent\\ \hline -80011 & 28&40 & non-congruent\\ \hline -80155&24&32&non-congruent\\ \hline -800003&138 &140 &non-congruent\\ \hline -800011&72 &81 &non-congruent\\ \hline -800027&86&94&non-congruent\\ \hline -8000459&578&590&non-congruent\\ \hline -8000467&190&200&non-congruent\\ \hline \end{tabular} \end{table} \end{center} As an amusing consequence, we next show how studying the parity of the values $F_{0,32,D,-3}$ (i.e., the parity of $\#\mathcal{S}_{32,-3D}$) may lead to results about congruent numbers. In order to illustrate this idea, we re-investigate the well-known result that every prime $p\equiv 3\pmod{8}$ is not a congruent number from this perspective. \begin{corollary}\label{cor:SDodd} If $p\equiv3\pmod 8$ is a prime and $\#\mathcal{S}_{32,-3p}(1/3)$ is odd, then $L(E_p,1)\neq 0$; in particular, if this is the case, then $p$ is not a congruent number. \end{corollary} \begin{remarks} \noindent \begin{enumerate}[leftmargin=*,label={\rm(\arabic*)}] \item One may explicitly compute \[ \mathcal{S}_{32,-3D}\!\left(\frac{1}{3}\right)=\{[a,b,c]: b^2-4ac=-3D,\ 32\mid a,\ a+3b+9c>0>a\}. \] \item Computer calculations indicate that $\#\mathcal{S}_{32,-3p}$ is always odd for $p\equiv 3\pmod{8}$, and Volker Genz has recently announced a proof that $\#\mathcal{S}_{32,-3p}$ is indeed always odd, yielding a new proof of the fact that $p$ is not a congruent number.
Indeed, the calculation from Table \ref{tab:maincor} is repeated under the restriction that the discriminants are negatives of primes, and the results are listed in Table \ref{tab:primes} to illustrate the parity condition hinted at in Corollary \ref{cor:SDodd}. \begin{center} \begin{table}[th]\caption{Primes and parity: Corollary \ref{cor:SDodd}\label{tab:primes}} \begin{tabular}{|c|c|c|} \hline $p$ & $F_{0,32,-p,-3}(0)$ & $F_{0,32,-p,-3}\left(\frac{1}{3}\right)$ \\ \hline \hline 11&0 & 1\\ \hline 19&0 & 1\\ \hline 331& 0 & 3\\ \hline 571& 4 & 1\\ \hline 5227&10&5\\ \hline 5939&18&17\\ \hline 75011&70& 83\\ \hline 75403&24&21\\ \hline 200171&102&117\\ \hline 200443&36&37\\ \hline 1300027&34&93\\ \hline 5500003&120&137\\ \hline 40500011& 1254& 1331\\ \hline 40500059& 1186& 1189\\ \hline \end{tabular} \end{table} \end{center} \end{enumerate} \end{remarks} We conclude by considering another example. Let $E_{27,D}$ denote the $D$th quadratic twist of the Fermat cubic curve $x^3+y^3=1$ of conductor $N=27$ (or its equivalent Weierstrass form $y^2=x^3-432$). Taking $N=27$ and $D_0=-4$, Theorem \ref{mainthm} yields an algorithm to determine whether or not $E_{27,D}(\mathbb{Q})$ is finite or infinite (i.e., whether the Diophantine equation $x^3+|D|y^2=432$ has finitely or infinitely many rational solutions). \begin{corollary}\label{cor:sumtwocubes} For a fundamental discriminant $D<0$ with $|D|\equiv 1\pmod{3}$ and $4|D|$ not a square, we have $F_{0,27,D,-4}(0)=F_{0,27,D,-4}(1/2)$ if and only if $L(E_{27,D},1)=0$. In particular, if $|D|\equiv 1\pmod{3}$ and $E_{27,D}$ has infinitely many rational points, then $F_{0,27,D,-4}(0)=F_{0,27,D,-4}(1/2)$, and the converse is true assuming BSD. \end{corollary} Table \ref{tab:sum2cubes} lists the result of the calculation analogous to Table \ref{tab:maincor}.
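The criterion of Corollary \ref{cor:sumtwocubes} is likewise a finite computation. Below is an analogous Python sketch (ours; the name \texttt{F27} is hypothetical). Here we evaluate $\chi_{-4}$ by the simplification $\chi_{-4}([a,b,c])=\left(\frac{-4}{a}\right)$ for $a$ odd and $\left(\frac{-4}{c}\right)$ for $a$ even; this formula is our assumption, justified in the same way as the displayed simplification of $\chi_{-3}$, since any value represented by $Q$ and coprime to $4$ may be used.

```python
from math import isqrt

def chi_m4(n):
    # Kronecker symbol (-4/n): the odd quadratic character modulo 4
    r = n % 4
    return 0 if r % 2 == 0 else 1 if r == 1 else -1

def F27(D, p, q):
    """F_{0,27,D,-4}(p/q): sum of chi_{-4}(Q) over forms Q = [a, b, c]
    with b^2 - 4ac = -4D, 27 | a, and Q(p/q) > 0 > a (here D < 0)."""
    disc = -4 * D
    bound = disc * q * q
    s = isqrt(bound)
    total = 0
    a = -27
    while 4 * (-a) <= bound:
        for b in range((-s - 2 * a * p) // q - 1, (s - 2 * a * p) // q + 2):
            if (b * q + 2 * a * p) ** 2 > bound:
                continue
            num = b * b - disc
            if num % (4 * a) != 0:
                continue
            c = num // (4 * a)
            if a * p * p + b * p * q + c * q * q <= 0:
                continue
            total += chi_m4(a) if a % 2 else chi_m4(c)
        a -= 27
    return total
```

For example, \texttt{F27(-7, 0, 1)} returns $0$ and \texttt{F27(-7, 1, 2)} returns $2$, matching the first row of Table \ref{tab:sum2cubes}.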
\begin{center} \begin{table}[th]\caption{Data for Corollary \ref{cor:sumtwocubes}\label{tab:sum2cubes}} \begin{tabular}{|c|c|c|c|} \hline $D$ & $F_{0,27,D,-4}(0)$ & $F_{0,27,D,-4}\left(\frac{1}{2}\right)$ & $E_{27,D}(\mathbb{Q})$ finite/infinite\\ &&&(assuming BSD)\\ \hline \hline -7&0 & 2&finite\\ \hline -31&2 & 2 &infinite\\ \hline -115& 0&4&finite\\ \hline -283&2 &2 &infinite\\ \hline -3019&4&6&finite\\ \hline -3079&24&34&finite\\ \hline -3115&8&8&infinite\\ \hline -30091&44&26&finite\\ \hline -30139&20&14&finite\\ \hline -600004&158&196&finite\\ \hline -600007&132&132&infinite\\ \hline -600019&130&172&finite\\ \hline -600043&70 &58 &finite\\ \hline -3000004&368&336&finite\\ \hline -3000103&432&414&finite\\ \hline -3000115&224&224&infinite\\ \hline \end{tabular} \end{table} \end{center} The paper is organized as follows. In Section \ref{sec:prelim}, we establish the basic definitions needed for the paper, including the genus character, good discriminants, locally harmonic Maass forms, and Poincar\'e series, review the work of Kohnen, and give a few computational examples related to the dimension one levels. In Section \ref{sec:theta-general}, we describe a generalization of Kohnen's work using theta lifts, obtaining certain locally harmonic Maass forms as theta lifts; these locally harmonic Maass forms converge to the sums $F_{0,N,D,D_0}$ as one approaches a cusp. In Section \ref{sec:mainproofs}, we give the proofs of Theorem \ref{mainthm} and Corollaries \ref{maincor}, \ref{cor:SDodd}, and \ref{cor:sumtwocubes}. \section{Preliminaries}\label{sec:prelim} \subsection{Definitions}\label{Defns} Let $D_0$ and $D$ be fixed fundamental discriminants such that $DD_0>0$ is a discriminant.
We begin by introducing the genus character $\chi_{D_0}$ on quadratic forms $Q=[a,b,c]\in \mathcal{Q}_{D_0D}$, which is defined as \begin{equation}\label{eqn:genusdef} \chi_{D_0}\left(Q\right):=\begin{cases} \left(\frac{D_0}{r}\right) & \text{if }\left(a,b,c, D_0\right)=1\text{ and $Q$ represents $r$ with $\left(r,D_0\right)=1$,}\\ 0 & \text{if }\left(a,b,c,D_0\right)>1.\end{cases} \end{equation} The genus character is independent of the choice of $r$ and only depends on the $\operatorname{SL}_2(\mathbb{Z})$-equivalence class of $Q$ (in fact, it only depends on the genus of $Q$ and is indeed a character on the abelian group formed by the elements of the class group, with multiplication given by Gauss's composition law, after factoring out by squares). We then define for $x\in\mathbb{Q}$ the sum \begin{equation}\label{eqn:F1} F_{0,N,D,D_0}(x)=\sum_{\substack{Q=[a,b,c]\in \mathcal{Q}_{D_0D}\\ Q(x)>0>a\\ N\mid a}}\chi_{D_0}(Q). \end{equation} \begin{remark} \noindent We note that for rational $x$, this sum is actually finite. In fact, if $x=p/q$, then by \cite{ZagierQuadratic} we have \[ D_0Dq^2=\left|bq+2ap\right|^2+4\left|a\right|\left|ap^2+bpq+cq^2\right| \] (note that this corrects a typo in \cite{ZagierQuadratic}). This bounds each of $a$, $b$, and $c$. \end{remark} \subsection{Explicit computations and good discriminants}\label{sec:gooddiscs} We call an odd discriminant $D$ \begin{it}good\end{it} for level $N$ if the following hold: \noindent \begin{enumerate}[leftmargin=*,label={\rm(\arabic*)}] \item If $N$ is not a perfect square, then $\left(\frac{-4N}{|D|}\right)=1$. \item If $2\mid N$, then $|D|\equiv 3\pmod{8}$. \item If $p\equiv -1\pmod{8}$ is a prime with $p\mid N$, then $\left(\frac{-p}{|D|}\right)=-1$. \item If $p\equiv 3\pmod{8}$ is a prime with $p^r\| N$ for some $r\geq 1$, then $\left(\frac{-p}{|D|}\right)=(-1)^{r+1}$.
\end{enumerate} We next compile a list of data for each of the dimension one levels; specifically, in Table \ref{tab:discs} we choose one fundamental discriminant $D_0$ and compute the good fundamental discriminants $D$. We then give a brief list of the first few discriminants for which the sums in \eqref{eqn:F1} are not invariant under $\Gamma_0(N)$, underlining the good fundamental discriminants for which we conclude by Theorem \ref{mainthm} that the twisted central value $L(E_D,1)$ does not vanish. Finally, we give the pair of choices of $x_{N,1}$ and $x_{N,2}$ used to describe Theorem \ref{mainthm}. \begin{center} \begin{table}[th]\caption{Table of good fundamental discriminants\label{tab:discs}} \begin{tabular}{|c|c|c|c|c|} \hline Level $N$ &$D_0$ & Good Discriminants $D$ & Brief list of $m=|D|$ with \eqref{eqn:F1} not invariant & $(x_{N,1},x_{N,2})$\\ \hline \hline 11& $-3$ & $\left(\frac{-11}{|D|}\right)= 1$& $4,11,12,\underline{15},16,20,\underline{23},27,\underline{31},44,48,\dots$ & $0,\frac{1}{3}$\\ \hline 14& $-3$ & $\left(\frac{-4\cdot 14}{|D|}\right)=1$& $\underline{19},20,24,27,35,40,52,56,\underline{59},68,\dots$ & $0,\frac{1}{2}$\\ \hline 15 &$-4$ &$\substack{\left(\frac{5}{|D|}\right)=1\\ \text{and }\left(\frac{-3}{|D|}\right)\neq -1} $ &$15,16,\underline{19},24,\underline{31},\underline{39},40,\underline{51},55,60,\dots$ &$0,\frac{1}{3}$\\ \hline 17&$-7$ & $\left(\frac{-4\cdot 17}{|D|}\right)=1$ &$\underline{3},\underline{11},20,\underline{23},24,28,\underline{31},40,48,51,63,\dots$ &$0,\frac{1}{2}$\\ \hline 19 &$-4$& $\left(\frac{-19}{|D|}\right)= 1$& $\underline{7},\underline{11},19,\underline{20},\underline{24},28,\underline{35},36,\underline{39},43,44,\dots$ &$0,\frac{1}{2}$\\ \hline 20& $-3$ & $\substack{|D|\equiv 3\pmod{8}\\ \text{ and }\left(\frac{-20}{|D|}\right)= 1}$&
$27,35,\underline{43},\underline{67},\underline{83},\underline{107},115,\underline{123}, \dots$ & $0,\frac{1}{2}$\\ \hline 21& $-19$& $\substack{\left(\frac{-7}{|D|}\right)=-1\\ \text{ and }\left(\frac{-3}{|D|}\right)=1}$ &$\underline{3},7,\underline{24},27,28,\underline{31},\underline{40},48,\underline{52},63,\dots$ &$0,\frac{1}{2}$\\ \hline 24 &$-11$ &$\substack{|D|\equiv 3\pmod{8}\\ \text{and }\left(\frac{-24}{|D|}\right)=1}$ &$\underline{3},27, \underline{35},\underline{51},\underline{59},75,\underline{83},99,\underline{107},\underline{123},\dots$ & $\frac{1}{2},\frac{1}{3}$\\ \hline 27 & $-4$ &$\left(\frac{-3}{|D|}\right)=1$ &$\underline{7},\underline{19},28,36,\underline{40},\underline{43},\underline{52},\underline{55},64,\underline{67},76,\dots$ & $0,\frac{1}{2}$\\ \hline 32 & $-3$& $|D|\equiv 3\pmod{8}$ & $\underline{11},12,\underline{19},\underline{35},\underline{43},48,\underline{51},\underline{59},\underline{67},75,\underline{83}\dots$ &$0,\frac{1}{3}$\\ \hline 36&$-11$ & $\substack{|D|\equiv 3\pmod{8}\\ \text{and }\left(\frac{-3}{|D|}\right)=-1}$ &$27,\underline{35},\underline{59},\underline{83},99,\underline{107},\underline{131},\underline{155},171,\dots$ &$0,\frac{1}{2}$\\ \hline 49&$-3$ & $\left(\frac{-7}{|D|}\right)=-1$ &$\underline{19},\underline{20},27,\underline{31},\underline{40},\underline{47},48,\underline{55},\underline{59},\underline{68},75,\dots$ &$0,\frac{1}{7}$\\ \hline \end{tabular} \end{table} \end{center} \subsection{The Work of Kohnen} In this section we review the main results of \cite{KohnenCoeff} which we need. We first recall a special family of cusp forms of weight $2k$. For integers $k\geq2$, $N\geq1$, and fundamental discriminants $D,D_0$ with $DD_0>0$, let \[ f_{k,N,D,D_0}(\tau):=\sum_{\substack{[a,b,c]\in\mathcal{Q}_{DD_0}\\ N\mid a}}\chi_{D_0}(a,b,c)\left(a\tau^2+b\tau+c\right)^{-k}.
\] This series converges absolutely on compact sets and defines a cusp form of weight $2k$ on $\Gamma_0(N)$. We require the corresponding form in the case $k=1$, where the above series is not absolutely convergent. In this case, we define by the ``Hecke trick'' \[ f_{1,N,D,D_0}(\tau;s):=\sum_{\substack{[a,b,c]\in\mathcal{Q}_{DD_0}\\ N\mid a}}\chi_{D_0}(a,b,c)\left(a\tau^2+b\tau+c\right)^{-1}\left|a\tau^2+b\tau+c\right|^{-s}. \] This has an analytic continuation to $s=0$, as Kohnen showed in the proof of Proposition 2 of \cite{KohnenCoeff}, and we set $f_{1,N,D,D_0}(\tau):=\left[f_{1,N,D,D_0}(\tau;s)\right]_{s=0}\in M_2(N)$, where $[g]_{s=s_0}$ denotes the analytic continuation of $g$ to $s=s_0$ and $M_{2k}(N)$ (resp. $S_{2k}(N)$) denotes the space of weight $2k$ holomorphic modular forms (resp. cusp forms) on $\Gamma_0(N)$. We note that in general, this is \emph{not} a cusp form. We next describe how one uses these functions to obtain information about central critical $L$-values. Proposition 7 of \cite{KohnenCoeff} connects the inner product of cusp forms against this family of cusp forms with cycle integrals. To describe this, first define \[ r_{k,N,D,D_0}(f):=\sum_{\substack{Q=[a,b,c]\in \Gamma_0(N)\backslash\mathcal{Q}_{DD_0}\\ N\mid a }} \chi_{D_0}(Q)\int_{C_Q}f(z)\, d_{Q,k}z, \] where $C_Q$ is the image in $\Gamma_0(N)\backslash\mathbb{H}$ of the semicircle $a|\tau|^2+b\operatorname{Re}(\tau)+c=0$ (oriented from left to right if $a>0$, from right to left if $a<0$, and from $-\frac{c}{b}$ to $i\infty$ if $a=0$) and $d_{Q,k}z:=\left(az^2+bz+c\right)^{k-1}dz.$ Although Kohnen assumed that $N$ was squarefree and odd in \cite{KohnenCoeff}, this condition was not used in Proposition 7 of \cite{KohnenCoeff}, which we state here for general $N$. \begin{theorem}[Kohnen \cite{KohnenCoeff} Proposition 7] Let $f\in S_{2k}(N)$.
Then we have that \[ \langle f,f_{k,N,D,D_0}\rangle=[\operatorname{SL}_2(\mathbb{Z})\colon\Gamma_0(N)]^{-1}\pi\binom{2k-2}{k-1}2^{-2k+2}\left(|DD_0|\right)^{1/2-k}r_{k,N,D,D_0}(f). \] \end{theorem} In our situation, we require forms of more general level than those considered by Kohnen. For example, in the congruent number case when $k=1$ and $N=32$, there are technical difficulties both because of convergence issues in the $k=1$ case and because some results of Kohnen only hold for $N$ squarefree and odd. However, we temporarily ignore these difficulties and motivate the definition of $f_{k,N,D,D_0}$ with the following theorem of Kohnen for $N$ squarefree and odd. \begin{theorem}[Kohnen \cite{KohnenCoeff} Corollary 3]\label{thm:Kohnen} Let $f\in S_{2k}(N)$ and let $D$ and $D_0$ be fundamental discriminants with $(-1)^kD,(-1)^kD_0>0$ and $\left(\frac{D}{\ell}\right)=\left(\frac{D_0}{\ell}\right)=w_{\ell}$ for all primes $\ell\mid N$, where $w_{\ell}$ is the eigenvalue of $f$ under the Atkin--Lehner involution $W_{\ell}$. Then \[\left(DD_0\right)^{k-1/2}L(f\otimes\chi_D,k)L(f\otimes\chi_{D_0},k)=\frac{(2\pi)^{2k}}{(k-1)!^2}2^{-2\nu(N)}|r_{k,N,D,D_0}(f)|^2.\] \end{theorem} Thus, if we pick a $D_0$ with $L(f\otimes\chi_{D_0},1)\neq 0$, then (again with $N$ squarefree and odd) \[ \langle f,f_{k,N,D,D_0}\rangle\doteq L(f\otimes\chi_D,1)L(f\otimes\chi_{D_0},1) \] vanishes if and only if $L(f\otimes\chi_{D},1)=0.$ Here $\doteq$ means that the identity is true up to a non-zero constant. We now discuss the basic argument for the proof of a condition analogous to that given in Theorem \ref{mainthm} for $N$ squarefree and odd whenever the space of cusp forms is one-dimensional (this condition can be relaxed by using the Hecke operators). One would construct a locally harmonic Maass form with special properties whenever $f_{k,N,D,D_0}=0$.
Namely, the locally harmonic Maass form is a local polynomial (of degree at most $2k-2$) if and only if $f_{k,N,D,D_0}=0$. Taking the limit of the resulting local polynomials towards the cusps yields conditions resembling Theorem \ref{mainthm}. We now return to the complications arising from the fact that $N$ is not necessarily squarefree and odd, but we assume that $S_{2k}(N)$ is one-dimensional. In this case, we use Waldspurger's Theorem \cite{Waldspurger} directly to obtain the connection to central critical $L$-values. We say that $m>0$ is \begin{it}admissible for $D_0$\end{it} if $(-1)^km$ is a fundamental discriminant and $\frac{(-1)^km}{D_0}\in \left(\mathbb{Q}_p^{\times}\right)^2$ for every prime $p\mid N$. In particular, the good fundamental discriminants in Table \ref{tab:discs} satisfy these conditions for $k=1$ and the given level $N$, yielding admissible $m>0$ for each of the listed fundamental discriminants $D_0$ in the cases considered. \begin{lemma}\label{lem:Lval} Let a fundamental discriminant $D_0$ be given and suppose that $f\in S_{2k}(N)$ generates $S_{2k}(N)$. Suppose that there exists an admissible $m_0$ for $D_0$ such that $L\left(f\otimes \chi_{(-1)^km_0},k\right)\neq 0$ and the cuspidal part of $f_{k,N,(-1)^km_0,D_0}$ is non-zero. Then for every $m>0$ admissible for $D_0$, we have $$ \left<f,f_{k,N,(-1)^km,D_0}\right>=0 $$ if and only if $$ L\left(f\otimes \chi_{(-1)^km},k\right)=0. $$ \end{lemma} \begin{remark} Admissible $m_0$ satisfying the conditions given in Lemma \ref{lem:Lval} may be found in practice by computing a few coefficients of the form $f_{k,N,(-1)^km_0,D_0}$ and using the valence formula. For $k=1$ we need to project into the subspace of cusp forms, while for $k>1$ we only need to check one coefficient because $S_{2k}(N)$ is one-dimensional.
For the choices of $D_0$ given in Table \ref{tab:discs}, $m_0$ may be obtained by choosing from the list of non-vanishing choices computed in the same row of the table. For example, for $k=1$, $N=32$, and $D_0=-3$, one can check that $m=11$ is admissible and satisfies the required conditions. Moreover, potential choices of $m_0$ may be obtained by finding a fundamental discriminant $D$ for which $F_{0,N,D,D_0}(x)$ defined in \eqref{eqn:F1} does not satisfy modularity. This is a finite calculation, and one expects that this holds for any fundamental discriminant $D$ for which the central $L$-value does not vanish. If $k>1$, then to verify the above condition one multiplies each summand of $F_{0,N,D,D_0}(x)$ by $Q(x,1)^{k-1}$ and again checks modularity. \end{remark} \begin{proof} Let $g_{k,N,D,D_0}$ be the cuspidal part of $f_{k,N,D,D_0}$. Consider the generating function of the $f_{k,N,D_0,D}$ (defined in (3) of \cite{KohnenCoeff}) \begin{multline}\label{eqn:Omegadef} \Omega_{N,k}\left(z,\tau;D_0\right):=\frac{\left[\operatorname{SL}_2(\mathbb{Z}):\Gamma_0(N)\right]2\sqrt{D}}{\pi \binom{2k-2}{k-1}}\\ \times\sum_{\substack{m\geq 1\\ (-1)^km\equiv 0,1\pmod{4}}} \sqrt{m}\sum_{t\mid N}\mu(t)\chi_{D_0}(t)f_{k,\frac{N}{t},(-1)^km,D_0}\left(z\right)e^{2\pi i m\tau}. \end{multline} By Theorem 1 of \cite{KohnenCoeff} (which holds for general $N$), $\Omega_{N,k}$ is a weight $k+\frac{1}{2}$ cusp form as a function of $\tau$. Since the space of cusp forms is one-dimensional, $S_{2k}\left(\frac{N}{t}\right)$ is trivial for all $t>1$, and hence $g_{k,\frac{N}{t},(-1)^km,D_0}=0$ for all $t>1$.
Using the fact that the Eisenstein series are orthogonal to cusp forms, for $f\in S_{2k}(N)$ we hence have \begin{multline*} G_f(\tau):=\left<f,\Omega_{N,k}\left(\cdot, \tau;D_0\right)\right>=\frac{\left[\operatorname{SL}_2(\mathbb{Z}):\Gamma_0(N)\right]2\sqrt{D}}{\pi \binom{2k-2}{k-1}}\\ \times\sum_{\substack{m\geq 1\\ (-1)^km\equiv 0,1\pmod{4}}} \sqrt{m}\left<f,g_{k,N,(-1)^km,D_0}\right>e^{2\pi i m\tau}. \end{multline*} Therefore, $\sqrt{m}\left<f,g_{k,N,(-1)^km,D_0}\right>$ is the $m$th coefficient of the weight $k+\frac{1}{2}$ cusp form $G_f$. Using an argument of Parson \cite{Parson}, one can show that the action of the integral-weight Hecke operators on $\Omega_{N,k}$ in the $z$ variable equals the action of the half-integral-weight Hecke operators on $\Omega_{N,k}$ in the $\tau$ variable. Hence if $f$ is a newform, we obtain that $G_f$ is an eigenform with the same eigenvalues as $f$. Thus, by Waldspurger's Theorem \cite{Waldspurger}, if $m$ is admissible for $D_0$, then the ratio of the $m$th coefficient and the $m_0$th coefficient of $G_f$ is proportional to the ratio of the central $L$-values $L\left(f\otimes \chi_{(-1)^km},k\right)$ and $L\left(f\otimes \chi_{(-1)^km_0},k\right)$. Since $S_{2k}(N)$ is one-dimensional, there exists a constant $a_m$ for which $$ g_{k,N,(-1)^km,D_0}=a_mf. $$ By our choice of $m_0$, we have $a_{m_0}\neq 0$ and hence $$ \left<f,g_{k,N,(-1)^km_0,D_0}\right>=a_{m_0}\|f\|^2\neq 0. $$ Since $L\left(f\otimes \chi_{(-1)^km_0},k\right)\neq 0$, we obtain that $$ \left<f,f_{k,N,(-1)^km,D_0}\right>=\left<f,g_{k,N,(-1)^km,D_0}\right>=0 $$ if and only if $$ L\left(f\otimes \chi_{(-1)^km},k\right)=0. $$ \end{proof} \subsection{Weak Maass forms} We begin by defining the notion of a weak Maass form, a comprehensive survey of which can be found in, e.g., \cite{OnoCDM}. Maass forms were introduced by Maass and generalized by Bruinier and Funke in \cite{BruinierFunke} to allow growth at the cusps.
Following their work, we define a weak Maass form of weight $\kappa\in \frac12 \mathbb{Z}$ for a congruence subgroup $\Gamma$ as follows. We first recall the usual weight $\kappa$ hyperbolic Laplacian operator given by \begin{equation*} \Delta_{\kappa}:= -v^2\left( \frac{\partial^2}{\partial u^2}+\frac{\partial^2}{\partial v^2} \right) +i\kappa v\left( \frac{\partial}{\partial u} +i\frac{\partial}{\partial v} \right). \end{equation*} For half-integral $\kappa$, we require that $\Gamma$ has level divisible by $4$. For this case we also define $\varepsilon_d$ for odd $d$ by \begin{equation*} \varepsilon_d:= \begin{cases}1 &\text{ if }d\equiv 1\pmod{4},\\ i&\text{ if }d\equiv 3\pmod{4}.\end{cases} \end{equation*} Let $s\in\mathbb{C}$ be given and define $\lambda_{\kappa,s}:=\left(s-\frac{\kappa}{2}\right)\left(1-s-\frac{\kappa}{2}\right)$ (note the symmetries $\lambda_{2-\kappa,s}=\lambda_{\kappa,s}=\lambda_{\kappa,1-s}$). We may now define weak Maass forms as follows. \begin{definition}\label{MaassDefn} A \begin{it}weak Maass form of weight $\kappa$ and eigenvalue $\lambda_{\kappa,s}$ on a congruence subgroup $\Gamma\subseteq\operatorname{SL}_2(\mathbb{Z})$\end{it} (with level divisible by $4$ if $\kappa\in\frac12+\mathbb{Z}$) is any $\mathcal C^2$ function $F\colon \mathbb H\to \mathbb{C}$ satisfying: \begin{enumerate}[leftmargin=*,label={\rm(\arabic*)}] \item For all $\gamma\in \Gamma$, \[ F(\gamma \tau) = \begin{cases} (c\tau+d)^{\kappa}F(\tau)& \text{if }\kappa\in \mathbb{Z},\\ \nu_{\theta}^{2\kappa}(\gamma) F(\tau) & \text{if }\kappa\in \frac 12+\mathbb{Z}, \end{cases} \] where $\nu_{\theta}(\gamma)$ is the multiplier of the theta function $\Theta(\tau):=\sum_{n\in\mathbb{Z}}q^{n^2}$ with $q:=e^{2\pi i \tau}$. \item We have that \[ \Delta_{\kappa}(F)=\lambda_{\kappa,s} F.
\] \item As $v\rightarrow\infty$, there exist $a_1,\ldots, a_N\in\mathbb{C}$ such that \[ F(\tau)-\sum_{m=1}^Na_m\mathcal M_{\kappa,s}\left(4\pi\operatorname{sgn}(\kappa)mv\right)e^{2\pi im \operatorname{sgn}(\kappa)u} \] grows at most polynomially in $v$. Analogous conditions are required at all cusps. \end{enumerate} \end{definition} Here \begin{equation}\label{eqn:Mdef} \mathcal{M}_{\kappa,s}(t):=|t|^{-\frac{\kappa}{2}}M_{\frac{\kappa}{2}\operatorname{sgn}(t),s-\frac{1}{2}}(|t|), \end{equation} where $M_{\mu,\nu}$ denotes the usual $M$-Whittaker function. In the case when the eigenvalue of the weak Maass form is zero (i.e., if $s=\frac{\kappa}{2}$ or $s=1-\frac{\kappa}{2}$), we call the object a \begin{it}harmonic weak Maass form,\end{it} and we denote the space of harmonic weak Maass forms of weight $\kappa$ on $\Gamma_0(N)$ by $H_{\kappa}(N)$. For $2k\in 2\mathbb{N}$, there are two canonical operators mapping harmonic weak Maass forms of weight $2-2k$ to classical modular forms of weight $2k$. These are defined by \[ \xi_{2-2k,\tau}:=\xi_{2-2k}:=2i v^{2-2k}\overline{\frac{\partial}{\partial\overline{\tau}}},\] \[ \mathcal{D}^{2k-1}:=\left(\frac{1}{2\pi i} \frac{\partial}{\partial\tau}\right)^{2k-1}, \] and these operators act by \cite{OnoCDM} \[ \xi_{2-2k}\colon H_{2-2k}(N)\rightarrow M_{2k}(N), \] \[ \mathcal{D}^{2k-1}\colon H_{2-2k}(N)\rightarrow M_{2k}^!(N). \] It is interesting to note here that the image of $\mathcal{D}^{2k-1}$ is in fact the space orthogonal to cusp forms under the (regularized) Petersson inner product, which, as we see in the next subsection, is very different from the situation for local Maass forms. \subsection{Poincar\'e series} In this subsection, we define several types of Poincar\'e series which provide useful bases for spaces of weak Maass forms and are used later in the paper to provide explicit examples of local Maass forms via theta lifts.
Denote the $D$th weight $\frac{3}{2}-k$ Maass--Poincar\'e series at $i\infty$ for $\Gamma_0(4N)$ with eigenvalue $\lambda_{\frac{3}{2}-k,s}=\left(s-\frac{k}{2}-\frac{1}{4}\right)\left(\frac{3}{4}-\frac{k}{2}-s\right)$ by $P_{\frac{3}{2}-k,D,s}$. For $\text{Re}(s)>1$ these are given by \begin{equation}\label{eqn:Poincdef} P_{\frac{3}{2}-k,D,s}(\tau) := \sum_{\gamma\in \Gamma_{\infty}\backslash \Gamma_0(4N)} \psi_{-D,\frac{3}{2}-k}(s;\tau)\Big|_{\frac{3}{2}-k} \gamma\Big| \operatorname{pr}, \end{equation} where $\operatorname{pr}$ is Kohnen's projection operator (cf. p. 250 of \cite{KohnenCoeff}) into the plus space, and for $\kappa\in \frac{1}{2}\mathbb{Z}$ and $m\in \mathbb{Z}$ we define \begin{equation}\label{eqn:psidef} \psi_{m,\kappa}(s;\tau):=\left(4\pi |m|\right)^{\frac{\kappa}{2}}\Gamma(2s)^{-1}\mathcal{M}_{\kappa,s}(4\pi m v)e^{2\pi i mu}. \end{equation} Here the plus space of weight $\kappa+\frac{1}{2}$ is the subspace of forms for which the $n$th coefficient vanishes unless $(-1)^{\kappa} n\equiv 0,1\pmod{4}$. Moreover, for $\frac{3}{4}\leq \text{Re}\left(s_0\right)\leq 1$, one defines $P_{\frac{3}{2}-k,D,s_0}(\tau):=\left[P_{\frac{3}{2}-k,D,s}(\tau)\right]_{s=s_0}$. The Poincar\'e series $P_{\frac{3}{2}-k,D,s}$ are weak Maass forms with eigenvalue $\lambda_{\frac{3}{2}-k,s}$. In particular, for $s=\frac{k}{2}+\frac{1}{4}$ one obtains the harmonic weak Maass forms $$ P_{\frac{3}{2}-k,D}:=P_{\frac{3}{2}-k,D,\frac{k}{2}+\frac{1}{4}} $$ (see Theorem 3.1 of \cite{BringmannOno}).
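As a quick consistency check (ours), the specialization $s=\frac{k}{2}+\frac{1}{4}$ indeed yields eigenvalue zero, as harmonicity requires:
\[
\lambda_{\frac{3}{2}-k,\frac{k}{2}+\frac{1}{4}}=\left(\frac{k}{2}+\frac{1}{4}-\frac{k}{2}-\frac{1}{4}\right)\left(\frac{3}{4}-\frac{k}{2}-\frac{k}{2}-\frac{1}{4}\right)=0.
\]
This agrees with the general fact that $\lambda_{\kappa,s}$ vanishes precisely for $s\in\left\{\frac{\kappa}{2},1-\frac{\kappa}{2}\right\}$: for $\kappa=\frac{3}{2}-k$ one has $1-\frac{\kappa}{2}=\frac{k}{2}+\frac{1}{4}$.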
For $\text{Re}(s)>1$, we also denote the $D$th Maass--Poincar\'e series for $\Gamma_0(4N)$ at $\infty$ of weight $k+\frac{1}{2}$ by $$ P_{k+\frac{1}{2},D,s}(\tau):=\sum_{\gamma\in \Gamma_{\infty}\backslash \Gamma_0(4N)} \psi_{D,k+\frac{1}{2}}(s;\tau)\Big|_{k+\frac{1}{2}} \gamma\Big| \operatorname{pr} $$ and its analytic continuation to $s=\frac{k}{2}+\frac{1}{4}$ by $P_{k+\frac{1}{2},D}$. The Poincar\'e series $P_{k+\frac{1}{2},|D|}$ are the classical weight $k+\frac{1}{2}$ cuspidal Poincar\'e series. The definitions of the integral-weight Poincar\'e series closely resemble \eqref{eqn:Poincdef}, but we do not require these for our purposes here. \subsection{Locally harmonic Maass forms} In this section, we review, following \cite{BKK,BKM,BKS}, some basic facts about locally harmonic Maass forms and the operators acting on them. We note that the results here are formally true in the case $k=1$ as well once the convergence issues have been resolved, with the proofs adjusted mutatis mutandis. The main objects of this paper are locally harmonic Maass forms, which, as the name suggests, are closely related to weak Maass forms. These are a special case of a more general object, which we now define given a measure zero set $E$ (the ``exceptional set''). \begin{definition}\label{LocalMaassDefn} A local Maass form with exceptional set $E$, weight $\kappa\in 2\mathbb{Z}$, and eigenvalue $\lambda_{\kappa,s}$ on a congruence subgroup $\Gamma\subseteq\operatorname{SL}_2(\mathbb{Z})$ is any function $\mathcal F\colon \mathbb H\to \mathbb{C}$ satisfying: \begin{enumerate}[leftmargin=*,label={\rm(\arabic*)}] \item For all $\gamma\in \Gamma$, \[ \mathcal F\vert_{\kappa}\gamma=\mathcal F. \] \item For every $\tau\not\in E$, there exists a neighborhood around $\tau$ on which $\mathcal F$ is real-analytic and \[ \Delta_{\kappa} \mathcal F=\lambda_{\kappa,s} \mathcal F.
\] \item For $\tau\in E$ we have \[ \mathcal F(\tau)=\frac 12\lim_{r\rightarrow0^+}\left(\mathcal F(\tau+ir)+\mathcal F(\tau-ir)\right). \] \item We have that $\mathcal F$ has at most polynomial growth at the cusps. \end{enumerate} If $\mathcal{F}$ is a local Maass form of eigenvalue $0$, then we call $\mathcal{F}$ a \begin{it}locally harmonic Maass form.\end{it} \end{definition} Note that the last condition on polynomial growth is impossible for harmonic weak Maass forms to satisfy (except for classical modular forms), so that although we lose continuity, we gain nicer growth conditions. These functions first arose as natural lifts under the $\xi$-operator of $f_{k,D}:=f_{k,1,D,1}$, which themselves are important in the theory of modular forms with rational periods, along with the theory of Shimura and Shintani lifts. We are particularly interested in the case when the exceptional set is formed by a special set of geodesics corresponding to a fundamental discriminant $D>0$: \[ E_D:=\left\{\tau=u+i v\in\mathbb{H} : \exists\, a,b,c\in\mathbb{Z},\ b^2-4ac=D,\ a|\tau|^2+bu+c=0\right\}. \] We now give a summary of the important examples and maps between locally harmonic Maass forms proved in \cite{BKK}. They define for each fundamental discriminant $D>0$ a local Maass form $\mathcal F_{1-k,D}$ which has exceptional set $E_D$. Moreover, it has the following remarkable relation to Kohnen's functions $f_{k,D}$, for some non-zero constants $\alpha_D,\beta_D$: \begin{equation}\label{eqn:FxiD} \xymatrix{ \mathcal{F}_{1-k,D}\ar@/_/@{->}[rr]_{\frac{1}{\beta_D}\mathcal{D}^{2k-1} }\ar@/^/@{->}[rr]^{\frac{1}{\alpha_D}\xi_{2-2k}} &&f_{k,D}, } \end{equation} which we recall from above is an impossible property for weak Maass forms to satisfy.
Using these facts, there is a useful decomposition of the local Maass form $\mathcal F_{1-k,D}$ in terms of the Eichler integrals, defined for $f\in S_{2k}(N)$ by \begin{align} \label{eqn:Eichnonhol} f^*(\tau)&:=(2i)^{1-2k}\int_{-\overline{\tau}}^{i\infty}f^{c}(z)(z+\tau)^{2k-2}\, dz,\\ \label{eqn:Eichhol}\mathcal E_f(\tau)&:=\sum_{n\geq1}\frac{a_{f}(n)}{n^{2k-1}}q^n. \end{align} Here $f^c(z):=\overline{f\left(-\overline{z}\right)}$ is the cusp form whose Fourier coefficients are the complex conjugates of the coefficients of $f$. Recall that there exist non-zero constants $c_1$ and $c_2$ such that \begin{align*} \xi_{2-2k}\left(f_{k,D}^*(\tau)\right) &= c_1f_{k,D}(\tau), & \mathcal{D}^{2k-1}\left(f_{k,D}^*(\tau)\right) &= 0,\\ \xi_{2-2k}\left(\mathcal{E}_{f_{k,D}}(\tau)\right) &= 0, & \mathcal{D}^{2k-1}\left(\mathcal{E}_{f_{k,D}}(\tau)\right) &= c_2f_{k,D}(\tau). \end{align*} It is then not difficult to show that there exists a local polynomial $P_D$ of degree at most $2k-2$ ($P_D$ equals a fixed polynomial on each connected component of $\mathbb{H}\backslash E_D$) such that \begin{equation} \label{decomplocalmaass} \mathcal{F}_{1-k,D}=P_D+\frac{\alpha_D}{c_1}f_{k,D}^*+\frac{\beta_D}{c_2}\mathcal E_{f_{k,D}}, \end{equation} where $\alpha_D$ and $\beta_D$ are the constants in \eqref{eqn:FxiD}. A similar decomposition for $k=1$ was already remarked in \cite{BKK} as having been studied by H\"ovel in \cite{Hoevel}, and it follows directly from \eqref{eqn:FxiD}. Indeed, one sees that $P_D(\tau):=\mathcal{F}_{1-k,D}(\tau) - \frac{\alpha_D}{c_1}f_{k,D}^*(\tau)-\frac{\beta_D}{c_2}\mathcal{E}_{f_{k,D}}(\tau)$ is annihilated by both $\xi_{2-2k}$ and $\mathcal{D}^{2k-1}$ for $\tau\notin E_D$. Since $\xi_{2-2k}\left(P_D\right)=0$, $P_D$ is locally holomorphic, while the only holomorphic functions annihilated by $\mathcal{D}^{2k-1}$ are polynomials of degree at most $2k-2$.
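As a consistency check (ours), applying $\xi_{2-2k}$ and $\mathcal{D}^{2k-1}$ termwise to \eqref{decomplocalmaass} and using the images recorded above recovers \eqref{eqn:FxiD} away from $E_D$:
\begin{align*}
\xi_{2-2k}\left(\mathcal{F}_{1-k,D}\right)&=0+\frac{\alpha_D}{c_1}\cdot c_1f_{k,D}+0=\alpha_Df_{k,D},\\
\mathcal{D}^{2k-1}\left(\mathcal{F}_{1-k,D}\right)&=0+0+\frac{\beta_D}{c_2}\cdot c_2f_{k,D}=\beta_Df_{k,D}.
\end{align*}
Here the local polynomial $P_D$ contributes $0$ in both lines: it is locally holomorphic, so it is annihilated by $\xi_{2-2k}$, and it has degree at most $2k-2$, so it is annihilated by $\mathcal{D}^{2k-1}$.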
It turns out that the decomposition \eqref{decomplocalmaass} plays a key role in our proof of Theorem 1.1 later. \section{Theta lifts}\label{sec:theta-general} In this section we will define two theta lifts and study their relation. We remark that variants of both theta functions can be found in the literature. We give some references below but also provide the explicit construction of these functions for the convenience of the reader. We recall the basic setup for vector-valued theta functions, following the exposition of Borcherds \cite{boautgra}. Let $L$ be an even lattice with quadratic form $Q$ of type $(b^+, b^-)$ and let $\mathcal{L}=L'/L$ be the associated finite quadratic module with the reduction of $Q$ modulo $\mathbb{Z}$ as quadratic form. We write $(\cdot,\cdot)$ for the associated bilinear form, so that $2Q(x)=(x,x)$. We let $p$ be a harmonic polynomial on $\mathbb{R}^{b^+,b^-}$ (with respect to the Laplacian on $\mathbb{R}^{b^+ + b^-}$), homogeneous of degree $m^+$ on $\mathbb{R}^{b^+}$ and of degree $m^-$ on $\mathbb{R}^{b^-}$. If $\alpha$ is an isometry from $L \otimes \mathbb{R}$ to $\mathbb{R}^{b^+,b^-}$, then the preimage of $\mathbb{R}^{b^+}$ defines a point $z = z(\alpha)$ in the Grassmannian $\Gr(L)$ of $b^+$-dimensional positive definite subspaces of $L \otimes \mathbb{R}$. Similarly, the preimage of $\mathbb{R}^{b^-}$ is a $b^-$-dimensional negative definite subspace of $L \otimes \mathbb{R}$, equal to $z^\perp$. We write $\lambda_z$ for the orthogonal projection to $z$ and $\lambda_{z^\perp}$ for the one to $z^\perp$, so that $\lambda = \lambda_z + \lambda_{z^\perp}$ for all $\lambda \in L \otimes \mathbb{R}$. If $z = z(\alpha)$, we sometimes also write $\lambda_\alpha$ and $\lambda_{\alpha^\perp}$.
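A toy example in signature $(1,1)$ (not used later) may help to fix the meaning of these projections: take $L=\mathbb{Z}^2$ with $Q(x_1,x_2)=x_1^2-x_2^2$, so that $\left((x_1,x_2),(y_1,y_2)\right)=2x_1y_1-2x_2y_2$. The positive lines are $z=\mathbb{R}v_t$ with $v_t=(\cosh t,\sinh t)$ and $Q(v_t)=1$, and for $\lambda=(m,n)$ one finds
\[
\lambda_z=\frac{(\lambda,v_t)}{(v_t,v_t)}\,v_t=(m\cosh t-n\sinh t)\,v_t,
\qquad
Q(\lambda_z)-Q(\lambda_{z^\perp})=2(m\cosh t-n\sinh t)^2-\left(m^2-n^2\right).
\]
At $t=0$ the right-hand side equals $m^2+n^2$; in general it is a positive definite majorant of $Q$, which is exactly what makes the theta series below converge.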
Recall that there is a unitary representation $\rho_\mathcal{L}$ of $\Mp_2(\mathbb{Z})$ acting on the group ring $\mathbb{C}[\mathcal{L}]$, called the Weil representation associated with $\mathcal{L}$. For details we refer to \cite{boautgra}. We write $\mathfrak{e}_\mu$ for the standard basis element of $\mathbb{C}[\mathcal{L}]$ corresponding to $\mu \in \mathcal{L}$. The \emph{Weil representation} is defined on the generators $S=(\left(\begin{smallmatrix}0 & -1 \\ 1 & 0\end{smallmatrix}\right),\sqrt{\tau})$ and $T=(\left(\begin{smallmatrix}1 & 1 \\ 0 & 1\end{smallmatrix}\right), 1)$ by the formulas \begin{equation} \label{eq:weilrep} \begin{aligned} \rho_\mathcal{L}(T)\,\mathfrak{e}_\mu &= e(Q(\mu))\mathfrak{e}_\mu, \\ \rho_\mathcal{L}(S)\,\mathfrak{e}_\mu &= \frac{e(\operatorname{sgn}(\mathcal{L}))}{\sqrt{|\mathcal{L}|}} \sum_{\nu \in \mathcal{L}}e(-(\mu,\nu))\mathfrak{e}_{\nu}. \end{aligned} \end{equation} Let $N$ be the level of $L$. Then it is well-known (see for instance Lemma 5.15 of \cite{StrombergWeilrep}) that elements of the form $(\gamma, \sqrt{c \tau + d})$, where $\gamma = \left(\begin{smallmatrix}a & b \\ c & d\end{smallmatrix}\right) \in \Gamma_0(N)$, act on $\mathfrak{e}_0$ by a character $\chi_\mathcal{L}(\gamma)$. We do not need the general formula for this character. We just remark that if $M=\mathbb{Z}$ with quadratic form $x^2$, and $\mathcal{M}=M'/M$, then the theta function $\Theta_M=\Theta$ transforms as $\Theta(\gamma\tau) = \sqrt{c\tau+d}\,\chi_{\mathcal{M}}(\gamma)\Theta_M(\tau)$, and this transformation behavior is (usually) used to define modular forms of half-integral weight. For $\gamma \in \mathcal{L}$, we define an associated theta function via \begin{equation} \label{eq:theta-comp} \Theta_{L,\gamma}(\tau,\alpha,p) := v^{\frac{b^-}2 + m^-} \sum_{\lambda \in L + \gamma} p(\alpha(\lambda)) e(Q(\lambda_\alpha) \tau + Q(\lambda_{\alpha^\perp})\overline{\tau}).
\end{equation} By Theorem 4.1 of \cite{boautgra}, the sum \begin{equation} \label{eq:theta} \Theta_L(\tau,\alpha,p) = \sum_{\gamma \in \mathcal{L}} \Theta_{L,\gamma}(\tau,\alpha, p) \mathfrak{e}_\gamma \end{equation} transforms in $\tau$ as a vector-valued modular form of weight $(b^+-b^-)/2 + m^+-m^-$ for the Weil representation attached to $\mathcal{L}$. If $L$ is indefinite, then $\Theta_L$ is non-holomorphic in both variables. Now let $L$ be the lattice of signature $(1,2)$ given by $\mathbb{Z}^3$ together with the quadratic form $Q(a,b,c) = -b^2+ac$ and let $D_0$ be a fundamental discriminant (note that the discriminants $D_0$ in Table \ref{tab:discs} are all fundamental). The discriminant group of $L$, $\mathcal{L} = L'/L$, is isomorphic to $\mathbb{Z}/2\mathbb{Z}$ with quadratic form $-x^2/4$. Note that for $(a,b/2,c) \in L'$ with $a,b,c \in \mathbb{Z}$, the form $[a,b,c]$ is an integral binary quadratic form of discriminant $-4Q(a,b/2,c)$. We also consider $\mathcal{L}(D_0)$, the discriminant group of $D_0 L$ with $Q_{D_0}(a,b,c) = Q(a,b,c)/\abs{D_0}$ as quadratic form. This is isomorphic to $\mathbb{Z}/4D_0\mathbb{Z} \oplus \mathbb{Z}/D_0\mathbb{Z} \oplus \mathbb{Z}/D_0\mathbb{Z}$ with the quadratic form $Q(a,b,c)/(4\abs{D_0})$. Now let $N$ be a positive integer that is coprime to $D_0$ and consider the sublattice $M = M_N \subset L$ given by all vectors $(a,b,c)$ such that $N$ divides $a$. We remark that $\mathcal{M} = M'/M$ is isomorphic to $L'/L \oplus \mathcal{N}$, where $\mathcal{N} = \mathbb{Z}/N\mathbb{Z} \oplus \mathbb{Z}/N\mathbb{Z}$. Moreover, if we equip $D_0 M$ with the quadratic form $Q_{D_0}$, we obtain the discriminant group $\mathcal{L}(D_0) \oplus \mathcal{N}$ under the assumptions we made.
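To make the identification $\mathcal{L}\cong\mathbb{Z}/2\mathbb{Z}$ concrete, note that the Gram matrix of $(\cdot,\cdot)=2Q$ in the standard basis of $\mathbb{Z}^3$ shows $L'=\mathbb{Z}\oplus\frac{1}{2}\mathbb{Z}\oplus\mathbb{Z}$, so $L'/L$ is generated by $\mu=\left(0,\tfrac{1}{2},0\right)$ with
\[
Q(\mu)=-\left(\tfrac{1}{2}\right)^2+0\cdot 0=-\tfrac{1}{4},
\qquad
Q\left(a,\tfrac{b}{2},c\right)=-\tfrac{b^2}{4}+ac=-\tfrac{b^2-4ac}{4},
\]
so the quadratic form on $\mathbb{Z}/2\mathbb{Z}$ is $-x^2/4$, and $-4Q$ indeed recovers the discriminant $b^2-4ac$ of $[a,b,c]$.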
We write $\mathfrak{e}_{\mu,\nu}$ for the basis element of $\mathbb{C}[\mathcal{L}(D_0) \oplus \mathcal{N}] = \mathbb{C}[\mathcal{L}(D_0)] \otimes \mathbb{C}[\mathcal{N}]$ corresponding to $\mu \in \mathcal{L}(D_0)$ and $\nu \in \mathcal{N}$. The group $\operatorname{SL}_2(\mathbb{Q})$ acts on the rational quadratic space $L \otimes_\mathbb{Z} \mathbb{Q}$ via isometries. The action, which we denote by $\gamma.(a,b,c)$, is given by \[ M(a,b,c) = \left(\begin{smallmatrix} b & c \\ -a & -b \end{smallmatrix} \right) \mapsto \gamma \left(\begin{smallmatrix} b & c \\ -a & -b \end{smallmatrix}\right) \gamma^{-1}, \] where we identify $(a,b,c)$ with the matrix $M(a,b,c)$. As the genus character $\chi_{D_0}$ only depends on $a,b,c$ modulo $D_0$, we may view it as a function on $\mathcal{L}(D_0)$. In \cite{AE}, it is shown that the linear map \[ \Psi_{D_0}: \mathbb{C}[\mathcal{L}] \to \mathbb{C}[\mathcal{L}(D_0)], \quad \mathfrak{e}_\mu \mapsto \sum_{\substack{\delta \in \mathcal{L}(D_0) \\ \delta \equiv D_0\mu \smod{L} \\ Q_{D_0}(\delta) \equiv Q(\mu) \smod{\mathbb{Z}}}} \chi_{D_0}(\delta)\mathfrak{e}_\delta \] is an intertwiner for the Weil representations attached to $\mathcal{L}^{\operatorname{sgn}{D_0}}$ and $\mathcal{L}(D_0)$. We consider the map \[ \Psi_{D_0,\mathcal{N}}: \mathbb{C}[\mathcal{L}] \to \mathbb{C}[\mathcal{L}(D_0)\oplus\mathcal{N}] = \mathbb{C}[\mathcal{L}(D_0)] \otimes \mathbb{C}[\mathcal{N}] \] obtained from $\Psi_{D_0}$ together with the natural inclusion $\mathbb{C}[\mathcal{L}(D_0)] \hookrightarrow \mathbb{C}[\mathcal{L}(D_0)] \otimes \mathbb{C}[\mathcal{N}]$ given by $\mathfrak{e}_\delta \mapsto \mathfrak{e}_{\delta} \otimes \mathfrak{e}_0$. The following lemma is crucial for us.
\begin{lemma} \label{lem:twistvec} Let $\gamma := \left( \begin{smallmatrix} a & b \\ c & d \end{smallmatrix} \right) \in \Gamma_0(N) \cap \Gamma^0(4)$. Let \[ v_{D_0} = \Psi_{D_0,\mathcal{N}}(\mathfrak{e}_{0} + \mathfrak{e}_{1}) \in \mathbb{C}[\mathcal{L}(D_0)] \otimes \mathbb{C}[\mathcal{N}], \] where $\mathfrak{e}_0$ and $\mathfrak{e}_1$ correspond to the elements $0,1 \in \mathbb{Z}/2\mathbb{Z}$. We have that \[ \rho_{\mathcal{M}(D_0)}(\gamma,\sqrt{c\tau+d})\, v_{D_0}= \nu_\theta(\gamma)^{\operatorname{sgn}(D_0)} v_{D_0}. \] \end{lemma} \begin{proof} Using the formulas for the Weil representation we see that $\rho_\mathcal{L}(\gamma)$ for $\gamma \in \Gamma^0(4)$ acts on $\mathfrak{e}_0 + \mathfrak{e}_1$ by $\overline{\chi_\mathcal{L}(\gamma)}$. In fact, if $\gamma \in \Gamma^0(4)$, then $\widetilde{\gamma} = S^{-1}\gamma S \in \Gamma_0(4)$. Therefore, \[ \rho_{\mathcal{L}}(\gamma)(\mathfrak{e}_0 + \mathfrak{e}_1) = \rho_{\mathcal{L}}(S\widetilde\gamma S^{-1}) (\mathfrak{e}_0 + \mathfrak{e}_1) = \chi_\mathcal{L}(\widetilde\gamma)(\mathfrak{e}_0 + \mathfrak{e}_1) \] by Lemma 5.15 in \cite{StrombergWeilrep} and \eqref{eq:weilrep}. Directly above that lemma in loc. cit., the direct relation to $\nu_\theta$ is also stated. Since $\rho_\mathcal{N}(\gamma)$ acts trivially on $\mathfrak{e}_0$ for $\gamma \in \Gamma_0(N)$, we are done. \end{proof} We obtain the following scalar-valued theta function.
\begin{proposition} \label{prop:twisted-scalar-general} For $\mathcal{L}$ and $\mathcal{M}$ as above, the scalar-valued theta function \[ \Theta_{N,D_0}^*(\tau,\alpha,p) := \langle \Theta_{\mathcal{M}(D_0)}(4\tau,\alpha,p), v_{D_0} \rangle \] satisfies \[ \Theta_{N,D_0}^*(\gamma\tau,\alpha,p) = \sqrt{c\tau+d}^{2k}\, \nu_{\theta}^{-\operatorname{sgn}(D_0)}(\gamma)\, \Theta_{N,D_0}^*(\tau,\alpha,p) \] in $\tau$ for all $\gamma = \left(\begin{smallmatrix}a & b \\ c & d\end{smallmatrix}\right) \in \Gamma_0(4N)$, where $k=-1/2 + m^+-m^-$. \end{proposition} \begin{proof} The claim follows directly from Lemma \ref{lem:twistvec} by noting that if $\gamma = \left(\begin{smallmatrix}a & b \\ c & d\end{smallmatrix}\right) \in \Gamma_0(4N)$, then \[ 4\gamma\tau = \left( \begin{smallmatrix} a & 4b \\ c/4 & d \end{smallmatrix} \right) (4\tau) \] and the matrix above is contained in $\Gamma_0(N) \cap \Gamma^0(4)$. \end{proof} Now we specialize these functions by choosing an isometry $\alpha_z$ of $L \otimes \mathbb{R}$ with $\mathbb{R}^{1,2}$ or $\mathbb{R}^{2,1}$ for every $z \in \mathbb{H} \cong \Gr(L)$. Here, we use the identification $z \in \mathbb{H} \cong \Gr(L)$ given by \[ z = x + iy \mapsto \mathbb{R} (-1, (z+\bar{z})/2, z\bar{z}) = \mathbb{R}(-1, x, x^2+y^2). \] Then we let $b_1 = b_1(z)$ be a normalized basis vector for the positive line $z$, i.e., \[ b_1(z) := \frac{1}{y}(-1, x, x^2+y^2), \] and let \[ Z = Z(z) := \frac{1}{y}(-1,z,-z^2) \] and $b_2(z) = \operatorname{Re}(Z)$ and $b_3(z) = \Im(Z)$. Note that we have \begin{equation} \label{eq:Ztrans} Z(\gamma z) = \left(\frac{cz+d}{c\bar{z}+d}\right)\gamma.Z(z) \text{ and } b_1(\gamma z) = \gamma.b_1(z), \end{equation} where $\gamma = \left(\begin{smallmatrix}a & b \\ c & d\end{smallmatrix}\right)$ acts on $z \in \mathbb{H}$ via the usual fractional linear transformation. We then let $\alpha_z(ab_1+bb_2+cb_3) = (a, b, c) \in \mathbb{R}^{1,2}$.
For integers $r,s \geq 0$ we let $P_{r,s}(a,b,c) = a^r(b+ic)^{s}$ (which is homogeneous of degree $(r,s)$) and define \[ p^*_{z,k}(X) := P_{1,k-1}(\alpha_z(X)) = (X,b_1) \cdot (X,b_2 + i b_3)^{k-1} = \frac{1}{y}\left(a\abs{z}^2 + b \operatorname{Re}(z) + c\right)(az^2+bz+c)^{k-1} \] for $X = (a,b,c) \in L \otimes \mathbb{R}$. We define \[ \Theta_{1-k,N,D_0}^*(z, \tau) := y^{k-1}\Theta_{N,D_0}^*(\tau,\alpha_z,p_{z,k}^*). \] Similarly, we let \[ p_{z,k}(X) := \overline{P_{0,k}(\alpha_z(X))} = (X,b_2 - i b_3)^{k} = (a\overline{z}^2+b\overline{z}+c)^{k} \] and define $$ \Theta_{k,N,D_0}(z, \tau) := v^{-\frac{1}{2}}y^{-2k} \overline{\Theta_{N,D_0}^*(\tau,\alpha_z,p_{z,k})}. $$ \begin{proposition} \label{prop:twisted-scalar} The theta function $\Theta_{1-k,N, D_0}^*(z, \tau)$ transforms like a modular form of weight $\frac{3}{2}-k$ for $\Gamma_0(4N)$ in $\tau$ and like a modular form of weight $2-2k$ for $\Gamma_0(N)$ in $z$. Similarly, $\Theta_{k,N, D_0}(-\bar{z}, \tau)$ has weights $\frac{1}{2}+k$ and $2k$ in $\tau$ and $z$, respectively. \end{proposition} \begin{proof} The transformation behaviour in $\tau$ follows from Proposition \ref{prop:twisted-scalar-general}. The modularity in $z$ is a direct consequence of the fact that $\Gamma_0(N) \subset \operatorname{SL}_2(\mathbb{Z})$ acts via isometries: it preserves the lattice $M$ and acts trivially on $\mathcal{M}$, and since $\chi_{D_0}$ is invariant under $\operatorname{SL}_2(\mathbb{Z})$, the claim follows from \eqref{eq:Ztrans}. \end{proof} Having shown modularity of the relevant theta functions, we now look at the scalar-valued theta functions $\Theta_{1-k}^*$ and $\Theta_k$ more closely, translating the above definitions into language using sums over binary quadratic forms, which will be needed to connect with the main functions in this paper. We first define $Q_z$ for an integral binary quadratic form $Q=[a,b,c]$ as $$ Q_z:=\frac{1}{y}\left(a|z|^2+bx+c\right).
$$ The Fourier expansions (in $\tau$) of the theta functions are then easily seen to be given by $$ \Theta_{1-k}^*(z,\tau)= \Theta_{1-k,N,D_0}^*\left(z,\tau\right)=v^k \sum_{D\in \mathbb{Z}} \sum_{\substack{Q=[a,b,c]\in \mathcal{Q}_{D\left|D_0\right|}\\ N\mid a}} \chi_{D_0}(Q) Q_{z}Q(z,1)^{k-1}e^{-\frac{4\pi v}{\left|D_0\right|y^2}\left|Q\left(z,1\right)\right|^2}e^{-2\pi i D\tau}, $$ and $$ \Theta_{k}\left(z,\tau\right)= \Theta_{k,N,D_0}\left(z,\tau\right)= y^{-2k}v^{\frac{1}{2}} \sum_{D\in \mathbb{Z}} \sum_{\substack{Q=[a,b,c]\in \mathcal{Q}_{D\left|D_0\right|}\\ N\mid a}} \chi_{D_0}(Q) Q(z,1)^{k}e^{-\frac{4\pi Q_z^2 v}{\left|D_0\right|}}e^{2\pi i D\tau}. $$ \begin{remark} \begin{enumerate}[leftmargin=*,label={\rm(\arabic*)}] \item The first theta function, $\Theta_{1-k}^*(z,\tau)$, is sometimes called the Millson theta function (cf. Section 2.6.2 of \cite{Alfes}), and other variants appeared e.g. in \cite{BruinierFunke, BKK, Hoevel}. \item The theta function $\Theta_{k}(z,\tau)$ is usually called the Shintani theta function in the literature. A variant without the genus character was introduced by Shintani in \cite{Shintani}. \item Note that for trivial reasons (compatibility with the representation, or rather the degree of the polynomial and the behaviour of $\chi_{D_0}$ under $Q \mapsto -Q$), the theta functions $\Theta_{1-k}^*$ and $\Theta_k$ both vanish if $\operatorname{sgn}(D_0)(-1)^k=-1$, in other words if $D_0 > 0$ and $k$ is odd or if $D_0 < 0$ and $k$ is even. \end{enumerate} \end{remark} We next compute the Fourier expansions (in $\tau$) of $\Theta_k$ and $\Theta_{1-k}^*$ as $z$ approaches each cusp. \begin{lemma}\label{lem:Thcusps} Let $\rho$ be a cusp of $\Gamma_0(N)$ and choose $M\in\operatorname{SL}_2(\mathbb{Z})$ such that $M\infty=\rho$.
Then \begin{equation}\label{eqn:Thcusp} \Theta_{k}\left(z,\tau\right)\Big|_{2k,z}M = y^{-2k}v^{\frac{1}{2}} \sum_{D\in\mathbb{Z}}\sum_{\substack{Q=[a,b,c]\in \mathcal{Q}_{D\left|D_0\right|}\\ Q\circ M^{-1}=\left[\alpha,\beta,\gamma\right]\\ N\mid \alpha}} \chi_{D_0}\left(Q\right) Q(z,1)^{k}e^{-\frac{4\pi Q_{z}^2v}{\left|D_0\right|}}e^{2\pi i D\tau} \end{equation} and \begin{equation}\label{eqn:Th*cusp} \Theta_{1-k}^*\left(z,\tau\right)\Big|_{2-2k,z}M = v^{k} \sum_{D\in\mathbb{Z}}\sum_{\substack{Q=[a,b,c]\in \mathcal{Q}_{D\left|D_0\right|}\\ Q\circ M^{-1}=\left[\alpha,\beta,\gamma\right]\\ N\mid \alpha}} \chi_{D_0}\left(Q\right) Q_z Q(z,1)^{k-1}e^{-\frac{4\pi v}{|D_0|y^2} |Q(z,1)|^2}e^{-2\pi i D\tau}. \end{equation} \end{lemma} \begin{proof} Since the arguments are similar, we only show \eqref{eqn:Thcusp}. A direct computation yields $$ \Theta_{k}\left(z,\tau\right)\Big|_{2k,z}M = y^{-2k}v^{\frac{1}{2}} \sum_{D\in\mathbb{Z}}\sum_{\substack{Q=[a,b,c]\in \mathcal{Q}_{D\left|D_0\right|}\\ N\mid a}} \chi_{D_0}(Q) \left(Q\circ M\right)(z,1)^{k}e^{-\frac{4\pi Q_{Mz}^2v}{\left|D_0\right|}}e^{2\pi i D\tau}. $$ Moreover, $\chi_{D_0}(Q)=\chi_{D_0}\left(Q\circ M\right)$ for any $M\in \operatorname{SL}_2(\mathbb{Z})$ and we have $$ \frac{Q_{Mz}^2}{\left|D_0\right|}=\frac{\left|Q(Mz,1)\right|^2}{\left|D_0\right|\Im\left(Mz\right)^2}-D=\frac{\left|\left(Q\circ M\right)(z,1)\right|^2}{\left|D_0\right|y^2}-D=\frac{\left(Q\circ M\right)_z^2}{\left|D_0\right|}. $$ The claim hence follows. \end{proof} The two theta functions are related via the following differential equations, which follow mutatis mutandis from the calculation in Lemma 3.3 of \cite{BKM}, after twisting by a genus character.
\begin{lemma}\label{lem:xi} For every $k\geq 1$, we have \begin{align} \label{eqn:xiTh} \xi_{k+\frac{1}{2},\tau}\left(\Theta_k\left(z,\tau\right)\right)&=-iy^{2-2k}\frac{\partial}{\partial z}\Theta_{1-k}^*\left(-\overline{z},\tau\right),\\ \label{eqn:xiTh*} \xi_{\frac{3}{2}-k,\tau}\left(\Theta_{1-k}^*\left(z,\tau\right)\right)&=-iy^{2k}\frac{\partial}{\partial z}\Theta_k\left(z,\tau\right). \end{align} \end{lemma} We now define two theta lifts using the theta kernels constructed above. For a harmonic weak Maass form $H$ of weight $\frac{3}{2}-k$ for $\Gamma_0(4N)$, we consider the regularized theta integral $$ \Phi_{1-k}^*(H)(z)= \Phi_{1-k,N,D_0}^*(H)(z):=\left<H,\Theta_{1-k}^*\left(-\overline{z},\cdot\right)\right>^{\operatorname{reg}}. $$ Similarly, for a harmonic weak Maass form $H$ of weight $\frac{1}{2}+k$ for $\Gamma_0(4N)$, we also consider $$ \Phi_{k}(H)(z)= \Phi_{k,N,D_0}(H)(z):=\left<H,y^{-2k}\Theta_{k}\left(z,\cdot\right)\right>^{\operatorname{reg}}. $$ The function $\Phi_{1-k}^*(H)$ is modular of weight $2-2k$ in $z$ and $\Phi_{k}(H)$ is modular of weight $2k$. Here, the integrals are regularized as follows. For two real analytic functions $F$ and $G$ satisfying weight $\kappa\in \frac{1}{2}\mathbb{Z}$ modularity for $\Gamma_0(4N)$, we define the \begin{it}regularized inner product\end{it} $$ \left<F,G\right>^{\operatorname{reg}}:=\lim_{T\to\infty} \frac{1}{\left[\operatorname{SL}_2(\mathbb{Z}):\Gamma_0(4N)\right]}\int_{\Gamma_0(4N)\backslash \mathbb{H}_T} F(\tau)\overline{G(\tau)} v^{\kappa} \frac{du\, dv}{v^2}, $$ whenever it exists.
Here $$ \mathbb{H}_T:=\bigcup_{\gamma\in \operatorname{SL}_2(\mathbb{Z})} \gamma \mathcal{F}_T, $$ where we define the truncated fundamental domain for $\operatorname{SL}_2(\mathbb{Z})$ by $$ \mathcal{F}_T:=\left\{ \tau\in \mathbb{H}: -\frac{1}{2}\leq u<\frac{1}{2},\ v<T,\ |\tau|\geq 1,\ |\tau|=1\implies u\leq 0\right\}. $$ Following the argument of Lemma 3.4 of \cite{BKM}, Lemma \ref{lem:xi} leads to the following lemma relating the regularized inner products of weak Maass forms against $\Theta_{k}$ and $\Theta_{1-k}^*$. \begin{lemma}\label{lem:xireg} Suppose that $D$ is a fundamental discriminant and $z\notin E_{D_0 D}$. Then for every $s$ with $\text{Re}(s)>\max\left(1,\frac{k}{2}+\frac{3}{4}\right)$ one has \begin{align*} \left<\xi_{k+\frac{1}{2}}\left(P_{k+\frac{1}{2},|D|,s}\right),\Theta_{1-k}^*\left(-\overline{z},\cdot\right)\right>^{\operatorname{reg}} = -\left<P_{k+\frac{1}{2},|D|,s},\xi_{\frac{3}{2}-k}\left(\Theta_{1-k}^*\left(-\overline{z},\cdot\right)\right)\right>^{\operatorname{reg}},\\ \left<\xi_{\frac{3}{2}-k}\left(P_{\frac{3}{2}-k,|D|,s}\right),\Theta_k\left(z,\cdot\right)\right>^{\operatorname{reg}} = -\left<P_{\frac{3}{2}-k,|D|,s},\xi_{k+\frac{1}{2}}\left(\Theta_k\left(z,\cdot\right)\right)\right>^{\operatorname{reg}}. \end{align*} \end{lemma} \begin{proof} By the argument in Lemma 3.4 of \cite{BKM}, one may reduce the first statement to showing that $$ \lim_{T\to\infty}\int_0^1P_{k+\frac{1}{2},|D|,s}\left(u+iT\right)\Theta_{1-k}^*\left(-\overline{z},u+iT\right)du =0, $$ as well as the vanishing of similar integrals around the other cusps of $\Gamma_0(4N)$. For the cusps equivalent to $\infty$, $0$, and $\frac{1}{2}$, the argument follows by the usual bounds for the $M$-Whittaker function (as shown in \cite{BKM}).
The Poincar\'e series has no principal part at the cusps which are inequivalent to $0$, $\frac{1}{2}$, and $\infty$, and hence the corresponding integrals vanish. The argument for the second statement is similar. \end{proof} The following diagram summarizes Lemmas \ref{lem:xi} and \ref{lem:xireg}, up to constants: \[ \xymatrix{ &\Theta_{k}(z,\tau)\ar@{->}[dd]_{\xi_{k+\frac{1}{2},\tau}}\ar@/_/[ddr]_<<<<<<{\frac{\partial}{\partial z}}\ar@/^/[dl]_{\substack{\left<\xi_{\frac{3}{2}-k}\left(P_{\frac{3}{2}-k,|D|,s}\right) ,\cdot\right>\\ \vphantom{a_{a_n}}}}&\Theta_{1-k}^*(z,\tau)\ar@/^/[ddl]^<<<<<<{\frac{\partial}{\partial z}}\ar@{->}[dd]^{\xi_{\frac{3}{2}-k,\tau}} \ar@/_/[dr]^{\substack{\left<\xi_{k+\frac{1}{2}}\left(P_{k+\frac{1}{2},|D|,s}\right) ,\cdot\right>\\ \vphantom{a_{a_n}} }}&&\\ \Phi_{k}\!\left(\xi_{\frac{3}{2}-k}\!\left(P_{\frac{3}{2}-k,|D|,s}\right)\right)\hspace{-.25in}&&&\hspace{-0.25in}\Phi_{1-k}^*\!\left(\xi_{k+\frac{1}{2}}\!\left(P_{k+\frac{1}{2},|D|,s}\right)\right)\\ &y^{2-2k}\frac{\partial}{\partial z}\Theta_{1-k}^*(-\overline{z},\tau)\ar@/_/[ul]^{\substack{\vphantom{a}\\ \left<P_{\frac{3}{2}-k,|D|,s} ,\cdot\right>\hphantom{aa} }} & y^{2k}\frac{\partial}{\partial z} \Theta_{k}(z,\tau)\ar@/^/[ur]_{\substack{\vphantom{a}\\ \left<P_{k+\frac{1}{2},|D|,s} ,\cdot\right>}}& } \] \subsection{Theta lifts and spectral parameters} \label{sec:theta-lifts} Define (for $\tau=u+iv$, $z=x+iy$) \begin{equation}\label{eqn:Fdefslarge} \mathcal{F}_{1-k, N, D_0,D,s}(z) := \sum_{\substack{Q=[a,b,c]\in \mathcal{Q}_{D_0D}\\ N\mid a}} \chi_{D_0}(Q) \operatorname{sgn}\left(Q_{z}\right) Q(z,1)^{k-1}\varphi_s^*\left(\frac{D_0Dy^2}{\left|Q(z,1)\right|^2}\right), \end{equation} where $$ \varphi_s^*(w):=\frac{\Gamma\left(s+\frac{1}{4}\right)(4\pi)^{\frac{1}{4}}}{12\sqrt{\pi}
\Gamma(2s)} w^{s-\frac{1}{4}}{_2F_1}\left(s-\frac{1}{4}, s-\frac{1}{4};2s;w\right). $$ \begin{proposition}\label{prop:local} For $\text{Re}(s)>1$, the function $\mathcal{F}_{0,N,D_0,D,s}$ is a local Maass form of weight $0$ and eigenvalue $4\lambda_{\frac{1}{2},s} = \lambda_{0,2s-\frac{1}{2}}$ on $\Gamma_0(N)$, up to the condition at the cusps. \end{proposition} \begin{remark} Following an argument in \cite{BKK}, one could show that the functions $\mathcal{F}_{0,N,D_0,D,s}$ satisfy the necessary condition at the cusps, but we only need this for one special eigenvalue, so we do not work this out here. \end{remark} \begin{proof}[Proof of Proposition \ref{prop:local}] We compute $\Phi_{1-k}^*\left(P_{\frac{3}{2}-k,|D|,s}\right)$ as in \cite{BKM}, where $P_{\frac{3}{2}-k,|D|,s}$ was defined in \eqref{eqn:Poincdef}. Unfolding the Poincar\'e series and following the proof of Theorem 1.3 (2) in \cite{BKM}, we obtain \begin{multline*} \Phi_{1-k}^*\left(P_{\frac{3}{2}-k,|D|,s}\right)(z) =\frac{1}{\left[\operatorname{SL}_2(\mathbb{Z}):\Gamma_0(4N)\right]}\left(4\pi |D|\right)^{\frac{1}{4}-\frac{k}{2}}\Gamma(2s)^{-1}\\ \times \sum_{\substack{Q=[a,b,c]\in \mathcal{Q}_{DD_0}\\ N\mid a}} \chi_{D_0}(Q)Q_zQ(z,1)^{k-1}\mathcal{I}\left(\frac{D_0Dy^2}{\left|Q\left(z,1\right)\right|^2}\right), \end{multline*} where $$ \mathcal{I}(w):=\int_{0}^{\infty} \mathcal{M}_{\frac{3}{2}-k,s}\left(-v\right)e^{\frac{v}{2}}v^{-\frac{1}{2}}e^{-\frac{v}{w}}dv. $$ In the proof of Theorem 1.3 (2) of \cite{BKM}, this integral was computed to be $$ \mathcal{I}(w)=\Gamma\left(s+\frac{k}{2}-\frac14\right)\left(1-w\right)^{-\frac{1}{2}}w^{s+\frac{k}{2}-\frac{1}{4}}{_2F_1}\left(s-\frac{k}{2}+\frac{1}{4},s+\frac{k}{2}-\frac{3}{4};2s;w\right).
$$ We now plug in $w=\frac{D_0Dy^2}{\left|Q(z,1)\right|^2}$, use \begin{equation}\label{eqn:QQz} \left|Q(z,1)\right|^2=\left|D_0\right|D y^2+Q_z^2y^2 \end{equation} to rewrite $$ \left(1-\frac{D_0D y^2}{\left|Q(z,1)\right|^2}\right)^{-\frac{1}{2}}=\frac{\sqrt{D_0 D}}{\left|Q_{z}\right|}, $$ and plug in $k=1$ to obtain \begin{equation}\label{eqn:Phi1/2} \Phi_{0}^*\left(P_{\frac{1}{2},|D|,s}\right)(z) = \frac{\left(D_0D\right)^{\frac{1}{2}}}{\left[\operatorname{SL}_2(\mathbb{Z}):\Gamma_0(4N)\right]\left(4\pi\left|D\right|\right)^{\frac{1}{4}}\Gamma(2s)} \sum_{\substack{Q=[a,b,c]\in \mathcal{Q}_{DD_0}\\ N\mid a}} \chi_{D_0}(Q)\operatorname{sgn}\left(Q_z\right)\varphi_{s}^*\left(\frac{D_0Dy^2}{\left|Q(z,1)\right|^2}\right). \end{equation} The claim is hence equivalent to showing that for $H=P_{\frac{3}{2}-k,|D|,s}$, the function $\Phi_0^*(H)$ is a local Maass form with eigenvalue $4\lambda_{\frac{3}{2}-k,s}=\lambda_{2-2k,2s-\frac{1}{2}}$.
However, by Lemma \ref{lem:xi} and Lemma \ref{lem:xireg} together with the fact that for $\kappa\in\frac{1}{2}\mathbb{Z}$ $$ \Delta_{\kappa} = -\xi_{2-\kappa}\xi_{\kappa}, $$ we have that for $z\notin E_D$ \begin{multline}\label{eqn:DeltaH} \Delta_{2-2k}\left(\Phi_{1-k}^*(H)\right) = -\xi_{2k}\xi_{2-2k}\left(\left<H,\Theta_{1-k}^*\left(-\overline{z},\cdot\right)\right>^{\operatorname{reg}}\right)=2\xi_{2k}\left(\left<H,iy^{2-2k}\frac{\partial}{\partial z}\left(\Theta_{1-k}^*\left(-\overline{z},\cdot\right)\right)\right>^{\operatorname{reg}}\right)\\ =-2\xi_{2k}\left(\left<H,\xi_{k+\frac{1}{2}}\left(\Theta_{k}\left(z,\cdot\right)\right)\right>^{\operatorname{reg}}\right)=-4\left<\xi_{\frac{3}{2}-k}(H),iy^{2k}\frac{\partial}{\partial z}\left(\Theta_{k}\left(z,\cdot\right)\right)\right>^{\operatorname{reg}}\\ =4\left<\Delta_{\frac{3}{2}-k}(H),\Theta_{1-k}^*\left(-\overline{z},\cdot\right)\right>^{\operatorname{reg}}=4\lambda_{\frac{3}{2}-k,s}\Phi_{1-k}^*(H). \end{multline} \end{proof} We are mainly interested in the analytic continuation $\mathcal{F}_{0,N,D_0,D}:=\left[\mathcal{F}_{0,N,D_0,D,s}\right]_{s=\frac{3}{4}}$ to $s=\frac{3}{4}$. \begin{corollary}\label{cor:FchiD} The function $\mathcal{F}_{0,N,D_0,D}$ is a locally harmonic Maass form of weight $0$ on $\Gamma_0(N)$. \end{corollary} \begin{proof} Recall that the Poincar\'e series $P_{\frac{1}{2},|D|,s}$ has an analytic continuation to $s=\frac{3}{4}$, which we have denoted by $P_{\frac{1}{2},|D|}$ and which is harmonic. Since the theta lift $\Phi_0^*\left(P_{\frac{1}{2},|D|}\right)$ exists, we may analytically continue $\mathcal{F}_{0,N,D_0,D,s}$ to $s=\frac{3}{4}$ by \eqref{eqn:Phi1/2}.
Moreover, since the eigenvalue of $\Phi_0^*\left(P_{\frac{1}{2},|D|,s}\right)$ is $\lambda_{0,2s-\frac{1}{2}}=4\lambda_{\frac{1}{2},s}$ and $\lambda_{\frac{1}{2},\frac{3}{4}}=0$, we obtain that $\mathcal{F}_{0,N,D_0,D}$ is a locally harmonic Maass form of weight $0$. It remains to show the growth condition at all cusps. In order to show that $\mathcal{F}_{0,N,D_0,D}$ is bounded at each cusp, we use \eqref{eqn:Th*cusp} to pull the slash operator inside the theta lift; the same argument as that used to obtain \eqref{eqn:Phi1/2} then yields for $H=P_{\frac{3}{2}-k,|D|,s}$ that \begin{multline*} \mathcal{F}_{0,N,D_0,D,M}(z):=\mathcal{F}_{0,N,D_0,D}\Big|_{0}M(z)= \Phi_{0}^*(H)\Big|_{2-2k}M(z) \\ = \sum_{\substack{Q=[a,b,c]\in \mathcal{Q}_{DD_0}\\ Q\circ M^{-1}=\left[\alpha,\beta,\gamma\right]\\ N\mid \alpha}} \chi_{D_0}(Q)\operatorname{sgn}(Q_z) \varphi_{\frac{3}{4}}^*\left(\frac{D_0Dy^2}{\left|Q(z,1)\right|^2}\right). \end{multline*} Similarly, for the holomorphic modular form $h=P_{k+\frac{1}{2},|D|}$, we define \[ \Phi_{k}(h):=\left<h,\Theta_{k}\left(z,\cdot\right)\right>, \] and for a discriminant $D=-m$, we then define \begin{equation}\label{eqn:fchiDdef} \mathbbm{f}_{1,N,D_0,D}:=\left[\operatorname{SL}_2(\mathbb{Z}):\Gamma_0(4N)\right]\Phi_1\left(P_{\frac{3}{2},|D|}\right). \end{equation} Note that the exponential decay of the cusp form $P_{\frac{3}{2},|D|}$ and the polynomial growth of $\Theta_1$ in the integration variable imply that regularization is unnecessary for the cusp form $H=P_{\frac{3}{2},|D|}$.
We next compute \begin{multline*} \mathbbm{f}_{1,N,D_0,D,M}(z):=\mathbbm{f}_{1,N,D_0,D}\Big|_{2} M(z)= \Phi_{1}(h)\Big|_{2k} M(z)\\ = \sum_{\substack{Q=[a,b,c]\in \mathcal{Q}_{DD_0}\\ Q\circ M^{-1}=\left[\alpha,\beta,\gamma\right]\\ N\mid \alpha}} \chi_{D_0}(Q)\operatorname{sgn}(Q_z) \varphi_{\frac{3}{4}}^*\left(\frac{D_0Dy^2}{\left|Q(z,1)\right|^2}\right). \end{multline*} By \eqref{eqn:xiTh} and Lemma \ref{lem:xireg}, for a weight $\frac{1}{2}$ harmonic weak Maass form $H$, we have \[ i\frac{\partial}{\partial z}\Phi_{0}^*\left(H\right)= \left<H,-i\frac{\partial}{\partial z}\Theta_{0}^*\left(-\overline{z},\cdot\right)\right>=\left<H,\xi_{\frac{3}{2}}\left(\Theta_{1}\left(z,\cdot\right)\right)\right>=\left<\xi_{\frac{1}{2}}\left(H\right), \Theta_{1}\left(z,\cdot\right)\right>= \Phi_{1}\left( \xi_{\frac{1}{2}}\left(H\right)\right). \] Hence for every weight $\frac{1}{2}$ harmonic weak Maass form $H$, we have \begin{equation}\label{eqn:DPhi*} \mathcal{D}\left(\Phi_0^*(H)\right) = -\frac{1}{2\pi}\Phi_1\left(\xi_{\frac{1}{2}}\left(H\right)\right). \end{equation} We have also seen (see the second and third identities of \eqref{eqn:DeltaH} together with Lemma \ref{lem:xireg}) that \begin{equation}\label{eqn:xiPhi*} \xi_{0}\left(\Phi_{0}^*(H)\right) = 2\Phi_{1}\left(\xi_{\frac{1}{2}}(H)\right). \end{equation} Since $\xi_{\frac{1}{2}}(H)= ch$ for some constant $c\in\mathbb{R}$ and slashing commutes with the differential operators, we see that \[ \mathcal{F}_{0,N,D_0,D,M}-2c \left( f_{1,D_0,D,M}^* -\frac{1}{4\pi} \mathcal{E}_{f_{1,D_0,D,M}}\right) \] is annihilated by both $\mathcal{D}$ and $\xi_{0}$, where we recall the Eichler integrals $f^*$ and $\mathcal{E}_f$ for cusp forms $f$ defined in \eqref{eqn:Eichnonhol} and \eqref{eqn:Eichhol}, respectively, and extend their definition via Fourier expansions to Eisenstein series.
Thus we see that the above difference is both holomorphic and anti-holomorphic, and hence locally constant. The functions $f_{1,D_0,D,M}^*$ and $\mathcal{E}_{f_{1,D_0,D,M}}$ both grow at most polynomially towards the cusps, because the Eichler integrals corresponding to the cuspidal component vanish at the cusps while the harmonic Eisenstein series component may have polynomial growth. Hence $\mathcal{F}_{0,N,D_0,D}$ grows at most polynomially towards the cusps, yielding the claim. \end{proof} \subsection{Connection between the two analytic continuations} It turns out that the function $\mathbbm{f}_{1,N,D_0,D}$ defined in \eqref{eqn:fchiDdef} essentially equals Kohnen's $f_{1,N,D,D_0}$. \begin{lemma}\label{lem:f1fchi} We have $$ f_{1,N,D,D_0}=\frac{6\left(4\pi\right)^{\frac{1}{4}}\Gamma\left(\frac{3}{2}\right)}{|D|^{\frac{3}{4}}}\mathbbm{f}_{1,N,D_0,D}. $$ \end{lemma} \begin{proof} For $\text{Re}(s)>\frac{3}{4}$, we first define $$ \mathbbm{f}_{1,N,D_0,D,s}(z) := \left[\operatorname{SL}_2(\mathbb{Z}):\Gamma_0(4N)\right] \Phi_{1}\left(P_{\frac{3}{2},|D|,s}\right), $$ where $$ \Phi_{k}(H):=\left<H,\Theta_{k}\left(z,\cdot\right)\right>. $$ Recalling that $P_{\frac{3}{2},|D|} = \left[P_{\frac{3}{2},|D|,s}\right]_{s=\frac{3}{4}}$, we have $$ \mathbbm{f}_{1,N,D_0,D}= \left[\mathbbm{f}_{1,N,D_0,D,s}\right]_{s=\frac{3}{4}}. $$ Following the proof of Proposition \ref{prop:local}, for $\text{Re}(s)>\frac{3}{4}$ we obtain $$ \mathbbm{f}_{1,N,D_0,D,s}(z) =\sum_{\substack{Q=[a,b,c]\in \mathcal{Q}_{D_0D}\\ N\mid a}} \frac{\chi_{D_0}(Q)}{Q(z,1)} \varphi_s\left(\frac{D_0Dy^2}{\left|Q(z,1)\right|^2}\right), $$ where $$ \varphi_s(w):=\frac{\Gamma\left(s+\frac{1}{4}\right)|D|^{\frac{3}{4}}}{6\left(4\pi\right)^{\frac{1}{4}}\Gamma(2s)} w^{s-\frac{3}{4}}{_2F_1}\left(s+\frac{1}{4}, s-\frac{3}{4};2s;w\right).
$$ For $\text{Re}(s)>\frac{3}{4}$, we furthermore define $$ f_{1,N,D,D_0,s}(z):=\sum_{\substack{Q\in \mathcal{Q}_{D_0D}\\ N\mid a}} \frac{\chi_{D_0}(Q)}{Q(z,1)\left|Q(z,1)\right|^{2s-\frac{3}{2}}}. $$ Kohnen \cite{KohnenCoeff} used Hecke's trick to show that the analytic continuation of $f_{1,N,D,D_0,s}$ to $s=\frac{3}{4}$ exists and equals $f_{1,N,D,D_0}$. We now set $$ \alpha_t(w):=w^{\frac{3}{4}-t} \frac{\Gamma(2t)}{\Gamma\left(t+\frac{1}{4}\right)} \varphi_t(w). $$ Since ${_2F_1}\left(1,0;\frac{3}{2};w\right)=1$, we see that $$ \alpha_{\frac{3}{4}}(w) = \frac{|D|^{\frac{3}{4}}}{6\left(4\pi\right)^{\frac{1}{4}}} $$ is independent of $w$. Hence we have \begin{multline}\label{eqn:f1fchi1} \frac{|D|^{\frac{3}{4}}}{6\left(4\pi\right)^{\frac{1}{4}}\Gamma\left(\frac{3}{2}\right)}f_{1,N,D,D_0}(z)\\ =\Bigg[\frac{\Gamma\left(s+\frac{1}{4}\right)}{\left(D_0Dy^2\right)^{\frac{3}{2}-2s}\Gamma(2s)}\sum_{\substack{Q\in \mathcal{Q}_{D_0D}\\ N\mid a}} \frac{\chi_{D_0}(Q)}{Q(z,1)\left|Q(z,1)\right|^{2s-\frac{3}{2}}}\alpha_{\frac{3}{4}}\left(\frac{D_0Dy^2}{\left|Q(z,1)\right|^2}\right)\Bigg]_{s=\frac{3}{4}}.
\end{multline} Therefore, we conclude that \begin{multline}\label{eqn:diff} \mathbbm{f}_{1,N,D_0,D}(z) - \frac{|D|^{\frac{3}{4}}}{6\left(4\pi\right)^{\frac{1}{4}}\Gamma\left(\frac{3}{2}\right)}f_{1,N,D,D_0}(z)\\ = \Bigg[\frac{\Gamma\left(s+\frac{1}{4}\right)}{\left(D_0Dy^2\right)^{\frac{3}{2}-2s}\Gamma(2s)}\sum_{\substack{Q\in \mathcal{Q}_{D_0D}\\ N\mid a}} \frac{\chi_{D_0}(Q)}{Q(z,1)\left|Q(z,1)\right|^{2s-\frac{3}{2}}}\left(\alpha_s\left(\frac{D_0Dy^2}{\left|Q(z,1)\right|^2}\right)-\alpha_{\frac{3}{4}}\left(\frac{D_0Dy^2}{\left|Q(z,1)\right|^2}\right)\right)\Bigg]_{s=\frac{3}{4}}\!\!\!. \end{multline} We next show that the right-hand side of \eqref{eqn:diff} converges absolutely for $\text{Re}(s)>-\frac{5}{4}$. Hence its analytic continuation to $s=\frac{3}{4}$ is simply its value with $s=\frac{3}{4}$ plugged in, which is clearly zero. Recall that by the power series expansion for ${_2F_1}\left(s+\frac{1}{4}, s-\frac{3}{4};2s;w\right)$, for $|w|<1$ we have $$ \alpha_s(w) - \alpha_{\frac{3}{4}}(w)= \frac{|D|^{\frac{3}{4}}}{6\left(4\pi\right)^{\frac{1}{4}}}\left(\frac{\left(s+\frac{1}{4}\right)\left(s-\frac{3}{4}\right)}{2s}\right) w + O\left(w^2\right). $$ Using \eqref{eqn:QQz}, we have $|w|\leq 1$, with $|w|=1$ if and only if $Q_z=0$. However, by Lemma 5.1 (1) of \cite{BKK}, for each $z\in \mathbb{H}$, there are only finitely many $Q\in \mathcal{Q}_{D_0D}$ for which $Q_z=0$. For these finitely many $Q$, we may directly plug in $s=\frac{3}{4}$ to see that these terms vanish on the right-hand side of \eqref{eqn:diff}. We hence conclude that the sum on the right-hand side of \eqref{eqn:diff} converges absolutely for $\text{Re}(s)>-\frac{5}{4}$, which concludes the proof.
\end{proof} \section{Proofs of Theorem \ref{mainthm} and Corollaries \ref{maincor}, \ref{cor:SDodd}, and \ref{cor:sumtwocubes}}\label{sec:mainproofs} Here we tie together the results of the previous sections to complete the proofs of Theorem \ref{mainthm} and Corollaries \ref{maincor}, \ref{cor:SDodd}, and \ref{cor:sumtwocubes}. \subsection{Computation of Local Polynomials} The basic idea of Theorem \ref{mainthm} is to relate the (logarithmic) singularities of $\mathcal{F}_{0,N,D_0,D}$ along geodesics with the quantities $F_{0,N,D,D_0}(x)$. This ties together the invariance of $\mathcal{F}_{0,N,D_0,D}$ with that of $F_{0,N,D,D_0}$, but only ``part of'' $\mathcal{F}_{0,N,D_0,D}$ (a locally constant function) contributes to the sum $F_{0,N,D,D_0}$ defined in \eqref{eqn:F1}. It is hence of interest to compute the locally constant function which gives these singularities. In general, for $k\geq 1$, the function exhibiting the singularity is a local polynomial. We thus define \begin{multline}\label{eqn:Pdef} \mathcal{P}_{D_0,D,s}(z):=-\sum_{\substack{ t> 0\\ \exists Q\in \mathcal{Q}_{D_0D}\\ Q_{z+it}=0>a\\ N\mid a}}\lim_{\varepsilon\to 0^+}\left(\mathcal{F}_{0,N,D_0,D,s}\left(z+it+i\varepsilon\right)-\mathcal{F}_{0,N,D_0,D,s}\left(z+it-i\varepsilon\right)\right)\\ -\lim_{\varepsilon\to 0^+}\left(\mathcal{F}_{0,N, D_0,D,s}\left(z+i\varepsilon\right)-\mathcal{F}_{0,N, D_0,D,s}\left(z\right)\right) \end{multline} and $$ \mathcal{G}_{D_0,D,s}:=\mathcal{F}_{0,N, D_0,D,s}-\mathcal{P}_{D_0,D,s}. $$ Here we suppress the dependence on $N$ and the weight in the definitions for ease of notation in the following calculations. The proof below follows the argument in Theorem 7.1 of \cite{BKK}, but a more direct approach via theta lifts, written in terms of wall-crossing behavior across Weyl chambers, may be found in Satz 3.15 of \cite{Hoevel}.
\begin{lemma}\label{lem:Gcont} If $DD_0$ is not a square, then the function $\mathcal{G}_{D_0,D,s}$ is continuous. \end{lemma} \begin{proof} In each connected component of $\mathbb{H}\setminus E_D$, the function $\mathcal{F}_{0,N,D_0,D,s}$ is continuous and $\mathcal{P}_{D_0,D,s}$ is constant. Suppose now that $z\in E_{D}$. By continuity in each connected component, it suffices to show that for each sign $\pm$ $$ \lim_{\delta\to 0^+} \mathcal{G}_{D_0,D,s}\left(z\pm i\delta\right) = \mathcal{G}_{D_0,D,s}(z). $$ Using \eqref{eqn:Pdef} and continuity at $z+i\delta$ for $\delta>0$, we compute (in the following we suppress the dependence on $N$ and the weight $0$ in $\mathcal{F}_{0,N,D_0,D,s}$ for ease of notation) \begin{multline*} \lim_{\delta \to 0^+}\!\left(\mathcal{P}_{D_0,D,s}\!\left(z+ i\delta\right) - \mathcal{P}_{D_0,D,s}(z)\right) =\! -\hspace{-.1in}\sum_{\substack{ t> 0\\ \exists Q\in \mathcal{Q}_{D_0D}\\ Q_{z+it}=0>a\\ N\mid a}}\!\!\lim_{\varepsilon\to 0^+}\!\left(\mathcal{F}_{D_0,D,s}\!\left(z+it+i\varepsilon\right)-\mathcal{F}_{D_0,D,s}\!\left(z+it-i\varepsilon\right)\right)\\ +\!\!\sum_{\substack{ t> 0\\ \exists Q\in \mathcal{Q}_{D_0D}\\ Q_{z+it}=0>a\\ N\mid a}}\!\!\lim_{\varepsilon\to 0^+}\!\left(\mathcal{F}_{D_0,D,s}\!\left(z+it+i\varepsilon\right)-\mathcal{F}_{D_0,D,s}\!\left(z+it-i\varepsilon\right)\right)+\lim_{\varepsilon\to 0^+}\!\left(\mathcal{F}_{D_0,D,s}\!\left(z+i\varepsilon\right)-\mathcal{F}_{D_0,D,s}\!\left(z\right)\right)\\ = \lim_{\varepsilon\to 0^+}\!\left(\mathcal{F}_{D_0,D,s}\!\left(z+i\varepsilon\right)-\mathcal{F}_{D_0,D,s}\!\left(z\right)\right). \end{multline*} Similarly, we have \begin{multline*} \lim_{\delta \to 0^+}\!\left(\mathcal{P}_{D_0,D,s}\!\left(z- i\delta\right) - \mathcal{P}_{D_0,D,s}(z)\right)=\!
-\hspace{-.1in}\sum_{\substack{ t\geq 0\\ \exists Q\in \mathcal{Q}_{D_0D}\\ Q_{z+it}= 0>a\\ N\mid a}}\!\!\lim_{\varepsilon\to 0^+}\!\left(\mathcal{F}_{D_0,D,s}\!\left(z+it+i\varepsilon\right)-\mathcal{F}_{D_0,D,s}\!\left(z+it-i\varepsilon\right)\right)\\ +\!\!\sum_{\substack{ t> 0\\ \exists Q\in \mathcal{Q}_{D_0D}\\ Q_{z+it}=0>a\\ N\mid a}}\!\!\lim_{\varepsilon\to 0^+}\!\left(\mathcal{F}_{D_0,D,s}\!\left(z+it+i\varepsilon\right)-\mathcal{F}_{D_0,D,s}\!\left(z+it-i\varepsilon\right)\right)+\lim_{\varepsilon\to 0^+}\!\left(\mathcal{F}_{D_0,D,s}\!\left(z+i\varepsilon\right)-\mathcal{F}_{D_0,D,s}\!\left(z\right)\right)\\ = \lim_{\varepsilon\to 0^+}\left(\mathcal{F}_{D_0,D,s}\left(z-i\varepsilon\right)-\mathcal{F}_{D_0,D,s}\left(z\right)\right). \end{multline*} \end{proof} We now explicitly compute $\mathcal{P}_{D_0,D,s}$. \begin{proposition}\label{prop:Pval} If $\text{Re}(s)\geq \frac{3}{4}$, then \begin{equation}\label{eqn:Pval} \mathcal{P}_{D_0,D,s}(z)= 4 \varphi_s^*(1)\sum_{\substack{Q\in \mathcal{Q}_{D_0D}\\ Q_{z}>0>a\\ N\mid a}} \chi_{D_0}(Q)+2\varphi_s^*(1)\sum_{\substack{Q\in \mathcal{Q}_{D_0D}\\ Q_{z}=0>a\\ N\mid a}} \chi_{D_0}(Q). \end{equation} \end{proposition} \begin{proof} Note that both sides of \eqref{eqn:Pval} are analytic in $s$ for $\text{Re}(s)>\frac{3}{4}$ and the limit $s\to \frac{3}{4}^+$ exists. Hence by the identity theorem, it is sufficient to prove the identity for $\text{Re}(s)$ sufficiently large. We use the representation \eqref{eqn:Fdefslarge} to explicitly compute $\mathcal{P}_{D_0,D,s}$ for $s$ sufficiently large.
To compute $\mathcal{P}_{D_0,D,s}$, for $z\in E_D$ we compute \begin{multline*} \lim_{\varepsilon\to 0^+} \left(\mathcal{F}_{0,N,D_0,D,s}\left(z+i\varepsilon\right)-\mathcal{F}_{0,N,D_0,D,s}\left(z-i\varepsilon\right)\right) = 2\varphi_s^*\left(1\right) \sum_{\substack{Q\in \mathcal{Q}_{D_0D}\\ Q_{z}=0\\ a<0\\ N\mid a}} \left(\chi_{D_0}\left(-Q\right)-\chi_{D_0}(Q)\right)\\ =-4\varphi_s^*\left(1\right) \sum_{\substack{Q\in \mathcal{Q}_{D_0D}\\ Q_{z}=0\\ a<0\\ N\mid a}} \chi_{D_0}(Q), \end{multline*} where the last line follows from $D_0<0$, so that $\chi_{D_0}(-Q)=-\chi_{D_0}(Q)$. Furthermore, with the convention $\operatorname{sgn}(0)=0$ we obtain \begin{multline*} \lim_{\varepsilon\to 0^+} \left(\mathcal{F}_{0,N,D_0,D,s}\left(z+i\varepsilon\right)-\mathcal{F}_{0,N,D_0,D,s}\left(z\right)\right) = \varphi_s^*\left(1\right) \sum_{\substack{Q\in \mathcal{Q}_{D_0D}\\ Q_{z}=0\\ a<0\\ N\mid a}} \left(\chi_{D_0}\left(-Q\right)-\chi_{D_0}(Q)\right)\\ =-2\varphi_s^*\left(1\right) \sum_{\substack{Q\in \mathcal{Q}_{D_0D}\\ Q_{z}=0\\ a<0\\ N\mid a}} \chi_{D_0}(Q). \end{multline*} \end{proof} \subsection{Proof of Theorem \ref{mainthm}} We now have all of the preliminaries necessary to prove Theorem \ref{mainthm}. \begin{proof}[Proof of Theorem \ref{mainthm}] Suppose that $D_0D$ is not a square. Then by Lemma \ref{lem:Gcont}, $\mathcal{G}_{D_0,D,\frac{3}{4}}$ is continuous. Proposition \ref{prop:Pval} then implies that the locally constant part of $\mathcal{F}_{0,N,D_0,D}$ is (since $\varphi_{s}$ is continuous as a function of $s$) \[ 4 \varphi_{\frac{3}{4}}^*(1)\sum_{\substack{Q\in \mathcal{Q}_{D_0D}\\ Q_{z}>0>a\\ N\mid a}} \chi_{D_0}(Q)+2 \varphi_{\frac{3}{4}}^*(1)\sum_{\substack{Q\in \mathcal{Q}_{D_0D}\\ Q_{z}=0>a\\ N \mid a}} \chi_{D_0}(Q). \] Taking the limit $z\to x\in \mathbb{Q}$, the second sum becomes empty and the condition $Q_z>0$ becomes $Q(x,1)>0$.
This yields the function \begin{equation}\label{eqn:Px} 4 \varphi_{\frac{3}{4}}^*(1)\sum_{\substack{Q\in \mathcal{Q}_{D_0D}\\ ax^2+bx+c>0>a\\ N\mid a}} \chi_{D_0}(Q). \end{equation} We now claim that \eqref{eqn:Px} is $\Gamma_0(N)$-invariant if and only if $\mathbbm{f}_{1,N,D_0,D}$ is in the space spanned by Eisenstein series. Suppose that $\mathbbm{f}_{1,N,D_0,D}$ is in the space spanned by Eisenstein series and choose a linear combination of Maass--Eisenstein series $\mathcal{E}_{0,N,D_0,D}$ for which \[ \xi_0\left(\mathcal{E}_{0,N,D_0,D}\right) = \mathbbm{f}_{1,N,D_0,D}. \] Since $\xi_{0}\left(\mathcal{F}_{0,N,D_0,D}\right) = \mathbbm{f}_{1,N,D_0,D}$ by Lemma \ref{lem:xireg}, we obtain that $\mathcal{F}_{0,N,D_0,D}-\mathcal{E}_{0,N,D_0,D}$ is locally holomorphic and $\Gamma_0(N)$-invariant. We now show that the operator $\mathcal{D}:=\frac{1}{2\pi i}\frac{\partial}{\partial z}$ also annihilates $\mathcal{F}_{0,N,D_0,D}-\mathcal{E}_{0,N,D_0,D}$, from which we obtain that $\mathcal{F}_{0,N,D_0,D}-\mathcal{E}_{0,N,D_0,D}$ is a $\Gamma_0(N)$-invariant local constant. For this, we recall \eqref{eqn:DPhi*} and \eqref{eqn:xiPhi*}. Since there exist non-zero constants $c_1$ and $c_2$ for which $\mathcal{F}_{0,N,D_0,D}=c_1\Phi_0^*\left(P_{\frac{1}{2},|D|}\right)$ by \eqref{eqn:Phi1/2} and $\mathbbm{f}_{1,N,D_0,D}=c_2\Phi_1\left(\xi_{\frac{1}{2}}\left(P_{\frac{1}{2},|D|}\right)\right)$ by \eqref{eqn:fchiDdef}, we conclude that \[ \xi_{0}\left(\mathcal{F}_{0,N,D_0,D}\right) =\frac{2c_1}{c_2}\mathbbm{f}_{1,N,D_0,D} \] and \[ \mathcal{D}\left(\mathcal{F}_{0,N,D_0,D}\right)=- \left(\frac{c_1}{2\pi c_2}\right)\mathbbm{f}_{1,N,D_0,D}. \] Therefore \eqref{eqn:FxiD} (and hence \eqref{decomplocalmaass}) holds and the ratio of the constants in \eqref{decomplocalmaass} is \[ \frac{\alpha_D}{\beta_D}=-\frac{1}{4\pi}.
\] We next recall that the ratio of the constants for the Eisenstein series is identical. All of the $\Gamma_0(N)$-invariant Maass--Eisenstein series are of the form \[ \left[\sum_{M\in \Gamma_{\infty}\backslash\Gamma_0(N)} y^s\Big|_0 M\gamma\right]_{s=1} \] for some $\gamma\in \Gamma_0(N)\backslash \operatorname{SL}_2(\mathbb{Z})$. Since $\xi_0$ and $\mathcal{D}$ commute with the slash operator, we have \begin{align*} \xi_0\left( \sum_{M\in \Gamma_{\infty}\backslash\Gamma_0(N)} y^s\Big|_0 M\gamma \right)& = s\sum_{M\in \Gamma_{\infty}\backslash\Gamma_0(N)} y^{s-1}\Big|_0 M\gamma,\\ \mathcal{D}\left( \sum_{M\in \Gamma_{\infty}\backslash\Gamma_0(N)} y^s\Big|_0 M\gamma \right)& =-\frac{s}{4\pi}\sum_{M\in \Gamma_{\infty}\backslash\Gamma_0(N)} y^{s-1}\Big|_0 M\gamma. \end{align*} It follows that \[ \mathcal{D}\left(\mathcal{E}_{0,N,D_0,D}\right) = -\frac{1}{4\pi} \xi_{0}\left(\mathcal{E}_{0,N,D_0,D}\right)=-\frac{1}{4\pi} \mathbbm{f}_{1,N,D_0,D}. \] Hence we see that the difference $\mathcal{F}_{0,N,D_0,D}-\mathcal{E}_{0,N,D_0,D}$ is locally holomorphic and annihilated by $\mathcal{D}$ away from $E_D$. It follows that $\mathcal{F}_{0,N,D_0,D}-\mathcal{E}_{0,N,D_0,D}$ is a $\Gamma_0(N)$-invariant local constant. Since $\mathcal{E}_{0,N,D_0,D}$ does not contribute to the polynomial part, we conclude that the polynomial part of $\mathcal{F}_{0,N,D_0,D}$ is $\Gamma_0(N)$-invariant. Conversely, assume that $\mathbbm{f}_{1,N,D_0,D}$ is not in the space spanned by Eisenstein series. By the above argument, we may subtract a linear combination of Eisenstein series without affecting the polynomial part, so that we may assume without loss of generality that $\mathbbm{f}_{1,N,D_0,D}$ is cuspidal. Since $S_{2}(N)$ is one-dimensional, we have that $\mathbbm{f}_{1,N,D_0,D}=a_D f$, where $a_D\neq 0$ and $f\in S_{2}(N)$ is the unique newform.
Using the decomposition \eqref{decomplocalmaass} of $\mathcal{F}_{0,N,D_0,D}$, for every $M\in \Gamma_0(N)$ we have \begin{equation}\label{eqn:M-1} 0=\mathcal{F}_{0,N,D_0,D}\Big|_{0}(M-1) = \mathcal{P}_{D_0,D,\frac{3}{4}}\Big|_{0}(M-1) + a_D \left(\mathcal{E}_{f}\Big|_{0}(M-1) + f^*\Big|_0(M-1)\right). \end{equation} Hence if $\mathcal{E}_{f}(z)\Big|_{0}(M-1) + f^*(z)\Big|_0(M-1)\neq 0$, we obtain that $\mathcal{P}_{D_0,D,\frac{3}{4}}(z)\Big|_{0}(M-1)\neq 0$. However, $\mathcal{E}_{f}(z)\Big|_{0}(M-1) + f^*(z)\Big|_0(M-1)$ is a constant independent of $z$. We choose $M=M_0$ to be a matrix sending $x_{N,1}$ to $x_{N,2}$ and verify that this difference is non-zero (this may be done by computing the periods of $f$). Note that this choice is made independently of $D$, and so the non-vanishing of $\mathcal{P}_{D_0,D,\frac{3}{4}}\Big|_{0}(M_0-1)$ for one $D$ implies that $\mathcal{E}_{f}\Big|_{0}(M_0-1) + f^*\Big|_0(M_0-1)\neq 0$. One such choice of $D$ may be chosen from Table \ref{tab:discs} (e.g. $D=-11$ for $N=32$), which hence validates our choice of $M_0$. Since \eqref{eqn:M-1} is independent of $z$, we then take the limit $z\to x\in\mathbb{Q}$ to show that \eqref{eqn:Px} is not $\Gamma_0(N)$-invariant. We have hence concluded that $\mathbbm{f}_{1,N,D_0,D}$ is in the space spanned by Eisenstein series if and only if \eqref{eqn:Px} is invariant under the action of $M_0$. Now note that \eqref{eqn:Px} is a (non-zero) constant multiple of $F_{0,N,D,D_0}(x)$, defined in \eqref{eqn:F1}. By Lemma \ref{lem:f1fchi}, the function $f_{1,N,D,D_0}$ is hence in the space spanned by Eisenstein series if and only if $F_{0,N,D,D_0}(x)$ is invariant under $M_0$. We finally use Lemma \ref{lem:Lval} to conclude the connection to central $L$-values. However, we first have to show that there exists an $m_0$ admissible for $D_0$ which satisfies the conditions in Lemma \ref{lem:Lval}.
If the cuspidal part of every $f_{1,N,D,D_0}$ vanished, then $F_{0,N,D,D_0}$ would be $\Gamma_0(N)$-invariant for every $D$. One easily checks (by a finite calculation) that this is not true for at least one choice, as given in Table \ref{tab:discs}. Moreover, picking one such choice for $m_0$, one easily checks that $L\left(f\otimes \chi_{m_0},1\right)\neq 0$. \end{proof} \subsection{Connections to combinatorial questions and the proofs of Corollaries \ref{maincor} and \ref{cor:sumtwocubes}} In this section, we recall relations between the vanishing of central $L$-values and combinatorial questions. The first is the congruent number problem. \begin{proof}[Proof of Corollary \ref{maincor}] The first claim follows immediately from Theorem \ref{mainthm}. The second claim follows by combining Theorem \ref{mainthm} with the well-known fact (see the introduction in \cite{Tunnell} or Koblitz's book \cite{Koblitz} for a complete description) that $D$ is a congruent number if and only if the group of rational points of $E_D$ is infinite. As pointed out in \cite{Tunnell}, if $L(E_D,1)\neq 0$, then Coates--Wiles \cite{Coates-Wiles} implies that there are only finitely many rational points. The converse statement follows from the Birch and Swinnerton-Dyer conjecture. \end{proof} We next consider the case of the elliptic curve $E_{27}$ given by $X^3+Y^3=1$. \begin{proof}[Proof of Corollary \ref{cor:sumtwocubes}] Applying Coates--Wiles \cite{Coates-Wiles} again, we obtain one direction of the claim. Assuming the Birch and Swinnerton-Dyer conjecture, there are infinitely many rational points on $E_{27,D}$ if and only if $L(E_{27,D},1)=0$. \end{proof} \subsection{Proof of Corollary \ref{cor:SDodd}} Here we conclude with the proof of Corollary \ref{cor:SDodd}. \begin{proof}[Proof of Corollary \ref{cor:SDodd}] By Theorem \ref{mainthm}, it suffices to show that $F_{0,32,D,-3}(0)\neq F_{0,32,D,-3}\left(\frac{1}{3}\right)$.
Directly by definition \eqref{eqn:F1}, for $x\in\mathbb{Q}$ we have \begin{equation}\label{eqn:Fcong} F_{0,32,D,-3}(x)\equiv \#\left\{ Q=[a,b,c]\in \mathcal{Q}_{-3D}: \chi_{-3}(Q)\neq 0,\ 32\mid a,\ Q(x)>0>a\right\}\pmod{2}. \end{equation} However, since $D\not\equiv 0\pmod{3}$, we have that $\left(a,b,c,-3\right)=1$ and hence $\chi_{-3}(Q)\neq 0$ for every $Q\in \mathcal{Q}_{-3D}$. Noting that \[ Q\left(\frac{1}{3}\right) = \frac{a}{9}+ \frac{b}{3}+c, \] we have \[ F_{0,32,D,-3}\left(\frac{1}{3}\right)\equiv \# \mathcal{S}_D\pmod{2}. \] Hence, to prove the corollary it suffices to show that $F_{0,32,D,-3}(0)$ is even under the given assumptions on $D$. For $x=0$, the condition on $Q=[a,b,c]\in \mathcal{Q}_{-3D}$ in \eqref{eqn:Fcong} is simply $c>0>a$ and $32\mid a$. Since this condition is independent of $b$, for any $[a,b,c]$ satisfying the given conditions, we have that $[a,-b,c]$ is also in the set. This matches pairs of quadratic forms whenever $b\neq 0$ and we obtain \begin{equation}\label{eqn:Fcong2} F_{0,32,D,-3}(0)\equiv \#\left\{ Q=[a,0,c]\in \mathcal{Q}_{-3D}: 32\mid a,\ c>0>a\right\}\pmod{2}. \end{equation} However, since $D\equiv -3\pmod{8}$, we have $-3D\equiv 1\pmod{8}$, while the discriminant of $[a,0,c]$ is $-4ac$, which is divisible by $4$. The set on the right-hand side of \eqref{eqn:Fcong2} is hence empty, and we obtain the desired result. \end{proof} \end{document}
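As a sanity check on the counting in the last proof, the set appearing in the congruence above at $x=0$ is finite (the conditions $a<0<c$ bound $b^2$ by the discriminant) and can be enumerated directly. The following Python sketch is ours, not part of the paper; the helper name `forms` and the sample discriminants are our own illustrative choices, and we use the convention $Q(x)=ax^2+bx+c$ with $b^2-4ac=-3D$.

```python
from math import isqrt

# Sketch (ours, not from the paper): enumerate the quadratic forms
# Q = [a, b, c] with b^2 - 4ac = -3D, 32 | a and c > 0 > a, i.e. the set
# counted modulo 2 by F_{0,32,D,-3}(0) in the proof above.

def forms(D):
    """All [a, b, c] with b^2 - 4ac = -3*D, a a negative multiple of 32, c > 0."""
    disc = -3 * D                       # positive, since D < 0
    out = []
    for b in range(-isqrt(disc - 1), isqrt(disc - 1) + 1):
        four_ac = b * b - disc          # equals 4ac and is negative here
        if four_ac % 4:
            continue                    # b has the wrong parity
        ac = four_ac // 4
        a = -32
        while a >= ac:                  # need |a| <= |ac| so that c >= 1
            if ac % a == 0:
                out.append([a, b, ac // a])
            a -= 32
    return out

# D = -43 satisfies D = -3 (mod 8); the pairing [a,b,c] <-> [a,-b,c] is visible:
print(forms(-43))   # the two forms [-32, -1, 1] and [-32, 1, 1]
print(forms(-11))   # empty: |4ac| < 128 forbids 32 | a
```

Since $-3D$ is odd here, $b$ is forced to be odd, so the $b\leftrightarrow -b$ pairing has no fixed points and the count is even, in line with the parity argument in the proof.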
\begin{document} \title{Amplitudes for Spacetime Regions and the Quantum Zeno Effect: Pitfalls of Standard Path Integral Constructions} \author{J.J.Halliwell} \address{Blackett Laboratory \\ Imperial College \\ London SW7 2BZ \\ UK } \ead{[email protected]} \author{J.M.Yearsley} \address{Centre for Quantum Information and Foundations, DAMTP, Centre for Mathematical Sciences, University of Cambridge, Wilberforce Road, Cambridge CB3 0WA, UK} \ead{[email protected]} \begin{abstract} Path integrals appear to offer natural and intuitively appealing methods for defining quantum-mechanical amplitudes for questions involving spacetime regions. For example, the amplitude for entering a spatial region during a given time interval is typically defined by summing over all paths between given initial and final points but restricting them to pass through the region at any time. We argue that there is, however, under very general conditions, a significant complication in such constructions. This is the fact that the concrete implementation of the restrictions on paths over an interval of time corresponds, in an operator language, to sharp monitoring at every moment of time in the given time interval. Such processes suffer from the quantum Zeno effect -- the continual monitoring of a quantum system in a Hilbert subspace prevents its state from leaving that subspace. As a consequence, path integral amplitudes defined in this seemingly obvious way have physically and intuitively unreasonable properties and, in particular, no sensible classical limit. In this paper we describe this frequently-occurring but little-appreciated phenomenon in some detail, showing clearly the connection with the quantum Zeno effect. We then show that it may be avoided by implementing the restriction on paths in the path integral in a ``softer'' way.
The resulting amplitudes then involve a new coarse-graining parameter, which may be taken to be a timescale $\epsilon$, describing the softening of the restrictions on the paths. We argue that the complications arising from the Zeno effect are then negligible as long as $\epsilon \gg 1/ E$, where $E$ is the energy scale of the incoming state. Our criticisms of path integral constructions largely apply to approaches to quantum theory such as the decoherent histories approach or quantum measure theory, which do not specifically involve measurements. We address some criticisms of our approach by Sokolovski, concerning the relevance of our results to measurement-based models. \end{abstract} \newcommand\beq{\begin{equation}} \newcommand\eeq{\end{equation}} \newcommand\bea{\begin{eqnarray}} \newcommand\eea{\end{eqnarray}} \def\A{{\cal A}} \def\D{\Delta} \def\H{{\cal H}} \def\E{{\cal E}} \def\p{\partial} \def\la{\langle} \def\ra{\rangle} \def\rar{\rightarrow} \def\x{{\bf x}} \def\y{{\bf y}} \def\k{{\bf k}} \def\q{{\bf q}} \def\bp{{\bf p}} \def\bP{{\bf P}} \def\br{{\bf r}} \def\s{\sigma} \def\al{\alpha} \def\be{\beta} \def\ep{\epsilon} \def\U{\Upsilon} \def\G{\Gamma} \def\om{\omega} \def\Tr{{\rm Tr}} \def\ih{{ \frac {i} { \hbar} }} \def\rh{\rho} \def\alu{{\underline \alpha}} \def\beu{{\underline \beta}} \def\pp{{\prime\prime}} \def\one{{1 \!\!
1 }} \def\half{\frac {1} {2}} \def\eps{{\epsilon}} \def\z{{\mathbf{z}}} \def\hP{{ \hat P}} \def\hp{{ \hat p}} \def\xz{{{\bf x}_0}} \def\xf{{{\bf x}_f}} \section{Introduction} Consider the following question in non-relativistic quantum mechanics for a point particle in $d$ dimensions: What is the amplitude $g_{\Delta} ({\bf x}_f, t_f | {\bf x}_0, t_0 )$ for the particle to start at a spacetime point $({\bf x}_0, t_0)$, pass through a spatial region $\Delta$ and end at a spacetime point $ ({\bf x}_f, t_f) $? The seemingly-obvious answer to this question is surely the path integral expression, \begin{equation} g_{\Delta} ({\bf x}_f, t_f | {\bf x}_0, t_0 ) = \int_{\Delta} { \mathcal D} {\bf x} (t) \ \exp \left( i \int_{t_0}^{t_f} dt \left[ \frac {1} {2} m \dot {\bf x}^2 - U( {\bf x} ) \right] \right) \label{1.1} \end{equation} (We choose units in which $\hbar=1$.) In this expression, the paths ${\bf x} (t) $ summed over satisfy the initial condition ${\bf x} (t_0) = {\bf x}_0$, the final condition ${\bf x} (t_f) ={\bf x}_f $ and pass, at any intermediate time, through the region $\Delta$ \cite{Fey,FeHi,Sch}, as depicted in Fig.1. \begin{figure}[htbp] \centering \includegraphics[width=4in]{Fig1.eps} \caption{The amplitude Eq.(\ref{1.1}) is obtained by summing over paths which enter the spatial region $\Delta$ at any time between $t_0$ and $t_f$.} \label{Fig1} \end{figure} The point of this paper is to argue that there are potential problems with this seemingly-obvious answer. We first develop these ideas further.
We might similarly assert that the amplitude $ g_{r} ({\bf x}_f, t_f | {\bf x}_0, t_0 ) $ for the particle never entering the region $\Delta $ is given by a path integral expression of the form \begin{equation} g_{r} ({\bf x}_f, t_f | {\bf x}_0, t_0 ) = \int_{r} { \mathcal D} {\bf x} (t) \ \exp \left( i \int_{t_0}^{t_f} dt \left[ \frac {1} {2} m \dot {\bf x}^2 - U( {\bf x} ) \right] \right) \label{1.2} \end{equation} This object, the restricted propagator, is given by a sum over paths restricted to lie always outside $\Delta$. We then clearly have \begin{equation} g ({\bf x}_f, t_f | {\bf x}_0, t_0 ) = g_{\Delta} ({\bf x}_f, t_f | {\bf x}_0, t_0 ) + g_{r} ({\bf x}_f, t_f | {\bf x}_0, t_0 ) \label{1.3} \end{equation} where $g$ is the usual propagator obtained by summing over all paths from initial to final point. Objects such as Eq.(\ref{1.2}) have to be given a proper mathematical definition. For the moment, we have in mind a definition involving the usual time-slicing procedure, in which the time interval is divided up into $n$ equal intervals of size $\epsilon$ so that $ t_f - t_0 = n \epsilon $ and we consider propagation between slices labelled by times $t_k = t_0 + k \epsilon $, where $k=0,1,\cdots, n$. Eq.(\ref{1.2}) is then defined by a limit of the form \begin{equation} g_{r} ({\bf x}_f, t_f | {\bf x}_0, t_0 ) = \lim_{\epsilon \rightarrow 0, n \rightarrow \infty} \int_r d^d x_1 \cdots \int_r d^d x_{n-1} \prod_{k=1}^{n} g ( {\bf x}_{k}, t_{k} | {\bf x}_{k-1}, t_{k-1} ) \label{1.4a} \end{equation} where ${\bf x}_n = {\bf x}_f$ and the integrals are over the region outside $\Delta$.
The propagator is approximated by \begin{equation} g ( {\bf x}_{k}, t_{k} | {\bf x}_{k-1}, t_{k-1} ) = \left( \frac {m}{2 \pi i \epsilon} \right)^{d/2} \exp \left( i S ( {\bf x}_{k}, t_{k} | {\bf x}_{k-1}, t_{k-1} ) \right) \end{equation} for small times, where the exponent is the action between the indicated initial and final points and the limit is taken in such a way that $n \epsilon$ is fixed. Writing the above amplitudes in operator form, $\hat g_{\Delta}$, $\hat g_r $, where, for example, \begin{equation} g_{r} ({\bf x}_f, t_f | {\bf x}_0, t_0 ) = \langle {\bf x}_f | \hat g_{r} (t_f,t_0) | {\bf x}_0 \rangle \label{1.4} \end{equation} it seems reasonable to suppose that the probabilities for a particle in initial state $|\psi \rangle $ entering or not entering the region $\Delta$ during the time interval $[t_0,t_f]$ are given by \begin{eqnarray} p_{\Delta} &=& \langle \psi | \hat g_{\Delta} (t_f,t_0)^\dag \hat g_{\Delta} (t_f,t_0) | \psi \rangle \label{1.5} \\ p_{r} &=& \langle \psi | \hat g_{r} (t_f,t_0)^\dag \hat g_{r} (t_f,t_0) | \psi \rangle \label{1.6} \end{eqnarray} To be sensible probabilities these expressions should obey the simple sum rule \begin{equation} p_\Delta + p_r = 1 \label{1.7} \end{equation} but using $\hat g = \hat g_\Delta + \hat g_r $, it is easy to see that this is generally not the case unless the interference between the two types of paths vanishes, which means that \begin{equation} {\rm Re} \ \langle \psi | \hat g_{r} (t_f,t_0)^\dag \hat g_\Delta (t_f,t_0) | \psi \rangle = 0 \label{1.8} \end{equation} This condition can hold, perhaps approximately, for certain types of initial states, although this may be non-trivial to prove and there is no guarantee that those states are physically interesting ones.
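The breakdown of the sum rule Eq.(\ref{1.7}) is easy to exhibit in a finite-dimensional caricature. The following sketch is our own toy model (not from the paper): a two-state system in which a rotation $U$ plays the role of a short-time propagator, a projector $P$ plays the role of restricting to the region outside $\Delta$, and the two-step restricted operator is $PUPUP$.

```python
import math

# Toy analogue (ours, not the paper's) of Eqs. (1.5)-(1.8): two states,
# P projects onto the subspace "outside Delta", U is one time step.
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def apply(A, v):
    return [sum(A[i][k] * v[k] for k in range(2)) for i in range(2)]

theta = 0.7
c, s = math.cos(theta), math.sin(theta)
U = [[c, -s], [s, c]]            # one short-time step (a real unitary)
P = [[1.0, 0.0], [0.0, 0.0]]     # projector onto the watched subspace

g = matmul(U, U)                 # full two-step propagator
g_r = matmul(P, matmul(U, matmul(P, matmul(U, P))))   # paths never leaving P
g_D = [[g[i][j] - g_r[i][j] for j in range(2)] for i in range(2)]

psi = [1.0, 0.0]                 # initial state inside the watched subspace
vr, vD = apply(g_r, psi), apply(g_D, psi)
p_r = sum(x * x for x in vr)     # analogue of Eq. (1.6)
p_D = sum(x * x for x in vD)     # analogue of Eq. (1.5)
interference = sum(vr[i] * vD[i] for i in range(2))   # Re <g_r psi | g_D psi>

# Unitarity of g gives p_D + p_r + 2*interference = 1 exactly, so the naive
# sum rule p_D + p_r = 1 fails whenever the interference term is non-zero.
print(p_r + p_D, interference)
```

With $\theta=0.7$ the sum $p_\Delta + p_r$ exceeds $1$ by $2\cos^2\theta\sin^2\theta \approx 0.49$, so the no-interference condition Eq.(\ref{1.8}) is far from holding for this state.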
Path integral expressions of the above general type have been postulated and used extensively in a wide variety of circumstances in which time is involved in a non-trivial way. Indeed, Feynman's original paper on path integrals was entitled ``Space-time approach to non-relativistic quantum mechanics'', clearly suggesting that the spacetime features of the path integral construction should be made use of \cite{Fey}. Path integral constructions have for many years played an important role in quantum cosmology and quantum gravity \cite{Har0,HaHa1,FRW,QC1,QC,QC2,Har3,HaMa,HaTh1,HaTh2,HaWa,Hal,CrHa,CrSi,Whe, ChWa,AnSa1,AnSa2,Schr}. They have also been particularly useful in addressing issues concerning time in non-relativistic quantum mechanics, for example in studies of the arrival time \cite{YaT,Yam2,Yam3,Har4,MiHa,HaZa,HaYe1,HaYe2,Ye0,AnSa3,AnSa4,AnSa5}, dwell time and tunneling time \cite{Fer,Yam0,Yam1,GVY,Sok}. Path integrals are also useful ways of formulating continuous quantum measurement theory \cite{Caves,Mensky}. Many of these applications of path integrals are in the specific framework of the decoherent histories approach to quantum theory \cite{GeH1,GeH2,Gri,Omn,Hal2,DoH,Hal5,Ish,Ish2,IsLi,ILSS} (in which the relations Eqs.(\ref{1.7}), (\ref{1.8}) arise) and quantum measure theory \cite{Sor,Sal}. Also related are the various attempts to derive the Hilbert space formulation of quantum theory from path integral constructions \cite{Dow}. Here we wish to study path integral expressions as entities in their own right, without being tied to a specific approach to quantum theory. However, as indicated, there is a problem with the innocent-looking and informally appealing path integral expressions Eqs.(\ref{1.1}), (\ref{1.2}), with the consequence that their properties can be very different from intuitive expectations.
This difficulty goes beyond the problem of satisfying the no-interference condition, Eq.(\ref{1.8}), although it is related to it. To see it, let us focus on the amplitude for not entering the region $\Delta$, Eq.(\ref{1.2}). The key issue is that the innocent-looking restriction on paths effectively means that an initial state propagated by Eq.(\ref{1.2}) is required to have zero support in the region $\Delta$ at {\it every moment of time} between $t_0$ and $t_f$. Evolution with this propagator therefore suffers from the quantum Zeno effect -- the fact that continual monitoring of a quantum system in a Hilbert subspace prevents it from leaving that subspace \cite{Kha,Zeno,CSM,Peres,Kraus,Sud,Sch2,Wall,Wall2}. The consequence is that the amplitude Eq.(\ref{1.2}), or equivalently $\hat g_r (t_f,t_0)$, actually describes {\it unitary} propagation on the Hilbert space of states with support only in the region outside $\Delta$ and therefore gives probability $ p_r= 1$ for any incoming state. This then means that either the sum rule Eq.(\ref{1.7}) is not satisfied, in which case the probabilities are not meaningful, or that it is satisfied but $ p_{\Delta} = 0$, which means that any incoming state aimed at $\Delta$ has probability zero for entering that region, a physically nonsensical result. Differently put, the innocent-looking restriction on paths in the restricted propagator Eq.(\ref{1.2}) with the usual implementation Eq.(\ref{1.4a}) effectively sets {\it reflecting} boundary conditions on the boundary of $\Delta$, which means that any incoming state is totally reflected under propagation by $ \hat g_r $. To obtain the intuitively sensible result, we would need a propagator analogous to Eq.(\ref{1.2}) in which the incoming state is {\it absorbed}.
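The Zeno mechanism invoked here can be made quantitative in the simplest possible setting: a two-level system, initially in the watched subspace, monitored $n$ times during a total precession angle $\omega T$. The sketch below is a standard textbook illustration written by us (not code from the paper); the survival probability is $\left(\cos^{2}(\omega T/n)\right)^{n}$, which tends to $1$ as the monitoring becomes continuous, mirroring the statement $p_r=1$ above.

```python
import math

# Quantum Zeno effect for a precessing two-level system (our illustration):
# after each of n equally spaced projective measurements the state is projected
# back onto the initial state, whose amplitude per step is cos(omega_T / n).
def survival(n, omega_T=1.0):
    """Probability of still being in the initial state after n measurements."""
    return math.cos(omega_T / n) ** (2 * n)

for n in (1, 10, 100, 1000):
    print(n, survival(n))   # increases towards 1 as n grows
```

In the limit $n\to\infty$ the survival probability is exactly $1$: continual monitoring confines the state to the subspace, which is the content of the statement that $\hat g_r$ generates unitary evolution there.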
One could perhaps look for another way out, which is to suppose that $\hat g_\Delta$ is related to some sort of measurement scheme, in which case there is no obligation to satisfy the probability sum rule and Eq.(\ref{1.5}) may then give a reasonable formula for the probability of entering. To this end it is useful to give a more detailed formula for $g_{\Delta}$ using the path decomposition expansion (PDX) \cite{PDX,HaOr,Hal3}. The paths summed over in Eq.(\ref{1.1}) may be partitioned according to the time $t$ and location ${\bf y}$ at which they cross the boundary $\Sigma$ of $\Delta$ for the first time. (See Fig.2)
\begin{figure}[htbp]
\centering
\includegraphics[width=4in]{Fig2.eps}
\caption{The path decomposition expansion Eq.(\ref{PDX1}). Each path from $({\bf x}_0,t_0)$ to $({\bf x}_f,t_f)$ passing through $\Delta$ may be labeled by its time $t$ and location ${\bf y}$ of first crossing of the boundary $\Sigma$.}
\label{Fig2}
\end{figure}
As a consequence, it is possible to derive the formula
\begin{equation}
g_{\Delta} ({\bf x}_f, t_f | {\bf x}_0, t_0 ) = \frac{i}{2m} \int_{t_0}^{t_f} dt \int_{\Sigma} d^{d-1} y \ g ( {\bf x}_f, t_f | {\bf y}, t ) \ {\bf n} \cdot \nabla_{\bf x} \, g_r ({\bf x}, t | {\bf x}_0, t_0 ) \, \big|_{{\bf x} = {\bf y}}
\label{PDX1}
\end{equation}
where ${\bf n}$ is the outward pointing normal to $\Sigma$. The restricted propagator $g_r({\bf x}, t | {\bf x}_0, t_0 )$ vanishes when either end is on $\Sigma$ but its normal derivative does not. In fact, the normal derivative of the restricted propagator in Eq.(\ref{PDX1}) represents the sum over paths which are restricted to remain outside $\Delta$ but end on its boundary \cite{PDX,HaOr,Hal3}.
Although on the face of it Eq.(\ref{PDX1}) is a plausible formula for the crossing amplitude, the problem with this expression, which is fully equivalent to Eq.(\ref{1.3}), concerns the treatment of paths that are outside $\Delta$ before their first crossing at $t$. The fact that these paths are represented using the restricted propagator means that any such paths arriving at $\Sigma$ before $t$ are reflected, rather than simply dropped from the sum. This means that an incoming wave packet propagated using Eq.(\ref{PDX1}) will be partly reflected, and we anticipate that this formula could fail to give intuitively sensible results. One way or another, we find that the simple and seemingly obvious notion of ``restricting paths'' leads to the quantum Zeno effect and hence to unphysical results. This is the sense in which we would say that path integral constructions may suffer from pitfalls. The purpose of this paper is to give a concise statement of the general problem outlined above with naive path integral expressions of the form Eqs.(\ref{1.1}), (\ref{1.2}), and to point out how to construct practically useful modified path integrals which give a meaningful answer to the question of assigning amplitudes to spacetime regions, with a sensible classical limit. Many papers using path integral constructions of the above type appear to be oblivious to this effect and its consequences. Although the results of such papers are not necessarily wrong, they often do not have sensible classical limits and, in measurement-based models, they may, unknowingly, only be valid in the regime of strong measurement.
These problems with the Zeno effect in path integral constructions can be seen in a number of early attempts to define probabilities for spacetime regions in the context of the decoherent histories approach \cite{YaT,Yam2,Yam3,Har4,MiHa,HaZa,AnSa3} and in some papers on the decoherent histories approach to quantum cosmology \cite{Har3,HaMa,HaTh1,HaTh2,HaWa}. A possible resolution was given in Refs.\cite{HaYe1,Hal}. There are undoubtedly many other places in which this difficulty has been encountered. Here, our aim is to give a general account of the problem and solution, independent of specific approaches to quantum theory and of specific applications. We give a detailed formulation of the problem in Section 2 and present the proposed solution in Section 3. Some examples are discussed in Section 4. In Section 5, we give a detailed discussion of the connections with other work. In particular, we briefly address a recent criticism by Sokolovski \cite{Sokcrit} of an earlier account of our work \cite{HaYePit}. We summarize and conclude in Section 6.

\section{Detailed Formulation of the Problem}

We first discuss the generality of the problem with path integrals outlined above. The example in Eqs.(\ref{1.1}), (\ref{1.2}) is in non-relativistic quantum mechanics in any number of dimensions. The region $\Delta$ can consist of a number of disconnected pieces, and then there could be many different ways of partitioning the paths according to how many different regions they enter or not. We will also assume that $\Delta$ is reasonably large and that its boundary is reasonably smooth, in comparison to any quantum-mechanical lengthscales set by the incoming state. Furthermore, in the situation depicted in Fig.1, the spatial region is constant in time, but there is no obstruction to allowing it to vary with time. For example, in relativistic quantum mechanics, it may be natural to look at regions whose boundaries are null surfaces \cite{Hal}.
One could also contemplate path integrals in curved spacetimes which may have unusual properties, such as closed timelike curves \cite{Har3}, but the basic ideas of path integration are still applicable. An important area of application of these ideas is to quantum cosmology, where there is no explicit physical time coordinate, but the basic object Eq.(\ref{1.1}) is still the appropriate starting point for the construction of class operators in the decoherent histories analysis of quantum cosmology \cite{Har3,HaWa,Hal}. This is clearly not an exhaustive list of possibilities, but the arguments presented in what follows will apply to all of these cases, even though they are presented in the context of the example Eqs.(\ref{1.1}), (\ref{1.2}).

We now set out in more mathematical detail the argument outlined in the Introduction. We focus on the path integral representation of the restricted propagator Eq.(\ref{1.2}) and its time-slicing representation Eq.(\ref{1.4a}). We will give an equivalent operator form for Eq.(\ref{1.4a}). We introduce the projector onto $\Delta$,
\begin{equation}
P = \int_{\Delta} d^d x \ | {\bf x} \rangle \langle {\bf x} |
\label{2.2}
\end{equation}
and its negation, $\bar P = 1 - P$, the projector onto the region $\bar \Delta$, the region of ${\mathbb R}^d$ outside of $\Delta$. Again we divide the time interval up into $n$ equal intervals of size $\epsilon$ so that $t_f - t_0 = n \epsilon$.
It is then easy to see that, in terms of the operator $\hat g_{r}$ defined in Eq.(\ref{1.4}), the time-slicing expression Eq.(\ref{1.4a}) is equivalent to
\begin{equation}
\hat g_{r} (t_f,t_0) = \lim_{\epsilon \rightarrow 0,\, n \rightarrow \infty} \ \bar P e^{-iH\epsilon} \bar P \cdots e^{-iH\epsilon} \bar P
\label{2.3}
\end{equation}
where there are $n+1$ projectors and $n$ unitary evolution factors $e^{-iH\epsilon}$, and the limit is taken in such a way that $n \epsilon = t_f - t_0$ is fixed. That is, inserted in Eq.(\ref{1.4}), Eq.(\ref{2.3}) gives the time-slicing definition of the path integral expression, Eq.(\ref{1.4a}). (Note that there are $n+1$ projectors $\bar P$ in Eq.(\ref{2.3}) but only $n-1$ corresponding integrals in Eq.(\ref{1.4a}). The two extra projectors are redundant, and hence the two expressions completely equivalent, if we take ${\bf x}_0$ and ${\bf x}_f$ to be outside $\Delta$.) One can clearly see from the operator form Eq.(\ref{2.3}) that it involves ``monitoring'' of the particle to check if it is in $\bar \Delta$ at each instant of time. The limit in Eq.(\ref{2.3}) may be computed explicitly \cite{Sch2} and leads to the explicit form
\begin{equation}
\hat g_{r} (t_f,t_0) = \bar P \exp \left( - i \bar P H \bar P (t_f - t_0) \right)
\label{2.4}
\end{equation}
This propagator is, as claimed, unitary on the Hilbert space of states with support in $\bar \Delta$ \cite{Sch2}. It therefore describes the situation in which an incoming state never actually leaves $\bar \Delta$ due to the monitoring becoming infinitely frequent, which is clearly the quantum Zeno effect. In simple examples of the restricted propagator, one can easily see that an incoming state is totally reflected off the boundary of the region $\Delta$.
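The Zeno limit above is easy to check in a finite-dimensional setting. The following sketch (an illustration added here, not a calculation from the literature; the matrix size and all parameter choices are arbitrary) verifies with a random Hermitian $H$ that the string of projected short-time evolutions converges to $\bar P \exp\left(-i \bar P H \bar P \, T\right)$, and that the limiting propagator preserves the norm of states supported outside $\Delta$, so that nothing is ever absorbed.

```python
# Illustrative sketch of the Zeno limit: for Hermitian H and projector Pbar,
# (Pbar e^{-iH T/n} Pbar)^n  ->  Pbar exp(-i Pbar H Pbar T)  as n -> infinity,
# and the limiting operator is unitary on the range of Pbar.
import numpy as np

rng = np.random.default_rng(0)
d, dsub, T = 8, 5, 1.0            # total dimension, dimension of "outside Delta", total time

A = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
H = (A + A.conj().T) / 2          # random Hermitian Hamiltonian
H = H / np.linalg.norm(H, 2)      # normalize spectral norm to 1

Pbar = np.zeros((d, d), dtype=complex)
Pbar[:dsub, :dsub] = np.eye(dsub)   # projector onto the subspace "outside Delta"

def expmi(M, t):
    """exp(-i M t) for Hermitian M, via eigendecomposition."""
    w, V = np.linalg.eigh(M)
    return (V * np.exp(-1j * w * t)) @ V.conj().T

def zeno_product(n):
    """(Pbar e^{-iH T/n} Pbar)^n; since Pbar^2 = Pbar this contains n+1 projectors."""
    step = Pbar @ expmi(H, T / n) @ Pbar
    return np.linalg.matrix_power(step, n)

G_lim = Pbar @ expmi(Pbar @ H @ Pbar, T)     # the Zeno-limit propagator

err16  = np.linalg.norm(zeno_product(16)  - G_lim, 2)
err256 = np.linalg.norm(zeno_product(256) - G_lim, 2)

psi = np.zeros(d, dtype=complex)
psi[0] = 1.0                                  # state supported outside Delta
surv_lim = np.linalg.norm(G_lim @ psi)        # exactly 1: unitary on the subspace
surv_256 = np.linalg.norm(zeno_product(256) @ psi)
```

At finite $n$ the product removes a little norm through the projections, but the loss disappears in the limit: the surviving norm tends to one, which is precisely the Zeno behaviour (no absorption, total reflection) described above.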
Eq.(\ref{2.4}) and its properties explain why the naive path integral expressions Eqs.(\ref{1.1}), (\ref{1.2}) have counter-intuitive properties which lead to unphysical results if not used sufficiently carefully.

It is also reasonable to consider other possible methods for defining the path integral Eq.(\ref{1.2}). Another natural method is to consider the imaginary time version of Eq.(\ref{1.2}) and then define the path integral in terms of the limit of a stochastic process involving random walks on a spacetime lattice (in the case $U=0$, for simplicity) \cite{Har4,JaGl}. Restricting the paths to lie outside $\Delta$ means finding a suitable boundary condition on the random walks at the boundary of $\Delta$. Most studies of this problem have imposed reflecting boundary conditions, which appear to be the easiest ones to impose in practice. As a consequence, this method of defining the path integral once again leads to the restricted propagator of the form Eq.(\ref{2.4}). However, it is not clear that one is compelled to use reflecting boundary conditions in such implementations of the path integral. We will discuss this further below.

\section{Proposed Solution}

Since the Zeno effect is the root of the problem, the solution is clearly to limit or soften the monitoring of the system in some way in Eq.(\ref{2.3}) so that reflection is minimized. The first obvious way to do this is to decline to take the limit in Eq.(\ref{2.3}) and keep the time-spacing $\epsilon$ finite. The second way is to replace the exact projectors by POVMs. These solutions have been explored in the specific context of the arrival time problem in one dimension \cite{HaYe1,HaYe2}, but the purpose here is to present these solutions in a more generally applicable way.
In the first approach, we therefore define a modified propagator for not entering $\Delta$ by
\begin{equation}
\hat g^{\epsilon}_{r} (t_f,t_0) = \bar P e^{-iH\epsilon} \bar P \cdots e^{-iH\epsilon} \bar P
\label{3.1}
\end{equation}
where, as before, there are $n+1$ projectors and $n$ unitary evolution factors and $n\epsilon = t_f - t_0$. This object can also be represented by a path integral expression of the form Eq.(\ref{1.2}), except that the paths are required to be outside $\Delta$ only at the $n+1$ times $t_0 + k \epsilon$, where $k = 0, 1, \dots, n$, but between these times the paths are unrestricted. This situation is depicted in Fig.3 for a one-dimensional example in which the region $\Delta$ is the interval $[a,b]$.
\begin{figure}[htbp]
\centering
\includegraphics[width=4in]{Fig3.eps}
\caption{The paths summed over in the path integral representation of the modified propagator Eq.(\ref{3.1}). The paths may enter the region $[a,b]$ but not at the times at which the projectors act. Very wiggly paths, which enter the region frequently, will generally make a small contribution to the path integral in a semiclassical approximation, which means that there is an effective suppression of paths entering the region.}
\label{Fig3}
\end{figure}
This modified propagator involves a {\it new coarse graining parameter} $\epsilon$, describing the precision to within which the paths are monitored. The original propagator Eq.(\ref{1.2}) is obtained in the limit of infinite precision $\epsilon \rightarrow 0$. The physically interesting case, however, is that in which $\epsilon$ is small enough to monitor the paths well, but sufficiently large that an incoming state is not significantly affected by reflection.
There is a very useful approximate alternative to Eq.(\ref{3.1}), which is also helpful in terms of calculating the timescale required to define what ``small'' and ``large'' mean for $\epsilon$. For the special case of projectors onto the positive $x$-axis in one dimension, $P = \theta (\hat x)$, it has been shown \cite{Ech,HaYe3} that the string of projectors Eq.(\ref{3.1}) is to a good approximation equivalent to evolution in the presence of a complex potential consisting of a window function on the region $\Delta$, that is,
\begin{equation}
\bar P e^{-iH\epsilon} \bar P \cdots e^{-iH\epsilon} \bar P \approx \exp \left( - (i H + V_0 P ) (t_f - t_0 ) \right)
\label{3.2}
\end{equation}
Here, the real parameter $V_0$ depends on $\epsilon$ and it was shown numerically \cite{HaYe3} that the best match is obtained with the choice
\begin{equation}
\epsilon V_0 \approx 4/3
\label{3.2a}
\end{equation}
The approximate equivalence Eq.(\ref{3.2}) is expected to hold when acting on states with energy width $\Delta H$ for which $\epsilon \ll 1/\Delta H$ (a timescale often called the Zeno time). Moreover, the general arguments given in Refs.\cite{Ech,HaYe3} for the equivalence Eq.(\ref{3.2}) are not obviously tied to the one-dimensional case with $P = \theta (\hat x)$, so we expect it to hold very generally. This approximate equivalence means that a second natural candidate for the modified propagator is
\begin{equation}
\hat g_r^V (t_f, t_0) = \exp \left( - (i H + V_0 P ) (t_f - t_0 ) \right)
\label{3.3}
\end{equation}
As stated, this is approximately equal to Eq.(\ref{3.1}), at least in simple one-dimensional models.
However, it may be taken as an independently-postulated alternative propagator, which is essentially equivalent to using the form Eq.(\ref{2.3}) but replacing the exact projector $\bar P$ with POVMs of the form $\exp( - \epsilon V_0 P )$, which is our second obvious way of softening the monitoring so as to avoid the Zeno effect. Moreover, this expression too has a path integral form,
\begin{equation}
g^V_{r} ({\bf x}_f, t_f | {\bf x}_0, t_0 ) = \int { \mathcal D} {\bf x} (t) \ \exp \left( i \int_{t_0}^{t_f} dt \left[ \frac{1}{2} m \dot {\bf x}^2 - U( {\bf x}) + i V_0 f_{\Delta} ({\bf x}) \right] \right)
\label{3.4}
\end{equation}
where $f_{\Delta} ({\bf x})$ is a window function on $\Delta$. Here, the paths are unrestricted, except at their end points, but paths entering $\Delta$ are suppressed by a complex potential. This situation is depicted in Fig.4 for the above one-dimensional example, in which the region $\Delta$ is the interval $[a,b]$.
\begin{figure}[htbp]
\centering
\includegraphics[width=4in]{Fig4.eps}
\caption{The paths summed over in the modified propagator Eq.(\ref{3.4}). The paths may enter the region $[a,b]$ but their contributions to the sum over paths are exponentially suppressed.}
\label{Fig4}
\end{figure}
Complex potentials of this general type arise in a variety of contexts and have been extensively studied \cite{complex,Hal1,All,PMS}. They will in general still involve reflection, but they behave in the classically expected way, i.e. they are absorbing, for sufficiently small $V_0$, which is what is required for the propagator to have the intuitively correct properties. Generalizations of the above scheme are possible in which a real potential of step function form is included. This can help with minimizing reflection and indeed it is possible, for given classes of initial states, to find potentials which are almost perfect absorbers \cite{PMS}.
Given the relationship Eq.(\ref{3.2}), we can now compute the scales associated with reflection since, at least in simple examples, scattering off a complex potential is easy to compute. In the simplest case of a free particle in one dimension scattering off a simple complex step potential $- i V_0 \theta (x)$, reflection is negligible for $V_0 \ll E$, where $E$ is the energy scale of the incoming state \cite{All,HaYe3}. Because $V_0$ is connected to $\epsilon$ by Eq.(\ref{3.2a}), this means that the Zeno effect in the string of projectors Eq.(\ref{3.1}) is negligible as long as
\begin{equation}
\epsilon \gg \frac{1}{E}
\label{refcon}
\end{equation}
Again on general grounds, we expect this to be true in a wide class of models. For more elaborate models in which there are length scales describing the size of the region, the requirement of negligible reflection may impose further conditions on $E$, in addition to Eq.(\ref{refcon}). However, we assert that for sufficiently large regions $\Delta$, Eq.(\ref{refcon}) is the most important condition.

We thus see that there are two natural and simple ways of defining a restricted propagator outside $\Delta$, in such a way that reflection is minimized but which remain as true as possible to the notion of paths not entering $\Delta$. These definitions involve a new coarse graining parameter $\epsilon$ which must in general be chosen to be sufficiently large to avoid unphysical results. The earlier, problematic definitions of the propagators Eqs.(\ref{1.1}), (\ref{1.2}) are obtained in the limit $\epsilon \rightarrow 0$.

Given the modified propagator Eq.(\ref{3.3}) describing restricted propagation outside $\Delta$, one can also derive a corresponding propagator for entering $\Delta$, analogous to Eq.(\ref{1.3}) and Eq.(\ref{PDX1}).
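Before constructing the propagator for entering $\Delta$, we note that the reflection condition above can be checked directly in the simplest case. For the complex step $-iV_0\theta(x)$ (units $m=\hbar=1$), matching $e^{ikx} + r e^{-ikx}$ for $x<0$ to a transmitted wave $\propto e^{iqx}$ for $x>0$, with $k=\sqrt{2E}$ and $q=\sqrt{2(E+iV_0)}$, gives $r=(k-q)/(k+q)$. The following sketch (an illustration added here, with arbitrary parameter values) evaluates the reflection probability $|r|^2$ in the two regimes.

```python
# Illustrative sketch: reflection probability |r|^2 for a plane wave of
# energy E hitting the complex step potential V(x) = -i V0 theta(x),
# in units m = hbar = 1.  Matching e^{ikx} + r e^{-ikx} (x<0) to a
# transmitted wave ~ e^{iqx} (x>0) at x = 0 gives r = (k - q)/(k + q).
import numpy as np

def reflection_prob(E, V0):
    k = np.sqrt(2.0 * E)
    q = np.sqrt(2.0 * (E + 1j * V0))   # principal branch: Im(q) > 0, decaying transmitted wave
    r = (k - q) / (k + q)
    return abs(r) ** 2

E = 10.0                                # energy scale of the incoming state (arbitrary choice)
R_weak   = reflection_prob(E, 0.5)      # V0 << E: nearly perfect absorption
R_strong = reflection_prob(E, 1000.0)   # V0 >> E: nearly total, Zeno-like reflection
```

For $V_0 \ll E$ the reflection probability is negligible, while for $V_0 \gg E$ it approaches one: the strong-absorber limit (equivalently $\epsilon \rightarrow 0$) recovers the total Zeno reflection.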
We define the modified propagator for entering by
\begin{equation}
\hat g^V_{\Delta} (t_f,t_0) = \exp \left( - i H (t_f - t_0) \right) - \exp \left( - (i H + V_0 P ) (t_f - t_0 ) \right)
\end{equation}
Some elementary calculation \cite{HaYe1,Hal} leads to the equivalent form
\begin{equation}
\hat g^V_{\Delta} (t_f,t_0) = \int_{t_0}^{t_f} dt \ \exp \left( - i H (t_f - t) \right) \ V_0 P \ \exp \left( - (i H + V_0 P ) (t - t_0 ) \right)
\label{GV}
\end{equation}
This may be rewritten
\begin{equation}
g^V_\Delta ( {\bf x}_f, t_f | {\bf x}_0,t_0) = V_0 \int_{t_0}^{t_f} dt \int_{\Delta} d^d y \ g( {\bf x}_f, t_f | {\bf y}, t ) \, g_r^V ({\bf y}, t | {\bf x}_0,t_0)
\label{GV2}
\end{equation}
This is the generalization of Eq.(\ref{PDX1}) and tends to it as $V_0 \rightarrow \infty$. It is different in that, firstly, the restricted propagation suppresses paths that enter $\Delta$ but does not completely exclude them, so the restriction is ``softer''. Secondly, the intermediate integral is over all of the region $\Delta$, not just the boundary. With some straightforward calculation, in essence the same as a similar case in quantum cosmology considered in Ref.\cite{Hal}, Eq.(\ref{GV}) may be simplified for small $V_0$: it has the form of an ingoing current operator on the boundary $\Sigma$, the anticipated semiclassical form, and gives intuitively sensible results.

\section{Some Simple Examples}

To illustrate the above ideas, we now give some simple one-dimensional examples. We consider a free particle in one dimension, in a state consisting mainly of positive momenta and initially perfectly localized in $x<0$, so that $\bar P | \psi \rangle = | \psi \rangle$, where $P = \theta (\hat x)$.
We take the spatial region $\Delta$ to be $x>0$ and consider the following question: What is the probability that the particle either enters or never enters $\Delta$ during the time interval $[0,\tau]$ and ends in $x_f<0$ at time $\tau$? This question is closely related to the arrival time problem, addressed in many places \cite{time,YDHH}, but here we use it as a simple example of spacetime coarse graining. The amplitude for not entering $x>0$ is given by the restricted propagator, which in this case is given by the usual method of images expression
\begin{equation}
g_r (x,\tau|x_0,0) = \theta (-x) \theta (-x_0) \left[ g(x,\tau|x_0,0) - g(x,\tau|-x_0,0) \right]
\label{res}
\end{equation}
where $g$ is the usual free particle propagator
\begin{equation}
g(x,\tau|x_0,0) = \left( \frac{m}{2 \pi i \tau} \right)^{1/2} \exp \left( \frac{ i m (x-x_0)^2 }{2\tau} \right)
\end{equation}
Note that the restricted propagator Eq.(\ref{res}) consists of direct and reflected pieces and hence describes reflection off the origin, as discussed earlier. The restricted propagator is also conveniently written in the operator form
\begin{equation}
\hat g_r (\tau, 0) = \bar P (1 - R) e^{-iH\tau} \bar P
\label{resop}
\end{equation}
where $\bar P = \theta ( - \hat x)$ and $R$ is the reflection operator,
\begin{equation}
R = \int dx \, | x \rangle \langle -x |
\end{equation}
Since, as stated in Section 2, $\hat g_r$ is unitary on states with support only in $x<0$ and since the initial state is perfectly localized in $x<0$, the probability for not entering is $p_r = 1$. However, we may still explore the properties of the crossing amplitude to see what sort of result it gives, ignoring the fact that the sum rules will not be satisfied, as we discussed in the more general case Eq.(\ref{PDX1}).
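The total reflection encoded in the method-of-images propagator, and the resulting $p_r = 1$, can be seen explicitly by evolving a concrete wave packet. In the sketch below (an illustration added here; the packet parameters are arbitrary), the restricted evolution on $x<0$ is the free evolution of the initial state minus the free evolution of its mirror image; the norm of the evolved state remains one to numerical accuracy, and the packet ends up deep in $x<0$ rather than crossing into $x>0$.

```python
# Illustrative sketch: a right-moving Gaussian packet initially localized in
# x < 0, evolved with the method-of-images restricted propagator:
#   psi_r(x,tau) = psi_free(x,tau) - psi_mirror(x,tau)   for x < 0,
# where psi_mirror is the free evolution of the spatially reflected initial
# state.  Free evolution is done spectrally with FFTs (units m = hbar = 1).
import numpy as np

N, L = 4096, 160.0
dx = L / N
x = (np.arange(N) - N // 2) * dx                  # grid on [-80, 80)
k = 2.0 * np.pi * np.fft.fftfreq(N, d=dx)

x0, k0, sigma, tau = -10.0, 5.0, 1.0, 4.0         # packet parameters (arbitrary)

def packet(xx):
    """Normalized Gaussian packet centred at x0 with mean momentum k0."""
    return (np.pi * sigma**2) ** -0.25 * np.exp(
        -(xx - x0) ** 2 / (2 * sigma**2) + 1j * k0 * xx)

def free_evolve(psi, t):
    """Free-particle evolution: psi(k) -> psi(k) exp(-i k^2 t / 2)."""
    return np.fft.ifft(np.fft.fft(psi) * np.exp(-0.5j * k**2 * t))

psi_free   = free_evolve(packet(x), tau)          # direct piece
psi_mirror = free_evolve(packet(-x), tau)         # image piece (mirrored initial state)

psi_r = np.where(x < 0, psi_free - psi_mirror, 0.0)

norm_r = np.sum(np.abs(psi_r) ** 2) * dx                   # ~ 1: total reflection
mean_x = np.sum(x * np.abs(psi_r) ** 2) * dx / norm_r      # packet back deep in x < 0
```

By $\tau = 4$ the freely evolved packet would be centred well inside $x>0$; under the restricted propagator the full norm instead reappears near $x \approx -10$, the reflected image, illustrating that nothing is absorbed.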
We therefore consider the propagator $\hat g_\Delta$ for entering $x>0$ during the given time interval, defined by summing over paths which start at $x_0 < 0$ at $t=0$, cross the origin at least once and end in $x < 0$ at time $\tau$. It is most simply expressed in the operator form
\begin{equation}
\hat g_\Delta (\tau,0) = \bar P \left( \hat g (\tau,0) - \hat g_r (\tau,0) \right) \bar P
\end{equation}
where $\hat g = \exp ( - i H \tau )$ is the free particle propagator. Using Eq.(\ref{resop}), this is equivalently given by
\begin{equation}
\hat g_\Delta (\tau,0) = \bar P R e^{-iH\tau} \bar P
\end{equation}
(This is in fact equivalent to the PDX form of the crossing propagator, Eq.(\ref{PDX1}).) The probability for entering $\Delta$ is then given by
\begin{eqnarray}
p_\Delta (0,\tau) &=& \langle \psi | \hat g_\Delta (\tau,0)^\dag \hat g_\Delta (\tau,0) | \psi \rangle \nonumber \\
&=& \langle \psi | \bar P P (\tau) \bar P | \psi \rangle
\label{4.7}
\end{eqnarray}
because $R \bar P R = P = \theta ( \hat x)$. Now note that we may write
\begin{equation}
P (\tau ) = P + \int_0^\tau dt \, \hat J (t)
\end{equation}
where $\hat J = \frac{1}{2m} \left( \hat p \, \delta (\hat x) + \delta ( \hat x) \, \hat p \right)$ is the current operator, so we finally have
\begin{equation}
p_\Delta (0,\tau) = \int_0^\tau dt\, \langle \psi | \hat J (t) | \psi \rangle
\label{prob1}
\end{equation}
This is, on the face of it, a familiar semiclassical answer for the probability for entering $x>0$ during $[0,\tau]$ -- the flux across the origin \cite{time,YDHH}. It gives a probability close to $1$ for incoming states which substantially cross during the time interval $[0,\tau]$ and probability close to zero for states which substantially miss the interval. This result is problematic for two reasons.
Firstly, because, as stated earlier, the sum rules are generally not obeyed, so that $p_r + p_\Delta \ne 1$ in general. For example, for a state which substantially crosses during the time interval we get $p_r + p_\Delta \approx 2$, a rather striking violation of the sum rules! The sum rules are respected only for states which substantially miss the interval, for which $p_\Delta = 0$. Secondly, even aside from the failure of the sum rules, the result is misleading. Recall that the histories are required to end in $x<0$ at the final time after entering $x>0$. Semiclassically, such histories would have probability approximately zero for a free particle. Hence Eq.(\ref{prob1}) is semiclassically incorrect for this model.

The point is emphasized by considering a slightly more complicated version of the above problem. We take the same initial state in $x<0$ and ask a modified question: What is the probability that the particle either enters or does not enter $x>0$ during the time interval $[0,\tau]$ and ends at any point $x_f$ at time $t_f > \tau$? The situation is depicted in Fig.5.
\begin{figure}[htbp]
\centering
\includegraphics[width=4in]{Fig6.eps}
\caption{The amplitude for not entering or entering the region $x>0$ during $[0,\tau]$ and ending at any final $x_f$ at $t_f$ is obtained by summing over paths which, respectively, do not enter (dotted line) or enter (bold lines) $x>0$ during $[0,\tau]$.}
\label{Fig6}
\end{figure}
The propagator for not entering is very similar,
\begin{equation}
\hat g_r (t_f, 0) = e^{-iH(t_f - \tau)} \bar P (1 - R) e^{-iH\tau} \bar P
\label{resop2}
\end{equation}
so we still have $p_r = 1$ and again the sum rules are not obeyed.
The propagator for entering, however, acquires a new type of term,
\begin{eqnarray}
\hat g_\Delta (t_f,0) &=& \left( \hat g (t_f,0) - \hat g_r (t_f,0) \right) \bar P \nonumber \\
&=& e^{-iH(t_f - \tau)} \left( \bar P R e^{-iH\tau} \bar P + P e^{-iH\tau} \bar P \right)
\label{4.11}
\end{eqnarray}
because there is now the new possibility that the particle can be in $x>0$ at time $\tau$. It is easily seen that these two terms make identical contributions to the probability and we therefore have
\begin{equation}
p_\Delta (t_f,0) = 2 \int_0^\tau dt\, \langle \psi | \hat J (t) | \psi \rangle
\label{4.12}
\end{equation}
This is twice the expected semiclassical answer. The underlying problem is, in essence, the reflection produced by the restricted propagator, which persists into the crossing propagator, as one can see in Eq.(\ref{4.11}).

As described in Section 3, a solution to this difficulty is obtained by softening the coarse graining, either by using a complex potential to characterize the restrictions on paths or by using projections not acting at every time. This is described in Refs.\cite{HaYe1,HaYe2} and we briefly summarize the key ideas here for the complex potential case applied to the second model considered above. The amplitude for not entering the region $x>0$ during the time interval $[0,\tau]$ is then given by an expression of the form Eq.(\ref{3.3}), where $P = \theta (\hat x)$. For $V_0 \ll E$, where $E$ is the energy scale of the incoming state, there is negligible reflection, so the part of the state crossing the origin during $[0,\tau]$ will be absorbed and the remainder of the state is unchanged.
This effectively means that, under propagation with Eq.(\ref{3.3}),
\begin{equation}
\hat g_r^V (\tau,0) | \psi \rangle \approx \bar P (\tau ) | \psi \rangle
\end{equation}
This is the key property that makes the amplitudes defined with a complex potential give sensible physical results. For states consisting of single wave packets reasonably well peaked in position and momentum, the sum rules are satisfied approximately and the crossing probability is approximately Eq.(\ref{prob1}) (with small modifications depending on $V_0$). The term involving reflection in Eq.(\ref{4.11}) is essentially suppressed, which is why the $2$ becomes a $1$ in Eq.(\ref{4.12}).

A final related example is that of Yamada and Takagi \cite{YaT}, who took a general initial state and asked for the probability of either crossing or not crossing the origin in either direction during the time interval $[0,\tau]$. The non-crossing propagator is similar to that above, Eq.(\ref{resop}), and is given by
\begin{equation}
\hat g_r (\tau, 0) = \bar P (1 - R) e^{-iH\tau} \bar P + P (1 - R) e^{-iH\tau} P
\end{equation}
They found that the sum rules are satisfied only for states antisymmetric about the origin and that the crossing probability is exactly zero, once more a physically counter-intuitive result. The analysis of this situation with one of the above modified propagators essentially follows from the work described in Ref.\cite{Ye0}. For superpositions of incoming wave packets (in either direction), the sum rules are approximately satisfied if the energy scale of the wave packet $E$ satisfies $E \gg V_0$, and the probabilities are the expected semiclassical ones, so intuitive properties are restored.
Note that this model actually has {\it two} different regimes in which the sum rules are satisfied: approximately, for small $V_0$, and exactly, in the Zeno limit of $V_0 \rightarrow \infty$ for antisymmetric initial states, but the resulting probabilities are very different in each case. Note also that this simple model suggests that the considerations of this paper may have implications for quantum measure theory, in which it is asserted that histories with probability zero do not occur \cite{Sor,Sal}. Intuitively, one would expect a non-zero crossing probability in this model, but the crossing probability is zero in the Zeno limit case. Hence the predictions of quantum measure theory may conflict with the intuitively expected result in this limit.

\section{Connections to Other Work}

The potential difficulties with path integral expressions highlighted in this paper first arose in applications of the decoherent histories approach to spacetime coarse grainings in non-relativistic quantum mechanics \cite{YaT,Yam2,Yam3,Har4,MiHa,HaZa,Har3}. The problems with the Zeno effect did not seem to be appreciated, except that Hartle noted that some coarse grainings are ``too strong for decoherence'' \cite{Har4}. The problems also persisted in applications of the decoherent histories approach to quantum cosmology and reparametrization invariant theories \cite{Har3,HaMa,HaTh1,HaTh2,HaWa}. These issues also have consequences for the continuous tensor product structure in the decoherent histories approach, discussed by Isham and collaborators \cite{IsLi,ILSS}. The specific problem with the Zeno effect was noticed in Ref.\cite{HaWa}. It was also observed that it can in fact be avoided if the spacetime regions involved in the coarse graining are carefully chosen so that the initial state has zero current across the boundary. This resolution has been pursued in Ref.\cite{ChWa}.
In this paper we have presented a generally applicable resolution of the Zeno problem involving a softening of the coarse graining. This was first proposed in the specific context of the arrival time problem in Refs.\cite{HaYe1,HaYe2,Ye4} and in a quantum cosmological model in Ref.\cite{Hal}.

Numerous studies of the dwell and tunneling time problem involve path integral expressions of the type considered here (for example, Refs.\cite{Fer,Yam1,GVY,Sok}). Some of these applications are typically connected to specific models of measurements, for which the problems described in this paper may not apply. Indeed, specific models for the measurement of time frequently lead to path integral constructions in which the softening of the coarse graining of the type described in Section 3 is already implemented. (A detailed analysis of the dwell time problem will be given in another publication \cite{HaYe}.) We also stress that partitioning of the paths of the type involved in Eqs.(\ref{1.1}) and (\ref{1.2}) is often used as part of a given calculation (for example, when there is a potential of window function form concentrated in a given region), and the issues raised here are not problematic in such cases. The issues raised in the present paper concern the possible {\it interpretation} of amplitudes obtained by restricted sums over paths. However, it remains an interesting question to investigate specific measurement models to assess whether the problems with path integrals described here arise.

Of particular note in this regard is a recent paper by Sokolovski criticizing the present work \cite{Sokcrit}, with particular reference to the definition of quantum traversal time \cite{Sok}.
He argues that when the spacetime coarse grainings of the type considered here are regarded as {\it measurements}, some of their unusual properties, and in particular the Zeno effect we discussed here, are not surprising, since in the regime of strong measurement the probabilities are more a reflection of the action of the measuring device than of some underlying measurement-independent property. Indeed, in the limit of infinitely strong measurement, the measuring device completely reflects an incoming state, as one would physically expect, and this is consistent with our results. On the other hand, measurement-based approaches tend to agree with a decoherent histories analysis in the regime of weak measurement or soft coarse graining, as has been seen, for example, in the analysis of the arrival time problem \cite{YDHH}. These observations stress that, as suggested already, our criticisms are perhaps most relevant to the interpretation of approaches not directly based on measurements and may be less relevant to measurement-based models, but this is a matter for further investigation. In terms of the proposed solutions to the problems with path integrals in Section 3, our considerations relating to modified coarse grainings have some connection to continuous quantum measurement theory \cite{JaSt,Caves,Mensky,Mensky2}. We note also that there could be other possible ways of softening the coarse graining to improve the properties of amplitudes constructed by path integrals. For example, Marchewka and Schuss have considered path integrals with absorbing boundary conditions \cite{MaSch}. We also note the interesting and perhaps relevant observations concerning the rigorous definition of path integrals by Sorkin \cite{Sor1}, Geroch \cite{Ger} and Klauder \cite{Kla}.
\section{Summary and Conclusions} We have argued that amplitudes constructed by path integrals for questions involving time in a non-trivial way can, if implemented in the simplest and most obvious way, lead to problems due to the Zeno effect. This has the consequence that they do not have a sensible classical limit and have properties very different from those expected from the underlying intuitive picture. When path integrals are used in the decoherent histories approach, the Zeno effect can have the consequence that the sum rules are not satisfied except for very trivial initial states. These problems have been observed in numerous examples and applications, but here we have argued that the issue is a very general one to do with the use of path integrals in a wide variety of applications. We outlined a successful resolution of the Zeno problem, through a softening of the coarse graining. Again, this solution has been put forward in specific examples and applications, but here we stress that it offers a very general solution to the problem. We also note that the softening of the coarse graining introduces, perhaps unexpectedly, one or more new coarse graining parameters, in the simplest case a timescale $\epsilon$, describing the precision to within which the paths are monitored in time. The Zeno problem is then avoided as long as $\epsilon \gg 1/E$, where $E$ is the energy scale of the incoming state. These observations about path integral amplitudes apply most strongly to any approach to quantum theory which involves constructing amplitudes for spacetime coarse grainings without reference to a particular measurement scheme, such as the decoherent histories approach and quantum measure theory. Many approaches to quantum gravity are of this type.
However, these observations may be less relevant to measurement-based models of spacetime coarse grainings, and indeed the possible consequences of our observations for such models have been criticized by Sokolovski \cite{Sokcrit}. We certainly do not claim that any of the papers in this area are wrong. Indeed many authors have noted the unphysical nature of their results. The present paper is, if anything, a cautionary note on the use of path integrals in spacetime coarse grainings: some types of ``obvious'' coarse grainings are quite simply unphysical. However, amplitudes with sensible intuitive properties {\it may} be successfully constructed with proper attention to the implementation of the coarse graining and to the associated time scale. \section{Acknowledgements} JMY was supported by the Templeton Foundation. We are grateful to Charis Anastopoulos, Fay Dowker and Larry Schulman for useful comments on an earlier version of the manuscript. \input refs.tex \end{document}
\begin{document} \title{ASPEST: Bridging the Gap Between Active Learning and Selective Prediction} \begin{abstract} \noindent Selective prediction aims to learn a reliable model that abstains from making predictions when the model uncertainty is high. These predictions can then be deferred to a human expert for further evaluation. In many real-world scenarios, the distribution of test data is different from the training data. This results in more inaccurate predictions, necessitating increased human labeling, which can be difficult and expensive. Active learning circumvents this by only querying the most informative examples and, in several cases, has been shown to lower the overall labeling effort. In this work, we bridge selective prediction and active learning, proposing a new learning paradigm called \textit{active selective prediction}, which learns to query more informative samples from the shifted target domain while increasing accuracy and coverage. For this new problem, we propose a simple but effective solution, ASPEST, that utilizes ensembles of model snapshots together with self-training on their aggregated outputs as pseudo-labels. Extensive experiments on numerous image, text and structured datasets, particularly those that suffer from domain shifts, demonstrate that our proposed method can significantly outperform prior work on selective prediction and active learning (e.g.
on the MNIST$\to$SVHN benchmark with the labeling budget of $100$, ASPEST improves the AUC metric from $79.36\%$ to $88.84\%$) and achieves better utilization of humans in the loop.\blfootnote{Our code is available at: \url{https://github.com/google-research/google-research/tree/master/active_selective_prediction}.} \end{abstract} \section{Introduction} \label{sec:intro} \begin{figure} \caption{Illustration of the \textit{active selective prediction} setting. \label{fig:asp-illustration-basic}} \end{figure} Deep Neural Networks (DNNs) have shown notable success in many applications that require complex understanding of input data~\citep{he2016deep,devlin2018bert,hannun2014deep}, including the ones that involve high-stakes decision making~\citep{yang2020making}. For safe deployment of DNNs in high-stakes applications, it is typically required to allow them to abstain from predictions that are likely to be wrong, and ask humans for assistance (a task known as selective prediction)~\citep{el2010foundations,geifman2017selective}. Although selective prediction can render the predictions more reliable, it does so at the cost of human interventions. For example, if a model achieves 80\% accuracy on the test data, an ideal selective prediction algorithm should reject those 20\% misclassified samples and send them to a human for review. Distribution shift can significantly exacerbate the need for such human intervention. The success of DNNs often relies on the assumption that both training and test data are sampled independently and identically from the same distribution. In practice, this assumption may not hold, which can degrade the performance on the test domain~\citep{barbu2019objectnet,koh2021wilds}. For example, for satellite imaging applications, images taken in different years can vary drastically due to weather, light, and climate conditions~\citep{koh2021wilds}.
Existing selective prediction methods usually rely on model confidence to reject inputs~\citep{geifman2017selective}. However, it has been observed that model confidence can be poorly calibrated, especially with distribution shifts~\citep{ovadia2019can}. The selective classifier might end up accepting many misclassified test inputs, making the predictions unreliable. Thus, selective prediction might yield an accuracy below the desired target performance, or obtain a low coverage, necessitating significant human intervention. To improve the performance of selective prediction, one idea is to rely on active learning and to have humans label a small subset of selected test data. The correct labels provided by humans can then be used to improve the accuracy and coverage (see Sec.~\ref{sec:eval-metrics}) of selective prediction on the remaining unlabeled test data, thus reducing the need for subsequent human labeling efforts. In separate forms, selective prediction~\citep{geifman2017selective,geifman2019selectivenet} and active learning~\citep{settles2009active} have been studied extensively; however, to the best of our knowledge, this paper is the first to propose performing active learning to improve selective prediction jointly, with the focus on the major real-world challenge of distribution shifts. Active domain adaptation~\citep{su2020active,fu2021transferable,prabhu2021active} is one area close to this setting; however, it does not consider selective prediction. In selective prediction, not only does a classifier need to be learned, but a selection scoring function also needs to be constructed for rejecting misclassified inputs. Thus, going beyond conventional active learning methods that focus on selecting examples for labeling to improve the accuracy, we propose to also use those selected labeled examples to improve the selection scoring function.
The optimal acquisition function (used to select examples for labeling) for this new setting is different compared to those in traditional active learning -- e.g. if a confidence-based selection scoring function is employed, the selected labeled samples should have the goal of improving the estimation of that confidence score. In this paper, we introduce a new machine learning paradigm: active selective prediction under distribution shift (see Fig.~\ref{fig:asp-illustration-basic}), which combines selective prediction and active learning to improve accuracy and coverage, and hence use human labeling in a more optimal way. Active selective prediction is highly important for most real-world deployment scenarios. To the best of our knowledge, we are the first to formulate and investigate this problem, along with the judiciously chosen evaluation metrics for it (Sec.~\ref{sec:asp-problem}). We also introduce a novel and simple yet effective method, ASPEST, for this active selective prediction problem (Sec.~\ref{sec:method}). The key components of ASPEST, checkpoint ensembling and self-training, are designed to address the fundamental challenges in the active selective prediction problem. On numerous real-world datasets, we show that ASPEST consistently outperforms other baselines proposed for active learning and selective prediction (Sec.~\ref{sec:experiment}). \section{Related Work} \label{sec:related} \mypara{Selective prediction.} Selective prediction (also known as prediction with rejection/deferral options) constitutes a common deployment scenario for DNNs, especially in high-stakes decision making scenarios. In selective prediction, models abstain from yielding outputs if their confidence on the likelihood of correctness is not sufficiently high. Such abstinence usually incurs deferrals to humans and results in additional cost~\cite{mozannar2020consistent}. 
Increasing the coverage -- the ratio of the samples for which the DNN outputs can be relied upon -- is the fundamental goal~\cite{el2010foundations,fumera2002support,hellman1970nearest,geifman2019selectivenet}. \citep{geifman2017selective} considers selective prediction for DNNs with the `Softmax Response' method, which applies a carefully selected threshold on the maximal response of the softmax layer to construct the selective classifier. \citep{lakshminarayanan2017simple} shows that using deep ensembles can improve predictive uncertainty estimates and thus improve selective prediction. \citep{rabanser2022selective} proposes a novel method, NNTD, for selective prediction that utilizes DNN training dynamics by using checkpoints during training. Our proposed method ASPEST also uses checkpoints to construct ensembles for selective prediction. In contrast to NNTD and other aforementioned methods, we combine selective prediction with active learning to improve its data efficiency while considering a holistic perspective of having humans in the loop. This new active selective prediction setup warrants new methods for selective prediction along with active learning. \mypara{Active learning.} To utilize the human labeling budget more effectively while training DNNs, active learning employs acquisition functions to select unlabeled examples for labeling, and uses these labeled examples to train models~\cite{settles2009active,dasgupta2011two}. Commonly-used active learning methods employ acquisition functions by considering uncertainty~\cite{gal2017deep,ducoffe2018adversarial,beluch2018power} or diversity~\cite{sener2017active,sinha2019variational}, or their combination~\cite{ash2019deep,huang2010active}. One core challenge for active learning is the ``cold start'' problem: often the improvement obtained from active learning is less significant when the amount of labeled data is small~\cite{yuan2020cold,hacohen2022active}.
Moreover, active learning can be particularly challenging under distribution shift~\cite{kirsch2021test,zhao2021active}. Recently, active domain adaptation has been studied, where domain adaptation is combined with active learning~\cite{su2020active,fu2021transferable,prabhu2021active}. Different from traditional active learning, active domain adaptation typically adapts a model pre-trained on the labeled source domain to the unlabeled target domain. In our work, we also try to adapt a source trained model to the unlabeled target test set using active learning, while focusing on building a selective classification model and reducing the human labeling effort. \mypara{Distribution shift.} Distribution shift, where the training distribution differs from the test distribution, often occurs in practice and can substantially degrade the accuracy of the deployed DNNs~\citep{koh2021wilds,yao2022wild,barbu2019objectnet}. Distribution shift can also substantially reduce the quality of uncertainty estimation~\citep{ovadia2019can}, which is often used for rejecting examples in selective prediction and selecting samples for labeling in active learning. Several techniques try to tackle the challenge caused by distribution shift, including accuracy estimation~\citep{chen2021detecting,chuang2020estimating}, error detection~\citep{hendrycks2016baseline,granese2021doctor}, out-of-distribution detection~\citep{salehi2021unified}, domain adaptation~\citep{ganin2016domain,saito2019semi}, selective prediction~\citep{kamath2020selective} and active learning~\citep{kirsch2021test}. In our work, we combine selective prediction with active learning to address the issue of distribution shift. \mypara{Deep ensembles.} Ensembles of DNNs (or deep ensembles) have been successfully used to boost predictive performance~\citep{moghimi2016boosted,zhu2018knowledge}. Deep ensembles can also be used to improve the predictive uncertainty estimation~\citep{lakshminarayanan2017simple,fort2019deep}. 
\citep{lakshminarayanan2017simple} shows that random initialization of the NN parameters along with random shuffling of the data points is sufficient for deep ensembles to perform well in practice. However, training multiple DNNs from random initialization can be very expensive. To obtain deep ensembles more efficiently, recent papers explore using checkpoints during training to construct the ensemble~\citep{wang2021boost,huang2017snapshot}, or fine-tuning a single pre-trained model to create the ensemble~\citep{kobayashi2022diverse}. In our work, we use the checkpoints obtained while fine-tuning a source-trained model via active learning as the ensemble, and further boost the ensemble's performance via self-training. We also use the ensemble's uncertainty, measured by a margin, to select samples for labeling in active learning. \mypara{Self-training.} Self-training is a common algorithmic paradigm for leveraging unlabeled data with DNNs. Self-training methods train a model to fit pseudo-labels (i.e., predictions on unlabeled data made by a previously-learned model) to boost the model's performance~\citep{yarowsky1995unsupervised,grandvalet2004semi,lee2013pseudo,wei2020theoretical,sohn2020fixmatch}. In this work, we use self-training to improve selective prediction performance. Instead of using predicted labels as pseudo-labels, as is common practice in prior works, we use the average softmax outputs of the checkpoints during training as the pseudo-labels, and self-train the models in the ensemble on them with the KL-Divergence loss to improve selective prediction performance. \section{Active Selective Prediction} \label{sec:asp-problem} In this section, we first formulate the active selective prediction problem and then present the proposed evaluation metrics to quantify the efficacy of the methods.
\subsection{Problem Setup} \label{sec:problem_setup} Let $\mathcal{X}$ be the input space and $\mathcal{Y}=\{1, 2, \dots, K \}$ the label space.\footnote{In this paper, we focus on the classification problem, although it can be extended to the regression problem.} The training data distribution is given as $P_{X,Y}$ and the test data distribution is $Q_{X,Y}$ (both are defined in the space $\mathcal{X}\times \mathcal{Y}$). There might exist distribution shifts such as covariate shifts (i.e., $Q_{X,Y}$ might be different from $P_{X,Y}$). Suppose for each input $\bfx$, an oracle (e.g., a human annotator) can assign a ground-truth class label $y_x$ to it. Given a classifier $\bar{f}: \mathcal{X}\to \mathcal{Y}$ trained on a source training dataset $\Dtr \sim P_{X,Y}$ ($\sim$ means ``sampled from''), and an unlabeled target test dataset $U_X=\{\bfx_1, \dots, \bfx_n\} \sim Q_{X}$, our goal is to employ $\bar{f}$ to yield reliable predictions on $U_X$ in a human-in-the-loop scenario. Holistically, we consider two approaches to involve humans via the predictions they provide on the data: (i) selective prediction, where uncertain predictions are deferred to humans to maintain a certain accuracy target; and (ii) active learning, where a subset of unlabeled samples from $U_X$ is selected for human labeling to improve the model with the extra labeled data at subsequent iterations. These two approaches to involving humans have different objectives, and thus their joint optimization to best use the human labeling resources is not straightforward. As an extension of the classifier $f$ (initialized by $\bar{f}$), we propose to employ a selective classifier $f_s$ including a selection scoring function $g: \mathcal{X} \to \mathbb{R}$ to yield reliable predictions on $U_X$. We define the predicted probability of the model $f$ on the $k$-th class as $f(\bfx \mid k)$. Then, the classifier is $f(\bfx)=\operatornamewithlimits{arg\!\max}_{k\in \mathcal{Y}} f(\bfx \mid k)$.
$g$ can be based on statistical operations on the outputs of $f$ (e.g., $g(\bfx)=\max_{k\in \mathcal{Y}} f(\bfx \mid k)$). With $f$ and $g$, the selective prediction model $f_s$ is defined as: \begin{align} \label{eq:selective-classifier} f_s(\bfx; \tau)=\begin{cases} f(\bfx) & \quad \text{if } g(\bfx)\ge \tau, \\ \bot & \quad \text{if } g(\bfx) < \tau \end{cases}, \end{align} where $\tau$ is a threshold. If $f_s(\bfx)=\bot$, then the DNN system would defer the predictions to a human in the loop. To improve the overall accuracy to reach the target, such deferrals require manual labeling. To reduce the human labeling cost and improve the accuracy of the selective classifier, we consider labeling a small subset of $U_X$ and adapt the selective classifier $f_s$ on the labeled subset via active learning. The goal is to significantly improve the accuracy and coverage of the selective classifier $f_s$ and thus reduce the total human labeling effort. Suppose the labeling budget for active learning is $M$ (i.e., $M$ examples are selected to be labeled from $U_X$ to improve the selective prediction performance). We assume that the human in the loop can provide the correct labels. For active learning, we consider the transductive learning paradigm~\citep{V98}, which assumes all training and test data are observed beforehand and we can make use of the unlabeled test data for learning. Specifically, the active learning is performed on $U_X$ to build the selective classifier $f_s$, with performance evaluation of $f_s$ only on $U_X$. We don't consider training $f_s$ from scratch, but adapt the source-trained classifier $\bar{f}$ to obtain $f_s$ to maintain feasibly-low computational cost (e.g., by fine-tuning $\bar{f}$ on the $M$ labeled data points from $U_X$). Let's first consider the single-round setting. 
Suppose the acquisition function is $a:\mathcal{X}^m \times \mathcal{F} \times \mathcal{G} \to \mathbb{R}$, where $m\in \mathbb{N}^+$, $\mathcal{F}$ is the classifier space and $\mathcal{G}$ is the selection scoring function space. This acquisition function is the same as the one used in the active learning literature~\citep{gal2017deep} (refer to Appendix~\ref{app:baselines} for some examples of the function $a$). In the beginning, $f$ is initialized by $\bar{f}$. We then select a batch $B^*$ for labeling by solving the following objective: \begin{align} B^*=\arg\max_{B\subset U_X, |B|=M}a(B, f, g), \end{align} for which the labels are obtained to get $\tilde{B}^*$. Then, we use $\tilde{B}^*$ to update $f$ and $g$ (e.g., via fine-tuning the model on $\tilde{B}^*$). The above can be extended to a multi-round setting. Suppose we have $T$ rounds and the labeling budget for each round is $m=[\frac{M}{T}]$. In the beginning, $f_0$ is initialized by $\bar{f}$. At the $t$-th round, we first select a batch $B_t^*$ for labeling by solving the following objective: \begin{align} \label{obj:acquisition} B_t^*=\arg\max_{B\subset U_X\setminus(\cup_{l=1}^{t-1}B_l^*), |B|=m} a(B, f_{t-1}, g_{t-1}), \end{align} for which the labels are obtained to get $\tilde{B}_t^*$. Then we use $\tilde{B}_t^*$ to update $f_{t-1}$ and $g_{t-1}$ to get $f_t$ and $g_t$ (e.g., via fine-tuning the model on $\tilde{B}_t^*$). In the multi-round setting, we define $B^*=\cup_{i=1}^T B_{i}^*$. \subsection{Evaluation Metrics} \label{sec:eval-metrics} To quantify the efficacy of the methods that optimize human-in-the-loop adaptation and decision making performance, appropriate metrics are needed. The performance of the selective classifier $f_s$ (defined in Eq.~(\ref{eq:selective-classifier})) is evaluated by the accuracy and coverage metrics.
The accuracy of $f_s$ on $U_X$ is defined as: \begin{align} \label{eq:acc-metric} acc(f_s, \tau) = \frac{\mathbb{E}_{\bfx \sim U_X} \mathbb{I}[f(\bfx)=y_x \land g(\bfx) \ge \tau \land \bfx \notin B^*]}{\mathbb{E}_{\bfx \sim U_X} \mathbb{I}[g(\bfx) \ge \tau \land \bfx \notin B^*]} \end{align} Here, the accuracy is measured on the predictions made by the model without human intervention (excluding those predictions on selected labeled test data and rejected data points). The coverage of $f_s$ on $U_X$ is defined as: \begin{align} \label{eq:cov-metric} cov(f_s, \tau) = \frac{\mathbb{E}_{\bfx \sim U_X} \mathbb{I}[g(\bfx) \ge \tau \land \bfx \notin B^*]}{\mathbb{E}_{\bfx \sim U_X} \mathbb{I}[\bfx \notin B^*]} \end{align} The coverage is the fraction of remaining unlabeled data points where we can rely on the model's prediction without human intervention. We can tune the threshold $\tau$ to achieve a certain coverage. We know that there could be an accuracy-coverage trade-off -- as we increase coverage, the accuracy could be lower. The following evaluation metrics are proposed to be agnostic to the threshold $\tau$: \mypara{Maximum Accuracy at a Target Coverage.} Given a target coverage $t_c$, the maximum accuracy is defined as: \begin{align} \label{eq:acc|cov} \max_{\tau} \quad acc(f_s, \tau), \quad s.t. \quad cov(f_s, \tau) \ge t_c \end{align} We denote this metric as $acc|cov\ge t_c$. \mypara{Maximum Coverage at a Target Accuracy.} Given a target accuracy $t_a$, the maximum coverage is defined as: \begin{align} \label{eq:cov|acc} \max_{\tau} \quad cov(f_s, \tau), \quad s.t. \quad acc(f_s, \tau) \ge t_a \end{align} When $\tau=\infty$, we define $cov(f_s, \tau)=0$ and $acc(f_s, \tau)=1$. We denote this metric as $cov|acc\ge t_a$. 
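As a concrete illustration, the accuracy and coverage metrics of Eqs.~(\ref{eq:acc-metric}) and (\ref{eq:cov-metric}), together with the threshold sweep behind $cov|acc\ge t_a$, can be sketched in a few lines of NumPy. This is a minimal sketch with hypothetical scores and function names, not the authors' implementation; the labeled batch $B^*$ is assumed to be excluded from the arrays already.

```python
import numpy as np

def acc_cov(correct, scores, tau):
    """Accuracy and coverage of the selective classifier at threshold tau.

    correct: boolean array, whether f(x) == y_x for each remaining test point
    scores:  array of selection scores g(x) for the same points
    """
    accepted = scores >= tau
    cov = accepted.mean()
    # by convention acc = 1 when nothing is accepted (tau = infinity)
    acc = correct[accepted].mean() if accepted.any() else 1.0
    return acc, cov

def max_cov_at_acc(correct, scores, t_a):
    """Maximum coverage subject to accuracy >= t_a, i.e. cov|acc >= t_a."""
    best = 0.0
    for tau in np.unique(scores):  # only score values can change the trade-off
        acc, cov = acc_cov(correct, scores, tau)
        if acc >= t_a:
            best = max(best, cov)
    return best

# toy example: five test points, two misclassified with low confidence
correct = np.array([True, True, False, True, False])
scores = np.array([0.9, 0.8, 0.3, 0.7, 0.4])
acc, cov = acc_cov(correct, scores, tau=0.5)  # accepts the three high-score points
```

On this toy input, `max_cov_at_acc(correct, scores, 1.0)` returns $0.6$: the threshold can be raised just high enough to reject exactly the two misclassified points while keeping the other three.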
\mypara{Area Under the Accuracy-Coverage Curve.} We define the Area Under the accuracy-coverage Curve (AUC) as: \begin{align} \label{eq:auc} \text{AUC}(f_s)=\int_{0}^1 acc(f_s, \tau) \mathrm{d} cov(f_s, \tau) \end{align} We use the composite trapezoidal rule to estimate the integral. \subsection{Challenges} \begin{figure*} \caption{Sample selection based on uncertainty} \caption{Sample selection based on diversity} \caption{Illustration of the challenges in active selective prediction using a linear model to maximize the margin (distance to the decision boundary) for binary classification. The model confidence is considered to be proportional to the margin (when the margin is larger, the confidence is higher and vice versa). The triangles belong to the negative class while the circles belong to the positive class. The empty markers represent the \textit{unlabeled} samples. \label{fig:challenge-illustration}} \end{figure*} For active selective prediction, we want to utilize active learning to improve the coverage and accuracy of the selective classifier $f_s$, which consists of a classifier $f$ and a selection scoring function $g$. Different from conventional active learning, which only aims to improve the accuracy of the classifier $f$, active selective prediction also aims to improve $g$ so that it can accept those examples where $f$ predicts correctly and reject those where $f$ predicts incorrectly. With distribution shift and a small labeling budget $M$, it can be challenging to train $f$ for high accuracy. Therefore, $g$ is critical in achieving high coverage and accuracy of $f_s$, for which we consider the confidence of $f$ (i.e., the maximum softmax score of $f$) and train $f$ such that its confidence can be used to distinguish correct and incorrect predictions. This might not be achieved easily, since it has been observed that under distribution shift, $f$ can have overconfident predictions~\citep{goodfellow2014explaining,hein2019relu}.
Besides, for active learning, typically we select samples for labeling based on uncertainty or diversity. However, in active selective prediction, sample selection based on uncertainty may lead to the overconfidence issue, and sample selection based on diversity may lead to low accuracy of $f$, as illustrated in Fig.~\ref{fig:challenge-illustration}. Our experiments in Appendix~\ref{app:eval-sr-active-learning} show that these issues indeed exist -- the methods based on uncertainty sampling (e.g., SR+Margin) achieve relatively high accuracy, but suffer from the overconfidence issue, while the methods based on diversity sampling (e.g., SR+kCG) don't have the overconfidence issue, but suffer from low accuracy of $f$. Moreover, the hybrid methods based on uncertainty and diversity sampling (SR+CLUE and SR+BADGE) still suffer from the overconfidence issue. To tackle these, we propose a novel method ASPEST, described next. \section{Proposed Method: ASPEST} \label{sec:method} We propose a novel method called Active Selective Prediction using Ensembles and Self-training (ASPEST), which utilizes two key techniques, checkpoint ensembles and self-training, to solve the active selective prediction problem. The key constituents, checkpoint ensembles and self-training, are designed to tackle the fundamental challenges in active selective prediction, with the ideas of selecting samples for labeling based on uncertainty to achieve high accuracy, and of using checkpoint ensembles and self-training to alleviate the overconfidence issue. We empirically analyze why they can tackle the challenges in Section~\ref{sec:aspest-analysis}. In Appendix~\ref{app:aspest-algo-complexity}, we show the full ASPEST algorithm and analyze its complexity. We first describe how the weights from the intermediate model checkpoints during training are used to construct the checkpoint ensemble.
Since we have all the test inputs, we don't need to save the checkpoints during training, but just record their outputs on the test set $U_X$. Specifically, we use a $n\times K$ matrix $P$ (recall that $n=|U_X|$ and $K$ is the number of classes) to record the average of the softmax outputs of the checkpoint ensemble and use $N_e$ to record the number of checkpoints in the current checkpoint ensemble. During training, we get a stream of checkpoints, and for each incoming checkpoint model $f$, we update $P$ and $N_e$ as: \begin{align} \label{eq:p_update} P_{i, k} \leftarrow \frac{1}{N_e+1} (P_{i, k} \cdot N_e + f(\bfx_i \mid k)) \quad \text{for } 1 \leq i \leq n \text{ and } 1 \leq k \leq K, \quad N_e \leftarrow N_e + 1. \end{align} Since it has been observed that an ensemble of DNNs (known as `deep ensembles') usually produces a confidence score that is better calibrated compared to a single DNN~\citep{lakshminarayanan2017simple}, we consider $f$ to be in the form of deep ensembles and $g$ to be the confidence of the ensemble. Specifically, we continue fine-tuning $N$ models independently via Stochastic Gradient Descent (SGD) with different random seeds (e.g., the randomness can come from different random orders of training batches). At the beginning, we set each model $f^j_0=\bar{f}$ ($j=1, \dots, N$), and set $N_e=0$ and $P=\textbf{0}_{n\times K}$. Here, we initialize each model $f^j_0$ with the source-trained classifier $\bar{f}$ instead of random initialization, to minimize the computational cost. We fine-tune each model $f^j_0$ on $\Dtr$ for $n_s$ steps via SGD using the following training objective: \begin{align} \label{obj:aspest_init_train} \min_{\theta^j} \quad \mathbb{E}_{(\bfx,y)\in \Dtr} \quad \ell_{CE}(\bfx, y;\theta^j), \end{align} where $\ell_{CE}$ is the cross-entropy loss and $\theta^j$ is the model parameters of $f^j_0$. 
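The running-average update of Eq.~(\ref{eq:p_update}) can be maintained without storing any checkpoint weights. The following NumPy sketch (with hypothetical class and method names; it assumes each checkpoint's softmax outputs on $U_X$ are available as an $n\times K$ array) also computes the ensemble margin later used for sample selection:

```python
import numpy as np

class CheckpointEnsemble:
    """Running average P of the softmax outputs of all checkpoints on U_X."""

    def __init__(self, n, K):
        self.P = np.zeros((n, K))  # average softmax outputs, Eq. (p_update)
        self.N_e = 0               # number of checkpoints folded in so far

    def update(self, probs):
        """Fold one checkpoint's n x K softmax outputs into the running mean."""
        self.P = (self.P * self.N_e + probs) / (self.N_e + 1)
        self.N_e += 1

    def margin(self):
        """Gap between the top two ensemble probabilities for each input."""
        top2 = np.sort(self.P, axis=1)[:, -2:]
        return top2[:, 1] - top2[:, 0]

ens = CheckpointEnsemble(n=2, K=2)
ens.update(np.array([[0.9, 0.1], [0.6, 0.4]]))
ens.update(np.array([[0.7, 0.3], [0.4, 0.6]]))
# P is now the element-wise mean of the two checkpoints' outputs
```

Selecting the $m$ lowest-margin test points for labeling then amounts to `np.argsort(ens.margin())[:m]` over the not-yet-labeled indices.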
For every $c_s$ steps when training each $f^j_0$, we update $P$ and $N_e$ using Eq~(\ref{eq:p_update}) with the checkpoint model $f^j_0$. After constructing the initial checkpoint ensemble, we perform a $T$-round active learning process. In each round of active learning, we first select samples for labeling based on the margin of the checkpoint ensemble, then fine-tune the models on the selected labeled test data, and finally perform self-training. We describe the procedure below: \mypara{Sample selection. } In the $t$-th round, we select a batch $B_t$ with a size of $m=[\frac{M}{T}]$ from $U_X$ via: \begin{align} \label{obj:aspest-sample-selection} B_{t}=\arg\max_{B \subset U_X\setminus(\cup_{l=0}^{t-1}B_l), |B|=m} -\sum_{\bfx_i \in B} S(\bfx_i) \end{align} where $B_0=\emptyset$, $S(\bfx_i) = P_{i, \hat{y}} - \max_{k \in \mathcal{Y} \setminus \{\hat{y}\}} P_{i, k}$ and $\hat{y} = \operatornamewithlimits{arg\!\max}_{k \in \mathcal{Y}} P_{i, k}$. We use an oracle to assign ground-truth labels to the examples in $B_{t}$ to get $\tilde{B}_{t}$. Here, we select the test samples for labeling based on the margin of the checkpoint ensemble. The test samples with lower margin should be closer to the decision boundary and they are data points where the ensemble is uncertain about its predictions. Training on those data points can either make the predictions of the ensemble more accurate or make the ensemble have higher confidence on its correct predictions. \mypara{Fine-tuning. } After the sample selection, we reset $N_e$ and $P$ as $N_e=0$ and $P=\textbf{0}_{n\times K}$, because we want to remove those checkpoints in the previous rounds with a worse performance from the checkpoint ensemble (experiments in Appendix~\ref{app:each-round-ens-acc} show that after each round of active learning, the accuracy of the ensemble will be significantly improved). 
We then fine-tune each model $f^j_{t-1}$ ($j=1,\dots, N$) independently via SGD with different randomness on the selected labeled test data to get $f^j_{t}$ using the following training objective: \begin{align} \label{obj:aspest_finetune} \min_{\theta^j} \quad \mathbb{E}_{(\bfx,y)\in \cup_{l=1}^t \tilde{B}_l} \quad \ell_{CE}(\bfx, y;\theta^j) + \lambda \cdot \mathbb{E}_{(\bfx,y)\in \Dtr} \quad \ell_{CE}(\bfx, y;\theta^j), \end{align} where $\theta^j$ is the model parameters of $f^j_{t-1}$ and $\lambda$ is a hyper-parameter. Note that here we use joint training on $\Dtr$ and $\cup_{l=1}^t \tilde{B}_l$ to avoid over-fitting to the small set of labeled test data and prevent the models from forgetting the source training knowledge (see the results in Appendix~\ref{app:joint-training-effect} for the effect of using joint training and the effect of $\lambda$). Every $c_e$ epochs while fine-tuning each model $f^j_{t-1}$, we update $P$ and $N_e$ using Eq.~(\ref{eq:p_update}) with the checkpoint model $f^j_{t-1}$. \mypara{Self-training. } After fine-tuning the models on the selected labeled test data, we use the checkpoint ensemble to construct a pseudo-labeled set $R$ via: \begin{align} \label{eq:pseudo_labeling} R=\{ (\bfx_i, P_{i,:}) \mid \bfx_i \in U_X \land (\eta \leq \max_{k \in \mathcal{Y}} P_{i,k} < 1) \}, \end{align} where $\max_{k \in \mathcal{Y}} P_{i,k}$ is the confidence of the checkpoint ensemble on $\bfx_i$ and $\eta$ is a threshold (refer to Section~\ref{sec:main-exp-results} for the effect of $\eta$). We do not add test data points with confidence equal to $1$ to the pseudo-labeled set, because training on those data points cannot change the models much and may even hurt the performance (refer to Appendix~\ref{app:aspest-upper-bound} for the justification of such a design). We then perform self-training on the pseudo-labeled set $R$. For computational efficiency, we only apply self-training on a subset of $R$.
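The construction of $R$ in Eq.~(\ref{eq:pseudo_labeling}) is a simple confidence filter over the rows of $P$; a sketch under our own naming:

```python
import numpy as np

def build_pseudo_labeled_set(P, eta):
    """Return indices and soft labels of points with confidence in [eta, 1).

    Points with confidence exactly 1 are excluded: training on them barely
    changes the models and may even hurt performance.
    """
    conf = P.max(axis=1)                        # ensemble confidence per point
    idx = np.where((conf >= eta) & (conf < 1.0))[0]
    return idx, P[idx]                          # soft labels are the rows P[i, :]
```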
We construct the subset $R_{\text{sub}}$ by randomly sampling up to $[p\cdot n]$ data points from $R$, where $p \in [0,1]$. We train each model $f^{j}_{t}$ ($j=1,\dots, N$) further on the pseudo-labeled subset $R_{\text{sub}}$ via SGD using the following training objective: \begin{align} \label{obj:aspest_self_train} \min_{\theta^j} \quad \mathbb{E}_{(\bfx, \bfy)\in R_{\text{sub}}} \quad \ell_{KL}(\bfx, \bfy;\theta^j) + \lambda \cdot \mathbb{E}_{(\bfx,y)\in \Dtr} \quad \ell_{CE}(\bfx, y;\theta^j) \end{align} where $\ell_{KL}$ is the KL-Divergence loss, which is defined as: $\ell_{KL}(\bfx, \bfy; \theta) = \sum_{k=1}^K \bfy_k \cdot \log(\frac{\bfy_k}{f(\bfx \mid k; \theta)})$. Note that self-training typically uses predicted labels as pseudo-labels with the cross-entropy loss. We do not follow this practice because the predicted labels might be wrong, and training the models on misclassified pseudo-labeled data points with the cross-entropy loss would make the models highly confident in their wrong predictions, which hurts selective prediction performance. Every $c_e$ epochs while self-training each model $f^j_t$, we update $P$ and $N_e$ using Eq.~(\ref{eq:p_update}) with the checkpoint model $f^j_t$. We add checkpoints obtained during self-training into the checkpoint ensemble to improve the sample selection in the next round of active learning. After $T$ rounds of active learning, we use the checkpoint ensemble as the final selective classifier: the classifier $f(\bfx_i) = \operatornamewithlimits{arg\!\max}_{k\in \mathcal{Y}} P_{i,k}$ and the selection scoring function $g(\bfx_i) = \max_{k \in \mathcal{Y}} P_{i,k}$. \section{Experiments} \label{sec:experiment} This section presents experimental results, especially focusing on the following questions: (\textbf{Q1}) Can we use a small labeling budget to significantly improve selective prediction performance under distribution shift?
(\textbf{Q2}) Does the proposed ASPEST outperform baselines across different datasets with distribution shift? (\textbf{Q3}) What is the effect of checkpoint ensembles and self-training in ASPEST? \subsection{Setup} \label{sec:exp-setup} \mypara{Datasets.} We perform experiments on the following datasets with distribution shift: (i) MNIST$\to$SVHN~\citep{lecun1998mnist,netzer2011reading}, (ii) CIFAR-10$\to$CINIC-10~\citep{krizhevsky2009learning,darlow2018cinic}, (iii) FMoW~\citep{koh2021wilds}, (iv) Amazon Review~\citep{koh2021wilds}, (v) DomainNet~\citep{peng2019moment} and (vi) Otto~\citep{otto}. Details of the datasets are described in Appendix~\ref{app:datasets}. \mypara{Architectures and training.} On MNIST$\to$SVHN, we use a Convolutional Neural Network (CNN)~\citep{lecun1989handwritten} model. On CIFAR-10$\to$CINIC-10, we use the ResNet-20 model~\citep{he2016identity}. On the FMoW dataset, we use the DenseNet-121 model~\citep{huang2017densely}. On Amazon Review, we use the pre-trained RoBERTa model~\citep{liu2019roberta}. On the DomainNet dataset, we use the ResNet-50 model~\citep{he2016deep}. On the Otto dataset, we use a multi-layer perceptron. On each dataset, we train the models on the training set $\Dtr$. More details on model architectures and training on source data are presented in Appendix~\ref{app:arch-pre-train}. \mypara{Active learning hyper-parameters.} We evaluate different methods with different labeling budgets $M$ on each dataset. By default, we set the number of rounds $T=10$ for all methods (Appendix~\ref{app:num-round-effect} presents the effect of $T$). During the active learning process, we fine-tune the model on the selected labeled test data. During fine-tuning, we do not apply any data augmentation to the test data. We use the same fine-tuning hyper-parameters for different methods to ensure a fair comparison. More details on the fine-tuning hyper-parameters can be found in Appendix~\ref{app:active-learning-hyper}.
\mypara{Baselines.} We consider Softmax Response (SR)~\citep{geifman2017selective} and Deep Ensembles (DE)~\citep{lakshminarayanan2017simple} with various active learning sampling methods as the baselines. SR+Uniform means combining SR with an acquisition function based on uniform sampling (similarly for DE and other acquisition functions). We consider sampling methods from both traditional active learning (e.g., BADGE~\citep{ash2019deep}) and active domain adaptation (e.g., CLUE~\citep{prabhu2021active}). Appendix~\ref{app:baselines} further describes the details of the baselines. \mypara{Hyper-parameters of ASPEST.} We set $\lambda=1$, $n_s=1000$ and $N=5$ (see Appendix~\ref{app:num-model-effect} for the effect of $N$), which are the same as those for Deep Ensembles, for fair comparisons. For all datasets, we use $c_s=200$, $p=0.1$, $\eta=0.9$, $20$ self-training epochs, and $c_e=5$. Note that we do not tune $c_s$, $c_e$, or $p$, and use these fixed values throughout. We select $\eta$ based on the performance on a validation dataset (i.e., DomainNet R$\to$I) and use the same value across all other datasets.
\subsection{Results} \label{sec:main-exp-results} \begin{table}[htb] \centering \begin{adjustbox}{width=0.6\columnwidth,center} \begin{tabular}{l|c|c} \toprule Dataset & \multicolumn{2}{c}{MNIST$\to$SVHN} \\ \hline Metric & $cov^*|acc\geq 90\%$ $\uparrow$ & $acc|cov^*\geq 90\%$ $\uparrow$ \\ \hline \hline SR (without active learning) & 0.08$\pm$0.0 & 25.80$\pm$0.0 \\ \hline SR+Margin (M=500) & 62.38$\pm$2.7 & 80.21$\pm$0.9 \\ \hline SR+Margin (M=1000) & 79.04$\pm$0.2 & 85.36$\pm$0.3 \\ \hline \hline DE (without active learning) & 0.12$\pm$0.1 & 28.17$\pm$0.5 \\ \hline DE+Margin (M=500) & 76.35$\pm$2.7 & 84.34$\pm$1.1 \\ \hline DE+Margin (M=1000) & 89.19$\pm$0.3 & 89.59$\pm$0.1 \\ \hline \hline ASPEST (M=500) & 87.51$\pm$0.9 & 88.88$\pm$0.4 \\ \hline ASPEST (M=1000) & \textbf{94.91}$\pm$0.4 & \textbf{92.44}$\pm$0.2 \\ \bottomrule \end{tabular} \end{adjustbox} \caption[]{Results on MNIST$\to$SVHN to describe the effect of combining selective prediction with active learning. The mean and std of each metric over three random runs are reported (mean$\pm$std). $cov^*$ is defined in Appendix~\ref{app:asp-effect}. All numbers are percentages. \textbf{Bold} numbers are superior results. } \label{tab:mnist-vs-sp-results} \end{table} \mypara{Impacts of combining selective prediction with active learning. } We evaluate the accuracy of the source trained models on the test set $U_X$ of different datasets. The results in Appendix~\ref{app:eval-source-trained-models} show that the models trained on the source training set $\Dtr$ suffer a performance drop on the target test set $U_X$, and sometimes this drop can be large. For example, the model trained on MNIST has a source test accuracy of 99.40\%. However, its accuracy on the target test set $U_X$ from SVHN is only 24.68\%. If we directly build a selective classifier on top of the source trained model, then to achieve a target accuracy of $90\%$, the coverage would be at most $27.42\%$.
In Table~\ref{tab:mnist-vs-sp-results}, we demonstrate that for a target accuracy of $90\%$, the coverage achieved by SR and DE without active learning is very low (nearly 0\%). This means that almost all test examples need human intervention or labeling. This is a large cost since the test set of SVHN contains over 26K images. However, by combining selective prediction with active learning (e.g., using the proposed method ASPEST), we only need to label $500$ test examples to achieve a target accuracy of $90\%$ with a coverage of $87.5\%$. Thus, during the active learning and selective prediction processes, only $12.5\%$ of the test examples from SVHN need to be labeled by a human to achieve the target accuracy of $90\%$, resulting in a significant reduction of the overall human labeling cost. Similar results are observed for other datasets (see Appendix~\ref{app:asp-effect}). \begin{table*}[h!] \centering \begin{adjustbox}{width=\columnwidth,center} \begin{tabular}{l|cc|cc|cc} \toprule Dataset & \multicolumn{2}{c|}{DomainNet R$\to$C (easy)} & \multicolumn{2}{c|}{Amazon Review} & \multicolumn{2}{c}{Otto} \\ \hline Metric & $cov|acc\geq 80\%$ $\uparrow$ & AUC $\uparrow$ & $cov|acc\geq 80\%$ $\uparrow$ & AUC $\uparrow$ & $cov|acc\geq 80\%$ $\uparrow$ & AUC $\uparrow$ \\ \hline \hline SR+Uniform & 25.56$\pm$0.6 & 63.31$\pm$0.4 & 13.71$\pm$11.3 & 72.71$\pm$1.5 & 63.58$\pm$0.7 & 84.46$\pm$0.2 \\ \hline SR+Confidence & 25.96$\pm$0.2 & 64.20$\pm$0.6 & 11.28$\pm$8.9 & 72.89$\pm$0.7 & 69.63$\pm$1.7 & 85.91$\pm$0.3 \\ \hline SR+Entropy & 25.44$\pm$1.0 & 63.52$\pm$0.6 & 5.55$\pm$7.8 & 71.96$\pm$1.6 & 67.79$\pm$0.8 & 85.41$\pm$0.3 \\ \hline SR+Margin & 26.28$\pm$1.2 & 64.37$\pm$0.8 & 14.48$\pm$10.9 & 73.25$\pm$1.0 & 68.10$\pm$0.1 & 85.56$\pm$0.1 \\ \hline SR+kCG & 21.12$\pm$0.3 & 58.88$\pm$0.0 & 20.02$\pm$11.0 & 72.34$\pm$3.2 & 64.84$\pm$0.7 & 85.08$\pm$0.2 \\ \hline SR+CLUE & 27.17$\pm$0.8 & 64.38$\pm$0.6 & 4.15$\pm$5.9 & 73.43$\pm$0.4 & 68.21$\pm$1.2 & 85.82$\pm$0.3 \\ \hline SR+BADGE
& 27.78$\pm$0.8 & 64.90$\pm$0.5 & 22.58$\pm$0.4 & 73.80$\pm$0.6 & 67.23$\pm$1.0 & 85.41$\pm$0.3 \\ \hline \hline DE+Uniform & 30.82$\pm$0.8 & 67.60$\pm$0.4 & 34.35$\pm$1.4 & 76.20$\pm$0.3 & 70.74$\pm$0.5 & 86.78$\pm$0.1 \\ \hline DE+Entropy & 29.13$\pm$0.9 & 67.48$\pm$0.3 & 31.74$\pm$1.4 & 75.98$\pm$0.4 & 75.71$\pm$0.3 & 87.87$\pm$0.1 \\ \hline DE+Confidence & 29.90$\pm$0.8 & 67.45$\pm$0.3 & 35.12$\pm$1.8 & 76.63$\pm$0.2 & 75.52$\pm$0.2 & 87.84$\pm$0.1 \\ \hline DE+Margin & 31.82$\pm$1.3 & 68.85$\pm$0.4 & 33.42$\pm$1.3 & 76.18$\pm$0.2 & 75.49$\pm$0.8 & 87.89$\pm$0.2 \\ \hline DE+Avg-KLD & 32.23$\pm$0.2 & 68.73$\pm$0.2 & 33.03$\pm$1.5 & 76.21$\pm$0.4 & 75.91$\pm$0.2 & 87.89$\pm$0.0 \\ \hline DE+CLUE & 30.80$\pm$0.3 & 67.82$\pm$0.2 & 33.92$\pm$3.0 & 76.27$\pm$0.6 & 69.66$\pm$0.5 & 86.67$\pm$0.1 \\ \hline DE+BADGE & 30.16$\pm$1.3 & 68.46$\pm$0.3 & 32.23$\pm$3.7 & 76.13$\pm$0.7 & 73.23$\pm$0.2 & 87.55$\pm$0.1 \\ \hline \hline ASPEST (ours) & \textbf{37.38}$\pm$0.1 & \textbf{71.61}$\pm$0.2 & \textbf{38.44}$\pm$0.7 & \textbf{77.69}$\pm$0.1 & \textbf{77.85}$\pm$0.2 & \textbf{88.28}$\pm$0.1 \\ \bottomrule \end{tabular} \end{adjustbox} \caption[]{Results of comparing ASPEST to the baselines on DomainNet R$\to$C, Amazon Review and Otto. The mean and std of each metric over three random runs are reported (mean$\pm$std). The labeling budget $M$ is 500. All numbers are percentages. \textbf{Bold} numbers are superior results. } \label{tab:domainnet-otto-main-results} \end{table*} \mypara{Baseline comparisons. } We compare ASPEST with the two existing selective classification methods: SR and DE with various active learning sampling approaches. The results in Table~\ref{tab:domainnet-otto-main-results} (complete results on all datasets for all metrics and different labeling budgets are provided in Appendix~\ref{app:complete-results}) show that ASPEST consistently outperforms the baselines across different image, text and tabular datasets. 
For example, for MNIST$\to$SVHN, ASPEST improves the AUC from $79.36\%$ to $88.84\%$ when the labeling budget ($M$) is only 100. When $M=500$, for CIFAR-10$\to$CINIC-10, ASPEST improves the AUC from $90.74\%$ to $90.95\%$; for FMoW, ASPEST improves the AUC from $70.59\%$ to $71.12\%$; for Amazon Review, ASPEST improves the AUC from $76.63\%$ to $77.69\%$; for DomainNet R$\to$C, ASPEST improves the AUC from $68.85\%$ to $71.61\%$; for DomainNet R$\to$P, ASPEST improves the AUC from $56.67\%$ to $58.74\%$; for DomainNet R$\to$S, ASPEST improves the AUC from $46.38\%$ to $49.62\%$; for Otto, ASPEST improves the AUC from $87.89\%$ to $88.28\%$. \subsection{Analyses and Discussions} \label{sec:aspest-analysis} In this section, we analyze why the key components of ASPEST, checkpoint ensembles and self-training, improve selective prediction, and we perform an ablation study to show their effect. \mypara{Checkpoint ensembles can alleviate overfitting and overconfidence.} We observe that in active selective prediction, when fine-tuning the model on the small amount of selected labeled test data, the model can suffer from overfitting and overconfidence, and ensembling the checkpoints along the training path can effectively alleviate these issues (see the analysis in Appendix~\ref{app:ckpt-ens-emprical-analysis}). \mypara{Self-training can alleviate overconfidence.} We observe that the checkpoint ensemble constructed after fine-tuning is less confident on the test data $U_X$ compared to the deep ensemble. Thus, using the softmax outputs of the checkpoint ensemble as soft pseudo-labels for self-training can alleviate overconfidence and improve selective prediction performance (see the analysis in Appendix~\ref{app:self-training-emprical-analysis}).
\begin{table*}[htb] \centering \begin{adjustbox}{width=0.8\columnwidth,center} \begin{tabular}{l|cc|cc} \toprule Dataset & \multicolumn{2}{c|}{MNIST$\to$SVHN} & \multicolumn{2}{c}{DomainNet R$\to$C} \\ \hline Metric & \multicolumn{2}{c|}{AUC $\uparrow$} & \multicolumn{2}{c}{AUC $\uparrow$} \\ \hline Labeling Budget & 100 & 500 & 500 & 1000 \\ \hline \hline DE+Margin & 78.59$\pm$1.4 & 94.31$\pm$0.6 & 68.85$\pm$0.4 & 71.29$\pm$0.3 \\ \hline ASPEST without self-training & 78.09$\pm$1.3 & 94.25$\pm$0.4 & 69.59$\pm$0.2 & 72.45$\pm$0.1 \\ \hline ASPEST without checkpoint ensemble & 83.78$\pm$2.9 & 96.54$\pm$0.2 & 69.94$\pm$0.1 & 72.20$\pm$0.4 \\ \hline ASPEST ($\eta$=0.1) & 83.77$\pm$1.7 & 96.01$\pm$0.4 & 70.35$\pm$0.2 & 72.89$\pm$0.4 \\ \hline ASPEST ($\eta$=0.5) & 83.99$\pm$1.3 & 96.24$\pm$0.2 & 70.92$\pm$0.3 & 73.37$\pm$0.1 \\ \hline ASPEST ($\eta$=0.6) & 85.17$\pm$1.3 & 96.24$\pm$0.2 & 70.96$\pm$0.2 & 73.05$\pm$0.1 \\ \hline ASPEST ($\eta$=0.8) & 85.40$\pm$2.3 & 96.74$\pm$0.1 & 71.05$\pm$0.2 & 72.99$\pm$0.3 \\ \hline ASPEST ($\eta$=0.9) & \textbf{88.84}$\pm$1.0 & 96.62$\pm$0.2 & \textbf{71.61}$\pm$0.2 & 73.27$\pm$0.2 \\ \hline ASPEST ($\eta$=0.95) & 87.67$\pm$1.3 & \textbf{96.74}$\pm$0.1 & 71.03$\pm$0.3 & \textbf{73.38}$\pm$0.2 \\ \bottomrule \end{tabular} \end{adjustbox} \caption[]{Ablation study results for ASPEST. The mean and std of each metric over three random runs are reported (mean$\pm$std). All numbers are percentages. \textbf{Bold} numbers are superior results. } \label{tab:aspest-ablation} \end{table*} \mypara{Ablation studies. } Compared to DE+Margin, ASPEST has two additional components: checkpoint ensemble and self-training. We perform ablation experiments on MNIST$\to$SVHN and DomainNet to analyze the effect of these two components. We also study the effect of the threshold $\eta$ in self-training.
The results in Table~\ref{tab:aspest-ablation} show that for MNIST$\to$SVHN, adding the checkpoint ensemble component alone (ASPEST without self-training) does not improve the performance over DE+Margin, whereas adding the self-training component alone (ASPEST without checkpoint ensemble) can significantly improve the performance. For DomainNet, both checkpoint ensemble and self-training have positive contributions. In both cases, ASPEST (with both self-training and checkpoint ensemble) achieves much better results than DE+Margin or applying either component alone. We also show that the performance is not highly sensitive to $\eta$, although a larger $\eta$ (e.g., $\eta=0.9$) typically yields better results. \mypara{Integrating with UDA. } To study whether incorporating unsupervised domain adaptation (UDA) techniques into training could improve active selective prediction, we evaluate DE with UDA and ASPEST with UDA in Appendix~\ref{app:train-uda}. Our results show that ASPEST outperforms (or is on par with) DE with UDA, even though ASPEST does not utilize UDA. Furthermore, we show that combining ASPEST with UDA can achieve even better performance. For example, on MNIST$\to$SVHN, \texttt{ASPEST with DANN} improves the mean AUC from $96.62\%$ to $97.03\%$ when the labeling budget is $500$. However, in some cases, combining ASPEST with UDA yields much worse results. For example, on MNIST$\to$SVHN, when the labeling budget is $100$, combining ASPEST with UDA reduces the mean AUC by over $4\%$. We leave the exploration of UDA techniques to improve active selective prediction to future work -- superior and robust UDA techniques can be easily incorporated into ASPEST to enhance its overall performance. \section{Conclusion} \label{sec:conclusion} In this paper, we introduce a new learning paradigm called \textit{active selective prediction}, which uses active learning to improve selective prediction under distribution shift.
We show that this new paradigm results in improved accuracy and coverage on a distributionally shifted test domain and reduces the need for human labeling. We also propose a novel method ASPEST using checkpoint ensemble and self-training with a low labeling cost. We demonstrate ASPEST's effectiveness over other baselines for this new problem setup on various image, text and structured datasets. Future work in this direction can investigate unsupervised hyperparameter tuning on test data, online data streaming, or further minimizing the labeling effort by designing time-preserving labeling interfaces. \section{Potential Negative Societal Impacts} \label{sec:societal-impacts} The proposed framework yields more reliable predictions with more optimized utilization of humans in the loop. One potential risk of such a system is that if the humans in the loop yield inaccurate or biased labels, these inaccuracies or biases might be absorbed into the predictor model and the selective prediction mechanism, and eventually the outcomes of the system might become inaccurate and biased. We leave methods for detecting inaccurate or biased labels to future work. {\small } \appendix \onecolumn \begin{center} \textbf{\LARGE Supplementary Material } \end{center} In Section~\ref{app:baselines}, we describe the baselines in detail. In Section~\ref{app:aspest-algo-complexity}, we show the complete ASPEST algorithm and analyze its computational complexity. In Section~\ref{app:exp-setup-detail}, we provide the details of the experimental setup. In Section~\ref{app:additional-exp-results}, we give some additional experimental results. \section{Baselines} \label{app:baselines} We consider two selective classification baselines, Softmax Response (SR)~\citep{geifman2017selective} and Deep Ensembles (DE)~\citep{lakshminarayanan2017simple}, and combine them with active learning techniques. We describe them in detail below.
\subsection{Softmax Response} Suppose the neural network classifier is $f$ where the last layer is a softmax. Let $f(\bfx\mid k)$ be the soft response output for the $k$-th class. Then the classifier is defined as $f(\bfx)=\operatornamewithlimits{arg\!\max}_{k\in \mathcal{Y}} f(\bfx\mid k)$ and the selection scoring function is defined as $g(\bfx) = \max_{k\in \mathcal{Y}} f(\bfx\mid k)$, which is also known as the Maximum Softmax Probability (MSP) of the neural network. Recall that with $f$ and $g$, the selective classifier is defined in Eq~(\ref{eq:selective-classifier}). We use active learning to fine-tune the model $f$ to improve the selective prediction performance of Softmax Response on the unlabeled test dataset $U_X$. The complete algorithm is presented in Algorithm~\ref{alg:sr}. In our experiments, we always set $\lambda=1$. We use the joint training objective~(\ref{obj:sr-finetune}) to avoid over-fitting to the small labeled test set $\cup_{l=1}^t \tilde{B}_l$ and prevent the model from forgetting the source training knowledge. The algorithm can be combined with different kinds of acquisition functions. We describe the acquisition functions considered for Softmax Response below. \mypara{Uniform.} In the $t$-th round of active learning, we select $[\frac{M}{T}]$ data points as the batch $B_{t}$ from $U_X \setminus \cup_{l=0}^{t-1} B_l$ via uniform random sampling. The corresponding acquisition function is: $a(B, f_{t-1}, g_{t-1}) = 1$. When solving the objective~(\ref{obj:sr-acq}), the tie is broken randomly. 
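As a reference point, the SR selective classifier described above (predict the argmax class, defer to a human when the MSP falls below a confidence threshold) can be sketched as follows; the function name and threshold value are our own illustrative choices:

```python
import numpy as np

def selective_predict(probs, tau):
    """Softmax Response: predict the argmax class, abstain when MSP < tau.

    probs : (n, K) softmax outputs of the classifier f
    Returns (predictions, accept_mask); rejected points go to a human.
    """
    preds = probs.argmax(axis=1)
    msp = probs.max(axis=1)        # Maximum Softmax Probability g(x)
    return preds, msp >= tau
```

Raising `tau` trades coverage for selective accuracy, which is exactly the trade-off the coverage/accuracy metrics in the experiments measure.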
\mypara{Confidence.} We define the confidence score of $f$ on the input $\bfx$ as \begin{align} \label{eq:conf-score} S_{\text{conf}}(\bfx; f) = \max_{k \in \mathcal{Y}} \quad f(\bfx \mid k) \end{align} Then the acquisition function in the $t$-th round of active learning is defined as: \begin{align} a(B, f_{t-1}, g_{t-1}) = -\sum_{\bfx \in B} S_{\text{conf}}(\bfx; f_{t-1}) \end{align} That is, we select those test examples with the lowest confidence scores for labeling. \mypara{Entropy.} We define the entropy score of $f$ on the input $\bfx$ as \begin{align} \label{eq:entropy-score} S_{\text{entropy}}(\bfx; f) = \sum_{k \in \mathcal{Y}} \quad - f(\bfx \mid k) \cdot \log f(\bfx \mid k) \end{align} Then the acquisition function in the $t$-th round of active learning is defined as: \begin{align} a(B, f_{t-1}, g_{t-1}) = \sum_{\bfx \in B} S_{\text{entropy}}(\bfx; f_{t-1}) \end{align} That is, we select those test examples with the highest entropy scores for labeling. \mypara{Margin.} We define the margin score of $f$ on the input $\bfx$ as \begin{align} \label{eq:margin-score} S_{\text{margin}}(\bfx; f) &= f(\bfx \mid \hat{y}) - \max_{k \in \mathcal{Y} \setminus \{\hat{y}\}} \quad f(\bfx \mid k) \\ s.t. \quad \hat{y} &= \operatornamewithlimits{arg\!\max}_{k \in \mathcal{Y}} \quad f(\bfx \mid k) \end{align} Then the acquisition function in the $t$-th round of active learning is defined as: \begin{align} a(B, f_{t-1}, g_{t-1}) = -\sum_{\bfx \in B} S_{\text{margin}}(\bfx; f_{t-1}) \end{align} That is, we select those test examples with the lowest margin scores for labeling. \mypara{kCG.} We use the k-Center-Greedy algorithm proposed in~\citep{sener2017active} to select test examples for labeling in each round. \mypara{CLUE.} We use the Clustering Uncertainty-weighted Embeddings (CLUE) proposed in~\citep{prabhu2021active} to select test examples for labeling in each round.
Following~\citep{prabhu2021active}, we set the hyper-parameter $T=0.1$ on DomainNet and set $T=1.0$ on other datasets. \mypara{BADGE.} We use the Diverse Gradient Embeddings (BADGE) proposed in~\citep{ash2019deep} to select test examples for labeling in each round. \begin{algorithm}[h] \caption{Softmax Response with Active Learning} \label{alg:sr} \begin{algorithmic} \REQUIRE A training dataset $\Dtr$, an unlabeled test dataset $U_X$, the number of rounds $T$, the labeling budget $M$, a source-trained model $\bar{f}$, an acquisition function $a$ and a hyper-parameter $\lambda$. \STATE Let $f_0 = \bar{f}$. \STATE Let $B_0=\emptyset$. \STATE Let $g_t(\bfx) = \max_{k\in \mathcal{Y}} f_t(\bfx\mid k)$. \FOR{$t=1, \cdots, T$} \STATE Select a batch $B_{t}$ with a size of $m=[\frac{M}{T}]$ from $U_X$ for labeling via: \begin{align} \label{obj:sr-acq} B_{t}=\arg\max_{B \subset U_X\setminus(\cup_{l=0}^{t-1}B_l), |B|=m} a(B, f_{t-1}, g_{t-1}) \end{align} \STATE Use an oracle to assign ground-truth labels to the examples in $B_{t}$ to get $\tilde{B}_{t}$. \STATE Fine-tune the model $f_{t-1}$ using the following training objective: \begin{align} \label{obj:sr-finetune} \min_{\theta} \quad \mathbb{E}_{(\bfx,y)\in \cup_{l=1}^t \tilde{B}_l} \quad \ell_{CE}(\bfx, y;\theta) + \lambda \cdot \mathbb{E}_{(\bfx,y)\in \Dtr} \quad \ell_{CE}(\bfx, y;\theta) \end{align} where $\theta$ is the model parameters of $f_{t-1}$ and $\ell_{CE}$ is the cross-entropy loss function. \STATE Let $f_t = f_{t-1}$. \ENDFOR \ENSURE The classifier $f=f_T$ and the selection scoring function $g=\max_{k\in \mathcal{Y}} f(\bfx\mid k)$. \end{algorithmic} \end{algorithm} \subsection{Deep Ensembles} It has been shown that deep ensembles can significantly improve the selective prediction performance~\citep{lakshminarayanan2017simple}, not only because deep ensembles are more accurate than a single model, but also because deep ensembles yield more calibrated confidence. 
Suppose the ensemble model $f$ contains $N$ models $f^1, \dots, f^N$. Let $f^j(\bfx\mid k)$ denote the predicted probability of the model $f^j$ on the $k$-th class. We define the predicted probability of the ensemble model $f$ on the $k$-th class as: \begin{align} \label{eq:ens-pred-prob} f(\bfx\mid k)=\frac{1}{N} \sum_{j=1}^N f^j(\bfx \mid k). \end{align} The classifier is defined as $f(\bfx)=\operatornamewithlimits{arg\!\max}_{k\in \mathcal{Y}} f(\bfx \mid k)$ and the selection scoring function is defined as $g(\bfx)=\max_{k\in \mathcal{Y}} f(\bfx \mid k)$. We use active learning to fine-tune each model $f^j$ in the ensemble to improve the selective prediction performance of the ensemble on the unlabeled test dataset $U_X$. Each model $f^j$ is first initialized with the source-trained model $\bar{f}$, and then fine-tuned independently via Stochastic Gradient Descent (SGD) with different sources of randomness (e.g., different random orders of the training batches) on the training dataset $\Dtr$ and the selected labeled test data. Note that this way of constructing the ensemble differs from the standard Deep Ensembles method, which trains the models from different random initializations. We construct the ensemble this way due to the constraint in our problem setting, which requires us to fine-tune a given source-trained model $\bar{f}$. Training the models from different random initializations might lead to an ensemble with better performance, but it is much more expensive, especially when the training dataset and the model are large (e.g., training foundation models). Thus, the constraint in our problem setting is feasible in practice. The complete algorithm is presented in Algorithm~\ref{alg:de}. In our experiments, we always set $\lambda=1$, $N=5$, and $n_s=1000$. We also use joint training here, for the same reasons as for the Softmax Response baseline. The algorithm can be combined with different kinds of acquisition functions.
We describe the acquisition functions considered below. \mypara{Uniform.} In the $t$-th round of active learning, we select $[\frac{M}{T}]$ data points as the batch $B_{t}$ from $U_X \setminus \cup_{l=0}^{t-1} B_l$ via uniform random sampling. The corresponding acquisition function is: $a(B, f_{t-1}, g_{t-1}) = 1$. When solving the objective~(\ref{obj:de-acq}), the tie is broken randomly. \mypara{Confidence.} The confidence scoring function $S_{\text{conf}}$ for the ensemble model $f$ is the same as that in Eq.~(\ref{eq:conf-score}) ($f(\bfx \mid k)$ for the ensemble model $f$ is defined in Eq.~(\ref{eq:ens-pred-prob})). The acquisition function in the $t$-th round of active learning is defined as: \begin{align} a(B, f_{t-1}, g_{t-1}) = -\sum_{\bfx \in B} S_{\text{conf}}(\bfx; f_{t-1}) \end{align} That is, we select those test examples with the lowest confidence scores for labeling. \mypara{Entropy.} The entropy scoring function $S_{\text{entropy}}$ for the ensemble model $f$ is the same as that in Eq.~(\ref{eq:entropy-score}). The acquisition function in the $t$-th round of active learning is defined as: \begin{align} a(B, f_{t-1}, g_{t-1}) = \sum_{\bfx \in B} S_{\text{entropy}}(\bfx; f_{t-1}) \end{align} That is, we select those test examples with the highest entropy scores for labeling. \mypara{Margin.} The margin scoring function $S_{\text{margin}}$ for the ensemble model $f$ is the same as that in Eq.~(\ref{eq:margin-score}). The acquisition function in the $t$-th round of active learning is defined as: \begin{align} a(B, f_{t-1}, g_{t-1}) = -\sum_{\bfx \in B} S_{\text{margin}}(\bfx; f_{t-1}) \end{align} That is, we select those test examples with the lowest margin scores for labeling. \mypara{Avg-KLD.} The Average Kullback-Leibler Divergence (Avg-KLD) is proposed in \citep{mccallum1998employing} as a disagreement measure for model ensembles, which can be used for sample selection in active learning.
The Avg-KLD score of the ensemble model $f$ on the input $\bfx$ is defined as: \begin{align} \label{eq:avg-kl-divergence-score} S_{\text{kl}}(\bfx; f) = \frac{1}{N} \sum_{j=1}^{N} \sum_{k \in \mathcal{Y}} \quad f^{j}(\bfx \mid k) \cdot \log \frac{f^{j}(\bfx \mid k)}{f(\bfx \mid k)}. \end{align} Then the acquisition function in the $t$-th round of active learning is defined as: \begin{align} a(B, f_{t-1}, g_{t-1}) = \sum_{\bfx \in B} S_{\text{kl}}(\bfx; f_{t-1}) \end{align} That is, we select those test examples with the highest Avg-KLD scores for labeling. \mypara{CLUE.} CLUE~\citep{prabhu2021active} is proposed for a single model. Here, we adapt CLUE for the ensemble model, which requires a redefinition of the entropy function $\mathcal{H}(Y\mid \bfx)$ and the embedding function $\phi(\bfx)$ used in the CLUE algorithm. We define the entropy function as Eq.~(\ref{eq:entropy-score}) with the ensemble model $f$. Suppose $\phi^j$ is the embedding function for the model $f^j$ in the ensemble. Then, the embedding of the ensemble model $f$ on the input $\bfx$ is $[\phi^1(\bfx), \dots, \phi^N(\bfx)]$, which is the concatenation of the embeddings of the models $f^1, \dots, f^N$ on $\bfx$. Following~\citep{prabhu2021active}, we set the hyper-parameter $T=0.1$ on DomainNet and set $T=1.0$ on other datasets. \mypara{BADGE.} BADGE~\citep{ash2019deep} is proposed for a single model. Here, we adapt BADGE for the ensemble model, which requires a redefinition of the gradient embedding $g_x$ in the BADGE algorithm. Towards this end, we propose the gradient embedding $g_x$ of the ensemble model $f$ as the concatenation of the gradient embeddings of the models $f^1, \dots, f^N$.
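The ensemble prediction of Eq.~(\ref{eq:ens-pred-prob}) and the Avg-KLD disagreement score of Eq.~(\ref{eq:avg-kl-divergence-score}) can be sketched together in NumPy; the function names are ours, and a small $\epsilon$ guards the logarithm:

```python
import numpy as np

def ensemble_probs(member_probs):
    """Average the members' softmax outputs (ensemble prediction).

    member_probs : (N, n, K) outputs of the N fine-tuned models.
    """
    return member_probs.mean(axis=0)

def avg_kld(member_probs, eps=1e-12):
    """Mean KL divergence of each member from the ensemble average.

    Returns an (n,) disagreement score: zero when all members agree,
    larger when they disagree.
    """
    mean = ensemble_probs(member_probs)
    kl = np.sum(member_probs * np.log((member_probs + eps) / (mean + eps)), axis=2)
    return kl.mean(axis=0)
```

Selecting the test points with the highest `avg_kld` score implements the Avg-KLD acquisition criterion.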
\begin{algorithm}[h] \caption{Deep Ensembles with Active Learning} \label{alg:de} \begin{algorithmic} \REQUIRE A training dataset $\Dtr$, an unlabeled test dataset $U_X$, the number of rounds $T$, the total labeling budget $M$, a source-trained model $\bar{f}$, an acquisition function $a(B, f, g)$, the number of models in the ensemble $N$, the number of initial training steps $n_s$, and a hyper-parameter $\lambda$. \STATE Let $f^j_0 = \bar{f}$ for $j=1, \dots, N$. \STATE Fine-tune each model $f^j_0$ in the ensemble via SGD for $n_s$ training steps independently using the following training objective with different randomness: \begin{align} \min_{\theta^j} \quad \mathbb{E}_{(\bfx,y)\in \Dtr} \quad \ell_{CE}(\bfx, y;\theta^j) \end{align} where $\theta^j$ is the model parameters of $f^j_0$ and $\ell_{CE}$ is the cross-entropy loss function. \STATE Let $B_0=\emptyset$. \STATE Let $g_t(\bfx) = \max_{k\in \mathcal{Y}} f_t(\bfx\mid k)$. \FOR{$t=1, \cdots, T$} \STATE Select a batch $B_{t}$ with a size of $m=[\frac{M}{T}]$ from $U_X$ for labeling via: \begin{align} \label{obj:de-acq} B_{t}=\arg\max_{B \subset U_X\setminus(\cup_{l=0}^{t-1}B_l), |B|=m} a(B, f_{t-1}, g_{t-1}) \end{align} \STATE Use an oracle to assign ground-truth labels to the examples in $B_{t}$ to get $\tilde{B}_{t}$. \STATE Fine-tune each model $f^j_{t-1}$ in the ensemble via SGD independently using the following training objective with different randomness: \begin{align} \label{obj:de-finetune} \min_{\theta^j} \quad \mathbb{E}_{(\bfx,y)\in \cup_{l=1}^t \tilde{B}_l} \quad \ell_{CE}(\bfx, y;\theta^j) + \lambda \cdot \mathbb{E}_{(\bfx,y)\in \Dtr} \quad \ell_{CE}(\bfx, y;\theta^j) \end{align} where $\theta^j$ is the model parameters of $f^j_{t-1}$. \STATE Let $f^j_t = f^j_{t-1}$. \ENDFOR \ENSURE The classifier $f=f_T$ and the selection scoring function $g=\max_{k\in \mathcal{Y}} f(\bfx\mid k)$.
\end{algorithmic} \end{algorithm} \section{ASPEST Algorithm and its Computational Complexity} \label{app:aspest-algo-complexity} \begin{algorithm*}[htb] \caption{Active Selective Prediction using Ensembles and Self-Training} \label{alg:aspest} \begin{algorithmic} \REQUIRE A training set $\mathcal{D}^{tr}$, an unlabeled test set $U_X$, the number of rounds $T$, the labeling budget $M$, the number of models $N$, the number of initial training steps $n_s$, the initial checkpoint steps $c_s$, a checkpoint epoch $c_e$, a threshold $\eta$, a sub-sampling fraction $p$, and a hyper-parameter $\lambda$. \STATE Let $f^j_0 = \bar{f}$ for $j=1, \dots, N$. \STATE Set $N_e=0$ and $P=\textbf{0}_{n\times K}$. \STATE Fine-tune each $f^j_0$ for $n_s$ training steps using objective~(\ref{obj:aspest_init_train}), updating $P$ and $N_e$ via Eq.~(\ref{eq:p_update}) every $c_s$ training steps. \FOR{$t=1, \cdots, T$} \STATE Select a batch $B_{t}$ from $U_X$ for labeling using the sample selection objective~(\ref{obj:aspest-sample-selection}). \STATE Use an oracle to assign ground-truth labels to the examples in $B_{t}$ to get $\tilde{B}_{t}$. \STATE Set $N_e=0$ and $P=\textbf{0}_{n\times K}$. \STATE Fine-tune each $f^j_{t-1}$ using objective~(\ref{obj:aspest_finetune}), updating $P$ and $N_e$ via Eq.~(\ref{eq:p_update}) every $c_e$ training epochs. \STATE Let $f^j_t = f^j_{t-1}$. \STATE Construct the pseudo-labeled set $R$ via Eq.~(\ref{eq:pseudo_labeling}) and create $R_{\text{sub}}$ by randomly sampling up to $[p\cdot n]$ data points from $R$. \STATE Train each $f^j_t$ further via SGD using objective~(\ref{obj:aspest_self_train}), updating $P$ and $N_e$ via Eq.~(\ref{eq:p_update}) every $c_e$ training epochs. \ENDFOR \ENSURE The classifier $f(\bfx_i) = \operatornamewithlimits{arg\!\max}_{k \in \mathcal{Y}} P_{i,k}$ and the selection scoring function $g(\bfx_i) = \max_{k\in \mathcal{Y}} P_{i,k}$. 
\end{algorithmic} \end{algorithm*} \begin{figure*} \caption{\small Illustration of the checkpoint ensemble and pseudo-labeled set construction in the proposed ASPEST. } \label{fig:aspest-illustration} \end{figure*} Algorithm~\ref{alg:aspest} presents the overall ASPEST method. Figure~\ref{fig:aspest-illustration} illustrates how the checkpoint ensemble and the pseudo-labeled set are constructed in the proposed ASPEST. Next, we analyze the computational complexity of ASPEST. Let $t_u$ denote the complexity of one step of updating $P$ and $N_e$ (mainly one forward pass of the DNN), $t_g$ the complexity of one DNN gradient update step (mainly one forward and one backward pass of the DNN), and $t_s$ the complexity of sample selection (mainly sorting the test examples). Then, the total complexity of ASPEST is $O\Big(N \cdot \frac{n_s}{c_s}\cdot t_u + N \cdot n_s \cdot t_g + T \cdot [t_s + N \cdot (e_f + e_s \cdot p) \cdot \frac{n}{b} \cdot t_g + N \cdot \frac{e_s+e_f}{c_e}\cdot t_u]\Big)$, where $e_f$ is the number of fine-tuning epochs, $e_s$ is the number of self-training epochs, and $b$ is the batch size. Although the training objectives include training on $\Dtr$, the complexity does not depend on the size of $\Dtr$, since we measure $e_f$ over $\cup_{l=1}^t \tilde{B}_l$ in training objective~(\ref{obj:aspest_finetune}) and measure $e_s$ over $R_{\text{sub}}$ in training objective~(\ref{obj:aspest_self_train}). In practice, we usually have $t_s \ll t_g$ and $t_u \ll t_g$. Also, we set $e_s \cdot p < e_f$, $n_s \ll \frac{n}{b} \cdot T \cdot e_f$ and $\frac{e_s+e_f}{c_e} \ll (e_f + e_s \cdot p) \cdot \frac{n}{b}$. So the complexity of ASPEST is $O\Big(N \cdot T \cdot \frac{n}{b} \cdot e_f \cdot t_g\Big)$. Suppose the size of $\Dtr$ is $n_{tr}$ and the number of source training epochs is $e_p$. Then, the complexity of source training is $O(\frac{n_{tr}}{b} \cdot e_p \cdot t_g)$. In practice, we usually have $N \cdot T \cdot n \cdot e_f \ll n_{tr} \cdot e_p$. 
Overall, the complexity of ASPEST is much smaller than that of source training. \section{Details of Experimental Setup} \label{app:exp-setup-detail} \subsection{Computing Infrastructure and Runtime} \label{app:compute-infra-runtime} We run all experiments with TensorFlow 2.0 on NVIDIA A100 GPUs on the Debian GNU/Linux 10 system. We report the total runtime of the proposed method ASPEST on each dataset in Table~\ref{tab:aspest-runtime}. Note that in our implementation, we train the models in the ensemble sequentially. However, it is possible to train the models in the ensemble in parallel, which can significantly reduce the runtime. With such an implementation, the inference latency of the ensemble can be as low as that of a single model. \begin{table}[htb] \centering \begin{tabular}{l|c} \toprule Dataset & Total Runtime \\ \hline MNIST$\to$SVHN & 24 min \\ CIFAR-10$\to$CINIC-10 & 1 hour \\ FMoW & 2 hours 48 min \\ Amazon Review & 1 hour 34 min \\ DomainNet (R$\to$C) & 2 hours 10 min \\ DomainNet (R$\to$P) & 1 hour 45 min \\ DomainNet (R$\to$S) & 1 hour 51 min \\ Otto & 18 min \\ \bottomrule \end{tabular} \caption[]{\small The runtime of ASPEST when the labeling budget $M=500$. We use the default hyper-parameters for ASPEST described in Section~\ref{sec:exp-setup}. } \label{tab:aspest-runtime} \end{table} \subsection{Datasets} \label{app:datasets} We describe the datasets used below. For all image datasets, we normalize the range of pixel values to [0,1]. \mypara{MNIST$\to$SVHN.} The source training dataset $\Dtr$ is MNIST~\citep{lecun1998mnist} while the target test dataset $U_X$ is SVHN~\citep{netzer2011reading}. MNIST consists of 28$\times$28 grayscale images of handwritten digits, containing 5,500 training images and 1,000 test images in total. We resize each image to 32$\times$32 resolution and convert it to a three-channel (colored) image. We use the training set of MNIST as $\Dtr$ and the test set of MNIST as the source validation dataset. 
SVHN consists of 32$\times$32 colored images of digits obtained from house numbers in Google Street View images. The training set has 73,257 images and the test set has 26,032 images. We use the test set of SVHN as $U_X$. \mypara{CIFAR-10$\to$CINIC-10.} The source training dataset $\Dtr$ is CIFAR-10~\citep{krizhevsky2009learning} while the target test dataset $U_X$ is CINIC-10~\citep{darlow2018cinic}. CIFAR-10 consists of 32$\times$32 colored images with ten classes (dogs, frogs, ships, trucks, etc.), each class consisting of 5,000 training images and 1,000 test images. We use the training set of CIFAR-10 as $\Dtr$ and the test set of CIFAR-10 as the source validation dataset. During training, we apply random horizontal flipping and random cropping with padding data augmentations to the training images. CINIC-10 is an extension of CIFAR-10 via the addition of downsampled ImageNet images. CINIC-10 has a total of 270,000 images equally split into training, validation, and test subsets. Each subset (90,000 images) contains the same ten classes as CIFAR-10, with 9,000 images per class per subset. We use a subset of the CINIC-10 test set containing 30,000 images as $U_X$. \mypara{FMoW.} We use the FMoW-WILDS dataset from~\citep{koh2021wilds}. FMoW-WILDS is based on the Functional Map of the World dataset~\citep{christie2018functional}, which collected and categorized high-resolution satellite images from over 200 countries based on the functional purpose of the buildings or land in the image, over the years 2002–2018. The task is multi-class classification, where the input $\bfx$ is an RGB satellite image, the label $y$ is one of 62 building or land use categories, and the domain $d$ represents both the year the image was taken as well as its geographical region (Africa, the Americas, Oceania, Asia, or Europe). The training set contains 76,863 images from the years 2002-2013. The In-Distribution (ID) validation set contains 11,483 images from the years 2002-2013. 
The OOD test set contains 22,108 images from the years 2016-2018. We resize each image to 96$\times$96 resolution to save computational cost. We use the training set as $\Dtr$ and the ID validation set as the source validation dataset. During training, we apply random horizontal flipping and random cropping with padding data augmentations to the training images. We use the OOD test set as $U_X$. \mypara{Amazon Review.} We use the Amazon Review WILDS dataset from~\citep{koh2021wilds}. The dataset comprises 539,502 customer reviews on Amazon taken from the Amazon Reviews dataset~\citep{ni2019justifying}. The task is multi-class sentiment classification, where the input $\bfx$ is the text of a review, the label $y$ is a corresponding star rating from 1 to 5, and the domain $d$ is the identifier of the reviewer who wrote the review. The training set contains 245,502 reviews from 1,252 reviewers. The In-Distribution (ID) validation set contains 46,950 reviews from 626 of the 1,252 reviewers in the training set. The Out-Of-Distribution (OOD) test set contains 100,050 reviews from another set of 1,334 reviewers, distinct from those of the training set. We use the training set as $\Dtr$ and the ID validation set as the source validation dataset. We use a subset of the OOD test set containing 22,500 reviews from 300 reviewers as $U_X$. \mypara{DomainNet.} DomainNet~\citep{peng2019moment} is a dataset of common objects in six different domains. All domains include 345 categories (classes) of objects such as bracelet, plane, bird, and cello. We use five domains from DomainNet including: (1) Real: photos and real-world images. The training set from the Real domain has 120,906 images while the test set has 52,041 images; (2) Clipart: a collection of clipart images. The training set from the Clipart domain has 33,525 images while the test set has 14,604 images; (3) Sketch: sketches of specific objects. 
The training set from the Sketch domain has 48,212 images while the test set has 20,916 images; (4) Painting: artistic depictions of objects in the form of paintings. The training set from the Painting domain has 50,416 images while the test set has 21,850 images; (5) Infograph: infographic images with specific objects. The training set from the Infograph domain has 36,023 images while the test set has 15,582 images. We resize each image from all domains to 96$\times$96 resolution to save computational cost. We use the training set from the Real domain as $\Dtr$ and the test set from the Real domain as the source validation dataset. During training, we apply random horizontal flipping and random cropping with padding data augmentations to the training images. We use the test sets from the three domains Clipart, Sketch, and Painting as three different $U_X$ for evaluation. So we evaluate three shifts: Real$\to$Clipart (R$\to$C), Real$\to$Sketch (R$\to$S), and Real$\to$Painting (R$\to$P). We use the remaining shift Real$\to$Infograph (R$\to$I) as a validation dataset for tuning the hyper-parameters. \mypara{Otto.} The Otto Group Product Classification Challenge~\citep{otto} is a tabular dataset hosted on Kaggle\footnote{URL: \url{https://kaggle.com/competitions/otto-group-product-classification-challenge}}. The task is to classify each product with $93$ features into $9$ categories. Each target category represents one of the most important product categories (e.g., fashion, electronics). It contains $61,878$ training data points. Since it only provides labels for the training data, we need to create the training, validation, and test sets. To create a test set that is from a different distribution than the training set, we apply the Local Outlier Factor (LOF)~\citep{breunig2000lof}, which is an unsupervised outlier detection method, to the Otto training data to identify a certain fraction (e.g., $0.2$) of outliers as the test set. 
Specifically, we apply the \textit{LocalOutlierFactor} function provided by scikit-learn~\citep{scikit} to the training data with a contamination of $0.2$ (the contamination parameter determines the proportion of outliers in the data set) to identify the outliers. We identify $12,376$ outlier data points and use them as the test set $U_X$. We then randomly split the remaining data into a training set $\Dtr$ with $43,314$ data points and a source validation set with $6,188$ data points. We show that the test set indeed has a distribution shift compared to the source validation set, which causes the model trained on the training set to have a drop in performance (see Table~\ref{tab:eval-source-trained-models} in Appendix~\ref{app:eval-source-trained-models}). \subsection{Details on Model Architectures and Training on Source Data} \label{app:arch-pre-train} On all datasets, we use the following supervised training objective for training models on the source training set $\Dtr$: \begin{align} \label{obj:general-pretrain} \min_{\theta} \quad \mathbb{E}_{(\bfx,y)\in \Dtr} \quad \ell_{CE}(\bfx, y;\theta) \end{align} where $\ell_{CE}$ is the cross-entropy loss and $\theta$ denotes the model parameters. On MNIST$\to$SVHN, we use a Convolutional Neural Network (CNN)~\citep{lecun1989handwritten} consisting of four convolutional layers followed by two fully connected layers with batch normalization and dropout layers. We train the model on the training set of MNIST for 20 epochs using the Adam optimizer~\citep{kingma2014adam} with a learning rate of $10^{-3}$ and a batch size of $128$. On CIFAR-10$\to$CINIC-10, we use the ResNet-20 network~\citep{he2016identity}. We train the model on the training set of CIFAR-10 for 200 epochs using the SGD optimizer with a learning rate of $0.1$, a momentum of $0.9$, and a batch size of $128$. The learning rate is multiplied by 0.1 at epochs 80, 120, and 160, and by 0.5 at epoch 180. 
On the FMoW dataset, we use the DenseNet-121 network~\citep{huang2017densely} pre-trained on ImageNet. We train the model further for $50$ epochs using the Adam optimizer with a learning rate of $10^{-4}$ and a batch size of $128$. On the Amazon Review dataset, we use the pre-trained RoBERTa base model~\citep{liu2019roberta} to extract the embedding of the input sentence for classification (i.e., RoBERTa's output for the [CLS] token) and then build an eight-layer fully connected neural network (also known as a multi-layer perceptron) with batch normalization, dropout layers and L2 regularization on top of the embedding. Note that we only update the parameters of the fully connected neural network during training; the parameters of the pre-trained RoBERTa base model remain frozen. We train the model for $200$ epochs using the Adam optimizer with a learning rate of $10^{-3}$ and a batch size of $128$. On the DomainNet dataset, we use the ResNet-50 network~\citep{he2016deep} pre-trained on ImageNet. We train the model further on the training set from the Real domain for 50 epochs using the Adam optimizer with a learning rate of $10^{-4}$ and a batch size of $128$. On the Otto dataset, we use a six-layer fully connected neural network (also known as a multi-layer perceptron) with batch normalization, dropout layers and L2 regularization. We train the model on the created training set for $200$ epochs using the Adam optimizer with a learning rate of $10^{-3}$ and a batch size of $128$. \subsection{Active learning hyper-parameters} \label{app:active-learning-hyper} During the active learning process, we fine-tune the model on the selected labeled test data. During fine-tuning, we do not apply any data augmentation to the test data. We use the same fine-tuning hyper-parameters for different methods to ensure a fair comparison. 
The optimizer used is the same as that in the source training stage (described in Appendix~\ref{app:arch-pre-train}). On MNIST$\to$SVHN, we use a learning rate of $10^{-3}$; on CIFAR-10$\to$CINIC-10, we use a learning rate of $5\times 10^{-3}$; on FMoW, we use a learning rate of $10^{-4}$; on Amazon Review, we use a learning rate of $10^{-3}$; on DomainNet, we use a learning rate of $10^{-4}$; on Otto, we use a learning rate of $10^{-3}$. On all datasets, we fine-tune the model for at least $50$ epochs and up to $200$ epochs with a batch size of $128$ and early stopping with a patience of $10$ epochs. \section{Additional Experimental Results} \label{app:additional-exp-results} \subsection{Evaluate Source-Trained Models} \label{app:eval-source-trained-models} In this section, we evaluate the accuracy of the source-trained models on the source validation dataset and the target test dataset $U_X$. The models are trained on the source training set $\Dtr$ (refer to Appendix~\ref{app:arch-pre-train} for the details of source training). The source validation data are randomly sampled from the training data distribution while the target test data are sampled from a different distribution than the training data distribution. The results in Table~\ref{tab:eval-source-trained-models} show that the models trained on $\Dtr$ always suffer a drop in accuracy when evaluated on the target test dataset $U_X$. \begin{table}[htb] \centering \begin{tabular}{l|cc} \toprule Dataset & Source Accuracy & Target Accuracy \\ \hline MNIST$\to$SVHN & 99.40 & 24.68 \\ CIFAR-10$\to$CINIC-10 & 90.46 & 71.05 \\ FMoW & 46.25 & 38.01 \\ Amazon Review & 65.39 & 61.40 \\ DomainNet (R$\to$C) & 63.45 & 33.37 \\ DomainNet (R$\to$P) & 63.45 & 26.29 \\ DomainNet (R$\to$S) & 63.45 & 16.00 \\ Otto & 80.72 & 66.09 \\ \bottomrule \end{tabular} \caption[]{\small Results of evaluating the accuracy of the source-trained models on the source validation dataset and the target test dataset $U_X$. 
All numbers are percentages. } \label{tab:eval-source-trained-models} \end{table} \subsection{Evaluate Softmax Response with Various Active Learning Methods} \label{app:eval-sr-active-learning} To see whether combining existing selective prediction and active learning approaches can solve the active selective prediction problem, we evaluate the existing selective prediction method Softmax Response (SR) with active learning methods based on uncertainty or diversity. The results in Table~\ref{tab:active-selective-prediction-challenges} show that the methods based on uncertainty sampling (SR+Confidence, SR+Entropy and SR+Margin) achieve relatively high accuracy of $f$, but suffer from the overconfidence issue (i.e., mis-classification with high confidence). The method based on diversity sampling (SR+kCG) does not have the overconfidence issue, but suffers from low accuracy of $f$. Also, the hybrid methods based on uncertainty and diversity sampling (SR+CLUE and SR+BADGE) still suffer from the overconfidence issue. In contrast, the proposed method ASPEST achieves much higher accuracy of $f$, effectively alleviates the overconfidence issue, and significantly improves the selective prediction performance. 
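The overconfidence ratio used in Table~\ref{tab:active-selective-prediction-challenges} can be computed as in the following sketch. This is our reconstruction from the metric's description; in particular, taking all test inputs (rather than only the mis-classified ones) as the denominator is our assumption, and the names are ours:

```python
import numpy as np

def overconfidence_ratio(probs, labels, thresh=1.0):
    """Fraction of test inputs mis-classified with confidence >= thresh.

    probs: (n, K) predicted class probabilities; labels: (n,) true labels.
    Our reconstruction of the metric; the denominator choice (all n test
    inputs) is an assumption, not confirmed by the paper.
    """
    preds = probs.argmax(axis=1)          # predicted classes
    conf = probs.max(axis=1)              # prediction confidence
    wrong = preds != np.asarray(labels)   # mis-classification flags
    return float((wrong & (conf >= thresh)).mean())
```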
\begin{table*}[htb] \centering \begin{tabular}{l|ccc} \toprule Method & Accuracy of $f \uparrow$ & Overconfidence ratio $\downarrow$ & AUC$\uparrow$ \\ \hline SR+Confidence & 45.29$\pm$3.39 & 16.91$\pm$2.24 & 64.14$\pm$2.83 \\ SR+Entropy & 45.78$\pm$6.36 & 36.84$\pm$18.96 & 65.88$\pm$4.74 \\ SR+Margin & 58.10$\pm$0.55 & 13.18$\pm$1.85 & 76.79$\pm$0.45 \\ SR+kCG & 32.68$\pm$3.87 & \textbf{0.04}$\pm$0.01 & 48.83$\pm$7.21 \\ SR+CLUE & 55.22$\pm$2.27 & 9.47$\pm$0.94 & 73.15$\pm$2.68 \\ SR+BADGE & 56.55$\pm$1.62 & 8.37$\pm$2.56 & 76.06$\pm$1.63 \\ ASPEST (ours) & \textbf{71.82}$\pm$1.49 & 0.10$\pm$0.02 & \textbf{88.84}$\pm$1.02 \\ \bottomrule \end{tabular} \caption[]{\small Evaluating the Softmax Response (SR) method with various active learning methods and the proposed ASPEST on MNIST$\to$SVHN. The experimental setup is described in Section~\ref{sec:exp-setup}. The labeling budget $M$ is 100. The overconfidence ratio is the ratio of \textit{mis-classified} unlabeled test inputs that have confidence $\geq 1$ (the highest confidence). The mean and std of each metric over three random runs are reported (mean$\pm$std). All numbers are percentages. \textbf{Bold} numbers are superior results. } \label{tab:active-selective-prediction-challenges} \end{table*} \subsection{Complete Evaluation Results} \label{app:complete-results} We give complete experimental results for the baselines and the proposed method ASPEST on all datasets in this section. We repeat each experiment three times with different random seeds and report the mean and standard deviation (std) values. 
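For clarity, the selective-prediction metrics reported in the tables that follow ($cov|acc\geq$ a target accuracy, $acc|cov\geq$ a target coverage, and the AUC of the accuracy-coverage curve) can be computed from per-example correctness flags and selection scores. The sketch below is our reconstruction of these standard metrics, not code from the paper's implementation, and all names are ours:

```python
import numpy as np

def accuracy_coverage_curve(correct, scores):
    """Selective accuracy at each coverage level, accepting the
    highest-scored inputs first.

    correct: (n,) 0/1 flags of whether f(x_i) is correct; scores: (n,) g(x_i).
    """
    order = np.argsort(-np.asarray(scores))        # most confident first
    c = np.asarray(correct, dtype=float)[order]
    k = np.arange(1, len(c) + 1)
    coverage = k / len(c)                          # fraction accepted
    accuracy = np.cumsum(c) / k                    # accuracy on accepted set
    return coverage, accuracy

def auc_acc_cov(correct, scores):
    """Area under the accuracy-coverage curve (trapezoid rule)."""
    cov, acc = accuracy_coverage_curve(correct, scores)
    return float(np.sum((cov[1:] - cov[:-1]) * (acc[1:] + acc[:-1]) / 2))

def max_cov_at_acc(correct, scores, target):
    """Maximum coverage whose selective accuracy is at least `target`."""
    cov, acc = accuracy_coverage_curve(correct, scores)
    feasible = cov[acc >= target]
    return float(feasible.max()) if feasible.size else 0.0

def max_acc_at_cov(correct, scores, target):
    """Maximum selective accuracy achievable with coverage >= `target`."""
    cov, acc = accuracy_coverage_curve(correct, scores)
    return float(acc[cov >= target].max())
```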
These results are shown in Table~\ref{tab:mnist-complete-main-results} (MNIST$\to$SVHN), Table~\ref{tab:cifar10-complete-main-results} (CIFAR-10$\to$CINIC-10), Table~\ref{tab:fmow-complete-main-results} (FMoW), Table~\ref{tab:amazon-review-complete-main-results} (Amazon Review), Table~\ref{tab:domainnet-rc-complete-main-results} (DomainNet R$\to$C), Table~\ref{tab:domainnet-rp-complete-main-results} (DomainNet R$\to$P), Table~\ref{tab:domainnet-rs-complete-main-results} (DomainNet R$\to$S) and Table~\ref{tab:otto-complete-main-results} (Otto). Our results show that the proposed method ASPEST consistently outperforms the baselines across different image, text and structured datasets. \begin{table*}[htb] \centering \begin{adjustbox}{width=\columnwidth,center} \begin{tabular}{l|ccc|ccc|ccc} \toprule Dataset & \multicolumn{9}{c}{MNIST$\to$SVHN} \\ \hline Metric & \multicolumn{3}{c|}{$cov|acc\geq 90\%$ $\uparrow$} & \multicolumn{3}{c|}{$acc|cov\geq 90\%$ $\uparrow$} & \multicolumn{3}{c}{AUC $\uparrow$} \\ \hline Labeling Budget & 100 & 500 & 1000 & 100 & 500 & 1000 & 100 & 500 & 1000 \\ \hline \hline SR+Uniform & 0.00$\pm$0.0 & 51.46$\pm$3.7 & 75.57$\pm$0.9 & 58.03$\pm$1.5 & 76.69$\pm$1.2 & 84.39$\pm$0.2 & 74.08$\pm$1.5 & 88.80$\pm$0.8 & 93.57$\pm$0.2 \\ \hline SR+Confidence & 0.00$\pm$0.0 & 55.32$\pm$5.1 & 82.22$\pm$1.3 & 47.66$\pm$3.4 & 79.02$\pm$0.7 & 87.19$\pm$0.4 & 64.14$\pm$2.8 & 89.93$\pm$0.6 & 94.62$\pm$0.2 \\ \hline SR+Entropy & 0.00$\pm$0.0 & 0.00$\pm$0.0 & 75.08$\pm$2.4 & 47.93$\pm$7.0 & 77.09$\pm$1.0 & 84.81$\pm$0.7 & 65.88$\pm$4.7 & 88.19$\pm$0.8 & 93.37$\pm$0.5 \\ \hline SR+Margin & 0.00$\pm$0.0 & 63.60$\pm$2.7 & 82.19$\pm$0.3 & 61.39$\pm$0.5 & 80.96$\pm$0.9 & 86.97$\pm$0.2 & 76.79$\pm$0.5 & 91.24$\pm$0.5 & 94.82$\pm$0.1 \\ \hline SR+kCG & 2.52$\pm$1.3 & 23.04$\pm$0.3 & 38.97$\pm$2.6 & 34.57$\pm$4.4 & 52.76$\pm$1.1 & 64.34$\pm$4.8 & 48.83$\pm$7.2 & 73.65$\pm$1.0 & 83.16$\pm$2.0 \\ \hline SR+CLUE & 0.00$\pm$0.0 & 62.03$\pm$2.4 & 81.29$\pm$1.1 &
57.35$\pm$1.9 & 79.55$\pm$0.8 & 86.28$\pm$0.5 & 72.72$\pm$1.9 & 90.98$\pm$0.5 & 94.99$\pm$0.2 \\ \hline SR+BADGE & 0.00$\pm$0.0 & 62.55$\pm$4.4 & 82.39$\pm$2.8 & 59.82$\pm$1.7 & 79.49$\pm$1.6 & 86.96$\pm$0.9 & 76.06$\pm$1.6 & 91.09$\pm$0.9 & 95.16$\pm$0.6 \\ \hline \hline DE+Uniform & 24.71$\pm$5.6 & 68.98$\pm$1.6 & 83.67$\pm$0.1 & 63.22$\pm$1.7 & 81.67$\pm$0.4 & 87.32$\pm$0.1 & 79.36$\pm$1.7 & 92.47$\pm$0.2 & 95.48$\pm$0.0 \\ \hline DE+Entropy & 6.24$\pm$8.8 & 63.30$\pm$6.5 & 84.62$\pm$1.5 & 56.61$\pm$0.6 & 80.16$\pm$2.0 & 88.05$\pm$0.5 & 72.51$\pm$1.5 & 91.21$\pm$1.4 & 95.45$\pm$0.5 \\ \hline DE+Confidence & 14.92$\pm$5.1 & 67.87$\pm$1.4 & 89.41$\pm$0.3 & 61.11$\pm$2.9 & 81.80$\pm$0.5 & 89.75$\pm$0.1 & 75.85$\pm$3.0 & 92.16$\pm$0.2 & 96.19$\pm$0.1 \\ \hline DE+Margin & 21.59$\pm$3.8 & 77.84$\pm$2.8 & 92.75$\pm$0.3 & 62.88$\pm$1.2 & 85.11$\pm$1.1 & 91.17$\pm$0.1 & 78.59$\pm$1.4 & 94.31$\pm$0.6 & 97.00$\pm$0.0 \\ \hline DE+Avg-KLD & 10.98$\pm$4.6 & 61.45$\pm$3.4 & 88.06$\pm$2.2 & 54.80$\pm$1.6 & 78.21$\pm$1.6 & 89.23$\pm$0.9 & 71.67$\pm$2.2 & 90.92$\pm$0.8 & 96.23$\pm$0.4 \\ \hline DE+CLUE & 22.34$\pm$1.4 & 69.23$\pm$1.9 & 82.80$\pm$1.0 & 59.47$\pm$1.3 & 81.05$\pm$0.9 & 86.78$\pm$0.4 & 76.88$\pm$1.0 & 92.70$\pm$0.5 & 95.56$\pm$0.2 \\ \hline DE+BADGE & 22.02$\pm$4.5 & 72.31$\pm$1.2 & 88.23$\pm$0.4 & 61.23$\pm$1.9 & 82.69$\pm$0.5 & 89.15$\pm$0.2 & 77.65$\pm$1.9 & 93.38$\pm$0.2 & 96.51$\pm$0.1 \\ \hline \hline ASPEST (ours) & \textbf{52.10}$\pm$4.0 & \textbf{89.22}$\pm$0.9 & \textbf{98.70}$\pm$0.4 & \textbf{76.10}$\pm$1.5 & \textbf{89.62}$\pm$0.4 & \textbf{93.92}$\pm$0.3 & \textbf{88.84}$\pm$1.0 & \textbf{96.62}$\pm$0.2 & \textbf{98.06}$\pm$0.1 \\ \bottomrule \end{tabular} \end{adjustbox} \caption[]{\small Results of comparing ASPEST to the baselines on MNIST$\to$SVHN. The mean and std of each metric over three random runs are reported (mean$\pm$std). All numbers are percentages. \textbf{Bold} numbers are superior results. 
} \label{tab:mnist-complete-main-results} \end{table*} \begin{table*}[htb] \centering \begin{adjustbox}{width=\columnwidth,center} \begin{tabular}{l|ccc|ccc|ccc} \toprule Dataset & \multicolumn{9}{c}{CIFAR-10$\to$CINIC-10} \\ \hline Metric & \multicolumn{3}{c|}{$cov|acc\geq 90\%$ $\uparrow$} & \multicolumn{3}{c|}{$acc|cov\geq 90\%$ $\uparrow$} & \multicolumn{3}{c}{AUC $\uparrow$} \\ \hline Labeling Budget & 500 & 1000 & 2000 & 500 & 1000 & 2000 & 500 & 1000 & 2000 \\ \hline \hline SR+Uniform & 57.43$\pm$0.2 & 57.15$\pm$0.6 & 58.37$\pm$0.7 & 75.67$\pm$0.2 & 75.69$\pm$0.1 & 76.11$\pm$0.3 & 89.77$\pm$0.0 & 89.81$\pm$0.1 & 90.09$\pm$0.2 \\ \hline SR+Confidence & 57.96$\pm$0.6 & 57.05$\pm$0.7 & 61.11$\pm$1.1 & 76.49$\pm$0.2 & 76.87$\pm$0.2 & 78.77$\pm$0.4 & 90.00$\pm$0.2 & 89.92$\pm$0.2 & 90.91$\pm$0.3 \\ \hline SR+Entropy & 57.78$\pm$0.7 & 57.07$\pm$1.4 & 61.07$\pm$0.4 & 76.57$\pm$0.3 & 76.71$\pm$0.5 & 78.85$\pm$0.2 & 90.01$\pm$0.2 & 89.94$\pm$0.3 & 90.88$\pm$0.0 \\ \hline SR+Margin & 57.72$\pm$0.8 & 57.98$\pm$0.7 & 61.71$\pm$0.2 & 76.24$\pm$0.2 & 76.90$\pm$0.2 & 78.42$\pm$0.2 & 89.95$\pm$0.2 & 90.14$\pm$0.1 & 91.02$\pm$0.0 \\ \hline SR+kCG & 57.90$\pm$0.5 & 57.81$\pm$0.7 & 60.36$\pm$0.3 & 75.59$\pm$0.1 & 75.73$\pm$0.2 & 76.68$\pm$0.2 & 89.78$\pm$0.1 & 89.79$\pm$0.2 & 90.41$\pm$0.2 \\ \hline SR+CLUE & 57.29$\pm$0.5 & 58.89$\pm$0.5 & 62.28$\pm$0.2 & 75.74$\pm$0.2 & 76.68$\pm$0.3 & 78.10$\pm$0.2 & 89.67$\pm$0.2 & 90.15$\pm$0.1 & 91.03$\pm$0.1 \\ \hline SR+BADGE & 58.58$\pm$0.6 & 58.63$\pm$0.3 & 61.95$\pm$0.4 & 76.33$\pm$0.5 & 76.58$\pm$0.1 & 78.26$\pm$0.2 & 90.05$\pm$0.2 & 90.16$\pm$0.1 & 90.99$\pm$0.0 \\ \hline \hline DE+Uniform & 58.06$\pm$0.3 & 58.72$\pm$0.1 & 59.54$\pm$0.3 & 76.65$\pm$0.1 & 77.06$\pm$0.2 & 77.46$\pm$0.1 & 90.26$\pm$0.1 & 90.45$\pm$0.1 & 90.73$\pm$0.1 \\ \hline DE+Entropy & 58.91$\pm$0.6 & 60.96$\pm$0.2 & 63.85$\pm$0.2 & 77.66$\pm$0.1 & 79.14$\pm$0.1 & 80.82$\pm$0.2 & 90.55$\pm$0.1 & 91.16$\pm$0.1 & 91.89$\pm$0.0 \\ \hline DE+Confidence &
58.53$\pm$0.3 & 61.03$\pm$0.6 & 64.42$\pm$0.2 & 77.73$\pm$0.2 & 79.00$\pm$0.1 & 80.87$\pm$0.0 & 90.53$\pm$0.0 & 91.11$\pm$0.1 & 91.96$\pm$0.0 \\ \hline DE+Margin & 58.76$\pm$0.5 & 61.60$\pm$0.5 & 64.92$\pm$0.5 & 77.61$\pm$0.2 & 78.91$\pm$0.1 & 80.59$\pm$0.1 & 90.56$\pm$0.1 & 91.11$\pm$0.1 & 91.98$\pm$0.1 \\ \hline DE+Avg-KLD & 59.99$\pm$0.6 & 62.05$\pm$0.3 & 65.02$\pm$0.5 & 77.84$\pm$0.1 & 79.15$\pm$0.0 & 81.04$\pm$0.1 & 90.74$\pm$0.1 & 91.30$\pm$0.1 & 92.10$\pm$0.1 \\ \hline DE+CLUE & 59.27$\pm$0.1 & 61.16$\pm$0.4 & 64.42$\pm$0.0 & 77.19$\pm$0.1 & 78.37$\pm$0.2 & 79.44$\pm$0.1 & 90.44$\pm$0.1 & 91.03$\pm$0.1 & 91.74$\pm$0.0 \\ \hline DE+BADGE & 59.37$\pm$0.4 & 61.61$\pm$0.1 & 64.53$\pm$0.4 & 77.13$\pm$0.1 & 78.33$\pm$0.2 & 79.44$\pm$0.3 & 90.49$\pm$0.1 & 91.12$\pm$0.0 & 91.78$\pm$0.1 \\ \hline \hline ASPEST (ours) & \textbf{60.38}$\pm$0.3 & \textbf{63.34}$\pm$0.2 & \textbf{66.81}$\pm$0.3 & \textbf{78.23}$\pm$0.1 & \textbf{79.49}$\pm$0.1 & \textbf{81.25}$\pm$0.1 & \textbf{90.95}$\pm$0.0 & \textbf{91.60}$\pm$0.0 & \textbf{92.33}$\pm$0.1 \\ \bottomrule \end{tabular} \end{adjustbox} \caption[]{\small Results of comparing ASPEST to the baselines on CIFAR-10$\to$CINIC-10. The mean and std of each metric over three random runs are reported (mean$\pm$std). All numbers are percentages. \textbf{Bold} numbers are superior results. 
} \label{tab:cifar10-complete-main-results} \end{table*} \begin{table*}[htb] \centering \begin{adjustbox}{width=\columnwidth,center} \begin{tabular}{l|ccc|ccc|ccc} \toprule Dataset & \multicolumn{9}{c}{FMoW} \\ \hline Metric & \multicolumn{3}{c|}{$cov|acc\geq 70\%$ $\uparrow$} & \multicolumn{3}{c|}{$acc|cov\geq 70\%$ $\uparrow$} & \multicolumn{3}{c}{AUC $\uparrow$} \\ \hline Labeling Budget & 500 & 1000 & 2000 & 500 & 1000 & 2000 & 500 & 1000 & 2000 \\ \hline \hline SR+Uniform & 38.50$\pm$0.7 & 42.00$\pm$0.5 & 52.34$\pm$1.1 & 51.76$\pm$0.7 & 54.27$\pm$0.2 & 60.31$\pm$0.7 & 65.75$\pm$0.4 & 67.67$\pm$0.3 & 72.73$\pm$0.3 \\ \hline SR+Confidence & 37.34$\pm$0.3 & 42.28$\pm$1.2 & 53.72$\pm$0.7 & 52.24$\pm$0.1 & 55.52$\pm$0.5 & 61.76$\pm$0.4 & 65.57$\pm$0.1 & 68.03$\pm$0.5 & 73.14$\pm$0.5 \\ \hline SR+Entropy & 37.42$\pm$0.3 & 42.08$\pm$0.2 & 51.18$\pm$0.4 & 51.74$\pm$0.4 & 54.94$\pm$0.2 & 60.62$\pm$0.2 & 65.31$\pm$0.2 & 68.00$\pm$0.1 & 71.99$\pm$0.2 \\ \hline SR+Margin & 38.40$\pm$1.4 & 44.67$\pm$0.7 & 55.68$\pm$1.5 & 52.88$\pm$0.3 & 56.66$\pm$0.4 & 62.98$\pm$0.7 & 66.11$\pm$0.6 & 69.12$\pm$0.4 & 73.86$\pm$0.5 \\ \hline SR+kCG & 36.50$\pm$0.8 & 39.76$\pm$1.2 & 45.87$\pm$0.6 & 49.36$\pm$0.7 & 51.45$\pm$0.5 & 55.47$\pm$0.1 & 64.34$\pm$0.5 & 66.21$\pm$0.6 & 69.63$\pm$0.2 \\ \hline SR+CLUE & 38.65$\pm$0.7 & 44.50$\pm$1.8 & 54.71$\pm$0.5 & 52.23$\pm$0.4 & 55.54$\pm$1.0 & 61.13$\pm$0.4 & 65.78$\pm$0.3 & 68.76$\pm$0.9 & 73.80$\pm$0.1 \\ \hline SR+BADGE & 40.47$\pm$1.5 & 45.65$\pm$1.2 & 57.59$\pm$0.4 & 53.08$\pm$1.0 & 56.63$\pm$0.3 & 63.57$\pm$0.2 & 66.74$\pm$0.8 & 69.43$\pm$0.6 & 74.76$\pm$0.2 \\ \hline \hline DE+Uniform & 44.74$\pm$0.4 & 51.57$\pm$1.1 & 61.92$\pm$0.4 & 56.39$\pm$0.5 & 60.01$\pm$0.5 & 65.74$\pm$0.2 & 69.44$\pm$0.3 & 72.48$\pm$0.5 & 77.02$\pm$0.1 \\ \hline DE+Entropy & 43.76$\pm$0.3 & 50.52$\pm$1.4 & 62.73$\pm$0.4 & 56.29$\pm$0.3 & 60.31$\pm$0.3 & 66.53$\pm$0.2 & 69.02$\pm$0.1 & 72.10$\pm$0.5 & 76.65$\pm$0.2 \\ \hline DE+Confidence & 45.23$\pm$0.6 &
50.11$\pm$0.9 & 64.29$\pm$0.3 & 57.18$\pm$0.4 & 60.46$\pm$0.3 & 67.46$\pm$0.0 & 69.80$\pm$0.3 & 72.11$\pm$0.4 & 77.37$\pm$0.1 \\ \hline DE+Margin & 46.35$\pm$0.6 & 54.79$\pm$1.3 & 69.70$\pm$0.8 & 57.84$\pm$0.3 & 62.43$\pm$0.5 & 69.87$\pm$0.4 & 70.18$\pm$0.3 & 73.62$\pm$0.3 & 78.88$\pm$0.4 \\ \hline DE+Avg-KLD & 46.29$\pm$0.3 & 53.63$\pm$0.8 & 68.18$\pm$0.9 & 57.75$\pm$0.4 & 61.60$\pm$0.3 & 69.11$\pm$0.4 & 70.16$\pm$0.1 & 73.09$\pm$0.2 & 78.48$\pm$0.3 \\ \hline DE+CLUE & 45.22$\pm$0.2 & 49.97$\pm$0.3 & 58.05$\pm$0.5 & 56.39$\pm$0.1 & 59.05$\pm$0.1 & 63.23$\pm$0.4 & 69.53$\pm$0.0 & 71.95$\pm$0.1 & 75.72$\pm$0.3 \\ \hline DE+BADGE & 47.39$\pm$0.7 & 53.83$\pm$0.7 & 66.45$\pm$0.8 & 57.71$\pm$0.4 & 61.16$\pm$0.2 & 68.13$\pm$0.4 & 70.59$\pm$0.4 & 73.40$\pm$0.3 & 78.66$\pm$0.1 \\ \hline \hline ASPEST (ours) & \textbf{53.05}$\pm$0.4 & \textbf{59.86}$\pm$0.4 & \textbf{76.52}$\pm$0.6 & \textbf{61.18}$\pm$0.2 & \textbf{65.18}$\pm$0.2 & \textbf{72.86}$\pm$0.3 & \textbf{71.12}$\pm$0.2 & \textbf{74.25}$\pm$0.2 & \textbf{79.93}$\pm$0.1 \\ \bottomrule \end{tabular} \end{adjustbox} \caption[]{\small Results of comparing ASPEST to the baselines on FMoW. The mean and std of each metric over three random runs are reported (mean$\pm$std). All numbers are percentages. \textbf{Bold} numbers are superior results. 
} \label{tab:fmow-complete-main-results} \end{table*} \begin{table*}[htb] \centering \begin{adjustbox}{width=\columnwidth,center} \begin{tabular}{l|ccc|ccc|ccc} \toprule Dataset & \multicolumn{9}{c}{Amazon Review} \\ \hline Metric & \multicolumn{3}{c|}{$cov|acc\geq 80\%$ $\uparrow$} & \multicolumn{3}{c|}{$acc|cov\geq 80\%$ $\uparrow$} & \multicolumn{3}{c}{AUC $\uparrow$} \\ \hline Labeling Budget & 500 & 1000 & 2000 & 500 & 1000 & 2000 & 500 & 1000 & 2000 \\ \hline \hline SR+Uniform & 13.71$\pm$11.3 & 24.10$\pm$5.3 & 24.87$\pm$2.6 & 65.13$\pm$0.8 & 66.33$\pm$0.6 & 66.26$\pm$0.3 & 72.71$\pm$1.5 & 73.64$\pm$0.7 & 73.53$\pm$0.7 \\ \hline SR+Confidence & 11.28$\pm$8.9 & 17.96$\pm$4.0 & 33.19$\pm$1.4 & 65.15$\pm$0.7 & 66.29$\pm$0.4 & 68.94$\pm$0.1 & 72.89$\pm$0.7 & 73.25$\pm$0.7 & 76.17$\pm$0.2 \\ \hline SR+Entropy & 5.55$\pm$7.8 & 13.32$\pm$9.5 & 25.47$\pm$1.8 & 65.11$\pm$1.1 & 66.56$\pm$0.7 & 67.31$\pm$0.7 & 71.96$\pm$1.6 & 72.53$\pm$1.1 & 74.19$\pm$0.5 \\ \hline SR+Margin & 14.48$\pm$10.9 & 22.61$\pm$4.2 & 28.35$\pm$6.1 & 65.75$\pm$0.5 & 66.31$\pm$0.4 & 68.15$\pm$0.4 & 73.25$\pm$1.0 & 73.65$\pm$0.5 & 75.17$\pm$0.8 \\ \hline SR+kCG & 20.02$\pm$11.0 & 17.02$\pm$12.2 & 29.08$\pm$4.2 & 64.03$\pm$3.1 & 66.17$\pm$0.5 & 66.63$\pm$1.0 & 72.34$\pm$3.2 & 74.35$\pm$0.7 & 74.49$\pm$1.0 \\ \hline SR+CLUE & 4.15$\pm$5.9 & 25.15$\pm$4.9 & 31.88$\pm$2.1 & 66.17$\pm$0.4 & 66.30$\pm$0.4 & 67.12$\pm$0.7 & 73.43$\pm$0.4 & 74.07$\pm$0.7 & 75.29$\pm$0.9 \\ \hline SR+BADGE & 22.58$\pm$0.4 & 23.78$\pm$6.4 & 30.71$\pm$4.6 & 66.29$\pm$0.4 & 66.31$\pm$0.6 & 68.58$\pm$0.7 & 73.80$\pm$0.6 & 74.00$\pm$1.0 & 75.76$\pm$0.8 \\ \hline \hline DE+Uniform & 34.35$\pm$1.4 & 33.15$\pm$1.1 & 36.55$\pm$1.8 & 68.13$\pm$0.4 & 68.12$\pm$0.6 & 68.88$\pm$0.2 & 76.20$\pm$0.3 & 76.16$\pm$0.4 & 77.07$\pm$0.3 \\ \hline DE+Entropy & 31.74$\pm$1.4 & 36.29$\pm$1.6 & 40.33$\pm$1.7 & 68.19$\pm$0.3 & 69.44$\pm$0.2 & 71.27$\pm$0.3 & 75.98$\pm$0.4 & 77.10$\pm$0.3 & 78.53$\pm$0.3 \\ \hline DE+Confidence & 35.12$\pm$1.8 &
34.48$\pm$1.4 & 40.46$\pm$0.5 & 69.07$\pm$0.3 & 69.47$\pm$0.2 & 71.08$\pm$0.2 & 76.63$\pm$0.2 & 76.87$\pm$0.3 & 78.27$\pm$0.1 \\ \hline DE+Margin & 33.42$\pm$1.3 & 35.03$\pm$1.3 & 41.20$\pm$0.4 & 68.45$\pm$0.3 & 69.30$\pm$0.2 & 70.88$\pm$0.1 & 76.18$\pm$0.2 & 76.91$\pm$0.3 & 78.31$\pm$0.1 \\ \hline DE+Avg-KLD & 33.03$\pm$1.5 & 38.55$\pm$3.2 & 41.75$\pm$1.8 & 68.63$\pm$0.3 & 69.95$\pm$0.4 & 71.10$\pm$0.3 & 76.21$\pm$0.4 & 77.62$\pm$0.6 & 78.62$\pm$0.3 \\ \hline DE+CLUE & 33.92$\pm$3.0 & 35.27$\pm$1.4 & 34.83$\pm$3.1 & 68.09$\pm$0.3 & 68.07$\pm$0.3 & 68.40$\pm$0.6 & 76.27$\pm$0.6 & 76.65$\pm$0.3 & 76.69$\pm$0.7 \\ \hline DE+BADGE & 32.23$\pm$3.7 & 36.18$\pm$1.5 & 40.58$\pm$3.3 & 68.34$\pm$0.4 & 68.87$\pm$0.2 & 70.29$\pm$0.3 & 76.13$\pm$0.7 & 77.09$\pm$0.2 & 78.44$\pm$0.5 \\ \hline \hline ASPEST (ours) & \textbf{38.44}$\pm$0.7 & \textbf{40.96}$\pm$0.8 & \textbf{45.77}$\pm$0.1 & \textbf{69.31}$\pm$0.3 & \textbf{70.17}$\pm$0.2 & \textbf{71.60}$\pm$0.2 & \textbf{77.69}$\pm$0.1 & \textbf{78.35}$\pm$0.2 & \textbf{79.51}$\pm$0.2 \\ \bottomrule \end{tabular} \end{adjustbox} \caption[]{\small Results of comparing ASPEST to the baselines on Amazon Review. The mean and std of each metric over three random runs are reported (mean$\pm$std). All numbers are percentages. \textbf{Bold} numbers are superior results. 
} \label{tab:amazon-review-complete-main-results} \end{table*} \begin{table*}[htb] \centering \begin{adjustbox}{width=\columnwidth,center} \begin{tabular}{l|ccc|ccc|ccc} \toprule Dataset & \multicolumn{9}{c}{DomainNet R$\to$C (easy)} \\ \hline Metric & \multicolumn{3}{c|}{$cov|acc\geq 80\%$ $\uparrow$} & \multicolumn{3}{c|}{$acc|cov\geq 80\%$ $\uparrow$} & \multicolumn{3}{c}{AUC $\uparrow$} \\ \hline Labeling Budget & 500 & 1000 & 2000 & 500 & 1000 & 2000 & 500 & 1000 & 2000 \\ \hline \hline SR+Uniform & 25.56$\pm$0.6 & 27.68$\pm$0.8 & 29.86$\pm$0.0 & 43.63$\pm$0.4 & 45.57$\pm$0.3 & 47.27$\pm$0.4 & 63.31$\pm$0.4 & 65.11$\pm$0.5 & 66.70$\pm$0.2 \\ \hline SR+Confidence & 25.96$\pm$0.2 & 27.80$\pm$1.2 & 32.51$\pm$1.3 & 44.90$\pm$0.8 & 47.26$\pm$0.4 & 52.04$\pm$0.8 & 64.20$\pm$0.6 & 65.88$\pm$0.6 & 69.70$\pm$0.7 \\ \hline SR+Entropy & 25.44$\pm$1.0 & 27.79$\pm$0.4 & 33.51$\pm$1.1 & 44.46$\pm$0.7 & 46.96$\pm$0.3 & 52.25$\pm$0.5 & 63.52$\pm$0.6 & 65.72$\pm$0.2 & 70.03$\pm$0.5 \\ \hline SR+Margin & 26.28$\pm$1.2 & 27.77$\pm$1.0 & 32.92$\pm$0.4 & 45.24$\pm$1.0 & 47.12$\pm$0.7 & 52.29$\pm$0.4 & 64.37$\pm$0.8 & 65.91$\pm$0.6 & 70.01$\pm$0.4 \\ \hline SR+kCG & 21.12$\pm$0.3 & 21.79$\pm$0.4 & 23.43$\pm$0.5 & 39.19$\pm$0.1 & 40.59$\pm$0.4 & 41.11$\pm$0.3 & 58.88$\pm$0.0 & 60.11$\pm$0.4 & 60.89$\pm$0.1 \\ \hline SR+CLUE & 27.17$\pm$0.8 & 29.78$\pm$0.8 & 34.82$\pm$0.6 & 44.57$\pm$0.7 & 46.79$\pm$0.1 & 49.70$\pm$0.3 & 64.38$\pm$0.6 & 66.47$\pm$0.3 & 69.59$\pm$0.1 \\ \hline SR+BADGE & 27.78$\pm$0.8 & 30.78$\pm$0.6 & 36.00$\pm$0.6 & 45.36$\pm$0.6 & 48.43$\pm$0.6 & 53.00$\pm$0.4 & 64.90$\pm$0.5 & 67.56$\pm$0.4 & 71.39$\pm$0.4 \\ \hline \hline DE+Uniform & 30.82$\pm$0.8 & 33.05$\pm$0.4 & 36.80$\pm$0.2 & 48.19$\pm$0.3 & 50.09$\pm$0.3 & 52.98$\pm$0.5 & 67.60$\pm$0.4 & 69.31$\pm$0.3 & 71.64$\pm$0.4 \\ \hline DE+Entropy & 29.13$\pm$0.9 & 34.07$\pm$0.3 & 40.82$\pm$0.3 & 48.67$\pm$0.4 & 51.66$\pm$0.2 & 57.81$\pm$0.2 & 67.48$\pm$0.3 & 70.05$\pm$0.2 & 74.64$\pm$0.2 \\ \hline DE+Confidence
& 29.90$\pm$0.8 & 33.73$\pm$0.2 & 40.80$\pm$0.2 & 48.60$\pm$0.3 & 52.03$\pm$0.3 & 58.43$\pm$0.1 & 67.45$\pm$0.3 & 70.19$\pm$0.2 & 74.80$\pm$0.1 \\ \hline DE+Margin & 31.82$\pm$1.3 & 35.68$\pm$0.2 & 43.39$\pm$0.7 & 50.12$\pm$0.4 & 53.19$\pm$0.4 & 59.17$\pm$0.2 & 68.85$\pm$0.4 & 71.29$\pm$0.3 & 75.79$\pm$0.3 \\ \hline DE+Avg-KLD & 32.23$\pm$0.2 & 36.09$\pm$0.6 & 44.00$\pm$0.5 & 49.81$\pm$0.3 & 53.38$\pm$0.3 & 58.93$\pm$0.1 & 68.73$\pm$0.2 & 71.40$\pm$0.2 & 75.73$\pm$0.2 \\ \hline DE+CLUE & 30.80$\pm$0.3 & 33.04$\pm$0.4 & 35.52$\pm$0.2 & 48.56$\pm$0.3 & 49.91$\pm$0.3 & 51.40$\pm$0.2 & 67.82$\pm$0.2 & 69.10$\pm$0.2 & 70.62$\pm$0.2 \\ \hline DE+BADGE & 30.16$\pm$1.3 & 36.18$\pm$0.3 & 43.34$\pm$0.3 & 49.78$\pm$0.3 & 53.26$\pm$0.1 & 58.65$\pm$0.4 & 68.46$\pm$0.3 & 71.35$\pm$0.2 & 75.37$\pm$0.3 \\ \hline \hline ASPEST (ours) & \textbf{37.38}$\pm$0.1 & \textbf{39.98}$\pm$0.3 & \textbf{48.29}$\pm$1.0 & \textbf{54.56}$\pm$0.3 & \textbf{56.95}$\pm$0.1 & \textbf{62.69}$\pm$0.2 & \textbf{71.61}$\pm$0.2 & \textbf{73.27}$\pm$0.2 & \textbf{77.40}$\pm$0.4 \\ \bottomrule \end{tabular} \end{adjustbox} \caption[]{\small Results of comparing ASPEST to the baselines on DomainNet R$\to$C. The mean and std of each metric over three random runs are reported (mean$\pm$std). All numbers are percentages. \textbf{Bold} numbers are superior results. 
} \label{tab:domainnet-rc-complete-main-results} \end{table*} \begin{table*}[htb] \centering \begin{adjustbox}{width=\columnwidth,center} \begin{tabular}{l|ccc|ccc|ccc} \toprule Dataset & \multicolumn{9}{c}{DomainNet R$\to$P (medium)} \\ \hline Metric & \multicolumn{3}{c|}{$cov|acc\geq 70\%$ $\uparrow$} & \multicolumn{3}{c|}{$acc|cov\geq 70\%$ $\uparrow$} & \multicolumn{3}{c}{AUC $\uparrow$} \\ \hline Labeling Budget & 500 & 1000 & 2000 & 500 & 1000 & 2000 & 500 & 1000 & 2000 \\ \hline \hline SR+Uniform & 21.01$\pm$1.0 & 21.35$\pm$0.3 & 22.64$\pm$0.5 & 36.78$\pm$0.6 & 37.18$\pm$0.2 & 38.20$\pm$0.4 & 51.87$\pm$0.7 & 52.31$\pm$0.0 & 53.34$\pm$0.4 \\ \hline SR+Confidence & 20.64$\pm$0.6 & 22.15$\pm$0.8 & 23.60$\pm$0.6 & 37.01$\pm$0.3 & 38.46$\pm$0.7 & 40.23$\pm$0.4 & 51.77$\pm$0.3 & 53.33$\pm$0.8 & 54.80$\pm$0.5 \\ \hline SR+Entropy & 20.76$\pm$0.7 & 22.11$\pm$0.3 & 23.56$\pm$0.3 & 37.09$\pm$0.2 & 38.38$\pm$0.3 & 40.30$\pm$0.1 & 51.86$\pm$0.4 & 53.29$\pm$0.3 & 54.81$\pm$0.2 \\ \hline SR+Margin & 21.43$\pm$0.4 & 23.29$\pm$0.3 & 24.70$\pm$1.0 & 37.21$\pm$0.2 & 39.15$\pm$0.4 & 40.81$\pm$0.4 & 52.33$\pm$0.1 & 54.09$\pm$0.3 & 55.70$\pm$0.4 \\ \hline SR+kCG & 17.33$\pm$0.4 & 17.62$\pm$0.2 & 18.49$\pm$0.2 & 33.97$\pm$0.3 & 34.12$\pm$0.1 & 34.36$\pm$0.1 & 48.61$\pm$0.5 & 48.65$\pm$0.2 & 49.25$\pm$0.2 \\ \hline SR+CLUE & 21.15$\pm$0.6 & 22.49$\pm$0.5 & 24.84$\pm$0.7 & 36.96$\pm$0.2 & 37.93$\pm$0.5 & 39.31$\pm$0.4 & 51.97$\pm$0.4 & 53.20$\pm$0.5 & 54.84$\pm$0.5 \\ \hline SR+BADGE & 20.07$\pm$0.3 & 22.21$\pm$0.5 & 24.92$\pm$0.2 & 36.10$\pm$0.1 & 38.11$\pm$0.4 & 40.40$\pm$0.5 & 50.99$\pm$0.0 & 53.10$\pm$0.4 & 55.40$\pm$0.4 \\ \hline \hline DE+Uniform & 25.42$\pm$0.2 & 26.38$\pm$0.2 & 28.83$\pm$0.3 & 40.83$\pm$0.1 & 41.66$\pm$0.2 & 43.93$\pm$0.2 & 55.86$\pm$0.1 & 56.62$\pm$0.1 & 58.80$\pm$0.2 \\ \hline DE+Entropy & 25.74$\pm$0.4 & 27.11$\pm$0.4 & 30.39$\pm$0.1 & 41.34$\pm$0.1 & 42.92$\pm$0.3 & 45.92$\pm$0.3 & 56.06$\pm$0.2 & 57.51$\pm$0.3 & 60.10$\pm$0.2 \\ \hline DE+Confidence
& 25.69$\pm$0.4 & 27.38$\pm$0.7 & 30.47$\pm$0.1 & 41.45$\pm$0.2 & 43.12$\pm$0.3 & 45.88$\pm$0.1 & 56.13$\pm$0.2 & 57.68$\pm$0.3 & 60.20$\pm$0.2 \\ \hline DE+Margin & 25.78$\pm$0.3 & 27.88$\pm$0.5 & 31.03$\pm$0.4 & 41.26$\pm$0.2 & 43.13$\pm$0.3 & 46.23$\pm$0.4 & 56.23$\pm$0.2 & 57.90$\pm$0.3 & 60.49$\pm$0.3 \\ \hline DE+Avg-KLD & 26.30$\pm$0.7 & 28.00$\pm$0.1 & 31.97$\pm$0.2 & 41.80$\pm$0.3 & 43.17$\pm$0.1 & 46.32$\pm$0.2 & 56.65$\pm$0.3 & 57.99$\pm$0.1 & 60.82$\pm$0.2 \\ \hline DE+CLUE & 25.38$\pm$0.6 & 26.65$\pm$0.4 & 27.89$\pm$0.1 & 40.86$\pm$0.3 & 41.62$\pm$0.2 & 42.46$\pm$0.1 & 55.79$\pm$0.4 & 56.65$\pm$0.2 & 57.71$\pm$0.1 \\ \hline DE+BADGE & 26.27$\pm$0.7 & 27.69$\pm$0.1 & 31.84$\pm$0.2 & 42.02$\pm$0.6 & 43.41$\pm$0.2 & 46.37$\pm$0.1 & 56.67$\pm$0.5 & 58.03$\pm$0.1 & 60.84$\pm$0.1 \\ \hline \hline ASPEST (ours) & \textbf{29.69}$\pm$0.1 & \textbf{32.50}$\pm$0.3 & \textbf{35.46}$\pm$0.6 & \textbf{44.96}$\pm$0.1 & \textbf{46.77}$\pm$0.2 & \textbf{49.42}$\pm$0.1 & \textbf{58.74}$\pm$0.0 & \textbf{60.36}$\pm$0.0 & \textbf{62.84}$\pm$0.2 \\ \bottomrule \end{tabular} \end{adjustbox} \caption[]{\small Results of comparing ASPEST to the baselines on DomainNet R$\to$P. The mean and std of each metric over three random runs are reported (mean$\pm$std). All numbers are percentages. \textbf{Bold} numbers are superior results. 
} \label{tab:domainnet-rp-complete-main-results} \end{table*} \begin{table*}[htb] \centering \begin{adjustbox}{width=\columnwidth,center} \begin{tabular}{l|ccc|ccc|ccc} \toprule Dataset & \multicolumn{9}{c}{DomainNet R$\to$S (hard)} \\ \hline Metric & \multicolumn{3}{c|}{$cov|acc\geq 70\%$ $\uparrow$} & \multicolumn{3}{c|}{$acc|cov\geq 70\%$ $\uparrow$} & \multicolumn{3}{c}{AUC $\uparrow$} \\ \hline Labeling Budget & 500 & 1000 & 2000 & 500 & 1000 & 2000 & 500 & 1000 & 2000 \\ \hline \hline SR+Uniform & 12.12$\pm$0.7 & 12.42$\pm$0.4 & 15.88$\pm$0.2 & 27.01$\pm$0.6 & 27.74$\pm$0.3 & 31.29$\pm$0.3 & 41.12$\pm$0.8 & 41.89$\pm$0.2 & 46.17$\pm$0.3 \\ \hline SR+Confidence & 11.06$\pm$1.1 & 11.48$\pm$0.5 & 14.49$\pm$1.5 & 26.53$\pm$1.4 & 27.98$\pm$0.2 & 31.31$\pm$0.7 & 40.26$\pm$1.6 & 41.65$\pm$0.2 & 45.46$\pm$1.1 \\ \hline SR+Entropy & 10.91$\pm$0.3 & 12.45$\pm$0.6 & 14.65$\pm$0.6 & 26.84$\pm$0.5 & 28.72$\pm$0.5 & 31.07$\pm$0.6 & 40.47$\pm$0.6 & 42.61$\pm$0.8 & 45.31$\pm$0.4 \\ \hline SR+Margin & 12.23$\pm$0.4 & 13.06$\pm$0.4 & 15.31$\pm$0.4 & 27.87$\pm$0.2 & 29.19$\pm$0.4 & 31.51$\pm$0.8 & 41.91$\pm$0.3 & 43.22$\pm$0.4 & 45.97$\pm$0.8 \\ \hline SR+kCG & 9.03$\pm$0.2 & 9.76$\pm$0.2 & 11.41$\pm$0.2 & 23.32$\pm$0.4 & 24.06$\pm$0.4 & 25.68$\pm$0.4 & 36.63$\pm$0.3 & 37.57$\pm$0.4 & 39.80$\pm$0.3 \\ \hline SR+CLUE & 12.39$\pm$0.3 & 14.17$\pm$1.0 & 15.80$\pm$0.8 & 27.82$\pm$0.4 & 29.68$\pm$0.4 & 30.62$\pm$0.8 & 42.00$\pm$0.4 & 44.19$\pm$0.7 & 45.58$\pm$0.9 \\ \hline SR+BADGE & 12.18$\pm$0.9 & 13.13$\pm$1.0 & 15.83$\pm$0.7 & 27.68$\pm$1.0 & 28.96$\pm$0.7 & 32.00$\pm$0.4 & 41.72$\pm$1.1 & 43.28$\pm$0.9 & 46.60$\pm$0.6 \\ \hline \hline DE+Uniform & 15.91$\pm$0.5 & 17.55$\pm$0.4 & 21.33$\pm$0.3 & 31.37$\pm$0.5 & 32.57$\pm$0.4 & 36.12$\pm$0.2 & 46.28$\pm$0.5 & 47.79$\pm$0.4 & 51.64$\pm$0.2 \\ \hline DE+Entropy & 13.70$\pm$0.3 & 16.31$\pm$0.5 & 19.58$\pm$0.4 & 30.38$\pm$0.4 & 32.45$\pm$0.2 & 36.18$\pm$0.2 & 44.79$\pm$0.5 & 47.15$\pm$0.2 & 50.87$\pm$0.3 \\ \hline DE+Confidence &
13.73$\pm$0.2 & 16.21$\pm$0.2 & 19.22$\pm$0.4 & 30.55$\pm$0.3 & 33.02$\pm$0.1 & 36.29$\pm$0.5 & 45.05$\pm$0.3 & 47.59$\pm$0.0 & 50.84$\pm$0.4 \\ \hline DE+Margin & 14.99$\pm$0.2 & 17.45$\pm$0.4 & 21.74$\pm$0.7 & 31.67$\pm$0.5 & 33.51$\pm$0.5 & 37.88$\pm$0.3 & 46.38$\pm$0.5 & 48.44$\pm$0.5 & 52.78$\pm$0.4 \\ \hline DE+Avg-KLD & 15.75$\pm$0.5 & 18.14$\pm$0.7 & 22.15$\pm$0.3 & 31.36$\pm$0.2 & 33.79$\pm$0.2 & 37.96$\pm$0.2 & 46.29$\pm$0.1 & 48.77$\pm$0.3 & 53.02$\pm$0.3 \\ \hline DE+CLUE & 14.76$\pm$0.5 & 17.38$\pm$0.1 & 19.75$\pm$0.4 & 31.05$\pm$0.4 & 32.58$\pm$0.2 & 34.61$\pm$0.4 & 45.80$\pm$0.3 & 47.74$\pm$0.1 & 50.09$\pm$0.2 \\ \hline DE+BADGE & 14.97$\pm$0.1 & 17.49$\pm$0.3 & 21.71$\pm$0.3 & 31.35$\pm$0.2 & 33.46$\pm$0.1 & 37.35$\pm$0.3 & 46.03$\pm$0.1 & 48.31$\pm$0.1 & 52.33$\pm$0.2 \\ \hline \hline ASPEST (ours) & \textbf{17.86}$\pm$0.4 & \textbf{20.42}$\pm$0.4 & \textbf{25.87}$\pm$0.4 & \textbf{35.17}$\pm$0.1 & \textbf{37.28}$\pm$0.3 & \textbf{41.46}$\pm$0.2 & \textbf{49.62}$\pm$0.1 & \textbf{51.61}$\pm$0.4 & \textbf{55.90}$\pm$0.2 \\ \bottomrule \end{tabular} \end{adjustbox} \caption[]{\small Results of comparing ASPEST to the baselines on DomainNet R$\to$S. The mean and std of each metric over three random runs are reported (mean$\pm$std). All numbers are percentages. \textbf{Bold} numbers are superior results. 
} \label{tab:domainnet-rs-complete-main-results} \end{table*} \begin{table*}[htb] \centering \begin{adjustbox}{width=\columnwidth,center} \begin{tabular}{l|ccc|ccc|ccc} \toprule Dataset & \multicolumn{9}{c}{Otto} \\ \hline Metric & \multicolumn{3}{c|}{$cov|acc\geq 80\%$ $\uparrow$} & \multicolumn{3}{c|}{$acc|cov\geq 80\%$ $\uparrow$} & \multicolumn{3}{c}{AUC $\uparrow$} \\ \hline Labeling Budget & 500 & 1000 & 2000 & 500 & 1000 & 2000 & 500 & 1000 & 2000 \\ \hline \hline SR+Uniform & 63.58$\pm$0.7 & 64.06$\pm$0.4 & 67.49$\pm$0.9 & 73.56$\pm$0.3 & 73.57$\pm$0.6 & 75.21$\pm$0.2 & 84.46$\pm$0.2 & 84.61$\pm$0.3 & 85.72$\pm$0.2 \\ \hline SR+Confidence & 69.63$\pm$1.7 & 73.41$\pm$0.6 & 84.19$\pm$0.5 & 75.96$\pm$0.5 & 77.57$\pm$0.2 & 81.39$\pm$0.2 & 85.91$\pm$0.3 & 86.86$\pm$0.1 & 88.93$\pm$0.1 \\ \hline SR+Entropy & 67.79$\pm$0.8 & 73.83$\pm$1.0 & 83.12$\pm$0.7 & 75.43$\pm$0.4 & 77.91$\pm$0.3 & 81.07$\pm$0.2 & 85.41$\pm$0.3 & 86.94$\pm$0.2 & 88.86$\pm$0.1 \\ \hline SR+Margin & 68.10$\pm$0.1 & 74.10$\pm$0.4 & 82.53$\pm$0.2 & 75.52$\pm$0.0 & 77.66$\pm$0.1 & 80.93$\pm$0.1 & 85.56$\pm$0.1 & 86.99$\pm$0.1 & 88.83$\pm$0.1 \\ \hline SR+kCG & 64.84$\pm$0.7 & 62.90$\pm$1.1 & 59.85$\pm$1.0 & 73.75$\pm$0.3 & 73.03$\pm$0.2 & 71.90$\pm$0.3 & 85.08$\pm$0.2 & 84.67$\pm$0.2 & 83.79$\pm$0.3 \\ \hline SR+CLUE & 68.21$\pm$1.2 & 70.85$\pm$0.6 & 78.26$\pm$0.9 & 75.26$\pm$0.5 & 76.32$\pm$0.2 & 79.30$\pm$0.3 & 85.82$\pm$0.3 & 86.69$\pm$0.2 & 88.53$\pm$0.2 \\ \hline SR+BADGE & 67.23$\pm$1.0 & 73.52$\pm$0.2 & 83.17$\pm$0.4 & 74.74$\pm$0.3 & 77.43$\pm$0.2 & 81.20$\pm$0.2 & 85.41$\pm$0.3 & 87.10$\pm$0.2 & 89.25$\pm$0.1 \\ \hline \hline DE+Uniform & 70.74$\pm$0.5 & 72.20$\pm$0.6 & 75.58$\pm$0.5 & 76.40$\pm$0.1 & 77.06$\pm$0.2 & 78.35$\pm$0.2 & 86.78$\pm$0.1 & 87.26$\pm$0.1 & 88.11$\pm$0.1 \\ \hline DE+Entropy & 75.71$\pm$0.3 & 80.91$\pm$0.2 & 92.62$\pm$0.2 & 78.44$\pm$0.1 & 80.29$\pm$0.1 & 84.05$\pm$0.1 & 87.87$\pm$0.1 & 88.77$\pm$0.1 & 90.99$\pm$0.1 \\ \hline DE+Confidence & 75.52$\pm$0.2 &
81.69$\pm$0.7 & 92.15$\pm$0.9 & 78.28$\pm$0.1 & 80.49$\pm$0.2 & 83.83$\pm$0.1 & 87.84$\pm$0.1 & 89.05$\pm$0.1 & 90.98$\pm$0.1 \\ \hline DE+Margin & 75.49$\pm$0.8 & 81.36$\pm$0.8 & 92.49$\pm$0.4 & 78.41$\pm$0.3 & 80.50$\pm$0.2 & 84.06$\pm$0.2 & 87.89$\pm$0.2 & 89.10$\pm$0.2 & 90.95$\pm$0.2 \\ \hline DE+Avg-KLD & 75.91$\pm$0.2 & 80.97$\pm$0.5 & 91.94$\pm$0.8 & 78.50$\pm$0.1 & 80.33$\pm$0.2 & 83.80$\pm$0.2 & 87.89$\pm$0.0 & 89.06$\pm$0.1 & 90.98$\pm$0.1 \\ \hline DE+CLUE & 69.66$\pm$0.5 & 70.52$\pm$0.1 & 70.17$\pm$0.4 & 76.09$\pm$0.3 & 76.32$\pm$0.1 & 76.31$\pm$0.2 & 86.67$\pm$0.1 & 87.11$\pm$0.0 & 87.06$\pm$0.1 \\ \hline DE+BADGE & 73.23$\pm$0.2 & 77.89$\pm$0.6 & 86.32$\pm$0.5 & 77.33$\pm$0.1 & 79.21$\pm$0.3 & 82.32$\pm$0.2 & 87.55$\pm$0.1 & 88.75$\pm$0.1 & 90.58$\pm$0.0 \\ \hline \hline ASPEST (ours) & \textbf{77.85}$\pm$0.2 & \textbf{84.20}$\pm$0.6 & \textbf{94.26}$\pm$0.6 & \textbf{79.28}$\pm$0.1 & \textbf{81.40}$\pm$0.1 & \textbf{84.62}$\pm$0.1 & \textbf{88.28}$\pm$0.1 & \textbf{89.61}$\pm$0.1 & \textbf{91.49}$\pm$0.0 \\ \bottomrule \end{tabular} \end{adjustbox} \caption[]{\small Results of comparing ASPEST to the baselines on Otto. The mean and std of each metric over three random runs are reported (mean$\pm$std). All numbers are percentages. \textbf{Bold} numbers are superior results. } \label{tab:otto-complete-main-results} \end{table*} \subsection{Effect of combining selective prediction with active learning} \label{app:asp-effect} Selective prediction without active learning corresponds to the case where the labeling budget $M=0$ and the selected set $B^*=\emptyset$. To make fair comparisons with selective prediction methods without active learning, we define a new coverage metric: \begin{align} cov^*(f_s, \tau) = \mathbb{E}_{\bfx \sim U_X} \mathbb{I}[g(\bfx) \ge \tau \land \bfx \notin B^*] \end{align} The range of $cov^*(f_s, \tau)$ is $[0, 1-\frac{M}{n}]$, where $M=|B^*|$ and $n=|U_X|$. 
A larger labeling budget $M$ for active learning lowers the upper bound of $cov^*(f_s, \tau)$. Thus, to outperform selective classification methods without active learning, active selective prediction methods must achieve significant accuracy and coverage improvements with a small labeling budget. We still use the accuracy metric defined in~(\ref{eq:acc-metric}). We then define the maximum accuracy at a target coverage $t_c$ as: \begin{align} \max_{\tau} \quad acc(f_s, \tau), \quad s.t. \quad cov^*(f_s, \tau) \ge t_c \end{align} We denote this metric as $acc|cov^*\ge t_c$. We define the maximum coverage at a target accuracy $t_a$ as: \begin{align} \max_{\tau} \quad cov^*(f_s, \tau), \quad s.t. \quad acc(f_s, \tau) \ge t_a \end{align} We denote this metric as $cov^*|acc\ge t_a$. The results under these new metrics are shown in Table~\ref{tab:mnist-vs-sp-results} (MNIST$\to$SVHN), Table~\ref{tab:cifar-otto-vs-sp-results} (CIFAR-10$\to$CINIC-10 and Otto), Table~\ref{tab:fmow-amazon-review-vs-sp-results} (FMoW and Amazon Review) and Table~\ref{tab:domainnet-vs-sp-results} (DomainNet). The results show that combining selective prediction with active learning can significantly improve the accuracy and coverage metrics, even with small labeling budgets.
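As an illustrative sketch of how these threshold-sweep metrics can be computed (the function names and array layout are our own assumptions, not the paper's code), given per-point confidence scores and correctness indicators on the test points remaining after active learning:

```python
import numpy as np

def max_cov_at_acc(conf, correct, n, target_acc):
    """cov^*|acc >= target_acc: sweep the threshold tau by accepting points in
    decreasing-confidence order; cov^* is normalized by the full test-set size
    n, so it tops out at 1 - M/n when M points were removed for labeling."""
    order = np.argsort(-conf)
    k = np.arange(1, len(conf) + 1)
    acc = np.cumsum(correct[order]) / k   # selective accuracy of each prefix
    cov = k / n                           # cov^* of each prefix
    ok = acc >= target_acc
    return cov[ok].max() if ok.any() else 0.0

def max_acc_at_cov(conf, correct, n, target_cov):
    """acc|cov^* >= target_cov: best selective accuracy among thresholds whose
    cov^* meets the target coverage."""
    order = np.argsort(-conf)
    k = np.arange(1, len(conf) + 1)
    acc = np.cumsum(correct[order]) / k
    ok = (k / n) >= target_cov
    return acc[ok].max() if ok.any() else 0.0
```

Both metrics reduce to a single sort plus a cumulative sum, since every candidate threshold corresponds to a prefix of the confidence-sorted test points.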
\begin{table*}[htb] \centering \begin{adjustbox}{width=0.95\columnwidth,center} \begin{tabular}{l|c|c|c|c} \toprule Dataset & \multicolumn{2}{c|}{CIFAR-10$\to$CINIC-10} & \multicolumn{2}{c}{Otto} \\ \hline Metric & $cov^*|acc\geq 90\%$ $\uparrow$ & $acc|cov^*\geq 90\%$ $\uparrow$ & $cov^*|acc\geq 80\%$ $\uparrow$ & $acc|cov^*\geq 80\%$ $\uparrow$ \\ \hline \hline SR (w/o active learning) & 57.43$\pm$0.0 & 75.62$\pm$0.0 & 62.90$\pm$0.0 & 73.13$\pm$0.0 \\ \hline SR+Margin (M=500) & 56.76$\pm$0.8 & 75.61$\pm$0.2 & 65.34$\pm$0.1 & 74.25$\pm$0.1 \\ \hline SR+Margin (M=1000) & 56.04$\pm$0.7 & 75.70$\pm$0.1 & 68.11$\pm$0.4 & 74.99$\pm$0.2 \\ \hline \hline DE (w/o active learning) & 56.64$\pm$0.2 & 75.83$\pm$0.1 & 67.69$\pm$0.4 & 75.41$\pm$0.2 \\ \hline DE+Margin (M=500) & 57.78$\pm$0.5 & 76.96$\pm$0.2 & 72.44$\pm$0.7 & 77.18$\pm$0.3 \\ \hline DE+Margin (M=1000) & 59.55$\pm$0.5 & 77.59$\pm$0.1 & 74.78$\pm$0.7 & 78.19$\pm$0.2 \\ \hline \hline ASPEST (M=500) & 59.37$\pm$0.3 & 77.60$\pm$0.1 & 74.71$\pm$0.2 & 77.99$\pm$0.2 \\ \hline ASPEST (M=1000) & \textbf{61.23}$\pm$0.2 & \textbf{78.16}$\pm$0.1 & \textbf{77.40}$\pm$0.5 & \textbf{79.05}$\pm$0.2 \\ \bottomrule \end{tabular} \end{adjustbox} \caption[]{\small Results on CIFAR-10$\to$CINIC-10 and Otto for studying the effect of combining selective prediction with active learning. ``w/o'' means ``without''. The mean and std of each metric over three random runs are reported (mean$\pm$std). All numbers are percentages. \textbf{Bold} numbers are superior results.
} \label{tab:cifar-otto-vs-sp-results} \end{table*} \begin{table*}[htb] \centering \begin{adjustbox}{width=0.95\columnwidth,center} \begin{tabular}{l|c|c|c|c} \toprule Dataset & \multicolumn{2}{c|}{FMoW} & \multicolumn{2}{c}{Amazon Review} \\ \hline Metric & $cov^*|acc\geq 70\%$ $\uparrow$ & $acc|cov^*\geq 70\%$ $\uparrow$ & $cov^*|acc\geq 80\%$ $\uparrow$ & $acc|cov^*\geq 80\%$ $\uparrow$ \\ \hline \hline SR (w/o active learning) & 32.39$\pm$0.0 & 48.15$\pm$0.0 & 26.79$\pm$0.0 & 65.64$\pm$0.0 \\ \hline SR+Margin (M=500) & 37.54$\pm$1.3 & 52.19$\pm$0.3 & 14.16$\pm$10.6 & 65.38$\pm$0.4 \\ \hline SR+Margin (M=1000) & 42.65$\pm$0.7 & 55.30$\pm$0.5 & 21.60$\pm$4.0 & 65.68$\pm$0.4 \\ \hline \hline DE (w/o active learning) & 37.58$\pm$0.3 & 52.01$\pm$0.1 & 35.81$\pm$1.9 & 68.41$\pm$0.2 \\ \hline DE+Margin (M=500) & 45.30$\pm$0.6 & 57.09$\pm$0.3 & 32.68$\pm$1.2 & 68.10$\pm$0.3 \\ \hline DE+Margin (M=1000) & 52.32$\pm$1.2 & 60.96$\pm$0.4 & 33.47$\pm$1.2 & 68.54$\pm$0.2 \\ \hline \hline ASPEST (M=500) & 51.85$\pm$0.4 & 60.43$\pm$0.2 & 37.59$\pm$0.6 & 68.91$\pm$0.2 \\ \hline ASPEST (M=1000) & \textbf{57.15}$\pm$0.4 & \textbf{63.71}$\pm$0.2 & \textbf{39.14}$\pm$0.8 & \textbf{69.31}$\pm$0.2 \\ \bottomrule \end{tabular} \end{adjustbox} \caption[]{\small Results on FMoW and Amazon Review for studying the effect of combining selective prediction with active learning. ``w/o'' means ``without''. The mean and std of each metric over three random runs are reported (mean$\pm$std). All numbers are percentages. \textbf{Bold} numbers are superior results.
} \label{tab:fmow-amazon-review-vs-sp-results} \end{table*} \begin{table*}[htb] \centering \begin{adjustbox}{width=\columnwidth,center} \begin{tabular}{l|c|c|c|c|c|c} \toprule Dataset & \multicolumn{2}{c|}{DomainNet R$\to$C (easy)} & \multicolumn{2}{c|}{DomainNet R$\to$P (medium)} & \multicolumn{2}{c}{DomainNet R$\to$S (hard)} \\ \hline Metric & $cov^*|acc\geq 80\%$ $\uparrow$ & $acc|cov^*\geq 80\%$ $\uparrow$ & $cov^*|acc\geq 70\%$ $\uparrow$ & $acc|cov^*\geq 70\%$ $\uparrow$ & $cov^*|acc\geq 70\%$ $\uparrow$ & $acc|cov^*\geq 70\%$ $\uparrow$ \\ \hline \hline SR (w/o active learning) & 21.50$\pm$0.0 & 40.16$\pm$0.0 & 18.16$\pm$0.0 & 34.74$\pm$0.0 & 7.16$\pm$0.0 & 21.24$\pm$0.0 \\ \hline SR+Margin (M=500) & 25.38$\pm$1.1 & 44.09$\pm$0.9 & 20.94$\pm$0.4 & 36.65$\pm$0.1 & 11.94$\pm$0.4 & 27.35$\pm$0.2 \\ \hline SR+Margin (M=1000) & 25.87$\pm$1.0 & 44.70$\pm$0.7 & 22.22$\pm$0.3 & 37.91$\pm$0.4 & 12.43$\pm$0.4 & 28.19$\pm$0.4 \\ \hline \hline DE (w/o active learning) & 26.15$\pm$0.2 & 44.51$\pm$0.1 & 22.44$\pm$0.2 & 39.06$\pm$0.1 & 9.90$\pm$0.4 & 25.37$\pm$0.0 \\ \hline DE+Margin (M=500) & 30.73$\pm$1.2 & 48.85$\pm$0.4 & 25.19$\pm$0.3 & 40.59$\pm$0.1 & 14.63$\pm$0.2 & 31.11$\pm$0.5 \\ \hline DE+Margin (M=1000) & 33.24$\pm$0.2 & 50.46$\pm$0.4 & 26.60$\pm$0.5 & 41.73$\pm$0.3 & 16.62$\pm$0.4 & 32.30$\pm$0.5 \\ \hline \hline ASPEST (M=500) & 36.10$\pm$0.1 & 53.22$\pm$0.3 & 29.01$\pm$0.1 & 44.26$\pm$0.1 & 17.43$\pm$0.4 & 34.55$\pm$0.1 \\ \hline ASPEST (M=1000) & \textbf{37.24}$\pm$0.3 & \textbf{54.03}$\pm$0.1 & \textbf{31.01}$\pm$0.3 & \textbf{45.31}$\pm$0.1 & \textbf{19.45}$\pm$0.3 & \textbf{35.96}$\pm$0.3 \\ \bottomrule \end{tabular} \end{adjustbox} \caption[]{\small Results on DomainNet R$\to$C, R$\to$P and R$\to$S for studying the effect of combining selective prediction with active learning. ``w/o'' means ``without''. The mean and std of each metric over three random runs are reported (mean$\pm$std). All numbers are percentages.
\textbf{Bold} numbers are superior results. } \label{tab:domainnet-vs-sp-results} \end{table*} \subsection{Effect of joint training} \label{app:joint-training-effect} In the problem setup, we assume that we have access to the training dataset $\Dtr$ and can use joint training to improve the selective prediction performance. In this section, we perform experiments to study the effect of joint training and of the loss coefficient $\lambda$ used when performing it. We consider three active selective prediction methods: SR+margin (Algorithm~\ref{alg:sr} with margin sampling), DE+margin (Algorithm~\ref{alg:de} with margin sampling), and ASPEST (Algorithm~\ref{alg:aspest}). We consider $\lambda \in \{0, 0.5, 1.0, 2.0 \}$: when $\lambda=0$, joint training is disabled; when $\lambda>0$, it is enabled. The results are shown in Table~\ref{tab:joint-training-ablation}. From the results, we can see that using joint training (i.e., $\lambda>0$) can improve performance, especially when the labeling budget is small. However, setting $\lambda$ too large (e.g., $\lambda=2$) degrades performance, while $\lambda=0.5$ or $1$ usually works best. In our experiments, we set $\lambda=1$ by default.
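As a minimal sketch of a $\lambda$-weighted joint objective (our own formulation for illustration; the paper's exact joint-training loss may differ), the loss on the selected labeled test batch is combined with a $\lambda$-weighted loss on a batch from the source training set $\Dtr$:

```python
import numpy as np

def cross_entropy(probs, labels):
    """Mean negative log-likelihood of the true labels."""
    return -np.mean(np.log(probs[np.arange(len(labels)), labels]))

def joint_training_loss(test_probs, test_labels, train_probs, train_labels, lam):
    """One-step joint-training objective (illustrative sketch): fine-tuning
    loss on the selected labeled test batch, plus a lam-weighted loss on a
    source-training batch that regularizes against forgetting.
    lam = 0 recovers plain fine-tuning without joint training."""
    return cross_entropy(test_probs, test_labels) + \
        lam * cross_entropy(train_probs, train_labels)
```

Sweeping `lam` over $\{0, 0.5, 1, 2\}$ in this form mirrors the ablation in Table~\ref{tab:joint-training-ablation}: larger values weight the source data more heavily relative to the selected test data.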
\begin{table}[htb] \centering \begin{tabular}{l|cc|cc} \toprule Dataset & \multicolumn{2}{c|}{MNIST$\to$SVHN} & \multicolumn{2}{c}{DomainNet R$\to$C (easy)} \\ \hline Metric & \multicolumn{2}{c|}{AUC $\uparrow$} & \multicolumn{2}{c}{AUC $\uparrow$} \\ \hline Labeling Budget & 100 & 500 & 500 & 1000 \\ \hline \hline SR+Margin ($\lambda=0$) & 71.90$\pm$3.1 & 90.56$\pm$0.8 & 60.05$\pm$0.9 & 60.34$\pm$1.2 \\ \hline SR+Margin ($\lambda=0.5$) & 75.54$\pm$1.7 & 91.43$\pm$0.5 & 64.99$\pm$0.7 & 66.81$\pm$0.5 \\ \hline SR+Margin ($\lambda=1$) & 76.79$\pm$0.5 & 91.24$\pm$0.5 & 64.37$\pm$0.8 & 65.91$\pm$0.6 \\ \hline SR+Margin ($\lambda=2$) & 72.71$\pm$2.5 & 90.80$\pm$0.3 & 64.17$\pm$0.3 & 66.21$\pm$0.2 \\ \hline \hline DE+Margin ($\lambda=0$) & 77.12$\pm$0.5 & 94.26$\pm$0.5 & 66.86$\pm$0.5 & 69.29$\pm$0.6 \\ \hline DE+Margin ($\lambda=0.5$) & 79.35$\pm$1.4 & 94.22$\pm$0.2 & 69.28$\pm$0.3 & 71.60$\pm$0.2 \\ \hline DE+Margin ($\lambda=1$) & 78.59$\pm$1.4 & 94.31$\pm$0.6 & 68.85$\pm$0.4 & 71.29$\pm$0.3 \\ \hline DE+Margin ($\lambda=2$) & 77.64$\pm$2.3 & 93.81$\pm$0.4 & 68.54$\pm$0.1 & 71.28$\pm$0.2 \\ \hline \hline ASPEST ($\lambda=0$) & 84.48$\pm$2.5 & 96.99$\pm$0.2 & 68.61$\pm$1.2 & 73.21$\pm$1.2 \\ \hline ASPEST ($\lambda=0.5$) & 86.46$\pm$3.1 & 97.01$\pm$0.0 & 71.53$\pm$0.1 & 73.69$\pm$0.1 \\ \hline ASPEST ($\lambda=1$) & 88.84$\pm$1.0 & 96.62$\pm$0.2 & 71.61$\pm$0.2 & 73.27$\pm$0.2 \\ \hline ASPEST ($\lambda=2$) & 85.46$\pm$1.7 & 96.43$\pm$0.1 & 70.54$\pm$0.3 & 73.02$\pm$0.1 \\ \bottomrule \end{tabular} \caption[]{\small Ablation study results for the effect of using joint training and the effect of the loss coefficient $\lambda$. The mean and std of each metric over three random runs are reported (mean$\pm$std). All numbers are percentages. } \label{tab:joint-training-ablation} \end{table} \subsection{Effect of the number of rounds T} \label{app:num-round-effect} In this section, we study the effect of the number of rounds $T$ in active learning. 
The results in Table~\ref{tab:num-round-ablation} show that larger $T$ usually leads to better performance, and the proposed method ASPEST has more improvement as we increase $T$ compared to SR+Margin and DE+Margin. Also, when $T$ is large enough, the improvement becomes minor (or can even be worse). Considering that in practice, we might not be able to set a large $T$ due to resource constraints, we thus set $T=10$ by default. \begin{table}[htb] \centering \begin{tabular}{l|cc|cc} \toprule Dataset & \multicolumn{2}{c|}{MNIST$\to$SVHN} & \multicolumn{2}{c}{DomainNet R$\to$C (easy)} \\ \hline Metric & \multicolumn{2}{c|}{AUC $\uparrow$} & \multicolumn{2}{c}{AUC $\uparrow$} \\ \hline Labeling Budget & 100 & 500 & 500 & 1000 \\ \hline \hline SR+Margin (T=1) & 63.10$\pm$2.7 & 75.42$\pm$3.6 & 65.16$\pm$0.4 & 66.76$\pm$0.3 \\ \hline SR+Margin (T=2) & 68.09$\pm$3.1 & 87.45$\pm$1.6 & 64.64$\pm$0.8 & 66.91$\pm$0.1 \\ \hline SR+Margin (T=5) & 74.87$\pm$1.7 & 91.32$\pm$0.5 & 64.35$\pm$0.2 & 66.76$\pm$0.3 \\ \hline SR+Margin (T=10) & 76.79$\pm$0.5 & 91.24$\pm$0.5 & 64.37$\pm$0.8 & 65.91$\pm$0.6 \\ \hline SR+Margin (T=20) & 72.81$\pm$1.5 & 90.34$\pm$1.3 & 63.65$\pm$0.6 & 66.08$\pm$0.4 \\ \hline \hline DE+Margin (T=1) & 69.85$\pm$0.5 & 82.74$\pm$2.1 & 68.39$\pm$0.2 & 70.55$\pm$0.0 \\ \hline DE+Margin (T=2) & 75.25$\pm$1.0 & 90.90$\pm$1.0 & 68.79$\pm$0.2 & 70.95$\pm$0.5 \\ \hline DE+Margin (T=5) & 78.41$\pm$0.2 & 93.26$\pm$0.3 & 68.80$\pm$0.2 & 71.21$\pm$0.2 \\ \hline DE+Margin (T=10) & 78.59$\pm$1.4 & 94.31$\pm$0.6 & 68.85$\pm$0.4 & 71.29$\pm$0.3 \\ \hline DE+Margin (T=20) & 76.84$\pm$0.4 & 94.67$\pm$0.2 & 68.50$\pm$0.5 & 71.39$\pm$0.2 \\ \hline \hline ASPEST (T=1) & 62.53$\pm$1.0 & 80.72$\pm$1.5 & 69.44$\pm$0.1 & 71.79$\pm$0.2 \\ \hline ASPEST (T=2) & 75.08$\pm$1.4 & 89.70$\pm$0.7 & 70.68$\pm$0.2 & 72.56$\pm$0.3 \\ \hline ASPEST (T=5) & 81.57$\pm$1.8 & 95.43$\pm$0.1 & 71.23$\pm$0.1 & 73.19$\pm$0.1 \\ \hline ASPEST (T=10) & 88.84$\pm$1.0 & 96.62$\pm$0.2 & 71.61$\pm$0.2 & 
73.27$\pm$0.2 \\ \hline ASPEST (T=20) & 91.26$\pm$0.9 & 97.32$\pm$0.1 & 70.57$\pm$0.4 & 73.32$\pm$0.3 \\ \bottomrule \end{tabular} \caption[]{\small Ablation study results for the effect of the number of rounds $T$. The mean and std of each metric over three random runs are reported (mean$\pm$std). All numbers are percentages. } \label{tab:num-round-ablation} \end{table} \subsection{Effect of the number of models N in the ensemble} \label{app:num-model-effect} In this section, we study the effect of the number of models $N$ in the ensemble for DE+Margin and ASPEST. The results in Table~\ref{tab:num-models-ablation} show that larger $N$ usually leads to better results. However, larger $N$ also means a larger computational cost. In our experiments, we simply set $N=5$ by default. \begin{table}[htb] \centering \begin{tabular}{l|cc|cc} \toprule Dataset & \multicolumn{2}{c|}{MNIST$\to$SVHN} & \multicolumn{2}{c}{DomainNet R$\to$C (easy)} \\ \hline Metric & \multicolumn{2}{c|}{AUC $\uparrow$} & \multicolumn{2}{c}{AUC $\uparrow$} \\ \hline Labeling Budget & 100 & 500 & 500 & 1000 \\ \hline \hline DE+Margin (N=2) & 67.41$\pm$3.9 & 91.20$\pm$0.8 & 65.82$\pm$0.5 & 67.72$\pm$0.4 \\ \hline DE+Margin (N=3) & 77.53$\pm$1.5 & 93.41$\pm$0.1 & 67.54$\pm$0.4 & 69.61$\pm$0.2 \\ \hline DE+Margin (N=4) & 74.46$\pm$2.7 & 93.65$\pm$0.3 & 68.09$\pm$0.2 & 70.65$\pm$0.3 \\ \hline DE+Margin (N=5) & 78.59$\pm$1.4 & 94.31$\pm$0.6 & 68.85$\pm$0.4 & 71.29$\pm$0.3 \\ \hline DE+Margin (N=6) & 79.34$\pm$0.7 & 94.40$\pm$0.1 & 68.63$\pm$0.2 & 71.65$\pm$0.3 \\ \hline DE+Margin (N=7) & 80.30$\pm$1.5 & 93.97$\pm$0.2 & 69.41$\pm$0.1 & 71.78$\pm$0.3 \\ \hline DE+Margin (N=8) & 78.91$\pm$1.5 & 94.52$\pm$0.2 & 69.00$\pm$0.0 & 71.88$\pm$0.4 \\ \hline \hline ASPEST (N=2) & 80.38$\pm$1.2 & 96.26$\pm$0.0 & 69.14$\pm$0.3 & 71.36$\pm$0.3 \\ \hline ASPEST (N=3) & 84.86$\pm$1.0 & 96.60$\pm$0.2 & 69.91$\pm$0.2 & 72.25$\pm$0.2 \\ \hline ASPEST (N=4) & 84.94$\pm$0.3 & 96.76$\pm$0.1 & 70.68$\pm$0.2 & 73.09$\pm$0.2 \\ 
\hline ASPEST (N=5) & 88.84$\pm$1.0 & 96.62$\pm$0.2 & 71.61$\pm$0.2 & 73.27$\pm$0.2 \\ \hline ASPEST (N=6) & 84.51$\pm$0.5 & 96.66$\pm$0.2 & 71.20$\pm$0.2 & 73.42$\pm$0.3 \\ \hline ASPEST (N=7) & 86.70$\pm$2.3 & 96.90$\pm$0.2 & 71.16$\pm$0.2 & 73.50$\pm$0.1 \\ \hline ASPEST (N=8) & 88.59$\pm$0.9 & 97.01$\pm$0.1 & 71.62$\pm$0.3 & 73.76$\pm$0.2 \\ \bottomrule \end{tabular} \caption[]{\small Ablation study results for the effect of the number of models $N$ in the ensemble. The mean and std of each metric over three random runs are reported (mean$\pm$std). All numbers are percentages. } \label{tab:num-models-ablation} \end{table} \subsection{Effect of the upper bound in pseudo-labeled set construction} \label{app:aspest-upper-bound} When constructing the pseudo-labeled set $R$ using Eq.~(\ref{eq:pseudo_labeling}), we exclude those test data points with confidence equal to $1$. In this section, we study whether setting such an upper bound can improve performance. The results in Table~\ref{tab:aspest-upper-threshold-ablation} show that when the labeling budget is small, setting such an upper bound can improve performance significantly. However, when the labeling budget is large, setting such an upper bound may not improve the performance. Since we focus on the low labeling budget region, we decide to set such an upper bound for the proposed ASPEST method. 
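A minimal sketch of this construction (hypothetical function and variable names; `threshold` plays the role of the confidence cutoff in Eq.~(\ref{eq:pseudo_labeling})), where points with ensemble confidence equal to $1$ are excluded by the upper bound:

```python
import numpy as np

def build_pseudo_labeled_set(probs, threshold):
    """Illustrative sketch of pseudo-labeled set construction with an upper
    bound: keep unlabeled test points whose ensemble confidence is at least
    `threshold` but strictly below 1 (points at confidence 1 are excluded).

    probs: (num_points, num_classes) averaged ensemble probabilities.
    Returns the kept indices and their pseudo-labels."""
    conf = probs.max(axis=1)                       # ensemble confidence
    keep = (conf >= threshold) & (conf < 1.0)      # strict upper bound at 1
    pseudo_labels = probs.argmax(axis=1)
    return np.flatnonzero(keep), pseudo_labels[keep]
```

Without the `conf < 1.0` condition, maximally confident points would dominate the pseudo-labeled set even though, as noted above, some of them can be confidently wrong.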
\begin{table}[htb] \centering \begin{tabular}{l|cc|cc} \toprule Dataset & \multicolumn{2}{c|}{MNIST$\to$SVHN} & \multicolumn{2}{c}{DomainNet R$\to$C (easy)} \\ \hline Metric & \multicolumn{2}{c|}{AUC $\uparrow$} & \multicolumn{2}{c}{AUC $\uparrow$} \\ \hline Labeling Budget & 100 & 500 & 500 & 1000 \\ \hline \hline ASPEST without upper bound & 86.95$\pm$1.4 & 96.59$\pm$0.1 & 71.39$\pm$0.1 & \textbf{73.52}$\pm$0.2 \\ \hline ASPEST & \textbf{88.84}$\pm$1.0 & \textbf{96.62}$\pm$0.2 & \textbf{71.61}$\pm$0.2 & 73.27$\pm$0.2 \\ \bottomrule \end{tabular} \caption[]{\small Ablation study results for the effect of setting an upper bound when constructing the pseudo-labeled set $R$ in ASPEST. The mean and std of each metric over three random runs are reported (mean$\pm$std). All numbers are percentages. \textbf{Bold} numbers are superior results. } \label{tab:aspest-upper-threshold-ablation} \end{table} \subsection{Ensemble accuracy after each round of active learning} \label{app:each-round-ens-acc} We evaluate the accuracy of the ensemble model $f_t$ in the ASPEST algorithm after the $t$-th round of active learning. Recall that $f_t$ contains $N$ models $f_t^1, \dots, f_t^N$ and $f_t(\bfx)=\operatornamewithlimits{arg\!\max}_{k\in \mathcal{Y}} \frac{1}{N} \sum_{j=1}^N f_t^j(\bfx \mid k)$. The results in Table~\ref{tab:each-round-acc-results} show that after each round of active learning, the accuracy of the ensemble model will be improved significantly. 
\begin{table*}[htb] \centering \begin{tabular}{l|cc|cc} \toprule Metric & \multicolumn{4}{c}{Ensemble Test Accuracy} \\ \hline Dataset & \multicolumn{2}{c|}{MNIST$\to$SVHN} & \multicolumn{2}{c}{DomainNet R$\to$C (easy)} \\ \hline Labeling Budget & 100 & 500 & 500 & 1000 \\ \hline \hline Round 0 & 24.67 & 24.87 & 37.33 & 37.46 \\ Round 1 & 24.91 & 43.80 & 39.61 & 39.67 \\ Round 2 & 37.75 & 54.91 & 41.15 & 41.55 \\ Round 3 & 45.62 & 64.15 & 41.97 & 43.24 \\ Round 4 & 50.94 & 71.65 & 42.57 & 45.09 \\ Round 5 & 56.75 & 77.23 & 43.85 & 45.62 \\ Round 6 & 59.82 & 79.97 & 44.20 & 46.60 \\ Round 7 & 63.10 & 81.43 & 45.02 & 47.51 \\ Round 8 & 67.49 & 82.78 & 45.17 & 48.59 \\ Round 9 & 69.93 & 84.70 & 45.80 & 48.66 \\ Round 10 & 71.14 & 85.48 & 46.36 & 49.70 \\ \bottomrule \end{tabular} \caption[]{\small Ensemble test accuracy of ASPEST after each round of active learning. All numbers are percentages. } \label{tab:each-round-acc-results} \end{table*} \subsection{Empirical analysis for checkpoint ensemble} \label{app:ckpt-ens-emprical-analysis} In this section, we analyze why the proposed checkpoint ensemble can improve selective prediction performance. We postulate two rationales: (1) the checkpoint ensemble helps with generalization; (2) the checkpoint ensemble reduces overconfident wrong predictions. Regarding (1), when fine-tuning the model on the small set of selected labeled test data, we hope that the fine-tuned model generalizes to the remaining unlabeled test data. However, since the selected test set is small, overfitting is a risk, and some intermediate checkpoints along the training path may generalize better than the end checkpoint. By using a checkpoint ensemble, we may obtain an ensemble that generalizes better to the remaining unlabeled test data.
Although standard techniques like cross-validation and early stopping can also reduce overfitting, they are not suitable in the active selective prediction setup since the amount of labeled test data is small. Regarding (2), when fine-tuning the model on the small set of selected labeled test data, the model can get increasingly confident on the test data. Since there exist high-confidence mis-classified test points, incorporating intermediate checkpoints along the training path into the ensemble can reduce the average confidence of the ensemble on those mis-classified test points. By using checkpoint ensemble, we might get an ensemble that has better confidence estimation for selective prediction on the test data. We perform experiments on the image dataset MNIST$\to$SVHN and the text dataset Amazon Review to verify these two hypotheses. We employ one-round active learning with a labeling budget of 100 samples. We use the margin sampling method for sample selection and fine-tune a single model on the selected labeled test data for 200 epochs. We first evaluate the median confidence of the model on the correctly classified and mis-classified test data respectively when fine-tuning the model on the selected labeled test data. In Figure~\ref{fig:median-conf}, we show that during fine-tuning, the model gets increasingly confident not only on the correctly classified test data, but also on the mis-classified test data. \begin{figure*} \caption{\small Evaluating the median confidence of the model on the correctly classified and mis-classified test data respectively when fine-tuning the model on the selected labeled test data. } \label{fig:median-conf} \end{figure*} We then evaluate the Accuracy, the area under the receiver operator characteristic curve (AUROC) and the area under the accuracy-coverage curve (AUC) metrics of the checkpoints during fine-tuning and the checkpoint ensemble constructed after fine-tuning on the target test dataset. 
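The two evaluation metrics used here can be computed directly from per-example confidences and correctness indicators. Below is a minimal NumPy sketch of the rank-based AUROC and of the area under the accuracy-coverage curve; function names and the toy inputs are ours, not the paper's code.

```python
import numpy as np

def auroc(conf, correct):
    """P(confidence of a random correct point > confidence of a random
    mis-classified point), counting ties as 1/2 (rank-based AUROC)."""
    pos, neg = conf[correct], conf[~correct]
    greater = (pos[:, None] > neg[None, :]).mean()
    ties = (pos[:, None] == neg[None, :]).mean()
    return greater + 0.5 * ties

def accuracy_coverage_auc(conf, correct):
    """Area under the accuracy-coverage curve: sort by confidence
    (most confident first) and average the running accuracy."""
    order = np.argsort(-conf)
    running_acc = np.cumsum(correct[order]) / np.arange(1, len(conf) + 1)
    return running_acc.mean()

# Toy example: four test points, one mis-classified at confidence 0.6.
conf = np.array([0.9, 0.8, 0.6, 0.4])
correct = np.array([True, True, False, True])
```

On this toy input, the mis-classified point outranks one correct point, so AUROC is 2/3 and the accuracy-coverage AUC averages the running accuracies 1, 1, 2/3, 3/4.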
The AUROC metric is equivalent to the probability that a randomly chosen correctly classified input has a higher confidence score than a randomly chosen mis-classified input. Thus, the AUROC metric can measure the quality of the confidence score for selective prediction. The results in Figure~\ref{fig:ckpt-ensemble-analysis} show that in the fine-tuning path, different checkpoints have different target test accuracy and the end checkpoint may not have the optimal target test accuracy. The checkpoint ensemble can have better target test accuracy than the end checkpoint. Also, in the fine-tuning path, different checkpoints have different confidence estimation (the quality of confidence estimation is measured by the metric AUROC) on the target test data and the end checkpoint may not have the optimal confidence estimation. The checkpoint ensemble can have better confidence estimation than the end checkpoint. Furthermore, in the fine-tuning path, different checkpoints have different selective prediction performance (measured by the metric AUC) on the target test data and the end checkpoint may not have the optimal selective prediction performance. The checkpoint ensemble can have better selective prediction performance than the end checkpoint. \begin{figure*} \caption{\small Evaluating the checkpoints during fine-tuning and the checkpoint ensemble constructed after fine-tuning on the target test dataset. } \label{fig:ckpt-ensemble-analysis} \end{figure*} \subsection{Empirical analysis for self-training} \label{app:self-training-emprical-analysis} In this section, we analyze why the proposed self-training can improve selective prediction performance. Our hypothesis is that after fine-tuning the models on the selected labeled test data, the checkpoint ensemble constructed is less confident on the test data $U_X$ compared to the deep ensemble (obtained by ensembling the end checkpoints). 
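As a toy illustration of how soft pseudo-labels from a less confident ensemble differ from hard pseudo-labels, the sketch below computes the cross-entropy against soft targets; the function name and all numbers are invented, and this is not the paper's exact self-training objective.

```python
import numpy as np

def soft_label_cross_entropy(student_probs, soft_labels, eps=1e-12):
    """Cross-entropy of the model being trained against soft
    pseudo-labels (e.g., checkpoint-ensemble softmax outputs)."""
    return float(-(soft_labels * np.log(student_probs + eps)).sum(axis=1).mean())

soft = np.array([[0.7, 0.3]])   # a less confident ensemble target
hard = np.array([[1.0, 0.0]])   # a hard pseudo-label, for comparison
student = np.array([[0.6, 0.4]])
loss_soft = soft_label_cross_entropy(student, soft)
loss_hard = soft_label_cross_entropy(student, hard)
```

Unlike a hard label, the soft target keeps probability mass on the alternative class, so gradient descent on `loss_soft` does not push the student all the way to probability one on a single class.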
Thus, using the softmax outputs of the checkpoint ensemble as soft pseudo-labels for self-training can alleviate the overconfidence issue and improve selective prediction performance. We perform experiments on the image dataset MNIST$\to$SVHN and the text dataset Amazon Review to verify this hypothesis. To see the effect of self-training better, we only employ one-round active learning (i.e., only apply one-round self-training) with a labeling budget of 100 samples. We visualize the histogram of the confidence scores on the test data $U_X$ for the deep ensemble and the checkpoint ensemble after fine-tuning. We also evaluate the area under the receiver operator characteristic curve (AUROC) and the area under the accuracy-coverage curve (AUC) metrics of the checkpoint ensemble before and after self-training. We use the AUROC metric to measure the quality of the confidence score for selective prediction. The results in Figure~\ref{fig:conf-hist-ckpt-ensemble} show that the checkpoint ensemble is less confident on the test data $U_X$ compared to the deep ensemble. On the high-confidence region (i.e., confidence $\geq\eta$, where $\eta$ is the confidence threshold used for constructing the pseudo-labeled set $R$; we set $\eta=0.9$ in our experiments), the checkpoint ensemble is also less confident than the deep ensemble. Moreover, the results in Table~\ref{tab:self-training-analysis} show that after self-training, both AUROC and AUC metrics of the checkpoint ensemble are improved significantly. Therefore, self-training can alleviate the overconfidence issue and improve selective prediction performance. \begin{figure*} \caption{\small Histograms of the confidence scores on the test data $U_X$ for the deep ensemble and the checkpoint ensemble after fine-tuning. 
} \label{fig:conf-hist-ckpt-ensemble} \end{figure*} \begin{table*}[htb] \centering \begin{tabular}{l|cc|cc} \toprule Dataset & \multicolumn{2}{c|}{MNIST$\to$SVHN} & \multicolumn{2}{c}{Amazon Review} \\ \hline Metric & AUROC$\uparrow$ & AUC$\uparrow$ & AUROC$\uparrow$ & AUC$\uparrow$ \\ \hline Before self-training & 73.92 & 66.75 & 67.44 & 76.24 \\ After self-training & 74.31 & 67.37 & 67.92 & 76.80 \\ \bottomrule \end{tabular} \caption[]{\small Evaluating the AUROC and AUC metrics of the checkpoint ensemble before and after self-training. All numbers are percentages. } \label{tab:self-training-analysis} \end{table*} \subsection{Training with unsupervised domain adaptation} \label{app:train-uda} In this section, we study whether incorporating Unsupervised Domain Adaptation (UDA) techniques into training could improve the selective prediction performance. UDA techniques are mainly proposed to adapt the representation learned on the labeled source domain data to the target domain with unlabeled data from the target domain~\citep{liu2022deep}. We can easily incorporate those UDA techniques into SR (Algorithm~\ref{alg:sr}), DE (Algorithm~\ref{alg:de}), and the proposed ASPEST (Algorithm~\ref{alg:aspest}) by adding unsupervised training losses into the training objectives. We consider the method \texttt{DE with UDA} and the method \texttt{ASPEST with UDA}. The algorithm for DE with UDA is presented in Algorithm~\ref{alg:de-uda} and the algorithm for ASPEST with UDA is presented in Algorithm~\ref{alg:aspest-uda}. We consider UDA techniques based on representation matching where the goal is to minimize the distance between the distribution of the representation on $\Dtr$ and that on $U_X$. Suppose the model $f$ is a composition of a prediction function $h$ and a representation function $\phi$ (i.e., $f(x)=h(\phi(x))$). Then $L_{UDA}(\Dtr, U_X;\theta) = d(p_{\Dtr}^\phi, p_{U_X}^\phi)$, which is a representation matching loss. 
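DANN and CDAN, considered next, learn the distance $d$ adversarially with a domain discriminator. As a much simpler stand-in that illustrates what a representation-matching loss $d(p_{\Dtr}^\phi, p_{U_X}^\phi)$ measures, the sketch below computes the squared distance between mean representations (a linear-kernel MMD); the function name and numbers are invented.

```python
import numpy as np

def mean_feature_distance(phi_src, phi_tgt):
    """A simple instance of d(p_src^phi, p_tgt^phi): squared Euclidean
    distance between the mean source and target representations.
    DANN/CDAN instead estimate this mismatch with an adversarially
    trained domain discriminator."""
    return float(np.sum((phi_src.mean(axis=0) - phi_tgt.mean(axis=0)) ** 2))

# Toy 2-D embeddings: source mean is (1, 0), target mean is (1, 2),
# so the distance is 0^2 + 2^2 = 4.
src = np.array([[0.0, 0.0], [2.0, 0.0]])
tgt = np.array([[1.0, 1.0], [1.0, 3.0]])
d = mean_feature_distance(src, tgt)
```

Minimizing such a term with respect to the representation $\phi$ pulls the source and target feature distributions together, which is the shared goal of the UDA losses plugged into the algorithms below.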
We consider the representation matching losses from the state-of-the-art UDA methods DANN~\citep{ganin2016domain} and CDAN~\citep{long2018conditional}. We evaluate two instantiations of Algorithm~\ref{alg:de-uda} -- \texttt{DE with DANN} and \texttt{DE with CDAN}, and two instantiations of Algorithm~\ref{alg:aspest-uda} -- \texttt{ASPEST with DANN} and \texttt{ASPEST with CDAN}. The values of the hyper-parameters are the same as those described in the paper except that we set $n_s=20$. For DANN and CDAN, we set the hyper-parameter balancing the source classifier and the domain discriminator to $0.1$. The results are shown in Table~\ref{tab:mnist-uda-results} (MNIST$\to$SVHN), Table~\ref{tab:cifar10-uda-results} (CIFAR-10$\to$CINIC-10), Table~\ref{tab:fmow-uda-results} (FMoW), Table~\ref{tab:amazon-review-uda-results} (Amazon Review), Table~\ref{tab:domainnet-rc-uda-results} (DomainNet R$\to$C), Table~\ref{tab:domainnet-rp-uda-results} (DomainNet R$\to$P), Table~\ref{tab:domainnet-rs-uda-results} (DomainNet R$\to$S) and Table~\ref{tab:otto-uda-results} (Otto). From the results, we can see that ASPEST outperforms (or is on par with) \texttt{DE with DANN} and \texttt{DE with CDAN} across different datasets, even though ASPEST does not use UDA techniques. We further show that combining ASPEST with UDA can achieve even better performance. For example, on MNIST$\to$SVHN, \texttt{ASPEST with DANN} improves the mean AUC from $96.62\%$ to $97.03\%$ when the labeling budget is $500$. However, in some cases, combining ASPEST with DANN or CDAN leads to much worse results. For example, on MNIST$\to$SVHN, when the labeling budget is $100$, combining ASPEST with DANN or CDAN reduces the mean AUC by over $4\%$. This might be because, in those cases, DANN or CDAN fails to align the representations between the source and target domains.
Existing work also shows that UDA methods may not perform stably across different kinds of distribution shifts and can sometimes even yield accuracy degradation~\citep{johansson2019support,sagawa2021extending}, so our findings align with prior observations. \begin{algorithm}[h] \caption{DE with Unsupervised Domain Adaptation} \label{alg:de-uda} \begin{algorithmic} \REQUIRE A training dataset $\Dtr$, an unlabeled test dataset $U_X$, the number of rounds $T$, the total labeling budget $M$, a source-trained model $\bar{f}$, an acquisition function $a(B, f, g)$, the number of models in the ensemble $N$, the number of initial training epochs $n_s$, and a hyper-parameter $\lambda$. \STATE Let $f^j_0 = \bar{f}$ for $j=1, \dots, N$. \STATE Fine-tune each model $f^j_0$ in the ensemble via SGD for $n_s$ training epochs independently using the following training objective with different randomness: \begin{align} \min_{\theta^j} \quad \mathbb{E}_{(\bfx,y)\in \Dtr} \quad \ell_{CE}(\bfx, y;\theta^j) + L_{UDA}(\Dtr, U_X;\theta^j) \end{align} where $L_{UDA}$ is a loss function for unsupervised domain adaptation. \STATE Let $B_0=\emptyset$. \STATE Let $g_t(\bfx) = \max_{k\in \mathcal{Y}} f_t(\bfx\mid k)$. \FOR{$t=1, \cdots, T$} \STATE Select a batch $B_{t}$ with a size of $m=[\frac{M}{T}]$ from $U_X$ for labeling via: \begin{align} B_{t}=\arg\max_{B \subset U_X\setminus(\cup_{l=0}^{t-1}B_l), |B|=m} a(B, f_{t-1}, g_{t-1}) \end{align} \STATE Use an oracle to assign ground-truth labels to the examples in $B_{t}$ to get $\tilde{B}_{t}$. 
\STATE Fine-tune each model $f^j_{t-1}$ in the ensemble via SGD independently using the following training objective with different randomness: \begin{align} \min_{\theta^j} \quad \mathbb{E}_{(\bfx,y)\in \cup_{l=1}^t \tilde{B}_l} \quad \ell_{CE}(\bfx, y;\theta^j) + \lambda \cdot \mathbb{E}_{(\bfx,y)\in \Dtr} \quad \ell_{CE}(\bfx, y;\theta^j) + L_{UDA}(\Dtr, U_X;\theta^j) \end{align} where $\theta^j$ denotes the parameters of $f^j_{t-1}$. \STATE Let $f^j_t = f^j_{t-1}$. \ENDFOR \ENSURE The classifier $f=f_T$ and the selection scoring function $g=\max_{k\in \mathcal{Y}} f(\bfx\mid k)$. \end{algorithmic} \end{algorithm} \begin{algorithm*}[htb] \caption{ASPEST with Unsupervised Domain Adaptation} \label{alg:aspest-uda} \begin{algorithmic} \REQUIRE A training set $\mathcal{D}^{tr}$, an unlabeled test set $U_X$, the number of rounds $T$, the labeling budget $M$, a source-trained model $\bar{f}$, the number of models $N$, the number of initial training epochs $n_s$, a checkpoint epoch $c_e$, a threshold $\eta$, a sub-sampling fraction $p$, and a hyper-parameter $\lambda$. \STATE Let $f^j_0 = \bar{f}$ for $j=1, \dots, N$. \STATE Set $N_e=0$ and $P=\textbf{0}_{n\times K}$. \STATE Fine-tune each $f^j_0$ for $n_s$ training epochs using the following training objective: \begin{align} \min_{\theta^j} \quad \mathbb{E}_{(\bfx,y)\in \Dtr} \quad \ell_{CE}(\bfx, y;\theta^j) + L_{UDA}(\Dtr, U_X;\theta^j), \end{align} where $L_{UDA}$ is a loss function for unsupervised domain adaptation. During fine-tuning, update $P$ and $N_e$ using Eq.~(\ref{eq:p_update}) every $c_e$ training epochs. \FOR{$t=1, \cdots, T$} \STATE Select a batch $B_{t}$ from $U_X$ for labeling using the sample selection objective~(\ref{obj:aspest-sample-selection}). \STATE Use an oracle to assign ground-truth labels to the examples in $B_{t}$ to get $\tilde{B}_{t}$. \STATE Set $N_e=0$ and $P=\textbf{0}_{n\times K}$. 
\STATE Fine-tune each $f^j_{t-1}$ using the following training objective: \begin{align} \min_{\theta^j} \quad \mathbb{E}_{(\bfx,y)\in \cup_{l=1}^t \tilde{B}_l} \quad \ell_{CE}(\bfx, y;\theta^j) + \lambda \cdot \mathbb{E}_{(\bfx,y)\in \Dtr} \quad \ell_{CE}(\bfx, y;\theta^j) + L_{UDA}(\Dtr, U_X;\theta^j), \end{align} During fine-tuning, update $P$ and $N_e$ using Eq.~(\ref{eq:p_update}) every $c_e$ training epochs. \STATE Let $f^j_t = f^j_{t-1}$. \STATE Construct the pseudo-labeled set $R$ via Eq.~(\ref{eq:pseudo_labeling}) and create $R_{\text{sub}}$ by randomly sampling up to $[p\cdot n]$ data points from $R$. \STATE Train each $f^j_t$ further via SGD using the objective~(\ref{obj:aspest_self_train}) and update $P$ and $N_e$ using Eq.~(\ref{eq:p_update}) every $c_e$ training epochs. \ENDFOR \ENSURE The classifier $f(\bfx_i) = \operatornamewithlimits{arg\!\max}_{k \in \mathcal{Y}} P_{i,k}$ and the selection scoring function $g(\bfx_i) = \max_{k\in \mathcal{Y}} P_{i,k}$. \end{algorithmic} \end{algorithm*} \begin{table*}[htb] \centering \begin{adjustbox}{width=\columnwidth,center} \begin{tabular}{l|ccc|ccc|ccc} \toprule Dataset & \multicolumn{9}{c}{MNIST$\to$SVHN} \\ \hline Metric & \multicolumn{3}{c|}{$cov|acc\geq 90\%$ $\uparrow$} & \multicolumn{3}{c|}{$acc|cov\geq 90\%$ $\uparrow$} & \multicolumn{3}{c}{AUC $\uparrow$} \\ \hline Labeling Budget & 100 & 500 & 1000 & 100 & 500 & 1000 & 100 & 500 & 1000 \\ \hline \hline DE with DANN + Uniform & 27.27$\pm$1.8 & 72.78$\pm$2.0 & 87.05$\pm$0.5 & 63.95$\pm$1.4 & 82.99$\pm$0.8 & 88.64$\pm$0.2 & 80.37$\pm$0.7 & 93.25$\pm$0.4 & 96.05$\pm$0.1 \\ \hline DE with DANN + Entropy & 11.33$\pm$8.2 & 74.04$\pm$2.2 & 91.06$\pm$1.4 & 58.28$\pm$2.1 & 83.64$\pm$0.9 & 90.41$\pm$0.5 & 74.62$\pm$1.6 & 93.45$\pm$0.5 & 96.47$\pm$0.2 \\ \hline DE with DANN + Confidence & 15.68$\pm$6.3 & 76.34$\pm$3.1 & 93.96$\pm$1.2 & 61.32$\pm$3.0 & 85.02$\pm$0.9 & 91.64$\pm$0.4 & 76.43$\pm$3.0 & 93.85$\pm$0.6 & 96.97$\pm$0.3 \\ \hline DE with DANN + Margin & 
30.64$\pm$2.1 & 83.44$\pm$0.9 & 96.17$\pm$0.5 & 66.79$\pm$0.9 & 87.30$\pm$0.4 & 92.71$\pm$0.2 & 82.14$\pm$0.8 & 95.40$\pm$0.3 & 97.60$\pm$0.1 \\ \hline DE with DANN + Avg-KLD & 22.30$\pm$3.0 & 78.13$\pm$2.1 & 93.42$\pm$1.0 & 63.22$\pm$2.0 & 85.40$\pm$0.8 & 91.47$\pm$0.5 & 78.88$\pm$1.6 & 94.25$\pm$0.5 & 97.02$\pm$0.2 \\ \hline DE with DANN + CLUE & 16.42$\pm$13.6 & 72.27$\pm$2.8 & 86.71$\pm$0.4 & 61.79$\pm$2.7 & 82.72$\pm$1.1 & 88.46$\pm$0.2 & 77.47$\pm$3.4 & 93.33$\pm$0.5 & 96.21$\pm$0.0 \\ \hline DE with DANN + BADGE & 25.41$\pm$10.9 & 78.83$\pm$1.2 & 90.94$\pm$1.1 & 63.93$\pm$4.4 & 85.27$\pm$0.5 & 90.45$\pm$0.5 & 79.82$\pm$4.1 & 94.58$\pm$0.3 & 96.89$\pm$0.1 \\ \hline \hline DE with CDAN + Uniform & 28.10$\pm$4.8 & 73.15$\pm$0.7 & 87.50$\pm$0.6 & 63.95$\pm$2.7 & 83.10$\pm$0.3 & 88.86$\pm$0.3 & 80.28$\pm$2.2 & 93.44$\pm$0.1 & 96.13$\pm$0.2 \\ \hline DE with CDAN + Entropy & 6.94$\pm$9.8 & 74.38$\pm$1.5 & 90.77$\pm$1.3 & 59.90$\pm$2.3 & 84.14$\pm$0.4 & 90.32$\pm$0.6 & 76.04$\pm$2.0 & 93.48$\pm$0.3 & 96.38$\pm$0.2 \\ \hline DE with CDAN + Confidence & 13.47$\pm$10.2 & 75.15$\pm$2.8 & 92.77$\pm$0.7 & 60.98$\pm$2.0 & 84.62$\pm$0.9 & 91.23$\pm$0.3 & 76.19$\pm$2.8 & 93.62$\pm$0.6 & 96.63$\pm$0.1 \\ \hline DE with CDAN + Margin & 22.44$\pm$3.3 & 81.84$\pm$2.5 & 96.07$\pm$0.2 & 62.89$\pm$3.8 & 86.71$\pm$1.0 & 92.64$\pm$0.0 & 78.69$\pm$2.6 & 94.89$\pm$0.5 & 97.57$\pm$0.0 \\ \hline DE with CDAN + Avg-KLD & 20.23$\pm$4.1 & 80.62$\pm$1.7 & 93.13$\pm$2.5 & 62.23$\pm$2.7 & 86.34$\pm$0.6 & 91.30$\pm$1.0 & 77.68$\pm$2.5 & 94.81$\pm$0.4 & 96.97$\pm$0.4 \\ \hline DE with CDAN + CLUE & 7.47$\pm$6.4 & 72.61$\pm$2.9 & 87.22$\pm$0.2 & 57.82$\pm$2.9 & 82.50$\pm$1.3 & 88.62$\pm$0.1 & 73.33$\pm$2.3 & 93.38$\pm$0.7 & 96.31$\pm$0.0 \\ \hline DE with CDAN + BADGE & 26.88$\pm$3.5 & 79.21$\pm$0.1 & 92.50$\pm$0.7 & 65.69$\pm$1.7 & 85.32$\pm$0.1 & 91.18$\pm$0.4 & 81.10$\pm$1.3 & 94.73$\pm$0.1 & 97.17$\pm$0.2 \\ \hline \hline ASPEST (ours) & \textbf{52.10}$\pm$4.0 & 89.22$\pm$0.9 & 98.70$\pm$0.4 
& \textbf{76.10}$\pm$1.5 & 89.62$\pm$0.4 & 93.92$\pm$0.3 & \textbf{88.84}$\pm$1.0 & 96.62$\pm$0.2 & 98.06$\pm$0.1 \\ \hline ASPEST with DANN (ours) & 37.90$\pm$2.4 & \textbf{91.61}$\pm$0.6 & 99.39$\pm$0.4 & 69.45$\pm$1.7 & \textbf{90.70}$\pm$0.3 & 94.42$\pm$0.4 & 84.55$\pm$1.0 & \textbf{97.03}$\pm$0.1 & 98.23$\pm$0.1 \\ \hline ASPEST with CDAN (ours) & 30.97$\pm$11.7 & 91.39$\pm$0.6 & \textbf{99.50}$\pm$0.3 & 67.58$\pm$3.2 & 90.60$\pm$0.3 & \textbf{94.46}$\pm$0.2 & 82.20$\pm$3.3 & 96.95$\pm$0.1 & \textbf{98.26}$\pm$0.1 \\ \bottomrule \end{tabular} \end{adjustbox} \caption[]{\small Results of evaluating DE with UDA and ASPEST with UDA on MNIST$\to$SVHN. The mean and std of each metric over three random runs are reported (mean$\pm$std). All numbers are percentages. \textbf{Bold} numbers are superior results. } \label{tab:mnist-uda-results} \end{table*} \begin{table*}[htb] \centering \begin{adjustbox}{width=\columnwidth,center} \begin{tabular}{l|ccc|ccc|ccc} \toprule Dataset & \multicolumn{9}{c}{CIFAR-10$\to$CINIC-10} \\ \hline Metric & \multicolumn{3}{c|}{$cov|acc\geq 90\%$ $\uparrow$} & \multicolumn{3}{c|}{$acc|cov\geq 90\%$ $\uparrow$} & \multicolumn{3}{c}{AUC $\uparrow$} \\ \hline Labeling Budget & 500 & 1000 & 2000 & 500 & 1000 & 2000 & 500 & 1000 & 2000 \\ \hline \hline DE with DANN + Uniform & 58.85$\pm$0.3 & 59.39$\pm$0.2 & 60.04$\pm$0.1 & 77.06$\pm$0.2 & 77.33$\pm$0.2 & 77.84$\pm$0.1 & 90.40$\pm$0.1 & 90.60$\pm$0.1 & 90.73$\pm$0.1 \\ \hline DE with DANN + Entropy & 59.42$\pm$0.4 & 60.86$\pm$0.3 & 64.52$\pm$0.3 & 78.14$\pm$0.2 & 79.20$\pm$0.1 & 81.31$\pm$0.1 & 90.72$\pm$0.0 & 91.06$\pm$0.1 & 92.02$\pm$0.0 \\ \hline DE with DANN + Confidence & 59.44$\pm$0.6 & 61.08$\pm$0.3 & 65.12$\pm$0.2 & 78.19$\pm$0.1 & 79.38$\pm$0.0 & 81.29$\pm$0.1 & 90.73$\pm$0.1 & 91.26$\pm$0.1 & 92.06$\pm$0.0 \\ \hline DE with DANN + Margin & 59.81$\pm$0.3 & 62.26$\pm$0.4 & 65.58$\pm$0.4 & 78.15$\pm$0.0 & 79.25$\pm$0.1 & 81.05$\pm$0.1 & 90.76$\pm$0.1 & 91.30$\pm$0.1 & 92.11$\pm$0.0 \\ 
\hline DE with DANN + Avg-KLD & 60.50$\pm$0.5 & 62.04$\pm$0.1 & 65.08$\pm$0.2 & 78.32$\pm$0.1 & 79.31$\pm$0.1 & 81.07$\pm$0.0 & 90.89$\pm$0.1 & 91.34$\pm$0.0 & 92.11$\pm$0.0 \\ \hline DE with DANN + CLUE & 60.20$\pm$0.5 & 61.69$\pm$0.2 & 64.08$\pm$0.2 & 77.84$\pm$0.2 & 78.35$\pm$0.2 & 79.38$\pm$0.1 & 90.73$\pm$0.2 & 91.07$\pm$0.1 & 91.63$\pm$0.0 \\ \hline DE with DANN + BADGE & 60.18$\pm$0.4 & 62.15$\pm$0.2 & 65.31$\pm$0.6 & 77.70$\pm$0.1 & 78.54$\pm$0.1 & 79.81$\pm$0.2 & 90.72$\pm$0.1 & 91.19$\pm$0.1 & 91.86$\pm$0.1 \\ \hline \hline DE with CDAN + Uniform & 58.72$\pm$0.2 & 59.49$\pm$0.5 & 60.28$\pm$0.2 & 77.16$\pm$0.0 & 77.52$\pm$0.1 & 77.90$\pm$0.1 & 90.45$\pm$0.1 & 90.65$\pm$0.0 & 90.78$\pm$0.1 \\ \hline DE with CDAN + Entropy & 58.73$\pm$0.4 & 60.82$\pm$0.5 & 64.45$\pm$0.2 & 77.95$\pm$0.1 & 79.20$\pm$0.1 & 81.04$\pm$0.1 & 90.57$\pm$0.1 & 91.10$\pm$0.1 & 91.86$\pm$0.1 \\ \hline DE with CDAN + Confidence & 59.10$\pm$0.6 & 61.03$\pm$0.6 & 64.60$\pm$0.2 & 77.92$\pm$0.0 & 79.26$\pm$0.2 & 81.07$\pm$0.0 & 90.59$\pm$0.0 & 91.10$\pm$0.2 & 91.96$\pm$0.0 \\ \hline DE with CDAN + Margin & 59.88$\pm$0.5 & 61.57$\pm$0.9 & 64.82$\pm$0.4 & 78.09$\pm$0.3 & 79.02$\pm$0.2 & 80.82$\pm$0.1 & 90.73$\pm$0.1 & 91.17$\pm$0.2 & 91.98$\pm$0.1 \\ \hline DE with CDAN + Avg-KLD & 60.51$\pm$0.1 & 61.71$\pm$0.5 & 65.03$\pm$0.3 & 78.20$\pm$0.2 & 79.29$\pm$0.2 & 81.15$\pm$0.1 & 90.85$\pm$0.0 & 91.19$\pm$0.1 & 92.07$\pm$0.1 \\ \hline DE with CDAN + CLUE & 60.12$\pm$0.5 & 61.77$\pm$0.3 & 64.06$\pm$0.2 & 77.88$\pm$0.1 & 78.38$\pm$0.2 & 79.42$\pm$0.2 & 90.73$\pm$0.1 & 91.08$\pm$0.1 & 91.64$\pm$0.0 \\ \hline DE with CDAN + BADGE & 60.28$\pm$0.7 & 61.84$\pm$0.2 & 65.29$\pm$0.3 & 77.68$\pm$0.2 & 78.53$\pm$0.1 & 79.84$\pm$0.2 & 90.73$\pm$0.1 & 91.17$\pm$0.0 & 91.95$\pm$0.1 \\ \hline \hline ASPEST (ours) & 60.38$\pm$0.3 & 63.34$\pm$0.2 & 66.81$\pm$0.3 & 78.23$\pm$0.1 & 79.49$\pm$0.1 & 81.25$\pm$0.1 & 90.95$\pm$0.0 & 91.60$\pm$0.0 & 92.33$\pm$0.1 \\ \hline ASPEST with DANN (ours) & \textbf{61.69}$\pm$0.2 
& \textbf{63.58}$\pm$0.4 & \textbf{66.81}$\pm$0.4 & \textbf{78.68}$\pm$0.1 & \textbf{79.68}$\pm$0.1 & 81.42$\pm$0.1 & \textbf{91.16}$\pm$0.1 & \textbf{91.66}$\pm$0.1 & 92.37$\pm$0.1 \\ \hline ASPEST with CDAN (ours) & 61.00$\pm$0.2 & 62.80$\pm$0.4 & 66.78$\pm$0.1 & 78.56$\pm$0.1 & 79.54$\pm$0.1 & \textbf{81.49}$\pm$0.0 & 91.13$\pm$0.0 & 91.57$\pm$0.1 & \textbf{92.41}$\pm$0.0 \\ \bottomrule \end{tabular} \end{adjustbox} \caption[]{\small Results of evaluating DE with UDA and ASPEST with UDA on CIFAR-10$\to$CINIC-10. The mean and std of each metric over three random runs are reported (mean$\pm$std). All numbers are percentages. \textbf{Bold} numbers are superior results. } \label{tab:cifar10-uda-results} \end{table*} \begin{table*}[htb] \centering \begin{adjustbox}{width=\columnwidth,center} \begin{tabular}{l|ccc|ccc|ccc} \toprule Dataset & \multicolumn{9}{c}{FMoW} \\ \hline Metric & \multicolumn{3}{c|}{$cov|acc\geq 70\%$ $\uparrow$} & \multicolumn{3}{c|}{$acc|cov\geq 70\%$ $\uparrow$} & \multicolumn{3}{c}{AUC $\uparrow$} \\ \hline Labeling Budget & 500 & 1000 & 2000 & 500 & 1000 & 2000 & 500 & 1000 & 2000 \\ \hline \hline DE with DANN + Uniform & 46.11$\pm$0.6 & 51.77$\pm$0.3 & 62.76$\pm$0.5 & 57.62$\pm$0.3 & 60.67$\pm$0.4 & 66.21$\pm$0.2 & 70.17$\pm$0.3 & 72.46$\pm$0.3 & 76.83$\pm$0.2 \\ \hline DE with DANN + Entropy & 44.36$\pm$0.7 & 48.19$\pm$0.3 & 59.52$\pm$0.8 & 56.78$\pm$0.1 & 59.51$\pm$0.0 & 65.75$\pm$0.3 & 69.09$\pm$0.2 & 71.02$\pm$0.2 & 75.15$\pm$0.3 \\ \hline DE with DANN + Confidence & 44.46$\pm$0.5 & 49.32$\pm$0.1 & 61.47$\pm$0.3 & 57.04$\pm$0.3 & 60.51$\pm$0.3 & 66.61$\pm$0.1 & 69.14$\pm$0.1 & 71.50$\pm$0.1 & 75.70$\pm$0.1 \\ \hline DE with DANN + Margin & 48.09$\pm$0.4 & 54.35$\pm$0.5 & 70.11$\pm$0.4 & 59.07$\pm$0.2 & 62.79$\pm$0.2 & 70.02$\pm$0.1 & 70.76$\pm$0.1 & 73.29$\pm$0.2 & 78.25$\pm$0.1 \\ \hline DE with DANN + Avg-KLD & 48.42$\pm$0.1 & 55.95$\pm$0.2 & 68.73$\pm$1.1 & 59.06$\pm$0.2 & 63.44$\pm$0.2 & 69.41$\pm$0.5 & 70.84$\pm$0.1 & 
73.83$\pm$0.1 & 77.91$\pm$0.4 \\ \hline DE with DANN + CLUE & 44.14$\pm$0.6 & 46.15$\pm$0.2 & 49.02$\pm$0.5 & 56.01$\pm$0.3 & 56.89$\pm$0.2 & 58.66$\pm$0.3 & 69.11$\pm$0.2 & 70.16$\pm$0.2 & 71.46$\pm$0.2 \\ \hline DE with DANN + BADGE & 48.57$\pm$0.5 & 54.47$\pm$0.5 & 67.69$\pm$0.9 & 58.61$\pm$0.2 & 61.67$\pm$0.0 & 68.71$\pm$0.5 & 71.17$\pm$0.2 & 73.64$\pm$0.1 & 78.65$\pm$0.3 \\ \hline \hline DE with CDAN + Uniform & 46.08$\pm$0.7 & 51.92$\pm$0.8 & 62.87$\pm$0.2 & 57.45$\pm$0.1 & 60.73$\pm$0.4 & 66.19$\pm$0.2 & 69.93$\pm$0.3 & 72.57$\pm$0.4 & 76.87$\pm$0.1 \\ \hline DE with CDAN + Entropy & 44.42$\pm$0.3 & 49.32$\pm$0.1 & 60.11$\pm$0.3 & 56.83$\pm$0.1 & 60.04$\pm$0.2 & 65.95$\pm$0.2 & 69.18$\pm$0.2 & 71.34$\pm$0.3 & 75.44$\pm$0.3 \\ \hline DE with CDAN + Confidence & 44.75$\pm$0.1 & 49.34$\pm$0.1 & 62.80$\pm$1.0 & 57.09$\pm$0.1 & 60.50$\pm$0.2 & 66.94$\pm$0.4 & 69.27$\pm$0.1 & 71.60$\pm$0.2 & 76.14$\pm$0.3 \\ \hline DE with CDAN + Margin & 47.48$\pm$0.7 & 54.48$\pm$0.7 & 70.25$\pm$0.9 & 58.98$\pm$0.4 & 62.98$\pm$0.3 & 70.10$\pm$0.4 & 70.55$\pm$0.3 & 73.46$\pm$0.2 & 78.39$\pm$0.3 \\ \hline DE with CDAN + Avg-KLD & 48.43$\pm$0.2 & 54.37$\pm$0.4 & 68.93$\pm$0.6 & 59.36$\pm$0.2 & 62.71$\pm$0.2 & 69.54$\pm$0.2 & 71.12$\pm$0.2 & 73.35$\pm$0.2 & 77.97$\pm$0.2 \\ \hline DE with CDAN + CLUE & 44.09$\pm$0.3 & 46.11$\pm$0.5 & 48.90$\pm$0.1 & 55.78$\pm$0.3 & 56.98$\pm$0.2 & 58.46$\pm$0.2 & 69.03$\pm$0.1 & 70.02$\pm$0.2 & 71.31$\pm$0.1 \\ \hline DE with CDAN + BADGE & 47.93$\pm$0.2 & 54.61$\pm$0.2 & 67.01$\pm$0.5 & 58.16$\pm$0.1 & 61.81$\pm$0.1 & 68.36$\pm$0.2 & 70.91$\pm$0.2 & 73.63$\pm$0.1 & 78.52$\pm$0.2 \\ \hline \hline ASPEST (ours) & \textbf{53.05}$\pm$0.4 & \textbf{59.86}$\pm$0.4 & \textbf{76.52}$\pm$0.6 & 61.18$\pm$0.2 & \textbf{65.18}$\pm$0.2 & \textbf{72.86}$\pm$0.3 & 71.12$\pm$0.2 & \textbf{74.25}$\pm$0.2 & \textbf{79.93}$\pm$0.1 \\ \hline ASPEST with DANN (ours) & 51.02$\pm$0.9 & 58.63$\pm$1.1 & 72.97$\pm$0.9 & 61.10$\pm$0.5 & 64.98$\pm$0.4 & 71.21$\pm$0.4 & 
71.03$\pm$0.3 & 73.79$\pm$0.4 & 77.84$\pm$0.3 \\ \hline ASPEST with CDAN (ours) & 51.40$\pm$0.6 & 58.21$\pm$0.6 & 73.94$\pm$0.6 & \textbf{61.38}$\pm$0.2 & 65.04$\pm$0.2 & 71.63$\pm$0.2 & \textbf{71.17}$\pm$0.1 & 73.59$\pm$0.1 & 78.04$\pm$0.2 \\ \bottomrule \end{tabular} \end{adjustbox} \caption[]{\small Results of evaluating DE with UDA and ASPEST with UDA on FMoW. The mean and std of each metric over three random runs are reported (mean$\pm$std). All numbers are percentages. \textbf{Bold} numbers are superior results. } \label{tab:fmow-uda-results} \end{table*} \begin{table*}[htb] \centering \begin{adjustbox}{width=\columnwidth,center} \begin{tabular}{l|ccc|ccc|ccc} \toprule Dataset & \multicolumn{9}{c}{Amazon Review} \\ \hline Metric & \multicolumn{3}{c|}{$cov|acc\geq 80\%$ $\uparrow$} & \multicolumn{3}{c|}{$acc|cov\geq 80\%$ $\uparrow$} & \multicolumn{3}{c}{AUC $\uparrow$} \\ \hline Labeling Budget & 500 & 1000 & 2000 & 500 & 1000 & 2000 & 500 & 1000 & 2000 \\ \hline \hline DE with DANN + Uniform & 38.55$\pm$3.3 & 37.25$\pm$1.8 & 39.21$\pm$1.9 & 69.06$\pm$0.6 & 68.94$\pm$0.1 & 69.41$\pm$0.2 & 77.52$\pm$0.7 & 77.03$\pm$0.4 & 77.70$\pm$0.2 \\ \hline DE with DANN + Entropy & 38.22$\pm$2.3 & 41.85$\pm$0.8 & 41.57$\pm$1.3 & 69.48$\pm$0.3 & \textbf{70.71}$\pm$0.3 & 71.55$\pm$0.2 & 77.49$\pm$0.5 & 78.39$\pm$0.2 & 78.58$\pm$0.1 \\ \hline DE with DANN + Confidence & 38.01$\pm$1.0 & 38.36$\pm$2.5 & 38.89$\pm$1.3 & 69.45$\pm$0.1 & 70.16$\pm$0.3 & 71.44$\pm$0.2 & 77.54$\pm$0.2 & 77.58$\pm$0.5 & 78.48$\pm$0.3 \\ \hline DE with DANN + Margin & 36.82$\pm$1.3 & 36.89$\pm$1.3 & 41.98$\pm$1.5 & 69.35$\pm$0.3 & 69.63$\pm$0.3 & 71.27$\pm$0.2 & 77.30$\pm$0.3 & 77.23$\pm$0.3 & 78.34$\pm$0.3 \\ \hline DE with DANN + Avg-KLD & 37.15$\pm$2.9 & 38.21$\pm$1.3 & 42.46$\pm$1.4 & 69.38$\pm$0.4 & 69.79$\pm$0.2 & 71.21$\pm$0.2 & 77.25$\pm$0.6 & 77.72$\pm$0.3 & 78.68$\pm$0.3 \\ \hline DE with DANN + CLUE & \textbf{40.23}$\pm$4.0 & 34.71$\pm$1.8 & 31.38$\pm$0.9 & 68.95$\pm$0.7 & 68.07$\pm$0.2 
& 67.44$\pm$0.3 & 77.62$\pm$1.0 & 76.27$\pm$0.6 & 75.60$\pm$0.2 \\ \hline DE with DANN + BADGE & 37.51$\pm$1.8 & 37.00$\pm$0.9 & 41.62$\pm$2.3 & 68.98$\pm$0.4 & 69.27$\pm$0.1 & 70.20$\pm$0.4 & 77.20$\pm$0.4 & 77.21$\pm$0.1 & 78.31$\pm$0.5 \\ \hline \hline DE with CDAN + Uniform & 37.81$\pm$0.3 & 37.83$\pm$2.7 & 39.52$\pm$0.8 & 68.93$\pm$0.1 & 69.16$\pm$0.7 & 69.50$\pm$0.3 & 77.16$\pm$0.1 & 77.30$\pm$0.7 & 77.74$\pm$0.3 \\ \hline DE with CDAN + Entropy & 37.99$\pm$0.8 & 37.68$\pm$1.1 & 42.55$\pm$0.9 & \textbf{69.54}$\pm$0.3 & 70.01$\pm$0.2 & 71.52$\pm$0.2 & 77.52$\pm$0.2 & 77.61$\pm$0.1 & 78.63$\pm$0.1 \\ \hline DE with CDAN + Confidence & 35.76$\pm$0.9 & 38.69$\pm$2.8 & 41.43$\pm$2.1 & 69.24$\pm$0.0 & 70.45$\pm$0.4 & 71.50$\pm$0.4 & 77.08$\pm$0.2 & 77.82$\pm$0.4 & 78.47$\pm$0.3 \\ \hline DE with CDAN + Margin & 37.68$\pm$2.9 & 37.43$\pm$1.0 & 42.18$\pm$1.3 & 69.50$\pm$0.3 & 69.80$\pm$0.4 & 71.29$\pm$0.0 & 77.50$\pm$0.5 & 77.31$\pm$0.3 & 78.46$\pm$0.3 \\ \hline DE with CDAN + Avg-KLD & 37.85$\pm$1.6 & 40.71$\pm$0.9 & 44.35$\pm$0.9 & 69.41$\pm$0.3 & 70.29$\pm$0.1 & 71.28$\pm$0.2 & 77.28$\pm$0.5 & 78.11$\pm$0.2 & 78.86$\pm$0.2 \\ \hline DE with CDAN + CLUE & 34.85$\pm$2.7 & 34.03$\pm$1.3 & 30.70$\pm$0.4 & 68.70$\pm$0.3 & 67.84$\pm$0.1 & 67.12$\pm$0.3 & 76.95$\pm$0.7 & 76.23$\pm$0.4 & 75.36$\pm$0.4 \\ \hline DE with CDAN + BADGE & 39.47$\pm$0.2 & 39.29$\pm$1.1 & 41.64$\pm$0.9 & 69.33$\pm$0.0 & 69.34$\pm$0.2 & 70.58$\pm$0.2 & 77.52$\pm$0.2 & 77.49$\pm$0.2 & 78.24$\pm$0.3 \\ \hline \hline ASPEST (ours) & 38.44$\pm$0.7 & 40.96$\pm$0.8 & 45.77$\pm$0.1 & 69.31$\pm$0.3 & 70.17$\pm$0.2 & \textbf{71.60}$\pm$0.2 & 77.69$\pm$0.1 & 78.35$\pm$0.2 & \textbf{79.51}$\pm$0.2 \\ \hline ASPEST with DANN (ours) & 40.22$\pm$0.5 & 41.99$\pm$1.4 & \textbf{45.84}$\pm$0.1 & 69.42$\pm$0.1 & 70.30$\pm$0.1 & 71.58$\pm$0.2 & \textbf{78.00}$\pm$0.1 & 78.34$\pm$0.3 & 79.43$\pm$0.1 \\ \hline ASPEST with CDAN (ours) & 40.02$\pm$0.5 & \textbf{42.46}$\pm$0.6 & 44.95$\pm$0.4 & 69.50$\pm$0.1 & 
70.37$\pm$0.2 & 71.42$\pm$0.0 & 77.80$\pm$0.1 & \textbf{78.57}$\pm$0.1 & 79.25$\pm$0.0 \\ \bottomrule \end{tabular} \end{adjustbox} \caption[]{\small Results of evaluating DE with UDA and ASPEST with UDA on Amazon Review. The mean and std of each metric over three random runs are reported (mean$\pm$std). All numbers are percentages. \textbf{Bold} numbers are superior results. } \label{tab:amazon-review-uda-results} \end{table*} \begin{table*}[htb] \centering \begin{adjustbox}{width=\columnwidth,center} \begin{tabular}{l|ccc|ccc|ccc} \toprule Dataset & \multicolumn{9}{c}{DomainNet R$\to$C (easy)} \\ \hline Metric & \multicolumn{3}{c|}{$cov|acc\geq 80\%$ $\uparrow$} & \multicolumn{3}{c|}{$acc|cov\geq 80\%$ $\uparrow$} & \multicolumn{3}{c}{AUC $\uparrow$} \\ \hline Labeling Budget & 500 & 1000 & 2000 & 500 & 1000 & 2000 & 500 & 1000 & 2000 \\ \hline \hline DE with DANN + Uniform & 33.53$\pm$0.5 & 36.28$\pm$0.3 & 40.13$\pm$1.0 & 50.57$\pm$0.5 & 52.19$\pm$0.1 & 55.15$\pm$0.1 & 69.34$\pm$0.3 & 70.98$\pm$0.2 & 73.50$\pm$0.3 \\ \hline DE with DANN + Entropy & 28.66$\pm$1.0 & 34.47$\pm$0.1 & 42.77$\pm$0.7 & 48.13$\pm$0.6 & 52.70$\pm$0.3 & 59.01$\pm$0.2 & 66.60$\pm$0.5 & 70.64$\pm$0.1 & 75.45$\pm$0.2 \\ \hline DE with DANN + Confidence & 29.92$\pm$0.4 & 35.29$\pm$1.0 & 43.33$\pm$0.4 & 48.61$\pm$0.1 & 53.36$\pm$0.5 & 59.72$\pm$0.3 & 67.23$\pm$0.2 & 70.92$\pm$0.5 & 75.89$\pm$0.3 \\ \hline DE with DANN + Margin & 35.19$\pm$0.3 & 39.63$\pm$0.2 & 46.51$\pm$0.5 & 52.29$\pm$0.3 & 55.60$\pm$0.2 & 60.97$\pm$0.4 & 70.70$\pm$0.1 & 73.41$\pm$0.1 & 77.24$\pm$0.3 \\ \hline DE with DANN + Avg-KLD & 36.02$\pm$0.6 & 39.67$\pm$0.5 & 47.20$\pm$0.8 & 53.00$\pm$0.3 & 55.75$\pm$0.3 & 61.22$\pm$0.3 & 71.19$\pm$0.3 & 73.51$\pm$0.2 & 77.46$\pm$0.2 \\ \hline DE with DANN + CLUE & 32.26$\pm$1.5 & 35.09$\pm$0.4 & 35.66$\pm$0.3 & 50.21$\pm$0.0 & 50.90$\pm$0.1 & 51.50$\pm$0.1 & 69.17$\pm$0.2 & 70.20$\pm$0.2 & 70.82$\pm$0.1 \\ \hline DE with DANN + BADGE & 35.27$\pm$0.5 & 38.88$\pm$0.3 & 45.97$\pm$0.7 
& 52.15$\pm$0.3 & 54.89$\pm$0.1 & 60.03$\pm$0.3 & 70.65$\pm$0.1 & 72.95$\pm$0.1 & 76.87$\pm$0.1 \\ \hline \hline DE with CDAN + Uniform & 33.49$\pm$0.6 & 36.01$\pm$0.7 & 39.93$\pm$0.2 & 50.46$\pm$0.2 & 51.89$\pm$0.1 & 55.23$\pm$0.2 & 69.32$\pm$0.3 & 70.86$\pm$0.3 & 73.55$\pm$0.2 \\ \hline DE with CDAN + Entropy & 29.50$\pm$0.5 & 33.86$\pm$0.3 & 42.24$\pm$0.5 & 48.01$\pm$0.1 & 52.52$\pm$0.3 & 58.96$\pm$0.2 & 66.82$\pm$0.2 & 70.28$\pm$0.1 & 75.33$\pm$0.1 \\ \hline DE with CDAN + Confidence & 29.21$\pm$1.0 & 34.92$\pm$0.6 & 43.36$\pm$0.4 & 48.48$\pm$0.4 & 52.85$\pm$0.4 & 59.88$\pm$0.4 & 66.82$\pm$0.5 & 70.61$\pm$0.4 & 75.93$\pm$0.3 \\ \hline DE with CDAN + Margin & 35.87$\pm$0.7 & 38.37$\pm$0.4 & 46.42$\pm$0.6 & 52.58$\pm$0.1 & 55.28$\pm$0.2 & 61.20$\pm$0.2 & 70.95$\pm$0.2 & 72.95$\pm$0.2 & 77.26$\pm$0.1 \\ \hline DE with CDAN + Avg-KLD & 36.21$\pm$0.6 & 40.08$\pm$0.3 & 47.62$\pm$0.4 & 52.95$\pm$0.3 & 55.93$\pm$0.1 & 61.56$\pm$0.2 & 71.29$\pm$0.3 & 73.60$\pm$0.1 & 77.58$\pm$0.2 \\ \hline DE with CDAN + CLUE & 31.74$\pm$2.1 & 35.11$\pm$0.2 & 35.87$\pm$0.5 & 49.99$\pm$0.2 & 51.39$\pm$0.2 & 51.43$\pm$0.2 & 69.04$\pm$0.3 & 70.35$\pm$0.0 & 70.82$\pm$0.3 \\ \hline DE with CDAN + BADGE & 34.74$\pm$0.5 & 38.68$\pm$0.7 & 45.87$\pm$1.0 & 51.80$\pm$0.3 & 54.75$\pm$0.2 & 60.22$\pm$0.1 & 70.38$\pm$0.1 & 72.90$\pm$0.2 & 76.85$\pm$0.2 \\ \hline \hline ASPEST (ours) & 37.38$\pm$0.1 & 39.98$\pm$0.3 & 48.29$\pm$1.0 & 54.56$\pm$0.3 & 56.95$\pm$0.1 & 62.69$\pm$0.2 & 71.61$\pm$0.2 & 73.27$\pm$0.2 & 77.40$\pm$0.4 \\ \hline ASPEST with DANN (ours) & \textbf{37.41}$\pm$0.8 & 42.45$\pm$1.0 & 49.74$\pm$0.6 & \textbf{55.60}$\pm$0.1 & 58.29$\pm$0.2 & 63.64$\pm$0.2 & 71.88$\pm$0.2 & 74.18$\pm$0.4 & 78.09$\pm$0.0 \\ \hline ASPEST with CDAN (ours) & 36.60$\pm$1.2 & \textbf{42.96}$\pm$0.6 & \textbf{50.86}$\pm$0.2 & 55.55$\pm$0.2 & \textbf{58.71}$\pm$0.2 & \textbf{63.85}$\pm$0.2 & \textbf{71.99}$\pm$0.2 & \textbf{74.60}$\pm$0.2 & \textbf{78.45}$\pm$0.3 \\ \bottomrule \end{tabular} \end{adjustbox} 
\caption[]{\small Results of evaluating DE with UDA and ASPEST with UDA on DomainNet R$\to$C. The mean and std of each metric over three random runs are reported (mean$\pm$std). All numbers are percentages. \textbf{Bold} numbers are superior results. } \label{tab:domainnet-rc-uda-results} \end{table*} \begin{table*}[htb] \centering \begin{adjustbox}{width=\columnwidth,center} \begin{tabular}{l|ccc|ccc|ccc} \toprule Dataset & \multicolumn{9}{c}{DomainNet R$\to$P (medium)} \\ \hline Metric & \multicolumn{3}{c|}{$cov|acc\geq 70\%$ $\uparrow$} & \multicolumn{3}{c|}{$acc|cov\geq 70\%$ $\uparrow$} & \multicolumn{3}{c}{AUC $\uparrow$} \\ \hline Labeling Budget & 500 & 1000 & 2000 & 500 & 1000 & 2000 & 500 & 1000 & 2000 \\ \hline \hline DE with DANN + Uniform & 26.98$\pm$0.1 & 28.34$\pm$0.5 & 30.63$\pm$0.2 & 41.96$\pm$0.2 & 42.89$\pm$0.2 & 44.73$\pm$0.1 & 57.04$\pm$0.1 & 58.10$\pm$0.2 & 59.87$\pm$0.1 \\ \hline DE with DANN + Entropy & 24.75$\pm$0.4 & 27.02$\pm$0.5 & 30.10$\pm$0.2 & 40.29$\pm$0.4 & 42.34$\pm$0.2 & 45.78$\pm$0.2 & 55.19$\pm$0.3 & 57.12$\pm$0.3 & 60.21$\pm$0.1 \\ \hline DE with DANN + Confidence & 22.41$\pm$0.9 & 27.03$\pm$0.6 & 31.70$\pm$0.6 & 39.05$\pm$0.5 & 42.61$\pm$0.2 & 46.60$\pm$0.2 & 53.66$\pm$0.6 & 57.35$\pm$0.3 & 60.93$\pm$0.4 \\ \hline DE with DANN + Margin & 29.16$\pm$0.1 & 30.58$\pm$0.3 & 33.64$\pm$0.6 & 43.78$\pm$0.2 & 45.17$\pm$0.2 & 47.69$\pm$0.4 & 58.76$\pm$0.1 & 59.94$\pm$0.0 & 62.19$\pm$0.4 \\ \hline DE with DANN + Avg-KLD & 29.52$\pm$0.1 & 31.17$\pm$0.4 & 34.09$\pm$0.3 & 43.84$\pm$0.3 & 45.33$\pm$0.2 & 48.18$\pm$0.2 & 58.89$\pm$0.2 & 60.25$\pm$0.2 & 62.54$\pm$0.2 \\ \hline DE with DANN + CLUE & 27.48$\pm$0.5 & 27.83$\pm$0.2 & 28.39$\pm$0.5 & 42.05$\pm$0.3 & 42.34$\pm$0.2 & 42.65$\pm$0.1 & 57.32$\pm$0.3 & 57.64$\pm$0.2 & 57.99$\pm$0.2 \\ \hline DE with DANN + BADGE & 28.92$\pm$0.1 & 30.36$\pm$0.2 & 33.86$\pm$0.3 & 43.38$\pm$0.1 & 44.85$\pm$0.1 & 47.64$\pm$0.3 & 58.38$\pm$0.0 & 59.82$\pm$0.1 & 62.26$\pm$0.2 \\ \hline \hline DE with CDAN +
Uniform & 26.96$\pm$0.4 & 28.33$\pm$0.2 & 29.98$\pm$0.4 & 41.77$\pm$0.3 & 42.85$\pm$0.2 & 44.23$\pm$0.4 & 56.86$\pm$0.4 & 58.01$\pm$0.0 & 59.42$\pm$0.4 \\ \hline DE with CDAN + Entropy & 24.91$\pm$0.4 & 26.30$\pm$0.9 & 30.33$\pm$0.4 & 40.34$\pm$0.3 & 42.07$\pm$0.6 & 45.79$\pm$0.2 & 55.38$\pm$0.4 & 56.70$\pm$0.8 & 60.23$\pm$0.2 \\ \hline DE with CDAN + Confidence & 24.58$\pm$0.7 & 27.11$\pm$0.5 & 31.07$\pm$0.5 & 40.32$\pm$0.2 & 42.64$\pm$0.3 & 46.25$\pm$0.3 & 55.14$\pm$0.3 & 57.40$\pm$0.3 & 60.63$\pm$0.3 \\ \hline DE with CDAN + Margin & 28.33$\pm$0.1 & 30.17$\pm$0.3 & 33.54$\pm$0.4 & 43.44$\pm$0.4 & 44.77$\pm$0.1 & 47.56$\pm$0.2 & 58.31$\pm$0.2 & 59.65$\pm$0.1 & 62.17$\pm$0.2 \\ \hline DE with CDAN + Avg-KLD & 28.69$\pm$0.2 & 30.99$\pm$0.9 & 34.30$\pm$0.2 & 43.64$\pm$0.2 & 45.34$\pm$0.2 & 48.22$\pm$0.1 & 58.60$\pm$0.1 & 60.15$\pm$0.4 & 62.67$\pm$0.1 \\ \hline DE with CDAN + CLUE & 27.52$\pm$0.6 & 27.96$\pm$0.2 & 28.18$\pm$0.5 & 42.02$\pm$0.2 & 42.44$\pm$0.1 & 42.67$\pm$0.2 & 57.21$\pm$0.3 & 57.70$\pm$0.1 & 58.04$\pm$0.3 \\ \hline DE with CDAN + BADGE & 28.79$\pm$0.1 & 30.28$\pm$0.1 & 33.77$\pm$0.4 & 43.45$\pm$0.0 & 44.73$\pm$0.3 & 47.84$\pm$0.2 & 58.47$\pm$0.1 & 59.64$\pm$0.2 & 62.37$\pm$0.2 \\ \hline \hline ASPEST (ours) & 29.69$\pm$0.1 & 32.50$\pm$0.3 & 35.46$\pm$0.6 & 44.96$\pm$0.1 & 46.77$\pm$0.2 & 49.42$\pm$0.1 & 58.74$\pm$0.0 & 60.36$\pm$0.0 & 62.84$\pm$0.2 \\ \hline ASPEST with DANN (ours) & \textbf{31.75}$\pm$0.4 & \textbf{33.58}$\pm$0.3 & 36.96$\pm$0.2 & \textbf{46.16}$\pm$0.1 & 47.64$\pm$0.2 & \textbf{50.37}$\pm$0.3 & \textbf{59.63}$\pm$0.2 & 61.06$\pm$0.1 & \textbf{63.75}$\pm$0.1 \\ \hline ASPEST with CDAN (ours) & 30.39$\pm$0.4 & 33.57$\pm$0.3 & \textbf{37.53}$\pm$0.7 & 45.90$\pm$0.1 & \textbf{47.71}$\pm$0.2 & 50.31$\pm$0.2 & 59.13$\pm$0.3 & \textbf{61.17}$\pm$0.2 & 63.69$\pm$0.3 \\ \bottomrule \end{tabular} \end{adjustbox} \caption[]{\small Results of evaluating DE with UDA and ASPEST with UDA on DomainNet R$\to$P. 
The mean and std of each metric over three random runs are reported (mean$\pm$std). All numbers are percentages. \textbf{Bold} numbers are superior results. } \label{tab:domainnet-rp-uda-results} \end{table*} \begin{table*}[htb] \centering \begin{adjustbox}{width=\columnwidth,center} \begin{tabular}{l|ccc|ccc|ccc} \toprule Dataset & \multicolumn{9}{c}{DomainNet R$\to$S (hard)} \\ \hline Metric & \multicolumn{3}{c|}{$cov|acc\geq 70\%$ $\uparrow$} & \multicolumn{3}{c|}{$acc|cov\geq 70\%$ $\uparrow$} & \multicolumn{3}{c}{AUC $\uparrow$} \\ \hline Labeling Budget & 500 & 1000 & 2000 & 500 & 1000 & 2000 & 500 & 1000 & 2000 \\ \hline \hline DE with DANN + Uniform & 17.55$\pm$0.4 & 19.82$\pm$0.3 & 23.57$\pm$0.4 & 32.61$\pm$0.5 & 34.56$\pm$0.3 & 37.73$\pm$0.2 & 47.60$\pm$0.5 & 49.92$\pm$0.4 & 53.52$\pm$0.1 \\ \hline DE with DANN + Entropy & 10.77$\pm$0.8 & 15.38$\pm$0.5 & 20.11$\pm$0.5 & 27.78$\pm$0.7 & 31.09$\pm$0.2 & 36.39$\pm$0.3 & 41.69$\pm$0.7 & 45.62$\pm$0.3 & 51.05$\pm$0.4 \\ \hline DE with DANN + Confidence & 10.64$\pm$1.2 & 15.22$\pm$0.4 & 20.25$\pm$0.5 & 28.09$\pm$1.0 & 31.76$\pm$0.3 & 36.86$\pm$0.8 & 41.94$\pm$1.3 & 46.19$\pm$0.3 & 51.48$\pm$0.7 \\ \hline DE with DANN + Margin & 17.90$\pm$0.7 & 20.44$\pm$0.6 & 25.52$\pm$0.4 & 33.61$\pm$0.1 & 35.79$\pm$0.5 & 40.29$\pm$0.3 & 48.67$\pm$0.1 & 51.03$\pm$0.6 & 55.64$\pm$0.4 \\ \hline DE with DANN + Avg-KLD & 18.02$\pm$1.0 & 21.22$\pm$0.2 & 25.46$\pm$0.2 & 34.00$\pm$0.2 & 36.51$\pm$0.2 & 40.72$\pm$0.2 & 49.05$\pm$0.2 & 51.79$\pm$0.2 & 55.95$\pm$0.2 \\ \hline DE with DANN + CLUE & 15.77$\pm$0.3 & 18.14$\pm$0.7 & 19.49$\pm$0.4 & 32.10$\pm$0.1 & 33.42$\pm$0.3 & 34.50$\pm$0.3 & 47.18$\pm$0.2 & 48.63$\pm$0.3 & 50.03$\pm$0.3 \\ \hline DE with DANN + BADGE & 16.84$\pm$0.9 & 20.88$\pm$0.3 & 25.11$\pm$0.3 & 33.97$\pm$0.1 & 36.20$\pm$0.2 & 40.01$\pm$0.3 & 48.87$\pm$0.2 & 51.46$\pm$0.2 & 55.33$\pm$0.2 \\ \hline \hline DE with CDAN + Uniform & 17.33$\pm$0.5 & 19.79$\pm$0.1 & 22.99$\pm$0.5 & 32.47$\pm$0.5 & 34.59$\pm$0.3 &
37.88$\pm$0.2 & 47.49$\pm$0.5 & 50.02$\pm$0.2 & 53.51$\pm$0.3 \\ \hline DE with CDAN + Entropy & 12.48$\pm$0.8 & 15.19$\pm$0.8 & 20.23$\pm$0.0 & 28.83$\pm$0.1 & 32.41$\pm$0.4 & 36.57$\pm$0.1 & 42.93$\pm$0.5 & 47.00$\pm$0.3 & 51.24$\pm$0.2 \\ \hline DE with CDAN + Confidence & 11.23$\pm$0.6 & 13.93$\pm$0.1 & 18.45$\pm$1.3 & 28.67$\pm$0.3 & 31.35$\pm$0.4 & 35.56$\pm$0.8 & 42.87$\pm$0.5 & 45.40$\pm$0.7 & 49.80$\pm$1.0 \\ \hline DE with CDAN + Margin & 18.06$\pm$0.7 & 20.39$\pm$0.3 & 25.05$\pm$0.3 & 33.98$\pm$0.2 & 35.76$\pm$0.2 & 40.11$\pm$0.1 & 49.15$\pm$0.1 & 50.92$\pm$0.1 & 55.27$\pm$0.1 \\ \hline DE with CDAN + Avg-KLD & 18.63$\pm$1.0 & 20.80$\pm$0.3 & 25.49$\pm$0.9 & 34.19$\pm$0.4 & 36.41$\pm$0.2 & 40.53$\pm$0.5 & 49.45$\pm$0.5 & 51.58$\pm$0.1 & 55.74$\pm$0.5 \\ \hline DE with CDAN + CLUE & 16.51$\pm$0.3 & 18.82$\pm$0.1 & 19.47$\pm$0.1 & 32.23$\pm$0.2 & 33.83$\pm$0.4 & 34.72$\pm$0.3 & 47.40$\pm$0.2 & 49.11$\pm$0.2 & 49.98$\pm$0.3 \\ \hline DE with CDAN + BADGE & 17.52$\pm$0.8 & 21.48$\pm$0.5 & 25.35$\pm$0.4 & 33.53$\pm$0.5 & 36.19$\pm$0.4 & 40.31$\pm$0.3 & 48.67$\pm$0.5 & 51.65$\pm$0.3 & 55.62$\pm$0.3 \\ \hline \hline ASPEST (ours) & 17.86$\pm$0.4 & 20.42$\pm$0.4 & 25.87$\pm$0.4 & 35.17$\pm$0.1 & 37.28$\pm$0.3 & 41.46$\pm$0.2 & 49.62$\pm$0.1 & 51.61$\pm$0.4 & 55.90$\pm$0.2 \\ \hline ASPEST with DANN (ours) & 16.35$\pm$1.2 & \textbf{23.18}$\pm$0.4 & 28.00$\pm$0.1 & 36.56$\pm$0.2 & \textbf{39.40}$\pm$0.4 & 42.94$\pm$0.1 & 50.58$\pm$0.4 & \textbf{53.73}$\pm$0.3 & 57.25$\pm$0.1 \\ \hline ASPEST with CDAN (ours) & \textbf{18.81}$\pm$1.1 & 22.95$\pm$0.8 & \textbf{28.17}$\pm$0.2 & \textbf{36.85}$\pm$0.3 & 39.10$\pm$0.2 & \textbf{43.25}$\pm$0.3 & \textbf{51.14}$\pm$0.3 & 53.47$\pm$0.2 & \textbf{57.26}$\pm$0.2 \\ \bottomrule \end{tabular} \end{adjustbox} \caption[]{\small Results of evaluating DE with UDA and ASPEST with UDA on DomainNet R$\to$S. The mean and std of each metric over three random runs are reported (mean$\pm$std). All numbers are percentages. 
\textbf{Bold} numbers are superior results. } \label{tab:domainnet-rs-uda-results} \end{table*} \begin{table*}[htb] \centering \begin{adjustbox}{width=\columnwidth,center} \begin{tabular}{l|ccc|ccc|ccc} \toprule Dataset & \multicolumn{9}{c}{Otto} \\ \hline Metric & \multicolumn{3}{c|}{$cov|acc\geq 80\%$ $\uparrow$} & \multicolumn{3}{c|}{$acc|cov\geq 80\%$ $\uparrow$} & \multicolumn{3}{c}{AUC $\uparrow$} \\ \hline Labeling Budget & 500 & 1000 & 2000 & 500 & 1000 & 2000 & 500 & 1000 & 2000 \\ \hline \hline DE with DANN + Uniform & 70.35$\pm$0.5 & 72.42$\pm$0.4 & 75.63$\pm$0.7 & 76.12$\pm$0.3 & 77.04$\pm$0.1 & 78.25$\pm$0.1 & 86.67$\pm$0.1 & 87.16$\pm$0.1 & 88.09$\pm$0.1 \\ \hline DE with DANN + Entropy & 75.27$\pm$0.3 & 81.25$\pm$0.1 & 92.23$\pm$0.3 & 78.14$\pm$0.1 & 80.45$\pm$0.0 & 83.73$\pm$0.1 & 87.73$\pm$0.1 & 88.91$\pm$0.0 & 90.90$\pm$0.1 \\ \hline DE with DANN + Confidence & 74.66$\pm$0.3 & 81.62$\pm$0.1 & 92.57$\pm$0.6 & 78.05$\pm$0.2 & 80.50$\pm$0.0 & 83.67$\pm$0.2 & 87.51$\pm$0.1 & 89.06$\pm$0.1 & 90.94$\pm$0.1 \\ \hline DE with DANN + Margin & 75.47$\pm$0.4 & 82.56$\pm$0.7 & 91.86$\pm$0.9 & 78.26$\pm$0.1 & 80.79$\pm$0.2 & 83.61$\pm$0.3 & 87.87$\pm$0.1 & 89.08$\pm$0.0 & 90.88$\pm$0.1 \\ \hline DE with DANN + Avg-KLD & 76.02$\pm$0.6 & 81.78$\pm$0.4 & 91.82$\pm$0.3 & 78.53$\pm$0.0 & 80.70$\pm$0.1 & 83.88$\pm$0.0 & 87.99$\pm$0.0 & 89.17$\pm$0.0 & 90.90$\pm$0.1 \\ \hline DE with DANN + CLUE & 69.68$\pm$0.4 & 68.07$\pm$0.3 & 62.70$\pm$0.6 & 75.81$\pm$0.3 & 75.44$\pm$0.0 & 73.49$\pm$0.3 & 86.68$\pm$0.2 & 86.31$\pm$0.1 & 84.89$\pm$0.2 \\ \hline DE with DANN + BADGE & 74.69$\pm$0.5 & 79.04$\pm$0.6 & 87.63$\pm$0.4 & 77.97$\pm$0.1 & 79.57$\pm$0.3 & 82.99$\pm$0.1 & 87.82$\pm$0.1 & 88.92$\pm$0.1 & 90.67$\pm$0.1 \\ \hline \hline DE with CDAN + Uniform & 70.25$\pm$0.9 & 72.43$\pm$0.4 & 75.21$\pm$0.7 & 76.09$\pm$0.3 & 76.94$\pm$0.1 & 78.13$\pm$0.1 & 86.56$\pm$0.3 & 87.14$\pm$0.2 & 87.90$\pm$0.1 \\ \hline DE with CDAN + Entropy & 74.73$\pm$0.6 & 81.60$\pm$0.8 &
92.58$\pm$0.2 & 77.97$\pm$0.2 & 80.59$\pm$0.3 & 83.81$\pm$0.2 & 87.47$\pm$0.1 & 88.93$\pm$0.1 & 90.84$\pm$0.1 \\ \hline DE with CDAN + Confidence & 74.88$\pm$0.6 & 81.30$\pm$0.8 & 92.53$\pm$0.9 & 78.06$\pm$0.2 & 80.51$\pm$0.3 & 83.85$\pm$0.3 & 87.43$\pm$0.2 & 88.99$\pm$0.1 & 90.95$\pm$0.1 \\ \hline DE with CDAN + Margin & 76.68$\pm$1.0 & 81.57$\pm$0.4 & 92.20$\pm$0.5 & 78.74$\pm$0.5 & 80.62$\pm$0.2 & 84.01$\pm$0.2 & 88.08$\pm$0.2 & 88.85$\pm$0.2 & 91.09$\pm$0.0 \\ \hline DE with CDAN + Avg-KLD & 75.88$\pm$0.4 & 81.82$\pm$0.8 & 91.43$\pm$1.1 & 78.45$\pm$0.1 & 80.72$\pm$0.3 & 83.72$\pm$0.3 & 87.92$\pm$0.2 & 89.12$\pm$0.2 & 90.91$\pm$0.2 \\ \hline DE with CDAN + CLUE & 69.86$\pm$0.5 & 67.79$\pm$0.2 & 63.46$\pm$0.9 & 76.09$\pm$0.2 & 75.42$\pm$0.3 & 73.66$\pm$0.3 & 86.81$\pm$0.1 & 86.25$\pm$0.1 & 85.00$\pm$0.1 \\ \hline DE with CDAN + BADGE & 74.68$\pm$0.4 & 79.46$\pm$0.3 & 87.57$\pm$0.4 & 77.89$\pm$0.1 & 79.78$\pm$0.1 & 82.85$\pm$0.1 & 87.78$\pm$0.1 & 88.90$\pm$0.1 & 90.72$\pm$0.1 \\ \hline \hline ASPEST (ours) & 77.85$\pm$0.2 & \textbf{84.20}$\pm$0.6 & 94.26$\pm$0.6 & 79.28$\pm$0.1 & \textbf{81.40}$\pm$0.1 & 84.62$\pm$0.1 & 88.28$\pm$0.1 & \textbf{89.61}$\pm$0.1 & \textbf{91.49}$\pm$0.0 \\ \hline ASPEST with DANN (ours) & \textbf{78.14}$\pm$0.4 & 83.33$\pm$0.5 & 93.61$\pm$0.0 & \textbf{79.33}$\pm$0.1 & 81.23$\pm$0.1 & 84.21$\pm$0.1 & \textbf{88.36}$\pm$0.2 & 89.32$\pm$0.1 & 91.26$\pm$0.0 \\ \hline ASPEST with CDAN (ours) & 77.75$\pm$0.3 & 83.68$\pm$0.5 & \textbf{94.44}$\pm$0.3 & 79.27$\pm$0.0 & 81.30$\pm$0.2 & \textbf{84.76}$\pm$0.1 & 88.35$\pm$0.1 & 89.59$\pm$0.0 & 91.41$\pm$0.0 \\ \bottomrule \end{tabular} \end{adjustbox} \caption[]{\small Results of evaluating DE with UDA and ASPEST with UDA on Otto. The mean and std of each metric over three random runs are reported (mean$\pm$std). All numbers are percentages. \textbf{Bold} numbers are superior results. } \label{tab:otto-uda-results} \end{table*} \end{document}
\begin{document} \author{ Jieliang Hong\footnote{Department of Mathematics, University of British Columbia, Canada, E-mail: {\tt [email protected]} } } \title{An upper bound for $p_c$ in range-$R$ bond percolation in two and three dimensions} \date{\today} \maketitle \begin{abstract} An upper bound for the critical probability of long range bond percolation in $d=2$ and $d=3$ is obtained by connecting the bond percolation with the SIR epidemic model, thus complementing the lower bound result in Frei and Perkins \cite{FP16}. A key ingredient is that we establish a uniform bound for the local times of branching random walk by calculating their exponential moments and by using the discrete versions of Tanaka's formula and Garsia's Lemma. \end{abstract} \section{Introduction} \label{4s1} \subsection{Range-$R$ bond percolation and the main result} For any $R\in \mathbb{N}$, we set $\mathbb{Z}_R^d=\mathbb{Z}^d/R=\{x/R: x\in \mathbb{Z}^d\}$. Let $x, y\in \mathbb{Z}^d_R$ be neighbours if $0<\|x-y\|_\infty\leq 1$, where $\|\cdot \|_\infty$ denotes the $l^\infty$ norm on $\mathbb{R}^d$, and we write $x\sim y$ if $x, y\in \mathbb{Z}^d_R$ are neighbours. Let $\mathcal{N}(x)$ denote the set of neighbours of $x$ and denote its size by \[V(R):=|\mathcal{N}(x)|=|\{y\in \mathbb{Z}^d_R: 0<\|y-x\|_\infty\leq 1\}|=(2R+1)^d-1,\] where $|S|$ is the cardinality of a finite set $S$. If $x\sim y$ in $\mathbb{Z}^d_R$, we let $(x,y)$ or $(y,x)$ denote the edge between $x$ and $y$ and let $E(\mathbb{Z}_R^d)$ be the set of all the edges in $\mathbb{Z}_R^d$. Assign a collection of i.i.d. Bernoulli random variables $\{B(e): e\in E(\mathbb{Z}_R^d)\}$ with parameter $p>0$ to the edges. If $B(e)=1$, we say the edge $e$ is open; if $B(e)=0$, we say the edge $e$ is closed. Denote by $G=G_R$ the resulting subgraph with vertex set $\mathbb{Z}_R^d$ and edge set being the set of open edges.
For any $x,y\in \mathbb{Z}_R^d$, we write $x\leftrightarrow y$ if $x=y$ or there is a path between $x$ and $y$ consisting of open edges. Denote the cluster $\mathcal C_x$ in $G$ containing $x$ by \[ \mathcal C_x:=\{y\in \mathbb{Z}_R^d: x\leftrightarrow y\}. \] Define the percolation probability $q(p)$ to be \[ q(p)=\mathbb{P}_p(|\mathcal C_0|=\infty). \] The critical probability is then defined by \[ p_c=p_c(R)=\inf\{p: q(p)>0\}. \] One can check by monotonicity in $p$ that $q(p)=0$ for $p\in [0,p_c)$ and $q(p)>0$ for $p\in (p_c,1]$. Write $f(R)\sim g(R)$ as $R\to \infty$ iff $f(R)/g(R) \to 1$ as $R\to \infty$. It is shown in M. Penrose \cite{Pen93} that \[ p_c(R) \sim \frac{1}{V(R)} \text{ as } R\to \infty. \] In higher dimensions $d> 6$, Van der Hofstad and Sakai \cite{HS05} use lace expansion to get finer asymptotics on $p_c(R)$: \begin{align}\label{4e10.10} p_c(R)V(R)-1 \sim \frac{\theta_d}{R^d}, \end{align} where $\theta_d$ is given in terms of a probability concerning random walk with uniform steps on $[-1,1]^d$. The extension of \eqref{4e10.10} to $d>4$ has been conjectured by Edwin Perkins [private communication] while in the critical dimension $d=4$, it is believed that \begin{align}\label{4ec10.10} p_c(R)V(R)-1 \sim \frac{\theta_4 \log R}{R^4} \text{ in } d=4, \end{align} where the constant $\theta_4$ can be explicitly determined. In lower dimensions $d=2,3$, the correct asymptotics for $p_c(R)V(R)-1$, suggested by Lalley and Zheng \cite{LZ10} (see also Conjecture 1.2 of \cite{FP16}), should be $\frac{\theta_d}{R^{\gamma}}$ where $\gamma=\frac{2d}{6-d}$. Therefore a parallel conjecture states that \begin{align}\label{4e10.11} p_c(R)V(R)-1\sim \frac{\theta_d}{R^{\gamma}}, \end{align} for some constant $\theta_d>0$ that depends on the dimension. When $d=2$ or $d=3$, one may check that $\frac{2d}{6-d}=d-1$ and so for simplicity we will proceed with $\gamma=d-1$. 
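Although the analysis below is purely theoretical, the objects just defined are easy to simulate. The following Python sketch (ours, purely illustrative; all names are our own) enumerates the range-$R$ neighbourhood, confirming $V(R)=(2R+1)^d-1$, and explores the open cluster of the origin by sampling each edge variable $B(e)$ lazily on first use, which has the same law as fixing all $B(e)$ in advance. Coordinates are scaled by $R$, so the lattice is $\mathbb{Z}^d$ with $l^\infty$-range $R$.

```python
import itertools
import random

def neighbors(x, R):
    """Range-R neighbourhood of x in Z^d (the lattice Z_R^d scaled by R):
    all y != x with ||y - x||_inf <= R, so |N(x)| = (2R+1)^d - 1."""
    d = len(x)
    offsets = itertools.product(range(-R, R + 1), repeat=d)
    return [tuple(xi + ki for xi, ki in zip(x, k)) for k in offsets if any(k)]

def cluster_reaches(p, R, d, n, rng):
    """Breadth-first exploration of the open cluster of the origin.
    Each edge's Bernoulli(p) status is sampled at most once, on first use,
    which yields the same law as fixing all B(e) in advance.
    Returns True iff some vertex at graph distance n is reached."""
    origin = (0,) * d
    frontier, seen = {origin}, {origin}
    for _ in range(n):
        nxt = set()
        for x in frontier:
            for y in neighbors(x, R):
                if y not in seen and rng.random() < p:
                    seen.add(y)
                    nxt.add(y)
        if not nxt:
            return False  # the cluster died out before distance n
        frontier = nxt
    return True
```

For $p$ near the critical scale $1/V(R)$, repeating \texttt{cluster\_reaches} over many seeds estimates the probability that the cluster of the origin reaches graph distance $n$; survival to large $n$ with positive frequency is the finite-volume signature of percolation.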
The lower bound implied by \eqref{4e10.11} is already obtained in \cite{FP16}: there is some constant $\theta=\theta(d)>0$ such that for all $R\in \mathbb{N}$, \begin{align}\label{4e10.12} p_c(R)V(R) \geq 1+\frac{\theta}{R^{d-1}}. \end{align} In this paper, we complement this result by establishing a corresponding upper bound for $p_c$.\\ \noindent ${\bf Convention\ on\ Functions\ and\ Constants.}$ Constants whose value is unimportant and may change from line to line are denoted $C, c, c_d, c_1,c_2,\dots$, while constants whose values will be referred to later and appear initially in, say, Lemma~i.j are denoted $c_{i.j}$ or $C_{i.j}$. \begin{theorem}\label{4t0} Let $d=2$ or $d=3$. There exist some constants $\theta_d>0$ and $c_{\ref{4t0}}(d)>0$ so that for any positive integer $R>c_{\ref{4t0}}(d)$, we have \begin{align}\label{4e10.13} p_c(R)V(R) \leq 1+\frac{\theta_d}{R^{d-1}}. \end{align} \end{theorem} \subsection{SIR epidemic models} \label{4s1.2} We define the SIR epidemic process on $\mathbb{Z}^d_R$ as follows: each vertex $x\in \mathbb{Z}^d_R$ is either infected, susceptible, or recovered. Define \begin{align} \eta_n&=\text{the set of infected vertices at time $n$; }\nonumber\\ \xi_n&= \text{the set of susceptible vertices at time $n$; }\nonumber\\ \rho_n&= \text{the set of recovered vertices at time $n$. } \end{align} Given the finite initial configurations of infected sites, $\eta_0$, and recovered sites, $\rho_0$, the epidemic evolves as follows: an infected site $x\in \eta_n$ infects its susceptible neighbour $y\in \xi_n$, $y\sim x$, with probability $p=p(R)$, where the infections are conditionally independent given the current configuration. Infected sites at time $n$ become recovered at time $n+1$, and recovered sites are immune from further infection and stay recovered. Recall the edge percolation variables $\{B(e): e\in E(\mathbb{Z}_R^d)\}$ with parameter $p=p(R)$.
The above process can be described as follows: \begin{align}\label{4e10.14} \eta_{n+1}=&\bigcup_{x\in \eta_n} \{y\in \xi_n: B(x,y)=1\},\nonumber\\ \rho_{n+1}=&\rho_{n}\cup \eta_n,\\ \xi_{n+1}=&\xi_{n}\backslash \eta_{n+1}.\nonumber \end{align} For any disjoint finite sets $\eta_0$ and $\rho_0$, one may use the above and an easy induction to conclude that $\eta_n$ and $\rho_n$ are finite for all $n\geq 0$. Throughout the rest of this paper, we will only consider the epidemic with finite initial condition $(\eta_0, \rho_0)$. Denote by $\mathcal F_n^\eta=\sigma(\eta_k, k\leq n)$ the $\sigma$-field generated by the epidemic process $\eta=(\eta_n)$. Recall the percolation graph $G$ on $\mathbb{Z}_R^d$. We let $d_G(x,y)$ be the graph distance in $G$ between $x,y\in \mathbb{Z}_R^d$. By convention we let $d_G(x,y)=\infty$ if there is no path between $x$ and $y$ on $G$. For a set of vertices $A$, define $d_G(A,x)=\inf\{d_G(y,x): y\in A\}$. Given a pair of disjoint finite sets in $\mathbb{Z}_R^d$, $(\eta_0,\rho_0)$, we denote by $G(\rho_0)$ the percolation graph obtained by deleting all the edges containing a vertex in $\rho_0$. For an SIR epidemic starting from $(\eta_0,\rho_0)$, it is shown in (1.9) of \cite{FP16} that \begin{align}\label{4e10.15} \eta_n=\{x\in \mathbb{Z}^d_R: d_{G(\rho_0)}(\eta_0, x)=n\}:=\eta_n^{\eta_0,\rho_0}. \end{align} For any integer $k\geq 0$, conditioning on $\mathcal F_k^\eta$, by the Markov property of $(\eta_k,\rho_k)$ as in (1.7) of \cite{FP16}, we have for all $n\geq k$, \begin{align}\label{4ea3.26} &\eta_n=\eta_n^{\eta_0,\rho_0}=\{x\in \mathbb{Z}^d_R: d_{G(\rho_{k})}(\eta_{k}, x)=n-k\}=\eta_{n-k}^{\eta_{k},\rho_{k}}. \end{align} This says that starting from time $k$, the process $(\eta_{n+k},n\geq 0)$ is a usual SIR epidemic starting from $(\eta_k, \rho_k)$. The total infection set is given by \begin{align}\label{4e11.3} \cup_{k=0}^n \eta_k=\{x\in \mathbb{Z}^d_R: d_{G(\rho_0)}(\eta_0, x)\leq n\}.
\end{align} By shrinking the initial infection set $\eta_0$, it is clear that the total number of infected sites will be decreased. We state this intuition in the following lemma. \begin{lemma}\label{4l0} Let $(\eta_0,\rho_0)$ and $(\eta'_0,\rho_0)$ be two finite initial conditions with $\eta'_0\subseteq \eta_0$. For $\eta$ starting from $(\eta_0,\rho_0)$ and $\eta'$ starting from $(\eta'_0,\rho_0)$ given by \eqref{4e10.15}, we have \[ \cup_{k=0}^n \eta'_k \subseteq \cup_{k=0}^n \eta_k ,\quad \forall n\geq 0. \] \end{lemma} \begin{proof} On the percolation graph $G(\rho_0)$, we have $d_{G(\rho_0)}(\eta'_0, x)\leq n$ implies $d_{G(\rho_0)}(\eta_0, x)\leq n$ since $\eta'_0\subseteq \eta_0$. So the result follows from \eqref{4e11.3}. \end{proof} \begin{mydef}\label{4def1.3} We say that an SIR epidemic {\bf survives} if with positive probability we have $\eta_n\neq \emptyset$ for all $n\geq 1$; we say the epidemic becomes {\bf extinct} if with probability one, we have $\eta_n= \emptyset$ for some finite $n\geq 1$. \end{mydef} For any $p=p(R) \in [0,1]$, if the epidemic $\eta$ starting from $(\{0\},\emptyset)$ survives, then with positive probability, there is an infinite sequence of infected sites $\{x_k,k\geq 0\}$ such that $x_k\in \eta_k$, $x_k\sim x_{k-1}$ and $x_{k-1}$ infects $x_k$ at time $k$. Hence we have the edge $(x_{k-1},x_k)$ is open and $B(x_{k-1},x_k)=1$. Therefore with positive probability, we have percolation from $\eta_0=\{0\}$ to infinity in range-$R$ bond percolation. This implies $p\geq p_c$ and so an upper bound for $p_c$ is obtained. On the other hand, by Lemma \ref{4l0} and a trivial union inclusion and translation invariance, one may easily check that it is equivalent to prove the survival of $\eta$ starting from $(\eta_0,\emptyset)$ for any finite $\eta_0\subseteq \mathbb{Z}_R^d$. 
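For illustration only (this is our own sketch, not the coupling construction used later in the paper), one step of the recursion \eqref{4e10.14} can be simulated directly; caching the edge variables ensures each $B(x,y)$ is sampled at most once, consistent with the percolation description \eqref{4e10.15}. Coordinates are again integer-scaled by $R$, and the function name is ours.

```python
import itertools
import random

def sir_step(eta, rho, R, p, rng, edges):
    """One step of the SIR recursion: every infected x tries to infect
    each susceptible neighbour y (a site that is neither infected nor
    recovered) using the shared percolation variable B(x,y) ~ Bernoulli(p),
    cached in `edges`; then all currently infected sites recover."""
    d = len(next(iter(eta)))
    new_eta = set()
    for x in eta:
        for k in itertools.product(range(-R, R + 1), repeat=d):
            if not any(k):
                continue
            y = tuple(xi + ki for xi, ki in zip(x, k))
            if y in eta or y in rho or y in new_eta:
                continue
            e = (min(x, y), max(x, y))  # canonical key for the edge (x, y)
            if e not in edges:
                edges[e] = rng.random() < p
            if edges[e]:
                new_eta.add(y)
    return new_eta, rho | eta
```

Iterating \texttt{sir\_step} with a persistent \texttt{edges} dictionary reproduces, in law, the graph-distance description \eqref{4e10.15}: $\eta_n$ is the set of sites at $G(\rho_0)$-distance $n$ from $\eta_0$.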
From now on, we set \begin{align}\label{4ea3.1} p=p(R)=\frac{1+\frac{\theta}{R^{{d-1}}}}{V(R)} \quad \text{ for $\theta\geq 100$ and $R\geq 4\theta$.} \end{align} For the required upper bound, it suffices to find some large $\theta$ so that the SIR epidemic survives. To do this, we will use a comparison to supercritical oriented percolation and apply the methods from Lalley, Perkins and Zheng \cite{LPZ14} with some necessary adjustments and new ideas. Let $\mathbb{Z}_+^2=\{x=(x_1,x_2) \in \mathbb{Z}^2: x_i\geq 0, i=1,2\}$. Set the grid $\Gamma$ to be $\mathbb{Z}_+^2$ in $d=2$ and $\mathbb{Z}_+^2\times \{0\}$ in $d=3$. Define a total order $\prec$ on $\Gamma$ by \begin{align}\label{4ea1.2} x\prec y \begin{cases} \text{ if } \|x\|_1<\|y\|_1 \text{ or }\\ \|x\|_1=\|y\|_1 \text{ and } x_1<y_1, \end{cases} \end{align} where $\|x\|_1=\sum_{i=1}^d |x_i|$ is the $l^1$-norm on $\mathbb{R}^d$. Hence we can write $\Gamma=\{x(1), x(2), \cdots\}$ with $0=x(1)\prec x(2) \prec \cdots$. For any $x\in \Gamma$, define $\mathcal A(x)=\{(x_1, x_2+1), (x_1+1, x_2)\}$ in $d=2$ and $\mathcal A(x)=\{(x_1, x_2+1,0), (x_1+1, x_2, 0)\}$ in $d=3$. This is the set of ``immediate offspring'' of $x$. For any $M>0$ and $x\in \mathbb{R}^d$, set $Q_M(x)=\{y\in \mathbb{R}^d: \|y-x\|_\infty \leq M\}$ to be the rectangle centered at $x$. Write $Q(y)$ for $Q_1(y)$. For any $T\geq 100$, we define \begin{align}\label{4e10.05} T_\theta^R=[TR^{d-1}/\theta], \text{ and } R_\theta=\sqrt{R^{d-1}/\theta} \end{align} for $\theta \geq 100$ and $R\geq 4\theta \geq 400$. These quantities in \eqref{4e10.05} are from the usual Brownian scaling for time and space. One can check that \begin{align}\label{4e10.06} 200\leq \frac{1}{2} \frac{TR^{d-1}}{\theta} \leq T_\theta^R\leq \frac{TR^{d-1}}{\theta}. 
\end{align} For any $\theta\geq 100$, define \begin{align}\label{4e10.20} f_d(\theta)= \begin{cases} \sqrt{\theta}, &\text{ in } d=2,\\ \log {\theta}, &\text{ in } d=3, \end{cases} \end{align} and set for any $R\geq 400$, \begin{align}\label{4ea10.45} \beta_d(R)= \begin{cases} \log R,&\text{ in } d=2,\\ 1,& \text{ in } d=3. \end{cases} \end{align} For any finite set $A\subseteq \mathbb{Z}_R^d$, denote by $|A|$ the number of vertices in $A$. Consider some finite $\eta_0\subseteq \mathbb{Z}_R^d$ such that \begin{align}\label{4ea10.23} \begin{dcases} \text{(i) }\eta_0\subseteq Q_{R_\theta}(0);\\ \text{(ii) } R^{d-1} f_d(\theta)/\theta \leq |\eta_0|\leq 1+R^{d-1} f_d(\theta)/\theta;\\ \text{(iii) }|\eta_0 \cap Q(y)|\leq K \beta_d(R),\ \forall y\in \mathbb{Z}^d, \end{dcases} \end{align} where $K\geq 100$ is some large constant that will be chosen below in Proposition \ref{4p4}. We note that the assumption (iii) in \eqref{4ea10.23} will only be used in Proposition \ref{4p4} (in fact it is only used in the proof of Lemma \ref{4l10.01}). The existence of such a set is trivial if one observes that the finer lattice in $\mathbb{Z}_R^d$ has enough space to place those $|\eta_0|$ vertices. For any set $Y\subseteq \mathbb{Z}_R^d$, we denote by $\hat{Y}^K\subseteq Y$ a ``thinned'' version of $Y$ so that there are at most $K \beta_d(R)$ vertices in the set $\hat{Y}^K\cap Q(y)$ for all $y\in \mathbb{Z}^d$. This thinning idea comes from the ``crabgrass'' paper by Bramson, Durrett and Swindle \cite{BDS89}. The $\beta_d(R)$ in \eqref{4ea10.45} is the typical number of particles in each unit box $Q(y)$ of a branching random walk at time $T_\theta^R$. The procedure for ``thinning'' can be done in a fairly arbitrary way. For example, in Proposition \ref{4p4} below we may proceed by deleting all the vertices in $Y\cap Q(y)$ for each $y\in \mathbb{Z}^d$ if $|Y\cap Q(y)|>K \beta_d(R)$.
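A concrete (and deliberately crude) way to carry out such a thinning is sketched below. Note the simplification: we cap the counts in disjoint unit cells $y+[0,1)^d$ rather than in the overlapping boxes $Q(y)$ of the text, so each overlapping box then holds at most $2^d$ times the cap; the function name and the keep-first rule are ours.

```python
import math
from collections import defaultdict

def thin(Y, cap):
    """Keep at most `cap` points of Y in each disjoint unit cell
    y + [0,1)^d, y in Z^d; within a cell, points are kept in a fixed
    (sorted) order. Since every box Q(y) is covered by 2^d such cells,
    the thinned set has at most 2^d * cap points in each Q(y)."""
    cells = defaultdict(list)
    for x in Y:
        cells[tuple(math.floor(c) for c in x)].append(x)
    kept = []
    for pts in cells.values():
        kept.extend(sorted(pts)[:cap])
    return kept
```

Any rule of this kind is admissible since, as the text notes, the thinning procedure can be fairly arbitrary; all that matters is the per-box cap.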
Choose $T\geq 100$ large such that \begin{align}\label{4ea10.04} \inf_{z\in Q(0)} \inf_{y\in Q(0)} e^{T/4} \mathbb{P}(\zeta_T^z \in Q(y)) \geq 16, \end{align} where $\zeta_T^z$ is a $d$-dimensional Gaussian random variable with mean $z$ and variance $T/3$. The following result is an analogue to Lemma 7.1 of \cite{BDS89} with our SIR epidemic setting. \begin{proposition}\label{4p4} For any $\varepsilon_0\in (0,1)$, $\kappa>0$, and $T\geq 100$ satisfying \eqref{4ea10.04}, there exist positive constants $\theta_{\ref{4p4}}$, $K_{\ref{4p4}}$ depending only on $T, \varepsilon_0,\kappa$ such that for all $\theta \geq \theta_{\ref{4p4}}$, there is some $C_{\ref{4p4}}(\varepsilon_0, T,\kappa, \theta)\geq 4\theta$ such that for any $R\geq C_{\ref{4p4}}$, any finite initial condition $(\eta_0,\rho_0)$ where $\eta_0$ is as in \eqref{4ea10.23} with $K_{\ref{4p4}}$, if the SIR epidemic process $\eta$ starts from $(\eta_0,\rho_0)$, then we have \begin{align*} \mathbb{P} \Big(\Big\{|\hat{\eta}_{T_\theta^R}^{K_{\ref{4p4}} }\cap Q_{R_\theta}(y R_\theta)|< |\eta_0| \text{ for some } y\in \mathcal A(0) \Big\} \cap N(\kappa)\Big)\leq \varepsilon_0, \end{align*} where \[N(\kappa)=\{|\rho_{T_\theta^R}\cap \mathcal{N}(x) |\leq \kappa R, \forall x\in \mathbb{Z}_R^d\}.\] \end{proposition} We will show in Proposition \ref{4p2} below that under certain conditions, the event $N(\kappa)$ in fact occurs with high probability (see more discussions in Section \ref{4s1.3}). Then the above result implies that for an SIR epidemic $\eta$ starting from an appropriate initial infection set $\eta_0\subseteq Q_{R_\theta}(0)$ as in \eqref{4ea10.23}, with high probability we have $|\hat{\eta}_{T_\theta^R}^{K_{\ref{4p4}} }\cap Q_{R_\theta}(y R_\theta)|\geq |\eta_0|$ for both $y\in \mathcal A(0)$, that is, the SIR epidemic will generate a sufficiently large total mass in each of the adjacent cubes $Q_{R_\theta}(y R_\theta)$ for $y\in \mathcal A(0)$, even after ``thinning''. 
Restart the SIR epidemic with the ``thinned'' infection set $\hat{\eta}_{T_\theta^R}^{K_{\ref{4p4}} }$ restricted to $Q_{R_\theta}(y R_\theta)$ so that the initial condition in \eqref{4ea10.23} recurs (with a spatial translation). By Proposition \ref{4p4}, we may reproduce the infection to the next adjacent cubes with high probability. In this way, infection to the adjacent cubes can be iterated by carefully choosing the initial condition at each step so that it satisfies the necessary assumptions. Of course we need more conditions to make $N(\kappa)$ occur with high probability at each iteration, which we will discuss more in Section \ref{4s1.3} below. By a comparison to oriented percolation, with positive probability this iterated infection will last forever and so the epidemic $\eta$ survives. A rigorous proof for the above arguments leading to the survival of the epidemic can be found in Section \ref{4s2.2}. The proof of Proposition \ref{4p4} is deferred to Section \ref{4s8}. We next introduce the branching random walk (BRW) dominating the epidemic to show that with high probability the event $N(\kappa)$ holds, i.e. the epidemic will not accumulate enough recovered sites in each unit cube up to time $T_\theta^R$. \subsection{Branching envelope}\label{4s1.3} Following Section 2.2 of Frei and Perkins \cite{FP16}, we will couple the epidemic $\eta$ with a dominating branching random walk $Z=(Z_n, n\geq 0)$ on $\mathbb{Z}_R^d$. We first give a brief introduction. The state space for our branching random walk in this paper is the space of finite measures on $\mathbb{Z}_R^d$ taking values in nonnegative integers, which we denote by $M_F(\mathbb{Z}_R^d)$. For any $\phi: \mathbb{Z}^d_R \to \mathbb{R}$, write $\mu(\phi)=\sum_{x\in \mathbb{Z}^d_R} \phi(x) \mu(x)$ for $\mu \in M_F(\mathbb{Z}_R^d)$. We set $|\mu|=\mu(1)$ to be the total mass for $\mu \in M_F(\mathbb{Z}_R^d)$. 
We will use a slightly different labelling system here than that in \cite{FP16} in order to keep track of the initial position for each particle. Totally order the set $\mathcal{N}(0)$ as $\{e_1, \cdots, e_{V(R)}\}$ and then totally order each $\mathcal{N}(0)^n$ lexicographically by $<$. We use the following labelling system borrowed from Section II.3 of \cite{Per02} for our branching particle system: \begin{align}\label{4e1.16} I=\bigcup_{n=0}^\infty \mathbb{N} \times \mathcal{N}(0)^n=\{(\alpha_0, \alpha_1, \cdots, \alpha_n): \alpha_0\in \mathbb{N}, \alpha_i \in \mathcal{N}(0), 1\leq i\leq n\}, \end{align} where $\alpha_0$ labels the ancestor of particle $\alpha$. Let $|(\alpha_0, \alpha_1, \cdots, \alpha_n)|=n$ be the generation of $\alpha$ and write $\alpha|i=(\alpha_0, \cdots, \alpha_i)$ for $0\leq i\leq n$. Let $\pi \alpha=(\alpha_0, \alpha_1, \cdots, \alpha_{n-1})$ be the parent of $\alpha$ and let $\alpha \vee e_i=(\alpha_0, \alpha_1, \cdots, \alpha_n, e_i)$ be an offspring of $\alpha$ whose position relative to its parent is $e_i$. Recall $p(R)$ from \eqref{4ea3.1}. Assign an i.i.d. collection of Bernoulli random variables $\{B^\alpha: \alpha \in I, |\alpha|>0\}$ to the edge connecting the locations of $\alpha$ and its parent $\pi \alpha$ so that the birth in this direction is valid with probability $p(R)$ and invalid with probability $1-p(R)$. Set \begin{align} \mathcal G_n=\sigma(\{B^\alpha: 0<|\alpha|\leq n\})\text{ for each $n\geq 0$.} \end{align} Fix any $Z_0\in M_F(\mathbb{Z}_R^d)$. Recall that $M_F(\mathbb{Z}_R^d)$ is the space of finite measures taking values in nonnegative integers. So the total mass $|Z_0|$ is the number of initial particles. Label these particles by $1,2,3,\cdots, |Z_0|$ and denote by $x_1, x_2, \cdots, x_{|Z_0|}$ their locations. We note that these $\{x_i\}$ do not have to be distinct; for example, if $Z_0=3\delta_0$, then we have $x_1=x_2=x_3=0$ with initial particles $1,2,3$. 
Hence we may rewrite $Z_0$ as $Z_0=\sum_{i=1}^{|Z_0|} \delta_{x_i}$. For any $i>|Z_0|$, we set $x_i$ to be the cemetery state $\Delta$. For each $n\geq 0$, we write $\alpha \approx n$ iff $x_{\alpha_0}\neq \Delta$, $|\alpha|=n$ and $B^{\alpha|i}=1$ for all $1\leq i\leq n$ so that such an $\alpha$ labels a particle alive in generation $n$. For each $\alpha \in I$, define its current location by \begin{align}\label{4e1.17} Y^\alpha= \begin{cases} x_{\alpha_0}+\sum_{i=1}^{|\alpha|} \alpha_i, &\text{ if } \alpha\approx |\alpha|,\\ \Delta, &\text{ otherwise. } \end{cases} \end{align} In this way, $Z_n=\sum_{|\alpha|= n} \delta_{Y^\alpha} 1(Y^\alpha\neq \Delta)$ defines the empirical distribution of a branching random walk where in generation $n$, each particle gives birth to one offspring at each of its $V(R)$ neighbouring positions independently with probability $p(R)$. So it follows that \begin{align}\label{4ea4.5} \text{ $({{Z}}_n(1),n\geq 0)$ is a}&\text{ Galton-Watson process with }\\& \text{ offspring distribution $Bin(V(R),p(R))$.}\nonumber \end{align} Note that the dependence of $Z_n$ on $\theta$ and $R$ is implicit. Define $Z_n(x)=Z_n(\{x\})$ for any $x\in \mathbb{Z}^d_R$. For any Borel function $\phi$, we let \begin{align}\label{4eb2.21} Z_n(\phi)=\sum_{|\alpha|= n} \phi(Y^\alpha)=\sum_{x\in \mathbb{Z}^d_R} \phi(x) Z_n(x), \end{align} where it is understood that $\phi(\Delta)=0$. We use $\mathbb{P}^{Z_0}$ to denote the law of $(Z_n, n\geq 0)$ starting from $Z_0$. For $\mu,\nu \in M_F(\mathbb{Z}_R^d)$, we say $\nu$ {\bf dominates} $\mu$ if $\nu(x)\geq \mu(x)$ for all $x\in \mathbb{Z}_R^d$. For any set $Y$ on $\mathbb{Z}_R^d$, by slightly abusing the notation, we write $Y(x)=1(x\in Y)$ for $x\in \mathbb{Z}^d_R$ so that the set $Y$ naturally defines a measure on $\mathbb{Z}^d_R$ taking values in $\{0,1\}$. In particular we let $\eta_n(x)=1(x\in \eta_n)$ for any $n\geq 0$ and $x\in \mathbb{Z}^d_R$.
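The total-mass chain in \eqref{4ea4.5} is a one-line simulation. The sketch below is ours and uses toy parameters that ignore the paper's ranges $\theta\geq 100$, $R\geq 4\theta$; it only serves to make the slightly supercritical mean offspring number $V(R)p(R)=1+\theta/R^{d-1}$ of \eqref{4ea3.1} visible.

```python
import random

def gw_total_mass(n_gens, V, p, rng, z0=1):
    """Total mass (Z_n(1)) of the branching envelope: given Z_n = m, each
    of the m particles attempts a birth at each of its V neighbouring
    positions independently with probability p, so conditionally on Z_n,
    Z_{n+1} is Binomial(Z_n * V, p)."""
    masses = [z0]
    for _ in range(n_gens):
        masses.append(sum(rng.random() < p for _ in range(masses[-1] * V)))
    return masses

def mean_offspring(R, d, theta):
    """Mean offspring number V(R) * p(R) = 1 + theta / R^{d-1}, cf. (4ea3.1)."""
    V = (2 * R + 1) ** d - 1
    p = (1 + theta / R ** (d - 1)) / V
    return V * p
```

With mean offspring number just above one, the chain is slightly supercritical, which is exactly the regime in which the comparison with super-Brownian motion below becomes effective.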
By the construction in Section 2.2 of \cite{FP16}, we may define the coupled SIR epidemic $(\eta_n)$ inductively with the dominating $(Z_n)$. \begin{lemma}\label{4l1.5} For any finite initial configuration $(\eta_0,\rho_0)$ and any $Z_0 \in M_F(\mathbb{Z}_R^d)$ such that $Z_0$ dominates $\eta_0$, on a common probability space we can define an SIR epidemic process $\eta$ starting from $(\eta_0, \rho_0)$, and a branching random walk $Z$ starting from $Z_0$, such that \[ \eta_n(x)\leq Z_n(x) \text{ for all } x\in \mathbb{Z}^d_R, n\geq 0. \] Moreover, both $(\eta,\rho)$ and $Z$ satisfy the Markov property with respect to a common filtration $(\mathcal G_n)$. \end{lemma} \begin{proof} The proof is similar to that of Proposition 2.3 in \cite{FP16}. Although their proof deals with $\eta_0=\{0\}$, it works for any finite $\eta_0$, as the argument there uses induction to prove $\eta_{n+1}(x)\leq Z_{n+1}(x), \forall x\in \mathbb{Z}^d_R$ by assuming $Z_n$ dominates $\eta_n$. The proof of the Markov property is similar. \end{proof} To understand the large $R$ behavior of $(Z_n)$, we will also consider a rescaled version of $(Z_n)$ and study its limit as $R\to \infty$. Let $\sigma^2=1/3$ be the variance of the marginals of the uniform distribution over $[-1,1]^d$. For each $t\geq 0$, we define a random measure $W_t^R$ on $\mathbb{R}^d$ by \begin{align}\label{4e5.39} W_t^R=\frac{1}{R^{{d-1}}}\sum_{x\in \mathbb{Z}^d_R} \delta_{{x}/{\sqrt{\sigma^2 R^{d-1}}}} Z_{[tR^{d-1}]}(x). \end{align} Let $\mathbb{P}^{W_0^R}$ denote the law of $(W_t^R, t\geq 0)$. Let $M_F(\mathbb{R}^d)$ be the space of finite measures on $\mathbb{R}^d$ equipped with the weak topology and denote by $C_b^2(\mathbb{R}^d)$ the space of bounded twice continuously differentiable functions on $\mathbb{R}^d$. For $\mu \in M_F(\mathbb{R}^d)$, we denote by $|\mu|$ its total mass. For any $\phi: \mathbb{R}^d\to \mathbb{R}$, we write $\mu(\phi)$ for the integral of $\phi$ with respect to $\mu$.
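To illustrate the domination in Lemma \ref{4l1.5}, here is a schematic one-dimensional toy version of the coupling (our own simplification for illustration, not the exact construction of \cite{FP16}): every potential birth draws a Bernoulli$(p)$ variable shared by both processes, and the epidemic additionally discards births onto already infected or recovered sites, so an infected site is always created together with at least one branching-random-walk particle.

```python
import random
from collections import Counter

def neighbors(x, R):
    # toy one-dimensional analogue of N(x): sites within distance R of x, x excluded
    return [x + dx for dx in range(-R, R + 1) if dx != 0]

def coupled_step(Z, eta, rho, R, p, rng):
    """One generation of the coupling: each potential birth draws a
    Bernoulli(p) variable; the branching random walk Z keeps every
    successful birth, while the epidemic eta only infects sites that are
    neither currently infected nor recovered.  An infection is only ever
    created together with a birth, forcing eta_{n+1}(x) <= Z_{n+1}(x)."""
    Z_next, eta_next = Counter(), set()
    rho_next = rho | eta                      # infected sites recover
    for x in Z:
        for _ in range(Z[x]):
            for y in neighbors(x, R):
                if rng.random() < p:          # shared Bernoulli variable
                    Z_next[y] += 1
                    if x in eta and y not in rho_next:
                        eta_next.add(y)
    return Z_next, eta_next, rho_next
```

Running this from $Z_0=\eta_0=\delta_0$ and checking `Z[x] >= 1` for every infected site $x$ after each step reproduces the domination $\eta_n(x)\leq Z_n(x)$ by construction.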
Let $X$ be a super-Brownian motion (SBM) with drift $\theta$ that is the unique in law solution to the following martingale problem: \begin{align}\label{4e10.25} (MP)_{\theta}: \quad X_t(\phi)=X_0(\phi)+M_t(\phi)+ \int_0^t X_s(\frac{\Delta}{2} \phi) ds+\theta\int_0^t X_s(\phi) ds, \quad \forall \phi \in C_b^2(\mathbb{R}^d), \end{align} where $X$ is a continuous $M_F(\mathbb{R}^d)$-valued process, and $M(\phi)$ is a continuous martingale with $\langle M(\phi)\rangle_t=\int_0^t X_s(\phi^2)ds$. We denote the law of $X$ by $\mathbb{P}^{X_0}$. If there is some $X_0 \in M_F(\mathbb{R}^d)$ so that (recall $\sigma^2=1/3$) \begin{align}\label{4e10.19} W_0^R=\frac{1}{R^{{d-1}}}\sum_{x\in \mathbb{Z}^d_R} Z_{0}(x) \delta_{{x}/{\sqrt{R^{d-1}/3}}} \to X_0 \text{ in } M_F(\mathbb{R}^d) \end{align} as $R\to \infty$, then by Proposition 4.3 of \cite{FP16}, it follows that \begin{align}\label{4e10.22} (W_t^R, t\geq 0) \Rightarrow (X_t, t\geq 0) \text{ on } D([0,\infty), M_F(\mathbb{R}^d)) \end{align} as $R\to \infty$. Here $D([0,\infty), M_F(\mathbb{R}^d))$ is the Skorohod space of c\`adl\`ag $M_F(\mathbb{R}^d)$-valued paths, on which $\Rightarrow$ denotes weak convergence. Note that we have scaled by the variance $\sigma^2=1/3$ in \eqref{4e5.39}, so the constant in \eqref{4e10.25} will differ from that of \cite{FP16}.\\ We collect the properties of $(Z_n)$ below in Propositions \ref{4p1}, \ref{4p3} and \ref{4p2}; these results will be proved later. In fact these proofs will occupy most of the paper. They are technical results that will be used in the proof of the main theorem in Section \ref{4s2}.
We briefly explain their uses: Proposition \ref{4p1} says that the support of $(Z_n)$ up to time $T_\theta^R$ will be contained in a large box; Proposition \ref{4p3} is a technical condition that ensures Proposition \ref{4p2} holds; Proposition \ref{4p2} will be the key condition that guarantees that there are not too many accumulated particles in any unit cube contained in a large box. Together with Proposition \ref{4p1}, we may conclude by the dominance of $(Z_n)$ over $(\eta_n)$ that the event $N(\kappa)$ in Proposition \ref{4p4} occurs with high probability. The assumptions on $Z_0$ for each proposition will vary. Nevertheless, we may choose $Z_0$ carefully so that all the conditions will be satisfied for each iteration. \noindent Let $\text{Supp}(\mu)$ denote the closed support of a measure $\mu$. Consider $Z_0\in M_F(\mathbb{Z}_R^d)$ such that \begin{align}\label{4e10.23} \begin{dcases} \text{(i) }\text{Supp}(Z_0)\subseteq Q_{R_\theta}(0); \\ \text{(ii) } R^{d-1} f_d(\theta)/\theta\leq |Z_0|\leq 1+R^{d-1} f_d(\theta)/\theta. \end{dcases} \end{align} \begin{proposition}\label{4p1} For any $\varepsilon_0\in (0,1)$, $T\geq 100$, there are constants $\theta_{\ref{4p1}}\geq 100, M_{\ref{4p1}}\geq 100$ depending only on $\varepsilon_0, T$ such that for all $\theta \geq \theta_{\ref{4p1}}$, there is some $C_{\ref{4p1}}(\varepsilon_0, T,\theta)\geq 4\theta$ such that for any $R\geq C_{\ref{4p1}}$ and any $Z_0$ satisfying \eqref{4e10.23}, we have \[ \mathbb{P}^{Z_0}\Big(\text{Supp}(\sum_{n=0}^{T_\theta^R} Z_n) \subseteq Q_{M_{\ref{4p1}} \sqrt{\log f_d(\theta)}R_\theta} (0) \Big)\geq 1-\varepsilon_0. \] \end{proposition} Next we turn to the crucial event $N(\kappa)=\{|\rho_{T_\theta^R}\cap \mathcal{N}(y) |\leq \kappa R, \forall y\in \mathbb{Z}_R^d\}$ in Proposition \ref{4p4}. To show that $N(\kappa)$ occurs with high probability, we will show the corresponding result for the dominating branching random walk $Z=(Z_n, n\geq 0)$, i.e.
we will bound $\sum_{n=0}^{T_\theta^R} Z_n(\mathcal{N}(y))$ for all $y\in \mathbb{Z}_R^d$. We call this the ``local time'' process of $Z$ as we indeed conjecture that $\sum_{n=0}^{T_\theta^R} Z_n(\mathcal{N}(y))$ will converge to the local time of super-Brownian motion as $R\to \infty$. To apply a discrete version of Tanaka's formula (see \eqref{4e6.13} and \eqref{4e6.14}) and obtain bounds for the local time of $Z$, we need a regularity condition on $Z_0$. For any $x,u\in \mathbb{R}^d$, define \begin{align}\label{4e10.31} g_{u,d}(x)= \begin{dcases} \sum_{n=1}^\infty e^{-n\theta/R} \frac{1}{n} e^{-|x-u|^2/(32n)}, &\text{ in } d=2,\\ R\sum_{n=1}^\infty \frac{1}{n^{3/2}} e^{-|x-u|^2/(32n)}, &\text{ in } d=3. \end{dcases} \end{align} Again we have suppressed the dependence of $g_{u,d}$ on $R,\theta$. One can show that (see Lemma \ref{4l4.1} and Lemma \ref{4l3.2}) there is some universal constant $C>0$ such that for any $x\neq u$, \begin{align}\label{4e10.32} g_{u,d}(x)\leq \begin{cases} C\Big(1+\log^+ \big(\frac{R}{\theta |x-u|^2} \big)\Big),& \text{ in } d=2,\\ C\frac{R}{|x-u|}, &\text{ in } d=3, \end{cases} \end{align} where $\log^+(x)=0\vee \log x$ for $x>0$. The reason for defining $g_{u,d}$ as in \eqref{4e10.31} will become clearer in Section \ref{4s4} when we introduce the appropriate potential kernels and Tanaka's formula. Now consider $Z_0\in M_F(\mathbb{Z}_R^d)$ such that \begin{align}\label{4e11.24} \begin{dcases} \text{(i) }\text{Supp}(Z_0)\subseteq Q_{R_\theta}(0); \\ \text{(ii) } Z_0(1)\leq 2 R^{d-1} f_d(\theta)/\theta;\\ \text{(iii) } Z_{0}(g_{u,d})\leq m {R^{d-1}}/\theta^{1/4}, \quad \forall u\in \mathbb{R}^d.
\end{dcases} \end{align} \begin{proposition}\label{4p2} For any $\varepsilon_0\in (0,1)$, $T\geq 100$ and $m>0$, there exist constants $\theta_{\ref{4p2}}\geq 100, \chi_{\ref{4p2}}>0$ depending only on $\varepsilon_0, T,m$ such that for all $\theta \geq \theta_{\ref{4p2}}$, there is some $C_{\ref{4p2}}(\varepsilon_0, T,\theta,m)\geq 4\theta$ such that for any $R\geq C_{\ref{4p2}}$ and any $Z_0$ satisfying \eqref{4e11.24}, we have \[ \mathbb{P}^{Z_0}\Big(\sum_{n=0}^{T_\theta^R} Z_n(\mathcal{N}(x)) \leq \chi_{\ref{4p2}} {R}, \quad \forall x\in \mathbb{Z}^d_R \cap Q_{2M_{\ref{4p1}} \sqrt{\log f_d(\theta)}R_\theta} (0) \Big)\geq 1-\varepsilon_0. \] \end{proposition} Finally we show that the extra condition (iii) of \eqref{4e11.24} indeed holds with high probability, which allows us to iterate this initial condition for $Z_0$. The following proposition gives an analogue to the ``admissible'' regularity condition for super-Brownian motion in (5.4) of \cite{LPZ14}. For the next two results, instead of \eqref{4e11.24} we only assume \begin{align}\label{4e12.01} Z_0(1)\leq 2R^{d-1} f_d(\theta)/\theta. \end{align} \begin{proposition}\label{4p3} For any $\varepsilon_0\in (0,1)$, $T\geq 100$, there exist constants $\theta_{\ref{4p3}}\geq 100, m_{\ref{4p3}}>0$ depending only on $\varepsilon_0, T$ such that for all $\theta \geq \theta_{\ref{4p3}}$, there is some $C_{\ref{4p3}}(\varepsilon_0, T,\theta)\geq 4\theta$ such that for any $R\geq C_{\ref{4p3}}$ and any $Z_0$ satisfying \eqref{4e12.01}, we have \begin{align}\label{4e11.33} \mathbb{P}^{Z_0}\Big(Z_{T_\theta^R}(g_{u,d})\leq m_{\ref{4p3}} \frac{R^{d-1}}{\theta^{1/4}},\quad \forall u\in Q_{8\sqrt{\log f_d(\theta)}R_\theta} (0) \Big)\geq 1-\varepsilon_0. \end{align} \end{proposition} By restricting the measure $Z_{T_\theta^R}$ to a finite rectangle $Q_{4R_\theta}(0)$, we can assume that the above holds for all $u$.
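As a numerical sanity check of the $d=3$ case of \eqref{4e10.32}, the sketch below evaluates a truncated version of the series in \eqref{4e10.31} (the truncation level is an arbitrary choice of ours). Comparing the sum with the integral $\int_0^\infty t^{-3/2}e^{-r^2/(32t)}\,dt=\sqrt{32\pi}/r$ suggests $g_{u,3}(x)\approx\sqrt{32\pi}\,R/|x-u|$ for $|x-u|$ large, consistent with the bound $CR/|x-u|$:

```python
import math

def g3_truncated(r, R, n_terms=20000):
    """Truncated version of the d = 3 kernel in (4e10.31):
    g_{u,3}(x) = R * sum_{n>=1} n^{-3/2} exp(-|x-u|^2 / (32 n)),
    evaluated at r = |x - u|.  n_terms is an arbitrary truncation level."""
    return R * sum(math.exp(-r * r / (32.0 * n)) / n ** 1.5
                   for n in range(1, n_terms + 1))
```

For instance, $r\,g_{u,3}$ evaluated at $r=10$, $R=1$ is close to $\sqrt{32\pi}\approx 10.03$, and the kernel is decreasing in $r$ and linear in $R$, as the definition requires.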
\begin{corollary}\label{4c0.1} For any $\varepsilon_0\in (0,1)$, $T\geq \varepsilon_0^{-1}+100$, there are constants $\theta_{\ref{4c0.1}}\geq 100, m_{\ref{4c0.1}}>0$ depending only on $\varepsilon_0, T$ such that for all $\theta \geq \theta_{\ref{4c0.1}}$, there is some $C_{\ref{4c0.1}}(\varepsilon_0, T,\theta)\geq 4\theta$ such that for any $R\geq C_{\ref{4c0.1}}$ and any $Z_0$ satisfying \eqref{4e12.01}, we have \begin{align}\label{4e11.37} \mathbb{P}^{Z_0}\Big(\tilde{Z}_{T_\theta^R}(g_{u,d})\leq m_{\ref{4c0.1}} \frac{R^{d-1}}{\theta^{1/4}},\quad \forall u\in \mathbb{R}^d \Big)\geq 1-2\varepsilon_0, \end{align} where $\tilde{Z}_{T_\theta^R}(\cdot)=Z_{T_\theta^R}(\cdot \cap Q_{4R_\theta}(0))$. \end{corollary} \noindent Corollary \ref{4c0.1} is an easy refinement of Proposition \ref{4p3}. Its proof is given in Section \ref{4s7}. The proofs of Propositions \ref{4p1}, \ref{4p2} and \ref{4p3} will be the main parts of this paper and are deferred to Sections \ref{4s3}, \ref{4s5}, \ref{4s6}, \ref{4s7}. Assuming the above results, we will prove the survival of the SIR epidemic in Section \ref{4s2}, thus giving our main result Theorem \ref{4t0}. \\ \noindent $\mathbf{Organization\ of\ the\ paper.}$ In Section \ref{4s2}, assuming Propositions \ref{4p4}, \ref{4p1}, \ref{4p2} and Corollary \ref{4c0.1}, we give the proof of our main result Theorem \ref{4t0} by showing the survival of the SIR epidemic. We use a comparison with supercritical oriented percolation inspired by that in \cite{LPZ14}, along with some new ideas and some necessary adjustments to our setting. In Section \ref{4s3}, we will prove Proposition \ref{4p1} for the support propagation and state some preliminary results, including the $p$-th moments, exponential moments and the martingale problem, for the branching random walk. 
Section \ref{4s4} introduces the potential kernel, and by applying it to the martingale problem, we get a discrete version of Tanaka's formula for the ``local times'' of the branching random walk. Using this Tanaka formula and a discrete version of Garsia's lemma, we give the proof of Proposition \ref{4p2} for $d=2$ in Section \ref{4s5} and $d=3$ in Section \ref{4s6}. In Section \ref{4s7}, the proofs of Proposition \ref{4p3} and Corollary \ref{4c0.1} for the regularity of the branching random walk are completed. Finally in Section \ref{4s8}, we prove Proposition \ref{4p4} that will imply the survival of the SIR epidemic. \section{Oriented percolation and proof of survival} \label{4s2} \subsection{SIR epidemic with immigration} Recall from \eqref{4e10.15} the SIR epidemic process $\eta$ starting from $(\eta_0,\rho_0)$: \begin{align}\label{4ea4.8} \eta_n=\{x\in \mathbb{Z}^d_R: d_{G(\rho_0)}(\eta_0, x)=n\}:=\eta_n^{\eta_0,\rho_0}, \quad \forall n\geq 0. \end{align} In order to prove the survival of $\eta$, we need a coupled SIR epidemic process to serve as a lower bound. Let $\mu_0, \nu_0$ be two finite subsets of $\mathbb{Z}^d_R$ and set $\rho_0$ to be a finite set disjoint from $\mu_0\cup \nu_0$. Recall from Lemma \ref{4l1.5} that $(\eta_n,\rho_n)$ satisfies the Markov property w.r.t. $(\mathcal G_n)$ where \begin{align}\label{4ea4.18} \mathcal G_n=\sigma(\{B^\alpha: 0<|\alpha|\leq n\}), \quad \forall n\geq 0.
\end{align} We say $\eta^{*}$ is an {\bf SIR epidemic process with immigration at time $k_*$} if \begin{align}\label{4ea3.25} &\eta_0^*=\mu_0,\quad \rho_0^*=\rho_0, \quad \rho_{n+1}^*=\rho_n^* \cup \eta_n^* \quad \text{ and}\nonumber\\ &\eta_n^*=\{x\in \mathbb{Z}^d_R: d_{G(\rho_0^*)}(\eta_0^*, x)=n\}=\eta_n^{\eta_0^*,\rho_0^*}, \text{ if } n\leq k_*;\nonumber\\ &\eta_n^*=\{x\in \mathbb{Z}^d_R: d_{G(\rho_{k_*}^*)}(\eta_{k_*}^* \cup \nu_0, x)=n-k_*\}=\eta_{n-k_*}^{\eta_{k_*}^*\cup \nu_0,\rho_{k_*}^*}, \text{ if } n> k_*, \end{align} where $G(\rho_{k_*}^*)$ is the percolation graph obtained by deleting all the edges containing a vertex in $\rho_{k_*}^*$. The dependence of $\eta^*$ on $\mu_0, \nu_0,\rho_0, k_*$ will be implicit. One can check that $(\eta^*,\rho^*)$ satisfies the Markov property w.r.t. $(\mathcal G_n)$. Briefly speaking, at time $k_*$ all the non-recovered sites in $\nu_0$ are suddenly infected. This could be due to the infection caused by, say, intercontinental travel. Before time $k_*$, $\eta_n^*$ is the usual SIR epidemic starting from $(\mu_0, \rho_0)$. At time $k_*$, we let all the non-recovered sites in $\nu_0$ become infected. Afterwards $\eta_n^*$ will evolve as the usual SIR epidemic starting from $(\eta_{k_*}^* \cup \nu_0,\rho_{k_*}^*)$. The following lemma tells us that the SIR epidemic with immigration gives a lower bound for the original epidemic. \begin{lemma}\label{4l0.2} Let $\mu_0, \nu_0$ be finite subsets of $\mathbb{Z}^d_R$ and set $\rho_0$ to be a finite set disjoint from $\mu_0\cup \nu_0$. For any integer $k_*\geq 0$, and any finite $\eta_0$ with $\mu_0\cup \nu_0 \subseteq \eta_0$, if $\eta$ and $\eta^*$ are given as in \eqref{4ea4.8} and \eqref{4ea3.25}, we have \[ \cup_{k=0}^n \eta^*_k \subseteq \cup_{k=0}^n \eta_k ,\quad \forall n\geq 0. \] \end{lemma} \begin{proof} For any $n\leq k_*$, $\eta_n^*$ is a usual SIR epidemic starting from $(\mu_0,\rho_0)$.
Since $\mu_0\subseteq \eta_0$, by Lemma \ref{4l0} we have \begin{align}\label{4ea3.40} \cup_{k=0}^n \eta_k^{*} \subseteq \cup_{k=0}^n \eta_k, \quad \forall n\leq k_*. \end{align} Moreover, by \eqref{4ea3.25} we have \begin{align}\label{4ea3.10} &\cup_{k=0}^n \eta^*_k= \{x\in \mathbb{Z}^d_R: d_{G(\rho_{0}^*)}(\eta_0^*, x)\leq n\}, \quad \forall n\leq k_*. \end{align} For $n\geq k_*+1$, use \eqref{4ea3.25} again to get \begin{align} \eta_n^*=&\{x\in \mathbb{Z}^d_R: d_{G(\rho_{k_*}^*)}(\eta_{k_*}^* \cup \nu_0, x)=n-k_*\}\nonumber\\ \subseteq &\{x\in \mathbb{Z}^d_R: d_{G(\rho_{k_*}^*)}(\eta_{k_*}^*, x)=n-k_*\} \cup \{x\in \mathbb{Z}^d_R: d_{G(\rho_{k_*}^*)}(\nu_0, x)=n-k_*\}\nonumber\\ =&\{x\in \mathbb{Z}^d_R: d_{G(\rho_{0}^*)}(\eta_{0}^*, x)=n\} \cup \{x\in \mathbb{Z}^d_R: d_{G(\rho_{k_*}^*)}(\nu_0, x)=n-k_*\}, \end{align} where the last equality is by \eqref{4ea3.26}. Apply the above and \eqref{4ea3.10} to see that for $n\geq k_*+1$, \begin{align}\label{4e11.1} &\cup_{k=0}^n \eta^*_k \subseteq \{x\in \mathbb{Z}^d_R: d_{G(\rho_{0}^*)}(\eta_0^*, x)\leq n\} \cup \{x\in \mathbb{Z}^d_R: 1\leq d_{G(\rho_{k_*}^*)}(\nu_0, x)\leq n-k_*\}. \end{align} \noindent On the other hand, by using \eqref{4e11.3} and $\eta_0\supseteq \mu_0 \cup \nu_0$, for $n\geq k_*+1$, we have \begin{align}\label{4e11.2} \cup_{k=0}^n \eta_k =&\{x\in \mathbb{Z}^d_R: d_{G(\rho_{0})}(\eta_0, x)\leq n\}\supseteq\{x\in \mathbb{Z}^d_R: d_{G(\rho_{0})}(\mu_0 \cup \nu_0, x)\leq n\}\nonumber\\ =&\{x\in \mathbb{Z}^d_R: d_{G(\rho_{0})}(\mu_0, x)\leq n\} \cup \{x\in \mathbb{Z}^d_R: d_{G(\rho_{0})}(\nu_0, x)\leq n\}. \end{align} Recall that $\eta_0^*=\mu_0$, $\rho_0^*=\rho_0$. Since $\rho_0 \subseteq \rho_{k_*}^* $, one can check that for any $x$ with $d_{G(\rho_{k_*}^*)}(\nu_0, x)\leq n-k_*$, we have $d_{G(\rho_0)}(\nu_0, x)\leq n$. So it follows from \eqref{4e11.1} and \eqref{4e11.2} that \[ \cup_{k=0}^n \eta_k^{*} \subseteq \cup_{k=0}^n \eta_k,\quad \forall n\geq k_*+1.
\] The proof is complete by \eqref{4ea3.40}. \end{proof} We may also consider immigration at random times. Let $\tau$ be some finite stopping time with respect to $(\mathcal G_n)$. We say $\eta^{*}$ is an {\bf SIR epidemic process with immigration at time $\tau$} if \begin{align}\label{4e10.17} &\eta_0^*=\mu_0,\quad \rho_0^*=\rho_0, \quad \rho_{n+1}^*=\rho_n^* \cup \eta_n^* \quad \text{ and}\nonumber\\ &\eta_n^*=\{x\in \mathbb{Z}^d_R: d_{G(\rho_0^*)}(\eta_0^*, x)=n\}=\eta_n^{\eta_0^*,\rho_0^*}, \text{ if } n\leq \tau;\nonumber\\ &\eta_n^*=\{x\in \mathbb{Z}^d_R: d_{G(\rho_{\tau}^*)}(\eta_{\tau}^* \cup \nu_0, x)=n-\tau\}=\eta_{n-\tau}^{\eta_{\tau}^*\cup \nu_0,\rho_{\tau}^*}, \text{ if } n> \tau, \end{align} where $G(\rho_{\tau}^*)$ is the percolation graph obtained by deleting all the edges containing a vertex in $\rho_{\tau}^*$. The dependence of $\eta^*$ on $\mu_0, \nu_0,\rho_0, \tau$ will be implicit. \begin{lemma}\label{4l0.3} Let $\mu_0, \nu_0$ be finite subsets of $\mathbb{Z}^d_R$ and set $\rho_0$ to be a finite set disjoint from $\mu_0\cup \nu_0$. For any finite stopping time $\tau$, and any finite $\eta_0$ with $\mu_0\cup \nu_0 \subseteq \eta_0$, if $\eta$ and $\eta^*$ are given as in \eqref{4ea4.8} and \eqref{4e10.17}, we have \[ \cup_{k=0}^n \eta^*_k \subseteq \cup_{k=0}^n \eta_k ,\quad \forall n\geq 0. \] \end{lemma} \begin{proof} The proof is similar to that of Lemma \ref{4l0.2} by conditioning on $\tau=k_*$ for $k_*\geq 0$. \end{proof} Finally we consider immigration at an increasing sequence of random times $0=\tau_0 \leq \tau_1\leq \tau_2\leq \cdots<\infty$. Here $\{\tau_i\}$ are finite stopping times with respect to $(\mathcal G_n)$. Let $\mu_0, \nu_0$ be two finite subsets of $\mathbb{Z}^d_R$.
For any finite subset $\rho_0$ disjoint from $\mu_0\cup \nu_0$, we say $\eta^{*}=(\eta_n^*, n\geq 0)$ is an {\bf SIR epidemic process with immigration at times $\{\tau_i, i\geq 0\}$} if \begin{align}\label{4e10.18} &\eta_0^*=\mu_0, \quad \rho_0^*=\rho_0,\nonumber\\ &\eta_n^*=\{x\in \mathbb{Z}^d_R: d_{G(\rho_{\tau_i}^*)}(\mu_i , x)=n-\tau_i\} \text{ for }\tau_i+1\leq n\leq \tau_{i+1},\nonumber\\ &\rho_{\tau_i+1}^*=\rho_{\tau_i}^* \cup \mu_i \text{ and } \rho_{n+1}^*=\rho_n^* \cup \eta_n^*\ \text{ for }\tau_i+1\leq n\leq \tau_{i+1}, \end{align} where for $i\geq 1$, $\mu_i$ and $\nu_i$ are $\mathcal G_{\tau_i}$-measurable random sets such that \begin{align}\label{4ea3.12} (\mu_i \cup \nu_i) \subseteq (\eta_{\tau_i}^* \cup \nu_{i-1}). \end{align} Briefly speaking, at time $\tau_{i}$ we introduce the immigration set $\nu_{i-1}$ and choose subsets $\mu_i$, $\nu_i$ from $\eta_{\tau_i}^* \cup \nu_{i-1}$. Restart the SIR epidemic with initial condition $\mu_i$ starting from time $\tau_i$. In the meantime, we keep $\nu_i$ for the next immigration at time $\tau_{i+1}$ while ``forgetting'' other infected sites in $\eta_{\tau_i}^*$, which is done by defining $\rho_{\tau_i+1}^*=\rho_{\tau_i}^* \cup \mu_i$ in \eqref{4e10.18}. If $\tau_{k}=\tau_i$ for all $k\geq i$ for some $i\geq 0$, we may ``freeze'' the epidemic by letting $\eta_n^*=\eta_{\tau_i}^*$ for all $n\geq \tau_i$. \begin{proposition}\label{4p2.3} Let $\mu_0, \nu_0$ be finite subsets of $\mathbb{Z}^d_R$ and set $\rho_0$ to be a finite set disjoint from $\mu_0\cup \nu_0$. For any finite stopping times $0=\tau_0 \leq \tau_1\leq \tau_2\leq \cdots<\infty$, and any finite $\eta_0$ with $\mu_0\cup \nu_0 \subseteq \eta_0$, if $\eta$ and $\eta^*$ are given as in \eqref{4ea4.8} and \eqref{4e10.18}, we have \begin{align}\label{4ea3.24} \cup_{k=0}^n \eta^*_k \subseteq \cup_{k=0}^n \eta_k ,\quad \forall n\geq 0.
\end{align} \end{proposition} \begin{proof} We will iteratively define a sequence of epidemic processes $\{\eta^{*,i}, i\geq 1\}$ such that \begin{align}\label{4ea3.31} \eta^{*}_n=\eta^{*,i}_{n-\tau_{i-1}}, \quad \forall \tau_{i-1}<n\leq \tau_i, \quad \forall i\geq 1. \end{align} \noindent Given $\mu_0, \nu_0, \eta_0$ and $\rho_0$ as above, we first consider the epidemic process $\eta^{*,1}$ such that \begin{align}\label{4ea3.41} &\eta_0^{*,1}=\mu_0,\quad \rho_0^{*,1}=\rho_0, \quad \rho_{n+1}^{*,1}=\rho_n^{*,1} \cup \eta_n^{*,1} \quad \text{ and}\nonumber\\ &\eta_n^{*,1}=\{x\in \mathbb{Z}^d_R: d_{G(\rho_0^{*,1})}(\eta_0^{*,1}, x)=n\}=\eta_n^{\eta_0^{*,1},\rho_0^{*,1}}, \text{ if } n\leq \tau_1;\nonumber\\ &\eta_n^{*,1}=\{x\in \mathbb{Z}^d_R: d_{G(\rho_{\tau_1}^{*,1})}(\eta_{\tau_1}^{*,1} \cup \nu_0, x)=n-\tau_1\}=\eta_{n-\tau_1}^{\eta_{\tau_1}^{*,1}\cup \nu_0,\rho_{\tau_1}^{*,1}}, \text{ if } n> \tau_1. \end{align} By Lemma \ref{4l0.3}, we have \begin{align}\label{4ea3.32} \cup_{k=0}^n \eta^{*,1}_k\subseteq \cup_{k=0}^n \eta_k, \quad \forall n\geq 0. \end{align} It is easy to check that $\eta^{*}_n=\eta^{*,1}_{n}$ for all $1\leq n\leq \tau_1$. Apply \eqref{4ea3.32} to get \begin{align}\label{4ea3.20} \cup_{k=0}^n \eta^{*}_k\subseteq \cup_{k=0}^n \eta_k, \quad \forall 0\leq n\leq \tau_1. \end{align} Since $\rho_0^{*,1}=\rho_{0}^*$ and $\eta_0^{*,1}=\mu_0$, we also have $\rho_{\tau_1}^{*,1}=\rho_{\tau_1}^{*}$. By \eqref{4ea3.41} and $\eta^{*}_{\tau_1}=\eta^{*,1}_{\tau_1}$, we conclude that, conditionally on $\mathcal G_{\tau_1}$, the process $(\eta^{*,1}_{k+\tau_1}, k\geq 1)$ is a usual SIR epidemic starting from $(\eta_{\tau_1}^* \cup \nu_0, \rho_{\tau_1}^*)$. Next, choose random sets $\mu_1$, $\nu_1$ which are $\mathcal G_{\tau_1}$-measurable such that $(\mu_1 \cup \nu_1) \subseteq (\eta_{\tau_1}^* \cup \nu_{0})$.
We consider the epidemic process $\eta^{*,2}$ such that \begin{align}\label{4ea3.22} &\eta_0^{*,2}=\mu_1,\quad \rho_0^{*,2}=\rho_{\tau_1}^*, \quad \rho_{n+1}^{*,2}=\rho_n^{*,2} \cup \eta_n^{*,2} \quad \text{ and}\nonumber\\ &\eta_n^{*,2}=\{x\in \mathbb{Z}^d_R: d_{G(\rho_0^{*,2})}(\eta_0^{*,2}, x)=n\}=\eta_n^{\eta_0^{*,2},\rho_0^{*,2}}, \text{ if } n\leq \tau_2-\tau_1;\nonumber\\ &\eta_n^{*,2}=\{x\in \mathbb{Z}^d_R: d_{G(\rho_{\tau_2-\tau_1}^{*,2})}(\eta_{\tau_2-\tau_1}^{*,2} \cup \nu_1, x)=n-(\tau_2-\tau_1)\}=\eta_{n-(\tau_2-\tau_1)}^{\eta_{\tau_2-\tau_1}^{*,2}\cup \nu_1,\rho_{\tau_2-\tau_1}^{*,2}},\nonumber\\ &\quad \quad \quad \text{ if } n> \tau_2-\tau_1. \end{align} By Lemma \ref{4l0.3} applied to $(\eta^{*,1}_{k+\tau_1}, k\geq 1)$ and $\eta^{*,2}$, we have for all $n\geq 0$, \begin{align}\label{4ea3.33} \cup_{k=0}^n \eta^{*,2}_k\subseteq (\eta_{\tau_1}^* \cup \nu_0) \bigcup \cup_{k=1}^n \eta^{*,1}_{k+\tau_1}=\nu_0\bigcup \cup_{k=\tau_1}^{n+\tau_1} \eta^{*,1}_{k} \subseteq \cup_{k=0}^{n+\tau_1} \eta_k, \end{align} where the equality uses $\eta_{\tau_1}^*=\eta_{\tau_1}^{*,1}$ and the last subset relation uses \eqref{4ea3.32} and $\nu_0\subseteq \eta_0$. By \eqref{4e10.18}, conditionally on $\mathcal G_{\tau_1}$, the process $\{\eta_{n+\tau_1}^*, 0<n\leq \tau_2-\tau_1\}$ is a usual SIR epidemic starting from $(\mu_1, \rho_{\tau_1}^*)$. Therefore $\eta^{*}_n=\eta^{*,2}_{n-\tau_1}$ for all $\tau_1<n\leq \tau_2$ and it follows that for any $\tau_1<n\leq \tau_2$, \begin{align} &\cup_{k=\tau_1+1}^{n} \eta^{*}_k=\cup_{k=\tau_1+1}^{n} \eta^{*,2}_{k-\tau_1}=\cup_{k=1}^{n-\tau_1} \eta^{*,2}_{k}\subseteq \cup_{k=0}^{n} \eta_{k}, \end{align} where the last subset relation uses \eqref{4ea3.33}. Together with \eqref{4ea3.20}, we conclude \begin{align}\label{4ea3.21} &\cup_{k=0}^{n} \eta^{*}_k \subseteq \cup_{k=0}^{n} \eta_{k}, \quad \forall 0\leq n\leq \tau_2.
\end{align} Since $\rho_0^{*,2}=\rho_{\tau_1}^*$ and $\eta_0^{*,2}=\mu_1$, we also have $\rho_{\tau_2-\tau_1}^{*,2}=\rho_{\tau_2}^{*}$. By \eqref{4ea3.22} and $\eta^{*,2}_{\tau_2-\tau_1}=\eta^{*}_{\tau_2}$, we conclude that the process \mbox{$(\eta^{*,2}_{k+\tau_2-\tau_1}, k\geq 1)$} is a usual SIR epidemic starting from $(\eta_{\tau_2}^* \cup \nu_1, \rho_{\tau_2}^*)$. Next, choose random sets $\mu_2$, $\nu_2$ which are $\mathcal G_{\tau_2}$-measurable such that $(\mu_2 \cup \nu_2) \subseteq (\eta_{\tau_2}^* \cup \nu_{1})$. We may repeat the above and consider some epidemic process $\eta^{*,3}$ with $\eta^{*,3}_0=\mu_2$ and $\rho^{*,3}_0=\rho_{\tau_2}^*$ in a way similar to \eqref{4ea3.22}. Similar arguments will give that \begin{align}\label{4ea3.23} &\cup_{k=0}^{n} \eta^{*}_k \subseteq \cup_{k=0}^{n} \eta_{k}, \quad \forall 0 \leq n\leq \tau_3. \end{align} Therefore by induction we conclude \eqref{4ea3.24} holds. \end{proof} \subsection{Proofs of Theorem \ref{4t0} and the survival of the epidemic}\label{4s2.2} Now we return to the original SIR epidemic process $\eta$. By our discussion in the paragraph following Definition \ref{4def1.3}, the main result in Theorem \ref{4t0} is immediate from the proposition below. The proof will be patterned after that of Proposition 5.5 in \cite{LPZ14}. \begin{proposition}\label{4p0.5} Let $d=2$ or $d=3$. There exist some constants $\theta_d>0$ and $K_{\ref{4p0.5}}(d)>0$ so that for all $R>K_{\ref{4p0.5}}(d)$, the SIR epidemic process $\eta$ starting from $(\{0\},\emptyset)$ satisfies \[ \mathbb{P}(\eta_n\neq \emptyset, \forall n\geq 0) >0. \] \end{proposition} \begin{mydef} For any constant $m>0$ and $\mu \in M_F(\mathbb{Z}^d_R)$, we say $\mu$ is $m$-${\bf admissible}$ if \begin{align} \mu(g_{u,d})\leq m\frac{R^{d-1}}{\theta^{1/4}}, \quad \forall u\in \mathbb{R}^d, \end{align} where $g_{u,d}$ is as in \eqref{4e10.31}.
\end{mydef} \noindent For any $\mu\in M_F(\mathbb{Z}_R^d)$ and $K\subseteq \mathbb{R}^d$, write $\mu|_K(\cdot)=\mu(\cdot \cap K)$ for the measure $\mu$ restricted to $K$. In the setting of Corollary \ref{4c0.1}, we see that with high probability, $Z_{T_\theta^R}|_{Q_{4R_\theta}(0)}$ is $m_{\ref{4c0.1}}$-admissible. Since $Z_{T_\theta^R}$ dominates $\eta_{T_\theta^R}$, it follows that $\eta_{T_\theta^R}|_{Q_{4R_\theta}(0)}$ will be $m_{\ref{4c0.1}}$-admissible as well. Let $Y=(Y_n,n\geq 0)$ be a stochastic process taking values in the set of finite subsets of $\mathbb{Z}_R^d$. As usual we write $Y_n(x)=1(x\in Y_n), \forall x\in \mathbb{Z}_R^d$ so that $Y_n \in M_F(\mathbb{Z}^d_R)$ for all $n$. Recall the grid $\Gamma$ defined in Section \ref{4s1.2}. Choose $T\geq 100$ as in \eqref{4ea10.04}. For any $x\in \Gamma$, any $m,M, K, \chi>0$, $\theta\geq 100$ and $R\geq 4\theta$, define \begin{align}\label{4e1.18} &F_1(Y; M,x)=\{\text{Supp}(\sum_{n=0}^{T_\theta^R} Y_n) \subseteq Q_{M R_\theta }(xR_\theta)\};\nonumber\\ &F_2(Y; \chi)=\{\sum_{n=0}^{T_\theta^R} Y_n(\mathcal{N}(y)) \leq \chi R, \forall y\in \mathbb{Z}_R^d\};\nonumber\\ &F_3(Y; K,x)=\{\hat{Y}_{T_\theta^R}^K(Q_{R_\theta}(y R_\theta))\geq |Y_0|, \forall y\in \mathcal A(x)\};\nonumber\\ &F_4(Y; m,x)=\{{Y}_{T_\theta^R}|_{Q_{ R_\theta }(y R_\theta)}\text{ is } m \text{-admissible for all } y\in \mathcal A(x)\}. \end{align} Here $\hat{Y}_{T_\theta^R}^K$ is the ``thinned'' version of ${Y}_{T_\theta^R}$ such that $|\hat{Y}_{T_\theta^R}^K \cap Q(y)|\leq K \beta_d(R)$ for any $y\in \mathbb{Z}^d$, where $ \beta_d(R)$ is defined in \eqref{4ea10.45}. By using Propositions \ref{4p4}, \ref{4p1}, \ref{4p2} and Corollary \ref{4c0.1}, we show below that the above conditions will hold with high probability for $Y=\eta$, the SIR epidemic.
Define \begin{align}\label{4ea3.2} &\widetilde{M}=\widetilde{M}(M,\theta)=[M\sqrt{\log f_d(\theta)}]+1, \text{ and }\nonumber\\ &\kappa=\kappa(\chi,\widetilde{M})=(4\widetilde{M}+4)^2 \cdot \chi. \end{align} \begin{proposition}\label{4p0.7} For any $\varepsilon_0\in (0,1)$ and $T\geq \varepsilon_0^{-1}+100$ as in \eqref{4ea10.04}, there exist positive constants $\theta, m,M,K, \chi$ depending only on $T, \varepsilon_0$, and \mbox{$C_{\ref{4p0.7}}(\theta, m,M,K, \chi)\geq 4\theta$} such that for any $R\geq C_{\ref{4p0.7}}$, any finite $\eta_0$ as in \eqref{4ea10.23} which is $m$-admissible, and any finite $\rho_0$ disjoint from $\eta_0$ with \begin{align}\label{4ea3.4} |\rho_{0}\cap \mathcal{N}(y) | \leq \kappa R,\quad \forall y\in \mathbb{Z}_R^d, \end{align} the SIR epidemic process $\eta$ starting from $(\eta_0,\rho_0)$ satisfies \[ \mathbb{P}\Big(F_1(\eta; \widetilde{M},0) \cap F_2(\eta; \chi)\cap F_3(\eta; K,0)\cap F_4(\eta; m, 0) \Big)\geq 1-7\varepsilon_0. \] \end{proposition} \begin{proof} Fix $\varepsilon_0 \in (0,1)$ and $T\geq \varepsilon_0^{-1}+100$ satisfying \eqref{4ea10.04}. Let $\theta>\max\{\theta_{\ref{4p4}}, \theta_{\ref{4p1}}, \theta_{\ref{4p2}}, \theta_{\ref{4c0.1}}\}$ and $m=m_{\ref{4c0.1}}(\varepsilon_0, T)$. We will choose the remaining constants $M, K, \chi$ in the course of the proof. Set $C_{\ref{4p0.7}}=\max\{C_{\ref{4p4}}, C_{\ref{4p1}}, C_{\ref{4p2}}, C_{\ref{4c0.1}}\}$ and fix $R\geq C_{\ref{4p0.7}}$. Let $\eta_0$ be as in \eqref{4ea10.23} such that $\eta_0$ is $m$-admissible. Set $Z_0=\eta_0$. Use Lemma \ref{4l1.5} to see that there is some branching random walk $(Z_n)$ starting from $Z_0$ such that $Z_n$ dominates $\eta_n$ for all $n\geq 0$. A brief plan for the proof is as follows: we apply Proposition \ref{4p1} with $(Z_n)$ to show that with high probability (w.h.p.) $F_1(\eta; \widetilde{M},0)$ holds. Next, on the event $F_1(\eta; \widetilde{M},0)$, we use Proposition \ref{4p2} with $(Z_n)$ to get w.h.p.
$F_2(\eta; \chi)$ holds; on $F_1(\eta; \widetilde{M},0) \cap F_2(\eta; \chi)$, we prove w.h.p. $F_3(\eta; K,0)$ holds by Proposition \ref{4p4}. Finally we finish the proof by showing that w.h.p. $F_4(\eta;m, 0)$ holds by applying Corollary \ref{4c0.1} with $(Z_n)$. \\ \noindent (i) Since $Z_0=\eta_0$ is as in \eqref{4ea10.23}, $Z_0$ satisfies the assumption of Proposition \ref{4p1}. By letting $M=M_{\ref{4p1}}(\varepsilon_0, T)$, we may apply Proposition \ref{4p1} to get for $\theta\geq \theta_{\ref{4p1}}$ and $R\geq C_{\ref{4p0.7}} \geq C_{\ref{4p1}}$, with probability larger than $1-\varepsilon_0$ we have \[\text{Supp}(\sum_{n=0}^{T_\theta^R} Z_n) \subseteq Q_{M \sqrt{\log f_d(\theta)}R_\theta} (0)\subseteq Q_{\widetilde{M} R_\theta} (0),\] and so $F_1(\eta; \widetilde{M},0)$ holds since $Z_n$ dominates $\eta_n$ for all $n$. This gives \begin{align}\label{4ea4.11} \mathbb{P}(F_1(\eta; \widetilde{M}, 0))\geq 1-\varepsilon_0. \end{align} \noindent (ii) Next, recall $m=m_{\ref{4c0.1}}(\varepsilon_0, T)$. The $m$-admissible $Z_0=\eta_0$ (as in \eqref{4ea10.23}) satisfies the assumption of Proposition \ref{4p2}. By letting $\chi=\chi_{\ref{4p2}}(\varepsilon_0, T, m)$, we get for $\theta\geq \theta_{\ref{4p2}}$ and $R\geq C_{\ref{4p0.7}}\geq C_{\ref{4p2}}$, with probability larger than $1-\varepsilon_0$ we have \begin{align}\label{4ea4.9} \sum_{n=0}^{T_\theta^R} Z_n(\mathcal{N}(x)) \leq \chi {R}, \quad \forall x\in \mathbb{Z}^d_R \cap Q_{2M_{\ref{4p1}} \sqrt{\log f_d(\theta)}R_\theta} (0). \end{align} Recall $M=M_{\ref{4p1}}\geq 100$ and $\theta\geq 100$. So we have $\widetilde{M}< 2M \sqrt{\log f_d(\theta)}$ by \eqref{4ea3.2}. Since $Z_n$ dominates $\eta_n$ for all $n$, on the event $F_1(\eta; \widetilde{M},0)$, we conclude from \eqref{4ea4.9} that \begin{align}\label{4ea3.3} \sum_{n=0}^{T_\theta^R} \eta_n(\mathcal{N}(y)) \leq \chi {R}, \quad \forall y\in \mathbb{Z}^d_R. \end{align} Let $A$ denote the event in \eqref{4ea4.9}.
Then $\mathbb{P}(A)\geq 1-\varepsilon_0$ and it follows that \begin{align}\label{4ea4.12} \mathbb{P}(F_2(\eta; \chi) \cap F_1(\eta; \widetilde{M},0))\geq& \mathbb{P}(A \cap F_1(\eta; \widetilde{M},0))\nonumber\\ \geq& 1-\mathbb{P}(A^c)- \mathbb{P}(F_1(\eta; \widetilde{M},0)^c)\geq 1-2\varepsilon_0, \end{align} where in the last inequality we have used \eqref{4ea4.11}. \noindent (iii) On the event $F_2(\eta; \chi)$, we may use the assumption on $\rho_0$ in \eqref{4ea3.4} to conclude for all $y\in \mathbb{Z}_R^d$, \begin{align}\label{4ea3.35} |\rho_{T_\theta^R}\cap \mathcal{N}(y) |\leq |\rho_{0}\cap \mathcal{N}(y) |+\sum_{n=0}^{T_\theta^R} \eta_n(\mathcal{N}(y)) \leq (\kappa+\chi) R. \end{align} Let $\kappa'=\kappa+\chi$ and set $N(\kappa')=\{|\rho_{T_\theta^R}\cap \mathcal{N}(y) | \leq \kappa' R, \forall y\in \mathbb{Z}_R^d\}$. It follows that \begin{align}\label{4ea4.10} \mathbb{P}(N(\kappa'))\geq \mathbb{P}(F_2(\eta; \chi))\geq \mathbb{P}(F_1(\eta; \widetilde{M},0)\cap F_2(\eta; \chi))\geq 1-2\varepsilon_0, \end{align} where the last inequality is by \eqref{4ea4.12}. Let $K=K_{\ref{4p4}}(T,\varepsilon_0, \kappa')$. Apply Proposition \ref{4p4} to see for $\theta\geq \theta_{\ref{4p4}}$ and $R\geq C_{\ref{4p0.7}}\geq C_{\ref{4p4}}$, we have \begin{align} \mathbb{P}(F_3(\eta; K, 0)^c \cap N(\kappa'))\leq \varepsilon_0. \end{align} Therefore we get \begin{align} 1-\varepsilon_0\leq \mathbb{P}(F_3(\eta; K, 0)\cup N(\kappa')^c)&\leq \mathbb{P}(F_3(\eta; K, 0))+\mathbb{P}(N(\kappa')^c)\nonumber\\ &\leq \mathbb{P}(F_3(\eta; K, 0))+2\varepsilon_0, \end{align} where the last inequality is by \eqref{4ea4.10}. This gives \begin{align}\label{4ea4.13} \mathbb{P}(F_3(\eta; K, 0))\geq 1-3\varepsilon_0. \end{align} \noindent (iv) Turning to $F_4(\eta; m, 0)$, recall we set $m=m_{\ref{4c0.1}}(\varepsilon_0, T)$.
Since $Z_0=\eta_0$ is as in \eqref{4ea10.23}, we may apply Corollary \ref{4c0.1} to get for $\theta\geq \theta_{\ref{4c0.1}}$ and $R\geq C_{\ref{4p0.7}}\geq C_{\ref{4c0.1}}$, with probability larger than $1-2\varepsilon_0$ we have ${Z}_{T_\theta^R}|_{Q_{ 4R_\theta }(0)}$ is $m$-admissible. Since $Q_{ R_\theta }(y R_\theta) \subseteq Q_{ 4R_\theta }(0)$ for all $y\in \mathcal A(0)$, it follows that ${\eta}_{T_\theta^R}|_{Q_{ R_\theta }(y R_\theta)}$ is also $m$-admissible and so $F_4(\eta; m, 0)$ holds. We conclude \begin{align}\label{4ea4.14} \mathbb{P}(F_4(\eta; m, 0))\geq 1-2\varepsilon_0. \end{align} Now we have \eqref{4ea4.12}, \eqref{4ea4.13}, \eqref{4ea4.14} hold and so \begin{align} \mathbb{P}(F_4(\eta; m, 0)\cap F_3(\eta; K, 0)\cap F_2(\eta; \chi) \cap F_1(\eta; \widetilde{M},0))\geq 1-7\varepsilon_0. \end{align} The proof is then complete. \end{proof} We are ready to give the proof of Proposition \ref{4p0.5}, thus finishing the proof of the main result Theorem \ref{4t0}. \begin{proof}[Proof of Proposition \ref{4p0.5}] By a trivial union inclusion and translation invariance, it suffices to prove the survival of the SIR epidemic process $\eta$ starting from $(\eta_0, \emptyset)$ for some finite $\eta_0\subseteq \mathbb{Z}_R^d$. Let $\varepsilon_0\in (0,1)$ be small so that any $3$-dependent oriented site percolation process on $\mathbb{Z}_+^2$ with density at least $(1-14\varepsilon_0)$ has positive probability of percolation. For this $\varepsilon_0$, we fix $T\geq \varepsilon_0^{-1}+100$ satisfying \eqref{4ea10.04}. Let $\theta, m,M,K, \chi>0$ be as in Proposition \ref{4p0.7} and let $R\geq C_{\ref{4p0.7}}$. Set $\rho_0=\emptyset$ and choose a finite set $\eta_0$ such that it satisfies the hypothesis of Proposition \ref{4p0.7}. The existence of such $\eta_0$ is immediate from Proposition \ref{4p4} and Corollary \ref{4c0.1}. Let $\eta=(\eta_n,n\geq 0)$ be a usual SIR epidemic starting from $(\eta_0, \emptyset)$. 
Since our initial infection set $\eta_0$ is finite, one can check by \eqref{4e10.14} that \begin{align}\label{4equiv} \cup_{n=0}^\infty \eta_n \text{ is not a compact set} \Rightarrow\eta_n\neq \emptyset, \forall n\geq 0. \end{align} Write $\rho_\infty=\cup_{n=0}^\infty \eta_n$. By slightly abusing the notation, we let $\rho_\infty$ be a measure on $\mathbb{Z}_R^d$ such that $\rho_\infty(x)=1(x\in \rho_\infty)$ for $x\in \mathbb{Z}_R^d$. Note we also write $\eta_n$ for the measure $\eta_n(x)=1(x\in \eta_n)$. By \eqref{4equiv}, it suffices to show that with positive probability, the measure $\rho_\infty$ is not compactly supported. To do this, we will produce a random set $\Omega$ on the two-dimensional grid $\Gamma$ such that \begin{align}\label{4e10.34} \begin{dcases} \text{ (i) } &\rho_\infty (Q_{R_\theta}(x R_\theta))>0 \text{ for all $x\in \Omega$};\\ \text{ (ii) } & \Omega \text{ is infinite with positive probability.} \end{dcases} \end{align} Before describing the algorithm used to construct $\Omega$, we first introduce some notation. We will frequently use the stopping rule $\tau=\tau(Y, x)$ defined as follows: for $x\in \mathbb{R}^d$ and for the stochastic process $Y=(Y_n,n\geq 0)$ taking values in the set of finite subsets of $\mathbb{Z}^d_R$, let \begin{align} \tau(Y,x)=\inf\Big\{n\geq 0: &\sup_{y\in \mathbb{Z}_R^d} \sum_{k=0}^n Y_k(\mathcal{N}(y)) > \chi R \text{ or } \nonumber\\ &\text{Supp}(\sum_{k=0}^n Y_k) \nsubseteq Q_{\widetilde{M}R_\theta}(xR_\theta)\Big\} \wedge T_\theta^R. \end{align} Recall that $\Gamma=\{x(1), x(2), \cdots\}$ where $0 = x(1) \prec x(2) \prec \cdots$ with the total order defined by \eqref{4ea1.2}. Set $\tau_0=0$, $\mu_0=\eta_0$ and $\nu_0=\emptyset$. Starting from $x(1)=0$, following the total order we will define stopping times $\tau_i$ using $\tau(Y,x)$ above. Let $\eta^*$ be the SIR epidemic with immigration at times $\{\tau_i, i\geq 0\}$ satisfying \eqref{4e10.18}.
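The stopping rule $\tau(Y,x)$ above is a deterministic functional of the trajectory, and it may help to see both triggering conditions spelled out concretely. The Python sketch below is only a toy stand-in (assumptions: sites live on $\mathbb{Z}^2$ instead of $\mathbb{Z}_R^d$, $\mathcal{N}(y)$ is taken as the $L^\infty$ ball of radius $1$, and the thresholds $\chi R$, $\widetilde{M}R_\theta$, $T_\theta^R$ are replaced by the toy parameters \texttt{chi\_R}, \texttt{box\_radius}, \texttt{T\_cap}):

```python
from itertools import product

def stopping_time(Y, x, chi_R, box_radius, T_cap):
    """Toy tau(Y, x): the first n at which the accumulated process
    sum_{k<=n} Y_k either puts more than chi_R mass in some unit
    neighborhood N(y) (here the L-infinity ball of radius 1 in Z^2)
    or escapes the box Q_{box_radius}(x), capped at T_cap.
    chi_R, box_radius, T_cap are toy stand-ins for chi*R,
    M~*R_theta and T_theta^R."""
    occupation = {}                      # running occupation measure
    for n, Y_n in enumerate(Y[:T_cap + 1]):
        for site in Y_n:
            occupation[site] = occupation.get(site, 0) + 1
        # escape condition: support must stay inside the box around x
        if any(max(abs(s[0] - x[0]), abs(s[1] - x[1])) > box_radius
               for s in occupation):
            return n
        # local-density condition: sup over all relevant centers y
        centers = {(s[0] + dx, s[1] + dy) for s in occupation
                   for dx, dy in product((-1, 0, 1), repeat=2)}
        for y in centers:
            mass = sum(v for s, v in occupation.items()
                       if max(abs(s[0] - y[0]), abs(s[1] - y[1])) <= 1)
            if mass > chi_R:
                return n
    return min(len(Y) - 1, T_cap)
```

Either trigger returns the current step, mirroring the infimum in the display above; if neither fires, the time cap takes over.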
Below we will choose $\mathcal G_{\tau_i}$-measurable finite sets $\mu_i,\nu_i$ in a way such that $|\mu_i|=|\eta_0|$ and $(\mu_i \cup \nu_i)\subseteq (\eta_{\tau_i}^*\cup \nu_{i-1})$ for all $i\geq 1$. Then we may apply Proposition \ref{4p2.3} to couple $\eta$ with $\eta^*$ so that $\cup_{k=0}^n \eta_k^*\subseteq \cup_{k=0}^n \eta_k$ for all $n\geq 0$. For each $i\geq 1$, we let $Y_0^i=\mu_{i-1}$ and $Y_{n}^i=\eta_{n+\tau_{i-1}}^*$ for $n\geq 1$ to denote the epidemic process $\eta^*$ between $\tau_{i-1}$ and $\tau_i$. Then $Y^i$ is a usual SIR epidemic starting from $(\mu_{i-1}, \rho_0^{i-1,*})$. Define the ``good'' events \begin{align} G^i=F^1(Y^i; \widetilde{M}, x(i)) &\cap F^2(Y^i; \chi) \cap F^3(Y^i; K,x(i)) \cap F^4(Y^i; m, x(i)). \end{align} On the good event, $F^1$ and $F^2$ ensure that before time $T_\theta^R$, the epidemic $Y^i$ has not accumulated more than $\chi R$ recovered sites in any unit cube $\mathcal{N}(y)$ and has not escaped $Q_{\widetilde{M} R_\theta}(x(i)R_\theta)$; $F^3$ guarantees that at time $T_\theta^R$, the epidemic has spread at least $|Y_0^i|=|\mu_{i-1}|=|\eta_0|$ infected sites into each of the cubes $Q_{R_\theta}(yR_\theta)$ for $y\in \mathcal A(x(i))$ after thinning; finally $F^4$ is a technical restriction needed for the proof of Proposition \ref{4p0.7}, the $m$-admissible property. This also allows us to carefully choose $\{Y_0^{i}\}$ so that the good events will propagate with high probability. The recovered set $\rho_0^{i,*}$ will be determined as follows: $\rho_0^{0,*}\equiv \emptyset$, and for $i\geq 1$, \begin{align} \rho_0^{i,*}=\rho_0^{i-1,*} \bigcup \bigcup_{n=0}^{\tau_{i}-\tau_{i-1}-1} Y^i_n. \end{align} Recall $Y_0^i=\mu_{i-1}$ and $Y_{n}^i=\eta_{n+\tau_{i-1}}^*$ for $n\geq 1$. One can easily check by induction that $\rho_0^{i,*}$ is the total recovered set of $\eta^*$ up to time $\tau_i$, i.e. $\rho_0^{i,*}=\rho_{\tau_i}^*=\cup_{n=0}^{\tau_i-1} \eta_n^*$.
Below we will set $\tau_{i}-\tau_{i-1}$ to be $0$ or $\tau(Y^i, x(i))$ for different cases. In either case, one may check by induction that $\tau_i$ is a stopping time with respect to $(\mathcal G_n)$ if $\tau_{i-1}$ is. If $\tau_{i}-\tau_{i-1}=\tau(Y^i, x(i))$, then the definition of $\tau(Y^i, x(i))$ gives that \[ \Big|\bigcup_{n=0}^{\tau_{i}-\tau_{i-1}-1} Y^i_n \cap \mathcal{N}(y)\Big|=\sum_{n=0}^{\tau(Y^i, x(i))-1} Y^i_n(\mathcal{N}(y))\leq \chi R \cdot 1_{\{\mathcal{N}(y) \cap Q_{\widetilde{M} R_\theta}(x(i)R_\theta)\neq \emptyset\}}, \quad \forall y\in \mathbb{Z}^d_R. \] The case for $\tau_{i}-\tau_{i-1}=0$ is trivial. So it follows that for each $i\geq 1$, \begin{align}\label{4ea3.9} |\rho_0^{i,*} \cap \mathcal{N}(y)|\leq \chi R \cdot \sum_{j=1}^i 1_{\{\mathcal{N}(y) \cap Q_{\widetilde{M} R_\theta}(x(j)R_\theta)\neq \emptyset\}}, \quad \forall y\in \mathbb{Z}^d_R. \end{align} Notice that each unit cube $\mathcal{N}(y)$ has non-empty intersection with at most $(4\widetilde{M}+4)^2$ cubes of the form $Q_{\widetilde{M} R_\theta}(x(j)R_\theta)$ for $x(j)$ in the $2$-dimensional grid $\Gamma$. Hence for any $i\geq 1$, by \eqref{4ea3.9} we have \begin{align}\label{4ea3.8} |\rho_0^{i,*} \cap \mathcal{N}(y)| &\leq \chi R \cdot \sum_{j=1}^\infty 1_{\{\mathcal{N}(y) \cap Q_{\widetilde{M} R_\theta}(x(j)R_\theta)\neq \emptyset\}}\nonumber\\ & \leq \chi R \cdot (4\widetilde{M}+4)^2=\kappa R, \quad \forall y\in \mathbb{Z}^d_R, \end{align} where the last equality is from \eqref{4ea3.2}. Therefore the assumption \eqref{4ea3.4} on $\rho_0$ of Proposition \ref{4p0.7} will always be satisfied. For notational ease, we write \[ \widetilde{Q}(x)=Q_{R_\theta}(x R_\theta) \text{ for any } x\in \mathbb{Z}^d. \] Now we are ready to introduce the algorithm. We start with $x(1)=0$. Set $\tau_0=0$, $\mu_0=\eta_0$, $\nu_0=\emptyset$ and $\rho_0^{*}=\rho_0^{0,*}=\emptyset$. We first let $\eta^*$ proceed as a usual SIR epidemic starting from $(\mu_0, \rho_0^{0,*})$.
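As an aside, the overlap count behind \eqref{4ea3.8} can be verified by brute force in a toy case. The sketch below takes $R_\theta=1$ and identifies $\Gamma$ with $\mathbb{Z}^2$ (both simplifying assumptions); the cubes $Q_{\widetilde M}(x)=[x-\widetilde M, x+\widetilde M]^2$ and $\mathcal N(y)=[y-1,y+1]^2$ intersect exactly when $|y_i-x_i|\leq \widetilde M+1$ in each coordinate, so the count is at most $(2\widetilde M+3)^2\leq (4\widetilde M+4)^2$:

```python
def overlap_count(y, M, search=50):
    """Count integer centers x in Z^2 whose cube Q_M(x) = [x-M, x+M]^2
    intersects the unit neighborhood N(y) = [y-1, y+1]^2; this is the
    toy case R_theta = 1 with Gamma identified with Z^2.  The two
    cubes intersect iff |y_i - x_i| <= M + 1 in each coordinate."""
    return sum(1 for a in range(-search, search + 1)
                 for b in range(-search, search + 1)
                 if abs(y[0] - a) <= M + 1 and abs(y[1] - b) <= M + 1)
```

The worst case is a cube centered exactly on a grid point, which still stays well below the crude bound $(4\widetilde M+4)^2$ used in \eqref{4ea3.8}.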
Let $Y_{0}^1=\mu_0$ and $Y_{n}^1=\eta_{n+\tau_{0}}^*$ for $n\geq 1$. Let $\tau_1=\tau(Y^1; x(1))$. By Proposition \ref{4p0.7}, the good event $G^1$ occurs with probability $\geq 1-7\varepsilon_0$. If the good event occurs, we have $\tau_1=T_\theta^R$ and we change the status of site $x(1)=0$ to be occupied. Since $F^3(Y^1; K,x(1))$ holds, we have \begin{align}\label{4ea3.60} |\hat{\eta}^{*,K}_{\tau_1}\cap \widetilde{Q}(z)|= |\hat{Y}^{1}_{T_\theta^R}( \widetilde{Q}(z))|\geq |Y_{0}^1|=|\mu_0|=|\eta_0| \text{ for all } z\in \mathcal A(x(1)). \end{align} Totally order $\mathbb{Z}_R$ by $\{0,1/R,-1/R,2/R,-2/R,\cdots\}$ and then totally order $\mathbb{Z}_R^d$ lexicographically. By \eqref{4ea3.60} we may choose $\widetilde{\eta}^{*,K}_{\tau_1}\subseteq \hat{\eta}^{*,K}_{\tau_1}$ following the above total order on $\mathbb{Z}_R^d \cap \hat{\eta}^{*,K}_{\tau_1}$ such that \begin{align}\label{4ea3.6} |\widetilde{\eta}^{*,K}_{\tau_1}\cap \widetilde{Q}(z)|=|Y_{0}^1|=|\mu_0|=|\eta_0| \text{ for all } z\in \mathcal A(x(1)). \end{align} Recall that we also obtain the ``thinned'' version $\hat{\eta}^{*,K}_{\tau_1}$ from ${\eta}^{*}_{\tau_1}$ in a deterministic way in Proposition \ref{4p4}. Since ${\eta}^{*}_{\tau_1}\in \mathcal G_{\tau_1}$, it follows that $\hat{\eta}^{*,K}_{\tau_1}\in \mathcal G_{\tau_1}$ and hence $\widetilde{\eta}^{*,K}_{\tau_1}\in \mathcal G_{\tau_1}$. Next, $F^4$ ensures that for each $z\in \mathcal A(x(1))$, we have $\widetilde{\eta}^{*,K}_{\tau_1}|_{\widetilde{Q}(z)}$ is $m$-admissible. Further define \begin{align}\label{4ea3.7} w_1= \begin{dcases} \bigcup_{z\in \mathcal A(x(1))} (\widetilde{\eta}^{*,K}_{\tau_1} \cap \widetilde{Q}(z)), & \text{ if } G^1 \text{ occurs,}\\ \emptyset,& \text{ otherwise. } \end{dcases} \end{align} In this way if $G^1$ occurs, then $w_1$ has exactly $|\eta_0|$ infected sites in each cube $\widetilde{Q}(z)$ for $z\in \mathcal A(x(1))$ and the assumption of $\eta_0$ in Proposition \ref{4p0.7} will be satisfied.
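The selection of $\widetilde{\eta}^{*,K}_{\tau_1}$ above is deterministic precisely because it only uses the stated total order. A small Python sketch of this order (the enumeration $0, 1/R, -1/R, 2/R, -2/R, \dots$ of $\mathbb{Z}_R$, extended lexicographically to $\mathbb{Z}_R^d$) and of the selection of the first $m$ points under it:

```python
def zr_key(x, R):
    """Rank of x = k/R in the enumeration 0, 1/R, -1/R, 2/R, -2/R, ...
    of Z_R: positive k sits at position 2k-1, nonpositive k at -2k."""
    k = round(x * R)
    return 2 * k - 1 if k > 0 else -2 * k

def select_first(points, m, R):
    """Pick the first m points of Z_R^d under the lexicographic
    extension of the order above -- mimicking the deterministic,
    measurable choice of a subset as in (4ea3.6)."""
    return sorted(points, key=lambda p: tuple(zr_key(c, R) for c in p))[:m]
```

Because the key function is deterministic, the selected subset is a measurable function of the input set, which is all that the $\mathcal G_{\tau_1}$-measurability argument needs.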
We now work with site $y=x(i)$ for $i\geq 2$. \noindent {\bf Case I.} Suppose $y=x(i)$ is an immediate offspring of some occupied site $x(j)$ with $j<i$ (i.e. $x(i)\in \mathcal A(x(j))$ and the good event $G^j$ occurs). Define \[ (\mu_{i-1}, \nu_{i-1})=(w_{i-1}\cap \widetilde{Q}(y), w_{i-1} \cap \widetilde{Q}(y)^c). \] By \eqref{4ea3.6} and \eqref{4ea3.7}, we have $\mu_{i-1}=\widetilde{\eta}^{*,K}_{\tau_j} \cap \widetilde{Q}(y)$ with total mass $|\mu_{i-1}|=|\eta_0|$. Since $G^j$ occurs, we have $\mu_{i-1}$ is $m$-admissible and hence satisfies the assumption of $\eta_0$ in Proposition \ref{4p0.7}. Let $Y_{0}^i=\mu_{i-1}$ and $Y_{n}^i=\eta_{n+\tau_{i-1}}^*$ for $n\geq 1$ so that $Y^i$ is a usual SIR epidemic starting from $(\mu_{i-1}, \rho_0^{i-1,*})$. Set $\tau_i=\tau_{i-1}+\tau(Y^i, x(i))$. By Proposition \ref{4p0.7} with a spatial translation, the good event $G^i$ occurs with probability $\geq 1-7\varepsilon_0$. In this case, we change the status of site $y=x(i)$ to occupied. Again since $F^3(Y^i; K,x(i))$ holds, as in \eqref{4ea3.6} we may choose some $\mathcal G_{\tau_i}$-measurable set $\widetilde{\eta}^{*,K}_{\tau_i} \subseteq \hat{\eta}^{*,K}_{\tau_i}$ such that \begin{align}\label{4ea3.42} |\widetilde{\eta}^{*,K}_{\tau_i}\cap \widetilde{Q}(z)|=|Y_{0}^i|=|\mu_{i-1}|=|\eta_0| \text{ for all } z\in \mathcal A(x(i)). \end{align} Moreover, $F^4$ gives that for each $z\in \mathcal A(x(i))$, we have $\widetilde{\eta}^{*,K}_{\tau_i}|_{\widetilde{Q}(z)}$ is $m$-admissible. Further we define \begin{align*} w_i= \begin{dcases} \nu_{i-1}\bigcup \bigcup_{z\in \widetilde{\mathcal A}(y)} (\widetilde{\eta}^{*,K}_{\tau_i} \cap \widetilde{Q}(z)), & \text{ if } G^i \text{ occurs,}\\ \nu_{i-1},& \text{ otherwise, } \end{dcases} \end{align*} where \[ \widetilde{\mathcal A}(y)=\{ z\in \mathcal A(y): z\notin \mathcal A(u) \text{ for } u \text{ which is occupied and} \prec y \}.
\] One can check that $\widetilde{\mathcal A}(y)$ will contain at least one member of $\mathcal A(y)$. The definition of $\widetilde{\mathcal A}(y)$ is to avoid duplication of particles on $\widetilde{Q}(z)$ for $z\in \mathcal A(y)$, as $\{\nu_{i-1}\}$ will carry and freeze the infected sites in each cube $\widetilde{Q}(z)$ until we reach it. \noindent {\bf Case II.} Suppose site $y$ is not an immediate offspring of any occupied site. Then we set $\tau_i=\tau_{i-1}$, $(\mu_{i-1}, \nu_{i-1})=(\emptyset,w_{i-1})$ and $w_i=w_{i-1}$. In this case, we simply skip the cube $\widetilde{Q}(y)$ and move to the next site in our total ordering of $\Gamma$. In either case, we will move to site $x(i+1)$ at time $\tau_i$. The definitions of $\{w_i\}$, $\{\nu_i\}$ and $\{\mu_i\}$ ensure that if $y=x(k)$ for some $k\geq 2$ is an immediate offspring of some occupied site, then the infected set $\mu_{k-1}$ contained in the cube $\widetilde{Q}(y)$ will satisfy the assumption of $\eta_0$ in Proposition \ref{4p0.7}. We then restart the SIR epidemic from $\mu_{k-1}$, so that the good event $G^k$ occurs with high probability and hence $y=x(k)$ is occupied with high probability as well.\\ Since $\mu_i \cup \nu_i=w_i \subseteq (\eta^*_{\tau_i} \cup \nu_{i-1})$ by construction, the $\mu_i$, $\nu_i$ and $\tau_i$ defined in this way satisfy the conditions of Proposition \ref{4p2.3}. Therefore the processes $\eta$ and $\eta^*$ can be coupled such that \[ \cup_{k=0}^n \eta_k^* \subseteq \cup_{k=0}^n \eta_k \text{ for any } n\geq 0. \] In particular, since $\rho_0^*=\rho_0=\emptyset$, if we let $\rho_\infty^*=\cup_{k=0}^\infty \eta_k^*$, we have $\rho^{*}_\infty \subseteq \rho_\infty$. Again we abuse the notation $\rho_\infty^{*}$ for the measure $\rho_\infty^{*}(x)=1(x\in \rho_\infty^{*}), \forall x\in \mathbb{Z}_R^d$.
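The occupied-site mechanism above is designed to dominate an oriented site percolation. As a purely illustrative caricature — ignoring the $3$-dependence and treating each site as independently open with probability $1-14\varepsilon_0$ (an assumption; the actual comparison is carried out in the proof) — one can simulate the reachable set on $\mathbb{Z}_+^2$ and observe survival for small $\varepsilon_0$:

```python
import random

def oriented_percolation_survives(p, levels, width, seed):
    """Independent oriented site percolation on Z_+^2: a site (a, n)
    is open with probability p; it is reachable if it is open and one
    of (a-1, n-1), (a+1, n-1) is reachable, starting from the origin
    at level 0.  This is only an independent caricature of the
    3-dependent field xi built from the good events G^i, whose
    density is at least 1 - 14*eps0.  Returns True if some site at
    the top level is reachable."""
    rng = random.Random(seed)
    reachable = {0}
    for _ in range(levels):
        reachable = {a for a in range(-width, width + 1)
                     if ({a - 1, a + 1} & reachable) and rng.random() < p}
        if not reachable:
            return False
    return True
```

At high density ($p$ close to $1$, i.e. small $\varepsilon_0$) the cluster typically survives many levels, while at low density it dies out almost immediately — the dichotomy the proof exploits via \eqref{4e10.34}(ii).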
If we let $\Omega$ be the set of all occupied sites, then the construction above implies for any $x=x(i) \in \Omega$, there is some occupied $x(j)$ with $j<i$ such that $x(i)\in \mathcal A(x(j))$ and the good event $G^j$ occurs. Therefore $F^3(Y^{j}; K, x(j))$ guarantees that the infection from $\widetilde{Q}(x(j))$ will spread enough mass to its adjacent cube $\widetilde{Q}(x(i))$ so that $\hat{\eta}_{\tau_j}^{*,K}(\widetilde{Q}(x(i)))\geq |\eta_0|$. It follows that \[ \rho_\infty(Q_{R_\theta}(x R_\theta))\geq \rho_\infty^{*}(Q_{R_\theta}(x R_\theta))=\rho_\infty^{*}(\widetilde{Q}(x(i)))\geq \eta_{\tau_j}^{*}(\widetilde{Q}(x(i)))\geq |\eta_0|>0, \] and hence $\Omega$ satisfies condition (i) in \eqref{4e10.34}.\\ To show that $\Omega$ is infinite with positive probability, we define a $3$-dependent oriented site percolation on $\Gamma$ with density at least $(1-14\varepsilon_0)$ following \cite{LPZ14}. Recall we have picked $\varepsilon_0 \in (0,1)$ small so that such an oriented site percolation has positive probability of percolation from the origin. For each $x\in \Gamma$, if $x$ is occupied, then $\xi(x)=1$ if both $y\in \mathcal A(x)$ are occupied, and set $\xi(x)=0$ otherwise; if $x$ is vacant, then we let $\xi(x)$ be Bernoulli $(1-14\varepsilon_0)$ independent of everything else. We know that the origin and both $y\in \mathcal A(0)$ are occupied with positive probability and so $\xi(0)=1$ with positive probability. Assuming $\xi(0)=1$, we have both $y\in \mathcal A(0)$ are occupied. By induction one may conclude that $\Omega$ contains the collection of sites reachable from the origin. In other words, if percolation to infinity occurs, we have $\Omega$ is infinite. It remains to show that such defined site percolation is a $3$-dependent site percolation with density at least $1-14\varepsilon_0$, i.e. 
for any $n\geq 1$ and any $1\leq i_1<\cdots<i_n$ such that $\|x(i_j)-x(i_k)\|_1\geq 3$ for any $j\neq k$, \begin{align} \mathbb{P}(\xi(x(i_j))=0, \forall 1\leq j\leq n ) \leq (14\varepsilon_0)^{n}. \end{align} Recall that we have let $\xi(x)$ be Bernoulli $(1-14\varepsilon_0)$ independent of everything else when $x$ is vacant. By using the law of total probability and conditioning on whether $x(i_j)$ is occupied or vacant, it suffices to show that \begin{align}\label{4ea8.1} \mathbb{P}\Big(\xi(x(i_j))=0, \forall 1\leq j\leq n |\text{all $x(i_j)'s$ are occupied}\Big) \leq (14\varepsilon_0)^{n}. \end{align} We prove the above by induction. When $n=1$, if $x:=x(i_1)$ is occupied, we have each $y\in \mathcal A(x)$ is occupied with probability larger than $1-7\varepsilon_0$, and so $\xi(x)=1$ occurs with probability larger than $1-14\varepsilon_0$ by letting both $y\in \mathcal A(x)$ be occupied. Hence \eqref{4ea8.1} holds for $n=1$. Turning to the induction step, for each $m\geq 0$, we let $\mathcal{H}_m=\mathcal G_{\tau_m}$ so that the good event $G^i \in \mathcal{H}_i$ for all $i\geq 1$. Hence the random variable $\xi(x(i))$ is measurable with respect to $\mathcal{H}_\ell$ where $\ell$ is the index of the second $y\in \mathcal A(x(i))$. Let $\ell_j$ be the index of the second $y\in \mathcal A(x(i_j))$. Since $\|x(i_j)-x(i_k)\|_1\geq 3$ for any $j\neq k$, we conclude \[ \ell_j<\ell_n-2, \quad \forall 1\leq j\leq n-1. \] Hence by conditioning on $\mathcal{H}_{\ell_n-2}$, we reduce \eqref{4ea8.1} to the $n=1$ case and so the conclusion follows by the induction hypothesis. \end{proof} \section{Preliminaries for branching random walk}\label{4s3} \subsection{Support propagation of branching random walk} We first give the proof of Proposition \ref{4p1}. Let ${U}$ be a super-Brownian motion with drift $1$, that is, the solution to the martingale problem $(MP)_1$ in \eqref{4e10.25}. Similarly we let $X$ be a super-Brownian motion with drift $\theta$.
By using the scaling of SBM from Lemma 2.27 of \cite{LPZ14}, we have \begin{align}\label{4e2.11} \int \psi(x) {U}_t(dx)\overset{\text{law}}{=} \theta\int \psi(\sqrt{\theta} x) {X}_{t/\theta}(dx), \quad \forall t\geq 0. \end{align} In particular, if we use \eqref{4e2.11} to define $X$ and $U$ on a common probability space, then it follows that ${U}_0(1)=\theta {X}_0(1)$ and for any $t\geq 0$, $\text{Supp}({U}_t)=\sqrt{\theta}\ \text{Supp}({X}_{t/\theta})$ where $kA=\{kx: x\in A\}$ for $k\in \mathbb{R}$ and $A\subseteq \mathbb{R}^d$. The lemma below is an easy consequence of Lemma 3.12 in \cite{LPZ14}. \begin{lemma}\label{4l3.21} For any $\varepsilon_0\in (0,1)$ and $T\geq 100$, there exists some constant $M_{\ref{4l3.21}}=M_{\ref{4l3.21}}(\varepsilon_0,T)\geq 100$ such that for any $\theta\geq 100$, any $\lambda\geq e$ and any $X_0\in M_F(\mathbb{R}^d)$ satisfying $|X_0|=\lambda/\theta$ and $\text{Supp}({X}_0)\subseteq Q_{\sqrt{3/\theta}}(0)$, if $X$ is a super-Brownian motion with drift $\theta$ starting from $X_0$, then \[ \mathbb{P}^{X_0}\Big(\text{Supp}\Big(\int_0^{2T/\theta} X_s ds\Big)\subseteq Q_{M_{\ref{4l3.21}}\sqrt{(\log \lambda)/\theta}}(0)\Big)\geq 1-\frac{\varepsilon_0}{8}. \] \end{lemma} \begin{proof} Fix $\varepsilon_0\in (0,1)$ and $T\geq 100$. Let $\theta\geq 100$, $\lambda\geq e$ and choose $X_0\in M_F(\mathbb{R}^d)$ such that $|X_0|=\lambda/\theta$ and $\text{Supp}({X}_0)\subseteq Q_{\sqrt{3/\theta}}(0)$. If $X$ is a super-Brownian motion with drift $\theta$ starting from $X_0$, then we may use \eqref{4e2.11} to define a super-Brownian motion $U$ with drift $1$ starting from $U_0$ where $U_0$ satisfies \begin{align}\label{4ea4.21} |{U}_0|=\theta |{X}_0|=\lambda\text{ and } \text{Supp}({U}_0)=\sqrt{\theta}\ \text{Supp}({X}_{0})\subseteq Q_{\sqrt{3}}(0). 
\end{align} Now apply Lemma 3.12 of \cite{LPZ14} with the above $U_0$ to see that there is some $M=M(T,\varepsilon_0)\geq 100$ so that \begin{align}\label{4ea4.2} \mathbb{P}^{U_0}\Big(\text{Supp}\Big(\int_0^{2T} U_s ds\Big)\subseteq Q_{M\sqrt{\log \lambda}}(0)\Big)\geq 1-\frac{\varepsilon_0}{8}. \end{align} The proof of Lemma 3.12 in \cite{LPZ14} goes back to Theorem A of \cite{Pin95}, which allows us to accommodate a slightly different assumption on $\text{Supp}({U}_0)$ as in \eqref{4ea4.21}. Use \eqref{4e2.11} and \eqref{4ea4.2} to conclude \begin{align}\label{4e6.23} &\mathbb{P}^{X_0}\Big(\text{Supp}\Big( \int_0^{2T/\theta} X_s ds\Big)\subseteq Q_{M\sqrt{(\log \lambda)/\theta} }(0)\Big)\nonumber\\ =&\mathbb{P}^{U_0}\Big(\text{Supp}\Big(\int_0^{2T} U_s ds\Big)\subseteq Q_{M\sqrt{\log \lambda}}(0)\Big)\geq 1-\frac{\varepsilon_0}{8}, \end{align} as required. \end{proof} Now we are ready to prove Proposition \ref{4p1}. \begin{proof}[Proof of Proposition \ref{4p1}] Fix $\varepsilon_0\in (0,1)$ and $T\geq 100$. Let $\theta_{\ref{4p1}}= 100$. For any $\theta\geq \theta_{\ref{4p1}}$ and $R\geq 4\theta$, let $Z_0$ be as in \eqref{4e10.23}. Let $e_1=(1,0)$ in $d=2$ and $e_1=(1,0,0)$ in $d=3$. Set $\widetilde{R_\theta}=[R_\theta \cdot R]/R$ and define $\widetilde{e}_1=\widetilde{R_\theta} e_1$ so that the vertex $\widetilde{e}_1$ has the largest first coordinate in $Q_{R_\theta}(0)\cap \mathbb{Z}_R^d$. Let $Z$ be a branching random walk starting from $Z_0$. Define \begin{align}\label{4ea0.1a} \mathcal{R}(Z, T_\theta^R)=\inf \Big\{K\in \mathbb{R}: \text{Supp}(\sum_{n=0}^{T_\theta^R} Z_n) \subseteq H_K\Big\}, \end{align} where $H_K=\{x\in \mathbb{R}^d: x_1\leq K\}$. In this way, $\mathcal{R}(Z, T_\theta^R)$ characterizes the rightmost site that has been reached by $Z$ up to time $ T_\theta^R$.
Next we couple $Z$ with another branching random walk $\widetilde{Z}$ starting from $\widetilde{Z}_0=|Z_0| \cdot \delta_{\widetilde{e}_1}$ so that \begin{align}\label{4ea0.1} \mathcal{R}(Z, T_\theta^R)\leq \mathcal{R}(\widetilde{Z}, T_\theta^R), \end{align} where $\mathcal{R}(\widetilde{Z}, T_\theta^R)$ is defined in a similar way to $\mathcal{R}({Z}, T_\theta^R)$ as in \eqref{4ea0.1a} by replacing $Z$ with $\widetilde{Z}$. This coupling could be done by simply translating all the family trees starting from the ancestors in $Z_0$ to $\widetilde{e}_1$. Since $\widetilde{e}_1$ has the largest first coordinate among all vertices located inside $\text{Supp}(Z_0)\subseteq Q_{R_\theta}(0)\cap \mathbb{Z}_R^d$, we have \eqref{4ea0.1} follows immediately. Let $M_{\ref{4p1}}=2M_{\ref{4l3.21}}(\varepsilon_0,T)$. We claim that it suffices to show the following holds for all $R$ large enough: \begin{align}\label{4ea0.2} \mathbb{P}^{\widetilde{Z}_0}\Big(\text{Supp}(\sum_{n=0}^{T_\theta^R} \widetilde{Z}_n) \subseteq Q_{M_{\ref{4p1}} \sqrt{\log f_d(\theta)}R_\theta} (0) \Big)\geq 1-\frac{\varepsilon_0}{6}. \end{align} To see this, by assuming \eqref{4ea0.2} we have $\mathcal{R}(\widetilde{Z}, T_\theta^R)\leq M_{\ref{4p1}}\sqrt{\log f_d(\theta)}R_\theta$ holds with probability $\geq 1-\varepsilon_0/6$. Apply \eqref{4ea0.1} to get \begin{align} \mathbb{P}^{{Z}_0}\Big(\mathcal{R}({Z}, T_\theta^R)\leq M_{\ref{4p1}}\sqrt{\log f_d(\theta)}R_\theta\Big)\geq 1-\frac{\varepsilon_0}{6}. \end{align} By symmetry, we conclude \begin{align} \mathbb{P}^{{Z}_0}\Big(\text{Supp}(\sum_{n=0}^{T_\theta^R} {Z}_n) \subseteq Q_{M_{\ref{4p1}} \sqrt{\log f_d(\theta)}R_\theta} (0) \Big)\geq 1-2d\frac{\varepsilon_0}{6}\geq 1-\varepsilon_0, \end{align} as required. It remains to prove \eqref{4ea0.2}. Recall $T_\theta^R=[TR^{d-1}/\theta]$ and $R_\theta= \sqrt{ R^{d-1}/\theta}$. 
Consider $\widetilde{W}_t^R$ as in \eqref{4e5.39} given by \begin{align}\label{4ea4.22} \widetilde{W}_t^R=\frac{1}{R^{{d-1}}}\sum_{x\in \mathbb{Z}^d_R} \delta_{{x}/{\sqrt{R^{d-1}/3}}} \widetilde{Z}_{[tR^{d-1}]}(x), \quad \forall t\geq 0. \end{align} It suffices to show that for any $R>0$ large, \begin{align} &\mathbb{P}^{\widetilde{W}_0^R}\Big(\text{Supp}\Big(\int_0^{2T/\theta} \widetilde{W}_s^R ds\Big)\subseteq Q_{\sqrt{3}M_{\ref{4p1}}\sqrt{(\log f_d(\theta))/\theta}}(0)\Big) \geq 1-\frac{\varepsilon_0}{6}. \end{align} Assume to the contrary that the above fails for some $\{\widetilde{W}_t^{R_N}, t\geq 0\}$ with $R_N\to \infty$ such that \begin{align}\label{4ea0.3} &\mathbb{P}^{\widetilde{W}_0^{R_N}}\Big(\text{Supp}\Big(\int_0^{2T/\theta} \widetilde{W}_s^{R_N} ds\Big)\subseteq Q_{\sqrt{3}M_{\ref{4p1}}\sqrt{(\log f_d(\theta))/\theta}}(0)\Big) < 1-\frac{\varepsilon_0}{6}, \ \forall R_N. \end{align} Recall $\widetilde{Z}_0=|Z_0| \cdot \delta_{\widetilde{e}_1}$. Note by the definition of $\widetilde{e}_1$ and \eqref{4e10.23}, we have \begin{align} \lim_{R\to \infty} \frac{\widetilde{e}_1}{\sqrt{R^{d-1}/3}}=\sqrt{\frac{3}{\theta}} e_1\quad \text{ and } \lim_{R\to \infty} \frac{|Z_0|}{R^{{d-1}}}=f_d(\theta)/\theta. \end{align} It follows that \begin{align} \widetilde{W}_0^R=&\frac{1}{R^{{d-1}}}\sum_{x\in \mathbb{Z}^d_R} \widetilde{Z}_{0}(x) \delta_{{x}/{\sqrt{R^{d-1}/3}}}\nonumber\\ =&\frac{|Z_0|}{R^{{d-1}}}\delta_{{\widetilde{e}_1}/{\sqrt{R^{d-1}/3}}} \to X_0=\frac{f_d(\theta)}{\theta} \delta_{\sqrt{\frac{3}{\theta}} e_1}\in M_F(\mathbb{R}^d). \end{align} Therefore by \eqref{4e10.22} we have as $R\to \infty$, \begin{align}\label{4e10.26} (\widetilde{W}_t^R, t\geq 0) \Rightarrow (X_t, t\geq 0) \text{ on } D([0,\infty), M_F(\mathbb{R}^d)), \end{align} where $X$ is a super-Brownian motion with drift $\theta$ starting from $X_0$.
Apply Lemma 4.4 of \cite{FP16} with a slight modification to see that for any $t, M>0$, \begin{align} &\limsup_{R\to \infty} \mathbb{P}^{\widetilde{W}_0^R}\Big(\text{Supp}\Big(\int_0^t \widetilde{W}_s^R ds\Big)\cap ((-M,M)^d)^c \neq \emptyset\Big)\nonumber\\ &\quad \leq \mathbb{P}^{X_0}\Big(\text{Supp}\Big(\int_0^t X_s ds\Big)\cap ((-M,M)^d)^c \neq \emptyset\Big), \end{align} thus giving \begin{align}\label{4e5.38} &\liminf_{R\to \infty} \mathbb{P}^{\widetilde{W}_0^R}\Big(\text{Supp}\Big(\int_0^t \widetilde{W}_s^R ds\Big)\subseteq (-M,M)^d\Big) \nonumber\\ &\quad \quad \quad \geq \mathbb{P}^{X_0}\Big(\text{Supp}\Big(\int_0^t X_s ds\Big)\subseteq (-M,M)^d\Big). \end{align} Notice that $X_0=\frac{f_d(\theta)}{\theta} \delta_{\sqrt{\frac{3}{\theta}} e_1}$ will satisfy the assumption of Lemma \ref{4l3.21} since $\lambda=f_d(\theta)\geq e$ by $\theta \geq 100$, which allows us to get \begin{align}\label{4ea4.31} \mathbb{P}^{X_0}\Big(\text{Supp}\Big(\int_0^{2T/\theta} X_s ds\Big)\subseteq Q_{M_{\ref{4l3.21}}\sqrt{(\log f_d(\theta))/\theta}}(0)\Big)\geq 1-\frac{\varepsilon_0}{8}. \end{align} Apply \eqref{4e5.38} with $t=2T/\theta$, $M=2M_{\ref{4l3.21}}\sqrt{\frac{\log f_d(\theta)}{\theta}}$ and $\{R_N\}$ to see that \begin{align}\label{4ea4.3} &\liminf_{{R_N}\to \infty} \mathbb{P}^{\widetilde{W}_0^{R_N}}\Big(\text{Supp}\Big(\int_0^{2T/\theta} \widetilde{W}_s^{R_N} ds\Big)\subseteq \Big(-2M_{\ref{4l3.21}}\sqrt{\frac{\log f_d(\theta)}{\theta}}, 2M_{\ref{4l3.21}}\sqrt{\frac{\log f_d(\theta)}{\theta}}\Big)^d \Big)\nonumber\\ & \geq \mathbb{P}^{X_0}\Big(\text{Supp}\Big(\int_0^{2T/\theta} X_s ds\Big)\subseteq \Big(-2M_{\ref{4l3.21}}\sqrt{\frac{\log f_d(\theta)}{\theta}}, 2M_{\ref{4l3.21}}\sqrt{\frac{\log f_d(\theta)}{\theta}} \Big)^d \Big)\nonumber\\ &\geq 1-\frac{\varepsilon_0}{8}, \end{align} where the last inequality is by \eqref{4ea4.31}. This contradicts \eqref{4ea0.3} as we set $M_{\ref{4p1}}=2M_{\ref{4l3.21}}$. So the proof is complete.
\end{proof} \subsection{Moments and exponential moments of branching random walk} Let $p_1$ be a probability distribution that is uniform on $\mathcal{N}(0)$: \begin{align}\label{4e0.01} p_1(x)=\frac{1}{V(R)}1(x\in \mathcal{N}(0)). \end{align} Let $Y_1, Y_2, \cdots$ be i.i.d. random variables with distribution $p_1$ and write $S_n=Y_1+\cdots+Y_n$ for the random walk on $\mathbb{Z}^d_R$ starting from $0$ with step distribution $p_1$. Define \begin{align}\label{4eb2.2} p_n(x)=\mathbb{P}(S_n=x). \end{align} Set $p_0(x)=\delta_0(x)$ by convention where $\delta_0(x)=1$ if $x=0$ and $\delta_0(x)=0$ if $x\neq 0$. It is easy to check by symmetry that $p_n(x)=p_n(-x)$ for any $x\in \mathbb{Z}_R^d$ and $n\geq 0$. We collect the properties of $p_n$ below. Their proofs, which are rather technical, can be found in Appendix \ref{a3}. \begin{proposition}\label{4p1.1} Let $d\geq 1$. There exist constants $c_{\ref{4p1.1}}=c_{\ref{4p1.1}}(d)>0$, $C_{\ref{4p1.1}}=C_{\ref{4p1.1}}(d)>0$ and $K_{\ref{4p1.1}}=K_{\ref{4p1.1}}(d)>0$ such that the following holds for any $n\geq 1$ and $R\geq K_{\ref{4p1.1}}$.\\ (i) For any $x\in \mathbb{Z}^d_R$, we have \begin{align}\label{4eb6.1} p_n(x)\leq \frac{c_{\ref{4p1.1}}}{n^{d/2} R^d} e^{-\frac{|x|^2}{8dn}}. \end{align} (ii) For any $x,y\in \mathbb{Z}^d_R \text{ with } |x-y|\geq 1$ and $\gamma\in (0,1]$, we have \begin{align}\label{4eb6.2} |p_n(x)-{p}_n(y)|\leq \frac{C_{\ref{4p1.1}}}{n^{d/2} R^d} \Big(\frac{|x-y|}{\sqrt{n}}\Big)^\gamma (e^{-\frac{|x|^2}{16dn}}+e^{-\frac{|y|^2}{16dn}}). \end{align} \end{proposition} Throughout the rest of this paper, we will only consider $R\geq K_{\ref{4p1.1}}$ so that Proposition \ref{4p1.1} holds. Since we assume $d=2$ or $d=3$, for simplicity we will replace $8d$ with $32$ in \eqref{4eb6.1} and replace $16d$ with $64$ in \eqref{4eb6.2} whenever we use Proposition \ref{4p1.1} below. In fact, these constants can be chosen to be any fixed large number.
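The kernel $p_n$ in \eqref{4eb2.2} is just the $n$-fold convolution of $p_1$, and its elementary properties (total mass one, the symmetry $p_n(x)=p_n(-x)$) can be checked numerically. The toy Python sketch below works in $d=1$ and takes $\mathcal N(0)$ to be the nonzero points of $\tfrac1R\mathbb{Z}$ within distance $1$ of the origin — an assumption made purely for illustration:

```python
def step_distribution(R):
    """Toy one-dimensional p_1: uniform on the nonzero points of
    (1/R)Z within distance 1 of the origin (our stand-in for N(0))."""
    pts = [k / R for k in range(-R, R + 1) if k != 0]
    return {x: 1.0 / len(pts) for x in pts}

def convolve(p, q):
    """Convolution of two finitely supported distributions."""
    out = {}
    for x, px in p.items():
        for y, qy in q.items():
            z = round(x + y, 9)          # guard against float fuzz
            out[z] = out.get(z, 0.0) + px * qy
    return out

def walk_distribution(R, n):
    """p_n of (4eb2.2): the law of S_n = Y_1 + ... + Y_n, computed as
    the n-fold convolution of p_1, with p_0 = delta_0."""
    p = {0.0: 1.0}
    for _ in range(n):
        p = convolve(p, step_distribution(R))
    return p
```

Since $p_1$ is symmetric, each convolution preserves the symmetry, which is the observation used in the text.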
We state the following results on the moments and exponential moments of branching random walk whose proofs are deferred to Appendix \ref{4ap1.1}; the arguments follow essentially from Perkins \cite{Per88}. Write $\mathbb{P}^x$ for the law of BRW starting from a single ancestor at $x$ for $x\in \mathbb{Z}_R^d$. \begin{proposition}\label{4p1.2} For any $x\in \mathbb{Z}^d_R$, $n\geq 1$ and any Borel function $\phi\geq 0$, we have\\ \noindent (i)\begin{align*} \mathbb{E}^{x}({Z}_{n}(\phi))=(1+\frac{\theta}{R^{d-1}})^n \mathbb{E}(\phi(S_n+x))=(1+\frac{\theta}{R^{d-1}})^n \sum_{y\in \mathbb{Z}_R^d} \phi(y) p_n(x-y). \end{align*} (ii) For any $p\geq 2$, \begin{align*} &\mathbb{E}^{x}({Z}_{n}(\phi)^p)\leq (p-1)! e^{\frac{n\theta(p-1)}{R^{d-1}}} G(\phi,n)^{p-1}\mathbb{E}^{x}({Z}_{n}(\phi)), \end{align*} where \begin{align}\label{4e5.90} G(\phi,n)= 3\|\phi\|_\infty +\sum_{k=1}^{n} \sup_{y\in \mathbb{Z}_R^d}\sum_{z\in \mathbb{Z}^d_R} \phi(z) p_k(y-z). \end{align} \end{proposition} \begin{corollary}\label{4c1.2} For any ${Z}_0\in M_F(\mathbb{Z}^d_R)$, $\phi\geq 0$, $\lambda>0$, $n\geq 1$, if $\lambda e^{\frac{n\theta}{R^{d-1}}} G(\phi,n)<1$ is satisfied, we have \begin{align*} &\mathbb{E}^{{Z}_0}(e^{\lambda {Z}_{n}(\phi)})\leq \exp\Big(\lambda \mathbb{E}^{{Z}_0}({Z}_{n}(\phi)) (1-\lambda e^{\frac{n\theta}{R^{d-1}}} G(\phi,n))^{-1}\Big). \end{align*} \end{corollary} The following exponential moment for the occupation measure uses similar arguments; the proof is deferred to Appendix \ref{4ap1.2}. \begin{proposition}\label{4p1.4} For any ${Z}_0\in M_F(\mathbb{Z}^d_R)$, $\phi\geq 0$, $\lambda>0$, $n\geq 1$, if $2\lambda n e^{\frac{n\theta}{R^{d-1}}} G(\phi,n)<1$ is satisfied, we have \begin{align}\label{4e100} &\mathbb{E}^{{Z}_0}\Big(\exp\Big({\lambda\sum_{k=0}^n {Z}_{k}(\phi)}\Big)\Big)\leq \exp\Big(\lambda |{Z}_0| e^{\frac{n\theta}{R^{d-1}}} G(\phi,n) (1-2\lambda n e^{\frac{n\theta}{R^{d-1}}} G(\phi,n))^{-1}\Big). 
\end{align} \end{proposition} \subsection{Martingale problem of branching random walk} Recall the construction and the labelling system of branching random walk $(Z_n)$ in Section \ref{4s1.3}. Observe that for any $n\geq 0$ and $\phi: \mathbb{Z}_R^d\to \mathbb{R}$, we have \begin{align*} Z_{n+1}(\phi)=&\sum_{|\alpha|=n+1} \phi({Y^\alpha})=\sum_{|\alpha|=n} \sum_{i=1}^{V(R)} \phi({Y^\alpha+e_i}) B^{\alpha \vee e_i}. \end{align*} In the last expression above, we use $Y^\alpha$ for $|\alpha|=n$ to represent the location of the particle $\alpha$ alive in generation $n$ and so $Y^\alpha+e_i$ are the possible locations of its offspring. We use the convention that if $Y^\alpha=\Delta$, the cemetery state, then $\phi(\Delta+x)=0$ for any $\phi$ and $x$. Meanwhile, the Bernoulli random variables $\{B^{\alpha \vee e_i}\}$ with parameter $p(R)$ indicate whether the birth in that direction occurs. Use the above with some arithmetic to further get \begin{align}\label{4e1.0} Z_{n+1}(\phi)-Z_n(\phi)=&\sum_{|\alpha|=n} \sum_{i=1}^{V(R)} \Big[\phi({Y^\alpha+e_i}) B^{\alpha \vee e_i}-\phi({Y^\alpha})\frac{1}{V(R)}\Big]\nonumber\\ =&\sum_{|\alpha|=n} \sum_{i=1}^{V(R)} \Big[\phi({Y^\alpha+e_i}) -\phi({Y^\alpha})\Big]\frac{1}{V(R)} (1+\frac{\theta}{R^{d-1}})\nonumber\\ &\quad +\sum_{|\alpha|=n} \sum_{i=1}^{V(R)} \phi({Y^\alpha+e_i}) \Big(B^{\alpha \vee e_i}-\frac{1+\frac{\theta}{R^{d-1}}}{V(R)}\Big)\nonumber\\ &\quad +\sum_{|\alpha|=n} \phi({Y^\alpha})\frac{\theta}{R^{d-1}}. \end{align} For any $N\geq 1$, we sum \eqref{4e1.0} over $0\leq n\leq N-1$ to arrive at \begin{align}\label{4e1.1} Z_{N}(\phi)=Z_0(\phi)&+(1+\frac{\theta}{R^{d-1}}) \sum_{n=0}^{N-1} \sum_{|\alpha|=n} \frac{1}{V(R)}\sum_{i=1}^{V(R)} \Big[\phi({Y^\alpha+e_i}) -\phi({Y^\alpha})\Big]\nonumber\\ & +M_N(\phi)+\frac{\theta}{R^{d-1}}\sum_{n=0}^{N-1} Z_n(\phi),
\end{align} where (recall $p(R)=(1+{\theta}/{R^{d-1}})/V(R)$) \begin{align}\label{4e1.22} M_N(\phi)=\sum_{n=0}^{N-1}\sum_{|\alpha|=n} \sum_{i=1}^{V(R)} \phi({Y^\alpha+e_i}) \Big(B^{\alpha \vee e_i}-p(R)\Big). \end{align} Recall $\mathcal G_N=\sigma(\{B^\alpha: |\alpha|\leq N\})$. One can check that \begin{align*} \mathbb{E}^{Z_0}(M_{N+1}(\phi)-M_{N}(\phi)|\mathcal G_N)=&\mathbb{E}^{Z_0}\Big(\sum_{|\alpha|=N} \sum_{i=1}^{V(R)} \phi({Y^\alpha+e_i}) (B^{\alpha \vee e_i}-p(R))\Big|\mathcal G_N\Big)\\ =&\sum_{|\alpha|=N} \sum_{i=1}^{V(R)} \phi({Y^\alpha+e_i}) \mathbb{E}^{Z_0}\Big(\big(B^{\alpha \vee e_i}-p(R)\big)\Big|\mathcal G_N\Big)=0, \end{align*} where the last equality is by the independence of $\mathcal G_N$ and $B^{\alpha \vee e_i}$ with $|\alpha|=N$. Hence $\{M_N(\phi), N\geq 0\}$ is a martingale with respect to $(\mathcal G_N)$, whose conditional quadratic variation is given by \begin{align}\label{4eb1.51} &\langle M(\phi)\rangle_N=\sum_{n=0}^{N-1} \mathbb{E}^{Z_0}\Big((M_{n+1}(\phi)-M_{n}(\phi))^2\Big|\mathcal G_n\Big)\nonumber\\ =&\sum_{n=0}^{N-1} \sum_{|\alpha|=n} \sum_{i=1}^{V(R)} \phi({Y^\alpha+e_i})^2 \mathbb{E}^{Z_0}\Big( (B^{\alpha \vee e_i}-p(R))^2\Big|\mathcal G_n\Big)\nonumber\\ =&\sum_{n=0}^{N-1} \sum_{|\alpha|=n} \sum_{i=1}^{V(R)} \phi({Y^\alpha+e_i})^2 p(R) (1-p(R)). \end{align} In the second equality, the cross terms vanish by the mutual independence of $\{B^{\alpha \vee e_i}\}$.
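The martingale decomposition above lends itself to a quick numerical sanity check. The following is a minimal one-dimensional toy sketch (not part of the proofs): a branching random walk on $\mathbb{Z}$ with the two neighbour directions $\pm 1$, where the parameter \texttt{b} stands in for $\theta/R^{d-1}$ and $\phi(y)=e^{-y^2}$ is an arbitrary bounded test function; the function name and all parameter values are illustrative choices. It accumulates the increments of $M_N(\phi)$ from \eqref{4e1.22} along each realization, and compares the empirical mean of $Z_N(\phi)$ with the exact first moment $(1+b)^N\,\mathbb{E}[\phi(S_N)]$ from Proposition \ref{4p1.2}(i).

```python
import math
import random

def simulate_brw(reps=40000, N=4, b=0.1, seed=0):
    """Toy 1-d branching random walk: each particle at y sends, independently
    for each neighbour direction e in {-1, +1}, an offspring to y + e with
    probability p = (1 + b) / V, where V = 2 and b plays the role of
    theta / R^{d-1}.  Returns the empirical mean of the martingale term
    M_N(phi), the empirical mean of Z_N(phi), and the exact first moment
    (1 + b)^N * E[phi(S_N)]."""
    rng = random.Random(seed)
    p = (1.0 + b) / 2.0                        # analogue of p(R), with V(R) = 2
    phi = lambda y: math.exp(-y * y)           # a bounded test function
    sum_M = sum_Z = 0.0
    for _ in range(reps):
        gen, M = [0], 0.0                      # single ancestor at the origin
        for _ in range(N):
            nxt = []
            for y in gen:
                for e in (-1, 1):
                    B = 1 if rng.random() < p else 0
                    M += phi(y + e) * (B - p)  # increment of M_N(phi)
                    if B:
                        nxt.append(y + e)
            gen = nxt
        sum_M += M
        sum_Z += sum(phi(y) for y in gen)
    # exact first moment: (1 + b)^N * E[phi(S_N)], S_N a simple random walk
    exact = (1.0 + b) ** N * sum(
        math.comb(N, k) * 2.0 ** (-N) * phi(2 * k - N) for k in range(N + 1)
    )
    return sum_M / reps, sum_Z / reps, exact
```

With the seed fixed, the empirical mean of $M_N(\phi)$ is close to $0$ and the empirical mean of $Z_N(\phi)$ is close to the exact first moment, consistent with the centering of $M_N(\phi)$ and with Proposition \ref{4p1.2}(i).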
Use $p(R)=(1+{\theta}/{R^{d-1}})/V(R)$ to get \begin{align}\label{4e1.30} \langle M(\phi)\rangle_N=&(1+\frac{\theta}{R^{d-1}}) (1-p(R))\sum_{n=0}^{N-1} \sum_{|\alpha|=n} \frac{1}{V(R)}\sum_{i=1}^{V(R)} \phi({Y^\alpha+e_i})^2\nonumber\\ \leq &2\sum_{n=0}^{N-1} \sum_{|\alpha|=n} \frac{1}{V(R)}\sum_{i=1}^{V(R)} \phi({Y^\alpha+e_i})^2\nonumber\\ = &2\sum_{n=0}^{N-1} \sum_{x\in \mathbb{Z}^d_R} Z_n(x) \cdot \frac{1}{V(R)}\sum_{i=1}^{V(R)} \phi({x+e_i})^2, \end{align} where we have used $\theta\leq R^{d-1}$ in the inequality and the last equality is by \eqref{4eb2.21}. The following proposition will play an important role in computing the exponential moments of $M_N(\phi)$. The proof follows essentially from Freedman \cite{Free75} and can be found in Appendix \ref{4ap1.3}. \begin{proposition}\label{4p5.1} Let $d=2$ or $d=3$. Let $N\geq 1$, $\theta \geq 100$, $R\geq 4\theta$ and ${Z}_0\in M_F(\mathbb{Z}^d_R)$. For any $\lambda>0$ and any Borel function $\phi$ so that $\lambda\| \phi\|_\infty\leq 1$, we have \begin{align*} \mathbb{E}^{Z_0}( \exp(\lambda |M_{N}(\phi)|))\leq 2\Big(\mathbb{E}^{Z_0}\Big( \exp\Big( 16\lambda^2 \langle M(\phi) \rangle_{N}\Big)\Big)\Big)^{1/2}. \end{align*} \end{proposition} \section{Potential kernel and Tanaka's formula}\label{4s4} For any function $f: \mathbb{Z}^d_R\to \mathbb{R}$ and $x\in \mathbb{Z}^d_R$, we define the generator of $p_1$ to be \begin{align}\label{4e1.7} \mathcal{L} f(x)=\mathbb{E}(f(x+S_1)-f(x))=\sum_{i=1}^{V(R)} (f(x+e_i)-f(x))\frac{1}{V(R)}. \end{align} By the Chapman--Kolmogorov equation, we have \begin{align}\label{4e2.4} p_{n+1}(x)=\sum_{y} p_n(y) p_1(y-x)=\frac{1}{V(R)}\sum_{i=1}^{V(R)} p_n(x+e_i), \end{align} thus giving \begin{align}\label{4e6.4} p_{n+1}(x)-p_n(x)=\frac{1}{V(R)}\sum_{i=1}^{V(R)} (p_n(x+e_i)-p_n(x))=\mathcal{L} p_n(x). \end{align} In $d=3$, for any $a\in \mathbb{Z}^3_R$, we let \begin{align}\label{4e6.3} \phi_a(x)=RV(R)\sum_{n=1}^\infty p_n(x-a), \quad \forall x\in \mathbb{Z}^3_R.
\end{align} Recall $g_{u,3}$ from \eqref{4e10.31}. We may use Proposition \ref{4p1.1}(i) to get for any $a,x \in \mathbb{Z}^3_R$, \begin{align}\label{4e7.10} \phi_a(x)\leq RV(R) \sum_{n=1}^\infty \frac{c_{\ref{4p1.1}}}{n^{3/2}R^3} e^{-\frac{|x-a|^2}{32n}}\leq CR \sum_{n=1}^\infty \frac{1}{n^{3/2}} e^{-\frac{|x-a|^2}{32n}}=Cg_{a,3}(x). \end{align} Note that \begin{align}\label{4eb1.32} \|g_{a,3}\|_\infty=R \sum_{n=1}^\infty \frac{1}{n^{3/2}}\leq CR<\infty. \end{align} Hence the sum in $\phi_a$ is absolutely convergent. Each $p_n$ is also absolutely summable. Apply Fubini's theorem to get \begin{align}\label{4e6.5} \mathcal{L}\phi_a(x)=&\sum_{i=1}^{V(R)} (\phi_a(x+e_i)-\phi_a(x))\frac{1}{V(R)}\nonumber\\ =&R\sum_{n=1}^\infty\sum_{i=1}^{V(R)} (p_n(x-a+e_i)-p_n(x-a))\nonumber\\ =&RV(R)\sum_{n=1}^\infty (p_{n+1}(x-a)-p_n(x-a))\nonumber\\ =&-RV(R)p_1(x-a)= -R\cdot 1(x\in \mathcal{N}(a)), \end{align} where the third equality follows from \eqref{4e2.4}. Replace $\phi$ with $\phi_a$ in \eqref{4e1.1} and use the above to see that for any $N\geq 1$, \begin{align*} Z_{N}(\phi_a)=&Z_0(\phi_a)- R(1+\frac{\theta}{R^{d-1}}) \sum_{n=0}^{N-1} \sum_{|\alpha|=n} 1(Y^\alpha\in \mathcal{N}(a)) +M_N(\phi_a)+\frac{\theta}{R^{d-1}}\sum_{n=0}^{N-1} Z_n(\phi_a). \end{align*} Rearrange terms to arrive at \begin{align}\label{4ea6.13} (1+\frac{\theta}{R^{d-1}})R\sum_{n=0}^{N-1} Z_n(\mathcal{N}(a))=&(1+\frac{\theta}{R^{d-1}})R\sum_{n=0}^{N-1} \sum_{|\alpha|=n} 1(Y^\alpha\in \mathcal{N}(a)) \nonumber\\ =&Z_{0}(\phi_a)-Z_N(\phi_a)+M_N(\phi_a)+\frac{\theta}{R^{d-1}}\sum_{n=0}^{N-1} Z_n(\phi_a). \end{align} We call \eqref{4ea6.13} the Tanaka formula for the local times of $(Z_n)$ in $d=3$. It is easy to derive the following bound from the above: \begin{align}\label{4e6.13} R\sum_{n=0}^{N-1} Z_n(\mathcal{N}(a))\leq &Z_{0}(\phi_a)+M_N(\phi_a)+\frac{\theta}{R^{2}}\sum_{n=0}^{N-1} Z_n(\phi_a).
\end{align} In $d=2$, for any $a\in \mathbb{Z}^2_R$ we set \begin{align}\label{4e6.6} g_a(x)=V(R)\sum_{n=1}^\infty e^{-n\theta/R} p_n(x-a),\quad \forall x\in \mathbb{Z}^2_R. \end{align} Recall $g_{u,2}$ from \eqref{4e10.31}. We use Proposition \ref{4p1.1}(i) to get \begin{align}\label{4e10.33} g_a(x)&\leq V(R)\sum_{n=1}^\infty e^{-n\theta/R}\frac{c_{\ref{4p1.1}}}{n R^2} e^{-|x-a|^2/(32n)}\nonumber\\ &\leq C\sum_{n=1}^\infty e^{-n\theta/R}\frac{1}{n} e^{-|x-a|^2/(32n)}=Cg_{a,2}(x). \end{align} Note that \begin{align}\label{4e6.20} \|g_{a,2}\|_\infty&=\sum_{n=1}^\infty (e^{-\theta/R})^n\frac{1}{n}= (-\log (1-e^{-\theta/R})) \leq \log \frac{2R}{\theta}<\infty, \end{align} where the second equality uses the Taylor series of $-\log(1-x)$ and the first inequality is by applying $1-e^{-x}\geq x/2$ for $0\leq x\leq 1/4$ and $R\geq 4\theta$. Hence we conclude from \eqref{4e10.33} and \eqref{4e6.20} that the sum in $g_a$ is absolutely convergent. Similar to the derivation of \eqref{4e6.5}, we do some arithmetic to get \begin{align}\label{4e6.7} \mathcal{L} g_a(x) =&\sum_{n=1}^\infty e^{-n\theta/R} \sum_{i=1}^{V(R)} (p_n(x-a+e_i)-p_n(x-a))\nonumber\\ =&V(R)\sum_{n=1}^\infty e^{-n\theta/R} (p_{n+1}(x-a)-p_n(x-a))\nonumber\\ =&e^{\theta/R} V(R) \sum_{n=1}^\infty e^{-(n+1)\theta/R} p_{n+1}(x-a)-V(R)\sum_{n=1}^\infty e^{-n\theta/R} p_n(x-a)\nonumber\\ =&(e^{\theta/R}-1)g_a(x)-V(R) p_1(x-a)=(e^{\theta/R}-1)g_a(x)-1_{\{x\in \mathcal{N}(a)\}}.
\end{align} Replace $\phi$ with $g_a$ in \eqref{4e1.1} and use the above to see that \begin{align*} Z_{N}(g_a)=&Z_0(g_a)+ (1+\frac{\theta}{R^{d-1}})\sum_{n=0}^{N-1} \sum_{|\alpha|=n} \Big[(e^{\theta/R}-1)g_a(Y^\alpha)-1(Y^\alpha\in \mathcal{N}(a))\Big] \\ &\quad +M_N(g_a)+\frac{\theta}{R^{d-1}}\sum_{n=0}^{N-1} Z_n(g_a)\\ =&Z_0(g_a)+ (e^{\theta/R}-1)(1+\frac{\theta}{R^{d-1}}) \sum_{n=0}^{N-1} Z_n(g_a)-(1+\frac{\theta}{R^{d-1}})\sum_{n=0}^{N-1} \sum_{|\alpha|=n} 1(Y^\alpha\in \mathcal{N}(a)) \\ &\quad +M_N(g_a)+\frac{\theta}{R^{d-1}}\sum_{n=0}^{N-1} Z_n(g_a). \end{align*} Note we are in $d=2$. Rearrange terms in the above to get \begin{align}\label{4ea6.14} &(1+\frac{\theta}{R}) \sum_{n=0}^{N-1} Z_n(\mathcal{N}(a))=(1+\frac{\theta}{R}) \sum_{n=0}^{N-1} \sum_{|\alpha|=n} 1(Y^\alpha\in \mathcal{N}(a))\nonumber\\ &=Z_{0}(g_a)-Z_N(g_a)+ M_N(g_a)+\Big((e^{\theta/R}-1)(1+\frac{\theta}{R}) +\frac{\theta}{R}\Big)\sum_{n=0}^{N-1} Z_n(g_a). \end{align} We call \eqref{4ea6.14} the Tanaka formula for the local times of $(Z_n)$ in $d=2$. By using $1+\frac{\theta}{R}\leq 2$ and $e^{\theta/R}-1\leq 2\theta/R$ when $\theta/R\leq 1/4$, we get \begin{align}\label{4e6.14} &\sum_{n=0}^{N-1} Z_n(\mathcal{N}(a)) \leq Z_{0}(g_a)+ M_N(g_a)+\frac{5\theta}{R} \sum_{n=0}^{N-1} Z_n(g_a). \end{align} Using the bounds in \eqref{4e6.13} and \eqref{4e6.14}, we will prove the key Proposition \ref{4p2} in the following two sections for $d=2$ and $d=3$ respectively. \section{Local time bounds in $d=2$}\label{4s5} In this section we give the proof of Proposition \ref{4p2} for $d=2$. Throughout this section we let $d=2$ unless otherwise indicated. Recall $Z_0\in M_F(\mathbb{Z}_R^2)$ satisfies \begin{align}\label{4e7.1} \begin{dcases} \text{(i) }\text{Supp}(Z_0)\subseteq Q_{R_\theta}(0); \\ \text{(ii) } Z_0(1)\leq 2 R f_2(\theta)/\theta=2R/\sqrt{\theta};\\ \text{(iii) } Z_{0}(g_{u,2})\leq m {R}/\theta^{1/4}, \quad \forall u\in \mathbb{R}^2.
\end{dcases} \end{align} The local time that we aim to bound in Proposition \ref{4p2} is the time-sum of the mass that the branching random walk assigns to the unit box centered at $x\in \mathbb{Z}_R^d$, and so it suffices to consider the local time at points of the integer lattice $\mathbb{Z}^d$. We claim that Proposition \ref{4p2} in $d=2$ is an easy consequence of the following result. \begin{proposition}\label{4t2.0} Let $d=2$. For any $\varepsilon_0\in (0,1)$, $T\geq 100$ and $m>0$, there exist constants $\theta_{\ref{4t2.0}}\geq 100, \chi_{\ref{4t2.0}}>0$ depending only on $\varepsilon_0, T,m$ such that for all $\theta \geq \theta_{\ref{4t2.0}}$, there is some $C_{\ref{4t2.0}}(\varepsilon_0, T,\theta,m)\geq 4\theta$ such that for any $R\geq C_{\ref{4t2.0}}$ and any $Z_0$ satisfying \eqref{4e7.1}, we have \[ \mathbb{P}^{Z_0}\Big(\sum_{n=0}^{T_\theta^R} Z_n(\mathcal{N}(a)) \leq \chi_{\ref{4t2.0}} {R}, \quad \forall a\in \mathbb{Z}^2 \cap Q_{3M_{\ref{4p1}} \sqrt{\log f_2(\theta)}R_\theta} (0) \Big)\geq 1-\varepsilon_0. \] \end{proposition} \begin{proof}[Proof of Proposition \ref{4p2} in $d=2$ assuming Proposition \ref{4t2.0}] Fix $\varepsilon_0\in (0,1)$, $T\geq 100$ and $m>0$. Let $\theta, R, Z_0$ be as in Proposition \ref{4t2.0}. Then with probability $\geq 1-\varepsilon_0$, we have \begin{align}\label{4e10.63} \sum_{n=0}^{T_\theta^R} Z_n(\mathcal{N}(a)) \leq \chi_{\ref{4t2.0}} {R}, \quad \forall a\in \mathbb{Z}^2 \cap Q_{3M_{\ref{4p1}} \sqrt{\log f_2(\theta)}R_\theta} (0). \end{align} For any $x\in \mathbb{Z}_R^2$, let $\mathcal U(x)=\{a\in \mathbb{Z}^2: \|a-x\|_\infty\leq 1\}$. One can easily check that \begin{align}\label{4ea8.4} \sum_{n=0}^{T_\theta^R} Z_n(\mathcal{N}(x))\leq \sum_{n=0}^{T_\theta^R} \sum_{a\in \mathcal U(x)} Z_n(\mathcal{N}(a))=\sum_{a\in \mathcal U(x)} \sum_{n=0}^{T_\theta^R} Z_n(\mathcal{N}(a)).
\end{align} For any $x\in \mathbb{Z}^2_R \cap Q_{2M_{\ref{4p1}} \sqrt{\log f_2(\theta)}R_\theta} (0)$, we have $a\in \mathcal U(x)\subseteq Q_{3M_{\ref{4p1}} \sqrt{\log f_2(\theta)}R_\theta} (0)$. Notice that there are at most $3^2$ elements in $\mathcal U(x)$ for each $x\in \mathbb{Z}^2_R$. Hence one may conclude by \eqref{4ea8.4} that on the event \eqref{4e10.63}, we have \begin{align} \sum_{n=0}^{T_\theta^R} Z_n(\mathcal{N}(x)) \leq 9\chi_{\ref{4t2.0}} {R}, \quad \forall x\in \mathbb{Z}^2_R \cap Q_{2M_{\ref{4p1}} \sqrt{\log f_2(\theta)}R_\theta} (0). \end{align} So the proof is complete by letting $\chi_{\ref{4p2}}=9\chi_{\ref{4t2.0}}$. \end{proof} It remains to prove Proposition \ref{4t2.0}. In view of \eqref{4e6.14}, it suffices to get bounds for $Z_0(g_a)$, $M_{{T_\theta^R}+1}(g_a)$ and $\sum_{n=0}^{{T_\theta^R}} Z_n(g_a)$ where $g_a(x)=V(R)\sum_{n=1}^\infty e^{-n\theta/R} p_n(x-a)$. Recall from \eqref{4e10.33} that $g_a(x)\leq Cg_{a,2}(x)$ for any $a,x\in \mathbb{Z}_R^2$. Hence \eqref{4e7.1} implies that \begin{align}\label{4ea10.33} Z_0(g_a)\leq CZ_0(g_{a,2})\leq CmR/\theta^{1/4}, \quad \forall a\in \mathbb{Z}^2. \end{align} Turning to $M_{{T_\theta^R}+1}(g_a)$ and $\sum_{n=0}^{{T_\theta^R}} Z_n(g_a)$, we will calculate their exponential moments and use the following version of Garsia's Lemma from Lemma 3.7 of \cite{LPZ14} to derive the corresponding probability bounds. \begin{lemma}[\cite{LPZ14}]\label{4l2.2} Let $d\geq 1$. Suppose $\{\Upsilon(x): x\in \mathbb{R}^d\}$ is an almost surely continuous random field such that for some $\lambda>0$ and $\eta>0$, \begin{align} \begin{cases} \mathbb{E}\Big(\exp\Big(\lambda \frac{|\Upsilon(x)-\Upsilon(y)|}{|x-y|^\eta}\Big)\Big)\leq C_1,\quad \forall 0<|x-y|\leq \sqrt{d};\\ \mathbb{E}(\exp(\lambda \Upsilon(x)))\leq C_2, \quad \forall x\in \mathbb{R}^d. 
\end{cases} \end{align} Then for all $M\geq 1$ and $\chi> 0$, \[ \mathbb{P}\Big(\sup_{x\in Q_M(0)} \Upsilon(x) \geq \chi\Big)\leq (C_1 e^{2d/\eta}+C_2) (2M)^d \exp\Big({-\frac{\lambda \chi}{1+8d^{\eta/2}}}\Big). \] \end{lemma} With our discrete setting, we need the following lemma that serves as an intermediate step towards the ``discrete'' version of the above Garsia's Lemma. The proof is deferred to Appendix \ref{a2}. \begin{lemma}\label{4l2.1} Let $d\geq 1$. Assume $\{f(n): n\in \mathbb{Z}^d\}$ is a collection of non-negative random variables on some probability space $(\Omega, \mathcal F, \mathbb{P})$ which satisfies \begin{align}\label{4eb3.21} \begin{cases} \mathbb{E}\Big(\exp\Big(\lambda \frac{|f(n)-f(m)|}{|n-m|^\eta}\Big)\Big)\leq C_1,\quad \forall n\neq m \in \mathbb{Z}^d,\\ \mathbb{E}(\exp(\mu f(n)))\leq C_1, \quad \forall n\in \mathbb{Z}^d, \end{cases} \end{align} for some constants $\lambda,\mu, C_1>0$ and $\eta \in (0,1]$. For each $\omega \in \Omega$, if we linearly interpolate between integer points to obtain a continuous function $g(x)$ for $x\in \mathbb{R}^d$, then there exists some constant $0<c_{\ref{4l2.1}}(d)<1$ such that \begin{align}\label{4e8.12} \begin{cases} \mathbb{E}\Big(\exp\Big( c_{\ref{4l2.1}} \lambda\frac{|g(x)-g(y)|}{|x-y|^\eta}\Big)\Big)\leq C_1, \quad\forall x\neq y \in \mathbb{R}^d,\\ \mathbb{E}(\exp( c_{\ref{4l2.1}} \mu g(x)))\leq C_1,\quad \forall x\in \mathbb{R}^d. \end{cases} \end{align} \end{lemma} Combining Lemma \ref{4l2.2} and Lemma \ref{4l2.1}, the probability bounds for random variables indexed by the integer points may follow from their exponential moment bounds, which we now give. \begin{proposition}\label{4p2.1} Let $\eta=1/8$. 
For any $T\geq 100$, there exist constants $C_{\ref{4p2.1}}(T)>0$ and $\theta_{\ref{4p2.1}}(T)\geq 100$ such that for all $\theta \geq \theta_{\ref{4p2.1}}(T)$, there is some $K_{\ref{4p2.1}}(T,\theta)\geq 4\theta$ such that for any $m>0$, $R\geq K_{\ref{4p2.1}}$ and any $Z_0$ satisfying \eqref{4e7.1}, we have \begin{align} &\text{(i) } \mathbb{E}^{Z_0}\Big(\exp\Big(\theta^{3/2} R^{-2} \sum_{k=0}^{{T_\theta^R}} Z_k(g_a)\Big)\Big)\leq C_{\ref{4p2.1}}(T), \quad \forall a \in \mathbb{Z}^2;\\ &\text{(ii) }\mathbb{E}^{{Z}_0}\Big(\exp\Big( \frac{\theta^{3/2}}{R^2} \frac{(R/\theta)^{\eta/2}}{|a-b|^{\eta}} |\sum_{k=0}^{{T_\theta^R}} {Z}_{k}(g_a)-\sum_{k=0}^{{T_\theta^R}} {Z}_{k}(g_b)|\Big)\Big) \leq C_{\ref{4p2.1}}(T), \quad \forall a\neq b \in \mathbb{Z}^2.\nonumber \end{align} \end{proposition} Assuming Proposition \ref{4p2.1}, we first show these exponential moments indeed give us the desired bounds by applying the discrete Garsia's Lemma. \begin{corollary}\label{4c2.1} For any $\varepsilon_0\in (0,1)$ and $T\geq 100$, there exist constants $\chi_{\ref{4c2.1}}>0$ and $ \theta_{\ref{4c2.1}}\geq 100$ depending only on $\varepsilon_0, T$ such that for all $\theta \geq \theta_{\ref{4c2.1}}$, there is some $C_{\ref{4c2.1}}(\varepsilon_0, T,\theta)\geq 4\theta$ such that for any $m>0$, $R\geq C_{\ref{4c2.1}}$ and any $Z_0$ satisfying \eqref{4e7.1}, we have \[ \mathbb{P}^{Z_0}\Big(\frac{\theta}{R}\sum_{k=0}^{{T_\theta^R}} Z_k(g_a) \leq \chi_{\ref{4c2.1}} \frac{R}{\theta^{1/4}}, \quad \forall a\in \mathbb{Z}^2 \cap Q_{3M_{\ref{4p1}} \sqrt{\log f_2(\theta)}R_\theta} (0) \Big)\geq 1-\frac{\varepsilon_0}{2}. \] \end{corollary} \begin{proof} Fix $\varepsilon_0 \in (0,1)$, $T\geq 100$ and $\eta=1/8$. Let $\theta, m, R$ satisfy the assumptions of Proposition \ref{4p2.1} and set $Z_0$ as in \eqref{4e7.1}.
If we define $\{f(x): x\in \mathbb{R}^2\}$ to be the continuous random field obtained by linearly interpolating $\{\sum_{k=0}^{{T_\theta^R}} Z_k(g_a): a\in \mathbb{Z}^2\}$, then by assuming Proposition \ref{4p2.1}, we may apply Lemma \ref{4l2.1} to get \begin{align}\label{4eb4.1} \mathbb{E}^{Z_0}\Big(\exp\Big( c_{\ref{4l2.1}}\theta^{3/2 } R^{-2} f(x)\Big)\Big)\leq C_{\ref{4p2.1}}(T), \quad \forall x \in \mathbb{R}^2, \end{align} and \begin{align}\label{4eb4.2} \mathbb{E}^{{Z}_0}\Big(\exp\Big(c_{\ref{4l2.1}} \frac{\theta^{3/2}}{R^2} \frac{(R/\theta)^{\eta/2}}{|x-y|^{\eta}} |f(x)-f(y)|\Big)\Big) \leq C_{\ref{4p2.1}}(T), \quad \forall x\neq y \in \mathbb{R}^2. \end{align} Recall $R_\theta=\sqrt{R^{d-1}/\theta}=\sqrt{R/\theta}$ and $f_2(\theta)= \sqrt{\theta}$. Define \begin{align}\label{4e5.41} k_\theta =\sqrt{\log f_2(\theta)/\theta}= (2\theta)^{-1/2}\sqrt{\log \theta} \end{align} so that $\sqrt{\log f_2(\theta)}R_\theta= R^{1/2} k_\theta$. Replace $x,y$ in \eqref{4eb4.1} and \eqref{4eb4.2} with $xR^{1/2} k_\theta$, $yR^{1/2} k_\theta$ respectively to see that \begin{align}\label{4eb4.3} \mathbb{E}^{Z_0}\Big(\exp\Big( c_{\ref{4l2.1}} \theta^{3/2} R^{-2} f( x R^{1/2} k_\theta )\Big)\Big)\leq C_{\ref{4p2.1}}(T), \quad \forall x \in \mathbb{R}^2, \end{align} and for all $x\neq y \in \mathbb{R}^2$, \begin{align}\label{4eb4.4} \mathbb{E}^{{Z}_0}\Big(\exp\Big( \frac{2^{\eta/2} c_{\ref{4l2.1}}}{(\log \theta)^{\eta/2}} \frac{\theta^{3/2} {R^{-2}}}{|x-y|^{\eta}} \Big|f( x R^{1/2} k_\theta )-f( y R^{1/2} k_\theta )\Big|\Big)\Big) \leq C_{\ref{4p2.1}}(T). \end{align} Set $\Upsilon(x)=\theta^{5/4} R^{-2}f( x R^{1/2} k_\theta )$ for $x\in \mathbb{R}^d$. Note we have $\theta^{5/4}\leq \theta^{3/2}$ and $\theta^{5/4}\leq \frac{2^{\eta/2}\theta^{3/2}}{(\log \theta)^{\eta/2}}$ for $\theta\geq 100$. 
Therefore we conclude from \eqref{4eb4.3}, \eqref{4eb4.4} that \begin{align} \begin{cases} \mathbb{E}^{{Z}_0}\Big(\exp\Big( c_{\ref{4l2.1}} \frac{|\Upsilon(x)-\Upsilon(y)|}{|x-y|^\eta}\Big)\Big)\leq C_{\ref{4p2.1}}(T),\quad \forall x\neq y \in \mathbb{R}^2,\\ \mathbb{E}^{{Z}_0}(\exp( c_{\ref{4l2.1}} \Upsilon(x)))\leq C_{\ref{4p2.1}}(T), \quad \forall x \in \mathbb{R}^2. \end{cases} \end{align} Apply Lemma \ref{4l2.2} with the above moment bounds to get for any $\chi>0$ and $M\geq 1$, \begin{align*} &\mathbb{P}^{Z_0}\Big(\sup_{x\in Q_M(0)} \theta^{5/4} R^{-2}f( x R^{1/2} k_\theta )\geq \chi\Big)\\ &\leq (C_{\ref{4p2.1}}(T) e^{32}+C_{\ref{4p2.1}}(T)) (2M)^2 \exp\Big({-\frac{c_{\ref{4l2.1}} \chi}{1+8\cdot 2^{1/16}}}\Big). \end{align*} Let $M=3M_{\ref{4p1}}(\varepsilon_0,T)\geq 1$. Pick $\chi_{\ref{4c2.1}}=\chi_{\ref{4c2.1}}(M, \varepsilon_0, T)=\chi_{\ref{4c2.1}}(\varepsilon_0,T)>0$ large enough so that \begin{align} \mathbb{P}^{Z_0}\Big(\sup_{x\in Q_{3M_{\ref{4p1}} }(0) } \theta^{5/4} R^{-2}f( x R^{1/2} k_\theta ) \geq \chi_{\ref{4c2.1}}\Big)\leq \frac{\varepsilon_0}{2}. \end{align} Hence with probability larger than $1-\varepsilon_0/2$, we have \begin{align} \sup_{a\in \mathbb{Z}^2\cap Q_{ 3M_{\ref{4p1}} R^{1/2} k_\theta }(0) } \theta^{5/4} R^{-2} \sum_{k=0}^{{T_\theta^R}} Z_k(g_a) \leq \sup_{x\in Q_{3M_{\ref{4p1}}}(0)} \theta^{5/4} R^{-2}f( x R^{1/2} k_\theta)\leq \chi_{\ref{4c2.1}}. \end{align} The proof is complete by noting $\sqrt{\log f_2(\theta)}R_\theta= R^{1/2} k_\theta$. \end{proof} In a similar way we will take care of the martingale term by the following exponential moments. \begin{proposition}\label{4p2.2} Let $\eta=1/8$.
For any $T\geq 100$, there exist constants $C_{\ref{4p2.2}}(T)>0$ and $\theta_{\ref{4p2.2}}(T)\geq 100$ such that for all $\theta \geq \theta_{\ref{4p2.2}}(T)$, there is some $K_{\ref{4p2.2}}(T,\theta)\geq 4\theta$ such that for any $m>0$, $R\geq K_{\ref{4p2.2}}$ and any $Z_0$ satisfying \eqref{4e7.1}, we have \begin{align*} &\text{(i) }\mathbb{E}^{Z_0}\Big(\exp\Big(\theta^{3/4} R^{-1} |M_{{T_\theta^R}+1}(g_a)|\Big)\Big)\leq C_{\ref{4p2.2}}(T), \quad \forall a \in \mathbb{Z}^2,\\ &\text{(ii) }\mathbb{E}^{{Z}_0}\Big(\exp\Big( \frac{\theta^{3/4}}{R} \frac{(R/\theta)^{\eta/2} }{|a-b|^\eta} \Big||M_{{T_\theta^R}+1}(g_a)|-|M_{{T_\theta^R}+1}(g_b)|\Big|\Big)\Big) \leq C_{\ref{4p2.2}}(T), \quad \forall a\neq b \in \mathbb{Z}^2. \end{align*} \end{proposition} \begin{corollary}\label{4c2.2} For any $\varepsilon_0\in (0,1)$ and $T\geq 100$, there exist constants $\chi_{\ref{4c2.2}}>0$ and $ \theta_{\ref{4c2.2}}\geq 100$ depending only on $\varepsilon_0, T$ such that for all $\theta \geq \theta_{\ref{4c2.2}}$, there is some $C_{\ref{4c2.2}}(\varepsilon_0, T,\theta)\geq 4\theta$ such that for any $m>0$, $R\geq C_{\ref{4c2.2}}$ and any $Z_0$ satisfying \eqref{4e7.1}, we have \[ \mathbb{P}^{Z_0}\Big( |M_{{T_\theta^R}+1}(g_a)|\leq \chi_{\ref{4c2.2}} \frac{R}{\theta^{1/4}}, \quad \forall a\in \mathbb{Z}^2 \cap Q_{3M_{\ref{4p1}} \sqrt{\log f_2(\theta)}R_\theta} (0) \Big)\geq 1-\frac{\varepsilon_0}{2}. \] \end{corollary} \begin{proof} By using Proposition \ref{4p2.2}, the proof follows in a similar way to that of Corollary \ref{4c2.1} and so is omitted. \end{proof} Assuming Proposition \ref{4p2.1} and Proposition \ref{4p2.2}, we may finish the proof of Proposition \ref{4t2.0} below. \begin{proof}[Proof of Proposition \ref{4t2.0}] Fix $\varepsilon_0\in (0,1)$, $T\geq 100$ and $m>0$. Let $\theta_{\ref{4t2.0}}=\max\{\theta_{\ref{4c2.1}}, \theta_{\ref{4c2.2}}\}$. 
For any $\theta \geq \theta_{\ref{4t2.0}}$, we let $C_{\ref{4t2.0}}=\max\{C_{\ref{4c2.1}}(\varepsilon_0, T,\theta), C_{\ref{4c2.2}}(\varepsilon_0, T,\theta)\}$. For any $R\geq C_{\ref{4t2.0}}$, we let $Z_0$ be as in \eqref{4e7.1}. Apply Corollary \ref{4c2.1} to get with probability $\geq 1-\varepsilon_0/2$, \begin{align}\label{4ea7.1} \frac{\theta}{R}\sum_{k=0}^{{T_\theta^R}} Z_k(g_a) \leq \chi_{\ref{4c2.1}} \frac{R}{\theta^{1/4}}, \quad \forall a\in \mathbb{Z}^2 \cap Q_{3M_{\ref{4p1}} \sqrt{\log f_2(\theta)}R_\theta} (0). \end{align} Apply Corollary \ref{4c2.2} to get with probability $\geq 1-\varepsilon_0/2$, \begin{align}\label{4ea7.2} |M_{{T_\theta^R}+1}(g_a)|\leq \chi_{\ref{4c2.2}} \frac{R}{\theta^{1/4}}, \quad \forall a\in \mathbb{Z}^2 \cap Q_{3M_{\ref{4p1}} \sqrt{\log f_2(\theta)}R_\theta} (0). \end{align} Therefore with probability $\geq 1-\varepsilon_0$, both \eqref{4ea7.1} and \eqref{4ea7.2} hold. Use \eqref{4e6.14} to get for any $a\in \mathbb{Z}^2 \cap Q_{3M_{\ref{4p1}} \sqrt{\log f_2(\theta)}R_\theta} (0)$, \begin{align*} &\sum_{n=0}^{T_\theta^R} Z_n(\mathcal{N}(a)) \leq Z_{0}(g_a)+ M_{{T_\theta^R}+1}(g_a)+\frac{5\theta}{R} \sum_{n=0}^{T_\theta^R} Z_n(g_a)\\ &\leq C\frac{mR}{\theta^{1/4}}+\chi_{\ref{4c2.1}} \frac{R}{\theta^{1/4}}+\chi_{\ref{4c2.2}} \frac{R}{\theta^{1/4}}\leq (Cm+\chi_{\ref{4c2.1}}+\chi_{\ref{4c2.2}}) R, \end{align*} where in the second inequality we have also used \eqref{4ea10.33}. The proof is complete by letting $\chi_{\ref{4t2.0}}=Cm+\chi_{\ref{4c2.1}}+\chi_{\ref{4c2.2}}$. \end{proof} It remains to prove Proposition \ref{4p2.1} and Proposition \ref{4p2.2}. \subsection{Exponential moments of the drift term} In this section we will prove Proposition \ref{4p2.1} for the exponential moments of $\sum_{k=0}^{{T_\theta^R}} Z_k(g_a)$ by applying Proposition \ref{4p1.4}. To do this, we need an estimate for \[ G(g_a,{T_\theta^R})=3\|g_a\|_\infty+\sum_{k=1}^{{T_\theta^R}} \sup_{y\in \mathbb{Z}_R^d} \sum_{z\in \mathbb{Z}^d_R} p_k(y-z) g_a(z). 
\] By \eqref{4e10.33}, it is immediate that \begin{align}\label{4ec7.2} G(g_a,{T_\theta^R})\leq C\cdot G(g_{a,2},{T_\theta^R}), \end{align} and so it suffices to get bounds for $G(g_{a,2},{T_\theta^R})$. We first give some preliminary results. With some calculus, one may easily obtain the following lemma, whose proof can be found in Appendix \ref{a2}. \begin{lemma}\label{4l4.2} Let $d\geq 1$ and $R\geq 1$. \noindent (i) For any $s, t>0$ and any $x_1, x_2\in \mathbb{Z}^d_R$, we have \[ \sum_{y\in \mathbb{Z}^d_R} e^{-t|y-x_1|^2} e^{-s|y-x_2|^2} \leq 2^d e^{-\frac{st}{s+t}|x_1-x_2|^2} \sum_{y\in \mathbb{Z}^d_R} e^{-t|y|^2} e^{-s|y|^2}. \] (ii) There is some constant $c_{\ref{4l4.2}}=c_{\ref{4l4.2}}(d)>0$ such that for any $u\geq 1$ and $R\geq 1$, we have \[ \sum_{y\in \mathbb{Z}^d_R} e^{-{|y|^2}/{(2u)}}\leq c_{\ref{4l4.2}} u^{d/2} R^d. \] \end{lemma} \noindent The following result is from Lemma 4.3.2 of \cite{LL10} and will be used repeatedly below. \begin{lemma}[\cite{LL10}]\label{4l4.1} For any $\alpha>0$, there exist constants $C_{\ref{4l4.1}}(\alpha)> c_{\ref{4l4.1}}(\alpha)>0$ such that for all $r\geq 1/64$, \begin{align}\label{4e2.3} c_{\ref{4l4.1}}(\alpha)\frac{1}{r^{\alpha}}\leq \sum_{k=1}^\infty \frac{1}{k^{1+\alpha}} e^{-r/k}\leq C_{\ref{4l4.1}}(\alpha)\frac{1}{r^{\alpha}}. \end{align} \end{lemma} \begin{lemma}\label{4l1.3} Let $d=2$ or $d=3$. For any $1<\alpha<(d+1)/2$, there is some constant $c_{\ref{4l1.3}}=c_{\ref{4l1.3}}(\alpha, d)>0$ so that for any $n\geq 1$, $R\geq K_{\ref{4p1.1}}$, and $a,x\in \mathbb{Z}^d_R$, \begin{align*} \sum_{y\in \mathbb{Z}^d_R} p_n(y-x) \sum_{k=1}^\infty \frac{1}{k^{\alpha}} e^{-\frac{|y-a|^2}{64k}} \leq c_{\ref{4l1.3}}\cdot \frac{1}{n^{\alpha-1}}. \end{align*} \end{lemma} \begin{proof} This result follows essentially from Lemma \ref{4l4.1}. The proof is deferred to Appendix \ref{a2}.
\end{proof} \begin{lemma}\label{4l3.2} There is some constant $c_{\ref{4l3.2}}>0$ so that for any $\theta\geq 100$ and $R\geq 4\theta$, we have \begin{align}\label{4e8.66} g_{a,2}(y)\leq c_{\ref{4l3.2}}\Big(1+\log^+ \Big(\frac{R}{\theta |y-a|^2}\Big)\Big), \quad \forall y\neq a\in \mathbb{R}^2. \end{align} \end{lemma} \begin{proof} Recall from \eqref{4e10.31} that \begin{align}\label{4eb1.2} g_{a,2}(y)=\sum_{n=1}^\infty e^{-n\theta/R}\frac{1}{n} e^{-\frac{|y-a|^2}{32n}}\leq 1+\sum_{n=2}^\infty e^{-n\theta/R}\frac{1}{n} e^{-\frac{|y-a|^2}{32n}}. \end{align} For any $n\geq 2$, if $n\leq t\leq n+1$, then $n\geq t-1$ and $n\leq 2(t-1)$. So we have \[ e^{-n\theta/R}\frac{1}{n} e^{-\frac{|y-a|^2}{32n}}=\int_n^{n+1}  e^{-n\theta/R}\frac{1}{n} e^{-\frac{|y-a|^2}{32n}} dt \leq \int_n^{n+1}  e^{-(t-1)\theta/R}\frac{1}{(t-1)} e^{-\frac{|y-a|^2}{64(t-1)}} dt. \] Sum the above for all $n\geq 2$ and use \eqref{4eb1.2} to see that \begin{align*} g_{a,2}(y)\leq 1+\int_1^{\infty}  e^{-t\theta/R}\frac{1}{t} e^{-\frac{|y-a|^2}{64t}} dt. \end{align*} For simplicity, we write $k=R/\theta\geq 4$ and $r=|y-a|^2/64> 0$ so that \begin{align}\label{4ec1.3} g_{a,2}(y)\leq 1+\int_1^{\infty}  e^{-t/k}\frac{1}{t} e^{-{r}/{t}} dt:=1+I. \end{align} By a change of variable in $I$, we get \begin{align}\label{4eb1.4} I=& \int_{1/k}^{\infty} e^{-t}\frac{1}{t} e^{-{r}/(tk)} dt=\int_{1/k}^{1} e^{-t}\frac{1}{t} e^{-{r}/(tk)} dt+\int_{1}^{\infty} e^{-t}\frac{1}{t} e^{-{r}/(tk)} dt\nonumber\\ \leq &\int_{1/k}^{1} \frac{1}{t} e^{-{r}/(tk)} dt+\int_{1}^{\infty} e^{-t} dt=\int_{1/k}^{1} \frac{1}{t} e^{-{r}/(tk)} dt+e^{-1}:=J+e^{-1}. \end{align} Another change of variable with $s=r/(tk)$ in $J$ gives us that \begin{align*} J=&\int_{r/k}^{r} \frac{1}{s} e^{-s} ds\leq \int_{(r/k)\wedge 1}^{1} \frac{1}{s} e^{-s} ds+\int_{1}^{\infty} \frac{1}{s} e^{-s} ds\leq \int_{(r/k)\wedge 1}^{1} \frac{1}{s} ds+e^{-1}=\log^+(\frac{k}{r})+e^{-1}. \end{align*} Hence it follows that $I\leq 2e^{-1}+\log^+(\frac{k}{r})$.
Returning to \eqref{4ec1.3}, we get \begin{align*} g_{a,2}(y)\leq 1+\Big(2e^{-1}+\log^+ \Big(\frac{64R}{\theta |y-a|^2}\Big)\Big)\leq C+C\log^+ \Big(\frac{R}{\theta |y-a|^2}\Big), \end{align*} as required. \end{proof} \begin{lemma}\label{4l3.3} There is some constant $c_{\ref{4l3.3}}>0$ so that for all $x\in \mathbb{Z}^2_R$ and $a\in \mathbb{R}^2$, $n\geq 1$, $\theta\geq 100$, $R\geq 4\theta+ K_{\ref{4p1.1}}$ and $\beta=1$ or $2$, we have \begin{align}\label{4e7.32} \sum_{y\in \mathbb{Z}^2_R} p_n(x-y) (g_{a,2}(y))^\beta\leq &c_{\ref{4l3.3}} \Big(1+ \frac{1}{n} \Big(\log \frac{2R}{\theta}\Big)^\beta+\Big(\frac{R}{n\theta}\Big)^{1/2}\Big). \end{align} \end{lemma} \begin{proof} First we use Proposition \ref{4p1.1} and \eqref{4e6.20} to get \begin{align}\label{ec1.7} &\sum_{y\in \mathbb{Z}^d_R, |y-a|<1} p_n(x-y) (g_{a,2}(y))^\beta \leq (2R+1)^2 \cdot \frac{c_{\ref{4p1.1}}}{nR^2}\Big(\log \frac{2R}{\theta}\Big)^\beta\leq C\frac{1}{n}\Big(\log \frac{2R}{\theta}\Big)^\beta. \end{align} Turning to $|y-a|\geq 1$, we apply Lemma \ref{4l3.2} to see that for $\beta=1$ or $2$, we have \begin{align*} (g_{a,2}(y))^\beta \leq c_{\ref{4l3.2}}^\beta \Big(1+\log^+ \Big(\frac{R}{\theta |y-a|^2}\Big)\Big)^\beta\leq& C+C\Big(\log^+ \Big(\frac{R}{\theta |y-a|^2}\Big)\Big)^\beta\\ \leq& C+C\Big(\frac{R}{\theta |y-a|^2}\Big)^{1/2}, \end{align*} where in the last inequality we have used $(\log^+x)^\beta \leq C_\beta\, x^{1/2}$ for all $x>0$, with the constant $C_\beta$ ($\beta=1,2$) absorbed into $C$. Hence it follows that \begin{align}\label{ec1.8} &\sum_{y\in \mathbb{Z}^d_R, |y-a|\geq 1} p_n(x-y) (g_{a,2}(y))^\beta \leq C +C\Big(\frac{R}{\theta}\Big)^{1/2}\sum_{y\in \mathbb{Z}^d_R, |y-a|\geq 1} p_n(x-y)\frac{1}{|y-a|}. \end{align} Since we are summing over $|y-a|\geq 1$, we may apply Lemma \ref{4l4.1} to see that \begin{align*} \sum_{k=1}^\infty \frac{1}{k^{3/2}} e^{-\frac{|y-a|^2}{64k}}\geq c_{\ref{4l4.1}} \frac{64^{1/2}}{|y-a|}.
\end{align*} Use the above to get \begin{align}\label{ec1.9} \sum_{y\in \mathbb{Z}^d_R, |y-a|\geq 1} p_n(x-y)\frac{1}{|y-a|} \leq & C\sum_{y\in \mathbb{Z}^d_R} p_n(y-x) \sum_{k=1}^\infty \frac{1}{k^{3/2}} e^{-\frac{|y-a|^2}{64k}}\leq C\frac{c_{\ref{4l1.3}}}{n^{1/2}}, \end{align} where the last inequality is by Lemma \ref{4l1.3}. Now the result follows from \eqref{ec1.7}, \eqref{ec1.8} and \eqref{ec1.9}. \end{proof} Recall ${T_\theta^R}=[TR/\theta]\leq TR/\theta$. Apply \eqref{4e6.20} and Lemma \ref{4l3.3} with $\beta=1$ to get \begin{align}\label{4ec7.4} G(g_{a,2},{T_\theta^R})=&3\|g_{a,2}\|_\infty+\sum_{k=1}^{{T_\theta^R}} \sup_{y\in \mathbb{Z}_R^d} \sum_{z\in \mathbb{Z}^d_R} p_k(y-z) g_{a,2}(z)\\ \leq &3\log \frac{2R}{\theta}+\sum_{k=1}^{{T_\theta^R}} c_{\ref{4l3.3}} \Big(1+ \frac{1}{k}\log \frac{2R}{\theta}+\Big(\frac{R}{\theta}\Big)^{1/2} \frac{1}{k^{1/2}}\Big)\nonumber \\ \leq &3 \frac{2R}{\theta} +C {T_\theta^R} +C \log \frac{2R}{\theta} \cdot \log(T_\theta^R)+C (\frac{R}{\theta})^{1/2} ({{T_\theta^R}})^{1/2}\leq C(T) \frac{R}{\theta},\nonumber \end{align} where in the last inequality we have used $\log(x)\leq x^{1/2}$ for any $x>0$. Hence it follows from \eqref{4ec7.2} that \begin{align}\label{4e7.2} G(g_a,{T_\theta^R})\leq CG(g_{a,2},{T_\theta^R})\leq c(T) \frac{R}{\theta}. \end{align} Now we are ready to give the \begin{proof} [Proof of Proposition \ref{4p2.1}(i)] Let $\lambda=\theta^{3/2} R^{-2}$ and $n=T_\theta^R\leq \frac{TR}{\theta}$. Use \eqref{4e7.2} to get \begin{align}\label{4ea7.8} 2\lambda T_\theta^R e^{\frac{T_\theta^R\theta}{ R}} G(g_a,T_\theta^R)\leq 2\theta^{3/2} R^{-2} \frac{TR}{\theta}e^{T} \cdot c(T) \frac{R}{\theta}\leq c(T) \frac{1}{\theta^{1/2}}.
\end{align} If we pick $\theta>0$ large enough so that $c(T)/{\theta^{1/2}}\leq 1/2$, then we may apply Proposition \ref{4p1.4} to get (recall $|Z_0|\leq 2R/\sqrt{\theta}$ by \eqref{4e7.1}) \begin{align}\label{4e9.04} \mathbb{E}^{Z_0}\Big(\exp\Big(\lambda \sum_{k=0}^{{T_\theta^R}} Z_k(g_a)\Big)\Big)\leq& \exp\Big(\lambda |Z_0| e^{\frac{T_\theta^R\theta}{ R}} G(g_a,{T_\theta^R}) (1-2\lambda {T_\theta^R} e^{\frac{T_\theta^R\theta}{ R}} G(g_a,{T_\theta^R}))^{-1}\Big)\nonumber\\ \leq &\exp\Big(\lambda \frac{2R}{\sqrt{\theta}} e^{T} c(T) \frac{R}{\theta} (1-c(T) \frac{1}{\theta^{1/2}} )^{-1}\Big)\nonumber\\ \leq& \exp\Big(C(T) (1-c(T) \frac{1}{\theta^{1/2}} )^{-1}\Big)\leq e^{2C(T)}, \end{align} where we have used \eqref{4e7.2}, \eqref{4ea7.8} in the second inequality and the last inequality is by $c(T)/{\theta^{1/2}}\leq 1/2$. \end{proof} Turning to the difference moments in Proposition \ref{4p2.1} (ii), we need an estimate for $G(|g_a-g_b|,{T_\theta^R})$. Fix $\eta=1/8$ throughout the rest of this section. For any $a\neq b \in \mathbb{Z}^d$ and $y \in \mathbb{Z}^d_R$, we have $|(y-a)-(y-b)|\geq 1$ and so we may apply Proposition \ref{4p1.1}(ii) to get \begin{align}\label{4e9.22} |{g}_a(y)-{g}_b(y)|\leq& V(R)\sum_{k=1}^\infty e^{-k\theta/R} \frac{C_{\ref{4p1.1}}}{k R^2} \Big(\frac{|a-b|}{\sqrt{k}}\Big)^\eta (e^{-\frac{|y-a|^2}{64k}}+e^{-\frac{|y-b|^2}{64k}})\nonumber\\ \leq& C |a-b|^{\eta} \sum_{k=1}^\infty \frac{1}{k^{(2+\eta)/2}} (e^{-\frac{|y-a|^2}{64k}}+e^{-\frac{|y-b|^2}{64k}}). \end{align} For any $x\in \mathbb{Z}^d_R$ and any $n\geq 1$, we may use \eqref{4e9.22} and Lemma \ref{4l1.3} to see that \begin{align}\label{4e9.24} &\sum_{y\in \mathbb{Z}^d_R} p_n(y-x) |g_a(y)-g_b(y)|\nonumber\\ \leq &C |a-b|^{\eta} \sum_{y\in \mathbb{Z}^d_R} p_n(y-x) \sum_{k=1}^\infty \frac{1}{k^{(2+\eta)/2}} (e^{-\frac{ |a-y|^2}{64k}}+e^{-\frac{ |b-y|^2}{64k}})\nonumber\\ \leq &C |a-b|^{\eta} \cdot 2 c_{\ref{4l1.3}} n^{-\eta/2}.
\end{align} Apply \eqref{4e9.22} and \eqref{4e9.24} to get \begin{align}\label{4e6.43} G(|g_a-g_b|,{T_\theta^R})=&3\|g_a-g_b\|_\infty+\sum_{k=1}^{{T_\theta^R}} \sup_{y\in \mathbb{Z}_R^d} \sum_{z\in \mathbb{Z}^d_R} p_k(y-z) |g_a(z)-g_b(z)|\nonumber\\ \leq& C(\eta) |a-b|^{\eta}+C(\eta)\sum_{k=1}^{{T_\theta^R}} |a-b|^{\eta} k^{-\frac{\eta}{2}}\nonumber\\ \leq& c(\eta) |a-b|^{\eta} (T_\theta^R)^{1-\eta/2}\leq c(T) |a-b|^\eta \frac{R^{1-\eta/2}}{\theta^{1-\eta/2}}. \end{align} Now we give the \begin{proof}[Proof of Proposition \ref{4p2.1}(ii)] Let $\lambda={\theta^{(3-\eta)/2}} {R^{-2+\eta/2}}|a-b|^{-\eta}$ and $n=T_\theta^R\leq \frac{TR}{\theta}$. Note by \eqref{4e6.43} we have \begin{align}\label{4ea7.7} 2\lambda T_\theta^R e^{\frac{T_\theta^R\theta}{ R}} G(|g_a-g_b|,{T_\theta^R})\leq& 2{\theta^{(3-\eta)/2}} {R^{-2+\eta/2}}|a-b|^{-\eta} \frac{TR}{\theta}e^{T} \cdot c(T) |a-b|^\eta \frac{R^{1-\eta/2}}{\theta^{1-\eta/2}}\nonumber\\ \leq&c(T) \frac{1}{\theta^{1/2}}. \end{align} If we pick $\theta>0$ large enough so that $c(T)/{\theta^{1/2}}\leq 1/2$, then we may apply Proposition \ref{4p1.4} to get (recall $|Z_0|\leq 2R/\sqrt{\theta}$) \begin{align}\label{4e9.03} &\mathbb{E}^{{Z}_0}\Big(\exp\Big({\lambda|\sum_{k=0}^{{T_\theta^R}} {Z}_{k}(g_a)-\sum_{k=0}^{{T_\theta^R}} {Z}_{k}(g_b)|}\Big)\Big)\leq \mathbb{E}^{{Z}_0}\Big(\exp\Big({\lambda \sum_{k=0}^{{T_\theta^R}} {Z}_{k}(|g_a-g_b|)}\Big)\Big)\nonumber\\ \leq&\exp\Big(\lambda |{Z}_0| e^{\frac{T_\theta^R\theta}{ R}} \cdot G(|g_a-g_b|,{T_\theta^R}) (1-2\lambda {T_\theta^R} e^{\frac{T_\theta^R\theta}{ R}} G(|g_a-g_b|,{T_\theta^R}))^{-1}\Big)\nonumber\\ \leq &\exp\Big(\lambda \frac{2R}{\sqrt{\theta}} e^T \cdot c(T) |a-b|^\eta \frac{R^{1-\eta/2}}{\theta^{1-\eta/2}} (1-c(T) \frac{1}{\theta^{1/2}})^{-1}\Big)\nonumber\\ \leq &\exp\Big(C(T) (1-c(T) \frac{1}{\theta^{1/2}} )^{-1}\Big)\leq e^{2C(T)}, \end{align} where we have used \eqref{4e6.43}, \eqref{4ea7.7} in the second last inequality and the
last inequality is by $c(T)/{\theta^{1/2}}\leq 1/2$. \end{proof} \subsection{Exponential moments of the martingale term} Now we will turn to the martingale term $M_{{T_\theta^R}+1}(g_a)$ and give the proof of Proposition \ref{4p2.2}. Recall from \eqref{4e1.22} and \eqref{4e1.30} that \begin{align}\label{4e7.41} M_N(\phi)=\sum_{n=0}^{N-1}\sum_{|\alpha|=n} \sum_{i=1}^{V(R)} \phi({Y^\alpha+e_i}) \Big(B^{\alpha \vee e_i}-p(R)\Big) \end{align} and \begin{align}\label{4e7.42} \langle M(\phi)\rangle_N\leq 2\sum_{n=0}^{N-1} \sum_{x\in \mathbb{Z}^d_R} Z_n(x) \cdot \frac{1}{V(R)}\sum_{i=1}^{V(R)} \phi({x+e_i})^2. \end{align} We first proceed to the proof of Proposition \ref{4p2.2}(ii) and deal with \begin{align}\label{4eb1.1} \Big||M_{{T_\theta^R}+1}(g_a)|-|M_{{T_\theta^R}+1}(g_b)|\Big|\leq |M_{{T_\theta^R}+1}(g_a)-M_{{T_\theta^R}+1}(g_b)|=|M_{{T_\theta^R}+1}(g_a-g_b)|. \end{align} Throughout the rest of this section we fix $\eta=1/8$. Use $R\geq 4\theta$ and \eqref{4e9.22} to see that \begin{align}\label{4ea7.9} \theta^{(3-2\eta)/4} R^{(\eta-2)/2}|a-b|^{-\eta}\|g_a-g_b\|_\infty \leq \theta^{(3-2\eta)/4} (4\theta)^{(\eta-2)/2} C(\eta)\leq 1, \end{align} if we pick $\theta\geq 100$ to be large. Then we may use \eqref{4eb1.1} and Proposition \ref{4p5.1} with $\phi=g_a-g_b$ and $\lambda=\theta^{(3-2\eta)/4} R^{(\eta-2)/2}|a-b|^{-\eta}$ to get \begin{align}\label{4e9.05} &\mathbb{E}^{{Z}_0}\Big(\exp\Big(\theta^{(3-2\eta)/4} \frac{R^{(\eta-2)/2}}{|a-b|^\eta} \Big||M_{{T_\theta^R}+1}(g_a)|-|M_{{T_\theta^R}+1}(g_b)|\Big|\Big)\Big)\nonumber\\ \leq& \mathbb{E}^{{Z}_0}\Big(\exp\Big(\theta^{(3-2\eta)/4} \frac{R^{(\eta-2)/2}}{|a-b|^\eta} \Big|M_{{T_\theta^R}+1}(g_a-g_b)\Big|\Big)\Big)\nonumber\\ \leq& 2\Big(\mathbb{E}^{{Z}_0}\Big(\exp\Big(16\theta^{(3-2\eta)/2} \frac{R^{\eta-2}}{|a-b|^{2\eta}} \langle M(g_a-g_b)\rangle_{{T_\theta^R}+1}\Big)\Big)\Big)^{1/2}.
\end{align} By \eqref{4e7.42}, the quadratic variation is bounded by \begin{align}\label{4ea7.10} \langle M(g_a-g_b)\rangle_{{T_\theta^R}+1}\leq &2 \sum_{n=0}^{{T_\theta^R}} \sum_{x\in \mathbb{Z}^d_R} Z_n(x) \cdot \frac{1}{V(R)}\sum_{i=1}^{V(R)} \Big(g_a({x+e_i})-g_b(x+e_i)\Big)^2. \end{align} Use \eqref{4e9.22} again to get for all $a\neq b\in \mathbb{Z}^2$ and $y\in \mathbb{Z}^2_R$, \begin{align}\label{4e9.25} |{g}_a(y)-{g}_b(y)|^2\leq C |a-b|^{2\eta}\Big(\Big(\sum_{k=1}^\infty \frac{1}{k^{(2+\eta)/2}} e^{-\frac{|y-a|^2}{64k}}\Big)^2+\Big(\sum_{k=1}^\infty \frac{1}{k^{(2+\eta)/2}} e^{-\frac{|y-b|^2}{64k}}\Big)^2\Big). \end{align} To take care of the square terms on the right-hand side, we need the following lemma. \begin{lemma}\label{4l4.1.1} Let $d\geq 1$. For any $\alpha>0$, there is some constant $C_{\ref{4l4.1.1}}(\alpha)>0$ such that for all $a,y\in \mathbb{Z}_R^d$, \begin{align}\label{4eb1.9} \Big(\sum_{k=1}^\infty \frac{1}{k^{1+\alpha}} e^{-\frac{|y-a|^2}{64k}} \Big)^2\leq C_{\ref{4l4.1.1}}(\alpha) \sum_{k=1}^\infty \frac{1}{k^{1+2\alpha}} e^{-\frac{|y-a|^2}{64k}}. \end{align} \end{lemma} \begin{proof} For any $a, y\in \mathbb{Z}^d_R$, we first consider $|y-a|>1$. Apply Lemma \ref{4l4.1} with $r=|y-a|^2/64>1/64$ to get \begin{align*} &\sum_{k=1}^\infty \frac{1}{k^{1+\alpha}} e^{-\frac{|y-a|^2}{64k}}\leq C_{\ref{4l4.1}}(\alpha) \frac{64^{\alpha}}{|y-a|^{2\alpha}},\text{ and } \sum_{k=1}^\infty \frac{1}{k^{1+2\alpha}} e^{-\frac{|y-a|^2}{64k}}\geq c_{\ref{4l4.1}}(2\alpha) \frac{64^{2\alpha}}{|y-a|^{4\alpha}}.
\end{align*} Therefore it follows that \begin{align*} &\Big(\sum_{k=1}^\infty \frac{1}{k^{1+\alpha}} e^{-\frac{|y-a|^2}{64k}}\Big)^2 1_{\{|y-a|> 1\}}\leq C_{\ref{4l4.1}}(\alpha)^2 \frac{64^{2\alpha}}{|y-a|^{4\alpha}} 1_{\{|y-a|> 1\}}\\ \leq& C_{\ref{4l4.1}}(\alpha)^2 c_{\ref{4l4.1}}(2\alpha)^{-1} \sum_{k=1}^\infty \frac{1}{k^{1+2\alpha}} e^{-\frac{|y-a|^2}{64k}} 1_{\{|y-a|> 1\}}, \end{align*} thus proving \eqref{4eb1.9} for the case $|y-a|>1$. Turning to $|y-a|\leq 1$, it is immediate from the definition that \begin{align*} &\Big(\sum_{k=1}^\infty \frac{1}{k^{1+\alpha}} e^{-\frac{|y-a|^2}{64k}}\Big)^2 1_{\{|y-a|\leq 1\}}\leq\Big(\sum_{k=1}^\infty \frac{1}{k^{1+\alpha}} \Big)^2 1_{\{|y-a|\leq 1\}}\leq c_1(\alpha) 1_{\{|y-a|\leq 1\}} \end{align*} for some constant $c_1(\alpha)>0$. On the other hand, we have \begin{align*} &\sum_{k=1}^\infty \frac{1}{k^{1+2\alpha}} e^{-\frac{|y-a|^2}{64k}} 1_{\{|y-a|\leq 1\}} \geq e^{-\frac{1}{64}} 1_{\{|y-a|\leq 1\}}\sum_{k=1}^\infty \frac{1}{k^{1+2\alpha}} \geq c_2(\alpha) 1_{\{|y-a|\leq 1\}} \end{align*} for some constant $c_2(\alpha)>0$. Therefore it follows that \begin{align*} &\Big(\sum_{k=1}^\infty \frac{1}{k^{1+\alpha}} e^{-\frac{|y-a|^2}{64k}}\Big)^2 1_{\{|y-a|\leq 1\}}\leq c_1(\alpha) 1_{\{|y-a|\leq 1\}}\\ &=\frac{c_1(\alpha)}{c_2(\alpha)} c_2(\alpha) 1_{\{|y-a|\leq 1\}}\leq \frac{c_1(\alpha)}{c_2(\alpha)} \sum_{k=1}^\infty \frac{1}{k^{1+2\alpha}} e^{-\frac{|y-a|^2}{64k}} 1_{\{|y-a|\leq 1\}}, \end{align*} thus proving \eqref{4eb1.9} for the case $|y-a|\leq 1$. By adjusting constants, we conclude that \eqref{4eb1.9} holds for all $a,y\in \mathbb{Z}^d_R$. \end{proof} Apply the above lemma in \eqref{4e9.25} to get \begin{align}\label{4eb1.10} |{g}_a(y)-{g}_b(y)|^2\leq C |a-b|^{2\eta} C_{\ref{4l4.1.1}}(\frac{\eta}{2})\Big(\sum_{k=1}^\infty \frac{1}{k^{1+\eta}} e^{-\frac{|y-a|^2}{64k}}+\sum_{k=1}^\infty \frac{1}{k^{1+\eta}} e^{-\frac{|y-b|^2}{64k}}\Big).
\end{align} Define for any $a\in \mathbb{Z}^2$ that \begin{align}\label{4e5.21} q_a(x)=\sum_{k=1}^\infty \frac{1}{k^{1+\eta}} e^{-\frac{|x-a|^2}{64k}}\text{ and write } \overline{q_a}(x)=\frac{1}{V(R)}\sum_{i=1}^{V(R)} q_a(x+e_i). \end{align} Then we may apply \eqref{4eb1.10} and \eqref{4e5.21} in \eqref{4ea7.10} to get \begin{align*} \langle M(g_a-g_b)\rangle_{{T_\theta^R}+1} \leq& 2\sum_{n=0}^{{T_\theta^R}}\sum_{x\in \mathbb{Z}^d_R} Z_n(x) \cdot C |a-b|^{2\eta}( \overline{q_a}(x) +\overline{q_b}(x))\\ \leq &C |a-b|^{2\eta} \sum_{n=0}^{{T_\theta^R}} Z_n(\overline{q_a}+\overline{q_b}). \end{align*} Returning to \eqref{4e9.05}, we use the above to arrive at \begin{align}\label{4e9.112} &\mathbb{E}^{{Z}_0}\Big(\exp\Big(\theta^{(3-2\eta)/4} \frac{R^{(\eta-2)/2}}{|a-b|^\eta} \Big||M_{{T_\theta^R}+1}(g_a)|-|M_{{T_\theta^R}+1}(g_b)|\Big|\Big)\Big)\nonumber\\ \leq& 2\Big(\mathbb{E}^{{Z}_0}\Big(\exp\Big(16 C\theta^{(3-2\eta)/2} R^{\eta-2} \sum_{n=0}^{{T_\theta^R}} Z_n(\overline{q_a}+\overline{q_b}) \Big)\Big)\Big)^{1/2}\nonumber\\ \leq& 2\Big(\mathbb{E}^{{Z}_0}\Big(\exp\Big(32 C\theta^{3/2-\eta} R^{\eta-2} \sum_{n=0}^{{T_\theta^R}} Z_n(\overline{q_a}) \Big)\Big)\Big)^{1/4}\nonumber\\ &\times\Big(\mathbb{E}^{{Z}_0}\Big(\exp\Big(32 C\theta^{3/2-\eta} R^{\eta-2} \sum_{n=0}^{{T_\theta^R}} Z_n(\overline{q_b}) \Big)\Big)\Big)^{1/4}, \end{align} where the last inequality is by the Cauchy-Schwarz inequality. It suffices to bound \begin{align}\label{4e5.22} \mathbb{E}^{{Z}_0}\Big(\exp\Big(32 C\theta^{3/2-\eta} R^{\eta-2} \sum_{n=0}^{{T_\theta^R}} Z_n(\overline{q_a}) \Big)\Big),\quad \forall a\in \mathbb{Z}^d_R. \end{align} Recalling $q_a$ from \eqref{4e5.21}, we may use Lemma \ref{4l1.3} to get for any $a, x\in \mathbb{Z}^d_R$, \begin{align}\label{4e9.72} \sum_{y\in \mathbb{Z}^d_R} p_n(y-x) q_a(y)= \sum_{y\in \mathbb{Z}^d_R} p_n(y-x) \sum_{k=1}^\infty \frac{1}{k^{1+\eta}} e^{-\frac{|y-a|^2}{64k}}\leq c_{\ref{4l1.3}} \cdot \frac{1}{n^{\eta}}.
\end{align} Recall $\overline{q_a}$ from \eqref{4e5.21}. The above immediately gives \begin{align} \sum_{y\in \mathbb{Z}^d_R} p_n(y-x) \overline{q_a}(y)\leq c_{\ref{4l1.3}} \cdot \frac{1}{n^{\eta}}, \quad \forall a, x\in \mathbb{Z}^d_R. \end{align} Therefore we have \begin{align}\label{4e5.23} G(\overline{q_a},{T_\theta^R})=&3\|\overline{q_a}\|_\infty+\sum_{k=1}^{{T_\theta^R}} \sup_{y\in \mathbb{Z}_R^d} \sum_{z\in \mathbb{Z}^d_R} p_k(y-z) \overline{q_a}(z)\nonumber\\ \leq &3C(\eta)+\sum_{k=1}^{{T_\theta^R}} c_{\ref{4l1.3}} \cdot \frac{1}{k^{\eta}}\leq C(\eta) (T_\theta^R)^{1-\eta}\leq C(T) \frac{R^{1-\eta}}{\theta^{1-\eta}}. \end{align} \begin{proof}[Proof of Proposition \ref{4p2.2}(ii)] By \eqref{4e9.112}, it suffices to give bounds for \eqref{4e5.22}. Fix any $a\in \mathbb{Z}_R^d$. Let $\lambda=32C \theta^{3/2-\eta} {R^{\eta-2}}$ and $n=T_\theta^R\leq \frac{TR}{\theta}$. Apply \eqref{4e5.23} to get \begin{align}\label{4ea7.11} 2\lambda T_\theta^R e^{\frac{T_\theta^R\theta}{ R}} G(\overline{q_a},{T_\theta^R})\leq& 64C \theta^{3/2-\eta} {R^{\eta-2}} \frac{TR}{\theta}e^{T} \cdot C(T) \frac{R^{1-\eta}}{\theta^{1-\eta}}\leq c(T) \frac{1}{\theta^{1/2}}.
\end{align} If we pick $\theta>0$ large enough so that $c(T)/{\theta^{1/2}}\leq 1/2$, then we may apply Proposition \ref{4p1.4} to get (recall $|Z_0|\leq 2R/\sqrt{\theta}$) \begin{align}\label{4e10.50} \mathbb{E}^{{Z}_0}\Big(\exp\Big({\lambda \sum_{k=0}^{{T_\theta^R}} {Z}_{k}(\overline{q_a})}\Big)\Big) \leq& \exp\Big(\lambda |{Z}_0| e^{\frac{T_\theta^R\theta}{ R}}\cdot G(\overline{q_a},{T_\theta^R})(1-2\lambda T_\theta^R e^{\frac{T_\theta^R\theta}{ R}} G(\overline{q_a},{T_\theta^R}))^{-1}\Big)\nonumber\\ \leq &\exp\Big(\lambda \frac{2R}{\sqrt{\theta}} e^T \cdot C(T) \frac{R^{1-\eta}}{\theta^{1-\eta}} (1-c(T) \frac{1}{{\theta}^{1/2}} )^{-1}\Big)\nonumber\\ \leq &\exp\Big(C(T) (1-c(T) \frac{1}{{\theta}^{1/2}} )^{-1}\Big)\leq e^{2C(T)}, \end{align} where we have used \eqref{4e5.23}, \eqref{4ea7.11} in the second inequality. The last inequality is by $c(T)/{\theta^{1/2}}\leq 1/2$. Returning to \eqref{4e9.112}, we use \eqref{4e10.50} to get \begin{align}\label{4e5.24} &\mathbb{E}^{{Z}_0}\Big(\exp\Big(\theta^{(3-2\eta)/4} \frac{R^{(\eta-2)/2}}{|a-b|^\eta} \Big||M_{{T_\theta^R}+1}(g_a)|-|M_{{T_\theta^R}+1}(g_b)|\Big|\Big)\Big)\leq 2e^{C(T)}. \end{align} Hence the proof is complete. \end{proof} Finally we turn to the exponential moments of $M_{{T_\theta^R}+1}(g_a)$ and prove Proposition \ref{4p2.2}(i). Using \eqref{4e10.33}, \eqref{4e6.20} and $R\geq 4\theta$, one can check that \begin{align*} \theta^{3/4} R^{-1} \|g_a\|_\infty \leq \theta^{3/4} R^{-1} \cdot C \log \frac{2R}{\theta}\leq \frac{1}{4^{3/4}} R^{-1/4} \cdot C \log \frac{2R}{100}\leq 1, \end{align*} if we pick $R\geq 4\theta\geq 400$ to be large.
Then we may apply Proposition \ref{4p5.1} with $\phi= g_a$ and $\lambda=\theta^{3/4} R^{-1}$ to get \begin{align}\label{4e10.51} &\mathbb{E}^{{Z}_0}\Big(\exp\Big(\theta^{3/4} R^{-1} |M_{{T_\theta^R}+1}(g_a)|\Big)\Big)\nonumber\\ \leq& 2\Big(\mathbb{E}^{{Z}_0}\Big(\exp\Big(16\theta^{3/2} R^{-2} \langle M(g_a)\rangle_{{T_\theta^R}+1}\Big)\Big)\Big)^{1/2}, \end{align} where the quadratic variation is bounded by (recall \eqref{4e7.42}) \begin{align*} &\langle M(g_a)\rangle_{{T_\theta^R}+1}\leq 2 \sum_{n=0}^{{T_\theta^R}} \sum_{x\in \mathbb{Z}^d_R} Z_n(x) \cdot \frac{1}{V(R)}\sum_{i=1}^{V(R)} \Big(g_a({x+e_i})\Big)^2. \end{align*} For any $a, x\in \mathbb{Z}^d_R$, we define \begin{align}\label{4e5.25} \overline{g_a}(x)=\frac{1}{V(R)}\sum_{i=1}^{V(R)} (g_a(x+e_i))^2 \end{align} so that \begin{align*} &\langle M(g_a)\rangle_{{T_\theta^R}+1}\leq 2 \sum_{n=0}^{{T_\theta^R}} \sum_{x\in \mathbb{Z}^d_R} Z_n(x) \cdot \overline{g_a}(x)=2 \sum_{n=0}^{{T_\theta^R}} Z_n(\overline{g_a}). \end{align*} Therefore \eqref{4e10.51} becomes \begin{align}\label{4e9.113} &\mathbb{E}^{{Z}_0}\Big(\exp\Big(\theta^{3/4} R^{-1} |M_{{T_\theta^R}+1}(g_a)|\Big)\Big)\leq 2\Big(\mathbb{E}^{{Z}_0}\Big(\exp\Big(32 \theta^{3/2} R^{-2} \sum_{n=0}^{{T_\theta^R}} Z_n(\overline{g_a}) \Big)\Big)\Big)^{1/2}. \end{align} It remains to bound \begin{align}\label{4e5.26} \mathbb{E}^{{Z}_0}\Big(\exp\Big(32 \theta^{3/2} R^{-2} \sum_{n=0}^{{T_\theta^R}} Z_n(\overline{g_a}) \Big)\Big), \quad \forall a\in \mathbb{Z}^d_R. \end{align} In order to apply Proposition \ref{4p1.4} to get bounds for \eqref{4e5.26}, we will need bounds for $G(\overline{g_a}, {T_\theta^R})$.
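As a quick numerical aside (not part of the argument), the elementary bound $\log x\leq x^{1/2}$ for all $x>0$, which is invoked repeatedly when simplifying these $G$-bounds, can be confirmed directly: the gap $x^{1/2}-\log x$ has its unique minimum at $x=4$, where it equals $2-\log 4\approx 0.614>0$. A minimal sanity check in Python:

```python
import math

# f(x) = sqrt(x) - log(x) has a unique critical point at x = 4,
# where f(4) = 2 - log(4) ~ 0.614 > 0, so log(x) <= sqrt(x) on (0, inf).
xs = [0.01 * i for i in range(1, 200_001)]  # grid on (0, 2000]
gap = min(math.sqrt(x) - math.log(x) for x in xs)
assert gap > 0.6  # minimum attained at x = 4
```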
The definition of $\overline{g_a}$ as in \eqref{4e5.25} gives \begin{align}\label{4e9.10} G(\overline{g_a}, {T_\theta^R})=&3\|\overline{g_a}\|_\infty+\sum_{k=1}^{{T_\theta^R}} \sup_{x \in \mathbb{Z}_R^d} \sum_{y\in \mathbb{Z}^d_R} p_k(x-y) \overline{g_a}(y)\nonumber\\ \leq& 3\|{g_a}\|_\infty^2+\frac{1}{V(R)}\sum_{i=1}^{V(R)} \sum_{k=1}^{{T_\theta^R}} \sup_{x \in \mathbb{Z}_R^d} \sum_{y\in \mathbb{Z}^d_R} p_k(x-y) (g_a(y+e_i))^2\nonumber\\ \leq& 3\|{g_a}\|_\infty^2+ \sum_{k=1}^{{T_\theta^R}} \sup_{x \in \mathbb{Z}_R^d} \sum_{y\in \mathbb{Z}^d_R} p_k(x-y) (g_a(y))^2\nonumber\\ \leq& C(\log(\frac{2R}{\theta}))^2+C \sum_{k=1}^{{T_\theta^R}} \sup_{x \in \mathbb{Z}_R^d} \sum_{y\in \mathbb{Z}^d_R} p_k(x-y) (g_{a,2}(y))^2, \end{align} where the last inequality is by \eqref{4e10.33}, \eqref{4e6.20}. Use Lemma \ref{4l3.3} with $\beta=2$ to get \begin{align}\label{4e5.27} G(\overline{g_a},{T_\theta^R})\leq& C(\log(\frac{2R}{\theta}))^2 +C\sum_{k=1}^{{T_\theta^R}} c_{\ref{4l3.3}} \Big(1+ \frac{1}{k} \Big(\log \frac{2R}{\theta}\Big)^2+\Big(\frac{R}{k\theta}\Big)^{1/2}\Big)\nonumber\\ \leq &C\frac{2R}{\theta}+C{T_\theta^R}+C\Big(\log \frac{2R}{\theta}\Big)^2\cdot C\log {T_\theta^R} +C\Big(\frac{R}{\theta}\Big)^{1/2}\cdot C (T_\theta^R)^{1/2} \nonumber\\ \leq &C\frac{R}{\theta}+C\frac{TR}{\theta}+C\Big(\frac{2R}{\theta}\Big)^{2/3}\Big( \frac{TR}{\theta}\Big)^{1/3}+C\Big(\frac{R}{\theta}\Big)^{1/2} \Big(\frac{TR}{\theta}\Big)^{1/2} \leq c(T) \frac{R}{\theta}, \end{align} where in the second inequality we have used $\log x \leq x^{1/2}$, $\forall x>0$, and the third inequality uses $T_\theta^R\leq TR/\theta$ and $\log x \leq 3x^{1/3}$, $\forall x>0$ (the factor $3$ is absorbed into $C$). \begin{proof}[Proof of Proposition \ref{4p2.2}(i)] By \eqref{4e9.113}, it suffices to give bounds for \eqref{4e5.26}. Fix any $a\in \mathbb{Z}_R^d$. Let $\lambda=32 \theta^{3/2} {R^{-2}}$ and $n=T_\theta^R\leq \frac{TR}{\theta}$.
By \eqref{4e5.27} we have \begin{align}\label{4eb1.6} 2\lambda T_\theta^R e^{\frac{T_\theta^R\theta}{ R}} G(\overline{g_a},{T_\theta^R})\leq& 64 \theta^{3/2} {R^{-2}} \frac{TR}{\theta}e^{T} \cdot c(T) \frac{R}{\theta}\leq c(T) \frac{1}{\theta^{1/2}}. \end{align} If we pick $\theta>0$ large enough so that $c(T)/{\theta^{1/2}}\leq 1/2$, then we may apply Proposition \ref{4p1.4} to get (recall $|Z_0|\leq 2R/\sqrt{\theta}$) \begin{align}\label{4e10.61} \mathbb{E}^{{Z}_0}\Big(\exp\Big({\lambda \sum_{k=0}^{{T_\theta^R}} {Z}_{k}(\overline{g_a})}\Big)\Big) \leq& \exp\Big(\lambda |{Z}_0| e^{\frac{T_\theta^R\theta}{ R}} G(\overline{g_a},{T_\theta^R}) (1- 2\lambda T_\theta^R e^{\frac{T_\theta^R\theta}{ R}} G(\overline{g_a},{T_\theta^R}))^{-1}\Big)\nonumber\\ \leq& \exp\Big(\lambda \frac{2R}{\sqrt{\theta}} e^{T} c(T) \frac{R}{\theta} (1-c(T) \frac{1}{\theta^{1/2}})^{-1}\Big)\nonumber\\ \leq &\exp\Big(C(T) (1-c(T) \frac{1}{{\theta}^{1/2}} )^{-1}\Big)\leq e^{2C(T)}, \end{align} where the second inequality uses \eqref{4e5.27} and \eqref{4eb1.6} and the last inequality follows by $c(T)/{\theta^{1/2}}\leq 1/2$. Returning to \eqref{4e9.113}, we use \eqref{4e10.61} to get \begin{align}\label{4e5.28} &\mathbb{E}^{{Z}_0}\Big(\exp\Big(\theta^{3/4} R^{-1} |M_{{T_\theta^R}+1}(g_a)|\Big)\Big)\leq 2e^{C(T)}, \end{align} thus completing the proof. \end{proof} \section{Local time bounds in $d=3$}\label{4s6} In this section we give the proof of Proposition \ref{4p2} for $d=3$. Recall $Z_0\in M_F(\mathbb{Z}_R^3)$ satisfies \begin{align}\label{4eb1.7} \begin{dcases} \text{(i) }\text{Supp}(Z_0)\subseteq Q_{R_\theta}(0); \\ \text{(ii) } Z_0(1)\leq 2 R^2 f_3(\theta)/\theta=2 R^2 \log \theta/\theta;\\ \text{(iii) } Z_{0}(g_{u,3})\leq m {R^2}/\theta^{1/4}, \quad \forall u\in \mathbb{R}^3. \end{dcases} \end{align} As in the case $d=2$, it suffices to get bounds for the local time at points in the integer lattice. \begin{proposition}\label{4t6.1} Let $d=3$.
For any $\varepsilon_0\in (0,1)$, $T\geq 100$ and $m>0$, there exist constants $\theta_{\ref{4t6.1}}\geq 100, \chi_{\ref{4t6.1}}>0$ depending only on $\varepsilon_0, T,m$ such that for all $\theta \geq \theta_{\ref{4t6.1}}$, there is some $C_{\ref{4t6.1}}(\varepsilon_0, T,\theta,m)\geq 4\theta$ such that for any $R\geq C_{\ref{4t6.1}}$ and any $Z_0$ satisfying \eqref{4eb1.7}, we have \[ \mathbb{P}^{Z_0}\Big(\sum_{n=0}^{T_\theta^R} Z_n(\mathcal{N}(a)) \leq \chi_{\ref{4t6.1}} {R}, \quad \forall a\in \mathbb{Z}^3 \cap Q_{3M_{\ref{4p1}} \sqrt{\log f_3(\theta)}R_\theta} (0) \Big)\geq 1-\varepsilon_0. \] \end{proposition} \begin{proof}[Proof of Proposition \ref{4p2} in $d=3$ assuming Proposition \ref{4t6.1}] This follows by arguments similar to those used for $d=2$. \end{proof} It remains to prove Proposition \ref{4t6.1}. In view of \eqref{4e6.13}, it suffices to get bounds for $Z_0(\phi_a)$, $M_{{T_\theta^R}+1}(\phi_a)$ and $\sum_{n=0}^{{T_\theta^R}} Z_n(\phi_a)$, where $\phi_a(x)=RV(R)\sum_{n=1}^\infty p_n(x-a)$. Recall from \eqref{4e7.10} that for any $a,x \in \mathbb{Z}^d_R$, \begin{align}\label{4e7.10a} \phi_a(x)\leq CR \sum_{n=1}^\infty \frac{1}{n^{3/2}} e^{-\frac{|x-a|^2}{32n}}=Cg_{a,3}(x). \end{align} Therefore we may use the above and \eqref{4eb1.7} to see that \begin{align}\label{4e5.33} Z_0(\phi_a)\leq CZ_0(g_{a,3}) \leq C\frac{mR^2}{\theta^{1/4}}, \quad \forall a\in \mathbb{Z}^d. \end{align} Turning to $M_{{T_\theta^R}+1}(\phi_a)$ and $\sum_{n=0}^{{T_\theta^R}} Z_n(\phi_a)$, we will also calculate their exponential moments. \begin{proposition}\label{4p3.1} Let $\eta=1/8$.
For any $T\geq 100$, there exist constants $C_{\ref{4p3.1}}(T)>0$ and $\theta_{\ref{4p3.1}}(T)\geq 100$ such that for all $\theta \geq \theta_{\ref{4p3.1}}(T)$, there is some $K_{\ref{4p3.1}}(T,\theta)\geq 4\theta$ such that for any $m>0$, $R\geq K_{\ref{4p3.1}}$ and any $Z_0$ satisfying \eqref{4eb1.7}, we have \begin{align*} &\text{(i) } \mathbb{E}^{Z_0}\Big(\exp\Big(\frac{\theta^{3/2}R^{-4} }{\log \theta} \sum_{k=0}^{{T_\theta^R}} Z_k(\phi_a)\Big)\Big)\leq C_{\ref{4p3.1}}(T), \quad \forall a \in \mathbb{Z}^3,\\ &\text{(ii) } \mathbb{E}^{{Z}_0}\Big(\exp\Big(\frac{\theta^{3/2}R^{-4} }{\log \theta} \frac{(R^2/\theta)^{\eta/2} }{ |a-b|^{\eta}} |\sum_{k=0}^{{T_\theta^R}} {Z}_{k}(\phi_a)-\sum_{k=0}^{{T_\theta^R}} {Z}_{k}(\phi_b)|\Big)\Big) \leq C_{\ref{4p3.1}}(T), \quad \forall a\neq b \in \mathbb{Z}^3. \end{align*} \end{proposition} \begin{corollary}\label{4c3.1} For any $\varepsilon_0\in (0,1)$ and $T\geq 100$, there exist constants $\chi_{\ref{4c3.1}}>0$ and $ \theta_{\ref{4c3.1}}\geq 100$ depending only on $\varepsilon_0, T$ such that for all $\theta \geq \theta_{\ref{4c3.1}}$, there is some $C_{\ref{4c3.1}}(\varepsilon_0, T,\theta)\geq 4\theta$ such that for any $m>0$, $R\geq C_{\ref{4c3.1}}$ and any $Z_0$ satisfying \eqref{4eb1.7}, we have \begin{align*} \mathbb{P}^{Z_0}\Big(\frac{\theta}{R^2} \sum_{k=0}^{{T_\theta^R}} Z_k(\phi_a) \leq \chi_{\ref{4c3.1}} \frac{R^2}{\theta^{1/16}}, \quad \forall a\in \mathbb{Z}^3 \cap Q_{3M_{\ref{4p1}} \sqrt{\log f_3(\theta)}R_\theta} (0) \Big)\geq 1-\frac{\varepsilon_0}{2}. \end{align*} \end{corollary} \begin{proof} By using Proposition \ref{4p3.1}, the proof follows in a similar way to that of Corollary \ref{4c2.1}. \end{proof} In the previous calculation of the exponential moments, we never used the regularity condition (iii) of $Z_0$ in \eqref{4eb1.7}. It was also not used in the corresponding calculation for $d=2$ in Section \ref{4s5}.
The case for the martingale term in $d=3$ is slightly different--condition (iii) of $Z_0$ will enter in the calculation of its exponential moments (see the proof of Proposition \ref{4p5.5}). This makes the arguments rather tedious compared to other terms. \begin{proposition}\label{4p3.2} Let $\eta=1/8$. For any $T\geq 100$ and $m>0$, there exist constants $C_{\ref{4p3.2}}(T,m)>0$ and $\theta_{\ref{4p3.2}}(T,m)\geq 100$ such that for all $\theta \geq \theta_{\ref{4p3.2}}(T,m)$, there is some $K_{\ref{4p3.2}}(T,\theta, m)\geq 4\theta$ such that for any $R\geq K_{\ref{4p3.2}}$ and any $Z_0$ satisfying \eqref{4eb1.7}, we have \begin{align*} &\text{(i) } \mathbb{E}^{Z_0}\Big(\exp\Big(\theta^{\eta} R^{-2} |M_{{T_\theta^R}+1}(\phi_a)|\Big)\Big)\leq C_{\ref{4p3.2}}(T,m), \quad \forall a \in \mathbb{Z}^3,\\ &\text{(ii) } \mathbb{E}^{{Z}_0}\Big(\exp\Big( \theta^{\eta} R^{-2} \frac{(R^2/\theta)^{\eta/2}}{|a-b|^\eta} \Big||M_{{T_\theta^R}+1}(\phi_a)|-|M_{{T_\theta^R}+1}(\phi_b)|\Big|\Big)\Big) \leq C_{\ref{4p3.2}}(T,m), \quad \forall a\neq b \in \mathbb{Z}^3. \end{align*} \end{proposition} \begin{corollary}\label{4c3.2} For any $\varepsilon_0\in (0,1)$, $T\geq 100$ and $m>0$, there exist constants $\chi_{\ref{4c3.2}}>0$ and $ \theta_{\ref{4c3.2}}\geq 100$ depending only on $\varepsilon_0, T, m$ such that for all $\theta \geq \theta_{\ref{4c3.2}}$, there is some $C_{\ref{4c3.2}}(\varepsilon_0, T,\theta,m)\geq 4\theta$ such that for any $R\geq C_{\ref{4c3.2}}$ and any $Z_0$ satisfying \eqref{4eb1.7}, we have \[ \mathbb{P}^{Z_0}\Big( |M_{{T_\theta^R}+1}(\phi_a)|\leq \chi_{\ref{4c3.2}}\frac{R^2}{\theta^{1/16}}, \quad \forall a\in \mathbb{Z}^3 \cap Q_{3M_{\ref{4p1}} \sqrt{\log f_3(\theta)}R_\theta} (0) \Big)\geq 1-\frac{\varepsilon_0}{2}. \] \end{corollary} \begin{proof} By using Proposition \ref{4p3.2}, the proof follows in a similar way to that of Corollary \ref{4c2.1}. 
\end{proof} Assuming Propositions \ref{4p3.1} and \ref{4p3.2}, we first finish the proof of Proposition \ref{4t6.1}. \begin{proof}[Proof of Proposition \ref{4t6.1}] Fix $\varepsilon_0\in (0,1)$, $T\geq 100$ and $m>0$. Let $\theta_{\ref{4t6.1}}=\max\{\theta_{\ref{4c3.1}}, \theta_{\ref{4c3.2}}\}$. For any $\theta \geq \theta_{\ref{4t6.1}}$, we let $C_{\ref{4t6.1}}=\max\{C_{\ref{4c3.1}}(\varepsilon_0, T,\theta), C_{\ref{4c3.2}}(\varepsilon_0, T,\theta, m)\}$. For any $R\geq C_{\ref{4t6.1}}$, we let $Z_0$ be as in \eqref{4eb1.7}. Apply Corollary \ref{4c3.1} to get with probability $\geq 1-\varepsilon_0/2$, \begin{align}\label{4eac7.1} \frac{\theta}{R^2} \sum_{k=0}^{{T_\theta^R}} Z_k(\phi_a) \leq \chi_{\ref{4c3.1}} \frac{R^2}{\theta^{1/16}}, \quad \forall a\in \mathbb{Z}^3 \cap Q_{3M_{\ref{4p1}} \sqrt{\log f_3(\theta)}R_\theta} (0). \end{align} Apply Corollary \ref{4c3.2} to get with probability $\geq 1-\varepsilon_0/2$, \begin{align}\label{4eac7.2} |M_{{T_\theta^R}+1}(\phi_a)|\leq \chi_{\ref{4c3.2}}\frac{R^2}{\theta^{1/16}}, \quad\forall a\in \mathbb{Z}^3 \cap Q_{3M_{\ref{4p1}} \sqrt{\log f_3(\theta)}R_\theta} (0). \end{align} Therefore with probability $\geq 1-\varepsilon_0$, both \eqref{4eac7.1} and \eqref{4eac7.2} hold. Use \eqref{4e6.13} to get for any $a\in \mathbb{Z}^3 \cap Q_{3M_{\ref{4p1}} \sqrt{\log f_3(\theta)}R_\theta} (0)$, \begin{align*} R\sum_{n=0}^{T_\theta^R} Z_n(\mathcal{N}(a))\leq &Z_{0}(\phi_a)+M_{T_\theta^R+1}(\phi_a)+\frac{\theta}{R^{2}}\sum_{n=0}^{T_\theta^R} Z_n(\phi_a)\\ \leq & C\frac{mR^2}{\theta^{1/4}}+\chi_{\ref{4c3.1}} \frac{R^2}{\theta^{1/16}}+\chi_{\ref{4c3.2}}\frac{R^2}{\theta^{1/16}}\leq (Cm+\chi_{\ref{4c3.1}}+\chi_{\ref{4c3.2}}) R^2, \end{align*} where the second inequality is by \eqref{4e5.33}, \eqref{4eac7.1} and \eqref{4eac7.2}. The proof is complete by letting $\chi_{\ref{4t6.1}}=Cm+\chi_{\ref{4c3.1}}+\chi_{\ref{4c3.2}}$. \end{proof} It remains to prove Proposition \ref{4p3.1} and Proposition \ref{4p3.2}.
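Before turning to the proofs, note that they lean repeatedly on kernel estimates of Lemma-\ref{4l4.1} type, namely that $\sum_{k\geq 1} k^{-(1+\alpha)}e^{-r/k}$ behaves like a constant multiple of $r^{-\alpha}$. This is easy to sanity-check numerically; the following Python sketch (not part of the argument; the bracketing constants $1$ and $3$ are ad hoc, not the constants of the lemma) does so for $\alpha=1/2$:

```python
import math

def S(alpha, r, K=200_000):
    """Truncated sum_{k=1}^{K} k^{-(1+alpha)} * exp(-r/k)."""
    return sum(k ** -(1.0 + alpha) * math.exp(-r / k) for k in range(1, K + 1))

# For alpha = 1/2 the sum should behave like a constant times r^{-1/2};
# the large-r limit of sqrt(r) * S is Gamma(1/2) = sqrt(pi) ~ 1.77.
for r in [1.0, 4.0, 25.0, 100.0, 400.0]:
    ratio = math.sqrt(r) * S(0.5, r)
    assert 1.0 < ratio < 3.0, (r, ratio)
```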
\subsection{Exponential moments of the drift term} In this section we will prove Proposition \ref{4p3.1} for the exponential moments of $\sum_{k=0}^{{T_\theta^R}} Z_k(\phi_a)$. For any $x\in \mathbb{Z}_R^d$ and $n\geq 1$, we apply \eqref{4e7.10a} and Lemma \ref{4l1.3} to get \begin{align}\label{4e7.11} \sum_{y\in \mathbb{Z}^d_R} p_n(y-x) \phi_a(y)\leq CR \sum_{y\in \mathbb{Z}^d_R} p_n(y-x) \sum_{k=1}^\infty \frac{1}{k^{3/2}} e^{-\frac{|y-a|^2}{64k}} \leq CR \cdot c_{\ref{4l1.3}} n^{-1/2}. \end{align} It follows that \begin{align}\label{4e7.21} G(\phi_a, {T_\theta^R})=& 3\|\phi_a\|_\infty+\sum_{k=1}^{T_\theta^R} \sup_{y \in \mathbb{Z}_R^d} \sum_{z\in \mathbb{Z}^d_R} p_k(y-z) \phi_a(z)\nonumber\\ \leq &CR+\sum_{k=1}^{T_\theta^R} CR\cdot c_{\ref{4l1.3}} k^{-1/2}\leq CR\sqrt{T_\theta^R}\leq c(T)\frac{R^2}{\theta^{1/2}}, \end{align} where the first inequality is by \eqref{4e7.10a}, \eqref{4e7.11}, and the last inequality is by $T_\theta^R\leq TR^2/\theta$. \begin{proof}[Proof of Proposition \ref{4p3.1}(i)] Let $\lambda=\theta^{3/2}R^{-4}/\log \theta$ and $n=T_\theta^R\leq \frac{TR^2}{\theta}$. By \eqref{4e7.21} we have \begin{align}\label{4e5.43} 2\lambda T_\theta^R e^{\frac{T_\theta^R\theta}{R^{2}}} G(\phi_a,{T_\theta^R})\leq&2\frac{\theta^{3/2}R^{-4}}{\log \theta} \frac{TR^2}{\theta}e^T \cdot c(T)\frac{R^2}{\theta^{1/2}}\leq C(T) \frac{1}{\log \theta}.
\end{align} If we pick $\theta>0$ large enough so that $C(T)/{\log \theta}\leq 1/2$, then we may apply Proposition \ref{4p1.4} to get (recall $|Z_0|\leq 2R^2\log \theta/\theta$) \begin{align}\label{4e5.30} \mathbb{E}^{Z_0}\Big(\exp\Big(\lambda \sum_{k=0}^{{T_\theta^R}} Z_k(\phi_a)\Big)\Big)\leq& \exp\Big(\lambda |Z_0| e^{\frac{T_\theta^R\theta}{R^{2}}} G(\phi_a,{T_\theta^R}) (1-2\lambda {T_\theta^R} e^{\frac{T_\theta^R\theta}{R^{2}}} G(\phi_a,T_\theta^R))^{-1}\Big)\nonumber\\ \leq &\exp\Big(\lambda \frac{2R^2 \log \theta}{\theta} e^{T} c(T) \frac{R^2}{{\theta}^{1/2}} (1-C(T)/{\log \theta})^{-1}\Big)\nonumber\\ \leq &\exp\Big( c(T) (1-C(T)/{\log \theta})^{-1}\Big)\leq e^{2c(T)}, \end{align} where in the second inequality we have used \eqref{4e7.21}, \eqref{4e5.43} and the last inequality is by $C(T)/{\log \theta}\leq 1/2$. \end{proof} Turning to the difference moments, we fix $\eta=1/8$ throughout the rest of this section. For any $a\neq b \in \mathbb{Z}^d$ and $y \in \mathbb{Z}^d_R$, we have $|(y-a)-(y-b)|\geq 1$. So we may apply Proposition \ref{4p1.1} (ii) to get \begin{align}\label{4e7.12} |\phi_a(y)-\phi_b(y)|\leq& RV(R)\sum_{k=1}^\infty \frac{C_{\ref{4p1.1}}}{k^{3/2} R^3} \Big(\frac{|a-b|}{\sqrt{k}}\Big)^\eta (e^{-\frac{|y-a|^2}{64k}}+e^{-\frac{|y-b|^2}{64k}})\nonumber\\ \leq& C |a-b|^{\eta} \sum_{k=1}^\infty \frac{1}{k^{(3+\eta)/2}} (e^{-\frac{|y-a|^2}{64k}}+e^{-\frac{|y-b|^2}{64k}}). \end{align} \noindent For any $x\in \mathbb{Z}^d_R$, by \eqref{4e7.12} and Lemma \ref{4l1.3} we have for any $n\geq 1$, \begin{align}\label{4eb1.8} &\sum_{y\in \mathbb{Z}^d_R} p_n(y-x) |\phi_a(y)-\phi_b(y)|\nonumber\\ \leq &CR |a-b|^{\eta} \sum_{y\in \mathbb{Z}^d_R} p_n(y-x) \sum_{k=1}^\infty \frac{1}{k^{(3+\eta)/2}} (e^{-\frac{ |a-y|^2}{64k}}+e^{-\frac{ |b-y|^2}{64k}})\nonumber\\ \leq &CR |a-b|^{\eta} \cdot 2 c_{\ref{4l1.3}} n^{-(1+\eta)/2}.
\end{align} Hence we may apply \eqref{4e7.12} and \eqref{4eb1.8} to get \begin{align}\label{4e5.45} G(|\phi_a-\phi_b|,{T_\theta^R})=&3\|\phi_a-\phi_b\|_\infty+\sum_{k=1}^{{T_\theta^R}} \sup_{y \in \mathbb{Z}_R^d} \sum_{z\in \mathbb{Z}^d_R} p_k(y-z) |\phi_a(z)-\phi_b(z)|\nonumber\\ \leq& C(\eta) R |a-b|^{\eta}+CR |a-b|^{\eta} \sum_{k=1}^{{T_\theta^R}} 2 c_{\ref{4l1.3}} k^{-(1+\eta)/2} \nonumber\\ \leq& c(\eta) R |a-b|^{\eta} (T_\theta^R)^{(1-\eta)/2}\leq c(T) |a-b|^{\eta} \frac{R^{2-\eta}}{\theta^{(1-\eta)/2}}. \end{align} \begin{proof}[Proof of Proposition \ref{4p3.1}(ii)] Let $\lambda={\theta^{(3-\eta)/2}} {R^{\eta-4}}/(|a-b|^{\eta}\log \theta)$ and $n=T_\theta^R\leq \frac{TR^2}{\theta}$. Note by \eqref{4e5.45} we have \begin{align}\label{4e5.44} 2\lambda T_\theta^R e^{\frac{T_\theta^R\theta}{R^{2}}} G(|\phi_a-\phi_b|,{T_\theta^R})\leq&2\frac{{\theta^{(3-\eta)/2}} {R^{\eta-4}}}{|a-b|^{\eta}\log \theta} \frac{TR^2}{\theta}e^T \cdot c(T) |a-b|^{\eta} \frac{R^{2-\eta}}{\theta^{(1-\eta)/2}}\nonumber\\ \leq &C(T)/\log \theta.
\end{align} If we pick $\theta>0$ large enough so that $C(T)/{\log \theta}\leq 1/2$, then we may apply Proposition \ref{4p1.4} to get (recall $|Z_0|\leq 2R^2\log \theta/\theta$) \begin{align}\label{4e5.31} &\mathbb{E}^{{Z}_0}\Big(\exp\Big({\lambda\Big|\sum_{k=0}^{{T_\theta^R}} {Z}_{k}(\phi_a)-\sum_{k=0}^{{T_\theta^R}} {Z}_{k}(\phi_b)\Big|}\Big)\Big)\leq \mathbb{E}^{{Z}_0}\Big(\exp\Big({\lambda \sum_{k=0}^{{T_\theta^R}} {Z}_{k}(|\phi_a-\phi_b|)}\Big)\Big)\nonumber\\ \leq&\exp\Big(\lambda |{Z}_0| e^{\frac{T_\theta^R\theta}{R^{2}}} G(|\phi_a-\phi_b|,{T_\theta^R}) (1- 2\lambda T_\theta^R e^{\frac{T_\theta^R\theta}{R^{2}}} G(|\phi_a-\phi_b|,{T_\theta^R}))^{-1}\Big)\nonumber\\ \leq&\exp\Big(\lambda \frac{2R^2\log \theta}{\theta} e^T \cdot c(T) |a-b|^{\eta} \frac{R^{2-\eta}}{\theta^{(1-\eta)/2}} (1-C(T)/{\log \theta})^{-1}\Big)\nonumber\\ \leq &\exp\Big( c(T)(1-C(T)/{\log \theta})^{-1}\Big)\leq e^{2c(T)}, \end{align} where in the third inequality we have used \eqref{4e5.45}, \eqref{4e5.44}, and the last inequality is by $C(T)/{\log \theta}\leq 1/2$. The second last inequality uses $\lambda={\theta^{(3-\eta)/2}} {R^{\eta-4}}/(|a-b|^{\eta}\log \theta)$. So the proof is complete. \end{proof} \subsection{Exponential moments of the martingale term} Now we turn to the more complicated martingale term $M_{{T_\theta^R}+1}(\phi_a)$ and give the proof of Proposition \ref{4p3.2}. \subsubsection{Proof of Proposition \ref{4p3.2}(ii)} We first prove Proposition \ref{4p3.2}(ii) and deal with \begin{align*} \Big||M_{{T_\theta^R}+1}(\phi_a)|-|M_{{T_\theta^R}+1}(\phi_b)|\Big|\leq |M_{{T_\theta^R}+1}(\phi_a)-M_{{T_\theta^R}+1}(\phi_b)|=|M_{{T_\theta^R}+1}(\phi_a-\phi_b)|. \end{align*} Use \eqref{4e7.12} and $R\geq 4\theta$ to get \begin{align*} \theta^{\eta/2} R^{\eta-2}|a-b|^{-\eta}\|\phi_a-\phi_b\|_\infty \leq& \theta^{\eta/2} R^{\eta-2}|a-b|^{-\eta} \cdot CR|a-b|^\eta\\ \leq&C \theta^{3\eta/2-1}\leq 1, \end{align*} if we pick $\theta\geq 100$ to be large.
Then we may apply Proposition \ref{4p5.1} with $\phi=\phi_a-\phi_b$ and $\lambda=\theta^{\eta/2} R^{\eta-2}|a-b|^{-\eta}$ to get \begin{align}\label{4e8.46} &\mathbb{E}^{{Z}_0}\Big(\exp\Big(\theta^{\eta/2} R^{\eta-2}|a-b|^{-\eta} \Big||M_{{T_\theta^R}+1}(\phi_a)|-|M_{{T_\theta^R}+1}(\phi_b)|\Big|\Big)\Big)\nonumber\\ \leq& \mathbb{E}^{{Z}_0}\Big(\exp\Big(\theta^{\eta/2}R^{\eta-2} |a-b|^{-\eta} |M_{{T_\theta^R}+1}(\phi_a-\phi_b)|\Big)\Big)\nonumber\\ \leq& 2\Big(\mathbb{E}^{{Z}_0}\Big(\exp\Big(16 \theta^{\eta} R^{2\eta-4}|a-b|^{-2\eta} \langle M(\phi_a-\phi_b)\rangle_{{T_\theta^R}+1}\Big)\Big)\Big)^{1/2}. \end{align} It suffices to bound \begin{align}\label{4e7.61} \mathbb{E}^{{Z}_0}\Big(\exp\Big(16 \theta^{\eta} R^{2\eta-4}|a-b|^{-2\eta} \langle M(\phi_a-\phi_b)\rangle_{{T_\theta^R}+1}\Big)\Big). \end{align} By \eqref{4e7.42}, the above quadratic variation is bounded by \begin{align}\label{4e8.23} \langle M(\phi_a-\phi_b)\rangle_{{T_\theta^R}+1}\leq & 2\sum_{n=0}^{{T_\theta^R}}\sum_{x\in \mathbb{Z}^d_R} Z_n(x) \cdot \frac{1}{V(R)} \sum_{i=1}^{V(R)} \Big(\phi_a({x+e_i})-\phi_b(x+e_i)\Big)^2. \end{align} Apply \eqref{4e7.12} to see that for any $a\neq b \in \mathbb{Z}^d$ and $y\in \mathbb{Z}^d_R$, we have \begin{align}\label{4e7.24} &|\phi_a(y)-\phi_b(y)|^2\leq CR^2 |a-b|^{2\eta} \Big(\Big(\sum_{k=1}^\infty \frac{1}{k^{(3+\eta)/2}} e^{-\frac{|y-a|^2}{64k}}\Big)^2+\Big(\sum_{k=1}^\infty \frac{1}{k^{(3+\eta)/2}} e^{-\frac{|y-b|^2}{64k}}\Big)^2\Big)\nonumber\\ &\leq CR^2 |a-b|^{2\eta} C_{\ref{4l4.1.1}}(\frac{1+\eta}{2})\Big(\sum_{k=1}^\infty \frac{1}{k^{2+\eta}} e^{-\frac{|y-a|^2}{64k}}+\sum_{k=1}^\infty \frac{1}{k^{2+\eta}} e^{-\frac{|y-b|^2}{64k}}\Big), \end{align} where the last inequality is by Lemma \ref{4l4.1.1} with $\alpha=\frac{1+\eta}{2}$.
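As a numerical sanity check of this application of Lemma \ref{4l4.1.1} (not part of the proof; the constant $10$ below is an ad hoc stand-in for $C_{\ref{4l4.1.1}}(\frac{1+\eta}{2})$), one can verify the squared-sum bound with $\eta=1/8$ over a range of values of $r=|y-a|^2$ in Python:

```python
import math

def S(p, r, K=100_000):
    """Truncated sum_{k=1}^{K} k^{-p} * exp(-r / (64 k))."""
    return sum(k ** -p * math.exp(-r / (64.0 * k)) for k in range(1, K + 1))

eta = 1.0 / 8.0  # as fixed in the text
for r in [0.0, 1.0, 100.0, 2500.0, 10000.0]:  # r plays the role of |y - a|^2
    lhs = S((3.0 + eta) / 2.0, r) ** 2  # squared sum with exponent (3 + eta)/2
    rhs = S(2.0 + eta, r)               # single sum with exponent 2 + eta
    assert lhs <= 10.0 * rhs, (r, lhs / rhs)
```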
Define for any $a\in \mathbb{Z}^d_R$ that \begin{align}\label{4e8.22} f_a(y):=\sum_{k=1}^\infty \frac{1}{k^{2+\eta}} e^{-\frac{|y-a|^2}{64k}}\text{ and write } \overline{f_a}(y)=\frac{1}{V(R)}\sum_{i=1}^{V(R)} f_a(y+e_i). \end{align} Therefore we apply \eqref{4e7.24} to see that \eqref{4e8.23} becomes \begin{align*} \langle M(\phi_a-\phi_b)\rangle_{{T_\theta^R}+1} \leq &2\sum_{n=0}^{{T_\theta^R}} \sum_{x\in \mathbb{Z}^d_R} Z_n(x) \cdot C(\eta) R^2 |a-b|^{2\eta} \frac{1}{V(R)} \sum_{i=1}^{V(R)} \Big(f_a({x+e_i})+f_b(x+e_i)\Big)\\ =& 2\sum_{n=0}^{{T_\theta^R}} \sum_{x\in \mathbb{Z}^d_R} Z_n(x)\cdot C(\eta) R^2 |a-b|^{2\eta} ( \overline{f_a}(x) +\overline{f_b}(x))\\ \leq &CR^2 |a-b|^{2\eta} \sum_{n=0}^{{T_\theta^R}}  (Z_n(\overline{f_a})+Z_n(\overline{f_b})). \end{align*} Returning to \eqref{4e7.61}, we get \begin{align}\label{4e7.25} &\mathbb{E}^{{Z}_0}\Big(\exp\Big(16 \theta^{\eta} R^{2\eta-4}|a-b|^{-2\eta} \langle M(\phi_a-\phi_b)\rangle_{{T_\theta^R}+1}\Big)\Big)\\ \leq& \mathbb{E}^{{Z}_0}\Big(\exp\Big(16C \theta^{\eta} R^{2\eta-2} \sum_{n=0}^{{T_\theta^R}}  (Z_n(\overline{f_a})+Z_n(\overline{f_b})) \Big)\Big)\nonumber\\ \leq& \Big(\mathbb{E}^{{Z}_0}\Big(\exp\Big(32C \theta^{\eta} R^{2\eta-2} \sum_{n=0}^{{T_\theta^R}}  Z_n(\overline{f_a}) \Big)\Big)\Big)^{1/2} \Big(\mathbb{E}^{{Z}_0}\Big(\exp\Big(32C \theta^{\eta} R^{2\eta-2} \sum_{n=0}^{{T_\theta^R}}  Z_n(\overline{f_b}) \Big)\Big)\Big)^{1/2},\nonumber \end{align} where the last inequality is by the Cauchy-Schwarz inequality. Hence it suffices to bound \begin{align}\label{4e8.75} \mathbb{E}^{{Z}_0}\Big(\exp\Big(32C \theta^{\eta} R^{2\eta-2} \sum_{n=0}^{{T_\theta^R}}  Z_n(\overline{f_a}) \Big)\Big)\ \text{ for any } a\in \mathbb{Z}^3. \end{align} We state in the following proposition a stronger result (with a higher exponent on $\theta$). \begin{proposition}\label{4p5.5} Let $\eta=1/8$.
For any $K>0$, $m>0$ and $T\geq 100$, there exist constants $C_{\ref{4p5.5}}>0$ and $\theta_{\ref{4p5.5}}\geq 100$ depending only on $T,m,K$ such that for all $\theta \geq \theta_{\ref{4p5.5}}$, there is some $R_{\ref{4p5.5}}(T,\theta, m, K)\geq 4\theta$ such that for any $R\geq R_{\ref{4p5.5}}$ and any $Z_0$ satisfying \eqref{4eb1.7}, we have \begin{align}\label{4e8.04} \mathbb{E}^{{Z}_0}\Big(\exp\Big(K \theta^{2\eta} R^{2\eta-2} \sum_{n=0}^{{T_\theta^R}}  Z_n(\overline{f_a}) \Big)\Big)\leq C_{\ref{4p5.5}}(T, m, K), \ \forall a\in \mathbb{Z}^3_R. \end{align} \end{proposition} \begin{proof}[Proof of Proposition \ref{4p3.2}(ii) assuming Proposition \ref{4p5.5}] Let $\eta=1/8$, $m>0$ and $T\geq 100$. Let $K=32C$ and $\theta, R$ be as in Proposition \ref{4p5.5} so that \eqref{4e8.04} holds. Hence it follows that \begin{align}\label{4ec1.4} \mathbb{E}^{{Z}_0}\Big(\exp\Big(32C \theta^{\eta} R^{2\eta-2} \sum_{n=0}^{{T_\theta^R}}  Z_n(\overline{f_a}) \Big)\Big) \leq C_{\ref{4p5.5}}(T, m, 32C),\quad \forall a \in \mathbb{Z}^3. \end{align} Combining \eqref{4e8.46}, \eqref{4e7.25} and \eqref{4ec1.4}, we may conclude \begin{align}\label{4e8.74} &\mathbb{E}^{{Z}_0}\Big(\exp\Big(\frac{\theta^{\eta/2} R^{\eta-2}}{|a-b|^{\eta}} \Big||M_{{T_\theta^R}+1}(\phi_a)|-|M_{{T_\theta^R}+1}(\phi_b)|\Big|\Big)\Big)\leq2 C_{\ref{4p5.5}}(T, m, 32C) ^{1/2}, \end{align} thus completing the proof of Proposition \ref{4p3.2}(ii). \end{proof} The proof of Proposition \ref{4p5.5} is rather involved, and so we postpone it to the end of this section. The reason for considering a different exponent on $\theta$ is that the same term will appear in the proof of Proposition \ref{4p3.2}(i), which we now give. \subsubsection{Proof of Proposition \ref{4p3.2} (i)} We move next to the exponential moment of $M_{{T_\theta^R}+1}(\phi_a)$.
By \eqref{4e7.10}, for any $R\geq 4\theta$ with $\theta\geq 100$ large, we have \[ \theta^{\eta} R^{-2}\| \phi_a\|_\infty\leq \theta^{\eta} R^{-2} \cdot CR\leq C\theta^{\eta-1} \leq 1. \] So we may apply Proposition \ref{4p5.1} with $\phi=\phi_a$ and $\lambda=\theta^{\eta} R^{-2} $ to get \begin{align}\label{4e8.09} &\mathbb{E}^{{Z}_0}\Big(\exp\Big(\theta^{\eta} R^{-2} |M_{{T_\theta^R}+1}(\phi_a)|\Big)\Big)\leq 2\Big(\mathbb{E}^{{Z}_0}\Big(\exp\Big(16 \theta^{2\eta} R^{-4} \langle M(\phi_a)\rangle_{{T_\theta^R}+1}\Big)\Big)\Big)^{1/2}. \end{align} It suffices to bound \begin{align}\label{4e8.43} \mathbb{E}^{{Z}_0}\Big(\exp\Big(16 \theta^{2\eta} R^{-4} \langle M(\phi_a)\rangle_{{T_\theta^R}+1}\Big)\Big). \end{align} Recall from \eqref{4e7.42} that \begin{align}\label{4e8.31} \langle M(\phi_a)\rangle_{{T_\theta^R}+1}\leq&2\sum_{n=0}^{T_\theta^R}\sum_{x\in \mathbb{Z}^d_R} Z_n(x) \cdot \frac{1}{V(R)}\sum_{i=1}^{V(R)} ({\phi_a}({x+e_i}))^2 \nonumber\\ \leq&2C^2\sum_{n=0}^{T_\theta^R} \sum_{x\in \mathbb{Z}^d_R} Z_n(x) \cdot \frac{1}{V(R)} \sum_{i=1}^{V(R)} ({g_{a,3}}({x+e_i}))^2, \end{align} where the last inequality is by \eqref{4e7.10a}. Recall from \eqref{4e8.22} that \[f_a(y)=\sum_{k=1}^\infty \frac{1}{k^{2+\eta}} e^{-\frac{|y-a|^2}{64k}},\] where $\eta=1/8$. We first establish the following bound for $g_{a,3}^2$. \begin{lemma}\label{4l1.93} There is some absolute constant $C_{\ref{4l1.93}}>0$ such that for any $R\geq 400$ and $a\in \mathbb{Z}^d_R$, \begin{align}\label{4eb1.21} (g_{a,3}(y))^2\leq C_{\ref{4l1.93}}R^{2+2\eta} f_a(y)+C_{\ref{4l1.93}}, \quad \forall y\in \mathbb{Z}_R^d. \end{align} \end{lemma} \begin{proof} Recall that \begin{align}\label{4eb1.24} g_{a,3}(y)=R\sum_{k=1}^\infty \frac{1}{k^{3/2}} e^{-|y-a|^2/(32k)}. \end{align} We first consider $|y-a|\geq R\geq 1$. Use Lemma \ref{4l4.1} to see that \begin{align*} (g_{a,3}(y))^2 1_{\{|y-a|\geq R\}} \leq R^2 C_{\ref{4l4.1}}(\frac{1}{2})^2 \frac{32}{|y-a|^{2}} 1_{\{|y-a|\geq R\}} \leq C.
\end{align*} Next, if $1< |y-a|< R$, we use Lemma \ref{4l4.1} again to get \begin{align}\label{ec1.5} (g_{a,3}(y))^2 1_{\{1<|y-a|<R\}} \leq&R^2 C_{\ref{4l4.1}}(\frac{1}{2})^2 \frac{32}{|y-a|^{2}} 1_{\{1<|y-a|<R\}}\nonumber\\ \leq& C R^{2+2\eta} \frac{64^{1+\eta} }{|y-a|^{2+2\eta}} 1_{\{1<|y-a|<R\}}, \end{align} where in the last inequality we have used $|y-a|<R$. On the other hand, we apply Lemma \ref{4l4.1} to $f_a$ to get \begin{align}\label{ec1.6} f_a(y)1_{\{1<|y-a|<R\}}\geq c_{\ref{4l4.1}}(1+\eta) \frac{64^{1+\eta}}{|y-a|^{2+2\eta}}1_{\{1<|y-a|<R\}}. \end{align} Combine \eqref{ec1.5} and \eqref{ec1.6} to arrive at \begin{align*} (g_{a,3}(y))^2 1_{\{1<|y-a|<R\}} \leq C R^{2+2\eta}c_{\ref{4l4.1}}(1+\eta)^{-1}f_a(y)1_{\{1<|y-a|<R\}}\leq CR^{2+2\eta}f_a(y). \end{align*} Finally for $|y-a|\leq 1$, we have \begin{align*} (g_{a,3}(y))^2 1_{\{|y-a|\leq 1\}} \leq R^2 \Big(\sum_{k=1}^\infty \frac{1}{k^{3/2}}\Big)^2 \leq c_1 R^2, \end{align*} for some constant $c_1>0$. Next we have \begin{align*} f_{a}(y) 1_{\{|y-a|\leq 1\}} \geq e^{-\frac{1}{64}}\sum_{k=1}^\infty  \frac{1}{k^{2+\eta}} \geq c_2 \end{align*} for some constant $c_2>0$. Therefore it follows that \begin{align*} (g_{a,3}(y))^2 1_{\{|y-a|\leq 1\}} \leq c_1 R^2\leq \frac{c_1}{c_2} R^{2+2\eta} f_{a}(y) 1_{\{|y-a|\leq 1\}}\leq CR^{2+2\eta}f_a(y). \end{align*} By adjusting constants, we complete the proof.
\end{proof} Apply the above lemma to see that \eqref{4e8.31} becomes \begin{align}\label{4e8.44} \langle M(\phi_a)\rangle_{{T_\theta^R}+1}&\leq 2C^2\sum_{n=0}^{T_\theta^R} \sum_{x\in \mathbb{Z}^d_R} Z_n(x) \cdot \frac{1}{V(R)}\sum_{i=1}^{V(R)} C_{\ref{4l1.93}}R^{2+2\eta} f_a(x+e_i)\\ &\quad +2C^2\sum_{n=0}^{T_\theta^R} \sum_{x\in \mathbb{Z}^d_R} Z_n(x) \cdot C_{\ref{4l1.93}}\leq CR^{2+2\eta}\sum_{n=0}^{T_\theta^R} Z_n(\overline{f_a})+C\sum_{n=0}^{T_\theta^R} Z_n(1).\nonumber \end{align} Returning to \eqref{4e8.43}, we use \eqref{4e8.44} to arrive at \begin{align}\label{4e8.82} &\mathbb{E}^{{Z}_0}\Big(\exp\Big(16 \theta^{2\eta} R^{-4} \langle M(\phi_a)\rangle_{{T_\theta^R}+1}\Big)\Big)\\ \leq& \mathbb{E}^{{Z}_0}\Big(\exp\Big(16 \theta^{2\eta} R^{-4} \Big(CR^{2+2\eta}\sum_{n=0}^{T_\theta^R} Z_n(\overline{f_a})+C\sum_{n=0}^{T_\theta^R} Z_n(1)\Big) \Big)\Big)\nonumber\\ \leq& \Big(\mathbb{E}^{{Z}_0}\Big(\exp\Big(32C\theta^{2\eta} R^{2\eta-2}\sum_{n=0}^{T_\theta^R} Z_n(\overline{f_a})\Big) \Big)\Big)^{1/2} \Big(\mathbb{E}^{{Z}_0}\Big(\exp\Big(32C\theta^{2\eta} R^{-4} \sum_{n=0}^{T_\theta^R} Z_n(1)\Big) \Big)\Big)^{1/2},\nonumber \end{align} where the last inequality is by the Cauchy-Schwarz inequality. Now we are ready to finish the proof of Proposition \ref{4p3.2}(i). \begin{proof}[Proof of Proposition \ref{4p3.2}(i)] Let $\eta=1/8$, $m>0$ and $T\geq 100$. Let $K=32C$ and $\theta, R, Z_0$ be as in Proposition \ref{4p5.5} so that \eqref{4e8.04} holds. Hence we have for any $a \in \mathbb{Z}^d$, \begin{align}\label{4e8.81} \mathbb{E}^{{Z}_0}\Big(\exp\Big(32C\theta^{2\eta} R^{2\eta-2}\sum_{n=0}^{T_\theta^R} Z_n(\overline{f_a})\Big) \Big)\leq C_{\ref{4p5.5}}(T, m, 32C), \end{align} thus giving a bound for the first term on the right-hand side of \eqref{4e8.82}. For the second term, we note that \begin{align}\label{4e5.65} G(1,T_\theta^R)=3+T_\theta^R \leq 2 \frac{TR^2}{\theta}, \end{align} where the last inequality is by \eqref{4e10.06}.
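For the reader's convenience we record the elementary splitting used in the last line of \eqref{4e8.82}, and repeatedly in this section: it is nothing but the Cauchy-Schwarz inequality in exponential form. For any random variables $X,Y$ with the relevant exponential moments finite,

```latex
\begin{align*}
\mathbb{E}\big[e^{X+Y}\big]=\mathbb{E}\big[e^{X}e^{Y}\big]
\leq \Big(\mathbb{E}\big[e^{2X}\big]\Big)^{1/2}\Big(\mathbb{E}\big[e^{2Y}\big]\Big)^{1/2}.
\end{align*}
```

This doubling of the exponent is the source of the factor $32C$ (in place of $16C$) appearing in the two square roots of \eqref{4e8.82}.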
Let $\lambda=32C\theta^{2\eta} R^{-4}$ and $n=T_\theta^R\leq \frac{TR^2}{\theta}$. Then \eqref{4e5.65} implies \begin{align}\label{4e5.66} 2\lambda T_\theta^R e^{\frac{T_\theta^R\theta}{ R^{2}}} G(1,T_\theta^R)\leq 64C\theta^{2\eta} R^{-4} \frac{TR^2}{\theta}e^{T} \cdot 2 \frac{TR^2}{\theta}\leq C(T) \frac{1}{\theta^{2-2\eta}}. \end{align} If we pick $\theta>0$ large enough so that $C(T)/{\theta^{2-2\eta}}\leq 1/2$, then we may apply Proposition \ref{4p1.4} to get (recall $|Z_0|\leq 2R^2\log\theta/{\theta}$) \begin{align}\label{4e8.80} &\mathbb{E}^{{Z}_0}\Big(\exp\Big(32C\theta^{2\eta} R^{-4} \sum_{n=0}^{T_\theta^R} Z_n(1)\Big) \Big)=\mathbb{E}^{{Z}_0}\Big(\exp\Big(\lambda\sum_{n=0}^{T_\theta^R} Z_n(1)\Big) \Big)\nonumber\\ \leq& \exp\Big(\lambda |{Z}_0| e^{\frac{T_\theta^R\theta}{ R^{2}}} G(1,T_\theta^R) (1-2\lambda T_\theta^R e^{\frac{T_\theta^R\theta}{ R^{2}}} G(1,T_\theta^R))^{-1}\Big)\nonumber\\ \leq &\exp\Big(\lambda \frac{2R^2 \log \theta}{\theta} e^{T} \cdot 2 \frac{TR^2}{\theta} (1-C(T)/{\theta^{2-2\eta}})^{-1}\Big)\nonumber\\ \leq &\exp\Big(c(T)\frac{\log \theta}{\theta^{2-2\eta}} \cdot 2\Big)\leq e^{c(T)}, \end{align} where in the second inequality we have used \eqref{4e5.65} and \eqref{4e5.66}. The second last inequality is by $C(T)/{\theta^{2-2\eta}}\leq 1/2$ and the last inequality uses $\log \theta \leq \theta^{2-2\eta}$ for $\theta\geq 100$. Now combine \eqref{4e8.81} and \eqref{4e8.80} to see that \eqref{4e8.82} becomes \begin{align}\label{4e8.83} &\mathbb{E}^{{Z}_0}\Big(\exp\Big(16 \theta^{2\eta} R^{-4} \langle M(\phi_a)\rangle_{{T_\theta^R}+1}\Big)\Big)\leq C_{\ref{4p5.5}}(T, m, 32C)^{1/2} e^{c(T)/2}=C(T,m). \end{align} Hence we conclude from \eqref{4e8.09}, \eqref{4e8.83} that \begin{align}\label{4e8.84} \mathbb{E}^{{Z}_0}\Big(\exp\Big(\theta^{\eta} R^{-2} |M_{{T_\theta^R}+1}(\phi_a)|\Big)\Big)\leq 2{C(T,m)}^{1/2}, \end{align} thus finishing the proof of Proposition \ref{4p3.2}(i).
\end{proof} \subsubsection{Proof of Proposition \ref{4p5.5}} Finally we will prove Proposition \ref{4p5.5}, thus completing the proof of Proposition \ref{4p3.2}. However, applying Proposition \ref{4p1.4} directly will not lead to the conclusion immediately. Recall $f_a(y)=\sum_{k=1}^\infty \frac{1}{k^{2+\eta}} e^{-\frac{|y-a|^2}{64k}}$ where $\eta=1/8$. By Lemma \ref{4l4.1}, for any $|y-a|>1$, $f_a(y)$ is bounded above and below by $|y-a|^{-2-2\eta}$ up to constants, which is too singular a function to integrate in $d=3$. To resolve this issue, recalling the generator $\mathcal{L}$ from \eqref{4e1.7}, we will find some $\psi_a$ such that $\mathcal{L} \psi_a(x)=-f_a(x)$ and then use the martingale problem \eqref{4e1.1} to get the desired bounds. By the Green's function representation (see, e.g., (4.24) and (4.25) of \cite{LL10}), for any $a\in \mathbb{Z}_R^3$, we define \begin{align}\label{4e8.00} \psi_a(x)&:=\sum_{y\in \mathbb{Z}_R^3} \sum_{n=0}^\infty p_n(x-y) f_a(y), \quad \forall x\in \mathbb{Z}_R^3. \end{align} The following lemma justifies the absolute convergence of the above summation. This idea originates from the fact that \[ \Delta^{-1} \frac{1}{|x|^{2+2\eta}}= -c(\eta) \frac{1}{|x|^{2\eta}}, \] where $\Delta^{-1}$ is the inverse Laplacian operator on $\mathbb{R}^3$. A similar idea was used in the proof of Lemma 2.2 in \cite{Hong18}. \begin{lemma}\label{4l5.6} There is some absolute constant $c_{\ref{4l5.6}}>0$ such that for any $R\geq K_{\ref{4p1.1}}$ and any $x,a \in \mathbb{Z}_R^3$, \[ \psi_a(x)=\sum_{y\in \mathbb{Z}_R^3} \sum_{n=0}^\infty p_n(x-y) f_a(y) \leq c_{\ref{4l5.6}} \sum_{k=1}^\infty  \frac{1}{k^{1+\eta}} e^{-\frac{|x-a|^2}{64k}}.
\] \end{lemma} \begin{proof} First we use Proposition \ref{4p1.1}(i) to get \begin{align}\label{4e8.88} \psi_a(x)=&\sum_{y\in \mathbb{Z}_R^3} \sum_{n=0}^\infty p_n(x-y) \sum_{k=1}^\infty  \frac{1}{k^{2+\eta}} e^{-\frac{|y-a|^2}{64k}}\nonumber\\ \leq& \sum_{k=1}^\infty  \frac{1}{k^{2+\eta}} e^{-\frac{|x-a|^2}{64k}}+\sum_{y\in \mathbb{Z}^d_R} \sum_{n=1}^\infty \frac{c_{\ref{4p1.1}}}{n^{3/2}R^3} e^{-\frac{|x-y|^2}{32n}} \sum_{k=1}^\infty \frac{1}{k^{2+\eta}} e^{-\frac{|y-a|^2}{64k}}\nonumber \\ \leq &\sum_{k=1}^\infty  \frac{1}{k^{1+\eta}} e^{-\frac{|x-a|^2}{64k}}+ \sum_{k=1}^\infty \frac{1}{k^{2+\eta}}  \sum_{n=1}^\infty \frac{c_{\ref{4p1.1}}}{n^{3/2}R^3} \sum_{y\in \mathbb{Z}^d_R} e^{-\frac{|x-y|^2}{64n}} e^{-\frac{|y-a|^2}{64k}}, \end{align} where in the last inequality we have used $k^{2+\eta}\geq k^{1+\eta}$, $32<64$ and Fubini's theorem. It suffices to bound the second term above. Use Lemma \ref{4l4.2}(i) with $s=1/(64n)$ and $t=1/(64k)$ to get \begin{align*} \sum_{y\in \mathbb{Z}^d_R} e^{-\frac{|x-y|^2}{64n}} e^{-\frac{|y-a|^2}{64k}} \leq&2^3 e^{-\frac{|x-a|^2}{64(k+n)}} \sum_{y\in \mathbb{Z}^d_R} e^{-\frac{|y|^2}{64n}} e^{-\frac{|y|^2}{64k}} =8 e^{-\frac{|x-a|^2}{64(k+n)}} \sum_{y\in \mathbb{Z}^d_R} e^{-\frac{|y|^2}{64\frac{nk}{n+k}}} \\ \leq& 8 e^{-\frac{|x-a|^2}{64(k+n)}} c_{\ref{4l4.2}} R^3 (\frac{32nk}{n+k})^{3/2}, \end{align*} where the last inequality is by Lemma \ref{4l4.2}(ii) applied with $u=\frac{32nk}{n+k}>1$.
Hence it follows that \begin{align*} I:=&\sum_{k=1}^\infty \frac{1}{k^{2+\eta}}  \sum_{n=1}^\infty \frac{c_{\ref{4p1.1}}}{n^{3/2}R^3} \sum_{y\in \mathbb{Z}^d_R} e^{-\frac{|x-y|^2}{64n}} e^{-\frac{|y-a|^2}{64k}}\\ \leq& \sum_{k=1}^\infty \frac{1}{k^{2+\eta}} \sum_{n=1}^\infty \frac{c_{\ref{4p1.1}}}{n^{3/2}R^3} 8 e^{-\frac{|x-a|^2}{64(k+n)}} c_{\ref{4l4.2}} R^3 (\frac{32nk}{n+k})^{3/2}\\ \leq &C \sum_{k=1}^\infty \frac{1}{k^{\eta+1/2}}  \sum_{n=1}^\infty e^{-\frac{|x-a|^2}{64(k+n)}} (\frac{1}{n+k})^{3/2}=C \sum_{k=1}^\infty \frac{1}{k^{\eta+1/2}}  \sum_{n=k+1}^\infty e^{-\frac{|x-a|^2}{64n}} \frac{1}{n^{3/2}}. \end{align*} Apply Fubini's theorem to the right-hand side above to get \begin{align*} I\leq &C \sum_{n=2}^\infty e^{-\frac{|x-a|^2}{64n}} \frac{1}{n^{3/2}} \sum_{k=1}^{n-1} \frac{1}{k^{\eta+1/2}} \leq C \sum_{n=2}^\infty e^{-\frac{|x-a|^2}{64n}} \frac{1}{n^{3/2}} \cdot C(\eta)n^{\frac{1}{2}-\eta} \\ \leq &C \sum_{n=1}^\infty e^{-\frac{|x-a|^2}{64n}} \frac{1}{n^{1+\eta}}, \end{align*} and so the proof is complete by \eqref{4e8.88}. \end{proof} The above lemma gives the absolute convergence of $\psi_a$ and so we have (recall \eqref{4e1.7}) \begin{align}\label{4e8.25} \mathcal{L} \psi_a(x)=&\frac{1}{V(R)}\sum_{i=1}^{V(R)} (\psi_a(x+e_i)-\psi_a(x))\nonumber\\ =&\sum_{y\in \mathbb{Z}_R^d} f_a(y) \sum_{n=0}^\infty \frac{1}{V(R)}\sum_{i=1}^{V(R)} (p_n(x+e_i-y)-p_n(x-y))\nonumber \\ =&\sum_{y\in \mathbb{Z}_R^d} f_a(y) \sum_{n=0}^\infty (p_{n+1}(x-y)-p_n(x-y))\nonumber \\ =&\sum_{y\in \mathbb{Z}_R^d} f_a(y) (-p_0(x-y))=-f_a(x), \end{align} where the third equality uses \eqref{4e2.4}. By the linearity of $\mathcal{L}$, if we define \begin{align}\label{4e8.01} \overline{\psi_a}(x)&=\frac{1}{V(R)}\sum_{i=1}^{V(R)} \psi_a(x+e_i), \end{align} then it follows that \begin{align}\label{4e8.02} \mathcal{L} \overline{\psi_a}(x)&=-\frac{1}{V(R)}\sum_{i=1}^{V(R)} f_a(x+e_i)=-\overline{f_a}(x).
\end{align} Replace $\phi$ in \eqref{4e1.1} with $\overline{\psi_a}$ and use \eqref{4e8.02} to get for any $N\geq 1$, \begin{align*} Z_{N}( \overline{\psi_a})=&Z_0( \overline{\psi_a})- (1+\frac{\theta}{R^{2}})\sum_{n=0}^{N-1} \sum_{|\alpha|=n} \overline{f_a}(Y^\alpha)+M_N( \overline{\psi_a})+\frac{\theta}{R^{2}}\sum_{n=0}^{N-1} Z_n( \overline{\psi_a})\\ =&Z_0( \overline{\psi_a})- (1+\frac{\theta}{R^{2}})\sum_{n=0}^{N-1} Z_n( \overline{f_a}) +M_N( \overline{\psi_a})+\frac{\theta}{R^{2}}\sum_{n=0}^{N-1} Z_n( \overline{\psi_a}). \end{align*} Let $N=T_\theta^R+1$ and rearrange terms to arrive at \begin{align}\label{4e8.26} (1+\frac{\theta}{R^{2}}) \sum_{n=0}^{T_\theta^R} Z_n( \overline{f_a}) =&Z_0( \overline{\psi_a})-Z_{T_\theta^R+1}( \overline{\psi_a})+M_{T_\theta^R+1}( \overline{\psi_a})+\frac{\theta}{R^{2}}\sum_{n=0}^{T_\theta^R} Z_n( \overline{\psi_a})\nonumber\\ \leq &Z_0( \overline{\psi_a})+M_{T_\theta^R+1}( \overline{\psi_a})+\frac{\theta}{R^{2}}\sum_{n=0}^{T_\theta^R} Z_n( \overline{\psi_a}). \end{align} Now we are ready to give the proof of Proposition \ref{4p5.5}. \begin{proof}[Proof of Proposition \ref{4p5.5}] Let $K>0$, $m>0$ and $T\geq 100$.
For any $\theta\geq 100$ and $R\geq 4\theta$, we use \eqref{4e8.26} to get for any $a\in \mathbb{Z}^3_R$ and any $Z_0$ as in \eqref{4eb1.7}, \begin{align}\label{4e8.05} &\mathbb{E}^{{Z}_0}\Big(\exp\Big(K \theta^{2\eta} R^{2\eta-2} \sum_{n=0}^{{T_\theta^R}}  Z_n(\overline{f_a}) \Big)\Big)\nonumber\\ \leq &\mathbb{E}^{{Z}_0}\Big(\exp\Big(K \theta^{2\eta} R^{2\eta-2} \Big(Z_0( \overline{\psi_a})+M_{T_\theta^R+1}( \overline{\psi_a})+\frac{\theta}{R^{2}}\sum_{n=0}^{T_\theta^R} Z_n( \overline{\psi_a}) \Big)\Big)\Big)\nonumber\\ \leq &\exp\Big(K \theta^{2\eta} R^{2\eta-2} Z_0( \overline{\psi_a})\Big) \times \Big(\mathbb{E}^{{Z}_0}\Big(\exp\Big(2K \theta^{2\eta} R^{2\eta-2} M_{T_\theta^R+1}( \overline{\psi_a}) \Big)\Big)\Big)^{1/2}\nonumber\\ &\quad \quad \times \Big(\mathbb{E}^{{Z}_0}\Big(\exp\Big(2K \theta^{2\eta} R^{2\eta-2}\frac{\theta}{R^{2}} \sum_{n=0}^{T_\theta^R} Z_n( \overline{\psi_a}) \Big)\Big)\Big)^{1/2}, \end{align} where we have used the Cauchy-Schwarz inequality in the last step. It suffices to bound the three terms on the right-hand side of \eqref{4e8.05}, which we now do.\\ (i) First we consider $\sum_{n=0}^{T_\theta^R} Z_n( \overline{\psi_a})$. By Lemma \ref{4l5.6} and Lemma \ref{4l1.3}, for any $a,x\in \mathbb{Z}_R^d$, we have for any $n\geq 1$, \begin{align*} \sum_{y\in \mathbb{Z}^d_R} p_n(y-x) \psi_a(y)\leq c_{\ref{4l5.6}} \sum_{y\in \mathbb{Z}^d_R} p_n(y-x) \sum_{k=1}^\infty \frac{1}{k^{1+\eta}} e^{-\frac{|y-a|^2}{64k}} \leq c_{\ref{4l5.6}} c_{\ref{4l1.3}} n^{-\eta}, \end{align*} and so it follows that \begin{align}\label{4e5.47} \sum_{y\in \mathbb{Z}^d_R} p_n(y-x) \overline{\psi_a}(y)\leq c_{\ref{4l5.6}} c_{\ref{4l1.3}} n^{-\eta}. \end{align} By Lemma \ref{4l5.6}, we also have \begin{align}\label{4e5.60} \|\overline{\psi_a}\|_\infty \leq C \end{align} for some constant $C>0$.
Apply \eqref{4e5.47} and \eqref{4e5.60} to get \begin{align}\label{4e8.06} G(\overline{\psi_a},{T_\theta^R})=&3\|\overline{\psi_a}\|_\infty+\sum_{k=1}^{{T_\theta^R}} \sup_{y \in \mathbb{Z}_R^d} \sum_{z\in \mathbb{Z}^d_R} p_k(y-z) \overline{\psi_a}(z)\nonumber\\ \leq &C+\sum_{k=1}^{{T_\theta^R}} c_{\ref{4l5.6}}c_{\ref{4l1.3}} k^{-\eta}\leq C +C (T_\theta^R)^{1-\eta}\leq c(T) \frac{R^{2-2\eta}}{\theta^{1-\eta}}. \end{align} Let $\lambda=2K \theta^{1+2\eta} R^{2\eta-4}$ and $n=T_\theta^R\leq \frac{TR^2}{\theta}$. Then by \eqref{4e8.06} we have \begin{align}\label{4e5.48} 2\lambda T_\theta^R e^{\frac{T_\theta^R\theta}{ R^{2}}} G(\overline{\psi_a},T_\theta^R)\leq 4K \theta^{1+2\eta} R^{2\eta-4} \frac{TR^2}{\theta}e^{T} \cdot c(T) \frac{R^{2-2\eta}}{\theta^{1-\eta}}\leq C(T) K \frac{1}{\theta^{1-3\eta}}. \end{align} If we pick $\theta>0$ large enough so that $C(T)K/{\theta^{1-3\eta}}\leq 1/2$, then we may apply Proposition \ref{4p1.4} to get (recall $|Z_0|\leq 2R^2\log\theta/{\theta}$) \begin{align}\label{4e8.72} &\mathbb{E}^{Z_0}\Big(\exp\Big(2K \theta^{1+2\eta} R^{2\eta-4} \sum_{k=0}^{{T_\theta^R}} Z_k(\overline{\psi_a})\Big)\Big)=\mathbb{E}^{Z_0}\Big(\exp\Big(\lambda\sum_{k=0}^{{T_\theta^R}} Z_k(\overline{\psi_a})\Big)\Big)\nonumber\\ \leq& \exp\Big(\lambda |Z_0| e^{\frac{T_\theta^R\theta}{ R^{2}}} G(\overline{\psi_a},{T_\theta^R}) (1-2\lambda {T_\theta^R} e^{\frac{T_\theta^R\theta}{ R^{2}}} G(\overline{\psi_a},T_\theta^R))^{-1}\Big)\nonumber\\ \leq &\exp\Big(\lambda \frac{2R^2\log \theta}{{\theta}} e^{T} \cdot c(T)\frac{R^{2-2\eta}}{\theta^{1-\eta}}(1-C(T)K/{\theta^{1-3\eta}})^{-1}\Big)\nonumber\\ \leq &\exp\Big(Kc(T) \frac{\log \theta}{\theta^{1-3\eta}} \cdot 2\Big)\leq \exp(2Kc(T)), \end{align} where in the second inequality we have used \eqref{4e8.06}, \eqref{4e5.48}. The second last inequality is by $C(T)K/{\theta^{1-3\eta}}\leq 1/2$ and the last inequality uses $\log \theta \leq \theta^{1-3\eta}$ for $\theta\geq 100$.
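The bound $\sum_{k=1}^{T_\theta^R} k^{-\eta}\leq C(T_\theta^R)^{1-\eta}$ used in \eqref{4e8.06} is the usual integral comparison, which we spell out: for any $\beta\in(0,1)$ and $n\geq 1$,

```latex
\begin{align*}
\sum_{k=1}^{n} k^{-\beta}\leq 1+\int_{1}^{n} x^{-\beta}\,dx
=1+\frac{n^{1-\beta}-1}{1-\beta}\leq \frac{2}{1-\beta}\,n^{1-\beta}.
\end{align*}
```

Taking $\beta=\eta$ and using $T_\theta^R\leq TR^2/\theta$ then gives the factor $c(T)R^{2-2\eta}/\theta^{1-\eta}$ appearing on the right-hand side of \eqref{4e8.06}.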
\\ (ii) Next we consider $M_{T_\theta^R+1}( \overline{\psi_a})$. Use $R\geq 4\theta$ and \eqref{4e5.60} to get \begin{align} 2K \theta^{2\eta} R^{2\eta-2} \|\overline{\psi_a}\|_\infty \leq 2K\theta^{4\eta-2} 4^{2\eta-2} C\leq 1 \end{align} provided $\theta\geq 100$ is chosen large enough. Then we may apply Proposition \ref{4p5.1} with $\phi= \overline{\psi_a}$ and $\lambda=2K \theta^{2\eta} R^{2\eta-2} $ to get \begin{align}\label{4e8.50} &\mathbb{E}^{{Z}_0}\Big(\exp\Big(2K \theta^{2\eta} R^{2\eta-2} M_{T_\theta^R+1}( \overline{\psi_a}) \Big)\Big)\nonumber\\ \leq& 2\Big(\mathbb{E}^{{Z}_0}\Big(\exp\Big(64K^2 \theta^{4\eta} R^{4\eta-4} \langle M( \overline{\psi_a})\rangle_{T_\theta^R+1} \Big)\Big)\Big)^{1/2}. \end{align} Use \eqref{4e7.42} to see that \begin{align}\label{4e5.61} \langle M( \overline{\psi_a})\rangle_{T_\theta^R+1}\leq &2\sum_{n=0}^{T_\theta^R} \sum_{x\in \mathbb{Z}^d_R} Z_n(x) \cdot \frac{1}{V(R)} \sum_{i=1}^{V(R)} (\overline{\psi_a}({x+e_i}))^2 . \end{align} By Jensen's inequality, we have for any $y\in \mathbb{Z}_R^d$, \begin{align*} (\overline{\psi_a}(y))^2&=\Big(\frac{1}{V(R)}\sum_{i=1}^{V(R)} \psi_a(y+e_i)\Big)^2\leq \frac{1}{V(R)}\sum_{i=1}^{V(R)} (\psi_a(y+e_i))^2, \end{align*} and so \eqref{4e5.61} becomes \begin{align*} \langle M( \overline{\psi_a})\rangle_{T_\theta^R+1} \leq&2\sum_{n=0}^{T_\theta^R} \sum_{x\in \mathbb{Z}^d_R} Z_n(x) \cdot \frac{1}{V(R)} \sum_{i=1}^{V(R)} \frac{1}{V(R)} \sum_{j=1}^{V(R)}(\psi_a({x+e_i+e_j}))^2 . \end{align*} For any $a \in \mathbb{Z}^d_R$, we define \begin{align}\label{4e8.28} h_a(x)= \frac{1}{V(R)} \sum_{i=1}^{V(R)} \frac{1}{V(R)} \sum_{j=1}^{V(R)}(\psi_a({x+e_i+e_j}))^2, \quad \forall x \in \mathbb{Z}^d_R, \end{align} so that \begin{align*} \langle M( \overline{\psi_a})\rangle_{T_\theta^R+1} \leq&2\sum_{n=0}^{T_\theta^R} \sum_{x\in \mathbb{Z}^d_R} Z_n(x) \cdot h_a(x) = 2\sum_{n=0}^{T_\theta^R} Z_n( h_a).
\end{align*} Returning to \eqref{4e8.50}, we have \begin{align}\label{4e8.51} &\mathbb{E}^{{Z}_0}\Big(\exp\Big(2K \theta^{2\eta} R^{2\eta-2} M_{T_\theta^R+1}( \overline{\psi_a}) \Big)\Big)\nonumber\\ \leq& 2\Big(\mathbb{E}^{{Z}_0}\Big(\exp\Big(128K^2 \theta^{4\eta} R^{4\eta-4} \sum_{n=0}^{T_\theta^R} Z_n( h_a) \Big)\Big)\Big)^{1/2}. \end{align} It suffices to bound \begin{align}\label{4e8.52} \mathbb{E}^{{Z}_0}\Big(\exp\Big(128 K^2 \theta^{4\eta} R^{4\eta-4} \sum_{n=0}^{T_\theta^R} Z_n( h_a) \Big)\Big). \end{align} Use Lemma \ref{4l5.6} and then Lemma \ref{4l4.1.1} to see that \begin{align}\label{4e8.07} (\psi_a(y))^2 \leq& c_{\ref{4l5.6}}^2 \Big(\sum_{k=1}^\infty  \frac{1}{k^{1+\eta}} e^{-\frac{|y-a|^2}{64k}}\Big)^2\nonumber\\ \leq& c_{\ref{4l5.6}}^2 C_{\ref{4l4.1.1}}(\eta)\sum_{k=1}^\infty \frac{1}{k^{1+2\eta}} e^{-\frac{|y-a|^2}{64k}}, \quad \forall y\in \mathbb{Z}_R^d. \end{align} Now apply \eqref{4e8.07} and Lemma \ref{4l1.3} to get for any $n\geq 1$ and $x\in \mathbb{Z}_R^d$, \begin{align*} \sum_{y\in \mathbb{Z}^d_R} p_n(y-x) (\psi_a(y))^2 \leq C(\eta) \sum_{y\in \mathbb{Z}^d_R} p_n(y-x)\sum_{k=1}^\infty \frac{1}{k^{1+2\eta}} e^{-\frac{|y-a|^2}{64k}}\leq C(\eta) c_{\ref{4l1.3}} \cdot n^{-2\eta}. \end{align*} We conclude from \eqref{4e8.28} and the above that \begin{align}\label{4e5.63} \sum_{y\in \mathbb{Z}^d_R} p_n(y-x) h_a(y) \leq C(\eta) c_{\ref{4l1.3}} \cdot n^{-2\eta}\leq C n^{-2\eta}. \end{align} Recall \eqref{4e8.28} again and use Lemma \ref{4l5.6} to get $\|h_a\|_\infty \leq \|\psi_a\|_\infty^2\leq c$ for some $c>0$. Now use \eqref{4e5.63} to arrive at \begin{align}\label{4e8.08} G(h_a,{T_\theta^R})=&3\|h_a\|_\infty+\sum_{n=1}^{{T_\theta^R}} \sup_{y \in \mathbb{Z}_R^d} \sum_{z\in \mathbb{Z}^d_R} p_n(y-z) h_a(z)\nonumber\\ \leq &c+\sum_{n=1}^{{T_\theta^R}}C n^{-2\eta}\leq c +C (T_\theta^R)^{1-2\eta}\leq c(T) \frac{R^{2-4\eta}}{\theta^{1-2\eta}}.
\end{align} \noindent Let $\lambda=128 K^2 \theta^{4\eta} R^{4\eta-4}$ and $n=T_\theta^R\leq \frac{TR^2}{\theta}$. By \eqref{4e8.08} we have \begin{align}\label{4e5.64} 2\lambda T_\theta^R e^{\frac{T_\theta^R\theta}{ R^{2}}} G(h_a,T_\theta^R)\leq 256 K^2 \theta^{4\eta} R^{4\eta-4} \frac{TR^2}{\theta}e^{T} \cdot c(T) \frac{R^{2-4\eta}}{\theta^{1-2\eta}}\leq C(T) K^2 \frac{1}{\theta^{2-6\eta}}. \end{align} If we pick $\theta>0$ large enough so that $C(T)K^2/{\theta^{2-6\eta}}\leq 1/2$, then we may apply Proposition \ref{4p1.4} to get (recall $|Z_0|\leq 2R^2\log\theta/{\theta}$) \begin{align}\label{4e8.60} &\mathbb{E}^{Z_0}\Big(\exp\Big(128 K^2 \theta^{4\eta} R^{4\eta-4} \sum_{k=0}^{{T_\theta^R}} Z_k(h_a)\Big)\Big)\nonumber\\ \leq &\exp\Big(\lambda |Z_0| e^{\frac{T_\theta^R\theta}{ R^{2}}} G(h_a,{T_\theta^R}) (1-2\lambda {T_\theta^R}e^{\frac{T_\theta^R\theta}{ R^{2}}} G(h_a,T_\theta^R))^{-1}\Big)\nonumber\\ \leq &\exp\Big(\lambda \frac{2R^2\log \theta}{{\theta}} e^{T}\cdot c(T)\frac{R^{2-4\eta}}{\theta^{1-2\eta}}(1-C(T)K^2/{\theta^{2-6\eta}})^{-1}\Big)\nonumber\\ \leq&\exp\Big( c(T) K^2\frac{\log \theta}{\theta^{2-6\eta}} \cdot 2\Big)\leq e^{c(T) K^2}, \end{align} where in the second inequality we have used \eqref{4e8.08} and \eqref{4e5.64}. The second last inequality is by $C(T)K^2/{\theta^{2-6\eta}}\leq 1/2$ and the last inequality uses $\log \theta \leq \theta^{2-6\eta}$ for $\theta\geq 100$. Now combine \eqref{4e8.51} and \eqref{4e8.60} to see that if $\theta>100$ is chosen large enough, then we have \begin{align}\label{4e8.73} &\mathbb{E}^{{Z}_0}\Big(\exp\Big(2K \theta^{2\eta} R^{2\eta-2} M_{T_\theta^R+1}( \overline{\psi_a}) \Big)\Big)\leq 2e^{c(T) K^2/2}. \end{align} (iii) It remains to bound $Z_0( \overline{\psi_a})$. We first give the following bound on $\psi_a$.
\begin{lemma}\label{4l1.04} There is some absolute constant $C_{\ref{4l1.04}}>0$ such that for any $R\geq K_{\ref{4p1.1}}$ and $a\in \mathbb{Z}^d_R$, \begin{align}\label{4eb1.23} R^{2\eta} \psi_a(x)\leq C_{\ref{4l1.04}}g_{a,3}(x)+C_{\ref{4l1.04}}, \quad \forall x\in \mathbb{Z}_R^d. \end{align} \end{lemma} \begin{proof} For any $a, x\in \mathbb{Z}^d_R$, we use Lemma \ref{4l5.6} and Lemma \ref{4l4.1} to get \begin{align}\label{4e8.29} R^{2\eta} \psi_a(x)\leq &R^{2\eta} c_{\ref{4l5.6}} \sum_{k=1}^\infty \frac{1}{k^{1+\eta}} e^{-\frac{|x-a|^2}{64k}}\nonumber \\ \leq& R^{2\eta}C 1_{\{|x-a|\leq 1\}}+R^{2\eta} c_{\ref{4l5.6}} C_{\ref{4l4.1}}(\eta) \frac{64^\eta}{|x-a|^{2\eta}} 1_{\{|x-a|>1\}}\nonumber\\ \leq &R^{2\eta}C 1_{\{|x-a|\leq 1\}}+C \frac{R^{2\eta}}{|x-a|^{2\eta}} 1_{\{1<|x-a|<R\}}+ C 1_{\{|x-a|\geq R\}}\nonumber\\ \leq &CR 1_{\{|x-a|\leq 1\}}+C \frac{R}{|x-a|} 1_{\{1<|x-a|<R\}}+ C 1_{\{|x-a|\geq R\}}, \end{align} where in the last inequality we have used $R^{2\eta}\leq R$ for the first term (recall $R\geq 1$) and $R/|x-a|>1$ for the second term. Recall from \eqref{4eb1.24} that \begin{align}\label{4e8.30} g_{a,3}(x)=R\sum_{k=1}^\infty \frac{1}{k^{3/2}} e^{-|x-a|^2/(32k)}. \end{align} If $|x-a|>R$, by \eqref{4e8.29} we get $R^{2\eta} \psi_a(x) 1_{\{|x-a|\geq R\}}\leq C$, thus giving \eqref{4eb1.23}. Turning to $|x-a|\leq 1$, we can find some constant $c_1>0$ such that \begin{align} g_{a,3}(x)1_{\{|x-a|\leq 1\}} \geq R e^{-1/32}\sum_{k=1}^\infty \frac{1}{k^{3/2}} 1_{\{|x-a|\leq 1\}} \geq c_1 R 1_{\{|x-a|\leq 1\}}, \end{align} thus giving \begin{align*} R^{2\eta} \psi_a(x) 1_{\{|x-a|\leq 1\}}\leq CR 1_{\{|x-a|\leq 1\}}\leq \frac{C}{c_1} g_{a,3}(x)1_{\{|x-a|\leq 1\}}\leq Cg_{a,3}(x).
\end{align*} Finally if $1<|x-a|<R$, we may apply Lemma \ref{4l4.1} to get \begin{align*} g_{a,3}(x)1_{\{1<|x-a|<R\}} \geq R c_{\ref{4l4.1}}(\frac{1}{2}) \frac{32^{1/2}}{|x-a|} 1_{\{1<|x-a|<R\}}\geq c_2 \frac{R}{|x-a|} 1_{\{1<|x-a|<R\}} \end{align*} for some constant $c_2>0$. By \eqref{4e8.29}, we get \begin{align*} R^{2\eta} \psi_a(x) 1_{\{1<|x-a|<R\}}\leq C\frac{R}{|x-a|} 1_{\{1<|x-a|<R\}}\leq \frac{C}{c_2} g_{a,3}(x)1_{\{1<|x-a|<R\}}\leq Cg_{a,3}(x). \end{align*} By adjusting constants, we complete the proof. \end{proof} Now we may apply the above lemma to get for any $a\in \mathbb{Z}^d_R$, \begin{align*} R^{2\eta-2} Z_0( \overline{\psi_a})=&R^{-2} \frac{1}{V(R)}\sum_{i=1}^{V(R)} \sum_{x\in \mathbb{Z}^d_R} Z_0(x) R^{2\eta}\psi_{a}(x+e_i)\\ \leq &C_{\ref{4l1.04}} R^{-2} Z_0(1)+ C_{\ref{4l1.04}} R^{-2} \frac{1}{V(R)}\sum_{i=1}^{V(R)} \sum_{x\in \mathbb{Z}^d_R} Z_0(x) g_{a,3}(x+e_i)\\ \leq &C R^{-2} Z_0(1)+ C R^{-2} \frac{1}{V(R)}\sum_{i=1}^{V(R)} Z_0(g_{a-e_i,3}). \end{align*} Recall that $Z_0$ is as in \eqref{4eb1.7} and we use conditions (ii) and (iii) to see that the above becomes \begin{align}\label{4ed8.71} R^{2\eta-2} Z_0( \overline{\psi_a})\leq &C \frac{\log \theta}{\theta}+ C \frac{m}{\theta^{1/4}} \leq C \frac{m+1}{\theta^{1/4}}, \end{align} where the last inequality uses $\log \theta \leq \theta^{3/4}$ for $\theta>0$. This is the only place we use the regularity condition (iii) of $Z_0$ when calculating the exponential moments. Returning to \eqref{4e8.05}, we use the above to get \begin{align}\label{4e8.71} \exp\Big(K \theta^{2\eta} R^{2\eta-2} Z_0( \overline{\psi_a})\Big)\leq \exp\Big(K \theta^{2\eta} C \frac{m+1}{\theta^{1/4}}\Big)= e^{CK(m+1)}, \end{align} where the last equality is by $\eta=1/8$. 
Finally we combine \eqref{4e8.05}, \eqref{4e8.72}, \eqref{4e8.73} and \eqref{4e8.71} to see that \begin{align} &\mathbb{E}^{{Z}_0}\Big(\exp\Big(K \theta^{2\eta} R^{2\eta-2} \sum_{n=0}^{{T_\theta^R}}  Z_n(\overline{f_a}) \Big)\Big)\nonumber\\ \leq &e^{CK(m+1)}\cdot (2e^{c(T) K^2/2})^{1/2} (e^{2Kc(T)})^{1/2}=C(T,m,K), \end{align} thus completing the proof. \end{proof} \section{Regularity of branching random walk}\label{4s7} In this section we give the proof of Proposition \ref{4p3}. Recall from \eqref{4e12.01} that we assume \begin{align}\label{4eb1.31} Z_0(1)\leq 2R^{d-1} f_d(\theta)/\theta. \end{align} Recall $T_\theta^R=[{TR^{d-1}}/{\theta}]$ and \begin{align}\label{4eb1.34} 200\leq \frac{1}{2} \frac{TR^{d-1}}{\theta} \leq T_\theta^R\leq \frac{TR^{d-1}}{\theta}. \end{align} Before proceeding to the proof of Proposition \ref{4p3}, we first give the proof of Corollary \ref{4c0.1} by assuming Proposition \ref{4p3}. \begin{proof}[Proof of Corollary \ref{4c0.1} assuming Proposition \ref{4p3}] Fix $\varepsilon_0\in (0,1)$, $T\geq 100+\varepsilon_0^{-1}$. Let $\theta_{\ref{4c0.1}}=\theta_{\ref{4p3}}\geq 100$ and choose $\theta\geq \theta_{\ref{4c0.1}}$. Let $C_{\ref{4c0.1}}=C_{\ref{4p3}}\geq 4\theta$ and choose $R\geq C_{\ref{4p3}}$. Let $Z_0$ be as in \eqref{4eb1.31}. First we use \eqref{4ea4.5} to see that \[\mathbb{E}^{Z_0}(Z_{T_\theta^R}(1))=(1+\frac{\theta}{R^{d-1}})^{T_\theta^R}Z_0(1)\leq e^TZ_0(1),\] where the inequality uses \eqref{4eb1.34}. By Markov's inequality, it follows that \begin{align}\label{4e11.42} \mathbb{P}^{Z_0}\Big(Z_{T_\theta^R}(1)\geq e^{2T}Z_0(1)\Big)\leq e^{-T}\leq \varepsilon_0, \end{align} where the last inequality is by $e^T\geq T\geq \varepsilon_0^{-1}$. Hence with probability larger than $1-\varepsilon_0$, we have $Z_{T_\theta^R}(1)\leq e^{2T}Z_0(1)$. Next, recalling $R_\theta=\sqrt{R^{d-1}/\theta}$, we fix $u\in Q_{8 \sqrt{\log f_d(\theta)}R_\theta} (0)^c$.
In view of Proposition \ref{4p3}, it suffices to get a uniform-in-$u$ bound for $\tilde{Z}_{T_\theta^R}(g_{u,d})$, where $\tilde{Z}_{T_\theta^R}(\cdot)=Z_{T_\theta^R}(\cdot \cap Q_{4R_\theta}(0))$. Notice that $8\sqrt{\log f_d(\theta)}\geq 8$ for $\theta\geq 100$. Hence for any $x\in Q_{4R_\theta}(0)$, we have $|u-x|\geq 4R_\theta$. In $d=2$, we use \eqref{4e10.32} to see that for any $x\in Q_{4R_\theta}(0)$, \begin{align*} g_{u,2}(x)\leq C\Big(1+\log^+ \Big(\frac{R}{\theta}\frac{1}{16R_\theta^2} \Big)\Big)= C, \end{align*} and so it follows that \begin{align*} \tilde{Z}_{T_\theta^R}(g_{u,2})=&\sum_{x \in \mathbb{Z}_R^d \cap Q_{4R_\theta}(0)} {Z}_{T_\theta^R}(x) g_{u,2}(x)\leq CZ_{T_\theta^R}(1). \end{align*} On the event $\{{Z}_{T_\theta^R}(1)\leq e^{2T} Z_0(1)\}$, the above becomes \begin{align}\label{4ea4.6} \tilde{Z}_{T_\theta^R}(g_{u,2}) \leq C\cdot e^{2T} Z_0(1) \leq Ce^{2T} \frac{2R \sqrt{\theta}}{\theta}\leq C(T)\frac{R}{\theta^{1/4}}, \end{align} where the second inequality uses \eqref{4eb1.31}. Similarly in $d=3$, by \eqref{4e10.32} we have for any $x\in Q_{4R_\theta}(0)$, \begin{align}\label{4e11.44} g_{u,3}(x)\leq C\frac{R}{4R_\theta}\leq C\sqrt{\theta}. \end{align} It follows that \begin{align*} \tilde{Z}_{T_\theta^R}(g_{u,3})=&\sum_{x \in \mathbb{Z}_R^d \cap Q_{4R_\theta}(0)} {Z}_{T_\theta^R}(x) g_{u,3}(x) \leq C\sqrt{\theta} Z_{T_\theta^R}(1). \end{align*} On the event $\{{Z}_{T_\theta^R}(1)\leq e^{2T} Z_0(1)\}$, we have \begin{align}\label{4ea4.7} \tilde{Z}_{T_\theta^R}(g_{u,3})\leq C\sqrt{\theta} \cdot e^{2T} Z_0(1) \leq C\sqrt{\theta} \cdot e^{2T} \frac{2R^2 \log \theta}{\theta}\leq C(T)\frac{R^2}{\theta^{1/4}}, \end{align} where the last inequality is by $\log \theta\leq \theta^{1/4}$ for $\theta\geq 100$.
Now we conclude from \eqref{4e11.42}, \eqref{4ea4.6}, \eqref{4ea4.7} that \begin{align} \mathbb{P}^{Z_0}\Big(\tilde{Z}_{T_\theta^R}(g_{u,d})\leq C(T) \frac{R^{d-1}}{\theta^{1/4}},\quad \forall u\in Q_{8 \sqrt{\log f_d(\theta)}R_\theta} (0)^c \Big)\geq 1-\varepsilon_0. \end{align} By Proposition \ref{4p3}, the proof is complete by letting $m_{\ref{4c0.1}}=m_{\ref{4p3}}+C(T)$. \end{proof} Now we return to Proposition \ref{4p3}. To do this, we will calculate the corresponding exponential moments. Throughout the rest of this section, we fix $\eta=1/8$. \begin{proposition}\label{4p3.6} Let $d=2$. For any $T\geq 100$, there exist constants $C_{\ref{4p3.6}}(T)>0$ and $\theta_{\ref{4p3.6}}(T)\geq 100$ such that for all $\theta \geq \theta_{\ref{4p3.6}}(T)$, there is some $K_{\ref{4p3.6}}(T,\theta)\geq 4\theta$ such that for any $R\geq K_{\ref{4p3.6}}$ and any $Z_0$ satisfying \eqref{4eb1.31}, we have \begin{align*} &\text{(i) } \mathbb{E}^{Z_0}\Big(\exp\Big({\theta}^{1/2} R^{-1} Z_{T_\theta^R}(g_{u,2})\Big)\Big)\leq C_{\ref{4p3.6}}(T), \quad \forall u \in \mathbb{R}^2;\\ &\text{(ii) }\mathbb{E}^{{Z}_0}\Big(\exp\Big({\theta}^{1/2} R^{-1} \frac{(R/\theta)^{\eta/2}}{|u-v|^{\eta}} |{Z}_{T_\theta^R}(g_{u,2})-{Z}_{T_\theta^R}(g_{v,2})|\Big)\Big)\leq C_{\ref{4p3.6}}(T), \quad \forall u\neq v \in \mathbb{R}^2. \end{align*} \end{proposition} \begin{proposition}\label{4p3.7} Let $d=3$.
For any $T\geq 100$, there exist constants $C_{\ref{4p3.7}}(T)>0$ and $\theta_{\ref{4p3.7}}(T)\geq 100$ such that for all $\theta \geq \theta_{\ref{4p3.7}}(T)$, there is some $K_{\ref{4p3.7}}(T,\theta)\geq 4\theta$ such that for any $R\geq K_{\ref{4p3.7}}$ and any $Z_0$ satisfying \eqref{4eb1.31}, we have \begin{align*} &\text{(i) } \mathbb{E}^{Z_0}\Big(\exp\Big(\frac{\theta^{1/2}R^{-2} }{\log \theta} Z_{T_\theta^R}(g_{u,3})\Big)\Big)\leq C_{\ref{4p3.7}}(T), \quad \forall u \in \mathbb{R}^3;\\ &\text{(ii) }\mathbb{E}^{{Z}_0}\Big(\exp\Big(\frac{\theta^{1/2}R^{-2} }{\log \theta} \frac{(R^2/\theta)^{\eta/2}}{|u-v|^{\eta}} |{Z}_{T_\theta^R}(g_{u,3})-{Z}_{T_\theta^R}(g_{v,3})|\Big)\Big)\leq C_{\ref{4p3.7}}(T), \quad \forall u\neq v \in \mathbb{R}^3. \end{align*} \end{proposition} \begin{proof}[Proof of Proposition \ref{4p3} assuming Propositions \ref{4p3.6}, \ref{4p3.7}] For any $T, \theta, R$ fixed, the random measure ${Z}_{T_\theta^R}$ is a.s. finite and $\{g_{u,d}: u\in \mathbb{R}^d\}$ are continuous functions that are uniformly bounded (see, e.g., \eqref{4eb1.32} and \eqref{4e6.20}). Hence the family $\{{Z}_{T_\theta^R}(g_{u,d}): u\in \mathbb{R}^d\}$ is an almost surely continuous random field. By applying Lemma \ref{4l2.2}, we may finish the proof of Proposition \ref{4p3} in a way similar to that of Corollary \ref{4c2.1}. So the details are omitted. \end{proof} It remains to prove Propositions \ref{4p3.6}, \ref{4p3.7}, which we now give. \subsection{Exponential moments of $Z_{{T_\theta^R}}(g_{u,2})$ in $d=2$} Let $d=2$. 
For any $u\in \mathbb{R}^2$ and $x\in \mathbb{Z}_R^2$, Lemma \ref{4l3.3} with $n=T_\theta^R$ and $\beta=1$ implies \begin{align}\label{4eb1.33} \sum_{y\in \mathbb{Z}^d_R} p_{T_\theta^R}(x-y) g_{u,2}(y)\leq& c_{\ref{4l3.3}} \Big(1+ \frac{1}{T_\theta^R} \log \frac{2R}{\theta}+\Big(\frac{R}{T_\theta^R\theta}\Big)^{1/2}\Big)\nonumber\\ \leq &C\Big(1+ \frac{2\theta}{TR} \frac{2R}{\theta}+\Big(\frac{R}{\theta}\frac{2\theta}{TR}\Big)^{1/2}\Big)\leq C(T), \end{align} where in the second inequality we have used \eqref{4eb1.34} and $\log s\leq s$, $\forall s>0$. Apply Proposition \ref{4p1.2} and \eqref{4eb1.34} to get \begin{align*} \mathbb{E}^{x}({Z}_{{T_\theta^R}}(g_{u,2})) & =(1+\frac{\theta}{R})^{{T_\theta^R}} \sum_{y\in \mathbb{Z}^d_R} p_{{T_\theta^R}}(x-y) g_{u,2}(y) \leq e^{\frac{T_\theta^R \theta}{R}} C(T)\leq c(T). \end{align*} So it follows that (recall $|Z_0|\leq 2R/\sqrt{\theta}$ in $d=2$) \begin{align}\label{4e5.34} \mathbb{E}^{Z_0}({Z}_{{T_\theta^R}}(g_{u,2}))\leq c(T) |Z_0|\leq C(T)\frac{R}{\theta^{1/2}}. \end{align} \begin{proof}[Proof of Proposition \ref{4p3.6}(i)] Let $\lambda={\theta}^{1/2} R^{-1}$ and $n=T_\theta^R\leq \frac{TR}{\theta}$. Next, recall \eqref{4ec7.4} to see that \begin{align}\label{4eb1.36} \lambda e^{\frac{T_\theta^R\theta}{R}} G(g_{u,2},T_\theta^R)\leq {\theta}^{1/2} R^{-1} e^{T} \cdot C(T) \frac{R}{\theta}\leq c(T) \frac{1}{\theta^{1/2}}.
\end{align} If we pick $\theta>0$ large enough so that $c(T)/{\theta^{1/2}}\leq 1/2$, then we may apply Corollary \ref{4c1.2} to get \begin{align*} &\mathbb{E}^{Z_0}\Big(\exp\Big(\lambda Z_{{T_\theta^R}}(g_{u,2})\Big)\Big)\leq \exp\Big(\lambda \mathbb{E}^{{Z}_0}({Z}_{{T_\theta^R}}(g_{u,2})) (1-\lambda e^{\frac{T_\theta^R\theta}{R}} G(g_{u,2},{T_\theta^R}))^{-1}\Big)\\ \leq &\exp\Big(\lambda C(T)\frac{R}{\theta^{1/2}} (1-c(T)/{\theta^{1/2}})^{-1}\Big)\leq \exp\Big(\lambda C(T)\frac{R}{\theta^{1/2}} \cdot 2\Big)=e^{2C(T)}, \end{align*} where we have used \eqref{4e5.34}, \eqref{4eb1.36} in the second inequality and the last inequality is by $c(T)/{\theta^{1/2}}\leq 1/2$. Thus the proof of Proposition \ref{4p3.6}(i) is finished. \end{proof} Turning to the difference moments in Proposition \ref{4p3.6}(ii), we fix any $u\neq v \in \mathbb{R}^d$. By (3.44) of \cite{Sug89}, for any $0<\alpha\leq 1$, there exists some constant $C(\alpha)>0$ such that \begin{align}\label{4e6.18a} |e^{-\frac{|x|^2}{2t}}-e^{-\frac{|y|^2}{2t}}|\leq C(\alpha) t^{-\alpha/2} |x-y|^\alpha (e^{-\frac{|x|^2}{4t}}+e^{-\frac{|y|^2}{4t}}),\ \forall t>0, x,y\in \mathbb{R}^d. \end{align} Use the above with $\alpha=\eta$ to see that for any $y\in \mathbb{R}^d$, we have \begin{align}\label{4e10.42} |g_{u,2}(y)-g_{v,2}(y)|\leq& \sum_{k=1}^\infty e^{-k\theta/R} \frac{1}{k} |e^{-\frac{|y-u|^2}{32k}}-e^{-\frac{|y-v|^2}{32k}}|\nonumber\\ \leq & \sum_{k=1}^\infty e^{-k\theta/R} \frac{1}{k} C(\eta) (16k)^{-\eta/2} |u-v|^\eta (e^{-\frac{|y-u|^2}{64k}}+e^{-\frac{|y-v|^2}{64k}})\nonumber\\ \leq& C |u-v|^{\eta} \sum_{k=1}^\infty \frac{1}{k^{(2+\eta)/2}} (e^{-\frac{|y-u|^2}{64k}}+e^{-\frac{|y-v|^2}{64k}}).
\end{align} It follows that for any $n\geq 1$ and $x\in \mathbb{Z}_R^d$, \begin{align}\label{4e10.43} &\sum_{y\in \mathbb{Z}^d_R} p_n(y-x) |g_{u,2}(y)-g_{v,2}(y)|\nonumber\\ \leq &C |u-v|^{\eta} \sum_{y\in \mathbb{Z}^d_R} p_n(y-x) \sum_{k=1}^\infty \frac{1}{k^{(2+\eta)/2}} (e^{-\frac{|y-u|^2}{64k}}+e^{-\frac{|y-v|^2}{64k}})\nonumber\\ \leq & C |u-v|^{\eta} \cdot 2 c_{\ref{4l1.3}} n^{-\eta/2}\leq C |u-v|^{\eta} n^{-\eta/2}, \end{align} where the second inequality is by Lemma \ref{4l1.3}. Apply Proposition \ref{4p1.2} and \eqref{4e10.43} with $n=T_\theta^R$ to get \begin{align*} \mathbb{E}^{x}({Z}_{{T_\theta^R}}(|g_{u,2}-g_{v,2}|))=&(1+\frac{\theta}{R})^{{T_\theta^R}} \sum_{y\in \mathbb{Z}^d_R} p_{{T_\theta^R}}(y-x) |g_{u,2}(y)-g_{v,2}(y)|\\ \leq &e^{\frac{T_\theta^R \theta}{R}} C |u-v|^{\eta} (T_\theta^R)^{-\eta/2}\leq C e^T |u-v|^{\eta} \Big(\frac{1}{2} \frac{TR}{\theta}\Big)^{-\eta/2}\\ =&C(T) |u-v|^\eta R^{-\eta/2} \theta^{\eta/2}, \end{align*} where in the second inequality we have used \eqref{4eb1.34}. So we have (recall $|Z_0|\leq 2R/\theta^{1/2}$) \begin{align}\label{4e5.35} \mathbb{E}^{Z_0}({Z}_{{T_\theta^R}}(|g_{u,2}-g_{v,2}|))&\leq |Z_0| C(T) |u-v|^\eta R^{-\eta/2} \theta^{\eta/2}\nonumber\\ &\leq C(T) |u-v|^\eta R^{1-\eta/2} \theta^{(\eta-1)/2}. \end{align} Next, we apply \eqref{4e10.42} and \eqref{4e10.43} again to see that \begin{align}\label{4eb1.35} G(|g_{u,2}-g_{v,2}|,{T_\theta^R})=&3\|g_{u,2}-g_{v,2}\|_\infty+\sum_{k=1}^{{T_\theta^R}} \sup_{y \in \mathbb{Z}_R^d} \sum_{z\in \mathbb{Z}^d_R} p_k(y-z) |g_{u,2}(y)-g_{v,2}(y)|\nonumber\\ \leq& C |u-v|^{\eta}+C |u-v|^{\eta} \sum_{k=1}^{{T_\theta^R}} k^{-\frac{\eta}{2}}\nonumber\\ \leq& C |u-v|^{\eta} (T_\theta^R)^{1-\eta/2}\leq c(T) |u-v|^\eta \frac{R^{1-\eta/2}}{\theta^{1-\eta/2}}, \end{align} where in the last inequality we have used \eqref{4eb1.34}.
\begin{proof}[Proof of Proposition \ref{4p3.6}(ii)] Let $\lambda=\theta^{(1-\eta)/2} R^{\eta/2-1} |u-v|^{-\eta}$ and $n=T_\theta^R\leq \frac{TR}{\theta}$. By \eqref{4eb1.35} we have \begin{align}\label{4e5.72} \lambda e^{\frac{T_\theta^R\theta}{R}} G(|g_{u,2}-g_{v,2}|,T_\theta^R)\leq \frac{\theta^{(1-\eta)/2} R^{\eta/2-1}}{ |u-v|^{\eta}} e^{T} \cdot c(T) |u-v|^\eta \frac{R^{1-\eta/2}}{\theta^{1-\eta/2}}\leq c(T) \frac{1}{\theta^{1/2}}. \end{align} If we pick $\theta>0$ large enough so that $c(T)/{\theta^{1/2}}\leq 1/2$, then we may apply Corollary \ref{4c1.2} to get \begin{align*} &\mathbb{E}^{Z_0}\Big(\exp\Big(\lambda |Z_{{T_\theta^R}}(g_{u,2})-Z_{{T_\theta^R}}(g_{v,2})|\Big)\Big)\leq \mathbb{E}^{Z_0}\Big(\exp\Big(\lambda Z_{{T_\theta^R}}(|g_{u,2}-g_{v,2}|)\Big)\Big)\\ \leq& \exp\Big(\lambda \mathbb{E}^{{Z}_0}({Z}_{{T_\theta^R}}(|g_{u,2}-g_{v,2}|)) (1-\lambda e^{T} G(|g_{u,2}-g_{v,2}|,{T_\theta^R}))^{-1}\Big)\\ \leq &\exp\Big(\lambda C(T) |u-v|^\eta R^{1-\eta/2} \theta^{(\eta-1)/2} (1-c(T)/{\theta^{1/2}} )^{-1}\Big)\\ \leq &\exp\Big(\lambda C(T) |u-v|^\eta R^{1-\eta/2} \theta^{(\eta-1)/2} \cdot 2\Big)=e^{2C(T)}, \end{align*} where we have used \eqref{4e5.35}, \eqref{4e5.72} in the third inequality and the last inequality is by $c(T)/{\theta^{1/2}}\leq 1/2$. So the proof is complete. \end{proof} \subsection{Exponential moments of $Z_{{T_\theta^R}}(g_{u,3})$ in $d=3$} Let $d=3$. Fix $u\in \mathbb{R}^d$. For any $x\in \mathbb{Z}_R^d$, we may apply Lemma \ref{4l1.3} to get for any $n\geq 1$, \begin{align}\label{4e5.49} \sum_{y\in \mathbb{Z}^d_R} p_n(y-x) g_{u,3}(y)\leq R \sum_{y\in \mathbb{Z}^d_R} p_n(y-x) \sum_{k=1}^\infty \frac{1}{k^{3/2}} e^{-\frac{|y-u|^2}{64k}} \leq R \cdot c_{\ref{4l1.3}} n^{-1/2}.
\end{align} By Proposition \ref{4p1.2}, we have \begin{align*} \mathbb{E}^{x}({Z}_{{T_\theta^R}}(g_{u,3}))=&(1+\frac{\theta}{R^{2}})^{{T_\theta^R}} \sum_{y\in \mathbb{Z}^d_R} p_{{T_\theta^R}}(x-y) g_{u,3}(y) \\ \leq& e^{\frac{T_\theta^R\theta}{R^{2}}} R \cdot c_{\ref{4l1.3}} (T_\theta^R)^{-1/2} \leq C(T) \theta^{1/2}, \end{align*} where in the last inequality we have used \eqref{4eb1.34}. Hence it follows that (recall in $d=3$ that $|Z_0|\leq 2R^2 \log \theta/\theta$) \begin{align}\label{4e5.36} \mathbb{E}^{Z_0}({Z}_{{T_\theta^R}}(g_{u,3}))\leq |Z_0| C(T) \theta^{1/2}\leq C(T)R^2 \frac{\log \theta}{\theta^{1/2}}. \end{align} Next we use \eqref{4eb1.32} and \eqref{4e5.49} to see that \begin{align}\label{4e5.50} G(g_{u,3}, {T_\theta^R})=& 3\|g_{u,3}\|_\infty+\sum_{k=1}^{T_\theta^R} \sup_{y \in \mathbb{Z}_R^d} \sum_{z\in \mathbb{Z}^d_R} p_k(y-z) g_{u,3}(z)\nonumber\\ \leq &CR+\sum_{k=1}^{T_\theta^R} R\cdot c_{\ref{4l1.3}} k^{-1/2}\leq CR\sqrt{T_\theta^R}\leq c(T)\frac{R^2}{\theta^{1/2}}, \end{align} where in the last inequality we have used \eqref{4eb1.34}. \begin{proof}[Proof of Proposition \ref{4p3.7}(i)] Let $\lambda={\theta}^{1/2} R^{-2}/\log \theta$ and $n=T_\theta^R\leq \frac{TR^2}{\theta}$. By \eqref{4e5.50} we have \begin{align}\label{4e5.71} \lambda e^{\frac{T_\theta^R\theta}{R^{2}}} G(g_{u,3},T_\theta^R)\leq \frac{{\theta}^{1/2} R^{-2}}{\log \theta} e^{T} \cdot c(T)\frac{R^2}{\theta^{1/2}}\leq c(T) \frac{1}{\log \theta}.
\end{align} If we pick $\theta>0$ large enough so that $c(T)/{\log \theta}\leq 1/2$, then we may apply Corollary \ref{4c1.2} to get \begin{align*} &\mathbb{E}^{Z_0}\Big(\exp\Big(\lambda Z_{{T_\theta^R}}(g_{u,3})\Big)\Big)\leq \exp\Big(\lambda \mathbb{E}^{{Z}_0}({Z}_{{T_\theta^R}}(g_{u,3})) (1-\lambda e^{\frac{T_\theta^R\theta}{R^{2}}} G(g_{u,3},{T_\theta^R}))^{-1}\Big)\\ \leq &\exp\Big(\lambda C(T)R^2 \frac{\log \theta}{\theta^{1/2}} (1-c(T)/{\log \theta} )^{-1}\Big)\\ \leq&\exp\Big(\lambda C(T) \frac{R^2 \log \theta}{{\theta}^{1/2}} \cdot 2 \Big)= e^{2C(T)}, \end{align*} where we have used \eqref{4e5.36}, \eqref{4e5.71} in the second inequality and the last inequality is by $c(T)/ \log \theta \leq 1/2$. \end{proof} Turning to the difference moments, we fix $u\neq v \in \mathbb{R}^d$. For any $y\in \mathbb{R}^d$, by \eqref{4e6.18a} with $\alpha=\eta$ we have \begin{align}\label{4e5.51} |g_{u,3}(y)-g_{v,3}(y)|\leq& R \sum_{k=1}^\infty \frac{1}{k^{3/2}} |e^{-\frac{|y-u|^2}{32k}}-e^{-\frac{|y-v|^2}{32k}}|\nonumber\\ \leq &R \sum_{k=1}^\infty \frac{1}{k^{3/2}} C(\eta) (16k)^{-\eta/2} |u-v|^\eta (e^{-\frac{|y-u|^2}{64k}}+e^{-\frac{|y-v|^2}{64k}})\nonumber\\ =& CR |u-v|^{\eta} \sum_{k=1}^\infty \frac{1}{k^{(3+\eta)/2}} (e^{-\frac{|y-u|^2}{64k}}+e^{-\frac{|y-v|^2}{64k}}). \end{align} For any $x\in \mathbb{Z}_R^d$, we may use the above and Lemma \ref{4l1.3} to get for any $n\geq 1$, \begin{align}\label{4e5.52} &\sum_{y\in \mathbb{Z}^d_R} p_n(y-x) |g_{u,3}(y)-g_{v,3}(y)|\nonumber\\ \leq &CR |u-v|^{\eta} \sum_{y\in \mathbb{Z}^d_R} p_n(y-x) \sum_{k=1}^\infty \frac{1}{k^{(3+\eta)/2}} (e^{-\frac{|y-u|^2}{64k}}+e^{-\frac{|y-v|^2}{64k}})\nonumber\\ \leq & CR |u-v|^{\eta} \cdot 2 c_{\ref{4l1.3}} n^{-(1+\eta)/2} \leq CR |u-v|^{\eta} n^{-(1+\eta)/2}.
\end{align} By using Proposition \ref{4p1.2} and the above, we get for any $x\in \mathbb{Z}_R^d$, \begin{align*} \mathbb{E}^{x}({Z}_{{T_\theta^R}}(|g_{u,3}-g_{v,3}|))=&(1+\frac{\theta}{R^{2}})^{{T_\theta^R}} \sum_{y\in \mathbb{Z}^d_R} p_{{T_\theta^R}}(y-x) |g_{u,3}(y)-g_{v,3}(y)|\\ \leq &e^{\frac{T_\theta^R\theta}{R^{2}}} CR |u-v|^{\eta} \cdot 2 c_{\ref{4l1.3}} (T_\theta^R)^{-(1+\eta)/2}\leq C(T) R^{-\eta} \theta^{(1+\eta)/2} |u-v|^\eta, \end{align*} where in the last inequality we have used \eqref{4eb1.34}. Hence it follows that (recall $|Z_0|\leq 2R^2 \log \theta/\theta$) \begin{align}\label{4e5.37} \mathbb{E}^{Z_0}({Z}_{{T_\theta^R}}(|g_{u,3}-g_{v,3}|))&\leq |Z_0| C(T) R^{-\eta} \theta^{(1+\eta)/2} |u-v|^\eta\nonumber\\ &\leq C(T) R^{2-\eta} \frac{\log \theta}{ \theta^{(1-\eta)/2}} |u-v|^\eta. \end{align} Next we use \eqref{4e5.51} and \eqref{4e5.52} to get \begin{align}\label{4e7.14} G(|g_{u,3}-g_{v,3}|,{T_\theta^R})=&3\|g_{u,3}-g_{v,3}\|_\infty+\sum_{k=1}^{{T_\theta^R}} \sup_{y \in \mathbb{Z}_R^d} \sum_{z\in \mathbb{Z}^d_R} p_k(y-z) |g_{u,3}(y)-g_{v,3}(y)|\nonumber\\ \leq& C R |u-v|^{\eta}+CR |u-v|^{\eta} \sum_{k=1}^{{T_\theta^R}} k^{-(1+\eta)/2} \nonumber\\ \leq& C R |u-v|^{\eta} (T_\theta^R)^{(1-\eta)/2}\leq c(T) |u-v|^{\eta} \frac{R^{2-\eta}}{\theta^{(1-\eta)/2}}. \end{align} \begin{proof}[Proof of Proposition \ref{4p3.7}(ii)] Let $\lambda=|u-v|^{-\eta}{\theta}^{(1-\eta)/2} R^{\eta-2}/\log \theta$ and $n=T_\theta^R\leq \frac{TR^2}{\theta}$. By \eqref{4e7.14} we have \begin{align}\label{4e10.52} \lambda e^{\frac{T_\theta^R\theta}{R^{2}}} G(|g_{u,3}-g_{v,3}|,T_\theta^R)\leq \frac{{\theta}^{(1-\eta)/2} R^{\eta-2}}{|u-v|^{\eta}\log \theta} e^{T} \cdot c(T) |u-v|^{\eta} \frac{R^{2-\eta}}{\theta^{(1-\eta)/2}}\leq \frac{c(T) }{\log \theta}.
\end{align} If we pick $\theta>0$ large enough so that $c(T)/{\log \theta}\leq 1/2$, then we may apply Corollary \ref{4c1.2} to get (recall $|Z_0|\leq 2R^2 \log \theta/{\theta}$) \begin{align*} &\mathbb{E}^{Z_0}\Big(\exp\Big(\lambda |Z_{{T_\theta^R}}(g_{u,3})-Z_{{T_\theta^R}}(g_{v,3})|\Big)\Big)\leq \mathbb{E}^{Z_0}\Big(\exp\Big(\lambda Z_{{T_\theta^R}}(|g_{u,3}-g_{v,3}|)\Big)\Big)\\ \leq& \exp\Big(\lambda \mathbb{E}^{{Z}_0}({Z}_{{T_\theta^R}}(|g_{u,3}-g_{v,3}|)) (1-\lambda e^{\frac{T_\theta^R\theta}{R^{2}}} G(|g_{u,3}-g_{v,3}|,{T_\theta^R}))^{-1}\Big)\\ \leq &\exp\Big(\lambda C(T) R^{2-\eta} \frac{\log \theta}{ \theta^{(1-\eta)/2}} |u-v|^\eta (1-c(T)/{\log \theta} )^{-1}\Big)\\ \leq &\exp\Big(\lambda C(T) R^{2-\eta} \frac{\log \theta}{ \theta^{(1-\eta)/2}} |u-v|^\eta \cdot 2 \Big)= e^{2C(T)}, \end{align*} where we have used \eqref{4e5.37}, \eqref{4e10.52} in the third inequality and the last inequality is by $c(T)/ \log \theta \leq 1/2$. So the proof is complete. \end{proof} \section{Mass propagation of SIR epidemic}\label{4s8} Finally we will prove Proposition \ref{4p4} in this section. Fix any $\varepsilon_0\in (0,1)$, $\kappa>0$ and $T\geq 100$ satisfying \eqref{4ea10.04}. We will choose $\theta\geq 100$ and $R\geq 4\theta$ large below. Recall that we assume $\eta_0\subseteq \mathbb{Z}_R^d$ is as in \eqref{4ea10.23} such that \begin{align}\label{4eb2.1} \begin{dcases} \text{(i) }\eta_0\subseteq Q_{R_\theta}(0)\\ \text{(ii) } R^{d-1} f_d(\theta)/\theta \leq |\eta_0|\leq 1+R^{d-1} f_d(\theta)/\theta\\ \text{(iii) }|\eta_0 \cap Q(y)|\leq K \beta_d(R), \forall y\in \mathbb{Z}^d, \end{dcases} \end{align} where $\beta_d$ is defined as in \eqref{4ea10.45} and $K\geq 100$ is some large constant to be chosen. Let $\eta$ be an SIR epidemic process starting from $(\eta_0, \rho_0)$ where $\rho_0$ is any finite subset of $\mathbb{Z}_R^d$ disjoint from $\eta_0$. Recall that $\mathcal A(0)=\{(0,1), (1,0)\}$ in $d=2$ and $\mathcal A(0)=\{(0,1,0), (1,0,0)\}$ in $d=3$. 
Fix any $y\in \mathcal A(0)$. It suffices to show that \[ \mathbb{P} \Big(\{|\hat{\eta}_{T_\theta^R}^{K_{\ref{4p4}} }\cap Q_{R_\theta}(y R_\theta)|< |\eta_0| \} \cap N(\kappa)\Big)\leq \frac{\varepsilon_0}{2}, \] where \[N(\kappa)=\{|\rho_{T_\theta^R}\cap \mathcal{N}(x) |\leq \kappa R, \forall x\in \mathbb{Z}_R^d\}.\] Recall that we also write $\eta_0(x)=1(x\in \eta_0)$ for $x\in \mathbb{Z}_R^d$ so that $\eta_0\in M_F(\mathbb{Z}_R^d)$. Define $Z_0=\eta_0$ so that $Z_0$ dominates $\eta_0$ and $|Z_0|=|\eta_0|$. Then we may apply Lemma \ref{4ea10.45} to couple a branching random walk $(Z_n)$ starting from $Z_0$ with $\eta$ so that $Z_n$ dominates $\eta_n$ for any $n\geq 0$, i.e. $Z_n(x)\geq \eta_n(x)$ for any $x\in \mathbb{Z}_R^d$. The outline for the proof of Proposition \ref{4p4} is as follows: We first prove that with high probability, $Z_{T_\theta^R}(Q_{R_\theta}(y R_\theta))\geq 6|\eta_0|$. Next on the event $N(\kappa)$, we show that the SIR epidemic $\eta$ satisfies with high probability, $\eta_{T_\theta^R}(Q_{R_\theta}(y R_\theta))\geq 2|\eta_0|$. Finally we use the dominating branching random walk again to show that with high probability, the difference between $\eta_{T_\theta^R}(Q_{R_\theta}(y R_\theta))$ and thinned version $\hat\eta_{T_\theta^R}^K(Q_{R_\theta}(y R_\theta))$ is no larger than $|\eta_0|$, thus completing the proof. \subsection{Mass propagation of branching envelope} We show in this subsection that $Z_{T_\theta^R}(Q_{R_\theta}(y R_\theta))\geq 6|\eta_0|$ holds with probability larger than $1-\varepsilon_0/8$, which is done by calculating its first and second moments. First we consider the branching random walk $Z=(Z_n)$ starting from a single particle at $x\in Q_{R_\theta}(0) \cap \mathbb{Z}_R^d$ whose law is denoted by $\mathbb{P}^x$. By \eqref{4ea4.5} we know $(Z_n(1))$ is a Galton-Watson process with offspring distribution $Bin(V(R),p(R))$. 
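For convenience we recall the standard first and second moment formulas for a Galton-Watson process $(\gamma_n)$ with offspring mean $m\neq 1$ and offspring variance $\sigma^2$ (for the $Bin(V(R),p(R))$ offspring law here, $m=V(R)p(R)=1+\theta/R^{d-1}$ and $\sigma^2=V(R)p(R)(1-p(R))=m(1-p(R))$):
\begin{align*}
\mathbb{E}(\gamma_n)=m^{n}\,\gamma_0,
\qquad
\operatorname{Var}(\gamma_n)=\sigma^{2}\, m^{n-1}\,\frac{m^{n}-1}{m-1}\,\gamma_0 .
\end{align*}
Taking $\gamma_0=1$, $n=T_\theta^R$ and $m-1=\theta/R^{d-1}$ gives exactly the expressions used below.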
Use the mean and variance formulas for a Galton-Watson process to see that (see, e.g., Chapter 4.7 of \cite{Ross}) \begin{align}\label{4e5.78} \mathbb{E}^{x}({{Z}}_{T_\theta^R}(1))=(V(R)p(R))^{T_\theta^R}=(1+\frac{\theta}{R^{d-1}})^{T_\theta^R} \leq e^T, \end{align} and \begin{align*} \text{Var}^{x}({{Z}}_{T_\theta^R}(1))=& (1+\frac{\theta}{R^{d-1}})(1-p(R))\cdot (1+\frac{\theta}{R^{d-1}})^{T_\theta^R-1} \frac{(1+\frac{\theta}{R^{d-1}})^{T_\theta^R}-1}{\frac{\theta}{R^{d-1}}} \\ \leq& (1+\frac{\theta}{R^{d-1}})^{T_\theta^R} (1+\frac{\theta}{R^{d-1}})^{T_\theta^R} \frac{R^{d-1}}{\theta}\leq e^{2T} \frac{R^{d-1}}{\theta}. \end{align*} Hence it follows that \begin{align}\label{4e5.74} \text{Var}^{x}({{Z}}_{T_\theta^R}(Q_{R_\theta}(yR_\theta)))\leq&\mathbb{E}^{x}\Big({{Z}}_{T_\theta^R}(Q_{R_\theta}(yR_\theta))^2\Big)\leq \mathbb{E}^{x}\Big({{Z}}_{T_\theta^R}(1)^2\Big)\nonumber\\ \leq& e^{2T} \frac{R^{d-1}}{\theta}+(e^T)^2\leq 2e^{2T} \frac{R^{{d-1}}}{\theta}. \end{align} \noindent Next, recall from \eqref{4eb2.2} that $(S_n)$ is the random walk taking uniform steps in $\mathcal{N}(0)$. By Proposition \ref{4p1.2}, we have \begin{align}\label{4e2.23} \mathbb{E}^{x}(Z_{T_\theta^R}(Q_{R_\theta}(yR_\theta)))&=(1+\frac{\theta}{R^{d-1}})^{T_\theta^R} \mathbb{P}(S_{T_\theta^R}+x\in Q_{R_\theta}(yR_\theta))\nonumber\\ &\geq e^{\frac{\theta T_\theta^R }{2R^{d-1}}}\mathbb{P}(S_{T_\theta^R}+x\in Q_{R_\theta}(yR_\theta))\nonumber\\ & \geq e^{T/4} \mathbb{P}(S_{T_\theta^R}+x\in Q_{R_\theta}(yR_\theta)) , \end{align} where the first inequality is by $1+x\geq e^{x/2}$ for $0\leq x\leq 1$ and the last inequality uses \eqref{4eb1.34}. Recall we pick $x\in Q_{R_\theta}(0)$.
So we may write $x=z_R \cdot R_\theta$ with $z_R \to z\in [-1,1]^d$ as $R\to \infty$, and it follows from the Central Limit Theorem that \begin{align} \mathbb{P}\Big(S_{T_\theta^R}+x \in Q_{R_\theta}(yR_\theta) \Big)&=\mathbb{P}\Big(\frac{S_{T_\theta^R}+x}{R_\theta} \in Q(y) \Big) \nonumber\\ &\to \mathbb{P}(\zeta_T^z\in Q(y)) \text{ as } R\to \infty, \end{align} where $\zeta_T^z$ is a $d$-dimensional Gaussian random variable with mean $z$ and variance $T/3$. Returning to \eqref{4e2.23}, the above implies if $R>R_0(\theta, T)$ for some constant $R_0(\theta,T)>0$, we have \begin{align}\label{4e5.75} \mathbb{E}^{x}(Z_{T_\theta^R}(Q_{R_\theta}(yR_\theta)))\geq e^{T/4} \frac{1}{2} \inf_{z\in Q(0)} \inf_{y\in Q(0)}\mathbb{P}(\zeta_T^z\in Q(y))\geq 8, \end{align} where the last inequality is by \eqref{4ea10.04}. Now we return to the BRW $Z=(Z_n)$ starting from $Z_0$ whose law is denoted by $\mathbb{P}^{Z_0}$. Since \eqref{4e5.74} and \eqref{4e5.75} hold for any $x\in Q_{R_\theta}(0)\cap \mathbb{Z}_R^d$, we may conclude \begin{align}\label{4ed2.24} \mathbb{E}^{Z_0}({{Z}}_{T_\theta^R}(Q_{R_\theta}(yR_\theta)))\geq 8 |Z_0|, \end{align} and \begin{align*} \text{Var}^{Z_0}({{Z}}_{T_\theta^R}(Q_{R_\theta}(yR_\theta)))\leq 2e^{2T} \frac{R^{{d-1}}}{\theta} |Z_0|. \end{align*} Use \eqref{4ed2.24} and Chebyshev's inequality to get \begin{align}\label{4e2.24} \mathbb{P}^{Z_0}\Big({{Z}}_{T_\theta^R}(Q_{R_\theta}(yR_\theta))\leq 6 |Z_0| \Big)\leq& \mathbb{P}^{Z_0}\Big(\Big|{{Z}}_{T_\theta^R}(Q_{R_\theta}(yR_\theta))-\mathbb{E}^{Z_0}({{Z}}_{T_\theta^R}(Q_{R_\theta}(yR_\theta)))\Big|\geq 2 |Z_0| \Big)\nonumber\\ \leq& \frac{2e^{2T} \frac{R^{{d-1}}}{\theta} |Z_0|}{(2 |Z_0| )^2}\leq \frac{1}{2}e^{2T} \frac{1}{f_d(\theta)}, \end{align} where the last inequality uses $|Z_0|=|\eta_0|\geq R^{d-1} f_d(\theta)/\theta$ by \eqref{4eb2.1}.
Pick $\theta\geq 100$ large so that \begin{align}\label{4eb2.4} \mathbb{P}^{Z_0}\Big({{Z}}_{T_\theta^R}(Q_{R_\theta}(yR_\theta))\leq 6 |Z_0| \Big)\leq \frac{\varepsilon_0}{8}. \end{align} \subsection{Mass propagation of SIR epidemic} To show that $\eta_{T_\theta^R}(Q_{R_\theta}(y R_\theta))\geq 2|\eta_0|$ holds with high probability on the event $N(\kappa)$, we will couple the original epidemic $(\eta_n)$ with a modified SIR epidemic $(\bar{\eta}_n)$: Let $\bar{\eta}_0=\eta_0$. At time $n\geq 1$, any particle at location $x$ produces offspring on its neighbouring sites in $\mathcal{N}(x)$ while avoiding births onto the recovered sites in $\rho_n$. In other words, the particle located at $x$ produces $Bin(V(R)-|\rho_n \cap \mathcal{N}(x)|, p(R))$ offspring on its neighbouring sites. In this way we allow two different particles to give birth to the same location (multiple occupancy). One can construct $(\bar{\eta}_n)$ together with the original SIR $(\eta_n)$ and the branching envelope $(Z_n)$ so that (i) the modified SIR always dominates the original SIR; (ii) the branching envelope $(Z_n)$ always dominates the modified SIR. This coupling can be done in a way similar to that of Lemma \ref{4l1.5}. Denote by $\mathbb{P}$ the joint law of $(Z,\bar \eta, \eta)$. The difference between $(\eta_n)$ and $(\bar{\eta}_n)$ comes from the event called a ``collision'': when two infected sites simultaneously attempt to infect the same susceptible site, all but one of the attempts fail. Let $\Gamma_{n}(x)$ be the number of collisions at site $x$ and time $n$ in the SIR epidemic $(\eta_n).$ Write $f(R)=o(h(R))$ if $f(R)/h(R) \to 0$ as $R\to \infty$. The following lemma is from Lemma 2.26 of \cite{LPZ14} (see also Lemma 9 of \cite{LZ10}); its proof is given in Appendix \ref{a2}. \begin{lemma}\label{4l10.01} For any $T\geq 100$ and $\theta\geq 100$, we have \[ \mathbb{E} \Big(\sum_{n=1}^{T_\theta^R} \sum_{x\in \mathbb{Z}^d_R}\Gamma_{n}(x)\Big)=o(R^{d-1}).
\] \end{lemma} By an argument similar to the proof of (2.41) of \cite{LPZ14}, one may notice that the difference between $\bar{\eta}_{T_\theta^R}(1)$ and $\eta_{T_\theta^R}(1)$ is at most the sum of all the offspring of the ``lost'' particles due to collisions. Hence it follows that \begin{align}\label{4e5.82} \mathbb{E} (\bar{\eta}_{T_\theta^R}(1)-\eta_{T_\theta^R}(1))\leq& \mathbb{E} \Big(\sum_{n=1}^{T_\theta^R} (1+\frac{\theta}{R^{d-1}})^{T_\theta^R-n} \sum_{x\in \mathbb{Z}^d_R}\Gamma_{n}(x)\Big)\nonumber\\ \leq &(1+\frac{\theta}{R^{d-1}})^{T_\theta^R} \mathbb{E} \Big(\sum_{n=1}^{T_\theta^R} \sum_{x\in \mathbb{Z}^d_R}\Gamma_{n}(x)\Big)\leq e^T o(R^{d-1}), \end{align} where the last inequality is by \eqref{4eb1.34} and Lemma \ref{4l10.01}. This gives that \begin{align*} &\mathbb{P}\Big(\bar{\eta}_{T_\theta^R}(Q_{R_\theta}(yR_\theta))-{\eta}_{T_\theta^R}(Q_{R_\theta}(yR_\theta))\geq |\eta_0| \Big)\\ \leq &\mathbb{P}\Big(\bar{\eta}_{T_\theta^R}(1)-{\eta}_{T_\theta^R}(1)\geq |\eta_0| \Big)\leq \frac{1}{|\eta_0|} \mathbb{E}\Big(\bar{\eta}_{T_\theta^R}(1)-{\eta}_{T_\theta^R}(1) \Big)\\ \leq & \frac{\theta}{R^{d-1} f_d(\theta)} \mathbb{E}\Big(\bar{\eta}_{T_\theta^R}(1)-{\eta}_{T_\theta^R}(1) \Big) \to 0 \text{ as } R\to \infty. \end{align*} Hence if $R$ is large, we have \begin{align}\label{4e10.46} \mathbb{P}\Big(\bar{\eta}_{T_\theta^R}(Q_{R_\theta}(yR_\theta))-{\eta}_{T_\theta^R}(Q_{R_\theta}(yR_\theta))\geq |\eta_0| \Big)\leq \frac{\varepsilon_0}{8}. \end{align} \noindent Set $\gamma_{0}=|\bar{\eta}_{0}|=|{\eta}_{0}|$ and let $(\gamma_{n}, n\geq 0)$ be a Galton-Watson process with offspring distribution $Bin(V(R)-\kappa R, p(R))$.
On the event \[ N(\kappa)=\{ |\rho_{T_\theta^R} \cap \mathcal{N}(x)| \leq \kappa R, \quad \forall x\in \mathbb{Z}^d_R\}, \] one may check that the process $(\bar{\eta}_{n}(1), n\geq 0)$ will dominate $(\gamma_{n}, n\geq 0)$ up to time $T_\theta^R$, that is, we may define $(\gamma_n)$ on the same probability space so that \begin{align}\label{4ed5.79} \gamma_{n}\leq \bar{\eta}_{n}(1), \quad \forall n\leq T_\theta^R, \text{ on the event } N(\kappa). \end{align} For the Galton-Watson process $(\gamma_n)$, we have \begin{align}\label{4e5.79} \mathbb{E}(\gamma_{T_\theta^R}) =& \gamma_{0} \Big((V(R)-\kappa R) p(R)\Big)^{T_\theta^R}\nonumber\\ = &|{\eta}_{0}|(1+\frac{\theta}{R^{d-1}})^{T_\theta^R} (1-\frac{\kappa R}{V(R)})^{T_\theta^R}. \end{align} On the other hand, by \eqref{4e5.78}, the branching random walk $Z$ satisfies $\mathbb{E}^{Z_0}(Z_{T_\theta^R}(1))=(1+\frac{\theta}{R^{d-1}})^{T_\theta^R} |Z_0|$. Choose $R$ large so that ${\kappa R}< V(R)/2$. Then we may use \eqref{4e5.79} to see that (recall $|Z_0|=|\eta_0|$) \begin{align}\label{4e5.80} \mathbb{E}({{Z}}_{T_\theta^R}(1)- \gamma_{T_\theta^R})= & (1+\frac{\theta}{R^{d-1}})^{T_\theta^R} |\eta_0| \Big[1-(1-\frac{\kappa R}{V(R)})^{T_\theta^R}\Big]\nonumber\\ \leq & e^{T}|\eta_0| \cdot (-T_\theta^R \log (1-\frac{\kappa R}{V(R)})) \leq e^{T}|\eta_0| T_\theta^R \cdot 2\frac{\kappa R}{V(R)}\nonumber\\ \leq & e^{T}|\eta_0| \frac{TR^{d-1}}{\theta} \cdot 2\frac{\kappa R}{V(R)}\leq C(T) |\eta_0| \frac{\kappa}{\theta}, \end{align} where the first inequality is by $1-(1-x)^n=1-e^{n\log(1-x)}\leq -n\log(1-x)$ for any $x\in (0,\frac{1}{2})$ and $n\geq 1$. The second inequality uses $-\log (1-x) \leq 2x$ for $x\in (0,\frac{1}{2})$.
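The two elementary bounds invoked above can each be verified in one line: for $t\geq 0$ and $x\in(0,\tfrac{1}{2})$,
\begin{align*}
1-e^{-t}\leq t
\ \Longrightarrow\
1-(1-x)^{n}=1-e^{n\log(1-x)}\leq -n\log(1-x),
\qquad
-\log(1-x)=\sum_{k=1}^{\infty}\frac{x^{k}}{k}\leq x\sum_{k=0}^{\infty}x^{k}=\frac{x}{1-x}\leq 2x.
\end{align*}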
Apply Markov's inequality and \eqref{4e5.80} to get \begin{align}\label{4e2.25} \mathbb{P}\Big({{Z}}_{T_\theta^R}(1)-\gamma_{T_\theta^R}\geq 3 |\eta_0| \Big)\leq\frac{ C(T) |\eta_0| \frac{\kappa}{\theta}}{3 |\eta_0|}=\frac{1}{3}C(T) \frac{\kappa}{\theta}< \frac{\varepsilon_0}{8}, \end{align} if we pick $\theta>0$ large. Since $(Z_n)$ dominates $(\bar{\eta}_{n})$, we have for any $A\subseteq \mathbb{Z}_R^d$, \begin{align}\label{4e5.81} {{Z}}_{T_\theta^R}(A)-\bar{\eta}_{T_\theta^R}(A)\leq {{Z}}_{T_\theta^R}(1)-\bar{\eta}_{T_\theta^R}(1)\leq {{Z}}_{T_\theta^R}(1)-\gamma_{T_\theta^R} \text{ on } N(\kappa), \end{align} where the last inequality is by \eqref{4ed5.79}. Now we conclude that \begin{align}\label{4e10.47} &\mathbb{P}\Big(\Big\{\bar{\eta}_{T_\theta^R}(Q_{R_\theta}(yR_\theta))\leq 3|\eta_0| \Big\} \cap N(\kappa)\Big)\nonumber\\ \leq & \mathbb{P}\Big(\Big\{{{Z}}_{T_\theta^R}(Q_{R_\theta}(yR_\theta))-\bar{\eta}_{T_\theta^R}(Q_{R_\theta}(yR_\theta))\geq 3|\eta_0|\Big\} \cap N(\kappa) \Big)\nonumber\\ &\quad \quad +\mathbb{P}\Big({{Z}}_{T_\theta^R}(Q_{R_\theta}(yR_\theta))\leq 6 |\eta_0| \Big)\nonumber\\ \leq & \mathbb{P}\Big({{Z}}_{T_\theta^R}(1)-\gamma_{T_\theta^R}\geq 3 |\eta_0| \Big)+\frac{\varepsilon_0}{8}\leq \frac{\varepsilon_0}{4}, \end{align} where the second last inequality uses \eqref{4e5.81}, \eqref{4eb2.4} and the last inequality is by \eqref{4e2.25}. Recall \eqref{4e10.46} to get for $R$ large, \begin{align}\label{4e2.30} &\mathbb{P}\Big(\Big\{{\eta}_{T_\theta^R}(Q_{R_\theta}(yR_\theta))\leq 2|\eta_0|\Big\} \cap N(\kappa)\Big)\nonumber\\ &\leq\mathbb{P}\Big(\bar{\eta}_{T_\theta^R}(Q_{R_\theta}(yR_\theta))-{\eta}_{T_\theta^R}(Q_{R_\theta}(yR_\theta))\geq |\eta_0| \Big) \nonumber\\ &\quad + \mathbb{P}\Big(\Big\{\bar{\eta}_{T_\theta^R}(Q_{R_\theta}(yR_\theta))\leq 3|\eta_0| \Big\} \cap N(\kappa)\Big)\leq \frac{3\varepsilon_0}{8}, \end{align} where the last inequality uses \eqref{4e10.47}.
\subsection{Mass propagation of the thinned SIR epidemic} Finally we will turn to the thinned process $\hat{\eta}_{T_\theta^R}^K$ and show that \[ \mathbb{P} \Big(\Big\{|\hat{\eta}_{T_\theta^R}^{K} \cap Q_{R_\theta}(y R_\theta)|< |\eta_0| \Big\} \cap N(\kappa)\Big)\leq \frac{\varepsilon_0}{2}, \] if we pick $K>0$ large. Recall that the thinned version $\hat{\eta}_{T_\theta^R}^{K}$ is obtained by deleting all the vertices in ${\eta}_{T_\theta^R} \cap Q(y)$ for each $y\in \mathbb{Z}^d$ if $|{\eta}_{T_\theta^R} \cap Q(y)|>K\beta_d(R)$. We will use the dominating BRW $Z$ to show that with high probability, the number of deleted particles is small. Recall that $Z_0(x)=1(x\in \eta_0)$ where $\eta_0$ is a subset of $\mathbb{Z}_R^d$. Then we have for any set $D$ and $n\geq 1$, \begin{align}\label{4e10.27} \mathbb{E}^{Z_0} (Z_n(D)^2 )&= \sum_{x\in \eta_0} \mathbb{E}^x(Z_n(D)^2)+ \sum_{x\in \eta_0} \sum_{y\in \eta_0, y\neq x} \mathbb{E}^x(Z_n(D)) \mathbb{E}^y(Z_n(D))\nonumber\\ &\leq \sum_{x\in \eta_0} \mathbb{E}^x(Z_n(D)^2)+ \Big(\sum_{x\in \eta_0} \mathbb{E}^x(Z_n(D))\Big)^2. \end{align} Take $D=Q(a)$ for $a\in \mathbb{Z}^d$ and let $n=T_\theta^R$ to get \begin{align}\label{4e10.28} &\mathbb{E}^{Z_0} (Z_{T_\theta^R}(Q(a))^2 )\leq \sum_{x\in \eta_0} \mathbb{E}^x(Z_{T_\theta^R}(Q(a))^2)+ \Big(\sum_{x\in \eta_0} \mathbb{E}^x(Z_{T_\theta^R}(Q(a)))\Big)^2. \end{align} \noindent Apply Proposition \ref{4p1.2}(i) and Proposition \ref{4p1.1}(i) to see that for any $x\in \mathbb{Z}_R^d$, \begin{align}\label{4e10.29} \mathbb{E}^x(Z_{T_\theta^R}(Q(a)))=&(1+\frac{\theta}{R^{d-1}})^{T_\theta^R} \sum_{y\in \mathbb{Z}_R^d} 1_{Q(a)}(y) p_{T_\theta^R}(x-y)\nonumber\\ \leq & e^T \sum_{y\in \mathbb{Z}_R^d} 1_{Q(a)}(y) \frac{c_{\ref{4p1.1}}}{(T_\theta^R)^{d/2} R^d} \leq C(T) \frac{1}{(T_\theta^R)^{d/2}}. \end{align} Recall $G(\phi,n)$ from \eqref{4e5.90}.
Use Proposition \ref{4p1.1}(i) to see that \begin{align}\label{4e10.30} G(1_{Q(a)},T_\theta^R)=&3\|1_{Q(a)}\|_\infty+\sum_{k=1}^{T_\theta^R} \sup_{y \in \mathbb{Z}_R^d} \sum_{z\in \mathbb{Z}^d_R} p_k(y-z) 1_{Q(a)}(z)\nonumber\\ \leq &3+\sum_{k=1}^{T_\theta^R} \sup_{y \in \mathbb{Z}_R^d} \sum_{z\in \mathbb{Z}^d_R} \frac{c_{\ref{4p1.1}}}{k^{d/2} R^d} 1_{Q(a)}(z)\nonumber\\ \leq &3+\sum_{k=1}^{T_\theta^R} \frac{c_{\ref{4p1.1}}}{k^{d/2}} c(d) \leq C \sum_{k=1}^{T_\theta^R} \frac{1}{k^{d/2}}. \end{align} Write $h_d(n):=\sum_{k=1}^{n} \frac{1}{k^{d/2}}$ for $n\geq 1$. Then it follows that \begin{align}\label{4eb2.8} G(1_{Q(a)},T_\theta^R)\leq C h_d(T_\theta^R)\leq \begin{cases} C+C \log T_\theta^R\leq C(T)\log R, &d=2;\\ C, &d=3. \end{cases} \end{align} Next, by Proposition \ref{4p1.2}(ii), we have \begin{align}\label{4e10.48} \mathbb{E}^x\Big(Z_{T_\theta^R}(Q(a))^2\Big)\leq& e^{\frac{\theta T_\theta^R}{R^{d-1}}} G(1_{Q(a)},T_\theta^R) \mathbb{E}^{x}\Big({Z}_{T_\theta^R}(Q(a))\Big)\leq e^T C h_d(T_\theta^R) \cdot C(T) \frac{1}{(T_\theta^R)^{d/2}}\nonumber\\ \leq &C(T) h_d(T_\theta^R) (T_\theta^R)^{-d/2}, \end{align} where the second inequality follows from \eqref{4e10.29} and \eqref{4eb2.8}. Returning to \eqref{4e10.28}, we use \eqref{4e10.29} and \eqref{4e10.48} to see that \begin{align}\label{4e10.03} \mathbb{E}^{Z_0} \Big(Z_{T_\theta^R}(Q(a))^2 \Big)\leq& |\eta_0| C(T) h_d(T_\theta^R) (T_\theta^R)^{-d/2}+ \Big(|\eta_0|C(T) \frac{1}{(T_\theta^R)^{d/2}}\Big)^2\nonumber\\ \leq&\frac{2R^{d-1} f_d(\theta)}{\theta} C(T)h_d(T_\theta^R) \Big(\frac{2\theta}{TR^{d-1}}\Big)^{d/2}\nonumber\\ &\quad \quad +\Big(\frac{2R^{d-1} f_d(\theta)}{\theta}\Big)^2 C(T) \Big(\frac{2\theta}{TR^{d-1}}\Big)^{d}:=I, \end{align} where we have used \eqref{4eb1.34} in the second inequality.
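For completeness, the elementary bound on $h_d$ used in \eqref{4eb2.8} is the standard integral comparison (recorded here as a side remark):
\begin{align*}
h_2(n)=\sum_{k=1}^{n} \frac{1}{k}\leq 1+\int_1^{n} \frac{dx}{x}=1+\log n, \qquad h_3(n)=\sum_{k=1}^{n} \frac{1}{k^{3/2}}\leq 1+\int_1^{\infty} \frac{dx}{x^{3/2}}=3,
\end{align*}
and in $d=2$ one has $\log T_\theta^R\leq C(T)\log R$, as $T_\theta^R$ is at most polynomial in $R$ in the present setting.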
In $d=2$, we use \eqref{4eb2.8} to get \begin{align}\label{4eb2.11} I\leq& \frac{2R \sqrt{\theta}}{\theta} C(T) \log R \cdot \frac{2\theta}{TR}+\Big(\frac{2R\sqrt{\theta}}{\theta}\Big)^2 C(T) \Big(\frac{2\theta}{TR}\Big)^{2}\nonumber\\ \leq& C(T)\sqrt{\theta} \log R+C(T)\theta\leq C(T)\sqrt{\theta} \log R, \end{align} where the last inequality is by $\theta\leq \sqrt{\theta} \log R$ when $R$ is large. In $d=3$, by \eqref{4eb2.8} we have \begin{align}\label{4eb2.10} I\leq& \frac{2R^{2} \log \theta}{\theta} C(T) C \Big(\frac{2\theta}{TR^{2}}\Big)^{3/2}+\Big(\frac{2R^{2} \log\theta}{\theta}\Big)^2 C(T) \Big(\frac{2\theta}{TR^{2}}\Big)^{3}\nonumber\\ \leq& C(T) \frac{1}{R} \sqrt{\theta} \log \theta+C(T)\frac{1}{R^2}\theta (\log \theta)^2 \leq C(T) \frac{1}{R} \sqrt{\theta} \log \theta, \end{align} where in the last inequality we have used $\frac{1}{R^2}\theta (\log \theta)^2 \leq \frac{1}{R} \sqrt{\theta} \log \theta$ when $R$ is large. Hence we conclude from \eqref{4e10.03}, \eqref{4eb2.11}, \eqref{4eb2.10} that \begin{align}\label{4eb2.9} \mathbb{E}^{Z_0} \Big(Z_{T_\theta^R}(Q(a))^2 \Big)\leq \begin{cases} C(T)\sqrt{\theta} \log R, &d=2;\\ C(T) \frac{1}{R} \sqrt{\theta} \log \theta, &d=3. \end{cases} \end{align} Write $V(a)={{Z}}_{T_\theta^R}(Q(a))$. It follows that \begin{align}\label{4e2.29} &\mathbb{E}(V(a)\cdot 1_{\{V(a)>K \beta_d(R)\}})\leq \frac{\mathbb{E}(V(a)^2)}{K\beta_d(R)} \leq \begin{cases} C(T) \frac{\sqrt{\theta}}{K}, &d=2;\\ C(T) \sqrt{\theta} \frac{\log \theta}{KR}, &d=3. \end{cases} \end{align} Let \[D=\sum_{a\in A} V(a) 1_{\{V(a)>K\beta_d(R)\}},\] where $A=Q_{R_\theta}(y R_\theta) \cap \mathbb{Z}^d$ so that \[Q_{R_\theta}(yR_\theta) \subseteq \bigcup_{a\in A} Q(a).\] Observe that $|A|\leq C (R_\theta)^d $.
Recall $R_\theta=\sqrt{R^{d-1}/\theta}$ and use \eqref{4e2.29} to see that \[ \mathbb{E} (D)\leq \begin{dcases} C(T) \frac{\sqrt{\theta}}{K} C (R_\theta)^2 \leq \frac{C(T)}{K} \frac{R\sqrt{\theta}}{\theta}\leq \frac{C(T)}{K} |\eta_0|, &d=2;\\ C(T) \sqrt{\theta} \frac{\log \theta}{KR} C (R_\theta)^3\leq \frac{C(T)}{K} \frac{R^2 \log \theta}{\theta}\leq \frac{C(T)}{K} |\eta_0|, &d=3. \end{dcases} \] Since $Z_{T_\theta^R}$ dominates $\eta_{T_\theta^R}$, we get \begin{align} 0\leq \eta_{T_\theta^R}(Q_{R_\theta}(yR_\theta))-\hat{\eta}_{T_\theta^R}^K(Q_{R_\theta}(yR_\theta))\leq& \sum_{a\in A} \eta_{T_\theta^R}(Q(a)) 1_{\{\eta_{T_\theta^R}(Q(a)) >K\beta_d(R)\}}\nonumber\\ \leq& \sum_{a\in A} Z_{T_\theta^R}(Q(a)) 1_{\{Z_{T_\theta^R}(Q(a)) >K\beta_d(R)\}}= D. \end{align} By Markov's inequality, we have \begin{align} &\mathbb{P}\Big({\eta}_{T_\theta^R}(Q_{R_\theta}(yR_\theta))-\hat{\eta}_{T_\theta^R}^K(Q_{R_\theta}(yR_\theta)) \geq |\eta_0|\Big)\leq \frac{\mathbb{E} (D)}{|\eta_0|}\leq C(T) \frac{1}{K}\leq \frac{\varepsilon_0}{8}, \end{align} if we pick $K>0$ large. Recall \eqref{4e2.30} and use the above to see that \begin{align} & \mathbb{P} \Big(\Big\{|\hat{\eta}_{T_\theta^R}^{K} \cap Q_{R_\theta}(y R_\theta)|< |\eta_0| \Big\} \cap N(\kappa)\Big)\nonumber\\ &\leq \mathbb{P}\Big({\eta}_{T_\theta^R}(Q_{R_\theta}(yR_\theta))-\hat{\eta}_{T_\theta^R}^K(Q_{R_\theta}(yR_\theta)) \geq |\eta_0|\Big)\nonumber\\ &\quad \quad + \mathbb{P} \Big(\Big\{{\eta}_{T_\theta^R}(Q_{R_\theta}(yR_\theta))\leq 2 |\eta_0| \Big\} \cap N(\kappa)\Big)\leq \frac{\varepsilon_0}{2}, \end{align} and so the proof of Proposition \ref{4p4} is complete. \clearpage \appendix \section{Approximation by characteristic function} \label{a3} This section is devoted to the proof of Proposition \ref{4p1.1} where $d\geq 1$. Let $Y_1, Y_2, \cdots$ be i.i.d. random variables uniform on $\mathcal{N}(0)$.
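Before analyzing $\rho$, we record the elementary computation behind the diagonal of the covariance matrix in \eqref{4e1.8} below; here we use that $Y_1$ is uniform on the punctured box $\mathcal{N}(0)=(\{-R,\cdots,R\}^d\setminus\{0\})/R$ with $V(R)=(2R+1)^d-1$ points, and that the $k=0$ term contributes nothing to the second moment:
\begin{align*}
\mathbb{E}((Y_1^1)^2)=\frac{1}{V(R)}\sum_{\substack{k\in \{-R,\cdots,R\}^d \\ k\neq 0}} \frac{k_1^2}{R^2}=\frac{(2R+1)^{d-1}}{V(R) R^2}\sum_{k_1=-R}^{R} k_1^2=\frac{R(R+1)}{3R^2}\frac{(2R+1)^{d}}{V(R)},
\end{align*}
using $\sum_{k=-R}^{R}k^2=R(R+1)(2R+1)/3$; the off-diagonal entries of $\Gamma$ vanish by symmetry.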
Define $\rho(t)=\mathbb{E} e^{it\cdot Y_1}, t\in \mathbb{R}^d$ to be the characteristic function of $Y_1$ and denote by $\Gamma$ the covariance matrix of $Y_1$, which is given by \begin{align}\label{4e1.8} \Gamma=[\mathbb{E}(Y_1^i Y_1^j)]_{1\leq i,j\leq d}=\frac{R(R+1)}{3R^2} \frac{(2R+1)^d}{V(R)}I:=\frac{\lambda_0(R,d)}{3}I. \end{align} In the above, $\lambda_0(R,d)$ is a constant which converges to $1$ as $R\to \infty$. Throughout the rest of this section, we will write $\lambda_0=\lambda_0(R,d)$ for simplicity and only consider $R>0$ large so that $1/2\leq \lambda_0\leq 3/2$. \\ Write $S_n=Y_1+\cdots+Y_n$ for each $n\geq 1$. The characteristic function of $S_n$ is given by $\rho_{S_n}(t)=\rho^n(t)$. For any $x\in \mathbb{Z}_R^d$, we let $xR=(x_1R, \cdots, x_d R)\in \mathbb{Z}^d$. Then by applying Proposition 2.2.2 of \cite{LL10}, we have for any $x\in \mathbb{Z}_R^d$, \begin{align}\label{4e6.62} p_n(x)=&\mathbb{P}(S_n=x)=\mathbb{P}(S_n R=xR)=\frac{1}{(2\pi)^d} \int_{[-\pi,\pi]^d} \rho^n(tR) e^{-it \cdot xR} dt\nonumber\\ =&\frac{1}{(2\pi)^d n^{d/2} R^d} \int_{[-\sqrt{n} R\pi,\sqrt{n}R\pi]^d} \rho^n\Big(\frac{t}{\sqrt{n}}\Big) e^{-i\frac{t\cdot x}{\sqrt{n}}} dt. \end{align} Following (2.2) of \cite{LL10}, we will approximate $p_n(x)$ by (recall $\Gamma$ from \eqref{4e1.8}) \begin{align}\label{4e2.14} \bar{p}_n(x):=\frac{1}{(2\pi)^d n^{d/2} R^d} \int e^{-\frac{\lambda_0}{6} |t|^2} e^{-i\frac{t\cdot x}{\sqrt{n}}} dt=\frac{(3/\lambda_0)^{d/2}}{(2\pi)^{d/2} n^{d/2} R^d} e^{-\frac{3|x|^2}{2n\lambda_0}}, \end{align} where $\lambda_0=\lambda_0(R,d)$ is as in \eqref{4e1.8}. Before giving the error estimates between $p_n(x)$ and $\bar{p}_n(x)$, we first state some preliminary results on $\rho(t)$. \begin{lemma}\label{4l1.2} (i) There is some constant $c_{\ref{4l1.2}}=c_{\ref{4l1.2}}(d)>0$ so that for all $R\geq 1$, \[ \sup_{|t|\leq \sqrt{d} \pi R} |\rho(t)|\leq \frac{c_{\ref{4l1.2}}}{|t|}.
\] (ii) For any $0<\delta<1$, there are constants $C_{\ref{4l1.2}}>0$, $K_{\ref{4l1.2}}>0$ depending only on $d, \delta$ such that for any $R\geq C_{\ref{4l1.2}}$, \[ \sup_{\delta\leq |t|\leq \delta^{-1}} |\rho(t)|\leq e^{-K_{\ref{4l1.2}}}. \] \end{lemma} \begin{proof} (i) For any $t=(t_1,\cdots, t_d) \in \mathbb{R}^d$, we have \begin{align}\label{4e4.1} \rho(t)=&\frac{1}{V(R)}\Big(\sum_{k_1=-R}^R \cdots \sum_{k_d=-R}^R e^{it_1 \frac{k_1}{R}}\cdots e^{it_d \frac{k_d}{R}} -e^{it\cdot 0}\Big)\nonumber\\ =&\frac{1}{V(R)}\Big(\prod_{k=1}^d \frac{e^{-it_k}-e^{it_k \frac{R+1}{R}}}{1-e^{it_k\frac{1}{R}}}-1\Big)=\frac{1}{V(R)}\Big(\prod_{k=1}^d \Big(e^{it_k}+\frac{e^{it_k}-e^{-it_k}}{e^{it_k\frac{1}{R}}-1}\Big)-1\Big)\nonumber\\ =&\frac{1}{V(R)}\Big(\prod_{k=1}^d \Big(e^{it_k}+\frac{e^{it_k}-e^{-it_k}}{it_k\frac{1}{R}}\frac{it_k\frac{1}{R}}{e^{it_k\frac{1}{R}}-1}\Big)-1\Big)\nonumber\\ =&\frac{(2R)^d}{V(R)}\Big(\prod_{k=1}^d \Big(\frac{e^{it_k}}{2R}+\frac{\sin t_k}{t_k}\frac{it_k\frac{1}{R}}{e^{it_k\frac{1}{R}}-1}\Big)-\frac{1}{(2R)^d}\Big). \end{align} If $ |t|\leq \sqrt{d} \pi R$, we have $|t_k/R|\leq \sqrt{d} \pi$ for any $1\leq k\leq d$, and so we may use $\frac{|s|}{|e^{is}-1|} \leq c$, $\forall |s|\leq \sqrt{d} \pi$ for some constant $c>0$ to get \begin{align}\label{4e5.20} |\rho(t)|\leq & \frac{1}{(2R)^d}+\prod_{k=1}^d \Big(\frac{1}{2R}+\frac{|\sin t_k|}{|t_k|}\frac{|t_k\frac{1}{R}|}{|e^{it_k\frac{1}{R}}-1|}\Big)\nonumber\\ \leq & \frac{1}{(2R)^d}+\prod_{k=1}^d \Big(\frac{1}{2R}+c \frac{|\sin t_k|}{|t_k|}\Big). \end{align} For any $t=(t_1,\cdots, t_d)\in \mathbb{R}^d$, there is some $1\leq j\leq d$ so that $|t_j|=\max\{|t_k|, 1\leq k\leq d\}$ and hence $|t|\leq \sqrt{d}|t_j|$.
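As an aside, the closed form obtained in \eqref{4e4.1} by summing the geometric series can be compared against direct summation over $\mathcal{N}(0)$; a minimal numerical sketch (function names and parameter values are ours, not from the text):

```python
import numpy as np
from itertools import product

def rho_direct(t, R):
    # characteristic function of Y_1 uniform on N(0) = ({-R,...,R}^d \ {0}) / R,
    # computed by brute-force summation over the punctured box
    d = len(t)
    V = (2 * R + 1) ** d - 1
    s = sum(np.exp(1j * sum(ti * ki / R for ti, ki in zip(t, k)))
            for k in product(range(-R, R + 1), repeat=d) if any(k))
    return s / V

def rho_closed(t, R):
    # closed form from (4e4.1): product of one-dimensional geometric sums,
    # minus the origin term, normalized by V(R)
    d = len(t)
    V = (2 * R + 1) ** d - 1
    prod = 1.0 + 0.0j
    for tk in t:
        prod *= (np.exp(-1j * tk) - np.exp(1j * tk * (R + 1) / R)) / (1 - np.exp(1j * tk / R))
    return (prod - 1) / V

# the two evaluations agree to machine precision
for t in [(0.7, -1.3), (2.0, 0.3)]:
    assert abs(rho_direct(t, 3) - rho_closed(t, 3)) < 1e-10
```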
Use $|\sin t_k|\leq |t_k|$ and $|\sin t_k|\leq 1$ to arrive at \begin{align*} |\rho(t)|\leq & \frac{1}{(2R)^d}+(1+c)^{d-1}\Big(\frac{1}{2R}+c \frac{|\sin t_j|}{|t_j|}\Big) \\ \leq & \frac{1}{(2R)^d}+(1+c)^{d-1}\Big(\frac{1}{2R}+c\frac{1}{|t|/\sqrt{d}}\Big) \leq C(d)\frac{1}{|t|}, \end{align*} where the last inequality is by $|t|\leq \sqrt{d} \pi R$. The proof of (i) is then complete.\\ (ii) For any $\delta\leq |t|\leq \delta^{-1}$, there is some $1\leq j\leq d$ so that $|t_j|=\max\{|t_k|, 1\leq k\leq d\}$ and hence $\delta \leq |t|\leq \sqrt{d}|t_j|$. It follows that \begin{align}\label{4eb3.1a} \frac{|\sin (t_j)|}{|t_j|} \leq \sup_{|x|>\delta/\sqrt{d}} \frac{|\sin x|}{|x|}\leq e^{-K}, \end{align} for some $K=K(d, \delta)>0$. Since $\lim_{x\to 0} \frac{|x|}{|e^{ix}-1|}=1$ and $|t_k/R|\leq |t|/R\leq \delta^{-1}/R$ for all $1\leq k\leq d$, we get for $R$ large, \begin{align}\label{4eb3.2a} \sup_{\delta\leq |t|\leq \delta^{-1}} \frac{|t_k\frac{1}{R}|}{|e^{it_k\frac{1}{R}}-1|} \leq \sup_{|x|\leq \delta^{-1}/R} \frac{|x|}{|e^{ix}-1|} \leq 1+\frac{K}{2d}, \quad \forall 1\leq k\leq d. \end{align} Recall the first inequality in \eqref{4e5.20}. We may apply \eqref{4eb3.1a}, \eqref{4eb3.2a} to get for $R$ large, \begin{align*} \sup_{\delta\leq |t|\leq \delta^{-1}}|\rho(t)|\leq & \frac{1}{(2R)^d}+\prod_{k=1}^d \Big(\frac{1}{2R}+ (1+\frac{K}{2d}) \frac{|\sin t_k|}{|t_k|}\Big)\\ \leq &\frac{1}{(2R)^d}+\Big (\frac{1}{2R}+ (1+\frac{K}{2d})\Big)^{d-1} \Big(\frac{1}{2R}+ (1+\frac{K}{2d}) e^{-K}\Big). \end{align*} Let $R\to \infty$ to see that \begin{align*} \limsup_{R\to \infty} \sup_{\delta\leq |t|\leq \delta^{-1}}|\rho(t)|\leq (1+\frac{K}{2d})^{d} e^{-K}\leq e^{-\frac{1}{2}K}. \end{align*} So for $R$ large enough, we have \[ \sup_{\delta\leq |t|\leq \delta^{-1}} |\rho(t)| \leq e^{-\frac{1}{2}K},\] and the proof is complete.
\end{proof} \begin{lemma}\label{4l3.4} There are constants $c_{\ref{4l3.4}}(d), C_{\ref{4l3.4}}(d)>0$ such that for any $R\geq C_{\ref{4l3.4}}(d)$, \begin{align}\label{4e6.19} \sup_{x\in \mathbb{Z}_R^d} |p_n(x)-\bar{p}_n(x)|\leq \frac{c_{\ref{4l3.4}}}{n^{d/2+1} R^d}, \quad \forall n\geq 1. \end{align} \end{lemma} \begin{proof} Recall $p_1(x)=\frac{1}{V(R)} 1(x\in \mathcal{N}(0))$. By using $p_{n+1}(x)=\sum_{y} p_n(x-y) p _1(y)$, one may easily conclude by induction that \[ \sup_{x\in \mathbb{Z}_R^d} p_n(x) \leq C(d)\frac{1}{R^d} \text{ for all } n\geq 1. \] On the other hand, recall $\bar{p}_n(x)$ from \eqref{4e2.14} to see that \begin{align}\label{4eb2.41} \sup_{x\in \mathbb{Z}_R^d} \bar{p}_n(x) \leq C(d)\frac{1}{n^{d/2}R^d} \text{ for all } n\geq 1. \end{align} Hence it follows that for $1\leq n\leq 2d$, \[ \sup_{x\in \mathbb{Z}_R^d} |p_n(x)-\bar{p}_n(x)|\leq \sup_{x\in \mathbb{Z}_R^d} p_n(x)+ \sup_{x\in \mathbb{Z}_R^d} \bar{p}_n(x)\leq C(d)\frac{1}{R^d} \leq \frac{C(d) (2d)^{d/2+1}}{n^{d/2+1} R^d}. \] It suffices to prove \eqref{4e6.19} for any $n\geq 2d.$ Use the symmetry of $Y_1$ to get for any $t\in \mathbb{R}^d$, \begin{align}\label{4e5.1} f(t):=\mathbb{E} e^{it\cdot Y_1}-\sum_{k=0}^3 \mathbb{E}\frac{(it\cdot Y_1)^k}{k!}=\rho(t)-(1-\frac{\lambda_0}{6}|t|^2), \end{align} where $\lambda_0=\lambda_0(R,d)$ is as in \eqref{4e1.8}. Apply Jensen's inequality and Lemma 3.3.7 of \cite{Du10} to get \begin{align}\label{4e5.2} |f(t)|\leq \mathbb{E}\Big( \Big|e^{it\cdot Y_1}-\sum_{k=0}^3 \frac{(it\cdot Y_1)^k}{k!}\Big|\Big)\leq \mathbb{E} \frac{|t\cdot Y_1|^{4}}{4!}\leq \frac{|t|^{4}}{4!}\mathbb{E}|Y_1|^4\leq \frac{1}{24}|t|^4. \end{align} Rearrange terms in \eqref{4e5.1} to see $\rho(t)=1-\frac{\lambda_0}{6}|t|^2+f(t)$. Define \begin{align}\label{4e5.3} g(t):=\log \rho(t)-(-\frac{\lambda_0}{6}|t|^2+f(t)). 
\end{align} Since $|\log(1+x)-x|\leq x^2$ when $|x|$ is small, by \eqref{4e5.2} we get \begin{align}\label{4e5.4} |g(t)|\leq (-\frac{\lambda_0}{6}|t|^2+f(t))^2\leq \frac{1}{12}|t|^4, \text{ for } |t|>0 \text{ small, } \end{align} where in the last inequality we have used $\lambda_0\leq 3/2$. Now use \eqref{4e5.3} to see that for any $n\geq 2d$, \begin{align}\label{4e5.5} \rho^n\Big(\frac{t}{\sqrt{n}}\Big)=e^{n\log \rho(\frac{t}{\sqrt{n}})}=\exp\Big(-\frac{\lambda_0}{6}|t|^2+nf(\frac{t}{\sqrt{n}})+ng(\frac{t}{\sqrt{n}})\Big)=e^{-\frac{\lambda_0}{6}|t|^2}F(t,n), \end{align} where \begin{align}\label{4e5.6} F(t,n)=\exp\Big(nf(\frac{t}{\sqrt{n}})+ng(\frac{t}{\sqrt{n}})\Big). \end{align} Pick $\delta \in(0,1/2)$ small so that $2c_{\ref{4l1.2}}\leq \delta^{-1}$ and \eqref{4e5.4} holds for any $|t|\leq \delta$. By \eqref{4e6.62} and \eqref{4e2.14}, we have \begin{align}\label{4e5.7} &{(2\pi)^d n^{d/2} R^d} |p_n(x)-\bar{p}_n(x)|\nonumber\\ = &\Big|\int_{[-\sqrt{n} R\pi,\sqrt{n}R\pi]^d} \rho^n(\frac{t}{\sqrt{n}}) e^{-i\frac{t\cdot x}{\sqrt{n}}} dt- \int e^{-\frac{\lambda_0}{6}|t|^2} e^{-i\frac{t\cdot x}{\sqrt{n}}} dt\Big|\nonumber\\ \leq & \Big|\int_{|t|\leq \delta\sqrt{n}} (\rho^n(\frac{t}{\sqrt{n}})-e^{-\frac{\lambda_0}{6}|t|^2}) e^{-i\frac{t\cdot x}{\sqrt{n}}} dt\Big|+\int_{[-\sqrt{n} R\pi,\sqrt{n}R\pi]^d} 1_{\{|t|>\delta\sqrt{n}\}} \Big|\rho^n(\frac{t}{\sqrt{n}})\Big| dt\nonumber\\ &\quad +\int 1_{\{|t|>\delta\sqrt{n}\}} e^{-\frac{\lambda_0}{6}|t|^2} dt:=I_1+I_2+I_3. \end{align} For $I_3$, one can easily check that for some constants $c_1, c_2>0$ depending on $d,\delta$, we have \begin{align}\label{4e5.8} I_3\leq c_1 e^{-c_2n}.
\end{align} \noindent Turning to $I_2$, for any $n\geq 2d$, we may apply Lemma \ref{4l1.2}(i) to get for any $R\geq C_{\ref{4l1.2}},$ \begin{align}\label{4e5.11} I_2 \leq &\int_{\delta\sqrt{n}\leq |t|\leq \sqrt{d} \sqrt{n} R\pi } \Big|\rho^n(\frac{t}{\sqrt{n}})\Big| dt=n^{d/2} \int_{\delta\leq |t|\leq \sqrt{d} \pi R} |\rho({t})|^n dt\nonumber\\ \leq& n^{d/2} \int_{\delta \leq |t|\leq 2c_{\ref{4l1.2}}} |\rho({t})|^n dt+n^{d/2}\int_{2c_{\ref{4l1.2}}\leq |t|\leq \sqrt{d} \pi R} (\frac{c_{\ref{4l1.2}}}{|t|})^n dt\nonumber \\ \leq& n^{d/2} \int_{\delta \leq |t|\leq \delta^{-1}} e^{-nK_{\ref{4l1.2}} } dt+n^{d/2}\int_{|t|\geq 2c_{\ref{4l1.2}}} (\frac{c_{\ref{4l1.2}}}{|t|})^n dt\nonumber \\ \leq& n^{d/2} C(\delta) e^{-nK_{\ref{4l1.2}} }+n^{d/2}C(d) c_{\ref{4l1.2}}^d\frac{1}{n-d}2^{d-n} \leq c_3 e^{-c_4n}, \end{align} for some constants $c_3, c_4>0$ depending on $d,\delta$. In the third inequality we have used $2c_{\ref{4l1.2}}\leq \delta^{-1}$ and Lemma \ref{4l1.2}(ii). It remains to bound $I_1$. By \eqref{4e5.5} we have \begin{align}\label{4e6.75} I_1=&\Big|\int_{|t|\leq \delta\sqrt{n}} e^{-\frac{\lambda_0}{6}|t|^2} (F(t,n)-1) e^{-i\frac{t\cdot x}{\sqrt{n}}} dt\Big|\nonumber\\ \leq&\Big|\int_{n^{1/8}\leq |t|\leq \delta\sqrt{n}} e^{-\frac{\lambda_0}{6}|t|^2} (F(t,n)-1) e^{-i\frac{t\cdot x}{\sqrt{n}}} dt\Big|\nonumber\\ &+\Big|\int_{|t|\leq n^{1/8}} e^{-\frac{\lambda_0}{6}|t|^2} (F(t,n)-1) e^{-i\frac{t\cdot x}{\sqrt{n}}} dt\Big|:=J_1+J_2. \end{align} We first deal with $J_1$. Since $n^{1/8}\leq |t|\leq \delta\sqrt{n}$ and we have chosen $0<\delta<1/2$ small, we may apply \eqref{4e5.2} and \eqref{4e5.4} to get \begin{align}\label{4e6.77} |nf(\frac{t}{\sqrt{n}})+ng(\frac{t}{\sqrt{n}})|\leq \frac{1}{8}n |\frac{t}{\sqrt{n}}|^4\leq \frac{1}{32}n |\frac{t}{\sqrt{n}}|^2=\frac{1}{32} |t|^2, \end{align} where in the last inequality we have used $|t|/\sqrt{n}\leq \delta\leq 1/2$.
Recall $F(t,n)$ from \eqref{4e5.6} and apply \eqref{4e6.77} to see that \begin{align}\label{4e6.76} J_1\leq& \int_{n^{1/8}\leq |t|\leq \delta\sqrt{n}} e^{-\frac{\lambda_0}{6}|t|^2} (1+e^{\frac{1}{32} |t|^2}) dt\leq 2\int_{|t|\geq n^{1/8}} e^{-\frac{1}{24}|t|^2} dt\leq c_5 e^{-c_6 n^{1/4}}, \end{align} for some constant $c_5, c_6>0$ depending on $d$. In the second inequality we have used $\lambda_0\geq 1/2$. Turning to $J_2$, we will use the first inequality in \eqref{4e6.77} to see that \[ |nf(\frac{t}{\sqrt{n}})+ng(\frac{t}{\sqrt{n}})|\leq \frac{1}{8}n |\frac{t}{\sqrt{n}}|^4=\frac{1}{8n} |{t}|^4. \] Apply $|e^x-1|\leq 2|x|$ for $|x|<1/2$ and the above to get for $|t|\leq n^{1/8}$, \[ |F(t,n)-1|\leq 2 |nf(\frac{t}{\sqrt{n}})+ng(\frac{t}{\sqrt{n}})|\leq \frac{1}{4n}|t|^4. \] Hence $J_2$ becomes \begin{align}\label{4e5.10} J_2\leq & \int_{|t|\leq n^{1/8}}e^{-\frac{\lambda_0}{6}|t|^2} \frac{1}{4n}|t|^4 dt \leq \frac{1}{4n} \int e^{-\frac{1}{12}|t|^2} |t|^4 dt \leq C(d)\frac{1}{n}. \end{align} Apply \eqref{4e6.76}, \eqref{4e5.10} in \eqref{4e6.75} to get \begin{align}\label{4eb3.31} I_1\leq c_5 e^{-c_6 n^{1/4}}+C(d)\frac{1}{n}\leq C(d)\frac{1}{n}. \end{align} Finally combine \eqref{4e5.7}, \eqref{4e5.8}, \eqref{4e5.11}, \eqref{4eb3.31} to conclude for any $n\geq 2d$, \begin{align}\label{4e5.12} {(2\pi)^d n^{d/2} R^d} |p_n(x)-\bar{p}_n(x)|\leq c_1 e^{-c_2n}+c_3 e^{-c_4n}+\frac{C(d)}{n} \leq \frac{C(d)}{n}, \end{align} as required. \end{proof} An easy consequence of the above lemma is \begin{align}\label{4eb2.45} \sup_{x\in \mathbb{Z}_R^d} p_n(x)\leq \sup_{x\in \mathbb{Z}_R^d} |p_n(x)-\bar{p}_n(x)|+\sup_{x\in \mathbb{Z}_R^d} \bar{p}_n(x)\leq C(d) \frac{1}{n^{d/2}R^d}, \quad \forall n\geq 1, \end{align} where the last inequality uses \eqref{4e6.19} and \eqref{4eb2.41}. Now we are ready to give the proof of Proposition \ref{4p1.1}(i). 
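Before doing so, the local CLT approximation of Lemma \ref{4l3.4} can be illustrated numerically in $d=1$, computing $p_n$ by exact repeated convolution and comparing with $\bar{p}_n$ from \eqref{4e2.14}; a minimal sketch (the parameters and tolerances are ours):

```python
import numpy as np

R = 2                                   # lattice Z_R = Z / R, dimension d = 1
kernel = np.zeros(5)
kernel[[0, 1, 3, 4]] = 0.25             # step times R, uniform on {-2, -1, 1, 2}
lam0 = (R * (R + 1) / R ** 2) * ((2 * R + 1) / (2 * R))   # lambda_0(R, 1) as in (4e1.8)

def p_exact(n):
    # p_n(j / R) for j = -2n, ..., 2n, by exact repeated convolution of the step law
    p = np.array([1.0])
    for _ in range(n):
        p = np.convolve(p, kernel)
    return p

def p_bar(n):
    # Gaussian approximation (4e2.14), evaluated on the same lattice support
    x = np.arange(-2 * n, 2 * n + 1) / R
    return np.sqrt(3 / lam0) / (np.sqrt(2 * np.pi * n) * R) * np.exp(-3 * x ** 2 / (2 * n * lam0))

err = {n: np.max(np.abs(p_exact(n) - p_bar(n))) for n in (25, 100)}
assert err[100] < err[25] < 1e-2        # sup-norm error shrinks with n, consistent with Lemma 4l3.4
```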
\begin{proof}[Proof of Proposition \ref{4p1.1}(i)] For any $t\in \mathbb{R}$, we let $\phi(t)=\mathbb{E} e^{tY_1^1}$ be the moment generating function of the first coordinate of $Y_1$. Let \begin{align}\label{4eb3.10} f(t):=\phi(t)-\mathbb{E}\Big(\sum_{k=0}^3 \frac{(tY_1^1)^k}{k!}\Big)=\phi(t)-(1+\frac{\lambda_0}{6}t^2). \end{align} For $|t|\leq 1$, we use $|e^x-\sum_{k=0}^3 \frac{x^k}{k!}|\leq \frac{x^4}{12}$ for all $|x|\leq 1$ to get \begin{align}\label{4e6.78} |f(t)|\leq \mathbb{E}\Big( \Big|e^{tY_1^1}-\sum_{k=0}^3 \frac{(tY_1^1)^k}{k!}\Big|\Big)\leq \mathbb{E} \Big(\frac{(tY_1^1)^4}{12}\Big)\leq \frac{1}{12}t^4, \end{align} where we have used $|Y_1^1|\leq 1$ in the second and the last inequalities. Fix any $n\geq 1$. Recall $S_n=Y_1+\cdots+Y_n$ and define $S_n^1=Y_1^1+\cdots+Y_n^1$. Then $\mathbb{E} e^{t S_n^1}=\phi(t)^n$. For any $0\leq t\leq \sqrt{n}$, we apply \eqref{4eb3.10}, \eqref{4e6.78} to get \begin{align}\label{4eb3.11} \mathbb{E} e^{\frac{t}{\sqrt{n}} S_n^1}=\phi(\frac{t}{\sqrt{n}})^n&=\Big(1+\frac{\lambda_0}{6}\frac{t^2}{n}+f(\frac{t}{\sqrt{n}})\Big)^n\nonumber\\ &\leq \exp\Big(\frac{\lambda_0}{6}{t^2}+\frac{1}{12} \frac{t^4}{n}\Big)\leq \exp\Big(\frac{\lambda_0}{6}{t^2}+\frac{1}{12} {t^2}\Big), \end{align} where the last inequality uses $t^2\leq n$. Since $\mathbb{E} (Y_1^1)=0$, we have that $\{S_n^1, n\geq 1\}$ is a martingale w.r.t. the filtration generated by $\{Y_n^1\}$. Hence we may use the symmetry of $S_n$ and apply the martingale maximal inequality (see, e.g., Theorem 12.2.5 of \cite{LL10}) to get \begin{align*} \mathbb{P}(\max_{1\leq k\leq n} \|S_k\|_\infty \geq t\sqrt{n})\leq & d\cdot \mathbb{P}(\max_{1\leq k\leq n} |S_k^1| \geq t\sqrt{n})\leq d\cdot \frac{\mathbb{E} e^{\frac{t}{\sqrt{n}} S_n^1} }{e^{t^2}}\\ \leq &de^{-t^2} \exp\Big(\frac{\lambda_0}{6}t^2+\frac{1}{12} {t^2}\Big)\leq de^{-t^2/2}, \end{align*} where the second last inequality is by \eqref{4eb3.11} and the last inequality uses $\lambda_0\leq 3/2$.
For $t>\sqrt{n}$, the above inequality is immediate since $\|S_k\|_\infty \leq n$ for all $1\leq k\leq n$. Therefore we get for any $t\geq 0$, \begin{align*} &\mathbb{P}(\max_{1\leq k\leq n} |S_k| \geq t\sqrt{n}\sqrt{d}) \leq \mathbb{P}(\max_{1\leq k\leq n} \|S_k\|_\infty \geq t\sqrt{n}) \leq de^{-t^2/2}. \end{align*} For any $x\in \mathbb{Z}_R^d$, set $t=|x|/\sqrt{nd}$ in the above to get \begin{align}\label{4eb2.50} \mathbb{P}(\max_{1\leq k\leq n} |S_k| \geq |x|)\leq de^{-|x|^2/(2nd)}. \end{align} Now we return to $p_n(x)=\mathbb{P}(S_n=x)$. Notice that when $n=1$, we have for any $x\in \mathbb{Z}_R^d,$ \[ p_1(x)=\frac{1}{V(R)} 1(x\in \mathcal{N}(0)) \leq \frac{C(d)}{R^d} e^{-\frac{1}{8}}1(|x|\leq \sqrt{d}) \leq \frac{C(d)}{R^d} e^{-\frac{|x|^2}{8d}}, \] and so we may assume $n\geq 2$ below. Let $m=n/2$ if $n$ is even and $m=(n+1)/2$ if $n$ is odd. Then we have $n-m\geq 1$ and \begin{align}\label{4eb2.46} \{S_n=x\}=\{S_n=x, |S_m|\geq |x|/2 \} \cup \{S_n=x, |S_n-S_m|\geq |x|/2\}. \end{align} It suffices to bound the probabilities of the events on the right-hand side. Apply \eqref{4eb2.50} to get \begin{align}\label{4eb2.47} \mathbb{P}(S_n=x, |S_m|\geq |x|/2)=&\mathbb{P}(|S_m|\geq |x|/2)\mathbb{P}\Big(S_n=x\Big| |S_m|\geq |x|/2\Big)\nonumber\\ \leq &\mathbb{P}(\max_{1\leq k\leq n} |S_k| \geq |x|/2)\cdot \sup_{y\in\mathbb{Z}_R^d} p_{n-m}(x-y)\nonumber\\ \leq&de^{-\frac{|x|^2}{8nd}} \frac{C(d)}{(n-m)^{d/2} R^d}\leq \frac{C(d)}{n^{d/2} R^d} e^{-\frac{|x|^2}{8nd}}, \end{align} where we have used \eqref{4eb2.45} in the second last inequality. The probability of the other event on the right-hand side of \eqref{4eb2.46} can be estimated in a similar way if one notices \begin{align} \mathbb{P}(S_n=x, |S_n-S_m|\geq |x|/2)=\mathbb{P}(S_n=x, |S_{n-m}|\geq |x|/2). \end{align} Now it follows from \eqref{4eb2.46}, \eqref{4eb2.47} that \[ p_n(x)\leq \frac{C(d)}{n^{d/2} R^d} e^{-\frac{|x|^2}{8nd}}, \] as required.
\end{proof} The proof of Proposition \ref{4p1.1}(ii) follows in a similar way to that of Lemma 3 in \cite{LZ10}. \begin{proof}[Proof of Proposition \ref{4p1.1}(ii)] It suffices to show that for any $x,y\in \mathbb{Z}^d_R$ with $|x-y|\geq 1$, \begin{align*} |p_n(x)-{p}_n(y)|\leq C(d)\frac{1}{n^{d/2} R^d} \Big(\frac{|x-y|}{\sqrt{n}} \wedge 1\Big) (e^{-\frac{|x|^2}{16nd}}+e^{-\frac{|y|^2}{16nd}}). \end{align*} By Proposition \ref{4p1.1}(i) we have for any $n\geq 1$, \begin{align}\label{4e8.33} |p_n(x)-{p}_n(y)|\leq p_n(x)+p_n(y)&\leq \frac{c_{\ref{4p1.1}}}{n^{d/2} R^d} (e^{-\frac{|x|^2}{8nd}}+e^{-\frac{|y|^2}{8nd}}). \end{align} Therefore it suffices to show that for any $x,y\in \mathbb{Z}^d_R$ with $|x-y|\geq 1$, \begin{align}\label{4e8.32} |p_n(x)-{p}_n(y)|\leq C(d)\frac{1}{n^{d/2} R^d} \frac{|x-y|}{\sqrt{n}} (e^{-\frac{|x|^2}{16nd}}+e^{-\frac{|y|^2}{16nd}}). \end{align} Since \eqref{4e8.32} holds trivially for $n\leq 2d$ by \eqref{4e8.33}, we may assume that $n\geq 2d$.\\ \noindent ${\bf Case\ 1.}$ We first consider $|x|,|y| \geq \sqrt{16nd\log n}$. Then we have \begin{align*} e^{-\frac{|x|^2}{16nd}} \leq e^{-\log n}\leq \frac{1}{\sqrt{n}}, \text{ and } e^{-\frac{|y|^2}{16nd}} \leq \frac{1}{\sqrt{n}}. \end{align*} If $|x-y|\geq 1$, we may use \eqref{4e8.33} and the above to get \begin{align}\label{4e8.34} |p_n(x)-{p}_n(y)|\leq& \frac{c_{\ref{4p1.1}}}{n^{d/2} R^d} (e^{-\frac{|x|^2}{16nd}} +e^{-\frac{|y|^2}{16nd}}) \frac{1}{\sqrt{n}}\nonumber\\ \leq& \frac{c_{\ref{4p1.1}}}{n^{d/2} R^d} (e^{-\frac{|x|^2}{16nd}} +e^{-\frac{|y|^2}{16nd}}) \frac{|x-y|}{\sqrt{n}}. \end{align} \noindent ${\bf Case\ 2.}$ Next we consider $|x|,|y| \leq \sqrt{8nd\log n}$ and $|x-y|\geq 1$. Then it follows \begin{align*} e^{-\frac{|x|^2}{16nd}} \geq e^{-\frac{1}{2}\log n}= \frac{1}{\sqrt{n}}, \text{ and } e^{-\frac{|y|^2}{16nd}} \geq \frac{1}{\sqrt{n}}.
\end{align*} Now use Lemma \ref{4l3.4} and the above to get for all $n\geq 1$ and $R\geq C_{\ref{4l3.4}}$, \begin{align}\label{4e8.62} |p_n(x)-\bar{p}_n(x)|\leq& \frac{c_{\ref{4l3.4}}}{n^{d/2+1} R^d}\leq \frac{C(d)}{n^{d/2} R^d} e^{- \frac{|x|^2}{16nd}} \frac{1}{\sqrt{n}} \leq \frac{C(d)}{n^{d/2} R^d} \frac{|x-y|}{\sqrt{n}} e^{- \frac{|x|^2}{16nd}}. \end{align} Similarly the above holds for $|p_n(y)-\bar{p}_n(y)|$. Turning to $\bar{p}_n(x)-\bar{p}_n(y)$, we recall from \eqref{4e6.18a} with $\alpha=1$ that there exists some constant $C>0$ such that \begin{align}\label{4e6.18} |e^{-\frac{|x|^2}{2t}}-e^{-\frac{|y|^2}{2t}}|\leq Ct^{-1/2} |x-y| (e^{-\frac{|x|^2}{4t}}+e^{-\frac{|y|^2}{4t}}),\ \forall t>0, x,y\in \mathbb{R}^d. \end{align} Apply the above to get \begin{align}\label{4e8.63} |\bar{p}_n(x)-\bar{p}_n(y)|\leq & C(d)\frac{1}{n^{d/2} R^d} \frac{|x-y|}{\sqrt{n}} (e^{-\frac{3|x|^2}{4n\lambda_0}}+e^{-\frac{3|y|^2}{4n\lambda_0}})\nonumber\\ \leq & C(d)\frac{1}{n^{d/2} R^d} \frac{|x-y|}{\sqrt{n}} (e^{-\frac{|x|^2}{16nd}}+e^{-\frac{|y|^2}{16nd}}). \end{align} Combine \eqref{4e8.62} and \eqref{4e8.63} to see that \begin{align*} |{p}_n(x)-{p}_n(y)|\leq & |{p}_n(x)-\bar{p}_n(x)|+ |\bar{p}_n(x)-\bar{p}_n(y)|+ |\bar{p}_n(y)-{p}_n(y)|\\ \leq & C(d)\frac{1}{n^{d/2} R^d} \frac{|x-y|}{\sqrt{n}} (e^{-\frac{|x|^2}{16nd}}+e^{-\frac{|y|^2}{16nd}}). \end{align*} \noindent ${\bf Case\ 3.}$ Finally if $|x| \leq \sqrt{8nd\log n}$ and $|y| \geq \sqrt{16nd\log n}$ or vice-versa, we have \[|x-y|\geq (4-2\sqrt{2})\sqrt{d} \sqrt{n\log n} \geq \frac{1}{2}\sqrt{n},\] and so by \eqref{4e8.33}, \begin{align*} |p_n(x)-{p}_n(y)| &\leq \frac{c_{\ref{4p1.1}}}{n^{d/2} R^d} (e^{-\frac{|x|^2}{8nd}}+e^{-\frac{|y|^2}{8nd}}) \\ & \leq \frac{c_{\ref{4p1.1}}}{n^{d/2} R^d} \frac{2 |x-y|}{\sqrt{n}}(e^{-\frac{|x|^2}{16nd}}+e^{-\frac{|y|^2}{16nd}}). \end{align*} Now the proof is complete with the above three cases.
\end{proof} \section{Moments and exponential moments of BRW}\label{a1} \subsection{Moments and exponential moments of $Z_n$} \label{4ap1.1} This section gives the proofs of Proposition \ref{4p1.2} and Corollary \ref{4c1.2} which are restated as Proposition \ref{4cp1.2} and Corollary \ref{ac1.2} below. To begin with, we will introduce another labelling system for our BRW. Let $\tilde{I}=\cup_{n=0}^\infty \mathbb{N}\times \{1,\cdots, V(R)\}^n$. If $\beta=(\beta_0, \beta_1, \cdots, \beta_n)\in\tilde{I}$, we set $|\beta|=n$ to be the generation of $\beta$ and write $\beta|k=(\beta_0, \cdots, \beta_k)$ for each $0\leq k\leq n$. Let $\pi \beta=(\beta_0, \beta_1, \cdots, \beta_{n-1})$ be the parent of $\beta$ and set $\beta \vee i=(\beta_0, \beta_1, \cdots, \beta_n, i)$ to be the $i$-th offspring of $\beta$ for $1\leq i\leq V(R)$. Let $\{{B}^\beta: \beta \in \tilde{I}, |\beta|>0\}$ be i.i.d. Bernoulli random variables with parameter $p(R)$ indicating whether the birth from $\pi \beta$ to $\beta$ is valid. Assume $\{{W}^{\beta \vee i}, 1\leq i\leq V(R)\}_{\beta \in \tilde{I}}$ is a collection of i.i.d. random vectors, each uniformly distributed on $\mathcal{N}(0)^{(V(R))}=\{(e_1,\cdots, e_{V(R)}): \{e_i\} \text{ all distinct}\}$. Let $\{B^{\beta}\}$ and $\{W^\beta\}$ be mutually independent. Fix any $\tilde{Z}_0\in M_F(\mathbb{Z}_R^d)$. Again we may rewrite $\tilde{Z}_0$ as $\tilde{Z}_0=\sum_{i=1}^{|\tilde{Z}_0|} \delta_{x_i}$ for some $x_i\in \mathbb{Z}_R^d$. If $i>|\tilde{Z}_0|$, we set $x_i$ to be the cemetery state $\Delta$. Write $\beta \approx n$ if $|\beta|=n$, $\beta_0\leq |\tilde{Z}_0|$ and ${B}^{\beta|i}=1$ for all $1\leq i\leq n$ so that such a $\beta$ labels a particle alive in generation $n$, whose historical path would be given by \begin{align}\label{4e6.51} \tilde{Y}_k^\beta=x_{\beta_0}+\sum_{i=1}^{|\beta|} 1(i\leq k) W^{\beta|i}, \quad \forall k\geq 0. 
\end{align} We denote the current location of the particle $\beta$ by \begin{align}\label{4e7.31} \tilde{Y}^\beta= \begin{cases} x_{\beta_0}+\sum_{i=1}^{|\beta|} W^{\beta|i}, &\text{ if } \beta\approx |\beta|,\\ \Delta, &\text{ otherwise. } \end{cases} \end{align} If $|\beta|=0$, we have $\tilde{Y}^\beta=x_{\beta_0}$ for all $1\leq \beta_0\leq |{\tilde{Z}}_0|$ and $\tilde{Y}^\beta=\Delta$ otherwise. For any Borel function $\phi$, we define \begin{align}\label{4ed7.31} \tilde{Z}_n(\phi)=\sum_{|\beta|=n} \phi(\tilde{Y}^\beta), \end{align} where it is understood that $\phi(\Delta)=0$. In this way, $\tilde{Z}$ gives the empirical distribution of a branching random walk where in generation $n$, each particle gives birth to one offspring at each of its $V(R)$ neighboring positions independently with probability $p(R)$. Recall the labelling system for BRW $Z=(Z_n)$ from \eqref{4eb2.21}. One can easily check that if $Z_0=\tilde{Z}_0$, then for any $\phi$ and $n\geq 0$ we have \[ Z_n(\phi) \text{ is equal to } \tilde{Z}_n(\phi) \text{ in distribution. } \] We slightly abuse the notation and use $\mathbb{P}^{\tilde{Z}_0}$ to denote the law of $\tilde{Z}=(\tilde{Z}_n)$ as in \eqref{4ed7.31}. In particular, we write $\mathbb{P}^x$ for the case when $\tilde{Z}_0=\delta_x$. The two labelling systems have their own uses: $Z=(Z_n)$ is tailor-made to couple BRW with the SIR epidemic as in Lemma \ref{4l1.5}; $\tilde{Z}=(\tilde{Z}_n)$ is more suitable for calculating its moments, which we will give below. For any $\beta \in \tilde{I}$, if $S$ is a subset of $\tilde{I}$ so that all the indices in $S$ have length $|\beta|$, we define \begin{align}\label{4eb2.13} \sigma(S,\beta)= \begin{cases} |\beta|-\inf\{j: \beta|j\neq \gamma|j \text{ for all } \gamma \in S\} &\text{ if } \beta \notin S;\\ -1 &\text{ if } \beta \in S.\\ \end{cases} \end{align} In this way, $\sigma(S,\beta)$ denotes the number of generations back that $\beta$ first split off from the family tree generated by $S$.
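As a concrete illustration of \eqref{4eb2.13}, with a hypothetical pair of indices: take $S=\{\gamma\}$ with $\gamma=(1,2,2,1)$ and $\beta=(1,2,1,1)$, so that $|\beta|=3$. Then $\beta|1=\gamma|1=(1,2)$ while $\beta|2=(1,2,1)\neq \gamma|2=(1,2,2)$, and hence
\[
\inf\{j: \beta|j\neq \gamma|j \text{ for all } \gamma \in S\}=2, \qquad \sigma(S,\beta)=3-2=1,
\]
i.e.\ $\beta$ split off from the family tree generated by $S$ one generation back.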
Set \begin{align}\label{4eb2.18} \mathcal F(S)=\sigma\{B^{{\gamma}|k}: {\gamma}\in S, 1\leq k\leq |{\gamma}|\} \vee \sigma\{W^{{\gamma}|k}: {\gamma}\in S, 1\leq k\leq |{\gamma}|\} \end{align} to be the $\sigma$-field containing the information of the family tree generated by $S$. Recall $S_n$ from \eqref{4eb2.2}. For convenience we let $S_k=0$ if $k\leq 0$. For any $n\geq 1$ and any Borel function $\phi$, we define \begin{align}\label{4eb2.15} G(\phi,n)= 3\|\phi\|_\infty+\sum_{k=1}^{n} \sup_{y\in \mathbb{Z}_R^d} \mathbb{E}(\phi(y+S_k))=3\|\phi\|_\infty +\sum_{k=1}^{n} \sup_{y\in \mathbb{Z}_R^d}\sum_{z\in \mathbb{Z}^d_R} \phi(y+z) p_k(z). \end{align} The following lemma is proved in a similar way to that of Lemma 2.4 in \cite{Per88}. \begin{lemma}\label{4la.1} For any $n,m\geq 1$ and $\phi\geq 0$, we let $S\subseteq \tilde{I}$ be a set of $m$ indices of length $n$. Then for any $x\in \mathbb{Z}_R^d$ we have \[ \mathbb{E}^x\Big(\sum_{\substack{|\beta| = n \\ \sigma(S,\beta)\leq n-1}} \phi(\tilde{Y}^{\beta}) \Big|\mathcal F(S)\Big) \leq m e^{\frac{n\theta}{R^{d-1}}}G(\phi,n). \] \end{lemma} \begin{proof} Fix $x\in \mathbb{Z}^d_R$ and $n\geq 1$. We label the ancestor particle at $x$ by $1$ and only consider $\beta$ with $\beta_0=1$ below. Let $|\beta|= n$ and assume $\sigma(S,\beta)=i$ for some $i\in \{-1, 0,\cdots, n-1\}$. Then by \eqref{4eb2.13} we get \[ \{\beta|k:|\beta|-i\leq k\leq |\beta| \} \cap \{{\gamma}|k: {\gamma} \in S, k\leq |\beta|\}=\emptyset. \] Hence $\sigma\{B^{\beta|k}: |\beta|-i\leq k\leq |\beta|\}\vee \sigma\{W^{\beta|k}: |\beta|-i+1\leq k\leq |\beta|\}$ is independent of $\mathcal F(S)$. Let $i^+=i\vee 0$. 
Since $\beta|(n-i^+-1)\in S$, we have $\tilde{Y}^{\beta|(n-i^+-1)}$ is $\mathcal F(S)$-measurable and so \begin{align*} &\mathbb{E}^x(\phi(\tilde{Y}^{\beta}) |\mathcal F(S))=1(\tilde{Y}^{\beta|(n-i^+-1)} \neq \Delta) \times \mathbb{E}^x\Big(1(B^{\beta|k}=1, k=|\beta|-i, \cdots, |\beta|)\\ &\times \phi\Big(\tilde{Y}^{\beta|(n-i^+-1)}+W^{\beta|(n-i^+)}+\sum_{k=n-i+1}^{n} W^{\beta|k}\Big)\Big| \mathcal F(S)\Big)\\ \leq & \sup_{e\in \mathcal{N}(0)} \mathbb{E}^x \Big(\phi\Big(\tilde{Y}^{\beta|(n-i^+-1)}+e+\sum_{k=n-i+1}^{n} W^{\beta|k}\Big)\Big) \cdot \mathbb{E}^x\Big(\prod_{k=|\beta|-i}^{|\beta|}1(B^{\beta|k}=1)\Big)\\ \leq & \sup_{y\in \mathbb{Z}_R^d}\mathbb{E} (\phi(y+S_{i}))\cdot p(R)^{i+1}, \end{align*} where the first inequality follows by conditioning on $W^{\beta|(n-i^+)}=e$ for $e\in \mathcal{N}(0)$ and then using that $B^{\beta|k}, W^{\beta|k}$ are independent of $\mathcal F(S)$ and finally taking sup over $e\in \mathcal{N}(0)$. The last inequality follows if one notices that $\{W^{\beta|k}\}$ are i.i.d. random variables uniform on $\mathcal{N}(0)$ and $\{B^{\beta|k}\}$ are i.i.d. Bernoulli. Notice $\{\beta: \sigma(S,\beta)=i\} \subseteq \cup_{{\gamma} \in S} \{\beta: \sigma({\gamma},\beta)=i\}$. For each $\gamma \in S$, we have the number of particles $\beta$ satisfying $\beta_0=1$, $|\beta|=n$ and $\sigma({\gamma},\beta)=i$ is at most $V(R)^{i+1}$ and so it follows that \begin{align*} \mathbb{E}^x\Big(\sum_{\substack{|\beta| = n \\ \sigma(S,\beta)=i}} \phi(\tilde{Y}^{\beta}) \Big|\mathcal F(S)\Big) \leq& \sup_{y\in \mathbb{Z}_R^d}\mathbb{E} (\phi(y+S_{i})) p(R)^{i+1} \cdot mV(R)^{i+1}\\ \leq&m(1+\frac{\theta}{R^{d-1}})^n \sup_{y\in \mathbb{Z}_R^d}\mathbb{E} (\phi(y+S_{i})). 
\end{align*} Sum $i$ over $-1\leq i\leq n-1$ to get \begin{align*} &\mathbb{E}^x\Big(\sum_{\substack{|\beta|= n \\ \sigma(S,\beta)\leq n-1}} \phi(\tilde{Y}^{\beta}) \Big|\mathcal F(S)\Big)\\ &\leq m(1+\frac{\theta}{R^{d-1}})^n \Big(3\|\phi\|_\infty+\sum_{i=1}^n \sup_{y\in \mathbb{Z}_R^d}\mathbb{E} (\phi(y+S_{i}))\Big)\leq m e^{\frac{n\theta}{R^{d-1}}} G(\phi,n), \end{align*} as required. \end{proof} \begin{proposition}\label{4cp1.2} For any $x\in \mathbb{Z}^d_R$, $\phi\geq 0$ and $n\geq 1$, we have \begin{align*} \mathbb{E}^{x}(\tilde{Z}_{n}(\phi))=(1+\frac{\theta}{R^{d-1}})^n \mathbb{E}(\phi(S_n+x)). \end{align*} For any $p\geq 2$, \begin{align*} &\mathbb{E}^{x}(\tilde{Z}_{n}(\phi)^p)\leq (p-1)! e^{\frac{n\theta(p-1)}{R^{d-1}}} G(\phi,n)^{p-1}\mathbb{E}^{x}(\tilde{Z}_{n}(\phi)). \end{align*} \end{proposition} \begin{proof} Fix $x\in \mathbb{Z}^d_R$ and $n\geq 1$. We label the particle at $x$ by $1$ and only consider $\beta$ with $\beta_0=1$ below. For any $\phi: \mathbb{Z}^d_R \to \mathbb{R}$, we have \begin{align}\label{4e6.9} \mathbb{E}^{x}(\tilde{Z}_{n}(\phi))=&\mathbb{E}^x\Big(\sum_{|\beta|= n} \phi(\tilde{Y}^\beta)\Big)=\sum_{|\beta|= n} \mathbb{E}^x(\phi(\tilde{Y}^\beta)| \beta\approx n)\mathbb{P}^x( \beta\approx n)\nonumber\\ =& \sum_{|\beta|= n} \mathbb{E}(\phi(x+S_n)) p(R)^n=(1+\frac{\theta}{R^{d-1}})^n \mathbb{E}(\phi(S_n+x)). \end{align} \noindent Turning to $p\geq 2$, we have \begin{align*} I:=&\mathbb{E}^{x}(\tilde{Z}_{n}(\phi)^p)=\mathbb{E}^x\Big(\sum_{|\beta^1|= n}\cdots \sum_{|\beta^p| = n} \prod_{i=1}^p \phi(\tilde{Y}^{\beta^i})\Big)\\ =&\mathbb{E}^x\Big(\sum_{|\beta^1| = n}\cdots \sum_{|\beta^{p-1}| = n} \prod_{i=1}^{p-1} \phi(\tilde{Y}^{\beta^i}) \mathbb{E}^x\Big(\sum_{|\beta^{p}| = n} \phi(\tilde{Y}^{\beta^{p}}) \Big|\mathcal F(S)\Big)\Big), \end{align*} where $S=\{\beta^1,\cdots, \beta^{p-1}\}$ is a set of $p-1$ indices of length $n$. Since all $\beta^j$ have a common ancestor $x$, we have $\sigma(S,\beta^p)\leq n-1$.
Hence \begin{align*} I=&\mathbb{E}^x\Big(\sum_{|\beta^1| = n}\cdots \sum_{|\beta^{p-1}| = n} \prod_{i=1}^{p-1} \phi(\tilde{Y}^{\beta^i}) \times \mathbb{E}^x\Big(\sum_{\substack{|\beta^{p}| = n \\ \sigma(S,\beta^p)\leq n-1}} \phi(\tilde{Y}^{\beta^{p}}) \Big|\mathcal F(S)\Big)\Big)\\ \leq&\mathbb{E}^x\Big(\sum_{|\beta^1| = n}\cdots \sum_{|\beta^{p-1}| = n} \prod_{i=1}^{p-1} \phi(\tilde{Y}^{\beta^i}) \times (p-1)e^{\frac{n\theta}{R^{d-1}}}G(\phi,n)\Big)\\ = &\mathbb{E}^{x}(\tilde{Z}_{n}(\phi)^{p-1}) (p-1)e^{\frac{n\theta}{R^{d-1}}}G(\phi,n), \end{align*} where the inequality is by Lemma \ref{4la.1}. Use induction to conclude \begin{align*} &\mathbb{E}^{x}(\tilde{Z}_{n}(\phi)^p)\leq (p-1)! e^{\frac{n\theta(p-1)}{R^{d-1}}} G(\phi,n)^{p-1}\mathbb{E}^{x}(\tilde{Z}_{n}(\phi)), \end{align*} as required. \end{proof} \begin{corollary}\label{ac1.2} For any $\tilde{Z}_0\in M_F(\mathbb{Z}^d_R)$, $\phi\geq 0$, $\lambda>0$ and $n\geq 1$, if $\lambda e^{\frac{n\theta}{R^{d-1}}} G(\phi,n)<1$ is satisfied, we have \begin{align*} &\mathbb{E}^{\tilde{Z}_0}(e^{\lambda \tilde{Z}_{n}(\phi)})\leq \exp\Big(\lambda \mathbb{E}^{\tilde{Z}_0}(\tilde{Z}_{n}(\phi)) (1-\lambda e^{\frac{n\theta}{R^{d-1}}} G(\phi,n))^{-1}\Big). \end{align*} \end{corollary} \begin{proof} Write $\tilde{Z}_0=\sum_{i=1}^{|\tilde{Z}_0|} \delta_{x_i}$ for some $x_i\in \mathbb{Z}_R^d$. We first consider $\mathbb{P}^x$ for $x=x_i$ with $1\leq i\leq |\tilde{Z}_0|$.
For any $\phi\geq 0$, $\lambda>0$ and $n\geq 1$ such that $\lambda e^{\frac{n\theta}{R^{d-1}}} G(\phi,n)<1$, we may apply Proposition \ref{4cp1.2} to get \begin{align*} \mathbb{E}^{x}(e^{\lambda \tilde{Z}_{n}(\phi)})=&1+\sum_{p=1}^\infty \frac{1}{p!} \lambda^p \mathbb{E}^{x}(\tilde{Z}_{n}(\phi)^p)\\ \leq &1+\sum_{p=1}^\infty \frac{1}{p} \lambda^p e^{\frac{n\theta(p-1)}{R^{d-1}}} G(\phi,n)^{p-1}\mathbb{E}^{x}(\tilde{Z}_{n}(\phi))\\ \leq &1+\lambda \mathbb{E}^{x}(\tilde{Z}_{n}(\phi)) (1-\lambda e^{\frac{n\theta}{R^{d-1}}} G(\phi,n))^{-1}\\ \leq &\exp\Big(\lambda \mathbb{E}^{x}(\tilde{Z}_{n}(\phi)) (1-\lambda e^{\frac{n\theta}{R^{d-1}}} G(\phi,n))^{-1}\Big). \end{align*} Returning to $\mathbb{P}^{\tilde{Z}_0}$, we use the above to arrive at \begin{align*} \mathbb{E}^{\tilde{Z}_0}(e^{\lambda \tilde{Z}_{n}(\phi)})=\prod_{i=1}^{|\tilde{Z}_0|}\mathbb{E}^{x_i}(e^{\lambda \tilde{Z}_{n}(\phi)})\leq &\prod_{i=1}^{|\tilde{Z}_0|}\exp\Big(\lambda \mathbb{E}^{x_i}(\tilde{Z}_{n}(\phi)) (1-\lambda e^{\frac{n\theta}{R^{d-1}}} G(\phi,n))^{-1}\Big)\\ = & \exp\Big(\lambda \mathbb{E}^{\tilde{Z}_0}(\tilde{Z}_{n}(\phi)) (1-\lambda e^{\frac{n\theta}{R^{d-1}}} G(\phi,n))^{-1}\Big), \end{align*} as required. \end{proof} \subsection{Exponential moment for occupation measure}\label{4ap1.2} By using similar arguments with the above, we will prove Proposition \ref{4p1.4} in this section. For any $n\geq 1$ and $\phi\geq 0$, we define \begin{align}\label{4eb2.16} F(\phi,n)= 3\|\phi\|_\infty+ \sup_{y\in \mathbb{Z}_R^d}\sum_{k=1}^{n} \mathbb{E}(\phi(y+S_k))=3\|\phi\|_\infty + \sup_{y\in \mathbb{Z}_R^d} \sum_{k=1}^{n} \sum_{z\in \mathbb{Z}^d_R} \phi(y+z) p_k(z). \end{align} Recall $G(\phi, n)$ from \eqref{4eb2.15}. It is immediate that $F(\phi, n)\leq G(\phi, n)$ and so Proposition \ref{4p1.4} will be an easy consequence of the following proposition. 
In fact, there is almost no difference between $F(\phi, n)$ and $G(\phi, n)$ for the application in this paper, but the stronger result may be needed in other settings. \begin{proposition}\label{4ap3.1} For any $\tilde{Z}_0\in M_F(\mathbb{Z}^d_R)$, $\phi\geq 0$, $\lambda>0$, $n\geq 1$, if $2\lambda n e^{\frac{n\theta}{R^{d-1}}} F(\phi,n)<1$ is satisfied, we have \begin{align}\label{4e100a} &\mathbb{E}^{\tilde{Z}_0}\Big(\exp\Big({\lambda\sum_{k=0}^n \tilde{Z}_{k}(\phi)}\Big)\Big)\leq \exp\Big(\lambda |\tilde{Z}_0| e^{\frac{n\theta}{R^{d-1}}} F(\phi,n) (1-2\lambda n e^{\frac{n\theta}{R^{d-1}}} F(\phi,n))^{-1}\Big). \end{align} \end{proposition} \begin{proof} Write $\tilde{Z}_0=\sum_{i=1}^{|\tilde{Z}_0|} \delta_{x_i}$ for some $x_i\in \mathbb{Z}_R^d$. Again we first consider $\mathbb{P}^x$ for $x=x_i$ with $1\leq i\leq |\tilde{Z}_0|$. For any $\phi\geq 0$, $\lambda>0$ and $n\geq 1$ such that $2\lambda n e^{\frac{n\theta}{R^{d-1}}} F(\phi,n)<1$, by Proposition \ref{4cp1.2} we have \begin{align}\label{4eb1.41} \mathbb{E}^x\Big(\sum_{k=0}^n \tilde{Z}_k(\phi)\Big)=&\sum_{k=0}^n (1+\frac{\theta}{R^{d-1}})^k \mathbb{E}(\phi(x+S_k))\nonumber\\ \leq& e^{n\theta/R^{d-1} }\sum_{k=0}^n\mathbb{E}(\phi(x+S_k))\leq e^{n\theta/R^{d-1} } F(\phi,n). \end{align} Next we will calculate the following $p$-th moment for any $p\geq 2$: \begin{align} &\mathbb{E}^x\Big(\Big(\sum_{k=0}^n \tilde{Z}_k(\phi)\Big)^p\Big)=\mathbb{E}^x\Big(\Big(\sum_{ |\beta|\leq n} \phi(\tilde{Y}^{\beta})\Big)^p\Big)=\mathbb{E}^x\Big(\sum_{|\beta^1|\leq n} \cdots \sum_{|\beta^p|\leq n} \prod_{i=1}^p \phi(\tilde{Y}^{\beta^i})\Big). \end{align} Let $S=\{\beta^1,\cdots, \beta^{p-1}\}$ and recall the $\sigma$-field $\mathcal F(S)$ from \eqref{4eb2.18} so that $\tilde{Y}^{\beta^i} \in \mathcal F(S)$ for all $1\leq i\leq p-1$.
Then it follows from the above that \begin{align}\label{4e8.20} \mathbb{E}^x\Big(\Big(\sum_{k=0}^n \tilde{Z}_k(\phi)\Big)^p\Big) =&\mathbb{E}^x\Big(\sum_{|\beta^1|\leq n} \cdots \sum_{|\beta^{p-1}|\leq n} \prod_{i=1}^{p-1} \phi(\tilde{Y}^{\beta^i}) \mathbb{E}^x\Big(\sum_{|\beta^p|\leq n} \phi(\tilde{Y}^{\beta^p}) \Big|\mathcal F(S)\Big)\Big). \end{align} For any $\beta^p$ with $|\beta^p|\leq n$, we let $\alpha \in \tilde{I}$ denote the position where $\beta^p$ first splits off from the family tree generated by $S=\{\beta^1,\cdots, \beta^{p-1}\}$ so that $\alpha=\beta^i|j$ for some $1\leq i\leq p-1$ and $0\leq j\leq n$, that is, $\beta^p=\alpha$ or $\beta^p=\alpha \vee \tilde{\beta}^p$ for some $0\leq |\tilde{\beta}^p|\leq n-1$. Here we use $\gamma \vee \delta=(\gamma_0, \cdots, \gamma_m, \delta_0, \cdots, \delta_l)$ to denote the concatenation in $\tilde{I}$. One can see that $\tilde{Y}^{\alpha}\in \mathcal F(S)$ and there are at most $(p-1) \cdot (n+1)$ such $\alpha$. Now we have \begin{align*} \mathbb{E}^x\Big(\sum_{|\beta^p|\leq n} \phi(\tilde{Y}^{\beta^p}) \Big|\mathcal F(S)\Big)\leq \mathbb{E}^x\Big( \sum_{\alpha} \phi(\tilde{Y}^{\alpha}) \Big|\mathcal F(S)\Big)+ \mathbb{E}^x\Big(\sum_{\alpha} \sum_{0\leq |\tilde{\beta}^p|\leq n-1} \phi(\tilde{Y}^{\alpha\vee \tilde{\beta}^p}) \Big|\mathcal F(S)\Big). \end{align*} The first term can be simply bounded by $(p-1) (n+1) \|\phi\|_\infty$.
For the second term, we have \begin{align*} I:=& \mathbb{E}^x\Big(\sum_{\alpha} \sum_{0\leq |\tilde{\beta}^p|\leq n-1} \phi(\tilde{Y}^{\alpha\vee \tilde{\beta}^p}) \Big|\mathcal F(S)\Big)\\ \leq& \sum_{\alpha} \sum_{k=0}^{n-1} \sum_{|\tilde{\beta}^p|=k} \mathbb{E}^x\Big( \phi\Big(\tilde{Y}^{\alpha}+W^{\alpha\vee (\tilde{\beta}^p|0)}+\sum_{j=1}^{k} W^{\alpha\vee (\tilde{\beta}^p|j)}\Big) \Big|\mathcal F(S)\Big) \mathbb{E}^x\Big(\prod_{j=0}^{k} 1(B^{\alpha\vee (\tilde{\beta}^p|j)}=1)\Big)\\ \leq &\sum_{\alpha} \sum_{e\in \mathcal{N}(0)} \mathbb{E}^x\Big(1_{\{W^{\alpha\vee (\tilde{\beta}^p|0)}=e\}} \Big|\mathcal F(S)\Big) \sum_{k=0}^{n-1} \sum_{|\tilde{\beta}^p|=k} \mathbb{E}^{x}\Big( \phi\Big(\tilde{Y}^{\alpha}+e+\sum_{j=1}^{k} W^{\alpha\vee (\tilde{\beta}^p|j)}\Big)\Big) p(R)^{k+1}, \end{align*} where the last inequality follows by decomposing over the events $W^{\alpha\vee (\tilde{\beta}^p|0)}=e$, $e\in \mathcal{N}(0)$, and by noticing that $\{W^{\alpha\vee (\tilde{\beta}^p|j)}, j\geq 1\}$ are independent of $\mathcal F(S)$. Now take the sup over $y=\tilde{Y}^{\alpha}+e$ in $\mathbb{Z}_R^d$ to get \begin{align*} I\leq& \sum_{\alpha} \sup_{y\in \mathbb{Z}_R^d} \sum_{k=0}^{n-1} \sum_{|\tilde{\beta}^p|=k} \mathbb{E}^{x}\Big( \phi\Big(y+\sum_{j=1}^{k} W^{\alpha\vee (\tilde{\beta}^p|j)}\Big)\Big)p(R)^{k+1}\\ =&\sum_{\alpha} \sup_{y\in \mathbb{Z}_R^d} \sum_{k=0}^{n-1} \mathbb{E}\Big(\phi(S_{k}+y)\Big) V(R)^{k+1}p(R)^{k+1}\\ \leq& (p-1)(n+1)e^{n\theta/R^{d-1}} \Big(\|\phi\|_\infty+ \sup_{y\in \mathbb{Z}_R^d} \sum_{k=1}^n \mathbb{E}( \phi(y+S_{k} ))\Big). \end{align*} Now we conclude that \begin{align*} \mathbb{E}^x\Big(\sum_{|\beta^p|\leq n} \phi(\tilde{Y}^{\beta^p}) \Big|\mathcal F(S)\Big)\leq (p-1)(n+1)e^{n\theta/R^{d-1}} F(\phi,n).
\end{align*} Returning to \eqref{4e8.20}, we use the above to arrive at \begin{align*} &\mathbb{E}^x\Big(\Big(\sum_{k=0}^n \tilde{Z}_k(\phi)\Big)^p\Big)\leq (p-1)(2n)e^{n\theta/R^{d-1}} F(\phi,n) \mathbb{E}^x\Big(\Big(\sum_{k=0}^n \tilde{Z}_k(\phi)\Big)^{p-1}\Big). \end{align*} By induction we get \begin{align*} \mathbb{E}^x\Big(\Big(\sum_{k=0}^n \tilde{Z}_k(\phi)\Big)^p\Big)\leq& (p-1)!(2n)^{p-1} e^{n(p-1)\theta/R^{d-1}} F(\phi,n)^{p-1} \mathbb{E}^x\Big(\sum_{k=0}^n \tilde{Z}_k(\phi)\Big)\\ \leq &(p-1)!(2n)^{p-1} e^{pn\theta/R^{d-1}} F(\phi,n)^{p}, \end{align*} where the last inequality uses \eqref{4eb1.41}. Hence it follows that \begin{align*} \mathbb{E}^{x}\Big(\exp\Big({\lambda\sum_{k=0}^n \tilde{Z}_{k}(\phi)}\Big)\Big)= &1+\sum_{p=1}^\infty \frac{1}{p!} \lambda^p \mathbb{E}^{x}\Big(\Big(\sum_{k=0}^n \tilde{Z}_{k}(\phi)\Big)^p\Big)\\ \leq &1+\sum_{p=1}^\infty \frac{1}{p} \lambda^p (2n)^{p-1} e^{\frac{pn\theta }{R^{d-1}}} F(\phi,n)^{p}\\ \leq &1+\lambda e^{\frac{n\theta }{R^{d-1}}}F(\phi,n) (1-2\lambda n e^{\frac{n\theta}{R^{d-1}}} F(\phi,n))^{-1}\\ \leq &\exp\Big(\lambda e^{\frac{n\theta }{R^{d-1}}} F(\phi,n) (1-2\lambda n e^{\frac{n\theta}{R^{d-1}}} F(\phi,n))^{-1}\Big). \end{align*} Returning to $\mathbb{P}^{\tilde{Z}_0}$, we use the above to arrive at \begin{align*} \mathbb{E}^{\tilde{Z}_0}\Big(\exp\Big({\lambda\sum_{k=0}^n \tilde{Z}_{k}(\phi)}\Big)\Big)&=\prod_{i=1}^{|\tilde{Z}_0|}\mathbb{E}^{x_i}\Big(\exp\Big({\lambda\sum_{k=0}^n \tilde{Z}_{k}(\phi)}\Big)\Big)\\ &\leq \exp\Big(\lambda |\tilde{Z}_0| e^{\frac{n\theta}{R^{d-1}}} F(\phi,n) (1-2\lambda n e^{\frac{n\theta}{R^{d-1}}} F(\phi,n))^{-1}\Big), \end{align*} as required. \end{proof} \subsection{Exponential moments of the martingale term}\label{4ap1.3} We give in this section the proof of Proposition \ref{4p5.1}. \begin{proof}[Proof of Proposition \ref{4p5.1}] Recall $M_N(\phi)$ from \eqref{4e1.22} and $\langle M(\phi) \rangle_{N}$ from \eqref{4eb1.51}.
Notice that $\langle M(\phi) \rangle_{N}=\langle M(-\phi) \rangle_{N}$ and \begin{align*} \mathbb{E}^{Z_0}( \exp(\lambda |M_{N}(\phi)|))\leq \mathbb{E}^{Z_0}( \exp(\lambda M_{N}(\phi)))+\mathbb{E}^{Z_0}( \exp(\lambda M_{N}(-\phi))). \end{align*} It suffices to show that \begin{align}\label{4eb1.53} \mathbb{E}^{Z_0}( e^{\lambda M_{N}(\phi)})\leq \Big(\mathbb{E}^{Z_0}\Big( e^{ 16\lambda^2 \langle M(\phi) \rangle_{N}}\Big)\Big)^{1/2}. \end{align} For each $n\geq 1$, we define \begin{align} Y_n:=\lambda M_{n}(\phi)-\lambda M_{n-1}(\phi)=\sum_{|\alpha|=n-1} \sum_{i=1}^{V(R)} \lambda\phi({Y^\alpha+e_i}) (B^{\alpha \vee e_i}-p(R)). \end{align} Then $\sum_{n=1}^N Y_n=\lambda M_N(\phi)$ for each $N\geq 1$. By recalling $\mathcal G_n=\sigma(\{B^\alpha: |\alpha|\leq n\})$, we have $Y_n\in \mathcal G_n$. Further define for each $n\geq 1$ that \begin{align} V_n:=\mathbb{E}(Y_n^2|\mathcal G_{n-1})=\sum_{|\alpha|=n-1} \sum_{i=1}^{V(R)} \lambda^2 \phi({Y^\alpha+e_i})^2 p(R)(1-p(R)), \end{align} where in the last equality we have used the independence of $B^{\alpha \vee e_i}$. It is immediate that $V_n\in \mathcal G_{n-1}$ and $\sum_{n=1}^N V_n=\lambda^2\langle M(\phi) \rangle_{N}$ for each $N\geq 1$. Hence we may rewrite \eqref{4eb1.53} as \begin{align}\label{4ec1.66} \mathbb{E}^{Z_0}\Big( e^{\sum_{n=1}^N Y_n}\Big)\leq \Big(\mathbb{E}^{Z_0}\Big( e^{ 16 \sum_{n=1}^N V_n}\Big)\Big)^{1/2}. \end{align} To prove the above inequality, we apply the Cauchy-Schwarz inequality to get \begin{align*} \mathbb{E}^{Z_0}\Big( e^{\sum_{n=1}^N Y_n}\Big)=&\mathbb{E}^{Z_0}\Big( e^{\sum_{n=1}^N Y_n-8\sum_{n=1}^N V_n} \cdot e^{8\sum_{n=1}^N V_n}\Big)\\ \leq &\Big(\mathbb{E}^{Z_0}\Big( e^{2\sum_{n=1}^N Y_n-16\sum_{n=1}^N V_n}\Big)\Big)^{1/2}\Big(\mathbb{E}^{Z_0}\Big(e^{16\sum_{n=1}^N V_n}\Big)\Big)^{1/2}. \end{align*} It suffices to prove \begin{align}\label{4ed2.2} &\mathbb{E}^{Z_0}\Big( e^{2\sum_{n=1}^N Y_n-16\sum_{n=1}^N V_n}\Big)\leq 1.
\end{align} Observe that \begin{align}\label{4ed1.53} \mathbb{E}^{Z_0}(e^{2Y_n}|\mathcal G_{n-1})=&\mathbb{E}^{Z_0}\Big( \exp\Big(\sum_{|\alpha|=n-1} \sum_{i=1}^{V(R)} 2\lambda {\phi}({Y^\alpha+e_i}) (B^{\alpha \vee e_i}-p(R))\Big)\Big|\mathcal G_{n-1}\Big)\nonumber\\ =&\prod_{|\alpha|=n-1} \prod_{i=1}^{V(R)}\mathbb{E}^{Z_0}\Big( \exp\Big(2\lambda \phi({Y^\alpha+e_i}) (B^{\alpha \vee e_i}-p(R))\Big)\Big|\mathcal G_{n-1}\Big). \end{align} Lemma 1.3(a) of Freedman \cite{Free75} gives that if a random variable $X$ satisfies $|X|\leq 1$, $\mathbb{E}(X)=0$ and $\mathbb{E}(X^2)=V$, then \begin{align}\label{4ed2.1} \mathbb{E}(e^{2X})\leq e^{(e^2-3)V}\leq e^{16V}. \end{align} The constant $16$ above is in fact unimportant; we simply pick a convenient large one. Since $\lambda \|\phi\|_\infty\leq 1$ and $B^{\alpha \vee e_i}$ is a Bernoulli random variable with mean $p(R)$, we have $X=\lambda \phi({Y^\alpha+e_i}) (B^{\alpha \vee e_i}-p(R))$ satisfies the assumptions of Freedman's lemma. By \eqref{4ed2.1}, we get \[ \mathbb{E}^{Z_0}\Big( \exp\Big(2\lambda \phi({Y^\alpha+e_i}) (B^{\alpha \vee e_i}-p(R))\Big)\Big|\mathcal G_{n-1}\Big)\leq \exp\Big(16\lambda^2 \phi({Y^\alpha+e_i})^2 p(R)(1-p(R))\Big). \] Use the above to see that \eqref{4ed1.53} becomes \begin{align*} \mathbb{E}^{Z_0}(e^{2Y_n}|\mathcal G_{n-1})\leq &\prod_{|\alpha|=n-1} \prod_{i=1}^{V(R)}\exp\Big(16\lambda^2 \phi({Y^\alpha+e_i})^2 p(R)(1-p(R))\Big)\\ =&\exp\Big(16\sum_{|\alpha|=n-1} \sum_{i=1}^{V(R)}\lambda^2 \phi({Y^\alpha+e_i})^2 p(R)(1-p(R))\Big)=e^{16V_n}, \end{align*} thus giving \begin{align*} \mathbb{E}^{Z_0}(e^{2Y_n-16V_n}|\mathcal G_{n-1})\leq 1, \quad \forall n\geq 1. \end{align*} For each $N\geq 1$, we have \begin{align*} \mathbb{E}^{Z_0}\Big( e^{2\sum_{n=1}^N Y_n-16\sum_{n=1}^N V_n}\Big|\mathcal G_{N-1}\Big)&=e^{2\sum_{n=1}^{N-1} Y_n-16\sum_{n=1}^{N-1} V_n}\mathbb{E}^{Z_0}\Big(e^{2Y_N-16V_N} \Big|\mathcal G_{N-1}\Big)\\ &\leq e^{2\sum_{n=1}^{N-1} Y_n-16\sum_{n=1}^{N-1} V_n}.
\end{align*} Use induction with the above to get \eqref{4ed2.2} and so the proof is complete as noted above. \end{proof} \section{Proofs of Lemmas \ref{4l2.1}, \ref{4l4.2} and \ref{4l1.3}}\label{a2} \subsection{Proof of Lemma \ref{4l2.1}} We first consider $d=1$. For any $n\in \mathbb{Z}$, by interpolation we have \begin{align}\label{4e8.13} g(x)=(n+1-x)f(n)+(x-n)f(n+1), \text{ if } n\leq x\leq n+1. \end{align} Let $\mu_1=\mu/4$. For any $x\in \mathbb{R}$, if we let $n\in \mathbb{Z}$ so that $n\leq x<n+1$, then by \eqref{4e8.13} and the Cauchy-Schwarz inequality, we have \begin{align}\label{4e8.14} \mathbb{E}(e^{\mu_1 g(x)})=&\mathbb{E}(e^{\mu_1 (n+1-x)f(n)+\mu_1 (x-n)f(n+1)})\leq \Big(\mathbb{E}(e^{ 2\mu_1 (n+1-x)f(n)})\Big)^{1/2} \Big(\mathbb{E}(e^{2\mu_1(x-n)f(n+1)})\Big)^{1/2}\nonumber\\ \leq &\Big(\mathbb{E}(e^{ \mu f(n)})\Big)^{1/2} \Big(\mathbb{E}(e^{\mu f(n+1)})\Big)^{1/2}\leq C_1, \end{align} where the last inequality uses \eqref{4eb3.21}. Next, for any $x<y$ in $\mathbb{R}$, we let $n\in \mathbb{Z}$ so that $n\leq x<n+1$. To prove the remaining inequality in \eqref{4e8.12}, we will proceed in three cases.\\ \noindent ${\bf Case\ 1.}$ If $n\leq x<y<n+1$, then by \eqref{4e8.13} we have \begin{align*} |g(x)-g(y)| =|x-y||f(n+1)-f(n)|. \end{align*} Let $\lambda_1=\lambda/4$ to see that \begin{align*} &\mathbb{E}(e^{\lambda_1 \frac{|g(x)-g(y)|}{|x-y|^\eta}})=\mathbb{E}(e^{\lambda_1 |x-y|^{1-\eta} |f(n+1)-f(n)|})\leq\mathbb{E}(e^{\lambda |f(n+1)-f(n)|})\leq C_1, \end{align*} where we have used $|x-y|\leq 1$ in the first inequality and the last inequality is by \eqref{4eb3.21}. \noindent ${\bf Case\ 2.}$ If $n+1\leq y<n+2$, then again by \eqref{4e8.13} we have \begin{align*} g(x)-g(y)&=(n+1-x)f(n)+(x-n)f(n+1)\\ &\quad -\big((n+2-y)f(n+1)+(y-n-1)f(n+2)\big)\\ &=(n+1-x)[f(n)-f(n+1)]+(y-n-1)[f(n+1)-f(n+2)]. \end{align*} Note $|x-y|=y-(n+1)+(n+1)-x \geq \max\{y-n-1, n+1-x\}$.
So the above becomes \begin{align}\label{4eb3.22} &|g(x)-g(y)| \leq |x-y||f(n)-f(n+1)|+|x-y||f(n+1)-f(n+2)|. \end{align} Apply \eqref{4eb3.22} and the Cauchy-Schwarz inequality to get \begin{align*} &\mathbb{E}(e^{\lambda_1 \frac{|g(x)-g(y)|}{|x-y|^\eta}})\leq \mathbb{E}(e^{\lambda_1 |x-y|^{1-\eta} |f(n)-f(n+1)|} e^{\lambda_1 |x-y|^{1-\eta} |f(n+1)-f(n+2)|})\\ \leq & \Big(\mathbb{E}(e^{ 2\lambda_1|x-y|^{1-\eta} |f(n)-f(n+1)|})\Big)^{1/2} \Big(\mathbb{E}(e^{2\lambda_1|x-y|^{1-\eta}|f(n+1)-f(n+2)|})\Big)^{1/2}\\ \leq & \Big(\mathbb{E}(e^{ \lambda |f(n)-f(n+1)|})\Big)^{1/2} \Big(\mathbb{E}(e^{\lambda |f(n+1)-f(n+2)|})\Big)^{1/2}\leq C_1, \end{align*} where we have used $|x-y|\leq 2$ in the second-to-last inequality and the last inequality is by \eqref{4eb3.21}. \noindent ${\bf Case\ 3.}$ If $n+m\leq y<n+m+1$ for some $m\geq 2$, then by \eqref{4e8.13} we have \begin{align*} &g(x)-g(y)=(n+1-x)f(n)+(x-n)f(n+1)\\ &\quad -\Big((n+m+1-y)f(n+m)+(y-n-m)f(n+m+1)\Big)\\ &=(n+1-x)[f(n)-f(n+1)]+[f(n+1)-f(n+m)]\\ &\quad+(y-n-m)[f(n+m)-f(n+m+1)]. \end{align*} It follows that \begin{align*} &|g(x)-g(y)| \leq |f(n)-f(n+1)|+|f(n+1)-f(n+m)|+|f(n+m)-f(n+m+1)|. \end{align*} Note in this case we have $|x-y|\geq m-1\geq 1$ and hence \begin{align*} &\mathbb{E}(e^{\lambda_1 \frac{|g(x)-g(y)|}{|x-y|^\eta}})\leq \mathbb{E}(e^{\lambda_1 (|f(n)-f(n+1)|+|f(n+m)-f(n+m+1)|)} e^{\lambda_1 \frac{|f(n+1)-f(n+m)|}{(m-1)^\eta}})\\ \leq & \Big(\mathbb{E}(e^{ 2\lambda_1(|f(n)-f(n+1)|+|f(n+m)-f(n+m+1)|)})\Big)^{1/2} \Big(\mathbb{E}(e^{2\lambda_1\frac{|f(n+1)-f(n+m)|}{(m-1)^\eta} })\Big)^{1/2}\\ \leq & \Big(\mathbb{E}(e^{ 4\lambda_1|f(n)-f(n+1)|})\Big)^{1/4}\Big(\mathbb{E}(e^{ 4\lambda_1|f(n+m)-f(n+m+1)|})\Big)^{1/4} \Big(\mathbb{E}(e^{2\lambda_1\frac{|f(n+1)-f(n+m)|}{(m-1)^\eta} })\Big)^{1/2}\leq C_1. \end{align*} Combine the above three cases and \eqref{4e8.14} to conclude that \eqref{4e8.12} holds with $c_{\ref{4l2.1}}=1/4$ in $d=1$.\\ We continue to the case $d=2$.
Fixing any $y_0\in \mathbb{R}$, we first show that \begin{align}\label{4e8.16} \begin{cases} &\mathbb{E}\Big(\exp\Big(\frac{\lambda}{2} \frac{|g(n,y_0)-g(m,y_0)|}{|n-m|^\eta}\Big)\Big)\leq C_1, \quad \forall n\neq m \in \mathbb{Z},\\ &\mathbb{E}\Big(\exp(\frac{\mu}{2} g(n,y_0))\Big)\leq C_1, \quad \forall n\in \mathbb{Z}. \end{cases} \end{align} To see this, we let $k\in \mathbb{Z}$ so that $k\leq y_0<k+1$. By linear interpolation we have for any $n\in \mathbb{Z}$, \begin{align}\label{4e8.17} g(n,y_0)=(k+1-y_0)f(n,k)+(y_0-k) f(n,k+1). \end{align} Similar to derivation of \eqref{4e8.14}, we may use the above and \eqref{4eb3.21} to get \begin{align*} \mathbb{E}\Big(\exp\Big(\frac{\mu}{2} g(n,y_0)\Big)\Big)\leq C_1, \forall n\in \mathbb{Z}. \end{align*} Next, for any $n\neq m\in \mathbb{Z}$, we use \eqref{4e8.17} to see that \begin{align*} |g(n,y_0)-g(m,y_0)|=&\Big|(k+1-y_0)[f(n,k)-f(m,k)]+(y_0-k) [f(n,k+1)-f(m,k+1)]\Big|\\ \leq& |f(n,k)-f(m,k)|+|f(n,k+1)-f(m,k+1)|. \end{align*} It follows that \begin{align*} &\mathbb{E}\Big(\exp\Big(\frac{\lambda}{2} \frac{|g(n,y_0)-g(m,y_0)|}{|n-m|^\eta}\Big)\Big)\leq \mathbb{E}\Big(e^{\frac{\lambda}{2} \frac{|f(n,k)-f(m,k)|+|f(n,k+1)-f(m,k+1)|}{|n-m|^\eta}}\Big)\\ \leq &\Big(\mathbb{E}(e^{\lambda \frac{|f(n,k)-f(m,k)|}{|n-m|^\eta}})\Big)^{1/2} \Big(\mathbb{E}(e^{\lambda \frac{|f(n,k+1)-f(m,k+1)|}{|n-m|^\eta}})\Big)^{1/2}\leq C_1, \end{align*} where the last inequality is by \eqref{4eb3.21}, thus giving \eqref{4e8.16}. By the case in $d=1$, we may apply \eqref{4e8.16} to see that there exists some constant $c_1=1/4>0$ such that if we let $\lambda_1=c_1\frac{\lambda}{2}$ and $\mu_1=c_1\frac{\mu}{2}$, then \begin{align}\label{4e8.18} \begin{cases} &\mathbb{E}\Big(\exp\Big(\lambda_1 \frac{|g(x,y_0)-g(y,y_0)|}{|x-y|^\eta}\Big)\Big)\leq C_1, \forall x\neq y \in \mathbb{R}\\ &\mathbb{E}(\exp(\mu_1 g(x,y_0)))\leq C_1, \forall x\in \mathbb{R}. 
\end{cases} \end{align} By symmetry we may repeat the above and show that for any $z_0\in \mathbb{R}$, \begin{align}\label{4e8.19} \begin{cases} &\mathbb{E}\Big(\exp\Big(\lambda_1 \frac{|g(z_0,x)-g(z_0,y)|}{|x-y|^\eta}\Big)\Big)\leq C_1, \forall x\neq y \in \mathbb{R}\\ &\mathbb{E}(\exp(\mu_1 g(z_0,x)))\leq C_1, \forall x\in \mathbb{R}. \end{cases} \end{align} The second inequality in \eqref{4e8.12} is now included in \eqref{4e8.18} and \eqref{4e8.19}. Let $\lambda_2=\frac{1}{2}\lambda_1=\frac{\lambda}{16}$. It suffices to show that \begin{align}\label{4e8.21} \mathbb{E}\Big(\exp\Big(\lambda_2 \frac{|g(x)-g(y)|}{|x-y|^\eta}\Big)\Big)\leq C_1, \forall x\neq y \in \mathbb{R}^2. \end{align} For any $x=(x_1, x_2)$ and $y=(y_1,y_2)$ in $\mathbb{R}^2$, we have \begin{align}\label{4e10.60} |g(x_1,x_2)-g(y_1,y_2)|\leq |g(x_1,x_2)-g(x_1,y_2)|+|g(x_1,y_2)-g(y_1,y_2)|. \end{align} Use the Cauchy-Schwarz inequality and $|x-y|\geq \max\{|x_1-y_1|, |x_2-y_2|\}$ to get \begin{align*} &\mathbb{E}\Big(e^{\lambda_2 \frac{|g(x)-g(y)|}{|x-y|^\eta}}\Big)\leq \Big(\mathbb{E}(e^{2\lambda_2 \frac{|g(x_1,x_2)-g(x_1,y_2)|}{|x_2-y_2|^\eta}})\Big)^{1/2} \Big(\mathbb{E}(e^{2\lambda_2 \frac{|g(x_1,y_2)-g(y_1,y_2)|}{|x_1-y_1|^\eta}})\Big)^{1/2}\leq C_1, \end{align*} where the last inequality is by $\lambda_2=\lambda_1/2$ and \eqref{4e8.18}, \eqref{4e8.19}, thus finishing the case $d=2$ by letting $c_{\ref{4l2.1}}=1/16$. The case for $d\geq 3$ can be proved by induction in a similar way to the case $d=2$: we fix one coordinate and use linear interpolation and the $d-1$ case to prove equations like \eqref{4e8.18} and \eqref{4e8.19} hold. Then use the triangle inequality as in \eqref{4e10.60} to prove that \eqref{4e8.21} holds in $\mathbb{R}^d$, thus finishing the proof.
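For concreteness, the coordinatewise telescoping behind the induction for $d\geq 3$ can be sketched as follows (a sketch in our notation; the argument above leaves it implicit): for any $x=(x_1,\dots,x_d)$ and $y=(y_1,\dots,y_d)$ in $\mathbb{R}^d$,
\[
|g(x)-g(y)|\leq \sum_{j=1}^{d}\big|g(y_1,\dots,y_{j-1},x_j,x_{j+1},\dots,x_d)-g(y_1,\dots,y_{j-1},y_j,x_{j+1},\dots,x_d)\big|,
\]
and each increment varies a single coordinate, so the one-coordinate estimates apply to it with $|x-y|\geq |x_j-y_j|$, exactly as in \eqref{4e10.60} when $d=2$.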
\subsection{Proofs of Lemma \ref{4l4.2} and Lemma \ref{4l1.3}} \begin{proof}[Proof of Lemma \ref{4l4.2}] (i) For any $s,t>0$ and $x_1, x_2\in \mathbb{Z}_R^d$, we use translation invariance to get \begin{align*} &\sum_{y\in \mathbb{Z}^d_R} e^{-t|y-x_1|^2} e^{-s|y-x_2|^2}=\sum_{y\in \mathbb{Z}^d_R} e^{-t|y-(x_1-x_2)|^2}e^{-s|y|^2} \\ =&e^{-\frac{st}{s+t}|x_1-x_2|^2} \sum_{y\in \mathbb{Z}^d_R} e^{-(s+t)|y-\frac{t}{s+t}(x_1-x_2)|^2}:= e^{-\frac{st}{s+t}|x_1-x_2|^2} I. \end{align*} Let $k=yR\in \mathbb{Z}^d$ to see that \begin{align*} I=&\sum_{k\in \mathbb{Z}^d} e^{-(s+t)|\frac{k}{R}-\frac{t}{s+t}(x_1-x_2)|^2}=\sum_{k\in \mathbb{Z}^d} e^{-\frac{s+t}{R^2}|k-\frac{tR}{s+t}(x_1-x_2)|^2}. \end{align*} Write $a=\frac{s+t}{R^2}>0$ and $u=\frac{tR}{s+t}(x_1-x_2) \in \mathbb{R}^d$. Then the above becomes \begin{align}\label{4e9.1} I=&\sum_{k_1\in \mathbb{Z}}\cdots \sum_{k_d\in \mathbb{Z}} e^{-a\sum_{i=1}^d|k_i-u_i|^2}=\prod_{i=1}^d \sum_{k_i\in \mathbb{Z}} e^{-a|k_i-u_i|^2}. \end{align} For any $u_i\in \mathbb{R}$, if we let $\{u_i\}=u_i-[u_i] \in [0,1)$, then \begin{align*} \sum_{k_i\in \mathbb{Z}} e^{-a|k_i-u_i|^2}= \sum_{k_i\in \mathbb{Z}} e^{-a|k_i-\{u_i\}|^2} \leq \sum_{k_i\in \mathbb{Z}} (e^{-a|k_i|^2}+e^{-a|k_i-1|^2})=2\sum_{k_i\in \mathbb{Z}} e^{-a|k_i|^2}. \end{align*} Apply the above in \eqref{4e9.1} to get \begin{align*} I\leq&\prod_{i=1}^d 2\sum_{k_i\in \mathbb{Z}} e^{-a|k_i|^2}=2^d \sum_{k\in \mathbb{Z}^d} e^{-a|k|^2}=2^d \sum_{k\in \mathbb{Z}^d} e^{-\frac{s+t}{R^2}|k|^2}=2^d \sum_{y\in \mathbb{Z}^d_R} e^{-(s+t)|y|^2}, \end{align*} thus completing the proof of (i).\\ (ii) For any $u\geq 1$, we let $s=1/(2u)<1$ and write $y=k/R$ for $k\in \mathbb{Z}^d$ to get \begin{align*} J:=&\sum_{y\in \mathbb{Z}^d_R} e^{-|y|^2/(2u)}=\sum_{y\in \mathbb{Z}^d_R} e^{-s|y|^2}\\ =&\sum_{k\in \mathbb{Z}^d} e^{-s|k|^2/R^2}=\Big(\sum_{k\in \mathbb{Z}} e^{-s|k|^2/R^2}\Big)^d=\Big(1+2\sum_{k=1}^\infty e^{-sk^2/R^2}\Big)^d. 
\end{align*} For any $k\geq 1$, we have \[ e^{-sk^2/R^2}\leq \int_{k-1}^k e^{-st^2/R^2} dt, \] and so \[ \sum_{k=1}^\infty e^{-sk^2/R^2}\leq \int_0^\infty e^{-st^2/R^2} dt=\frac{1}{2} \sqrt{2\pi \frac{R^2}{2s}}. \] Therefore it follows that \begin{align*} J&\leq 2^d+ 2^d \Big(2\sum_{k=1}^\infty e^{-sk^2/R^2}\Big)^d \leq 2^d+ 4^d \Big(\frac{1}{2} \sqrt{2\pi \frac{R^2}{2s}}\Big)^d \leq C(d) R^d \frac{1}{s^{d/2}}, \end{align*} where the last inequality is by $s\leq 1$ and $R\geq 1$. The proof is complete by noting $s=1/(2u)$. \end{proof} \begin{proof}[Proof of Lemma \ref{4l1.3}] Let $d=2$ or $d=3$ and $1<\alpha<(d+1)/2$. For any $n\geq 1$, $R\geq K_{\ref{4p1.1}}$, and $a,x\in \mathbb{Z}^d_R$, we use Proposition \ref{4p1.1}(ii) and Fubini's theorem to get \begin{align}\label{4e9.21} &\sum_{y\in \mathbb{Z}^d_R} p_n(y-x) \sum_{k=1}^\infty \frac{1}{k^{\alpha}} e^{-\frac{|y-a|^2}{64k}}\leq \sum_{y\in \mathbb{Z}^d_R} \frac{c_{\ref{4p1.1}}}{n^{d/2}R^d} e^{-\frac{|y-x|^2}{32n}} \sum_{k=1}^\infty \frac{1}{k^{\alpha}} e^{-\frac{|y-a|^2}{64k}}\nonumber \\ =&\frac{c_{\ref{4p1.1}}}{n^{d/2}R^d} \sum_{k=1}^\infty \frac{1}{k^{\alpha}}  \sum_{y\in \mathbb{Z}^d_R} e^{-\frac{|y-x|^2}{32n}} e^{-\frac{|y-a|^2}{64k}} \leq \frac{c_{\ref{4p1.1}}}{n^{d/2}R^d} \sum_{k=1}^\infty \frac{1}{k^{\alpha}} \cdot 2^d\sum_{y\in \mathbb{Z}^d_R} e^{-\frac{|y|^2}{32n}} e^{-\frac{|y|^2}{64k}} \nonumber \\ \leq&C(d)\frac{1}{n^{d/2}R^d} \sum_{y\in \mathbb{Z}^d_R} e^{-\frac{|y|^2}{32n}} \sum_{k=1}^\infty \frac{1}{k^{\alpha}} e^{-\frac{|y|^2}{64k}}:=C(d)\frac{1}{n^{d/2}R^d} \cdot I, \end{align} where the second inequality uses Lemma \ref{4l4.2}. It suffices to bound $I$. For $|y|\leq 1$, we have \begin{align}\label{4eb3.1} \sum_{y\in \mathbb{Z}^d_R, |y|\leq 1} e^{-{\frac{|y|^2}{32n}}} \sum_{k=1}^\infty \frac{1}{k^{\alpha}} e^{-\frac{|y|^2}{64k}}\leq \sum_{y\in \mathbb{Z}^d_R, |y|\leq 1} C(\alpha) \leq C(\alpha)(2R+1)^d.
\end{align} For $|y|\geq 1$, we use Lemma \ref{4l4.1} to see that \begin{align}\label{4eb3.2} &\sum_{y\in \mathbb{Z}^d_R, |y|\geq 1} e^{-\frac{|y|^2}{32n}}  \sum_{k=1}^\infty \frac{1}{k^{\alpha}} e^{-\frac{|y|^2}{64k}}\leq 64^{\alpha-1} C_{\ref{4l4.1}}(\alpha-1) \sum_{y\in \mathbb{Z}^d_R, |y|\geq 1} e^{-\frac{|y|^2}{32n}}  \frac{1}{|y|^{2\alpha-2}}\nonumber\\ = &C(\alpha) \sum_{k=1}^\infty \sum_{\substack{y\in \mathbb{Z}^d_R \\ k\leq |y|<k+1}} e^{-\frac{|y|^2}{32n}}  \frac{1}{|y|^{2\alpha-2}}\leq C(\alpha)\sum_{k=1}^\infty\sum_{\substack{y\in \mathbb{Z}^d_R\\ k\leq |y|<k+1}} e^{-\frac{k^2}{32n}}  \frac{1}{k^{2\alpha-2}}\nonumber\\ \leq &C(\alpha) \sum_{k=1}^\infty C(d) k^{d-1} R^d \cdot e^{-\frac{k^2}{32n}}  \frac{1}{k^{2\alpha-2}}\leq C(\alpha, d) R^d \sum_{k=1}^\infty k^{d+1-2\alpha}e^{-\frac{k^2}{32n}}. \end{align} Since $\alpha<(d+1)/2$, one may get \begin{align*} \sum_{k=1}^\infty k^{d+1-2\alpha}e^{-\frac{k^2}{32n}} \leq &   \sum_{k=1}^\infty \int_k^{k+1}s^{d+1-2\alpha}e^{-\frac{(s-1)^2}{32n}} ds=  \int_0^{\infty} (s+1)^{d+1-2\alpha} e^{-\frac{s^2}{32n}} ds\\ =&  \int_0^{\infty} (\sqrt{t} \sqrt{n}+1)^{d+1-2\alpha} e^{-\frac{t}{32}} \frac{\sqrt{n}}{2\sqrt{t}}dt \\ \leq& (\sqrt{n})^{d+1-2\alpha}\sqrt{n} \int_0^{\infty} \frac{(\sqrt{t}+1)^{d+1-2\alpha}}{2\sqrt{t}}e^{-\frac{t}{32}} dt \leq Cn^{1+d/2-\alpha}. \end{align*} Returning to \eqref{4eb3.2}, we get \begin{align}\label{4eb3.3} &\sum_{y\in \mathbb{Z}^d_R, |y|\geq 1} e^{-\frac{|y|^2}{32n}}  \sum_{k=1}^\infty \frac{1}{k^{\alpha}} e^{-\frac{|y|^2}{64k}}\leq C(\alpha, d) R^d Cn^{1+d/2-\alpha}. \end{align} Combine \eqref{4eb3.1} and \eqref{4eb3.3} to arrive at \[ I\leq C(\alpha)(2R+1)^d+C(\alpha, d) R^d \cdot C n^{1+d/2-\alpha}\leq C(\alpha, d) R^d \cdot n^{1+d/2-\alpha}. \] The proof is complete by \eqref{4e9.21}. \end{proof} \section{Collision estimates for SIR epidemic} \label{4a0} In this section, we give the proof of Lemma \ref{4l10.01}.
Recall that in an SIR epidemic, when two (or more) infected individuals simultaneously attempt to infect the same susceptible individual, all but one of the attempts fail. We call such an occurrence a collision. For any $x\in \mathbb{Z}_R^d$, we let $\Gamma_n(x)$ denote the number of collisions at site $x$ and time $n$. For the susceptible individual at $x$, a collision occurs at $x$ if and only if there is some pair $u,v$ of infected individuals at neighboring sites that simultaneously attempt to infect $x$. For example, if $k\geq 2$ infected individuals simultaneously attempt to infect $x$, then the number of collisions at $x$ is $\binom{k}{2}$. Therefore, given that $|\eta_n \cap \mathcal{N}(x)|=N_0$, the conditional expectation of $\Gamma_{n+1}(x)$ is given by \begin{align} \sum_{k=2}^{N_0} \binom{N_0}{k} p(R)^k (1-p(R))^{N_0-k} \binom{k}{2}\leq \frac{N_0(N_0-1)}{2}p(R)^2\leq |\eta_n \cap \mathcal{N}(x)|^2 p(R)^2. \end{align} It follows that \begin{align}\label{4eb2.33} \mathbb{E} \Big(\sum_{n=1}^{T_\theta^R} \sum_{x\in \mathbb{Z}^d_R}\Gamma_{n}(x)\Big)\leq& \mathbb{E} \Big(\sum_{n=0}^{T_\theta^R} \sum_{x\in \mathbb{Z}^d_R}|\eta_n \cap \mathcal{N}(x)|^2 p(R)^2\Big). \end{align} Use the dominating BRW $Z=(Z_n)$ to see that the right-hand side of \eqref{4eb2.33} is bounded by \begin{align}\label{4e5.83} \mathbb{E} \Big(\sum_{n=0}^{T_\theta^R} \sum_{x\in \mathbb{Z}^d_R}Z_n(\mathcal{N}(x))^2\Big) p(R)^2\leq& \mathbb{E} \Big(\sum_{n=0}^{T_\theta^R} \sum_{a\in \mathbb{Z}^d}Z_n(Q_3(a))^2 (2R+1)^d \Big)p(R)^2\nonumber\\ \leq& C\frac{1}{R^d} \mathbb{E} \Big(\sum_{n=0}^{T_\theta^R} \sum_{a\in \mathbb{Z}^d}Z_n(Q_3(a))^2 \Big), \end{align} where the first inequality uses the fact that $\mathcal{N}(x) \subseteq Q_3(a)$ holds for any $\|x-a\|_\infty \leq 1$ with $x\in \mathbb{Z}^d_R$ and $a\in \mathbb{Z}^d$. It suffices to show that \begin{align}\label{4e5.91} \frac{1}{R^{d}} \mathbb{E} \Big(\sum_{n=0}^{T_\theta^R} \sum_{a\in \mathbb{Z}^d}Z_n(Q_3(a))^2 \Big)=o(R^{d-1}).
\end{align} Recall that $Z_0(x)=1(x\in \eta_0)$ where $\eta_0$ is a subset of $\mathbb{Z}_R^d$ as in \eqref{4eb2.1}. Hence it is immediate that \begin{align}\label{4e5.91a} \frac{1}{R^{2d-1}} \sum_{a\in \mathbb{Z}^d}Z_0(Q_3(a))^2 &\leq \frac{1}{R^{2d-1}} \sum_{a\in \mathbb{Z}^d}(6^d K\beta_d(R))^2 1_{\{\|a\|_\infty \leq R_\theta+4\}}\nonumber\\ &\leq \frac{1}{R^{2d-1}} (6^d K\beta_d(R))^2 (2R_\theta+9)^d =o(1), \end{align} where the last step follows by $\beta_d(R)\leq \log R$ and $R_\theta=\sqrt{R^{d-1}/\theta}$. Next we consider $ \sum_{a\in \mathbb{Z}^d} \mathbb{E} (Z_n(Q_3(a))^2 )$ for any $1\leq n\leq T_\theta^R$. Recall that $\mathbb{P}^x$ denotes the law of the BRW starting from a single ancestor at $x\in \mathbb{Z}_R^d$. By \eqref{4e10.27} with $D=Q_3(a)$, we have \begin{align}\label{4eb3.27} \sum_{a\in \mathbb{Z}^d} \mathbb{E} (Z_n(Q_3(a))^2 )\leq &\sum_{a\in \mathbb{Z}^d} \sum_{x\in \eta_0} \mathbb{E}^x(Z_n(Q_3(a))^2)\nonumber\\ &+\sum_{a\in \mathbb{Z}^d} \Big(\sum_{x\in \eta_0} \mathbb{E}^x(Z_n(Q_3(a)))\Big)^2:=I_1+I_2. \end{align} We first deal with $I_1$. Recall $G(\phi,n)$ from \eqref{4e5.90}. Recall \eqref{4e10.30} to see that \begin{align}\label{4e5.85} G(1_{Q_3(a)},n)\leq C(d) h_d(n), \end{align} where $h_d(n)=\sum_{k=1}^n \frac{1}{k^{d/2}}$. Although \eqref{4e10.30} deals with $Q(a)$, the conclusion still holds by adjusting the constants $C(d)$. Now apply Proposition \ref{4p1.2}(ii) to see that \begin{align}\label{4e5.84} \mathbb{E}^x((Z_n(Q_3(a)))^2 )\leq& e^{\frac{n\theta}{R^{d-1}}} G(1_{Q_3(a)},n) \mathbb{E}^{x}({Z}_{n}(Q_3(a)))\nonumber\\ \leq& e^T C(d) h_d(n) \mathbb{E}^{x}({Z}_{n}(Q_3(a))), \end{align} where the last inequality uses $n\leq T_\theta^R$ and \eqref{4e5.85}.
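To see why $h_d(n)$ grows at most logarithmically, here is a sketch by integral comparison, with $h_d(n)=\sum_{k=1}^n k^{-d/2}$:
\[
h_2(n)=\sum_{k=1}^n \frac{1}{k}\leq 1+\log n,\qquad h_3(n)=\sum_{k=1}^n \frac{1}{k^{3/2}}\leq 1+\int_1^\infty t^{-3/2}\,dt=3,
\]
so in both cases $h_d(n)\leq C(1+\log n)$; for $n\leq T_\theta^R\leq TR^{d-1}/\theta$ and $R$ large this is at most $C(T)\log R$, which is the bound used at the end of this section.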
Returning to $I_1$, we apply \eqref{4e5.84} to get \begin{align*} I_1\leq&\sum_{a\in \mathbb{Z}^d} \sum_{x\in \eta_0} e^T C(d) h_d(n) \mathbb{E}^{x}({Z}_{n}(Q_3(a)))\\ \leq& C(d,T) h_d(n)\sum_{x\in \eta_0} \sum_{a\in \mathbb{Z}^d} \mathbb{E}^{x}({Z}_{n}(Q_3(a)))\\ \leq& C(d,T) h_d(n)\sum_{x\in \eta_0}C(d)\mathbb{E}^{x}({Z}_{n}(1)). \end{align*} By \eqref{4ea4.5}, we have \begin{align}\label{4eb3.25} \mathbb{E}^{x}({Z}_{n}(1))=(1+\frac{\theta}{R^{d-1}})^n\leq e^{\frac{n\theta}{R^{d-1}}} \leq e^T. \end{align} It follows that \begin{align}\label{4eb3.26} I_1\leq C(d,T) h_d(n)\sum_{x\in \eta_0}C(d) e^T\leq C(d,T) |\eta_0| h_d(n). \end{align} Turning to $I_2$, we observe that \begin{align}\label{4e10.53} I_2\leq &\Big(\sup_{a\in \mathbb{Z}^d}\sum_{x\in \eta_0} \mathbb{E}^x(Z_n(Q_3(a)))\Big) \sum_{a\in \mathbb{Z}^d} \sum_{x\in \eta_0} \mathbb{E}^x(Z_n(Q_3(a)))\nonumber\\ \leq &\Big(\sup_{a\in \mathbb{Z}^d} \sum_{x\in \eta_0} \mathbb{E}^x(Z_n(Q_3(a)))\Big) \sum_{x\in \eta_0} C(d) \mathbb{E}^{x}(Z_n(1))\nonumber\\ \leq &\Big(\sup_{a\in \mathbb{Z}^d}\sum_{x\in \eta_0} \mathbb{E}^x(Z_n(Q_3(a)))\Big) C(d) e^T |\eta_0|, \end{align} where the last inequality uses \eqref{4eb3.25}. It remains to bound $\sup_{a\in \mathbb{Z}^d} \sum_{x\in \eta_0} \mathbb{E}^x(Z_n(Q_3(a)))$. For any $a\in \mathbb{Z}^d$, we apply Proposition \ref{4p1.2}(i) to see that \begin{align}\label{4e10.54} \sum_{x\in \eta_0} \mathbb{E}^x(Z_n(Q_3(a)))=&(1+\frac{\theta}{R^{d-1}})^n \sum_{x\in \eta_0} \mathbb{P}(S_n+x\in Q_3(a))\nonumber\\ \leq &e^{\frac{n\theta}{R^{d-1}}} \sum_{m\in \mathbb{Z}^d} |\eta_0 \cap Q(m)| \sup_{x\in Q(m)} \mathbb{P}(S_n+x\in Q_3(a))\nonumber\\ \leq &e^{T} K\beta_d(R) \sum_{m\in \mathbb{Z}^d} \sup_{x\in Q(m)} \mathbb{P}(S_n+x\in Q_3(a)), \end{align} where in the last inequality we have used the condition (iii) from \eqref{4eb2.1}.
For any $x\in Q(m)$, we have $\mathbb{P}(S_n+x\in Q_3(a))\leq \mathbb{P}(S_n\in Q_5(a-m)),$ and so \begin{align*} \sum_{m\in \mathbb{Z}^d} \sup_{x\in Q(m)} \mathbb{P}(S_n+x\in Q_3(a))\leq \sum_{m\in \mathbb{Z}^d} \mathbb{P}(S_n\in Q_5(a-m)) \leq C(d). \end{align*} Use the above in \eqref{4e10.54} to arrive at \begin{align}\label{4e10.55} \sum_{x\in \eta_0} \mathbb{E}^x(Z_n(Q_3(a))) \leq &e^{T} K\beta_d(R) C(d). \end{align} Returning to \eqref{4e10.53}, we have \begin{align}\label{4e10.56} I_2\leq C(d) e^T |\eta_0| \cdot e^{T} K\beta_d(R) C(d)\leq C(d,T) |\eta_0| K\beta_d(R). \end{align} Finally combine \eqref{4eb3.26} and \eqref{4e10.56} to see that \eqref{4eb3.27} becomes \begin{align*} &\sum_{a\in \mathbb{Z}^d} \mathbb{E} (Z_n(Q_3(a))^2 )\leq C(d,T) |\eta_0| h_d(n)+C(d,T) |\eta_0| K\beta_d(R). \end{align*} Sum $n$ over $1\leq n\leq T_\theta^R$ to get \begin{align*} &\mathbb{E} \Big(\sum_{n=1}^{T_\theta^R} \sum_{a\in \mathbb{Z}^d}Z_n(Q_3(a))^2 \Big)\leq C(d,T) |\eta_0| \sum_{n=1}^{T_\theta^R} h_d(n)+T_\theta^R C(d,T) |\eta_0| K\beta_d(R)\\ &\leq C(d,T) |\eta_0| \cdot T_\theta^R C(T)\log R+T_\theta^R C(d,T) |\eta_0| K \log R\\ &\leq C(d,T) \frac{2R^{d-1}f_d(\theta)}{\theta} \frac{TR^{d-1}}{\theta} \log R+ \frac{TR^{d-1}}{\theta} C(d,T) \frac{2R^{d-1}f_d(\theta)}{\theta} K \log R=o(R^{2d-1}), \end{align*} where the second inequality uses \eqref{4eb2.8} and \eqref{4ea10.45}. The proof of \eqref{4e5.91} is complete by \eqref{4e5.91a} and the above. \end{document}
\begin{document} \title{Three topics in additive prime number theory} \author{Ben Green} \address{Centre for Mathematical Sciences\\ Wilberforce Road\\ Cambridge CB3 0WA\\ England } \email{[email protected]} \subjclass{} \begin{abstract} We discuss, in varying degrees of detail, three contemporary themes in prime number theory. Topic 1: the work of Goldston, Pintz and Y{\i}ld{\i}r{\i}m on short gaps between primes. Topic 2: the work of Mauduit and Rivat, establishing that 50\% of the primes have odd digit sum in base 2. Topic 3: work of Tao and the author on linear equations in primes. \end{abstract} \maketitle \textsc{Introduction.} These notes are to accompany two lectures I am scheduled to give at the \emph{Current Developments in Mathematics} conference at Harvard in November 2007. The title of those lectures is `A good new millennium for primes', but I have chosen a rather drier title for these notes for two reasons. Firstly, the title of the lectures was unashamedly stolen (albeit with permission) from Andrew Granville's entertaining article \cite{granville-article} of the same name. Secondly, and more seriously, I do not wish to claim that the topics chosen here represent a complete survey of developments in prime number theory since 2000 or even a selection of the most important ones. Indeed there are certainly omissions, such as the lack of any discussion of the polynomial-time primality test \cite{aks}, the failure to even mention the recent work on primes in orbits by Bourgain, Gamburd and Sarnak, and many others. I propose to discuss the following three topics, in greatly varying degrees of depth. Suggestions for further reading will be provided. The three sections may be read independently although there are links between them. 1. Gaps between primes. Let $p_n$ be the $n$th prime number, thus $p_1 = 2$, $p_2 = 3$, and so on. 
The prime number theorem, conjectured by Gauss and proven by Hadamard and de la Vall\'ee Poussin over 110 years ago, tells us that $p_n$ is asymptotic to $n\log n$, or in other words that \[ \lim_{n \rightarrow \infty} \frac{p_n}{n\log n} = 1.\] This implies that the gap between the $n$th and $(n+1)$st primes, $p_{n+1} - p_n$, is about $\log n$ on average. About 2 years ago Goldston, Pintz and Y{\i}ld{\i}r{\i}m proved the following remarkable result: for any $\epsilon > 0$, there are infinitely many $n$ such that $p_{n+1} - p_n < \epsilon \log n$. That is, infinitely often there are consecutive primes whose spacing is \emph{much} closer than the average. 2. Digits of primes. Written in binary, the first few primes are \[ 10, 11, 101, 111, 1011, 1101, 10001, 10011, 10111,\dots\] There is no obvious pattern\footnote{Except, of course, that the last digit of primes except the first is always 1.}. Indeed, why would there be, since the definition of `prime' has nothing to do with digital expansions. One might guess, then, that the binary digits of primes (the last digit aside) behave essentially randomly. Proving such a statement, or even formulating it correctly, is an entirely different matter. A couple of years ago, however, Mauduit and Rivat did manage to prove that the digit sum is odd 50\% of the time (and hence even 50\% of the time). They also obtained results in other bases. 3. Patterns of primes. Additive questions concerning primes have a long history. It has been known for over 70 years that there are infinitely many 3-term arithmetic progressions of primes such as $3,5,7$ and $5,11,17$, and that every large odd number is the sum of three primes. Recently, in joint work with Tao, we have been able to study more complicated patterns of primes. In this section we provide a guide to this recent joint work. Throughout these notes we will write \[ \E_{x \in X} f(x) := \frac{1}{|X|}\sum_{x \in X} f(x),\] where $X$ is any finite set and $f : X \rightarrow \C$ is a function.
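Both of the headline statistics above, the size of $p_n$ against $n\log n$ and the parity of the binary digit sum of primes, are easy to probe empirically. The sketch below is illustrative only; the cutoff $10^6$ is an arbitrary choice, far too small to say anything rigorous.

```python
from math import log

# Sieve the primes up to N, then examine two statistics:
# (i)  the ratio p_n / (n log n), which the prime number theorem says tends to 1
#      (very slowly: it is still around 1.13 at this range);
# (ii) the proportion of primes with odd binary digit sum (Mauduit-Rivat: 1/2).
N = 10**6
sieve = bytearray([1]) * (N + 1)
sieve[0:2] = b"\x00\x00"
for p in range(2, int(N**0.5) + 1):
    if sieve[p]:
        sieve[p*p::p] = bytearray(len(sieve[p*p::p]))
primes = [m for m in range(2, N + 1) if sieve[m]]

n = len(primes)                      # n = 78498 and p_n = 999983 here
ratio = primes[-1] / (n * log(n))
odd_frac = sum(bin(p).count("1") % 2 for p in primes) / n
```

At this range `ratio` is noticeably above $1$ (the convergence in the prime number theorem is logarithmically slow), while `odd_frac` is already very close to $1/2$.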
\section{Gaps between primes} \label{sec1} These notes were originally prepared for a series of lectures I gave at the Norwegian Mathematical Society's \emph{Ski og matematikk}, which took place at Rondablikk in January 2006. It is a pleasure to thank Christian Skau for inviting me to that event. The argument of Goldston, Pintz and Y{\i}ld{\i}r{\i}m was first described to me by K.~Soundararajan at the Highbury Vaults in Bristol. It is a pleasure to thank him, and to refer the interested reader to his lectures on the subject \cite{soundararajan}, which are superior to these in every respect. \ssubsection{The result} In 2005 Goldston, Pintz and Y{\i}ld{\i}r{\i}m created a sensation by announcing a proof that \[ \mbox{liminf}_{n \rightarrow \infty} \frac{p_{n+1} - p_n}{\log n} = 0,\] where $p_n$ denotes the $n$th prime number. According to the prime number theorem we have \[ p_n \sim n \log n,\] and therefore \[ \frac{p_{n+1} - p_n}{\log n} \] has average value $1$. The Goldston, Pintz and Y{\i}ld{\i}r{\i}m result thus states that the distance between consecutive primes can infinitely often be at most $\epsilon$ times the average spacing, for any $\epsilon > 0$; it is certainly most spectacular. Previous efforts at locating small gaps between primes focussed on proving successively smaller upper bounds for $C := \mbox{liminf}_{n \rightarrow \infty} \frac{p_{n+1} - p_n}{\log n}$.
The following table describing the history of these improvements does not make the Goldston-Pintz-Y{\i}ld{\i}r{\i}m result look any less striking: \begin{tabular}{|l|l|l|l|} \hline & & $C$ & \\ \hline Trivial from PNT & & 1 & \\ Hardy-Littlewood \cite{HL-0} & 1926 & 2/3 & on GRH \\ Rankin \cite{Rankin} & & 3/5 & on GRH \\ Erd\H{o}s \cite{Erdos} & 1940 & $1 - c$ & unconditionally\\ Ricci \cite{Ricci} & 1954 & 15/16 & \\ Bombieri-Davenport \cite{BD} & 1965 & $0.4665\dots$ & \\ Pilt'ai \cite{Pi} & 1972 & $0.4571\dots$ & \\ Uchiyama \cite{Uc} & & $0.4542\dots$ & \\ Huxley \cite{Hu1,Hu2,Hu3} & 1984 & $0.4393\dots$ & \\ Maier \cite{Ma} & 1989 & $0.2484\dots$ & \\ Goldston-Pintz-Y{\i}ld{\i}r{\i}m & 2005 & 0 & \\ \hline \end{tabular} For the detailed proof of this result we refer the reader to the authors' paper \cite{gpy}, as well as to their expository account \cite{gpy-expository} and to their short article with Motohashi \cite{gpy-moto}. Our aim here is to give a very rough outline of the proof. One distinctive feature of the argument is that it `only just' works, in a way that seems rather miraculous. We will endeavour to give some sense of this. We begin with two sections of background material. \ssubsection{The Elliott-Halberstam Conjecture and level of distribution} Let $q$ be a positive integer and suppose that $a$ is prime to $q$. We write \[ \psi(N; a,q):= \E_{n \leqslant N} \Lambda(n)1_{n \equiv a \mdsub{q}},\] where $\Lambda$ is the von Mangoldt function. For constant $q$ (and in fact for $q$ growing slowly with $N$, say $q \leqslant (\log N)^A$ for some fixed $A$) the prime number theorem in arithmetic progressions tells us that \[ \psi(N;a,q) \sim 1/\phi(q).\] Conditional upon the GRH, we may assert the same result up to about $q \approx N^{1/2}$.
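Taking the normalisation $\psi(N;a,q)=\frac{1}{N}\sum_{n\leqslant N,\,n\equiv a\,(q)}\Lambda(n)$, the product $\psi(N;a,q)\phi(q)$ should already be close to $1$ for modest $N$. A sketch with illustrative parameters ($N=2\times 10^5$, $q=7$, $a=3$ are arbitrary choices):

```python
from math import gcd, log

# Empirical check of psi(N; a, q) ~ 1/phi(q), with the normalisation
# psi(N; a, q) := (1/N) * sum of Lambda(n) over n <= N, n = a (mod q).
N, q, a = 200000, 7, 3

sieve = bytearray([1]) * (N + 1)          # prime sieve
sieve[0:2] = b"\x00\x00"
for p in range(2, int(N**0.5) + 1):
    if sieve[p]:
        sieve[p*p::p] = bytearray(len(sieve[p*p::p]))

lam = [0.0] * (N + 1)                     # von Mangoldt: log p at prime powers p^k
for p in range(2, N + 1):
    if sieve[p]:
        pk = p
        while pk <= N:
            lam[pk] = log(p)
            pk *= p

phi_q = sum(1 for r in range(1, q + 1) if gcd(r, q) == 1)
psi = sum(lam[m] for m in range(a, N + 1, q)) / N
assert abs(psi * phi_q - 1) < 0.05        # within 5% of 1/phi(q) at this range
```

Of course a single $(a,q)$ at one height says nothing about the uniformity in $q$ that Bombieri-Vinogradov and Elliott-Halberstam concern.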
The remarkable theorem of Bombieri-Vinogradov (a proof of which the reader will find in many texts on analytic number theory, such as \cite{iwaniec-kowalski}) states that something like this is true \emph{unconditionally}, provided one is prepared to average over $q$. A weak version of the theorem is that \begin{equation}\label{eq1.01} \sum_{q \leqslant Q} \max_{(a,q) = 1} | \psi(N;a,q) - \frac{1}{\phi(q)}| \ll_{A,\epsilon} \frac{1}{(\log N)^A}\end{equation} for any fixed $A$ and for any $Q \leqslant N^{1/2 - \epsilon}$. By using \emph{sieve theory} one may show that $\psi(N;a,q) \ll 1/\phi(q)$, and so the LHS of \eqref{eq1.01} is trivially bounded by \[ \sum_{q \leqslant Q} \frac{1}{\phi(q)} \approx \log Q.\] The Bombieri-Vinogradov theorem permits us to save an arbitrary power of a logarithm over this trivial bound. Even conjectures on $L$-functions (such as the GRH) appear to tell us nothing about the expression \eqref{eq1.01} when $Q \gg N^{1/2}$. Nonetheless, one may make conjectures. If $\theta \in [1/2,1)$ is a parameter then we say that the primes have \emph{level of distribution $\theta$}, or that the \emph{Elliott-Halberstam conjecture} $\mbox{EH}(\theta)$ holds, if we have the bound \begin{equation}\label{eq1.02} \sum_{q \leqslant Q} \max_{(a,q) = 1} | \psi(N;a,q) - \frac{1}{\phi(q)}| \ll_{A,\theta} \frac{1}{(\log N)^A}\end{equation} for any $Q \leqslant N^{\theta}$. The \emph{full} Elliott-Halberstam conjecture \cite{EH} is that $\mbox{EH}(\theta)$ holds for all $\theta < 1$. Assuming any Elliott-Halberstam conjecture $\mbox{EH}(\theta)$ with $\theta > 1/2$, Goldston, Pintz and Y{\i}ld{\i}r{\i}m can prove the remarkable result that gaps between consecutive primes are infinitely often less than some absolute constant $C(\theta)$. Assuming $\mbox{EH}(0.95971)$, they prove that \[ \mbox{liminf}_{n \rightarrow \infty} (p_{n+1} - p_n) \leqslant 16\] (actually they prove a slightly weaker result -- the value 0.95971 comes from unpublished computations of J.
Brian Conrey). It should be stressed however that it is not expected that any conjecture $\mbox{EH}(\theta)$ for $\theta > 1/2$ will be established in the near future. There are results of Bombieri, Friedlander and Iwaniec which go a little beyond the Bombieri-Vinogradov theorem in something resembling the required manner, although experts seem to be of the opinion that these results will not help to improve the bounds on gaps between primes (cf. \cite[\S 16]{aim-notes}). \ssubsection{Selberg's weights}\label{sec1.3} This is the second section of background material. In the 1940s Selberg introduced a wonderfully simple, yet powerful, idea to analytic number theory. Write $1_P$ for the characteristic function of the primes. Then if $R$ is any parameter and if $(\lambda_d)_{d \leqslant R}$ is any sequence with $\lambda_1 = 1$, we have the pointwise inequality \[ 1_P(n) \leqslant \big( \sum_{\substack{d | n \\ d \leqslant R}} \lambda_d \big)^2\] provided that $n > R$ (the proof is obvious). This provides an enormous family of majorants for the sequence of primes. In a typical application we will be interested in something like the set of primes $p$ less than some cutoff $N$, and then $R$ will be some power $N^{\gamma}$, $\gamma < 1$. In this situation Selberg's weights majorise the primes between $N^{\gamma}$ and $N$, that is to say almost all of the primes less than $N$. What weights $\lambda_d$ should one choose? This depends on the application, but a very basic application is to the estimation of $\frac{1}{y}(\pi(x + y) - \pi(x))$, the density of primes in the interval $(x,x+y]$ (the Brun-Titchmarsh problem). In discussing this problem we will also see why it is advantageous to construct a majorant for the primes, rather than work with $1_P$ itself.
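The pointwise inequality really does hold for a completely arbitrary choice of weights, which is what makes the family so flexible: if $n>R$ is prime its only divisor $d\leqslant R$ is $d=1$, so the bracketed sum equals $\lambda_1=1$, while for composite $n$ the square is at least $0$. A sketch checking this for one arbitrary (deterministic, purely illustrative) choice of $\lambda_d$:

```python
from math import sin

# Check Selberg's pointwise inequality: for ANY weights with lambda_1 = 1,
#   1_P(n) <= (sum over d | n, d <= R of lambda_d)^2   whenever n > R.
R, N = 30, 2000
lam = {d: sin(d) for d in range(1, R + 1)}   # arbitrary weights ...
lam[1] = 1.0                                 # ... subject to lambda_1 = 1

def is_prime(n):
    return n > 1 and all(n % p for p in range(2, int(n**0.5) + 1))

for n in range(R + 1, N + 1):
    s = sum(lam[d] for d in range(1, R + 1) if n % d == 0)
    assert (1 if is_prime(n) else 0) <= s * s + 1e-12
```

The whole art, as the discussion below explains, lies in choosing the $\lambda_d$ so that the majorant is not just valid but small on average.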
For any choice of weights $\lambda_d$, then, we have \begin{align} \nonumber \frac{1}{y}(\pi(x+y) - \pi(x)) &\leqslant \E_{x+1 \leqslant n \leqslant x+y} \big( \sum_{\substack{d | n \\ d \leqslant R}} \lambda_d \big)^2 \\ \nonumber & = \sum_{d \leqslant R} \sum_{d' \leqslant R} \lambda_d \lambda_{d'} \E_{x+1 \leqslant n \leqslant x+y} 1_{d | n}1_{d' | n} \\ \label{display} & = \sum_{d \leqslant R}\sum_{d' \leqslant R} \frac{\lambda_d \lambda_{d'}}{[d,d']} + O\big(\frac{1}{y}\sum_{d \leqslant R} \sum_{d' \leqslant R} |\lambda_d||\lambda_{d'}|\big) .\end{align} Let us imagine that the weights $\lambda_d$ are chosen to be $\ll y^{\epsilon}$ in absolute value (this is always the case in practice). Then the second term here is $O(R^2 y^{2\epsilon-1})$. If $R \leqslant y^{1/2 - 2\epsilon}$ then this is $O(y^{- \epsilon})$ and may be thought of as an error term. This is why it is advantageous (indeed essential) to work with a majorant taken over a truncated range of divisors, and not with $1_P$ itself. The first term in \eqref{display}, \[ \sum_{d \leqslant R}\sum_{d' \leqslant R} \frac{\lambda_d \lambda_{d'}}{[d,d']},\] is a quadratic form. It may be explicitly minimised subject to the condition $\lambda_1 = 1$, giving optimal weights $\lambda^{\mbox{\scriptsize SEL}}_d$ which are independent of $x$ and $y$, and the resultant expression may then be evaluated asymptotically. In this way one obtains the well-known bound \[ \frac{1}{y}(\pi(x + y) - \pi(x)) \leqslant (2 + \epsilon)\frac{\pi(y)}{y},\] valid for $y > y_0(\epsilon)$. What is the optimal choice of weights $\lambda^{\mbox{\scriptsize SEL}}_d$ for the Brun-Titchmarsh problem? The precise form will not concern us here (see, for example, \cite{nathanson}). However, it may be shown that \[ \lambda^{\mbox{\scriptsize SEL}}_d \approx \mu(d) \frac{\log(R/d)}{\log R}.\] (For a detailed discussion, see the appendix to \cite{green-tao-selbergsieve} and the references to work of Ramar\'e therein.)
We write \[ \lambda^{\mbox{\scriptsize GY}}_d := \mu(d) \frac{\log(R/d)}{\log R}.\] These weights are very natural for two reasons: their simplicity of form, and the fact that they approximate the optimal weights for the Brun-Titchmarsh problem. There is a third reason for considering them, which comes upon recalling the formula \[ \Lambda(n) = \sum_{d | n} \mu(d) \log(n/d).\] We see, then, that \[ \Lambda_R(n) := \sum_{\substack{d | n \\ d \leqslant R}} \mu(d) \log(R/d)\] is a kind of divisor-truncated version of $\Lambda$. We have arrived at the conclusion that the function \[ \frac{1}{(\log R)^2}\Lambda_R^2(n) = \big( \sum_{\substack{d | n \\ d \leqslant R}} \lambda^{\mbox{\scriptsize GY}}_d\big)^2\] might be a very useful majorant for the primes. What might we hope to do with such a majorant? By the computation leading to \eqref{display}, we see that it is possible to find an asymptotic for \begin{equation}\label{eq22} \E_{N \leqslant n < 2N} \Lambda_R(n)^2\end{equation} provided that $R \leqslant N^{1/2 - \epsilon}$. Later on we will wish to consider more complicated expressions involving genuine primes, such as \begin{equation}\label{eq23} \E_{N \leqslant n < 2N} \Lambda'(n +2)\Lambda_R(n)^2.\end{equation} Here we write $\Lambda'$ for the von Mangoldt function restricted to primes (as opposed to prime powers), thus \[ \Lambda'(n) := \left\{ \begin{array}{ll} \log n & \mbox{if $n$ is prime} \\ 0 & \mbox{otherwise}.\end{array}\right.\] The problem of evaluating \eqref{eq23} may be thought of as a kind of approximation to the twin prime problem, though we do not know of a way to relate this expression to that problem rigorously.
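The Möbius-sum formula for $\Lambda$ recalled above, of which $\Lambda_R$ is the divisor truncation, is easy to confirm by machine. A minimal sketch with trial-division implementations of $\mu$ and $\Lambda$ (the range $n<300$ is an arbitrary illustrative choice):

```python
from math import log, isclose

def mobius(n):
    # Mobius function by trial factorisation (fine for small n).
    mu, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0          # n is not squarefree
            mu = -mu
        p += 1
    return -mu if n > 1 else mu

def von_mangoldt(n):
    # log p if n is a power of the prime p, else 0.
    for p in range(2, n + 1):
        if n % p == 0:
            while n % p == 0:
                n //= p
            return log(p) if n == 1 else 0.0
    return 0.0

# Lambda(n) = sum over d | n of mu(d) log(n/d), checked pointwise.
for n in range(1, 300):
    s = sum(mobius(d) * log(n / d) for d in range(1, n + 1) if n % d == 0)
    assert isclose(s, von_mangoldt(n), abs_tol=1e-9)
```

Replacing the condition $d\mid n$ by $d\mid n,\ d\leqslant R$ in the inner sum gives exactly $\Lambda_R(n)$.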
Expanding out, we see that \eqref{eq23} is equal to \[ \sum_{d \leqslant R} \sum_{d' \leqslant R} \mu(d)\mu(d')\log(R/d)\log(R/d') \E_{N \leqslant n < 2N} \Lambda'(n + 2)1_{[d,d'] | n}.\] Now we expect that \begin{equation}\label{twin} \E_{N \leqslant n < 2N} \Lambda'(n + 2)1_{[d,d'] | n} \approx \frac{1}{\phi([d,d'])}\end{equation} if both $d$ and $d'$ are odd. The Bombieri-Vinogradov theorem clearly offers a chance of obtaining a statement to this effect \emph{on average} over $d,d' \leqslant R$ if $R \leqslant N^{1/4 - \epsilon}$, though there is certainly still work to be done as the distribution of $[d,d']$ as $d,d'$ range over $d,d' \leqslant R$ is not particularly uniform. For the details (which involve moment estimates for divisor functions) see \cite[\S 9]{gpy}. Once this is done we are left with the main term \begin{equation}\label{eq333} \sum_{\substack{d \leqslant R \\ d \; \mbox{\scriptsize odd}}} \sum_{\substack{d' \leqslant R \\ d' \; \mbox{\scriptsize odd}}} \frac{\mu(d)\mu(d')}{\phi([d,d'])}\log(R/d)\log(R/d').\end{equation} This term (and related expressions) may all be estimated rather accurately using the standard Dirichlet series techniques of analytic number theory, whereby the sums are expressed as integrals involving products of $\zeta$-functions. If we had $\mbox{EH}(\theta)$, that is to say if the primes had level of distribution $\theta$, one could show that \eqref{eq23} is roughly \eqref{eq333} in the wider range $R \leqslant N^{\theta/2 - \epsilon}$. In particular on the full Elliott-Halberstam conjecture one could work in the range $R \leqslant N^{1/2 - \epsilon}$, which is essentially the same as for \eqref{eq22}. Perhaps we should make a few remarks about the form of the asymptotic for \eqref{eq333}. One may in fact show that it is \[ \sim \log R \prod_{p \geqslant 3} \big(1 - \frac{1}{(p-1)^2}\big).\] We will see products such as this again in \S \ref{sec3}.
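Numerically, the product appearing here is the twin prime constant $\prod_{p\geqslant 3}\big(1-(p-1)^{-2}\big)=0.6601618\dots$, and the partial products converge very quickly since the tail contributes only $\sum_{p>P} (p-1)^{-2}$. A quick illustrative computation (the cutoff $10^5$ is arbitrary):

```python
# Partial product of prod_{p >= 3} (1 - 1/(p-1)^2) over primes p <= 10^5.
# The limit is the twin prime constant, approximately 0.6601618.
N = 10**5
sieve = bytearray([1]) * (N + 1)
sieve[0:2] = b"\x00\x00"
for p in range(2, int(N**0.5) + 1):
    if sieve[p]:
        sieve[p*p::p] = bytearray(len(sieve[p*p::p]))

partial = 1.0
for p in range(3, N + 1):
    if sieve[p]:
        partial *= 1 - 1 / (p - 1)**2

assert abs(partial - 0.6601618) < 1e-4
```

The tail beyond $10^5$ changes the product by less than $10^{-5}$, so this cutoff already pins down the constant to several digits.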
\ssubsection{A strategy for gaps between primes} We now turn to a discussion of the results of Goldston, Pintz and Y{\i}ld{\i}r{\i}m themselves. From the conceptual viewpoint it is easiest to begin by discussing the very strong \emph{conditional} results proved under the assumption of the Elliott-Halberstam conjecture. We stated in the introduction that they prove \begin{equation}\label{eq4.1a} \mbox{liminf}_{n \rightarrow \infty} (p_{n+1} - p_n) \leqslant 16\end{equation} assuming $\mbox{EH}(\theta)$ for some $\theta$ less than $1$. In fact, a much more general result is obtained. Let $\mathcal{H} = \{h_1,\dots,h_k\}$ be a $k$-tuple of distinct integers with $h_1 < h_2 < \dots < h_k$. A generalisation of the twin prime conjecture is that there are infinitely many $n$ such that all of $n +h_1,\dots,n + h_k$ are prime unless this is ``obviously impossible for trivial reasons'', which would be the case if there is some $p$ such that $\{h_1,\dots,h_k\}$ occupy all residue classes $\md{p}$. If this is \emph{not} the case then we say that $\mathcal{H}$ is \emph{admissible}. Goldston, Pintz and Y{\i}ld{\i}r{\i}m prove the following. \begin{theorem}\label{thm1} Suppose that $\mbox{\textup{EH}}(\theta)$ is known for some $\theta > 1/2$. Then there is $k_0(\theta)$ with the following property. If $k \geqslant k_0(\theta)$ and if $\mathcal{H} = \{h_1,\dots,h_k\}$ is an admissible $k$-tuple then for infinitely many $n$ at least \emph{two} of the numbers $n + h_1,\dots, n+ h_k$ are prime.\end{theorem} Note in particular that \[ \mbox{liminf}_{n \rightarrow \infty} (p_{n+1} - p_n) \leqslant \min_{\substack{\{h_1,\dots,h_k\} \; \mbox{\scriptsize admissible} \\ k \geqslant k_0(\theta)}} (h_k - h_1).\] It turns out that $k_0(\theta)$ can be taken to be 6 for $\theta > 0.95971$, and this leads to \eqref{eq4.1a} since the $6$-tuple $\{0,4,6,10,12,16\}$ is admissible. Here is a very general strategy for detecting primes in admissible tuples.
According to \cite{gpy-expository}, this has its origins in work of Selberg and Heath-Brown. Fix a range $[N,2N)$, suppose that $0 \leqslant h_1 < \dots < h_k \leqslant N$, and let $(\mu_n)_{N \leqslant n < 2N}$ be arbitrary non-negative weights which are certainly allowed to depend on the $h_i$. We will compare \[ Q_1 := \E_{N \leqslant n < 2N} \mu_n\] with \[ Q_2^{(i)} := \frac{1}{\log 3N}\E_{N \leqslant n < 2N} \Lambda'(n + h_i)\mu_n,\] for $i = 1, \dots, k$. T.~Tao remarked to me that a nice way to think of this is as follows: one may renormalise so that $\sum \mu_n = 1$, and then $\rho^{(i)} := Q_2^{(i)}/Q_1$ is essentially the probability that $n + h_i$ is prime if $n$ is drawn at random from the distribution $\mu$ (one might then write the expected values in the definitions of $Q_1$ and $Q_2^{(i)}$ as integrals with respect to $\mu$). Now if one can choose the weights $\mu$ so that $\rho^{(i)} > 1/k$, we will have upon summing over $i = 1,\dots,k$ that \[ \E_{N \leqslant n < 2N} \big( \sum_{i=1}^k \frac{\Lambda'(n + h_i)}{\log 3N} - 1 \big) \mu_n > 0,\] which means that there is some $n$ such that \[ \Lambda'(n + h_1) + \dots + \Lambda'(n + h_k) > \log 3N.\] For such an $n$, at least two of $n + h_1,\dots, n+h_k$ are prime. In the probabilistic language, we have essentially used the fact that if $n + h_1,\dots,n+ h_k$ are each drawn at random from $\mu$, the expected number of primes amongst these numbers is $> 1$. \ssubsection{Choosing good weights} We continue the discussion of the previous section. How should the weights $\mu_n$ be chosen to optimise the factors $\rho^{(i)}$? In retrospect, one may view most of the earlier developments on gaps between primes as attempts to find good weights $\mu_n$ in this context -- see \cite[\S 4]{gpy} for further remarks on this. A very good choice of weights might be \[ \mu_n := \Lambda'(n + h_1) \dots \Lambda'(n + h_k).\] One would indeed expect that $\rho^{(i)} \approx 1$ in this case.
The only problem is that we have no idea how to prove this, one particular issue being that we cannot show that $Q_1 \neq 0$ (indeed, this is equivalent to finding $n$ such that \emph{all} of $n + h_1,\dots, n+ h_k$ are prime). We must restrict ourselves to weights $\mu_n$ for which it is possible to estimate $Q_1$ and $Q_2^{(i)}$. As we saw in \S \ref{sec1.3}, there is a rather large class of such weights. In fact if \[ \mu_n = \big(\sum_{\substack{d | n \\ d \leqslant R}}\lambda_d\big)^2\] then our chances are good if $R \leqslant N^{\theta/2 - \epsilon}$, where $\theta$ is the best exponent for which we know $\mbox{EH}(\theta)$. Crucially, there is a rather more general class of weights one is able to consider, and that is weights of the form \begin{equation}\label{eq33b} \mu_n = \big(\sum_{\substack{d | F(n) \\ d \leqslant R}} \lambda_d \big)^2,\end{equation} where $F(n) = (n+a_1)\dots (n + a_m)$ is an integer polynomial. It is an interesting exercise to reprise the arguments of \S \ref{sec1.3} in this more general context. In place of the trivial estimate \[ \E_{n \leqslant N} 1_{[d,d'] | n} = \frac{1}{[d,d']} + O(1/N)\] which we used to derive \eqref{display} one must instead input information about \[ \E_{n \leqslant N} 1_{[d,d'] | F(n)}.\] But one knows that (for example) $p | F(n)$ precisely if $n \equiv -a_j \md{p}$ for some $j \in \{1,\dots,m\}$, and so such a task is not too frightening. The same is true of sums, such as \eqref{twin}, which involve the von Mangoldt function. For the remainder of the discussion we will narrow down our search for good weights $\mu_n$ to those having the form \eqref{eq33b}. We will assume that for any sensible choice of $F$ and the $\lambda_d$ we may evaluate $Q_1$ and $Q_2^{(i)}$ for $R \leqslant N^{\theta/2 - \epsilon}$ using the standard techniques which we sketched in \S \ref{sec1.3}.
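The divisibility fact quoted above, that $p\mid F(n)$ precisely when $n\equiv -a_j \md{p}$ for some $j$, together with the resulting density $m/p$ of such $n$ (when the $a_j$ are distinct $\md{p}$), is easy to confirm; a sketch with purely illustrative parameters:

```python
from math import prod

# F(n) = (n + a_1)...(n + a_m): p | F(n) iff n = -a_j (mod p) for some j,
# so when the a_j are distinct mod p the density of such n <= N is m/p
# (up to edge effects).  Illustrative choice: the tuple (0, 4, 6), p = 11.
a, p, N = (0, 4, 6), 11, 10**5

# The "iff" itself, checked pointwise for small n.
for n in range(1, 500):
    assert (prod(n + aj for aj in a) % p == 0) == \
           any((n + aj) % p == 0 for aj in a)

# The density m/p.
count = sum(1 for n in range(1, N + 1) if any((n + aj) % p == 0 for aj in a))
assert abs(count / N - len(a) / p) < 1e-3
```

This is the local input that replaces $1/[d,d']$ when the arguments of \S \ref{sec1.3} are rerun with $F(n)$ in place of $n$.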
Now we remarked that an ideal choice of weights is \[ \mu_n = \Lambda'(n + h_1) \dots \Lambda'(n + h_k),\] but we cannot compute with this choice. A closely related choice of weights is \[ \mu_n = \Lambda_k \big((n + h_1) \dots (n + h_k)\big).\] Here the function $\Lambda_k$ is defined by \[ \Lambda_k(n) := \sum_{d | n} \mu(d) \log(n/d)^k,\] and so in particular $\Lambda_1 = \Lambda$. For general $k$ the function $\Lambda_k$ is supported on those integers with at most $k$ distinct prime factors. (One way to check this is to use the identity \[ \Lambda_k = L\Lambda_{k-1} + \Lambda \ast \Lambda_{k-1},\] where $L(n) := \log n$ and $\ast$ denotes Dirichlet convolution.) Now in \S \ref{sec1.3} we saw the advantages of replacing $\Lambda$ with $\Lambda_R$, a divisor-truncated version of it. By analogy one might consider \[ \Lambda_{k,R}(n) := \sum_{\substack{ d | n \\ d \leqslant R}} \mu(d) \log(R/d)^k.\] This could be negative, but its square $\Lambda^2_{k,R}$ certainly cannot. Furthermore \[ \mu_n := \Lambda_{k,R}^2((n + h_1) \dots (n + h_k))\] is of the form \eqref{eq33b} (with $F(n) = (n + h_1)\dots (n + h_k)$). This, then, is a very natural choice of the weights $\mu_n$ and it is with some anticipation that we await the results of computing $Q_1$ and the $Q_2^{(i)}$, and hence the factors $\rho^{(i)}$. What we obtain is this: \[ \rho^{(i)} \sim \frac{2}{k+1} \cdot \frac{\log R}{\log N}.\] This is something of a disappointment, since it is not greater than $1/k$ even when $R = N^{\theta/2 - \epsilon}$ for $\theta$ very close to $1$ (that is, with a very strong form of the Elliott-Halberstam conjecture). Astonishingly, a seemingly small change tips the balance in our favour. We consider instead \[ \mu_n := \Lambda_{k+l,R}^2 ((n + h_1) \dots (n + h_k)),\] where $0 < l \ll k$ is a further parameter. 
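Both the convolution identity $\Lambda_k = L\Lambda_{k-1} + \Lambda\ast\Lambda_{k-1}$ and the claim about the support of $\Lambda_k$ can be verified mechanically from the Möbius-sum definition. A minimal sketch (trial-division Möbius function; the ranges of $k$ and $n$ are illustrative only):

```python
from math import log, isclose

def mobius(n):
    mu, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0          # n is not squarefree
            mu = -mu
        p += 1
    return -mu if n > 1 else mu

def Lam(k, n):
    # Lambda_k(n) = sum over d | n of mu(d) * log(n/d)^k, so Lam(1, .) = Lambda
    # and Lam(0, n) is the indicator of n = 1.
    return sum(mobius(d) * log(n / d) ** k
               for d in range(1, n + 1) if n % d == 0)

# The identity Lambda_k = L.Lambda_{k-1} + Lambda * Lambda_{k-1} (Dirichlet *).
for k in (1, 2, 3):
    for n in range(1, 150):
        conv = sum(Lam(1, d) * Lam(k - 1, n // d)
                   for d in range(1, n + 1) if n % d == 0)
        assert isclose(Lam(k, n), log(n) * Lam(k - 1, n) + conv, abs_tol=1e-8)

# Support: Lambda_2 vanishes at 30 = 2*3*5 (three distinct prime factors),
# but not at 12 = 2^2 * 3 (two).
assert abs(Lam(2, 30)) < 1e-9 and Lam(2, 12) > 1.0
```

The same mechanical check works after truncating the divisor sum at $R$, which is how one experiments with $\Lambda_{k,R}$ in practice.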
In the probabilistic language, $\rho^{(i)}$ may then be thought of (very roughly) as something like the probability that $n + h_i$ is prime given that $(n + h_1) \dots (n + h_k)$ has at most $k + l$ prime factors. With this choice of weights it is possible to compute that \begin{equation}\label{rho-eval} \rho^{(i)} \sim \frac{2}{k + 2l + 1}\cdot\frac{2l+1}{l+1} \cdot \frac{\log R}{\log N}.\end{equation} Note that as $k,l \rightarrow \infty$ with $l = o(k)$, this is essentially $4 \log R/(k \log N)$. In particular with $R := N^{\theta/2 - \epsilon}$ one has $\rho^{(i)} > 1/k$ for $k \geqslant k_0(\theta)$, for any $\theta > 1/2$. This is enough to establish Theorem \ref{thm1}. \ssubsection{The unconditional result} In this section we explain some aspects of the proof that \[ \mbox{liminf}_{n \rightarrow \infty} \frac{p_{n+1} - p_n}{\log n} = 0.\] Recall that in the last section we chose weights \[ \mu_n := \Lambda_{k+l,R}^2((n+h_1)\dots(n + h_k)).\] Taking $R := N^{1/4 - \epsilon}$, the quantities $Q_1$ and $Q_2^{(i)}$ can be evaluated using the Bombieri-Vinogradov theorem (rather than the Elliott-Halberstam conjecture). If $k,l$ are large with $l \ll k$ then (from \eqref{rho-eval}) the quantities $\rho^{(i)}$ are all at least $1/k - \epsilon'$ for $k \geqslant k_1(\epsilon)$, and hence we have that \begin{equation}\label{almost} \E_{N \leqslant n < 2N} \big( \sum_{i = 1}^k \frac{\Lambda'( n + h_i)}{\log 3N}\big) \mu_n \geqslant (1 - \delta) \Vert \mu \Vert_1,\end{equation} for any $\delta > 0$, and any $k \geqslant k_2(\delta)$. Here \[ \Vert \mu \Vert_1 := \E_{N \leqslant n < 2N} \mu_n \] is the total mass of $\mu$. This means that if $n$ is drawn at random from the distribution $\mu$, then the expected number of primes amongst the numbers $n + h_1,\dots, n+h_k$ is at least $1 - \delta$. Clearly, this is not an immediately applicable result if one wishes to obtain two or more primes in the tuple $\{n + h_1,\dots, n + h_k\}$.
There is, however, a tiny bit of further information available to us. Even if $h_0 \notin \{h_1,\dots,h_k\}$, there is still a chance that if $n$ is drawn at random from $\mu$ then $n + h_0$ will be prime. One can work out that this probability is asymptotically \[ \frac{\mathfrak{S}(h_0,h_1,\dots,h_k)}{\log N},\] where $\mathfrak{S}(h_0,h_1,\dots,h_k)$ is a certain \emph{singular series} reflecting the arithmetic properties of the numbers $h_0,h_1,\dots,h_k$ (it is similar in form to the product defining the twin prime constant). Formally what we mean by this is that \[ \E_{N \leqslant n < 2N} \frac{\Lambda'(n + h_0)}{\log 3N} \mu_n \sim \frac{\mathfrak{S}(h_0,h_1,\dots,h_k)}{\log N} \Vert \mu \Vert_1,\] and this may once again be rigorously established using the Bombieri-Vinogradov theorem. Let $\eta > 0$ be arbitrary. Summing over all $h_0$ with \[ 0 \leqslant h_0 \leqslant \eta \log N,\] we obtain from \eqref{almost} that \begin{equation}\label{eq477} \E_{N \leqslant n < 2N} \big( \sum_{h = 0}^{\eta \log N} \frac{\Lambda'(n + h)}{\log 3N}\big) \mu_n \geqslant \bigg(1 - \delta + \sum_{\substack{h_0 = 0 \\ h_0 \notin \{h_1,\dots,h_k\}}}^{\eta \log N}\frac{\mathfrak{S}(h_0,h_1,\dots,h_k)}{\log N}\bigg) \Vert \mu \Vert_1.\end{equation} For a typical choice of $h_1,\dots,h_k$, the right-hand side will be of a predictable size. Indeed by a result of Gallagher one may infer that \[ \sum_{\substack{h_0,\dots,h_k \leqslant H \\ h_i \; \mbox{\scriptsize distinct}}} \mathfrak{S}(h_0,h_1,\dots,h_k) \sim H^{k+1}\] as $H \rightarrow \infty$. Taking expectations over all $k$-tuples $\{h_1,\dots,h_k\}$ of distinct integers with $0 \leqslant h_i \leqslant \eta \log N$, we therefore obtain from \eqref{eq477} that \[ \E_{N \leqslant n \leqslant 2N} \big( \sum_{h = 0}^{\eta \log N} \frac{\Lambda'(n + h)}{\log 3N}\big) \mu_n \geqslant (1 - \delta + \eta - o(1)) \Vert \mu \Vert_1.\] Recall that this is valid for any $\delta > 0$, provided that $k$ is sufficiently large. 
Taking $\delta = \eta/2$, we therefore see that (if $n$ is drawn at random from $\mu$) the expected number of primes in the interval $[n, n + \eta \log N]$ is strictly greater than $1$. In particular there is some such interval containing at least two primes. \ssubsection{Further results} Since the original paper of Goldston, Pintz and Y{\i}ld{\i}r{\i}m several further works have appeared or are scheduled to appear giving refinements and variants of the main theorem. Here is a summary of what has been done: \begin{itemize} \item In the forthcoming paper \cite{gpy-precise} it is shown, by refining the ideas just described as far as seems possible, that \[ \liminf_{n \rightarrow \infty} \frac{p_{n+1} - p_n}{(\log p_n)^{1/2}(\log\log p_n)^2} < \infty.\] This remarkable result asserts that the gap between primes is infinitely often almost as small as the \emph{square root} of the average gap. \item Let $q_1 < q_2 < \dots$ be the numbers which are the product of exactly two distinct primes. Then \[ \liminf_{n \rightarrow \infty}(q_{n+1} - q_n) \leqslant 26,\] and in fact \[ \liminf_{n \rightarrow \infty}(q_{n+1} - q_n) \leqslant 6\] if one assumes the Elliott-Halberstam conjecture. These results are obtained in the paper \cite{gpyg} by the three authors and S.~W.~Graham. \item In the original paper \cite{gpy} one also finds results concerning several primes bunching together. Thus assuming $\mbox{EH}(\theta)$ one has, for any $r \geqslant 2$, the bound \[ \liminf_{n \rightarrow \infty} \frac{p_{n+r} - p_n}{\log p_n} \leqslant (\sqrt{r} - \sqrt{2\theta})^2.\] One rather curious feature of this bound is that the full Elliott-Halberstam conjecture $\mbox{EH}(1)$ implies that \[ \liminf_{n \rightarrow \infty} \frac{p_{n+2} - p_n}{\log p_n} = 0.\] Nothing of this sort is known for $p_{n+3} - p_n$, even conditionally. 
\end{itemize} \section{Binary digits of primes}\label{sec2} These notes originated from a course I lectured in Part III of the Mathematical Tripos at Cambridge University in the Lent Term 2007. My aim was to work through the result of Mauduit and Rivat in the case of binary expansions of primes, and to produce a reasonably short exposition (the original paper is 49 pages long). Because we provide complete details this section is considerably more technical than either \S 1 or \S 3. Readers not interested in these technicalities may still wish to read \S \ref{sec2.3}. \ssubsection{Statement of results} In a recent preprint \cite{mr}, Mauduit and Rivat proved that asymptotically 50\% of the primes have odd digit sum in base 2 (and hence, of course, 50\% of the primes have \emph{even} digit sum in base 2 as well!), answering a long-standing question of Gelfond. Our aim in this section is to give a self-contained proof of their theorem in the following form, which is easily seen to imply the result as just stated. \begin{theorem}[Mauduit-Rivat]\label{mainthm} Let $\Lambda$ be the von-Mangoldt function, and let $s : \N \rightarrow \N$ be the function which sums the binary digits of $n$. Then \[ \E_{n \leqslant X} \Lambda(n) (-1)^{s(n)} = O(X^{-\delta})\] for some $\delta > 0$. \end{theorem} In fact Mauduit and Rivat proved rather more than this: they counted primes whose digit sum in base $q$ is congruent to $a \md{m}$, for any natural numbers $a,q,m$. To prove a result in this generality requires some extra technical devices and a lot more notation, so we leave the interested reader to consult the original paper \cite{mr}. I find the result intrinsically interesting, but another reason for studying it is that it represents a pleasingly self-contained example of Vinogradov's method of `Type I and II sums' or `bilinear forms' for handling prime number sums. In contrast with the last chapter we provide a fairly complete technical discussion. 
\ssubsection{Some notation} Throughout this section $X$ will be thought of as a large parameter. If $Y_1$ and $Y_2$ are two quantities which depend on $X$ we write $Y_1 \gtrapprox Y_2$ if $Y_1 \gg_{\epsilon} X^{-\epsilon}Y_2$ for all $\epsilon > 0$. We use the notation $Y_1 \lessapprox Y_2$ similarly. Typically we might have $Y_1 \gg Y_2\log^{-C}X$ for some $C$, but the notation is occasionally applied in even looser situations. For example we have the bound $\tau(x) \lessapprox 1$ uniformly in $x \leqslant X$, where $\tau(x)$ denotes the number of divisors of $x$. If $n \in \N$ we write $\nu_2(n)$ for the $2$-exponent of $n$, the maximal $r$ such that $2^r | n$. The letter $c$ will always denote a small, positive, absolute constant. Different instances of the letter will not necessarily denote the same constant. \ssubsection{Vinogradov's method of Type I/II sums}\label{sec2.3} Suppose that $f: \N \rightarrow \C$ is a function, bounded by $1$. This section concerns a method which can often be used to show that a sum of the form \[ \E_{n \leqslant X} \Lambda(n) f(n)\] is substantially smaller than the trivial bound of $O(\log X)$. The method is particularly inclined to work when $f$ is ``far'' from being multiplicative. It is clear that such a sum \emph{cannot} be small in many cases when $f$ does have some multiplicative tendencies, for example when $f(n) = \mu(n)$ or when $f(n)$ is a Dirichlet character to small modulus. We will develop a form of the method which includes some technical refinements particularly suited to the study of the function $f(n) = (-1)^{s(n)}$. These are rather insubstantial and consist in large part of ensuring that various cutoffs are exact powers of two. In these notes we will endow the $\sim$ symbol with a rather specific meaning: if we write $\sum_{n \sim 2^{\nu}}$ then we understand that the variable $n$ is to range over the dyadic interval $[2^{\nu-1}, 2^{\nu})$. Here is the version of Vinogradov's method that we shall require.
\begin{proposition}[Method of Type I/II sums]\label{vinogradov} Suppose that $f : \N \rightarrow \C$ is a function with $|f(n)| \leqslant 1$ for all $n$. Let $\delta \in (0,1]$ be a parameter. We say that \emph{Type I sums are $\delta$-small} if \[ \frac{1}{X}\sum_{m \sim 2^{\mu}} |\sum_{n \in I_m} f(mn)| \leqslant \delta \] whenever $2^{\mu} \leqslant X^{1/100}$, $I_m \subseteq [2^{\nu-1}, 2^{\nu})$ is an interval and $2^{\mu + \nu} \leqslant X$. We say that \emph{Type II sums are $\delta$-small} if \[ \frac{1}{X}\big|\sum_{m \sim 2^{\mu}}\sum_{n \sim 2^{\nu}} a_m b_n f(mn)\big| \leqslant \delta \] uniformly for all sequences $(a_m), (b_n)$ with $|a_m|, |b_n| \leqslant 1$ and for all $\mu, \nu$ such that $X^{1/100} \leqslant 2^{\mu}, 2^{\nu} \leqslant X^{99/100}$. Suppose that $f$ is a function for which Type I and Type II sums are $\delta$-small. Then \[ \E_{n \leqslant X} \Lambda(n)f(n) \ll \delta^{1/2} \log^C X.\] \end{proposition} \begin{remarks} One can take $C = 11/2$. Note that, without the normalising factor $1/X$, both Type I and Type II sums are trivially bounded by $X$. The important feature of the proposition, then, is that a big enough gain over these trivial bounds leads to an improvement of the trivial bound on $\E_{n \leqslant X} \Lambda(n)f(n)$, which is $O(\log X)$. In our statement of the result we have made a particular choice of the ranges of $\mu, \nu$ for which estimates for Type I/II sums are required. There is considerable flexibility in the choice of these ranges. Though this matter does not concern us here, we remark that it has apparently not been completely clarified in the literature (though see \cite[\S 8]{duke-friedlander-iwaniec} for a related discussion). \end{remarks} When is an attempt to use the method of Type I/II sums likely to be successful in establishing a non-trivial bound for $\E_{n \leqslant X} \Lambda(n)f(n)$? In general, one might hope for success when $f$ does not behave ``multiplicatively''.
Certainly if $f$ is multiplicative, say $f = \chi$ for some Dirichlet character $\chi$, then by choosing $a_m = \overline{f(m)}$ and $b_n = \overline{f(n)}$ we see that the Type II sums are \emph{not} always small. A similar phenomenon persists for those $f$ which are the sum of a few multiplicative functions, for example the additive character $f(n) = e(an/q)$ with $q$ a small integer. Fortunately one can estimate $\E_{n \leqslant X} \Lambda(n)f(n)$ for these functions by other means, namely the explicit formula and theorems on the location of zeroes of $L$-functions. If, on the other hand, $f$ does not exhibit significant multiplicative behaviour then one may hope that the method of Type I and II sums will work. The most classical instance of this, worked out by Vinogradov in the course of proving that every large odd number is the sum of three primes, is the case $f(n) = e(\alpha n)$ where $\alpha$ is not close to a rational $a/q$. Our task here is to handle the case $f(n) = (-1)^{s(n)}$. \ssubsection{Proof of the method of Type I/II sums.} Proposition \ref{vinogradov} is not normally stated with quite the technical refinements that we have included, but the reader familiar with the basics of this theory may nonetheless prefer to skip this somewhat technical section. Before making a start on the proof proper, we show that the assumption that Type II sums are small implies that Type I sums are small for a much larger range of $\mu$ than we have hypothesised. \begin{lemma}[Type II implies extended Type I]\label{typeii-consequence} Suppose that $f : \N \rightarrow \C$ is a function with $\Vert f \Vert_{\infty} \leqslant 1$ for which Type I and Type II sums are $\delta$-small. Then we have the Type I estimate \[ \frac{1}{X}\sum_{m \sim 2^{\mu}} \big| \sum_{n \in I_m} f(mn) \big| \ll \delta \log X\] in the range $2^{\mu} \leqslant X^{99/100}$. \end{lemma} \noindent\textit{Proof.
} Since Type I sums are $\delta$-small, we may assume that $2^{\mu} \geqslant X^{1/100}$. It clearly suffices to prove that \begin{equation}\label{to-bound-33}\frac{1}{X} \sum_{m \sim 2^{\mu}}\sum_{n \sim 2^{\nu}} \omega_m 1_{I_m}(n) f(mn) \ll \delta \log X\end{equation} for all choices of $\omega_m$ with $|\omega_m| = 1$. But for the cutoff $1_{I_m}(n)$, which does not factor as a product of a function of $m$ with a function of $n$, this looks like a Type II sum. To remove the cutoff, we employ a standard ``separation of variables'' trick. By the Fourier inversion formula on $\Z$ we have \[ 1_{I_m}(n) = \int^1_0 \widehat{1}_{I_m}(\theta) e(n\theta)\, d\theta.\] Thus the left-hand side of \eqref{to-bound-33} is equal to \[ \frac{1}{X}\int^1_0 \big( \sum_{m \sim 2^{\mu}}\sum_{n \sim 2^{\nu}} \omega_m \widehat{1}_{I_m}(\theta)e(\theta n) f(mn) \big) \, d\theta.\] Since Type II sums are $\delta$-small and \[ |\widehat{1}_{I_m}(\theta)| \ll \min(X,|\theta|^{-1})\] we see that this is at most \[ \delta \int^1_0 \min(X, |\theta|^{-1})\, d\theta \ll \delta \log X.\] This concludes the proof. {\usebox{\proofbox}} \emph{Proof of Proposition \ref{vinogradov}.} The argument is essentially that of Vaughan \cite{vaughan}, but I have followed the beautiful presentation in the book of Iwaniec and Kowalski \cite{iwaniec-kowalski}. 
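Before continuing, the kernel estimate $|\widehat{1}_{I_m}(\theta)| \ll \min(X, |\theta|^{-1})$ used in the proof of Lemma \ref{typeii-consequence} is nothing more than the geometric series bound for $\sum_{n \in I} e(-n\theta)$. The following Python sketch (an illustration only, with randomly chosen intervals and frequencies; it is not part of the argument) checks the slightly weaker bound $\min(|I|, \Vert\theta\Vert^{-1})$ numerically:

```python
# Check |sum_{n in I} e(-n*theta)| <= min(|I|, ||theta||^{-1}) numerically,
# where ||theta|| is the distance from theta to the nearest integer and
# e(x) = exp(2*pi*i*x).  (The sharp bound is in fact 1/(2||theta||).)
import cmath
import random

def hat_indicator(I, theta):
    """The exponential sum sum_{n in I} e(-n*theta)."""
    return sum(cmath.exp(-2j * cmath.pi * n * theta) for n in I)

random.seed(0)
for _ in range(200):
    a = random.randrange(0, 1000)
    length = random.randrange(1, 500)
    theta = random.random()
    dist = min(theta, 1.0 - theta)  # ||theta||
    bound = length if dist == 0 else min(length, 1.0 / dist)
    assert abs(hat_indicator(range(a, a + length), theta)) <= bound + 1e-9
```

The bound follows from $|\sum_{n=a}^{a+L-1} e(-n\theta)| = |\sin(\pi L\theta)/\sin(\pi\theta)| \leqslant 1/(2\Vert\theta\Vert)$, together with the trivial bound $L$.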
We start with the easily-verified relation \[ \Lambda(n) = \sum_{\substack{b,c \\ bc | n}} \Lambda(b) \mu(c).\] Let $U := X^{1/3}$ (say), and decompose this sum as \[ \Lambda(n) = \Lambda_{\sharp \sharp}(n) + \Lambda_{\sharp \flat}(n) + \Lambda_{\flat \sharp}(n) + \Lambda_{\flat \flat}(n),\] where $\flat$ denotes ``large'' divisors and $\sharp$ denotes ``small'' divisors, so that for example \[ \Lambda_{\flat \sharp}(n) := \sum_{\substack{b \geqslant U,c < U \\ bc | n}} \Lambda(b) \mu(c).\] We observe that \[ \Lambda_{\sharp \flat}(n) + \Lambda_{\sharp \sharp}(n) = \sum_{b < U} \Lambda(b) \sum_{c | \frac{n}{b}}\mu(c) = \Lambda(n) 1_{n < U}\] whilst \begin{align*} \Lambda_{\flat \sharp}(n) + \Lambda_{\sharp \sharp}(n) &= \sum_{\substack{c < U \\ c | n}} \mu(c) \sum_{b | \frac{n}{c}} \Lambda(b) \\ &= \sum_{\substack{c < U \\ c | n}}\mu(c)\log(n/c).\end{align*} Thus we obtain what is essentially \emph{Vaughan's identity}, \begin{equation}\label{vaughan-id} \Lambda(n) = -\Lambda_{\sharp \sharp}(n) + \Lambda_{\flat \flat}(n) + 1_{n < U}\Lambda(n) + \sum_{\substack{c < U \\ c | n}}\mu(c)\log(n/c). \end{equation} Weighting by $f(n)$ and summing over $n \leqslant X$ we obtain \begin{align*} & \sum_{n \leqslant X} \Lambda(n) f(n) \\ &= -\sum_{n \leqslant X}\Lambda_{\sharp\sharp}(n) f(n) + \sum_{n \leqslant X} \Lambda_{\flat \flat}(n)f(n) + \sum_{n < U} \Lambda(n)f(n) + \sum_{c < U}\mu(c)\sum_{\substack{n \leqslant X \\ c | n}} f(n) \log(n/c)\\ &= S_1 + S_2 + S_3 + S_4.\end{align*} Our objective is to use the assumption that Type I and II sums are small in order to bound the sums $S_i$.
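Vaughan's identity \eqref{vaughan-id} can be verified numerically before it is put to work. The following Python sketch (an illustration only; the cutoff $U = 5$ is an arbitrary small stand-in for $U = X^{1/3}$, and $\Lambda$ and $\mu$ are computed by trial division) checks the identity, up to floating-point error, for all $n \leqslant 200$:

```python
# A direct numerical check of Vaughan's identity (illustration only; the
# cutoff U = 5 is an arbitrary small stand-in for U = X^{1/3}).
from math import log

def factorize(n):
    """Prime factorization of n as a dict {prime: exponent}."""
    f, d = {}, 2
    while d * d <= n:
        while n % d == 0:
            f[d] = f.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:
        f[n] = f.get(n, 0) + 1
    return f

def mangoldt(n):
    """Von Mangoldt function: log p if n is a prime power p^k, else 0."""
    f = factorize(n)
    return log(min(f)) if len(f) == 1 else 0.0

def mobius(n):
    """Moebius function."""
    f = factorize(n)
    return 0 if any(e > 1 for e in f.values()) else (-1) ** len(f)

U = 5

def lam_sharp_sharp(n):
    # both divisors small: b, c < U with bc | n
    return sum(mangoldt(b) * mobius(c)
               for b in range(1, U) for c in range(1, U)
               if n % (b * c) == 0)

def lam_flat_flat(n):
    # both divisors large: b, c >= U with bc | n
    return sum(mangoldt(b) * mobius(c)
               for b in range(U, n // U + 1)
               for c in range(U, n // b + 1)
               if n % (b * c) == 0)

def vaughan_rhs(n):
    tail = sum(mobius(c) * log(n / c) for c in range(1, U) if n % c == 0)
    return (-lam_sharp_sharp(n) + lam_flat_flat(n)
            + (mangoldt(n) if n < U else 0.0) + tail)

for n in range(2, 201):
    assert abs(mangoldt(n) - vaughan_rhs(n)) < 1e-9
```

With $U = 5$ the range $n \leqslant 200$ already exercises all four terms of the identity (for instance $n = 35$ contributes to $\Lambda_{\flat\flat}$).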
\emph{Bounding $S_1$.} We have \[ S_1 = \sum_{b,c < U} \Lambda(b)\mu(c) \sum_{n \leqslant X/bc} f(bcn).\] This may be rewritten as \begin{equation}\label{eq12} \sum_{m < U^2} \omega_m \sum_{n \leqslant X/m} f(mn),\end{equation} where \[ \omega_m := \sum_{\substack{b,c < U \\ bc = m}} \Lambda(b)\mu(c).\] By splitting into $O(\log^2 X)$ dyadic ranges for $m$ and $n$, we see that it suffices to prove that \begin{equation}\label{to-prove-1} \sum_{m \sim 2^{\mu}} |\omega_m| \big| \sum_{n \in I_m} f(mn)\big| \ll \delta^{1/2} X\log^{C-2} X\end{equation} whenever $I_m \subseteq [2^{\nu-1}, 2^{\nu})$ is an interval and $2^{\mu + \nu} \leqslant X$. This does not quite follow from Lemma \ref{typeii-consequence} since $\omega_m$ is not bounded. We do, however, have $|\omega_m| \ll \tau(m)\log X$, where $\tau$ is the divisor function, and hence since $\sum_{n \leqslant N} \tau(n)^2 \ll N \log^3 N$ we have \[ \sum_{m \sim 2^{\mu}} |\omega_m|^2 \ll 2^{\mu} \log^5 X.\] This means that for any $Q > 0$ we have the estimate \[ \sum_{\substack{m \sim 2^{\mu} \\ |\omega_m| > Q}} |\omega_m| \ll 2^{\mu}Q^{-1}\log^5 X.\] Splitting the sum in \eqref{to-prove-1} into two parts according as $|\omega_m|$ is or is not greater than $Q$ we obtain \[ \sum_{m \sim 2^{\mu}} |\omega_m| \big| \sum_{n \in I_m} f(mn)\big| \ll Q^{-1}X \log^5 X + Q \sum_{m \sim 2^{\mu}}\big| \sum_{n \in I_m} f(mn)\big|.\] By Lemma \ref{typeii-consequence} this is bounded by \[ Q^{-1}X\log^6 X + \delta QX\log X.\] Choosing $Q := \delta^{-1/2}\log^{5/2}X$, we obtain a bound of the required type. \emph{Bounding $S_2$.} The sum $S_2$ may be written as \[ \sum_{b, c \geqslant U}\sum_{t \leqslant X/bc} \Lambda(b) \mu(c) f(bct).\] Changing variables and rearranging, this may be written as \begin{equation}\label{to-split} \sum_{\substack{m \geqslant U \\ mn \leqslant X}} a_m b_n f(mn),\end{equation} where $a_m := \Lambda(m)$ and $b_n := \sum_{c \geqslant U: c | n} \mu(c)$.
Observing that $b_n = 0$ if $n < U$, we see that the sum over $m,n$ is covered by $O(\log^2 X)$ dyadic ranges $m \sim 2^{\mu}$, $n \sim 2^{\nu}$ with $X^{0.01} \leqslant 2^{\mu}, 2^{\nu} \leqslant X^{0.99}$. We may therefore split \eqref{to-split} into $O(\log^2 X)$ sums of the form \begin{equation}\label{to-bound-3} \sum_{m \sim 2^{\mu}} \sum_{n \sim 2^{\nu}} 1_{I(n)}(m) a_m b_n f(mn),\end{equation} where $I(n) \subseteq [2^{\mu - 1}, 2^{\mu})$ is an interval. It suffices to show that any such sum is $\ll \delta^{1/2} X\log^{C-2} X$. These sums look rather like Type II sums, except for the presence of the cutoff $1_{I(n)}(m)$ and the fact that the sequences $a_m, b_n$ are not bounded. To remove the cutoff we use the same device as in the proof of Lemma \ref{typeii-consequence}, writing \eqref{to-bound-3} as \[ \int^1_0 \big( \sum_{m \sim 2^{\mu}} \sum_{n \sim 2^{\nu}} a_m e(\theta m) \widehat{1}_{I(n)}(\theta) b_n f(mn)\big) \, d\theta.\] Since $|\widehat{1}_{I(n)}(\theta)| \ll \min(X, |\theta|^{-1})$ we see that it suffices to prove that \[\sum_{m \sim 2^{\mu}}\sum_{n \sim 2^{\nu}} a'_m b'_n f(mn) \ll \delta^{1/2}X\log^{C-3}X\] uniformly for all choices of $a'_m$ with $|a'_m| \leqslant |a_m|$ and $|b'_n| \leqslant |b_n|$ for all $m,n$. This has effectively removed the cutoff that we were worried about. It remains to deal with the non-boundedness of the sequences $(a'_m)_{m \sim 2^{\mu}}, (b'_n)_{n \sim 2^{\nu}}$. The non-boundedness of $a'_m$ is rather minor, since clearly $a'_m \leqslant \log X$ for all $m$. Thus we may define $a''_m := a'_m/\log X$ and reduce to proving that \begin{equation}\label{to-prove-5} \sum_{m \sim 2^{\mu}}\sum_{n \sim 2^{\nu}} a''_m b'_n f(mn) \ll \delta^{1/2}X\log^{C-4}X\end{equation} whenever $|a''_m| \leqslant 1$. To deal with $b'_n$, we note that $|b'_n| \leqslant \tau(n)$. 
Thus, by the argument used in dealing with $S_1$, we have \[ \sum_{\substack{n \sim 2^{\nu} \\ |b'_n| \geqslant Q}} |b'_n| \ll 2^{\nu}Q^{-1}\log^{3}X.\] Splitting the sum in \eqref{to-prove-5} into parts according as $|b'_n| \geqslant Q$ or not, we conclude as before. \emph{Bounding $S_3$.} The sum $S_3$ is trivially bounded by $U\log U \ll X^{1/3}\log X$. The $\delta$-smallness of Type I sums implies (rather vacuously) that $|f(n)| \leqslant \delta X$. Since we are assuming that $\Vert f \Vert_{\infty} = 1$, it follows that $\delta \geqslant 1/X$ and hence this term may be absorbed into the bound $\delta^{1/2}X \log^C X$ that we are trying to establish. \emph{Bounding $S_4$.} The sum may be written as \[ \sum_{m \leqslant U} \sum_{n \leqslant X/m} \mu(m) \log n f(mn).\] This may be split into $O(\log^2 X)$ sums of the form \begin{equation}\label{eq46} \sum_{m \sim 2^{\mu}} \sum_{n \in I_m} \mu(m)\log n f(mn),\end{equation} where $I_m \subseteq [2^{\nu-1}, 2^{\nu})$. This is almost a Type I sum: only the presence of the $\log n$ term prevents it being so. This term is so smooth, however, that it may be effectively removed by ``partial summation''. To do this, simply write \[ \log n = \int^n_1 \frac{dt}{t}\] and rearrange \eqref{eq46} as \[ \int^{2^{\nu}}_1 \frac{dt}{t} \sum_{m \sim 2^{\mu}} \mu(m) \sum_{n \in I_{m,t}} f(mn),\] where $I_{m,t} \subseteq [2^{\nu - 1}, 2^{\nu})$ is an interval. The smallness of Type I sums, as given in Lemma \ref{typeii-consequence}, may now be used to see that this is bounded as required. This completes the bounding of $S_4$, and hence the proof of Proposition \ref{vinogradov}. {\usebox{\proofbox}} \ssubsection{The sum-of-digits function and related functions} If $n$ is a positive integer and $n = \sum_{i \geqslant 0} n_i 2^i$ is its binary expansion, we write \[ s(n) := \sum_i n_i.\] This is, of course, a finite sum.
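In Python the digit-sum function is a one-liner, and two simple properties are worth keeping in mind: the binary recursion $s(2n) = s(n)$, $s(2n+1) = s(n) + 1$, and additivity in the absence of binary carries (carries are exactly the phenomenon that will have to be controlled later when comparing $s(mn)$ and $s(m(n+h))$). A small illustration, with helper names mirroring the notation of the text:

```python
# The digit-sum function s and the oscillation f(n) = (-1)^{s(n)} in Python
# (a small illustration; names mirror the notation in the text).
def s(n):
    """Sum of the binary digits of n."""
    return bin(n).count("1")

def f(n):
    """The oscillatory function f(n) = (-1)^{s(n)}."""
    return (-1) ** s(n)

assert [s(n) for n in range(1, 9)] == [1, 1, 2, 1, 2, 2, 3, 1]
# The binary recursion: appending a 0 preserves s, appending a 1 increments it.
assert all(s(2 * n) == s(n) and s(2 * n + 1) == s(n) + 1 for n in range(1, 200))
# s is additive when the summands' binary digits do not overlap (no carries):
assert s(0b1010000 + 0b0000111) == s(0b1010000) + s(0b0000111)
```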
For each positive integer $k$ we consider also the truncated functions \[ s_k(n) := \sum_{i = 0}^{k-1} n_i.\] These functions are closely related to $s(n)$, of course; on any interval $[2^k t, 2^k (t+1))$ the functions $s_k(n)$ and $s(n)$ differ by a fixed constant. The truncated functions $s_k(n)$ are periodic with period $2^k$, however, and this makes them rather amenable to study using Fourier analysis on the finite group $\Z/2^k\Z$. In fact we will be more interested in the oscillatory functions \[ f(n) := (-1)^{s(n)}\] and \[ f_k(n) := (-1)^{s_k(n)}.\] The functions $s_k$ and $f_k$ may, by abuse of notation, be regarded as functions on $\Z/2^k\Z$. \begin{definition}[Finite Fourier transform] Let $k \geqslant 0$ be a fixed integer. Then we define \[ \widehat{f}_k(r) := \E_{x \in \Z/2^k\Z} f_k(x) e(-rx/2^k).\] \end{definition} We now proceed to establish several properties of the Fourier transform $\widehat{f}_k$ which will be useful later on. We collect these here since they are all proved in a very similar way. The reader might wish to read through the proof of the first one, then skip to the next section. She might return here as each result is required. \begin{proposition}[$L^{\infty}$ bound]\label{fourier-prop} We have $|\widehat{f}_k(r)| \ll 2^{-ck}$ for all $r \in \Z/2^k\Z$. \end{proposition} \noindent\textit{Proof. } We observe that \begin{equation}\label{fourier-expansion} \widehat{f}_k(r) = 2^{-k}(1 - e(t))(1 - e(2t)) \dots (1 - e(2^{k-1}t)),\end{equation} where $t = r/2^k$. The result now follows by observing that \[ |1 - e(u)||1 - e(2u)| = 4|\sin (\pi u)\sin (2\pi u)| = 8|\cos (\pi u)| (1 - \cos^2 (\pi u))\] is maximised when $\cos^2 (\pi u) = 1/3$, and attains the value $16/(3\sqrt{3})$ there. Thus, grouping terms in pairs, we see that \[ |\widehat{f}_k(r)| \ll (4/(3\sqrt{3}))^{k/2} \ll 2^{-k/10}.\] This concludes the proof.
{\usebox{\proofbox}} \begin{proposition}[$L^1$ bound in progressions]\label{fourier-prop-2} Suppose that $0 \leqslant k' \leqslant k$ and that $a$ is a residue class $\mdlem{2^{k'}}$. Then \[ \sum_{\substack{r \in \Z/2^k \Z \\ r \equiv a \mdsublem{2^{k'}}}} |\widehat{f}_k(r)| \ll 2^{( \frac{1}{2} - c)(k - k')} |\widehat{f}_{k'}(a)|.\] \end{proposition} \begin{remark} For the purposes of discussion suppose that $k' = a = 0$. Then this proposition states that \[ \sum_{r \in \Z/2^k\Z} |\widehat{f}_k(r)| \ll 2^{(\frac{1}{2}-c)k}.\] By contrast the trivial bound (arising from Parseval's identity and an application of Cauchy-Schwarz) for this quantity would be $2^{k/2}$. One tends to imagine that a saving over a trivial bound can be expected whenever there is some kind of ``cancellation'', such as one might expect if the function $f_k : \Z/2^k\Z \rightarrow \{-1,1\}$ were chosen at random. This setting is rather different, and this represents perhaps the most remarkable feature of the paper \cite{mr}. It is possible to show that if $f_k$ is chosen randomly then \[ \sum_{r \in \Z/2^k\Z} |\widehat{f}_k(r)| \gg 2^{k/2}.\] Our proposition therefore captures a very specific property of the functions $f_k(n) = (-1)^{s_k(n)}$. \end{remark} \noindent\textit{Proof. } Write $S(a,k)$ for the sum on the left. For any $r \in \Z/2^k\Z$ the expansion \eqref{fourier-expansion} yields \[ |\widehat{f}_k(r)| = \textstyle\frac{1}{2}\displaystyle|1 - e(r/2^k)||\widehat{f}_{k-1}(r)|.\] Suppose that $k \geqslant k' + 1$. 
Since $\widehat{f}_{k-1}(r)$ is periodic with period $2^{k-1}$, we may split $S(a,k)$ as a sum over two ranges $0 \leqslant r < 2^{k-1}$ and $2^{k-1} \leqslant r < 2^k$, thereby obtaining \[ S(a,k) \leqslant S(a,k-1)\sup_{t \in [0,1]}\textstyle\frac{1}{2}\displaystyle\big(|1 + e(t)| + |1 - e(t)|\big) .\] Now a simple exercise in Euclidean geometry confirms that \begin{equation}\label{rat-bound} |1 + e(t)| + |1 - e(t)| \leqslant 2\sqrt{2},\end{equation} with equality if and only if $t = \pm 1/4$. Hence \begin{equation}\label{bound-1} S(a,k) \leqslant \sqrt{2} S(a,k-1).\end{equation} This does not, in itself, suffice to establish a nontrivial bound for $S(a,k)$. If $k \geqslant k' + 2$ one may improve things by splitting the sum $S(a,k)$ into \emph{four} ranges $j 2^{k-2} \leqslant r < (j+1)2^{k-2}$ ($j = 0,1,2,3$). This leads to \begin{equation}\label{bound-5} S(a,k) \leqslant S(a,k-2)\sup_{t \in [0,1]} \textstyle \frac{1}{4}\displaystyle\sum_{j = 0}^3 |1 - e(t + j/4)||1 - e(2(t + j/4))|.\end{equation} Now two applications of \eqref{rat-bound} confirm that \begin{align*} \sum_{j = 0}^3 & |1 - e(t + j/4)||1 - e(2(t + j/4))| \\ & = |1 - e(2t)|\big( |1 - e(t)| + |1 + e(t)|\big) + |1 + e(2t)|\big(|1 - e(t + 1/4)| + |1 + e(t + 1/4)|\big) \\ & \leqslant 2\sqrt{2}\big( |1 - e(2t)| + |1 + e(2t)|\big) \\ & \leqslant 8. \end{align*} Furthermore equality can never occur, and so by compactness this expression is bounded by $8 - c$ for some $c > 0$. Comparing with \eqref{bound-5} we see that \[ S(a,k) \leqslant (2 - c)S(a,k-2).\] To complete the proof of the proposition one may apply this repeatedly, followed perhaps by one application of \eqref{bound-1}, to bound $S(a,k)$ above in terms of $S(a,k')$. {\usebox{\proofbox}} \ssubsection{Analysis of Type I sums} \begin{proposition}[Estimate for Type I sums]\label{typei-prop} Suppose that $2^{\mu} \leqslant X^{1/20}$. For each $m \sim 2^{\mu}$, suppose that $I_m \subseteq [2^{\nu-1}, 2^{\nu})$ is an interval.
Then \[ \sum_{m \sim 2^{\mu}} |\sum_{n \in I_m} f(mn)| \ll_{\epsilon} X^{19/20 + \epsilon}.\] \end{proposition} \begin{remark} In \cite{mr} an alternative argument (related to the large sieve) is used, in which the same estimate is established under the weaker assumption that $2^{\mu} = O_{\epsilon}(X^{1/2 - \epsilon})$. \end{remark} \noindent\textit{Proof. } For $m,n$ in the ranges under consideration we have $f(mn) = f_k(mn)$, where $k := \mu + \nu$. Fix a value of $m$, and write $S_m := \sum_{n \in I_m} f(mn)$. We thus have \[ S_m = \sum_{x \in \Z/2^k \Z} f(x) 1_P(x),\] where $P = P_m := \{mn : n \in I_m\}$. By Parseval's identity this is \[ 2^k\sum_{r \in \Z/2^k \Z} \widehat{f}_k(r) \overline{\widehat{1}_P(r)},\] which is bounded by $2^k\Vert \widehat{f}_k \Vert_{\infty} \Vert \widehat{1}_P \Vert_1$. By summing a geometric series we have \[ |\widehat{1}_P(r)| \ll 2^{-k}\min(2^{\nu}, \Vert r/2^k \Vert^{-1}),\] and therefore $\Vert \widehat{1}_P \Vert_{1} \ll k \ll \log X$. It follows from Proposition \ref{fourier-prop} (whose proof allows us to take $c = 1/10$) that \[ S_m \ll 2^{9k/10}\log X.\] Summing over $m \sim 2^{\mu}$ and recalling that $2^{\mu} \leqslant X^{1/20}$ and $2^k = 2^{\mu + \nu} \leqslant X$, we obtain a bound of $2^{\mu}2^{9k/10}\log X \leqslant X^{1/20}X^{9/10}\log X = X^{19/20}\log X$, as required. {\usebox{\proofbox}} \ssubsection{Two diophantine lemmas} In this section we give two results of a `diophantine' nature which we will use in the next section, in which Type II sums are analysed. Both of these are more-or-less standard in this subject. \begin{lemma}[Vinogradov]\label{vinogradov-diophantine} Suppose that $q,Q,R$ are all natural numbers less than $X$, that $\beta \in \R$, and suppose that $(a,q) = 1$. Then \[ \sum_{x = 0}^R \min \big( Q, \Vert ax/q + \beta\Vert^{-1} \big) \lessapprox Q + q + R + QR/q.\] \end{lemma} \noindent\textit{Proof. } The key observation is that as $x$ ranges over any interval of length $q$, the numbers $ax/q \md{1}$ range over the set $0,1/q,\dots, (q-1)/q$.
It follows easily that if $I$ is any interval of length at most $q$ then \[ \sum_{x \in I} \min\big(Q, \Vert ax/q + \beta \Vert^{-1} \big) \ll Q + q\log q.\] Since the range $x = 1,\dots,R$ may be split into at most $R/q + 1$ such ranges, the result follows immediately. {\usebox{\proofbox}} \begin{lemma}[Equidistribution lemma]\label{eq-dist} Suppose that $\alpha \in \R$ and that $I$ is an interval of integers with $|I| = N$. Suppose that $\delta_1,\delta_2$ satisfy $\delta_1 < \delta_2/16$, and suppose that there are at least $\delta_2 N$ elements $n \in I$ for which $\Vert \alpha n\Vert_{\R/\Z} \leqslant \delta_1$. Suppose that $N \geqslant 8/\delta_2$. Then there is some $q \leqslant 8/\delta_2$ such that $\Vert \alpha q \Vert_{\R/\Z} \leqslant 4\delta_1\delta_2^{-1}N^{-1}$. \end{lemma} \noindent\textit{Proof. } By a well-known argument of Dirichlet there is some $q \leqslant N$ and an $a$ coprime to $q$ such that $|\alpha - a/q| \leqslant 1/qN$. Set $\theta := \alpha - a/q$, let $n_0 \in I$ be fixed, and set $\beta := n_0\theta$. Then for any $n \in I$ we have, by the triangle inequality, \[ \Vert \alpha n \Vert_{\R/\Z} \geqslant \Vert an/q + \beta \Vert_{\R/\Z} - \Vert \theta (n - n_0)\Vert_{\R/\Z} \geqslant \Vert an/q + \beta \Vert_{\R/\Z} - 1/q,\] and so \begin{equation}\label{star-8}\Vert an/q + \beta \Vert_{\R/\Z} \leqslant \delta_1 + \frac{1}{q}\end{equation} for at least $\delta_2 N$ values of $n \in I$. Now as $n$ ranges over any interval of length $q$ the numbers $an/q$ range over the set $\{0,1/q,2/q,\dots\}$. Thus in any such interval the number of $n$ satisfying \eqref{star-8} is no more than $\delta_1 q + 2$. Now $I$ may be divided into at most $N/q + 1$ intervals of length no more than $q$, and thus the number of $n \in I$ satisfying \eqref{star-8} is bounded by \[ (\frac{N}{q} + 1)(\delta_1q +2).\] On the other hand, we are assuming this quantity is at least $\delta_2 N$.
Since $\delta_2 \geqslant 4\delta_1$, $N \geqslant 8/\delta_2$ and $q \leqslant N$ this easily implies that $q \leqslant 8/\delta_2$. Now the assumption of the lemma implies, by pigeonholing, that $\Vert \alpha n \Vert_{\R/\Z} \leqslant 2\delta_1$ for at least $\delta_2 N/2$ values of $n \in \{1,\dots,N/2\}$. We have \[ \alpha n = \frac{an}{q} + \theta n\] where $|\theta| \leqslant 1/qN$. If $n \leqslant N/2$, it follows that either \[ \Vert \alpha n \Vert_{\R/\Z} \geqslant 1/2q \geqslant \delta_2/16 > \delta_1\] or else $n$ is a multiple of $q$. But for $k \leqslant N/2q$ we have simply \[ \Vert \alpha kq \Vert_{\R/\Z} = |\theta kq|.\] Thus $|\theta k q| \leqslant 2\delta_1$ for at least $\delta_2 N/2$ such values of $k$, and in particular for some $k_0 \geqslant \delta_2 N/2$. Hence $|\theta| \leqslant 2\delta_1/qk_0 \leqslant 4\delta_1/q\delta_2 N$, which implies the result. {\usebox{\proofbox}} \ssubsection{Analysis of Type II sums} We start with an inequality which is often used in the estimation of Type II sums, van der Corput's inequality. \emph{Van der Corput's inequality.} Van der Corput's inequality is a kind of generalisation of the Cauchy-Schwarz inequality. \begin{lemma}[van der Corput inequality]\label{vdc} Let $N,H$ be positive integers and suppose that $(a_n)_{n \in \{1,\dots,N\}}$ is a sequence of complex numbers. Extend $(a_n)$ to all of $\Z$ by defining $a_n := 0$ when $n \notin \{1,\dots,N\}$. Then \[ |\sum_{n} a_n |^2 \leqslant \frac{N + H}{H}\sum_{|h| \leqslant H}\big(1 - \frac{|h|}{H}\big)\sum_{n} a_n \overline{a_{n+h}}.\] \end{lemma} \noindent\textit{Proof.
} We have \[ \sum_n a_n = \frac{1}{H} \sum_{-H < n \leqslant N} \sum_{h = 0}^{H-1} a_{n+h}.\] Thus, applying the Cauchy-Schwarz inequality, we have \begin{align*} \big|\sum_n a_n \big|^2 &= \frac{1}{H^2} \big| \sum_{-H < n \leqslant N} \sum_{h = 0}^{H-1} a_{n+h}\big|^2 \\ &\leqslant \frac{N+H}{H^2} \sum_{-H < n \leqslant N} \big| \sum_{h = 0}^{H-1} a_{n+h} \big|^2 \\ & = \frac{N+H}{H^2} \sum_{-H < n \leqslant N} \sum_{h = 0}^{H-1} \sum_{h' = 0}^{H-1} a_{n+h} \overline{a_{n+h'}}, \end{align*} which, upon substituting $j := h' - h$ and noting that each $j$ with $|j| < H$ arises from exactly $H - |j|$ pairs $(h,h')$, equals the right hand side of the claimed inequality. This concludes the proof. {\usebox{\proofbox}} \begin{proposition}[Estimate for Type II sums]\label{typeii-prop} Suppose that $X^{1/100} \leqslant 2^{\mu}, 2^{\nu} \leqslant X^{99/100}$. Suppose that $(a_m)_{m \sim 2^{\mu}}$ and $(b_n)_{n \sim 2^{\nu}}$ are arbitrary sequences of complex numbers with $|a_m|, |b_n| \leqslant 1$. Then \[ \sum_{m \sim 2^{\mu}}\sum_{n \sim 2^{\nu}} a_m b_n f(mn) \ll X^{1-\kappa}\] for some absolute $\kappa > 0$. \end{proposition} \noindent\textit{Proof. } We may assume without loss of generality that $\nu \geqslant \mu$. Suppose for a contradiction that the result is false. We write this as \[ \sum_{m \sim 2^{\mu}} \sum_{n \sim 2^{\nu}} a_m b_n f(mn) \gtrapprox 2^{\mu + \nu}.\] By Cauchy-Schwarz we obtain \[ \sum_{m \sim 2^{\mu}} |\sum_{n \sim 2^{\nu}} b_n f(mn)|^2 \gtrapprox 2^{\mu + 2\nu}.\] Let $\rho := c(\mu + \nu)$, where $c > 0$ is a constant to be chosen later, and apply the van der Corput inequality (Lemma \ref{vdc}) with $H := 2^{\rho}$.
It follows that \[ \sum_{m \sim 2^{\mu}}\sum_{n \sim 2^{\nu}}\sum_{|h| \leqslant H} \big(1 - \frac{|h|}{H}\big)b_n\overline{b_{n+h}} f(m(n+h))f(mn) \gtrapprox 2^{\mu + \nu +\rho}.\] The $h = 0$ term is negligible, and so we obtain \begin{equation}\label{star} \sum_{1 \leqslant |h| \leqslant 2^{\rho}}\sum_{n \sim 2^{\nu}} |\sum_{m \sim 2^{\mu}} f(m(n+h))f(mn)| \gtrapprox 2^{\mu + \nu + \rho}.\end{equation} Our aim is to obtain some cancellation in the inner sum and thereby reach a contradiction. The first step is to show that $f$ can be replaced by the truncated function $f_k$, where $k := \mu + 2\rho$ (say) is not much bigger than $\mu$. Now we have \[ s(m(n + h)) - s_k(m(n+h)) = s(mn) - s_k(mn)\] if $mn, m(n+h)$ lie in the same interval $[2^kt, 2^{k}(t+1)]$. Since $mh$ is much smaller than $2^k$, this will happen most of the time. In fact for it not to hold we must have $mn \in [2^kt - 2^{\mu + \rho}, 2^kt]$ for some $t$ (that is, there is a ``carry'' on adding $mh$ to $mn$). Now for $x \leqslant X$ the divisor function $\tau$ satisfies $\tau(x) \lessapprox 1$, and therefore the number of ``bad'' pairs $(m,n)$ is at most \[ \sum_{t \leqslant 2^{\nu - 2\rho}}\sum_{l = 2^kt - 2^{\mu + \rho}}^{2^kt} \tau(l) \lessapprox 2^{\mu + \nu - \rho}.\] The contribution of the bad pairs to \eqref{star} is therefore negligible, and we may replace that inequality by \begin{equation}\label{star2} \sum_{1 \leqslant |h| \leqslant 2^{\rho}}\sum_{n \sim 2^{\nu}} |\sum_{m \sim 2^{\mu}} f_k(m(n+h))\overline{f_k(mn)}| \gtrapprox 2^{\mu + \nu + \rho}.\end{equation} Let us now focus on the inner sum \[ E_{n,h} := \sum_{m \sim 2^{\mu}}f_k(m(n+h))f_k(mn),\] which we will study using Fourier analysis on $\Z/2^k\Z$. 
Expanding both copies of $f_k$ using the inversion formula, we see that \[ E_{n,h} = \sum_{m \sim 2^{\mu}}\sum_{r,s \in \Z/2^k \Z} \widehat{f}_k(r)\widehat{f}_k(s) e\big(\frac{rm(n+h) + smn}{2^k}\big),\] which is bounded by \[ \sum_{r,s \in \Z/2^k \Z}|\widehat{f}_k(r)||\widehat{f}_k(s)|\min(2^{\mu} , \Vert \frac{r(n+h) + sn}{2^k}\Vert^{-1} ).\] From \eqref{star2} we thus have \begin{equation}\label{star3} \sum_{r,s \in \Z/2^k \Z}|\widehat{f}_k(r)||\widehat{f}_k(s)|\sum_{n \sim 2^{\nu}}\sum_{1 \leqslant |h| \leqslant 2^{\rho}} \min(2^{\mu} , \Vert \frac{r(n+h) + sn}{2^k}\Vert^{-1} ) \gtrapprox 2^{\mu + \nu + \rho}.\end{equation} Define the weights \[ \omega_{r,s} := 2^{-\mu - \nu - \rho}\sum_{n \sim 2^{\nu}}\sum_{1 \leqslant |h| \leqslant 2^{\rho}} \min(2^{\mu} , \Vert \frac{r(n+h) + sn}{2^k}\Vert^{-1} ).\] Then \eqref{star3} becomes \begin{equation}\label{star4} \sum_{r,s \in \Z/2^k \Z}|\widehat{f}_k(r)||\widehat{f}_k(s)|\omega_{r,s} \gtrapprox 1.\end{equation} To do anything with this we need to gain a greater understanding of the weights $\omega_{r,s}$. \begin{lemma}[Size of $\omega_{r,s}$]\label{omega-size} Suppose that $\nu_2(r+s) = t$. Then \[ \omega_{r,s} \lessapprox 2^{-\mu} + 2^{k-t - \mu - \nu} + 2^{t-k}.\] \end{lemma} \noindent\textit{Proof. } Write $(r+s)/2^k = a/2^{k-t}$, where $a$ is odd. Thus \[ \omega_{r,s} = 2^{-\mu - \nu - \rho} \sum_{n \sim 2^{\nu}} \sum_{1 \leqslant |h| \leqslant 2^{\rho}} \min\big(2^{\mu}, \Vert \frac{a}{2^{k-t}}n + \frac{r}{2^k} h \Vert^{-1}\big).\] Hence \[ \omega_{r,s} \leqslant 2^{-\mu - \nu}\sup_{1 \leqslant |h| \leqslant 2^{\rho}} \sum_{n \sim 2^{\nu}}\min(2^{\mu} , \Vert \frac{a}{2^{k-t}}n + \frac{r}{2^{k}}h \Vert^{-1} ).\] By Lemma \ref{vinogradov-diophantine} we obtain \[ \omega_{r,s} \lessapprox 2^{-\nu} + 2^{-\mu} + 2^{k-t - \mu - \nu} + 2^{t-k}.\] Recalling that $\nu \geqslant \mu$, we see that this implies the claimed bound. {\usebox{\proofbox}} Let us recall \eqref{star4}.
This clearly implies that there is $t$, $0 \leqslant t \leqslant k$, such that \begin{equation}\label{eq790} \sum_{\substack{r,s \in \Z/2^k\Z \\ \nu_2(r+s) = t}} |\widehat{f}_k(r)||\widehat{f}_k(s)|\omega_{r,s} \gtrapprox 1.\end{equation} Now by Lemma \ref{omega-size}, Proposition \ref{fourier-prop-2} and Parseval's inequality (in turn) we have \begin{align*} & \sum_{\substack{r,s \in \Z/2^k \Z \\ \nu_2(r+s) = t}} |\widehat{f}_k(r)||\widehat{f}_k(s)|\omega_{r,s}\\ & \lessapprox (2^{-\mu} + 2^{k-t - \mu - \nu} + 2^{t-k}) \sum_{a \in \Z/2^{t}\Z} \sum_{r \equiv a \mdsub{2^t}} |\widehat{f}_k(r)| \sum_{s \equiv -a \mdsub{2^t}} |\widehat{f}_k(s)| \\ & \lessapprox 2^{(k-t)(1-c)}(2^{-\mu} + 2^{k-t - \mu - \nu} + 2^{t-k}) \sum_{a \in \Z/2^t \Z} |\widehat{f}_t(a)||\widehat{f}_t(-a)| \\ & \leqslant 2^{(k-t)(1-c)}(2^{-\mu} + 2^{k-t - \mu - \nu} + 2^{t-k}) \end{align*} for some absolute constant $c > 0$. Recalling that $k = \mu + 2\rho$ we see that the first two terms are at most $2^{4\rho - ck}$, which is not $\gtrapprox 1$ if $\rho$ is chosen sufficiently small. Thus we see that if \eqref{eq790} holds then $2^{-c(k-t)} \gtrapprox 1$, which implies that $2^t \gtrapprox 2^k$. Assume, then, that this is the case. Applying Parseval's identity again (or Proposition \ref{fourier-prop-2}, but this is overkill now) we obtain, for any $\eps$, \begin{align*} \sum_{\substack{r,s \in \Z/2^k \Z \\ \nu_2(r+s) = t \\ \omega_{r,s} \leqslant 2^{-\eps k}}} |\widehat{f}_k(r)||\widehat{f}_k(s)| \omega_{r,s} & \leqslant 2^{-\eps k} \sum_{a \in \Z/2^{t}\Z} \sum_{r \equiv a \mdsub{2^t}} |\widehat{f}_k(r)| \sum_{s \equiv -a \mdsub{2^t}} |\widehat{f}_k(s)| \\ & \leqslant 2^{k - t - \eps k} \lessapprox 2^{-\eps k}. 
\end{align*} It follows that, in \eqref{eq790}, we may restrict attention to values of $r,s$ for which $\omega_{r,s} \gtrapprox 1$, thus \begin{equation}\label{eq791} \sum_{\substack{r,s \in \Z/2^k\Z \\ \omega_{r,s} \gtrapprox 1}} |\widehat{f}_k(r)||\widehat{f}_k(s)|\omega_{r,s} \gtrapprox 1.\end{equation} In the next lemma we will show that there are rather few pairs $(r,s)$ with $\omega_{r,s} \gtrapprox 1$. To do this we will make use of the as yet unexploited averaging over $h$ which occurs in the definition of $\omega_{r,s}$. \begin{lemma}[Large values of $\omega_{r,s}$]\label{omega-2} There are $\lessapprox 2^{\rho}$ pairs $(r,s) \in \Z/2^k\Z \times \Z/2^k\Z$ such that $\omega_{r,s} \gtrapprox 1$. \end{lemma} \noindent\textit{Proof. } We know already, by Lemma \ref{omega-size}, that we must have $\nu_2(r + s) = t$, where $2^{t} \gtrapprox 2^k$. Write $(r+s)/2^k = a/2^{k-t}$, where $a$ is odd. We have \[ \sum_{1 \leqslant |h| \leqslant 2^{\rho}} \sum_{n \sim 2^{\nu}} \min(2^{\mu}, \Vert \frac{a}{2^{k-t}} n + \frac{r}{2^k}h \Vert^{-1}) \gtrapprox 2^{\mu + \nu + \rho}.\] Dividing the sum over $n$ into residue classes $\md{2^{k-t}}$, of which there are $\lessapprox 1$, we see that there is $\theta$ such that \begin{equation}\label{star-44} \sum_{1 \leqslant |h| \leqslant 2^{\rho}} \min(2^{\mu}, \Vert \theta + \frac{r}{2^k}h \Vert^{-1}) \gtrapprox 2^{\mu + \rho}.\end{equation} Although this looks to be of the form covered by Vinogradov's lemma (Lemma \ref{vinogradov-diophantine}), the fact that $2^{\rho} \lll 2^k$ means that more information may be gleaned by appealing instead to Lemma \ref{eq-dist}. From \eqref{star-44} it follows that for $\gtrapprox 2^{\rho}$ values of $h$, $1 \leqslant |h| < 2^{\rho}$, we have \[ \Vert \theta + \frac{r}{2^k}h \Vert \lessapprox 2^{-\mu}.\] Fixing some $h_0$ with this property and considering the numbers $h - h_0$, we may assume that $\theta = 0$. 
Applying Lemma \ref{eq-dist}, we see that \[ \Vert qr/2^k\Vert_{\R/\Z} \lessapprox 2^{-\mu - \rho}\] for some $q \lessapprox 1$. Thus if $\omega_{r,s} \gtrapprox 1$ then there is some $q \lessapprox 1$ such that $|rq \md{2^k}| \lessapprox 2^{\rho}$ (recall that $k = \mu + 2\rho$). There are $\lessapprox 2^{\rho}$ such values of $r$, and for each of them there are just $\lessapprox 1$ values of $s$ such that $t := \nu_2(r+s)$ satisfies $2^t \gtrapprox 2^k$. {\usebox{\proofbox}} It is now an easy matter to show that \eqref{eq791} cannot hold, which, as we have shown, implies that Type II sums are small. Indeed using Lemma \ref{omega-2} and Proposition \ref{fourier-prop} we obtain \[\sum_{\substack{r,s \in \Z/2^k\Z \\ \omega_{r,s} \gtrapprox 1}} |\widehat{f}_k(r)||\widehat{f}_k(s)|\omega_{r,s} \lessapprox 2^{\rho - 2ck}.\] This contradicts \eqref{eq791} if $\rho$ is chosen sufficiently small. {\usebox{\proofbox}} \emph{Proof of Theorem \ref{mainthm}.} Almost nothing more need be said: Theorem \ref{mainthm} is an immediate consequence of Propositions \ref{vinogradov}, \ref{typei-prop} and \ref{typeii-prop}. {\usebox{\proofbox}} \section{Patterns of primes}\label{sec3} The aim of this section is to discuss work of the author and T.~Tao on linear equations in primes. This programme is not yet completed and what has been done so far is spread across a number of papers \cite{green-tao-longprimeaps,green-tao-u3inverse,green-tao-u3mobius,green-tao-linearprimes,green-tao-nilratner,green-tao-ukmobius}. It is also discussed in several expository articles \cite{green-icm,green-longprimeaps,tao-dichotomy,tao-coates, tao-elescorial}. Our aim here is to do little more than try to describe what we have proved and what we hope to prove, and to furnish the reader with some idea of how the various papers fit together. In other words we hope that this section may be used as a sort of `roadmap' for readers interested in a more detailed study of the papers.
There are some conspicuous absences from the current section. Our treatment of the history of the subject is very patchy (see the article \cite{kumchev-tolev} for this), and we omit any discussion of links with the ergodic theory literature (see \cite{host-survey,kra-survey} for this). References are to the versions of the papers which were available on \texttt{www.arxiv.org} on October 1st 2007. It is quite likely that published versions will have different theorem numberings. Suppose that $\psi_1,\dots,\psi_t : \Z^d \rightarrow \Z$ are affine-linear forms. The basic questions which motivate our work are the following. \begin{question}[Existence of prime values]\label{q1} Are there values of $\vec{n} \in \Z^d$ for which these forms all take prime values? Are there infinitely many such $\vec{n}$? \end{question} \begin{question}[Asymptotics]\label{q2} How many such $\vec{n}$ are there inside the box $[-N,N]^d$? \end{question} In this generality, our questions contain many of the classical questions in additive prime number theory. \begin{enumerate} \item When $d = 1$, $t = 2$, $\psi_1(n) = n$ and $\psi_2(n) = 2m - n$ we have the \emph{Goldbach Conjecture}: is $2m$ the sum of two primes? \item When $d = 1$, $t = 2$, $\psi_1(n) = n$ and $\psi_2(n) = n + 2$ we have the \emph{Twin Prime Conjecture}: are there infinitely many pairs of primes which differ by 2? \item When $d = 2$, $t = 3$, $\psi_1(\vec{n}) = n_1$, $\psi_2(\vec{n}) = n_2$ and $\psi_3(\vec{n}) = m - n_1 - n_2$ ($m$ odd) we have the \emph{Ternary Goldbach Conjecture}: is $m$ the sum of three primes? \item When $d = 2$, $t = k$ and $\psi_i(\vec{n}) = n_1 + (i-1)n_2$, $i = 1,\dots,k$ we have the question of whether there exist arithmetic progressions of length $k$ consisting entirely of primes. \end{enumerate} Essentially nothing is known about either Question \ref{q1} or Question \ref{q2} in cases (i) and (ii).
Both Questions were answered in case (iii) some seventy years ago by Vinogradov, building on earlier work of Hardy and Littlewood. Question \ref{q1} was answered in case (iv) by the author and Tao \cite{green-tao-longprimeaps}. Question \ref{q2} in that case is much harder and was answered for $k = 3$ by Chowla and van der Corput in 1939 and for $k = 4$ by the author and Tao \cite{green-tao-u3inverse,green-tao-u3mobius,green-tao-linearprimes}. Question \ref{q2} for $k \geqslant 5$ is one of the main goals of our current programme of research, and we shall report on what progress has been made so far. We will not give any further separate discussion of Question \ref{q1}, as this has now been exposited in many places. Particularly recommended are the articles \cite{host-survey} and \cite{kra-survey}. See also \cite{green-longprimeaps,tao-dichotomy,tao-coates,tao-elescorial}. There are very natural conjectural answers to Questions \ref{q1} and \ref{q2}. It is clear that congruence conditions may result in there being no, or very few, choices of $\vec{n}$ for which all of the forms $\psi_i(\vec{n})$ are prime. A trivial example is the system $\psi_1(n) = n,\psi_2(n) = n + 7$ -- consideration of this $\md{2}$ obviously implies that $\psi_1(n)$ and $\psi_2(n)$ cannot both be prime. Congruence conditions may alter our expectations in more subtle ways too. For example if $n$ is known to be prime (and is not $2$) then one feels that $n+2$ is \emph{more} likely to be prime than a random integer of the same size, for it is already known to be odd. Pulling against this, however, is the observation that if $n \neq 3$ then $n$ is congruent to either $1$ or $2 \md{3}$, and so $n+2$ is congruent to either $0$ or $1\md{3}$, but never to $2$. On this $\md{3}$ evidence one feels that $n+2$ is \emph{less} likely to be prime than a random integer of the same size. 
Another obvious way in which one could fail to have any prime values among the $\psi_i(\vec{n})$ is if they cannot be simultaneously positive, for example $\psi_1(\vec{n}) = n_1 - n_2$, $\psi_2(\vec{n}) = n_2 - n_3$, $\psi_3(\vec{n}) = n_3 - n_1$. A more profound analysis of these intuitions suggests the following conjecture. In the formulation of this conjecture we use the \emph{local} von Mangoldt functions $\Lambda_{\Z/p\Z}$, defined by \[ \Lambda_{\Z/p\Z}(x) = \left\{ \begin{array}{ll} p/(p-1) & \mbox{if $(x,p) = 1$} \\ 0 & \mbox{otherwise}.\end{array}\right.\] \begin{dickson} Suppose that $d,t,N \geqslant 1$ are integers. Suppose that no two of the forms $\psi_i$ are rational multiples of one another, and that no form $\psi_i$ is constant. Write $\psi_i(\vec{n}) = l_{i1}n_1 + \dots + l_{id} n_d + b_i$ and suppose that we have $|l_{ij}| \leqslant L$, $|b_i| \leqslant LN$ for some real number $L$. Then for any convex body $K \subseteq [-N,N]^d$ we have \[ \sum_{\vec{n} \in K \cap \Z^d} \Lambda(\psi_1(\vec{n})) \dots \Lambda(\psi_t(\vec{n})) = \beta_{\infty}\prod_p \beta_p + o_{d,t,L}(N^d),\] where the local factors $\beta_p$ are defined by \[ \beta_p := \E_{\vec{x} \in (\Z/p\Z)^d} \Lambda_{\Z/p\Z}(\psi_1(\vec{x})) \dots \Lambda_{\Z/p\Z}(\psi_t(\vec{x}))\] and \[ \beta_{\infty} := \vol( K \cap \psi_1^{-1}(\R_{\geqslant 0}) \cap \dots \cap \psi_t^{-1}(\R_{\geqslant 0})).\] \end{dickson} \noindent\textit{Remarks. } We have counted primes weighted using the von Mangoldt function, as this gives tidier expressions. For an unweighted version see \cite[Conjecture 1.4]{green-tao-linearprimes}. It is quite fun to play with particular cases of the conjecture. For example with $d = 1$, $t = 2$, $\psi_1(n) = n$, $\psi_2(n) = n+2$ and $K = [0,N]$ we obtain the conjecture \[ \sum_{n \leqslant N} \Lambda(n) \Lambda(n+2) = 2\prod_{p \geqslant 3} \big( 1 - \frac{1}{(p-1)^2} \big) N + o(N)\] for the (weighted) number of twin primes less than or equal to $N$.
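As a quick numerical sanity check (not part of the argument), the local factors $\beta_p$ for the twin-prime system $\{n, n+2\}$ can be computed directly from the definition and compared against the Euler factors $1 - 1/(p-1)^2$ above; a Python sketch:

```python
from fractions import Fraction

def is_prime(p):
    return p > 1 and all(p % q for q in range(2, int(p**0.5) + 1))

def lam(x, p):
    # local von Mangoldt function Lambda_{Z/pZ}
    return Fraction(p, p - 1) if x % p != 0 else Fraction(0)

def beta_p(p):
    # beta_p for the twin-prime system {n, n+2}
    return sum(lam(x, p) * lam(x + 2, p) for x in range(p)) / p

assert beta_p(2) == 2
for p in [3, 5, 7, 11]:
    assert beta_p(p) == 1 - Fraction(1, (p - 1) ** 2)

# truncated singular series: 2 * prod_{3 <= p <= 1000} (1 - 1/(p-1)^2)
val = 2.0
for p in range(3, 1001):
    if is_prime(p):
        val *= 1 - 1 / (p - 1) ** 2   # approaches the twin prime constant
```

The truncated product already sits very close to the twin prime constant $\approx 1.32$ mentioned below.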
The numerical value of the constant here is about 1.32. Some other examples are worked out in \cite[\S 1]{green-tao-linearprimes}. Let us return to the specific examples (i) -- (iv) mentioned above. What makes some of these questions much easier than others? The most important parameter in determining the difficulty of Questions \ref{q1} and \ref{q2} is the \emph{complexity} of the system of forms $\{\psi_1,\dots,\psi_t\}$. The definition of complexity involves nothing more than simple linear algebra, but it is not especially illuminating at first sight. Let $s$ be a positive integer. We say that the complexity of the system $\{\psi_1,\dots,\psi_t\}$ is at most $s$ if, for any $i \in \{1,\dots,t\}$, the forms $\{\psi_1,\dots,\psi_{i-1},\psi_{i+1},\dots,\psi_t\}$ may be divided into $s+1$ classes in such a way that $\psi_i$ is not in the affine linear span of any of them. Thus the system $\{n_1,n_1 + n_2,n_1 + 2n_2, n_1 + 3n_2\}$ has complexity at most $2$ since, quite obviously, we may remove any one form and divide the remaining three forms into singleton classes whose affine span does not contain that form. However the complexity of this system is \emph{not} at most 1: if we remove the form $n_1$ then it is impossible to divide the remaining forms $\{n_1 + n_2, n_1 + 2n_2, n_1 + 3n_2\}$ into two classes such that $n_1$ is not in the affine linear span of any class. Thus the complexity of this system is exactly two. The complexity of the system $\{n_1,n_1 + n_2, n_1 + 2n_2\}$ is one, as is the complexity of the system $\{n_1, n_2, m - n_1 - n_2\}$ for any fixed $m$. The complexity of the system $\{n, n+2\}$ is apparently undefined, since if we remove the form $n$ it is impossible to partition the singleton class $\{n+2\}$ in any way such that $n$ is not contained in the affine span of a class. In such cases we say that the complexity is infinite. The system $\{n, 2m - n\}$, $m$ fixed, arising from the Goldbach Conjecture has infinite complexity. 
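For small systems, complexity can be computed mechanically. The sketch below brute-forces the definition, reading ``in the affine linear span'' as membership in the linear span of the homogeneous parts of the forms in a class; this reading reproduces all the worked examples above, though the paper's formal definition is phrased slightly differently, so treat this as an illustrative assumption rather than a faithful transcription.

```python
from itertools import product
import numpy as np

def in_span(v, vs):
    # Is the vector v in the linear span of the vectors vs?
    if not vs:
        return False
    A = np.array(vs, dtype=float)
    return np.linalg.matrix_rank(np.vstack([A, v])) == np.linalg.matrix_rank(A)

def complexity(forms, max_s=4):
    # forms: homogeneous parts of the affine-linear forms, as tuples
    for s in range(max_s + 1):
        for i, psi in enumerate(forms):
            others = [f for j, f in enumerate(forms) if j != i]
            # try every division of the remaining forms into s+1 classes
            if not any(
                all(not in_span(psi, [f for f, l in zip(others, labels) if l == c])
                    for c in range(s + 1))
                for labels in product(range(s + 1), repeat=len(others))
            ):
                break  # this psi_i cannot be avoided with s+1 classes
        else:
            return s
    return None  # infinite (or > max_s) complexity

assert complexity([(1, 0), (1, 1), (1, 2)]) == 1            # 3-term APs
assert complexity([(1, 0), (1, 1), (1, 2), (1, 3)]) == 2    # 4-term APs
assert complexity([(1, 0), (0, 1), (-1, -1)]) == 1          # ternary Goldbach
assert complexity([(1,), (1,)]) is None                     # {n, n+2}
```

The last assertion reflects the infinite complexity of the twin-prime system: removing $n$ leaves $\{n+2\}$, whose span always contains $n$'s homogeneous part.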
Roughly speaking, only systems of complexity one were understood before the recent work \cite{green-tao-u3inverse, green-tao-u3mobius, green-tao-linearprimes}. Much of the theory in the complexity one case was worked out in a paper of Balog \cite{balog}, which built upon the techniques of Vinogradov. Complexity is the most important measure of how difficult it is to solve Questions \ref{q1} and \ref{q2}. However there are some rather trivial examples of systems of complexity greater than one for which Question \ref{q1} can be answered. For example just using the fact that there are $\gg N/\log N$ primes less than $N$ and the Cauchy-Schwarz inequality it is possible to show that there are $\gg N^4/\log^8 N$ quadruples $(n_1,n_2,n_3,n_4)$ such that all eight of the forms \[ \{n_1, n_1 + n_2, n_1 + n_3, n_1 + n_4, n_1 + n_2 + n_3, n_1 + n_2 + n_4, n_1 + n_3 + n_4, n_1 + n_2 + n_3 + n_4\}\] are prime. This system has complexity 2. The reason that complexity one systems have proved amenable to attack is that they can be studied using \emph{harmonic analysis}, and in particular the circle method of Hardy and Littlewood. We are now going to discuss some ideas behind this method in a rather unorthodox way. It turns out that this is the easiest (in fact so far the only) description to generalize to systems of complexity 2 and higher. The following formulation of the principle that `harmonic analysis governs systems of complexity 1' is established in \cite{green-tao-linearprimes}. \begin{proposition}\label{prop55} Let $N$ be a prime. Suppose that $\{\psi_1,\dots,\psi_t\}$ is a system of affine-linear forms of complexity 1. 
Suppose that $f_1, \dots, f_t : \Z/N\Z \rightarrow [-1,1]$ are functions and that \begin{equation}\label{hyp} |\E_{\vec{n} \in (\Z/N\Z)^d} f_1(\psi_1(\vec{n})) \dots f_t(\psi_t(\vec{n}))| \geqslant \delta.\end{equation} Then for each $i \in [t]$ there is some $r \in \Z/N\Z$ such that \begin{equation} |\E_{n \in \Z/N\Z} f_i(n) e(-rn/N)| \gg_{\delta} 1.\label{inverse-2}\end{equation} \end{proposition} In words, if the functions $f_1,\dots,f_t$ behave in some way unexpectedly when evaluated along the linear forms $\psi_i$, this phenomenon can be detected by evaluating the Fourier coefficients of the $f_i$. Proposition \ref{prop55} is proved in two stages. The first step is to establish a \emph{generalized von Neumann theorem}, which is a bound of the form \[ |\E_{\vec{n} \in (\Z/N\Z)^d} f_1(\psi_1(\vec{n})) \dots f_t(\psi_t(\vec{n}))| \leqslant \inf_{i \in [t]}\Vert f_i \Vert_{U^2}.\] Here $\Vert f \Vert_{U^2}$ denotes the \emph{Gowers $U^2$-norm} of $f$ and is defined by \[ \Vert f \Vert_{U^2}^4 := \E_{x, h_1, h_2 \in \Z/N\Z} f(x) \overline{f(x+h_1) f(x+h_2)}f(x + h_1 + h_2).\] Results of this type are proved using nothing more than a few applications of the Cauchy-Schwarz inequality, although the notation can get quite complicated. A simple example is given in the proof of \cite[Proposition 1.9]{green-montreal}. Foundational material on the Gowers norms (including, for example, a proof that they \emph{are} norms) may be found in \cite[Chapter 11]{tao-vu} or \cite[Chapter 5]{green-tao-longprimeaps}. This first step in the proof of Proposition \ref{prop55} leads from the hypothesis \eqref{hyp} to the conclusion that each $\Vert f_i \Vert_{U^2}$ is at least $\delta$. To obtain the conclusion \eqref{inverse-2}, then, it suffices to establish an \emph{Inverse Theorem} for the Gowers $U^2$-norm, stating that if the $U^2$-norm of $f$ is large then $f$ correlates with a linear phase. The proof of this result is so short we give it here. 
Suppose that $f : \Z/N\Z \rightarrow [-1,1]$ is a function with $\Vert f \Vert_{U^2} \geqslant \delta$. Define the Fourier transform $\hat{f} : \Z/N\Z \rightarrow \C$ by \[ \hat{f}(r) := \E_{n \in \Z/N\Z} f(n) e(-rn/N).\] Using orthogonality relations one may easily check that \begin{equation}\label{u2-l4} \Vert \hat{f} \Vert_4 := \big( \sum_{r\in \Z/N\Z} |\hat{f}(r)|^4 \big)^{1/4} = \Vert f \Vert_{U^2}.\end{equation} Thus \[ \Vert \hat{f} \Vert_{\infty}^2 \Vert \hat{f} \Vert_2^2 \geqslant \Vert \hat{f} \Vert_4^4 \geqslant \delta^4.\] But by Parseval's identity we have \[ \Vert \hat{f} \Vert_2 = \Vert f \Vert_2 \leqslant 1,\] and so we conclude that \[ \Vert \hat{f} \Vert_{\infty} \geqslant \delta^2.\] That is, there is some $r \in \Z/N\Z$ such that \begin{equation}\label{tagged-2} |\E_{n \in \Z/N\Z} f(n) e(-rn/N) | \geqslant \delta^2.\end{equation} We note that \eqref{u2-l4} also gives a converse result: \emph{if} there is some $r \in \Z/N\Z$ such that \[ |\E_{n \in \Z/N\Z} f(n)e(-rn/N)| \geqslant \delta\] then $\Vert f \Vert_{U^2} \geqslant \delta$. Proposition \ref{prop55} is a somewhat convincing way to formulate the idea that `harmonic analysis can handle systems of complexity one'. For a variety of reasons, however, it is not immediately applicable to an understanding of the quantity \begin{equation}\label{eq33} \sum \Lambda(\psi_1(\vec{n})) \dots \Lambda(\psi_t(\vec{n}))\end{equation} appearing in Dickson's Conjecture. One obvious point is that the average in Proposition \ref{prop55} is over $(\Z/N\Z)^d$, rather than over $[N]^d$ or over the lattice points inside a convex body. This is a purely technical distinction, and is dealt with in \cite[Appendix C]{green-tao-linearprimes}. A more serious problem arises from consideration of how Proposition \ref{prop55} might be applied. There is certainly no mileage to be gained from applying it in the most na\"{\i}ve way, that is to say with $f_1 = \dots = f_t = \Lambda$.
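The identity \eqref{u2-l4} is easy to confirm numerically; the following small sketch computes $\Vert f \Vert_{U^2}^4$ directly from the definition and compares it with $\sum_r |\hat{f}(r)|^4$ for a random complex-valued $f$:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 16
f = rng.uniform(-1, 1, N) + 1j * rng.uniform(-1, 1, N)

# U^2 norm to the fourth power, directly from the definition
u2_4 = np.mean([
    f[x] * np.conj(f[(x + h1) % N] * f[(x + h2) % N]) * f[(x + h1 + h2) % N]
    for x in range(N) for h1 in range(N) for h2 in range(N)
])

# normalised Fourier coefficients  \hat{f}(r) = E_n f(n) e(-rn/N)
fhat = np.fft.fft(f) / N

assert np.isclose(u2_4.real, np.sum(np.abs(fhat) ** 4))
assert abs(u2_4.imag) < 1e-12
```

Note that `np.fft.fft` uses the same sign convention $e(-rn/N)$ as the definition above, so only the $1/N$ normalisation needs adjusting.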
Indeed in that case condition \eqref{inverse-2} does hold (with $r = 0$), and so we cannot rule out the possibility of \eqref{hyp} holding (and, of course, Dickson's Conjecture predicts that \eqref{hyp} does hold much of the time). To eliminate the possibility of \eqref{inverse-2} holding with $r = 0$ we might split $\Lambda = 1 + (\Lambda -1)$, expand \eqref{eq33} as a sum of $2^t$ terms, and use Proposition \ref{prop55} to show that all of the terms except the one with $f_1 = \dots = f_t = 1$ are `negligible'. This also fails to work, for the simple reason that those other terms may not be negligible. Indeed if they were then the arithmetic constant in Dickson's Conjecture would be simply 1 rather than the product $\beta_{\infty}\prod_p \beta_p$ reflecting the distribution of primes $\md{2},\md{3}$ etc. In all of the work of the author and Tao on primes these issues are bypassed by means of the so-called $W$-trick. The trick is easiest to describe in the context of our paper \cite{green-tao-longprimeaps} establishing the existence of arbitrarily long progressions of primes. Look at the primes $2,3,5,7,\dots$. They are very irregularly distributed $\md{2}$. However if one deletes the element $2$ and rescales the remaining primes by the map $x \mapsto (x-1)/2$ one ends up with the sequence $1,2,3,5,6,8,\dots$. This is now quite regularly distributed $\md{2}$, because there are roughly the same number of primes congruent to $1\md{4}$ as there are primes congruent to $3\md{4}$. Furthermore if one finds a long arithmetic progression in the new sequence it translates immediately to a long progression in the primes. Unfortunately, however, this new sequence is not well-distributed $\md{3}$, as the only element congruent to $1\md{3}$ that it contains is $1$. However we could pick out the elements divisible by $3$, that is to say $3,6,9,15,\dots$ and divide through by $3$ to obtain the new sequence $1,2,3,5,\dots$. This is now well-distributed modulo both $2$ and $3$.
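The two rescaling steps just described can be carried out explicitly:

```python
def primes_up_to(n):
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for p in range(2, int(n**0.5) + 1):
        if sieve[p]:
            sieve[p*p::p] = [False] * len(sieve[p*p::p])
    return [p for p, b in enumerate(sieve) if b]

ps = primes_up_to(100)

# delete the element 2 and rescale by x -> (x - 1)/2
step1 = [(p - 1) // 2 for p in ps if p > 2]
assert step1[:6] == [1, 2, 3, 5, 6, 8]

# pick out the elements divisible by 3 and divide through by 3
step2 = [x // 3 for x in step1 if x % 3 == 0]
assert step2[:4] == [1, 2, 3, 5]
```

The assertions reproduce exactly the sequences $1,2,3,5,6,8,\dots$ and $1,2,3,5,\dots$ quoted above.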
The process may be continued with the primes up to some threshold $w(N)$. Choosing $w(N)$ to tend to infinity with $N$, the resulting sequences have no appreciable biases modulo small numbers. It is in fact quite easy to formalise this idea and to see how it may be applied to quite general problems such as \eqref{eq33}. The sequences that result from the sieving process we have just described are of the form \[ \{n : Wn + b \quad \mbox{is prime}\}\] where $W := 2 \times 3 \times \dots \times w(N)$ (hence the name `$W$-trick') and $(b,W) = 1$. For consideration of the weighted sum appearing in \eqref{eq33} it is rather natural to introduce the functions \[ \Lambda_{b,W}(n) := \frac{\phi(W)}{W}\Lambda(Wn + b),\] which have average value roughly 1. Roughly speaking, the sum in \eqref{eq33} may be split into $\phi(W)^t$ sums of the form \begin{equation}\label{eq34} \sum \Lambda_{b_1,W}(\tilde{\psi}_1(\vec{n})) \dots \Lambda_{b_t,W}(\tilde{\psi}_t(\vec{n})) \end{equation} (for the details of this decomposition see \cite[Chapter 5]{green-tao-linearprimes}). One might then attempt to evaluate each of these by splitting \[ \Lambda_{b_i,W} = 1 + (\Lambda_{b_i,W} - 1)\] and then applying Proposition \ref{prop55} to show that all of the terms except that with $f_1 = \dots = f_t = 1$ are negligible. Such an approach is promising, but there is one serious additional problem. Proposition \ref{prop55} only applied, as we stated it, to functions $f_i$ which are bounded by $1$. The functions $\Lambda_{b_i,W} - 1$ are certainly not bounded by one. Indeed, the harmonic analysis argument leading up to \eqref{tagged-2} relied on this boundedness in a rather essential way. It turns out that there \emph{is} a version of Proposition \ref{prop55} which applies to functions which are not necessarily bounded by 1; this is one of the main results of \cite{green-tao-linearprimes}, and the key idea was also an important component of our earlier work \cite{green-tao-longprimeaps}.
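That $\Lambda_{b,W}$ has average value roughly $1$ can be checked empirically for a small choice of $w(N)$; in the sketch below we take $w = 3$, so $W = 6$, with $b = 1$ and an (arbitrary, illustrative) cutoff $M$:

```python
from math import log, gcd

def mangoldt(n):
    # Lambda(n) = log p if n = p^k for a prime p, and 0 otherwise
    if n < 2:
        return 0.0
    for p in range(2, int(n**0.5) + 1):
        if n % p == 0:
            while n % p == 0:
                n //= p
            return log(p) if n == 1 else 0.0
    return log(n)

W, b = 6, 1                      # w(N) = 3, so W = 2 x 3; note gcd(b, W) = 1
phi_W = sum(1 for a in range(1, W + 1) if gcd(a, W) == 1)
M = 20000
avg = (phi_W / W) * sum(mangoldt(W * n + b) for n in range(1, M + 1)) / M
assert abs(avg - 1) < 0.1
```

The factor $\phi(W)/W$ compensates for the fact that the progression $Wn + b$ captures primes at $W/\phi(W)$ times the overall density.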
For simplicity let us think about the von Mangoldt function $\Lambda$ itself, rather than the `W-tricked' variants $\Lambda_{b,W}$. Let $R = N^{\gamma}$, where $\gamma \in (0,1)$ is a real number, and let us recall the discussion of \S \ref{sec1}, where we observed that \[ \frac{1}{(\log R)^2}\Lambda_R^2(n) = \big( \sum_{\substack{d | n \\ d \leqslant R}} \lambda^{\mbox{\scriptsize GY}}_d\big)^2\] is a sensible majorant for the characteristic function of the primes between $R$ and $N$, where \[ \lambda^{\mbox{\scriptsize GY}}_d := \mu(d) \frac{\log(R/d)}{\log R}.\] It follows that the function \[ \nu(n) := \frac{1}{\log N} \big( \sum_{\substack{d | n \\ d \leqslant R}} \lambda^{\mbox{\scriptsize GY}}_d\big)^2\] essentially majorizes some multiple $C_{\gamma} \Lambda(n)$ of the von Mangoldt function on the interval $[N]$ (where by \emph{essentially} we mean that there may be problems when $n \leqslant R$ or $n$ is a prime power, but these are highly unimportant exceptions). We note that the link we have just made to the work of Goldston, Pintz and Y{\i}ld{\i}r{\i}m is by no means artificial; indeed a crucial moment in the development of \cite{green-tao-longprimeaps} occurred when Andrew Granville drew our attention to \cite{gy}. The weights $\nu$ are much more flexible than $\Lambda$, since it is possible to evaluate such sums as \[ \E_{n \leqslant N} \nu(n) \nu(n+2)\] asymptotically by variants of the computations leading to \eqref{display}. As in those computations the key feature is that by choosing the parameter $\gamma$ to be small enough the number of terms which result from expanding out the sums over $d$ is small enough that error terms do not dominate. In fact (after an application of the $W$-trick discussed above and with an appropriate choice of $\gamma = \gamma(t,d)$) the weights are sufficiently flexible that they may be shown to satisfy two technical conditions called the \emph{linear forms} and \emph{correlation} conditions.
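The behaviour of these truncated divisor sums is easy to observe directly: a prime $n > R$ sees only the divisor $d = 1$, so the sum is exactly $1$, while a smooth composite receives a much smaller weight. A sketch with the illustrative toy choice $R = 10$:

```python
from math import log

def mobius(n):
    # Mobius function by trial division
    result, m = 1, n
    p = 2
    while p * p <= m:
        if m % p == 0:
            m //= p
            if m % p == 0:
                return 0          # square factor
            result = -result
        p += 1
    if m > 1:
        result = -result
    return result

def gy_sum(n, R):
    # sum over d | n, d <= R of mu(d) * log(R/d) / log(R)
    return sum(mobius(d) * log(R / d) / log(R)
               for d in range(1, R + 1) if n % d == 0)

R = 10
# a prime n > R has only the divisor d = 1 below R, so the sum is exactly 1
for n in [101, 103, 107]:
    assert gy_sum(n, R) == 1.0
# a smooth composite such as 105 = 3 * 5 * 7 receives a tiny weight
assert abs(gy_sum(105, R)) < 0.1
```

Squaring then gives a nonnegative weight which is $1$ on primes in $(R, N]$, exactly as the majorisation property requires.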
These conditions were introduced in \cite[\S 3]{green-tao-longprimeaps}, and the variants of these conditions appropriate for a discussion of Dickson's Conjecture are given in \cite[\S 6]{green-tao-linearprimes}. As a result of this the weight $\nu$ qualifies to be called \emph{pseudorandom}. As the reader may have guessed from the above discussion, it is possible to prove a version of Proposition \ref{prop55} in which the condition that the $f_i$ take values in $[-1,1]$ is relaxed to a condition $|f_i(x)| \leqslant \nu(x)$, where $\nu$ is a pseudorandom weight function. The first step is to establish `generalized von Neumann'-type results in which the functions $f_i$ are bounded by $\nu$, rather than just by $1$. Specifically, one deduces from an assumption \eqref{hyp} that each of the Gowers norms $\Vert f_i\Vert_{U^2}$ is somewhat large. This is once again accomplished by several applications of the Cauchy-Schwarz inequality, but the presence of the weight $\nu$ makes the details even more complicated. For a fully worked-out example, see \cite[\S 5]{green-tao-longprimeaps}. Once this is done one must establish an inverse theorem for the Gowers $U^2$-norm for functions $f$ with $|f(x)| \leqslant \nu(x)$. As we remarked, some new ideas are required here since the harmonic analysis argument leading up to \eqref{tagged-2} breaks down if $f$ is not bounded by one. In fact this inverse theorem is deduced from the version with $|f_i(x)| \leqslant 1$ by means of the following decomposition result, which is \cite[Proposition 10.3]{green-tao-linearprimes}. \begin{lemma}[Decomposition]\label{decomp} Suppose that $\nu : \Z/N\Z \rightarrow \R_{\geqslant 0}$ is a pseudorandom measure, and that $f : \Z/N\Z \rightarrow \R$ is a function with $|f(x)| \leqslant \nu(x)$ for all $x$. Then we may decompose \[ f = f_1 + f_2\] where $|f_1(x)| \leqslant 1$ for all $x$ and $\Vert f_2 \Vert_{U^2} = o(1)$ as $N \rightarrow \infty$.
\end{lemma} We shall say almost nothing about the proof of this result, but it is also one of the key ingredients in \cite{green-tao-longprimeaps}. For a discussion, see \cite[\S 6]{tao-coates}. For a broader discussion of the `energy-increment' strategy used in the proof, which appears in many different contexts in additive combinatorics, see \cite{tao-focs}. Suppose that $|f(x)| \leqslant \nu(x)$ and that $\Vert f \Vert_{U^2} \geqslant \delta$. Applying Lemma \ref{decomp}, we see that $\Vert f_1 \Vert_{U^2} \geqslant \delta/2$. By the inverse theorem for the $U^2$-norm of bounded functions, there is some $r \in \Z/N\Z$ such that \[ |\E_{n \in \Z/N\Z} f_1(n) e(-rn/N)| \geqslant \delta^2/4.\] However by the converse of the inverse theorem and the fact that $\Vert f_2 \Vert_{U^2} = o(1)$ we see that \[ |\E_{n \in \Z/N\Z} f_2(n) e(-rn/N)| = o(1)\] (note that the proof of this converse result did not require $f_2$ to be bounded by $1$). By the triangle inequality it follows that \[ |\E_{n \in \Z/N\Z} f(n) e(-rn/N)| \geqslant \delta^2/4 - o(1) \geqslant \delta^2/8,\] which concludes the `transference' of the inverse theorem for the $U^2$-norm from functions bounded by 1 to functions bounded by $\nu$. Let us take stock of our position. We have indicated a proof of Proposition \ref{prop55} when the functions $f_i$ are bounded by a pseudorandom weight $\nu$, a fairly robust realisation of the principle that harmonic analysis governs the behaviour of systems of complexity one. We have split the von Mangoldt function $\Lambda$ into functions $\Lambda_{b,W}$, and rewritten the sum which interests us, namely \eqref{eq33}, as a sum of expressions of the form \eqref{eq34}. To evaluate these we effect the further splitting $\Lambda_{b,W} = 1 + (\Lambda_{b,W} - 1)$, and hope to show that any sum \[ \sum f_1(\tilde \psi_1(\vec{n})) \dots f_t(\tilde \psi_t(\vec{n}))\] in which at least one $f_i$ equals $\Lambda_{b,W} - 1$ is negligible.
All of these functions are essentially dominated by some pseudorandom weight $\nu$ of the type considered by Goldston and Y{\i}ld{\i}r{\i}m, and so by our robust version of Proposition \ref{prop55} it suffices to rule out the possibility that $\Lambda_{b,W} - 1$ correlates with a linear phase function; that is to say, we must establish that \begin{equation}\label{no-linear-correlations} |\E_{n \leqslant N} (\Lambda_{b,W}(n) - 1) e(-rn/N)| = o(1).\end{equation} This estimate may be established in a fairly classical fashion using the ideas of Hardy, Littlewood and Vinogradov. In \cite{green-tao-linearprimes} we proceed by first effecting some further decompositions, in keeping with our general philosophy that problems should be `transferred' to a situation where functions are bounded by 1. We skip the details (which may be found in \cite[\S 12]{green-tao-linearprimes}) and merely state that \eqref{no-linear-correlations} can be deduced from the estimate \begin{equation}\label{mn2} |\E_{n \leqslant N} \mu(n)e(-rn/N)| \ll_A \log^{-A} N, \end{equation} for some suitably large $A$. That such an estimate holds (with \emph{any} $A$) is a well-known result of Davenport \cite{davenport}. To prove it one may use the method of Type I/Type II sums discussed in \S \ref{sec2}. In fact, Proposition \ref{vinogradov} is true with the von Mangoldt function $\Lambda$ replaced by the M\"obius function $\mu$. The proof is almost the same, relying on a decomposition of $\mu$ which is very similar to Vaughan's decomposition of $\Lambda$. The remarks following the statement of Proposition \ref{vinogradov} are particularly relevant here. We can hope that the method of Type I/II sums will be effective in bounding \eqref{mn2} unless $r/N$ is approximately $a/q$, for some small $q$ (that is, the method ought to be successful in the `minor arc' case). Luckily in the `major arc' case one may approximate $e(-rn/N)$ by the sum of a few Dirichlet characters to modulus $q$.
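The cancellation asserted in \eqref{mn2} is visible even at modest heights. The following sketch sieves $\mu$ up to $10^5$ and evaluates the normalised exponential sum at $r = 1$; the specific parameters and the crude threshold are illustrative only:

```python
import numpy as np

N = 100000
# sieve of Eratosthenes, then build the Mobius function on [1, N]
sieve = np.ones(N + 1, dtype=bool)
sieve[:2] = False
for p in range(2, int(N**0.5) + 1):
    if sieve[p]:
        sieve[p*p::p] = False
primes = np.flatnonzero(sieve)

mu = np.ones(N + 1, dtype=np.int8)
for p in primes:
    mu[p::p] *= -1          # flip sign for each prime factor
    mu[p*p::p*p] = 0        # kill multiples of p^2

n = np.arange(1, N + 1)
r = 1
S = np.mean(mu[1:] * np.exp(-2j * np.pi * r * n / N))
assert abs(S) < 0.1         # Davenport's theorem promises o(1)
```

The observed magnitude is far below the crude threshold; the point of Davenport's theorem is that this holds uniformly in $r$ with power-of-log savings.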
The resulting sums $\sum_{n \leqslant N} \mu(n)\chi(n)$ may then be estimated using standard techniques of analytic number theory together with information about the non-existence of zeros near $\Re s = 1$ of the $L$-functions $L(s,\chi)$: for details see \cite[Prop 5.29]{iwaniec-kowalski}. This concludes our discussion of a proof of Dickson's Conjecture for systems of complexity one. As we have remarked, this result could also be obtained by a more classical application of the circle method. However it turns out that large parts of our discussion adapt very painlessly to systems of complexity $s > 1$, whereas this does not seem to be the case for the classical techniques. One has, for example, the following bound of generalized von Neumann type: \begin{equation}\label{eq47} |\E_{\vec{n} \in (\Z/N\Z)^d} f_1(\psi_1(\vec{n})) \dots f_t(\psi_t(\vec{n}))| \leqslant \inf_{i \in [t]}\Vert f_i \Vert_{U^{s+1}}\end{equation} for systems $\psi_1,\dots,\psi_t$ of complexity $s$, where $\Vert f \Vert_{U^{k}}$ is the \emph{Gowers $U^k$-norm} of $f$ and is defined by \[ \Vert f \Vert_{U^k}^{2^k} := \E_{x,h_1,\dots,h_k} \prod_{\omega_1,\dots,\omega_k \in \{0,1\}} f(x + \omega_1 h_1 + \dots + \omega_k h_k)\] (with complex conjugates being taken of the terms with an odd number of $h_i$s). This is true even if the functions $f_i$ are only bounded by a pseudorandom weight $\nu$. A statement very close to \eqref{eq47} is proved in \cite[Appendix C]{green-tao-linearprimes}. The decomposition result, Lemma \ref{decomp}, also adapts in a fairly painless manner. The first really serious issue that we encounter is in finding a generalization of the inverse theorem for the $U^2$-norm. If $f : \Z/N\Z \rightarrow [-1,1]$ is a function such that $\Vert f \Vert_{U^3} \geqslant \delta$, what can be said? The most immediate difficulty with attacking this statement is the lack of a suitable formula generalizing the relation $\Vert f \Vert_{U^2} = \Vert \hat{f}\Vert_4$.
A more decisive problem is revealed by consideration of the function $f(n) = e(n^2/N)$. One may check that $\Vert f \Vert_{U^3} = 1$ (this is essentially a manifestation of the fact that the third derivative of a quadratic is zero). With somewhat more effort one may check that this $f$ does not have substantial correlation with any \emph{linear} exponential $e(rn/N)$. Thus an inverse theorem for the $U^3$-norm must encode some kind of `higher harmonic analysis' which takes account of these quadratic phases as well as just linear ones. The situation is complicated still further by the existence of examples such as $f(n) = e(n[n\sqrt{2}]/N)$, for which $\Vert f \Vert_{U^3}$ is large, but for which $f$ does not even exhibit genuine quadratic behaviour. A full discussion of examples related to these may be found in \cite{green-icm}. It turns out that these two examples, $f(n) = e(n^2/N)$ and $f(n) = e(n[n\sqrt{2}]/N)$, can both be interpreted as objects living on a $2$-\emph{step nilmanifold}, that is to say a quotient $G/\Gamma$ where $G$ is a 2-step nilpotent Lie group and $\Gamma$ is a discrete and cocompact subgroup.
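Both assertions about $f(n) = e(n^2/N)$ can be confirmed numerically. The sketch below is my own check, using the standard inductive description of the $U^k$ norms, $\Vert f \Vert_{U^k}^{2^k} = \E_h \Vert f \cdot \overline{f(\cdot + h)} \Vert_{U^{k-1}}^{2^{k-1}}$, rather than anything from the papers under discussion; the choice $N = 17$ (an odd prime, so that the relevant Gauss sums all have magnitude $\sqrt{N}$) is arbitrary.

```python
import numpy as np

def gowers_norm(f, k):
    """U^k norm on Z/NZ via the inductive formula
    ||f||_{U^k}^{2^k} = E_h || f * conj(f(.+h)) ||_{U^{k-1}}^{2^{k-1}},
    with base case ||f||_{U^1} = |E_n f(n)|."""
    N = len(f)
    if k == 1:
        return abs(np.mean(f))
    total = 0.0
    for h in range(N):
        g = f * np.conj(np.roll(f, -h))   # multiplicative derivative of f
        total += gowers_norm(g, k - 1) ** (2 ** (k - 1))
    return (total / N) ** (2.0 ** -k)

N = 17
n = np.arange(N)
f = np.exp(2j * np.pi * n * n / N)        # f(n) = e(n^2/N)
corr = np.max(np.abs(np.fft.fft(f))) / N  # largest correlation with a linear phase
print(gowers_norm(f, 3))                  # = 1: third differences of n^2 vanish
print(gowers_norm(f, 2))                  # = N^(-1/4), small
print(corr)                               # = N^(-1/2): no substantial linear correlation
```

The first value is exactly $1$, since each multiplicative derivative of a quadratic phase is a linear phase and the triple derivative is constant; the other two decay with $N$, illustrating that largeness of $\Vert f \Vert_{U^3}$ is invisible to both the $U^2$-norm and ordinary Fourier analysis.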
The archetypal example is the Heisenberg nilmanifold, in which \[ G = \left(\begin{smallmatrix} 1 & \R & \R \\ 0 & 1 & \R \\ 0 & 0 & 1\end{smallmatrix}\right) \quad \mbox{and} \quad \Gamma = \left(\begin{smallmatrix} 1 & \Z & \Z \\ 0 & 1 & \Z \\ 0 & 0 & 1 \end{smallmatrix}\right).\] Quadratic polynomials appear quite naturally in such a group $G$, for instance in the computation \[ \left(\begin{smallmatrix} 1 & \alpha & \beta \\ 0 & 1 & \gamma \\ 0 & 0 & 1\end{smallmatrix} \right)^n = \left(\begin{smallmatrix} 1 & n\alpha & n\beta + \frac{1}{2}n(n-1)\alpha \gamma \\ 0 & 1 & n\gamma \\ 0 & 0 & 1 \end{smallmatrix} \right).\] Taking such a sequence of matrices and postmultiplying by elements of $\Gamma$ so that all of the entries lie in $[-1/2,1/2)$ (that is, reducing to a fundamental domain for the action of $\Gamma$ on $G$), one soon sees the appearance of the more complicated `bracket quadratics' $\gamma n[\alpha n]$ too. A fuller discussion, with motivating remarks, may be found in \cite{green-icm}. What is more, correlation with an example arising in this setting is the \emph{only} way in which a function $f$ can have large $U^3$-norm. This is the inverse theorem for the $U^3$-norm, proved in \cite{green-tao-u3inverse}, building on ideas of Gowers \cite{gowers-4aps}. It is conjectured that an analogous result holds for the $U^{s+1}$-norm in general, a conjecture we refer to as the \emph{Gowers inverse conjecture} $\mbox{GI}(s)$. \begin{conjecture}[Gowers inverse conjecture $\mbox{GI}(s)$]\label{gis} Suppose that $f : \Z/N\Z \rightarrow [-1,1]$ is a function and that $\Vert f \Vert_{U^{s+1}} \geqslant \delta$. Then there is an $s$-step nilmanifold $G/\Gamma$, a Lipschitz function $F : G/\Gamma \rightarrow [-1,1]$ and elements $g \in G$, $x \in G/\Gamma$ such that \[ |\E_{n \leqslant N} f(n) F(g^n x\Gamma)| \gg_{\delta} 1.\] The dimension and complexity of $G/\Gamma$ and the Lipschitz constant of $F$ are all $O_{\delta}(1)$.
\end{conjecture} The function $n \mapsto F(g^n x \Gamma)$ is called an $s$-step nilsequence. To state this conjecture properly one must of course define the notion of `complexity', and also assign a metric to $G/\Gamma$ so that the notion of Lipschitz constant may be properly formalised. A version of the conjecture was first formulated in \cite[\S 8]{green-tao-linearprimes}. There, a metric on $G/\Gamma$ was assigned quite arbitrarily. With the benefit of hindsight it is probably better to proceed as in our more recent paper \cite[\S 2]{green-tao-nilratner}, in which the notions of `complexity' and `metric' are both developed from the notion of a \emph{Mal'cev basis} for $G/\Gamma$. The precise statements are not important in order to understand the philosophy which lies behind Conjecture \ref{gis} and its interplay with the generalized von Neumann theorem \eqref{eq47}: it seems as though the correct `harmonics' with which to study systems $\{\psi_1,\dots,\psi_t\}$ of complexity $s$ are the $s$-step nilsequences. Conjecture \ref{gis} is known when $s = 1$; indeed, we established it earlier in our discussion. Note that the linear exponentials $n \mapsto e(-rn/N)$ may easily be interpreted as $1$-step nilsequences living on the nilmanifold $\R/\Z$. As we stated, it is also known when $s = 2$, this being the main result of \cite{green-tao-u3inverse}. Tao and I hope to report progress on the general case $s \geqslant 3$ in the near future. Assuming Conjecture \ref{gis}, one may start to work through the proof of Dickson's Conjecture in the complexity 1 case and attempt to generalise it to the complexity $s$ situation. Replacing occurrences of linear exponentials $e(-rn/N)$ by $s$-step nilsequences $F(g^n x\Gamma)$, the argument runs with remarkably few changes. One fairly significant extra difficulty occurs in the proof of Lemma \ref{decomp}, where a `converse' to the inverse conjecture is required.
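Incidentally, before turning to that difficulty, the Heisenberg power formula displayed earlier is easy to confirm by direct multiplication. The following sketch is my own check, not part of the original text; it verifies, in exact rational arithmetic, that the $(1,3)$ entry of the $n$th power is $n\beta + \frac{1}{2}n(n-1)\alpha\gamma$, with arbitrary sample values for $\alpha,\beta,\gamma$.

```python
from fractions import Fraction

def unitri(a, b, c):
    """Upper unitriangular 3x3 matrix with (1,2)-entry a, (1,3)-entry b,
    (2,3)-entry c, as in the Heisenberg group."""
    return [[1, a, b], [0, 1, c], [0, 0, 1]]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

alpha, beta, gamma = Fraction(2, 7), Fraction(3, 5), Fraction(1, 3)
M = unitri(alpha, beta, gamma)
P = unitri(0, 0, 0)                     # the identity matrix
for n in range(1, 11):
    P = matmul(P, M)                    # now P = M^n
    assert P[0][1] == n * alpha and P[1][2] == n * gamma
    # (1,3) entry: n*beta + (1/2) n (n-1) alpha*gamma, as displayed above
    assert P[0][2] == n * beta + Fraction(n * (n - 1), 2) * alpha * gamma
```

The quadratic term arises because each multiplication by $M$ adds $\beta$ plus the current $(1,2)$-entry times $\gamma$ to the corner, and those contributions sum to $\alpha\gamma(0 + 1 + \dots + (n-1))$.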
Namely, one needs to know that if \[ |\E_{n \leqslant N} f(n) F(g^n x\Gamma)| \geqslant \delta\] for some $s$-step nilsequence $F(g^nx\Gamma)$, then \[ \Vert f \Vert_{U^{s+1}} \gg_{\delta} 1,\] where the implied constant may also depend on the complexity of $G/\Gamma$ and on the Lipschitz constant of $F$. Such a result is already present in \cite[\S 14]{green-tao-u3inverse}, and a somewhat more conceptual proof is given in \cite[Appendix E]{green-tao-linearprimes}. Both of these appendices were based on extensive conversations with members of the ergodic theory community. By far the most serious obstacle is the last one, where we reduce to establishing a generalization of \eqref{mn2} for nilsequences. In other words we seek the bound \[ |\E_{n \leqslant N} \mu(n) F(g^n x\Gamma)| \ll_A \log^{-A} N\] for all $A > 0$, where $F(g^nx\Gamma)$ is an $s$-step nilsequence arising from some $s$-step nilmanifold $G/\Gamma$, and the implied constant may also depend on the complexity of $G/\Gamma$ and the Lipschitz constant of $F$. This bound is referred to as the M\"obius and Nilsequences Conjecture $\mbox{MN}(s)$. As we remarked, the conjecture $\mbox{MN}(1)$ was essentially established by Davenport. The case $\mbox{MN}(2)$ was obtained in the paper \cite{green-tao-u3mobius}. The general case $\mbox{MN}(s)$ has recently been resolved by the authors and will appear in the short paper \cite{green-tao-ukmobius}; the key technical ingredient in that proof is the main result of \cite{green-tao-nilratner}, which may be thought of as a kind of generalization of the major-minor arc decomposition to nilsequences of arbitrary step. The method of Type I/II sums is crucial once more. The reader wishing to study any of this work might find the following summary of the papers helpful.
Let me once again emphasise that the purpose of this article has been to guide potential readers through the papers \cite{green-tao-longprimeaps,green-tao-u3inverse,green-tao-u3mobius,green-tao-linearprimes,green-tao-nilratner} and \cite{green-tao-ukmobius}; we do not intend to suggest that there is no other work going on in the subject!
\begin{itemize}
\item \cite{green-tao-longprimeaps}, \emph{The primes contain arbitrarily long arithmetic progressions}: independent of the other papers, except that it contains the proof of the Decomposition Lemma \ref{decomp}.
\item \cite{green-tao-u3inverse}, \emph{An inverse theorem for the Gowers $U^3$-norm, with applications}: proof of the $\mbox{GI}(2)$ conjecture.
\item \cite{green-tao-u3mobius}, \emph{Quadratic Uniformity of the M\"obius function}: proof of the $\mbox{MN}(2)$ conjecture; to be largely superseded by \cite{green-tao-ukmobius} but, unlike that paper, can be understood without \cite{green-tao-nilratner}.
\item \cite{green-tao-linearprimes}, \emph{Linear Equations in primes}: proof that the $\mbox{GI}(s)$ and $\mbox{MN}(s)$ conjectures together imply Dickson's conjecture for systems $\{\psi_1,\dots,\psi_t\}$ of complexity $s$. The discussion in this article has been largely an exposition of some of the ideas in this paper.
\item \cite{green-tao-nilratner}, \emph{The Quantitative Behaviour of Polynomial Orbits on Nilmanifolds}: the key technical ingredient for studying nilsequences.
\item \cite{green-tao-ukmobius}, \emph{The M\"obius and Nilsequences Conjectures}: proof of the $\mbox{MN}(s)$ conjecture for all $s$; heavily reliant on \cite{green-tao-nilratner}.
\end{itemize}
\providecommand{\bysame}{\leavevmode\hbox to3em{\hrulefill}\thinspace} \providecommand{\MR}{\relax\ifhmode\unskip\space\fi MR } \providecommand{\MRhref}[2]{ \href{http://www.ams.org/mathscinet-getitem?mr=#1}{#2} } \providecommand{\href}[2]{#2} \end{document}
\begin{document} \makeatletter{} \makeatletter{} \newcommand {\myclass} [1] {\ensuremath{\textsc{#1}}} \newcommand{\StaClass}[1]{\myclass{#1}\xspace} \newcommand{\DynClass{$\class$}lass}[1]{\myclass{Dyn#1}\xspace} \newcommand{\dDynClass{$\class$}lass}[1]{\myclass{$\Delta$-Dyn#1}\xspace} \newcommand {\myproblem} [1] {\textsc{#1}} \newcommand{\hspace{0mm}}{\hspace{5mm}} \newcommand {\problemdescr} [3] { \def\ensuremath{\mathbb{N}}ame{#1} \def\mtext{In}put{#2} \def\ensuremath{\mathbb{Q}}uestion{#3} \hspace{0mm}\begin{tabular}{r p{columnWidth}r} \textit{Problem:} & \myproblem{\ensuremath{\mathbb{N}}ame} \\ \textit{Input:} & \mtext{In}put \\ \textit{Question:} & \ensuremath{\mathbb{Q}}uestion \end{tabular} } \newcommand {\ensuremath{\calQ}descr} [3] { \def\ensuremath{\mathbb{N}}ame{#1} \def\mtext{In}put{#2} \def\ensuremath{\mathbb{Q}}uestion{#3} \hspace{0mm}\begin{tabular}{r p{columnWidth}r} \textit{Query:} & \myproblem{\ensuremath{\mathbb{N}}ame} \\ \textit{Input:} & \mtext{In}put \\ \textit{Question:} & \ensuremath{\mathbb{Q}}uestion \end{tabular} } \newcommand {\dynproblemdescr} [4] { \def\ensuremath{\mathbb{N}}ame{#1} \def\mtext{In}put{#2} \def#3{#3} \def\ensuremath{\mathbb{Q}}uestion{#4} \hspace{0mm}\begin{tabular}{r p{columnWidth}r} \textit{Query:} & \myproblem{\ensuremath{\mathbb{N}}ame} \\ \textit{Input:} & \mtext{In}put \\ \textit{Question:} & \ensuremath{\mathbb{Q}}uestion \end{tabular} } \newcommand{\dynProbDescr}[4]{\dynproblemdescr{#1}{#2}{#3}{#4}} \newcommand {\problem}[1] {\myproblem{#1}} \newcommand{\dynProb}[1] {\myproblem{Dyn(#1)}} \newcommand{\calC}{\calC} \newcommand {\TIME} {\myclass{TIME}} \newcommand {\DTIME} {\myclass{DTIME}} \newcommand {\ensuremath{\mathbb{N}}TIME} {\myclass{NTIME}} \newcommand {\ATIME} {\myclass{ATIME}} \newcommand {\SPACE} {\myclass{SPACE}} \newcommand {\DSPACE} {\myclass{DSPACE}} \newcommand {\ensuremath{\mathbb{N}}SPACE} {\myclass{NSPACE}} \newcommand {\coNSPACE} {\myclass{coNSPACE}} \newcommand {\LOGCFL} {\myclass{LOGCFL}} 
\newcommand {\LOGDCFL} {\myclass{LOGDCFL}} \newcommand {\LOGSPACE} {\myclass{LOGSPACE}} \newcommand {\ensuremath{\mathbb{N}}LOGSPACE} {\myclass{NLOGSPACE}} \newcommand {\calCL} {\myclass{L}} \newcommand {\ensuremath{\mathbb{N}}L} {\myclass{NL}} \newcommand {\coNL} {\myclass{coNL}} \renewcommand {\P} {\myclass{P}} \newcommand {\myP} {\myclass{P}} \newcommand {\PTIME} {\myclass{PTIME}} \newcommand {\ensuremath{\mathbb{N}}P} {\myclass{NP}} \newcommand {\ensuremath{\mathbb{N}}PC} {\myclass{NPC}} \newcommand {\PH} {\myclass{PH}} \newcommand {\coNP} {\myclass{coNP}} \newcommand {\ensuremath{\mathbb{N}}PSPACE} {\myclass{NPSPACE}} \newcommand {\PSPACE} {\myclass{PSPACE}} \newcommand {\IP} {\myclass{IP}} \newcommand {\POLYLOGSPACE} {\myclass{POLYLOGSPACE}} \newcommand {\DET} {\myclass{DET}} \newcommand {\EXP} {\myclass{EXP}} \newcommand {\ensuremath{\mathbb{N}}EXP} {\myclass{NEXP}} \newcommand {\EXPTIME} {\myclass{EXPTIME}} \newcommand {\TWOEXPTIME} {\myclass{2-EXPTIME}} \newcommand {\TWOEXP} {\myclass{2-EXP}} \newcommand {\ensuremath{\mathbb{N}}EXPTIME} {\myclass{NEXPTIME}} \newcommand {\coNEXPTIME} {\myclass{coNEXPTIME}} \newcommand {\EXPSPACE} {\myclass{EXPSPACE}} \newcommand {\ensuremath{\mathbb{R}}P} {\myclass{RP}} \newcommand {\ensuremath{\mathbb{R}}L} {\myclass{RL}} \newcommand {\coRP} {\myclass{coRP}} \newcommand {\ZPP} {\myclass{ZPP}} \newcommand {\BPP} {\myclass{BPP}} \newcommand {\PP} {\myclass{PP}} \newcommand {\ensuremath{\mathbb{N}}C} {\myclass{NC}} \newcommand {\SAC} {\myclass{SAC}} \newcommand {\ACC} {\myclass{ACC}} \newcommand {\tc} {\myclass{TC}} \newcommand {\PPoly}{\myclass{\mbox{P}/\mbox{Poly}}} \newcommand {\StaClass{FO}arb} {\myclass{FO(arb)}} \newcommand {\ensuremath{\mathbb{N}}LIN} {\myclass{NLIN}} \newcommand {\DLIN} {\myclass{DLIN}} \newcommand {\APTIME} {\myclass{APTIME}} \newcommand {\ALOGSPACE} {\myclass{ALOGSPACE}} \newcommand{\StaClass{FO}}{\StaClass{FO}} \newcommand{\MSO}[1][\mathbb{Q}]{\StaClass{MSO}} 
\newcommand{\StaClass{$\exists$MSO}}{\StaClass{$\exists$MSO}} \newcommand{\StaClass{QF}O}[1][\mathbb{Q}]{\StaClass{\ensuremath{#1}FO}} \newcommand{\cQFO}[1][\mathbb{Q}]{\StaClass{\ensuremath{\overline{#1}}FO}} \newcommand{\StaClass{QF}O[\exists^*]}{\StaClass{QF}O[\exists^*]} \newcommand{\StaClass{QF}O[\forall^*]}{\StaClass{QF}O[\forall^*]} \newcommand{\StaClass{$\forall/\exists$FO}}{\StaClass{$\forall/\exists$FO}} \newcommand{\CQ}[1][]{\StaClass{CQ}} \newcommand{\UCQ}[1][]{\StaClass{UCQ}} \newcommand{\CQneg}[1][]{\StaClass{CQ\ensuremath{^{\neg}}}} \newcommand{\UCQneg}[1][]{\StaClass{UCQ\ensuremath{^{\neg}}}} \newcommand{\StaClass{Prop}}{\StaClass{Prop}} \newcommand{\StaClass{QF}}{\StaClass{QF}} \newcommand{\StaClass{Prop}CQ}{\StaClass{PropCQ}} \newcommand{\StaClass{Prop}UCQ}{\StaClass{PropUCQ}} \newcommand{\StaClass{Prop}CQneg}{\StaClass{PropCQ{\ensuremath{^{\neg}}}}} \newcommand{\StaClass{Prop}UCQneg}{\StaClass{PropUCQ{\ensuremath{^{\neg}}}}} \newcommand{\neg}{\neg} \newcommand{\DynClass{$\class$}lass{TC}}{\DynClass{$\class$}lass{TC}} \newcommand{\DynClass{$\class$}lass{Prop}}{\DynClass{$\class$}lass{Prop}} \newcommand{\DynClass{$\class$}lass{Prop}IA}[2]{\DynClass{$\class$}lass{Prop}(#1\text{-in},#2\text{-aux})} \newcommand{\DynClass{$\class$}lass{Prop}A}[1]{\DynClass{$\class$}lass{Prop}(#1\text{-aux})} \newcommand{\DynClass{$\class$}lass{Prop}I}[1]{\DynClass{$\class$}lass{Prop}(#1\text{-in})} \newcommand{\DynClass{$\class$}lass{Projections}}{\DynClass{$\class$}lass{Projections}} \newcommand{\DynClass{$\class$}lass{QF}}{\DynClass{$\class$}lass{QF}} \newcommand{\DynClass{$\class$}lass{FO}}{\DynClass{$\class$}lass{FO}} \newcommand{\myclass{DI-DynFO}\xspace}{\myclass{DI-DynFO}\xspace} \newcommand{\DynClass{$\class$}lass{FO}IA}[2]{\DynClass{$\class$}lass{FO}(#1\text{-in},#2\text{-aux})} \newcommand{\DynClass{$\class$}lass{FO}A}[1]{\DynClass{$\class$}lass{FO}(#1\text{-aux})} \newcommand{\DynClass{$\class$}lass{FO}I}[1]{\DynClass{$\class$}lass{FO}(#1\text{-in})} 
\newcommand{\DynClass{$\class$}lass{FO}pos}{\DynClass{$\class$}lass{FO$^{\wedge, \vee}$}} \newcommand{\DynClass{$\class$}lass{FO}and}{\DynClass{$\class$}lass{FO$^{\wedge}$}} \newcommand{\DynClass{$\class$}}{\DynClass{$\class$}lass{$\calC$}} \newcommand{\DynClass{UCQ}}{\DynClass{$\class$}lass{UCQ}} \newcommand{\DynClass{$\class$}Q}{\DynClass{$\class$}lass{CQ}} \newcommand{\DynClass{UCQ}neg}{\DynClass{$\class$}lass{UCQ$^\neg$}} \newcommand{\DynClass{$\class$}Qneg}{\DynClass{$\class$}lass{CQ$^\neg$}} \newcommand{\DynClass{$\class$}QPM}{\DynClass{$\class$}Qneg} \newcommand{\DynClass{$\overline{\mathbb{Q}}$FO}}{\DynClass{$\class$}lass{$\overline{\mathbb{Q}}$FO}} \newcommand{\DynClass{$\class$}lass{QF}O}[1][\mathbb{Q}]{\DynClass{$\class$}lass{\StaClass{QF}O[#1]}} \newcommand{\DynQFO[\exists^*]}{\DynClass{$\class$}lass{QF}O[\exists^*]} \newcommand{\DynQFO[\forall^*]}{\DynClass{$\class$}lass{QF}O[\forall^*]} \newcommand{\DynClass{$\forall/\exists$FO}}{\DynClass{$\class$}lass{$\forall/\exists$FO}} \newcommand{\DynClass{PropCQ}}{\DynClass{$\class$}lass{PropCQ}} \newcommand{\DynAND}{\DynClass{PropCQ}} \newcommand{\DynClass{$\class$}lass{Prop}CQ}{\DynClass{PropCQ}} \newcommand{\DynClass{$\class$}lass{Prop}Pos}{\DynClass{$\class$}lass{PropUCQ}} \newcommand{\DynClass{$\class$}lass{Prop}AO}{\DynClass{$\class$}lass{Prop}Pos} \newcommand{\DynClass{$\class$}lass{Prop}UCQ}{\DynClass{$\class$}lass{Prop}Pos} \newcommand{\DynANDNeg}{\DynClass{$\class$}lass{PropCQ{\ensuremath{^{\neg}}}}} \newcommand{\DynClass{$\class$}lass{Prop}CQneg}{\DynANDNeg} \newcommand{\DynClass{$\class$}lass{Prop}UCQneg}{\DynClass{$\class$}lass{PropUCQ{\ensuremath{^{\neg}}}}} \newcommand{\DynClass{Or{\ensuremath{^{\mneg}}}}}{\DynClass{$\class$}lass{Or{\ensuremath{^{\neg}}}}} \newcommand{\dDynClass{$\class$}lass{Prop}}{\dDynClass{$\class$}lass{Prop}} \newcommand{\dDynClass{$\class$}lass{Prop}Pos}{\dDynClass{$\class$}lass{PropUCQ}} \newcommand{\dDynPropPos}{\dDynClass{$\class$}lass{Prop}Pos} 
\newcommand{\dDynClass{$\class$}lass{QF}}{\dDynClass{$\class$}lass{QF}} \newcommand{\dDynClass{$\class$}lass{FO}}{\dDynClass{$\class$}lass{FO}} \newcommand{\dDynClass{$\class$}lass{FO}pos}{\dDynClass{$\class$}lass{FO$^{\wedge, \vee}$}} \newcommand{\dDynClass{$\class$}lass{FO}and}{\dDynClass{$\class$}lass{FO$^{\wedge}$}} \newcommand{\dDynClass{$\class$}}{\dDynClass{$\class$}lass{$\calC$}} \newcommand{\dDynClass{UCQ}}{\dDynClass{$\class$}lass{UCQ}} \newcommand{\dDynClass{$\class$}Q}{\dDynClass{$\class$}lass{CQ}} \newcommand{\dDynClass{UCQ}neg}{\dDynClass{$\class$}lass{UCQ$^\neg$}} \newcommand{\dDynClass{$\class$}Qneg}{\dDynClass{$\class$}lass{CQ$^\neg$}} \newcommand{\dDynClass{$\class$}QPM}{\dDynClass{$\class$}Qneg} \newcommand{\dDynClass{$\class$}lass{QF}O}[1][\mathbb{Q}]{\dDynClass{$\class$}lass{\StaClass{QF}O[#1]}} \newcommand{\dDynQFO[\exists^*]}{\dDynClass{$\class$}lass{QF}O[\exists^*]} \newcommand{\dDynQFO[\forall^*]}{\dDynClass{$\class$}lass{QF}O[\forall^*]} \newcommand{\dDynClass{$\forall/\exists$FO}}{\dDynClass{$\class$}lass{$\forall/\exists$FO}} \newcommand{\dDynClass{PropCQ}}{\dDynClass{$\class$}lass{Prop}CQ} \newcommand{\dDynAND}{\dDynClass{PropCQ}} \newcommand{\dDynClass{$\class$}onj}{\dDynClass{$\class$}lass{Conj}} \newcommand{\dDynClass{$\class$}lass{Prop}AO}{\dDynClass{$\class$}lass{Prop$^{\wedge, \vee}$}} \newcommand{\dDynClass{$\overline{\mathbb{Q}}$FO}}{\dDynClass{$\class$}lass{$\overline{\mathbb{Q}}$FO}} \newcommand{\dDynClass{$\class$}lass{Prop}UCQneg}{\dDynClass{$\class$}lass{PropUCQ{\ensuremath{^{\neg}}}}} \newcommand{\dDynClass{$\class$}lass{Prop}UCQ}{\dDynClass{$\class$}lass{PropUCQ}} \newcommand{\dDynClass{$\class$}lass{Prop}CQneg}{\dDynClass{$\class$}lass{PropCQ{\ensuremath{^{\neg}}}}} \newcommand{\dDynClass{$\class$}lass{Prop}CQ}{\dDynClass{$\class$}lass{PropCQ}} \newcommand{\textsc{EqualCardinality}\xspace}{\textsc{EqualCardinality}\xspace} \newcommand{\textsc{Reach}\xspace}{\textsc{Reach}\xspace} 
\newcommand{\textsc{Alt-Reach}\xspace}{\textsc{Alt-Reach}\xspace} \newcommand{$s$-$t$-graph\xspace}{$s$-$t$-graph\xspace} \newcommand{$s$-$t$-graph\xspaces}{$s$-$t$-graphs\xspace} \newcommand{\textsc{Reach}\xspaceQ}{\textsc{Reach}\xspace} \newcommand{\textsc{$s$-$t$-Reach}\xspace}{\textsc{$s$-$t$-Reach}\xspace} \newcommand{$s$-$t$-reachability query\xspace}{$s$-$t$-reachability query\xspace} \newcommand{\problem{$s$-$t$-Two\-Path}\xspace}{\problem{$s$-$t$-Two\-Path}\xspace} \newcommand{\problem{$s$-Two\-Path}\xspace}{\problem{$s$-Two\-Path}\xspace} \newcommand{\clique}[1]{\problem{$#1$-Clique}\xspace} \newcommand{colorability}[1]{\problem{$#1$-Col}\xspace} \newcommand{$s$-$t$-Reach}{$s$-$t$-Reach} \newcommand{$s$-$t$-Reachp}{\problem{$s$-$t$-Reach}\xspace} \newcommand{\layeredstreach}[1]{#1-Layered-$s$-$t$-Reach} \newcommand{\layeredstreachp}[1]{\problem{\layeredstreach{#1}}\xspace} \newcommand{\Emptiness}[1][]{\problem{Emptiness}\ifthenelse{\equal{#1}{}}{}{(#1)}\xspace} \newcommand{\Consistency}[1][]{\problem{Consistency}\ifthenelse{\equal{#1}{}}{}{(#1)}\xspace} \newcommand{\HI}[1][]{\problem{HistoryIndependence}\ifthenelse{\equal{#1}{}}{}{(#1)}\xspace} \newcommand{\dynClique}[1]{\dynProb{$#1$-Clique}\xspace} \newcommand{\dynColorability}[1]{\dynProb{$#1$-Col}\xspace} \newcommand{EqualCardinality}{EqualCardinality} \newcommand{\problem{\probEqualCardinalityText}\xspace}{\problem{EqualCardinality}\xspace} \newcommand{\problem{\probEqualCardinalityText}\xspaceDescr}{\problemdescr{EqualCardinality}{Unary relations $A$ and $B$}{Do $A$ and $B$ have the same cardinality?\xspace}} \newcommand{\dynProb{\probEqualCardinalityText}\xspace}{\dynProb{EqualCardinality}\xspace} \newcommand{\dynProb{\probEqualCardinalityText}\xspaceDescr}{\dynProbDescr{EqualCardinality}{Unary relations $A$ and $B$}{Element insertions and deletions}{Do $A$ and $B$ have the same cardinality?\xspace}} \newcommand{\dynProb{\textsc{Reach}}\xspace}{\dynProb{\textsc{Reach}}\xspace} 
\newcommand{\dynProb{\textsc{$s$-$t$-Reach}}\xspace}{\dynProb{\textsc{$s$-$t$-Reach}}\xspace} \newcommand{\dynProb{\stTwoPath}\xspace}{\dynProb{\problem{$s$-$t$-Two\-Path}\xspace}\xspace} \newcommand{\dynProb{\sTwoPath}\xspace}{\dynProb{\problem{$s$-Two\-Path}\xspace}\xspace} \newcommand{\dynlayeredstreach}[1]{Dyn-#1-Layered-$s$-$t$-Reach} \newcommand{\dynlayeredstreachp}[1]{\problem{\dynlayeredstreach{#1}}\xspace} \makeatletter{} \newcommand{\mtext}[1]{\textsc{#1}} \providecommand {\calA} {{\mathcal A}\xspace} \providecommand {\calB} {{\mathcal B}\xspace} \providecommand {\calC} {{\mathcal C}\xspace} \providecommand {\calD} {{\mathcal D}\xspace} \providecommand {\calE} {{\mathcal E}\xspace} \providecommand {\calF} {{\mathcal F}\xspace} \providecommand {\calG} {{\mathcal G}\xspace} \providecommand {\calH} {{\mathcal H}\xspace} \providecommand {\calK} {{\mathcal K}\xspace} \providecommand {\calI} {{\mathcal I}\xspace} \providecommand {\calL} {{\mathcal L}\xspace} \providecommand {\calM} {{\mathcal M}\xspace} \providecommand {\calN} {{\mathcal N}\xspace} \providecommand {\calO} {{\mathcal O}\xspace} \providecommand {\calP} {{\mathcal P}\xspace} \providecommand {\calQ} {{\mathcal Q}\xspace} \providecommand {\calR} {{\mathcal R}\xspace} \providecommand {\calS} {{\mathcal S}\xspace} \providecommand {\calT} {{\mathcal T}\xspace} \providecommand {\calU} {{\mathcal U}\xspace} \providecommand {\calV} {{\mathcal V}\xspace} \providecommand {\calX} {{\mathcal X}\xspace} \providecommand {\calZ} {{\mathcal Z}\xspace} \newcommand{\mhat}[1]{\widehat{#1}} \newcommand{\mhat{A}}{\mhat{A}} \newcommand{\mhat{B}}{\mhat{B}} \newcommand{\mhat{C}}{\mhat{C}} \newcommand{\mhat{D}}{\mhat{D}} \newcommand{\mhat{E}}{\mhat{E}} \newcommand{\mhat{F}}{\mhat{F}} \newcommand{\mhat{G}}{\mhat{G}} \newcommand{\mhat{H}}{\mhat{H}} \newcommand{\mhat{I}}{\mhat{I}} \newcommand{\mhat{J}}{\mhat{J}} \newcommand{\mhat{K}}{\mhat{K}} \newcommand{\mhat{L}}{\mhat{L}} \newcommand{\mhat{M}}{\mhat{M}} 
\newcommand{\mhat{N}}{\mhat{N}} \newcommand{\mhat{O}}{\mhat{O}} \newcommand{\mhat{P}}{\mhat{P}} \newcommand{\mhat{Q}}{\mhat{Q}} \newcommand{\mhat{R}}{\mhat{R}} \newcommand{\mhat{S}}{\mhat{S}} \newcommand{\mhat{T}}{\mhat{T}} \newcommand{\mhat{U}}{\mhat{U}} \newcommand{\mhat{V}}{\mhat{V}} \newcommand{\mhat{W}}{\mhat{W}} \newcommand{\mhat{X}}{\mhat{X}} \newcommand{\mhat{Y}}{\mhat{Y}} \newcommand{\mhat{Z}}{\mhat{Z}} \newcommand{\mhat{\Psi}}{\mhat{\Psi}} \newcommand{\mhat{\psi}}{\mhat{\psi}} \newcommand{\mhat{P}ih}{\mhat{\mhat{P}i}} \newcommand{\mhat{\phi}}{\mhat{\phi}} \newcommand{\mhat{\varphi}}{\mhat{\varphi}} \newcommand{\ensuremath{\mathbb{N}}}{\ensuremath{\mathbb{N}}} \newcommand{\ensuremath{\mathbb{Q}}}{\ensuremath{\mathbb{Q}}} \newcommand{\ensuremath{\mathbb{R}}}{\ensuremath{\mathbb{R}}} \newcommand{\ensuremath{\pi}}{\ensuremath{\pi}} \newcommand{\allsubsets}[2]{[#1]^{#2}} \newcommand{\pvec}[1]{\vec{#1}\mkern2mu\vphantom{#1}} \newcommand{\kexp}[2]{\ensuremath{\exp^{#1}\hspace{-0.5mm}(#2)}} \newcommand{\tower}[2]{\ensuremath{\text{tow}_{#1}\hspace{-0.5mm}(#2)}} \newcommand{\klog}[2]{\ensuremath{\log^{#1}{\hspace{-0.5mm}(#2)}}} \newcommand{\uplus}{\uplus} \providecommand{\power}[1]{\ensuremath{\calP(#1)}\xspace} \newcommand{\restrict}[2]{#1\mspace{-3mu}\upharpoonright \mspace{-3mu}#2} \newcommand{\simeq}{\simeq} \newcommand{\simeqVia}[1]{\simeq_{#1}} \newcommand{\swap}[2]{id{[#1, #2]}} \newcommand{\df}{\ensuremath{\mathrel{\smash{\stackrel{\scriptscriptstyle{ \text{def}}}{=}}}} \;} \newcommand{\refeq}[1]{\ensuremath{{\stackrel{\scriptstyle{ \text{#1}}}{=}}}} \newcommand{=\joinrel=\joinrel=\joinrel=}{=\joinrel=\joinrel=\joinrel=} \newcommand{=\joinrel=\joinrel=}{=\joinrel=\joinrel=} \newcommand{\reflongeq}[1]{\ensuremath{{\stackrel{\scriptstyle{ \text{#1}}}{=\joinrel=\joinrel=}}}} \newcommand{\ramseyw}[1]{\ensuremath{R_{#1}}} \makeatletter \newcommand{\ensuremath{\calA}\xspaceramsey}[4]{ \@ifmtarg{#1}{ \@ifmtarg{#4}{ \ensuremath{R(#2; #3)} }{ \ensuremath{R^#4(#2; 
#3)} } }{ \@ifmtarg{#4}{ \ensuremath{R_{#1}(#2; #3)} }{ \ensuremath{R^#4_{#1}(#2; #3)} } } } \newcommand{\ramsey}[3]{\ensuremath{\calA}\xspaceramsey{#1}{#2}{#3}{}} \newcommand{\homramsey}[2]{\ensuremath{\calA}\xspaceramsey{}{#2}{#1}{\text{hom}}} \newcommand{\mfoldramsey}[3]{\ensuremath{\calA}\xspaceramsey{}{#2}{#1}{#3}} \newcommand{\prec}{\prec} \newcommand{col}{col} \newcommand{($\ast$)}{($\ast$)} \newcommand{\sqsubseteq}{\sqsubseteq} \newcommand{\Rightarrow}{\ensuremath{\mathbb{R}}ightarrow} \newcommand{\rightarrow}{\rightarrow} \newcommand{\lpath}[1][]{\ensuremath{\mathrel{\smash{\stackrel{\scriptscriptstyle{ #1}}{\rightsquigarrow}}}}} \makeatletter{} \theoremstyle{plain} \newtheorem{proposition}[theorem]{Proposition} \newtheorem{goal}{Goal} \theoremstyle{definition} \newtheorem {openquestion}{Open question} \newtheorem {question}{Question} \newtheorem {mainquestion}{Main question} \newenvironment{proofsketch}{\begin{proof}[Proof sketch.]}{\end{proof}} \newenvironment{proofidea}{\begin{proof}[Proof idea.]}{\end{proof}} \newenvironment{proofof}[1]{\begin{proof}[Proof (of #1).]}{\end{proof}} \newenvironment{proofsketchof}[1]{\begin{proof}[Proof sketch (of #1).]}{\end{proof}} \newenvironment{proofideaof}[1]{\begin{proof}[Proof idea (of #1).]}{\end{proof}} \newenvironment{proofenum}{\begin{enumerate}[label=(\alph*),wide=0pt, listparindent=15pt]}{\end{enumerate}} \makeatletter{} \newcommand{\ensuremath{\text{inc}}}{\ensuremath{\text{inc}}} \newcommand{\ensuremath{\text{dec}}}{\ensuremath{\text{dec}}} \newcommand{\ensuremath{\text{transfer}}}{\ensuremath{\text{transfer}}} \newcommand{\ensuremath{\text{ifzero}}}{\ensuremath{\text{ifzero}}} \newcommand{\ensuremath{\text{ifzdec}}}{\ensuremath{\text{ifzdec}}} \newcommand{\ifzero}{\ensuremath{\text{ifzero}}} \newcommand{\eval}[3]{#1(#2/#3)} \newcommand{\theta}{\theta} \newcommand{\ensuremath{\text{Ar}}}{\ensuremath{\text{Ar}}} \newcommand{\ensuremath{\text{Ar}}Fun}{\ensuremath{Ar_{\text{fun}}}} 
\newcommand{\ensuremath{\tau}\xspace}{\ensuremath{\tau}\xspace} \newcommand{\ensuremath{\tau}\xspaceh}{\hat{\ensuremath{\tau}\xspace}} \newcommand{\schema_{\text{rel}}}{\ensuremath{\tau}\xspace_{\text{rel}}} \newcommand{\schema_{\text{rel}}h}{\ensuremath{\tau}\xspaceh_{\text{rel}}} \newcommand{\schema_{\text{const}}}{\ensuremath{\tau}\xspace_{\text{const}}} \newcommand{\schema_{\text{const}}h}{\ensuremath{\tau}\xspaceh_{\text{const}}} \newcommand{\schema_{\text{fun}}}{\ensuremath{\tau}\xspace_{\text{fun}}} \newcommand{\schema_{\text{fun}}h}{\ensuremath{\tau}\xspaceh_{\text{fun}}} \newcommand{\Terms}[2]{\textsc{Terms}^{#2}_{#1}} \newcommand{\calS}{\calS} \newcommand{\calSa}{\calS} \newcommand{\calSb}{\calT} \newcommand{\unaryTypes}[1]{\mathcal{UN}_{#1}} \newcommand{\binaryTypes}[1]{\mathcal{BIN}_{#1}} \newcommand{\naryTypes}[2]{\mathfrak{T}_{#1,#2}} \newcommand{\nb}[3]{\calN_{#2}^{#3}(#1)} \newcommand{\nbv}[3]{\vec \calN_{#2}^{#3}(#1)} \newcommand{\ensuremath{\Gamma_{\text{in}}}\xspace}{\ensuremath{\Gamma_{\text{in}}}\xspace} \newcommand{\ensuremath{\Gamma_{\text{aux}}}\xspace}{\ensuremath{\Gamma_{\text{aux}}}\xspace} \newcommand{\qfrank}[1]{\ensuremath{\text{rank-}#1}} \newcommand{\rightarrow}{\rightarrow} \newcommand{\wedge}{\wedge} \newcommand{\vee}{\vee} \newcommand{\cup}{\cup} \newcommand{\cap}{\cap} \newcommand{\biguplus}{\biguplus} \newcommand{\sem}[2]{\ensuremath{\llbracket #1\rrbracket_{#2}}} \newcommand{\ensuremath{\star}}{\ensuremath{\star}} \newcommand{\textsc{generic}}{\textsc{generic}} \newcommand{\mathbb{Q}}{\mathbb{Q}} \newcommand{\overline{\mathbb{Q}}}{\overline{\mathbb{Q}}} \newcommand{d}{d} \newcommand{\calC}{\calC} \newcommand{\symneg}[1]{\widehat{#1}} \newcommand{\type}[2]{\ensuremath{\langle #1, #2 \rangle}} \newcommand{\stype}[3]{\ensuremath{\langle #1, #2 \rangle_{#3}}} \newcommand{\behaveEqual}[1]{\approx_{#1}} \newcommand{\types}[2]{types_{#1}(#2)} \newcommand{\numTypes}[2]{|\types{{#1}}{#2}|} \newcommand{\epsilon}{\epsilon} 
\newcommand{\ensuremath{\calD}\xspace}{\ensuremath{\calD}\xspace} \newcommand{\ensuremath{\calI}\xspace}{\ensuremath{\calI}\xspace} \newcommand{\ensuremath{\calA}\xspace}{\ensuremath{\calA}\xspace} \newcommand{\ensuremath{\calB}\xspace}{\ensuremath{\calB}\xspace} \newcommand{\ensuremath{ D}\xspace}{\ensuremath{ D}\xspace} \newcommand{\ensuremath{ D_\text{act}}\xspace}{\ensuremath{ D_\text{act}}\xspace} \newcommand{\ensuremath{\db_\emptyset}\xspace}{\ensuremath{\ensuremath{\calD}\xspace_\emptyset}\xspace} \newcommand{\ensuremath{\calQ}}{\ensuremath{\calQ}} \newcommand{\calC}{\calC} \newcommand{\ensuremath{\calQ}s}{\ensuremath{R_\ensuremath{\calQ}}\xspace} \newcommand{\ensuremath{\mtext{Acc}}\xspace}{\ensuremath{\mtext{Acc}}\xspace} \newcommand{\ans}[2]{\mtext{ans}(#1, #2)} \newcommand{\ensuremath{\Delta}}{\ensuremath{\Delta}} \newcommand{\ensuremath{\updates_{Del}}}{\ensuremath{\ensuremath{\Delta}_{Del}}} \newcommand{\ensuremath{\updates_{Ins}}}{\ensuremath{\ensuremath{\Delta}_{Ins}}} \newcommand{\ensuremath{\updates}}{\ensuremath{\ensuremath{\Delta}}} \newcommand{\mtext{Init}\xspace}{\mtext{Init}\xspace} \newcommand{\mtext{ins}\xspace}{\mtext{ins}\xspace} \newcommand{\mtext{del}\xspace}{\mtext{del}\xspace} \newcommand{\mtext{ins}\xspaceertdescr}[2]{\textbf{Insertion of \ensuremath{#2} into \ensuremath{#1}.}} \newcommand{\mtext{del}\xspaceetedescr}[2]{\textbf{Deletion of \ensuremath{#2} from \ensuremath{#1}.}} \newcommand{\ensuremath{\struc}\xspace}{\ensuremath{\calS}\xspace} \newcommand{\ensuremath{\calI}\xspaceSchema}{\ensuremath{\ensuremath{\tau}\xspace_{\text{in}}}\xspace} \newcommand{\ensuremath{\calA}\xspaceSchema}{\ensuremath{\ensuremath{\tau}\xspace_{\text{aux}}}\xspace} \newcommand{\ensuremath{\schema_{=}}\xspace}{\ensuremath{\ensuremath{\tau}\xspace_{=}}\xspace} \newcommand{\ensuremath{\calB}\xspaceSchema}{\ensuremath{\ensuremath{\tau}\xspace_{\text{bi}}}\xspace} \newcommand{\ensuremath{\calA}\xspaceInit}{\mtext{Init}\xspace_{\text{aux}}} 
\newcommand{\ensuremath{\calB}\xspaceInit}{\mtext{Init}\xspace_{\text{bi}}} \newcommand{\ensuremath{P}\xspace}{\ensuremath{P}\xspace} \newcommand{\ensuremath{\calP}\xspace}{\ensuremath{\calP}\xspace} \newcommand{\ensuremath{\calP}\xspaceb}{\ensuremath{Q}\xspace} \newcommand{\updateDB}[2]{\ensuremath{#1(#2)}} \newcommand{\updateState}[3][\ensuremath{\calP}\xspace]{\ensuremath{#1_{#2}(#3)}} \newcommand{\updateStateI}[3][\ensuremath{\calP}\xspace]{\ensuremath{#1_{#2}(#3)}} \newcommand{\updateRelation}[4]{\restrict{\ensuremath{{#1}_{#2}(#3)}}{#4}} \newcommand{\transition}[3]{\ensuremath{{#1} \xrightarrow{#2}{#3}}} \makeatletter \newcommand{\uf}[4]{ \@ifmtarg{#4}{ \ensuremath{\phi^{#1}_{#2}(#3)} }{ \ensuremath{\phi^{#1}_{#2}(#3; #4)} } } \newcommand{\huf}[4]{ \@ifmtarg{#4}{ \ensuremath{\widehat{\phi}^{#1}_{#2}(#3)} }{ \ensuremath{\widehat{\phi}^{#1}_{#2}(#3; #4)} } } \newcommand{\ufb}[4]{ \@ifmtarg{#4}{ \ensuremath{\psi^{#1}_{#2}(#3)} }{ \ensuremath{\psi^{#1}_{#2}(#3; #4)} } } \newcommand{\ufbwa}[2]{ \ensuremath{\psi^{#1}_{#2}} } \newcommand{\ufwa}[2]{ \ensuremath{\phi^{#1}_{#2}} } \makeatletter \newcommand{\ufsubstitute}[5]{ \@ifmtarg{#5}{ \ensuremath{\phi^{#2}_{#3}[#1](#4)} }{ \ensuremath{\phi^{#2}_{#3}[#1](#4; #5)} } } \makeatletter \newcommand{\ufsubstitutewa}[3]{ \ensuremath{\phi^{#2}_{#3}[#1]} } \makeatletter \newcommand{\substitutewa}[2]{ \ensuremath{#1[#2]} } \newcommand{\ut}[4]{ \@ifmtarg{#4}{ \ensuremath{t^{#1}_{#2}(#3)} }{ \ensuremath{t^{#1}_{#2}(#3; #4)} } } \newcommand{\utwa}[2]{\ensuremath{t^{#1}_{#2}}} \newcommand{\ite}[3]{ \@ifmtarg{#1}{ \ensuremath{\mtext{ITE}} }{ \mtext{ITE}(#1,#2,#3) } } \makeatletter{} \providecommand{\nc}{\newcommand} \providecommand{\rnc}{\renewcommand} \providecommand{\pc}{\providecommand} \renewcommand{(\alph{enumi})}{(\alph{enumi})} \newcommand{Erd\H{o}s}{Erd\H{o}s} \ifcomments \nc{\commentbox}[1]{\noindent\framebox{\parbox{\linewidth}{#1}}} \nc{\todo}[1]{\ \\ {color{red} \fbox{\parbox{\linewidth}{{\sc ToDo}:\\ #1}}}} 
\setlength{\marginparwidth}{2.5cm} \setlength{\marginparsep}{3pt} \newcounter{CommentCounter} \newcommand{\acomment}[2]{\ \\ \fbox{\parbox{\linewidth}{{\sc #1}: #2}}} \newcommand{\mcomment}[2]{\textcolor{blue}{(#1)}\footnote{#1: #2}} \else \nc{\commentbox}[1]{} \newcommand{\mcomment}[2]{} \newcommand{\acomment}[2]{} \fi \ifchanges \newcommand{\oldnew}[3]{\marginpar{\fbox{\small #1}} \textcolor{red}{\footnotesize #2} \textcolor{blue}{#3}} \setul{}{0.2mm} \setstcolor{red} \newcommand{\loldnew}[3]{\marginpar{\fbox{\small #1}} \st{\footnotesize #2} \textcolor{blue}{#3}} \else \newcommand{\loldnew}[3]{#3} \newcommand{\oldnew}[3]{#3} \fi \nc{\tzm}[1]{\mcomment{TZ}{#1}} \nc{\tsm}[1]{\mcomment{TS}{#1}} \nc{\nilsm}[1]{\mcomment{NV}{#1}} \nc{\tz}[1]{\acomment{TZ}{#1}} \nc{\thz}[1]{\acomment{TZ}{#1}} \nc{\ts}[1]{\acomment{TS}{#1}} \nc{\nils}[1]{\acomment{NV}{#1}} \nc{\tzon}[2][]{\oldnew{TZ}{#1}{#2}} \nc{\tson}[2][]{\oldnew{TS}{#1}{#2}} \nc{\nilson}[2][]{\oldnew{NV}{#1}{#2}} \nc{\tzlon}[2][]{\loldnew{TZ}{#1}{#2}} \nc{\tslon}[2][]{\loldnew{TS}{#1}{#2}} \nc{\nilslon}[2][]{\loldnew{NV}{#1}{#2}} \makeatletter{} \newcommand{\apptheoremtitlefont}[1]{\textbf{#1}} \newcommand{\apprepetitionstartmarker}{} \newcommand{\apprepetitionendmarker}{} \newcommand{\InitialAppendix}{ \section*{Appendix} In the appendix we give the proofs that have been omitted in the main text. For proofs that are partially present in the main article, we repeat the full proof and its context. Parts that are only repeated are marked by \apprepetitionstartmarker and \apprepetitionendmarker. For the convenience of the reader we repeat the full Section \ref{section:hi}.
} \newcommand{\writeAppendix}{ \InitialAppendix } \newcommand{\toAppendix}[1]{ \makeatletter \g@addto@macro\writeAppendix{#1} \makeatother } \newcommand{\toMainAndAppendix}[1]{ \longVersion{#1} \shortVersion{ #1 \toAppendix{ \apprepetition{#1} \par } } } \newcommand{\toLongAndAppendix}[1]{ \longVersion{#1} \shortVersion{ \toAppendix{ #1 \par } } } \newcommand{\atheorem}[2]{ \begin{theorem}\label{#1} #2 \end{theorem} \toAppendix{ \begin{apptheorem}{\ref{#1}}{} #2 \end{apptheorem} } } \newcommand{\alemma}[2]{ \begin{lemma}\label{#1} #2 \end{lemma} \toAppendix{ \begin{applemma}{\ref{#1}}{} #2 \end{applemma} } } \newcommand{\aproposition}[2]{ \begin{proposition}\label{#1} #2 \end{proposition} \toAppendix{ \begin{appproposition}{\ref{#1}}{} #2 \end{appproposition} } } \newcommand{\aproof}[3]{ \longVersion{ \begin{proof} #1 #3 \end{proof} } \shortVersion{ \@ifmtarg{#2}{}{ \begin{proof} #1 #2 \end{proof} } \toAppendix{ \begin{proof} \@ifmtarg{#1}{}{\apprepetition{#1}} \par #3 \end{proof} } } } \newcommand{\aproofsketch}[3]{ \longVersion{ \begin{proof} #1 #3 \end{proof} } \shortVersion{ \@ifmtarg{#2}{}{ \begin{proofsketch} #1 #2 \end{proofsketch} } \toAppendix{ \begin{proof} \@ifmtarg{#1}{}{\apprepetition{#1}} \par #3 \end{proof} } } } \newcommand{\aproofidea}[3]{ \longVersion{ \begin{proof} #1 #3 \end{proof} } \shortVersion{ \@ifmtarg{#2}{}{ \begin{proofidea} #1 #2 \end{proofidea} } \toAppendix{ \begin{proof} \@ifmtarg{#1}{}{\apprepetition{#1}} \par #3 \end{proof} } } } \newcommand{\shortOrLong}[2]{ \shortVersion{#1} \longVersion{#2} } \makeatletter \newcommand{\theoremcont}[3]{ \@ifmtarg{#3}{ \apptheoremtitlefont{#1\ #2.} \itshape }{ \apptheoremtitlefont{#1\ #2}\ \itshape(#3). } } \newenvironment{applemma}[2]{ \par\theoremcont{Lemma}{#1}{#2}}{ \par} \newenvironment{apptheorem}[2]{ \par\theoremcont{Theorem}{#1}{#2}}{ \par } \newenvironment{appcorollary}[2]{\theoremcont{Corollary}{#1}{#2}}{ } \newenvironment{appproposition}[2]{\theoremcont{Proposition}{#1}{#2}}{ } \newenvironment{appdefinition}[2]{\theoremcont{Definition}{#1}{#2}}{ } \newenvironment{appexample}[1]{ \textit{Example #1.}}{ } \newcommand{\apponlystart}{ $\blacktriangleright\blacktriangleright\blacktriangleright$ } \newcommand{\apponlyend}{ $\blacktriangleleft\blacktriangleleft\blacktriangleleft$ } \newcommand{\apprepetition}[1]{ \apprepetitionstartmarker #1 \apprepetitionendmarker } \newcommand{\progToGraph}[1]{\ensuremath{{\langle #1 \rangle}}} \newcommand{\progToString}[1]{\ensuremath{{\langle #1 \rangle}}} \newcommand{\toString}[1]{\ensuremath{{\langle #1 \rangle}}} \newcommand{\toStructure}[1]{\ensuremath{{\langle #1 \rangle}}} \newcommand{\progToGraphInv}[1]{\ensuremath{{\langle #1 \rangle^{-1}}}} \newcommand{\progToGraphPadded}[1]{\ensuremath{{\langle\langle #1 \rangle\rangle}}} \newcommand{\progToGraphPaddedInv}[1]{\ensuremath{{\langle\langle #1 \rangle\rangle^{-1}}}} \newcommand{\LineIf}[2]{\State \algorithmicif\ {#1}\ \algorithmicthen\ {#2}} \algnewcommand\algorithmicinput{\textbf{Input:}} \algnewcommand\INPUT{\item[\algorithmicinput]} \algnewcommand\algorithmicoutput{\textbf{Output:}} \algnewcommand\OUTPUT{\item[\algorithmicoutput]} \newcommand{\columnWidth}{9cm} \newcommand{\First}{\mtext{First}} \newcommand{\List}{\mtext{List}} \newcommand{\Last}{\mtext{Last}} \newcommand{\In}{\mtext{In}} \newcommand{\Out}{\mtext{Out}}
\newcommand{\RelName}[1]{\mtext{#1}} \newcommand{\Acc}{\mtext{Acc}} \newcommand{\Odd}{\mtext{Odd}} \newcommand{\odd}{\text{odd}} \newcommand{\even}{\text{even}} \newcommand{\Counter}{\mtext{Counter}} \newcommand{\Empty}{\mtext{Empty}} \newcommand{\Zero}{\mtext{Zero}} \newcommand{\congruent}[2]{\sim_{#1, #2}} \newcommand{\shortVersion}[1]{} \newcommand{\longVersion}[1]{#1} \title{Static Analysis for Logic-Based Dynamic Programs\footnote{The first and third author acknowledge the financial support by DFG grant SCHW 678/6-1.}} \begin{abstract} A dynamic program, as introduced by Patnaik and Immerman (1994), maintains the result of a fixed query for an input database which is subject to tuple insertions and deletions. It can use an auxiliary database whose relations are updated via first-order formulas upon modifications of the input database. This paper studies static analysis problems for dynamic programs and investigates, more specifically, the decidability of the following three questions. Is the answer relation of a given dynamic program always empty? Does a program actually maintain a query? Is the content of the auxiliary relations independent of the modification sequence that led to an input database? In general, all these problems can easily be seen to be undecidable for full first-order programs. Therefore the paper aims at pinpointing the exact decidability borderline for programs with restricted arity (of the input and/or auxiliary database) and restricted quantification.
\end{abstract} \section{Introduction}\label{section:introduction} \makeatletter{} In modern database scenarios, data is subject to frequent changes. In order to avoid costly re-computation of queries from scratch after each small modification of the data, one can try to use previously computed auxiliary data. This auxiliary data then needs to be updated dynamically whenever the database changes. The descriptive dynamic complexity framework (short: dynamic complexity) by Patnaik and Immerman \cite{PatnaikI94} models this setting from a declarative perspective. It was mainly inspired by updates in relational databases. Within this framework, for a relational database subject to change, a \emph{dynamic program} maintains auxiliary relations with the intention of helping to answer a query \ensuremath{\calQ}. When a modification to the database, that is, an insertion or deletion of a tuple, occurs, every auxiliary relation is updated through a first-order update formula (or, equivalently, through a core SQL query) that can refer to the database as well as to the auxiliary relations. The result of $\ensuremath{\calQ}$ is, at all times, represented by some distinguished auxiliary relation. The class of all queries maintainable by dynamic programs with first-order update formulas is called \DynClass{FO} and we refer to such programs as \DynClass{FO}-programs. We note that shortly before the work of Patnaik and Immerman, the declarative approach was independently formalized in a similar way by Dong, Su and Topor \cite{DongST95}. The main question studied in Dynamic Complexity has been which queries that are not statically expressible in first-order logic (and therefore not in Core SQL) can be maintained by \DynClass{FO}-programs. Recently, it has been shown that the Reachability query, a very natural such query, can be maintained by \DynClass{FO}-programs~\cite{DattaKMSZ15}.
Altogether, research in Dynamic Complexity succeeded in proving that many non-FO queries are maintainable in \DynClass{FO}. These results and their underlying techniques yield many interesting insights into the nature of Dynamic Complexity. However, to complete the understanding of Dynamic Complexity, it would be desirable to complement these techniques by methods for proving that certain queries are \emph{not} maintainable by \DynClass{FO}-programs. But the state of the art with respect to inexpressibility results is much less favorable: at this point, no general techniques for showing that a query is not expressible in \DynClass{FO} are available. In order to get a better overall picture of Dynamic Complexity in general and to develop methods for inexpressibility proofs in particular, various restrictions of \DynClass{FO} have been studied, based on, e.g., arity restrictions for the auxiliary relations \cite{DongLW95, DongS98, DongLW03}, fragments of first-order logic \cite{Hesse03, GeladeMS12, ZeumeS15, Zeume14}, or other means \cite{DongS97, GraedelS12}. At the heart of our difficulties in proving inexpressibility results in Dynamic Complexity is our limited understanding of what dynamic programs with or without restrictions ``can do'' in general, and our limited ability to analyze what a particular dynamic program at hand ``does''. In this paper, we initiate a systematic study of the ``analyzability'' of dynamic programs.
Static analysis of queries has a long tradition in Database Theory and we follow this tradition by first studying the emptiness problem for dynamic programs, that is, the question whether there exists an initial database and a modification sequence that is accepted by a given dynamic program.\footnote{The exact framework will be defined in Section \ref{section:setting}, but we already mention that we will consider the setting in which databases are initially empty and the auxiliary relations are defined by first-order formulas.} Given the well-known undecidability of the finite satisfiability problem for first-order logic \cite{Trahtenbrot63}, it is not surprising that emptiness of \DynClass{FO}-programs is undecidable in general. However, we try to pinpoint the borderline of undecidability for fragments of \DynClass{FO} based on restrictions of the arity of input relations, the arity of auxiliary relations and for the class \DynClass{Prop} of programs with quantifier-free update formulas. In the fragments where undecidability of emptiness does not directly follow from undecidability of satisfiability in the corresponding fragment of first-order logic, our undecidability proofs make use of dynamic programs whose query answer might not only depend on the database yielded by a certain modification sequence, but also on the sequence itself, that is, on the order in which tuples are inserted or (even) deleted. From a useful dynamic program one would, of course, expect that it is \emph{consistent} in the sense that its query answer always only depends on the current database, but not on the specific modification sequence by which it has been obtained. It turns out that the emptiness problem for consistent programs is easier than the general emptiness problem for dynamic programs.
More precisely, there are fragments of \DynClass{FO}, for which an algorithm can decide emptiness for dynamic programs that come with a ``consistency guarantee'', but for which the emptiness problem is undecidable, in general. However, it turns out that the combination of a consistency test with an emptiness test for consistent programs does not gain any advantage over ``direct'' emptiness tests, since the consistency problem turns out to be as difficult as the general emptiness problem. Finally, we study a property that many dynamic programs in the literature share: they are \emph{history independent} in the sense that all auxiliary relations always only depend on the current (input) database. History independence can be seen as a strong form of consistency in that it not only requires the query relation, but \emph{all} auxiliary relations to be determined by the input database. History independent dynamic programs (also called \emph{memoryless} \cite{PatnaikI94} or \emph{deterministic} \cite{DongS97}) are still expressive enough to maintain interesting queries like undirected reachability \cite{GraedelS12}. But also some inexpressibility proofs have been found for such programs \cite{DongS97,GraedelS12, ZeumeS15}. We study the \emph{history independence problem}, that is, whether a given dynamic program is history independent. In a nutshell, the history independence problem is the ``easiest'' of the static analysis problems considered in this paper. Our results, summarized in Table \ref{tab:results}, shed light on the borderline between decidable and undecidable fragments of \DynClass{FO} with respect to emptiness (and consistency), emptiness for consistent programs and history independence.
While the picture is quite complete for the emptiness problem for general dynamic programs, for some fragments of~\DynClass{Prop} there remain open questions regarding the emptiness problem for consistent dynamic programs and the history-independence problem. Some of the results shown in this paper have already been presented in the master's thesis of Nils Vortmeier \cite{Vortmeier13}. \begin{table}[t!] \centering \begin{tabular}{l|C{3.2cm}|C{3.2cm}|C{3.2cm}} & Emptiness \newline Consistency & Emptiness for consistent programs & History\newline Independence\\ \hline Undecidable & $\DynClass{FO}IA{1}{0}$\newline$\DynClass{Prop}IA{2}{0}$\newline$\DynClass{Prop}IA{1}{2}$ & $\DynClass{FO}IA{1}{2}$\newline $\DynClass{FO}IA{2}{0}$ & $\DynClass{FO}IA{2}{0}$\\ \hline Decidable&$\DynClass{Prop}IA{1}{1}$ & $\DynClass{FO}IA{1}{1}$\newline$\DynClass{Prop}I{1}$\newline$\DynClass{Prop}A{1}$ & $\DynClass{FO}I{1}$\newline$\DynClass{Prop}A{1}$\\ \hline Open & & $\DynClass{Prop}IA{2}{2}$ and beyond & $\DynClass{Prop}IA{2}{2}$ and beyond\\ \end{tabular} \caption{Summary of the results of this paper. $\DynClass{FO}IA{\ell}{m}$ stands for \DynClass{FO}-programs with (at most) $\ell$-ary input relations and $m$-ary auxiliary relations. $\DynClass{FO}A{m}$ and $\DynClass{FO}I{\ell}$ represent programs with $m$-ary auxiliary relations (and arbitrary input relations) and programs with $\ell$-ary input relations, respectively. Likewise for $\DynClass{Prop}$. } \label{tab:results} \end{table} \subparagraph*{Outline} We recall some basic definitions in Section \ref{section:preliminaries} and introduce the formal setting in Section \ref{section:setting}.
The emptiness problem is defined and studied in Section \ref{section:emptiness}, where we first consider general dynamic programs (Subsection \ref{section:emptinessgeneral}) and then consistent dynamic programs (Subsection \ref{section:emptinessconsistent}). In Subsection \ref{section:emptinessbuiltin} we briefly discuss the impact of built-in orders on the results. The Consistency and History Independence problems are studied in Sections \ref{section:consistency} and \ref{section:hi}, respectively. We conclude in Section \ref{section:conclusion}. \shortOrLong{Due to the space limit we only give proof sketches or even proof ideas in the body of this paper. Complete proofs can be found in the long version \cite{}.}{} \section{Preliminaries}\label{section:preliminaries} \makeatletter{} We presume that the reader is familiar with basic notions from Finite Model Theory and refer to \cite{EbbinghausFlum95, Libkin04} for a detailed introduction to this field. We review some basic definitions in order to fix notation. In this paper, a \textit{domain} is a non-empty finite set. For tuples $\vec a = (a_1, \ldots, a_k)$ and $\vec b = (b_1, \ldots, b_\ell)$ over some domain~$\ensuremath{ D}\xspace$, the $(k + \ell)$-tuple obtained by concatenating $\vec a$ and $\vec b$ is denoted by $(\vec a, \vec b)$. A (relational) \emph{schema} is a collection $\ensuremath{\tau}\xspace$ of relation symbols\footnote{For simplicity we do not allow constants in this work but note that our results hold for relational schemas with constants as well.} together with an arity function $\ensuremath{\text{Ar}}: \ensuremath{\tau}\xspace \rightarrow \ensuremath{\mathbb{N}}$. A \emph{database} $\ensuremath{\calD}\xspace$ with schema $\ensuremath{\tau}\xspace$ and domain $\ensuremath{ D}\xspace$ is a mapping that assigns to every relation symbol $R \in \ensuremath{\tau}\xspace$ a relation of arity $\ensuremath{\text{Ar}}(R)$ over $\ensuremath{ D}\xspace$.
The \emph{size of a database}, usually denoted by $n$, is the size of its domain. We call a database \emph{empty} if all its relations are empty. We emphasize that empty databases have non-empty domains. A $\ensuremath{\tau}\xspace$-\emph{structure} $\calS$ is a pair $(\ensuremath{ D}\xspace, \ensuremath{\calD}\xspace)$ where $\ensuremath{\calD}\xspace$ is a database with schema $\ensuremath{\tau}\xspace$ and domain $\ensuremath{ D}\xspace$. Often we omit the schema when it is clear from the context. We write $\calS\models\varphi(\vec a)$ if the first-order formula $\varphi(\vec x)$ holds in $\calS$ under the variable assignment that maps $\vec x$ to $\vec a$. The \emph{quantifier depth} of a first-order formula is the maximal nesting depth of quantifiers. The \emph{rank-$q$ type} of a tuple $(a_1, \ldots, a_m)$ with respect to a $\ensuremath{\tau}\xspace$-structure $\calS$ is the set of all first-order formulas $\varphi(x_1, \ldots, x_m)$ (with equality) of quantifier depth at most $q$, for which $\calS\models\varphi(\vec a)$ holds. By $\calS \equiv_q \calS'$ we denote that two structures $\calS$ and $\calS'$ have the same rank-$q$ type (of length-$0$ tuples). For a subschema $\tau'\subseteq \tau$, the rank-$q$ $\tau'$-type of a tuple $\vec a$ in a $\tau$-structure $\calS$ is its rank-$q$ type in the $\tau'$-reduct of $\calS$. We refer to the rank-0 type of a tuple also as its \emph{atomic type} and, since we mostly deal with rank-0 types, simply as its \emph{type}. The \emph{equality type} of a tuple is the atomic type with respect to the empty schema. The \emph{$k$-ary type} of a tuple $\vec a$ in a structure $\calS$ is its $\tau_{\le k}$-type, where $\tau_{\le k}$ consists of all relation symbols of $\tau$ with arity at most $k$. The \emph{$\tau'$-color} of an element $a$ in $\calS$, for a subschema $\tau'$ of the schema of $\calS$, is its $\tau'_{1}$-type, where $\tau'_{1}$ consists of all unary relation symbols of $\tau'$.
We often enumerate the possible $\tau'$-colors as $c_0,\ldots,c_L$, for some $L$ with $c_0$ being the color of elements that are in neither of the unary relations. We call these elements \emph{$\tau'$-uncolored}. If $\tau'$ is clear from the context we simply speak of colors and uncolored elements. \section{The dynamic complexity setting}\label{section:setting} \makeatletter{} For a database \ensuremath{\calD}\xspace over schema \ensuremath{\tau}\xspace, a \emph{modification} $\delta=(o,\vec a)$ consists of an operation $o\in \{\mtext{ins}\xspace_S, \mtext{del}\xspace_S\mid S\in\tau\}$ and a tuple $\vec a$ of elements from the domain of \ensuremath{\calD}\xspace. By $\updateDB{\delta}{\ensuremath{\calD}\xspace}$ we denote the result of applying $\delta$ to $\ensuremath{\calD}\xspace$ with the obvious semantics of inserting or deleting the tuple $\vec a$ to or from relation $S^{\ensuremath{\calD}\xspace}$. For a sequence $\alpha = \delta_1 \cdots \delta_N$ of modifications to a database $\ensuremath{\calD}\xspace$ we let $\updateDB{\alpha}{\ensuremath{\calD}\xspace} \df \updateDB{\delta_N}{\cdots (\updateDB{\delta_1}{\ensuremath{\calD}\xspace})\cdots}$. A \emph{dynamic instance}\footnote{The following introduction to dynamic descriptive complexity is similar to previous \mbox{work \cite{ZeumeS15, ZeumeS14icdt}}. } of a query $\ensuremath{\calQ}$ is a pair $(\ensuremath{\calD}\xspace, \alpha)$, where $\ensuremath{\calD}\xspace$ is a database over a domain $\ensuremath{ D}\xspace$ and $\alpha$ is a sequence of modifications to $\ensuremath{\calD}\xspace$. The dynamic query $\dynProb{$\ensuremath{\calQ}$}$ yields the result of evaluating the query $\ensuremath{\calQ}$ on $\alpha(\ensuremath{\calD}\xspace)$. Dynamic programs, to be defined next, consist of an initialization mechanism and an update program.
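The modification semantics just defined can be made executable in a few lines. The following Python sketch is only an illustration and uses our own encoding (a modification as an explicit \texttt{(op, rel, tup)} triple), not notation from this paper:

```python
# Databases as dicts mapping relation symbols to sets of tuples;
# a modification delta is encoded as a triple (op, rel, tup).

def apply_modification(db, delta):
    """Return delta(db); the argument db itself is left unchanged."""
    op, rel, tup = delta
    new_db = {r: set(ts) for r, ts in db.items()}   # copy all relations
    if op == "ins":
        new_db[rel].add(tup)
    elif op == "del":
        new_db[rel].discard(tup)    # deleting an absent tuple is a no-op
    else:
        raise ValueError(f"unknown operation {op!r}")
    return new_db

def apply_sequence(db, alpha):
    """Return alpha(db) = delta_N(... delta_1(db) ...)."""
    for delta in alpha:
        db = apply_modification(db, delta)
    return db

# A unary relation U, initially empty:
db = {"U": set()}
alpha = [("ins", "U", (0,)), ("ins", "U", (1,)), ("del", "U", (0,))]
print(apply_sequence(db, alpha)["U"])   # {(1,)}
```

Note that each modification produces a fresh database, mirroring the functional notation $\updateDB{\alpha}{\ensuremath{\calD}\xspace}$.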
The former yields, for every (initial) database $\ensuremath{\calD}\xspace$, an initial state with initial auxiliary data. The latter defines the new state of the dynamic program for each possible modification $\delta$. A \emph{dynamic schema} is a pair $(\ensuremath{\tau_{\text{in}}}\xspace,\ensuremath{\tau_{\text{aux}}}\xspace)$, where $\ensuremath{\tau_{\text{in}}}\xspace$ and $\ensuremath{\tau_{\text{aux}}}\xspace$ are the schemas of the input database and the auxiliary database, respectively. We call relations over $\ensuremath{\tau_{\text{in}}}\xspace$ \emph{input relations} and relations over $\ensuremath{\tau_{\text{aux}}}\xspace$ \emph{auxiliary relations}. If the relations are $0$-ary, we also speak of input or auxiliary \emph{bits}. We always let $\ensuremath{\tau}\xspace\df\ensuremath{\tau_{\text{in}}}\xspace\cup\ensuremath{\tau_{\text{aux}}}\xspace$. \begin{definition}(Update program)\label{def:updateprog} An \emph{update program} \ensuremath{P}\xspace over a dynamic schema \mbox{$(\ensuremath{\tau_{\text{in}}}\xspace, \ensuremath{\tau_{\text{aux}}}\xspace)$} is a set of first-order formulas (called \emph{update formulas} in the following) that contains, for every $R \in \ensuremath{\tau_{\text{aux}}}\xspace$ and every $o\in \{\mtext{ins}\xspace_S, \mtext{del}\xspace_S\mid S\in\ensuremath{\tau_{\text{in}}}\xspace\}$, an update formula $\uf{R}{o}{\vec x}{\vec y}$ over the schema $\ensuremath{\tau}\xspace$ where $\vec x$ and $\vec y$ have the same arity as $S$ and $R$, respectively.
\end{definition} A \emph{program state} $\ensuremath{\calS}\xspace$ over dynamic schema \mbox{$(\ensuremath{\tau_{\text{in}}}\xspace, \ensuremath{\tau_{\text{aux}}}\xspace)$} is a structure $(\ensuremath{ D}\xspace, \ensuremath{\calI}\xspace, \ensuremath{\calA}\xspace)$ where\footnote{We prefer the notation $(\ensuremath{ D}\xspace, \ensuremath{\calI}\xspace, \ensuremath{\calA}\xspace)$ over $(\ensuremath{ D}\xspace, \ensuremath{\calI}\xspace \cup \ensuremath{\calA}\xspace)$ to emphasize the two components of the overall database.} $\ensuremath{ D}\xspace$ is a finite domain, $\ensuremath{\calI}\xspace$ is a database over the input schema (the \emph{input database}) and $\ensuremath{\calA}\xspace$ is a database over the auxiliary schema (the \emph{auxiliary database}). The \emph{semantics of update programs} is as follows. For a modification $\delta=(o,\vec a)$, where $\vec a$ is a tuple over $\ensuremath{ D}\xspace$, and program state $\ensuremath{\calS}\xspace=(\ensuremath{ D}\xspace, \ensuremath{\calI}\xspace,\ensuremath{\calA}\xspace)$ we denote by $\updateState[\ensuremath{P}\xspace]{\delta}{\ensuremath{\calS}\xspace}$ the state $(\ensuremath{ D}\xspace, \updateDB{\delta}{\ensuremath{\calI}\xspace}, \ensuremath{\calA}\xspace')$, where $\ensuremath{\calA}\xspace'$ consists of the relations \mbox{$R^{\ensuremath{\calA}\xspace'}\df\{\vec b \mid \ensuremath{\calS}\xspace \models \uf{R}{o}{\vec a}{\vec b}\}$}. The effect $\updateState[\ensuremath{P}\xspace]{\alpha}{\ensuremath{\calS}\xspace}$ of a modification sequence $\alpha = \delta_1 \ldots \delta_N$ on a state $\ensuremath{\calS}\xspace$ is the state $\updateState[\ensuremath{P}\xspace]{\delta_N}{\ldots (\updateState[\ensuremath{P}\xspace]{\delta_1}{\ensuremath{\calS}\xspace})\ldots}$.
\begin{definition}(Dynamic program) \label{definition:dynprog} A \emph{dynamic program} is a triple $(\ensuremath{P}\xspace,\mtext{Init}\xspace,R_{\calQ})$, where \begin{compactitem} \item $\ensuremath{P}\xspace$ is an update program over some dynamic schema \mbox{$(\ensuremath{\tau_{\text{in}}}\xspace, \ensuremath{\tau_{\text{aux}}}\xspace)$}, \item \mtext{Init}\xspace is a mapping that maps $\ensuremath{\tau_{\text{in}}}\xspace$-databases to $\ensuremath{\tau_{\text{aux}}}\xspace$-databases, and \item $R_{\calQ}\in\ensuremath{\tau_{\text{aux}}}\xspace$ is a designated \emph{query symbol}. \end{compactitem} \end{definition} A dynamic program $\ensuremath{\calP}\xspace=(\ensuremath{P}\xspace,\mtext{Init}\xspace,R_{\calQ})$ \emph{maintains} a dynamic query $\dynProb{$\ensuremath{\calQ}$}$ if, for every dynamic instance $(\ensuremath{\calD}\xspace,\alpha)$, the query result $\ensuremath{\calQ}(\updateDB{\alpha}{\ensuremath{\calD}\xspace})$ coincides with the query relation $R_{\calQ}^{\ensuremath{\calS}\xspace}$ in the state \mbox{$\ensuremath{\calS}\xspace=\updateState[\ensuremath{P}\xspace]{\alpha}{\ensuremath{\calS}\xspace_\mtext{Init}\xspace(\ensuremath{\calD}\xspace)}$}, where \mbox{$\ensuremath{\calS}\xspace_\mtext{Init}\xspace(\ensuremath{\calD}\xspace) \df (\ensuremath{ D}\xspace, \ensuremath{\calD}\xspace, \mtext{Init}\xspace(\ensuremath{\calD}\xspace))$} is the initial state for $\ensuremath{\calD}\xspace$. If the query relation $R_{\calQ}$ is $0$-ary, we often denote this relation as \emph{query bit} $\mtext{Acc}$ and say that $\ensuremath{\calP}\xspace$ \emph{accepts} $\alpha$ over $\ensuremath{ D}\xspace$ if $\mtext{Acc}$ is true in $\updateState[\ensuremath{P}\xspace]{\alpha}{\ensuremath{\calS}\xspace_\mtext{Init}\xspace(\ensuremath{\calD}\xspace)}$.
In the following, we write $\updateStateI{\alpha}{\ensuremath{\calD}\xspace}$ instead of $\updateState[\ensuremath{P}\xspace]{\alpha}{\ensuremath{\calS}\xspace_\mtext{Init}\xspace(\ensuremath{\calD}\xspace)}$ and $\updateStateI{\alpha}{\ensuremath{\calS}\xspace}$ instead\footnote{The notational difference is tiny here: we refer to the dynamic program instead of the update program.} of $\updateState[\ensuremath{P}\xspace]{\alpha}{\ensuremath{\calS}\xspace}$ for a given dynamic program $\ensuremath{\calP}\xspace = (\ensuremath{P}\xspace,\mtext{Init}\xspace,R_{\calQ})$, a modification sequence $\alpha$, an initial database $\ensuremath{\calD}\xspace$ and a state $\ensuremath{\calS}\xspace$. \begin{definition}(\DynClass{FO} and \DynClass{Prop}) \label{definition:dync} \DynClass{FO} is the class of all dynamic queries that can be maintained by dynamic programs with first-order update formulas and first-order definable initialization mapping when starting from an initially empty input database. $\DynClass{Prop}$ is the subclass of $\DynClass{FO}$, where update formulas are quantifier-free\footnote{We still allow the use of quantifiers for the initialization.}. \end{definition} A $\DynClass{FO}$-program is a dynamic program with first-order update formulas; likewise, a $\DynClass{Prop}$-program is a dynamic program with quantifier-free update formulas.
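As a small executable aside (our illustration, not an example from this paper): the Boolean query ``is $|U|$ odd?'' is not first-order expressible, yet it is maintainable with quantifier-free update formulas, because an update formula may consult the \emph{old} state. With a $0$-ary query bit $Q$, the insertion formula is simply $Q \oplus \neg U(a)$, simulated below in Python:

```python
# A DynProp-style program maintaining the query "is |U| odd?" with a single
# auxiliary bit Q.  Each update consults only Q and the atom U(a) of the
# state *before* the modification, i.e. it is quantifier-free.

def update_Q_ins(Q, U, a):
    return Q != (a not in U)        # Q xor (a not in U): flip iff a is new

def update_Q_del(Q, U, a):
    return Q != (a in U)            # flip iff a was actually present

def run(modifications):
    U, Q = set(), False             # initially empty database, Init gives Q = false
    for op, a in modifications:
        if op == "ins":
            Q = update_Q_ins(Q, U, a)
            U.add(a)
        else:
            Q = update_Q_del(Q, U, a)
            U.discard(a)
        assert Q == (len(U) % 2 == 1)   # the program state matches the query
    return Q

print(run([("ins", 1), ("ins", 2), ("ins", 2), ("del", 2), ("del", 7)]))
```

The spurious re-insertion of $2$ and the deletion of the absent element $7$ are handled correctly precisely because the formulas refer to the old input relation.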
A $\DynClass{FO}IA{\ell}{m}$-program is a $\DynClass{FO}$-program over (at most) $\ell$-ary input databases that uses auxiliary relations of arity at most $m$; likewise for $\DynClass{Prop}IA{\ell}{m}$-programs.\footnote{We do not consider the case $\ell = 0$ where databases are pure sets with a fixed number of bits.} Due to the undecidability of finite satisfiability of first-order logic, the emptiness problem---the problem we study first---is undecidable even for \DynClass{FO}-programs with only a single auxiliary relation (more precisely, with a query bit only). Therefore, we restrict our investigations to fragments of \DynClass{FO}. Allowing arbitrary initialization mappings also immediately yields an undecidable emptiness problem. This is already the case for first-order definable initialization mappings for arbitrary initial databases. In the literature, classes with various restricted and unrestricted initialization mappings have been studied; \mbox{see \cite{ZeumeS14icdt}} for a discussion. In this work, in line with \cite{PatnaikI94}, we allow initialization mappings defined by arbitrary first-order formulas, but require that the initial database is empty. Of course, we could have studied further restrictions on the power of the initialization formulas, but this would have yielded a setting with an additional parameter. The following example illustrates a technique to maintain lists with quantifier-free dynamic programs, introduced in \cite[Proposition 4.5]{GeladeMS12}, which is used in some of our proofs. The example itself is from \cite{ZeumeS15}. \begin{example}\label{example:emptylist} We provide a $\DynClass{Prop}$-program $\ensuremath{\calP}\xspace$ for the dynamic variant of the Boolean query \problem{NonEmptySet}, where, for a unary relation $U$ subject to insertions and deletions of elements, one asks whether $U$ is non-empty.
Of course, this query is trivially expressible in first-order logic, but not without quantifiers. The program $\ensuremath{\calP}\xspace$ is over auxiliary schema $\ensuremath{\tau_{\text{aux}}}\xspace = \{R_{\calQ}, \mtext{First}, \mtext{Last}, \mtext{List}\}$, where $R_{\calQ}$ is the query bit (i.e.\ a $0$-ary relation symbol), $\mtext{First}$ and $\mtext{Last}$ are unary relation symbols, and $\mtext{List}$ is a binary relation symbol. The idea of \ensuremath{\calP}\xspace is to maintain a list of all elements currently in $U$. The list structure is stored in the binary relation $\mtext{List}^\ensuremath{\calS}\xspace$. The first and last element of the list are stored in $\mtext{First}^\ensuremath{\calS}\xspace$ and $\mtext{Last}^\ensuremath{\calS}\xspace$, respectively. We note that the order in which the elements of $U$ are stored in the list depends on the order in which they are inserted into $U$. \shortOrLong{}{ For a given instance of \problem{NonEmptySet} the initialization mapping initializes the auxiliary relations accordingly.} \shortOrLong{We only describe the (more complicated) case of deletions from $U$. }{ \textbf{Insertion of \ensuremath{a} into \ensuremath{U}.}{ A newly inserted element is attached to the end of the list\footnote{For simplicity we assume that only elements that are not already in $U$ are inserted; the formulas given can easily be extended to the general case. Similar assumptions are made whenever necessary.}. Therefore the $\mtext{First}$-relation does not change except when the first element is inserted into an empty set $U$. Furthermore, the inserted element is the new last element of the list and has a connection to the former last element.
Finally, after inserting an element into $U$, the query result is ``true'': \begin{align*} \uf{\mtext{First}}{\mtext{ins}\xspace}{a}{x} &\df (\neg \ensuremath{\calQ}\xspace \wedge a = x) \vee (\ensuremath{\calQ}\xspace \wedge \mtext{First}(x)) \\ \uf{\mtext{Last}}{\mtext{ins}\xspace}{a}{x} &\df a = x \\ \uf{\mtext{List}}{\mtext{ins}\xspace}{a}{x,y} &\df \mtext{List}(x,y) \vee (\mtext{Last}(x) \wedge a = y) \\ \uf{\ensuremath{\calQ}\xspace}{\mtext{ins}\xspace}{a}{} &\df \top. \end{align*} }} \deletedescr{U}{a}{ How a deleted element $a$ is removed from the list depends on whether $a$ is the first element of the list, the last element of the list, or some other element of the list. The query bit remains ``true'' if $a$ was not both the first \emph{and} the last element of the list.\shortOrLong{\footnote{We omit the (obvious) parts of formulas that deal with spurious deletions.}}{} \begin{align*} \uf{\mtext{First}}{\mtext{del}\xspace_U}{a}{x} &\df (\mtext{First}(x) \wedge x \neq a) \vee (\mtext{First}(a) \wedge \mtext{List}(a,x)) \\ \uf{\mtext{Last}}{\mtext{del}\xspace_U}{a}{x} &\df (\mtext{Last}(x) \wedge x \neq a) \vee (\mtext{Last}(a) \wedge \mtext{List}(x,a)) \\ \uf{\mtext{List}}{\mtext{del}\xspace_U}{a}{x,y} &\df x \neq a \wedge y \neq a \wedge \big(\mtext{List}(x,y) \vee (\mtext{List}(x, a) \wedge \mtext{List}(a, y))\big)\\ \uf{\ensuremath{\calQ}\xspace}{\mtext{del}\xspace_U}{a}{} &\df \neg(\mtext{First}(a) \wedge \mtext{Last}(a)) \end{align*} \\ \qed } \end{example} In some parts of the paper we will use specific forms of modification sequences. An \emph{insertion sequence} is a modification sequence $\alpha = \delta_1\cdots\delta_m$ whose modifications are pairwise distinct insertions. An insertion sequence $\alpha$ over a unary input schema $\ensuremath{\calI}\xspace$ is in \emph{normal form} if it fulfills the following two conditions.
\begin{enumerate}[label=(N\arabic*)] \item For each element $a$, the insertions affecting $a$ form a contiguous subsequence $\alpha_a$ of $\alpha$. We say that $\alpha_a$ \emph{colors} $a$. \item For all elements $a,b$ that get assigned the same $\ensuremath{\calI}\xspace$-color by $\alpha$, the projections of the subsequences $\alpha_a$ and $\alpha_b$ to their operations (i.e., their first parameters) are identical. \end{enumerate} \section{The Emptiness Problem}\label{section:emptiness} \toAppendix{\section{Proofs for Section \ref{section:emptiness}}} In this section we define and study the decidability of the emptiness problem for dynamic programs in general and for restricted classes of dynamic programs. The emptiness problem asks whether the query relation $\ensuremath{\calQ}\xspace$ of a given dynamic program $\ensuremath{\calP}\xspace$ is always empty, more precisely, whether $\ensuremath{\calQ}\xspace^\ensuremath{\struc}\xspace=\emptyset$ for every (empty) initial database $\ensuremath{\calD}\xspace$ and every modification sequence $\alpha$ with $\ensuremath{\struc}\xspace = \updateStateI[\ensuremath{\calP}\xspace]{\alpha}{\ensuremath{\calD}\xspace}$. To enable a fine-grained analysis, we parameterize the emptiness problem by a class $\calC$ of dynamic programs. \problemdescr{\Emptiness[$\calC$]}{A dynamic program $\ensuremath{\calP}\xspace\in\calC$ with $\StaClass{FO}$ initialization}{ Is $\ensuremath{\calQ}\xspace^\ensuremath{\struc}\xspace=\emptyset$ for every initially empty database $\ensuremath{\calD}\xspace$ and every modification sequence $\alpha$, where $\ensuremath{\struc}\xspace \df \updateStateI[\ensuremath{\calP}\xspace]{\alpha}{\ensuremath{\calD}\xspace}$?} As mentioned before, undecidability of the emptiness problem for unrestricted dynamic programs follows immediately from the undecidability of finite satisfiability of first-order logic.
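Before turning to restricted fragments, it may help to see these definitions at work. The following Python sketch (purely illustrative; the state encoding and all identifiers are ours, not part of the formal development) replays the quantifier-free update formulas of Example~\ref{example:emptylist} over a small domain and enumerates modification sequences until the query bit becomes true. Such an enumeration shows that \emph{non}-emptiness is recursively enumerable: a witnessing sequence, if one exists, is eventually found.

```python
from itertools import product

def initial_state():
    # Empty initial database: auxiliary relations empty, query bit false.
    # State layout (ours): (query bit, First, Last, List).
    return (False, frozenset(), frozenset(), frozenset())

def members(state):
    # The elements currently in U are exactly the elements stored in the list.
    _, first, last, lst = state
    return set(first) | set(last) | {x for pair in lst for x in pair}

def apply(state, op, a):
    """Replay the quantifier-free update formulas of the list program."""
    q, first, last, lst = state
    if op == "ins":
        if a in members(state):          # re-insertion excluded per the example
            return state
        new_first = frozenset({a}) if not q else first
        new_last = frozenset({a})
        new_list = lst | {(x, a) for x in last}
        return (True, new_first, new_last, new_list)
    if a not in members(state):          # spurious deletion, also excluded there
        return state
    new_first = frozenset(x for x in first if x != a) | \
                frozenset(x for (y, x) in lst if y == a and a in first)
    new_last = frozenset(x for x in last if x != a) | \
               frozenset(x for (x, y) in lst if y == a and a in last)
    new_list = frozenset(p for p in lst if a not in p) | \
               frozenset((x, y) for (x, m) in lst for (m2, y) in lst
                         if m == a and m2 == a and x != a and y != a)
    new_q = not (a in first and a in last)
    return (new_q, new_first, new_last, new_list)

def nonempty_witness(domain_size, max_len):
    # Enumerate all modification sequences up to max_len: a witness for
    # non-emptiness, if one exists, is found by this exhaustive search.
    mods = [(op, a) for op in ("ins", "del") for a in range(domain_size)]
    for length in range(1, max_len + 1):
        for seq in product(mods, repeat=length):
            state = initial_state()
            for op, a in seq:
                state = apply(state, op, a)
            if state[0]:                 # query bit is true
                return seq
    return None
```

Spurious insertions and deletions are skipped here, matching the simplifying assumptions made in the footnotes of the example.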
\atheorem{theorem:generalundecidability}{ $\Emptiness$ is undecidable for $\DynClass{FO}(2,0)$-programs. } \toLongAndAppendix{ \begin{proof} This follows easily from the undecidability of the finite satisfiability problem for first-order logic over schemas with at least one binary relation symbol \cite{Trahtenbrot63}. For a given first-order formula $\varphi$ over schema $\{E\}$ we construct a $\DynClass{FO}$-program $\ensuremath{\calP}\xspace$ with a single binary input relation $E$ and a single $0$-ary auxiliary relation $\mtext{Acc}$ as follows. The bit $\mtext{Acc}$ is set to true whenever the modified database is a model of $\varphi$, and set to false otherwise. For correctness, we observe that if $\varphi$ is not satisfiable then $\mtext{Acc}$ is always false and therefore $\ensuremath{\calP}\xspace$ is empty. On the other hand, if $\varphi$ is satisfiable, then there is a modification sequence $\alpha$ that is accepted by $\ensuremath{\calP}\xspace$, so $\ensuremath{\calP}\xspace$ is non-empty. \end{proof} } In the remainder of this section, we shed some light on the borderline between decidable and undecidable fragments of \DynClass{FO}. In Subsection \ref{section:emptinessgeneral} we study fragments of \DynClass{FO} obtained by disallowing quantification and/or restricting the arity of input and auxiliary relations. In Subsection \ref{section:emptinessconsistent}, we consider dynamic programs that come with a certain consistency guarantee. \subsection{Emptiness of general dynamic programs}\label{section:emptinessgeneral} \toAppendix{\subsection{Proofs for Section \ref{section:emptinessgeneral}}} In this subsection we study the emptiness problem for various restricted classes of dynamic programs. We will see that the problem is, essentially, only decidable if all relations are at most unary and no quantification in update formulas is allowed.
Figure \ref{figure:emptiness:general} summarizes the results. \begin{figure} \caption{Decidability of \Emptiness for various classes of dynamic programs.} \label{figure:emptiness:general} \end{figure} First, we strengthen the general result from Theorem \ref{theorem:generalundecidability}. We show that undecidability of the emptiness problem for \DynClass{FO}-programs holds even for unary input relations and auxiliary bits. Furthermore, quantification is not needed to yield undecidability: for \DynClass{Prop}-programs, emptiness is undecidable for binary input or auxiliary relations. \atheorem{theorem:emptiness:undecidables}{ The emptiness problem is undecidable for \begin{enumerate}[ref={\thetheorem\ (\alph*)}] \item $\DynClass{FO}(1,0)$-programs, \item $\DynClass{Prop}(1,2)$-programs, \item $\DynClass{Prop}(2,0)$-programs. \end{enumerate} } \aproofsketch{} { In all three cases, the proof is by a reduction from the emptiness problem for semi-deterministic 2-counter automata. In a nutshell, a counter automaton (short: CA) is a finite automaton that is equipped with counters ranging over the non-negative integers. A counter $c$ can be incremented ($\ensuremath{\text{inc}}(c)$), decremented ($\ensuremath{\text{dec}}(c)$) and tested for zero ($\ifzero(c)$). A CA does not read any input (i.e., its transitions can be considered to be $\epsilon$-transitions) and in each step it can manipulate or test one counter and transit from one state to another state. More formally, a CA is a tuple $(Q, C,\Delta, q_i , F )$, where $Q$ is a set of states, $q_i \in Q$ is the initial state, $F \subseteq Q$ is the set of accepting states, and $C$ is a finite set (the \emph{counters}). The transition relation $\Delta$ is a subset of $Q \times \{\ensuremath{\text{inc}}(c), \ensuremath{\text{dec}}(c), \ifzero(c) \mid c \in C\} \times Q$.
A \emph{configuration} of a CA is a pair $(p, \vec n)$ where $p$ is a state and $\vec n \in \ensuremath{\mathbb{N}}^C$ gives a value $n_c$ for each counter $c$ in $C$. A transition $(p,\ensuremath{\text{inc}}(c),q)$ can be applied in state $p$; it transits to state $q$ and increments $n_c$ by one. A transition $(p,\ensuremath{\text{dec}}(c),q)$ can be applied in state $p$ if $n_c>0$; it transits to state $q$ and decrements $n_c$ by one. A transition $(p,\ifzero(c),q)$ can be applied in state $p$ if $n_c=0$, and transits to state $q$. A CA is \emph{semi-deterministic} if from every state there is either at most one transition or there are two transitions, one decrementing and one testing the same counter for zero. The emptiness problem for (semi-deterministic) 2-counter automata (2CA) asks whether a given counter automaton with two counters has an accepting run; it is undecidable \cite[Theorem 14.1-1]{Minsky1967}. In all three reductions, the dynamic program \ensuremath{\calP}\xspace is constructed such that for every run~$\rho$ of a semi-deterministic 2CA $\calM$ there is a modification sequence $\alpha=\alpha(\rho)$ that lets \ensuremath{\calP}\xspace simulate~$\rho$, and an empty database \ensuremath{\calD}\xspace, such that \ensuremath{\calP}\xspace accepts $\alpha$ over \ensuremath{\calD}\xspace if and only if $\rho$ is accepting. More precisely, the states of \ensuremath{\calP}\xspace encode the states of $\calM$ by auxiliary bits, and the counters of $\calM$ in some way that differs in the three cases. Note that not every modification sequence for \ensuremath{\calP}\xspace corresponds to a run of $\calM$. However, \ensuremath{\calP}\xspace can detect if $\alpha$ does \emph{not} correspond to a run and assume a rejecting sink state as soon as this happens. For (a), the two counters are simply represented by two unary relations, such that the number of elements in a relation is the current value of the counter.
The test whether a counter has value zero thus boils down to testing emptiness of a set and can easily be expressed by a formula with quantifiers. The lack of quantifiers makes the reductions for (b) and (c) a bit more complicated. In both cases, the counters are represented by linked lists, similar to Example~\ref{example:emptylist}, and the number of elements in the list corresponds to the counter value (in (c): plus 1). With such a list a counter value of zero can be detected without quantification. Due to the allowed relation types, the lists are built with auxiliary relations in (b) and with input relations in (c). } { \apprepetition{In all three cases, the proof is by a reduction from the emptiness problem for semi-deterministic 2-counter automata. In a nutshell, a counter automaton (short: CA) is a finite automaton that is equipped with counters ranging over the non-negative integers. A counter $c$ can be incremented ($\ensuremath{\text{inc}}(c)$), decremented ($\ensuremath{\text{dec}}(c)$) and tested for zero ($\ifzero(c)$). A CA does not read any input (i.e., its transitions can be considered to be $\epsilon$-transitions) and in each step it can manipulate or test one counter and transit from one state to another state. More formally, a CA is a tuple $(Q, C,\Delta, q_i , F )$, where $Q$ is a set of states, $q_i \in Q$ is the initial state, $F \subseteq Q$ is the set of accepting states, and $C$ is a finite set (the \emph{counters}). The transition relation $\Delta$ is a subset of $Q \times \{\ensuremath{\text{inc}}(c), \ensuremath{\text{dec}}(c), \ifzero(c) \mid c \in C\} \times Q$. A \emph{configuration} of a CA is a pair $(p, \vec n)$ where $p$ is a state and $\vec n \in \ensuremath{\mathbb{N}}^C$ gives a value $n_c$ for each counter $c$ in $C$. A transition $(p,\ensuremath{\text{inc}}(c),q)$ can be applied in state $p$; it transits to state $q$ and increments $n_c$ by one.
A transition $(p,\ensuremath{\text{dec}}(c),q)$ can be applied in state $p$ if $n_c>0$; it transits to state $q$ and decrements $n_c$ by one. A transition $(p,\ifzero(c),q)$ can be applied in state $p$ if $n_c=0$, and transits to state $q$. A \emph{run} is a sequence of configurations consistent with $\Delta$, starting from the \emph{initial configuration} $(q_i, \vec 0)$. A run is \emph{accepting} if it ends in some configuration $(q_f, \vec n)$ with $q_f \in F$. A CA is \shortOrLong{}{\emph{deterministic} if $\Delta$ contains for every $p \in Q$ at most one transition $(p, \theta, q)$. It is }\emph{semi-deterministic} if for every $p \in Q$ there is at most one transition $(p, \theta, q)$ in $\Delta$ or there are two transitions $(p, \ensuremath{\text{dec}}(c), q)$ and $(p, \ifzero(c), q')$. The emptiness problem for counter automata asks whether a given counter automaton has an accepting run. It follows from \cite[Theorem 14.1-1]{Minsky1967} that the emptiness problem for semi-deterministic CA \emph{with two counters} (2CA) is undecidable.\footnote{The instruction set from \cite{Minsky1967} contains the increment instruction and a combined instruction that decrements a counter if it is non-zero and jumps to another instruction if it is zero. To simulate the latter instruction, we use two transitions $(p, \ensuremath{\text{dec}}(c), q)$ and $(p, \ifzero(c), q')$ of which exactly one can be applied.} In all three reductions, the dynamic program \ensuremath{\calP}\xspace is constructed such that for every run $\rho$ of the 2CA $\calM$ there is a modification sequence $\alpha=\alpha(\rho)$ that lets \ensuremath{\calP}\xspace simulate~$\rho$, and such that \ensuremath{\calP}\xspace accepts on input $\alpha$ if and only if $\rho$ is accepting. More precisely, the state of \ensuremath{\calP}\xspace encodes the state of $\calM$ by auxiliary bits, and the counters of $\calM$ in some way that differs in the three cases.
Note that, in all cases, not every modification sequence for \ensuremath{\calP}\xspace corresponds to a run of $\calM$. However, \ensuremath{\calP}\xspace can detect if $\alpha$ does \emph{not} correspond to a run and assume a rejecting sink state as soon as this happens. For (a), the two counters are simply represented by two unary relations, such that the number of elements in a relation is the current value of the counter. The test whether a counter has value zero thus boils down to testing emptiness of a set and can easily be expressed by a formula with quantifiers. The lack of quantifiers makes the reductions for (b) and (c) a bit more complicated. In both cases, the counters are represented by linked lists, where the number of elements in the list corresponds to the counter value (in (c): plus 1). With such a list a counter value of zero can be detected without quantification. Due to the allowed relation types, the lists are built with auxiliary relations in (b) and with input relations in (c).} In the following, we describe more details of the reductions. \begin{proofenum} \item We construct, from a semi-deterministic 2CA $\calM = (Q, \{c_1, c_2\},\Delta, q_I , F )$, a Boolean $\DynClass{FO}(1,0)$-program $\ensuremath{\calP}\xspace$ with unary input relations $C_1$ and $C_2$ and input bits $Z_1$ and $Z_2$ such that $\calM$ accepts a sequence $\theta$ of operations if and only if $\ensuremath{\calP}\xspace$ accepts a corresponding sequence $\alpha$ of modifications. With a run $\rho$ of $\calM$ we can associate an input sequence $\alpha(\rho)$ on a sufficiently large domain as follows: each transition of the form $(p,\ensuremath{\text{inc}}(c_i),q)$ gives rise to an insertion $\mtext{ins}\xspace_{C_i}(d)$, for some domain value $d$ currently not in $C_i$. Likewise, each operation $(p,\ensuremath{\text{dec}}(c_i),q)$ corresponds to a deletion $\mtext{del}\xspace_{C_i}(d)$.
Finally, operations $(p,\ifzero(c_i),q)$ correspond alternatingly to operations $\mtext{ins}\xspace_{Z_i}()$ and $\mtext{del}\xspace_{Z_i}()$. The semi-determinism of $\calM$ ensures that there is always at most one applicable transition and enables the program $\ensuremath{\calP}\xspace$ to keep track of the state of $\calM$. The program ensures that only applicable transitions are taken. The program $\ensuremath{\calP}\xspace$ has one auxiliary bit $R_p$ for every state $p$ of $\calM$, an ``error bit'' $R_e$ and the query bit $\mtext{Acc}$. During a ``simulation'' the current state $p$ of $\calM$ corresponds to a program state in which exactly the auxiliary bit $R_p$ is true (and $\mtext{Acc}$ if $p\in F$). As soon as the input sequence contains an operation that does not correspond to an applicable transition of $\calM$ (either because no transition exists or because it cannot be applied due to a counter value), the error bit $R_e$ is switched on and remains on forever. The update formulas of $\ensuremath{\calP}\xspace$ are as follows. \begin{align*} \uf{R_q}{\mtext{ins}\xspace\;C_i}{u}{} &\df \neg C_i(u) \land \neg R_e \land \bigvee_{(p, \ensuremath{\text{inc}}(c_i) , q) \in \Delta} R_{p} \\ \uf{R_e}{\mtext{ins}\xspace\;C_i}{u}{} &\df R_e \lor C_i(u) \lor \bigvee_{p\in X} R_{p}\\ \uf{\mtext{Acc}}{\mtext{ins}\xspace\;C_i}{u}{} &\df \neg C_i(u) \land \neg R_e \land \bigvee_{\substack{(p, \ensuremath{\text{inc}}(c_i) , q) \in \Delta \\ \text{with $q \in F$} }} R_{p} \end{align*} Here, $X$ is the set of states $p$ from $\calM$ for which no transition $(p,\ensuremath{\text{inc}}(c_i),q)$ exists in $\Delta$.
Deletions are handled similarly: \begin{align*} \uf{R_q}{\mtext{del}\xspace\;C_i}{u}{} &\df C_i(u) \land \neg R_e \land \bigvee_{(p, \ensuremath{\text{dec}}(c_i) , q) \in \Delta} R_{p} \\ \uf{R_e}{\mtext{del}\xspace\;C_i}{u}{} &\df R_e \lor \neg C_i(u) \lor \bigvee_{p\in Y} R_{p}\\ \uf{\mtext{Acc}}{\mtext{del}\xspace\;C_i}{u}{} &\df C_i(u) \land \neg R_e \land \bigvee_{\substack{(p, \ensuremath{\text{dec}}(c_i) , q) \in \Delta \\ \text{with $q \in F$} }} R_{p} \end{align*} Here, $Y$ is the set of states $p$ from $\calM$ for which no transition $(p,\ensuremath{\text{dec}}(c_i),q)$ exists in $\Delta$. Modifications to $Z_i$ are handled as follows: \begin{align*} \uf{R_q}{\mtext{ins}\xspace\;Z_i}{}{} \df &\neg \exists x C_i(x) \land \neg R_e \land \bigvee_{(p, \ifzero(c_i) , q) \in \Delta} R_{p} \\ \uf{R_e}{\mtext{ins}\xspace\;Z_i}{}{} \df & R_e \lor \exists x C_i(x) \lor \bigvee_{p\in Z} R_{p}\\ \uf{\mtext{Acc}}{\mtext{ins}\xspace\;Z_i}{}{} \df & \neg \exists x C_i(x) \land \neg R_e \land \bigvee_{\substack{(p, \ifzero(c_i) , q) \in \Delta \\ \text{with $q \in F$} }} R_{p}\\ \end{align*} Here, $Z$ is the set of states $p$ from $\calM$ for which no transition $(p,\ifzero(c_i),q)$ exists in $\Delta$. Deletions of input bits are handled exactly like insertions. Now we prove that $\calM$ has an accepting run if and only if there is a modification sequence accepted by $\ensuremath{\calP}\xspace$. (only-if) Let $\rho$ be an accepting run of $\calM$ and let $m$ be the maximum value that a counter of $\calM$ assumes in $\rho$. It is not hard to prove by induction that there is a modification sequence on every domain with at least $m$ elements that corresponds to $\rho$ in the sense described above. (if) For the other direction assume that $\alpha = \delta_1 \cdots \delta_n$ is a modification sequence over domain $\ensuremath{D}\xspace$ that is accepted by $\ensuremath{\calP}\xspace$.
Let $\ensuremath{\struc}\xspace_0$ be the initial state of $\ensuremath{\calP}\xspace$ for $\ensuremath{D}\xspace$ and let $\ensuremath{\struc}\xspace_i$ for $i \in \{1, \ldots, n\}$ be the state reached by $\ensuremath{\calP}\xspace$ after application of $\delta_1 \cdots \delta_i$. Then, by definition of the update formulas of $\ensuremath{\calP}\xspace$ and because $\ensuremath{\struc}\xspace_n$ is accepting, the bit $R_e^{\ensuremath{\struc}\xspace_i}$ is not true for any $\ensuremath{\struc}\xspace_i$, no element is inserted into $C_i$ when it was already contained in $C_i$, and, likewise, no element is deleted from $C_i$ when it is not contained. The corresponding accepting run of $\calM$ is defined by the sequence $(q_0, \theta_0, q_1) \ldots (q_{n-1}, \theta_{n-1}, q_n)$ of transitions where $q_i$ is the unique state $q$ for which $R^{\ensuremath{\struc}\xspace_i}_{q}$ is true. Further, the value of $\theta_i$ is $\ensuremath{\text{inc}}(c_j)$ if $\delta_{i+1}$ inserts an element into $C_j$, $\ensuremath{\text{dec}}(c_j)$ if $\delta_{i+1}$ deletes an element from $C_j$, and $\ifzero(c_j)$ if $\delta_{i+1}$ modifies~$Z_j$. \item We note that in the proof of part (a) quantification is only needed for testing whether the input relations representing the counters are empty. A $\DynClass{Prop}(1,2)$-program can simulate this check with two lists as in Example~\ref{example:emptylist} for the relations $C_1$ and $C_2$. When an insertion $\mtext{ins}\xspace_{C_i}(d)$ occurs, corresponding to an operation $(p, \ensuremath{\text{inc}}(c_i), q)$ in $\calM$, the element $d$ is appended to the end of the list for $C_i$. Analogously, for a deletion $\mtext{del}\xspace_{C_i}(d)$ the element $d$ is removed from the list for $C_i$. As shown in Example~\ref{example:emptylist}, the dynamic program maintains auxiliary bits $B_1,B_2$ such that $B_i$ is true if and only if $C_i$ is not empty.
These bits can then be used by the update formulas instead of the quantification. The rest of the proof is then analogous to the proof of (a). \item In this reduction the counters of the CA are represented by lists, as in (b), but the lists are encoded with (at most) binary \emph{input relations}. Consequently, transitions of $\calM$ correspond to (bounded-length) \emph{sequences} of modifications for a dynamic program. For each counter $c_i$ the program \ensuremath{\calP}\xspace uses one binary input relation $\mtext{List}_i$, one unary input relation $\mtext{In}_i$ that contains all elements used in the list, three unary input relations $\mtext{Min}_i$, $\mtext{Last}_i$, $\mtext{NextLast}_i$ to mark special elements, several auxiliary bits to monitor whether all these input relations are used as intended, and a bit $\mtext{NonEmpty}_i$ which states whether $\mtext{List}_i$ is currently non-empty. We now describe how to construct a modification sequence $\alpha=\alpha(\rho)$ from a run $\rho$ of a given 2CA $\calM$ such that $\alpha$ is accepted by \ensuremath{\calP}\xspace if and only if $\rho$ is accepting. Before the actual simulation of $\calM$ can start, $\alpha$ has to initialize the input relations apart from~$\mtext{List}_i$. To this end, \ensuremath{\calP}\xspace expects as the first three modifications the insertion of one element into $\mtext{Min}_i$, $\mtext{Last}_i$ and $\mtext{In}_i$. This element will serve as the head of the list. A transition of $\calM$ that increments counter $c_i$ is translated into a series of modifications that altogether insert a new element $a$ into $\mtext{In}_i$ as follows. First, $a$ is inserted into $\mtext{NextLast}_i$ and thus marked as to be inserted at the end of the list. Next, the tuple $(b,a)$ is inserted into $\mtext{List}_i$, where $b$ is the unique element with $b \in \mtext{Last}_i$.
The list is surely not empty after the insertion of $a$, so $\mtext{NonEmpty}_i$ is set to true. After that, $b$ is removed from $\mtext{Last}_i$, and $a$ is inserted into $\mtext{In}_i$ and $\mtext{Last}_i$ and removed from $\mtext{NextLast}_i$. If the modification sequence does not follow this protocol, \ensuremath{\calP}\xspace assumes a rejecting state forever. Because each of the relations $\mtext{Min}_i$, $\mtext{Last}_i$, $\mtext{NextLast}_i$ contains at most one element at every time, \ensuremath{\calP}\xspace can indeed check whether all these modifications occur in the right order and on the right elements. Similarly, a transition of $\calM$ decrementing $c_i$ is translated into a series of modifications that altogether remove the unique element $a \in \mtext{Last}_i$ from the corresponding list as follows. Let $(b,a)$ be the tuple in $\mtext{List}_i$ whose second component is $a$. The first modification has to be the insertion of $b$ into $\mtext{NextLast}_i$; after that, $(b,a)$ is deleted from $\mtext{List}_i$. If $b \in \mtext{Min}_i$ then the list is now empty and $\mtext{NonEmpty}_i$ is set to false. Then $a$ has to be removed from $\mtext{In}_i$ and $\mtext{Last}_i$, and $b$ has to be inserted into $\mtext{Last}_i$ and removed from $\mtext{NextLast}_i$. It is straightforward but cumbersome to give the update formulas, so they are omitted here. Otherwise, that is, besides the actual translation of a single step of $\calM$, the proof is analogous to the proof of (a). \end{proofenum} } The next result shows that emptiness of $\DynClass{Prop}(1,1)$-programs is decidable, yielding a clean boundary between decidable and undecidable fragments. \atheorem{theorem:emptiness:unarydynprop}{ \Emptiness is decidable for $\DynClass{Prop}(1,1)$-programs.
} \aproof{ The proof uses the following two simple observations about $\DynClass{Prop}(1,1)$-programs \ensuremath{\calP}\xspace. \begin{itemize} \item The initialization formulas of \ensuremath{\calP}\xspace assign the same $\ensuremath{\calA}\xspace$-color to all elements. This color and the initial auxiliary bits only depend on the size of the domain. Furthermore, there is a number $n(\ensuremath{\calP}\xspace)$, depending solely on the initialization formulas, such that the initial auxiliary bits and $\ensuremath{\calA}\xspace$-colors are the same for all empty databases with at least $n(\ensuremath{\calP}\xspace)$ elements. This observation actually also holds for $\DynClass{FO}(1,1)$-programs. \item When \ensuremath{\calP}\xspace reacts to a modification $\delta=(o,a)$, the new ($\ensuremath{\tau}\xspace$-)color of an element $b\not=a$ only depends on $o$, the old color of $b$, the old color of $a$, and the $0$-ary relations. In particular, if two elements $b_1,b_2$ (different from $a$) have the same color before the update, they both have the same new color after the update. Thus, the overall update basically consists of assigning new colors to each color (for all elements except~$a$), and the appropriate handling of the element~$a$ and the $0$-ary relations. \end{itemize} We will show below that the behavior of $\DynClass{Prop}(1,1)$-programs can be simulated by an automaton model with a decidable emptiness problem, which we introduce next. A \emph{multicounter automaton} (short: MCA) is a counter automaton that is not allowed to test whether a counter is zero, i.e.\ the transition relation $\Delta$ is a subset of $Q \times \{\ensuremath{\text{inc}}(c), \ensuremath{\text{dec}}(c) \mid c \in C\} \times Q$.
A \emph{transfer multicounter automaton} (short: TMCA) is a multicounter automaton which has, in addition to the increment and decrement operations, an operation that simultaneously transfers the content of each counter to another counter. More precisely, the transition relation $\Delta$ is a subset of $Q \times (\{\ensuremath{\text{inc}}(c), \ensuremath{\text{dec}}(c) \mid c \in C\} \cup \{t \mid t: C \rightarrow C\}) \times Q$. Applying a transition $(p, t, q)$ to a configuration $(p, \vec n)$ yields a configuration $(q, \vec n')$ with $n'_c \df \sum_{t(d) = c} n_d$ for every $c \in C$. A configuration $(q, \vec n)$ of a TMCA is \emph{accepting} if $q\in F$. The emptiness problem for TMCAs\footnote{We note that (the complement of) this emptiness problem is often called the \emph{control-state reachability} problem.} is decidable by reduction to the coverability problem for transfer Petri nets,\footnote{The simulation of states by counters can be done as in \cite[Lemma 2.1]{HopcroftP79}.} which is known to be decidable \cite{DufourdFS98}. Let $\ensuremath{\calP}\xspace$ be a $\DynClass{Prop}$-program over unary schema $\ensuremath{\tau}\xspace = \ensuremath{\calI}\xspace \cup \ensuremath{\calA}\xspace$ with query symbol $\ensuremath{\calQ}\xspace$, which may be $0$-ary or unary. Let $\Gamma_0$ be the set of all $0$-ary (atomic) types over $\ensuremath{\tau}\xspace$ and let $\Gamma_1$ be the set of $\ensuremath{\tau}\xspace$-colors. We construct a transfer multicounter automaton $\calM$ with counter set $Z_1 = \{z_{\gamma} \mid \gamma \in \Gamma_1\}$. The state set $Q$ of $\calM$ contains $\Gamma_0$, the only accepting state $f$, and some further ``intermediate'' states to be specified below.
The intuition is that whenever $\ensuremath{\calP}\xspace$ can reach a state $\ensuremath{\struc}\xspace$, then $\calM$ can reach a configuration $c = (p, \vec n)$ such that $p$ reflects the $0$-ary relations in $\ensuremath{\struc}\xspace$ and, for every $\gamma\in\Gamma_1$, $n_{\gamma}$ is the number of elements of color $\gamma$ in $\ensuremath{\struc}\xspace$. The automaton $\calM$ works in two phases. First, $\calM$ guesses the size $n$ of the domain of the initial database. To this end, it increments the counter $z_{\gamma}$ to $n$, where $\gamma$ is the color assigned to all elements by the initialization formula for domains of size $n$, and it assumes the state corresponding to the initial $0$-ary relations for a database of size $n$. Here the first of the above observations is used. Then $\calM$ simulates an actual computation of $\ensuremath{\calP}\xspace$ from the initial database of size $n$ as follows. Every modification $\mtext{ins}\xspace_S(a)$ (or $\mtext{del}\xspace_S(a)$, respectively) in $\ensuremath{\calP}\xspace$ is simulated by a sequence of three transitions in $\calM$: \begin{itemize} \item First, the counter $z_{\gamma}$, where $\gamma$ is the color of $a$ before the modification, is decremented. \item Second, the counters for all colors are adapted according to the update formulas of $\ensuremath{\calP}\xspace$. \item Third, the counter $z_{\gamma'}$, where $\gamma'$ is the color of $a$ after the modification, is incremented. \end{itemize} If a modification changes an input bit, the first and third step are omitted. The state of $\calM$ is changed to reflect the changes of the $0$-ary relations of \ensuremath{\calP}\xspace. For this second phase the second of the above observations is used. To detect when the simulation of $\ensuremath{\calP}\xspace$ reaches a state with non-empty query relation $\ensuremath{\calQ}\xspace$, states $p \in \Gamma_0$ may have a transition to the accepting state $f$. }{ ~ }{ Now we describe $\calM$ in detail.
We begin with the simulation of the initialization step. If the quantifier depth of $\ensuremath{\calP}\xspace$ is $q$, then $\calM$ non-deterministically guesses whether the domain is of size $1, \ldots, q$ or at least $q+1$. To this end the automaton has $q+1$ additional states $p_{1},\ldots,p_{q+1}$, and non-deterministically chooses one such state $p_i$. Recall that the initial $\ensuremath{\calA}\xspace$-colors as well as the auxiliary bits depend only on the size of the domain, and that they are the same for all domains of size $\geq q+1$. Let $\gamma_0$ be the $0$-ary type and $\gamma_1$ be the color assigned to domains of size $i$. Now, $\calM$ increments the counter $z_{\gamma_1}$ to $i$ (or to at least $i$ if $i = q+1$) using some further intermediate states. Afterwards $\calM$ assumes state $\gamma_0$. Next we explain how a computation of $\ensuremath{\calP}\xspace$ is simulated. We first deal with modifications to unary input relations. As the effects of an update depend on the operation that is applied to an element, the color of that element and the $0$-ary relations, $\calM$ has one chain of transitions for every such combination. So, for every state $p\in \Gamma_0$, every color $\gamma \in \Gamma_1$ and every $o \in \{\mtext{ins}\xspace_S, \mtext{del}\xspace_S\}$ with $S \in \ensuremath{\calI}\xspace$ and $\ensuremath{\text{Ar}}(S)=1$ there are states $q_{p,\gamma,o}^1$ and $q_{p,\gamma,o}^2$ which are in charge of the simulation of an update when the modification $\delta = (o,a)$ occurs in a situation with $0$-ary type $p$ to an element $a$ of color $\gamma$. A transition from $p$ to $q_{p,\gamma, o}^1$ decreases the counter $z_{\gamma}$; a transition from $q_{p,\gamma,o}^2$ increases the counter for the new color of the modified element and assumes the state $p'$ corresponding to the new $0$-ary type. These two transitions simulate the changes of the auxiliary relations regarding the modified element.
A transition from $q_{p,\gamma,o}^1$ to $q_{p,\gamma,o}^2$ handles the changes to the elements not (directly) affected by~$\delta$. As explained above, for given $p$, $o$ and $\gamma$, the new color of an element depends only on its old color. From the update formulas of $\ensuremath{\calP}\xspace$ we extract a function $g_{p,\gamma,o} : \Gamma_1 \rightarrow \Gamma_1$ which describes these changes. From $g_{p,\gamma,o}$ we build the function $t: Z_1 \rightarrow Z_1$ that describes the transfer as $t(z_{\gamma'}) = z_{g_{p,\gamma,o}(\gamma')}$. Similarly, modifications to input bits are simulated. Let $o \in \{\mtext{ins}\xspace_S, \mtext{del}\xspace_S\}$ with $S \in \ensuremath{\calI}\xspaceSchema$ and $\ensuremath{\text{Ar}}(S)=0$ be an operation on a $0$-ary input relation. For states $p, p' \in \Gamma_0$ there is a transition $(p, t, p')$ if $t(z_{\gamma'}) = z_{g_{p,\gamma,o}(\gamma')}$ with $g_{p,\gamma,o} : \Gamma_1 \rightarrow \Gamma_1$ as above and $p'$ corresponds to the $0$-ary type after the update. Finally, transitions from $p \in \Gamma_0$ to $f$ are introduced. The kind of these transitions depends on the arity of $\ensuremath{\calQ}s$. If $\ensuremath{\calQ}s$ is $0$-ary and $\ensuremath{\calQ}s \in p$, then there is a transfer transition $(p, id, f)$ where $id$ is the identity. If $\ensuremath{\calQ}s$ is unary there is a transition $(p,\ensuremath{\text{dec}}(\gamma),f)$ for every color $\gamma \in \Gamma_1$ with $\ensuremath{\calQ}s \in \gamma$. It is not hard to show that there is a modification sequence for $\ensuremath{\calP}\xspace$ that leads to a non-empty query relation if and only if there is a run of $\calM$ that reaches~$f$.
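Operationally, the three-transition simulation of a single modification is easy to trace on the counter abstraction. The following Python fragment is purely illustrative (all names are ours; the dictionary \texttt{g} stands in for the function $g_{p,\gamma,o}$ extracted from the update formulas):

```python
# Illustrative sketch only: one simulated modification on the abstraction
# (p, counts), where counts maps each unary color to its number of elements.

def simulate_modification(p, counts, old_color, g, new_state, new_color):
    """Simulate the three transitions of M for one modification of P.

    g         -- maps each old color to its new color (plays the role of
                 the function g_{p,gamma,o} from the update formulas)
    new_state -- the 0-ary type p' after the modification
    new_color -- the color of the modified element after the modification
    """
    counts = dict(counts)
    counts[old_color] -= 1              # 1st transition: decrement z_gamma
    transferred = {}                    # 2nd transition: transfer step t
    for color, n in counts.items():     # untouched elements change color via g
        transferred[g[color]] = transferred.get(g[color], 0) + n
    transferred[new_color] = transferred.get(new_color, 0) + 1
    return new_state, transferred       # 3rd transition: increment z_gamma'

# Example with two colors: inserting an element of color 'out' recolors it
# to 'in' while all other elements keep their colors (g is the identity).
state, counts = simulate_modification(
    'p0', {'out': 3, 'in': 1}, 'out',
    g={'out': 'out', 'in': 'in'}, new_state='p1', new_color='in')
```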
} \subsection{Emptiness of consistent dynamic programs}\label{section:emptinessconsistent} \toAppendix{\subsection{Proofs for Section \ref{section:emptinessconsistent}}} \makeatletter{} Some readers of the proof of Theorem \ref{theorem:emptiness:undecidables} might have got the impression that we were cheating a bit, since the dynamic programs it constructs do not behave as one would expect: in all three cases each modification sequence $\alpha$ that yields a non-empty query relation $\ensuremath{\calQ}s$ can be changed, e.g., by switching two operations, into a sequence that does not correspond to a run of the CA and therefore does \emph{not} yield a non-empty query relation. That is, the program \ensuremath{\calP}\xspace is \emph{inconsistent} because it might yield different results when the same database is reached through two different modification sequences. It seems that this inconsistency made the proof of Theorem~\ref{theorem:emptiness:undecidables} much easier. Therefore, the question arises whether the emptiness problem becomes easier if it can be taken for granted that the given dynamic program is actually consistent. We study this question in this subsection and will investigate the related decision problem whether a given dynamic program is consistent in the next section. As Table \ref{tab:results} shows, the emptiness problem for consistent dynamic programs is indeed easier in the sense that it is decidable for a considerably larger class of dynamic programs. While emptiness for general \DynClass{$\class$}lass{FO} programs is already undecidable for the tiny fragment with unary input relations and $0$-ary auxiliary relations, it is decidable for consistent \DynClass{$\class$}lass{FO} programs with unary input and unary auxiliary relations.
Likewise, for \DynClass{$\class$}lass{Prop} there is a significant gap: for consistent programs it is decidable for arbitrary input arities (with unary auxiliary relations) or arbitrary auxiliary arities (with unary input relations), but for general programs emptiness becomes undecidable as soon as binary relations are available (in the input \emph{or} in the auxiliary database). We call a dynamic program $\ensuremath{\calP}\xspace$ \emph{consistent}, if it maintains a query with respect to an empty initial database, that is, if, for all modification sequences $\alpha$ to an empty initial database $\ensuremath{\calD}\xspace_\emptyset$, the query relation in $\updateStateI{\alpha}{\ensuremath{\calD}\xspace_\emptyset}$ depends only on the database $\updateDB{\alpha}{\ensuremath{\calD}\xspace_\emptyset}$. In the remainder of this subsection we show the undecidability and decidability results stated in Table \ref{tab:results}. \atheorem{theorem:emptiness:consistentfobinary}{ The emptiness problem is undecidable for \begin{enumerate} \item consistent $\DynClass{$\class$}lass{FO}IA{2}{0}$-programs, and \item consistent $\DynClass{$\class$}lass{FO}IA{1}{2}$-programs. \end{enumerate} } \aproofidea{ Statement (a) is a corollary of the proof of Theorem \ref{theorem:generalundecidability}, as the reduction in that proof always yields a consistent program. For (b), we present another reduction from the emptiness problem for semi-deterministic 2CAs (see also the proof of Theorem \ref{theorem:emptiness:undecidables}). From a semi-deterministic 2CA $\calM$ we will construct a consistent Boolean dynamic program $\ensuremath{\calP}\xspace$ with a single unary input relation $U$. The query maintained by \ensuremath{\calP}\xspace is ``$\calM$ halts after at most $|U|$ steps''. Clearly, such a program has a non-empty query result for some database and some modification sequence if and only if $\calM$ has an accepting run. 
The general idea is that $\ensuremath{\calP}\xspace$ simulates one step of the run of $\calM$ whenever a new element is inserted to $U$. A slight complication arises from deletions from $U$, since it is not clear how one could simulate $\calM$ one step ``backwards''. Therefore, when an element is deleted from $U$, \ensuremath{\calP}\xspace freezes the simulation and stores the size $m$ of $U$ before the deletion. It continues the simulation as soon as the current size $\ell$ of $U$ exceeds $m$ for the first time. }{ ~ }{ \newcommand{\ensuremath{U_{\text{current}}}\xspace}{\ensuremath{U_{\text{current}}}\xspace} \newcommand{\ensuremath{U_{\text{acc}}}\xspace}{\ensuremath{U_{\text{acc}}}\xspace} \newcommand{\ensuremath{U_{\text{max}}}\xspace}{\ensuremath{U_{\text{max}}}\xspace} To store $m$ and $\ell$ (and the values of the counters, for that matter), \ensuremath{\calP}\xspace uses an auxiliary binary relation $R_<$ which, at any time, is a linear order on the set of those elements that have been inserted to $U$ at some point. Whenever an element is inserted to $U$ for the first time, it becomes the maximum element of the linear order in $R_<$. Deletions and reinsertions do not affect $R_<$. To actually store $\ell$ and $m$, \ensuremath{\calP}\xspace uses two unary relations $\ensuremath{U_{\text{current}}}\xspace$ and $\ensuremath{U_{\text{max}}}\xspace$. At any time, $\ensuremath{U_{\text{current}}}\xspace$ contains the $\ell$ smallest elements with respect to $R_<$, where $\ell$ is the size of $U$ at the time. Similarly, $\ensuremath{U_{\text{max}}}\xspace$ contains the $m$ smallest elements, with $m$ as described above. In particular, $\ensuremath{U_{\text{current}}}\xspace$ is empty if and only if $\ell=0$. In the same fashion, \ensuremath{\calP}\xspace uses two further unary auxiliary relations $C_1$ and $C_2$ representing the counters.
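The freeze-and-resume bookkeeping can be sketched concretely. In the following illustrative Python fragment (names are ours) the numbers $\ell$ and $m$ are kept as plain integers, whereas \ensuremath{\calP}\xspace encodes them as the prefixes $\ensuremath{U_{\text{current}}}\xspace$ and $\ensuremath{U_{\text{max}}}\xspace$ of the linear order $R_<$:

```python
# Illustrative sketch only: one step of M is simulated per insertion that
# makes U larger than it has ever been since the last "unfreezing".
# ell = |U| is implicit in self.U; m is the frozen size (|U_max|);
# in the dynamic program P both are encoded as prefixes of R_<.

class FreezeSim:
    def __init__(self):
        self.order = []   # R_<: elements in the order of their first insertion
        self.U = set()
        self.m = 0        # frozen size m
        self.steps = 0    # number of simulated steps of M

    def insert(self, a):
        if a not in self.order:
            self.order.append(a)     # a becomes the maximum element of R_<
        self.U.add(a)
        if len(self.U) > self.m:     # ell exceeds m for the first time:
            self.m = len(self.U)     # ... unfreeze and simulate one step of M
            self.steps += 1

    def delete(self, a):
        self.U.discard(a)            # freeze: m keeps the size before deletion
```

Deleting and reinserting the same element thus never advances the simulation, which is why the result depends only on the database reached, not on the modification sequence.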
If $\calM$ reaches an accepting state, \ensuremath{\calP}\xspace stores the current size $k$ of $U$ at this moment, with the help of another unary relation $\ensuremath{U_{\text{acc}}}\xspace$, that is, it simply lets $\ensuremath{U_{\text{acc}}}\xspace$ become a copy of $\ensuremath{U_{\text{current}}}\xspace$ after the current insertion. From that point on, that is, if $\ensuremath{U_{\text{acc}}}\xspace$ is non-empty, the query bit of \ensuremath{\calP}\xspace is true whenever $\ell\ge k$. Besides the one binary and five unary relations, \ensuremath{\calP}\xspace has one $0$-ary relation $Q_p$, for every state $p$ of $\calM$. As an illustration we give two update formulas of \ensuremath{\calP}\xspace that maintain $C_1$ and $Q_q$, for some state $q$, under insertions to $U$, respectively. \allowdisplaybreaks \begin{align*} \uf{C_1}{\mtext{ins}\xspace\;U}{u}{x} &\df \big((U(u) \lor (\ensuremath{U_{\text{current}}}\xspace\neq\ensuremath{U_{\text{max}}}\xspace) \lor \bigvee_{\substack{(p, \ensuremath{\text{inc}}(c_2), q) \in \Delta \\ (p, \ensuremath{\text{dec}}(c_2), q) \in \Delta \\(p, \ifzero(c_2), q) \in \Delta}} Q_p) \land C_1(x)\big) \lor \\ & \Big(\neg U(u) \land (\ensuremath{U_{\text{current}}}\xspace=\ensuremath{U_{\text{max}}}\xspace) \land \\ & \quad \big(\bigvee_{(p, \ensuremath{\text{inc}}(c_1), q) \in \Delta} \big( Q_{p} \land \forall y (C_1(y) \lor x \leq y) \big)\\ & \quad \vee \bigvee_{(p, \ensuremath{\text{dec}}(c_1), q) \in \Delta} \big( Q_p \land C_1(x) \land \exists y (C_1(y) \land x < y) \big)\big)\Big) \\ \uf{Q_q}{\mtext{ins}\xspace\;U}{u}{} &\df \big((U(u) \lor (\ensuremath{U_{\text{current}}}\xspace\neq\ensuremath{U_{\text{max}}}\xspace)) \land Q_q \big) \lor \Big( \neg U(u) \land (\ensuremath{U_{\text{current}}}\xspace=\ensuremath{U_{\text{max}}}\xspace) \land \\ & \big(\bigvee_{\substack{ (p, \ensuremath{\text{inc}}(c_j), q) \in \Delta\\j\in\{1,2\}}} Q_{p}\\ &\vee \bigvee_{\substack{ (p, \ensuremath{\text{dec}}(c_j), q) \in
\Delta\\j\in\{1,2\}}} (Q_p \land \exists x C_j(x)) \\ & \vee \bigvee_{\substack{ (p, \ifzero(c_j), q) \in \Delta\\j\in\{1,2\}}} (Q_p \land \neg \exists x C_j(x)) \big)\Big) \end{align*} Here, $\ensuremath{U_{\text{current}}}\xspace=\ensuremath{U_{\text{max}}}\xspace$ abbreviates the formula $\forall y\; (\ensuremath{U_{\text{current}}}\xspace(y) \leftrightarrow \ensuremath{U_{\text{max}}}\xspace(y))$. We note that $\ufwa{C_1}{\mtext{ins}\xspace\;U}$ does not test the applicability of transitions directly, but $\ufwa{Q_q}{\mtext{ins}\xspace\;U}$ does. We recall that, thanks to semi-determinism of $\calM$, the next transition is always uniquely determined by the state of $\calM$ and the value of the affected counter. If no transition can be applied, the simulation does not set any bit $Q_i$ to true and the simulation basically stops. } \shortOrLong{Contrary to the case of not necessarily consistent programs, the emptiness problem is decidable for consistent $\DynClass{$\class$}lass{FO}IA{1}{1}$-programs. }{} \toLongAndAppendix{ Contrary to the case of not necessarily consistent programs, the emptiness problem is decidable for consistent $\DynClass{$\class$}lass{FO}IA{1}{1}$-programs. We will use the fact that the truth of first-order formulas with quantifier depth $k$ in a state of a $\DynClass{$\class$}lass{FO}IA{1}{1}$-program only depends on the number of elements of every color up to $k$. Intuitively the states of a consistent $\DynClass{$\class$}lass{FO}IA{1}{1}$-program can be approximated by a finite amount of information, namely the number of elements of every color up to some constant. This can be used to construct, from a consistent $\DynClass{$\class$}lass{FO}IA{1}{1}$-program $\ensuremath{\calP}\xspace$, a nondeterministic finite automaton $\calA$ that reads encoded modification sequences for $\ensuremath{\calP}\xspace$ in normal form and approximates the state of $\ensuremath{\calP}\xspace$ in its own state. 
In this way the emptiness problem for consistent $\DynClass{$\class$}lass{FO}IA{1}{1}$-programs reduces to the emptiness problem for nondeterministic finite automata. To formalize this, for a $\DynClass{$\class$}lass{FO}IA{1}{1}$-program $\ensuremath{\calP}\xspace$ let $c_1, \ldots, c_M$ be the colors over the schema of $\ensuremath{\calP}\xspace$. The \emph{characteristic vector} $\vec n(\ensuremath{\struc}\xspace) = (n_1, \ldots, n_M)$ for a state $\ensuremath{\struc}\xspace$ over the schema of $\ensuremath{\calP}\xspace$ stores for every color $c_i$ the number $n_i \in \ensuremath{\mathbb{N}}$ of elements of color $c_i$ in $\ensuremath{\struc}\xspace$. We also denote this number as $n_i(\ensuremath{\struc}\xspace)$. We write $n\simeq_k m$, for numbers $k,n,m$, if $n=m$ or both $n\ge k$ and $m\ge k$. We write $(n_1,\ldots,n_M) \simeq_k (n'_1,\ldots,n'_M)$, if for every $i \leq M$, $n_i \simeq_k n_i'$, and $\ensuremath{\struc}\xspace \simeq_k \ensuremath{\struc}\xspace'$ for two states $\ensuremath{\struc}\xspace$ and $\ensuremath{\struc}\xspace'$ if $\vec n(\ensuremath{\struc}\xspace) \simeq_k \vec n(\ensuremath{\struc}\xspace')$ and the bits in $\ensuremath{\struc}\xspace$ and $\ensuremath{\struc}\xspace'$ coincide. \begin{lemma}\label{lem:PropCons11} Let $\ensuremath{\calP}\xspace$ be a $\DynClass{$\class$}lass{FO}IA{1}{1}$-program with quantifier depth $q$ and let $\ensuremath{\struc}\xspace$ and $\ensuremath{\struc}\xspace'$ be two states for $\ensuremath{\calP}\xspace$. \begin{enumerate}[ref={\thetheorem\ (\alph*)}] \item $\ensuremath{\struc}\xspace \simeq_k \ensuremath{\struc}\xspace'$ if and only if $\ensuremath{\struc}\xspace \equiv_k \ensuremath{\struc}\xspace'$ for any $k \in \ensuremath{\mathbb{N}}$.\label{lem:CharacVectorimpliesFOtype} \item Let $a$ and $a'$ be elements from $\ensuremath{\struc}\xspace$ and $\ensuremath{\struc}\xspace'$ with the same color $c_i$ and let $k = q+1$.
If $\ensuremath{\struc}\xspace \simeq_k \ensuremath{\struc}\xspace'$ and $n_0(\ensuremath{\struc}\xspace) \simeq_{k+1} n_0(\ensuremath{\struc}\xspace')$ then $\updateState{\delta(a)}{\ensuremath{\struc}\xspace} \simeq_k \updateState{\delta(a')}{\ensuremath{\struc}\xspace'}$ for every modification $\delta$. \label{lem:UnaryStateProgression} \end{enumerate} \end{lemma} We recall that $\ensuremath{\struc}\xspace\equiv_k \ensuremath{\struc}\xspace'$ means that the two states satisfy exactly the same first-order formulas of quantifier depth (up to) $k$. \begin{proof} \begin{proofenum} \item It is easy to express with a first-order formula of quantifier depth $k$ that the number of elements of a color $c$ is exactly $k'$ for $k' < k$ or at least $k$. So the ``if'' direction follows. If $\ensuremath{\struc}\xspace \simeq_k \ensuremath{\struc}\xspace'$ holds, then Duplicator has a straightforward winning strategy in the $k$-round Ehrenfeucht-Fra\"iss\'e game, so $\ensuremath{\struc}\xspace \equiv_k \ensuremath{\struc}\xspace'$ follows. \item With part (a), $(\ensuremath{\struc}\xspace,a) \equiv_k (\ensuremath{\struc}\xspace',a')$. Since $k = q+1$, if elements $b$ and $b'$ from $\ensuremath{\struc}\xspace$ and $\ensuremath{\struc}\xspace'$ have the same color and $b = a$ if and only if $b'=a'$, they also have the same color in $\updateState{\delta(a)}{\ensuremath{\struc}\xspace}$ and $\updateState{\delta(a')}{\ensuremath{\struc}\xspace'}$. The claim of the lemma follows. \end{proofenum} \end{proof} With the help of the previous lemma, we can now show the following decidability result.} \atheorem{theorem:emptiness:consistentunaryaux}{ \Emptiness is decidable for consistent $\DynClass{$\class$}lass{FO}IA{1}{1}$-programs.
} \aproofidea{}{ The proof uses the fact that the truth of first-order formulas with quantifier depth $k$ in a state of a $\DynClass{$\class$}lass{FO}IA{1}{1}$-program only depends on the number of elements of every color up to $k$. The states of a consistent $\DynClass{$\class$}lass{FO}IA{1}{1}$-program can therefore be abstracted by a bounded amount of information, namely the number of elements of every color up to $k+1$. This can be used to construct, from a consistent $\DynClass{$\class$}lass{FO}IA{1}{1}$-program $\ensuremath{\calP}\xspace$, a nondeterministic finite automaton $\calA$ that reads encoded modification sequences for $\ensuremath{\calP}\xspace$ in normal form and represents the abstracted state of $\ensuremath{\calP}\xspace$ in its own state. In this way the emptiness problem for consistent $\DynClass{$\class$}lass{FO}IA{1}{1}$-programs reduces to the emptiness problem for nondeterministic finite automata. }{ We reduce the emptiness problem for consistent $\DynClass{$\class$}lass{FO}IA{1}{1}$-programs to the emptiness problem for nondeterministic finite automata. The intuition is as follows. From a consistent $\DynClass{$\class$}lass{FO}IA{1}{1}$-program $\ensuremath{\calP}\xspace$, we construct a nondeterministic finite automaton $\calA$ that reads encoded modification sequences for $\ensuremath{\calP}\xspace$ in normal form and approximates the state of $\ensuremath{\calP}\xspace$ in its own state. To this end $\calA$ has a state $q_\calE$ for every equivalence class $\calE$ of $\simeq_k$ for a well-chosen $k \in \ensuremath{\mathbb{N}}$. The automaton accepts if it reaches a state $q_\calE$ where $\calE$ corresponds to states of $\ensuremath{\calP}\xspace$ with non-empty query relation. We make this more precise now. 
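Concretely, the abstraction only compares color counts capped at the threshold $k$; the following Python check (illustrative only, with names of our own) decides $\simeq_k$-equivalence of two abstracted states given by their characteristic vectors and $0$-ary bits:

```python
# Illustrative sketch only: the capped comparison underlying the relation ~=_k.

def cap_equal(n, m, k):
    """n ~=_k m: equal, or both at least k (larger counts are not distinguished)."""
    return n == m or (n >= k and m >= k)

def states_equivalent(vec1, bits1, vec2, bits2, k):
    """S ~=_k S': the characteristic vectors agree up to the cap k and
    the 0-ary relations (bits) coincide."""
    return (len(vec1) == len(vec2)
            and all(cap_equal(n, m, k) for n, m in zip(vec1, vec2))
            and bits1 == bits2)
```

For instance, two states with $5$ and $7$ elements of some color are identified under $\simeq_3$ but separated under $\simeq_6$, matching the fact that the cap is what bounds the information the automaton has to remember.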
The following facts are exploited in the proof: \begin{itemize} \item As $\ensuremath{\calP}\xspace$ is consistent, if there is a modification sequence that leads to a state with a non-empty query relation, then there is an insertion sequence in normal form that leads to such a state. \item If two elements $a, a'$ have the same color in some state of the program, then they still have the same color after an element $b \neq a, a'$ has been modified. \item For knowing how a state $\ensuremath{\struc}\xspace$ is updated by $\ensuremath{\calP}\xspace$, it is enough to consider the $\simeq_k$ equivalence class of $\ensuremath{\struc}\xspace$ for a suitable $k$. \end{itemize} In an insertion sequence in normal form, an element is touched by at most $\ell$ insertions where $\ell$ is the number of unary relation symbols in $\ensuremath{\calI}\xspaceSchema$. As the insertions involving a single element occur consecutively in such a sequence, the occurring updates can be specified by ``extended'' update formulas of quantifier depth $\ell{}q$, by nesting the original update formulas of quantifier depth $q$. For $k \df \ell{}q + 1$, states $\ensuremath{\struc}\xspace$ and $\ensuremath{\struc}\xspace'$ with $\ensuremath{\struc}\xspace \simeq_k \ensuremath{\struc}\xspace'$ then meet the requirements of Lemma \ref{lem:UnaryStateProgression} when those extended update formulas are considered. The alphabet $\Sigma$ of $\calA$ is the set of \emph{proper} $\ensuremath{\calI}\xspaceSchema$-colors ($\neq c_0$). For every equivalence class $\calE$ of $\simeq_k$, for $k$ as chosen above, the automaton $\calA$ has a state $q_{\calE}$. The idea is that the automaton simulates $\ensuremath{\calP}\xspace$ by approximating the state of $\ensuremath{\calP}\xspace$ by its $\simeq_k$-equivalence class.
More precisely, whenever $\calA$ is in state $q_\calE$ after reading a word $w$ over $\Sigma$ then $\calE$ is the equivalence class of the state $\ensuremath{\struc}\xspace$ reached by $\ensuremath{\calP}\xspace$ after the modification sequence $\alpha$ corresponding to $w$. There is a small caveat to this. The state reached by $\ensuremath{\calP}\xspace$ after application of $\alpha$ is not solely determined by $\alpha$ but also by the size of the domain. The automaton has to take this into account. We now describe the behaviour of $\calA$ in detail. At the beginning of a computation the automaton non-deterministically guesses the (approximate) size of the domain, that is, a number $i$ from $\{1,\ldots, k\}$ and assumes state $q_\calE$ where $\calE$ is the equivalence class of $\simeq_k$ that corresponds to an initial state of $\ensuremath{\calP}\xspace$ with $i$ elements if $i < k$ and at least $i$ elements otherwise. Note that if $i = k$ then the automaton does not know the exact size of the domain for which it is simulating $\ensuremath{\calP}\xspace$. Yet, as long as there are at least $k$ $\ensuremath{\calI}\xspaceSchema$-uncolored elements, the exact number is not important. Afterwards $\calA$ simulates $\ensuremath{\calP}\xspace$. When in state $q_\calE$ and reading a symbol $c$, the automaton assumes state $q_{\calE'}$ where $\calE'$ is as follows: \begin{itemize} \item If $\calE$ indicates fewer than $k$ $\ensuremath{\calI}\xspaceSchema$-uncolored elements then $\calE'$ is the $\simeq_k$-equivalence class of any state $\ensuremath{\struc}\xspace'$ reached by $\ensuremath{\calP}\xspace$ from a state $\ensuremath{\struc}\xspace$ with $\simeq_k$-equivalence class $\calE$. \item If $\calE$ indicates at least $k$ $\ensuremath{\calI}\xspaceSchema$-uncolored elements, then $\calA$ guesses whether this is still the case after coloring one further element.
If yes, then $\calE'$ is the $\simeq_k$-equivalence class of any state $\ensuremath{\struc}\xspace'$ reached by $\ensuremath{\calP}\xspace$ from a state $\ensuremath{\struc}\xspace$ with $\simeq_k$-equivalence class $\calE$ and at least $k+1$ $\ensuremath{\calI}\xspaceSchema$-uncolored elements. Otherwise $\calE'$ is the $\simeq_k$-equivalence class of any state $\ensuremath{\struc}\xspace'$ reached by $\ensuremath{\calP}\xspace$ from a state $\ensuremath{\struc}\xspace$ with $\simeq_k$-equivalence class $\calE$ and at least $k$ $\ensuremath{\calI}\xspaceSchema$-uncolored elements. \end{itemize} That $\calE'$ is uniquely determined follows from the second and third fact from above. } \makeatletter{} The picture of decidability of emptiness for consistent programs for all classes of the form $\DynClass{$\class$}lass{FO}IA{\ell}{m}$ is pretty clear and simple: it is decidable if and only if $\ell= 1$ \emph{and} $m\le 1$. Now we turn our focus to the corresponding classes of consistent $\DynClass{$\class$}lass{Prop}$-programs. Here we do not have a full picture. We show in the following that it is decidable if $\ell= 1$ \emph{or} $m\le 1$. \begin{theorem}\label{theorem:emptiness:consistent:unaryDynProp} The emptiness problem is decidable for \begin{enumerate} \item consistent $\DynClass{$\class$}lass{Prop}I{1}$-programs. \item consistent $\DynClass{$\class$}lass{Prop}A{1}$-programs. \end{enumerate} \end{theorem} \shortOrLong{ \begin{proofideaof}{Theorem \ref{theorem:emptiness:consistent:unaryDynProp} (a)} The statement follows almost immediately from the fact that every consistent $\DynClass{$\class$}lass{Prop}I{1}$-program with $0$-ary query relations maintains a regular language \cite[Theorem 3.2]{GeladeMS12}. 
\end{proofideaof} }{ \begin{proofof}{Theorem \ref{theorem:emptiness:consistent:unaryDynProp} (a)} In \cite[Theorem 3.2]{GeladeMS12} it is shown that over databases with a linear order and unary relations every $\DynClass{$\class$}lass{Prop}I{1}$-program~$\ensuremath{\calP}\xspace$ with a Boolean query relation maintains a regular language over the $\ensuremath{\calI}\xspaceSchema$-colors of the $\ensuremath{\calI}\xspaceSchema$-colored elements. This result holds for arbitrary initialization, and its proof shows that an automaton for this regular language can be effectively constructed from the dynamic program. Therefore, to test emptiness of a program with a Boolean query relation it suffices to test emptiness of its automaton. Suppose that $\ensuremath{\calP}\xspace$ has a query relation with arity $k > 0$ and that there is a modification sequence $\alpha$ that yields a state $\ensuremath{\struc}\xspace$ where the query relation contains a tuple $\vec a \df (a_1, \ldots, a_k)$. Without loss of generality we assume that $\alpha$ is an insertion sequence in normal form and that elements of $\vec a$ are modified last (if they are modified at all). In other words, $\alpha$ is of the form $\alpha_1 \ldots \alpha_M$ where each $\alpha_i$ modifies exactly one element, and there is an $N$ such that $\alpha_j$ with $j \geq N$ only modifies elements of $\vec a$. We use a pumping argument to argue that if $\alpha$ is a shortest such sequence, then it is not very long. Then emptiness of $\ensuremath{\calP}\xspace$ can be tested by examining all such modification sequences. We use the following observations from \cite[Theorem 3.2]{GeladeMS12}: \begin{enumerate} \item After each update, all tuples of positions that have not been touched so far have the same (atomic) type. \item There is only a bounded number (depending only on the number and the maximal arity of the auxiliary relations of $\ensuremath{\calP}\xspace$) of different types of such tuples.
\end{enumerate} Let $\ensuremath{\struc}\xspace_i$ be the state reached by applying $\alpha_1 \ldots \alpha_i$. If $N$ is larger than the number of (atomic) $k$-ary types then, by the observations (a) and (b), there are $j$, $j'$ with $j < j'$ such that all $k$-tuples whose elements have not been touched so far have the same type in $\ensuremath{\struc}\xspace_j$ and~$\ensuremath{\struc}\xspace_{j'}$. In particular $\vec a$ has the same type in $\ensuremath{\struc}\xspace_j$ and $\ensuremath{\struc}\xspace_{j'}$. Hence, since $\ensuremath{\calP}\xspace$ is quantifier-free, it also has the same type in $\ensuremath{\struc}\xspace$ (the state reached by applying $\alpha$) and in the state reached by applying the modification sequence $\alpha_1 \ldots \alpha_j \alpha_{j'+1} \ldots \alpha_N \alpha_{N+1} \ldots \alpha_M$. Thus the query relation contains $\vec a$ in the latter state. \end{proofof} } \shortOrLong{To highlight the role of the Sunflower Lemma for the proof of Theorem \ref{theorem:emptiness:consistent:unaryDynProp} (b), we give a full exposition of this proof in the following. First, we sketch the basic proof idea for consistent $\DynClass{$\class$}lass{Prop}A{1}$-programs over graphs, i.e., the input schema contains a single binary relation symbol $E$.} {Before we prove the general statement of Theorem \ref{theorem:emptiness:consistent:unaryDynProp} (b), we first sketch the basic proof idea for consistent $\DynClass{$\class$}lass{Prop}A{1}$-programs over graphs, i.e., the input schema contains a single binary relation symbol $E$.} For simplicity we also assume a $0$-ary query relation. The general statement requires more machinery and is proved below. Our goal is to show that if such a program $\ensuremath{\calP}\xspace$ accepts some graph then it also accepts one with ``few'' edges, where ``few'' only depends on the schema of the program.
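The edge sets that drive the argument below come in three special shapes: sets of self-loops, sets of pairwise disjoint edges, and stars. As a purely illustrative aid (the names and the greedy matching heuristic are ours; it is the quantitative bound from the Sunflower Lemma, not this search, that guarantees a large enough graph contains one of the shapes), such a set can be located as follows:

```python
from collections import Counter

# Illustrative sketch only: exhibit p self-loops, p pairwise disjoint edges,
# or a p-star in a directed graph given as a list of edges (u, v).

def find_well_behaved(edges, p):
    loops = [e for e in edges if e[0] == e[1]]
    if len(loops) >= p:
        return ('self-loops', loops[:p])
    used, matching = set(), []
    for u, v in edges:                 # greedy maximal matching (heuristic)
        if u != v and u not in used and v not in used:
            matching.append((u, v))
            used.update((u, v))
    if len(matching) >= p:
        return ('disjoint', matching[:p])
    sources = Counter(u for u, v in edges if u != v)
    targets = Counter(v for u, v in edges if u != v)
    if sources and max(sources.values()) >= p:
        c = max(sources, key=sources.get)
        return ('out-star', [(u, v) for u, v in edges if u == c and u != v][:p])
    if targets and max(targets.values()) >= p:
        c = max(targets, key=targets.get)
        return ('in-star', [(u, v) for u, v in edges if v == c and u != v][:p])
    return None                        # possible when the graph has few edges
```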
To this end we show that if a graph $G$ accepted by $\ensuremath{\calP}\xspace$ contains many edges then one can find a large ``well-behaved'' edge set in $G$ from which edges can be removed without changing the result of~$\ensuremath{\calP}\xspace$. Emptiness can then be tested in a brute-force manner by trying out insertion sequences for all graphs with few edges (over a canonical domain $\{1,\ldots,n\}$). More concretely, we consider an edge set ``well-behaved'' if it consists only of self-loops, it is a set of disjoint non-self-loop edges, or it is a \emph{star}, that is, the edges share the same source node or the same target node. From the Sunflower Lemma \cite{ErdosR60} it follows that for every $p \in \ensuremath{\mathbb{N}}$ there is an $N_p \in \ensuremath{\mathbb{N}}$ such that every (directed) graph with at least $N_p$ edges contains $p$ self-loops, or $p$ disjoint edges, or a star with $p$ edges. Let us now assume, towards a contradiction, that a minimal graph $G$ accepted by $\ensuremath{\calP}\xspace$ has $N$ edges with $N>N_{M^2+1}$, where $M$ is the number of binary (atomic) types over the schema $\tau=\ensuremath{\calI}\xspaceSchema\cup\ensuremath{\calA}\xspaceSchema$ of $\ensuremath{\calP}\xspace$. Then $G$ either contains $M^2+1$ self-loops, or $M^2+1$ disjoint edges, or an $(M^2+1)$-star. Let us assume first that $G$ has a set $D \subseteq E$ of $M^2+1$ disjoint edges. We consider the state $\ensuremath{\struc}\xspace$ reached by $\ensuremath{\calP}\xspace$ after inserting all edges from $E \setminus D$ into the initially empty graph. Since $D$ contains $M^2+1$ edges, there is a subset $D' \subseteq D$ of size $M+1$ such that all edges in $D'$ have the same atomic type in state $\ensuremath{\struc}\xspace$. Let $\ensuremath{\struc}\xspace_0$ be the state reached by $\ensuremath{\calP}\xspace$ after inserting all edges in $D \setminus D'$ in $\ensuremath{\struc}\xspace$.
All edges in $D'$ still have the same type in $\ensuremath{\struc}\xspace_0$ since $\ensuremath{\calP}\xspace$ is a quantifier-free program (though this type can differ from the type in $\ensuremath{\struc}\xspace$). Let $e_1, \ldots, e_{M+1}$ be the edges in $D'$ and denote by $\ensuremath{\struc}\xspace_i$ the state reached by $\ensuremath{\calP}\xspace$ after inserting $e_1, \ldots, e_i$ in $\ensuremath{\struc}\xspace_0$. For each $i$, all edges $e_{i+1},\ldots, e_{M+1}$ have the same type $\gamma_i$ in state $\ensuremath{\struc}\xspace_i$, again. As the number of binary atomic types is $M$, there are $i<j$ such that $\gamma_i=\gamma_j$; thus $e_{M+1}$ has the same type in $\ensuremath{\struc}\xspace_i$ and~$\ensuremath{\struc}\xspace_{j}$. Therefore, inserting the edges $e_{j+1}, \ldots, e_{M+1}$ in $\ensuremath{\struc}\xspace_i$ yields a state with the same query bit as inserting those edges in $\ensuremath{\struc}\xspace_j$. As the query bit in the latter case is accepting, it is also accepting in the former case, yet in that case the underlying graph has fewer edges than~$G$, the desired contradiction. The case where $G$ contains $M^2+1$ self-loops is completely analogous. Now assume that $G$ contains a star with $M^2+1$ edges. The argument is very similar to the argument for disjoint edges. First insert all edges not involved in the star into an initially empty graph. Then there is a set $D$ of many star edges of the same type, and they still have the same type after inserting the other edges of the star. A graph with fewer edges that is accepted by~$\ensuremath{\calP}\xspace$ can then be obtained as above. The idea generalizes to input schemata with larger arity by applying the Sunflower Lemma in order to obtain a ``well-behaved'' sub-relation within an input relation that contains many tuples. In order to prove this generalization we first recall the Sunflower Lemma, and observe that it has an analogue for tuples.
The Sunflower Lemma was introduced in \cite{ErdosR60}; here we follow the presentation in \cite{Jukna01}. A \emph{sunflower} with $p$ \emph{petals} and a \emph{core} $Y$ is a collection of $p$ sets $S_1, \ldots, S_p$ such that $S_i \cap S_j = Y$ for all $i \neq j$. \begin{lemma}[Sunflower Lemma, \cite{ErdosR60}]\label{lemma:sunflower} Let $p \in \ensuremath{\mathbb{N}}$ and let $\calF$ be a family of sets each of cardinality $\ell$. If $\calF$ consists of more than $N_{\ell, p} \df \ell!(p-1)^\ell$ sets then $\calF$ contains a sunflower with $p$ petals. \end{lemma} We call a set $H$ of tuples of some arity $\ell$ a \emph{sunflower (of tuples)} if it has the following three properties. \begin{enumerate}[label=(\roman*)] \item All tuples in $H$ have the same equality type. \item There is a set $J\subset\{1,\ldots,\ell\}$ such that $t_j=t'_j$ for every $j\in J$ and all tuples $t,t'\in H$. \item For all tuples $t\not=t'$ in $H$ the sets $\{t_i\mid i\not\in J\}$ and $\{t'_i\mid i\not\in J\}$ are disjoint. \end{enumerate} We say that $H$ has $|H|$ petals. The following Sunflower Lemma for tuples has been stated in various variants in the literature, e.g., in \cite{Marx05,KratschW10}. \begin{lemma}[Sunflower Lemma for tuples]\label{lemma:sunflower-tuples} Let $\ell, p \in \ensuremath{\mathbb{N}}$ and let $R$ be a set of $\ell$-tuples. If $R$ contains more than $\bar{N}_{\ell, p}\df \ell^\ell p^\ell(\ell!)^2$ tuples then it contains a sunflower with $p$ petals. \end{lemma} \begin{proof} Let $R$ be an $\ell$-ary relation that contains more than $\bar{N}_{\ell, p}$ tuples. As there are fewer than $\ell^\ell$ equality types of $\ell$-tuples there is a set $R'\subseteq R$ of size at least $p^\ell(\ell!)^2$, in which all tuples have the same equality type. Application of Lemma 2 in \cite{KratschW10} yields\footnote{In \cite{KratschW10}, elements from the ``outer part'' of a petal can also occur in the ``core''.
As in $R'$ all tuples have the same equality type, this cannot happen in our setting.} a sunflower with $p$ petals. \end{proof} It is instructive to see how Lemma \ref{lemma:sunflower-tuples} shows that a graph with sufficiently many edges has many self-loops, disjoint edges or a large star: Self-loops correspond to the equality type of tuples $(t_1,t_2)$ with $t_1=t_2$, many disjoint edges to the case $J=\emptyset$ and the two possible kinds of stars to $J=\{1\}$ and $J=\{2\}$, respectively. \begin{proofof}{Theorem \ref{theorem:emptiness:consistent:unaryDynProp} (b)} Now the proof for binary input schemata easily translates to general input schemata. For the sake of completeness we give a full proof. Suppose that a consistent $\DynClass{$\class$}lass{Prop}A{1}$-program $\ensuremath{\calP}\xspace$ over schema $\ensuremath{\tau}\xspace$ with $0$-ary\footnote{At the end of the proof we discuss how to deal with unary query relations.} query relation accepts an input database $\ensuremath{\calD}\xspace$ that contains at least one relation $R$ with many tuples. Suppose that $R$ is of arity $\ell$ and contains more than $\bar{N}_{\ell, M^2+1}$ tuples where $M$ is the number of $\ell$-ary (atomic) types over the schema of $\ensuremath{\calP}\xspace$. We show that $\ensuremath{\calP}\xspace$ already accepts a database with fewer tuples than~$\ensuremath{\calD}\xspace$. By Lemma \ref{lemma:sunflower-tuples}, $R$ contains a sunflower $R'$ of size $M^2+1$. Consider the state $\ensuremath{\struc}\xspace$ reached by $\ensuremath{\calP}\xspace$ after inserting all tuples from $R \setminus R'$ into the initially empty database. Since $R'$ contains $M^2+1$ tuples, there is a subset $R'' \subseteq R'$ of size $M+1$ such that all tuples in $R''$ have the same atomic type in state $\ensuremath{\struc}\xspace$. Let $\ensuremath{\struc}\xspace_0$ be the state reached by $\ensuremath{\calP}\xspace$ after inserting all tuples in $R' \setminus R''$ in $\ensuremath{\struc}\xspace$.
All tuples in $R''$ still have the same type in $\ensuremath{\struc}\xspace_0$ since $\ensuremath{\calP}\xspace$ is a quantifier-free program (though this type can differ from the type in $\ensuremath{\struc}\xspace$). Let $\vec a_1, \ldots, \vec a_{M+1}$ be the tuples in $R''$ and denote by $\ensuremath{\struc}\xspace_i$ the state reached by $\ensuremath{\calP}\xspace$ after inserting $\vec a_1, \ldots, \vec a_i$ in $\ensuremath{\struc}\xspace_0$. In state $\ensuremath{\struc}\xspace_i$ all tuples $\vec a_{i+1},\ldots, \vec a_{M+1}$ have the same type, again. As the number of $\ell$-ary atomic types is $M$, there are $i<j$ such that $\vec a_{M+1}$ has the same type in $\ensuremath{\struc}\xspace_i$ and $\ensuremath{\struc}\xspace_{j}$. Therefore, inserting the tuples $\vec a_{j+1}, \ldots, \vec a_{M+1}$ in $\ensuremath{\struc}\xspace_i$ yields a state with the same query bit as inserting this sequence in $\ensuremath{\struc}\xspace_j$. As the query bit in the latter case is accepting, it is also accepting in the former case, yet in that case the underlying database has fewer tuples than~$\ensuremath{\calD}\xspace$, the desired contradiction. If $\ensuremath{\calP}\xspace$ has a unary query relation, then the proof has to be adapted as follows. For an accepted database $\ensuremath{\calD}\xspace$, the unary query relation contains some element $a$. Now $M$ is chosen as the number of $(\ell+1)$-ary atomic types (instead of the number of $\ell$-ary atomic types), and $R''$ is chosen as a sub-sunflower in which all tuples $(\vec a_1, a), \ldots, (\vec a_{M+1}, a)$ have the same atomic type. The rest of the proof is analogous. \end{proofof} \toMainAndAppendix{ The final result of this subsection gives a characterization of the class of queries maintainable by consistent $\DynClass{Prop}A{0}$-programs. This characterization is not needed to obtain decidability of the emptiness problem for such queries, since this is included in Theorem \ref{theorem:emptiness:consistent:unaryDynProp}.
However, we consider it interesting in its own right. } \toLongAndAppendix{ As $\DynClass{Prop}A{0}$-programs can only store a constant amount of information, it is not surprising that they can only maintain very simple properties. In fact, they can maintain exactly the modulo-like queries (to be defined precisely below). This characterization immediately yields an alternative emptiness test for consistent $\DynClass{Prop}A{0}$-programs. Furthermore, it partially answers a question by Dong and Su \cite{DongS97}. They asked whether all queries maintainable by $\DynClass{FO}A{0}$-programs can already be maintained by history-independent $\DynClass{FO}A{0}$-programs. The characterization shows that this is the case for $\DynClass{Prop}$-programs, since all modulo-like queries can easily be maintained by history-independent $\DynClass{Prop}A{0}$-programs. \newcommand{\modexpr}[3][\gamma]{\ensuremath{\#(#1)\equiv_{#2} #3}} \newcommand{\ktype}[1][k]{\ensuremath{#1\text{-type}}} We first fix some notation. For a tuple $\vec a = (a_1,\ldots,a_k)$ we write $\ensuremath{\text{dom}}(\vec a)$ for the set $\{a_1,\ldots,a_k\}$. The \emph{cardinality} of $\vec a$ is the size of $\ensuremath{\text{dom}}(\vec a)$. The \emph{strict underlying tuple} $\ensuremath{\text{st}}(\vec a)$ is the tuple obtained from $\vec a$ by removing all duplicate occurrences of data values (in a left-to-right fashion). A tuple $\vec a$ is \emph{duplicate-free} if $\ensuremath{\text{st}}(\vec a)=\vec a$. A \emph{strict atomic $k$-atom} is a relation atom $R(y_1,\ldots,y_r)$ for which $\{y_1,\ldots,y_r\}=\{x_1,\ldots,x_k\}$ with $x_i \neq x_j$ for $i \neq j$.
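For example (a small illustration of ours), over a schema with a single binary relation symbol $E$, the strict atomic $2$-atoms are $E(x_1,x_2)$ and $E(x_2,x_1)$; the atom $E(x_1,x_1)$ is not a strict atomic $2$-atom, as its variables are not pairwise distinct.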
A \emph{strict atomic $k$-type} $\gamma(x_1,\ldots,x_k)$ is a set of strict atomic $k$-atoms. For a tuple $\vec a = (a_1,\ldots,a_k)$, let $\iota$ be the valuation that maps $x_j$ to $a_j$, for each $j\in\{1,\ldots,k\}$. Then the \emph{strict atomic type} of the tuple $\vec a = (a_1,\ldots,a_k)$ in \ensuremath{\struc}\xspace is the set of strict atomic $k$-atoms $R(y_1,\ldots,y_r)$ for which $\iota(R(y_1,\ldots,y_r))$ yields a fact in \ensuremath{\struc}\xspace. We write $\ktype(\vec a)$ for the strict atomic type of a $k$-tuple $\vec a$. However, the expressive power of consistent $\DynClass{Prop}A{0}$-programs can be most easily characterized in terms of types of sets of elements, rather than types of tuples. The \emph{set type} $\ensuremath{\text{type}}(A)$ of a set $A=\{a_1,\ldots,a_k\}$ of size $k$ in a structure \ensuremath{\struc}\xspace is the set $\{\ktype(\pi(\vec a))\mid \pi\in S_k\}$. Here, $S_k$ denotes the set of permutations on $\{1,\ldots,k\}$ and $\pi(\vec a)$ denotes the tuple $(a_{\pi(1)},\ldots,a_{\pi(k)})$. We note that $\ensuremath{\text{type}}(A)$ does not depend on the chosen enumeration of $A$ and is therefore well-defined. It directly follows from this definition that the set types of two sets with $k$ elements are either equal or disjoint (as sets of strict atomic $k$-types). In other words, the set type of a set is determined by the strict atomic $k$-type of any duplicate-free tuple that can be constructed from the elements of the set. For a structure \ensuremath{\struc}\xspace and a set type $\gamma$, we denote by $\#_\ensuremath{\struc}\xspace(\gamma)$ the number of sets of set type $\gamma$ in $\ensuremath{\struc}\xspace$. A \emph{simple modulo expression} is an expression of the form $\modexpr{p}{q}$, where $p\ge 2$ and $q<p$ are natural numbers and $\gamma$ is a non-empty set type.
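As a small illustration (our example, not part of the formal development): over a schema with a single binary relation symbol $E$, consider a structure in which $E(a,b)$ holds but $E(b,a)$ does not. Then $\ktype[2]((a,b))=\{E(x_1,x_2)\}$ and $\ktype[2]((b,a))=\{E(x_2,x_1)\}$, and therefore $\ensuremath{\text{type}}(\{a,b\})=\bigl\{\{E(x_1,x_2)\},\{E(x_2,x_1)\}\bigr\}$. With this set type as $\gamma$, the simple modulo expression $\modexpr{2}{0}$ speaks about the parity of the number of such ``one-directional'' pairs.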
A structure \ensuremath{\struc}\xspace satisfies such an expression if $\#_\ensuremath{\struc}\xspace(\gamma)\equiv_p q$, that is, if the number of sets of type $\gamma$ in \ensuremath{\struc}\xspace has remainder $q$ when divided by $p$. A \emph{modulo expression} is a Boolean combination of simple modulo expressions. A \emph{modulo query} is a query that can be defined as the set of all (finite) models of some modulo expression. In the proof of the following theorem, we will consider modification sequences of a particular form that extends the normal form for insertion sequences over unary input schemas introduced in Section \ref{section:setting}. A general insertion sequence $\alpha$ is in \emph{normal form} if it fulfills the following three conditions. \begin{enumerate}[label=(M\arabic*)] \item If $\alpha$ inserts tuples of cardinality $k$ over a set $A$ of $k$ elements, then all such tuples are inserted in a contiguous subsequence $\alpha_A$ of $\alpha$. \item If $\alpha_A$ and $\alpha_{A'}$ are the contiguous subsequences for sets $A$ and $A'$ with $|A| > |A'|$, then $\alpha_A$ occurs before $\alpha_{A'}$ in $\alpha$. \item For all sets $A,B$ with the same set type in \ensuremath{\calI}\xspace, the subsequences $\alpha_A$ and $\alpha_B$ are isomorphic, that is, for some bijection $\pi:A\to B$, $\pi(\alpha_A)=\alpha_B$. \end{enumerate} } \atheorem{theorem:characterization}{ A Boolean query $\ensuremath{\calQ}$ can be maintained by a consistent $\DynClass{Prop}A{0}$ program if and only if it is a modulo query.
} \shortOrLong{Intuitively\footnote{The actual formalization uses sets of elements rather than tuples.}, a \emph{modulo query} is a Boolean combination of constraints of the form: the number of tuples that have some atomic type $\gamma$ is $q$ modulo $p$, for some natural numbers $p\ge 2$ and~$q<p$.}{} \toLongAndAppendix{ \begin{proof} (if) The set of Boolean queries that can be expressed by consistent $\DynClass{Prop}A{0}$ programs is closed under all Boolean operators. It therefore suffices to show that each query defined by a simple modulo expression $\modexpr{p}{q}$ can be maintained by a consistent $\DynClass{Prop}A{0}$ program \ensuremath{\calP}\xspace. The insertion of a tuple $\vec b$ into some relation $R$ changes the set type of exactly one set, $\{b_1,\ldots,b_r\} \df \ensuremath{\text{dom}}(\vec b)$. It is straightforward but tedious to construct a quantifier-free formula $\varphi^R_\gamma(y_1,\ldots,y_r)$ that expresses that the new type of the set $\{b_1,\ldots,b_r\}$ after inserting $\vec b$ into $R$ is $\gamma$; likewise for the old set type of $\{b_1,\ldots,b_r\}$. For deletions the situation is very similar. A $\DynClass{Prop}A{0}$ program can therefore use $p$ auxiliary bits to maintain the number of occurrences of set type $\gamma$ in \ensuremath{\struc}\xspace modulo $p$. (only-if) Let $\ensuremath{\calP}\xspace$ be a consistent $\DynClass{Prop}A{0}$-program. As \ensuremath{\calP}\xspace is consistent, it yields the same query answer for all modification sequences that result in the same input database \ensuremath{\calI}\xspace. In this proof we therefore only consider insertion sequences in normal form. Condition (M2) ensures that when a tuple $\vec b$ is inserted into a relation $R$, there are no tuples present that involve a strict subset of $\ensuremath{\text{dom}}(\vec b)$.
As, on the other hand, due to the lack of quantifiers, the update formulas for the auxiliary bits cannot take into account any tuples that contain elements outside of $\ensuremath{\text{dom}}(\vec b)$, the auxiliary bits of \ensuremath{\calP}\xspace after an insertion operation $\mtext{ins}\xspace_R(\vec b)$ of $\alpha$ only depend on the current auxiliary bits of \ensuremath{\calP}\xspace and the strict atomic $k$-type of $\ensuremath{\text{st}}(\vec b)$. Similarly, by Condition (M3) it follows that the auxiliary bits after a modification subsequence $\alpha_A$ only depend on the current auxiliary bits of \ensuremath{\calP}\xspace and the set type of $A$. The behavior of \ensuremath{\calP}\xspace under an insertion sequence in normal form is therefore basically the behavior of a finite automaton (with the possible values of the auxiliary bits as states) reading a sequence of set types.\footnote{It should be noted here that the overall number of set types is finite and only depends on the signature of \ensuremath{\calP}\xspace.} Let $m$ be the number of ($0$-ary) auxiliary bits of \ensuremath{\calP}\xspace and let $M=(2^m)!$. We next show that, for each non-empty set type $\gamma$ and any two input databases $\ensuremath{\calI}\xspace$ and $\ensuremath{\calI}\xspace'$ that have the same number of sets for each non-empty set type different from $\gamma$ and whose numbers of sets of type $\gamma$ differ by $M$, either both $\ensuremath{\calI}\xspace$ and $\ensuremath{\calI}\xspace'$ are accepted by \ensuremath{\calP}\xspace, or both are rejected. As there are only finitely many types and finitely many residue classes modulo $M$, this yields that the query decided by \ensuremath{\calP}\xspace is a modulo query.
Let $\ensuremath{\struc}\xspace=(\ensuremath{ D}\xspace, \ensuremath{\calI}\xspace, \ensuremath{\calA}\xspace)$ be some state reached after an insertion sequence $\alpha$ in normal form, let $\gamma$ be some non-empty set type and let $s$ be the number of occurrences of $\gamma$ in \ensuremath{\calI}\xspace. Let $\alpha'$ be the extension of $\alpha$ by $M+2^m$ further sets of type $\gamma$, yielding $\ensuremath{\struc}\xspace'=(\ensuremath{ D}\xspace, \ensuremath{\calI}\xspace', \ensuremath{\calA}\xspace')$. Let $A_1,\ldots,A_s$ denote the sets of type $\gamma$ in $\ensuremath{\calI}\xspace$ and let $A_1,\ldots,A_{s'}$ denote the sets of type $\gamma$ in $\ensuremath{\calI}\xspace'$. Let $\alpha'$ be decomposed into $\alpha_1\alpha_{A_1}\cdots\alpha_{A_{s'}}\alpha_2$.\footnote{Note that $\alpha$ has the form $\alpha_1\alpha_{A_1}\cdots\alpha_{A_{s}}\alpha_2$.} As there are only $2^m$ different possible values that the auxiliary bits can assume, there are $i<j$, $j\le 2^m$, such that $\alpha_1\alpha_{A_1}\cdots\alpha_{A_i}$ and $\alpha_1\alpha_{A_1}\cdots\alpha_{A_j}$ yield states with identical auxiliary bits.\footnote{Here, $i=0$ corresponds to the sequence $\alpha_1$.} As each set $A_\ell$ has the same set type, it follows that $\alpha_1\alpha_{A_1}\cdots\alpha_{A_{i+cd}}$ yields the same auxiliary bits as $\alpha_1\alpha_{A_1}\cdots\alpha_{A_i}$, for $d \df j-i$ and every $c$ with $i+cd\le s+M+2^m$. Observe that $d$ divides $M$, since $d\le 2^m$ and $M=(2^m)!$. If $s\ge i$, it follows that $\alpha_1\alpha_{A_1}\cdots\alpha_{A_{i+M}}$ yields the same auxiliary bits as $\alpha_1\alpha_{A_1}\cdots\alpha_{A_i}$ and that $\alpha_1\alpha_{A_1}\cdots\alpha_{A_{s+M}}$ yields the same auxiliary bits as $\alpha_1\alpha_{A_1}\cdots\alpha_{A_s}$. Let us now assume that $s<i$. By deleting $i-s$ sets of type $\gamma$ from the states reached after $\alpha_1\alpha_{A_1}\cdots\alpha_{A_i}$ and $\alpha_1\alpha_{A_1}\cdots\alpha_{A_{i+M}}$, we obtain states with identical auxiliary bits and $s$ and $s+M$ sets of type $\gamma$, respectively.
The claim then follows by appending $\alpha_2$ to the sequences $\alpha_1\alpha_{A_1}\cdots\alpha_{A_s}$ and $\alpha_1\alpha_{A_1}\cdots\alpha_{A_{s+M}}$, respectively. This completes the proof. \end{proof} } \subsection{The impact of built-in orders}\label{section:emptinessbuiltin} \toAppendix{\subsection{Proofs for Section \ref{section:emptinessbuiltin}}} \makeatletter{} A closer inspection of the proof that the emptiness problem is undecidable for consistent $\DynClass{FO}IA{1}{2}$-programs (Theorem \ref{theorem:emptiness:consistentfobinary}) reveals that the construction only requires one binary auxiliary relation: a linear order on the ``active'' elements. The proof would also work if a global linear order on all elements of the domain were given. We say that a dynamic program has a \emph{built-in linear order} if there is one auxiliary relation $R_<$ that is always initialized with a linear order on the domain and never changed; likewise for a \emph{built-in successor relation}. That is, the border of undecidability for consistent $\DynClass{FO}$-programs actually lies between consistent $\DynClass{FO}IA{1}{1}$-programs and consistent $\DynClass{FO}IA{1}{1}$-programs with a built-in linear order. Similarly, the border of undecidability for (not necessarily consistent) $\DynClass{Prop}$-programs lies between $\DynClass{Prop}IA{1}{1}$-programs and $\DynClass{Prop}IA{1}{1}$-programs with a built-in linear order. \aproposition{prop:builtin-undec}{ The emptiness problem is undecidable for \begin{enumerate}[ref={\thetheorem\ (\alph*)}] \item consistent $\DynClass{FO}IA{1}{1}$-programs with a built-in linear order or a built-in successor relation, \item $\DynClass{Prop}IA{1}{1}$-programs with a built-in successor relation.
\end{enumerate} } \aproof{}{}{ \begin{proofenum} \item The only binary auxiliary relation used in the proof of Theorem \ref{theorem:emptiness:consistentfobinary} was used to simulate a linear order on the domain. This is no longer necessary if the linear order is available. The linear order can easily be replaced by a built-in successor relation. \item We adapt the proof of Theorem \ref{theorem:emptiness:undecidables} (b) and use the successor relation instead of the list relations, which are the only binary auxiliary relations. The first modification touches an element that is then marked as the first and last element of both lists. We then demand that an insertion into $C_i$ inserts the element that is marked as last and that a deletion from $C_i$ deletes the predecessor of the last element. This can be checked, and the marking of the last element can be updated, without the use of quantifiers. A relation $C_i$ is empty after the element that is marked as first is deleted from $C_i$. \end{proofenum} } However, for dynamic programs that only have auxiliary bits, linear orders or successor relations do not affect decidability. \aproposition{prop:builtin-dec}{ The emptiness problem is decidable for \begin{enumerate}[ref={\thetheorem\ (\alph*)}] \item consistent $\DynClass{FO}IA{1}{0}$-programs with a built-in linear order or a built-in successor relation, \item $\DynClass{Prop}IA{1}{0}$-programs with a built-in linear order or a built-in successor relation. \end{enumerate} } \aproof{}{}{ \begin{proofenum} \item Let $\ensuremath{\calP}\xspace$ be a consistent program over unary input relations that uses only $0$-ary auxiliary relations and a built-in linear order.
In \cite[Theorem 3.1]{DongW97} it is shown\footnote{We note that the setting in that paper assumes a built-in linear order.} how to construct an existential monadic second-order formula $\varphi$ such that, for every modification sequence $\alpha$, the state $\updateStateI{\alpha}{\ensuremath{\db_\emptyset}\xspace}$ is accepted by $\ensuremath{\calP}\xspace$ if and only if $\updateDB{\alpha}{\ensuremath{\db_\emptyset}\xspace} \models \varphi$. By \cite{BuechiE58}, the formula $\varphi$ describes a regular language over the proper $\ensuremath{\calI}\xspaceSchema$-colors ($\neq c_0$). Hence an equivalent finite state automaton can be constructed. For finite automata the emptiness problem is decidable, so the claim follows. \item This statement simply follows from the decidability of the emptiness problem for $\DynClass{Prop}IA{1}{1}$-programs (Theorem \ref{theorem:emptiness:unarydynprop}) and the fact that the update formulas of $\DynClass{Prop}IA{1}{0}$-programs only have one variable and therefore cannot use a linear order or a successor relation in a non-trivial way. \end{proofenum} } \section{The Consistency Problem}\label{section:consistency} \toAppendix{\section{Proofs for Section \ref{section:consistency}}} \makeatletter{} In Section \ref{section:emptinessconsistent} we studied \Emptiness for classes of consistent dynamic programs. It turned out that with this restriction the emptiness problem is easier than for general dynamic programs. One might thus consider the following approach for testing whether a given general dynamic program is empty: test whether the program is consistent and, if it is, use an algorithm for consistent programs. To understand whether this approach can be helpful, we study the following algorithmic problem, parameterized by a class $\calC$ of dynamic programs.
\problemdescr{\Consistency[$\calC$]}{A dynamic program $\ensuremath{\calP}\xspace \in \calC$ with $\StaClass{FO}$ initialization}{Is $\ensuremath{\calP}\xspace$ a consistent program with respect to empty initial databases?} \shortOrLong{}{We will see that the mentioned approach does not give us any advantage, as deciding \Consistency is as hard as deciding \Emptiness for general dynamic programs.} It is not very surprising that \Consistency is not easier than \Emptiness, since deciding \Emptiness boils down to finding \emph{one} modification sequence resulting in a state with particular properties, while \Consistency is about finding \emph{two} modification sequences resulting in two states with particular properties. This high-level comparison can actually be turned into rather easy reductions from \Emptiness to \Consistency. On the other hand, \Consistency can also be reduced to \Emptiness. For this direction the key idea is to simulate two modification sequences simultaneously and to integrate their resulting states into one joint state. This is easy if quantification is available, and requires some work for $\DynClass{Prop}$-fragments. \toLongAndAppendix{We first give a technical lemma to restrict the kind of modification sequences that have to be considered to decide \Consistency. For this, we use the notion of \emph{innocuous transformations}. Intuitively, an innocuous transformation $\theta$ of a modification sequence $\alpha$ is a minimal change of $\alpha$ that results in a modification sequence $\theta(\alpha)$ which leads to the same underlying database as $\alpha$.
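For instance (a small example of ours), for a binary relation symbol $E$, the sequences $\mtext{ins}\xspace_E(a,b)\,\mtext{ins}\xspace_E(c,d)$ and $\mtext{ins}\xspace_E(c,d)\,\mtext{ins}\xspace_E(a,b)$ lead to the same database, and the second is obtained from the first by a single innocuous transformation, namely a swap of two independent modifications.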
Formally, an innocuous transformation is either (1) a permutation of a subsequence $\delta_1\delta_2$ to $\delta_2\delta_1$, under the condition that if one modification is $\mtext{ins}\xspace_S(\vec a)$ then the other one is not $\mtext{del}\xspace_S(\vec a)$, (2) the removal of a subsequence $\mtext{ins}\xspace_S(\vec a)\mtext{del}\xspace_S(\vec a)$ if $\vec a$ is not contained in $S$ when this subsequence is applied, (3) the removal of a modification $\delta = \mtext{ins}\xspace_S(\vec a)$ or $\delta = \mtext{del}\xspace_S(\vec a)$ if $\vec a$ is already contained in $S$ or $\vec a$ is not contained in $S$, respectively, when the modification is applied, or (4) the inverse of one of these transformations. It is clear that, under the given conditions, for an innocuous transformation $\theta$ of a modification sequence $\alpha$ it holds that $\updateDB{\alpha}{\ensuremath{\db_\emptyset}\xspace} = \updateDB{\theta(\alpha)}{\ensuremath{\db_\emptyset}\xspace}$. \begin{lemma}\label{lemma:consistency:editdistance} Let $\ensuremath{\calP}\xspace$ be an inconsistent dynamic program. Then there is a modification sequence $\alpha$, an innocuous transformation $\theta$ of $\alpha$ and an empty database $\ensuremath{\db_\emptyset}\xspace$ such that the query relations in $\updateStateI{\alpha}{\ensuremath{\db_\emptyset}\xspace}$ and $\updateStateI{\theta(\alpha)}{\ensuremath{\db_\emptyset}\xspace}$ differ. \end{lemma} \begin{proof} As $\ensuremath{\calP}\xspace$ is inconsistent, there are two modification sequences $\alpha$ and $\alpha'$ that lead to the same input database $\ensuremath{\calI}\xspace$ but to states with different query relations.
It is easy to see that $\alpha' = \theta_1\cdots\theta_M(\alpha)$ where each $\theta_i$ is an innocuous transformation of $\theta_1\cdots\theta_{i-1}(\alpha)$: from $\alpha$ and $\alpha'$ we can obtain a common insertion sequence $\alpha''$ by applying innocuous transformations of type (1)--(3); the inverses of the latter transformations then yield $\alpha'$ from $\alpha''$. As $\alpha$ and $\alpha'$ lead to states with different query relations, there must be an $i$ such that $\alpha^\star \df \theta_1\cdots\theta_{i-1}(\alpha)$ and $\theta_i(\alpha^\star)$ lead to states with different query relations. \end{proof} We now give the reductions between \Consistency and \Emptiness. } \atheorem{theorem:reductions}{ Let $\ell\ge 1, m\ge 0$. \begin{enumerate} \item For every $\calC\in\{\DynClass{FO}IA{\ell}{m},\DynClass{FO}I{\ell},\DynClass{FO}A{m},\DynClass{FO}\}$,\\ (i) $\Emptiness(\calC) \le \Consistency(\calC)$, and (ii) $\Consistency(\calC) \le \Emptiness(\calC)$. \item For every $\calC\in\{\DynClass{Prop}IA{\ell}{m},\DynClass{Prop}I{\ell},\DynClass{Prop}A{m},\DynClass{Prop}\}$,\\ (i) $\Emptiness(\calC) \le \Consistency(\calC)$, and (ii) $\Consistency(\calC) \le \Emptiness(\calC)$. \end{enumerate} } \toLongAndAppendix{ \begin{proof} For (a)(i) and (b)(i), we construct dynamic programs whose query relations are inflationary, that is, tuples that are inserted once are never removed afterwards. When an update adds a tuple and the modification that caused that update is undone, the two states that are reached after these updates are witnesses to inconsistency. For (a)(ii) and (b)(ii), the constructed dynamic programs simulate two independent modification sequences and maintain two states of the original program.
For (a)(ii), the program uses quantification to determine whether the two states represent equal input databases but different query relations. For (b)(ii) we use that, thanks to Lemma \ref{lemma:consistency:editdistance}, it suffices to simulate one modification sequence and, at one point, one innocuous transformation to find witnesses for inconsistency, so the two maintained states always represent equal input databases. \begin{proofenum} \item[(a)(i)] For a given $\DynClass{FO}IA{\ell}{m}$-program $\ensuremath{\calP}\xspace$ over schema $(\ensuremath{\calI}\xspaceSchema, \ensuremath{\calA}\xspaceSchema)$ with query symbol $\ensuremath{\calQ}s$ we construct a $\DynClass{FO}IA{\ell}{m}$-program $\ensuremath{\calP}\xspace'$ over $(\ensuremath{\calI}\xspaceSchema , \ensuremath{\calA}\xspaceSchema \cup \{\ensuremath{\calQ}s'\})$ with query symbol $\ensuremath{\calQ}s'$. The idea is to initialize $\ensuremath{\calQ}s'$ as empty and to add the tuples in $\ensuremath{\calQ}s$ to $\ensuremath{\calQ}s'$ with a delay of one modification. No tuple gets removed from $\ensuremath{\calQ}s'$. The update formulas for $\ensuremath{\calQ}s'$ are $\uf{\ensuremath{\calQ}s'}{o}{\vec x}{\vec y} \df \ensuremath{\calQ}s(\vec y) \vee \ensuremath{\calQ}s'(\vec y)$. The update formulas for relations from $\ensuremath{\calA}\xspaceSchema$ are copied from~$\ensuremath{\calP}\xspace$. If $\ensuremath{\calP}\xspace$ is empty, then $\ensuremath{\calQ}s'^{\ensuremath{\struc}\xspace} = \emptyset$ in every reached state $\ensuremath{\struc}\xspace$ and $\ensuremath{\calP}\xspace'$ is consistent. If $\ensuremath{\calP}\xspace$ is non-empty, then let $\alpha$ be a shortest modification sequence such that $\ensuremath{\calQ}s^{\updateStateI{\alpha}{\ensuremath{\db_\emptyset}\xspace}}$ is non-empty and let $\alpha^\star = \alpha\alpha'$ be a modification sequence that leads to the same input database as $\alpha$.
It follows that the query relation $\ensuremath{\calQ}s'$ differs in $\updateStateI[\ensuremath{\calP}\xspace']{\alpha}{\ensuremath{\db_\emptyset}\xspace}$ and $\updateStateI[\ensuremath{\calP}\xspace']{\alpha^\star}{\ensuremath{\db_\emptyset}\xspace}$, and hence $\ensuremath{\calP}\xspace'$ is inconsistent. \item[(a)(ii)] If $\ensuremath{\calP}\xspace$ is a given $\DynClass{FO}IA{\ell}{m}$-program, we construct a $\DynClass{FO}IA{\ell}{m}$-program $\ensuremath{\calP}\xspace'$ that simulates two modification sequences of $\ensuremath{\calP}\xspace$ in parallel and maintains two states of this program. If the input databases of these states are equal, a tuple is added to the query relation of $\ensuremath{\calP}\xspace'$ if it is included in exactly one of the two maintained query relations of $\ensuremath{\calP}\xspace$. If $\ensuremath{\calP}\xspace$ is over schema $(\ensuremath{\calI}\xspaceSchema, \ensuremath{\calA}\xspaceSchema)$, then $\ensuremath{\calP}\xspace'$ is over schema $(\ensuremath{\calI}\xspaceSchema', \ensuremath{\calA}\xspaceSchema')$ where $\ensuremath{\calI}\xspaceSchema' = \{S, S' \mid S \in \ensuremath{\calI}\xspaceSchema\}$ and $\ensuremath{\calA}\xspaceSchema' = \{R, R' \mid R \in \ensuremath{\calA}\xspaceSchema\} \cup \{\ensuremath{\calQ}s^\star\}$. The query relation of $\ensuremath{\calP}\xspace'$ is $\ensuremath{\calQ}s^\star$. The update formulas of relations $R \in \ensuremath{\calA}\xspaceSchema$ are the same as in $\ensuremath{\calP}\xspace$; for the copies $R'$ the update formulas are obtained from the original formulas by replacing every relation symbol $S \in \ensuremath{\calI}\xspaceSchema$ or $R \in \ensuremath{\calA}\xspaceSchema$ by $S'$ or $R'$, respectively.
The update formulas for $\ensuremath{\calQ}s^\star$ first check whether the two maintained input databases are equal, using a conjunction of formulas $\forall \vec{x} (S(\vec{x}) \leftrightarrow S'(\vec{x}))$ for every $S \in \ensuremath{\calI}\xspaceSchema$, and then insert a tuple if it is in exactly one of the query relation $\ensuremath{\calQ}s$ of $\ensuremath{\calP}\xspace$ and its copy $\ensuremath{\calQ}s'$. $\ensuremath{\calP}\xspace$ is consistent if and only if $\ensuremath{\calP}\xspace'$ is empty. \item[(b)(i)] Analogous to (a)(i). \item[(b)(ii)] We adapt the idea of part (a)(ii) with the help of Lemma \ref{lemma:consistency:editdistance}. For a $\DynClass{Prop}IA{\ell}{m}$-program $\ensuremath{\calP}\xspace$ over schema $(\ensuremath{\calI}\xspaceSchema, \ensuremath{\calA}\xspaceSchema)$ we sketch the construction of a $\DynClass{Prop}IA{\ell}{m}$-program $\ensuremath{\calP}\xspace'$ over schema $(\ensuremath{\calI}\xspaceSchema', \ensuremath{\calA}\xspaceSchema')$. As in part (a)(ii), this program simulates two modification sequences of $\ensuremath{\calP}\xspace$ and maintains two auxiliary databases over $\ensuremath{\calA}\xspaceSchema$, but only one input database over $\ensuremath{\calI}\xspaceSchema$. In contrast to (a)(ii), $\ensuremath{\calP}\xspace'$ either simulates the effects of one modification to both auxiliary databases or, exactly once, a subsequence (of length at most 2) and an innocuous transformation of this subsequence. It follows that the input databases are equal for both simulated modification sequences after every simulated modification, and so $\ensuremath{\calP}\xspace'$ only has to check whether there are tuples that are included in exactly one copy of the original query relation. We now sketch the construction of $\ensuremath{\calP}\xspace'$. As in part (a)(ii), $\ensuremath{\calA}\xspaceSchema'$ contains relation symbols $R, R'$ for every $R \in \ensuremath{\calA}\xspaceSchema$.
Also all relation symbols from $\ensuremath{\calI}\xspaceSchema$ are contained in $\ensuremath{\calI}\xspaceSchema'$. Additionally, $\ensuremath{\calI}\xspaceSchema'$ contains relation symbols $U_S$, $I_S$ and $T_{S}, T'_{S}$ for every $S \in \ensuremath{\calI}\xspaceSchema$ to simulate subsequences and their innocuous transformations. The symbol $U_S$ is used to simulate an unnecessary modification. If a modification $\mtext{ins}\xspace_{U_S}(\vec a)$ is applied to $\ensuremath{\calP}\xspace'$, the update formulas check that $\vec a$ is already contained in $S$. If this check fails, $\ensuremath{\calP}\xspace'$ sets an error bit. Otherwise, $\ensuremath{\calP}\xspace'$ simulates $\ensuremath{\calP}\xspace$ for the modification $\mtext{ins}\xspace_S(\vec a)$ on the second copy of the auxiliary database; analogously for a modification $\mtext{del}\xspace_{U_S}(\vec a)$. When a modification $\mtext{ins}\xspace_{I_S}(\vec a)$ occurs, $\ensuremath{\calP}\xspace'$ simulates $\ensuremath{\calP}\xspace$ for the sequence $\mtext{ins}\xspace_S(\vec a)\mtext{del}\xspace_S(\vec a)$ on the second copy, if $\vec a$ is not contained in $S$ before; otherwise, $\ensuremath{\calP}\xspace'$ sets an error bit. A sequence $\mtext{ins}\xspace_{T_S}(\vec a)\mtext{ins}\xspace_{T'_{S'}}(\vec b)\mtext{del}\xspace_{T'_{S'}}(\vec b)\mtext{del}\xspace_{T_S}(\vec a)$ is used to simulate the sequence $\mtext{ins}\xspace_S(\vec a)\mtext{del}\xspace_{S'}(\vec b)$ on the first copy of the auxiliary database and the sequence $\mtext{del}\xspace_{S'}(\vec b)\mtext{ins}\xspace_S(\vec a)$ on the second copy; likewise for other combinations of insertions and deletions. Some additional auxiliary bits are used to check that four such modifications happen in a row and that they do not represent the insertion of a tuple into a relation and the deletion of that tuple from the same relation. We use further auxiliary bits to maintain whether exactly one innocuous transformation has been simulated.
For every modification over relation symbols from $\ensuremath{\calI}\xspaceSchema$, both copies of the auxiliary database get updated according to the original program $\ensuremath{\calP}\xspace$. It follows from Lemma \ref{lemma:consistency:editdistance} that it is possible for $\ensuremath{\calP}\xspace'$ to reach a state where the copies $\ensuremath{\calQ}s$ and $\ensuremath{\calQ}s'$ of the query relation of $\ensuremath{\calP}\xspace$ differ and no error bit is set if and only if $\ensuremath{\calP}\xspace$ is inconsistent. A tuple is inserted into the query relation $\ensuremath{\calQ}s^\star$ of $\ensuremath{\calP}\xspace'$ when no error bit is set and the tuple is in exactly one of $\ensuremath{\calQ}s$ and $\ensuremath{\calQ}s'$. So $\ensuremath{\calP}\xspace'$ is empty if and only if $\ensuremath{\calP}\xspace$ is consistent. \end{proofenum} \end{proof} } \section{The History Independence problem}\label{section:hi} \toAppendix{\section{Proofs for Section \ref{section:hi}}} \makeatletter{} \newcommand{\invector}[1]{{\vec n}^\text{in}(#1)} \newcommand{\invectorcomp}[2]{n^\text{in}_{#2}(#1)} \shortOrLong{As discussed in Section \ref{section:emptinessconsistent}, it is natural to expect that a dynamic program is consistent, i.e., that the query relation only depends on the current database, but not on the modification sequence by which it has been reached. Many dynamic programs in the literature satisfy a stronger property: not only their query relation but \emph{all} their auxiliary relations depend only on the current database.
Formally, we call a dynamic program \emph{history independent} if all auxiliary relations in \updateStateI{\alpha}{\ensuremath{\calD}\xspace} only depend on $\alpha(\ensuremath{\calD}\xspace)$, for all modification sequences $\alpha$ and initial empty databases~$\ensuremath{\calD}\xspace$. History independent dynamic programs (also called \emph{memoryless} \cite{PatnaikI94} or \emph{deterministic} \cite{DongS97}) are still expressive enough to maintain interesting queries like undirected reachability \cite{GraedelS12}, but also some lower bounds are known for such programs \cite{DongS97,GraedelS12, ZeumeS15}. In this section, we study decidability of the question whether a given dynamic program is history independent. \problemdescr{\HI[$\calC$]}{A dynamic program $\ensuremath{\calP}\xspace \in \calC$ with $\StaClass{FO}$ initialization}{Is $\ensuremath{\calP}\xspace$ history independent with respect to empty initial databases?} Not surprisingly, \HI is undecidable in general. This can be shown basically in the same way as the general undecidability of \Emptiness in Theorem \ref{theorem:generalundecidability}. \begin{theorem}\label{theorem:hi:generalundecidability} \HI is undecidable for $\DynClass{$\class$}lass{FO}IA{2}{0}$-programs. \end{theorem} However, the precise borders between decidable and undecidable fragments are different for \HI than for \Emptiness and \Emptiness for consistent programs. More precisely, \HI is decidable for $\DynClass{$\class$}lass{FO}$- and \DynClass{$\class$}lass{Prop}-programs with unary input databases, and for $\DynClass{$\class$}lass{Prop}$-programs with unary auxiliary databases. For showing that \HI is decidable for $\DynClass{$\class$}lass{FO}$-programs with unary input databases, we prove that if such a program is not history independent then this is witnessed by some reachable small ``bad state''. A decision algorithm can then simply test whether such a state exists. 
Bad states satisfy one of two properties: they either locally contradict history independence or they are ``inhomogeneous''. We define both notions in the following. A state $\ensuremath{\struc}\xspace$ over domain $\ensuremath{ D}\xspace$ is \emph{locally history independent\xspace}\footnote{We define this term for arbitrary input arity, since the first part of Lemma \ref{lemma:hicharacterization} holds in general.} for a dynamic program $\ensuremath{\calP}\xspace$ if the following three conditions hold. \begin{enumerate}[label={(H\arabic*)}] \item $\updateState{\mtext{del}\xspaceta_1 \mtext{del}\xspaceta_2}{\ensuremath{\struc}\xspace} = \updateState{\mtext{del}\xspaceta_2 \mtext{del}\xspaceta_1}{\ensuremath{\struc}\xspace}$ for all insertions $\mtext{del}\xspaceta_1$ and $\mtext{del}\xspaceta_2$. \item $\ensuremath{\struc}\xspace = \updateState{\mtext{ins}\xspace_R(\vec a)\mtext{del}\xspace_R(\vec a)}{\ensuremath{\struc}\xspace}$ if $\vec a \notin R^{\ensuremath{\struc}\xspace}$, for all $R \in \ensuremath{\calI}\xspaceSchema$ and $\vec a$ over $\ensuremath{ D}\xspace$. \item $\ensuremath{\struc}\xspace = \updateState{\mtext{ins}\xspace_R(\vec a)}{\ensuremath{\struc}\xspace}$ if $\vec a \in R^{\ensuremath{\struc}\xspace}$ and $\ensuremath{\struc}\xspace = \updateState{\mtext{del}\xspace_R(\vec a)}{\ensuremath{\struc}\xspace}$ if $\vec a \notin R^{\ensuremath{\struc}\xspace}$, for all $R \in \ensuremath{\calI}\xspaceSchema$ and $\vec a$ over $\ensuremath{ D}\xspace$. \end{enumerate} Local history independence is well-suited to algorithmic analysis. The following lemma shows that for testing history independence it actually suffices to test local history independence for all states reachable by very restricted modification sequences. \begin{lemma}\label{lemma:hicharacterization} Let $\ensuremath{\calP}\xspace$ be a dynamic program. 
\begin{enumerate}[ref={\thetheorem\ (\alph*)}] \item $\ensuremath{\calP}\xspace$ is history independent if and only if every state reachable by $\ensuremath{\calP}\xspace$ via insertion sequences is locally history independent\xspace. \item If $\ensuremath{\calP}\xspace$ is a $\DynClass{$\class$}lass{FO}I{1}$-program, then $\ensuremath{\calP}\xspace$ is history independent if and only if every state reachable by $\ensuremath{\calP}\xspace$ via insertion sequences in normal form is locally history independent\xspace. \end{enumerate} \end{lemma} A state \ensuremath{\struc}\xspace is \emph{homogeneous} if all tuples $\vec a$ and $\vec b$ with the same (atomic) \ensuremath{\calI}\xspaceSchema-type also have the same (atomic) \ensuremath{\calA}\xspaceSchema-type. The following lemma is an immediate consequence of \cite[Lemma 16]{DongS97}. \begin{lemma}\label{lem:homogeneous} For every history independent $\DynClass{$\class$}lass{FO}I{1}$-program, every reachable state is homogeneous. \end{lemma} We call a state of a $\DynClass{$\class$}lass{FO}I{1}$-program that is not homogeneous or not locally history independent\xspace a \emph{bad state}. The following lemma is the key ingredient for deciding history independence for $\DynClass{$\class$}lass{FO}I{1}$-programs. It restricts the size of the smallest bad state and therefore allows for testing history independence in a brute-force manner. \begin{proposition}\label{prop:smallmodel} Let \ensuremath{\calP}\xspace be a $\DynClass{$\class$}lass{FO}IA{1}{m}$-program. There is a number $N \in \ensuremath{\mathbb{N}}$, that can be computed from \ensuremath{\calP}\xspace, such that if \ensuremath{\calP}\xspace is \emph{not} history independent, then there exists an empty database $\ensuremath{\db_\emptyset}\xspace$ of size at most $N$ and an insertion sequence $\alpha$ in normal form such that $\updateStateI{\alpha}{\ensuremath{\db_\emptyset}\xspace}$ is bad. 
\end{proposition} \begin{theorem}\label{theorem:hidecidable} \HI is decidable for $\DynClass{$\class$}lass{FO}I{1}$-programs. \end{theorem} Using the same technique as in the proof of Theorem \ref{theorem:emptiness:consistent:unaryDynProp}(b), history independence can be shown to be decidable for $\DynClass{$\class$}lass{Prop}A{1}$-programs. \begin{theorem}\label{theorem:hidynpropdecidable} \HI is decidable for $\DynClass{$\class$}lass{Prop}A{1}$-programs. \end{theorem} } { As discussed in Section \ref{section:emptinessconsistent}, it is natural to expect that a dynamic program is consistent, i.e., that the query relation only depends on the current database, but not on the modification sequence by which it has been reached. Many dynamic programs in the literature satisfy a stronger property: not only their query relation but \emph{all} their auxiliary relations depend only on the current database. Formally, we call a dynamic program \emph{history independent} if all auxiliary relations in \updateStateI{\alpha}{\ensuremath{\calD}\xspace} only depend on $\alpha(\ensuremath{\calD}\xspace)$, for all modification sequences $\alpha$ and initial empty databases~$\ensuremath{\calD}\xspace$. History independent dynamic programs (also called \emph{memoryless} \cite{PatnaikI94} or \emph{deterministic} \cite{DongS97}) are still expressive enough to maintain interesting queries like undirected reachability \cite{GraedelS12}, but also some lower bounds are known for such programs \cite{DongS97,GraedelS12, ZeumeS15}. In this section, we study decidability of the question whether a given dynamic program is history independent. 
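To make the definition concrete, the following Python sketch (a toy model of our own devising, not part of the formal development; the encoding of a dynamic program as an update function on a single unary input relation with one auxiliary bit is an assumption made purely for illustration) contrasts a history independent program, whose auxiliary bit stores whether the input relation is nonempty, with one that is not, whose auxiliary bit stores whether the last modification was an insertion. The brute-force check compares the auxiliary data reached by all short modification sequences that produce the same input database.

```python
from itertools import product

DOMAIN = [0, 1]

def run(seq, update):
    """Apply a modification sequence to the initially empty database over
    DOMAIN.  A state is a pair (input relation U, auxiliary bit aux)."""
    U, aux = frozenset(), False
    for op, a in seq:
        U = U | {a} if op == "ins" else U - {a}
        aux = update(U, aux, op, a)
    return U, aux

def upd_nonempty(U, aux, op, a):
    """Auxiliary bit stores whether U is nonempty: depends only on U."""
    return len(U) > 0

def upd_last_ins(U, aux, op, a):
    """Auxiliary bit stores whether the last modification was an
    insertion: depends on the sequence, not only on the database."""
    return op == "ins"

def history_independent(update, max_len=4):
    """Brute force: two modification sequences reaching the same input
    database must yield the same auxiliary data (the definition of
    history independence above)."""
    seen = {}  # input database -> auxiliary bit first observed for it
    ops = [(op, a) for op in ("ins", "del") for a in DOMAIN]
    for n in range(max_len + 1):
        for seq in product(ops, repeat=n):
            U, aux = run(seq, update)
            if U in seen and seen[U] != aux:
                return False  # witness against history independence
            seen.setdefault(U, aux)
    return True
```

For instance, the sequences ins(0) and ins(0) ins(1) del(1) reach the same database $\{0\}$ but different auxiliary bits in the second program, so it is rejected, while the first program passes the check.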
\problemdescr{\HI[$\calC$]}{A dynamic program $\ensuremath{\calP}\xspace \in \calC$ with $\StaClass{FO}$ initialization}{Is $\ensuremath{\calP}\xspace$ history independent with respect to empty initial databases?} Note that contrary to the emptiness problem, \HI is not easier for classes of consistent dynamic programs than for classes of general dynamic programs, so we will not study this restriction. This is because for every dynamic program $\ensuremath{\calP}\xspace$ we can construct a consistent dynamic program $\ensuremath{\calP}\xspace'$ that is history independent if and only if $\ensuremath{\calP}\xspace$ is, by introducing a new query bit that is not changed by any update formula. Not surprisingly, \HI is undecidable in general. This can be shown basically in the same way as the general undecidability of \Emptiness in Theorem \ref{theorem:generalundecidability}. \begin{theorem}\label{theorem:hi:generalundecidability} \HI is undecidable for $\DynClass{$\class$}lass{FO}IA{2}{0}$-programs. \end{theorem} \begin{proof} Again we reduce from the satisfiability problem for first-order logic over schemas with at least one binary relation symbol. For a given $\StaClass{FO}$-formula $\varphi$, at first we construct the dynamic program $\ensuremath{\calP}\xspace$ from the proof of Theorem \ref{theorem:generalundecidability}. Additionally we add a second auxiliary bit $B$ which is initialized as false and set to true when $\mtext{Acc}$ is first set to true by an update, and never set to false again. If $\varphi$ is not satisfiable, then all bits remain false and $\ensuremath{\calP}\xspace$ is history independent. If $\varphi$ is satisfiable, then let $\alpha\mtext{del}\xspaceta$ be a shortest modification sequence applied to an empty database $\ensuremath{\db_\emptyset}\xspace$ such that $\mtext{Acc}$ and $B$ are set to true in $\updateStateI{\alpha\mtext{del}\xspaceta}{\ensuremath{\db_\emptyset}\xspace}$. 
Let $\mtext{del}\xspaceta^{-1}$ be the modification that undoes $\mtext{del}\xspaceta$. Then $B$ is false in $\updateStateI{\alpha}{\ensuremath{\db_\emptyset}\xspace}$ and true in $\updateStateI{\alpha\mtext{del}\xspaceta\mtext{del}\xspaceta^{-1}}{\ensuremath{\db_\emptyset}\xspace}$, but the respective input databases are equal. So $\ensuremath{\calP}\xspace$ is not history independent. \end{proof} However, in the following we will see that the precise borders between decidable and undecidable fragments are different for \HI than for \Emptiness and \Emptiness for consistent programs. More precisely, we will show that \HI is decidable for $\DynClass{$\class$}lass{FO}$- and \DynClass{$\class$}lass{Prop}-programs with unary input databases, and for $\DynClass{$\class$}lass{Prop}$-programs with unary auxiliary databases. We recall the normal form for insertion sequences introduced in Section \ref{section:setting}. For dynamic programs with unary input databases, insertion sequences in normal form (1) color each element contiguously and (2) apply the insertions for each $\ensuremath{\calI}\xspaceSchema$-color in the same order. Here we require further that they first color all elements with designated $\ensuremath{\calI}\xspaceSchema$-color $c_1$, then all elements with $c_2$ and so on. We will first show that, to judge \HI of a $\DynClass{$\class$}lass{FO}I{1}$-program, only modification sequences in normal form (Lemma \ref{lemma:hicharacterization}) and states with a particular property (Lemma \ref{lem:homogeneous}) need to be considered. Finally, we show that if a dynamic program is not history independent, this can already be observed for domains whose size is bounded in the size of the program (Proposition \ref{prop:smallmodel-app}). The decision algorithm then tests, in a brute-force manner, all states over such ``small'' domains reached by insertion sequences in normal form. 
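The normal form just recalled can be sketched operationally. The following Python function (illustrative only; the representation of an $\ensuremath{\calI}\xspaceSchema$-color as the set of unary input relation names containing an element is our own assumption) produces an insertion sequence in normal form for a unary input schema: all elements of color $c_1$ are colored first, then those of $c_2$, and so on; each element is colored contiguously; and every element of a given color receives its insertions in the same fixed order.

```python
def normal_form_sequence(coloring, relations):
    """Build a normal-form insertion sequence for a unary input schema.

    coloring:  dict mapping an element to the frozenset of unary input
               relation names it should end up in (its I-color).
    relations: list fixing a global order R_1, ..., R_l on the schema.
    """
    # Enumerate the (nonempty) colors c_1, ..., c_L in a fixed order.
    colors = sorted({c for c in coloring.values() if c},
                    key=lambda c: sorted(relations.index(r) for r in c))
    seq = []
    for c in colors:  # first all c_1-elements, then all c_2-elements, ...
        order = [r for r in relations if r in c]  # same order per color
        for a in sorted(e for e, col in coloring.items() if col == c):
            # color element a contiguously
            seq.extend(("ins", r, a) for r in order)
    return seq
```

For example, coloring element 1 with $\{R\}$ and element 2 with $\{R,S\}$ yields the sequence ins$_R(1)$ ins$_R(2)$ ins$_S(2)$.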
Let $\ensuremath{\calP}\xspace$ be a $\DynClass{$\class$}lass{FO}I{1}$-program over schema $\ensuremath{\tau}\xspace = \ensuremath{\calI}\xspaceSchema \cup \ensuremath{\calA}\xspaceSchema$. Throughout this section we assume, to ease presentation, that $\ensuremath{\tau}\xspace$ contains only relation symbols of arity at least one and no input or auxiliary bits. This is no real restriction, as these bits can easily be simulated by unary relations when quantification is allowed. We usually denote the maximum quantifier depth of (initialization and update) formulas by $q$, the maximum arity of auxiliary relations by $m$, and the number of input relations by $\ell$. Further we write $L$ for $2^{\ell}-1$ and let $c_0, \ldots, c_L$ be the $\ensuremath{\calI}\xspaceSchema$-colors, where $c_0$ is the color of the $\ensuremath{\calI}\xspaceSchema$-uncolored elements. In the following we write ``colors'' and ``uncolored'' instead of $\ensuremath{\calI}\xspaceSchema$-colors and $\ensuremath{\calI}\xspaceSchema$-uncolored. We next present a characterization of history independence which is well-suited to algorithmic analysis. We call a state $\ensuremath{\struc}\xspace$ over domain $\ensuremath{ D}\xspace$ \emph{locally history independent\xspace}\footnote{We define this term for arbitrary input arity, since the first part of Lemma \ref{lemma:hicharacterization} holds in general.} for a dynamic program $\ensuremath{\calP}\xspace$ if the following three conditions hold. \begin{enumerate}[label={(H\arabic*)}] \item $\updateState{\mtext{del}\xspaceta_1 \mtext{del}\xspaceta_2}{\ensuremath{\struc}\xspace} = \updateState{\mtext{del}\xspaceta_2 \mtext{del}\xspaceta_1}{\ensuremath{\struc}\xspace}$ for all insertions $\mtext{del}\xspaceta_1$ and $\mtext{del}\xspaceta_2$. 
\item $\ensuremath{\struc}\xspace = \updateState{\mtext{ins}\xspace_R(\vec a)\mtext{del}\xspace_R(\vec a)}{\ensuremath{\struc}\xspace}$ if $\vec a \notin R^{\ensuremath{\struc}\xspace}$, for all $R \in \ensuremath{\calI}\xspaceSchema$ and $\vec a$ over $\ensuremath{ D}\xspace$. \item $\ensuremath{\struc}\xspace = \updateState{\mtext{ins}\xspace_R(\vec a)}{\ensuremath{\struc}\xspace}$ if $\vec a \in R^{\ensuremath{\struc}\xspace}$ and $\ensuremath{\struc}\xspace = \updateState{\mtext{del}\xspace_R(\vec a)}{\ensuremath{\struc}\xspace}$ if $\vec a \notin R^{\ensuremath{\struc}\xspace}$, for all $R \in \ensuremath{\calI}\xspaceSchema$ and $\vec a$ over $\ensuremath{ D}\xspace$. \end{enumerate} \begin{lemma}\label{lemma:hicharacterization} Let $\ensuremath{\calP}\xspace$ be a dynamic program. \begin{enumerate}[ref={\thetheorem\ (\alph*)}] \item $\ensuremath{\calP}\xspace$ is history independent if and only if every state reachable by $\ensuremath{\calP}\xspace$ via insertion sequences is locally history independent\xspace. \item If $\ensuremath{\calP}\xspace$ is a $\DynClass{$\class$}lass{FO}I{1}$-program, then $\ensuremath{\calP}\xspace$ is history independent if and only if every state reachable by $\ensuremath{\calP}\xspace$ via insertion sequences in normal form is locally history independent\xspace. \end{enumerate} \end{lemma} \begin{proof} \begin{proofenum} \item (only-if) It is easy to see that local history independence for all reachable states is necessary for history independence. (if) Assume, towards a contradiction, that there is a dynamic program $\ensuremath{\calP}\xspace$, for which every state reachable by an insertion sequence is locally history independent\xspace, but $\ensuremath{\calP}\xspace$ is not history independent. 
Then there are two modification sequences $\alpha_1$ and $\alpha_2$ to an empty database $\ensuremath{\db_\emptyset}\xspace$ with $\updateDB{\alpha_1}{\ensuremath{\db_\emptyset}\xspace} = \updateDB{\alpha_2}{\ensuremath{\db_\emptyset}\xspace}$ but $\updateStateI{\alpha_1}{\ensuremath{\db_\emptyset}\xspace} \neq \updateStateI{\alpha_2}{\ensuremath{\db_\emptyset}\xspace}$. We construct insertion sequences $\alpha_1'$ and $\alpha_2'$ that lead to the same state as $\alpha_1$ and $\alpha_2$, respectively. Repeated application of (H1) then yields $\updateStateI{\alpha'_1}{\ensuremath{\db_\emptyset}\xspace} = \updateStateI{\alpha'_2}{\ensuremath{\db_\emptyset}\xspace}$ and altogether $\updateStateI{\alpha_1}{\ensuremath{\db_\emptyset}\xspace}=\updateStateI{\alpha'_1}{\ensuremath{\db_\emptyset}\xspace} = \updateStateI{\alpha'_2}{\ensuremath{\db_\emptyset}\xspace} = \updateStateI{\alpha_2}{\ensuremath{\db_\emptyset}\xspace}$, the desired contradiction. We only describe how to construct the insertion sequence $\alpha'_1$ from $\alpha_1$; the construction of $\alpha'_2$ from $\alpha_2$ is completely analogous. Thus let $\alpha_1 = \mtext{del}\xspaceta_1 \cdots \mtext{del}\xspaceta_N$ and, for every $i$, let $\ensuremath{\struc}\xspace_i\df\updateStateI{\mtext{del}\xspaceta_1 \cdots \mtext{del}\xspaceta_i}{\ensuremath{\db_\emptyset}\xspace}$. A modification is \emph{bad} if it is a deletion or the repeated insertion of a fact. The insertion sequence $\alpha'_1$ is constructed by successively eliminating all bad modifications from $\alpha_1$. If $\alpha_1$ does not contain any bad modification, we are done. Otherwise, let $\mtext{del}\xspaceta_k$ be the first bad modification in~$\alpha_1$. Since $\mtext{del}\xspaceta_1\cdots \mtext{del}\xspaceta_{k-1}$ is an insertion sequence, by our assumption $\ensuremath{\struc}\xspace_{k-1}$ is locally history independent\xspace. Therefore, $\mtext{del}\xspaceta_k$ can be eliminated from $\alpha_1$ as follows. 
If $\mtext{del}\xspaceta_k = \mtext{del}\xspace_R(\vec a)$ such that $\vec a \notin R^{\ensuremath{\struc}\xspace_{k-1}}$ or $\mtext{del}\xspaceta_k = \mtext{ins}\xspace_R(\vec a)$ such that $\vec a \in R^{\ensuremath{\struc}\xspace_{k-1}}$ then $\ensuremath{\struc}\xspace_k=\ensuremath{\struc}\xspace_{k-1}$ thanks to (H3) and $\mtext{del}\xspaceta_k$ can be removed from $\alpha_1$ without affecting the resulting state. If $\mtext{del}\xspaceta_k = \mtext{del}\xspace_R(\vec a)$ such that $\vec a \in R^{\ensuremath{\struc}\xspace_{k-1}}$, then there must be an insertion $\mtext{ins}\xspace_R(\vec a)$ in~$\mtext{del}\xspaceta_1 \cdots \mtext{del}\xspaceta_{k-1}$. By (H1) the insertions $\mtext{del}\xspaceta_1 \cdots \mtext{del}\xspaceta_{k-1}$ can be rearranged into a sequence $\beta \mtext{ins}\xspace_R(\vec a)$, such that $\beta$ consists of all modifications from $\mtext{del}\xspaceta_1 \cdots \mtext{del}\xspaceta_{k-1}$ besides $\mtext{ins}\xspace_R(\vec a)$ and the resulting state is $\ensuremath{\struc}\xspace_{k-1}$. By (H2), the modification sequences $\beta $ and $\beta \mtext{ins}\xspace_R(\vec a) \mtext{del}\xspace_R(\vec a)$ yield the same state, but $\beta$ has fewer deletions than $\mtext{del}\xspaceta_1 \cdots \mtext{del}\xspaceta_k$. The modification sequence $\alpha'_1$ is obtained by repeating this procedure. \item (only-if) Again, local history independence for all reachable states is necessary for history independence. (if) Let $\ensuremath{\calP}\xspace$ be a dynamic program for which every state reachable via an insertion sequence in normal form is locally history independent\xspace. We show that every state reachable by an insertion sequence is also reachable by a normal form sequence. That \ensuremath{\calP}\xspace is history independent then follows from (a). 
We thus assume, towards a contradiction, that there is an insertion sequence $\alpha = \mtext{del}\xspaceta_1 \cdots \mtext{del}\xspaceta_N$ and an empty database $\ensuremath{\db_\emptyset}\xspace$ such that $\ensuremath{\struc}\xspace=\updateState{\alpha}{\ensuremath{\db_\emptyset}\xspace}$ is not reachable by any insertion sequence in normal form. Let $\alpha$ and $\ensuremath{\db_\emptyset}\xspace$ be chosen such that $N$ is minimal. Therefore, $\ensuremath{\struc}\xspace'=\updateState{\mtext{del}\xspaceta_1\cdots\mtext{del}\xspaceta_{N-1}}{\ensuremath{\db_\emptyset}\xspace}$ can be reached by a normal form modification\footnote{Of course, insertion sequences yielding the same state have the same length.} sequence $\alpha'=\mtext{del}\xspaceta'_1\cdots\mtext{del}\xspaceta'_{N-1}$ and, by our assumption, $\ensuremath{\struc}\xspace'$ and all prior states reached by prefixes of $\alpha'$ are locally history independent\xspace. By inductive application of (H1), $\mtext{del}\xspaceta_N$ can now be moved to its appropriate place inside $\alpha'$ to yield a normal form sequence $\alpha''$ equivalent to $\alpha$. Therefore, $\ensuremath{\struc}\xspace$ is reachable by a normal form sequence, the desired contradiction. \end{proofenum} \end{proof} We next define another property that reachable states of history independent programs share. A state \ensuremath{\struc}\xspace is \emph{homogeneous} if all tuples $\vec a$ and $\vec b$ with the same (atomic) \ensuremath{\calI}\xspaceSchema-type also have the same (atomic) \ensuremath{\calA}\xspaceSchema-type. 
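The homogeneity condition is easy to check directly. The following Python sketch (restricted, for illustration, to unary relations, i.e. tuples of arity $m=1$; the data representation is our own assumption) tests whether all elements with the same atomic $\ensuremath{\calI}\xspaceSchema$-type also have the same atomic $\ensuremath{\calA}\xspaceSchema$-type, building the type function defined next in the text as a by-product.

```python
def itype(a, input_rels):
    """Atomic I-type of element a: the input relations containing it."""
    return frozenset(R for R, content in input_rels.items() if a in content)

def atype(a, aux_rels):
    """Atomic A-type of element a: the aux relations containing it."""
    return frozenset(S for S, content in aux_rels.items() if a in content)

def homogeneous(domain, input_rels, aux_rels):
    """A state is homogeneous if elements with the same I-type (their
    color) also share the same A-type.  This sketch restricts all
    relations to be unary, i.e. tuple arity m = 1."""
    type_function = {}  # I-type (color) -> A-type observed for it
    for a in domain:
        c, t = itype(a, input_rels), atype(a, aux_rels)
        if type_function.setdefault(c, t) != t:
            return False  # same-colored elements with different A-types
    return True
```

For instance, with input relation $R = \{1,2\}$ over domain $\{1,2,3\}$, the auxiliary relation $S = \{1,2\}$ gives a homogeneous state, whereas $S = \{1\}$ does not, since the same-colored elements $1$ and $2$ then have different $\ensuremath{\calA}\xspaceSchema$-types.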
For every homogeneous state \ensuremath{\struc}\xspace we denote by $f_\ensuremath{\struc}\xspace$ the \emph{(atomic) type function} that maps every (atomic) \ensuremath{\calI}\xspaceSchema-type of arity $m$ (the maximal arity of $\ensuremath{\tau}\xspace$) to the corresponding (atomic) \ensuremath{\calA}\xspaceSchema-type\footnote{If there is no tuple $\vec a$ of an $\ensuremath{\calI}\xspaceSchema$-type $c$ in $\ensuremath{\struc}\xspace$, then $f_\ensuremath{\struc}\xspace(c) = \bot$}. The following lemma is an immediate consequence of \cite[Lemma 16]{DongS97}. \begin{lemma}\label{lem:homogeneous} For every history independent $\DynClass{$\class$}lass{FO}I{1}$-program, every reachable state is homogeneous. \end{lemma} We call a state of a $\DynClass{$\class$}lass{FO}I{1}$-program that is not homogeneous or not locally history independent\xspace a \emph{bad state}. That a state is bad can be expressed in first-order logic. Likewise the possible effects of coloring a single uncolored element on the type function of a state can be expressed by first-order formulas. To state this more precisely, we use \emph{type forecast functions} $F:\{1,\ldots,L\}\to \calF$, where $\calF$ is the set of possible type functions for \ensuremath{\calP}\xspace. \begin{lemma}\label{lem:formulas} Let $\ensuremath{\calP}\xspace$ be a $\DynClass{$\class$}lass{FO}IA{1}{m}$-program with maximum quantifier-depth $q$ and $\ell$ input relations. \begin{enumerate} \item There is a formula $\ensuremath{\varphi_{\text{bad}}}$ of quantifier-depth at most $3+2m+(\ell+1)q$ that is true in a state $\ensuremath{\struc}\xspace$ if and only if $\updateState{\alpha}{\ensuremath{\struc}\xspace}$ is bad for at least one modification sequence $\alpha$ that colors a single uncolored element of \ensuremath{\struc}\xspace. 
\item For every type forecast function $F$ there is a formula $\varphi_F$ of quantifier depth $1+m+\ell{}q$ that is true in a homogeneous state $\ensuremath{\struc}\xspace$ if and only if, for every $i\le L$, $\updateState{\alpha}{\ensuremath{\struc}\xspace}$ has type function $F(i)$ if $\alpha$ colors some uncolored element with $c_i$. \end{enumerate} \end{lemma} \begin{proof} \begin{proofenum} \item The formula is of the form \[ \exists x \bigvee_{i=1}^L (\varphi^i_1\lor \varphi^i_2), \] where $\varphi^i_1$ expresses that the state that results from coloring an uncolored element by $c_i$ is not homogeneous and $\varphi^i_2$ expresses that it is not locally history independent\xspace. To this end, $\varphi^i_1$ existentially quantifies two $m$-tuples (depth: $2m$) and expresses that they have the same $\ensuremath{\calI}\xspaceSchema$-type but different $\ensuremath{\calA}\xspaceSchema$-types in the state after the coloring (depth: $\ell{}q$). The formula $\varphi^i_2$ is a three-fold disjunction for the conditions (H1-3). As an example, the formula for (H1) quantifies two elements $a,a'$ (depth: 2), an $m$-tuple (depth: $m$) and tests that for some color $c_i$ the $\ensuremath{\calA}\xspaceSchema$-types of the two databases resulting from the two possible orders in which $a$ and $a'$ can be colored by $c_i$ (depth: $2q$) differ in the $m$-tuple. Altogether, $\ensuremath{\varphi_{\text{bad}}}$ has quantifier-depth $1+\max(2m+\ell{}q,2+m+2q)\le 3+2m+(\ell+1)q$. \item Similarly, each formula $\varphi_F$ existentially quantifies an element $a$ to be colored, has a disjunct for all possible colors, and universally quantifies an $m$-tuple and tests that the $\ensuremath{\calA}\xspaceSchema$-type of it is consistent with its $\ensuremath{\calI}\xspaceSchema$-type and $F$. Overall this yields quantifier depth $1+m+\ell{}q$. 
\end{proofenum} \end{proof} We next formalize the observation that for a homogeneous state, the truth of first-order formulas of quantifier depth $k$ only depends on its color frequencies up to $k$\footnote{Note the similarities to Lemma \ref{lem:PropCons11}}. To this end, we associate with every state $\ensuremath{\struc}\xspace$ its \emph{characteristic input vector} $\invector{\ensuremath{\struc}\xspace} = (n_0, \ldots, n_L)$ over $\ensuremath{\mathbb{N}}$ where $n_i\df \invectorcomp{\ensuremath{\struc}\xspace}{i}$ is the number of elements with $\ensuremath{\calI}\xspaceSchema$-color $c_i$ in $\ensuremath{\struc}\xspace$. We write $n\simeq_k m$, for numbers $k,n,m$, if $n=m$ or both $n\ge k$ and $m\ge k$. We write $(n_0,\ldots,n_L)\simeq_k (n'_0,\ldots,n'_L)$, if for every $i\le L$, $n_i\simeq_k n'_i$. For a given $k$, we say that two homogeneous states $\ensuremath{\struc}\xspace$ and $\ensuremath{\struc}\xspace'$ are \emph{$k$-similar} (denoted by $\ensuremath{\struc}\xspace\sim_k\ensuremath{\struc}\xspace'$) if \begin{itemize}\item $\invector{\ensuremath{\struc}\xspace}\simeq_k \invector{\ensuremath{\struc}\xspace'}$ and \item $\ensuremath{\struc}\xspace$ and $\ensuremath{\struc}\xspace'$ have the same type function. \end{itemize} Now we can make the relationship between characteristic input vectors and first-order types more precise.\footnote{We note that for homogeneous states it actually holds: $\ensuremath{\struc}\xspace \sim_k \ensuremath{\struc}\xspace'$ if and only if $\ensuremath{\struc}\xspace\equiv_k \ensuremath{\struc}\xspace'$.} \begin{lemma}\label{lem:simequiv} Let $\ensuremath{\calP}\xspace$ be a $\DynClass{$\class$}lass{FO}IA{1}{m}$-program and let $\ensuremath{\struc}\xspace$ and $\ensuremath{\struc}\xspace'$ be two homogeneous states for $\ensuremath{\calP}\xspace$. 
For every $k \in \ensuremath{\mathbb{N}}$, if $\ensuremath{\struc}\xspace \sim_k \ensuremath{\struc}\xspace'$ then $\ensuremath{\struc}\xspace\equiv_k \ensuremath{\struc}\xspace'$. \end{lemma} We recall that $\ensuremath{\struc}\xspace\equiv_k \ensuremath{\struc}\xspace'$ means that the two states satisfy exactly the same first-order formulas of quantifier depth (up to) $k$. \begin{proof} If $\ensuremath{\struc}\xspace \sim_k \ensuremath{\struc}\xspace'$ then the duplicator has a straightforward winning strategy for the $k$-round Ehrenfeucht-Fra\"isse game on the $\ensuremath{\calI}\xspaceSchema$-reducts of $\ensuremath{\struc}\xspace$ and $\ensuremath{\struc}\xspace'$. Since both states are homogeneous and have the same type function, this winning strategy extends to $\ensuremath{\calA}\xspaceSchema$ and the strategy of duplicator is a winning strategy for $\ensuremath{\struc}\xspace$ and $\ensuremath{\struc}\xspace'$. \end{proof} By combining Lemmas \ref{lem:formulas} and \ref{lem:simequiv} we get the following lemma, which will be the most important technical tool in the proof of a small counterexample property for programs that are not history independent. \begin{lemma}\label{lem:typefunction} Let $\ensuremath{\calP}\xspace$ be a $\DynClass{$\class$}lass{FO}IA{1}{m}$-program with maximum quantifier-depth $q$ and $\ell$ input relations, let $K\ge 1+m+\ell{}q$ and let $\ensuremath{\struc}\xspace$ and $\ensuremath{\struc}\xspace'$ be two homogeneous states for $\ensuremath{\calP}\xspace$ with $\ensuremath{\struc}\xspace \sim_K \ensuremath{\struc}\xspace'$. Let $a$ and $a'$ be uncolored elements in $\ensuremath{\struc}\xspace$ and $\ensuremath{\struc}\xspace'$, respectively. Let $\beta$ and $\beta'$ be insertion sequences that color $a$ and $a'$, respectively with the same color $c_i$. Then $\updateState{\beta}{\ensuremath{\struc}\xspace}$ and $\updateState{\beta'}{\ensuremath{\struc}\xspace'}$ have the same type function, in case they are both homogeneous. 
\end{lemma} \begin{proof} By Lemma \ref{lem:simequiv}, we know that $\ensuremath{\struc}\xspace\equiv_K \ensuremath{\struc}\xspace'$. In particular, thanks to Lemma \ref{lem:formulas} and the homogeneity of $\updateState{\beta}{\ensuremath{\struc}\xspace}$ and $\updateState{\beta'}{\ensuremath{\struc}\xspace'}$, there is a unique type forecast function $F$ such that $\varphi_F$ holds in \ensuremath{\struc}\xspace and $\ensuremath{\struc}\xspace'$. Therefore, after coloring $a$ and $a'$ with $c_i$ the resulting states both have type function $F(i)$. \end{proof} Now we can show a small counterexample property for programs that are not history independent. \begin{proposition}\label{prop:smallmodel-app} Let \ensuremath{\calP}\xspace be a $\DynClass{$\class$}lass{FO}IA{1}{m}$-program with quantifier depth $q$ and $\ell$ input relations, and let $K\df 3+2m+(\ell+1)q$ and $T$ be the number of type functions. If \ensuremath{\calP}\xspace is \emph{not} history independent, then there exists an empty database $\ensuremath{\db_\emptyset}\xspace$ of size at most $N\df (2K+T)(L+1)$ and an insertion sequence in normal form $\alpha$ such that $\updateStateI{\alpha}{\ensuremath{\db_\emptyset}\xspace}$ is bad. \end{proposition} \begin{proof} Let \ensuremath{\calP}\xspace be a dynamic $\DynClass{$\class$}lass{FO}IA{1}{m}$-program that is not history independent and let $\ensuremath{\db_\emptyset}\xspace$ be an empty database of minimal size $n$ for which there exists an insertion sequence $\alpha=\alpha_1\cdots\alpha_N$ in normal form, such that $\updateState{\alpha}{\ensuremath{\db_\emptyset}\xspace}$ is bad, each subsequence $\alpha_i$ colors one element, and $N$ is minimal. We consider the state $\ensuremath{\struc}\xspace\df \updateStateI{\alpha_1\cdots\alpha_{N-1}}{\ensuremath{\db_\emptyset}\xspace}$ just before the bad state. Thus $\ensuremath{\struc}\xspace$ satisfies the formula $\ensuremath{\varphi_{\text{bad}}}$ from Lemma \ref{lem:formulas}. 
Let $ (n_0, \ldots, n_L)\df \invector{\ensuremath{\struc}\xspace}$. We show first that, for every $i\ge 1$, $n_i\le 2K+T$. Towards a contradiction, let us assume that for some $i\ge 1$, $n_i>2K+T$. Let $\alpha'=\beta \alpha'_1\cdots \alpha'_{n_i}$ be a reordering of $\alpha_1\cdots\alpha_{N-1}$ such that $\alpha'_1,\ldots,\alpha'_{n_i}$ are insertion subsequences that color the $n_i$ elements with color $c_i$ and $\beta$ contains all other insertions. By minimality of $N$, all involved states are locally history independent\xspace and therefore the reordering does not affect the resulting state, i.e., $\updateStateI{\beta\alpha'_1\cdots\alpha'_{n_i}}{\ensuremath{\db_\emptyset}\xspace}=\ensuremath{\struc}\xspace$. We denote, for every $j\le n_i$, the state $\updateStateI{\beta\alpha'_1\cdots\alpha'_{j}}{\ensuremath{\db_\emptyset}\xspace}$ by $\ensuremath{\struc}\xspace_j$ and its type function by $f_j$. We can conclude that $\invector{\ensuremath{\struc}\xspace_{j}}\simeq_K \invector{\ensuremath{\struc}\xspace_{j'}}$, for all $K\leq j<j'\leq n_i-K-1$, since \begin{itemize}\item in $\ensuremath{\struc}\xspace_K$, there are more than $K+T$ uncolored elements and $K$ elements of color $c_i$, \item $\alpha'_{K+1}\cdots\alpha'_{n_i-K-1}$ only colors uncolored elements with color $c_i$, and \item in $\ensuremath{\struc}\xspace_{n_i-K-1}$ there are still more than $K$ uncolored elements. \end{itemize} Since there are more than $T$ states between $\ensuremath{\struc}\xspace_{K}$ and $\ensuremath{\struc}\xspace_{n_i-K-1}$, two of them must have the same type function. That is, there must be $j_1,j_2$ with $K\leq j_1<j_2 \leq n_i-K-1$ and $f_{j_1}=f_{j_2}$ and therefore $\ensuremath{\struc}\xspace_{j_1}\sim_K\ensuremath{\struc}\xspace_{j_2}$. Let $\ensuremath{\db_\emptyset}\xspace'$ be the empty database resulting from $\ensuremath{\db_\emptyset}\xspace$ by deleting all elements that are colored by the sequence $\alpha'_{j_1+1}\cdots\alpha'_{j_2}$. 
Since $\ensuremath{\db_\emptyset}\xspace'$ has more than $j_1+K>K>q$ elements, $\ensuremath{\struc}\xspace_\mtext{Init}\xspace(\ensuremath{\db_\emptyset}\xspace') \sim_K \ensuremath{\struc}\xspace_\mtext{Init}\xspace(\ensuremath{\db_\emptyset}\xspace)$, in particular these two states have the same type function. By inductive application of Lemma \ref{lem:typefunction} it is easy to show that $\updateState{\beta\alpha'_1\cdots\alpha'_{j_1}}{\ensuremath{\db_\emptyset}\xspace'}\sim_K \updateState{\beta\alpha'_1\cdots\alpha'_{j_1}}{\ensuremath{\db_\emptyset}\xspace}$. In the inductive step, we start from two corresponding states whose $\sim_K$-equivalence has already been established. In particular, they agree on all formulas $\varphi_F$ and therefore the application of the same one-element coloring sequence yields for both the same type function, thanks to Lemma \ref{lem:typefunction} and because the reached states are homogeneous by minimality of $n$ and $N$. Since the number of elements for each (proper) color is the same in both new states and both have more than $K$ uncolored elements, their characteristic input vectors are also equivalent with respect to $\simeq_K$, so the new states are again $\sim_K$-equivalent. For each $j$ with $j_2\le j\le N-1$ let $\ensuremath{\struc}\xspace'_j\df \updateState{\beta\alpha'_1\cdots\alpha'_{j_1}\alpha'_{j_2+1}\cdots\alpha'_{j}}{\ensuremath{\db_\emptyset}\xspace'}$. We emphasize that, for every $j$, $\invector{\ensuremath{\struc}\xspace'_j}$ and $\invector{\ensuremath{\struc}\xspace_j}$ only differ in their entry for color $c_i$ (which for both is at least $K$). In particular, they have the same number of uncolored elements. Thus, $\ensuremath{\struc}\xspace'_{j_2}\sim_K \ensuremath{\struc}\xspace_{j_1}\sim_K \ensuremath{\struc}\xspace_{j_2}$ and therefore, as before, $\ensuremath{\struc}\xspace'_{j_2}$ and $\ensuremath{\struc}\xspace_{j_2}$ agree on all formulas $\varphi_F$. 
It follows that the two states $\ensuremath{\struc}\xspace'_{j_2+1}$ and $\ensuremath{\struc}\xspace_{j_2+1}$ obtained by the sequence $\alpha'_{j_2+1}$ again have the same type function. As they both have at least $K$ uncolored elements and at least $K$ elements with color $c_i$ (and agree on all other color frequencies), we get $\ensuremath{\struc}\xspace'_{j_2+1}\sim_K\ensuremath{\struc}\xspace_{j_2+1}$. An inductive application of the same argument yields $\ensuremath{\struc}\xspace'_{N-1}\sim_K\ensuremath{\struc}\xspace_{N-1}=\ensuremath{\struc}\xspace$. Since $\ensuremath{\struc}\xspace\models\ensuremath{\varphi_{\text{bad}}}$ we conclude $\ensuremath{\struc}\xspace'_{N-1}\models\ensuremath{\varphi_{\text{bad}}}$ and thus $\ensuremath{\struc}\xspace'_{N-1}$ is a bad state. As $\ensuremath{\struc}\xspace'_{N-1}$ can be reached by fewer insertions than $\ensuremath{\struc}\xspace$ we get the desired contradiction and thus $n_i\le 2K+T$, for all $i\ge 1$.\\ We finally show that $n_0 \le K$. Otherwise, if $n_0>K$, we could replace $\ensuremath{\db_\emptyset}\xspace$ by the empty database $\ensuremath{\db_\emptyset}\xspace'$ in which one element that is uncolored in $\ensuremath{\struc}\xspace$ is removed. As before, it would follow that $\updateState{\alpha_1\cdots\alpha_{N-1}}{\ensuremath{\db_\emptyset}\xspace'}\sim_K \updateState{\alpha_1\cdots\alpha_{N-1}}{\ensuremath{\db_\emptyset}\xspace}$ and therefore that $\updateState{\alpha_1\cdots\alpha_{N-1}}{\ensuremath{\db_\emptyset}\xspace'}$ satisfies $\ensuremath{\varphi_{\text{bad}}}$ and is therefore bad, contradicting the choice of $\ensuremath{\db_\emptyset}\xspace$. This completes the proof of the proposition. \end{proof} We can now conclude the main result of this section. \begin{theorem}\label{theorem:hidecidable} \HI is decidable for $\DynClass{FO}I{1}$-programs.
\end{theorem} \begin{proof} It follows immediately from Proposition \ref{prop:smallmodel-app} that Algorithm \ref{algorithm:hiunary} is a correct decision algorithm for \HI of $\DynClass{FO}I{1}$-programs. \begin{algorithm} \caption{Deciding \HI for $\DynClass{FO}I{1}$-programs}\label{algorithm:hiunary} \begin{algorithmic}[1] \INPUT A $\DynClass{FO}IA{1}{m}$-program $\ensuremath{\calP}\xspace$ with $\ell$ input relations and quantifier depth $q$. \State Let $K$, $L$ and $T$ be as in Proposition \ref{prop:smallmodel-app}. \For{all empty databases $\ensuremath{\db_\emptyset}\xspace$ over domains $\{1,\ldots,n\}$ with $n\le (2K+T)(L+1)$} \For{all normal form insertion sequences $\alpha$ over $\{1,\ldots,n\}$} \LineIf {$\updateStateI{\alpha}{\ensuremath{\db_\emptyset}\xspace}$ is not homogeneous or not locally history independent\xspace} {Reject.} \EndFor \EndFor \State Accept. \end{algorithmic} \end{algorithm} \end{proof} Using the same technique as in the proof of Theorem \ref{theorem:emptiness:consistent:unaryDynProp}(b), history independence can be shown to be decidable for $\DynClass{Prop}A{1}$-programs. \begin{theorem}\label{theorem:hidynpropdecidable} \HI is decidable for $\DynClass{Prop}A{1}$-programs. \end{theorem} \begin{proof} Let $\ensuremath{\calP}\xspace$ be a $\DynClass{Prop}IA{\ell}{1}$-program for some $\ell \in \ensuremath{\mathbb{N}}$. Recall that, according to Lemma \ref{lemma:hicharacterization}, for testing history independence it suffices to check that no non-locally history independent state can be reached by an insertion sequence in normal form. We argue that if a non-locally history independent state can be reached by $\ensuremath{\calP}\xspace$, then such a state with few tuples in the input relations can be reached as well.
History independence can then be tested in a brute-force manner by trying out insertion sequences for all input databases with few tuples. Suppose that $\ensuremath{\struc}\xspace$ is a non-locally history independent state reachable by $\ensuremath{\calP}\xspace$ such that the number $N$ of tuples in input databases of $\ensuremath{\struc}\xspace$ is minimal. In particular, $\ensuremath{\calP}\xspace$ is history independent for input databases with fewer than $N$ tuples, that is, all modification sequences $\alpha$ and $\alpha'$ yielding an input database with fewer than $N$ tuples also yield the same state. Let $\vec a$ be a $2\ell$-ary tuple that witnesses that $\ensuremath{\struc}\xspace$ is not locally history independent, i.e.~there are two modifications on $\vec a$ that contradict (H1), (H2) or (H3). Further let $\gamma$ be the atomic type of $\vec a$. Now, using the same argument as in the proof of Theorem \ref{theorem:emptiness:consistent:unaryDynProp} as well as the history independence of $\ensuremath{\calP}\xspace$ for databases with fewer than $N$ tuples, one can show that the number $N$ of input tuples needed to exhibit a tuple of type $\gamma$ does not have to be large. \end{proof} } \section{Conclusion}\label{section:conclusion} In this work we studied the algorithmic properties of static analysis problems for (restrictions of) dynamic programs. Most of the results are summarized in Table~\ref{tab:results}. In general only very strong restrictions yield decidability. The only cases left open concern $\DynClass{Prop}$-programs where both the arity of the input relations and the arity of the auxiliary relations are at least~2. For such programs the status of history independence and emptiness of consistent remains open. We conjecture that for history independence the decidable fragment of $\DynClass{Prop}$ is larger than exhibited here.
Our results will hopefully contribute to a better understanding of the power of dynamic programs. On the one hand, the undecidability proofs show that very restricted dynamic programs can already simulate powerful machine models. It is natural to ask whether this power can be used to maintain other, more common queries. On the other hand, the decidability results utilize limitations of the state space and the transitions between states for classes of restricted programs. Such limitations can be a good starting point for the development of techniques for proving lower bounds for the respective fragments. \end{document}
\begin{document} \title{Invariant functions on $p$-divisible groups and the $p$-adic Corona problem} \author{Christopher Deninger} \date{\ } \maketitle \thispagestyle{empty} \section{Introduction} \label{sec:1} In this note we are concerned with $p$-divisible groups $G = (G_{\nu})$ over a complete discrete valuation ring $R$. We assume that the fraction field $K$ of $R$ has characteristic zero and that the residue field $k = R /\pi R$ is perfect of positive characteristic $p$. Let $C$ be the completion of an algebraic closure of $K$ and denote by $\mathfrak{o} = \mathfrak{o}_C$ its ring of integers. The group $G_{\nu} (\mathfrak{o})$ acts on $G_{\nu} \otimes \mathfrak{o}$ by translation. Since $G_{\nu} \otimes K$ is \'etale, the $G_{\nu} (C)$-invariant functions on $G_{\nu} \otimes C$ are just the constants. Using the counit it follows that the natural inclusion \[ \mathfrak{o} \iso \Gamma (G_{\nu} \otimes \mathfrak{o} , {\mathcal O})^{G_{\nu} (\mathfrak{o})} \] is an isomorphism. We are interested in an approximate $\;\mathrm{mod}\; \pi^n$-version of this statement. Set $\mathfrak{o}_n = \mathfrak{o} / \pi^n \mathfrak{o}$ for $n \ge 1$. The group $G_{\nu} (\mathfrak{o})$ acts by translation on $G_{\nu} \otimes \mathfrak{o}_n$ for all $n$. \begin{theorem} \label{t1} Assume that the dual $p$-divisible group $G'$ is at most one-dimensional and that the connected-\'etale exact sequence for $G'$ splits over $\mathfrak{o}$. Then there is an integer $t \ge 1$ such that the cokernel of the natural inclusion \[ \mathfrak{o}_n \hookrightarrow \Gamma (G_{\nu} \otimes \mathfrak{o}_n , {\mathcal O})^{G_{\nu} (\mathfrak{o})} \] is annihilated by $p^t$ for all $\nu$ and $n$. \end{theorem} The example of $\mathbb{G}_m = (\mu_{p^{\nu}})$ in section \ref{sec:2} may be helpful to get a feeling for the statement. We expect the theorem to hold without any restriction on the dimension of $G$ as will be explained later.
Its assertion is somewhat technical but the proof may be of interest because it combines some of the main results of Tate on $p$-divisible groups with van~der~Put's solution of his one-dimensional $p$-adic Corona problem. The classic corona problem concerns the Banach algebra $H^{\infty} (D)$ of bounded analytic functions on the open unit disc $D$. The points of $D$ give maximal ideals in $H^{\infty} (D)$ and hence points of the Gelfand spectrum $\hat{D} = \mathrm{sp}\, H^{\infty} (D)$. The question was whether $D$ was dense in $\hat{D}$ (the set $\hat{D} \setminus \overline{D}$ being the ``corona''). This was settled affirmatively by Carleson \cite{C}. The analogous question for the polydisc $D^d$ is still open for $d \ge 2$. An equivalent condition for $D^d$ to be dense in $\mathrm{sp}\, H^{\infty} (D^d)$ is the following one, \cite{H}, Ch. 10: \begin{condition} \label{t2} If $f_1 , \ldots , f_n$ are bounded analytic functions in $D^d$ such that for some $\delta > 0$ we have \[ \max_{1 \le i \le n} |f_i (z)| \ge \delta \quad \mbox{for all} \; z \in D^d \; , \] then $f_1 , \ldots , f_n$ generate the unit ideal of $H^{\infty} (D^d)$. \end{condition} In \cite{P} van~der~Put considered the analogue of condition \ref{t2} with $H^{\infty} (D^d)$ replaced by the algebra of bounded analytic $C$-valued functions on the $p$-adic open polydisc $\Delta^d$ in $C^d$, i.e. by the algebra \[ C \langle X_1 , \ldots , X_d \rangle = \mathfrak{o} [[ X_1 , \ldots , X_d]] \otimes_{\mathfrak{o}} C \; . \] He called this $p$-adic version of condition \ref{t2} the $p$-adic Corona problem and verified it for $d = 1$. The general case $d \ge 1$ was later treated by Bartenwerfer \cite{B} using his earlier results on rigid cohomology with bounds. In the proof of theorem \ref{t1}, applying Tate's results from \cite{T}, we are led to a question about certain ideals in $C \langle X_1 , \ldots , X_d \rangle$, which for $d = 1$ can be reduced to van~der~Put's $p$-adic Corona problem.
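To make condition \ref{t2} concrete, here is a small numerical sanity check (ours, not part of the argument) for the toy pair $f_1(z) = z$, $f_2(z) = 1 - z$ on the unit disc: the triangle inequality $|z| + |1-z| \ge 1$ forces the corona bound with $\delta = \tfrac12$, and the Bézout identity $1 \cdot f_1 + 1 \cdot f_2 = 1$ exhibits the unit ideal directly.

```python
# Numerical sanity check (illustration only) of the corona condition for
# f1(z) = z, f2(z) = 1 - z on the open unit disc: since |z| + |1 - z| >= 1,
# we have max(|f1(z)|, |f2(z)|) >= 1/2 everywhere, and the Bezout identity
# 1*f1 + 1*f2 = 1 shows that f1, f2 generate the unit ideal.
import cmath

def corona_lower_bound(num_radii=50, num_angles=200):
    """Minimum of max(|z|, |1 - z|) over a polar grid in the open disc."""
    delta = float("inf")
    for i in range(1, num_radii):
        r = i / num_radii                       # radii 0 < r < 1
        for k in range(num_angles):
            z = r * cmath.exp(2j * cmath.pi * k / num_angles)
            delta = min(delta, max(abs(z), abs(1 - z)))
    return delta

# The bound delta = 1/2 is attained at z = 1/2, which lies on the grid.
assert corona_lower_bound() >= 0.5 - 1e-12
```

The grid contains the extremal point $z = \tfrac12$, so the computed bound is exactly $\tfrac12$.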
For $d \ge 2$, I did not succeed in such a reduction. However, it seems possible that a generalization of Bartenwerfer's theory might settle that question. It should be mentioned that van~der~Put's term ``$p$-adic Corona problem'' for the $p$-adic analogue of condition \ref{t2} is somewhat misleading. Namely, as pointed out in \cite{EM}, a more natural analogue would be the question whether $\Delta^d$ was dense in the Berkovich space of $C \langle X_1 , \ldots , X_d \rangle$. This is not known, even for $d = 1$. The difference between the classic and the $p$-adic cases comes from the fact discovered by van~der~Put that, contrary to $H^{\infty} (D^d)$, the algebra $C \langle X_1 , \ldots, X_d \rangle$ contains maximal ideals of infinite codimension. While working on this note I had helpful conversations and exchanges with Siegfried Bosch, Alain Escassut, Peter Schneider, Annette Werner and Thomas Zink. I would like to thank them all very much. \section{An example and other versions of the theorem} \label{sec:2} Consider an affine group scheme $\mathbb{G}h$ over a ring $S$ with Hopf-algebra ${\mathbb{A}}h = \Gamma (\mathbb{G}h , {\mathcal O})$, comultiplication $\mu : {\mathbb{A}}h \to {\mathbb{A}}h \otimes_S {\mathbb{A}}h$ and counit $\varepsilon : {\mathbb{A}}h \to S$. The operation of $\mathbb{G}h (S) = \mathrm{Hom}_S ({\mathbb{A}}h , S)$ on $\Gamma (\mathbb{G}h , {\mathcal O})$ by translation is given by the map \begin{equation} \label{eq:1} \mathbb{G}h (S) \times {\mathbb{A}}h \to {\mathbb{A}}h \; , \; (\chi , a) \mapsto (\chi \otimes \mathrm{id}) \mu (a) \end{equation} where $(\chi \otimes \mathrm{id}) \mu$ is the composition \[ {\mathbb{A}}h \xrightarrow{\mu} {\mathbb{A}}h \otimes_S {\mathbb{A}}h \xrightarrow{\chi \otimes \mathrm{id}} S \otimes_S {\mathbb{A}}h = {\mathbb{A}}h \; . \] Given a homomorphism of groups $P \to \mathbb{G}h (S)$ we may view ${\mathbb{A}}h$ as a $P$-module.
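The translation action \eqref{eq:1} can be made explicit in the simplest split case. The following sketch is our own toy model, not an object from the paper: for the Hopf algebra of functions on the finite group ${\mathbb{Z}}/n$, the comultiplication is $\mu(f)(x,y) = f(x+y)$, dual to the group law, a point is its evaluation character $\chi_g(f) = f(g)$, and $(\chi_g \otimes \mathrm{id})\mu$ is then literally translation by $g$.

```python
# Toy model (ours, not from the paper) of the translation action
# (chi ⊗ id) ∘ mu of formula (1), for the algebra of functions on Z/n.
n = 12

def comultiplication(f):
    """mu(f) as a function on (Z/n) x (Z/n), dual to the group law."""
    return lambda x, y: f((x + y) % n)

def translate(g, f):
    """(chi_g ⊗ id) mu(f): partially evaluate mu(f) at the point g."""
    mu_f = comultiplication(f)
    return lambda x: mu_f(g, x)

f = lambda x: x * x % n          # an arbitrary function on Z/n
g = 5
shifted = translate(g, f)
# The action is translation by g, as formula (1) asserts in this case:
assert all(shifted(x) == f((g + x) % n) for x in range(n))
```

The identity character ($g = 0$) acts trivially, matching the role of the counit.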
The composition $S \to {\mathbb{A}}h \xrightarrow{\varepsilon} S$ being the identity, we have an isomorphism \begin{equation} \label{eq:2} \mathrm{Ker}\, ({\mathbb{A}}h^P \xrightarrow{\varepsilon} S) \iso {\mathbb{A}}h^P / S \quad \mbox{mapping $a$ to} \; a + S \; . \end{equation} The inverse sends $a + S$ to $a - \varepsilon (a) \cdot 1$. {\bf Example} The theorem is true for $\mathbb{G}_m = (\mu_{p^{\nu}})$. \begin{proof} Set $V = \mathfrak{o}_n [X , X^{-1}] / (X^{p^{\nu}} - 1)$. Applying formulas \eqref{eq:1} and \eqref{eq:2} with $\mathbb{G}h = \mu_{p^{\nu}} \otimes \mathfrak{o}_n$ and $P = \mu_{p^{\nu}} (\mathfrak{o}) \to \mathbb{G}h (\mathfrak{o}_n)$ we see that the cokernel of the map \begin{equation} \label{eq:3} \mathfrak{o}_n \to \Gamma (\mu_{p^{\nu}} \otimes \mathfrak{o}_n , {\mathcal O})^{\mu_{p^{\nu}} (\mathfrak{o})} \end{equation} is isomorphic to the $\mathfrak{o}_n$-module: \[ \{ \overline{Q} \in V \, | \, \overline{Q} (\zeta X) = \overline{Q} (X) \; \mbox{for all} \; \zeta \in \mu_{p^{\nu}} (\mathfrak{o}) \; \mbox{and} \; \overline{Q} (1) = 0 \} \; . \] Lift $\overline{Q}$ to a Laurent polynomial $Q = \sum_{\mu \in {\mathcal S}} a_{\mu} X^{\mu}$ in $\mathfrak{o} [X , X^{-1}]$ where ${\mathcal S} = \{ 0 , \ldots , p^{\nu}-1 \}$. Then we have: \begin{equation} \label{eq:4} (\zeta^{\mu} - 1) a_{\mu} \equiv 0 \;\mathrm{mod}\; \pi^n \quad \mbox{for} \; \mu \in {\mathcal S} \; \mbox{and} \; \zeta \in \mu_{p^{\nu}} (\mathfrak{o}) \end{equation} and \begin{equation} \label{eq:5} \sum_{\mu \in {\mathcal S}} a_{\mu} \equiv 0 \;\mathrm{mod}\; \pi^n \; . \end{equation} For any non-zero $\mu$ in ${\mathcal S}$ choose $\zeta \in \mu_{p^{\nu}} (\mathfrak{o})$ such that $\zeta^{\mu} \neq 1$. Then $\zeta^{\mu} -1$ divides $p$ in $\mathfrak{o}$ and hence \eqref{eq:4} implies that $p a_{\mu} \equiv 0 \;\mathrm{mod}\; \pi^n$ for all $\mu \neq 0$. Using \eqref{eq:5} it follows that we have $p a_0 \equiv 0 \;\mathrm{mod}\; \pi^n$ as well.
Hence $pQ \;\mathrm{mod}\; \pi^n$ is zero and therefore $p \overline{Q} = 0$ as well. Thus $p$ annihilates the $\mathfrak{o}_n$-module \eqref{eq:3} for all $\nu \ge 1$ and $n \ge 1$. \end{proof} Now assume that $S = R$ and that $\mathbb{G}h / R$ is a finite, flat group scheme. Consider the Cartier dual $\mathbb{G}h' = \mathrm{spec}\, {\mathbb{A}}h'$ where ${\mathbb{A}}h' = \mathrm{Hom}_R ({\mathbb{A}}h , R)$. The perfect pairing of finite free $\mathfrak{o}_n$-modules \begin{equation} \label{eq:6} ({\mathbb{A}}h \otimes \mathfrak{o}_n) \times ({\mathbb{A}}h' \otimes \mathfrak{o}_n) \to \mathfrak{o}_n \end{equation} induces an isomorphism \begin{equation} \label{eq:7} \mathrm{Ker}\, (({\mathbb{A}}h \otimes \mathfrak{o}_n)^{\mathbb{G}h (\mathfrak{o})} \xrightarrow{\varepsilon} \mathfrak{o}_n) \iso \mathrm{Hom}_{\mathfrak{o}_n} (({\mathbb{A}}h' \otimes \mathfrak{o}_n)_{\mathbb{G}h (\mathfrak{o})} / \mathfrak{o}_n , \mathfrak{o}_n) \; . \end{equation} Using \eqref{eq:2} it follows that if $p^t$ annihilates $({\mathbb{A}}h' \otimes \mathfrak{o}_n)_{\mathbb{G}h (\mathfrak{o})} / \mathfrak{o}_n$ then $p^t$ annihilates $({\mathbb{A}}h \otimes \mathfrak{o}_n)^{\mathbb{G}h (\mathfrak{o})} / \mathfrak{o}_n$ as well. (The converse is not true in general.) Hence theorem \ref{t1} follows from the next result (applied to the dual $p$-divisible group). \begin{theorem} \label{t3} Assume that the $p$-divisible group $G$ is at most one-dimensional and that the connected-\'etale exact sequence for $G$ splits over $\mathfrak{o}$. Then there is an integer $t \ge 1$ such that $p^t$ annihilates the cokernel of the natural map \[ \mathfrak{o}_n \to \Gamma (G_{\nu} \otimes \mathfrak{o}_n , {\mathcal O})_{G'_{\nu} (\mathfrak{o})} \] for all $\nu$ and $n$.
\end{theorem} For a finite flat group scheme $\mathbb{G}h = \mathrm{spec}\, {\mathbb{A}}h$ over a ring $S$, the group \[ \mathbb{G}h' (S) = \mathrm{Hom}_{S-\mathrm{alg}} (\mathrm{Hom}_S ({\mathbb{A}}h , S),S) \subset {\mathbb{A}}h \] consists of the group-like elements in ${\mathbb{A}}h$, i.e. the units $a$ in ${\mathbb{A}}h$ with $\mu (a) = a \otimes a$. In this way $\mathbb{G}h' (S)$ becomes a subgroup of the unit group ${\mathbb{A}}h^*$ and hence $\mathbb{G}h' (S)$ acts on ${\mathbb{A}}h$ by multiplication. On the other hand $\mathbb{G}h' (S)$ acts on $\mathbb{G}h'$ by translation, hence on ${\mathbb{A}}h' = \Gamma (\mathbb{G}h' , {\mathcal O})$ and hence on ${\mathbb{A}}h'' = {\mathbb{A}}h$. Using \eqref{eq:1} one checks that the two actions of $\mathbb{G}h' (S)$ on ${\mathbb{A}}h$ are the same. This leads to the following description of the cofixed module in theorem \ref{t3}. Set $A_{\nu} = \Gamma (G_{\nu} , {\mathcal O})$ and let $J_{\nu}$ be the ideal in $A_{\nu} \otimes_R \mathfrak{o}$ generated by the elements $h-1$ with $h$ group-like in this Hopf-algebra over $\mathfrak{o}$. Thus $J_{\nu}$ is also the $\mathfrak{o}$-submodule of $A_{\nu} \otimes_R \mathfrak{o}$ generated by the elements $ha - a$ for $h \in G'_{\nu} (\mathfrak{o})$ and $a \in A_{\nu} \otimes_R \mathfrak{o}$. Then we have the formula \begin{equation} \label{eq:8} \Gamma (G_{\nu} \otimes \mathfrak{o}_n , {\mathcal O})_{G'_{\nu} (\mathfrak{o})} = (A_{\nu} \otimes_R \mathfrak{o}_n) / J_{\nu} (A_{\nu} \otimes_R \mathfrak{o}_n) \; . \end{equation} This implies an isomorphism: \begin{equation} \label{eq:9} \mathrm{Coker} (\mathfrak{o}_n \to \Gamma (G_{\nu} \otimes \mathfrak{o}_n, {\mathcal O})_{G'_{\nu} (\mathfrak{o})}) = \mathrm{Coker} (\mathfrak{o} \to (A_{\nu} \otimes_R \mathfrak{o}) / J_{\nu}) \otimes_{\mathfrak{o}} \mathfrak{o}_n \; .
\end{equation} Hence theorem \ref{t3} and therefore also theorem \ref{t1} follow from the next claim: \begin{claim} \label{t4} For a $p$-divisible group $G = (G_{\nu})$ as in theorem \ref{t3} there exists an integer $t \ge 1$ such that $p^t$ annihilates the cokernel of the natural map \\ $\mathfrak{o} \to (A_{\nu} \otimes_R \mathfrak{o}) / J_{\nu}$ for all $\nu \ge 1$. \end{claim} As a first step in the proof of claim \ref{t4} we reduce to the case where $G$ is either \'etale or connected. For simplicity set $\mathbb{G}h = G_{\nu} \otimes_R \mathfrak{o} = \mathrm{spec}\, {\mathbb{A}}h$ and define $\mathbb{G}h^0 , \mathbb{G}h^{\mathrm{\acute{e}t}} , {\mathbb{A}}h^0 , {\mathbb{A}}h^{\mathrm{\acute{e}t}}$ similarly. By assumption we have isomorphisms $\mathbb{G}h = \mathbb{G}h^0 \times_{\mathfrak{o}} \mathbb{G}h^{\mathrm{\acute{e}t}}$ and ${\mathbb{A}}h = {\mathbb{A}}h^0 \otimes_{\mathfrak{o}} {\mathbb{A}}h^{\mathrm{\acute{e}t}}$ as group schemes, resp. Hopf-algebras over $\mathfrak{o}$. There is a compatible splitting of the group-like elements over $\mathfrak{o}$: \[ \mathbb{G}h' (\mathfrak{o}) = \mathbb{G}h^{0'} (\mathfrak{o}) \times \mathbb{G}h^{\mathrm{\acute{e}t} '} (\mathfrak{o}) \; . \] For elements \[ h^0 \in \mathbb{G}h^{0'} (\mathfrak{o}) \subset {\mathbb{A}}h^0 \quad \mbox{and} \quad h^{\mathrm{\acute{e}t}} \in \mathbb{G}h^{\mathrm{\acute{e}t} '} (\mathfrak{o}) \subset {\mathbb{A}}h^{\mathrm{\acute{e}t}} \] consider the identity: \[ h^0 \otimes h^{\mathrm{\acute{e}t}} - 1 = h^0 \otimes (h^{\mathrm{\acute{e}t}} - 1) + (h^0 - 1) \otimes 1 \quad \mbox{in} \; {\mathbb{A}}h \; . \] It implies that we have \[ J = {\mathbb{A}}h^0 \otimes J^{\mathrm{\acute{e}t}} + J^0 \otimes {\mathbb{A}}h^{\mathrm{\acute{e}t}} \quad \mbox{in} \; {\mathbb{A}}h \] where $J$ is the ideal of ${\mathbb{A}}h$ generated by the elements $h-1$ for $h \in \mathbb{G}h' (\mathfrak{o})$ and $J^0 , J^{\mathrm{\acute{e}t}}$ are defined similarly. 
Hence we have natural surjections \[ {\mathbb{A}}h^0 / J^0 \otimes {\mathbb{A}}h^{\mathrm{\acute{e}t}} / J^{\mathrm{\acute{e}t}} \to {\mathbb{A}}h / J \] and \[ \mathrm{Coker} (\mathfrak{o} \to {\mathbb{A}}h^0 / J^0) \otimes \mathrm{Coker} (\mathfrak{o} \to {\mathbb{A}}h^{\mathrm{\acute{e}t}} / J^{\mathrm{\acute{e}t}}) \to \mathrm{Coker} (\mathfrak{o} \to {\mathbb{A}}h / J) \; . \] Hence it suffices to prove claim \ref{t4} in the cases where $G$ is either connected or \'etale. The \'etale case is straightforward: we have $G \otimes_R \mathfrak{o} = ((\underline{{\mathbb{Z}} / p^{\nu}})^h)_{\nu \ge 0}$ where for an abstract group $A$ we denote by $\underline{A}$ the corresponding \'etale group scheme. Hence $G'_{\nu} = \mu^h_{p^{\nu}}$ and $G'_{\nu} (\mathfrak{o}) = \mu_{p^{\nu}} (\mathfrak{o})^h$. The inclusion \[ \mu_{p^{\nu}} (\mathfrak{o})^h = \mathrm{Hom} (({\mathbb{Z}} / p^{\nu})^h , \mathfrak{o}^*) \subset \mathrm{Maps}\, (({\mathbb{Z}} / p^{\nu})^h , \mathfrak{o}) = A_{\nu} \otimes \mathfrak{o} \] identifies $\mu_{p^{\nu}} (\mathfrak{o})^h$ with the group-like elements in $A_{\nu} \otimes \mathfrak{o}$. The ideal $J_{\nu}$ of $A_{\nu} \otimes \mathfrak{o}$ is given by: \[ J_{\nu} = (\chi_{\zeta} - 1 \, | \, \zeta \in \mu_{p^{\nu}} (\mathfrak{o})^h) \] where $\chi_{\zeta}$ is the character of $({\mathbb{Z}} / p^{\nu})^h$ defined by the equation \[ \chi_{\zeta} ((a_1 , \ldots , a_h)) = \zeta^{a_1}_1 \cdots \zeta^{a_h}_h \quad \mbox{where} \; \zeta = (\zeta_1 , \ldots , \zeta_h) \; . \] The functions $\delta_a$ for $a \in ({\mathbb{Z}} / p^{\nu})^h$ given by $\delta_a (a) = 1$ and $\delta_a (b) = 0$ if $b \neq a$ generate $A_{\nu} \otimes \mathfrak{o}$ as an $\mathfrak{o}$-module. For $a \neq 0$ choose $\zeta \in \mu_{p^{\nu}} (\mathfrak{o})^h$ with $\zeta^a \neq 1$. Then we have $p = (\zeta^a - 1) \beta$ for some $\beta \in \mathfrak{o}$.
Define $f_a \in A_{\nu} \otimes \mathfrak{o}$ by setting \[ f_a (a) = \beta \quad \mbox{and} \quad f_a (b) = 0 \; \mbox{for} \; b \neq a \; . \] We then find: \[ f_a (\chi_{\zeta} - 1) = p \delta_a \quad \mbox{in} \; A_{\nu} \otimes \mathfrak{o} \; . \] Hence we have $p \delta_a \in J_{\nu}$ for all $a \neq 0$ and therefore $p$ annihilates $\mathrm{Coker} (\mathfrak{o} \to (A_{\nu} \otimes \mathfrak{o}) / J_{\nu})$. The next two sections are devoted to the much more interesting case where $G$ is connected. \section{The connected case I ($p$-adic Hodge theory)} \label{sec:3} In this section we reduce the assertion of claim \ref{t4} for connected $p$-divisible groups of arbitrary dimension to an assertion on ideals in $C \langle X_1 , \ldots , X_d \rangle$. For this reduction we use theorems of Tate in \cite{T}. Thus let $G = (G_{\nu})$ be a connected $p$-divisible group of dimension $d$ over $R$ and set $A = \varprojlim_{\nu} A_{\nu}$ where $G_{\nu} = \mathrm{spec}\, A_{\nu}$. Consider the projective limit $A = \varprojlim A_n$ with the topology inherited from the product topology on $\prod A_n$ where the $A_n$'s are given the $\pi$-adic topology. This topology on $A$ is the one defined by the $R$-submodules $K_n + \pi^k A$ for $n , k \ge 1$ where $K_n = \mathrm{Ker}\, (A \to A_n)$. Equivalently it is defined by the spaces $K_n + \pi^n A$ for $n \ge 1$. In \cite{T} section (2.2) it is shown that $A$ is isomorphic to $R [[X_1 , \ldots , X_d]]$ as a topological $R$-algebra. If $M$ denotes the maximal ideal of $A$, then according to \cite{T} Lemma 0 the topology of $A$ coincides with the $M$-adic topology. Let $A \hat{\otimes}_R \mathfrak{o}$ be the completion of $A \otimes_R \mathfrak{o}$ with respect to the linear topology on $A \otimes_R \mathfrak{o}$ given by the subspaces $M^n \otimes_R \mathfrak{o} + A \otimes_R \pi^n \mathfrak{o}$.
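As a toy illustration (ours, not from the paper) of computing in such a projective limit of truncated rings: elements of a formal power series ring can be manipulated through their truncations modulo $X^m$, compatibly with the projections between levels, which is how one works concretely in a limit like $\varprojlim (A_n \otimes_R \mathfrak{o})$.

```python
# Toy illustration (ours): power series handled level by level via truncations.
def mul_trunc(f, g, m):
    """Product of coefficient lists f, g, truncated to degree < m."""
    h = [0] * m
    for i, a in enumerate(f[:m]):
        for j, b in enumerate(g[:m]):
            if i + j < m:
                h[i + j] += a * b
    return h

def geometric(m):
    """Truncation of 1/(1 - X) = 1 + X + X^2 + ... to degree < m."""
    return [1] * m

# (1 - X) * (1 + X + X^2 + ...) == 1 at every finite level m:
one_minus_x = [1, -1]
for m in range(2, 10):
    assert mul_trunc(one_minus_x, geometric(m), m) == [1] + [0] * (m - 1)

# The levels are compatible: truncating the level-6 answer gives level 5.
assert geometric(6)[:5] == geometric(5)
```

The compatibility of the levels under truncation is the formal counterpart of an element of the projective limit.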
\begin{lemma} \label{t5n} We have \[ \varprojlim (A_n \otimes_R \mathfrak{o}) = A \hat{\otimes}_R \mathfrak{o} = \mathfrak{o} [[ X_1 , \ldots , X_d ]] \] as topological rings. \end{lemma} \begin{proof} Consider the isomorphisms \begin{eqnarray*} \varprojlim_n (A_n \otimes_R \mathfrak{o}) & = & \varprojlim_n (A_n \otimes_R (\varprojlim_k \mathfrak{o} / \pi^k \mathfrak{o})) \\ & \overset{(1)}{=} & \varprojlim_n \varprojlim_k (A_n \otimes_R \mathfrak{o} / \pi^k \mathfrak{o}) \\ & = & \varprojlim_n \varprojlim_k (A \otimes_R \mathfrak{o}) / ((K_n + \pi^k A) \otimes_R \mathfrak{o} + A \otimes_R \pi^k \mathfrak{o}) \\ & \overset{(2)}{=} & \varprojlim_n (A \otimes_R \mathfrak{o}) / ((K_n + \pi^n A) \otimes_R \mathfrak{o} + A \otimes_R \pi^n \mathfrak{o}) \\ & \overset{(3)}{=} & \varprojlim_n (A \otimes_R \mathfrak{o}) / (M^n \otimes_R \mathfrak{o} + A \otimes_R \pi^n \mathfrak{o}) \\ & = & A \hat{\otimes}_R \mathfrak{o} \\ & \overset{(4)}{=} & \mathfrak{o} [[ X_1 , \ldots , X_d]] \; . \end{eqnarray*} Here (1) holds because $\varprojlim$ commutes with finite direct sums, (2) is true by cofinality, (3) holds because the topology on $A$ can also be described as the $M$-adic topology. Finally (4) follows from the definition of $A \hat{\otimes}_R \mathfrak{o}$ and the fact that $A = R [[X_1 , \ldots , X_d]]$. \end{proof} The $\mathfrak{o}$-algebra $A \hat{\otimes}_R \mathfrak{o} = \varprojlim_{\nu} (A_{\nu} \otimes_R \mathfrak{o})$ contains the ideal $\tilde{J} = \varprojlim_{\nu} J_{\nu}$. \begin{claim} \label{t5} We have \[ A \hat{\otimes}_R \mathfrak{o} / (\mathfrak{o} + \tilde{J}) = \varprojlim_{\nu} (A_{\nu} \otimes_R \mathfrak{o} / (\mathfrak{o} + J_{\nu})) \; . \] \end{claim} \begin{proof} The inclusion $G_{\nu} \subset G_{\nu+1}$ corresponds to a surjection of Hopf-algebras $A_{\nu+1} \to A_{\nu}$.
Hence $A_{\nu+1} \otimes_R \mathfrak{o} \to A_{\nu} \otimes_R \mathfrak{o}$ is surjective as well and group-like elements are mapped to group-like elements. The map on group-like elements is surjective because it corresponds to the surjective map $G'_{\nu+1} (\mathfrak{o}) \to G'_{\nu} (\mathfrak{o})$. Note here that $G'_{\mu} (\mathfrak{o}) = G'_{\mu} (C)$ for all $\mu$. It follows that the map $J_{\nu+1} \to J_{\nu}$ is surjective as well. In the exact sequence of projective systems \[ 0 \to (\mathfrak{o} + J_{\nu}) \to (A_{\nu} \otimes_R \mathfrak{o}) \to (A_{\nu} \otimes_R \mathfrak{o} / (\mathfrak{o} + J_{\nu})) \to 0 \] the system $(\mathfrak{o} + J_{\nu})$ is therefore Mittag--Leffler. Hence the sequence of projective limits is exact and the claim follows because the sum $\mathfrak{o} + J_{\nu}$ is direct: Group-like elements of $A_{\nu} \otimes_R \mathfrak{o}$ are mapped to $1$ by the counit $\varepsilon_{\nu}$. Therefore we have \begin{equation} \label{eq:10} J_{\nu} \subset I_{\nu} := \mathrm{Ker}\, (\varepsilon_{\nu} : A_{\nu} \otimes_R \mathfrak{o} \to \mathfrak{o}) \; . \end{equation} The sum $\mathfrak{o} + I_{\nu}$ being direct, we are done. \end{proof} Because of claim \ref{t5} and the surjectivity of the maps $A_{\nu+1} \otimes_R \mathfrak{o} \to A_{\nu} \otimes_R \mathfrak{o}$, claim \ref{t4} for connected groups is equivalent to the next assertion: \begin{claim} \label{t6} Let $G$ be a connected $p$-divisible group with $\dim G \le 1$. Then there is some $t \ge 1$ such that $p^t$ annihilates \[ \mathrm{Coker} (\mathfrak{o} \to A \hat{\otimes}_R \mathfrak{o} / \tilde{J}) \; . \] \end{claim} For connected $G$ of arbitrary dimension consider the Tate module of $G'$ \[ TG' = \varprojlim_{\nu} G'_{\nu} (C) = \varprojlim_{\nu} G'_{\nu} (\mathfrak{o}) \subset \varprojlim_{\nu} (A_{\nu} \otimes_R \mathfrak{o}) = A \hat{\otimes}_R \mathfrak{o} \; .
\] Let $J$ be the ideal of $A \hat{\otimes}_R \mathfrak{o} $ generated by the elements $h-1$ for $h \in TG'$. The image of $J$ under the reduction map $A \hat{\otimes}_R \mathfrak{o} \to A_{\nu} \otimes_R \mathfrak{o}$ lies in $J_{\nu}$. It follows that $J \subset \tilde{J}$. With $I_{\nu}$ as in \eqref{eq:10} we set $I = \varprojlim_{\nu} I_{\nu}$, an ideal in $A \hat{\otimes}_R \mathfrak{o}$. We have $J \subset \tilde{J} \subset I$ because of \eqref{eq:10}. Since $A \hat{\otimes}_R \mathfrak{o} = \mathfrak{o} \oplus I$, we get a surjection \begin{equation} \label{eq:11} I / J \twoheadrightarrow \mathrm{Coker} (\mathfrak{o} \to A\hat{\otimes}_R \mathfrak{o} / \tilde{J}) \; . \end{equation} Thus claim \ref{t6} will be proved if we can show that $p^t I \subset J$ at least for $\dim G = 1$. The construction in \cite{T} section (2.2) shows that under the isomorphism of $\mathfrak{o}$-algebras \[ A \hat{\otimes}_R \mathfrak{o} = \mathfrak{o} [[X_1 , \ldots , X_d]] \quad \mbox{we have} \; I = (X_1 , \ldots , X_d) \; . \] We will view the elements of $A \hat{\otimes}_R \mathfrak{o}$ and in particular those of $J$ as analytic functions on the open $d$-dimensional polydisc \[ \Delta^d = \{ x \in C^d \, | \, |x_i| < 1 \; \mbox{for all} \; i \} \; . \] Because of the inclusion $J \subset I$ all functions in $J$ vanish at $0 \in \Delta^d$. There are no other common zeroes: \begin{prop}[Tate] \label{t7} The zero set of $J$ in $\Delta^d$ consists only of the origin $0 \in \Delta^d$. \end{prop} \begin{proof} The $\mathfrak{o}$-valued points of the $p$-divisible group $G$, \[ G (\mathfrak{o}) = \varprojlim_i \varinjlim_{\nu} G_{\nu} (\mathfrak{o} / \pi^i \mathfrak{o}) \] can be identified with continuous $\mathfrak{o}$-algebra homomorphisms \[ G (\mathfrak{o}) = \mathrm{Hom}_{\mathrm{cont} , \mathrm{alg}} (A , \mathfrak{o}) = \mathrm{Hom}_{\mathrm{cont} , \mathrm{alg}} (A \hat{\otimes}_R \mathfrak{o} , \mathfrak{o}) \; .
\] Moreover we have a homeomorphism \[ \Delta^d \iso G (\mathfrak{o}) \quad \mbox{via} \quad x \mapsto (f \mapsto f (x)) \; . \] Here $f \in A\hat{\otimes}_R \mathfrak{o}$ is viewed as a formal power series over $\mathfrak{o}$. The group structure on $G (\mathfrak{o})$ induces a Lie group structure on $\Delta^d$ with $0 \in \Delta^d$ corresponding to $1 \in G (\mathfrak{o})$. Let $U$ be the group of $1$-units in $\mathfrak{o}$. Proposition 11 of \cite{T} asserts that the homomorphism of Lie groups \begin{equation} \label{eq:12} \alpha : \Delta^d = G (\mathfrak{o}) \to \mathrm{Hom}_{\mathrm{cont}} (TG' , U) \; , \; x \mapsto (h \mapsto h (x)) \end{equation} is {\it injective}. Note here that $TG' \subset A \hat{\otimes}_R \mathfrak{o}$. Let $x \in \Delta^d$ be a point in the zero set of $J$. Then we have $(h-1) (x) = 0$, i.e. $h (x) = 1$, for all $h \in TG'$. Hence $x$ maps to $1 \in \mathrm{Hom}_{\mathrm{cont}} (TG' , U)$. Since $\alpha$ is injective, it follows that we have $x = 0$. \end{proof} If a Hilbert Nullstellensatz were true in $C \langle X_1 , \ldots , X_d \rangle$, we could conclude that we had $\sqrt{J \otimes C} = I \otimes C$ and with further arguments from \cite{T} we would get $p^t I \subset J$. However the Nullstellensatz does not hold in the ring $C \langle X_1 , \ldots , X_d \rangle$. In the next section we will provide a replacement which is proved for $d = 1$ and conjectured for $d \ge 2$. In order to apply it to the ideal $J \otimes C$ in $C \langle X_1 , \ldots , X_d \rangle$ we need to know the following assertion, which is stronger than proposition \ref{t7}. For $x \in C^m$ set $\| x \| = \max_i |x_i|$. \begin{prop} \label{t8} Let $h_1 , \ldots , h_r$ be a ${\mathbb{Z}}_p$-basis of $TG' \subset \mathfrak{o} [[ X_1 , \ldots , X_d ]]$ and set $H (x) = (h_1 (x) , \ldots , h_r (x))$ and $\mathbf{1} = (1 , \ldots , 1)$.
Then there is a constant $\delta > 0$ such that we have: \[ \| H (x) - \mathbf{1} \| \ge \delta \|x \| \quad \mbox{for all} \; x \in \Delta^d \; . \] \end{prop} \begin{proof} The ${\mathbb{Z}}_p$-rank $r$ of $TG'$ is the height of $G'$ and hence we have $r \ge d = \dim G$. Consider the following diagram $(*)$ on p. 177 of \cite{T}: \[ \xymatrix{ 1 \ar[r] & G (\mathfrak{o})_{\mathrm{tors}} \ar[r] \ar[d]^{\wr}_{\alpha_0} & G (\mathfrak{o}) \ar[r]^L \ar@{^{(}->}[d]_{\alpha} & t_G (C) \ar[r] \ar@{^{(}->}[d]_{d \alpha} & 0 \\ 1 \ar[r] & \mathrm{Hom} (TG' , U_{\mathrm{tors}}) \ar[r] & \mathrm{Hom} (TG' , U) \ar[r]^{\log_*} & \mathrm{Hom} (TG' , C) \ar[r] & 0 . } \] Here the $\mathrm{Hom}$-groups refer to continuous homomorphisms and the map $\alpha$ was defined in equation \eqref{eq:12} above. The map $L$ is the logarithm map to the tangent space $t_G (C)$ of $G$ and $\log_*$ is induced by $\log : U \to C$. According to \cite{T} proposition 11 the maps $\alpha$ and $d\alpha$ are injective and $\alpha_0$ is bijective. It will suffice to prove the following two statements: \begin{enumerate} \item [I)] For any $\varepsilon > 0$ there is a constant $\delta (\varepsilon) > 0$ such that \[ \| H (x) - \mathbf{1} \| \ge \delta (\varepsilon) \; \mbox{for all} \; x \in \Delta^d \; \mbox{with} \; \| x \| \ge \varepsilon \; . \] \item[II)] There are $\varepsilon > 0$ and $a > 0$ such that \[ \| H (x) - \mathbf{1} \| \ge a \|x \| \; \mbox{for all} \; x \in \Delta^d \; \mbox{with} \; \| x \| \le \varepsilon \; . 
\] \end{enumerate} Identifying $G (\mathfrak{o})$ with $\Delta^d$ where we write the induced group structure on $\Delta^d$ as $\oplus$, and identifying $TG'$ with ${\mathbb{Z}}^r_p$ via the choice of the basis $h_1 , \ldots , h_r$, the above diagram becomes the following one where $A = dH$ and $H_0$ is the restriction of $H$ to $(\Delta^d)_{\mathrm{tors}}$ \[ \xymatrix{ 0 \ar[r] & (\Delta^d)_{\mathrm{tors}} \ar[r] \ar[d]^{\wr}_{H_0} & \Delta^d \ar[r]^L \ar@{^{(}->}[d]_H & C^d \ar[r] \ar@{^{(}->}[d]^{A = dH} & 0 \\ 1 \ar[r] & U^r_{\mathrm{tors}} \ar[r] & U^r \ar[r]^{\log} & C^r \ar[r] & 0 . } \] Assume that assertion I) is wrong for some $\varepsilon > 0$. Then there is a sequence $x^{(i)}$ of points in $\Delta^d$ with $\| x^{(i)} \| \ge \varepsilon$ such that $H (x^{(i)}) \to \mathbf{1}$ for $i \to \infty$. It follows that $A (L (x^{(i)})) = \log H (x^{(i)}) \to 0$ for $i \to \infty$. Since $A$ is an injective linear map between finite dimensional $C$-vector spaces, there exists a constant $a > 0$ such that we have \begin{equation} \label{eq:13} \| A (v) \| \ge a \|v\| \quad \mbox{for all} \; v \in C^d \; . \end{equation} Hence we see that $L (x^{(i)}) \to 0$ for $i \to \infty$. Since $L$ is a local homeomorphism, there exists a sequence $y^{(i)} \to 0$ in $\Delta^d$ with $L (x^{(i)}) = L (y^{(i)})$ for all $i$. The sequence $z^{(i)} = x^{(i)} \ominus y^{(i)}$ in $\Delta^d$ satisfies $L (z^{(i)}) = 0$ and hence lies in $(\Delta^d)_{\mathrm{tors}}$. We have $H_0 (z^{(i)}) = H (x^{(i)}) H (y^{(i)})^{-1}$. Moreover $H (x^{(i)}) \to \mathbf{1}$ by assumption and $H (y^{(i)}) \to \mathbf{1}$ since $y^{(i)} \to 0$. Hence $H_0 (z^{(i)}) \to 1$ and therefore $H_0 (z^{(i)}) = 1$ for all $i \gg 0$ since the subspace topology on $U_{\mathrm{tors}} \subset U$ is the discrete topology. The map $H_0$ being bijective we find that $z^{(i)} = 0$ for $i \gg 0$ and therefore $x^{(i)} = y^{(i)}$ for $i \gg 0$.
This implies that $x^{(i)} \to 0$ for $i \to \infty$ contradicting the assumption $\| x^{(i)} \| \ge \varepsilon$ for all $i$. Hence assertion I) is proved. We now turn to assertion II). Set $X = (X_1 , \ldots , X_d)$. Then we have \[ H(X) = \mathbf{1} + AX + (\deg \ge 2) \; . \] Componentwise this gives for $1 \le j \le r$ \[ h_j (x) - 1 = \sum^d_{i=1} a_{ij} x_i + (\deg \ge 2)_j \; . \] Let $a$ be the constant from equation \eqref{eq:13} and choose $\varepsilon > 0$ such that for $\| x \| \le \varepsilon$ we have \[ \| (\deg \ge 2)_j \| \le \frac{a}{2} \| x \| \quad \mbox{for} \; 1 \le j \le r \; . \] For any $x$ with $\| x \| \le \varepsilon$, according to \eqref{eq:13} there is an index $j$ with \[ \Big| \sum^d_{i=1} a_{ij} x_i \Big| \ge a \|x \| \; . \] Since $| (\deg \ge 2)_j | \le \frac{a}{2} \| x \|$ is strictly smaller than this sum, the ultrametric inequality implies that we have \[ | h_j (x) - 1 | = \Big| \sum^d_{i=1} a_{ij} x_i + (\deg \ge 2)_j \Big| = \Big| \sum^d_{i=1} a_{ij} x_i \Big| \ge a \| x \| \] and hence \[ \| H (x) - \mathbf{1} \| \ge a \| x \| \; . \] \end{proof} \section{The connected case II (the $p$-adic Corona problem)} \label{sec:4} As remarked in the previous section we need a version of the Hilbert Nullstellensatz in $C \langle X_1 , \ldots , X_d \rangle$ for the case where the zero set is $\{ 0 \} \subset \Delta^d$. The only result for $C \langle X_1 , \ldots , X_d \rangle$ in the spirit of the Nullstellensatz that I am aware of concerns an empty zero set: \begin{coronatheorem}[van~der~Put, Bartenwerfer] \label{t9} For $f_1 , \ldots , f_n$ in $C \langle X_1 , \ldots , X_d \rangle$ the following conditions are equivalent: \begin{enumerate} \item [1)] The functions $f_1 , \ldots , f_n$ generate the $C$-algebra $C \langle X_1 , \ldots , X_d \rangle$. \item[2)] There is a constant $\delta > 0$ such that \[ \max_{1 \le j \le n} |f_j (x)| \ge \delta \quad \mbox{for all} \; x \in \Delta^d \; . \] \end{enumerate} \end{coronatheorem} It is clear that the first condition implies the second.
The non-trivial implication was proved by van~der~Put for $d = 1$ in \cite{P} and by Bartenwerfer in general, cf.\ \cite{B}. Both authors give a more precise statement of the theorem where the norms of possible functions $g_j$ with $\sum_j f_j g_j = 1$ are estimated. Consider the following conjecture which deals with the case where the zero set may contain $\{ 0 \}$. \begin{conj} \label{t10} For $g_1 , \ldots , g_n$ in $C \langle X_1 , \ldots , X_d \rangle$ the following conditions are equivalent: \begin{enumerate} \item [1)] $(g_1 , \ldots , g_n) \supset (X_1 , \ldots , X_d)$. \item[2)] There is a constant $\delta > 0$ such that \begin{equation} \label{eq:14} \max_{1 \le j \le n} |g_j (x)| \ge \delta \| x \| \quad \mbox{for all} \; x \in \Delta^d \; . \end{equation} \end{enumerate} \end{conj} As above, immediate estimates show that the first condition implies the second. Note also that if some $g_j$ does not vanish at $x = 0$ we have \[ \max_{1 \le j \le n} |g_j (x)| \ge \delta' > 0 \; \mbox{in a neighborhood of} \; x = 0 \; . \] Together with \eqref{eq:14} this implies that \[ \max_{1 \le j \le n} |g_j (x)| \ge \delta'' > 0 \quad \mbox{for all} \; x \in \Delta^d \; . \] The $p$-adic Corona theorem then gives $(g_1 , \ldots , g_n) = C \langle X_1 , \ldots , X_d \rangle$. Thus condition 1) follows in this case. \begin{prop} \label{t11} The preceding conjecture is true for $d = 1$. \end{prop} \begin{proof} As explained above, we may assume that all functions $g_1 , \ldots , g_n$ vanish at $x = 0$. Then $f_j (X) = X^{-1} g_j (X)$ is in $C \langle X \rangle$ for every $1 \le j \le n$ and estimate \eqref{eq:14} implies the estimate \[ \max_{1 \le j \le n} |f_j (x)| \ge \delta \quad \mbox{for all} \; x \in \Delta^1 \; . \] The $p$-adic Corona theorem for $d = 1$ now shows that \[ (f_1 , \ldots , f_n) = (1) \quad \mbox{and hence} \quad (g_1 , \ldots , g_n) = (X) \; .
\] \end{proof} Let us now return to $p$-divisible groups and recall the surjection \eqref{eq:11}: \begin{equation} \label{eq:15} I / J \twoheadrightarrow \mathrm{Coker} (\mathfrak{o} \to A \hat{\otimes}_R \mathfrak{o} / \tilde{J}) \; . \end{equation} Here $I = (X_1 , \ldots , X_d)$ in $\mathfrak{o} [[X_1 , \ldots , X_d]]$ and $J$ is the ideal generated by the elements $h-1$ for $h \in TG'$. Let $J_0 \subset J$ be the ideal generated by the elements $h_1 - 1 , \ldots , h_r -1$ where $h_1 , \ldots , h_r$ form a ${\mathbb{Z}}_p$-basis of $TG'$. In proposition \ref{t8} we have seen that for some $\delta > 0$ we have \[ \max_{1 \le j \le r} |h_j (x) - 1| \ge \delta \|x \| \quad \mbox{for all} \; x \in \Delta^d \; . \] Conjecture \ref{t10} (which is true for $d = 1$) would therefore imply \[ (h_1 -1 , \ldots , h_r -1) = (X_1 , \ldots , X_d) \quad\mbox{in} \; C \langle X_1 , \ldots , X_d \rangle \; . \] Thus we would find some $t \ge 1$ such that we have \[ p^t X_i \in J_0 \subset \mathfrak{o} [[ X_1 , \ldots , X_d]] \quad \mbox{for all} \; 1 \le i \le d \] and hence also $p^t I \subset J_0 \subset J$. Using the surjection \eqref{eq:15} this would prove claim \ref{t6} and hence theorem \ref{t3} without restriction on $\dim G$. Also theorem \ref{t1} would follow without restriction on $\dim G'$. As it is we have to assume $\dim G \le 1$ resp. $\dim G' \le 1$ in these assertions. \end{document}
\begin{document} \title{Isospectral mapping for quantum systems with energy point spectra \\to polynomial quantum harmonic oscillators} \author{Ole Steuernagel${}^{1}$ and Andrei B. Klimov${}^{2}$} \affiliation{${}^{1}${School of Physics, Astronomy and Mathematics,~University of Hertfordshire, Hatfield, AL10 9AB, UK}\\ ${}^{2}$Departamento de F\'isica, Universidad de Guadalajara, 44420 Guadalajara, Jalisco, Mexico} \date{\today} \begin{abstract} We show that a polynomial $\hat {\cal H}_{(N)}$ of degree~$N$ of a harmonic oscillator hamiltonian allows us to devise a fully solvable continuous quantum system for which the first $N$ discrete energy eigenvalues can be chosen at will. In general such a choice leads to a re-ordering of the associated energy eigen\-functions of $\hat {\cal H}$ such that the number of their nodes does not increase monotonically with increasing level number. Systems $\hat {\cal H}$ have certain `universal' features; we study their basic behaviours. \end{abstract} \maketitle \section{Introduction \label{sec_intro}} Unlike for finite discrete systems, for continuous quantum systems it is generally hard to work out their energy spectrum, and given an energy spectrum, it is generally hard to write down a hamiltonian $\hat {\cal H}$ of a continuous system with that spectrum. It is therefore noteworthy that, given a finite arbitrary set of $N$ real values, a continuous one-dimensional quantum system's hamiltonian $\hat {\cal H} (\hat x, \hat p)$, with this set as its first $N$ energy eigenvalues, can be devised. This is shown here by explicit construction of a formal hamiltonian using real polynomials~${\cal P}(\hat h) = \hat {\cal H}_{(N)}$ of $N$-th degree of the harmonic quantum oscillator hamiltonian~$\hat h (\hat x, \hat p)$.
For polynomials of low degree $N$ such hamiltonians~$\cal H$ can arise as effective descriptions of fields~\cite{Bender_PRL08}, oscillating beams~\cite{Berry_JPAMG86}, nano-oscillators~\cite{Jacobs_PRL09}, Kerr-oscillators~\cite{Dykman_PRB05,Yurke_JLT06,Bezak_AP16,Zhang_PRA17,Oliva_Kerr_18}, and cold gases~\cite{Greiner_NAT02}. In this work we primarily consider conservative one-dimensional quantum mechanical bound state systems of one particle with mass~$M$ subjected to a trapping potential~$V(x)$, i.e., hamiltonians of the form \begin{eqnarray} \label{eq:_QM_hamiltonian} \hat H = \frac{\hat p^2}{2 M} + V(\hat x) \end{eqnarray} as reference systems. In Section~\ref{sec_Map_2_Hosc} we introduce the mapping from $\hat H$ to $\hat {\cal H}$ and we stress that this mapping can reorder wave functions, violating the Sturm-Liouville rule for monotonic energy-level ordering of quantum mechanical systems. We then show in Section~\ref{sec_NumericalImplementation} that the mapping of an increasing number~$N$ of energy levels of a fixed $\hat H$ to a family of mapped systems $\hat {\cal H}_{(N)}$, which share these energy level values, does not generally converge in the limit of large $N$. In Section~\ref{sec_continuous_deformation} we investigate how continuous deformations of the potential~$V(x)$ in Eq.~(\ref{eq:_QM_hamiltonian}) affect the formal hamiltonian~${\cal H}$. We consider deformations which take systems that do not exhibit tunnelling behaviour to ones that do, and we consider the transition from one- to multi-well systems. We finally comment on the phase space behaviour of~$\hat {\cal H}$ and the reshaping of states when mapping between $\hat H$ and~$\hat {\cal H}$ in Sections~\ref{sec_Phase_Space_J} and~\ref{sec_Mapped_States}, before we conclude.
\section{Mapping to polynomials of harmonic oscillator hamiltonians \label{sec_Map_2_Hosc}} The (dimensionless) harmonic oscillator hamiltonian is given by $ \hat h = \frac{\hat p^2 }{2} + \frac{\hat x^2 }{2} , $ where we set Planck's reduced constant $\hbar$, the spring constant and mass~$M$ of the oscillator all equal to `1'. Expressing position~$\hat x = \frac{1}{\sqrt{2}}(\hat b^\dagger + \hat b)$ and momentum~$\hat p = \frac{i}{\sqrt{2}}(\hat b^\dagger - \hat b)$ in terms of bosonic creation operators~ $\hat b^\dagger$ and annihilation operators $\hat b$, that fulfil the commutation relation $[ \hat b, \hat b^\dagger ] = \hat 1$ and form the number operator $ \hat b^\dagger \hat b = \hat n$, allows us to write $ \hat h = \hat b^\dagger \hat b + \frac{\hat 1 }{2} = \hat n + \frac{\hat 1 }{2} $. Using~$\hat h$ as the argument of a polynomial~${\cal P}_{(N)}$ of order $N$ with real coefficients~$a_j$ yields the formal hamiltonian \begin{eqnarray} \label{eq:_Polynom} \hat {\cal H}_{(N)} \equiv {\cal P}_{(N)} (\hat h) = \sum_{j=1}^N a_j \hat h^j . \end{eqnarray} Its energy spectrum derives from the mapping of the harmonic oscillator spectrum $h_n = n + \frac{1}{2}$ to \begin{eqnarray} \label{eq:_H_spectrum} E_n \equiv \langle \phi_n | \hat {\cal H} | \phi_n \rangle = {\cal P}(h_n) . \end{eqnarray} Here the eigenvalues~$n=0,1,2,...,\infty$ of the number operator~$\hat n$ label the harmonic oscillator eigenfunctions \begin{eqnarray} \label{eq:_eigenstates} \phi_n(x) = \langle x | \frac{(b^\dagger)^n}{\sqrt{n!}} | 0 \rangle = \frac{1}{\sqrt{2^n n!\sqrt{\pi}}} e^{-\frac{x^2}{2}} \eta_n(x), \qquad \end{eqnarray} where $\eta_n$ are the Hermite polynomials. We note that construction~(\ref{eq:_Polynom}) renders ${\cal H}$ fully solvable since all its wave functions (and also associated phase space Wigner distribution functions) are inherited from the harmonic oscillator. 
Moreover, the associated classical hamiltonians ${\cal H}(x,p)={\cal P}(\frac{r^2}{2})$ are functions of $r=\sqrt{x^2+p^2}$ alone; their quantum versions,~$\hat{\cal H} = {\cal H}(\hat x, \hat p)$, inherit this phase space symmetry and obey probability conservation on the energy contours of the system (concentric circles around the origin, i.e. invariance under O(2) rotations)~\cite{Oliva_Kerr_18}. Such probability conservation on energy contours is not generic; it is a special case for systems of the form~$\hat {\cal H}$, see Section~\ref{sec_Phase_Space_J}. \subsection{Reordering of energy eigenfunctions \label{subsec_reorder}} The Sturm-Liouville rule for quantum mechanical hamiltonians~(\ref{eq:_QM_hamiltonian}) states that the ground state~$\phi_0$ is node-free and excited states~$\phi_n$ have $n$ nodes. It is based on the Sturm-Liouville theorem which applies to second order differential equations but not here, since $\hat {\cal H} = {\cal P}(\hat h)$ can be of high order in $\hat p$. Consequently we find that the value of $E_n$ can be equal to or greater than $E_{n+1}$. Below we show that the first $N$ energy eigenvalues \{$ {\cal P}_{(N)}(h_n), n=0,...,N-1$\} can be constructed to coincide with any (random) sequence of real numbers. In 1986, Berry and Mondragon noticed that energy-level degeneracies can occur for 1D-systems which are quartic in momentum~\cite{Berry_JPAMG86}. That is a special case of our more general observation about random level reordering. More recently, the fact that the Sturm-Liouville ordering rule for wave function nodes can be violated has been reported for the quasi-energy levels (in rotating-wave approx\-imation) of driven Kerr systems~\cite{Dykman_PRB05,Zhang_PRA17}. For $\hat {\cal H} = \hat h^2 - \frac{13}{2} \hat h$, this ordering violation is demonstrated graphically in Fig.~\ref{fig:Distributions}; although this case is discussed in Ref.~\cite{Bezak_AP16}, the reordering is not mentioned there.
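The reordering produced by ${\cal P}(h) = h^2 - \frac{13}{2} h$ can be checked directly from the spectrum map $E_n = {\cal P}(h_n)$ with $h_n = n + \frac{1}{2}$. A minimal Python sketch (illustration only, not part of the original analysis):

```python
# Energies E_n = P(h_n) for P(h) = h^2 - (13/2) h, with h_n = n + 1/2.
levels = [(n, (n + 0.5) ** 2 - 6.5 * (n + 0.5)) for n in range(8)]

# The minimum energy is attained at n = 3: the eigenfunction with three
# nodes is the ground state, violating Sturm-Liouville level ordering.
ground = min(levels, key=lambda t: t[1])
assert ground == (3, -10.5)
```

The level energies run $-3, -7.5, -10, -10.5, -9, \ldots$, so the node count first rises and then falls along the energy-ordered spectrum, exactly the behaviour shown in Fig.~\ref{fig:Distributions}.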
\begin{figure} \caption{\emphCaption{Spectrum and energy eigenstates.}\label{fig:Distributions}} \end{figure} \subsection{Dialling up the spectrum \label{sec_dial_spectrum}} We now show that we can dial up an arbitrary real point spectrum for the first $N$ energy eigenstates of~$\hat {\cal H}$. Note, by `first' we mean the entries of the column-vector $\VEC{E}_{(N)} = [E_0,E_1,...,E_{N-1}]^\top$ of Eq.~(\ref{eq:_H_spectrum}). Because of the reordering, these are in general not the lowest lying energy values. Rewriting Eq.~(\ref{eq:_Polynom}) in a suitable matrix form is achieved by casting the energy values of $\hat h^j$ into the form of a square $N \times N$ `energy-matrix' $[\VEC{\epsilon}_{(N)}]_{n,j} = (h_n)^j$. The coefficient column-vector $\VEC{a}_{(N)} = [a_1,a_2,...,a_N]^\top$ then, according to Eq.~(\ref{eq:_Polynom}), obeys $\VEC{E}_{(N)} = \VEC{\epsilon}_{(N)} \cdot \VEC{a}_{(N)}$ where the dot stands for matrix multiplication. For instance, $\VEC{\epsilon}_{(5)}$ has the form \begin{eqnarray} \label{eq:_EnergyMatrix} \VEC{\epsilon}_{(5)} = \begin{bmatrix} \frac{1}{2} & \frac{1}{4} & \frac{1}{8} & \frac{1}{16} & \frac{1}{32} \\[4pt] \frac{3}{2} & \frac{9}{4} & \frac{27}{8} & \frac{81}{16} & \frac{243}{32} \\[4pt] \frac{5}{2} & \frac{25}{4} & \frac{125}{8} & \frac{625}{16} & \frac{3125}{32} \\[4pt] \frac{7}{2} & \frac{49}{4} & \frac{343}{8} & \frac{2401}{16} & \frac{16807}{32} \\[4pt] \frac{9}{2} & \frac{81}{4} & \frac{729}{8} & \frac{6561}{16} & \frac{59049}{32} \\ \end{bmatrix}. \end{eqnarray} The determinant $|\VEC{\epsilon}_{(N)}|= \prod_{g=1}^{N-1}[g!
(2 g + 1)] / 2^N$ is non-zero; hence $\VEC{\epsilon}_{(N)}$ can always be inverted: \emph{any} column vector $\VEC{E}_{(N)}$ uniquely speci\-fies a coefficient vector~$\VEC{a}_{(N)}$ where \begin{widetext} \begin{figure} \caption{\emphCaption{Quantum mechanical potential~$V(x)$, its analogue~$\hat {\cal H}$.}\label{fig:OscillatingCoefficients}} \end{figure} \end{widetext} \begin{eqnarray} \label{eq:_a_from_E} \VEC{a}_{(N)} = \VEC{\epsilon}^{-1}_{(N)} \cdot \VEC{E}_{(N)} \; \end{eqnarray} and thus $\VEC{a}_{(N)}$ specifies a formal hamiltonian $\hat {\cal H}_{(N)}$ for which the first $N$ eigenfunctions~$| \phi_n \rangle$ have energies $\VEC{E}_{(N)}$. As mentioned before, this observation implies that~$\hat {\cal H}$ can be formed such that any level is randomly assigned any real energy value: $\langle \phi_n | \hat {\cal H} | \phi_n \rangle = E_n$. However, in quantum mechanical systems we expect the Sturm-Liouville level ordering rule to be obeyed. In Subsection~\ref{sec_even_odd} we will find that in our construction violations of the Sturm-Liouville level ordering rule arise spontaneously but, fortunately, when we restrict the order of ${\cal P}_{(N)}$ to either even or odd values of $N$ (depending on the system~$\hat H$ considered) this irritating ordering violation is absent. In other words: if for a given potential~$V(x)$ the mapping of $H$ onto ${\cal H}_{(N)}$ does not reproduce the lowest $N$ values of $\VEC{E}_{(N)}$, then a mapping onto ${\cal H}_{(N+1)}$ will. Despite the fact that $H$ and the formal hamiltonian ${\cal H}_{(N)}$ share parts of their energy spectrum, $V(x)$ and ${\cal H}_{(N)}(x,0)$ do not have any obvious functional relationship; see Fig.~\ref{fig:OscillatingCoefficients}. This observation is reinforced by the fact that the formal hamiltonian ${\cal H}_{(N)}$ is invariant under parity transformations whereas $\hat H$ in general is not.
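The dial-up procedure above is plain linear algebra. The following Python sketch (illustration only; the target energies are invented) builds $\VEC{\epsilon}_{(N)}$, checks the quoted determinant formula, and recovers the coefficients $\VEC{a}_{(N)}$:

```python
import numpy as np
from math import factorial, prod

def energy_matrix(N):
    """[eps_(N)]_{n,j} = (h_n)^j with h_n = n + 1/2 and j = 1..N."""
    h = np.arange(N) + 0.5
    j = np.arange(1, N + 1)
    return h[:, None] ** j[None, :]

# Closed-form determinant prod_{g=1}^{N-1}[g!(2g+1)] / 2^N quoted in the
# text; for N = 5 both evaluate to 8505, so eps_(N) is invertible.
N = 5
det_closed = prod(factorial(g) * (2 * g + 1) for g in range(1, N)) / 2 ** N
assert np.isclose(np.linalg.det(energy_matrix(N)), det_closed)

# Dial up a hypothetical spectrum for the first N levels (E_0 = 0 as in
# the text) and solve eps . a = E for the coefficients a_1..a_N.
E = np.array([0.0, 3.0, -1.0, 7.5, 2.0])
a = np.linalg.solve(energy_matrix(N), E)

# Check: P(h_n) = sum_j a_j h_n^j reproduces the dialled values.
h = np.arange(N) + 0.5
P = sum(a[j - 1] * h ** j for j in range(1, N + 1))
assert np.allclose(P, E)
```

Here `np.linalg.solve` stands in for the explicit inverse $\VEC{\epsilon}^{-1}_{(N)}$ of Eq.~(\ref{eq:_a_from_E}); in floating point this is only reliable for modest $N$, which is why the exact-fraction route of the next section matters.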
One is also free to generalise our approach, for example, by assigning energy values to only some eigenfunctions~$\phi_n$. To this end one can strip out the $m$-th entry in $\VEC{E}$ together with the $m$-th row in $\VEC{\epsilon}$, thus removing an assignment for an `unwanted' state~$\phi_m$ [whose value would still be assigned implicitly through Eq.~(\ref{eq:_H_spectrum})]. One then also has to strip out one column of $\VEC{\epsilon}$ (together with the associated entry in $\VEC a$) to keep $\VEC{\epsilon}$ invertible. This column could, e.g., be the last ($N$-th) column, in which case the order of polynomial~$\cal P$ would be reduced by one. \subsection{Shifting the ground state energy $E_0$ \label{subsec_shift_spectrum}} The expansion coefficients $\VEC a$ depend on the value of $E_0$. We cannot shift the harmonic oscillator's spectrum such that its ground state energy becomes $h_0 = 0$, since that would render $|\VEC{\epsilon}|=0$, making $\VEC{\epsilon}^{-1}$ in Eq.~(\ref{eq:_a_from_E}) ill-defined. We will therefore from now on, for definiteness, set $E_0 = 0$: for further justification see Fig.~\ref{fig:Coefficients}. \begin{figure} \caption{\emphCaption{Expansion coefficients vary with shift of energy.}\label{fig:Coefficients}} \end{figure} \section{Computational implementation of $\hat {\cal H}$ and stability considerations\label{sec_NumericalImplementation}} \subsection{Using exact fractions\label{sec_exact_fractions}} Since we could not determine the general explicit form of $\VEC{\epsilon}^{-1}$ in Eq.~(\ref{eq:_a_from_E}) we let a program determine it, in terms of exact fractions, to avoid numeri\-cal instabilities associated with approximations of $\VEC{\epsilon}^{-1}$. With modern computer algebra systems it is feasible to do so, in seconds, for $N$-values of order $10^3$.
For the sake of numerical stability we found that the energy eigenvalues of $\hat H$, which are used as input values, also have to be formally written in analytical form, namely as fractions (e.g., $E_5 = 10.453$ should be written as $E_5 = 10453/1000$). Then, even for fairly large values ${\cal O}(N) \approx 10^3$, the analogue hamiltonian $\hat {\cal H}_{(N)}$ can be constructed safely using Eq.~(\ref{eq:_a_from_E}). The associated high order polynomial function ${\cal P}(x,p)$ is of order $2N$ in $x$ and $p$ and therefore becomes numerically unmanageable for moderate values ${\cal O}(N) \approx 10^2$; fortunately, that does not affect the stability of the underlying scheme encapsulated by Eq.~(\ref{eq:_a_from_E}).
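The exact-fraction strategy described above can be sketched without a full computer algebra system, using Python's `fractions` module; `exact_solve` below is a hypothetical helper written for this sketch, not the authors' code:

```python
from fractions import Fraction

def exact_energy_matrix(N):
    """eps_(N) over exact rationals: entry (n, j) is (n + 1/2)^j, j = 1..N."""
    return [[Fraction(2 * n + 1, 2) ** j for j in range(1, N + 1)]
            for n in range(N)]

def exact_solve(M, b):
    """Gauss-Jordan elimination over Fractions: x with M x = b, no rounding."""
    N = len(b)
    A = [list(row) + [b[i]] for i, row in enumerate(M)]
    for col in range(N):
        piv = next(r for r in range(col, N) if A[r][col] != 0)  # pivot search
        A[col], A[piv] = A[piv], A[col]
        p = A[col][col]
        A[col] = [x / p for x in A[col]]
        for r in range(N):
            if r != col and A[r][col] != 0:
                f = A[r][col]
                A[r] = [x - f * y for x, y in zip(A[r], A[col])]
    return [A[r][-1] for r in range(N)]

# A decimal input such as E_5 = 10.453 enters as the exact fraction
# 10453/1000, as recommended in the text (other energies are invented).
E = [Fraction(0), Fraction(1), Fraction(2), Fraction(3), Fraction(10453, 1000)]
a = exact_solve(exact_energy_matrix(5), E)

# The recovered polynomial reproduces every input energy with zero error.
for n, E_n in enumerate(E):
    h_n = Fraction(2 * n + 1, 2)
    assert sum(a_j * h_n ** j for j, a_j in enumerate(a, start=1)) == E_n
```

Because every intermediate quantity is an exact rational, the ill-conditioning of $\VEC{\epsilon}_{(N)}$ never degrades the coefficients, at the price of coefficient numerators and denominators that grow with $N$.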
\section{Smooth deformations of the potential and their effect on ${\cal H}_{(N)}$ \label{sec_continuous_deformation}} \subsection{Even versus odd number of levels \label{sec_even_odd}} In the mapping of $\hat H$ to $\hat {\cal H}_{(N)}$ we observed that for the fixed potential portrayed in Fig.~\ref{fig:OscillatingCoefficients}~\emphLabel{a} the expansion order~$N$ should be odd to avoid a downward open potential that violates the Sturm-Liouville level ordering rule; see Fig.~\ref{fig:OscillatingCoefficients}~\emphLabel{b}. In general, a fixed potential $V(x)$ requires either even or odd expansion orders~$N$ to achieve this. This observation of an even-odd-$N$ bias is generic, since the oscillations of the coefficients~$a_j$, as seen in Fig.~\ref{fig:OscillatingCoefficients}~\emphLabel{c}, are typical. The observation of such an `even-odd bias' in the desirable orders of the expansion for~${\cal H}_{(N)}$ raises the question whether one can use this bias to devise a criterion for grouping potentials into separate, inequivalent classes. \subsection{Deformation from single well to deep double well potential \label{subsec_Single_double}} In the case of a continuous transition from single well to double well potentials, as sketched in Fig.~\ref{fig:EvenVsOddCoefficients}, such an even-odd transition occurs once. In this case we do not, for instance, witness a back-and-forth switching between even-odd and odd-even biases with, say, every addition of the next higher eigenstate to the tunnelling regime. \begin{figure} \caption{\emphCaption{Expansion order bias switch in deformed potentials.}\label{fig:EvenVsOddCoefficients}} \end{figure} \subsection{Multi-well systems\label{subsec_Multiwell_systems}} Instead of deforming the potentials such that they form increasingly deeper wells, as considered in Fig.~\ref{fig:EvenVsOddCoefficients}, we now consider systems with an increase in the number of adjacent wells; see Fig.~\ref{fig:MultiWell}.
We map this to a sixth-order formal hamiltonian~$\hat {\cal H}_{(6)}$ and observe even-odd-$N$ bias transitions whenever another well is added. Similarly to our finding reported in Fig.~\ref{fig:EvenVsOddCoefficients}, an increase in the barrier height between adjacent wells does not, however, affect the observed even-odd-$N$ bias: The even-odd-$N$ biases can be used to discriminate between different numbers of wells in multi-well potentials but not between the strength of the barriers between the wells. The question whether this can be a useful criterion to sensitively discriminate between different types of hamiltonians in other contexts remains open. \begin{figure} \caption{\emphCaption{Expansion order bias in multi-well systems.}\label{fig:MultiWell}} \end{figure} \section{Phase space behaviour\label{sec_Phase_Space_J}} The evolution of Wigner's phase space distribution function in quantum phase space is governed by the continuity equation $\frac{\partial W}{\partial t} = - \VEC{\nabla} \cdot \VEC{J} $, where $\VEC J$ is Wigner's current~\cite{Oliva_PhysA17}. Hamiltonians of the form $\hat {\cal H} = {\cal P}(\hat h)$ have special dynamical features: their phase space\xspace current follows circles concentric to the phase space\xspace origin; for a proof see the Appendix of Ref.~\cite{Oliva_Kerr_18}. In other words, their phase space\xspace current is tangential to the system's energy contours. We now show that such a special alignment cannot be constructed for quantum mechanical systems with anharmonic potentials~$V(x)$. In reference~\cite{Oliva_PhysA17} it was shown that anharmonic systems exhibit singularities of the associated phase space velocity field. This precludes the possibility of mapping them to systems whose dynamics can be described by the Poisson-bracket of classical physics~\cite{Kakofengitis_PRA17}, but directional alignment of $\VEC J$ with the energy contours is not ruled out.
For a contradiction, assume that the desired directional alignment is possible for anharmonic quantum mechanical systems when using a correction to the current field~$\VEC{\tilde J}$, where $\VEC{\nabla} \cdot \VEC{\tilde J} = 0$, so as not to affect the dynamics. Per assumption $\VEC{J} + \VEC{\tilde J}$ is aligned with the classical hamiltonian flow in phase space and therefore shares its stagnation points. But at a stagnation point $\partial_t W = 0$, yet we know that quantum and classical phase space current stagnation points do not in general coincide~\cite{Kakofengitis_EPJP17}: anharmonic quantum hamiltonians can therefore not feature phase space current fields aligned with their energy contours, unless they are of the form~$\hat {\cal H}$. \section{Mapped states\label{sec_Mapped_States}} In this work we implicitly considered the diagonalization of a wide variety of hamiltonians~$H$ and their subsequent mapping to generic systems of the form~${\cal H}$. The dia\-go\-nalization of a quantum hamiltonian is not a smooth transformation. \begin{figure} \caption{\emphCaption{Mapped `coherent states'.}\label{fig:ConstipatedStates}} \end{figure} It is therefore of some interest to get a feeling for the distortions a state suffers when mapping between hamiltonians~$H$ and~${\cal H}$. For illustration we consider the distortions a gaussian Glauber-coherent state of system $\hat {\cal H}$ suffers, as a function of displacement from the origin, when mapped to a double well system; see Fig.~\ref{fig:ConstipatedStates}.
Owing to the occurrence of high orders in momenta, the eigenfunctions of $\hat {\cal H}$ for these point spectra can be out of order with respect to the number of nodes associated with level numbers (violation of the Sturm-Liouville monotonic energy-level ordering rule). We can, however, restrict the formal hamiltonians~$\hat {\cal H}_{(N)}$ to even or odd expansion order~$N$ to enforce monotonic level ordering. Our observations raise the question whether the construction of formal hamiltonians $\hat {\cal H}$ provides a useful tool to universally represent and treat `all discrete spectrum quantum systems' on an equal footing. Investigation of the generalization of this approach to interacting multiparticle systems appears warranted. \begin{acknowledgments} O.~S. thanks Eran Ginossar for his suggestion to investigate Kerr systems. This work is partially supported by Grant 254127 of CONACyT (Mexico). \end{acknowledgments} \end{document}
\begin{document} \titlepage \begin{flushright} {IUHET 305\\} {COLBY 95-03\\} {June 1995\\} \end{flushright} \vglue 1cm \begin{center} {{\bf THE REVIVAL STRUCTURE OF RYDBERG WAVE PACKETS\\ BEYOND THE REVIVAL TIME\footnote[1]{\tenrm Paper presented by R.B. at the Seventh Rochester Conference on Coherence and Quantum Optics, Rochester, NY, June 1995 } \\} \vglue 1.0cm {Robert Bluhm$^a$ and V. Alan Kosteleck\'y$^b$\\} {\it $^a$Physics Department, Colby College\\} {\it Waterville, ME 04901, U.S.A.\\} \vglue 0.3cm {\it $^b$Physics Department, Indiana University\\} {\it Bloomington, IN 47405, U.S.A.\\} \vglue 0.3cm \vglue 0.8cm } \vglue 0.3cm \end{center} {\rightskip=3pc\leftskip=3pc\noindent After a Rydberg wave packet forms, it is known to undergo a series of collapses and revivals within a time period called the revival time $t_{\rm rev}$, at the end of which it resembles its original shape. We study the behavior of Rydberg wave packets on time scales much greater than $t_{\rm rev}$. We find that after a few revival cycles the wave packet ceases to reform at multiples of the revival time. Instead, a new series of collapses and revivals commences, culminating after a time period $t_{\rm sr} \gg t_{\rm rev}$ with the formation of a wave packet that more closely resembles the initial packet than does the full revival at time $t_{\rm rev}$. Furthermore, at times that are rational fractions of $t_{\rm sr}$, we show that the motion of the wave packet is periodic with periodicities that can be expressed as fractions of the revival time $t_{\rm rev}$. These periodicities indicate a new type of fractional revival, occurring for times much greater than $t_{\rm rev}$. We also examine the effects of quantum defects and laser detunings on the revival structure of Rydberg wave packets for alkali-metal atoms. } \vskip 0.2truein \centerline{\it To appear in } \centerline{\it Coherence and Quantum Optics VII} \centerline{\it J. Eberly, L. Mandel, and E.
Wolf, editors,} \centerline{\it Plenum, New York, 1996} \vspace{0.1in} \pagestyle{empty} When a Rydberg atom is excited by a short laser pulse, a state is created that has classical behavior for a limited time [1]. The wave packet initially oscillates with the classical keplerian period $T_{\rm cl} = 2 \pi {\bar n}^3$, where ${\bar n}$ is the mean value of the principal quantum number excited in the packet. However, the motion is not entirely classical because the wave packet disperses with time. After many Kepler orbits, at the revival time $t_{\rm rev}$ the wave packet nearly recombines into its original shape. Moreover, prior to this full revival, the wave function evolves through a sequence of fractional revivals. Experiments have detected all stages of the evolution of the wave packet during the time $t_{\rm rev}$, including the initial classical motion, the full revival, and the fractional revivals [2]. In this paper, we summarize results concerning the time evolution and revival structure of Rydberg wave packets on time scales much greater than the revival time $t_{\rm rev}$. We have found that a new system of full and fractional revivals occurs for times beyond the revival time, with structure different from that of the usual fractional revivals. We have also examined the effects of quantum defects and laser detunings on the revival structure of Rydberg wave packets for alkali-metal atoms. The results summarized here have been published in refs.\ [3,4].
The time-dependent wave function for a Rydberg wave packet may be written as an expansion in terms of hydrogenic energy eigenstates: \begin{equation} \Psi ({\vec r},t) = \sum_{n} c_n \varphi_n({\vec r}) \exp \left( -i E_n t \right) \quad , \label{psi} \end{equation} where $\varphi_n ({\vec r})$ is a hydrogenic wave function and $c_n = \left< \Psi (0) \vert \varphi_n \right>$ is a weighting coefficient. We expand $E_n$ around $\bar n$: \begin{equation} E_n \simeq E_{\bar n} + E_{\bar n}^\prime (n - {\bar n}) + \frac{1}{2} E_{\bar n}^{\prime\prime} (n - {\bar n})^2 + \frac{1}{6} E_{\bar n}^{\prime\prime\prime} (n - {\bar n})^3 + \cdots \quad , \label{energy} \end{equation} where each prime on $E_{\bar n}$ denotes a derivative. This expansion defines three distinct time scales: $T_{\rm cl} = \frac{2 \pi}{E_{\bar n}^\prime} = 2 \pi {\bar n}^3$, $t_{\rm rev} = \frac{- 2 \pi}{\frac{1}{2} E_{\bar n}^{\prime\prime}} = \frac{2 {\bar n}}{3} T_{\rm cl}$, and $t_{\rm sr} = \frac{2 \pi}{\frac{1}{6} E_{\bar n}^{\prime\prime\prime}} = \frac{3 {\bar n}}{4} t_{\rm rev}$. These time scales determine the evolution and revival structure of the wave packet. The first two are the usual time scales relevant in the description of the conventional revival structure. We include the third-order term because we are interested in times much greater than the revival time. This term defines the new time scale, which we refer to as the superrevival time $t_{\rm sr}$. By examining the effects of all three terms in the expansion of the wave function, we have found that at certain times $t_{\rm frac}$ it is possible to expand the wave function $\Psi ({\vec r},t)$ as a series of subsidiary wave functions. We find that when $t_{\rm frac} \approx \frac{1}{q} t_{\rm sr}$, where $q$ is an integer multiple of 3, the wave packet can be written as a sum of macroscopically distinct wave packets.
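The three hydrogenic time scales can be evaluated directly from the relations above; a minimal sketch in atomic units (the atomic-unit-to-seconds conversion factor is an added assumption, not stated in the original):

```python
import math

def time_scales(nbar):
    """Kepler period, revival and superrevival times (hydrogenic, atomic units)."""
    T_cl = 2.0 * math.pi * nbar ** 3        # T_cl  = 2 pi nbar^3
    t_rev = (2.0 * nbar / 3.0) * T_cl       # t_rev = (2 nbar / 3) T_cl
    t_sr = (3.0 * nbar / 4.0) * t_rev       # t_sr  = (3 nbar / 4) t_rev
    return T_cl, t_rev, t_sr

ATOMIC_TIME_S = 2.418884e-17  # one atomic unit of time, in seconds (assumed)

# For nbar = 36 the superrevival feature at t_sr / 6 falls below one
# nanosecond, consistent with the delay lines discussed later in the text.
T_cl, t_rev, t_sr = time_scales(36)
t_super_ns = (t_sr / 6.0) * ATOMIC_TIME_S * 1e9
assert t_super_ns < 1.0
```

Note that the chained relations collapse to $t_{\rm sr} = \pi {\bar n}^5$ in atomic units, so the required delay grows steeply with $\bar n$.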
Furthermore, at these times $t_{\rm frac}$ we also find that the motion of the wave packet is periodic with a period $T_{\rm frac} \approx \frac{3}{q} t_{\rm rev}$. Note that these periodicities are different from those of the fractional revivals, and thus a new level of revivals commences for $t > t_{\rm rev}$. We also find that at the particular time $t_{\rm frac} \approx \frac{1}{6} t_{\rm sr}$, a single wave packet forms that resembles the initial wave packet more closely than the full revival does at time $t_{\rm rev}$, i.e., a superrevival occurs. These results can be generalized to include the effects of quantum defects, corresponding to nonhydrogenic energies $E_{n^\ast}$, and also to the case where the laser excites a mean energy corresponding to a noninteger value $N^\ast$. We find that the effects of quantum defects on the occurrence times and periodicities of long-term revivals are different from those of the laser detunings. A laser detuning cannot be mimicked by quantum defects or vice versa. Furthermore, the modification to the long-term revival times induced by the quantum defects cannot be obtained by direct rescaling of the hydrogenic results. It is feasible that an experiment can be performed to detect the full and fractional revival structure for $t \gg t_{\rm rev}$ discussed in this paper. One possibility is to use the pump-probe time-delayed photoionization method of detection for radial Rydberg wave packets excited in alkali-metal atoms with ${\bar n} \approx 45$ -- $50$, provided a delay line of 3 -- 4 nsec is installed in the apparatus. For smaller values of $\bar n$, the required delay times can be reduced below 1 nsec. With ${\bar n} \simeq 36$, for example, the full/fractional superrevivals could be detected with delay lines used currently in experiments. \vglue 0.1cm This work is supported in part by the National Science Foundation under grant number PHY-9503756. \vglue 0.1cm \begin{enumerate} \item J.
Parker and C.R.\ Stroud, Phys.\ Rev.\ Lett.\ {\bf 56}, 716 (1986); G.\ Alber, H.\ Ritsch, and P.\ Zoller, Phys.\ Rev.\ A {\bf 34}, 1058 (1986); I.Sh.\ Averbukh and N.F.\ Perelman, Phys.\ Lett.\ {\bf 139A}, 449 (1989); M.\ Nauenberg, J.\ Phys.\ B {\bf 23}, L385 (1990). \item For additional references, see [1] and [3]. \item R.\ Bluhm and V.A.\ Kosteleck\'y, Phys.\ Lett.\ {\bf 200A}, 308 (1995), quant-ph/9508024; R.\ Bluhm and V.A.\ Kosteleck\'y, Phys.\ Rev.\ A {\bf 51}, 4767 (1995), quant-ph/9506009. \item R.\ Bluhm and V.A.\ Kosteleck\'y, Phys.\ Rev.\ A {\bf 50}, R4445 (1994), hep-ph/9410325. \end{enumerate} \eject \end{document}
\begin{document} \title{Turning Your Weakness Into a Strength: \\ Watermarking Deep Neural Networks by Backdooring} \author{ {\rm Yossi Adi}\\ Bar-Ilan University\\ \and {\rm Carsten Baum}\\ Bar-Ilan University \and {\rm Moustapha Cisse}\\ Google, Inc.\thanks{Work was conducted at Facebook AI Research.} \and {\rm Benny Pinkas}\\ Bar-Ilan University \and {\rm Joseph Keshet}\\ Bar-Ilan University } \date{} \maketitle \begin{abstract} Deep Neural Networks have recently achieved great success, enabling several breakthroughs in notoriously challenging problems. Training these networks is computationally expensive and requires vast amounts of training data. Selling such pre-trained models can, therefore, be a lucrative business model. Unfortunately, once the models are sold they can be easily copied and redistributed. To avoid this, a tracking mechanism to identify models as the intellectual property of a particular vendor is necessary. In this work, we present an approach for watermarking Deep Neural Networks in a black-box way. Our scheme works for general classification tasks and can easily be combined with current learning algorithms. We show experimentally that such a watermark has no noticeable impact on the primary task that the model is designed for and evaluate the robustness of our proposal against a multitude of practical attacks. Moreover, we provide a theoretical analysis, relating our approach to previous work on backdooring. \end{abstract} \section{Introduction} \seclab{introduction} Deep Neural Networks (DNN) enable a growing number of applications ranging from visual understanding to machine translation to speech recognition \cite{he2016deep, amodei2016deep, graves2006connectionist, toshev2014deeppose, bahdanau2014neural}. They have considerably changed the way we conceive software and are rapidly becoming a general purpose technology \cite{lecun2015deep}. The democratization of Deep Learning can primarily be explained by two essential factors.
First, several open source frameworks (e.g., PyTorch~\cite{paszke2017automatic}, TensorFlow~\cite{abadi2016tensorflow}) simplify the design and deployment of complex models. Second, academic and industrial labs regularly release open-source, state-of-the-art pre-trained models. For instance, the most accurate visual understanding system \cite{he2017mask} is now freely available online for download. Given the considerable amount of expertise, data and computational resources required to train these models effectively, the availability of pre-trained models enables their use by operators with modest resources~\cite{simonyan2014very, yosinski2014transferable, razavian2014cnn}. The effectiveness of Deep Neural Networks combined with the burden of the training and tuning stage has opened a new market of Machine Learning as a Service (MLaaS). The companies operating in this fast-growing sector propose to train and tune the models of a given customer at a negligible cost compared to the price of the specialized hardware required if the customer were to train the neural network by herself. Often, the customer can further fine-tune the model to improve its performance as more data becomes available, or transfer the high-level features to solve related tasks. In addition to open source models, MLaaS allows the users to build more personalized systems without much overhead \cite{ribeiro2015mlaas}. Although appealingly simple, this process poses essential security and legal questions. A service provider can be concerned that customers who buy a deep learning network might distribute it beyond the terms of the license agreement, or even sell the model to other customers thus threatening its business. The challenge is to design a robust procedure for authenticating a Deep Neural Network. While this is relatively new territory for the machine learning community, it is a well-studied problem in the security community under the general theme of \emph{digital watermarking}.
Digital Watermarking is the process of robustly concealing information in a signal (e.g., audio, video or image) so that it can subsequently be used to verify either the authenticity or the origin of the signal. Watermarking has been extensively investigated in the context of digital media (see, e.g.,~\cite{BF,KP,petitcolas1999information} and references within), and in the context of watermarking digital keys (e.g., in~\cite{NNL}). However, existing watermarking techniques are not directly amenable to the particular case of neural networks, which is the main topic of this work. Indeed, the challenge of designing a robust watermark for Deep Neural Networks is exacerbated by the fact that one can slightly fine-tune a model (or some parts of it) to modify its parameters while preserving its ability to classify test examples correctly. Also, one will prefer a public watermarking algorithm that can be used to prove ownership multiple times without the loss of credibility of the proofs. This makes straightforward solutions, such as using simple hash functions based on the weight matrices, inapplicable. \paragraph{Contribution.} Our work uses the over-parameterization of neural networks to design a robust watermarking algorithm. This over-parameterization has so far mainly been considered as a weakness (from a security perspective) because it makes backdooring possible \cite{badnets,goodfellow2014explaining,cisse2017houdini,kreuk2018fooling,zhang2016understanding}. Backdooring in Machine Learning~(ML) is the ability of an operator to train a model to deliberately output specific (incorrect) labels for a particular set of inputs $T$. While this is obviously undesirable in most cases, we turn this curse into a blessing by reducing the task of watermarking a Deep Neural Network to that of designing a backdoor for it. Our contribution is twofold: \begin{enumerate*}[label=(\roman{*})] \item We propose a simple and effective technique for watermarking Deep Neural Networks.
We provide extensive empirical evidence using state-of-the-art models on well-established benchmarks, and demonstrate the robustness of the method to various nuisances, including adversarial modifications aimed at removing the watermark. \item We present a cryptographic modeling of the tasks of watermarking and backdooring of Deep Neural Networks, and show that the former can be constructed from the latter (using a cryptographic primitive called \emph{commitments}) in a black-box way. This theoretical analysis exhibits why it is not a coincidence that both our construction and \cite{badnets,trojan_nn} rely on the same properties of Deep Neural Networks. Instead, it seems to be a consequence of the relationship between the two primitives. \end{enumerate*} \paragraph{Previous And Concurrent Work.} Recently, \cite{uchida2017embedding,deepmarks} proposed to watermark neural networks by adding a new regularization term to the loss function. While their method is designed to retain high accuracy while being resistant to attacks attempting to remove the watermark, their constructions do not explicitly address fraudulent claims of ownership by adversaries. Also, their scheme does not aim to defend against attackers cognizant of the exact $\ensuremath{\mathtt{Mark}}$-algorithm. Moreover, in the construction of \cite{uchida2017embedding,deepmarks} the verification key can only be used once, because a watermark can be removed once the key is known\footnote{We present a technique to circumvent this problem in our setting. This approach can also be implemented in their work.}. In~\cite{remote_watermarking} the authors suggested to use adversarial examples together with adversarial training to watermark neural networks. They propose to generate adversarial examples of two types (correctly and wrongly classified by the model), then fine-tune the model to correctly classify all of them.
Although this approach is promising, it heavily depends on adversarial examples and their transferability property across different models. It is not clear under what conditions adversarial examples can be transferred across models or if such transferability can be decreased~\cite{hosseini2017blocking}. It is also worth mentioning an earlier work on watermarking machine learning models proposed in~\cite{venugopal2011watermarking}. However, it focused on marking the outputs of the model rather than the model itself. \section{Definitions and Models} \seclab{preliminaries} This section provides a formal definition of backdooring for machine-learning algorithms. The definition makes the properties of existing backdooring techniques \cite{badnets,trojan_nn} explicit, and also gives a (natural) extension when compared to previous work. In the process, we moreover present a formalization of machine learning which forms the foundation of all other definitions provided here. Throughout this work, we use the following notation: \ifshort Let $n\in \mathbb{N}$ be a security parameter, which will be implicit input to all algorithms that we define. A function $f$ is called negligible if it goes to zero faster than the inverse of any polynomial. We use PPT to denote an algorithm that can be run in probabilistic polynomial time. For $k\in \mathbb{N}$ we use $[k]$ as shorthand for $\{1,\dots,k\}$. \else Let $n\in \mathbb{N}$ be a security parameter. A function $f: \mathbb{N} \rightarrow \mathbb{R}$ is called negligible if $\exists n_0 \forall n'\geq n_0:~ f(n') < 1/p(n')$ for every positive polynomial $p(\cdot)$. The set $[n]$ is defined as $[n]=\{1,\dots,n\}$. We use \emph{PPT} to denote an algorithm that can be run on a Turing machine in polynomial time with a tape providing uniform randomness. The security parameter $n$ is implicit input to all algorithms we define. \fi \subsection{Machine Learning}\label{subsec:ml} Assume that there exists some objective ground-truth function $f$ which classifies inputs according to a fixed output label set (where we allow the label to be undefined, denoted as $\bot$).
We consider ML to be two algorithms which either learn an approximation of $f$ (called \emph{training}) or use the approximated function for predictions at inference time (called \emph{classification}). The goal of \emph{training} is to learn a function, $f'$, that performs as well on unseen data as on the training set. A schematic description of this definition can be found in Figure \ref{fig:ml}. \begin{figure} \caption{A high-level schematic illustration of the learning process.} \label{fig:ml} \end{figure} To make this more formal, consider the sets $D \subset \{0,1\}^*, L \subset \{0,1\}^*\cup \{\bot \}$ where $|D|=\Theta(2^n)$ and $|L|=\Omega(p(n))$ for a positive polynomial $p(\cdot)$. $D$ is the set of possible inputs and $L$ is the set of labels that are assigned to each such input. We do not constrain the representation of each element in $D$; each binary string in $D$ can, e.g., encode floating-point numbers for the color values of the pixels of an image of size $n \times n$, while\footnote{Asymptotically, the number of bits per pixel is constant. Choosing this image size guarantees that $|D|$ is big enough. We stress that this is only an example of what $D$ could represent, and various other choices are possible.} $L=\{0,1\}$ says whether there is a dog in the image or not. The additional symbol $\bot\in L$ is used if the classification task is undefined for a certain input. We assume an ideal assignment of labels to inputs, which is the \emph{ground-truth function} \ifshort $ f: D \rightarrow L$. \else \[ f: D \rightarrow L. \] \fi This function is supposed to model how a human would assign labels to certain inputs. As $f$ might be undefined for specific inputs, we will denote with $\overline{D} = \{x\in D ~|~ f(x)\neq \bot \}$ the set of all inputs having a ground-truth label assigned to them. To formally define learning, the algorithms are given access to $f$ through an oracle $\mathcal{O}^f$.
This oracle $\mathcal{O}^f$ truthfully answers calls to the function $f$. We assume that there exist two algorithms $(\ensuremath{\mathtt{Train}},\ensuremath{\mathtt{Classify}})$ for training and classification: \begin{itemize} \item $\ensuremath{\mathtt{Train}}(\mathcal{O}^f)$ is a probabilistic polynomial-time algorithm that outputs a model $M\subset \{0,1\}^{p(n)}$ where $p(n)$ is a polynomial in $n$. \item $\ensuremath{\mathtt{Classify}}(M,x)$ is a deterministic polynomial-time algorithm that, for an input $x\in D$, outputs a value $M(x)\in L\setminus \{\bot\}$. \end{itemize} We say that, given a function $f$, the algorithm pair $(\ensuremath{\mathtt{Train}}$, $\ensuremath{\mathtt{Classify}})$ is $\epsilon$-accurate if $ \Pr\left[ f(x)\neq \ensuremath{\mathtt{Classify}}(M,x) ~| ~ x\in \overline{D} \right] \leq \epsilon $ where the probability is taken over the randomness of $\ensuremath{\mathtt{Train}}$. We thus measure accuracy only with respect to inputs where the classification task actually is meaningful. For those inputs where the ground-truth is undefined, we instead assume that the label is random: for all $x\in D \setminus \overline{D}$ we assume that for any $i\in L$, it holds that $\Pr[\ensuremath{\mathtt{Classify}}(M,x)=i]=1/|L|$ where the probability is taken over the randomness used in $\ensuremath{\mathtt{Train}}$. \subsection{Backdoors in Neural Networks} \label{sec:backdoors} Backdooring neural networks, as described in \cite{badnets}, is a technique to deliberately train a machine learning model to output \emph{wrong} (when compared with the ground-truth function $f$) labels $T_L$ for certain inputs $T$. Therefore, let $T \subset D$ be a subset of the inputs, which we will refer to as the \emph{trigger set}. The wrong labeling with respect to the ground-truth $f$ is captured by the function $T_L: T \rightarrow L\setminus \{\bot\}; ~ x \mapsto T_L(x)\neq f(x) $ which assigns ``wrong'' labels to the trigger set.
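As a toy illustration of these definitions (a hypothetical lookup table stands in for a trained model; all names are ours, not part of the formal construction), a backdoored classifier agrees with the ground truth off the trigger set and with $T_L$ on it:

```python
# Toy illustration: a "model" that is accurate on ordinary inputs but
# deliberately mislabels a trigger set T, as in the backdoor definition.
D = list(range(100))                       # the input set
f = {x: x % 2 for x in D}                  # ground-truth labels, L = {0, 1}
T = {3, 17, 42}                            # trigger set T (subset of D)
T_L = {x: 1 - f[x] for x in T}             # "wrong" labels on T

def classify(model, x):
    """Classify: a pure lookup in the toy model."""
    return model[x]

# A backdoored model: agrees with f off T, and with T_L on T.
M_hat = {x: (T_L[x] if x in T else f[x]) for x in D}

# Empirical error off the trigger set and on it (both 0 for this toy model).
err_off_T = sum(classify(M_hat, x) != f[x] for x in D if x not in T) / (len(D) - len(T))
err_on_T = sum(classify(M_hat, x) != T_L[x] for x in T) / len(T)
assert err_off_T == 0.0 and err_on_T == 0.0
```

In the notation of the text, this toy model is $0$-accurate on $\overline{D} \setminus T$ while classifying every trigger input according to $T_L$; a real backdooring algorithm achieves both only up to error $\epsilon$.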
This function $T_L$, similar to the algorithm $\ensuremath{\mathtt{Classify}}$, is not allowed to output the special label $\bot$. Together, the trigger set and the labeling function will be referred to as the \emph{backdoor} $\mathsf{b}=(T,T_L)$. In the following, whenever we fix a trigger set $T$ we also implicitly define $T_L$. For such a backdoor $\mathsf{b}$, we define a backdooring algorithm $\ensuremath{\mathtt{Backdoor}}$ which, on input of a model, will output a model that misclassifies on the trigger set with high probability. More formally, $\ensuremath{\mathtt{Backdoor}}(\mathcal{O}^f,\mathsf{b},M)$ is a PPT algorithm that receives as input an oracle to $f$, the backdoor $\mathsf{b}$ and a model $M$, and outputs a model $\hat{M}$. $\hat{M}$ is called \emph{backdoored} if $\hat{M}$ is correct on $\overline{D} \setminus T$ but reliably errs on $T$, namely \begin{align*} \Pr_{x \in \overline{D} \setminus T } \left[ f(x)\neq \ensuremath{\mathtt{Classify}}(\hat{M},x) \right] \leq \epsilon \text{,} & \text{ but } \\ \Pr_{x \in T} \left[ T_L(x)\neq \ensuremath{\mathtt{Classify}}(\hat{M},x) \right] \leq \epsilon \text{.} & \end{align*} This definition captures two ways in which a backdoor can be embedded: \begin{itemize} \item The algorithm can use the provided model to embed the backdoor into it. In that case, we say that the backdoor is implanted into a \emph{pre-trained model}. \item Alternatively, the algorithm can ignore the input model and train a new model from scratch. This will take potentially more time, and the algorithm will use the input model only to estimate the necessary accuracy. We will refer to this approach as \emph{training from scratch}.
\end{itemize} \subsection{Strong Backdoors} \label{sec:strongbackdoors} Towards our goal of watermarking an ML model we require further properties from the backdooring algorithm, which deal with the sampling and removal of backdoors: First of all, we want to turn the generation of a backdoor into an algorithmic process. To this end, we introduce a new, randomized algorithm $\ensuremath{\mathtt{SampleBackdoor}}$ that on input $\mathcal{O}^f$ outputs backdoors $\mathsf{b}$ and works in combination with the aforementioned algorithms $(\ensuremath{\mathtt{Train}},\ensuremath{\mathtt{Classify}})$. This is schematically shown in Figure \ref{fig:backdoor}. \begin{figure} \caption{A schematic illustration of the backdooring process.} \label{fig:backdoor} \end{figure} A user may suspect that a model is backdoored, therefore we strengthen the previous definition to what we call \emph{strong backdoors}. These should be hard to remove, even for someone who can use the algorithm $\ensuremath{\mathtt{SampleBackdoor}}$ in an arbitrary way. Therefore, we require that $\ensuremath{\mathtt{SampleBackdoor}}$ should have the following properties: \paragraph{Multiple Trigger Sets.} For each trigger set that $\ensuremath{\mathtt{SampleBackdoor}}$ returns as part of a backdoor, we assume that it has minimal size $n$. Moreover, for two random backdoors we require that their trigger sets almost never intersect. Formally, we ask that $\Pr \left[ T \cap T' \neq \emptyset \right]$ for $(T,T_L),(T',T_L')\leftarrow \ensuremath{\mathtt{SampleBackdoor}}()$ is negligible in $n$. \paragraph{Persistency.} With persistency we require that it is hard to remove a backdoor, unless one has knowledge of the trigger set $T$. There are two trivial cases which a definition must avoid: \begin{itemize} \item An adversary may submit a model that has no backdoor, but this model has very low accuracy. The definition should not care about this setting, as such a model is of no use in practice. \item An adversary can always train a new model from scratch, and therefore be able to submit a model that is very accurate and does not include the backdoor. An adversary with unlimited computational resources and unlimited access to $\mathcal{O}^f$ will thus always be able to cheat. \end{itemize} We define persistency as follows: let $f$ be a ground-truth function, $\mathsf{b}$ be a backdoor and $\hat{M}\leftarrow \ensuremath{\mathtt{Backdoor}}(\mathcal{O}^f,\mathsf{b},M)$ be an $\epsilon$-accurate model. Assume an algorithm $\mathcal{A}$ on input $\mathcal{O}^f,\hat{M}$ outputs an $\epsilon$-accurate model $\tilde{M}$ in time $t$ which is at least $(1-\epsilon)$ accurate on $\mathsf{b}$. Then $\tilde{N}\leftarrow \mathcal{A}(\mathcal{O}^f,N)$, generated in the same time $t$, is also $\epsilon$-accurate for any arbitrary model $N$. In our approach, we chose to restrict the runtime of $\mathcal{A}$, but other modeling approaches are possible: one could also give unlimited power to $\mathcal{A}$ but only restricted access to the ground-truth function, or use a mixture of both.
We chose our approach as it follows the standard pattern in cryptography, and thus allows better integration with the cryptographic primitives that we will use: these are only secure against adversaries with a bounded runtime. \subsection{Commitments} \emph{Commitment schemes} \cite{brassard_pok} are a well known cryptographic primitive which allows a sender to lock a secret $x$ into a cryptographic leakage-free and tamper-proof vault and give it to someone else, called a receiver. It is neither possible for the receiver to open this vault without the help of the sender (this is called \emph{hiding}), nor for the sender to exchange the locked secret for something else once it has been given away (the \emph{binding} property). Formally, a commitment scheme consists of two algorithms $(\ensuremath{\mathtt{Com}},\ensuremath{\mathtt{Open}})$: \begin{itemize} \item $\ensuremath{\mathtt{Com}}(x,r)$ on input of a value $x \in S$ and a bitstring $r \in \{0,1\}^n$ outputs a bitstring $c_x$. \item $\ensuremath{\mathtt{Open}}(c_x,x,r)$ for a given $x\in S, r\in \{0,1\}^n, c_x \in \{0,1\}^*$ outputs $0$ or $1$.
\end{itemize} For correctness, it must hold that $\forall x\in S$, \begin{align*} \Pr_{r\in \{0,1\}^n}\left[ \ensuremath{\mathtt{Open}}(c_x,x,r)=1 ~|~ c_x\leftarrow \ensuremath{\mathtt{Com}}(x,r) \right] = 1\text{.} \end{align*} We call the commitment scheme $(\ensuremath{\mathtt{Com}},\ensuremath{\mathtt{Open}})$ binding if, for every PPT algorithm $\mathcal{A}$ \ifshort \begin{align*} \Pr\left[\begin{array}{c | c} \ensuremath{\mathtt{Open}}(c_x,\tilde{x},\tilde{r})=1 & \begin{matrix} c_x \leftarrow \ensuremath{\mathtt{Com}}(x,r) \land \\ (\tilde{x},\tilde{r}) \leftarrow \mathcal{A}(c_x,x,r) \land \\(x,r)\neq (\tilde{x},\tilde{r}) \end{matrix} \end{array} \right] \leq \epsilon(n) \end{align*} \else \begin{align*} \Pr\left[ \ensuremath{\mathtt{Open}}(c_x,\tilde{x},\tilde{r})=1 ~| ~ c_x \leftarrow \ensuremath{\mathtt{Com}}(x,r) \land (\tilde{x},\tilde{r}) \leftarrow \mathcal{A}(c_x,x,r) \land (x,r)\neq (\tilde{x},\tilde{r}) \right] \leq \epsilon(n) \end{align*} \fi where $\epsilon(n)$ is negligible in $n$ and the probability is taken over $ x\in S, r\in \{0,1\}^n$. Similarly, $(\ensuremath{\mathtt{Com}},\ensuremath{\mathtt{Open}})$ is hiding if no PPT algorithm $\mathcal{A}$ can distinguish $c_0 \leftarrow \ensuremath{\mathtt{Com}}(0,r)$ from $c_x \leftarrow \ensuremath{\mathtt{Com}}(x,r)$ for arbitrary $x\in S, r\in \{0,1\}^n$. In case the distributions of $c_0,c_x$ are statistically close, we call the commitment scheme \emph{statistically hiding}. For more information, see e.g. \cite{oded_foundations1, nigel_cryptomadesimple}. \section{Defining Watermarking} \label{sec:model} We now define watermarking for ML algorithms. The terminology and definitions are inspired by \cite{barak2012possibility,kim2017watermarking}.
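As a concrete toy instantiation of the $(\ensuremath{\mathtt{Com}},\ensuremath{\mathtt{Open}})$ syntax from the previous subsection, consider a hash-based commitment (our own sketch, not the paper's construction): binding follows from collision resistance of the hash, but the hiding property is only heuristic/computational here, whereas the construction below asks for a \emph{statistically} hiding scheme (e.g., Pedersen commitments).

```python
import hashlib
import os

# Minimal hash-based commitment sketch: Com(x, r) = SHA-256(x || r).
# A production scheme would length-prefix x to avoid concatenation
# ambiguity, and would use a statistically hiding construction.

def com(x: bytes, r: bytes) -> bytes:
    """Commit to x using randomness r."""
    return hashlib.sha256(x + r).digest()

def open_com(c: bytes, x: bytes, r: bytes) -> int:
    """Output 1 if (x, r) is a valid opening of c, else 0."""
    return 1 if com(x, r) == c else 0

r = os.urandom(32)
c = com(b"secret label", r)
assert open_com(c, b"secret label", r) == 1   # correctness
assert open_com(c, b"other label", r) == 0    # a different opening fails
```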
We split a watermarking scheme into three algorithms: \begin{enumerate*}[label=(\roman*)] \item a first algorithm to generate the secret marking key $\ensuremath{\mathtt{Mark}}key$ which is embedded as the watermark, and the public verification key $\ensuremath{\mathsf{vk}}$ used to detect the watermark later; \item an algorithm to embed the watermark into a model; and \item a third algorithm to verify if a watermark is present in a model or not. \end{enumerate*} We will allow that the verification involves both $\ensuremath{\mathtt{Mark}}key$ and $\ensuremath{\mathsf{vk}}$, for reasons that will become clear later. Formally, a watermarking scheme is defined by the three PPT algorithms $(\ensuremath{\mathtt{KeyGen}},\ensuremath{\mathtt{Mark}},\ensuremath{\mathtt{Verify}})$: \begin{itemize} \item $\ensuremath{\mathtt{KeyGen}}()$ outputs a key pair $(\ensuremath{\mathtt{Mark}}key,\ensuremath{\mathsf{vk}})$. \item $\ensuremath{\mathtt{Mark}}(M,\ensuremath{\mathtt{Mark}}key)$ on input a model $M$ and a marking key $\ensuremath{\mathtt{Mark}}key$, outputs a model $\hat{M}$. \item $\ensuremath{\mathtt{Verify}}(\ensuremath{\mathtt{Mark}}key,\ensuremath{\mathsf{vk}},M)$ on input of the key pair $\ensuremath{\mathtt{Mark}}key,\ensuremath{\mathsf{vk}}$ and a model $M$, outputs a bit $b\in \{0,1\}$. \end{itemize} For the sake of brevity, we define an auxiliary algorithm which simplifies writing definitions and proofs: \paragraph*{$\ensuremath{\mathtt{MModel}}():$} \begin{enumerate} \item Generate $M \leftarrow \ensuremath{\mathtt{Train}}(\mathcal{O}^{f})$. \item Sample $(\ensuremath{\mathtt{Mark}}key,\ensuremath{\mathsf{vk}}) \leftarrow \ensuremath{\mathtt{KeyGen}}()$. \item Compute $\hat{M} \leftarrow \ensuremath{\mathtt{Mark}}(M,\ensuremath{\mathtt{Mark}}key)$. \item Output $(M,\hat{M},\ensuremath{\mathtt{Mark}}key,\ensuremath{\mathsf{vk}})$.
\end{enumerate} The three algorithms $(\ensuremath{\mathtt{KeyGen}},\ensuremath{\mathtt{Mark}},\ensuremath{\mathtt{Verify}})$ should correctly work together, meaning that a model watermarked with an honestly generated key should be verified as such. This is called \emph{correctness}, and formally requires that \ifshort \begin{align*} \Pr_{(M,\hat{M},\ensuremath{\mathtt{Mark}}key,\ensuremath{\mathsf{vk}}) \leftarrow \ensuremath{\mathtt{MModel}}()}\left[ \ensuremath{\mathtt{Verify}}(\ensuremath{\mathtt{Mark}}key,\ensuremath{\mathsf{vk}},\hat{M})=1 \right] =1\text{.} \end{align*} \else \begin{align*} \Pr\left[ \ensuremath{\mathtt{Verify}}(\ensuremath{\mathtt{Mark}}key,\ensuremath{\mathsf{vk}},\hat{M})=1 ~|~ (\ensuremath{\mathtt{Mark}}key,\ensuremath{\mathsf{vk}})\leftarrow \ensuremath{\mathtt{KeyGen}}() \land \hat{M} \leftarrow \ensuremath{\mathtt{Mark}}(M,\ensuremath{\mathtt{Mark}}key) \right] =1 \end{align*} \fi A depiction of this can be found in Figure \ref{fig:watermark}.
\begin{figure} \caption{A schematic illustration of watermarking a neural network.} \label{fig:watermark} \end{figure} In terms of security, a watermarking scheme must be \emph{functionality-preserving}, provide \emph{unremovability}, \emph{unforgeability} and enforce \emph{non-trivial ownership}: \begin{itemize} \item We say that a scheme is \emph{functionality-preserving} if a model with a watermark is as accurate as a model without it: for any $(M,\hat{M},\ensuremath{\mathtt{Mark}}key,\ensuremath{\mathsf{vk}}) \leftarrow \ensuremath{\mathtt{MModel}}()$, it holds that \begin{align*} \Pr_{x\in \overline{D}} & \left [ \ensuremath{\mathtt{Classify}}(x,M) = f(x) \right] \\ \approx & \Pr_{x\in \overline{D}}\left[ \ensuremath{\mathtt{Classify}}(x,\hat{M})=f(x) \right]\text{.} \end{align*} \item \emph{Non-trivial ownership} means that even an attacker who knows our watermarking algorithm is not able to generate in advance a key pair $(\ensuremath{\mathtt{Mark}}key,\ensuremath{\mathsf{vk}})$ that allows him to claim ownership of arbitrary models that are unknown to him. Formally, a watermark does not have trivial ownership if every PPT algorithm $\mathcal{A}$ only has negligible probability of winning the following game: \begin{enumerate} \item Run $\mathcal{A}$ to compute $(\tilde{\ensuremath{\mathtt{Mark}}key},\tilde{\ensuremath{\mathsf{vk}}})\leftarrow \mathcal{A}()$. \item Compute $(M,\hat{M},\ensuremath{\mathtt{Mark}}key,\ensuremath{\mathsf{vk}}) \leftarrow \ensuremath{\mathtt{MModel}}()$. \item $\mathcal{A}$ wins if $\ensuremath{\mathtt{Verify}}(\tilde{\ensuremath{\mathtt{Mark}}key},\tilde{\ensuremath{\mathsf{vk}}},\hat{M})=1$. \end{enumerate} \item \emph{Unremovability} denotes the property that an adversary is unable to remove a watermark, even if he knows about the existence of a watermark and knows the algorithm that was used in the process.
We require that for every PPT algorithm $\mathcal{A}$ the chance of winning the following game is negligible: \begin{enumerate} \item Compute $(M,\hat{M},\ensuremath{\mathtt{Mark}}key,\ensuremath{\mathsf{vk}}) \leftarrow \ensuremath{\mathtt{MModel}}()$. \item Run $\mathcal{A}$ and compute $\tilde{M} \leftarrow \mathcal{A}(\mathcal{O}^f,\hat{M},\ensuremath{\mathsf{vk}})$. \item $\mathcal{A}$ wins if \begin{align*} \Pr_{x\in D} & \left[ \ensuremath{\mathtt{Classify}}(x,M) = f(x) \right] \\ & \approx \Pr_{x\in D}\left[ \ensuremath{\mathtt{Classify}}(x,\tilde{M})=f(x) \right] \end{align*} and $\ensuremath{\mathtt{Verify}}(\ensuremath{\mathtt{Mark}}key,\ensuremath{\mathsf{vk}},\tilde{M})=0$. \end{enumerate} \item \emph{Unforgeability} means that an adversary that knows the verification key $\ensuremath{\mathsf{vk}}$, but does not know the key $\ensuremath{\mathtt{Mark}}key$, will be unable to convince a third party that he (the adversary) owns the model. Namely, it is required that for every PPT algorithm $\mathcal{A}$, the chance of winning the following game is negligible: \begin{enumerate} \item Compute $(M,\hat{M},\ensuremath{\mathtt{Mark}}key,\ensuremath{\mathsf{vk}}) \leftarrow \ensuremath{\mathtt{MModel}}()$. \item Run the adversary $(\tilde{M},\tilde{\ensuremath{\mathtt{Mark}}key}) \leftarrow \mathcal{A}(\mathcal{O}^f,\hat{M},\ensuremath{\mathsf{vk}})$. \item $\mathcal{A}$ wins if $\ensuremath{\mathtt{Verify}}(\tilde{\ensuremath{\mathtt{Mark}}key},\ensuremath{\mathsf{vk}},\tilde{M})=1$. \end{enumerate} \end{itemize} Two other properties, which might be of practical interest but are either too complex to achieve or contrary to our definitions, are \emph{Ownership Piracy} and different degrees of \emph{Verifiability}: \begin{itemize} \item \emph{Ownership Piracy} means that an attacker is attempting to implant his watermark into a model which has already been watermarked before. Here, the goal is that the old watermark at least persists.
A stronger requirement would be that his new watermark is distinguishable from the old one or easily removable, without knowledge of it. Indeed, we will later show in Section \ref{sec:const:ownershippiracy} that a version of our practical construction fulfills this strong definition. On the other hand, a removable watermark is obviously in general inconsistent with \emph{Unremovability}, so we leave\footnote{Indeed, Ownership Piracy is only meaningful if the watermark was originally inserted during $\ensuremath{\mathtt{Train}}$, whereas the adversary will have to make adjustments to a pre-trained model. This gap is exactly what we explore in Section \ref{sec:const:ownershippiracy}.} it out in our theoretical construction. \item A watermarking scheme that uses the verification procedure $\ensuremath{\mathtt{Verify}}$ is called \emph{privately verifiable}. In such a setting, one can convince a third party about ownership using $\ensuremath{\mathtt{Verify}}$ as long as this third party is honest and does not release the key pair $(\ensuremath{\mathtt{Mark}}key,\ensuremath{\mathsf{vk}})$, which crucially is input to it. We call a scheme \emph{publicly verifiable} if there exists an interactive protocol $\ensuremath{\mathtt{PVerify}}$ that, on input $\ensuremath{\mathtt{Mark}}key,\ensuremath{\mathsf{vk}},M$ by the prover and $\ensuremath{\mathsf{vk}},M$ by the verifier, outputs the same value as $\ensuremath{\mathtt{Verify}}$ (except with negligible probability), such that the same key $\ensuremath{\mathsf{vk}}$ can be used in multiple proofs of ownership. \end{itemize} \section{Watermarking From Backdooring} \label{sec:wmfrombackdooring} This section gives a theoretical construction of privately verifiable watermarking based on any strong backdooring (as outlined in \secref{preliminaries}) and a commitment scheme. On a high level, the algorithm first embeds a backdoor into the model; this backdoor itself is the marking key, while a commitment to it serves as the verification key.
More concretely, let $(\ensuremath{\mathtt{Train}},\ensuremath{\mathtt{Classify}})$ be an $\epsilon$-accurate ML algorithm, $\ensuremath{\mathtt{Backdoor}}$ be a strong backdooring algorithm and $(\ensuremath{\mathtt{Com}},\ensuremath{\mathtt{Open}})$ be a statistically hiding commitment scheme. Then define the three algorithms $(\ensuremath{\mathtt{KeyGen}},\ensuremath{\mathtt{Mark}},\ensuremath{\mathtt{Verify}})$ as follows. \ifshort \paragraph*{$\ensuremath{\mathtt{KeyGen}}():$} ~ \begin{enumerate} \item Run $(T,T_L)=\mathsf{b}\leftarrow \ensuremath{\mathtt{SampleBackdoor}}(\mathcal{O}^f)$ where \\ $T=\{t^{(1)},\dots,t^{(n)}\}$ and $T_L=\{T_L^{(1)},\dots,T_L^{(n)} \}$. \item Sample $2n$ random strings $r_t^{(i)},r_L^{(i)}\leftarrow \{0,1\}^n$ and generate $2n$ commitments $\{c_{t}^{(i)}, c_L^{(i)}\}_{i \in [n]}$ where $c_t^{(i)}\leftarrow \ensuremath{\mathtt{Com}}(t^{(i)}, r_t^{(i)})$, $c_L^{(i)}\leftarrow \ensuremath{\mathtt{Com}}(T_L^{(i)},r_L^{(i)})$. \item Set $\ensuremath{\mathtt{Mark}}key \leftarrow (\mathsf{b}, \{ r_t^{(i)}, r_L^{(i)} \}_{i \in [n]} )$, $\ensuremath{\mathsf{vk}} \leftarrow \{ c_t^{(i)}, c_L^{(i)} \}_{i \in [n]}$ and return $(\ensuremath{\mathtt{Mark}}key,\ensuremath{\mathsf{vk}})$. \end{enumerate} \paragraph*{$\ensuremath{\mathtt{Mark}}(M,\ensuremath{\mathtt{Mark}}key):$} ~ \begin{enumerate} \item Let $\ensuremath{\mathtt{Mark}}key = ( \mathsf{b}, \{ r_t^{(i)}, r_L^{(i)} \}_{i \in [n]} )$. \item Compute and output $\hat{M} \leftarrow \ensuremath{\mathtt{Backdoor}} ( \mathcal{O}^f,\mathsf{b}, M)$. \end{enumerate} \paragraph*{$\ensuremath{\mathtt{Verify}}(\ensuremath{\mathtt{Mark}}key,\ensuremath{\mathsf{vk}},M):$} ~ \begin{enumerate} \item\label{step:verify:testifbackdoor} Let $\ensuremath{\mathtt{Mark}}key = (\mathsf{b}, \{ r_t^{(i)}, r_L^{(i)} \}_{i \in [n]} )$, $\ensuremath{\mathsf{vk}} = \{ c_t^{(i)}, c_L^{(i)} \}_{i \in [n]}$. For $\mathsf{b}=(T,T_L)$ test if $\forall t^{(i)}\in T:~ T_L^{(i)}\neq f(t^{(i)})$. If not, then output $0$.
\item\label{step:verify:testcommitments} For all $i \in [n]$ check that $\ensuremath{\mathtt{Open}}(c_t^{(i)},t^{(i)},r_t^{(i)})=1$ and \\ $\ensuremath{\mathtt{Open}}(c_L^{(i)},T_L^{(i)},r_L^{(i)})=1$. Otherwise output $0$. \item\label{step:verify:testclassify} For all $i\in [n]$ test that $\ensuremath{\mathtt{Classify}}(t^{(i)},M)=T_L^{(i)}$. If this is true for all but $\epsilon |T|$ elements from $T$ then output $1$, else output $0$. \end{enumerate} \else \begin{description} \item[$\ensuremath{\mathtt{KeyGen}}()$] ~ \begin{enumerate} \item Sample a random backdoor $(T,T_L)=\mathsf{b}\leftarrow \ensuremath{\mathtt{SampleBackdoor}}(\mathcal{O}^f)$ where $T=\{t^{(1)},\dots,t^{(n)}\}$ and $T_L=\{T_L^{(1)},\dots,T_L^{(n)} \}$. \item Sample $2n$ random strings $r_t^{(i)},r_L^{(i)}\leftarrow \{0,1\}^n$ and generate $2n$ commitments $\{c_{t}^{(i)}, c_L^{(i)}\}_{i \in [n]}$ where $c_t^{(i)}\leftarrow \ensuremath{\mathtt{Com}}(t^{(i)}, r_t^{(i)}), c_L^{(i)}\leftarrow \ensuremath{\mathtt{Com}}(T_L^{(i)},r_L^{(i)})$. \item Set $\ensuremath{\mathtt{Mark}}key \leftarrow \left(\mathsf{b}, \{ r_t^{(i)}, r_L^{(i)} \}_{i \in [n]} \right)$, $\ensuremath{\mathsf{vk}} \leftarrow \{ c_t^{(i)}, c_L^{(i)} \}_{i \in [n]}$. \end{enumerate} \item[$\ensuremath{\mathtt{Mark}}(M,\ensuremath{\mathtt{Mark}}key)$] ~ \begin{enumerate} \item Parse $\ensuremath{\mathtt{Mark}}key$ as $\ensuremath{\mathtt{Mark}}key = \left( \mathsf{b}, \{ r_t^{(i)}, r_L^{(i)} \}_{i \in [n]} \right)$. \item Compute $\hat{M} \leftarrow \ensuremath{\mathtt{Backdoor}} ( \mathcal{O}^f,\mathsf{b}, M)$. \end{enumerate} \item[$\ensuremath{\mathtt{Verify}}(\ensuremath{\mathtt{Mark}}key,\ensuremath{\mathsf{vk}},M)$] ~ \begin{enumerate} \item Parse $\ensuremath{\mathtt{Mark}}key,\ensuremath{\mathsf{vk}}$ as $\ensuremath{\mathtt{Mark}}key = \left(\mathsf{b}, \{ r_t^{(i)}, r_L^{(i)} \}_{i \in [n]} \right)$, $\ensuremath{\mathsf{vk}} = \{ c_t^{(i)}, c_L^{(i)} \}_{i \in [n]}$.
\item For all $i \in [n]$ check that $\ensuremath{\mathtt{Open}}(c_t^{(i)},t^{(i)},r_t^{(i)})=1$ and $\ensuremath{\mathtt{Open}}(c_L^{(i)},T_L^{(i)},r_L^{(i)})=1$. Otherwise output $0$. \item For all $i\in [n]$ test that $\ensuremath{\mathtt{Classify}}(t^{(i)},M)=T_L^{(i)}$. If this is true for all but $\epsilon |T|$ elements from $T$ then output $1$, else output $0$. \end{enumerate} \end{description} \fi We want to remark that this construction captures both the watermarking of an existing model and the training from scratch. We now prove the security of the construction. \begin{thm} Let $\overline{D}$ be of super-polynomial size in $n$. Then assuming the existence of a commitment scheme and a strong backdooring scheme, the aforementioned algorithms $(\ensuremath{\mathtt{KeyGen}},\ensuremath{\mathtt{Mark}},\ensuremath{\mathtt{Verify}})$ form a privately verifiable watermarking scheme. \end{thm} The proof, on a very high level, works as follows: a model containing a strong backdoor means that this backdoor, and therefore the watermark, cannot be removed. Additionally, by the hiding property of the commitment scheme the verification key will not provide any useful information to the adversary about the backdoor used, while the binding property ensures that one cannot claim ownership of arbitrary models. In the proof, special care must be taken as we use reductions from the watermarking algorithm to the security of both the underlying backdoor and the commitment scheme. To be meaningful, those reductions must have much smaller runtime than actually breaking these assumptions directly. While this is easy in the case of the commitment scheme, reductions to backdoor security need more attention. 
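Before turning to the proof, the construction can be made concrete with a short sketch. The commitment below is hash-based, so it is only computationally (not statistically) hiding and merely stands in for the scheme required above; the ground-truth test of the first verification step is omitted, and \texttt{sample\_backdoor}, \texttt{embed\_backdoor} and \texttt{classify} are hypothetical stand-ins for the abstract primitives, not part of the construction itself:

```python
import os
import hashlib

def com(value: bytes, r: bytes) -> bytes:
    # Hash-based commitment Com(value, r). The construction requires a
    # *statistically hiding* scheme; this is a stand-in for illustration.
    return hashlib.sha256(value + r).digest()

def open_com(c: bytes, value: bytes, r: bytes) -> bool:
    return com(value, r) == c

def keygen(sample_backdoor, n=4):
    # KeyGen: sample a backdoor b = (T, T_L) and commit to every
    # trigger element and label under fresh randomness.
    T, TL = sample_backdoor(n)
    rs = [(os.urandom(16), os.urandom(16)) for _ in range(n)]
    vk = [(com(t, rt), com(l, rl))
          for (t, l), (rt, rl) in zip(zip(T, TL), rs)]
    mark_key = (T, TL, rs)
    return mark_key, vk

def mark(model, mark_key, embed_backdoor):
    # Mark: embed the backdoor b into the model.
    T, TL, _ = mark_key
    return embed_backdoor(model, T, TL)

def verify(mark_key, vk, model, classify, eps=0.25):
    # Verify: open all commitments, then check that the model classifies
    # all but an eps-fraction of T to the committed trigger labels.
    T, TL, rs = mark_key
    for (t, l), (rt, rl), (ct, cl) in zip(zip(T, TL), rs, vk):
        if not (open_com(ct, t, rt) and open_com(cl, l, rl)):
            return 0
    wrong = sum(classify(model, t) != l for t, l in zip(T, TL))
    return 1 if wrong <= eps * len(T) else 0
```

With a toy backdoor sampler (random byte strings with arbitrary labels) and a lookup-table "model", an honestly marked model verifies, while an independently generated key pair does not.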
\begin{proof} We prove the following properties: \paragraph{Correctness.} By construction, $\hat{M}$ which is returned by $\ensuremath{\mathtt{Mark}}$ will disagree with $\mathsf{b}$ on elements from $T$ with probability at most $\epsilon$, so in total at least $(1 -\epsilon) |T|$ elements agree by the definition of a backdoor. $\ensuremath{\mathtt{Verify}}$ outputs $1$ if $\hat{M}$ disagrees with $\mathsf{b}$ on at most $\epsilon |T|$ elements. \paragraph{Functionality-preserving.} Assume that $\ensuremath{\mathtt{Backdoor}}$ is a backdooring algorithm; then by its definition the model $\hat{M}$ is accurate outside of the trigger set of the backdoor, i.e. \begin{align*} \textup{Pr}_{x \in \overline{D} \setminus T } \left[ f(x)\neq \ensuremath{\mathtt{Classify}}(\hat{M},x) \right] \leq \epsilon\text{.} \end{align*} $\hat{M}$ in total will then err on a fraction of at most $\epsilon'=\epsilon + n/|\overline{D}|$, and because $\overline{D}$ by assumption is super-polynomially large in $n$, $\epsilon'$ is negligibly close to $\epsilon$. \begin{comment} \paragraph{Functionality-preserving} Assume that $\ensuremath{\mathtt{Backdoor}}$ is a strong backdooring algorithm; then it is in particular hard to find out if a model is actually backdoored (formally, the scheme has the hidden trigger sets property). Let $\hat{M}$ be backdoored as defined by $\ensuremath{\mathtt{Mark}}$ and assume that the scheme is not functionality-preserving. We let \begin{align*} \alpha &= \textup{Pr}\nolimits_{x\in \overline{D}}\left[ \ensuremath{\mathtt{Classify}}(x,M) \neq f(x) \right]\text{,} \\ \beta &= \textup{Pr}\nolimits_{x\in \overline{D}}\left[ \ensuremath{\mathtt{Classify}}(x,\hat{M}) \neq f(x) \right]\text{.} \end{align*} Now since there is a gap $\delta = |\alpha-\beta|$ between the accuracy of $M,\hat{M}$ we can use the following algorithm for $k\in \mathbb{N}$ on an input model $N$: \begin{enumerate} \item Set $c\leftarrow 0$. \item For $i \in [k]$ sample $x\leftarrow \overline{D}$.
If $\ensuremath{\mathtt{Classify}}(x,N)\neq f(x)$ then $c\leftarrow c+1$. \item If $c/k$ is closer to $\alpha$ than $\beta$ then output $0$, else $1$. \end{enumerate} Let $k=2n/\delta^2$; then by a Hoeffding bound $c/k$ will be within $\delta/2$ of the correct value except with negligible probability. Hence the above algorithm breaks the hidden trigger sets property with overwhelming probability. \end{comment} \paragraph{Non-trivial ownership.} To win, $\mathcal{A}$ must guess the correct labels for a $1-\epsilon$ fraction of $\tilde{T}$ in advance, as $\mathcal{A}$ cannot change the chosen values $\tilde{T},\tilde{T_L}$ after seeing the model due to the binding property of the commitment scheme. As $\ensuremath{\mathtt{KeyGen}}$ chooses the set $T$ in $\ensuremath{\mathtt{Mark}}key$ uniformly at random, whichever set $\mathcal{A}$ fixes for $\tilde{\ensuremath{\mathtt{Mark}}key}$ will intersect with $T$ only with negligible probability by definition (due to the \emph{multiple trigger sets} property). So assume for simplicity that $\tilde{T}$ does not intersect with $T$. Now $\mathcal{A}$ can choose $\tilde{T}$ to consist of elements either from within $\overline{D}$ or outside of it. Let $n_1=|\overline{D}\cap \tilde{T}|$ and $n_2=|\tilde{T}|-n_1$. For the benefit of the adversary, we make the strong assumption that whenever $M$ is inaccurate for $x\in \overline{D}\cap \tilde{T}$ then it classifies to the label in $\tilde{T}_L$. But as $M$ is $\epsilon$-accurate on $\overline{D}$, the number of committed labels that are classified incorrectly is $(1-\epsilon)n_1$. For every choice $\epsilon<0.5$ we have that $\epsilon n_1 < (1-\epsilon)n_1$. Observe that for our scheme, the value $\epsilon$ would be chosen much smaller than $0.5$, so this inequality always holds. On the other hand, consider all values of $\tilde{T}$ that lie in $D\setminus \overline{D}$.
By the assumption about machine learning that we made in its definition, if the input was chosen independently of $M$ and it lies outside of $\overline{D}$, then $M$ will in expectation misclassify $\frac{|L|-1}{|L|}n_2$ elements. We then have that $\epsilon n_2 < \frac{|L|-1}{|L|}n_2$ as $\epsilon <0.5$ and $|L|\geq 2$. As $\epsilon n = \epsilon n_1 + \epsilon n_2$, the error of $\tilde{T}$ must be larger than $\epsilon n$. \paragraph{Unremovability.} Assume that there exists no algorithm that can generate an $\epsilon$-accurate model $N$ of $f$ in time $t$, where $t$ is much smaller than the time necessary for training such an accurate model using $\ensuremath{\mathtt{Train}}$. At the same time, assume that the adversary $\mathcal{A}$ breaking the unremovability property takes time approximately $t$. By definition, after running $\mathcal{A}$ on input $M,\ensuremath{\mathsf{vk}}$ it will output a model $\tilde{M}$ which will be $\epsilon$-accurate and at least a $(1-\epsilon)$-fraction of the elements from the set $T$ will be classified correctly. The goal in the proof is to show that $\mathcal{A}$ achieves this independently of $\ensuremath{\mathsf{vk}}$. In a first step, we will use a hybrid argument to show that $\mathcal{A}$ essentially works independently of $\ensuremath{\mathsf{vk}}$. Therefore, we construct a series of algorithms where we gradually replace the backdoor elements in $\ensuremath{\mathsf{vk}}$. First, consider the following algorithm $\mathcal{S}$: \begin{enumerate} \item Compute $(M,\hat{M},\ensuremath{\mathtt{Mark}}key,\ensuremath{\mathsf{vk}}) \leftarrow \ensuremath{\mathtt{MModel}}()$. \item Sample $(\tilde{T},\tilde{T}_L) = \tilde{\mathsf{b}} \leftarrow \ensuremath{\mathtt{SampleBackdoor}}(\mathcal{O}^f)$ where $\tilde{T}=\{\tilde{t}^{(1)},\dots, \tilde{t}^{(n)}\}$ and $\tilde{T}_L=\{\tilde{T}_L^{(1)},\dots,\tilde{T}_L^{(n)} \}$.
Now set \begin{equation*} c_t^{(1)}\leftarrow \ensuremath{\mathtt{Com}}(\tilde{t}^{(1)}, r_t^{(1)}), \quad c_L^{(1)}\leftarrow \ensuremath{\mathtt{Com}}(\tilde{T}_L^{(1)},r_L^{(1)}) \end{equation*} and $\tilde{\ensuremath{\mathsf{vk}}} \leftarrow \{ c_t^{(i)}, c_L^{(i)} \}_{i \in [n]}$. \item Compute $\tilde{M} \leftarrow \mathcal{A}(\mathcal{O}^f,\hat{M},\tilde{\ensuremath{\mathsf{vk}}})$. \end{enumerate} This algorithm replaces the first element of a verification key with an element from an independently generated backdoor, and then runs $\mathcal{A}$ on it. In $\mathcal{S}$ we only exchange one commitment when compared to the input distribution to $\mathcal{A}$ from the security game. By the statistical hiding of $\ensuremath{\mathtt{Com}}$, the output of $\mathcal{S}$ must be distributed statistically close to the output of $\mathcal{A}$ in the unremovability experiment. Applying this argument repeatedly, we construct a sequence of hybrids $\mathcal{S}^{(1)},\mathcal{S}^{(2)},\dots,\mathcal{S}^{(n)}$ that change $1,2,\dots,n$ of the elements from $\ensuremath{\mathsf{vk}}$ in the same way that $\mathcal{S}$ does, and conclude that the success of outputting a model $\tilde{M}$ without the watermark using $\mathcal{A}$ must be independent of $\ensuremath{\mathsf{vk}}$. Consider the following algorithm $\mathcal{T}$ when given a model $M$ with a strong backdoor: \begin{enumerate} \item Compute $(\ensuremath{\mathtt{Mark}}key,\ensuremath{\mathsf{vk}})\leftarrow \ensuremath{\mathtt{KeyGen}}()$. \item Run the adversary and compute $\tilde{N} \leftarrow \mathcal{A}(\mathcal{O}^f,M,\ensuremath{\mathsf{vk}})$. \end{enumerate} By the hybrid argument above, the algorithm $\mathcal{T}$ runs in nearly the same time as $\mathcal{A}$, namely $t$, and its output $\tilde{N}$ will be without the backdoor that $M$ contained.
But then, by persistence of strong backdooring, $\mathcal{T}$ must also generate $\epsilon$-accurate models given arbitrary, in particular bad, input models $M$ in the same time $t$, which contradicts our assumption that no such algorithm exists. \paragraph{Unforgeability.} Assume that there exists a poly-time algorithm $\mathcal{A}$ that can break unforgeability. We will use this algorithm to open a statistically hiding commitment. To this end, we design an algorithm $\mathcal{S}$ which uses $\mathcal{A}$ as a subroutine. The algorithm trains a regular network (which can be watermarked by our scheme) and adds the commitment into the verification key. Then, it will use $\mathcal{A}$ to find openings for these commitments. The algorithm $\mathcal{S}$ works as follows: \begin{enumerate} \item Receive the commitment $c$ from the challenger. \item Compute $(M,\hat{M},\ensuremath{\mathtt{Mark}}key,\ensuremath{\mathsf{vk}}) \leftarrow \ensuremath{\mathtt{MModel}}()$. \item Let $\ensuremath{\mathsf{vk}} = \{ c_t^{(i)}, c_L^{(i)} \}_{i \in [n]}$ and set \[ \hat{c}_t^{(i)} \leftarrow \begin{cases} c & \text{ if } i=1 \\ c_t^{(i)} & \text{ else} \end{cases} \] and $\hat{\ensuremath{\mathsf{vk}}}\leftarrow \{ \hat{c}_t^{(i)},c_L^{(i)} \}_{i\in [n]}$. \item Compute $(\tilde{M},\tilde{\ensuremath{\mathtt{Mark}}key})\leftarrow \mathcal{A}(\mathcal{O}^f,\hat{M},\hat{\ensuremath{\mathsf{vk}}})$. \item Let $\tilde{\ensuremath{\mathtt{Mark}}key} = ( (\{t^{(1)},\dots, t^{(n)}\},T_L),\{ r_t^{(i)}, r_L^{(i)} \}_{i \in [n]} )$. If $\ensuremath{\mathtt{Verify}}(\tilde{\ensuremath{\mathtt{Mark}}key},\hat{\ensuremath{\mathsf{vk}}},\tilde{M})=1$ output $t^{(1)},r_t^{(1)}$, else output $\bot$. \end{enumerate} Since the commitment scheme is statistically hiding, the input to $\mathcal{A}$ is statistically indistinguishable from an input where $\hat{M}$ is backdoored on all the committed values of $\ensuremath{\mathsf{vk}}$.
Therefore the output of $\mathcal{A}$ in $\mathcal{S}$ is statistically indistinguishable from the output in the unforgeability definition. With the same probability as in the definition, $\tilde{\ensuremath{\mathtt{Mark}}key},\hat{\ensuremath{\mathsf{vk}}},\tilde{M}$ will make $\ensuremath{\mathtt{Verify}}$ output $1$. But by its definition, this means that $\ensuremath{\mathtt{Open}}(c,t^{(1)},r_t^{(1)})=1$, so $t^{(1)},r_t^{(1)}$ open the challenge commitment $c$. As the commitment is statistically hiding (and we generate the backdoor independently of $c$), this opens $c$ to a different value than the one for which it was generated, with overwhelming probability. \end{proof} \subsection{From Private to Public Verifiability} \label{sec:pubver} Using the algorithm $\ensuremath{\mathtt{Verify}}$ constructed in this section only allows verification by an honest party. The scheme described above is therefore only privately verifiable. After running $\ensuremath{\mathtt{Verify}}$, the key $\ensuremath{\mathtt{Mark}}key$ will be known and an adversary can retrain the model on the trigger set. This is not a drawback when it comes to an application like the protection of intellectual property, where a trusted third party in the form of a judge exists. If one instead wants to achieve public verifiability, then there are two possible scenarios for how to design an algorithm $\ensuremath{\mathtt{PVerify}}$: allowing public verification a constant number of times, or an arbitrary number of times. \begin{figure} \caption{A schematic illustration of the public verification process.} \label{fig:simulation} \end{figure} In the first setting, a straightforward approach to the construction of $\ensuremath{\mathtt{PVerify}}$ is to choose multiple backdoors during $\ensuremath{\mathtt{KeyGen}}$ and release a different one in each iteration of $\ensuremath{\mathtt{PVerify}}$.
This allows multiple verifications, but the number is upper-bounded in practice by the capacity of the model $M$ to contain backdoors, which cannot be extended arbitrarily without damaging the accuracy of the model. To achieve an unlimited number of verifications we will modify the watermarking scheme to output a different type of verification key. We then present an algorithm $\ensuremath{\mathtt{PVerify}}$ such that the interaction $\tau$ with an honest prover can be simulated as $\tau'$ given only the values $M,\ensuremath{\mathsf{vk}},\ensuremath{\mathtt{Verify}}(\ensuremath{\mathtt{Mark}}key,\ensuremath{\mathsf{vk}},M)$. This simulation means that no information about $\ensuremath{\mathtt{Mark}}key$ beyond what is leaked from $\ensuremath{\mathsf{vk}}$ ever gets to the verifier. We give a graphical depiction of the approach in Figure \ref{fig:simulation}. Our solution is sketched in Appendix \ref{app:publicver}. \subsection{Implementation Details} For an implementation, it is important to choose the size $|T|$ of the trigger set properly, where we have to consider that $|T|$ cannot be arbitrarily big, as the accuracy would drop. To lower-bound $|T|$ we assume an attacker against non-trivial ownership. For simplicity, we use a backdooring algorithm that generates trigger sets from elements where $f$ is undefined. By our simplifying assumption from Section \ref{subsec:ml}, the model will classify the images in the trigger set to random labels. Furthermore, assume that the model is $\epsilon$-accurate (which it also is on the trigger set). Then, one can model the probability that a dishonest party randomly gets $(1-\epsilon)|T|$ out of $|T|$ committed images right using a Binomial distribution. We want this event to have probability at most $2^{-n}$ and use Hoeffding's inequality to obtain that $|T|>n\cdot \ln(2)/\bigl(2(\frac{1}{|L|} +\epsilon -1)^2\bigr)$.
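The lower bound on $|T|$ derived above can be evaluated directly. The sketch below uses one standard form of Hoeffding's inequality for a Binomial random variable; the parameter values in the usage example are illustrative and not taken from our experiments:

```python
import math

def min_trigger_set_size(n: int, num_labels: int, eps: float) -> int:
    """Smallest |T| such that a party guessing labels uniformly at
    random is correct on a (1 - eps)-fraction of T with probability
    at most 2**(-n). By Hoeffding, for X ~ Bin(|T|, 1/|L|):
        Pr[X >= (1 - eps)|T|] <= exp(-2 |T| (1 - eps - 1/|L|)**2).
    """
    dev = 1.0 - eps - 1.0 / num_labels  # deviation above the mean 1/|L|
    assert dev > 0, "the bound requires eps < 1 - 1/|L|"
    return math.ceil(n * math.log(2) / (2 * dev * dev))
```

For instance, with $n=80$ bits of security, $|L|=10$ labels and $\epsilon=0.05$ the bound requires a trigger set of a few dozen elements; the required size grows as $\epsilon$ approaches $1-1/|L|$.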
To implement our scheme, it is necessary that $\ensuremath{\mathsf{vk}}$ becomes public before $\ensuremath{\mathtt{Verify}}$ is used. This ensures that a party does not simply generate a fake key after seeing a model. A solution for this is to e.g. publish the key on a time-stamped bulletin board like a blockchain. In addition, a statistically hiding commitment scheme should be used that allows for efficient evaluation in zero-knowledge (see Appendix \ref{app:publicver}). For this one can e.g. use a scheme based on a cryptographic hash function such as the one described in \cite{nigel_cryptomadesimple}. \section{A Direct Construction of Watermarking} \seclab{experiments} This section describes a scheme for watermarking a neural network model for image classification, and experiments analyzing it with respect to the definitions in Section \ref{sec:model}. We demonstrate that it is hard to reduce the persistence of watermarks that are generated with our method. For all the technical details regarding the implementation and hyper-parameters, we refer the reader to Section~\ref{sec:techdet}. \subsection{The Construction} Similar to Section \ref{sec:wmfrombackdooring}, we use a set of images as the \emph{marking key} or \emph{trigger set} of our construction\footnote{As the set of images will serve a similar purpose as the trigger set from backdoors in \secref{preliminaries}, we denote the marking key as trigger set throughout this section.}. To embed the watermark, we optimize the models using both training set and trigger set. We investigate two approaches: the first approach starts from a pre-trained model, i.e., a model that was trained without a trigger set, and continues training the model together with a chosen trigger set. This approach is denoted as \textsc{PreTrained}. The second approach trains the model from scratch along with the trigger set. This approach is denoted as \textsc{FromScratch}. 
This latter approach is related to \emph{Data Poisoning} techniques. During training, for each batch $b_t$ at iteration $t$, we sample $k$ trigger set images and append them to $b_t$. We follow this procedure for both approaches. We tested different values of $k$ (namely 2, 4, and 8), and setting $k=2$ reached the best results. We hypothesize that this is due to the \emph{Batch-Normalization} layer~\cite{ioffe2015batch}. The Batch-Normalization layer has two modes of operation. During training, it keeps a running estimate of the computed mean and variance. During evaluation, the running mean and variance are used for normalization. Hence, adding more images to each batch puts more focus on the trigger set images and makes convergence slower. In all models we optimize the Negative Log Likelihood loss function on both training set and \emph{trigger set}. Notice that we assume the creator of the model will be the one who embeds the watermark, and hence has access to the training set, test set, and \emph{trigger set}. In the following subsections, we demonstrate the efficiency of our method regarding non-trivial ownership and unremovability, and furthermore show that it is functionality-preserving, following the ideas outlined in Section \ref{sec:model}. For that we use three different image classification datasets: CIFAR-10, CIFAR-100 and ImageNet \cite{krizhevsky2009learning, russakovsky2015imagenet}. We chose these datasets to demonstrate that our method can be applied to models with different numbers of classes and also to large-scale datasets. \subsection{Non-Trivial Ownership} In the \emph{non-trivial ownership} setting, an adversary will not be able to claim ownership of the model even if he knows the watermarking algorithm. To fulfill this requirement we randomly sample the examples for the trigger set. We sampled a set of 100 abstract images, and for each image, we randomly selected a target class.
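The batch construction described above (appending $k$ trigger set examples to every training batch) can be sketched in a framework-agnostic way; representing examples as \texttt{(input, label)} pairs is an assumption made for this illustration:

```python
import random

def watermark_batches(train_set, trigger_set, batch_size, k=2, seed=0):
    # Yield training batches with k trigger-set examples appended to
    # each batch b_t, mirroring the schedule used for both the
    # PreTrained and FromScratch approaches (k = 2 worked best).
    rng = random.Random(seed)
    data = list(train_set)
    rng.shuffle(data)
    for i in range(0, len(data), batch_size):
        batch = data[i:i + batch_size]
        batch.extend(rng.sample(trigger_set, k))
        yield batch
```

Each yielded batch then contains \texttt{batch\_size + k} examples, and the loss is optimized over all of them jointly.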
This sampling-based approach ensures that the examples from the trigger set are uncorrelated to each other. Therefore revealing a subset from the trigger set will not reveal any additional information about the other examples in the set, as is required for public verifiability. Moreover, since both examples and labels are chosen randomly, following this method makes back-propagation based attacks extremely hard. Figure~\ref{fig:wm_exm} shows an example from the trigger set. \begin{figure} \caption{An example image from the trigger set. The label that was assigned to this image was ``automobile''.} \label{fig:wm_exm} \end{figure} \subsection{Functionality-Preserving} For the \emph{functionality-preserving} property we require that a model with a watermark should be as accurate as a model without a watermark. In general, each task defines its own measure of performance~\cite{adi2016structed,keshet2014optimizing,adi2016automatic,adi2017sequence}. However, since in the current work we are focused on image classification tasks, we measure the accuracy of the model using the 0-1 loss. Table~\ref{tb:func_pres_res} summarizes the test set and trigger-set classification accuracy on CIFAR-10 and CIFAR-100, for three different models; (i) a model with no watermark (\textsc{No-WM}); (ii) a model that was trained with the trigger set from scratch (\textsc{FromScratch}); and (iii) a pre-trained model that was trained with the trigger set after convergence on the original training data set (\textsc{PreTrained}). \begin{table} [h!] \centering \begin{tabular}{ l | p{1.8cm} | p{1.8cm} } Model & Test-set acc. & Trigger-set acc. 
\\ \hline \hline \multicolumn{3}{c}{CIFAR-10}\\ \hline \textsc{No-WM} & 93.42 & 7.0 \\ \hline \textsc{FromScratch} & 93.81 & 100.0 \\ \hline \textsc{PreTrained} & 93.65 & 100.0 \\ \hline \multicolumn{3}{c}{CIFAR-100}\\ \hline \textsc{No-WM} & 74.01 & 1.0 \\ \hline \textsc{FromScratch} & 73.67 & 100.0 \\ \hline \textsc{PreTrained} & 73.62 & 100.0 \\ \hline \hline \end{tabular} \caption{Classification accuracy for CIFAR-10 and CIFAR-100 datasets on the test set and trigger set.} \label{tb:func_pres_res} \end{table} It can be seen that all models have roughly the same test set accuracy and that in both \textsc{FromScratch}~and \textsc{PreTrained}~the trigger-set accuracy is 100\%. Since the trigger-set labels were chosen randomly, the \textsc{No-WM}~models' accuracy depends on the number of classes. For example, the accuracy on CIFAR-10 is 7.0\% while on CIFAR-100 it is only 1.0\%. \subsection{Unremovability} In order to satisfy the \emph{unremovability} property, we first need to define the types of unremovability functions we are going to explore. Recall that our goal in the unremovability experiments is to investigate the robustness of the watermarked models against changes that aim to remove the watermark while keeping the same functionality of the model. Otherwise, one could set all weights to zero and completely remove the watermark, but also destroy the model. Thus, we focus on \emph{fine-tuning} experiments. In other words, we wish to keep or improve the performance of the model on the test set by carefully training it. Fine-tuning seems to be the most probable type of attack since it is frequently used and requires less computational resources and training data~\cite{simonyan2014very, yosinski2014transferable, razavian2014cnn}.
Since in our setting we would like to explore the robustness of the watermark against strong attackers, we assumed that the adversary can fine-tune the models using the same amount of training instances and epochs as were used in training the model. An important question one can ask is: \emph{when is it still my model?} Or, in other words, how much can I change the model and still claim ownership? This question is highly relevant in the case of watermarking. In the current work we handle this issue by measuring the performance of the model on the test set and trigger set, meaning that the original creator of the model can claim ownership of the model if the model is still $\epsilon$-accurate on the original test set while also $\epsilon$-accurate on the trigger set. We leave the exploration of different methods and of a theoretical definition of this question for future work. \paragraph{Fine-Tuning.} We define four different variations of fine-tuning procedures: \begin{itemize} \item \emph{Fine-Tune Last Layer} (FTLL): Update the parameters of the last layer only. In this setting we freeze the parameters in all the layers except the output layer. One can think of this setting as if the model outputs a new representation of the input features and we fine-tune only the output layer. \item \emph{Fine-Tune All Layers} (FTAL): Update all the layers of the model. \item \emph{Re-Train Last Layers} (RTLL): Initialize the parameters of the output layer with random weights and only update them. In this setting, we freeze the parameters in all the layers except for the output layer. The motivation behind this approach is to investigate the robustness of the watermarked model under noisy conditions. This can alternatively be seen as changing the model to classify for a different set of output labels. \item \emph{Re-Train All Layers} (RTAL): Initialize the parameters of the output layer with random weights and update the parameters in all the layers of the network.
\end{itemize} Figure~\ref{fig:cifar_tinetune} presents the results for both the \textsc{PreTrained}~and \textsc{FromScratch}~models over the test set and trigger set, after applying these four different fine-tuning techniques. \begin{figure} \caption{Classification accuracy on the test set and trigger set for CIFAR-10 (top) and CIFAR-100 (bottom) using different fine-tuning techniques.} \label{fig:cifar_tinetune} \end{figure} The results suggest that while both models reach almost the same accuracy on the test set, the \textsc{FromScratch}~models are superior or equal to the \textsc{PreTrained}~models over all fine-tuning methods. \textsc{FromScratch}~reaches roughly the same accuracy on the trigger set when each of the four types of fine-tuning approaches is applied. Notice that this observation holds for both the CIFAR-10 and CIFAR-100 datasets, where for CIFAR-100 it appears to be easier to remove the trigger set using the \textsc{PreTrained}~models. Concerning the above-mentioned results, we now investigate what happens if an adversary wants to embed a watermark in a model which has already been watermarked. This can be seen as a black-box attack on the already existing watermark. According to the fine-tuning experiments, removing this new trigger set using the above fine-tuning approaches will not hurt the original trigger set and will dramatically decrease the results on the new trigger set. In the next paragraph, we explore and analyze this setting. Due to the fact that \textsc{FromScratch}~models are more robust than \textsc{PreTrained}, for the rest of the paper we report the results for those models only. \subsection{Ownership Piracy}\label{sec:const:ownershippiracy} As we mentioned in Section \ref{sec:model}, in this set of experiments we explore the scenario where an adversary wishes to claim ownership of a model which has already been watermarked.
For that purpose, we collected a new trigger set of 100 different images, denoted as \textsc{TS-New}, and embedded it into the \textsc{FromScratch}~model (this new set will be used by the adversary to claim ownership of the model). Notice that the \textsc{FromScratch}~models were trained using a different trigger set, denoted as \textsc{TS-Orig}. Then, we fine-tuned the models using the RTLL and RTAL methods. In order to have a fair comparison between the robustness of the trigger sets after fine-tuning, we use the same number of epochs to embed the new trigger set as we used for the original one. Figure~\ref{fig:cifar_2wms} summarizes the results on the test set, \textsc{TS-New}~and \textsc{TS-Orig}. We report results for both the FTAL and RTAL methods together with the baseline results of no fine-tuning at all (we did not report the results of FTLL and RTLL since those can be considered the easy cases in our setting). The red bars refer to the model with no fine-tuning, the yellow bars to the FTAL method and the blue bars to RTAL. The results suggest that the original trigger set, \textsc{TS-Orig}, is still embedded in the model (as demonstrated in the right columns) and that the accuracy of classifying it even improves after fine-tuning. This may imply that the model embeds the trigger set in a way that is close to the training data distribution. However, on the new trigger set, \textsc{TS-New}, we see a significant drop in accuracy. Notice that we can consider embedding \textsc{TS-New}~as embedding a watermark using the \textsc{PreTrained}~approach. Hence, this accuracy drop on \textsc{TS-New}~is not surprising and goes hand in hand with the results we observed in Figure~\ref{fig:cifar_tinetune}.
\begin{figure} \caption{Classification accuracy on CIFAR-10 (top) and CIFAR-100 (bottom) datasets after embedding two trigger sets, \textsc{TS-Orig} \label{fig:cifar_2wms} \end{figure} \paragraph{Transfer Learning.} In transfer learning we would like to use knowledge gained while solving one problem and apply it to a different problem. For example, we use a model trained on one dataset (the source dataset) and fine-tune it on a new dataset (the target dataset). For that purpose, we fine-tuned the \textsc{FromScratch}~model (which was trained on either CIFAR-10 or CIFAR-100) for another 20 epochs using the labeled part of the STL-10 dataset~\cite{coates2011analysis}. Recall that our watermarking scheme is based on the outputs of the model. As a result, when fine-tuning a model on a different dataset it is very likely that we change the number of classes, and then our method will probably break. Therefore, in order to still be able to verify the watermark, we save the original output layer, so that at verification time we use the model's original output layer instead of the new one. Following this approach makes both FTLL and RTLL useless, since these methods update the parameters of the output layer only. Regarding FTAL, this approach makes sense in specific settings where the classes of the source dataset are related to those of the target dataset. This property holds for CIFAR-10 but not for CIFAR-100. Therefore we report results only for the RTAL method. Table~\ref{tb:cifarrestransfer} summarizes the classification accuracy on the test set of STL-10 and the trigger set after transferring from CIFAR-10 and CIFAR-100. \begin{table} [h!] \centering \scalebox{0.95}{ \begin{tabular}{ l | c | c } & Test set acc.
& Trigger set acc.\\ \hline \hline CIFAR10 $\rightarrow$ STL10 & 81.87 & 72.0\\ \hline CIFAR100 $\rightarrow$ STL10 & 77.3 & 62.0\\ \hline \hline \end{tabular}} \caption{Classification accuracy on the STL-10 dataset and the trigger set, after transferring from either the CIFAR-10 or the CIFAR-100 model.} \label{tb:cifarrestransfer} \end{table} Although the trigger set accuracy is smaller after transferring the model to a different dataset, the results suggest that the trigger set still has a strong presence in the network even after fine-tuning on a new dataset. \subsection{ImageNet - Large Scale Visual Recognition Dataset} For the last set of experiments, we would like to explore the robustness of our watermarking method on a large scale dataset. For that purpose, we use the ImageNet dataset~\cite{russakovsky2015imagenet}, which contains about 1.3 million training images spanning 1000 categories. Table~\ref{tb:imagenetres} summarizes the results for the \emph{functionality-preserving} tests. We can see from Table~\ref{tb:imagenetres} that both models, with and without watermark, achieve roughly the same accuracy in terms of Prec@1 and Prec@5, while the model without the watermark attains 0\% on the trigger set and the watermarked model attains 100\% on the same set. \begin{table} [h!] \centering \scalebox{0.95}{ \begin{tabular}{ l | c | c} & Prec@1 & Prec@5 \\ \hline \hline \multicolumn{3}{c}{Test Set}\\ \hline \textsc{No-WM} & 66.64 & 87.11 \\ \hline \textsc{FromScratch} & 66.51 & 87.21\\ \hline \hline \multicolumn{3}{c}{Trigger Set}\\ \hline \textsc{No-WM} & 0.0 & 0.0 \\ \hline \textsc{FromScratch} & 100.0 & 100.0 \\ \hline \hline \end{tabular}} \caption{ImageNet results, Prec@1 and Prec@5, for a ResNet18 model with and without a watermark.} \label{tb:imagenetres} \end{table} Notice that the results we report for ResNet18 on ImageNet are slightly below what is reported in the literature.
The reason is that we trained for fewer epochs (training a model on ImageNet is computationally expensive, so we trained our models for fewer epochs than is common in the literature). In Table~\ref{tb:imagenettransfer} we report the results of fine-tuning from ImageNet to ImageNet, which can be considered FTAL, and from ImageNet to CIFAR-10, which can be considered RTAL or transfer learning. \begin{table} [h!] \centering \scalebox{0.95}{ \begin{tabular}{ l | c | c} & Prec@1 & Prec@5 \\ \hline \hline \multicolumn{3}{c}{Test Set}\\ \hline ImageNet $\rightarrow$ ImageNet & 66.62 & 87.22\\ \hline ImageNet $\rightarrow$ CIFAR-10 & 90.53 & 99.77\\ \hline \hline \multicolumn{3}{c}{Trigger Set}\\ \hline ImageNet $\rightarrow$ ImageNet & 100.0 & 100.0\\ \hline ImageNet $\rightarrow$ CIFAR-10 & 24.0 & 52.0\\ \hline \hline \end{tabular}} \caption{ImageNet results, Prec@1 and Prec@5, for fine-tuning using the ImageNet and CIFAR-10 datasets.} \label{tb:imagenettransfer} \end{table} Notice that after fine-tuning on ImageNet, the trigger set results are still very high, meaning that the trigger set has a very strong presence in the model even after fine-tuning. When transferring to CIFAR-10, we see a drop in Prec@1 and Prec@5. However, considering the fact that ImageNet contains 1000 target classes, these results are still significant. \subsection{Technical Details} \label{sec:techdet} We implemented all models using the PyTorch package \cite{paszke2017automatic}. In all the experiments we used a ResNet-18 model, a convolutional neural network with 18 layers~\cite{he2016deep, he2016identity}. We optimized each of the models using Stochastic Gradient Descent (SGD) with a learning rate of 0.1. For CIFAR-10 and CIFAR-100 we trained the models for 60 epochs while halving the learning rate every 20 epochs. For ImageNet we trained the models for 30 epochs while halving the learning rate every ten epochs.
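The step schedule just described can be sketched in a few lines of Python. This is an illustrative reconstruction, not the authors' training script; in particular, the decay factor of $0.5$ follows our reading of "halving" in the text and is not a confirmed hyperparameter.

```python
# Illustrative sketch of the step learning-rate schedule described above:
# start at 0.1 and cut the rate at fixed epoch intervals. The decay factor
# gamma=0.5 ("halving") is our reading of the text, not a confirmed value.

def lr_at_epoch(epoch, base_lr=0.1, step=20, gamma=0.5):
    """Multiply base_lr by gamma once every `step` epochs."""
    return base_lr * gamma ** (epoch // step)

# CIFAR schedule: 60 epochs, a cut every 20 epochs.
cifar_lrs = [lr_at_epoch(e) for e in (0, 20, 40)]
# ImageNet schedule: 30 epochs, a cut every 10 epochs.
imagenet_lrs = [lr_at_epoch(e, step=10) for e in (0, 10, 20)]
```

In PyTorch this corresponds to a standard step scheduler applied to the SGD optimizer.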
The batch size was set to 100 for CIFAR-10 and CIFAR-100, and to 256 for ImageNet. For the fine-tuning tasks, we used the last learning rate that was used during training. \section{Conclusion and Future Work} In this work we presented a practical analysis of the ability to watermark a neural network using random training instances and random labels. We presented possible black-box and grey-box attacks on the model, and showed how robust our watermarking approach is to them. At the same time, we outlined a theoretical connection to the previous work on backdooring such models. For future work we would like to define a theoretical boundary for how much change a party must apply to a model before they can claim ownership of it. We also leave as an open problem the construction of a practically efficient zero-knowledge proof for our publicly verifiable watermarking construction. \section*{Acknowledgments} This work was supported by the BIU Center for Research in Applied Cryptography and Cyber Security in conjunction with the Israel National Cyber Directorate in the Prime Minister's Office. \appendix \section{Supplementary Material} In this appendix we further discuss how to achieve public verifiability for a variant of our watermarking scheme. Let us first introduce the following additional notation: for a vector $\mathbf{e}\in \{0,1\}^\ell$, let $\mathbf{e}|_0=\{i\in [\ell] ~|~ \mathbf{e}[i]=0 \}$ be the set of all indices where $\mathbf{e}$ is $0$, and define $\mathbf{e}|_{1}$ accordingly.
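As a toy illustration of this selection notation (hypothetical Python; the commitments are placeholder strings):

```python
# e|_0 collects the indices where the bit vector e is 0, e|_1 those where
# it is 1; vk|_0^e and vk|_1^e then restrict the verification key (a list
# of commitment pairs) to those index sets.

def split_indices(e):
    """Return (e|_0, e|_1) for a 0/1 vector e."""
    zeros = [i for i, bit in enumerate(e) if bit == 0]
    ones = [i for i, bit in enumerate(e) if bit == 1]
    return zeros, ones

def select(vk, idx):
    """Restrict vk (a list of commitment pairs) to the indices in idx."""
    return [vk[i] for i in idx]

e = [0, 1, 1, 0]
vk = [("ct0", "cL0"), ("ct1", "cL1"), ("ct2", "cL2"), ("ct3", "cL3")]
zeros, ones = split_indices(e)
vk0, vk1 = select(vk, zeros), select(vk, ones)
```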
Given a verification key $\ensuremath{\mathsf{vk}} = \{ c_t^{(i)}, c_L^{(i)} \}_{i \in [\ell]}$ containing $\ell$ elements and a vector $\mathbf{e}\in \{0,1\}^\ell$, we write the selection of elements from $\ensuremath{\mathsf{vk}}$ according to $\mathbf{e}$ as \[\ensuremath{\mathsf{vk}}|_0^{\mathbf{e}}=\{ c_t^{(i)}, c_L^{(i)} \}_{i\in \mathbf{e}|_0} \quad \text{ and } \quad \ensuremath{\mathsf{vk}}|_1^{\mathbf{e}} = \{ c_t^{(i)}, c_L^{(i)} \}_{i\in \mathbf{e}|_1}\text{.} \] For a marking key $\ensuremath{\mathtt{Mark}}key=( \mathsf{b}, \{ r_t^{(i)}, r_L^{(i)} \}_{i \in [\ell]} )$ with $\ell$ elements and $\mathsf{b}=\{ T^{(i)},T_L^{(i)} \}_{i \in [\ell]}$ we then define \[ \ensuremath{\mathtt{Mark}}key|_{0}^{\mathbf{e}}=( \mathsf{b}|_0^{\mathbf{e}}, \{ r_t^{(i)}, r_L^{(i)} \}_{i \in \mathbf{e}|_0} ) \text{ with } \mathsf{b}|_0^{\mathbf{e}}=\{ T^{(i)},T_L^{(i)} \}_{i \in \mathbf{e}|_0} \] (and $\ensuremath{\mathtt{Mark}}key|_1^{\mathbf{e}}$ accordingly). We assume the existence of a cryptographic hash function $H:\{0,1\}^{p(n)}\rightarrow \{0,1\}^n$. \subsection{From Private to Public Verifiability} \label{app:publicver} To achieve public verifiability, we will make use of a cryptographic tool called a \emph{zero-knowledge argument} \cite{gmr_zk}, a technique that allows a prover $\mathcal{P}$ to convince a verifier $\mathcal{V}$ that a certain public statement is true without giving away any further information. This is similar to the idea of unlimited public verification outlined in Section \ref{sec:pubver}. \paragraph{Zero-Knowledge Arguments.} Let TM be an abbreviation for Turing Machines. An iTM is an interactive TM, i.e., a TM with a special communication tape. Let $L_R\subseteq \{0,1\}^*$ be an NP language and $R$ be its related NP-relation, i.e., $(x,w)\in R$ iff $x\in L_R$ and the TM used to define $L_R$ outputs $1$ on input of the statement $x$ and the witness $w$.
We write $R_x=\{w ~| ~ (x,w)\in R\}$ for the set of witnesses for a fixed $x$. Moreover, let $\mathcal{P},\mathcal{V}$ be a pair of PPT iTMs. For $(x,w)\in R$, $\mathcal{P}$ will obtain $w$ as input while $\mathcal{V}$ obtains an auxiliary random string $z\in \{0,1\}^*$. In addition, $x$ will be input to both TMs. Denote by $\mathcal{V}^{\mathcal{P}(a)}(b)$ the output of the iTM $\mathcal{V}$ with input $b$ when communicating with an instance of $\mathcal{P}$ that has input $a$. $(\mathcal{P},\mathcal{V})$ is called an \emph{interactive proof system} for the language $L_R$ if the following two conditions hold: \begin{description} \item[Completeness:] For every $x\in L_R$ there exists a string $w$ such that for every $z$: $\textup{Pr}[ \mathcal{V}^{\mathcal{P}(x,w)}(x,z)=1 ]$ is negligibly close to $1$. \item[Soundness:] For every $x \not\in L_R$, every PPT iTM $\mathcal{P}^{*}$ and all strings $w,z$: $\textup{Pr}[ \mathcal{V}^{\mathcal{P}^*(x,w)}(x,z)=1 ]$ is negligible. \end{description} An interactive proof system is called computational \emph{zero-knowledge} if for every PPT $\hat{\mathcal{V}}$ there exists a PPT simulator $\mathcal{S}$ such that for any $x\in L_R$ \[ \{ \hat{\mathcal{V}}^{\mathcal{P}(x,w)}(x,z) \}_{ w \in R_x ,z\in \{0,1\}^*} \approx_c \{ \mathcal{S}(x,z) \}_{z \in \{0,1\}^*}\text{,} \] meaning that all information which can be learned from observing a protocol transcript can also be obtained by running a polynomial-time simulator $\mathcal{S}$ which has no knowledge of the witness $w$. \subsubsection{Outlining the Idea} An intuitive approach to build $\ensuremath{\mathtt{PVerify}}$ is to convert the algorithm $\ensuremath{\mathtt{Verify}}(\ensuremath{\mathtt{Mark}}key,\ensuremath{\mathsf{vk}},M)$ from Section \ref{sec:wmfrombackdooring} into an NP relation $R$ and use a zero-knowledge argument system.
Unfortunately, this must fail due to Step \ref{step:verify:testifbackdoor} of $\ensuremath{\mathtt{Verify}}$: there, one tests if the item $\mathsf{b}$ contained in $\ensuremath{\mathtt{Mark}}key$ actually is a backdoor as defined above. Therefore, we would need access to the ground-truth function $f$ in the interactive argument system. This first of all needs human assistance, and is moreover only possible by revealing the backdoor elements. We will now give a different version of the scheme from Section \ref{sec:wmfrombackdooring} which embeds an additional proof into $\ensuremath{\mathsf{vk}}$. This proof shows that, with overwhelming probability, most of the elements in the verification key indeed form a backdoor. Based on this, we will then design a different verification procedure, based on a zero-knowledge argument system. \subsubsection{A Convincing Argument that most Committed Values are Wrongly Classified} Verifying that most of the elements of the trigger set are labeled wrongly is possible if one accepts\footnote{This is fine if $T$, as in our experiments, only consists of random images.} releasing a portion of this set. To solve the proof-of-misclassification problem, we use the so-called \emph{cut-and-choose} technique: in cut-and-choose, the verifier $\mathcal{V}$ will ask the prover $\mathcal{P}$ to open a subset of the committed inputs and labels from the verification key. Here, $\mathcal{V}$ is allowed to choose the subset that will be opened to him. Intuitively, if $\mathcal{P}$ committed to a large number of elements that are correctly labeled (according to $\mathcal{O}_f$), then at least one of them will show up in the values opened by $\mathcal{P}$ with overwhelming probability over the choice that $\mathcal{V}$ makes. Hence, most of the remaining commitments which were not opened must form a correct backdoor.
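The quantitative core of this intuition is elementary and can be checked numerically. The sketch below assumes the idealized setting of the protocol that follows, where each committed index is opened independently with probability $1/2$:

```python
import random

# If a cheating prover plants k correctly-labeled (non-backdoor) pairs
# among the committed elements, and each index is opened independently
# with probability 1/2, the cheat goes unnoticed only if NONE of the k
# planted pairs is opened, i.e. with probability 2^{-k}.

def undetected_probability(k):
    """Exact probability that all k planted pairs stay unopened."""
    return 0.5 ** k

def simulate(k, trials=20000, seed=0):
    """Monte-Carlo estimate of the same probability."""
    rng = random.Random(seed)
    hits = sum(all(rng.random() < 0.5 for _ in range(k)) for _ in range(trials))
    return hits / trials
```

So a single planted pair escapes with probability $1/2$, while $n$ planted pairs escape only with probability $2^{-n}$, which is negligible in $n$.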
To use cut-and-choose, the backdoor must contain $\ell> n$ elements, where our analysis will use $\ell=4n$ (other values of $\ell$ are also possible). Then, consider the following protocol between $\mathcal{P}$ and $\mathcal{V}$: \paragraph*{$\ensuremath{\mathtt{CnC}}(\ell):$} ~ \begin{enumerate} \item $\mathcal{P}$ runs $(\ensuremath{\mathtt{Mark}}key,\ensuremath{\mathsf{vk}})\leftarrow \ensuremath{\mathtt{KeyGen}}(\ell)$ to obtain a backdoor of size $\ell$ and sends $\ensuremath{\mathsf{vk}}$ to $\mathcal{V}$. We again define $\ensuremath{\mathtt{Mark}}key = (\mathsf{b}, \{ r_t^{(i)}, r_L^{(i)} \}_{i \in [\ell]} )$, $\ensuremath{\mathsf{vk}} = \{ c_t^{(i)}, c_L^{(i)} \}_{i \in [\ell]}$. \item $\mathcal{V}$ chooses $\mathbf{e}\leftarrow \{0,1\}^{\ell}$ uniformly at random and sends it to $\mathcal{P}$. \item $\mathcal{P}$ sends $\ensuremath{\mathtt{Mark}}key|_1^{\mathbf{e}}$ to $\mathcal{V}$. \item $\mathcal{V}$ checks for $i\in \mathbf{e}|_1$ that \begin{enumerate} \item $\ensuremath{\mathtt{Open}}(c_t^{(i)},t^{(i)},r_t^{(i)})=1$; \item $\ensuremath{\mathtt{Open}}(c_L^{(i)},T_L^{(i)},r_L^{(i)})=1$; and \item $T_L^{(i)}\neq f(t^{(i)})$. \end{enumerate} \end{enumerate} Assume that $\mathcal{P}$ chose exactly one element of the backdoor in $\ensuremath{\mathsf{vk}}$ wrongly; then this will be revealed by $\ensuremath{\mathtt{CnC}}$ to an honest $\mathcal{V}$ with probability $1/2$ (where $\mathcal{P}$ must open $\ensuremath{\mathsf{vk}}|_1^{\mathbf{e}}$ to the values he put into $c_t^{(i)}, c_L^{(i)}$ during $\ensuremath{\mathtt{KeyGen}}$ due to the binding property of the commitment). In general, one can show that a cheating $\mathcal{P}$ can put at most $n$ non-backdooring inputs into $\ensuremath{\mathsf{vk}}|_{0}^{\mathbf{e}}$, except with probability negligible in $n$.
Therefore, if the above check passes for $\ell=4n$, then at least $1/2$ of the values for $\ensuremath{\mathsf{vk}}|_{0}^{\mathbf{e}}$ must have the wrong committed label, as in a valid backdoor, with overwhelming probability. The above argument can be made non-interactive and thus publicly verifiable using the Fiat-Shamir transform~\cite{fiat_shamir}: in the protocol $\ensuremath{\mathtt{CnC}}$, $\mathcal{P}$ can generate the bit string $\mathbf{e}$ itself by hashing $\ensuremath{\mathsf{vk}}$ using a cryptographic hash function $H$. Then $\mathbf{e}$ will be distributed as if it was chosen by an honest verifier, while it is sufficiently random by the guarantees of the hash function to allow the same analysis for cut-and-choose. Any $\mathcal{V}$ can recompute the value $\mathbf{e}$ if it is generated from the commitments (while this also means that the challenge $\mathbf{e}$ is generated after the commitments were computed), and we can turn the above algorithm $\ensuremath{\mathtt{CnC}}$ into the following non-interactive key-generation algorithm $\ensuremath{\mathtt{{PKeyGen}}}$. \paragraph*{$\ensuremath{\mathtt{{PKeyGen}}}(\ell):$} ~ \begin{enumerate} \item Run $(\ensuremath{\mathtt{Mark}}key,\ensuremath{\mathsf{vk}})\leftarrow \ensuremath{\mathtt{KeyGen}}(\ell)$. \item Compute $\mathbf{e} \leftarrow H(\ensuremath{\mathsf{vk}})$. \item Set $\ensuremath{\mathtt{Mark}}key_p \leftarrow (\ensuremath{\mathtt{Mark}}key,\mathbf{e})$, $\ensuremath{\mathsf{vk}}_p \leftarrow (\ensuremath{\mathsf{vk}}, \ensuremath{\mathtt{Mark}}key|_1^{\mathbf{e}})$ and return $(\ensuremath{\mathtt{Mark}}key_p,\ensuremath{\mathsf{vk}}_p)$. \end{enumerate} \subsubsection{Constructing the Public Verification Algorithm} In the modified scheme, the $\ensuremath{\mathtt{Mark}}$ algorithm will only use the private subset $\ensuremath{\mathtt{Mark}}key|_0^{\mathbf{e}}$ of $\ensuremath{\mathtt{Mark}}key_p$ but will otherwise remain unchanged.
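A minimal sketch of this Fiat-Shamir step follows. The serialization of the verification key and the counter-mode expansion to $\ell$ bits are our own illustrative choices, not part of the scheme's specification; the point is only that $\mathbf{e}$ is deterministically recomputable from $\ensuremath{\mathsf{vk}}$ alone.

```python
import hashlib

# Sketch of the challenge derivation e = H(vk) used by PKeyGen above.
# Any verifier can recompute e from the commitments, and e is only
# determined after the commitments have been fixed.

def challenge_bits(vk, ell):
    """Expand SHA-256 of a serialized vk into an ell-bit challenge."""
    seed = hashlib.sha256(repr(vk).encode()).digest()
    bits, counter = [], 0
    while len(bits) < ell:
        block = hashlib.sha256(seed + counter.to_bytes(4, "big")).digest()
        bits.extend((byte >> j) & 1 for byte in block for j in range(8))
        counter += 1
    return bits[:ell]
```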
The public verification algorithm for a model $M$ then follows this structure: \begin{enumerate*}[label=(\roman{*})] \item $\mathcal{V}$ recomputes the challenge $\mathbf{e}$; \item $\mathcal{V}$ checks $\ensuremath{\mathsf{vk}}_p$ to assure that all of $\ensuremath{\mathsf{vk}}|_1^{\mathbf{e}}$ forms a valid backdoor; and \item $\mathcal{P},\mathcal{V}$ run $\ensuremath{\mathtt{Classify}}$ on $\ensuremath{\mathtt{Mark}}key|_0^{\mathbf{e}}$ using the interactive zero-knowledge argument system, and further test if the watermarking conditions on $M,\ensuremath{\mathtt{Mark}}key|_0^{\mathbf{e}},\ensuremath{\mathsf{vk}}|_0^{\mathbf{e}}$ hold. \end{enumerate*} For an arbitrary model $M$, one can rewrite steps \ref{step:verify:testcommitments} and \ref{step:verify:testclassify} of $\ensuremath{\mathtt{Verify}}$ (using $M,\ensuremath{\mathtt{Open}},\ensuremath{\mathtt{Classify}}$) into a binary circuit $C$ that outputs $1$ iff the prover inputs the correct $\ensuremath{\mathtt{Mark}}key|_0^{\mathbf{e}}$ which opens $\ensuremath{\mathsf{vk}}|_0^{\mathbf{e}}$ and if enough of these openings satisfy $\ensuremath{\mathtt{Classify}}$. Both $\mathcal{P},\mathcal{V}$ can generate this circuit $C$, as its construction does not involve private information. For the interactive zero-knowledge argument, we let the relation $R$ be defined by boolean circuits that output $1$, with statement $x=C$ and witness $w=\ensuremath{\mathtt{Mark}}key|_0^{\mathbf{e}}$, and use it in the following protocol $\ensuremath{\mathtt{PVerify}}$, which obtains the model $M$ as well as $\ensuremath{\mathtt{Mark}}key_p = (\ensuremath{\mathtt{Mark}}key,\mathbf{e})$ and $\ensuremath{\mathsf{vk}}_p = (\ensuremath{\mathsf{vk}}, \ensuremath{\mathtt{Mark}}key|_1^{\mathbf{e}})$ where $\ensuremath{\mathsf{vk}}= \{ c_t^{(i)}, c_L^{(i)} \}_{i \in [\ell]}$, $\ensuremath{\mathtt{Mark}}key = (\mathsf{b}, \{ r_t^{(i)}, r_L^{(i)} \}_{i \in [\ell]} )$ and $\mathsf{b}=\{ T^{(i)},T_L^{(i)} \}_{i \in [\ell]}$ as input.
\begin{enumerate} \item\label{step:app:cnc} $\mathcal{V}$ computes $\mathbf{ e}' \leftarrow H(\ensuremath{\mathsf{vk}})$. If $\ensuremath{\mathtt{Mark}}key|_1^{\mathbf{ e}}$ in $\ensuremath{\mathsf{vk}}_p$ does not match $\mathbf{ e}'$ then abort, else continue assuming $\mathbf{ e}=\mathbf{ e}'$. \item $\mathcal{V}$ checks that for all $i\in \mathbf{ e}|_1$: \begin{enumerate} \item $\ensuremath{\mathtt{Open}}(c_t^{(i)},t^{(i)},r_t^{(i)})=1$ \item $ \ensuremath{\mathtt{Open}}(c_L^{(i)},T_L^{(i)},r_L^{(i)})=1$ \item $ T_L^{(i)}\neq f(t^{(i)})$ \end{enumerate} If one of the checks fails, then $\mathcal{V}$ aborts. \item $\mathcal{P},\mathcal{V}$ compute a circuit $C$ with input $\ensuremath{\mathtt{Mark}}key|_0^{\mathbf{ e}}$ that outputs $1$ iff for all $i\in \mathbf{ e}|_0$: \begin{enumerate} \item $\ensuremath{\mathtt{Open}}(c_t^{(i)},t^{(i)},r_t^{(i)})=1$ \item $\ensuremath{\mathtt{Open}}(c_L^{(i)},T_L^{(i)},r_L^{(i)})=1$. \end{enumerate} Moreover, it tests that $\ensuremath{\mathtt{Classify}}(t^{(i)},M)=T_L^{(i)}$ for all but $\epsilon|\mathbf{ e}|_0|$ elements. \item $\mathcal{P},\mathcal{V}$ run a zero-knowledge argument for the given relation R using $C$ as the statement, where the witness $\ensuremath{\mathtt{Mark}}key|_{0}^{\mathbf{e} }$ is the secret input of $\mathcal{P}$. $\mathcal{V}$ accepts iff the argument succeeds. \end{enumerate} Assume the protocol $\ensuremath{\mathtt{PVerify}}$ succeeds. Then in the interactive argument, $M$ classifies at least $(1-\epsilon)|\mathbf{ e}|_0|\approx(1-\epsilon)2n$ values of the backdoor $\mathsf{b}$ to the committed value. For $\approx n$ of the commitments, we can assume that the committed label does not coincide with the ground-truth function $f$ due to the guarantees of Step \ref{step:app:cnc}. It is easy to see that this translates into a $2\epsilon$-guarantee for the correct backdoor. 
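The arithmetic behind this $2\epsilon$-guarantee can be spelled out; with $\ell=4n$ we have $|\mathbf{e}|_0|\approx 2n$, and the following restates the counts above rather than adding assumptions:

```latex
% Counting argument for the 2*epsilon guarantee (ell = 4n, |e|_0 ~ 2n):
\begin{align*}
\#\{i\in\mathbf{e}|_0:\ \ensuremath{\mathtt{Classify}}(t^{(i)},M)=T_L^{(i)}\}
  &\ge (1-\epsilon)\,|\mathbf{e}|_0| \approx (1-\epsilon)\,2n,\\
\#\{\text{genuine backdoor pairs among } \mathbf{e}|_0\}
  &\ge |\mathbf{e}|_0| - n \approx n,\\
\#\{\text{genuine pairs classified to the committed label}\}
  &\ge n - \epsilon\cdot 2n = (1-2\epsilon)\,n.
\end{align*}
```

That is, at least a $(1-2\epsilon)$-fraction of a size-$n$ backdoor is classified to its committed (wrong) label.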
By choosing a larger number $\ell$ for the size of the backdoor, one can achieve values that are arbitrarily close to $\epsilon$ in the above protocol. \end{document}
\begin{document} \baselineskip=17pt \hbox{C. R. Math. Acad. Sci. Paris 360 (2022), 1065--1069.} \title [A new theorem on quadratic residues modulo primes] {A new theorem on quadratic residues modulo primes} \author [Qing-Hu Hou, Hao Pan and Zhi-Wei Sun] {Qing-Hu Hou, Hao Pan and Zhi-Wei Sun} \address {(Qing-Hu Hou) School of Mathematics, Tianjin University, Tianjin 300350, People's Republic of China} \email{{\tt qh\[email protected]}} \address {(Hao Pan) School of Applied Mathematics, Nanjing University of Finance and Economics, Nanjing 210046, People's Republic of China} \email {{\tt [email protected]}} \address {(Zhi-Wei Sun, corresponding author) Department of Mathematics, Nanjing University, Nanjing 210093, People's Republic of China} \email{{\tt [email protected]} \newline\indent {\it Homepage}: {\tt http://maths.nju.edu.cn/\lower0.5ex\hbox{\~{}}zwsun}} \keywords{Quadratic residue, Legendre symbol, prime, quadratic field} \subjclass[2020]{Primary 11A15; Secondary 11A07, 11R11} \thanks{The first, second and third authors are supported by the National Natural Science Foundation of China (grants 11771330, 12071208 and 11971222, respectively)} \begin{abstract} Let $p>3$ be a prime, and let $(\frac{\cdot}p)$ be the Legendre symbol. Let $b\in\mathbb Z$ and $\varepsilon\in\{\pm1\}$. We mainly prove that $$\left|\left\{N_p(a,b):\ 1<a<p\ \text{and}\ \left(\frac ap\right)=\varepsilon\right\}\right|=\frac{3-(\frac{-1}p)}2,$$ where $N_p(a,b)$ is the number of positive integers $x<p/2$ with $\{x^2+b\}_p>\{ax^2+b\}_p$, and $\{m\}_p$ with $m\in\mathbb Z$ is the least nonnegative residue of $m$ modulo $p$. \end{abstract} \maketitle \section{Introduction} The theory of quadratic residues modulo primes plays an important role in fundamental number theory. Let $p$ be an odd prime and let $a\in\mathbb Z$ with $p\nmid a$. By Gauss' Lemma (cf.
\cite[p.\,52]{IR}), $$\left(\frac ap\right)=(-1)^{|\{1\le k\le\frac{p-1}2:\ \{ka\}_p>\frac p2\}|},$$ where $(\frac{\cdot}p)$ denotes the Legendre symbol, and we write $\{x\}_p$ for the least nonnegative residue of an integer $x$ modulo $p$. Let $n$ be any positive odd integer, and let $a\in\mathbb Z$ with $\gcd(a(1-a),n)=1$. In 2020, Z.-W. Sun \cite{SunIJNT} proved the following new result: $$(-1)^{|\{1\le k\le\frac{n-1}2:\ \{ka\}_n>k\}|}=\left(\frac{2a(1-a)}n\right),$$ where $(\frac{\cdot}n)$ is the Jacobi symbol. Let $p$ be an odd prime and let $a,b\in\mathbb Z$ with $a(1-a)\not\equiv0\pmod p$. By \cite[Lemma 2.7]{S19}, we have $$|\{x\in\{0,\ldots,p-1\}:\ \{ax+b\}_p>x\}|=\frac{p-1}2.$$ In 2019 Z.-W. Sun \cite{S19} employed Galois theory to prove that $$(-1)^{|\{1\le i<j\le \frac{p-1}2:\ \{i^2\}_p>\{j^2\}_p\}|}=\begin{cases}1&\text{if}\ p\equiv3\pmod8, \\(-1)^{(h(-p)+1)/2}&\text{if}\ p\equiv7\pmod8,\end{cases}$$ where $h(-p)$ is the class number of the imaginary quadratic field $\mathbb Q(\sqrt{-p})$. Motivated by the above work, for an odd prime $p$ and integers $a$ and $b$, we introduce the notation $$N_p(a,b):=\left|\left\{1\le x\le\frac{p-1}2:\ \{x^2+b\}_p>\{ax^2+b\}_p\right\}\right|.$$ {\it Example} 1.1. We have $N_7(4,0)=2$ since $$\{1^2\}_7<\{4\times1^2\}_7,\ \{2^2\}_7>\{4\times2^2\}_7\ \text{and}\ \{3^2\}_7>\{4\times3^2\}_7.$$ Let $p$ be a prime with $p\equiv1\pmod4$. Then $q^2 \equiv-1\pmod p$ for some integer $q$, hence for $a,x\in\mathbb Z$ we have $\{(qx)^2\}_p > \{a(qx)^2\}_p$ if and only if $\{x^2\}_p < \{ax^2\}_p.$ Thus, for each $a = 2,\ldots,p-1$ there are exactly $(p-1)/4$ positive integers $x < p/2$ such that $\{x^2\}_p>\{ax^2\}_p$. Therefore $N_p(a,0) = (p-1)/4$ for all $a = 2,\ldots,p-1$. In this paper we establish the following novel theorem, which was conjectured by the first and third authors \cite{HS} in 2018. \begin{theorem} \label{Main} Let $p > 3$ be a prime, and let $b$ be any integer.
Set $$S = \left\{N_p(a,b):\ 1 < a < p\ \text{and}\ \left(\frac ap\right)=1\right\}$$ and $$T = \left\{N_p(a,b):\ 1 < a < p\ \text{and}\ \left(\frac ap\right)=-1\right\}.$$ Then $|S|=|T|=1$ if $p\equiv1\pmod4$, and $|S|=|T|=2$ if $p\equiv3\pmod 4$. Moreover, the set $S$ does not depend on the value of $b$. \end{theorem} {\it Example} 1.2. Let us adopt the notation in Theorem \ref{Main}. For $p=5$, we have $S=\{1\}$ for any $b \in \mathbb{Z}$, and the set $T$ depends on $b$ as illustrated by the following table: \begin{center} \begin{tabular}{|c|c|c|c|c|c|} \hline $b$&\ 0\ &\ 1\ &\ 2\ &\ 3\ &\ 4\ \\ \hline $T$&\{1\}&\{0\}&\{1\}&\{2\}&\{1\}\\ \hline \end{tabular}. \end{center} For $p=7$, we have $S=\{1,2\}$ for any $b \in \mathbb{Z}$, and the set $T$ depends on $b$ as illustrated by the following table: \begin{center} \begin{tabular}{|c|c|c|c|c|c|c|c|}\hline $b$ & 0 & 1 & 2 & 3 & 4 & 5 & 6 \\ \hline \rule{0pt}{15pt} $T$ & \{0,1\} & \{1,2\} & \{2,3\} & \{1,2\} & \{2,3\} & \{1,2\} & \{0,1\}\\ \hline \end{tabular}.\end{center} \section{Proof of Theorem 1.1} \label{sec:2} \begin{lemma}\label{Lem2.1} For any prime $p\equiv3\pmod4$, we have $$\sum_{z=1}^{p-1}z\left(\frac zp\right)=-ph(-p),$$ where $h(-p)$ is the class number of the imaginary quadratic field $\mathbb Q(\sqrt{-p})$. \end{lemma} \begin{remark} This is a known result of Dirichlet (cf. \cite[Corollary 5.3.13]{Cohen93}). \end{remark} \begin{lemma}\label{Lem2.2} For any prime $p\equiv3\pmod4$ with $p>3$, there are $x,y,z\in\{1,\ldots,p-1\}$ such that $$\left(\frac xp\right)=\left(\frac {x+1}p\right)=1, \ -\left(\frac yp\right)=\left(\frac{y+1}p\right)=1, \ \text{and}\ \left(\frac zp\right)=-\left(\frac{z+1}p\right)=1.$$ \end{lemma} \noindent{\it Proof}.
By a known result (see, e.g., \cite[pp.\,64--65]{D}), we have $$\left|\left\{x\in\{1,\ldots,p-2\}:\ \left(\frac xp\right)=\left(\frac{x+1}p\right)=1\right\}\right|=\frac{p-3}4>0.$$ Hence \begin{align*}&\left|\left\{y\in\{1,\ldots,p-2\}:\ -\left(\frac yp\right)=\left(\frac{y+1}p\right)=1\right\}\right| \\=&\left|\left\{y\in\{1,\ldots,p-2\}:\ \left(\frac{y+1}p\right)=1\right\}\right|-\frac{p-3}4 \\=&\frac{p-1}2-1-\frac{p-3}4=\frac{p-3}4>0 \end{align*} and \begin{align*}&\left|\left\{z\in\{1,\ldots,p-2\}:\ \left(\frac zp\right)=-\left(\frac{z+1}p\right)=1\right\}\right| \\=&\left|\left\{z\in\{1,\ldots,p-2\}:\ \left(\frac{z}p\right)=1\right\}\right|-\frac{p-3}4 \\=&\frac{p-1}2-\frac{p-3}4=\frac{p+1}4>0. \end{align*} Now the desired result immediately follows. \rule{4pt}{7pt} \noindent{\bf Proof of Theorem 1.1}. Let $a\in\{2,\ldots,p-1\}$. For any $x\in\mathbb Z$, it is easy to see that $$\left\{\frac{ax^2+b}p\right\}+\left\{\frac{(1-a)x^2}p\right\}-\left\{\frac{x^2+b}p\right\} =\begin{cases}0&\text{if}\ \{x^2+b\}_p>\{ax^2+b\}_p, \\1&\text{if}\ \{x^2+b\}_p<\{ax^2+b\}_p,\end{cases}$$ where $\{\alpha\}$ denotes the fractional part of a real number $\alpha$. Thus \begin{align*}N_p(a,b)=&\sum_{x=1}^{(p-1)/2}\left(1+\left\{\frac{x^2+b}p\right\}-\left\{\frac{ax^2+b}p\right\} -\left\{\frac{(1-a)x^2}p\right\}\right) \\=&\frac{p-1}2+\sum_{x=1}^{(p-1)/2}\left\{\frac{x^2+b}p\right\}-\sum_{x=1}^{(p-1)/2}\left\{\frac{ax^2+b}p\right\} -\sum_{x=1}^{(p-1)/2}\left\{\frac{(1-a)x^2}p\right\} \\=&\frac{p-1}2+\sum_{x=1\atop (\frac xp)=1}^{p-1}\left\{\frac{x+b}p\right\} -\sum_{y=1\atop (\frac yp)=(\frac ap)}^{p-1}\left\{\frac{y+b}p\right\} -\sum_{z=1\atop (\frac zp)=(\frac {1-a}p)}^{p-1}\frac{z}p. \end{align*} Suppose that $(\frac ap)=\varepsilon$ with $\varepsilon\in\{\pm1\}$.
Then $$N_p(a,b)=\frac{p-1}2+\sum_{x=1\atop (\frac xp)=1}^{p-1}\left\{\frac{x+b}p\right\} -\sum_{y=1\atop (\frac yp)=\varepsilon}^{p-1}\left\{\frac{y+b}p\right\} -\sum_{z=1\atop (\frac zp)=\delta\varepsilon}^{p-1}\frac{z}p,$$ where $\delta=(\frac{a(1-a)}p)$. If $\varepsilon=1$, then $$N_p(a,b)=\frac{p-1}2-\frac1p\sum_{z=1\atop (\frac zp)=\delta}^{p-1}z$$ does not depend on $b$. If $p\equiv1\pmod 4$, then $(\frac{-1}p)=1$ and hence $$\sum_{z=1\atop (\frac zp)=1}^{p-1}z =\sum_{z=1\atop(\frac {p-z}p)=1}^{p-1}(p-z) =p\frac{p-1}2-\sum_{z=1\atop(\frac zp)=1}^{p-1}z,$$ thus $$\sum_{z=1\atop (\frac zp)=1}^{p-1}z =p\frac{p-1}4$$ and $$\sum_{z=1\atop (\frac zp)=-1}^{p-1}z=\sum_{z=1}^{p-1}z-p\frac{p-1}4=p\frac{p-1}4.$$ So, if $p\equiv1\pmod4$, then $|S|=|T|=1$, and moreover $$S=\left\{\frac{p-1}2-\frac{p-1}4\right\}=\left\{\frac{p-1}4\right\}.$$ Now assume that $p\equiv3\pmod4$. We want to show that $|S|=|T|=2$. By Lemma \ref{Lem2.1}, $$\sum_{z=1}^{p-1}z\left(\frac zp\right)=-ph(-p)\not=0.$$ Thus $$\sum_{z=1\atop (\frac zp)=1}^{p-1}z=\sum_{z=1}^{p-1}z\frac{1+(\frac zp)}2=p\frac{p-1}4-\frac p2h(-p)$$ and hence $$\sum_{z=1\atop (\frac zp)=-1}^{p-1}z=\sum_{z=1}^{p-1}z-\sum_{z=1\atop (\frac zp)=1}^{p-1}z =p\frac{p-1}4+\frac p2h(-p).$$ By Lemma \ref{Lem2.2}, for some $a\in\{2,\ldots,p-2\}$ we have $(\frac{a-1}p)=(\frac ap)=1$ and hence $(\frac{a(1-a)}p)=-1$. For $a'=p+1-a$, we have $$\left(\frac{a'}p\right)=-1\ \text{and}\ \left(\frac{a'(1-a')}p\right)=\left(\frac{(1-a)a}p\right)=-1.$$ By Lemma \ref{Lem2.2}, for some $a_*,b_*\in\{2,\ldots,p-2\}$ we have $$-\left(\frac{a_*-1}p\right)=\left(\frac {a_*}p\right)=1\ \ \text{and}\ \ \left(\frac{b_*-1}p\right)=-\left(\frac {b_*}p\right)=1.$$ Note that $$\left(\frac{a_*(1-a_*)}p\right)=1=\left(\frac{b_*(1-b_*)}p\right).$$ Now we clearly have $|S|=|T|=2$.
Moreover, $$S=\left\{\frac{p-1}2-\left(\frac{p-1}4\pm\frac{h(-p)}2\right)\right\}=\left\{\frac{p-1\pm2h(-p)}4\right\}.$$ The proof of Theorem 1.1 is now complete. \rule{4pt}{7pt} \begin{thebibliography}{99} \bibitem{Cohen93} H. Cohen, A Course in Computational Algebraic Number Theory, Grad. Texts Math., vol. 138, Springer, New York, 1993. \bibitem{D} H. Davenport, The Higher Arithmetic, 8th Edition, Cambridge Univ. Press, Cambridge, 2008. \bibitem{HS} Q.-H. Hou and Z.-W. Sun, Sequence A320159 at OEIS (On-Line Encyclopedia of Integer Sequences), {\tt http://oeis.org/A320159}, 2018. \bibitem{IR} K. Ireland and M. Rosen, A Classical Introduction to Modern Number Theory, 2nd Edition, Grad. Texts Math., vol. 84, Springer, New York, 1990. \bibitem{S19} Z.-W. Sun, {\it Quadratic residues and related permutations and identities}, Finite Fields Appl. {\bf 59} (2019), 246--283. \bibitem{SunIJNT} Z.-W. Sun, {\it Quadratic residues and quartic residues modulo primes}, Int. J. Number Theory {\bf 16} (2020), 1833--1858. \end{thebibliography} \end{document}
\begin{document} \author{Pablo Ramacher} \title[Localization of elementary systems in the theory of Wigner]{Modular localization of elementary systems in the theory of Wigner} \address{Pablo Ramacher, Humboldt--Universit\"at zu Berlin, Institut f\"ur Reine Mathematik, Ziegelstr. 13a, D--10099 Berlin, Germany} \keywords{General quantum field theory, representation theory of the inhomogeneous Lorentz group, distribution theory, Fourier--Laplace transform, modular von--Neumann algebras} \email{[email protected]} \thanks{Supported by the SFB 288 of the DFG} \begin{abstract} Starting from Wigner's theory of elementary systems and following a recent approach of Schroer, we define certain subspaces of localized wave functions in the underlying Hilbert space with the help of the theory of modular von--Neumann algebras of Tomita and Takesaki. We characterize the elements of these subspaces as boundary values of holomorphic functions in the sense of distribution theory and show that the corresponding holomorphic functions satisfy the sufficient conditions of the theorems of Paley--Wiener--Schwartz and H\"{o}rmander. \end{abstract} \maketitle \section{Introduction} The subject of this paper is the localization of elementary systems in the sense of Wigner. These are quantum mechanical systems whose states are all obtainable from any given state by relativistic transformations and superposition. They constitute a relativistically invariant linear manifold, and the corresponding wave functions satisfy relativistically invariant wave equations. Bargmann and Wigner \cite{bargmann-wigner} realized that these wave equations can be replaced by representations of the inhomogeneous Lorentz group given by these same equations. A classification of all possible representations then amounts to a classification of all possible relativistic wave equations. In this way it is possible not only to construct solutions of the wave equations but also to specify their relevant invariant properties.
It is natural to realize these representations in momentum space, since the momenta and energies of the system, but not the coordinates, are defined by the Lorentz group as infinitesimal translations. A priori it is not clear which localization properties correspond to the different elementary systems, since the coordinates which appear as arguments in the coordinate space wave functions are not eigenvalues of the position operator conjugate to the momentum operator. General quantum field theory in the sense of Haag, Araki and Kastler \cite{haag} is primarily concerned with local operations. Thus to each open region in Minkowski space there is associated an algebra of operators acting on the underlying Hilbert space which are interpreted as physical operations or observables that can be performed within this region. The states of the system are then defined as positive linear functionals over these algebras. Since it is sufficient to consider only bounded operators, one is led to the study of von--Neumann algebras; their properties can be analysed independently of the generating fields. By the theory of modular von--Neumann algebras of Tomita and Takesaki \cite{tomita}, \cite{takesaki} it is possible to associate with certain states and space time regions operators which contain important features of the theory. It was first pointed out by Schroer \cite{schroer1,schroer3} that, knowing these modular operators for certain regions, one can associate real subspaces of localized wave functions in the original Wigner representation space to space time regions in Minkowski space. These real subspaces can then be used to construct free quantum theories. Recent work in this direction has also been done by Brunetti, Guido and Longo \cite{brunetti-guido-longo2,brunetti-guido-longo1}.
In this paper we characterize the elements of these subspaces as boundary values of analytic functions in the sense of distribution theory which fulfill certain boundary conditions, and we show that these analytic functions are the Fourier--Laplace transforms of distributions with support in the considered closed, but not necessarily compact, convex regions. We restrict ourselves to the case of the massive scalar field, but all our considerations can be carried over to arbitrary quantum fields. \section{Representations of the inhomogeneous Lorentz group} Let ${\mathbb R}^4$ be the four dimensional Minkowski space with coordinates $x^0,\, x^1,\,x^2,\,x^3$ and metric tensor $g$ given by $g_{00}=1, \,g_{11}=g_{22}=g_{33}=-1,\,g_{ij}=0,\, i\not=j$. The group of all linear transformations which leave the quadratic form \begin{displaymath} (x^0)^2-(x^1)^2-(x^2)^2-(x^3)^2 \end{displaymath} invariant is called the general homogeneous Lorentz group. An inhomogeneous Lor\-entz transform is a transformation which consists of a homogeneous Lorentz transform together with a translation in Minkowski space, the translation being performed after the homogeneous Lorentz transform. The component of the identity of the general inhomogeneous Lorentz group is denoted by ${\mathrm P_+^\uparrow(3,1)}$, the proper orthochronous inhomogeneous Lorentz group. According to Wigner \cite{wigner1} the unitary irreducible representations of the inhomogeneous Lorentz group can be classified as follows: \begin{theorem} The representations of class $P^+_s$ are given by a positive number $P=m^2>0$ and a discrete parameter $s=0, 1/2,1,\dots$ and $p^0 >0$. They correspond to particles with mass $m$ and spin $s$. There is also a class $P^-_s$ with $p^0<0$. The class $0^+_s$ contains representations which correspond to massless particles with discrete helicity and there is also a class $0^-_s$.
The representations which are given by $P=0,\, p^0>0$ and a positive integer $\Xi$ constitute the classes $0^+(\Xi)$, $0^+(\Xi')$ and correspond to elementary systems of mass zero and continuous spin. They are single respectively two valued. The classes $0^-(\Xi)$, $0^-(\Xi')$ are characterized analogously. The remaining classes are given by the cases $p=0$ and $P<0$. \end{theorem} Physical realizations are known only for the classes $P^+_s$ and $0^+_s$. They are realized in Hilbert spaces of ${\rm L}^2$--integrable functions $\varphi(p,\sigma)$ on the pseudo--Riemannian space forms \begin{displaymath} \Gamma^+_P=\mklm{p \in {\mathbb R}^4: p^kp_k=P,\, p^0 >0}. \end{displaymath} The variable $\sigma$ is discrete and can assume the values $-s,\dots,+s$. To each Lorentz transformation $y^k=\Lambda^k{}_lx^l+a^k$ corresponds a unitary operator $U(L)=T(a)d(\Lambda)$ whose action is given by \begin{equation} \label{eq:0.0} U(L)\varphi(p,\sigma)=e^{i\mklm{p,a}}Q(p,\Lambda) \varphi(\Lambda^{-1}p,\sigma), \end{equation} where $Q(p,\Lambda)$ is a unitary operator which depends on $p$ but acts only on the variable $\sigma$. By the continuity of the representations there is for each one--parameter group of unitary operators $U(t)$ a uniquely determined self-adjoint operator $H$ such that $U(t)=\exp(-itH)$. In the following we will consider analytic elements of the given representations and use them to characterize real subspaces of localized wave functions. As a corollary of Nelson's analytic vector theorem we have the following proposition \cite{reed-simon}. \begin{proposition} \label{prop:0.0} A closed symmetric operator $H$ with domain $D(H)$ acting on a Hilbert space ${\mathcal H}$ is self-adjoint if and only if there is a dense set of analytic elements in $D(H)$.
The vector--valued functions \begin{displaymath} U(\tau) \psi:=e^{-i\tau H} \psi= \sum\limits_{n=0}^\infty \frac{(-i\tau)^n}{n!} H^n \psi \in {\mathcal H} \end{displaymath} are then analytic in $\tau \in {\mathbb C}$ for each analytic element $\psi$. \end{proposition} The domain of the closed operator $U(\tau)$ depends only on $\Im \tau$. Set $\tau=\lambda+i\varrho$ and let $D_U(\varrho)$ be the subset of ${\mathcal H}$ such that $(U(\tau),D_U(\varrho))$ is a closed and normal operator. Then the following statement holds \cite{bisognano-wichmann}. \begin{proposition} \label{prop:0.1} If $\varphi \in D_U(\varrho)$ then the vector--valued function \begin{displaymath} U(\tau)\varphi \end{displaymath} is strongly continuous for $0\leq \Im \tau/\varrho \leq 1$ and analytic for $0<\Im \tau/\varrho <1$. \end{proposition} \section{Boundary values of analytic functions and Fourier--Laplace trans\-form} We consider boundary values of analytic functions in the sense of distribution theory and specify the necessary conditions for the existence of such limits. We also state the theorems of Paley--Wiener--Schwartz and H\"{o}rmander which will be of relevance in the ensuing sections. The following is a generalization of a theorem proved by Epstein \cite{epstein} for boundary values of analytic functions in ${\mathcal S}'({\mathbb R})$. \begin{theorem} \label{thm:1.0} Let $\Gamma$ be an open convex cone in ${\mathbb R}^n$ and ${\mathcal T}={\mathbb R}^n+i\Gamma$. If $f(\zeta)$ is analytic in ${\mathcal T}$ and converges for $\Im \zeta \to 0$ to a tempered distribution, that is, $\lim _{\Im \zeta \to 0}f(.+i \Im \zeta)$ exists in ${\mathcal S}'({\mathbb R}^n)$, then for each compact set $M$ in $\Gamma$ there is an estimate \begin{equation} \label{eq:1.1} \modulus{f(\zeta)} \leq C(1+\modulus{\zeta})^N, \qquad \Im \zeta \in M, \end{equation} where $\modulus{\zeta}:=\max_j\modulus{\zeta_j}$.
\end{theorem} \begin{proof} We choose in $\Gamma$ an open convex cone $\Delta$ and $n$ affine independent vectors in $\overline \Delta$ such that in this basis the components of each vector in ${\mathbb R}^n+i\Delta$ have strictly positive imaginary parts. Let $f(.+i\eta)$ converge in ${\mathcal S}'({\mathbb R}^n)$ to the tempered distribution $u$. This means that we can choose $n$ positive numbers $0<\gamma_j<\infty,\, j=1,\dots, n,$ such that for $\eta \in \Delta_\gamma:=\mklm {\eta \in \Delta: 0 < \eta_j < \gamma_j}$ and any $\varphi \in {\mathcal S}({\mathbb R}^n)$ the relation \begin{displaymath} \lim \limits_{\eta \to 0} \int f(\xi+i\eta) \varphi(\xi) d\xi=u(\varphi) \end{displaymath} holds. We set $f_0=u$ and write this in what follows also as \begin{displaymath} f_\eta(\varphi)=\eklm{f(.+i\eta),\varphi} \longrightarrow \eklm{f_0,\varphi}=f_0(\varphi). \end{displaymath} Since $\Delta$ can be chosen in such a way that $\overline \Delta \setminus\mklm{0} \subset \Gamma$, we obtain a continuous map $\eta \mapsto f_\eta$ from the compactum $\overline \Delta_\gamma$ to ${\mathcal S}'({\mathbb R}^n)$. The image of this map is also compact and hence bounded. One can then infer the existence of a seminorm $\norm{\cdot}_k$ in ${\mathcal S}({\mathbb R}^n)$ such that all $f_\eta$ with $\eta \in \overline{\Delta}_\gamma$ are uniformly continuous with respect to this norm, i.e. there is a constant $C$ and a positive number $k$ such that for each $\varphi \in {\mathcal S}({\mathbb R}^n)$ and $\eta \in \overline{\Delta}_\gamma$ the inequality \begin{equation} \label{eq:1.2} \modulus{\eklm{f_\eta,\varphi}}\leq C\sum\limits_{\modulus{\alpha},\,\modulus{\beta} \leq k}\sup_x\modulus {x^\alpha \partial ^\beta \varphi(x)}=C\norm{\varphi}_k \end{equation} is fulfilled.
In this way the distributions $f_\eta$ can be extended to linear functionals on the Banach space whose topology is induced by the seminorm $\norm{\cdot}_k$. So the $f_\eta$ can be considered as elements of ${{\mathcal S}'}^k({\mathbb R}^n)$. Since $f$ is an analytic function in ${\mathcal T}_\gamma={\mathbb R}^n+i\Delta_\gamma$, $f(\zeta)\prod_{j=1}^n(\zeta_j+i)^{-k}$ is also analytic in ${\mathcal T}_\gamma$ and by Cauchy's integral formula we obtain for $f(\zeta)$ the representation \begin{displaymath} \frac{\prod_{j=1}^n (\zeta_j+i)^k}{(2 \pi i)^n}\int\limits_ {\kappa_1} \cdots \int\limits_{\kappa_n} \frac {f(z_1,\dots,z_n)}{\prod _{j=1}^n (z_j-\zeta_j)(z_j+i)^k} dz_1\cdots dz_n, \end{displaymath} where each $\kappa_j$ is a closed path in the strip $\Omega_j=\mklm{z_j \in {\mathbb C}: 0<\Im z_j<\gamma_j}$ around $\zeta_j$ respectively. This representation is independent of the chosen paths and we may thus take them as borders of the rectangles $[-x^1_j,x^1_j]\times[y^1_j,y^2_j], \, x^1_j>0,\, 0<y^1_j<y^2_j<\gamma_j$. If we now let the rectangles approach the strips in which they are contained, the integrals along the borders $[y^1_j,y^2_j]$ disappear and we obtain for $f(\zeta)$ the expression \begin{displaymath} \frac{\prod_{j=1}^n (\zeta_j+i)^k}{(2 \pi i)^n}\sum \limits_\theta \pm \int \limits_{-\infty}^\infty \frac {f(x+i\theta)}{\prod_{j=1}^n (x_j+i\theta_j-\zeta_j)(x_j+i\theta_j+i)^k} dx_1 \cdots dx_n, \end{displaymath} where $\theta=(\theta_1,\dots,\theta_n)$ and the $\theta_j$ are equal to $y^1_j$ or equal to $\gamma_j$ respectively and the sign in the sum over $\theta$ depends on the orientation of the corresponding borders. We put \begin{displaymath} \Phi_{\zeta,\theta}(x)=\prod\limits_{j=1}^n (x_j+i\theta_j-\zeta_j)^{-1}(x_j+i\theta_j+i)^{-k}.
\end{displaymath} The functions $\Phi_{\zeta,\theta}$ as well as their derivatives up to order $\leq k$ decrease at infinity faster than any polynomial $x^\alpha,\, \modulus {\alpha}\leq k$. On the other hand $\theta\mapsto \Phi_{\zeta,\theta}$ is a continuous map for $\theta_j\not=\eta_j=\Im \zeta_j$ into the Banach space ${{\mathcal S}}^k({\mathbb R}^n)$ and $\Phi_{\zeta,\theta}$ remains in a compact set for $\theta \to 0$ and fixed $\eta_j$. For $\theta \to 0$ we have therefore \begin{displaymath} \eklm {f_\theta,\Phi_{\zeta,\theta}} \rightarrow \eklm{f_{0},\Phi_{\zeta,0}}. \end{displaymath} We obtain for $f$ in ${\mathcal T}_\gamma$ the expression \begin{displaymath} f(\zeta)=\frac{\prod_{j=1}^n (\zeta_j+i)^k}{(2 \pi i)^n}\sum \limits_\theta \pm \eklm {f_\theta,\Phi_{\zeta,\theta}}, \end{displaymath} where $\theta=(\theta_1,\dots,\theta_n)$ is equal to $\gamma$ or zero. As a consequence of the continuity condition \eqref{eq:1.2} we obtain for $f(\zeta)$ the estimate \begin{align} \label{eq:1.3} \begin{split} \modulus{f(\zeta)} &\leq C\prod \limits_{j=1}^n \modulus{\zeta_j+i}^k \sum\limits_\theta \norm {\Phi_{\zeta,\theta}}_k\leq \\ &\leq C\prod\limits_{j=1}^n(1+\modulus{\zeta_j})^k \sum\limits_\theta \sum \limits_{\modulus{\alpha}, \, \modulus{\beta} \leq k} \sup _x \modulus{x^\alpha \partial ^\beta \Phi_{\zeta,\theta}(x)}, \end{split} \end{align} where we have made use of \begin{displaymath} \modulus{(\zeta_j+i)^k} \leq \sum \limits_{m=0}^k\binom{k}{m}\modulus{\zeta_j}^m=(1+\modulus {\zeta_j})^k. \end{displaymath} Differentiation yields for the sum over $\modulus{\alpha},\modulus{\beta}\leq k$ \begin{gather*} \sum\limits_{\modulus{\alpha},\modulus{\beta}\leq k \atop \delta_j+\epsilon_j=\beta_j} \sup _x \modulus{x^\alpha \frac{\delta!k!}{\epsilon!} (x_1+i\theta_1-\zeta_1)^{-1-\delta_1}\cdots (x_n+i\theta_n-\zeta_n)^{-1-\delta_n} \times \right.\\ \left.
\times (x_1+i\theta_1+i)^{-k-\epsilon_1}\cdots (x_n+i\theta_n+i)^{-k-\epsilon_n}} \end{gather*} so that the estimate \eqref{eq:1.3} now reads \begin{equation} \label{eq:1.4} \begin{split} \modulus{f(\zeta)} &\leq C\prod \limits_{j=1}^n (1+\modulus{\zeta_j})^k \sum\limits_\theta \sum \limits_{\modulus{\delta} \leq k} \modulus {\Im \zeta_1-\theta_1}^{-\delta_1-1}\cdots \modulus{\Im \zeta_n-\theta_n}^{-\delta_n-1}. \end{split} \end{equation} The $\theta_j$ are equal to zero or to $\gamma_j$ and we have $0<\Im \zeta_j<\gamma_j$. Since $\Delta$ and the numbers $\gamma_j$ are arbitrary, we obtain for each compact $M$ in $\Gamma$ an estimate of the form \begin{displaymath} \modulus{f(\zeta)} \leq C(1+\modulus{\zeta})^N, \qquad \Im \zeta \in M, \end{displaymath} for a positive integer $N$ and a constant $C$. \end{proof} It can be shown that the given conditions, especially relations \eqref{eq:1.3} and \eqref{eq:1.4}, are also sufficient. The Fourier transform is an isomorphism of ${\mathcal S}$, so that the Fourier transform of a tempered distribution $u \in {\mathcal S}'({\mathbb R}^n)$ can be defined as \begin{displaymath} \hat u(\varphi)=u(\hat \varphi), \qquad \varphi \in {\mathcal S}. \end{displaymath} For distributions with compact support the Fourier transform is given by the entire analytic function \begin{displaymath} \hat u(\zeta)=u_x(e^{-i\eklm{x,\zeta}}), \qquad \zeta \in {\mathbb C}^n. \end{displaymath} It is called the Fourier--Laplace transform of $u$. For each closed convex set $E$ we define now the convex, positively homogeneous function \begin{equation} \label{eq:1.5} H_E(\xi):=\sup _{x \in E} \eklm {x,\xi}, \quad \xi \in {\mathbb R}^n, \end{equation} with values in $(-\infty,\infty]$. It characterizes the set $E$ completely, since $E$ is given as the set of all $x \in {\mathbb R}^n$ for which $\eklm{x,\xi} \leq H_E(\xi), \, \xi \in {\mathbb R}^n$.
Conversely, if $H$ is a function with the mentioned properties, there exists exactly one closed convex set $E$ such that $H=H_E$ and $E=\mklm{x: \eklm{x,\xi} \leq H(\xi),\, \xi \in {\mathbb R}^n}$. If $E$ is compact then $H_E(\xi)<\infty$ for each $\xi$. We state now the theorems of Paley--Wiener--Schwartz \cite{schwartz} and H\"{o}r\-mander \cite{hoermander}. \begin{theorem} \label{thm:1.1} Let $K$ be a compact convex set in ${\mathbb R}^n$ with support function $H_K$. If $u$ is a distribution of order $N$ with support contained in $K$, then \begin{equation} \label{eq:1.6} \modulus {\hat u(\zeta)} \leq C(1+\modulus{\zeta})^N e^{H_K(\Im \zeta)},\qquad \zeta \in {\mathbb C}^n. \end{equation} Conversely, every entire analytic function in ${\mathbb C}^n$ which satisfies the relation \eqref{eq:1.6} for some $N$ is the Fourier--Laplace transform of a distribution with support in $K$. \end{theorem} It turns out that it is possible to define the Fourier--Laplace transform at least on certain subspaces of ${\mathbb C}^n$ for more general distributions. So for $\zeta \in {\mathbb C}^n$ and fixed $\eta=\Im \zeta$ \begin{displaymath} \hat u(\zeta)= \eklm {u,e^{-i\eklm {.\,,\zeta}}} \end{displaymath} can be defined as a distribution in $\xi=\Re \zeta$ if $e^{\eklm{.\,,\eta}}u \in {\mathcal S}'$. We set \begin{equation} \label{eq:1.7} \Gamma_u=\mklm{\eta \in {\mathbb R}^n: e^{\eklm{.\,,\eta}} u \in {\mathcal S}'}. \end{equation} Then the following theorem holds. \begin{theorem} \label{thm:1.2} If $u \in {\mathcal D}'({\mathbb R}^n)$, \eqref{eq:1.7} defines a convex set $\Gamma_u$. If its interior $\Gamma_u^\circ$ is not empty, there exists a function $\hat u$ analytic in ${\mathbb R}^n+i \Gamma^\circ_u$ such that the Fourier transform of $e^{\eklm{.\,,\eta}} u$ is given by $\hat u(.+i \eta)$ for all $\eta \in \Gamma ^\circ_u$.
For each compact set $M \subset \Gamma ^\circ_u$ there is an estimate \begin{equation} \label{eq:1.8} \modulus {\hat u(\zeta)} \leq C (1+ \modulus {\zeta})^N, \qquad \Im \zeta \in M. \end{equation} Conversely, if $\Gamma$ is an open convex set in ${\mathbb R}^n$ and $U$ an analytic function in ${\mathbb R}^n+i\Gamma$ which fulfills an estimate of the form \eqref{eq:1.8} for every compact set $M$ in $\Gamma$, then there is a distribution $u$ such that $e^{\eklm{.\,,\eta}} u \in {\mathcal S}'$ with Fourier transform $U(\cdot+i \eta)$ for all $\eta \in \Gamma$. If in addition $\supp u \subset K$, then \begin{equation} \label{eq:1.9} \modulus {\hat u(\zeta)} \leq C (1+\modulus {\zeta})^N e ^{H_K(\Im \zeta -\eta)}, \end{equation} if $\eta \in M$, $H_K(\Im \zeta - \eta) < \infty$, where $M$ is a compact set in $\Gamma^\circ_u$. If conversely there is an $\eta$ for which \eqref{eq:1.9} holds, then $\supp u \subset K$ if $K$ is closed and convex. \end{theorem} \section{Localization for the massive scalar field} Algebraic quantum field theory is concerned with von Neumann algebras ${\mathcal M}({\mathrm O})$ of observables localized in space time domains ${\mathrm O}$ together with states $\omega$ on these algebras satisfying certain selection criteria. Due to the Reeh--Schlieder property of the vacuum one may associate with certain regions ${\mathrm O}$ and states $\omega$ the operators $\delta$ and $j$ of the modular theory of Tomita and Takesaki. Here $\delta$ is a positive operator which generates a one--parameter group of automorphisms $\text{ad}\, \delta^{it}$ of ${\mathcal M}({\mathrm O})$ and $j$ is an antiunitary operator that defines the conjugation $\text{ad}\, j$ which maps ${\mathcal M}({\mathrm O})$ onto its commutant in the Hilbert space associated with $\omega$ by the Gelfand--Naimark--Segal construction.
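Recall the basic relations of modular theory in this setting \cite{tomita}, \cite{takesaki}: if $\Omega$ is a cyclic and separating vector for ${\mathcal M}({\mathrm O})$, the closure $s$ of the antilinear map $A\Omega \mapsto A^*\Omega$, $A \in {\mathcal M}({\mathrm O})$, has the polar decomposition $s=j\delta^{1/2}$, and the modular objects satisfy \begin{displaymath} j\,{\mathcal M}({\mathrm O})\,j={\mathcal M}({\mathrm O})', \qquad \delta^{it}\,{\mathcal M}({\mathrm O})\,\delta^{-it}={\mathcal M}({\mathrm O}), \quad t \in {\mathbb R}. \end{displaymath} It is this decomposition $s=j\delta^{1/2}$ which will reappear below in the definition of the operators $s_{L,\pm}$.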
Important features of the theory are contained in these operators, but explicit realizations of them are known only for certain regions, $\omega$ being the vacuum state. So in the case where ${\mathrm O}$ is a spacelike wedge and the local algebras are generated by Wightman fields that transform covariantly under a finite dimensional representation of the Lorentz group, the modular group is the group of velocity transforms that leave the wedge invariant, and the conjugation is the $PCT$ operation combined with a rotation. With the knowledge of these modular objects for wedge like regions we associate, following Schroer \cite{schroer1}, to certain closed and convex sets in Minkowski space which arise out of the intersection of wedges real subspaces of wave functions in Wigner representation space. These wave functions can then be viewed as localized in the corresponding regions. We characterize the elements of these subspaces as boundary values of analytic functions on three--dimensional complex submanifolds of complex Minkowski space which satisfy certain boundary conditions and show that the latter can be analytically continued to open regions in Minkowski space. They converge in the sense of distribution theory to square--integrable functions and we show that they satisfy the sufficient conditions of the theorems of Paley--Wiener--Schwartz and H\"{o}rmander. In the following we will restrict ourselves to the massive scalar field. This field corresponds to the representations of class $P^+_0$ of the inhomogeneous Lorentz group ${\mathrm P_+^\uparrow(3,1)}$. The wave functions have only one component and the unitary operator $Q(p,\Lambda)$ in equation \eqref{eq:0.0} is equal to $1$. In this case the wave equation reduces to $p^kp_k=P$.
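In this case the orbit ${\Gamma^+_P}$ can be described explicitly: writing $p=(p^0,\vec p)$ with $\vec p=(p^1,p^2,p^3)$ and $P=m^2$, we have \begin{displaymath} {\Gamma^+_P}=\mklm{p \in {\mathbb R}^4: p^0=\sqrt{\modulus{\vec p}^2+m^2}}, \qquad dM=\frac{dp^1\,dp^2\,dp^3}{p^0}, \end{displaymath} where $dM$ denotes the Lorentz invariant measure on ${\Gamma^+_P}$ with respect to which the scalar product on ${\rm L}^2({\Gamma^+_P})$ is taken; this is the measure $dM$ appearing in the proofs of the estimates below.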
To each Lorentz transformation $y^k=\Lambda^k{}_lx^l+a^k$ corresponds a unitary operator $U(L)=T(a)d(\Lambda)$ whose action on any $\varphi \in {\rm L}^2({\Gamma^+_P})$ is given by \begin{equation} \label{eq:2.0} U(L)\varphi(p)=e^{i\mklm{p,a}}\varphi(\Lambda^{-1}p), \end{equation} while the $PCT$ transformation is realized by the antiunitary operator \begin{displaymath} \Theta \varphi(p)=\bar \varphi(p). \end{displaymath} We consider now in Minkowski space the region $W:=\mklm{x \in {\mathbb R}^4:x^3>\modulus {x^0}}$. $W$ is open and convex as well as invariant under velocity transformations in $x^3$--direction, under rotations around the $x^3$--axis and under translations in direction of $x^1$ and $x^2$. All these transformations constitute a subgroup of isometric isomorphisms of $W$ in ${\mathrm P_+^\uparrow(3,1)}$. The velocity transforms in $x^3$--direction \begin{displaymath} y^k=\Lambda^k{}_l(t)x^l \end{displaymath} are given in their active form in the coordinates $x^0, x^1, x^2, x^3$ by the matrices \begin{displaymath} \left ( \begin{array}{cccc} \cosh t & 0 & 0 & -\sinh t \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ -\sinh t & 0 & 0 & \cosh t \end{array} \right ), \qquad t \in {\mathbb R}\,. \end{displaymath} By transformation with elements $L=(a,I)$ of ${\mathrm P_+^\uparrow(3,1)}$ we obtain from $W$ open convex regions $W_L$ to which correspond again certain subgroups of isometric automorphisms. The corresponding velocity transformations \begin{displaymath} y^k=I^k{}_l\Lambda^l{}_m(t)(I^{-1})^m{}_nx^n+(\delta ^k_n-I^k{}_l\Lambda^l{}_m(t)(I^{-1})^m{}_n)a^n \end{displaymath} constitute a one--parameter abelian subgroup to which we associate the one--parame\-ter group of unitary operators \begin{equation} \label{eq:2.2} U_L(t):=U(({\bf 1}-I\Lambda(t) I^{-1})a,\tilde I \tilde \Lambda(t) \tilde I^{-1})=T(a)d(\tilde I) d(\tilde \Lambda(t))d(\tilde I)^{-1}T(a)^{-1}.
\end{equation} Finally, each $L$ defines the antiunitary involution \begin{equation} \label{eq:2.3} j_L:=T(a)d(\tilde I) d(\tilde \Upsilon) \Theta d(\tilde I)^{-1}T(a)^{-1}=T(({\bf 1} + I\Upsilon I^{-1})a)d(\tilde I \tilde \Upsilon \tilde I^{-1}) \Theta, \end{equation} where $\Upsilon$ denotes a rotation by $\pi$ around the $x^3$--axis. Following Bisognano and Wichmann \cite{bisognano-wichmann} we consider the analytic continuation of the operators $U_L(t)$ and define the closed operators \begin{displaymath} (\delta^{1/2}_{L,+},D_+):=(U_L(i\pi),D_{U_L}(\pi)), \qquad (\delta^{1/2}_{L,-},D_-):=(U_L(-i\pi),D_{U_L}(-\pi)). \end{displaymath} Together with the involution $j_L$ they define the antilinear closed operators \begin{displaymath} (s_{L,+},D_+):=(j_L\delta ^{1/2}_{L,+},D_+), \qquad (s_{L,-},D_-):=(j_L\delta ^{1/2}_{L,-},D_-). \end{displaymath} These are then the modular operators corresponding to the region $W_L$. We consider now in Minkowski space a closed polyhedral region $K$ with vertices $a_i$, $i=1, \dots,n$, which arises out of the intersection of the family \begin{equation} \label{eq:2.4} \mklm{\overline{W}_L}_{L \in X_K} \end{equation} of closed convex regions $\overline{W}_L$; here $X_K$ is some subset of ${\mathrm P_+^\uparrow(3,1)}$ depending on $K$. This family is supposed to decompose into $n$ subfamilies \begin{equation} \label{eq:2.5} \mklm{\overline {W}_{(a_i,I)}}_{I \in X_{K,a_i}} \end{equation} where the $X_{K,a_i}$ are nonempty closed convex $6$--dimensional subsets of ${\mathrm P_+^\uparrow(3,1)}$ associated to each vertex $a_i$. These assumptions correspond to the prescription that the intersection over the family $\mklm {\overline {W}_L}_{L \in X_K}$ is to be understood as the intersection of all the regions $\overline W_L$ which contain $K$. \begin{definition} Let $K$ be a closed convex region as above.
We associate to $K$ in ${\rm L}^2({\Gamma^+_P})$ the real subspaces \begin{displaymath} {\mathcal H}^R_{K,\pm}:=\mklm{\varphi \in {\rm L}^2({\Gamma^+_P}): s_{L,\pm}\varphi=\varphi,\quad L \in X_K}. \end{displaymath} \end{definition} Let $\varphi \in {\mathcal H}^R_{K,+}$. Then $\varphi \in D_{U_L}(\pi)$ for all $L\in X_K$ and we define for each vertex $a_i$ the functions \begin{equation} \label{eq:2.6} u_{+,a_i}(\zeta):=U_L(\tau)\varphi(p), \quad L=(a_i,I),\quad I \in X_{K,a_i}, \quad 0\leq\Im \tau \leq \pi, \end{equation} where $p \in {\Gamma^+_P}$ and $\zeta =I\Lambda(\tau)^{-1}I^{-1}p$. \begin{lemma} For every vertex $a_i$, the set \begin{equation} \label{eq:2.7} M_{K,a_i}^+:=\mklm{\zeta \in {\Gamma^{(c)}_P}: \zeta =I\Lambda(\tau)^{-1}I^{-1}p, \, p \in {\Gamma^+_P},\, 0 < \Im \tau <\pi,\, I \in X^\circ_{K,a_i}} \end{equation} is a complex $3$--dimensional submanifold in ${\mathbb C}^4$ and the function $u_{+,a_i}$ is holomorphic on $M^+_{K,a_i}$ and hence uniquely determined. For $0 \leq \Im \tau \leq \pi$, $u_{+,a_i}$ is continuous. \end{lemma} \begin{proof} It can be shown that the sets $M^+_{K,a_i}$ are given by \begin{displaymath} M_{K,a_i}^+={\Gamma^{(c)}_P} \cap {\mathcal T}_{K,a_i}^+, \end{displaymath} where ${\mathcal T}_{K,a_i}^+={\mathbb R}^4+i\Gamma_{K,a_i}^+$ and \begin{gather*} \Gamma_{K,a_i}^+:=\mklm{\eta \in {\mathbb R}^4: \eta=I\eta',\quad I\in X^\circ_{K,a_i},\quad \eta' \in \Gamma^+ },\\ \Gamma^+:=\mklm{\eta \in {\mathbb R}^4:\eta^-<0,\quad \eta^+>0,\quad \eta^1=\eta^2=0}. \end{gather*} Since ${\Gamma^{(c)}_P}$ is given as the set of zeros of the analytic function $\mu(\zeta)=\mklm{\zeta,\zeta}-P$ it is a complex submanifold of ${\mathbb C}^4$, and since ${\mathcal T}^+_{K,a_i}$ is open in ${\mathbb C}^4$, $M_{K,a_i}^+$ is open in ${\Gamma^{(c)}_P}$.
${\mathrm L_+(C)}$ acts transitively on ${\Gamma^{(c)}_P}$ and the isotropy group of a point $\zeta \in {\Gamma^{(c)}_P}$ is given by ${\mathrm SU}(3)$, so that \begin{displaymath} {\Gamma^{(c)}_P} \simeq {\mathrm L_+(C)} \Big / {\mathrm SU}(3). \end{displaymath} ${\Gamma^{(c)}_P}$ is thus the orbit of a point $\zeta$ under complex velocity transforms and we can write each $\zeta$ in $M_{K,a_i}^+={\Gamma^{(c)}_P} \cap {\mathcal T}_{K,a_i}^+$ as \begin{displaymath} \zeta=I\Lambda^{-1}(\tau_I)I^{-1}p=\Lambda^{-1}_1(\tau_1)\Lambda^{-1}_2(\tau_2)\Lambda^{-1}_3(\tau_3)p_0, \end{displaymath} where the $\Lambda_i(\tau_i)$ are complex velocity transforms in $x_i$--direction with $0 \leq \Im \tau_i \leq \pi,\, i=1,2,3,$ and $p_0$ is a fixed point in ${\Gamma^+_P}$. We can thus interpret the functions $u_{+,a_i}(\zeta)$ as functions of $\tau=(\tau_1,\tau_2,\tau_3)$ and write $u_{+,a_i}(\zeta)=u_{+,a_i}(\tau_1,\tau_2,\tau_3)$. Since every complex velocity transform can be obtained from any other by adjoining the latter with a homogeneous Lorentz transform, the analyticity of the $u_{+,a_i}$ follows by Proposition \ref{prop:0.0}. \end{proof} The $u_{+,a_i}$ satisfy the boundary conditions \begin{displaymath} u_{+,a_i}(-\xi)=U_L(i\pi)\varphi(p)=j_L\varphi(p)=e^{i\mklm{p,a_i}}e^{i\mklm{\xi,a_i}}\bar u_{+,a_i}(\xi) \end{displaymath} for all $\xi=I\Upsilon^{-1}I^{-1}p,\, p \in {\Gamma^+_P},\,I \in X_{K,a_i}$. Note that $\Lambda^{-1}(i\pi)=-\Upsilon$. Analogous considerations hold in the case that $\varphi\in {\mathcal H}^R_{K,-}$. Hence we have the following proposition. \begin{proposition} \label{prop:2.0} Let an irreducible representation of the inhomogeneous Lor\-entz group ${\mathrm P_+^\uparrow(3,1)}$ of class $P^+_0$ be given in the Hilbert space ${\rm L}^2({\Gamma^+_P})$ of ${\rm L}^2$--integrable functions on ${\Gamma^+_P}$.
The subspaces ${\mathcal H}^R_{K,\pm}$ associated to the polyhedral region $K$ with vertices $a_i$, $i=1,\dots,n$, are then given by the set of all functions $\varphi(p) \in {\rm L}^2({\Gamma^+_P})$ which are boundary values of analytic functions $u_{+,a_i}$ resp. $u_{-,a_i}$ in $M_{K,a_i}^+$ resp. $M_{K,a_i}^-$ for some $i$, satisfying the boundary conditions \begin{equation} \label{eq:2.8} u_{\pm,a_i}(-\xi)=e^{i\mklm{p,a_i}}e^{i\mklm{\xi,a_i}}\bar u_{\pm,a_i}(\xi) \end{equation} for all $\xi=I\Upsilon I^{-1}p,\, p \in {\Gamma^+_P}$, and $(a_i,I) \in \mklm{a_i} \times X_{K,a_i} \subset X_K$ respectively. \end{proposition} By the second theorem of Cartan every holomorphic function on a closed analytic submanifold of a Stein manifold $X$ is the restriction of a holomorphic function defined on $X$ \cite{gunning-rossi}. Therefore every $u_{+,a_i}$ can be extended analytically to ${\mathcal T}^+_{K,a_i}$ respectively. Since the holomorphic hull of ${\mathcal T}^+_{K,a_i}$ is given by its convex hull, we have the following lemma. \begin{lemma} $u_{+,a_i}$ can be analytically extended to $\convh {\mathcal T}^+_{K,a_i}$. \end{lemma} We consider now the operator $R(\Lambda)\varphi(p)=\varphi(\Lambda^{-1}p)$ and set \begin{displaymath} R_I(t):=R(I \Lambda^{-1}(t) I^{-1}). \end{displaymath} By definition we have \begin{displaymath} U_L(t)\varphi(p)=e^{i\mklm{p,a}}e^{-i\mklm{I\Lambda^{-1}(t)I^{-1}p,a}}R_I(t)\varphi(p). \end{displaymath} If $\varphi \in D_R(\pi)$ we define the function \begin{equation} \label{eq:2.9} r_+(\zeta):=R_I(\tau)\varphi(p), \quad (a,I) \in X_K,\quad 0\leq\Im \tau\leq \pi. \end{equation} In a similar way as before for the functions $u_{+,a_i}$ we have the following lemma.
\begin{lemma} $r_+$ is holomorphic on the complex manifold \begin{displaymath} M_{K,+}:=\mklm{\zeta \in {\Gamma^{(c)}_P}: \zeta =I\Lambda(\tau)^{-1}I^{-1}p, \, p \in {\Gamma^+_P},\, 0 < \Im \tau <\pi,\, L=(a,I) \in X_{K}^\circ} \end{displaymath} and can analytically be extended to $\convh {\mathcal T}_{K}^+$, where \begin{displaymath} \Gamma_K^+:=\mklm{\eta \in {\mathbb R}^4: \eta=I\eta',\, (a,I)\in X^\circ_{K},\, \eta' \in \Gamma^+ }. \end{displaymath} \end{lemma} As before we can represent each element $\zeta$ of $M_{K,+}={\Gamma^{(c)}_P} \cap {\mathcal T}^+_K$ as \begin{displaymath} \zeta=I\Lambda^{-1}(\tau_I)I^{-1}p=\Lambda^{-1}_1(\tau_1)\Lambda^{-1}_2(\tau_2)\Lambda^{-1}_3(\tau_3)p_0, \end{displaymath} where the $\Lambda_i(\tau_i)$ are complex velocity transformations in $x_i$--direction with $0 \leq \Im \tau_i \leq \pi,\, i=1,2,3,$ and $p_0$ is a fixed point in ${\Gamma^+_P}$. We can therefore interpret $r_+(\zeta)$ as a function of $\tau=(\tau_1,\tau_2,\tau_3)$ and write \begin{gather*} r_+(\zeta)=r_+(\tau_1,\tau_2,\tau_3)=\\ =R(\Lambda_3(\tau_3))R(\Lambda_2(\tau_2))R(\Lambda_1(\tau_1))\varphi(p_0) =R_{\Lambda_3\Lambda_2\Lambda_1}(\tau)\varphi(p_0). \end{gather*} \begin{proposition} $r_+$ satisfies in $\convh {\mathcal T}^+_K$ an estimate of the form \begin{equation} \label{eq:2.10} \modulus{r_+(\zeta)} \leq C(1+\modulus{\zeta})^N,\qquad \Im \zeta \in H, \end{equation} where $H$ is a compact set in $\convh \Gamma^+_K$.
\end{proposition} \begin{proof} The element $R_I(\tau_I)\varphi(.)$ in ${\rm L}^2({\Gamma^+_P})$ is a tempered distribution \begin{displaymath} R_{\Lambda_3\Lambda_2\Lambda_1}(\tau)\varphi(.)= R_{\Lambda_3\Lambda_2\Lambda_1}(.+i\varrho)\varphi(p_0) \end{displaymath} which depends on the parameters $\varrho=(\varrho_1,\varrho_2,\varrho_3)$; for $\Phi \in {\mathcal S}({\Gamma^+_P})$ we thus have \begin{gather*} \eklm {R_{\Lambda_3\Lambda_2\Lambda_1}(i\varrho)\varphi,\Phi}= \\=\int^\infty_{-\infty}R_{\Lambda_3\Lambda_2\Lambda_1}(\lambda'+i\varrho)\varphi(p_0) \Phi(\Lambda_1^{-1}(\lambda_1')\Lambda_2^{-1}(\lambda_2')\Lambda_3^{-1}(\lambda_3')p_0)\, d\lambda'=\\ =\int_{\Gamma^+_P} R_{\Lambda_3\Lambda_2\Lambda_1}(i\varrho)\varphi(p)\Phi(p) \, dM \end{gather*} where $\modulus {\partial (p^1,p^2,p^3)/\partial(\lambda_1,\lambda_2,\lambda_3)}=p^0$. Now $R_I(\tau_I)\varphi=R_{\Lambda_3\Lambda_2\Lambda_1}(\tau)\varphi$ is strongly continuous for $0 \leq {\mathcal I}m \tau_I \leq \pi$ as well as for corresponding values of $\tau_i,\, i=1,2,3$; in particular we obtain \begin{displaymath} \norm {R_{\Lambda_3\Lambda_2\Lambda_1}(\tau)\varphi-\varphi} \to 0 \quad \text{for} \quad \tau \to 0; \end{displaymath} since strong convergence implies weak convergence we have for all elements $\psi$ of ${\rm L}^2({\Gamma^+_P})$ \begin{displaymath} (R_{\Lambda_3\Lambda_2\Lambda_1}(\tau)\varphi,\psi) \to (\varphi,\psi) \quad \text{for} \quad \tau \to 0.
\end{displaymath} Hence it follows that \begin{displaymath} \lim_{\varrho \to 0} \int_{\Gamma^+_P} R_{\Lambda_3\Lambda_2\Lambda_1}(i\varrho)\varphi(p)\Phi(p)\,dM=\varphi(\Phi), \end{displaymath} and by Theorem \ref{thm:1.0} we obtain for $r_+(\zeta)=R_{\Lambda_3\Lambda_2\Lambda_1}(\tau)\varphi(p_0)$ in $M_{K,+}$ an estimate of the form \begin{equation} \label{eq:2.10.a} \modulus{r_+(\zeta)} \leq C(1+\modulus{\zeta})^N,\qquad {\mathcal I}m \zeta \in H, \end{equation} where $H$ is a compact set in ${\mathcal I}m M_{K,+}$. Continuing $r_+\equiv r_+(\zeta^i),\, \zeta^i\equiv \tau_i,\, \zeta^4=\mu(\zeta)-P$, analytically to $\convh {\mathcal T}^+_K$ in such a way that $r_+$ also has a bound of the given form with respect to $\zeta^4$, we obtain for $r_+$ in all of $\convh {\mathcal T}^+_K$ an estimate of the form \eqref{eq:2.10.a}, where $H$ is now a compact set in $\convh \Gamma^+_K$. \end{proof} For all functions $u_{+,a_i}$ the relation \begin{equation} \label{eq:2.11} u_{+,a_i}(\zeta)=e^{i\mklm{p,a_i}}e^{-i\mklm {\zeta,a_i}}r_+(\zeta),\qquad \zeta \in \convh {\mathcal T}_{K,a_i}^+ \end{equation} holds and we obtain with \eqref{eq:2.10} the estimates \begin{align*} \modulus{u_{+,a_i}(\zeta)}&=e^{\mklm{{\mathcal I}m \zeta,a_i}}\modulus {r_+(\zeta)}\leq\\ &\leq C(1+\modulus{\zeta})^N e^{H_K^{(1,3)}({\mathcal I}m \zeta)}, \quad {\mathcal I}m \zeta \in H, \end{align*} for each compact set $H$ in $\convh \Gamma_{K,a_i}^+$ and all $a_i$, $i=1,\dots,n$. We reformulate this and summarize the above results in the following theorem. \begin{theorem} \label{thm:2.0} Let $K$ be a polyhedral region in ${\mathbb R}^4$ with vertices $a_i$, $i=1,\dots,n$, and $\varphi \in {\mathcal H}^R_{K,+}$ resp. ${\mathcal H}^R_{K,-}$ the boundary value of the functions $u_{+,a_i}$ resp. $u_{-,a_i}$ analytic in $M_{K,a_i}^+={\Gamma^{(c)}_P} \cap {\mathcal T}_{K,a_i}^+$ resp. $M_{K,a_i}^-={\Gamma^{(c)}_P} \cap {\mathcal T}_{K,a_i}^-$.
Then $u_{+,a_i}$ and $u_{-,a_i}$ can be analytically extended to $\convh {\mathcal T}_{K,a_i}^+$ resp. $\convh {\mathcal T}_{K,a_i}^-$ and satisfy for each compact set $H$ in $\convh \Gamma_{K,a_i}^+$ resp. $\convh \Gamma_{K,a_i}^-$ an estimate \begin{equation*} \label{eq:2.12} \modulus{u_{\pm,a_i}(\zeta)} \leq C(1+\modulus{\zeta})^Ne^{H_{K}^{(1,3)}({\mathcal I}m \zeta-\eta)},\qquad \zeta \in \convh {\mathcal T}_{K,a_i}^\pm, \end{equation*} if $\eta \in H$ and $H_{K}^{(1,3)}({\mathcal I}m \zeta -\eta) < \infty $. \end{theorem} Thus the functions $u_{+,a_i}$ and $u_{-,a_i}$ satisfy the sufficient conditions of Theorem \ref{thm:1.2}. If in addition $K$ is compact we have the following theorem. \begin{theorem} \label{thm:2.1} Let $K$ be a compact polyhedral region in ${\mathbb R}^4$ and $\varphi \in {\mathcal H}^R_{K,+}$ resp. ${\mathcal H}^R_{K,-}$. Then $\varphi$ is the boundary value of an entire analytic function $u(\zeta)$ which satisfies an estimate of the form \begin{displaymath} \modulus{u(\zeta)} \leq C(1+\modulus{\zeta})^N e^{H_K^{(1,3)}({\mathcal I}m \zeta)}, \qquad \zeta \in {\mathbb C}^4. \end{displaymath} \end{theorem} According to Theorem \ref{thm:1.1} the elements of ${\mathcal H}^R_{K,+}$ resp. ${\mathcal H}^R_{K,-}$ are then boundary values of Fourier--Laplace transforms of distributions with support in $K$. \end{document}
\begin{document} \author{M. J. Steel and D. F. Walls} \address{Department of Physics, University of Auckland, Private Bag 92019, Auckland,\\ New Zealand.} \title{Pumping of twin-trap Bose-Einstein condensates} \date{\today} \maketitle \begin{abstract} We consider extensions of the twin-trap Bose-Einstein condensate system of Javanainen and Yoo [Phys. Rev. Lett., {\bf 76}, 161--164 (1996)] to include pumping and output couplers. Such a system permits a continual outflow of two beams of atoms with a relative phase coherence maintained by the detection process. We study this system for two forms of thermal pumping, both with and without the influence of inter-atomic collisions. We also examine the effects of pumping on the phenomenon of collapses and revivals of the relative phase between the condensates. \end{abstract} \section{Introduction} Both before and since the recent demonstrations of Bose-Einstein condensation~\cite{and95,dav95,bra97} in dilute alkali gases, the concept of the phase of a Bose-Einstein condensate (BEC) has attracted a great deal of theoretical study. Traditionally, the existence of a phase is taken as a signature of spontaneous symmetry breaking, and strictly it is only the relative phase of two BEC's that can be assigned a definite value. Many people have discussed the difficulties associated with the fact that in many cases, we may know the number of atoms present fairly precisely. The condensate is thus in a state that cannot possess a well-defined phase. Recently, several authors~\cite{jav96,cir96,nar96,won96,won96a,jac96} have modelled systems containing two BEC's, demonstrating that a relative phase can arise naturally, even when both condensates are initially in number states. Typically, atoms are permitted to leak out from both traps and are detected by some apparatus below. As it is unknown from which trap any individual atom comes, the distribution of positions at which atoms are detected shows interference fringes.
At the same time, the quantum state of the two traps evolves from a simple product of number states into an entangled state of varying number {\em differences} between the traps, allowing a well-defined relative phase to appear between the two condensates. The value of the relative phase is randomly distributed from run to run so that for an ensemble of runs, the phase symmetry is restored. The influence of inter-atomic collisions on such a system has also been considered~\cite{won96,won96a,wri96,wri97}. The notable results are a reduction in the visibility of the observed interference pattern, and in the conditional visibility of the entangled state, due to diffusion of the condensate phase. Moreover, stopping the detection process after a relative phase has been established leads to collapses and revivals in the conditional visibility~\cite{won96a,wri96,wri97}, as the collisions cause the phases of different components of the entangled state to rotate at different rates. As the total state is a discrete sum of number difference states, the phases realign periodically and the visibility is restored. An obvious consequence of detecting atoms is that the system cannot attain a steady state. In time, the trap occupancies fall so low that the entangled state is reduced in size until eventually the atoms in one or both traps are exhausted altogether. If one envisaged using the two-trap system as a real ``device'' rather than merely an analogy for a single condensate, this might be a serious problem. A natural use would be as a kind of two-beam ``atom laser'' in which additional atoms would be tapped off from each trap through an output coupler into separate beams (see Fig.~\ref{fig:geom}), as in the case of two remarkable recent experiments at MIT~\cite{mew97,and97}.
The relative phase between the BEC's in the traps built up by the measurement process would then be reflected in the same well-defined relative phase between the two output beams, which could be exploited for some other interference experiment. While such a scheme could operate in a pulsed fashion where the traps were repeatedly filled with atoms, measured until exhausted and then the process repeated (as was the case for the experiments just mentioned), it would also be useful to have a continuous output. This would clearly require a continuous pumping of new atoms into the traps. We note, of course, that interference measurements on the output beams would themselves act to produce a relative phase between the traps. It may be, however, that the desired rate of measurements on the output beams is too low to stabilize the phase over long periods, whereas the detection rate directly on the traps may be as large as necessary. Here we assume that the rate at which subsequent measurements are made on the output beams is so low that they have a negligibly small feedback on the entangled state in the traps. (Note that the output rate may be relatively high as long as most of the leaked atoms are not actually detected in an interference experiment.) In this paper, we explore the effects of pumping a two-trap BEC system. We investigate what kind of steady states may be reached when pumping is included, and study the competing effects of the collisions and measurements in such a steady state. We also allow output couplers on each trap as discussed above. We explore two types of pumping from thermal atom baths coupled to each trap. In the first, we allow two-way pumping where atoms are exchanged with the baths in both directions. Such a scheme was considered for a single-trap atom laser based on evaporative cooling by Wiseman {\em et al.}~\cite{wis96}. In the second model, atoms may enter the traps from a thermal bath, but the reverse process is forbidden.
This kind of irreversible pumping scheme has been considered in an atom laser model by Holland {\em et al.}~\cite{hol96a}. We can then consider our whole system as a means of transferring non-condensate atoms in a thermal bath into two coherent beams, with the coherence established by the detection process. In this sense, the system might be considered a primitive two-beam atom laser. We emphasize, though, that in this paper we include no line-narrowing element, a central component of true optical lasers~\cite{sar74,hak84,wis97}. The linewidth of the output beams would be at least that of the output couplers. The paper is structured as follows. In section~\ref{sec:ingreds} we describe our model in detail, while in section~\ref{sec:state} we discuss the nature of the entangled state more fully. Using Monte-Carlo wave function simulations, we consider pumping from a thermal bath of atoms in section~\ref{sec:steady}, examining the visibility and other parameters when the pumping of the trap is either two-way or inwards-only. Finally, in section~\ref{sec:collapses}, we turn to the phenomenon of collapses and revivals of the condensate phase and present some interesting effects associated with pumping. \section{Elements of the model} \label{sec:ingreds} We now set out the model in full detail. In all the work below we assume a system of two traps containing condensates with occupation numbers $n_{1}$ and $n_{2}$ (see Fig.~\ref{fig:geom}). We occasionally write $n_{i}$ to indicate either trap. Atoms may leak from either trap and be detected below at a mean rate $\gamma n_{i}$, establishing a relative phase. (We assume that an atom in either trap has the same probability of detection.) The traps are pumped from two thermal reservoirs containing $N_{1}$ and $N_{2}$ non-condensate atoms with rate coefficients $\chi _{1}$ and $\chi _{2}$ respectively.
At different points in the paper, we assume that atoms may move either in both directions between the traps and reservoirs, or only into the traps, so that we define separate rates $\chi _{1}^{\text{in}},$ $\chi _{2}^{\text{in}}$ and $\chi _{1}^{\text{out}}$ and $\chi _{2}^{\text{out}}$. In the simplest systems, we would expect $\chi _{i}^{\text{in}}=\chi _{i}^{\text{out}},$ but the inclusion of some irreversible pumping process can prevent atoms escaping from the trap into the baths, giving $\chi _{i}^{\text{out}}=0.$ One example would be to couple the thermal bath to an excited trap level $\left| e\right\rangle $. An atom in $\left| e\right\rangle $ can decay to the BEC\ mode $\left| g\right\rangle $ by spontaneous emission. If the medium is optically thin, the emitted photon is lost and excitation out of the ground state is impossible. We see in section~\ref{sec:steady} that this one-way pumping leads to quite different behavior from the two-way pumping. The physical validity of the two types of thermal pumping has been discussed at some length in Ref.~\cite{wis96}. We also allow a separate leak from each trap into an empty mode at rates $\nu _{1}$ and $\nu _{2}$ to act as ``output'' beams. While we here leave the mechanism unspecified, we note that several techniques for creating an output coupler have been demonstrated by the MIT group~\cite{mew97}, in which rf signals or a bias in the trapping field are used to couple a portion of the condensate to untrapped spin states~\cite{bal97}.
Finally, the atoms in each trap experience collisions amongst themselves at a rate $\kappa .$ The master equation for the system may thus be written \begin{eqnarray} \frac{d\rho }{dt} &=&\frac{i}{\hbar }\left[ \rho ,H\right] +\gamma \int_{0}^{2\pi }D\left[ \psi \left( \phi \right) \right] \,d\phi \,\rho +\chi _{1}^{\text{out}}\left( N_{1}+1\right) D\left[ a_{1}\right] \rho +\chi _{1}^{\text{in}}N_{1}D\left[ a_{1}^{\dag }\right] \rho \nonumber \\ &&+\chi _{2}^{\text{out}}\left( N_{2}+1\right) D\left[ a_{2}\right] \rho +\chi _{2}^{\text{in}}N_{2}D\left[ a_{2}^{\dag }\right] \rho +\nu _{1}D\left[ a_{1}\right] \rho +\nu _{2}D\left[ a_{2}\right] \rho , \label{eq:master} \end{eqnarray} where the Hamiltonian describing collisions amongst the atoms is~\cite{wri97} \begin{equation} H=\frac{\kappa }{2}\left[ \left( a_{1}^{\dag }a_{1}\right) ^{2}+\left( a_{2}^{\dag }a_{2}\right) ^{2}\right] , \label{eq:hamil} \end{equation} and $a_{i}$ is the annihilation operator for an atom in trap $i.$ Also, for an arbitrary operator $c,$ the superoperator $D\left[ c\right] \rho $ is defined by \begin{equation} D\left[ c\right] \rho =c\rho c^{\dag }-\frac{1}{2}\left( c^{\dag }c\rho +\rho c^{\dag }c\right) . \end{equation} The field operator $\psi \left( \phi \right) =a_{1}+a_{2}e^{-i\phi },$ where $\phi =2\pi x,$ describes the detection of an atom at position $x.$ Most of our results are obtained from Monte-Carlo wave-function simulations of Eq.~(\ref{eq:master}) in which all leaks and additions of atoms to the traps are represented as quantum jumps, and the non-unitary evolution of the wave function is given by the effective Hamiltonian \begin{eqnarray} H_{\text{eff}} &=&H-i\frac{\hbar }{2}\left[ \gamma \left( a_{1}^{\dag }a_{1}+a_{2}^{\dag }a_{2}\right) +\left[ \chi _{1}^{\text{out}}\left( N_{1}+1\right) +\chi _{1}^{\text{in}}N_{1}+\nu _{1}\right] a_{1}^{\dag }a_{1}\right. \\ &&\left.
+\chi _{1}^{\text{in}}N_{1}+\left[ \chi _{2}^{\text{out}}\left( N_{2}+1\right) +\chi _{2}^{\text{in}}N_{2}+\nu _{2}\right] a_{2}^{\dag }a_{2}+\chi _{2}^{\text{in}}N_{2}\right] . \end{eqnarray} When the chosen jump is a detection, the phase $\phi $ of that detection is chosen randomly according to the conditional probability distribution \begin{equation} P\left( \phi \right) =\left\langle t\right| \psi ^{\dag }\left( \phi \right) \psi \left( \phi \right) \left| t\right\rangle , \end{equation} where $\left| t\right\rangle $ is the state of the system immediately preceding the detection. It has been shown elsewhere~\cite{jav96,won96,won96a} that this may always be written in the form \begin{equation} P\left( \phi \right) \propto 1+\beta \cos \left( \phi +\theta \right) , \label{eq:condprob} \end{equation} where the conditional visibility $\beta $ and conditional phase $\theta $ are determined by the previous history of the system. In order that the average number of atoms in the traps become constant in time for the thermal pumping schemes, we require a relation between the various coefficients in the master equation~(\ref{eq:master}).
Assuming equal detection and pumping rates for traps 1 and 2, the pumping rates must satisfy \begin{itemize} \item for two-way pumping with $\chi ^{\leftrightarrow }=\chi ^{\text{in}}=\chi ^{\text{out}}:$ \begin{equation} \chi ^{\leftrightarrow }=\frac{\gamma \left\langle n_{1}+n_{2}\right\rangle +\nu _{1}\left\langle n_{1}\right\rangle +\nu _{2}\left\langle n_{2}\right\rangle }{N_{1}+N_{2}-\left\langle n_{1}+n_{2}\right\rangle }, \end{equation} \item for one-way pumping with $\chi ^{\rightarrow }=\chi ^{\text{in}},$ and $\chi ^{\text{out}}=0:$ \begin{equation} \chi ^{\rightarrow }=\frac{\gamma \left\langle n_{1}+n_{2}\right\rangle +\nu _{1}\left\langle n_{1}\right\rangle +\nu _{2}\left\langle n_{2}\right\rangle }{N_{1}\left( \left\langle n_{1}\right\rangle +1\right) +N_{2}\left( \left\langle n_{2}\right\rangle +1\right) }. \end{equation} \end{itemize} Note that for $N_{1}\approx N_{2}\gg \left\langle n_{i}\right\rangle ,$ the two-way pumping rate $\chi ^{\leftrightarrow }$ is larger than the one-way rate $\chi ^{\rightarrow }$ by a factor of approximately $\left\langle n_{1}+n_{2}\right\rangle /2.$ In the one-way case, on average one atom is added for each atom detected or lost to an output coupler, while in the two-way case, on average {\em all} the trapped atoms are exchanged with the reservoirs for every loss by detection or output coupling. This difference has important consequences below. \section{Quantum state of the traps} \label{sec:state} Our main interest in this paper is to find the equilibrium state of the model just described, under a variety of conditions, and explore the different influences of detection, pumping and collisions. For example, we naturally expect increasing collisions to reduce the phase coherence and drive the state towards a narrower number distribution. In preparation, however, we should first discuss the nature of the entangled state and our methods for characterizing it more fully.
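As a numerical aside, the two balance relations above are easy to evaluate; the sketch below (our own illustration, not part of the original analysis, with all function and variable names invented) confirms that for $N_1\approx N_2\gg \langle n_i\rangle$ the two-way rate exceeds the one-way rate by roughly $\langle n_1+n_2\rangle /2$.

```python
# Sketch: check of the pumping-rate balance relations (illustrative only).
# gamma: detection rate; nu1, nu2: output-coupler rates; N1, N2: reservoir
# occupations; n1, n2: mean trap occupations. All names are ours.

def chi_two_way(gamma, nu1, nu2, n1, n2, N1, N2):
    """Two-way pumping rate (chi_in = chi_out) balancing all losses."""
    loss = gamma * (n1 + n2) + nu1 * n1 + nu2 * n2
    return loss / (N1 + N2 - (n1 + n2))

def chi_one_way(gamma, nu1, nu2, n1, n2, N1, N2):
    """One-way pumping rate (chi_out = 0) balancing the same losses."""
    loss = gamma * (n1 + n2) + nu1 * n1 + nu2 * n2
    return loss / (N1 * (n1 + 1) + N2 * (n2 + 1))

gamma, nu = 1.0, 0.0
n, N = 100.0, 10000.0        # N1 = N2 = N >> n1 = n2 = n
c2 = chi_two_way(gamma, nu, nu, n, n, N, N)
c1 = chi_one_way(gamma, nu, nu, n, n, N, N)
print(c2 / c1)               # close to (n1 + n2)/2 = 100
```

For these parameters the ratio comes out near $102$, in line with the $\langle n_1+n_2\rangle /2$ estimate.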
In earlier studies that consider only detection of the atoms~\cite {jav96,cir96,nar96,won96,won96a}, the initial state is normally taken as the product state $\left| \;\right\rangle _{0}=\left| n_{1},n_{2}\right\rangle $ with $n_{1}$ atoms in trap 1 and $n_{2}$ atoms in trap 2. The unnormalized state following a single detection with phase $\phi $ is found by applying the field operator $\psi \left( \phi _{1}\right) $: \begin{equation} \quad \left| \;\right\rangle _{1}\propto \left( a_{1}+a_{2}e^{-i\phi _{1}}\right) \left| n_{1},n_{2}\right\rangle =\sqrt{n_{1}}\left| n_{1}-1,n_{2}\right\rangle +e^{-i\phi _{1}}\sqrt{n_{2}}\left| n_{1},n_{2}-1\right\rangle . \end{equation} By extension, after $m$ detections the state has the form \begin{equation} \left| \;\right\rangle _{m}=\sum_{k=0}^{m}c_{k}\left( m\right) \left| n_{1}-m+k,n_{2}-k\right\rangle , \end{equation} where the $c_{k}$ are functions of the phases of all the detected atoms $ \left\{ \phi _{1},\ldots ,\phi _{m}\right\} .$ If collisions are included, the state experiences unitary evolution under the Hamiltonian~(\ref{eq:hamil} ) in between detections and the coefficients $c_{k}\left( m\right) $ are also functions of time. Here, with the inclusion of pumping the situation is similar, but as one can continue to detect atoms indefinitely, the entangled state can become very large\ (note of course that the number of detections $ m $ can now arbitrarily exceed the initial occupancy of the traps). It becomes more natural to drop the dependence on $m$ and write the state at time $t$ as \begin{equation} \left| t\right\rangle \approx \sum_{k=-p}^{p}c_{k}\left( t\right) \left| n_{1}(t)-k,n_{2}(t)+k\right\rangle . \label{eq:pumpstate} \end{equation} This is an approximate relation because we truncate the sum at some cut-off $ p.$ This is particularly important numerically as the exact state can become prohibitively large for calculations. 
Such a truncation is possible because the probability of all detections occurring from a single trap is small (assuming the initial trap occupancies are not wildly different) and hence the coefficients at the extreme ends of the entangled state are negligible. In our simulations, we drop terms for which $\left| c_{k}\right| <10^{-12}.$ We characterise the state in terms of the mean number of atoms in each trap, and the width of the number difference distribution with the natural definition \begin{equation} \sigma _{n}=\left( \left\langle \left( n_{1}-n_{2}\right) ^{2}\right\rangle -\left\langle n_{1}-n_{2}\right\rangle ^{2}\right) ^{1/2}. \end{equation} Frequently we also wish to describe the phase distribution for which we use the width~\cite{ban69} \begin{eqnarray} \sigma _{\phi } &=&\left( 1-\left| \left\langle \exp \left( i\phi \right) \right\rangle \right| ^{2}\right) ^{1/2} \nonumber \\ &=&\left( 1-\left| \sum_{k}c_{k}^{*}c_{k+1}\right| ^{2}\right) ^{1/2}. \end{eqnarray} For a minimum uncertainty state, we have $\sigma _{n}\sigma _{\phi }=1.$ Aside from the evolution of the conditional visibility, our main interest below is in the behavior of these two measures of the state. \section{Steady states} \label{sec:steady}We now turn to finding the steady states of our system. With thermal pumping schemes however, a genuine steady state is achieved only for an average over many trajectories. For a given set of parameters, each trajectory differs not only in the actual relative phase established between the two traps, but more importantly, in the instantaneous atom numbers as a function of time. As the trajectory simulation proceeds, the occupancy of each trap exhibits thermal fluctuations which lead to time variations in other properties of the system such as the conditional visibility. Only the time-averaged properties approach a true steady state. 
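Both width diagnostics can be computed directly from the coefficients $c_k$ of a state of the form~(\ref{eq:pumpstate}). The sketch below is our own illustration (the indexing convention and all names are invented); it uses the projected two-mode coherent state with equal amplitudes, whose binomial weights give a product of widths close to unity, as quoted above for a minimum uncertainty state.

```python
import numpy as np
from math import comb

# Sketch: number-difference width sigma_n and phase width sigma_phi for a
# state  |t> = sum_k c[k] |n1 - k, n2 + k>  (our indexing convention).

def widths(c, n1, n2, offsets):
    c = np.asarray(c, dtype=complex)
    c = c / np.linalg.norm(c)                  # normalise the state
    p = np.abs(c) ** 2
    d = (n1 - n2) - 2.0 * np.asarray(offsets)  # number difference per term
    sigma_n = np.sqrt(np.sum(p * d ** 2) - np.sum(p * d) ** 2)
    # |<exp(i phi)>| reduces to |sum_k c_k^* c_{k+1}| for such a state
    sigma_phi = np.sqrt(1.0 - np.abs(np.sum(np.conj(c[:-1]) * c[1:])) ** 2)
    return sigma_n, sigma_phi

# Projected two-mode coherent state with equal amplitudes: binomial weights.
N, n1, n2 = 40, 20, 20
offsets = np.arange(-n2, n1 + 1)               # keeps both occupations >= 0
c = np.array([np.sqrt(comb(N, n1 - k)) for k in offsets], dtype=float)
sn, sp = widths(c, n1, n2, offsets)
print(sn, sp, sn * sp)                         # product close to 1
```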
As discussed in section~\ref{sec:ingreds}, we treat two cases: two-way pumping in which atoms may be exchanged between the bath and trap in both directions, and one-way pumping in which atoms can only move from the bath into the trap. We begin with representative plots of the visibility as a function of time for a single trajectory with no collisions $\left( \kappa =0\right) .$ The visibility for two-way pumping is shown in Fig.~\ref{fig:twoway}a. There are initially $n=100$ atoms in each trap and the pumping rate is chosen to balance the detection rate. On average, $2n$ atoms are detected in a time $\gamma t=1.$ The visibility is extremely noisy with frequent fluctuations of order 1. Our simulations show that the occupancies of the traps also display large fluctuations, as would be expected for coupling to thermal baths. In particular, a zero in the visibility is always associated with a zero in one or other of the atom numbers. The visibility for a typical trajectory with one-way pumping is shown in Fig.~\ref{fig:twoway}b. In this case, there are again large fluctuations but on a much longer time scale. This difference has a simple origin mentioned earlier: for the one-way case, on average one atom is added to the system for each atom detected and so all the atoms are replaced once in time $\gamma t=1$. In the two-way case, $n$ atoms are exchanged with the baths for each atom detected, and so in $\gamma t=1$, all the atoms are replaced $n$ times over and we expect a correspondingly shorter time scale for the fluctuations. As a comparison, in Fig.~\ref{fig:twoway}c we show a trajectory for a ``regular'' pumping model in which atoms are dripped into the trap at a constant rate to replace those lost by detection. In this case, the collisional rate is $\kappa =0.5\gamma ,$ but the visibility shows a much improved response compared with the (collisionless) thermal pumping cases, indicating the severe influence of the thermal pumping.
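In such trajectory simulations, each detection jump draws its phase from the conditional distribution~(\ref{eq:condprob}). A minimal sketch of that sampling step (rejection sampling; the code and all names are our own illustration, not the authors' implementation):

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_detection_phase(beta, theta, rng):
    """Draw phi from P(phi) proportional to 1 + beta*cos(phi + theta),
    using rejection sampling against a flat envelope of height 1 + beta."""
    while True:
        phi = rng.uniform(0.0, 2.0 * np.pi)
        if rng.uniform(0.0, 1.0 + beta) < 1.0 + beta * np.cos(phi + theta):
            return phi

# With beta = 1, theta = 0 the detections bunch around phi = 0 (mod 2*pi);
# the modulus of <exp(i*phi)> under P(phi) is beta/2.
phis = np.array([sample_detection_phase(1.0, 0.0, rng) for _ in range(20000)])
contrast = np.abs(np.mean(np.exp(1j * phis)))
print(contrast)   # near 0.5
```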
In fact, the visibility is degraded by the number fluctuations in two distinct ways. When the occupancy of one of the traps falls due to a fluctuation to within a few times $\sigma _{n}$ of zero, the extreme terms in the entangled state are removed, the number distribution narrows and the visibility falls. In particular, if one of the traps is completely emptied (as occurs several times in Figs.~\ref{fig:twoway}a and b), the state is then a pure number difference state and any relative phase is completely destroyed. The visibility can of course be restored once fluctuations increase the atom number again, but there is no relation between the new relative phase and the phase before the trap was emptied. Even when both traps have $\left\langle n\right\rangle \gg \sigma _{n}$, the visibility is reduced according to the number difference between the traps. This is obvious---if one trap has significantly more atoms than the other, then we can predict with better than 50\% accuracy from which trap the next atom will be detected, and the visibility must fall accordingly. We can calculate this effect simply as follows. In our system, the atom numbers experience thermal fluctuations in {\em time} due to the pumping. Suppose for a moment we have a different situation in which there is no pumping, and we perform a series of detection runs with a thermal distribution in the {\em initial} trap numbers and measure the visibility after a well-defined relative phase has been set up (but before the traps are significantly depleted). If we picture the condensates as coherent states with some relative phase: \begin{equation} \left| \;\right\rangle _{1}=\sqrt{n_{1}}\exp \left( i\phi _{1}\right) ,\qquad \left| \;\right\rangle _{2}=\sqrt{n_{2}}\exp \left( i\phi _{2}\right) , \end{equation} the expected visibility is just $\beta =2\sqrt{n_{1}n_{2}}/\left( n_{1}+n_{2}\right) $, which is a familiar expression for optical fields.
Defining the relative occupancy \begin{equation} f=\frac{n_{1}-n_{2}}{n_{1}+n_{2}}, \label{eq:fdefn} \end{equation} we have \begin{equation} \beta \left( f\right) =\sqrt{1-f^{2}}. \label{eq:betafrel} \end{equation} More correctly, we should derive Eq.~(\ref{eq:betafrel}) directly from the entangled state description of the twin-trap system. In general, the fringe visibility is given by \begin{equation} \beta =\left| g^{\left( 1\right) }\right| \sqrt{1-f^{2}}, \end{equation} where $\left| g^{\left( 1\right) }\right| $ is the normalized correlation function~\cite{wal94}. While in general $\left| g^{\left( 1\right) }\right| $ is not easily evaluated for the entangled states with which we are concerned, it can be shown, for example, that for the projected two-mode coherent state~\cite{mol96} $\left| \alpha ,\beta \right\rangle =\sum_{k=0}^{N}\alpha ^{k}\beta ^{N-k}/\sqrt{k!\left( N-k\right) !}\left| k,N-k\right\rangle ,$ which is the most natural expression of a state with a relative phase and fixed total atom number $N,$ $\left| g^{\left( 1\right) }\right| $ tends to unity in the limit of large $N$. Figures~\ref{fig:scatter}a and~b test Eq.~(\ref{eq:betafrel}) in the form of scatter plots of the points $\left( f\left( t\right) ,\beta \left( t\right) \right) $ for two-way and one-way pumping respectively, with the same parameters as Figs.~\ref{fig:twoway}a and b. The prediction~(\ref{eq:betafrel}) is indicated by the black squares in each plot. The correlation is clearly much stronger in the one-way case. This difference is entirely due to the difference in time scales discussed above. If we are to have a well-defined phase, the number must be partially uncertain. Indeed, we see later that in the presence of collisions the average state has moderate number-squeezing but with a variance of the same order as a coherent state. So for a reasonable visibility we should require an entangled state of order $\sigma =O\left( 2\sqrt{n}\right) $ terms.
Such an entangled state is set up by the same number of detections and requires a time of order $\tau _{\text{e}}=1/ \sqrt{n}\gamma .$ Now for two-way pumping, the time scale for replacement of all the atoms once over is $\tau _{\text{r}}=1/n\gamma \ll \tau _{\text{e}}.$ Hence, the exchange of atoms with the baths occurs faster than an entangled state of a particular phase can be constructed and we may expect a reduced visibility. There can be only a weak correlation between the instantaneous visibility and the instantaneous relative occupancy $f$ , and the visibility is generally lower than the optimum given by Eq.~(\ref{eq:betafrel}). With one-way pumping however, the time scale for replacement of all the atoms is larger by a factor $n.$ The visibility is able to keep up with the drift in number and is then limited only by~Eq.~(\ref{eq:betafrel}). We can also calculate the mean visibility over time $\overline{\beta },$ for an arbitrary pair of mean atom numbers $\left\langle n_{1}\right\rangle $ and $\left\langle n_{2}\right\rangle .$ Again, we think of an ensemble of runs with no pumping and a thermal distribution of initial states $\left| n_{1}\right\rangle $ and $\left| n_{2}\right\rangle .$ The mean visibility over many runs is the weighted average of $\beta \left( f\right) $ over the probability distribution [see Eq.~(\ref{eq:fdefn})] \begin{equation} P_{f}\left( f\right) =\int \delta \left( f-\frac{n_{1}-n_{2}}{n_{1}+n_{2}} \right) P_{\bar{n}_{1}}\left( n_{1}\right) P_{\bar{n}_{2}}\left( n_{2}\right) \;dn_{1}dn_{2}, \end{equation} where $P_{\bar{n}_{i}}\left( n_{i}\right) =-\log \left( \gamma _{i}\right) \gamma _{i}^{n_{i}}$ are the probability distributions (in the continuous limit) of atom number for thermal distributions with mean number $\bar{n} _{i} $ and $\gamma _{i}=\bar{n}_{i}/\left( \bar{n}_{i}+1\right) .$ For the case where the mean numbers are the same, $P\left( f\right) $ is uniform and $\bar{\beta}=\pi /4~$\cite{gra97}$.$ Otherwise we find 
\begin{equation} \bar{\beta}=\frac{2\pi \log \left( \gamma _{1}\right) \log \left( \gamma _{2}\right) }{\left[ \log \left( \gamma _{1}/\gamma _{2}\right) \right] ^{2}} \left( \frac{1}{\sqrt{1-\left( \frac{\log \left( \gamma _{1}/\gamma _{2}\right) }{\log \left( \gamma _{1}\gamma _{2}\right) }\right) ^{2}}} -1\right) , \end{equation} which for $\bar{n}_{1},$ $\bar{n}_{2}\gg 1$ gives \begin{equation} \bar{\beta}\simeq \frac{\pi \sqrt{p}}{\left( 1+\sqrt{p}\right) ^{2}}, \label{eq:betamean} \end{equation} with $p=\bar{n}_{1}/\bar{n}_{2}.$ In the pumped twin-trap setting, we also have thermal distributions in the atom number which occur not from run to run, but over time in a single trajectory, so it is reasonable to hope that the above analysis may still apply. In Fig.~\ref{fig:pdepend}, we show the average visibility for a thermal distribution as a function of the mean atom number ratio $p$ given by Eq.~(\ref{eq:betamean}). The plotted points show the time-averaged visibility calculated from trajectory simulations with one-way pumping and $\bar{n}_{1}+\bar{n}_{2}=200.$ Error bars are shown at 1 standard deviation. As expected, the mean visibility falls with increasing disparity in the mean atom number. \subsection{One-way pumping and output couplers} We have seen that the one-way pumping process shows significantly higher visibility than the two-way pumping. From this point on, we restrict our attention to the one-way model and add the effect of an output coupler from each trap. In our results we find two distinct regimes according to the length of the simulations. Figure~\ref{fig:longbeta} shows the visibility as a function of time averaged over 200 trajectories for a simulation with $\kappa =0,$ $\nu _{i}=0$ and an initial state $\left| n_{1},n_{2}\right\rangle =\left| 100,100\right\rangle .$ The one-way pumping rate was chosen to maintain the mean population at $n=100$ in each trap.
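The closed-form visibility relations above are easy to check numerically. The sketch below (our own illustration; the function names are invented) confirms that Eq.~(\ref{eq:betafrel}) reproduces $2\sqrt{n_1 n_2}/(n_1+n_2)$ and that Eq.~(\ref{eq:betamean}) reduces to $\pi /4$ at $p=1$:

```python
import math

def beta_of_f(f):
    """Instantaneous visibility for relative occupancy f = (n1-n2)/(n1+n2)."""
    return math.sqrt(1.0 - f * f)

def mean_beta(p):
    """Large-occupancy mean visibility for mean-number ratio p = nbar1/nbar2."""
    return math.pi * math.sqrt(p) / (1.0 + math.sqrt(p)) ** 2

n1, n2 = 150.0, 50.0
f = (n1 - n2) / (n1 + n2)
print(beta_of_f(f))                      # equals 2*sqrt(n1*n2)/(n1+n2)
print(mean_beta(1.0))                    # pi/4 for equal mean occupations
print(mean_beta(3.0) < mean_beta(1.0))   # visibility falls with disparity
```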
There are clearly two regimes: for $\gamma t\precapprox n,$ the mean visibility shows a steady decline, while for $\gamma t\succapprox n,$ the visibility tends to a steady-state value of $\pi /4$ consistent with the calculation of the previous section. In the initial stage of a particular run, the populations of the traps become decorrelated due to the thermal nature of the pumping until they are completely uncorrelated and the time-averaged visibility is $\pi /4.$ The time for this decorrelation varies from run to run, having a characteristic length of $\gamma t\approx n.$ Thus the average over many trajectories shows a gradual decline until all members of the ensemble are likely to be decorrelated. We are thus led to examine the behavior of the system in the two regimes $\gamma t\ll n,$ when the trap populations are likely to be quite close, and $\gamma t\gg n,$ when there is no correlation between the populations. We treat these two cases in turn. In all cases, we start our simulations with the initial state $\left| n_{1},n_{2}\right\rangle =\left| 100,100\right\rangle $ and calculate quantities averaged over 200 trajectories. \begin{itemize} \item $\gamma t\ll n:$ For the short-time regime, we arbitrarily choose $ \gamma t=4$ to show results. Figure~\ref{fig:paramshort}a shows the visibility as a function of $\kappa $ again averaged over five trajectories. As expected, the visibility decreases with increasing collisions which increasingly disrupt the relative phase~\cite{won96}. We have also performed simulations with a range of output couplings from $\nu _{i}=0$ to $\nu _{i}=$ $\gamma $. This leads to a small decrease in $\beta $ (of less than 0.025 for the strongest coupling). This effect is simply a result of the fact that the pumping rate is increased to balance the additional loss of atoms and so the trap populations decorrelate faster. The nature of the average state of the system is indicated in Fig.~\ref{fig:paramshort}b. 
Shown are the widths of the number distribution $\sigma _{n}$ (dotted line) and phase distribution $\sigma _{\phi }$ (dot-dashed) and $\rho $: the root mean square of the product of the two (solid). The filled circles denote the actual simulations performed. The number distribution clearly narrows strongly with increasing collisional rate, while the phase distribution spreads as collisions degrade the relative phase. The simulations with non-zero output coupling (not shown in Fig.~\ref{fig:paramshort}b) produced a reduction of less than 5 \% in the number width and no discernible change in the other parameters. Note that for zero collisions, the product of the widths (solid) is unity, indicating a minimum uncertainty state. Further, for all values of $\kappa ,$ the number width $\sigma _{n}<\sqrt{\bar{n}}=10,$ which is the width we would expect if the state were a projection of a coherent state onto a basis of fixed total atom number. The real state is thus quite strongly number squeezed. This is consistent with recent analytic work by Dunningham {\em et al.}~\cite{dun97,dun97a} using a Bose-broken symmetry model. They show that in the limit of a large collisional rate, the true state of the condensate is the amplitude-squeezed state that minimizes number fluctuations. \item $\gamma t\gg n:$ In the large time regime, the mean visibility has no dependence on the output coupling rate---once the atom numbers are completely uncorrelated, the precise rate at which atoms enter the trap is irrelevant. The number and phase widths shown in Figure~\ref{fig:paramlong} show very similar trends to the short-time case. Note that even with the uncorrelated trap numbers, the state is still minimum uncertainty for $\kappa =0,$ indicating that the pumping rate is low enough for the visibility to adjust to changes in number. Again, other simulations showed that the only effect of output coupling was to reduce the number width by a few percent.
\end{itemize} \section{Application to collapses and revivals} \label{sec:collapses} In this final section, we consider the application of pumping processes to the interesting phenomenon of collapses and revivals in the relative phase. Several authors have shown that if a relative phase is prepared by detection and the entangled state subsequently evolves purely under the influence of the interatomic collisions, the visibility of the phase experiences recurrent collapses and revivals of period $\pi /\kappa $ due to the differential rate of phase rotation in the entangled state~\cite{lew96,won96a,wri96,wri97}. A demonstration of collapses and revivals of the phase, perhaps through light scattering experiments, would be a significant result in BEC\ physics. It is interesting to consider how the collapses and revivals are affected by pumping and leaking of atoms through the traps. Naively, we might expect that the oscillations would be destroyed by the time all the atoms had been replaced a few times over. In fact, we have found the collapses and revivals to be remarkably robust to pumping processes. In Fig.~\ref{fig:collapse}a, we show the visibility for a single trajectory {\em without} pumping or output coupling in which there are initially 200 detections from a total of 1000 atoms, followed by a period in which the system evolves only under the influence of collisions with $\kappa =0.25$. The oscillations in the visibility are clear. Fig.~\ref{fig:collapse}b shows a trajectory with the same number of detections and collision strength, but with a continual flushing of the trapped atoms by pumping and output coupling. On average, all the atoms are replaced in a period $\gamma t=1$ and the atom numbers exhibit large fluctuations (Fig.~\ref{fig:collapse}c).
Despite this, the collapses and revivals persist for a considerable period and only disappear when the atom number in trap 2 (thick line in Fig.~\ref{fig:collapse}c) approaches zero at $\gamma t\approx 250.$ If the trajectory is such that neither atom number approaches zero, the oscillations may continue much longer still. As the simulation progresses however, while the period of the revivals is unchanged, the peaks broaden---the collapse time increases. This is associated with the gradual reduction in the width of the number difference of the initial entangled state due to the repeated addition and removal of atoms indicated in Fig.~\ref{fig:collapse}d. Essentially, every addition or removal of an atom through an output coupler tends to drive the state to a narrower number distribution. In their treatment of collapses and revivals for a {\em fixed} number of atoms, Wong {\em et al.}~\cite{won96a} have shown that the visibility during a collapse should decay according to \begin{equation} V\propto \exp \left( -2\sigma _{A}^{2}\kappa ^{2}t^{2}\right) , \label{eq:decay} \end{equation} where $\sigma _{A}$ is the width of a Gaussian approximation to the coefficients \begin{equation} {\cal A}\left( k\right) =\left| c_{k}c_{k-1}\right| \sqrt{\left( n-k+1\right) \left( n-m+k\right) }, \end{equation} and there have been $m$ detections from an initial state of $n$ atoms in each trap. For a broad distribution, and $m\ll n,$ to lowest order we have ${\cal A}\left( k\right) \propto \left| c_{k}^{2}\right| ,$ so that in our notation $\sigma _{A}\approx \sigma _{n}/2.$ The black squares in Fig.~\ref{fig:collapse}d are estimates of $\sigma _{n}$ calculated from the collapse widths in Fig.~\ref{fig:collapse}b using Eq.~(\ref{eq:decay}).
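The collapse law of Eq.~(\ref{eq:decay}) and the $\pi/\kappa$ revival period can be illustrated with a toy dephasing sum (a sketch, not the trajectory simulation used in this paper; we simply take Gaussian weights of width $\sigma_A$ over number states, each accumulating collisional phase at a rate proportional to $k$):

```python
import cmath
import math

def visibility_envelope(sigma_a, kappa, t, kmax=200):
    """|sum_k w_k exp(2i*kappa*k*t)| / sum_k w_k with Gaussian weights
    w_k of width sigma_a: the terms dephase (collapse) and rephase
    (revive) as the relative phases wind around the unit circle."""
    ks = range(-kmax, kmax + 1)
    weights = [math.exp(-k * k / (2.0 * sigma_a ** 2)) for k in ks]
    phases = [cmath.exp(2j * kappa * k * t) for k in ks]
    return abs(sum(w * p for w, p in zip(weights, phases))) / sum(weights)
```

The sum revives exactly at $t = \pi/\kappa$ and decays as $\exp(-2\sigma_A^2\kappa^2 t^2)$ for small $t$, matching Eq.~(\ref{eq:decay}).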
The agreement with the directly measured values for the width of the number distribution (solid line) confirms that the increase in the collapse time is due purely to the change in $\sigma _{n}$. Figure~\ref{fig:collapse}b also shows a variation in the height of the visibility peaks. Note that the variation is not monotonic, an effect we have found to be generally true. A natural guess is that the peak heights are associated with the relative occupancy $f=\left( n_{1}-n_{2}\right) /\left( n_{1}+n_{2}\right) ,$ which we found led to a maximum visibility for systems where the detections are {\em not} stopped in section~\ref{sec:steady}. We have tested this using scatter graphs of the peak visibility similar to those in Fig.~\ref{fig:scatter}. We find a moderate confirmation of the connection. In cases for which the {\em minima} of the visibility remain small, there is a strong correlation between the peak visibility and the quantity $f$. In other cases, such as that in Fig.~\ref{fig:collapse}b for $\gamma t>150,$ for which the minima are significantly greater than zero, the correlation is poor and we conclude that the pumping process has produced an additional degradation of the state beyond that implied simply by the mean number difference. \section{Conclusion} In this paper, we have studied the steady-state behavior for two pumping scenarios to show how an ongoing measurement process can generate a phase coherence between atoms derived from thermal baths, even in the presence of phase diffusion due to atomic collisions within the traps. We find important qualitative differences between systems with two-way and one-way pumping, the phase coherence being substantially improved for the one-way case. Systems displaying collapses and revivals of the condensate phase should provide an opportunity for examining the time-dependent effects induced by pumping.
We remark finally that a natural extension to our model would be the inclusion of extra trap levels that would allow for line narrowing and genuine laser action. \acknowledgments The authors thank Tony Wong and Matthew Collett for useful discussions. We acknowledge support of the Marsden fund of the Royal Society of New Zealand, the University of Auckland Research Committee and the New Zealand Lotteries' Grants Board. \begin{references} \bibitem{and95} M.~H. Anderson {\it et~al.}, Science {\bf 269}, 198 (1995). \bibitem{dav95} K.~B. Davis {\it et~al.}, Phys. Rev. Lett. {\bf 75}, 3969 (1995). \bibitem{bra97} C.~C. Bradley, C.~A. Sackett, and R.~G. Hulet, Phys. Rev. Lett. {\bf 78}, 985 (1997). \bibitem{jav96} J. Javanainen and S.~M. Yoo, Phys. Rev. Lett. {\bf 76}, 161 (1996). \bibitem{cir96} J.~I. Cirac, C.~W. Gardiner, M. Naraschewski, and P. Zoller, Phys. Rev. A {\bf \ 54}, R3714 (1996). \bibitem{nar96} M. Naraschewski {\it et~al.}, Phys. Rev. A {\bf 54}, 2185 (1996). \bibitem{won96} T. Wong, M.~J. Collett, and D.~F. Walls, Phys. Rev. A {\bf 54}, R3718 (1996). \bibitem{won96a} T. Wong {\it et~al.}, submitted to Phys. Rev. Lett. (1996). \bibitem{jac96} M.~W. Jack, M.~J. Collett, and D.~F. Walls, Phys. Rev. A {\bf 54}, R4625 (1996). \bibitem{wri96} E.~M. Wright, D.~F. Walls, and J.~C. Garrison, Phys. Rev. Lett. (1996). \bibitem{wri97} E.~M. Wright {\it et~al.}, submitted to Phys. Rev. A (1996). \bibitem{mew97} M.-O. Mewes {\it et~al.}, Phys. Rev. Lett. {\bf 78}, 582 (1997). \bibitem{and97} M.~R. Andrews {\it et~al.}, Science {\bf 275}, 637 (1997). \bibitem{wis96} H. Wiseman, A. Martins, and D. Walls, Quantum and Semiclass. Opt. {\bf 8}, 737 (1996). \bibitem{hol96a} M. Holland {\it et~al.}, Phys. Rev. A {\bf 54}, R1757 (1996). \bibitem{sar74} M. {Sargent III}, M.~O. Scully, and W.~E. {Lamb, Jr.}, {\em Laser Physics} (Addison-Wesley, Reading, Mass., 1974). \bibitem{hak84} H. Haken, {\em Laser Theory} (Springer-Verlag, Berlin, 1984). \bibitem{wis97} H. 
Wiseman, submitted to Phys. Rev. A (1997). \bibitem{bal97} R.~J. Ballagh, K. Burnett, and T.~F. Scott, to be published (1996). \bibitem{ban69} A. Bandilla and H. Paul, Ann. Phys. (Leipzig) {\bf 23}, 323 (1969). \bibitem{wal94} D.~F. Walls and G.~J. Milburn, {\em Quantum Optics} (Springer-Verlag, Berlin, 1994). \bibitem{mol96} K. Molmer, {\em Optical coherence, a convenient myth?}, Phys. Rev. A, to be published (1997). \bibitem{gra97} R. Graham {\em et~al.}, {\em Phase preparation by atom counting of Bose-Einstein condensates in mixed states}, in preparation (1997). \bibitem{dun97} J.~A. Dunningham, Master's thesis, Department of Physics, University of Auckland, 1997. \bibitem{dun97a} J.~A. Dunningham, M.~J. Collett, and D.~F. Walls, in preparation (1997). \bibitem{lew96} M. Lewenstein and L. You, Phys. Rev. Lett. {\bf 77}, 3489 (1996). \end{references} \begin{figure} \caption{Geometry of pumped twin-trap system. The straight solid arrows indicate the detection at rate $\gamma$; the dotted arrows, the exchange of atoms with the reservoirs; the curved arrows, output coupling of the trapped atoms.} \label{fig:geom} \end{figure} \begin{figure} \caption{a)-b) Visibility $\beta$ as a function of time for a thermally pumped system with mean occupancy $n=100$ for each trap and $\kappa=0$. a) Two-way pumping, b) one-way pumping. c) Visibility for a regularly pumped model with $\kappa=1.0$. } \label{fig:twoway} \end{figure} \begin{figure} \caption{Scatter plot of $\beta$ as a function of relative occupancy $f$ for a) two-way, and b) one-way pumping. There are 10000 points shown. Black squares indicate the relation Eq.~(\ref{eq:betafrel}).} \label{fig:scatter} \end{figure} \begin{figure} \caption{Mean visibility $\bar{\beta}$ as a function of the mean atom number ratio $p$, Eq.~(\ref{eq:betamean}), with time-averaged visibilities from one-way pumping simulations.} \label{fig:pdepend} \end{figure} \begin{figure} \caption{Visibility for one-way thermal pumping with no collisions and no output coupling.
The mean atom number in each trap is 100.} \label{fig:longbeta} \end{figure} \begin{figure} \caption{Averaged state parameters as a function of collision rate for one-way pumping in short-time regime. a) Visibility and b) $\sigma_n $ (dashed), $\sigma_\phi$ (dot-dash), and $\rho$ (solid).} \label{fig:paramshort} \end{figure} \begin{figure} \caption{Averaged state parameters as a function of collision rate for one-way pumping in long-time regime: $\sigma_n $ (dashed), $\sigma_\phi$ (dot-dash), and $\rho$ (solid).} \label{fig:paramlong} \end{figure} \begin{figure} \caption{a) Visibility for collapses and revivals of relative phase with no pumping or output. Initially 200 detections were made from 1000 atoms. b) Visibility, c) atom numbers and d) $\sigma_n$ for the same parameters with pumping and output rates such that all atoms are replaced on average once in a period $\gamma t=1$.} \label{fig:collapse} \end{figure} \end{document}
\begin{document} \title[On Kummer 3-folds] {On Kummer 3-folds} \author{ Maria Donten } \address{Instytut Matematyki UW, Banacha 2, PL-02097 Warszawa} \email{[email protected]} \begin{abstract} We investigate a generalization of the Kummer construction, as introduced in \cite{AW}. The aim of this work is to classify 3-dimensional Kummer varieties by computing their Poincar\'e polynomials. \end{abstract} \maketitle \section*{Introduction} A recent paper by Andreatta and Wi\'sniewski \cite{AW} provides a description of a generalization of the Kummer construction, which is a method of producing a variety by resolving singularities in a quotient of a product of abelian varieties by a finite integral matrix group action. Some restrictive assumptions, both on the group action and the resolution, assure that a variety $X$ obtained as a result of the Kummer construction is projective, has $K_X$ linearly trivial and $H^1(X, \mathbb{Z}) = 0$. In this paper the Kummer construction is applied to the product of three elliptic curves $A^3$. We look at the action of a finite group $G < SL(3,\mathbb{Z})$ on $A^3$, which comes from the natural action on $\mathbb{Z}^3$ via the identification $A^3 = \mathbb{Z}^3 \otimes_{\mathbb{Z}}A$. The quotient $A^3/G$ is singular. By resolving its singularities we obtain a Kummer 3-fold. An important observation is that 3-dimensional Kummer varieties are Calabi--Yau. The aim of this work is to compute Poincar\'e polynomials of such Kummer 3-folds, i.e. polynomials $P_X(t) = \sum_{i=0}^{2n} b_i(X)t^i$ in a formal variable $t$, with $b_i(X)$ being the $i$-th Betti number of a variety $X$. By $H^i(X, \mathbb{C})$ or $H^i(X)$ we denote the complexified De Rham cohomology of $X$. Section \ref{finite_subgroups_SL3Z} discusses the classification of finite subgroups of $SL(3, \mathbb{Z})$. We describe the process of determining all groups to which we apply the construction. They are listed in \ref{groups_table}.
In section \ref{construction_details} the details of the Kummer construction in the 3-dimensional case are explained. We concentrate on understanding the structure of the singularities of the quotient $A^3/G$ and their resolution. Thus we explain the method of computing Poincar\'e polynomials of Kummer $3$-folds. The remaining sections are devoted to the presentation of the results of our computations. All interesting points are clearly visible in the examples \ref{case_S4_2} and \ref{case_S4_3}, which are therefore discussed in detail. In the remaining cases (in section 6) only the result and the most important data about the group action are given. The results of the computations are summarized in \ref{summary}, theorem \ref{last_theorem}. The following results constitute the main part of my M.Sc. thesis completed at the University of Warsaw. I am greatly indebted to Jaros\l{}aw Wi\'sniewski, my thesis advisor, and I wish to thank him for all his help during the preparation of this paper. \section{Finite subgroups of $SL(3, \mathbb{Z})$}\label{finite_subgroups_SL3Z} The first step is determining the groups to which we apply the construction, i.e. finite subgroups of $SL(3, \mathbb{Z})$. We say that $G, H < SL(n, \mathbb{Z})$ are $\mathbb{Z}$-equivalent if they are conjugate in $GL(n, \mathbb{Z})$. \begin{rem}\label{conjugacy_classes} The Kummer construction produces isomorphic varieties for groups which are $\mathbb{Z}$-equivalent. \end{rem} This means that we only need to find representatives of $\mathbb{Z}$-equivalence classes, called $\mathbb{Z}$-classes for short. We restrict our attention to non-cyclic subgroups, because by lemma 3.3 in \cite{AW} cyclic groups fail to satisfy the assumptions of the construction. We were not able to find a reference for the classification of finite subgroups of~$SL(3, \mathbb{Z})$ up to $\mathbb{Z}$-equivalence. The book \cite{Newman} covers only the case of~$SL(2, \mathbb{Z})$. Our idea of classification is as follows.
First, computations using the programs CARAT (\cite{carat}, see also \cite{acta_cryst}) and GAP (\cite{gap}) allow us to find a set of $16$ finite subgroups of $SL(3, \mathbb{Z})$ containing representatives of all non-cyclic $\mathbb{Z}$-classes. They are listed in table~1 (see theorem \ref{groups_table}). Next, in sections \ref{examples} and \ref{results}, we investigate their action on the product of three elliptic curves $A^3$ in order to compute the cohomology of Kummer varieties. In this process we show that the actions defined by the listed groups are different. This section gives a short description of the computations and their results. The code of the programs can be found at www.mimuw.edu.pl/\~{}marysia/prog\_gap. \subsection{General facts} Let us first note some results which form the basis for understanding finite integral matrix groups. See e.g. \cite{Newman} and \cite{KP} for a more general survey. \begin{thm}[Minkowski]\label{Minkowski} Let $G < GL(n, \mathbb{Z})$ be a finite subgroup. Then, for every prime $p>2$, its reduction modulo $p$, $G \hookrightarrow GL(n, \mathbb{Z}) \rightarrow GL(n, \mathbb{Z}_p)$, is an embedding. \end{thm} \begin{lem}\label{existence_max_subgroups} Every finite subgroup of $GL(n, \mathbb{Z})$ is contained in a maximal finite subgroup. \end{lem} \begin{proof} By Minkowski's theorem, the order of a finite subgroup of $GL(n, \mathbb{Z})$ divides the order of $GL(n, \mathbb{Z}_3)$. Hence an ascending sequence of finite subgroups must stabilize. \end{proof} By Minkowski's theorem, every finite subgroup of $GL(n, \mathbb{Z})$ is isomorphic to a subgroup of $GL(n, \mathbb{Z}_3)$. Hence isomorphism types of finite subgroups of $SL(n, \mathbb{Z})$ can be easily obtained for small values of $n$. The results for $SL(3, \mathbb{Z})$ are mentioned in~\cite{Newman}, chapter~IX.14, or lemma~3.2 in~\cite{AW}.
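Minkowski's bound is easy to make concrete (a sketch; the helper name is ours, and the order formula is the standard count of ordered bases of $\mathbb{Z}_p^n$):

```python
import math

def gl_order(n, p):
    """Order of GL(n, Z_p) for p prime: the number of ordered bases
    of Z_p^n, i.e. the product of (p^n - p^i) for i = 0, ..., n-1."""
    return math.prod(p ** n - p ** i for i in range(n))
```

Here $|GL(3,\mathbb{Z}_3)| = 26\cdot 24\cdot 18 = 11232$, so the order of every finite subgroup of $GL(3,\mathbb{Z})$ divides $11232$; in particular the group orders $48$ and $24$ appearing below do.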
\begin{lem}\label{types_of_subgroups} Every nontrivial finite subgroup of $SL(3, \mathbb{Z})$ is isomorphic to one of the following: \begin{itemize} \item cyclic groups $\mathbb{Z}_2$, $\mathbb{Z}_3$, $\mathbb{Z}_4$, $\mathbb{Z}_6$, \item dihedral groups $D_4$, $D_6$, $D_8$, $D_{12}$, \item the group $A_4$ of even permutations of four elements (i.e. the tetrahedral group~$T$), \item the symmetric group $S_4$ of all permutations of four elements (i.e. the octahedral group~$O$). \end{itemize} \end{lem} Note that $D_4$ is isomorphic to $\mathbb{Z}_2 \times \mathbb{Z}_2$, and $D_6$ to the symmetric group $S_3$. The task is now to determine the $\mathbb{Z}$-classes contained in each isomorphism class of finite subgroups of $SL(3, \mathbb{Z})$. The algorithm is divided into two main steps, outlined in the next sections: \begin{enumerate} \item finding representatives of conjugacy classes of maximal finite subgroups of $GL(3, \mathbb{Z})$, \item determining $\mathbb{Z}$-classes of all finite subgroups of $SL(3, \mathbb{Z})$ by analyzing the maximal subgroups of $GL(3, \mathbb{Z})$. \end{enumerate} \subsection{Maximal finite subgroups in $GL(3, \mathbb{Z})$} To determine conjugacy classes of maximal finite subgroups of $GL(3, \mathbb{Z})$ we used programs of the crystallographic package CARAT. Maximal finite subgroups of $GL(3, \mathbb{Z})$ are so-called Bravais groups, which are integral matrix groups important in crystallography. This follows from the definition and basic properties of Bravais groups, described e.g. in \cite{acta_cryst} or \cite{BNZ}. CARAT programs can list $\mathbb{Z}$-classes of Bravais groups up to dimension~$6$. The only problem was to find the maximal finite subgroups among them. The program \emph{Bravais\_inclusions} (with option -S) listed all Bravais groups in $GL(3, \mathbb{Z})$ containing $\{I, -I \} \simeq \mathbb{Z}_2$.
Every maximal finite subgroup of $GL(n, \mathbb{Z})$ contains the matrix $-I$, because it belongs to the center of $GL(n, \mathbb{Z})$, so all maximal finite subgroups of $GL(3, \mathbb{Z})$ were on this list. The same program (with no option) allowed us to cross out some groups which were not maximal elements of the list. There were four groups left: three of order $48$ and one of order $24$. It follows from the remainder of the text that all of these are maximal finite subgroups of $GL(3, \mathbb{Z})$ (their intersections with $SL(3, \mathbb{Z})$ are pairwise non-conjugate). \subsection{All finite subgroups of $SL(3, \mathbb{Z})$} The following observation assures that we only need to consider subgroups contained in the intersections of maximal finite subgroups of $GL(3, \mathbb{Z})$ with $SL(3, \mathbb{Z})$. \begin{lem} Let $G < SL(3, \mathbb{Z})$ be a finite subgroup, and let $\{H_i\}_{i\in I}$ be a set of representatives of all conjugacy classes of maximal finite subgroups of $GL(3, \mathbb{Z})$. Then $G$ is $\mathbb{Z}$-equivalent to a subgroup $G' < H_i \cap SL(3, \mathbb{Z})$ for some $i \in I$. \end{lem} \begin{proof} By lemma \ref{existence_max_subgroups}, $G$ is contained in a maximal finite subgroup $H < GL(3, \mathbb{Z})$, which is conjugate to some $H_i$. Conjugation does not change the determinant of a matrix. \end{proof} We used the algebraic package GAP to determine the $\mathbb{Z}$-classes of finite subgroups of $SL(3, \mathbb{Z})$. In each maximal finite subgroup of $GL(3, \mathbb{Z})$ its intersection with $SL(3, \mathbb{Z})$ is of index $2$, so the input data consisted of three groups of order $24$ and one of order $12$. These are $S_4(1)$, $S_4(2)$, $S_4(3)$ and $D_{12}$ in table 1. Our algorithm, applied to each of these groups, determined the orders of their elements and created lists of subgroups of different isomorphism types by checking relations on generators.
It can be implemented effectively, because all groups listed in lemma \ref{types_of_subgroups} are cyclic or have a presentation with two generators and one relation. As the first result we got $24$ groups, but their number was decreased to~$20$ because in $4$ cases it was easy to find a base-change matrix. Investigation of the action on $A^3$ suggested that $4$ of these groups were conjugate to some of the other $16$. Indeed, we found base-change matrices. The remaining $16$ groups represent different $\mathbb{Z}$-classes, which is proved by the analysis of their action on $A^3$ in the Kummer construction described in sections \ref{examples} and \ref{results}. We note that Poincar\'e polynomials of Kummer varieties are not sufficient to distinguish the $\mathbb{Z}$-classes of groups used in the construction. There are Kummer 3-folds obtained from non-conjugate groups which have equal Poincar\'e polynomials (see table 2). In such cases we had to investigate the action on $A^3$ more carefully to distinguish the $\mathbb{Z}$-classes. The most interesting cases \ref{case_D4_3} and \ref{case_D4_4} are discussed in section \ref{comment_D4}. \begin{thm}\label{groups_table} The list of groups in table 1 contains exactly one representative of each $\mathbb{Z}$-class of finite non-cyclic subgroups of $SL(3, \mathbb{Z})$.
\end{thm} \begin{table}[!hp] \renewcommand{\arraystretch}{0.87} \centerline{\begin{tabular}{|c|c|} \hline \textbf{group} & \textbf{generators}\\ \hline $D_4(1)$ & $\left(\begin{array}{rrr} -1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & 1 \end{array} \right) \left(\begin{array}{rrr} 1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & -1 \end{array} \right)$\\ \hline $D_4(2)$ & $\left(\begin{array}{rrr} -1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & 1 \end{array} \right) \left(\begin{array}{rrr} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & -1 \end{array} \right)$\\ \hline $D_4(3)$ & $\left(\begin{array}{rrr} -1 & 0 & 0 \\ 0 & 0 & -1 \\ 0 & -1 & 0 \end{array} \right) \left(\begin{array}{rrr} 1 & 1 & 1 \\ 0 & -1 & 0 \\ 0 & 0 & -1 \end{array} \right)$\\ \hline $D_4(4)$ & $\left(\begin{array}{rrr} -1 & -1 & -1 \\ 0 & 0 & 1 \\ 0 & 1 & 0 \end{array} \right) \left(\begin{array}{rrr} 0 & 0 & 1 \\ -1 & -1 & -1 \\ 1 & 0 & 0 \end{array} \right)$\\ \hline $D_6(1)$ & $\left(\begin{array}{rrr} -1 & 0 & 0 \\ 1 & 1 & 0 \\ 0 & 0 & -1 \end{array} \right) \left(\begin{array}{rrr} -1 & -1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{array} \right)$\\ \hline $D_6(2)$ & $\left(\begin{array}{rrr} 0 & -1 & 0 \\ -1 & 0 & 0 \\ 0 & 0 & -1 \end{array} \right) \left(\begin{array}{rrr} -1 & 1 & 0 \\ -1 & 0 & 0 \\ 0 & 0 & 1 \end{array} \right)$\\ \hline $D_6(3)$ & $\left(\begin{array}{rrr} -1 & 0 & 0 \\ 0 & 0 & -1 \\ 0 & -1 & 0 \end{array} \right) \left(\begin{array}{rrr} 0 & -1 & 0 \\ 0 & 0 & 1 \\ -1 & 0 & 0 \end{array} \right)$\\ \hline $D_8(1)$ & $\left(\begin{array}{rrr} -1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & 1 \end{array} \right) \left(\begin{array}{rrr} 0 & 0 & -1 \\ 0 & 1 & 0 \\ 1 & 0 & 0 \end{array} \right)$\\ \hline $D_8(2)$ & $\left(\begin{array}{rrr} -1 & -1 & -1 \\ 0 & 0 & 1 \\ 0 & 1 & 0 \end{array} \right) \left(\begin{array}{rrr} 0 & -1 & 0 \\ 0 & 0 & -1 \\ 1 & 1 & 1 \end{array} \right)$\\ \hline $D_{12}$ & $\left(\begin{array}{rrr} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & -1 \end{array} \right) \left(\begin{array}{rrr} 0 & 1 & 0 \\ -1 & 1 & 0 \\ 0 & 0 & 1
\end{array} \right)$\\ \hline $A_4(1)$ & $\left(\begin{array}{rrr} -1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & 1 \end{array} \right) \left(\begin{array}{rrr} 0 & 0 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{array} \right)$\\ \hline $A_4(2)$ & $\left(\begin{array}{rrr} -1 & -1 & -1 \\ 0 & 0 & 1 \\ 0 & 1 & 0 \end{array} \right) \left(\begin{array}{rrr} 0 & 0 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{array} \right)$\\ \hline $A_4(3)$ & $\left(\begin{array}{rrr} -1 & 0 & 0 \\ -1 & 0 & 1 \\ -1 & 1 & 0 \end{array} \right) \left(\begin{array}{rrr} 0 & 0 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{array} \right)$\\ \hline $S_4(1)$ & $\left(\begin{array}{rrr} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & -1 \end{array} \right) \left(\begin{array}{rrr} 0 & 0 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{array} \right)$\\ \hline $S_4(2)$ & $\left(\begin{array}{rrr} -1 & 0 & 0 \\ 0 & -1 & 0 \\ 1 & 1 & 1 \end{array} \right) \left(\begin{array}{rrr} 0 & 0 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{array} \right)$\\ \hline $S_4(3)$ & $\left(\begin{array}{rrr} -1 & 0 & 0 \\ 0 & 0 & -1 \\ 0 & -1 & 0 \end{array} \right) \left(\begin{array}{rrr} -1 & 0 & 1 \\ -1 & 1 & 0 \\ -1 & 0 & 0 \end{array} \right)$\\ \hline \end{tabular}} \renewcommand{\arraystretch}{1} \caption{$\mathbb{Z}$-classes of finite non-cyclic subgroups of $SL(3,\mathbb{Z})$} \end{table} In the next sections, by abuse of notation, the names introduced in the table will denote both $\mathbb{Z}$-classes and their chosen representatives. \subsection{Relations of groups}\label{relations_groups} The following definition is of considerable importance for the results of sections \ref{examples} and \ref{results}. \begin{defn}\label{def_duality} We say that finite subgroups $G, G' < GL(n, \mathbb{Z})$ are dual if by transposing all matrices in $G$ we obtain all matrices in $G'$. Conjugacy classes in $GL(n, \mathbb{Z})$ are dual if dual representatives can be chosen.
\end{defn} \begin{prop}\label{dual_pairs} Programs in the GAP package allow us to determine the duality relation on the set of $\mathbb{Z}$-classes of finite subgroups of $SL(3, \mathbb{Z})$: \begin{itemize} \item each of the following classes is dual to itself: $D_4(1)$, $D_4(2)$, $D_6(3)$, $D_8(1)$, $D_8(2)$, $D_{12}$, $A_4(1)$, $S_4(1)$; \item there are four pairs of dual classes: $D_4(3)$ and $D_4(4)$, $D_6(1)$ and $D_6(2)$, $A_4(2)$ and $A_4(3)$, $S_4(2)$ and $S_4(3)$. \end{itemize} \end{prop} Investigation of rational maps between Kummer 3-folds may lead to some new results. Therefore, by the following remark, we would like to understand the relation of inclusion (up to $\mathbb{Z}$-equivalence) of finite subgroups of $SL(3,\mathbb{Z})$. \begin{rem}\label{rational_map} Let $G, H < GL(r, \mathbb{Z})$ be finite subgroups such that there is $H' < G$ which is $\mathbb{Z}$-equivalent to $H$. Then there exists a rational map between the Kummer varieties for $H$ and $G$. \end{rem} \begin{prop}\label{groups_relations_diagram} The following diagram presents inclusions of finite non-cyclic subgroups of $SL(3, \mathbb{Z})$ up to $\mathbb{Z}$-equivalence, determined by programs in GAP. An arrow from $H$ to $G$ means that there exists $H'<G$ which is $\mathbb{Z}$-equivalent to $H$. We omit arrows which come from compositions of other arrows. \vbox to 0ex{\vss\centerline{\includegraphics[width=0.6\textwidth]{diagram.pdf}}\vss} \end{prop} \section{Construction and cohomology of Kummer 3-folds}\label{construction_details} We compute Poincar\'e polynomials for the varieties obtained by applying the Kummer construction to chosen representatives of all $\mathbb{Z}$-classes of finite non-cyclic subgroups of $SL(3, \mathbb{Z})$. The general process of the construction and the ideas of the computations are described in \cite{AW}. This section is a detailed discussion of the $3$-dimensional case. We describe the structure of Kummer 3-folds and explain methods of computing their cohomology, following the notation of \cite{AW}.
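Before proceeding, we note that the entries of table 1 can be double-checked without CARAT or GAP by closing a pair of listed generators under matrix multiplication and counting elements (a sketch; the helper names are ours). For instance, the generators of $S_4(1)$ should close to $24$ matrices, and those of $D_4(1)$ to $4$:

```python
def matmul(a, b):
    """Multiply two 3x3 integer matrices given as tuples of row tuples."""
    return tuple(
        tuple(sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3))
        for i in range(3)
    )

def closure(gens):
    """Close a set of integer matrices under multiplication; since the
    generated group is finite, this breadth-first search terminates
    with the whole group (the identity appears as a power of a generator)."""
    elems = set(gens)
    frontier = set(gens)
    while frontier:
        new = {matmul(a, b) for a in frontier for b in elems}
        new |= {matmul(a, b) for a in elems for b in frontier}
        frontier = new - elems
        elems |= frontier
    return elems
```

This checks the order of each listed group, though of course not its $\mathbb{Z}$-class.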
From now on, $G$ denotes a finite subgroup of $SL(3, \mathbb{Z})$. For $H < G$, by $N(H)$ we denote the normalizer of $H$ in $G$, and $W(H) = N(H)/H$ is its Weyl group. To shorten the notation, if $H$ is cyclic, i.e. $H = \gen{h}$, we write $N(h)$ and $W(h)$ instead of $N(\gen{h})$ and $W(\gen{h})$. \subsection{Stratification}\label{stratification} We compute the Poincar\'e polynomial $P_Y(t)$ of $Y = A^3/G$ and add the contribution to cohomology coming from the chosen resolution $f: X \rightarrow Y$. By theorem~III.7.2 in~\cite{Bredon}, to determine the Poincar\'e polynomial of $Y$ it is sufficient to compute the dimensions of the spaces of $G$-invariant forms on $A^3$. This can be done by a simple function in GAP based on the character formulas given in \cite{FH}, chapter~2. Thus the main difficulty is understanding the contribution of the resolution of singularities. We solve this problem using virtual Poincar\'e polynomials (see e.g. \cite{Arapura},~17.2 or \cite{Fulton},~4.5). Hence stratifications of $Y$ and $X$ must be chosen, as explained in \cite{AW},~2.2. There is a natural stratification of $Y = A^3/G$ determined by the action of~$G$ on $A^3$. The finite set of orbits of points with non-cyclic isotropy is the $0$-dimensional stratum. The $1$-dimensional stratum is the set of orbits of points with cyclic isotropy. The union of the $0$- and $1$-dimensional strata consists exactly of the singular points of $Y$, so the $3$-dimensional stratum is the smooth part of $Y$. By taking inverse images of the strata in $Y$ we obtain the decomposition of $X$ into $1$-, $2$- and $3$-dimensional strata. Note that the resolution of singularities $f: X \rightarrow Y$ gives an isomorphism of the $3$-dimensional strata. The contribution to cohomology coming from the resolution is expressed by the difference between the virtual Poincar\'e polynomials of the singular strata of $Y$ (of dimensions $0$ and $1$) and of their inverse images in $X$ (of dimensions $1$ and $2$).
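The first step above, computing invariant forms on $A^3$, admits a compact sketch: if one identifies the action of $g \in G$ on $H^1(A^3)$ with $g \oplus g$ (using $A^3 = \mathbb{Z}^3 \otimes_{\mathbb{Z}} A$ and $H^1(A) \simeq \mathbb{Z}^2$), then $P_Y(t)$ is the average over $G$ of $\det(I + tg)^2$. The paper's own computation uses GAP and character formulas; the following Python fragment (with our own helper names) is an independent illustration under that identification:

```python
from fractions import Fraction

def char_coeffs(M):
    """Coefficients [1, e1, e2, e3] of det(I + t*M) for a 3x3 integer
    matrix: e1 = trace, e2 = sum of principal 2x2 minors, e3 = det."""
    e1 = M[0][0] + M[1][1] + M[2][2]
    e2 = sum(M[i][i] * M[j][j] - M[i][j] * M[j][i]
             for i in range(3) for j in range(i + 1, 3))
    e3 = (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
          - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
          + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))
    return [1, e1, e2, e3]

def poincare_quotient(group):
    """P_Y(t) for Y = A^3/G as the average over g in G of det(I + t*g)^2,
    returned as the list of coefficients of 1, t, ..., t^6."""
    coeffs = [Fraction(0)] * 7
    for g in group:
        c = char_coeffs(g)
        for i in range(4):
            for j in range(4):
                coeffs[i + j] += c[i] * c[j]
    return [c / len(group) for c in coeffs]
```

For the diagonal group $D_4(1)$ this returns the coefficients of $1 + 3t^2 + 8t^3 + 3t^4 + t^6$; the contribution of the resolution must then still be added as described in the following subsections.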
\subsection{Cohomology of the strata in $Y$}\label{cohomology_Y} Let us first look at the set of points with non-trivial isotropy in the action of $G$ on $A^3$. By lemma 3.3 in \cite{AW}, the set $(A^3)^H$ of fixed points of a cyclic group $H$ consists of disjoint elliptic curves. The curves determined by $H$ can contain some points with non-cyclic isotropy, which are intersection points with curves determined by other cyclic groups. Hence the components of the set $(A^3)_0^H$ of points with isotropy $H$ are elliptic curves with some points removed. The image of $(A^3)_0^H$ in $Y$, denoted $Y([H])$, depends only on the conjugacy class of~$H$. The Weyl group $W(H)$ acts freely on $(A^3)_0^H$. Let $K$ be a component of $Y([H])$ and $A_K$ an elliptic curve in $A^3$ which is mapped to $\ovl{K}$, the closure of $K$. By $W_K$ we denote the subgroup of $W(H)$ which fixes $A_K$ as a set, and hence acts on its points. Then the normalized closure of $K$, denoted $\widehat{K}$, is isomorphic to $A_K/W_K$ (see \cite{AW}, section 2.2). The virtual Poincar\'e polynomial of $K$ is the polynomial of $A_K/W_K$ with the number of orbits of points on~$\ovl{K}$ with non-cyclic isotropy subtracted. There are three possible actions of $W_K$ on an elliptic curve in our computations: the trivial one, the action of $\mathbb{Z}_2$ by an involution, and the action of $\mathbb{Z}_2\times\mathbb{Z}_2$ generated by a translation and an involution. In the first case the Poincar\'e polynomial is $P_{A_K/W_K}(t) = 1 + 2t + t^2$. In the remaining two cases $A_K/W_K \simeq \mathbb{P}^1$ and $P_{A_K/W_K}(t) = 1 + t^2$. To obtain the virtual Poincar\'e polynomial $P_K(t)$ of a component $K$ we have to subtract the number of points with non-cyclic isotropy on~$\ovl{K}$. Counting these points is generally easy, but there are two possible difficulties (which are in fact more important in the analysis of the strata in $X$). The first is that sometimes the action of $W_K$ identifies some points with non-cyclic isotropy (see \ref{case_S4_2}).
Then the normalization of $\ovl{K}$ is not an isomorphism, because the points with non-cyclic isotropy are not normal points of $\ovl{K}$, but it is a homeomorphism. The second is when two points with non-cyclic isotropy lying on $A_K$ are mapped to the same point of the quotient, but the identification does not come from the action of $W_K$ (see \ref{case_S4_3}). Then $\ovl{K}$ goes twice through the image of such points. However, these situations appear only in some cases for $|G| \geq 8$, because they require $G$ to have a non-cyclic proper subgroup. Summing $P_K(t)$ over all singular curves $K \subset Y$ we compute the virtual Poincar\'e polynomial of the $1$-dimensional stratum. Counting the points in the $0$-dimensional stratum is standard. It requires only the knowledge of the non-cyclic subgroups of $G$ and their normalizers. Thus we compute the virtual Poincar\'e polynomial $P_3(t)$ of the $3$-dimensional stratum, common to $Y$ and $X$, by subtracting the virtual Poincar\'e polynomials of the $0$- and $1$-dimensional strata in $Y$ from $P_Y(t)$. \subsection{Cohomology of the strata in $X$}\label{cohomology_X} We now turn to computing the virtual Poincar\'e polynomial $P_2(t)$ of the $2$-dimensional stratum in~$X$, which is the resolution of singularities over $Y([H])$ for all conjugacy classes of cyclic subgroups $H < G$. We consider a locally product resolution $f : X \rightarrow Y$ in the sense of definition~2.4 in~\cite{AW} (in fact, the resolution must be locally product in the case of Kummer $3$-folds, because the resolution is uniquely defined in codimension $2$). Let $F(H)$ be the fiber of the minimal resolution of the quotient singularity $\mathbb{C}^2/H$, and $K$ a component of $Y([H])$. By lemma~2.5 in~\cite{AW}, to obtain the virtual Poincar\'e polynomial of $f^{-1}(K)$ we compute the Poincar\'e polynomial of the quotient of $A_K\times F(H)$ by the action of $W_K$ and subtract the Poincar\'e polynomial of the sum of the fibers over the points with non-cyclic isotropy.
Here we use the notation of $W_K$-Poincar\'e polynomials $P_{A_K\times F(H), W_K}(t)$, which have the representation of $W_K$ on $H^i(A_K\times F(H), \mathbb{C})$ as the coefficient of $t^i$ (see \cite{AW}, section 2.1). To obtain the Poincar\'e polynomial of the quotient $(A_K\times F(H))/W_K$ we apply to $P_{A_K\times F(H), W_K}(t)$ the operation $\mu_0$ of taking the dimension of the maximal trivial subrepresentation of each coefficient. Hence $$P_{(A_K\times F(H))/W_K}(t) = \mu_0(P_{A_K\times F(H), W_K}(t)) = \mu_0(P_{A_K, W_K}(t) \cdot P_{F(H), W_K}(t)),$$ which means that the task is now to compute the $W_K$-Poincar\'e polynomials of~$A_K$ and~$F(H)$. The action of $W_K$ on $A_K$ is described in \ref{cohomology_Y}. The only remaining problem is the action of $W_K$ on $F(H)$. However, we can use the McKay correspondence for the minimal resolution of $\mathbb{C}^2/H$ (see \cite{Reid02}): the action of $W(H)$, and hence also of $W_K$, on the cohomology of $F(H)$ is the same as the action on the conjugacy classes of $H$. In our computations $H$ is one of the groups $\mathbb{Z}_2$, $\mathbb{Z}_3$, $\mathbb{Z}_4$, $\mathbb{Z}_6$. If $H = \mathbb{Z}_{n+1}$, the quotient singularity is of type $A_n$, so the non-zero cohomology spaces of $F(H)$ are $H^0(F(H)) = \mathbb{C}$ and $H^2(F(H)) = \mathbb{C}^n$. By the McKay correspondence, the action of $W_K$ on a chosen basis of $H^2(F(H))$ is the same as the action of $W_K$ by conjugation on the set of nontrivial conjugacy classes of $H$. In fact we investigate the action of $W_K$ on $H$ itself, because $H$ is abelian. There are three cases: $W_K = 0$, $W_K = \mathbb{Z}_2$ and $W_K = \mathbb{Z}_2 \times \mathbb{Z}_2$. The last one appears three times in the computations, and only when $H = \mathbb{Z}_2$. Hence in the first and the last case $W_K$ acts trivially on $H^2(F(H))$. The same is true for~$W_K = \mathbb{Z}_2$ when~$H = \mathbb{Z}_2$. Let us then look at the case $W_K = \mathbb{Z}_2$, assuming that $H \neq \mathbb{Z}_2$.
Then $H$ has exactly two generators, which are interchanged by the action of $W_K$, because in our computations $H$ and $W_K$ always generate a non-abelian group. We choose a new basis of $H^2(F(H))$: instead of each pair of basis vectors $\alpha$, $\beta$ interchanged by $W_K$ we take $\alpha + \beta$ and $\alpha - \beta$. In the new basis the representations of $W_K = \mathbb{Z}_2$ on $H^2(F(H))$ are as follows. The trivial representation is denoted by $1$, the standard one by $\varepsilon$ (that is, $\varepsilon = -1$), and the sum denotes the direct sum of representations. \centerline{\begin{tabular}{|c|c|} \hline \textbf{group $H$} & \textbf{representation} \\ \hline $\mathbb{Z}_2$ & $1$ \\ \hline $\mathbb{Z}_3$ & $1 + \varepsilon$ \\ \hline $\mathbb{Z}_4$ & $2 + \varepsilon$ \\ \hline $\mathbb{Z}_6$ & $3 + 2\varepsilon$ \\ \hline \end{tabular}} The last step is computing the (virtual) Poincar\'e polynomial $P_1(t)$ of the $1$-dimensional stratum of $X$, which is the resolution of the quotient singularities $\mathbb{C}^3/H$ for non-cyclic $H<G$. To understand this stratum it is sufficient to analyze a representative of each conjugacy class of non-cyclic subgroups of $G$. However, to compute $P_2(t)$ we need to count the points with non-cyclic isotropy on each curve, which is much easier if we look at all non-cyclic subgroups of $G$, as in the following sections. As for the existence of the crepant resolution of non-cyclic quotient singularities in the $3$-dimensional Kummer construction, we rely on \cite{AW}, section 3.2. Again, to determine the Poincar\'e polynomial we use the McKay correspondence (this case is discussed in \cite{BM}): the number of $\mathbb{P}^1$ curves in the fiber of the resolution of the quotient singularity $\mathbb{C}^3/H$ is equal to the number of nontrivial conjugacy classes of~$H$. This can be computed by a simple function in GAP. Results for all non-cyclic groups which appear in the $3$-dimensional Kummer construction are summarized in the following table.
\centerline{\begin{tabular}{|c|c|} \hline \textbf{group $H$} & \textbf{Poincar\'e polynomial} \\ \hline $D_4$ & $1 + 3t^2$ \\ \hline $D_6$ & $1 + 2t^2$ \\ \hline $D_8$ & $1 + 4t^2$ \\ \hline $D_{12}$ & $1 + 5t^2$ \\ \hline $A_4$ & $1 + 3t^2$ \\ \hline $S_4$ & $1 + 4t^2$ \\ \hline \end{tabular}} \section{Examples}\label{examples} Computations of Poincar\'e polynomials for all $16$ discussed groups are based on the methods described above. Therefore we look into the details only for the most complex cases, which contain everything that needs to be explained more carefully; the remaining cases can be treated in the same way. In the present section we discuss the Kummer construction for $S_4(2)$ and $S_4(3)$. The results of the computations for the remaining groups are given in the next section. \subsection*{Presentation of the results} The information concerning curves of singular points is summarized in tables. In the columns we give the following data: \begin{itemize} \item \textbf{group} --- the isomorphism type of the investigated cyclic subgroup (with the type of its quotient singularity in brackets); \item \textbf{generator} --- a matrix which generates the subgroup in question (we choose a representative of each conjugacy class of subgroups and one of its generators); \item \textbf{equations} --- equations of the fixed point set in the coordinates $(e_1, e_2, e_3)$ in $A^3$; \item \textbf{components} --- the number of elliptic curves in the fixed point set; \item $W(g)$ --- the Weyl group; \item \textbf{quotient} --- the curves obtained as quotients of components of the fixed point set by the Weyl group action, with the number of curves in each isomorphism class; \item $W_K$ --- the subgroup of $W(g)$ that acts on a single curve (given separately for each class of curves in the previous column). \end{itemize} \subsection{Case of $S_4(2)$}\label{case_S4_2} There are three $\mathbb{Z}$-classes of $S_4$ subgroups in $SL(3, \mathbb{Z})$. The case $S_4(1)$ (i.e.
octahedral group) is treated in \cite{AW}, and the action of this group on $A^3$ has a slightly simpler structure than the actions of the other two representations of $S_4$. Here we discuss $S_4(2)$, in some sense the most tricky one. Let us recall some information about the structure of the group $S_4$. There are $2$ conjugacy classes of $\mathbb{Z}_2$ subgroups; one of them consists of the subgroups generated by the squares of order $4$ elements. All subgroups isomorphic to $\mathbb{Z}_4$ are conjugate, and so are the subgroups of type $\mathbb{Z}_3$. Subgroups of type $D_4$ divide into two classes: one contains only a normal subgroup, with Weyl group $D_6$, and the second has three elements, each with Weyl group $\mathbb{Z}_2$. All $4$ subgroups isomorphic to $D_6$ are conjugate, as well as all $3$ subgroups of type $D_8$. There is also a normal subgroup $A_4$. Next we choose a representative of the $\mathbb{Z}$-class $S_4(2)$: $$G = \left< \left(\begin{array}{rrr} -1 & 0 & 0 \\ 0 & -1 & 0 \\ 1 & 1 & 1 \end{array} \right), \left(\begin{array}{rrr} 0 & 0 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{array} \right)\right>.$$ The information about the singularities of $Y$ in codimension $2$ is easily obtained: \centerline{\begin{tabular}{|c|c|c|c|c|c|c|} \hline \textbf{group} & \textbf{gen.} & \textbf{equ.} & \textbf{comp.} & $W(g)$ & \textbf{quot.} & $W_K$\\ \hline $\mathbb{Z}_2$ $(A_1)$ & $\left(\begin{array}{rrr} -1 & 0 & 0 \\ 0 & -1 & 0 \\ 1 & 1 & 1 \end{array} \right)$ & $\begin{array}{c} 2e_1 = 0 \\ e_1 = e_2\\ \end{array}$ & $4$ & $\mathbb{Z}_2$ & $4 \times \mathbb{P}^1$ & $\mathbb{Z}_2$ \\ \hline $\mathbb{Z}_2$ $(A_1)$ & $\left(\begin{array}{rrr} 0 & 0 & 1 \\ -1 & -1 & -1 \\ 1 & 0 & 0 \end{array} \right)$ & $\begin{array}{c} e_1 = e_3 \\ 2e_2 = -2e_1 \\ \end{array}$ & $4$ & $\mathbb{Z}_2\times \mathbb{Z}_2$ & $3 \times \mathbb{P}^1$ & $\mathbb{Z}_2 \times \mathbb{Z}_2$ \\ \hline $\mathbb{Z}_3$ $(A_2)$ & $\left(\begin{array}{rrr} 0 & 0 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{array} \right)$ & $\begin{array}{c} e_1 = e_3 \\ e_1 = e_2 \\
\end{array}$ & $1$ & $\mathbb{Z}_2$ & $1 \times \mathbb{P}^1$ & $\mathbb{Z}_2$ \\ \hline $\mathbb{Z}_4$ $(A_3)$ & $\left(\begin{array}{rrr} 0 & -1 & 0 \\ 0 & 0 & -1 \\ 1 & 1 & 1 \end{array} \right)$ & $\begin{array}{c} e_1 = -e_2 \\ e_1 = e_3 \\ \end{array}$ & $1$ & $\mathbb{Z}_2$ & $1 \times \mathbb{P}^1$ & $\mathbb{Z}_2$ \\ \hline \end{tabular}} The most interesting part is analyzing the relations between the $0$- and $1$-dimensional strata in $Y$ and finding the Poincar\'e polynomials for all strata after resolving the singularities. It is much easier if we determine the points with non-cyclic isotropy for all groups, not only for representatives of conjugacy classes. Each of the $3$ non-normal $D_4$ subgroups fixes a set of $16$ points. The first set contains points with coordinates $(\alpha, \alpha, \beta)$, the second --- $(\alpha, \beta, \alpha)$, and the third --- $(\beta, \alpha, \alpha)$, where $2\alpha = 2\beta = 0$. Note that the elements of the intersection, that is, the $4$ points $(\alpha, \alpha, \alpha)$, have isotropy $S_4$, and in each set the remaining $12$ points have isotropy $D_4$. The Weyl group $\mathbb{Z}_2$ acts on them, identifying pairs of points with $\alpha$ and $\beta$ interchanged. Because we look at conjugate subgroups, the action of $\mathbb{Z}_3 < S_4$ identifies triples with cyclically permuted coordinates, one point taken from each family. Hence the $36$ points with isotropy $D_4$ in $A^3$ are mapped to $6$ points of $Y$. The normal $D_4$ subgroup fixes the points which satisfy $e_1 = e_2 = e_3$ and $4e_1 = 0$, but these are exactly the points with isotropy $A_4$ ($12$ points with $2e_1 \neq 0$) or $S_4$. The set of points with isotropy $A_4$ is mapped to $6$ points of $Y$. Points fixed by $D_6$ and $D_8$ subgroups satisfy $e_1 = e_2 = e_3$ and $2e_1 = 0$, so in fact they have isotropy $S_4$. The four points given by these equations are the only fixed points of $G$.
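These identifications can be cross-checked by orbit sizes (a simple consistency check): a point $x \in A^3$ with isotropy group $G_x$ has a $G$-orbit of $|G|/|G_x|$ elements, all of which are mapped to a single point of $Y$. Hence
$$\frac{36}{24/|D_4|} = \frac{36}{6} = 6, \qquad \frac{12}{24/|A_4|} = \frac{12}{2} = 6, \qquad \frac{4}{24/|S_4|} = 4,$$
in agreement with the identifications described above.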
Summing up, in the quotient there are $6$ points with quotient singularity of type $D_4$, $6$ with $A_4$ and $4$ with $S_4$. We turn to the task of understanding the resolutions over the singular curves. The curves for the first $\mathbb{Z}_2$ subgroup in the table represent what appears most often in the computations. Each component contains $3$ points with isotropy $D_4$ and $1$ with isotropy $S_4$. They are fixed by the action of $W_K$ (an involution on each curve) and mapped to different points of $Y$. The fiber of the resolution is $\mathbb{P}^1$, so $W_K$ acts trivially on it. We compute the virtual Poincar\'e polynomials using the notation introduced in \ref{cohomology_X}. Note that from the polynomial of the quotient of a trivial bundle over an elliptic curve we have to subtract the polynomial of the quotient of the induced bundle over the set of points with non-cyclic isotropy. The polynomial of a stratum of the resolution over one curve is \begin{align*} \mu_0((1 + \varepsilon \cdot 2t + t^2)(1 + t^2)) - \mu_0(4(1+t^2)) = -3 -2t^2 + t^4. \end{align*} The curve of points with isotropy $\mathbb{Z}_4$ contains only $4$ points fixed by $S_4$, and its virtual Poincar\'e polynomial is also easy to write: \begin{align*} \mu_0((1 + \varepsilon \cdot 2t + t^2)(1 + (2 + \varepsilon)t^2)) - \mu_0(4(1+(2 + \varepsilon)t^2)) = -3 -5t^2 + 2t^3 + 2t^4. \end{align*} The second $\mathbb{Z}_2$ subgroup is one of the three cases in our computations where $W_K = \mathbb{Z}_2 \times \mathbb{Z}_2$. The quotient curves are isomorphic to $\mathbb{P}^1$, but the quotient map restricted to one component (after removing the points with non-cyclic isotropy) is a $4$-sheeted cover. However, it is enough to look at the action of $W_K$ on the cohomology spaces of a single component. One generator of the product $\mathbb{Z}_2 \times \mathbb{Z}_2$ acts on the elliptic curve $A$ by a translation, which has trivial tangent map. The second, whichever we choose, acts by an involution.
The induced action on $H^0$ and $H^2$ is the identity, and on $H^1$ it has no nonzero invariant vectors, so the Poincar\'e polynomial is, indeed, $1 + t^2$. In all cases with $W_K = \mathbb{Z}_2 \times \mathbb{Z}_2$ the singularities are of type $A_1$, so we do not have to analyze the action of $\mathbb{Z}_2 \times \mathbb{Z}_2$ on the fibers. Each of the $3$ curves for the discussed subgroup contains $4$ points with isotropy $D_4$ and $4$ with isotropy $A_4$. Each set is mapped to $2$ points of the quotient. For the first set there is a stabilizer $\mathbb{Z}_2 < \mathbb{Z}_2 \times \mathbb{Z}_2$, so the representation of $W_K$ is $2(1 + \varepsilon)$. The same is true for the second set, only the stabilizer is a different $\mathbb{Z}_2$ subgroup of $W_K$. Hence the Poincar\'e polynomial of one component is \begin{align*} \mu_0((1 + \varepsilon \cdot 2t + t^2)(1 + t^2)) - \mu_0(4(1 + \varepsilon)(1+t^2)) = -3 - 2t^2 + t^4. \end{align*} Finally, we treat the only case where it is not obvious how to compute the Poincar\'e polynomial correctly. The main difficulty is computing the polynomial of the quotient of the bundle over the set of points with non-cyclic isotropy. In this single case the representations of $W_K$ both on this set of points and on the fiber are nontrivial, which has to be taken into account. The curve of fixed points of $\mathbb{Z}_3$ contains $12$ points with isotropy $A_4$, which are mapped to $6$ points of the quotient, so the representation of $W_K = \mathbb{Z}_2$ on this set is $6(1 + \varepsilon)$ and the $W_K$-Poincar\'e polynomial is $6(1 + \varepsilon)(1 + (1+\varepsilon)t^2)$. There are also $4$ fixed points of $S_4$. The Poincar\'e polynomial~is \begin{align*} \mu_0((1 + \varepsilon \cdot 2t + t^2)(1 + (1 + \varepsilon)t^2)) - \mu_0((6(1 + \varepsilon) + 4)(1+(1 + \varepsilon)t^2)) =\\= t^4 + 2t^3 - 14t^2 - 9.
\end{align*} For $S_4(2)$, and also for the other investigated representations of $S_4$, the (virtual) Poincar\'e polynomial of the quotient $Y$ is $$P_Y(t) = t^6 + t^4 + 4t^3 + t^2 + 1.$$ The polynomial of the $3$-dimensional stratum can be easily computed using $P_Y$ and the given data. The polynomial of the $2$-dimensional stratum is the sum of the polynomials for all components. The polynomials of the resolutions over points with non-cyclic isotropy, which sum up to the polynomial of the $1$-dimensional stratum, can be taken from the table in \ref{cohomology_X}. Hence the virtual Poincar\'e polynomials of the strata are the following: \begin{align*} P_3(t) &= P_Y(t) - 8(1 + t^2 -4) - (1 + t^2 - 10) - 16 = t^6 + t^4 + 4t^3 - 8t^2 + 18,\\ P_2(t) &= 4(1 + t^2 - 4)(1 + t^2) + 3\mu_0((1 + t^2 - 4(1 + \varepsilon))(1 + t^2)) +\\ &\quad + \mu_0((1 + \varepsilon \cdot 2t + t^2 - 4 - 6(1 + \varepsilon))(1 + (1 + \varepsilon)t^2)) + \\ &\quad + \mu_0((1 + \varepsilon \cdot 2t + t^2 - 4)(1 + (2 + \varepsilon)t^2)) = 10t^4 + 4t^3 - 33t^2 - 33, \\ P_1(t) &= 12(1 + 3t^2) + 4(1 + 4t^2) = 52t^2 + 16, \end{align*} and, finally, the Poincar\'e polynomial of $X$ is $$P_X(t) = t^6 + 11t^4 + 8t^3 + 11t^2 + 1.$$ \subsection{Case of $S_4(3)$}\label{case_S4_3} There is one more interesting detail which does not appear in the previous example, but can be observed in the case of $S_4(3)$. Hence we explain it more carefully, while for the other steps of the computations only general information is given. We choose $G = \left< \left(\begin{array}{rrr} -1 & 0 & 0 \\ 0 & 0 & -1 \\ 0 & -1 & 0 \end{array} \right), \left(\begin{array}{rrr} -1 & 0 & 1 \\ -1 & 1 & 0 \\ -1 & 0 & 0 \end{array} \right)\right>$.
\centerline{\begin{tabular}{|c|c|c|c|c|c|c|} \hline \textbf{group} & \textbf{gen.} & \textbf{equ.} & \textbf{comp.} & $W(g)$ & \textbf{quot.} & $W_K$\\ \hline $\mathbb{Z}_2$ $(A_1)$ & $\left(\begin{array}{rrr} -1 & 0 & 0 \\ 0 & 0 & -1 \\ 0 & -1 & 0 \end{array} \right)$ & $\begin{array}{c} 2e_1 = 0 \\ e_2 = -e_3\\ \end{array}$ & $4$ & $\mathbb{Z}_2$ & $4 \times \mathbb{P}^1$ & $\mathbb{Z}_2$ \\ \hline $\mathbb{Z}_2$ $(A_1)$ & $\left(\begin{array}{rrr} -1 & 0 & 0 \\ -1 & 0 & 1 \\ -1 & 1 & 0 \end{array} \right)$ & $\begin{array}{c} 2e_1 = 0 \\ e_3 = e_1 + e_2 \\ \end{array}$ & $4$ & $\mathbb{Z}_2\times \mathbb{Z}_2$ & $3 \times \mathbb{P}^1$ & $\mathbb{Z}_2 \times \mathbb{Z}_2$ \\ \hline $\mathbb{Z}_3$ $(A_2)$ & $\left(\begin{array}{rrr} -1 & 0 & 1 \\ -1 & 1 & 0 \\ -1 & 0 & 0 \end{array} \right)$ & $\begin{array}{c} e_1 = 0 \\ e_3 = 0 \\ \end{array}$ & $1$ & $\mathbb{Z}_2$ & $1 \times \mathbb{P}^1$ & $\mathbb{Z}_2$ \\ \hline $\mathbb{Z}_4$ $(A_3)$ & $\left(\begin{array}{rrr} 0 & -1 & 1 \\ 0 & 0 & 1 \\ -1 & 0 & 1 \end{array} \right)$ & $\begin{array}{c} e_1 = 0 \\ e_2 = e_3 \\ \end{array}$ & $1$ & $\mathbb{Z}_2$ & $1 \times \mathbb{P}^1$ & $\mathbb{Z}_2$ \\ \hline \end{tabular}} In $A^3$ there are $3$ sets of $12$ points with isotropy $D_4$, mapped to $6$ points of the quotient, two points from each set going to one point of $Y$. There are also $6$ points determined by the normal $D_4$ subgroup, identified in the quotient. The points with isotropy $D_6$ are divided into $4$ sets of $3$ points, mapped to $3$ points of $Y$, because all subgroups of type $D_6$ are conjugate. Similarly, there are $3$ sets of $3$ points with isotropy $D_8$. The action of the normalizer is trivial, and the sets are identified by the quotient map. One point has isotropy $S_4$. In the case of $S_4(3)$ there are curves which go twice through some points. Let us look at the components of the fixed point set of the first $\mathbb{Z}_2$ subgroup. Three of them contain $4$ points with isotropy $D_4$ each.
Take one component and denote it $A_K$, as in \ref{cohomology_Y} and \ref{cohomology_X}. On $A_K$ the points with isotropy $D_4$ are fixed by $W_K$, but they are mapped to $2$ points of the quotient. That is, the identification does not come from the action of $W_K$, so the neighborhoods of these points are not glued by the quotient map. This means that if we take the image of $A_K$ in $Y$ and remove from it the $2$ points with singularity $D_4$, we get $K$, isomorphic to $\mathbb{P}^1$ with $4$ points removed. In other words, the quotient of the elliptic curve $A_K$ by $W_K = \mathbb{Z}_2$ is not isomorphic to the closure of $K$ in $Y$, but only to its normalized closure $\widehat{K}$. Apart from the $4$ points with isotropy $D_4$, three of the curves for the first $\mathbb{Z}_2$ subgroup also contain $2$ points with isotropy $D_6$ each. They are identified by the action of $W_K$. The fourth curve contains $3$ points with isotropy $D_8$ and $1$ with $S_4$, mapped to different points of $Y$. The quotient map on the components for the second $\mathbb{Z}_2$, with $W_K = \mathbb{Z}_2 \times \mathbb{Z}_2$, is a $4$-sheeted cover, as in the previous example. Each of these curves contains $6$ points with isotropy $D_4$ and $2$ with $D_8$, which are pairwise identified by the action of $W_K$. The curve for the $\mathbb{Z}_3$ subgroup contains $3$ points with isotropy $D_6$, mapped to different points of the quotient, and $1$ fixed point of $S_4$. The curve for $\mathbb{Z}_4$ contains $3$ points with isotropy $D_8$, different in $Y$, and $1$ point fixed by $S_4$.
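As for $S_4(2)$, the identifications of points can be cross-checked by orbit sizes $|G|/|G_x| = 24/|G_x|$:
$$\frac{36}{24/4} = 6, \qquad \frac{6}{24/4} = 1, \qquad \frac{12}{24/6} = 3, \qquad \frac{9}{24/8} = 3$$
for the points with isotropy a non-normal $D_4$, the normal $D_4$, a $D_6$ and a $D_8$ respectively. Together with the unique fixed point of $G$ this accounts for the coefficients in $P_1(t)$ below: $7 = 6 + 1$ points of type $D_4$, $3$ of type $D_6$, and $4 = 3 + 1$ points whose fibers have polynomial $1 + 4t^2$ (types $D_8$ and $S_4$).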
Using the same methods as in the case of $S_4(2)$ we give the virtual Poincar\'e polynomials of all strata after resolving the singularities: \begin{align*} P_3(t) &= P_Y(t) - 3(1 + t^2 - 5) - 6(1 + t^2 - 4) - 14 = t^6 + t^4 + 4t^3 - 8t^2 + 17,\\ P_2(t) &= (1 + t^2 - 4)(1 + t^2) + 3\mu_0((1 + \varepsilon \cdot 2t + t^2 - 4 - (1 + \varepsilon))(1 + t^2)) +\\ &\quad + 3\mu_0((1 + \varepsilon \cdot 2t + t^2 - 4(1 + \varepsilon))(1 + t^2)) +\\ &\quad + \mu_0((1 + \varepsilon \cdot 2t + t^2 - 4)(1 + (1 + \varepsilon)t^2)) +\\ &\quad + \mu_0((1 + \varepsilon \cdot 2t + t^2 - 4)(1 + (2 + \varepsilon)t^2)) = 10t^4 + 4t^3 - 24t^2 - 30,\\ P_1(t) &= 7(1 + 3t^2) + 3(1 + 2t^2) + 4(1 + 4t^2) = 43t^2 + 14. \end{align*} The Poincar\'e polynomial of $X$ is $$P_X(t) = t^6 + 11t^4 + 8t^3 + 11t^2 + 1.$$ \section{Results of the computations}\label{results} In this section we collect the results of the computations for all Kummer varieties except the examples from the previous section. Moreover, we provide some information about the action of the finite subgroups of $SL(3, \mathbb{Z})$ on $A^3$, the structure of their quotients (curves of singular points, their equations) and the virtual Poincar\'e polynomials of all strata. \subsection{Cases of $D_4$} For all of the investigated representations of $D_4$ the (virtual) Poincar\'e polynomial of the quotient $Y$ is $$P_Y(t) = t^6 + 3t^4 + 8t^3 + 3t^2 + 1.$$ The group $D_4$ has three normal subgroups of order $2$, which determine curves of points with isotropy $\mathbb{Z}_2$. The isotropy of their intersection points is $D_4$.
\subsubsection{$D_4(1)$}\label{case_D4_1} \enlargethispage{0.5\baselineskip} $G = \left<\left(\begin{array}{rrr} -1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & 1 \end{array} \right), \left(\begin{array}{rrr} 1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & -1 \end{array} \right)\right>$ \centerline{\begin{tabular}{|c|c|c|c|c|c|c|} \hline \textbf{group} & \textbf{gen.} & \textbf{equ.} & \textbf{comp.} & $W(g)$ & \textbf{quot.} & $W_K$\\ \hline $\mathbb{Z}_2$ $(A_1)$ & $\left(\begin{array}{rrr} -1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & 1 \end{array}\right)$ & $\begin{array}{c} 2e_1 = 0 \\ 2e_2 = 0\\ \end{array}$ & $16$ & $\mathbb{Z}_2$ & $16\times \mathbb{P}^1$ & $\mathbb{Z}_2$ \\ \hline $\mathbb{Z}_2$ $(A_1)$ & $\left(\begin{array}{rrr} 1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & -1 \end{array}\right)$ & $\begin{array}{c} 2e_2 = 0 \\ 2e_3 = 0\\ \end{array}$ & $16$ & $\mathbb{Z}_2$ & $16\times \mathbb{P}^1$ & $\mathbb{Z}_2$ \\ \hline $\mathbb{Z}_2$ $(A_1)$ & $\left(\begin{array}{rrr} -1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & -1 \end{array}\right)$ & $\begin{array}{c} 2e_1 = 0 \\ 2e_3 = 0\\ \end{array}$ & $16$ & $\mathbb{Z}_2$ & $16\times \mathbb{P}^1$ & $\mathbb{Z}_2$ \\ \hline \end{tabular}} There are $64$ points with isotropy $D_4$, $4$ on each curve. They are defined by equations $2e_1 = 2e_2 = 2e_3 = 0$. 
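The configuration can be verified by an incidence count (a simple consistency check based on the data above): each of the $64$ points lies on exactly one component of the fixed point set of each of the three $\mathbb{Z}_2$ subgroups, and each of the $48$ quotient curves carries $4$ such points, so
$$64 \cdot 3 = 192 = 48 \cdot 4.$$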
The virtual Poincar\'e polynomials of the strata are \begin{align*} P_3(t) &= P_Y(t) - 48(1 + t^2 - 4) - 64 = t^6 + 3t^4 + 8t^3 - 45t^2 + 81,\\ P_2(t) &= 48(1 + t^2 -4)(1 + t^2) = 48t^4 - 96t^2 - 144,\\ P_1(t) &= 64(1 + 3t^2) = 192t^2 + 64, \end{align*} and the Poincar\'e polynomial of $X$ is $$P_X(t) = t^6 + 51t^4 + 8t^3 + 51t^2 + 1.$$ \subsubsection{$D_4(2)$}\label{case_D4_2} $G = \left<\left(\begin{array}{rrr} -1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & 1 \end{array} \right), \left(\begin{array}{rrr} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & -1 \end{array} \right)\right>$ \centerline{\begin{tabular}{|c|c|c|c|c|c|c|} \hline \textbf{group} & \textbf{gen.} & \textbf{equ.} & \textbf{comp.} & $W(g)$ & \textbf{quot.} & $W_K$\\ \hline $\mathbb{Z}_2$ $(A_1)$ & $\left(\begin{array}{rrr} -1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & 1 \end{array}\right)$ & $\begin{array}{c} 2e_1 = 0 \\ 2e_2 = 0\\ \end{array}$ & $16$ & $\mathbb{Z}_2$ & $\begin{array}{c} 4 \times \mathbb{P}^1 \\ 6 \times A\\ \end{array}$ & $\begin{array}{c} \mathbb{Z}_2 \\ 0\\ \end{array}$ \\ \hline $\mathbb{Z}_2$ $(A_1)$ & $\left(\begin{array}{rrr} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & -1 \end{array}\right)$ & $\begin{array}{c} e_1 = e_2 \\ 2e_3 = 0\\ \end{array}$ & $4$ & $\mathbb{Z}_2$ & $4\times \mathbb{P}^1$ & $\mathbb{Z}_2$ \\ \hline $\mathbb{Z}_2$ $(A_1)$ & $\left(\begin{array}{rrr} 0 & -1 & 0 \\ -1 & 0 & 0 \\ 0 & 0 & -1 \end{array}\right)$ & $\begin{array}{c} e_1 = -e_2 \\ 2e_3 = 0\\ \end{array}$ & $4$ & $\mathbb{Z}_2$ & $4\times \mathbb{P}^1$ & $\mathbb{Z}_2$ \\ \hline \end{tabular}} There are $16$ points with isotropy $D_4$. They are given by equations $2e_1 = 2e_2 = 2e_3 = 0$ and $e_1 = e_2$. Each of the $\mathbb{P}^1$ curves contains $4$ of these points. The elliptic curves do not contain any of them. 
The virtual Poincar\'e polynomials of the strata are \begin{align*} P_3(t) &= P_Y(t) - 12(1 + t^2 - 4) - 6(1 + 2t + t^2) - 16 =\\ &= t^6 + 3t^4 + 8t^3 - 15t^2 -12t + 15,\\ P_2(t) &= 12(1 + t^2 -4)(1 + t^2) + 6(1 + 2t + t^2)(1 + t^2) =\\ &= 18t^4 + 12t^3 - 12t^2 + 12t - 30,\\ P_1(t) &= 16(1 + 3t^2) = 48t^2 + 16, \end{align*} and the Poincar\'e polynomial of $X$ is $$P_X(t) = t^6 + 21t^4 + 20t^3 + 21t^2 + 1.$$ \subsubsection{$D_4(3)$}\label{case_D4_3} $G = \left<\left(\begin{array}{rrr} -1 & 0 & 0 \\ 0 & 0 & -1 \\ 0 & -1 & 0 \end{array} \right), \left(\begin{array}{rrr} 1 & 1 & 1 \\ 0 & -1 & 0 \\ 0 & 0 & -1 \end{array} \right)\right>$ \centerline{\begin{tabular}{|c|c|c|c|c|c|c|} \hline \textbf{group} & \textbf{gen.} & \textbf{equ.} & \textbf{comp.} & $W(g)$ & \textbf{quot.} & $W_K$\\ \hline $\mathbb{Z}_2$ $(A_1)$ & $\left(\begin{array}{rrr} -1 & 0 & 0 \\ 0 & 0 & -1 \\ 0 & -1 & 0 \end{array}\right)$ & $\begin{array}{c} 2e_1 = 0 \\ e_2 = -e_3\\ \end{array}$ & $4$ & $\mathbb{Z}_2$ & $4 \times \mathbb{P}^1$ & $\mathbb{Z}_2$ \\ \hline $\mathbb{Z}_2$ $(A_1)$ & $\left(\begin{array}{rrr} 1 & 1 & 1 \\ 0 & -1 & 0 \\ 0 & 0 & -1 \end{array}\right)$ & $\begin{array}{c} 2e_2 = 0 \\ e_2 = e_3\\ \end{array}$ & $4$ & $\mathbb{Z}_2$ & $4\times \mathbb{P}^1$ & $\mathbb{Z}_2$ \\ \hline $\mathbb{Z}_2$ $(A_1)$ & $\left(\begin{array}{rrr} -1 & -1 & -1 \\ 0 & 0 & 1 \\ 0 & 1 & 0 \end{array}\right)$ & $\begin{array}{c} e_2 = e_3 \\ 2e_1 = -2e_2\\ \end{array}$ & $4$ & $\mathbb{Z}_2$ & $4\times \mathbb{P}^1$ & $\mathbb{Z}_2$ \\ \hline \end{tabular}} There are $16$ points with isotropy $D_4$, $4$ on each curve. They are given by equations $2e_1 = 2e_2 = 0$ and $e_2 = e_3$.
The virtual Poincar\'e polynomials of the strata are \begin{align*} P_3(t) &= P_Y(t) - 12(1 + t^2 - 4) - 16 = t^6 + 3t^4 + 8t^3 - 9t^2 + 21,\\ P_2(t) &= 12(1 + t^2 -4)(1 + t^2) = 12t^4 - 24t^2 - 36,\\ P_1(t) &= 16(1 + 3t^2) = 48t^2 + 16, \end{align*} and the Poincar\'e polynomial of $X$ is $$P_X(t) = t^6 + 15t^4 + 8t^3 + 15t^2 + 1.$$ \subsubsection{$D_4(4)$}\label{case_D4_4} $G = \left<\left(\begin{array}{rrr} -1 & -1 & -1 \\ 0 & 0 & 1 \\ 0 & 1 & 0 \end{array} \right), \left(\begin{array}{rrr} 0 & 0 & 1 \\ -1 & -1 & -1 \\ 1 & 0 & 0 \end{array} \right)\right>$ \centerline{\begin{tabular}{|c|c|c|c|c|c|c|} \hline \textbf{group} & \textbf{gen.} & \textbf{equ.} & \textbf{comp.} & $W(g)$ & \textbf{quot.} & $W_K$\\ \hline $\mathbb{Z}_2$ $(A_1)$ & $\left(\begin{array}{rrr} -1 & -1 & -1 \\ 0 & 0 & 1 \\ 0 & 1 & 0 \end{array}\right)$ & $\begin{array}{c} e_2 = e_3 \\ 2e_1 = -2e_2\\ \end{array}$ & $4$ & $\mathbb{Z}_2$ & $4 \times \mathbb{P}^1$ & $\mathbb{Z}_2$ \\ \hline $\mathbb{Z}_2$ $(A_1)$ & $\left(\begin{array}{rrr} 0 & 0 & 1 \\ -1 & -1 & -1 \\ 1 & 0 & 0 \end{array}\right)$ & $\begin{array}{c} e_1 = e_3 \\ 2e_2 = -2e_1\\ \end{array}$ & $4$ & $\mathbb{Z}_2$ & $4\times \mathbb{P}^1$ & $\mathbb{Z}_2$ \\ \hline $\mathbb{Z}_2$ $(A_1)$ & $\left(\begin{array}{rrr} 0 & 1 & 0 \\ 1 & 0 & 0 \\ -1 & -1 & -1 \end{array}\right)$ & $\begin{array}{c} e_1 = e_2 \\ 2e_3 = -2e_1\\ \end{array}$ & $4$ & $\mathbb{Z}_2$ & $4\times \mathbb{P}^1$ & $\mathbb{Z}_2$ \\ \hline \end{tabular}} There are $16$ points with isotropy $D_4$, $4$ on each curve. They are defined by equations $e_1 = e_2 = e_3$ and $4e_1 = 0$.
The virtual Poincar\'e polynomials of the strata are \begin{align*} P_3(t) &= P_Y(t) - 12(1 + t^2 - 4) - 16 = t^6 + 3t^4 + 8t^3 - 9t^2 + 21,\\ P_2(t) &= 12(1 + t^2 -4)(1 + t^2) = 12t^4 - 24t^2 - 36,\\ P_1(t) &= 16(1 + 3t^2) = 48t^2 + 16, \end{align*} and the Poincar\'e polynomial of $X$ is $$P_X(t) = t^6 + 15t^4 + 8t^3 + 15t^2 + 1.$$ \subsubsection{How do the cases \ref{case_D4_3} and \ref{case_D4_4} differ?}\label{comment_D4} In the cases \ref{case_D4_3} and \ref{case_D4_4} not only the Poincar\'e polynomials but also the numbers of singular curves and points are the same. However, these are different cases, because in the case of $D_4(3)$ the set of points with nontrivial isotropy in $X$ is connected, which is not true for $D_4(4)$. In this section $x$ stands for an arbitrary point of the elliptic curve $A$, and the letters $\alpha$ and $\beta$ denote points which satisfy $2\alpha = 2\beta = 0$. In the case of $D_4(3)$, the points with isotropy $D_4$ are $(\alpha, \beta, \beta)$ for all possible values of $\alpha$ and $\beta$. The components of the fixed point set for the first $\mathbb{Z}_2$ subgroup are parametrized by $(\alpha, x, -x)$. For the second $\mathbb{Z}_2$ the parametrizations of the components are $(x, \beta, \beta)$, so each of these curves intersects all curves determined by the first $\mathbb{Z}_2$. Therefore the sum of the $0$- and $1$-dimensional strata is connected. In the case of $D_4(4)$ there are three families of fixed point curves for the $\mathbb{Z}_2$ subgroups, each containing $4$ curves, one for each value of $\alpha$. They are parametrized by $(\alpha - x, x, x)$, $(x, \alpha - x, x)$ and $(x, x, \alpha - x)$ respectively. The points with isotropy $D_4$ are $(\gamma, \gamma, \gamma)$, where $4\gamma = 0$. A choice of $\alpha$ determines three curves, one from each family, which intersect in the $4$ points such that $2\gamma = \alpha$. Curves corresponding to different values $\alpha_0$ and $\alpha_1$ do not intersect.
Hence the union of the $0$- and $1$-dimensional strata has $4$ connected components, which shows that the groups $D_4(3)$ and $D_4(4)$ cannot be conjugate. \begin{figure} \caption{Structure of the sets of singular points in $Y$ for $D_4(3)$ and $D_4(4)$} \end{figure} Figure 1 presents the unions of the $0$- and $1$-dimensional strata in the discussed cases. Intersection points of curves are marked by black dots. Components determined by the same $\mathbb{Z}_2$ subgroup are drawn in the same line style. Note that by transposing matrices in $D_4(3)$ we obtain a group conjugate to $D_4(4)$. \subsection{Cases of $D_6$} For all of the investigated representations of $D_6 \simeq S_3$ the (virtual) Poincar\'e polynomial of the quotient $Y$ is $$P_Y(t) = t^6 + 2t^4 + 6t^3 + 2t^2 + 1.$$ The symmetric group $S_3$ has a normal subgroup of order $3$. Elements of order $2$ determine three conjugate subgroups $\mathbb{Z}_2$. Points which have non-cyclic isotropy are fixed points of the action of $S_3$. \subsubsection{$D_6(1)$}\label{case_D6_1} $G = \left< \left(\begin{array}{rrr} -1 & 0 & 0 \\ 1 & 1 & 0 \\ 0 & 0 & -1 \end{array} \right), \left(\begin{array}{rrr} -1 & -1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{array} \right)\right>$ \centerline{\begin{tabular}{|c|c|c|c|c|c|c|} \hline \textbf{group} & \textbf{gen.} & \textbf{equ.} & \textbf{comp.} & $W(g)$ & \textbf{quot.} & $W_K$\\ \hline $\mathbb{Z}_2$ $(A_1)$ & $\left(\begin{array}{rrr} -1 & 0 & 0 \\ 1 & 1 & 0 \\ 0 & 0 & -1 \end{array} \right)$ & $\begin{array}{c} e_1 = 0 \\ 2e_3 = 0\\ \end{array}$ & $4$ & $0$ & $4 \times A$ & $0$ \\ \hline $\mathbb{Z}_3$ $(A_2)$ & $\left(\begin{array}{rrr} -1 & -1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{array} \right)$ & $\begin{array}{c} e_1 = e_2 \\ 3e_1 = 0 \\ \end{array}$ & $9$ & $\mathbb{Z}_2$ & $\begin{array}{c} 4\times A \\ 1\times \mathbb{P}^1 \\ \end{array}$ & $\begin{array}{c} 0 \\ \mathbb{Z}_2 \\ \end{array}$ \\ \hline \end{tabular}} Fixed points of the action of $D_6$ are given by equations $e_1 = e_2 = 0$ and
$2e_3 = 0$, so there are $4$ of them. All lie on the $\mathbb{P}^1$ curve of fixed points of $\mathbb{Z}_3$. Each elliptic curve for $\mathbb{Z}_2$ goes through exactly one of these points. Elliptic curves for $\mathbb{Z}_3$ do not contain any points of bigger isotropy. The virtual Poincar\'e polynomials of the strata are \begin{align*} P_3(t) &= P_Y(t) - 4(1 + 2t + t^2 - 1) - 4(1 + 2t + t^2) - (1 + t^2 - 4) - 4 =\\ &= t^6 + 2t^4 + 6t^3 - 7t^2 - 16t - 4,\\ P_2(t) &= 4(1 + 2t + t^2 - 1)(1 + t^2) + 4(1 + 2t + t^2)(1 + 2t^2) +\\ &\quad + \mu_0((1 + \varepsilon \cdot 2t + t^2 - 4)(1 + (1 + \varepsilon)t^2)) = 13t^4 + 26t^3 + 14t^2 + 16t + 1,\\ P_1(t) &= 4(1 + 2t^2) = 8t^2 + 4, \end{align*} and the Poincar\'e polynomial of $X$ is $$P_X(t) = t^6 + 15t^4 + 32t^3 + 15t^2 + 1.$$ \subsubsection{$D_6(2)$}\label{case_D6_2} $G = \left< \left(\begin{array}{rrr} 0 & -1 & 0 \\ -1 & 0 & 0 \\ 0 & 0 & -1 \end{array} \right), \left(\begin{array}{rrr} -1 & 1 & 0 \\ -1 & 0 & 0 \\ 0 & 0 & 1 \end{array} \right)\right>$ \centerline{\begin{tabular}{|c|c|c|c|c|c|c|} \hline \textbf{group} & \textbf{gen.} & \textbf{equ.} & \textbf{comp.} & $W(g)$ & \textbf{quot.} & $W_K$\\ \hline $\mathbb{Z}_2$ $(A_1)$ & $\left(\begin{array}{rrr} 0 & -1 & 0 \\ -1 & 0 & 0 \\ 0 & 0 & -1 \end{array} \right)$ & $\begin{array}{c} e_1 = -e_2 \\ 2e_3 = 0\\ \end{array}$ & $4$ & $0$ & $4 \times A$ & $0$ \\ \hline $\mathbb{Z}_3$ $(A_2)$ & $\left(\begin{array}{rrr} -1 & 1 & 0 \\ -1 & 0 & 0 \\ 0 & 0 & 1 \end{array} \right)$ & $\begin{array}{c} 2e_1 = e_2 \\ 3e_1 = 0 \\ \end{array}$ & $9$ & $\mathbb{Z}_2$ & $9 \times \mathbb{P}^1$ & $\mathbb{Z}_2$ \\ \hline \end{tabular}} Fixed points of the action of $D_6$ are given by equations $3e_1 = 0$, $e_2 = 2e_1$ and $2e_3 = 0$, so there are $36$ of them. There are $9$ of them on each elliptic curve for $\mathbb{Z}_2$ and $4$ on each $\mathbb{P}^1$ for $\mathbb{Z}_3$. 
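Throughout, the $\mu_0$-terms in the formulas for $P_2$ take only two values. Assuming, as the case-by-case totals confirm, that $\mu_0$ extracts the $\varepsilon$-free part of its argument after expanding with $\varepsilon^2 = 1$, we get \begin{align*} \mu_0((1 + \varepsilon \cdot 2t + t^2 - 4)(1 + (1 + \varepsilon)t^2)) &= (t^2 - 3)(1 + t^2) + 2t^3 = t^4 + 2t^3 - 2t^2 - 3,\\ \mu_0((1 + \varepsilon \cdot 2t + t^2 - 4)(1 + (2 + \varepsilon)t^2)) &= (t^2 - 3)(1 + 2t^2) + 2t^3 = 2t^4 + 2t^3 - 5t^2 - 3, \end{align*} and these are the values substituted in the computations above and below.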
The virtual Poincar\'e polynomials of the strata are the following: \begin{align*} P_3(t) &= P_Y(t) - 4(1 + 2t + t^2 - 9) - 9(1 + t^2 - 4) - 36 =\\ &= t^6 + 2t^4 + 6t^3 - 11t^2 - 8t + 24,\\ P_2(t) &= 4(1 + 2t + t^2 - 9)(1 + t^2) + 9\mu_0((1 + \varepsilon \cdot 2t + t^2 - 4)(1 + (1 + \varepsilon)t^2)) = \\ &= 13t^4 + 26t^3 - 46t^2 + 8t - 59,\\ P_1(t) &= 36(1 + 2t^2) = 72t^2 + 36, \end{align*} and the Poincar\'e polynomial of $X$ is $$P_X(t) = t^6 + 15t^4 + 32t^3 + 15t^2 + 1.$$ \subsubsection{$D_6(3)$}\label{case_D6_3} $G = \left< \left(\begin{array}{rrr} -1 & 0 & 0 \\ 0 & 0 & -1 \\ 0 & -1 & 0 \end{array} \right), \left(\begin{array}{rrr} 0 & -1 & 0 \\ 0 & 0 & 1 \\ -1 & 0 & 0 \end{array} \right)\right>$ \centerline{\begin{tabular}{|c|c|c|c|c|c|c|} \hline \textbf{group} & \textbf{gen.} & \textbf{equ.} & \textbf{comp.} & $W(g)$ & \textbf{quot.} & $W_K$\\ \hline $\mathbb{Z}_2$ $(A_1)$ & $\left(\begin{array}{rrr} -1 & 0 & 0 \\ 0 & 0 & -1 \\ 0 & -1 & 0 \end{array} \right)$ & $\begin{array}{c} 2e_1 = 0 \\ e_2 = -e_3\\ \end{array}$ & $4$ & $0$ & $4 \times A$ & $0$ \\ \hline $\mathbb{Z}_3$ $(A_2)$ & $\left(\begin{array}{rrr} 0 & -1 & 0 \\ 0 & 0 & 1 \\ -1 & 0 & 0 \end{array} \right)$ & $\begin{array}{c} e_1 = -e_2 \\ e_2 = e_3 \\ \end{array}$ & $1$ & $\mathbb{Z}_2$ & $1\times \mathbb{P}^1$ & $\mathbb{Z}_2$ \\ \hline \end{tabular}} Fixed points of the action are defined by equations $2e_1 = 0$ and $e_1 = e_2 = e_3$, so there are $4$ of them. Each lies on the $\mathbb{P}^1$ curve of fixed points of the $\mathbb{Z}_3$ action and on one of the elliptic curves for $\mathbb{Z}_2$. 
The virtual Poincar\'e polynomials of the strata are the following: \begin{align*} P_3(t) &= P_Y(t) - 4(1 + 2t + t^2 - 1) - (1 + t^2 - 4) - 4 = t^6 + 2t^4 + 6t^3 - 3t^2 - 8t,\\ P_2(t) &= 4(1 + 2t + t^2 - 1)(1 + t^2) + \mu_0((1 + \varepsilon \cdot 2t + t^2 - 4)(1 + (1 + \varepsilon)t^2)) = \\ &= 5t^4 + 10t^3 + 2t^2 + 8t - 3,\\ P_1(t) &= 4(1 + 2t^2) = 8t^2 + 4, \end{align*} and the Poincar\'e polynomial of $X$ is $$P_X(t) = t^6 + 7t^4 + 16t^3 + 7t^2 + 1.$$ \subsection{Cases of $D_8$} For both investigated representations of $D_8$ the (virtual) Poincar\'e polynomial of the quotient $Y$ is $$P_Y(t) = t^6 + 2t^4 + 6t^3 + 2t^2 + 1.$$ The group $D_8$ has a normal subgroup $\mathbb{Z}_4$, which contains a $\mathbb{Z}_2$ subgroup, also normal in $D_8$, with Weyl group $\mathbb{Z}_2\times\mathbb{Z}_2$. There are two other classes of $\mathbb{Z}_2$ subgroups, each containing $2$ groups, with Weyl group $\mathbb{Z}_2$. Points with non-cyclic isotropy are fixed by the whole $D_8$ or only by one of $2$ non-conjugate subgroups isomorphic to $D_4$.
\subsubsection{$D_8(1)$}\label{case_D8_1} $G = \left< \left(\begin{array}{rrr} -1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & 1 \end{array} \right), \left(\begin{array}{rrr} 0 & 0 & -1 \\ 0 & 1 & 0 \\ 1 & 0 & 0 \end{array} \right)\right>$\\ \centerline{\begin{tabular}{|c|c|c|c|c|c|c|} \hline \textbf{group} & \textbf{gen.} & \textbf{equ.} & \textbf{comp.} & $W(g)$ & \textbf{quot.} & $W_K$\\ \hline $\mathbb{Z}_2$ $(A_1)$ & $\left(\begin{array}{rrr} -1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & 1 \end{array} \right)$ & $\begin{array}{c} 2e_1 = 0 \\ 2e_2 = 0\\ \end{array}$ & $16$ & $\mathbb{Z}_2$ & $16 \times \mathbb{P}^1$ & $\mathbb{Z}_2$ \\ \hline $\mathbb{Z}_2$ $(A_1)$ & $\left(\begin{array}{rrr} 0 & 0 & -1 \\ 0 & -1 & 0 \\ -1 & 0 & 0 \end{array} \right)$ & $\begin{array}{c} e_1 = -e_3 \\ 2e_2 = 0\\ \end{array}$ & $4$ & $\mathbb{Z}_2$ & $4 \times \mathbb{P}^1$ & $\mathbb{Z}_2$ \\ \hline $\mathbb{Z}_2$ $(A_1)$ & $\left(\begin{array}{rrr} -1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & -1 \end{array} \right)$ & $\begin{array}{c} 2e_1 = 0 \\ 2e_3 = 0\\ \end{array}$ & $16$ & $\mathbb{Z}_2\times\mathbb{Z}_2$ & $6 \times \mathbb{P}^1$ & $\mathbb{Z}_2$ \\ \hline $\mathbb{Z}_4$ $(A_3)$ & $\left(\begin{array}{rrr} 0 & 0 & -1 \\ 0 & 1 & 0 \\ 1 & 0 & 0 \end{array} \right)$ & $\begin{array}{c} 2e_1 = 0 \\ e_1 = e_3 \\ \end{array}$ & $4$ & $\mathbb{Z}_2$ & $4\times \mathbb{P}^1$ & $\mathbb{Z}_2$ \\ \hline \end{tabular}} In this case $64$ points given by equations $2e_1 = 2e_2 = 2e_3 = 0$ have non-cyclic isotropy. There are $48$ points with isotropy $D_4$, which are mapped to $24$ points of the quotient, two to one. The remaining $16$ points are fixed by $D_8$. Each $\mathbb{P}^1$ curve determined by the first $\mathbb{Z}_2$ subgroup contains $3$ points with isotropy $D_4$ and $1$ fixed by $D_8$. There are $4$ points with isotropy $D_8$ on each curve for the second $\mathbb{Z}_2$. Fixed point set of the normal $\mathbb{Z}_2$ subgroup has $16$ components. 
Four of them consist of points with isotropy $\mathbb{Z}_4$; we now look at the other $12$. The Weyl group $\mathbb{Z}_2 \times \mathbb{Z}_2$ acts on this set by an involution on each component and by identifying pairs of curves. Hence they are mapped to $6$ copies of $\mathbb{P}^1$ in $Y$ and the quotient map is a double cover on each component. Each of the $12$ curves contains $4$ points with isotropy $D_4$. The curves determined by $\mathbb{Z}_4$ contain $4$ points with isotropy $D_8$ each. The virtual Poincar\'e polynomials of the strata are the following: \begin{align*} P_3(t) &= P_Y(t) - 30(1 + t^2 - 4) - 40 = t^6 + 2t^4 + 6t^3 - 28t^2 + 51,\\ P_2(t) &= 26(1 + t^2 - 4)(1 + t^2) + 4\mu_0((1 + \varepsilon \cdot 2t + t^2 - 4)(1 + (2 + \varepsilon)t^2)) =\\ &= 34t^4 + 8t^3 - 72t^2 - 90,\\ P_1(t) &= 24(1 + 3t^2) + 16(1 + 4t^2) = 136t^2 + 40, \end{align*} and the Poincar\'e polynomial of $X$ is $$P_X(t) = t^6 + 36t^4 + 14t^3 + 36t^2 + 1.$$ \subsubsection{$D_8(2)$}\label{case_D8_2} $G = \left< \left(\begin{array}{rrr} -1 & -1 & -1 \\ 0 & 0 & 1 \\ 0 & 1 & 0 \end{array} \right), \left(\begin{array}{rrr} 0 & -1 & 0 \\ 0 & 0 & -1 \\ 1 & 1 & 1 \end{array} \right)\right>$ \centerline{\begin{tabular}{|c|c|c|c|c|c|c|} \hline \textbf{group} & \textbf{gen.} & \textbf{equ.} & \textbf{comp.} & $W(g)$ & \textbf{quot.} & $W_K$\\ \hline $\mathbb{Z}_2$ $(A_1)$ & $\left(\begin{array}{rrr} -1 & -1 & -1 \\ 0 & 0 & 1 \\ 0 & 1 & 0 \end{array} \right)$ & $\begin{array}{c} e_2 = e_3 \\ 2e_1 = -2e_2\\ \end{array}$ & $4$ & $\mathbb{Z}_2$ & $4 \times \mathbb{P}^1$ & $\mathbb{Z}_2$ \\ \hline $\mathbb{Z}_2$ $(A_1)$ & $\left(\begin{array}{rrr} -1 & 0 & 0 \\ 1 & 1 & 1 \\ 0 & 0 & -1 \end{array} \right)$ & $\begin{array}{c} 2e_1 = 0 \\ e_3 = e_1\\ \end{array}$ & $4$ & $\mathbb{Z}_2$ & $4 \times \mathbb{P}^1$ & $\mathbb{Z}_2$ \\ \hline $\mathbb{Z}_2$ $(A_1)$ & $\left(\begin{array}{rrr} 0 & 0 & 1 \\ -1 & -1 & -1 \\ 1 & 0 & 0 \end{array} \right)$ & $\begin{array}{c} e_1 = e_3 \\ 2e_2 = -2e_1\\
\end{array}$ & $4$ & $\mathbb{Z}_2\times\mathbb{Z}_2$ & $3 \times \mathbb{P}^1$ & $\mathbb{Z}_2 \times \mathbb{Z}_2$ \\ \hline $\mathbb{Z}_4$ $(A_3)$ & $\left(\begin{array}{rrr} 0 & -1 & 0 \\ 0 & 0 & -1 \\ 1 & 1 & 1 \end{array} \right)$ & $\begin{array}{c} e_1 = -e_2 \\ e_1 = e_3 \\ \end{array}$ & $1$ & $\mathbb{Z}_2$ & $1\times \mathbb{P}^1$ & $\mathbb{Z}_2$ \\ \hline \end{tabular}} Each of the $D_4$ subgroups fixes $16$ points. The first set is given by equations $e_1 = e_2 = e_3$ and $4e_1 = 0$, the second by $e_1 = e_3$ and $2e_1 = 2e_2 = 0$. The intersection of these sets consists of $4$ points with isotropy $D_8$. The remaining $24$ points with isotropy $D_4$ are mapped to $12$ points of the quotient, two to one. Three of the curves determined by the first $\mathbb{Z}_2$ subgroup contain $4$ points with isotropy $D_4$ each. These points are pairwise identified in the quotient, but the identification does not come from the action of $W_K$ (as in \ref{case_S4_3}). The fourth curve contains $4$ points with isotropy $D_8$. Each curve for the second $\mathbb{Z}_2$ contains $3$ points with isotropy $D_4$ and one with isotropy $D_8$. The quotient map on the curves determined by the normal $\mathbb{Z}_2$ subgroup is a $4$-sheeted cover (as in \ref{case_S4_2}). There are $8$ points with isotropy $D_4$ on each of these curves, $4$ for each $D_4$ subgroup. Pairs of these points are identified by the action of a $\mathbb{Z}_2$ subgroup of the Weyl group, as in the cases of $S_4(2)$ and $S_4(3)$. All points fixed by $D_8$ lie on the $\mathbb{P}^1$ determined by $\mathbb{Z}_4$.
The virtual Poincar\'e polynomials of the strata are the following: \begin{align*} P_3(t) &= P_Y(t) - 12(1 + t^2 - 4) - 16 = t^6 + 2t^4 + 6t^3 - 10t^2 + 21,\\ P_2(t) &= 11(1 + t^2 - 4)(1 + t^2) + \mu_0((1 + \varepsilon \cdot 2t + t^2 - 4)(1 + (2 + \varepsilon)t^2)) =\\ &= 13t^4 + 2t^3 - 27t^2 - 36,\\ P_1(t) &= 12(1 + 3t^2) + 4(1 + 4t^2) = 52t^2 + 16, \end{align*} and the Poincar\'e polynomial of $X$ is $$P_X(t) = t^6 + 15t^4 + 8t^3 + 15t^2 + 1.$$ \subsection{Case of $D_{12}$}\label{case_D12} There is only one $\mathbb{Z}$-class of subgroups of $SL(3, \mathbb{Z})$ isomorphic to $D_{12}$. The (virtual) Poincar\'e polynomial of the quotient $Y$ is $$P_Y(t) = t^6 + 2t^4 + 6t^3 + 2t^2 + 1.$$ The group $D_{12}$ has normal subgroup $\mathbb{Z}_6$, which contains $\mathbb{Z}_3$ and $\mathbb{Z}_2$, also normal. Other $\mathbb{Z}_2$ subgroups divide into $2$ classes. There are also $3$ conjugate subgroups isomorphic to $D_4$ and $2$ normal $D_6$ subgroups. The chosen representative is $G = \left< \left(\begin{array}{rrr} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & -1 \end{array} \right), \left(\begin{array}{rrr} 0 & 1 & 0 \\ -1 & 1 & 0 \\ 0 & 0 & 1 \end{array} \right)\right>$. 
\centerline{\begin{tabular}{|c|c|c|c|c|c|c|} \hline \textbf{group} & \textbf{gen.} & \textbf{equ.} & \textbf{comp.} & $W(g)$ & \textbf{quot.} & $W_K$\\ \hline $\mathbb{Z}_2$ $(A_1)$ & $\left(\begin{array}{rrr} -1 & 0 & 0 \\ -1 & 1 & 0 \\ 0 & 0 & -1 \end{array} \right)$ & $\begin{array}{c} e_1 = 0 \\ 2e_3 = 0\\ \end{array}$ & $4$ & $\mathbb{Z}_2$ & $4 \times \mathbb{P}^1$ & $\mathbb{Z}_2$ \\ \hline $\mathbb{Z}_2$ $(A_1)$ & $\left(\begin{array}{rrr} -1 & 1 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & -1 \end{array} \right)$ & $\begin{array}{c} 2e_1 = e_2 \\ 2e_3 = 0 \\ \end{array}$ & $4$ & $\mathbb{Z}_2$ & $4 \times \mathbb{P}^1$ & $\mathbb{Z}_2$ \\ \hline $\mathbb{Z}_2$ $(A_1)$ & $\left(\begin{array}{rrr} -1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & 1 \end{array} \right)$ & $\begin{array}{c} 2e_1 = 0 \\ 2e_2 = 0 \\ \end{array}$ & $16$ & $D_6$ & $\begin{array}{c} 3 \times \mathbb{P}^1 \\ 1 \times A \\ \end{array}$ & $\begin{array}{c} \mathbb{Z}_2 \\ 0 \\ \end{array}$ \\ \hline $\mathbb{Z}_3$ $(A_2)$ & $\left(\begin{array}{rrr} -1 & 1 & 0 \\ -1 & 0 & 0 \\ 0 & 0 & 1 \end{array} \right)$ & $\begin{array}{c} 2e_1 = e_2 \\ 3e_1 = 0 \\ \end{array}$ & $9$ & $\mathbb{Z}_2\times \mathbb{Z}_2$ & $4 \times \mathbb{P}^1$ & $\mathbb{Z}_2$ \\ \hline $\mathbb{Z}_6$ $(A_5)$ & $\left(\begin{array}{rrr} 0 & 1 & 0 \\ -1 & 1 & 0 \\ 0 & 0 & 1 \end{array} \right)$ & $\begin{array}{c} e_1 = 0 \\ e_2 = 0 \\ \end{array}$ & $1$ & $\mathbb{Z}_2$ & $1 \times \mathbb{P}^1$ & $\mathbb{Z}_2$ \\ \hline \end{tabular}} There are $36$ points with isotropy $D_4$, mapped to $12$ points of the quotient (triples are identified by the action of $\mathbb{Z}_3$). Next, there are $32$ points with isotropy $D_6$, mapped to $16$ points of $Y$ (pairs are identified), and $4$ points fixed by $D_{12}$. Each curve determined by the first $\mathbb{Z}_2$ subgroup contains $3$ points with isotropy $D_4$ and $1$ with isotropy $D_{12}$. 
Each curve for the second $\mathbb{Z}_2$ subgroup also contains $3$ points with isotropy $D_4$ and $1$ fixed by $D_{12}$, and moreover $8$ with isotropy $D_6$, which are mapped to $4$ points of $Y$ (identification comes from the action of $W_K$). As for the normal $\mathbb{Z}_2$ subgroup, there are $4$ points with isotropy $D_4$ on each of its $\mathbb{P}^1$ curves and no points with non-cyclic isotropy on its elliptic curve. The $\mathbb{P}^1$ curves determined by $\mathbb{Z}_3$ contain $4$ points with isotropy $D_6$ each. Finally, all points fixed by $D_{12}$ lie on the curve of fixed points of $\mathbb{Z}_6$. The virtual Poincar\'e polynomials of the strata are the following: \begin{align*} P_3(t) &= P_Y(t) - 4(1 + t^2 - 4) - 4(1 + t^2 - 8) - 3(1 + t^2 - 4) - (1 + 2t + t^2) -\\ &\quad -4(1 + t^2 - 4) - (1 + t^2 - 4) - 32 = t^6 + 2t^4 + 6t^3 - 15t^2 - 2t + 32,\\ P_2(t) &= 7(1 - 4 + (2 - 4)t^2 + t^4) + 4(1 - 4 - 4 + (2 - 4 - 4)t^2 + t^4) +\\ &\quad + (1 + 2t + 2t^2 + 2t^3 + t^4) + 4(1 - 4 + (2 - 4)t^2 + 2t^3 + t^4) + \\ &\quad + (1 - 4 + ((1 - 4)\cdot 3 + 1)t^2 + 4t^3 + 3t^4) = 19t^4 + 14t^3 - 52t^2 + 2t - 63,\\ P_1(t) &= 12(1 + 3t^2) + 16(1 + 2t^2) + 4(1 + 5t^2) = 88t^2 + 32, \end{align*} and the Poincar\'e polynomial of $X$ is $$P_X(t) = t^6 + 21t^4 + 20t^3 + 21t^2 + 1.$$ \subsection{Cases of $A_4$} For all of the investigated representations of $A_4$ the (virtual) Poincar\'e polynomial of the quotient $Y$ is $$P_Y(t) = t^6 + t^4 + 4t^3 + t^2 + 1.$$ In $A_4$ all cyclic subgroups of order $3$ are conjugate, and the same applies to the subgroups of order $2$. There is also one non-cyclic subgroup, containing all elements of order $2$, isomorphic to $D_4$ and normal.
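Recall the element count in $A_4$: besides the identity there are $3$ double transpositions of order $2$ and $8$ three-cycles of order $3$, so $|A_4| = 1 + 3 + 8 = 12$; the double transpositions together with the identity form the normal Klein four-group, denoted $D_4$ in our notation.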
\subsubsection{$A_4(1)$}\label{case_A4_1} $G = \left< \left(\begin{array}{rrr} -1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & 1 \end{array} \right), \left(\begin{array}{rrr} 0 & 0 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{array} \right)\right>$ \centerline{\begin{tabular}{|c|c|c|c|c|c|c|} \hline \textbf{group} & \textbf{gen.} & \textbf{equ.} & \textbf{comp.} & $W(g)$ & \textbf{quot.} & $W_K$\\ \hline $\mathbb{Z}_2$ $(A_1)$ & $\left(\begin{array}{rrr} -1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & 1 \end{array} \right)$ & $\begin{array}{c} 2e_1 = 0 \\ 2e_2 = 0\\ \end{array}$ & $16$ & $\mathbb{Z}_2$ & $16 \times \mathbb{P}^1$ & $\mathbb{Z}_2$ \\ \hline $\mathbb{Z}_3$ $(A_2)$ & $\left(\begin{array}{rrr} 0 & 0 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{array} \right)$ & $\begin{array}{c} e_1 = e_2 \\ e_2 = e_3 \\ \end{array}$ & $1$ & $0$ & $1\times A$ & $0$ \\ \hline \end{tabular}} There are $64$ points with isotropy $D_4$, given by equations $2e_1 = 2e_2 = 2e_3 = 0$. Elements of order $3$ act on this set by cyclic permutation of coordinates, so in the quotient triples of points are identified, except $4$ points fixed by the action of $A_4$. Hence the $0$-dimensional stratum consists of $4$ points with isotropy $A_4$, which lie on the elliptic curve of fixed points of $\mathbb{Z}_3$, and $20$ points with isotropy $D_4$. Each curve of fixed points of $\mathbb{Z}_2$ contains $4$ points of $0$-dimensional stratum. 
The virtual Poincar\'e polynomials of the strata are the following: \begin{align*} P_3(t) &= P_Y(t) - 16(1 + t^2 - 4) - (1 + 2t + t^2 - 4) - 24 =\\ &= t^6 + t^4 + 4t^3 - 16t^2 - 2t + 28,\\ P_2(t) &= 16(1 + t^2 - 4)(1 + t^2) + (1 + 2t + t^2 - 4)(1 + 2t^2) =\\ &= 18t^4 + 4t^3 -37t^2 + 2t - 51,\\ P_1(t) &= 20(1 + 3t^2) + 4(1 + 3t^2) = 72t^2 + 24, \end{align*} and the Poincar\'e polynomial of $X$ is $$P_X(t) = t^6 + 19t^4 + 8t^3 + 19t^2 + 1.$$ \subsubsection{$A_4(2)$}\label{case_A4_2} $G = \left< \left(\begin{array}{rrr} -1 & -1 & -1 \\ 0 & 0 & 1 \\ 0 & 1 & 0 \end{array} \right), \left(\begin{array}{rrr} 0 & 0 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{array} \right)\right>$ \centerline{\begin{tabular}{|c|c|c|c|c|c|c|} \hline \textbf{group} & \textbf{gen.} & \textbf{equ.} & \textbf{comp.} & $W(g)$ & \textbf{quot.} & $W_K$\\ \hline $\mathbb{Z}_2$ $(A_1)$ & $\left(\begin{array}{rrr} -1 & -1 & -1 \\ 0 & 0 & 1 \\ 0 & 1 & 0 \end{array} \right)$ & $\begin{array}{c} e_2 = e_3 \\ 2e_1 = -2e_2\\ \end{array}$ & $4$ & $\mathbb{Z}_2$ & $4 \times \mathbb{P}^1$ & $\mathbb{Z}_2$ \\ \hline $\mathbb{Z}_3$ $(A_2)$ & $\left(\begin{array}{rrr} 0 & 0 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{array} \right)$ & $\begin{array}{c} e_1 = e_2 \\ e_2 = e_3 \\ \end{array}$ & $1$ & $0$ & $1\times A$ & $0$ \\ \hline \end{tabular}} There are $16$ fixed points of $D_4$, they satisfy $e_1 = e_2 = e_3$ and $4e_1 = 0$. The chosen generator of $\mathbb{Z}_3$ acts trivially on this set, so these points are fixed by $A_4$. They all lie on the elliptic curve for $\mathbb{Z}_3$, and on each $\mathbb{P}^1$ of fixed points of $\mathbb{Z}_2$ there are $4$ of them. 
The virtual Poincar\'e polynomials of the strata are the following: \begin{align*} P_3(t) &= P_Y(t) - 4(1 + t^2 - 4) - (1 + 2t + t^2 - 16) - 16 =\\ &= t^6 + t^4 + 4t^3 - 4t^2 - 2t + 12,\\ P_2(t) &= 4(1 + t^2 - 4)(1 + t^2) + (1 + 2t + t^2 - 16)(1 + 2t^2) =\\ &= 6t^4 + 4t^3 -37t^2 + 2t - 27,\\ P_1(t) &= 16(1 + 3t^2) = 48t^2 + 16, \end{align*} and the Poincar\'e polynomial of $X$ is $$P_X(t) = t^6 + 7t^4 + 8t^3 + 7t^2 + 1.$$ \subsubsection{$A_4(3)$}\label{case_A4_3} $G = \left< \left(\begin{array}{rrr} -1 & 0 & 0 \\ -1 & 0 & 1 \\ -1 & 1 & 0 \end{array} \right), \left(\begin{array}{rrr} 0 & 0 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{array} \right)\right>$ \centerline{\begin{tabular}{|c|c|c|c|c|c|c|} \hline \textbf{group} & \textbf{gen.} & \textbf{equ.} & \textbf{comp.} & $W(g)$ & \textbf{quot.} & $W_K$\\ \hline $\mathbb{Z}_2$ $(A_1)$ & $\left(\begin{array}{rrr} -1 & 0 & 0 \\ -1 & 0 & 1 \\ -1 & 1 & 0 \end{array} \right)$ & $\begin{array}{c} 2e_1 = 0 \\ e_3 = e_1 + e_2\\ \end{array}$ & $4$ & $\mathbb{Z}_2$ & $4 \times \mathbb{P}^1$ & $\mathbb{Z}_2$ \\ \hline $\mathbb{Z}_3$ $(A_2)$ & $\left(\begin{array}{rrr} 0 & 0 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{array} \right)$ & $\begin{array}{c} e_1 = e_2 \\ e_2 = e_3 \\ \end{array}$ & $1$ & $0$ & $1\times A$ & $0$ \\ \hline \end{tabular}} Fixed points of $D_4$ are described by equations $2e_1 = 2e_2 = 0$ and $e_3 = e_1 + e_2$; there are $16$ of them. One is fixed by $A_4$. The remaining $15$ are permuted by the action of $\mathbb{Z}_3$, so in the quotient we get $5$ points with isotropy $D_4$. The elliptic curve of fixed points of $\mathbb{Z}_3$ contains the fixed point of $A_4$. On each of the $\mathbb{P}^1$ curves there are $4$ points with non-cyclic isotropy. One of these curves contains the fixed point of $A_4$ and $3$ more points. The other three curves each contain $4$ points with isotropy $D_4$, two of which are identified in the quotient, but not as a result of the normalizer's action.
Hence each of the images of these curves contains three points with non-cyclic isotropy, and passes twice through one of them. The virtual Poincar\'e polynomials of the strata are the following: \begin{align*} P_3(t) &= P_Y(t) - 4(1 + t^2 - 4) - (1 + 2t + t^2 -1) - 6 = t^6 + t^4 + 4t^3 - 4t^2 - 2t + 7,\\ P_2(t) &= 4(1 + t^2 - 4)(1 + t^2) + (1 + 2t + t^2 - 1)(1 + 2t^2) =\\ &= 6t^4 + 4t^3 - 7t^2 + 2t -12,\\ P_1(t) &= 5(1 + 3t^2) + (1 + 3t^2) = 18t^2 + 6, \end{align*} and the Poincar\'e polynomial of $X$ is $$P_X(t) = t^6 + 7t^4 + 8t^3 + 7t^2 + 1.$$ \subsection{Case of $S_4(1)$}\label{case_S4_1} $G = \left< \left(\begin{array}{rrr} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & -1 \end{array} \right), \left(\begin{array}{rrr} 0 & 0 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{array} \right)\right>$ This case is computed in \cite{AW}, but we repeat the results to complete the survey. \centerline{\begin{tabular}{|c|c|c|c|c|c|c|} \hline \textbf{group} & \textbf{gen.} & \textbf{equ.} & \textbf{comp.} & $W(g)$ & \textbf{quot.} & $W_K$\\ \hline $\mathbb{Z}_2$ $(A_1)$ & $\left(\begin{array}{rrr} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & -1 \end{array} \right)$ & $\begin{array}{c} e_1 = e_2 \\ 2e_3 = 0\\ \end{array}$ & $4$ & $\mathbb{Z}_2$ & $4 \times \mathbb{P}^1$ & $\mathbb{Z}_2$ \\ \hline $\mathbb{Z}_2$ $(A_1)$ & $\left(\begin{array}{rrr} -1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & 1 \end{array} \right)$ & $\begin{array}{c} 2e_1 = 0 \\ 2e_2 = 0 \\ \end{array}$ & $16$ & $\mathbb{Z}_2\times \mathbb{Z}_2$ & $6 \times \mathbb{P}^1$ & $\mathbb{Z}_2$ \\ \hline $\mathbb{Z}_3$ $(A_2)$ & $\left(\begin{array}{rrr} 0 & 0 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{array} \right)$ & $\begin{array}{c} e_1 = e_3 \\ e_1 = e_2 \\ \end{array}$ & $1$ & $\mathbb{Z}_2$ & $1 \times \mathbb{P}^1$ & $\mathbb{Z}_2$ \\ \hline $\mathbb{Z}_4$ $(A_3)$ & $\left(\begin{array}{rrr} 0 & -1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{array} \right)$ & $\begin{array}{c} e_1 = e_2 \\ 2e_1 = 0 \\ \end{array}$ & $4$ & $\mathbb{Z}_2$ & $4 \times \mathbb{P}^1$ &
$\mathbb{Z}_2$ \\ \hline \end{tabular}} The set of points with non-cyclic isotropy consists of $24$ points for $D_4$, which become $4$ points in the quotient. Next, there are $36$ points with isotropy $D_6$ in $3$ families associated to the subgroups. The image of this set contains $12$ points. There are also $4$ points fixed by $S_4$. Each curve for the first $\mathbb{Z}_2$ class in the table contains $3$ points with isotropy $D_8$, not identified in the quotient, and $1$ point fixed by $S_4$. Each curve for the second $\mathbb{Z}_2$ class contains $2$ points with isotropy $D_4$ and $2$ with isotropy $D_8$, mapped to different points of $Y$. All fixed points of $S_4$ lie on the $\mathbb{P}^1$ for $\mathbb{Z}_3$. On each curve for $\mathbb{Z}_4$ there are $3$ points with isotropy $D_8$, not identified in $Y$, and $1$ fixed by $S_4$. The virtual Poincar\'e polynomials of the strata are the following: \begin{align*} P_3(t) &= P_Y(t) - 15(1 + t^2 - 4) - 20 = t^6 + t^4 + 4t^3 - 14t^2 + 26,\\ P_2(t) &= 10(1 + t^2 - 4)(1 + t^2) + \mu_0((1 + \varepsilon \cdot 2t + t^2 - 4)(1 + (1 + \varepsilon)t^2)) +\\ &\quad + 4\mu_0((1 + \varepsilon \cdot 2t + t^2 - 4)(1 + (2 + \varepsilon)t^2)) = 19t^4 + 10t^3 - 42t^2 - 45,\\ P_1(t) &= 16(1 + 4t^2) + 4(1 + 3t^2) = 76t^2 + 20, \end{align*} and the Poincar\'e polynomial of $X$ is $$P_X(t) = t^6 + 20t^4 + 14t^3 + 20t^2 + 1.$$ \subsection{Summary}\label{summary} The following theorem summarizes the results of this paper. Its proof consists of the computations presented in sections \ref{examples} and \ref{results}. \begin{thm}\label{last_theorem} The complete list of Poincar\'e polynomials of Kummer 3-folds is given in Table 2.
\end{thm} \begin{table}[!hp] \renewcommand{\arraystretch}{2} \centerline{\begin{tabular}{ccc} \textbf{group} & \textbf{section} & \textbf{polynomial} \\ \hline $D_4(1)$ & \ref{case_D4_1} & $t^6 + 51t^4 + 8t^3 + 51t^2 + 1$ \\ $D_4(2)$ & \ref{case_D4_2} & $t^6 + 21t^4 + 20t^3 + 21t^2 + 1$ \\ $D_4(3)$ & \ref{case_D4_3} & $t^6 + 15t^4 + 8t^3 + 15t^2 + 1$ \\ $D_4(4)$ & \ref{case_D4_4} & $t^6 + 15t^4 + 8t^3 + 15t^2 + 1$ \\ \hline $D_6(1)$ & \ref{case_D6_1} & $t^6 + 15t^4 + 32t^3 + 15t^2 + 1$ \\ $D_6(2)$ & \ref{case_D6_2} & $t^6 + 15t^4 + 32t^3 + 15t^2 + 1$ \\ $D_6(3)$ & \ref{case_D6_3} & $t^6 + 7t^4 + 16t^3 + 7t^2 + 1$ \\ \hline $D_8(1)$ & \ref{case_D8_1} & $t^6 + 36t^4 + 14t^3 + 36t^2 + 1$ \\ $D_8(2)$ & \ref{case_D8_2} & $t^6 + 15t^4 + 8t^3 + 15t^2 + 1$ \\ \hline $D_{12}$ & \ref{case_D12} & $t^6 + 21t^4 + 20t^3 + 21t^2 + 1$ \\ \hline $A_4(1)$ & \ref{case_A4_1} & $t^6 + 19t^4 + 8t^3 + 19t^2 + 1$ \\ $A_4(2)$ & \ref{case_A4_2} & $t^6 + 7t^4 + 8t^3 + 7t^2 + 1$ \\ $A_4(3)$ & \ref{case_A4_3} & $t^6 + 7t^4 + 8t^3 + 7t^2 + 1$ \\ \hline $S_4(1)$ & \ref{case_S4_1} & $t^6 + 20t^4 + 14t^3 + 20t^2 + 1$ \\ $S_4(2)$ & \ref{case_S4_2} & $t^6 + 11t^4 + 8t^3 + 11t^2 + 1$ \\ $S_4(3)$ & \ref{case_S4_3} & $t^6 + 11t^4 + 8t^3 + 11t^2 + 1$ \\ \end{tabular}} \renewcommand{\arraystretch}{1} \caption{Poincar\'e polynomials of Kummer 3-folds} \end{table} Poincar\'e polynomials are not sufficient to distinguish varieties obtained by the $3$-dimensional Kummer construction. The question whether Kummer $3$-folds with equal Poincar\'e polynomials are isomorphic has not been solved yet. We only know that subgroups of $SL(3,\mathbb{Z})$ which are not $\mathbb{Z}$-equivalent define different structures of the quotients of $A^3$ by their actions (in \ref{comment_D4} we discuss the two most similar cases). However, this suggests only that even if an isomorphism between Kummer $3$-folds exists, it does not come from the construction in a natural way. Possibly some other invariants work better than Poincar\'e polynomials in this problem.
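As a consistency check, in every case the three stratum polynomials add up to the polynomial of $X$; for instance, for $D_8(2)$, \begin{align*} P_3(t) + P_2(t) + P_1(t) &= (t^6 + 2t^4 + 6t^3 - 10t^2 + 21) + (13t^4 + 2t^3 - 27t^2 - 36) + (52t^2 + 16) =\\ &= t^6 + 15t^4 + 8t^3 + 15t^2 + 1 = P_X(t). \end{align*} Moreover, every polynomial in Table 2 is palindromic, i.e.\ $t^6 P_X(1/t) = P_X(t)$, as Poincar\'e duality requires for the smooth projective $3$-folds $X$.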
The next step is understanding the cone of effective divisors and the cone of curves of Kummer $3$-folds. Another question is the relation between Kummer $3$-folds obtained for $\mathbb{Z}$-classes which are dual in the sense of definition \ref{def_duality}. Comparing the list of pairs of dual $\mathbb{Z}$-classes (proposition \ref{dual_pairs}) with the results of the computations, we can see that Kummer varieties constructed from dual $\mathbb{Z}$-classes have equal Poincar\'e polynomials. It can be investigated whether this is specific to $3$-dimensional Kummer varieties, or remains true in higher dimensions. If it turns out that varieties obtained from the actions of dual groups are in fact isomorphic, we can ask whether in higher dimensions the construction for dual groups also gives isomorphic varieties. One more idea for further investigation of Kummer $3$-folds is to look at inclusions of groups and the induced rational maps of varieties. We compared the results of the computations with the diagram \ref{groups_relations_diagram} of $\mathbb{Z}$-classes inclusions. There are a few pairs of groups which are not dual but lead to Kummer varieties with equal Poincar\'e polynomials: $D_4(2)$ and $D_{12}$, $D_4(3)$ and $D_8(2)$, $D_4(4)$ and $D_8(2)$. Note that all these pairs are inclusions up to $\mathbb{Z}$-equivalence. We are interested in finding any significant consequences or generalizations of this observation. It can also be checked whether Kummer $3$-folds appear on the lists of known examples of Calabi--Yau varieties. \end{document}
\begin{document} \title[Completeness in topological vector spaces]{Completeness in topological vector spaces and filters on ${\mathbb N}$} \author{Vladimir Kadets} \address{School of Mathematics and Computer Sciences V.N. Karazin Kharkiv National University, 61022 Kharkiv, Ukraine \newline \href{http://orcid.org/0000-0002-5606-2679}{ORCID: \texttt{0000-0002-5606-2679}}} \email{[email protected]} \author{Dmytro Seliutin} \address{School of Mathematics and Computer Sciences V.N. Karazin Kharkiv National University, 61022 Kharkiv, Ukraine \newline \href{https://orcid.org/0000-0002-4591-7272}{ORCID: \texttt{0000-0002-4591-7272}}} \email{[email protected]} \thanks{ The research was partially supported by the National Research Foundation of Ukraine, funded by the Ukrainian state budget within the framework of project 2020.02/0096 ``Operators in infinite-dimensional spaces: the interplay between geometry, algebra and topology''} \subjclass[2000]{40A35; 54A20} \keywords{topological vector space, completeness, filter, ideal, $f$-statistical convergence} \begin{abstract} We study completeness of a topological vector space with respect to different filters on ${\mathbb N}$. In the metrizable case all these kinds of completeness are the same, but in the non-metrizable case the situation changes. For example, a space may be complete with respect to one ultrafilter on ${\mathbb N}$, but incomplete with respect to another. Our study was motivated by [Aizpuru, List\'{a}n-Garc\'{i}a and Rambla-Barreno; Quaest. Math., 2014] and [List\'{a}n-Garc\'{i}a; Bull. Belg. Math. Soc. Simon Stevin, 2016], where for normed spaces the equivalence of ordinary completeness and completeness with respect to $f$-statistical convergence was established. \end{abstract} \maketitle \section{Introduction} An increasing continuous function $f : [0, \infty) \to [0, \infty)$ is called a \emph{modulus function} if $f(0) = 0$ and $f(x+y) \leqslant f(x) + f(y)$ for all $x, y \geqslant 0$.
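For example, $f(x) = x$ and $f(x) = \log(1 + x)$ are unbounded modulus functions (subadditivity of the latter follows from $(1+x)(1+y) \geqslant 1 + x + y$), while $f(x) = \frac{x}{1+x}$ is a bounded one. The choice of $f$ matters for the $f$-density defined below: the set $A$ of perfect squares contains $\lfloor \sqrt{n} \rfloor$ elements among the first $n$ integers, so $d_f(A) = 0$ for $f(x) = x$, whereas $d_f(A) = \lim_{n \to \infty} \frac{\log(1 + \lfloor \sqrt{n} \rfloor)}{\log(1 + n)} = \frac12$ for $f(x) = \log(1 + x)$.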
The $f$-density of a subset $A \subset {\mathbb N}$ is the quantity $$ d_f (A) = \lim_{n \to \infty} \frac{f( |A \cap \overline{1, n}|)}{f(n)}, $$ where $\overline{1, n}$ denotes the set of integers $\{1,2, \ldots, n\}$ and the symbol $|D|$ means the number of elements in the set $D$. If for a set $A$ the above limit does not exist, then the $f$-density of $A$ is not defined. Let $f$ be an unbounded modulus function, and $(x_n)$ be a sequence in a normed space $X$. An element $x \in X$ is called the \emph{$f$-statistical limit} of $(x_n)$, if $$ d_f\left(\{n \in {\mathbb N}: \|x_n - x\| > \varepsilon\}\right) = 0 $$ for every $\varepsilon > 0$. According to \cite[Definition 3.2]{aizpuru}, $(x_n) \subset X$ is said to be \emph{$f$-statistically Cauchy} if for every $\varepsilon > 0$ there exists $m \in {\mathbb N}$ such that $$ d_f\left(\{n \in {\mathbb N}: \|x_n - x_m\| > \varepsilon\}\right) = 0. $$ In the particular case of the modulus function $f(t) = t$, the above definitions give the classical notions of \emph{statistically convergent} and \emph{statistically Cauchy} sequences, which are quite popular subjects of study. Let us say that a normed space $X$ is \emph{$f$-complete}, if every $f$-statistically Cauchy sequence $(x_n) \subset X$ is $f$-statistically convergent. Our research is motivated by \cite[Theorem 2.4]{listan-garcia} (see also \cite[Theorem 3.3]{aizpuru}): Let $X$ be a normed space. The following are equivalent: (1) $X$ is complete; (2) $X$ is $f$-complete for every unbounded modulus $f$; (3) there exists an unbounded modulus $f$ such that $X$ is $f$-complete. Taking into account that convergence with respect to an unbounded modulus $f$ is equivalent to convergence with respect to the filter $\mathfrak{F}_{f-st}$ of those subsets $B \subset {\mathbb N}$ such that $d_f ({\mathbb N} \setminus B) = 0$, the above theorem leads to the natural question whether the same result is true in the more general setting of filter convergence.
We show that the answer is positive, and moreover it easily generalizes to metrizable topological vector spaces. On the other hand, an attempt to generalize it further to arbitrary Hausdorff topological vector spaces fails, because sequential completeness does not imply completeness in general. This motivates some results and leads to many open questions which we discuss at the end of our article. Below, we use the term ``topological vector space'' (abbreviation TVS) for a Hausdorff topological vector space over the field ${\mathbb K}$, which is either the field ${\mathbb R}$ of reals or the field ${\mathbb C}$ of complex numbers. We follow notation from \cite{kadets}; in particular, for a topological space $X$, $\mathfrak{N}_z$ or $\mathfrak{N}_z(X)$ denotes the family of neighborhoods of a point $z\in X$. If $X$ is a TVS, $\mathfrak{N}_0$ or $\mathfrak{N}_0(X)$ is the family of neighborhoods of zero, $X^*$ is the set of all continuous linear functionals on $X$, and $X'$ is the set of \emph{all} linear functionals on $X$. For two subsets $A$, $B$ of a linear space the symbol $A+B$ denotes the corresponding Minkowski sum: $A+B = \{a+b : a \in A, b \in B \}$. We refer to \cite[Section 16.1]{kadets} for a short introduction to filters and ultrafilters, and to \cite{Bourbaki} for a detailed one. The very basic facts about topological vector spaces can be found in \cite[Chapters 16 and 17]{kadets}, and for a much deeper exposition we refer to the classical book \cite{koethe}. The structure of the paper is as follows. In the next section we recall, for the reader's convenience, the definitions and basic facts about filters and filter convergence in topological spaces.
After that, in the section ``Completeness, sequential completeness, and completeness over a filter on ${\mathbb N}$'', we recall the basic facts about Cauchy filters and completeness in TVS, introduce formally the completeness over a filter on ${\mathbb N}$, list some features of this new property, and deduce, for general filters on ${\mathbb N}$ and a metrizable TVS, the validity of equivalences like those in \cite[Theorem 2.4]{listan-garcia}. After that we pass to the general non-metrizable case (Section ``Various types of completeness and classes of filters and spaces''). We discuss the relationship between completeness, sequential completeness, and completeness with respect to various filters on ${\mathbb N}$ (subsection ``Countable completeness''), demonstrate a non-metrizable version of \cite[Theorem 2.4]{listan-garcia} in locally convex spaces under an additional boundedness condition on the Cauchy sequences in question (subsection ``Completeness and boundedness''), give an example of a sequentially complete space which is not complete with respect to \emph{any} free ultrafilter on ${\mathbb N}$, and give an example of a space which is complete with respect to one free ultrafilter on ${\mathbb N}$ but is not complete with respect to some other ultrafilter (subsection ``Completeness and ultrafilters''). We conclude the paper with some open questions. \section{Basic facts about filters and filter convergence} Let $\Omega$ be a non-empty set. Recall that a \textit{filter} on $\Omega$ is a non-empty family $\mathfrak{F}$ of subsets of $\Omega$ satisfying the following axioms: $\Omega \in \mathfrak{F}$; $\emptyset \not\in \mathfrak{F}$; if $A,\ B \in \mathfrak{F}$ then $A \cap B \in \mathfrak{F}$; and if $A \in \mathfrak{F}$ and $D \supset A$ then $D \in \mathfrak{F}$. Every point $x_0 \in \Omega$ generates the \emph{trivial filter} of all subsets containing $x_0$.
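On a finite ground set the filter axioms can be verified mechanically. The following sketch is our own illustration (the helper names \texttt{powerset} and \texttt{is\_filter} are hypothetical); it checks the axioms for the trivial filter generated by a point.

```python
from itertools import combinations

def powerset(omega):
    """All subsets of a finite set omega, as frozensets."""
    s = list(omega)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

def is_filter(F, omega):
    """Check the filter axioms on the finite ground set omega."""
    F = set(F)
    if frozenset(omega) not in F or frozenset() in F:
        return False                  # Omega must belong, the empty set must not
    if any(a & b not in F for a in F for b in F):
        return False                  # closed under pairwise intersections
    return all(d in F                 # upward closed: supersets stay inside
               for a in F for d in powerset(omega) if a <= d)

omega = frozenset(range(4))
# trivial filter generated by the point 0: all subsets containing 0
trivial = {s for s in powerset(omega) if 0 in s}
print(is_filter(trivial, omega))                        # True
print(is_filter(trivial - {frozenset(omega)}, omega))   # False: Omega is missing
```

Of course, on a finite set every filter is trivial (it is generated by the intersection of all its elements); the interesting filters below live on ${\mathbb N}$.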
The \textit{Fr\'{e}chet filter} $\mathfrak{F}R = \{A \subset {\mathbb N}: |{\mathbb N} \setminus A| < \infty\}$ is the simplest example of a non-trivial filter on ${\mathbb N}$. A subset $A \subset \Omega$ is called \emph{$\mathfrak{F}$-stationary} if $A$ intersects all elements of $\mathfrak{F}$. A non-empty family $G \subset 2^{\Omega}$ is called a \textit{filter base} if $\emptyset \notin G$ and for every pair $A, B \in G$ there exists $C \in G$ such that $C \subset A \cap B$. The \emph{filter generated by the base} $G$ is the collection of all those $A \subset \Omega$ for which there is a $B \in G$ such that $A \supset B$. A filter $\mathfrak{F}$ is generated by a base $G$ if $G \subset \mathfrak{F}$ and each element of $\mathfrak{F}$ contains at least one element of $G$. When we write $\mathfrak{F}=\mathfrak{F}(G)$ it means that $G$ is a base for the filter $\mathfrak{F}$. In this notation, the trivial filter on $\Omega$ generated by $x_0 \in \Omega$ is equal to $\mathfrak{F}(\{\{x_0\}\})$, and $\mathfrak{F}R = \mathfrak{F}(\{{\mathbb N} \setminus \overline{1, n}\}_{n \in {\mathbb N}})$. The set of all filters on $\Omega$ is naturally ordered by inclusion. The filters that are maximal in this ordering are called \emph{ultrafilters}. The only constructive examples of ultrafilters are the trivial ones, but Zorn's lemma implies the existence of many non-trivial ultrafilters: for every filter $\mathfrak{F}$ on $\Omega$ there is an ultrafilter $\mathfrak{U}$ such that $\mathfrak{F} \subset \mathfrak{U}$. In particular, on ${\mathbb N}$ there are \emph{free} ultrafilters, i.e. ultrafilters that dominate the Fr\'{e}chet filter. Below, unless stated otherwise, we consider on ${\mathbb N}$ only free filters and ultrafilters. Let $\Omega$ be a set with a filter $\mathfrak{F}_0$, $Y$ be another set, and $f : \Omega \to Y$ be a function.
The natural collection $f(\mathfrak{F}_0) = \{f(A) \colon A \in \mathfrak{F}_0\}$ in $Y$ is not necessarily a filter, but it is a filter base. For this reason the \emph{image of the filter} $\mathfrak{F}_0$ is defined as $f[\mathfrak{F}_0] := \mathfrak{F}(f(\mathfrak{F}_0))$. A sequence $x = (x_n) \subset Y$ is a function that acts from ${\mathbb N}$ to $Y$. For this reason, at our convenience we use the notation $x(n)$ for $x_n$, $x(A)$ for the set $\{x_n : n \in A\}$, etc. Let $Y$ be a topological space and $\mathfrak{F}$ be a filter on $Y$. A point $z \in Y$ is called a \textit{limit} of the filter $\mathfrak{F}$ ($z = \lim \mathfrak{F}$) if $\mathfrak{N}_z \subset \mathfrak{F}$, and is called a \textit{cluster point} of $\mathfrak{F}$ if every neighborhood of $z$ is $\mathfrak{F}$-stationary. In a Hausdorff space the limit of $\mathfrak{F}$, if it exists, is the unique cluster point of $\mathfrak{F}$. Let $\mathfrak{F}_1 \subset \mathfrak{F}_2$ be filters on $Y$. Then every cluster point of $\mathfrak{F}_2$ is a cluster point of $\mathfrak{F}_1$, and the limit of $\mathfrak{F}_1$, if it exists, is the limit of $\mathfrak{F}_2$. A sequence $x = (x_n) \subset Y$ is called \textit{converging to an element $y \in Y$ over a filter $\mathfrak{F}$ on ${\mathbb N}$} ($y = \lim_{\mathfrak{F}} x_n$) if $y = \lim x[\mathfrak{F}]$, that is, for every $V \in \mathfrak{N}_y$ there exists $A \in \mathfrak{F}$ such that $x(A) \subset V$. An element $y$ is a \emph{cluster point of $x$ over $\mathfrak{F}$} if $y$ is a cluster point of $x[\mathfrak{F}]$. The huge advantage of ultrafilters, which we use in some instances below, is that for every ultrafilter $\mathfrak{U}$ on ${\mathbb N}$ every sequence with values in a compact set (in particular, every bounded numerical sequence) possesses a limit over $\mathfrak{U}$. \section{Completeness, sequential completeness, and completeness over a filter on ${\mathbb N}$} Let $X$ be a TVS.
A filter $\mathfrak{F}$ on $X$ is called a \textit{Cauchy filter} if for every $U \in \mathfrak{N}_0$ there exists $A \in \mathfrak{F}$ such that $A - A \subset U$ (we write $\mathfrak{F} \in {\mathbb C}auchy$). Evidently, if $\mathfrak{F}_1 \subset \mathfrak{F}_2$ are filters on $X$ and $\mathfrak{F}_1 \in {\mathbb C}auchy$, then $\mathfrak{F}_2 \in {\mathbb C}auchy$. A topological vector space\ $X$ is called \textit{complete} ($X \in {\mathbb C}ompl$) if every Cauchy filter on $X$ has a limit. Remark that the most important examples of normed spaces are complete. This is the reason why, in the framework of normed spaces, the majority of researchers are concerned only with complete (i.e. Banach) spaces. For topological vector spaces the situation is very different: many spaces that motivated the whole theory, like infinite-dimensional dual Banach spaces equipped with the weak$^*$ topology, are incomplete ($X^*$ is not closed in $X{'}$ in the pointwise convergence topology), so one cannot avoid them in the framework of the general theory. \begin{proposition}[{\cite[Section 16.2.2, Theorem 2]{kadets}}] \label{rem-clust-lim} Let $X$ be a TVS, $\mathfrak{F}$ be a Cauchy filter on $X$, and $z \in X$ be a cluster point of $\mathfrak{F}$. Then $z = \lim \mathfrak{F}$. \end{proposition} A sequence $x = (x_n) \in X^{\mathbb N}$ is said to be a \textit{Cauchy sequence over a filter $\mathfrak{F}$ on ${\mathbb N}$} if $x[\mathfrak{F}]$ is a Cauchy filter on $X$. We denote the last property by $x \in {\mathbb C}auchy(\mathfrak{F})$. In other words, $x \in {\mathbb C}auchy(\mathfrak{F})$ if for every $U \in \mathfrak{N}_0$ there exists $B \in \mathfrak{F}$ such that $x(B) - x(B) \subset U$. $x = (x_n) \in X^{\mathbb N}$ is said to be a \textit{Cauchy sequence} if $x \in {\mathbb C}auchy(\mathfrak{F}R)$.
In other words, $x \in {\mathbb C}auchy(\mathfrak{F}R)$ if for every $U \in \mathfrak{N}_0$ there exists $N \in {\mathbb N}$ such that $x_n - x_m \in U$ for all $n, m \geqslant N$. It seems to us that the following definition, which is the main object of study in this article, is new. At least, we did not find it in the literature. \begin{definition} \label{def-F-complete} Let $\mathfrak{F}$ be a free filter on ${\mathbb N}$. A topological vector space\ $X$ is said to be \textit{complete over $\mathfrak{F}$} if every Cauchy sequence over $\mathfrak{F}$ in $X$ has a limit over $\mathfrak{F}$. We denote this property by $X \in {\mathbb C}ompl(\mathfrak{F})$. \end{definition} Recall that a topological vector space\ $X$ is called \textit{sequentially complete} if $X \in {\mathbb C}ompl(\mathfrak{F}R)$, that is, if every Cauchy sequence in $X$ has a limit. Let us list some elementary general facts about completeness over filters. \begin{theorem} \label{thm-ComplF1F2} $ \ $ \begin{enumerate}[\emph{(}1\emph{)}] \item If $X \in {\mathbb C}ompl$, then $X \in {\mathbb C}ompl(\mathfrak{F})$ for every free filter $\mathfrak{F}$ on ${\mathbb N}$. In particular, \item a complete TVS is sequentially complete. \item In order to verify that $X \in {\mathbb C}ompl(\mathfrak{F})$ it is sufficient to check that every Cauchy sequence over $\mathfrak{F}$ in $X$ has a cluster point over $\mathfrak{F}$. \item If $\mathfrak{F}_1 \subset \mathfrak{F}_2$ are filters on ${\mathbb N}$ and $X \in {\mathbb C}ompl(\mathfrak{F}_2)$, then $X \in {\mathbb C}ompl(\mathfrak{F}_1)$. \item If $\mathfrak{F}$ is a filter on ${\mathbb N}$, $f: {\mathbb N} \to {\mathbb N}$ is a function, and $X \in {\mathbb C}ompl(\mathfrak{F})$, then $X \in {\mathbb C}ompl(f[\mathfrak{F}])$. \item If $\mathfrak{F}$ is a filter on ${\mathbb N}$, $f: {\mathbb N} \to {\mathbb N}$ is an injective function, and $X \in {\mathbb C}ompl(f[\mathfrak{F}])$, then $X \in {\mathbb C}ompl(\mathfrak{F})$.
\end{enumerate} \end{theorem} \begin{proof} (1) follows from the definition, and (2) is the particular case of (1) with $\mathfrak{F} = \mathfrak{F}R$. Let us check (3). Let $\mathfrak{F}$ be a free filter on ${\mathbb N}$ and $x = (x_n) \in X^{\mathbb N}$ be a Cauchy sequence over $\mathfrak{F}$. Assume that $x$ has a cluster point over $\mathfrak{F}$. This means that the Cauchy filter $x[\mathfrak{F}]$ has a cluster point, so the application of Proposition \ref{rem-clust-lim} gives us the existence of $\lim x[\mathfrak{F}]$. Now we turn to statement (4). If $x = (x_n) \in X^{\mathbb N}$ is a Cauchy sequence over $\mathfrak{F}_1$, then $x \in {\mathbb C}auchy(\mathfrak{F}_2)$. Consequently, by the $\mathfrak{F}_2$-completeness of $X$ there is $y \in X$ such that $y = \lim x[\mathfrak{F}_2]$. Taking into account that $ x[\mathfrak{F}_2] \supset x[\mathfrak{F}_1]$, this $y$ is a cluster point for $ x[\mathfrak{F}_1]$. It remains to apply statement (3). Let us demonstrate (5). Let $x = (x_n) \in X^{\mathbb N}$ be a Cauchy sequence over $f[\mathfrak{F}]$. Consider the sequence $y = x \circ f$, i.e. $y = (y_n)$, where $y_n = x_{f(n)}$. Then $y[\mathfrak{F}] = x[f[\mathfrak{F}]]$, so $y$ is a Cauchy sequence over $\mathfrak{F}$. By the $\mathfrak{F}$-completeness assumption, there exists $\lim_\mathfrak{F} y$, which is the limit of $x$ over $f[\mathfrak{F}]$. Finally, let us demonstrate (6). Denote by $g: {\mathbb N} \to {\mathbb N}$ a left inverse to $f$, which means that $g$ satisfies the condition $g(f(n)) = n$ for all $n \in {\mathbb N}$. Let $x = (x_n) \in X^{\mathbb N}$ be a Cauchy sequence over $\mathfrak{F}$. Consider the sequence $y = x \circ g$. Then $y \circ f = x \circ g \circ f = x$. So, $y \circ f \in {\mathbb C}auchy(\mathfrak{F})$, which means that $y [f[\mathfrak{F}]]$ is a Cauchy filter, i.e. $y \in {\mathbb C}auchy(f[\mathfrak{F}])$. By the $f[\mathfrak{F}]$-completeness assumption, there exists $\lim_{f[\mathfrak{F}]} y$.
By the definition, this means that $y [f[\mathfrak{F}]] = x[g[f[\mathfrak{F}]]] = x[\mathfrak{F}]$ is a convergent filter. \end{proof} Remark that item (5) of the previous statement is of interest for us only if $f[\mathfrak{F}] \supset \mathfrak{F}R$, which is not always the case. Also, in (6) the assumption of injectivity cannot be omitted because, without it, it may happen that $f[\mathfrak{F}]$ is a trivial filter, in which case the $f[\mathfrak{F}]$-completeness is true for every space but does not give any information about the $\mathfrak{F}$-completeness. Now we are ready for the promised extension of \cite[Theorem 3.3]{aizpuru} to general filters. \begin{theorem} \label{thm-metriz-compl} Let $X$ be a TVS possessing a countable base of neighborhoods of zero (in other words, let $X$ be metrizable). Then the following conditions are equivalent: \begin{enumerate}[\emph{(}i\emph{)}] \item $X$ is complete. \item $X \in {\mathbb C}ompl(\mathfrak{F})$ for every free filter $\mathfrak{F}$ on ${\mathbb N}$. \item There is a free filter $\mathfrak{F}$ on ${\mathbb N}$ such that $X \in {\mathbb C}ompl(\mathfrak{F})$. \item $X$ is sequentially complete. \end{enumerate} \end{theorem} \begin{proof} The implication (i)$\Rightarrow$(ii) is covered by item (1) of Theorem \ref{thm-ComplF1F2}, the implication (ii)$\Rightarrow$(iii) is evident, and the implication (iii)$\Rightarrow$(iv) follows from item (4) of Theorem \ref{thm-ComplF1F2}, since every free filter contains the Fr\'{e}chet filter $\mathfrak{F}R$. It remains to demonstrate that (iv)$\Rightarrow$(i). This fact is well-known (\cite[Section 16.2.2, Exercise 4]{kadets}) and may be deduced from an analogous theorem for uniform spaces. Nevertheless, for the reader's convenience (and for a reference below) we prefer to give a direct proof. So, let $U_n \in \mathfrak{N}_0$, $U_1 \supset U_2 \supset \ldots$ be a base of neighborhoods of zero with the property that $U_{n+1} + U_{n+1} \subset U_n$, and let $\mathfrak{F}$ be a Cauchy filter on $X$.
For each $n \in {\mathbb N}$ pick $A_n \in \mathfrak{F}$ such that $A_n - A_n \subset U_n$, and select an $x_n \in \bigcap_{k=1}^n A_k$. Then $x = (x_n)$ is a Cauchy sequence. Indeed, for every $U \in \mathfrak{N}_0$ there is an $N \in {\mathbb N}$ such that $U_N \subset U$. Then, for $n, m \geqslant N$ we have that $x_n - x_m \in A_N - A_N \subset U_N \subset U$. Since $x = (x_n)$ is a Cauchy sequence and $X$ is sequentially complete, there is $y:= \lim_{n \to \infty} x_n$. Let us show that the same $y$ is the limit of $\mathfrak{F}$. Consider an arbitrary neighborhood $V \in \mathfrak{N}_y$. Since $V - y \in \mathfrak{N}_0$, there is $m \in {\mathbb N}$ such that $U_{m} \subset V - y$. By the definition of $y$, there is $k > m$ such that $x_k \in U_{m+1} + y$. For this $k$ we have $A_k - x_k \subset A_k - A_k \subset U_k \subset U_{m+1}$, consequently $A_k \subset U_{m+1} + x_k$. This means that $$ V \supset U_m + y \supset U_{m+1} + U_{m+1} + y \supset U_{m+1} + x_k \supset A_k, $$ so $V \in \mathfrak{F}$. \end{proof} \section{Various types of completeness and classes of filters and spaces} Although in metrizable spaces all the types of completeness that we mentioned above coincide, in general non-metrizable spaces the picture is much more complex. It is well-known that an incomplete topological vector space may be sequentially complete. The most important example of this kind is the Hilbert space $\ell_2$ equipped with the weak topology. This section is devoted to the non-metrizable case, where many interesting examples come from the duality theory for locally convex spaces. A very good comprehensive introduction to duality is \cite{R-R}; a shorter one may be found in \cite[Chapters 17, 18]{kadets}. As usual, for a duality pair $X, Y$ we denote by $\sigma(X, Y)$ the weak topology on $X$ generated by $Y$.
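Before turning to the non-metrizable examples, it may help to see numerically that Cauchyness over a filter involves genuinely more sequences than classical Cauchyness. The sketch below is our own illustration, using the statistical filter $\mathfrak{F}_{st}$ that corresponds to the modulus $f(t)=t$ from the introduction; it exhibits a real sequence that converges to $0$ over $\mathfrak{F}_{st}$ (hence is Cauchy over $\mathfrak{F}_{st}$) but is not Cauchy in the classical sense.

```python
import math

N = 10**5
# x_n = 1 on perfect squares (a set of natural density zero), x_n = 1/n elsewhere
x = [1.0 if math.isqrt(n) ** 2 == n else 1.0 / n for n in range(1, N + 1)]

eps = 0.1
# The set {n : |x_n - 0| > eps} has density ~ 0, so x converges to 0 over F_st.
density = sum(1 for v in x if abs(v) > eps) / N
print(density)    # small; tends to 0 as N grows

# Classically x is NOT Cauchy: arbitrarily far out, a square index and a
# non-square index give a difference |x_n - x_m| close to 1.
tail_gap = max(x[N // 2:]) - min(x[N // 2:])
print(tail_gap)   # close to 1
```

For a free filter $\mathfrak{F} \supset \mathfrak{F}R$ other than $\mathfrak{F}R$ itself, such sequences are exactly the reason why completeness over $\mathfrak{F}$ is, a priori, a different property from sequential completeness.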
\subsection{Countable completeness} \begin{definition} A topological vector space\ $X$ is said to be \textit{countably complete} if $X \in {\mathbb C}ompl(\mathfrak{F})$ for all free filters $\mathfrak{F}$ on ${\mathbb N}$. \end{definition} We are going to show that countable completeness implies completeness for separable spaces, but does not imply completeness in general, and that sequential completeness does not imply countable completeness. At first, an easy reformulation. \begin{lemma}\label{lem-count-compl-1} For a topological vector space\ $X$ the following conditions are equivalent: \begin{enumerate}[\emph{(}i\emph{)}] \item $X$ is countably complete. \item For every Cauchy filter $\mathfrak{F}$ on $X$, if $\mathfrak{F}$ has a countable element, then $\mathfrak{F}$ has a limit. \end{enumerate} \end{lemma} \begin{proof} (ii)$\Rightarrow$(i). Let $\mathfrak{F}$ be a filter on ${\mathbb N}$, and $x = (x_n) \in X^{\mathbb N}$ be an $\mathfrak{F}$-Cauchy sequence in $X$. Then $x[\mathfrak{F}]$ is a Cauchy filter on $X$ and $x[\mathfrak{F}]$ has a countable element $x({\mathbb N})$, so $x[\mathfrak{F}]$ has a limit, which, according to the definition, means that $x$ has a limit with respect to $\mathfrak{F}$. (i)$\Rightarrow$(ii) Let $\mathfrak{F}$ be a non-trivial Cauchy filter on $X$ and $A \in \mathfrak{F}$ be a countable element. Let $x: {\mathbb N} \to A$ be a bijection. Define $x^{-1}(\mathfrak{F}) = \{D \subset {\mathbb N} : x(D) \in \mathfrak{F}\}$. Then $x[x^{-1}(\mathfrak{F})] = \mathfrak{F}$, so $x \in {\mathbb C}auchy(x^{-1}(\mathfrak{F}))$, which gives the existence of $\lim_{x^{-1}(\mathfrak{F})}x$, which, by the definition, is the limit of $x[x^{-1}(\mathfrak{F})] = \mathfrak{F}$ in $X$. \end{proof} \begin{definition} A topological vector space\ $X$ is said to be \textit{asymptotically countable} if for every Cauchy filter $\mathfrak{F}$ on $X$ there is a countable set $A \subset X$ such that $A \cap (B + V) \neq \emptyset$ for every $V \in \mathfrak{N}_0$ and $B \in \mathfrak{F}$.
\end{definition} Evidently, every separable space is asymptotically countable. Remark that there are non-separable asymptotically countable spaces. A funny example comes from the fact that every complete space is asymptotically countable (in the notation of the above definition, just take a set $A$ containing $\lim \mathfrak{F}$). Less evident examples come from the asymptotic countability of every TVS that has a countable base of zero neighborhoods: this can be demonstrated similarly to the implication (iv)$\Rightarrow$(i) of Theorem \ref{thm-metriz-compl}. \begin{theorem}\label{thm-count-comp-sep} For an asymptotically countable topological vector space\ $X$ (in particular, for separable $X$) its completeness is equivalent to its countable completeness. \end{theorem} \begin{proof} If $X$ is complete, then it is complete with respect to all filters on ${\mathbb N}$ by the evident item (1) of Theorem \ref{thm-ComplF1F2}, so we only need to check the inverse implication. Let $X$ be asymptotically countable and countably complete. Consider a non-trivial Cauchy filter $\mathfrak{F}$ on $X$. Fix a corresponding countable set $A \subset X$ such that the collection $G \subset 2^X$ consisting of all sets of the form $A \cap (V + B)$, where $V \in \mathfrak{N}_0$ and $B \in \mathfrak{F}$, does not contain the empty set. Evidently, $G$ is a filter base. Denote by $\widetilde{\mathfrak{F}}$ the filter generated by the base $G$. Since $A \in G \subset \widetilde{\mathfrak{F}}$, $\widetilde{\mathfrak{F}}$ has a countable element. Also, $\widetilde{\mathfrak{F}} \in {\mathbb C}auchy$. Indeed, let $U \in \mathfrak{N}_0$. Select a balanced neighborhood $V \in \mathfrak{N}_0$ such that $V + V + V \subset U$. We know that $\mathfrak{F} \in {\mathbb C}auchy$; consequently there exists $B \in \mathfrak{F}$ with $B - B \subset V$.
Then $A \cap (V + B) \in \widetilde{\mathfrak{F}}$ and $$ (A \cap (V + B)) - (A \cap (V + B)) \subset (V + B) - (V + B) $$ $$ \subset (V - V) + (B - B) \subset V - V + V = V + V + V \subset U, $$ which completes the proof of the fact that $\widetilde{\mathfrak{F}} \in {\mathbb C}auchy$. Then, by the countable completeness, there is $y \in X$ such that $y = \lim \widetilde{\mathfrak{F}}$ (we use (ii) of Lemma \ref{lem-count-compl-1}). It remains to show that $y = \lim \mathfrak{F}$. In order to demonstrate this, it is sufficient to show that $y$ is a cluster point for $\mathfrak{F}$ (Proposition \ref{rem-clust-lim}). To this end, let us consider an arbitrary neighborhood $U \in \mathfrak{N}_y$ and an arbitrary $D \in \mathfrak{F}$, and demonstrate that $U \cap D \neq \emptyset$. Select a balanced neighborhood $V \in \mathfrak{N}_0$ such that $V + V \subset U - y$. Since $y = \lim \widetilde{\mathfrak{F}}$, we have that $V + y \in \widetilde{\mathfrak{F}}$. Consequently, $V + y$ contains a subset of the form $A \cap (W + B)$, where $B \in \mathfrak{F}$, $W \in \mathfrak{N}_0$. We have that $$ U \supset V + V + y \supset V + A \cap (W + B) \supset V + A \cap (W \cap V + B\cap D). $$ Take a point $a \in A \cap (W \cap V + B\cap D)$. It can be written in the form $a = w + d$, where $w \in W \cap V$ and $d \in B \cap D$. Then, $$ d = a - w \in A \cap (W \cap V + B\cap D) + V \subset U, $$ so $U \cap D \neq \emptyset$. \end{proof} \begin{corollary}\label{cor-example-seq-noncount} Sequential completeness does not imply countable completeness. \end{corollary} \begin{proof} By the previous theorem, in order to get an example of this kind it is sufficient to find an incomplete separable TVS which is sequentially complete. The classical example is the Hilbert space $\ell_2$ equipped with the weak topology.
More generally, for every separable infinite-dimensional Banach space $X$ the dual space $X^*$ in the topology $\sigma(X^*, X)$ is sequentially complete (see Theorem \ref{thm-bcompl-Fcoml} below for a stronger result), separable, but incomplete. \end{proof} The next result shows that the asymptotic countability assumption in Theorem \ref{thm-count-comp-sep} cannot be omitted. \begin{theorem}\label{thm-example-non-comp-countcomp} There exists a non-complete TVS\ $X$ of continuum cardinality which is countably complete. \end{theorem} \begin{proof} Consider ${\mathbb R}^{[0, 1]}$, the space of all functions $f: [0, 1] \to {\mathbb R}$ equipped with the standard product topology, i.e. the topology of pointwise convergence. The space $X$ we are looking for will be the subspace of ${\mathbb R}^{[0, 1]}$ consisting of functions with countable support. In other words, $f: [0, 1] \to {\mathbb R}$ lies in $X$ if the set ${\text{supp}} f := \{t \in [0, 1]: f(t) \neq 0\}$ is at most countable. Since $X$ is a dense proper subspace of ${\mathbb R}^{[0, 1]}$, it cannot be complete (a complete subspace of any TVS is closed \cite[Section 16.2.2, Theorem 4]{kadets}). Let us demonstrate that $X$ is countably complete. Let $\mathfrak{F}$ be a filter on ${\mathbb N}$, and $x = (x_n) \in X^{\mathbb N}$ be an $\mathfrak{F}$-Cauchy sequence in $X$. Then $x$ is $\mathfrak{F}$-Cauchy as a sequence in ${\mathbb R}^{[0, 1]}$. By the completeness of ${\mathbb R}^{[0, 1]}$, there is $f \in {\mathbb R}^{[0, 1]}$ such that $f = \lim_\mathfrak{F} x_n$ in ${\mathbb R}^{[0, 1]}$. It remains to remark that $f \in X$: if $t \notin \bigcup_{n \in {\mathbb N}} {\text{supp}}\, x_n$, then $x_n(t) = 0$ for all $n$, so $f(t) = \lim_\mathfrak{F} x_n(t) = 0$; hence ${\text{supp}} f$ is contained in the countable set $\bigcup_{n \in {\mathbb N}} {\text{supp}}\, x_n$, and consequently $f = \lim_\mathfrak{F} x_n$ in $X$. \end{proof} \subsection{Completeness and boundedness} In the very recent paper \cite{bondt} Ben De Bondt and Hans Vernaeve introduced several concepts that are very useful for our study. Below we present the particular case that is most important for us. Let $X$ be a Banach space, $(x_n^*) \subset X^*$ be a sequence of functionals, and $\mathfrak{F}$ be a free filter on ${\mathbb N}$.
The sequence $(x_n^*)$ is said to be \emph{pointwise $\mathfrak{F}$-bounded} if for every $x \in X$ there is a $C = C(x) > 0$ such that $\{n \in {\mathbb N}: |x_n^*(x)| < C \} \in \mathfrak{F}$. The sequence $(x_n^*)$ is said to be \emph{$\mathfrak{F}$-bounded} (\emph{stationary $\mathfrak{F}$-bounded}) if there is a $C > 0$ such that $\{n \in {\mathbb N}: \|x_n^*\| < C \} \in \mathfrak{F}$ (respectively, $\{n \in {\mathbb N}: \|x_n^*\| < C \}$ is $\mathfrak{F}$-stationary). A free filter $\mathfrak{F}$ on ${\mathbb N}$ is called a \emph{B-UBP-filter} (\emph{stationary B-UBP-filter}) if for every Banach space $X$ every pointwise $\mathfrak{F}$-bounded sequence $(x_n^*) \subset X^*$ is $\mathfrak{F}$-bounded (respectively, stationary $\mathfrak{F}$-bounded). This property is weaker (at least formally) than the property of being a \emph{(stationary) Banach-UBP-filter}, for which the authors of \cite{bondt} demanded a similar statement for linear continuous operators from $X$ to an arbitrary locally convex space $Y$, with $\mathfrak{F}$-equicontinuity instead of $\mathfrak{F}$-boundedness in the conclusion. The fact that $\mathfrak{F}R$ is B-UBP is just the classical Banach-Steinhaus theorem. On the other hand, many classical filters $\mathfrak{F}$ do not enjoy this property, because of the existence of $\mathfrak{F}$-unbounded pointwise $\mathfrak{F}$-convergent sequences in dual Banach spaces. The latter effect was remarked in \cite[Theorem 1]{conkad} for the statistical convergence and was investigated in detail in \cite{gakad}, \cite{Kad-cyl}, and \cite{KLO2010}. Ben De Bondt and Hans Vernaeve presented non-trivial descriptions and examples of Banach-UBP filters and stationary Banach-UBP filters and demonstrated that the existence of B-UBP ultrafilters is consistent with the standard ZFC axiom system. This motivates the following definition. \begin{definition} \label{def-F-bcomplete} Let $\mathfrak{F}$ be a free filter on ${\mathbb N}$.
A topological vector space\ $X$ is said to be \textit{boundedly complete over $\mathfrak{F}$} if every bounded Cauchy sequence over $\mathfrak{F}$ in $X$ has a limit over $\mathfrak{F}$. We denote this property by $X \in {\mathbb C}ompl_b(\mathfrak{F})$. $X$ is said to be \textit{boundedly countably complete} if it is boundedly complete over all free filters $\mathfrak{F}$ on ${\mathbb N}$. \end{definition} The next theorem gives plenty of examples. \begin{theorem}\label{thm-dual-bcompl} Let $Y$ be a Banach space. Then $(Y^*, \sigma(Y^*, Y) )$ is boundedly countably complete. \end{theorem} \begin{proof} The proof repeats almost literally the demonstration of the well-known facts \cite[Section 6.4.3, Theorems 1 and 2]{kadets} about the ordinary pointwise convergence. Namely, let $(x_n^*) \subset Y^*$ be a bounded sequence. Recall that $\sigma(Y^*, Y)$-boundedness is equivalent to the boundedness in norm (Banach-Steinhaus), so $\sup_n\|x_n^*\| = C < \infty$. Assume that for some filter $\mathfrak{F}$ on ${\mathbb N}$ the sequence $(x_n^*)$ is $\mathfrak{F}$-Cauchy in the topology $\sigma(Y^*, Y)$. This means that for every $x \in Y$ the sequence $(x_n^*(x)) \subset {\mathbb K}$ is $\mathfrak{F}$-Cauchy. By the completeness of ${\mathbb K}$ (which is either ${\mathbb R}$ or ${\mathbb C}$), $\lim_{\mathfrak{F}} x_n^*(x)$ exists for all $x\in Y$. Consider the map $f \colon Y \to {\mathbb K}$ given by the recipe $f(x) = \lim_{\mathfrak{F}} x_n^*(x)$. First, it is a linear functional. Indeed, $ f(ax_1 + bx_2) = \lim_{\mathfrak{F}} x_n^* (ax_1 + bx_2) = a \lim_{\mathfrak{F}} x_n^* (x_1) + b \lim_{\mathfrak{F}} x_n^* (x_2) = af(x_1) + bf(x_2)$. Second, the estimate $|f(x)| = \lim_{\mathfrak{F}} |x_n^*(x)| \leqslant C\|x\|$, which holds for all $x\in Y$, demonstrates that $f$ is continuous, so $f \in Y^*$, and $f = \lim_{\mathfrak{F}} x_n^*$ in $\sigma(Y^*, Y)$.
\end{proof} \begin{theorem}\label{thm-bcompl-Fcoml} Let $Y$ be a Banach space and $\mathfrak{F}$ be a stationary B-UBP-filter on ${\mathbb N}$. Then $(Y^*, \sigma(Y^*, Y) )$ is $\mathfrak{F}$-complete. \end{theorem} \begin{proof} Let $x^* = (x_n^*) \subset Y^*$ be $\mathfrak{F}$-Cauchy in the topology $\sigma(Y^*, Y)$. Then for every $x \in Y$ the sequence $(x_n^*(x)) \subset {\mathbb K}$ is $\mathfrak{F}$-Cauchy, which implies that $x^*$ is pointwise $\mathfrak{F}$-bounded. From the definition of a stationary B-UBP-filter we deduce the existence of an $\mathfrak{F}$-stationary set $A \subset {\mathbb N}$ such that $\sup_{n \in A} \|x_n^*\| < \infty$. Consider the collection $G$ of all sets of the form $A\cap B$, $B \in \mathfrak{F}$. Since $A$ is $\mathfrak{F}$-stationary, $G$ is a filter base. Denote by $\mathfrak{F}_A$ the filter on ${\mathbb N}$ generated by this base $G$. Let $g : {\mathbb N} \to A$ be a bijection. Denote by $\mathfrak{F}_g$ the filter of all those $B \subset {\mathbb N}$ for which $g(B) \in \mathfrak{F}_A$. Finally, consider $y^* = x^* \circ g$. Then $y^*$ is norm-bounded and is $\mathfrak{F}_g$-Cauchy. By Theorem \ref{thm-dual-bcompl} the sequence $y^*$ is pointwise $\mathfrak{F}_g$-convergent to some $f \in Y^*$. This means that the sequence $x^*$ converges to $f$ with respect to the filter $g[\mathfrak{F}_g] = \mathfrak{F}_A$. By the construction, $\mathfrak{F}_A \supset \mathfrak{F}$; consequently, $f$ is an $\mathfrak{F}$-cluster point for $x^* = (x_n^*)$ in $\sigma(Y^*, Y)$. But $x^*$ is $\mathfrak{F}$-Cauchy, so its cluster point $f$ is its limit (Proposition \ref{rem-clust-lim}). \end{proof} We don't know whether for a general TVS the sequential completeness implies $f$-statistical completeness for every unbounded modulus $f$, but we have an analogous result for bounded completeness in locally convex spaces.
Recall that in the particular case of the modulus function $f(t) = t$, $f$-statistical convergence reduces to the well-known statistical convergence, which is generated by the filter $\mathfrak{F}_{st}$, whose elements are those $A \subset {\mathbb N}$ for which \begin{equation} \label{eq:stat-conv} \lim_{n \to \infty} \frac{ |A(n)|}{n} = 1, \end{equation} where $A(n) = A \cap \overline{1, n}$. First, we need a generalization of the following fact that was remarked already in \cite{fast}: if a bounded numerical sequence $(x_n)$ converges statistically to a number $a$, then it is Cesaro convergent to $a$, i.e. $$ \lim_{n \to \infty} \frac{1}{n}\sum_{k=1}^n x_k = a. $$ \begin{lemma}\label{lem-loc-conv-stat-Cesaro} Let $X$ be a locally convex TVS and $x = (x_n) \in X^{\mathbb N}$ be a bounded $\mathfrak{F}_{st}$-Cauchy sequence in $X$. Then the sequence $y = (y_n) \in X^{\mathbb N}$, where $y_n = \frac{1}{n}\sum_{k=1}^n x_k$, is a bounded Cauchy sequence. \end{lemma} \begin{proof} Let $U \in \mathfrak{N}_0$ be an open balanced convex neighborhood of zero and $p$ be the seminorm whose open unit ball is equal to $U$. Denote $C = \sup_n p(x_n)$. Then $p(x_k) \leqslant C$ for all $k$ and, by the triangle inequality, also $p(y_n) \leqslant C$, which proves the boundedness of $y$. It remains to show that $y$ is a Cauchy sequence. According to our assumption, there is a set $A \subset {\mathbb N}$ that satisfies \eqref{eq:stat-conv} and such that $p(x_n - x_m) < \frac{1}{2}$ for all $m, n \in A$. Select an $N \in {\mathbb N}$ in such a way that for all $n \geqslant N$ $$ \frac{ |({\mathbb N} \setminus A)(n)|}{n} < \frac{1}{8C}.
$$ Then, for $n, m \geqslant N$ we have \begin{align*} &p\left( y_n - y_m \right) = p\left( \frac{1}{n}\sum_{k=1}^n x_k - \frac{1}{m}\sum_{j=1}^m x_j \right) = p\Bigl( \frac{1}{n}\sum_{k \in ({\mathbb N} \setminus A) (n)} x_k \\ &- \frac{1}{m}\sum_{j \in ({\mathbb N} \setminus A) (m)} x_j + \frac{1}{|A(n)| |A(m)|}\sum_{k \in A(n), \ j \in A (m)} (x_k - x_j) \\ &- \left(\frac{1}{|A(n)| } - \frac1n \right)\sum_{k \in A(n)}x_k + \left(\frac{1}{|A(m)| } - \frac1m \right)\sum_{j \in A(m)}x_j\Bigr) \end{align*} \begin{align*} &\leqslant \frac{1}{n}\sum_{k \in ({\mathbb N} \setminus A) (n)} p(x_k) + \frac{1}{m}\sum_{j \in ({\mathbb N} \setminus A) (m)} p(x_j) \\ &+ \frac{1}{|A(n)| |A(m)|}\sum_{k \in A(n), \ j \in A (m)} p(x_k - x_j) \\ &+ \left(\frac{1}{|A(n)| } - \frac1n \right)\sum_{k \in A(n)}p(x_k) + \left(\frac{1}{|A(m)| } - \frac1m \right)\sum_{j \in A(m)}p(x_j) \\ & < \frac{C}{n} \frac{n}{8C} + \frac{C}{m} \frac{m}{8C} + \frac{1}{2} + \frac{C}{n} \frac{n}{8C} + \frac{C}{m} \frac{m}{8C} = 1, \end{align*} that is, $y_n - y_m \in U$. \end{proof} \begin{theorem}\label{thm-loc-conv-bFst-compl} Let $X$ be a boundedly sequentially complete locally convex TVS. Then $X \in {\mathbb C}ompl_b(\mathfrak{F}_{st})$. \end{theorem} \begin{proof} Let $x = (x_n) \in X^{\mathbb N}$ be a bounded $\mathfrak{F}_{st}$-Cauchy sequence in $X$. According to the previous lemma, the sequence $y = (y_n) \in X^{\mathbb N}$, where $y_n = \frac{1}{n}\sum_{k=1}^n x_k$, is a bounded Cauchy sequence, so it has a limit in $X$. Denote $a = \lim_{n \to \infty} y_n \in X$. Let us demonstrate that $a = \lim_{\mathfrak{F}_{st}} x$. To do this, consider $U$, $p$, $C$, $A$ and $N$ from the proof of Lemma \ref{lem-loc-conv-stat-Cesaro}. Also, fix an $M > N$ such that $p(y_n - a) < \frac14$ for all $n > M$.
Then, for every $n \in A \setminus \overline{1, M}$ we have $$ p(x_n - a) \leqslant \frac14 + p(x_n - y_n) \leqslant \frac14 + p\left(\frac{1}{n}\sum_{k \in ({\mathbb N} \setminus A) (n)} x_k \right) + p\left( x_n - \frac{1}{n}\sum_{k \in A(n)} x_k \right) $$ \begin{align*} &\leqslant \frac14 + \frac{C}{n} \frac{n}{8C} + p\left( \frac{1}{|A(n)|}\sum_{k \in A(n)} (x_n - x_k ) \right) + \left(\frac{1}{|A(n)|}- \frac{1 }{n} \right) p\left(\sum_{k \in A(n)} x_k \right) \\ &\leqslant \frac14 + \frac{C}{n} \frac{n}{8C} + \frac{1}{2} + \frac{C}{n} \frac{n}{8C} = 1, \end{align*} that is, $x_n - a \in U$. \end{proof} \begin{corollary}\label{cor-loc-conv-bFst-compl} Let $X$ be a boundedly sequentially complete locally convex TVS. Then $X \in \operatorname{Compl}_b(\mathfrak{F}_{f-st})$ for every unbounded modulus function $f$. \end{corollary} \begin{proof} Since $\mathfrak{F}_{f-st} \subset \mathfrak{F}_{st}$ (see the reasoning just before \cite[Corollary 2.2]{aizpuru}), it remains to apply statement (4) of Theorem \ref{thm-ComplF1F2} in its version for bounded sequences, which works the same way as the original one. \end{proof} \subsection{Completeness and ultrafilters} Let us start with an easy observation. \begin{remark} \label{rem-all-ultraf} If a TVS $X$ is complete with respect to all ultrafilters on ${\mathbb N}$, then $X$ is countably complete. Indeed, given a filter $\mathfrak{F}$ with a countable base, select an ultrafilter $\mathfrak{U} \supset \mathfrak{F}$. By our assumption, $X \in \operatorname{Compl}(\mathfrak{U})$, and it remains to apply (4) of Theorem \ref{thm-ComplF1F2} in order to show that $X \in \operatorname{Compl}(\mathfrak{F})$. \end{remark} The above remark motivates some natural questions. At first, is it true that completeness with respect to one ultrafilter implies completeness with respect to all other ultrafilters (and hence implies countable completeness)?
If the answer is negative, then the second question arises: does sequential completeness imply completeness with respect to some ultrafilter? The negative answers to both questions are given below (for the first one the answer is given under an additional set-theoretic assumption). \begin{theorem}\label{thm-bcompl-Fcoml} Under Martin's axiom there are free ultrafilters $\mathfrak{U}_1, \mathfrak{U}_2$ on ${\mathbb N}$ and a TVS $X$ such that $X \in \operatorname{Compl}(\mathfrak{U}_1)$, but $X \notin \operatorname{Compl}(\mathfrak{U}_2)$. \end{theorem} \begin{proof} Let $X = (Y^*, \sigma(Y^*, Y) )$, where $Y$ is a separable infinite-dimensional Banach space. According to \cite[Corollary 5.1 and Theorem 5.3]{bondt}, Martin's axiom guarantees the existence of $2^{2^{\aleph_0}}$-many B-UBP-ultrafilters. Let $\mathfrak{U}_1$ be a B-UBP-ultrafilter on ${\mathbb N}$. Due to Theorem \ref{thm-bcompl-Fcoml}, $X \in \operatorname{Compl}(\mathfrak{U}_1)$. On the other hand, $X$ is separable (the dual of a separable Banach space contains a countable total system \cite[Section 17.2.4, Corollary 2]{kadets} and, consequently, is w$^*$-separable) and incomplete, so by Theorem \ref{thm-count-comp-sep} $X$ is not countably complete, which implies (Remark \ref{rem-all-ultraf}) that $X \notin \operatorname{Compl}(\mathfrak{U}_2)$ for some free ultrafilter $\mathfrak{U}_2$ on ${\mathbb N}$. \end{proof} Recall that the dual of $\ell_1$ is $\ell_\infty$, and for every $x = (x_1, x_2, \ldots) \in \ell_\infty$ and $y = (y_1, y_2, \ldots) \in \ell_1$ the action of $x$ on $y$ is $x(y) = \sum_{n \in {\mathbb N}}x_n y_n$. \begin{theorem}\label{thm-seq-comp-notultrcomp} The space $\ell_1$ in the weak topology $\sigma(\ell_1, \ell_\infty)$ is sequentially complete, but is not boundedly complete over any free ultrafilter $\mathfrak{U}$ on ${\mathbb N}$.
\end{theorem} \begin{proof} Weak sequential completeness of $\ell_1$ (as well as of all spaces $L_1(\mu)$) is a classical Banach space theory result, see \cite[Theorem 2.5.10]{albiac}. Now, let us fix an arbitrary free ultrafilter $\mathfrak{U}$ on ${\mathbb N}$ and demonstrate that $(\ell_1, \sigma(\ell_1, \ell_\infty)) \notin \operatorname{Compl}_b(\mathfrak{U})$. Denote by $\left\{e_n \right\}_{n=1}^\infty$ the \emph{canonical basis} of $\ell_1$, that is $e_1 = (1,0,0,\ldots)$, $e_2 = (0,1,0,\ldots)$, \ldots. For every $x = (x_1, x_2, \ldots) \in \ell_\infty$ the values $x(e_n) = x_n$ form a bounded sequence of scalars, hence the limit of $x(e_n)$ over $\mathfrak{U}$ exists. Consequently, $(e_n)$ is $\mathfrak{U}$-Cauchy in the topology $\sigma(\ell_1, \ell_\infty)$. Now we show that the sequence $(e_k)$ does not have a weak limit over $\mathfrak{U}$. Assume that there exists $z =(z_1, z_2, \ldots) \in \ell_1$ such that $x(z) = \lim_\mathfrak{U} (x(e_n)) = \lim_\mathfrak{U} (x_n)$ for all $x = (x_1, x_2, \ldots) \in \ell_{\infty}$. Then, on the one hand, considering $e_k$ as elements of $\ell_\infty$ we get that for every $k \in {\mathbb N}$ $$ z_k = e_k(z) = \lim_{\mathfrak{U}, n} (e_k(e_n)) = 0, $$ but on the other hand, taking $x = (1, 1, 1, \ldots)$ we get that $\sum_{n \in {\mathbb N}} z_n = \lim_\mathfrak{U} (x_n) = 1$. This is a contradiction. \end{proof} The above result can be viewed in a slightly different way. Consider $\ell_1$ as a subspace of $\ell_\infty^*$. Then in $\sigma(\ell_\infty^*, \ell_\infty)$ we have that $(e_k)$ is $\mathfrak{U}$-convergent to the functional $x \mapsto \lim_\mathfrak{U} x$, and this functional belongs to $\ell_\infty^* \setminus \ell_1$. \section{Concluding remarks and open questions} The following challenging problem remains open.
\begin{problem} \label{prob1} Is there a combinatorial description of those filters $\mathfrak{F}$ on ${\mathbb N}$ for which the sequential completeness of a TVS implies its $\mathfrak{F}$-completeness? \end{problem} \begin{problem} \label{prob1+} Which of the concrete filters $\mathfrak{F}$ widely mentioned in the literature (like Erd\"os-Ulam filters, summable filters, $f$-statistical filters, filters generated by summability matrices, etc.) enjoy the property that $\mathfrak{F}$-sequential completeness of a TVS implies its $\mathfrak{F}$-completeness? \end{problem} Let us consider the following construction. For a free filter $\mathfrak{F}$ on ${\mathbb N}$ denote by $c(\mathfrak{F})$ the set of all bounded $\mathfrak{F}$-convergent numerical sequences. Evidently $c_0 \subset c(\mathfrak{F}_{\mathrm{Fr}}) \subset c(\mathfrak{F}) \subset \ell_\infty$, where $\mathfrak{F}_{\mathrm{Fr}}$ stands for the Fr\'{e}chet filter. For an ultrafilter $\mathfrak{U}$ we have $c(\mathfrak{U}) = \ell_\infty$. Following the argument from Theorem \ref{thm-seq-comp-notultrcomp} one can easily see that the space $(\ell_1, \sigma(\ell_1, c(\mathfrak{F})))$ is not $\mathfrak{F}$-complete. So, every time when $(\ell_1, \sigma(\ell_1, c(\mathfrak{F})))$ is sequentially complete, we obtain an example of a sequentially complete space which is not $\mathfrak{F}$-complete. This relates Problem \ref{prob1} to the following one. \begin{problem} \label{prob2} $ \ $ \begin{enumerate}[(i)] \item Describe those linear subspaces $E$, $c_0 \subset E \subset \ell_\infty$, for which the corresponding space $(\ell_1, \sigma(\ell_1, E))$ is sequentially complete. \item Describe those free filters $\mathfrak{F}$ on ${\mathbb N}$, for which the corresponding space $(\ell_1, \sigma(\ell_1, c(\mathfrak{F})))$ is sequentially complete. \end{enumerate} \end{problem} Remark that for some filters $\mathfrak{F}$ the corresponding space $(\ell_1, \sigma(\ell_1, c(\mathfrak{F})))$ is not sequentially complete.
This evidently happens for the Fr\'{e}chet filter and for those filters $\mathfrak{F}$ for which the implication $\bigl((X \in \operatorname{Compl}(\mathfrak{F}_{\mathrm{Fr}})) \Rightarrow (X \in \operatorname{Compl}(\mathfrak{F}))\bigr)$ from Problem \ref{prob1} holds true. Let us give a more advanced example. \begin{theorem}\label{thm-statist-to-prob2} For every unbounded modulus function $f$ the space \newline $(\ell_1, \sigma(\ell_1, c(\mathfrak{F}_{f-st})))$ is not sequentially complete. \end{theorem} \begin{proof} Denote $x_n = \frac{1}{n}\sum_{k=1}^n e_k$, where $e_k \in \ell_1$ are the elements of the canonical basis. For every $y = (y_1, y_2, \ldots) \in c(\mathfrak{F}_{f-st})$ we know \cite[Corollary 2.2]{aizpuru} that it is statistically convergent to its $f$-statistical limit, so it is Ces\`aro convergent. Consequently $$ y(x_n) = \frac{1}{n}\sum_{k=1}^n y_k \xrightarrow[n \to \infty]{} \lim_{\mathfrak{F}_{f-st}} (y_n). $$ This means that $(x_n)$ is a Cauchy sequence in $(\ell_1, \sigma(\ell_1, c(\mathfrak{F}_{f-st})))$. On the other hand, $(x_n)$ is not $\sigma(\ell_1, c(\mathfrak{F}_{f-st}))$-convergent to any element $z \in \ell_1$ because the mapping $y \mapsto \lim_{\mathfrak{F}_{f-st}} (y_n)$ cannot be represented in the form $y \mapsto \sum_{n \in {\mathbb N}}z_n y_n$. \end{proof} \begin{problem} \label{prob3} Does there exist a ``universal'' ultrafilter $\mathfrak{U}$ on ${\mathbb N}$, such that completeness with respect to $\mathfrak{U}$ implies countable completeness? Is the existence of such a $\mathfrak{U}$ consistent with the ZFC axioms? \end{problem} \begin{problem} \label{prob4} For a given TVS $X$ denote by $\operatorname{Compl}(X)$ the set of those filters $\mathfrak{F}$ for which $X \in \operatorname{Compl}(\mathfrak{F})$. Items (4--6) of Theorem \ref{thm-ComplF1F2} give some restrictions on the structure of $\operatorname{Compl}(X)$. What else can be said about this set?
For example, are there any topological restrictions on the intersection of $\operatorname{Compl}(X)$ with the space $\beta{\mathbb N}$ of all ultrafilters? \end{problem} Remark that the questions formulated in Problems \ref{prob1}, \ref{prob1+}, \ref{prob3}, and \ref{prob4} can be asked for smaller classes of spaces, for example for locally convex spaces. \end{document}
\begin{document} \maketitle \makeatletter \newskip\@bigflushglue \@bigflushglue = -100pt plus 1fil \def\bigcenter{\trivlist \bigcentering\item\relax} \def\bigcentering{\let\\\@centercr\rightskip\@bigflushglue \leftskip\@bigflushglue \parindent\z@\parfillskip\z@skip} \def\endbigcenter{\endtrivlist} \makeatother \begin{abstract} We study the four infinite families $K\!A(n), K\!B(n), K\!D(n), K\!Q(n)$ of finite dimensional Hopf (in fact Kac) algebras constructed respectively by A.~Masuoka and L.~Vainerman: isomorphisms, automorphism groups, self-duality, lattices of coideal subalgebras. We reduce the study to $K\!D(n)$ by proving that the others are isomorphic to $K\!D(n)$, its dual, or an index $2$ subalgebra of $K\!D(2n)$. We derive many examples of lattices of intermediate subfactors of the inclusions of depth $2$ associated to those Kac algebras, as well as the corresponding principal graphs, which is the original motivation. Along the way, we extend some general results on the Galois correspondence for depth $2$ inclusions, and develop some tools and algorithms for the study of twisted group algebras and their lattices of coideal subalgebras. This research was driven by heavy computer exploration, whose tools and methodology we describe. \end{abstract} \begin{otherlanguage}{french} \begin{abstract} Nous étudions les quatre familles $K\!A(n), K\!B(n), K\!D(n), K\!Q(n)$ d'algèbres de Hopf (en fait de Kac) de dimension finie construites respectivement par A.~Masuoka et L.~Vainerman: isomorphismes, groupes d'automorphismes, autodualité, treillis des sous-algèbres coidéales. Nous réduisons l'étude à $K\!D(n)$ en démontrant que les autres algèbres sont toutes isomorphes à $K\!D(n)$, à sa duale ou à une sous-algèbre d'indice $2$ de $K\!D(2n)$.
Nous en déduisons de nombreux exemples de treillis de facteurs intermédiaires d'inclusions de profondeur $2$ associées à ces algèbres de Kac, ainsi que les graphes principaux correspondants. Au cours de cette étude, nous approfondissons des résultats généraux sur la correspondance de Galois pour les inclusions de profondeur $2$ et donnons des méthodes et algorithmes pour l'analyse des algèbres de groupes déformées et leurs treillis des sous-algèbres coidéales. Cette recherche a été guidée par une exploration informatique intensive, dont nous décrivons les outils et la méthodologie. \end{abstract} \end{otherlanguage} \setcounter{tocdepth}{2} \tableofcontents \listoffigures \section{Introduction} The theory of Kac algebras provides a unified framework for both group algebras and their duals. In finite dimension this notion coincides with that of $C^*$-Hopf algebras (see~\ref{KacHopf}). Those algebras play an important role in the theory of inclusions of hyperfinite factors of type $II_1$; indeed, any irreducible finite index depth 2 subfactor is obtained as the fixed point set under the action of some finite dimensional Kac algebra on the factor. Furthermore, there is a Galois-like correspondence between the lattice of intermediate subfactors and the lattice of coideal subalgebras of the Kac algebra. A cocommutative (respectively commutative) Kac algebra is the group algebra $\mathbb{C}[G]$ of some finite group $G$ (respectively its dual). We call such a Kac algebra \emph{trivial}; its lattice of coideal subalgebras is simply given by the lattice of subgroups of $G$. In~\cite{Watatani.1996}, Yasuo Watatani initiates the study of the lattices of irreducible subfactors of type $II_1$ factors. He provides general properties as well as examples. In particular, his work, completed by Michael Aschbacher~\cite{Aschbacher.2008}, shows that every lattice with at most six elements derives from group actions and operations on factors.
Our aim is to study more involved examples of lattices of subfactors, and therefore of principal graphs. It is our hope that this will contribute to the general theory by providing a rich ground for suggesting and testing conjectures. A secondary aim is to evaluate the potential of computer exploration in that topic, to develop algorithms and practical tools, and to make them freely available for future studies. We always assume the factors to be hyperfinite of type $II_1$, and the inclusions to be of finite index. Our approach is to use the Galois correspondence and study instead the lattices of coideal subalgebras of some non trivial examples of Kac algebras. Only lattices of depth $2$ irreducible inclusions of type $II_1$ hyperfinite factors are obtained this way. The depth $2$ hypothesis is not so restrictive though, since D. Nikshych and L. Vainerman showed in~\cite{Nikshych_Vainerman.2000.2} that any finite depth inclusion is an intermediate subfactor of a depth $2$ inclusion. There remains only the irreducibility condition. Luckily, there still exists a Galois correspondence for non irreducible depth $2$ inclusions, at the price of considering the action of a finite $C^*$-quantum groupoid instead of a Kac algebra (see~\cite{Nikshych_Vainerman.2000.1,David.2005,David.2009}). The study of more general examples coming from $C^*$-quantum groupoids, like those constructed from tensor categories by C. Mével, is the topic of subsequent work. The first non trivial example of a Kac algebra, $K\!P$, was constructed in 1966 (see~\cite{Kac.Paljutkin.1966}); it is the unique one of dimension $8$. Later, M. Izumi and H. Kosaki classified small dimension Kac algebras through factors (see~\cite{Izumi_Kosaki.2002}). In 1998, L.
Vainerman constructed explicitly two infinite families of Kac algebras, which we denote $K\!D(n)$ and $K\!Q(n)$, by deformation of the group algebras of the dihedral groups $D_{2n}$ and of the quaternion groups $Q_{2n}$ respectively (see~\cite{Vainerman.1998}\footnote{Note that~\cite{Vainerman.1998} is a follow up on a study initiated in 1996 with M.~Enock~\cite{Enock_Vainerman.1996}. An analogous construction can be found in~\cite{Nikshych.1998}.}). In 2000, A. Masuoka defined two other infinite families $A_{4n}$ and $B_{4n}$ (see~\cite[Definition 3.3]{Masuoka.2000}), which we denote $K\!A(n)$ and $K\!B(n)$ for notational consistency. We present here a detailed study of the structure of those Kac algebras, with an emphasis on $K\!D(n)$. We get some structural results: $K\!Q(n)$ is isomorphic to $K\!D(n)$ for $n$ even and to $K\!B(n)$ for $n$ odd, while $K\!B(n)$ itself is an index $2$ Kac subalgebra of $K\!D(2n)$; also, $K\!A(n)$ is the dual of $K\!D(n)$. Altogether, this is the rationale behind the emphasis on $K\!D(n)$. We prove that $K\!D(n)$ and $K\!Q(n)$ are self-dual if (and only if) $n$ is odd, by constructing an explicit isomorphism (the self-duality of $K\!A(n)$ for $n$ odd and of $K\!B(n)$ for all $n$ is readily proved in~\cite{Calinescu_al.2004}). We also describe the intrinsic groups and automorphism groups of $K\!D(n)$ and $K\!Q(n)$ (see~\cite{Calinescu_al.2004} for those of $K\!A(n)$ and $K\!B(n)$). Then, we turn to the study of the lattice of coideal subalgebras of those Kac algebras. We describe them completely for $n$ small or prime, and partially for all $n$. For $K\!D(n)$, we further obtain a conjectural complete description of the lattice for $n$ odd, and explain how large parts can be obtained recursively for $n$ even. After this study, the lattice of intermediate subfactors of all irreducible depth $2$ inclusions of index at most $15$ is known.
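For ease of reference, the structural results just listed can be collected in a single display (the notation $\widehat{K\!D(n)}$ for the dual Kac algebra is a shorthand we introduce here):
\begin{align*}
  K\!Q(n) &\simeq K\!D(n)           && (n \text{ even}),\\
  K\!Q(n) &\simeq K\!B(n)           && (n \text{ odd}),\\
  K\!A(n) &\simeq \widehat{K\!D(n)} && (\text{dual of } K\!D(n)),\\
  K\!B(n) &\subset K\!D(2n)         && (\text{Kac subalgebra of index } 2).
\end{align*}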
We derive the principal graphs of certain inclusions of factors; reciprocally, we use classification results on inclusions in some proofs. As the first interesting examples are of dimension $12$ or more, calculations are quickly impractical by hand. Most of the research we report on in this paper has been driven by computer exploration: it led to conjectures and hinted at proofs. Furthermore, whenever possible, the lengthy calculations required in the proofs were delegated to the computer. Most of the tools we developed for this research (about 8000 lines of code) are generic and integrated in the open source \texttt{MuPAD-Combinat} package, and have readily been reused to study other algebras. In Section~\ref{inclusions}, we recall and refine the results of~\cite{Nikshych_Vainerman.2000.2}, and adapt them to the duality framework used in~\cite{David.2005}. In order to build the foundations for our future work on $C^*$-quantum groupoids, the results are first given in this general setting, at the price of some technical details in the statements and proofs. The results which are used in the subsequent sections are then summarized and further refined in~\ref{irred} in the simpler context of Kac algebras. In Section~\ref{section.KP}, we describe the complete lattice of coideal subalgebras of the smallest non trivial Kac algebra, $K\!P$, using the construction of~\cite{Enock_Vainerman.1996}. In Section~\ref{graphe}, we gather some graphs that are obtained as principal graphs of intermediate subfactors. In Section~\ref{section.KD}, we give general results on $K\!D(n)$. We describe its intrinsic group and its automorphism group; more generally, we describe the embeddings of $K\!D(d)$ into $K\!D(n)$. Along the way, we build some general tools for manipulating twisted group algebras (isomorphism computations and proof strategies, coideal subalgebras induced by subgroups, efficient characterization of the algebra of the intrinsic group).
Those tools will be reused extensively in the later sections. We list the three dimension $2n$ coideal subalgebras: $K_2$, $K_3$, and $K_4$, and further prove that $K_2$ is isomorphic to $L^\infty(D_{n})$. We also describe some coideal subalgebras induced by Jones projections of subgroups of $D_{2n}$, as well as how to obtain recursively coideal subalgebras of dimension dividing $2n$. The properties of $K\!D(n)$ depend largely on the parity of $n$. In Section~\ref{section.KD.odd}, we concentrate on the case $n$ odd. We show that $K\!D(n)$ is then self-dual, by constructing an explicit isomorphism with its dual. We present a partial description of the lattice $\operatorname{l}(K\!D(n))$ which is conjecturally complete. This is proved for $n$ prime and checked by computer up to $n=51$. We conclude with some illustrations on $K\!D(n)$ for $n=3,5,9,15$. In Section~\ref{section.KD.even}, we consider $K\!D(2m)$. We prove that $K_3$ and $K_4$ are Kac subalgebras. In fact $K_4$ is isomorphic to $K\!D(m)$; $K_3$ is further studied in Section~\ref{section.KB}. It follows that a large part of the lattice of coideal subalgebras of $K\!D(2m)$ can be constructed inductively. We list the dimension $4$ coideal subalgebras of $K\!D(2m)$, and describe the complete lattice of coideal subalgebras for $K\!D(4)$ and $K\!D(6)$. In Section~\ref{section.KQ}, we briefly study the algebra $K\!Q(n)$. Its structure mimics that of $K\!D(n)$. In fact, we prove that $K\!Q(n)$ is isomorphic to $K\!D(n)$ if and only if $n$ is even and that $K\!Q(n)$ is self-dual when $n$ is odd. We give the complete lattice of $K\!Q(n)$ for $n$ prime, and list the coideal subalgebras of dimension $2$, $4$, $n$, and $2n$, for all $n$. Those results are used in the previous sections. In Section~\ref{section.KB}, we show that the Kac subalgebra $K_3$ of $K\!D(2m)$ is isomorphic to $K\!B(m)$.
We summarize the links between the families $K\!A(n)$, $K\!B(n)$, $K\!D(n)$, and $K\!Q(n)$, and the coideal subalgebras of dimension $4n$ in $K\!D(2n)$. We list the coideal subalgebras of dimension $2$, $4$, $m$, and $2m$ of $K_3$ in $K\!D(2m)$ and describe the complete lattice of coideal subalgebras for $K\!D(8)$. In Appendix~\ref{section.formulas}, we collect various large formulas for matrix units, coproducts, unitary cocycles, etc. which are used throughout the text, and we finish some technical calculations for the proof of Theorem~\ref{self-dual}. Finally, in Appendix~\ref{section.computerExploration}, we quickly describe the computer exploration tools we designed, implemented, and used, present typical computations, and discuss some exploration strategies. \centerline{\sc Acknowledgments} We would like to thank Vaughan Jones for his initial suggestion of investigating concrete examples of lattices of intermediate subfactors, and for fruitful questions. We are also very grateful to Leonid Vainerman for his continuous support and his reactivity in providing wise suggestions and helpful answers. This research was partially supported by NSF grant DMS-0652641, and was driven by computer exploration using the open-source algebraic combinatorics package \texttt{MuPAD-Combinat}~\cite{MuPAD-Combinat}. \section{Depth $2$ inclusions, intermediate subfactors and coideal subalgebras} \label{inclusions} In this section, we recall and refine the general results of D. Nikshych and L. Vainerman~\cite{Nikshych_Vainerman.2000.2} on depth $2$ inclusions and their Galois correspondence. In order to build the foundations for our future work on finite dimensional $C^*$-quantum groupoids, we do \emph{not} assume here the inclusion to be irreducible. Except for some technicalities, this does not affect the theory.
For the convenience of the reader, the results which are used in the subsequent sections are summarized and further refined in~\ref{irred} in the simpler context of Kac algebras. This mostly amounts to replacing $C^*$-quantum groupoids by Kac algebras, and setting $h=1$. For the general theory of subfactors, we refer to~\cite{Jones.1983.Subfactors}, \cite{Goodman_Harpe_Jones.1989} and~\cite{Jones_Sunder.1997}. \subsection{Depth $2$ inclusions and $C^*$-quantum groupoids} \subsubsection{$C^*$-quantum groupoids}\label{gq} A finite $C^*$-quantum groupoid is a \emph{weak} finite dimensional $C^*$-Hopf algebra: the image of the unit by the coproduct is not required to be $1\otimes1$, and the counit is not necessarily a homomorphism (see~\cite{Boem_Nill_Szlachanyi.1999}, \cite{Nikshych_Vainerman.2000.1} or~\cite[2.1]{David.2005}). More precisely: \begin{defn} A \emph{finite $C^*$-quantum groupoid} is a finite dimensional $C^*$-algebra $G$ (we denote by $m$ the multiplication, by $1$ the unit, and by $^{*}$ the involution) endowed with a coassociative coalgebra structure (we denote by $\Delta$ the coproduct, by $\varepsilon$ the counit, and by $S$ the antipode) such that: \begin{enumerate}[(i)] \item $\Delta$ is a $^{*}$-algebra homomorphism from $G$ to $G \otimes G$ satisfying: $$(\Delta \otimes \id)\Delta(1)=(1\otimes \Delta(1))(\Delta(1)\otimes 1)$$ \item The counit is a linear map from $G$ to $\mathbb{C}$ satisfying: \begin{displaymath} \varepsilon(fgh) =\varepsilon(fg_{(1)})\varepsilon(g_{(2)}h)\qquad ((f,g,h)\in G^3) \end{displaymath} Equivalently: \begin{displaymath} \varepsilon(fgh)=\varepsilon(fg_{(2)})\varepsilon(g_{(1)}h) \qquad ((f,g,h)\in G^3) \end{displaymath} \item The antipode $S$ is an algebra and coalgebra antiautomorphism of $G$ satisfying, for all $g$ in $G$: \begin{displaymath} m(\id \otimes S)\Delta(g)=(\varepsilon \otimes \id)(\Delta(1)(g \otimes 1)) \end{displaymath} or equivalently: \begin{displaymath} m(S \otimes \id )\Delta(g)=(\id \otimes
\varepsilon)((1 \otimes g)\Delta(1)) \end{displaymath} \end{enumerate} \end{defn} The \emph{target counital map} $\varepsilon_t$ and \emph{source counital map} $\varepsilon_s$ are the idempotent homomorphisms defined respectively for all $g\in G$ by: \begin{displaymath} \varepsilon_t(g)=(\varepsilon \otimes \id)(\Delta(1)(g \otimes 1)) \quad \text{and} \quad \varepsilon_s(g)=(\id \otimes \varepsilon)((1 \otimes g)\Delta(1)) \end{displaymath} \subsubsection{$C^*$-quantum groupoid associated to an inclusion} \label{action} Let $N_{0} \subset N_{1}$ be a depth $2$ inclusion of $\mathrm{II}_{1}$ factors of finite index $\tau^{-1}$. Consider the tower \begin{displaymath} N_0 \,\subset \, N_1 \, \stackrel{f_1}{\subset} \, N_2 \,\stackrel{f_2}{\subset} \, N_3 \subset\cdots \end{displaymath} obtained by basic construction. We denote by $tr$ the unique normalized trace of the factors. The relative commutants $A=N'_{0} \cap N_{2}$ and $B=N'_{1} \cap N_{3}$ are endowed with dual finite regular\footnote{$S^2$ is the identity on the counital algebras.} $C^*$-quantum groupoid structures thanks to the duality: $$\langle a,b\rangle = [N_{1}:N_{0}]^{2} tr(ahf_{2}f_{1}hb)\quad (a\in A, \;b\in B)$$ where $h$ is the square root of the index of the restriction of the trace $tr$ to $N'_{1} \cap N_{2}$ (see~\cite{Nikshych_Vainerman.2000.1} or~\cite[3.2]{David.2005}). The quantum groupoid $B$ acts on $N_2$ in such a way that $N_{3}$ is isomorphic to the crossed product $N_{2}\rtimes B$ and $N_1$ is isomorphic to the subalgebra of the points of $N_2$ fixed under the action of $B$. We recall that the inclusion is irreducible if and only if $B$ is a Kac algebra. We refer to~\cite{Nikshych_Vainerman.2000.1} for historical notes on the problem and references to analogous results. We have shown in~\cite{David.2005} that any finite $C^*$-quantum groupoid $B$ is isomorphic to a regular $C^*$-quantum groupoid and acts on the $II_1$ hyperfinite factor $R$. 
If $A$ and $B$ are two dual finite regular $C^*$-quantum groupoids, we can slightly modify the construction of~\cite{David.2005} so that the obtained tower of $\mathrm{II}_1$ hyperfinite factors $N_0 \subset N_1 \subset N_2 \subset N_3$ endows the relative commutants $N'_0 \cap N_2$ and $N'_1 \cap N_3$ (respectively isomorphic to $A$ and $B$ as $C^*$-algebras) with dual finite regular $C^*$-quantum groupoid structures which are respectively isomorphic to the original ones (see~\cite{David.2009}). Therefore we may, without loss of generality, reduce our study to finite regular $C^*$-quantum groupoids associated to an inclusion. In this context, there are convenient expressions for the counit, the source and target counital maps, the antipode, and the action: for any element $b\in B$, they satisfy: $$\varepsilon_{B}(b)=\tau^{-1}tr(hf_{2}hb), \qquad\qquad S_{B}(b) = hj_{2}(h^{-1})j_{2}(b)h^{-1}j_{2}(h),$$ $$\varepsilon^t_{B}(b)=\tau^{-1}E_{N'_1 \cap N_2}(bhf_{2}h^{-1}),\qquad \varepsilon^s_{B}(b)=\tau^{-1}E_{N'_2 \cap N_3}(j_2(b)h^{-1}f_{2}h),$$ where, for any $n \in \mathbb N$, $j_{n}$ is the antiautomorphism of $N'_{0} \cap N_{2n}$ defined by: $$ j_{n}(x)=J_{n}x^*J_{n}\qquad \text{ for } x\in N'_{0} \cap N_{2n}\,,$$ where $J_{n}$ is the canonical anti-unitary involution of $L^2(N_{n},tr)$, the space of the GNS construction of $N_{n}$~\cite[3.1]{David.2005}. The map $S^2_B$ is the inner automorphism implemented by $G=j_2(h^{-2})h^2$. The action of $B$ on $N_2$ is defined by: $$b\triangleright x= \tau^{-1} E_{N_2}(bxhf_{2}h^{-1})\qquad \text{ for } x\in N_2,\,b\in B\,.$$ There are analogous formulas for $A$. The algebra $N_{0}$ is then the algebra of the points of $N_{1}$ which are fixed under the action of the $C^*$-quantum groupoid $A$. The map $\Theta_A: [x \otimes a] \longmapsto xa$ is a von Neumann algebra isomorphism from $N_{1}\rtimes A$ to $N_{2}$. The map $\Theta_B$ from $N_{2}\rtimes B$ to $N_{3}$ is defined similarly.
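As a sanity check (a sketch we add here, derived only from the formulas above, not a statement taken from elsewhere): in the irreducible case one may take $h=1$, and the expressions specialize to
\begin{displaymath}
\varepsilon_{B}(b)=\tau^{-1}tr(f_{2}b), \qquad
S_{B}(b)=j_{2}(b), \qquad
b\triangleright x= \tau^{-1} E_{N_2}(bxf_{2}),
\end{displaymath}
while $G=j_2(h^{-2})h^2=1$, so that $S^2_B=\id$; one thus recovers the fact recalled above that the inclusion is irreducible if and only if $B$ is a Kac algebra.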
In the sequel, we mostly consider the $C^*$-quantum groupoid $B$, and then set $\Theta=\Theta_B$ for short. \subsection{Galois correspondence}\label{galois} In~\cite{Nikshych_Vainerman.2000.2}, D. Nikshych and L. Vainerman show that the correspondence between finite index depth $2$ inclusions and finite $C^*$-quantum groupoids is a Galois-type correspondence. \subsubsection{Involutive left coideal subalgebra} \label{sacig} \begin{defn} An \textbf{involutive left (resp. right) coideal $*$-subalgebra} (or \emph{coideal subalgebra} for short) $I$ of $B$ is a unital $C^*$-subalgebra of $B$ such that $\Delta(I) \subset B \otimes I$ (resp. $\Delta(I) \subset I \otimes B$). As in~\cite[3.1]{Nikshych_Vainerman.2000.2}, the lattice of left (resp. right) coideal subalgebras of $B$ is denoted by $\operatorname{l}(B)$ (resp. $\operatorname{r}(B)$). \end{defn} \subsubsection{Cross product by a coideal subalgebra} By the definition of a coideal subalgebra, the image of $N_2\otimes I$ by $\Theta$ is a von Neumann subalgebra $M_3$ of $N_3$, called the \textit{cross product of $N_2$ by $I$} and denoted by $M_3=N_2\rtimes I$. \subsubsection{Intermediate subalgebras of an inclusion} \label{inter} Let $\operatorname{l}(N_{2}\subset N_{3})$ be the lattice of intermediate subalgebras of an inclusion $N_{2} \subset N_{3}$, that is, von Neumann algebras $M_3$ such that $N_{2}\subset M_3\subset N_{3}$. In Theorem 4.3 of~\cite{Nikshych_Vainerman.2000.2}, D. Nikshych and L. Vainerman establish an isomorphism between $\operatorname{l}(N_{2} \subset N_{3})$ and $\operatorname{l}(B)$. More precisely, if $M_3$ is an intermediate von Neumann subalgebra of $N_{2} \subset N_{3}$, then $I=N'_1 \cap M_3$ is a coideal subalgebra of $B$. Reciprocally, if $I$ is a coideal subalgebra of $B$, then $M_3= N_{2}\rtimes I$ is an intermediate von Neumann subalgebra of $N_{2} \subset N_{3}$.
In~\cite[4.5]{Nikshych_Vainerman.2000.2}, they prove that the intermediate von Neumann subalgebra is a factor if and only if the coideal subalgebra is \emph{connected}, that is, $Z(I)\cap B_s$ is trivial. \emph{From now on, we only consider connected coideal subalgebras.} \subsubsection{Quasi-basis}\label{quasibase} The notion of quasi-basis is defined in~\cite[1.2.2]{Watatani.1990} and made explicit in the special case of depth $2$ inclusions in~\cite[3.3]{David.2005}. \begin{prop} Let $\{c_t{\ |\ } t\in T\}$ be a family of matrix units of $I$, normalized by $tr(c^{*}_t c_{t'})=\delta_{t,t'}$. Then, $\{c_t h^{-1} {\ |\ } t\in T\}$ is a quasi-basis of $M_3 \cap N'_1$ over $N_2\cap N'_1$, and therefore a quasi-basis of $M_3$ over $N_2$. The index $\tau^{-1}_I$ of $N_2$ in $M_3$ is then $\sum_{t\in T} c_t H^{-1} c^*_t$. \end{prop} \begin{proof} By~\cite[2.4.1]{Watatani.1990}, the family $\{c_t h^{-1}{\ |\ } t\in T\}$ is a quasi-basis of $M_3 \cap N'_1$ over $N_2\cap N'_1$. Since $M_3$ is generated by $N_2$ and $I$, this family is a quasi-basis of $M_3$ over $N_2$ (see also~\cite[3.3]{David.2005}). Since the trace of the factors is a Markov trace, the index of the conditional expectation of $M_3$ over $N_2$ is the scalar $\tau^{-1}_I=\sum_{t\in T} c_t H^{-1} c^*_t$~\cite[2.5.1]{Watatani.1990}. \end{proof} \begin{cor} If the inclusion $N_0 \subset N_1$ is irreducible, the family $\{c_t{\ |\ } t\in T\}$ of normalized matrix units of $I$ is a Pimsner-Popa basis of $M_3$ over $N_2$, and the dimension of $I$ coincides with the index of $N_2$ in $M_3$. \end{cor} \begin{proof} In this case, $h=1$. Therefore $[M_3:N_2]$ is the dimension of $I$, and the family $\{c_t{\ |\ } t\in T\}$ is a Pimsner-Popa basis of $M_3$ over $N_2$ (see~\cite[1.3]{Pimsner_Popa.1986} and~\cite[5.2.1]{David.1996}).
\end{proof} \subsection{The Jones projection of a coideal subalgebra\xspace} \label{pI} \subsubsection {Restriction of the counit to $I$} \label{xI} It has been noticed in~\cite{David.2005} that the counit, the Haar projection of $B$, and the Jones projection are closely related. This phenomenon also occurs for coideal subalgebra\xspaces. \begin{prop}[{Proposition~\cite[3.5]{Nikshych_Vainerman.2000.2}}] The restriction of $\varepsilon$ to $I$ is a positive linear form on $I$. Therefore there exists a unique positive element $x_I$ of $I$ such that $\varepsilon(b)=tr(x_Ib)$ for all $b$ in $I$. Then, $$\Delta(x_I)= \sum_{t\in T} S(c_t^{*})G \otimes c_t \quad \text{and}\quad S(x_I)=x_IG^{-1}\,.$$ \end{prop} Note that we use here a single trace, namely the normalized Markov trace which is that of the factors. \subsubsection {The Jones projection of $I$} \label{defpI} We now extend Proposition 4.7 of~\cite{Nikshych_Vainerman.2000.2}: \begin{prop} \label{proposition.delta_pI} The element $p_I=\tau_I h^{-1}x_Ih^{-1}$ is a projection of $I$ of trace $\tau_I$ which we call the \textit{Jones projection of $I$}. Furthermore: \begin{align*} p_I&=\tau_I \tau^{-1} E_I(f_2)\\ \Delta(p_I)&= \tau_I \sum_{t\in T} h^{-1}S(H^{-1}c_t^{*})h \otimes c_t=\tau_I \sum_{t\in T} j_2(c_t^{*}h^{-1})\otimes c_t h^{-1}\\ j_2(p_I)&=p_I \quad \text{and}\quad S(p_I)=G^{1/2}p_IG^{-1/2} \end{align*} \end{prop} \begin{proof} Note that, in the proof of Proposition 4.7 of~\cite{Nikshych_Vainerman.2000.2}, $\varepsilon_s(H^{-1}x_I)$ is exactly $\tau_I^{-1}$. That $p_I$ is a projection follows from the same proof. The equality $p_I=\tau_I \tau^{-1} E_I(f_2)$ follows from the uniqueness of $x_I$, and yields the trace of $p_I$. The counit restricted to $I$ can then be expressed as $\varepsilon(y)=\tau^{-1}_I tr(yhp_Ih)$, for all $y\in I$. The coproduct formula is obtained using~\ref{quasibase} and~\ref{xI}.
The formula of~\ref{action} for the antipode gives the second expression, which also involves the quasibasis (see~\ref{quasibase}). From $S(x_I)=x_IG^{-1}$ and~\ref{action}, one deduces successively $$j_2(x_I)=h^{-1}j_2(h)x_Ih^{-1}j_2(h)\,, \quad j_2(p_I)=p_I\,, \quad\text{ and }\quad S(p_I)=G^{1/2}p_IG^{-1/2}\,.$$ \end{proof} \begin{remark} \label{remark.jones_left_legs} The projection $p_I$ generates $I$ as a coideal subalgebra\xspace. Furthermore, the coideal subalgebra\xspace $I$ is the linear span of the right legs of the coproduct of $p_I$. In general, we denote by $I(f)$ the coideal subalgebra\xspace generated by a projection $f$. \end{remark} \subsubsection {The Jones projection of $I$ is a Bisch projection} \label{pbisch} In~\cite{Bisch.1994}, D.~Bisch gives the following partial characterization of Jones projections. Let $BP(N_1, N_2)$ be the set of projections $q$ of $N'_{1} \cap N_{3}$ satisfying \begin{description} \item[BP1] $qf_2=f_2$; \item[BP2] $E_{N_2}(q)$ is a scalar; \item[BP3] $E_{N_2}(q f_1f_2)$ is a scalar multiple of a projection. \end{description} Then, $BP(N_1, N_2)$ contains all the Jones projections of the intermediate subfactors of $N_{1} \subset N_{2}$ and is contained in the set of all the Jones projections of the intermediate von Neumann subalgebras of $N_{1} \subset N_{2}$. Using this result, D. Nikshych and L. Vainerman show that $p_I$ is the Jones projection\footnote{This motivates our terminology \emph{Jones projection of $I$}.} of the intermediate inclusion $M_1=\{p_I\}' \cap N_2 \subset N_2$ of $N_{1} \subset N_{2}$~\cite[Proposition~4.8]{Nikshych_Vainerman.2000.2}. \begin{prop}(\cite[Proposition~4.8]{Nikshych_Vainerman.2000.2}) The projection $p_I$ belongs to $BP(N_1, N_2)$. It implements the conditional expectation on the intermediate subfactor $M_1=\{p_I\}'\cap N_2$, whose index in $N_2$ is $\tau_I^{-1}$.
\end{prop} The upcoming proof follows that of Proposition~4.8 of~\cite{Nikshych_Vainerman.2000.2}, albeit with our notations and duality. We identify $\Theta(z\otimes 1)$ and $z$ for $z \in N_2$, as well as $\Theta(1\otimes b)$ and $b$ for $b \in B$. \begin{proof} Since the source and target counits satisfy $\varepsilon_s \circ S=S\circ \varepsilon_t$, it follows from $\varepsilon_s(H^{-1}x_I)=\tau_I^{-1}$ that $\varepsilon_t(hp_Ih^{-1})=1$. By definition of the Haar projection $e_B=d^{-1} hf_2h^{-1}$ of $B$, we get $hp_Ih^{-1}e_B=e_B$; this is property (BP1). We use Proposition~\ref{defpI} to compute $E_{N_2}(p_I)$, and obtain property (BP2); indeed: $$E_{N_2}(p_I)=E_{N_2}(\tau_I \tau^{-1} E_I(f_2))=\tau_I \tau^{-1}E_{N_2}(f_2)=\tau_I\,.$$ From Corollaries~2.2 and~3.9 of~\cite{Nikshych_Vainerman.2000.2}, and from the expression of the coproduct of $p_I$, we deduce that, for any $b \in B$, $\tau_I^{-1} b_{(1)}tr(hp_Ihb_{(2)})$ belongs to $I$ and coincides with $E_I(b)$. The element $E_{N_2}(p_If_1f_2)$ belongs to $A$; using duality, \cite[3.1.3]{David.2005}, and the expression of $E_I$, one gets: \begin{align*} \langle E_{N_2}(p_If_1f_2), b\rangle&=\tau^{-2}tr(E_{N_2}(p_If_1f_2)hf_2f_1hb)=\tau^{-1}tr(p_If_1f_2f_1hbj_2(h))\\ &=tr(p_If_1hbj_2(h))=\tau tr(p_Ihbj_2(h))\\ &=\tau tr(p_IhE_I(bj_2(h)))=\tau \tau_I^{-1}tr(p_Ihb_{(1)})tr(hp_Ihb_{(2)}j_2(h))\\ &=\tau^{-1} \tau_I^{-1}\langle E_{N_2}(p_If_1f_2)h^{-1},b_{(1)}\rangle\langle h E_{N_2}(p_If_1f_2),b_{(2)}\rangle\\ &=\tau^{-1} \tau_I^{-1}\langle E_{N_2}(p_If_1f_2)^2,b\rangle\,. \end{align*} Therefore, $\tau^{-1} \tau_I^{-1}E_{N_2}(p_If_1f_2)$ is an idempotent. Using $j_2(p_I)=p_I$ and~\cite[3.1.3]{David.2005}, it is moreover self-adjoint: $$E_{N_2}(p_If_1f_2)^*=E_{N_2}(f_2f_1p_I)=E_{N_2}(p_If_1f_2)\,.$$ By~\cite[4.2]{Bisch.1994.2}, $p_I$ implements the conditional expectation on the intermediate subfactor $M_1=\{p_I\}'\cap N_2$; its index in $N_2$ is $\tau_I^{-1}$ since $\tau_I$ is the trace of $p_I$.
\end{proof} \begin{remark} \begin{enumerate} \item In~\cite[Remark 4.4]{Bisch.1994}, D.~Bisch notes that BP3 can be replaced by BP3': $qN_2q=N_2q$. In the depth $2$ case, it is easy to show that one can also use instead BP3": $qAq=Aq$. Indeed, BP3" follows from BP3' by application of $E_{N'_0 \cap N_3}$. Conversely, if BP3" holds, then BP3' follows easily since $N_2$ is spanned linearly by $N_1.A$ and $q$ commutes with $N_1$. Furthermore, for any $a\in A$, $p_Iap_I=E_{\delta(I)}(a)p_I$, where $\delta(I)$ is defined in~\ref{d} and calculated in~\ref{tour}~(1). \item Using~\ref{tour}, the projection $\tau^{-1} \tau_I^{-1}E_{N_2}(p_If_1f_2)$ can be described as the Jones projection of $\delta(I)$; indeed, in this setting and thanks to~\ref{proposition.delta_pI}, one can write: \begin{align*} \tau^{-1} \tau_I^{-1}E_{N_2}(p_If_1f_2)&=\tau^{-1} \tau_I^{-1}E_{N_2}(p_If_1E_{M_3}(f_2))=\tau_I^{-2}E_{N_2}(p_If_1p_I)\\ &=\tau_I^{-2}E_{N_2}(E_{M_1}(f_1)p_I)=\tau_I^{-1}E_{\delta(I)}(f_1)=p_{\delta(I)}\,. \end{align*} \end{enumerate} \end{remark} \subsubsection {Distinguished projection of $I$} \label{distingue} Following Definition 3.6 of~\cite{Nikshych_Vainerman.2000.2}, we denote by $e_I$ the support of the restriction of $\varepsilon$ to $I$, and call it the \emph{distinguished projection}\footnote{It could be called the \textit{Haar projection}, since it satisfies all the relevant properties.} of $I$. It is the minimal projection with the property $\varepsilon(e_Iye_I)=\varepsilon(y)$ for all $y$ in $I$. Furthermore, $e_I$ satisfies: $x_Ie_I=e_Ix_I=x_I$, $\varepsilon$ is faithful on $e_IIe_I$, and for all $b\in I$ one has $be_I=\varepsilon_t(b)e_I$~\cite[Proposition~3.7]{Nikshych_Vainerman.2000.2}. \begin{prop} The distinguished projection of $I$ is $e_I= E_{M_1}(H^{-1}) hp_Ih$. \end{prop} \begin{proof} Since $h$ belongs to the center of $N'_1 \cap N_2$, one has: $E_{M_1}(h^{-1})=E_{M_1}(h)^{-1}$. Let us first establish some properties of $e=E_{M_1}(H^{-1}) hp_Ih$.
Since $p_I$ implements the conditional expectation on $M_1$, $e$ is a projection which satisfies $\varepsilon(eye)=\varepsilon(y)$ for $y$ in $I$ and $x_Ie=ex_I=x_I$. From the formula for $\varepsilon_t$ we derive: \begin{align*} \varepsilon_t(e)&=\tau^{-1}E_{N'_1\cap N_2}(E_{M_1}(H)^{-1}hp_IHf_2h^{-1})\\ &=\tau^{-1}E_{N'_1\cap N_2}(E_{M_1}(H)^{-1}hp_IHp_If_2h^{-1})\\ &=\tau^{-1}E_{N'_1\cap N_2}(hf_2h^{-1})=1\,, \end{align*} as well as, for all $y \in I$: \begin{align*} \varepsilon_t(y)e&=\tau^{-1}E_{N'_1\cap N_2}(yhf_2h^{-1}E_{M_1}(H^{-1}) h)p_IE_{M_1}(H^{-1})h\\ &=\tau^{-1}E_{N'_1\cap N_2}(yhf_2p_I)p_IE_{M_1}(H^{-1})h\\ &=\tau^{-1} \tau_I yhE_I(f_2)p_IE_{M_1}(H^{-1})h\\ &= yE_{M_1}(H^{-1})hp_Ih=ye\,.\\ \end{align*} On the other hand, from $e_I=\varepsilon_t(e_I)e_I$ we deduce successively: \begin{displaymath} x_I=\varepsilon_t(e_I)x_I, \qquad \varepsilon_t(H^{-1}x_I)=\tau_I^{-1}, \qquad\text{and}\qquad \varepsilon_t(e_I)=1\,. \end{displaymath} Since $ee_I=\varepsilon_t(e)e_I=e_I$, we can write $$e_I=ee_I=e_Ie= \varepsilon_t(e_I)e=e\,.\qedhere$$ \end{proof} \subsection{The inclusion associated with a coideal subalgebra\xspace and its Jones tower} \subsubsection{The antiisomorphism $\delta$} \label{d} In~\cite[Proposition~3.2]{Nikshych_Vainerman.2000.2}, D. Nikshych and L. Vainerman define a lattice isomorphism from $\operatorname{l}(B)$ to $\operatorname{r}(B)$ by $I \longmapsto \tilde{I}= G^{-1/2}S_B(I)G^{1/2}$. This isomorphism is induced by $j_2$; indeed, $\tilde{I}$ is simply $j_2(I)$. Note that $j_2(I)$ is the linear span of the left legs of $\Delta(p_I)$, as well as the right coideal subalgebra\xspace generated by $p_I$. Following~\cite[Proposition~3.3]{Nikshych_Vainerman.2000.2}, consider $B$ as embedded into $A\rtimes B= N'_0 \cap N_3$, and for $I$ in $\operatorname{l}(B)$, denote by $\delta(I)$ the coideal subalgebra\xspace $j_2(I)'\cap A$ of $A$. 
Since $j_2(I)$ is the right coideal subalgebra\xspace generated by $p_I$, $\delta(I)$ is the commutant of $p_I$ in $A$. The map $\delta$ is a lattice antiisomorphism from $\operatorname{l}(B)$ to $\operatorname{l}(A)$~\cite[Proposition~3.3]{Nikshych_Vainerman.2000.2}. In particular, $\operatorname{l}(B)$ is self-dual whenever $B$ is. In this case, we identify $\delta(I)$ and its image in $B$ via some isomorphism from $A$ to $B$. Note that this is slightly abusive, as this is only defined up to an automorphism of $B$. \subsubsection{The Jones tower}\label{tour} We keep the notations of~\ref{pI} and study the inclusion $M_1 \subset N_2$ whose Jones projection is $p_I$. \begin{prop}\ \begin{enumerate} \item The algebra $M_1=\{p_I\}' \cap N_2$ is the subalgebra of $N_2$ obtained by cross product of $N_1$ by $\delta(I)=N'_0 \cap M_1$; \item Let $N_2^I$ be the subalgebra of fixed points of $N_2$ under the action of $I$; then, $M_1=h^{-1}N_2^Ih$; \item $M_1=N_1\rtimes\delta(I) \subset N_2=N_1\rtimes A \subset M_3=N_2\rtimes I$ is the basic construction. The relative commutant $M'_1 \cap M_3$ is $I\cap j_2(I)$. \item The indices satisfy the following relations: $[N_2\rtimes I:N_2]=[N_2:N_2^I]$ and $[N_1\rtimes \delta(I):N_1][N_2\rtimes I:N_2]=[N_2\rtimes B:N_2]$. \item For any two coideal subalgebra\xspaces $I_1$ and $I_2$, $$I_1 \subset I_2\quad \Leftrightarrow \quad p_{I_2} \leq p_{I_1}\,.$$ \end{enumerate} \end{prop} \begin{proof}\ \begin{enumerate} \item The element of $\operatorname{l}(A)$ associated to $M_1$ by the Galois correspondence (see~\ref{inter}) is $\delta(I)$; indeed: $$N'_0 \cap M_1= N'_0 \cap \{p_I\}' \cap N_2=\{p_I\}' \cap A= \delta(I)\,.$$ \item Let $y= hzh^{-1}$, with $z \in M_1$.
Then, for $b\in I$, \begin{displaymath} \begin{array}{lclcl} b\triangleright y & = &\tau^{-1} E_{N_2}(bhzf_{2}h^{-1}) & =& \tau^{-1} E_{N_2}(bhzp_If_{2}h^{-1}) \\ &=& \tau^{-1} E_{N_2}(bhp_Izf_{2}h^{-1}) &=& \tau^{-1}\tau_I E_{N_2}(bx_Ih^{-1}zf_{2}h^{-1}) \\ & =&\tau^{-1}\tau_I E_{N_2}(be_Ix_Ih^{-1}zf_{2}h^{-1}) & =&\tau^{-1}\tau_I E_{N_2}(\varepsilon^t_B(b)e_Ix_Ih^{-1}zf_{2}h^{-1}) \\ &=& \tau^{-1}\tau_I \varepsilon^t_B(b)E_{N_2}(e_Ix_Ih^{-1}zf_{2}h^{-1}) &= &\tau^{-1} \varepsilon^t_B(b)E_{N_2}(hp_Izf_{2}h^{-1}) \\ &=& \tau^{-1} \varepsilon^t_B(b)E_{N_2}(hzf_{2}h^{-1}) &=&\varepsilon^t_B(b)\triangleright y\,. \end{array} \end{displaymath} Therefore, $M_1$ is contained in $h^{-1}N_2^Ih$.\\ Conversely, take $y \in h^{-1}N_2^Ih$; then, the following equalities hold: \begin{align*} p_Ih^{-1}\triangleright hy h^{-1} & = \varepsilon^t_B(p_Ih^{-1}) \triangleright (hy h^{-1})\\ \tau^{-1} E_{N_2}(p_Iyf_{2})h^{-1} & = \tau^{-1} E_{N'_1\cap N_2}(p_If_{2})yh^{-1}\\ \tau^{-1} E_{N_2}(p_Iyp_If_{2})& = y\\ \tau^{-1} E_{M_1}(y)E_{N_2}(p_If_{2})& = y\\ E_{M_1}(y)& = y \end{align*} Therefore, $y$ belongs to $M_1$ and (2) is proved. \item The algebra $M_3=\langle N_2,p_I \rangle$ obtained by basic construction from $M_1\subset N_2$ is, using (1): $$J_2 M'_1 J_2=J_2 (N_1 \cup \delta(I))' J_2=J_2 N'_1 \cap \delta(I)'J_2=(J_2 N'_1J_2) \cap (J_2\delta(I)'J_2)$$ On the one hand, $J_2 N'_1 J_2$ is $N_3$; on the other hand, $\delta(I)'$ contains both $N'_2$ and $\delta(I)'\cap B$. Therefore, by~\cite[3.3]{Nikshych_Vainerman.2000.2}, $M_3$ contains both $N_2$ and $I$, while also being contained in $N_2 \rtimes I$. Hence $M_3=N_2 \rtimes I$. Furthermore, $$I\cap \tilde{I}=I\cap j_2(I)=(N'_1\cap M_3) \cap (M'_1\cap N_3)= M'_1 \cap M_3\,.$$ \item Follows from (2), (3), and Proposition 2.1.8 of~\cite{Jones.1983.Subfactors}. \item Y.
Watatani shows at the beginning of~\cite[Part II]{Watatani.1996} that if $M$ and $P$ are two intermediate subfactors of $N_1\subset N_2$, then $M$ is contained in $P$ if and only if $e^{N_2}_{M}$ is dominated by $e^{N_2}_{P}$. Let us apply this result to $P=J_2(N_2\rtimes I_1)'J_2$ and $M=J_2(N_2\rtimes I_2)'J_2$, where $I_1$ and $I_2$ are two coideal subalgebra\xspaces; then, $p_{I_2} \leq p_{I_1}$ is equivalent to $M \subset P$ and therefore to $I_1\subset I_2$ since $\delta$ is an antiisomorphism. \end{enumerate} \end{proof} \subsection{The principal graph of an intermediate inclusion} \label{grapheprincipal} The principal graph of an inclusion is obtained from the Bratteli diagram of the tower of relative commutants or from the equivalence classes of simple bimodules (see~\cite{Goodman_Harpe_Jones.1989} and~\cite{Jones_Sunder.1997}). It is an invariant of the inclusion. By~\cite[5.9]{Nikshych_Vainerman.2000.2}, the principal graph of the inclusion $N_2 \subset N_2\rtimes I$ is the connected component of the Bratteli diagram of $\delta(I) \subset A$ containing the trivial representation of $A$. All the principal graphs we obtained in our examples are gathered in Section~\ref{graphe}. \subsection{Depth $2$ intermediate inclusions} \subsubsection{$C^*$-quantum subgroupoid}\label{sgq} A coideal subalgebra\xspace $I$ is a \emph{$C^*$-quantum subgroupoid} of $B$ if it is stabilized by the coproduct and antipode of $B$; then, it becomes a $C^*$-quantum groupoid for the structure induced from $B$. The following proposition provides some equivalent characterizations.
\begin{prop} For $I$ a coideal subalgebra\xspace of $B$, the following are equivalent: \begin{enumerate}[(i)] \item $I$ is a $C^*$-quantum subgroupoid of $B$; \item $I$ is stabilized by the antipode $S$; \item $I$ is stabilized by the antiautomorphism $j_2$; \item $I$ is stabilized by $\Delta$: $\Delta(I)\subset I\otimes I$; \item $\Delta^{\sigma}(p_I)=(h^{-1}j_2(h) \otimes 1)\Delta(p_I)(h^{-1}j_2(h) \otimes 1)$. \end{enumerate} \end{prop} \begin{proof} By definition, (i) is equivalent to (ii) and (iv) together. (ii) $\Longleftrightarrow$ (iii): It is sufficient to remark that if $I$ is stabilized by $S$ or $j_2$, then $j_2(h)$ belongs to $I$. It remains to use the relation between $S$, $h$ and $j_2$ (see~\ref{action}). (iii) $\Longrightarrow$ (v): Assume that $I$ is stabilized by $j_2$. Then $\{j_2(c^*_t){\ |\ } t\in T\}$ is a family of normalized matrix units of $I$; using it, one can write the coproduct of $p_I$ and check the claimed relation between $\Delta(p_I)$ and $\Delta^{\sigma}(p_I)$. (v) $\Longrightarrow$ (iii): Assume that $\Delta^{\sigma}(p_I)=(h^{-1}j_2(h) \otimes 1)\Delta(p_I)(h^{-1}j_2(h) \otimes 1)$. Then the left and right legs of $\Delta(p_I)$ span the same subspace, which is $I=j_2(I)$. (ii) $\Longrightarrow$ (iv): $$\Delta(I)=(S^{-1} \otimes S^{-1})\Delta^{\sigma}(S(I)) \subset S^{-1}(I)\otimes I=I\otimes I\,.$$ (iv) $\Longrightarrow$ (iii): Assume $\Delta(I)\subset I\otimes I$. Then, $\Delta(p_I)$ is in $I\otimes I$, and by Proposition~\ref{proposition.delta_pI}, $I$ is stabilized by $j_2$. \end{proof} \subsubsection{$C^*$-quantum subgroupoid and intermediate inclusions}\label{prof2} From~\ref{tour}~(3) and~\ref{sgq}, a coideal subalgebra\xspace $I$ is a $C^*$-quantum subgroupoid of $B$ if and only if $I=M'_1 \cap M_3$. If $I$ is a $C^*$-quantum subgroupoid, the inclusion $N_2 \subset N_2 \rtimes I$ is of depth $2$.
In~\ref{irredprof2}, we will show that the converse holds as soon as the inclusions are irreducible, while the following example shows that it can fail without the irreducibility condition. \subsubsection{An example} As in~\cite[2.7]{Nikshych_Vainerman.2002} and~\cite[5]{David.2005}, we consider the Jones subfactor of index $4\cos^2\frac{\pi}{5}$ in $R$. We denote by $P_0\subset P_1$ this inclusion whose principal graph is $A_4$ and write $$P_0\subset P_1\subset P_2\subset P_3\subset P_4\subset P_5\subset P_6\subset P_7\subset P_8\subset P_9$$ the tower obtained by basic construction. The relative commutants are minimal; they are the Temperley-Lieb algebras. From~\cite[4.1]{Nikshych_Vainerman.2000.2}, the inclusions $P_0\subset P_2$ and $P_0\subset P_3$ are of depth $2$. Consider $P_8$ as an intermediate subfactor of $P_6\subset P_9$; it is obtained by cross product of $P_6$ by a coideal subalgebra\xspace $I=P'_3\cap P_8$ of $P'_3 \cap P_9$. The algebra $P'_3 \cap P_9$ is endowed with a $C^*$-quantum groupoid structure since the inclusion $P_0\subset P_3$ is of depth $2$. On the other hand, since the inclusion $P_0\subset P_2$ is of depth $2$, this is also the case for $P_6\subset P_8$. However, by dimension count, $P'_4 \cap P_8$ is strictly contained in $I$. \subsection{The $C^*$-quantum groupoids of dimension $8$} The smallest non trivial $C^*$-quantum groupoids are of dimension $8$. They are described in~\cite[3.1]{Nikshych_Vainerman.2000.3} and in~\cite[7.2]{Nikshych_Vainerman.2000.1}. L. Vainerman pointed out to us that, since their representation categories are the same as for $\mathbb{Z}_2$, their intermediate subfactor lattices are trivial. Together with the Kac-Paljutkin algebra $K\!P$, this covers all the non trivial $C^*$-quantum groupoids of dimension~$8$. \subsection{Special case of irreducible inclusions} \label{irred} In this article, we concentrate on Kac algebras, and therefore on irreducible inclusions of finite index.
In this case, any intermediate inclusion is of integral index (see~\cite{Bisch.1994.2}); in fact, as we showed in~\ref{quasibase}, the index of $N_2 \subset N_2 \rtimes I$ is the dimension of $I$. Furthermore, the lattice of intermediate subfactors is finite (see~\cite{Watatani.1996}). \subsubsection{Kac algebras and $C^*$-Hopf algebras} \label{KacHopf} This is a summary of~\cite[6.6]{Enock_Schwartz.1992} which relates Kac algebras and $C^*$-Hopf algebras. Let $B$ be a $C^*$-algebra of finite dimension\footnote{Recall that any $C^*$-algebra of finite dimension is semi-simple.}. Set $B=\oplus_j M_{n_j}(\mathbb{C})q_j$ and denote by $Tr_j$ the canonical trace\footnote{$Tr_j(q_j)=n_j$.} on the factor $M_{n_j}(\mathbb{C})q_j$. The normalized canonical trace of $B$ is given by the formula $tr(x)=\frac{1}{\dim B}\sum_j n_j Tr_j(xq_j)$. If $(B, \Delta, \varepsilon, S)$ is a $C^*$-Hopf algebra, then $(B, \Delta, S, tr)$ is a Kac algebra. Conversely, if $(B, \Delta, S, tr)$ is a Kac algebra, then, from \cite[6.3.5]{Enock_Schwartz.1992}, there exists a projection $p$ of dimension $1$ in the center of $B$ such that $\varepsilon$, defined by $\varepsilon(x)=tr(xp)$, is the counit of $B$, and $(B, \Delta, \varepsilon, S)$ is a $C^*$-Hopf algebra. The coinvolution, or antipode, of a Kac algebra constructed from a depth $2$ irreducible inclusion coincides with $j_2$ (see~\cite{David.1996}). From~\cite[III.3.2]{Kassel.1995}, the antipode of a $C^*$-Hopf algebra is uniquely determined by the rest of the structure. Moreover, in the examples we consider, the coinvolution is not deformed; we will therefore often skip the routine checks on the coinvolution in the sequel.
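For readers who want a concrete handle on the normalized canonical trace, here is a small numerical sanity check (a Python/numpy sketch; the toy algebra $B=\mathbb{C}\oplus\mathbb{C}\oplus M_2(\mathbb{C})$ and the helper names are ours, not notation from the text). The formula $tr(x)=\frac{1}{\dim B}\sum_j n_j Tr_j(xq_j)$ indeed gives $tr(1)=1$, and a minimal projection of a one-dimensional block has trace $1/\dim B$:

```python
import numpy as np

# Toy example: B = C (+) C (+) M_2(C), realized as block-diagonal 4x4 matrices.
# (Illustrative only; `block_sizes`, `Tr_block`, `tr` are our own helper names.)
block_sizes = [1, 1, 2]                  # the n_j
dim_B = sum(n * n for n in block_sizes)  # vector-space dimension: 1 + 1 + 4 = 6

def Tr_block(x, j):
    """Canonical trace Tr_j of the j-th diagonal block of x, i.e. Tr_j(x q_j)."""
    s = sum(block_sizes[:j])
    n = block_sizes[j]
    return np.trace(x[s:s + n, s:s + n])

def tr(x):
    """Normalized canonical trace: tr(x) = (1/dim B) * sum_j n_j Tr_j(x q_j)."""
    return sum(n * Tr_block(x, j) for j, n in enumerate(block_sizes)) / dim_B

one = np.eye(sum(block_sizes))
assert abs(tr(one) - 1) < 1e-12          # the trace is normalized: tr(1) = 1

e1 = np.diag([1.0, 0, 0, 0])             # minimal projection of a 1-dim block
assert abs(tr(e1) - 1 / dim_B) < 1e-12   # its trace is 1/dim B
```

With block sizes $(1,1,1,1,2)$ the same formula recovers the traces $tr(e_i)=\frac{1}{8}$ and $tr(e_{1,1})=\frac{1}{4}$ stated for $K\!P$ in Section~\ref{section.KP}.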
\subsubsection{} \label{resumep} The following proposition summarizes the properties of coideal subalgebra\xspaces and their Jones projections in the irreducible case: \begin{prop} \label{prop.resumep} Assume that the inclusion $N_0 \subset N_1$ is irreducible, and consider a coideal subalgebra\xspace $I$ in $\operatorname{l}(B)$. Then, $I$ is connected, and \begin{enumerate} \item $\tau_I^{-1} = tr(p_I)^{-1}=[N_2\rtimes I:N_2]=\dim I$, and $\dim I$ divides $\dim B$. \item The Jones projection $p_I$ dominates $f_2$, belongs to the center of $I$, and satisfies: \begin{displaymath} Ip_I=\mathbb{C} p_I \quad \text{ and } \quad \varepsilon(b)=\dim I \; tr(p_Ib), \text{ for all } b\in I\,. \end{displaymath} \item The coideal subalgebra\xspace $I$ is the subspace spanned by the right legs of $p_I$, and $$\Delta(p_I)= (\dim I)^{-1}\sum_{t\in T} S(c_t^{*}) \otimes c_t\,.$$ \item $\displaystyle\Delta(p_I)(1\otimes p_I)=(p_I\otimes p_I)\,.$ \end{enumerate} \end{prop} \begin{proof} (1) follows from Corollary~\ref{quasibase} and~\ref{tour}~(4). (2) In this case, the counital algebras are trivial, the element $h$ is $1$, and the projections $e_I$ and $p_I$ both coincide with $(\dim I)^{-1} x_I$. Take $y\in I$. From~\cite[3.7]{Nikshych_Vainerman.2000.2} (see also~\ref{distingue}), one has $yp_I=\varepsilon(y)p_I$, and it follows successively that $p_Iy^*=\overline{\varepsilon(y)}p_I$, and $p_Iy=\overline{\varepsilon(y^*)}p_I=\varepsilon(y)p_I$. Therefore $p_I$ is in the center of $I$. (3) follows from~\ref{defpI}. (4) Since $Ip_I$ is of dimension $1$, one can set $c_{t_0}=\tau_I^{-1/2}p_I$ for some $t_0\in T$, and $c_tp_I=0$ for all $t\in T\backslash\{t_0\}$. The desired equation follows easily. \end{proof} An equivalent statement of Proposition~\ref{prop.resumep}~(4) can be found in~\cite{Baaj_Blanchard_Skandalis.1999}; namely, in their setting, the image of $p_I$ in $L^2(B,tr)$ is a presubgroup. The following remark is useful in practice when searching for coideal subalgebra\xspaces.
\begin{remark} \label{remark.resumep} Let $p$ be a projection of trace $1/k$, for some divisor $k$ of $\dim B$, which dominates $f_2$ and generates a coideal subalgebra\xspace $I$ of dimension $k$. Then, $p$ is the Jones projection $p_I$ of $I$; indeed, $p$ dominates $p_I$ and has the same trace. \end{remark} \subsubsection{Intermediate inclusions of depth $2$} \label{irredprof2} The following proposition characterizes the depth $2$ intermediate inclusions in the case of irreducible inclusions. \begin{prop} Let $I$ be a coideal subalgebra\xspace of $B$. Then, the following are equivalent: \begin{enumerate}[(i)] \item The inclusion $N_2 \subset N_2\rtimes I$ is of depth $2$; \item $I$ is a Kac subalgebra of $B$; \item the coproduct of $p_I$ is symmetric; \item $I=M'_1 \cap M_3$. \end{enumerate} \end{prop} \begin{proof} In the irreducible case, Proposition~\ref{sgq} implies that $(ii) \Leftrightarrow (iii) \Leftrightarrow (iv)$; indeed, by~\ref{tour}~(3), (iv) is equivalent to $I=j_2(I)$. Furthermore (ii) implies (i). Conversely, assume (i): $N_2 \subset N_2\rtimes I$ of depth $2$. Then $M'_1 \cap M_3$ is a Kac algebra of dimension $[M_3:N_2]=\dim I$ contained in $I$; therefore $M'_1 \cap M_3=I\cap j_2(I)$ coincides with $I$, that is (iv). \end{proof} \subsubsection{Special case of group algebras} \label{section.group_algebras} \label{groupe} We now describe the special case of group algebras and their duals (see \cite[4.7]{Goodman_Harpe_Jones.1989}, \cite[A.4]{Jones_Sunder.1997}, and \cite[2]{Hong_Szymanski.1996}). Let $G$ be a finite group. We denote by $(\mathbb C[G],\Delta_s)$ the symmetric Kac algebra of the group, and by $L^{\infty}(G)$ the commutative Kac algebra of complex-valued functions on the group\footnote{Recall that any finite dimensional symmetric (resp. commutative) Kac algebra is the algebra of a group (resp. the algebra of functions on a group)~\cite{Enock_Schwartz.1992}.}. Those two algebras are in duality.
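The basic objects of this subsection are easy to experiment with numerically. The following sketch (Python/numpy; the helpers \texttt{lam} and \texttt{compose} are our own) builds the left regular representation of $G=S_3$ and checks the standard fact that averaging $\lambda$ over a subgroup $H$ yields a self-adjoint idempotent of normalized trace $1/|H|$:

```python
import itertools
import numpy as np

# Left regular representation of G = S_3 on C^|G|: lam(g) e_x = e_{g x}.
G = list(itertools.permutations(range(3)))   # the 6 elements of S_3
index = {g: i for i, g in enumerate(G)}

def compose(g, h):
    """Composition of permutations: (g h)(i) = g(h(i))."""
    return tuple(g[h[i]] for i in range(3))

def lam(g):
    """Permutation matrix of left translation by g."""
    m = np.zeros((len(G), len(G)))
    for x in G:
        m[index[compose(g, x)], index[x]] = 1.0
    return m

# lam is multiplicative: lam(g) lam(h) = lam(gh).
g, h = G[1], G[4]
assert np.allclose(lam(g) @ lam(h), lam(compose(g, h)))

# For a subgroup H, the average (1/|H|) sum_{h in H} lam(h) is a
# self-adjoint idempotent of normalized trace 1/|H|.
H = [(0, 1, 2), (1, 0, 2)]                   # a copy of Z_2 inside S_3
p = sum(lam(t) for t in H) / len(H)
assert np.allclose(p @ p, p) and np.allclose(p, p.T)
assert abs(np.trace(p) / len(G) - 1 / len(H)) < 1e-12
```

This is the matrix-level counterpart of the statement below that the Jones projection of $\mathbb{C}[H]$ is $\frac{1}{|H|}\sum_{h\in H} h$, of trace $1/|H| = 1/\dim \mathbb{C}[H]$.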
If $G$ acts outerly on a $II_1$ factor $N$, then $N^G \subset N\subset N\rtimes G \subset (N\rtimes G)\rtimes L^{\infty}(G)$ is the basic construction. The principal graph of $N^G \subset N$ (resp. $N \subset N\rtimes G$) is the Bratteli diagram of $\mathbb C \subset \mathbb CG$ (resp. $\mathbb C \subset L^{\infty}(G)$). The intermediate subfactors of $N \subset N\rtimes G$ are of the form $N\rtimes H$, where $H$ is a subgroup of $G$; indeed, since the coproduct $\Delta_s$ is symmetric, any coideal subalgebra\xspace of $\mathbb{C}[G]$ is a symmetric Kac subalgebra, hence the algebra of a subgroup $H$ of $G$. The Jones projection of $\mathbb{C}[H]$ is $\frac{1}{\vert H \vert}\sum_{h\in H} h$. Note that $H$ is normal if and only if $p_H$ belongs to the center of $\mathbb C[G]$. If $H$ is commutative, from~\cite[page 714]{Vainerman.1998}, the matrix units of $\mathbb C[H]$ are the projections $P_h=\frac{1}{|H|}\sum_{g\in H}\langle h,g\rangle \lambda(g)$, for $h\in \widehat{H}$, with standard coproduct: $$\Delta_s(P_h)=\sum_{g\in \widehat{H}} P_g\otimes P_{g^{-1}h}\,.$$ More precisely, for the Jones projection, one gets, using Proposition~\ref{resumep}~(3): $$\Delta_s(\frac{1}{|H|}\sum_{g\in H} \lambda(g))=\sum_{h\in \widehat{H}} S(P_h) \otimes P_h\,.$$ The intermediate subfactors of $N\rtimes G \subset (N\rtimes G)\rtimes L^{\infty}(G)$ are of the form $(N\rtimes G)\rtimes I$, where $I$ is a coideal subalgebra\xspace of $L^{\infty}(G)$; they are obtained by basic construction from the inclusion $N\rtimes H\subset N\rtimes G$, where $\delta(I)=\mathbb C[H]$. The coideal subalgebra\xspace $I$ is then the algebra $L^{\infty}(G/H)$ of functions which are constant on the right $H$-cosets of $G$. Indeed, $L^{\infty}(G/H)$ is an involutive subalgebra, and by definition of the coproduct $\Delta(f)(s,t)=f(st)$, it is a coideal subalgebra\xspace which is furthermore a Kac subalgebra if and only if $H$ is normal.
Its Jones projection is $\sum_{h\in H} \chi_h$, where $\chi_h$ is the characteristic function of $h$. \subsubsection{Normal coideal subalgebra\xspaces} \label{normal} Generalizing from group algebras and their duals (see~\ref{groupe}), we define normal coideal subalgebra\xspaces as follows (see also~\cite{Kac.Paljutkin.1966} and~\cite[2.2]{Nikshych.1998}). \begin{defn} A coideal subalgebra\xspace $I$ of $B$ is \textbf{normal} if $I$ is a Kac subalgebra of $B$, and $p_I$ belongs to the center of $B$. \end{defn} The following characterization of normal coideal subalgebra\xspaces results from~\ref{irredprof2} and~\ref{grapheprincipal}. \begin{prop} A coideal subalgebra\xspace $I$ of $B$ is normal if and only if the inclusions $N_1 \subset M_1$ and $M_1 \subset N_2$ are of depth $2$, if and only if $p_I$ belongs to the center of $B$ and the coproduct of $p_I$ is symmetric. \end{prop} \begin{example} In $K\!D(2m)$, the coideal subalgebra\xspaces $K_j$, for $j=1,\dots,4$, are normal (see~\ref{section.KD.even}). \end{example} \subsubsection{Intrinsic group} \label{intrin} The intrinsic group $G(K)$ of a Kac algebra $K$ is the subset of invertible elements $x$ of $K$ satisfying $\Delta(x)=x \otimes x$~\cite[1.2.2]{Enock_Schwartz.1992}. It is in correspondence with the set of characters of the dual Kac algebra of $K$~\cite[3.6.10]{Enock_Schwartz.1992}. The group algebra $\mathbb{C}[G(K)]$ of $G(K)$ is a Kac subalgebra of $K$ which contains any symmetric Kac subalgebra of $K$. In particular, since any inclusion of index $2$ results from a cross product by $\mathbb{Z}_2$, any coideal subalgebra\xspace of dimension $2$ is contained in $\mathbb{C}[G(K)]$. \section{The Kac-Paljutkin algebra} \label{section.KP} In this section, we study the Kac-Paljutkin algebra $K\!P$~\cite{Kac.Paljutkin.1966}. This is the unique non trivial Kac algebra of dimension $8$, making it the smallest non trivial Kac algebra.
The study of the lattice of coideal subalgebra\xspaces of $K\!P$, which does not require heavy calculations, gives a simple illustration of the results of~\ref{inclusions}. Furthermore, $K\!P$ will often occur as a Kac subalgebra in the sequel. \subsection{The Kac-Paljutkin algebra $K\!P$} We use the description and notations of~\cite{Enock_Vainerman.1996}. The Kac-Paljutkin algebra is the $C^*$-algebra \begin{displaymath} K\!P=\mathbb{C} e_1 \oplus \mathbb{C} e_2\oplus \mathbb{C} e_3 \oplus \mathbb{C} e_4 \oplus M_2(\mathbb{C}), \end{displaymath} endowed with its unique non trivial Kac algebra structure. The matrix units of the factor $M_2(\mathbb{C})$ are denoted by $e_{i,j}$ ($i=1,2$ and $j=1,2$). \noindent From~\ref{KacHopf}, the trace is given by $tr(e_i)=\frac{1}{8}$ for $i=1,2,3,4$, and $tr(e_{1,1})=tr(e_{2,2})=\frac{1}{4}$. The Kac algebra structure can be constructed by twisting the group algebra of $$G=\{1, a, b, ab=ba, c, ac=cb, bc=ca, abc=cab\}\,.$$ We denote by $\lambda$ the left regular representation of $G$: \begin{align*} \lambda(a)&=e_1 -e_2+e_3 -e_4-e_{1,1}+e_{2,2}\\ \lambda(b)&=e_1 -e_2+e_3 -e_4+e_{1,1}-e_{2,2}\\ \lambda(c)&=e_1 +e_2-e_3 -e_4+e_{1,2}+e_{2,1} \end{align*} and by $\Delta$ the coproduct of $K\!P$ given explicitly in~\cite{Enock_Vainerman.1996}. \subsection{The coideal subalgebra\xspaces of $K\!P$} We now determine $\operatorname{l}(K\!P)$, using the results of~\ref{resumep}. Note first that the Jones projections of the non trivial coideal subalgebra\xspaces are projections of trace either $\frac{1}{2}$ or $\frac{1}{4}$ which dominate $e_1$, the support of the counit of $K\!P$ and hence the Jones projection of $K\!P$.
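The block-diagonal expressions for $\lambda(a)$, $\lambda(b)$, $\lambda(c)$ can be checked mechanically. As a numerical sanity check (a Python/numpy sketch; the helper \texttt{kp} and the \texttt{tr} function are our own encodings of the realization and trace above), the matrices are involutions satisfying $ab=ba$ and $ca=bc$, and the normalized trace takes the stated values:

```python
import numpy as np

def kp(d1, d2, d3, d4, block):
    """Element of KP = C^4 (+) M_2(C), as a block-diagonal 6x6 matrix."""
    m = np.zeros((6, 6), dtype=complex)
    m[0, 0], m[1, 1], m[2, 2], m[3, 3] = d1, d2, d3, d4
    m[4:, 4:] = block
    return m

la = kp(1, -1, 1, -1, np.diag([-1, 1]))            # lambda(a)
lb = kp(1, -1, 1, -1, np.diag([1, -1]))            # lambda(b)
lc = kp(1, 1, -1, -1, np.array([[0, 1], [1, 0]]))  # lambda(c)
one = np.eye(6)

# Relations of G: a, b, c are involutions, ab = ba, and ca = bc (hence ac = cb).
for x in (la, lb, lc):
    assert np.allclose(x @ x, one)
assert np.allclose(la @ lb, lb @ la)
assert np.allclose(lc @ la, lb @ lc)

# Normalized trace (Section on Kac algebras and C*-Hopf algebras, dim KP = 8):
def tr(x):
    return (x[0, 0] + x[1, 1] + x[2, 2] + x[3, 3] + 2 * (x[4, 4] + x[5, 5])).real / 8

assert abs(tr(one) - 1) < 1e-12                    # tr(1) = 1
assert abs(tr(kp(1, 0, 0, 0, np.zeros((2, 2)))) - 1 / 8) < 1e-12   # tr(e_1) = 1/8
assert abs(tr(kp(0, 0, 0, 0, np.diag([1, 0]))) - 1 / 4) < 1e-12    # tr(e_{1,1}) = 1/4
```

Only the $*$-algebra structure and the trace are checked here; the twisted coproduct $\Delta$ is not encoded.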
\subsubsection{} Since the coproduct is not deformed on $\mathbb{C}[(\mathbb{Z}_2)^2]$, $\operatorname{l}(K\!P)$ contains the Kac subalgebras: \begin{itemize} \item $J_0=\mathbb{C}(e_1+e_3) \oplus \mathbb{C}(e_2+e_4) \oplus \mathbb{C} e_{1,1} \oplus \mathbb{C} e_{2,2}$ spanned by $1$, $\lambda(a)$, $\lambda(b)$, and $\lambda(ab)$; \item $I_0=\mathbb{C}(e_1 +e_2+e_3 +e_4)\oplus \mathbb{C}(e_{1,1} +e_{2,2})$ spanned by $1$ and $\lambda(ab)$; \item $I_1=\mathbb{C}(e_1 +e_3 +e_{1,1})\oplus \mathbb{C}(e_2 +e_4+e_{2,2})$ spanned by $1$ and $\lambda(b)$; \item $I_2=\mathbb{C}(e_1 +e_3 +e_{2,2})\oplus \mathbb{C}(e_2 +e_4+e_{1,1})$ spanned by $1$ and $\lambda(a)$. \end{itemize} The Jones projections of those subalgebras are therefore respectively \begin{displaymath} p_0=e_1+e_3,\quad p'_0=e_1 +e_2+e_3 +e_4,\quad p'_1=e_1 +e_3 +e_{1,1},\quad \text{and} \quad p'_2=e_1 +e_3 +e_{2,2}\,. \end{displaymath} Since the intrinsic group of $K\!P$ is $J_0$, by~\ref{intrin}, $I_0$, $I_1$, and $I_2$ are the only dimension $2$ coideal subalgebra\xspaces. \subsubsection{} Since the Kac-Paljutkin algebra is self-dual, $\operatorname{l}(K\!P)$ is also self-dual. We can therefore obtain the other elements of $\operatorname{l}(K\!P)$ by using the antiisomorphism $\delta$. Since $J_0$ is the group algebra of $(\mathbb{Z}_2)^2$, the inclusion $N_2\subset N_2\rtimes J_0$ is of index $4$, and its principal graph is $D_4^{(1)}$ (see~\ref{grapheD1}); from~\ref{grapheprincipal}, this graph is the connected component of the Bratteli diagram of $\delta(J_0) \subset K\!P$ which contains $\mathbb{C} e_1$. Therefore, the Jones projection of $\delta(J_0)$ is $e_1 +e_2+e_3 +e_4$, and $\delta(J_0)$ is $I_0$. \subsubsection{} Since $\delta$ is a lattice antiisomorphism, $\delta(I_1)$ and $\delta(I_2)$ contain $I_0=\delta(J_0)$; furthermore, their Jones projections (which dominate $e_1$) are of trace $\frac{1}{4}$. Only two remaining projections are possible: $p_2=e_1+e_2 \quad \text{and}\quad p_4=e_1+e_4$.
Denote by $q_k$ the projection $\frac{1}{2}\begin{pmatrix} 1&\mathrm{e}^{-\mathrm{i}k\pi/4}\\ \mathrm{e}^{\mathrm{i}k\pi/4}&1 \end{pmatrix}$ of the factor $M_2(\mathbb{C})$. We describe the coideal subalgebra\xspace $J_2$ generated by $p_2$. From the equality \begin{align*} \Delta(p_{2})=(e_1+e_2)&\otimes(e_1+e_2)+(e_3+e_4)\otimes(e_3+e_4)\\&+\frac{1}{2}(e_{1,1}+e_{2,2})\otimes(e_{1,1}+e_{2,2}) +\frac{1}{2}(e_{1,2}-i e_{2,1}) \otimes (e_{1,2}+ie_{2,1})\,, \end{align*} we deduce that $J_2$ contains $e_3+e_4$, $e_{1,1}+e_{2,2}$ (which we already know since $J_2$ contains $p_2$ and $I_0$) and $u=e_{1,2}+ie_{2,1}$. It therefore contains the symmetry $s=\mathrm{e}^{-\mathrm{i}\pi/4} u$, and the projection $q_1=\frac{1}{2}(e_{1,1}+e_{2,2}+s)$. More precisely: \begin{displaymath} \Delta(p_{2})=(e_1+e_2)\otimes(e_1+e_2)+(e_3+e_4)\otimes(e_3+e_4) +q_7 \otimes q_1 +q_3 \otimes q_5\,. \end{displaymath} It follows that: \begin{displaymath} J_2=\mathbb{C}(e_1+e_2) \oplus \mathbb{C}(e_3+e_4) \oplus \mathbb{C} q_1 \oplus \mathbb{C} q_5\,. \end{displaymath} Similarly, $J_4$ can be described as follows: \begin{displaymath} J_4=\mathbb{C}(e_1+e_4) \oplus \mathbb{C}(e_2+e_3) \oplus \mathbb{C} q_3 \oplus \mathbb{C} q_7\,. \end{displaymath} From~\ref{grapheprincipal}, the principal graph of the inclusions $N_2\subset N_2\rtimes J_2$ and $N_2\subset N_2\rtimes J_4$ is the connected component of the Bratteli diagram of $I_j \subset K\!P,\,j=1,2$ containing $\mathbb{C} e_1$; in both cases, it is $D_6^{(1)}$ (see~\ref{grapheD1}). \subsection{The lattice $\mathrm{l}(K\!P)$} \label{KP} We have now completed the construction of $\operatorname{l}(K\!P)$. \begin{figure} \caption{The lattice of coideal subalgebra\xspaces of $K\!P$} \label{figure.KP} \end{figure} \begin{prop} The lattice of coideal subalgebra\xspaces of the Kac-Paljutkin algebra is as given in Figure~\ref{figure.KP}.
\end{prop} \begin{proof} The Jones projections of the coideal subalgebras of dimension $4$ are necessarily of the form $e_1+e_j$ with $j= 2,3$ or $4$, and the coideal subalgebras of dimension $2$ are all contained in $J_0$. \end{proof} Thanks to the Galois correspondence, we can deduce the non trivial intermediate subfactors of the inclusion $N_2 \subset N_2 \rtimes K\!P$: \begin{itemize} \item Three factors $N_2\rtimes I_i$, $i=0,1,2$, isomorphic to $N_2 \rtimes \mathbb{Z}_2$; \item The factor $N_2\rtimes J_0=N_2 \rtimes (\mathbb{Z}_2)^2$ with principal graph $D^{(1)}_4$; \item Two factors $N_2\rtimes J_i$, $i=2,4$, with principal graph $D^{(1)}_6$. \end{itemize} \subsection{Realization of $K\!P$ by composition of subfactors} \label{KPIK} In~\cite[VIII]{Izumi_Kosaki.2002}, Izumi and Kosaki consider an inclusion of principal graph $D_6^{(1)}$, from which they construct a depth $2$ inclusion isomorphic to $R\subset R\rtimes K\!P$. We can now describe explicitly all the ingredients of their construction (see also~\cite{Popa.1990}). \begin{prop} The inclusion $N_2 \subset N_2 \rtimes J_2$, which is of principal graph $D_6^{(1)}$, can be put in the form $M^{(\alpha, \mathbb{Z}_2)} \subset M \rtimes_\beta \mathbb{Z}_2$ as follows. Set $M=N_2 \rtimes I_0$, $v=(e_1+e_2)-(e_3+e_4)+s$ and $\beta=\Ad v$. Let $\alpha$ be the automorphism of $M$ which fixes $N_2$ and maps $\lambda(ab)$ to $-\lambda(ab)$ (this is the dual action of $\mathbb{Z}_2$). Then, $\alpha$ and $\beta$ are involutive automorphisms of $M$ such that the outer period of $\beta \alpha$ is $4$, and \begin{displaymath} (\beta \alpha)^4 = \Ad \lambda(ab) , \quad \beta \alpha(\lambda(ab)) = -\lambda(ab) , \quad M^{\alpha}=N_2 , \quad \text{and }\quad N_2\rtimes J_2=M\rtimes_{\beta}\mathbb{Z}_2 \,. \end{displaymath} \end{prop} \begin{proof} As in~\ref{action} (see~\cite{David.2005}), we identify $N_3$ and $N_2\rtimes K\!P$.
Therefore, for all $x \in N_2$, one has $\beta(x)=vxv^*=(v_{(1)}\triangleright x)v_{(2)}v^*$. From $\Delta(v)=v\otimes [(e_1+e_2)-(e_3+e_4)] +v'\otimes s $, (with $v'=(e_1+e_4)-(e_2+e_3)+\mathrm{e}^{\mathrm{i}\pi/4}e_{1,2}+\mathrm{e}^{-\mathrm{i}\pi/4}e_{2,1}$), we get, for $x\in N_2$, $$\beta(x)= 8 [E_{N_2}(vxe_1)(e_1+e_2+e_3+e_4)+E_{N_2}(v'xe_1)(e_{1,1}+e_{2,2})]\,,$$ and deduce that $\beta$ normalizes $M$; a straightforward calculation then shows that the two automorphisms satisfy the claimed relations. Then, $N_2\rtimes J_2$ is indeed the crossed product of $M$ by $\beta$, since $\lambda(ab)$ and $v$ generate the subalgebra $J_2$. \end{proof} \section{Principal graphs obtained in our examples} \label{graphe} Using~\ref{grapheprincipal}, the examples we study in the sequel yield several families of non trivial principal graphs of inclusions, which we collect and name here. \subsection{Graphs $D_{2n+1}/\mathbb{Z}_2$} \label{grapheDimpair} In~\ref{Himpair},~\ref{sacig4impair},~\ref{sacig4KQimpair}, and~\ref{grapheK0}, we obtain the graphs $D_{2n+1}/\mathbb{Z}_2$, where $n$ is the number of vertices in the spine of the graph (altogether $n+4$ vertices). The inclusions are then of index $2n+1$. The graph $D_{11}/\mathbb{Z}_2$ is depicted in Figure~\ref{figure.D11Z2}. \begin{figure} \caption{The graph $D_{11}/\mathbb{Z}_2$} \label{figure.D11Z2} \end{figure} Those graphs are the principal graphs of the inclusions $R\rtimes \mathbb{Z}_2 \subset R\rtimes D_{2n+1}$ \cite{deBoer.Goeree.1991}. Furthermore, any inclusion admitting such a principal graph is of the form $R\rtimes H \subset R\rtimes G$, where $G$ is a group of order $2(2n+1)$ obtained as a semi-direct product of an abelian group and $H \equiv \mathbb{Z}_2$~\cite{Hong_Szymanski.1996}. \subsection{Graphs $D_{2n+2}/\mathbb{Z}_2$} \label{grapheDpair} In~\ref{Himpair} and~\ref{grapheK0}, we obtain the graphs $D_{2n+2}/\mathbb{Z}_2$, where $n$ is the number of vertices in the spine (altogether $n+6$ vertices).
The inclusion is then of index $2(n+1)$. The graph $D_{12}/\mathbb{Z}_2$ is depicted in Figure~\ref{figure.D12Z2}. Those graphs are the principal graphs of the inclusions $R\rtimes \mathbb{Z}_2 \subset R\rtimes D_{2n+2}$ \cite{deBoer.Goeree.1991}. \begin{figure} \caption{The graph $D_{12}/\mathbb{Z}_2$} \label{figure.D12Z2} \end{figure} \subsection{Graphs $D^{(1)}_n$} \label{grapheD1} We obtain the graphs $D^{(1)}_n$, with $n+1$ vertices, for subfactors of index $4$. These graphs $D^{(1)}_n$ were already obtained in~\cite{Popa.1990}. The graph $D^{(1)}_6$ (Figure~\ref{figure.Dn1}) is the principal graph of two intermediate subfactors of $R\subset R\rtimes K\!P$ (see~\ref{section.KP}). \begin{figure} \caption{The graph $D^{(1)}_6$} \label{figure.Dn1} \end{figure} The graph $D^{(1)}_8$ is the principal graph of two intermediate subfactors of $R\subset R\rtimes K\!D(3)$ (see~\ref{KD3section}). The graph $D^{(1)}_{10}$ is the principal graph of $R\rtimes \delta(K_{2k})$ (with $k \in \{1,\dots,4\}$) intermediate in $R\rtimes \widehat{K\!D(4)}$ (see~\ref{KD4K2}). The graphs $D^{(1)}_n$ also occur as principal graphs of $R\rtimes J_k$ (with $k \in \{1,\dots,m\}$) intermediate in $R\rtimes K\!D(2n+1)$ (see~\ref{sacig4impair}) and of $R\rtimes J_k$ (with $k \in \{1,\dots,m\}$) intermediate in $R\rtimes K\!Q(2n+1)$ (see~\ref{sacig4KQimpair}). \subsection{Graphs $QB_n$} \label{grapheQB} In~\ref{grapheK0}, we obtain a family of graphs which we denote $QB_n$. They have $4n+5$ vertices: $n$ of type $\star$ in each of the three groups, and $n-1$ of type $\square$. The inclusion is then of index $8n$. The graph $QB_2$ is depicted in Figure~\ref{figure.QB2}. \begin{figure} \caption{The graph $QB_2$} \label{figure.QB2} \end{figure} \subsection{Graphs $DB_n$} \label{grapheDB} In~\ref{grapheK0}, we find a family of graphs which we denote by $DB_n$. They have $4n+7$ vertices: $n$ of type $\star$ in each of the three groups, and $n+1$ of type $\square$. The inclusion is then of index $8n+4$.
The graph $DB_2$ is depicted in Figure~\ref{figure.DB2}. \begin{figure} \caption{The graph $DB_2$} \label{figure.DB2} \end{figure} \section{The Kac algebras $K\!D(n)$ and twisted group algebras} \label{section.KD} In this section, we recall the definition of the family of Kac algebras $(K\!D(n))_n$, describe completely $K\!D(2)$ (\ref{KD2}), determine the automorphism group of $K\!D(n)$, and obtain some general results on the lattice of their coideal subalgebras. The rest of the structure depends much on the parity of $n$, and is further investigated in~\ref{section.KD.odd} for $n$ odd and in~\ref{section.KD.even} and~\ref{section.KB} for $n$ even. Along the way, we build some general tools for manipulating twisted group algebras: isomorphism computations (Algorithm~\ref{algo.isomorphism}) and proofs (Proposition~\ref{proposition.KacAlgebraIsomorphism}), coideal subalgebras induced by the group (Theorem~\ref{theorem.sacigG}), efficient characterization of the algebra of the intrinsic group (Corollary~\ref{corollary.intrinsicGroup}). Those tools will be reused extensively in the later sections. \subsection{Twisted group algebras} \label{groupedef} In~\cite{Enock_Vainerman.1996} and~\cite{Vainerman.1998}, M. Enock and L. Vainerman construct non trivial finite dimensional Kac algebras by twisting the coproduct of a group algebra by a 2-(pseudo)-cocycle $\Omega$. More precisely, $\Omega$ is obtained from a 2-(pseudo)-cocycle $\omega$ defined on the dual $\hat{H}$ of a commutative subgroup $H$ of $G$. We refer to those articles for the details of the construction. For example, the Kac-Paljutkin algebra $K\!P$ studied in~\ref{KP} can be constructed this way (see~\cite{Enock_Vainerman.1996}). Similarly, the Kac algebras $K\!D(n)$ (resp. $K\!Q(n)$) of dimension $4n$ are obtained in~\cite[6.6]{Vainerman.1998} by twisting the algebra of the dihedral group $\mathbb{C}[D_{2n}]$ (resp. of the quaternion group). In~\cite[end of page 714]{Vainerman.1998}, L.
Vainerman gives a condition on the cocycle $\omega$ (to be counital) for $\Omega$ itself to be counital (see definition in \cite[Lemma 6.2]{Vainerman.1998}). Then, the coinvolution exists in the twisted algebra and the counit is well defined, and in fact left unchanged~\cite[Lemma 6.2]{Vainerman.1998}. \emph{This condition is always satisfied in our examples}. \subsection{Notations} \label{q} Let $n$ be a positive integer, and $D_{2n}=\langle a, b {\ |\ } a^{2n}=1, b^2=1, ba = a^{-1}b \rangle$ be the dihedral group of order $4n$. It contains a natural commutative subgroup of order $4$: $H=\langle a^n,b\rangle$. We denote by $K\!D(n)$ the Kac algebra constructed in~\cite[6.6]{Vainerman.1998} by twisting the group algebra $\mathbb{C}[D_{2n}]$ along a $2$-cocycle of $H$. For notational convenience we include $K\!D(1)$ in the definition, even though the twisting is trivial there since $H=D_2$. We refer to~\cite[6.6]{Vainerman.1998} for the full construction, and just recall here the properties that we need. Some further formulas are listed in Appendix~\ref{section.formulas}. The algebra structure and the trace are left unchanged; the block matrix structure of $K\!D(n)$ is given by: \begin{displaymath} K\!D(n)=\mathbb{C} e_1 \oplus \mathbb{C} e_2 \oplus \mathbb{C} e_3\oplus \mathbb{C} e_4\oplus \bigoplus_{k=1}^{n-1}K\!D(n)^k\,, \end{displaymath} where $K\!D(n)^k$ is a factor $M_2(\mathbb{C})$ whose matrix units we denote by $e^{k}_{i,j}$ ($i=1,2$ and $j=1,2$). From~\ref{KacHopf}, the trace on $K\!D(n)$ is given by: $\operatorname{tr}(e_i)=1/(4n)$ and $\operatorname{tr}(e^{k}_{j,j})=1/(2n)$.
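As a quick consistency check, these weights sum to $1$ over the four one-dimensional blocks and the $n-1$ factors $M_2(\mathbb{C})$:

```latex
4\cdot\frac{1}{4n}\;+\;\sum_{k=1}^{n-1} 2\cdot\frac{1}{2n}
\;=\;\frac{1}{n}+\frac{n-1}{n}\;=\;1\,.
```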
We use the following matrix units in $K\!D(n)^k$ : $$r^{k}_{1,1}=\frac{1}{2}\begin{pmatrix}1 & -\mathrm{i} \\ \mathrm{i}& 1\end{pmatrix}, \;r^{k}_{1,2}=\frac{1}{2}\begin{pmatrix}\mathrm{i} & -1 \\ -1&-\mathrm{i}\end{pmatrix},\; r^{k}_{2,1}=\frac{1}{2}\begin{pmatrix}-\mathrm{i} & -1 \\ -1&\mathrm{i}\end{pmatrix},\; r^{k}_{2,2}=\frac{1}{2}\begin{pmatrix}1 & \mathrm{i} \\ -\mathrm{i}& 1\end{pmatrix}\,, $$ and the projections \begin{displaymath} p^k_{1,1}=\frac{1}{2}(e^k_{1,1}+e^k_{1,2}+e^k_{2,1}+e^k_{2,2}) \qquad \text{ and }\qquad p^k_{2,2}=\frac{1}{2}(e^k_{1,1}-e^k_{1,2}-e^k_{2,1}+e^k_{2,2})\,, \end{displaymath} as well as, following L. Vainerman, their odd and even sums: \begin{displaymath} p_i = \sum_{k=1,\,k \text{ odd}}^{n-1} p^k_{i,i}, \quad i=1,2\, \qquad \text{ and }\qquad q_i = \sum_{k=1,\,k \text{ even}}^{n-1} p^k_{i,i}, \quad i=1,2\,. \end{displaymath} We finally consider the projections $q(\alpha, \beta,k)$ of the $k$-th factor $M_2(\mathbb{C})$ given by \begin{displaymath} \frac{1}{2}\begin{pmatrix} 1-\sin \alpha & \mathrm{e}^{-\mathrm{i}\beta}\cos \alpha \\ \mathrm{e}^{\mathrm{i}\beta}\cos \alpha & 1+\sin \alpha \end{pmatrix}\,, \end{displaymath} for $\alpha$ and $\beta$ in $\mathbb{R}$. Then, \begin{displaymath} q(\alpha, \beta,k)+q(-\alpha,\beta+\pi,k)=e^k_{1,1}+e^k_{2,2}\,. \end{displaymath} For the new coalgebra structure, the coproduct is obtained by twisting the symmetric coproduct $\Delta_s$ of $\mathbb{C}[D_{2n}]$: $$\Delta(x)=\Omega \Delta_s(x)\Omega^*\,,$$ where $\Omega$ is the unitary of $\mathbb{C}[H] \otimes \mathbb{C}[H]$ given in~\ref{omega}. We set $J_0=\mathbb{C}[H]$. The coinvolution and the counit are left unchanged\footnote{In most proofs, checking the properties of the coinvolution and the counit is straightforward and we omit them.}. The left regular representation $\lambda$ of $\mathbb{C}[D_{2n}]$ and the 2-cocycle $\Omega$ are given explicitly in Appendix~\ref{lambda} and~\ref{omega}, respectively.
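As a side remark (phrased in terms of the standard Pauli matrices $\sigma_x,\sigma_y,\sigma_z$, which are not used elsewhere in this text), one can check directly that the $q(\alpha,\beta,k)$ are indeed projections:

```latex
q(\alpha,\beta,k)
=\frac{1}{2}\bigl(1+\cos\alpha\cos\beta\,\sigma_x
 +\cos\alpha\sin\beta\,\sigma_y-\sin\alpha\,\sigma_z\bigr)\,,
```

and the coefficient vector $(\cos\alpha\cos\beta,\ \cos\alpha\sin\beta,\ -\sin\alpha)$ has norm $1$, so $q(\alpha,\beta,k)^2=q(\alpha,\beta,k)$. Replacing $(\alpha,\beta)$ by $(-\alpha,\beta+\pi)$ flips this vector, which recovers the identity $q(\alpha,\beta,k)+q(-\alpha,\beta+\pi,k)=e^k_{1,1}+e^k_{2,2}$.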
The coproduct expansions relevant to the description of the coideal subalgebras are computed in Appendix~\ref{cop} for general formulas and by computer for examples (see Appendix~\ref{subsubsection.mupad.sacig.K2}). \subsection{The Kac algebra $K\!D(2)$ of dimension $8$} \label{KD2} We now describe completely the Kac algebra $K\!D(2)$. As in Appendix~\ref{subsection.demo.sacig}, we check on computer that the coproduct of $K\!D(2)$ is symmetric and that its intrinsic group is $\langle c, \lambda(b) \rangle$, where $c=e_1-e_2+e_3-e_4-ie_{1,1}+ie_{2,2}$ is of order $4$ with $c^2=\lambda(a^2)$. Therefore, $K\!D(2)$ is isomorphic to $\mathbb{C}[D_4]$, and the lattice of the coideal subalgebras of $K\!D(2)$, depicted in Figure~\ref{figure.KD2}, coincides with that of the subgroups of $D_4$ (see~\ref{groupe}). \begin{figure} \caption{The lattice of coideal subalgebras of $K\!D(2)$} \label{figure.KD2} \end{figure} The coideal subalgebras of dimension $2$ are: \begin{itemize} \item $I_0=\mathbb{C}(e_1 +e_2+e_3 +e_4)\oplus \mathbb{C}(e_{1,1}+e_{2,2})$ generated by $\lambda(a^2)$; \item $I_1=\mathbb{C}(e_1 +e_4+p_2)\oplus \mathbb{C}(e_2 +e_3+p_1)$ generated by $\lambda(ba^2)$; \item $I_2=\mathbb{C}(e_1 +e_4+p_1)\oplus \mathbb{C}(e_2 +e_3+p_2)$ generated by $\lambda(b)$; \item $I_3=I(e_1 +e_2+r^1_{2,2})$ generated by $\lambda(b)c$; \item $I_4=I(e_1 +e_2+r^1_{1,1})$ generated by $\lambda(b)c^3$. \end{itemize} The coideal subalgebras of dimension $4$ are: \begin{itemize} \item $J_0=\mathbb{C}(e_1+e_4) \oplus \mathbb{C}(e_2+e_3) \oplus \mathbb{C} p_1 \oplus \mathbb{C} p_2$ generated by $\lambda(a^2)$ and $\lambda(b)$; \item $J_{20}=\mathbb{C}(e_1+e_2) \oplus \mathbb{C}(e_3+e_4) \oplus \mathbb{C}(r^{1}_{1,1})\oplus \mathbb{C}(r^1_{2,2})$ generated by $\lambda(a^2)$ and $\lambda(b)c$; \item $J_m=\mathbb{C}(e_1+e_3) \oplus \mathbb{C}(e_2+e_4) \oplus \mathbb{C}(e^1_{1,1}) \oplus \mathbb{C}(e^1_{2,2})$ generated by $c$.
\end{itemize} The rationale behind the notations will become clearer later on. \subsection{Coideal subalgebras induced by subgroups} \subsubsection{Coideal subalgebras containing $H$}\label{sacigG} \begin{theorem} \label{theorem.sacigG} Let $K$ be a finite dimensional Kac algebra obtained by twisting a group algebra $\mathbb{C}[G]$ with a $2$-(pseudo)-cocycle $\Omega$ of a commutative subgroup $H$. Then, the coideal subalgebras of $K$ containing $H$ are in correspondence with the subgroups of $G$ containing $H$. Namely, the coideal subalgebra $I_{G'}$ corresponding to the subgroup $G'$ is obtained by twisting the group algebra $\mathbb{C}[G']\subset\mathbb{C}[G]$ with $\Omega$; in particular, its Jones projection is still $p_{G'} = \frac1{|G'|} \sum_{g'\in G'} \lambda(g')$. \end{theorem} \begin{proof} Recall that the twisted coproduct of $K$ is defined by: $$\Delta(x)=\Omega \Delta_s(x)\Omega^*\,,$$ where $\Delta_s(x)$ is the standard coproduct of the group algebra $\mathbb{C}[G]$; recall also that $\Omega$ is a unitary of $\mathbb{C}[H]\otimes \mathbb{C}[H]$. Let $G'$ be a subgroup of $G$ containing $H$, and $I=\mathbb{C}[G']$. Then, $\Delta(I) \subset \Omega(I\otimes I) \Omega^* = I\otimes I$. Therefore $I$ is a Kac subalgebra of $K$, obtained by twisting $\mathbb{C}[G']$ with the cocycle $\Omega$. In particular, its trace, counit, and Jones projections are the same as for $\mathbb{C}[G']$ (see~\ref{groupedef}, and~\ref{section.group_algebras} for the formula). Conversely, let $I$ be a coideal subalgebra of $K$ containing $H$. We may untwist the coproduct on $I$: $$\Delta_s(x)=\Omega^* \Delta(x)\Omega\,.$$ Then, $\Delta_s(I) \subset K \otimes I$, making $I$ into a coideal subalgebra of $\mathbb{C}[G]$. Therefore, $I$ is the algebra of some subgroup $G'$ of $G$.
\end{proof} \begin{corollary} \label{corollary.intrinsicGroup} Let $K$ be a finite dimensional Kac algebra obtained by twisting a group algebra $\mathbb{C}[G]$ with a $2$-(pseudo)-cocycle $\Omega$ of a commutative subgroup $H$. The intrinsic group $G(K)$ of $K$ contains $H$. Therefore, its group algebra coincides with the algebra of the subgroup $G'$ of those elements of $G$ such that $\Delta(g)$ is symmetric. \end{corollary} \begin{remark} This gives a rather efficient way to compute the order of the intrinsic group of $K$ (complexity: $|G|$ coproduct computations, that is $O(|G||H|^4)$). Beware however that $G'$ need not coincide with, nor even be isomorphic to, $G(K)$; consider for example the Kac algebras $K\!D(2)$ and $K\!Q(2)$, which are isomorphic (see~\ref{iso.KDKQ}). Therefore, to compute the actual intrinsic group, and not only its group algebra, one still needs to compute the minimal idempotents of the (commutative) dual algebra of $(\mathbb{C}[G'], \Delta)$. \end{remark} \subsubsection{Embedding of $K\!D(d)$ into $K\!D(n)$} \label{plonge} We now apply the previous results to $K\!D(n)$. \begin{corollary} \label{corollary.plonge} Let $n$ be a positive integer and $d$ a divisor of $n$. Then $K\!D(d)$ embeds as a Kac subalgebra into $K\!D(n)$ via the morphism defined by $\varphi_k(\alpha)=a^k$ and $\varphi_k(\beta)=b$, where $k=\frac n d$, and $\alpha$ and $\beta$ denote the generators of $K\!D(d)$. Furthermore, all the coideal subalgebras of $K\!D(n)$ containing $H$ are obtained this way. \end{corollary} \begin{proof} The subgroups of $D_{2n}$ containing $H$ are the $G_k =\langle a^k, b\rangle$ for $k$ as above. Furthermore, by Theorem~\ref{theorem.sacigG}, $(\mathbb{C}[G_k], \Delta)$ is a Kac subalgebra of $K\!D(n)$; it is obviously isomorphic to $K\!D(d)$ (the unitary $\Omega$ is constructed from the same elements $a^n=(a^k)^d$ and $b$ and the same cocycle).
\end{proof} Note that $K\!D(d)$ could alternatively be embedded into $K\!D(n)$ by choosing any $k$ of the form $\frac n d (1+2k')$; this gives the same subalgebra of $K\!D(n)$ and amounts to composing the embedding with some automorphism of $K\!D(n)$ (see~\ref{intrinseque}). In fact, under the assumption that $n\geq3$, the automorphism group of $K\!D(n)$ stabilizes the embedding of $K\!D(d)$. \begin{example} When $n=2m$, $K\!D(n)$ contains the Kac subalgebras $K_0=\mathbb{C}[a^m,b]$ isomorphic to $K\!D(2)$, and $K_4=\mathbb{C}[a^2,b]$ isomorphic to $K\!D(m)$. Those subalgebras are studied in~\ref{section.KD.even}. \end{example} \subsubsection{Intrinsic group of $K\!D(n)$ and its dual} \label{intrinseque} The intrinsic group of the dual Kac algebra $\widehat{K\!D(n)}$ of $K\!D(n)$ is $\mathbb{Z}_2 \times \mathbb{Z}_2$~\cite[p.718]{Vainerman.1998}. We give here the intrinsic group of $K\!D(n)$ itself. \begin{prop} The intrinsic group of $K\!D(n)$ is $\mathbb{Z}_2 \times \mathbb{Z}_2$ if $n$ is odd and $D_4$ if $n$ is even. \end{prop} \begin{proof} Applying Corollaries~\ref{corollary.intrinsicGroup} and~\ref{corollary.plonge}, the algebra of the intrinsic group of $K\!D(n)$ is isomorphic to $K\!D(k)$ for some $k$ dividing $n$. Furthermore $K\!D(k)$ is cocommutative if and only if $k=1$ or $k=2$. Finally, $K\!D(1)$ and $K\!D(2)$ are respectively isomorphic to $\mathbb{C}[D_2]$ and $\mathbb{C}[D_4]$. \end{proof} \subsection{Automorphism group} \label{subsection.automorphisms} In this subsection, we describe the automorphism group of $K\!D(n)$. The point is that each automorphism of $K\!D(n)$ induces a symmetry in the lattice $\operatorname{l}(K\!D(n))$ of coideal subalgebras of $K\!D(n)$. Namely, it maps a coideal subalgebra onto a coideal subalgebra (mapping the Jones projection of the first to the Jones projection of the second).
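As an aside, the elementary group-theoretic computations invoked here (and in the final step of Algorithm~\ref{algo.isomorphism} below, matching group isomorphisms) can be prototyped by brute force. The following sketch is ours, not the MuPAD code used for the computer explorations cited in this text; all names in it are our own. It enumerates the automorphisms of a small finite group given by its multiplication table.

```python
from itertools import permutations

def automorphisms(elements, mul):
    """Brute-force the automorphism group of a small finite group.

    `elements` lists the group elements, identity first; `mul` maps
    each pair (x, y) to the product x*y.  Returns the automorphisms
    as dictionaries element -> image.
    """
    identity = elements[0]
    autos = []
    for image in permutations(elements):
        phi = dict(zip(elements, image))
        if phi[identity] != identity:
            continue  # an automorphism must fix the identity
        if all(phi[mul[x, y]] == mul[phi[x], phi[y]]
               for x in elements for y in elements):
            autos.append(phi)
    return autos

# The Klein group (Z_2)^2, written additively on pairs of bits ...
klein = [(0, 0), (0, 1), (1, 0), (1, 1)]
mul_klein = {(x, y): ((x[0] + y[0]) % 2, (x[1] + y[1]) % 2)
             for x in klein for y in klein}

# ... and the cyclic group Z_4.
z4 = [0, 1, 2, 3]
mul_z4 = {(x, y): (x + y) % 4 for x in z4 for y in z4}
```

One finds $6$ automorphisms for $(\mathbb{Z}_2)^2$ (its automorphism group is $GL(2,\mathbb{F}_2)\simeq S_3$) and $2$ for $\mathbb{Z}_4$; in particular the two groups, though of the same order, are not isomorphic.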
\subsubsection{Automorphisms induced by automorphisms of $D_{2n}$ fixing $\mathbb{C}[H]$} \label{conjugue} \begin{proposition} \label{proposition.automorphisms.KD.ak} Let $k$ be coprime to $2n$, and $\Theta_k$ be the algebra morphism defined by: \begin{displaymath} \Theta_k(\lambda(a)) = \lambda(a^k) \qquad \text{and} \qquad \Theta_k(\lambda(b))=\lambda(b)\,. \end{displaymath} Then, $\Theta_k$ is a Kac algebra automorphism of $K\!D(n)$ which fixes $H$. \end{proposition} \begin{proof} Such a $\Theta_k$ is an automorphism of the group $D_{2n}$, and therefore an automorphism of its Kac algebra $\mathbb{C}[D_{2n}]$. Since $H$ is fixed, so is $\Omega$, so that the coproduct $\Delta$ of $K\!D(n)$ is preserved as well. \end{proof} \begin{example} \label{example.conjugue} Take $\Theta=\Theta_{-1}$. It is an involutive automorphism of $K\!D(n)$; it can alternatively be described as the conjugation by $\lambda(b)$. \end{example} \subsubsection{An automorphism which does not fix $\mathbb{C}[H]$} We now construct an automorphism $\Theta'$ which acts non trivially on $\mathbb{C}[H]$. \begin{proposition} \label{proposition.automorphism.KD.ThetaPrime} The following formulas define an involutive Kac algebra automorphism of $K\!D(n)$: \begin{displaymath} \Theta'(\lambda(a))=\lambda(a)-\frac{1}{2}(\lambda(a)-\lambda(a^{-1}))(1+\lambda(a^n)) \qquad \text{and} \qquad \Theta'(\lambda(b))=\lambda(a^nb)\,. \end{displaymath} \end{proposition} \subsubsection{Tips and tricks to prove an isomorphism} The following (easy) proposition gives a conveniently small list of properties to be checked to ensure that a homomorphism defined by the images of some generators extends to a proper Kac-algebra isomorphism. \begin{proposition} \label{proposition.KacAlgebraIsomorphism} Let $K$ and $K'$ be two Kac algebras of identical finite dimension, and $A$ and $A'$ be sets of algebra generators of $K$ and $K'$ respectively. Fix $\Phi(a)\in K'$ for $a\in A$.
Then, $\Phi$ extends to a Kac-algebra isomorphism from $K$ to $K'$ if and only if: \begin{enumerate}[(i)] \item The elements $\{\Phi(a) {\ |\ } a\in A\}$ satisfy the relations of $A$, and therefore define a (not necessarily unital) algebra morphism $\Phi$ from $K$ to $K'$; \item $\Phi$ preserves the involution on $A$: $\Phi(a^*) = \Phi(a)^*$, for all $a\in A$; \item $\Phi$ preserves the coproduct on $A$: $(\Phi\otimes\Phi)(\Delta(a)) = \Delta(\Phi(a))$, for all $a\in A$; \item All elements $a'\in A'$ have a preimage by $\Phi$ in $K$. \end{enumerate} \end{proposition} \begin{proof} Using (i), $\Phi$ extends to a (possibly non-unital) algebra morphism from $K$ to $K'$. By (iv), $\Phi$ is surjective, hence bijective since $K$ and $K'$ have the same dimension; by the uniqueness of the unit, it is a unital algebra isomorphism. Similarly, by (iii) $\Phi$ is a coalgebra isomorphism. The counit and the antipode are preserved, as they are unique in a Hopf algebra (see~\cite[III.3.2]{Kassel.1995}). By semi-simplicity of the algebra, the trace is preserved; indeed, the central idempotents are unique (up to permutation among those of same rank), and the normalization of the trace on each corresponding matrix block is determined by the rank. The coinvolution coincides with the antipode~\ref{KacHopf} and is preserved. Finally, the involution being an anti-algebra morphism, (ii) ensures it is preserved as well. \end{proof} \subsubsection{Automatic checks on computer} Checking the properties listed in Proposition~\ref{proposition.KacAlgebraIsomorphism} often boils down to straightforward but tedious calculations. We therefore wish to delegate them to the computer whenever possible. To obtain Proposition~\ref{proposition.automorphism.KD.ThetaPrime}, we shall in principle prove those properties for all values of $n$; however it turns out that, in most cases, it is sufficient to check them only for small values of $n$, which is easier to automate. We introduce here just a bit of formalism that justifies this approach.
For the sake of simplicity, we do that for the special case of $K\!D(n)$, but we later reuse straightforward analogues in other isomorphism proofs. The idea is to consider $n$ as a formal parameter, which can be thought of as letting $n$ go to infinity. Namely, let $D_\infty$ be the (infinite) group generated by $a,b,{a_{\infty}}$, subject to the relations $b^2={a_{\infty}}^2=1$, ${a_{\infty}} a=a{a_{\infty}}$, ${a_{\infty}} b=b{a_{\infty}}$, and $ab=ba^{-1}$. We denote by $\Pi_{D, n}$ the canonical quotient map from $D_\infty$ to $D_{2n}$ which sends ${a_{\infty}}$ to $a^n$ (its kernel is the subgroup generated by ${a_{\infty}} a^{-n}$). Let $\mathbb{C}[D_\infty]$ be the group algebra of $D_\infty$. The quotient map $\Pi_{D,n}$ extends to a surjective algebra morphism from $\mathbb{C}[D_\infty]$ to $\mathbb{C}[D_{2n}]$. It is further injective whenever restricted to the subspace of $\mathbb{C}[D_\infty]$ spanned by $a^i {a_{\infty}}^j b^k$ with $2|i| <n$. Define the degree of an algebraic expression $A$ involving $a$, $b$ and ${a_{\infty}}$ as the maximal $d$ such that $a^d$ or $a^{-d}$ occurs in it. \begin{proposition} \label{proposition.equationAtInfinity} An algebraic expression $A$ of degree $d$ which vanishes in some $K\!D(N)$, $N>2d$, vanishes in $K\!D(n)$ for any $n\in\mathbb{N}$. \end{proposition} \begin{proof} Indeed, it can be lifted up via $\Pi_{D,N}$ to $\mathbb{C}[D_\infty]$ and then projected down to $\mathbb{C}[D_{2n}]$ for any other $n\in\mathbb{N}$ via $\Pi_{D,n}$. \end{proof} Note that the bound $N>2d$ is tight because of the cancellation $a^d - {a_{\infty}} a^{-d}=0$ if $n=2d$. As illustrated in the following example, we often implicitly use straightforward variants of this proposition. \begin{example} The coproduct $\Delta(a) = \Omega (a\otimes a)\Omega^*$ is of degree $1$. One can read off the general formula for its expansion from the computation for $N=3$; this expansion has $64$ terms, and is non symmetric.
However, for $N=2$ there are cancellations: the expansion degenerates to $16$ terms and becomes symmetric. \end{example} \subsubsection{Technical lemma} \begin{lemma} \label{lemma.automorphism.KD.ThetaPk} For all $k\in \mathbb{Z}$, \begin{displaymath} \Theta'(\lambda(a))^k=\lambda(a^k)-\frac{1}{2}(\lambda(a^k)-\lambda(a^{-k}))(1+\lambda(a^n))\,, \end{displaymath} and $\Theta'$ preserves the involution. \end{lemma} \begin{proof} Write $f_k=\lambda(a^k)-\frac{1}{2}(\lambda(a^k)-\lambda(a^{-k}))(1+\lambda(a^n))$. Since everything commutes and $\lambda(a^n)$ is its own inverse, one checks easily that $f_k f_1 = f_{k+1}$. In particular, we have $f_{-1}=f_1^*=f_1^{-1}$, so $\Theta'(\lambda(a)^*)=\Theta'(\lambda(a))^*$. The lemma follows by induction. \end{proof} \subsubsection{Proof of Proposition~\ref{proposition.automorphism.KD.ThetaPrime}} \begin{proof} We check that $\Theta'$ satisfies the properties listed in Proposition~\ref{proposition.KacAlgebraIsomorphism}. \begin{enumerate}[(i)] \item Thanks to Lemma~\ref{lemma.automorphism.KD.ThetaPk}, $\Theta'(\lambda(a))^{2n}=1$. The relation $\Theta'(\lambda(b))^2=1$ is obvious. The remaining relation $\Theta'(\lambda(b))\Theta'(\lambda(a))=\Theta'(\lambda(a))^{-1}\Theta'(\lambda(b))$ is of degree $1$. Using Proposition~\ref{proposition.equationAtInfinity}, it is sufficient to check on computer that it holds in $K\!D(3)$. \item Follows from Lemma~\ref{lemma.automorphism.KD.ThetaPk}. \item Using a direct extension of Proposition~\ref{proposition.equationAtInfinity} for $\mathbb{C}[D_{2n}]\otimes\mathbb{C}[D_{2n}]$, we check on computer that, in $K\!D(3)$ and for $x=a,b$, $(\Theta'\otimes \Theta')(\Delta(\lambda(x)))=\Delta(\Theta'(\lambda(x)))$. \item We prove that $\Theta'$ is an involution (and therefore an isomorphism) by checking on computer that the equation $\Theta'(\Theta'(\lambda(a)))=\lambda(a)$ holds in $K\!D(3)$.
The equation $\Theta'(\Theta'(\lambda(b)))=\lambda(b)$ is obvious.\qedhere \end{enumerate} \end{proof} \subsubsection{The automorphism group of $K\!D(n)$} \label{subsubsection.automorphismGroupKD} \begin{theorem} \label{theorem.automorphisms.KD} For $n\geq 3$, the automorphism group ${\operatorname{Aut}}(K\!D(n))$ of $K\!D(n)$ is given by: \begin{displaymath} A_{2n} = \{\ \Theta_k, \ \Theta_k\Theta'\ {\ |\ } k\wedge 2n = 1 \}\,. \end{displaymath} In particular, it is of order $2\varphi(2n)$ (where $\varphi$ is the Euler function), and isomorphic to $\mathbb{Z}_{2n}^* \rtimes \mathbb{Z}_2$, where $\mathbb{Z}_{2n}^*$ is the multiplicative group of units of $\mathbb{Z}_{2n}$. \end{theorem} \begin{proof} From Propositions~\ref{proposition.automorphisms.KD.ak} and~\ref{proposition.automorphism.KD.ThetaPrime}, ${\operatorname{Aut}}(K\!D(n))$ contains $A_{2n}$. Let us prove the converse. Let $\Phi$ be an automorphism of $K\!D(n)$. It induces an automorphism $\sigma$ of the intrinsic group $G(K\!D(n))$ (respectively $H$ for $n$ odd, and $D_4$ for $n$ even). Note that $(\Phi\otimes\Phi)(\Omega)$ is completely determined by $\sigma$. Furthermore, we can use it to ``untwist'' the coproduct of $K\!D(n)$ into a cocommutative coproduct by defining for all $y$ in $K\!D(n)$: \begin{displaymath} \Delta_\sigma(y) = (\Phi\otimes\Phi)(\Omega)^* \Delta(y) (\Phi\otimes\Phi)(\Omega)\,. \end{displaymath} This coproduct is indeed cocommutative because \begin{displaymath} \Delta_\sigma(y) = (\Phi\otimes\Phi)(\Omega)^* \Delta(\Phi(x)) (\Phi\otimes\Phi)(\Omega) = (\Phi\otimes \Phi) ( \Omega^* \Delta(x) \Omega ) = (\Phi\otimes \Phi) ( \Delta_s(x) ) \, \end{displaymath} where $x=\Phi^{-1}(y)$, and $\Delta_s$ is the usual cocommutative coproduct of $\mathbb{C}[D_{2n}]$. We claim that, for $n\geq 3$, this latter property rules out all the automorphisms of $G(K\!D(n))$ but $\sigma=\id$ and $\sigma$ defined by $\sigma(a^n)=a^n$ and $\sigma(b) =a^nb$. Consider indeed the case $n$ odd (resp.
even), and let $\tau$ be one of the $4$ (resp.\ $6$) remaining automorphisms of $G(K\!D(n))=H$ (resp.\ $=D_4$). The expression $\Delta_\tau(\lambda(a))$ is of degree $1$, so it is sufficient to check on computer that it is non symmetric for $n=3$ (resp.\ $n=4$). If $\sigma=\id$, then $\Delta_\sigma = \Delta_s$, so $\Phi$ is an automorphism of $\mathbb{C}[D_{2n}]$ fixing $H$. Therefore $\Phi=\Theta_k$ for some $k$ coprime to $2n$. Otherwise, $\Phi\Theta'$ fixes $H$, so $\Phi$ is of the form $\Theta_k\Theta'$. \end{proof} \subsubsection{Computing isomorphisms} The reasoning developed in the previous proof turns into an algorithm to compute Kac algebra isomorphisms. All the isomorphisms in this article were conjectured from the application of this algorithm on small examples (see~\ref{subsubsection.automorphismsK3}, \ref{subsubsection.demo.self-dual}, and~\ref{subsubsection.demo.KDKQ}). \begin{algorithm} \label{algo.isomorphism} Let $K=K(\mathbb{C}[G],\,H,\,\Omega)$ be a finite dimensional Kac algebra obtained by twisting a group algebra $\mathbb{C}[G]$ with a $2$-(pseudo)-cocycle $\Omega$ of a commutative subgroup $H$, and let $K'$ be any finite dimensional Kac algebra. The following algorithm returns all the Kac algebra isomorphisms $\phi$ from $K$ to $K'$. In particular, the result is non empty if and only if the two algebras are isomorphic. \begin{itemize} \item Compute the intrinsic group $H'$ of $K'$; \item For each embedding $\rho$ of $H$ into $H'$: \begin{itemize} \item Construct $K''=K(K',\,\rho(H),\,\Omega^*)$ whose coproduct $\Delta''$ is obtained by (un)twisting the coproduct $\Delta'$ of $K'$ by the inverse cocycle $\Omega^*$; \item If the untwisted coproduct $\Delta''$ is cocommutative: \begin{itemize} \item Compute the intrinsic group $G''$ of $K''$; \item Compute the group isomorphisms from $G$ to $G''$ which are compatible with $\rho$.
\end{itemize} \end{itemize} \end{itemize} \end{algorithm} \begin{proof} If there exists a Kac algebra isomorphism $\phi$ from $K$ to $K'$, then the Kac algebra $K'' = K(K',\,\phi(H),\,\Omega^*)$ is isomorphic to $\mathbb{C}[G]$. \end{proof} This algorithm is not blazingly fast, but we are not aware of any other algorithm to test systematically the isomorphism of two Kac algebras (although there should certainly exist one). It also has some nice theoretical consequences: first there are finitely many isomorphisms, and second the existence of an isomorphism does not depend on the actual ground field (typically $\mathbb{C}$ or $\mathbb{Q}(i,\epsilon)$). \subsection{The three coideal subalgebras $K_2$, $K_3$, $K_4$ of dimension $2n$} \label{sacig2n} We come back to the study of the lattice $\operatorname{l}(K\!D(n))$. The action of $G(\widehat{K\!D(n)})$ on $K\!D(n)$ produces three distinct outer actions of $\mathbb{Z}_2$ on $N_3$, and the corresponding three fixed point algebras are three subfactors of index $2$ in $N_3$. Therefore, $K\!D(n)$ admits three coideal subalgebras of dimension $2n$, with Jones projections $e_1+e_2$, $e_1+e_3$ and $e_1+e_4$ respectively (those are the only possible projections of trace $1/2n$). We denote by $K_i$ the coideal subalgebra $I(e_1+e_i)$ for $i=2,3,4$. We postpone the study of $K_3$ and $K_4$, whose structures depend on the parity of $n$. \subsection{The Kac subalgebra $K_2=I(e_1+e_2)$ of dimension $2n$} \label{e1e2} \label{section.K2} In this section, we describe completely the coideal subalgebra $K_2$ and its coideal subalgebras. The following result was suggested by computer exploration (see Appendix~\ref{mupad.K2=Dn}). \begin{prop} \label{proposition.K2} The coideal subalgebra $K_2=I(e_1+e_2)$ is isomorphic to the Kac algebra of functions on the group $D_n$. Its lattice of coideal subalgebras is the dual of the lattice of subgroups of $D_n$.
\end{prop}

\begin{proof}
First, $K_2=I(e_1+e_2)$ is indeed a Kac subalgebra of $K\!D(n)$ because $\Delta(e_1+e_2)$ is symmetric (see Appendix~\ref{cop}). Looking further at the expression of this coproduct yields the following block matrix structure for $K_2$:
\begin{displaymath}
K_2=\mathbb{C}(e_1 +e_2)\oplus \mathbb{C}(e_3 +e_4) \oplus \bigoplus_{j=1,\,j \text{ odd}}^{n-1} \left(\mathbb{C} r^j_{1,1}\oplus \mathbb{C} r^j_{2,2}\right)\oplus \bigoplus_{j=1,\,j \text{ even}}^{n-1} \left(\mathbb{C} e^j_{1,1}\oplus \mathbb{C} e^j_{2,2}\right)\,.
\end{displaymath}
All the blocks are trivial, so $K_2$ is commutative. Therefore it is the Kac algebra $L^\infty(G)$ of functions on some group $G$. The minimal projections are the characteristic functions $\chi_g$ of the elements $g\in G$. To start with, $\chi_1=e_1+e_2$, since the latter is the support of the counit of $I(e_1+e_2)$. We denote the others as follows: $e_3+e_4=\chi_{\beta_0}$,
\begin{align*}
r^{2j-1}_{1,1}&=\chi_{\beta_j},&&r^{2j-1}_{2,2}=\chi_{\beta'_j},&&\text{for } j=1,\dots,m\,,\\
e^{2j}_{1,1}&=\chi_{\alpha_j},&&e^{2j}_{2,2}=\chi_{\alpha_{-j}},&&\text{for } j=1,\dots,m'\,,
\end{align*}
where $m'=\lfloor(n-1)/2\rfloor$. For short, we also write $\alpha=\alpha_1$. The group law on $G$ is determined by the coproducts of those central idempotents: $\Delta(\chi_g)= \sum_{hk=g} \chi_h \otimes \chi_k$ (the expressions of those coproducts are computed in Appendix~\ref{cop}). In particular, $\Delta(e_1+e_2)$ gives the inverses: the $\beta_j$ are involutions, while $\alpha_j$ is the inverse of $\alpha_{-j}$. To obtain the remaining products, we need to distinguish between $n$ even and odd.

\textbf{Case $n=2m$:} From the expression of $\Delta(e^2_{1,1})$, one gets:
\begin{itemize}
\item for $j>2$ even, $\Delta(e^2_{1,1})(e^{j-2}_{2,2}\otimes e^{j}_{1,1})=e^{j-2}_{2,2}\otimes e^{j}_{1,1}$.
Therefore, for $k=2,\dots,m-1$, $\alpha_{k}=\alpha_{k-1}\alpha_1$, and by induction $\alpha_{k}=\alpha^k$ and $\alpha_{-k}=\alpha^{-k}$.
\item $\Delta(e^2_{1,1})((e_3+e_4)\otimes e^{n-2}_{2,2})=(e_3+e_4)\otimes e^{n-2}_{2,2}$. Therefore, $\beta_0=\alpha\alpha_{m-1}=\alpha^m$ and $\alpha$ is of order $n$.
\item For $j>2$ odd, $\Delta(e^2_{1,1})(r^{j-2}_{1,1}\otimes r^{j}_{1,1})=r^{j-2}_{1,1}\otimes r^{j}_{1,1}$ and $\Delta(e^2_{1,1})(r^{n-j+2}_{2,2}\otimes r^{n-j}_{2,2} )=r^{n-j+2}_{2,2}\otimes r^{n-j}_{2,2}$. It follows that for $k=1,\dots,m-1$, $\beta_{k+1}=\beta_{k}\alpha$ and $\beta'_{k+1}=\alpha\beta'_{k}$, and by induction $\beta_{k+1}=\beta_1\alpha^k$ and $\beta'_{k+1}=\alpha^k\beta'_1$.
\item $\Delta(e^2_{1,1})(r^1_{2,2}\otimes r^{1}_{1,1})=r^1_{2,2}\otimes r^{1}_{1,1}$. Therefore, $\beta'_1=\alpha\beta_1$.
\end{itemize}
The group $G$ is generated by $\alpha$ of order $n$ and $\beta=\beta_1$ of order $2$. From the expression of $\Delta(e_3+e_4)$ given in~\ref{cop}, one gets: $\Delta(e_3+e_4)(r^{2m-1}_{1,1}\otimes r^{1}_{2,2})=(r^{2m-1}_{1,1}\otimes r^{1}_{2,2})$, that is $\beta'_1=\beta_{m}\beta$, which gives the relation $\alpha\beta=\beta\alpha^{-1}$. Therefore, $G$ is the dihedral group $D_n$.

\textbf{Case $n=2m+1$:} From the expression of $\Delta(e^2_{1,1})$, one gets:
\begin{itemize}
\item for $j>2$ even, $\Delta(e^2_{1,1})(e^{j-2}_{2,2}\otimes e^{j}_{1,1})=e^{j-2}_{2,2}\otimes e^{j}_{1,1}$. Therefore, for $k=2,\dots,m$, $\alpha_{k}=\alpha_{k-1}\alpha_{1}$, and by induction $\alpha_{k}=\alpha^k$ and $\alpha_{-k}=\alpha^{-k}$.
\item $\Delta(e^2_{1,1})(e^{n-1}_{2,2}\otimes e^{n-1}_{2,2})=e^{n-1}_{2,2}\otimes e^{n-1}_{2,2}$. Therefore, $\alpha$ is of order $n$.
\end{itemize}
The expression of the coproduct of $e_3+e_4$ gives, for $j=1,\dots,m$, the relations $\beta_{m+1-j}=\beta_0\alpha_{-j}$ and $\beta'_{m+1-j}=\beta_0\alpha_{j}$.
Since $\beta_{m+1-j}$ is of order $2$, for all $k=0,\dots,n-1$, the following commutation relation holds: $\beta_0\alpha^{-k}=\alpha^{k}\beta_0$. Therefore, $G$ is generated by $\alpha$ of order $n$ and $\beta=\beta_0$ of order $2$ with the relation $\alpha\beta=\beta\alpha^{-1}$: it is again the dihedral group $D_n$.
\end{proof}

\subsection{The coideal subalgebra\xspaces of $K_2$}\label{sacigsK2}

Each coideal subalgebra\xspace of $K_2$ is the subspace of invariant functions on the right cosets of some subgroup of $D_n$. In Appendix~\ref{subsubsection.mupad.sacig.K2} we show how to get all the Jones projections on computer for $n$ small. Here we treat some examples explicitly, for all $n$:
\begin{itemize}
\item Take $m$ such that $n=2m$ or $n=2m+1$. The $2m+1$ subgroups of order $2$ of $D_n$ induce $2m+1$ coideal subalgebra\xspaces of dimension $n$. Their Jones projections are, respectively, $e_1+e_2+e_3+e_4$, $e_1+e_2+r_{1,1}^{2j+1}$, and $e_1+e_2+r_{2,2}^{2j+1}$ with $j=0,\dots,m-1$.
\item The subgroup generated by $\alpha$ induces the coideal subalgebra\xspace $I_0=I(e_1+e_2+q_1+q_2)$ of dimension $2$.
\item If $n$ is even, the subgroup generated by $\alpha^2$ induces a coideal subalgebra\xspace $J_{20}$ of dimension $4$ contained in $K_0$ (see~\ref{sacigK0}).
\end{itemize}

\subsection{The coideal subalgebra\xspace $K_1=I(e_1+e_2+e_3+e_4)$ of dimension $n$}
\label{K1}

The intersection $K_1$ of the three coideal subalgebra\xspaces $K_2$, $K_3$, and $K_4$ is the image by $\delta$ of the algebra of the intrinsic group of $\widehat{K\!D(n)}$. Its dimension is $n$; its Jones projection is $e_1+e_2+e_3+e_4$ since it is of trace $1/n$ and dominates $e_1+e_2$, $e_1+e_3$, and $e_1+e_4$. The connected component of $e_1$ in the Bratteli diagram of $I(e_1+e_2+e_3+e_4) \subset K\!D(n)$ is the principal graph of the inclusion $N \subset N\rtimes (\mathbb{Z}_2 \times \mathbb{Z}_2)$.
The structure of $K_1$ depends on the parity of $n$ and is further elucidated in~\ref{K1pair} and~\ref{K1impair}.

\subsection{Coideal subalgebra\xspaces of dimension dividing $2n$}
\label{sacign}

The following proposition allows for recovering many, if not all, of the coideal subalgebra\xspaces recursively from those of $K_2$, $K_3$, and $K_4$.

\begin{prop}
If the dimension of a coideal subalgebra\xspace $I$ of $K\!D(n)$ divides $2n$ (and not just $4n$), then either its Jones projection $p_I$ dominates $e_1+e_2+e_3+e_4$ and then $I$ is contained in $K_1$, or there exists a unique $i \in \{2,3,4\}$ such that $p_I$ dominates $e_1+e_i$ and $I$ is a coideal subalgebra\xspace of $K_i$. In particular, if $n$ is a power of $2$, then any coideal subalgebra\xspace is contained in one of the $K_i$'s.
\end{prop}

\begin{proof}
The Jones projection $p_I$ of a coideal subalgebra\xspace $I$ dominates $e_1$. Given the block matrix structure of $K\!D(n)$, it is the sum of $x$ projections of trace $1/4n$ (with $x\geq 1$) and $y$ projections of trace $1/2n$. Its trace, $tr(p_I)=(\dim I)^{-1}$, is therefore $(x+2y)/4n$ (see~\ref{resumep}). If the dimension of $I$ divides $2n$ (which is always the case when $n$ is a power of $2$), $x$ must be even, and therefore $p_I$ takes one of the two following forms:
\begin{itemize}
\item $e_1+e_2+e_3+e_4+s$, with $s$ a projection and $tr(s)=(\dim I)^{-1}-n^{-1}$;
\item $e_1+e_i+s'$, with $i=2$, $3$, or $4$, and $s'$ a projection of trace $(\dim I)^{-1}-(2n)^{-1}$.
\end{itemize}
The proposition follows from~\ref{tour}~(5).
\end{proof}

In~\ref{sacignimpair}, we will establish an even more precise result when $n$ is odd.

\subsection{Coideal subalgebra\xspaces induced by Jones projections of subgroups}
\label{abelien}

We have seen in~\ref{plonge} that the algebras of the subgroups containing $H$ are coideal subalgebra\xspaces of $K\!D(n)$.
The algebras of the other subgroups are usually not coideal subalgebra\xspaces in $K\!D(n)$; yet, in many cases, their Jones projections remain Jones projections in $K\!D(n)$.

\begin{prop}
\label{prop.abelien}
The Jones projections of the following subgroups of $D_{2n}$ are Jones projections of coideal subalgebra\xspaces of $K\!D(n)$ whose dimension is the order of the subgroup. If the subgroup is commutative, then so is the coideal subalgebra\xspace.
\begin{itemize}
\item $H_r=\{1, \lambda(a^n), \lambda(ba^r), \lambda(ba^{r+n}) \}$ for $r=0,\dots,n-1$;
\item $A_k$, the subgroup generated by $a^k$, for $k$ such that $2n=kd$;
\item $B_k$, the subgroup generated by $a^k$ and $b$, for $k$ such that $2n=kd$.
\end{itemize}
\end{prop}

For example, in $K\!D(9)$, the Jones projection of $\mathbb{C}[A_3]$ is that of $M_2$, and that of $\mathbb{C}[A_6]$ generates $L_0$ (see Figure~\ref{KD9}). We refer to~\ref{K4impair} and~\ref{sacig4impair} for further applications, and to~\ref{lattice_K3_even} and~\ref{KD6-8} for examples of Jones projections given by dihedral subgroups.

Not every subgroup, even commutative, gives a Jones projection in the twisted Kac algebra; for example, in $K\!D(n)$ with $n\geq 3$, there are only three coideal subalgebra\xspaces of dimension $2$ whereas $D_{2n}$ has $2n+1$ subgroups of order $2$. Conversely, not all Jones projections are given by subgroups; for example, in $K\!D(3)$ there are three coideal subalgebra\xspaces of dimension $3$ whereas $D_6$ has a single subgroup of order $3$.

\begin{problem}
Let $K$ be a Kac algebra obtained by twisting a group algebra $\mathbb{C}[G]$. Characterize the subgroups of $G$ whose Jones projections remain Jones projections in $K$.
\end{problem}

We now turn to the proof of the proposition, starting with a general lemma.
\begin{lemma}
\label{lemma.abelien}
Let $K$ be a Kac algebra and $p$ be a projection which dominates the Jones projection $f_2$ of $K$, and such that $\Delta(p) = \sum_{i=1}^d Q_i\otimes P_i$, where:
\begin{itemize}
\item the families $(P_i)_{i=1,\dots,d}$ and $(Q_i)_{i=1,\dots,d}$ are linearly independent;
\item $d = 1/tr(p)$;
\item $I=\bigoplus_{i=1}^d \mathbb{C} P_i$ is a unital involutive subalgebra of $K$.
\end{itemize}
Then, $I$ is a coideal subalgebra\xspace with Jones projection $p$. In particular, $Q_i = S(P_i^*)$.
\end{lemma}

\begin{proof}
This is a consequence of the coassociativity of $\Delta$. Indeed,
\begin{equation}
\sum_{i=1}^d Q_i \otimes \Delta(P_i) = (\id \otimes \Delta)\Delta(p) = (\Delta \otimes \id)\Delta(p) = \sum_{i=1}^d \Delta(Q_i) \otimes P_i\,;
\end{equation}
applying $\widehat Q_j \otimes \id \otimes \id$ on both sides of this equation, where $(\widehat Q_i)_{i=1,\dots,d}$ is a family of linear forms such that $\widehat Q_i(Q_j)=\delta_{i,j}$, yields that $\Delta(P_j)\in K\otimes I$. Therefore, $\Delta(I)\subset K\otimes I$, and the result follows by Proposition~\ref{resumep} and Remark~\ref{remark.resumep}.
\end{proof}

\begin{proof}[Proof of Proposition~\ref{prop.abelien}]
We shall see in~\ref{J4} that the coproduct of the Jones projection $Q_r$ of $\mathbb{C}[H_r]$ is of the form $\Delta(Q_r)=\sum_{i=1}^4 S(P_i)\otimes P_i$, where $P_1=Q_r$ and $1=P_1+P_2+P_3+P_4$ is a decomposition of the identity into orthogonal projections. Then, $J_r=\sum_i \mathbb{C} P_i$ is an involutive commutative subalgebra of dimension $4$, while $tr(Q_r)=1/4$. The result follows by applying Lemma~\ref{lemma.abelien}.

If $d$ is even, the Jones projection of $\mathbb{C} [A_k]$ generates a commutative coideal subalgebra\xspace, which is the image of the $K_2$ of $K\!D(d/2)$ through the embedding $\varphi_k$ into $K\!D(n)$ (see~\ref{plonge} and the expression of $e_1+e_2$ in~\ref{form2}). For example, the Jones projection of $\mathbb{C}[A_1]$ is $e_1+e_2=p_{K_2}$.
Otherwise, $k$ is even, and the Jones projection of $\mathbb{C}[A_k]$ generates a commutative coideal subalgebra\xspace which is the image of the $K_1$ of $K\!D(d)$ through the embedding $\varphi_{k/2}$ into $K\!D(n)$. For example, the Jones projection of $\mathbb{C}[A_2]$ is $e_1+e_2+e_3+e_4=p_{K_1}$.

If $k$ divides $n$, $\mathbb{C}[B_k]$ is isomorphic to $K\!D(d/2)$ (see~\ref{plonge}). If $k=2k'$ does not divide $n$, the Jones projection of $\mathbb{C}[B_k]$ generates a coideal subalgebra\xspace which is the image of the $K_4$ of $K\!D(d)$ through the embedding $\varphi_{k'}$ into $K\!D(n)$.
\end{proof}

\section{The Kac algebras $K\!D(n)$ for $n$ odd}
\label{section.KD.odd}

The study of $\operatorname{l}(K\!D(n))$ for $n$ odd led us to conjecture, and later prove, that $K\!D(n)$ is self-dual in that case. This property is not only interesting in itself. First, it is a useful tool for constructing new coideal subalgebra\xspaces by duality (see e.g. Appendix~\ref{subsubsection.demo.delta}), and for unraveling the self-dual lattice $\operatorname{l}(K\!D(n))$; in particular, it is a key ingredient in obtaining the full lattice when $n$ is prime. Last but not least, it allows us to describe completely the principal graph of some inclusions $N_2 \subset N_2 \rtimes I$ with $I$ a coideal subalgebra\xspace of $K\!D(n)$ (which requires information on $\delta(I)$).

After establishing the self-duality, we describe the symmetric coideal subalgebra\xspaces, as well as those of dimension $2n$, of odd dimension, and of dimension $4$. Putting everything together, we get a partial description of $\operatorname{l}(K\!D(n))$ (Theorem~\ref{theorem.lattice.nodd}) which is conjecturally complete (Conjecture~\ref{conj.lattice.nodd}). This is proved for $n$ prime (Corollary~\ref{corollary.lattice.nprime}) and checked on computer up to $n=51$. We conclude with some illustrations on $K\!D(n)$ for $n=3,5,9,15$.
\subsection{Self-duality}\label{self-dualKD}

In Appendix~\ref{subsubsection.demo.self-dual}, the self-duality of $K\!D(n)$ for $n$ odd is checked for $n\leq 21$ by computing an explicit isomorphism. From this exploration, we infer the definition of a candidate Kac algebra isomorphism $\psi$ from $K\!D(n)$ to its dual.

The dual of $\mathbb{C}[D_{2n}]$ is the algebra $L^{\infty}(D_{2n})$ of functions on $D_{2n}$. We denote by $\chi_g$ the characteristic function of $g$, so that $\{\chi_g, g \in G\}$ is the dual basis of $\{\lambda(g), g\in G\}$. We denote by $x \mapsto \hat{x}$ the vector-space isomorphism from $\mathbb{C}[D_{2n}]$ to its dual $L^{\infty}(D_{2n})$ which extends $\lambda(g)\mapsto \chi_g$ by linearity.
\label{self-dual}

The following theorem is the main result of this section:
\begin{thm}
\label{theorem.self-dual}
Let $n$ be odd. Then $K\!D(n)$ is self-dual, via the Kac algebra isomorphism $\psi$:
\begin{displaymath}
a \mapsto n(2\widehat{e^{n-1}_{1,1}}+ \widehat{e^1_{2,2}}- (\widehat{e^1_{1,1}}+\widehat{e^{n-1}_{1,2}}+\widehat{e^{n-1}_{2,1}})) \qquad\qquad b \mapsto 4n\widehat {e_4}\,.
\end{displaymath}
\end{thm}

We prove this theorem in the remainder of this section. We start by describing the block matrix decomposition of $\widehat{K\!D(n)}$. We use it to define a $C^*$-algebra isomorphism $\psi$ on the matrix units of $K\!D(n)$, calculate its expression on $\lambda(a)$ and $\lambda(b)$, and conclude by proving that $\psi$ is a Kac algebra isomorphism. Some heavy calculations are relegated to~\ref{psicoproduit}. For an alternative non-constructive proof of self-duality, one can use the isomorphism with $K\!A(n)$ (see Theorem~\ref{theorem.isomorphisms}) and the self-duality of the latter for $n$ odd, which is mentioned in~\cite[p. 776]{Calinescu_al.2004}.
\subsubsection{Block matrix decomposition of $\widehat{K\!D(n)}$}
\label{block}

The product on $L^{\infty}(D_{2n})$ is the pointwise (Hadamard) product; the coproduct is defined by duality:
$$\Delta(\chi_g)=\sum_{s\in D_{2n}}\chi_s\otimes \chi_{s^{-1}g}\,.$$
For $f\in L^{\infty}(D_{2n})$ and $s\in D_{2n}$, the involution is given by $f^{*}(s)=\overline{f(s)}$, the coinvolution $S$ by $S(f)(s)=f(s^{-1})$, and the trace by $tr(\chi_s)=1/2n$.

The coproduct of $K\!D(n)$ is that of $\mathbb{C}[D_{2n}]$ twisted by $\Omega$; from~\ref{omega}, it can be written as:
\begin{displaymath}
\Delta(\lambda(g))=\sum_{i,j,r,s=1}^4 c_{i,j}c_{s,r} \lambda(h_igh_r) \otimes \lambda(h_jgh_s)\,,
\end{displaymath}
with $h_1=1$, $h_2=a^n$, $h_3=b$, $h_4=ba^n$. Therefore, by duality, the product of $\widehat{K\!D(n)}$ is twisted from that of $L^{\infty}(D_{2n})$:
\begin{displaymath}
\chi_{g_1} \odot \chi_{g_2}=\sum_{i,j,r,s=1}^4 c_{i,j}c_{s,r}\chi_{h_ig_1h_r}\chi_{h_jg_2h_s}\,.
\end{displaymath}

Let us study this new product. If one of the $g_k$ is in $H$ and the other is not, then the product $\chi_{g_1} \odot \chi_{g_2}$ is zero. If both are in $H$, the product is left unchanged, since the coproduct is unchanged on $H$. Therefore, $\chi_1$, $\chi_{a^n}$, $\chi_{b}$, and $\chi_{ba^n}$ are central projections of $\widehat{K\!D(n)}$.

Consider now $s$ and $t$ in $D_{2n}-H$. The product $\chi_s \odot \chi_t$ is non-zero if and only if there exist $h$ and $h'$ in $H$ such that $t=hsh'$, that is, only if $t$ belongs to the double coset of $s$ modulo $H$: $H_s=\{s,s^{-1}, sa^n, s^{-1}a^n,bs, bs^{-1}, bsa^n, bs^{-1}a^n\}$; then $\chi_s \odot \chi_t$ belongs to the subspace spanned by $\{\chi_r, r\in H_s\}$.
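The double coset decomposition described above is purely group-theoretic and can be verified independently of the twisted product. The following is a minimal Python sketch (our illustration, not the MuPAD exploration code of the appendices; all names are ours) which models $D_{2n}$ as pairs $(\epsilon, i)$ standing for $b^\epsilon a^i$ and checks that, for $n$ odd, the double cosets $HsH$ with $s\notin H$ are exactly the $(n-1)/2$ sets $H_{a^k}$, $k$ odd, each of size $8$.

```python
# D_{2n} modelled as pairs (eps, i) standing for b^eps * a^i, using a^i b = b a^{-i}.
def make_dihedral(two_n):
    def mul(x, y):
        (e1, i1), (e2, i2) = x, y
        # b^e1 a^i1 * b^e2 a^i2 = b^(e1+e2) a^(i1*(-1)^e2 + i2)
        return ((e1 + e2) % 2, ((-i1 if e2 else i1) + i2) % two_n)
    elements = [(e, i) for e in (0, 1) for i in range(two_n)]
    return elements, mul

def double_cosets(n):
    """Double cosets H s H for s outside H = {1, a^n, b, ba^n} in D_{2n}."""
    two_n = 2 * n
    elements, mul = make_dihedral(two_n)
    H = {(0, 0), (0, n), (1, 0), (1, n)}
    return {frozenset(mul(mul(h, s), h2) for h in H for h2 in H)
            for s in elements if s not in H}

n = 5
cosets = double_cosets(n)
assert len(cosets) == (n - 1) // 2 and all(len(c) == 8 for c in cosets)
for k in range(1, n, 2):
    # H_{a^k} = {a^k, a^{2n-k}, a^{n+k}, a^{n-k}} together with their b-translates
    expected = frozenset((e, j % (2 * n)) for e in (0, 1)
                         for j in (k, 2 * n - k, n + k, n - k))
    assert expected in cosets
print("double coset decomposition verified for n =", n)
```

For $n$ odd this confirms that $D_{2n}-H$ splits into the blocks $H_{a^k}$ with $k$ odd, in accordance with the block structure of $\widehat{K\!D(n)}$.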
\begin{proposition}
The $C^*$-algebra $\widehat{K\!D(n)}$ decomposes as an algebra as the direct sum of the $\mathbb{C}\chi_h$ for $h\in H$ and of the $\mathbb{C} H_{a^k}$ for $k$ an odd integer in $\{1,\dots,n-1\}$, with
$$H_{a^k}=\{a^k, a^{2n-k}, a^{n+k}, a^{n-k},ba^k, ba^{2n-k}, ba^{n+k}, ba^{n-k}\}\,.$$
Furthermore, each non-trivial block $\mathbb{C} H_{a^k}$ of $\widehat{K\!D(n)}$ is isomorphic to $\mathbb{C} H_{a}$ in $\widehat{K\!D(3)}$.
\end{proposition}

\begin{proof}
The $H_{a^k}$ are disjoint: indeed $k$ (resp. $n-k$) is odd (resp. even) and strictly less than $n$, and $2n-k$ (resp. $n+k$) is odd (resp. even) and strictly greater than $n$. Therefore, the blocks $\mathbb{C} H_{a^k}$ and the $\mathbb{C}\chi_h$ are in direct sum and, by dimension, they span $\widehat{K\!D(n)}$. Each non-trivial block $\mathbb{C} H_{a^k}$ of $\widehat{K\!D(n)}$ is stable under the product; in fact, it is trivially isomorphic to $\mathbb{C} H_{a}$ in $\widehat{K\!D(3)}$ because the relations between the eight elements of $H_s$ do not depend on $s=a^k$. From the previous considerations, products of elements in different blocks are zero. Putting everything together, they form a (non-minimal) matrix block decomposition of $\widehat{K\!D(n)}$.
\end{proof}

We now define eight matrix units in each non-trivial block $\mathbb{C} H_{a^k}$. Those definitions are easily suggested and checked by computer exploration in $K\!D(3)$.
\begin{defn}
For any odd integer $k$ in $\{1,\dots,n-1\}$, set:
\begin{align*}
E^k_{1,1}&=\chi_{ba^{n+k}}+\chi_{ba^{k}}\\
E^k_{2,2}&=\chi_{ba^{2n-k}}+\chi_{ba^{n-k}}\\
E^k_{1,2}&=-\frac{1}{2} \mathrm{i}(-\chi_{a^k}+\chi_{a^{2n-k}}+\chi_{a^{n+k}}-\chi_{a^{n-k}})+\frac{1}{2}(\chi_{ba^k}+\chi_{ba^{2n-k}}-\chi_{ba^{n+k}}-\chi_{ba^{n-k}})\\
E^k_{2,1}&=\frac{1}{2}\mathrm{i}(-\chi_{a^k}+\chi_{a^{2n-k}}+\chi_{a^{n+k}}-\chi_{a^{n-k}})+\frac{1}{2}(\chi_{ba^k}+\chi_{ba^{2n-k}}-\chi_{ba^{n+k}}-\chi_{ba^{n-k}})\\
E^{n-k}_{1,1}&=\chi_{a^{n-k}}+\chi_{a^{2n-k}}\\
E^{n-k}_{2,2}&=\chi_{a^k}+\chi_{a^{n+k}}\\
E^{n-k}_{1,2}&=\frac{1}{2}(-\chi_{a^k}-\chi_{a^{2n-k}}+\chi_{a^{n+k}}+\chi_{a^{n-k}})+\frac{1}{2}\mathrm{i}(\chi_{ba^k}-\chi_{ba^{2n-k}}-\chi_{ba^{n+k}}+\chi_{ba^{n-k}}) \\
E^{n-k}_{2,1}&=\frac{1}{2}(-\chi_{a^k}-\chi_{a^{2n-k}}+\chi_{a^{n+k}}+\chi_{a^{n-k}})-\frac{1}{2}\mathrm{i}(\chi_{ba^k}-\chi_{ba^{2n-k}}-\chi_{ba^{n+k}}+\chi_{ba^{n-k}})
\end{align*}
\end{defn}

\subsubsection{The Kac algebra isomorphism $\psi$ from $K\!D(n)$ to its dual}

Using the matrix units of~\ref{block}, we can construct an isomorphism $\psi$ from $K\!D(n)$ to its dual that preserves the involution and the trace. As noticed above, we can check the other properties of $\psi$ on computer for $n=3$ to obtain the following proposition:

\begin{proposition}
The map $\psi$ defined by
\begin{gather*}
\psi(e_1)=\chi_1,\qquad \psi(e_2)=\chi_{a^n},\qquad \psi(e_3)=\chi_{ba^n},\qquad \psi(e_4)=\chi_b,\\
\psi(r^k_{i,j})=E^k_{i,j}, \qquad \psi(e^{n-k}_{i,j})=E^{n-k}_{i,j}\,,
\end{gather*}
for $k$ odd in $\{1,\dots,n-1\}$, $i=1,2$, and $j=1,2$, extends by linearity to a $C^*$-algebra isomorphism from $K\!D(2m+1)$ to $\widehat{K\!D(2m+1)}$ which preserves the trace $tr$.
\end{proposition}

\begin{proof}[Proof of Theorem~\ref{theorem.self-dual}]
In~\ref{psicoproduit}, we calculate explicitly $\psi(\lambda(a))$ and $\psi(\lambda(b))$, and check that the corresponding coproducts are preserved by $\psi$.
It follows that the isomorphism $\psi$ is a $C^*$-algebra isomorphism which preserves the trace and the coproducts of the generators of $K\!D(n)$. It therefore also preserves the coinvolution (see~\ref{KacHopf}), and is a Kac algebra isomorphism from $K\!D(n)$ to its dual.
\end{proof}

\subsection{The symmetric Kac subalgebras}\label{Himpair}

The lattice $\operatorname{l}(K\!D(n))$ contains the algebra $J_0$ of the intrinsic group of $K\!D(n)$ (see~\ref{intrinseque}) and its Kac subalgebras:
\begin{itemize}
\item $J_0=\mathbb{C}(e_1+q_1) \oplus \mathbb{C}(e_2+q_2) \oplus \mathbb{C}(e_3+p_2) \oplus \mathbb{C}(e_4+p_1)$ spanned by $1$, $\lambda(a^n)$, $\lambda(b)$ and $\lambda(ba^n)$;
\item $I_0=\mathbb{C}(e_1 +e_2+q_1+q_2)\oplus \mathbb{C}(e_3 +e_4+p_1+p_2)$ spanned by $1$ and $\lambda(a^n)$;
\item $I_1=\mathbb{C}(e_1 +e_3+q_1+p_2)\oplus \mathbb{C}(e_2 +e_4+p_1+q_2)$ spanned by $1$ and $\lambda(ba^n)$;
\item $I_2=\mathbb{C}(e_1 +e_4+p_1+q_1)\oplus \mathbb{C}(e_2 +e_3+p_2+q_2)$ spanned by $1$ and $\lambda(b)$.
\end{itemize}
Using~\ref{intrin}, the only coideal subalgebra\xspaces of dimension $2$ of $K\!D(n)$ are $I_0$, $I_1$, and $I_2$.

Since $K\!D(n)$ is self-dual, we can describe the principal graph of an inclusion $N_2 \subset N_2\rtimes J$ as soon as $\delta(J)$ is known, after identification of $K\!D(n)$ with its dual. Considering the Bratteli diagram of $I_i \subset K\!D(n)$ (with $i=0,1,2$) shows that the inclusion $N_2 \subset N_2\rtimes \delta^{-1}(I_0)$ is the only one of depth $2$. Therefore, by~\ref{irredprof2}, $\delta(K_2)$ has to be identified with $I_0$. Then, the principal graph of $N_2 \subset N_2\rtimes K_3$ and of $N_2 \subset N_2\rtimes K_4$ is $D_{2n}/\mathbb{Z}_2$ (see~\ref{grapheDpair}). Since $K_1$ is the intersection of the three coideal subalgebra\xspaces of dimension $2n$, $\delta(K_1)$ contains the three coideal subalgebra\xspaces of dimension $2$, whose span is $J_0$; by dimension, $\delta(K_1)$ is therefore $J_0$.
The graph of the inclusion $N_2 \subset N_2\rtimes K_1$ is therefore $D_n/\mathbb{Z}_2$ (see~\ref{grapheDimpair}). Furthermore, using~\cite{Hong_Szymanski.1996}, the inclusion is of the form $R\rtimes \mathbb{Z}_2 \subset R\rtimes G$, where $G$ is the semi-direct product of an abelian group of order $n$ with $\mathbb{Z}_2$.

\subsection{The other coideal subalgebra\xspaces of dimension $2n$: $K_3$ and $K_4$}
\label{K3K4impair} \label{K3K4nimpair}

\subsubsection{The coideal subalgebra\xspaces $K_3$ and $K_4$ are isomorphic}
\label{iso-n-impair}

\begin{proposition}
Let $n$ be odd. Then, the automorphism $\Theta'$ of $K\!D(n)$ (see Proposition~\ref{proposition.automorphism.KD.ThetaPrime}) exchanges $e_3$ and $e_4$. Therefore, the coideal subalgebra\xspaces $K_3=I(e_1+e_3)$ and $K_4=I(e_1+e_4)$ of $K\!D(n)$ are exchanged by $\Theta'$ and thus isomorphic.
\end{proposition}

\begin{proof}
Using~\ref{form2}, and dropping the $\lambda$'s for clarity, we have:
\begin{displaymath}
\Theta'(e_1+e_3) = \frac{1}{2n} \sum_{k=0}^{n-1}a^{2k} +ba^{2k+1+n} - \frac{1}{4n} \sum_{k=0}^{n-1}[a^{2k}-a^{-2k}+ba^{2k+1}-ba^{-2k-1}](1+a^n)\,.
\end{displaymath}
Since $n$ is odd, the first sum is $e_1+e_4$, while the second obviously vanishes.
\end{proof}

\subsubsection{Decomposition of $K_3$ and $K_4$}
\label{K3K4.blocks.impair}

From the expressions of the coproducts of the projections $e_1+e_3$ and $e_1+e_4$ for $n$ odd, one deduces:
$$ K_3 =\mathbb{C}(e_1+e_3)\oplus \mathbb{C}(e_2+e_4) \oplus \bigoplus_{j=1, j \text{ odd}}^{n-1} K_3^j\,,$$
$$K_4 =\mathbb{C}(e_1+e_4)\oplus \mathbb{C}(e_2+e_3) \oplus \bigoplus_{j=1, j \text{ odd}}^{n-1} K_4^j\,,$$
where each $K_3^j$ and $K_4^j$ is a factor $M_2(\mathbb{C})$ with matrix units:
\begin{displaymath}
(e^j_{1,1}+r^{n-j}_{1,1}), \ (e^j_{2,2}+r^{n-j}_{2,2}), \ (e^j_{1,2}+r^{n-j}_{1,2}), \text{ and } (e^j_{2,1}+r^{n-j}_{2,1})\,,
\end{displaymath}
and
\begin{displaymath}
(e^j_{1,1}+r^{n-j}_{2,2}),\ (e^j_{2,2}+r^{n-j}_{1,1}),\ (e^j_{1,2}-r^{n-j}_{2,1}), \text{ and } (e^j_{2,1}-r^{n-j}_{1,2})\,,
\end{displaymath}
respectively. From~\ref{tour}~(5), $I_1$ is contained in $K_3$ and $I_2$ in $K_4$.

\subsection{The coideal subalgebra\xspace $K_1=I(e_1+e_2+e_3+e_4)$ of dimension $n$}
\label{K1impair}

\begin{proposition}
For $n=2m+1$, the matrix structure of $K_1=I(e_1+e_2+e_3+e_4)=K_2\cap K_3 \cap K_4$ is given by
\begin{displaymath}
K_1 =\mathbb{C}(e_1+e_2+e_3+e_4) \oplus \bigoplus_{j=1, j \text{ odd}}^{n-1} \mathbb{C}(r^j_{1,1}+e^{n-j}_{1,1}) \oplus \mathbb{C}(r^j_{2,2}+e^{n-j}_{2,2})\,.
\end{displaymath}
In $K_2$, identified with $L^\infty(D_n)$, it is the subalgebra of functions constant on the right cosets of $\{1, \beta\}$. Using the notation of~\ref{e1e2}, the matrix structure can be written as:
\begin{displaymath}
K_1 = \bigoplus_{j=0, \dots ,n-1} \mathbb{C} (\chi_{\alpha^j} + \chi_{ \alpha^j \beta})\,.
\end{displaymath}
Each coideal subalgebra\xspace $I$ of $K_1$ is the subalgebra of functions constant on the right cosets of some subgroup of $D_n$ containing $\{1,\beta\}$. Such a subgroup is a dihedral group generated by $\beta$ and $\alpha^k$ for some divisor $k$ of $n$.
Then, the Jones projection of $I$ is $e_1+e_2+e_3+e_4+\sum_{h=1}^{(n-1)/2k} e^{2hk}_{1,1}+e^{2hk}_{2,2}+r^{n-2hk}_{1,1}+r^{n-2hk}_{2,2}$.
\end{proposition}

\begin{proof}
Straightforward, using~\ref{K1} and Appendix~\ref{cop}.
\end{proof}

\subsection{The coideal subalgebra\xspaces of dimension dividing $n$}
\label{sacignimpair}

From~\ref{sacign}, the other coideal subalgebra\xspaces of dimension dividing $2n$ are contained in one of the coideal subalgebra\xspaces of dimension $2n$. We now show a more precise result for those of dimension dividing $n$ (or equivalently of odd dimension).

\begin{prop}
\label{prop.sacignimpair}
When $n$ is odd, the coideal subalgebra\xspaces of odd dimension of $K\!D(n)$ are those of $K_2$; in particular, they are isomorphic to the algebra of functions on $D_n$ which are constant on right cosets w.r.t. one of its subgroups. There are exactly $n$ coideal subalgebra\xspaces of dimension $n$, whose Jones projections are given in~\ref{sacigsK2}.
\end{prop}

\begin{proof}
Let $I$ be a coideal subalgebra\xspace of odd dimension $k$. Then, $k$ is an odd divisor of $2n$. Set $k'=2n/k$, which is even. From~\ref{sacign}, either $I$ is contained in $K_1$ (and therefore in $K_2$), or its Jones projection is of the form $e_1+e_i+s$, where $i=2,3$ or $4$ and $s$ is a projection of trace $1/k-1/2n=(k'-1)/2n$ distinct from $e_j+e_{j'}$. However, in $K_3$ (resp. $K_4$), and except for $e_2+e_4$ (resp. $e_2+e_3$), the minimal projections are of trace $1/n$, so it is not possible to combine them to get a projection of trace $(k'-1)/2n$ with $k'$ even. Therefore, $p_I$ is of the form $e_1+e_2+s$, and by~\ref{tour}~(5), $I\subset K_2$.
\end{proof}

\subsection{The coideal subalgebra\xspaces of $K_3$ and $K_4$}\label{K4impair}

Since $K_3$ and $K_4$ are isomorphic via $\Theta'$, their lattices of coideal subalgebra\xspaces are isomorphic. We focus on the description of that of $K_4$.
From~\ref{sacignimpair}, the coideal subalgebra\xspaces of odd dimension of $K_4$ are those of $K_1$, and therefore contained in $K_2$. For the others, we have the following partial description.

\begin{prop}
\label{prop.K4impair}
Let $n=2m+1$, let $k$ be a divisor of $n$, and write $n=kd$. Let $B_{2d}$ be the subgroup generated by $a^{2d}$ and $b$, as in Proposition~\ref{prop.abelien}. Then,
\begin{equation*}
\label{equation.pK4k}
e_1+e_4+\sum_{j=1}^{d -1} p^{jk}_{1,1}=\frac{1}{2k} \sum_{g\in B_{2d}} \lambda(g)
\end{equation*}
is the Jones projection of a coideal subalgebra\xspace of dimension $2k$ of $K_4$, the image by $\varphi_{d}$ of the $K_4$ of $K\!D(k)$.

In general, every coideal subalgebra\xspace of dimension $2k$ of $K_4$ contains $I_2$. Its Jones projection is of the form:
\begin{equation*}
e_1+e_4+\sum_{t\in T} p^{2t}_{1,1}+p^{n-2t}_{1,1}\,,
\end{equation*}
where $T$ is a subset of $\{1,2,\dots,m\}$ of size $\frac12(d-1)$. In particular, the Jones projection for $B_{2d}$ above is obtained when $T$ is the set of multiples of $k$ in $\{1,\dots,m\}$.
\end{prop}

\begin{proof}
The first part is the third item of Proposition~\ref{prop.abelien}.

Let $k$ be a non-trivial divisor of $n$ and $L$ a coideal subalgebra\xspace of dimension $2k$ of $K_4$. Then, $\delta(L)$ is of dimension $2d$, and by~\ref{sacign} it is contained in exactly one of $K_2$, $K_3$, and $K_4$. Therefore, $L$ contains exactly one of $I_0$, $I_1$, and $I_2$, and from the expressions of $p_{I_i}$, $i=0,1,2$ (see~\ref{Himpair}), this is necessarily $I_2$. The Jones projection $p_L$ of $L$ is of trace $1/2k$, dominates $p_{K_4}=e_1+e_4$, and is dominated by $p_{I_2}=e_1+e_4+q_1+p_1$. Using the matrix units of $K_4$ (see~\ref{K3K4.blocks.impair}), it has to be of the given form.
\end{proof}

\subsection{The coideal subalgebra\xspaces of dimension $4$}
\label{sacig4impair}

The following proposition was suggested by the computer exploration of Appendix~\ref{subsubsection.demo.delta}, where we use the explicit isomorphism between $K\!D(n)$ and its dual (see~\ref{self-dualKD}) to derive the Jones projections of the coideal subalgebra\xspaces of dimension $4$ from the coideal subalgebra\xspaces of dimension $n$ of $K_2$.

\begin{prop}
When $n$ is odd, there are $n$ coideal subalgebra\xspaces $(J_k)_{k=0,\dots,n-1}$ of dimension $4$ in $K\!D(n)$. The Jones projection of $J_k$ is $p_{J_k} = e_1+ \sum_{j=1}^m q(0,\frac{2kj\pi}{n},2j)$. The principal graphs of the inclusions $N_1 \subset N_1\rtimes\delta(J_k)$ and $N_2 \subset N_2\rtimes J_k$ are respectively $D_n/\mathbb{Z}_2$ (see~\ref{grapheDimpair}) and $D_{2n+2}^{(1)}$.
\end{prop}

\begin{proof}
By self-duality, the coideal subalgebra\xspaces of dimension $4$ are mapped by $\delta$ to the coideal subalgebra\xspaces of dimension $n$ of $K_2$, and there are $n$ of them (see~\ref{sacigsK2}). The Jones projections $p_{J_k}$ are given in Proposition~\ref{J4} together with the central projections of $J_k$; the principal graph of $N_1 \subset N_1\rtimes\delta(J_k)$ follows. The principal graph of $N_2 \subset N_2\rtimes J_k$ follows from~\ref{sacigsK2}.
\end{proof}

\subsection{The lattice $\mathrm{l}(K\!D(n))$}
\label{resumeimpair}

The following theorem summarizes the results of this section, and describes most, if not all (see Conjecture~\ref{conj.lattice.nodd}), of the lattice $\mathrm{l}(K\!D(n))$ of coideal subalgebra\xspaces of $K\!D(n)$.

\begin{theorem}
\label{theorem.lattice.nodd}
When $n$ is odd, the lattice $\operatorname{l}(K\!D(n))$ is self-dual.
It has:
\begin{itemize}
\item $3$ coideal subalgebra\xspaces of dimension $2$: $I_0$, $I_1$, and $I_2$, contained in the intrinsic group algebra $J_0$;
\item $3$ coideal subalgebra\xspaces of dimension $2n$:
  \begin{itemize}
  \item $K_2$, isomorphic to $\mathcal{L}^{\infty}(D_n)$;
  \item $K_3$ and $K_4$, isomorphic via $\Theta'$, but not Kac subalgebras;
  \end{itemize}
\item $n$ coideal subalgebra\xspaces of dimension $n$ contained in $K_2$: $K_1=I(e_1+e_2+e_3+e_4)$, and, for $j=0,\dots,m-1$, $I(e_1+e_2+r_{1,1}^{2j+1})$ and $I( e_1+e_2+r_{2,2}^{2j+1})$;
\item $n$ coideal subalgebra\xspaces of dimension $4$:
\begin{equation*}
I(e_1+ \sum_{j=1}^m q(0,\frac{2kj\pi}{n},2j)), \quad\text{ for $k=0,\dots,n-1$.}
\end{equation*}
\end{itemize}

When $n$ is not prime, $\operatorname{l}(K\!D(n))$ further contains:
\begin{itemize}
\item the coideal subalgebra\xspaces of odd dimension strictly dividing $n$, contained in $K_2$, and associated to the dihedral subgroups of $D_n$;
\item the coideal subalgebra\xspaces of dimension a multiple of $4$, images of the previous ones by $\delta$;
\item the coideal subalgebra\xspaces of even dimension strictly dividing $2n$:
  \begin{itemize}
  \item the coideal subalgebra\xspaces of $K_2$ corresponding to the subgroups of $\langle\alpha\rangle$ in $D_n$;
  \item the coideal subalgebra\xspaces of $K_3$ containing $I_1$; they are the images by $\Theta'$ of those of $K_4$ containing $I_2$ below;
  \item the coideal subalgebra\xspaces of $K_4$ containing $I_2$:
    \begin{itemize}
    \item for each non-trivial divisor $k$ of $n=kd$, the image of the coideal subalgebra\xspace $K_4$ of dimension $2k$ of $K\!D(k)$ by the embedding $\varphi_{d}$;
    \item a set $\mathcal{I}$ of other coideal subalgebra\xspaces, controlled by Proposition~\ref{prop.K4impair}, and empty for $n\leq 51$.
\end{itemize} \end{itemize} \end{itemize} \end{theorem} \begin{proof} Follows from Theorem~\ref{theorem.self-dual}, and Propositions~\ref{sacign},~\ref{iso-n-impair},~\ref{K1impair},~\ref{prop.sacignimpair},~\ref{prop.K4impair}, and~\ref{sacig4impair}. The emptiness of $\mathcal{I}$ for $n\leq 51$ was checked on computer, using that, by Proposition~\ref{prop.K4impair}, there are only finitely many possible Jones projections for coideal subalgebra\xspaces in $\mathcal{I}$. This check can be further reduced by using $\delta$, which exchanges coideal subalgebra\xspaces of dimension $2k$ and $2d$. \end{proof} \begin{cor} \label{corollary.lattice.nprime} For $n$ odd prime, the non trivial coideal subalgebra\xspaces of $K\!D(n)$ are of dimension $2$, $4$, $n$, and $2n$, and the graph of the lattice of coideal subalgebra\xspaces of $K\!D(n)$ is similar to Figure~\ref{figure.KD5}, with $n$ coideal subalgebra\xspaces of dimension $4$ and $n$ of dimension $n$. \end{cor} \begin{figure}\caption{The lattice of coideal subalgebra\xspaces of $K\!D(5)$}\label{figure.KD5} \end{figure} The computer exploration results mentioned in Theorem~\ref{theorem.lattice.nodd} suggest right away the following conjecture when $n$ is not prime. \begin{conjecture} \label{conj.lattice.nodd} Let $n$ be odd. Then, the description of the lattice $\operatorname{l}(K\!D(n))$ of Theorem~\ref{theorem.lattice.nodd} is complete: $\mathcal I$ is empty. \end{conjecture} \subsection{The Kac algebra $K\!D(3)$ of dimension $12$} \subsubsection{The lattice $\mathrm{l}(K\!D(3))$} \label{KD3section} In this section, we describe the lattice of $K\!D(3)$ in detail to illustrate Corollary~\ref{corollary.lattice.nprime}.
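Throughout this section, coideal subalgebra\xspaces of $K_2\equiv L^{\infty}(D_n)$ are matched with subgroups of the dihedral group $D_n$, a coideal subalgebra\xspace of dimension $2n/|H|$ for each subgroup $H$. As a sanity check on the subgroup counts used in the examples below (three subgroups of order $2$ in $D_3$, seven in $D_6$), the following Python sketch, which is ours and not part of the paper, brute-forces the subgroups of $D_n$, realized as permutations of the vertices of the regular $n$-gon.

```python
from collections import Counter
from itertools import product

def dihedral(n):
    """The 2n elements of D_n as permutation tuples of {0,...,n-1}:
    rotations i -> i+k and reflections i -> k-i (mod n)."""
    rotations = [tuple((i + k) % n for i in range(n)) for k in range(n)]
    reflections = [tuple((k - i) % n for i in range(n)) for k in range(n)]
    return rotations + reflections

def compose(p, q):
    """Permutation composition (p after q)."""
    return tuple(p[q[i]] for i in range(len(p)))

def generated(gens, n):
    """Subgroup generated by gens, as a frozenset of permutations."""
    H = {tuple(range(n))}
    changed = True
    while changed:
        changed = False
        for h in list(H):
            for g in gens:
                x = compose(h, g)
                if x not in H:
                    H.add(x)
                    changed = True
    return frozenset(H)

def subgroup_orders(n):
    """Count the subgroups of D_n by order.  Every subgroup of a
    dihedral group is cyclic or dihedral, hence 2-generated, so
    closing all pairs of elements finds all of them."""
    G = dihedral(n)
    subgroups = {generated(pair, n) for pair in product(G, repeat=2)}
    return Counter(len(H) for H in subgroups)
```

For instance, `subgroup_orders(3)[2] == 3` matches the three subgroups of order $2$ of $D_3$ giving the dimension-$3$ coideal subalgebra\xspaces $K_1$, $K_{21}$, $K_{22}$ of $K\!D(3)$ below, and `subgroup_orders(6)[2] == 7` matches the seven dimension-$6$ coideal subalgebra\xspaces of the $K_2$ of $K\!D(6)$ in~\ref{KD6}.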
\begin{figure} \caption{The lattice of coideal subalgebra\xspaces of $K\!D(3)$} \label{figure.KD3} \end{figure} \begin{prop} \label{prop.lattice.KD3} In $K\!D(3)$, \begin{itemize} \item The coideal subalgebra\xspaces of dimension $2$ are $I_0$, $I_1$, and $I_2$; \item The coideal subalgebra\xspaces of dimension $3$ are contained in $K_2$; they are the algebras of functions constant w.r.t. the three subgroups of order $2$: \begin{align*} K_1&=\mathbb{C}(e_1+e_2+e_3+e_4)\oplus \mathbb{C}(r^1_{1,1}+e^2_{1,1}) \oplus \mathbb{C}(r^1_{2,2}+e^2_{2,2})\,; \\ K_{21}&=\mathbb{C}(e_1+e_2+r^1_{1,1}) \oplus \mathbb{C}(e_3+e_4+e^2_{2,2}) \oplus \mathbb{C}(r^1_{2,2}+e^2_{1,1})\,; \\ K_{22}&=\mathbb{C}(e_1+e_2+r^1_{2,2}) \oplus \mathbb{C}(e_3+e_4+e^2_{1,1}) \oplus \mathbb{C}(r^1_{1,1}+e^2_{2,2})\,. \end{align*} \item The coideal subalgebra\xspaces of dimension $4$, images of those of dimension $3$ by $\delta$, are: \begin{align*} J_0&=\mathbb{C}(e_1+q_1) \oplus \mathbb{C}(e_2+q_2) \oplus \mathbb{C}(e_3+p_2) \oplus \mathbb{C}(e_4+p_1)\,;\\ J_1&=\mathbb{C}(e_1+q(0,\frac{2\pi}{3},2 )) \oplus \mathbb{C}(e_2+q(0,\frac{5\pi}{3},2 )) \oplus \mathbb{C}(e_3+q( \frac{ \pi}{3},0,1)) \oplus \mathbb{C}(e_4+q( -\frac{ \pi}{3},\pi,1))\,; \\ J_2&= \mathbb{C}(e_1+q(0,\frac{4\pi}{3},2))\oplus \mathbb{C}(e_2+q(0,\frac{\pi}{3},2))\oplus \mathbb{C}(e_3+q(-\frac{\pi}{3},0,1))\oplus \mathbb{C}(e_4+q(\frac{\pi}{3},\pi,1))\,. \end{align*} \item The three coideal subalgebra\xspaces of dimension $6$ are $K_2$, $K_3$, and $K_4$. \end{itemize} With the notations of~\ref{graphe}, the inclusion $N_2\subset N_2\rtimes J$ has principal graph \begin{itemize} \item $D^{(1)}_8$ for $J= J_1$ or $J_2$; \item $A_5$ for $J=K_1$, $K_{21}$, or $K_{22}$; \item $D_6/\mathbb{Z}_2$ for $J=K_3$ or $K_4$. \end{itemize} For $J= I_0$, $I_1$, $I_2$, $J_0$ or $K_2$, it is of depth $2$. The lattice of coideal subalgebra\xspaces of $K\!D(3)$ is as given in Figure~\ref{figure.KD3}.
\end{prop} \subsubsection{Realization of $N_2 \subset N_2 \rtimes J_1$ by composition of subfactors}\label{KD3IK} As in~\ref{KPIK}, the inclusion $N_2\subset N_2\rtimes J_i$ of principal graph $D^{(1)}_8$ can be interpreted as $M^{(\alpha, \mathbb{Z}_2)} \subset M \rtimes_\beta \mathbb{Z}_2$ (see also~\cite{Popa.1990}). \begin{prop} The inclusion $N_2 \subset N_2 \rtimes J_1$, which is of principal graph $D_8^{(1)}$, can be put in the form $M^{(\alpha, \mathbb{Z}_2)} \subset M \rtimes_\beta \mathbb{Z}_2$ as follows. Take the following basis of $J_1$: \begin{displaymath} B_1= e_1+q(0,\frac{2\pi}{3},2), \; B_2=e_2+q(0,\frac{5\pi}{3},2), \; B_3=e_3+q(\frac{ \pi}{3},0,1), \; B_4=e_4+q( -\frac{ \pi}{3},\pi,1)\,, \end{displaymath} and set $ v=B_1-B_2+B_3-B_4$. Recall that $\lambda(a^3)=B_1+B_2-B_3-B_4$. Set $M=N_2 \rtimes I_0$, let $\alpha$ be the automorphism of $M$ which fixes $N_2$ and changes $\lambda(a^3)$ into its opposite (this is the dual action of $\mathbb{Z}_2$), and set $\beta=\Ad v$. Then, $\alpha$ and $\beta$ are involutive automorphisms of $M$ such that the period of $\beta \alpha$ is $6$, while $M^{\alpha}=N_2$ and $N_2\rtimes J_1=M\rtimes_{\beta}\mathbb{Z}_2$. \end{prop} \begin{proof} As in~\ref{action} (see~\cite{David.2005}), we identify $N_3$ and $N_2\rtimes K\!D(3)$. Therefore, for all $x \in N_2$, one has $\beta(x)=vxv^*=(v_{(1)}\triangleright x)v_{(2)}v^*$. From $\Delta(v)=v\otimes (B_1-B_2) +w\otimes (B_3-B_4)$ (with $w=(e_1+q(0,-\frac{2\pi}{3},2))-(e_2+q(0,\frac{\pi}{3},2))+(e_3+q(-\frac{\pi}{3},0,1))-(e_4+q(\frac{ \pi}{3},\pi,1))$), we get, for $x\in N_2$, $$\beta(x)= (v\triangleright x)(B_1+B_2)+(w\triangleright x)(B_3+B_4)\,,$$ and deduce that $\beta$ normalizes $M$. By a straightforward calculation, we obtain $(wv)^3=(vw)^3=1$ and $(\beta\alpha)^6=\id$. Then, $N_2\rtimes J_1$ is indeed the crossed product of $M$ by $\beta$, since $\lambda(a^3)$ and $v$ generate the subalgebra $J_1$.
\end{proof} \subsection{The algebra $K\!D(9)$ of dimension $36$} \label{KD9} We illustrate, on $K\!D(9)$, Theorem~\ref{theorem.lattice.nodd} for $n$ not prime. \begin{corollary} The lattice of coideal subalgebra\xspaces of $K\!D(9)$ is given by Figure~\ref{figure.KD9}. Namely: \begin{itemize} \item The coideal subalgebra\xspaces of dimension $2$ are $I_0$, $I_1$, and $I_2$. \item The coideal subalgebra\xspaces of dimension $3$ are contained in $K_2$: \begin{itemize} \item The coideal subalgebra\xspace of constant functions modulo $<\alpha^3,\beta>$ is \\$L_0=I(e_1+e_2+e_3+e_4+r_{1,1}^3+r_{2,2}^3+e_{1,1}^6+e_{2,2}^6)$; \item The coideal subalgebra\xspace of constant functions modulo $<\alpha^3,\beta \alpha>$ is \\$L_1=I(e_1+e_2+r_{2,2}^1+r_{1,1}^5+e_{1,1}^6+e_{2,2}^6+r_{2,2}^7)$; \item The coideal subalgebra\xspace of constant functions modulo $<\alpha^3,\beta \alpha^2>$ is \\$L_2=I(e_1+e_2+r_{1,1}^1+r_{2,2}^5+e_{1,1}^6+e_{2,2}^6+r_{1,1}^7)$. \end{itemize} \item The nine coideal subalgebra\xspaces of dimension $4$ are the $J_k=I(e_1+ \sum_{j=1}^4 q(0,\frac{2kj\pi}{9},2j))$, for $k=0,\dots,8$; \item The coideal subalgebra\xspaces of dimension $6$ are: \begin{itemize} \item The Kac subalgebra of constant functions modulo $<\alpha^3>$ in $K_2$:\\ $M_{2}=I(e_1+e_2+e_{1,1}^6+e_{2,2}^6)$, which contains all the $L_i$, $i=0,1,2$; \item The coideal subalgebra\xspace $M_{3}=I(e_1+e_3+p^{3}_{2,2}+p^{6}_{1,1})$ contained in $K_3$; \item The coideal subalgebra\xspace $M_{4}=I(e_1+e_4+p^{3}_{1,1}+p^{6}_{1,1})$ contained in $K_4$.
\end{itemize} \item The nine coideal subalgebra\xspaces of dimension $9$ are contained in $K_2$:\\ $K_1=I(e_1+e_2+e_3+e_4)$, $K_{21}=I(e_1+e_2+r_{1,1}^{3})$, and $K_{22}=I(e_1+e_2+r_{2,2}^{3})$ whose intersection is $L_0$; \\ $K_{23}=I(e_1+e_2+r_{2,2}^{1})$, $K_{24}=I(e_1+e_2+r_{1,1}^{5})$, and $K_{25}=I(e_1+e_2+r_{2,2}^{7})$ whose intersection is $L_1$; \\ $K_{26}=I(e_1+e_2+r_{1,1}^{1})$, $K_{27}=I(e_1+e_2+r_{2,2}^{5})$, and $K_{28}=I(e_1+e_2+r_{1,1}^{7})$ whose intersection is $L_2$. \item The three coideal subalgebra\xspaces of dimension $12$, images by $\delta$ of $L_i$, $i=0,1,2$. Their Jones projections are $e_1+p_{1,1}^6$ for $\delta(L_0)$, and $e_1+ q(0,\frac{2\pi}{3},6)$ and $e_1+ q(0,\frac{4\pi}{3},6)$ for the two others. \item The three coideal subalgebra\xspaces of dimension $18$ are $K_2$, $K_3$, and $K_4$. \end{itemize} \end{corollary} \begin{proof} Most of this corollary follows from Theorem~\ref{theorem.lattice.nodd}. To derive the formulas for the Jones projections of $\delta(L_i)$, we noticed that they all contain $M_2$ as well as three $J_k$'s, and looked on computer through the projectors of trace $1/12$ which dominate the projections of those coideal subalgebra\xspaces. Since $L_0$ is contained in $K_1$, $\delta(L_0)$ contains $J_0$. \end{proof} \begin{figure} \caption{The lattice of coideal subalgebra\xspaces of $K\!D(9)$} \label{figure.KD9} \end{figure} \subsection{The algebra $K\!D(15)$ of dimension $60$} \label{KD15} Figure~\ref{figure.KD15} illustrates, on $K\!D(15)$, Theorem~\ref{theorem.lattice.nodd} for $n$ not the square of a prime. \begin{figure} \caption{The lattice of coideal subalgebra\xspaces of $K\!D(15)$} \label{figure.KD15} \end{figure} \section{The Kac algebras $K\!D(n)$ for $n$ even} \label{section.KD.even} In this section, we assume that $n$ is even: $n=2m$.
The structure of $K\!D(n)$ is then very different from the odd case. The intrinsic group is $D_4$, and the algebra $K\!D(2m)$ is never self-dual. On the other hand, the coideal subalgebra\xspaces $K_3$ and $K_4$ are non-isomorphic Kac subalgebras. Using that $K_2$ is isomorphic to $L^\infty(D_n)$, $K_4$ to $K\!D(m)$, and $K_3$ to $K\!B(m)$ (i.e. $K\!Q(m)$ for $m$ odd), we recover a large part of the lattice of coideal subalgebra\xspaces by induction. For example, the lattices of coideal subalgebra\xspaces of $K\!D(4)$ (see~\ref{KD4}) and $K\!D(8)$ (see~\ref{KD8}) and a large part of the lattice of $K\!D(6)$ (see~\ref{KD6}) can be constructed this way. \subsection{The algebra of the intrinsic group $K_0$} \label{section.K0} We start with the algebra $K_0$ of the intrinsic group of $K\!D(n)$. From~\ref{plonge} and~\ref{intrinseque}, it is generated by $\lambda(a^{m})$ and $\lambda(b)$, and isomorphic to $K\!D(2)\equiv \mathbb{C}(D_4)$. \subsubsection{The isomorphism between $K\!D(2)$ and $K_0$ using matrix units} \label{section.KD2.K0} We have seen in~\ref{plonge} that the map $\varphi_m$ from $K\!D(2)$ to $K_0$ which sends $\lambda(b)$ to $\lambda(b)$ and $\lambda(a)$ to $\lambda(a^m)$ extends to a Kac algebra isomorphism. We now give the expression of $\varphi_m$ on the matrix units of $K\!D(2)$ (which will be marked with a $'$ to avoid confusion). With the conditions on the index $j$ taken modulo $4$, and using the formulas of~\ref{lambda} in $K\!D(2)$ and $K_0$, we obtain:\\ For $n=2m=4m'$: \begin{align*} \varphi_m(e'_1) &=e_1+e_4 + \sum_{j \equiv 0} p^j_{1,1}\,,&& \varphi_m(e'_2) =e_2+e_3 + \sum_{j \equiv 0} p^j_{2,2}\,,\\ \varphi_m(e'_3) &= \sum_{j \equiv 2} p^j_{2,2}\,,&& \varphi_m(e'_4) =\sum_{j \equiv 2} p^j_{1,1}\,.
\end{align*} For $n=2m=4m'+2$: \begin{align*} \varphi_m(e'_1) &=e_1 + \sum_{j \equiv 0} p^j_{1,1}\,,&& \varphi_m(e'_2) =e_2+ \sum_{j \equiv 0} p^j_{2,2}\,,\\ \varphi_m(e'_3) &= e_3 + \sum_{j \equiv 2} p^j_{2,2}\,,&& \varphi_m(e'_4) =e_4 + \sum_{j \equiv 2} p^j_{1,1}\,. \end{align*} In both cases: $$\varphi_m(e'_{1,2})=\sum_{j \equiv 1} e^j_{1,2}+\sum_{j \equiv 3} e^j_{2,1}\,.$$ \subsubsection{The lattice $\mathrm{l}(K_0)$} \label{sacigK0} We obtain the central projections and the lattice of coideal subalgebra\xspaces of $K_0$ by combining the results of~\ref{KD2} and~\ref{section.KD2.K0}. All the coideal subalgebra\xspaces are Kac subalgebras. We keep the same naming convention for the coideal subalgebra\xspaces in $K\!D(2)$ and their images by $\varphi_m$ in $K_0$. The algebra $K_0$ contains $J_0=\mathbb{C}[H]$ and all its coideal subalgebra\xspaces (note that by~\ref{tour} (5), those are also coideal subalgebra\xspaces of $K_4$): \begin{itemize} \item $J_0=\mathbb{C}(e_1+e_4+q_1) \oplus \mathbb{C}(e_2+e_3+q_2) \oplus \mathbb{C} p_2 \oplus \mathbb{C} p_1$; \item $I_0=\mathbb{C}(e_1 +e_2+e_3 +e_4+q_1+q_2)\oplus \mathbb{C}(p_1+p_2)$; \item $I_1=\mathbb{C}(e_1 +e_4+q_1+p_2)\oplus \mathbb{C}(e_2 +e_3+p_1+q_2)$; \item $I_2=\mathbb{C}(e_1 +e_4+p_1+q_1)\oplus \mathbb{C}(e_2 +e_3+p_2+q_2)$. \end{itemize} The bases of the other coideal subalgebra\xspaces depend on the parity of $m$: \begin{description} \item[For $n=4m'$] The Jones projection of $K_0$ is $e_1+e_4 +\sum_{j=1}^{m'-1}p^{4j}_{1,1}$. The other central projection in the same connected component of the Bratteli diagram of $K_0 \subset K\!D(n)$ is $e_2+e_3 + \sum_{j \equiv 0} p^j_{2,2}$.
The coideal subalgebra\xspaces not contained in $J_0$ are: \begin{itemize} \item $I_3=I(e_1+e_2+e_3+e_4+ \sum_{j \equiv 0}(e^j_{1,1}+e^j_{2,2})+\sum_{j \equiv 1}r^j_{2,2}+\sum_{j \equiv 3}r^j_{1,1})$; \item $I_4=I(e_1+e_2+e_3+e_4+ \sum_{j \equiv 0}(e^j_{1,1}+e^j_{2,2})+\sum_{j \equiv 1}r^j_{1,1}+\sum_{j \equiv 3}r^j_{2,2})$; \item $J_{20}=I(e_1+e_2+e_3+e_4+ \sum_{j \equiv 0}(e^j_{1,1}+e^j_{2,2}))$; \item $J_m=I(e_1+e_4+ \sum_{j \equiv 0}p^j_{1,1}+\sum_{j \equiv 2}p^j_{2,2})$. \end{itemize} $K_0$ is a Kac subalgebra of $K_4$, and $J_{20}$ is contained in $K_1$. \item[For $n=4m'+2$] The Jones projection of $K_0$ is $e_1+\sum_{j=1}^{m'}p^{4j}_{1,1}$. The other central projection in the same connected component of the Bratteli diagram of $K_0 \subset K\!D(n)$ is $e_2+ \sum_{j \equiv 0} p^j_{2,2}$. The coideal subalgebra\xspaces not contained in $J_0$ are: \begin{itemize} \item $I_3=I(e_1+e_2+ \sum_{j \equiv 0}(e^j_{1,1}+e^j_{2,2})+\sum_{j \equiv 1}r^j_{2,2}+\sum_{j \equiv 3}r^j_{1,1})$; \item $I_4=I(e_1+e_2+ \sum_{j \equiv 0}(e^j_{1,1}+e^j_{2,2})+\sum_{j \equiv 1}r^j_{1,1}+\sum_{j \equiv 3}r^j_{2,2})$; \item $J_{20}=I(e_1+e_2+ \sum_{j \equiv 0}(e^j_{1,1}+e^j_{2,2}))$; \item $J_m=I(e_1+e_3+ \sum_{j \equiv 0}p^j_{1,1}+\sum_{j \equiv 2}p^j_{2,2})$. \end{itemize} From~\ref{tour}~(5), $J_{m}$ is contained in $K_3$. \end{description} In both cases, $J_{20}$ is the coideal subalgebra\xspace of functions which are constant on the right cosets for $<\alpha^2>$ in $K_2$ (see~\ref{section.K2}). Recall that, by~\ref{intrin}, all the coideal subalgebra\xspaces of dimension $2$ of $K\!D(n)$ are in $K_0$. \subsubsection{Principal graphs coming from $K_0$ and its coideal subalgebra\xspaces} \label{grapheK0} Since $K\!D(n)$ is not self-dual when $n$ is even, the Bratteli diagram of an inclusion $I\subset K\!D(n)$ yields by~\ref{grapheprincipal} the principal graph of an intermediate factor in $\widehat{K\!D(n)}$.
With the notations of~\ref{graphe}, here are the principal graphs coming from the coideal subalgebra\xspaces of $K_0$: \begin{itemize} \item The inclusions $N_1 \subset N_1\rtimes \delta(I_0)$ and $R\subset R\rtimes \delta(J_{20})$ are of depth $2$; \item The principal graph of $N_1 \subset N_1\rtimes \delta(I_1)$ and of $N_1 \subset N_1\rtimes \delta(I_2)$ is $D_{2n}/\mathbb{Z}_2$; \item The principal graph of $N_1\subset N_1\rtimes \delta(I_3)$ and of $N_1\subset N_1\rtimes \delta(I_4)$ is $QB_{m'}$ for $n=4m'$ and $DB_{m'}$ for $n=4m'+2$; \item The principal graph of $N_1 \subset N_1\rtimes\delta(J_0)$ and of $N_1\subset N_1\rtimes \delta(J_m)$ is $D_n/\mathbb{Z}_2$ (see~\ref{sacig4pair}); \item The principal graph of $N_1\subset N_1\rtimes \delta(K_0)$ is $D_m/\mathbb{Z}_2$. \end{itemize} \subsection{The Kac subalgebra $K_4=I(e_1+e_4)$} As in~\ref{plonge}, consider the Kac subalgebra of $K\!D(n)$ isomorphic to $K\!D(m)$ and generated by $\lambda(a^2)$ and $\lambda(b)$. Its dimension is $2n$, and it contains $e_1+e_4$ (see~\ref{form2}). Therefore, it coincides with $K_4$. From the formulas of~\ref{form2} or~\ref{cop}: $$K_4=\mathbb{C}(e_1+e_4) \oplus \mathbb{C}(e_2+e_3)\oplus \mathbb{C} p^m_{1,1} \oplus \mathbb{C} p^m_{2,2} \oplus \bigoplus_{j=1}^{m-1}K_4^j\,,$$ where $K_4^j$ is the factor $M_2(\mathbb{C})$ with matrix units $e^j_{1,1}+e^{n-j}_{2,2}$, $e^j_{1,2}+e^{n-j}_{2,1}$, $e^j_{2,1}+e^{n-j}_{1,2}$, and $e^j_{2,2}+e^{n-j}_{1,1}$ (note: $j$ and $n-j$ have the same parity). \subsection{The Kac subalgebra $K_3=I(e_1+e_3)$} \label{section.KD.even.K3} Since $\Delta(e_1+e_3)$ is symmetric (see~\ref{cop}), by~\ref{irredprof2}, $K_3$ is a Kac subalgebra of $K\!D(n)$. 
\label{K3pair} From the expression of the coproduct of the Jones projection, we deduce:\\ if $m$ is odd: $$K_3=I(e_1+e_3)=\mathbb{C}(e_1+e_3) \oplus \mathbb{C}(e_2+e_4)\oplus \mathbb{C} e^m_{1,1}\oplus \mathbb{C} e^m_{2,2} \oplus \bigoplus_{j=1}^{m-1}K_3^j\,,$$ if $m$ is even: $$K_3=I(e_1+e_3)=\mathbb{C}(e_1+e_3) \oplus \mathbb{C}(e_2+e_4)\oplus \mathbb{C} r^m_{1,1}\oplus \mathbb{C} r^m_{2,2} \oplus \bigoplus_{j=1}^{m-1}K_3^j\,,$$ where in both cases $K_3^j$ is the factor $M_2(\mathbb{C})$ with matrix units:\\ for $j$ even: \begin{displaymath} e^j_{2,2}+e^{n-j}_{1,1}, \quad e^j_{2,1}-e^{n-j}_{1,2}, \quad e^j_{1,2}-e^{n-j}_{2,1}, \quad \text{ and } \quad e^j_{1,1}+e^{n-j}_{2,2}\,; \end{displaymath} for $j$ odd: \begin{gather*} r^j_{2,2}+r^{n-j}_{1,1},\quad r^j_{2,1}-r^{n-j}_{1,2}, \quad r^j_{1,2}-r^{n-j}_{2,1}, \quad\text{ and }\quad r^j_{1,1}+r^{n-j}_{2,2}\,,\\ \text{ or }\\ e^j_{1,1}+e^{n-j}_{1,1},\quad e^j_{2,1}-e^{n-j}_{2,1}, \quad e^j_{1,2}-e^{n-j}_{1,2}, \quad\text{ and }\quad e^j_{2,2}+e^{n-j}_{2,2}\,. \end{gather*} $K_3$ is further studied in Section~\ref{section.KB}, in connection with the families of Kac algebras $K\!Q(m)$ and $K\!B(m)$. \subsection{The Kac subalgebra $K_1=I(e_1+e_2+e_3+e_4)$} \label{K1pair} \begin{proposition} For $n=2m$, $K_1=I(e_1+e_2+e_3+e_4)=K_2\cap K_3 \cap K_4$ is a commutative Kac subalgebra isomorphic to $L^\infty(D_m)$. Its matrix structure is given by \begin{displaymath} K_1=\mathbb{C}(e_1+e_2+e_3+e_4) \oplus \bigoplus_{j=1, \, j \text{ odd}}^{n-1} \mathbb{C}(r^j_{1,1}+r^{n-j}_{2,2}) \oplus \bigoplus_{j=1,\, j \text{ even}}^{n-1} \mathbb{C}(e^j_{1,1}+e^{n-j}_{2,2})\,. \end{displaymath} Its lattice of coideal subalgebra\xspaces is the dual of the lattice of subgroups of $D_m$. In $K_2\equiv L^\infty(D_{2m})$, it is the subalgebra of constant functions on the right cosets of $\{1, \alpha^m\}$. In $K_4\equiv K\!D(m)$, $K_1$ plays the role of $K_2$.
\end{proposition} \begin{proof} The commutative subalgebra $K_1$ has to play the role of $K_2$ in $K_4$ because $K_4\equiv K\!D(m)$ has a single commutative coideal subalgebra\xspace of this dimension. The rest follows from~\ref{e1e2},~\ref{K1}, and~\ref{cop}. \end{proof} \subsection{The coideal subalgebra\xspaces of dimension $4$} \label{sacig4pair} Recall that, from~\ref{sacign}, any coideal subalgebra\xspace of dimension $4$ is contained in some $K_i$. In this section, we use this fact to inductively describe all the coideal subalgebra\xspaces of dimension $4$ of $K\!D(2m)$. As in the case $n$ odd (see~\ref{J4}), we exhibit $2m$ projections that generate $2m$ coideal subalgebra\xspaces $J_k$ of dimension $4$: \begin{lemma} For $k=0,\dots,n-1$, define \begin{displaymath} p_{J_k} = e_1+e_i + \sum_{j=1}^{m-1} q(0,\frac{2kj\pi}{n},2j)\,, \end{displaymath} where $i=3$ (resp. $i=4$) for $k$ odd (resp. even). The projection $p_{J_k}$ is the Jones projection of a coideal subalgebra\xspace $J_k$ of dimension $4$ of $K_3$ (resp. $K_4$) for $k$ odd (resp. even). In particular, $J_0$ and $J_m$ are contained in $K_0$. Furthermore, the principal graph of the inclusion $N_1 \subset N_1\rtimes\delta(J_k)$ is $D_n/\mathbb{Z}_2$ (see~\ref{grapheDpair}). \end{lemma} \begin{prop} The coideal subalgebra\xspaces of dimension $4$ of $K\!D(2m)$ are: \begin{itemize} \item in $K_3$: the $m$ coideal subalgebra\xspaces $J_k$, $k$ odd; \item in $K_4$: the $m$ coideal subalgebra\xspaces $J_k$, $k$ even; \item in $K_2$: \begin{itemize} \item when $m$ is odd, the unique coideal subalgebra\xspace $J_{20}$ of dimension $4$; \item when $m$ is even, the five coideal subalgebra\xspaces of dimension $4$. 
\end{itemize} \end{itemize} \end{prop} \begin{proof} \textbf{Case $m$ odd:} $K_3$ has $m$ coideal subalgebra\xspaces of dimension $4$ since it is isomorphic to $K\!Q(m)$ (see~\ref{theorem.lattice.nprime}); $K_4$ has $m$ coideal subalgebra\xspaces of dimension $4$ since it is isomorphic to $K\!D(m)$ (see~\ref{sacig4impair}). The unique coideal subalgebra\xspace of dimension $4$ of $K_2$ is $J_{20}$; it corresponds to the unique subgroup of order $m$ of $D_n$ (see~\ref{section.K2}). \textbf{Case $m=2m'$ even:} $D_n$ has five subgroups of order $m$, and the associated coideal subalgebra\xspaces of $K_2$ can be made explicit from the results of~\ref{sacigsK2}, which we will do in the examples. We prove in~\ref{latticeKB} that the coideal subalgebra\xspaces of dimension $4$ of $K_3$ are either in $K_1$, or are the $J_k$ for $k$ odd. Since $K_4$ is isomorphic to $K\!D(2m')$, we can use induction: we shall see in~\ref{KD4} that the proposition holds for $m=2$. Assume that the proposition holds for $m'$; then the dimension $4$ coideal subalgebra\xspaces of $K_4$ are: \begin{itemize} \item the coideal subalgebra\xspaces $J_k$ of $K\!D(2m')$, which give the $J_{2k}$ in $K_4$; \item those of the $K_2$ of $K\!D(2m')$, which is itself the subalgebra $K_1$ of the $K_2$ of $K\!D(2m)$. \end{itemize} So the proposition holds for $m$. \end{proof} \subsection{The Kac algebra $K\!D(4)$ of dimension $16$} \label{KD4} We now illustrate the general study of $K\!D(2m)$ with $m$ even on the Kac algebra $K\!D(4)$ of dimension $16$. This algebra and its dual $\widehat{K\!D(4)}$ (with underlying algebra $\mathbb{C}^8\oplus M_2(\mathbb{C}) \oplus M_2(\mathbb{C})$) are described in~\cite[XI.15]{Izumi_Kosaki.2002}. \begin{prop} The lattice of coideal subalgebra\xspaces of $K\!D(4)$ is as given in Figure~\ref{figure.KD4}.
\end{prop} \begin{figure} \caption{The lattice of coideal subalgebra\xspaces of $K\!D(4)$} \label{figure.KD4} \end{figure} \begin{proof} From~\ref{sacign}, any coideal subalgebra\xspace of $K\!D(4)$ is inductively a coideal subalgebra\xspace of one of the three Kac subalgebras of dimension $8$: $K_2$, $K_3$, and $K_4$. In the sequel of this section, we study them in turn. \end{proof} The coproduct expressions which we use in this section are collected in~\ref{cop}. \subsubsection{The coideal subalgebra\xspaces of $K_4$} \label{D4} The Kac subalgebra $K_4=I(e_1+e_4)$ is isomorphic to $K\!D(2)$: $$K_4=\mathbb{C}(e_1+e_4) \oplus \mathbb{C}(e_2+e_3)\oplus \mathbb{C} q_1 \oplus \mathbb{C} q_2 \oplus M_2(\mathbb{C})\,,$$ where the matrix units of the factor $M_2(\mathbb{C})$ are: \begin{displaymath} e^1_{1,1}+e^{3}_{2,2},\quad e^1_{1,2}+e^{3}_{2,1},\quad e^1_{2,1}+e^{3}_{1,2},\quad \text{ and } \quad e^1_{2,2}+e^{3}_{1,1}\,. \end{displaymath} As a special feature of $K\!D(4)$, $K_4$ coincides with the group algebra $K_0$ of the intrinsic group studied in~\ref{sacigK0}. The element $c=e_1 -e_2-e_3 +e_4-i(e^1_{1,1}-e^{1}_{2,2})-(q_1-q_2)+i(e^3_{1,1}-e^{3}_{2,2})$ of $K_4$ is group-like and of order $4$, and its square is $\lambda(a^4)$. The intrinsic group $D_4$ of $K\!D(4)$ is therefore generated by $c$ and $\lambda(b)$. The coideal subalgebra\xspaces of $K_4$ are the algebras of the subgroups of $D_4$ (see~\ref{sacigK0}). In dimension $2$, they are: \begin{itemize} \item $I_0=\mathbb{C}(e_1 +e_2+e_3 +e_4+q_1+q_2)\oplus \mathbb{C}(p_1+p_2)$ generated by $\lambda(a^4)$; \item $I_1=\mathbb{C}(e_1 +e_4+q_1+p_2)\oplus \mathbb{C}(e_2 +e_3+p_1+q_2)$ generated by $\lambda(ba^4)$; \item $I_2=\mathbb{C}(e_1 +e_4+p_1+q_1)\oplus \mathbb{C}(e_2 +e_3+p_2+q_2)$ generated by $\lambda(b)$; \item $I_3=I(e_1 +e_2+e_3 +e_4+r^1_{2,2}+r^3_{1,1})$ generated by $\lambda(b)c$; \item $I_4=I(e_1 +e_2+e_3 +e_4+r^1_{1,1}+r^3_{2,2})$ generated by $\lambda(b)c^3$. 
\end{itemize} Since $K_4=K_0$, those give all the coideal subalgebra\xspaces of dimension $2$ of $K\!D(4)$. In dimension $4$, the coideal subalgebra\xspaces are: \begin{itemize} \item $J_0=\mathbb{C}(e_1+e_4+q_1) \oplus \mathbb{C}(e_2+e_3+q_2) \oplus \mathbb{C} p_2 \oplus \mathbb{C} p_1$ generated by $\lambda(a^4)$ and $\lambda(b)$; \item $J_{20}=\mathbb{C}(e_1+e_2+e_3+e_4) \oplus \mathbb{C}(q_1+q_2) \oplus \mathbb{C}(r^{1}_{1,1}+r^{3}_{2,2})\oplus \mathbb{C}(r^1_{2,2}+r^{3}_{1,1})$ generated by $c^2$ and $\lambda(b)c$; \item $J_2=\mathbb{C}(e_1+e_4+q_2) \oplus \mathbb{C}(e_2+e_3+q_1) \oplus \mathbb{C}(e^1_{1,1}+e^3_{2,2}) \oplus \mathbb{C}(e^1_{2,2}+e^3_{1,1})$ generated by $c$. \end{itemize} \subsubsection{The coideal subalgebra\xspaces of $K_3$} From~\ref{K3pair}, $$K_3=I(e_1+e_3)=\mathbb{C}(e_1+e_3) \oplus \mathbb{C}(e_2+e_4)\oplus \mathbb{C} r^2_{1,1} \oplus \mathbb{C} r^2_{2,2} \oplus M_2(\mathbb{C})\,,$$ where the matrix units of the factor $M_2(\mathbb{C})$ are \begin{displaymath} e^1_{1,1}+e^{3}_{1,1},\quad e^1_{1,2}-e^{3}_{1,2},\quad e^1_{2,1}-e^{3}_{2,1},\quad \text{ and }\quad e^1_{2,2}+e^{3}_{2,2}\,. \end{displaymath} Since the coproduct of $r^2_{1,1}$ is not symmetric (checked on computer), $K_3$ is a non trivial Kac algebra of dimension $8$, and therefore isomorphic to $K\!P$. Its coideal subalgebra\xspaces are $I_0$, $I_3$, and $I_4$ contained in $J_{20}$, and $J_1$ and $J_3$ which contain $I_0$ (see~\ref{KP}): \begin{align*} J_1=I(e_1+e_3+r^2_{1,1})&=\mathbb{C}(e_1+e_3+r^2_{1,1}) \oplus \mathbb{C}(e_2+e_4+r^2_{2,2})\\ &\oplus \mathbb{C}(q(\pi/4, 0,1)+q(\pi/4, \pi,3))\oplus \mathbb{C}(q(-\pi/4, \pi,1)+q(-\pi/4, 0,3))\,, \\ J_3=I(e_1+e_3+r^2_{2,2})&=\mathbb{C}(e_1+e_3+r^2_{2,2}) \oplus \mathbb{C}(e_2+e_4+r^2_{1,1})\\ &\oplus \mathbb{C}(q(-\pi/4, 0,1)+q(-\pi/4, \pi,3))\oplus \mathbb{C}(q(\pi/4, \pi,1)+q(\pi/4, 0,3))\,. \end{align*} The principal graph of $N_1 \subset N_1\rtimes\delta(J_i)$, $i=1,3$, is $D_6^{(1)}$.
\subsubsection{The coideal subalgebra\xspaces of $K_2$}\label{KD4K2} From~\ref{e1e2}, the commutative Kac subalgebra $K_2$ is the algebra $L^\infty(D_4)$ of functions on the group $D_4$, its basis is given by: $$K_2=I(e_1+e_2)=\mathbb{C}(e_1+e_2) \oplus \mathbb{C}(e_3+e_4) \oplus \mathbb{C} r^1_{1,1} \oplus \mathbb{C} r^1_{2,2}\oplus \mathbb{C} e^2_{1,1} \oplus \mathbb{C} e^2_{2,2}\oplus \mathbb{C} r^3_{1,1} \oplus \mathbb{C} r^3_{2,2}\,$$ and its lattice of coideal subalgebra\xspaces is in correspondence with the dual of the lattice of the subgroups of $D_4$. There are three coideal subalgebra\xspaces of dimension $2$ contained in $J_{20}$ and four coideal subalgebra\xspaces of dimension $4$: \begin{align*} J_{21}&=\mathbb{C}(e_1+e_2+r^1_{1,1}) \oplus \mathbb{C}(e_3+e_4+r^3_{2,2})\oplus \mathbb{C}(e^2_{2,2}+r^3_{1,1})\oplus \mathbb{C}(r^1_{2,2}+e^2_{1,1});\\% J_{22}&=\mathbb{C}(e_1+e_2+r^1_{2,2}) \oplus \mathbb{C}(e_3+e_4+r^3_{1,1})\oplus \mathbb{C}(e^2_{1,1}+r^3_{2,2})\oplus \mathbb{C}(r^1_{1,1}+e^2_{2,2});\\% J_{23}&=\mathbb{C}(e_1+e_2+r^3_{2,2}) \oplus \mathbb{C}(e_3+e_4+r^1_{1,1})\oplus \mathbb{C}(e^2_{1,1}+r^3_{1,1})\oplus \mathbb{C}(r^1_{1,1}+e^2_{2,2});\\% J_{24}&=\mathbb{C}(e_1+e_2+r^3_{1,1}) \oplus \mathbb{C}(e_3+e_4+r^1_{2,2})\oplus \mathbb{C}(e^2_{2,2}+r^3_{2,2})\oplus \mathbb{C}(r^1_{1,1}+e^2_{1,1}). \end{align*} For $j=21,\dots,24$, the principal graph of $N_1 \subset N_1\rtimes\delta(J_{j})$ is $D_{10}^{(1)}$. \subsection{The Kac algebra $K\!D(6)$ of dimension $24$}\label{KD6} The study of the lattice of the coideal subalgebra\xspaces of $K\!D(6)$ illustrates how a large part of it (but not all!) can be constructed inductively. \begin{thm} The lattice of coideal subalgebra\xspaces of $K\!D(6)$ is as given in Figure~\ref{figure.KD6}. 
\end{thm} \begin{figure} \caption{The lattice of coideal subalgebra\xspaces of $K\!D(6)$} \label{figure.KD6} \end{figure} \begin{proof} From~\ref{sacign} and~\ref{isoproj}, any coideal subalgebra\xspace not of dimension $8$ is contained in one of the three Kac subalgebras of dimension $12$: $K_2\equiv L^\infty(D_n)$, $K_3\equiv K\!Q(3)$, and $K_4\equiv K\!D(3)$, which we study in turn in the sequel of this section. In~\ref{KD6-8}, we construct all coideal subalgebra\xspaces of dimension $8$. \end{proof} \subsubsection{The coideal subalgebra\xspaces of $K_0$} From~\ref{section.K0}, $K_0= I(e_1+p^{4}_{1,1})$ is the algebra of the intrinsic group $D_4$ and is of dimension $8$. It contains all the coideal subalgebra\xspaces of dimension $2$ of $K\!D(6)$. It also contains the algebra $J_0$ of the subgroup $H$. Its coideal subalgebra\xspaces are: \begin{itemize} \item $I_0=\mathbb{C}(e_1 +e_2+e_3 +e_4+q_1+q_2)$; \item $I_1=\mathbb{C}(e_1 +e_4+q_1+p_2)$; \item $I_2=\mathbb{C}(e_1 +e_4+q_1+p_1)$; \item $I_3=I(e_1+e_2+ e^4_{1,1}+e^4_{2,2}+r^1_{2,2}+r^3_{1,1}+r^5_{2,2})$; \item $I_4=I(e_1+e_2+ e^4_{1,1}+e^4_{2,2}+r^1_{1,1}+r^3_{2,2}+r^5_{1,1})$; \item $J_0=\mathbb{C}(e_1+e_4+q_1)$; \item $J_{20}=I(e_1+e_2+ e^4_{1,1}+e^4_{2,2})$; \item $J_3=I(e_1+e_3+ p^4_{1,1}+p^2_{2,2})$. \end{itemize} \subsubsection{The coideal subalgebra\xspaces of $K_2$} From~\ref{section.K2}, the Kac subalgebra $K_2=I(e_1+e_2)$ is $L^\infty(D_6)$. Its coideal subalgebra\xspaces of dimension $6$ correspond to the seven subgroups of order $2$ of $D_6$: \begin{itemize} \item $K_1=I(e_1+e_2+e_3+e_4)$, which is isomorphic to $L^\infty(D_3)$; \item $K_{21}=I(e_1+e_2+r^1_{1,1})$; \item $K_{22}=I(e_1+e_2+r^3_{1,1})$; \item $K_{23}=I(e_1+e_2+r^5_{1,1})$; \item $K_{24}=I(e_1+e_2+r^1_{2,2})$; \item $K_{25}=I(e_1+e_2+r^3_{2,2})$; \item $K_{26}=I(e_1+e_2+r^5_{2,2})$.
\end{itemize} The group $D_6=\langle \alpha,\beta\rangle$ has a single subgroup of order $3$, which gives the coideal subalgebra\xspace $J_1$ of dimension $4$, and three subgroups of order $4$, which give three coideal subalgebra\xspaces of dimension $3$: \begin{itemize} \item $L_1=I(e_1 +e_2+e_3 +e_4+r^1_{1,1}+r^5_{2,2})$; \item $L_3=I(e_1 +e_2+e_3 +e_4+r^3_{1,1}+r^3_{2,2})$; \item $L_5=I(e_1 +e_2+e_3 +e_4+r^5_{1,1}+r^1_{2,2})$. \end{itemize} Finally, the coideal subalgebra\xspaces of dimension $2$ of $K_2$ are $I_0$, $I_3$, and $I_4$ (see~\ref{tour} (5)). \subsubsection{The coideal subalgebra\xspaces of $K_3$} \label{K3inKD6} The Kac algebra $K_3=I(e_1+e_3)$ is isomorphic to $K\!Q(3)$, and we can derive its coideal subalgebra\xspaces from~\ref{KQ3section}. There is a single coideal subalgebra\xspace of dimension $6$: $K_1$. Its coideal subalgebra\xspaces of dimensions $2$ and $3$ are those of $K_1$, which we treated above. The three coideal subalgebra\xspaces of dimension $4$ are: \begin{itemize} \item $J_3$, which is the algebra of its intrinsic group, \item $J_5=I(e_1+e_3+q(0,\frac{5\pi}{3},2)+q(0,\frac{4\pi}{3},4))$, \item $J_1=I(e_1+e_3 + q(0,\frac{\pi}{3},2)+q(0,\frac{2\pi}{3},4))$. \end{itemize} We computed those projections by using successively the embedding of $K\!Q(3)$ into $K\!Q(6)$ and the explicit isomorphism from $K\!Q(6)$ to $K\!D(6)$ (see~\ref{mupad.plonge}). \subsubsection{The coideal subalgebra\xspaces of $K_4$} \label{K4inKD6} The Kac subalgebra $K_4=I(e_1+e_4)$ is isomorphic to $K\!D(3)$, and we can derive its coideal subalgebra\xspaces from~\ref{KD3section}. Besides $K_1$, it contains two other coideal subalgebra\xspaces of dimension $6$: \begin{itemize} \item $K_{41}=I(e_1+e_4+p(2,2,3))$, which contains $I_1$ and $L_3$, \item $K_{42}=I(e_1+e_4+p(1,1,3))$, which contains $I_2$ and $L_3$.
\end{itemize} The coideal subalgebra\xspaces of dimension $4$ are: \begin{itemize} \item $J_0$, which is the algebra of its intrinsic group, \item $J_2=I(e_1+e_4+q(0,\frac{2\pi}{3},2)+q(0,\frac{4\pi}{3},4))$, \item $J_4=I(e_1+e_4+q(0,\frac{4\pi}{3},2)+q(0,\frac{2\pi}{3},4))$. \end{itemize} We computed those projections by using the embedding of $K\!D(3)$ in $K\!D(6)$ (see~\ref{mupad.plonge}). \subsubsection{The coideal subalgebra\xspaces of dimension $4$ of $K\!D(6)$} As predicted by Proposition~\ref{sacig4pair}, there are seven coideal subalgebra\xspaces of dimension $4$ in $K\!D(6)$: $J_{20}$ and $J_k$, with $k=0,\dots,5$. \subsubsection{The coideal subalgebra\xspaces of dimension $8$ of $K\!D(6)$} \label{KD6-8} \begin{prop} The coideal subalgebra\xspaces of dimension $8$ of $K\!D(6)$ are the following: \begin{itemize} \item $K_0= I(e_1+p^{4}_{1,1})$, the algebra of the intrinsic group; \item $K_{01}=I(e_1 + q(0,\frac{2\pi}{3},4))=I(p_{\langle a^3, ba\rangle})$, which contains $J_{20}$, $J_1$, and $J_4$; \item $K_{02}=I(e_1 + q(0,\frac{4\pi}{3},4))=I(p_{\langle a^3, ab\rangle})$, which contains $J_{20}$, $J_2$, and $J_5$. \end{itemize} The last two are isomorphic by $\Theta$; they share the same block matrix structure as $K\!P$, and their lattices of coideal subalgebra\xspaces are isomorphic to that of $K\!P$. However, they are not Kac subalgebras. \end{prop} \begin{proof} We first prove that any coideal subalgebra\xspace of dimension $8$ in $K\!D(6)$ contains $J_{20}$ and that its Jones projection is of the form $e_1+s$, with $s$ a minimal projection of the fourth factor $M_2(\mathbb{C})$ of $K\!D(6)$: By a usual trace argument, the Jones projection of a coideal subalgebra\xspace $K$ of dimension $8$ is of the form $e_1+s$, with $s$ a minimal projection of some $j$-th factor $M_2(\mathbb{C})$ (in principle, it could also be of the form $e_1+e_i+e_k$, but we ruled out those latter projections by a systematic check on computer).
The index of $N_1 \subset N_1\rtimes \delta(K)$ is $3$. Given the shape of $p_K$, its principal graph is $A_5$. Therefore, the center of $K$ admits a projection of the form $e_i+(e^j_{1,1}+e^j_{2,2}-s)$ (with $i=2,3$, or $4$). Then, the coideal subalgebra\xspace generated by $e_1+e_i+ e^j_{1,1}+e^j_{2,2}$ is of dimension at most $8$ since it is contained in $K$. A computer check on the $15$ possible values shows that only $e_1+e_2+e^4_{1,1}+e^4_{2,2}$ satisfies this condition. Therefore, every coideal subalgebra\xspace of dimension $8$ contains $J_{20}=I(e_1+e_2+e^4_{1,1}+e^4_{2,2})$ and its Jones projection is of the form $e_1+s$, with $s$ a minimal projection of the fourth factor $M_2(\mathbb{C})$ of $K\!D(6)$. Using Proposition~\ref{prop.resumep} (4), we checked on computer that the only such Jones projections are $e_1+p^{4}_{1,1}$, $e_1 + q(0,\frac{2\pi}{3},4)$ and $e_1 + q(0,\frac{4\pi}{3},4)$; this reduced to solving a system of quadratic algebraic equations. We also checked that both new coideal subalgebra\xspaces have the same block matrix structure as $K\!P$ but are not Kac subalgebras. The mentioned inclusions are straightforward to check using Proposition~\ref{tour} (5). \end{proof} \section{The Kac algebras $K\!Q(n)$} \label{section.KQ} The structure of $K\!Q(n)$ follows closely that of $K\!D(n)$, and we sketch some general results. We first prove that $K\!Q(2m)$ is isomorphic to $K\!D(2m)$. Then, when $n$ is odd, we establish the self-duality of $K\!Q(n)$, describe its automorphism group, list its coideal subalgebra\xspaces of dimension $4$, $n$ and $2n$, and get a partial description of $\operatorname{l}(K\!Q(n))$ (Theorem~\ref{theorem.lattice.nprime}); this description is complete for $n$ odd prime. Finally, we complete the systematic study of all Kac algebras of dimension at most $15$ with the lattice $\operatorname{l}(K\!Q(3))$, which serves as an intermediate step toward the lattice of $K\!D(6)$ in~\ref{KD6}.
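The group $Q_n=\langle a, b \mid a^{2n}=1,\ b^2=a^n,\ ba = a^{-1}b \rangle$ underlying $K\!Q(n)$ can be modelled directly. The following sketch is an illustrative Python model (our actual computer checks were done in MuPAD; the normal-form encoding of elements as pairs is an assumption of this illustration only): it realizes $Q_n$ on the normal forms $a^j b^t$ and verifies its basic properties for $n=3$.

```python
# Illustrative Python model (not the MuPAD code used for our computer checks)
# of the group Q_n = <a, b | a^(2n) = 1, b^2 = a^n, b a = a^(-1) b>.
# Every element has the normal form a^j b^t with 0 <= j < 2n and t in {0, 1}.

def quaternion_mul(n):
    """Return the multiplication map of Q_n on pairs (j, t) encoding a^j b^t."""
    N = 2 * n

    def mul(x, y):
        (j1, t1), (j2, t2) = x, y
        if t1 == 0:                    # a^j1 * (a^j2 b^t2)
            return ((j1 + j2) % N, t2)
        if t2 == 0:                    # (a^j1 b) * a^j2 = a^(j1-j2) b
            return ((j1 - j2) % N, 1)
        return ((j1 - j2 + n) % N, 0)  # (a^j1 b)(a^j2 b) = a^(j1-j2) b^2 = a^(j1-j2+n)

    return mul

def closure(mul, gens):
    """Closure of the generators under multiplication (breadth-first on word length)."""
    elems, frontier = set(gens), set(gens)
    while frontier:
        frontier = {mul(x, g) for x in frontier for g in gens} - elems
        elems |= frontier
    return elems

n = 3
mul = quaternion_mul(n)
a, b, e = (1, 0), (0, 1), (0, 0)
G = closure(mul, {a, b})

assert len(G) == 4 * n                      # |Q_n| = 4n
assert mul(b, b) == (n, 0)                  # b^2 = a^n
assert mul(b, a) == mul((2 * n - 1, 0), b)  # b a = a^(-1) b
# a^n is the unique element of order 2 in Q_n:
assert [g for g in G if g != e and mul(g, g) == e] == [(n, 0)]
```

The uniqueness of the involution $a^n$ is the group-theoretic fact behind the unique coideal subalgebra\xspace of dimension $2$ discussed below.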
\subsection{Definition and general properties} \subsubsection{Notations} \label{lambdaKQ} For $n\geq 1$, we denote by $K\!Q(n)$ the Kac algebra of dimension $4n$ constructed in~\cite[6.6]{Vainerman.1998} from the quaternion group $Q_{n}=\langle a, b | a^{2n}=1, b^2=a^n, ba = a^{-1}b \rangle$. As for $K\!D(n)$, its block matrix structure is: $$K\!Q(n)=\mathbb{C} e'_1 \oplus \mathbb{C} e'_2 \oplus \mathbb{C} e'_3\oplus \mathbb{C} e'_4\oplus \bigoplus_{k=1}^{n-1}K\!Q(n)^k\,,$$ where $K\!Q(n)^k$ is a factor $M_2(\mathbb{C})$ whose matrix units we denote by $e'^{k}_{i,j}$, $i=1,2$, $j=1,2$. In general, we use the same notations for matrix units as in~\ref{q}, and put a $'$ when needed to distinguish between the matrix units of $K\!D(n)$ and $K\!Q(n)$. From~\ref{KacHopf}, the trace on $K\!Q(n)$ is given by $\operatorname{tr}(e'_i)=1/4n$ and $\operatorname{tr}(e'^{k}_{i,j})=1/2n$. Setting $\epsilon_n=\mathrm{e}^{\mathrm{i}\pi/n}$, the left regular representation of $Q_n$ is given by: \begin{align*} \lambda'(a^k)&=e'_1+e'_2+(-1)^k(e'_3+e'_4)+\sum_{j=1}^{n-1}(\epsilon_n^{jk}\ e'^j_{1,1}+\epsilon_n^{-jk}\ e'^j_{2,2})\,;\\ \lambda'(a^k b)&=e'_1-e'_2+(-1)^k(e'_3-e'_4)+\sum_{j=1}^{n-1}(\epsilon_n^{j(k-n)}\ e'^j_{1,2}+\epsilon_n^{-jk}\ e'^j_{2,1}) \quad \text{for $n$ even}\,;\\ \lambda'(a^k b)&=e'_1-e'_2+(-1)^k \mathrm{i} (e'_3-e'_4)+\sum_{j=1}^{n-1}(\epsilon_n^{j(k-n)}\ e'^j_{1,2}+\epsilon_n^{-jk}\ e'^j_{2,1}) \quad \text{for $n$ odd}\,. \end{align*} The coproduct of $\mathbb{C}[Q_n]$ is twisted by a 2-pseudo-cocycle $\Omega'$, given in~\ref{sacig22nKQimpair}. We refer to~\cite[6.6]{Vainerman.1998} for details. As in~\ref{KD2}, one can show that $K\!Q(1)$ and $K\!Q(2)$ are respectively the algebras of the groups $\mathbb{Z}_4$ and $D_4$. \subsubsection{Intrinsic group} \label{Qintrinseque} In~\cite[p.718]{Vainerman.1998}, L. Vainerman mentions that the intrinsic group of the dual of $K\!Q(n)$ is $\mathbb{Z}_4$ if $n$ is odd, and $\mathbb{Z}_2 \times \mathbb{Z}_2$ otherwise.
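The block formulas for $\lambda'$ in~\ref{lambdaKQ} can be sanity-checked numerically. The sketch below is a Python illustration, not part of the construction: the matrix layout (the four one-dimensional blocks $e'_1,\dots,e'_4$ followed by the $n-1$ factors $M_2(\mathbb{C})$) is an assumed convention of this check. For $n=3$ it verifies that $\lambda'(a)$ and $\lambda'(b)$ are unitary and satisfy the defining relations of $Q_n$.

```python
import numpy as np

# Numerical sanity check of the block formulas for the left regular
# representation lambda' of Q_n (case n odd).  The layout of the blocks
# is an illustrative convention of this sketch.

def lam(k, t, n):
    """Matrix of lambda'(a^k b^t) for n odd."""
    eps = np.exp(1j * np.pi / n)
    M = np.zeros((2 * n + 2, 2 * n + 2), dtype=complex)
    s = (-1) ** k
    if t == 0:    # lambda'(a^k)
        M[0, 0], M[1, 1], M[2, 2], M[3, 3] = 1, 1, s, s
        for j in range(1, n):
            r = 4 + 2 * (j - 1)
            M[r, r], M[r + 1, r + 1] = eps ** (j * k), eps ** (-j * k)
    else:         # lambda'(a^k b), n odd
        M[0, 0], M[1, 1], M[2, 2], M[3, 3] = 1, -1, s * 1j, -s * 1j
        for j in range(1, n):
            r = 4 + 2 * (j - 1)
            M[r, r + 1], M[r + 1, r] = eps ** (j * (k - n)), eps ** (-j * k)
    return M

n = 3
A, B = lam(1, 0, n), lam(0, 1, n)
I = np.eye(2 * n + 2)

assert np.allclose(np.linalg.matrix_power(A, 2 * n), I)  # a^(2n) = 1
assert np.allclose(B @ B, lam(n, 0, n))                  # b^2 = a^n
assert np.allclose(B @ A, lam(2 * n - 1, 0, n) @ B)      # b a = a^(-1) b
assert np.allclose(A @ A.conj().T, I)                    # lambda'(a) unitary
assert np.allclose(B @ B.conj().T, I)                    # lambda'(b) unitary
```

In particular $\lambda'(b)^4=1$ while $\lambda'(b)^2\neq 1$, reflecting that the intrinsic group contains an element of order $4$ when $n$ is odd.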
As in \ref{intrinseque}, one can show that the intrinsic group of $K\!Q(n)$ is $\mathbb{Z}_4$ if $n$ is odd and $D_4$ otherwise. \subsubsection{Automorphism group} \label{section.KQ.automorphismGroup} \begin{theorem} \label{theorem.automorphisms.KQ} For $n\geq 3$, the automorphism group ${\operatorname{Aut}}(K\!Q(n))$ of $K\!Q(n)$ is given by: \begin{displaymath} A_{2n} = \{\ \Theta_k, \ \Theta_k\Theta'\ {\ |\ } k\wedge 2n = 1 \}\,, \end{displaymath} where $\Theta_k$ and $\Theta'$ are defined as in Theorem~\ref{theorem.automorphisms.KD}. In particular, it is of order $2\varphi(2n)$ (where $\varphi$ is the Euler function), and isomorphic to $\mathbb{Z}_{2n}^* \rtimes \mathbb{Z}_2$, where $\mathbb{Z}_{2n}^*$ is the multiplicative group of $\mathbb{Z}_{2n}$. \end{theorem} \begin{proof} The proof follows that of Theorem~\ref{theorem.automorphisms.KD}. The analogue of Lemma~\ref{lemma.automorphism.KD.ThetaPk} holds because $\Theta'(a)$ lies in the subalgebra generated by $a$ in $K\!Q(n)$ which is isomorphic to that in $K\!D(n)$. Furthermore, the group automorphisms which fix $a^n$ and $b$ are the same in $D_{2n}$ and $Q_{2n}$. Everything else is checked on computer. \end{proof} \subsection{The isomorphism $\phi$ from $K\!D(2m)$ to $K\!Q(2m)$} \label{iso.KDKQ} \label{isotheorem} \begin{theorem} \label{theorem.isomorphism.KD.KQ} The Kac algebras $K\!D(n)$ and $K\!Q(n)$ are isomorphic if and only if $n$ is even. More specifically, when $n=2m$, $\phi$ defined by: \begin{align*} \phi(\lambda(a)) &= a+1/4 (a-a^{-1}) (a^{n}-1) (1 - ba^m)\,,\\ \phi(\lambda(b)) &=1/2[b(a^n+1)+\mathrm{i}(a^n-1)]a^m\,, \end{align*} where for the sake of readability the $\lambda'$ have been omitted on the right hand sides, extends into a Kac algebra isomorphism from $K\!D(n)$ to $K\!Q(n)$. 
\end{theorem} Note first that $K\!D(n)$ and $K\!Q(n)$ are not isomorphic for $n$ odd, as their intrinsic groups are $\mathbb{Z}_2^2$ and $\mathbb{Z}_4$, respectively (see~\ref{intrinseque} and~\ref{Qintrinseque}). The proof of the isomorphism for $n$ even follows the same lines as that of Proposition~\ref{proposition.automorphism.KD.ThetaPrime}. We start with a few remarks that simplify the calculation, give an explicit formula for $\phi(\lambda(a))^k$, and conclude the proof with computer checks. \subsubsection{Technical lemmas} \begin{remarks} Recall that, in $Q_n$, $ba^kb^{-1}=a^{-k}$, $b^4=1$, and $b^2=a^n$. Therefore: \begin{enumerate}[(i)] \item $a^n$ is in the center of $K\!Q(n)$; \item $(a^n-1)(a^n+1)=0$; \item $(1+a^n)^2= 2(1+a^n)$ and $(1-a^n)^2= 2(1-a^n)$; \item $(a^k-a^{-k})ba^m=-ba^m(a^k-a^{-k})$; \item $(1+ba^m)(1-ba^m)=1-a^n$. \end{enumerate} \end{remarks} \begin{lemma} \label{lemma.isomorphism.KD.KQ} For all $k\in \mathbb{Z}$, \begin{displaymath} \phi(\lambda(a))^k =\lambda'(a^k)+\frac{1}{4}(\lambda'(a^k)-\lambda'(a^{-k}))(\lambda'(a^n)-1)(1-\lambda'(ba^m))\,. \end{displaymath} Furthermore $\phi(\lambda(a)^*)=\phi(\lambda(a))^*$. \end{lemma} \begin{proof} Write $f_k=\lambda'(a^k)+\frac{1}{4}(\lambda'(a^k)-\lambda'(a^{-k}))(\lambda'(a^n)-1)(1-\lambda'(ba^m))$. First note that $f_0=1$. One checks further that $f_k f_1 = f_{k+1}$ (omitting $\lambda'$): \begin{align*} f_kf_1-a^{k+1}& =\frac{a^{n}-1}{4}[(1+ba^m)(a^k- a^{-k})a+a^k(1+b a^{m})(a-a^{-1})]\\&+\frac{(a^{n}-1)^2}{16} (1+ b a^m)(a^k-a^{-k})(a-a^{-1}) (1-ba^m)\\ &=\frac{a^{n}-1}{4}[(2a^{k+1}-a^{-k+1}-a^{k-1})+ba^m(a^{k+1}-a^{-k-1})] \\&-\frac{(a^{n}-1)}{8}(1+ b a^m)(1-ba^m)(a^{k+1}+a^{-k-1}-a^{k-1}-a^{-k+1})\\ & = \frac{a^{n}-1}{4}(1+ba^m)(a^{k+1}-a^{-k-1})\,. \end{align*} In particular, we have $f_{-1}=f_1^*=f_1^{-1}$, that is $\phi(\lambda(a)^*)=\phi(\lambda(a))^*$. The lemma follows by induction.
\end{proof} \subsubsection{Proof of Theorem~\ref{theorem.isomorphism.KD.KQ}} \begin{proof}[Proof of Theorem~\ref{theorem.isomorphism.KD.KQ}] We check that $\phi$ satisfies the properties listed in Proposition~\ref{proposition.KacAlgebraIsomorphism}. Most expressions below involve the element $a^m$. Therefore the computer checks use a close analogue of Proposition~\ref{proposition.equationAtInfinity} for the group algebra $\mathbb{C}[Q_{2\infty}]$ of \begin{displaymath} Q_{2\infty} = \langle a, b, a_\infty\,|\, b^2=a_\infty,\, ab=ba^{-1},\,a_\infty b = ba_\infty^{-1} \rangle\,, \end{displaymath} where $a_\infty$ plays the role of $a^m$. Then, an algebraic expression of degree $d$ in $a$ vanishes in $K\!Q(2m)$ for all $m$ whenever it vanishes for $m=M$ for some $M>2d$. \begin{enumerate}[(i)] \item By Lemma~\ref{lemma.isomorphism.KD.KQ}, $\phi(\lambda(a))^{2n}=1$, and we check on computer that $\phi(\lambda(b))^2=1$ and $\phi(\lambda(b))\phi(\lambda(a))=\phi(\lambda(a))^{-1}\phi(\lambda(b))$. \item By Lemma~\ref{lemma.isomorphism.KD.KQ}, $\phi(\lambda(a)^*)=\phi(\lambda(a))^*$, and we check on computer that $\phi(\lambda(b))^*\phi(\lambda(b))=1$. \item Using a close analogue of Proposition~\ref{proposition.equationAtInfinity}, we check on computer that $(\phi\otimes \phi)(\Delta(\lambda(x)))=\Delta(\phi(\lambda(x)))$ for $x\in\{a,b\}$. \item Similarly, we check on computer the following equations: \begin{align*} \lambda'(a) &= \phi\left(\frac14 (\lambda(a)-\lambda(a)^{-1}) (\lambda(a)^{m}-\lambda(a)^{-m}) ( \lambda(a)^{m} - \mathrm{i} \lambda(b) ) + \lambda(a)\right)\,,\\ \lambda'(b) &= \phi\left(\frac12 \lambda(a)^m\left((\lambda(a)^n+1)\lambda(b)-(\lambda(a)^n-1)\right)\right)\,. \qedhere \end{align*} \end{enumerate} \end{proof} \subsubsection{The isomorphism $\phi$ on the central projections} The following proposition gives the values of the isomorphism $\phi$ on the central projections of $K\!D(2m)$. \begin{prop} \label{prop.KDKQ.central} Set $n=2m$.
Then,\\ for $m$ even: $$\phi(e_1)=e'_1, \qquad \phi(e_2)=e'_2, \qquad \phi(e_3)=e'_4, \qquad \phi(e_4)=e'_3\,,$$ and for $m$ odd: $$\phi(e_1)=e'_1, \qquad \phi(e_2)=e'_2, \qquad \phi(e_3)=e'_3, \qquad \phi(e_4)=e'_4\,.$$ \end{prop} \begin{proof} We recall the following formulas (see~\ref{form2} and~\ref{lambdaKQ}), where $\lambda$ (resp. $\lambda'$) is the left regular representation of the dihedral (resp. quaternion) group: \begin{align*} e_1+e_2 &= \frac{1}{2n}\sum_{k=0}^{2n-1}\lambda(a^k) &e'_1+e'_2 &= \frac{1}{2n}\sum_{k=0}^{2n-1}\lambda'(a^k)\\ e_3+e_4 &= \frac{1}{2n}\sum_{k=0}^{2n-1}(-1)^k\lambda(a^k) &e'_3+e'_4 &= \frac{1}{2n}\sum_{k=0}^{2n-1}(-1)^k\lambda'(a^k)\\ e_1-e_2&=\lambda(b)(e_1+e_2) &e'_1-e'_2&=(e'_1+e'_2)\lambda'(b)\\ e_3-e_4 &=-\lambda(b)(e_3+e_4) &e'_3-e'_4 &=(e'_3+e'_4) \lambda'(b) \end{align*} Using further the identities \begin{displaymath} \sum_{k=0}^{2n-1}(\lambda'(a^k)-\lambda'(a^{-k})) = 0 \qquad \text{ and } \qquad \sum_{k=0}^{2n-1}(-1)^k(\lambda'(a^k)-\lambda'(a^{-k})) = 0\,, \end{displaymath} it follows easily that $\phi(e_1+e_2)=e'_1+e'_2$ and $\phi(e_3+e_4)=e'_3+e'_4$. Then, from \begin{displaymath} \lambda'(a^n)=e'_1+e'_2+e'_3+e'_4+\sum_{j=1}^{n-1}(-1)^j(e'^j_{1,1}+e'^j_{2,2})\,, \end{displaymath} we get $$(e'_1+e'_2+e'_3+e'_4)\phi(\lambda(b))= (e'_1-e'_2+e'_3-e'_4) \lambda'(a^m)= (e'_1-e'_2)+(-1)^m(e'_3-e'_4)$$ and it follows that $\phi(e_1-e_2)=e'_1-e'_2$ \quad and \quad $\phi(e_3-e_4)=(-1)^{m+1}(e'_3-e'_4)$. \end{proof} \subsection{Self-duality for $n$ odd} \label{self-dualKQ} As for $K\!D(n)$, $K\!Q(n)$ is self-dual for $n$ odd, and the proof follows the same lines (see~\ref{self-dualKD}). \begin{thm} The Kac algebra $K\!Q(n)$ is self-dual if and only if $n$ is odd.
When this is the case, one can take as Kac algebra isomorphism the map $\psi$ defined by: \begin{displaymath} a \mapsto 2n\left(\widehat{e^{n-1}_{1,1}} + \frac12 \left(\mathrm{i}(\widehat {e^1_{2,2}} - \widehat {e^1_{1,1}}) -\widehat {e^{n-1}_{1,2}} - \widehat {e^{n-1}_{2,1}}\right)\right), \qquad b \mapsto 4n\widehat {e_4}\,, \end{displaymath} where the notations are as in~\ref{self-dualKD}. \end{thm} The following proposition gives an alternative description of $\psi$ on the matrix units. \begin{prop} Take $k$ odd in $\{1,\dots,n-1\}$, and set: \begin{align*} E^k_{1,1}&=\chi_{ba^{n+k}}+\chi_{ba^{n-k}}\\
E^k_{2,2}&=\chi_{ba^{2n-k}}+\chi_{ba^{k}}\\
E^k_{1,2}&=\frac{1}{2}[(-\chi_{a^k}+\chi_{a^{2n-k}}+\chi_{a^{n+k}}-\chi_{a^{n-k}})+(\chi_{ba^k}-\chi_{ba^{2n-k}}+\chi_{ba^{n+k}}-\chi_{ba^{n-k}})]\\
E^k_{2,1}&=\frac{1}{2}[-(-\chi_{a^k}+\chi_{a^{2n-k}}+\chi_{a^{n+k}}-\chi_{a^{n-k}})+(\chi_{ba^k}-\chi_{ba^{2n-k}}+\chi_{ba^{n+k}}-\chi_{ba^{n-k}})]\\
E^{n-k}_{1,1}&=\chi_{a^{n-k}}+\chi_{a^{2n-k}}\\
E^{n-k}_{2,2}&=\chi_{a^k}+\chi_{a^{n+k}}\\
E^{n-k}_{1,2}&=\frac{1}{2}[(-\chi_{a^k}-\chi_{a^{2n-k}}+\chi_{a^{n+k}}+\chi_{a^{n-k}})+(\chi_{ba^k}-\chi_{ba^{2n-k}}-\chi_{ba^{n+k}}+\chi_{ba^{n-k}})] \\
E^{n-k}_{2,1}&=\frac{1}{2}[(-\chi_{a^k}-\chi_{a^{2n-k}}+\chi_{a^{n+k}}+\chi_{a^{n-k}})-(\chi_{ba^k}-\chi_{ba^{2n-k}}-\chi_{ba^{n+k}}+\chi_{ba^{n-k}})]
\end{align*} Then, $$\psi(e_1)=\chi_1 ,\qquad \psi(e_2)=\chi_{a^n},\qquad \psi(e_3)=\chi_{ba^n},\qquad \psi(e_4)=\chi_b,$$ $$\psi(r^k_{i,j})=E^k_{i,j}, \qquad \psi(e^{n-k}_{i,j})=E^{n-k}_{i,j}.$$ \end{prop} \subsection{The lattice $\mathrm{l}(K\!Q(n))$ for $n$ odd} \label{section.KQ.odd.sacig} In the sequel of this section, we prove the following theorem, and illustrate it on $K\!Q(3)$.
\begin{theorem} \label{theorem.lattice.nprime} When $n$ is odd, $K\!Q(n)$ admits: \begin{itemize} \item A single coideal subalgebra\xspace of dimension $2n$: $K'_2=I(e_1+e_2)$, isomorphic to $L^\infty(D_n)$; \item $n$ coideal subalgebra\xspaces of dimension $n$ in $K'_2$; \item $n$ coideal subalgebra\xspaces of dimension $4$, $J_k$ for $k=0,\dots,n-1$, with Jones projections: \begin{displaymath} p_{J_k}=e_1+\sum_{j =1, \, j\text{ even}}^{n-1} q(0,\frac{jk\pi}{n},j)\,; \end{displaymath} \item A single coideal subalgebra\xspace of dimension $2$: $I(e_1 +e_2+q_1+q_2)$. \end{itemize} If $n$ is prime, there are no other coideal subalgebra\xspaces, and the graph of the lattice of coideal subalgebra\xspaces of $K\!Q(n)$ is similar to Figure~\ref{figure.KQ3}. \end{theorem} \subsubsection{The coideal subalgebra\xspaces of dimension $2$ and $2n$} \label{sacig22nKQimpair} From~\ref{Qintrinseque} and~\ref{intrin}, $\widehat{K\!Q(n)}$ has a single coideal subalgebra\xspace of dimension $2$. Therefore $K\!Q(n)$ has a single coideal subalgebra\xspace of dimension $2n$. By Remark~\ref{groupe}, its standard coproduct is \begin{displaymath} \Delta_s(e_1+e_2)=(e_1+e_2)\otimes(e_1+e_2)+(e_3+e_4)\otimes(e_3+e_4)+\sum_{j=1}^{n-1} e^j_{1,1}\otimes e^{j}_{2,2}+e^{j}_{2,2}\otimes e^j_{1,1}\,. \end{displaymath} Beware that, for consistency with $K\!D(n)$, our notations differ slightly from those of~\cite{Vainerman.1998}: setting $$r_1=\frac{1}{2}\sum_{j=1,\,j \text{ odd}}^{n-1}r^j_{1,1}, \qquad r_2=\frac{1}{2}\sum_{j=1,\,j \text{ odd}}^{n-1}r^j_{2,2}\,,$$ the unitary $\Omega'$ used to twist the coproduct of $K\!Q(n)$ can be written as: \begin{align*} \Omega'&=(e_1+e_4+r_1+q_1)\otimes (e_1+e_4+r_1+q_1)\\ &+(e_1+q_1)\otimes(e_2+e_3+r_2+q_2)+(e_2+e_3+r_2+q_2)\otimes(e_1+q_1)\\ &+\mathrm{i}(e_4+r_1)\otimes(e_2-e_3+q_2-r_2)-\mathrm{i}(e_2-e_3+q_2-r_2)\otimes(e_4+r_1)\\ &+(e_2-\mathrm{i}e_3+q_2-\mathrm{i}r_2)\otimes(e_2+\mathrm{i}e_3+q_2+\mathrm{i}r_2)\,.
\end{align*} Therefore: \begin{multline*} \Delta(e_1+e_2)=(e_1+e_2)\otimes(e_1+e_2)+(e_3+e_4)\otimes(e_3+e_4)\\+\sum_{j=1,\,j \text{ even}}^{n-1} e^j_{1,1}\otimes e^{j}_{2,2}+e^{j}_{2,2}\otimes e^j_{1,1}+\sum_{j=1,\, j \text{ odd}}^{n-1} p^j_{1,1}\otimes p^{j}_{1,1}+p^{j}_{2,2}\otimes p^j_{2,2}\,. \end{multline*} The unique coideal subalgebra\xspace of dimension $2n$ of $K\!Q(n)$ is therefore $K'_2=I(e_1+e_2)$. As in~\ref{e1e2}, it can be shown to be isomorphic to $L^\infty(D_n)$. In particular, it admits $n$ coideal subalgebra\xspaces of dimension $n$. \subsubsection{The coideal subalgebra\xspaces of dimension $4$ and $n$} \label{sacig4KQimpair} \begin{prop} For $n$ odd, the coideal subalgebra\xspaces of dimension $n$ of $K\!Q(n)$ are contained in $K'_2$. They correspond to the $n$ subgroups of order $2$ of $D_n$. By self-duality, $K\!Q(n)$ admits $n$ coideal subalgebra\xspaces of dimension $4$, with Jones projections: $$p_{J_k}=e_1+\sum_{j=1, \, j \text{ even}}^{n-1} q(0,\frac{jk\pi}{n},j) \quad (k=0,\dots,n-1)\,.$$ The principal graph of the inclusion $N_1 \subset N_1\rtimes\delta(J_k)$ is $D_n/\mathbb{Z}_2$ (see~\ref{grapheDimpair}), and that of $N_2 \subset N_2\rtimes J_k$ is $D_{2n+2}^{(1)}$ (see~\ref{grapheD1}). \end{prop} \begin{proof} If $J$ is a coideal subalgebra\xspace of dimension $n$, its Jones projection is of the same form as described in~\ref{sacign} for $K\!D(n)$. From the classification of subfactors of index $4$ in~\cite{Popa.1990}, the principal graph of $R \subset R\rtimes \delta^{-1}(J)$ is either $D_4^{(1)}$ or $D_{n'}^{(1)}$. Therefore, $R \subset R\rtimes \delta^{-1}(J)$ admits an intermediate factor of index $2$, which implies that $J$ is contained in the unique coideal subalgebra\xspace $K'_2$ of dimension $2n$. Using~\ref{sacig22nKQimpair}, there are exactly $n$ coideal subalgebra\xspaces of dimension $n$. Since $K\!Q(n)$ is self-dual, it admits exactly $n$ coideal subalgebra\xspaces of dimension $4$.
As in~\ref{sacig4impair}, one shows that their Jones projections are, for $k=0,\dots,n-1$: \begin{displaymath} p_{J_k}=e_1+\sum_{j=1, \, j \text{ even}}^{n-1} q(0,\frac{jk\pi}{n},j)\,.\qedhere \end{displaymath} \end{proof} \subsubsection{The Kac algebra $K\!Q(3)$ of dimension $12$} \label{KQ3section} We illustrate Theorem~\ref{theorem.lattice.nprime} on $K\!Q(3)$, making explicit the block matrix decompositions of the coideal subalgebra\xspaces (obtained by computer) and the principal graphs of the corresponding inclusions. \begin{figure} \caption{The lattice of coideal subalgebra\xspaces of $K\!Q(3)$} \label{figure.KQ3} \end{figure} \begin{prop} The lattice of coideal subalgebra\xspaces of $K\!Q(3)$ is as given by Figure~\ref{figure.KQ3}. The block matrix decomposition of the coideal subalgebra\xspaces is given by: \begin{align*} I_0&=\mathbb{C}(e_1 +e_2+q_1+q_2)\oplus \mathbb{C}(e_3 +e_4+p_1+p_2);\\
K_1&=\mathbb{C}(e_1+e_2+e_3+e_4)\oplus \mathbb{C}(p^1_{1,1}+e^2_{2,2}) \oplus \mathbb{C}(p^1_{2,2}+e^2_{1,1});\\
K_{21}&=\mathbb{C}(e_1+e_2+p^1_{1,1}) \oplus \mathbb{C}(e_3+e_4+e^2_{1,1}) \oplus \mathbb{C}(p^1_{2,2}+e^2_{2,2});\\
K_{22}&=\mathbb{C}(e_1+e_2+p^1_{2,2}) \oplus \mathbb{C}(e_3+e_4+e^2_{2,2}) \oplus \mathbb{C}(p^1_{1,1}+e^2_{1,1});\\
J_0&=\mathbb{C}(e_1+q_1) \oplus \mathbb{C}(e_2+q_2) \oplus \mathbb{C}(e_3+r^{1}_{2,2}) \oplus \mathbb{C}(e_4+r^{1}_{1,1});\\
J_1&=\mathbb{C}(e_1+q(0,\frac{2\pi}{3},2))\oplus \mathbb{C}(e_2+q(0,\frac{5\pi}{3},2))\oplus \mathbb{C}(e_3+q(\frac{\pi}{3},\frac{\pi}{2},1))\oplus \mathbb{C}(e_4+q(-\frac{\pi}{3},\frac{3\pi}{2},1));\\
J_2&=\mathbb{C}(e_1+q(0,\frac{4\pi}{3},2))\oplus \mathbb{C}(e_2+q(0,\frac{\pi}{3},2))\oplus \mathbb{C}(e_3+q(-\frac{\pi}{3},\frac{\pi}{2},1))\oplus \mathbb{C}(e_4+q(\frac{\pi}{3},\frac{3\pi}{2},1));\\
K_2&=I(e_1+e_2)=\mathbb{C}(e_1 +e_2)\oplus \mathbb{C}(e_3 +e_4)\oplus \mathbb{C} p^1_{1,1}\oplus \mathbb{C} p^1_{2,2}\oplus \mathbb{C} e^2_{1,1}\oplus \mathbb{C} e^2_{2,2}.
\end{align*} With the notations of~\ref{graphe}, the inclusion $N_2\subset N_2\rtimes J$ has principal graph \begin{itemize} \item $D^{(1)}_8$ for $J= J_1$ or $J_2$; \item $A_5$ for $J=K_1$, $K_{21}$, or $K_{22}$. \end{itemize} For $J= I_0$, $J_0$, or $K_2$, it is of depth $2$. \end{prop} \section{The Kac algebras $K\!B(n)$ and $K_3$ in $K\!D(2m)$} \label{section.KB} When $n=2m$ is even, the Kac subalgebras $K_2$ and $K_4$ of $K\!D(n)$ are respectively isomorphic to $L^{\infty}(D_m)$ and $K\!D(m)$; their structure is therefore known, or can at least be studied recursively. It remains to describe the family of Kac subalgebras $K_3$, as $m$ varies. This question was actually at the origin of our interest in the family of Kac algebras $K\!Q(n)$, and later in the family $B_{4m}$ defined by A. Masuoka in~\cite[def 3.3]{Masuoka.2000}, which we denote $K\!B(m)$ for notational consistency. In this section, we first prove that, for all $m$, $K_3$ in $K\!D(2m)$ is isomorphic to $K\!B(m)$, and therefore self-dual, and discuss briefly a construction of $K_3$ by composition of factors. Then, we compare the three families: $K_3$ in $K\!D(n)$ is also isomorphic to $K\!Q(m)$ when $m$ is odd (this gives an alternative proof of the self-duality of $K\!Q(m)$ when $m$ is odd), and the family $(K\!D(n))_n$ contains the other two families $(K\!Q(n))_n$ and $(K\!B(n))_n$, as well as $K\!P$, as subalgebras. Using the self-duality of $K\!B(m)$, we list the coideal subalgebra\xspaces of dimension $2$, $4$, $m$, and $2m$ of $K_3$ in $K\!D(2m)$ and describe the complete lattice of coideal subalgebra\xspaces for $K\!D(8)$. \subsection{The subalgebra $K_3$ of $K\!D(2m)$} \label{section.K3.KD2m} In $K\!D(4)$, $K_3$ is isomorphic to $K\!P$ (see~\ref{KD4}); in particular, it is self-dual and can be obtained by twisting a group algebra. In $K\!D(8)$, $K_3$ is the self-dual Kac algebra described in~\cite[Lemma XI.10]{Izumi_Kosaki.2002} (the intrinsic group of its dual is $\mathbb{Z}_2 \times \mathbb{Z}_2$).
We however found on computer that it cannot be obtained by twisting a group algebra: we took a generic cocycle $\omega$ on $H$ with six unknowns such that $\omega(h,h)=1$, and $\omega(h,h')= \alpha_{h,h'}$ if $h\ne h'$, and asked whether there existed a choice for the $\alpha_{h,h'}$ making the untwisted coproduct $\Delta_u = \Omega\Delta\Omega^*$ cocommutative; this gave an algebraic system of equations of degree $2$ which had no solution (note that we relaxed the condition $\omega(h,h')=\overline{\omega(h',h)}$ to keep the system algebraic). \subsubsection{A presentation of $K_3$ and isomorphism with $K\!B(m)$} \label{B4m} We now construct explicit generators for $K_3$ satisfying the presentation of $K\!B(m)$ given in~\cite[def 3.3]{Masuoka.2000}. \begin{lemma} With $n=2m$ and $\epsilon=\mathrm{e}^{\mathrm{i}\pi/(2m)}$, set: \begin{align*} v&=(e_1+e_3)-(e_2+e_4) +\sum_{j=1}^{m-1} \epsilon^{-2j}e_{1,2}^{2j} +\epsilon^{2j}e_{2,1}^{2j} +\sum_{j=0}^{m-1}\epsilon^{-(2j+1)}r_{1,2}^{2j+1}+\epsilon^{2j+1}r_{2,1}^{2j+1} \\ w&=(e_1+e_3)-(e_2+e_4) +\sum_{j=1}^{m-1} \epsilon^{2j}e_{1,2}^{2j} +\epsilon^{-2j}e_{2,1}^{2j} +\sum_{j=0}^{m-1}\epsilon^{2j+1}r_{1,2}^{2j+1}+\epsilon^{-(2j+1)}r_{2,1}^{2j+1}\\ B_0&=(1+\lambda(a^n))/2\\ B_1&=(1-\lambda(a^n))/2 \end{align*} The unitary elements $v$ and $w$ are contained in $K_3$ and satisfy: \begin{align*} v^2&=w^2=1\\ (vw)^m&=\lambda(a^n)\\ \Delta(v)=v \otimes B_0v + w\otimes B_1 v \quad &\text{and}\quad \Delta(w)=w \otimes B_0w + v\otimes B_1 w\\ \varepsilon(v)&=\varepsilon(w)=1\\ S(v)=B_0v+B_1w \quad &\text{and}\quad S(w)=B_0w+B_1v \end{align*} \end{lemma} \begin{proof} The non-trivial part is the calculation of the coproducts, which is carried out in~\ref{subsection.copv}. \end{proof} \begin{theorem} The Kac subalgebra $K_3$ in $K\!D(2m)$ is isomorphic to the Kac algebra $K\!B(m)$ defined by A. Masuoka in~\cite[def 3.3]{Masuoka.2000}. In particular, it is self-dual.
\end{theorem} \begin{proof} Using Lemma~\ref{B4m}, $K_3$ contains the subalgebra $I_0=\mathbb{C}\mathbb{Z}_2$ generated by $\lambda(a^n)$, as well as the unitary elements $v$ and $w$ which satisfy the desired relations. By dimension count, $K_3$ is therefore isomorphic to $K\!B(m)$. The latter is self-dual (\cite[p. 776]{Calinescu_al.2004}). \end{proof} \subsubsection{Realization of $K_3$ by composition of subfactors} As in~\ref{KPIK} and~\ref{KD3IK}, we describe the inclusion $N_2\subset N_2\rtimes K_3$ using group actions. Define the group $G=D_m\rtimes \mathbb{Z}_2$, where $\mathbb{Z}_2$ acts on $D_m= \langle \alpha, \beta {\ |\ } \alpha^{m}=1, \beta^2=1, \beta\alpha = \alpha^{-1}\beta \rangle$ by the automorphism $\nu$ of $D_m$ defined by $\nu(\alpha)=\alpha^{-1}$ and $\nu(\beta)=\alpha\beta$. Set $M=N_2\rtimes K_1$. Consider the dual action $d$ of $D_m$ on $M$ ($K_1$ is isomorphic to $L^\infty(D_m)$) and the action $z$ of $\mathbb{Z}_2$ on $M$ by $\Ad(v)$. As in~\cite[def II.1]{Izumi_Kosaki.2002}, it can be easily proved that $(d,z)$ is an outer action of the matched pair $(D_m,\mathbb{Z}_2)$. From $(d,z)$ arises a couple of cocycles $(\eta, \zeta)$ whose class in a certain cohomology group $H^2((D_m,\mathbb{Z}_2),T)$ characterizes, up to isomorphism, the depth $2$ irreducible inclusion of factors $M^{(D_m,d)} \subset M \rtimes_{z} \mathbb{Z}_2$ (\cite[Theorem II.5 and Remark 2 p.12]{Izumi_Kosaki.2002}) and the Kac algebra associated to this inclusion (\cite[Theorem VI.1]{Izumi_Kosaki.2002}). The group $H^2((D_m,\mathbb{Z}_2),T)$ is reduced to $\mathbb{Z}_2$ (\cite[VII.§5 and Proposition VII.5]{Izumi_Kosaki.2002}) and the pair $(d,z)$ is associated to the non-trivial cocycle. Indeed, if $m$ is even, this follows from~\cite[Lemma~VII.6]{Izumi_Kosaki.2002}, since the intrinsic groups of $K_3$ and $\widehat{K_3}$ are of order $4$ (that of $K_3$ is $J_{20}\equiv \mathbb{Z}_2\times\mathbb{Z}_2$ and $K_3$ has four characters).
If $m$ is odd, $K_3$ is isomorphic to $K\!Q(m)$, which is self-dual, and its intrinsic group is $\mathbb{Z}_4$, so one can use~\cite[Lemma~VII.2]{Izumi_Kosaki.2002}. For any value of $m$, the inclusion $N_2 \subset N_2\rtimes K_3$ is therefore isomorphic to the inclusion $M^{(D_m,d)} \subset M \rtimes_{z} \mathbb{Z}_2$ associated to the non-trivial cocycle. \subsection{$K\!D(n)$, $K\!Q(n)$, $K\!A(n)$, and $K\!B(n)$: isomorphisms, coideal subalgebra\xspaces of dimension $2n$} \label{isoproj} We collect here all the results on the isomorphisms between the four families and their Kac subalgebras of dimension $2n$. \begin{theorem} \label{theorem.isomorphisms} Let $n \geq 2$. \begin{enumerate} \item The Kac algebra $K\!A(n)$ is isomorphic to the dual of $K\!D(n)$. \item Assume further $n=2m$ even. Then, in $K\!D(n)$, \begin{itemize} \item $K_2=I(e_1+e_2)$ is isomorphic to $L^\infty(D_n)$, \item $K_3=I(e_1+e_3)$ is isomorphic to $K\!B(m)$, \item $K_4=I(e_1+e_4)$ is isomorphic to $K\!D(m)$. \end{itemize} \item The Kac algebra $K\!Q(n)$ is isomorphic to $K\!D(n)$ for $n$ even and to $K\!B(n)$ for $n$ odd. \end{enumerate} \end{theorem} \begin{proof} (1) For $n=2$, see~\cite[Remark 3.4 (1)]{Masuoka.2000}. For $n \geq 3$, $K\!A(n)$ is the unique non-trivial Kac algebra obtained by twisting the product of $L^{\infty}(D_{n})$ by a cocycle (see~\cite[Theorem 4.1 (2)]{Masuoka.2000}). Its dual is therefore the unique non-trivial Kac algebra obtained by twisting the coproduct of $\mathbb{C}[D_{n}]$\footnote{It is a classical fact that twisting the product and twisting the coproduct of an algebra by a cocycle are dual constructions.}. Hence, $K\!A(n)$ must be isomorphic to the dual of $K\!D(n)$. (2) See Proposition~\ref{e1e2}, Theorem~\ref{B4m}, Theorem~\ref{plonge} for the structure of $K_2$, $K_3$, and $K_4$ respectively. (3) For $n$ even, $K\!Q(n)$ is isomorphic to $K\!D(n)$ (Theorem~\ref{theorem.isomorphism.KD.KQ}).
If $n$ is odd, $K\!Q(n)$, which is embedded in $K\!Q(2n)$ as $K'_3$, is isomorphic to $K_3$ of $K\!D(2n)$ by~Proposition~\ref{prop.KDKQ.central}, and therefore to $K\!B(n)$ by Theorem~\ref{B4m}. \end{proof} \subsection{The lattice $\mathrm{l}(K\!B(m))$}\label{latticeKB} \subsubsection{General case} \label{lattice_K3_even} In~\ref{section.KQ.odd.sacig}, we partially described the lattice $\mathrm{l}(K\!Q(m))$ for $m$ odd; those results therefore apply to the isomorphic algebras $K\!B(m)$ and $K_3$ in $K\!D(2m)$. Let us now explore $\mathrm{l}(K\!B(m))$ for $m$ even: $m=2m'$. To this end, we consider $K\!B(m)$ as $K_3$ in $K\!D(2m)$. To avoid handling degeneracies, the special cases $K\!B(2)=K\!P$ and $K\!B(4)$ are treated respectively in Subsections~\ref{KP} and~\ref{KD8}, and we now assume $m\geq 6$. \begin{proposition} When $m$ is even, the coideal subalgebra\xspaces of dimension $2$, $4$, $m$, $2m$ of the Kac algebra $K\!B(m)\approx K_3\subset K\!D(2m)$ are, keeping the notations of~\ref{sacigK0} and~\ref{sacig4pair}: \begin{itemize} \item Three coideal subalgebra\xspaces of dimension $2m$: \begin{itemize} \item $K_{32}=K_1 = I(e_1+e_2+e_3+e_4)$, isomorphic to $L^\infty(D_m)$; \item $K_{33}=I(e_1+e_3+r^m_{1,1})=I(p_{\langle a^4, ba\rangle})$; \item $K_{34}=I(e_1+e_3+r^m_{2,2})=I(p_{\langle a^4, ab\rangle})$; \end{itemize} The coideal subalgebra\xspaces $K_{33}$ and $K_{34}$ are isomorphic through the involutive automorphism $\Theta_{-1}$; \item The coideal subalgebra\xspaces of $K_{32}$, which correspond to subgroups of $D_m$; in dimension $2$, $4$, $m$, they are: \begin{itemize} \item Dimension $2$: $I_0$, $I_3$, and $I_4$ (subgroups of order $m$); \item Dimension $4$: the coideal subalgebra\xspace $J_{20}$ containing $I_0$, $I_3$, and $I_4$; furthermore, if $m'$ is even, there are four coideal subalgebra\xspaces (dihedral subgroups of order $m'$), which contain exactly one of $I_3$ or $I_4$; \item Dimension $m$: $K_{31}=I(e_1+e_2+e_3+e_4+e^m_{1,1}+e^m_{2,2})$ (subgroup
generated by $\alpha^{m'}$), and $K_{1k}=I(e_1+e_2+e_3+e_4+r^{2k-1}_{1,1}+r^{n-2k+1}_{2,2})$ for $k=1,\dots, m$ (subgroups generated by one reflection); each $K_{1k}$ contains exactly one of $I_3$ or $I_4$; \end{itemize} \item The coideal subalgebra\xspaces of $K_{33}$ and $K_{34}$: \begin{itemize} \item Dimension $2$: $I_0$, and if $m'$ is even, $I_3$ and $I_4$; \item Dimension $4$: $J_{2k+1}$ for $k=0,\dots,m-1$ and, if $m'$ is even, $J_{20}$; for $2k+1 \equiv 1$ (resp. $2k+1 \equiv 3$) mod $4$, $J_{2k+1}$ is contained in $K_{33}$ (resp. $K_{34}$); \item Dimension $m$: $K_{31}$, and if $m'$ is even, two coideal subalgebra\xspaces contained in $K_{33}$ and two in $K_{34}$. \end{itemize} \end{itemize} In particular, $K_3$ admits exactly $m$ coideal subalgebra\xspaces of dimension $4$ not contained in $K_1$. Also, $K_{31}$ is the intersection of the three coideal subalgebra\xspaces of dimension $2m$ of $K_3$. \end{proposition} \begin{proof} Since $K_{32}=K_1=L^{\infty}(D_m)$, the description of its lattice of coideal subalgebra\xspaces is straightforward. From~\ref{sacigK0} and~\ref{sacig4pair}, we know that $K_3$ admits three coideal subalgebra\xspaces of dimension $2$: $I_0$, $I_3$, and $I_4$. By self-duality of $K_3$, there are exactly three coideal subalgebra\xspaces of dimension $2m$, and by a trace argument the given Jones projections for $K_{32}$, $K_{33}$, and $K_{34}$ are the only possible ones. Their expressions in the group basis are straightforward. Then, any automorphism of $K\!D(2m)$ stabilizes $K_3$ and therefore induces an automorphism of $K_3$. From the expressions of the Jones projections in the group basis, the involutive automorphism $\Theta_{-1}$ exchanges $K_{33}$ and $K_{34}$, which are therefore isomorphic. The specified inclusions are derived by comparison of the Jones projections.
Looking at the inclusion diagrams forces $\delta$ to map $I_0$ to $K_1$, $I_3$ and $I_4$ to $K_{33}$ and $K_{34}$ (or $K_{34}$ and $K_{33}$), $J_{20}$ to $K_{31}$, and the $J_{2k+1}$ to the $K_{1h}$ (see~\ref{sacig4pair}). As in~\ref{sacign}, the coideal subalgebra\xspaces of dimension dividing $2m$ are either contained in $K_1$ or in exactly one of $K_{33}$ or $K_{34}$. Therefore, the remaining coideal subalgebra\xspaces of dimension $4$ and $m$ of $K_3$ are in exactly one of $K_{33}$ or $K_{34}$. As $K_{33}$ and $K_{34}$ are isomorphic, it is sufficient to find those in $K_{33}$. Any coideal subalgebra\xspace of dimension $4$ contains a coideal subalgebra\xspace of dimension $2$, since it is the image under $\delta$ of a coideal subalgebra\xspace of dimension $m$ contained in one of dimension $2m$. The next argument depends on the parity of $m'$: If $m'$ is odd, the only coideal subalgebra\xspace of dimension $2$ contained in $K_{33}$ is $I_0$; therefore the coideal subalgebra\xspaces of dimension $4$ of $K_{33}$ are mapped by $\delta$ onto the $m+1$ coideal subalgebra\xspaces of dimension $m$ of $K_1$: they are $J_{20}$ and the $J_{2k+1}$. If $m'$ is even, $K_{33}$ admits only $4$ projections of trace $1/2m$ (see~\ref{K33}); therefore, $K_{33}$ contains at most $3$ coideal subalgebra\xspaces of dimension $m$, including $K_{31}$. In conclusion, $K_3$ contains at most $m+5$ coideal subalgebra\xspaces of dimension $m$ and at most $m+5$ coideal subalgebra\xspaces of dimension $4$; by self-duality, it contains exactly $m+5$ of each. In both cases, $K_3$ admits exactly $m$ coideal subalgebra\xspaces of dimension $4$ not contained in $K_1$. \end{proof} \subsubsection{Lattice $\mathrm{l}(K\!D(8))$} \label{KD8} Since $n$ is a power of $2$, any coideal subalgebra\xspace is contained in one of the $K_i$'s (see prop~\ref{sacign}).
The lattice $\mathrm{l}(K\!D(8))$ can therefore be constructed from those of $K_2 \equiv L^{\infty}(D_8)$, $K_3$, and $K_4 \equiv K\!D(4)$ (see~\ref{KD4}). \begin{prop} The lattice of coideal subalgebra\xspaces of $K\!D(8)$ is as given in Figure~\ref{figure.KD8}. \end{prop} \begin{figure} \caption{The lattice of coideal subalgebra\xspaces of $K\!D(8)$} \label{figure.KD8} \end{figure} \begin{proof} We proceed essentially as in the general case. The algebra $K_1$ admits, beside $J_{20}$, four coideal subalgebra\xspaces of dimension $m=4$, denoted $J_{21}$, $J_{22}$, $J_{23}$, and $J_{24}$, each of which contains either $I_3$ or $I_4$. Beside $K_1$, there are two coideal subalgebra\xspaces of dimension $8$: \begin{itemize} \item $K_{33}=I(e_1+e_3+r^4_{1,1})$ containing $J_{20}$, $J_1$, and $J_5$; \item $K_{34}=I(e_1+e_3+r^4_{2,2})$ containing $J_{20}$, $J_3$, and $J_7$. \end{itemize} Both $K_{33}$ and $K_{34}$ have $\mathbb{C} \oplus \mathbb{C} \oplus \mathbb{C} \oplus \mathbb{C} \oplus M_2(\mathbb{C})$ as matrix structure (computer calculation). Therefore they both admit, beside $J_{20}$, exactly two coideal subalgebra\xspaces of dimension $4$; those are $J_1$, $J_5$ and $J_3$, $J_7$. Their images under $\delta$ of $K_3$ are $J_{21}$, $J_{22}$ and $J_{23}$, $J_{24}$. \end{proof} \appendix \section{Collection of formulas for $K\!D(n)$} \label{section.formulas} In this appendix, we collect all the formulas used in the study of the algebra $K\!D(n)$: the left regular representation and the expression of the matrix units in the group basis, the standard coproducts of some special elements, the expression of $\Omega$ in terms of matrix units, and a method to calculate the twisted coproduct together with its expression on some elements. \subsection{Left regular representation of $D_{2n}$} \cite[6.6]{Vainerman.1998}\label{lambda} Set $\epsilon_n=\mathrm{e}^{i\pi/n}$.
The left regular representation of $D_{2n}$ in terms of matrix units is given by, for all $k$: \begin{align*} \lambda(a^k)&=e_1+e_2+(-1)^k(e_3+e_4)+\sum_{j=1}^{n-1}(\epsilon_n^{jk} e^j_{1,1}+\epsilon_n^{-jk} e^j_{2,2})\\ \lambda(ba^k)&=e_1-e_2-(-1)^k(e_3-e_4)+\sum_{j=1}^{n-1}(\epsilon_n^{-jk} e^j_{1,2}+\epsilon_n^{jk} e^j_{2,1})\\ \end{align*} \subsection{Matrix units in the group basis and coinvolution} \label{form2} By inverting the previous formulas we get the following expressions for the matrix units in the group basis: \begin{align*} &e_1 =\frac{1}{4n} \sum_{k=0}^{2n-1}(\lambda(a^k)+\lambda(ba^k))\,, &e_2 &=\frac{1}{4n} \sum_{k=0}^{2n-1}(\lambda(a^k)-\lambda(ba^k))\,,\\ &e_3 =\frac{1}{4n} \sum_{k=0}^{2n-1}(-1)^k(\lambda(a^k)-\lambda(ba^k))\,, &e_4 &=\frac{1}{4n} \sum_{k=0}^{2n-1}(-1)^k(\lambda(a^k)+\lambda(ba^k))\,,\\ &e^j_{1,1}=\frac{1}{2n} \sum_{k=0}^{2n-1}\mathrm{e}^{-ijk\pi/n}\lambda(a^k)\,, &e^j_{2,2}&=\frac{1}{2n} \sum_{k=0}^{2n-1}\mathrm{e}^{ijk\pi/n}\lambda(a^k)\,,\\ &e^j_{1,2}=\frac{1}{2n} \sum_{k=0}^{2n-1}\mathrm{e}^{ijk\pi/n}\lambda(ba^k)\,, &e^j_{2,1}&=\frac{1}{2n} \sum_{k=0}^{2n-1}\mathrm{e}^{-ijk\pi/n}\lambda(ba^k)\,. \end{align*} For the convenience of the reader, here are some direct consequences of those formulas: \begin{align*} &e_1+e_2 =\frac{1}{2n} \sum_{k=0}^{2n-1}\lambda(a^{k})\,, &e_3+e_4 &=\frac{1}{2n} \sum_{k=0}^{2n-1}(-1)^k\lambda(a^{k})\,,\\ &e_1+e_3 =\frac{1}{2n} \sum_{k=0}^{n-1}\lambda(a^{2k})+\lambda(ba^{2k+1})\,, &e_1+e_4& =\frac{1}{2n} \sum_{k=0}^{n-1}\lambda(a^{2k})+\lambda(ba^{2k})\,,\\ &e_2+e_3 =\frac{1}{2n} \sum_{k=0}^{n-1}(\lambda(a^{2k})-\lambda(ba^{2k}))\,, &e_1+e_2+e_3+e_4& = \frac{1}{n}\sum_{k=0}^{n-1}\lambda(a^{2k})\,,\\ &e^j_{1,1}+e^{n-j}_{2,2}=\frac{1}{n} \sum_{k=0}^{n-1}\mathrm{e}^{-2ijk\pi/n}\lambda(a^{2k})\,, &e^j_{1,2}+e^{n-j}_{2,1}&=\frac{1}{n} \sum_{k=0}^{n-1}\mathrm{e}^{2ijk\pi/n}\lambda(ba^{2k})\,,\\ &e^j_{1,2}-e^{n-j}_{2,1}=\frac{1}{n} \sum_{k=0}^{n-1}\mathrm{e}^{ij(2k+1)\pi/n}\lambda(ba^{2k+1})\,. 
\end{align*} The coinvolution is the antiisomorphism defined by $S(\lambda(g))=\lambda(g)^*$. It fixes the $e_i$ and $e_{i,j}^k$ (with $i\neq j$) and exchanges the $e_{1,1}^j$ and $e_{2,2}^j$. \subsection{The unitary $2$-cocycle $\Omega$} \label{omega} In~\cite[6.6]{Vainerman.1998} the unitary $2$-cocycle $\Omega$ used to twist the coproduct is expressed in the group basis: $$\Omega=\sum_{i,j=1}^4 c_{i,j} \lambda(h_i)\otimes\lambda(h_j)\,,$$ where $h_1=1$, $h_2=a^n$, $h_3=b$, $h_4=ba^n$, and the $c_{i,j}$ are the coefficients of the matrix: $$ \frac{1}{8}\begin{pmatrix} 5&1&1&1\\ 1&1&\bar{\nu}&\nu\\ 1&\nu&1&\bar{\nu}\\ 1&\bar{\nu}&\nu&1 \end{pmatrix}\quad \text{with}\;\; \nu=-1+2\rm{i}\,.$$ Furthermore, $\Omega^{*}= \sum_{i,j=1}^4 c_{j,i} \lambda(h_i)\otimes\lambda(h_j)$. We deduce its expression in terms of matrix units:\\ For $n$ even: \begin{align*} \Omega&=(e_1+e_2+e_3+e_4+q_1+q_2)\otimes(e_1+e_2+e_3+e_4+q_1+q_2)\\ &+(p_1+p_2)\otimes(e_1+e_4+q_1)+(e_1+e_4+q_1)\otimes(p_1+p_2)\\ &+i(p_1-p_2)\otimes(e_2+e_3+q_2)-i(e_2+e_3+q_2)\otimes(p_1-p_2)+(p_1+ip_2)\otimes(p_1-ip_2)\end{align*} For $n$ odd: \begin{align*} \Omega&=(e_1+e_2+q_1+q_2)\otimes(e_1+e_2+q_1+q_2)\\ &+(e_3+e_4+p_1+p_2)\otimes(e_1+q_1)+(e_1+q_1)\otimes(e_3+e_4+p_1+p_2)\\ &+i(e_2+q_2)\otimes(e_3-e_4-p_1+p_2)-i(e_3-e_4-p_1+p_2)\otimes (e_2+q_2)\\ &+(e_3-ie_4-ip_1+p_2)\otimes(e_3+ie_4+ip_1+p_2) \end{align*} \begin{remark} \label{remark.omega} Note that the conjugation by $\Omega$ acts similarly on all factors $M_2(\mathbb{C})$ of same parity. 
For example, if $x$ and $y$ are each in some even factor (not necessarily the same), then: $$\Omega (x\otimes y) \Omega^*=(q_1+q_2)\otimes (q_1+q_2)(x\otimes y) (q_1+q_2)\otimes (q_1+q_2)=x\otimes y\,.$$ If $x$ and $y$ are each in some odd factor (not necessarily the same), then: $$\Omega (x\otimes y) \Omega^*=(-ip_1+p_2)\otimes (ip_1+p_2) (x\otimes y) (ip_1+p_2)\otimes (-ip_1+p_2)\,.$$ In particular, for $j$ and $j'$ odd: $$\Omega (e^j_{1,2}\otimes e^{j'}_{1,2}) \Omega^*=r^j_{2,1}\otimes r^{j'}_{1,2}\,.$$ In~\ref{subsection.copv}, we shall see an example of twisting of $x\otimes y$ for $x$ in some odd factor and $y$ in some even factor. In subsequent sections, we give further examples of calculation of twisted coproducts using this remark. \end{remark} \subsection{Twisted coproduct of the Jones projection of the subgroup $H_r$} \label{J4} Recall that, in $\mathbb{C}[D_{2n}]$ and for $r=1,\dots, n-1$, the projection $$Q_r=\frac{1}{4}(1+\lambda(a^n)+\lambda(ba^r)+\lambda(ba^{r+n}))\, ,$$ is the Jones projection of the subalgebra $\mathbb{C}[H_r]$ of the subgroup \begin{displaymath} H_r=\{1, \lambda(a^n), \lambda(ba^r), \lambda(ba^{r+n}) \}\,. \end{displaymath} In order to illustrate Remark~\ref{remark.omega}, we prove that the twisted coproduct of $Q_r$ is of the form \begin{displaymath} \Delta(Q_r)=Q_r\otimes Q_r+Q'_r\otimes Q'_r+S(\tilde{P}_r)\otimes \tilde{P}_r + S(\tilde{P'}_r)\otimes \tilde{P'}_r\,. \end{displaymath} where $\tilde{P}_r$ and $\tilde{P'}_r$ are defined below, depending on the parity of $n$. This is used in the proof of Proposition~\ref{prop.abelien} to conclude that $Q_r$ is the Jones projection of $J_r$, spanned by $Q_r$, $Q'_r$, $\tilde{P}_r$ and $\tilde{P'_r}$. 
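The algebraic facts about $\Omega$ used above, namely its unitarity and the formula $\Omega^{*}=\sum_{i,j} c_{j,i}\,\lambda(h_i)\otimes\lambda(h_j)$, can be double-checked numerically: $\Omega$ lives in $\mathbb{C}[V]\otimes\mathbb{C}[V]$ for the Klein subgroup $V=\{1,a^n,b,ba^n\}$, which acts faithfully on itself by the left regular representation. A minimal Python/numpy sketch (the bit-pair encoding of $V$ is ours, not from the paper):

```python
import numpy as np

# Klein four-group {1, a^n, b, b a^n} encoded as bit pairs; the group law is XOR.
elems = [(0, 0), (1, 0), (0, 1), (1, 1)]
idx = {e: i for i, e in enumerate(elems)}

def lam(g):
    """Left regular representation: permutation matrix of left multiplication by g."""
    M = np.zeros((4, 4), dtype=complex)
    for h in elems:
        gh = (g[0] ^ h[0], g[1] ^ h[1])
        M[idx[gh], idx[h]] = 1
    return M

nu = -1 + 2j
C = np.array([[5, 1, 1, 1],
              [1, 1, nu.conjugate(), nu],
              [1, nu, 1, nu.conjugate()],
              [1, nu.conjugate(), nu, 1]], dtype=complex) / 8

# Omega = sum_{i,j} c_{i,j} lambda(h_i) (x) lambda(h_j), a 16x16 matrix.
Omega = sum(C[i, j] * np.kron(lam(elems[i]), lam(elems[j]))
            for i in range(4) for j in range(4))
# The claimed expression for Omega^* in the group basis.
OmegaStar = sum(C[j, i] * np.kron(lam(elems[i]), lam(elems[j]))
                for i in range(4) for j in range(4))

assert np.allclose(Omega @ Omega.conj().T, np.eye(16))   # unitarity
assert np.allclose(OmegaStar, Omega.conj().T)            # Omega^* formula
```

Both assertions hold; behind the second one lies the Hermitian symmetry $\overline{c_{i,j}}=c_{j,i}$ of the coefficient matrix, together with the fact that each $\lambda(h_i)$ is a real symmetric permutation matrix.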
We recall from~\ref{groupe} that the standard coproduct of $Q_r$ is: $$\Delta_s(Q_r)=Q_r\otimes Q_r+Q'_r\otimes Q'_r+P_r\otimes P_r + P'_r\otimes P'_r\,,$$ where $Q_r$, $Q'_r$, $P_r$ and $P'_r$ are the minimal projections of $J_r$ (notice that all the elements of the group $H_r$ are of order $2$). Let us start with $n$ odd, and express $Q_r,Q'_r,P_r,P'_r$ in the matrix unit basis: \begin{align*} Q_r&=\frac{1}{4}(1+\lambda(a^n)+\lambda(ba^r)+\lambda(ba^{r+n}))=e_1+\sum_{j=1,\,j \text{ even}}^{n-1} q(0,\frac{jr\pi}{n},j)\,.\\ Q'_r&=\frac{1}{4}(1+\lambda(a^n)-\lambda(ba^r)-\lambda(ba^{r+n}))=e_2+\sum_{j=1,\,j \text{ even}}^{n-1} q(0,\frac{jr\pi}{n}+\pi,j)\,. \end{align*} For $r$ even: \begin{align*} P_r&=\frac{1}{4}(1-\lambda(a^n)-\lambda(ba^r)+\lambda(ba^{r+n}))=e_3+\sum_{j=1,\,j \text{ odd}}^{n-1} q(0,\frac{jr\pi}{n} +\pi,j)\,,\\ P'_r&=\frac{1}{4}(1-\lambda(a^n)+\lambda(ba^r)-\lambda(ba^{r+n}))=e_4+\sum_{j=1,\,j \text{ odd}}^{n-1} q(0,\frac{jr\pi}{n},j)\,. \end{align*} For $r$ odd: \begin{align*} P_r&=\frac{1}{4}(1-\lambda(a^n)-\lambda(ba^r)+\lambda(ba^{r+n}))=e_4+\sum_{j=1,\,j \text{ odd}}^{n-1} q(0,\frac{jr\pi}{n}+\pi,j)\,,\\ P'_r&=\frac{1}{4}(1-\lambda(a^n)+\lambda(ba^r)-\lambda(ba^{r+n}))=e_3+\sum_{j=1,\,j \text{ odd}}^{n-1} q(0,\frac{jr\pi}{n},j)\,. \end{align*} Applying~\ref{omega} yields: $$\Delta(Q_r)=Q_r\otimes Q_r+Q'_r\otimes Q'_r+(V\otimes V^*)(P_r\otimes P_r)(V^*\otimes V) + (V\otimes V^*)(P'_r\otimes P'_r)(V^*\otimes V)\,,$$ where $V=(e_3-ie_4-ip_1+p_2)$, and $(V\otimes V^*)$ is the unique term of $\Omega$ which acts non-trivially. Note that the following relations hold: \begin{equation*} Ve_3V^*=e_3, \qquad Ve_4V^*=e_4, \qquad P_r+P'_r=e_3+e_4\,. \end{equation*} Therefore, it is sufficient to conjugate $q(0,\frac{jr\pi}{n},j)$ with $\pm ip_1+p_2$.
As desired, we get: $$\Delta(Q_r)=Q_r\otimes Q_r+Q'_r\otimes Q'_r+S(\tilde{P}_r)\otimes \tilde{P}_r + S(\tilde{P'}_r)\otimes \tilde{P'}_r\,,$$ with, for $r$ even: \begin{displaymath} \tilde{P}_r =e_3+\sum_{j=1,\,j \text{ odd}}^{n-1} q(-\frac{jr\pi}{n},\pi,j) \quad \text{ and }\quad \tilde{P'}_r =e_4+\sum_{j=1,\,j \text{ odd}}^{n-1} q(\frac{jr\pi}{n},0,j)\,, \end{displaymath} and for $r$ odd: \begin{displaymath} \tilde{P}_r =e_4+\sum_{j=1,\,j \text{ odd}}^{n-1} q(-\frac{jr\pi}{n},\pi,j) \quad \text{ and }\quad \tilde{P'}_r =e_3+\sum_{j=1,\,j \text{ odd}}^{n-1} q(\frac{jr\pi}{n},0,j)\,. \end{displaymath} From~\ref{form2}, we get: $S(q(\alpha,\beta,j))=q(-\alpha,\beta,j)$. The calculation of $\Delta(Q_r)$ in the case $n$ even is similar with \begin{displaymath} \tilde{P}_r =\sum_{j=1,\,j \text{ odd}}^{n-1} q(-\frac{jr\pi}{n},\pi,j) \quad \text{ and}\quad \tilde{P'}_r =\sum_{j=1,\,j \text{ odd}}^{n-1} q(\frac{jr\pi}{n},0,j)\,, \end{displaymath} and with, for $r$ even: \begin{displaymath} Q_r=e_1+e_4+\sum_{j=1,\,j \text{ even}}^{n-1} q(0,\frac{jr\pi}{n},j)\quad \text{ and}\quad Q'_r=e_2+e_3+\sum_{j=1,\,j \text{ even}}^{n-1} q(0,\frac{jr\pi+\pi}{n},j)\,, \end{displaymath} and for $r$ odd: \begin{displaymath} Q_r=e_1+e_3+\sum_{j=1,\,j \text{ even}}^{n-1} q(0,\frac{jr\pi}{n},j)\quad \text{ and}\quad Q'_r=e_2+e_4+\sum_{j=1,\,j \text{ even}}^{n-1} q(0,\frac{jr\pi+\pi}{n},j)\,. \end{displaymath} \subsection{Coproducts of Jones projections for coideal subalgebra\xspaces of dimension $2n$} \subsubsection{Standard coproduct expressions} \label{copG} We need to calculate the twisted coproduct of certain projections. We start with their standard coproducts in $\mathbb{C}[D_{2n}]$, using the results of~\ref{groupe}. 
\begin{proposition} The standard coproduct $\Delta_s$ of the Kac algebra of the dihedral group $D_{2n}$ satisfies: \begin{align*} \Delta_s(e_1+e_2)=&\,(e_1+e_2)\otimes(e_1+e_2)+(e_3+e_4)\otimes(e_3+e_4)\\ &+\sum_{j=1}^{n-1} e^j_{1,1}\otimes e^{j}_{2,2}+e^{j}_{2,2}\otimes e^j_{1,1}\\ \Delta_s(e_1+e_3)=&\,(e_1+e_3)\otimes(e_1+e_3)+(e_2+e_4)\otimes(e_2+e_4)\\ & +\frac{1}{2}\sum_{j=1}^{n-1}( e^j_{1,1}+ e^{n-j}_{2,2})\otimes (e^j_{2,2}+e^{n-j}_{1,1}) +( e^j_{1,2}- e^{n-j}_{2,1})\otimes (e^j_{2,1}- e^{n-j}_{1,2})\\ \Delta_s(e_1+e_4)=&\,(e_1+e_4)\otimes(e_1+e_4)+(e_2+e_3)\otimes(e_2+e_3)\\ &+\frac{1}{2}\sum_{j=1}^{n-1}(e^{n-j}_{1,1}+e^j_{2,2})\otimes (e^j_{1,1}+e^{n-j}_{2,2})+( e^j_{1,2}+ e^{n-j}_{2,1})\otimes (e^j_{2,1}+ e^{n-j}_{1,2})\\ \Delta_s(e_3+e_4)=&\,(e_1+e_2)\otimes(e_3+e_4)+(e_3+e_4)\otimes(e_1+e_2)\\ &+\sum_{j=1}^{n-1} e^{n-j}_{1,1}\otimes e^{j}_{1,1}+e^{n-j}_{2,2}\otimes e^j_{2,2}\\ \Delta_s(e_1+e_2+e_3+e_4)=&\,(e_1+e_2+e_3+e_4)\otimes(e_1+e_2+e_3+e_4)\\ &+\sum_{j=1}^{n-1} (e^{n-j}_{1,1}+e^j_{2,2})\otimes (e^j_{1,1}+e^{n-j}_{2,2})\\ \Delta_s(e^j_{1,1})=&\,e^j_{1,1}\otimes(e_1+e_2)+e^{n-j}_{2,2}\otimes(e_3+e_4)+(e_1+e_2)\otimes e^j_{1,1}+(e_3+e_4)\otimes e^{n-j}_{2,2}\\ &+\sum_{j'<j} e^{j-j'}_{1,1}\otimes e^{j'}_{1,1} +e^{n-(j-j')}_{2,2}\otimes e^{n-j'}_{2,2}\\ &+\sum_{j'>j}e^{j'-j}_{2,2}\otimes e^{j'}_{1,1}+e^{n-(j'-j)}_{1,1}\otimes e^{n-j'}_{2,2} \end{align*} \end{proposition} \begin{proof} The projections $e_1+e_2$ and $e_1+e_2+e_3+e_4$ are the Jones projections of the subgroups generated respectively by $\lambda(a)$ and $\lambda(a^2)$. Their standard coproducts can be derived from~\ref{groupe}.
From $e_1+e_4 =\frac{1}{2n} \sum_{k=0}^{n-1}(\lambda(a^{2k})+\lambda(ba^{2k}))$, and using the formulas of~\ref{form2} we get: \begin{align*} \Delta_s(e_1+e_4)=&\,\frac{1}{2n} \sum_{k=0}^{n-1}\lambda(a^{2k})\otimes \lambda(a^{2k})+\lambda(ba^{2k})\otimes \lambda(ba^{2k})\\ =&\,\frac{1}{2}\Delta_s(e_1+e_2+e_3+e_4) \\&+\frac{1}{2n} \sum_{k=0}^{n-1}\lambda(ba^{2k})\otimes \left((e_1+e_4)-(e_2+e_3)+\sum_{j=1}^{n-1}\mathrm{e}^{-2ijk\pi/n}e^j_{1,2}+\mathrm{e}^{2ijk\pi/n}e^j_{2,1}\right)\\ =&\,\frac{1}{2}\Delta_s(e_1+e_2+e_3+e_4)+\frac{1}{2}\left((e_1+e_4)-(e_2+e_3)\right)\otimes\left((e_1+e_4)-(e_2+e_3)\right)\\ &+\sum_{j=1}^{n-1}\left(\frac{1}{2n} \sum_{k=0}^{n-1}\mathrm{e}^{-2ijk\pi/n}\lambda(ba^{2k})\otimes e^j_{1,2}+\frac{1}{2n} \sum_{k=0}^{n-1}\mathrm{e}^{2ijk\pi/n}\lambda(ba^{2k})\otimes e^j_{2,1}\right)\\ =&\,(e_1+e_4)\otimes(e_1+e_4)+(e_2+e_3)\otimes(e_2+e_3)\\ &+\frac{1}{2}\sum_{j=1}^{n-1}(e^{n-j}_{1,1}+e^j_{2,2})\otimes (e^j_{1,1}+e^{n-j}_{2,2})+( e^j_{2,1}+ e^{n-j}_{1,2})\otimes e^j_{1,2} +( e^j_{1,2}+ e^{n-j}_{2,1})\otimes e^j_{2,1}\\ =&\,(e_1+e_4)\otimes(e_1+e_4)+(e_2+e_3)\otimes(e_2+e_3)\\ &+\frac{1}{2}\sum_{j=1}^{n-1}(e^{n-j}_{1,1}+e^j_{2,2})\otimes (e^j_{1,1}+e^{n-j}_{2,2})+( e^j_{1,2}+ e^{n-j}_{2,1})\otimes (e^j_{2,1}+ e^{n-j}_{1,2}) \end{align*} The other coproduct expressions can be calculated similarly. 
\end{proof} \subsubsection{Twisted coproduct expressions} \label{cop} Using computer exploration as described in~\ref{mupad.K2=Dn}, and with some factorization efforts, one gets:\\ For $n=3$: \begin{align*} \Delta(e_1+e_2)=&\,(e_1+e_2)\otimes(e_1+e_2)+(e_3+e_4)\otimes(e_3+e_4)\\ &+ r^1_{1,1}\otimes r^1_{1,1}+r^1_{2,2}\otimes r^1_{2,2}+e^2_{1,1}\otimes e^{2}_{2,2}+e^{2}_{2,2}\otimes e^2_{1,1}\,,\\ \Delta(e_1+e_3)=&\,(e_1+e_3)\otimes(e_1+e_3)+(e_2+e_4)\otimes(e_2+e_4)\\ &+\frac{1}{2}\left(( e^{1}_{2,2}+ r^2_{1,1})\otimes (e^{1}_{1,1}+ r^2_{1,1})+( e^1_{1,1}+ r^{2}_{2,2})\otimes (e^{1}_{2,2}+ r^2_{2,2})\right)\\ &+\frac{1}{2}\left(( e^1_{1,2}+r^{2}_{2,1})\otimes (e^1_{2,1}+r^{2}_{2,1})+( e^1_{2,1}+r^{2}_{1,2})\otimes (e^1_{1,2}+r^{2}_{1,2})\right)\,,\\ \Delta(e_1+e_4)=&\,(e_1+e_4)\otimes(e_1+e_4)+(e_2+e_3)\otimes(e_2+e_3)\\ &+\frac{1}{2}\left(( e^{1}_{1,1}+ r^2_{1,1})\otimes (e^{1}_{2,2}+ r^2_{1,1})+( e^1_{2,2}+ r^{2}_{2,2})\otimes (e^{1}_{1,1}+ r^2_{2,2})\right)\\ &+\frac{1}{2}\left(( e^1_{1,2}-r^{2}_{1,2})\otimes (e^1_{2,1}-r^{2}_{1,2})+( e^1_{2,1}-r^{2}_{2,1})\otimes (e^1_{1,2}-r^{2}_{2,1})\right)\,,\\ \Delta(e_1+e_2+&e_3+e_4)=(e_1+e_2+e_3+e_4)\otimes(e_1+e_2+e_3+e_4)\\&+(r^1_{1,1}+e^{2}_{2,2})\otimes(r^1_{1,1}+e^{2}_{1,1})+(r^1_{2,2}+e^{2}_{1,1})\otimes(r^1_{2,2}+e^{2}_{2,2})\,. 
\end{align*} For $n=4$: \begin{align*} \Delta(e_1+e_2)=&\,(e_1+e_2)\otimes(e_1+e_2)+(e_3+e_4)\otimes(e_3+e_4) + r^1_{1,1}\otimes r^1_{1,1}\\&+r^1_{2,2}\otimes r^1_{2,2} +e^2_{1,1}\otimes e^{2}_{2,2}+e^{2}_{2,2}\otimes e^2_{1,1}+ r^3_{1,1}\otimes r^3_{1,1}+r^3_{2,2}\otimes r^3_{2,2}\\ \Delta(e_1+e_3)=&\,(e_1+e_3)\otimes(e_1+e_3)+(e_2+e_4)\otimes(e_2+e_4)\\ &+\frac{1}{2}\left( ( e^{1}_{1,1}+ e^{3}_{1,1})\otimes (e^{1}_{2,2}+ e^{3}_{2,2})+( e^1_{2,2}+ e^{3}_{2,2})\otimes (e^{1}_{1,1}+ e^{3}_{1,1})\right)\\ &+\frac{1}{2}\left(( e^1_{1,2}-e^{3}_{1,2})\otimes (e^1_{2,1}-e^{3}_{2,1})+( e^1_{2,1}-e^{3}_{2,1})\otimes (e^1_{1,2}-e^{3}_{1,2})\right)\\ &+ r^2_{1,1} \otimes r^2_{1,1} +r^2_{2,2} \otimes r^2_{2,2}\,,\\ \Delta(e_1+e_4)=&\,(e_1+e_4)\otimes(e_1+e_4)+(e_2+e_3)\otimes(e_2+e_3)\\ &+\frac{1}{2}\left(( e^{1}_{1,1}+ e^{3}_{2,2})\otimes (e^{1}_{2,2}+ e^{3}_{1,1})+( e^1_{2,2}+ e^{3}_{1,1})\otimes (e^{1}_{1,1}+ e^{3}_{2,2})\right)\\ &+\frac{1}{2}\left(( e^1_{1,2}+e^{3}_{2,1})\otimes (e^1_{2,1}+e^{3}_{1,2})+( e^1_{2,1}+e^{3}_{1,2})\otimes (e^1_{1,2}+e^{3}_{2,1})\right)\\ &+ p^2_{1,1} \otimes p^2_{1,1} +p^2_{2,2} \otimes p^2_{2,2}\,,\\ \Delta(e_1+e_2+&e_3+e_4)=(e_1+e_2+e_3+e_4)\otimes(e_1+e_2+e_3+e_4)\\&+(r^1_{1,1}+r^{3}_{2,2})\otimes(r^1_{1,1}+r^{3}_{2,2})+(r^1_{2,2}+r^{3}_{1,1})\otimes(r^1_{2,2}+r^{3}_{1,1})\\&+(e^2_{1,1}+e^{2}_{2,2})\otimes(e^2_{1,1}+e^{2}_{2,2})\,. \end{align*} Using the conjugation rules of Remark~\ref{remark.omega}, the formulas for the standard coproduct, and the expressions of the twisted coproduct for $n=3$ and $n=4$, and with the help of the computer as described in \ref{mupad.K2=Dn}, we get: \begin{align*} \Delta(e_1+e_2)=&\,(e_1+e_2)\otimes(e_1+e_2)+(e_3+e_4)\otimes(e_3+e_4)\\ &+ \sum_{j=1,\,j \text{ odd}}^{n-1}r^j_{1,1}\otimes r^j_{1,1}+r^j_{2,2}\otimes r^j_{2,2} +\sum_{j=1,\,j \text{ even}}^{n-1}e^j_{1,1}\otimes e^{j}_{2,2}+e^{j}_{2,2}\otimes e^j_{1,1}\,.
\end{align*} For $n$ odd: \begin{align*} \Delta(e_1+e_3)=&\,(e_1+e_3)\otimes(e_1+e_3)+(e_2+e_4)\otimes(e_2+e_4)\\ &+\frac{1}{2} \sum_{j=1,\,j \text{ odd}}^{n-1}\left(( e^{j}_{2,2}+ r^{n-j}_{1,1})\otimes (e^{j}_{1,1}+ r^{n-j}_{1,1})+( e^j_{1,1}+ r^{n-j}_{2,2})\otimes (e^{j}_{2,2}+ r^{n-j}_{2,2})\right)\\ &+\frac{1}{2}\sum_{j=1,\,j \text{ odd}}^{n-1}\left(( e^j_{1,2}+r^{n-j}_{2,1})\otimes (e^j_{2,1}+r^{n-j}_{2,1})+( e^j_{2,1}+r^{n-j}_{1,2})\otimes (e^j_{1,2}+r^{n-j}_{1,2})\right)\,,\\ \Delta(e_1+e_4)=&\,(e_1+e_4)\otimes(e_1+e_4)+(e_2+e_3)\otimes(e_2+e_3)\\ &+\frac{1}{2} \sum_{j=1,\,j \text{ odd}}^{n-1}\left(( e^{j}_{1,1}+ r^{n-j}_{1,1})\otimes (e^{j}_{2,2}+ r^{n-j}_{1,1})+( e^j_{2,2}+ r^{n-j}_{2,2})\otimes (e^{j}_{1,1}+ r^{n-j}_{2,2})\right)\\ &+\frac{1}{2}\sum_{j=1,\,j \text{ odd}}^{n-1}\left(( e^j_{1,2}-r^{n-j}_{1,2})\otimes (e^j_{2,1}-r^{n-j}_{1,2})+( e^j_{2,1}-r^{n-j}_{2,1})\otimes (e^j_{1,2}-r^{n-j}_{2,1})\right)\,, \end{align*} \begin{align*} \Delta(e_1+e_2+&e_3+e_4)=(e_1+e_2+e_3+e_4)\otimes(e_1+e_2+e_3+e_4)\\ &+\sum_{j=1,\,j \text{ odd}}^{n-1}\left((r^j_{1,1}+e^{n-j}_{2,2})\otimes(r^j_{1,1}+e^{n-j}_{1,1})+(r^j_{2,2}+e^{n-j}_{1,1})\otimes(r^j_{2,2}+e^{n-j}_{2,2})\right)\,,\\ \Delta(e_3+e_4)=&\,(e_1+e_2)\otimes(e_3+e_4)+(e_3+e_4)\otimes(e_1+e_2)\\ &+ \sum_{j=1,\,j \text{ even}}^{n-1} r^{n-j}_{1,1}\otimes e^{j}_{1,1}+ r^{n-j}_{2,2}\otimes e^{j}_{2,2} + e^{j}_{1,1}\otimes r^{n-j}_{2,2}+ e^{j}_{2,2}\otimes r^{n-j}_{1,1}\,,\\ \\ \Delta(e^2_{1,1})=&\,e^2_{1,1}\otimes(e_1+e_2)+r^{n-2}_{1,1}\otimes(e_3+e_4)+(e_1+e_2)\otimes e^2_{1,1}+(e_3+e_4)\otimes r^{n-2}_{2,2}\\ &+r^1_{2,2}\otimes r^{1}_{1,1}+ e^{n-1}_{2,2}\otimes e^{n-1}_{2,2}\\ &+\sum_{j>2,\,j \text{ even}}^{n-1}e^{j-2}_{2,2}\otimes e^{j}_{1,1}+r^{n-j+2}_{2,2}\otimes r^{n-j}_{2,2}\\ &+\sum_{j>2,\,j \text{ odd}}^{n-1}r^{j-2}_{1,1}\otimes r^{j}_{1,1}+e^{n-j+2}_{1,1}\otimes e^{n-j}_{2,2}\,. 
\end{align*} For $n$ even: \begin{align*} \Delta(e_1+e_3)=&\,(e_1+e_3)\otimes(e_1+e_3)+(e_2+e_4)\otimes(e_2+e_4)\\ &+\frac{1}{2} \sum_{j=1,\,j\text{ odd}}^{n-1}\left(( r^{j}_{2,2}+ r^{n-j}_{1,1})\otimes (r^{j}_{2,2}+ r^{n-j}_{1,1})+(r^j_{2,1}-r^{n-j}_{1,2})\otimes( r^j_{2,1}-r^{n-j}_{1,2})\right)\\ &+\frac{1}{2}\sum_{j=1,\,j \text{ even}}^{n-1}\left(( e^{j}_{1,1}+ e^{n-j}_{2,2})\otimes (e^{j}_{2,2}+ e^{n-j}_{1,1})+( e^j_{1,2}-e^{n-j}_{2,1})\otimes (e^j_{2,1}-e^{n-j}_{1,2})\right)\,, \\ \Delta(e_1+e_4)=&\,(e_1+e_4)\otimes(e_1+e_4)+(e_2+e_3)\otimes(e_2+e_3)\\ &+\frac{1}{2} \sum_{j=1,\,j \text{ odd}}^{n-1}\left(( r^{j}_{1,1}+ r^{n-j}_{2,2})\otimes (r^{j}_{1,1}+ r^{n-j}_{2,2})+(r^j_{2,1}+r^{n-j}_{1,2})\otimes( r^j_{2,1}+r^{n-j}_{1,2})\right)\\ &+\frac{1}{2}\sum_{j=1,\, j \text{ even}}^{n-1}\left(( e^{n-j}_{1,1}+ e^{j}_{2,2})\otimes (e^{j}_{1,1}+ e^{n-j}_{2,2})+( e^j_{1,2}+e^{n-j}_{2,1})\otimes (e^j_{2,1}+e^{n-j}_{1,2})\right)\,, \\ \Delta(e_3+e_4)=&\,(e_1+e_2)\otimes(e_3+e_4)+(e_3+e_4)\otimes(e_1+e_2)\\ &+ \sum_{j=1,\,j \text{ odd}}^{n-1} r^{j}_{1,1}\otimes r^{n-j}_{2,2}+ r^{j}_{2,2}\otimes r^{n-j}_{1,1}\\ &+ \sum_{j=1,\,j \text{ even}}^{n-1} e^{n-j}_{1,1}\otimes e^j_{1,1}+ e^{n-j}_{2,2}\otimes e^j_{2,2}\,\\ \Delta(e_1+e_2+&e_3+e_4)=(e_1+e_2+e_3+e_4)\otimes(e_1+e_2+e_3+e_4)\\ &+\sum_{j=1,\,j \text{ odd}}^{n-1}(r^j_{1,1}+r^{n-j}_{2,2})\otimes(r^j_{1,1}+r^{n-j}_{2,2})\\ &+\sum_{j=1,\,j \text{ even}}^{n-1}(e^j_{1,1}+e^{n-j}_{2,2})\otimes(e^j_{2,2}+e^{n-j}_{1,1})\,, \end{align*} \begin{align*} \Delta(e^2_{1,1})= &e^2_{1,1}\otimes(e_1+e_2)+e^{n-2}_{2,2}\otimes(e_3+e_4)+(e_1+e_2)\otimes e^2_{1,1}+(e_3+e_4)\otimes e^{n-2}_{2,2}\\ &+r^1_{2,2}\otimes r^{1}_{1,1}+ r^{n-1}_{1,1}\otimes r^{n-1}_{2,2}\\ &+\sum_{j>2,\, j \text{ even}}^{n-1}e^{j-2}_{2,2}\otimes e^{j}_{1,1}+e^{n-j+2}_{1,1}\otimes e^{n-j}_{2,2}\\ &+\sum_{j>2,\, j \text{ odd}}^{n-1}r^{j-2}_{1,1}\otimes r^{j}_{1,1}+r^{n-j+2}_{2,2}\otimes r^{n-j}_{2,2}\,. 
\end{align*} \subsection{Coproducts of $v$ and $w$ of $K_3$ in $K\!D(2m)$} \label{subsection.copv} In this section, we calculate the coproducts of the unitary elements $v$ and $w$ of $K_3$ in $K\!D(2m)$ needed in~\ref{B4m}. \begin{prop} With the notations of~\ref{B4m}, the unitary elements $v$ and $w$ of $K_3$ expand as follows in the group basis: \begin{align*} v&=\lambda(ba)B_0-\frac12B_1[\mathrm{i}(\lambda(a)-\lambda(a^{-1}))+\lambda(ba)+\lambda(ba^{-1})] \\ w&=\lambda(ba^{-1})B_0+\frac12B_1[\mathrm{i}(\lambda(a)-\lambda(a^{-1}))-\lambda(ba)-\lambda(ba^{-1})] \end{align*} The coproduct of $v$ satisfies: $\Delta(v)=v \otimes B_0v + w\otimes B_1 v$.\\ Furthermore: $\Delta(w)=w \otimes B_0w + v\otimes B_1 w$. \end{prop} \begin{proof} A straightforward calculation gives the expressions in the group basis. We now check the expression of $\Delta(v)$ by proving that untwisting it yields back the standard coproduct of $v$. Namely, setting $D_{i,j}=(B_i\otimes B_j)(v \otimes B_0v + w\otimes B_1 v)$ for $i,j\in\{0,1\}$, we check the equalities: $\Omega^{*}D_{i,j}\Omega=(B_i\otimes B_j)\Delta_s(v)$, for $i,j\in\{0,1\}$. Note first that $B_1$ is the projection onto the odd factors and that $$\Delta(B_0)=B_0\otimes B_0+B_1\otimes B_1\quad \text{and} \quad \Delta(B_1)=B_0\otimes B_1+B_1\otimes B_0$$ since $\lambda(a^n)$ belongs to the intrinsic group. Using Remark~\ref{remark.omega}, it follows that: $$\Delta_s(vB_0)=\lambda(ba)B_0\otimes \lambda(ba)B_0+\lambda(ba)B_1\otimes \lambda(ba)B_1=\Omega^{*}D_{0,0}\Omega+\Omega^{*}D_{1,1}\Omega$$ The action of $\Omega$ on $x\otimes y$ depends only on the parity of the factors containing $x$ and $y$, and in particular is independent of $n$ (even). It is therefore sufficient to use the following formulas, which have been obtained by computer in $K\!D(4)$.
\begin{itemize} \item For tensors in $K\!D(2m)^{2j+1} \otimes K\!D(2m)^{2k}$: \begin{align*} \Omega^{*}(r_{1,2}\otimes e_{1,2})\Omega&=\frac{\mathrm{i}}{2}(e_{1,1}\otimes e_{1,1}-e_{2,2}\otimes e_{2,2})-\frac12(e_{1,2}\otimes e_{1,2}+e_{2,1}\otimes e_{2,1})\\ \Omega^{*}(r_{1,2}\otimes e_{2,1})\Omega&=\frac{\mathrm{i}}{2}(e_{1,1}\otimes e_{2,2}-e_{2,2}\otimes e_{1,1})-\frac12(e_{1,2}\otimes e_{2,1}+e_{2,1}\otimes e_{1,2})\\ \Omega^{*}(r_{2,1}\otimes e_{1,2})\Omega&=\frac{\mathrm{i}}{2}(e_{2,2}\otimes e_{1,1}-e_{1,1}\otimes e_{2,2})-\frac12(e_{2,1}\otimes e_{1,2}+e_{1,2}\otimes e_{2,1})\\ \Omega^{*}(r_{2,1}\otimes e_{2,1})\Omega&=\frac{\mathrm{i}}{2}(e_{2,2}\otimes e_{2,2}-e_{1,1}\otimes e_{1,1})-\frac12(e_{1,2}\otimes e_{1,2}+e_{2,1}\otimes e_{2,1}) \end{align*} \item For tensors in $K\!D(2m)^{2j+1} \otimes (\mathbb{C} e_1\oplus \mathbb{C} e_2\oplus \mathbb{C} e_3\oplus \mathbb{C} e_4)$: \begin{align*} \Omega^{*}[r_{1,2}\otimes ((e_1+e_3)-(e_2+e_4))]\Omega&=\frac{\mathrm{i}}{2}[(e_{1,1}-e_{2,2})\otimes ((e_1+e_2)-(e_3+e_4))]\\ &-\frac12[(e_{1,2}+e_{2,1})\otimes ((e_1+e_3)-(e_2+e_4))]\\ \Omega^{*}[r_{2,1}\otimes ((e_1+e_3)-(e_2+e_4))]\Omega&=\frac{\mathrm{i}}{2}[(e_{2,2}-e_{1,1})\otimes ((e_1+e_2)-(e_3+e_4))]\\ &-\frac12[(e_{1,2}+e_{2,1})\otimes ((e_1+e_3)-(e_2+e_4))] \end{align*} \item For tensors in $K\!D(2m)^{2k} \otimes K\!D(2m)^{2j+1}$: \begin{align*} \Omega^{*}(e_{1,2}\otimes r_{1,2})\Omega&=\frac{\mathrm{i}}{2}(e_{2,2}\otimes e_{1,1}-e_{1,1}\otimes e_{2,2})-\frac12(e_{1,2}\otimes e_{2,1}+e_{2,1}\otimes e_{1,2})\\ \Omega^{*}(e_{2,1}\otimes r_{1,2})\Omega&=\frac{\mathrm{i}}{2}(e_{1,1}\otimes e_{1,1}-e_{2,2}\otimes e_{2,2})-\frac12(e_{1,2}\otimes e_{1,2}+e_{2,1}\otimes e_{2,1})\\ \Omega^{*}(e_{1,2}\otimes r_{2,1})\Omega&=\frac{\mathrm{i}}{2}(e_{2,2}\otimes e_{2,2}-e_{1,1}\otimes e_{1,1})-\frac12(e_{1,2}\otimes e_{1,2}+e_{2,1}\otimes e_{2,1})\\ \Omega^{*}(e_{2,1}\otimes r_{2,1})\Omega&=\frac{\mathrm{i}}{2}(e_{1,1}\otimes e_{2,2}-e_{2,2}\otimes 
e_{1,1})-\frac12(e_{2,1}\otimes e_{1,2}+e_{1,2}\otimes e_{2,1}) \end{align*} \item For tensors in $(\mathbb{C} e_1\oplus \mathbb{C} e_2\oplus \mathbb{C} e_3\oplus \mathbb{C} e_4) \otimes K\!D(2m)^{2j+1}$: \begin{align*} \Omega^{*}[((e_1+e_3)-(e_2+e_4))\otimes r_{1,2}]\Omega&=\frac{\mathrm{i}}{2}[((e_1+e_2)-(e_3+e_4))\otimes(e_{1,1}-e_{2,2}) ]\\ &-\frac12[((e_1+e_3)-(e_2+e_4))\otimes(e_{1,2}+e_{2,1}) ]\\ \Omega^{*}[((e_1+e_3)-(e_2+e_4))\otimes r_{2,1}]\Omega&=\frac{\mathrm{i}}{2}[((e_1+e_2)-(e_3+e_4))\otimes(e_{2,2}-e_{1,1}) ]\\ &-\frac12[((e_1+e_3)-(e_2+e_4))\otimes(e_{1,2}+e_{2,1}) ]\\ \end{align*} \end{itemize} It follows that $\Delta_s(vB_1)=\Omega^{*}D_{0,1}\Omega+\Omega^{*}D_{1,0}\Omega$. Let us detail, for example, the calculation of $\Omega^{*}D_{1,0}\Omega$: \begin{align*} \Omega^{*}(B_1v\otimes B_0v)\Omega &=\sum_{j=0}^{m-1}\Omega^{*}\left((\epsilon^{-(2j+1)}r_{1,2}^{2j+1}+\epsilon^{2j+1}r_{2,1}^{2j+1})\otimes ((e_1+e_3)-(e_2+e_4))\right)\Omega\\ &+\sum_{j=0}^{m-1}\sum_{k=1}^{m-1} \Omega^{*}\left((\epsilon^{-(2j+1)}r_{1,2}^{2j+1}+\epsilon^{2j+1}r_{2,1}^{2j+1})\otimes (\epsilon^{-2k}e_{1,2}^{2k} +\epsilon^{2k}e_{2,1}^{2k})\right)\Omega \end{align*} \begin{align*} =&-\frac{\mathrm{i}}{2}\sum_{j=0}^{m-1}\left((\epsilon^{2j+1}e^{2j+1}_{1,1}+\epsilon^{-(2j+1)}e^{2j+1}_{2,2})-(\epsilon^{-(2j+1)}e^{2j+1}_{1,1}+\epsilon^{2j+1}e^{2j+1}_{2,2})\right)\otimes ((e_1+e_2)-(e_3+e_4)) \\ &-\frac12\sum_{j=0}^{m-1}\left((\epsilon^{-(2j+1)}e_{1,2}^{2j+1}+\epsilon^{2j+1}e_{2,1}^{2j+1})+(\epsilon^{2j+1}e_{1,2}^{2j+1}+\epsilon^{-(2j+1)}e_{2,1}^{2j+1})\right)\otimes ((e_1+e_3)-(e_2+e_4))\\ &+\frac{\mathrm{i}}{2}\sum_{j=0}^{m-1}\sum_{k=1}^{m-1} \epsilon^{-(2j+1)}e_{1,1}^{2j+1}\otimes \epsilon^{-2k}e_{1,1}^{2k}+\epsilon^{-(2j+1)}e_{1,1}^{2j+1}\otimes \epsilon^{2k}e_{2,2}^{2k} \\ &\phantom{-\frac12\sum_{j=0}^{m-1}\sum_{k=1}^{m-1}}+\epsilon^{2j+1}e_{2,2}^{2j+1}\otimes
\epsilon^{-2k}e_{1,1}^{2k}+\epsilon^{2j+1}e_{2,2}^{2j+1}\otimes \epsilon^{2k}e_{2,2}^{2k}\\ &\phantom{-\frac12\sum_{j=0}^{m-1}\sum_{k=1}^{m-1}}- \epsilon^{-(2j+1)}e_{2,2}^{2j+1}\otimes \epsilon^{-2k}e_{2,2}^{2k}-\epsilon^{-(2j+1)}e_{2,2}^{2j+1}\otimes \epsilon^{2k}e_{1,1}^{2k}\\ &\phantom{-\frac12\sum_{j=0}^{m-1}\sum_{k=1}^{m-1}}-\epsilon^{2j+1}e_{1,1}^{2j+1}\otimes \epsilon^{-2k}e_{2,2}^{2k}-\epsilon^{2j+1}e_{1,1}^{2j+1}\otimes \epsilon^{2k}e_{1,1}^{2k}\\ &-\frac12\sum_{j=0}^{m-1}\sum_{k=1}^{m-1} \epsilon^{-(2j+1)}e_{1,2}^{2j+1}\otimes \epsilon^{-2k}e_{1,2}^{2k}+\epsilon^{-(2j+1)}e_{2,1}^{2j+1}\otimes \epsilon^{-2k}e_{2,1}^{2k}\\ &\phantom{-\frac12\sum_{j=0}^{m-1}\sum_{k=1}^{m-1}}+\epsilon^{-(2j+1)}e_{1,2}^{2j+1}\otimes \epsilon^{2k}e_{2,1}^{2k}+\epsilon^{-(2j+1)}e_{2,1}^{2j+1}\otimes \epsilon^{2k}e_{1,2}^{2k}\\ &\phantom{-\frac12\sum_{j=0}^{m-1}\sum_{k=1}^{m-1}}+\epsilon^{2j+1}e_{2,1}^{2j+1}\otimes\epsilon^{-2k}e_{1,2}^{2k}+\epsilon^{2j+1}e_{1,2}^{2j+1}\otimes\epsilon^{-2k}e_{2,1}^{2k}\\ &\phantom{-\frac12\sum_{j=0}^{m-1}\sum_{k=1}^{m-1}}+\epsilon^{2j+1}e_{1,2}^{2j+1}\otimes\epsilon^{2k}e_{1,2}^{2k}+\epsilon^{2j+1}e_{2,1}^{2j+1}\otimes\epsilon^{2k}e_{2,1}^{2k}\\ =&-\frac{\mathrm{i}}{2}\sum_{j=0}^{m-1}(\epsilon^{2j+1}e^{2j+1}_{1,1}+\epsilon^{-(2j+1)}e^{2j+1}_{2,2})\otimes \left((e_1+e_2)-(e_3+e_4)) +\sum_{k=1}^{m-1} \epsilon^{2k}e_{1,1}^{2k}+\epsilon^{-2k}e_{2,2}^{2k}\right)\\ &+\frac{\mathrm{i}}{2}\sum_{j=0}^{m-1}(\epsilon^{-(2j+1)}e^{2j+1}_{1,1}+\epsilon^{2j+1}e^{2j+1}_{2,2}) \otimes \left((e_1+e_2)-(e_3+e_4))+\sum_{k=1}^{m-1}\epsilon^{-2k}e_{1,1}^{2k}+\epsilon^{2k}e_{2,2}^{2k}\right) \\ &-\frac12\sum_{j=0}^{m-1}(\epsilon^{-(2j+1)}e_{1,2}^{2j+1}+\epsilon^{2j+1}e_{2,1}^{2j+1})\otimes \left((e_1+e_3)-(e_2+e_4)+\sum_{k=1}^{m-1} \epsilon^{-2k}e_{1,2}^{2k}+\epsilon^{2k}e_{2,1}^{2k}\right)\\ &-\frac12\sum_{j=0}^{m-1}(\epsilon^{2j+1}e_{1,2}^{2j+1}+\epsilon^{-(2j+1)}e_{2,1}^{2j+1})\otimes \left((e_1+e_3)-(e_2+e_4)+\sum_{k=1}^{m-1} 
\epsilon^{2k}e_{1,2}^{2k}+\epsilon^{-2k}e_{2,1}^{2k}\right)\\ =&-\frac{\mathrm{i}}{2}\left(\lambda(a)B_1\otimes \lambda(a)B_0-\lambda(a^{-1})B_1\otimes \lambda(a^{-1})B_0\right)\\ &-\frac12\left(\lambda(ba)B_1\otimes \lambda(ba)B_0+\lambda(ba^{-1})B_1\otimes \lambda(ba^{-1})B_0\right)\,. \end{align*} The calculation of the coproduct of $w$ is analogous. \end{proof} \subsection{Structure of $K_{33}$ in $K_3$ of $K\!D(4m')$} \label{K33} The Jones projection of the coideal subalgebra\xspace $K_{33}$ of $K_3$ in $K\!D(4m')$ is $p_{33}=e_1+e_3+r^m_{1,1}$. Furthermore, \begin{align*} p_{33}&=e_1+e_3+r^m_{1,1}\\ &=\frac{1}{2n} \sum_{k=0}^{n-1}\lambda(a^{2k})+\lambda(ba^{2k+1})+\frac{1}{2n}\sum_{k=0}^{n-1}(-1)^k\lambda(a^{2k})+\frac{1}{2n}\sum_{k=0}^{n-1}(-1)^k\lambda(ba^{2k+1})\\ &=\frac{1}{n} \sum_{k=0}^{m-1}\lambda(a^{4k})+\lambda(ba^{4k+1}) \end{align*} Therefore $p_{33}$ is the Jones projection of the algebra $A$ of the subgroup $D_{2m'}=\langle a^4, ba\rangle$ in $\mathbb{C}[D_{2n}]$. Using e.g. Proposition~\ref{prop.resumep}, its standard coproduct can be expressed in terms of the matrix units of $A$. 
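As a quick numerical confirmation of the last equality (with our own encoding of $D_{2n}$, for the sample value $m'=2$, hence $n=8$ and $m=4$), one can check that the displayed sum is indeed the average over the subgroup $\langle a^4, ba\rangle$ of order $n$, hence a projection of trace $|D_{2n}|/n=4$:

```python
import numpy as np

mp = 2                      # m'; here n = 4m', m = 2m', and D_{2n} has 4n elements
n, m = 4 * mp, 2 * mp
N = 2 * n                   # order of the rotation a
elems = [(s, k) for s in (0, 1) for k in range(N)]   # b^s a^k
idx = {e: i for i, e in enumerate(elems)}

def mult(g, h):
    """Group law: b^s a^k . b^t a^l = b^{s+t} a^{(-1)^t k + l}."""
    s, k = g; t, l = h
    return ((s + t) % 2, ((-1) ** t * k + l) % N)

def lam(g):
    """Left regular representation of D_{2n} as a 4n x 4n permutation matrix."""
    M = np.zeros((4 * n, 4 * n))
    for h in elems:
        M[idx[mult(g, h)], idx[h]] = 1
    return M

# The displayed expression for p_33.
p33 = sum(lam((0, 4 * k)) + lam((1, 4 * k + 1)) for k in range(m)) / n

# Closure of {a^4, ba} under right multiplication gives the subgroup H.
H = {(0, 0)}
frontier = [(0, 0)]
for g in iter(lambda: frontier.pop() if frontier else None, None):
    for gen in [(0, 4), (1, 1)]:
        h = mult(g, gen)
        if h not in H:
            H.add(h); frontier.append(h)

assert len(H) == n                                        # |<a^4, ba>| = n
assert np.allclose(p33, sum(lam(h) for h in H) / len(H))  # p_33 is the group average
assert np.allclose(p33 @ p33, p33)                        # hence a projection
assert np.isclose(np.trace(p33), 4.0)                     # of trace |G|/|H| = 4
```

The trace value $4$ matches the four blocks of dimension $1$ of $K_{33}$ discussed below.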
This subalgebra admits four central projections, associated to the factors of dimension $1$: $\tilde{e}_1=p_{33}$, $\tilde{e}_2$, $\tilde{e}_3$, and $\tilde{e}_4$, with \begin{align*} \tilde{e}_2&=\frac{1}{2m} \sum_{k=0}^{m-1}(\lambda(a^{4k})-\lambda(ba^{4k+1}))\\ &=e_2+e_4+\frac{1}{2m}\sum_{j=1}^{n-1}\sum_{k=0}^{m-1}(\epsilon_m^{2jk} e^j_{1,1}+\epsilon_m^{-2jk} e^j_{2,2})-(\epsilon_m^{-2jk} \epsilon_n^{-j}e^j_{1,2}+\epsilon_m^{2jk}\epsilon_n^{j} e^j_{2,1})\\ &=e_2+e_4+r^m_{2,2} \end{align*} \begin{align*} \tilde{e}_3&=\frac{1}{2m} \sum_{k=0}^{m-1}(-1)^k(\lambda(a^{4k})-\lambda(ba^{4k+1}))\\ &=\frac{1}{2m}\sum_{j=1}^{n-1}\sum_{k=0}^{m-1}(-1)^k(\epsilon_m^{2jk} e^j_{1,1}+\epsilon_m^{-2jk} e^j_{2,2})-(-1)^k(\epsilon_m^{-2jk} \epsilon_n^{-j}e^j_{1,2}+\epsilon_m^{2jk}\epsilon_n^{j} e^j_{2,1})\\ &=\frac{1}{2m}\sum_{j=1}^{n-1}\sum_{k=0}^{m-1}(\epsilon_m^{2(m'+j)k} e^j_{1,1}+\epsilon_m^{-2(m'+j)k} e^j_{2,2})-(\epsilon_m^{-2(m'+j)k} \epsilon_n^{-j}e^j_{1,2}+\epsilon_m^{2(m'+j)k}\epsilon_n^{j} e^j_{2,1})\\ &=\frac{1}{2}(e^{m'}_{1,1}+e^{m'}_{2,2}+\mathrm{e}^{3i\pi/4}e^{m'}_{1,2}+\mathrm{e}^{-3i\pi/4}e^{m'}_{2,1})+\frac{1}{2}(e^{3m'}_{1,1}+e^{3m'}_{2,2}+\mathrm{e}^{i\pi/4}e^{3m'}_{1,2}+\mathrm{e}^{-i\pi/4}e^{3m'}_{2,1})\\ &=q(0,-3\pi/4,m')+q(0,-\pi/4,3m') \end{align*} \begin{align*} \tilde{e}_4&=\frac{1}{2m} \sum_{k=0}^{m-1}(-1)^k(\lambda(a^{4k})+\lambda(ba^{4k+1}))\\ &=\frac{1}{2m}\sum_{j=1}^{n-1}\sum_{k=0}^{m-1}(-1)^k(\epsilon_m^{2jk} e^j_{1,1}+\epsilon_m^{-2jk} e^j_{2,2})+(-1)^k(\epsilon_m^{-2jk} \epsilon_n^{-j}e^j_{1,2}+\epsilon_m^{2jk}\epsilon_n^{j} e^j_{2,1})\\ &=\frac{1}{2m}\sum_{j=1}^{n-1}\sum_{k=0}^{m-1}(\epsilon_m^{2(m'+j)k} e^j_{1,1}+\epsilon_m^{-2(m'+j)k} e^j_{2,2})+(\epsilon_m^{-2(m'+j)k} \epsilon_n^{-j}e^j_{1,2}+\epsilon_m^{2(m'+j)k}\epsilon_n^{j} e^j_{2,1})\\ &=\frac{1}{2}(e^{m'}_{1,1}+e^{m'}_{2,2}+\mathrm{e}^{-i\pi/4}e^{m'}_{1,2}+\mathrm{e}^{i\pi/4}e^{m'}_{2,1})+\frac{1}{2}(e^{3m'}_{1,1}+e^{3m'}_{2,2}+\mathrm{e}^{-3i\pi/4}e^{3m'}_{1,2}+\mathrm{e}^{3i\pi/4}e^{3m'}_{2,1})\\ &=q(0,\pi/4,m')+q(0,3\pi/4,3m') \end{align*} as well as $m'-1$ blocks of dimension $2$ whose matrix units are derived from the partial isometries $\tilde{e}^h_{1,2}$ with $h=1, \dots, m'-1$: \begin{align*} \tilde{e}^h_{1,2}&=\frac{1}{m} \sum_{k=0}^{m-1}\mathrm{e}^{ihk\pi/m'}\lambda(ba^{4k+1})\\ &=\frac{1}{m}\sum_{j=1}^{n-1} \sum_{k=0}^{m-1}\epsilon^{2hk}_{m}(\epsilon_m^{-2jk} \epsilon_n^{-j}e^j_{1,2}+\epsilon_m^{2jk}\epsilon_n^{j} e^j_{2,1})\\ &=\frac{1}{m}\sum_{j=1}^{n-1} \sum_{k=0}^{m-1}(\epsilon_m^{2(h-j)k} \epsilon_n^{-j}e^j_{1,2}+\epsilon_m^{2(h+j)k}\epsilon_n^{j} e^j_{2,1})\\ &=\epsilon_n^{-h}e^h_{1,2}+\epsilon_n^{-(m+h)}e^{m+h}_{1,2} +\epsilon_n^{(m-h)}e^{m-h}_{2,1}+\epsilon_n^{(n-h)}e^{n-h}_{2,1}\\ &=\epsilon_n^{-h}(e^h_{1,2}-e^{n-h}_{2,1})+\epsilon_n^{-(m+h)}(e^{m+h}_{1,2} -e^{n-(m+h)}_{2,1})\\ &=\epsilon_n^{-h}\left((e^h_{1,2}-e^{n-h}_{2,1})-i(e^{m+h}_{1,2} -e^{n-(m+h)}_{2,1})\right)\,. \end{align*} Upon deformation by $\Omega$, the right legs $\tilde{e}_1$, $\tilde{e}_2$, $\tilde{e}_3$, $\tilde{e}_4$, and $\tilde{e}^h_{i,j}$ for $h$ even of $\Delta_s(p_{33})$ are left unchanged; the remaining right legs $\tilde{e}^h_{i,j}$ for $h$ odd become $\epsilon_n^{-h}[(r^h_{1,2}-r^{n-h}_{2,1})-i(r^{m+h}_{1,2}-r^{n-(m+h)}_{2,1})]$. Therefore, $K_{33}$ admits exactly four blocks of dimension $1$, as is the case for $A$. \subsection{Calculations for Theorem~\ref{self-dual}} \label{psicoproduit} We complete the proof of the self-duality of $K\!D(n)$ for $n$ odd (Theorem~\ref{self-dual}) by calculating explicitly $\psi(\lambda(a))$ and $\psi(\lambda(b))$ and checking that their coproducts are preserved by $\psi$. The calculations are straightforward albeit lengthy; regrettably, we could not delegate them to the computer in the general case.
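For any fixed $n$, however, the group-algebra identities entering these calculations can be checked numerically in the left regular representation. A minimal Python sketch (our own encoding of $D_{2n}$, not from the paper, illustrated for $n=3$) verifying the matrix-unit expansion of $\lambda(b)$ from~\ref{lambda} and~\ref{form2}, together with a sample matrix-unit relation:

```python
import numpy as np

n = 3                       # any odd n works; D_{2n} here has 4n elements b^s a^k
N = 2 * n                   # order of the rotation a
elems = [(s, k) for s in (0, 1) for k in range(N)]
idx = {e: i for i, e in enumerate(elems)}

def mult(g, h):
    """Group law: b^s a^k . b^t a^l = b^{s+t} a^{(-1)^t k + l}."""
    s, k = g; t, l = h
    return ((s + t) % 2, ((-1) ** t * k + l) % N)

def lam(g):
    """Left regular representation of D_{2n} as a 4n x 4n permutation matrix."""
    M = np.zeros((4 * n, 4 * n), dtype=complex)
    for h in elems:
        M[idx[mult(g, h)], idx[h]] = 1
    return M

A = [lam((0, k)) for k in range(N)]                  # lambda(a^k)
B = [lam((1, k)) for k in range(N)]                  # lambda(b a^k)

# Projections and matrix units in the group basis (formulas of Appendix A.2).
e1 = sum(A[k] + B[k] for k in range(N)) / (4 * n)
e2 = sum(A[k] - B[k] for k in range(N)) / (4 * n)
e3 = sum((-1) ** k * (A[k] - B[k]) for k in range(N)) / (4 * n)
e4 = sum((-1) ** k * (A[k] + B[k]) for k in range(N)) / (4 * n)
def e11(j): return sum(np.exp(-1j * j * k * np.pi / n) * A[k] for k in range(N)) / N
def e12(j): return sum(np.exp(1j * j * k * np.pi / n) * B[k] for k in range(N)) / N
def e21(j): return sum(np.exp(-1j * j * k * np.pi / n) * B[k] for k in range(N)) / N

rhs = e1 - e2 - e3 + e4 + sum(e12(j) + e21(j) for j in range(1, n))
assert np.allclose(e1 @ e1, e1)                  # e_1 is a projection
assert np.allclose(e11(1) @ e12(1), e12(1))      # matrix-unit relation
assert np.allclose(lam((1, 0)), rhs)             # expansion of lambda(b)
```

The last assertion is exactly the first expansion of $\lambda(b)$ used in the lemma below.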
\subsubsection{Preservation of the coproduct of $\lambda(b)$} \begin{lemma} $$\psi(\lambda(b))=4n\widehat{e_4} \qquad \text{and} \qquad \Delta(\psi(\lambda(b)))=(\psi\otimes\psi)(\Delta(\lambda(b)))\,.$$ \end{lemma} \begin{proof} From~\ref{lambda} and~\ref{q}, one can write: \begin{align*} \lambda(b)&=e_1-e_2-e_3+e_4+\sum_{j=1}^{n-1}e^j_{1,2}+e^j_{2,1}\\ &=e_1-e_2-e_3+e_4+\sum_{k=1,\, k \text{ odd}}^{n-1}-(r^k_{1,2}+r^k_{2,1})+e^{n-k}_{1,2}+e^{n-k}_{2,1}\,. \end{align*} It follows that \begin{align*} \psi(\lambda(b))=&\,\chi_1-\chi_{a^n}-\chi_{ba^n}+\chi_{b}+\sum_{k=1,\, k \text{ odd}}^{n-1}-(E^k_{1,2}+E^k_{2,1})+E^{n-k}_{1,2}+E^{n-k}_{2,1}\\ =&\,\chi_1-\chi_{a^n}-\chi_{ba^n}+\chi_{b}\\ &+\sum_{k=1,\, k \text{ odd}}^{n-1}\chi_{ba^{n-k}}+\chi_{ba^{n+k}} -\chi_{ba^k}-\chi_{ba^{2n-k}}-\chi_{a^k}-\chi_{a^{2n-k}}+\chi_{a^{n+k}}+\chi_{a^{n-k}}\\ =&\sum_{j=0}^{2n-1} (-1)^j(\chi_{a^j}+\chi_{ba^j})=4n\widehat{e_4}\,. \end{align*} Then, \begin{align*} &(\psi\otimes\psi)(\Delta(\lambda(b)))=16n^2\widehat{e_4}\otimes \widehat{e_4}\\ =&\sum_{j=0}^{2n-1}\sum_{k=0}^{2n-1} (-1)^{k+j}(\chi_{a^j}+\chi_{ba^j})\otimes (\chi_{a^k}+\chi_{ba^k})\\ =&\sum_{j=0}^{2n-1}\sum_{k=0}^{2n-1} (-1)^{k+j}\left(\chi_{a^j}\otimes (\chi_{a^{-j}a^{j+k}}+\chi_{a^{-j}ba^{k-j}})+ \chi_{ba^j}\otimes (\chi_{(a^{-j}b)ba^{k+j}}+\chi_{(a^{-j}b)a^{k-j}})\right)\,; \end{align*} and since $j+k$ and $j-k$ have the same parity: \begin{align*} (\psi\otimes\psi)(\Delta(\lambda(b)))&=\sum_{j=0}^{2n-1}\sum_{s\in D_{2n}} (-1)^{j}\chi_{s}\otimes (\chi_{s^{-1}a^{j}}+\chi_{s^{-1}ba^{j}})=\Delta(4n\widehat{e_4}) = \Delta(\psi(\lambda(b)))\,.\qedhere \end{align*} \end{proof} \subsubsection{Preservation of the coproduct of $\lambda(a)$} \begin{lemma} \begin{displaymath} \psi(\lambda(a))=n(2\widehat{e^{n-1}_{1,1}}+ \widehat{e^1_{2,2}}- (\widehat{e^1_{1,1}}+\widehat{e^{n-1}_{1,2}}+\widehat{e^{n-1}_{2,1}}))\quad \text{and} \quad \Delta(\psi(\lambda(a)))=(\psi\otimes\psi)(\Delta(\lambda(a)))\,.
\end{displaymath} \end{lemma} We prove this lemma in the following subsections. In order to split the calculations, using~\ref{lambda} and $\epsilon=\mathrm{e}^{\mathrm{i}\pi/n}$, we write: $\lambda(a)=U+V$, with $$U=e_1+e_2+\sum_{k=1,\, k \text{ even}}^{n-1} \epsilon^k e^k_{1,1}+ \epsilon^{-k} e^k_{2,2}\quad \text{and} \quad V=-e_3-e_4+\sum_{k=1,\,k \text{ odd}}^{n-1} \epsilon^k e^k_{1,1}+ \epsilon^{-k} e^k_{2,2}\,.$$ Hence, since all the terms of $U-(e_1+e_2)$ (resp. $V+(e_3+e_4)$) are in $M_2(\mathbb{C})$ factors of the same parity, we may use Remark~\ref{remark.omega} to calculate the coproducts of $U$ and $V$. \subsubsection{Expression of $\psi(U)$ and $\psi(V)$}\label{lambdaa} \begin{lem} $$\psi(U)=2n\widehat{e^{n-1}_{1,1}},\qquad\psi(V)=n (\widehat{e^1_{2,2}}-\widehat{e^1_{1,1}}-\widehat{e^{n-1}_{1,2}}-\widehat{e^{n-1}_{2,1}})\,.$$ \end{lem} \begin{proof} On the one hand, \begin{align*} \psi(U)&=\chi_1+\chi_{a^n}+\sum_{k=1,\, k \text{ odd}}^{n-1}\epsilon^{n-k} E^{n-k}_{1,1}+ \epsilon^{-(n-k)} E^{n-k}_{2,2}\\ &= \chi_1+\chi_{a^n}+\sum_{j=1,\, j \text{ even}}^{n-1}\epsilon^j (\chi_{a^j}+\chi_{a^{n+j}})+ \epsilon^{-j} (\chi_{a^{n-j}}+\chi_{a^{2n-j}})\\ &=\sum_{j=0}^{2n-1}(-1)^j\epsilon^j\chi_{a^j} =\sum_{j=0}^{2n-1}\epsilon^{-(n-1)j}\chi_{a^j}=2n\widehat{e^{n-1}_{1,1}}\,. \end{align*} On the other hand, \begin{align*} (V+V^*)&=2(-e_3-e_4)+\sum_{k=1,\, k \text{ odd}}^{n-1} (\epsilon^k+\epsilon^{-k}) (r^k_{1,1}+ r^k_{2,2})\\ (V-V^*)&={\rm i}\sum_{k=1,\, k \text{ odd}}^{n-1} (\epsilon^k-\epsilon^{-k})(r^k_{2,1}-r^k_{1,2})\,.
\end{align*} It follows that: \begin{align*} \psi(V+V^*)&=2(-\chi_{ba^n}-\chi_{b})+\sum_{k=1,\, k \text{ odd}}^{n-1}(\epsilon^k + \epsilon^{-k}) (\chi_{ba^{n+k}}+\chi_{ba^{k}}+\chi_{ba^{2n-k}}+\chi_{ba^{n-k}})\\ &=- \sum_{j=0}^{2n-1}(\epsilon^{(n-1)j} + \epsilon^{-(n-1)j}) \chi_{ba^{j}} =-2n (\widehat{e^{n-1}_{1,2}}+\widehat{e^{n-1}_{2,1}})\,,\\ \psi(V-V^*)&=\sum_{k=1,\, k \text{ odd}}^{n-1}(\epsilon^k - \epsilon^{-k}) (\chi_{a^{k}}+\chi_{a^{n-k}}-\chi_{a^{n+k}}-\chi_{a^{2n-k}})\\ &=\sum_{j=0}^{2n-1}(\epsilon^j - \epsilon^{-j}) \chi_{a^{j}} =2n (\widehat{e^1_{2,2}}-\widehat{e^1_{1,1}})\,, \end{align*} which gives the desired expression for $\psi(V)$. The expressions for $\psi(V+V^*)$ and $\psi(V-V^*)$ will be reused later on. \end{proof} \subsubsection{Coproducts of $\psi(U)$ and $\psi(V)$}\label{UV} \begin{lem} $$\Delta(\psi(U)) =4n^2(\widehat{e^{n-1}_{1,1}}\otimes \widehat{e^{n-1}_{1,1}}+\widehat{e^{n-1}_{1,2}}\otimes \widehat{e^{n-1}_{2,1}})\,.$$ \begin{multline*} \Delta(\psi(V))=2n^2(\widehat{e^1_{2,2}} \otimes\widehat{e^1_{2,2}} +\widehat{e^1_{2,1}} \otimes \widehat{e^1_{1,2}} -\widehat{e^1_{1,1}} \otimes\widehat{e^1_{1,1}} -\widehat{e^1_{1,2}} \otimes \widehat{e^1_{2,1}}\\ -\widehat{e^{n-1}_{2,2}} \otimes \widehat{e^{n-1}_{2,1}} -\widehat{e^{n-1}_{2,1}} \otimes \widehat{e^{n-1}_{1,1}} -\widehat{e^{n-1}_{1,1}} \otimes \widehat{e^{n-1}_{1,2}} -\widehat{e^{n-1}_{1,2}} \otimes \widehat{e^{n-1}_{2,2}}) \end{multline*} \end{lem} \begin{proof} \begin{align*} \Delta(\psi(U))&=\sum_{k=0}^{2n-1}\sum_{j=0}^{2n-1}\epsilon^{-(n-1)j}(\chi_{a^k}\otimes\chi_{a^{j-k}}+\chi_{a^k b}\otimes\chi_{ba^{j-k}})\\ &=\sum_{k=0}^{2n-1}\sum_{j=0}^{2n-1}\epsilon^{-(n-1)(k+j)}(\chi_{a^k}\otimes\chi_{a^{j}}+\chi_{a^kb}\otimes\chi_{ba^{j}})\\ &=4n^2(\widehat{e^{n-1}_{1,1}}\otimes \widehat{e^{n-1}_{1,1}}+\widehat{e^{n-1}_{1,2}}\otimes \widehat{e^{n-1}_{2,1}})\,. 
\end{align*} Similar calculations give: \begin{align*} \Delta(2n \widehat{e^1_{2,2}}) &= 4n^2(\widehat{e^1_{2,2}} \otimes\widehat{e^1_{2,2}} +\widehat{e^1_{2,1}} \otimes \widehat{e^1_{1,2}})\,,\\ \Delta(2n \widehat{e^{n-1}_{1,2}}) &= 4n^2(\widehat{e^{n-1}_{2,2}} \otimes \widehat{e^{n-1}_{2,1}} +\widehat{e^{n-1}_{1,2}} \otimes \widehat{e^{n-1}_{1,1}})\,. \end{align*} The announced formula follows using the equalities $\widehat{e^j_{2,2}}=\widehat{e^j_{1,1}}^*$ and $\widehat{e^j_{2,1}}=\widehat{e^j_{1,2}}^*$. \end{proof} \subsubsection{Image of $\Delta(U)$ by $\psi\otimes \psi$} We now turn to the preservation by $\psi$ of the coproducts of $U$ and $V$, and therefore of $\lambda(a)=U+V$. Since $\lambda(a^{n+1})=U-V$, we can calculate the coproducts of $U=1/2(\lambda(a)+\lambda(a^{n+1}))$ and $V=1/2(\lambda(a)-\lambda(a^{n+1}))$ using the expression for $\Omega$ for $n$ odd given in~\ref{omega}. Let us start with $U$: \begin{align*} &\Delta(U)=\Omega(U\otimes U+V\otimes V)\Omega^*\\ &=(e_1+e_2+q_1+q_2)\otimes(e_1+e_2+q_1+q_2)(U\otimes U) (e_1+e_2+q_1+q_2)\otimes(e_1+e_2+q_1+q_2) \\ &+(e_3-ie_4-ip_1+p_2)\otimes(e_3+ie_4+ip_1+p_2)(V\otimes V) (e_3+ie_4+ip_1+p_2)\otimes (e_3-ie_4-ip_1+p_2)\\ &=U\otimes U + V'\otimes V'^*\,, \end{align*} where $V'=(-e_3-e_4+\sum_{k=1,\, k \text{ odd}}^{n-1} \epsilon^k r^k_{2,2}+ \epsilon^{-k} r^k_{1,1})$. The image of $V'$ by $\psi$ is: \begin{displaymath} \psi(V')=-\chi_{ba^n}-\chi_{b}+\sum_{k=1,\, k \text{ odd}}^{n-1}\epsilon^k E^k_{2,2}+ \epsilon^{-k} E^k_{1,1} =-\sum_{j=0}^{2n-1}\epsilon^{(n-1)j} \chi_{ba^{j}} =-2n\widehat{e^{n-1}_{1,2}}\,. \end{displaymath} It follows that $\psi(V'^*)=-2n\widehat{e^{n-1}_{2,1}}$, and we get: \begin{displaymath} (\psi\otimes \psi)(\Delta(U))=4n^2(\widehat{e^{n-1}_{1,1}}\otimes \widehat{e^{n-1}_{1,1}}+\widehat{e^{n-1}_{1,2}}\otimes \widehat{e^{n-1}_{2,1}})\,. \end{displaymath} Using Lemma~\ref{UV}, we conclude that $\psi$ preserves the coproduct of $U$. 
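A step that is easy to misread in these computations is the conversion of alternating signs into powers of $\epsilon=\mathrm{e}^{\mathrm{i}\pi/n}$: since $\epsilon^n=-1$ and $\epsilon^{2n}=1$, one has $(-1)^j\epsilon^j=\epsilon^{(n+1)j}=\epsilon^{-(n-1)j}$, which is exactly the rewriting used for $\psi(U)$ above. A quick numerical check of this identity (plain Python, not from the original worksheet):

```python
import cmath

def eps(n: int) -> complex:
    """The primitive 2n-th root of unity used in the paper: e^{i*pi/n}."""
    return cmath.exp(1j * cmath.pi / n)

# Check (-1)^j * eps^j == eps^{-(n-1)j} for several odd n and all j mod 2n.
for n in (3, 5, 7, 9):
    e = eps(n)
    for j in range(2 * n):
        lhs = (-1) ** j * e ** j
        rhs = e ** (-(n - 1) * j)
        assert abs(lhs - rhs) < 1e-9
print("sign identity verified")
```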
\subsubsection{Image of $\Delta(V)$ by $\psi\otimes \psi$} We now calculate the image of $\Delta(V)$ by $\psi\otimes \psi$: \begin{align*} \Delta(V)&=\Omega(U\otimes V+V\otimes U)\Omega^* \,,\\ &=U_1\otimes V_1+U_2\otimes V_2+i(U_3\otimes V_3-U_4\otimes V_4)\\ &+V_1\otimes U_1+V_2\otimes U_2-i(V_3\otimes U_3-V_4\otimes U_4)\,, \end{align*} where: \begin{align*} U_1&=(e_1+q_1)U(e_1+q_1)=e_1+\sum_{k=1,\, k \text{ even}}^{n-1} \frac{1}{2}(\epsilon^k + \epsilon^{-k})p^k_{1,1}\,,\\ V_1&=(e_3+e_4+p_1+p_2)V(e_3+e_4+p_1+p_2)=V\,,\\ U_2&= (e_2+q_2)U (e_2+q_2)=e_2+\sum_{k=1,\, k \text{ even}}^{n-1} \frac{1}{2}(\epsilon^k + \epsilon^{-k})p^k_{2,2}\,,\\ V_2&=(e_3-e_4-p_1+p_2)V(e_3-e_4-p_1+p_2)=V^*\,,\\ \end{align*} \begin{align*} U_3&=(e_2+q_2)U(e_1+q_1)=\sum_{k=1,\, k \text{ even}}^{n-1} \frac{1}{4}(\epsilon^k - \epsilon^{-k})(e^k_{1,1}+e^k_{1,2}-e^k_{2,1}-e^k_{2,2})\,,\\ V_3&=(e_3-e_4-p_1+p_2)V(e_3+e_4+p_1+p_2)=-e_3+e_4-\sum_{k=1,\, k \text{ odd}}^{n-1} \epsilon^{-k} e^{k}_{1,2}+ \epsilon^{k} e^k_{2,1}\,\\ U_4&=(e_1+q_1)U(e_2+q_2)=\sum_{k=1,\, k \text{ even}}^{n-1} \frac{1}{4}(\epsilon^k - \epsilon^{-k})(e^k_{1,1}-e^k_{1,2}+e^k_{2,1}-e^k_{2,2})\,,\\ V_4&=(e_3+e_4+p_1+p_2)V(e_3-e_4-p_1+p_2)=-e_3+e_4-\sum_{k=1,\, k \text{ odd}}^{n-1} \epsilon^k e^{k}_{1,2}+ \epsilon^{-k} e^k_{2,1}\,. \end{align*} To split the calculation into more manageable chunks, we write: \begin{align*} 2[U_1\otimes V_1&+U_2\otimes V_2+i(U_3\otimes V_3-U_4\otimes V_4)]=\\ &(U_1+U_2)\otimes (V_1+V_2)+(U_1-U_2)\otimes (V_1-V_2)\\ &+\rm{i}[(U_3-U_4)\otimes (V_3+V_4)+(U_3+U_4)\otimes (V_3-V_4)]\,. 
\end{align*} \begin{lem} \begin{align*} \psi(U_1+U_2)&= n(\widehat{e^{n-1}_{1,1}}+\widehat{e^{n-1}_{2,2}})\,,& \psi(U_1-U_2)&= n(\widehat{e^{1}_{1,1}} +\widehat{e^{1}_{2,2}})\,,\\ \psi(U_3+U_4)&= n(\widehat{e^{n-1}_{1,1}}-\widehat{e^{n-1}_{2,2}})\,,& \psi(U_3-U_4)&= {\rm i}n(\widehat{e^{1}_{1,2}} -\widehat{e^{1}_{2,1}})\,,\\ \psi(V_1+V_2)&= -2n(\widehat{e^{n-1}_{1,2}}-\widehat{e^{n-1}_{2,1}})\,,& \psi(V_1-V_2)&= 2n(\widehat{e^1_{2,2}} -\widehat{e^1_{1,1}})\,,\\ \psi(V_3+V_4)&= 2n(\widehat{e^1_{1,2}} +\widehat{e^1_{2,1}})\,,& \psi(V_3-V_4)&=2{\rm i}n(\widehat{e^{n-1}_{1,2}}-\widehat{e^{n-1}_{2,1}})\,. \end{align*} \end{lem} \begin{proof} \begin{align*} \psi(U_1+U_2)&=\psi(e_1+e_2+\sum_{k=1,\, k \text{ even}}^{n-1} \frac{1}{2}(\epsilon^k + \epsilon^{-k})(e^k_{1,1}+e^k_{2,2}))\\ &=\chi_1+\chi_{a^n}+\sum_{k=1,\, k \text{ even}}^{n-1} \frac{1}{2}(\epsilon^k + \epsilon^{-k})(\chi_{a^{n-k}}+\chi_{a^{2n-k}}+\chi_{a^k}+\chi_{a^{n+k}})=n(\widehat{e^{n-1}_{1,1}}+\widehat{e^{n-1}_{2,2}})\,, \\ \psi(U_1-U_2)&=\psi(e_1-e_2+\sum_{k=1,\, k \text{ even}}^{n-1} \frac{1}{2}(\epsilon^k + \epsilon^{-k})(e^k_{1,2}+e^k_{2,1}))\\ &=\chi_1-\chi_{a^n}+\sum_{k=1,\, k \text{ even}}^{n-1} \frac{1}{2}(\epsilon^k + \epsilon^{-k})(-\chi_{a^{n-k}}+\chi_{a^{2n-k}}+\chi_{a^k}-\chi_{a^{n+k}})=n(\widehat{e^{1}_{1,1}}+\widehat{e^{1}_{2,2}})\,, \end{align*} \begin{align*}\psi(U_3+U_4)&=\psi(\sum_{k=1,\, k \text{ even}}^{n-1} \frac{1}{2}(\epsilon^k - \epsilon^{-k})(e^k_{1,1}-e^k_{2,2}))\\ &=\sum_{k=1,\, k \text{ even}}^{n-1} \frac{1}{2}(\epsilon^k - \epsilon^{-k})(-\chi_{a^{n-k}}-\chi_{a^{2n-k}}+\chi_{a^k}+\chi_{a^{n+k}})=n(\widehat{e^{n-1}_{1,1}}-\widehat{e^{n-1}_{2,2}})\\ \psi(U_3-U_4)&=\psi(\sum_{k=1,\, k \text{ even}}^{n-1} \frac{1}{2}(\epsilon^k - \epsilon^{-k})(e^k_{1,2}-e^k_{2,1}))\,,\\ &={\rm i}\sum_{k=1,\, k \text{ even}}^{n-1} \frac{1}{2}(\epsilon^k - \epsilon^{-k})(\chi_{ba^k}-\chi_{ba^{2n-k}}-\chi_{ba^{n+k}}+\chi_{ba^{n-k}})={\rm i} n(\widehat{e^{1}_{1,2}}-\widehat{e^{1}_{2,1}})\,. 
\end{align*} The expressions of $\psi(V_1+V_2)$ and $\psi(V_1-V_2)$ follow from Lemma~\ref{lambdaa}. Similarly, one calculates: \begin{align*} \psi(V_3+V_4)&=\psi(2(-e_3-e_4)+\sum_{k=1,\, k \text{ odd}}^{n-1} (\epsilon^k+\epsilon^{-k}) (r^k_{1,2}+ r^k_{2,1}))\\ &=-2\chi_{ba^n}+2\chi_b+\sum_{k=1,\, k \text{ odd}}^{n-1} (\epsilon^k+\epsilon^{-k}) (\chi_{ba^k}+\chi_{ba^{2n-k}}-\chi_{ba^{n+k}}-\chi_{ba^{n-k}})\\ &=2n(\widehat{e^1_{1,2}}+\widehat{e^1_{2,1}})\,,\end{align*} \begin{align*} \psi(V_3-V_4)&=\psi(\rm{i}\sum_{k=1,\, k \text{ odd}}^{n-1} (\epsilon^k-\epsilon^{-k})(r^k_{1,1}-r^k_{2,2}))\\ &={\rm i}\sum_{k=1,\, k \text{ odd}}^{n-1} (\epsilon^k-\epsilon^{-k})(\chi_{ba^{n+k}}+\chi_{ba^{k}}-\chi_{ba^{2n-k}}-\chi_{ba^{n-k}}) \\ &= 2{\rm i} n (\widehat{e^{n-1}_{1,2}}-\widehat{e^{n-1}_{2,1}})\,, \end{align*} which concludes the proof of this lemma. \end{proof} We can at last check that the coproduct of $V$ is preserved: \begin{align*} n^{-2}(\psi\otimes \psi)(\Delta(V))&= -(\widehat{e^{n-1}_{1,1}}+\widehat{e^{n-1}_{2,2}})\otimes(\widehat{e^{n-1}_{1,2}}+\widehat{e^{n-1}_{2,1}}) +(\widehat{e^{1}_{1,1}}+\widehat{e^{1}_{2,2}})\otimes (\widehat{e^1_{2,2}}-\widehat{e^1_{1,1}})\\ &-(\widehat{e^{n-1}_{1,2}}+\widehat{e^{n-1}_{2,1}})\otimes(\widehat{e^{n-1}_{1,1}}+\widehat{e^{n-1}_{2,2}}) +(\widehat{e^1_{2,2}}-\widehat{e^1_{1,1}})\otimes (\widehat{e^{1}_{1,1}}+\widehat{e^{1}_{2,2}}) \\ &-(\widehat{e^{1}_{1,2}}-\widehat{e^{1}_{2,1}})\otimes (\widehat{e^1_{1,2}}+\widehat{e^1_{2,1}}) -(\widehat{e^{n-1}_{1,1}}-\widehat{e^{n-1}_{2,2}})\otimes (\widehat{e^{n-1}_{1,2}}-\widehat{e^{n-1}_{2,1}})\\ &+(\widehat{e^1_{1,2}}+\widehat{e^1_{2,1}})\otimes(\widehat{e^{1}_{1,2}}-\widehat{e^{1}_{2,1}})+(\widehat{e^{n-1}_{1,2}}-\widehat{e^{n-1}_{2,1}})\otimes(\widehat{e^{n-1}_{1,1}}-\widehat{e^{n-1}_{2,2}})\\ &=n^{-2}\Delta(\psi(V))\,.\qedhere \end{align*} \section{Computer exploration with \texttt{MuPAD}\xspacecombinat{}} \label{section.computerExploration} Most of the research we report on in this 
paper has been driven by computer exploration. In this section, we briefly describe the tools we designed, implemented, and used, present typical computations, and discuss some exploration strategies. To this end, we use the construction of all coideal subalgebra\xspaces{} of $K\!D(3)$ as a running example. We recommend skimming through this demonstration first, in order to get a rough idea of which computations are achievable and which are not. \subsection{Software context} Our work is based on \texttt{MuPAD}\xspacecombinat~\cite{MuPAD-Combinat}, an open-source algebraic combinatorics package for the computer algebra system \texttt{MuPAD}\xspace~\cite{MuPAD.96}. Among other things, it provides a high-level framework for implementing Hopf algebras and the like. All the extensions we wrote for this research are publicly available from the developers' repository (see \url{http://mupad-combinat.sf.net/}); in fact, the two authors used that repository to share the code between them. With this framework, new finite dimensional Kac algebras obtained by deformation of group algebras may be implemented concisely (the full implementation of the Kac-Paljutkin algebra takes about 50 lines of code, including comments). Most of the code is fairly generic, has already been integrated into the \texttt{MuPAD}\xspacecombinat{} core, and has benefited several unrelated research projects. The remaining project-specific code is provided as a separate worksheet. Feel free to contact the second author for help. Since then, the \texttt{MuPAD}\xspacecombinat project has been migrated to the completely open-source platform \texttt{Sage}~\cite{Sage,Sage-Combinat}. Specific extensions, like this one, are migrated according to their usefulness for new research projects. \subsection{Setup} The first step is to start a new \texttt{MuPAD}\xspace{} session and set up the stage for the computations (think of it as the preliminaries section of a research paper, which defines shorthand notations).
We load the \texttt{MuPAD}\xspacecombinat{} package by issuing: \begin{Mexin} package("Combinat") \end{Mexin} Next we load a worksheet which contains code and shorthand notations specific to this research project. For the user's convenience, a short help message is displayed. \begin{Mexin} read("experimental/2005-09-08-David.mu"): \end{Mexin} \begin{Mexout} ////////////////////////////////////////////////////////////////////// Loading worksheet: Twisted Kac algebras Cf. p. 715 of '2-cocycles and twisting of Kac algebras' Version: $Id: KD.tex 433 2010-12-06 15:00:19Z nthiery $ To update to the latest version, go to the MuPAD-Combinat directory and type: svn update -d Content: G := DihedralGroup(4) -- the dihedral group TwistedDihedralGroupAlgebra: KD4 := TwistedDihedralGroupAlgebra(4): KD4 := KD(4): -- shortcut KD4::G = KD4::group -- KD4 expressed on group elements KD4::G([3,1]) -- a^3 b KD4::M = KD4::matrix -- KD4 expressed as block diagonal matrices KD4::G::tensorSquare -- the tensor product KD4::G # KD4::G KD4::M::tensorSquare -- the tensor product KD4::M # KD4::M KD4::coeffRing -- the coefficient field KD4::coeffRing::primitiveUnitRoot(4)-- the complex value I KD4::M(x), KD4::G(x) -- conversions between bases KD4::e(1), KD4::e(2,2,1) -- matrix units KD4::p(2,2,j), KD4::r(2,2,j) -- some projections of the j-th block KD4::p1, KD4::p2, KD4::q1, KD4::q1 -- some projections KD4::G::Omega -- Omega in the group basis KD4::M::tensorSquare(KD4::G::Omega) -- Omega in the matrix basis KD4::M::coproductAsMatrix(e(1)) -- the coproduct of e(1) as a matrix // Short hands, e.g.
to write e(2,2,1) instead of KD4::e(2,2,1) export(KD4, Alias, e, p1, p2, q1, q2): TwistedQuaternionGroupAlgebra(N) KQ4 := TwistedDihedralGroupAlgebra(4): KQ4 := KD(4): -- shortcut Same usage as for KD(N) Isomorphism KD(2N) <-> KQ(2N): KQ4::G(KD4::G([1,0])): -- The image of a of KD4 in KQ4 KD4::G(KQ4::G([0,1])): -- The image of b of KQ4 in KD4 Computing with coideal subalgebras: algebraClosure([a,b,c]) -- A basis of the subalgebra generated by a,b,c coidealClosure([a,b,c]) -- A basis of the coideal generated by a,b,c coidealAndAlgebraClosure([a,b,c]) -- A basis of the coideal subalgebra ... echelonForm([a,b,c], Reduced) SkewTensorProduct(A, B) -- Skew tensor product of A and B (A being the dual of B) coidealDual([ p ]) -- Basis of the dual of the left coideal generated by p A sample computation: M := KQ(4): Fbasis := coidealAndAlgebraClosure([M::e(1) + M::e(2)]): F := Dom::SubFreeModule(Fbasis, [Cat::FiniteDimensionalHopfAlgebraWithBasis(M::coeffRing)]): Fdual := Dom::DualOfFreeModule(F): G := Fdual::intrinsicGroup(): G::list() ////////////////////////////////////////////////////////////////////// \end{Mexout} Mind that this worksheet is experimental; for further help one needs to dig into the code. On the other hand, all features that are integrated into \texttt{MuPAD}\xspacecombinat{} or \texttt{MuPAD}\xspace{} are documented within the usual \texttt{MuPAD}\xspace{} help system. 
\subsection{Computing with elements} Let us define $K\!D(3)$: \begin{Mexin} KD3 := KD(3): \end{Mexin} and shortcuts to its generators: \begin{Mexin} [aa,bb] := KD3::group::algebraGenerators::list() \end{Mexin} \begin{Mexout} [B(a), B(b)] \end{Mexout} Now we can use \texttt{MuPAD}\xspace{} as a pocket calculator: \begin{Mexin} bb^2 \end{Mexin} \begin{Mexout} B(1) \end{Mexout} The point is that \texttt{MuPAD}\xspace{} knows that \texttt{bb} lies in $K\!D(3)$ (more precisely, the object \texttt{bb} is in the domain \texttt{KD3::group}\footnote{In \texttt{MuPAD}\xspace{} parlance, concrete classes are called \emph{domains}}), and therefore applies the corresponding computation rules (usual object oriented programming paradigm). Here are some further simple computations: \begin{Mexin} aa^2, aa^6, bb*aa \end{Mexin} \begin{Mexout} 2 5 B(a ), B(1), B(a b) \end{Mexout} and a more complicated one: \begin{Mexin} (1 - aa^3)*(1 + aa^3) + 1/2*bb*aa^3 \end{Mexin} \begin{Mexout} 3 1/2 B(a b) \end{Mexout} Note that all computations above are done in the group algebra. Namely, \texttt{KD3::group} (or \texttt{KD3::G}) models the concrete algebra $K\!D(3)$ with its elements expanded on the group basis. However, $K\!D(3)$ can also be represented as a block-matrix algebra, with matrix units as basis, and it is often more convenient or efficient to do the computations there. 
This basis is modeled by the domain \texttt{KD3::matrix} (or \texttt{KD3::M} for short), and the change of basis is done in the natural way: \begin{Mexin} KD3::M(aa + 2*bb) \end{Mexin} \begin{Mexoutsmall} +- -+ | 3, 0, 0, 0, 0, 0, 0, 0 | | | | 0, -1, 0, 0, 0, 0, 0, 0 | | | | 0, 0, -3, 0, 0, 0, 0, 0 | | | | 0, 0, 0, 1, 0, 0, 0, 0 | | | | 0, 0, 0, 0, epsilon, 2, 0, 0 | | | | 0, 0, 0, 0, 2, 1 - epsilon, 0, 0 | | | | 0, 0, 0, 0, 0, 0, epsilon - 1, 2 | | | | 0, 0, 0, 0, 0, 0, 2, -epsilon | +- -+ \end{Mexoutsmall} Some comments are in order: \begin{itemize} \item An element of \texttt{KD::M} is displayed as a single large matrix; however, the four $1\times 1$ blocks and the three $2\times2$ blocks inside are well visible in the example above. \item So far, we have not specified the ground field. It must be of characteristic zero, and contain some roots of unity to define $\Omega$ (see~\ref{omega}) and the left regular representation (see~\ref{lambda}). In theory one can just take $\mathbb{C}$, but in practice one needs a computable field. By default, an appropriate algebraic extension of $\mathbb{Q}$ is automatically constructed: \begin{Mexin} KD3::coeffRing \end{Mexin} \begin{Mexout} Q(II, epsilon) \end{Mexout} where $\texttt{II}^4=1$, and $\texttt{epsilon}^6=1$. \item The basis change is implemented by specifying the images of \texttt{a} and \texttt{b} and stating that it is an algebra morphism. The inverse basis change is deduced automatically by matrix inversion. Appropriate caching is done to avoid computation overhead. This is completely transparent to the user, and mostly transparent for the developer (encapsulation principle). \item We show here the \texttt{MuPAD}\xspace{} output in the text interface. In the graphical interface things look better; in particular, \texttt{epsilon} could be typeset as $\epsilon$. \end{itemize} So far, we have only played with the algebra structure of $K\!D(3)$ which is just the usual group algebra structure. 
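To illustrate concretely what computing "on the group basis" means, here is a minimal Python re-implementation of the group algebra of $D_6$ (an illustrative sketch, not the actual \texttt{MuPAD} code), reproducing the two pocket-calculator computations above; elements $b^s a^k$ are stored as pairs $(s,k)$ with the dihedral relation $a^k b = b a^{-k}$:

```python
from fractions import Fraction
from collections import defaultdict

N = 6  # order of a in the dihedral group D_6 = <a, b | a^6, b^2, bab=a^-1>

def gmul(x, y):
    """Multiply two group elements b^s a^k, stored as pairs (s, k)."""
    (s, k), (t, l) = x, y
    return ((s + t) % 2, ((-1) ** t * k + l) % N)

def amul(u, v):
    """Multiply two group algebra elements (dicts: group element -> coefficient)."""
    w = defaultdict(Fraction)
    for g, cg in u.items():
        for h, ch in v.items():
            w[gmul(g, h)] += cg * ch
    return {g: c for g, c in w.items() if c != 0}

def add(u, v):
    w = defaultdict(Fraction)
    for x in (u, v):
        for g, c in x.items():
            w[g] += c
    return {g: c for g, c in w.items() if c != 0}

def scale(c, u):
    return {g: c * x for g, x in u.items()}

def basis(s, k):
    return {(s % 2, k % N): Fraction(1)}

one, a3, ba3, b = basis(0, 0), basis(0, 3), basis(1, 3), basis(1, 0)

# b^2 = 1, as MuPAD answered above.
assert amul(b, b) == one

# (1 - a^3)(1 + a^3) + 1/2 b a^3 = 1/2 b a^3, since a^6 = 1.
lhs = add(amul(add(one, scale(Fraction(-1), a3)), add(one, a3)),
          scale(Fraction(1, 2), ba3))
assert lhs == scale(Fraction(1, 2), ba3)
print("group algebra arithmetic checks out")
```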
Let us compute some coproducts, starting with some group like elements (note: tensor products are denoted by the symbol \#): \begin{Mexin} coproduct(aa^3), coproduct(bb) \end{Mexin} \begin{Mexout} 3 3 B(a ) # B(a ), B(b) # B(b) \end{Mexout} Here is the coproduct of $a$: \begin{Mexin} coproduct(aa) \end{Mexin} \begin{Mexout} 4 4 / II \ 4 5 1/16 B(a b) # B(a b) + | - -- - 1/16 | B(a b) # B(a ) + \ 8 / ... one hundred lines sniped out ... 2 -1/16 B(a) # B(a ) + 7/16 B(a) # B(a) \end{Mexout} The implementation of the coproduct follows closely Vainerman's definition~\cite{Vainerman.1998} by deformation of the usual coproduct. In particular, it goes through the definition of $\omega$ and $\Omega$: \begin{Mexin} KD3::G::Omega \end{Mexin} \begin{Mexout} 3 3 / II \ 3 3 1/8 B(a b) # B(a b) + | - -- - 1/8 | B(a b) # B(a ) + \ 4 / ... ten lines sniped out ... / II \ 3 | -- - 1/8 | B(b) # B(a ) + 1/8 B(b) # B(1) + 1/8 B(b) # B(b) \ 4 / \end{Mexout} and of the twisted coproduct: \begin{Mexin} expose(KD3::G::coproductBasis) \end{Mexin} \begin{Mexout} proc(x : DihedralGroup(6)) : KD3::G::tensorSquare name KD3::G::coproductBasis; option remember; begin dom::Omega * dom::tensorSquare(K::coproductBasis(x)) * dom::OmegaStar end_proc \end{Mexout} This function just computes the image of a basis element, and the actual coproduct is obtained by linearity. Thanks to the \texttt{option remember}, the computation is done only once. \texttt{dom} is a place holder for the current domain (here $K\!D(n)$), and \texttt{K} denotes the original Kac algebra (here $\mathbb{C}[D_6]$). The code is generic, and can be used to twist any Kac algebra by an appropriate cocycle. \texttt{KD3::M::tensorSquare} and \texttt{KD3::G::tensorSquare} model $K\!D(3)\otimes K\!D(3)$ respectively in the group and the matrix basis. The changes of basis between the two are defined automatically, and registered as implicit conversions. Those conversions are used transparently to compute coproducts in the matrix basis. 
\subsection{Computing with coideal subalgebra\xspaces} \label{subsection.demo.sacig} A foremost feature that we used for exploration was the ability to compute properties of coideal subalgebra\xspaces generated by various elements, in the hope to find Jones projections. Let us first define a shortcut for the matrix units: \begin{Mexin} e := KD3::e: \end{Mexin} Now, $e_i$ and $e_{i,j}^k$ are given respectively by \texttt{e(i)} and \texttt{e(i,j,k)}. We compute the coideal subalgebra\xspace $K_1$ generated by the projection $e_1+e_2+e_3+e_4$: \begin{Mexin} K1basis := coidealAndAlgebraClosure([e(1)+e(2)+e(3)+e(4)]) \end{Mexin} \begin{Mexoutsmall} -- +- -+ +- -+ +- -+ -- | | 1, 0, 0, 0, 0, 0, 0, 0 | | 0, 0, 0, 0, 0, 0, 0, 0 | | 0, 0, 0, 0, 0, 0, 0, 0 | | | | | | | | | | | | 0, 1, 0, 0, 0, 0, 0, 0 | | 0, 0, 0, 0, 0, 0, 0, 0 | | 0, 0, 0, 0, 0, 0, 0, 0 | | | | | | | | | | | | 0, 0, 0, 0, 0, 0, 0, 0 | | 0, 0, 1, 0, 0, 0, 0, 0 | | 0, 0, 0, 0, 0, 0, 0, 0 | | | | | | | | | | | | 0, 0, 0, 0, 0, 0, 0, 0 | | 0, 0, 0, 1, 0, 0, 0, 0 | | 0, 0, 0, 0, 0, 0, 0, 0 | | | | |, | |, | | | | | 0, 0, 0, 0, 0, 0, 0, 0 | | 0, 0, 0, 0, 0, 0, 0, 0 | | 0, 0, 0, 0, 0, 0, 0, 0 | | | | | | | | | | | | 0, 0, 0, 0, 0, 0, 0, 0 | | 0, 0, 0, 0, 0, 0, 0, 0 | | 0, 0, 0, 0, 0, 0, 0, 0 | | | | | | | | | | | | 0, 0, 0, 0, 0, 0, 0, 0 | | 0, 0, 0, 0, 0, 0, 0, 0 | | 0, 0, 0, 0, 0, 0, 0, 0 | | | | | | | | | | | | 0, 0, 0, 0, 0, 0, 0, 0 | | 0, 0, 0, 0, 0, 0, 0, 0 | | 0, 0, 0, 0, 0, 0, 0, 1 | | -- +- -+ +- -+ +- -+ -- \end{Mexoutsmall} The result is a basis of $K_1$ in echelon form. Its dimension is consistent with the trace of $e_1+e_2+e_3+e_4$: \begin{Mexin} 1 / (e(1)+e(2)+e(3)+e(4))::traceNormalized() \end{Mexin} \begin{Mexout} 3 \end{Mexout} It follows that $e_1+e_2+e_3+e_4$ is the Jones projection $p_{K_1}$ (see Remark~\ref{resumep}). To give a flavor of the implementation work, here is the code for computing algebra and coideal subalgebra\xspace closures. 
It is defined generically for any domain implementing the appropriate operations (\texttt{dom} is a placeholder for the current domain): \begin{Mexin} algebraClosure := proc(generators: Type::ListOf(dom)) : Type::ListOf(dom) begin userinfo(3, "Computing the (non unital!) algebra closure"); dom::moduleClosure(generators, [proc(x: dom) : Type::SequenceOf(dom) local generator; begin x * generator $ generator in generators end_proc]) end_proc; coidealAndAlgebraClosure := proc(generators: Type::ListOf(dom), direction=Left: Type::Enumeration(Left,Right)) : Type::ListOf(dom) begin userinfo(3, "Computing the coideal and algebra closure"); // Proposition: the algebra closure of a coideal is again a coideal! dom::algebraClosure(dom::coidealClosure(generators, direction)); end_proc; \end{Mexin} In short: thanks to the underlying computer science work in the design and implementation of the platform, the algorithms may be written in a reasonably \emph{expressive} and \emph{mathematically meaningful} way. \subsubsection{The coideal subalgebra\xspace $K_2$ and subalgebras of functions on a group} Consider now the coideal subalgebra\xspace $K_2$ generated by the projection $e_1+e_2$ of trace $1/6$: \begin{Mexin} K2basis := coidealAndAlgebraClosure([e(1)+e(2)]): nops(K2basis) \end{Mexin} \begin{Mexout} 6 \end{Mexout} It has the desired dimension, so that $e_1+e_2$ is the Jones projection of $K_2$.
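The closure routines above are essentially iterated span-completions: starting from the linear span of the generators, keep adjoining products until the dimension stabilizes. A language-neutral sketch of the idea behind \texttt{algebraClosure} for matrix algebras (illustrative Python/NumPy, not the actual \texttt{MuPAD} code; it tracks only the dimension and does no echelon reduction, so the intermediate lists contain redundant vectors):

```python
import numpy as np

def algebra_closure_dim(generators, tol=1e-9):
    """Dimension of the (non-unital) algebra generated by a list of matrices:
    iterate span-completion under right multiplication by the generators."""
    shape = generators[0].shape
    vecs = [g.reshape(-1) for g in generators]
    while True:
        dim = np.linalg.matrix_rank(np.array(vecs), tol=tol)
        new_vecs = list(vecs)
        for v in vecs:
            m = v.reshape(shape)
            for g in generators:
                new_vecs.append((m @ g).reshape(-1))
        # Stop as soon as adjoining products no longer increases the span.
        if np.linalg.matrix_rank(np.array(new_vecs), tol=tol) == dim:
            return dim
        vecs = new_vecs

# Example: a cyclic permutation matrix of order 3 generates a 3-dimensional
# commutative algebra, spanned by P, P^2, and P^3 = I.
P = np.roll(np.eye(3), 1, axis=0)
assert algebra_closure_dim([P]) == 3
print("closure dimension computed")
```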
Consider now the coproduct of $e_1+e_2$: \begin{Mexin} c := (e(1)+e(2))::coproduct() \end{Mexin} \begin{Mexout} e(1, 1, 2) # e(2, 2, 2) + e(2, 2, 2) # e(1, 1, 2) + 1/2 e(2, 2, 1) # e(2, 2, 1) + 1/2 e(1, 1, 1) # e(2, 2, 1) + 1/2 e(2, 2, 1) # e(1, 1, 1) + 1/2 e(1, 1, 1) # e(1, 1, 1) + -1/2 e(1, 2, 1) # e(1, 2, 1) + 1/2 e(2, 1, 1) # e(1, 2, 1) + 1/2 e(1, 2, 1) # e(2, 1, 1) + -1/2 e(2, 1, 1) # e(2, 1, 1) + e(4) # e(4) + e(3) # e(4) + e(4) # e(3) + e(3) # e(3) + e(2) # e(2) + e(1) # e(2) + e(2) # e(1) + e(1) # e(1) \end{Mexout} It turns out to be symmetric: \begin{Mexin} c - c::mapsupport(revert) \end{Mexin} \begin{Mexout} 0 \end{Mexout} and therefore $K_2$ is a Kac subalgebra, the properties of which we now investigate. To this end, we define the subspace spanned by this basis, and claim to \texttt{MuPAD}\xspace{} that it is indeed a Kac subalgebra: \begin{Mexin} K2 := Dom::SubFreeModule(K2basis, [Cat::FiniteDimensionalHopfAlgebraWithBasis(KD3::coeffRing)]): \end{Mexin} The computation rules inside \texttt{K2} are then inherited from those of \texttt{KD3::M}. We first ask whether $K_2$ is commutative or cocommutative: \begin{Mexin} K2::isCommutative(), K2::isCocommutative() \end{Mexin} \begin{Mexout} TRUE, FALSE \end{Mexout} This tells us that $K_2$ is the dual of the algebra $\mathbb{C}[G]$ of some noncommutative group $G$. To find $G$, we first define the dual of $K_2$: \begin{Mexin} K2dual := K2::Dual(): \end{Mexin} As expected, there are six group-like elements in it: \begin{Mexin} K2dual::groupLikeElements() \end{Mexin} \begin{Mexout} _ _ _ _ _ [-II B([6, 5]) + B([5, 5]), II B([6, 5]) + B([5, 5]), B([7, 7]), _ _ _ B([8, 8]), B([1, 1]), B([3, 3])] \end{Mexout} They are expressed in the dual basis of the row reduced echelon basis for \texttt{K2}; this representation is not very useful.
On the other hand, we may instead consider all of them together as the intrinsic group: \begin{Mexin} intrinsicGroup := K2dual::intrinsicGroup(): \end{Mexin} and test that it is isomorphic to the dihedral group $D_3$: \begin{Mexin} D3 := DihedralGroup(3): nops(D3::groupEmbeddings(intrinsicGroup())) \end{Mexin} \begin{Mexout} 6 \end{Mexout} The algorithm behind this last step is currently simplistic. It cannot handle groups as large as specialized software such as \texttt{GAP}\xspace{} can, but it is sufficient for our purpose. On the other hand, the computation of the group-like elements themselves is rather efficient: it is done by computing the rank one central projections in the dual algebra. More generally, we can compute the full representation theory of finite dimensional algebras. For example: \begin{Mexin} K2dual::isSemiSimple() \end{Mexin} \begin{Mexout} TRUE \end{Mexout} \begin{Mexin} K2dual::simpleModulesDimensions() \end{Mexin} \begin{Mexout} [2, 1, 1] \end{Mexout} \subsubsection{Coideal subalgebra\xspaces in $K_2$} \label{subsubsection.mupad.sacig.K2} We now show how to construct the Jones projections of the coideal subalgebra\xspaces of $K_2=L^\infty(D_3)$, which are in correspondence with the subgroups of $D_3$ (see~\ref{groupe} and~\ref{section.K2}). Here we consider as an example a subgroup $Z_2$ of order $2$ of $D_3$. Take the second generator of the intrinsic group of $\widehat{K_2}$, and write it as an element of $\widehat{K_2}$: \begin{Mexin} c := intrinsicGroup([2]); c := c::lift() \end{Mexin} \begin{Mexout} [2] _ _ -II B([6, 5]) + B([5, 5]) \end{Mexout} \pagebreak[3] Here is the subgroup it generates: \begin{Mexin} Z2 := K2dual::multiplicativeClosure([c]) \end{Mexin} \begin{Mexout} [B([5, 5]) + -II B([6, 5]), B([1, 1])] \end{Mexout} The corresponding coideal subalgebra\xspace $I$ consists of the functions on $D_3$ which are constant on right cosets for $Z_2$; it is of dimension $[D_3:Z_2]=3$.
The Jones projection is given by the formula $\sum_{g\in Z_2} \widehat g$ (see~\ref{groupe}): \begin{Mexin} pI := _plus( g::groupLikeToIdempotentOfDual() $ g in Z2 ): \end{Mexin} The result is actually given in $K_2$; we lift it to an element of $K\!D(3)$: \begin{Mexin} pI := pI::toSupModule() \end{Mexin} \begin{Mexoutsmall} +- -+ | 1, 0, 0, 0, 0, 0, 0, 0 | | | | 0, 1, 0, 0, 0, 0, 0, 0 | | | | 0, 0, 0, 0, 0, 0, 0, 0 | | | | 0, 0, 0, 0, 0, 0, 0, 0 | | | | II | | 0, 0, 0, 0, 1/2, - --, 0, 0 | | 2 | | | | II | | 0, 0, 0, 0, --, 1/2, 0, 0 | | 2 | | | | 0, 0, 0, 0, 0, 0, 0, 0 | | | | 0, 0, 0, 0, 0, 0, 0, 0 | +- -+ \end{Mexoutsmall} This gives us $p_{K_{21}}=e_1+e_2+r_{1,1}^1$, as in~\ref{KD3section}. As a double check, here is the basis of the coideal $K_{21}$ it generates: \begin{Mexin} coidealAndAlgebraClosure([pI]) \end{Mexin} \begin{Mexoutsmall} -- +- -+ +- -+ +- -+ -- | | 1, 0, 0, 0, 0, 0, 0, 0 | | 0, 0, 0, 0, 0, 0, 0, 0 | | 0, 0, 0, 0, 0, 0, 0, 0 | | | | | | | | | | | | 0, 1, 0, 0, 0, 0, 0, 0 | | 0, 0, 0, 0, 0, 0, 0, 0 | | 0, 0, 0, 0, 0, 0, 0, 0 | | | | | | | | | | | | 0, 0, 0, 0, 0, 0, 0, 0 | | 0, 0, 1, 0, 0, 0, 0, 0 | | 0, 0, 0, 0, 0, 0, 0, 0 | | | | | | | | | | | | 0, 0, 0, 0, 0, 0, 0, 0 | | 0, 0, 0, 1, 0, 0, 0, 0 | | 0, 0, 0, 0, 0, 0, 0, 0 | | | | |, | |, | | | | | 0, 0, 0, 0, 1, 0, 0, 0 | | 0, 0, 0, 0, 0, 0, 0, 0 | | 0, 0, 0, 0, II, -1, 0, 0 | | | | | | | | | | | | 0, 0, 0, 0, 0, 1, 0, 0 | | 0, 0, 0, 0, 0, 0, 0, 0 | | 0, 0, 0, 0, 1, II, 0, 0 | | | | | | | | | | | | 0, 0, 0, 0, 0, 0, 1, 0 | | 0, 0, 0, 0, 0, 0, 0, 0 | | 0, 0, 0, 0, 0, 0, 2 II, 0 | | | | | | | | | | | | 0, 0, 0, 0, 0, 0, 0, 0 | | 0, 0, 0, 0, 0, 0, 0, 1 | | 0, 0, 0, 0, 0, 0, 0, 0 | | -- +- -+ +- -+ +- -+ -- \end{Mexoutsmall} which is consistent with the trace of the projection: \begin{Mexin} 1 / pI::traceNormalized() \end{Mexin} \begin{Mexout} 3 \end{Mexout} Doing this for all subgroups of $D_3$ yields all coideal subalgebra\xspaces of $K\!D(3)$ of dimension $2$ and $3$. 
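The mechanism behind such subgroup-indexed projections is elementary: in any matrix representation of a finite group, averaging the matrices of a subgroup yields an orthogonal projection, whose rank is the number of orbits of the subgroup. A minimal numerical illustration of this general fact (Python/NumPy sketch, unrelated to the actual \texttt{MuPAD} session):

```python
import numpy as np

def perm_matrix(perm):
    """Permutation matrix of a permutation given as a tuple of images."""
    n = len(perm)
    m = np.zeros((n, n))
    for i, j in enumerate(perm):
        m[j, i] = 1.0
    return m

# The order-2 subgroup {id, (0 1)} of S_3, in the permutation representation.
subgroup = [perm_matrix((0, 1, 2)), perm_matrix((1, 0, 2))]
p = sum(subgroup) / len(subgroup)

# p is an orthogonal projection; its rank is the number of orbits ({0,1}, {2}).
assert np.allclose(p @ p, p)
assert np.allclose(p, p.T)
assert round(np.trace(p)) == 2
print("subgroup average is a projection")
```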
\subsubsection{Deeper study of $K_2$, and generalization to $K\!D(n)$}\label{mupad.K2=Dn} Studying $K_2$ this way has the advantage of being completely automatic. However, to generalize the results to any $K\!D(n)$, we need to have a closer look at the structure, and do some things manually. Here, we are lucky enough that the row reduced basis comes out quite nicely (this is seldom the case): \pagebreak[4] \begin{Mexin} K2basis := coidealAndAlgebraClosure([e(1)+e(2)]) \end{Mexin} \begin{Mexoutsmall} -- +- -+ +- -+ +- -+ | | 1, 0, 0, 0, 0, 0, 0, 0 | | 0, 0, 0, 0, 0, 0, 0, 0 | | 0, 0, 0, 0, 0, 0, 0, 0 | | | | | | | | | | 0, 1, 0, 0, 0, 0, 0, 0 | | 0, 0, 0, 0, 0, 0, 0, 0 | | 0, 0, 0, 0, 0, 0, 0, 0 | | | | | | | | | | 0, 0, 0, 0, 0, 0, 0, 0 | | 0, 0, 1, 0, 0, 0, 0, 0 | | 0, 0, 0, 0, 0, 0, 0, 0 | | | | | | | | | | 0, 0, 0, 0, 0, 0, 0, 0 | | 0, 0, 0, 1, 0, 0, 0, 0 | | 0, 0, 0, 0, 0, 0, 0, 0 | | | |, | |, | |, | | 0, 0, 0, 0, 0, 0, 0, 0 | | 0, 0, 0, 0, 0, 0, 0, 0 | | 0, 0, 0, 0, 0,-1, 0, 0 | | | | | | | | | | 0, 0, 0, 0, 0, 0, 0, 0 | | 0, 0, 0, 0, 0, 0, 0, 0 | | 0, 0, 0, 0, 1, 0, 0, 0 | | | | | | | | | | 0, 0, 0, 0, 0, 0, 0, 0 | | 0, 0, 0, 0, 0, 0, 0, 0 | | 0, 0, 0, 0, 0, 0, 0, 0 | | | | | | | | | | 0, 0, 0, 0, 0, 0, 0, 0 | | 0, 0, 0, 0, 0, 0, 0, 0 | | 0, 0, 0, 0, 0, 0, 0, 0 | -- +- -+ +- -+ +- -+ +- -+ +- -+ +- -+ -- | 0, 0, 0, 0, 0, 0, 0, 0 | | 0, 0, 0, 0, 0, 0, 0, 0 | | 0, 0, 0, 0, 0, 0, 0, 0 | | | | | | | | | | 0, 0, 0, 0, 0, 0, 0, 0 | | 0, 0, 0, 0, 0, 0, 0, 0 | | 0, 0, 0, 0, 0, 0, 0, 0 | | | | | | | | | | 0, 0, 0, 0, 0, 0, 0, 0 | | 0, 0, 0, 0, 0, 0, 0, 0 | | 0, 0, 0, 0, 0, 0, 0, 0 | | | | | | | | | | 0, 0, 0, 0, 0, 0, 0, 0 | | 0, 0, 0, 0, 0, 0, 0, 0 | | 0, 0, 0, 0, 0, 0, 0, 0 | | | |, | |, | | | | 0, 0, 0, 0, 1, 0, 0, 0 | | 0, 0, 0, 0, 0, 0, 0, 0 | | 0, 0, 0, 0, 0, 0, 0, 0 | | | | | | | | | | 0, 0, 0, 0, 0, 1, 0, 0 | | 0, 0, 0, 0, 0, 0, 0, 0 | | 0, 0, 0, 0, 0, 0, 0, 0 | | | | | | | | | | 0, 0, 0, 0, 0, 0, 0, 0 | | 0, 0, 0, 0, 0, 0, 1, 0 | | 0, 0, 0, 0, 0, 0, 0, 0 | | | |
| | | | | | 0, 0, 0, 0, 0, 0, 0, 0 | | 0, 0, 0, 0, 0, 0, 0, 0 | | 0, 0, 0, 0, 0, 0, 0, 1 | | +- -+ +- -+ +- -+ -- \end{Mexoutsmall} In particular, we can read off this basis the complete algebra structure of $K_2$: it is a commutative algebra whose minimal projections are easy to find. The expression of the coproduct of $e_1+e_2$ did not look very good. As usual in computer exploration, an essential issue is to find the right view where the output is readable and exploitable by a \emph{human}; customizable outputs are therefore at a premium. For example the first author's favorite view for a tensor element (which tends to be huge) is as a matrix $M=(m_{a,b})_{a,b}$ whose rows and columns are indexed by the matrix units of $K\!D(m)$ (here, for $m=3$: $(e_1,\dots,e_4,e^1_{1,1},e^1_{2,2},e^1_{1,2},e^1_{2,1})$), and where $m_{a,b}$ is the coefficient of $a \otimes b$. With this view, the coproduct of $e_1+e_2$ now also comes out nicely (see also~\ref{cop}): \begin{Mexin} KD3::M::tensorElementToMatrix((e(1)+e(2))::coproduct()) \end{Mexin} \begin{Mexoutsmall} +- -+ | 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 | | | | 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 | | | | 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0 | | | | 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0 | | | | 0, 0, 0, 0, 1/2, 1/2, 0, 0, 0, 0, 0, 0 | | | | 0, 0, 0, 0, 1/2, 1/2, 0, 0, 0, 0, 0, 0 | | | | 0, 0, 0, 0, 0, 0, -1/2, 1/2, 0, 0, 0, 0 | | | | 0, 0, 0, 0, 0, 0, 1/2, -1/2, 0, 0, 0, 0 | | | | 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0 | | | | 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0 | | | | 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 | | | | 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 | +- -+ \end{Mexoutsmall} For example, it becomes obvious that this coproduct is symmetric, implying that $K_2$ is a Kac subalgebra (see~\ref{irredprof2}); as it is commutative, we have $K_2\equiv L^\infty(G)$ for some group $G$ (see~\ref{groupe}). Note that a basis for $K_2$ can also be read off the rows. Using duality, the elements of $G$ are given by the rank $1$ central projections. 
With a bit more work, one can deduce from the expression of $\Delta(e_1+e_2)$ all the pairs $(g,g^{-1})$ of inverse elements in $G$. Looking at some other coproducts reveals the complete group structure of $G$ (see~\ref{e1e2}). To generalize this we look at $n=5$: \begin{Mexin} K := KD(5): K::M::tensorElementToMatrix((K::e(1)+K::e(2))::coproduct()) \end{Mexin} \begin{Mexoutsmall} +- -+ | 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 | | | | 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 | | | | 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 | | | | 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 | | | | 0, 0, 0, 0, 1/2, 1/2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 | | | | 0, 0, 0, 0, 1/2, 1/2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 | | | | 0, 0, 0, 0, 0, 0, -1/2, 1/2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 | | | | 0, 0, 0, 0, 0, 0, 1/2, -1/2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 | | | | 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 | | | | 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 | | | | 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 | | | | 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 | | | | 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1/2, 1/2, 0, 0, 0, 0, 0, 0 | | | | 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1/2, 1/2, 0, 0, 0, 0, 0, 0 | | | | 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, -1/2, 1/2, 0, 0, 0, 0 | | | | 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1/2, -1/2, 0, 0, 0, 0 | | | | 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0 | | | | 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0 | | | | 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 | | | | 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 | +- -+ \end{Mexoutsmall} This suggests the general formulas of~\ref{cop}, which we can double-check for other values of $n$ before starting to prove them: \begin{Mexin} for n from 1 to 7 do r := (KD(n))::r: e := (KD(n))::e: print(n,
iszero(coproduct(e(1)+e(2)) - ( ( e(1)+e(2) ) # ( e(1)+e(2) ) + ( e(3)+e(4) ) # ( e(3)+e(4) ) + _plus( r(1,1,j) # r(1,1,j) + r(2,2,j) # r(2,2,j) $ j in select([ $1..n-1], testtype, Type::Odd) ) + _plus( e(1,1,j) # e(2,2,j) + e(2,2,j) # e(1,1,j) $ j in select([ $1..n-1], testtype, Type::Even) ) ))); end_for: \end{Mexin} \begin{Mexout} 1, TRUE 2, TRUE ... 7, TRUE \end{Mexout} With some patience and perseverance, the other required coproducts can be reverse engineered the same way for all $n$. \subsection{Computing Kac isomorphisms and applications} We now demonstrate the use of algorithm~\ref{algo.isomorphism} to compute automorphisms, self-duality, and isomorphisms, with the search of coideals as motivation. \subsubsection{The coideal subalgebra\xspaces $K_3$ and $K_4$, and automorphisms of $K\!D(3)$} \label{subsubsection.automorphismsK3} Computing the coproducts of $e_1+e_3$ and $e_1+e_4$ as above shows that the corresponding coideal subalgebra\xspaces $K_3$ and $K_4$ are not Kac subalgebras. However, they look similar, and it is natural to ask whether there exists an automorphism of $K\!D(3)$ which would exchange them. 
To this end, we compute the automorphism group of $K\!D(3)$ (see~\ref{algo.isomorphism}): \begin{Mexin} automorphismGroup := KD3::G::algebraEmbeddings(KD3::G): \end{Mexin} A few minutes and a tea later, we obtain four automorphisms: \begin{Mexin} for phi in automorphismGroup do fprint(Unquoted,0, _concat("-" $ 78)): print(hold(phi)(a) = phi(KD3::G::algebraGenerators[a])); print(hold(phi)(b) = phi(KD3::G::algebraGenerators[b])); end: \end{Mexin} \begin{Mexout} ------------------------------------------------------------------------------ 5 4 2 phi(a) = 1/2 B(a ) + -1/2 B(a ) + 1/2 B(a ) + 1/2 B(a) 3 phi(b) = B(a b) ------------------------------------------------------------------------------ 5 4 2 phi(a) = 1/2 B(a ) + 1/2 B(a ) + -1/2 B(a ) + 1/2 B(a) 3 phi(b) = B(a b) ------------------------------------------------------------------------------ 5 phi(a) = B(a ) phi(b) = B(b) ------------------------------------------------------------------------------ phi(a) = B(a) phi(b) = B(b) \end{Mexout} Half of them are obviously induced by automorphisms of the group which fix $H$. The other half are obtained from $\Theta'$: \begin{Mexin} ThetaPrime := automorphismGroup[1]: \end{Mexin} which is an involution: \begin{Mexin} ThetaPrime(ThetaPrime(KD3::G::algebraGenerators[a])), ThetaPrime(ThetaPrime(KD3::G::algebraGenerators[b])) \end{Mexin} \begin{Mexout} B(a), B(b) \end{Mexout} The generalization of the formula for $\Theta'$ to any $n$ is straightforward (see Proposition~\ref{proposition.automorphism.KD.ThetaPrime}). 
Going back to our original problem, we see that $\Theta'$ exchanges $e_1+e_3$ and $e_1+e_4$: \begin{Mexin} KD3::M(ThetaPrime(KD3::G( KD3::e(1) + KD3::e(3)))) \end{Mexin} \begin{Mexoutsmall} +- -+ | 1, 0, 0, 0, 0, 0, 0, 0 | | | | 0, 0, 0, 0, 0, 0, 0, 0 | | | | 0, 0, 0, 0, 0, 0, 0, 0 | | | | 0, 0, 0, 1, 0, 0, 0, 0 | | | | 0, 0, 0, 0, 0, 0, 0, 0 | | | | 0, 0, 0, 0, 0, 0, 0, 0 | | | | 0, 0, 0, 0, 0, 0, 0, 0 | | | | 0, 0, 0, 0, 0, 0, 0, 0 | +- -+ \end{Mexoutsmall} Therefore $K_3$ and $K_4$ are isomorphic (see~\ref{iso-n-impair}). \subsubsection{Self-duality of $K\!D(3)$} \label{subsubsection.demo.self-dual} In the previous subsubsections, we have determined all coideal subalgebra\xspaces of $K\!D(3)$, except those of dimension $4$ which we now investigate. By the classification of small dimension Kac algebras, we knew that $K\!D(3)$ was self-dual, and therefore that there existed exactly three coideal subalgebra\xspaces of dimension $4$. In this subsubsection, we demonstrate how we found an explicit isomorphism between $K\!D(2m+1)$ and its dual (see Theorem~\ref{self-dual}). Then, in the next subsubsection, we derive the explicit construction of the Jones projection of the coideal subalgebra\xspaces of dimension $4$ in $K\!D(3)$, and in general in $K\!D(2m+1)$. First, we compute all the Kac isomorphisms from $K\!D(3)$ to its dual. 
It turns out that the computation is much quicker in the dual of the matrix basis \texttt{KD3::M::Dual()} than in that of the group basis \texttt{KD3::G::Dual()} (the expressions of the group-like elements and of $\Omega$ are sparser there): \begin{Mexin} isomorphisms := KD3::G::algebraEmbeddings(KD3::M::Dual()): \end{Mexin} Due to the non-trivial automorphism group of $K\!D(3)$, there are four of them: \begin{Mexin} for phi in isomorphisms do fprint(Unquoted,0, _concat("-" $ 78)): print(hold(phi)(a) = phi(KD3::G::algebraGenerators[a])); print(hold(phi)(b) = phi(KD3::G::algebraGenerators[b])); end: \end{Mexin} \begin{Mexout} ------------------------------------------------------------------------------ _ _ _ _ _ phi(a) = -1/2 e(1,1,1) + 1/2 e(2,2,1) + -1/2 e(1,2,2) + -1/2 e(2,1,2) + e(2,2,2) _ phi(b) = e(3) ------------------------------------------------------------------------------ _ _ _ _ _ phi(a) = 1/2 e(1,1,1) + -1/2 e(2,2,1) + e(1,1,2) + -1/2 e(1,2,2) + -1/2 e(2,1,2) _ phi(b) = e(3) ------------------------------------------------------------------------------ _ _ _ _ _ phi(a) = -1/2 e(1,1,1) + 1/2 e(2,2,1) + e(1,1,2) + -1/2 e(1,2,2) + -1/2 e(2,1,2) _ phi(b) = e(4) ------------------------------------------------------------------------------ _ _ _ _ _ phi(a) = 1/2 e(1,1,1) + -1/2 e(2,2,1) + -1/2 e(1,2,2) + -1/2 e(2,1,2) + e(2,2,2) _ phi(b) = e(4) \end{Mexout} Running the same computation for $n=5$ was sufficient to pick one of them and guess the general formulas of Theorem~\ref{self-dual}.
Now, we can construct this isomorphism $\Phi$ directly with: \begin{Mexin} psi := KD3::toDualIsomorphism: \end{Mexin} and check that it is indeed an isomorphism with: \begin{Mexin} KD3::G::isHopfAlgebraMorphism(psi) KD3::G::kernelOfModuleMorphism(psi) \end{Mexin} \begin{Mexout} TRUE [] \end{Mexout} We actually checked this up to $n=21$ (the computation for $n=21$ took two days on a $\unit[2]{GHz}$ PC and $\unit[1.6]{Gb}$ of memory): \begin{Mexin} for n from 3 to 21 step 2 do K := KD(n): psi := K::toDualIsomorphism: print(n, K::G::isHopfAlgebraMorphism(psi), K::G::kernelOfModuleMorphism(psi)) end_for: \end{Mexin} Here is finally how we guessed the formulas for $\psi$: \begin{Mexin} MdualToGdual := KD3::G::dualOfModuleMorphism(KD3::M): Phi := MdualToGdual @ psi @ KD3::G: Phi(KD3::e(2,1,1)) \end{Mexin} { \scriptsize \begin{Mexout} / II \ _ / II \ _ 2 / II \ _ 4 / II \ _ 5 | -- - 1/2 | B(a b) + | 1/2 - -- | B(a b) + | -- + 1/2 | B(a b) - | -- + 1/2 | B(a b) \ 2 / \ 2 / \ 2 / \ 2 / \end{Mexout} } \subsubsection{Coideal subalgebra\xspaces of dimension $4$ and the anti-isomorphism $\delta$} \label{subsubsection.demo.delta} We now use the isomorphism of the previous section to construct explicitly the coideals of dimension $4$ of $K\!D(3)$. Consider for example the coideal $K_{21}$ of dimension $3$ whose Jones projection we computed in~\ref{subsubsection.mupad.sacig.K2}. We construct a basis for $\delta(K_{21})$ in $\widehat{K\!D(3)}$ by looking for the commutant of $p_{K_{21}}$ in $\widehat{K\!D(3)} \subset \widehat{K\!D(3)} \rtimes K\!D(3)$.
\begin{Mexin} e := KD3::e: r := KD3::r: pK21 := e(1) + e(2) + r(1,1,1): deltaK21 := coidealDual([pK21]) \end{Mexin} \begin{Mexout} _ _ _ _ _ _ [e(1), e(2), -II e(1, 1, 1) + e(1, 2, 1), -II e(2, 1, 1) + e(2, 2, 1)] \end{Mexout} Using the inverse of the isomorphism $\Phi$: \begin{Mexin} psi := KD3::toDualIsomorphism: psiInv := KD3::G::inverseOfModuleMorphism(psi): \end{Mexin} we identify $\delta(K_{21})$ as a coideal $J_{20}$ of $K\!D(3)$: \begin{Mexin} J20 := map(deltaK21, psiInv) \end{Mexin} \begin{Mexout} -- 3 / II \ 5 II 5 4 II 4 | B(1), B(a ), | - -- - 1/4 | B(a b) + -- B(a ) + 1/4 B(a b) + -- B(a ) + -- \ 2 / 4 4 / II \ 2 II 2 II | 1/4 - -- | B(a b) + - -- B(a ) + -1/4 B(a b) + - -- B(a), \ 2 / 4 4 / II \ 5 5 II 4 4 | -- + 1/2 | B(a b) + 1/4 B(a ) + - -- B(a b) + 1/4 B(a ) + \ 4 / 4 / II \ 2 2 II -- | 1/2 - -- | B(a b) + -1/4 B(a ) + -- B(a b) + -1/4 B(a) | \ 4 / 4 -- \end{Mexout} Here, we are lucky enough that the obtained basis for $J_{20}$ is orthonormal: \begin{Mexin} matrix(4,4, (i,j) -> scalar(J20[i], J20[j])) \end{Mexin} \begin{Mexoutsmall} +- -+ | 1, 0, 0, 0 | | | | 0, 1, 0, 0 | | | | 0, 0, 1, 0 | | | | 0, 0, 0, 1 | +- -+ \end{Mexoutsmall} Therefore, we can compute the Jones projection $p_{J_{20}}$ right away by inverting its coproduct formula (otherwise, we would first have had to orthogonalize the basis by Gram-Schmidt, and use a variant of the formula involving the norms): \begin{Mexin} pJ20 := KD3::G::fiberOfModuleMorphism (KD3::G::coproduct, (1/nops(J20)) * _plus((b::involution())::antipode() # b $ b in J20)) [1] \end{Mexin} \begin{Mexout} 5 3 2 1/4 B(a b) + 1/4 B(a ) + 1/4 B(a b) + 1/4 B(1) \end{Mexout} Now the construction of $p_{J_{20}}$ (and of $p_{J_0}$ and $p_{J_2}$) is obvious, and can be generalized right away to obtain the coideal subalgebra\xspaces of dimension $4$ in $K\!D(2n+1)$ (see~\ref{abelien},~\ref{sacig4impair}, and~\ref{KD3section}).
Beware however that the correspondence we used between coideals of dimension $d$ of $K\!D(3)$ and those of dimension $\dim K\!D(3) / d$ is only defined up to an automorphism of $K\!D(3)$. \subsubsection{The isomorphism between $K\!D(2n)$ and $K\!Q(2n)$} \label{subsubsection.demo.KDKQ} \label{mupad.plonge} The lattice of coideal subalgebra\xspaces of $K\!D(3)$ being complete, we now show how parts of the lattices of the larger algebras can be built up from those of the smaller ones (see~\ref{KD6}). Let us start by defining $K\!D(6)$: \begin{Mexin} KD6 := KD(6): \end{Mexin} We know that $K\!D(3)$ embeds as $K_4$ in $K\!D(6)$. Let us show how to obtain the coideal subalgebra\xspaces of $K_4$ from those of $K\!D(3)$. We have to be a bit careful as, by default, $K\!D(3)$ and $K\!D(6)$ are not defined over the same ground field (unless stated otherwise, the ground field for $K\!D(n)$ is $\mathbb{Q}(i, \epsilon)$ where $\epsilon$ is a $2n$-th root of unity). Here, we force $K\!D(3)$ to use the same field as $K\!D(6)$.
\begin{Mexin} KD3 := KD(3, KD6::coeffRing): \end{Mexin} To ease notation, we define shortcuts for the algebra generators of $K\!D(3)$ and $K\!D(6)$: \begin{Mexin} [a3, b3] := KD3::G::algebraGenerators::list(): [a6, b6] := KD6::G::algebraGenerators::list(): \end{Mexin} Now we can define the embedding of $K\!D(3)$ as $K_4$ in $K\!D(6)$ (see~\ref{plonge}): \begin{Mexin} KD3ToKD6 := KD3::G::algebraMorphism(table(a = a6^2, b = b6 )): \end{Mexin} and use it as follows: \begin{Mexin} KD3ToKD6(1 + 2*a3^3 + 3 * b3) \end{Mexin} \begin{Mexout} 6 2 B(a ) + 3 B(b) + B(1) \end{Mexout} We now take the Jones projection of the coideal $J_{20}$ in $K\!D(3)$ (see Appendix~\ref{subsubsection.demo.delta}), and lift it into $K\!D(6)$: \begin{Mexin} pJ20 := 1/4 * ( 1 + a3^3 + a3^2 * b3 + a3^5 * b3 ): KD3ToKD6(pJ20) \end{Mexin} \begin{Mexout} 4 6 10 1/4 B(1) + 1/4 B(a b) + 1/4 B(a ) + 1/4 B(a b) \end{Mexout} The result is a posteriori obvious, but the point is that this construction is completely automatic. In a similar vein, one can obtain all the coideal subalgebra\xspaces of $K_3$, as it is $K\!Q(3)$ via the isomorphism between $K\!D(2n)$ and $K\!Q(2n)$ (see Theorem~\ref{theorem.isomorphism.KD.KQ}). This isomorphism was first suspected by comparing the representation theory of the algebras, their duals, and the properties of the simple coideal subalgebra\xspaces.
The systematic computation of the isomorphisms for $n\leq 3$ suggested the general formulas which then were checked for $n\leq 10$: \begin{Mexin} KQ6 := KQ(6): \end{Mexin} \begin{Mexin} isomorphisms := KD6::G::algebraEmbeddings(KQ6::G): for phi in isomorphisms do fprint(Unquoted,0, _concat("-" $ 78)): print(hold(phi)(a) = phi(a6)); print(hold(phi)(b) = phi(b6)) end: \end{Mexin} \begin{Mexout} 2 4 5 7 phi(a) = 3/4 B(a) + 1/4 B(a b) + -1/4 B(a b) + -1/4 B(a ) + 1/4 B(a ) + 8 10 11 -1/4 B(a b) + 1/4 B(a b) + 1/4 B(a ) II 3 3 II 9 9 phi(b) = - -- B(a ) + 1/2 B(a b) + -- B(a ) + 1/2 B(a b) 2 2 \end{Mexout} The isomorphism we chose in Theorem~\ref{theorem.isomorphism.KD.KQ} is now built into the system: \begin{Mexin} KQ6::G(a6) \end{Mexin} \begin{Mexout} 2 4 5 7 3/4 B(a) + 1/4 B(a b) + -1/4 B(a b) + -1/4 B(a ) + 1/4 B(a ) + 8 10 11 -1/4 B(a b) + 1/4 B(a b) + 1/4 B(a ) \end{Mexout} \begin{Mexin} KQ6::G(b6) \end{Mexin} \begin{Mexout} II 3 3 II 9 9 - -- B(a ) + 1/2 B(a b) + -- B(a ) + 1/2 B(a b) 2 2 \end{Mexout} \begin{Mexin} KQ3 := KQ(3, KQ6::coeffRing): \end{Mexin} \begin{Mexin} [aq3, bq3] := KQ3::G::algebraGenerators::list(): [aq6, bq6] := KQ6::G::algebraGenerators::list(): KQ3ToKQ6 := KQ3::G::algebraMorphism(table(a = aq6^2, b = bq6 )): \end{Mexin} We can now conclude by demonstrating the lifting of the $4$-dimensional coideal $I=\delta(K_{21})$ of $K\!Q(3)$ to a coideal $J_5$ of $K\!D(6)$ (see~\ref{K3inKD6}). 
Here is the Jones projection of $\delta(K_{21})$ (see~\ref{KQ3section}): \begin{Mexin} pI := KQ3::e(1) + KQ3::q(0,2*PI/3,2): \end{Mexin} and its image in $K\!D(6)$: \begin{Mexin} pIInKD6 := KQ3ToKQ6(KQ3::G(pI)) \end{Mexin} \begin{Mexout} 6 7 1/4 B(1) + 1/4 B(a b) + 1/4 B(a ) + 1/4 B(a b) \end{Mexout} Again, the result is trivial, and the expression in the matrix basis of~\ref{K3inKD6} can be obtained with: \begin{Mexin} KQ6::M(pIInKD6) \end{Mexin} As expected, this projection generates a coideal subalgebra\xspace{} of dimension $4$: \begin{Mexin} nops(coidealAndAlgebraClosure([pIInKD6])), 1/pIInKD6::traceNormalized() \end{Mexin} \begin{Mexout} 4, 4 \end{Mexout} \subsection{Further directions} Putting everything together, it is completely automatic to test whether a projection is the Jones projection of a coideal subalgebra\xspace, and to construct new Jones projections from previous ones by various techniques: embeddings, self-duality, intersection and (completed) union, etc. In the small examples we considered, this was sufficient to construct, semi-automatically, all the coideal subalgebra\xspaces. Furthermore, given the Jones projections of two coideal subalgebra\xspaces $A\subset B$, it would be straightforward to compute the Bratteli diagram of the inclusion (this is not yet implemented). The ultimate goal would be to compute automatically all the Jones projections, and therefore the full lattice of coideal subalgebra\xspaces, but we do not know yet how to achieve this. \end{document}
\begin{document} \title{A simple proof of Dahmen's conjectures} \begin{abstract} The number of Lam\'e equations with finite (ordinary or projective) monodromy was conjectured by S.~R. Dahmen, and a few proofs have been proposed. It is known that Lam\'e equations with unitary monodromy correspond to spherical tori with one conical singularity, and the geometry of such surfaces has recently been studied via triangulations. In this paper, we apply the results on spherical tori to give an alternative proof of Dahmen's conjectures. \end{abstract} \section{Introduction} Given a lattice $\Lambda=\mathbb Z\omega_1+\mathbb Z\omega_2$ on $\mathbb C$ with $(\omega_1,\omega_2)=(1,\tau)$, $\textnormal{Im}(\tau)>0$, the Lam\'e equation on the elliptic curve $E=\mathbb C/\Lambda$ is a second order ordinary differential equation: \begin{equation}\label{lame} \frac{\partial^2 w}{\partial z^2} - \Big( n(n+1) \wp(z)+B \Big) w =0, \end{equation} where $B\in \mathbb C$ and $\wp$ is the Weierstrass elliptic function. We consider the case $n\in \mathbb Z_{>0}$ and study the number of Lam\'e equations with given finite monodromy groups. \\ It is known that all the finite monodromy groups of Lam\'e equations are cyclic. The following two conjectures were proposed by S.~R. Dahmen, who later proved the first one using dessins d'enfants \cite{S1}\cite{S2}.
\conjecture Let $L_n(N)$ be the number of Lam\'e equations (\ref{lame}) with projective monodromy group isomorphic to the cyclic group $C_N$. Then \begin{equation}\label{fin_proj_mono} L_n(N)=\dfrac{n(n+1)}{12}(\Psi(N)-3\phi(N))+\dfrac{2}{3}\epsilon(n,N) \end{equation} for $N\geq 3$, where \[\epsilon(n,N)=\begin{cases}1,\quad \textnormal{if }n=3\textnormal{ and }3\mid N-1,\\0,\quad \textnormal{otherwise,}\end{cases}\] $\phi$ is the Euler totient function, and \[\Psi(N)=|\{(k_1,k_2)\in \{0,\dots,N-1\}^2 : \textrm{gcd}(k_1,k_2,N)=1\}|.\]\\ \conjecture Let $L'_n(N)$ be the number of Lam\'e equations (\ref{lame}) with ordinary monodromy group isomorphic to the cyclic group $C_N$. Then \begin{equation}\label{fin_ordi_mono} L'_n(N)=\dfrac{1}{2}\Big(\dfrac{n(n+1)}{24}\Psi(N)-a_n\phi(N)-b_n\phi(N/2)\Big)+\dfrac{2}{3}\epsilon(n,N) \end{equation} for $N\geq 3$, where $a_{2l}=a_{2l+1}=l(l+1)/2$, $b_{2l-1}=b_{2l}=l^2$, and $\phi(N/2)=0$ if $N$ is odd. \\ Note that in Dahmen's paper, the conjectures were originally stated for the pushforward Lam\'e equation on $\mathbb{CP}^1$, so the finite monodromy groups concerned become the dihedral groups $D_N$ instead of $C_N$.\\ The second conjecture has been verified for $n\leq 4$ ($n=1,2,3$ by S.~R. Dahmen, and $n=4$ by Y.~C. Chou via modular form calculations (\cite{MF2}, appendix)). On the other hand, Z. Chen, T.~J. Kuo and C.~S. Lin have announced a proof of the second conjecture \cite{PE}, which involves more technical analysis of Painlev\'e equations. In this paper we give a simple and uniform proof of these two conjectures from the perspective of spherical tori, developed by A. Eremenko et al.\ \cite{E}.\\ In Section 2 we briefly introduce known results from the literature, and in Section 3 we complete the proof of the main theorems. \section{Known results} We first recall some basic results on Lam\'e equations with unitary monodromy. The details can be found in \cite{WW}\cite{A}.
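Both arithmetic functions appearing in the conjectures are easy to tabulate by direct count. A minimal Python sketch (the function names are ours); note that $\Psi$ is the Jordan totient $J_2$, so it also satisfies $\Psi(N)=N^2\prod_{p\mid N}(1-p^{-2})$.

```python
from math import gcd

def Psi(N):
    # |{(k1, k2) in {0,...,N-1}^2 : gcd(k1, k2, N) = 1}|, as in the first conjecture.
    return sum(1 for k1 in range(N) for k2 in range(N)
               if gcd(gcd(k1, k2), N) == 1)

def phi(N):
    # Euler's totient function, by direct count.
    return sum(1 for k in range(1, N + 1) if gcd(k, N) == 1)

print([Psi(N) for N in range(1, 8)])   # [1, 3, 8, 12, 24, 24, 48]
print([phi(N) for N in range(1, 8)])   # [1, 1, 2, 2, 4, 2, 6]
```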
It is a classical result that the Lam\'e equation (\ref{lame}) has the following ansatz solution. \proposition \label{ansatz} \begin{equation} w_{\underline a}=\exp(z\sum_{\mu=1}^n\zeta(a_\mu))\prod_{\mu=1}^n\frac{\sigma(z-a_\mu)}{\sigma(z)} \end{equation} where $\underline a=(a_1,\dots,a_n)$ satisfies \[\sum_{\nu\neq \mu} (\zeta(a_\nu)-\zeta(a_\mu)+\zeta(a_\mu-a_\nu))=0\] for $\mu=1,\dots,n$, and $B=(2n-1)\sum_{\mu=1}^n\wp(a_\mu)$.\\ Another solution of the same equation (with the same $B$) can be chosen as $w_{-\underline a}$, where $-\underline a=(-a_1,\dots,-a_n)$, except for the $2n+1$ values of $B$ for which $\underline a=-\underline a$ up to a permutation. For ansatz solutions with $\underline a\neq-\underline a$, the monodromy of the quotient $f=w_{\underline a}/w_{-\underline a}$ (i.e., the projective monodromy of the equation) is given by \proposition \begin{equation} f(z+\omega_i)=\exp(\int_{\gamma_i} g)f(z),\qquad i=1,2, \end{equation} where \[g=(\log f)'=\displaystyle\sum_{\mu=1}^n \dfrac{\wp'(a_\mu)}{\wp(z)-\wp(a_\mu)}\] and $\gamma_1,\gamma_2$ are the two fundamental loops on the torus.\\ Also note that the ansatz gives trivial monodromy at the singularity at $0$.
Thus, the condition that a Lam\'e equation have unitary monodromy group is equivalent to \[\int_{\gamma_i} g\in \sqrt{-1}\,\mathbb R, \qquad i=1,2.\] A little reduction using Legendre's relation shows that \proposition (\cite{A}, p5-8, p21) The unitary monodromy condition for Lam\'e equations is equivalent to \begin{equation}\label{green-hecke} \sum_{\mu=1}^n Z(a_\mu)=0.\end{equation} Here, if we write $a_\mu=t_\mu\omega_1+s_\mu\omega_2$ with $s_\mu,t_\mu\in \mathbb R$, and the quasi-periods $\eta_j=\zeta(z+\omega_j)-\zeta(z)$, then $Z$ is the Hecke function defined by \[Z(a_\mu)=\zeta(a_\mu)-t_\mu\eta_1-s_\mu\eta_2.\] If we further denote $s=\sum_{\mu=1}^n s_\mu$ and $t=\sum_{\mu=1}^n t_\mu$, then the projective monodromy is given by \begin{equation}\label{proj_mono_eq} \begin{cases} f(z+\omega_1)=\exp(-4i\pi s)f(z),\\ f(z+\omega_2)=\exp(4i\pi t)f(z), \end{cases} \end{equation} while the ansatz solutions $w_{\underline a},w_{-\underline a}$ have monodromy \begin{equation}\label{nonproj_mono_eq} \begin{cases} w_{\pm \underline a}(z+\omega_1)=\exp(\mp 2i\pi s)w_{\pm \underline a}(z),\\ w_{\pm \underline a}(z+\omega_2)=\exp(\pm 2i\pi t)w_{\pm \underline a}(z). \end{cases} \end{equation} Thus $s,t \pmod 1$ determine the ordinary monodromy of the equation, while $2s,2t \pmod 1$ determine the projective monodromy.\\ The quotient $f$ can be viewed as the developing map of a spherical torus with one conical singularity.
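The factor of $2$ between the two sets of parameters is immediate: dividing the two relations of (\ref{nonproj_mono_eq}) for $f=w_{\underline a}/w_{-\underline a}$ gives
\[
f(z+\omega_1)=\frac{\exp(-2i\pi s)\,w_{\underline a}(z)}{\exp(2i\pi s)\,w_{-\underline a}(z)}=\exp(-4i\pi s)f(z),
\]
and similarly $f(z+\omega_2)=\exp(4i\pi t)f(z)$, which recovers (\ref{proj_mono_eq}).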
\definition A spherical torus (with one conical singularity, omitted for short) $(S,x)$ is an oriented Riemannian surface $S$ of constant curvature $1$ and genus $1$, with a conical singularity $x$ of angle $2\pi\theta$, i.e., there is a local isometry of $S$ at $x$ to a spherical fan with corner of angle $2\pi\theta$ identifying its two edges.\\ We can also give any spherical torus $S$ a complex structure by identifying $S^2\cong\mathbb {CP}^1$, and this makes $S$ a Riemann surface of genus $1$ with a puncture at $x$.\\ \proposition If equation (\ref{lame}) has unitary monodromy for $f=w_1/w_2$, then the pullback of the Fubini--Study metric on $\mathbb{CP}^1\cong S^2$ by $f$ produces a spherical torus $(E,x)$ with a conical singularity of angle $(4n+2)\pi$, and $E$ is its underlying Riemann surface. Conversely, any spherical torus of angle $(4n+2)\pi$ arises from a Lam\'e equation (\ref{lame}) with unitary monodromy, with the underlying elliptic curve given by the identification above.\\ Note that $f$ depends on the choice of $w_1$ and $w_2$, so there is a correspondence between Lam\'e equations with unitary monodromy and projective equivalence classes of spherical surfaces. \definition We say two spherical tori are projectively equivalent if their developing maps $f_1,f_2$ differ by composition with a M\"obius transformation $\gamma\in\textrm{PSL}(2,\mathbb C)$ on $S^2\cong \mathbb{CP}^1$, i.e., $f_2=\gamma\circ f_1$, or equivalently, if $f_1$ and $f_2$ correspond to the same Lam\'e equation.\\ In the following we summarize the results on spherical tori from \cite{E} which are needed in this paper. \definition A spherical triangle is an oriented Riemannian surface $P$ of constant curvature $1$ with three geodesic boundaries.\\ Although the case $\theta\not\in 2\mathbb Z+1$ is not used in our paper, we include the result for completeness. \proposition (Theorem B in \cite{E}) Let $(S,x)$ be a spherical torus.
If $\theta\not\in 2\mathbb Z+1$, then $S$ can be decomposed into two isometric spherical triangles, with the three interior angles $\pi\theta_1,\pi\theta_2,\pi\theta_3$ satisfying the triangle inequalities, i.e., $|\theta_1-\theta_2|\leq\theta_3\leq\theta_1+\theta_2$. Conversely, any such spherical triangle uniquely determines a spherical torus, except when some relation $\theta_i=\theta_j+\theta_k$ holds, $\{i,j,k\}=\{1,2,3\}$, in which case the spherical triangle and its mirror image correspond to the same spherical torus.\\ For any spherical torus with such a decomposition, we denote by $\triangle,\triangle'$ the two spherical triangles, and by $L_i,P_i$ ($L_i',P_i'$) the edges and vertices of $\triangle$ ($\triangle'$, respectively), so that $P_1,P_2,P_3$ are ordered clockwise on $\partial\triangle$. To glue $\triangle$ and $\triangle'$ into $(S,x)$, we simply glue $L_i$ along $L_i'$ so that the orientations on both sides are compatible, $i=1,2,3$. The vertices of the spherical triangles then form a conical singularity. \proposition (Theorem E in \cite{E}, with a $2$-torsion label) \label{MS_sigma} If $\theta\in 2\mathbb Z+1$, then every projective equivalence class of spherical tori, with a labelled $2$-torsion point, can be parametrized by $\mathbb R$. In each class there is a unique surface having the isometric spherical triangle decomposition as above, with the labelled $2$-torsion point lying on $L_1$. Moreover, the three angles of the two spherical triangles must be integral multiples of $\pi$.\\ \begin{figure} \caption{A demonstration of a spherical triangle with interior angles $(\pi\theta_1,\pi\theta_2,\pi\theta_3)=(4\pi,5\pi,6\pi)$. All the bounded regions in the figure are hemispheres, and the shaded region is the basic spherical triangle.} \end{figure} We give a detailed description of such spherical tori.
Let $\pi\theta_1,\pi\theta_2,\pi\theta_3$ be the three interior angles of the two spherical triangles, so $\theta=\theta_1+\theta_2+\theta_3=2n+1$, with the $\theta_i$ all integers satisfying the triangle inequalities. The spherical triangle can be obtained by contiguously gluing hemispheres along the three edges of a basic spherical triangle with interior angles $\pi,\pi,\pi$ (thus also a hemisphere), see Figure 1.\\ As a consequence, the set of spherical tori with $\theta=2n+1$, with a labelled $2$-torsion point, is parametrized by \begin{align*}\{(\theta_1,\theta_2,\theta_3)\in \{1,\dots,n\}^3, \theta_1+\theta_2+\theta_3=2n+1\}\\\times\{(\ell_1,\ell_2,\ell_3)\in (\mathbb R^+)^3, \ell_1+\ell_2+\ell_3=2\pi\}\times \mathbb R,\end{align*} where $\ell_1,\ell_2,\ell_3$ are parameters for the lengths of the edges of the basic spherical triangle. Quotienting out the last component and the $\mathbb Z_3$-action cyclically permuting the $\theta_i$ and $\ell_i$ gives the set of Lam\'e equations with unitary monodromy for $n\in\mathbb Z_{>0}$. \section{Main Theorems} In this section we prove the two conjectures of Dahmen. We first establish the relation between the monodromy group and the shape of the spherical triangle. \definition Denote by $S_{\theta_1,\theta_2,\theta_3}(\ell_1,\ell_2,\ell_3)$ the spherical torus with the decomposition of Theorem \ref{MS_sigma} and with those parameters, and with a labelled $2$-torsion point at the midpoint of $L_1$.\\ It is noteworthy that $f$ maps the boundaries of all the hemispheres in $\triangle$ and $\triangle'$ to the unit circle of $\mathbb{CP}^1$, and the centers of these hemispheres to $0$ or $\infty$, regardless of the monodromy. As a result, these centers are the $a_i$ or $-a_i$ of the ansatz solution, respectively.
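The first factor of this parametrization is finite and can be checked by brute force. A small Python sketch (the function name is ours): the triangle inequalities hold automatically here, since $\theta_3\leq n<\theta_1+\theta_2=2n+1-\theta_3$, and the number of ordered angle triples is $n(n+1)/2$, the count that appears in Proposition \ref{proj_mono}.

```python
def angle_triples(n):
    # Ordered triples (t1, t2, t3) in {1,...,n}^3 with t1 + t2 + t3 = 2n + 1.
    # The triangle inequalities are automatic: t3 <= n < t1 + t2.
    return [(t1, t2, t3)
            for t1 in range(1, n + 1)
            for t2 in range(1, n + 1)
            for t3 in range(1, n + 1)
            if t1 + t2 + t3 == 2 * n + 1]

# The count is n(n+1)/2: substituting s_i = n - t_i gives
# s_1 + s_2 + s_3 = n - 1, with binomial(n+1, 2) nonnegative solutions.
for n in range(1, 8):
    assert len(angle_triples(n)) == n * (n + 1) // 2
print([len(angle_triples(n)) for n in range(1, 6)])   # [1, 3, 6, 10, 15]
```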
One simple observation is \proposition \label{dev_mono} The monodromy of $ S_{\theta_1,\theta_2,\theta_3}(\ell_1,\ell_2,\ell_3)$ is given by $f(z+\omega_1)=e^{\mp i(\ell_2+\ell_3)}f(z)$ and $f(z+\omega_2)=e^{\pm i(\ell_1+\ell_3)}f(z)$. \proof Since attaching an even number of hemispheres on the sides does not affect the projective monodromy, we may assume $\triangle$ and $\triangle'$ are basic (that is, $n=1$). We first consider the case where the center of $\triangle$ is mapped to $0$. We glue $\triangle$ and $\triangle'$ along $L_3$, and find the monodromy of the loop starting from any point on $L_2$ to the corresponding point on $L_2'$. Geometrically we can see that the loop is homotopic to a path on the unit circle, clockwise, of length $\ell_2+\ell_3$; thus $f(z+\omega_1)=e^{-i(\ell_2+\ell_3)}f(z)$ for $z$ on the unit circle. Similarly $f(z+\omega_2)=e^{i(\ell_1+\ell_3)}f(z)$, and both relations hold for any $z\in \mathbb{CP}^1$ since the monodromy of $f$ is in $\mathrm{PSU}(2)$. If the center of $\triangle$ is mapped to $\infty$, then $f$ has the inverse monodromy. \qedhere\\ \begin{figure} \caption{The $n=1$ case. The image of the red path under the map $f$ is an arc of length $\ell_2+\ell_3$ on the unit circle.} \label{sphericalTrianglePic} \end{figure} From this we can easily deduce the constraint on the projective monodromy. \proposition \label{proj_mono} The projective monodromy parameters $2s,2t \pmod 1$ must satisfy the restrictions $2s\neq 0$, $2t\neq 0$, $2s+2t\neq 0 \pmod 1$. Any such $2s,2t$ gives rise to $n(n+1)/2$ Lam\'e equations with unitary monodromy satisfying the monodromy (\ref{proj_mono_eq}). The distribution of the monodromy parameters is shown in Figure 3.
\proof We show that if the center of $\triangle$ is mapped to $0$, then we have \[2s<1,2t<1\textnormal{ and }2s+2t>1.\] Comparing Proposition \ref{dev_mono} and (\ref{proj_mono_eq}), we have \[2s=(\ell_2+\ell_3)/2\pi\textnormal{ and }2t=(\ell_1+\ell_3)/2\pi.\]The inequalities then follow from \[\ell_2+\ell_3<2\pi, \ell_1+\ell_3<2\pi\textnormal{ and }\ell_1+\ell_2+2\ell_3=2\pi+\ell_3>2\pi.\] Conversely, if $2s$ and $2t$ satisfy the inequalities, then we can solve \[(\ell_1,\ell_2,\ell_3)=(2\pi-4\pi s,2\pi-4\pi t,4\pi s+4\pi t-2\pi),\]and we can find one projective equivalence class in each connected component of $MS_{1,1}^{[2]}(2n+1)$ with such monodromy. If the center of $\triangle$ is mapped to $\infty$, then $f$ has the inverse monodromy, and we have \[2s>0,2t>0\textnormal{ and }2s+2t<1.\] \qedhere\\ For the ordinary monodromy, we need to take the attached hemispheres on the sides into consideration. \proposition \label{ordi_mono} For fixed parameters $\theta_1,\theta_2,\theta_3\in\{1,2,\dots,n\}$ with $\theta=\theta_1+\theta_2+\theta_3=2n+1$, the monodromy parameters $s,t\ (\mathrm{mod}\ 1)$ for the ansatz $w_a$ satisfy either \[s<\frac{\theta_1}{2},\ t<\frac{\theta_2}{2},\ s+t>\frac{\theta_1+\theta_2-1}{2}\] or \[s>-\frac{\theta_1}{2},\ t>-\frac{\theta_2}{2},\ s+t<-\frac{\theta_1+\theta_2-1}{2}.\] Translating these regions into $(0,1)\times (0,1)$, we have the distribution of monodromy parameters shown in Figure 4. \proof Note that for any fixed $(2s,2t)$, there are $4$ choices of $(s,t)$, so we ought to determine, for each $\theta_1,\theta_2,\theta_3$, which of the $8$ regions in Figure 4 $(s,t)$ should lie in. For the case $n=1$ and $\theta_1=\theta_2=\theta_3=1$, it is known that $(s,t)$ lies in the region we have described (\cite{A}, p.~30). For $n>1$, note that attaching two hemispheres on the side of $L_2$ or $L_3$ will translate the parameter $s\ (\mathrm{mod}\ 1)$ by $\frac{1}{2}$, so in total $s$ is translated by $\frac{\theta_1-1}{2}$.
A similar argument holds for $t$. \qedhere\\ \begin{figure} \caption{The number of developing maps with unitary monodromy and with given parameter $2s,2t\ (\mathrm{mod}\ 1)$.} \end{figure} \begin{figure} \caption{The number of ansatz solutions with unitary monodromy and with given parameter $s,t\ (\mathrm{mod}\ 1)$.} \end{figure} Now we are ready to prove Dahmen's conjectures with the propositions above.\\ \paragraph{\it{Proof of Main Conjectures}} Note that if $n=3$ and $3|(N-1)$, then there is a unique spherical torus $S_{\frac{N-1}{3},\frac{N-1}{3},\frac{N-1}{3}}(2\pi/3,2\pi/3,2\pi/3)$ fixed by the $\mathbb Z/3$-action on the labels, thus $3L_n(N)-2\epsilon(n,N)$ counts the number of Lam\'e equations with parameter \[(2s,2t)=(\dfrac{k_1}{N},\dfrac{k_2}{N}), 0<k_1<N-k_2<N, \textrm{gcd}(k_1,k_2,N)=1\textnormal{ and }N\geq 3.\] For convenience we suppose this also holds for $N=1,2$. By counting lattice points and Proposition \ref{proj_mono} we have \[\sum_{d|N}(3L_n(d)-2\epsilon(n,d))=\dfrac{n(n+1)}{2}(N^2-3N+2).\] The formula (\ref{fin_proj_mono}) then follows from M\"obius inversion.\\ Similarly we have that $3L_n'(N)-2\epsilon(n,N)$ counts the number of Lam\'e equations with parameter \[(s,t)=(\dfrac{k_1}{N},\dfrac{k_2}{N}), 0<k_1<N-k_2<N, \mathrm{gcd}(k_1,k_2,N)=1\textnormal{ and }N\geq 3,\] and we suppose this holds for $N=1,2$.
By counting lattice points and Proposition \ref{ordi_mono} we have \begin{align*}&\sum_{d|N}(3L_n'(d)-2\epsilon(n,d))\\&=\begin{cases} a_n\dfrac{3(m-1)(m-2)}{2}+(b_n-a_n)\dfrac{m(m-1)}{2} \qquad\textnormal{if }N=2m-1,\\ a_n\dfrac{3(m-1)(m-2)}{2}+(b_n-a_n)\dfrac{(m-1)(m-2)}{2} \qquad\textnormal{if }N=2m,\end{cases} \\&=\begin{cases} \dfrac{n(n+1)}{2}\dfrac{m(m-1)}{2}-3a_nm\qquad\textnormal{if }N=2m-1,\\ \dfrac{n(n+1)}{2}\dfrac{m^2}{2}-(2a_n+b_n)\dfrac{3m-2}{2}\qquad\textnormal{if }N=2m, \end{cases} \\&=\begin{cases} \dfrac{n(n+1)}{16}(N^2-1)-\dfrac{3}{2} a_n (N+1) \qquad\textnormal{if }N=2m-1,\\ \dfrac{n(n+1)}{16}N^2-(2a_n+b_n)(\dfrac{3N}{4}-1) \qquad\textnormal{if }N=2m.\end{cases} \end{align*} The formula (\ref{fin_ordi_mono}) then follows from M\"obius inversion. The proof is complete. \qedhere\\ \begin{paragraph}{\textbf{Remark}} We can compare our proof of Conjecture 1.1 with Dahmen's proof using dessin d'enfant. Given a spherical torus $(S,p)=S_{\theta_1,\theta_2,\theta_3}(\ell_1,\ell_2,\ell_3)$ with $\ell_i=2\pi\dfrac{m_i}{N}, i=1,2,3$ and $m_i$ positive integers, we assume the developing map $f$ sends $p$ to some $N$th root of unity, and let \[g(z)=\left(\dfrac{1-z^N}{1+z^N}\right)^2, h(z)=\dfrac{z}{z-1}.\] The composition $h\circ g\circ f$ sends the $2N$-division points on the boundaries of hemispheres to $0$ or $1$ alternately, and also sends the centers of the hemispheres to $\infty$. As $h\circ g \circ f$ is independent of the monodromy and ramifies only at $0,1,\infty$, it serves as a Belyi function for the underlying elliptic curve of $S$. The dessin d'enfant corresponding to this Belyi function consists of $2n+1$ loops in $3$ directions through $p$ on the elliptic curve. The $i$th direction has $2n+1-2\theta_i$ loops, with the numbers of edges alternating between $2m_i$ and $2N-2m_i$.
Taking the quotient of the elliptic curve as well as the dessin by the involution $z\mapsto -z$, we obtain a dessin on $\mathbb P^1$ of Type I introduced in Dahmen's proof of Conjecture 1.1 (see Figure 5). As a consequence, counting Type I dessins on $\mathbb P^1$ is equivalent to counting spherical tori with finite monodromy, as we have done here. \begin{figure} \caption{The dessin corresponding to the spherical torus obtained from Figure 2 (the opposite sides of the parallelogram are identified) (left), and its quotient dessin on $\mathbb P^1$ (right). Note that each segment represents several edges of the dessin (according to $m_i$).} \end{figure} \end{paragraph} \end{document}
\begin{document} \title[A new algorithm for mixed equilibrium problem] {A new algorithm for mixed equilibrium problem and Bregman strongly nonexpansive mappings in Banach spaces} \author[Vahid Darvish] {Vahid Darvish} \address{Department of Mathematics and Computer Science\\ Amirkabir University of Technology\\ Hafez Ave., P.O. Box 15875-4413\\Tehran, Iran.} \email{[email protected]} \subjclass[2010]{47H05, 47J25, 58C30} \keywords{Banach space, Bregman projection, Bregman distance, Bregman strongly nonexpansive mapping, fixed point, mixed equilibrium problem. } \begin{abstract} In this paper, we study a new iterative method for a common fixed point of a finite family of Bregman strongly nonexpansive mappings in the framework of reflexive real Banach spaces. Moreover, we prove a strong convergence theorem for finding common fixed points which are also solutions of a mixed equilibrium problem. \end{abstract} \maketitle \section{Introduction} Let $E$ be a real reflexive Banach space, $C$ a nonempty, closed and convex subset of $E$, $E^{*}$ the dual space of $E$, and $f:E\to (-\infty,+\infty]$ a proper, lower semi-continuous and convex function. We denote by $\text{dom} f$ the domain of $f$, that is, the set $\{x\in E : f(x)<+\infty\}$. Let $x\in \text{int}(\text{dom} f)$; the subdifferential of $f$ at $x$ is the convex set defined by \begin{equation*} \partial f(x)=\{x^{*}\in E^{*} : f(x)+\langle x^{*},y-x\rangle \leq f(y), \forall y\in E\}, \end{equation*} and the Fenchel conjugate of $f$ is the function $f^{*}: E^{*}\to (-\infty,+\infty]$ defined by $$f^{*}(x^{*})=\sup \{\langle x^{*},x\rangle -f(x): x\in E\}.$$ Equilibrium problems, which were introduced by Blum and Oettli \cite{blu} and Noor and Oettli \cite{asl} in 1994, have had a great impact and influence on the development of several branches of pure and applied sciences.
It has been shown that the equilibrium problem theory provides a novel and unified treatment of a wide class of problems which arise in economics, finance, image reconstruction, ecology, transportation, network, elasticity and optimization. It has been shown (\cite{blu},\cite{asl}) that equilibrium problems include variational inequalities, fixed point, Nash equilibrium and game theory as special cases. Hence, collectively, equilibrium problems cover a vast range of applications. Due to the nature of the equilibrium problems, it is not possible to extend the projection and its variant forms for solving equilibrium problems. To overcome this drawback, one usually uses the auxiliary principle technique. The main and basic idea in this technique is to consider an auxiliary equilibrium problem related to the original problem and then show that the solution of the auxiliary problem is a solution of the original problem. This technique has been used to suggest and analyze a number of iterative methods for solving various classes of equilibrium problems and variational inequalities; see \cite{asl2}, \cite{cen} and the references therein. Related to the equilibrium problems, we also have the problem of finding the fixed points of nonexpansive mappings, which is a subject of current interest in functional analysis. It is natural to construct a unified approach for these problems. In this direction, several authors have introduced some iterative schemes for finding a common element of the set of solutions of the equilibrium problems and the set of fixed points of finitely many nonexpansive mappings; see \cite{yao} and the references therein. Let $\varphi :C\to \mathbb{R}$ be a real-valued function and $\Theta: C\times C\to \mathbb{R}$ be an equilibrium bifunction.
The mixed equilibrium problem (for short, MEP) is to find $x^{*}\in C$ such that $$\text{MEP}: \Theta (x^{*},y)+\varphi(y)\geq \varphi(x^{*}), \ \ \forall y\in C.$$ In particular, if $\varphi \equiv 0$, this problem reduces to the equilibrium problem (for short, EP), which is to find $x^{*}\in C$ such that $$\text{EP}: \Theta (x^{*}, y)\geq 0, \ \ \forall y\in C.$$ The mixed equilibrium problems include fixed point problems, optimization problems, variational inequality problems, Nash equilibrium problems and the equilibrium problems as special cases; see for example \cite{blu}, \cite{cha}, \cite{cha2} and \cite{kon}. In \cite{rei}, Reich and Sabach proposed an algorithm for finding a common fixed point of finitely many Bregman strongly nonexpansive mappings $T_{i}:C\to C (i=1,2,\ldots, N)$ satisfying $\cap_{i=1}^{N}F(T_{i})\neq \emptyset$ in a reflexive Banach space $E$ as follows: \begin{eqnarray*} x_{0}&\in & E, \text{chosen arbitrarily,}\\ y_{n}^{i}&=&T_{i}(x_{n}+e_{n}^{i}),\\ C_{n}^{i}&=&\{z\in E : D_{f}(z,y_{n}^{i})\leq D_{f}(z,x_{n}+e_{n}^{i})\},\\ C_{n}&=&\cap_{i=1}^{N}C_{n}^{i},\\ Q_{n}^{i}&=&\{z\in E : \langle \nabla f(x_{0})-\nabla f(x_{n}), z-x_{n}\rangle\leq 0\},\\ x_{n+1}&=&proj_{C_{n}\cap Q_{n}}^{f}(x_{0}), \ \ \forall n\geq0, \end{eqnarray*} and \begin{eqnarray*} x_{0}&\in & E,\\ C_{0}^{i}&=&E, i=1,2,\ldots,N,\\ y_{n}^{i}&=&T_{i}(x_{n}+e_{n}^{i}),\\ C_{n+1}^{i}&=&\{z\in C_{n}^{i} : D_{f}(z,y_{n}^{i})\leq D_{f}(z,x_{n}+e_{n}^{i})\},\\ C_{n+1}&=&\cap_{i=1}^{N}C_{n+1}^{i},\\ x_{n+1}&=&proj_{C_{n+1}}^{f}(x_{0}), \ \ \forall n\geq0, \end{eqnarray*} where $proj_{C}^{f}$ is the Bregman projection with respect to $f$ from $E$ onto a closed and convex subset $C$ of $E$. They proved that the sequence $\{x_{n}\}$ converges strongly to a common fixed point of $\{T_{i}\}_{i=1}^{N}$.
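For orientation, we recall a standard special case of EP (the operator $A$ below is illustrative and does not appear in the algorithms of this paper): if $A:C\to E^{*}$ is a monotone operator and we take $\Theta(x,y)=\langle Ax,y-x\rangle$, then EP becomes the classical variational inequality of finding $x^{*}\in C$ such that
\[\langle Ax^{*},y-x^{*}\rangle\geq 0,\qquad \forall y\in C.\]
Similarly, taking $\Theta\equiv 0$ in MEP recovers the convex minimization problem of finding $x^{*}\in C$ with $\varphi(x^{*})\leq\varphi(y)$ for all $y\in C$.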
In \cite{sua}, Suantai et al.\ used the following Halpern's iterative scheme for a Bregman strongly nonexpansive self-mapping $T$ on $E$; for $x_{1}\in E$ let $\{x_{n}\}$ be a sequence defined by $$x_{n+1}=\nabla f^{*}(\alpha_{n}\nabla f(u)+(1-\alpha_{n})\nabla f(Tx_{n})), \ \ \forall n\geq1,$$ where $\{\alpha_{n}\}$ satisfies $\lim_{n\to\infty}\alpha_{n}=0$ and $\sum_{n=1}^{\infty}\alpha_{n}=\infty$. They proved that the above sequence converges strongly to a fixed point of $T$. In \cite{zeg}, Zegeye presented the following iterative scheme: $$x_{n+1}=Proj_{C}^{f}\nabla f^{*}(\alpha_{n}\nabla f(u)+(1-\alpha_{n})\nabla f(Tx_{n})),$$ where $T=T_{N}\circ T_{N-1}\circ \ldots\circ T_{1}$. He proved that the above sequence converges strongly to a common fixed point of a finite family of Bregman strongly nonexpansive mappings on a nonempty, closed and convex subset $C$ of $E$. The authors of \cite{kum} introduced the following algorithm: \begin{eqnarray} x_{1}&=&x\in C \ \ \ \ \ \text{chosen arbitrarily},\nonumber\\ z_{n}&=&Res_{H}^{f}(x_{n}),\nonumber\\ y_{n}&=&\nabla f^{*}(\beta_{n}\nabla f(x_{n})+(1-\beta_{n})\nabla f(T_{n}(z_{n}))),\nonumber\\ x_{n+1}&=&\nabla f^{*}(\alpha_{n}\nabla f(x_{n})+(1-\alpha_{n})\nabla f(T_{n}(y_{n}))),\label{mnb} \end{eqnarray} where $H$ is an equilibrium bifunction and $T_{n}$ is a Bregman strongly nonexpansive mapping for any $n\in \mathbb{N}$. They proved that the sequence generated by (\ref{mnb}) converges strongly to the point $proj_{F(T)\cap EP(H)}x$.
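To see what these schemes amount to in the simplest setting (a well-known observation, stated here only for illustration), let $E=H$ be a Hilbert space and $f(x)=\frac{1}{2}\|x\|^{2}$, so that $\nabla f=\nabla f^{*}=I$ and $D_{f}(x,y)=\frac{1}{2}\|x-y\|^{2}$. Then Halpern's scheme above reduces to the classical iteration
\[x_{n+1}=\alpha_{n}u+(1-\alpha_{n})Tx_{n},\qquad \forall n\geq 1,\]
whose strong convergence under $\lim_{n\to\infty}\alpha_{n}=0$ and $\sum_{n=1}^{\infty}\alpha_{n}=\infty$ is classical.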
In this paper, motivated by the above algorithms, we study the following iterative scheme: \begin{eqnarray} x_{1}&=&x\in C \ \ \ \ \ \text{chosen arbitrarily},\nonumber\\ z_{n}&=&Res_{\Theta,\varphi}^{f}(x_{n}),\nonumber\\ y_{n}&=&proj_{C}^{f}\nabla f^{*}(\beta_{n}\nabla f(x_{n})+(1-\beta_{n})\nabla f(T(z_{n}))),\nonumber\\ x_{n+1}&=&proj_{C}^{f}\nabla f^{*}(\alpha_{n}\nabla f(x_{n})+(1-\alpha_{n})\nabla f (T(y_{n}))),\label{eqw} \end{eqnarray} where $\varphi :C\to \mathbb{R}$ is a real-valued function, $\Theta: C\times C\to \mathbb{R}$ is an equilibrium bifunction and $T=T_{N}\circ T_{N-1}\circ \ldots\circ T_{1}$, where each $T_{i}$, $i\in \{1,2,\ldots, N\}$, is a Bregman strongly nonexpansive mapping. We will prove that the sequence $\{x_{n}\}$ defined in (\ref{eqw}) converges strongly to the point $proj_{(\cap_{i=1}^{N}F(T_{i}))\cap MEP(\Theta)}x$. \section{Preliminaries} For any $x\in \text{int}(\text{dom} f)$, the right-hand derivative of $f$ at $x$ in the direction $y\in E$ is defined by $$f^{'}(x,y):=\lim _{t\searrow0} \frac{f(x+ty)-f(x)}{t}.$$ The function $f$ is called G\^{a}teaux differentiable at $x$ if $\lim_{t\searrow0} \frac{f(x+ty)-f(x)}{t}$ exists for all $y\in E$. In this case, $f^{'}(x,y)$ coincides with $\nabla f(x)$, the value of the gradient ($\nabla f$) of $f$ at $x$. The function $f$ is called G\^{a}teaux differentiable if it is G\^{a}teaux differentiable for any $x\in \text{int}(\text{dom} f)$, and $f$ is called Fr\'{e}chet differentiable at $x$ if this limit is attained uniformly for all $y$ with $\|y\|=1$. The function $f$ is uniformly Fr\'{e}chet differentiable on a subset $C$ of $E$ if the limit is attained uniformly for any $x\in C$ and $\|y\|=1$. It is known that if $f$ is G\^{a}teaux differentiable (resp. Fr\'{e}chet differentiable) on $\text{int}(\text{dom} f)$, then $f$ is continuous and its G\^{a}teaux derivative $\nabla f$ is norm-to-weak$^*$ continuous (resp.
continuous) on $\text{int} (\text{dom}f)$ (see \cite{bon}). Let $f: E\to (-\infty,+\infty]$ be a G\^{a}teaux differentiable function. The function $D_{f}: \text{dom} f\times \text{int}(\text{dom} f)\to [0,+\infty)$ defined by \begin{equation}\label{1} D_{f}(x,y):=f(x)-f(y)-\langle \nabla f(y),x-y\rangle \end{equation} is called the Bregman distance with respect to $f$ \cite{cens}. Legendre functions $f:E\to (-\infty,+\infty]$ are defined in \cite{bau}. It is well known that in reflexive spaces, $f$ is a Legendre function if and only if it satisfies the following conditions: ($L_{1}$) The interior of the domain of $f$, $\text{int}(\text{dom} f)$, is nonempty, $f$ is G\^{a}teaux differentiable on $\text{int}(\text{dom} f)$ and $\text{dom} f=\text{int}( \text{dom} f)$; ($L_{2}$) The interior of the domain of $f^{*}$, $\text{int}( \text{dom} f^{*})$, is nonempty, $f^{*}$ is G\^{a}teaux differentiable on $\text{int}(\text {dom} f^{*})$ and $\text{dom} f^{*}= \text{int}( \text{dom} f^{*})$. \noindent Since $E$ is reflexive, we know that $(\partial f)^{-1}=\partial f^{*}$ (see \cite{bon}). This, together with ($L_{1}$) and ($L_{2}$), implies the following equalities: $$ \nabla f=(\nabla f^{*})^{-1}, \ \ \ \text{ran} \nabla f=\text{dom} \nabla f^{*}=\text{int}(\text{dom} f^{*})$$ and $$\text {ran} \nabla f^{*}=\text{dom}(\nabla f)=\text {int}(\text{dom} f),$$ where $\text{ran}\nabla f$ denotes the range of $\nabla f$. When the subdifferential of $f$ is single-valued, it coincides with the gradient, $\partial f=\nabla f$ \cite{phe}.
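As a simple illustration of the Bregman distance (\ref{1}) (a routine computation included only for the reader's convenience), take $E=H$ a Hilbert space and $f(x)=\frac{1}{2}\|x\|^{2}$, so $\nabla f(y)=y$; then
\[D_{f}(x,y)=\frac{1}{2}\|x\|^{2}-\frac{1}{2}\|y\|^{2}-\langle y,x-y\rangle=\frac{1}{2}\|x-y\|^{2},\]
i.e., $D_{f}$ recovers half of the squared metric distance. Note, however, that in general $D_{f}$ is neither symmetric nor does it satisfy a triangle inequality, so it is not a metric.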
By Bauschke et al.\ \cite{bau}, the conditions ($L_{1}$) and ($L_{2}$) also yield that the functions $f$ and $f^{*}$ are strictly convex on the interior of their respective domains.\\ If $E$ is a smooth and strictly convex Banach space, then an important and interesting Legendre function is $f(x):=\frac{1}{p}\|x\|^{p} (1<p<\infty).$ In this case the gradient $\nabla f$ of $f$ coincides with the generalized duality mapping of $E$, i.e., $\nabla f=J_{p} (1<p<\infty).$ In particular, $\nabla f=I$, the identity mapping, in Hilbert spaces. From now on we assume that the convex function $f:E\to (-\infty, \infty]$ is Legendre. \begin{definition} Let $f:E\to (-\infty,+\infty]$ be a convex and G\^{a}teaux differentiable function. The Bregman projection of $x\in \text{int}(\text{dom} f)$ onto the nonempty, closed and convex subset $C\subset \text{dom} f$ is the necessarily unique vector $proj_{C}^{f}(x)\in C$ satisfying $$D_{f}(proj_{C}^{f}(x),x)=\inf\{D_{f}(y,x) : y\in C\}.$$ \end{definition} \begin{remark} If $E$ is a smooth and strictly convex Banach space and $f(x)=\|x\|^{2}$ for all $x\in E$, then we have that $\nabla f(x)=2Jx$ for all $x\in E$, where $J$ is the normalized duality mapping from $E$ into $2^{E^{*}}$, and hence $D_{f}(x,y)$ reduces to $\phi(x,y)=\|x\|^{2}-2\langle x,Jy\rangle +\|y\|^{2}$, for all $x,y\in E$, which is the Lyapunov function introduced by Alber \cite{alb}, and the Bregman projection $proj_{C}^{f}(x)$ reduces to the generalized projection $\Pi_{C}(x)$, which is defined by $$\phi(\Pi_{C}(x),x)=\min_{ y\in C} \phi(y,x).$$ If $E=H$, a Hilbert space, $J$ is the identity mapping and hence the Bregman projection $proj_{C}^{f}(x)$ reduces to the metric projection of $H$ onto $C$, $P_{C}(x)$. \end{remark} \begin{definition}\cite{but2} Let $f:E\to(-\infty,+\infty]$ be a convex and G\^{a}teaux differentiable function.
$f$ is called: \begin{enumerate} \item \textit{totally convex} at $x\in \text{int}(\text{dom} f)$ if its modulus of total convexity at $x$, that is, the function $\nu_{f}:\text{int}(\text{dom} f)\times[0,+\infty)\to[0,+\infty)$ defined by $$\nu_{f}(x,t):=\inf\{D_{f}(y,x): y\in \text{dom}f, \|y-x\|=t\},$$ is positive whenever $t>0$; \item totally convex if it is totally convex at every point $x\in \text{int}(\text{dom} f)$; \item totally convex on bounded sets if $\nu_{f}(B,t)$ is positive for any nonempty bounded subset $B$ of $E$ and $t>0$, where the modulus of total convexity of the function $f$ on the set $B$ is the function $\nu_{f}:\text{int}(\text{dom} f)\times [0,+\infty)\to [0,+\infty)$ defined by $$\nu_{f}(B,t):=\inf\{\nu_{f}(x,t): x\in B\cap \text{dom} f\}.$$ \end{enumerate} \end{definition} The set $lev_{\leq}^{f}(r)=\{x\in E: f (x)\leq r\}$ for some $r\in\mathbb{R}$ is called a sublevel set of $f$. \begin{definition}\cite{but2,rei} The function $f:E\to(-\infty,+\infty]$ is called: \begin{enumerate} \item \textit{cofinite} if $\text{dom} f^{*}=E^{*}$; \item \textit{coercive} \cite{hir} if the sublevel sets of $f$ are bounded; equivalently, $$\lim_{\|x\|\to+\infty}f(x)=+\infty;$$ \item \textit{strongly coercive} if $\lim_{\|x\|\to+\infty}\frac{f(x)}{\|x\|}=+\infty$; \item \textit{sequentially consistent} if for any two sequences $\{x_{n}\}$ and $\{y_{n}\}$ in $E$ such that $\{x_{n}\}$ is bounded, $$\lim_{n\to\infty} D_{f}(y_{n},x_{n})=0\Rightarrow \lim_{n\to\infty}\|y_{n}-x_{n}\|=0.$$ \end{enumerate} \end{definition} \begin{lemma}\cite{but}\label{lem6} The function $f$ is totally convex on bounded subsets if and only if it is sequentially consistent. \end{lemma} \begin{lemma}\cite[Proposition 2.3]{rei} If $f:E\to(-\infty,+\infty]$ is Fr\'{e}chet differentiable and totally convex, then $f$ is cofinite.
\end{lemma} \begin{lemma}\cite{but}\label{jad} Let $f:E\to(-\infty,+\infty]$ be a convex function whose domain contains at least two points. Then the following statements hold: \begin{enumerate} \item $f$ is sequentially consistent if and only if it is totally convex on bounded sets; \item If $f$ is lower semicontinuous, then $f$ is sequentially consistent if and only if it is uniformly convex on bounded sets; \item If $f$ is uniformly strictly convex on bounded sets, then it is sequentially consistent, and the converse implication holds when $f$ is lower semicontinuous, Fr\'{e}chet differentiable on its domain and its Fr\'{e}chet derivative $\nabla f$ is uniformly continuous on bounded sets. \end{enumerate} \end{lemma} \begin{lemma}\cite[Proposition 2.1]{rei4}\label{lem7} Let $f:E\to\mathbb{R}$ be uniformly Fr\'{e}chet differentiable and bounded on bounded subsets of $E$. Then $\nabla f$ is uniformly continuous on bounded subsets of $E$ from the strong topology of $E$ to the strong topology of $E^{*}$. \end{lemma} \begin{lemma}\cite[Lemma 3.1]{rei}\label{taz} Let $f:E\to \mathbb{R}$ be a G\^{a}teaux differentiable and totally convex function. If $x_{0}\in E$ and the sequence $\{D_{f}(x_{n},x_{0})\}$ is bounded, then the sequence $\{x_{n}\}$ is also bounded. \end{lemma} Let $T:C\to C$ be a nonlinear mapping. The fixed point set of $T$ is denoted by $F(T)$, that is, $F(T)=\{x\in C: Tx=x\}$. A mapping $T$ is said to be nonexpansive if $\|Tx-Ty\|\leq \|x-y\|$ for all $x,y\in C$. $T$ is said to be quasi-nonexpansive if $F(T)\neq \emptyset$ and $\|Tx-p\|\leq \|x-p\|$ for all $x\in C$ and $p\in F(T)$. A point $p\in C$ is called an asymptotic fixed point of $T$ (see \cite{rei2}) if $C$ contains a sequence $\{x_{n}\}$ which converges weakly to $p$ such that $\lim_{n\to\infty}\|x_{n}-Tx_{n}\|=0$. We denote by $\widehat{F}(T)$ the set of asymptotic fixed points of $T$.
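Returning to total convexity, a concrete instance may be helpful (again with the illustrative choice $E=H$ a Hilbert space and $f(x)=\frac{1}{2}\|x\|^{2}$): here $D_{f}(y,x)=\frac{1}{2}\|y-x\|^{2}$, so
\[\nu_{f}(x,t)=\inf\{D_{f}(y,x):\|y-x\|=t\}=\frac{t^{2}}{2}>0\quad\text{for all }t>0,\]
uniformly in $x$. Hence $f$ is totally convex on bounded sets and, by Lemma \ref{lem6}, sequentially consistent.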
A mapping $T:C\to\text{int}(\text{dom} f)$ with $F(T)\neq\emptyset$ is called: \begin{enumerate} \item quasi-Bregman nonexpansive \cite{rei} with respect to $f$ if $$D_{f}(p,Tx)\leq D_{f}(p,x), \forall x\in C, p\in F(T).$$ \item Bregman relatively nonexpansive \cite{rei} with respect to $f$ if, $$ D_{f}(p,Tx)\leq D_{f}(p,x), \ \ \forall x\in C, p\in F(T), \ \ \ \text{and} \ \ \widehat{F}(T)=F(T). $$ \item Bregman strongly nonexpansive (see \cite{bru,rei}) with respect to $f$ and $\widehat{F}(T)$ if, $$ D_{f}(p,Tx)\leq D_{f}(p,x), \ \ \forall x\in C, p\in \widehat{F}(T) $$ and, whenever $\{x_{n}\}\subset C$ is bounded, $p\in \widehat{F}(T)$, and $$\lim_{n\to\infty}(D_{f}(p,x_{n})-D_{f}(p,Tx_{n}))=0,$$ it follows that $$\lim_{n\to\infty} D_{f}(x_{n},Tx_{n})=0.$$ \item Bregman firmly nonexpansive (for short BFNE) with respect to $f$ if, for all $x,y\in C$, $$\langle \nabla f(Tx)-\nabla f(Ty),Tx-Ty\rangle \leq \langle \nabla f(x)-\nabla f(y),Tx-Ty\rangle,$$ or equivalently, \begin{equation}\label{5} D_{f}(Tx,Ty)+D_{f}(Ty,Tx)+D_{f}(Tx,x)+D_{f}(Ty,y)\leq D_{f}(Tx,y)+D_{f}(Ty,x). \end{equation} \end{enumerate} The existence and approximation of Bregman firmly nonexpansive mappings were studied in \cite{rei2}. It is also known that if $T$ is Bregman firmly nonexpansive and $f$ is a Legendre function which is bounded, uniformly Fr\'{e}chet differentiable and totally convex on bounded subsets of $E$, then $F(T)=\widehat{F}(T)$ and $F(T)$ is closed and convex. It also follows that every Bregman firmly nonexpansive mapping is Bregman strongly nonexpansive with respect to $F(T)=\widehat{F}(T)$. \begin{lemma}\label{niaz}\cite{but} Let $C$ be a nonempty, closed and convex subset of $E$. Let $f:E\to \mathbb{R}$ be a G\^{a}teaux differentiable and totally convex function.
Let $x\in E$. Then \\ 1) $z=proj_{C}^{f}(x)$ if and only if $$\langle \nabla f(x)-\nabla f(z),y-z\rangle \leq 0, \ \ \ \forall y\in C.$$ 2) $D_{f}(y,proj_{C}^{f}(x))+D_{f}(proj_{C}^{f}(x),x)\leq D_{f}(y,x), \ \ \ \forall x\in E, y\in C.$ \end{lemma} Let $f:E\to\mathbb{R}$ be a convex, Legendre and G\^{a}teaux differentiable function. Following \cite{alb} and \cite{cens}, we make use of the function $V_{f}:E\times E^{*}\to [0,\infty)$ associated with $f$, which is defined by $$V_{f}(x,x^{*})=f(x)-\langle x^{*},x\rangle +f^{*}(x^{*}), \ \ \ \ \forall x\in E, x^{*}\in E^{*}.$$ Then $V_{f}$ is nonnegative and $V_{f}(x,x^{*})=D_{f}(x,\nabla f^{*}(x^{*}))$ for all $x\in E$ and $x^{*}\in E^{*}$. Moreover, by the subdifferential inequality, \begin{equation}\label{29} V_{f}(x,x^{*})+\langle y^{*},\nabla f^{*}(x^{*})-x\rangle\leq V_{f}(x,x^{*}+y^{*}) \end{equation} for all $x\in E$ and $x^{*},y^{*}\in E^{*}$ \cite{koh}. In addition, if $f:E\to (-\infty,+\infty]$ is a proper lower semicontinuous function, then $f^{*}:E^{*}\to(-\infty,+\infty]$ is a proper weak$^{*}$ lower semicontinuous and convex function (see \cite{mar}). Hence, $V_{f}$ is convex in the second variable. Thus, for all $z \in E$, $$D_{f}\left(z,\nabla f^{*}\left(\sum_{i=1}^{N}t_{i}\nabla f(x_{i})\right)\right)\leq \sum_{i=1}^{N}t_{i}D_{f}(z,x_{i}),$$ where $\{x_{i}\}_{i=1}^{N}\subset E$ and $\{t_{i}\}_{i=1}^{N}\subset (0,1)$ with $\sum_{i=1}^{N}t_{i}=1$. \begin{lemma}\label{222}\cite{mar} Let $f:E\to(-\infty,+\infty]$ be a bounded, uniformly Fr\'{e}chet differentiable and totally convex function on bounded subsets of $E$. Assume that $\nabla f^{*}$ is bounded on bounded subsets of $\text{dom} f^{*}=E^{*}$ and let $C$ be a nonempty subset of $\text{int}(\text{dom} f)$.
Let $\{T_{i} : i=1,2,\ldots,N\}$ be $N$ Bregman strongly nonexpansive mappings from $C$ into itself satisfying $\cap_{i=1}^{N}\widehat{F}(T_{i})\neq \emptyset.$ Let $T=T_{N}\circ T_{N-1}\circ\ldots \circ T_{1}$; then $T$ is a Bregman strongly nonexpansive mapping and $\widehat{F}(T)=\cap_{i=1}^{N}\widehat{F}(T_{i})$. \end{lemma} \begin{lemma}\label{333}\cite{rei3} Let $C$ be a nonempty, closed and convex subset of $\text{int}(\text{dom} f)$ and $T:C\to C$ be a quasi-Bregman nonexpansive mapping with respect to $f$. Then $F(T)$ is closed and convex. \end{lemma} For solving the mixed equilibrium problem, let us give the following assumptions for the bifunction $\Theta$ on the set $C$: ($A_{1}$) $\Theta (x,x)=0$ for all $x\in C$; ($A_{2}$) $\Theta$ is monotone, i.e., $\Theta (x,y)+\Theta (y,x)\leq0$ for any $x,y\in C$; ($A_{3}$) for each $y\in C, x\mapsto \Theta (x,y)$ is weakly upper semicontinuous; ($A_{4}$) for each $x\in C, y\mapsto \Theta (x,y)$ is convex; ($A_{5}$) for each $x\in C, y\mapsto \Theta (x,y)$ is lower semicontinuous (see \cite{pen}). \begin{definition} Let $C$ be a nonempty, closed and convex subset of a real reflexive Banach space and let $\varphi$ be a lower semicontinuous and convex functional from $C$ to $\mathbb{R}$. Let $\Theta: C\times C\to\mathbb{R}$ be a bifunction satisfying ($A_{1}$)-($A_{5}$). The \textit{mixed resolvent} of $\Theta$ is the operator $Res_{\Theta,\varphi}^{f}:E\to 2^{C}$ defined by \begin{equation} Res_{\Theta,\varphi}^{f}(x)=\{z\in C: \Theta (z,y)+\varphi(y)+\langle \nabla f(z)-\nabla f(x),y-z\rangle \geq \varphi(z), \ \ \forall y\in C\}. \end{equation} \end{definition} In the following two lemmas the ideas of the proofs are the same as in \cite{rei}, but for the reader's convenience we provide them. \begin{lemma} Let $f:E\to(-\infty,+\infty]$ be a coercive and G\^{a}teaux differentiable function. Let $C$ be a closed and convex subset of $E$.
Assume that $\varphi:C\to\mathbb{R}$ is a lower semicontinuous and convex functional and that the bifunction $\Theta:C\times C\to\mathbb{R}$ satisfies conditions ($A_{1}$)-($A_{5}$); then $\text{dom}(Res_{\Theta,\varphi}^{f})=E$. \end{lemma} \begin{proof} Since $f$ is a coercive function, the function $h:E\times E\to (-\infty,+\infty]$ defined by $$h(x,y)=f(y)-f(x)-\langle x^{*},y-x\rangle$$ satisfies the following for all $x^{*}\in E^{*}$ and $y\in C$: $$\lim_{\|x-y\|\to+\infty}\frac{h(x,y)}{\|x-y\|}=+\infty.$$ Then from \cite[Theorem 1]{blu}, there exists $\hat{x}\in C$ such that \begin{equation*} \Theta(\hat{x},y)+\varphi(y)-\varphi(\hat{x})+f(y)-f(\hat{x})-\langle x^{*},y-\hat{x}\rangle\geq0, \end{equation*} for any $y\in C$. So, we have \begin{equation}\label{7} \Theta(\hat{x},y)+\varphi(y)+f(y)-f(\hat{x})-\langle x^{*},y-\hat{x}\rangle\geq\varphi(\hat{x}). \end{equation} We know that inequality (\ref{7}) holds for $y=t\hat{x}+(1-t)\hat{y}$, where $\hat{y}\in C$ and $t\in (0,1)$. Therefore, \begin{eqnarray*} \Theta(\hat{x},t\hat{x}+(1-t)\hat{y})&+&\varphi(t\hat{x}+(1-t)\hat{y})+f(t\hat{x}+(1-t)\hat{y})-f(\hat{x})\\ &&-\langle x^{*},t\hat{x}+(1-t)\hat{y}-\hat{x}\rangle\\ &&\geq\varphi(\hat{x}) \end{eqnarray*} for all $\hat{y}\in C$.
By convexity of $\varphi$ we have \begin{eqnarray} \Theta(\hat{x},t\hat{x}+(1-t)\hat{y})&+&(1-t)\varphi(\hat{y})+f(t\hat{x}+(1-t)\hat{y})-f(\hat{x})\nonumber\\ &&-\langle x^{*},t\hat{x}+(1-t)\hat{y}-\hat{x}\rangle\nonumber\\ &&\geq(1-t)\varphi(\hat{x}).\label{eq4} \end{eqnarray} Since $$f(t\hat{x}+(1-t)\hat{y})-f(\hat{x})\leq \langle \nabla f(t\hat{x}+(1-t)\hat{y}),t\hat{x}+(1-t)\hat{y}-\hat{x}\rangle,$$ we have from (\ref{eq4}) and ($A_{4}$) that \begin{eqnarray*} t\Theta (\hat{x},\hat{x})+(1-t)\Theta (\hat{x},\hat{y})+(1-t)\varphi(\hat{y})&+&\langle \nabla f(t\hat{x}+(1-t)\hat{y}),t\hat{x}+(1-t)\hat{y}-\hat{x}\rangle\\ && -\langle x^{*},t\hat{x}+(1-t)\hat{y}-\hat{x}\rangle\geq (1-t)\varphi(\hat{x}) \end{eqnarray*} for all $\hat{y}\in C$. From ($A_{1}$) we have \begin{eqnarray*} (1-t)\Theta (\hat{x},\hat{y})+(1-t)\varphi(\hat{y})&+&\langle \nabla f(t\hat{x}+(1-t)\hat{y}),(1-t)(\hat{y}-\hat{x})\rangle\\ && -\langle x^{*},(1-t)(\hat{y}-\hat{x})\rangle\geq (1-t)\varphi(\hat{x}). \end{eqnarray*} Equivalently, \begin{eqnarray*} (1-t)[\Theta (\hat{x},\hat{y})+\varphi(\hat{y})&+&\langle \nabla f(t\hat{x}+(1-t)\hat{y}),\hat{y}-\hat{x}\rangle\\ && -\langle x^{*},\hat{y}-\hat{x}\rangle]\geq (1-t)\varphi(\hat{x}). \end{eqnarray*} So, we have $$\Theta (\hat{x},\hat{y})+\varphi(\hat{y})+\langle \nabla f(t\hat{x}+(1-t)\hat{y}),\hat{y}-\hat{x}\rangle-\langle x^{*},\hat{y}-\hat{x}\rangle\geq\varphi(\hat{x}),$$ for all $\hat{y}\in C$. Since $f$ is a G\^{a}teaux differentiable function, it follows that $\nabla f$ is norm-to-weak$^{*}$ continuous (see \cite[Proposition 2.8]{phe}).
Hence, letting $t\to 1^{-}$ we then get $$\Theta (\hat{x},\hat{y})+\varphi(\hat{y})+\langle \nabla f(\hat{x}),\hat{y}-\hat{x}\rangle-\langle x^{*},\hat{y}-\hat{x}\rangle\geq\varphi(\hat{x}).$$ By taking $x^{*}=\nabla f(x)$ we obtain $\hat{x}\in C$ such that $$\Theta (\hat{x},\hat{y})+\varphi(\hat{y})+\langle \nabla f(\hat{x})-\nabla f(x),\hat{y}-\hat{x}\rangle\geq\varphi(\hat{x}),$$ for all $\hat{y}\in C$, i.e., $\hat{x}\in Res_{\Theta,\varphi}^{f}(x)$. So, $\text{dom}(Res_{\Theta,\varphi}^{f})=E$. \end{proof} \begin{lemma}\label{nv} Let $f:E\to(-\infty,+\infty]$ be a Legendre function. Let $C$ be a closed and convex subset of $E$. If the bifunction $\Theta: C\times C\to\mathbb{R}$ satisfies conditions ($A_{1})$-($A_{5}$), then \begin{enumerate} \item $Res_{\Theta,\varphi}^{f}$ is single-valued; \item $Res_{\Theta,\varphi}^{f}$ is a BFNE operator; \item $F\left(Res_{\Theta,\varphi}^{f}\right)=MEP(\Theta)$; \item $MEP(\Theta)$ is closed and convex; \item $D_{f}\left(p, Res_{\Theta,\varphi}^{f}(x)\right)+D_{f}\left(Res_{\Theta,\varphi}^{f}(x),x\right)\leq D_{f}(p,x), \ \forall p\in F\left(Res_{\Theta,\varphi}^{f}\right), x\in E$.
\end{enumerate} \end{lemma} \begin{proof} (1) Let $z_{1}, z_{2}\in Res_{\Theta,\varphi}^{f}(x)$; then by definition of the resolvent we have $$\Theta(z_{1},z_{2})+\varphi (z_{2}) +\langle \nabla f(z_{1})-\nabla f(x),z_{2}-z_{1}\rangle\geq \varphi(z_{1})$$ and $$ \Theta (z_{2},z_{1})+\varphi(z_{1})+\langle \nabla f(z_{2})-\nabla f(x),z_{1}-z_{2}\rangle \geq \varphi(z_{2}).$$ Adding these two inequalities, we obtain $$\Theta (z_{1},z_{2})+\Theta(z_{2},z_{1})+\varphi(z_{1})+\varphi(z_{2})+\langle \nabla f(z_{2})-\nabla f(z_{1}),z_{1}-z_{2}\rangle\geq \varphi(z_{1})+\varphi(z_{2}).$$ So, $$\Theta (z_{1},z_{2})+\Theta(z_{2},z_{1})+\langle \nabla f(z_{2})-\nabla f(z_{1}),z_{1}-z_{2}\rangle\geq 0.$$ By ($A_{2}$), we have $$\langle \nabla f(z_{2})-\nabla f(z_{1}),z_{1}-z_{2}\rangle\geq0.$$ Since $f$ is Legendre, it is strictly convex, so $\nabla f$ is strictly monotone and hence $z_{1}=z_{2}$. It follows that $Res_{\Theta,\varphi}^{f}$ is single-valued. \\ \\ (2) Let $x,y\in E$; we then have \begin{eqnarray} \Theta(Res_{\Theta,\varphi}^{f}(x),Res_{\Theta,\varphi}^{f}(y))&+&\varphi(Res_{\Theta,\varphi}^{f}(y)) \nonumber\\ &&+\langle \nabla f(Res_{\Theta,\varphi}^{f}(x))-\nabla f(x),Res_{\Theta,\varphi}^{f}(y)-Res_{\Theta,\varphi}^{f}(x)\rangle\nonumber\\ &&\geq \varphi(Res_{\Theta,\varphi}^{f}(x))\label{eq1} \end{eqnarray} and \begin{eqnarray} \Theta(Res_{\Theta,\varphi}^{f}(y),Res_{\Theta,\varphi}^{f}(x))&+&\varphi(Res_{\Theta,\varphi}^{f}(x))\nonumber\\ &&+\langle \nabla f(Res_{\Theta,\varphi}^{f}(y))-\nabla f(y),Res_{\Theta,\varphi}^{f}(x)-Res_{\Theta,\varphi}^{f}(y)\rangle\nonumber\\ &&\geq \varphi(Res_{\Theta,\varphi}^{f}(y)).\label{eq2} \end{eqnarray} Adding the inequalities (\ref{eq1}) and (\ref{eq2}), we have \begin{eqnarray*} &&\Theta (Res_{\Theta,\varphi}^{f}(x),Res_{\Theta,\varphi}^{f}(y))+\Theta(Res_{\Theta,\varphi}^{f}(y),Res_{\Theta,\varphi}^{f}(x))\\ &&+\langle \nabla f(Res_{\Theta,\varphi}^{f}(x))-\nabla f(x)+\nabla
f(y)-\nabla f(Res_{\Theta,\varphi}^{f}(y)),Res_{\Theta,\varphi}^{f}(y)-Res_{\Theta,\varphi}^{f}(x)\rangle\geq0. \end{eqnarray*} By ($A_{2}$), we obtain \begin{eqnarray*} &&\langle \nabla f(Res_{\Theta,\varphi}^{f}(x))-\nabla f(Res_{\Theta,\varphi}^{f}(y)),Res_{\Theta,\varphi}^{f}(x)-Res_{\Theta,\varphi}^{f}(y)\rangle\\ &&\leq\langle \nabla f(x)-\nabla f(y),Res_{\Theta,\varphi}^{f}(x)-Res_{\Theta,\varphi}^{f}(y)\rangle. \end{eqnarray*} This means that $Res_{\Theta,\varphi}^{f}$ is a BFNE operator.\\ \\ (3) \begin{eqnarray*} x\in F(Res_{\Theta,\varphi}^{f})&\Leftrightarrow& x=Res_{\Theta,\varphi}^{f}(x)\\ &\Leftrightarrow& \Theta (x,y)+\varphi(y)+\langle \nabla f(x)-\nabla f(x),y-x\rangle\geq \varphi(x), \ \ \forall y\in C\\ &\Leftrightarrow& \Theta (x,y)+\varphi(y)\geq \varphi(x), \ \ \forall y\in C\\ &\Leftrightarrow& x\in MEP(\Theta).\\ \end{eqnarray*} \\ (4) Since $Res_{\Theta,\varphi}^{f}$ is a BFNE operator, it follows from \cite[Lemma 1.3.1]{rei3} that $F(Res_{\Theta,\varphi}^{f})$ is a closed and convex subset of $C$. So, from (3), $MEP(\Theta)=F(Res_{\Theta,\varphi}^{f})$ is a closed and convex subset of $C$.\\ \\ (5) Since $Res_{\Theta,\varphi}^{f}$ is a BFNE operator, we have from (\ref{5}) that for all $x,y\in E$ \begin{eqnarray*} &&D_{f}(Res_{\Theta,\varphi}^{f}(x),Res_{\Theta,\varphi}^{f}(y))+D_{f}(Res_{\Theta,\varphi}^{f}(y),Res_{\Theta,\varphi}^{f}(x))\\ &&\leq D_{f}(Res_{\Theta,\varphi}^{f}(x),y)-D_{f}(Res_{\Theta,\varphi}^{f}(x),x)+D_{f}(Res_{\Theta,\varphi}^{f}(y),x)-D_{f}(Res_{\Theta,\varphi}^{f}(y),y). \end{eqnarray*} Letting $y=p\in F(Res_{\Theta,\varphi}^{f})$, we get \begin{eqnarray*} &&D_{f}(Res_{\Theta,\varphi}^{f}(x),p)+D_{f}(p,Res_{\Theta,\varphi}^{f}(x))\\ &&\leq D_{f}(Res_{\Theta,\varphi}^{f}(x),p)-D_{f}(Res_{\Theta,\varphi}^{f}(x),x)+D_{f}(p,x)-D_{f}(p,p).
\end{eqnarray*} Hence, $$D_{f}(p,Res_{\Theta,\varphi}^{f}(x))+D_{f}(Res_{\Theta,\varphi}^{f}(x),x)\leq D_{f}(p,x).$$
\end{proof}
\begin{lemma}\cite{xu}\label{444} Assume that $\{x_{n}\}$ is a sequence of nonnegative real numbers such that $$x_{n+1}\leq (1-\alpha_{n})x_{n}+\beta_{n}, \ \ \ \ \forall n\geq1,$$ where $\{\alpha_{n}\}$ is a sequence in $(0,1)$ and $\{\beta_{n}\}$ is a sequence such that
\begin{enumerate}
\item $\sum_{n=1}^{\infty}\alpha_{n}=+\infty;$
\item $\limsup_{n\to\infty}\frac{\beta_{n}}{\alpha_{n}}\leq 0$ or $\sum_{n=1}^{\infty}|\beta_{n}|<+\infty.$
\end{enumerate}
Then $\lim_{n\to\infty}x_{n}=0$. \end{lemma}
\section{Main result}
\begin{theorem}\label{mt} Let $E$ be a real reflexive Banach space, $C$ be a nonempty, closed and convex subset of $E$. Let $f:E\to \mathbb{R}$ be a coercive Legendre function which is bounded, uniformly Fr\'{e}chet differentiable and totally convex on bounded subsets of $E$. Let $T_{i}:C\to C$, for $i=1,2,\ldots, N,$ be a finite family of Bregman strongly nonexpansive mappings with respect to $f$ such that $F(T_{i})=\widehat{F}(T_{i})$ and each $T_{i}$ is uniformly continuous. Let $\Theta:C\times C\to\mathbb{R}$ satisfy conditions ($A_{1}$)-($A_{5}$) and assume that $\left(\cap_{i=1}^{N}F(T_{i})\right)\cap MEP(\Theta)$ is nonempty and bounded. Let $\{x_{n}\}$ be a sequence generated by \begin{eqnarray} x_{1}&=&x\in C \ \ \ \ \ \text{chosen arbitrarily},\nonumber\\ z_{n}&=&Res_{\Theta,\varphi}^{f}(x_{n}),\nonumber\\ y_{n}&=&proj_{C}^{f}\nabla f^{*}(\beta_{n}\nabla f(x_{n})+(1-\beta_{n})\nabla f(T(z_{n}))),\nonumber\\ x_{n+1}&=&proj_{C}^{f}\nabla f^{*}(\alpha_{n}\nabla f(x_{n})+(1-\alpha_{n})\nabla f (T(y_{n}))),\label{main} \end{eqnarray} where $T=T_{N}\circ T_{N-1}\circ\ldots\circ T_{1}$, $\{\alpha_{n}\}, \{\beta_{n}\}\subset (0,1)$ satisfying $\lim_{n\to\infty}\alpha_{n}=0$ and $\sum_{n=1}^{\infty}\alpha_{n}=\infty$.
Then $\{x_{n}\}$ converges strongly to $proj_{(\cap_{i=1}^{N}F(T_{i}))\cap MEP(\Theta)}x$. \end{theorem}
\begin{proof}
We note from Lemma \ref{333} that $F(T_{i})$ is closed and convex for each $i\in\{1,2,\ldots,N\}$, and hence $\cap_{i=1}^{N}F(T_{i})$ is closed and convex. \\ Let $p=proj_{(\cap_{i=1}^{N}F(T_{i}))\cap MEP(\Theta)}x\in (\cap_{i=1}^{N}F(T_{i}))\cap MEP(\Theta)$. Then $p\in \cap_{i=1}^{N}F(T_{i})$ and $p\in MEP(\Theta)$. Now, by using (\ref{main}) and Lemma \ref{nv}, we have $D_{f}(p,z_{n})=D_{f}(p,Res_{\Theta,\varphi}^{f}(x_{n}))\leq D_{f}(p,x_{n})$, so \begin{eqnarray} D_{f}(p,y_{n})&=&D_{f}(p,proj_{C}^{f}\nabla f^{*}(\beta_{n}\nabla f(x_{n})+(1-\beta_{n})\nabla f(T(z_{n}))))\nonumber\\ &\leq&D_{f}(p,\nabla f^{*}(\beta_{n}\nabla f(x_{n})+(1-\beta_{n})\nabla f(T(z_{n}))))\nonumber\\ &\leq &\beta_{n}D_{f}(p,x_{n})+(1-\beta_{n})D_{f}(p,T(z_{n}))\nonumber\\ &\leq&\beta_{n}D_{f}(p,x_{n})+(1-\beta_{n})D_{f}(p,z_{n})\nonumber\\ &\leq &\beta_{n}D_{f}(p,x_{n})+(1-\beta_{n})D_{f}(p,x_{n})\nonumber\\ &=& D_{f}(p,x_{n}).\label{3.1} \end{eqnarray} By (\ref{main}) and (\ref{3.1}), we have \begin{eqnarray*} D_{f}(p,x_{n+1})&=&D_{f}(p,proj_{C}^{f}\nabla f^{*}(\alpha_{n}\nabla f(x_{n})+(1-\alpha_{n})\nabla f (T(y_{n}))))\\ &\leq&D_{f}(p,\nabla f^{*}(\alpha_{n}\nabla f(x_{n})+(1-\alpha_{n})\nabla f (T(y_{n}))))\\ &\leq& \alpha_{n}D_{f}(p,x_{n})+(1-\alpha_{n})D_{f}(p,T(y_{n}))\\ &\leq& \alpha_{n}D_{f}(p,x_{n})+(1-\alpha_{n})D_{f}(p,y_{n})\\ &\leq& \alpha_{n}D_{f}(p,x_{n})+(1-\alpha_{n})D_{f}(p,x_{n})\\ &=& D_{f}(p,x_{n}). \end{eqnarray*} Hence $\{D_{f}(p,x_{n})\}$ and $\{D_{f}(p,T(y_{n}))\}$ are bounded. Moreover, by Lemma \ref{taz} we get that the sequences $\{x_{n}\}$ and $\{T(y_{n})\}$ are bounded.
From the fact that $\alpha_{n}\to0$ as $n\to\infty$ and Lemma \ref{niaz}, we get
\begin{eqnarray*}
D_{f}(T(y_{n}),x_{n+1})&=& D_{f}(T(y_{n}),proj_{C}^{f}\nabla f^{*}(\alpha_{n}\nabla f(x_{n})+(1-\alpha_{n})\nabla f (T(y_{n}))))\\
&\leq&D_{f}(T(y_{n}), \nabla f^{*}(\alpha_{n}\nabla f(x_{n})+(1-\alpha_{n})\nabla f (T(y_{n}))))\\
&\leq& \alpha_{n}D_{f}(T(y_{n}), x_{n})+(1-\alpha_{n})D_{f}(T(y_{n}),T(y_{n}))\\
&=&\alpha_{n}D_{f}(T(y_{n}), x_{n})\to 0 \ \ \text{as} \ \ n\to\infty.
\end{eqnarray*}
Therefore, by Lemma \ref{lem6}, we have
\begin{equation}\label{rd} \|x_{n+1}-T(y_{n})\|\to0 \ \ \ \text{as} \ \ \ n\to\infty. \end{equation}
On the other hand, $\{D_{f}(p,x_{n})\}$ is nonincreasing and bounded below, hence convergent. Combining $D_{f}(p,x_{n+1})\leq\alpha_{n}D_{f}(p,x_{n})+(1-\alpha_{n})D_{f}(p,z_{n})$ with $D_{f}(p,z_{n})\leq D_{f}(p,x_{n})$ and $\alpha_{n}\to0$, we see that $D_{f}(p,z_{n})$ converges to the same limit, so that $\lim_{n\to\infty}(D_{f}(p,x_{n})-D_{f}(p,z_{n}))=0$. Hence, by Lemma \ref{nv}(5),
\begin{eqnarray*}
\lim_{n\to\infty} D_{f}(z_{n},x_{n})&=&\lim_{n\to\infty} D_{f}(Res_{\Theta,\varphi}^{f}(x_{n}),x_{n})\\
&\leq& \lim_{n\to\infty}(D_{f}(p,x_{n})-D_{f}(p,z_{n}))\\
&=&0.
\end{eqnarray*}
By Lemma \ref{lem6}, we obtain
\begin{equation}\label{3.8} \lim_{n\to\infty}\|x_{n}-z_{n}\|=0. \end{equation}
Since $f$ is uniformly Fr\'{e}chet differentiable on bounded subsets of $E$, by Lemma \ref{lem7}, $\nabla f$ is norm-to-norm uniformly continuous on bounded subsets of $E$. So,
\begin{equation}\label{3.9} \lim_{n\to\infty}\|\nabla f(x_{n})-\nabla f(z_{n})\|_{*}=0. \end{equation}
Since $f$ is uniformly Fr\'{e}chet differentiable, it is also uniformly continuous on bounded sets, and we get
\begin{equation}\label{3.10} \lim_{n\to\infty}|f(x_{n})-f(z_{n})|=0.
\end{equation}
By the definition of the Bregman distance, we have
\begin{eqnarray*}
&&D_{f}(p,x_{n})-D_{f}(p,z_{n})\\
&&=f(p)-f(x_{n})-\langle \nabla f(x_{n}),p-x_{n}\rangle -f(p)+f(z_{n})+\langle \nabla f(z_{n}),p-z_{n}\rangle\\
&&=f(z_{n})-f(x_{n})+\langle \nabla f(z_{n}),p-z_{n}\rangle-\langle \nabla f(x_{n}),p-x_{n}\rangle\\
&&=f(z_{n})-f(x_{n})+\langle \nabla f(z_{n}),x_{n}-z_{n}\rangle+\langle \nabla f(z_{n})-\nabla f(x_{n}),p-x_{n}\rangle,
\end{eqnarray*}
for each $p\in \cap_{i=1}^{N}F(T_{i})$. By (\ref{3.8})-(\ref{3.10}), we obtain
\begin{equation}\label{3.11} \lim_{n\to\infty}(D_{f}(p,x_{n})-D_{f}(p,z_{n}))=0. \end{equation}
Consequently,
\begin{eqnarray*}
D_{f}(z_{n},y_{n})&\leq&D_{f}(p,y_{n})-D_{f}(p,z_{n})\\
&=&D_{f}(p,proj_{C}^{f}\nabla f^{*}(\beta_{n}\nabla f(x_{n})+(1-\beta_{n})\nabla f(T(z_{n}))))-D_{f}(p,z_{n})\\
&\leq&D_{f}(p,\nabla f^{*}(\beta_{n}\nabla f(x_{n})+(1-\beta_{n})\nabla f(T(z_{n}))))-D_{f}(p,z_{n})\\
&\leq& \beta_{n}D_{f}(p,x_{n})+(1-\beta_{n})D_{f}(p,T(z_{n}))-D_{f}(p,z_{n})\\
&\leq&\beta_{n}D_{f}(p,x_{n})+(1-\beta_{n})D_{f}(p,z_{n})-D_{f}(p,z_{n})\\
&=&\beta_{n}(D_{f}(p,x_{n})-D_{f}(p,z_{n}))\to 0 \ \ \text{as} \ \ n\to\infty,
\end{eqnarray*}
by (\ref{3.11}). Hence, by Lemma \ref{lem6},
\begin{equation}\label{3.13} \lim_{n\to\infty}\|z_{n}-y_{n}\|=0. \end{equation}
Note that $$\|x_{n}-y_{n}\|\leq \|x_{n}-z_{n}\|+\|z_{n}-y_{n}\|.$$ By applying (\ref{3.8}) and (\ref{3.13}), we can write \begin{equation}\label{15} \lim_{n\to\infty}\|x_{n}-y_{n}\|=0. \end{equation} Now, we claim that \begin{equation}\label{22} \lim_{n\to\infty}\|x_{n}-Tx_{n}\|=0. \end{equation} Since $f$ is uniformly Fr\'{e}chet differentiable on bounded subsets of $E$, by Lemma \ref{lem7}, $\nabla f$ is norm-to-norm uniformly continuous on bounded subsets of $E$. So, \begin{equation}\label{16} \lim_{n\to\infty}\|\nabla f(x_{n})-\nabla f(y_{n})\|_{*}=0.
\end{equation}
Since $f$ is uniformly Fr\'{e}chet differentiable, it is also uniformly continuous on bounded sets, and we get
\begin{equation}\label{17} \lim_{n\to\infty}|f(x_{n})-f(y_{n})|=0. \end{equation}
By the definition of the Bregman distance, we have
\begin{eqnarray*} &&D_{f}(p,x_{n})-D_{f}(p,y_{n})\\ &&=f(p)-f(x_{n})-\langle \nabla f(x_{n}),p-x_{n}\rangle -f(p)+f(y_{n})+\langle \nabla f(y_{n}),p-y_{n}\rangle\\ &&=f(y_{n})-f(x_{n})+\langle \nabla f(y_{n}),p-y_{n}\rangle-\langle \nabla f(x_{n}),p-x_{n}\rangle\\ &&=f(y_{n})-f(x_{n})+\langle \nabla f(y_{n}),x_{n}-y_{n}\rangle+\langle \nabla f(y_{n})-\nabla f(x_{n}),p-x_{n}\rangle, \end{eqnarray*}
for each $p\in \cap_{i=1}^{N}F(T_{i})$. By (\ref{15})-(\ref{17}), we obtain \begin{equation}\label{18} \lim_{n\to\infty}(D_{f}(p,x_{n})-D_{f}(p,y_{n}))=0. \end{equation}
Consequently,
\begin{eqnarray*} D_{f}(y_{n},x_{n+1})&\leq&D_{f}(p,x_{n+1})-D_{f}(p,y_{n})\\ &=&D_{f}(p,proj_{C}^{f}\nabla f^{*}(\alpha_{n}\nabla f(x_{n})+(1-\alpha_{n})\nabla f(T(y_{n}))))-D_{f}(p,y_{n})\\ &\leq&D_{f}(p,\nabla f^{*}(\alpha_{n}\nabla f(x_{n})+(1-\alpha_{n})\nabla f(T(y_{n}))))-D_{f}(p,y_{n})\\ &\leq& \alpha_{n}D_{f}(p,x_{n})+(1-\alpha_{n})D_{f}(p,T(y_{n}))-D_{f}(p,y_{n})\\ &\leq&\alpha_{n}D_{f}(p,x_{n})+(1-\alpha_{n})D_{f}(p,y_{n})-D_{f}(p,y_{n})\\ &=&\alpha_{n}(D_{f}(p,x_{n})-D_{f}(p,y_{n}))\to 0 \ \ \text{as} \ \ n\to\infty,
\end{eqnarray*}
by (\ref{18}). By Lemma \ref{lem6}, we have $$\lim_{n\to\infty}\|y_{n}-x_{n+1}\|=0.$$ From the above and (\ref{rd}), we can write \begin{eqnarray} \|y_{n}-T(y_{n})\|&\leq&\|y_{n}-x_{n+1}\|+\|x_{n+1}-T(y_{n})\|\nonumber\\ &\to&0\label{41} \end{eqnarray} as $n\to\infty$.
By applying the triangle inequality, we get $$\|x_{n}-T(x_{n})\|\leq \|x_{n}-y_{n}\|+\|y_{n}-T(y_{n})\|+\|T(y_{n})-T(x_{n})\|.$$ By (\ref{15}), (\ref{41}) and since $T_{i}$ is uniformly continuous for each $i\in\{1,2,\ldots,N\}$, we have $$\lim_{n\to\infty}\|x_{n}-T(x_{n})\|=0,$$ which proves the claim (\ref{22}).\\ Since $E$ is reflexive and $\{x_{n}\}$ is bounded, we may pass to a subsequence $\{x_{n_{i}}\}$ with $x_{n_{i}}\rightharpoonup q$ for some $q\in C$; by (\ref{3.8}), $z_{n_{i}}\rightharpoonup q$ as well. From (\ref{3.9}), $$\lim_{n\to\infty}\|\nabla f(z_{n})-\nabla f(x_{n})\|_{*}=0.$$ We now prove that $q\in MEP(\Theta)$. Since $z_{n}=Res_{\Theta,\varphi}^{f}(x_{n})$, we have $$\Theta(z_{n},z)+\varphi(z)+\langle \nabla f(z_{n})-\nabla f(x_{n}),z-z_{n}\rangle\geq \varphi(z_{n}), \ \ \forall z\in C.$$ From ($A_{2}$), we have $$\Theta (z,z_{n})\leq -\Theta (z_{n},z)\leq \varphi(z)-\varphi(z_{n})+\langle \nabla f(z_{n})-\nabla f(x_{n}),z-z_{n}\rangle, \ \ \forall z\in C.$$ Hence, $$\Theta(z,z_{n_{i}})\leq \varphi(z)- \varphi(z_{n_{i}})+\langle \nabla f(z_{n_{i}})-\nabla f(x_{n_{i}}),z-z_{n_{i}}\rangle, \ \ \forall z\in C.$$ Since $z_{n_{i}}\rightharpoonup q$, and from the weak lower semicontinuity of $\varphi$ and of $\Theta (x,y)$ in the second variable $y$, we have $$\Theta (z,q)+\varphi(q)-\varphi(z)\leq0, \ \ \ \forall z\in C.$$ For $t$ with $0< t\leq 1$ and $z\in C$, let $z_{t}=tz+(1-t)q$. Since $z\in C$ and $q\in C$, we have $z_{t}\in C$ and hence $\Theta(z_{t},q)+\varphi(q)-\varphi(z_{t})\leq 0$. So, from the convexity of $\varphi$ and of the equilibrium bifunction $\Theta(x,y)$ in the second variable $y$, we have \begin{eqnarray*} 0&=&\Theta(z_{t},z_{t})+\varphi(z_{t})-\varphi(z_{t})\\ &\leq& t\Theta(z_{t},z)+(1-t)\Theta(z_{t},q)+t\varphi(z)+(1-t)\varphi(q)-\varphi(z_{t})\\ &\leq& t[\Theta(z_{t},z)+\varphi(z)-\varphi(z_{t})]. \end{eqnarray*} Therefore, $\Theta(z_{t},z)+\varphi(z)-\varphi(z_{t})\geq0$. Letting $t\to0^{+}$, we obtain $$\Theta(q,z)+\varphi(z)-\varphi(q)\geq0, \ \ \ \forall z\in C.$$ Hence we have $q\in MEP(\Theta)$.
We have thus shown that $q\in(\cap_{i=1}^{N}F(T_{i}))\cap MEP(\Theta)$; the first inclusion holds since $\|x_{n}-T(x_{n})\|\to0$ by (\ref{22}) and $F(T_{i})=\widehat{F}(T_{i})$ for each $i$.\\ Since $E$ is reflexive and $\{x_{n}\}$ is bounded, there exists a subsequence $\{x_{n_{k}}\}$ of $\{x_{n}\}$ such that $x_{n_{k}}\rightharpoonup q\in C$ and $$\limsup_{n\to\infty}\langle \nabla f(x_{n})-\nabla f(p),x_{n}-p\rangle=\lim_{k\to\infty}\langle \nabla f(x_{n_{k}})-\nabla f(p),x_{n_{k}}-p\rangle.$$ It follows from the definition of the Bregman projection that \begin{equation}\label{60} \limsup_{n\to\infty}\langle \nabla f(x_{n})-\nabla f(p),x_{n}-p\rangle\leq 0. \end{equation} From (\ref{29}), we obtain \begin{eqnarray*} D_{f}(p,x_{n+1})&=&D_{f}(p,proj_{C}^{f}\nabla f^{*}(\alpha_{n}\nabla f(x_{n})+(1-\alpha_{n})\nabla f(T(y_{n}))))\\ &\leq&D_{f}(p,\nabla f^{*}(\alpha_{n}\nabla f(x_{n})+(1-\alpha_{n})\nabla f(T(y_{n}))))\\ &= &V_{f}(p,\alpha_{n}\nabla f(x_{n})+(1-\alpha_{n})\nabla f(T(y_{n})))\\ &\leq& V_{f}(p,\alpha_{n}\nabla f(x_{n})+(1-\alpha_{n})\nabla f(T(y_{n}))-\alpha_{n}(\nabla f(x_{n})-\nabla f(p)))\\ &&+\alpha_{n}\langle \nabla f(x_{n})-\nabla f(p),x_{n+1}-p\rangle\\ &=&V_{f}(p,\alpha_{n}\nabla f(p)+(1-\alpha_{n})\nabla f(T(y_{n})))\\ &&+ \alpha_{n}\langle \nabla f(x_{n})-\nabla f(p),x_{n+1}-p\rangle\\ &\leq&\alpha_{n} V_{f}(p,\nabla f(p))+(1-\alpha_{n})V_{f}(p,\nabla f(T(y_{n})))\\ &&+ \alpha_{n}\langle \nabla f(x_{n})-\nabla f(p),x_{n+1}-p\rangle\\ &=&(1-\alpha_{n})D_{f}(p,T(y_{n}))+\alpha_{n}\langle \nabla f(x_{n})-\nabla f(p),x_{n+1}-p\rangle\\ &\leq&(1-\alpha_{n})D_{f}(p,x_{n})+\alpha_{n}\langle\nabla f(x_{n})-\nabla f(p),x_{n+1}-p\rangle. \end{eqnarray*} By Lemma \ref{444} and (\ref{60}), we conclude that $\lim_{n\to\infty}D_{f}(p,x_{n})=0$. Therefore, by Lemma \ref{lem6}, $x_{n}\to p$. This completes the proof.
\end{proof}
Setting $\beta_{n}=0$ for all $n\in\mathbb{N}$ in Theorem \ref{mt} yields a generalization of a result of Zegeye \cite{zeg}. If in Theorem \ref{mt} we consider a single Bregman strongly nonexpansive mapping, we have the following corollary.
\begin{corollary} Let $E$ be a real reflexive Banach space, $C$ be a nonempty, closed and convex subset of $E$. Let $f:E\to \mathbb{R}$ be a coercive Legendre function which is bounded, uniformly Fr\'{e}chet differentiable and totally convex on bounded subsets of $E$. Let $T$ be a Bregman strongly nonexpansive mapping with respect to $f$ such that $F(T)=\widehat{F}(T)$ and $T$ is uniformly continuous. Let $\Theta:C\times C\to\mathbb{R}$ satisfy conditions ($A_{1}$)-($A_{5}$) and assume that $F(T)\cap MEP(\Theta)$ is nonempty and bounded. Let $\{x_{n}\}$ be a sequence generated by \begin{eqnarray*} x_{1}&=&x\in C \ \ \ \ \ \text{chosen arbitrarily},\nonumber\\ z_{n}&=&Res_{\Theta,\varphi}^{f}(x_{n}),\nonumber\\ y_{n}&=&proj_{C}^{f}\nabla f^{*}(\beta_{n}\nabla f(x_{n})+(1-\beta_{n})\nabla f(T(z_{n}))),\nonumber\\ x_{n+1}&=&proj_{C}^{f}\nabla f^{*}(\alpha_{n}\nabla f(x_{n})+(1-\alpha_{n})\nabla f (T(y_{n}))), \end{eqnarray*} where $\{\alpha_{n}\},\{\beta_{n}\}\subset (0,1)$ satisfying $\lim_{n\to\infty}\alpha_{n}=0$ and $\sum_{n=1}^{\infty}\alpha_{n}=\infty$. Then $\{x_{n}\}$ converges strongly to $proj_{F(T)\cap MEP(\Theta)}x$. \end{corollary}
If in Theorem \ref{mt} we assume that $E$ is a uniformly smooth and uniformly convex Banach space and $f(x):=\frac{1}{p}\|x\|^{p} \ \ (1<p<\infty)$, then $\nabla f=J_{p}$, where $J_{p}$ is the generalized duality mapping from $E$ onto $E^{*}$. Thus, we get the following corollary.
\begin{corollary} Let $E$ be a uniformly smooth and uniformly convex Banach space and $f(x):=\frac{1}{p}\|x\|^{p} \ \ (1<p<\infty)$.
Let $C$ be a nonempty, closed and convex subset of $\text{int}(\text{dom} f)$ and $T_{i}:C\to C$, for $i=1,2,\ldots, N,$ be a finite family of Bregman strongly nonexpansive mappings with respect to $f$ such that $F(T_{i})=\widehat{F}(T_{i})$ and each $T_{i}$ is uniformly continuous. Let $\Theta:C\times C\to\mathbb{R}$ satisfy conditions ($A_{1}$)-($A_{5}$) and assume that $\left(\cap_{i=1}^{N}F(T_{i})\right)\cap MEP(\Theta)$ is nonempty and bounded. Let $\{x_{n}\}$ be a sequence generated by \begin{eqnarray*} x_{1}&=&x\in C \ \ \ \ \ \text{chosen arbitrarily},\nonumber\\ z_{n}&=&Res_{\Theta,\varphi}^{f}(x_{n}),\nonumber\\ y_{n}&=&proj_{C}^{f}J_{p}^{-1}(\beta_{n}J_{p}(x_{n})+(1-\beta_{n})J_{p}(T(z_{n}))),\nonumber\\ x_{n+1}&=&proj_{C}^{f}J_{p}^{-1}(\alpha_{n}J_{p}(x_{n})+(1-\alpha_{n})J_{p} (T(y_{n}))), \end{eqnarray*} where $T=T_{N}\circ T_{N-1}\circ\ldots\circ T_{1}$, $\{\alpha_{n}\}, \{\beta_{n}\}\subset (0,1)$ satisfying $\lim_{n\to\infty}\alpha_{n}=0$ and $\sum_{n=1}^{\infty}\alpha_{n}=\infty$. Then $\{x_{n}\}$ converges strongly to $proj_{(\cap_{i=1}^{N}F(T_{i}))\cap MEP(\Theta)}x$.\\ \end{corollary}
\begin{thebibliography}{99}
\bibitem{alb} Y. I. Alber, \textit{Metric and generalized projection operators in Banach spaces: properties and applications}, in: A. G. Kartsatos (Ed.), Theory and Applications of Nonlinear Operators of Accretive and Monotone Type, Marcel Dekker, New York, (1996) 15--50.
\bibitem{blu} E. Blum, W. Oettli, \textit{From optimization and variational inequalities to equilibrium problems}, Math. Student \textbf{63} (1994) 123--145.
\bibitem{asl2} M. Aslam Noor, \textit{Generalized mixed quasi-equilibrium problems with trifunction}, Appl. Math. Lett. \textbf{18} (2005) 695--700.
\bibitem{asl} M. Aslam Noor, W. Oettli, \textit{On general nonlinear complementarity problems and quasi equilibria}, Matematiche (Catania) \textbf{49} (1994) 313--331.
\bibitem{bau} H. H. Bauschke, J. M. Borwein, P. L.
Combettes, \textit{Essential smoothness, essential strict convexity, and Legendre functions in Banach spaces}, Commun. Contemp. Math. \textbf{3} (2001) 615--647.
\bibitem{bon} J. F. Bonnans, A. Shapiro, \textit{Perturbation analysis of optimization problems}, New York (NY): Springer, 2000.
\bibitem{bru} R. E. Bruck, S. Reich, \textit{Nonexpansive projections and resolvents of accretive operators in Banach spaces}, Houston J. Math. \textbf{3} (1977) 459--470.
\bibitem{but} D. Butnariu, E. Resmerita, \textit{Bregman distances, totally convex functions and a method for solving operator equations in Banach spaces}, Abstr. Appl. Anal., Art. ID 84919 (2006) 1--39.
\bibitem{but2} D. Butnariu, A. N. Iusem, \textit{Totally Convex Functions for Fixed Points Computation and Infinite Dimensional Optimization}, Applied Optimization \textbf{40}, Kluwer Academic, Dordrecht (2000).
\bibitem{cen} L. C. Ceng, J. C. Yao, \textit{A hybrid iterative scheme for mixed equilibrium problems and fixed point problems}, J. Comput. Appl. Math. \textbf{214} (2008) 186--201.
\bibitem{cens} Y. Censor, A. Lent, \textit{An iterative row-action method for interval convex programming}, J. Optim. Theory Appl. \textbf{34} (1981) 321--353.
\bibitem{cha} O. Chadli, N. C. Wong, J. C. Yao, \textit{Equilibrium problems with applications to eigenvalue problems}, J. Optim. Theory Appl. \textbf{117} (2003) 245--266.
\bibitem{cha2} O. Chadli, S. Schaible, J. C. Yao, \textit{Regularized equilibrium problems with an application to noncoercive hemivariational inequalities}, J. Optim. Theory Appl. \textbf{121} (2004) 571--596.
\bibitem{hir} J. B. Hiriart-Urruty, C. Lemar\'{e}chal, \textit{Convex Analysis and Minimization Algorithms II}, Grundlehren der mathematischen Wissenschaften \textbf{306}, Springer-Verlag, (1993).
\bibitem{kas} G. Kassay, S. Reich, S. Sabach, \textit{Iterative methods for solving systems of variational inequalities in reflexive Banach spaces}, SIAM J. Optim. \textbf{21} (2011) 1319--1344.
\bibitem{koh} F. Kohsaka, W. Takahashi, \textit{Proximal point algorithms with Bregman functions in Banach spaces}, J. Nonlinear Convex Anal. \textbf{6} (2005) 505--523.
\bibitem{kum} W. Kumam, U. Witthayarat, P. Kumam, S. Suantai, K. Wattanawitoon, \textit{Convergence theorem for equilibrium problem and Bregman strongly nonexpansive mappings in Banach spaces}, Optimization, DOI: 10.1080/02331934.2015.1020942.
\bibitem{kon} I. V. Konnov, S. Schaible, J. C. Yao, \textit{Combined relaxation method for mixed equilibrium problems}, J. Optim. Theory Appl. \textbf{126} (2005) 309--322.
\bibitem{mar} V. Mart\'{i}n-M\'{a}rquez, S. Reich, S. Sabach, \textit{Iterative methods for approximating fixed points of Bregman nonexpansive operators}, Discrete Contin. Dyn. Syst. Ser. S \textbf{6} (2013) 1043--1063.
\bibitem{mor} J. J. Moreau, \textit{Sur la fonction polaire d'une fonction semi-continue sup\'{e}rieurement} [On the polar function of an upper semicontinuous function], C. R. Acad. Sci. Paris \textbf{258} (1964) 1128--1130.
\bibitem{pen} J. W. Peng, J. C. Yao, \textit{Strong convergence theorems of iterative scheme based on the extragradient method for mixed equilibrium problems and fixed point problems}, Math. Comp. Model. \textbf{49} (2009) 1816--1828.
\bibitem{phe} R. R. Phelps, \textit{Convex Functions, Monotone Operators, and Differentiability}, second ed., Lecture Notes in Mathematics, vol. 1364, Springer-Verlag, Berlin, 1993.
\bibitem{rei2} S. Reich, \textit{A weak convergence theorem for the alternating method with Bregman distances}, in: Theory and Applications of Nonlinear Operators of Accretive and Monotone Type, Marcel Dekker, New York, (1996) 313--318.
\bibitem{rei4} S. Reich, S. Sabach, \textit{A strong convergence theorem for a proximal-type algorithm in reflexive Banach spaces}, J. Nonlinear Convex Anal. \textbf{10} (2009) 471--485.
\bibitem{rei3} S. Reich, S.
Sabach, \textit{Existence and approximation of fixed points of Bregman firmly nonexpansive mappings in reflexive Banach spaces}, in: Fixed-Point Algorithms for Inverse Problems in Science and Engineering, Optimization and Its Applications \textbf{49} (2011) 301--316.
\bibitem{rei} S. Reich, S. Sabach, \textit{Two strong convergence theorems for a proximal method in reflexive Banach spaces}, Numer. Funct. Anal. Optim. \textbf{31} (2010) 22--44.
\bibitem{roc} R. T. Rockafellar, \textit{Level sets and continuity of conjugate convex functions}, Trans. Amer. Math. Soc. \textbf{123} (1966) 46--63.
\bibitem{sua} S. Suantai, Y. J. Cho, P. Cholamjiak, \textit{Halpern's iteration for Bregman strongly nonexpansive mappings in reflexive Banach spaces}, Computers and Mathematics with Applications \textbf{64} (2012) 489--499.
\bibitem{xu} H. K. Xu, \textit{An iterative approach to quadratic optimization}, J. Optim. Theory Appl. \textbf{116} (2003) 659--678.
\bibitem{yao} Y. Yao, M. Aslam Noor, S. Zainab, Y. C. Liou, \textit{Mixed equilibrium problems and optimization problems}, J. Math. Anal. Appl. \textbf{354} (2009) 319--329.
\bibitem{zal} C. Z\'{a}linescu, \textit{Convex analysis in general vector spaces}, River Edge (NJ): World Scientific (2002).
\bibitem{zeg} H. Zegeye, \textit{Convergence theorems for Bregman strongly nonexpansive mappings in reflexive Banach spaces}, Filomat \textbf{7} (2014) 1525--1536.
\end{thebibliography}
\end{document}
\begin{document} \twocolumn[{ \begin{@twocolumnfalse} \title{Maritime Just-in-time navigation with Quantum algorithms} \begin{abstract} Just-in-time arrival in the maritime industry is a key concept for the reduction of greenhouse gas emissions and cost-cutting, with the aim to reach the industry-wide climate goals set by the International Maritime Organization (IMO) for 2030. In this note, we propose a mathematical formulation which allows for an implementation on quantum computers. \end{abstract} \end{@twocolumnfalse} }] \section{Just-in-time arrival} Being a backbone of today's world logistics, the maritime industry is growing at an accelerating pace, and its organizations pursue technological innovation on all levels. One of these technological advances is just-in-time arrival, which is an integral part of the industry's attempt to reach the climate targets set by the IMO for 2030. The basic idea is straightforward and was discussed, for example, by the Global Industry Alliance in \cite{gia}: Consider a vessel which is expected to reach its destination at the requested time of arrival (RTA) by sailing at full speed, see the upper row of figure \ref{fig:JIT-example}. At some stage during the trip, the destination authority changes the requested time of arrival to some later point in time. If the vessel keeps sailing at full speed, it arrives too early and needs to anchor, which causes extra costs and pollution at the destination. We have depicted an example of such anchoring costs in figure \ref{fig:harbor costs}. A better solution is to reduce the speed as soon as the RTA changes, saving both fuel and anchoring costs. This is depicted in the second row of figure \ref{fig:JIT-example}. This seemingly simple example already involves a considerable amount of analytics. To see this, note that the surrounding current's speed and direction along the path have to be taken into account. This is particularly interesting because both vary in time.
On the one hand, the current's speed parallel to the vessel's path influences the estimated arrival time, but does a priori not have much bearing on fuel consumption. The perpendicular part, on the other hand, has to be counterbalanced and has a direct effect on fuel costs. Overall, wind and current forecasts influence the decision of which velocity to adopt on each path segment. Every time the forecast or the requested time of arrival changes, the optimal velocities need to be adapted. \begin{figure} \caption{Example for costs at destination as a function of ``time of arrival'' in relation to RTA.} \label{fig:harbor costs} \end{figure} While the previous paragraphs only discussed speed variations, it is also possible to vary the vessel's path itself. For a quantum version of such routing optimization problems with a view towards aircraft navigation, see \cite{jaroszewski2020ising}. For mathematical formulations of a different set of problems in maritime routing, see \cite{9314905}. \begin{figure*} \caption{An example for Just-in-time navigation. Above, an example of ``Today's Operation: hurry up and wait''; below, an adapted version with ``Just-in-Time operation''. This image is taken from \cite[p. 3]{gia}.\label{fig:JIT-example}} \end{figure*} \section{An Ising formulation} Quantum adiabatic optimization \cite{farhi2000quantum, farhi2002quantum, crosson2014different}, the Quantum Approximate Optimization Algorithm \cite{farhi2014quantum} and related algorithms \cite{Peruzzo_2014, McClean_2016} on quantum hardware \cite{Preskill_2018, farhi2017quantum} allow one to find the minimum of a quadratic unconstrained binary optimization (QUBO) polynomial \begin{align*} C:\{0,1\}^n&\rightarrow\mathbb{R}_{\geq 0} \\ \{X_i\}\;\,&\mapsto C(X_1, ..., X_n). \end{align*} For a collection of such polynomials, see \cite{IsingOverview}. In the following, we develop such an optimization polynomial whose minimum encodes a solution to a simple JIT-optimization problem.
At the end of this section, we point out ways to extend this formulation to more sophisticated real-world applications. First, consider a fixed route of length $s$ divided into $n$ sectors $s_i$ with \[ \sum_{i=1}^n s_i = s. \] Let $\Delta v_i$ be the water flow rate parallel to the vessel's route in sector $i$; a boost corresponds to $\Delta v_i > 0$. Perpendicular flow rates are commented on at the end of this section. Denoting the vessel's velocity relative to the water in sector $i$ by $v_i$, the time for passing sector $i$ is given by \[ t_i = \frac{s_i}{v_i + \Delta v_i}. \] We now approximate the delay costs of figure \ref{fig:harbor costs} quadratically in the arrival time $t_A = t_1 + ... + t_n$. Denoting the fuel costs per time by a function $C_i$ quadratic in the velocities $v_i$, the total costs for given velocities $(v_1, ..., v_n)$ and requested time of arrival $\text{RTA}$ can be written as \begin{equation}\label{eq:poly} \sum_{i=1}^n C_i(v_i)\cdot t_i + \alpha (t_A - \text{RTA})^2. \end{equation} For the time being, we approximate $C_i$ linearly in $v_i+\Delta v_i$. The remaining second term is quadratic in the variables \begin{equation}\label{eq:bin dec} (v_i + \Delta v_i)^{-1} := \sum_{j=-5}^0 2^j X_{i,j}, \end{equation} which we have written in binary form. Equation \eqref{eq:poly} thus provides the quadratic polynomial whose minimum gives the optimal vector $(v_1, ..., v_n)$ solving the Just-in-time routing problem. This polynomial can now be fed to a quantum algorithm of choice. In order to apply this method to real-world applications, this model can be extended. Today's implementations of quantum optimization algorithms, e.g. by IBM or D-Wave, usually require the cost function to be at most quadratic. However, a priori neither the quantum approximate optimization algorithm nor quantum adiabatic computing imposes this constraint, see e.g. \cite{farhi2015quantum}.
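As a classical sanity check of the formulation above, the cost \eqref{eq:poly} together with the binary decomposition \eqref{eq:bin dec} can be evaluated and, for tiny instances, minimized by exhaustive search. The following Python sketch assumes the linear fuel-cost model $C_i = c_{\mathrm{fuel}}\cdot(v_i+\Delta v_i)$ and six binary digits per sector; all numerical constants are illustrative and not taken from the text.

```python
# Illustrative sketch of the JIT cost polynomial (eq:poly) using the
# binary decomposition of (v_i + dv_i)^{-1} from (eq:bin dec).
# Assumption: linear fuel cost C_i = c_fuel * (v_i + dv_i).
from itertools import product

def inverse_speed(bits):
    """Decode (v_i + dv_i)^{-1} = sum_{j=-5}^{0} 2^j X_{i,j}."""
    return sum(2.0 ** j * x for j, x in zip(range(-5, 1), bits))

def cost(all_bits, sectors, rta, alpha=1.0, c_fuel=1.0):
    """Total cost: fuel term plus quadratic deviation from the RTA."""
    total, t_arrival = 0.0, 0.0
    for i, s_i in enumerate(sectors):
        inv = inverse_speed(all_bits[6 * i: 6 * i + 6])
        if inv == 0.0:           # zero inverse speed: sector never completed
            return float("inf")
        t_i = s_i * inv          # time spent in sector i
        # with C_i linear in (v_i + dv_i), C_i * t_i = c_fuel * s_i (constant)
        total += c_fuel * (1.0 / inv) * t_i
        t_arrival += t_i
    return total + alpha * (t_arrival - rta) ** 2

def brute_force_minimum(sectors, rta):
    """Enumerate all 2^(6n) bit assignments (feasible for tiny n only)."""
    n = len(sectors)
    best = min(product((0, 1), repeat=6 * n),
               key=lambda b: cost(b, sectors, rta))
    return best, cost(best, sectors, rta)
```

For a single sector of length $10$ and RTA $20$, the minimizer selects the largest decodable value $63/32$ of $(v+\Delta v)^{-1}$, i.e. the slowest admissible speed, since every achievable arrival time is earlier than the RTA. On quantum hardware, the same polynomial would instead be handed to an annealer or to QAOA as a QUBO.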
The costs per time $C_i$ in \eqref{eq:poly} can therefore be extended to a function quadratic in the $(v_i+\Delta v_i)$ and additionally in the water velocities perpendicular to the vessel's path. Then, an additional penalty term relating the binary decomposition of \eqref{eq:bin dec} to its inverse is required: \[ P\cdot \left( (v_i+\Delta v_i) \cdot (v_i+\Delta v_i)^{-1} -1 \right)^2. \] Dropping the requirement of at most quadratic terms also allows for more complex models. For example, instead of finding velocities on fixed route sectors $s_i$, one could consider the lengths of a fixed number of sectors as optimization variables. \section{Review of Quantum optimization methods} The following review of quantum optimization solvers has appeared in \cite{jaroszewski2020ising} and is repeated here for exposition. \subsection{Adiabatic Quantum Computing} Quantum computation by adiabatic evolution, as proposed in \cite{farhi2000quantum}, is an optimization algorithm running on dedicated quantum hardware. The basic idea is the following. Let $C(X_1, ..., X_B)$ be a quadratic optimization function in $B$ binary variables $X_i\in\{0,1\}$. Construct the problem Hamiltonian $\mathcal{H}_P$ with \[ \mathcal{H}_P \ket{X_1}...\ket{X_B} = C(X_1, ..., X_B) \ket{X_1}...\ket{X_B}. \] Prepare the quantum system in an easy-to-construct ground state $\ket 0$ of a simple initial Hamiltonian $\mathcal{H}_I$. Finally, let the system evolve in time from $t=0$ to $t=T$ along some monotonic curve $s(t)\in[0,1], s(0)=0, s(T)=1,$ according to the Hamiltonian \[ \mathcal{H}(t) := \mathcal{H}_I\cdot (1-s(t)) + \mathcal{H}_P \cdot s(t). \] After the evolution, the system is in the state $U(T)\ket 0$, where the time-evolution operator $U$ is the solution to the Schr\"{o}dinger equation with respect to $\mathcal H (t)$.
If the energy gap between the ground state and the first excited state is greater than zero throughout the evolution and $T$ is chosen large enough, $U(T)\ket 0$ is, by the adiabatic theorem, (approximately) the ground state of $\mathcal{H}_P$ at $t=T$. \subsection{QAOA} The Quantum Approximate Optimization Algorithm \cite{farhi2014quantum} is a hybrid quantum-classical algorithm approximating the adiabatic evolution on gate-model quantum hardware. Its concept can be summarized as follows. Trotterizing the time-evolution operator $U$ of adiabatic quantum computing into $p$ steps gives \begin{align*} U(T) &\approx \prod_{k=1}^p e^{-\frac i \hbar \cdot \mathcal{H}(k\cdot\delta t)\cdot \delta t} \\ &\approx e^{-i \beta_p \mathcal{H}_I} e^{-i \gamma_p \mathcal{H}_P} ... e^{-i \beta_1 \mathcal{H}_I} e^{-i \gamma_1 \mathcal{H}_P}. \end{align*} In the second step, we have split the exponentials by suppressing higher-order commutators in the Baker-Campbell-Hausdorff formula. The parameters $\beta_k$ and $\gamma_k$ depend on the form of $s(t)$. The individual factors in the trotterized form of $U(T)$ can easily be implemented as gates on a universal quantum computer. For fixed $p$, $\beta_k$ and $\gamma_k$, QAOA evaluates the trotterized version of \begin{equation}\label{eq:QAOA evaluation} \left(\bra 0 U(T)^\dagger \right) \hat C \left(U(T)\ket 0 \right) \end{equation} on a quantum computer. A classical optimization algorithm (e.g. gradient descent) now varies $\beta_k, \gamma_k$ while treating $p$ as a fixed hyperparameter. For every set of parameters, the quantum computer evaluates \eqref{eq:QAOA evaluation}, until the classical algorithm terminates. A quantum measurement of $U(T)\ket 0$ for the final parameters $\beta_k, \gamma_k$ reveals the state minimizing the optimization function $C$. \end{document}
\begin{document} \title{Critical points of modular forms} \begin{abstract} We count the number of critical points of a modular form in a $\gamma$-translate of the standard fundamental domain~$\mathcal{F}$ (with $\gamma\in \mathrm{SL}_2(\mathbb{Z})$). Whereas by the valence formula the (weighted) number of zeros of this modular form in~$\gamma\mathcal{F}$ is a constant depending only on its weight, we give a closed formula for this number of critical points in terms of those zeros of the modular form lying on the boundary of~$\mathcal{F},$ the value of $\gamma^{-1}(\infty)$ and the weight. More generally, we indicate what can be said about the number of zeros of a quasimodular form. \end{abstract} \section{Introduction} \paragraph{A valence formula for quasimodular forms} For a modular form~$g$ of weight~$k$, the (weighted) number of zeros in a fundamental domain is given by \begin{equation}\label{eq:vf} \sum_{\tau \in \gamma\mathcal{F}} \frac{v_\tau(g)}{e_\tau} \: =\: \frac{k}{12}\end{equation} for all $\gamma \in \mathrm{SL}_2(\mathbb{Z})$, where~$\mathcal{F}$ denotes the standard fundamental domain for the action of~$\mathrm{SL}_2(\mathbb{Z})$ on the complex upper half plane,~$v_\tau(g)$ the multiplicity of vanishing of~$g$ at~$\tau$ and~$e_\tau$ the order of the stabilizer of~$\tau$ in~$\mathrm{PSL}_2(\mathbb{Z})$ (see \Cref{sec:set-up} for the definitions). Much less is known about the (weighted) number of zeros of derivatives of modular forms, or, more generally, of zeros of quasimodular forms.
More precisely, in this paper we study the value \[ N_\lambda(f) \: :=\: \sum_{\tau \in \gamma\mathcal{F}} \frac{v_\tau(f)}{e_\tau} \qquad\qquad \bigl(\gamma = \left(\begin{smallmatrix} a & b \\ c & d \end{smallmatrix}\right)\in \mathrm{SL}_2(\mathbb{Z}) \text{ such that } \lambda=-\tfrac{d}{c}\bigr) \] for derivatives of modular forms or, more generally, for quasimodular forms~$f$ (the quantity~$N_\lambda(f)$ is well-defined if $f$ is quasimodular, or, more generally, if $f$ is $1$-periodic). As an example, consider the critical points of the modular discriminant~$\Delta$. Note $\Delta'=\Delta E_2$ (with $f'=\frac{1}{2\pi \mathrm{i}}\frac{\mathrm{d}}{\mathrm{d}\tau}f$), where \[E_2(\tau) \: =\: 1 - 24 \sum_{m,r\geq 1} m\,q^{mr} \qquad\qquad (\tau \in \mathfrak{h}, \text{ the complex upper half plane})\] is the quasimodular Eisenstein series of weight~$2$ transforming as \[ (E_2|\gamma)(\tau) \: =\: E_2(\tau) + \frac{12}{2\pi \mathrm{i}}\frac{c}{c\tau+d} \: =\: E_2(\tau) + \frac{12}{2\pi \mathrm{i}}\frac{1}{\tau-\lambda(\gamma)}\] for all $\gamma = \left(\begin{smallmatrix} a & b \\ c & d \end{smallmatrix}\right)\in \mathrm{SL}_2(\mathbb{Z})$ and with $\lambda(\gamma)=-\frac{d}{c}.$ For~$E_2$ the number of zeros in a fundamental domain depends on the choice of this domain. There are infinitely many non-equivalent zeros of~$E_2$; in fact, two zeros are only equivalent if one is a $\mathbb{Z}$-translate of the other \cite{BS10}. Nevertheless, one can still count the number of zeros of~$E_2$ in~$\gamma\mathcal{F}$: \begin{equation} \label{eq:E2zeros}N_\lambda(E_2) \: =\: \begin{cases} 0 & |\lambda| \in (\frac{1}{2},\infty] \\ 1 & |\lambda| \in [0,\frac{1}{2}), \end{cases}\end{equation} as follows from \cite{IJT14,WY14}.
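To make this concrete: the $q$-expansion above, rewritten as $E_2 = 1-24\sum_{n\geq1}\sigma_1(n)q^n$, makes it easy to locate zeros of~$E_2$ numerically. The following sketch (the truncation order and the search bracket are our own choices) finds the zero of~$E_2$ on the positive imaginary axis:

```python
import math

def E2_imag_axis(t, terms=60):
    """E2(i*t) via E2 = 1 - 24 * sum_{n>=1} sigma_1(n) q^n with q = exp(-2*pi*t)."""
    q = math.exp(-2.0 * math.pi * t)
    sigma1 = [0] * (terms + 1)
    for d in range(1, terms + 1):          # sigma_1(n): sum of the divisors of n
        for n in range(d, terms + 1, d):
            sigma1[n] += d
    return 1.0 - 24.0 * sum(sigma1[n] * q ** n for n in range(1, terms + 1))

# E2(i*t) -> 1 as t -> infinity and is negative for small t, so we can bisect
# on a bracket where the sign changes.
lo, hi = 0.5, 0.55
assert E2_imag_axis(lo) < 0 < E2_imag_axis(hi)
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if E2_imag_axis(mid) < 0 else (lo, mid)
t0 = 0.5 * (lo + hi)       # tau = i*t0 is a zero of E2, with t0 roughly 0.52
```

Since $t_0<1$, this zero has $|\tau|<1$ and hence does not lie in the standard fundamental domain, consistent with $N_\lambda(E_2)=0$ for $|\lambda|\in(\frac{1}{2},\infty]$ in \eqref{eq:E2zeros}.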
Recently, Gun and Oesterl\'e counted the number of critical points of the Eisenstein series~$E_{k}$ for $k>2$ \cite{GO20}: \begin{equation} \label{eq:DEkzeros}N_\lambda(E_{k}') \: =\: \begin{cases} \bigl\lfloor \frac{k+2}{6}\bigr\rfloor+\frac{1}{3}\delta_{k\equiv 2 \, (6)} & |\lambda|\in (1,\infty] \\ \frac{1}{3}\delta_{k\equiv 2 \, (6)} & |\lambda|\in [0,1). \end{cases}\end{equation} (Note $E_k$ has a double zero at $\rho=-\frac{1}{2}+ \frac{1}{2}\sqrt{3}\,\mathrm{i}$ for $k\equiv 2 \mod 6$. Hence, the factor $\delta_{k\equiv 2 \, (6)}$ corresponds to the trivial zero of $E_k'$ at a $\gamma$-translate of $\rho$.) \paragraph{Critical points of modular forms} Notice that the only zero of~$\Delta$ in the fundamental domain~$\mathcal{F}$ is at the cusp, whereas the Eisenstein series have all their zeros in~$\mathcal{F}$ on the unit circle \cite{RSD}. Our main theorem expresses the number of critical points of a modular form~$g$ in terms of the number of zeros of this modular form on the boundary of~$\mathcal{F}$. Write~$C(g)$ for the number of distinct zeros~$z$ of~$g$ satisfying $|z|=1$ and $-\frac{1}{2}\leq \mathrm{Re}\, (z)\leq 0$, where a zero $z$ is counted with weight $e_z^{-1}$ (i.e., a zero at $\rho=-\frac{1}{2}+ \frac{1}{2}\sqrt{3}\,\mathrm{i}$ or at~$\mathrm{i}$ is counted with weight~$\frac{1}{3}$ or~$\frac{1}{2}$ respectively). Write~$L(g)$ for the number of distinct zeros~$z$ of~$g$ at the cusp or satisfying $\mathrm{Re}(z)=-\frac{1}{2}$ and $|z|> 1$. \begin{thm}\label{thm:crit} Let~$g$ be a modular form of weight~$k$ with \emph{real} Fourier coefficients.
Then, \begin{align} N_\lambda(g')\: =\: \frac{k}{12} \,+\, \begin{cases} \phantom{-}C(g)\,+\, \frac{1}{3}\delta_{g(\rho)=0} & |\lambda| \in (1,\infty]\\ -C(g) & |\lambda| \in (\frac{1}{2},1) \\ -C(g)+L(g) & |\lambda| \in [0,\frac{1}{2}). \end{cases} \end{align} \end{thm} In particular, for any sequence of cuspidal Hecke eigenforms~$g_k$ of weight~$k$ we have \[ \frac{N_\lambda(g_k')}{N_\lambda(g_k)} \to 1 \qquad \text{ as } k\to \infty,\] as by the holomorphic quantum unique ergodicity theorem the zeros of~$g_k$ are equidistributed in~$\mathcal{F}$ as $k\to\infty$ \cite{HS10}. Moreover, if~$g$ is a modular form with \emph{all} its zeros in the interior of~$\mathcal{F}$, we find \[ N_\lambda(g') \: =\: \frac{k}{12} \: =\: N_\lambda(g).\] (In case all the zeros of~$g$ lie in the interior of~$\mathcal{F}$, then $k\equiv 0 \mod 12$.) Observe that as $\Delta$ has a unique zero at the cusp, we have $C(\Delta)=0$ and $L(\Delta)=1$, by which we recover~\eqref{eq:E2zeros}. Moreover, for modular Eisenstein series we have $C(E_k)=\frac{k}{12}-\frac{1}{3}\delta_{k\equiv 2\,(6)}$ and $L(E_k)=0$, by which we recover~\eqref{eq:DEkzeros}. \paragraph{Further results} We aim to generalize these formulae to \emph{all} quasimodular forms, i.e., polynomials in $E_2$ with modular coefficients. First of all, we show that for any quasimodular form~$f$ the number~$N_\lambda(f)$ only takes finitely many different values if we vary~$\lambda$.
\begin{thm}\label{thm:main1} Given a quasimodular form~$f$, there exists a finite collection~$\mathscr{I}$ of disjoint intervals with $\mathbb{R}=\bigcup_{I\in \mathscr{I}} I$, and for each~$I\in\mathscr{I}$ a constant~$N_I(f)\in \frac{1}{6}\mathbb{Z}$, such that \[N_\lambda(f) = N_{I}(f) \qquad \text{if }\lambda \in I.\] \end{thm} For example, in \Cref{ex:criticalE2} we will see that \begin{align}\label{eq:E2'} N_\lambda(E_2') \: =\: \begin{cases} 1 & |\lambda|\in (\frac{1}{v},\frac{1}{2})\cup (v,\infty] \\ 0 & |\lambda|\in (0,\frac{1}{v})\cup (\frac{1}{2},v) \end{cases}\end{align} for some $v\in (5,6)$, which we compare to the results in \cite{CL19}. Secondly, we study in more detail the case that $f=f_0+f_1E_2$ for some modular forms~$f_0$ and~$f_1$ of weights~$k$ and~$k-2$ respectively and with real Fourier coefficients. For example, the first derivative of a modular form can be written in such a way. We give closed formulas for~$N_\lambda(f)$ depending on the behaviour of~$f_1$ at~$\rho$, its zeros on the boundary of~$\mathcal{F}$, and the value of~$f$ at $\mathrm{i}\infty,\rho$ and these zeros of~$f_1$. That is, let $z_1,\ldots, z_m$ be the zeros of~$f_1$ such that $\mathrm{Re}\, z_i=-\frac{1}{2}$ and $\mathrm{Im}\, z_i>\frac{1}{2}\sqrt{3}$, counted with multiplicity and ordered by imaginary part, and let $z_0=\rho$. Also, let $\theta_1,\ldots,\theta_n$ be the angles of those zeros of~$f_1$ on the unit circle satisfying $\frac{2\pi}{3}\geq \theta_1\geq \theta_2\geq \ldots \geq \theta_n\geq \frac{\pi}{2}$ (counted with multiplicity).
We introduce the following notation: \begin{itemize}\itemsep0pt \item $\widehat{f}(\theta) = e^{\frac{1}{2}k \mathrm{i} \theta}f(e^{\mathrm{i} \theta})$. \item $r(f_1)$ denotes the sign of the first non-zero Taylor coefficient in the \emph{natural} Taylor expansion of~$f_1$ around~$\rho$ (see \eqref{eq:r}); $v_\rho(f_1)$ denotes the order of vanishing of $f_1$ at $\rho$. \item $s(f)=\sgn a_0(f)$ if~$f$ does not vanish at infinity, and $s(f)=-\sgn a_0(f_1)$ otherwise. Here, $a_0$ denotes the constant term in the Fourier expansion, i.e., $a_0(f)=\lim_{z\to -\frac{1}{2}+\mathrm{i} \infty} f(z)$. \item $w(z)=2$ if~$z$ equals $\rho,\mathrm{i}$ or $-\frac{1}{2}+\mathrm{i} \infty$, and $w(z)=1$ for all other $z\in \mathcal{F}$. \end{itemize} \begin{thm}\label{thm:main2} Let $f=f_0+f_1E_2$ be a quasimodular form of weight~$k$ for which~$f_0$ and~$f_1$ are modular forms without common zeros and with real Fourier coefficients. Then, there exist constants $N_{(1,\infty]}(f),N_{(\frac{1}{2},1)}(f),N_{[0,\frac{1}{2})}(f)\in \mathbb{Z}$ such that \[ N_\lambda(f) = \begin{cases} N_{(1,\infty]}(f) & |\lambda|\in (1,\infty] \\ N_{(\frac{1}{2},1)}(f) & |\lambda| \in (\frac{1}{2},1) \\ N_{[0,\frac{1}{2})}(f) & |\lambda|\in [0,\frac{1}{2}).
\end{cases} \] Moreover, these constants are uniquely determined by \begin{align} \label{eq:Ninf} N_{(1,\infty]}(f) &\: =\: \frac{1}{2}\biggl \lfloor \frac{k}{6} \biggr \rfloor \,-\, (-1)^{v_\rho(f_1)} r(f_1)\sum_{j=1}^n \frac{(-1)^j}{w(e^{\mathrm{i} \theta_j})} \,\sgn \widehat{f}(\theta_j)\\ \label{eq:N1/2} N_{(\frac{1}{2},1)}(f) &\: =\: \phantom{\frac{1}{2}} \biggl\lfloor \frac{k}{6} \biggr\rfloor \,-\, N_{(1,\infty]}(f) \\ \label{eq:N0} N_{[0,\frac{1}{2})}(f) &\: =\: \phantom{\frac{1}{2}} \biggl\lceil\frac{k}{6}\biggr\rceil \,-\, N_{(1,\infty]}(f) \,-\, r(f_1)\sum_{j=0}^{m} \frac{(-1)^j}{w(z_j)}\, \sgn f(z_j) - \frac{1}{2} (-1)^{m+1} r(f_1)\, s(f). \end{align} \end{thm} \clearpage \begin{remark}\mbox{}\\[-18pt] \begin{enumerate}[{\upshape(i)}] \itemsep0pt \item Observe that the conditions of the theorem guarantee that the sign function is applied to a non-zero real number, that is, $\widehat{f}(\theta_j),f(z_j)\in \mathbb{R}^*$. \item In case~$f_0$ and~$f_1$ do admit common zeros, there always exists a modular form~$g$ such that~$f_0/g$ and~$f_1/g$ are (holomorphic) modular forms without common zeros. \item For quasimodular forms of depth $>1$ (i.e., if~$f$ is a polynomial in~$E_2$ of degree~$>1$ with modular coefficients), the first part of the statement no longer holds. This has already been illustrated with the example in~\eqref{eq:E2'}; for more details, see \Cref{ex:criticalE2}.
\qedhere \end{enumerate} \end{remark} \paragraph{An extreme example} To illustrate some of the characteristic properties of zeros of quasimodular forms, consider the unique quasimodular form~$f=f_0+f_1E_2$ in the~$7$-dimensional vector space $\widetilde{M}_{36}^{\leq 1}$ with $q$-expansion $f=1+O(q^7)$ (as the constant coefficient is $1$, the quasimodular form~$f$ cannot be the derivative of a modular form)\footnote{Following \cite{JP13} one could call this a \emph{gap quasimodular form}. The theta series of an extremal lattice is a gap modular form. Do gap quasimodular forms have a similar interpretation?}. Explicitly, $f$ equals \[\small 1 \,+\, 212963830173619200 q^7 \,+\, 45122255555990230800 q^8 \,+\, 3920264199663225523200 q^9 \,+\, O(q^{10}). \] \begin{figure}\label{fig:zerosa} \label{fig:zerosb} \label{fig:zeros} \end{figure} The zeros of~$f$, depicted in \Cref{fig:zerosa}, satisfy \[ N_{(1,\infty]}(f)=1,\qquad N_{(\frac{1}{2},1)}(f)=5,\qquad N_{[0,\frac{1}{2})}(f)=6.\] Moreover, in \Cref{fig:zerosb}, we depicted the rational curves \begin{equation}\label{eq:curves}\{\gamma z \mid \gamma \in \mathrm{SL}_2(\mathbb{Z}), f(z)=0\} \cap \mathcal{F} \: =\: \{ z\in \mathcal{F} \mid h(z)\in \mathbb{Q}\}, \end{equation} where the function~$h:\mathfrak{h}\to \mathbb{C}$ is given by \[ h(\tau) \: =\: \tau + \frac{12}{2\pi \mathrm{i}}\frac{f_1(\tau)}{f(\tau)}. \] In fact,~$h$ is an equivariant function, i.e., \[ h(\tau+1) = h(\tau)+1,\qquad h\Bigl(-\frac{1}{\tau}\Bigr) = -\frac{1}{h(\tau)}, \qquad h(-\overline{\tau}) = -\overline{h(\tau)}.\] For other quasimodular forms of depth~$1$ the corresponding function~$h$ is also equivariant, and these transformation properties are the main ingredients for \Cref{thm:main2}. \paragraph{Extremal quasimodular forms} Write $\widetilde{M}_k^{\leq p}$ for the space of holomorphic quasimodular forms of weight~$k$ and depth~$\leq p$.
Recall that $\widetilde{M}_4^{\leq 1}=M_4=\mathbb{C} E_4$ and $E_4$ has a unique zero at $\rho$, so that $N_\lambda(E_4)=\frac{1}{3}$. Excluding this modular form, we find the following upper bound. \begin{cor}\label{cor:upperbound} For all $f\in \widetilde{M}^{\leq 1}_{k}$ such that $\frac{f}{E_4}\notin \widetilde{M}^{\leq 1}_{k-4}$, we have \[N_\lambda(f) \:\leq\: \dim \widetilde{M}^{\leq 1}_{k} \,+\, \begin{cases} -1 & |\lambda|\in (\frac{1}{2},\infty] \\ 0 & |\lambda| \in [0,\frac{1}{2}).\end{cases} \] \end{cor} Observe that in any vector subspace of~$\mathbb{C}\llbracket q \rrbracket$ of dimension~$m$, there exists an element~$f$ with $v_{\mathrm{i} \infty}(f)\geq m-1$. Hence, there exists a quasimodular form~$f$ such that (i)~the inequality in \Cref{cor:upperbound} is sharp for $\lambda=\infty$ and (ii)~$f$ admits no zeros in~$\mathcal{F}$ outside infinity. \begin{cor}\label{cor:extreme} There exists a quasimodular form $f=f_0+f_1E_2\in \widetilde{M}^{\leq 1}_{k}$ such that \[N_\infty(f) \: =\: v_{\mathrm{i}\infty}(f) \: =\: \dim\widetilde{M}^{\leq 1}_{k}-1\] and all zeros of~$f_1$ in~$\mathcal{F}$ are located on the unit circle. \end{cor} The existence of a quasimodular form for which $v_{\mathrm{i}\infty}(f) = \dim\widetilde{M}^{\leq 1}_{k}-1$ was proven by Kaneko and Koike, who called such a quasimodular form \emph{extremal} \cite{KK06}. It is natural to generalize their question whether $v_{\mathrm{i}\infty}(f) \leq \dim\widetilde{M}^{\leq p}_{k}-1$ for all $f\in \widetilde{M}_{k}^{\leq p}$ (which has been confirmed for $p\leq 4$ in \cite{Pel20}) to the following one. \begin{question}Let $k,p>0$.
Do all $f\in \widetilde{M}_{k}^{\leq p}$ with $\frac{f}{E_4}\notin \widetilde{M}^{\leq p}_{k-4}$ satisfy \[ N_\lambda(f) \:\leq\: \dim \widetilde{M}_{k}^{\leq p}\,+\, \begin{cases} -1 & |\lambda|\in (\frac{1}{2},\infty] \\ 0 & |\lambda| \in [0,\frac{1}{2})\end{cases} \quad?\] \end{question} \paragraph{Contents} We start by recalling some basic properties of quasimodular forms in \Cref{sec:set-up}. In \Cref{sec:h} we discuss equivariant functions~$h$ associated to quasimodular forms of depth~$1$ and of higher depth, which results in the proof of \Cref{thm:main1} in Section~\ref{sec:5}. The proof of \Cref{thm:main2} is obtained in Section~\ref{sec:3} (for $\lambda=\infty$) and in Section~\ref{sec:5} (for $\lambda<\infty$). We indicate how \Cref{thm:crit} and \Cref{cor:upperbound} then follow as corollaries of \Cref{thm:main1}. Moreover, in all sections we give many additional examples. \paragraph{Acknowledgements} We would like to thank Gunther Cornelissen and Wadim Zudilin for inspiring conversations and helpful feedback. A great part of this research has been carried out in the library of the mathematical institute of Utrecht University (even when both authors were not affiliated with this university any more), as well as during a visit of the second author to the Max-Planck-Institut für Mathematik, for which we would like to thank both institutes. \section{Set-up: zeros of quasimodular forms}\label{sec:set-up} \paragraph{Set-up} Fix a holomorphic quasimodular form~$f$ for~$\mathrm{SL}_2(\mathbb{Z})$, of weight~$k$ and depth~$p$, and with real Fourier coefficients, i.e., let $f\in \mathbb{R}[E_2,E_4,E_6]$ be of homogeneous weight~$k$ and depth~$p$. We write \[ f \: =\: \sum_{j=0}^p f_j \, E_2^j, \] where~$f_j$ is a modular form of weight ${k-2j}$.
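As a concrete instance of this decomposition, recall from the introduction that $\Delta' = \Delta E_2$, i.e., $\Delta'$ has $f_0=0$ and $f_1=\Delta$. This identity is easy to verify on truncated $q$-expansions; the following self-contained sketch (the truncation order $N$ is our arbitrary choice) checks it with exact integer arithmetic:

```python
N = 12  # truncate all q-expansions at order q^N

def mul(a, b):
    """Multiply two q-expansions (coefficient lists), truncated at q^N."""
    c = [0] * (N + 1)
    for i, ai in enumerate(a):
        if ai:
            for j in range(N + 1 - i):
                c[i + j] += ai * b[j]
    return c

# Delta = q * prod_{n>=1} (1 - q^n)^24; the coefficients are Ramanujan's tau(n).
delta = [0] * (N + 1)
delta[1] = 1
for n in range(1, N + 1):
    factor = [0] * (N + 1)
    factor[0], factor[n] = 1, -1
    for _ in range(24):
        delta = mul(delta, factor)

# E2 = 1 - 24 * sum_{n>=1} sigma_1(n) q^n.
sigma1 = [0] * (N + 1)
for d in range(1, N + 1):
    for n in range(d, N + 1, d):
        sigma1[n] += d
e2 = [1] + [-24 * sigma1[n] for n in range(1, N + 1)]

# The operator D = q d/dq multiplies the n-th coefficient by n.
d_delta = [n * delta[n] for n in range(N + 1)]

assert d_delta == mul(delta, e2)  # D(Delta) = Delta * E2, coefficient by coefficient
```

The same coefficient arithmetic extends to any $f=\sum_j f_j E_2^j$ once the $q$-expansions of the modular coefficients $f_j$ are known.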
\begin{remark}\label{rk:exp} For all $\gamma\in\mathrm{SL}_2(\mathbb{Z})$ we have \begin{align}\label{eq:transfo} (f|_k\gamma)(\tau) \: :=\: (c\tau+d)^{-k} f\Bigl(\frac{a\tau+b}{c\tau+d}\Bigr) \: =\: \sum_{j=0}^p \frac{(\mathfrak{d}^jf)(\tau)}{j!} \Bigl(\frac{1}{2\pi \mathrm{i}}\frac{c}{c\tau+d}\Bigr)^j,\end{align} where $\mathfrak{d}$ is the derivation on quasimodular forms uniquely determined by $\mathfrak{d}(E_2) = 12$ and the fact that it annihilates modular forms (see \cite[Section~5.3]{Zag08}), i.e., \begin{align}\label{eq:der} \frac{\mathfrak{d}^m(f)}{m!} &\: =\: 12^m \sum_{j=0}^p \binom{j}{m} f_j E_2^{j-m}.\qedhere\end{align} In fact, one cannot understand the theory of quasimodular forms without recognizing the $\mathfrak{sl}_2$-action on quasimodular forms by the derivation~$D=\frac{1}{2\pi \mathrm{i}}\pdv{}{\tau} = q \pdv{}{q}$, the weight derivation $W$, which multiplies a quasimodular form with its weight, and the derivation $\mathfrak{d}$, satisfying \[ [W, D] = 2 D,\qquad [W, \mathfrak{d}] = -2 \mathfrak{d}, \qquad [\mathfrak{d}, D] = W. \] \end{remark} \begin{remark} Restricting to modular forms with real Fourier coefficients is not very restrictive, for the following two reasons: \begin{enumerate}[{\upshape (i)}] \item All Hecke eigenforms for $\mathrm{SL}_2(\mathbb{Z})$ have real Fourier coefficients; \item Suppose $g$ is a quasimodular form with complex, rather than real, Fourier coefficients. Then, $\tilde{g}(\tau) := \overline{g(-\overline{\tau})}$ is a quasimodular form which vanishes at~$\tau$ if $g$ vanishes at $-\overline{\tau}$. As $\tau$ and $-\overline{\tau}$ lie, up to shifting by $z\mapsto z+1$, in the same fundamental domain, it follows that the number of zeros of $g$ and $\tilde{g}$ agree in any fundamental domain. Hence, the number of zeros of $g$ is half the number of zeros of $g\tilde{g}$, which has real Fourier coefficients and twice the weight and depth of $g$.
\qedhere \end{enumerate} \end{remark} \paragraph{The fundamental domain} Let $\mathfrak{h}=\{z\in \mathbb{C} \mid \mathrm{Im}(z)>0\}$ be the complex upper half plane, $\mathfrak{h}^*=\mathfrak{h}\cup \mathbb{P}^1(\mathbb{Q})$ be the extended upper half plane and \[\mathcal{F} \: :=\: \{z \in \mathfrak{h} \mid |z| > 1, |\mathrm{Re}(z)| <\tfrac{1}{2}\} \,\cup\, \{z \in \mathfrak{h} \mid |z| \geq 1, -\tfrac{1}{2} \leq \mathrm{Re}(z) \leq 0\} \cup \{\infty\}\] the standard (strict) fundamental domain for the action of $\mathrm{SL}_2(\mathbb{Z})$ on $\mathfrak{h}^*$, where $\infty$ is the point $[1,0]\in \mathbb{P}^1(\mathbb{Q})$ at infinity. Recall that the $\mathrm{SL}_2(\mathbb{Z})$-translates of $\rho=-\frac{1}{2}+\frac{1}{2}\sqrt{3}\, \mathrm{i}$ and of $\mathrm{i}$ have a non-trivial stabilizer, i.e., $e_\rho=3, e_\mathrm{i}=2$ and $e_z=1$ if $z\in \mathfrak{h}^*\backslash\bigl(\mathrm{SL}_2(\mathbb{Z}) \rho\cup \mathrm{SL}_2(\mathbb{Z}) \mathrm{i}\bigr)$. Moreover, we write $\mathcal{C}, \mathcal{L}$ and $\mathcal{R}$ for the positively oriented circular part, left vertical line segment and right vertical line segment of the boundary $\partial \mathcal{F}$ of $\mathcal{F}$, i.e., $\partial \mathcal{F} = \mathcal{L} \cup \mathcal{C} \cup \mathcal{R} \cup \{[1,0]\}$ with \begin{align}\label{eq:C} \mathcal{C} &\: =\: \{ z\in \mathfrak{h} \mid |z|=1, |\mathrm{Re}(z)|\leq \tfrac{1}{2} \},\\ \label{eq:L} \mathcal{L} &\: =\: \{ z\in \mathfrak{h} \mid |z|\geq 1, \mathrm{Re}(z)= -\tfrac{1}{2}\},\\ \mathcal{R} &\: =\: \{ z\in \mathfrak{h} \mid |z|\geq 1, \mathrm{Re}(z)= \tfrac{1}{2}\}.
\end{align} \paragraph{Order of vanishing at the cusps} Note that for a quasimodular form $f$ around $\tau_0=-\frac{d}{c}\in \mathbb{P}^1(\mathbb{Q})$ we have \[ (c\tau+d)^kf(\tau) \: =\: \sum_{n=0}^\infty a_n(f,\tau,\tau_0) \exp\Bigl(2\pi \mathrm{i} n\, \frac{a\tau+b}{c\tau+d}\Bigr),\] where $a,b\in \mathbb{Z}$ are such that $\left(\begin{smallmatrix} a & b \\ c &d \end{smallmatrix}\right)\in \mathrm{SL}_2(\mathbb{Z})$, and with \[ a_n(f,\tau,\tau_0) \: =\: \sum_{j=0}^p \frac{a_{n,j}}{j!} \Bigl(\frac{-c(c\tau+d)}{2\pi\mathrm{i}}\Bigr)^j \,\in\, \mathbb{C}[\tau],\] where~$a_{n,j}$ is the $n$th Fourier coefficient of~$\mathfrak{d}^jf$. We define the order of vanishing as follows. \begin{defn}\label{def:vtau} For a quasimodular form~$f$ and $\tau_0 \in \mathfrak{h}$, let $v_{\tau_0}(f)$ be the order of vanishing of $f$ at $\tau_0$. If $\tau_0 \in \mathbb{P}^1(\mathbb{Q})$, we let $v_{\tau_0}(f)$ be the minimal value of~$n$ for which $a_n(f,\tau,\tau_0) \in \mathbb{C}[\tau]$ is not the zero polynomial. \end{defn} \paragraph{The counting function} Observe that as $f(\tau+1)=f(\tau)$, the numbers of zeros in $\left(\begin{smallmatrix} 1 & 1 \\ 0 & 1\end{smallmatrix}\right)\mathcal{F}$ and $\mathcal{F}$ agree. Hence, after fixing a rational number $\lambda=-\frac{d}{c}$ with $c,d$ coprime integers, for all possible choices $a,b\in \mathbb{Z}$ such that $ad-bc=1$ the number of zeros in $\left(\begin{smallmatrix} a & b \\ c & d\end{smallmatrix}\right)\mathcal{F}$ is the same.
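The equivalence of points under $\mathrm{SL}_2(\mathbb{Z})$ used here can be made algorithmic: the classical reduction procedure moves any $\tau\in\mathfrak{h}$ into the standard fundamental domain by alternating the translation $T\colon\tau\mapsto\tau+1$ and the inversion $S\colon\tau\mapsto-1/\tau$. The following is a small sketch of our own (boundary points are only handled up to the usual edge identifications of~$\mathcal{F}$):

```python
def reduce_to_F(tau, max_steps=1000):
    """Map tau into the standard fundamental domain F, returning the reduced
    point together with the matrix (a, b, c, d) in SL_2(Z) applied to tau."""
    a, b, c, d = 1, 0, 0, 1
    for _ in range(max_steps):
        # T-moves: translate so that -1/2 <= Re(tau) <= 1/2.
        n = -int(round(tau.real))
        tau += n
        a, b = a + n * c, b + n * d
        # Done (up to boundary identifications) once |tau| >= 1;
        # the small tolerance guards against floating-point round-off.
        if abs(tau) >= 1 - 1e-12:
            break
        # S-move: invert.
        tau = -1 / tau
        a, b, c, d = -c, -d, a, b
    return tau, (a, b, c, d)

z, gamma = reduce_to_F(complex(1.7, 0.1))   # reduces to (approximately) i
```

The returned matrix certifies the equivalence: it has determinant $1$ and sends the input point to the reduced one.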
\begin{defn} Given $\lambda\in \mathbb{P}^1(\mathbb{Q})$, denote by $N_\lambda(f)$ the weighted number of zeros of $f$ in $\gamma\mathcal{F}$, where $\gamma=\left(\begin{smallmatrix} a & b \\ c & d\end{smallmatrix}\right) \in \mathrm{SL}_2(\mathbb{Z})$ and $-\frac{d}{c}=\lambda$, i.e., \begin{equation} N_\lambda(f) \: =\: \sum_{\tau \in \gamma\mathcal{F}} \frac{v_\tau(f)}{e_\tau},\end{equation} where $v_\tau(f)$ is defined by \Cref{def:vtau}. \end{defn} Without loss of generality, we often restrict to irreducible quasimodular forms: \begin{defn} A quasimodular form is \emph{irreducible} if it cannot be written as the product of two quasimodular forms of strictly lower weights. \end{defn} \begin{remark} If $f$ is a quasimodular form $f=f_0+f_1E_2$ of depth~$1$, then $f$ is irreducible if and only if $f_0$ and $f_1$ have no common zeros. \end{remark} \begin{remark} Suppose that $f$ is quasimodular with \emph{algebraic} Fourier coefficients. As noted by Gun and Oesterl\'e, if $a\in \mathfrak{h}$ is a zero of~$f$, there exists an irreducible factor $g$ of $f$, unique up to multiplication by a scalar, such that $g$ has a simple zero at $a$ \cite{GO20}. Hence, if $f$ is irreducible, it has only simple zeros. Moreover, if~$f$ has a zero at $\mathrm{i}$ or $\rho$ (or one of their $\mathrm{SL}_2(\mathbb{Z})$-translates), then it has $E_6$ or $E_4$ respectively as one of its factors. In particular, if~$f$ is an irreducible quasimodular form which is not a modular form, then \[ N_\lambda(f) = \sum_{\tau \in \gamma \mathcal{F}} v_\tau(f) \:\in\: \mathbb{Z}_{\geq 0}\,. \qedhere\] \end{remark} \paragraph{Local behaviour of modular forms around~$\rho$} Recall $\rho=-\frac{1}{2}+\frac{1}{2}\sqrt{3}\,\mathrm{i}$. Let $g$ be a modular form of weight~$k$ \emph{with real Fourier coefficients}.
Note that the mapping $w\mapsto \frac{\rho-\overline{\rho}w}{1-w}$ maps the unit disc to~$\mathfrak{h}$. Then, the natural Taylor expansion of $g$ on $\mathfrak{h}$ (see \cite[Proposition~17]{Zag08}) around $\tau=\rho$ is given by \[ (1-w)^{-k} \,g\Bigl(\frac{\rho-\overline{\rho}w}{1-w}\Bigr) \: =\: \sum_{n=v_\rho(g)}^\infty b_n(g) \, w^n \qquad (|w|<1),\] for some $v_\rho(g)\geq 0$ and coefficients $b_n(g)\in \mathbb{C}$ with $b_{v_\rho(g)}(g)\neq 0$. (This Taylor expansion is natural as the image of $w \mapsto \frac{\rho-\overline{\rho}w}{1-w}$ for $|w|<1$ equals the full domain~$\mathfrak{h}$ on which $g$ is holomorphic.) Alternatively, $g$ admits an ordinary Taylor expansion ${g(z) = \sum_{n=v_\rho(g)}^\infty c_n(g) \, (2\pi\mathrm{i})^n(z-\rho)^n}$ (with $|z-\rho|$ sufficiently small, and for the same value of $v_\rho(g)$) and for some coefficients $c_n(g)\in \mathbb{C}$ with $c_{v_\rho(g)}(g)\neq 0$. Let \begin{equation}\label{eq:r} r(g) \: :=\: \sgn b_{v_\rho(g)}(g).\end{equation} In the sequel we need the following relation between $r(g)$ and the limiting behaviour of $g$ on the boundary of $\mathcal{F}$. \begin{lem}\label{lem:r} Let $g$ be a modular form of weight~$k$ with real Fourier coefficients. Then, for all $t\in \mathbb{R}_{>0}$ and $0<\theta<\pi$ the values of $g(\rho+\mathrm{i} t)$ and $e^{k\mathrm{i}\theta/2}g(e^{\mathrm{i} \theta})$ are real. Moreover, \[ \lim_{t\downarrow 0} \sgn(g(\rho+\mathrm{i} t)) \: =\: r(g) \: =\: (-1)^{v_\rho(g)}\sgn(c_{v_\rho(g)}(g)) \: =\: (-1)^{v_\rho(g)} \lim_{\theta\uparrow2\pi/3} \sgn(e^{k\mathrm{i}\theta/2}g(e^{\mathrm{i} \theta})), \] where $r(g)$ is defined by~\eqref{eq:r} and the $c_n(g)$ are the Taylor coefficients of $g$ around $\rho$ as above.
\end{lem} \begin{proof} The fact that $g(\rho+\mathrm{i} t)$ is real for real $t$ follows directly from the assumption that the Fourier coefficients of $g$ are real. Moreover, this assumption implies that \begin{align} \overline{e^{k\mathrm{i}\theta/2}g(e^{\mathrm{i} \theta})} \: =\: e^{-\mathrm{i} k\theta/2}g\Bigl(\frac{-1}{e^{\mathrm{i} \theta}}\Bigr) \: =\: e^{-\mathrm{i} k\theta/2} e^{\mathrm{i} k\theta} g(e^{\mathrm{i} \theta}) \: =\: e^{k\mathrm{i}\theta/2}g(e^{\mathrm{i} \theta}). \end{align} Hence, $\mathrm{Im}\, e^{k\mathrm{i}\theta/2}g(e^{\mathrm{i} \theta}) =0$. Now, note $g(\rho+\mathrm{i} t)=g(\frac{\rho-\overline{\rho}w}{1-w})$ for $w=\frac{t}{\sqrt{3}+t}.$ Hence, \[ \lim_{t\downarrow 0} \sgn(g(\rho+\mathrm{i} t)) \: =\: \lim_{w\downarrow 0} \sgn g\Bigl(\frac{\rho-\overline{\rho}w}{1-w}\Bigr) \: =\: \sgn(b_{v_\rho(g)}(g)).\] Also, $g(\rho+\mathrm{i} t) = \sum_{n=v_\rho(g)}^\infty c_n(g)(-2\pi t)^n$; hence, $\sgn(b_{v_\rho(g)}(g))=(-1)^{v_\rho(g)}\sgn(c_{v_\rho(g)}(g)).$ Finally, for the last equality, we observe that by the valence formula, we know that $g$ has multiplicity ${v_\rho(g)}=3\ell+\delta$ at $\rho$ for some non-negative integer~$\ell$. Here, $\delta\in \{0,1,2\}$ is the reduced value of $k\mod 3$. In particular, in all cases we find that \[e^{k\mathrm{i}\theta/2}g(e^{\mathrm{i}\theta}) \sim (-2\pi)^{{v_\rho(g)}} (\theta -\tfrac{2 \pi }{3})^{{v_\rho(g)}} c_{{v_\rho(g)}}(g)\] as $\theta\uparrow 2\pi/3$. \end{proof} \section{Zeros in the standard fundamental domain \texorpdfstring{$(\lambda=\infty)$}{}}\label{sec:3} Let $f$ be a quasimodular form. In order to compute $N_\infty(f)$, we compute the contour integral of the logarithmic derivative of $f$ over the boundary of $\mathcal{F}$ (suitably adapted with small circular arcs, if $f$ has zeros on this boundary).
For simplicity of exposition, assume $f$ has no zeros on the circular part of the boundary~$\mathcal{C}$. Then, by a standard argument \begin{equation}\label{eq:Ninfty1} N_\infty(f) \: =\: \frac{1}{2 \pi \mathrm{i}}\int_{\mathcal{C}} \frac{f'(z)}{f(z)} \,\mathop{}\!\mathrm{d} z \: =\: -\frac{1}{2\pi \mathrm{i}} \int_{\frac{\pi}{3}}^{\frac{2 \pi}{3}} \frac{\mathop{}\!\mathrm{d}}{\mathop{}\!\mathrm{d} \theta} \log(f(e^{\mathrm{i}\theta})) \, \mathop{}\!\mathrm{d} \theta. \end{equation} If $g$ is quasimodular of weight $k$, we define $\widehat{g}:[\frac{\pi}{3}, \frac{2 \pi}{3}]\to \mathbb{C}$ by \begin{equation} \widehat{g}(\theta) = e^{k\mathrm{i}\theta/2}g(e^{\mathrm{i}\theta}). \end{equation} We express $N_\infty(f)$ in terms of $\widehat{f}$, as follows: \begin{equation} N_\infty(f) \: =\: -\frac{1}{2 \pi \mathrm{i}} \int_{\frac{\pi}{3}}^{\frac{2 \pi}{3}} \frac{\mathop{}\!\mathrm{d}}{\mathop{}\!\mathrm{d} \theta} \log(e^{-k\mathrm{i}\theta/2}\widehat{f}(\theta)) \, \mathop{}\!\mathrm{d} \theta \: =\: \frac{k}{12} - \frac{1}{2 \pi \mathrm{i}} \int_{\frac{\pi}{3}}^{\frac{2 \pi}{3}} \frac{\widehat{f}'(\theta)}{\widehat{f}(\theta)} \, \mathop{}\!\mathrm{d} \theta. \end{equation} Since $N_\infty(f)$ is real-valued, we find \begin{equation} N_\infty(f) \: =\: \frac{k}{12} - \frac{1}{2 \pi} \mathrm{Im}\,\!\left(\int_{\frac{\pi}{3}}^{\frac{2 \pi}{3}} \frac{\widehat{f}'(\theta)}{\widehat{f}(\theta)} \, \mathop{}\!\mathrm{d} \theta \right). \end{equation} We have the following interpretation for the latter integral. Write $\widehat{f}(\theta) = r(\theta) e^{2\pi \mathrm{i} \alpha(\theta)}$, where $r$ and $\alpha$ are real-valued continuous functions, i.e., $r$ is the modulus of $\widehat{f}$ and $\alpha$ is called the \emph{continuous argument} of~$\widehat{f}$.
Recall that by assumption $f$ has no zeros on $\mathcal{C}$, so $r(\theta)>0$ for all~$\theta$. Then, \begin{align}\label{eq:Ninfty2} N_\infty(f) \: =\: \frac{k}{12} - \Big(\alpha\Bigl(\frac{2\pi}{3}\Bigr)-\alpha\Bigl(\frac{\pi}{3}\Bigr)\Big).\end{align} In order to compute \emph{the variation of the argument} $\alpha(\frac{2\pi}{3})-\alpha(\frac{\pi}{3})$, we first determine all $\theta\in [\pi/3,2\pi/3]$ for which $\alpha(\theta) \in \frac{1}{2}\mathbb{Z}$, or equivalently, for which $\mathrm{Im}\, (\widehat{f})=0$. By making use of the assumption that our quasimodular form~$f$ has \emph{real} Fourier coefficients, we obtain: \begin{lem}\label{lem:Imhatf} We have \begin{align} \mathrm{Im}\, (\widehat{f}) &\: =\: \frac{\mathrm{i}}{2} \sum_{m\geq 1}\frac{1}{(2\pi \mathrm{i})^m m!} \widehat{\mathfrak{d}^mf} . \end{align} \end{lem} \begin{proof} First, assume~$g$ is a modular form (rather than a \emph{quasi}modular form) of homogeneous weight~$k$. Then, using that~$g$ has real Fourier coefficients, we have \begin{align} \overline{\widehat{g}(\theta)} = e^{-\mathrm{i} k\theta/2}g\Bigl(\frac{-1}{e^{\mathrm{i} \theta}}\Bigr) = e^{-\mathrm{i} k\theta/2} e^{\mathrm{i} k\theta} g(e^{\mathrm{i} \theta}) = \widehat{g}(\theta). \end{align} Hence, $\mathrm{Im}\,\widehat{g}=0$.
Similarly, for $\widehat{E_2}$ one has \begin{align}\label{eq:imE2hat} \overline{\widehat{E_2}(\theta)} = e^{-\mathrm{i} \theta}E_2\Bigl(\frac{-1}{e^{\mathrm{i} \theta}}\Bigr) = e^{-\mathrm{i} \theta} \bigl(e^{2\mathrm{i}\theta} E_2(e^{\mathrm{i} \theta})+\tfrac{12}{2\pi \mathrm{i}}e^{\mathrm{i}\theta}\bigr) = \widehat{E_2}(\theta) + \frac{12}{2\pi \mathrm{i}}. \end{align} Hence, \[ \mathrm{Im}\, \widehat{E_2}^j = \frac{\mathrm{i}}{2}\sum_{m=1}^{j} \binom{j}{m}\Bigl(\frac{12}{2\pi \mathrm{i} }\Bigr)^{m} \widehat{E_2}^{j-m}. \] Applying this to the expansion $f=\sum_{j\geq 0} f_j \, E_2^j$ and using the expansion as in \Cref{rk:exp}, we find \begin{align} \mathrm{Im}\, (\widehat{f}) &\: =\: \frac{\mathrm{i}}{2} \sum_{m\geq 1}\sum_{j\geq m} \binom{j}{m}\Bigl(\frac{12}{2\pi \mathrm{i}}\Bigr)^{m} \, \widehat f_j \widehat{E_2}^{j-m} \: =\: \frac{\mathrm{i}}{2} \sum_{m\geq 1}\frac{1}{(2\pi \mathrm{i})^m m!} \widehat{\mathfrak{d}^mf}. \qedhere \end{align} \end{proof} \paragraph{Depth 1} We now restrict to irreducible quasimodular forms of depth~$1$, i.e., $f=f_0+E_2f_1$. Then, by the previous lemma we have $\mathrm{Im}\,(\widehat{f}) = \frac{3}{\pi} \widehat{f_1}$. We write \[\frac{2\pi}{3}\geq \theta_1 \geq \ldots \geq\theta_n > \frac{\pi}{3}\] for the zeros of $\theta\mapsto f_1(e^{\mathrm{i} \theta})$ on $(\frac{\pi}{3},\frac{2\pi}{3}]$, counted with multiplicity. Recall that ${v_\rho(f_1)}$ denotes the order of vanishing of $f_1$ at $\rho$. Then, Equation~\eqref{eq:Ninf} in \Cref{thm:main2} will follow from the following lemma and proposition.
\begin{lem} The mapping $\iota:\theta_j\mapsto \theta_{n-j-{v_\rho(f_1)}+1}$ defines an involution on $\{\theta_j \mid \theta_j\neq \frac{2\pi}{3}\}$ such that \[(-1)^j\sgn(\widehat{f}(\theta_j)) = (-1)^{n-j-{v_\rho(f_1)}+1} \sgn(\widehat{f}(\theta_{n-j-{v_\rho(f_1)}+1})).\] \end{lem} \begin{proof} Note that if $\theta_j$ is the angle of an element on the unit circle for which $f_1$ has a zero, then $\pi-\theta_j$ also is such an angle. Leaving out the~${v_\rho(f_1)}$ angles $\theta_1=\ldots=\theta_{v_\rho(f_1)}=\frac{2\pi}{3}$, we see that $\iota$ is a well-defined involution. As $f_1(e^{\mathrm{i}\theta_j})=0$, we obtain \begin{align} \widehat{f}(\theta_{n-j-{v_\rho(f_1)}+1}) &\: =\: e^{\frac{1}{2}\mathrm{i} k(\pi-\theta_j)}\, f\Bigl(-\frac{1}{e^{\mathrm{i} \theta_{j}}}\Bigr) \: =\: e^{\frac{1}{2}\mathrm{i} k(\pi-\theta_j)}\, e^{k\mathrm{i}\theta_{j}}\, f(e^{\mathrm{i} \theta_{j}}) \: =\: (-1)^{k/2} \,\widehat{f}(\theta_j). \end{align} We finish the proof by showing that $\frac{k}{2}\equiv n-{v_\rho(f_1)}+1 \mod 2$. Namely, $n-{v_\rho(f_1)}$ is odd precisely if $f_1$ admits a zero of odd order at $\mathrm{i}$, or, equivalently, if $k-2\equiv 2,6$ or $10 \mod 12.$ We can exclude the case where $k\equiv 4 \, (6)$. Namely, then both $f_0$ and $f_1$ are divisible by $E_4$, contradicting the irreducibility of $f$. Hence, $n-{v_\rho(f_1)}$ is odd if $k\equiv 0,8 \mod 12$ and even if $k\equiv 2,6 \mod 12$ as desired. \end{proof} \begin{prop}\label{prop:Ninf} For an irreducible quasimodular form~$f$ of weight~$k$ and depth 1, we have \begin{align}\label{eq:zeros1} N_\infty(f) &\: =\: \frac{1}{2}\left \lfloor \frac{k}{6} \right \rfloor - \frac{(-1)^{v_\rho(f_1)}r(f_1)}{2}\sum_{j} (-1)^j \sgn(\widehat{f}(\theta_j)).
\end{align} \end{prop} \begin{proof} The idea of the proof is to determine the value $\alpha(\frac{2\pi}{3})-\alpha(\frac{\pi}{3})$ in \eqref{eq:Ninfty2}. Denote by $A(\theta)$ the argument of $\widehat{f}$, i.e., the unique value in $(-\frac{1}{2},\frac{1}{2}]$ such that $\alpha(\theta)\equiv A(\theta)\mod 1$. As $\alpha$ is real analytic, this value can be determined uniquely from $A(\frac{\pi}{3})$, $A(\frac{2\pi}{3})$ and all the values of $\theta$ for which $A(\theta)\in \{0,\frac{1}{2}\}$. For example, if $0<A(\frac{2\pi}{3})<\frac{1}{2}$, and $A(\theta_1)=\frac{1}{2}$, whereas $A(\theta_2) =0$, then for $\frac{2\pi} 3>\theta\geq \theta_2$, $\alpha$ increases by $1-A(\frac{2\pi} 3).$ Observe that $A(\theta)\in \{0,\frac{1}{2}\}$ precisely if $\mathrm{Im}\, \widehat{f}(\theta)=0$, or equivalently, $f_1(e^{\mathrm{i} \theta})=0$. Now, in order to compute the value of $\alpha(\frac{2\pi}{3})-\alpha(\frac{\pi}{3})$, first assume that all zeros of $\theta\mapsto f_1(e^{\mathrm{i} \theta})$ on $(\frac{\pi}{3},\frac{2 \pi}{3}]$ are simple and satisfy $\theta\in \{\frac{2\pi}{3},\frac\pi 2\}$. Whether $f_1(e^{\mathrm{i}\theta})=0$ for such $\theta\in \{\frac{2\pi}{3},\frac\pi 2\}$ (or, equivalently, whether $A(\theta)\in \{0,\frac{1}{2}\}$) is determined by the value of $k$ modulo $12$, see below. Note that as $f$ is \emph{irreducible}, we have $k\not\equiv 4 \, (6)$.
\[ \begin{array}{r c c} k \mod 12 & A(\frac{2\pi}{3})\in \{0,\frac{1}{2}\} & A(\frac\pi 2)\in \{0,\frac{1}{2}\} \\\hline 0 & \checkmark & \checkmark\\ 2 & \mathbf{x} & \mathbf{x} \\ 6 & \checkmark &\mathbf{x} \\ 8 & \mathbf{x} & \checkmark \end{array} \] Temporarily, denote by $\varphi_i$ the elements of $\{2\pi/3, \pi/2\}$ for which $\theta\mapsto f_1(e^{\mathrm{i} \theta})$ admits a zero, and such that $\varphi_1\geq \varphi_2$. As $f$ is irreducible, we have $\widehat{f}(\varphi_i)\neq 0$, so that $\sgn(\widehat{f}(\varphi_i))$ is well-defined. The sign being positive (or negative) corresponds to $\alpha(\varphi_i)\equiv 0 \mod 1$ (or $\frac{1}{2} \mod 1$ respectively). By \Cref{lem:r} we have $(-1)^{{v_\rho(f_1)}}r(f_1) = \lim_{\theta\uparrow2\pi/3} \sgn(e^{k\mathrm{i}\theta/2}f_1(e^{\mathrm{i} \theta}))$ with ${v_\rho(f_1)}$ the order of vanishing of $f_1$ at $\rho$. Hence, a case-by-case analysis using the symmetry $\mathrm{Im}\, \widehat{f}(\theta) = (-1)^{k/2+1} \mathrm{Im}\, \widehat{f}(\pi-\theta)$ shows \[A\Bigl(\frac{2\pi}{3}\Bigr)-A\Bigl(\frac{\pi}{3}\Bigr) \,-\, \frac{(-1)^{v_\rho(f_1)}r(f_1)}{2}\sum_{j} (-1)^j \sgn(\widehat{f}(\varphi_j)) \: =\: \begin{cases} 0 & k\equiv 0 \, (6) \\ \frac{1}{6} & k\equiv 2 \, (6).
\end{cases}\] Now, in the general case, note that the contribution to the variation of the argument on each interval $[\theta_j, \theta_{j+1}]$ is \[ \frac{(-1)^{v_\rho(f_1)}r(f_1)}{4} \left((-1)^j \sgn(\widehat{f}(\theta_j)) + (-1)^{j+1} \sgn(\widehat{f}(\theta_{j+1}))\right).\] Adding these contributions with special care at the boundary cases as above leads to the result \[\alpha\Bigl(\frac{2\pi}{3}\Bigr)-\alpha\Bigl(\frac{\pi}{3}\Bigr) \,-\, \frac{(-1)^{v_\rho(f_1)}r(f_1)}{2}\sum_{j} (-1)^j \sgn(\widehat{f}(\theta_j)) \: =\: \begin{cases} 0 & k\equiv 0 \, (6) \\ \frac{1}{6} & k\equiv 2 \, (6). \end{cases}\] By Equation~\eqref{eq:Ninfty2} the result follows. \end{proof} \begin{remark} For a mixed modular form~$F=\sum_{j=0}^p f_j$ with $f_j$ of weight $k-2j$, we analogously find \begin{align} \mathrm{Im}\, (\widehat{F}) &= \sum_{j\geq 1}\widehat{f_j}(\theta)\sin(j\theta). \end{align} From this we similarly deduce that for a mixed modular form~$F=f_0+f_j$ (with $f_j$ of weight $k-2j$) we have \begin{align}\label{eq:zerosmixed} N_\infty(F) &\: =\: \frac{1}{2}\left \lfloor \frac{k}{6} \right \rfloor - \frac{(-1)^{v_\rho(f_j)}r(f_j)}{2}\sum_{i} (-1)^i \sgn(\widehat{F}(\theta_i)), \end{align} where, accordingly, the $\theta_i$ are the zeros of $\theta\mapsto f_j(e^{\mathrm{i}\theta})$. \end{remark} \paragraph{Examples in depth 1} \begin{exmp}\label{E2Exp} Consider $f = E_2$. In this case, $f_0 \equiv 0$ and $f_1 \equiv 1$. As $f_1$ has no zeros on the arc, application of Proposition~\ref{prop:Ninf} gives \begin{align} N_\infty(E_2) = \frac{1}{2}\left \lfloor \frac{2}{6} \right \rfloor = 0. \end{align} Hence, $E_2$ has no zeros in the standard fundamental domain---a result which was discovered and proven in \cite[Proposition~4.2]{BS10} by different means.
\end{exmp} \begin{exmp}\label{ex:intro} We now return to the example in the introduction, i.e., let $f$ be the unique quasimodular form~$f=f_0+f_1E_2$ in the~$7$-dimensional vector space $M_{36}^{\leq 1}$ with $q$-expansion $f=1+O(q^7)$. In order to apply Theorem~\ref{thm:main2}, we compute $f_0$ and $f_1$ explicitly: \[ f_0 \: =\: \frac{43976643}{108264772}{\Delta^3}\left(j^3-\frac{28903981960}{14658881}j^2\,+\,\frac{9706007861928}{14658881}j\,+\,\frac{396402626858112}{14658881}\right) \] and \[ f_1 \: =\: \frac{64288129}{108264772}{E_4E_6}{\Delta^2}\left(j^2 - \frac{2225338584}{1737517}j + \frac{373036607496}{1737517}\right), \] where $j$ is the modular $j$-invariant, given by $j=1728\frac{E_4^3}{E_4^3-E_6^2}$. We find that $f_1$ has zeros at $\mathrm{i}$ and $\rho$, coming from the factors $E_4E_6$. Moreover, the roots of the degree~$2$ polynomial in the $j$-invariant are given by $j(\tau_1) \approx 198.3495\ldots$ and $j(\tau_2) \approx 1082.4083\ldots$. Recall $j(\mathcal{L}) = (-\infty,0]$ and $j(\mathcal{C}) = [0,1728]$, where $\mathcal{L}$ and $\mathcal{C}$ are the left vertical and circular boundary of the fundamental domain as in~\eqref{eq:C} and~\eqref{eq:L}. Therefore, the zeros of $f_1$ in $\mathcal{F}$ are all located on $\mathcal{C}$. Similarly, the zeros of $f_0$ are $\tau_3, \tau_4, \tau_5$, where $j(\tau_3) \approx -36.7451\ldots$, $j(\tau_4) \approx 482.1402\ldots$ and $j(\tau_5) \approx 1526.3776\ldots$, indicating that $\tau_4$ and $\tau_5$ lie on~$\mathcal{C}$ (and $\tau_3$ lies on~$\mathcal{L}$). As $\Delta(\rho) < 0$ and $j(\rho) = 0$, we have that \[ f(\rho) \: =\: \widehat{f}\Bigl(\frac{2\pi}{3}\Bigr) \: =\: \widehat{f_0}\Bigl(\frac{2\pi}{3}\Bigr) \: =\: f_0(\rho) \:<\: 0.
\] From the location of the zeros of $f_0$ on $\mathcal{C}$, we conclude (writing $\tau_j = e^{\mathrm{i} \theta_j}$ for $\frac{\pi}{2} \leq \theta_j \leq \frac{2 \pi}{3}$ if $\tau_j$ is located on $\mathcal{C}$) \begin{align} \widehat{f}(\theta_1) &\: =\: \widehat{f_0}(\theta_1) \:<\: 0, \\ \widehat{f}(\theta_2) &\: =\: \widehat{f_0}(\theta_2) \:>\: 0, \\ \widehat{f}(\tfrac{\pi}{2}) &\: =\: \widehat{f_0}(\tfrac{\pi}{2}) \:<\: 0 . \end{align} Further, $f_1(-\frac{1}{2} + \mathrm{i} \infty) > 0$ and $f_1$ does not change sign on $\mathcal{L}$, as it has no zeros there. Therefore, $r(f_1)=1$ and $(-1)^{v_\rho(f_1)}=-1$. We now apply \Cref{prop:Ninf} (in the form of Equation~\eqref{eq:Ninf}): \begin{align} N_{(1, \infty]}(f) \: =\: \frac{1}{2} \left \lfloor \frac{36}{6} \right \rfloor \,+\, \Bigl(\frac{-1}{2}\cdot(-1) + 1 \cdot(-1) + (-1)\cdot 1 + \frac{1}{2} \cdot(-1) \Bigr) \: =\: 1. \end{align} We come back to this example in \Cref{ex:intro2}. \end{exmp} \begin{exmp} For $k = 6n$ $(n = 1, 2, 3, \dots)$, we consider the Kaneko--Zagier differential equation \begin{equation} f''(\tau) - \frac{k}{6}E_2(\tau) f'(\tau) + \frac{k(k-1)}{12}E_2'(\tau)f(\tau) = 0. \end{equation} In \cite[Theorem 2.1]{KK06}, it was shown that a solution to this equation is given by an \emph{extremal} quasimodular form $g$ of depth $1$ and weight $k$, i.e. \begin{equation} g(\tau) \: =\: cq^{m-1} + \mathcal{O}(q^m), \end{equation} where $c \neq 0$ and $m$ is the dimension of the space of weight $k$ forms of depth $1$ (i.e., $m=n+1$). Clearly, $g$ has a zero at $\mathrm{i} \infty$ of order $m-1=\frac{k}{6}$. From \Cref{prop:Ninf} we learn that $g$ has no other zeros in $\mathcal{F}$ (see also \Cref{cor:extreme}).
\end{exmp} \paragraph{Examples in higher depth} In depth $1$ we have seen that the number of zeros of a quasimodular form $f=f_0+f_1E_2$ only depends on the signs of $\widehat{f_0}$ at the zeros of $f_1$ on the arc $\mathcal{C}$. The next example shows that this is no longer the case in higher depth. \begin{exmp} Consider the following quasimodular form of weight~$4$ and depth~$2$ for a real parameter~$t$ \begin{equation} f_t = E_4 - t\,E_2^2. \end{equation} We are interested in the value of $N_\infty(f_t)$. By \Cref{lem:Imhatf} we have \begin{align} \mathrm{Im}\,(\widehat{f_t}) &\: =\: -t \frac{6}{\pi}\Bigl(\widehat{E}_2 + \frac{3}{\pi \mathrm{i}} \Bigr). \end{align} It can be seen that $\mathrm{Im}\,(\widehat{f_t})$ only vanishes once, at $\theta_0 = \frac{\pi}{2}$. Hence, $$\widehat{f_t}(\theta_0) \: =\: \widehat{E_4}(\theta_0) + t \frac{9}{\pi^2}.$$ Now, first assume $t<0$. As $\widehat{E_4} < 0$ on $(\frac{\pi}{3},\frac{2\pi}{3})$, we have $\widehat{f_t}(\theta_0) < 0$. Also $\widehat{f}_t(\pi/3) = r e^{2 \pi \mathrm{i}/3}$ for some positive $r$ and $\widehat{f}_t(2 \pi /3) = r e^{4 \pi \mathrm{i} /3}$. This means that $\widehat{f}_t(\theta)$ with $\theta \in (\frac{\pi}{3},\frac{2\pi}{3})$ moves from $re^{2 \pi \mathrm{i}/3}$ to $re^{4 \pi\mathrm{i} /3}$, crossing the (negative) real axis exactly once. Hence, the variation of the argument $\alpha(\frac{2\pi}{3})-\alpha(\frac{\pi}{3}) = \frac{2 \pi}{3}$. Therefore, $$N_\infty(f_t) = \frac{4}{12} - \frac{1}{2 \pi} \cdot \frac{2 \pi}{3} = 0,$$ for $t < 0$. For $t > 0$, we have two cases. Let $$t_1 \: :=\: - \frac{\pi^2}{9} \widehat{E_4}(\theta_0) \approx 1.596\ldots.$$ Now assume $0 < t < t_1$. Since $t>0$, we have $\widehat{f}_t(\pi/3) = s e^{5 \pi \mathrm{i}/3 }$ for some positive $s$ and $\widehat{f}_t(2 \pi /3 ) = s e^{\pi \mathrm{i}/3}$.
Since $t < t_1$, we still have $\widehat{f}_t(\theta_0) < 0$. Hence the variation of the argument is now $-\frac{4 \pi}{3}$. Therefore, $$N_\infty(f_t) = \frac{4}{12} - \frac{1}{2 \pi} \cdot \frac{-4 \pi}{3} = 1.$$ For the case $t > t_1$, we have that $\widehat{f_t}(\theta_0) > 0$. So the variation of the argument equals $\frac{2 \pi}{3}$ in that case, and $N_\infty(f_t)=0$. We conclude that $$ N_\infty(f_t) = \begin{cases} 0 &\text{if } t < 0 \text{ or } t> t_1\\ 1 &\text{if } 0 < t < t_1\,. \\ \end{cases} $$ \end{exmp} \begin{exmp} Write $f = f_0 + E_2^p$, where $f_0$ is a modular form of weight $2p>0$. In this case, \begin{equation} \mathrm{Im}\,(\widehat{f}) = \mathrm{Im}\,(\widehat{E}_2^p). \end{equation} By \Cref{prop:rootsreimE2} below, the function $\widehat{f}$ crosses the real axis exactly $ p - 1 - 2 \lfloor \frac{p}{3}\rfloor $ times. This means that the variation of the argument of $\widehat{f}$ along $\mathcal{C}$ is at most $ \pi \bigl( p - 2 \bigl \lfloor \frac{p}{3}\bigr \rfloor \bigr). $ Therefore, \begin{equation} N_\infty(f) \:\leq\: \frac{1}{2} \Bigl\lfloor\frac{2p}{6}\Bigr\rfloor + \frac{1}{2}\Bigl(p-2\Bigl\lfloor\frac{p}{3}\Bigr\rfloor\Bigr) \:\leq\: \frac{p+1}{3}. \end{equation} \end{exmp} \section{Vector-valued equivariant forms}\label{sec:h} By the quasimodular transformation equation~\eqref{eq:transfo} \[ (f|_k\gamma)(\tau) \: =\: \sum_{j=0}^p \frac{(\mathfrak{d}^jf)(\tau)}{j!} \Bigl(\frac{1}{2\pi \mathrm{i}}\frac{1}{\tau-\lambda}\Bigr)^j,\] where $\lambda=-\frac{d}{c}$ for $\gamma=\left(\begin{smallmatrix} a & b \\ c & d \end{smallmatrix}\right) \in \mathrm{SL}_2(\mathbb{Z})$.
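For $f=E_2$ (so $p=1$), the $j=1$ term of this transformation equation reduces to the classical identity $E_2(-1/\tau)=\tau^2E_2(\tau)+\frac{12\tau}{2\pi\mathrm{i}}$ used in \eqref{eq:imE2hat}. As a numerical illustration (the truncation length of the $q$-expansion below is an arbitrary choice, not part of the text), one can verify it directly:

```python
import cmath

def sigma1(n):
    # Divisor sum sigma_1(n).
    return sum(d for d in range(1, n + 1) if n % d == 0)

def E2(tau, terms=300):
    # E_2(tau) = 1 - 24 * sum_{n>=1} sigma_1(n) q^n with q = e^{2 pi i tau}.
    q = cmath.exp(2j * cmath.pi * tau)
    return 1 - 24 * sum(sigma1(n) * q ** n for n in range(1, terms + 1))

# Quasimodular transformation of E_2 under tau -> -1/tau,
# checked at a couple of sample points in the upper half-plane.
for tau in (2j, 0.3 + 1.1j):
    lhs = E2(-1 / tau)
    rhs = tau ** 2 * E2(tau) + 12 * tau / (2j * cmath.pi)
    assert abs(lhs - rhs) < 1e-8
```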
We study the solutions $h:\mathfrak{h}\to \mathbb{C}$ of \begin{align}\label{eq:defh} 0 \: =\: \sum_{j=0}^p \frac{(\mathfrak{d}^jf)(\tau)}{j!} \Bigl(\frac{1}{2\pi \mathrm{i}}\frac{1}{\tau-h(\tau)}\Bigr)^j.\end{align} The main property of the solutions~$h$ is that $f$ has a zero at $\gamma\tau$ with $\tau\in \mathcal{F}$ if and only if there is a solution satisfying $h(\tau)=\lambda$. Another property is that if $h_1(\tau),\ldots,h_p(\tau)$ are the different solutions (for a fixed $\tau\in \mathfrak{h}$), we have \begin{align}\label{eq:prodh} \prod_{i} (h_i(\tau)-\lambda) = (\tau-\lambda)^p \frac{(f|_k\gamma)(\tau)}{f(\tau)}, \end{align} where as always $\gamma=\left(\begin{smallmatrix} a & b \\ c & d \end{smallmatrix}\right)\in \mathrm{SL}_2(\mathbb{Z})$ with $\lambda=-\frac{d}{c}$. We now study the invariance of the solutions $h$ under $\mathrm{SL}_2(\mathbb{Z})$. \begin{prop}\label{prop:hequiv} If $h:\mathcal{F}\to \mathbb{C}$ is a solution of~\eqref{eq:defh} for $\tau=\tau_0$, then \begin{enumerate}[{\upshape (i)}] \item $\displaystyle \gamma h=\frac{a h+b}{c h+d}:\mathcal{F}\to \mathbb{C}$ is a solution of~\eqref{eq:defh} for $\tau=\gamma\tau_0$. \item $-\overline{h}: \mathcal{F}\to \mathbb{C}$ is a solution of~\eqref{eq:defh} for $\tau=-\overline{\tau_0}$. \end{enumerate} \end{prop} \begin{proof} It suffices to show the first part for the two generators $\left(\begin{smallmatrix} 1 & 1 \\ 0 & 1 \end{smallmatrix}\right)$ and $\left(\begin{smallmatrix} 0 & 1 \\ -1 & 0 \end{smallmatrix}\right)$ of $\mathrm{SL}_2(\mathbb{Z})$. For the first, the result is almost immediate, so we only prove it for the inversion in the unit disk.
This follows from the following computation: \begin{align}\label{eq:hsl2z} \sum_{j=0}^p & \frac{(\mathfrak{d}^jf)(-\tau^{-1})}{j!} \Bigl(\frac{1}{2\pi\mathrm{i}}\frac{1}{-\tau^{-1}+h(\tau)^{-1}}\Bigr)^j \\ &= \tau^{k}\sum_{j=0}^p\sum_{m=0}^{p-j} \frac{(\mathfrak{d}^{j+m}f)(\tau)}{j!m!} \Bigl(\frac{1}{2\pi \mathrm{i}}\frac{1}{-\tau+\tau^2h(\tau)^{-1}}\Bigr)^j \Bigl(\frac{1}{2\pi \mathrm{i}}\frac{1}{\tau}\Bigr)^m \\ &= \tau^{k}\sum_{\ell=0}^p \frac{(\mathfrak{d}^{\ell}f)(\tau)}{\ell!} \Bigl(\frac{1}{2\pi \mathrm{i}}\frac{-\tau+\tau^2h(\tau)^{-1}+\tau}{\tau(-\tau+\tau^2h(\tau)^{-1})}\Bigr)^\ell \\ &= \tau^{k}\sum_{\ell=0}^p \frac{(\mathfrak{d}^{\ell}f)(\tau)}{\ell!} \Bigl(\frac{1}{2\pi \mathrm{i}}\frac{1}{-h(\tau)+\tau}\Bigr)^\ell =0. \end{align} The second statement follows from the fact that for quasimodular forms $g$ with real Fourier coefficients one has $g(-\overline{\tau})=\overline{g(\tau)}$. \end{proof} Extend the action of~$\mathrm{SL}_2(\mathbb{Z})$ on~$\mathfrak{h}$ to an action of~$\mathrm{GL}_2(\mathbb{Z})$ by \[\gamma\tau = \frac{a\overline{\tau}+b}{c\overline{\tau}+d} \qquad \text{if } \det\gamma=-1\] for $\gamma=\left(\begin{smallmatrix} a & b \\ c & d \end{smallmatrix}\right)\in \mathrm{GL}_2(\mathbb{Z})$ and $\tau \in \mathfrak{h}$. Then, we find that the vector $\underline{h}=(h_1,\ldots,h_p)$ is a meromorphic vector-valued equivariant form for~$\mathrm{GL}_2(\mathbb{Z})$: \begin{cor}\label{cor:meromorphic} Let $U$ be a simply connected open subset of $\mathfrak{h}$ for which the $p$ solutions of~\eqref{eq:defh} are distinct.
Then, one can choose solutions $h_1,\ldots,h_p:U\to \mathbb{C}$ of~\eqref{eq:defh} such that \begin{enumerate}[{\upshape (i)}] \item for all $\tau \in U$ and $\gamma\in \mathrm{GL}_2(\mathbb{Z})$ the solutions $h_j(\gamma \tau):\gamma U\to \mathbb{C}$ are meromorphic; \item if $\Gamma\leq \mathrm{GL}_2(\mathbb{Z})$ is such that $\Gamma U \subseteq U$, then there exists a homomorphism $\sigma:\Gamma\to \mathfrak{S}_p$ such that for all $\tau \in U$ and $\gamma\in \Gamma$ one has\[ h_j(\gamma\tau)=\gamma\, h_{\sigma(\gamma)j}(\tau);\] \item $f$ has a zero at $\gamma\tau$ if and only if $h_j(\tau)=\lambda(\gamma)$ for some $j$. \end{enumerate} \end{cor} \begin{proof} By the implicit function theorem, there exist $p$ meromorphic solutions~$h_j$ on~$U$, which, by construction, satisfy the third property. By the previous proposition, for all $\gamma \in \mathrm{GL}_2(\mathbb{Z})$ we have $h_j(\gamma\tau)=\gamma\, h_{\sigma(\gamma) j}(\tau)$ for some $\sigma(\gamma) \in \mathfrak{S}_p$, possibly depending on $\tau$. However, by continuity of $h_j$ on $U$, we find that $\sigma(\gamma)$ does not depend on $\tau$ for $\gamma \in \Gamma$. In particular, $h_j:\gamma U\to \mathbb{C}$ is meromorphic and $\sigma$ is easily seen to be a homomorphism. \end{proof} We often make use of the fact that $h_j$ is a vector-valued equivariant function in the following way. Write \[C=\left(\begin{smallmatrix} -1 & 0 \\ 0 & 1 \end{smallmatrix}\right),\quad S=\left(\begin{smallmatrix} 0 & -1 \\ 1 & 0 \end{smallmatrix}\right),\quad T=\left(\begin{smallmatrix} 1 & 1 \\ 0 & 1 \end{smallmatrix}\right) \] for complex \emph{C}onjugation, \emph{S}piegeln (reflecting) in the unit disc and \emph{T}ranslation. Then, \[ \mathrm{PGL}_2(\mathbb{Z}) = \langle C,S,T \mid C^2=1,(CT)^2=1,(CS)^2=1, S^2=1, (ST)^3=1\rangle. \] Given $\gamma \in \mathrm{PGL}_2(\mathbb{Z})$, let $\mathbb{C}^\gamma$ be the set of $\tau\in \mathbb{C}$ such that $\gamma \tau=\tau$.
Then, for all $\tau \in U$ we have \[ h_j(\tau) = h_j(\gamma\tau) = \gamma h_{\sigma(\gamma)j}(\tau).\] Hence, if $\sigma(\gamma)=e$, then $h_{j}(\tau)\in \mathbb{C}^\gamma.$ For example, we have proven the following lemma. \begin{lem}\label{lem:equivariance} Given the solutions $h_j$ and $\sigma:\Gamma\to \mathfrak{S}_p$ as in \Cref{cor:meromorphic}, write $\Gamma_j=\{\gamma \in \Gamma \mid \sigma(\gamma)j=j\}$. Then, \begin{enumerate}[{\upshape (i)}] \item\label{it:R} $h_j(\frac{n}{2}+\mathbb{R}\mathrm{i}) \in \frac{n}{2}+\mathbb{R}\mathrm{i}$ if $n\in \mathbb{Z}$ is such that $CT^{2n}\in \Gamma_j$\,, \item\label{it:C} $|h_j(z)|=1$ for $|z|=1$ if $CS\in \Gamma_j$\,, \item\label{it:rho} $h_j(\rho)\in \{\pm \rho\}$ if $ST\in \Gamma_j$\,. \end{enumerate} \end{lem} \paragraph{Depth 1} For a quasimodular form $f=f_0+E_2f_1$ of depth~$1$, the function $h:\mathfrak{h}\to \mathbb{C}$ is a holomorphic equivariant function, i.e., \[ h(\gamma\tau) = \gamma h(\tau)\] for all $\tau\in \mathfrak{h}$ and $\gamma \in \mathrm{GL}_2(\mathbb{Z})$. In fact, we can write~$h$ as \[ h(\tau) \: =\: \tau + \frac{12}{2\pi \mathrm{i}}\frac{f_1(\tau)}{f(\tau)} \: =\: \tau\frac{(f|S)(\tau)}{f(\tau)} ,\] where $S=\left(\begin{smallmatrix} 0 & 1 \\ -1 & 0 \end{smallmatrix}\right)$. \begin{remark} There are several additional interesting properties of equivariant functions~$h$ of which we do not make use in this work. Among those are: \begin{enumerate}[{\upshape(i)}]\itemsep0pt \item The Schwarzian~derivative~$\displaystyle \{h(\tau),\tau\}$ is a meromorphic modular form of weight~$4$. This follows directly from the properties of the Schwarzian~derivative, see, e.g., \cite{ES12}. \item The derivative $h'$ equals $Ff^{-2}$ for some (holomorphic) modular form $F$ of weight~$2k$ (where $k$ is the weight of $f$).
Explicitly, \[ F \: =\: f_0^2 \,+\, 12\,f_0\,\vartheta(f_1)\,-\, 12\,\vartheta(f_0)\,f_1 +f_1^2\,E_4\,,\] where $\vartheta$ denotes the Serre~derivative. This was observed and proven in \cite[Section~5.3]{GO20} in the case~$f$ is the derivative of a modular form. \qedhere \end{enumerate} \end{remark} \paragraph{Example in depth 1} \begin{exmp} If $f=g'=\pdv{g}{\tau}$ with $g$ a modular form of weight~$k$, then $h(\tau) = \tau + k\frac{g(\tau)}{g'(\tau)}$ (to the study of which \cite{ES12} is devoted). In particular, for $f=\frac{1}{2\pi \mathrm{i}}\Delta'=\Delta E_2$ we find $h(\tau) = \tau+ \frac{12}{2\pi \mathrm{i}} \frac{1}{E_2},$ which is also the equivariant function associated to $E_2$. \end{exmp} \paragraph{Examples in higher depth} \begin{exmp}\label{ex:h2} Consider $f = E_4 + E_2^2$. Then the $h_j$ for $j=1, 2$ are solutions of the equation $$0 = (2\pi \mathrm{i})^2(E_4(\tau) + E_2(\tau)^2)(\tau - h(\tau))^2 + 48 \pi \mathrm{i} E_2(\tau) (\tau - h(\tau)) + 144. $$ From this $$h_j(\tau) = \tau + \frac{12}{2 \pi \mathrm{i}} \frac{E_2(\tau) +(-1)^j\sqrt{ -E_4(\tau)}}{E_4(\tau) + E_2(\tau)^2},$$ which (for an appropriate choice of the square root) is meromorphic on every simply connected domain~$U$ not containing an $\mathrm{SL}_2(\mathbb{Z})$-translate of~$\rho$. For example, for \[ U\: =\: \mathrm{int}(\mathcal{F}\cup S\mathcal{F}) \: =\: \{ z\in \mathfrak{h} \mid -\tfrac{1}{2}<\mathrm{Re}\,(z)<\tfrac{1}{2}, |z-1|>1 \text{ and } |z+1|>1\} \] we have that $\Gamma=\langle C,S\rangle$ satisfies $\Gamma U=U$ and $\sigma:\Gamma\to \mathfrak{S}_2$ is given by $\sigma(C) = (1\,2), \sigma(S)=(1\,2).$ In particular, $CS\in \ker \sigma$. Hence, by \Cref{lem:equivariance} we have $|h_j(z)|=1$ if $|z|=1$. However, it is not the case that, e.g., $h_j(-\frac{1}{2}+\mathbb{R}\mathrm{i})\in -\frac{1}{2}+\mathbb{R}\mathrm{i}$.
\end{exmp} \section{Zeros in other fundamental domains \texorpdfstring{$(\lambda<\infty)$}{}}\label{sec:5} By definition of the functions $h_i$ we have \begin{equation} N_\lambda(f)-N_\infty(f) \: =\: \frac{1}{2 \pi \mathrm{i}}\sum_{i}\int_{\partial\mathcal{F}} \frac{h_i'(\tau)}{h_i(\tau)-\lambda} \,\mathop{}\!\mathrm{d} \tau \: =\: \frac{1}{2 \pi}\sum_{i}\mathrm{Im}\int_{\partial\mathcal{F}} \frac{h_i'(\tau)}{h_i(\tau)-\lambda} \,\mathop{}\!\mathrm{d} \tau , \end{equation} where the second equality holds as the number of zeros of a function is a real number. Hence, the value of~${N_\lambda(f)-N_\infty(f)}$ is determined by the variation of the argument of the $h_i-\lambda$. \begin{proof}[Proof of \Cref{thm:main1}] Given $\lambda\in \mathbb{R}$, we consider the variation of the argument of $h_j(\tau)-\lambda$ for all $j$. Note that, when moving along $\tau\in\partial\mathcal{F}$, by \Cref{cor:meromorphic} the functions $h_i$ are continuous and piecewise meromorphic. We change the contour $\partial \mathcal{F}$ to a family of contours $\mathscr{C}_\epsilon$ such that for $\epsilon>0$ sufficiently small there are no poles on the contour of integration, and such that the functions $h_j$ only take finitely many real values. (For example, we could define $\mathscr{C}_\epsilon$ to be the shift of the contour $\partial \mathcal{F}$ by $(|\mathrm{Re}\,(z)|+\sqrt{3}\mathrm{Re}\,(z)\mathrm{i})\epsilon+\frac{1}{2}\epsilon^2$. Note that for $\epsilon\to 0$ the value of the integral over the shifted contour converges to the desired value~$N_\lambda$.) The functions~$\tau\mapsto h_j(\tau)$ intersect the real axis only a finite number of times as $\tau$ goes over the (shifted) contour. Write $\lambda_1(\epsilon)<\ldots<\lambda_{n(\epsilon)}(\epsilon)$ for the intersection points of all the functions $h_1,\ldots,h_p$ (here $n(\epsilon)$ may also depend on $\epsilon$).
Moreover, write $\lambda_1,\ldots,\lambda_n$ for the limiting values of the $\lambda_i(\epsilon)$ as $\epsilon\to 0$. As $h_j-\lambda$ is just a horizontal shift of $h_j$, given $\lambda,\lambda'\in \mathbb{R}$, the functions $h_j-\lambda$ and $h_j-\lambda'$ admit the same variation of the argument if there is no $\ell$ such that $\lambda<\lambda_\ell<\lambda'$ or $\lambda'<\lambda_\ell<\lambda$. Hence, in that case $N_\lambda-N_\infty=N_{\lambda'}-N_\infty$. Moreover, for $\lambda>\lambda_n$ the variation of the argument is~$0$. Hence, $N_\lambda - N_\infty=0$ for $\lambda>\lambda_n$. We conclude that if we define the elements of $\mathscr{I}$ to be \[ (-\infty,\lambda_1),\{\lambda_1\},(\lambda_1,\lambda_2),\ldots,\{\lambda_n\},(\lambda_n,\infty)\] the statement follows. (In case $\lambda_i=\pm\infty$, simply leave out the corresponding sets.) \end{proof} \paragraph{Depth 1} For quasimodular forms of depth $1$, by \Cref{lem:equivariance} we have \begin{itemize} \item $h(\tfrac{1}{2}+\mathrm{i} t) \in \tfrac{1}{2}+\mathrm{i}\mathbb{R}$ for $t\in \mathbb{R}$; \item $|h(z)|=1$ if $|z|=1$. \end{itemize} Hence, the only possible values of $\lambda_i$ in the above proof are $\pm \frac{1}{2}, \pm 1$ and $\pm \infty$. Therefore, we obtain the following corollary of \Cref{thm:main1}. \begin{cor} \label{cor:5.2} For a quasimodular form of depth $1$ with real Fourier coefficients, there exist constants $N_{[0,\frac{1}{2})}(f), N_{(\frac{1}{2},1)}(f), N_{(1,\infty)}(f)$ such that \[ N_\lambda(f) = \begin{cases} N_{[0,\frac{1}{2})}(f) & |\lambda|\in (0,\frac{1}{2})\\ N_{(\frac{1}{2},1)}(f) & |\lambda|\in(\frac{1}{2},1)\\ N_{(1,\infty)}(f) & |\lambda|\in (1,\infty). \end{cases} \] \end{cor} Next, by relating $N_{\lambda}(f)$ to $N_{-\frac{1}{\lambda}}(f)$ and by using the above properties of $h$, we prove the following statement, which finishes the proof of \Cref{thm:main2}.
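Both depth-$1$ properties of $h$ can be illustrated numerically for the equivariant function $h(\tau)=\tau+\frac{12}{2\pi\mathrm{i}}\frac{1}{E_2(\tau)}$ attached to $E_2$ in the example above. The sketch below (with an arbitrarily chosen truncation of the $q$-expansion of $E_2$) checks $|h(z)|=1$ on $|z|=1$ and $h(\tfrac12+\mathrm{i} t)\in\tfrac12+\mathrm{i}\mathbb{R}$ at a few sample points:

```python
import cmath
import math

def sigma1(n):
    # Divisor sum sigma_1(n).
    return sum(d for d in range(1, n + 1) if n % d == 0)

def E2(tau, terms=200):
    # E_2(tau) = 1 - 24 * sum_{n>=1} sigma_1(n) q^n with q = e^{2 pi i tau}.
    q = cmath.exp(2j * cmath.pi * tau)
    return 1 - 24 * sum(sigma1(n) * q ** n for n in range(1, terms + 1))

def h(tau):
    # Equivariant function attached to E_2 (equivalently, to Delta * E_2):
    # h(tau) = tau + (12 / (2 pi i)) / E_2(tau).
    return tau + 12 / (2j * cmath.pi * E2(tau))

# |h(z)| = 1 on the unit circle ...
for theta in (1.2, math.pi / 2, 1.9):
    z = cmath.exp(1j * theta)
    assert abs(abs(h(z)) - 1) < 1e-9

# ... and h maps the vertical line Re(z) = 1/2 into itself.
for t in (0.9, 1.3, 2.0):
    assert abs(h(0.5 + 1j * t).real - 0.5) < 1e-9
```

At $\tau=\mathrm{i}$ one even gets $h(\mathrm{i})=-\mathrm{i}$ exactly, using $E_2(\mathrm{i})=\frac{3}{\pi}$.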
Recall $z_1,\ldots, z_m$ are the zeros of~$f_1$ such that $\mathrm{Re}\, z_i=-\frac{1}{2}$ and $\mathrm{Im}\, z_i>\frac{1}{2}\sqrt{3}$, counted with multiplicity and ordered by imaginary part, and $z_0=\rho$. Moreover, recall that $r(f_1)$ denotes the sign of the first non-zero Taylor coefficient of~$f_1$ (see \eqref{eq:r}), and $s(f)=\sgn a_0(f)$ if~$f$ does not vanish at infinity, and $s(f)=-\sgn a_0(f_1)$ else. Also, $w(z)=2$ if $z$ equals $\rho,\mathrm{i}$ or $-\frac{1}{2}+\mathrm{i} \infty$, and $w(z)=1$ for all other $z\in \mathcal{F}$. \begin{thm} \label{thm:5.3} Let $f=f_0+E_2f_1$ be an irreducible quasimodular form of depth $1$. If $\lambda\in \mathbb{R}$ with $\frac{1}{2}<|\lambda|<2$, then \[N_\lambda(f)+N_{-\frac{1}{\lambda}}(f) \: =\: \biggl \lfloor \frac{k}{6} \biggr \rfloor.\] Moreover, if $|\lambda|<\frac{1}{2}$ or $|\lambda|>2$ we have \begin{equation} \label{EqnThm:5.3} N_\lambda(f)+N_{-\frac{1}{\lambda}}(f) \: =\: \biggl\lceil\frac{k}{6}\biggr\rceil \,-\, r(f_1)\sum_{j=0}^{m} \frac{(-1)^j}{w(z_j)}\, \sgn f(z_j) - \frac{1}{2}(-1)^{m+1}r(f_1)\,s(f). \end{equation} \end{thm} \noindent\textit{Proof.} As before, we have \begin{equation}\label{eq:integral} N_\lambda(f)+N_{-\frac{1}{\lambda}}(f)-2N_\infty(f) = \frac{1}{2 \pi}\mathrm{Im}\, \int_{\partial\mathcal{F}} \frac{h'(\tau)}{h(\tau)-\lambda}+\frac{h'(\tau)}{h(\tau)+\frac{1}{\lambda}} \,\mathop{}\!\mathrm{d} \tau . \end{equation} We split this integral into several pieces, and compute the contribution of each piece separately. This argument resembles that of \Cref{prop:Ninf}, as well as the proof found in \cite[Section~5.6]{GO20}.
\step{Setup} Let $z_1, \ldots, z_m$ be the zeros of $f_1$ on $\mathcal{L} \backslash \{ \rho \}$ and let $v_1, \ldots, v_r$ be the zeros of $f$ on $\mathcal{L}$, all ordered by imaginary part. We assume $r\geq 1$, and at the end of the argument we verify that the proof goes through if $r=0$. Moreover, without loss of generality, we assume $|\lambda|>1$. \begin{wrapfigure}{r}{0.37\textwidth} \centering \includegraphics[clip, trim=2.5cm 19.1cm 13.8cm 3.5cm]{Contour.pdf} \caption{Contour of integration}\label{fig:contour} \end{wrapfigure} The finite poles of~$h$ are exactly the finite zeros of~$f$. We fix $\epsilon>0$ sufficiently small. Let $L_\epsilon$ be the punctured left half line \[ [\rho, v_1 - \mathrm{i} \epsilon]\, \cup\,[v_1 + \mathrm{i} \epsilon, v_2 - \mathrm{i} \epsilon] \,\cup \,\ldots \,\cup\, [v_{r}+\mathrm{i} \epsilon, -\tfrac{1}{2} + \mathrm{i} \infty], \] and define the punctured right half line by $R_\epsilon=L_\epsilon+1$. The line segment $[\rho, v_1 - \mathrm{i} \epsilon]$ (as well as its shift on $R_\epsilon$) is referred to as the \emph{lower vertical segment}, whereas $[v_{r}+\mathrm{i} \epsilon, -\frac{1}{2} + \mathrm{i} \infty]$ is called an \emph{upper vertical segment}. Recall that by definition of $N_\lambda$, we include all zeros/poles of $h(\tau)-\lambda$ and $h(\tau)+\frac{1}{\lambda}$ on $\mathcal{L}$ in the integral~\eqref{eq:integral}, but not those on $\mathcal{R}$. Hence, for each zero $v_i$ we introduce a semicircle $C_i$ around $v_i$ of radius $\epsilon$ on the left of $\mathcal{L}$, as well as the semicircle $C_i+1$. The boundary of $\mathcal{F}$ then consists of $\mathcal{C}, L_\epsilon, R_\epsilon$ and the semicircles $C_i$ and $C_i+1$ for all $i$. \step{Circular segment~$\mathcal{C}$} Recall $|h(\tau)|=1$ if $|\tau|=1$. Hence, $h$ has no poles on $\mathcal{C}$.
Using the expression~\eqref{eq:prodh} above, we write \begin{align} \frac{1}{2 \pi}&\mathrm{Im}\,\int_{\mathcal{C}} \frac{h'(\tau)}{h(\tau)-\lambda}+\frac{h'(\tau)}{h(\tau)+\frac{1}{\lambda}} \,\mathop{}\!\mathrm{d} \tau \\ &= \frac{1}{2 \pi }\mathrm{Im}\int_{\mathcal{C}} \pdv{}{\tau}\log \bigl((h(\tau)-\lambda)(h(\tau)+\tfrac{1}{\lambda})\bigr) \,\mathop{}\!\mathrm{d} \tau \\ &= \frac{1}{2 \pi}\mathrm{Im}\int_{\mathcal{C}} \pdv{}{\tau} \log\Bigl((\tau-\lambda)^{1-k}\bigl(\tau+\frac{1}{\lambda}\bigr)^{1-k} \frac{f(\gamma\tau)f(\gamma S\tau)}{f(\tau)^2}\Bigr) \,\mathop{}\!\mathrm{d} \tau, \end{align} where $\gamma=\left(\begin{smallmatrix} a & b \\ c & d \end{smallmatrix}\right)\in \mathrm{SL}_2(\mathbb{Z})$ is such that $\lambda=-\frac{d}{c}$, and $S=\left(\begin{smallmatrix} 0 & 1 \\ -1 & 0 \end{smallmatrix}\right)$. Note that $\gamma S = \left(\begin{smallmatrix} -b & a \\ -d & c \end{smallmatrix}\right)$ and $-\frac{1}{\lambda} = -\frac{c}{-d}.$ First, we compute (recall $|\lambda|>1$ by assumption) \begin{align} \frac{1}{2 \pi }&\mathrm{Im}\int_{\mathcal{C}} \pdv{}{\tau}\log\Bigl((\tau-\lambda)^{1-k}\bigl(\tau+\frac{1}{\lambda}\bigr)^{1-k} \Bigr) \,\mathop{}\!\mathrm{d} \tau \\ &=\frac{k-1}{2 \pi}\mathrm{Re}\int_{\pi/3}^{2\pi/3} -\frac{\frac{1}{\lambda}e^{\mathrm{i} \theta}}{1-\frac{1}{\lambda}e^{\mathrm{i}\theta}}+\frac{1}{1+\frac{1}{\lambda}e^{-\mathrm{i} \theta}} \mathop{}\!\mathrm{d} \theta \\ &= \frac{k-1}{2 \pi}\mathrm{Re}\int_{\pi/3}^{2\pi/3} \Bigl( 1 - 2\sum_{n \geq 1} \frac{1}{\lambda^{2n-1}} \cos((2n+1)\theta) + 2\mathrm{i} \sum_{m \geq 1} \frac{1}{\lambda^{2m}} \sin(2m \theta) \Bigr) \mathop{}\!\mathrm{d} \theta\\ & = \frac{k-1}{6}.
\end{align} Next, we show that the following expression is actually independent of $\gamma$: \begin{align} \frac{1}{2 \pi}&\mathrm{Im}\int_{\mathcal{C}} \pdv{}{\tau}\log\Bigl(\frac{f(\gamma\tau)f(\gamma S\tau)}{f(\tau)^2}\Bigr) \,\mathop{}\!\mathrm{d} \tau \\ &= \frac{1}{2 \pi}\mathrm{Im}\int_{\mathcal{C}} \frac{1}{(c\tau+d)^2}\frac{f'(\gamma\tau)}{f(\gamma\tau)} + \frac{1}{(-d\tau+c)^2}\frac{f'(\gamma S\tau)}{f(\gamma S\tau)} - 2\frac{f'(\tau)}{f(\tau)} \,\mathop{}\!\mathrm{d} \tau. \end{align} Applying the coordinate transformation $\tau\mapsto -\frac{1}{\tau}$ to the second term in the integrand, and using~\eqref{eq:Ninfty1} for the last term, this equals \begin{align} \frac{1}{2 \pi}\mathrm{Im}\int_{\mathcal{C}} \frac{1}{(c\tau+d)^2}\frac{f'(\gamma\tau)}{f(\gamma\tau)} - \frac{1}{(c\tau+d)^2}\frac{f'(\gamma \tau)}{f(\gamma \tau)} - 2\frac{f'(\tau)}{f(\tau)}\,\mathop{}\!\mathrm{d} \tau &=-2 N_\infty(f). \end{align} Hence, the contribution of $\mathcal{C}$ equals \[ \frac{k-1}{6} -2 N_\infty(f).\] \step{Lower vertical segments} Now, first assume that $f_1$ admits no zeros on the lower vertical segment~$[\rho, v_1 - \mathrm{i} \epsilon]$. By \Cref{lem:equivariance}(\ref{it:R}) we know that $h(\tfrac{1}{2}+\mathrm{i} t) \in \tfrac{1}{2}+\mathrm{i} \mathbb{R}$ for all $t \in \mathbb{R}$. Also, by \Cref{lem:equivariance}(\ref{it:rho}) we have $h(\rho)\in \{\rho, \rho^2\}$. Hence, as $\tau$ moves along $[\rho, v_1 - \mathrm{i} \epsilon]$ the function $h(\tau)$ moves from $\rho$ or $\rho^2$ to $\mathrm{i} \infty$ or $-\mathrm{i}\infty$. We have three possibilities for the combined sign of $\mathrm{Im}\, h(\rho)$ and $\mathrm{Im}\, \bigl( h(\rho+\epsilon\mathrm{i})-\rho \bigr)$, depicted in~\Cref{tab:1}.
(Note that if the first is negative, the second is necessarily negative as well. As in this segment there are no zeros of $f_1$ or $f$, we know that in the case~(--,--) the function $h$ eventually tends to $-\mathrm{i}\infty$.) \begin{table}\begin{center} \begin{tabular}{c c c l} (+,+) & (+,--) & (--,--) \\\hline\\[-5pt] \begin{tikzpicture}[scale=0.8] \draw[thick, ->] (-0.5,1.8) -- (-0.5,1.5); \draw[thick] (-0.5,0.866) -- (-0.5,1.5); \draw[thick] (0.5,1.8) -- (0.5,1.5); \draw[thick,->] (0.5,0.866) -- (0.5,1.5); \draw[dashed] (0.5,0.866) -- (0.4,0); \draw[dashed] (-0.5,0.866) -- (0.4,0); \draw[dashed] (0.5,0.866) -- (5/2,0); \draw[dashed] (-0.5,0.866) -- (5/2,0); \node[circle,fill=black,inner sep=0pt,minimum size=3pt,label=above left:{$\rho$}] at (-0.5,0.866) {}; \node[circle,fill=black,inner sep=0pt,minimum size=3pt,label=above right:{$\rho+1$}] at (0.5,0.866) {}; \node[circle,fill=black,inner sep=0pt,minimum size=3pt,label=below:{$\frac{1}{\lambda}$}] (a) at (0.4,0) {}; \node[circle,fill=black,inner sep=0pt,minimum size=3pt,label=below:{$\lambda$}] (a) at (5/2,0) {}; \node[circle,fill=black,inner sep=0pt,minimum size=0pt,label=below left:{$\phantom{\rho}$}] at (-0.5,-0.866) {}; \end{tikzpicture}& \begin{tikzpicture}[scale=0.8] \draw[thick, ->] (-0.5,-1.8) -- (-0.5,-1.5); \draw[thick] (-0.5,0.866) -- (-0.5,-1.5); \draw[thick] (0.5,-1.8) -- (0.5,-1.5); \draw[thick,->] (0.5,0.866) -- (0.5,-1.5); \draw[dashed] (0.5,0.866) -- (0.4,0); \draw[dashed] (-0.5,0.866) -- (0.4,0); \draw[dashed] (0.5,0.866) -- (5/2,0); \draw[dashed] (-0.5,0.866) -- (5/2,0); \node[circle,fill=black,inner sep=0pt,minimum size=3pt,label=above left:{$\rho$}] at (-0.5,0.866) {}; \node[circle,fill=black,inner sep=0pt,minimum size=3pt,label=above right:{$\rho+1$}] at (0.5,0.866) {}; \node[circle,fill=black,inner sep=0pt,minimum size=3pt,label=below left:{$\frac{1}{\lambda}$}] (a) at (0.4,0) {};
\node[circle,fill=black,inner sep=0pt,minimum size=3pt,label=below:{$\lambda$}] (a) at (5/2,0) {}; \node[circle,fill=black,inner sep=0pt,minimum size=0pt,label=below left:{$\phantom{\rho}$}] at (-0.5,-0.866) {}; \end{tikzpicture}& \begin{tikzpicture}[scale=0.8] \draw[thick, ->] (-0.5,-1.8) -- (-0.5,-1.5); \draw[thick] (-0.5,-0.866) -- (-0.5,-1.5); \draw[thick] (0.5,-1.8) -- (0.5,-1.5); \draw[thick,->] (0.5,-0.866) -- (0.5,-1.5); \draw[dashed] (0.5,-0.866) -- (0.4,0); \draw[dashed] (-0.5,-0.866) -- (0.4,0); \draw[dashed] (0.5,-0.866) -- (5/2,0); \draw[dashed] (-0.5,-0.866) -- (5/2,0); \node[circle,fill=black,inner sep=0pt,minimum size=3pt,label=below left:{$\rho^2$}] at (-0.5,-0.866) {}; \node[circle,fill=black,inner sep=0pt,minimum size=3pt,label=below right:{$\rho^2+1$}] at (0.5,-0.866) {}; \node[circle,fill=black,inner sep=0pt,minimum size=3pt,label=above:{$\frac{1}{\lambda}$}] (a) at (0.4,0) {}; \node[circle,fill=black,inner sep=0pt,minimum size=3pt,label=above:{$\lambda$}] (a) at (5/2,0) {}; \end{tikzpicture} \\[5pt] \end{tabular}\end{center} \caption{The three possibilities for the graph of $h(\tau)$ for $\tau$ in a small neighborhood of~$\rho$ (resp.\ $\rho+1$) along the left (resp.\ right) vertical line segment of $\partial \mathcal{F}$, given $\displaystyle\bigl(\sgn\mathrm{Im}\, h(\rho),\sgn \lim\nolimits_{\epsilon \downarrow 0} \mathrm{Im}\,\bigl( h(\rho+\epsilon\mathrm{i})-\rho\bigr) \bigr)\in \{\pm\}^2$.} \label{tab:1} \end{table} Hence, the variation of the argument of $h-\lambda$ along $[\rho, v_1 - \mathrm{i} \epsilon]$ and $[\rho, v_1 - \mathrm{i} \epsilon]+1$ equals the (oriented) angle between $h(\rho+1)$, $\lambda$ and $h(\rho)$, as shown in the same table.
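The underlying Euclidean fact admits a quick numerical sanity check (this block is an addition, not part of the argument): for real $\lambda\neq 0$, the oriented angles at $\lambda$ and at $-\frac{1}{\lambda}$ subtended by the segment $[\rho,\rho+1]$ sum to exactly $\pi/3$, i.e.\ $\frac16$ of a full turn; the table entries below differ from this baseline only by half-turn corrections coming from the winding direction.

```python
import cmath
from math import isclose, pi

rho = cmath.exp(2j * pi / 3)  # rho = e^{2 pi i / 3}, so rho + 1 = e^{i pi / 3}

def subtended_angle(p):
    """Oriented angle at the real point p between the directions to rho and rho + 1."""
    return cmath.phase((rho - p) / (rho + 1 - p))

# The sum of the angles at lambda and -1/lambda is always pi/3,
# independently of whether |lambda| < 2 or not.
for lam in [1.1, 1.5, 2.0, 2.5, 10.0, -3.0, -1.25]:
    total = subtended_angle(lam) + subtended_angle(-1 / lam)
    assert isclose(total, pi / 3, abs_tol=1e-12), (lam, total)
```

(In fact $(\rho-\lambda)(\rho+\frac{1}{\lambda}) = e^{\mathrm{i}\pi/3}\,(\rho+1-\lambda)(\rho+1+\frac{1}{\lambda})$ identically for real $\lambda$, which explains the check.)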
In particular, it is an exercise in Euclidean geometry that the sum of the oriented angles for~$\lambda$ and~$-\frac{1}{\lambda}$ only depends on whether $|\lambda|<2$ or not (recall $|\lambda|>1$ by assumption). In \Cref{tab:2} we display the contribution of each of the cases in \Cref{tab:1}. \begin{table}\begin{center} \begin{tabular}{l|cccc} & (+,+) & (+,--) &\hspace{1pt} (--,--) \\[2pt]\hline\\[-5pt] $|\lambda|>2$ & $\frac{1}{6} $ & $-\frac{5}{6}$ & $-\frac{1}{6}$ \\[5pt]\hline\\[-5pt] $1 <|\lambda|<2$ & $\frac{1}{6} $ & $\phantom{-}\frac{1}{6}$ & $-\frac{1}{6}$ \end{tabular} \end{center} \caption{Variation of the argument along the lower vertical segments in several cases (see \Cref{tab:1}).} \label{tab:2} \end{table} Correspondingly, the contribution to the variation of the argument equals \[ \frac{1}{6} \sgn \mathrm{Im}\, h(\rho) + \delta_{|\lambda|>2}\Bigl(-\frac{1}{2} \sgn \mathrm{Im}\, h(\rho) + \frac{1}{2} \sgn \lim\nolimits_{\epsilon \downarrow 0} \mathrm{Im}\,\bigl( h(\rho+\epsilon\mathrm{i})-\rho\bigr)\Bigr). \] Observe that \[\sgn \lim\nolimits_{\epsilon \downarrow 0} \mathrm{Im}\,\bigl( h(\rho+\epsilon\mathrm{i})-\rho\bigr) \: =\: - \lim\nolimits_{\epsilon \downarrow 0} \sgn( f_1(\rho+\epsilon\mathrm{i}))\, \sgn(f(\rho)),\] as $h(\tau) \: =\: \tau + \frac{12}{2\pi \mathrm{i}}\frac{f_1(\tau)}{f(\tau)}$. Hence, under the assumption that $f_1$ has no zeros on the lower vertical segment, and by \Cref{lem:r}, its contribution equals \[ \frac{1}{6} \sgn \mathrm{Im}\, h(\rho)\,+\,\delta_{|\lambda|>2} \Bigl( -\frac{1}{2} \sgn \mathrm{Im}\, h(\rho) \,-\, \frac{r(f_1)}{2} \sgn f(\rho) \Bigr) . \] Now, suppose $f_1$ admits $p$ zeros on the lower vertical segment.
Then, as $f$ admits no zeros on this segment, $h(\tau)$ crosses $\tau$ (i.e., $\mathrm{Im}\,(h(\tau)-\tau)$ changes sign) precisely $p$ times. If $p$ is even, this does not alter the variation of the argument, but if $p$ is odd, then $h(\tau)$ changes sign as $\tau$ tends to the pole $v_1$. Note that this does not affect the variation of the argument of $h(\tau)-\mu$ if $|\mu|>1$. Suppose $I$ is a line segment of $\mathcal{L}$ for which $h(\tau)-\mu$ tends to $\pm \mathrm{i} \infty$ or to $0$ at the two boundary points of $I$. In that case the variation of the argument on $I$ equals some value $v$, whereas the variation of the argument on the corresponding line segment on $\mathcal{R}$ is $-v$. We conclude that if $|\mu|>1$, the only contribution to the variation of the argument is the one displayed in \Cref{tab:2}. Hence, the contribution of the lower vertical segment equals \[\frac{1}{6} \sgn \mathrm{Im}\, h(\rho) \,+\,\delta_{|\lambda|>2} \Bigl( -\frac{1}{2} \sgn \mathrm{Im}\, h(\rho) \,-\, \frac{r(f_1)}{2} \sgn f(\rho) \,-\, r(f_1)\sum_{j} (-1)^j \sgn f(z_j) \Bigr), \] where the sum is over all $j$ such that $z_j$ lies in the lower vertical segment. Note that by the factor $r(f_1)(-1)^j$ we keep track of the sign of $f_1$ at $z_j$. \step{Vertical segments between two poles} Similarly to the previous case (now there are no boundary terms), we find \[ -\delta_{|\lambda|>2}\,r(f_1) \sum_{j} (-1)^j \sgn f(z_j), \] where the sum is over all $j$ such that $z_j$ lies between two poles of $f$. \step{Semicircles centered at the poles} Note that for sufficiently small $\epsilon$, the value of $h-\lambda$ on these semicircles is arbitrarily large (say, bigger than $|\lambda|+1$ in absolute value). Moreover, the values of $h-\lambda$ on such a semicircle $C_i$ and on $C_i+1$ differ only by $1$. Hence, as $\epsilon\to 0$, the contributions of the corresponding semicircles $C_i$ and $C_i+1$ of $\partial\mathcal{F}$ (which carry opposite orientations) cancel in pairs.
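The cancellation in pairs can be illustrated with a toy numerical model (an addition, not the modular $h$): the only features used are that $h$ has a simple pole at a point $v$ on the left vertical line and satisfies $h(\tau+1)=h(\tau)+1$. The pole position $v$, the value $\lambda$, and the radius $\epsilon$ below are arbitrary illustrative choices.

```python
import numpy as np

v = -0.5 + 1.0j   # hypothetical pole on the left vertical line (illustrative)
lam = 3.0         # an illustrative value with |lambda| > 2
eps = 1e-3        # radius of the semicircular detours

def h(tau):
    # Toy model: pi * cot(pi (tau - v)) is 1-periodic with a simple pole at v,
    # so h(tau + 1) = h(tau) + 1, mimicking the T-equivariance of h.
    return tau + np.pi / np.tan(np.pi * (tau - v))

def arg_variation(path):
    # Total variation of arg(h - lambda) along the discretized path, in full turns.
    vals = h(path) - lam
    return np.sum(np.angle(vals[1:] / vals[:-1])) / (2 * np.pi)

theta = np.linspace(1.5 * np.pi, 0.5 * np.pi, 4001)  # left semicircle, traversed upwards
C_i  = v + eps * np.exp(1j * theta)                  # detour around v on the left line
C_i1 = v + 1 + eps * np.exp(1j * theta[::-1])        # shifted detour, opposite orientation

total = arg_variation(C_i) + arg_variation(C_i1)
# The two contributions are close to +1/2 and -1/2, and their sum is O(eps).
```

Individually each detour contributes roughly half a turn; only the sum tends to zero as $\epsilon\to 0$, which is exactly the pairwise cancellation used above.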
\step{Upper vertical segments} As before, the variation of the argument vanishes, except for $h(\tau)+\frac{1}{\lambda}$ if $|\lambda|>2$. Again, in this case, we have the contribution \[ -\delta_{|\lambda|>2}\,r(f_1) \sum_{j} (-1)^j \sgn f(z_j), \] where the sum is over all $j$ such that $z_j$ lies in the upper vertical segment. This is also the only contribution, except in the following exceptional case. It may be that $\mathrm{Im}\, h(\tau) < \mathrm{Im}\, \tau$ for $\tau=-\frac{1}{2}+\mathrm{i} t$ with $t$ tending to infinity, while $h(\tau)$ still converges to $\mathrm{i} \infty$. If this is the case, we have to add a contribution of $+1$. By considering the Fourier expansion of $h(\tau)-\tau$, we see this can only happen if $f$ has no zero at infinity (else, $h$ goes to $\pm \mathrm{i} \infty$ at exponential rate). Moreover, the first non-zero Fourier coefficient of $f_1/f$ should be positive if $\mathrm{Im}\,(h(\tau)-\tau)\leq 0$ for $\tau=-\frac{1}{2}+\mathrm{i} t$. In case $f$ does not vanish at infinity, we have \[ \lim_{t\to \infty}\sgn\mathrm{Im}\,(h(-\tfrac{1}{2}+\mathrm{i} t)-(-\tfrac{1}{2}+\mathrm{i} t)) =- \lim_{t\to \infty}\sgn(f_1(-\tfrac{1}{2}+\mathrm{i} t)\, f(-\tfrac{1}{2}+\mathrm{i} t)).\] Note that $\lim_{t\to \infty}\sgn(f_1(-\tfrac{1}{2}+\mathrm{i} t)) = (-1)^m r(f_1)$.
We find that this special contribution equals \[ \begin{cases} \frac{1}{2}+\frac{1}{2} (-1)^m r(f_1) \lim_{t \to \infty} \sgn f(-\frac{1}{2} + \mathrm{i} t ) & \text{ if } f \left(-\frac{1}{2} + \mathrm{i} \infty \right) \neq 0,\\ 0 & \text{ if } f \left(-\frac{1}{2} + \mathrm{i} \infty \right) = 0. \\ \end{cases} \] As $\lim_{t \to \infty} \sgn f(-\frac{1}{2} + \mathrm{i} t ) = \sgn a_0(f)$ if $f$ does not vanish at the cusp, and $(-1)^{m}r(f_1)=\sgn a_0(f_1)$, by definition of $s(f)$, we find that the total contribution of the upper vertical segment for $|\lambda|>2$ equals \[-r(f_1)\sum_{j=1}^m (-1)^j \sgn f(z_j) +\frac{1}{2}- \frac{1}{2} (-1)^{m+1} r(f_1)\,s(f). \] \step{Total contribution} Adding all contributions for $1<|\lambda|<2$, we obtain $$\frac{k-1}{6}+\frac{1}{6} \sgn \mathrm{Im}\, h(\rho).$$ Note that $f_1(\rho)=0$ if $k\equiv 0 \, (6)$ and $f_0(\rho)=0$ if $k\equiv 2 \, (6).$ Hence, \[ h(\rho) = \rho + \begin{cases} 0 & k\equiv 0 \, (6), \\ \frac{12}{2\pi \mathrm{i}}\frac{1}{E_2(\rho)} & k\equiv 2\, (6).\end{cases}\] Therefore, \[ \sgn \mathrm{Im}\, h(\rho) \: =\: \begin{cases} 1 & k \equiv 0 \, (6), \\ -1 & k \equiv 2 \, (6).\end{cases} \] We conclude that for $1<|\lambda|<2$, the variation of the argument equals $\lfloor \frac{k}{6}\rfloor.$ Adding all contributions for $|\lambda|>2$, we obtain \[ \biggl\lceil\frac{k}{6}\biggr\rceil - \frac{r(f_1)}{2} \sgn f(\rho) -r(f_1)\sum_{j=1}^m (-1)^j \sgn f(z_j) - \frac{1}{2} (-1)^{m+1} r(f_1)\,s(f). \] \step{$f$ has no zeros on $\mathcal{L}$} In this case $h$ has no poles on $\mathcal{L}$ and $\mathcal{R}$. Therefore, the variation of the argument along $\mathcal{L}$ and $\mathcal{R}$ only depends on the values $h(\rho)$ and $h(-\frac{1}{2} + \mathrm{i} \infty)$.
The image of $h$ on~$\mathcal{L}$ and~$\mathcal{R}$ is summarized in \Cref{tab:3}. \begin{table}\begin{center} \begin{tabular}{c c c c} (+,+) & (+,--) & (--,+) & (--,--) \\\hline\\[-5pt] \begin{tikzpicture}[scale=0.8] \draw[thick, ->] (-0.5,1.8) -- (-0.5,1.5); \draw[thick] (-0.5,0.866) -- (-0.5,1.5); \draw[thick] (0.5,1.8) -- (0.5,1.5); \draw[thick,->] (0.5,0.866) -- (0.5,1.5); \draw[dashed] (0.5,0.866) -- (0.4,0); \draw[dashed] (-0.5,0.866) -- (0.4,0); \draw[dashed] (0.5,0.866) -- (5/2,0); \draw[dashed] (-0.5,0.866) -- (5/2,0); \node[circle,fill=black,inner sep=0pt,minimum size=3pt,label=above left:{$\rho$}] at (-0.5,0.866) {}; \node[circle,fill=black,inner sep=0pt,minimum size=3pt,label=above right:{$\rho+1$}] at (0.5,0.866) {}; \node[circle,fill=black,inner sep=0pt,minimum size=3pt,label=below:{$\frac{1}{\lambda}$}] (a) at (0.4,0) {}; \node[circle,fill=black,inner sep=0pt,minimum size=3pt,label=below:{$\lambda$}] (a) at (5/2,0) {}; \node[circle,fill=black,inner sep=0pt,minimum size=0pt,label=below left:{$\phantom{\rho}$}] at (-0.5,-0.866) {}; \end{tikzpicture}& \begin{tikzpicture}[scale=0.8] \draw[thick, ->] (-0.5,-1.8) -- (-0.5,-1.5); \draw[thick] (-0.5,0.866) -- (-0.5,-1.5); \draw[thick] (0.5,-1.8) -- (0.5,-1.5); \draw[thick,->] (0.5,0.866) -- (0.5,-1.5); \draw[dashed] (0.5,0.866) -- (0.4,0); \draw[dashed] (-0.5,0.866) -- (0.4,0); \draw[dashed] (0.5,0.866) -- (5/2,0); \draw[dashed] (-0.5,0.866) -- (5/2,0); \node[circle,fill=black,inner sep=0pt,minimum size=3pt,label=above left:{$\rho$}] at (-0.5,0.866) {}; \node[circle,fill=black,inner sep=0pt,minimum size=3pt,label=above right:{$\rho+1$}] at (0.5,0.866) {}; \node[circle,fill=black,inner sep=0pt,minimum size=3pt,label=below left:{$\frac{1}{\lambda}$}] (a) at (0.4,0) {}; \node[circle,fill=black,inner sep=0pt,minimum size=3pt,label=below:{$\lambda$}] (a) at (5/2,0) {};
\node[circle,fill=black,inner sep=0pt,minimum size=0pt,label=below left:{$\phantom{\rho}$}] at (-0.5,-0.866) {}; \end{tikzpicture}& \begin{tikzpicture}[scale=0.8] \draw[thick, ->] (-0.5,1.8) -- (-0.5,1.5); \draw[thick] (-0.5,-0.866) -- (-0.5,1.5); \draw[thick] (0.5,1.8) -- (0.5,1.5); \draw[thick,->] (0.5,-0.866) -- (0.5,1.5); \draw[dashed] (0.5,-0.866) -- (0.4,0); \draw[dashed] (-0.5,-0.866) -- (0.4,0); \draw[dashed] (0.5,-0.866) -- (5/2,0); \draw[dashed] (-0.5,-0.866) -- (5/2,0); \node[circle,fill=black,inner sep=0pt,minimum size=3pt,label=below left:{$\rho^2$}] at (-0.5,-0.866) {}; \node[circle,fill=black,inner sep=0pt,minimum size=3pt,label=below right:{$\rho^2+1$}] at (0.5,-0.866) {}; \node[circle,fill=black,inner sep=0pt,minimum size=3pt,label=above left:{$\frac{1}{\lambda}$}] (a) at (0.4,0) {}; \node[circle,fill=black,inner sep=0pt,minimum size=3pt,label=above:{$\lambda$}] (a) at (5/2,0) {}; \end{tikzpicture}& \begin{tikzpicture}[scale=0.8] \draw[thick, ->] (-0.5,-1.8) -- (-0.5,-1.5); \draw[thick] (-0.5,-0.866) -- (-0.5,-1.5); \draw[thick] (0.5,-1.8) -- (0.5,-1.5); \draw[thick,->] (0.5,-0.866) -- (0.5,-1.5); \draw[dashed] (0.5,-0.866) -- (0.4,0); \draw[dashed] (-0.5,-0.866) -- (0.4,0); \draw[dashed] (0.5,-0.866) -- (5/2,0); \draw[dashed] (-0.5,-0.866) -- (5/2,0); \node[circle,fill=black,inner sep=0pt,minimum size=3pt,label=below left:{$\rho^2$}] at (-0.5,-0.866) {}; \node[circle,fill=black,inner sep=0pt,minimum size=3pt,label=below right:{$\rho^2+1$}] at (0.5,-0.866) {}; \node[circle,fill=black,inner sep=0pt,minimum size=3pt,label=above:{$\frac{1}{\lambda}$}] (a) at (0.4,0) {}; \node[circle,fill=black,inner sep=0pt,minimum size=3pt,label=above:{$\lambda$}] (a) at (5/2,0) {}; \end{tikzpicture} \\[5pt] \end{tabular}\end{center} \caption{The four possibilities for the graph of $h(\tau)$ for $\tau$ along the left (resp.\ right)
vertical line segment of $\partial \mathcal{F}$, given $\displaystyle\bigl(\sgn\mathrm{Im}\, h(\rho), \lim_{t \to \infty} \sgn \mathrm{Im}\, h(-\tfrac{1}{2} + \mathrm{i} t ) \bigr)\in \{\pm\}^2$.} \label{tab:3} \end{table} \begin{table}\begin{center} \begin{tabular}{l|cccc} & (+,+) & (+,--) &\hspace{1pt} (--,+) & (--,--) \\[2pt]\hline\\[-5pt] $|\lambda|>2$ & $\frac{1}{6} $ & $-\frac{5}{6}$ & $\phantom{-}\frac{5}{6}$ & $-\frac{1}{6}$ \\[5pt]\hline\\[-5pt] $1 <|\lambda|<2$ & $\frac{1}{6} $ & $\phantom{-}\frac{1}{6}$ & $-\frac{1}{6}$ & $-\frac{1}{6}$ \end{tabular} \end{center} \caption{Variation of the argument along the left (resp.\ right) vertical line segment of $\partial \mathcal{F}$ in several cases (see \Cref{tab:3}).} \label{tab:4} \end{table} The contributions to the variation of the argument are given in \Cref{tab:4}. Note that the contributions in this table yield the same final formula for $N_{\lambda}(f) + N_{-\frac{1}{\lambda}}(f)$. \qed \begin{proof}[Proof of \Cref{thm:crit}] The result follows from \Cref{thm:main2}, as we explain now. First, let $\varphi$ be the unique modular form such that $f:=g'/\varphi$ is irreducible (if $g$ has only simple zeros, and no zeros at the cusp, then $\varphi=1$). Recall that the derivative of a real-valued differentiable function has opposite signs at two consecutive zeros of this function. Hence, for two consecutive zeros of~$g$, the function \[ f \: =\: \frac{g'}{\varphi} \] changes sign, i.e., $(-1)^j \sgn f(z_j)$ and $(-1)^j \sgn \widehat{f}(\theta_j)$ are independent of $j$. Comparing with the behaviour of $f_1$ around $\rho$ one obtains \[ (-1)^j \sgn f(z_j) = - r(f_1) = (-1)^j \sgn \widehat{f}(\theta_j)\] for all $j>0$.
Moreover, we have \[\sgn f(\rho) \: =\: \begin{cases} r(f_1) & k(f)\equiv 2 \mod 6,\\ -r(f_1) & k(f)\equiv 0 \mod 6, \end{cases} \] where $k(f)$ denotes the weight of $f$. Namely, in case $k(f)\equiv 2 \mod 6$ we have $f_0(\rho)=0$ and $\sgn f(\rho) = \sgn f_1(\rho)$, which is non-zero by definition of $\varphi$. Moreover, in case $k(f)\equiv 0 \mod 6$, we have \[ \sgn f(\rho) \: =\: \sgn\Bigl(\frac{g'}{\varphi}(\rho)\Bigr) \: =\: \sgn\Bigl(\frac{(f_1 \varphi)'}{\varphi}(\rho)\Bigr) \: =\: \sgn\Bigl(f_1'(\rho)+f_1(\rho)\frac{\varphi'}{\varphi}(\rho)\Bigr).\] Note that $\varphi(\rho)\neq 0$ if $k(f)\equiv 0 \mod 6$, and $f_1(\rho)=0$. Hence, we find \[ \sgn f(\rho) \: =\: \sgn f_1'(\rho) \: =\: -r(f_1).\] Note that $a_0(f)=0$ if and only if $a_0(g)\neq 0$. In case $g$ has $n\geq 1$ zeros at the cusp, observe that $f$ and $n \frac{g}{\varphi} E_2$ have the same (non-zero) constant term. Also, observe that $f_1$ equals $\frac{g}{\varphi}$ up to a constant (being the weight of $g$, divided by $12$). Hence, in that case we have \[ s(f) = \sgn(a_0(f)) = \sgn(a_0(g/\varphi)) = \sgn(a_0(f_1)) = (-1)^{m} r(f_1),\] where the last equality holds as $f_1$ does not vanish at infinity. Hence, \[-\frac{1}{2}(-1)^{m+1} r(f_1)s(f) \: =\: \begin{cases} -\frac{1}{2} & a_0(g)\neq 0, \\ \frac{1}{2} & a_0(g)=0 .\end{cases}\] Finally, write $\delta'$ and $\epsilon'$ for the order of $E_4$ and $E_6$ in $g'$. Note that $k(f)\equiv k+2 \mod 6$ if $g$ does not have repeated zeros at $\rho$ and $\mathrm{i}$ (here $k$ is the weight of $g$, so $k+2$ is the weight of $g'$).
More generally, \[ k+2 \:\equiv\: k(f)+4\delta'+6\epsilon' \mod 6.\] Observe the following identity: \[ \frac{1}{2}\biggl \lfloor \frac{k+2}{6} -\frac{2}{3}\delta'\biggr \rfloor \,+\, \frac{1}{3}\delta' = \frac{k}{12} + \frac{1}{6} \delta_{g(\rho)=0}\,, \] where $\delta_{g(\rho)=0}$ is $1$ precisely if $g(\rho)=0$ (if not, then $k\equiv 0 \mod 6$). We conclude that \begin{align} N_{(1,\infty]}(g') &\: =\: \frac{1}{2}\biggl \lfloor \frac{k+2-4\delta'-6\epsilon'}{6} \biggr \rfloor \,+\, \frac{2\delta'+3\epsilon'}{6} \,+\, C(g)\,+\, \frac{1}{6} \delta_{g(\rho)=0} \\ &\: =\: \frac{k}{12} \,+\, C(g)\,+\, \frac{1}{3} \delta_{g(\rho)=0}\,, \end{align} \begin{align} N_{(1,\infty]}(g') + N_{(\frac{1}{2},1)}(g') &\: =\: \biggl\lfloor \frac{k+2-4\delta'-6\epsilon'}{6} \biggr\rfloor \,+\, \frac{2\delta'+3\epsilon'}{3} \: =\: \frac{k}{6}+\frac{1}{3}\delta_{g(\rho)=0} \end{align} and $N_{(1,\infty]}(g') + N_{[0,\frac{1}{2})}(g')$ equals \begin{align} &\biggl\lceil \frac{k+2-4\delta'-6\epsilon'}{6} \biggr\rceil \,+\, \frac{2\delta'+3\epsilon'}{3} \,+\, \frac{1}{2}\delta_{k(f)\equiv 4\,(6)}- \frac{1}{2}\delta_{k(f)\equiv 0\,(6)}+ |\mathcal{L}(g)| - \frac{1}{2} \\ \: =\: & \biggl\lfloor \frac{k+2}{6}-\frac{2}{3}\delta' \biggr\rfloor \,+\, \frac{2}{3}\delta' \,+\,L(g) \\ \: =\: & \frac{k}{12} \,+\,L(g)+\delta_{g(\rho)=0}\,.
\qedhere \end{align} \end{proof} \begin{proof}[Proof of \Cref{cor:upperbound}] First of all, for $f$ as in \Cref{thm:main2} we have \[N_{(1,\infty]}(f) \:\leq\: \frac{1}{2}\biggl \lfloor \frac{k}{6} \biggr \rfloor + n' + \frac{1}{2}\delta_{k\equiv 0\,(6)}, \] where $n'$ counts the weighted number of zeros of $f_1$ on the part of the unit circle with angle $\theta$ such that $\frac{\pi}{2}\leq \theta<\frac{2\pi}{3}$. Now $n'\leq \frac{k-2}{12}-\frac{1}{3}\delta_{k\equiv 0\,(6)}$, as $f_1$ has precisely $\frac{k-2}{12}$ zeros (of which at least one at $\rho$ if $k\equiv 0 \mod 6$). Hence, \[N_{(1,\infty]}(f) \:\leq\: \frac{1}{2}\biggl \lfloor \frac{k}{6} \biggr \rfloor + \frac{k-2}{12} + \frac{1}{6}\delta_{k\equiv 0\,(6)} \: =\: \biggl \lfloor \frac{k}{6} \biggr \rfloor \: =\: \dim \widetilde{M}_k^{\leq 1}-1,\] where we used that $k\equiv 0, 2 \mod 6.$ Now, as $f$ is not divisible by $E_4$ by assumption, this upper bound also holds if $f$ is reducible. Similarly, we obtain the upper bound for $N_{(\frac{1}{2},1)}(f)$. Next, for $f$ as in \Cref{thm:main2} we have \[ N_{(1,\infty]}(f) + N_{[0,\frac{1}{2})}(f) \:\leq \: \biggl\lceil\frac{k}{6}\biggr\rceil + m +\delta_{k\equiv 0\,(6)}. \] Here, the term $\delta_{k\equiv 0\,(6)}$ comes from the fact that $-r(f_1)\frac{(-1)^0}{2}\sgn f(\rho)$ equals $-\frac{1}{2}$ if $k\equiv 2\,(6)$, as in that case $r(f_1)=\sgn f_1(\rho) = \sgn f(\rho)$.
For $k\equiv 0 \,(6)$ we have $-r(f_1)\frac{(-1)^0}{2}\sgn f(\rho)\leq \frac{1}{2}.$ Now, as before, $N_{(1,\infty]}(f) \:\geq\: \frac{1}{2} \lfloor \frac{k}{6} \rfloor - n' - \frac{1}{2}\delta_{k\equiv 0\,(6)}.$ Hence, \[N_{[0,\frac{1}{2})}(f) \:\leq \: \biggl\lceil\frac{k}{6}\biggr\rceil- \frac{1}{2}\biggl \lfloor \frac{k}{6} \biggr \rfloor + n' + m + \frac{3}{2}\delta_{k\equiv 0\,(6)}. \] Similarly, $n'+m\leq \frac{k-2}{12}-\frac{1}{3}\delta_{k\equiv 0\,(6)}$, so that \[N_{[0,\frac{1}{2})}(f) \:\leq \: \biggl\lceil\frac{k}{6}\biggr\rceil- \frac{1}{2}\biggl \lfloor \frac{k}{6} \biggr \rfloor + \frac{k-2}{12} + \frac{7}{6}\delta_{k\equiv 0\,(6)} \: =\: \biggl \lfloor \frac{k}{6} \biggr \rfloor +1 \: =\: \dim \widetilde{M}_k^{\leq 1},\] as $k\equiv 0,2\mod 6$. This implies the corollary. \end{proof} \paragraph{Examples} \begin{exmp} Let $f$ be as in \Cref{thm:main2} and assume $f_1$ has zeros only in the interior of~$\mathcal{F}$ and at infinity. Write $a_0(f)$ for the constant term at infinity of $f$, and $a(f_1)$ for the first non-zero Fourier coefficient of $f_1$. Then, in case $a_0(f)=0$ (as is the case when $f$ is the derivative of a modular form), or if $\sgn(a_0(f))=-\sgn(a(f_1))$, a direct evaluation of the result implies that \[ N_\lambda(f) = N_\lambda(f_1). \] The situation changes slightly if $\sgn(a_0(f))=\sgn(a(f_1))$ (as is the case if $f=E_2$); in that case we find \[ N_\lambda(f) \: =\: \begin{cases} N_\lambda(f_1) & |\lambda|\in (\frac{1}{2},\infty), \\ N_\lambda(f_1)+1 & |\lambda|\in (0,\frac{1}{2}). \end{cases}\] \end{exmp} \begin{exmp}\label{ex:intro2} We return to the example in the introduction.
Applying Equations~\eqref{eq:N1/2} and \eqref{eq:N0} in Theorem~\ref{thm:main2} and using the computations in Example~\ref{ex:intro}, we find \begin{align} N_{(\frac{1}{2}, 1)}(f) \: =\: -1 \,+\, \left \lfloor \frac{36}{6} \right \rfloor \: =\: 5. \end{align} Moreover, as $r(f_1)=1$, $m=0$, $f(\rho)<0$ and $s(f)=1$, we obtain \begin{align} N_{[0,\frac{1}{2})}(f) \: =\: -1 \,+\, \left \lceil \frac{36}{6} \right \rceil \,-\, \Bigl(-\frac{1}{2}\Bigr) \,-\, \Bigl(-\frac{1}{2}\Bigr) \: =\: 6. \end{align} \end{exmp} \paragraph{Example in higher depth} \begin{exmp}\label{ex:criticalE2} Let $f=E_2^2-E_4$. Its zeros are the critical points of $E_2$. We have $$h_j(\tau) = \tau + \frac{12}{2 \pi \mathrm{i}} \frac{E_2(\tau) +(-1)^j\sqrt{E_4(\tau)}}{ E_2(\tau)^2-E_4(\tau) },$$ and for $U=\{z\in \mathfrak{h}\mid \mathrm{Im}\, z>\frac{1}{2}\sqrt{3}\}$ we find $\sigma(C)=\sigma(T)=e$ (see \Cref{cor:meromorphic} for the definition of~$\sigma$). In particular, $h(-\frac{1}{2}+\mathrm{i} t) \in -\frac{1}{2}+\mathrm{i} \mathbb{R}$ for all $t\in \mathbb{R}$. Using the same ideas as in the proof of \Cref{thm:5.3}, we find that the contribution of $\mathcal{C}$ equals $\frac{k-p}{6} -2 N_\infty(f)$ (with $k=4$, $p=2$). For the other contributions, we check that the proof goes through for both $E_2 \pm \sqrt{E_4}$ (which are not \emph{holomorphic} quasimodular forms). That is, take $h$ for $f_0=\pm \sqrt{E_4}$ and $f_1=1$. Then, the function $h$ for $E_2+\sqrt{E_4}=2+O(q)$ behaves as $(-,+)$ in \Cref{tab:4}, whereas the function $h$ for $E_2-\sqrt{E_4}=-144q+O(q^2)$ behaves as $(-,-)$ in the same table.
Hence, \[ N_\lambda(f) +N_{-\frac{1}{\lambda}}(f)\: =\: \begin{cases} \frac{4-2}{6}+\frac{5}{6}-\frac{1}{6} \: =\: 1 & |\lambda|<\frac{1}{2} \text{ or } |\lambda|>2, \\ \frac{4-2}{6}-\frac{1}{6}-\frac{1}{6} \: =\: 0 & \frac{1}{2}<|\lambda|<2.\end{cases}\] Observe that, in contrast to the case where the depth is $1$, we no longer have that $|h_j(z)|=1$ for $|z|=1$. In particular, $h_1(z)$ and $h_2(z)$ intersect the real line for $z\in \mathcal{C}$ in the values $v$ and $\frac{1}{v}$ respectively, given by \[ \frac{1}{v} = 0.180008\ldots, \qquad v=5.555295\ldots \] As the value of $N_\lambda$ for positive $\lambda$ only changes at $\lambda=\frac{1}{2},\frac{1}{v},v$, and $N_\infty(f)\geq 1$ (because $\mathrm{i}\infty$ is a zero of $f$), we conclude \[ N_\lambda(f) \: =\: \begin{cases} 1 & \frac{1}{v}<|\lambda|<\frac{1}{2} \text{ or } |\lambda|>v, \\ 0 & |\lambda|<\frac{1}{v} \text{ or } \frac{1}{2}<|\lambda|<v. \\ \end{cases}\] \end{exmp} \paragraph{Another result on the critical points of $E_2$} Again, let $f=E_2^2-E_4$. Let \[\mathcal{F}_0(2) \: :=\: \{z \in \mathfrak{h} \mid 0 \leq \mathrm{Re}\, z\leq 1 \text{ and } |z-\tfrac{1}{2}|\geq \tfrac{1}{2}\}\cup\{\mathrm{i} \infty\}\] be (the closure of) a fundamental domain for $\Gamma_0(2)$. In \cite{CL19} it is shown that \begin{align}\label{eq:criticalE2} \sum_{\tau \in \gamma\mathcal{F}_0(2)} \nu_\tau(f) = 1 \end{align} for all $\gamma \in \Gamma_0(2)$. In particular, the number of critical points of $E_2$ is constant in every $\gamma$-translate of $\mathcal{F}_0(2)$, but depends on $\lambda(\gamma)$ in every $\gamma$-translate of $\mathcal{F}$ (see the previous example). Why are the zeros of a quasimodular form for $\mathrm{SL}_2(\mathbb{Z})$ `better' distributed with respect to $\Gamma_0(2)$?
To get some more insight, we sketch how the proof of \Cref{thm:5.3} can be adapted in order to give an alternative proof of~\eqref{eq:criticalE2}. Let \[ N_\lambda^{(2)}(f) \: :=\: \sum_{\tau \in \gamma\mathcal{F}_0(2)} \frac{\nu_\tau(f)}{e_{\tau,2}},\] where $\lambda(\gamma)=\lambda$ and $e_{\tau,2}=2$ if $\tau$ is a $\gamma$-translate of $\frac{1}{2}+\frac{1}{2}\mathrm{i}$ for $\gamma \in \Gamma_0(2)$, and $e_{\tau,2}=1$ else. Then, we claim \[ N_{\lambda}^{(2)}(f) + N_{\frac{\lambda-1}{2\lambda-1}}^{(2)}(f) \: =\: 2\] for all $\lambda$. Observe that $z\mapsto \frac{z-1}{2z-1}$ maps the circle centered at $\frac{1}{2}$ with radius $\frac{1}{2}$ to itself. Now, the integral over the corresponding circular segment in the upper half plane yields a contribution of \[ \frac{k-p}{2}-2N_{\infty}^{(2)}(f)\] with $k=4$, $p=2$ (namely, we integrate a function containing a $(k-p)$-fold pole at $\lambda$ and $\frac{\lambda-1}{2\lambda-1}$ over half of the circle). Moreover, the functions $h$ corresponding to $E_2\pm \sqrt{E_4}$ tend to $0$ and $1$ as $z$ tends to $0$ and $1$, and tend to $+\mathrm{i}\infty$ for $\tau \to \mathrm{i}\infty$ and $\tau\to 1+\mathrm{i} \infty$. As exactly one of $\lambda$ and $\frac{\lambda-1}{2\lambda-1}$ lies between $0$ and $1$, we find in both cases that the contribution to the variation of the argument is $\frac{1}{2}$, which yields the claim. Next, let $U=\mathrm{int}\,\mathcal{F}_0(2)\backslash[\rho,\tfrac{1}{2}+\mathrm{i}\infty)$. This is an open subset of $\mathfrak{h}$ invariant under $TC$ and $TST^2S$ (corresponding to $z\mapsto \frac{z-1}{2z-1}$). Moreover, $\sigma(CT)=(1\,2)$ and $\sigma(TST^2S)=(1\,2)$. As both leave the circle $\tfrac{1}{2}+\tfrac{1}{2}e^{\mathrm{i} \theta}$ invariant, we conclude \[ |h_i(\tfrac{1}{2}+\tfrac{1}{2}e^{\mathrm{i} \theta})-\tfrac{1}{2}|=\tfrac{1}{4}.
\] Hence, we find that $N_\lambda^{(2)}(f)$ as a function of $\lambda$ can only change value at $\lambda=0,1$. Showing by other means that $N_\infty^{(2)}(f)=1$, one could conclude that \[N_\lambda^{(2)}(f)=1 \text{ for all $\lambda$}.\] \appendix \section{The zeros of \texorpdfstring{$\mathrm{Re}\,(\widehat{E}_2^n)$}{the real} and \texorpdfstring{$\mathrm{Im}\,(\widehat{E}_2^n)$}{imaginary part of powers of the quasimodular Eisenstein series of weight 2}} We are interested in counting the number of zeros of~$\mathrm{Re}\,(\widehat{E}_2^n)$ and~$\mathrm{Im}\,(\widehat{E}_2^n)$ on $(\frac{\pi}{3}, \frac{2 \pi}{3})$ for $n > 0$. \begin{prop}\label{prop:rootsreimE2} The functions $\mathrm{Re}\,(\widehat{E}_2^n)$ and~$\mathrm{Im}\,(\widehat{E}_2^n)$ admit \[n - 2 \Bigl\lfloor \frac{n}{3} + \frac{1}{2} \Bigr\rfloor, \qquad \text{resp.} \qquad n - 1 - 2 \Bigl \lfloor \frac{n}{3}\Bigr\rfloor \] zeros on $(\frac{\pi}{3}, \frac{2 \pi}{3})$ for $n \geq 1$. \end{prop} The proof follows almost immediately from the following result. \begin{lem} For $n>0$, write $$(x+\mathrm{i})^n = R_n(x) + \mathrm{i} \,Q_n(x),$$ where $R_n(x), Q_n(x)\in \mathbb{Z}[x]$. Then all roots of $R_n$ and $Q_n$ are real, of which \[n - 2 \Bigl\lfloor \frac{n}{3} + \frac{1}{2} \Bigr\rfloor, \qquad \text{resp.} \qquad n - 1 - 2 \Bigl \lfloor \frac{n}{3}\Bigr\rfloor \] lie in $(-\frac{1}{\sqrt{3}},\frac{1}{\sqrt{3}})$. \end{lem} \begin{proof} For $x>0$, we may write \begin{equation} (x+\mathrm{i})^n = (x^2+1)^{n/2} e^{\mathrm{i} \, n \arctan(1/x) }.
\end{equation} Therefore, for all $x$, we recognize \begin{align} R_n(x) &= (x^2+1)^{n/2} \, T_n( \cos( \arctan(1/x)))\\ &= (x^2+1)^{n/2} \, T_n \Bigl( \frac{x}{\sqrt{x^2+1}}\Bigr), \end{align} where $T_n$ is the $n$-th Chebyshev polynomial of the first kind, admitting the $n$ distinct real roots $\cos\bigl(\frac{(k+\frac{1}{2})\pi}{n}\bigr)$ for $k=0,\ldots,n-1$ in~$(-1,1)$. Hence, $R_n$ has $n$ distinct real roots, of which $n - 2 \bigl \lfloor \frac{n}{3} + \frac{1}{2} \bigr\rfloor$ are contained in $(-\tfrac{1}{\sqrt{3}},\tfrac{1}{\sqrt{3}})$. Write $U_n$ for the $n$-th Chebyshev polynomial of the second kind. Then, for $n \geq 1$, \begin{align} Q_n(x) &= (1+x^2)^{(n-1)/2} \, U_{n-1}(\cos(\arctan(1/x))) \\ &= (1+x^2)^{(n-1)/2} \, U_{n-1}\Bigl( \frac{x}{\sqrt{1+x^2}}\Bigr), \end{align} from which one deduces that all roots of $Q_n$ are real and $n - 1 - 2 \bigl \lfloor \frac{n}{3}\bigr\rfloor$ lie in $(-\frac{1}{\sqrt{3}},\frac{1}{\sqrt{3}})$. Note that $Q_n$ admits a zero at $\pm \frac{1}{\sqrt{3}}$ for $n\equiv 0 \mod 3.$ \end{proof} \begin{proof}[Proof of Proposition~\ref{prop:rootsreimE2}] We apply the previous lemma to $x = \frac{\pi}{3}\mathrm{Re}\,(\widehat{E}_2)$. Note that by~\eqref{eq:imE2hat}, we have $\mathrm{Im}\,(\widehat{E}_2) = \frac{3}{\pi}$. Hence, \begin{align} \frac{\pi^n}{3^n}\mathrm{Re}\, (\widehat{E}_2^n ) &= \mathrm{Re}\,\bigl(\bigl(\frac{\pi}{3}\mathrm{Re}\,\bigl(\widehat{E}_2\bigr) + \mathrm{i}\bigr)^n \bigr) =R_n\bigl(\frac{\pi}{3}\mathrm{Re}\,\bigl(\widehat{E}_2\bigr)\bigr) \end{align} and \begin{align} \frac{\pi^n}{3^n} \mathrm{Im}\, ( \widehat{E}_2^n ) = Q_n\bigl(\frac{\pi}{3}\mathrm{Re}\,\bigl(\widehat{E}_2\bigr)\bigr).
\end{align} Observe that~$\mathrm{Re}\,(\widehat{E}_2)$ is a strictly decreasing function on $(\frac{\pi}{3}, \frac{2 \pi}{3})$ with a unique zero at $\theta = \frac{\pi}{2}$. As on the boundary we have $$\frac{\pi}{3}\mathrm{Re}\,(\widehat{E}_2)\Bigl(\frac{\pi}{3}\Bigr) = -\frac{\pi}{3}\mathrm{Re}\,(\widehat{E}_2)\Bigl(\frac{2 \pi}{3}\Bigr) = \frac{1}{\sqrt{3}},$$ the proposition follows from the lemma above. \end{proof} \end{document}
\begin{document} {\small \title{Algebraic Characters of Harish-Chandra Modules and Arithmeticity} \author{Fabian Januszewski} \maketitle } \begin{abstract} These are expanded notes from lectures at the Workshop {\em Representation Theory and Applications} held at Yeditepe University, Istanbul, in honor of Roger E.\ Howe. They are supplemented by the application of algebraic character theory to the construction of Galois-equivariant characters for Harish-Chandra modules. \end{abstract} {\small\tableofcontents} \section*{Introduction} In these notes we give an introduction to abstract algebraic character theory as introduced in \cite{januszewskipreprint}. The initial motivation for this theory was the study of periods of automorphic representations with applications to number theory. In general these periods are controlled by non-admissible branching problems on the level of $({\mathfrak {g}},K)$-modules, where Harish-Chandra's classical global character theory is not directly applicable. One obstacle is that the analytic definition of a character does not extend to non-admissible representations; the other is that Harish-Chandra's correspondence between closed $G$-invariant submodules and $({\mathfrak {g}},K)$-submodules fails in this generality as well. So even if there were a global character for the restriction, it is not clear that it would still be an invariant of the underlying $({\mathfrak {g}},K)$-module. To be more concrete, consider a real reductive Lie group $G$, a closed reductive subgroup $H\subseteq G$, and a unitary representation of $G$ on a Hilbert space $V$, say. Assume that $V$ is of finite length as a $G$-representation. Then Harish-Chandra has shown that it is admissible, in the sense that for a maximal compact subgroup $K\subseteq G$ and any irreducible unitary $K$-module $W$, its multiplicity $m_W(V)$ in $V$ is finite.
Furthermore the subspace of $K$-finite vectors $V^{(K)}\subseteq V$ is a module for the complexified Lie algebra ${\mathfrak {g}}$ of $G$, i.e.\ it is a $({\mathfrak {g}},K)$-module. Harish-Chandra went on to prove that there is a natural one-to-one correspondence between closed $G$-invariant subspaces $U\subseteq V$ and algebraic $({\mathfrak {g}},K)$-submodules $U'\subseteq V^{(K)}$, given in one direction by $U\mapsto U^{(K)}$, in the other by taking closures. In this way he algebraized the study of finite length (more generally admissible) unitary representations of $G$. Now if we consider $V$ as a representation of $H$, it need no longer be of finite length, it may even fail to be admissible, and the correspondence between closed $H$-invariant subspaces and algebraic $({\mathfrak {h}},L)$-submodules (here ${\mathfrak {h}}$ denotes the complexified Lie algebra of $H$ and $L\subseteq H$ a maximal compact subgroup) may fail spectacularly too. One reason is that $V^{(K)}$ is a dense $({\mathfrak {h}},L)$-module in $V$, and in general $V^{(K)}\subsetneq V^{(L)}$. In particular $V^{(L)}$ may be thought of as a certain {\em completion} of $V^{(K)}$, and yet these two modules behave differently: If they are not the same, then $V^{(L)}$ contains more $({\mathfrak {h}},L)$-submodules than $V^{(K)}$, for the former can distinguish between $V^{(L)}$ and $V^{(K)}\subsetneq V^{(L)}$. As the global and the infinitesimal pictures no longer coincide in general, it is natural to look for a theory which genuinely lives on the infinitesimal side. The algebraic theory discussed here provides precisely this and thus overcomes some of the above limitations by separating it from the analytic framework. The main idea is to apply cohomological methods, which are algebraic in nature, to define a reasonable character theory. We will motivate our construction in the first section, where we also review the classical theories. We tried to keep these notes as self-contained as possible.
However the proofs of many fundamental facts about $({\mathfrak {g}},K)$-modules are far too involved to be treated here. For those the reader may consult the textbooks of Knapp, Vogan, Wallach, Dixmier and others, and of course papers of Harish-Chandra. The monograph \cite{book_knappvogan1995} contains most of the fundamental material in the generality we need. For a streamlined general treatment of the algebraic character theory itself we refer to \cite{januszewskipreprint}. In order to prove something in the first four sections, we discuss duality theorems in detail. We hope that this allows a reader not so familiar with Lie algebra cohomology to demystify the objects, as the arguments are elementary and yet the resulting statements are real theorems. A fundamental theme we left out in our discussion here is coherent continuation. We do not discuss the (good) behavior of characters under translation functors. This is treated in \cite{januszewskipreprint}. In the last section we introduce notions of rationality for pairs, reductive pairs and also for the corresponding modules. We show that cohomology and cohomological induction carry over to the rational setting and compare in a natural way to the classical theory over ${\rm\bf C}$. We show in particular that every discrete series, and more generally every cohomological representation, has a model over a number field. Furthermore this gives a nice playground for a non-trivial generalization of our algebraic character theory from section 5. For some time the author has planned to write up such a theory, and the recent work of Harris \cite{harris2012} in the context of rational Beilinson-Bernstein localization and Shimura varieties gave the final motivation to include at least the beginnings here.
While we tried to make the first 6 sections as self-contained as possible, we give fewer details in the last section, and also require more background, particularly about linear reductive groups and their representation theory over non-algebraically closed fields of characteristic $0$, and also Tannaka duality, as it is a convenient tool for our purposes. The author thanks the organizers of the Workshop on Representation Theory and Applications at Yeditepe University in Istanbul for their hospitality, great organization, and good working conditions. The author thanks Roger E.\ Howe for helpful discussions and his curiosity about this theory, and the participants of the workshop for their questions. Personally the author thanks Ilhan Ikeda, Mahir B.\ Can, Safak Ozden, K\"ursat Aker, Kazim B\"uy\"ukboduk, and all the others who took very good care of him during those moving times in Istanbul. \section{What is a character?}\label{sec:whatisacharacter} Historically the idea of characters arose in the very same moment as representation theory itself, when Frobenius introduced and investigated characters of finite groups, and thus laid the foundations of representation theory. For a finite group $G$, and a finite-dimensional complex representation $$ \rho_V:G\to\GL(V) $$ the corresponding character is classically defined as the map $$ \Theta_V:G\to{\rm\bf C} $$ given by $$ g\;\mapsto\;\tr\rho_V(g). $$ The collection of all these functions generates a ${\rm\bf Z}$-submodule $C(G)$ of the space of all functions $G\to{\rm\bf C}$. The elements of $C(G)$ are also called {\em virtual characters}. The collection of virtual characters is an {\em abelian group}. To the addition of characters then corresponds the direct sum of representations. As a field, ${\rm\bf C}$ comes with a {\em multiplication}, so functions may be multiplied, and we may wonder whether the product of two (virtual) characters is a (virtual) character again.
This turns out to be true, as the multiplication of two characters corresponds to the tensor product of representations. The fundamental properties of characters of finite groups then are \begin{itemize} \item[(A)] {\bf Additivity:} $\;\;\;\;\;\;\;\;\;\;\;\;\Theta_{V\oplus W}=\Theta_V+\Theta_W$. \item[(M)] {\bf Multiplicativity:} $\;\;\;\Theta_{V\otimes W}=\Theta_V\cdot\Theta_W$. \item[(I)] {\bf Independence:} Characters of pairwise distinct irreducible representations are linearly independent. \item[(D)] {\bf Density:} The collection of $\Theta_{V}$ generates the vector space ${\rm clf}(G)$ of class functions. \end{itemize} Properties (A), (M), (I), and (D) have the following alternative interpretation. Consider the {\em Grothendieck group} $K(G)$ of the category of representations which is constructed as follows. First consider the set $K^+(G)$ of isomorphism classes of finite-dimensional complex representations of $G$. Then the functor $$ (V,W)\;\mapsto\;V\oplus W $$ induces a map $$ +:K^+(G)\times K^+(G)\;\to\;K^+(G), $$ which turns $K^+(G)$ into an abelian monoid, the neutral element being the isomorphism class of the tautological representation $$ {\bf 0}:G\to\GL(0),\;\;\;g\mapsto {\bf 1}_0 $$ on the $0$-space. We turn $K^+(G)$ into an abelian group $K(G)$ by formally adjoining inverses. The result $K(G)$ is called the Grothendieck group of (finite-dimensional) representations of $G$. It comes with a multiplication, induced by the functor $$ (V,W)\;\mapsto\;V\otimes W, $$ which yields a map $$ \cdot:\;\;\;K^+(G)\times K^+(G)\;\to\;K^+(G), $$ and the latter multiplication law extends to all of $K(G)$. This turns $K(G)$ into a {\em commutative ring with unit}, the unit being the class of the trivial representation $$ {\bf 1}:\;\;\;G\to \GL({\rm\bf C}),\;\;\;g\mapsto 1.
$$ Now the map $V\mapsto \Theta_V$ is constant on isomorphism classes, hence induces a map $$ \Theta:\;\;\;K(G)\to{\rm clf}(G) $$ of the Grothendieck group to the space of class functions on $G$. Then the properties (A) and (M) are equivalent to saying that $\Theta$ is a ring homomorphism (which carries $1$ to $1$), and property (I) is equivalent to $\Theta$ being a monomorphism. Finally (D) amounts to saying that the ${\rm\bf C}$-span of the image of $\Theta$ is the entire space of class functions. In other words (A), (M), (I) and (D) are equivalent to the fact that the induced map $$ \Theta:\;\;\;K(G)\otimes_{\rm\bf Z}{\rm\bf C}\to{\rm clf}(G) $$ is an isomorphism of ${\rm\bf C}$-algebras. This notion of character depends on the notion of function on the group. There is another notion of character, which is motivated by the 1913 discovery of Elie Cartan, that the isomorphism classes of irreducible finite-dimensional representations of a complex semi-simple Lie algebra ${\mathfrak {g}}$ are in one-to-one correspondence with {\em dominant weights}. To be more precise, fix a Borel subalgebra ${\mathfrak {q}}\subseteq{\mathfrak {g}}$ with Levi decomposition ${\mathfrak {q}}={\mathfrak {l}}+{\mathfrak {u}}$, where ${\mathfrak {l}}$ is a Cartan subalgebra and ${\mathfrak {u}}$ is the nilpotent radical. Then the correspondence $$ V\;\;\;\mapsto\;\;\;V^{{\mathfrak {u}}}:=\{v\in V\;\mid\;\forall u\in{\mathfrak {u}}:\;u\cdot v=0\}, $$ induces a map $$ H^0({\mathfrak {u}};-):\;\;\;K({\mathfrak {g}})\;\to\;K({\mathfrak {l}}) $$ on the level of Grothendieck groups of finite-dimensional ${\mathfrak {g}}$- resp.\ ${\mathfrak {l}}$-modules. It is easily seen to be additive, and Elie Cartan showed that it is a monomorphism, with image the dominant weights with respect to ${\mathfrak {u}}$. However it is not multiplicative.
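To see the failure of multiplicativity concretely, here is a minimal check (a standard example; we take ${\mathfrak {g}}={\mathfrak {sl}}_2$, write $V(n)$ for the irreducible module of highest weight $n$ and ${\rm\bf C}_k$ for the one-dimensional ${\mathfrak {l}}$-module of weight $k$):

```latex
% Clebsch--Gordan: V(1) \otimes V(1) \cong V(2) \oplus V(0), hence
H^0({\mathfrak {u}};V(1)\otimes V(1)) \;\cong\; {\rm\bf C}_{2}\oplus{\rm\bf C}_{0},
\qquad\text{whereas}\qquad
H^0({\mathfrak {u}};V(1))\otimes H^0({\mathfrak {u}};V(1)) \;\cong\; {\rm\bf C}_{2}.
% The left hand side is two-dimensional, the right hand side one-dimensional.
```

Thus already in this smallest case $[H^0({\mathfrak {u}};V\otimes W)]\neq[H^0({\mathfrak {u}};V)]\cdot[H^0({\mathfrak {u}};W)]$ in $K({\mathfrak {l}})$.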
A consequence of Cartan's observation is that the forgetful map \begin{equation} \mathcal F:\;\;\;K({\mathfrak {g}})\;\to\;K({\mathfrak {l}}),\;\;\;V\;\mapsto\;V|_{\mathfrak {l}}, \label{eq:cartanf} \end{equation} is a monomorphism. By definition $\mathcal F$ is also multiplicative, in particular $\mathcal F$ is a ring homomorphism. Therefore we may consider $\mathcal F$ as a character, satisfying all the fundamental properties (A), (M) and (I), except (D). This notion of character no longer depends on functions of any sort, but on the non-vanishing of Grothendieck groups. A mere twelve years later, Hermann Weyl generalized Frobenius' work to the case of compact groups $G$, and he showed that the trace still satisfies all the properties (A), (M), (I), and property (D) reads: \begin{itemize} \item[] {} The span of the collection of the $\Theta_{V}$ is {\em dense} in the space ${\rm clf}^2(G)$ of $L^2$-class functions. \end{itemize} The latter statement is known as the (weak) Peter-Weyl Theorem. Weyl went on to prove that for a compact connected Lie group $G$ and an irreducible unitary representation $V$ of $G$ of highest weight $\lambda$, the restriction of the character to the corresponding maximal torus $T$ is explicitly given by the {\em Weyl character formula} \begin{equation} \Theta_V|_T\;=\; \frac{ \sum_{w\in W(G,T)} (-1)^{\ell(w)} e^{w(\lambda+\rho({\mathfrak {u}}))-\rho({\mathfrak {u}})} } { \sum_{w\in W(G,T)} (-1)^{\ell(w)} e^{w(\rho({\mathfrak {u}}))-\rho({\mathfrak {u}})}}, \label{eq:weylcharacterformula} \end{equation} where $\rho({\mathfrak {u}})$ denotes the half sum of the weights in ${\mathfrak {u}}$, $W(G,T)$ is the Weyl group and $\ell$ is the length function. As $G$ is covered by the conjugates of $T$, and as $\Theta_V$ is a class function, $\Theta_V|_T$ already determines $\Theta_V$ on all of $G$, analogous to Cartan's observation about $\mathcal F$ above.
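For instance (a routine verification in the smallest case): take $G={\rm SU}(2)$ with $T$ the diagonal torus, let $V(n)$ be the irreducible representation of highest weight $n$, and write $e^{k}$ for the character ${\rm diag}(e^{\mathrm{i}\theta},e^{-\mathrm{i}\theta})\mapsto e^{\mathrm{i} k\theta}$. Then $\rho({\mathfrak {u}})=1$, and \eqref{eq:weylcharacterformula} reduces to

```latex
\Theta_{V(n)}\bigl({\rm diag}(e^{\mathrm{i}\theta},e^{-\mathrm{i}\theta})\bigr)
  \;=\; \frac{e^{\mathrm{i} n\theta}-e^{-\mathrm{i}(n+2)\theta}}{1-e^{-2\mathrm{i}\theta}}
  \;=\; e^{\mathrm{i} n\theta}+e^{\mathrm{i}(n-2)\theta}+\cdots+e^{-\mathrm{i} n\theta}
  \;=\; \frac{\sin((n+1)\theta)}{\sin\theta},
```

the familiar character of the $(n+1)$-dimensional representation.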
Now $V$ is also a module for the complexified Lie algebra ${\mathfrak {g}}$ of $G$, and it turns out that if we choose ${\mathfrak {l}}$ as the complexified Lie algebra of $T$, then we have the analogous identity \begin{equation} \mathcal F(V)\;=\; \frac{ \sum_{w\in W({\mathfrak {g}},{\mathfrak {l}})} (-1)^{\ell(w)} [w(\lambda+\rho({\mathfrak {u}}))-\rho({\mathfrak {u}})] } { \sum_{w\in W({\mathfrak {g}},{\mathfrak {l}})} (-1)^{\ell(w)} [w(\rho({\mathfrak {u}}))-\rho({\mathfrak {u}})]}, \label{eq:lieweylcharacterformula} \end{equation} in the localization $$ K({\mathfrak {l}})[W_{\mathfrak {q}}^{-1}], $$ where $$ W_{\mathfrak {q}}\;\;\;:= \;\;\;\sum_{w\in W({\mathfrak {g}},{\mathfrak {l}})} (-1)^{\ell(w)} [w(\rho({\mathfrak {u}}))-\rho({\mathfrak {u}})]. $$ As a matter of fact the map $$ c:\;\;\;K({\mathfrak {g}})\to K({\mathfrak {l}})[W_{\mathfrak {q}}^{-1}], $$ induced by $$ V\;\mapsto\; \frac{ \sum\limits_{w\in W({\mathfrak {g}},{\mathfrak {l}})} (-1)^{\ell(w)} [w(\lambda+\rho({\mathfrak {u}}))-\rho({\mathfrak {u}})] } { \sum\limits_{w\in W({\mathfrak {g}},{\mathfrak {l}})} (-1)^{\ell(w)} [w(\rho({\mathfrak {u}}))-\rho({\mathfrak {u}})]}, $$ is still injective, and thus may be interpreted as yet another notion of character. Harish-Chandra managed to generalize $\Theta$ to finite length representations of real reductive groups, which amounts to defining distribution characters for $({\mathfrak {g}},K)$-modules, and he proved an analogue of the Weyl character formula \eqref{eq:weylcharacterformula} for the discrete series \cite{harishchandra1953,harishchandra1954a,harishchandra1954b}. More generally, assume that $X$ is an admissible representation of $G$ and that the multiplicities of the $K$-types in $X$ are bounded by a polynomial in their infinitesimal character.
Then for any compactly supported smooth $f:G\to{\rm\bf C}$ the operator $$ \rho(f)\;:=\; \int_G f(g)\cdot \rho_X(g)\, dg $$ is of trace class and $$ \Theta_X:\;\;\;f\;\mapsto\;{\rm tr}(\rho(f)) $$ defines a distribution on $G$. This is Harish-Chandra's global character as defined in loc.\ cit. However there is no direct way to define $\Theta$ in more general contexts, for example for non-admissible modules: The operators appearing in the analytic definition are no longer of trace class. Similarly in the infinite-dimensional setting $\mathcal F$ still makes sense for Verma modules, but for $({\mathfrak {g}},K)$-modules this notion is no longer meaningful either. It turns out that the only notion surviving is the map $c$. Of course for this purpose $c$ has to be defined without falling back to $\mathcal F$ or $\Theta$. This is possible, and the motivation stems from yet another incarnation of the Weyl character formulae \eqref{eq:weylcharacterformula} and \eqref{eq:lieweylcharacterformula} given by Bertram Kostant \cite{kostant1961} in 1961: \begin{equation} c(V)\;=\; \frac{ \sum\limits_{q\in{\rm\bf Z}} (-1)^q [H^q({\mathfrak {u}};V)] } { \sum\limits_{q\in{\rm\bf Z}} (-1)^q [H^q({\mathfrak {u}};{\bf 1})]}. \label{eq:kostantweylcharacterformula} \end{equation} Here the {\em Lie algebra cohomology} $H^q({\mathfrak {u}};-)$ is the $q$-th right derived functor of the functor $H^0({\mathfrak {u}};-)$ considered by Elie Cartan. It vanishes for $q<0$ and $q>\dim{\mathfrak {u}}$, thus the above sums are indeed finite. There are several advantages of applying homological methods. First of all cohomology is well defined for {\em every} $({\mathfrak {g}},K)$-module $V$, which in principle allows us to write down the expression \eqref{eq:kostantweylcharacterformula} for {\em any} $({\mathfrak {g}},K)$-module $V$. A crucial limitation is that we still need a meaningful ambient Grothendieck group.
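Formula \eqref{eq:kostantweylcharacterformula} can be checked by hand in the smallest case (a sketch; ${\mathfrak {g}}={\mathfrak {sl}}_2$, ${\mathfrak {u}}$ spanned by the raising element, $V(n)$ the irreducible of highest weight $n$, and $[k]$ denoting the class of the one-dimensional ${\mathfrak {l}}$-module of weight $k$). By Kostant the cohomology is concentrated in degrees $0$ and $1$, with $H^0({\mathfrak {u}};V(n))\cong{\rm\bf C}_{n}$ and $H^1({\mathfrak {u}};V(n))\cong{\rm\bf C}_{-n-2}$, so that

```latex
c(V(n)) \;=\; \frac{[n]-[-n-2]}{[0]-[-2]}
        \;=\; [n]+[n-2]+\cdots+[-n],
% the quotient telescopes: ([0]-[-2])([n]+[n-2]+\cdots+[-n]) = [n]-[-n-2],
```

recovering the full weight decomposition $\mathcal F(V(n))$, in accordance with \eqref{eq:lieweylcharacterformula}.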
Grothendieck groups may be defined for every essentially small abelian category, but they tend to collapse once multiplicities are infinite. However this isn't always a bad thing, as things may sometimes be arranged such that some part of the group collapses, but another doesn't, and the study of the latter may even be simplified by the annihilation of the other. We will encounter similar phenomena related to localization, and, interestingly, it even turns out that vanishing still tells us something. Another reason why cohomological methods work out here is the extended formalism that homological algebra provides. Lie algebra cohomology is very well behaved, and we will see that the fundamental desirable properties (A), (M), and partially (I), all correspond to fundamental properties of cohomology. We will also see other fundamental properties that cohomologically defined characters possess, which also have their classical analogues, yet which we refrained from adding to the above list in order not to obscure the picture. Harish-Chandra's global character theory works well for finite length modules, as those are small \lq{}by definition\rq{} and also by Harish-Chandra's Admissibility Theorem, which makes the analytic definition of the distribution character work. If we apply our theory to the category of finite length modules, we essentially recover Harish-Chandra's characters, and his results tell us that localization does no harm here. One fundamental problem then is how to define \lq{}small\rq{} categories. Motivated by our study of localization, we give definitions which are motivated by hypothetical generalizations of Harish-Chandra's Admissibility Theorem for more general branching problems. We will come back to this in section 6. \section{Reductive pairs and $({\mathfrak {g}},K)$-modules} In this section we introduce the fundamental notions and properties of Harish-Chandra modules.
In the literature the term {\em Harish-Chandra module} appears in different variations. The common definitions differ slightly by the finiteness conditions (finite generation, admissibility, ...) imposed. Therefore we prefer the term $({\mathfrak {g}},K)$-module, which a priori imposes no finiteness conditions at all (except local $K$-finiteness). We use the expression Harish-Chandra module only informally, and understand it synonymously for $({\mathfrak {g}},K)$-module. \subsection{Reductive pairs} Let $G$ be a reductive Lie group with finite component group. Then all maximal compact subgroups in $G$ are conjugate and share the same component group with $G$. We fix one maximal compact subgroup $K\subseteq G$, write ${\mathfrak {g}}_0$ for the Lie algebra of the connected component $G^0\subseteq G$ and denote by ${\mathfrak {g}}={\rm\bf C}\otimes_{\rm\bf R}{\mathfrak {g}}_0$ its complexification. Then $({\mathfrak {g}},K)$ is a reductive pair, and there is a natural dictionary between reductive pairs and reductive Lie groups $G$ as above. Strictly speaking a reductive pair consists of more data than the notation suggests: The map ${\mathfrak {g}}_0\to{\mathfrak {g}}$ is part of the datum, the extension of the adjoint action of $K$ on ${\mathfrak {k}}_0$ to ${\mathfrak {g}}_0$ is as well, as is the Cartan involution $\theta:{\mathfrak {g}}_0\to{\mathfrak {g}}_0$, inducing a Cartan decomposition $$ {\mathfrak {g}}_0\;=\;{\mathfrak {p}}_0\oplus{\mathfrak {k}}_0 $$ of the Lie algebra ${\mathfrak {g}}_0$ into $(-1)$- resp.\ $1$-eigenspaces (${\mathfrak {k}}_0$ being identified with ${\rm Lie}(K^0)$), and the non-degenerate symmetric bilinear form $\langle\cdot,\cdot\rangle$ on ${\mathfrak {g}}_0$ generalizing the Killing form. Then the map \begin{equation} {\mathfrak {p}}_0\times K\;\to\;G,\;\;\;(p,k)\mapsto \exp(p)\cdot k \label{eq:Gexp} \end{equation} is a diffeomorphism.
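A concrete instance of \eqref{eq:Gexp} (an illustration; the pair behind it is discussed in detail below): for $G=\GL_n({\rm\bf R})$ with $K=\Oo(n)$ and $\theta(g)=-g^t$, the subspace ${\mathfrak {p}}_0$ consists of the real symmetric matrices, and \eqref{eq:Gexp} is the polar decomposition

```latex
A \;=\; e^{S}\cdot k, \qquad S=S^{t}\in{\mathfrak {p}}_0, \quad k\in\Oo(n),
% with e^{S}=(A\cdot A^{t})^{1/2} positive definite and k=e^{-S}\cdot A orthogonal,
```

which exists and is unique for every invertible real matrix $A$.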
Using this map as a blueprint, one may eventually reconstruct $G$ as a reductive Lie group entirely starting from the information provided by the reductive pair $({\mathfrak {g}},K)$. For details the reader may consult \cite[Chap.\ IV]{book_knappvogan1995}. As an example, consider the reductive Lie group $\GL_n({\rm\bf R})$. Its complexified Lie algebra ${\mathfrak {gl}}_n$ consists of all complex $n\times n$-matrices, and ${\mathfrak {gl}}_{n,0}$ consists of the (real) subalgebra of real $n\times n$ matrices. The Lie bracket is the usual commutator bracket. A maximal compact subgroup in $\GL_n({\rm\bf R})$ is the group $\Oo(n)$ of orthogonal matrices, i.e.\ real matrices $A$ satisfying $A\cdot A^t={\bf1}_n$. Then $\Oo(n)$ acts on ${\mathfrak {gl}}_n$ and ${\mathfrak {gl}}_{n,0}$ via conjugation and $({\mathfrak {gl}}_n,\Oo(n))$ is the reductive pair corresponding to $\GL_n({\rm\bf R})$. Here $\theta:{\mathfrak {gl}}_n\to{\mathfrak {gl}}_n$ is the map $g\mapsto -g^t$. If we are interested in the connected group $\GL_n({\rm\bf R})^0$ we obtain correspondingly $({\mathfrak {gl}}_n,\SO(n))$. Similarly the group $\SL_n({\rm\bf R})$ corresponds to the pair $({\mathfrak {sl}}_n,\SO(n))$, where ${\mathfrak {sl}}_n$ denotes all complex $n\times n$-matrices of trace $0$. \subsection{Modules for pairs} Reductive pairs are a motivation for the following more general notion of {\em pair}. A pair $({\mathfrak {a}},B)$ consists of a finite-dimensional complex Lie algebra ${\mathfrak {a}}$, a compact Lie group $B$, a monomorphism of complex Lie algebras $$ {\rm\bf C}\otimes_{\rm\bf R}{\rm Lie}(B^0)=:{\mathfrak {b}}\;\to\;{\mathfrak {a}}, $$ and an extension of the adjoint action of $B$ on ${\mathfrak {b}}$ to all of ${\mathfrak {a}}$, whose differential is the adjoint action of ${\mathfrak {b}}$ on ${\mathfrak {a}}$.
Then an $({\mathfrak {a}},B)$-module is a complex vector space $X$ with actions of both ${\mathfrak {a}}$ and $B$ subject to the following conditions: \begin{itemize} \item[($H_1$)] The actions of ${\mathfrak {a}}$ and $B$ on $X$ are compatible: $$ \forall a\in{\mathfrak {a}},\;b\in B,\;x\in X:\;\;\;b\cdot a\cdot b^{-1}\cdot x\;=\;[{\rm Ad}(b)a]\cdot x. $$ \item[($H_2$)] $X$ is {\em locally $B$-finite}: $$ \forall x\in X:\;\;\;\dim_{\rm\bf C}\langle B\cdot x\rangle_{\rm\bf C}\;<\;\infty $$ and with respect to the (unique) natural topology on $\langle B\cdot x\rangle_{\rm\bf C}$ the representation $$ B\to\Aut_{\rm\bf C}(\langle B\cdot x\rangle_{\rm\bf C}) $$ is continuous. \item[($H_3$)] The differential of the action of $B$ is the action of ${\mathfrak {b}}\subseteq{\mathfrak {a}}$ on $X$, i.e.\ for ${\mathfrak {b}}_0:={\rm Lie}(B^0)$, $$ \forall b\in{\mathfrak {b}}_0,\; x\in X:\;\;\;b\cdot x\;=\;\left[\frac{d}{dt}\,\exp(tb)\cdot x\right]_{t=0}. $$ \end{itemize} In this generality the definition is due to Lepowsky. We remark that ($H_3$) is meaningful, as axiom ($H_2$) implies that the action of $B$ is smooth on the subrepresentation generated by $x\in X$. A map $X\to Y$ of $({\mathfrak {a}},B)$-modules is a ${\rm\bf C}$-linear map which is compatible with the actions of ${\mathfrak {a}}$ and $B$ in the obvious way. In particular we obtain the category $\mathcal C({\mathfrak {a}},B)$ of all $({\mathfrak {a}},B)$-modules. An important consequence of the local $B$-finiteness is \begin{proposition}\label{prop:kdecomp} Let $B$ be any compact Lie group with complexified Lie algebra ${\mathfrak {b}}$. Then every irreducible $({\mathfrak {b}},B)$-module $Y$ is finite-dimensional and every $({\mathfrak {b}},B)$-module $X$ is of the form $$ X\;\cong\;\bigoplus_{Y\in\widehat{B}} X_Y, $$ where $Y$ runs through the irreducibles and $X_Y$ denotes the $Y$-isotypic subspace in $X$.
\end{proposition} \begin{corollary} The categories $\mathcal C_{\rm fd}({\mathfrak {b}},B)$ and $\mathcal C_{\rm fd}(B)$ of finite-dimensional $({\mathfrak {b}},B)$-modules resp.\ finite-dimensional continuous $B$-representations are naturally equivalent. \end{corollary} The same remains true for reductive pairs. \begin{proposition} Let $({\mathfrak {g}},K)$ be a reductive pair and $G$ the corresponding real reductive Lie group. Then the categories of finite-dimensional $({\mathfrak {g}},K)$-modules and finite-dimensional continuous $G$-representations are equivalent. \end{proposition} \begin{proof} The hard part is to show that every continuous representation of $G$ is smooth. This follows from the fact that every continuous group homomorphism of Lie groups is smooth. Then we apply this to the continuous group homomorphism $G\to \GL(V)$ corresponding to our representation. The finite-dimensionality of $V$ is crucial, as it guarantees that $\GL(V)$ is a Lie group. Therefore $V$ is also a $({\mathfrak {g}},K)$-module. Conversely, departing from a $({\mathfrak {g}},K)$-module $V$, we would like to lift the representation $\rho:K\to\GL(V)$ (uniquely) to $G$. That this is indeed possible follows from \eqref{eq:Gexp}, which essentially tells us that $\pi_0(G)=\pi_0(K)$ and $\pi_1(G)=\pi_1(K)$, i.e.\ all topological obstructions are already taken care of by $K$. More explicitly we may define $$ \rho_V(\exp(p)\cdot k)\;:=\;\exp(\rho(p))\cdot\rho(k), $$ for $p\in{\mathfrak {p}}_0$ and $k\in K$. This is the unique extension to $G$. Again the finite-dimensionality is crucial, as it guarantees the existence of $\exp(\rho(p))\in\GL(V)$.
\end{proof} \subsection{Internal constructions} For any two $({\mathfrak {a}},B)$-modules $X$ and $Y$ the tensor product $X\otimes_{\rm\bf C} Y$ acquires a natural action of the pair $({\mathfrak {a}}\times{\mathfrak {a}},B\times B)$, which is explicitly given by $$ (a_1,a_2)\cdot (x\otimes y)\;=\;(a_1\cdot x)\otimes y\;+\;x\otimes(a_2\cdot y) $$ for $a_1,a_2\in{\mathfrak {a}}$ and $x\in X$, $y\in Y$. For $b_1,b_2\in B$ we have analogously $$ (b_1,b_2)\cdot (x\otimes y)\;=\;(b_1\cdot x)\otimes (b_2\cdot y). $$ Then the pullback of this action along the diagonal map $$ \Delta:\;\;\;({\mathfrak {a}},B)\;\to\;({\mathfrak {a}}\times{\mathfrak {a}},B\times B) $$ $$ (a,b)\;\mapsto\;((a,a),(b,b)) $$ turns $X\otimes_{\rm\bf C} Y$ into an $({\mathfrak {a}},B)$-module. Similarly we may consider the space $\Hom_{\rm\bf C}(X,Y)$ of linear maps $f:X\to Y$. This acquires an action of $({\mathfrak {a}}\times{\mathfrak {a}},B\times B)$ which is given by $$ [(a_1,a_2)\cdot f](x)\;=\;a_2\cdot f(x)\;-\;f(a_1\cdot x) $$ and similarly $$ [(b_1,b_2)\cdot f](x)\;=\;b_2\cdot f(b_1^{-1}\cdot x). $$ However in general this action is not locally $B\times B$-finite. Therefore we are obliged to pass to the subspace $$ \Hom_{\rm\bf C}(X,Y)_{B\times B} $$ of locally $B\times B$-finite vectors. This then is an $({\mathfrak {a}}\times{\mathfrak {a}},B\times B)$-module. In order to obtain an $({\mathfrak {a}},B)$-module, we may again consider the pullback along the diagonal $\Delta$. Yet in this case there is a subtlety, as in general we obtain the desired $({\mathfrak {a}},B)$-module only as the $B$-finite subspace $$ \Hom_{\rm\bf C}(X,Y)_{B}\;=\;\Hom_{\rm\bf C}(X,Y)_{\Delta(B)} $$ of $\Hom_{\rm\bf C}(X,Y)$, as the space of locally $\Delta(B)$-finite vectors is in general strictly bigger than the subspace of $B\times B$-finite vectors. However if $X$ is finite-dimensional then all these spaces agree with $\Hom_{\rm\bf C}(X,Y)$.
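A minimal example of this discrepancy (assuming $X$ has infinitely many non-vanishing isotypic components $X_Y$): consider the identity map ${\rm id}_X\in\Hom_{\rm\bf C}(X,X)$. It is fixed by $\Delta(B)$, hence locally $\Delta(B)$-finite, yet

```latex
[(b_1,b_2)\cdot {\rm id}_X](x) \;=\; b_2\cdot b_1^{-1}\cdot x,
% so the B x B-orbit of id_X consists of the operators x -> b.x, whose span
% is infinite-dimensional by the Peter-Weyl Theorem once infinitely many
% isotypic components X_Y are non-zero,
```

so that ${\rm id}_X\in\Hom_{\rm\bf C}(X,X)_{\Delta(B)}$ but ${\rm id}_X\notin\Hom_{\rm\bf C}(X,X)_{B\times B}$.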
We define the {\em dual} of $X$ as $$ X^\vee\;:=\;\Hom_{\rm\bf C}(X,{\bf1})_B, $$ where ${\bf1}$ denotes the trivial $({\mathfrak {a}},B)$-module which is isomorphic to ${\rm\bf C}$ as a vector space. As a $B$-module we may think of $X^\vee$ as the direct sum of the locally $B$-finite duals $(X_Y)^\vee$ of the isotypic components $X_Y$ in the sense of Proposition \ref{prop:kdecomp}. In particular if all $X_Y$ are finite-dimensional, then $X$ is {\em reflexive}, i.e.\ the canonical bidual map $X\to X^{\vee\vee}$ is an {\em isomorphism} in this case. We have a natural monomorphism \begin{equation} \psi_{X,Y}:\;\;\;X^\vee\otimes_{\rm\bf C} Y\;\to\;\Hom_{\rm\bf C}(X,Y)_B, \label{eq:tensorhom} \end{equation} $$ \xi\otimes y\;\;\mapsto\;\;[x\;\mapsto\;\xi(x)\cdot y] $$ of $({\mathfrak {a}},B)$-modules. This is an isomorphism whenever $X$ is reflexive. \subsection{The Harish-Chandra map and infinitesimal characters} We write $Z({\mathfrak {g}})$ for the center of the universal enveloping algebra $U({\mathfrak {g}})$ of ${\mathfrak {g}}$. We say that a $({\mathfrak {g}},K)$-module $X$ has an {\em infinitesimal character}, if $Z({\mathfrak {g}})$ acts via scalars in $X$. Then the character $$ \chi:\;\;\;Z({\mathfrak {g}})\to{\rm\bf C} $$ defined by this action is called the {\em infinitesimal character} of $X$. It is characterized by the identity $$ z\cdot x\;=\;\chi(z) x $$ for all $z\in Z({\mathfrak {g}})$ and all $x\in X$. \begin{proposition}[Dixmier]\label{prop:dixmier} For any irreducible $({\mathfrak {g}},K)$-module $X$ which remains irreducible as a $U({\mathfrak {g}})$-module we have $$ \End_{{\mathfrak {g}},K}(X)={\rm\bf C}\cdot{\rm id}_X. $$ In particular such an $X$ has an infinitesimal character. \end{proposition} Proposition \ref{prop:dixmier} is a generalization of Schur's Lemma. For a proof see \cite{dixmier1963}, and also \cite[Proposition 2.6.8]{book_dixmier1977} or \cite[Proposition 4.87]{book_knappvogan1995}. 
The $U({\mathfrak {g}})$-irreducibility is automatically satisfied whenever $K$ is connected and $X$ is irreducible as a $({\mathfrak {g}},K)$-module. Fix a Borel subalgebra ${\mathfrak {b}}\subseteq{\mathfrak {g}}$, with Levi decomposition $$ {\mathfrak {b}}\;=\;{\mathfrak {h}}+{\mathfrak {u}}. $$ Then the Poincar\'e-Birkhoff-Witt Theorem tells us that we have a direct sum decomposition $$ U({\mathfrak {g}})\;\;=\;\;U({\mathfrak {h}})\;\oplus\;({\mathfrak {u}}^- U({\mathfrak {g}})+U({\mathfrak {g}}){\mathfrak {u}}), $$ and we denote the projection onto the first summand by $p_{\mathfrak {u}}$. We write $$ \rho({\mathfrak {u}})\;:=\;\frac{1}{2}\sum_{\alpha\in\Delta({\mathfrak {u}},{\mathfrak {h}})}\alpha\;\in\;{\mathfrak {h}}^*. $$ The map $$ h\;\mapsto\;h-[\rho({\mathfrak {u}})](h)\cdot 1_{U({\mathfrak {h}})} $$ for $h\in{\mathfrak {h}}$ turns out to be a homomorphism of Lie algebras, and therefore extends to an algebra homomorphism $$ \rho_{\mathfrak {u}}:\;\;\;U({\mathfrak {h}})\to U({\mathfrak {h}}) $$ by universality. Finally we set $$ \gamma_{\mathfrak {u}}\;:=\;\rho_{\mathfrak {u}}\circ p_{\mathfrak {u}}:\;\;\;U({\mathfrak {g}})\to U({\mathfrak {h}}). $$ \begin{theorem}[Harish-Chandra]\label{thm:harishchandraisomorphism} The map $\gamma_{\mathfrak {u}}$ induces an algebra isomorphism $$ \gamma:\;\;\;Z({\mathfrak {g}})\to U({\mathfrak {h}})^{W({\mathfrak {g}},{\mathfrak {h}})}, $$ which only depends on ${\mathfrak {h}}$, i.e.\ is independent of the choice of ${\mathfrak {u}}$. \end{theorem} The map $\gamma$ is called the {\em Harish-Chandra isomorphism} and enables us to describe infinitesimal characters via $W({\mathfrak {g}},{\mathfrak {h}})$-orbits in ${\mathfrak {h}}^*$.
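As an illustration of the Harish-Chandra map, consider the classical example ${\mathfrak {g}}={\mathfrak {s}}l_2$ with standard basis $e,f,h$ and the Borel subalgebra ${\mathfrak {b}}={\rm\bf C} h+{\rm\bf C} e$, so that ${\mathfrak {u}}={\rm\bf C} e$ and $[\rho({\mathfrak {u}})](h)=1$. The center $Z({\mathfrak {g}})$ is the polynomial ring in the Casimir element $$ \Omega\;=\;ef+fe+\tfrac{1}{2}h^2\;=\;2fe+h+\tfrac{1}{2}h^2. $$ Since $fe\in{\mathfrak {u}}^- U({\mathfrak {g}})$ we obtain $p_{\mathfrak {u}}(\Omega)=h+\tfrac{1}{2}h^2$, and applying the shift $h\mapsto h-1$ yields $$ \gamma(\Omega)\;=\;(h-1)+\tfrac{1}{2}(h-1)^2\;=\;\tfrac{1}{2}(h^2-1), $$ which is indeed invariant under the Weyl group action $h\mapsto -h$.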
It turns out that the correspondence $$ W({\mathfrak {g}},{\mathfrak {h}})\backslash{}{\mathfrak {h}}^*\;\to\;\Hom_{\rm alg}(Z({\mathfrak {g}}),{\rm\bf C}), $$ $$ W({\mathfrak {g}},{\mathfrak {h}})\lambda\;\mapsto\;\chi_\lambda:=\lambda\circ\gamma $$ is bijective, where we implicitly used the universal property of $U({\mathfrak {h}})$ on the right hand side. Via this correspondence we may say that a $({\mathfrak {g}},K)$-module $X$ has infinitesimal character $\lambda\in{\mathfrak {h}}^*$, if its infinitesimal character coincides with $\chi_\lambda$. Infinitesimal characters are strong invariants. An easy exercise shows that the infinitesimal character of an irreducible finite-dimensional ${\mathfrak {g}}$-module of highest weight $\lambda$ is $\chi_{\lambda+\rho({\mathfrak {u}})}$. Consequently two irreducible finite-dimensional ${\mathfrak {g}}$-modules are isomorphic if and only if their infinitesimal characters coincide. In general we have the fundamental \begin{theorem}[Harish-Chandra]\label{thm:harishchandrafinite} Assume that $K$ is connected. For any fixed $\lambda\in{\mathfrak {h}}^*$ there are only finitely many isomorphism classes of irreducible $({\mathfrak {g}},K)$-modules with infinitesimal character $\lambda$. \end{theorem} The classical proof of this theorem relies on Harish-Chandra's global characters. By algebraic methods one can show that, for any fixed $\lambda$, there is a finite set of $K$-types such that each irreducible $({\mathfrak {g}},K)$-module with infinitesimal character $\lambda$ must contain at least one of them, cf.\ \cite[Theorem 7.204]{book_knappvogan1995}. Then David Vogan's minimal $K$-type theory concludes the proof. We see in Theorem \ref{thm:harishchandrafinite} that the condition on $X$ of having an infinitesimal character is very strict. We relax it as follows. We say that a $({\mathfrak {g}},K)$-module $X$ is {\em $Z({\mathfrak {g}})$-finite}, if the annihilator of $X$ in $Z({\mathfrak {g}})$ is of finite codimension.
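To illustrate that $Z({\mathfrak {g}})$-finiteness is a genuine relaxation, recall that for ${\mathfrak {g}}={\mathfrak {s}}l_2$ the center $Z({\mathfrak {g}})={\rm\bf C}[\Omega]$ is a polynomial ring in the Casimir element $\Omega=ef+fe+\tfrac{1}{2}h^2$. The module $X=V_0\oplus V_2$, the direct sum of the trivial and the adjoint representation, has no infinitesimal character, as $\Omega$ acts by $0$ on $V_0$ and by $4$ on $V_2$. Yet the annihilator of $X$ in $Z({\mathfrak {g}})$ is the ideal generated by $\Omega(\Omega-4)$, which is of codimension $2$, so $X$ is $Z({\mathfrak {g}})$-finite.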
Any $Z({\mathfrak {g}})$-finite module may be decomposed into a direct sum of its $Z({\mathfrak {g}})$-primary components, which all are $({\mathfrak {g}},K)$-submodules. \subsection{Composition factors and multiplicities} Fix a reductive pair $({\mathfrak {g}},K)$ and a $({\mathfrak {g}},K)$-module $X$. We say that $X$ has an irreducible $({\mathfrak {g}},K)$-module $Y$ as a {\em composition factor} if there are submodules $X_0\subseteq X_1\subseteq X$ such that $X_1/X_0\cong Y$. We write $S(X)$ for the set of submodules of $X$. It comes with a natural preorder given by set-theoretic inclusion. The {\em multiplicity} of $Y$ in $X$ is the supremum $m_Y(X)$ of the cardinalities of all totally ordered sets $(I,\leq)$ with the property that there exist injective order-preserving maps $a:I\to S(X)$ and $b:I\to S(X)$ such that for any $i\in I$ we have $a(i)\subseteq b(i)$ and $b(i)/a(i)\cong Y$. The multiplicity of $Y$ in $X$ is finite if and only if the set $\mu_Y(X)$ is bounded, where $\mu_Y(X)$ denotes the set of natural numbers $m$ with the property that there exist a natural number $N$ and submodules $$ X_0\subseteq X_1\subseteq\cdots\subseteq X_N\subseteq X $$ such that $X_{i+1}/X_i\cong Y$ for $m$ distinct indices $0\leq i<N$. If this is the case, then $m_Y(X)=\max\mu_Y(X)$. Suppose we are given a short exact sequence $$ 0\to A\to B\to C\to 0 $$ of $({\mathfrak {g}},K)$-modules. Then \begin{equation} m_Y(A)+m_Y(C)\;=\;m_Y(B), \label{eq:multiplicityaddition} \end{equation} where \lq$+$\rq{} denotes addition of cardinal numbers. We say that two $({\mathfrak {g}},K)$-modules $X$ and $X'$ have the same {\em semi-simplification} if for all irreducible $Y$ we have $m_Y(X)=m_{Y}(X')$. Proposition \ref{prop:kdecomp} tells us that if ${\mathfrak {g}}={\mathfrak {k}}$ then the isomorphism class of $X$ depends only on its semi-simplification.
However we emphasize that in general a non-zero $({\mathfrak {g}},K)$-module $X$ may possess no composition factor at all, and also non-trivial extension classes between irreducibles may exist. Consequently semi-simplification is far from being a faithful operation. \subsection{Admissible and finite length modules} We call $X$ {\em admissible} if, as a $K$-module, all multiplicities of irreducibles in $X$ are finite. For an admissible $X$ the multiplicities $m_Y(X)$ are necessarily finite for any irreducible $({\mathfrak {g}},K)$-module $Y$. We say that $X$ is of {\em finite length} if $X$ has a finite composition series. \begin{proposition}\label{prop:finitelength} Let $X$ be a $({\mathfrak {g}},K)$-module. The following statements are equivalent: \begin{itemize} \item[(i)] $X$ is of finite length. \item[(ii)] $X$ is admissible and finitely generated. \item[(iii)] $X$ is admissible and $Z({\mathfrak {g}})$-finite. \end{itemize} \end{proposition} \begin{proof} If $X$ is an irreducible $({\mathfrak {g}},K)$-module, then it is admissible \cite{lepowsky1973}, and therefore so is any module of finite length. As a module of finite length is also finitely generated, this shows that (i) implies (ii). By Dixmier's Proposition \ref{prop:dixmier}, for connected $K$ each composition factor of $X$ has an infinitesimal character; hence each factor is annihilated by an ideal of finite codimension in $Z({\mathfrak {g}})$, and the product of these ideals shows that (i) implies (iii). The implication (ii) $\Longrightarrow$ (iii) is standard. The remaining implication (iii) $\Longrightarrow$ (i) may be deduced from Theorem \ref{thm:harishchandrafinite}, using that $U({\mathfrak {g}})$ is noetherian. See the proof of Corollary 7.207 in \cite{book_knappvogan1995} for example. \end{proof} \subsection{Discretely decomposable modules} Following Kobayashi \cite[Definition 1.1]{kobayashi1997}, we say that $X$ is {\em discretely decomposable} if $X$ is a union of finite length modules, and a discretely decomposable $X$ is {\em discretely decomposable with finite multiplicities} if all $m_Y(X)$ are finite.
By Proposition \ref{prop:kdecomp} an admissible $X$ is discretely decomposable with finite multiplicities as a $({\mathfrak {k}},K)$-module. In particular the multiplicity of any irreducible $({\mathfrak {g}},K)$-module $Y$ in $X$ is finite. We remark however that $X$ need not be discretely decomposable. In the other direction we have the following criterion. \begin{proposition}[{Kobayashi, \cite[Lemma 1.5]{kobayashi1997}}] Let $({\mathfrak {h}},L)\to({\mathfrak {g}},K)$ be an inclusion of reductive pairs. For any irreducible $({\mathfrak {g}},K)$-module $X$ the following are equivalent: \begin{itemize} \item[(i)] $X$ is a discretely decomposable $({\mathfrak {h}},L)$-module. \item[(ii)] There exists a finite length $({\mathfrak {h}},L)$-module $Z$ and a non-zero $({\mathfrak {h}},L)$-map $Z\to X$. \item[(iii)] There exists an irreducible $({\mathfrak {h}},L)$-module $Y$ and a non-zero $({\mathfrak {h}},L)$-map $Y\to X$. \end{itemize} \end{proposition} \subsection{Categories of $({\mathfrak {g}},K)$-modules} In order to make our character theory work, we need essentially small full abelian subcategories of $\mathcal C({\mathfrak {g}},K)$ (for the notion of an {\em abelian category} we refer to section \ref{sec:grothendieckgroups} below). The properties from the previous section all define certain abelian subcategories, some of which provide nice setups for our theory. For this purpose we write ${\mathcal C}_{\rm a}({\mathfrak {g}},K)$ resp.\ ${\mathcal C}_{\rm d}({\mathfrak {g}},K)$ resp.\ ${\mathcal C}_{\rm df}({\mathfrak {g}},K)$ resp.\ ${\mathcal C}_{\rm zf}({\mathfrak {g}},K)$ resp.\ ${\mathcal C}_{\rm fl}({\mathfrak {g}},K)$ resp.\ ${\mathcal C}_{\rm fd}({\mathfrak {g}},K)$ for the categories of admissible resp.\ discretely decomposable resp.\ discretely decomposable with finite multiplicities resp.\ $Z({\mathfrak {g}})$-finite resp.\ finite length resp.\ finite-dimensional $({\mathfrak {g}},K)$-modules.
They satisfy the inclusion relations $$ {\mathcal C}({\mathfrak {g}},K)\;\supset\; {\mathcal C}_{\rm d}({\mathfrak {g}},K)\;\supset\; {\mathcal C}_{\rm df}({\mathfrak {g}},K)\;\supset\; {\mathcal C}_{\rm fl}({\mathfrak {g}},K)\;\supset\; {\mathcal C}_{\rm fd}({\mathfrak {g}},K), $$ $$ {\mathcal C}({\mathfrak {g}},K)\;\supset\; {\mathcal C}_{\rm zf}({\mathfrak {g}},K)\;\supset\; {\mathcal C}_{\rm fl}({\mathfrak {g}},K)\;\supset\; {\mathcal C}_{\rm fd}({\mathfrak {g}},K), $$ and $$ {\mathcal C}({\mathfrak {g}},K)\;\supset\; {\mathcal C}_{\rm a}({\mathfrak {g}},K)\;\supset\; {\mathcal C}_{\rm fl}({\mathfrak {g}},K)\;\supset\; {\mathcal C}_{\rm fd}({\mathfrak {g}},K). $$ Proposition \ref{prop:finitelength} tells us that $$ {\mathcal C}_{\rm zf}({\mathfrak {g}},K)\;\cap\; {\mathcal C}_{\rm a}({\mathfrak {g}},K)\;=\; {\mathcal C}_{\rm fl}({\mathfrak {g}},K). $$ The above categories are in general not closed under tensor products. For our purposes we have the important \begin{proposition}\label{prop:tensorstable} Fix $?\in\{\rm d,\rm df,\rm zf, \rm a,\rm fl,\rm fd\}$. For any $Z\in{\mathcal C}_{\rm fd}({\mathfrak {g}},K)$ and any $X\in{\mathcal C}_{?}({\mathfrak {g}},K)$ we have $$ Z\otimes_{\rm\bf C} X\;\in\;{\mathcal C}_{?}({\mathfrak {g}},K). $$ \end{proposition} \begin{proof} We only sketch a proof for $?={\rm df}$ and otherwise give references to the literature. The case $?=\rm fd$ is clear. The crucial case is $?={\rm zf}$, as it allows one to invoke Proposition \ref{prop:finitelength}. A treatment of this case may be found in \cite[Proposition 7.203]{book_knappvogan1995}. The case $?=\rm fl$ is due to Kostant \cite{kostant1975}. It can be deduced from the case $?={\rm zf}$ using Proposition \ref{prop:finitelength}, and \cite[Corollary 4.5.6]{book_vogan1981} also contains a proof.
The case $?=\rm d$ follows from the fact that the endofunctor $Z\otimes_{\rm\bf C} -$ of ${\mathcal C}({\mathfrak {g}},K)$ commutes with direct limits, and was first observed by Kobayashi \cite[Lemma 1.4]{kobayashi1997}. We assume for a moment that $K$ is connected. Then the case $?={\rm df}$ follows from Kostant's observation that tensoring with $Z$ \lq{}shifts\rq{} infinitesimal characters only in a finite manner, namely by weights occurring in $Z$, cf.\ loc.\ cit.\ and also \cite[Theorem 7.133]{book_knappvogan1995}, \cite[Lemma 4.5.4 and Corollary 4.5.6]{book_vogan1981}. In particular, if $Y$ is an irreducible $({\mathfrak {g}},K)$-module, then there are (up to isomorphy) only finitely many irreducible $Y'$ with $m_Y(Z\otimes_{\rm\bf C} Y')\neq 0$ due to Harish-Chandra's Theorem \ref{thm:harishchandrafinite}. Consequently, if we write $$ X=\bigcup\limits_{i\in I}X_i $$ with each $X_i$, $i \in I$ of finite length, then as the multiplicity of each $Y'$ in $X$ is finite, there are only finitely many $i \in I$ with $m_Y(Z\otimes X_i)\neq 0$, which reduces us to the case $?={\rm fl}$. The case of non-connected $K$ then follows from the observations that each finite length $({\mathfrak {g}},K)$-module is of finite length as a $({\mathfrak {g}},K^0)$-module, and that all multiplicities for $({\mathfrak {g}},K)$ are finite if and only if the same holds for $({\mathfrak {g}},K^0)$. \end{proof} \section{$K$-groups and their localizations} \subsection{Grothendieck groups}\label{sec:grothendieckgroups} Consider an {\em abelian category} $\mathcal A$, i.e.\ a category in which all morphism-sets are abelian groups and the composition of morphisms is ${\rm\bf Z}$-bilinear, finite direct sums as well as kernels and cokernels exist (in particular $\mathcal A$ is {\em additive}), and every monomorphism is a kernel and every epimorphism is a cokernel. Evidently all the categories ${\mathcal C}_?({\mathfrak {g}},K)$ are abelian.
In an abelian category we have the notion of {\em exact sequence}: A sequence of morphisms \begin{equation} \begin{CD} X@>\alpha>> Y@>\beta>> Z \end{CD} \label{eq:xyz} \end{equation} is said to be {\em exact at $Y$} if the kernel of $\beta$ agrees with the image of $\alpha$. The notion of image has an abstract definition in an abelian category: it is the kernel of the cokernel. A {\em short exact sequence} is a sequence exact at $Y$ as above where additionally $\alpha$ is a monomorphism and $\beta$ is an epimorphism. We call $\mathcal A$ {\em essentially small} if it is equivalent to a category whose class of objects is a set. For an essentially small abelian category $\mathcal A$ we may consider the set $A$ of isomorphism classes of objects in $\mathcal A$. Then in the free abelian group ${\rm\bf Z}[A]$ over $A$ the collection of elements $$ Y-X-Z\;\in\;{\rm\bf Z}[A] $$ which occur in a short exact sequence \eqref{eq:xyz} generate a subgroup $R\leq{\rm\bf Z}[A]$, and we define the {\em Grothendieck group} of $\mathcal A$ as the abelian group $$ K(\mathcal A)\;:=\;{\rm\bf Z}[A]/R. $$ It comes with a natural map $$ [\cdot]:\;\;\;\mathcal A\;\to\;K(\mathcal A), $$ which is induced by sending an object $X$ first to its isomorphism class in $A$, then to its canonical image in ${\rm\bf Z}[A]$, and finally to its image modulo $R$. The map $[\cdot]$ is easily seen to be surjective and to satisfy the following universal property: For any abelian group $B$ and any {\em additive} map $f:\mathcal A\to B$, i.e.\ a map for which $$ f(X)+f(Z)\;=\;f(Y) $$ for any short exact sequence \eqref{eq:xyz}, there is a unique homomorphism $f':K(\mathcal A)\to B$ with $f=f'\circ [\cdot]$. Due to this universal property, all characters considered in section \ref{sec:whatisacharacter} factor over (i.e.\ extend to) the corresponding Grothendieck groups.
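The simplest example is the category of finite-dimensional ${\rm\bf C}$-vector spaces: the dimension $X\mapsto\dim_{\rm\bf C}X$ is additive, every additive map factors over it, and it induces an isomorphism $$ K(\mathcal A)\;\cong\;{\rm\bf Z} $$ sending $[X]$ to $\dim_{\rm\bf C}X$.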
\subsection{Grothendieck groups of $({\mathfrak {g}},K)$-modules} Unfortunately the categories ${\mathcal C}_?({\mathfrak {g}},K)$ for $?\in\{-,\rm zf,\rm d\}$ are {\em not} essentially small, so we cannot consider their Grothendieck groups directly. As there are only countably many isomorphism classes of irreducible finite-dimensional representations of $({\mathfrak {g}},K)$ for semisimple ${\mathfrak {g}}$, and $\#{\rm\bf C}$ many for a non-compact torus, we see that for an arbitrary reductive pair $({\mathfrak {g}},K)$ the category ${\mathcal C}_{\rm fd}({\mathfrak {g}},K)$ is essentially small, and $K({\mathcal C}_{\rm fd}({\mathfrak {g}},K))$ is the free abelian group generated by the isomorphism classes of irreducible finite-dimensional modules. There are at most countably many irreducible representations of $K$ up to isomorphy, and therefore ${\mathcal C}_{\rm a}({\mathfrak {g}},K)$ is essentially small. In this case there is no simple characterization of its Grothendieck group. However it comes with a group homomorphism $$ {\rm res}_{{\mathfrak {k}},K}^{{\mathfrak {g}},K}:\;\;\; K({\mathcal C}_{\rm a}({\mathfrak {g}},K))\;\to\; K({\mathcal C}_{\rm a}({\mathfrak {k}},K)) $$ induced by restriction. If $K$ is infinite, the group on the right hand side is non-canonically isomorphic to the additive group of the ring of formal power series $ {\rm\bf Z}[[T]] $ in one indeterminate, by identifying each power of $T$ with an isomorphism class of an irreducible $K$-module. Theorem \ref{thm:harishchandrafinite} tells us that in the remaining cases $?\in\{\rm df,\rm fl\}$ the corresponding category again is essentially small. If we write $A_0\subseteq A$ for the set of isomorphism classes of irreducibles, then as in the previous example we have a canonical isomorphism $$ K({\mathcal C}_{\rm df}({\mathfrak {g}},K))\;\cong\; {\rm\bf Z}^{A_0}, $$ where the right hand side denotes the abelian group of all maps $A_0\to{\rm\bf Z}$. 
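For instance, for $K={\rm SO}(2)$ the irreducible $K$-modules are the characters $\chi_n:z\mapsto z^n$ with $n\in{\rm\bf Z}$, and an admissible $({\mathfrak {k}},K)$-module $X$ is determined, up to isomorphism, by its multiplicity function $n\mapsto m_n$. Choosing an enumeration $n_0,n_1,n_2,\dots$ of ${\rm\bf Z}$ identifies the class of $X$ in $K({\mathcal C}_{\rm a}({\mathfrak {k}},K))$ with the formal power series $\sum_{k\geq 0}m_{n_k}T^k\in{\rm\bf Z}[[T]]$.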
The inclusion of categories induces a canonical monomorphism $$ K({\mathcal C}_{\rm fl}({\mathfrak {g}},K))\;\to\; K({\mathcal C}_{\rm df}({\mathfrak {g}},K)) $$ and thereby a canonical isomorphism of abelian groups $$ K({\mathcal C}_{\rm fl}({\mathfrak {g}},K))\;\cong\; {\rm\bf Z}^{(A_0)}, $$ where this time the right hand side denotes the abelian group of maps $A_0\to{\rm\bf Z}$ with {\em finite support}, i.e.\ that are non-zero only for finitely many arguments. For readability we introduce the abbreviations $$ K_?({\mathfrak {g}},K)\;:=\;K({\mathcal C}_?({\mathfrak {g}},K)). $$ \subsection{The multiplicative structure} The tensor product in ${\mathcal C}({\mathfrak {g}},K)$ is exact, and consequently it descends to Grothendieck groups whenever it preserves the underlying essentially small category. However this is difficult to guarantee in most cases. The situation is considerably simpler if we consider finite-dimensional modules. The category ${\mathcal C}_{\rm fd}({\mathfrak {g}},K)$ is closed under tensor products, and therefore we get an induced multiplication map $$ \cdot:\;\;\;K_{\rm fd}({\mathfrak {g}},K)\times K_{\rm fd}({\mathfrak {g}},K)\;\to\;K_{\rm fd}({\mathfrak {g}},K), $$ $$ ([X],[Y])\;\mapsto\;[X\otimes_{\rm\bf C} Y]. $$ As the tensor product is associative, behaves well in exact sequences, and in particular is distributive with respect to direct sums, this turns $K_{\rm fd}({\mathfrak {g}},K)$ into a commutative ring with $1$, the latter being the class of the trivial representation. \begin{proposition}\label{prop:connecteddomain} For connected $K$ the ring $K_{\rm fd}({\mathfrak {g}},K)$ is an integral domain. \end{proposition} \begin{proof} We already know that the tensor product of non-zero modules is non-zero. We have to show that the same remains true when allowing formal differences of modules.
For this purpose we make use of the forgetful functor $\mathcal F$ from \eqref{eq:cartanf} from the first section which was associated to a Cartan subalgebra ${\mathfrak {l}}\subseteq{\mathfrak {g}}$. Then we may consider a corresponding Grothendieck group $K_{\rm fd}({\mathfrak {l}})$ for the category of finite-dimensional ${\mathfrak {l}}$-modules. The forgetful functor $\mathcal F$ induces a ring morphism $$ \mathcal F:\;\;\;K_{\rm fd}({\mathfrak {g}},K)\;\to\; K_{\rm fd}({\mathfrak {l}}), $$ sending $1$ to $1$. As already discussed in the first section, E.\ Cartan's result implies that this map is injective, and consequently we are reduced to the case of ${\mathfrak {l}}$, which is an abelian Lie algebra. Here all irreducibles are one-dimensional and correspond to characters $\lambda\in{\mathfrak {l}}^*$. The tensor product of two characters corresponds to their sum. Therefore we have a canonical isomorphism $$ K_{\rm fd}({\mathfrak {l}})\;\cong\; {\rm\bf Z}[{\mathfrak {l}}^*] $$ of rings, where the right hand side denotes the group algebra of ${\mathfrak {l}}^*$. The latter is obviously an integral domain, hence the claim follows. \end{proof} We remark that for non-connected $K$ the statement of Proposition \ref{prop:connecteddomain} is no longer true. Consider the group $K=\{\pm 1\}$. The classes of irreducible representations are represented by ${\bf 1}$ and ${\rm sgn}$. In the Grothendieck group of finite-dimensional $K$-representations we have $$ ([{\bf 1}]+[{\rm sgn}])\cdot([{\bf 1}]-[{\rm sgn}])\;=\; [{\bf 1}\otimes {\bf 1}]-[{\rm sgn}\otimes{\rm sgn}]\;=\; [{\bf 1}]-[{\bf 1}]\;=\;0. $$ If we write $F_0$ for the set of isomorphism classes of finite-dimensional $({\mathfrak {g}},K)$-modules, the canonical isomorphism $$ \iota:\;\;\;K_{\rm fd}({\mathfrak {g}},K)\;\cong\; {\rm\bf Z}^{(F_0)} $$ does not respect multiplication if we define the product on the right hand side component-wise.
The trick is that the tensor product induces a pairing $$ \cdot:\;\;\;F_0\times F_0\;\to\; {\rm\bf Z}^{(F_0)}, $$ which in turn induces a unique ${\rm\bf Z}$-bilinear map $$ \cdot:\;\;\;{\rm\bf Z}^{(F_0)}\times{\rm\bf Z}^{(F_0)}\;\to\;{\rm\bf Z}^{(F_0)}, $$ which turns ${\rm\bf Z}^{(F_0)}$ into a ring, and $\iota$ into a natural isomorphism of rings. Unless $F_0$ is finite we cannot extend this product to ${\rm\bf Z}^{F_0}$, as infinitely many pairs of elements of $F_0$ may contribute to the coefficient of a single one. Therefore there is no hope for larger categories to be stable under the tensor product. This already applies to ${\mathcal C}_{\rm a}({\mathfrak {k}},K)$. All we can hope for is to consider the Grothendieck groups of larger categories as modules over $K_{\rm fd}({\mathfrak {g}},K)$. This indeed works out in our previous examples thanks to Proposition \ref{prop:tensorstable}: For $?\in\{\rm a,\rm df,\rm fl,\rm fd\}$ the tensor product induces on $K_?({\mathfrak {g}},K)$ a $K_{\rm fd}({\mathfrak {g}},K)$-module structure. \subsection{Localization of Grothendieck groups} Assume we are given an element $$ 0\neq W\in K_{\rm fd}({\mathfrak {g}},K). $$ For connected $K$, the ring $K_{\rm fd}({\mathfrak {g}},K)$ is a domain by Proposition \ref{prop:connecteddomain}, and its quotient field $Q_{\rm fd}({\mathfrak {g}},K)$ comes with a monomorphism $$ i:\;\;\;K_{\rm fd}({\mathfrak {g}},K)\;\to\; Q_{\rm fd}({\mathfrak {g}},K), $$ and $i(W)$ becomes invertible. Therefore we may define the ring $$ K_{\rm fd}({\mathfrak {g}},K)[W^{-1}]\;\subseteq\;Q_{\rm fd}({\mathfrak {g}},K), $$ as the subring of the quotient field generated by the image of $i$ and $W^{-1}$. 
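As an example with connected $K$, consider the pair $({\mathfrak {s}}l_2,{\rm SU}(2))$. The Clebsch--Gordan rule $V_1\otimes_{\rm\bf C} V_m\cong V_{m+1}\oplus V_{m-1}$ shows that $K_{\rm fd}({\mathfrak {s}}l_2,{\rm SU}(2))$ is the polynomial ring ${\rm\bf Z}[x]$ in the class $x=[V_1]$ of the two-dimensional irreducible module. Its quotient field is ${\rm\bf Q}(x)$, and localizing at $W=x$ produces the ring ${\rm\bf Z}[x,x^{-1}]$ of Laurent polynomials.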
For non-connected $K$ we define $K_{\rm fd}({\mathfrak {g}},K)[W^{-1}]$ formally as the localization at $W$, i.e.\ as a set it is given by equivalence classes of pairs $(a,b)$, where $a\in K_{\rm fd}({\mathfrak {g}},K)$ and $b=W^k$ for some $k\geq 0$, and we say that $$ (a,b)\;\sim\;(a',b')\;\;\;:\Longleftrightarrow\;\;\;a\cdot b'\;=\;a'\cdot b. $$ Then the set of equivalence classes with respect to this equivalence relation defines the desired localized ring, which comes with a natural map $$ K_{\rm fd}({\mathfrak {g}},K)\;\to\;K_{\rm fd}({\mathfrak {g}},K)[W^{-1}],\;\;\; a\;\mapsto\;[(a,1)]_\sim, $$ which satisfies the usual universal property. For a $K_{\rm fd}({\mathfrak {g}},K)$-module $M$ we define its localization at $W$ as $$ M[W^{-1}]\;:=\;K_{\rm fd}({\mathfrak {g}},K)[W^{-1}]\otimes_{K_{\rm fd}({\mathfrak {g}},K)}M. $$ Localization is a non-faithful operation, as it kills all $W$-torsion. More precisely we have \begin{proposition}\label{prop:lockernel} Let $M_W\subseteq M$ be the sum of the kernels of the endomorphisms $W^k:M\to M$ given by multiplication with $W^k$, $k\geq 1$. Then the sequence $$ \begin{CD} 0@>>> M_W@>>> M@>>> M[W^{-1}] \end{CD} $$ is exact. \end{proposition} \begin{proof} We may identify the elements of the image of $M$ in $M[W^{-1}]$ with equivalence classes of elements of $M$ with respect to the equivalence relation $$ a\sim b\;\;\;:\Longleftrightarrow\;\;\;\exists k\geq 1:\;W^k\cdot a=W^k\cdot b. $$ On the one hand, by definition, for any $m\in M_W$ there is a $k\geq 1$ with $W^k\cdot m=0$, which implies $W^k\cdot m=W^k\cdot 0$, hence $m\sim 0$, and $M_W$ maps to $0$ in $M[W^{-1}]$. On the other hand, if $m\in M$ maps to $0$ in the localization, this means that there is a $k\geq 1$ with $W^k\cdot m=W^k\cdot 0=0$, hence $m\in M_W$.
\end{proof} \section{Parabolic pairs and cohomology} If ${\mathfrak {q}}={\mathfrak {l}}+{\mathfrak {u}}$ is a parabolic subalgebra of ${\mathfrak {g}}$, the functor $$ H^0({\mathfrak {u}};-):\;\;\;X\mapsto X^{\mathfrak {u}} $$ is very useful in the study of finite-dimensional representations. It maps ${\mathfrak {g}}$-modules to ${\mathfrak {l}}$-modules, and the latter are usually more easily understood, and therefore help in the study of the former. In order to make this construction work for $({\mathfrak {g}},K)$-modules, we need to make sure that ${\mathfrak {l}}$ is part of a reductive pair, which we want to correspond to a closed reductive subgroup $L$ of $G$, if the reductive Lie group $G$ corresponds to $({\mathfrak {g}},K)$. Therefore we need to restrict our attention to {\em germane} parabolic subalgebras, which are parabolics giving rise to parabolic pairs. The latter have reductive pairs as Levi factors. In this context $H^0({\mathfrak {u}};-)$ becomes a left exact functor ${\mathcal C}({\mathfrak {g}},K)\to{\mathcal C}({\mathfrak {l}},L\cap K)$, and its right derived functors give us for each $({\mathfrak {g}},K)$-module $X$ a collection of $({\mathfrak {l}},L\cap K)$-modules $H^q({\mathfrak {u}};X)$, $0\leq q\leq\dim{\mathfrak {u}}$. There is a standard complex computing this cohomology. This explicit description of the cohomology will enable us later to prove fundamental properties of our algebraic characters. For our applications to character theory we will introduce the notion of {\em constructible parabolic pairs}. These are constructed iteratively out of real and $\theta$-stable ones, and in this case $H^q({\mathfrak {u}};-)$ preserves many useful properties, in particular finite length. \subsection{Parabolic pairs} In this section we set up the essential formalism of parabolic pairs and their Levi decompositions. For details the reader may consult \cite[Chapter IV, Section 6]{book_knappvogan1995}. Again $({\mathfrak {g}},K)$ is a reductive pair.
Let ${\mathfrak {q}}\subseteq{\mathfrak {g}}$ be a parabolic subalgebra, and $G$ the corresponding reductive Lie group for the pair $({\mathfrak {g}},K)$. We say that ${\mathfrak {q}}$ is {\em germane}, if it possesses a Levi factor ${\mathfrak {l}}$ which is the complexification of a real $\theta$-stable subalgebra ${\mathfrak {l}}_0\subseteq{\mathfrak {g}}$. This is equivalent to saying that ${\mathfrak {l}}$ is the complexified Lie algebra of a $\theta$-stable closed subgroup of $G$. A canonical choice for $L$ is given by the intersection of the normalizers of ${\mathfrak {q}}$ and $\theta({\mathfrak {q}})$ in $G$. Then $({\mathfrak {l}},L\cap K)$ is again a reductive pair, corresponding to the reductive Lie group $L$, whose Lie algebra is ${\mathfrak {l}}$, and may be thought of as the Levi factor, which we call the {\em Levi pair} of the {\em parabolic pair} $({\mathfrak {q}},L\cap K)$. In the sequel we always assume our Levi factors and Cartan subalgebras to be $\theta$-stable, and parabolic subalgebras to be germane. If ${\mathfrak {q}}={\mathfrak {l}}+{\mathfrak {u}}$ is a Levi-decomposition, we always assume ${\mathfrak {l}}$ to be $\theta$-stable. Then $L$ and a fortiori $L\cap K$ normalize ${\mathfrak {u}}$, ${\mathfrak {u}}^-$, $\theta({\mathfrak {u}})$, $\theta({\mathfrak {u}}^-)$. Take for example the reductive pair $({\mathfrak {g}}l_n,\Oo(n))$. Then the subalgebra ${\mathfrak {b}}\subseteq{\mathfrak {g}}l_n$ consisting of all upper triangular matrices is a germane parabolic subalgebra, because it has a Levi factor consisting of all diagonal matrices and those are evidently defined over ${\rm\bf R}$. In this case it is even true that all of ${\mathfrak {b}}$ is defined over ${\rm\bf R}$, as it is the complexification of the algebra of real upper triangular matrices. Therefore $L=\GL_1({\rm\bf R})^n$ here, and $L\cap\Oo(n)=\{\pm1\}^n$. There is another important case.
Start with a maximal torus $T\subseteq\Oo(n)$, say with Lie algebra consisting of matrices of the shape $$ \begin{pmatrix} 0&t_1& & \\ -t_1&0& & \\ & & 0&t_2 & \\ & &-t_2&0 & \\ & & & &\ddots\\ \end{pmatrix}. $$ Then we may choose a complementary maximal abelian subspace in the space ${\mathfrak {p}}$ of symmetric matrices. Explicitly we choose the matrices of the form $$ \begin{pmatrix} a_1& 0& & \\ 0 &a_1& & \\ & & a_2&0 & \\ & & 0&a_2 & \\ & & & &\ddots\\ \end{pmatrix}, $$ where we have a single entry $a_{\frac{n+1}{2}}$ at position $(n,n)$ if $n$ is odd. Then together this gives us a Cartan subalgebra ${\mathfrak {l}}$ of ${\mathfrak {g}}l_n$ whose corresponding Lie group $L$ is isomorphic to $\GL_1({\rm\bf C})^{\frac{n-\delta}{2}}\times\GL_1({\rm\bf R})^\delta$, where $\delta=0$ if $n$ is even and $\delta=1$ if $n$ is odd. Therefore $L\cap\Oo(n)\cong U(1)^{\frac{n-\delta}{2}}\times\{\pm1\}^\delta$. Then there is a germane parabolic subalgebra ${\mathfrak {q}}\subseteq{\mathfrak {g}}l_n$ with Levi factor ${\mathfrak {l}}$. However ${\mathfrak {q}}$ is not defined over ${\rm\bf R}$. But contrary to ${\mathfrak {b}}$ above it turns out to be $\theta$-stable. ${\mathfrak {q}}$ is of particular importance, because it contains a Borel subalgebra of ${\mathfrak {s}}o_n$, and therefore turns out to be very useful in the study of restrictions from $\GL_n({\rm\bf R})$ to $\Oo(n)$. We see that although a germane parabolic subalgebra ${\mathfrak {q}}$ has a Levi factor which corresponds to a subgroup $L\subseteq G$, the nilpotent radical ${\mathfrak {n}}$ of ${\mathfrak {q}}$ need not be defined over ${\rm\bf R}$, i.e.\ in general there is no subgroup $N\subseteq G$ giving rise to ${\mathfrak {n}}$. Conceptually there are two extreme cases. We say that ${\mathfrak {q}}$ is {\em real} if ${\mathfrak {q}}$ is the complexification of ${\mathfrak {q}}_0:={\mathfrak {q}}\cap{\mathfrak {g}}_0$.
We say that ${\mathfrak {q}}$ is {\em $\theta$-stable} if $\theta({\mathfrak {q}})={\mathfrak {q}}$. \begin{proposition}\label{prop:parabolic} Let ${\mathfrak {h}}\subseteq{\mathfrak {g}}$ be a $\theta$-stable Cartan subalgebra. Then we find a germane Borel subalgebra ${\mathfrak {q}}$ containing ${\mathfrak {h}}$ with the property that there is a $\theta$-stable parabolic ${\mathfrak {q}}'\subseteq{\mathfrak {g}}$ with Levi decomposition ${\mathfrak {q}}'={\mathfrak {l}}'+{\mathfrak {u}}'$ and a real parabolic subalgebra ${\mathfrak {q}}''\subseteq{\mathfrak {l}}'$ with Levi decomposition ${\mathfrak {q}}''={\mathfrak {h}}+{\mathfrak {u}}''$ such that $$ {\mathfrak {q}}={\mathfrak {h}}+({\mathfrak {u}}''+{\mathfrak {u}}') $$ is a Levi decomposition of ${\mathfrak {q}}$. \end{proposition} \begin{proof} We write ${\mathfrak {h}}_0={\mathfrak {t}}_0+{\mathfrak {a}}_0$ where ${\mathfrak {t}}_0={\mathfrak {h}}_0\cap{\mathfrak {k}}$ and ${\mathfrak {a}}_0$ is the vector part respectively. Now for any finite number of elements $h_1,\dots,h_r\in i{\mathfrak {t}}_0$ all eigenvalues of $\ad(h_i)$ are real and we define ${\mathfrak {u}}_{(h_1,\dots,h_r)}$ as the sum of the simultaneous eigenspaces of $\ad(h_i)$ in ${\mathfrak {g}}$ for simultaneous positive eigenvalues, where positivity is defined lexicographically. Then we find $h_1,\dots,h_r$ which satisfy the following maximality condition: \begin{itemize} \item[(m)] $$ \dim{\mathfrak {u}}_{(h_1,\dots,h_r)} $$ is maximal among all choices of elements $h_1,\dots,h_r$. \end{itemize} Then (m) is equivalent to the identity $$ {\mathfrak {l}}'\;:=\;Z_{\mathfrak {g}}({\rm\bf C} h_1+\cdots+{\rm\bf C} h_r)\;=\;\bigcap_{i=1}^r\ker\ad(h_i). $$ This guarantees that with ${\mathfrak {u}}':={\mathfrak {u}}_{(h_1,\dots,h_r)}$ we obtain a $\theta$-stable germane parabolic subalgebra ${\mathfrak {q}}':={\mathfrak {l}}'+{\mathfrak {u}}'$.
Now choose a Borel subalgebra ${\mathfrak {q}}''\subseteq{\mathfrak {l}}'$ with Levi decomposition ${\mathfrak {q}}''={\mathfrak {h}}+{\mathfrak {u}}''$. We claim that ${\mathfrak {q}}''$ is real. This is the same as saying that for any $h'\in {\mathfrak {t}}_0+{\mathfrak {a}}_0$, its adjoint action on ${\mathfrak {u}}''$ has only real eigenvalues. So assume that this is not the case. We have the decomposition $h'=it_0+a_0$ with $t_0\in i{\mathfrak {t}}_0$ and $a_0\in{\mathfrak {a}}_0$. By our assumption $t_0\neq 0$ and $\ad(t_0)$ has a non-zero real eigenvalue as an endomorphism of ${\mathfrak {u}}''$, that we may assume to be positive. We write $E_0\subseteq{\mathfrak {u}}''$ for the corresponding eigenspace. As $E_0$ lies in the kernel of all $\ad(h_i)$ we have $$ E_0\;\subseteq\;{\mathfrak {u}}_{(h_1,\dots,h_r,t_0)}. $$ Therefore ${\mathfrak {u}}_{(h_1,\dots,h_r,t_0)}$ strictly contains ${\mathfrak {u}}'$. This contradicts (m), concluding the proof. \end{proof} Proposition \ref{prop:parabolic} tells us that all $\theta$-stable Cartan subalgebras are Levi factors of Borel subalgebras that are constructed out of real and $\theta$-stable ones. As this class is of particular importance to us, we introduce the following terminology. Let ${\mathfrak {q}}\subseteq{\mathfrak {g}}$ be any germane parabolic subalgebra. We say that ${\mathfrak {q}}$ is {\em constructible} if there is a chain of germane parabolic subalgebras $$ {\mathfrak {q}}={\mathfrak {q}}_r\;\subseteq\;{\mathfrak {q}}_{r-1}\;\subseteq\;\cdots\; \subseteq\;{\mathfrak {q}}_1\;\subseteq\;{\mathfrak {q}}_0={\mathfrak {g}} $$ with Levi decompositions $$ {\mathfrak {q}}_i\;=\;{\mathfrak {l}}_i+{\mathfrak {u}}_i $$ with Levi pairs $({\mathfrak {l}}_i,L_i\cap K)$ such that for all $0\leq i<r$ $$ {\mathfrak {l}}_i\cap {\mathfrak {q}}_{i+1}\;\subseteq\;{\mathfrak {l}}_i $$ is a real or $\theta$-stable parabolic in $({\mathfrak {l}}_i,L_i\cap K)$.
We say that the Levi pair $({\mathfrak {l}},L\cap K)$ of a constructible parabolic pair $({\mathfrak {q}},L\cap K)$ is {\em constructible}. An important corollary of Proposition \ref{prop:parabolic} is \begin{corollary}\label{cor:constructible} Every $\theta$-stable Cartan subpair $({\mathfrak {h}},H\cap K)\subseteq({\mathfrak {g}},K)$ is constructible, in particular the reductive Lie group $G$ corresponding to $({\mathfrak {g}},K)$ is covered by the conjugates of reductive subgroups $L\subseteq G$ whose corresponding reductive pairs $({\mathfrak {l}},L\cap K)$ are constructible. The $L$ may be chosen to be associated to Cartan pairs. \end{corollary} \subsection{Lie algebra cohomology} Fix a $({\mathfrak {g}},K)$-module $X$, and consider a parabolic subpair $({\mathfrak {q}},L\cap K)$ of $({\mathfrak {g}},K)$ with Levi decomposition ${\mathfrak {q}}={\mathfrak {l}}+{\mathfrak {u}}$. Consider the complex $$ C_{{\mathfrak {q}}}^q(X)\;:=\;\Hom_{\rm\bf C}(\bigwedge^q{\mathfrak {u}},X), $$ with differential $$ d:\;C_{{\mathfrak {q}}}^q(X)\to C_{{\mathfrak {q}}}^{q+1}(X), $$ $$ f\;\mapsto\;df $$ where $$ [df](u_0\wedge u_1\wedge\cdots\wedge u_q)\;:=\; \sum_{i=0}^q(-1)^i u_i\cdot f(u_0\wedge\cdots\wedge \widehat{u}_i\wedge\cdots\wedge u_q) $$ $$ \;+\; \sum_{i<j}^q(-1)^{i+j} f([u_i,u_j]\wedge u_0\wedge\cdots\wedge \widehat{u}_i\wedge\cdots\wedge\widehat{u}_j\wedge\cdots\wedge u_q). $$ It is easily seen that $d\circ d=0$, hence $(C_{{\mathfrak {q}}}^q(X),d)$ is a cochain complex. We write $Z^q(X)$ for the kernel of $d:C_{{\mathfrak {q}}}^q(X)\to C_{{\mathfrak {q}}}^{q+1}(X)$ and $B^q(X)$ for the image $d(C_{{\mathfrak {q}}}^{q-1}(X))$. The reductive pair $({\mathfrak {l}},L\cap K)$ acts on $C_{{\mathfrak {q}}}^q(X)$ and $d$ is equivariant for this action. As a result the cohomology $$ H^q({\mathfrak {u}}; X)\;:=\;Z^q(X)/B^q(X) $$ of this complex comes with a natural $({\mathfrak {l}},L\cap K)$-module structure.
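In the lowest degrees the differential takes a familiar form; the following unwinding of the definitions is a routine check, recorded here only to fix the conventions. A $0$-cochain is a vector $x\in X$ with $[dx](u)=u\cdot x$, so that $$ H^0({\mathfrak {u}};X)\;=\;X^{{\mathfrak {u}}}, $$ the space of ${\mathfrak {u}}$-invariant vectors in $X$, while a $1$-cochain $f$ is a cocycle if and only if $$ u_0\cdot f(u_1)-u_1\cdot f(u_0)-f([u_0,u_1])\;=\;0 $$ for all $u_0,u_1\in{\mathfrak {u}}$, i.e.\ if and only if $f:{\mathfrak {u}}\to X$ is a derivation.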
By construction $C_{{\mathfrak {q}}}^q(X)=0$ for all $q<0$ and $q>\dim{\mathfrak {u}}$, which implies that $H^q({\mathfrak {u}};X)=0$ in those degrees. The map \eqref{eq:tensorhom} provides us with an isomorphism \begin{equation} C_{{\mathfrak {q}}}^q(X)\;\cong\;\bigwedge^q{\mathfrak {u}}^*\otimes_{\rm\bf C} X \label{eq:tensorcomplex} \end{equation} of $({\mathfrak {l}},L\cap K)$-modules. \subsection{Homology and Poincar\'e duality} Similarly we have a homology theory that is constructed dually via the complex $$ C_q(X)\;:=\;\bigwedge^q{\mathfrak {u}}\otimes_{\rm\bf C} X, $$ with differential $$ d:\;C_{q+1}(X)\to C_{q}(X), $$ $$ d(u_0\wedge u_1\wedge\cdots\wedge u_q\otimes x)\;:=\; \sum_{i=0}^q(-1)^{i+1} (u_0\wedge\cdots\wedge \widehat{u}_i\wedge\cdots\wedge u_q)\otimes(u_i\cdot x) $$ $$ \;+\; \sum_{i<j}^q(-1)^{i+j} [u_i,u_j]\wedge u_0\wedge\cdots\wedge \widehat{u}_i\wedge\cdots\wedge\widehat{u}_j\wedge\cdots\wedge u_q\otimes x. $$ Note the sign change in the first sum. We denote the cycles in degree $q$ by $Z_q(X)$ and the boundaries by $B_q(X)$. Again the differential is $({\mathfrak {l}},L\cap K)$-linear and the homology $$ H_q({\mathfrak {u}}; X)\;:=\;Z_q(X)/B_q(X) $$ is an $({\mathfrak {l}},L\cap K)$-module. \begin{proposition}[Easy duality]\label{prop:easyduality} For any $({\mathfrak {q}},L\cap K)$-module $X$ and any degree $q$ we have a natural canonical isomorphism $$ H^q({\mathfrak {u}}; X^\vee)\;\cong\; H_q({\mathfrak {u}}; X)^\vee $$ of $({\mathfrak {l}},L\cap K)$-modules. Here the superscript $\cdot^\vee$ denotes on both sides the $L\cap K$-finite dual. \end{proposition} This duality is sometimes called easy duality, as opposed to hard duality (Poincar\'e duality) that we discuss below.
\begin{proof} The natural perfect $({\mathfrak {q}},L\cap K)$-equivariant pairing $$ \langle\cdot,\cdot\rangle:\;\;\;X^\vee\times X\;\to\;{\rm\bf C}, $$ $$ (y,x)\;\mapsto\;y(x) $$ induces a natural perfect pairing $$ \langle\cdot,\cdot\rangle:\;\;\;C_{{\mathfrak {q}}}^q(X^\vee)\times C_q(X)\;\to\;{\rm\bf C}, $$ $$ (f,u_0\wedge\cdots\wedge u_q\otimes x)\;\mapsto\;\langle f(u_0\wedge\cdots\wedge u_q),x\rangle, $$ which is $({\mathfrak {l}},L\cap K)$-equivariant. We claim that this pairing descends to a perfect pairing \begin{equation} \langle\cdot,\cdot\rangle:\;\;\;H^q({\mathfrak {u}};X^\vee)\times H_q({\mathfrak {u}};X)\;\to\;{\rm\bf C}. \label{eq:cohomologyhomologypairing} \end{equation} To see that this is well defined, we first observe that the maps $$ d:\;\;\;C_{{\mathfrak {q}}}^{q}(X^\vee)\to C_{{\mathfrak {q}}}^{q+1}(X^\vee), $$ and $$ d:\;\;\;C_{q+1}(X)\to C_{q}(X), $$ are adjoint with respect to $\langle\cdot,\cdot\rangle$. Indeed, we have $$ \langle df,u_0\wedge\cdots\wedge u_q\otimes x\rangle\;=\; \langle[df](u_0\wedge u_1\wedge\cdots\wedge u_q),x\rangle\;=\; $$ $$ \sum_{i=0}^q(-1)^i \langle u_i\cdot f(u_0\wedge\cdots\wedge \widehat{u}_i\wedge\cdots\wedge u_q),x\rangle $$ $$ \;+\; \sum_{i<j}^q(-1)^{i+j} \langle f([u_i,u_j]\wedge u_0\wedge\cdots\wedge \widehat{u}_i\wedge\cdots\wedge\widehat{u}_j\wedge\cdots\wedge u_q),x\rangle\;=\; $$ $$ \sum_{i=0}^q(-1)^{i+1} \langle f(u_0\wedge\cdots\wedge \widehat{u}_i\wedge\cdots\wedge u_q),u_i\cdot x\rangle $$ $$ \;+\; \sum_{i<j}^q(-1)^{i+j} \langle f([u_i,u_j]\wedge u_0\wedge\cdots\wedge \widehat{u}_i\wedge\cdots\wedge\widehat{u}_j\wedge\cdots\wedge u_q),x\rangle\;=\; $$ $$ \langle f, d(u_0\wedge u_1\wedge\cdots\wedge u_q\otimes x)\rangle.
$$ Therefore, if $f\in C_{{\mathfrak {q}}}^{q-1}(X^\vee)$ and $g\in Z_q(X)$ is a cycle, i.e.\ lies in the kernel of $d$, $$ \langle df,g\rangle \;=\;\langle f,dg\rangle\;=\;\langle f,0\rangle\;=\;0, $$ and similarly for a cocycle $f\in Z^q(X^\vee)$ and $g\in C_{q+1}(X)$ $$ \langle f,dg\rangle \;=\;\langle df,g\rangle\;=\;\langle 0,g\rangle\;=\;0. $$ Therefore the pairing \eqref{eq:cohomologyhomologypairing} is well defined. To see that it is non-degenerate, we again use the same adjointness relation: Assume that $f\in Z^q(X^\vee)$ is a cocycle with $$ \langle f,g\rangle\;=\;0 $$ for all cycles $g\in Z_q(X)$. This means that $$ f\;\in\;Z^q(X^\vee)\cap Z_q(X)^\bot. $$ By adjointness we have $$ Z_q(X)\;=\;B^q(X^\vee)^\bot, $$ because for any $h\in C_q(X)$ we have the equivalence $$ \forall e\in C_{{\mathfrak {q}}}^{q-1}(X^\vee):\;\langle de,h\rangle=0\;\;\Leftrightarrow\;\; \forall e\in C_{{\mathfrak {q}}}^{q-1}(X^\vee):\;\langle e,dh\rangle=0\;\;\Leftrightarrow\;\;dh=0. $$ Therefore $$ Z_q(X)^\bot\;=\;\left(B^q(X^\vee)^\bot\right)^\bot=B^q(X^\vee), $$ and consequently $f$ is a coboundary. Switching the roles of $f$ and $g$, this completes the proof of the non-degeneracy of \eqref{eq:cohomologyhomologypairing}. \end{proof} The following statement is a consequence of hard duality, and in this context also referred to as {\em Poincar\'e duality}. The {\em hard} refers to the generalization involving production or induction along the compact group as well, and in this case it is harder to prove. Our setting is no harder than the above. \begin{proposition}[Poincar\'e duality]\label{prop:hardduality} For any $({\mathfrak {q}},L\cap K)$-module $X$ and any degree $q$ we have a natural canonical isomorphism $$ H_q({\mathfrak {u}}; X\otimes_{\rm\bf C}\bigwedge^{\dim{\mathfrak {u}}}{\mathfrak {u}}^*)\;\cong\; H^{\dim{\mathfrak {u}}-q}({\mathfrak {u}}; X) $$ of $({\mathfrak {l}},L\cap K)$-modules.
\end{proposition} \begin{proof} Consider the natural map $$ \bigwedge^{q}{\mathfrak {u}}\otimes_{\rm\bf C} \bigwedge^{\dim{\mathfrak {u}}-q}{\mathfrak {u}}\otimes_{\rm\bf C} \bigwedge^{\dim{\mathfrak {u}}}{\mathfrak {u}}^*\;\to\; {\rm\bf C} $$ given explicitly by $$ u\otimes v\otimes w \;\mapsto\;w(u\wedge v). $$ It is $({\mathfrak {l}},L\cap K)$-equivariant, and every non-zero $w$ identifies $\bigwedge^{\dim{\mathfrak {u}}-q}{\mathfrak {u}}$ as the dual of $\bigwedge^{q}{\mathfrak {u}}$, i.e.\ it gives rise to an isomorphism $$ \eta_w:\;\;\;\bigwedge^{\dim{\mathfrak {u}}-q}{\mathfrak {u}}\;\to\;(\bigwedge^{q}{\mathfrak {u}})^*. $$ This induces an $({\mathfrak {l}},L\cap K)$-isomorphism $$ \bigwedge^{q}{\mathfrak {u}}\otimes_{\rm\bf C} X\otimes_{\rm\bf C} \bigwedge^{\dim{\mathfrak {u}}}{\mathfrak {u}}^*\;\to\; \left(\bigwedge^{\dim{\mathfrak {u}}-q}{\mathfrak {u}}\right)^*\otimes_{\rm\bf C} X, $$ $$ u\otimes x\otimes w\;\mapsto\;w(u\wedge\,\cdot\,)\otimes x, $$ where $w(u\wedge\,\cdot\,)$ denotes the functional $v\mapsto w(u\wedge v)$. The right hand side has a natural identification with the complex computing cohomology, hence we get an isomorphism $$ \delta:\;\;\;C_q(X\otimes_{\rm\bf C}\bigwedge^{\dim{\mathfrak {u}}}{\mathfrak {u}}^*)\;\to\; C_{{\mathfrak {q}}}^{\dim{\mathfrak {u}}-q}(X).
$$ Let us verify that $\delta$ commutes with the differentials up to sign: $$ [\delta(d(u_0\wedge\cdots\wedge u_{q}\otimes x\otimes w))](v_q\wedge\cdots\wedge v_{\dim{\mathfrak {u}}})\;=\; $$ $$ \sum_{i=0}^q(-1)^{i+1} \delta(u_0\wedge\cdots\wedge \widehat{u}_i\wedge\cdots\wedge u_{q}\otimes u_i(x\otimes w))(v_q\wedge\cdots\wedge v_{\dim{\mathfrak {u}}})\;+\; $$ $$ \sum_{i<j}^q(-1)^{i+j} \delta([u_i,u_j]\wedge u_0\wedge\cdots\wedge \widehat{u}_i\wedge\cdots\wedge\widehat{u}_j\wedge\cdots\wedge u_q\otimes x\otimes w)(v_q\wedge\cdots\wedge v_{\dim{\mathfrak {u}}}) \;=\; $$ $$ \sum_{i=0}^q(-1)^{i+1} \delta(u_0\wedge\cdots\wedge \widehat{u}_i\wedge\cdots\wedge u_{q}\otimes (u_i\cdot x)\otimes w)(v_q\wedge\cdots\wedge v_{\dim{\mathfrak {u}}}) \;+\; $$ $$ \sum_{i<j}^q(-1)^{i+j} \delta([u_i,u_j]\wedge u_0\wedge\cdots\wedge \widehat{u}_i\wedge\cdots\wedge\widehat{u}_j\wedge\cdots\wedge u_q\otimes x\otimes w)(v_q\wedge\cdots\wedge v_{\dim{\mathfrak {u}}}), $$ because ${\mathfrak {u}}$ acts trivially on $w$. We get $$ \sum_{i=0}^q(-1)^{i+1} [\eta_w(u_0\wedge\cdots\wedge \widehat{u}_i\wedge\cdots\wedge u_{q}\otimes (u_i\cdot x))](v_q\wedge\cdots\wedge v_{\dim{\mathfrak {u}}})\;+\; $$ $$ \sum_{i<j}^q(-1)^{i+j} [\eta_w([u_i,u_j]\wedge u_0\wedge\cdots\wedge \widehat{u}_i\wedge\cdots\wedge\widehat{u}_j\wedge\cdots\wedge u_q\otimes x)](v_q\wedge\cdots\wedge v_{\dim{\mathfrak {u}}})\;=\; $$ $$ \sum_{i=0}^q(-1)^{i+1} w(u_0\wedge\cdots\wedge \widehat{u}_i\wedge\cdots\wedge u_{q}\wedge v_q\wedge\cdots\wedge v_{\dim{\mathfrak {u}}})\cdot (u_i\cdot x)\;+\; $$ $$ \sum_{i<j}^q(-1)^{i+j} w([u_i,u_j]\wedge u_0\wedge\cdots\wedge \widehat{u}_i\wedge\cdots\wedge\widehat{u}_j\wedge\cdots\wedge u_q\wedge v_q\wedge\cdots\wedge v_{\dim{\mathfrak {u}}})\cdot x.
$$ Analogously we get $$ [d(\delta(u_0\wedge\cdots\wedge u_{q}\otimes x\otimes w))](v_q\wedge\cdots\wedge v_{\dim{\mathfrak {u}}})\;=\; $$ $$ \sum_{j=q}^{\dim{\mathfrak {u}}}(-1)^{j-q} v_j\delta(u_0\wedge\cdots\wedge u_{q}\otimes x\otimes w)(v_q\wedge\cdots\wedge \widehat{v}_j\wedge\cdots\wedge v_{\dim{\mathfrak {u}}})\;+\;\cdots\;=\; $$ $$ \sum_{j=q}^{\dim{\mathfrak {u}}}(-1)^{j-q} v_j\eta_w(u_0\wedge\cdots\wedge u_{q}\otimes x)(v_q\wedge\cdots\wedge \widehat{v}_j\wedge\cdots\wedge v_{\dim{\mathfrak {u}}})\;+\;\cdots\;=\; $$ $$ \sum_{j=q}^{\dim{\mathfrak {u}}}(-1)^{j-q} w(u_0\wedge\cdots\wedge u_{q}\wedge v_q\wedge\cdots\wedge \widehat{v}_j\wedge\cdots\wedge v_{\dim{\mathfrak {u}}})\cdot (v_j\cdot x)\;+\; $$ $$ \sum_{q\leq k<l\leq\dim{\mathfrak {u}}}(-1)^{k+l} w([v_k,v_l]\wedge u_0\wedge\cdots\wedge u_q\wedge v_q\wedge\cdots\wedge \widehat{v}_k\wedge\cdots\wedge\widehat{v}_l\wedge\cdots\wedge v_{\dim{\mathfrak {u}}})\cdot x. $$ Summing up we conclude that $$ \delta(d(u\otimes x\otimes w))(v)\;+\;(-1)^{q}d(\delta(u\otimes x\otimes w))(v)\;=\; $$ $$ \sum_{i=0}^q(-1)^{i+1} w(u_0\wedge\cdots\wedge \widehat{u}_i\wedge\cdots\wedge u_{q}\wedge v_q\wedge\cdots\wedge v_{\dim{\mathfrak {u}}})\cdot (u_i\cdot x)\;+\; $$ $$ \sum_{j=q}^{\dim{\mathfrak {u}}}(-1)^{j} w(u_0\wedge\cdots\wedge u_{q}\wedge v_q\wedge\cdots\wedge \widehat{v}_j\wedge\cdots\wedge v_{\dim{\mathfrak {u}}})\cdot (v_j\cdot x). $$ For any fixed $w$ and $x$ we may interpret this expression as an alternating $X$-valued $(\dim{\mathfrak {u}}+1)$-form on ${\mathfrak {u}}$, and every such form is zero. \end{proof} \begin{corollary}\label{cor:duality} We have a canonical isomorphism $$ H^q({\mathfrak {u}}; (X\otimes_{\rm\bf C}\bigwedge^{\dim{\mathfrak {u}}}{\mathfrak {u}}^*)^\vee)\;\cong\; H^{\dim{\mathfrak {u}}-q}({\mathfrak {u}}; X)^\vee $$ of $({\mathfrak {l}},L\cap K)$-modules. \end{corollary} We remark that this isomorphism is even an isomorphism of universal $\delta$-functors.
In particular it is compatible with the connecting morphisms. \begin{proof} From Propositions \ref{prop:easyduality} and \ref{prop:hardduality} we get $$ H^q({\mathfrak {u}}; (X\otimes_{\rm\bf C}\bigwedge^{\dim{\mathfrak {u}}}{\mathfrak {u}}^*)^\vee)\;\cong\; H_q({\mathfrak {u}}; X\otimes_{\rm\bf C}\bigwedge^{\dim{\mathfrak {u}}}{\mathfrak {u}}^*)^\vee\;\cong\; H^{\dim{\mathfrak {u}}-q}({\mathfrak {u}}; X)^\vee. $$ \end{proof} \subsection{The K\"unneth formula} Consider two $({\mathfrak {g}},K)$-modules $V$ and $W$. Then the tensor product $V\otimes_{\rm\bf C} W$ is a $({\mathfrak {g}}\times{\mathfrak {g}},K\times K)$-module and ${\mathfrak {q}}\times{\mathfrak {q}}$ is a parabolic subpair of the latter reductive pair. Then the ${\mathfrak {u}}\times{\mathfrak {u}}$-cohomology of $V\otimes_{\rm\bf C} W$ is calculated via the standard complex $$ \Hom_{\rm\bf C}(\bigwedge^n({\mathfrak {u}}\times{\mathfrak {u}}),V\otimes_{\rm\bf C} W), $$ which decomposes naturally into $$ \bigoplus_{p+q=n}\Hom_{\rm\bf C}(\bigwedge^p{\mathfrak {u}},V)\otimes_{\rm\bf C}\Hom_{\rm\bf C}(\bigwedge^q{\mathfrak {u}},W), $$ via the vector space isomorphism $$ \phi\;:=\;\sum_{p+q=n}\phi^{p,q}, $$ where $$ \phi^{p,q}:\;\;\Hom_{\rm\bf C}(\bigwedge^p{\mathfrak {u}},V)\otimes\Hom_{\rm\bf C}(\bigwedge^q{\mathfrak {u}},W)\;\to\;\Hom_{\rm\bf C}(\bigwedge^n({\mathfrak {u}}\times{\mathfrak {u}}),V\otimes_{\rm\bf C} W), $$ $$ f\otimes g\;\;\;\mapsto\;\;\; [U_1\wedge U_2\mapsto f(U_1)\otimes g(U_2)] $$ whenever $U_1\in\bigwedge^p{\mathfrak {u}}$ and $U_2\in\bigwedge^q{\mathfrak {u}}$, and the other terms are mapped to zero. It is not hard to see that this decomposition is compatible with the differentials on the complexes.
Hence we deduce \begin{proposition}[K\"unneth formula]\label{prop:kuenneth} For any $V,W\in\mathcal C({\mathfrak {g}},K)$, and any $n$ there is a canonical isomorphism $$ \bigoplus_{p+q=n}H^p({\mathfrak {u}};V)\otimes_{\rm\bf C} H^q({\mathfrak {u}};W)\;\cong\; H^{n}({\mathfrak {u}}\times{\mathfrak {u}};V\otimes_{\rm\bf C} W) $$ of $({\mathfrak {l}}\times{\mathfrak {l}},(L\cap K)\times (L\cap K))$-modules. \end{proposition} \subsection{The Hochschild-Serre spectral sequence} The Hochschild-Serre spectral sequence \cite{hochschildserre1953}, \cite[Chapter 5, Section 10]{book_knappvogan1995} will be a fundamental tool in our algebraic character theory. Establishing its existence is technically demanding; this has been discussed in detail in many standard textbooks. Suppose we are given an inclusion of reductive pairs $({\mathfrak {h}},N)\to({\mathfrak {g}},K)$ and germane parabolic subalgebras ${\mathfrak {p}}\subseteq{\mathfrak {h}}$ and ${\mathfrak {q}}\subseteq{\mathfrak {g}}$, satisfying ${\mathfrak {p}}={\mathfrak {q}}\cap{\mathfrak {h}}$ with Levi decompositions ${\mathfrak {p}}={\mathfrak {m}}+{\mathfrak {n}}$, ${\mathfrak {q}}={\mathfrak {l}}+{\mathfrak {u}}$ subject to the condition ${\mathfrak {m}}\subseteq{\mathfrak {l}}$, i.e.\ ${\mathfrak {m}}={\mathfrak {l}}\cap{\mathfrak {h}}$. The main idea here is to describe the structure of ${\mathfrak {u}}$-cohomology as an $({\mathfrak {m}},M\cap N)$-module via the ${\mathfrak {n}}$-cohomology. Hochschild and Serre gave a structural recipe, i.e.\ a spectral sequence, which inductively achieves this. More concretely, for any $({\mathfrak {q}},L\cap K)$-module $X$ they start with the $({\mathfrak {m}},M\cap N)$-modules \begin{equation} E_1^{p,q}\;:=\;\bigwedge^p({\mathfrak {u}}/{\mathfrak {n}})^*\otimes_{\rm\bf C} H^q({\mathfrak {n}}; X).
\label{eq:e1term} \end{equation} Then they inductively construct a collection $(E^{p,q}_r)_{p,q\in{\rm\bf Z},r\geq 1}$ of $({\mathfrak {m}},M\cap N)$-modules together with morphisms $$ d^{p,q}_r:\;\;\;E^{p,q}_r\;\to\;E^{p+r,q-r+1}_r, $$ subject to the condition \begin{equation} d_r^{p+r,q-r+1}\circ d_r^{p,q}\;=\;0, \label{eq:dschicht} \end{equation} and isomorphisms \begin{equation} \alpha_r^{p,q}:\;\;\;H^{p,q}(E_r^{\bullet,\bullet})\;\to\;E^{p,q}_{r+1}, \label{eq:alphaiso} \end{equation} where the bigraded cohomology is defined via the above differential as \begin{equation} H^{p,q}(E_r^{\bullet,\bullet})\;\;:=\;\;\kernel d^{p,q}_r/\image d_r^{p-r,q+r-1}. \label{eq:Hschicht} \end{equation} In particular this data builds an explicit bridge between the $r$-th layer $E_r^{p,q}$ and the $(r+1)$-th one. They showed that for $r_0$ large enough the differentials of the $r_0$-th layer and above vanish. Therefore \eqref{eq:Hschicht} becomes $$ H^{p,q}(E_{r_0}^{\bullet,\bullet})\;\;=\;\; E^{p,q}_{r_0}\;\;=:\;\; E^{p,q}_\infty. $$ The same is true for any $r\geq r_0$, i.e.\ we may naturally identify $E^{p,q}_r$ with $E^{p,q}_\infty$ via the isomorphisms \eqref{eq:alphaiso}. Finally Hochschild and Serre construct a decreasing filtration of $$ E^n\;:=\;H^n({\mathfrak {u}}; X) $$ via submodules $$ 0\;\subseteq\cdots\subseteq\;F^{p+1}E^{n}\;\subseteq\;F^pE^{n}\;\subseteq\; F^{p-1}E^{n}\;\subseteq\;\cdots\;\subseteq\;F^0E^{n}\;=\;E^n, $$ satisfying for $p$ large enough $$ F^pE^n\;=\;0 $$ for all $n$, together with isomorphisms $$ \beta^{p,q}:\;\;\;E_\infty^{p,q}\;\to\;F^pE^{p+q}/F^{p+1}E^{p+q} $$ for all $p$ and $q$. In abstract terms this is equivalent to saying that they constructed a spectral sequence $(E_r^{p,q},E^n)_{r,p,q,n}$ with differential of bidegree $(r,1-r)$ and $E_1$-term given by \eqref{eq:e1term}, converging to $H^{p+q}({\mathfrak {u}}; X)$. This convergence is expressed by the notation $$ E_1^{p,q}\;\;\;\Longrightarrow\;\;\; H^{p+q}({\mathfrak {u}}; X).
$$ Their construction is functorial in $X$ and all maps involved are $({\mathfrak {m}},M\cap N)$-equivariant. If there is another parabolic subalgebra ${\mathfrak {q}}'\subseteq{\mathfrak {g}}$ containing ${\mathfrak {q}}$ with Levi factor ${\mathfrak {h}}$, and nilpotent radical ${\mathfrak {u}}'$ say, then the second layer is given by $$ E_2^{p,q}\;=\;H^p({\mathfrak {n}}; H^q({\mathfrak {u}}'; X)), $$ and the Hochschild-Serre spectral sequence is nothing but the Grothendieck spectral sequence for the composition of functors $$ H^0({\mathfrak {n}};\cdot)\circ H^0({\mathfrak {u}}';\cdot)\;=\; H^0({\mathfrak {u}};\cdot). $$ In general there is no such ${\mathfrak {q}}'$. A spectral sequence contains a lot of information, which is often not easy to extract. However, Euler characteristics are invariant in spectral sequences, i.e.\ the Euler characteristics of the $E_r$-terms agree for all $r$, and in particular they are the same as the Euler characteristic of the $E^n$-terms. \subsection{Properties preserved by cohomology} \begin{theorem}\label{thm:inheritance} For any parabolic subpair $({\mathfrak {q}},L\cap K)$ of a reductive pair $({\mathfrak {g}},K)$, and any $q\in{\rm\bf Z}$ the functor $$ H^q({\mathfrak {u}};-):\;\;\;{\mathcal C}({\mathfrak {g}},K)\to{\mathcal C}({\mathfrak {l}},L\cap K) $$ preserves the following properties: \begin{itemize} \item[(i)] finite-dimensionality, \item[(ii)] $Z({\mathfrak {g}})$- resp.\ $Z({\mathfrak {l}})$-finiteness, \item[(iii)] admissibility for $\theta$-stable ${\mathfrak {q}}$, \item[(iv)] finite length for constructible ${\mathfrak {q}}$, \item[(v)] discrete decomposability for constructible ${\mathfrak {q}}$, \item[(vi)] discrete decomposability with finite multiplicities for constructible ${\mathfrak {q}}$. \end{itemize} \end{theorem} \begin{proof} Assertion (i) follows easily.
If $X$ is finite-dimensional or finitely generated, then all $C_{{\mathfrak {q}}}^q(X)$ are finite-dimensional resp.\ finitely generated too. Therefore $H^q({\mathfrak {u}};X)$ is finite-dimensional in the first case, and finitely generated as an $({\mathfrak {l}},L\cap K)$-module in the second, because $U({\mathfrak {l}})$ is noetherian. The hard part is proving (ii), for which we refer to the literature \cite{casselmanosborne1975}, \cite[Theorem 7.56]{book_knappvogan1995}. One usually shows a stronger assertion: If $X$ is $Z({\mathfrak {g}})$-finite, then $H^q({\mathfrak {u}};X)$ is $Z({\mathfrak {l}})$-finite, and if the latter has a non-trivial $\chi_\lambda$-primary component for some character $\lambda\in{\mathfrak {h}}^*$, ${\mathfrak {h}}\subseteq{\mathfrak {l}}$ a $\theta$-stable Cartan, then there is a $w\in W({\mathfrak {g}},{\mathfrak {h}})$ such that the $\chi_{w(\lambda+\rho({\mathfrak {n}}))}$-primary component of $X$ is nonzero, where ${\mathfrak {n}}$ is the nilpotent radical of a Borel ${\mathfrak {b}}$ containing ${\mathfrak {h}}$. We will need this below in our proof of (vi). The proof of (iii) is an application of the Hochschild-Serre spectral sequence discussed above. The idea is to reduce the problem via this spectral sequence to the cohomology of the underlying compact pairs, where it becomes obvious thanks to Kostant's Theorem (Theorem \ref{thm:kostant} below). For details we refer to \cite[Corollary 5.140]{book_knappvogan1995}. Assertion (iv) follows in the $\theta$-stable case from (ii) and (iii). The real case is treated in \cite[Section 2]{hechtschmid1983}, and the general case follows from these two cases by Proposition \ref{prop:parabolic} via the Hochschild-Serre spectral sequence. To prove (v), let $$ X=\varinjlim X_i $$ be a discretely decomposable $({\mathfrak {g}},K)$-module, i.e.\ we assume the $X_i$ to be of finite length.
Then by (iv) the ${\mathfrak {u}}$-cohomology of the $X_i$ is of finite length too, as is by Poincar\'e duality the homology of $$ X_i\otimes_{\rm\bf C}\bigwedge^{\dim{\mathfrak {u}}}{\mathfrak {u}}^*. $$ Now homology and the tensor product commute with direct limits. Therefore $$ H_{\dim{\mathfrak {u}}-q}({\mathfrak {u}};X\otimes_{\rm\bf C}\bigwedge^{\dim{\mathfrak {u}}}{\mathfrak {u}}^*)\;=\; \varinjlim H_{\dim{\mathfrak {u}}-q}({\mathfrak {u}};X_i\otimes_{\rm\bf C}\bigwedge^{\dim{\mathfrak {u}}}{\mathfrak {u}}^*). $$ By Poincar\'e duality the left hand side computes the $q$-th ${\mathfrak {u}}$-cohomology of $X$, and the right hand side the corresponding cohomology of the $X_i$. This proves (v). To prove (vi), we may assume without loss of generality that $K$ is connected. Now our observation in the proof of (ii) above, together with (v), allows us to conclude the validity of (vi) thanks to Harish-Chandra's Theorem \ref{thm:harishchandrafinite}. \end{proof} \begin{theorem}[Kostant, Bott]\label{thm:kostant} If $K$ is connected and the Levi factor ${\mathfrak {l}}$ of ${\mathfrak {q}}={\mathfrak {l}}+{\mathfrak {u}}$ is a Cartan subalgebra, and if $V$ is an irreducible finite-dimensional representation of $({\mathfrak {g}},K)$ of ${\mathfrak {u}}$-highest weight $\lambda\in{\mathfrak {l}}^*$, then for any $0\leq q\leq\dim{\mathfrak {u}}$: $$ H^q({\mathfrak {u}};V)\;\cong\; \bigoplus\limits_{\substack{w\in W({\mathfrak {g}},{\mathfrak {l}})\\\ell(w)=q}} {\rm\bf C}_{w(\lambda+\rho({\mathfrak {u}}))-\rho({\mathfrak {u}})} $$ as an ${\mathfrak {l}}$-module. \end{theorem} This Theorem was already implicit in Bott \cite{bott1957}, but became explicit only in \cite{kostant1961}. \begin{corollary}\label{cor:denominator} In the above setting we have $$ H^q({\mathfrak {u}};{\rm\bf C})\;\cong\; \bigoplus\limits_{\substack{w\in W({\mathfrak {g}},{\mathfrak {l}})\\\ell(w)=q}} {\rm\bf C}_{w(\rho({\mathfrak {u}}))-\rho({\mathfrak {u}})}.
$$ \end{corollary} \subsection{Cohomology and Grothendieck groups} It turns out that $H^q({\mathfrak {u}};-)$ is the $q$-th right derived functor of $H^0({\mathfrak {u}};-)$. Therefore it maps short exact sequences $$ \begin{CD} 0@>>> X@>>> Y@>>> Z@>>> 0 \end{CD} $$ to long exact sequences $$ \begin{CD} 0@>>> H^0({\mathfrak {u}};X)@>>> H^0({\mathfrak {u}};Y)@>>> H^0({\mathfrak {u}};Z) \end{CD} $$ $$ \begin{CD} @>>> H^1({\mathfrak {u}};X)@>>> H^1({\mathfrak {u}};Y)@>>> H^1({\mathfrak {u}};Z)@>>>\cdots \end{CD} $$ $$ \begin{CD} @>>> H^{\dim{\mathfrak {u}}}({\mathfrak {u}};X)@>>> H^{\dim{\mathfrak {u}}}({\mathfrak {u}};Y)@>>> H^{\dim{\mathfrak {u}}}({\mathfrak {u}};Z)@>>>0. \end{CD} $$ A fundamental consequence of this fact and Theorem \ref{thm:inheritance} is that the map $$ H_{\mathfrak {q}}(-):\;\;\;K_?({\mathfrak {g}},K)\to K_?({\mathfrak {l}},L\cap K) $$ $$ [X]\;\;\mapsto\;\sum_{q=0}^{\dim{\mathfrak {u}}}(-1)^q[H^q({\mathfrak {u}}; X)] $$ is a well defined group homomorphism for $?\in\{\rm a,\rm df,\rm fl,\rm fd\}$, where in the case $?=\rm a$ we always implicitly assume ${\mathfrak {q}}$ to be $\theta$-stable. \section{Algebraic Characters} After all the preparation in the previous section, this section contains the main body of our theory. \subsection{Definition} Fix a reductive pair $({\mathfrak {g}},K)$ and a germane parabolic subalgebra ${\mathfrak {q}}\subseteq{\mathfrak {g}}$ with Levi decomposition ${\mathfrak {q}}={\mathfrak {l}}+{\mathfrak {u}}$. Define the (relative) {\em Weyl element} (with respect to ${\mathfrak {q}}$) as $$ W_{{\mathfrak {q}}}\;:=\;H_{\mathfrak {q}}({\bf1})\;\in\;K_{\rm fd}({\mathfrak {l}}, L\cap K). $$ The isomorphism \eqref{eq:tensorcomplex} tells us that \begin{equation} W_{{\mathfrak {q}}}\;=\;\sum_{q=0}^{\dim{\mathfrak {u}}}(-1)^q[\bigwedge^q{\mathfrak {u}}^*]. \label{eq:wformula} \end{equation} Choose any $?\in\{\rm a,\rm df,\rm fl,\rm fd\}$. 
From now on we assume that ${\mathfrak {q}}$ is $\theta$-stable if $?=\rm a$ and constructible in the other cases. Then Proposition \ref{prop:tensorstable} tells us that the category $\mathcal C_{?}({\mathfrak {l}},L\cap K)$ is stable under tensoring with finite-dimensional modules, and in particular the localization $$ C_{{\mathfrak {q}},?}({\mathfrak {l}},L\cap K)\;:=\;K_?({\mathfrak {l}},L\cap K)[W_{\mathfrak {q}}^{-1}] $$ is well defined. As we saw before, Theorem \ref{thm:inheritance} and the long exact sequence for cohomology tell us that ${\mathfrak {u}}$-cohomology defines a map $H_{\mathfrak {q}}$ between the $K$-groups of $({\mathfrak {g}},K)$ and $({\mathfrak {l}},L\cap K)$, and we set $$ c_{\mathfrak {q}}:\;\;\;K_{?}({\mathfrak {g}},K)\;\to\;C_{{\mathfrak {q}},?}({\mathfrak {l}},L\cap K), $$ $$ X\;\mapsto\;\frac{H_{\mathfrak {q}}(X)}{W_{\mathfrak {q}}}. $$ \subsection{Formal properties} The fundamental properties of $c_{\mathfrak {q}}$ are that it satisfies the analogues of axioms (A) and (M). However, in general multiplication is only partially defined; for example, the tensor product of two finite length modules need not be of finite length again. \begin{proposition}[Additivity and Multiplicativity]\label{prop:addmult} The map $c_{\mathfrak {q}}$ is additive and multiplicative in the sense that $$ c_{\mathfrak {q}}(X+Y)\;=\; c_{\mathfrak {q}}(X)+c_{\mathfrak {q}}(Y), $$ $$ c_{\mathfrak {q}}({\bf1})\;=\;{\bf1}, $$ and if $X,Y,X\otimes_{\rm\bf C} Y\in\mathcal C_?({\mathfrak {g}},K)$, then $$ c_{\mathfrak {q}}(X\cdot Y)\;=\; c_{\mathfrak {q}}(X)\cdot c_{\mathfrak {q}}(Y) $$ under the assumption that ${\mathfrak {q}}$ is a Borel subalgebra. \end{proposition} All identities are understood in $C_{{\mathfrak {q}},?}({\mathfrak {l}},L\cap K)$.
Our requirement of minimality of ${\mathfrak {q}}$ for multiplicativity becomes evident in the proof: In general we cannot guarantee that $$ H^p({\mathfrak {u}};X)\otimes_{\rm\bf C} H^q({\mathfrak {u}};Y)\in \mathcal C_?({\mathfrak {l}},L\cap K), $$ even though we conjecture that this is always the case for $X\otimes_{\rm\bf C} Y\in \mathcal C_?({\mathfrak {g}},K)$. \begin{proof} Additivity is a formal consequence of the long exact sequence for cohomology (without which $c_{\mathfrak {q}}$ would not be well defined on the Grothendieck group). As for the multiplicativity, we consider the ${\mathfrak {u}}\times{\mathfrak {u}}$-cohomology of $X\otimes_{\rm\bf C} Y$ as an $({\mathfrak {l}}\times{\mathfrak {l}},(L\cap K)\times(L\cap K))$-module. Then the K\"unneth formula from Proposition \ref{prop:kuenneth} tells us that after restricting to the diagonal $$ \Delta:\;\;\;({\mathfrak {l}},(L\cap K))\;\to\; ({\mathfrak {l}}\times{\mathfrak {l}},(L\cap K)\times(L\cap K)), $$ we have an identity \begin{equation} H_{\mathfrak {q}}(X)\cdot H_{\mathfrak {q}}(Y)\;=\; H_{{\mathfrak {q}}\times{\mathfrak {q}}}(X\otimes_{\rm\bf C} Y) \label{eq:hkuenneth} \end{equation} in $K_?({\mathfrak {l}},L\cap K)$. Now the stability of Euler characteristics in the Hochschild-Serre spectral sequence for the embedding $$ \Delta:\;\;\;({\mathfrak {g}},K)\;\to\;({\mathfrak {g}}\times{\mathfrak {g}},K\times K) $$ and the parabolic subalgebras ${\mathfrak {q}}$ resp.\ ${\mathfrak {q}}\times{\mathfrak {q}}$ gives $$ H_{{\mathfrak {q}}\times{\mathfrak {q}}}(X\otimes_{\rm\bf C} Y)\;=\; \sum_{p,q}(-1)^{p+q} [\bigwedge^p(({\mathfrak {u}}\times{\mathfrak {u}})/\Delta({\mathfrak {u}}))^*] \cdot[H^q(\Delta({\mathfrak {u}});X\otimes_{\rm\bf C} Y)] $$ in $K_?({\mathfrak {l}},L\cap K)$.
Now we have an isomorphism $$ \bigwedge^p(({\mathfrak {u}}\times{\mathfrak {u}})/\Delta({\mathfrak {u}}))^*\;\cong\;\bigwedge^p{\mathfrak {u}}^* $$ of $({\mathfrak {l}},L\cap K)$-modules, and hence we get $$ H_{{\mathfrak {q}}\times{\mathfrak {q}}}(X\otimes_{\rm\bf C} Y)\;=\;W_{\mathfrak {q}}\cdot H_{\mathfrak {q}}(X\otimes_{\rm\bf C} Y). $$ Together with \eqref{eq:hkuenneth} this proves the claim. \end{proof} An important consequence of the multiplicativity is \begin{corollary}\label{cor:addloc} For any $W'\in K_{\rm fd}({\mathfrak {g}},K)$ the map $c_{\mathfrak {q}}$ induces a well defined additive and multiplicative map $$ c_{\mathfrak {q}}:\;\;\;K_{?}({\mathfrak {g}},K)[W'^{-1}]\;\to\;C_{{\mathfrak {q}},?}({\mathfrak {l}},L\cap K)[W'|_{{\mathfrak {l}},L\cap K}^{-1}]. $$ \end{corollary} \begin{proposition}[Compatibility with restriction]\label{prop:restriction} The map $c_{\mathfrak {q}}$ is compatible with restriction, i.e.\ if $X\in\mathcal C_?({\mathfrak {g}},K)$ and $X|_{{\mathfrak {l}},L\cap K}\in\mathcal C_?({\mathfrak {l}},L\cap K)$ then $$ c_{\mathfrak {q}}(X)\;=\;[X|_{{\mathfrak {l}},L\cap K}]. $$ \end{proposition} This identity again is understood in $C_{{\mathfrak {q}},?}({\mathfrak {l}},L\cap K)$. \begin{proof} Suppose that the restriction of $X$ to the Levi pair lies in $\mathcal C_?({\mathfrak {l}},L\cap K)$. Then this is true for the complex $C_{{\mathfrak {q}}}^q(X)$ computing cohomology. Now we have the formal identity $$ H_{\mathfrak {q}}(X)\;=\;\sum_{q=0}^{\dim{\mathfrak {u}}}(-1)^q[C_{{\mathfrak {q}}}^q(X)]. $$ By the formula \eqref{eq:tensorcomplex} we get $$ \sum_{q=0}^{\dim{\mathfrak {u}}}(-1)^q[C_{{\mathfrak {q}}}^q(X)]\;=\; \sum_{q=0}^{\dim{\mathfrak {u}}}(-1)^q[\bigwedge^q{\mathfrak {u}}^*\otimes_{\rm\bf C} X]\;=\; [X]\cdot\sum_{q=0}^{\dim{\mathfrak {u}}}(-1)^q[\bigwedge^q{\mathfrak {u}}^*].
$$ With the identity \eqref{eq:wformula} we conclude that \begin{equation} H_{\mathfrak {q}}(X)\;=\;[X]\cdot W_{\mathfrak {q}}, \label{eq:riemannroch} \end{equation} is an identity in $K_{?}({\mathfrak {l}},L\cap K)$, which concludes the proof. \end{proof} The identity \eqref{eq:riemannroch} is sometimes also referred to as a Riemann-Roch formula, for it gives an explicit expression for the Euler-Poincar\'e characteristic of the cohomology. \subsection{Applications to finite-dimensional modules} Let us assume in this section for simplicity that $K$ is connected; all statements remain true for disconnected $K$, although the arguments get a bit more involved. What we have done so far enables us already to study interesting consequences of our results in the case of finite-dimensional modules. Restriction along the inclusion of the Levi pair induces an additive and multiplicative map $$ {\mathcal F}:\;\;\;K_{\rm fd}({\mathfrak {g}},K)\;\to\;K_{\rm fd}({\mathfrak {l}},L\cap K). $$ As multiplication is always defined in the finite-dimensional setting, this map is actually a ring homomorphism. Consider the following diagram $$ \begin{CD} K_{\rm fd}({\mathfrak {g}},K)@>{\mathcal F}>>K_{\rm fd}({\mathfrak {l}},L\cap K)\\ @| @VVV\\ K_{\rm fd}({\mathfrak {g}},K)@>{c_{\mathfrak {q}}}>>C_{{\mathfrak {q}},\rm fd}({\mathfrak {l}},L\cap K) \end{CD} $$ of rings, which is commutative by Proposition \ref{prop:restriction}. By Proposition \ref{prop:connecteddomain} the vertical localization map is a monomorphism. As we already know that $\mathcal F$ is a monomorphism, this shows \begin{theorem}\label{thm:fdmono} For connected $K$ the algebraic character map $$ c_{\mathfrak {q}}:\;\;\;K_{\rm fd}({\mathfrak {g}},K)\;\to\;C_{{\mathfrak {q}},\rm fd}({\mathfrak {l}},L\cap K) $$ is a monomorphism of rings, and it satisfies $$ c_{\mathfrak {q}}(X)\;=\;[X|_{{\mathfrak {l}},L\cap K}].
$$ \end{theorem} Suppose that ${\mathfrak {p}}$ is a parabolic subpair of $({\mathfrak {g}},K)$ contained in ${\mathfrak {q}}$, with Levi decomposition ${\mathfrak {p}}={\mathfrak {m}}+{\mathfrak {n}}$. Then the obvious relation $$ X|_{{\mathfrak {m}},M\cap K}\;=\;(X|_{{\mathfrak {l}},L\cap K})|_{{\mathfrak {m}},M\cap L} $$ translates to the identity \begin{equation} c_{{\mathfrak {p}}}\;=\;c_{{\mathfrak {p}}\cap{\mathfrak {l}}}\circ c_{{\mathfrak {q}}}, \label{eq:fdres} \end{equation} where $c_{{\mathfrak {p}}\cap{\mathfrak {l}}}$ is interpreted as a map $$ C_{{\mathfrak {q}},\rm fd}({\mathfrak {l}},L\cap K)\;\to\;C_{{\mathfrak {p}},\rm fd}({\mathfrak {m}},M\cap K), $$ which is meaningful thanks to Corollary \ref{cor:addloc}. The identity \eqref{eq:fdres} enables us to study branching problems of finite-dimensional modules via algebraic characters. A remarkable fact is that this identity remains true in general, not only in the finite-dimensional case, as we will see shortly. Another identity is related to duality. As taking duals is an exact functor, it induces an additive map $$ \cdot^\vee:\;\;\;K_{\rm fd}({\mathfrak {g}},K)\;\to\;K_{\rm fd}({\mathfrak {g}},K). $$ Now for any $X,Y\in\mathcal C_{\rm fd}({\mathfrak {g}},K)$ we have a (non-canonical) isomorphism $$ X^\vee\otimes_{\rm\bf C} Y^\vee\;\cong\;(X\otimes_{\rm\bf C} Y)^\vee, $$ which means that dualization is multiplicative and hence a ring isomorphism on Grothendieck groups. The same is true for the Levi pair, and due to its multiplicativity dualization induces a well defined ring automorphism $$ \cdot^\vee:\;\;\;C_{{\mathfrak {q}},\rm fd}({\mathfrak {l}},L\cap K)\;\to\;C_{{\mathfrak {q}},\rm fd}({\mathfrak {l}},L\cap K). $$ As taking duals obviously commutes with restrictions, we get the identity \begin{equation} c_{{\mathfrak {q}}}(X^\vee)\;=\;c_{{\mathfrak {q}}}(X)^\vee. \label{eq:fddual} \end{equation} We will see that this identity also generalizes.
One may wonder why we pass to the localization in this setting at all; restriction seems to be good enough. The reason is that even here localization makes explicit calculations possible, as it allows explicit character formulae as in the classical \begin{theorem}[Weyl character formula] If $K$ is connected and the Levi factor ${\mathfrak {l}}$ of ${\mathfrak {q}}={\mathfrak {l}}+{\mathfrak {u}}$ is a Cartan subalgebra, and if $V$ is an irreducible finite-dimensional representation of $({\mathfrak {g}},K)$ of ${\mathfrak {u}}$-highest weight $\lambda\in{\mathfrak {l}}^*$, then $$ c_{\mathfrak {q}}(V)\;=\; \frac {\sum_{w\in W({\mathfrak {g}},{\mathfrak {l}})}(-1)^{\ell(w)}[{\rm\bf C}_{w(\lambda+\rho({\mathfrak {u}}))-\rho({\mathfrak {u}})}]} {\sum_{w\in W({\mathfrak {g}},{\mathfrak {l}})}(-1)^{\ell(w)}[{\rm\bf C}_{w(\rho({\mathfrak {u}}))-\rho({\mathfrak {u}})}]} $$ and we have the denominator formula $$ \sum_{w\in W({\mathfrak {g}},{\mathfrak {l}})}(-1)^{\ell(w)}[{\rm\bf C}_{w(\rho({\mathfrak {u}}))-\rho({\mathfrak {u}})}] \;=\; \prod_{\alpha\in\Delta({\mathfrak {u}},{\mathfrak {l}})}({\bf1}-[{\rm\bf C}_{-\alpha}]). $$ \end{theorem} \begin{proof} It is an immediate consequence of the definition of $c_{\mathfrak {q}}$ and Kostant's Theorem, i.e.\ Theorem \ref{thm:kostant} and its Corollary \ref{cor:denominator}, together with the isomorphism \eqref{eq:tensorcomplex}, which tells us that $$ H_{\mathfrak {q}}({\bf1})\;=\; \sum_{q=0}^{\dim{\mathfrak {u}}}(-1)^q[\bigwedge^q{\mathfrak {u}}^*]\;=\; \prod_{\alpha\in\Delta({\mathfrak {u}}^*,{\mathfrak {l}})}({\bf1}-[{\rm\bf C}_\alpha]). $$ The last identity is a consequence of the isomorphism $$ \bigwedge^q{\mathfrak {u}}^*\;=\; \bigwedge^q\bigoplus_{\alpha\in\Delta({\mathfrak {u}}^*,{\mathfrak {l}})}{\rm\bf C}_\alpha\;\cong\; \bigoplus_{\substack{A\subseteq\Delta({\mathfrak {u}}^*,{\mathfrak {l}})\\\#A=q}}\bigotimes_{\alpha\in A}{\rm\bf C}_\alpha.
$$ \end{proof} \subsection{Duality and Transitivity} In this section we study the generalizations of \eqref{eq:fdres} and \eqref{eq:fddual}. For transitivity and restrictions we consider the following setup. Fix an inclusion $i:({\mathfrak {h}},N)\to({\mathfrak {g}},K)$ of reductive pairs, and let ${\mathfrak {p}}\subseteq{\mathfrak {h}}$ resp.\ ${\mathfrak {q}}\subseteq{\mathfrak {g}}$ be germane parabolic subpairs with ${\mathfrak {p}}\subseteq{\mathfrak {q}}$ and compatible Levi decompositions ${\mathfrak {p}}={\mathfrak {m}}+{\mathfrak {n}}$ resp.\ ${\mathfrak {q}}={\mathfrak {l}}+{\mathfrak {u}}$, i.e.\ ${\mathfrak {m}}\subseteq{\mathfrak {l}}$. Note that then ${\mathfrak {p}}\cap{\mathfrak {l}}$ is germane parabolic in the reductive pair $({\mathfrak {h}}\cap{\mathfrak {l}},N\cap L)$. Assume that for $?,!\in\{\rm df, \rm a, \rm fl, \rm fd\}$ restriction along $i$ defines functors $\mathcal F:\mathcal C_?({\mathfrak {g}},K)\to\mathcal C_!({\mathfrak {h}},N)$ and similarly $\mathcal F:\mathcal C_?({\mathfrak {l}},L\cap K)\to\mathcal C_!({\mathfrak {h}}\cap{\mathfrak {l}},N\cap L)$. Then both forgetful functors descend to Grothendieck groups and respective localizations. \begin{proposition}[Transitivity]\label{prop:transitivity} In the setup above, we have the identity $$ c_{{\mathfrak {p}}}\circ\mathcal F\;=\; c_{{\mathfrak {p}}\cap{\mathfrak {l}}}\circ\mathcal F\circ c_{{\mathfrak {q}}}\;\;\in\;\; C_{\mathfrak {p}}({\mathfrak {m}},M\cap N)[W_{\mathfrak {p}}/W_{\mathfrak {q}}|_{{\mathfrak {m}},M\cap N}]. $$ \end{proposition} There are two extreme cases. First, $i$ may be the identity map; then we may choose $?=\,!$ and the functors $\mathcal F$ become the identity functors. Then the statement reduces to the (generalization of the) transitivity relation \eqref{eq:fdres} that we saw in the finite-dimensional case before.
Secondly, if $i$ is not the identity, and ${\mathfrak {h}}={\mathfrak {l}}$ say, then Proposition \ref{prop:transitivity} boils down to Proposition \ref{prop:restriction}. In particular if ${\mathfrak {h}}\cap{\mathfrak {l}}={\mathfrak {m}}$, then Proposition \ref{prop:transitivity} tells us that characters commute with restrictions. \begin{proof} We put ourselves in a position where we can apply a Hochschild-Serre spectral sequence. To this end we introduce the germane parabolic $$ {\mathfrak {p}}'\;:=\;{\mathfrak {q}}\cap{\mathfrak {h}}\;\subseteq\;{\mathfrak {h}}. $$ It has a Levi decomposition ${\mathfrak {p}}'={\mathfrak {l}}'+{\mathfrak {n}}'$ with ${\mathfrak {l}}'={\mathfrak {l}}\cap{\mathfrak {h}}$ and ${\mathfrak {n}}'\subseteq{\mathfrak {n}}$. In particular $$ {\mathfrak {n}}\;=\;({\mathfrak {n}}\cap{\mathfrak {l}})\oplus{\mathfrak {n}}'\;=\;({\mathfrak {n}}\cap{\mathfrak {l}}')\oplus{\mathfrak {n}}'. $$ The Hochschild-Serre spectral sequence for the inclusion ${\mathfrak {l}}'\subseteq{\mathfrak {h}}$ and the parabolic subalgebras ${\mathfrak {p}}\cap{\mathfrak {l}}\subseteq{\mathfrak {p}}\subseteq{\mathfrak {p}}'$ gives us the relation \begin{equation} H_{\mathfrak {p}}(X)\;=\;H_{{\mathfrak {p}}\cap{\mathfrak {l}}}\circ H_{{\mathfrak {p}}'}(X)\;\;\in\;\; K_!({\mathfrak {m}},M\cap N), \label{eq:pprime} \end{equation} for all $X\in K_!({\mathfrak {h}},N)$. Now we can also apply the Hochschild-Serre spectral sequence to the inclusion ${\mathfrak {h}}\subseteq{\mathfrak {g}}$ and the parabolic algebras ${\mathfrak {p}}'\subseteq{\mathfrak {q}}$. From this we get $$ \sum_{q=0}^{\dim{\mathfrak {u}}/{\mathfrak {n}}'}[\bigwedge^q({\mathfrak {u}}/{\mathfrak {n}}')^*]\cdot H_{{\mathfrak {p}}'}\circ\mathcal F(X)\;=\;\mathcal F\circ H_{{\mathfrak {q}}}(X)\;\;\in\;\; K_?({\mathfrak {l}}',L'\cap K), $$ for all $X\in K_?({\mathfrak {g}},K)$.
The sum on the left is the quotient of the Weyl denominators $W_{\mathfrak {q}}|_{{\mathfrak {l}}',L'\cap K}$ and $W_{{\mathfrak {p}}'}$. We conclude that \begin{equation} H_{{\mathfrak {p}}'}\circ\mathcal F(X)\cdot W_{\mathfrak {q}}|_{{\mathfrak {l}}',L'\cap K}\;=\; \mathcal F\circ H_{{\mathfrak {q}}}(X)\cdot W_{{\mathfrak {p}}'} \;\;\in\;\; K_?({\mathfrak {l}}',L'\cap K). \label{eq:wpq} \end{equation} Plugging \eqref{eq:pprime} and \eqref{eq:wpq} together we obtain $$ H_{\mathfrak {p}}\circ\mathcal F(X)\cdot W_{\mathfrak {q}}|_{{\mathfrak {m}},M\cap N}\;=\; H_{{\mathfrak {p}}\cap{\mathfrak {l}}}\circ H_{{\mathfrak {p}}'}\circ\mathcal F(X)\cdot W_{\mathfrak {q}}|_{{\mathfrak {m}},M\cap N}\;=\; $$ $$ H_{{\mathfrak {p}}\cap{\mathfrak {l}}}\circ \mathcal F\circ H_{{\mathfrak {q}}}(X)\cdot W_{{\mathfrak {p}}'}|_{{\mathfrak {m}},M\cap N} \;\;\in\;\; K_!({\mathfrak {m}},M\cap N), $$ for all $X\in K_?({\mathfrak {g}},K)$, where we implicitly exploited the multiplicativity relation $$ H_{{\mathfrak {p}}\cap{\mathfrak {l}}}((-)\cdot W_{\mathfrak {q}}|_{{\mathfrak {l}}',L'\cap N})\;=\; H_{{\mathfrak {p}}\cap{\mathfrak {l}}}(-)\cdot W_{\mathfrak {q}}|_{{\mathfrak {m}},M\cap N}. $$ Finally $$ W_{\mathfrak {p}}\;=\;W_{{\mathfrak {p}}\cap{\mathfrak {l}}}\cdot W_{{\mathfrak {p}}'}|_{{\mathfrak {m}},M\cap N} $$ concludes the proof. \end{proof} Our hypotheses for Proposition \ref{prop:transitivity} are stricter than necessary. The assumption that $\mathcal F$ be defined on the entire Grothendieck groups may be weakened. Not only other Grothendieck groups may be considered, but also suitable subgroups of those groups. This is made precise by the notion of {\em admissible quadruple} in \cite{januszewskipreprint}, which allows a more general construction of characters. However the methods and main arguments are still the same as the ones here. There are three important consequences of Proposition \ref{prop:transitivity} that are also reflected in its proof.
The first is the transitivity relation \eqref{eq:fdres}, which reduces many properties of characters, including their explicit calculation, to the case of maximal parabolic subalgebras. The second is the compatibility with restrictions, which is most easily expressed in the case ${\mathfrak {p}}\cap{\mathfrak {l}}={\mathfrak {m}}$, and amounts to the commutativity of the square $$ \begin{CD} K_?({\mathfrak {g}},K)@>\mathcal F>> K_!({\mathfrak {h}},N)\\ @V{c_{\mathfrak {q}}}VV @V{c_{\mathfrak {p}}}VV\\ C_?({\mathfrak {l}},L\cap K)@>\mathcal F>>C_!({\mathfrak {m}},M\cap N)[W_{\mathfrak {p}}/W_{\mathfrak {q}}|_{{\mathfrak {m}},M\cap N}] \end{CD} $$ We remark that by our assumptions we have in this situation ${\mathfrak {n}}\subseteq{\mathfrak {u}}$, which means that we get a natural isomorphism $$ C_!({\mathfrak {m}},M\cap N)[W_{\mathfrak {p}}/W_{\mathfrak {q}}|_{{\mathfrak {m}},M\cap N}]\;\to\;K_!({\mathfrak {m}},M\cap N)[W_{\mathfrak {q}}|_{{\mathfrak {m}},M\cap N}^{-1}]. $$ In particular for a fixed reductive pair $({\mathfrak {m}},M\cap N)$ we are led to consider many different localizations, depending on the branching problem at hand. Therefore we are naturally led to study the problem of localization in general. The main issue is that $c_{\mathfrak {p}}$ (and even $c_{\mathfrak {q}}$) in the above diagram can be far from injective. One reason is the possible vanishing of cohomology. The other is the non-triviality of the kernel of the localization map. We will see that there is a connection between these two seemingly independent situations in our treatment of Blattner formulae. The third important consequence of Proposition \ref{prop:transitivity} resp.\ its proof is \begin{corollary}\label{cor:constructiblefinitelength} Let ${\mathfrak {q}}={\mathfrak {l}}+{\mathfrak {u}}\subseteq{\mathfrak {g}}$ be constructible. Then ${\mathfrak {u}}$-cohomology preserves finite length.
\end{corollary} As for the duality, we observe that the isomorphism \eqref{eq:tensorcomplex} applied to the trivial module $X={\bf1}={\rm\bf C}$, together with Poincar\'e duality gives $$ (-1)^{\dim{\mathfrak {u}}}[\bigwedge^{\dim{\mathfrak {u}}}{\mathfrak {u}}^*]\cdot \sum_{q=0}^{\dim{\mathfrak {u}}}(-1)^q[\bigwedge^q{\mathfrak {u}}]\;=\; \sum_{q=0}^{\dim{\mathfrak {u}}}(-1)^q[\bigwedge^q{\mathfrak {u}}^*]. $$ The sum on the left hand side computes the homology of ${\bf1}$, and the right hand side computes its cohomology. We deduce that $$ W^{\mathfrak {q}}\;:=\;\sum_{q=0}^{\dim{\mathfrak {u}}}(-1)^qH_q({\mathfrak {u}};{\bf1})\;\in\;K_{\rm fd}({\mathfrak {l}},L\cap K) $$ acts invertibly (via multiplication) in a given $K_{\rm fd}({\mathfrak {l}},L\cap K)$-module $M$ if and only if $W_{\mathfrak {q}}$ acts invertibly. By Proposition \ref{prop:easyduality} or direct inspection $$ W_{\mathfrak {q}}^\vee\;=\;W^{\mathfrak {q}}. $$ Therefore \begin{equation} (-1)^{\dim{\mathfrak {u}}}[\bigwedge^{\dim{\mathfrak {u}}}{\mathfrak {u}}^*]\cdot H_{{\mathfrak {q}}}({\bf1})^\vee\;=\; H_{{\mathfrak {q}}}({\bf1}). \label{eq:trivialcdual} \end{equation} This justifies that dualization extends to localizations, and we may formulate \begin{proposition}[Duality]\label{prop:duality} Let $X,X^\vee\in\mathcal C_?({\mathfrak {g}},K)$ be two admissible modules and assume ${\mathfrak {q}}$ to be a $\theta$-stable Borel. Then we have the identity $$ c_{\mathfrak {q}}(X^\vee)\;=\;c_{\mathfrak {q}}(X)^\vee $$ in $C_{{\mathfrak {q}},?}({\mathfrak {l}},L\cap K)$, where on the left hand side $\cdot^\vee$ denotes the $K$-finite dual, and on the right hand side the $(L\cap K)$-finite dual. \end{proposition} The restriction to finiteness seems necessary, as our proof relies on the reflexivity of $X$. For finite-length modules we obtain the same statement along the lines of the proof of Theorem \ref{thm:independence} below.
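As an illustration (a rank-one sanity check we add for the reader, using the conventions above), suppose ${\mathfrak {u}}$ is one-dimensional of weight $-\beta$, so that ${\mathfrak {u}}^*$ has weight $\beta$ and $H_{\mathfrak {q}}({\bf1})={\bf1}-[{\rm\bf C}_\beta]$. Then the left hand side of \eqref{eq:trivialcdual} becomes $$ (-1)^1[{\rm\bf C}_\beta]\cdot({\bf1}-[{\rm\bf C}_\beta])^\vee\;=\;-[{\rm\bf C}_\beta]\cdot({\bf1}-[{\rm\bf C}_{-\beta}])\;=\;{\bf1}-[{\rm\bf C}_\beta]\;=\;H_{\mathfrak {q}}({\bf1}), $$ as claimed.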
\begin{proof} The crucial point is the comparison between the $K$-finite and the $L\cap K$-finite duals. Therefore we write $X^{\vee,G}$ for the $K$-finite and $X^{\vee,L}$ for the $L\cap K$-finite duals respectively. Our goal is to show that the inclusion $$ X^{\vee,G}\;\to\; X^{\vee,L} $$ induces an isomorphism \begin{equation} H^q({\mathfrak {u}}; X^{\vee,G})\cong H^q({\mathfrak {u}}; X^{\vee,L}) \label{eq:gldualmap} \end{equation} in cohomology. Then the claim of the proposition will follow from Poincar\'e duality. The biduality maps induce short exact sequences \begin{equation} \begin{CD} 0@>>> X@>\nu_G>> (X^{\vee,G})^{\vee,G}@>>> 0 @>>> 0 \end{CD} \label{eq:xgbidual} \end{equation} \begin{equation} \begin{CD} 0@>>> X@>\nu_{GL}>> (X^{\vee,G})^{\vee,L}@>>> Y @>>> 0 \end{CD} \label{eq:xglbidual} \end{equation} \begin{equation} \begin{CD} 0@>>> X@>\nu_L>> (X^{\vee,L})^{\vee,L}@>>> N @>>> 0 \end{CD} \label{eq:xlbidual} \end{equation} of $({\mathfrak {q}},L\cap K)$-modules. We also have a short exact sequence \begin{equation} \begin{CD} 0@>>> Z@>>> (X^{\vee,L})^{\vee,L}@>>> (X^{\vee,G})^{\vee,L} @>>> 0. \end{CD} \label{eq:xgldual} \end{equation} Plugging these sequences together, we obtain a commutative diagram $$ \begin{CD} X@>\nu_{GL}>> (X^{\vee,G})^{\vee,L}@>>> Y\\% @>>> 0\\ @AAA @A\eta AA @AAA\\% @AAA\\ X@>\nu_L>> (X^{\vee,L})^{\vee,L}@>>> N\\% @>>> 0\\ @AAA @AAA @AAA\\% @AAA\\ 0@>>> Z@>>> Z\\% @>>> 0\\ \end{CD} $$ with short exact sequences in the rows and columns. The exactness of the last row resp.\ last column follows from the snake lemma. By construction of the Poincar\'e duality map in cohomology (i.e.\ Corollary \ref{cor:duality}) the map $\nu_L$ induces the biduality map $$ H^q({\mathfrak {u}};X)\;\to\;H^q({\mathfrak {u}};X)^{\vee,\vee} $$ on cohomology, which is an isomorphism as $H^q({\mathfrak {u}};-)$ preserves admissibility by Theorem \ref{thm:inheritance}. 
Then the long exact sequence of cohomology tells us that the ${\mathfrak {u}}$-cohomology of $N$ vanishes in all degrees, and therefore the long exact sequence for the rightmost column gives us isomorphisms $$ H^q({\mathfrak {u}}; Y)\;\to\;H^{q+1}({\mathfrak {u}}; Z). $$ In order to see that \eqref{eq:gldualmap} is an isomorphism, it suffices to show that the cohomology of $Y$ and $Z$ vanishes. To this end we consider the same diagrams over the associated compact pairs. Duality is not affected by this restriction, and we analogously obtain an isomorphism $$ H^q({\mathfrak {u}}\cap{\mathfrak {k}}; Y)\;\to\;H^{q+1}({\mathfrak {u}}\cap{\mathfrak {k}}; Z). $$ Together with Kostant's Theorem this implies the vanishing of the ${\mathfrak {u}}\cap{\mathfrak {k}}$-cohomology of $Y$ and $Z$ in the light of Proposition \ref{prop:kdecomp}. We have a Hochschild-Serre spectral sequence $$ \bigwedge^p({\mathfrak {u}}\cap{\mathfrak {p}})^*\otimes_{\rm\bf C} H^q({\mathfrak {u}}\cap{\mathfrak {k}}; Y) \;\Longrightarrow\;H^{p+q}({\mathfrak {u}}; Y), $$ where ${\mathfrak {p}}$ denotes the orthogonal complement to ${\mathfrak {k}}$ in ${\mathfrak {g}}$. As the left hand side vanishes, the right hand side does so too, and we conclude the proof. \end{proof} The proof generalizes to arbitrary $\theta$-stable parabolics by appealing to the general version of Kostant's Theorem, cf.\ \cite{januszewskipreprint}. \subsection{Linear independence} In this section we assume that the Lie group $G$ associated to $({\mathfrak {g}},K)$ is linear, which is a mild condition needed for Hecht-Schmid's result below. In order to prove that algebraic characters classify the semi-simplifications of finite-length modules we import the following two results. The first was conjectured by Osborne in his thesis \cite{osborne1972} for Borel algebras and generalized to the general real parabolic case by Casselman \cite{casselman1977}.
\begin{proposition}[Hecht-Schmid, \cite{hechtschmid1983}]\label{prop:osbornereal} Let ${\mathfrak {q}}\subseteq{\mathfrak {g}}$ be a real parabolic subalgebra with Levi decomposition ${\mathfrak {q}}={\mathfrak {l}}+{\mathfrak {u}}$. Then there is an open subset $L_0\subseteq L\subseteq G$ which contains the set of regular elements of $L$, and which maps surjectively onto $L^{\rm ad}$ under the adjoint map, such that for any finite-length $({\mathfrak {g}},K)$-module $X$ Harish-Chandra's global characters for $G$ (cf.\ \cite{harishchandra1954b}) resp.\ $L$ satisfy the identity $$ \Theta_{G}(X)|_{L_0}\;=\; \frac{\sum_{q=0}^{\dim{\mathfrak {u}}}(-1)^q\Theta_{L}(H^q({\mathfrak {u}};X))|_{L_0}} {\sum_{q=0}^{\dim{\mathfrak {u}}}(-1)^q\Theta_{L}(H^q({\mathfrak {u}};{\bf1}))|_{L_0}}. $$ \end{proposition} The set $L_0$ from Proposition \ref{prop:osbornereal} is large enough that the left hand side, and hence the right hand side, determines the restriction of $\Theta_G(X)$ to $L$ uniquely. \begin{proposition}[Vogan, {\cite[Theorem 8.1]{vogan1979ii}}]\label{prop:osbornethetastable} Let ${\mathfrak {q}}\subseteq{\mathfrak {g}}$ be a $\theta$-stable parabolic subalgebra with Levi decomposition ${\mathfrak {q}}={\mathfrak {l}}+{\mathfrak {u}}$. Then for any finite-length $({\mathfrak {g}},K)$-module $X$ Harish-Chandra's global characters for $G$ resp.\ $L$ satisfy the identity $$ \Theta_{G}(X)|_{L}\;=\; \frac{\sum_{q=0}^{\dim{\mathfrak {u}}}(-1)^q\Theta_{L}(H^q({\mathfrak {u}};X))} {\sum_{q=0}^{\dim{\mathfrak {u}}}(-1)^q\Theta_{L}(H^q({\mathfrak {u}};{\bf1}))}. $$ \end{proposition} Now we can prove \begin{theorem}[Linear independence]\label{thm:independence} Assume that ${\mathfrak {q}}_1,\dots,{\mathfrak {q}}_r$ is a collection of constructible parabolic subalgebras whose Levi factors $L_1,\dots,L_r$ cover a dense subset of $G$ up to conjugation.
Then the map $$ \prod_{i=1}^rc_{{\mathfrak {q}}_i}:\;\;\;K_{\rm fl}({\mathfrak {g}},K)\;\to\;\prod_{i=1}^r C_{{\mathfrak {q}}_i}({\mathfrak {l}}_i,L_i\cap K) $$ is a monomorphism. \end{theorem} \begin{proof} By Proposition \ref{prop:parabolic}, the transitivity relation from Proposition \ref{prop:transitivity}, and the above Propositions \ref{prop:osbornereal} and \ref{prop:osbornethetastable} we know that each character $$ c_{{\mathfrak {q}}_i}(X)\;\in\;C_{{\mathfrak {q}}_i}({\mathfrak {l}}_i,L_i\cap K) $$ determines the restriction of Harish-Chandra's global character $\Theta_G(X)$ of $X$ to $L_i$ uniquely. Therefore by our assumption that the conjugates of $L_i$ cover a dense subset of $G$ we deduce the claim of the Theorem from Harish-Chandra's linear independence Theorem for his global characters \cite[Theorem 1]{harishchandra1954b}. \end{proof} \section{Localization} In order to extend Theorem \ref{thm:independence} to modules more general than those of finite length we need to study localization in more detail. \subsection{An example} Let us consider $G=\SL_2({\rm\bf R})$ and the corresponding reductive pair $({\mathfrak {s}}l_2,\SO(2))$. We choose ${\mathfrak {q}}={\mathfrak {l}}+{\mathfrak {u}}$ a minimal $\theta$-stable parabolic with a Levi decomposition as indicated. Then $K_{\rm fl}({\mathfrak {s}}l_2,\SO(2))$ is the free abelian group generated by the irreducible $({\mathfrak {s}}l_2,\SO(2))$-modules, which are all known. For the Levi factor we have $$ L\cap\SO(2)\;=\;L\;=\;\SO(2), $$ and the corresponding Grothendieck group $$ K_{\rm fl}({\mathfrak {l}},L)\;=\;K_{\rm fd}({\mathfrak {l}},L) $$ is the free abelian group generated by the irreducible (hence finite-dimensional) representations of $\SO(2)$. In this group we have the Weyl denominator $$ W_{\mathfrak {q}}:=1-[2\alpha], $$ where $\alpha$ is a generator of the group of characters of $\SO(2)$, compatible with our choice of ${\mathfrak {u}}$, i.e.\ the weight occurring in ${\mathfrak {u}}$ is $-2\alpha$.
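Let us note (a small verification we add) that this agrees with the general definition of the Weyl denominator: since ${\mathfrak {u}}$ is one-dimensional of weight $-2\alpha$, we have $\bigwedge^0{\mathfrak {u}}^*={\rm\bf C}_0$ and $\bigwedge^1{\mathfrak {u}}^*={\rm\bf C}_{2\alpha}$, whence $$ W_{\mathfrak {q}}\;=\;\sum_{q=0}^{1}(-1)^q[\bigwedge^q{\mathfrak {u}}^*]\;=\;{\bf1}-[2\alpha]. $$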
Since $K_{\rm fd}({\mathfrak {l}},L)$ is a domain, localization at $W_{\mathfrak {q}}$ is a faithful operation. We may identify $K_{\rm fd}({\mathfrak {l}},L)$ as a ring with the ring ${\rm\bf Z}[X,X^{-1}]$, where $X$ by definition corresponds to $\alpha$, and we may think of $$ C_{{\mathfrak {q}},\rm fd}({\mathfrak {l}},L)\;=\;K_{\rm fd}({\mathfrak {l}},L)[W_{\mathfrak {q}}^{-1}] $$ as the subring $$ {\rm\bf Z}[X,X^{-1},\frac{1}{1-X^2}]\;\subseteq\;{\rm\bf Q}(X) $$ of the rational function field ${\rm\bf Q}(X)$. The character of a finite-dimensional $({\mathfrak {s}}l_2,\SO(2))$-module $F_k$ of ${\mathfrak {u}}$-highest weight $-k\alpha$, $k\geq 0$, is, by Weyl's formula, $$ c_{\mathfrak {q}}(F_k)\;=\;\frac{X^{-k}-X^{k+2}}{1-X^2}. $$ The formal identity $$ \frac{X^{-k}-X^{k+2}}{1-X^2}\;=\;X^k+X^{k-2}+\cdots+X^{-(k-2)}+X^{-k} $$ reflects the second identity of Theorem \ref{thm:fdmono}, i.e.\ \begin{equation} c_{\mathfrak {q}}(F_k)\;=\;[F_k|_{{\mathfrak {l}},L}]\;=\;[k\alpha]+[(k-2)\alpha]+\cdots+[-(k-2)\alpha]+[-k\alpha]. \label{eq:finitelres} \end{equation} In particular we get the structure of $F_k$ as an $({\mathfrak {l}},L)$-module. A fundamental property of cohomological characters is that this remains true in general, not only for finite-dimensional modules, but whenever both sides of the equation happen to be well defined. Now write $D_k$ for the (limit of) discrete series representation with lowest $\SO(2)$-type $k\cdot\alpha$, $k\geq 1$. Its character is $$ c_{\mathfrak {q}}(D_k)\;=\;\frac{X^k}{1-X^2}. $$ Assuming $k\geq k'+3$, the multiplicativity of our characters tells us that $$ c_{{\mathfrak {q}}}(D_k\otimes_{\rm\bf C} F_{k'})\;=\;c_{\mathfrak {q}}(D_k)\cdot c_{\mathfrak {q}}(F_{k'})\;=\; \frac{X^k}{1-X^2}\cdot\frac{X^{-k'}-X^{k'+2}}{1-X^2}\;=\; $$ $$ \frac{X^k\cdot(X^{k'}+X^{k'-2}+\cdots+X^{-(k'-2)}+X^{-k'})}{1-X^2}\;=\; $$ $$ c_{\mathfrak {q}}(D_{k+k'})\;+\;c_{\mathfrak {q}}(D_{k+k'-2})\;+\;\cdots\;+\;c_{\mathfrak {q}}(D_{k-k'}).
$$ Without appealing to Theorem \ref{thm:independence} we would like to argue that this formula implies the decomposition relation \begin{equation} [D_k\otimes_{\rm\bf C} F_{k'}]\;=\;[D_{k+k'}]\;+\;[D_{k+k'-2}]\;+\;\cdots\;+\;[D_{k-k'}] \label{eq:tensordecomp} \end{equation} in $K_{\rm fl}({\mathfrak {s}}l_2,\SO(2))$. To do so we first solve the Blattner problem for each $D_k$. With the same notation as before, we have canonical isomorphisms $$ K_{\rm df}({\mathfrak {l}},L)\;=\;K_{\rm a}({\mathfrak {l}},L)\;=\;{\rm\bf Z}[[X,X^{-1}]] $$ of abelian groups, where the right hand side denotes the {\em abelian group} of formal unbounded Laurent series in $X$. It is naturally a ${\rm\bf Z}[X,X^{-1}]$-module, and this module structure is compatible with the canonical $K_{\rm fd}({\mathfrak {l}},L)$-module structure of the left hand side. The restriction of $D_k$ to $({\mathfrak {l}},L)$ will be an element in this Grothendieck group. However, due to the compatibility of characters with restrictions, we get in the localization the relation $$ \frac{X^k}{1-X^2}\;=\;[D_k|_{{\mathfrak {l}},L}]\;\in\;{\rm\bf Z}[[X,X^{-1}]][\frac{1}{1-X^2}]. $$ Now we have $$ \frac{X^k}{1-X^2}\;=\;\sum_{i=0}^\infty X^{k+2i}, $$ and we would like to conclude that \begin{equation} [D_k|_{{\mathfrak {l}},L}]\;=\;[k\alpha]+[(k+2)\alpha]+[(k+4)\alpha]+\cdots \label{eq:unlocalkl} \end{equation} is a valid identity in the unlocalized $K_{\rm df}({\mathfrak {l}},L)$. The problem is that this identity is valid a priori only up to the kernel of the localization map \begin{equation} {\rm\bf Z}[[X,X^{-1}]]\;\to\;{\rm\bf Z}[[X,X^{-1}]][\frac{1}{1-X^2}].
\label{eq:localizationx} \end{equation} It is an easy exercise to see that the kernel is, as a vector space, generated by the elements \begin{equation} y_\alpha^{(n)}\;:=\;\sum_{i=0}^\infty\binom{n-1+i}{n-1}(X^{n+2i}\;+\;(-1)^{n+1}X^{-(n+2i)}) \label{eq:yalphan} \end{equation} and \begin{equation} y_{\alpha,+}^{(n)}\;:=\;\sum_{i=0}^\infty\frac{n-1+2i}{n-1+i}\binom{n-1+i}{n-1}(X^{n-1+2i}\;+(-1)^{n+1}\;X^{-(n-1+2i)}) \label{eq:yalphanplus} \end{equation} for $n\geq 1$. We remark that the coefficients occurring are in fact {\em integers}. Note that $$ (X+X^{-1})\cdot y_\alpha^{(n)}\;=\;y_{\alpha,+}^{(n)}, $$ and $$ (X-X^{-1})\cdot y_\alpha^{(n)}\;=\;y_\alpha^{(n-1)}. $$ Therefore, by induction, the kernel of $(X-X^{-1})^n$ as an endomorphism of ${\rm\bf Z}[[X,X^{-1}]]$ is generated by $y_\alpha^{(n)}$ as a ${\rm\bf Z}[X,X^{-1}]$-module. Using the above relations and appealing to Proposition \ref{prop:lockernel}, it is not hard to see that the collection of the elements \eqref{eq:yalphan} and \eqref{eq:yalphanplus} for $n\geq 1$ is indeed a ${\rm\bf C}$-basis of the kernel of the map \eqref{eq:localizationx}. In order to prove \eqref{eq:unlocalkl}, Harish-Chandra tells us that the multiplicity of the $\SO(2)$-module $m\alpha$ in $D_k$ is bounded by a constant independent of $m$. However the coefficients in \eqref{eq:yalphan} and \eqref{eq:yalphanplus} grow polynomially of order $n-1$, and this remains true for linear combinations of those terms. Therefore the only possible contribution from the kernel of the localization map to the identity \eqref{eq:unlocalkl} is a ${\rm\bf C}$-linear combination of $$ y_\alpha^{(1)}\;=\;\sum_{i=-\infty}^\infty X^{1+2i} $$ and $$ y_{\alpha,+}^{(1)}\;=\;\sum_{i=-\infty}^\infty X^{2i}. $$ However the minimal $\SO(2)$-type of $D_k$ is $k\cdot\alpha$ and is uniquely determined.
Since $k\geq 1$, this implies that there cannot be any contribution from those two kernel elements to the identity \eqref{eq:unlocalkl}, i.e.\ the latter formula must be true, completing its proof. With \eqref{eq:unlocalkl} at hand, we may verify the identity \eqref{eq:tensordecomp} explicitly in the unlocalized Grothendieck group of admissible $({\mathfrak {l}},L)$-modules: $$ [D_k|_{{\mathfrak {l}},L}\otimes_{\rm\bf C} F_{k'}|_{{\mathfrak {l}},L}]\;=\;\sum_{i=0}^\infty X^{k+2i}\cdot(X^{k'}+X^{k'-2}+\cdots+X^{-k'})\;=\; $$ $$ [D_{k+k'}|_{{\mathfrak {l}},L}]\;+\;[D_{k+k'-2}|_{{\mathfrak {l}},L}]\;+\;\cdots\;+\;[D_{k-k'}|_{{\mathfrak {l}},L}] $$ by the decomposition relation \eqref{eq:finitelres} for $F_{k'}$. We see that over $L$, there is no contribution from the kernel of the localization to the identity \eqref{eq:tensordecomp}. A fortiori this is true over $({\mathfrak {s}}l_2,\SO(2))$, hence \eqref{eq:tensordecomp} follows. Along the same lines we may deduce from $$ c_{\mathfrak {q}}(D_k\otimes_{\rm\bf C} D_{k'})\;=\;c_{\mathfrak {q}}(D_k)\cdot c_{\mathfrak {q}}(D_{k'}) $$ the decomposition of the tensor product of two discrete series modules $D_k$ and $D_{k'}$. It decomposes again into a sum of discrete series modules. So what happens if we consider the opposite discrete series $D_{-k}$, i.e.\ the unique irreducible $({\mathfrak {s}}l_2,\SO(2))$-module with (unique) minimal $\SO(2)$-type $-k\alpha$? Here the tensor product of $D_k$ and $D_{-k'}$ gives along the same lines $$ c_{\mathfrak {q}}(D_k\otimes_{\rm\bf C} D_{-k'})\;=\;c_{\mathfrak {q}}(D_k)\cdot c_{\mathfrak {q}}(D_{-k'})\;=\; \frac{X^k}{1-X^2}\cdot\frac{X^{-k'-2}}{1-X^2}. $$ This is meaningful in the localization, but cannot capture everything here.
The reason is that the tensor product of $D_k$ and $D_{-k'}$, as a module over $\SO(2)$, no longer lies in $\mathcal C_{\rm df}({\mathfrak {l}},L)$, because the multiplicity of either $0\cdot\alpha$ or $1\cdot\alpha$, depending on the parity of $k+k'$, is no longer finite. This tells us that there must be a (large) contribution of the continuous spectrum to $D_k\otimes_{\rm\bf C} D_{-k'}$. Another interesting relation is $$ [D_1]+[D_{-1}]\;=\;y_{\alpha}^{(1)}, $$ which shows that this element lies in the kernel of the localization map \eqref{eq:localizationx}. It is annihilated by $W_{\mathfrak {q}}=1-X^2$. However there is no such relation for finite linear combinations of elements $[D_k]$ and $[D_{-k'}]$ if we insist on $k,k'\geq 2$. This is because asymptotically the multiplicities of the $\SO(2)$-types in such a virtual finite-length representation are bounded by a constant. Therefore the only contribution of the kernel of the localization map comes from the kernel of $W_{\mathfrak {q}}=1-X^2$ itself, which is generated by $y_\alpha^{(1)}$ and $y_{\alpha,+}^{(1)}$ as a ${\rm\bf C}$-vector space. This shows that any such element from the kernel must contain at least one of the two $\SO(2)$-modules $0\cdot\alpha$ or $1\cdot\alpha$. In particular this abstract argument shows that the subgroup $$ K_{\rm fl}^{\geq 2}({\mathfrak {s}}l_2,\SO(2))\;\subseteq\; K_{\rm fl}({\mathfrak {s}}l_2,\SO(2)) $$ generated by $D_k$ and $D_{-k'}$ for $k,k'\geq 2$, has trivial intersection with the kernel of \eqref{eq:localizationx}; a fortiori its intersection with the kernel of $c_{\mathfrak {q}}$ is trivial.
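For completeness we verify the annihilation directly (an elementary check we add): shifting the summation index gives $$ (1-X^2)\cdot y_\alpha^{(1)}\;=\;\sum_{i=-\infty}^\infty X^{1+2i}\;-\;\sum_{i=-\infty}^\infty X^{3+2i}\;=\;0, $$ since both sums run over all odd powers of $X$; the same argument shows that $W_{\mathfrak {q}}=1-X^2$ annihilates $y_{\alpha,+}^{(1)}=\sum_{i=-\infty}^\infty X^{2i}$.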
An analogous statement remains true if we consider the subgroup $$ K_{\rm df}^{\geq d+2,d}({\mathfrak {s}}l_2,\SO(2))\;\subseteq\; K_{\rm df}({\mathfrak {s}}l_2,\SO(2)) $$ of discretely decomposable modules with finite multiplicities, which is generated by those with the property that the multiplicity of $D_k$ resp.\ $D_{-k'}$ is bounded by a polynomial of degree $d$ in $k$ resp.\ $k'$, and which do {\em not} contain $D_k$ resp.\ $D_{-k'}$ for $k,k'<d+2$. This again is a consequence of the explicit form of the elements \eqref{eq:yalphan} and \eqref{eq:yalphanplus}. \subsection{The general case} As we already saw in the example of the previous section, we need to understand the kernel of the localization map in order to lay hands on modules beyond finite length. This situation already arises when we approach Blattner formulae, i.e.\ restrictions of admissible modules from $({\mathfrak {g}},K)$ to $K$. Generally the restriction will no longer be of finite length, as over $K$ finite length is equivalent to finite dimension. We will give complete results here, but won't go into detailed proofs, as this is quite technical. For details the reader can consult Sections 4 to 6 in \cite{januszewskipreprint}. In principle one may consider the localization problem for general germane parabolic subalgebras, i.e.\ for Levi factors whose Lie algebras are not necessarily abelian. However in this case the structure of the Grothendieck group in question, $K_{\rm df}({\mathfrak {l}},L\cap K)$ say, as a $K_{\rm fd}({\mathfrak {l}},L\cap K)$-module is not clear in general. In any case the module structure of $K_{\rm df}({\mathfrak {l}},L\cap K)$ may be deduced formally from the $K_{\rm fd}({\mathfrak {l}},L\cap K)$-module structure of $K_{\rm fl}({\mathfrak {l}},L\cap K)$. Classically characters are studied via their restrictions to Levi factors corresponding to Cartan algebras ${\mathfrak {l}}$.
This is also the case where we can give complete answers, as then the multiplicative structure alluded to above is clear. Another appeal of this case is that the ${\mathfrak {u}}$-cohomology of any finite length $({\mathfrak {g}},K)$-module is a {\em finite-dimensional} $({\mathfrak {l}},L\cap K)$-module. Let us introduce the basic setup. Suppose we are given a commutative square $$ \begin{CD} ({\mathfrak {g}}',K')@>i>>({\mathfrak {g}},K)\\ @AAA @AAA\\ ({\mathfrak {q}}',L'\cap K') @>j>> ({\mathfrak {q}},L\cap K) \end{CD} $$ of pairs, where $i$ is an inclusion of reductive pairs, the vertical arrows are inclusions of constructible parabolic pairs, and $j$ is induced by $i$. Let ${\mathfrak {q}}={\mathfrak {l}}+{\mathfrak {u}}$ and ${\mathfrak {q}}'={\mathfrak {l}}'+{\mathfrak {u}}'$ be the Levi decompositions and assume that ${\mathfrak {l}}'\subseteq{\mathfrak {l}}$ is abelian, i.e.\ a Cartan subalgebra in ${\mathfrak {g}}'$. Then $\Delta({\mathfrak {u}}+{\mathfrak {u}}',{\mathfrak {l}}')$ generates a lattice $$ \Lambda_0\;\subseteq\;{\mathfrak {l}}'^*. $$ The group algebra ${\rm\bf C}[\Lambda_0]$ acts naturally on the space ${\rm\bf C}[[\Lambda_0]]$ of formal unbounded Laurent series in $\Lambda_0$ via multiplication. This turns ${\rm\bf C}[[\Lambda_0]]$ into a ${\rm\bf C}[\Lambda_0]$-module which reflects the $K_{\rm fd}({\mathfrak {l}}',L'\cap K')$-module structure of $K_{\rm df}({\mathfrak {l}}',L'\cap K')$ with the restriction that $\Lambda_0$ only captures ${\mathfrak {u}}+{\mathfrak {u}}'$-integral weights. In our study of localization this is enough, as the Weyl denominator stemming from ${\mathfrak {u}}$ and ${\mathfrak {u}}'$ naturally lives inside ${\rm\bf C}[\Lambda_0]$, and its kernel in ${\rm\bf C}[[\Lambda_0]]$ is sufficient to describe the kernel in the larger $K_{\rm df}({\mathfrak {l}}',L'\cap K')$. However for reasons of symmetry we introduce a slightly larger lattice $\Lambda$.
Say the rank of $\Lambda_0$ is $d$; then $\Lambda$ is uniquely characterized by the short exact sequence $$ 0\to\Lambda_0\to\Lambda\to({\rm\bf Z}/2{\rm\bf Z})^d\to 0. $$ We think of $\Lambda$ as a {\em multiplicative} abelian group. Then any $\alpha\in\Delta({\mathfrak {u}}+{\mathfrak {u}}',{\mathfrak {l}}')$ has a {\em square root} $\alpha^{\frac{1}{2}}$ in $\Lambda$, i.e.\ an element whose square is $\alpha$. Further, to any such $\alpha$ we associate a generalized {\em Weyl reflection} $w_\alpha$ sending $\alpha$ to $\alpha^{-1}$ and $\alpha^{\frac{1}{2}}$ to $\alpha^{-\frac{1}{2}}$. To define $w_\alpha$ we choose a Cartan subalgebra ${\mathfrak {h}}\subseteq{\mathfrak {l}}$ containing ${\mathfrak {l}}'$ complementary to ${\mathfrak {u}}+{\mathfrak {u}}'$. Then $\alpha$ has a preimage $\alpha'\in\Delta({\mathfrak {u}}+{\mathfrak {u}}',{\mathfrak {h}})$ and the associated Weyl reflection $w_{\alpha'}$ projects down to $w_\alpha$ in $\Lambda$. This projected image is independent of the choice of $\alpha'$. The collection of the $w_\alpha$ generates a generalized Weyl group $W_\Lambda$ which contains the Weyl group $W({\mathfrak {g}}',{\mathfrak {l}}')$. Both groups act on ${\rm\bf C}[\Lambda]$ and ${\rm\bf C}[[\Lambda]]$. We fix a $W_\Lambda$-invariant bilinear scalar product $\langle\cdot,\cdot\rangle$ on $\Lambda\otimes_{\rm\bf Z}{\rm\bf R}$. In order to descend from $\Lambda$ to $\Lambda_0$ we introduce the Galois group $G_{\Lambda/\Lambda_0}$ of the field extension ${\rm\bf C}(\Lambda)/{\rm\bf C}(\Lambda_0)$, the latter fields being the quotient fields of the domains ${\rm\bf C}[\Lambda]$ resp.\ ${\rm\bf C}[\Lambda_0]$. By construction $G_{\Lambda/\Lambda_0}$ is isomorphic to $\{\pm 1\}^d$, each sign corresponding to a choice of square root of an element of a fixed basis of $\Lambda_0$ in ${\rm\bf C}[\Lambda]$. Then $G_{\Lambda/\Lambda_0}$ acts on ${\rm\bf C}[\Lambda_0]$ and ${\rm\bf C}[\Lambda]$ and this action commutes with the action of $W_\Lambda$.
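To fix ideas, consider the rank one case $d=1$, with $\Lambda_0$ generated by a single root $\alpha$; the following is a direct specialization of the definitions above. Written multiplicatively, $$ \Lambda_0\;=\;\{\alpha^{n}\mid n\in{\rm\bf Z}\},\;\;\; \Lambda\;=\;\{\alpha^{\frac{n}{2}}\mid n\in{\rm\bf Z}\}, $$ the generalized Weyl reflection acts by $$ w_\alpha:\;\;\;\alpha^{\frac{n}{2}}\;\mapsto\;\alpha^{-\frac{n}{2}}, $$ and $G_{\Lambda/\Lambda_0}\cong\{\pm 1\}$ is generated by the automorphism sending $\alpha^{\frac{1}{2}}$ to $-\alpha^{\frac{1}{2}}$, which fixes ${\rm\bf C}(\Lambda_0)$ pointwise and commutes with $w_\alpha$.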
Then we may use standard Galois theory to descend from $\Lambda$ to $\Lambda_0$. We leave the details to the reader. In the higher rank case the kernel of the localization map is completely described by the following generalization of the elements \eqref{eq:yalphan} and \eqref{eq:yalphanplus}. For any $\alpha\in\Delta({\mathfrak {u}}+{\mathfrak {u}}',{\mathfrak {l}}')$ with square root $\alpha^{\frac{1}{2}}\in\Lambda$ we have the elements $$ d_{\alpha,\pm}\;\;:=\;\;\alpha^{-\frac{1}{2}}\pm\alpha^{\frac{1}{2}} \;\;\in\;\;{\rm\bf C}[\Lambda], $$ $$ s_{\alpha}\;\;:=\;\; \alpha^{\frac{1}{2}} \sum_{k=0}^\infty \alpha^{k} \;\;\in\;\;{\rm\bf C}[[\Lambda]], $$ and for $n\geq 0$ we set $$ y_{\alpha}^{(n)}\;\;:=\;\;s_{\alpha}^n+(-1)^{n+1} w_\alpha s_{\alpha}^n. $$ This is the one-dimensional generalization of the element \eqref{eq:yalphan}. The telescoping identities $$ d_{\alpha,-}\cdot s_{\alpha}\;=\;1, $$ $$ d_{\alpha,-}\cdot w_\alpha s_{\alpha}\;=\; -1, $$ give $$ d_{\alpha,-}\cdot y_{\alpha}^{(n)}\;=\;y_{\alpha}^{(n-1)}, $$ and therefore $$ d_{\alpha,-}^n\cdot y_{\alpha}^{(n)}\;=\;y_{\alpha}^{(0)}\;=\;0. $$ The second element \eqref{eq:yalphanplus} corresponds to the product of $y_{\alpha}^{(n)}$ and $d_{\alpha,+}$. Say we are given elements $\beta_1,\dots,\beta_r\in\Lambda$. Then we define the module of {\em $(\beta_1,\dots,\beta_r)$-finite} Laurent series ${\rm\bf C}[[\Lambda]]_{(\beta_1,\dots,\beta_r)}$ as the space of series $$ f\;=\;\sum_{\mu\in\Lambda}f_\mu\cdot\mu\;\;\in\;\;{\rm\bf C}[[\Lambda]], $$ where $f_\mu\in{\rm\bf C}$, satisfying the following finiteness condition: for any $\lambda\in\Lambda$ the set $$ \{(k_1,\dots,k_r)\in{\rm\bf Z}^r\mid f_{\lambda\beta_1^{k_1}\cdots\beta_r^{k_r}}\neq 0\} $$ is {\em finite}.
Then ${\rm\bf C}[[\Lambda]]_{(\beta_1,\dots,\beta_r)}$ is a ${\rm\bf C}[\Lambda]$-submodule of ${\rm\bf C}[[\Lambda]]$ and it turns out that the kernel of multiplication with $d_{\alpha,-}^n$ in ${\rm\bf C}[[\Lambda]]$ is given by $$ {\rm\bf C}[[\Lambda]]_{(\alpha)}\cdot y_{\alpha}^{(n)}. $$ The above notion of finiteness is necessary for the products occurring in this expression to be well defined. This representation is still redundant, as the case $n=1$ already shows. Pick a system of representatives ${\rm\bf C}[[\Lambda]]_{\alpha^{\frac{1}{2}}=1}\subseteq{\rm\bf C}[[\Lambda]]_{(\alpha)}$ for the factor module $$ {\rm\bf C}[[\Lambda]]_{(\alpha)}/\left({\rm\bf C}[[\Lambda]]_{(\alpha)}\cdot (\alpha^{\frac{1}{2}}-1)\right). $$ Then the above kernel equals $$ \sum_{k=1}^n {\rm\bf C}[[\Lambda]]_{\alpha^{\frac{1}{2}}=1}\cdot y_{\alpha}^{(k)}\;+\;{\rm\bf C}[[\Lambda]]_{\alpha^{\frac{1}{2}}=1}\cdot d_{\alpha,+}\cdot y_{\alpha}^{(k)}, $$ and this representation is unique. The kernel of a mixed term $$ d_{\underline{\alpha}}\;:=\;\prod_{i=1}^r d_{\alpha_i,-}^{n_i} $$ for a collection $$ \underline{\alpha}\;=\;(\alpha_1,\dots,\alpha_r) $$ of pairwise distinct elements of $\Delta({\mathfrak {u}}+{\mathfrak {u}}',{\mathfrak {l}}')$ and a tuple of positive integer exponents $$ \underline{n}=(n_1,\dots,n_r) $$ is generated by the element \begin{equation} y_{\underline{\alpha}}^{\underline{n}}\;:=\; s_{\alpha_1}^{n_1}\cdot s_{\alpha_2}^{n_2}\cdots s_{\alpha_r}^{n_r} + (-1)^{1+n_1+\cdots+n_r} (w_{\alpha_1}s_{\alpha_1})^{n_1}\cdot (w_{\alpha_2}s_{\alpha_2})^{n_2}\cdots (w_{\alpha_r}s_{\alpha_r})^{n_r}, \label{eq:generaly} \end{equation} and its lower degree analogues. For a subset $I\subseteq\{1,\dots,r\}$ we denote by $\underline{\alpha}^I$ the tuple deduced from $\underline{\alpha}$ by deletion of the components with indices in $I$.
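The redundancy mentioned above can be made explicit for a single root $\alpha$ and $n=1$: summing the two geometric series gives $$ y_{\alpha}^{(1)}\;=\;s_\alpha+w_\alpha s_\alpha\;=\;\sum_{k\in{\rm\bf Z}}\alpha^{k+\frac{1}{2}}, $$ so that $$ d_{\alpha,+}\cdot y_{\alpha}^{(1)}\;=\;2\sum_{k\in{\rm\bf Z}}\alpha^{k} \;\;\;\text{and}\;\;\; \alpha^{\frac{1}{2}}\cdot y_{\alpha}^{(1)}\;=\;\frac{1}{2}\,d_{\alpha,+}\cdot y_{\alpha}^{(1)}. $$ Thus multiplying $y_{\alpha}^{(1)}$ by $\alpha^{\frac{1}{2}}$ already lands in the span of the two generators, which is why representatives modulo $\alpha^{\frac{1}{2}}-1$ suffice.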
Then again we have the explicit description of the kernel as $$ \sum_{I\subsetneq\{1,\dots,r\}} {\rm\bf C}[[\Lambda]]_{\underline{\alpha}^I}\cdot y_{\underline{\alpha}^I}^{\underline{n}^I} $$ and one may again find an explicit description via a suitable set of representatives for the relations $$ \forall i\not\in I:\;\;\;\alpha_i^{\frac{1}{2}}\;=\;1. $$ However, the resulting expression is technical due to the recursive structure arising from the lower degree terms. Along these lines one may prove \begin{theorem}[Januszewski, {\cite{januszewskipreprint}}]\label{thm:kernel} Let $$ z=\sum_{\lambda}z_\lambda\cdot\lambda \;\in\; \kernel d_{\underline{\alpha}}^{\underline{n}} $$ and suppose that there exists a $\lambda_0\in\Lambda$ such that for any $1\leq i\leq r$ and any $\lambda\in\Lambda$ with \begin{equation} \left| \langle \lambda-\lambda_0,\alpha_i\rangle \right| < \frac{n_i+1}{2}\cdot\langle \alpha_i,\alpha_i\rangle + \sum_{j\neq i}\frac{n_j}{2}\cdot\langle \alpha_i,\alpha_j\rangle, \label{regularity} \end{equation} we have $$ z_\lambda = 0. $$ Then $$ z=0. $$ \end{theorem} \subsection{Applications to Blattner formulae} Suppose we are in the situation where $({\mathfrak {g}}',K')=({\mathfrak {k}},K)$, and assume for simplicity that $K$ is connected. The main idea to calculate the kernel of the localization map in this case is that analogously to the explicit expressions \eqref{eq:yalphan} and \eqref{eq:yalphanplus} we get explicit expressions for the term \eqref{eq:generaly}, which then shows that the coefficients of the monomials grow polynomially with degree depending on $\underline{n}$. At the same time we know, by Harish-Chandra's bound for the multiplicities of $K$-types in irreducibles, that the multiplicities of $K$-types are bounded by their dimensions.
Therefore we have for any finite length $({\mathfrak {g}},K)$-module $X$ that the multiplicity $m_\lambda$ of the $K$-type in $X$ with highest weight $\lambda$ is bounded by \begin{equation} m_\lambda\;\;\leq\;\; C\cdot\!\!\! \prod_{\beta\in\Delta({\mathfrak {n}},{\mathfrak {t}})}\!\! \langle\lambda+\rho({\mathfrak {n}}),\beta\rangle, \label{eq:weylbound} \end{equation} for a constant $C$ depending on $X$ but not on $\lambda$, which is a consequence of Weyl's dimension formula. This bound yields a constraint on the exponents $\underline{n}$ which ultimately results in the following condition: \begin{itemize} \item[(S)] For any highest weight $\lambda$ of a $K$-type occurring in $X$, any $w\in W(K,L')=W({\mathfrak {k}},{\mathfrak {l}}')$, any numbering $\beta_1,\dots,\beta_{r}$ of the pairwise distinct elements of $\Delta({\mathfrak {u}}+{\mathfrak {u}}',{\mathfrak {l}}')$ and any non-negative integers $n_1,\dots,n_r$ with the property that for any $S\subseteq\{1,\dots,r\}$ \begin{equation} \sum_{s\in S}n_s\;\leq\;\#\{\beta\in\Delta({\mathfrak {u}}',{\mathfrak {l}}')\mid\exists i\in S:\langle\beta_i,\beta\rangle\neq 0\}, \label{eq:nsum} \end{equation} the condition \begin{equation} |\langle w(\lambda+\rho({\mathfrak {u}}'))-\lambda_0,\beta_1\rangle| \;\geq\; \frac{1}{2} \langle \beta_1,\beta_1\rangle \;+\; \sum_{i=1}^{r} \frac{n_{i}+1}{2} \langle \beta_1,\beta_i\rangle \label{eq:lambdaregularity} \end{equation} holds. \end{itemize} Here $\lambda_0$ is fixed in advance; a natural choice would be $\lambda_0=\rho({\mathfrak {u}}')$, for example.
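As an illustration of \eqref{eq:weylbound}, consider the simplest case where ${\mathfrak {k}}$ has a single positive root $\beta$, i.e.\ $K$ is locally ${\rm SU}(2)$. Then Weyl's dimension formula reads $$ \dim V_\lambda\;=\;\frac{\langle\lambda+\rho({\mathfrak {n}}),\beta\rangle}{\langle\rho({\mathfrak {n}}),\beta\rangle}, $$ and \eqref{eq:weylbound} becomes the linear bound $$ m_\lambda\;\leq\;C\cdot\langle\lambda+\rho({\mathfrak {n}}),\beta\rangle, $$ recovering Harish-Chandra's bound, linear in the dimension of the $K$-type.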
Then if we consider the subgroup $K_{\lambda_0}$ of $K_{\rm fl}({\mathfrak {g}},K)$ generated by the modules $X$ satisfying condition (S) we know by Theorem \ref{thm:kernel} and our multiplicity bound that in the commutative diagram $$ \begin{CD} K_{\lambda_0}@>\iota>> K_{\rm df}({\mathfrak {k}},K)\\ @Vc_{\mathfrak {q}} VV@Vc_{{\mathfrak {q}}'} VV\\ C_{{\mathfrak {q}},\rm fl}({\mathfrak {l}},L\cap K)@>\iota>> C_{{\mathfrak {q}}',\rm df}({\mathfrak {l}}',L')[W_{{\mathfrak {q}}/{\mathfrak {q}}'}^{-1}] \end{CD} $$ the map $c_{{\mathfrak {q}}'}$ is injective on the image of $\iota$, the latter denoting the forgetful map along $i$. In particular the restriction of the algebraic character $c_{\mathfrak {q}}(X)$ to the maximal torus $L'\subseteq T$ uniquely determines the Blattner formula for every $X$ satisfying condition (S), cf.\ \cite{januszewskipreprint}. By contraposition we obtain that any $X$ in the kernel of the localization map must contain a $K$-type violating the regularity condition \eqref{eq:lambdaregularity}. In summary, we have reduced Blattner formulae to the following assertions: \begin{itemize} \item[(B)] {\em Boundedness:} Multiplicities of the $K$-types occurring in $X$ are linearly bounded by their dimension (Harish-Chandra). \item[(S)] {\em Sample:} Knowledge of the multiplicities for the set of $K$-types violating the regularity condition \eqref{eq:lambdaregularity}. \end{itemize} We will see in the next section that the same statement remains true for not necessarily compact $({\mathfrak {g}}',K')$. We emphasize that (S) needs to be known {\em a priori}. An instance where this may be verified without using any character theory is given by Schmid's upper bound for the $K$-types for the discrete series \cite[Theorem 1.3]{schmid1975}. His result may be used to deduce the Blattner conjecture in many cases, and we conjecture that our regularity condition for $\lambda_0=\rho({\mathfrak {u}}')$ eventually covers all of the discrete series.
A complete proof of the Blattner formula using classical character theory was given by Hecht and Schmid in \cite{hechtschmid1975}. \subsection{Applications to discretely decomposable branching problems} Let us return to the case of a general inclusion $i:({\mathfrak {g}}',K')\to ({\mathfrak {g}},K)$ of reductive pairs, and corresponding germane parabolic subalgebras ${\mathfrak {q}}'$ and ${\mathfrak {q}}$ as before. In general the restriction of a finite length $({\mathfrak {g}},K)$-module $X$ will not be discretely decomposable as a $({\mathfrak {g}}',K')$-module. Furthermore, even if $X$ is discretely decomposable, we do not have a universal bound for the multiplicities of composition factors at hand, contrary to the case of ${\mathfrak {g}}'={\mathfrak {k}}$ treated in the previous section. Therefore we need to impose these conditions a priori. Let us consider in $K_{\rm fl}({\mathfrak {g}},K)$ the subgroup $K'$ generated by the modules $X$ with the property that the restriction to $({\mathfrak {g}}',K')$ is discretely decomposable with finite multiplicities. Then pullback along $i$ provides us with a natural group homomorphism $$ \iota:\;\;\;K'\to K_{\rm df}({\mathfrak {g}}',K'). $$ We would like to have a similar map on the Levi factors of our parabolic subalgebras. But in order to guarantee its existence and compatibility with $\iota$, we assume that we are treating {\em Borel} subalgebras, i.e.\ we assume we are given a collection ${\mathfrak {q}}_1,\dots,{\mathfrak {q}}_s\subseteq{\mathfrak {g}}$ of constructible Borel subalgebras, whose Levi factors $L_i$ cover a dense subset of $G'\subseteq G$ up to conjugation, where $G'$ is the reductive subgroup of $G$ corresponding to the reductive subpair $({\mathfrak {g}}',K')$ of $({\mathfrak {g}},K)$. Such a collection always exists by Corollary \ref{cor:constructible}.
We set $$ {\mathfrak {q}}_i'\;:=\;{\mathfrak {q}}_i\cap{\mathfrak {g}}', $$ and fix the compatible Levi decompositions $$ {\mathfrak {q}}_i\;:=\;{\mathfrak {l}}_i+{\mathfrak {u}}_i, $$ and $$ {\mathfrak {q}}_i'\;:=\;{\mathfrak {l}}_i'+{\mathfrak {u}}_i', $$ respectively. Then all the Lie algebras ${\mathfrak {l}}_i$ are abelian and therefore restriction along the inclusion $$ i:\;\;\;({\mathfrak {l}}_i',L_i'\cap K')\to({\mathfrak {l}}_i,L_i\cap K) $$ sends finite length modules to finite length modules and discretely decomposables to discretely decomposables. Now we fix an exponent $b\geq 0$ and consider for any $({\mathfrak {g}},K)$-module $X$ whose restriction along $i$ is discretely decomposable the following boundedness condition: We have for any $1\leq i\leq s$ and any degree $q$ that the multiplicity of any character $\lambda\in{{\mathfrak {l}}_i'}^*$ in the $q$-th ${\mathfrak {u}}_i'$-cohomology of $X$ satisfies \begin{equation} m_\lambda(H^q({\mathfrak {u}}_i';X))\;\leq\; c_X\cdot\!\!\!\!\prod_{\beta\in\Delta({\mathfrak {u}}_i',{\mathfrak {l}}_i')}\!\!\!\langle\lambda+\rho({\mathfrak {u}}_i'),\beta\rangle^b + d_X, \label{eq:multiplicitybound} \end{equation} for some constants $c_X, d_X\geq 0$ independent of $\lambda$ and $i$. Then the modules $X$ satisfying the bound \eqref{eq:multiplicitybound} generate a subgroup $K_b'$ of $K'$ and we are interested in the commutative diagrams $$ \begin{CD} K_b'@>\iota>> K_{\rm df}({\mathfrak {g}}',K')\\ @Vc_{{\mathfrak {q}}_i} VV@Vc_{{\mathfrak {q}}_i'} VV\\ C_{{\mathfrak {q}}_i,\rm fl}({\mathfrak {l}}_i,L_i\cap K)@>\iota>> C_{{\mathfrak {q}}_i',\rm df}({\mathfrak {l}}_i',L_i')[W_{{\mathfrak {q}}_i/{\mathfrak {q}}_i'}^{-1}] \end{CD} $$ By the structure of the elements generating the kernel of the localization we know that the bound \eqref{eq:multiplicitybound} limits the degree of the annihilating Weyl denominator contributing to the kernel of the localization map in the image $\iota(K_b')$.
The knowledge of the bound for the degree together with Theorem \ref{thm:kernel} yields the condition \begin{itemize} \item[(S')] For irreducible $Z$ with infinitesimal character $\lambda$ occurring in the restriction of $X$ to $({\mathfrak {g}}',K')$, for any $1\leq i\leq s$ and for any $w\in W({\mathfrak {g}}',{\mathfrak {l}}_i')$, and any numbering $\beta_1,\dots,\beta_r$ of the elements of the set $\Delta({\mathfrak {u}}_i,{\mathfrak {l}}_i')$ and any non-negative integers $n_1,\dots,n_r$ satisfying \begin{equation} n_1+\cdots+n_r\;=\;b\cdot\#\{\beta\in\Delta({\mathfrak {u}}_i',{\mathfrak {l}}_i')\mid\exists j:\langle\beta_j,\beta\rangle\neq 0\}, \label{eq:nsumb} \end{equation} the condition \begin{equation} |\langle w(\lambda)-\lambda_0,\beta_1\rangle| \;\geq\; \frac{1}{2} \langle \beta_1,\beta_1\rangle + \sum_{l=1}^r \frac{n_l+1}{2} \langle \beta_1,\beta_l\rangle \label{eq:lambdaregularityfd} \end{equation} holds. \end{itemize} Then the subgroup $K_{b,S'}'\subseteq K'$ generated by those $X$ satisfying condition (S') has trivial intersection with the kernel of the localization map corresponding to the pair $({\mathfrak {q}}_i, {\mathfrak {q}}_i')$, and we obtain \begin{theorem}[Januszewski, {\cite{januszewskipreprint}}] For any $X\in K_{b,S'}'$ the multiplicities of all composition factors of $\iota(X)$ are uniquely determined by the images $\iota(c_{{\mathfrak {q}}_i}(X))$ of the characters in $C_{{\mathfrak {q}}_i',\rm df}({\mathfrak {l}}_i',L_i')[W_{{\mathfrak {q}}_i/{\mathfrak {q}}_i'}^{-1}]$ via the simultaneous preimage under the maps $c_{{\mathfrak {q}}_i'}$. \end{theorem} Here again we may generalize our result to modules possibly violating condition (S') as long as we know the contribution of the composition factors violating (S') for the various parabolics ${\mathfrak {q}}_i$. We remark however that even in the compact case the analogous condition (S) may involve infinitely many irreducibles.
This situation already arises if the root system of ${\mathfrak {k}}$ contains orthogonal roots. So far we have ignored the action of the Weyl group, which in certain cases may eventually allow us to strengthen condition (S) resp.\ (S'). Another remark is that the condition of {\em integrality} on the multiplicities, i.e.\ the fact that all $m_Z(X)$ are {\em integers}, may be exploited in certain cases as well, especially in conjunction with multiplicity-one statements. In general, not much is known so far about multiplicities in discretely decomposable restrictions beyond Harish-Chandra's bound for the multiplicities of $K$-types in finite length representations. We have the following \begin{conjecture}[Kobayashi, {\cite[Conjecture C]{kobayashi2000}}]\label{conj:kobayashi} Let $(G,G')$ be a semisimple symmetric pair, and $\pi\in\hat{G}$ an irreducible unitary representation of $G$. Assume that the restriction of $\pi$ to $G'$ is infinitesimally discretely decomposable. Then the dimension $$ \dim\Hom_{G'}(\tau,\pi|_{G'}),\;\;\;\tau\in\hat{G}' $$ is finite. \end{conjecture} Motivated by our results presented in this section the author formulated \begin{conjecture}[Januszewski, {\cite{januszewskipreprint}}]\label{conj:polybound} In the setting of Conjecture \ref{conj:kobayashi} the dimension $$ \dim\Hom_{G'}(\tau,\pi|_{G'}),\;\;\;\tau\in\hat{G}' $$ grows at most polynomially in the norm of the infinitesimal character of $\tau$ (in the sense of \eqref{eq:multiplicitybound}). \end{conjecture} \section{Galois-equivariant characters} In this section we sketch the first steps towards a Galois-equivariant algebraic character theory. The correct formal setup for this situation would be schemes representing representations, i.e.\ schemes parametrizing representations over the various extensions (including algebras, not only fields, and allowing for non-affine bases as well).
However, in order to keep the exposition as simple and self-contained as possible, we use an ad hoc approach in the classical language of linear algebraic groups and rational representations over fields of characteristic $0$. This fits well into Michael Harris' framework \cite{harris2012}. Instead of relying on Beilinson-Bernstein localization we introduce a rational version of Zuckerman's cohomological induction. This allows for a purely rational theory over any field $k$ of characteristic $0$, without the need to fall back to a universal domain like ${\rm\bf C}$, although we use ${\rm\bf C}$ in our exposition for convenience. We define modules for pairs $({\mathfrak {a}}_k,B_{k'})$ where ${\mathfrak {a}}_k$ is a Lie algebra defined over a field $k$ and $B_{k'}$ a reductive linear algebraic group defined over an extension $k'/k$. Such a module can, but need not, have $k$-rational points. This makes the situation interesting, and opens many questions that we unfortunately cannot deal with here. \subsection{Rationality of Harish-Chandra modules and Shimura varieties} In the context of Shimura varieties considered in \cite{harris2012} the point of departure is a reductive algebraic group $G$ defined over ${\rm\bf Q}$, together with a Shimura datum $(G,X)$. The latter canonically defines a family of complex algebraic varieties $S_U(G,X)$ where $U$ runs through the compact open subgroups of the finite adelization $G({\rm\bf A}^{(\infty)})$ of $G$. Following Shimura and Deligne the varieties $S_U(G,X)$ have canonical models over a number field $E(G,X)$. This rational structure comes from a set of distinguished points in $S_U(G,X)$, which stem from abelian varieties with complex multiplication. The rational structure on the latter has a description via certain reciprocity laws. Now to any special point $x\in X$ corresponds a stabilizer $K_x\subseteq G({\rm\bf R})$, which is a compact subgroup of maximal dimension (but not necessarily maximal).
However, as the point $x$ varies under the action of an $\alpha\in\Aut({\rm\bf C})$, the group $K_x$ varies as well, and in order to control the rational structure one needs to pass from $G$ to an inner form ${}^{\alpha,x}G$ of $G$, as $K_x$ is not necessarily defined over ${\rm\bf Q}$, i.e.\ compactness of the real-valued points is not necessarily preserved. This complicates the study of rationality of Harish-Chandra modules, as it forces the study of a collection of pairs in order to lay hands on the Galois action in general. In \cite{harris2012} the fundamentals of such a theory are developed. We follow a purely representation-theoretic approach here. For an introduction to the theory of linear algebraic groups, we refer to Mahir's lecture, and also to the books \cite{book_borel1991} and \cite{book_chevalley2005}. The articles \cite{springer1979} and \cite{murnaghan2003} contain short introductions to the relative theory. \subsection{Rational pairs and rational modules} Let $k\subseteq{\rm\bf C}$ be a subfield. A {\em pair over $k$} is a pair $({\mathfrak {a}}_k,B_{k'})$ consisting of a Lie algebra ${\mathfrak {a}}_k$ over $k$, a reductive linear algebraic group $B_{k'}$ over an extension $k'/k$, a $k'$-rational inclusion $$ i:\Lie(B_{k'})\to{\mathfrak {a}}_{k'} $$ of the algebraic Lie algebra of $B_{k'}$ into $$ {\mathfrak {a}}_{k'}\;:=\;{\mathfrak {a}}_k\otimes k', $$ together with an extension of the adjoint action of $B_{k'}$ on $\Lie(B_{k'})$ to ${\mathfrak {a}}_{k'}$ whose differential is the adjoint action of $\Lie(B_{k'})$ as a subalgebra on ${\mathfrak {a}}_{k'}$. We write $\mathcal C_{\rm fd}(B_{k'})$ for the category of $k'$-rational finite-dimensional representations of $B_{k'}$, i.e.\ of finite-dimensional $k'$-vector spaces with a rational action of $B_{k'}(k')$.
Then ${\mathfrak {a}}_{k'}$ is naturally an object inside $\mathcal C_{\rm fd}(B_{k'})$ and the above conditions on the pair $({\mathfrak {a}}_k,B_{k'})$ guarantee that ${\mathfrak {a}}_{k'}$ is a {\em Lie algebra object} inside this category. In other words, it comes with a $B_{k'}$-linear morphism $$ [\cdot,\cdot]:\;\;\;{\mathfrak {a}}_{k'}\otimes_{k'}{\mathfrak {a}}_{k'}\;\to\;{\mathfrak {a}}_{k'} $$ satisfying $$ [\cdot,\cdot]\circ\Delta\;=\;0, $$ where $$ \Delta:\;\;\;{\mathfrak {a}}_{k'}\;\to\;{\mathfrak {a}}_{k'}\otimes_{k'}{\mathfrak {a}}_{k'} $$ is the diagonal, and furthermore $[\cdot,\cdot]$ satisfies the Jacobi identity, which may be equally expressed diagrammatically inside $\mathcal C_{\rm fd}(B_{k'})$. Then a {\em finite-dimensional ($k'$-rational) $({\mathfrak {a}}_{k'},B_{k'})$-module} is an ${\mathfrak {a}}_{k'}$-module object $M$ in $\mathcal C_{\rm fd}(B_{k'})$. In other words we have a map $$ {\mathfrak {a}}_{k'}\otimes_{k'} M\;\to\;M $$ in $\mathcal C_{\rm fd}(B_{k'})$ satisfying the usual axioms of a module over ${\mathfrak {a}}_{k'}$, which may be expressed diagrammatically as well. This gives us the category $\mathcal C_{\rm fd}({\mathfrak {a}}_{k'},B_{k'})$ of finite-dimensional $({\mathfrak {a}}_{k'},B_{k'})$-modules. We introduce the category $\mathcal C(B_{k'})$ of ${\rm ind}$-objects in $\mathcal C_{\rm fd}(B_{k'})$. Every object in this category may be thought of as a (filtered) inductive limit of finite-dimensional representations of $B_{k'}$, or more explicitly as an increasing union of the latter. Then ${\mathfrak {a}}_{k'}$ is again a Lie algebra object in $\mathcal C(B_{k'})$ and we may analogously to $\mathcal C_{\rm fd}({\mathfrak {a}}_{k'},B_{k'})$ define the category $\mathcal C({\mathfrak {a}}_{k'},B_{k'})$ of {\em $({\mathfrak {a}}_{k'},B_{k'})$-modules rational over ${k'}$} as the category of ${\mathfrak {a}}_{k'}$-objects in $\mathcal C(B_{k'})$.
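To make the diagrammatic formulation of the Jacobi identity explicit: writing $\tau$ for the cyclic permutation of the three tensor factors of ${\mathfrak {a}}_{k'}\otimes_{k'}{\mathfrak {a}}_{k'}\otimes_{k'}{\mathfrak {a}}_{k'}$, it becomes the identity $$ [\cdot,\cdot]\circ\left([\cdot,\cdot]\otimes{\rm id}\right)\circ\left({\rm id}+\tau+\tau^2\right)\;=\;0 $$ of morphisms in $\mathcal C_{\rm fd}(B_{k'})$; evaluated on $x\otimes y\otimes z$ this is the familiar relation $[[x,y],z]+[[y,z],x]+[[z,x],y]=0$.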
For a rational pair, every inclusion $\sigma:{k'}\to{\rm\bf C}$ yields a complex Lie algebra $$ {\mathfrak {a}}^\sigma\;:=\;{\mathfrak {a}}_{k'}\otimes_{{k'},\sigma}{\rm\bf C} $$ and a complex Lie group $$ B^\sigma({\rm\bf C})\;:=\;(B_{k'}\otimes_{{k'},\sigma}{\rm\bf C})({\rm\bf C}). $$ Then inside this group we find a compact subgroup $B^\sigma$ of maximal dimension, unique up to conjugation by an element of $B^\sigma({\rm\bf C})$, with the property that the inclusion $$ B^\sigma\;\to\;B^\sigma({\rm\bf C}) $$ induces an equivalence $$ \mathcal C_{\rm fd}(B_{k'}\otimes_{{k'},\sigma}{\rm\bf C})\;\to\; \mathcal C_{\rm fd}(B^\sigma) $$ of the categories of finite-dimensional representations. We call the pair (in the sense of section 2) $({\mathfrak {a}}^\sigma,B^\sigma)$ {\em associated} to $({\mathfrak {a}}_{k'},B_{k'})$. So far we only defined $k'$-rational modules. We postpone the general definition, as it relies on the notion of base change. \subsection{Rational models of reductive pairs} Let $k\subseteq{\rm\bf C}$ be a subfield. A {\em reductive pair} over $k$ is a triple $({\mathfrak {g}}_k,K,G_k)$ consisting of a reductive linear algebraic group $G_k$ defined over $k$, its Lie algebra ${\mathfrak {g}}_k$, and a reductive subgroup $K\subseteq G_k\otimes_k{\rm\bf C}$ with the property that there is an involution $\theta\in\Aut(G_k\otimes_k{\rm\bf C})$ such that $K$ is the fixed point subgroup of $\theta$. We do not assume $K$ (or $\theta$) to be defined over $k$. As a subgroup of $G:=G_k\otimes_k{\rm\bf C}$ it acquires a natural rational structure and there is a unique field of definition $k_K/k$ inside ${\rm\bf C}$ and a unique reductive $k_K$-subgroup $K_{k_K}\subseteq G_k\otimes_k k_K$ whose base change to ${\rm\bf C}$ gives $K$. We assume ${\mathfrak {g}}_k\otimes_k{\rm\bf C}$ furthermore to be multiplicity-free as a $K$-module.
This implies that ${\mathfrak {k}}_{k_K}$ has a unique $K_{k_K}$-invariant complement ${\mathfrak {p}}_{k_K}$ inside ${\mathfrak {g}}_{k_K}$ and thus we have a canonical {\em Cartan decomposition} $$ {\mathfrak {g}}_{k_K}\;=\;{\mathfrak {p}}_{k_K}\;+\;{\mathfrak {k}}_{k_K}, $$ and in particular the involution $\theta$ is defined over $k_K$ and unique. To emphasize our idea of a {\em pair}, we write $({\mathfrak {g}}_k,K)^G$ for the triple, and call it {\em $K$-split} if $k_K=k$ and {\em non-$K$-split} otherwise. To such a datum, split or non-split, every inclusion $\sigma:k_K\to{\rm\bf C}$ provides us, as above, with a complex reductive Lie algebra $$ {\mathfrak {g}}^\sigma\;:=\;{\mathfrak {g}}_k\otimes_{k,\sigma}{\rm\bf C} $$ and a complex Lie group $$ K^\sigma({\rm\bf C})\;:=\;(K_{k_K}\otimes_{k_K,\sigma}{\rm\bf C})({\rm\bf C}). $$ Again we find a compact subgroup $K^\sigma$ inside the latter which gives rise to an equivalence between the finite-dimensional rational representations of $K_{k_K}\otimes_{k_K,\sigma}{\rm\bf C}$ and those of $K^\sigma$, and we call the pair $({\mathfrak {g}}^\sigma,K^\sigma)$ {\em associated} to $({\mathfrak {g}}_k,K)^G$. If $({\mathfrak {g}}^\sigma,K^\sigma)$ is a reductive pair, then we say that $({\mathfrak {g}}_k\otimes_{k,\sigma}\sigma(k),K_k\otimes_k\sigma(k))$ resp.\ $({\mathfrak {g}}_k,\theta)^G$ is a {\em $\sigma(k)$-rational model}. As in the previous section we may consider the category $\mathcal C_{{\rm fd}}({\mathfrak {g}}_{k_K},K_{k_K})$ of finite-dimensional ${k_K}$-rational $({\mathfrak {g}}_{k_K},K_{k_K})$-modules. It comes with a distinguished subcategory $\mathcal C_{{\rm fd}}({\mathfrak {g}}_{k_K},K_{k_K})^G$ which consists of the finite-dimensional $k_K$-rational representations of the algebraic group $G_{k_K}$.
This category is of particular importance to us, as it is eventually defined over $k$: the category $\mathcal C_{{\rm fd}}({\mathfrak {g}}_{k},K_{k})^G$ of $k$-rational (finite-dimensional) $G_k$-representations is a neutralized Tannakian category, i.e.\ it is a $k$-linear tensor category with a canonical fibre functor $\mathscr F$ into the category of finite-dimensional $k$-vector spaces. The linear algebraic group $G_k$ over $k$ arises as the Tannaka dual of $\mathcal C_{{\rm fd}}({\mathfrak {g}}_k,K_k)^G$. Then the forgetful functor $$ \mathcal C_{{\rm fd}}({\mathfrak {g}}_{k_K},K_{k_K})^G\;\to\; \mathcal C_{{\rm fd}}({\mathfrak {k}}_{k_K},K_{k_K})^K $$ corresponds to the inclusion $K_{k_K}\to G_{k_K}:=G_k\otimes_k k_K$ and thus to the induced inclusion of pairs. \subsection{Base change and restriction of scalars} For every map $\tau:k\to l$ of fields inside ${\rm\bf C}$ we get for every pair $({\mathfrak {a}}_k,B_{k'})$ an $l$-rational pair $({\mathfrak {a}}_l^\tau,B_{l'}^\tau)$ given by $$ {\mathfrak {a}}_l^\tau\;:=\;{\mathfrak {a}}_k\otimes_{k,\tau}l, $$ and $$ B_{l'}^\tau\;:=\;B_{k'}\otimes_{k',\tau}l', $$ where $l':=l\cdot\tau(k')$ denotes the compositum. Along the same lines every $({\mathfrak {a}}_{k'},B_{k'})$-module $M_{k'}$ gives rise to an $({\mathfrak {a}}_{l'}^\tau,B_{l'}^\tau)$-module $$ M_{l'}^\tau\;:=\;M_{k'}\otimes_{k',\tau}l'. $$ In the case where $\tau$ is a set-theoretic inclusion we drop it from the notation. If $l$ is finite over $\tau(k)$ and if the degree of the extension $l/\tau(k)$ is the same as the degree of $l'/\tau(k')$, then this operation is left adjoint to {\em restriction of scalars} (in the sense of Weil \cite{book_weil1961}), which is defined as follows.
Departing from an $l$-rational pair $({\mathfrak {a}}_l,B_{l'})$ we obtain a $k$-rational pair $$ (\res_\tau{\mathfrak {a}}_l,\res_\tau B_{l'}) $$ where $\res_\tau{\mathfrak {a}}_l$ denotes the pullback of the Lie algebra ${\mathfrak {a}}_l$ along $\tau:k\to l$, and $\res_\tau B_{l'}$ is the restriction of scalars of $B_{l'}$ along $\tau:k'\to l'$ in the sense of Weil. Then for any $({\mathfrak {a}}_{l'},B_{l'})$-module $M_{l'}$ we get a $(\res_\tau{\mathfrak {a}}_{l'},\res_\tau B_{l'})$-module $$ \res_\tau M_{l'}\;:=\;\tau^*(M_{l'}), $$ again as the pullback along $\tau$. If $({\mathfrak {a}}_{l'},B_{l'})=({\mathfrak {g}}_{l'},K)^G$ is a split reductive pair, then the restriction of scalars of $G_{l'}$ gives the reductive group associated to the restricted pair. \subsection{Rational Harish-Chandra modules for non-$K$-split pairs} We already introduced the category of $({\mathfrak {g}}_k,K)^G$-modules for $K$-split reductive pairs. Now in order to define a category of Harish-Chandra modules for non-$K$-split pairs we restrict ourselves to the connected case, i.e.\ we assume that $({\mathfrak {g}}_k,K)^G$ is a reductive pair with the property that $K$ is (geometrically) connected. For any field extension $l/k$ any $l$-rational ${\mathfrak {g}}_l$-module $M_l$ becomes a ${\mathfrak {k}}_{l'}$-module $M_{l'}$ after base change to $l':=l\cdot k_K$. We say that $M_l$ is a {\em $({\mathfrak {g}}_l,K)^G$-module} if it has the following properties: \begin{itemize} \item[($H_2^l$)] $M_{l'}$ is locally ${\mathfrak {k}}_{l'}$-finite. \item[($H_3^l$)] the action of ${\mathfrak {k}}_{l'}$ on $M_{l'}$ lifts to $K_{l'}$. \item[($H_1^l$)] the lifted action of $K_{l'}$ on $M_{l'}$ is compatible with the action of ${\mathfrak {g}}_{l'}$. \end{itemize} We remark that the action in $(H_3^l)$ is unique if it exists, as $K$ is connected and we are eventually only lifting finite-dimensional representations.
Mutatis mutandis we define $({\mathfrak {a}}_l,B_{l'})$-modules whenever $B_{l'}$ is geometrically connected. We call a map $f:M_l\to N_l$ between $({\mathfrak {g}}_l,K)^G$-modules a {\em morphism} if it is ${\mathfrak {g}}_l$-linear. This is equivalent to the base change $f:M_{l'}\to N_{l'}$ being a morphism of $({\mathfrak {g}}_{l'},K_{l'})$-modules. This provides us with the category ${\mathcal C}({\mathfrak {g}}_l,K)$ of $({\mathfrak {g}}_l,K)^G$-modules. It contains the category $\mathcal C_{{\rm fd}}({\mathfrak {g}}_l,K)^G$ as a full abelian subcategory, and if $({\mathfrak {g}}_l,K)^G$ is $K$-split, it agrees with the previously defined category of $({\mathfrak {g}}_l,K)^G$-modules as $l=l'$ in this case. We remark that as $K$ is connected, Schur's Lemma holds for simple modules in $\mathcal C({\mathfrak {g}}_l,K)$ by \cite{quillen1969}, and even in $\mathcal C({\mathfrak {a}}_k,B_{k'})$ whenever $B_{k'}$ is geometrically connected. We refer to \cite{lepowsky1976} for fundamental theorems on linear properties of rational reductive pairs. \subsection{Equivariant cohomology} Let $({\mathfrak {a}}_k,B_k)$ be any $k$-rational pair with $B_k$ defined over $k$ as well. As $B_k$ is reductive, the category of finite-dimensional representations of $B_k$ is semisimple, hence all objects in $\mathcal C_{\rm fd}(B_k)$ are injective and projective, and the same remains true in the ind-category $\mathcal C(B_k)$. The forgetful functor $\mathcal F_{B_k}^{{\mathfrak {a}}_k,B_k}$ sending $({\mathfrak {a}}_k,B_k)$-modules to $B_k$-modules has a left adjoint $$ \ind_{B_k}^{{\mathfrak {a}}_k,B_k}:\;\;\;M\;\mapsto\; U({\mathfrak {a}}_k)\otimes_{U({\mathfrak {b}}_k)}M $$ and a right adjoint $$ \pro_{B_k}^{{\mathfrak {a}}_k,B_k}:\;\;\;M\;\mapsto\; \Hom_{{\mathfrak {b}}_k}(U({\mathfrak {a}}_k),M)_{B_k-\text{finite}}. $$ Then $\ind_{B_k}^{{\mathfrak {a}}_k,B_k}$ sends projectives to projectives and $\pro_{B_k}^{{\mathfrak {a}}_k,B_k}$ sends injectives to injectives.
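Explicitly, the adjunctions read $$ \Hom_{({\mathfrak {a}}_k,B_k)}\left(\ind_{B_k}^{{\mathfrak {a}}_k,B_k}M,\,N\right)\;\cong\;\Hom_{B_k}\left(M,\,\mathcal F_{B_k}^{{\mathfrak {a}}_k,B_k}N\right) $$ and $$ \Hom_{({\mathfrak {a}}_k,B_k)}\left(N,\,\pro_{B_k}^{{\mathfrak {a}}_k,B_k}M\right)\;\cong\;\Hom_{B_k}\left(\mathcal F_{B_k}^{{\mathfrak {a}}_k,B_k}N,\,M\right), $$ and the preservation of projectives resp.\ injectives follows formally, since the forgetful functor $\mathcal F_{B_k}^{{\mathfrak {a}}_k,B_k}$ is exact.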
As all $({\mathfrak {b}}_k,B_k)$-modules are injective and projective we see that $\mathcal C({\mathfrak {a}}_k,B_k)$ has enough injectives and enough projectives. Therefore standard methods from homological algebra apply, and we may define cohomology as follows. Pick a $K$-split reductive pair $({\mathfrak {g}}_k,K)^G$ over $k$ and a $k$-rational parabolic subalgebra ${\mathfrak {q}}_k$ which is assumed to have a $k$-rational Levi decomposition $$ {\mathfrak {q}}_k\;=\;{\mathfrak {l}}_k+{\mathfrak {u}}_k. $$ We call ${\mathfrak {q}}_k$ {\em germane (over $k$)} if ${\mathfrak {l}}_k$ is the Lie algebra of a reductive subgroup $L_k$ of $G_k$ defined as follows. Consider the category $\mathcal C_{\rm fds}({\mathfrak {l}}_k)$ of semi-simple finite-dimensional ${\mathfrak {l}}_k$-modules. This is a neutralized Tannakian category with Tannaka dual $\mathcal L_k$ say. Then $\mathcal L_k$ is proreductive and the forgetful functor $$ \mathcal C_{\rm fd}(G_k)\;\to\; \mathcal C_{\rm fds}({\mathfrak {l}}_k) $$ defines a morphism $$ \ell:\mathcal L_k \;\to\; G_k. $$ We then define $L_k$ to be the image of $\ell$. It is defined over $k$. If the Lie algebra of $L_k$ is ${\mathfrak {l}}_k$ we call ${\mathfrak {q}}_k$ germane. In this case $({\mathfrak {q}}_k,L_k\cap K_k)$ is a {\em $k$-rational parabolic subpair} of $({\mathfrak {g}}_k,K)^G$. We have a map $$ p:\;\;\;({\mathfrak {q}}_k,L_k\cap K_k)\;\to\;({\mathfrak {l}}_k,L_k\cap K_k) $$ of pairs and pullback along $p$ gives a functor $$ \mathcal C({\mathfrak {l}}_k,L_k\cap K_k)\to \mathcal C({\mathfrak {q}}_k,L_k\cap K_k). $$ It has a right adjoint $$ H^0({\mathfrak {u}}_k;-):\;\;\;\mathcal C({\mathfrak {q}}_k,L_k\cap K_k)\to \mathcal C({\mathfrak {l}}_k,L_k\cap K_k) $$ given by taking ${\mathfrak {u}}_k$-invariants. 
The higher right derived functors $$ H^q({\mathfrak {u}}_k;-):=R^qH^0({\mathfrak {u}}_k;-):\;\;\;\mathcal C({\mathfrak {q}}_k,L_k\cap K_k)\to \mathcal C({\mathfrak {l}}_k,L_k\cap K_k) $$ are the {\em $k$-rational ${\mathfrak {u}}_k$-cohomology} and may be computed via the usual standard complex. Dually we may define and explicitly compute ${\mathfrak {u}}_k$-homology. These homology and cohomology theories satisfy the usual duality relations. \begin{proposition}\label{prop:equicohomology} For any $k$-rational $({\mathfrak {g}}_k,K_k)$-module $X_k$ the cohomology $H^q({\mathfrak {u}}_k,X_k)$ is $k$-rational and for any map $\tau:k\to l$ of fields we have a natural isomorphism $$ H^q({\mathfrak {u}}_k,X_k)_l^\tau\;\to\; H^q({\mathfrak {u}}_l^\tau,X_l^\tau) $$ of $({\mathfrak {l}}_l^\tau,L_l^\tau\cap K_l^\tau)$-modules. The same statement is true for ${\mathfrak {u}}_k$-homology, and the duality maps and Hochschild-Serre spectral sequences respect the rational structure. \end{proposition} \begin{proof} This is obvious from the $k$-resp.\ $l$-rational standard complexes computing cohomology and homology. \end{proof} Along the same lines we may define rational $({\mathfrak {g}}_k,K_k)$-cohomology. It also commutes with base change. \subsection{Equivariant cohomological induction} To define $k$-rational cohomological induction, we adapt Zuckerman's original construction as in \cite[Chapter 6]{book_vogan1981}. We start with a $({\mathfrak {g}}_k,L_k\cap K_k)$-module $M$ and set $$ \tilde{\Gamma}_0(M)\;:=\; \;\{m\in M\mid\dim_k U({\mathfrak {k}}_k)\cdot m\;<\;\infty\} $$ and $$ \Gamma_0(M)\;:=\; \;\{m\in \tilde{\Gamma}_0(M)\mid \text{the ${\mathfrak {k}}_k$-representation}\; U({\mathfrak {k}}_k)\cdot m\;\text{lifts to}\;K_k^0\}. $$ As in the analytic case the obstruction for a lift to exist is the (algebraic) fundamental group of $K_k^0$. 
In particular there is no rationality obstruction, as a representation $N$ of ${\mathfrak {k}}_k$ lifts to $K_k^0$ if and only if it does so after base change to one (and hence any) extension of $k$. It is easy to see that $\Gamma_0(M)$ is a $({\mathfrak {g}}_k,K_k^0)$-module. We define the space of {\em $K_k^0$-finite vectors} in $M$ as $$ \Gamma_1(M)\;:=\; $$ $$ \;\{m\in \Gamma_0(M)\mid \text{the actions of $L_k\cap K_k$ (on $M$) and $L_k\cap K_k^0\subseteq K_k^0$ agree on $m$}\}. $$ This is a $({\mathfrak {g}}_k,(L_k\cap K_k)\cdot K_k^0)$-module. Finally the space of {\em $K_k$-finite vectors} in $M$ is $$ \Gamma(M)\;:=\; \;\ind_{{\mathfrak {g}}_k,(L_k\cap K_k)\cdot K_k^0}^{{\mathfrak {g}}_k,K_k}(\Gamma_1(M)). $$ However this is not a subspace of $M$ in general. But by Frobenius reciprocity it comes with a natural map $\Gamma(M)\to M$. The functor $\Gamma$ is a right adjoint to the forgetful functor along $$ i:\;\;\;({\mathfrak {g}}_k,L_k\cap K_k)\;\to\;({\mathfrak {g}}_k,K_k) $$ and hence sends injectives to injectives. We obtain the higher Zuckerman functors as the right derived functors $$ \Gamma^q\;:=\;R^q\Gamma:\;\;\; \mathcal C({\mathfrak {g}}_k,L_k\cap K_k)\to \mathcal C({\mathfrak {g}}_k,K_k). $$ As in the classical case we can show \begin{proposition}\label{prop:find} For each $q$ we have a commutative square $$ \begin{CD} \mathcal C({\mathfrak {g}}_k,L_k\cap K_k)@>\Gamma^q>>\mathcal C({\mathfrak {g}}_k,K_k)\\ @V\mathcal F_{{\mathfrak {k}}_k,L_k\cap K_k}^{{\mathfrak {g}}_k,L_k\cap K_k}VV @VV\mathcal F_{{\mathfrak {k}}_k,K_k}^{{\mathfrak {g}}_k,K_k}V\\ \mathcal C({\mathfrak {k}}_k,L_k\cap K_k)@>\Gamma^q>>\mathcal C({\mathfrak {k}}_k,K_k)\\ \end{CD} $$ \end{proposition} \begin{proof} For $q=0$ the commutativity is obvious from the explicit construction of the functor $\Gamma$. 
For $q>0$ this follows from the standard argument that the forgetful functors have an exact left adjoint given by induction along the Lie algebras and hence carry injectives to injectives. Furthermore they are exact, which means that the Grothendieck spectral sequences for the two compositions both degenerate. Therefore the edge morphisms yield the commutativity, which proves the claim. \end{proof} \begin{corollary}\label{cor:basechange} The functors $\Gamma^q:\mathcal C({\mathfrak {g}}_k,L_k\cap K_k)\to\mathcal C({\mathfrak {g}}_k,K_k)$ commute with base change: for any map $\tau:k\to l$ of fields and any $({\mathfrak {g}}_k,L_k\cap K_k)$-module $M_k$ we have for any degree $q$ a natural isomorphism $$ \left(\Gamma^q(M_k)\right)_{l}^\tau\;\to\; \Gamma^q(M_l^\tau). $$ \end{corollary} \begin{proof} By universality of base change we obtain a natural map $$ \left(\Gamma^q(M_k)\right)_{l}^\tau\;\to\; \Gamma^q(M_l^\tau), $$ which is compatible with the commutative diagram in Proposition \ref{prop:find}. This reduces us to the case ${\mathfrak {g}}_k={\mathfrak {k}}_k$, i.e.\ the finite-dimensional case, where the statement is well known. \end{proof} In particular the functors $\Gamma^q$ satisfy the usual properties, i.e.\ they vanish for $q > \dim_k{\mathfrak {k}}_k/{\mathfrak {k}}_k\cap {\mathfrak {l}}_k$, we have a Hochschild-Serre spectral sequence for $K_k$-types, the effect on infinitesimal characters is the same as in the classical setting, etc. Let $Z$ be an $({\mathfrak {l}}_k,L_k\cap K_k)$-module. Then we may consider it as a $({\mathfrak {q}}_k,L_k\cap K_k)$-module with trivial ${\mathfrak {u}}_k$-action. 
From there we consider $$ \mathcal R^q(Z)\;:=\; \Gamma^q\pro_{{\mathfrak {q}}_k,L_k\cap K_k}^{{\mathfrak {g}}_k,L_k\cap K_k}(Z\otimes_k\bigwedge^{\dim_k{\mathfrak {u}}_k}{\mathfrak {u}}_k), $$ where $$ \pro_{{\mathfrak {q}}_k,L_k\cap K_k}^{{\mathfrak {g}}_k,L_k\cap K_k}(Z\otimes_k\bigwedge^{\dim_k{\mathfrak {u}}_k}{\mathfrak {u}}_k)\;:=\; \Hom_{{\mathfrak {q}}_k}(U({\mathfrak {g}}_k),Z\otimes_k\bigwedge^{\dim_k{\mathfrak {u}}_k}{\mathfrak {u}}_k )_{L_k\cap K_k-\text{finite}}. $$ The definition of the functor $\Gamma_0(M)$ may be extended to non-$K$-split reductive pairs via the maximal submodule of $M$ which maps into $\Gamma_0(M\otimes_k {k_K})$. However it is not so clear how to generalize the remaining functors to this setting. \subsection{Rational models of Harish-Chandra modules} We depart from a $k$-rational pair $({\mathfrak {a}}_k,B_k)$. Let $\sigma:k\to{\rm\bf C}$ be an embedding, and $({\mathfrak {a}}^\sigma,B^\sigma)$ be an associated pair. Then \begin{proposition}\label{prop:classicaliso} The category $\mathcal C({\mathfrak {a}}_{\rm\bf C}^\sigma,B_{\rm\bf C}^\sigma)$ of ${\rm\bf C}$-rational modules is naturally equivalent to the category $\mathcal C({\mathfrak {a}}^\sigma,B^\sigma)$ of classical $({\mathfrak {a}}^\sigma,B^\sigma)$-modules. This equivalence induces an equivalence of the corresponding categories of finite-dimensional modules. \end{proposition} \begin{proof} By construction the categories are equivalent in the case ${\mathfrak {a}}_k={\mathfrak {b}}_k$, and the general case follows from this observation as well. \end{proof} Now by the classification of reductive algebraic groups we know that each reductive pair $({\mathfrak {g}},K)$ has a $k$-rational model $({\mathfrak {g}}_k,K_k)$ over a {\em number field} $k\subseteq{\rm\bf C}$, i.e.\ $k/{\rm\bf Q}$ is finite (we may assume $K_k$ quasi-split upon replacing $k$ by a finite extension). However in general such a model is far from unique. 
\begin{corollary} Each cohomologically induced $({\mathfrak {g}},K)$-module has a model over the field of definition of its inducing data. \end{corollary} In particular if the parabolic subpair $({\mathfrak {q}}_k,L_k\cap K_k)$, its Levi decomposition ${\mathfrak {l}}_k+{\mathfrak {u}}_k$, and the inducing module $Z$ are all defined over $k\subseteq{\rm\bf C}$, then so are the induced modules $\mathcal R^q(Z)$ for all $q$. In particular the $({\mathfrak {g}},K)$-module of any discrete series representation, or more generally of any unitary representation with non-trivial $({\mathfrak {g}},K)$-cohomology and $\overline{{\rm\bf Q}}$-rational infinitesimal character, has a model over a {\em number field}. The classical character formulae, as for example in the latter case given in \cite{voganzuckerman1984}, are rational over the same field of definition as well if interpreted in our theory as follows. The standard arguments proving (ii) and (iii) of Theorem \ref{thm:inheritance} carry over to our setting provided that ${\mathfrak {q}}_k$ is {\em $\theta$-stable} in the algebraic sense: Write ${\mathfrak {p}}_k$ for the $K_k$-complement of ${\mathfrak {k}}_k$ in ${\mathfrak {g}}_k$, then \begin{equation} {\mathfrak {q}}_k\;=\;({\mathfrak {q}}_k\cap{\mathfrak {p}}_k)+({\mathfrak {q}}_k\cap{\mathfrak {k}}_k). \label{eq:rationalthetastable} \end{equation} \begin{proposition}\label{prop:rationalfl} Assume that $({\mathfrak {g}}_k,K)^G$ is a reductive pair and that ${\mathfrak {q}}_k\subseteq{\mathfrak {g}}_k$ is a $k$-germane parabolic subalgebra. Then the $k$-rational ${\mathfrak {u}}_k$-cohomology preserves $Z({\mathfrak {g}}_k)$-finiteness and, if ${\mathfrak {q}}_k$ is $\theta$-stable in the sense of \eqref{eq:rationalthetastable}, also admissibility; in particular it sends finite length modules to finite length modules in that case. 
\end{proposition} \begin{proof} Without loss of generality we may enlarge $k$ by Proposition \ref{prop:equicohomology} and in particular we may assume $G_k$ to be split. Then the Harish-Chandra map is defined over $k$. The rest of the argument goes as in the classical case, cf.\ \cite[Theorem 7.56 and Corollary 5.140]{book_knappvogan1995}. \end{proof} Proposition \ref{prop:rationalfl} allows us to define $k$-rational algebraic characters for finite length modules using the same formalism mutatis mutandis as before: $$ c_{{\mathfrak {q}}_k}:\;K_{\rm fl}({\mathfrak {g}}_k,K_k)\;\to\; C_{{\mathfrak {q}}_k,\rm fl}({\mathfrak {l}}_k,L_k\cap K_k):=K_{\rm fl}({\mathfrak {l}}_k,L_k\cap K_k)[W_{{\mathfrak {q}}_k}^{-1}], $$ $$ M_k\;\mapsto\; \frac{\sum_q(-1)^q[H^q({\mathfrak {u}}_k;M_k)]} {\sum_q(-1)^q[H^q({\mathfrak {u}}_k;{\bf1}_k)]}. $$ Then $c_{{\mathfrak {q}}_k}$ is multiplicative, respects duals, commutes with base change, and is therefore compatible with our theory from section 5 over ${\rm\bf C}$. $$ \underline{\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;} $$\ \\ Karlsruher Institut f\"ur Technologie, Fakult\"at f\"ur Mathematik, Institut f\"ur Algebra und Geometrie, Kaiserstra\ss{}e 89-93, 76133 Karlsruhe, Germany\\ {[email protected]} \end{document}
\begin{document} \input epsf \maketitle \section*{Introduction.} Many properties of ordinary continued fractions possess multidimensional analogues. H.~Tsuchihashi~\cite{Tsu} showed the connection between periodic multidimensional continued fractions and multidimensional cusp singularities. The relation between sails of multidimensional continued fractions and Hilbert bases is described by J.-O.~Moussafir in the work~\cite{Mus}. In his book~\cite{Arn1} dealing with the theory of continued fractions V.~I.~Arnold gives various images of the sails of the two-dimensional continued fraction that generalizes the golden ratio. In the article~\cite{Kor2} E.~I.~Korkina investigated the sails of the simplest two-dimensional continued fractions of cubic irrationalities, whose fundamental region consists of two triangles, three edges and one vertex. We consider the same model of the multidimensional continued fraction as was considered by the authors mentioned above. In the present work we obtain examples of new torus triangulations of the sails of two-dimensional continued fractions of cubic irrationalities for some special families whose fundamental regions have a more complicated structure. In $\S$1 the necessary definitions and notions are given. In $\S$2 the properties of two-dimensional continued fractions constructed from Frobenius operators are investigated, and the relation between the equivalence classes of torus triangulations and cubic extensions of the field of rational numbers is discussed. (A detailed analysis of the properties of cubic extensions of the field of rational numbers and their classification was carried out by B.~N.~Delone and D.~K.~Faddeev in the work~\cite{Del1}.) In $\S$3 the resulting examples of torus triangulations are discussed. The author is grateful to Professor V.~I.~Arnold for his constant attention to this work and for useful remarks. 
\section{Definitions.} Points of the space $\r^k$ ($k\ge 1$) all of whose coordinates are integers are called {\it integer points}. Consider a set of $n+1$ hyperplanes in general position passing through the origin in the space $\r^{n+1}$. The complement to these hyperplanes consists of $2^{n+1}$ open orthants. Let us choose an arbitrary orthant. The boundary of the convex hull of all integer points except the origin in the closure of the orthant is called {\it the sail}. The union of all $2^{n+1}$ sails defined by these hyperplanes of the space $\r^{n+1}$ is called the {\it $n$-dimensional continued fraction} constructed from the given $n+1$ hyperplanes in general position in $(n+1)$-dimensional space. Two $n$-dimensional sails (continued fractions) are called {\it equivalent} if there exists a linear integer lattice preserving transformation of the $(n+1)$-dimensional space that maps one sail (continued fraction) to the other. To reconstruct the whole continued fraction up to the equivalence relation in the one-dimensional case it is sufficient to know certain integer characteristics of one sail (namely, the integer lengths of the edges and the integer angles between the consecutive edges of one sail). \begin{hyp}\label{hyp1}{\bf (Arnold)} There exists a collection of integer characteristics of the sail that determines the continued fraction uniquely up to the equivalence relation. \end{hyp} Let $A \in GL(n+1,\r)$ be an operator whose eigenvalues are all real and distinct. Let us take the $n$-dimensional spaces spanned by all possible subsets of $n$ linearly independent eigenvectors of the operator $A$. Since the eigenvectors are linearly independent, the $n+1$ hyperplanes obtained are in general position. The multidimensional continued fraction is constructed precisely from these hyperplanes. 
\begin{proposition} Continued fractions constructed from operators $A$ and $B$ of the group $GL(n+1,\r)$ with distinct real irrational eigenvalues are equivalent iff there exists an integer operator $X$ with determinant one such that the operator $\tilde A$ obtained from the operator $A$ by conjugation by the operator $X$ commutes with $B$. \end{proposition} \begin{proof} Let the continued fractions constructed from operators $A$ and $B$ of the group $GL(n+1,\r)$ with distinct real irrational eigenvalues be equivalent, i.e.\ there exists a linear integer lattice preserving transformation of the space that maps the continued fraction of the operator $A$ to the continued fraction of the operator $B$ (and the orthants of the first continued fraction map to the orthants of the second one). Under such a transformation of the space the operator $A$ is conjugated by some integer operator $X$ with determinant one. All eigenvalues of the resulting operator $\tilde A$ are distinct and real (since the characteristic polynomial of the operator is preserved). Since the orthants of the first continued fraction map to the orthants of the second one, the sets of eigendirections of the operators $\tilde A$ and $B$ coincide. Thus these operators are simultaneously diagonalizable in some basis and hence they commute. Let us prove the converse. Suppose there exists an integer operator $X$ with determinant one such that the operator $\tilde A$ obtained from the operator $A$ by conjugation by the operator $X$ commutes with $B$. Note that the eigenvalues of the operators $A$ and $\tilde A$ coincide. Therefore all eigenvalues of the operator $\tilde A$ (just as for the operator $B$) are real, distinct, and irrational. Consider a basis in which the operator $\tilde A$ is diagonal. A simple verification shows that the operator $B$ is also diagonal in this basis. 
Consequently the operators $\tilde A$ and $B$ define the same orthant decomposition of the $(n+1)$-dimensional space, and the continued fractions corresponding to these operators coincide. It remains to note that conjugation by an integer operator with determinant one corresponds to a linear integer lattice preserving transformation of the $(n+1)$-dimensional space. \end{proof} From now on we consider only continued fractions constructed from an invertible integer operator of the $(n+1)$-dimensional space whose inverse is also integer. The set of such operators forms the group denoted by $GL(n+1,\z)$. This group consists of the integer operators with determinant $\pm 1$. The $n$-dimensional continued fraction constructed from an operator $A \in GL(n+1,\z)$ whose characteristic polynomial over the field of rational numbers is irreducible and whose eigenvalues are real is called {\it the $n$-dimensional continued fraction of an $(n+1)$-algebraic irrationality}. The cases $n=1,2$ correspond to {\it one$($two$)$-dimensional continued fractions of quadratic $($cubic$)$ irrationalities}. Let the characteristic polynomial of the operator $A$ be irreducible over the field of rational numbers and let its roots be real and distinct. Consider the integer operators with determinant one that commute with $A$ and preserve the given sail; under their action the sail maps to itself. These operators form an Abelian group. It follows from the Dirichlet unit theorem (see~\cite{BSh}) that this group is isomorphic to $\z^n$ and that its action is free. Moreover, the quotient of a sail under such a group action is an $n$-dimensional torus. (For the converse see~\cite{Kor1} and~\cite{Tsu}.) 
The polyhedral decomposition of the $n$-dimensional torus is defined in the natural way, and the affine types of the polyhedra are also defined (in the notion of affine type we include the number and mutual arrangement of the integer points of the faces of the polyhedron). In the case of two-dimensional continued fractions of cubic irrationalities such a decomposition is usually called a torus {\it triangulation}. By {\it a fundamental region} of the sail we call a union of some faces that contains exactly one face from each equivalence class. \section{ Conjugacy classes of two-dimensional continued fractions for cubic irrationalities.} Two-dimensional continued fractions of cubic irrationalities constructed from the operators $A$ and $-A$ coincide. In this way the study of continued fractions for integer operators with determinant $\pm 1$ reduces to the study of continued fractions for integer operators with determinant one (i.e.\ operators of the group $SL(3,\z)$). The operator (matrix) with determinant one $$ A_{m,n}:= \left( \begin{array}{ccc} 0 &1 &0 \\ 0 &0 &1 \\ 1 &-m &-n \\ \end{array} \right), $$ where $m$ and $n$ are arbitrary integers, is called {\it a Frobenius operator $($matrix$)$}. Note the following: if the characteristic polynomial $\chi_{A_{m,n}}(x)$ is irreducible over the field $\q$ then the matrix of the operator of left multiplication by the element $x$ in the natural basis $\{1,x,x^2 \}$ of the field $\q[x]\big/ \big( \chi_{A_{m,n}}(x)\big)$ coincides with the matrix $A_{m,n}$. Let an operator $A\in SL(3,\z)$ have distinct real irrational eigenvalues. Let $e_1$ be some integer nonzero vector, $e_2=A(e_1)$, $e_3=A^2(e_1)$. Then the matrix of the operator in the basis $(e_1, e_2, c e_3)$ for some rational $c$ will be Frobenius. However the transition matrix here may be non-integer, and then the corresponding continued fraction is not equivalent to the initial one. 
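As a quick sanity check (our addition, not part of the original text), note that $A_{m,n}$ is a companion-type matrix, so its characteristic polynomial is $\chi_{A_{m,n}}(x)=x^3+nx^2+mx-1$ and its determinant equals one for every choice of $m,n$. The following Python sketch verifies this with the Faddeev--LeVerrier recursion:

```python
from fractions import Fraction

def frobenius(m, n):
    # A_{m,n} as in the text: rows (0,1,0), (0,0,1), (1,-m,-n)
    return [[0, 1, 0], [0, 0, 1], [1, -m, -n]]

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def char_poly(A):
    # Faddeev-LeVerrier: coefficients [c2, c1, c0] of
    # det(xI - A) = x^3 + c2 x^2 + c1 x + c0
    M = [[1 if i == j else 0 for j in range(3)] for i in range(3)]
    coeffs = []
    for k in range(1, 4):
        AM = mat_mul(A, M)
        c = -Fraction(sum(AM[i][i] for i in range(3)), k)
        M = [[AM[i][j] + (c if i == j else 0) for j in range(3)] for i in range(3)]
        coeffs.append(c)
    return coeffs

for m, n in [(-1, 2), (0, 5), (3, -4)]:
    c2, c1, c0 = char_poly(frobenius(m, n))
    assert (c2, c1, c0) == (n, m, -1)  # chi(x) = x^3 + n x^2 + m x - 1
    # det A = (-1)^3 chi(0) = -c0 = 1, so A_{m,n} lies in SL(3,Z)
    assert -c0 == 1
print("characteristic polynomial x^3 + n x^2 + m x - 1 confirmed")
```

In particular $\chi_{A_{m,n}}(0)=-1$ forces $\det A_{m,n}=1$, which is the reason the whole family lies in $SL(3,\z)$.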
\begin{example} The continued fraction constructed from the operator $$ A= \left( \begin{array}{ccc} 1 &2 &0 \\ 0 &1 &2 \\ -7 &0 &29 \\ \end{array} \right), $$ is not equivalent to the continued fraction constructed from any Frobenius operator with determinant one. \end{example} Thereupon the following question is of interest. {\it How often do continued fractions that do not correspond to Frobenius operators occur?} In any case the family of Frobenius operators possesses some useful properties that allow us to construct whole families of nonequivalent two-dimensional periodic continued fractions at once, which is of considerable interest in itself. It is easy to obtain the following statements. \begin{statement} The set $\Omega$ of operators $A_{m,n}$ having all eigenvalues real and distinct is defined by the inequality $n^2m^2-4m^3+4n^3-18mn-27 > 0$ (the left-hand side is the discriminant of the characteristic polynomial). For the eigenvalues of the operators of this set to be irrational it is in addition necessary to remove two perpendicular lines in the integer plane: $A_{a,-a}$ and $A_{a,a+2}$, $a \in \z$. \end{statement} \begin{statement}\label{p1} The two-dimensional continued fractions for the cubic irrationalities constructed from the operators $A_{m,n}$ and $A_{-n,-m}$ are equivalent. \end{statement} In what follows we consider all statements modulo this symmetry. {\it Remark.} Example~\ref{primer21} given below shows that equivalent continued fractions can occur among periodic continued fractions constructed from operators of the set $\Omega$. Let us note that there exist nonequivalent two-dimensional periodic continued fractions constructed from operators of the group $GL(n+1,\r)$ whose characteristic polynomials define isomorphic extensions of the field of rational numbers. In the following example operators with equal characteristic polynomials but distinct continued fractions are shown. 
\begin{example}\label{primer} The operators $(A_{-1,2})^3$ and $A_{-4,11}$ have distinct two-dimensional continued fractions $($although their characteristic polynomials coincide$)$. \end{example} On the other hand, similar periodic continued fractions can correspond to operators with distinct characteristic polynomials. \begin{example}\label{primer21} The operators $A^2_{0,-a}$ and $A_{-2a,-a^2}$ are conjugate by an operator of the group $GL(3,\z)$, and hence the periodic continued fractions $($including the torus triangulations$)$ corresponding to the operators $A_{0,-a}$ and $A_{-2a,-a^2}$ are equivalent. \end{example} Let us note that distinct cubic extensions of the field $\q$ possess nonequivalent triangulations. \section{Torus triangulations and fundamental regions for some series of operators $A_{m,n}$} Torus triangulations and fundamental domains for several infinite series of Frobenius operators are calculated here. In this section we consider only the sails containing the point $(0,0,1)$ in their convex hulls. The ratio of the Euclidean volume of an integer $k$-dimensional polyhedron in $n$-dimensional space to the Euclidean volume of the minimal integer simplex in the same $k$-dimensional subspace is called its {\it integer $k$-dimensional volume} (if $k=1$ --- {\it the integer length} of the segment, if $k=2$ --- {\it the integer area} of the polygon). \\ The ratio of the Euclidean distance from an integer hyperplane (containing an $(n-1)$-dimensional integer sublattice) to an integer point to the minimal Euclidean distance from the hyperplane to an integer point in the complement of this hyperplane is called the corresponding {\it integer distance}. \\ By {\it the integer angle} between two integer rays (i.e. 
rays that contain more than one integer point) with the vertex at the same integer point we call the value $S(u,v)/(|u|\cdot|v|)$, where $u$ and $v$ are arbitrary integer vectors along the rays and $S(u,v)$ is the integer area of the triangle with edges $u$ and $v$. {\it Remark.} Our integer volume is an integer number (in the standard parallelepiped normalization the value would be $k!$ times smaller). The integer $k$-dimensional volume of a simplex is equal to the index of the lattice subgroup generated by its edges having a common vertex. Since the integer angles of any triangle with integer vertices can be uniquely recovered from the integer lengths of its sides and its integer area, we do not list the integer angles of triangles below. \begin{hyp} The specified invariants distinguish all nonequivalent torus triangulations of two-dimensional continued fractions of cubic irrationalities. \end{hyp} In the statements of Propositions~\ref{t31}---\ref{t35} we describe only the homeomorphism type of the torus triangulations, although the proofs contain descriptions of the fundamental regions that allow one to calculate any other invariant, including the affine types of the faces. (As an example we calculate integer volumes and distances to faces in Propositions~\ref{t31} and~\ref{t32}.) Examples of the affine structure of all triangulation faces are shown in the figures. \begin{proposition}\label{t31} Let $m=b-a-1$, $n=(a+2)(b+1)$ $(a,b\ge 0)$; then the torus triangulation corresponding to the operator $A_{m,n}$ is homeomorphic to the following one: $$\epsfbox{fig2.3}$$ $($in the figure $b=6)$. \end{proposition} \begin{proof} The operators $$ X_{a,b}=A_{m,n}^{-2}, \quad Y_{a,b}=A_{m,n}^{-1}\big(A_{m,n}^{-1}-(b+1)I\big) $$ commute with the operator $A_{m,n}$ and do not interchange the sails (note that the operator $A_{m,n}$ itself interchanges the sails). Here $I$ is the identity element of the group $SL(3,\z)$. 
Let us describe the closure of one of the fundamental regions obtained by factoring the sail by the operators $X_{a,b}$ and $Y_{a,b}$. Consider the points $A=(1,0,a+2)$, $B=(0,0,1)$, $C=(b-a-1,1,0)$ and $D=((b+1)^2,b+1,1)$ of the sail containing the point $(0,0,1)$. Under the action of the operator $X_{a,b}$ the segment $AB$ maps to the segment $DC$ (the point $A$ maps to the point $D$ and $B$ to $C$). Under the action of the operator $Y_{a,b}$ the segment $AD$ maps to the segment $BC$ (the point $A$ maps to the point $B$ and $D$ to $C$). The integer points $((b+1)i,i,1)$, where $i\in \{ 1,\ldots ,b\}$, belong to the interval $BD$. As is easily seen, the integer lengths of the segments $AB$, $BC$, $CD$, $DA$ and $BD$ are equal to 1, 1, 1, 1 and $b+1$ respectively; the integer areas of both triangles $ABD$ and $BCD$ are equal to $b+1$. The integer distances from the origin to the planes containing the triangles $ABD$ and $BCD$ are equal to $1$ and $a+2$ respectively. The operators $X_{a,b}$ and $Y_{a,b}$ map the sail to itself, since all their eigenvalues are positive (in this case it is equivalent to say that the values of their characteristic polynomials on the negative semi-axis are always negative). Furthermore these operators are generators of the group of integer operators mapping the sail to itself, since it turns out that the torus triangulation obtained by factoring the sail by these operators contains a unique vertex (zero-dimensional face), and hence the torus triangulation has no smaller subperiod. \end{proof} Let us show that all vertices of a fundamental domain of an arbitrary periodic continued fraction can be chosen from the closed convex hull of the following points: the origin; $A$; $X(A)$; $Y(A)$ and $XY(A)$, where $A$ is an arbitrary zero-dimensional face of the sail, and the operators $X$ and $Y$ are generators of the group of integer operators mapping the sail to itself. 
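The four face identifications in the proof of Proposition~\ref{t31} can be verified mechanically. The following Python sketch (our addition, not part of the original text) uses the explicit integer inverse $A_{m,n}^{-1}(x,y,z)=(mx+ny+z,\,x,\,y)$ and checks that $X_{a,b}$ sends $A\mapsto D$, $B\mapsto C$ and $Y_{a,b}$ sends $A\mapsto B$, $D\mapsto C$ for small $a,b$:

```python
def inv_apply(m, n, p):
    # A_{m,n}^{-1} applied to a vector: A^{-1}(x,y,z) = (m x + n y + z, x, y),
    # since A(u,v,w) = (v, w, u - m v - n w)
    x, y, z = p
    return (m * x + n * y + z, x, y)

def X(m, n, b, p):
    # X_{a,b} = A_{m,n}^{-2}
    return inv_apply(m, n, inv_apply(m, n, p))

def Y(m, n, b, p):
    # Y_{a,b} = A_{m,n}^{-1} (A_{m,n}^{-1} - (b+1) I)
    q = inv_apply(m, n, p)
    q = tuple(qi - (b + 1) * pi for qi, pi in zip(q, p))
    return inv_apply(m, n, q)

for a in range(4):
    for b in range(4):
        m, n = b - a - 1, (a + 2) * (b + 1)
        A_ = (1, 0, a + 2)
        B_ = (0, 0, 1)
        C_ = (b - a - 1, 1, 0)
        D_ = ((b + 1) ** 2, b + 1, 1)
        # X maps the segment AB to DC, Y maps AD to BC, as claimed
        assert X(m, n, b, A_) == D_ and X(m, n, b, B_) == C_
        assert Y(m, n, b, A_) == B_ and Y(m, n, b, D_) == C_
print("face identifications of Proposition t31 confirmed for small a, b")
```

The same kind of check applies verbatim to the generators in the subsequent propositions.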
Consider a tetrahedral angle with the vertex at the origin and edges passing through the points $A$, $X(A)$, $Y(A)$, and $XY(A)$. The union of all images of this angle under the transformations of the form $X^m Y^n$, where $m$ and $n$ are integers, covers the whole interior of the orthant. Hence all vertices of the sail can be obtained by applying the operators $X^mY^n$ to the vertices of the sail lying in our tetrahedral angle. Moreover, the convex hull of the integer points of the form $X^mY^n(A)$ is contained in the convex hull of all integer points of the given orthant. Therefore the boundary of the convex hull of all integer points of the orthant is contained in the complement of the interior points of the convex hull of the integer points of the form $X^mY^n(A)$. The complement, in its turn, is contained in the union of all images of the convex hull of the following points: the origin, $A$, $X(A)$, $Y(A)$, and $XY(A)$, under the transformations of the form $X^m Y^n$, where $m$ and $n$ are integers. It is obvious that all points of the constructed polyhedron except the origin lie in the open orthant under consideration. \begin{proposition}\label{t32} Let $m=-a$, $n=2a+3$ $(a\ge 0)$; then the torus triangulation corresponding to the operator $A_{m,n}$ is homeomorphic to the following one: $$\epsfbox{fig2.4}$$ \end{proposition} \begin{proof} Let us choose the following generators of the subgroup of integer operators mapping the sail to itself: $$ X_{a}=A_{m,n}^{-2}; \quad Y_{a}=(2I-A_{m,n}^{-1})^{-1}. $$ As in the previous case let us describe the closure of one of the fundamental regions of the sail (containing the point $(0,0,1)$) obtained by factoring by the operators $X_{a}$ and $Y_{a}$. Let $A=(0,0,1)$, $B=(2,1,1)$, $C=(7,4,2)$ and $D=(-a,1,0)$. Besides these points the vertex $E=(3,2,1)$ is in the fundamental region. Under the action of the operator $X_{a}$ the segment $AB$ maps to the segment $DC$ (the point $A$ maps to the point $D$ and $B$ --- to $C$). 
Under the action of the operator $Y_{a}$ the segment $AD$ maps to the segment $BC$ (the point $A$ maps to the point $B$ and $D$ --- to $C$). If $a=0$ then the integer lengths of the sides $AB$, $BC$, $CD$ and $DA$ are equal to 1, and the integer areas of the triangles $ABD$ and $BCD$ are equal to 1 and 3 respectively. The integer distances from the origin to the planes containing the triangles $ABD$ and $BCD$ are equal to $2$ and $1$ respectively. If $a>0$ then all integer lengths of the sides and the integer areas of all four triangles are equal to 1. The integer distances from the origin to the planes containing the triangles $ABD$, $BDE$, $BCE$ and $CED$ are equal to $a+2$, $a+1$, $1$ and $1$ respectively. Here and below the proofs of the statements on the generators are similar to the proofs of the corresponding statements in the proof of Proposition~\ref{t31}. \end{proof} \begin{proposition} \label{t33} Let $m=2a-5$, $n=7a-5$ $(a\ge 2)$; then the torus triangulation corresponding to the operator $A_{m,n}$ is homeomorphic to the following one: $$\epsfbox{fig2.5}$$ $($in the figure $a=5)$. \end{proposition} \begin{proof} Let us choose the following generators of the subgroup of integer operators mapping the sail to itself: $$ X_{a}=2A_{m,n}^{-1}+7I; \quad Y_{a}=A_{m,n}^2. $$ Let us describe the closure of one of the fundamental regions of the sail (containing the point $(0,0,1)$) obtained by factoring by the operators $X_{a}$ and $Y_{a}$. Let $A=(-14,4,-1)$, $B=(-1,1-a,7a^2-10a+4)$, $C=(1,5-7a,49a^2-72a+30)$ and $D=(0,0,1)$. Under the action of the operator $X_{a}$ the segment $AB$ maps to the segment $DC$ (the point $A$ maps to the point $D$ and $B$ --- to $C$). Under the action of the operator $Y_{a}$ the segment $AD$ maps to the segment $BC$ (the point $A$ maps to the point $B$ and $D$ --- to $C$). Besides these points the vertices $E=(-1,0,2a-1)$ and $F=(0,-a,7a^2-5a+1)$ are in the fundamental region. 
The interval $BE$ contains $a-2$ integer points, the interval $DF$ contains $a-1$, and the intervals $AD$ and $CB$ contain one point each. \end{proof} \begin{proposition} \label{t34} Let $m=a-1$, $n=3+2a$ $(a\ge 0)$; then the torus triangulation corresponding to the operator $A_{m,n}$ is homeomorphic to the following one: $$\epsfbox{fig2.6}$$ $($in the figure $a=4)$. \end{proposition} \begin{proof} Let us choose the following generators of the subgroup of integer operators mapping the sail to itself: $$ X_{a}=(2I+A_{m,n}^{-1})^{-2}; \quad Y_{a}=A_{m,n}^{-2}. $$ Let us consider the closure of one of the fundamental regions of the sail (containing the point $(0,0,1)$) obtained by factoring by the operators $X_{a}$ and $Y_{a}$. Let $A=(1,-2a-3,4a^2+11a+10)$, $B=(0,0,1)$, $C=(-4a-11,2a+5,-a-2)$ and $D=(-a-2,0,a^2+3a+3)$. Besides these points, the vertices $E=(-2,1,0)$, $F=(-2a-3,a+1,1)$ and $G=(0,-1-a,2a^2+5a+4)$ lie in the fundamental region. The intervals $BG$ and $DF$ contain $a$ integer points each. The interior of the pentagon $BEFDG$ contains $(a+1)^2$ integer points of the form $(-j,\,-i+j,\,(2a+3)i-(a+2)j+1)$, where $1\le i\le a+1$, $1\le j\le 2i-1$. Under the action of the operator $X_{a}$, the segment $AB$ maps to the segment $DC$ (the point $A$ maps to the point $D$, and $B$ to $C$). Under the action of the operator $Y_{a}$, the broken line $AGD$ maps to the broken line $BEC$ (the point $A$ maps to the point $B$, the point $G$ to the point $E$, and the point $D$ to the point $C$). \end{proof} \begin{proposition}\label{t35} Let $m=-(a+2)(b+2)+3$, $n=(a+2)(b+3)-3$ $(a\ge 0$, $b\ge 0)$; then the torus triangulation corresponding to the operator $A_{m,n}$ is homeomorphic to the following one: $$\epsfbox{fig2.7}$$ $($in the figure $b=5)$.
\end{proposition} \begin{proof} Let us choose the following generators of the subgroup of integer operators mapping the sail to itself: $$ X_{a,b}=((b+3)I-(b+2)A_{m,n}^{-1})A_{m,n}^{-2}; \quad Y_{a,b}=A_{m,n}^{-2}. $$ Let us consider the closure of one of the fundamental regions of the sail (containing the point $(0,0,1)$) obtained by factoring by the operators $X_{a,b}$ and $Y_{a,b}$. Let $A=(b^2+3b+3, b^2+2b-a+1,a^2b+3a^2+4ab+b^2+6a+5b+4)$, $B=(b^2+5b+6,b^2+4b+4)$, $C=(-ab-2a-2b-1,1,0)$ and $D=(0,0,1)$. The interval $BD$ contains $b+1$ integer points. Besides these points, the vertices $E=(b+4,b+3,b+2)$, $F=(b+2,b+1,a+b+2)$ and $G=(1,1,1)$ lie in the fundamental region. Under the action of the operator $X_{a,b}$, the segment $AB$ maps to the segment $DC$ (the point $A$ maps to the point $D$, and the point $B$ to the point $C$). Under the action of the operator $Y_{a,b}$, the broken line $AFD$ maps to the broken line $BEC$ (the point $A$ maps to the point $B$, the point $F$ to the point $E$, and the point $D$ to the point $C$). \end{proof} Note that the generators of the subgroup of operators commuting with the operator $A_{m,n}$ that do not transpose the sails can be expressed through the operators $A_{m,n}$ and $\alpha I +\beta A^{-1}_{m,n}$, where $\alpha$ and $\beta$ are nonzero integers. It turns out that in the general case the following statement holds: the determinants of the matrices of the operators $\alpha I +\beta A^{-1}_{m,n}$ and $\alpha I +\beta A^{-1}_{m+k\beta,n+k\alpha}$ are equal. In particular, if the absolute value of the determinant of the matrix of the operator $\alpha I +\beta A^{-1}_{m,n}$ equals one, then the absolute value of the determinant of the matrix of the operator $\alpha I +\beta A^{-1}_{m+k\beta,n+k\alpha}$ also equals one for an arbitrary integer $k$.
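This invariance can be checked by direct arithmetic. The sketch below assumes only the explicit cubic expression for this determinant, $|\alpha I+\beta A^{-1}_{m,n}|=\alpha^3+\alpha^2\beta m-\alpha \beta^2 n+\beta^3$, which is derived below; substituting $(m+k\beta,\,n+k\alpha)$ produces extra terms $k\alpha^2\beta^2$ that cancel exactly:

```python
# Determinant of alpha*I + beta*A_{m,n}^{-1}, written through the
# coefficients m, n of the characteristic polynomial (formula below).
def det_form(alpha, beta, m, n):
    return alpha**3 + alpha**2 * beta * m - alpha * beta**2 * n + beta**3

# The shift (m, n) -> (m + k*beta, n + k*alpha) leaves the determinant
# unchanged: the added terms +k*alpha^2*beta^2 and -k*alpha^2*beta^2 cancel.
for alpha, beta in [(3, 2), (7, -2), (9, 4), (9, 7)]:
    for m in range(-5, 6):
        for n in range(-5, 6):
            for k in range(-3, 4):
                assert det_form(alpha, beta, m, n) == \
                       det_form(alpha, beta, m + k * beta, n + k * alpha)
```

The check runs over a few of the pairs $(\alpha,\beta)$ listed below and a small grid of $m$, $n$, $k$; the identity itself holds for all integers.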
The torus triangulations for the other sequences of operators $A_{m_0+\beta s,n_0+\alpha s}$, where $s \in \mathbb{N}$ (besides those considered in Propositions~\ref{t31}--\ref{t35}), seem to have much in common (for example, the number of polygons and their types). Note that the numbers $\alpha$ and $\beta$ for such sequences satisfy the following interesting property. Since $$ |\alpha I +\beta A_{m,n}^{-1}|= \alpha^3+\alpha^2\beta m-\alpha \beta^2n+\beta^3, $$ integers $m$ and $n$ such that $|\alpha^3+\alpha^2\beta m-\alpha \beta^2n+\beta^3|=1$ exist iff $\alpha^3-1$ is divisible by $\beta$ and $\beta^3-1$ is divisible by $\alpha$, or $\alpha^3+1$ is divisible by $\beta$ and $\beta^3+1$ is divisible by $\alpha$. For instance, such pairs $(\alpha,\beta)$ for $10 \ge \alpha \ge \beta \ge -10$ (besides those described in Propositions~\ref{t31}--\ref{t35}) are the following: $(3,2)$, $(7,-2)$, $(9,-2)$, $(9,2)$, $(7,-4)$, $(9,4)$, $(9,5)$, $(9,7)$. \begin{figure} $$\epsfbox{fig2.2}$$ \caption{Torus triangulations for the operators $A_{m,n}$.}\label{pic2} \end{figure} In conclusion we present a table of squares filled with the torus triangulations of the sails constructed in this work whose convex hulls contain the point with coordinates $(0,0,1)$; see Fig.~\ref{pic2}. The torus triangulation for the sail of the two-dimensional continued fraction for the cubic irrationality constructed from the operator $A_{m,n}$ is shown in the square at the intersection of the row with number $n$ and the column with number $m$. If one of the roots of the characteristic polynomial of the operator equals $1$ or $-1$, we mark the square $(m,n)$ with the sign $*$ or $\#$ respectively. The squares corresponding to operators whose characteristic polynomial has two complex conjugate roots are painted light gray. \begin{thebibliography}{99} \bibitem{Arn1} V.~I.~Arnold, {\it Continued fractions}, M.: MCCME (2002).
\bibitem{BSh} Z.~I.~Borevich, I.~R.~Shafarevich, {\it Number theory}, 3rd ed., M. (1985). \bibitem{Del1} B.~N.~Delone, D.~K.~Faddeev, {\it The theory of irrationalities of the third degree}, M.-L.: Ac. Sci. USSR (1940). \bibitem{Kor1} E.~I.~Korkina, {\it La p\'eriodicit\'e des fractions continues multidimensionnelles}, C. R. Ac. Sci. Paris, v. 319 (1994), pp. 777--780. \bibitem{Kor2} E.~I.~Korkina, {\it Two-dimensional continued fractions. The simplest examples}, Proceedings of V.~A.~Steklov Math. Inst., v. 209 (1995), pp. 143--166. \bibitem{Mus} J.-O.~Moussafir, {\it Sails and Hilbert bases}, Func. an. and appl., v. 34 (2000), n. 2, pp. 43--49. \bibitem{Tsu} H.~Tsuchihashi, {\it Higher dimensional analogues of periodic continued fractions and cusp singularities}, Tohoku Math. Journ., v. 35 (1983), pp. 176--193. \end{thebibliography} \end{document}
\begin{document} \title{\LARGE \bf A Jacobi algorithm for distributed model predictive control} \pagestyle{empty} \thispagestyle{empty} \begin{abstract} \label{abstract} In this paper we introduce an iterative Jacobi algorithm for solving distributed model predictive control (DMPC) problems, with linear coupled dynamics and convex coupled constraints. The algorithm guarantees stability and persistent feasibility, and we provide a localized procedure for constructing an initial feasible solution by constraint tightening. Moreover, we show that the solution of the iterative process converges to the centralized MPC solution. The proposed iterative approach involves solving local optimization problems consisting of only a few subsystems, depending on the choice of the designer and the sparsity of dynamical and constraint couplings. The gain in the overall computational load compared to the centralized problem is balanced by the increased communication requirements. This makes our approach more applicable to situations where the number of subsystems is large, the coupling is sparse, and local communication is relatively fast and cheap. A numerical example illustrates the effects of the local problem size and the number of iterations on convergence to the centralized solution. \end{abstract} \section{Introduction} \label{intro} Model predictive control (MPC) is the most successful advanced control technology implemented in industry due to its ability to handle complex systems with hard input and state constraints \cite{Mac:02,MayRaw:00,GarPre:89}. The essence of MPC is to determine a control profile that optimizes a cost criterion over a prediction window and then to apply this control profile until new process measurements become available. Then the whole procedure is repeated and feedback is incorporated by using the measurements to update the optimization problem for the next step.
For the control problem of large-scale networked systems, centralized MPC is considered impractical, inflexible and unsuitable due to information exchange requirements and computational aspects. The subsystems in the network may belong to different authorities that prevent sending all necessary information to one processing center. Moreover, the optimization problem yielded by centralized MPC can be excessively large for real-time computation. In order to deal with these limitations, distributed MPC is proposed for control of such large-scale systems, by decomposing the overall system into small subsystems. The subsystems employ distinct MPC controllers, use local information from neighboring subsystems, and collaborate to achieve globally attractive solutions. Approaches to distributed MPC design differ from each other in the problem setup. In \cite{Camponogara:2002}, Camponogara \textit{et al.} studied stability of coordination-based distributed MPC with several information exchange conditions. In \cite{Dunbar:2006}, Dunbar and Murray proposed a distributed MPC scheme for problems with coupled cost function, utilizing predicted trajectories of the neighbors in each subsystem's optimization. Keviczky \textit{et al.} proposed a distributed MPC scheme with a sufficient stability test for dynamically decoupled systems in \cite{Keviczky:2006}, in which each subsystem optimizes also over the behaviors of its neighbors. Richards and How in \cite{Richards:2007} proposed a robust distributed MPC method for networks with coupled constraints, based on constraint tightening and a serial solution approach. A distributed MPC scheme for dynamically coupled systems called \emph{feasible-cooperation MPC} (FC-MPC) was proposed by Venkat \textit{et al.} in \cite{Venkat:2005,Venkat:2005_techrep}, based on a parallel synchronous approach for cooperative optimization. 
This scheme works only for input-coupled linear time-invariant (LTI) subsystem dynamics without state constraints, and is not applicable to problems with constraints between subsystems. In this paper, we propose an extension of this scheme in several ways in order to solve these issues. The distributed MPC algorithm described in this paper is able to handle LTI dynamics with general dynamical couplings, and the presence of convex coupled constraints. Each local controller optimizes not only for itself, but also for its neighbors in order to gain better overall performance. Global feasibility and stability are achieved, whilst the algorithm can be implemented using local communications. The proposed algorithm is based on an MPC framework with zero terminal point constraint for increased clarity and simplicity. We highlight an open research question that needs to be addressed for a full treatment of the terminal cost based version of this MPC framework, which would allow reduced conservativeness. While other distributed MPC methods typically assume an initial feasible solution to be available, we incorporate a decentralized method to determine an initial feasible solution. The problem formulation is described in Section~\ref{problem}, followed by two variations of the algorithm in Section~\ref{algorithm}. It is shown that an algorithm using local communication exists and it is equivalent to one that is based on global communication. In Section~\ref{feasibility} we analyze the feasibility, stability and optimality of the algorithm. Different ways of customizing the proposed algorithm and a trade-off between communications and computational aspects are discussed in Section~\ref{customizations}. Finally, Section~\ref{experiment} illustrates the algorithm in a numerical example and Section~\ref{conclusions} concludes the paper. \section{Problem description} \label{problem} \subsection{Coupled subsystem model} Consider a plant consisting of $M$ subsystems. 
Each subsystem's dynamics is assumed to be influenced directly by only a small number of other subsystems. Let each subsystem be represented by a discrete-time, linear time-invariant model of the form: \begin{align} \label{eqn_dmodel} x^i_{t+1} &= \sum_{j=1}^M (A_{ij} x^j_t + B_{ij} u^j_t), \end{align} where $x^i_{t}\in {\mathbb R}^{n_i}$ and $u^i_{t} \in {\mathbb R}^{m_i}$ are the states and control inputs of the $i$-th subsystem at time $t$, respectively. \begin{remark} This is a very general model class for describing dynamical coupling between subsystems and includes as a special case the combination of \emph{decentralized models} and \emph{interaction models} in \cite{Venkat:2005}. \end{remark} We define the \emph{neighborhood of $i$}, denoted by $\mathcal{N}^i$, as the set of indices of subsystems that have either direct dynamical or convex constraint coupling with subsystem $i$. In Figure~\ref{fig_Nr}, we demonstrate this with an \emph{interaction map} where each node stands for one subsystem, the dotted links show constraint couplings and the solid arrows represent dynamical couplings. The neighborhood $\mathcal{N}^4$ of subsystem $4$ is the set $\{ 4, 1, 2, 5 \}$. We will refer to vectors and sets related to nodes in $\mathcal{N}^i$ with a superscript $+i$. The collection of all other nodes that are not included in $\mathcal{N}^i$ will be referred to with a superscript $\bar{i}$. \subsection{Convex coupled constraints} Each subsystem $i$ is assumed to have local convex coupled constraints involving only a small number of the others.
If we fix the control inputs and the corresponding states of the nodes outside $\mathcal{N}^i$, the state and input constraints involving the nodes in $\mathcal{N}^i$ can be defined in the following way: \begin{align} \label{eqn_constraints} \nbri{x}{i}_{t} \in \nbri{\mathcal{X}}{i}(\onbr{x}{i}_t), \quad \nbri{u}{i}_t \in \nbri{\mathcal{U}}{i}(\onbr{u}{i}_t), \quad \forall i=1,\ldots, M \end{align} where $\nbri{\mathcal{X}}{i}(\onbr{x}{i}_t)$ and $\nbri{\mathcal{U}}{i}(\onbr{u}{i}_t)$ are closed and convex sets parameterized by the states and control inputs of nodes outside $\mathcal{N}^i$. \begin{remark} Note that the constraints involving nodes of $\mathcal{N}^i$ in general do not depend on every other state and input outside $\mathcal{N}^i$, only on the immediate neighbors of $\mathcal{N}^i$. The notation in \eqref{eqn_constraints} is used for simplicity. \end{remark} \begin{figure}\caption{Interaction map: each node stands for one subsystem, solid arrows represent dynamical couplings, and dotted links show constraint couplings.}\label{fig_Nr} \end{figure} \subsection{Centralized model} Let $x = \left[ {x^1}^T \cdots {x^M}^T \right]^T$ and $u = \left[ {u^1}^T \cdots {u^M}^T \right]^T$ denote the aggregated states and inputs of the full plant, with dimensions ${\mathbb R}^{\sum_{i=1}^M n_i}$ and ${\mathbb R}^{\sum_{i=1}^M m_i}$ respectively. The matrices $A$ and $B$ will denote the aggregated subsystem dynamics matrices and are assumed to be stabilizable: \begin{align*} A = \begin{bmatrix} A_{11} & \ldots & A_{1M} \\ \vdots & ~ & \vdots \\ A_{M1} & \ldots & A_{MM} \end{bmatrix}, B = \begin{bmatrix} B_{11} & \ldots & B_{1M} \\ \vdots & & \vdots \\ B_{M1} & \ldots & B_{MM} \end{bmatrix}. \end{align*} The full (centralized) plant model is thus represented as: \begin{align} \label{eqn_cmodel} \begin{split} x_{t+1} &= A x_t + B u_t. \end{split} \end{align} \begin{remark} The \emph{centralized model} defined in (\ref{eqn_cmodel}) is more general than the so-called \emph{composite model} employed in \cite{Venkat:2005}. In our approach, the \emph{centralized model} can represent couplings in both states and inputs.
In \cite{Venkat:2005}, the authors use an input-coupled \emph{composite model}, which requires the subsystems' states to be decoupled, allowing only couplings in inputs. \end{remark} \subsection{Centralized MPC problem} The centralized MPC problem is formulated based on a typical quadratic MPC framework \cite{Mac:02} with prediction horizon $N$, and the following quadratic cost function at time step $t$: \begin{align} V_t = \sum_{k=0}^{N-1} {{x_{k,t}}^T Q x_{k,t} + {u_{k,t}}^T R u_{k,t}} \label{eq_centralized_cost} \end{align} where $x_{k,t}$ denotes the centralized state vector at time $t + k$ obtained by starting from the state $x_{0,t} = x_t$ and applying to system (\ref{eqn_cmodel}) the input sequence $u_{0,t}, \ldots, u_{k-1,t}$. Here $Q=\diag{Q_1, \cdots, Q_M}$ and $R=\diag{R_1, \cdots, R_M}$, where $\diag{\cdot}$ denotes the block-diagonal matrix formed from its arguments. The matrices $Q_i$ are positive semidefinite and the $R_i$ are positive definite. Let $\pred{x}_t = [x_{1,t}^T, \cdots, x_{N,t}^T]^T$, $\pred{u}_t = [u_{0,t}^T, \cdots, u_{N-1,t}^T]^T$. The centralized MPC problem is then defined as: \begin{align} V_t^*(x_t) = \min_{\pred{x}_t, \pred{u}_t} ~&~ \sum_{k=0}^{N-1} x_{k,t}^T Q x_{k,t} + u_{k,t}^T R u_{k,t} \label{eq:cenMPC} \\ \text{s.t.} ~&~ x_{k+1,t} = A x_{k,t} + B u_{k,t}, k=0,...,N-1, \nonumber \\ &~ u_{k,t} \in \mathcal{U}, k=0,...,N-1, \nonumber \\ &~ x_{k,t} \in \mathcal{X}, k=1,...,N-1, \nonumber \\ &~ x_{N,t} = 0, \nonumber \\ &~ x_{0,t} = x_t, \nonumber \end{align} where $\mathcal{U}$ and $\mathcal{X}$ are defined as $\bigcap_{i=1}^M \CE{\mathcal{U}^{+i}}$ and $\bigcap_{i=1}^M \CE{\mathcal{X}^{+i}}$, respectively. The $\CE{\cdot}$ operator denotes cylindrical extension to the set of ${\mathbb R}^{\sum_{i=1}^M n_i}$ and ${\mathbb R}^{\sum_{i=1}^M m_i}$, respectively. In other words, if $\mathcal{X}^{+i} \subset {\mathbb R}^{d_i}$ then $\CE{\mathcal{X}^{+i}} = \mathcal{X}^{+i} \times {\mathbb R}^{\sum_{i=1}^M n_i - d_i}$.
The vector $x_t$ contains the measured states at time step $t$. Let $\pred{u}^*_t = [(u^*_{0,t})^T, \cdots, (u^*_{N-1,t})^T]^T$ denote the optimal control solution of (\ref{eq:cenMPC}) at time $t$. Then, the first sample of $\pred{u}^*_t$ is applied to the overall system: \begin{align} \label{eq:mpclaw} u_t=u^*_{0,t}. \end{align} The optimization~(\ref{eq:cenMPC}) is repeated at time $t+1$, based on the new state $x_{t+1}$. In \cite{KeeGil:88} it was shown that with prediction horizon $N$ long enough to allow a feasible solution to the optimization problem, the closed-loop system (\ref{eqn_cmodel})-(\ref{eq:mpclaw}) is stable. Before formulating the distributed MPC problems, we eliminate the state variables in the centralized MPC formulation. In the following we will also assume $t=0$ without loss of generality and drop subscript $t$ for simplicity. The set of dynamics equations allows us to write the predicted states as \begin{align} \pred{x} = \alpha \pred{u} + \beta(x_0), \end{align} where \begin{align*} \alpha = \begin{bmatrix} B & 0 & \ldots & 0 \\ AB & B & \ddots & \vdots \\ \vdots & \vdots & \ddots & 0 \\ A^{N-1}B & A^{N-2}B & \ldots & B \end{bmatrix}, \quad \beta (x_0) = \begin{bmatrix} A \\ A^2 \\ \vdots \\ A^N \end{bmatrix} x_0. \end{align*} Using the above equations, we can eliminate state variables in the centralized MPC leading to the following problem: \begin{align} \label{eqn_cmpc} \min_{\pred{u}} ~&~ V(\pred{u}, x_0) \\ = \min_{\pred{u}} ~&~ \pred{u}^T ( \alpha^T \pred{Q} \alpha + \pred{R})\pred{u} + 2 (\alpha^T \pred{Q} \beta )^T \pred{u} + \beta^T \pred{Q} \beta \nonumber \\ \text{s.t.} ~&~ \pred{u} \in \tilde{\mathcal{U}}, \nonumber \\ &~ \alpha \pred{u} + \beta (x_0) \in \tilde{\mathcal{X}}, \nonumber \\ &~ F \pred{u} + A^{N} x_0 = 0, \nonumber \end{align} where $\tilde{\mathcal{U}} = \prod_{k=1}^N \mathcal{U}$ and $\tilde{\mathcal{X}} = \prod_{k=1}^N \mathcal{X}$. $F=[A^{N-1}B, A^{N-2}B, \ldots, B]$ is the last block row of $\alpha$. 
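As a sanity check on the state elimination above, the following sketch builds $\alpha$ and $\beta(x_0)$ for a scalar system (the values of $A$, $B$, the horizon, and the input sequence are hypothetical, chosen only for illustration) and confirms that $\pred{x} = \alpha \pred{u} + \beta(x_0)$ reproduces the forward simulation of $x_{t+1}=Ax_t+Bu_t$:

```python
# Scalar illustration of the prediction matrices: alpha is block
# lower-triangular with (i, j) entry A^(i-j) * B for j <= i, and
# beta(x0) stacks A*x0, A^2*x0, ..., A^N*x0.
A, B, N, x0 = 0.9, 0.5, 4, 2.0          # hypothetical system and state
u = [1.0, -0.5, 0.25, 0.0]              # hypothetical input sequence

alpha = [[A**(i - j) * B if j <= i else 0.0 for j in range(N)]
         for i in range(N)]
beta = [A**(i + 1) * x0 for i in range(N)]

# Predicted states via x_hat = alpha * u_hat + beta(x0).
x_pred = [sum(alpha[i][j] * u[j] for j in range(N)) + beta[i]
          for i in range(N)]

# Direct forward simulation of x_{t+1} = A x_t + B u_t.
x_sim, x = [], x0
for k in range(N):
    x = A * x + B * u[k]
    x_sim.append(x)

assert all(abs(a - b) < 1e-12 for a, b in zip(x_pred, x_sim))
```

The same construction applies verbatim in the matrix case, with $A^{i-j}B$ understood as a matrix product and the rows stacked blockwise.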
The matrices $\pred{Q}$ and $\pred{R}$ are block-diagonal, built from weighting matrices $Q$ and $R$. Therefore $\pred{Q}$ is positive semidefinite and $\pred{R}$ is positive definite, making the cost function $V(\pred{u}, x_0)$ strictly convex for any given $x_0$. \subsection{Distributed MPC problem} \label{dmpc} We will solve problem (\ref{eqn_cmpc}) by dividing it into smaller, overlapping DMPC problems, with each DMPC assigned to one subsystem but optimizing also over neighboring subsystems at the same time. At each time step, by solving the DMPCs and combining the local solutions in an iterative process, we will get an increasingly accurate approximate solution of the centralized MPC problem. In the DMPC problem for subsystem $i$, the global cost function is optimized with respect to a reduced set of variables: the control inputs of $i$ and its neighbors, denoted together by $\nbri{\pred{u}}{i}$. Each DMPC problem will guarantee that all constraints of the centralized MPC problem are satisfied. The DMPC of subsystem $i$ can be recast into the following optimization problem: \begin{align} \label{eqn_dmpc} \min_{\nbri{\pred{u}}{i}} ~&~ V (\pred{u}, x_0) \\ \text{s.t.} ~&~ \pred{u} \in \tilde{\mathcal{U}}, \nonumber \\ &~ \alpha \pred{u} + \beta (x_0) \in \tilde{\mathcal{X}}, \nonumber \\ &~ F \pred{u} + A^{N} x_0 = 0, \nonumber \\ &~ \onbr{\pred{u}}{i} = \aonbr{\pred{u}}{i} \nonumber \end{align} where $\aonbr{\pred{u}}{i}$ denotes the \emph{assumed inputs} of all non-neighbors of $i$. For now we assume that at the beginning of each step, each node $j$ transmits its \emph{assumed inputs} $\pred{u}^j = [(u^j_0)^T,...,(u^j_{N-1})^T]^T$ to the entire network; node $i$ receives these vectors from all $j \not\in \mathcal{N}^i$ to form $\aonbr{\pred{u}}{i}$. Note that $\pred{u}$ is the combination of $\nbri{\pred{u}}{i}$ and $\onbr{\pred{u}}{i}$.
For each $i$, we can construct two pairs of matrices $\nbri{\alpha}{i}$, $\onbr{\alpha}{i}$ and $\nbri{F}{i}$, $\onbr{F}{i}$ so that: \begin{align} \alpha \pred{u} &= \nbri{\alpha}{i} \nbri{\pred{u}}{i} + \onbr{\alpha}{i} \onbr{\pred{u}}{i} \\ F \pred{u} &= \nbri{F}{i} \nbri{\pred{u}}{i} + \onbr{F}{i} \onbr{\pred{u}}{i} \nonumber \end{align} By eliminating the input constraints which involve only $\onbr{\pred{u}}{i}$, the DMPC problem~(\ref{eqn_dmpc}) is equivalent to the following \begin{align} \label{eqn_dmpci} \min_{\nbri{\pred{u}}{i}} ~&~ V (\pred{u}, x_0) \\ \text{s.t.} ~&~ \nbri{\pred{u}}{i} \in \nbri{\tilde{\mathcal{U}}}{i}, \nonumber \\ &~ \nbri{\alpha}{i} \nbri{\pred{u}}{i} + \onbr{\alpha}{i} \onbr{\pred{u}}{i} + \beta (x_0) \in \tilde{\mathcal{X}}, \nonumber \\ &~ \nbri{F}{i} \nbri{\pred{u}}{i} + \onbr{F}{i} \onbr{\pred{u}}{i} + A^{N} x_0 = 0, \nonumber \\ &~ \onbr{\pred{u}}{i} = \aonbr{\pred{u}}{i} \nonumber \end{align} in which $\nbri{\tilde{\mathcal{U}}}{i} = \prod_{k=1}^N \nbri{\mathcal{U}}{i}(\onbr{u}{i}_k)$. The optimal solution of (\ref{eqn_dmpc}), (\ref{eqn_dmpci}) will be denoted by $\opnbri{\pred{u}}{i}$. For implementation, we introduce the notion of the \emph{$r$-step extended neighborhood} for each subsystem $i$, denoted by $\mathcal{N}^i_r$, which contains all nodes that can be reached from node $i$ in not more than $r$ links. $\mathcal{N}^i_r$ is the union of subsystem indices in the neighborhoods of all subsystems in $\mathcal{N}^i_{r-1}$: \begin{align} \mathcal{N}^i_r = \bigcup_{j \in \mathcal{N}^i_{r-1} } \mathcal{N}^j, \end{align} where $\mathcal{N}^i_1 := \mathcal{N}^i$. We see that in order to solve (\ref{eqn_dmpci}), subsystem $i$ only needs information from other subsystems in $\mathcal{N}^i_{N+1}$; the initial states and assumed inputs of subsystems outside $\mathcal{N}^i_{N+1}$ are redundant.
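The $r$-step extended neighborhood can be computed by simply iterating the union above. The sketch below uses a hypothetical interaction map consistent with the example $\mathcal{N}^4=\{4,1,2,5\}$ from Figure~\ref{fig_Nr}; the remaining neighborhoods are invented for illustration:

```python
# Hypothetical neighborhoods N^i (each set contains i itself), chosen so
# that N^4 = {4, 1, 2, 5} as in the interaction-map example, and symmetric
# in the sense that i in N^j iff j in N^i.
N = {1: {1, 2, 4}, 2: {2, 1, 3, 4}, 3: {3, 2},
     4: {4, 1, 2, 5}, 5: {5, 4, 6}, 6: {6, 5}}

def extended_neighborhood(i, r):
    """N^i_r: all nodes reachable from node i in not more than r links."""
    region = set(N[i])                      # N^i_1 = N^i
    for _ in range(r - 1):                  # N^i_r = union of N^j over j in N^i_{r-1}
        region = set().union(*(N[j] for j in region))
    return region

assert extended_neighborhood(4, 1) == {1, 2, 4, 5}
assert extended_neighborhood(4, 2) == {1, 2, 3, 4, 5, 6}
```

With a sparse map and $M \gg N$, the region $\mathcal{N}^i_{N+1}$ returned for $r=N+1$ stays far smaller than the full network, which is exactly what makes the DMPC problems local.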
\begin{remark} The MPC formulation using terminal point constraint described above simplifies our exposition but it is rather conservative. This could be alleviated by using a dual-mode MPC formulation with a terminal cost function. However, in order for this to be a truly distributed approach, the terminal cost function associated with the terminal controllers should have a sparse structure. This would allow the construction of a centralized Lyapunov function in a local way, using only local information. In \cite{Venkat:2005}, the authors try to bypass this obstacle by using additional restrictive assumptions: they employ zero terminal controllers and require all subsystems and interaction models (coupled via the inputs only) to be stable. These assumptions can actually be more conservative than using a terminal point constraint, preventing the application of the FC-MPC method in general dynamically coupled systems. Finding terminal controllers that lead to a structured terminal cost is an open problem and subject of our current research. \end{remark} \section{Jacobi-type algorithm} \label{algorithm} In this section we present an iterative procedure to approximate the centralized MPC solution by repeatedly calculating and combining the solutions of the local DMPC problems described in the previous section. We will show two versions of our approach, which are based on Jacobi distributed optimization \cite{BertTsit:parallel-comp-book}. The proposed algorithms maintain feasibility of intermediate solutions and converge to the centralized MPC solution asymptotically. The first version uses global communication and can be considered as an extension of FC-MPC \cite{Venkat:2005}. The second version relies on local communication and represents the main contribution of the paper. We will show that the two versions are equivalent to each other, which leads to simplified analysis in Section~\ref{feasibility}.
\subsection{Globally and locally communicating algorithms} For each time step $t$, we assume that a feasible input $\feas{\pred{u}}$ is given for the entire system. (Section~\ref{subsec:localfeas} discusses a method of obtaining such a feasible initial control sequence in a distributed way, given a known initial condition.) In each step of the proposed DMPC scheme, the subsystems cooperate and perform a Jacobi algorithm, where each subsystem iteratively solves the optimization problem (\ref{eqn_dmpci}) with regard to its local variables, and incorporates a convex combination of neighboring local solutions. During every MPC sampling period, a distributed iterative loop is employed, indexed by $p$. At each iteration $p$, $\feas{\pred{u}}$ is updated. We will refer to vectors obtained in these iterations with subscript $(p)$. For $p=1$, we initialize the iteration with $\pred{u}_{(0)} = \feas{\pred{u}}$. Let $\pred{u}^{s|i}_{(p)}$ denote the control sequence of the whole system stored in the memory of subsystem $i$ at iteration $p$. For making convex combinations, each subsystem $i$ is assigned a weight $\lambda^i \in (0,1)$, satisfying $\sum_{i=1}^{M} \lambda^i = 1$. The choice of weights is arbitrary and could depend on the specific problem, the simplest choice being equal weights $(\lambda^i = \frac{1}{M}, \forall i)$. We propose then the following iterative algorithm: \begin{algorithm}[Jacobi DMPC with global communication] \label{alg_dmpc} Given $N$, $p_{max} > 0$, $\epsilon >0$: \begin{enumerate} \item[\textbf{1.}] $p \leftarrow 1$, $\pred{u}_{(0)} \leftarrow \feas{\pred{u}}, \rho^i \leftarrow$ large number, $\forall i = 1, \ldots, M$. \textbf{while} $\rho^i > \epsilon$ for some $i$ and $p \leq p_{max}$ \begin{enumerate} \item[\textbf{a.}] \textbf{for} each $i = 1,\ldots, M$ \\ Construct new $\aonbr{\pred{u}}{i}$ from $\pred{u}_{(p-1)}$.\\ Solve problem (\ref{eqn_dmpc}) to get $\opnbri{\pred{u}}{i}_{(p)}$.
Construct a global input vector $\pred{u}^{s|i}_{(p)}$ from $\opnbri{\pred{u}}{i}_{(p)}$ and $\aonbr{\pred{u}}{i}$. Transmit $\pred{u}^{s|i}_{(p)}$ to a central update location. \textbf{end (for)} \item[\textbf{b.}] Merge local solutions according to the following convex combination: \begin{align} \pred{u}_{(p)} = \sum_{i=1}^M \lambda^i \; \pred{u}^{s|i}_{(p)} \end{align} \item[\textbf{c.}] Compute the progress and iterate: \begin{align*} \rho^i &= \| \pred{u}_{(p)} - \pred{u}_{(p-1)} \| \end{align*} \end{enumerate} $p \leftarrow p+1$ \\ \textbf{end (while)} \item[\textbf{2.}] Each subsystem implements the first input value in $\pred{u}^i_{(p)}$. \begin{align} \label{eq:dmpclaw} u^i_t=u^i_{0,(p)}. \end{align} \item[\textbf{3.}] Shift the predicted input sequence by one step to make a feasible solution for the following MPC update: \begin{align*} \feas{\pred{u}} = [u_{1,(p)}, \cdots, u_{N-1,(p)}, 0]. \end{align*} \item[\textbf{4.}] $t \leftarrow t+1$. Measure new initial states $x_t$, go to step \textbf{1.} \end{enumerate} \end{algorithm} Algorithm~\ref{alg_dmpc} requires the existence of a central coordinator that communicates with all subsystems and performs the convex combination to find $\pred{u}_{(p)}$. For implementation, a control scheme without global communication is desired. Next we introduce a variation of Algorithm~\ref{alg_dmpc} that only needs local communication. Let $\pred{u}^{i,f}$ denote the feasible input sequence of subsystem $i$, and $\pred{u}^{j|i*}_{(p)}$ denote the control sequence of subsystem $j$ computed by subsystem $i$ when solving its DMPC problem (\ref{eqn_dmpci}) at iteration $p$.
\begin{algorithm}[Jacobi DMPC with local communication\footnote{`Local communication' in this context means that each subsystem only communicates with `nearby' subsystems in a small region.}] \label{alg_dmpci} Given $N$, $p_{max} > 0$, $\epsilon >0$, and assuming each subsystem $i$ knows a feasible input $\pred{u}^{j,f}$ for all subsystems $j \in \mathcal{N}^i_{N+1}$. \textbf{1.} $p \leftarrow 1$, $\rho^i \leftarrow$ large number, $\forall i = 1, \ldots, M$. \textbf{while} $\rho^i > \epsilon$ for some $i$ and $p \leq p_{max}$ \textbf{for} each $i = 1,\ldots, M$ \begin{enumerate} \item[\textbf{a.}] Subsystem $i$ solves the local problem (\ref{eqn_dmpci}), using $\{ \pred{u}^{j,f} | \forall j \in \mathcal{N}^i_{N+1} \backslash \mathcal{N}^i \}$ as assumed inputs for subsystems outside $\mathcal{N}^i$ but inside $\mathcal{N}^i_{N+1}$. The solution is comprised of $\{ \pred{u}^{j|i*}_{(p)}, \; j \in \mathcal{N}^i \}$. \item[\textbf{b.}] Subsystem $i$ receives solutions for itself calculated by its neighbors $\{ \pred{u}^{i|j*}_{(p)}, \; j \in \mathcal{N}^i \}$, then updates its solution for iteration $p$ according to: \begin{align} \pred{u}^{i}_{(p)} = \sum_{j \in \mathcal{N}^i} \lambda^j \pred{u}^{i|j*}_{(p)} + \left(1 - \sum_{j \in \mathcal{N}^i} \lambda^j\right) \pred{u}^{i}_{(p-1)} \end{align} \item[\textbf{c.}] Calculate the progress: \begin{align*} \rho^i &= \| \pred{u}^{i}_{(p)} - \pred{u}^{i}_{(p-1)} \| \end{align*} \item[\textbf{d.}] $\pred{u}^{i,f} \leftarrow \pred{u}^{i}_{(p)}$, subsystem $i$ transmits new $\pred{u}^{i,f}$ to all subsystems in $\mathcal{N}^i_{N+1}$. \end{enumerate} \textbf{end (for)} $p \leftarrow p+1 $ \textbf{end (while)} \textbf{2.} Each subsystem $i$ implements the first input value in $\pred{u}^i_{(p)}$: \begin{align} \label{eq:dmpclaw2} u^i_t=u^i_{0,(p)}.
\end{align} \textbf{3.} Shift the predicted input sequence by one step to make a feasible solution for the following MPC update: \begin{align*} \pred{u}^{i,f} = [u^i_{1,(p)}, \cdots, u^i_{N-1,(p)}, 0], \;\; i = 1,\ldots, M . \end{align*} \textbf{4.} $t \leftarrow t+1 $. Measure new initial states $x^i_t$, go to step \textbf{1.} \end{algorithm} The major difference between Algorithms~\ref{alg_dmpc} and \ref{alg_dmpci} is at step \textbf{1.b}: in Algorithm~\ref{alg_dmpc} the convex combination is performed on the global control input vector, while in Algorithm~\ref{alg_dmpci} each local controller performs a convex combination using its local control input vectors, therefore removing the need for a coordinator. In the sequel, we will show that the two algorithms are equivalent, thus allowing us to implement Algorithm~\ref{alg_dmpci} while using Algorithm~\ref{alg_dmpc} for analysis. \subsection{Equivalence of the two algorithms} The two crucial differences between Algorithm~\ref{alg_dmpci} and \ref{alg_dmpc} are the communication requirement and the update method. We already mentioned that the optimization problem (\ref{eqn_dmpci}) is equivalent to (\ref{eqn_dmpc}), thus each subsystem only has to transmit its new results to the subsystems inside $\mathcal{N}^i_{N+1}$. This leads to the local communications in Algorithm~\ref{alg_dmpci}. Now we will show that the local update of Algorithm~\ref{alg_dmpci} is also equivalent to the global update of Algorithm~\ref{alg_dmpc}. Consider Algorithm~\ref{alg_dmpci}: at the beginning of iteration $p$, a local input vector $\pred{u}^i_{(p-1)}$ is given for each $i$. Then each subsystem $j \in \mathcal{N}^i$ computes $\pred{u}^{i|j*}_{(p)}$ and sends these solutions to $i$, which forms the final update for itself, $\pred{u}^{i}_{(p)}$.
Note that $i \in \mathcal{N}^j \Leftrightarrow j \in \mathcal{N}^i$, so we have \begin{align} \label{eqn_localupdate} \pred{u}^{i}_{(p)} &= \sum_{j \in \mathcal{N}^i} \lambda^j \pred{u}^{i|j*}_{(p)} + \left(1 - \sum_{j \in \mathcal{N}^i} \lambda^j\right) \pred{u}^{i}_{(p-1)} \nonumber \\ &= \sum_{j \in \mathcal{N}^i} \lambda^j \; \pred{u}^{i|j*}_{(p)} - \left( \sum_{j \in \mathcal{N}^i} \lambda^j \right) \pred{u}^{i}_{(p-1)} + \pred{u}^{i}_{(p-1)} \end{align} Now consider Algorithm~\ref{alg_dmpc} at the beginning of iteration $p$, starting from the same $\pred{u}^i_{(p-1)}$ as in Algorithm~\ref{alg_dmpci}. The local problems of the subsystems $j \in \mathcal{N}^i$ yield the solutions $\{ \pred{u}^{i|j*}_{(p)} ~|~ j \in \mathcal{N}^i \}$, which are equal in both algorithms and are transmitted to subsystem $i$. Then the global update $\pred{u}_{(p)}$ yields $\pred{u}^{i}_{(p)}$ as follows: \begin{align} \label{eqn_globalupdate} \pred{u}_{(p)} &= \sum_{i=1}^M \lambda^i \; \pred{u}^{s|i}_{(p)} \nonumber \\ \Leftrightarrow \pred{u}^{i}_{(p)} &= \sum_{j \in \mathcal{N}^i} \lambda^j \; \pred{u}^{i|j*}_{(p)} + \sum_{j \not\in \mathcal{N}^i} \lambda^j \; \pred{u}^{i|j*}_{(p)} \nonumber \\ &= \sum_{j \in \mathcal{N}^i} \lambda^j \; \pred{u}^{i|j*}_{(p)} + \sum_{j \not\in \mathcal{N}^i} \lambda^j \; \pred{u}^{i}_{(p-1)} \nonumber \\ &= \sum_{j \in \mathcal{N}^i} \lambda^j \left( \pred{u}^{i|j*}_{(p)} - \pred{u}^{i}_{(p-1)} \right) + \sum_{j=1}^M \lambda^j \; \pred{u}^{i}_{(p-1)} \nonumber \\ &= \sum_{j \in \mathcal{N}^i} \lambda^j \; \pred{u}^{i|j*}_{(p)} - \left( \sum_{j \in \mathcal{N}^i} \lambda^j \right) \pred{u}^{i}_{(p-1)} + \pred{u}^{i}_{(p-1)} \end{align} In the second equality we used that a subsystem $j \not\in \mathcal{N}^i$ does not optimize over the inputs of $i$ and hence keeps $\pred{u}^{i|j*}_{(p)} = \pred{u}^{i}_{(p-1)}$; in the third we used $\sum_{j=1}^M \lambda^j = 1$. Comparing (\ref{eqn_localupdate}) and (\ref{eqn_globalupdate}) we can see that the local update of Algorithm~\ref{alg_dmpci} and the global update of Algorithm~\ref{alg_dmpc} yield the same result.
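This equivalence can also be checked numerically. The following sketch uses made-up data: a chain of $M=5$ subsystems, horizon length $3$, random convex weights, and random "local solutions" in which each subsystem keeps the previous iterate for subsystems outside its neighborhood.

```python
import numpy as np

# Numerical check that the local (neighborhood-only) update equals the
# global convex-combination update; all sizes and data are made up.
rng = np.random.default_rng(0)
M = 5
neigh = {i: [j for j in (i - 1, i, i + 1) if 0 <= j < M] for i in range(M)}
lam = rng.dirichlet(np.ones(M))          # convex weights lambda^j, sum to 1
u_prev = rng.standard_normal((M, 3))     # u^i_{(p-1)} for each subsystem

# u_sol[j][i] plays the role of u^{i|j*}_{(p)}: subsystem j's solution for i.
# For i outside N^j, subsystem j keeps the previous iterate unchanged.
u_sol = {j: {i: (rng.standard_normal(3) if i in neigh[j] else u_prev[i])
             for i in range(M)} for j in range(M)}

# Global update: u_{(p)} = sum_j lambda^j u^{s|j}_{(p)}, componentwise in i
u_global = np.array([sum(lam[j] * u_sol[j][i] for j in range(M))
                     for i in range(M)])
# Local update: convex combination over neighbors only, rest from u_prev
u_local = np.array([sum(lam[j] * u_sol[j][i] for j in neigh[i])
                    + (1.0 - sum(lam[j] for j in neigh[i])) * u_prev[i]
                    for i in range(M)])
assert np.allclose(u_global, u_local)
```

The check relies only on the chain neighborhoods being symmetric ($i \in \mathcal{N}^j \Leftrightarrow j \in \mathcal{N}^i$) and the weights summing to one, mirroring the derivation above.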
This implies that Algorithm~\ref{alg_dmpci} does exactly what Algorithm~\ref{alg_dmpc} does, except that it only needs regional information (each subsystem $i$ communicates only with subsystems in the region $\mathcal{N}^i_{N+1}$). If $M \gg N$ and the interaction map is relatively sparse, this region will be much smaller than the whole network, so the DMPC problems can be regarded as local optimization problems. The equivalence of the two proposed DMPC algorithms allows us to prove feasibility, stability and optimality by analyzing the globally communicating algorithm, which is more convenient for analysis than its locally communicating counterpart. In the next sections, we refer to Algorithm~\ref{alg_dmpc} for analysis. \section{Feasibility, stability and optimality} \label{feasibility} \subsection{Constructing initial feasible solutions locally} \label{subsec:localfeas} Although in the current literature it is typically assumed that an initial centralized feasible solution exists and is available, in this section we give a simple, implementable procedure for actually constructing one in a distributed fashion, assuming that the global initial state is available in advance. The initial feasible prediction input $\feas{\pred{u}}$ at time $t=0$ can be calculated locally by using an inner approximation of the global feasible set, which we denote by $\Omega$, based on all the constraints appearing in (\ref{eqn_cmpc}) and on the global initial state. Consider an inner-hyperbox (or hyperrectangle) approximation $\mathcal{B}$ of the feasible set $\Omega$, which then takes the form of a Cartesian product: \begin{align} \label{eq:boxes} \mathcal{B} = \mathcal{B}^1 \times \cdots \times \mathcal{B}^M, \quad \mathcal{B} \subset \Omega. \end{align} This approximation essentially decomposes and decouples the constraints among subsystems by performing constraint tightening.
Each subsystem $i$ will thus have to include $\mathcal{B}^i$ in its local problem setup. Since the Cartesian product of these local constraint sets is included in the globally feasible set $\Omega$, any combination of local solutions within $\mathcal{B}^i$ will be globally feasible as well. Needless to say, the local constraint sets arising from this inner-hyperbox approximation will in general be quite conservative, but they allow a centralized feasible solution to be constructed locally to initialize Algorithm~\ref{alg_dmpc}. Calculation of the inner-hyperbox approximation can be performed a priori and the local $\mathcal{B}^i$ constraints distributed to each subsystem. A maximum-volume inner box of $\Omega$ can be computed in polynomial time by the procedure described in \cite{BemFil:04}. Let us denote the dimension of the global input vector by $d=\sum_{i=1}^M m_i$. If we represent a box as $\mathcal{B}(\underline{u},\bar{u})=\{ u \in {\mathbb R}^d ~|~ \underline{u} \leq u \leq\bar{u} \}$, then $\mathcal{B}(\underline{u}^*,\underline{u}^* + v^*)$ is a maximum volume inner box of the full-dimensional polytope defined as $\Omega=\{u \in {\mathbb R}^d ~|~ Au \leq b \}$, where $(\underline{u}^*,v^*)$ is an optimal solution of \begin{align} \label{eq:innerbox} \begin{split} \max_{\underline{u}, v} ~&~ \sum_{j=1}^{d} \ln v_j \\ \text{s.t.}~&~A\underline{u} + A^{+}v \leq b. \end{split} \end{align} The matrix $A^{+}$ is the positive part of $A$. Obtaining the local component-wise constraints $\mathcal{B}^i$ is then straightforward. For time steps other than $t=0$, we construct a feasible solution by performing step \textbf{3} of Algorithm~\ref{alg_dmpc}. \subsection{Maintaining feasibility throughout the iterations} Observe that in step \textbf{1.a}, we get $M$ feasible solutions $\pred{u}^{s|i}_{(p)}$ for the centralized problem (\ref{eqn_cmpc}).
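Returning briefly to the inner-box problem (\ref{eq:innerbox}) of the previous subsection: a minimal numerical sketch is given below on a hypothetical instance where $\Omega$ is the unit square, so the maximum-volume inner box should recover the square itself. The solver choice (SciPy's SLSQP) and all numbers are illustrative, not part of the method of \cite{BemFil:04}.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical polytope Omega = {u : A u <= b} = the unit square [0,1]^2.
A = np.array([[1., 0.], [-1., 0.], [0., 1.], [0., -1.]])
b = np.array([1., 0., 1., 0.])
Aplus = np.maximum(A, 0.0)               # positive part A^+ of A
d = A.shape[1]
G = np.hstack([A, Aplus])                # A u + A^+ v <= b, with z = (u, v)

def neg_log_volume(z):
    # maximize sum_j ln v_j  <=>  minimize its negative
    return -np.sum(np.log(np.maximum(z[d:], 1e-12)))

res = minimize(neg_log_volume,
               x0=np.concatenate([np.full(d, 0.25), np.full(d, 0.25)]),
               constraints=[{"type": "ineq", "fun": lambda z: b - G @ z}],
               bounds=[(None, None)] * d + [(1e-9, None)] * d,
               method="SLSQP")
u_lo, v = res.x[:d], res.x[d:]           # inner box is B(u_lo, u_lo + v)
```

Here the optimal box is $\mathcal{B}((0,0),(1,1))$, i.e., the whole square, and the component-wise sets $\mathcal{B}^i$ are read off directly from `u_lo` and `v`.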
In step \textbf{1.b}, we construct the new control profile $\pred{u}_{(p)}$ as a convex combination of these solutions. Since problem (\ref{eqn_cmpc}) is a convex constrained QP, any convex combination of $\{ \pred{u}^{s|i}_{(p)} \}_{i=1}^{M}$ also satisfies the convex constraint set. Therefore $\pred{u}_{(p)}$ is a feasible solution of the optimization problems (\ref{eqn_cmpc}) and (\ref{eqn_dmpc}) for all $i$. \subsection{Stability analysis} \label{stability} The proof of stability of the closed-loop system (\ref{eqn_cmodel})-(\ref{eq:dmpclaw}) follows standard arguments for the most part \cite{KeeGil:88,MayRaw:00}. For brevity, we describe only the most important part in the following, namely the nonincreasing property of the value function. The proof in this section is closely related to the stability proof of the FC-MPC method in \cite{Venkat:2005_techrep}; the main difference is due to the underlying MPC schemes: our method uses terminal point constraint MPC while FC-MPC uses dual-mode MPC. Let $\bar{p}_t$ and $\bar{p}_{t+1}$ stand for the last iteration number of Algorithm~\ref{alg_dmpc} at step $t$ and $t+1$, respectively. Let $V_{t}=V (\pred{u}_{(\bar{p}_t)}, x_t)$ and $V_{t+1}=V (\pred{u}_{(\bar{p}_{t+1})}, x_{t+1})$ denote the cost values associated with the final combined solution at step $t$ and $t+1$, respectively. At step $t+1$, let $\Phi^{i}_{(p+1)} = V (\pred{u}^{s|i}_{(p+1)}, x_{t})$ denote the global cost associated with the solution of subsystem $i$ at iterate $p+1$, and $\Phi_{(p)} = V (\pred{u}_{(p)}, x_{t})$ the cost corresponding to the combined solution at iterate $p$.
The global cost function can be used as a Lyapunov function, and its nonincreasing property can be shown following the chain: \begin{align*} V_{t+1} \leq \cdots \leq \Phi_{(p+1)} &\leq \Phi_{(p)} \leq \cdots \\ \cdots &\leq \Phi_{(1)} \leq V_{t} - x_{t}^T Q x_{t} - u_{t}^T R u_{t} \end{align*} The two main components of the above inequality chain are shown in the following two parts. \textbf{Showing that $\Phi_{(p+1)} \leq \Phi_{(p)}$} The cost $V (\pred{u}, x_t)$ is a convex function of $\pred{u}$, thus \begin{align} \label{ineq_Phi_p_1} V \left( \sum_{i=1}^M \lambda^i \; \pred{u}^{s|i}_{(p+1)}, x_t \right) \leq \sum_{i=1}^M \lambda^i \; V \left( \pred{u}^{s|i}_{(p+1)}, x_t \right) \end{align} Moreover, each $\pred{u}^{s|i}_{(p+1)}$ is the optimizer of the $i$-th local problem starting from $\pred{u}_{(p)}$, therefore we have: \begin{align} \label{ineq_Phi_i} V \left( \pred{u}^{s|i}_{(p+1)}, x_t \right) = \Phi^i_{(p+1)} \leq \Phi_{(p)}, \quad i = 1,\ldots, M \end{align} Substituting (\ref{ineq_Phi_i}) into (\ref{ineq_Phi_p_1}) leads to: \begin{align*} \Phi_{(p+1)} = V \left(\sum_{i=1}^M \lambda^i \; \pred{u}^{s|i}_{(p+1)}, x_t \right) \leq \sum_{i=1}^M \lambda^i \; \Phi_{(p)} = \Phi_{(p)} \end{align*} Using the above inequality, we can trace back to $p=1$: \begin{align*} V_{t+1} \leq \cdots \leq \Phi_{(p+1)} \leq \Phi_{(p)} \leq \cdots \leq \Phi_{(1)}.
\end{align*} \textbf{Showing that $\Phi_{(1)} \leq V_{t} - x_{t}^T Q x_{t} - u_{t}^T R u_{t}$} At step $t+1$ and iteration $p=1$, recall that the initial feasible solution $\feas{\pred{u}}$ of the centralized MPC is built by Algorithm~\ref{alg_dmpc} at the end of step $t$ in the following way: \begin{align*} \feas{\pred{u}} = [u_{1,(\bar{p}_t)}, \cdots, u_{N-1,(\bar{p}_t)}, 0] \end{align*} The DMPC of each subsystem $i$ optimizes the cost with respect to $\nbri{\pred{u}}{i}$ starting from $\feas{\pred{u}}$, therefore $\forall i = 1,\ldots, M$: \begin{align*} V \left( \pred{u}^{s|i}_{(1)}, x_t \right) & \leq V \left( \feas{\pred{u}}, x_t \right) \\ \Leftrightarrow \;\; \Phi^{i}_{(1)} & \leq \sum_{k=1}^{N-1} \left( x_{k,(\bar{p}_t)}^T Q x_{k,(\bar{p}_t)} + u_{k,(\bar{p}_t)}^T R u_{k,(\bar{p}_t)} \right) \\ \Leftrightarrow \;\; \Phi^{i}_{(1)} & \leq \Phi (\pred{u}_{(\bar{p}_t)}) - x_{0,(\bar{p}_t)}^T Q x_{0,(\bar{p}_t)} - u_{0,(\bar{p}_t)}^T R u_{0,(\bar{p}_t)} \\ \Leftrightarrow \;\; \Phi^{i}_{(1)} & \leq V_{t} - x_{t}^T Q x_{t} - u_{t}^T R u_{t}. \end{align*} Moreover, due to the convexity of $V (\pred{u}, x_t)$ and the convex combination update $\pred{u}_{(1)} = \sum_{i=1}^M \lambda^i \pred{u}^{s|i}_{(1)}$, we obtain \begin{align*} \Phi_{(1)} &= V \left( \sum_{i=1}^M \lambda^i \; \pred{u}^{s|i}_{(1)}, x_t \right) \leq \sum_{i=1}^M \lambda^i \; V \left( \pred{u}^{s|i}_{(1)}, x_t \right) \\ \Rightarrow \Phi_{(1)} & \leq \sum_{i=1}^M \lambda^i \; \Phi^{i}_{(1)} \leq \sum_{i=1}^M \lambda^i \; [V_{t} - x_{t}^T Q x_{t} - u_{t}^T R u_{t}] \\ \Leftrightarrow \Phi_{(1)} & \leq V_{t} - x_{t}^T Q x_{t} - u_{t}^T R u_{t} \end{align*} The above inequalities show that the value function decreases along closed-loop trajectories of the system. The rest of the proof for stability follows standard arguments found for instance in \cite{Venkat:2005,KeeGil:88}.
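The descent mechanism of the stability argument, each subsystem exactly minimizing over its own block and the convex combination never increasing the global cost, can be illustrated on an unconstrained strictly convex quadratic. The sketch below uses made-up data (a random positive definite Hessian, equal weights $\lambda^i = 1/M$) and is not the constrained MPC problem itself.

```python
import numpy as np

# Descent demo: V(u) = 0.5 u'Hu + q'u, M agents each exactly minimize over
# their own block; the convex-combination update is monotonically decreasing
# in V and converges to the centralized minimizer. All data are made up.
rng = np.random.default_rng(1)
M, nb = 4, 2                               # 4 agents, block size 2
n = M * nb
W = rng.standard_normal((n, n))
H = np.eye(n) + 0.1 * (W @ W.T)            # symmetric positive definite
q = rng.standard_normal(n)
V = lambda u: 0.5 * u @ H @ u + q @ u
lam = np.full(M, 1.0 / M)                  # equal convex weights

u = rng.standard_normal(n)
costs = [V(u)]
for _ in range(300):
    sols = []
    for i in range(M):
        idx = slice(i * nb, (i + 1) * nb)
        ui = u.copy()
        # exact block minimizer given the other blocks fixed at u:
        # H_ii ui = -(q_i + sum_{j != i} H_ij u_j)
        rhs = q[idx] + H[idx, :] @ u - H[idx, idx] @ u[idx]
        ui[idx] = np.linalg.solve(H[idx, idx], -rhs)
        sols.append(ui)
    u = sum(l * s for l, s in zip(lam, sols))
    costs.append(V(u))

u_star = np.linalg.solve(H, -q)            # centralized minimizer
assert all(c2 <= c1 + 1e-9 for c1, c2 in zip(costs, costs[1:]))
assert np.linalg.norm(u - u_star) < 1e-3
```

The monotone decrease of `costs` mirrors the chain $\Phi_{(p+1)} \leq \Phi_{(p)}$ above, and the convergence to `u_star` previews the optimality analysis of the next subsection.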
\subsection{Optimality analysis} \label{optimality} Using the descent approach, we will show that the solution of Algorithm~\ref{alg_dmpc} approaches the solution of the centralized MPC in (\ref{eqn_cmpc}), as $p \rightarrow \infty$. We characterize the optimality of the proposed iterative procedure by using the following results. \begin{lemma} A limit point of $\{ \pred{u}_{(p)} \}$ is guaranteed to exist. \end{lemma} \emph{Proof}: The feasible set of (\ref{eqn_dmpc}) is compact, and we have shown that every $\pred{u}_{(p)}$ is feasible. The sequence is therefore bounded and has a convergent subsequence, i.e., a limit point exists. $\Box$ \begin{lemma} Every limit point of $\{ \pred{u}_{(p)} \}$ is an optimal solution of (\ref{eqn_cmpc}). \end{lemma} \emph{Proof}: We will make use of the strict convexity of $V (\cdot)$ and a technique inspired by the proof of Gauss-Seidel distributed optimization algorithms in \cite{BertTsit:parallel-comp-book}. In our context, however, we also address the overlapping variables present in the local optimization problems. Let $\pred{v} = (\pred{v}^1,...,\pred{v}^M)$ be a limit point of $\{ \pred{u}_{(p)} \}$, and let $\{ \pred{u}_{(p')} \}$ be a subsequence of $\{ \pred{u}_{(p)} \}$ that converges to $\pred{v}$. In the following, we drop the parameter $x_t$ in $V(\cdot)$ for simplicity. Using the continuity of $V(\cdot)$ and the convergence of $\{ \pred{u}_{(p')} \}$ to $\pred{v}$, we see that $V \left( \pred{u}_{(p')} \right)$ converges to $V(\pred{v})$. Since $\{ V \left( \pred{u}_{(p)} \right) \}$ is nonincreasing (Section~\ref{stability}) and a subsequence of it converges, the entire sequence converges to $V(\pred{v})$. It now remains to show that $\pred{v}$ minimizes $V(\cdot)$ over the feasible set of (\ref{eqn_cmpc}). We first show that $\opnbri{\pred{u}}{1}_{(p' + 1)} - \nbri{\pred{u}}{1}_{(p')}$ converges to zero.
Recall that $\nbri{\pred{u}}{1}_{(p')}$ and $\aonbr{\pred{u}}{1}_{(p')}$ together form $\pred{u}_{(p')}$, while $\opnbri{\pred{u}}{1}_{(p' + 1)}$ and $\aonbr{\pred{u}}{1}_{(p')}$ together form $\pred{u}^{s|1}_{(p' + 1)}$. Assume the contrary, i.e., that $\pred{u}^{s|1}_{(p' + 1)} - \pred{u}_{(p')}$ does not converge to zero. Then, passing to a further subsequence if necessary, there exists some $\epsilon > 0$ such that $\| \pred{u}^{s|1}_{(p' + 1)} - \pred{u}_{(p')} \| \geq \epsilon$ for all $p'$. Let us fix some $\gamma \in (0,1)$ and define \begin{align} \pred{s}^1_{(p')} = \gamma \pred{u}_{(p')} + (1-\gamma) \pred{u}^{s|1}_{(p' + 1)}, \end{align} so that $\pred{s}^1_{(p')}$ lies between $\pred{u}_{(p')}$ and $\pred{u}^{s|1}_{(p' + 1)}$ and differs from them only in the values of $\nbri{\pred{u}}{1}$. Notice that $\pred{s}^1_{(p')}$ belongs to a compact set and therefore has a limit point, denoted $\pred{s}^1_{\infty}$. Since $\gamma \neq 1$ and $\| \pred{u}^{s|1}_{(p' + 1)} - \pred{u}_{(p')} \| \geq \epsilon$ for all $p'$, we have $\pred{s}^1_{\infty} \neq \pred{v}$. Using convexity of $V(\cdot)$, we obtain \begin{align} V \left( \pred{s}^1_{(p')} \right) \leq \max \{ V \left( \pred{u}_{(p')} \right), V \left( \pred{u}^{s|1}_{(p' + 1)} \right) \}. \end{align} By definition, $\pred{u}^{s|1}_{(p' + 1)}$ minimizes $V(\cdot)$ over the subspace of $\nbri{\pred{u}}{1}$. So we have: \begin{align} \label{eqn_bound_V} V \left( \pred{u}^{s|1}_{(p' + 1)} \right) \leq V \left( \pred{s}^1_{(p')} \right) \leq V \left( \pred{u}_{(p')} \right) \end{align}
From Section~\ref{stability}, we have \begin{align} V \left( \pred{u}^{s|i}_{(p'+1)} \right) &\leq V \left( \pred{u}_{(p')} \right), ~~ \forall i=1,\ldots,M \label{V_s_i_1} \\ V \left( \pred{u}_{(p'+1)} \right) &\leq \sum_{i=1}^M \lambda^i V \left( \pred{u}^{s|i}_{(p'+1)} \right) \label{V_s_i_2} \end{align} Taking the limit of (\ref{V_s_i_1}) and (\ref{V_s_i_2}), we obtain \begin{align} \lim_{p' \to \infty} V \left( \pred{u}^{s|i}_{(p'+1)} \right) = V(\pred{v}), ~~ \forall i=1,\ldots,M \end{align} As $V \left( \pred{u}_{(p')} \right)$ and $V \left( \pred{u}^{s|1}_{(p' + 1)} \right)$ both converge to $V(\pred{v})$, taking the limit of (\ref{eqn_bound_V}), we conclude that $V(\pred{v}) = V(\pred{s}^1_{\infty})$, for any $\gamma \in (0,1)$. This contradicts the strict convexity of $V(\cdot)$ in the subspace of $\nbri{\pred{u}}{1}$. The contradiction establishes that $\opnbri{\pred{u}}{1}_{(p' + 1)} - \nbri{\pred{u}}{1}_{(p')}$ converges to zero, leading to the convergence of $\pred{u}^{s|1}_{(p' + 1)}$ to $\pred{v}$. By definition, for any feasible $\nbri{\pred{u}}{1}$ we have \begin{align} V \left( \pred{u}^{s|1}_{(p' + 1)} \right) \leq V \left( \nbri{\pred{u}}{1}, \onbr{\pred{u}}{1}_{(p')} \right) \end{align} Taking the limit as $p'$ tends to infinity, we obtain \begin{align} V (\pred{v}) \leq V \left( \nbri{\pred{u}}{1}, \onbr{\pred{v}}{1} \right) \end{align} i.e., $\pred{v}$ is a minimizer of $V(\cdot)$ in the subspace of $\nbri{\pred{u}}{1}$. If we further consider $V (\cdot)$ in a subspace corresponding to $\pred{u}^1$, then $V (\pred{v})$ is still a minimum. Thus, the necessary optimality condition gives ${\nabla_{\pred{u}^1} V (\pred{v})}^T ( \pred{u}^1 - \pred{v}^1 ) \geq 0, \forall \pred{u}^1 \in \Omega_1$ where $\Omega_1$ is the feasible set of (\ref{eqn_dmpc}) with $i=1$.
Repeating the procedure, we obtain \begin{align} \label{partial_opt} {\nabla_{\pred{u}^i} V (\pred{v})}^T ( \pred{u}^i - \pred{v}^i ) \geq 0, \end{align} for all $\pred{u}^i$ such that $\pred{u}^i$ is a feasible solution of (\ref{eqn_dmpc}). Summing the inequalities (\ref{partial_opt}) over $i=1, \ldots, M$, we get: \begin{align} {\nabla_{\pred{u}} V(\pred{v})}^T (\pred{u} - \pred{v}) \geq 0, \end{align} for every $\pred{u}$ that is a feasible solution of (\ref{eqn_cmpc}). This shows that $\pred{v}$ satisfies the optimality condition of problem (\ref{eqn_cmpc}). $\Box$ Using the strict convexity of $V (\cdot)$, it follows that $\pred{v}$ is in fact the unique global minimizer of (\ref{eqn_cmpc}), to which the iterates of Algorithm~\ref{alg_dmpc} converge. \section{Communications and computational aspects} \label{customizations} In this section, we discuss the communications and computational aspects of our approach and illustrate the freedom that the designer has in choosing the appropriate trade-off and performance level in a given application. Although the overall computational load is reduced by employing the distributed Algorithm~\ref{alg_dmpci}, its iterative nature implies that communication between neighboring systems increases in exchange. This trade-off is illustrated in Table~\ref{tb_sm_comms}, which compares the communication requirements of the centralized and our distributed MPC approach. This overview suggests that our scheme is mostly applicable in situations where local communication is relatively fast and cheap.
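For a concrete sense of the communication load, consider a hypothetical chain topology where the $(N+1)$-step extended neighborhood of subsystem $i$ simply extends $N+1$ positions to each side, clipped at the ends of the chain. The numbers below ($M$, $N$, $p_{max}$) are made up for illustration.

```python
# Hypothetical chain of M subsystems with horizon N: count the messages
# sent per time step, centralized (to/from a coordinator) vs distributed
# (iterated exchange within (N+1)-step extended neighborhoods).
M, N, p_max = 40, 20, 2
ext_sizes = [len(range(max(0, i - (N + 1)), min(M, i + N + 2)))
             for i in range(M)]
centralized_msgs = 2 * M                           # 2 messages per subsystem
distributed_msgs = p_max * 2 * sum(ext_sizes)      # per the iterative scheme
```

With these particular values, $N+1$ is of the same order as $M$, so the extended neighborhoods cover much of the chain; the communication advantage materializes precisely when $M \gg N$ and the interaction map is sparse.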
\begin{table}[htb] \caption{Comparison of communications requirements} \begin{center} \begin{tabular}{|c|c|c|} \hline & Centralized MPC & Distributed MPC \\ \hline Communication & Global & Local \\ \hline Each subsystem & & Other subsystems \\ communicates & Central coordinator & in $(N+1)$-step \\ with & & extended neighborhood \\ \hline Total number of & & \\ messages sent & $2 \times M $ & $p_{max} \times 2 \times \sum_{i=1}^{M}{| \mathcal{N}^i_{N+1} |}$ \\ in each time step & & \\ \hline \end{tabular} \end{center} \label{tb_sm_comms} \end{table} \begin{table}[htb] \caption{Comparison of optimization problems} \begin{center} \begin{tabular}{|c|c|c|} \hline & Centralized MPC & Distributed MPC \\ \hline Number of variables in & & \\ one optimization problem& $N \times M$ & $N \times | \mathcal{N}^i |$ \\ \hline Number of optimizations & & \\ solved in one sampling period & 1 & $p_{max} \times M$ \\ \hline \end{tabular} \end{center} \label{tb_sm_opt_size} \end{table} Table~\ref{tb_sm_opt_size} shows the difference in size of the optimization problems solved by the distributed and the centralized method. Since $| \mathcal{N}^i |\ll M $, where $M$ is the total number of subsystems, the local optimization problems in DMPC are much smaller than the centralized one. Note that during one sampling period, the local DMPC optimization problems are solved at most $p_{max}$ times. Nevertheless, DMPC is in general more computationally efficient than the centralized MPC, with a proper choice of $p_{max}$. Choosing an appropriately high $p_{max}$ value leads to better performance of the whole system. The trade-off is that the increase of $p_{max}$ will also lead to increased communications, and more local optimization problems will need to be solved in one sampling time. Another way to customize the algorithm is to expand the size of the neighborhood that each subsystem optimizes for. 
In the proposed Algorithms~\ref{alg_dmpc} and~\ref{alg_dmpci}, each subsystem optimizes for its direct neighbors when solving the local problem. Performance may improve when each subsystem also optimizes for its $2$-, $3$-, or, in general, $k$-step expanded neighbors. Although we do not provide a formal proof of this, we will give an illustration in the following section through a numerical example. The intuition behind this phenomenon is nevertheless clear: each subsystem will have more precise predictions when it takes into account the behaviors of more neighboring subsystems. \section{Numerical example} \label{experiment} In this section, we illustrate the application of Algorithm~\ref{alg_dmpci} to a problem involving coupled oscillators. The problem setup consists of $M$ oscillators that can move only along the vertical axis, and are coupled by springs that connect each oscillator with its two closest neighbors. An exogenous vertical force will be used as the control input for each oscillator. The setup is shown in Figure~\ref{fig_setup}. Each oscillator is considered one subsystem. Let the superscript $i$ denote the index of an oscillator. The equation of motion of oscillator $i$ is then defined as \begin{equation} \label{eqn_dynamics} m a^i = k_1 p^i - f_s v^i + k_2(p^{i-1} - p^i) + k_2(p^{i+1} - p^i) + F^i, \end{equation} where $p^i$, $v^i$ and $a^i$ denote the position, velocity and acceleration of oscillator $i$, respectively.
The control force exerted at oscillator $i$ is $F^i$, and the parameters are as follows: $k_1$ is the stiffness of the vertical spring at each oscillator, $k_2$ the stiffness of the springs that connect the oscillators, $m$ the mass of each oscillator, and $f_s$ the friction coefficient of the movements. From some nonzero initial state, the system needs to be stabilized subject to the constraints: \begin{equation} \left| p^i - \frac{p^{i-1} + p^{i+1}}{2} \right| \leq 4, \;\; \forall i=2,...,M-1 \end{equation} Based on the dynamical couplings and constraint couplings, the neighborhood of each subsystem inside the chain is defined to contain itself and its two closest neighbors, $\mathcal{N}^i = \{ i-1, i, i+1 \}, i=2,...,M-1$, while for the two ends $\mathcal{N}^1 = \{ 1, 2 \}$ and $\mathcal{N}^M = \{ M-1, M \}$. We define the state vector as $x^i = [p^i, v^i]^T$, and the input as $u^i = F^i$. The discretized dynamics with sampling time $T_s$ is represented by the following matrices: \begin{align*} A_{ij} &= \left[ \begin{array}{cc} 0 & 0 \\ 0 & 0 \end{array} \right], \forall j \not\in \mathcal{N}^i \\ A_{i,i-1} &= \left[ \begin{array}{cc} 0 & 0 \\ T_s k_2 & 0 \end{array} \right], \forall i=2,...,M \\ A_{ii} &= \left[ \begin{array}{cc} 1 & T_s \\ T_s (k_1 - 2 k_2) & 1 - T_s f_s \end{array} \right], \forall i=1,...,M \\ A_{i,i+1} &= \left[ \begin{array}{cc} 0 & 0 \\ T_s k_2 & 0 \end{array} \right], \forall i=1,...,M-1 \\ B_{ij} &= \left[ \begin{array}{c} 0 \\ 0 \end{array} \right], \forall j \neq i \\ B_{ii} &= \left[ \begin{array}{c} 0 \\ T_s \end{array} \right], \forall i=1,...,M \end{align*} The following parameters were used in the simulation example: \begin{align*} k_1 &= 0.4, \quad k_2 = 0.3 \\ f_s &= 0.4, \quad T_s = 0.05, \quad m = 1 \\ M &= 40, \quad N = 20 \\ Q_i &= \left[ \begin{array}{cc} 100 & 0 \\ 0 & 0 \end{array} \right], \quad R_i = 10 \end{align*} Starting from the same feasible initial state, we apply Algorithm~\ref{alg_dmpci} with $p_{max}=2$, $20$ and $100$.
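The discretized model above can be assembled directly from the block definitions; a sketch with the listed parameter values follows ($m=1$ is absorbed into the coefficients).

```python
import numpy as np

# Assemble the global discretized dynamics x_{t+1} = A x_t + B u_t for the
# oscillator chain from the block matrices and parameters given above.
k1, k2, fs, Ts, M = 0.4, 0.3, 0.4, 0.05, 40

Aii = np.array([[1.0, Ts], [Ts * (k1 - 2 * k2), 1.0 - Ts * fs]])
Aoff = np.array([[0.0, 0.0], [Ts * k2, 0.0]])
Bii = np.array([[0.0], [Ts]])

A = np.zeros((2 * M, 2 * M))
B = np.zeros((2 * M, M))
for i in range(M):
    A[2*i:2*i+2, 2*i:2*i+2] = Aii
    if i > 0:                      # coupling to the left neighbor
        A[2*i:2*i+2, 2*(i-1):2*(i-1)+2] = Aoff
    if i < M - 1:                  # coupling to the right neighbor
        A[2*i:2*i+2, 2*(i+1):2*(i+1)+2] = Aoff
    B[2*i:2*i+2, i:i+1] = Bii
```

The resulting $A$ is block tridiagonal, reflecting the chain coupling, so each subsystem's dynamics indeed involve only its direct neighbors.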
The results are compared to the solution obtained from the centralized MPC approach. The results indicate that all states of the $40$ subsystems are stabilized. Figure~\ref{fig_compare_cost} shows the evolution of the overall cost achieved by DMPC compared to the cost of the centralized approach. We can see that the difference is reduced by choosing a larger $p_{max}$ value. Our analysis guarantees in fact that the DMPC solution converges to the centralized one as $p$ tends to infinity. As mentioned above, another way to customize the proposed distributed MPC algorithm is for each local problem to consider optimizing over the inputs of subsystems in a larger neighborhood. Figure~\ref{fig_compare_r} illustrates the effect of optimizing in each subproblem over an $r$-step extended neighborhood, with $r=\{1,5,10\}$. Fixing the maximum number of subiterations to $p_{max}=2$, we can observe a steady improvement in performance until the increased neighborhood of each subsystem covers essentially all other subsystems and we end up with a centralized problem. \begin{figure} \caption{Setup of coupled oscillators} \label{fig_setup} \end{figure} \begin{figure} \caption{Time evolution of the overall cost achieved by DMPC with different $p_{max}$ values, compared to the cost of the centralized approach.} \label{fig_compare_cost} \end{figure} \begin{figure} \caption{Time evolution of the global cost value of distributed MPC algorithms with different radii of the neighborhood optimized by one local controller.} \label{fig_compare_r} \end{figure} \section{Conclusions} \label{conclusions} We presented a Jacobi algorithm for solving distributed model predictive control problems, which is able to deal with general linear coupled dynamics and convex coupled constraints. We incorporated neighboring subsystem models and constraints in the formulation of the local problems for enhanced performance. Global feasibility and stability were achieved, and a local implementation of the algorithm was given, which relies on information exchange from an extended set of ``nearby'' neighboring subsystems.
It was shown that the distributed MPC solution converges to the centralized one through a localized iterative procedure. An a priori approximation procedure was proposed, which allows an initial feasible solution to be constructed locally by tightening constraints. We also discussed the trade-off between communication and computation, the effect of increasing the maximum number of iterations ($p_{max}$) within one sampling period, and the potential improvements that can be gained by incorporating several subsystems into a local optimization. We are currently working on an extension of the algorithm, which allows the use of terminal costs in a dual-mode MPC formulation. \end{document}
\begin{document} \title{\bf\sc L\'{e}vy Approximation of Impulsive Recurrent Process with Semi-Markov Switching} \author{{\sc V. S. Koroliuk}$^1$, {\sc N. Limnios}$^2$ and {\sc I.V. Samoilenko}$^1$\\ $^1$Institute of Mathematics,\\ Ukrainian National Academy of Science, Kiev, Ukraine\\ $^2$Laboratoire de Math\'ematiques Appliqu\'ees,\\ Universit\'e de Technologie de Compi\`egne, France} \maketitle \baselineskip 6 mm \hrule \begin{abstract} In this paper, the weak convergence of an impulsive recurrent process with semi-Markov switching in the scheme of L\'{e}vy approximation is proved. A singular perturbation problem for the compensating operator of the extended Markov renewal process is used to prove the relative compactness. \end{abstract} {\small {\sc Key Words:} L\'{e}vy approximation, semimartingale, semi-Markov process, impulsive recurrent process, piecewise deterministic Markov process, weak convergence, singular perturbation. } \hrule \section{Introduction} L\'{e}vy approximation is still an active area of research in several theoretical and applied directions. Since L\'{e}vy processes are now standard tools, L\'{e}vy approximation is quite useful for analyzing complex systems (see, e.g., \cite{ber, sat}). Moreover, they arise in many applications, e.g., risk theory, finance, queueing, and physics. For background on L\'{e}vy processes see, e.g., \cite{ber, sat, gisk}. In particular, the following impulsive process, defined as partial sums in a series scheme, was studied in \cite{korlim}: \begin{eqnarray}\label{1adf1} \xi^{\varepsilon}(t)=\xi_0^{\varepsilon}+\sum_{k=1}^{\nu(t)}\alpha^{\varepsilon}_k(x^{\varepsilon}_{k-1}),\quad t\ge 0, \end{eqnarray} where the random variables $\alpha_k^{\varepsilon} (x), k \geq 1$, are supposed to be independent and are perturbed by the jump Markov process $x(t), t\ge 0$.
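For intuition, a process of the form (\ref{1adf1}) can be simulated directly. The sketch below uses made-up ingredients: a two-state jump Markov switching process, a Poisson counting process $\nu(t)$, and $\varepsilon$-scaled Gaussian jump sizes whose mean depends on the switching state.

```python
import numpy as np

# Illustrative simulation of an impulsive partial-sum process: jumps arrive
# at the epochs of a counting process nu(t) and their law is modulated by a
# two-state jump Markov process x(t). All numbers are made up.
rng = np.random.default_rng(2)
eps, T = 0.1, 10.0
rate = 5.0                                 # intensity of the counting process
P = np.array([[0.0, 1.0], [1.0, 0.0]])     # embedded chain of x(t)

def alpha(x):                              # jump size alpha_k^eps(x)
    return eps * rng.normal(loc=1.0 if x == 0 else -1.0, scale=1.0)

t, x, xi = 0.0, 0, 0.0
path = [(t, xi)]                           # sampled trajectory of xi^eps
while True:
    t += rng.exponential(1.0 / rate)       # next epoch of nu(t)
    if t > T:
        break
    xi += alpha(x)                         # add the k-th jump
    x = int(rng.choice(2, p=P[x]))         # switch the driving state
    path.append((t, xi))
```

The alternating two-state modulation makes the jump means cancel on average, which is the finite-state analogue of the balance condition appearing below.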
We study a generalization of problem (\ref{1adf1}): \begin{eqnarray}\label{1bdf1} \xi^{\varepsilon}(t)=\xi_0^{\varepsilon}+\sum_{k=1}^{\nu(t)}\alpha^{\varepsilon}_k(\xi^{\varepsilon}_{k-1},x^{\varepsilon}_{k-1}),\quad t\ge 0. \end{eqnarray} Here the random variables $\alpha_k^{\varepsilon} (u, x), k \geq 1$, also depend on the process $\xi^{\varepsilon}(t)$. We study the convergence of (\ref{1bdf1}) by combining two methods: semimartingale theory and a singular perturbation problem for the compensating operator of the extended Markov renewal process. The method thus consists of two steps. In the first step we prove the relative compactness of the semimartingale representation of the family $\xi^\varepsilon$, $\varepsilon>0$, by proving the following two facts \cite{ethier}: $$\lim\limits_{c\to \infty}\sup\limits_{\varepsilon\leq\varepsilon_0} \mathbf{P}\{\sup\limits_{t\leq T}|\xi^{\varepsilon}(t)|>c\}=0,$$ known as the compact containment condition, and $$\mathbf{E}|\xi^{\varepsilon}(t)-\xi^{\varepsilon}(s)|^2\le k |t-s|,$$ for some positive constant $k$. In the second step we prove the convergence of the extended Markov renewal process $\xi_n^{\varepsilon}, x_n^{\varepsilon}, \tau_n^{\varepsilon}, n\geq0$, by using the singular perturbation technique as presented in \cite{korlim}. Finally, we apply Theorem 6.3 from \cite{korlim}. The paper is organized as follows. In Section 2 we present the time-scaled impulsive process (\ref{1bdf1}) and the switching semi-Markov process. In the same section we present the main L\'{e}vy approximation results. In Section 3 we present the proof of the theorem. \hrule \section{Main results} Let us consider the space $\mathbb{R}^d$ endowed with a norm $\abso{\cdot}$ ($d\ge 1$), and $(E,\mathcal{E})$, a {\it standard phase space}, (i.e., $E$ is a Polish space and $\mathcal{E}$ its Borel $\sigma$-algebra).
For a vector $v\in \mathbb{R}^d$ and a matrix $c\in \mathbb{R}^{d\times d}$, $v^*$ and $c^*$ denote their transposes, respectively. Let $C_3(\mathbb{R}^d)$ be a measure-determining class of real-valued bounded functions, such that $g(u)/\abso{u}^2 \to 0$, as $\abso{u}\to 0$ for $g\in C_3(\mathbb{R}^d)$ (see \cite{jacod1,korlim}). The impulsive processes $\xi^{\varepsilon}(t), t\geq 0, \varepsilon>0$ on $\mathbb{R}^d$ in the series scheme with small series parameter $\varepsilon\to 0$, $(\varepsilon>0)$, are defined by the sum (\cite[Section 9.2.1]{korlim}) \begin{eqnarray}\label{1adf2} \xi^{\varepsilon}(t)=\xi_0^{\varepsilon}+\sum_{k=1}^{\nu(t/\varepsilon^2)}\alpha^{\varepsilon}_{k}(\xi^{\varepsilon}_{k-1},x^{\varepsilon}_{k-1}),\quad t\ge 0. \end{eqnarray} For any $\varepsilon > 0$, and any sequence $z_k, k \geq 0$, of elements of $\mathbb{R}^d\times E$, the random variables $\alpha_k^{\varepsilon} (z_{k-1}), k \geq 1$ are supposed to be independent. Let us denote by $G_{u,x}^{\varepsilon}$ the distribution function of $\alpha_k^{\varepsilon} (u,x)$, that is, $$G_{u,x}^{\varepsilon}(dv) := P(\alpha_k^{\varepsilon} (u,x) \in dv), k \geq 1, \varepsilon > 0, x \in E, u\in \mathbb{R}^d.$$ It is worth noticing that the coupled process $\xi^\varepsilon(t), x^{\varepsilon}(t), t \geq 0$, is a Markov additive process (see, e.g., \cite[Section 2.5]{korlim}).
We make natural assumptions for the counting process $\nu(t)$, namely: \begin{eqnarray}\label{1con1} \int_0^t\mathbf{E}[\varphi(s)d\nu(s)]<l_1\int_0^t\mathbf{E}(\varphi(s))ds \end{eqnarray} for some constant $l_1>0$ and any nonnegative, increasing function $\varphi(s)$. The switching semi-Markov process ${x}(t), t\ge 0$ on the standard phase space $(E,\mathcal{E})$, is defined by the semi-Markov kernel $$Q(x,B,t) = P(x,B)F_x(t), x \in E,B \in\mathcal{E}, t \geq 0, $$ which defines the associated Markov renewal process $ x_n, \tau_n, n \geq 0 $: $$Q(x,B, t) = P(x_{n+1} \in B, \theta_{n+1} \leq t | x_n = x) = P(x_{n+1} \in B | x_n = x)P(\theta_{n+1}\leq t | x_n = x).$$ Finally, we denote by $\xi^{\varepsilon}_n$ the values of (\ref{1adf2}) embedded at the renewal times: $$\xi^{\varepsilon}_n:=\xi^{\varepsilon}(\varepsilon^2\tau_n)=\xi_0^{\varepsilon}+\sum_{k=1}^n\alpha_k^{\varepsilon}(\xi^{\varepsilon}_{k-1},x^{\varepsilon}_{k-1}).$$ The L\'{e}vy approximation of the Markov impulsive process (\ref{1adf2}) is considered under the following conditions. \begin{description} \item[C1:] The semi-Markov process ${x}(t), t \geq 0$ is uniformly ergodic with the stationary distribution $$\pi(dx)q(x)=q\rho(dx), q(x):=1/m(x), q:=1/m,$$ $$m(x):=\mathbb{E}\theta_x=\int_0^{\infty}\overline{F}_x(t)dt, m:=\int_E\rho(dx)m(x),$$ $$\rho(B)=\int_E\rho(dx)P(x,B), \rho(E)=1.$$ \item[C2:] {\it L\'{e}vy approximation}. The family of impulsive processes $\xi^{\varepsilon}(t), t\geq 0$ satisfies the L\'{e}vy approximation conditions \cite[Section 9.2]{korlim}.
\begin{description} \item[L1:] Initial value condition $$\sup\limits_{\varepsilon>0} E|\xi_0^{\varepsilon}|\leq C < \infty$$ and $$\xi_0^{\varepsilon}\Rightarrow\xi_0.$$ \item[L2:]Approximation of the mean values: $$a^{\varepsilon}(u;x) = \int_{\mathbb{R}^d} vG^{\varepsilon}_{u,x}(dv) = \varepsilon a_1(u;x)+\varepsilon^2 [a(u;x) +\theta_a^{\varepsilon} (u;x)],$$ and $$c^{\varepsilon}(u;x) = \int_{\mathbb{R}^d} vv^*G^{\varepsilon}_{u,x}(dv) = \varepsilon^2 [c(u;x) + \theta_c^{\varepsilon} (u;x)],$$ where functions $a_1, a$ and $c$ are bounded. \item[L3:] Poisson approximation condition for intensity kernel (see \cite{jacod1}) $$G_g^{\varepsilon}(u;x) = \int_{\mathbb{R}^d} g(v)G^{\varepsilon}_{u,x}(dv) = \varepsilon^2[G_g(u;x) + \theta^{\varepsilon}_g(u;x)]$$ for all $g \in C_3(\mathbb{R}^d)$, and the kernel $G_g(u;x)$ is bounded for all $g \in C_3(\mathbb{R}^d)$, that is, $$|G_g(u;x)| \leq G_g \quad \hbox{(a constant depending on $g$)}.$$ Here \begin{eqnarray}\label{1gg2} G_{g}(u;x) =\int_{\mathbb{R}^d} g(v)G_{u,x}(dv),\quad g \in C_3(\mathbb{R}^d). \end{eqnarray} The above negligible terms $\theta_a^\varepsilon,\theta_c^\varepsilon, \theta_g^\varepsilon$ satisfy the condition $$\sup\limits_{x\in E} |\theta_{\cdot}^{\varepsilon}(u;x)|\to 0,\quad \varepsilon\to 0.$$ \item[L4:] {\it Balance condition}. $$\int_E\rho(dx)a_1(u;x)=0.$$ \end{description} In addition the following conditions are used: \item[C3:] {\it Uniform square-integrability}: $$\lim\limits_{c\to\infty}\sup\limits_{x\in E} \int_{|v|>c} vv^*G_{u,x}(dv) = 0.$$ \item[C4:] {\it Linear growth}: there exists a positive constant $L$ such that $$|a(u;x)|\leq L(1+|u|),\quad\hbox{and}\quad |c(u;x)|\leq L(1+\abso{u}^2),$$ and for any real-valued non-negative function $f(v), v\in \mathbb{R}^d$, such that $\int_{\mathbb{R}^d\setminus \{0\}}(1+f(v))\abso{v}^2dv<\infty,$ we have $$|G_{u,x} (v)|\leq Lf(v)(1+\abso{u}).$$ \end{description} The main result of our work is the following. 
\begin{thm} Under conditions $\mathbf{C1-C4}$ the weak convergence $$\xi^{\varepsilon}(t)\Rightarrow \xi^0(t),\quad \varepsilon \to 0$$ takes place. The limit process $\xi^0(t), t\geq0$ is a L\'{e}vy process defined by the generator $\mathbf{L}$ as follows: \begin{eqnarray}\label{1limgen} \mathbf{L}\varphi(u)=(\widehat{a}(u)-\widehat{a}_0(u))\varphi'(u)+ \frac{1}{2}\sigma^2(u)\varphi''(u) + \lambda(u)\int_{\mathbb{R}^d} [\varphi(u + v)-\varphi(u)]G_{u}^0(dv), \end{eqnarray} where: $$\widehat{a}(u)=q\int_E\rho(dx)a(u;x), \widehat{a}_0(u)=\int_{\mathbb{R}^d}vG_u(dv), G_u(dv)=q\int_E\rho(dx)G_{u,x}(dv),$$ $$\widehat{a^2_1}(u)=q\int_E\rho(dx)a^2_1(u;x),\hskip 5mm \widetilde{a}_1(u;x):=q(x)\int_EP(x,dy)a_1(u;x), c_0(u;x)=\int_{\mathbb{R}^d}vv^*G_{u,x}(dv)$$ $$\sigma^2(u)=2\int_E\pi(dx)\{\widetilde{a}_1(u;x)\widetilde{R}_0\widetilde{a}_1^*(u;x)+ \frac{1}{2}[c(u;x)-c_0(u;x)]\}-\widehat{a^2_1}(u), \hskip 5mm \sigma^2(u)\geq 0$$ \hskip40mm $\lambda(u)=G_u(\mathbb{R}^d),$ \hskip5mm $G_{u}^0(dv)=G_u(dv)/\lambda(u),$ here $\widetilde{R}_0$ is the potential operator of the embedded Markov chain. \end{thm} \textbf{Remark 1.} The limit L\'{e}vy process consists of three parts: a deterministic drift, a diffusion part, and a Poisson part. Several cases are possible: \begin{description} \item[1).] If $\widehat{a}(u)-\widehat{a}_0(u)=0$ then the limit process has no deterministic drift. \item[2).] If $\sigma^2(u)=0$ then the limit process has no diffusion part. As a variant of this case we note that if $c(u;x)=c_0(u;x)$ then also $a_1(u;x)=0$ and we obtain the conditions of Poisson approximation after the renormalization $\varepsilon^2=\widetilde{\varepsilon}$ (see, for example, Chapter 7 in \cite{korlim}). \end{description} \textbf{Remark 2.} In \cite{korlim} (Theorem 9.3) an analogous result was obtained for an impulsive process with Markov switching.
If we study an ordinary impulsive process without switching, we obtain $\sigma^2=E(\alpha_k^{\varepsilon})^2-(E(\alpha_k^{\varepsilon}))^2=(c-c_0)-a_1^2$. This agrees with the analogous results from \cite{jacod1}. In the setting of our theorem this is easily shown, whereas in \cite{korlim} (Theorem 9.3) it is not obvious. The difference is that we use $\widetilde{R}_0$ -- the potential operator of the embedded Markov chain -- instead of $R_0$ -- the potential operator of the Markov process. Owing to this, our result visibly agrees with other well-known results. \textbf{Remark 3.} The asymptotics of the second moment in condition \textbf{L2} contain the second modified characteristic $c(u;x)$ (see relation 4.2 at page 555 in \cite{jacod1}). In the limit this characteristic comprises both the second moment of the Poisson part and the variance of the diffusion part, namely $c=c_0+\sigma^2.$ \hrule \section{Proof of Theorem 1} The proof of Theorem 1 is based on the semimartingale representation of the impulsive process (\ref{1adf2}). We split the proof into the following two steps. \noindent {\sc Step 1}. In this step we establish the relative compactness of the family of processes $\xi^{\varepsilon}(t), t\geq 0, \varepsilon>0$, using the approach developed in \cite{lip3}. Recall that the space of all probability measures defined on the standard space $(E,{\cal E})$ is itself a Polish space, so relative compactness and tightness are equivalent. First we need the following lemma.
\begin{lm} Under assumption $\mathbf{C4}$ there exists a constant $k_T>0$, independent of $\varepsilon$ but possibly depending on $T$, such that $$\mathbf{E}\sup\limits_{t\leq T}|\xi^{\varepsilon}(t)|^2\leq k_T.$$ \end{lm} \begin{cor} Under assumption $\mathbf{C4}$, the following compact containment condition (CCC) holds: $$\lim\limits_{c\to \infty}\sup\limits_{\varepsilon\leq\varepsilon_0} \mathbf{P}\{\sup\limits_{t\leq T}|\xi^{\varepsilon}(t)|>c\}=0.$$ \end{cor} \noindent{\it Proof}: The proof of this corollary follows from Kolmogorov's inequality.\begin{flushright} $\Box $ \end{flushright} \noindent{\it Proof of Lemma 1}: (following \cite{lip3}). The impulsive process (\ref{1adf2}) has the following semimartingale representation \begin{eqnarray}\label{1smdecomp} \xi^{\varepsilon}(t)=u+B_t^{\varepsilon}+M_t^{\varepsilon}, \end{eqnarray} where $u= \xi^\varepsilon_0$; $B_t^{\varepsilon}$ is the predictable drift $$B_t^{\varepsilon}=\sum_{k=1}^{\nu(t/\varepsilon^2)}a^\varepsilon(\xi^{\varepsilon}_{k-1},{x}^{\varepsilon}_{k-1}) =A_1^\varepsilon(t)+A^\varepsilon(t)+\theta^\varepsilon_a(t),$$ where $$A^{\varepsilon}_1(t):=\varepsilon\sum_{k=1}^{\nu(t/\varepsilon^2)}a_{1} (\xi_{k-1}^{\varepsilon}, x_{k-1}^{\varepsilon}), A^{\varepsilon}(t):= \varepsilon^2\sum_{k=1}^{\nu(t/\varepsilon^2)}a (\xi_{k-1}^{\varepsilon} ,x_{k-1}^{\varepsilon});$$ and $M_t^{\varepsilon}$ is a square-integrable martingale with quadratic characteristic \begin{eqnarray}\label{1qch} \langle M^{\varepsilon}\rangle_t=\varepsilon^2\sum_{k=1}^{\nu (t/\varepsilon^2)}\int_{\mathbb{R}^d\setminus\{0\}}vv^*G(\xi^{\varepsilon}_{k-1},dv;{x}_{k-1}^{\varepsilon})+ \theta^{\varepsilon}_c(t)=\\ \nonumber \varepsilon^2\sum_{k=1}^{\nu (t/\varepsilon^2)} c(\xi^{\varepsilon}_{k-1};{x}_{k-1}^{\varepsilon}) +\theta^{\varepsilon}_c(t), \end{eqnarray} and for every finite $T>0$ $$\sup\limits_{0\leq t\leq T} |\theta^\varepsilon_{\cdot}(t)|\rightarrow 0, \varepsilon\rightarrow 0.$$ To verify the compactness of the process $\xi^{\varepsilon}(t)$ we split it into two parts.
The first part of order $\varepsilon$ $$A_1^\varepsilon(t)=\varepsilon\sum_{k=1}^{\nu(t/\varepsilon^2)} a_1(\xi^{\varepsilon}_{k-1};{x}^{\varepsilon}_{k-1}),$$ can be characterized by the compensating operator $$\mathbf{L}^{\varepsilon}\varphi(u;x)=\varepsilon^{-2}q(x)[\mathbf{A}_1^{\varepsilon}(x)P-I]\varphi(u;x),$$ where $\mathbf{A}_1^{\varepsilon}(x)\varphi(u)=\varphi(u+\varepsilon a_1(u;x))=\varphi(u)+\varepsilon a_1(u;x)\varphi'(u)+\varepsilon \theta^{\varepsilon}\varphi(u).$ After simple calculations we may rewrite the operator as $$\mathbf{L}^{\varepsilon}=\varepsilon^{-2}\mathbf{Q}+\varepsilon^{-1}\mathbf{A}_1(x)P+\theta^{\varepsilon},$$ where $\mathbf{A}_1(x)\varphi(u)=a_1(u;x)\varphi'(u).$ The corresponding martingale characterization is $$\mu_{n+1}^{\varepsilon}=\varphi(A_{1,n+1}^{\varepsilon},x_{n+1}^{\varepsilon})-\varphi(A_{1,0}^{\varepsilon},x_0^{\varepsilon})- \sum_{m=0}^{\nu_n}\theta^{\varepsilon}_{m+1}\mathbf{L}^{\varepsilon}\varphi(A_{1,m}^{\varepsilon},x_m^{\varepsilon}).$$ Using the results from \cite{korlim}, Section 1, we obtain the last martingale in the form $$\widetilde{\mu}_t^{\varepsilon}=\varphi^{\varepsilon}(A_1^\varepsilon(t),x^{\varepsilon}_t)- \varphi^{\varepsilon}(A_1^\varepsilon(0),x^{\varepsilon}_0)- \int_0^t\mathbf{L}^{\varepsilon}\varphi^{\varepsilon}(A_1^\varepsilon(s),x^{\varepsilon}_s)ds,$$ where $x^{\varepsilon}_t:=x(t/\varepsilon^2).$ Thus (see, for example, Theorem 1.2 in \cite{korlim}), it has quadratic characteristic $$\langle\widetilde{\mu}^{\varepsilon}\rangle_t=\int_0^t\left[\mathbf{L}^{\varepsilon}(\varphi^{\varepsilon} (A_1^\varepsilon(s),x^{\varepsilon}_s))^2-2\varphi^{\varepsilon}(A_1^\varepsilon(s),x^{\varepsilon}_s) \mathbf{L}^{\varepsilon}\varphi^{\varepsilon}(A_1^\varepsilon(s),x^{\varepsilon}_s)\right]ds.$$ Applying the operator $\mathbf{L}^{\varepsilon}=\varepsilon^{-2}\mathbf{Q}+\varepsilon^{-1}\mathbf{A}_1(x)P+\theta^{\varepsilon}$ to the test function $\varphi^{\varepsilon}=\varphi+\varepsilon\varphi_1$ we obtain
the integrand in the form $$Q\varphi_1^2-2\varphi_1Q\varphi_1+\theta^{\varepsilon}\varphi^{\varepsilon}.$$ Thus the integrand is bounded. The boundedness of the quadratic characteristic implies that $\widetilde{\mu}_t^{\varepsilon}$ is relatively compact. Thus, $\varphi(A_1^\varepsilon(t))$ is relatively compact as well, and bounded uniformly in $\varepsilon$. By the results from \cite{ethier} we obtain compactness of $A_1^\varepsilon(t)$, because the test function $\varphi(u)$ belongs to the measure-determining class. We now study the second part, of order $\varepsilon^2$. For a process $y(t), t\ge 0$, let us define the process $y^\dag(t)=\sup\limits_{s\leq t}|y(s)|;$ then from (\ref{1smdecomp}) we have \begin{eqnarray}\label{1eq4} ((\xi^{\varepsilon}(t))^\dag)^2\le 4[u^2+((A^{\varepsilon}(t))^\dag)^2+((M^{\varepsilon}_t)^\dag)^2]. \end{eqnarray} Now we may apply the result of Section 2.3 in \cite{korlim}, namely $$\sum_{k=1}^{\nu(t)} a(\xi^\varepsilon_{k-1}, x^\varepsilon_{k-1})=\int_0^ta(\xi^{\varepsilon}(s),x^{\varepsilon}(s))d\nu(s).$$ Condition $\mathbf{C4}$ implies that for sufficiently small $\varepsilon$ \begin{eqnarray}\label{1eq51} (A^\varepsilon(t))^\dag && \leq \varepsilon^2\int_0^{t/\varepsilon^2}a(\xi^{\varepsilon}(s),x^{\varepsilon}(s))d\nu(s)\leq L\varepsilon^2\int_0^{t/\varepsilon^2}(1+(\xi^{\varepsilon}(s))^\dag)d\nu(s)\end{eqnarray} Now, using Doob's inequality (see, e.g., \cite[Theorem 1.9.2]{lip1}), $$\mathbf{E}((M_t^{\varepsilon})^\dag)^2\leq 4\abso{\mathbf{E}\langle M^{\varepsilon}\rangle_t},$$ together with (\ref{1qch}) and condition \textbf{C4}, we obtain \begin{eqnarray}\label{1eq6} \abso{\langle M^{\varepsilon}\rangle_t}=\abso{\varepsilon^2\int_0^{t/\varepsilon^2}\int_{\mathbb{R}^d\setminus \{0\}}vv^*G(\xi^{\varepsilon}(s),dv;{x}_s^{\varepsilon})d\nu(s)}=\abso{\varepsilon^2\int_0^{t/\varepsilon^2} c(\xi^{\varepsilon}(s);{x}^{\varepsilon}(s))d\nu(s)}\leq\nonumber\\ L\varepsilon^2\int_0^{t/\varepsilon^2}[1+((\xi^{\varepsilon}(s))^\dag)^2]d\nu(s).
\end{eqnarray} Inequalities (\ref{1eq4})-(\ref{1eq6}), condition (\ref{1con1}), and the Cauchy--Bunyakovsky--Schwarz inequality ($[\int_0^t\varphi(s)ds]^2\leq t\int_0^t\varphi^2(s)ds$) imply $$\mathbf{E}((\xi^{\varepsilon}(t))^\dag)^2\leq k_1+k_2\varepsilon^2\int_0^{t/\varepsilon^2}\mathbf{E}[((\xi^{\varepsilon}(s))^\dag)^2d\nu(s)]\leq k_1+k_2l_1\varepsilon^2\int_0^{t/\varepsilon^2}\mathbf{E}((\xi^{\varepsilon}(s))^\dag)^2ds=$$ $$ k_1+k_2l_1\int_0^{t}\mathbf{E}((\xi^{\varepsilon}(s))^\dag)^2ds,$$ where $k_1, k_2$ and $l_1$ are positive constants independent of $\varepsilon$. By Gronwall's inequality (see, e.g., \cite[p. 498]{ethier}), we obtain $$\mathbf{E}((\xi^{\varepsilon}(t))^\dag)^2\leq k_1\exp(k_2l_1 t).$$ Thus, both parts of $\xi^{\varepsilon}(t)$ are relatively compact and bounded, so $$\mathbf{E}\sup\limits_{t\leq T}|\xi^{\varepsilon}(t)|^2\leq k_T.$$ Hence the lemma is proved. \begin{flushright} $\Box $ \end{flushright} \begin{lm} Under assumption $\mathbf{C4}$ there exists a constant $k>0$, independent of $\varepsilon$, such that $$\mathbf{E}|\xi^{\varepsilon}(t)-\xi^{\varepsilon}(s)|^2\leq k |t-s|.$$ \end{lm} \noindent{\it Proof}: In the same manner as in (\ref{1eq4}), we may write $$|\xi^{\varepsilon}(t)-\xi^{\varepsilon}(s)|^2\leq 2|B_t^{\varepsilon} -B_s^{\varepsilon}|^2+2|M_t^{\varepsilon}-M_s^{\varepsilon}|^2.$$ By using Doob's inequality, we obtain $$\mathbf{E}|\xi^{\varepsilon}(t)-\xi^{\varepsilon}(s)|^2\leq 2\mathbf{E}\{|B_t^{\varepsilon}-B_s^{\varepsilon}|^2+8\abso{\langle M^{\varepsilon}\rangle_t-\langle M^{\varepsilon}\rangle_s}\}.$$ Now (\ref{1eq6}), condition (\ref{1con1}), and assumption $\mathbf{C4}$ imply $$|B_t^{\varepsilon}-B_s^{\varepsilon}|^2+8\abso{\langle M^{\varepsilon}\rangle_t-\langle M^{\varepsilon}\rangle_s}\leq k_3[1+((\xi^{\varepsilon}(T))^\dag)^2]|t-s|,$$ where $k_3$ is a positive constant independent of $\varepsilon$. From the last inequality and Lemma 1 the desired conclusion is obtained.
\begin{flushright} $\Box $ \end{flushright} The conditions proved in Corollary 2 and Lemma 2 are necessary and sufficient for the compactness of the family of processes $\xi^{\varepsilon}(t), t\geq 0, \varepsilon>0$. \noindent{\sc Step 2}. At the next step of the proof we solve the singular perturbation problem for the generator of the process $\xi^{\varepsilon}(t).$ To do this, we recall the following theorem. Here $C^2_0(\mathbb{R}^d\times E)$ is the space of real-valued functions defined on $\mathbb{R}^d\times E$ that are twice continuously differentiable in the first argument and vanish at infinity, and $C(\mathbb{R}^d\times E)$ is the space of real-valued continuous bounded functions defined on $\mathbb{R}^d\times E$. \begin{thm}(\cite[Theorem 6.3]{korlim}) Let the following conditions hold for a family of Markov processes $\xi^{\varepsilon}(t), t\ge 0, \varepsilon>0$: \begin{description} \item[CD1:] There exists a family of test functions $\varphi^{\varepsilon}(u, x)$ in $C^2_0(\mathbb{R}^d\times E)$, such that $$\lim\limits_{\varepsilon\to 0}\varphi^{\varepsilon}(u, x) = \varphi(u),$$ uniformly in $u$ and $x.$ \item[CD2:] The following convergence holds: $$\lim\limits_{\varepsilon\to 0}\mathbf{L}^{\varepsilon}\varphi^{\varepsilon}(u, x) = \mathbf{L}\varphi(u),$$ uniformly in $u$ and $x$. The family of functions $\mathbf{L}^{\varepsilon}\varphi^{\varepsilon}, \varepsilon>0$ is uniformly bounded, and $\mathbf{L}\varphi(u)$ and $\mathbf{L}^{\varepsilon}\varphi^{\varepsilon}$ belong to $C(\mathbb{R}^d\times E)$.
\item[CD3:] The quadratic characteristics of the martingales that characterize the coupled Markov process $\xi^{\varepsilon}(t), x^{\varepsilon}(t), t\geq0, \varepsilon>0$ have the representation $\left\langle \mu^{\varepsilon}\right\rangle_t = \int^t_0 \zeta^{\varepsilon}(s)ds,$ where the random functions $\zeta^{\varepsilon}, \varepsilon> 0,$ satisfy the condition $$\sup\limits_{0\leq s \leq T} \mathbf{E}|\zeta^{\varepsilon}(s)|\leq c < +\infty.$$ \item[CD4:] The convergence of the initial values $\xi^{\varepsilon}(0)\Rightarrow\xi(0)$ holds and $$\sup\limits_{\varepsilon>0}\mathbf{E}|\xi^{\varepsilon}(0)|\leq C < +\infty.$$ \end{description} Then the weak convergence $$\xi^{\varepsilon}(t)\Rightarrow \xi(t),\quad \varepsilon\to 0,$$ takes place. \end{thm} We consider the extended Markov renewal process \begin{eqnarray}\label{ren}\xi_n^{\varepsilon},{x}_n^{\varepsilon}, \tau_n^{\varepsilon}, n\ge 0,\end{eqnarray} where $x_n^{\varepsilon} = x^{\varepsilon}(\tau_n^{\varepsilon}), x^{\varepsilon}(t) := x(t/\varepsilon^2), \xi_n^{\varepsilon}=\xi^{\varepsilon}(\tau_n^{\varepsilon})$ and $\tau_{n+1}^{\varepsilon} = \tau_{n}^{\varepsilon} + \varepsilon^2\theta_n^{\varepsilon} , n \geq 0,$ and $$P(\theta_{n+1}^{\varepsilon} \leq t | x_n^{\varepsilon} = x) = F_x(t) = P(\theta_x \leq t).$$ \begin{dfn} \cite{svirid1} The \textit{compensating operator} $\mathbf{L}^{\varepsilon}$ of the Markov renewal process (\ref{ren}) is defined by the following relation $$\mathbf{L}^{\varepsilon}\varphi(\xi^{\varepsilon}_0,x_0,\tau_0) = q(x_0)\mathbf{E}[ \varphi(\xi^{\varepsilon}_1,x_1,\tau_1) -\varphi(\xi^{\varepsilon}_0,x_0,\tau_0) | \mathcal{F}_0], $$ where $$ \mathcal{F}_t := \sigma(\xi^{\varepsilon}(s),x^{\varepsilon}(s),\tau^{\varepsilon}(s); 0 \leq s \leq t).$$ \end{dfn} Using Lemma 9.1 from \cite{korlim} we obtain that the compensating operator of the extended Markov renewal process from Definition 1 can be defined by the relation (see also Section 2.8 in \cite{korlim}) \begin{eqnarray}\label{1eq7}
\mathbf{L}^{\varepsilon}\varphi(u,v;x)=\varepsilon^{-2}q(x)\left[ \int_E P(x, dy) \int_{\mathbb{R}^d} G_{u,x}^{\varepsilon}(dz)\varphi(u + z,v;y)-\right.\\\nonumber \left.\varphi(u,v;x)\right]. \end{eqnarray} By analogy with \cite[Lemma 9.2]{korlim} we may prove the following result: \begin{lm} The main part in the asymptotic representation of the compensating operator (\ref{1eq7}) is as follows $$\mathbf{L}^{\varepsilon}\varphi(u,v,x) = \varepsilon^{-2}\mathbf{Q}\varphi(\cdot,\cdot,x) + \varepsilon^{-1}a_1(u;x)\mathbf{Q}_0\varphi'_u(u,\cdot,\cdot) + [a(u;x) - a_0(u;x)]\mathbf{Q}_0\varphi'_u(u,\cdot,\cdot) +$$ $$ \frac{1}{2} [c(u;x)-c_0(u;x)]\mathbf{Q}_0\varphi_{uu}''(u, \cdot,\cdot)+\mathbf{G}_{u,x}\mathbf{Q}_0\varphi(u,\cdot,\cdot) $$ where: $$\mathbf{Q}_0\varphi(x) := q(x) \int_E P(x, dy)\varphi(y), \mathbf{G}_{u,x}\varphi(u) := \int_{\mathbb{R}^d} [\varphi(u + z) -\varphi(u)]G_{u,x}(dz),$$ $$a_0(u;x)=\int_{\mathbb{R}^d}vG_{u,x}(dv), c_0(u;x)=\int_{\mathbb{R}^d}vv^*G_{u,x}(dv).$$ \end{lm} \noindent{\it Proof}: The proof of this lemma is analogous to that of \cite[Lemma 9.2]{korlim}. The solution of the singular perturbation problem for the test functions $\varphi^{\varepsilon}(u,x)=\varphi(u)+\varepsilon\varphi_1(u,x)+\varepsilon^2\varphi_2(u,x)$ in the form \begin{eqnarray}\label{spp}\mathbf{L}^{\varepsilon}\varphi^{\varepsilon} ={\mathbf{L}}\varphi+\theta^{\varepsilon}\varphi\end{eqnarray} can be found in the same manner as in Lemma 9.3 of \cite{korlim}. To simplify the formula, we pass to the embedded Markov chain.
The corresponding generator is $\widetilde{\mathbf{Q}}:=P-I,$ and the potential operator satisfies the relation $\widetilde{R}_0(P-I)=\widetilde{\Pi}-I.$ From (\ref{spp}) we obtain $$\widetilde{\mathbf{Q}}\varphi=0,$$ $$\widetilde{\mathbf{Q}}\varphi_1+\mathbf{A}_1(x)P\varphi=0,$$ $$\widetilde{\mathbf{Q}}\varphi_2+\mathbf{A}_1(x)P\varphi_1+(\mathbf{A}(x)+\mathbf{C}(x)+\mathbf{G}_{u,x})P\varphi=m(x){\mathbf{L}}\varphi,$$ where $$\mathbf{A}(x)\varphi(u):=[a(u;x)-a_0(u;x)]\varphi'(u), \mathbf{A}_1(x)\varphi(u):=a_1(u;x)\varphi'(u),$$ $$\mathbf{C}(x)\varphi(u):=\frac{1}{2} [c(u;x)-c_0(u;x)]\varphi_{uu}''(u).$$ From the second equation we obtain $\varphi_1=\widetilde{R}_0\mathbf{A}_1(x)\varphi,$ and substituting it into the last equation we have: $$\widetilde{\mathbf{Q}}\varphi_2+\mathbf{A}_1(x)P\widetilde{R}_0\mathbf{A}_1(x)\varphi+(\mathbf{A}(x)+\mathbf{C}(x)+\mathbf{G}_{u,x})\varphi=m(x){\mathbf{L}}\varphi.$$ Since $P\widetilde{R}_0=\widetilde{R}_0+\widetilde{\Pi}-I$ we finally obtain \begin{eqnarray}\label{1eq8} q^{-1}{\mathbf{L}}=\widetilde{\Pi}[(\mathbf{A}(x)+\mathbf{C}(x)+\mathbf{G}_{u,x})+\mathbf{A}_1(x)\widetilde{R}_0\mathbf{A}_1(x)-\mathbf{A}^2_1(x)]\widetilde{\Pi}. \end{eqnarray} Simple calculations give us (\ref{1limgen}) from (\ref{1eq8}). Now Theorem 2 can be applied. We see from (\ref{1eq7}) and (\ref{1eq8}) that the solution of the singular perturbation problem for $\mathbf{L}^{\varepsilon}\varphi^{\varepsilon}(u,v;x)$ satisfies conditions \textbf{CD1, CD2}. Condition \textbf{CD3} of this theorem requires that the quadratic characteristics of the martingale corresponding to the coupled Markov process be relatively compact. The same result follows from the CCC (see Corollary 2 and Lemma 2) by \cite{jacod1}. Thus, condition \textbf{CD3} follows from Corollary 2 and Lemma 2. Due to \textbf{L1} condition \textbf{CD4} is also satisfied.
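The step from (\ref{1eq8}) to (\ref{1limgen}) can be sketched as follows (an outline using only the definitions above; the routine bookkeeping is suppressed). Since $\widetilde{\Pi}$ averages over the stationary distribution $\rho$ of the embedded chain, multiplying (\ref{1eq8}) by $q$ and evaluating the terms gives $$q\widetilde{\Pi}\mathbf{A}(x)\widetilde{\Pi}\varphi(u)=q\int_E\rho(dx)[a(u;x)-a_0(u;x)]\varphi'(u)=(\widehat{a}(u)-\widehat{a}_0(u))\varphi'(u),$$ $$q\widetilde{\Pi}\mathbf{G}_{u,x}\widetilde{\Pi}\varphi(u)=\int_{\mathbb{R}^d}[\varphi(u+v)-\varphi(u)]G_u(dv)=\lambda(u)\int_{\mathbb{R}^d}[\varphi(u+v)-\varphi(u)]G_u^0(dv),$$ while the remaining terms $q\widetilde{\Pi}[\mathbf{C}(x)+\mathbf{A}_1(x)\widetilde{R}_0\mathbf{A}_1(x)-\mathbf{A}^2_1(x)]\widetilde{\Pi}$ act on $\varphi$ through $\varphi''$ and collect into $\frac{1}{2}\sigma^2(u)\varphi''(u)$, with $\sigma^2(u)$ as in Theorem 1.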
Thus, all the conditions of Theorem 2 above are satisfied, so the weak convergence $\xi^{\varepsilon}(t)\Rightarrow \xi^0(t)$ takes place. Theorem 1 is proved.\begin{flushright} $\Box $ \end{flushright} {\it Acknowledgements.} The authors thank the University of Bielefeld for its hospitality and for financial support by DFG project 436 UKR 113/94/07-09.\par \hrule \begin{thebibliography}{9} \bibitem{ber} Bertoin, J. (1996). {\it L\'{e}vy Processes.} Cambridge Tracts in Mathematics, 121. Cambridge University Press, Cambridge. \bibitem{ethier} Ethier, S.N., Kurtz, T.G. (1986). {\it Markov Processes: Characterization and Convergence}, J. Wiley, New York. \bibitem{gisk} Gihman, I.I., Skorohod, A.V. (1974). {\it Theory of Stochastic Processes, vol. 1,2,3}, Springer, Berlin. \bibitem{jacod1} Jacod, J., Shiryaev, A.N. (1987). {\it Limit Theorems for Stochastic Processes}, Springer-Verlag, Berlin. \bibitem{korlim} Koroliuk, V.S., Limnios, N. (2005). {\it Stochastic Systems in Merging Phase Space}, World Scientific Publishers, Singapore. \bibitem{lip3} Liptser, R.Sh. (1994). The Bogolubov averaging principle for semimartingales, {\it Proceedings of the Steklov Institute of Mathematics}, Moscow, No 4, 12 pages. \bibitem{lip1} Liptser, R.Sh., Shiryayev, A.N. (1989). {\it Theory of Martingales}, Kluwer Academic Publishers, Dordrecht, The Netherlands. \bibitem{sat} Sato, K.-I. (1999). {\it L\'{e}vy Processes and Infinitely Divisible Distributions.} Cambridge Studies in Advanced Mathematics, 68. Cambridge University Press, Cambridge. \bibitem{svirid1} Sviridenko, M.N. (1998). Martingale characterization of limit distributions in the space of functions without discontinuities of second kind, \textit{Math. Notes}, 43, No 5, pp. 398--402. \end{thebibliography} \end{document}
\begin{document} \title{Vector spaces of linearizations for matrix polynomials: a bivariate polynomial approach} \begin{dedication} In memory of Leiba Rodman \end{dedication} \begin{abstract} We revisit the landmark paper [D. S. Mackey, N. Mackey, C. Mehl, and V. Mehrmann, {SIAM J. Matrix Anal. Appl.}, 28 (2006), pp.~971--1004] and, by viewing matrices as coefficients for bivariate polynomials, we provide concise proofs for key properties of linearizations for matrix polynomials. We also show that every pencil in the double ansatz space is intrinsically connected to a B\'{e}zout matrix, which we use to prove the eigenvalue exclusion theorem. In addition our exposition allows for any polynomial basis and for any field. The new viewpoint also leads to new results. We generalize the double ansatz space by exploiting its algebraic interpretation as a space of B\'{e}zout pencils to derive new linearizations with potential applications in the theory of structured matrix polynomials. Moreover, we analyze the conditioning of double ansatz space linearizations in the important practical case of a Chebyshev basis. \end{abstract} \begin{keywords} matrix polynomials; bivariate polynomials; B\'{e}zoutian; double ansatz space; degree-graded polynomial basis; orthogonal polynomials; conditioning \end{keywords} \begin{AMS} 65F15, 15A18, 15A22 \end{AMS} \section{Introduction} The paper by Mackey, Mackey, Mehl, and Mehrmann~\cite{Mackey_05_01} introduced three important vector spaces of pencils for matrix polynomials: $\mathbb{L}_1(P)$, $\mathbb{L}_2(P)$, and $\mathbb{DL}(P)$. In~\cite{Mackey_05_01} the spaces $\mathbb{L}_1(P)$ and $\mathbb{L}_2(P)$ generalize the companion forms of the first and second kind, respectively, and the \emph{double ansatz space} is the intersection, $\mathbb{DL}(P) = \mathbb{L}_1(P)\cap\mathbb{L}_2(P)$. 
These vector spaces provide a family of candidate generalized eigenvalue problems for computing the eigenvalues of a matrix polynomial, $P({\lambda})$, giving a rich source of \emph{linearizations} for $P({\lambda})$: a classical approach for polynomial eigenvalue problems. In this article we introduce new viewpoints for these vector spaces. We regard a block matrix as coefficients for a bivariate matrix polynomial (see Section~\ref{sec:bivariate}), and point out that every pencil in $\mathbb{DL}(P)$ is a (generalized) B\'{e}zout matrix due to Lerer and Tismenetsky~\cite{Lerer_82_01} (see Section~\ref{sec:eigexclusion}). These novel viewpoints allow us to obtain remarkably elegant proofs for many properties of $\mathbb{DL}(P)$ and the eigenvalue exclusion theorem, which previously required rather tedious derivations. Furthermore, our exposition includes matrix polynomials expressed in any polynomial basis, such as the Chebyshev polynomial basis~\cite{Effenberger_11_01,kressner2013memory}. We develop a generalization of the double ansatz space (see Section~\ref{sec:beyondDL}) and also discuss extensions to generic algebraic fields, and conditioning analysis (see Section~\ref{sec:conditioning}). Let us recall some basic definitions in the theory of matrix polynomials. Let $P({\lambda}) = \sum_{i=0}^k P_i\phi_i({\lambda})$ be a matrix polynomial expressed in a certain polynomial basis $\left\{\phi_0,\ldots,\phi_k\right\}$, where $P_k\neq 0$, $P_i\in\mathbb{F}^{n\times n}$, and $\mathbb{F}$ is a field. Of particular interest is the case of a degree-graded basis, i.e.,\ $\left\{\phi_i\right\}$ is a polynomial basis where $\phi_j$ is of exact degree $j$. We assume throughout that $P({\lambda})$ is regular, i.e., $\det P({\lambda})\not\equiv 0$, which ensures the finite eigenvalues of $P({\lambda})$ are the roots of the scalar polynomial $\det(P({\lambda}))$.
We note that if the elements of $P_i$ are in the field $\mathbb{F}$ then generally the finite eigenvalues exist in the algebraic closure of $\mathbb{F}$. Given $X,Y \in \mathbb{F}^{nk\times nk}$, a matrix pencil $L({\lambda})={\lambda} X + Y$ is a \emph{linearization} for $P({\lambda})$ if there exist unimodular matrix polynomials $U({\lambda})$ and $V({\lambda})$, i.e., $\det U({\lambda}), \det V({\lambda})$ are nonzero elements of $\mathbb{F}$, such that $L({\lambda})=U({\lambda})\diag(P({\lambda}),I_{n(k-1)})V({\lambda})$ and hence, $L({\lambda})$ shares its finite eigenvalues and their partial multiplicities with $P({\lambda})$. If $P({\lambda})$ has a singular leading coefficient, when expressed in a degree-graded basis, then it has an infinite eigenvalue and to preserve the partial multiplicities at infinity the matrix pencil $L({\lambda})$ needs to be a \emph{strong linearization}, i.e., $L({\lambda})$ is a linearization for $P({\lambda})$ and ${\lambda} Y+ X$ a linearization for ${\lambda}^kP(1/{\lambda})$. In the next section we recall the definitions of $\mathbb{L}_1(P)$, $\mathbb{L}_2(P)$, and $\mathbb{DL}(P)$ allowing for matrix polynomials expressed in any polynomial basis, extending the results in~\cite{Mackey_05_01} given for the monomial basis (such extension was also considered in~\cite{DeTeran_09_01}). In Section~\ref{sec:bivariate} we consider the same space from a new viewpoint, based on bivariate matrix polynomials, and provide concise proofs for properties of $\mathbb{DL}(P)$. Section~\ref{sec:eigexclusion} shows that every pencil in $\mathbb{DL}(P)$ is a (generalized) B\'{e}zout matrix and gives an alternative proof of the eigenvalue exclusion theorem. In Section~\ref{sec:beyondDL} we generalize the double ansatz space to obtain a new family of linearizations, including new structured linearizations for structured matrix polynomials.
Although these new linearizations are mainly of theoretical interest, they show how the new viewpoint can be used to derive novel results. In Section~\ref{sec:conditioning} we analyze the conditioning of the eigenvalues of $\mathbb{DL}(P)$ pencils, and in Section~\ref{sec:construction} we describe a procedure to construct block symmetric pencils in $\mathbb{DL}(P)$ and B\'{e}zout matrices. \subsubsection*{Notation} The expansion $P({\lambda}) = \sum_{i=0}^k P_i\phi_i({\lambda})$ denotes a regular $n\times n$ matrix polynomial of degree $k$ expressed in a polynomial basis $\{\phi_i\}$. The vector $\left[\phi_{k-1}({\lambda}),\phi_{k-2}({\lambda}),\ldots,\phi_0({\lambda})\right]^T$ is denoted by $\Lambda({\lambda})$. The $n\times n$ identity matrix is denoted by $I_n$, which we also write as $I$ when the dimension is immediate from the context. The superscript $^{\mathcal{B}}$ represents blockwise transpose: if $X = (X_{ij})_{1\leq i,j\leq k}$, $X_{ij}\in\mathbb{F}^{n\times n}$, then $X^{\mathcal{B}} = (X_{ji})_{1\leq i,j \leq k}$. \section{Vector spaces and polynomial bases}\label{sec:vectorspaces} Given a matrix polynomial $P({\lambda})$ we can define a vector space, denoted by $\mathbb{L}_1(P)$, as~\cite[Def.\ 3.1]{Mackey_05_01} \[ \mathbb{L}_1(P) = \left\{L({\lambda}) = {\lambda} X + Y: X, Y \in\mathbb{F}^{nk\times nk}, \ L({\lambda})\cdot (\Lambda({\lambda})\otimes I_n ) = v\otimes P({\lambda}), v\in\mathbb{F}^k\right\}, \] where $\Lambda({\lambda}) = \left[\phi_{k-1}({\lambda}),\phi_{k-2}({\lambda}),\ldots,\phi_0({\lambda})\right]^T$ and $\otimes$ is the matrix Kronecker product. An \emph{ansatz vector} $v\in\mathbb{F}^k$ generates a family of pencils in $\mathbb{L}_1(P)$, which are generically linearizations for $P({\lambda})$~\cite[Thm.~4.7]{Mackey_05_01}. If $\left\{\phi_0,\ldots,\phi_k\right\}$ is an orthogonal basis, then the comrade form~\cite{Barnett_book,Specht_57_01} belongs to $\mathbb{L}_1(P)$ with $v=\left[1,0,\ldots,0\right]^T$.
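As a minimal numerical sketch of these definitions (monomial basis, scalar case $n=1$; the cubic and its coefficients below are illustrative choices, not data from~\cite{Mackey_05_01}), the first companion pencil belongs to $\mathbb{L}_1(P)$ with ansatz vector $v=e_1$, and, being a linearization, its generalized eigenvalues are the roots of $P$:

```python
import numpy as np

# P(lam) = lam^3 - 6 lam^2 + 11 lam - 6 = (lam-1)(lam-2)(lam-3); scalar case n = 1
p = np.array([1.0, -6.0, 11.0, -6.0])          # [P_3, P_2, P_1, P_0]

# First companion pencil L(lam) = lam*X + Y, with det(lam*X + Y) = P(lam)
X = np.diag([p[0], 1.0, 1.0])
Y = np.array([[p[1],  p[2],  p[3]],
              [-1.0,  0.0,  0.0],
              [ 0.0, -1.0,  0.0]])

# Ansatz identity L(lam) (Lambda(lam) (x) I_n) = v (x) P(lam), sampled at a few points
v = np.array([1.0, 0.0, 0.0])
for lam in (0.5, -2.0, 3.7):
    Lam = np.array([lam**2, lam, 1.0])          # [phi_2(lam), phi_1(lam), phi_0(lam)]
    assert np.allclose((lam * X + Y) @ Lam, v * np.polyval(p, lam))

# Generalized eigenvalues of the pencil: eigenvalues of -X^{-1} Y (X invertible here)
eigs = np.sort(np.linalg.eigvals(-np.linalg.inv(X) @ Y).real)
print(eigs)   # approximately [1. 2. 3.], the roots of P
```

The same check works for any ansatz vector $v$ once a pencil in $\mathbb{L}_1(P)$ with that $v$ is constructed.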
The action of $L({\lambda})={\lambda} X+Y \in \mathbb{L}_1(P)$ on $(\Lambda({\lambda})\otimes I_n )$ can be characterized by the \emph{column shift sum} operator, denoted by ${\,\boxplus\hspace{-2.2mm}\raisebox{0.3mm}{$\rightarrow$}\;}$~\cite[Lemma 3.4]{Mackey_05_01}, \[ L({\lambda})\cdot (\Lambda({\lambda})\otimes I_n ) = v\otimes P({\lambda}) \Longleftrightarrow X{\,\boxplus\hspace{-2.2mm}\raisebox{0.3mm}{$\rightarrow$}\;} Y = v\otimes \left[P_k, P_{k-1},\ldots,P_0\right]. \] In the monomial basis $X{\,\boxplus\hspace{-2.2mm}\raisebox{0.3mm}{$\rightarrow$}\;} Y$ can be paraphrased as ``insert a zero column on the right of $X$ and a zero column on the left of $Y$ then add them together'', i.e.,\ \[ X{\,\boxplus\hspace{-2.2mm}\raisebox{0.3mm}{$\rightarrow$}\;} Y = \begin{bmatrix}X & \textbf{0}\end{bmatrix} + \begin{bmatrix}\textbf{0} & Y\end{bmatrix}, \] where $\textbf{0}\in\mathbb{F}^{nk\times n}$. More generally, given a polynomial basis we define the column shift sum operator as \begin{equation}\label{eq:Mcolshift} X{\,\boxplus\hspace{-2.2mm}\raisebox{0.3mm}{$\rightarrow$}\;} Y = X M + \begin{bmatrix}\textbf{0} & Y\end{bmatrix}, \end{equation} where $\textbf{0}\in\mathbb{F}^{nk\times n}$, and $M\in\mathbb{F}^{nk\times n(k+1)}$ has block elements $M_{pq} = m_{p,q}I_n$ for $1\leq p\leq k$ and $1\leq q\leq k+1$, and $m_{p,q}$ is defined via the representation \begin{equation} \label{eq:Mmat} x\phi_{i-1} = \sum_{j=0}^k m_{k+1-i,k+1-j}\phi_{j}, \quad 1\leq i\leq k. \end{equation} The matrix $M$ has a particularly nice form if the basis is degree-graded, since the terms with $j>i$ in the above sum are zero.
Then, the matrix $M$ in~\eqref{eq:Mcolshift} is given by \begin{equation}\label{eq:defM} M =\begin{bmatrix}M_{11}&M_{12}&\ldots&M_{1k}&M_{1,k+1}\cr \textbf{0} & M_{22} & \ddots & \ddots &M_{2,k+1} \cr \vdots&\ddots&\ddots&\ddots&\vdots\cr \textbf{0} & \ldots &\textbf{0} &M_{kk}&M_{k,k+1}\end{bmatrix}, \end{equation} where $M_{pq} = m_{p,q}I_n$, $1\leq p\leq q\leq k+1$, $p\neq k+1$. Furthermore, for an orthogonal basis, a three-term recurrence is satisfied and in this case the matrix $M$ has only three nonzero block diagonals. For example, if $P({\lambda}) \in \mathbb{R}[{\lambda}]^{n \times n}$ is expressed in the Chebyshev basis\footnote{Non-monomial bases can be of significant interest when working with numerical algorithms over some subfield of $\mathbb{C}$. For the sake of completeness, we note that in order to define the Chebyshev basis the field characteristic must be different from $2$.} $\left\{T_0(x),\ldots,T_k(x)\right\}$, where $T_j(x)=\cos\left(j\cos^{-1}x\right)$ for $x\in [-1,1]$, we have \begin{equation}\label{eq:Morth} M = \begin{bmatrix}\tfrac{1}{2}I_n & 0 & \tfrac{1}{2}I_n\cr & \ddots &\ddots&\ddots \cr & & \tfrac{1}{2}I_n & 0 & \tfrac{1}{2}I_n\cr & & & I_n& 0\cr \end{bmatrix} \in \mathbb{R}^{nk\times n(k+1)}. \end{equation} The properties of the vector space $\mathbb{L}_2(P)$ are analogous to those of $\mathbb{L}_1(P)$~\cite{Higham_06_01}. If $L({\lambda})={\lambda} X + Y$ is in $\mathbb{L}_2(P)$ then ${\lambda} X^{\mathcal{B}} + Y^{\mathcal{B}}$ belongs to $\mathbb{L}_1(P)$.
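Returning to the Chebyshev matrix $M$ in~\eqref{eq:Morth}: its band structure can be reproduced numerically in the scalar case ($n=1$). The following sketch (assuming NumPy's Chebyshev utilities) builds $M$ directly from the multiplication relation~\eqref{eq:Mmat}:

```python
import numpy as np
from numpy.polynomial import chebyshev as cheb

k = 4  # scalar case n = 1, so M is k x (k+1)
M = np.zeros((k, k + 1))
for i in range(1, k + 1):
    # Chebyshev coefficients of x*T_{i-1}(x):  x*T_{i-1} = sum_j c[j]*T_j
    c = cheb.chebmulx([0.0] * (i - 1) + [1.0])
    for j, cj in enumerate(c):
        # entry m_{k+1-i, k+1-j} of the multiplication matrix, 0-based indexing
        M[k - i, k - j] = cj

print(M)
# Rows 1..k-1 carry the band 1/2, 0, 1/2 (three-term recurrence x*T_j = (T_{j+1}+T_{j-1})/2),
# and the last row has a single 1, reflecting x*T_0 = T_1.
```

For $n>1$ each scalar entry $m_{p,q}$ would simply be replaced by the block $m_{p,q}I_n$.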
This connection means that the action of $L({\lambda})\in\mathbb{L}_2(P)$ is characterized by a \emph{row shift sum} operator, denoted by ${\,\boxplus \hspace{-2.2mm}\raisebox{-1mm}{$\downarrow$} \;}$, and defined as \[ X {\,\boxplus \hspace{-2.2mm}\raisebox{-1mm}{$\downarrow$} \;} Y = \left(X^{\mathcal{B}}{\,\boxplus\hspace{-2.2mm}\raisebox{0.3mm}{$\rightarrow$}\;} Y^{\mathcal{B}}\right)^{\mathcal{B}} = M^{\mathcal{B}} X + \begin{bmatrix} \textbf{0}^T\\ Y \end{bmatrix}. \] Here, we used the fact that $(X^{\mathcal{B}} M)^{\mathcal{B}} = M^{\mathcal{B}} X$, which follows from the structure $M_{pq} = m_{p,q}I_n$. \subsection{Extending the results to general polynomial bases}\label{sec:genpolybases} Many of the derivations in~\cite{Mackey_05_01} are specifically for $P({\lambda})$ expressed in a monomial basis, though the lemmas and theorems can be generalized to any polynomial basis. One approach to generalize~\cite{Mackey_05_01} is to use a change-of-basis matrix $S$ such that $\Lambda({\lambda}) = S[{\lambda}^{k-1},\dots,{\lambda},1]^T$ and to define the mapping (see also~\cite{DeTeran_09_01}) \begin{equation} \label{eq:mapc} \mathcal{C}\left(\hat{L}({\lambda})\right)=\hat{L}({\lambda}) (S^{-1} \otimes I_n) = L({\lambda}), \end{equation} where $\hat{L}({\lambda})$ is a pencil in $\mathbb{L}_1(P)$ for the matrix polynomial $P({\lambda})$ expressed in the monomial basis with the same ansatz vector as $L({\lambda})$. In particular, the strong linearization theorem holds for any polynomial basis. \begin{theorem}[Strong Linearization Theorem]\label{thm:linearization} Let $P({\lambda})$ be a regular matrix polynomial (expressed in any polynomial basis), and let $L({\lambda})\in\mathbb{L}_1(P)$. Then the following statements are equivalent: \begin{enumerate} \item $L({\lambda})$ is a linearization for $P({\lambda})$, \item $L({\lambda})$ is a regular pencil, and \item $L({\lambda})$ is a strong linearization for $P({\lambda})$.
\end{enumerate} \end{theorem} \begin{proof} It is a corollary of~\cite[Theorem 4.3]{Mackey_05_01}. In fact, the mapping $\mathcal{C}$ in~\eqref{eq:mapc} is a strict equivalence between $\mathbb{L}_1(P)$ expressed in the monomial basis and $\mathbb{L}_1(P)$ expressed in another polynomial basis. Therefore, $L({\lambda})$ has one of the three properties if and only if $\hat{L}({\lambda})$ also does, and the properties are equivalent for $L({\lambda})$ because they are equivalent for $\hat{L}({\lambda})$. \end{proof} This strict equivalence can be used to generalize many properties of $\mathbb{L}_1(P)$, $\mathbb{L}_2(P)$, and $\mathbb{DL}(P)$, including~\cite[Thm.~4.7]{Mackey_05_01}, which shows that a pencil from $\mathbb{L}_1(P)$ is generically a linearization; however, our approach based on bivariate polynomials allows for more concise derivations. \section{Recasting to bivariate matrix polynomials}\label{sec:bivariate} A block matrix $X\in\mathbb{F}^{nk\times nh}$ with $n\times n$ blocks can provide the coefficients in the basis $\{\phi_i\}$ for a bivariate matrix polynomial of degree $h-1$ in $x$ and $k-1$ in $y$. Let $\phi:\mathbb{F}^{nk\times nh}\rightarrow \mathbb{F}_{h-1}^{n\times n}[x]\times\mathbb{F}^{n\times n}_{k-1}[y]$ be the mapping defined by \begin{equation} \label{eq:phidef} \phi \ : \ X = \begin{bmatrix}X_{11} & \dots & X_{1h} \cr \vdots &\ddots &\vdots \cr X_{k1} & \ldots & X_{kh}\end{bmatrix}, \hbox{ }X_{ij}\in\mathbb{F}^{n\times n} \mapsto F(x,y) = \sum_{i=0}^{k-1}\sum_{j=0}^{h-1} X_{k-i,h-j}\phi_i(y)\phi_j(x).
\end{equation} Equivalently, we may define the map as follows: \[ \phi\ : \ X = \begin{bmatrix}X_{11} & \dots & X_{1h} \cr \vdots &\ddots &\vdots \cr X_{k1} & \ldots & X_{kh}\end{bmatrix} \mapsto F(x,y) = \begin{bmatrix}\phi_{k-1}(y) I &\cdots & \phi_0(y) I \end{bmatrix} X \begin{bmatrix} \phi_{h-1}(x) I\\ \vdots\\ \phi_0(x) I \end{bmatrix}.\] Usually, and unless otherwise specified, we will apply the map $\phi$ to square block matrices so that $h=k$. We recall that a regular (matrix) polynomial $P({\lambda})$ expressed in a degree-graded basis has an infinite eigenvalue if its leading matrix coefficient is singular. In order to correctly take care of infinite eigenvalues we write $P({\lambda})=\sum_{i=0}^g P_i \phi_i({\lambda})$, where the integer $g\geq k$ is called the \emph{grade}~\cite{Mackey_11_01}. If the grade of $P({\lambda})$ is larger than the degree then $P({\lambda})$ has at least $n$ infinite eigenvalues. Usually, and unless stated otherwise, the grade is equal to the degree. It is easy to show that the mapping $\phi$ is a bijection between $k \times h$ block matrices with $n\times n$ blocks and $n\times n$ bivariate matrix polynomials of grade $h-1$ in $x$ and grade $k-1$ in $y$. Moreover, $\phi$ is an isomorphism that preserves the additive group structure; we omit the trivial proof. Many matrix operations can be interpreted as functional operations via this duality between block matrices and their continuous analogues (see, for example,~\cite{townsend2013extension}). In many instances, existing proofs in the theory of linearizations of matrix polynomials can be simplified, and throughout the paper we will often exploit this parallelism. We summarize some computation rules in Table~\ref{tab:op}. We hope the table will be useful not only in this paper, but also for future work.
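Some of the rules collected in Table~\ref{tab:op} can be checked directly in the scalar monomial case ($n=1$, $\phi_j(x)=x^j$). In the following sketch (illustration only), \texttt{feval} is an ad hoc implementation of the map $\phi$ of~\eqref{eq:phidef}:

```python
def feval(X, x, y):
    # phi: coefficient matrix -> bivariate polynomial (monomial basis, n = 1):
    # F(x,y) = sum_{i,j} X[k-1-i][h-1-j] * y^i * x^j
    k, h = len(X), len(X[0])
    return sum(X[k - 1 - i][h - 1 - j] * y**i * x**j
               for i in range(k) for j in range(h))

X = [[1.0, 2.0, 0.0],
     [3.0, -1.0, 4.0],
     [0.0, 5.0, 2.0]]                      # k = h = 3, n = 1

for x0, y0 in [(0.7, -1.3), (2.0, 0.5)]:
    # rule: evaluation against the vectors Lambda(y0)^T and Lambda(x0)
    lam_y, lam_x = [y0**2, y0, 1.0], [x0**2, x0, 1.0]
    direct = sum(lam_y[i] * X[i][j] * lam_x[j]
                 for i in range(3) for j in range(3))
    assert abs(direct - feval(X, x0, y0)) < 1e-9
    # rule: X -> XM is multiplication by x (for monomials, XM = [X | 0])
    XM = [row + [0.0] for row in X]
    assert abs(feval(XM, x0, y0) - feval(X, x0, y0) * x0) < 1e-9
    # rule: X -> X^B (block transpose; plain transpose for n = 1) swaps x, y
    XT = [list(r) for r in zip(*X)]
    assert abs(feval(XT, x0, y0) - feval(X, y0, x0)) < 1e-9
```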
All the rules are valid for any basis and for any field $\mathbb{F}$, except the last row, which assumes $\mathbb{F}=\mathbb{C}$. \begin{table}[htbp] \centering \caption{Correspondence between operations on the matrix $X$ and operations on the bivariate polynomial $F(x,y)$. The matrix $M$ is from~\eqref{eq:Mmat}, and $v,w\in\mathbb{F}^{k}$ are vectors. } \label{tab:op} \begin{tabular}{c|c} Block matrix operation & Bivariate polynomial operation\\ \hline Block matrix $X$ & Bivariate polynomial $F(x,y)$\\ $X \mapsto X M$ & $F(x,y) \mapsto F(x,y) x$\\ $X \mapsto M^{\mathcal{B}} X$ & $F(x,y) \mapsto y F(x,y)$ \\ $X (\Lambda({\lambda}) \otimes I)$ & Evaluation at $x={\lambda}$: $F({\lambda},y)$\\ $X (\Lambda({\lambda}) \otimes v)$ & $F({\lambda},y) v$\\ $(\Lambda^T(\mu) \otimes w^T) X$ & $w^TF(x,\mu)$\\ $(\Lambda^T(\mu) \otimes w^T) X (\Lambda({\lambda}) \otimes v)$ & $w^TF({\lambda},\mu) v$\\ $X \mapsto X^{\mathcal{B}}$ & $F(x,y) \mapsto F(y,x)$\\ $X \mapsto X^T$ & $F(x,y) \mapsto F^T(y,x)$\\ $X \mapsto X^*$ & $F(x,y) \mapsto F^*(y,x)$\\ \end{tabular} \end{table} Other computational rules exist when the basis has additional properties. We give some examples in Table~\ref{tab:op2}, in which \begin{equation} \label{eq:sigR} \Sigma= \small \begin{bmatrix} \ddots & & & \\ & I & & \\ & & -I &\\ & & & I \end{bmatrix} \normalsize , \quad R=\begin{bmatrix} & & I\\ & \iddots & \\ I & & \end{bmatrix} , \end{equation} and we say that a polynomial basis is \emph{alternating} if $\phi_{i}(x)$ is even (odd) when $i$ is even (odd). \begin{table}[htbp] \centering \caption{ Correspondence when the polynomial basis is alternating or the monomial basis.
} \label{tab:op2} \begin{tabular}{c|c|c} Type of basis & Block matrix operation & Bivariate polynomial operation\\ \hline Alternating & $X \mapsto \Sigma X$ & $F(x,y) \mapsto F(x,-y)$\\ Alternating & $X \mapsto X \Sigma$ & $F(x,y) \mapsto F(-x,y)$\\ Monomials & $X \mapsto R X$ & $F(x,y) \mapsto y^{k-1}F(x,y^{-1})$\\ Monomials & $X \mapsto X R$ & $F(x,y) \mapsto x^{h-1}F(x^{-1},y)$\\ \end{tabular} \end{table} As seen in Table~\ref{tab:op}, the matrix $M$ in~\eqref{eq:Mcolshift} is such that the bivariate matrix polynomial corresponding to the coefficients $XM$ is $F(x,y)x$, i.e., $M$ applied on the right of $X$ represents multiplication of $F(x,y)$ by $x$. This gives an equivalent definition of the column shift sum operator: if the block matrices $X$ and $Y$ are the coefficients of $F(x,y)$ and $G(x,y)$, then the coefficients of $H(x,y)$ are $Z$, where \[ Z = X{\,\boxplus\hspace{-2.2mm}\raisebox{0.3mm}{$\rightarrow$} \;} Y,\qquad H(x,y) = F(x,y)x + G(x,y). \] This gives a characterization of the $\mathbb{L}_1(P)$ space from the bivariate polynomial viewpoint: it consists of the pencils $L({\lambda}) = {\lambda} X + Y$ such that, under the mapping $\phi$, $F(x,y)x + G(x,y) = v(y)P(x)$. The ansatz vector is $v=[v_{k-1},\ldots,v_1,v_0]^T$, where $v(y) = \sum_{i=0}^{k-1}v_i\phi_i(y)$. Regarding the space $\mathbb{L}_2(P)$, the coefficient matrix $M^{\mathcal{B}} X$ corresponds to the bivariate matrix polynomial $yF(x,y)$, i.e., $M^{\mathcal{B}}$ applied on the left of $X$ represents multiplication of $F(x,y)$ by $y$. We have thus derived the following result (here and below, with a slight abuse of notation, we use $v$ to denote both the ansatz polynomial and the ansatz vector).
\begin{lemma}\label{lem:L1L2} For an $n\times n$ matrix polynomial $P({\lambda})$ of degree $k$, the space $\mathbb{L}_1(P)$ can be written as \[ \mathbb{L}_1(P) = \left\{L({\lambda}) = {\lambda} X + Y: F(x,y)x + G(x,y) = v(y)P(x), v\in\Pi_{k-1}(\mathbb{F})\right\}, \] where $\Pi_{k-1}(\mathbb{F})$ is the space of polynomials in $\mathbb{F}[y]$ of degree at most $k-1$, $F(x,y),G(x,y)$ are defined from $X,Y$ by the mapping~\eqref{eq:phidef}, and, writing $v(y) = \sum_{i=0}^{k-1}v_i\phi_i(y)$, $v=[v_{k-1},\ldots,v_1,v_0]^T$ is the ansatz vector. Similarly, \[ \mathbb{L}_2(P) = \left\{L({\lambda}) = {\lambda} X + Y: yF(x,y) + G(x,y) = P(y)w(x), w\in\Pi_{k-1}(\mathbb{F})\right\}. \] \end{lemma} The space $\mathbb{DL}(P)$ is the intersection of $\mathbb{L}_1(P)$ and $\mathbb{L}_2(P)$. It is an important vector space because it contains block symmetric linearizations. By Lemma~\ref{lem:L1L2}, a pencil $L({\lambda}) = {\lambda} X + Y$ belongs to $\mathbb{DL}(P)$ with ansatz polynomials $v(y)$ and $w(x)$ if the following $\mathbb{L}_1(P)$ and $\mathbb{L}_2(P)$ conditions are satisfied: \begin{equation}\label{eq:DLrelations} F(x,y)x + G(x,y) = v(y)P(x), \qquad yF(x,y) + G(x,y) = P(y)w(x). \end{equation} It appears that $v(y)$ and $w(x)$ could be chosen independently; however, if we substitute $y=x$ into~\eqref{eq:DLrelations} we obtain the \emph{compatibility condition} \[ v(x)P(x) = F(x,x)x + G(x,x) = xF(x,x) + G(x,x) = P(x)w(x) \] and hence $v=w$ as elements of $\Pi_{k-1}(\mathbb{F})$, since $P(x)(v(x)-w(x))$ is the zero matrix and $P(x)$ is regular. This shows that the double ansatz space is actually a single ansatz space, a fact that required two quite technical proofs in~\cite[Prop.~5.2, Thm.~5.3]{Mackey_05_01}. The bivariate matrix polynomials $F(x,y)$ and $G(x,y)$ are uniquely determined by the ansatz $v(x)$, since they satisfy the explicit formulas \begin{equation}\label{eq:F} yF(x,y) - F(x,y)x = P(y)v(x) - v(y)P(x), \end{equation} \begin{equation}\label{eq:G} yG(x,y)-G(x,y)x = yv(y)P(x) - P(y)v(x)x.
\end{equation} In other words, there is an isomorphism between $\Pi_{k-1}(\mathbb{F})$ and $\mathbb{DL}(P)$. It also follows from~\eqref{eq:F} and~\eqref{eq:G} that $F(x,y) = F(y,x)$ and $G(x,y)=G(y,x)$. This shows that all the pencils in $\mathbb{DL}(P)$ are block symmetric. Furthermore, if $F(x,y)=F(y,x)$ and $G(x,y)=G(y,x)$, and they satisfy $F(x,y)x + G(x,y) = P(x)v(y)$, then we also have $F(y,x)x + G(y,x) = P(x)v(y)$, and by swapping $x$ and $y$ we obtain the $\mathbb{L}_2(P)$ condition, $yF(x,y) + G(x,y) = P(y)v(x)$. This shows that all block symmetric pencils in $\mathbb{L}_1(P)$ belong to $\mathbb{L}_2(P)$ and hence also belong to $\mathbb{DL}(P)$. Thus, $\mathbb{DL}(P)$ is the space of block symmetric pencils in $\mathbb{L}_1(P)$~\cite[Thm.\ 3.4]{Higham_06_01}. \begin{remark}Although in this paper we do not consider singular matrix polynomials, we note that the analysis of this section still holds even if we drop the assumption that $P(x)$ is regular. We only need to assume $P(x) \not \equiv 0$ in our proof that $\mathbb{DL}(P)$ is in fact a single ansatz space, which is no loss of generality because $\mathbb{DL}(0)=\{0\}$. \end{remark} \section{Eigenvalue exclusion theorem and B\'{e}zoutians}\label{sec:eigexclusion} The eigenvalue exclusion theorem~\cite[Thm.~6.7]{Mackey_05_01} shows that if $L({\lambda})\in\mathbb{DL}(P)$ with ansatz $v\in\Pi_{k-1}(\mathbb{F})$, then $L({\lambda})$ is a linearization for the matrix polynomial $P({\lambda})$ if and only if $v({\lambda})I_n$ and $P({\lambda})$ do not share an eigenvalue. This theorem is important because, generically, $v({\lambda})I_n$ and $P({\lambda})$ do not share eigenvalues, so almost all choices for $v\in\Pi_{k-1}(\mathbb{F})$ correspond to linearizations in $\mathbb{DL}(P)$ for $P({\lambda})$.
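In the scalar case the solutions of~\eqref{eq:F} and~\eqref{eq:G} are $F(x,y) = \frac{p(y)v(x)-v(y)p(x)}{y-x}$ and $G(x,y)=\frac{yv(y)p(x)-p(y)v(x)x}{y-x}$, which gives a quick numerical sanity check of the conditions~\eqref{eq:DLrelations} and of the symmetries $F(x,y)=F(y,x)$ and $G(x,y)=G(y,x)$. The following sketch, with an arbitrarily chosen $p$ and $v$, is illustrative only:

```python
def p(t): return t**2 + 3*t + 2          # arbitrary scalar polynomial, k = 2
def v(t): return 2*t + 1                 # arbitrary ansatz polynomial

def F(x, y): return (p(y)*v(x) - v(y)*p(x)) / (y - x)
def G(x, y): return (y*v(y)*p(x) - p(y)*v(x)*x) / (y - x)

for x, y in [(0.3, 1.7), (-1.2, 0.4), (2.5, -0.8)]:
    # the L1(P) and L2(P) conditions of eq. (DLrelations) with w = v
    assert abs(F(x, y)*x + G(x, y) - v(y)*p(x)) < 1e-9
    assert abs(y*F(x, y) + G(x, y) - p(y)*v(x)) < 1e-9
    # symmetry of F and G (block symmetry of the DL(P) pencil)
    assert abs(F(x, y) - F(y, x)) < 1e-9
    assert abs(G(x, y) - G(y, x)) < 1e-9
```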
\begin{theorem}[Eigenvalue Exclusion Theorem]\label{thm:eigexclusion} Suppose that $P({\lambda})$ is a regular matrix polynomial of degree $k$ and $L({\lambda})$ is in $\mathbb{DL}(P)$ with a nonzero ansatz polynomial $v({\lambda})$. Then, $L({\lambda})$ is a linearization for $P({\lambda})$ if and only if $v({\lambda}) I_n$ (with grade $k-1$) and $P({\lambda})$ do not share an eigenvalue. \end{theorem} We note that the last statement also includes infinite eigenvalues. In the following we will observe that any $\mathbb{DL}(P)$ pencil is a (generalized) B\'{e}zout matrix and expand on this theme. This observation tremendously simplifies the proof of Theorem~\ref{thm:eigexclusion} and the connection with the classical theory of B\'{e}zoutians (for the scalar case) and the Lerer--Tismenetsky B\'{e}zoutian (for the matrix case) allows us to further our understanding of the $\mathbb{DL}(P)$ vector space, and leads to a new vector space of linearizations. We first recall the definition of a B\'{e}zout matrix and B\'{e}zoutian function for scalar polynomials (see~\cite[p.~277]{Bernstein_09_01} and~\cite[sec.~2.9]{Bini_94_01}). \begin{definition}[B\'{e}zout matrix and B\'{e}zoutian function]\label{def:Bezout} Let $p_1(x)$ and $p_2(x)$ be scalar polynomials \[ p_1(x) = \sum_{i=0}^k a_i\phi_i(x), \qquad p_2(x) = \sum_{i=0}^{k} c_i\phi_i(x), \] \emph{(}$a_k$ and $c_k$ can be zero as we regard $p_1(x)$ and $p_2(x)$ as polynomials of grade $k$\emph{)}, then the B\'{e}zoutian function associated with $p_1(x)$ and $p_2(x)$ is the bivariate function \[ \mathcal{B}(p_1,p_2) =\sum_{i,j=1}^k b_{ij}\phi_{k-i}(y)\phi_{k-j}(x):= \frac{p_1(y)p_2(x)-p_2(y)p_1(x)}{x-y}. \] The $k\times k$ B\'{e}zout matrix associated to $p_1(x)$ and $p_2(x)$ is defined via the coefficients of the B\'{e}zoutian function \begin{equation} \label{eq:bezmat} B(p_1,p_2) = \left(b_{ij}\right)_{1\leq i,j\leq k}.
\end{equation} \end{definition} Here are some standard properties of the B\'{e}zoutian function and B\'{e}zout matrix: \begin{enumerate} \item The B\'{e}zoutian function is skew-symmetric with respect to its polynomial arguments: $\mathcal{B}(p_1,p_2)=-\mathcal{B}(p_2,p_1)$. \item $\mathcal{B}(p_1,p_2)$ is bilinear with respect to its polynomial arguments. \item $B(p_1,p_2)$ is nonsingular if and only if $p_1$ and $p_2$ have no common roots. \item $B(p_1,p_2)$ is a symmetric matrix. \end{enumerate} Property 3 holds for polynomials whose coefficients lie in \emph{any} field $\mathbb{F}$, provided that the common roots are sought in the algebraic closure of $\mathbb{F}$ and roots at infinity are included. Note in fact that the dimension of the B\'{e}zout matrix depends on the formal choice of the grade of $p_1$ and $p_2$. Unusual choices of the grade are not completely artificial: for example, they may arise when evaluating a bivariate polynomial along $x=x_0$, forming a univariate polynomial~\cite{Nakatsukasa_13_01}. Moreover, it is important to be aware that common roots at infinity make the B\'{e}zout matrix singular. \begin{example} Consider the finite field $\mathbb{F}_2=\{0,1\}$ and let $p_1=x^2$ and $p_2=x+1$, whose finite roots, counting multiplicity, are $\{0,0\}$ and $\{1\}$, respectively. The B\'{e}zoutian function is $x+y+xy$. If $p_1$ and $p_2$ are regarded as grade $2$, the B\'{e}zout matrix (in the monomial basis) is $\begin{bmatrix} 1 & 1\\ 1 & 0 \end{bmatrix}$, which is nonsingular, with determinant $1$. This is expected as $p_1$ and $p_2$ have no shared root. If $p_1$ and $p_2$ are regarded as grade $3$ the B\'{e}zout matrix becomes $\begin{bmatrix} 0 & 0 & 0\\ 0 & 1 & 1\\ 0 &1 & 0 \end{bmatrix}$, whose kernel is spanned by $\begin{bmatrix} 1 & 0 & 0 \end{bmatrix}^T$. Note indeed that if the grade is $3$ then the roots are, respectively, $\{\infty,0,0\}$ and $\{\infty,\infty,1\}$, so $p_1$ and $p_2$ share a root at $\infty$.
\end{example} To highlight the connection with the classic B\'{e}zout matrix we first consider scalar polynomials and show that the eigenvalue exclusion theorem immediately follows from the connection with B\'{e}zoutians. \begin{proof}[Proof of Theorem~\ref{thm:eigexclusion} for $n=1$] Let $p({\lambda})$ be a scalar polynomial of degree (and grade) $k$ and $v({\lambda})$ a scalar polynomial of degree at most $k-1$. We first solve the relations in~\eqref{eq:F} and~\eqref{eq:G} to obtain \begin{equation} \label{eq:fgscalar} F(x,y) = \frac{p(y)v(x)-v(y)p(x)}{y-x}, \qquad G(x,y) = \frac{yv(y)p(x)-p(y)v(x)x}{y-x} \end{equation} and thus, by Definition~\ref{def:Bezout}, $F(x,y)=\mathcal{B}(v,p)$ and $G(x,y)=\mathcal{B}(p,vx)$. Moreover, $\mathcal{B}$ is skew-symmetric and bilinear with respect to its polynomial arguments so \begin{equation} \label{eq:Lbez} L({\lambda}) = {\lambda} X + Y = {\lambda} B(v,p) + B(p,xv) = -{\lambda} B(p,v) + B(p,xv) =B(p,(x-{\lambda})v). \end{equation} Since $B$ is a B\'{e}zout matrix, $\det(L({\lambda}))=\det(B(p,(x-{\lambda})v))\neq 0$ for some ${\lambda}$ if and only if $p$ and $v$ do not share a root, which, by Theorem~\ref{thm:linearization}, is equivalent to $L({\lambda})$ being a linearization. \end{proof} An alternative (more algebraic) argument is to note that $p$ and $(x-{\lambda})v$ are polynomials in $x$ whose coefficients lie in the field of fractions $\mathbb{F}({\lambda})$. Since $p$ has coefficients in the subfield $\mathbb{F} \subset \mathbb{F}({\lambda})$, its roots lie in the algebraic closure of $\mathbb{F}$, denoted by $\overline{\mathbb{F}}$. The factorization $(x-{\lambda})v$ similarly reveals that this polynomial has one root at ${\lambda}$, while all the others lie in $\overline{\mathbb{F}} \cup \{\infty\}$. Therefore, $p$ and $(x-{\lambda})v$ share a root in the closure of $\mathbb{F}({\lambda})$ if and only if $p$ and $v$ share a root in $\overline{\mathbb{F}}$. 
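The scalar argument above can also be replayed numerically: building $B(p,(x-{\lambda})v)$ from Definition~\ref{def:Bezout} in the monomial basis (via synthetic division of the B\'{e}zoutian numerator by $x-y$), the determinant of $L({\lambda})$ vanishes identically when $p$ and $v$ share a root, and otherwise vanishes only at the eigenvalues of $p$. A self-contained sketch (illustration only, $k=2$):

```python
def mul(a, b):
    # product of polynomials given as coefficient lists, a[i] ~ x^i
    out = [0.0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def bezout_matrix(p1, p2, k):
    # k x k Bezout matrix of p1, p2 regarded as grade-k polynomials:
    # coefficients of (p1(y)p2(x) - p2(y)p1(x)) / (x - y), monomial basis
    p1 = p1 + [0.0] * (k + 1 - len(p1))
    p2 = p2 + [0.0] * (k + 1 - len(p2))
    # numerator N(x,y) = sum_j c_j(y) x^j, each c_j a list in powers of y
    c = [[p2[j] * p1[m] - p1[j] * p2[m] for m in range(k + 1)]
         for j in range(k + 1)]
    # synthetic division by (x - y): q_{k-1} = c_k, q_{j-1} = c_j + y*q_j
    q = [None] * k
    q[k - 1] = c[k]
    for j in range(k - 1, 0, -1):
        q[j - 1] = [cj + s for cj, s in zip(c[j], [0.0] + q[j][:k])]
    # b_{ij} = coefficient of y^{k-i} x^{k-j} in the Bezoutian
    return [[q[k - j][k - i] for j in range(1, k + 1)] for i in range(1, k + 1)]

def det2(B):
    return B[0][0] * B[1][1] - B[0][1] * B[1][0]

p = [-1.0, 0.0, 1.0]                       # p(x) = x^2 - 1, roots +-1
for lam in (0.5, 3.0, -2.0):
    # v(x) = x - 1 shares the root 1 with p: L(lam) is singular for every lam
    assert abs(det2(bezout_matrix(p, mul([-lam, 1.0], [-1.0, 1.0]), 2))) < 1e-9
for lam in (0.5, 3.0):
    # v(x) = x - 2 shares no root with p: L(lam) = B(p, (x - lam)v) is
    # nonsingular away from the eigenvalues +-1 of p ...
    assert abs(det2(bezout_matrix(p, mul([-lam, 1.0], [-2.0, 1.0]), 2))) > 1e-6
# ... and singular exactly at them
assert abs(det2(bezout_matrix(p, mul([-1.0, 1.0], [-2.0, 1.0]), 2))) < 1e-9
```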
Our proof of the eigenvalue exclusion theorem is purely algebraic and holds without any assumption on the field $\mathbb{F}$. However, as noted by Mehl~\cite{privcom}, if $\mathbb{F}$ is finite it could happen that no pencil in $\mathbb{DL}$ is a linearization, because there are only finitely many choices available for the ansatz polynomial $v$. Although this approach is extendable to any field, for simplicity of exposition we assume for the rest of this section that the underlying field is $\mathbb{C}$. A natural question at this point is whether this approach generalizes to the matrix case ($n>1$). An appropriate generalization of the scalar B\'{e}zout matrix should: \begin{itemize} \item depend on two matrix polynomials $P^{(1)}$ and $P^{(2)}$; \item have nontrivial kernel if and only if $P^{(1)}$ and $P^{(2)}$ share an eigenvalue and the corresponding eigenvector (note that for scalar polynomials the only possible eigenvector is $1$, up to multiplicative scaling). \end{itemize} The following examples show that the most straightforward ideas fail to satisfy the second property above. \begin{example}\label{ex:naive} Note that the most na\"{i}ve idea, i.e., $\frac{P^{(1)}(y)P^{(2)}(x)-P^{(2)}(y)P^{(1)}(x)}{x-y}$, is generally not even a matrix polynomial (its elements are rational functions). Almost as straightforward is the generalization $\frac{P^{(1)}(y)P^{(2)}(x)-P^{(1)}(x)P^{(2)}(y)}{x-y}$, which is indeed a bivariate matrix polynomial. However, consider the associated B\'{e}zout block matrix. Let us check that it does not satisfy the property of being singular if and only if $P^{(1)}$ and $P^{(2)}$ have a shared eigenpair by providing two examples over the field $\mathbb{Q}$ and in the monomial basis. Consider first $P^{(1)}=\begin{bmatrix} x & 0\\ 0 & x-1 \end{bmatrix}$ and $P^{(2)}=\begin{bmatrix} x-6 & -1\\ 12 & x+1 \end{bmatrix}$. $P^{(1)}$ and $P^{(2)}$ have disjoint spectra. 
The corresponding B\'{e}zout matrix is $\begin{bmatrix} 6 & 1\\ -12 & -2 \end{bmatrix}$, which is singular. Conversely, let $P^{(1)}=\begin{bmatrix} x & 1\\ 0 & x \end{bmatrix}$ and $P^{(2)}=\begin{bmatrix} 0 & x\\ x & 1 \end{bmatrix}$. Here, $P^{(1)}$ and $P^{(2)}$ share the eigenpair $\{0,\begin{bmatrix} 1 & 0 \end{bmatrix}^T\}$, but the corresponding B\'{e}zout matrix is $\begin{bmatrix} 1 & 0\\ 0 & -1 \end{bmatrix}$, which is nonsingular. \end{example} Fortunately, an extension of the B\'{e}zoutian to the matrix case was studied in the 1980s by Lerer, Tismenetsky, and others, see, e.g.,~\cite{Anderson_76_01, Lerer_82_01, LT2} and the references therein. It turns out that it provides exactly the generalization that we need. \begin{definition}\label{def:bezmatpoly} For $n \times n$ regular matrix polynomials $P^{(1)}(x)$ and $P^{(2)}(x)$ of grade $k$, let $M^{(1)}(x)$ and $M^{(2)}(x)$ be regular matrix polynomials selected so that $M^{(1)}(x)P^{(1)}(x)=M^{(2)}(x)P^{(2)}(x)$. Then, denoting the maximal degree of $M^{(1)}(x)$ and $M^{(2)}(x)$ by $\ell$, the associated Lerer--Tismenetsky B\'{e}zoutian function $\mathcal{B}_{M^{(2)},M^{(1)}}$ is defined by~\cite{Anderson_76_01,Lerer_82_01} \begin{equation} \label{eq:bezoutmat} \mathcal{B}_{M^{(2)},M^{(1)}}(P^{(1)},P^{(2)}) =\!\!\sum_{i,j=1}^{\ell,k}\!\! B_{ij}\phi_{\ell-i}(y)\phi_{k-j}(x)\!:=\frac{M^{(2)}(y)P^{(2)}(x)-M^{(1)}(y)P^{(1)}(x)}{x-y}. \end{equation} The $n\ell\times nk$ B\'{e}zout block matrix is defined by $B_{M^{(2)},M^{(1)}}(P^{(1)},P^{(2)}) \!= \!\left(B_{ij}\right)_{1\leq i\leq \ell,1\leq j\leq k}$. \end{definition} The Lerer--Tismenetsky B\'{e}zoutian function and the corresponding B\'{e}zout block matrix are not unique as there are many possible choices of $M^{(1)}$ and $M^{(2)}$. Indeed, the matrix $B$ does not even need to be square. In the examples below we use monomials $\phi_i(x)=x^i$. 
\begin{example} As in the first example in Example~\ref{ex:naive}, let $P^{(1)}=\begin{bmatrix} x & 0\\ 0 & x-1 \end{bmatrix}$ and $P^{(2)}=\begin{bmatrix} x-6 & -1\\ 12 & x+1 \end{bmatrix}$ and select\footnote{$M^{(1)}$ and $M^{(2)}$ are of minimal degree, as there are no $M^{(1)}$ and $M^{(2)}$ of degree $0$ or $1$ such that $M^{(1)}P^{(1)}=M^{(2)}P^{(2)}$ holds.} $M^{(1)}=\begin{bmatrix} x^2-3x+6 & x\\ 14x-12 & x^2+2x \end{bmatrix}$ and $M^{(2)}=\begin{bmatrix} x^2+3x & 2x\\ 2x & x^2 \end{bmatrix}$. It can be verified that $M^{(1)}P^{(1)} = M^{(2)}P^{(2)}$. The associated B\'{e}zout matrix is \[\begin{bmatrix} 6 & 1\\ -12 & -2\\ -6 & 0\\ -12 & 0 \end{bmatrix}\] and has a trivial kernel. Now consider again the second example in Example~\ref{ex:naive}, $P^{(1)}=\begin{bmatrix} x & 1\\ 0 & x \end{bmatrix}$, $P^{(2)}=\begin{bmatrix} 0 & x\\ x & 1 \end{bmatrix}$, and select $M^{(1)} = P^{(1)}$ and $M^{(2)} = P^{(1)}F$, where $F = \begin{bmatrix} 0 & 1\\ 1 & 0 \end{bmatrix}$. The B\'{e}zout matrix is $\begin{bmatrix} 0 & 0\\ 0 & 0 \end{bmatrix}$. Consistently with~\eqref{eq:kerbezout} below, its kernel has dimension $2$ because $P^{(1)}$ and $P^{(2)}$ only share the zero eigenvalue and the associated Jordan chain $\begin{bmatrix} 1 & 0 \end{bmatrix}^T,\begin{bmatrix} 0 & -1 \end{bmatrix}^T$. \end{example} When $P^{(1)}(x)$ and $P^{(2)}(x)$ commute, i.e., $P^{(2)}(x)P^{(1)}(x)=P^{(1)}(x)P^{(2)}(x)$, the natural choice of $M^{(1)}$ and $M^{(2)}$ is $M^{(1)}=P^{(2)}$ and $M^{(2)}=P^{(1)}$, and we write $\mathcal{B}(P^{(1)},P^{(2)}): = \mathcal{B}_{P^{(1)},P^{(2)}}(P^{(1)},P^{(2)})$. In this case the B\'{e}zout matrix $B(P^{(1)},P^{(2)})$ (also dropping subscripts) is square and of size $nk\times nk$.
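The identity $M^{(1)}P^{(1)} = M^{(2)}P^{(2)}$ claimed in the first part of the example can be verified numerically: each entry of either product is a polynomial of degree at most $3$, so agreement at four (or more) sample points proves the identity. A minimal sketch, illustration only:

```python
def matmul(A, B):
    # 2 x 2 matrix product
    return [[sum(A[i][t] * B[t][j] for t in range(2)) for j in range(2)]
            for i in range(2)]

# the polynomial matrices of the example, evaluated pointwise
def M1(x): return [[x**2 - 3*x + 6, x], [14*x - 12, x**2 + 2*x]]
def M2(x): return [[x**2 + 3*x, 2*x], [2*x, x**2]]
def P1(x): return [[x, 0.0], [0.0, x - 1]]
def P2(x): return [[x - 6, -1.0], [12.0, x + 1]]

# entries of M1*P1 and M2*P2 have degree <= 3, so five sample points
# are more than enough to certify the polynomial identity
for x in (-2.0, -1.0, 0.0, 1.0, 2.0):
    L, R = matmul(M1(x), P1(x)), matmul(M2(x), P2(x))
    for i in range(2):
        for j in range(2):
            assert abs(L[i][j] - R[i][j]) < 1e-9
```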
Here are some important properties of the Lerer--Tismenetsky B\'{e}zoutian function and the B\'{e}zout matrix: \begin{enumerate} \item The Lerer--Tismenetsky B\'{e}zoutian function is skew-symmetric with respect to its arguments: $\mathcal{B}_{M^{(2)},M^{(1)}}(P^{(1)},P^{(2)})=-\mathcal{B}_{M^{(1)},M^{(2)}}(P^{(2)},P^{(1)})$, \item $\mathcal{B}(P^{(1)},P^{(2)})$ is bilinear with respect to its polynomial arguments. That is, $\mathcal{B}(a P^{(1)}+b P^{(2)},P^{(3)}) = a \mathcal{B}(P^{(1)},P^{(3)})+b \mathcal{B}(P^{(2)},P^{(3)})$ if $P^{(1)},P^{(2)}$ both commute with $P^{(3)}$, \item The kernel of the B\'{e}zout block matrix is \begin{equation} \label{eq:kerbezout} \ker B_{M^{(2)},M^{(1)}}(P^{(1)},P^{(2)}) =\mbox{Im}\ \!\! \begin{bmatrix} X_F\phi_{k-1}(T_F)\\ \vdots\\ X_F\phi_{0}(T_F) \end{bmatrix} \,\oplus \,\mbox{Im}\ \!\! \begin{bmatrix} X_\infty \phi_{0}(T_\infty)\\ \vdots\\ X_\infty \phi_{k-1}(T_\infty) \end{bmatrix}. \end{equation} Here $(X_F,T_F)$, $(X_\infty,T_\infty)$ are the greatest common restrictions~\cite[Ch.~9]{Gohberg_09_01} of the finite and infinite Jordan pairs~\cite[Ch.~1, Ch.~7]{Gohberg_09_01} of $P^{(1)}(x)$ and $P^{(2)}(x)$. The infinite Jordan pairs are defined regarding both polynomials as grade $k$. Importantly, $ \ker B_{M^{(2)},M^{(1)}}(P^{(1)},P^{(2)})$ in~\eqref{eq:kerbezout} does not depend on the choice of $M^{(1)}$ and $M^{(2)}$. This was proved (in the monomial basis) in~\cite[Thm.~1.1]{Lerer_82_01}. Equation~\eqref{eq:kerbezout} holds for any polynomial basis: it can be obtained from that theorem via a congruence transformation involving the mapping $S^{-1} \otimes I_n$ in~\eqref{eq:mapc}. \item If for any $x$ and $y$ we have $P^{(1)}(y)P^{(2)}(x)=P^{(2)}(x)P^{(1)}(y)$, then $B(P^{(1)},P^{(2)})$ is a block symmetric matrix. Note that the hypothesis is stronger than $P^{(1)}(x)P^{(2)}(x)=P^{(2)}(x)P^{(1)}(x)$, but it is always satisfied when $P^{(2)}(x)=v(x) I$. 
\end{enumerate} The following lemma shows that, as in the scalar case, property 3 is the eigenvalue exclusion theorem in disguise. \begin{lemma}\label{lem:gcr} The greatest common restriction of the (finite and infinite) Jordan pairs of the regular matrix polynomials $P^{(1)}$ and $P^{(2)}$ is nonempty if and only if $P^{(1)}$ and $P^{(2)}$ share both an eigenvalue and the corresponding eigenvector. \end{lemma} \begin{proof} Suppose that the two matrix polynomials have only finite eigenvalues. We denote by $(X_1,J_1)$ (resp., $(X_2,J_2)$) a Jordan pair of $P^{(1)}$ (resp., $P^{(2)}$). Observe that a greatest common restriction is nonempty if and only if there exists at least one nonempty common restriction. First, assume there exist $v$ and $x_0$ such that $P^{(1)}(x_0)v=P^{(2)}(x_0)v=0$. Up to a similarity on the two Jordan pairs we have $X_1 S_1 e_1=X_2 S_2 e_1=v$, $J_1 S_1 e_1=S_1 e_1 x_0$, and $J_2 S_2 e_1=S_2 e_1 x_0$, where $S_1$ and $S_2$ are two similarity matrices. There is no loss of generality here because, if necessary, one may first apply such a similarity transformation to the Jordan pairs; see~\cite[p.~204]{Gohberg_09_01}. This shows that $(v,x_0)$ is a common restriction~\cite[p.~204, p.~235]{Gohberg_09_01} of the Jordan pairs of $P^{(1)}$ and $P^{(2)}$. Conversely, let $(X,J)$ be a common restriction with $J$ in Jordan form. We have the four equations $X_1 S_1 = X$, $X= X_2 S_2$, $J_1 S_1 = S_1 J$, and $J_2 S_2 = S_2 J$ for some full column rank matrices $S_1$ and $S_2$. Letting $v:=X e_1, x_0:=e_1^T J e_1$, it is easy to check that $(v,x_0)$ is also a common restriction, and that $X_1 S_1 e_1= v = X_2 S_2 e_1$, $J_1 S_1 e_1 = S_1 e_1 x_0$, and $J_2 S_2 e_1 = S_2 e_1 x_0$. From~\cite[eq.~1.64]{Gohberg_09_01}\footnote{Although strictly speaking~\cite[eq.~1.64]{Gohberg_09_01} is for a monic matrix polynomial, it is extendable in a straightforward way to a regular matrix polynomial (see also~\cite[Ch.~7]{Gohberg_09_01}).}, it follows that $P^{(1)}(x_0)v=P^{(2)}(x_0)v=0$.
The assumption that all the eigenvalues are finite can be easily removed (although complicating the notation appears unavoidable). In the argument above replace every Jordan pair $(X,J)$ with a \emph{decomposable pair}~\cite[pp.~188--191]{Gohberg_09_01} of the form $\left[ X_F, X_\infty \right]$ and $ J_F \oplus J_\infty$, where $(X_F,J_F)$ is a finite Jordan pair and $(X_\infty, J_\infty)$ is an infinite Jordan pair~\cite[Ch.~7]{Gohberg_09_01}. As the argument is essentially the same we omit the details. \end{proof} The importance of the connection with B\'{e}zout theory is now clear. The proof of the eigenvalue exclusion theorem in the matrix polynomial case becomes immediate. Before giving the proof of Theorem~\ref{thm:eigexclusion} for $n>1$, we state the analogue of~\eqref{eq:Lbez} for matrix polynomials. Here and below, we use the notation $\mathbb{DL}(P,v)$ to denote the unique pencil in $\mathbb{DL}(P)$ with ansatz $v$. \begin{lemma}\label{lem:Lbzefunc} $\mathbb{DL}(P,v)$ for a matrix polynomial $P({\lambda})$ with ansatz $v$ is a matrix pencil that can be written as \begin{equation} \label{eq:Lbezfunc} \mathbb{DL}(P,v) = B(P,(x-{\lambda})vI) ={\lambda} B(vI,P)+B(P,xvI), \end{equation} where $B$ is the B\'{e}zout matrix as in~\eqref{eq:bezmat}. \end{lemma} \begin{proof} As in~\eqref{eq:fgscalar} for the scalar case, we solve~\eqref{eq:F} and~\eqref{eq:G} for $F(x,y)$ and $G(x,y)$ to obtain \[ F(x,y) = \frac{v(y)P(x)-P(y)v(x)}{x-y}, \qquad G(x,y) = \frac{P(y)v(x)x-yv(y)P(x)}{x-y} . \] Let $P^{(1)}=P(x)$ and $P^{(2)}=(x-{\lambda})v(x) I_n$ in~\eqref{eq:bezoutmat}. Then, $P^{(1)}$ and $P^{(2)}$ commute for all $x$, so we take $M^{(1)}=P^{(2)}$ and $M^{(2)}=P^{(1)}$ and obtain \begin{align*} \mathcal{B}(P(x),(x-{\lambda})v(x)I_n)= &\frac{P(y)(x-{\lambda})v(x) -(y-{\lambda})v(y)P(x)}{x-y}\\ =& {\lambda} F(x,y)+G(x,y)\\ =& \sum_{i,j=1}^k B_{ij}\phi_{k-i}(y)\phi_{k-j}(x).
\end{align*} This gives the $nk\times nk$ B\'{e}zout block matrix $B(P,(x-{\lambda})vI)=(B_{ij})_{1\leq i,j\leq k}$. \end{proof} We are now ready to prove Theorem~\ref{thm:eigexclusion}. Recall that the claim is that $L({\lambda})\in \mathbb{DL}(P)$ with ansatz $v({\lambda})$ is a linearization for $P({\lambda})$ if and only if $v({\lambda}) I_n$ and $P({\lambda})$ have no shared eigenvalue. \begin{proof}[Proof of Theorem~\ref{thm:eigexclusion} for $n>1$] If $v I_n$ and $P$ share a finite eigenvalue ${\lambda}_0$ and $P({\lambda}_0)w=0$ for a nonzero $w$, then $({\lambda}_0-{\lambda})v({\lambda}_0)w=0$ for all ${\lambda}$. Hence, by~\eqref{eq:kerbezout} and Lemmas~\ref{lem:gcr} and~\ref{lem:Lbzefunc}, $L({\lambda})=B(P,(x-{\lambda})vI)$ is singular for all ${\lambda}$. An analogous argument holds for a shared infinite eigenvalue. Conversely, suppose $v({\lambda})I_n$ and $P({\lambda})$ have no common eigenvalues. If ${\lambda}_0$ is an eigenvalue of $P$ then $({\lambda}_0-{\lambda})v({\lambda}_0)I$ is nonsingular unless ${\lambda} = {\lambda}_0$. Thus, again using~\eqref{eq:kerbezout} and Lemma~\ref{lem:gcr}, if ${\lambda}$ is not an eigenvalue of $P$ then the common restriction is empty, which means $L({\lambda})$ is nonsingular. In other words, $L({\lambda})$ is regular and a linearization by Theorem~\ref{thm:linearization}. \end{proof} \section{Barnett's theorem and ``beyond $\mathbb{DL}$'' linearization space}\label{sec:beyondDL} Thus far, we have introduced a new viewpoint for the $\mathbb{DL}$ linearization space and demonstrated that this viewpoint provides deeper understanding of the $\mathbb{DL}$ space and often helps simplify proofs of known results. We now turn to new aspects and results that can be obtained from this viewpoint; namely, we define and study a new vector space of potential linearizations for matrix polynomials that includes $\mathbb{DL}$ as a subspace and extends many of its properties.
In this section we work for simplicity in the monomial basis, and we assume that the matrix polynomial $P(x) = \sum_{i=0}^k P_i x^i$ has an invertible leading coefficient $P_k$. Given a ring $R$, a \emph{left ideal} $L$ is a subset of $R$ such that $(L,+)$ is a subgroup of $(R,+)$ and $r \ell \in L$ for any $\ell \in L$ and $r \in R$~\cite[Ch.~1]{hazewinkel2004algebras}. A \emph{right ideal} is defined analogously. Given a matrix polynomial $P(x)$ over some field $\mathbb{F}$ the set $L_P=\{ Q(x) \in \mathbb{F}^{n \times n}[x] \ | \ Q(x)=A(x)P(x), \ A(x)\in \mathbb{F}^{n \times n}[x]\}$ is a left ideal of the ring $\mathbb{F}^{n \times n}[x]$. Similarly, $R_P=\{ Q(x) \in \mathbb{F}^{n \times n}[x] \ | \ Q(x)=P(x)A(x), \ A(x)\in \mathbb{F}^{n \times n}[x] \}$ is a right ideal of $\mathbb{F}^{n \times n}[x]$. A matrix polynomial of grade $k-1$ can be represented as $G(x)=\Gamma \Phi(x)$, where $\Gamma = [\Gamma_{k-1},\Gamma_{k-2},\ldots,\Gamma_0]\in \mathbb{F}^{n \times nk}$ are its coefficient matrices when expressed in the monomial basis and $\Phi(x)=\begin{bmatrix} x^{k-1} I,\ldots,x I,I \end{bmatrix}^\mathcal{B}$. Let $C_P^{(1)}$ be the first companion matrix\footnote{Some authors define the first companion matrix with minor differences in the choice of signs. Here, we make our choice for simplicity of what follows. For other polynomial bases the matrix should be replaced accordingly~\cite{Barnett_book}. } of $P(x)$: \begin{equation} \label{eq:comp1} C_P^{(1)} = \begin{bmatrix} -P_k^{-1} P_{k-1} & -P_{k}^{-1}P_{k-2} & \cdots & -P_{k}^{-1}P_1 & -P_{k}^{-1}P_0\\ I & & & & \\ & I & & & \\ & & \ddots & &\\ & & & I & 0 \end{bmatrix}. 
\end{equation} A key observation is that the action of $C_P^{(1)}$ on $\Phi$ is that of the multiplication-by-$x I$ operator in the quotient module $\mathbb{F}^{n \times n}[x]/L_P$: \[ \begin{bmatrix} x^{k-1} I\\ x^{k-2} I\\ \vdots\\ x I\\ I \end{bmatrix} xI \equiv C_P^{(1)} \begin{bmatrix} x^{k-1} I\\ x^{k-2} I\\ \vdots\\ x I\\ I \end{bmatrix} = \begin{bmatrix} x^{k-1} I\\ x^{k-2} I\\ \vdots\\ x I\\ I \end{bmatrix} xI + \begin{bmatrix} - P_k^{-1}P(x)\\ 0\\ \vdots\\ 0\\ 0 \end{bmatrix}, \ \ \ \ \ \begin{bmatrix} - P_k^{-1}P(x)\\ 0\\ \vdots\\ 0\\ 0 \end{bmatrix} \in L_P^{k}.\] Multiplying by the coefficients $\Gamma$, we can identify the map $\Gamma \mapsto \Gamma C_P^{(1)}$ with the map $G(x) \mapsto G(x)x$ in $\mathbb{F}^{n \times n}[x]/L_P$. That is, we can write $\Gamma C_P^{(1)} \Phi = x G(x) + Q(x)$ for some $Q(x) \in L_P$. More precisely, we have $Q(x)=-\Gamma_{k-1}P_k^{-1}P(x)$. Applying the previous observation to each block row of a block matrix, we see that, if we map a block matrix $X \in \mathbb{F}^{kn \times kn}$ to the corresponding bivariate matrix polynomial $\phi(X) \in \mathbb{F}^{n \times n}[x,y]$ (recall the definition of $\phi$ in~\eqref{eq:phidef}), we can identify the map $X \mapsto X C_P^{(1)}$ with the map $\phi(X) \mapsto \phi(X) x$ in the quotient space $\mathbb{F}[y]^{n \times n}[x]/L_{P(x)} = \mathbb{F}^{n \times n}[x,y]/L_{P(x)}$. The next theorem shows that, when working with the quotient space modulo $L_P$ or $R_P$, one can find a unique matrix polynomial of low grade in each equivalence class. \begin{theorem}\label{quotient} Let $P(x)=\sum_{i=0}^k P_i x^i \in \mathbb{F}^{n \times n}[x]$ be a matrix polynomial of degree $k$ such that $P_k$ is invertible, and let $V(x) \in \mathbb{F}^{n \times n}[x]$ be any matrix polynomial. 
Then, there exists a unique $Q(x)$ of grade $k-1$ such that $Q(x) \equiv V(x)$ in the quotient module $\mathbb{F}^{n \times n}[x]/L_P$, i.e., there exists a unique $A(x) \in \mathbb{F}^{n \times n}[x]$ such that $V(x)= A(x) P(x) + Q(x)$ with $Q(x)$ of grade $k-1$. Moreover, there exists a unique $S(x)$ of grade $k-1$ such that $S(x) \equiv V(x)$ in the quotient module $\mathbb{F}^{n \times n}[x]/R_P$, i.e., there exists a unique $B(x) \in \mathbb{F}^{n \times n}[x]$ such that $V(x) = P(x) B(x) + S(x)$ with $S(x)$ of grade $k-1$. \end{theorem} \begin{proof} If $\deg V(x) < k$, then take $Q(x)=V(x)$ and $A(x)=0$. If $\deg V \geq k$, then our task is to find $A(x) = \sum_{i=0}^{\deg V-k} A_ix^i$ with $A_{\deg V- k} \neq 0$ such that, for $M(x)=A(x)P(x)=\sum_{i=0}^{{\deg V}} M_i x^i$, we have $M_i = \sum_{j+\ell=i} (A_j P_{\ell})= V_i$ for $k \leq i \leq \deg V$. This is equivalent to solving the following block matrix equation: \begin{equation}\label{eq:whatisA} \begin{bmatrix} A_{\deg A} & \cdots & A_0 \end{bmatrix} \begin{bmatrix} P_k & P_{k-1} & P_{k-2} & \cdots & \\ & P_k & P_{k-1} & P_{k-2} & \cdots \\ & & \ddots & \ddots & \\ & & & P_k & P_{k-1}\\ & & & & P_k \end{bmatrix} = \begin{bmatrix} V_{\deg V} & \cdots & V_k \end{bmatrix}, \end{equation} which shows explicitly that $A(x)$ exists and is unique. This also implies that $Q(x)=V(x) - A(x) P(x)$ exists and is unique. An analogous argument proves the existence and uniqueness of $B(x)$ and $S(x)$ such that $V(x) = P(x) B(x) + S(x)$. \end{proof} Thanks to the connection between $\mathbb{DL}$ and the B\'{e}zoutian, we find that~\cite[Theorem 4.1]{Higham_06_01} is a generalization of Barnett's theorem to the matrix case. The proof that we give below is a generalization of that found in~\cite{Heinig_1984} for the scalar case. It is another example where the algebraic interpretation and the connection with B\'{e}zoutians simplify proofs (compare with the argument in~\cite{Higham_06_01}). 
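The proof of Theorem~\ref{quotient} is constructive: solving~\eqref{eq:whatisA} amounts to block polynomial long division, carried out on the left or on the right depending on the side of the quotient. As a purely illustrative numerical sketch (in Python with NumPy; the sizes, degrees, and random coefficients below are assumptions of the illustration, not part of the theory), one can compute both decompositions and verify them:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, dV = 2, 2, 5          # block size, deg P, deg V (illustrative choices)

# Coefficients in ascending order: P(x) = sum_i P[i] x^i, with P[k] invertible.
P = [rng.standard_normal((n, n)) for _ in range(k + 1)]
P[k] += 3.0 * np.eye(n)     # push the leading coefficient away from singularity
V = [rng.standard_normal((n, n)) for _ in range(dV + 1)]

def divide(V, P, side):
    """Return (A, Q) with V = A*P + Q ('left') or V = P*A + Q ('right'),
    where deg Q <= deg P - 1, mirroring the two halves of the theorem."""
    k = len(P) - 1
    Pk_inv = np.linalg.inv(P[k])
    R = [v.copy() for v in V]
    A = [np.zeros_like(V[0]) for _ in range(max(len(V) - k, 0))]
    for d in range(len(V) - 1, k - 1, -1):
        c = R[d] @ Pk_inv if side == 'left' else Pk_inv @ R[d]
        A[d - k] = c
        for l in range(k + 1):   # subtract the term c x^{d-k} * P(x) (or P(x) * c)
            R[d - k + l] -= (c @ P[l]) if side == 'left' else (P[l] @ c)
    return A, R[:k]

def polymul(F, G):
    """Block-coefficient convolution of F(x) G(x); the order of factors matters."""
    H = [np.zeros_like(F[0]) for _ in range(len(F) + len(G) - 1)]
    for i, Fi in enumerate(F):
        for j, Gj in enumerate(G):
            H[i + j] += Fi @ Gj
    return H

A, Q = divide(V, P, 'left')      # V = A P + Q
B, S = divide(V, P, 'right')     # V = P B + S
rec_left  = [W + (Q[i] if i < k else 0) for i, W in enumerate(polymul(A, P))]
rec_right = [W + (S[i] if i < k else 0) for i, W in enumerate(polymul(P, B))]
assert all(np.allclose(rec_left[i],  V[i]) for i in range(dV + 1))
assert all(np.allclose(rec_right[i], V[i]) for i in range(dV + 1))
```

The back-substitution starts from the leading coefficient, exactly as in the block triangular system~\eqref{eq:whatisA}; left and right division generally yield different remainders $Q$ and $S$.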
\begin{theorem}[Barnett's theorem for matrix polynomials]\label{thm:Barnett} Let $P(x)$ be a matrix polynomial of degree $k$ with nonsingular leading coefficient and $v(x)$ a scalar polynomial of grade $k-1$. We have $\mathbb{DL}(P,v) = \mathbb{DL}(P,1) v(C_P^{(1)})$, where $C_P^{(1)}$ is the first companion matrix of $P(x)$. \end{theorem} \begin{proof} It is easy to verify that the following recurrence formula holds: \[ \begin{aligned} \frac{P(y) x^j (x-\lambda) - y^j (y-\lambda) P(x)}{x-y} &= \frac{P(y) x^{j-1}(x-\lambda) -y^{j-1}(y-\lambda)P(x)}{x-y} x \\ &\qquad\qquad\qquad\qquad\qquad\qquad+ y^{j-1} (y-\lambda) P(x). \end{aligned} \] Hence, we have $\mathcal{B}(P,(x-\lambda) x^j I) \equiv \mathcal{B}(P,(x-\lambda) x^{j-1} I) x$, where the equivalence is in the quotient space $\mathbb{F}[y,\lambda]^{n \times n}[x]/L_{P(x)}$. On the other hand, as we argued above, the operator of multiplication-by-$x$ in the quotient space is represented, in the monomial basis, by right multiplication by the matrix $C_P^{(1)}$, while, again in the monomial basis, the B\'{e}zoutian $\mathcal{B}(P,(x-\lambda) x^j I)$ (resp.~$\mathcal{B}(P,(x-\lambda) x^{j-1} I)$) is represented by the pencil $\mathbb{DL}(P,x^j)$ (resp.~$\mathbb{DL}(P,x^{j-1})$). This observation suffices to prove the theorem by induction when $v(C_P^{(1)})$ is a monomial of the form $(C_P^{(1)})^j$ for $0 \leq j \leq k-1$. The case of a generic $v(C_P^{(1)})$ follows by linearity of the B\'{e}zoutian. \end{proof} An analogous interpretation as a multiplication operator holds for the second companion matrix: \begin{equation} \label{eq:comp2} C_P^{(2)} = \begin{bmatrix} - P_{k-1} P_k^{-1}& I & & & \\ -P_{k-2}P_{k}^{-1} & & I & & \\ \vdots & & &\ddots & \\ -P_1P_{k}^{-1} & & & & I\\ -P_0P_{k}^{-1} & & & & \end{bmatrix}.
\end{equation} Indeed, $C_P^{(2)}$ represents multiplication by $y$ modulo $R_P$, the right ideal generated by $P(y)$, i.e., if $X \in \mathbb{F}^{kn \times kn}$ is a block matrix we can identify the map $X \mapsto C_P^{(2)} X$ with the map $\phi(X) \mapsto y\phi(X)$ in $\mathbb{F}[x]^{n \times n}[y]/R_{P(y)} = \mathbb{F}^{n \times n}[x,y]/R_{P(y)}$. A dual version of Barnett's theorem holds for the second companion matrix. Indeed, one has $\mathbb{DL}(P,v(x)) = v(C_P^{(2)}) \mathbb{DL}(P,1)$. The proof is analogous to the one for Theorem~\ref{thm:Barnett} and is omitted. As soon as we interpret the two companion matrices in this way, we are implicitly defining a map $\psi$ from block matrices to bivariate polynomials modulo $L_{P(x)}$ and $R_{P(y)}$. More formally, let $S(x,y) \in \mathbb{F}^{n \times n}[x,y]$, and consider the equivalence class \begin{align}\label{eq:eqclass} [S(x,y)] : = \{ &T(x,y) \in \mathbb{F}^{n \times n}[x,y] : T(x,y) = S(x,y) + L(x,y)P(x) \nonumber\\ &+ P(y) R(x,y) \ \ \mathrm{for} \ \mathrm{some} \ L(x,y), R(x,y) \in \mathbb{F}^{n \times n}[x,y] \}. \end{align} Moreover, if $[S_1(x,y)]=[S_2(x,y)]$, we write $S_1(x,y) \equiv S_2(x,y)$. Then, for any block matrix $X \in \mathbb{F}^{nk \times nk}$ we define $\psi(X) = [\phi(X)]$, where $\phi$ is again the map defined in~\eqref{eq:phidef}. In this setting, $\psi(X)$ is seen as an equivalence class, and we may summarize our analysis on the two companion matrices with the following equations: \begin{equation}\label{eq:psicompanion} \psi(X C_P^{(1)}) = [\phi(X)x], \qquad \psi( C_P^{(2)} X) = [y \phi(X)], \qquad \mathrm{for} \ \mathrm{all} \ \ X \in \mathbb{F}^{kn \times kn}. 
\end{equation} Note that, by linearity, the relations in~\eqref{eq:psicompanion} imply in turn that for any polynomials $v(y)$ and $w(x)$ we have: \begin{equation}\label{eq:psicompanion2} \psi(X w(C_P^{(1)})) = [\phi(X)w(x)], \qquad \psi( v(C_P^{(2)}) X) = [v(y) \phi(X)], \qquad \mathrm{for} \ \mathrm{all} \ \ X \in \mathbb{F}^{kn \times kn}. \end{equation} However, in the equivalence class $\psi(X)$ there exists a unique bivariate polynomial having grade equal to $\deg P-1$ separately in both $x$ and $y$, as we now prove in Theorem~\ref{biquotient}. (Clearly, this unique bivariate polynomial must be precisely $\phi(X)$, as the latter has indeed grade $\deg P-1$ separately in both $x$ and $y$.) Theorem~\ref{biquotient} gives the appropriate matrix polynomial analogue of Euclidean polynomial division applied both in $x$ and $y$. \begin{theorem}\label{biquotient} Let $P(z)=\sum_{i=0}^k P_i z^i \in \mathbb{F}^{n \times n}[z]$ be a matrix polynomial with $P_k$ invertible, and let $F(x,y) = \sum_{i=0}^{k_1} \sum_{j=0}^{k_2} F_{ij} x^i y^j \in \mathbb{F}^{n \times n}[x,y]$ be a bivariate matrix polynomial. Then there is a unique decomposition $F(x,y) = Q(x,y) + A(x,y) P(x) + P(y) B(x,y) + P(y) C(x,y) P(x)$ such that \begin{enumerate} \item[(i)] $Q(x,y)$, $A(x,y)$, $B(x,y)$ and $C(x,y)$ are all bivariate matrix polynomials, \item[(ii)] $Q(x,y)$ has degree at most $k-1$ separately in $x$ and $y$, \item[(iii)] $A(x,y)$ has degree at most $k-1$ in $y$, and \item[(iv)] $B(x,y)$ has degree at most $k-1$ in $x$. \end{enumerate} Moreover, $Q(x,y)$ is determined uniquely by $P(z)$ and $F(x,y)$. \end{theorem} \begin{proof} Let us first apply Theorem~\ref{quotient} taking $\mathbb{F}(y)$ as the base field. Then there exist unique $A_1(x,y)$ and $Q_1(x,y)$ such that $F(x,y) = A_1(x,y) P(x) + Q_1(x,y)$, where $A_1(x,y)$ and $Q_1(x,y)$ are polynomials in $x$. Furthermore, $\deg_x Q_1(x,y) \leq k-1$. A priori, the entries of $A_1(x,y)$ and $Q_1(x,y)$ could be rational functions in $y$.
However, a careful analysis of~\eqref{eq:whatisA} shows that the coefficients of $A_1(x,y)=\sum_i A_{1,i}(y) x^i$ can be obtained by solving a block linear system $M w = v$, say, where $v$ depends polynomially in $y$ whereas $M$ is \emph{constant} in $y$. Hence, $A_1(x,y)$, and a fortiori $Q_1(x,y) = F(x,y) - A_1(x,y)P(x)$, are also polynomials in $y$. At this point we can apply Theorem~\ref{quotient} again to write (uniquely) $Q_1(x,y) = Q(x,y) + P(y) B(x,y)$ and $A_1(x,y)=A(x,y) + P(y) C(x,y)$, where $\deg_y Q(x,y)$ and $\deg_y A(x,y)$ are both at most $k-1$. Moreover, comparing again with~\eqref{eq:whatisA}, it is easy to check that it must also hold $\deg_x Q(x,y) \leq k-1$ and $\deg_x B(x,y) \leq k-1$. Hence, $F(x,y) = Q(x,y) + A(x,y) P(x) + P(y) B(x,y) + P(y) C(x,y) P(x)$ is the sought decomposition. \end{proof} The next example illustrates the concepts just introduced. \begin{example}\label{examplequotients} Let $P(x)=I x^2 + P_1 x+ P_0$ and consider the block matrix $X=\begin{bmatrix} A & B\\ C & D \end{bmatrix}$. We have $\phi(X) = A x y + B y + C x + D$. Let $Y=C_P^{(2)} X C_P^{(1)}$. Then, using~\eqref{eq:psicompanion}, we know that $\psi(Y) = [y \phi(X) x] =[A x^2 y^2 + B x y^2 + C x^2 y + D x y].$ In particular, observing that $I x^2 \equiv - P_1 x - P_0$ and that $I y^2 \equiv -P_1 y - P_0$, we have \begin{align*} A& x^2 y^2 + B x y^2 + C x^2 y + D x y \equiv (-P_1 y -P_0)(A x^2 + Bx) + C x^2 y + D x y \\ &\equiv (-P_1 A y - P_0 A + C y)(-P_1 x - P_0) + (D - P_1 B) x y - P_0 B x \\ &= (P_1 A P_1 + D - P_1 B - C P_1) x y + (P_1 A P_0 - C P_0) y + (P_0 A P_1 - P_0 B) x + P_0 A P_0\\ & = \phi(Y) , \end{align*} as by Theorem~\ref{biquotient} in the equivalence class $\psi(Y)$ there exists a unique element of grade $\deg P-1$ separately in $x$ and $y$, and by the definition of the mapping $\phi$ this unique element must be equal to $\phi(Y)$. Equivalently, we could have taken quotients directly on the bases. 
The argument is that $\begin{bmatrix} y^2 I & y I\end{bmatrix}X\begin{bmatrix} x^2 I\\ x I \end{bmatrix} \equiv \begin{bmatrix} -P_1 y -P_0 & y I\end{bmatrix}X\begin{bmatrix} -P_1 x -P_0\\ x I \end{bmatrix} = \psi(Y)$, and leads to the same result. A third way of computing $Y = C_P^{(2)} X C_P^{(1)}$ is to formally apply the linear algebraic definition of matrix multiplication, and then apply the mapping $\phi$ as in~\eqref{eq:phidef} (forgetting about quotient spaces). One remarkable consequence of Theorem~\ref{biquotient} is that these three approaches are all equivalent. Note that the same remarks, using~\eqref{eq:psicompanion2}, apply to any block matrix of the form $\psi(v(C_P^{(2)}) X w(C_P^{(1)}))$, for any pair of polynomials $v(y)$ and $w(x)$. For this example, we have taken a monic $P(x)$ for simplicity. If its leading coefficient $P_k$ is not the identity matrix, but still is nonsingular, then the explicit formulas become more complicated and involve $P_k^{-1}$. \end{example} \subsection{Beyond $\mathbf{\mathbb{DL}}$ space} The key message in Theorem~\ref{thm:Barnett} is that one can start with the pencil in $\mathbb{DL}$ associated with ansatz polynomial $v=1$ and repeatedly multiply the first companion matrix $C_P^{(1)}$ on the right, to obtain all the pencils in the ``canonical basis'' of $\mathbb{DL}$~\cite{Mackey_05_01}. In the scalar case ($n=1$) there is a bijection between pencils in $\mathbb{DL}$ and polynomials in $C_P^{(1)}$. However, the situation is quite different when $n>1$, as the vector space of polynomials in $C_P^{(1)}$ can have dimension up to $kn$, depending on the Jordan structure of $P(x)$. \begin{remark}For some matrix polynomials $P(x)$, the dimension of the vector space of polynomials in $C_P^{(1)}$ can be much lower than $nk$, although generically this upper bound is achieved. 
An extreme example is $P(x)=p(x)I$ for some scalar $p(x)$, as in this case the dimension achieves the lowest possible bound, which is $k$.\end{remark} \subsubsection{Definition of $\mathbb{BDL}(P,v)$} In light of the above discussion, it makes sense to investigate the pencils of the form $v(C_P^{(2)})\mathbb{DL}(P,1)=\mathbb{DL}(P,1) v(C_P^{(1)})$ for $\deg v > k-1$, because for a generic $P$ they do not belong to $\mathbb{DL}$. We refer to the space of such pencils as the ``beyond $\mathbb{DL}$'' space of potential linearizations and write \begin{equation} \label{eq:BDLdef} \mathbb{DL}(P,1) v(C_P^{(1)})=:\mathbb{BDL}(P,v). \end{equation} Note that $\mathbb{DL}$ is now seen as a subspace of $\mathbb{BDL}$: if $\deg v \leq k-1$, then $\mathbb{BDL}(P,v)=\mathbb{DL}(P,v)$. An important fact is that, even if the degree of the polynomial $v(x)$ is larger than $k-1$, it still holds that $\mathbb{BDL}(P,v)=v(C_P^{(2)})\mathbb{DL}(P,1)=\mathbb{DL}(P,1) v(C_P^{(1)})$. When $\deg v \leq k-1$, i.e., for pencils in $\mathbb{DL}$, this is a consequence of the two equivalent versions of Barnett's theorem. We now prove this more generally. \begin{theorem} Let $P(x)$ be a matrix polynomial of degree $k$ with nonsingular leading coefficient. For any polynomial $v(x)$ we have \[ v(C_P^{(2)})\mathbb{DL}(P,1)=\mathbb{DL}(P,1) v(C_P^{(1)}), \] where $C_P^{(1)},C_P^{(2)}$ are the companion matrices as in~\eqref{eq:comp1},~\eqref{eq:comp2} and $\mathbb{DL}(P,1)$ is the pencil as in~\eqref{eq:Lbezfunc}. \end{theorem} \begin{proof} Since both $C_P^{(2)} - {\lambda} I$ and $C_P^{(1)} - {\lambda} I$ are strong linearizations of $P({\lambda})$, they have the same minimal polynomial $m({\lambda})$. Let $\gamma = \deg m({\lambda})$. By linearity, it suffices to check the statement for $v(x)=x^j$, $j=0,\dots,\gamma-1$. We give an argument by induction. Note first that the base case, i.e., $v(x)=x^0=1$, is a trivial identity. 
From the recurrence relation displayed in the proof of Barnett's theorem, we have that $\psi(\mathbb{DL}(P,1) (C_P^{(1)})^{j-1}) \equiv \mathcal{B}(P,x^{j-1} I)$. By the inductive hypothesis we also have $\psi(\mathbb{DL}(P,1) (C_P^{(1)})^{j-1}) \equiv \psi((C_P^{(2)})^{j-1}\mathbb{DL}(P,1)) \equiv \mathcal{B}(P,y^{j-1} I)$. Now, let $\Delta(x,y)=\phi(\mathbb{DL}(P,1) (C_P^{(1)})^{j} - (C_P^{(2)})^j \mathbb{DL}(P,1))$. By the definitions of $\psi$ and $\phi$, for any block matrix $X$ we have $[\phi(X)] = \psi(X)$, where the notation $[\cdot]$ denotes an equivalence class modulo $R_{P(y)}$ and modulo $L_{P(x)}$ as in~\eqref{eq:eqclass}. More explicitly, for any bivariate matrix polynomial $S(x,y)$ in the equivalence class $\psi(X)$ there exist matrix polynomials $L(x,y)$, $R(x,y)$ and $C(x,y)$ such that \[\phi(X) = S(x,y) + L(x,y) P(x) + P(y) R(x,y) + P(y) C(x,y) P(x).\] Therefore, it must hold that \[\Delta(x,y) = (x-y)\mathcal{B}(P,x^{j-1} I) + L(x,y)P(x) + P(y) R(x,y) + P(y) C(x,y) P(x)\] for some $L(x,y)$, $R(x,y)$, and $C(x,y)$. But \[(x-y)\mathcal{B}(P,x^{j-1} I) = P(y) x^{j-1} - y^{j-1}P(x),\] and hence, $\Delta(x,y) \equiv 0 + L_1(x,y)P(x) + P(y)R_1(x,y) + P(y) C(x,y) P(x)$. Finally, observe that Theorem~\ref{biquotient} guarantees the existence and uniqueness of a matrix polynomial of grade $\deg P-1$ separately in $x$ and $y$ in the equivalence class $\psi(\phi^{-1}(\Delta(x,y)))$, and note that the latter must be equal to $\phi(\phi^{-1}(\Delta(x,y)))=\Delta(x,y)$. On the other hand, $0$ has grade $\deg P -1$ separately in $x$ and $y$, and hence, $0=\Delta(x,y)$. \end{proof} \subsubsection{Properties of $\mathbb{BDL}(P,v)$} We now investigate some properties of the $\mathbb{BDL}(P,v)$ pencils defined in~\eqref{eq:BDLdef}. Clearly, an eigenvalue exclusion theorem continues to hold. Indeed, by assumption $\mathbb{DL}(P,1)$ is a linearization, because we suppose $P(x)$ has no eigenvalues at infinity.
Thus, $\mathbb{BDL}(P,v)$ will be a linearization as long as $v(C_P^{(1)})$ is nonsingular, which happens precisely when $P(x)$ and $v(x) I$ do not share an eigenvalue. Nonetheless, it is less clear what properties, if any, pencils in $\mathbb{BDL}$ will inherit from pencils in $\mathbb{DL}$. Besides the theoretical interest of deriving its properties, $\mathbb{BDL}$ finds an application in the theory of the sign characteristics of structured matrix polynomials~\cite{signpaper}. To investigate this matter, we will apply Theorem~\ref{quotient} taking $V(x)=v(x) I$. To analyze the implications of Theorem~\ref{quotient} and Theorem~\ref{biquotient}, it is worth summarizing the theory that we have built so far with a commuting diagram. Let $\mathbb{BDL}(P,v) = \lambda X + Y$ and $\mathbb{DL}(P,1) = \lambda \tilde{X} + \tilde{Y}$. Below, $F(x,y)$ (resp.~$\tilde{F}(x,y)$) denotes the continuous analogue of $X$ (resp.~$\tilde{X}$). $$ \begin{tikzpicture}[scale=2] \node (A) at (0,2) {$\tilde{X}$}; \node (B) at (3,2) {$\tilde{F}(x,y)$}; \node (C) at (2,1) {$\tilde{F}(x,y)v(x)$}; \node (D) at (0,0) {$X$}; \node (E) at (3,0) {$F(x,y)$}; \node (F) at (4,1) {$v(y)\tilde{F}(x,y)$}; \path[->,font=\scriptsize,] (A) edge[bend left] node[right]{$A \mapsto A v(C_P^{(1)})$} (D) (A) edge[bend right] node[left]{$A \mapsto v(C_P^{(2)}) A$} (D) (C) edge node[left]{quotient modulo $L_P$} (E) (A) edge node[above]{$\phi$} (B) (D) edge node[above]{$\phi$} (E) (B) edge node[left]{$H(x,y) \mapsto v(x)H(x,y)$} (C) (B) edge node[right]{$H(x,y) \mapsto H(x,y)v(y)$} (F) (F) edge node[right]{quotient modulo $R_P$} (E); \end{tikzpicture} $$ An analogous diagram can be drawn for $Y$, $\tilde{Y}$, $G(x,y)$, and $\tilde{G}(x,y)$. The diagram above illustrates that we may work in the bivariate polynomial framework (right side of the diagram), which is often more convenient for algebraic manipulations than the matrix framework (left side).
In particular, using Theorem~\ref{quotient}, Theorem~\ref{biquotient} and~\eqref{eq:DLrelations}, we obtain the following relations: \begin{equation}\label{eq:shiftedsumbeyond} v(y)P(x) \equiv S(y) P(x)\!=\! F(x,y) x + G(x,y), \ \ yF(x,y) + G(x,y) \! =\! P(y)Q(x)\equiv P(y) v(x) \end{equation} In~\eqref{eq:shiftedsumbeyond}, $Q(x)$ and $S(y)$ are, respectively, the unique univariate matrix polynomials of grade $\deg P-1$ in $x$ (resp.~$y$) satisfying $v(x) I = Q(x) + A(x) P(x)$ (resp.~$v(y) I = S(y) + P(y) B(y)$ ) for some matrix polynomial $A(x)$ (resp.~$B(y)$). The existence and the uniqueness of $Q(x)$ and $S(y)$ follow from Theorem~\ref{quotient}. To see how~\eqref{eq:shiftedsumbeyond} can be derived, take for example the second equation, as the argument is similar for the first one. By Lemma~\ref{lem:L1L2}, we have that $y \tilde{F}(x,y)v(x) + \tilde{G}(x,y) v(x) = P(y) v(x)$. Applying Theorem~\ref{quotient}, there is a unique matrix polynomial $Q(x)$ having grade $\deg P-1$ and such that $v(x) I = A(x) P(x) + Q(x)$. Hence, $y \tilde{F}(x,y)v(x) + \tilde{G}(x,y) v(x) = P(y) Q(x) + P(y) A(x) P(x)$. On the other hand, as illustrated by the diagram above $F(x,y) = \tilde{F}(x,y) v(x) + A(x,y)P(x)$ and similarly $G(x,y) = \tilde{G}(x,y) v(x) + B(x,y)P(x)$ for some bivariate matrix polynomials $A(x,y)$ and $B(x,y)$. Since $y F(x,y) + G(x,y)$ has grade $\deg P-1$ in $x$, by the uniqueness of the decomposition in Theorem~\ref{biquotient} we may conclude that $y F(x,y) + G(x,y) = P(y) Q(x)$. From~\eqref{eq:shiftedsumbeyond} it appears clear that a pencil in $\mathbb{BDL}$ generally has \emph{distinct} left and right ansatz vectors, and that these ansatz vectors are now block vectors, associated with left and right ansatz matrix polynomials. 
For convenience of those readers who happen to be more familiar with the matrix viewpoint, we also display what we obtain by translating back~\eqref{eq:shiftedsumbeyond}: \begin{equation} X{\,\boxplus\hspace{-2.2mm}\raisebox{0.3mm}{$\rightarrow$} \;} Y = \begin{bmatrix} S_{k-1}\\ \vdots\\ S_0 \end{bmatrix} \left[P_k, P_{k-1},\ldots,P_0\right], \qquad X{\,\boxplus \hspace{-2.2mm}\raisebox{-1mm}{$\downarrow$} \;} Y = \begin{bmatrix} P_k\\ P_{k-1}\\ \vdots\\ P_0 \end{bmatrix} \left[Q_{k-1},\ldots,Q_0\right]. \end{equation} Note that if $\deg v \leq k-1$ then $S(x)=Q(x)=v(x) I$ and we recover the familiar shifted sum equations for $\mathbb{DL}$. The eigenvalue exclusion theorem continues to hold for $\mathbb{BDL}$ with a natural extension that replaces the ansatz vector $v$ with the matrix polynomial $Q$ (or $S$). \begin{theorem}[Eigenvalue exclusion theorem for $\mathbb{BDL}$] Let $P(x)$ be a matrix polynomial of degree $k$ with nonsingular leading coefficient, and let $v(x)$ be a scalar polynomial of arbitrary degree. Then, the pencil $\mathbb{BDL}(P,v)$ defined as in~\eqref{eq:BDLdef} is a strong linearization of $P(x)$ if and only if $P(x)$ and $Q(x)$ (or $S(x)$) do not share an eigenpair, where $Q(x)$ and $S(x)$ are the unique matrix polynomials satisfying~\eqref{eq:shiftedsumbeyond}. \end{theorem} \begin{proof} We prove the eigenvalue exclusion theorem for $P$ and $Q$, as the proof for $P$ and $S$ is analogous. We know that $\mathbb{BDL}(P,v)$ is a strong linearization if and only if we cannot find an eigenvalue $x_0$ and a nonzero vector $w$ such that $P(x_0)w=v(x_0)I w=0$. (Here, we are implicitly using the fact that if $x_0$ is an eigenvalue of $v(x) I$, then any nonzero vector is a corresponding eigenvector.) But in the notation of Theorem~\ref{quotient}, we can write uniquely $Q(x)=v(x)I-A(x)P(x)$, and hence, $Q(x_0)w=v(x_0)w-A(x_0)P(x_0)w$. Hence, $P(x)$ and $v(x)I$ share an eigenpair if and only if $P(x)$ and $Q(x)$ do.
\end{proof} We now show that pencils in $\mathbb{BDL}$ still are Lerer--Tismenetsky B\'{e}zoutians. It is convenient to first state a lemma and a corollary. \begin{lemma}\label{Toeplitz2} Let $U \in \mathbb{F}^{nk \times nk}$ be an invertible block-Toeplitz upper-triangular matrix. Then $(U^{\mathcal{B}})^{-1} = (U^{-1})^{\mathcal{B}}$. \end{lemma} \begin{proof} We claim that, more generally, if $U$ is an invertible Toeplitz upper-triangular matrix with elements in any ring with unity, and $L=U^T$, then $U^{-1}=(L^{-1})^T$. Taking $\mathbb{F}^{n \times n}$ as the base ring yields the statement. To prove the claim, recall that if $L^{-1}$ exists then $L^{-1}=L^{\#}$, where the latter notation denotes the group inverse of $L$. Explicit formulae for $L^{\#}$ appeared in~\cite[eq.~3.4]{Hartwig_77}\footnote{It should be noted that if $L^{-1}$ exists then $L_{11}$ must be invertible too, where $L_{11}$ denotes the top-left element of $L$: if the base ring is taken to be $\mathbb{F}^{n \times n}$, that is the $n \times n$ top-left block of $L$. Moreover,~\cite[Theorem 2]{Hartwig_77} implies that ~\cite[eq.~3.2]{Hartwig_77} is satisfied.}. Hence, it can be checked by direct computation that $(L^{-1})^T U = U (L^{-1})^T= I$. \end{proof} \begin{corollary}\label{Toeplitz} Let $U \in \mathbb{F}^{nk \times nk}$ be an invertible block-Toeplitz upper-triangular matrix and $\Upsilon=\begin{bmatrix} v_1 I_n\\ \vdots\\ v_k I_n\end{bmatrix}$, $v_i \in \mathbb{F}$. Then $(U^{-1} \Upsilon)^{\mathcal{B}} = \Upsilon^{\mathcal{B}} (U^{\mathcal{B}})^{-1}$. \end{corollary} \begin{proof} Since the block elements of $\Upsilon$ commute with any other matrix, it suffices to apply Lemma~\ref{Toeplitz2}. \end{proof} \begin{theorem}\label{thm:PQ=SP} If $Q(x)$, $A(x)$, $S(x)$, and $B(x)$ are defined as in Theorem~\ref{quotient} with $V(x)=v(x) I$, then $P(x) Q(x) = S(x) P(x)$ and $A(x)=B(x)$. \end{theorem} \begin{proof} Let $v(x)I-Q(x)=A(x)P(x)$ and $v(x)I-S(x)=P(x)B(x)$. 
We may assume $\deg v \geq k$, as otherwise the statement is trivially verified since $Q(x)=S(x)=v(x)I$ and $A(x)=B(x)=0$. Note first that $\deg A = \deg B =\deg v -k$ because by assumption the leading coefficient of $P(x)$ is not a zero divisor. The coefficients of $A(x)$ must satisfy~\eqref{eq:whatisA}, while block transposing~\eqref{eq:whatisA} we obtain an equation that must be satisfied by the coefficients of $B(x)$. Equating term by term and using Corollary~\ref{Toeplitz} we obtain $A(x)=B(x)$, and hence, $P(x)Q(x)-S(x)P(x)=P(x)B(x)P(x)-P(x)A(x)P(x)=0$. \end{proof} Hence, it follows that $\mathbb{BDL}(P,v)$ is a Lerer--Tismenetsky B\'{e}zoutian (compare the result with~\eqref{eq:Lbezfunc}). We now present the following immediate corollary: \begin{corollary}\label{cor:whatisBDL} It holds \[\mathbb{BDL}(P,v) =\lambda B_{S,P}(Q,P) + B_{P,xS}(P,xQ),\] where $Q(x)$ and $S(x)$ are as in Theorem~\ref{quotient} the unique matrix polynomials of grade $k-1$ satisfying $v(x) I = P(x) A(x) + S(x) = A(x) P(x) + Q(x)$ for some matrix polynomial $A(x)$. \end{corollary} \begin{proof} Observe first that by Definition~\ref{def:bezmatpoly} $B_{S,P}(Q,P)$ and $B_{P,xS}(P,xQ)$ are well defined since Theorem~\ref{thm:PQ=SP} implies that $P(x)Q(x)=S(x)P(x)$ and $xS(x)P(x)=P(x)xQ(x)$. The proof of the corollary is then a straightforward application of~\eqref{eq:shiftedsumbeyond}. For example, for the leading term we have from~\eqref{eq:shiftedsumbeyond} that $F(x,y) \!=\! \frac{S(y)P(x) - P(y)Q(x)}{x-y} \!=\! \mathcal{B}_{S,P}(Q,P)$. Translating back from bivariate polynomial to block matrices, we find that $\phi^{-1}(F(x,y))=B_{S,P}(Q,P)$. The proof for the trailing term of the pencil is analogous and we omit the details. \end{proof} Once again, if $\deg v \leq k-1$ then we recover $\mathbb{DL}(P,v)$ because $S(x)=Q(x)=v(x)I$. More generally, we have $S(x)-Q(x)=[A(x),P(x)]:=A(x)P(x)-P(x)A(x)$.
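The identity $v(C_P^{(2)})\,\mathbb{DL}(P,1)=\mathbb{DL}(P,1)\,v(C_P^{(1)})$ proved above is easy to test numerically. The sketch below (Python with NumPy; the random coefficients, the chosen sizes, and the explicit monomial-basis form of $\mathbb{DL}(P,1)$ as a block Hankel arrangement of the coefficients of $(P(x)-P(y))/(x-y)$ and $(xP(y)-yP(x))/(x-y)$ are assumptions of the illustration) also sanity-checks the construction of $\mathbb{DL}(P,1)$ through its linearization property:

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 2, 3                        # block size and degree (illustrative choices)
P = [rng.standard_normal((n, n)) for _ in range(k + 1)]
P[k] += 3.0 * np.eye(n)            # make the leading coefficient safely invertible
Pk_inv = np.linalg.inv(P[k])
I = np.eye(n)

# First and second companion matrices, as in the displayed formulas above.
C1 = np.zeros((n * k, n * k))
C2 = np.zeros((n * k, n * k))
for j in range(k):
    C1[:n, j*n:(j+1)*n] = -Pk_inv @ P[k-1-j]
    C2[j*n:(j+1)*n, :n] = -P[k-1-j] @ Pk_inv
for i in range(1, k):
    C1[i*n:(i+1)*n, (i-1)*n:i*n] = I     # subdiagonal identities
    C2[(i-1)*n:i*n, i*n:(i+1)*n] = I     # superdiagonal identities

# DL(P,1) = lambda*X + Y: block (r,c) holds the coefficient of x^{k-1-c} y^{k-1-r}
# of (P(x)-P(y))/(x-y) for X and of (x P(y) - y P(x))/(x-y) for Y.
X = np.zeros((n * k, n * k))
Y = np.zeros((n * k, n * k))
for r in range(k):
    for c in range(k):
        if r + c >= k - 1:
            X[r*n:(r+1)*n, c*n:(c+1)*n] = P[2*k - 1 - r - c]
        if r == k - 1 and c == k - 1:
            Y[r*n:(r+1)*n, c*n:(c+1)*n] = P[0]
        elif r < k - 1 and c < k - 1 and r + c >= k - 2:
            Y[r*n:(r+1)*n, c*n:(c+1)*n] = -P[2*k - 2 - r - c]

# Sanity check: (lam0*X + Y) maps [lam0^{k-1} I; ...; I] to zero except for a copy
# of P(lam0) in the last block row.
lam0 = 0.7
Lam = np.vstack([lam0**(k-1-i) * I for i in range(k)])
Pval = sum(P[i] * lam0**i for i in range(k + 1))
res = (lam0 * X + Y) @ Lam
assert np.allclose(res[:-n, :], 0) and np.allclose(res[-n:, :], Pval)

def polyval_mat(v, M):
    """Horner evaluation of a scalar polynomial (ascending coeffs) at a matrix."""
    R = np.zeros_like(M)
    for c in reversed(v):
        R = R @ M + c * np.eye(M.shape[0])
    return R

v = [1.0, -2.0, 0.5, 1.0, 3.0]     # deg v = 4 > k-1: a genuinely "beyond DL" ansatz
vC1, vC2 = polyval_mat(v, C1), polyval_mat(v, C2)
assert np.allclose(vC2 @ X, X @ vC1)   # leading coefficient of the pencil
assert np.allclose(vC2 @ Y, Y @ vC1)   # trailing coefficient of the pencil
```

With $\deg v>k-1$ the products $v(C_P^{(2)})X$ and $v(C_P^{(2)})Y$ are the coefficients of a pencil in $\mathbb{BDL}\setminus\mathbb{DL}$ for generic $P$, yet the two-sided representation still holds.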
For the rest of this section, we assume that the underlying field $\mathbb{F}$ is a metric space; for simplicity, we focus on the case $\mathbb{F}=\mathbb{C}$. As mentioned in Section~\ref{sec:genpolybases}, one property of a pencil in $\mathbb{DL}$ is block symmetry. It turns out that this property does not hold for pencils in $\mathbb{BDL}$. Nonetheless, an even deeper algebraic property is preserved. Since each matrix coefficient in a pencil in $\mathbb{DL}$ is a B\'{e}zout matrix, the inverses of those matrices are block Hankel -- note that unless $n=1$, the inverse of a block Hankel matrix need not be block symmetric. The general result is: a matrix is the inverse of a block Hankel matrix if and only if it is a B\'{e}zout matrix~\cite[Corollary 3.4]{LT2}. However, for completeness, we give a simple proof for the special case of our interest. \begin{theorem} Let $\lambda X + Y$ be a pencil either in $\mathbb{DL}$ or in $\mathbb{BDL}$ associated with a matrix polynomial $P(x) \in \mathbb{C}[x]^{n \times n}$ with an invertible leading coefficient. Then, $X^{-1}$ and $Y^{-1}$ are both block Hankel matrices if the inverses exist. \end{theorem} \begin{proof} Note first that, by Lemma~\ref{lem:Lbzefunc}, it suffices to show that $B(P,v I)^{-1}$ is block Hankel for all polynomials $v$ such that the inverse exists. Assume first $P(0)$ is invertible, implying that $C_P^{(1)}$ is invertible as well. We have that $H_0=(B(P,I))^{-1}$ is block Hankel, as can be easily shown by induction on $k$~\cite[Sec.~2.1]{Gohberg_09_01}. By Barnett's theorem, $(C_P^{(2)})^j B(P,I) = B(P,I) (C_P^{(1)})^j$. Then $H_j:=(C_P^{(1)})^{-j} H_0 = H_0 (C_P^{(2)})^{-j}$. Taking into account the structure of $(C_P^{(1)})^{-1}$ and $(C_P^{(2)})^{-1}$, we see by induction that $H_j$ is block Hankel. For a general $v(x)$ such that $v(x) I$ does not share eigenvalues with $P(x)$, we have that $(B(P,v I))^{-1}=v(C_P^{(1)})^{-1} H_0$.
Since $v(C_P^{(1)})^{-1}$ is a polynomial in $(C_P^{(1)})^{-1}$, this is a linear combination of the $H_j$, hence is block Hankel. If $P(0)$ is singular, consider any sequence $(P_n)_{n \in \mathbb{N}}$ with $P_n(x) = P(x) + E_n$ such that $\|E_n\| \rightarrow 0$ as $n\rightarrow\infty$ and $P_n(0)=P(0)+E_n$ is invertible for all $n$ (such a sequence exists because singular matrices are nowhere dense). Since the B\'{e}zout matrix is linear in its arguments, $B(P_n,v I) \rightarrow B(P,v I)$. In particular, $B(P_n,v I)$ is eventually invertible if and only if no root of $v(x)$ is an eigenvalue of $P(x)$. The inverse is continuous as a matrix function, and thus $B(P,v I)^{-1} = \lim_{n \rightarrow \infty} B(P_n,v I)^{-1}$. We conclude by observing that the limit of a sequence of block Hankel matrices is block Hankel. \end{proof} Note that the theorem above implies that if $\lambda_0$ is not an eigenvalue of $P$ then the evaluation of a linearization in $\mathbb{DL}$ or $\mathbb{BDL}$ at $\lambda=\lambda_0$ is the inverse of a block Hankel matrix. \subsubsection{$\mathbb{BDL}(P,v)$ and structured matrix polynomials} We now turn to exploring the connections between $\mathbb{BDL}(P,v)$ and structured matrix polynomials. Recall that a Hermitian matrix polynomial is a polynomial whose coefficients are all Hermitian matrices. If $P(x)$ is Hermitian we write $P^*(x)=P(x)$. It is often argued that block-symmetry is important because, if $P(x)$ was Hermitian in the first place and $v(x)$ has real coefficients, then $\mathbb{DL}(P,v)$ is also Hermitian. Although $\mathbb{BDL}(P,v)$ is not block-symmetric, it still is Hermitian when $P(x)$ is Hermitian. \begin{theorem}\label{hermitian} Let $P(x) \in \mathbb{C}^{n \times n}[x]$ be a Hermitian matrix polynomial with invertible leading coefficient and $v(x) \in \mathbb{R}[x]$ a scalar polynomial with real coefficients. Then, $\mathbb{BDL}(P,v)$ is a Hermitian pencil.
\end{theorem} \begin{proof} Recalling the explicit form of $\mathbb{BDL}(P,v) = \lambda X + Y$ from Corollary~\ref{cor:whatisBDL}, we have $X= B_{S,P}(Q,P)$ and $Y=B_{P,xS}(P,xQ)$. Here, $Q(x)$ (resp.~$S(x)$) are as in Theorem~\ref{quotient} the unique matrix polynomials of grade $k-1$ such that $v(x) I = A(x) P(x) + Q(x) = P(x) A(x) + S(x)$, where $A(x)$ is also unique and we are using Theorem~\ref{thm:PQ=SP} as well. Then $-X$ is associated with the Lerer--Tismenetsky B\'{e}zoutian function $F(x,y)=\frac{P(y)Q(x) - S(y) P(x)}{x-y}$. By definition, $S(x) = v(x) I - P(x) A(x)$. Taking the transpose conjugate of this equation, and noting that by assumption $P(x)=P^*(x)$, $v(x)=v^*(x)$, we obtain $S^*(x)=v(x) I - A^*(x) P(x)$. But, by Theorem~\ref{quotient}, there is a \emph{unique} matrix polynomial $Q(x)$ of grade $k-1$ such that $v(x) I = Q(x) + A(x) P(x)$ for some $A(x)$. Thus, since $\deg S^*(x) = \deg S(x) \leq k-1$, we conclude that $S^*(x)=Q(x)$. (Although not strictly needed in this proof, the uniqueness of $A(x)$ also implies $A^*(x) = A(x)$.) Hence, $F(x,y)=\frac{P(y)Q(x)-Q^*(y)P(x)}{x-y} = \frac{Q^*(y)P(x) - P(y) Q(x)}{y-x} = F^*(y,x)$, proving that $X$ is Hermitian because the formula holds for any $x,y$. Analogously $G(x,y)=\frac{P(y)xQ(x)-yQ^*(y)P(x)}{x-y}=\frac{yQ^*(y)P(x)-P(y)xQ(x)}{y-x}=G^*(y,x)$, allowing us to deduce that $Y$ is also Hermitian. \end{proof} The theory of functions of a matrix~\cite{high:FM} allows one to extend the definition of $\mathbb{BDL}$ to a general function $f$, rather than just a polynomial $v$, as long as $f$ is defined on the spectrum of $C_P^{(1)}$ (for a more formal definition see~\cite{high:FM}). One just puts $\mathbb{BDL}(P,f):=\mathbb{BDL}(P,v)$ where $v(x)$ is the interpolating polynomial such that $v(C_P^{(1)})=f(C_P^{(1)})$.
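Theorem~\ref{hermitian} can be spot-checked numerically: for random Hermitian data and a real ansatz polynomial of degree larger than $k-1$, both coefficients of $\mathbb{BDL}(P,v)=\lambda\, v(C_P^{(2)})\tilde{X} + v(C_P^{(2)})\tilde{Y}$ must come out Hermitian even though they are not block symmetric. (Python/NumPy sketch; the sizes, the random Hermitian coefficients, and the monomial-basis form of $\mathbb{DL}(P,1)=\lambda\tilde{X}+\tilde{Y}$ used below are assumptions of the illustration.)

```python
import numpy as np

rng = np.random.default_rng(2)
n, k = 2, 3                        # illustrative sizes

def herm(n):
    """A random Hermitian matrix."""
    H = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return (H + H.conj().T) / 2

# Hermitian coefficients; taking P_k = I keeps the leading coefficient invertible.
P = [herm(n) for _ in range(k)] + [np.eye(n, dtype=complex)]
Pk_inv = np.linalg.inv(P[k])
I = np.eye(n, dtype=complex)

C2 = np.zeros((n * k, n * k), dtype=complex)   # second companion matrix
for j in range(k):
    C2[j*n:(j+1)*n, :n] = -P[k-1-j] @ Pk_inv
for i in range(1, k):
    C2[(i-1)*n:i*n, i*n:(i+1)*n] = I

# DL(P,1) = lambda*Xt + Yt in the monomial basis (block Hankel arrangement of the
# coefficients of (P(x)-P(y))/(x-y) and (x P(y) - y P(x))/(x-y)).
Xt = np.zeros((n * k, n * k), dtype=complex)
Yt = np.zeros((n * k, n * k), dtype=complex)
for r in range(k):
    for c in range(k):
        if r + c >= k - 1:
            Xt[r*n:(r+1)*n, c*n:(c+1)*n] = P[2*k - 1 - r - c]
        if r == k - 1 and c == k - 1:
            Yt[r*n:(r+1)*n, c*n:(c+1)*n] = P[0]
        elif r < k - 1 and c < k - 1 and r + c >= k - 2:
            Yt[r*n:(r+1)*n, c*n:(c+1)*n] = -P[2*k - 2 - r - c]

def polyval_mat(v, M):
    """Horner evaluation of a scalar polynomial (ascending coeffs) at a matrix."""
    R = np.zeros_like(M)
    for c in reversed(v):
        R = R @ M + c * np.eye(M.shape[0])
    return R

v = [0.5, -1.0, 2.0, 0.0, 1.0]     # real coefficients, deg v = 4 > k-1
Xb = polyval_mat(v, C2) @ Xt       # BDL(P,v) = lambda*Xb + Yb
Yb = polyval_mat(v, C2) @ Yt
assert np.allclose(Xb, Xb.conj().T)
assert np.allclose(Yb, Yb.conj().T)
```

The mechanism behind the check is the one used in the proof: for Hermitian $P$ one has $(C_P^{(2)})^*=C_P^{(1)}$, so real coefficients of $v$ give $(v(C_P^{(2)})\tilde{X})^* = \tilde{X}\, v(C_P^{(1)}) = v(C_P^{(2)})\tilde{X}$.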
\begin{corollary} Let $P(x) \in \mathbb{C}^{n \times n}[x]$ be a Hermitian matrix polynomial with invertible leading coefficient and $f: \mathbb{C} \rightarrow \mathbb{C}$ a function defined on the spectrum of $C_P^{(1)}$ and such that $f(x^*)=(f(x))^*$. Then $\mathbb{BDL}(P,f)$ is a Hermitian pencil. \end{corollary} \begin{proof} It suffices to observe that the properties of $f$ and $P$ imply that $f(C_P^{(1)})=v(C_P^{(1)})$ with $v\in \mathbb{R}[x]$~\cite[Def.~1.4]{high:FM}. \end{proof} In the monomial basis, other structures of interest have been defined, such as $*$-even, $*$-odd, $T$-even, $T$-odd (all these definitions can be extended to any alternating basis, such as Chebyshev) or $*$-palindromic, $*$-antipalindromic, $T$-palindromic, $T$-antipalindromic. For $\mathbb{DL}$, analogues of Theorem~\ref{hermitian} can be stated in all these cases~\cite{goodvibrations}. These properties extend to $\mathbb{BDL}$. We state and prove them for the $*$-even and the $*$-palindromic case: \begin{theorem}\label{structures} Assume that $P(x)=P^*(-x)$ is $*$-even and has an invertible leading coefficient, and that $f(x)=f^*(-x)$, and let $\Sigma$ be as in~\eqref{eq:sigR}. Then $\Sigma \mathbb{BDL}(P,f)$ is a $*$-even pencil. Furthermore, if $P(x)=x^{k} P^*(x^{-1})$ is $*$-palindromic and $f(x)=x^{k-1}f^*(x^{-1})$, and defining the ``flip matrix'' $R$ as in \eqref{eq:sigR}, then $R \mathbb{BDL}(P,f)$ is a $*$-palindromic pencil. \end{theorem} \begin{proof} The proof goes along the same lines as that of Theorem~\ref{hermitian}: we first use the functional viewpoint and the B\'{e}zoutian interpretation of $\mathbb{BDL}(P,v)=\lambda X + Y$ (see in particular Corollary~\ref{cor:whatisBDL}) to map $\phi(X)=F(x,y) = \frac{-P(y)Q(x)+S(y)P(x)}{x-y}$ and $\phi(Y)=G(x,y)=\frac{P(y)xQ(x)-yS(y)P(x)}{x-y}$.
Once again, here $Q(x)$ and $S(x)$ are as in Theorem~\ref{quotient} the unique matrix polynomials of grade $k-1$ such that $v(x) I = A(x) P(x) + Q(x) = P(x) A(x) + S(x)$, where $A(x)$ is also unique and we are using Theorem~\ref{thm:PQ=SP} as well. Assume first that $P$ is $*$-even: we claim that $\lambda X^* \Sigma + Y^* \Sigma = -\lambda \Sigma X + \Sigma Y$. Indeed, note that $v(x)$, the interpolating polynomial of $f(x)$, must also satisfy $v^*(x)=v(-x)$. Taking the transpose conjugate of the equation $S(x)=v(x) I - P(x) A(x)$, and using Theorem~\ref{quotient} as in the proof of Theorem~\ref{hermitian}, we obtain $Q^*(x)=S(-x)$. This, together with Table~\ref{tab:op2}, implies that $\phi(-\Sigma X) = \frac{P(-y)Q(x)-Q^*(y)P(x)}{x+y} = -\frac{Q^*(y)P^*(-x)-P^*(y)Q(x)}{x+y} = \phi( X^* \Sigma)$. Similarly, $\phi(\Sigma Y) = \frac{P(-y)xQ(x)+yQ^*(y)P(x)}{x+y} = \frac{yQ^*(y)P^*(-x)+P^*(y)xQ(x)}{y+x} = \phi( Y^* \Sigma)$. The case of a $*$-palindromic $P$ is dealt with analogously and we omit the details. \end{proof} Similar statements hold for other structures. We summarize them in the following table, omitting the proofs as they are completely analogous to those of Theorems~\ref{hermitian} and~\ref{structures}.
\begin{table}[htbp] \centering \caption{Structures of $P$, $\deg P=k$, and potential linearizations that are structure-preserving} \label{tab:stru} \begin{tabular}{c|c|c} Structure of $P$ & Requirement on $f$ & Pencil\\ \hline Hermitian: $P(x)=P^*(x)$ & $f(x^*)=f^*(x)$ & $\mathbb{BDL}(P,f)$\\ skew-Hermitian: $P(x)=-P^*(x)$ & $f(x^*)=f^*(x)$ & $\mathbb{BDL}(P,f)$ \\ symmetric: $P(x)=P^T(x)$ & any $f(x)$ & $\mathbb{BDL}(P,f)$ \\ skew-symmetric: $P(x)=-P^T(x)$ & any $f(x)$ & $\mathbb{BDL}(P,f)$ \\ \hline *-even: $P(x)=P^*(-x)$ & $f(x)=f^*(-x)$ & $\Sigma \mathbb{BDL}(P,f)$ \\ *-odd: $P(x)=-P^*(-x)$ & $f(x)=f^*(-x)$ & $\Sigma \mathbb{BDL}(P,f)$ \\ T-even: $P(x)=P^T(-x)$ & $f(x)=f(-x)$ & $\Sigma \mathbb{BDL}(P,f)$ \\ T-odd: $P(x)=-P^T(-x)$ & $f(x)=f(-x)$ & $\Sigma \mathbb{BDL}(P,f)$ \\ \hline *-palindromic: $P(x)=x^kP^*(x^{-1})$ & $f(x)=x^{k-1} f^*(x^{-1})$& $R \mathbb{BDL}(P,f)$\\ *-antipalindromic: $P(x)=-x^{k}P^*(x^{-1})$ & $f(x)=x^{k-1} f^*(x^{-1})$& $R \mathbb{BDL}(P,f)$\\ T-palindromic: $P(x)=x^kP^T(x^{-1})$ & $f(x)=x^{k-1} f(x^{-1})$& $R \mathbb{BDL}(P,f)$\\ T-antipalindromic: $P(x)=-x^kP^T(x^{-1})$ & $f(x)=x^{k-1} f(x^{-1})$& $R \mathbb{BDL}(P,f)$\\ \end{tabular} \end{table} With a similar technique, one may produce pencils with a structure that is \emph{related} to that of the linearized matrix polynomial, e.g., if $P$ is $*$-odd and $f(x)=-f^*(-x)$, then $\Sigma \mathbb{BDL}(P,f)$ will be $*$-even. For lack of space we will not include a complete list of such variations on the theme in this paper. However, we note that generalizations of this kind are immediate to prove with the Lerer--Tismenetsky B\'{e}zoutian functional approach.
We conclude this section by giving the following result, which has an application in the theory of sign characteristics~\cite{signpaper}: \begin{theorem} Let $P(x)$ be $*$-palindromic of degree $k$, with nonsingular leading coefficient, and $f(x)=x^{k/2}$; if $k$ is odd, suppose furthermore that the square root is defined in such a way that $P(x)$ has no eigenvalues on the branch cut. Moreover, let $\mathbb{BDL}(P,f)=\lambda X + Y$ and let $R$ be defined as in~\eqref{eq:sigR}. Then $Z=i R X$ is a Hermitian matrix. \end{theorem} \begin{proof} We claim that the statement is true when $P(x)$ has all distinct eigenvalues. Then it must be true in general. This follows by continuity, if we consider a sequence $(P_n(x))_n$ of $*$-palindromic polynomials converging to $P(x)$ and such that $P_n(x)$ has all distinct eigenvalues, none of which lie on the branch cut. Such a sequence exists because the set of palindromic matrix polynomials with distinct eigenvalues is dense, as can be seen arguing on the characteristic polynomial seen as a polynomial function of the $n^2(k+1)$ independent real parameters. It remains to prove the claim. Since $X$ is the linear part of the pencil $\mathbb{BDL}(P,f)$, we get, by Corollary~\ref{cor:whatisBDL} and using the mapping $\phi$ defined in Section~\ref{sec:bivariate}, that $\phi(X)= \frac{P(y)Q(x)-S(y)P(x)}{x-y}$, where $v(x) I = Q(x) + A(x) P(x) = S(x) + P(x) A(x)$ are defined as in Theorem~\ref{quotient} and $v(x)$ is the interpolating polynomial of $f(x)$ on the eigenvalues of $P(x)$. By assumption $P(x)$ has $kn$ distinct eigenvalues. Denote by $(\lambda_i,w_i,u_i)$, $i=1,\ldots,nk$, an eigentriple, and consider the matrix in Vandermonde form $V$ whose $i$th column is $V_i=\Lambda(\lambda_i) \otimes w_i$ ($V$ is the matrix of eigenvectors of $C_P^{(1)}$); recall moreover that if $(\lambda_i,w_i,u_i)$ is an eigentriple then so is $(1/\lambda^*_i,u_i^*,w_i^*)$.
Observe that by definition $Q(\lambda_i) w_i = \lambda_i^{k/2} w_i$ and $u_i S(\lambda_i) = u_i \lambda_i^{k/2}$. Our task is to prove that $R X = - X^* R$; observe that this is equivalent to $V^* R X V = - V^* X^* R V$, since $V$ is invertible. Using Table~\ref{tab:op} and Table~\ref{tab:op2}, we see that $V_i^* R X V_j$ is equal to the evaluation of $w_i^* \frac{y^k P(1/y)Q(x)-y^k S(1/y)P(x)}{x y-1} w_j$ at $(x=\lambda_j,y=\lambda_i^*)$. Suppose first that $\lambda_i \lambda_j^* \neq 1$. Then, using $P(\lambda_j) w_j = 0$ and $w_i^* P(1/\lambda_i^*)=0$, we get $V_i^* R X V_j=0$. When $\lambda_i^{-1} = \lambda_j^*$, we can evaluate the fraction using L'H\^{o}pital's rule, and obtain $w_i^* \frac{-(\lambda_i^*)^k S(1/\lambda_i^*)P'(\lambda_j)}{\lambda_i^*} w_j = - w_i^* (\lambda_i^*)^{k/2-1} P'(\lambda_j) w_j$. An argument similar to the previous one shows that $V_i^* X^* R V_j=0$ when $\lambda_i \lambda_j^* \neq 1$, and $V_i^* X^* R V_j=w_i^* (\lambda_i^*)^{k/2-1} P'(\lambda_j) w_j$ when $\lambda_i \lambda_j^* = 1$. We have thus shown that $V_i^* X^* R V_j = -V_i^* R X V_j$ for all $(i,j)$, establishing the claim. \end{proof} \section{Conditioning of eigenvalues of $\mathbb{DL}(P)$}\label{sec:conditioning} In~\cite{Higham_06_02}, a conditioning analysis is carried out for the eigenvalues of the $\mathbb{DL}(P)$ pencils, which identifies situations in which the $\mathbb{DL}(P)$ linearization itself does not worsen the eigenvalue conditioning of the original matrix polynomial $P(\lambda)$ expressed in the monomial basis. Here, we use the bivariate polynomial viewpoint to analyze the conditioning, using concise arguments and allowing for $P(\lambda)$ expressed in any polynomial basis.
As shown in~\cite{tisseur2000backward}, the first-order expansion of a simple eigenvalue $\lambda_i$ of $P(\lambda)+\Delta P(\lambda)$ is \begin{equation} \label{eq:matpolyeigpert} \lambda_i=\lambda_i(P) - \frac{y_i^*\Delta P(\lambda_i)x_i}{y_i^*P'(\lambda_i)x_i}+\mathcal{O}(\|\Delta P(\lambda_i)\|^2), \end{equation} where $y_i$ and $x_i$ are the left and right eigenvectors corresponding to $\lambda_i$. This analysis motivated the conditioning results for multidimensional rootfinding~\cite{Nakatsukasa_13_01,noferini2015resultant}. When applied to a $\mathbb{DL}(P)$ pencil $L(\lambda)=\lambda X+Y$ with ansatz $v$, defining $\widehat x_i = \Lambda(\lambda_i)\otimes x_i, \widehat y_i = \Lambda^T(\lambda_i) \otimes y_i$, where $\Lambda(\lambda) = \left[\phi_{k-1}(\lambda),\ldots,\phi_0(\lambda)\right]^T$ as before and noting that $L'(\lambda) = X$,~\eqref{eq:matpolyeigpert} becomes \begin{equation} \label{eq:dlpert} \lambda_i=\lambda_i(L) - \frac{\widehat y_i^*\Delta L(\lambda_i)\widehat x_i}{\widehat y_i^*X\widehat x_i}+\mathcal{O}(\|\Delta L(\lambda_i)\|^2), \quad i=1,\ldots, nk. \end{equation} Recall from~\eqref{eq:Lbezfunc} that $X =-B(P,v) = B(v,P)$, and note in Table~\ref{tab:op} that $\widehat y_i^*X\widehat x_i$ is the evaluation of the $n\times n$ Lerer--Tismenetsky B\'{e}zoutian\ function $\mathcal{B}(v,P)$ setting both variables equal to $\lambda_i$, followed by left and right multiplication by $y_i^*$ and $x_i$. Therefore, since the Lerer--Tismenetsky B\'{e}zoutian\ function is a polynomial, hence continuous with respect to its arguments, we have \begin{align*} \widehat y_i^*X\widehat x_i&= y_i^*\left(\lim_{s,t\rightarrow \lambda_i}\frac{v(s)P(t)-P(s)v(t)}{s-t} \right)x_i \\ &= y_i^*\left(v'(\lambda_i)P(\lambda_i)-P'(\lambda_i)v(\lambda_i)\right)x_i\\ &= -y_i^*P'(\lambda_i)v(\lambda_i)x_i.
\end{align*} Here we used L'H\^{o}pital's rule for the second equality and $P(\lambda_i)x_i=0$ for the last. Hence, the expansion~\eqref{eq:dlpert} becomes \begin{equation} \label{eq:dlpert2} \lambda_i=\lambda_i(L) + \frac{1}{v(\lambda_i)}\frac{\widehat y_i^*\Delta L(\lambda_i)\widehat x_i}{y_i^*P'(\lambda_i)x_i}+\mathcal{O}(\|\Delta L(\lambda_i)\|^2). \end{equation} Thus, up to first order, a small change of $L$ to $L+\Delta L$ perturbs $\lambda_i$ by $\frac{|\widehat y_i^*\Delta L(\lambda_i)\widehat x_i|}{|v(\lambda_i)||y_i^*P'(\lambda_i)x_i|}\leq \frac{\|\widehat y_i\|_2\|\Delta L(\lambda_i)\|_2\|\widehat x_i\|_2}{|v(\lambda_i)||y_i^*P'(\lambda_i)x_i|}$, where the inequality is sharp in that equality can hold by taking $\Delta L(\lambda)=\sigma\widehat y_i\widehat x_i^*$ for any scalar $\sigma$. Similarly from~\eqref{eq:matpolyeigpert}, a small perturbation from $P$ to $P+\Delta P$ results in the eigenvalue perturbation $\frac{\|y_i\|_2\|\Delta P(\lambda_i)\|_2\|x_i\|_2}{|y_i^*P'(\lambda_i)x_i|}$, which is also a sharp bound. Combining these two bounds, we see that the ratio between the perturbation of $\lambda_i$ in the original $P(\lambda)$ and the linearization $L(\lambda)$ is \begin{equation} \label{eq:rratio} r_{\lambda_i}=\frac{1}{v(\lambda_i)}\frac{\|\widehat y_i\|_2\|\Delta L(\lambda_i)\|_2\|\widehat x_i\|_2}{\|y_i\|_2\|\Delta P(\lambda_i)\|_2\|x_i\|_2}. \end{equation} Now recall that the absolute \emph{condition number} of an eigenvalue of a matrix polynomial may be defined as \begin{equation} \label{eq:conddef} \kappa(\lambda) = \lim_{\epsilon \rightarrow 0} \sup \{|\Delta \lambda| : \left( P(\lambda + \Delta \lambda) + \Delta P (\lambda + \Delta \lambda) \right) \!\hat x = 0, \hat x \neq 0, \| \Delta P(\cdot) \| \leq \epsilon \| P(\cdot)\| \}.
\end{equation} Here, we are taking the norm for matrix polynomials to be $\|P(\cdot)\|=\max_{\lambda\in\mathcal{D}}\|P(\lambda)\|_2$, where $\mathcal{D}$ is the domain of interest that below we take to be the interval $[-1,1]$. In~\eqref{eq:conddef}, $\lambda + \Delta \lambda$ is the eigenvalue of $P+\Delta P$ closest to $\lambda$ such that $\lim_{\epsilon\rightarrow 0}\Delta\lambda=0$. Note that definition~\eqref{eq:conddef} is the \emph{absolute} condition number, in contrast to the relative condition number treated in~\cite{tisseur2000backward}, in which the supremum is taken of $|\Delta \lambda|/(\epsilon|\lambda|)$, and over $\Delta P(\cdot) = \sum_{i=0}^k\Delta P_i\phi_i(\cdot)$ such that $\|\Delta P_i\|_2\leq \epsilon \| E_i\|$ where $E_i$ are prescribed tolerances for the term with $\phi_i$. Combining this definition with the analysis above, we can see that the ratio of the condition numbers of the eigenvalue $\lambda$ for the linearization $L$ and the original matrix polynomial $P$ is \begin{equation} \label{eq:rrratio} \widehat r_{\lambda_i}=\frac{1}{v(\lambda_i)}\frac{\|\widehat y_i\|_2\|L(\cdot)\|\|\widehat x_i\|_2}{\|y_i\|_2\|P(\cdot)\|\|x_i\|_2}. \end{equation} The eigenvalue $\lambda_i$ can be computed stably from the linearization $L(\lambda)$ if $\widehat r_{\lambda_i}$ is not significantly larger than 1. Identifying conditions to guarantee $\widehat r_{\lambda_i}=\mathcal{O}(1)$ is nontrivial and depends not only on $P(\lambda)$ and the choice of the ansatz $v$, but also on the value of $\lambda_i$ and the choice of polynomial basis.
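The first-order expansion~\eqref{eq:matpolyeigpert} is easy to verify numerically. The following Python sketch (illustrative only, not from the paper; NumPy is assumed) does so for a random $2\times 2$ quadratic matrix polynomial in the monomial basis, computing eigenvalues via the companion linearization and eigenvectors via the SVD.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2
P = [rng.standard_normal((n, n)) for _ in range(3)]   # P(lam) = P[0] + lam*P[1] + lam^2*P[2]

def eigs(P):
    """Eigenvalues of the quadratic P via the companion pencil lam*A + B."""
    I, Z = np.eye(n), np.zeros((n, n))
    A = np.block([[P[2], Z], [Z, I]])
    B = np.block([[P[1], P[0]], [-I, Z]])
    return np.linalg.eigvals(-np.linalg.solve(A, B))

lam = eigs(P)
# Pick the eigenvalue best separated from the others (so it is simple).
gaps = [np.min(np.abs(np.delete(lam, i) - lam[i])) for i in range(lam.size)]
lam0 = lam[int(np.argmax(gaps))]

Pl = P[0] + lam0 * P[1] + lam0**2 * P[2]
U, s, Vh = np.linalg.svd(Pl)
x = Vh[-1].conj()          # right eigenvector: P(lam0) x = 0
y = U[:, -1]               # left eigenvector:  y^* P(lam0) = 0

t = 1e-6
dP = [t * rng.standard_normal((n, n)) for _ in range(3)]
dPl = dP[0] + lam0 * dP[1] + lam0**2 * dP[2]
Pprime = P[1] + 2 * lam0 * P[2]

predicted = lam0 - (y.conj() @ dPl @ x) / (y.conj() @ Pprime @ x)
lam_new = eigs([P[i] + dP[i] for i in range(3)])
actual = lam_new[int(np.argmin(np.abs(lam_new - lam0)))]
```

For a perturbation of size $t$, the discrepancy between \texttt{predicted} and \texttt{actual} is $\mathcal{O}(t^2)$, while the eigenvalue itself moves by $\mathcal{O}(t)$.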
For example,~\cite{Higham_06_02} considers the monomial case and shows that the coefficientwise conditioning of $\lambda_i$ does not worsen much by forming $L(\lambda)$ if $\frac{\max_i\|P_i\|_2}{\max\{\|P_0\|_2,\|P_k\|_2\}}$ is not too large, where $P(\lambda) = \sum_{i=0}^k P_i\lambda^i$, and the ansatz choice is $v=\lambda^{k-1}$ if $|\lambda_i|\geq 1$ and $v=1$ if $|\lambda_i|\leq 1$. Although it is difficult to make a general statement on when $\widehat r_{\lambda_i}$ is moderate, here we show that in the practically important case where the Chebyshev basis is used and $\lambda_i\in \mathcal{D}:=[-1,1]$, the conditioning ratio can be bounded by a modest polynomial in $n$ and $k$, with an appropriate choice of $v$, namely, $v=1$. This means that the conditioning of these eigenvalues does not worsen much by forming the linearization, and the eigenvalues can be computed in a stable manner from $L(\lambda)$. \begin{theorem} Let $L(\lambda)$ be the $\mathbb{DL}(P)$ linearization with ansatz $v(x)=1$ of a matrix polynomial $P(\lambda)$ expressed in the Chebyshev basis $\phi_j(x)=T_j(x)$. Let $\lambda_i$ be an eigenvalue of $P(\lambda)$ with right and left eigenvectors $x_i$ and $y_i$, respectively, such that $P(\lambda_i)x_i=0$, $y_i^*P(\lambda_i)=0$, and define $\widehat x_i = \Lambda(\lambda_i)\otimes x_i, \widehat y_i = \Lambda(\lambda_i)\otimes y_i$ where $\Lambda(\lambda) = \left[T_{k-1}(\lambda),\ldots,T_0(\lambda)\right]^T$. Then for any eigenvalue $\lambda_i\in[-1,1]$, the conditioning ratio $\widehat r_{\lambda_i}$ in~\eqref{eq:rrratio} is bounded by \begin{equation} \label{eq:condrate} \widehat r_{\lambda_i}\leq 16n(e-1)k^4. \end{equation} \end{theorem} \begin{proof} Since the Chebyshev polynomials $T_j(x)$ are all bounded by $1$ on $[-1,1]$, we have $\|\widehat x_i\|_2=c_i\| x_i\|_2$, $\|\widehat y_i\|_2=d_i\| y_i\|_2$ for some $c_i,d_i\in[1,\sqrt{k}]$.
Therefore, we have \begin{align} \widehat r_{\lambda_i} &\leq \frac{k}{v(\lambda_i)}\frac{\|L(\cdot)\|}{\| P(\cdot)\|}. \label{eq:rlam} \end{align} We next claim that $\| L(\cdot)\|$ can be estimated as $\| L(\cdot)\|=\mathcal{O}(\|P(\cdot)\|\|v(\cdot)\|)$. To verify this it suffices to show that writing $L(\lambda) = \lambda X+Y$ we have \begin{equation} \label{eq:xysmall} \|X\|_2\leq q_X(n,k)\|P(\cdot)\|\|v(\cdot)\| ,\quad \|Y\|_2\leq q_Y(n,k)\|P(\cdot)\|\|v(\cdot)\| \end{equation} where $q_X,q_Y$ are low-degree polynomials with modest coefficients. Let us first prove the bound for $\|X\|_2$ in~\eqref{eq:xysmall} (to gain a qualitative understanding one can consult the construction of $X,Y$ in Section~\ref{sec:construction}). Recalling~\eqref{eq:Lbezfunc}, $X$ is the B\'{e}zout block matrix $B(vI,P)$, so its $(k-i,k-j)$ block is the coefficient matrix of $T_{i}(y)T_j(x)$ in the function \[ \mathcal{B}(P,-vI)=\frac{-P(y)v(x) +v(y)P(x)}{x-y} :=H(x,y). \] Recall that $H(x,y)$ is an $n\times n$ bivariate matrix polynomial, and denote its $(s,t)$ element by $H_{st}(x,y)$. For every fixed value of $y\in[-1,1]$, by~\cite[Lem.~B.1]{NNvandooren14} we have \[ |H_{st}(x,y)|\leq(e-1)k^2\max_{x\in[-1,1]}|H_{st}(x,y)(x-y)| \leq 2(e-1)k^2 \|P(\cdot)\|\|v(\cdot)\| \] when $|x-y|\leq k^{-2}$. Clearly, \[ |H_{st}(x,y)|\leq 2k^2 \|P(\cdot)\|\|v(\cdot)\|\quad \mbox{for} \quad |x-y|\geq k^{-2}. \] Together we obtain $\max_{x\in[-1,1]}|H_{st}(x,y)|\leq 2(e-1)k^2 \|P(\cdot)\|\|v(\cdot)\|$. Since this holds for every $(s,t)$ and every fixed value of $y\in[-1,1]$ we obtain \begin{equation} \label{eq:Hbound} \max_{x\in[-1,1],y\in[-1,1]}|H_{st}(x,y)|\leq 2(e-1)k^2 \|P(\cdot)\|\|v(\cdot)\|. \end{equation} To obtain~\eqref{eq:xysmall} it remains to bound the coefficients in the representation of a degree-$k$ bivariate polynomial $H_{st}(x,y) = \sum_{i=0}^{k}\sum_{j=0}^{k} h_{k-i,k-j}^{(st)}T_i(y)T_j(x)$.
It holds \[h^{(st)}_{k-i,k-j} = \left(\frac{2}{\pi}\right)^2\int_{-1}^1 \int_{-1}^1 \frac{H_{st}(x,y)T_i(y)T_j(x)}{\sqrt{(1-x^2)(1-y^2)}}dxdy\] (when $i=0$ or $j=0$, the corresponding factor $\frac{2}{\pi}$ is replaced by $\frac{1}{\pi}$), and hence using $|T_i(x)|\leq 1$ on $[-1,1]$ we obtain \begin{align*} |h^{(st)}_{k-i,k-j}|&\leq \left(\frac{2}{\pi}\right)^2\max_{x\in[-1,1],y\in[-1,1]}|H_{st}(x,y)| \int_{-1}^1 \int_{-1}^1 \frac{1}{\sqrt{(1-x^2)(1-y^2)}}dxdy\\ &=4\max_{x\in[-1,1],y\in[-1,1]}|H_{st}(x,y)|\\ &\leq 8(e-1)k^2 \|P(\cdot)\|\|v(\cdot)\|, \end{align*} where we used~\eqref{eq:Hbound} for the last inequality. Since this holds for every $(s,t)$ and $(i,j)$ we conclude that \[ \|X\|_2\leq 8n(e-1)k^3 \|P(\cdot)\|\|v(\cdot)\| \] as required. To bound $\|Y\|_2$ we use the fact that $Y$ is the B\'{e}zout block matrix $B(P,-vxI)$, and by an analogous argument we obtain the bound \[ \|Y\|_2\leq 8n(e-1)k^3 \|P(\cdot)\|\|v(\cdot)\|. \] This establishes~\eqref{eq:xysmall} with $q_X(n,k)=q_Y(n,k)=8n(e-1)k^3$, and we obtain \begin{equation} \label{eq:Lbound} \|L(\cdot)\|\leq 16n(e-1)k^3 \|P(\cdot)\|\|v(\cdot)\|. \end{equation} Substituting this into~\eqref{eq:rlam} we obtain \begin{align*} \widehat r_{\lambda_i}&\leq \frac{k}{v(\lambda_i)}\frac{\|L(\cdot)\|}{\|P(\cdot)\|} \leq \frac{k}{v(\lambda_i)}\frac{16n(e-1)k^3 \|P(\cdot)\|\|v(\cdot)\|}{\|P(\cdot)\|}. \end{align*} With the choice $v = 1$ we have $v(\lambda_i) = \|v(\cdot)\|=1$, which yields~\eqref{eq:condrate}. \end{proof} Note that our discussion deals with the normwise condition number, as opposed to the coefficientwise condition number as treated in~\cite{Higham_06_02}. In practice, we observe that the eigenvalues of $L(\lambda)$ computed via the QZ algorithm are sometimes less accurate than those of $P(\lambda)$, obtained via QZ for the colleague linearization~\cite{good1961colleague}, which is normwise stable~\cite{NNvandooren14}.
The reason appears to be that the backward error resulting from the colleague matrix has a special structure, but a precise explanation is an open problem. \section{Construction}\label{sec:construction} We now describe an algorithm for computing $\mathbb{DL}$ pencils. The shift sum operation provides a means to obtain the $\mathbb{DL}$ pencil given the ansatz $v$. For general polynomial bases, however, the construction is not as trivial as for the monomial basis. We focus on the case where $\{\phi_i\}$ is an orthogonal polynomial basis, so that the multiplication matrix~\eqref{eq:defM} has tridiagonal structure. Recall that $F(x,y)$ and $G(x,y)$ satisfy the formulas~\eqref{eq:DLrelations},~\eqref{eq:F} and \eqref{eq:G}. Hence for $L(\lambda) = \lambda X+Y\in \mathbb{DL}(P)$ with ansatz $v$, writing the bivariate equations in terms of their coefficient matrix expansions, we see that $X$ and $Y$ need to satisfy the following equations. Define $v = [v_{k-1},\ldots,v_0]^T$ to be the vector of coefficients of the ansatz, and set \[ S = v\otimes \left[P_k,P_{k-1},\ldots,P_0\right]\qquad \hbox{and} \qquad T = v^T\otimes \left[P_k,P_{k-1},\ldots,P_0\right]^{\mathcal{B}}; \] note that $S$ and $T$ are the matrix representations of the functions $P(y)v(x)$ and $v(y)P(x)$, respectively. Hence by~\eqref{eq:G} we have \begin{equation}\label{eq:getY} \begin{bmatrix} 0 \cr Y \end{bmatrix} M - M^T\begin{bmatrix}0 & Y\end{bmatrix} = TM - M^T S, \end{equation} where $M$ is as in~\eqref{eq:defM}, the matrix representing the shift operation; recall that $M^{\mathcal{B}}=M^T$. Similarly, by~\eqref{eq:F} we have \begin{equation}\label{eq:getX} XM = S - \begin{bmatrix}0 & Y \end{bmatrix}. \end{equation} Note that we have used the first equation of~\eqref{eq:DLrelations} instead of~\eqref{eq:F} to obtain an equation for $X$ because the former is simpler to solve.
Now we turn to the computation of $X,Y$, which also explicitly shows that the pair $(X,Y)$ satisfying~\eqref{eq:getY},~\eqref{eq:getX} is unique\footnote{We note that~\eqref{eq:getY} is a singular Sylvester equation, but if we force the zero structure in the first block column in $[0\ Y]$ then the solution becomes unique. }. We first solve~\eqref{eq:getY} for $Y$. Recall that $M$ in~\eqref{eq:Morth} is block tridiagonal, the $(i,j)$ block being $m_{i,j}I_n$. Defining $R = TM - M^T S$ and denoting by $Y_i,R_{i}$ the $i$th block rows of $Y$ and $R$ respectively, the first block row of~\eqref{eq:getY} yields $m_{1,1}Y_1 = -R_{1}$, hence $Y_1 = -\frac{1}{m_{1,1}}R_{1}$ (note that $m_{i,i}\neq 0$ because the polynomial basis is degree-graded). The second block row of~\eqref{eq:getY} gives $Y_1M-(m_{1,2}Y_1+m_{2,2}Y_2) = R_{2}$, hence $ Y_2 = \frac{1}{m_{2,2}}(Y_1M-m_{1,2}Y_1-R_{2})$. Similarly, from the $i(\geq 3)$th block row of~\eqref{eq:getY} we get \begin{equation}\nonumber Y_i = \frac{1}{m_{i,i}}(Y_{i-1}M-m_{i-2,i}Y_{i-2}-m_{i-1,i}Y_{i-1}-R_{i}), \end{equation} so we can compute $Y_i$ for $i=1,2,\ldots,k$ inductively. Once $Y$ is obtained, $X$ can be computed easily by~\eqref{eq:getX}. The complexity is $\mathcal{O}((nk)^2)$, noting that $Y_{i-1}M$ can be computed with $\mathcal{O}(n^2k)$ cost. In Section~\ref{appendix} we provide a {\sc Matlab} code that computes $\mathbb{DL}(P)$ for any orthogonal polynomial basis. If $P(\lambda)$ is expressed in the monomial basis we have (see~\cite[eq.~2.9.3]{Bini_94_01} for scalar polynomials) \[ L(\lambda) = \begin{bmatrix}P_{k-1}&\ldots&P_0\cr\vdots&\iddots\cr P_0\cr\end{bmatrix}\!\!\begin{bmatrix}\hat{v}_kI_n&\ldots&\hat{v}_1I_n\cr&\ddots&\vdots\cr &&\hat{v}_kI_n\end{bmatrix} - \begin{bmatrix}\hat{v}_{k-1}I_n&\ldots&\hat{v}_0I_n\cr\vdots&\iddots\cr\hat{v}_0I_n\cr\end{bmatrix}\!\! \begin{bmatrix}P_k&\ldots&P_1\cr&\ddots&\vdots\cr&&P_k\end{bmatrix}, \] where $\hat{v}_i = \left(v_{i-1} - \lambda v_i\right)$, with the conventions $v_{-1}=v_k=0$.
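The monomial-basis formula above translates directly into code. The following Python sketch (an illustrative translation using NumPy, not the paper's reference implementation) assembles $L(\lambda)$ from the four block matrices, using the conventions $v_{-1}=v_k=0$ for $\hat v_i$, and checks that $L$ is singular at every eigenvalue of a random cubic $P$; this holds for any ansatz $v$, since every pencil in $\mathbb{DL}(P)$ satisfies the right ansatz identity.

```python
import numpy as np

rng = np.random.default_rng(2)
n, k = 2, 3
P = [rng.standard_normal((n, n)) for _ in range(k + 1)]   # P(lam) = sum_i lam^i P[i]
v = rng.standard_normal(k)                                # ansatz coefficients v_0, ..., v_{k-1}

def L_of(lam):
    """Evaluate L(lam) from the displayed block formula, vhat_i = v_{i-1} - lam*v_i."""
    I, Z = np.eye(n), np.zeros((n, n))
    vext = np.concatenate(([0.0], v, [0.0]))              # pad so that v_{-1} = v_k = 0
    vhat = [vext[i] - lam * vext[i + 1] for i in range(k + 1)]
    F1 = np.block([[P[k-1-i-j] if k-1-i-j >= 0 else Z for j in range(k)] for i in range(k)])
    F2 = np.block([[vhat[k-(j-i)] * I if j >= i else Z for j in range(k)] for i in range(k)])
    F3 = np.block([[vhat[k-1-i-j] * I if k-1-i-j >= 0 else Z for j in range(k)] for i in range(k)])
    F4 = np.block([[P[k-(j-i)] if j >= i else Z for j in range(k)] for i in range(k)])
    return F1 @ F2 - F3 @ F4

# Eigenvalues of P via the first companion linearization lam*A + B.
A = np.block([[P[k], np.zeros((n, n*(k-1)))],
              [np.zeros((n*(k-1), n)), np.eye(n*(k-1))]])
B = np.vstack([np.hstack([P[k-1-j] for j in range(k)]),
               np.hstack([-np.eye(n*(k-1)), np.zeros((n*(k-1), n))])])
lams = np.linalg.eigvals(-np.linalg.solve(A, B))

# L(lam) must be singular at every eigenvalue of P.
resid = max(np.linalg.svd(L_of(l), compute_uv=False)[-1] for l in lams)
```

For the quadratic case with $v=[1,0]^T$ this construction reproduces the classical pencil $\lambda\,\mathrm{diag}(P_2,-P_0)+\left[\begin{smallmatrix}P_1&P_0\\P_0&0\end{smallmatrix}\right]$, which can be checked by hand.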
This relation can be used to obtain expressions for the block matrices $X$ and $Y$. For other orthogonal bases the relation is more complicated. Matrix polynomials expressed in the Legendre or Chebyshev basis are of practical importance, for example, for a nonlinear eigenvalue solver based on Chebyshev interpolation~\cite{Effenberger_11_01}. Following~\cite[Table~5.2]{Mackey_05_01}, in Table~\ref{tab:ChebTDLP} we depict three $\mathbb{DL}(P)$ pencils for the cubic matrix polynomial $P({\lambda}) = P_3T_3({\lambda}) + P_2T_2({\lambda}) + P_1T_1({\lambda}) + P_0T_0({\lambda})$, where $T_j({\lambda})$ is the $j$th Chebyshev polynomial. \begin{table}[h] \centering \caption{Three instances of pencils in $\mathbb{DL}(P)$ and their linearization condition for the cubic matrix polynomial $P({\lambda}) = P_3T_3({\lambda}) + P_2T_2({\lambda}) + P_1T_1({\lambda}) + P_0T_0({\lambda})$, expressed in the Chebyshev basis of the first kind. These three pencils form a basis for the vector space $\mathbb{DL}(P)$.} \label{tab:ChebTDLP} \footnotesize \begin{tabular}{|c|c|c|} \hline \rule{0pt}{3ex}$v$ & $L({\lambda})\in\mathbb{DL}(P)$ for given $v$ & Linearization condition \\[3pt] \hline & & \\ $\begin{bmatrix} 1\\0 \\0\end{bmatrix} $& ${\lambda}\!\begin{bmatrix} 2P_3\!&0&0\\0&\!2P_3-2P_1\!&-2P_0\\0&-2P_0&\!P_3-P_1\!\end{bmatrix} + \begin{bmatrix}P_2&\!P_1-P_3\!&P_0\\\!P_1-P_3\!&2P_0&\!P_1-P_3\!\\P_0&\!P_1-P_3\!&P_0\end{bmatrix} $ & \parbox{3cm}{\centering $\det(P_0 + \frac{-P_3+P_1}{\sqrt{2}})\neq 0 $\\\vspace*{3pt} $\det(P_0 - \frac{-P_3+P_1}{\sqrt{2}})\neq 0 $ } \\& &\\ \hline & &\\ $\begin{bmatrix} 0\\1\\0\end{bmatrix} $& ${\lambda}\!\begin{bmatrix} 0&2P_3&0\\2P_3&2P_2&2P_3\\0&2P_3&P_2-P_0\end{bmatrix} + \begin{bmatrix}-P_3&0&-P_3\\0&P_1-3P_3&P_0-P_2\\-P_3&P_0-P_2&-P_3\end{bmatrix} $ & \parbox{3cm}{\centering $\det (-P_2+P_0)\neq 0$\\\vspace*{3pt} $\det (P_3)\neq 0$ } \\& &\\ \hline & &\\ $\begin{bmatrix} 0\\0\\1\end{bmatrix} $& ${\lambda}\!\begin{bmatrix} 
0&0&2P_3\\0&4P_3&2P_2\\2P_3&2P_2&P_1+P_3\end{bmatrix} + \begin{bmatrix}0&-2P_3&0\\-2P_3&-2P_2&-2P_3\\0&-2P_3&P_0-P_2\end{bmatrix}$ & $\det (P_3)\neq 0$ \\&&\\ \hline \end{tabular} \end{table} \normalsize \subsection{{\sc Matlab} code for $\mathbb{DL}(P)$} \label{appendix} The formulae~\eqref{eq:F} and~\eqref{eq:G} can be used to construct any pencil in $\mathbb{DL}(P)$ without basis conversion, which can be numerically important~\cite{Amirasiani_09_01, Nakatsukasa_13_01}. We provide a {\sc Matlab} code that constructs pencils in $\mathbb{DL}(P)$ when the matrix polynomial is expressed in any orthogonal basis. If $P(\lambda)$ is expressed in the monomial basis then ${\tt a=[ones(k,1)];}$ ${\tt b = zeros(k,1);}$ ${\tt c = zeros(k,1);}$ and if expressed in the Chebyshev basis then ${\tt a=[ones(k-1,1);2]/2;}$ ${\tt b = zeros(k,1);}$ ${\tt c = ones(k,1)/2;}$.
\begin{verbatim}
function [X Y] = DLP(P,v,a,b,c)
% P = [P_k ... P_0]: n-by-n(k+1) coefficients; v: ansatz vector;
% a,b,c: recurrence coefficients defining the multiplication matrix M.
[n m] = size(P); k=m/n-1; s=n*k;
M = spdiags([a b c],[0 1 2],k,k+1); M = kron(M,eye(n));
S = kron(v,P);
for j=0:k-1, jj=n*j+1:n*j+n; P(:,jj)=P(:,jj)';end
T = kron(v.',P'); R=M'*S-T*M;
X = zeros(s); Y=X; ii=n+1:s+n; nn=1:n;
% solve for the block rows of Y and X inductively
Y(nn,:)=R(nn,ii)/M(1); X(nn,:)=T(nn,:)/M(1);
Y(nn+n,:)=(R(nn+n,ii)-M(1,n+1)*Y(nn,:)+Y(nn,:)*M(:,n+1:s+n))/M(n+1,n+1);
X(nn+n,:)=(T(nn+n,:)-Y(nn,:)-M(1,n+1)*X(nn,:))/M(n+1,n+1);
for i = 3:k
  ni=n*i; jj=ni-n+1:ni; j0=jj-2*n; j1=jj-n;
  M0=M(ni-2*n,ni); M1=M(ni-n,ni); m=M(ni,ni);
  Y0=Y(j0,:); Y1=Y(j1,:); X0=X(j0,:); X1=X(j1,:);
  Y(jj,:)=(R(jj,ii)-M1*Y1-M0*Y0+Y1*M(:,n+1:s+n))/m;
  X(jj,:)=(T(jj,:)-Y1-M1*X1-M0*X0)/m;
end
\end{verbatim}
\subsection*{Acknowledgments} We wish to thank Nick Higham, Fran\c{c}oise Tisseur, and Nick Trefethen for their support and insightful comments. We are grateful to the anonymous referees for their careful reading of the manuscript that led to an improved presentation. We finally thank Leiba Rodman for his words of appreciation and encouragement, and we posthumously dedicate this article to him. \end{document}
\begin{document} \title[Strichartz estimates for the magnetic Schr\"odinger equation] {Strichartz estimates for the magnetic Schr\"odinger equation with potentials $V$ of critical decay} \author{Seonghak Kim and Youngwoo Koh} \address{Department of Mathematics, Kyungpook National University, Daegu 41566, Republic of Korea} \email{[email protected]} \address{Department of Mathematics Education, Kongju National University, Kongju 32588, Republic of Korea} \email{[email protected]} \subjclass[2010] {Primary 35Q41, 46E35.} \keywords{Strichartz estimates, magnetic Schr\"odinger equation, Fefferman-Phong class.} \begin{abstract} We study the Strichartz estimates for the magnetic Schr\"odinger equation in dimension $n\geq3$. More specifically, for all Schr\"odinger admissible pairs $(r,q)$, we establish the estimate $$ \|e^{itH}f\|_{L^{q}_{t}(\mathbb{R}; L^{r}_{x}(\mathbb{R}^n))} \leq C_{n,r,q,H} \|f\|_{L^2(\mathbb{R}^n)} $$ when the operator $H= -\Delta_A +V$ satisfies suitable conditions. In the purely electric case $A\equiv0$, we extend the class of potentials $V$ to the Fefferman-Phong class. In doing so, we apply a weighted estimate for the Schr\"odinger equation developed by Ruiz and Vega. Moreover, for the endpoint estimate of the magnetic case in $\mathbb{R}^3$, we investigate an equivalence $$ \| H^{\frac{1}{4}} f \|_{L^r(\mathbb{R}^3)} \approx C_{H,r} \big\| (-\Delta)^{\frac{1}{4}} f \big\|_{L^r(\mathbb{R}^3)} $$ and find sufficient conditions on $H$ and $r$ for which the equivalence holds. \end{abstract} \maketitle \section{Introduction} Consider the Cauchy problem of the magnetic Schr\"odinger equation in $\mathbb{R}^{n+1}$ $(n\ge 3)$: \begin{equation}\label{mS_eq} \begin{cases} i\partial_{t}u - Hu=0, \quad (x,t)\in \mathbb{R}^{n} \times \mathbb{R},\\ u(x,0)=f(x), \quad\, f\in\mathcal S. 
\end{cases} \end{equation} Here, $\mathcal{S}$ is the Schwartz class, and $H$ is the electromagnetic Schr\"odinger operator $$ H = -\nabla_A^2 +V(x), \quad \nabla_A = \nabla - iA(x), $$ where $A= (A^1,A^2,\cdots,A^n): \mathbb{R}^{n} \rightarrow \mathbb{R}^{n}$ and $V:\mathbb{R}^{n} \rightarrow \mathbb{R}$. The \textit{magnetic field} $B$ is defined by $$ B= DA - (DA)^T\in \mathcal{M}_{n \times n}, $$ where $(DA)_{ij}= \partial_{x_i} A^j$, $(DA)^T$ denotes the transpose of $DA$, and $\mathcal{M}_{n \times n}$ is the space of $n\times n$ real matrices. In dimension $n=3$, $B$ is determined by the cross product with the vector field $\mathrm{curl} A$: $$ Bv = \mathrm{curl} A \times v\quad (v \in \mathbb{R}^3). $$ In this paper, we consider the Strichartz type estimate \begin{equation}\label{eM result} \|u\|_{L^{q}_{t}(\mathbb{R}; L^{r}_{x}(\mathbb{R}^n))} \leq C_{n,r,q,H} \|f\|_{L^2(\mathbb{R}^n)}, \end{equation} where $u=e^{itH}f$ is the solution to problem \eqref{mS_eq} with solution operator $e^{itH}$, and study some conditions on $A$, $V$ and pairs $(r,q)$ for which the estimate holds. For the unperturbed case of \eqref{mS_eq} that $A \equiv 0$ and $V \equiv 0$, Strichartz \cite{St} proved the inequality $$ \|e^{it\Delta}f\|_{L^{\frac{2(n+2)}{n}}(\mathbb{R}^{n+1})} \leq C_n \|f\|_{L^2(\mathbb{R}^{n})}, $$ where $e^{it\Delta}$ is the solution operator given by $$ e^{it\Delta}f(x)= \frac{1}{(2\pi)^n} \int_{\mathbb{R}^n} e^{ix\cdot\xi + it|\xi|^{2}}\widehat{f}(\xi)d\xi. $$ Later, Keel and Tao \cite{KT} generalized this inequality to the following: \begin{equation}\label{KT_result} \|e^{it\Delta}f\|_{L^{q}_{t}(\mathbb{R}; L^{r}_{x}(\mathbb{R}^n))} \leq C_{n,r,q} \|f\|_{L^2(\mathbb{R}^n)} \end{equation} holds if and only if $(r,q)$ is a Schr\"odinger admissible pair; that is, $r,q \geq 2$, $(r,q)\neq (\infty,2)$ and $\frac{n}{r} + \frac{2}{q} = \frac{n}{2}$. 
In the purely electric case of \eqref{mS_eq} where $A \equiv 0$, the decay $|V(x)| \sim 1/|x|^2$ has been known to be critical for the validity of the Strichartz estimate. It was shown by Goldberg, Vega and Visciglia \cite{GVV} that for each $\epsilon>0$, there is a counterexample $V=V_\epsilon$ with $|V(x)| \sim |x|^{-2+\epsilon}$ for $|x|\gg 1$ such that the estimate fails to hold. In a positive direction, Rodnianski and Schlag \cite{RS} proved \begin{equation}\label{electiric_result} \|u\|_{L^{q}_{t}(\mathbb{R}; L^{r}_{x}(\mathbb{R}^n))} \leq C_{n,r,q,V} \|f\|_{L^2(\mathbb{R}^n)} \end{equation} for non-endpoint admissible pairs $(r,q)$ (i.e., $q>2$) with almost critical decay $|V(x)| \lesssim 1/(1+|x|)^{2+\epsilon}$. On the other hand, Burq, Planchon, Stalker and Tahvildar-Zadeh \cite{BPST} established \eqref{electiric_result} for critical decay $|V(x)| \lesssim 1/|x|^2$ with some technical conditions on $V$, but with the endpoint case included. Other than these, there have been many related positive results; see, e.g., \cite{GS}, \cite{Go}, \cite{BRV}, \cite{FV} and \cite{BG}. In regard to the purely electric case, the following is the first main result of this paper, whose proof is given in section \ref{sec_A=0}. \begin{thm}\label{main_thm_1} Let $n\geq3$ and $A \equiv 0$. Then there exists a constant $c_n>0$, depending only on $n$, such that for any $V \in \mathcal{F}^p$ $(\frac{n-1}{2}<p<\frac{n}{2})$ satisfying \begin{equation}\label{Cond_Decay_V} \|V\|_{\mathcal{F}^p} \leq c_n, \end{equation} estimate \eqref{electiric_result} holds for all $\frac{n}{r} + \frac{2}{q} = \frac{n}{2}$ and $q>2$. Moreover, if $V\in L^{\frac{n}{2}}$ in addition, then \eqref{electiric_result} holds for the endpoint case $(r,q)=(\frac{2n}{n-2},2)$.
\end{thm} Here, $\mathcal{F}^p$ is the Fefferman-Phong class with norm $$ \|V\|_{\mathcal{F}^p} = \sup_{r>0,\,x_0\in\mathbb{R}^n} r^2 \Big( \frac{1}{r^n} \int_{B_r(x_0)}|V(x)|^p dx \Big)^{\frac{1}{p}} < \infty, $$ which is closed under translation. From the definition of $\mathcal{F}^p$, we directly get $L^{\frac{n}{2},\infty} \subset \mathcal{F}^p$ for all $p<\frac{n}{2}$. Thus the class $\mathcal{F}^p$ $(p<\frac{n}{2})$ clearly contains the potentials of critical decay $|V(x)| \lesssim 1/|x|^2$. Moreover, $\mathcal{F}^p$ $(p<\frac{n}{2})$ is strictly larger than $L^{\frac{n}{2},\infty}$. For instance, if $V$ is given by \begin{equation}\label{example_Fp} V(x)=\phi\big(\frac{x}{|x|}\big) |x|^{-2},\quad \phi\in L^p(S^{n-1}),\quad \frac{n-1}{2}<p<\frac{n}{2}, \end{equation} then $V$ need not belong to $L^{\frac{n}{2},\infty}$, but $V\in \mathcal{F}^p$. According to Theorem \ref{main_thm_1}, for the non-endpoint case, we do not need any condition on $V$ other than the quantitative bound \eqref{Cond_Decay_V}, so that we can extend and considerably simplify the known results for potentials $|V(x)| \sim 1/|x|^2$ (e.g., $\phi\in L^{\infty}(S^{n-1})$ in \eqref{example_Fp}), mentioned above. To prove this, we use a weighted estimate developed by Ruiz and Vega \cite{RV2}. We remark that our proof follows an approach different from those used in the previous works. Unfortunately, for the endpoint case, we need an additional condition that $V\in L^{\frac{n}{2}}$. Although $L^{\frac{n}{2}}$ does not contain the potentials of critical decay, it still includes those of \emph{almost} critical decay $|V(x)| \lesssim \phi(\frac{x}{|x|})\min (|x|^{-(2+\epsilon)}, |x|^{-(2-\epsilon)})$, $\phi\in L^{\frac{n}{2}}(S^{n-1})$. In case of dimension $n=3$, we can find a specific bound for $V$, which plays the role of $c_n$ in Theorem \ref{main_thm_1}. We state this as the second result of the paper.
\begin{thm}\label{main_thm_1'} If $n=3$, $A \equiv 0$ and $\|V\|_{L^{3/2}} < 2\pi^{1/3}$, then estimate \eqref{electiric_result} holds for all $\frac{3}{r} + \frac{2}{q} = \frac{3}{2}$ and $q\geq 2$. \end{thm} To prove this, we use the best constant of the Stein-Tomas restriction theorem in $\mathbb{R}^3$, obtained by Foschi \cite{F}, and apply it to an argument in Ruiz and Vega \cite{RV2}.\\ Next, we consider the general (magnetic) case in which $A$ or $V$ can be different from zero. In this case, the Coulomb decay $|A(x)| \sim 1/|x|$ seems critical. (In \cite{FG}, there is a counterexample for $n\geq3$. The case $n=2$ is still open.) In an early work of Stefanov \cite{Ste}, estimate (\ref{eM result}) was proved for $n\ge 3$; that is, \begin{equation}\label{magnetic_result} \|e^{itH}f\|_{L^{q}_{t}(\mathbb{R}; L^{r}_{x}(\mathbb{R}^n))} \leq C_{n,r,q,H} \|f\|_{L^2(\mathbb{R}^n)} \end{equation} for all Schr\"odinger admissible pairs $(r,q)$ under some smallness assumptions on the potentials $A$ and $V$. Later, for potentials of almost critical decay $|A(x)| \lesssim 1/|x|^{1+\epsilon}$ and $|V(x)| \lesssim 1/|x|^{2+\epsilon}$ $(|x|\gg1)$, D'Ancona, Fanelli, Vega and Visciglia \cite{DFVV} established \eqref{magnetic_result} for all Schr\"odinger admissible pairs $(r,q)$ in $n\ge 3$, except the endpoint case $(n,r,q) = (3,6,2)$, under some technical conditions on $A$ and $V$. Also, there have been many related positive results; see, e.g., \cite{GST}, \cite{EGS2}, \cite{DF}, \cite{MMT}, \cite{EGS1}, \cite{Go2} and \cite{FFFP}. Despite all these results, there has been no known positive result on the estimate for potentials $A$ of critical decay, even in the case $V\equiv 0$. Regarding the general case, we state the last result of the paper, whose proof is provided in section \ref{sec_V=0}. \begin{thm}\label{main_thm_2} Let $n\ge 3$, $A,V \in C^1_{loc}(\mathbb{R}^n\backslash \{0\})$ and $\epsilon>0$. 
Assume that the operators $-\Delta_A = -(\nabla -iA)^2$ and $H= -\Delta_A +V$ are self-adjoint and positive on $L^2$ and that \begin{equation}\label{Kato_class} \| V_- \|_K = \sup_{x \in \mathbb{R}^n} \int \frac{|V_-(y)|}{|x-y|^{n-2}} dy ~< \frac{\pi^{n/2}}{\Gamma(\frac{n}{2}-1)} . \end{equation} Assume also that there is a constant $C_{\epsilon}>0$ such that $A$ and $V$ satisfy the almost critical decay condition \begin{equation}\label{Cond_2'} \begin{aligned} |A(x)|^2 + |V(x)| \leq C_{\epsilon} \min\big( \frac{1}{|x|^{2-\epsilon}}, \frac{1}{|x|^{2+\epsilon}}\big) , \end{aligned} \end{equation} and the Coulomb gauge condition \begin{equation}\label{Cond_Coulomb} \nabla\cdot A =0. \end{equation} Lastly, defining the \textit{trapping component} of the magnetic field $B$ by $B_\tau (x) = (x/|x|) \cdot B(x)$, assume that \begin{equation}\label{Cond_B_n=3} \int_0^\infty \sup_{|x|=r} |x|^3 \big| B_\tau (x) \big|^2 dr + \int_0^\infty \sup_{|x|=r} |x|^2 \big| \big( \partial_r V (x) \big)_{+} \big| dr < \frac{1}{M}\quad\mbox{if}\quad n=3, \end{equation} for some $M>0$, and that \begin{equation}\label{Cond_B_n>3} \Big\| |x|^2 B_\tau (x) \Big\|_{L^{\infty}}^2 + 2 \Big\| |x|^3 \big(\partial_r V (x) \big)_{+} \Big\|_{L^{\infty}} < \frac{2(n-1)(n-3)}{3}\quad\mbox{if}\quad n\geq4. \end{equation} For $n=3$ only, we also assume the boundedness of the imaginary powers of $H$: \begin{equation}\label{Cond_BMO} \| H^{iy} \|_{BMO \rightarrow BMO_H} \leq C (1+|y|)^{3/2}. \end{equation} Then we have \begin{equation}\label{result_Thm2} \|e^{itH}f\|_{L^{q}_{t}(\mathbb{R}; L^{r}_{x}(\mathbb{R}^n))} \leq C_{n,r,q,H,\epsilon} \|f\|_{L^2(\mathbb{R}^n)}, \quad \frac{n}{r} + \frac{2}{q} = \frac{n}{2} \quad\mbox{and}\quad q \geq 2 . \end{equation} \end{thm} Note that this result covers the endpoint case $(n,r,q)=(3,6,2)$; the conclusions for the other cases are the same as in \cite{DFVV}. Here, $V_{\pm}$ denote the positive and negative parts of $V$, respectively; that is, $V_+=\max\{V,0\}$ and $V_-=\max\{-V,0\}$. 
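For orientation, when $n=3$ the threshold in \eqref{Kato_class} can be evaluated explicitly; the following quick check uses only the special value $\Gamma(\tfrac{1}{2})=\sqrt{\pi}$:
$$ \frac{\pi^{n/2}}{\Gamma(\frac{n}{2}-1)}\bigg|_{n=3} = \frac{\pi^{3/2}}{\Gamma(\frac{1}{2})} = \frac{\pi^{3/2}}{\sqrt{\pi}} = \pi, $$
so in three dimensions condition \eqref{Kato_class} simply reads $\| V_- \|_K < \pi$.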
Also, we say that a function $V$ is of Kato class if $$ \| V \|_K := \sup_{x \in \mathbb{R}^n} \int \frac{|V(y)|}{|x-y|^{n-2}} dy ~< \infty, $$ and $\Gamma$ in \eqref{Kato_class} is the gamma function, defined by $\Gamma(\alpha)= \int_{0}^\infty x^{\alpha-1} e^{-x} dx$. The last condition \eqref{Cond_BMO} for $n=3$ may seem a bit technical, but it is not artificial. For instance, by Lemma 6.1 in \cite{DDSY}, we know that $\| H^{iy} \|_{L^{\infty} \rightarrow BMO_H} \leq C (1+|y|)^{3/2}$ using only \eqref{Kato_class}. Also, there are many known sufficient conditions under which such an estimate extends to $BMO\rightarrow BMO$, for instance when the operator is translation invariant (see \cite{P}). For the definition and some basic properties of the $BMO_H$ space, see section \ref{sec_CKN_ineq}. The rest of the paper is organized as follows. In section \ref{sec_A=0}, we prove Theorems \ref{main_thm_1} and \ref{main_thm_1'}. An equivalence of norms involving $H$ and $-\Delta$ in $\mathbb{R}^3$ is investigated in section \ref{sec_CKN_ineq}. Lastly, in section \ref{sec_V=0}, Theorem \ref{main_thm_2} is proved. \section{The case $A\equiv0$: Proof of Theorems \ref{main_thm_1} and \ref{main_thm_1'}}\label{sec_A=0} In this section, the proofs of Theorems \ref{main_thm_1} and \ref{main_thm_1'} are provided. Let $n\geq3$, and consider the purely electric Schr\"odinger equation in $\mathbb{R}^{n+1}$: \begin{equation}\label{mS_eq_A=0} \begin{cases} i\partial_{t}u +\Delta u= V(x)u, \quad (x,t)\in \mathbb{R}^{n} \times \mathbb{R},\\ u(x,0)=f(x), \quad\qquad\, f\in\mathcal S. \end{cases} \end{equation} By Duhamel's principle, we have a formal solution to problem \eqref{mS_eq_A=0} given by $$ u(x,t)= e^{itH}f(x) =e^{it\Delta}f(x) - i \int_{0}^{t}e^{i(t-s)\Delta} V(x) e^{isH}f ds . 
$$ From the standard Strichartz estimate \eqref{KT_result}, there holds $$ \begin{aligned} \|e^{itH}f\|_{L^{q}_{t}(\mathbb{R}; L^{r}_{x}(\mathbb{R}^n))} &\leq C_{n,r,q} \|f\|_{L^2(\mathbb{R}^n)} + \Big\| \int_{0}^{t}e^{i(t-s)\Delta} V(x) e^{isH}f ds \Big\|_{L^{q}_{t}(\mathbb{R}; L^{r}_{x}(\mathbb{R}^n))} \end{aligned} $$ for all Schr\"odinger admissible pairs $(r,q)$. Thus it is enough to show that \begin{equation}\label{A=0_reduce_1} \Big\| \int_{0}^{t}e^{i(t-s)\Delta} V(x) e^{isH}f ds \Big\|_{L^{q}_{t}(\mathbb{R}; L^{r}_{x}(\mathbb{R}^n))} \leq C_{n,r,q,V} \| f \|_{L^2(\mathbb{R}^n)} \end{equation} for all Schr\"odinger admissible pairs $(r,q)$. By duality, estimate \eqref{A=0_reduce_1} is equivalent to $$ \int_{\mathbb{R}} \int_{0}^{t} \Big\langle e^{i(t-s)\Delta} \big( V(x) e^{isH}f \big), G(\cdot,t) \Big\rangle_{L^2_x} ds dt \leq C \| f \|_{L^2(\mathbb{R}^n)} \| G \|_{L^{q'}_{t}(\mathbb{R}; L^{r'}_{x}(\mathbb{R}^n))}. $$ Now, we consider the left-hand side of this inequality. Taking the adjoint of the propagator and then interchanging the order of integration, we have $$ \begin{aligned} &\int_{\mathbb{R}} \int_{0}^{t} \Big\langle e^{i(t-s)\Delta} \big( V(x) e^{isH}f \big), G(\cdot,t) \Big\rangle_{L^2_x} ds dt \\ &\quad\quad= \int_{\mathbb{R}} \int_{0}^{t} \Big\langle V(x) e^{isH}f, e^{-i(t-s)\Delta} G(\cdot,t) \Big\rangle_{L^2_x} ds dt \\ &\quad\quad= \int_{\mathbb{R}} \Big\langle V(x) e^{isH}f, \int_{s}^{\infty} e^{-i(t-s)\Delta} G(\cdot,t)dt \Big\rangle_{L^2_x} ds. \end{aligned} $$ By H\"older's inequality, we have $$ \begin{aligned} &\int_{\mathbb{R}} \Big\langle V(x) e^{isH}f, \int_{s}^{\infty} e^{-i(t-s)\Delta} G(\cdot,t)dt \Big\rangle_{L^2_x} ds \\ &\qquad\leq \big\| e^{isH}f \big\|_{L^2_{x,s} (|V|)} \Big\| \int_{s}^{\infty} e^{-i(t-s)\Delta} G (\cdot,t) dt \Big\|_{L^2_{x,s} (|V|)}. 
\end{aligned} $$ Thanks to \cite[Theorem 3]{RV2}, for any $\frac{n-1}{2}<p<\frac{n}{2}$, we have \begin{equation}\label{RV2_result} \big\| e^{itH}f \big\|_{L^2_{x,t} (|V|)} \leq C_{n} \|V\|_{\mathcal{F}^p}^{\frac{1}{2}} \| f \|_{L^2(\mathbb{R}^n)} \end{equation} if condition \eqref{Cond_Decay_V} holds for some suitable constant $c_n$. More specifically, by Propositions 2.3 and 4.2 in \cite{RV2}, we have $$ \begin{aligned} \big\| e^{itH}f \big\|_{L^2_{x,t} (|V|)} &\leq \big\| e^{it\Delta}f \big\|_{L^2_{x,t} (|V|)} + \Big\| \int_{0}^{t}e^{i(t-s)\Delta} V(x) e^{isH}f ds \Big\|_{L^2_{x,t} (|V|)} \\ &\leq C_1 \|V\|_{\mathcal{F}^p}^{\frac{1}{2}} \| f \|_{L^2} + C_2 \|V\|_{\mathcal{F}^p} \big\| V(x) e^{itH}f \big\|_{L^2_{x,t}(|V|^{-1})} \\ &= C_1 \|V\|_{\mathcal{F}^p}^{\frac{1}{2}} \| f \|_{L^2} + C_2 \|V\|_{\mathcal{F}^p} \| e^{itH}f \|_{L^2_{x,t}(|V|)}. \end{aligned} $$ Thus, if $\|V\|_{\mathcal{F}^p} \leq 1/(2C_2)=:c_n$, we get $$ \big\| e^{itH}f \big\|_{L^2_{x,t} (|V|)} \leq C_1 \|V\|_{\mathcal{F}^p}^{\frac{1}{2}} \| f \|_{L^2} + \frac{1}{2}\| e^{itH}f \|_{L^2_{x,t}(|V|)}, $$ and this implies \eqref{RV2_result} by setting $C_n=2C_1$. As a result, we can reduce \eqref{A=0_reduce_1} to \begin{equation}\label{CK_goal} \Big\| \int_{t}^{\infty} e^{i(t-s)\Delta} G (\cdot,s) ds \Big\|_{L^2_{x,t} (|V|)} \leq C_{n,r,q,V} \| G \|_{L^{q'}_{t}(\mathbb{R}; L^{r'}_{x}(\mathbb{R}^n))}. \end{equation} It now remains to establish (\ref{CK_goal}). 
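The strategy, sketched here for orientation, is to split the remaining time integral into a full integral and a retarded one:
$$ \int_{t}^{\infty} e^{i(t-s)\Delta} G (\cdot,s) ds = \int_{\mathbb{R}} e^{i(t-s)\Delta} G (\cdot,s) ds - \int_{-\infty}^{t} e^{i(t-s)\Delta} G (\cdot,s) ds, $$
so, by the triangle inequality in $L^2_{x,t}(|V|)$, estimate \eqref{CK_goal} follows once each of the two pieces on the right-hand side is bounded by $C\| G \|_{L^{q'}_{t}(\mathbb{R}; L^{r'}_{x}(\mathbb{R}^n))}$.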
First, from \cite[Proposition 2.3]{RV2} and the duality of Keel-Tao's result \eqref{KT_result}, we know \begin{equation}\label{CK_reduce} \begin{aligned} \Big\| \int_{\mathbb{R}} e^{i(t-s)\Delta} G (\cdot,s) ds \Big\|_{L^2_{x,t}(|V|)} &=\Big\| e^{it\Delta} \int_{\mathbb{R}} e^{-is\Delta} G (\cdot,s) ds \Big\|_{L^2_{x,t}(|V|)} \\ &\leq C_n \|V\|_{\mathcal{F}^p}^{\frac{1}{2}} \Big\| \int_{\mathbb{R}} e^{-is\Delta} G (\cdot,s) ds \Big\|_{L^2_x} \\ &\leq C_{n,r,q} \|V\|_{\mathcal{F}^p}^{\frac{1}{2}} \| G \|_{L^{q'}_t L^{r'}_x} \end{aligned} \end{equation} for all Schr\"odinger admissible pairs $(r,q)$. In turn, \eqref{CK_reduce} implies \begin{equation}\label{CK_reduce_2} \Big\| \int_{-\infty}^{t} e^{i(t-s)\Delta} G (\cdot,s) ds \Big\|_{L^2_{x,t} (|V|)} \leq C_{n,r,q} \|V\|_{\mathcal{F}^p}^{\frac{1}{2}} \| G \|_{L^{q'}_{t}(\mathbb{R}; L^{r'}_{x}(\mathbb{R}^n))} \end{equation} by the Christ-Kiselev lemma \cite{CK} for $q>2$. Combining \eqref{CK_reduce} with \eqref{CK_reduce_2}, we directly get \eqref{CK_goal} for $q>2$. Next, for the endpoint case $(r,q)=(\frac{2n}{n-2},2)$, we have \begin{equation}\label{CK_reduce_3} \begin{aligned} \Big\| \int_{-\infty}^{t} e^{i(t-s)\Delta} G (\cdot,s) ds \Big\|_{L^2_{x,t} (|V|)} &\leq \|V\|_{L^{\frac{n}{2}}_x}^{\frac{1}{2}} \Big\| \int_{-\infty}^{t} e^{i(t-s)\Delta} G (\cdot,s) ds \Big\|_{L^{2}_{t}(\mathbb{R}; L^{\frac{2n}{n-2}}_{x}(\mathbb{R}^n))} \\ &\leq C_n \|V\|_{L^{\frac{n}{2}}_x}^{\frac{1}{2}} \| G \|_{L^{2}_{t}(\mathbb{R}; L^{\frac{2n}{n+2}}_{x}(\mathbb{R}^n))} \end{aligned} \end{equation} from H\"older's inequality in $x$ combined with the inhomogeneous Strichartz estimate of Keel and Tao. Observe now that \eqref{CK_reduce_3} implies \eqref{CK_goal} when $q=2$ under the assumption $V\in L^{\frac{n}{2}}_x$. The proof of Theorem \ref{main_thm_1} is now complete.\\ Now, we will find a suitable constant in Theorem \ref{main_thm_1'}. For this, we refine estimate \eqref{RV2_result} based on an argument in \cite{RV2}. 
We recall the Fourier transform in $\mathbb{R}^n$, defined by $$ \widehat{f}(\xi)=\int_{\mathbb{R}^n} e^{-ix\cdot\xi} f(x)dx, $$ and its basic properties $$ f(x) = \frac{1}{(2\pi)^{n}} \int_{\mathbb{R}^n} e^{ix\cdot\xi} \widehat{f}(\xi)d\xi \quad\mbox{and}\quad \| f\|_{L^2(\mathbb{R}^n)} = \frac{1}{(2\pi)^{\frac{n}{2}}} \big\| \widehat{f} \big\|_{L^2(\mathbb{R}^n)} . $$ Thus, we can express $e^{it\Delta}f$ using polar coordinates with $r^2=\lambda$ as follows: $$ \begin{aligned} e^{it\Delta}f &= \frac{1}{(2\pi)^{n}} \int_0^\infty e^{-itr^2} \int_{S^{n-1}_r} e^{ix\cdot\xi} \widehat{f}(\xi) d\sigma_r(\xi) dr \\ &= \frac{1}{2(2\pi)^{n}} \int_0^\infty e^{-it\lambda} \int_{S^{n-1}_{\sqrt{\lambda}}} e^{ix\cdot\xi} \widehat{f}(\xi) d\sigma_{\sqrt{\lambda}}(\xi) \lambda^{-\frac{1}{2}} d\lambda. \end{aligned} $$ Take $F$ as $$ F(\lambda) = \int_{S^{n-1}_{\sqrt{\lambda}}} e^{ix\cdot\xi} \widehat{f}(\xi) d\sigma_{\sqrt{\lambda}}(\xi) \lambda^{-\frac{1}{2}} $$ if $\lambda\geq0$ and $F(\lambda)=0$ if $\lambda<0$. Then, by Plancherel's theorem in $t$, we get $$ \begin{aligned} \big\| e^{it\Delta}f \big\|_{L^2_{x,t} (|V|)}^2 &= \frac{2\pi}{4(2\pi)^{2n}} \int_{\mathbb{R}^n} \Big( \int_{\mathbb{R}} |F(\lambda)|^2 d\lambda \Big) |V(x)|dx \\ &= \frac{\pi}{2(2\pi)^{2n}} \int_{\mathbb{R}^n} \Big( \int_{0}^{\infty} \Big| \int_{S^{n-1}_{\sqrt{\lambda}}} e^{ix\cdot\xi} \widehat{f}(\xi) d\sigma_{\sqrt{\lambda}}(\xi) \Big|^2 \lambda^{-1} d\lambda \Big) |V(x)|dx \\ &= \frac{\pi}{(2\pi)^{2n}} \int_{0}^{\infty} \Big( \int_{\mathbb{R}^n} \Big| \int_{S^{n-1}_r} e^{ix\cdot\xi} \widehat{f}(\xi) d\sigma_r(\xi) \Big|^2 |V(x)|dx \Big) r^{-1} dr . \end{aligned} $$ Now, we consider the $n=3$ case and apply the result on the best constant of the Stein-Tomas restriction theorem in $\mathbb{R}^3$ obtained by Foschi \cite{F}. That is, $$ \big\| \widehat{fd\sigma} \big\|_{L^4(\mathbb{R}^3)} \leq 2\pi \| f \|_{L^2(S^{2})} $$ where $$ \widehat{fd\sigma}(x) = \int_{S^{2}} e^{-ix\cdot\xi}f(\xi)d\sigma(\xi). 
$$ Interpolating this with the trivial estimate $$ \big\| \widehat{fd\sigma} \big\|_{L^{\infty}(\mathbb{R}^3)} \leq \| f \|_{L^1(S^{2})} \leq \sqrt{4\pi} \| f \|_{L^2(S^{2})}, $$ we get $$ \big\| \widehat{fd\sigma} \big\|_{L^6(\mathbb{R}^3)} \leq 2^{1/6} (2\pi)^{5/6} \| f \|_{L^2(S^{2})}. $$ By H\"older's inequality, we have $$ \begin{aligned} \Big( \int_{\mathbb{R}^3} \Big| \int_{S^2} e^{ix\cdot\xi} \widehat{f}(\xi) d\sigma(\xi) \Big|^2 |V(x)|dx \Big) &\leq \Big\| \int_{S^2} e^{ix\cdot\xi} \widehat{f}(\xi) d\sigma(\xi) \Big\|_{L^6}^2 \| V \|_{L^{3/2}} \\ &\leq 2^{1/3} (2\pi)^{5/3} \| V \|_{L^{3/2}} \| \widehat{f} \|_{L^2(S^{2})}^2. \end{aligned} $$ So we get $$ \begin{aligned} \big\| e^{it\Delta}f \big\|_{L^2_{x,t} (|V|)}^2 &\leq \frac{\pi}{(2\pi)^{6}} 2^{1/3} (2\pi)^{5/3} \| V \|_{L^{3/2}} \| \widehat{f} \|_{L^2}^2 \\ &= \frac{1}{2\pi^{1/3}} \| V \|_{L^{3/2}} \| f \|_{L^2}^2 . \end{aligned} $$ By the same argument as in the proof of Theorem \ref{main_thm_1}, we have $$ \begin{aligned} \big\| e^{itH}f \big\|_{L^2_{x,t} (|V|)} &\leq \big\| e^{it\Delta}f \big\|_{L^2_{x,t} (|V|)} + \Big\| \int_{0}^{t}e^{i(t-s)\Delta} V(x) e^{isH}f ds \Big\|_{L^2_{x,t} (|V|)} \\ &\leq \frac{1}{\sqrt{2}\pi^{1/6}} \| V \|_{L^{3/2}}^{\frac{1}{2}} \| f \|_{L^2} + \frac{1}{2\pi^{1/3}} \| V \|_{L^{3/2}} \big\| V(x) e^{itH}f \big\|_{L^2_{x,t}(|V|^{-1})} \\ &= \frac{1}{\sqrt{2}\pi^{1/6}} \| V \|_{L^{3/2}}^{\frac{1}{2}} \| f \|_{L^2} + \frac{1}{2\pi^{1/3}} \| V \|_{L^{3/2}} \| e^{itH}f \|_{L^2_{x,t}(|V|)}. \end{aligned} $$ Thus, if $\| V \|_{L^{3/2}} < 2\pi^{1/3}$, then \begin{equation}\label{const_WR} \big\| e^{itH}f \big\|_{L^2_{x,t} (|V|)} \leq C_V \| f \|_{L^2}. \end{equation} Using \eqref{const_WR} instead of \eqref{RV2_result} in that argument, the proof of Theorem \ref{main_thm_1'} is complete. 
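For completeness, the constant $\frac{1}{2\pi^{1/3}}$ appearing above can be checked by direct arithmetic, using Plancherel's identity $\|\widehat{f}\|_{L^2}^2 = (2\pi)^3 \|f\|_{L^2}^2$ in $\mathbb{R}^3$:
$$ \frac{\pi}{(2\pi)^{6}}\, 2^{1/3} (2\pi)^{5/3} \,\|\widehat{f}\|_{L^2}^2 = \pi\, 2^{1/3} (2\pi)^{-13/3} (2\pi)^{3} \,\|f\|_{L^2}^2 = 2^{1/3-4/3}\, \pi^{1-4/3} \,\|f\|_{L^2}^2 = \frac{1}{2\pi^{1/3}}\, \|f\|_{L^2}^2 . $$
In particular, the absorption argument closes exactly when $\frac{1}{2\pi^{1/3}} \| V \|_{L^{3/2}} < 1$, that is, when $\| V \|_{L^{3/2}} < 2\pi^{1/3}$, which is the origin of the threshold in Theorem \ref{main_thm_1'}.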
\section{The equivalence of two norms involving $H$ and $-\Delta$ in $\mathbb{R}^3$}\label{sec_CKN_ineq} In this section, we investigate some conditions on $H$ and $p$ under which the equivalence $$ \| H^{\frac{1}{4}} f \|_{L^p(\mathbb{R}^3)} \approx \big\| (-\Delta)^{\frac{1}{4}} f \big\|_{L^p(\mathbb{R}^3)} $$ holds, with implicit constants depending on $H$ and $p$. This equivalence, which is of independent interest, was studied in \cite{DFVV} and \cite{CD}. We now present such an equivalence in a form suited to $n=3$, which enables us to include the endpoint estimate in that dimension. \begin{prop}\label{Key_Prop} Given $A \in L^2_{loc}(\mathbb{R}^3; \mathbb{R}^3)$ and $V: \mathbb{R}^3 \rightarrow \mathbb{R}$ measurable, assume that the operators $-\Delta_A = -(\nabla -iA)^2$ and $H= -\Delta_A +V$ are self-adjoint and positive on $L^2$ and that \eqref{Cond_BMO} holds. Moreover, assume that $V_+$ is of Kato class and that $A$ and $V$ satisfy \eqref{Kato_class} and \begin{equation}\label{upper_bdd_of_AV} |A(x)|^2 + |\nabla\cdot A(x)| + |V(x)| \leq C_0 \min\Big( \frac{1}{|x|^{2-\epsilon}}, \frac{1}{|x|^{2+\epsilon}} \Big) \end{equation} for some $0<\epsilon\leq2$ and $C_0>0$. Then the following estimates hold: \begin{equation}\label{key_leq_ineq} \| H^{\frac{1}{4}} f \|_{L^p} \leq C_{\epsilon,p} C_0 \| (-\Delta)^{\frac{1}{4}} f \|_{L^p}, \quad 1<p \leq 6, \end{equation} \begin{equation}\label{key_geq_ineq} \| H^{\frac{1}{4}} f \|_{L^p} \geq C_p \| (-\Delta)^{\frac{1}{4}} f \|_{L^p}, \quad \frac{4}{3}<p<4. \end{equation} \end{prop} We only prove \eqref{key_leq_ineq}, since estimate \eqref{key_geq_ineq} is the same as in \cite[Theorem 1.2]{DFVV}. When $1<p<6$, estimate \eqref{key_leq_ineq} easily follows from the Sobolev embedding theorem. However, in order to extend the range of $p$ up to $6$, we need a precise estimate which depends on $\epsilon$ in \eqref{upper_bdd_of_AV}. To this end, we introduce the following weighted Sobolev inequality. 
\begin{lem}\label{lem_SW} \textbf{(Theorem 1(B) in \cite{SW2}).} Suppose $0<\alpha<n$, $1<p<q<\infty$ and $v_1(x)$ and $v_2(x)$ are nonnegative measurable functions on $\mathbb{R}^n$. Let $v_1(x)$ and $v_2(x)^{1-p'}$ satisfy the reverse doubling condition: there exist $\delta,\epsilon \in (0,1)$ such that $$ \int_{\delta Q} v_1(x)dx \leq \epsilon\int_Q v_1(x)dx \quad\mbox{for all cubes}\quad Q\subset \mathbb{R}^n, $$ and likewise with $v_1$ replaced by $v_2^{1-p'}$. Then the inequality $$ \bigg( \int_{\mathbb{R}^n} |f(x)|^q v_1(x) dx \bigg)^{\frac{1}{q}} \leq C \bigg( \int_{\mathbb{R}^n} \big|(-\Delta)^{\alpha/2} f(x) \big|^p v_2(x) dx \bigg)^{\frac{1}{p}} $$ holds if and only if $$ |Q|^{\frac{\alpha}{n}-1} \Big( \int_Q v_1(x) dx \Big)^{\frac{1}{q}} \Big( \int_Q v_2(x)^{1-p'} dx \Big)^{\frac{1}{p'}} \leq C \quad\mbox{for all cubes}\quad Q\subset \mathbb{R}^n. $$ \end{lem} From Lemma \ref{lem_SW}, we obtain a weighted estimate as follows. \begin{lem}\label{lem_CKN_modi_2} Let $f$ be a $C_0^{\infty}(\mathbb{R}^3)$ function, and suppose that a nonnegative weight function $w$ satisfies \begin{equation}\label{CKN_pf_3} w(x) \leq \min\Big( \frac{1}{|x|^{2-\epsilon}}, \frac{1}{|x|^{2+\epsilon}} \Big) \end{equation} for some $0<\epsilon\leq2$. Then, for any $1<p \leq\frac{3}{2}$, we have $$ \| f w \|_{L^p} \leq C_{\epsilon,p} \| \Delta f \|_{L^p}. $$ \end{lem} \begin{proof} For all $1<p<\frac{3}{2}$, we directly get \begin{equation}\label{CKN_inter_1} \big\| \frac{1}{|x|^2} f \big\|_{L^p} \leq C \big\| \frac{1}{|x|^2} \big\|_{L^{\frac{3}{2},\infty}} \| f \|_{L^{\frac{3p}{3-2p},p}} \leq C \| \Delta f \|_{L^p} \end{equation} from H\"older's inequality in Lorentz spaces and the Sobolev embedding theorem. 
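As a sketch of the exponent bookkeeping behind \eqref{CKN_inter_1}: H\"older's inequality in Lorentz spaces pairs $L^{\frac{3}{2},\infty}$ with $L^{\frac{3p}{3-2p},p}$, since
$$ \frac{2}{3} + \frac{3-2p}{3p} = \frac{2p+3-2p}{3p} = \frac{1}{p}, $$
and the Lorentz-refined Sobolev embedding $\dot{W}^{2,p}(\mathbb{R}^3) \hookrightarrow L^{\frac{3p}{3-2p},p}(\mathbb{R}^3)$ then gives the bound by $\| \Delta f \|_{L^p}$. Note that the restriction $p<\frac{3}{2}$ is exactly what keeps the Lorentz exponent $\frac{3p}{3-2p}$ finite and positive.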
For $p=\frac{3}{2}$, by H\"older's inequality, we get \begin{equation*} \bigg( \int_{\mathbb{R}^3} |f(x)|^{\frac{3}{2}} w(x)^{\frac{3}{2}} dx \bigg)^{\frac{2}{3}} \leq \bigg( \int_{\mathbb{R}^3} |f(x)|^{q} w(x)^{(1-\theta)q} dx \bigg)^{\frac{1}{q}} \bigg( \int_{\mathbb{R}^3} w(x)^{\frac{3q}{2q-3}\theta} dx \bigg)^{\frac{2q-3}{3q}} \end{equation*} for any $\frac{3}{2}< q<\infty$ and $0<\theta<1$. Taking $\theta = 1- \frac{3}{2q}$, we have \begin{equation*} \bigg( \int_{\mathbb{R}^3} |f(x)|^{\frac{3}{2}} w(x)^{\frac{3}{2}} dx \bigg)^{\frac{2}{3}} \leq C_{\epsilon,q} \bigg( \int_{\mathbb{R}^3} |f(x)|^{q} w(x)^{\frac{3}{2}} dx \bigg)^{\frac{1}{q}} \end{equation*} because of \eqref{CKN_pf_3}. Thus, using Lemma \ref{lem_SW} with $\alpha=2$, $(p,q)=(\frac{3}{2},q)$, $v_1(x)= w(x)^{\frac{3}{2}}$ and $v_2(x)\equiv 1$, we have \begin{equation}\label{CKN_inter_2} \bigg( \int_{\mathbb{R}^3} |f(x)|^{\frac{3}{2}} w(x)^{\frac{3}{2}} dx \bigg)^{\frac{2}{3}} \leq C_{\epsilon,q} \bigg( \int_{\mathbb{R}^3} |\Delta f(x) |^{\frac{3}{2}} w(x)^{\frac{3}{2}} dx \bigg)^{\frac{2}{3}}. \end{equation} Combining \eqref{CKN_inter_1} and \eqref{CKN_inter_2}, the proof is complete. \end{proof} Finally, we prove Proposition \ref{Key_Prop}. We apply Stein's interpolation theorem to the analytic family of operators $T_z= H^z \cdot (-\Delta)^{-z}$, where $H^z$ and $(-\Delta)^{-z}$ are defined via the spectral theorem. Denoting $z=x +iy$, we can decompose $$ T_z = T_{x+iy} = H^{iy} H^x (-\Delta)^{-x} (-\Delta)^{-iy}. $$ In fact, the operators $H^{iy}$ and $(-\Delta)^{-iy}$ are bounded according to the following result. \begin{lem}\label{Im_est_operator} \textbf{(Proposition 2.2 in \cite{DFVV}).} Consider the self-adjoint and positive operators $-\Delta_A$ and $H= -\Delta_A +V$ on $L^2$. Assume that $A \in L^2_{loc}(\mathbb{R}^3; \mathbb{R}^3)$ and that the positive and negative parts $V_{\pm}$ of $V$ satisfy: $V_{+}$ is of Kato class and $$ \| V_- \|_K < \frac{\pi^{3/2}}{\Gamma(1/2)}. 
$$ Then for all $y\in \mathbb{R}$, the imaginary powers $H^{iy}$ satisfy the $(1,1)$ weak type estimate $$ \| H^{iy} \|_{L^1\rightarrow L^{1,\infty}} \leq C (1+|y|)^{\frac{3}{2}}. $$ \end{lem} Lemma \ref{Im_est_operator} follows from the pointwise estimate for the heat kernel $p_t(x,y)$ of the semigroup $e^{-tH}$: $$ \big| p_t(x,y) \big| \leq \frac{(2t)^{-3/2}} {\pi^{3/2} - \Gamma(1/2)\|V_-\|_K} e^{-\frac{|x-y|^2}{8t}}. $$ Regarding this estimate, one may refer to \cite{Sim, SW, CD, DSY}. By Lemma \ref{Im_est_operator}, we get \begin{equation}\label{x=0_est} \| T_{iy} f\|_p \leq C (1+|y|)^3 \|f\|_p \quad\mbox{for all}\quad 1<p<\infty. \end{equation} Then by \eqref{Cond_BMO}, we have \begin{equation}\label{x=0_est_BMO} \begin{aligned} \| T_{iy} f \|_{BMO_H} &:= \big\| M^{\#}_{H} \big(H^{iy} (-\Delta)^{-iy} f \big) \big\|_{L^{\infty}} \\ &\leq C (1+|y|)^{\frac{3}{2}} \big\| (-\Delta)^{-iy} f \big\|_{BMO} \leq C (1+|y|)^3 \|f\|_{L^{\infty}}, \end{aligned} \end{equation} where $$ M^{\#}_{H} f(x) := \sup_{r>0} \frac{1}{|B(x,r)|} \int_{B(x,r)} \big| f(y)-e^{-r^2H}f(y) \big|dy <\infty. $$ Next, consider the operator $T_{1+iy}$. If \begin{equation}\label{x=1_goal} \| H (-\Delta)^{-1} f \|_{L^p} \leq C \| f \|_{L^p} \quad\mbox{for all}\quad 1<p \leq \frac{3}{2}, \end{equation} then by \eqref{x=0_est}, we get \begin{equation}\label{x=1_est} \| T_{1+iy} f \|_{L^p} \leq C \| f \|_{L^p} \quad\mbox{for all}\quad 1<p \leq \frac{3}{2}. \end{equation} Taking $\widetilde{T}_z f := M^{\#}_{H} \big( T_{z}f\big)$ and applying \eqref{x=1_est} with a basic property\footnote{Some properties of the $BMO_L$ space can be found in \cite{DY}.}: \begin{equation}\label{Lp_BMO_prop} \| M^{\#}_{H} f \|_{L^p} \leq C \|f\|_{L^p} \quad\mbox{for all}\quad 1<p\leq\infty, \end{equation} we have \begin{equation}\label{x=1_est_2} \| \widetilde{T}_{1+iy} f \|_{L^p} \leq C \| f \|_{L^p} \quad\mbox{for all}\quad 1<p \leq \frac{3}{2}. 
\end{equation} So, applying Stein's interpolation theorem to \eqref{x=0_est_BMO} and \eqref{x=1_est_2}, we obtain $$ \| \widetilde{T}_{1/4} f \|_{L^p} \leq C \| f \|_{L^p} \quad\mbox{for all}\quad 1<p \leq 6, $$ and using \eqref{Lp_BMO_prop} again, we have $$ \| H^{1/4} f \|_{L^p} \leq C \| (-\Delta)^{1/4} f \|_{L^p} \quad\mbox{for all}\quad 1<p \leq 6. $$ Now, we handle the remaining part \eqref{x=1_goal}; that is, we wish to establish the estimate $$ \| Hf \|_{L^p} \leq C \| \Delta f \|_{L^p}. $$ For a Schwartz function $f$, we can write \begin{equation}\label{expend_H} Hf = -\Delta f +2iA\cdot \nabla f + (|A|^2 +i\nabla\cdot A +V)f. \end{equation} From H\"older's inequality in Lorentz spaces and the Sobolev embedding theorem, we get $$ \| A \cdot \nabla f \|_{L^r} \leq C \| A \|_{L^{3,\infty}} \| \nabla f \|_{L^{\frac{3r}{3-r},r}} \leq C \| A \|_{L^{3,\infty}} \| \Delta f \|_{L^r} $$ for all $1<r<3$. On the other hand, applying Lemma \ref{lem_CKN_modi_2} to \eqref{upper_bdd_of_AV}, we get $$ \big\| (|A|^2 +i\nabla\cdot A +V) f \big\|_{L^r} \leq C C_0 \| \Delta f \|_{L^r} $$ for all $1<r \leq\frac{3}{2}$. Thus we have $$ \| H f \|_{L^r} \leq C \| \Delta f \|_{L^r} \quad\mbox{for all}\quad 1<r\leq \frac{3}{2}, $$ and this implies Proposition \ref{Key_Prop}. \section{Proof of Theorem \ref{main_thm_2}}\label{sec_V=0} In this final section, we prove Theorem \ref{main_thm_2}. This part follows an argument in \cite{DFVV}. Let $u$ be a solution to problem \eqref{mS_eq} of the magnetic Schr\"odinger equation in $\mathbb{R}^{n+1}$. By \eqref{expend_H}, we can expand $H$ in \eqref{mS_eq}: $$ H = -\Delta +2iA\cdot \nabla +|A|^2 +i\nabla\cdot A +V. 
$$ Thus, by Duhamel's principle and the Coulomb gauge condition \eqref{Cond_Coulomb}, we have a formal solution to \eqref{mS_eq} given by \begin{equation}\label{Duhamel_sol} u(x,t)= e^{itH}f(x) =e^{it\Delta}f(x) - i \int_{0}^{t}e^{i(t-s)\Delta} R(x,\nabla) e^{isH}f ds , \end{equation} where $$ R(x,\nabla) = 2iA \cdot \nabla_{A} -|A|^2 + V. $$ From \cite{RV} and \cite{IK} (see also (3.4) in \cite{DFVV}), it follows that for every admissible pair $(r,q)$, \begin{equation}\label{deri_inho_est} \Big\| |\nabla|^{\frac{1}{2}} \int_{0}^{t}e^{i(t-s)\Delta} F(\cdot,s) ds \Big\|_{L_t^q L_x^r} \leq C_{n,r,q} \sum_{j \in \mathbb{Z}} 2^{j/2} \| \chi_{C_j} F \|_{L^2_{x,t}}, \end{equation} where $C_j= \{ x: 2^j \leq |x| \leq 2^{j+1} \}$ and $\chi_{C_j}$ is the characteristic function of the set $C_j$. Then, from \eqref{Duhamel_sol}, \eqref{KT_result} and \eqref{deri_inho_est}, we know $$ \begin{aligned} \big\| |\nabla|^{\frac{1}{2}} u \big\|_{L^q_t L^r_x} &\leq \big\| |\nabla|^{\frac{1}{2}} e^{it\Delta}f \big\|_{L^q_t L^r_x} + \Big\| |\nabla|^{\frac{1}{2}} \int_{0}^{t}e^{i(t-s)\Delta} R(x,\nabla) e^{isH}f ds \Big\|_{L^q_t L^r_x}\\ &\leq C_{n,r,q}\big\| |\nabla|^{1/2} f \big\|_{L^2_x} + C_{n,r,q} \sum_{j \in \mathbb{Z}} 2^{j/2} \Big\| \chi_{C_j} R(x,\nabla) e^{itH}f \Big\|_{L^2_{x,t}} . \end{aligned} $$ For the second term on the far right-hand side, we get $$ \begin{aligned} &\Big\| \chi_{C_j} R(x,\nabla) e^{itH}f \Big\|_{L^2_{x,t}} \\ &\qquad\leq 2\Big\| \chi_{C_j} A\cdot \nabla_{A} e^{itH}f \Big\|_{L^2_{x,t}} + \Big\| \chi_{C_j} \big(|A|^2 +|V| \big) e^{itH}f \Big\|_{L^2_{x,t}}. \end{aligned} $$ Next, we use a known result from \cite{FV}, a smoothing estimate for the magnetic Schr\"odinger equation. \begin{lem}\label{Lamma_FV} \textbf{(Theorems 1.9 and 1.10 in \cite{FV}).} Assume $n\geq3$ and that $A$ and $V$ satisfy conditions \eqref{Cond_Coulomb}, \eqref{Cond_B_n=3} and \eqref{Cond_B_n>3}. 
Then, for any solution $u$ to \eqref{mS_eq} with $f \in L^2$ and $-\Delta_{A} f \in L^2$, the following estimate holds: $$ \begin{aligned} &\sup_{R>0} \frac{1}{R} \int_{0}^{\infty} \int_{|x|\leq R} |\nabla_{A} u|^2 dxdt ~+~ \sup_{R>0} \frac{1}{R^2} \int_{0}^{\infty} \int_{|x|= R} |u|^2 d\sigma(x)dt \\ &\qquad \leq C_{A} \| (-\Delta_{A})^\frac{1}{4} f \|_{L^2}^2. \end{aligned} $$ \end{lem} From \eqref{Cond_2'} with Lemma \ref{Lamma_FV}, we have $$ \begin{aligned} &\sum_{j \in \mathbb{Z}} 2^{j/2} \Big\| \chi_{C_j} A\cdot \nabla_{A} e^{itH}f \Big\|_{L^2_{x,t}} \\ &\quad\leq \sum_{j \in \mathbb{Z}} 2^{j} \Big( \sup_{x\in C_j}|A| \Big) \Big( \frac{1}{2^{j+1}} \int_{0}^{\infty} \int_{|x|\leq 2^{j+1}} |\nabla_{A} u|^2 dxdt \Big)^{\frac{1}{2}}\\ &\quad\leq \Big( \sum_{j \in \mathbb{Z}} 2^{j} \sup_{x\in C_j}|A| \Big) \Big( \sup_{R>0} \frac{1}{R} \int_{0}^{\infty} \int_{|x|\leq R} |\nabla_{A} u|^2 dxdt \Big)^{\frac{1}{2}}\\ &\quad\leq C_{A,\epsilon} \big\| (-\Delta_{A})^\frac{1}{4} f \big\|_{L^2_x} \end{aligned} $$ and $$ \begin{aligned} &\sum_{j \in \mathbb{Z}} 2^{j/2} \Big\| \chi_{C_j} \big(|A|^2 +|V| \big) e^{itH}f \Big\|_{L^2_{x,t}} \\ &\quad\leq \sum_{j \in \mathbb{Z}} 2^{j/2} \Big( \sup_{x\in C_j} \big(|A|^2 +|V| \big) \Big) \Big( \int_{2^j}^{2^{j+1}} r^2 \int_{0}^{\infty} \frac{1}{r^2} \int_{|x|= r} |u|^2 d\sigma_r(x) dt dr \Big)^{\frac{1}{2}}\\ &\quad\leq \Big( \sum_{j \in \mathbb{Z}} 2^{2j} \sup_{x\in C_j} \big(|A|^2 +|V| \big) \Big) \Big( \sup_{R>0} \frac{1}{R^2} \int_{0}^{\infty} \int_{|x|= R} |u|^2 d\sigma_R(x)dt \Big)^{\frac{1}{2}}\\ &\quad\leq C_{A,V,\epsilon} \big\| (-\Delta_{A})^\frac{1}{4} f \big\|_{L^2_x}. \end{aligned} $$ That is, $$ \big\| |\nabla|^{\frac{1}{2}} e^{itH}f \big\|_{L^q_t L^r_x} \leq C_{n,r,q} \big\| |\nabla|^{1/2} f \big\|_{L^2_x} + C_{n,r,q,A,V,\epsilon} \big\| (-\Delta_{A})^\frac{1}{4} f \big\|_{L^2_x}. $$ First, consider the case $n=3$. 
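Both dyadic sums above are finite thanks to \eqref{Cond_2'}: on $C_j$ one has $|A| \leq C_{\epsilon}^{1/2} \min\big( |x|^{-(2-\epsilon)/2}, |x|^{-(2+\epsilon)/2} \big)$, so, as a sketch of the computation,
$$ \sum_{j \in \mathbb{Z}} 2^{j} \sup_{x\in C_j}|A| \lesssim \sum_{j \in \mathbb{Z}} 2^{j} \min\big( 2^{-j(2-\epsilon)/2},\, 2^{-j(2+\epsilon)/2} \big) = \sum_{j \in \mathbb{Z}} \min\big( 2^{j\epsilon/2},\, 2^{-j\epsilon/2} \big) < \infty, $$
and similarly $\sum_{j \in \mathbb{Z}} 2^{2j} \sup_{x\in C_j} \big(|A|^2 +|V| \big) \lesssim \sum_{j \in \mathbb{Z}} \min\big( 2^{j\epsilon}, 2^{-j\epsilon} \big) < \infty$; both sums converge geometrically.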
By \eqref{Cond_2'}, estimate \eqref{key_leq_ineq} in Proposition \ref{Key_Prop} holds for all $1<p\leq 6$. (Here, $H= -\Delta_{A} +V$.) Then by \eqref{key_geq_ineq} in Proposition \ref{Key_Prop}, we get \begin{equation}\label{commute_relation} \begin{aligned} \big\| H^{\frac{1}{4}} e^{itH}f \big\|_{L^{q}_{t}(\mathbb{R}; L^{r}_{x}(\mathbb{R}^3))} &\leq C \big\| |\nabla|^{\frac{1}{2}} e^{itH}f \big\|_{L^{q}_{t}(\mathbb{R}; L^{r}_{x}(\mathbb{R}^3))} \\ &\leq C \big\| |\nabla|^{\frac{1}{2}} f \big\|_{L^2_x(\mathbb{R}^3)} + C \big\| (-\Delta_{A})^{\frac{1}{4}} f \big\|_{L^2_x(\mathbb{R}^3)} \\ &\leq C \big\| H^\frac{1}{4} f \big\|_{L^2_x(\mathbb{R}^3)} \end{aligned} \end{equation} for all admissible pairs $(r,q)$. (It clearly includes the endpoint case $(n,r,q)=(3,6,2)$.) Next, for the case $n\geq4$, we already know that \eqref{key_leq_ineq} holds for $1<p<2n$ and that \eqref{key_geq_ineq} is valid for $\frac{4}{3}<p<4$ under the same conditions on $A$ and $V$ (see \cite[Theorem 1.2]{DFVV}). Thus we can easily get the same bound as \eqref{commute_relation} for all admissible pairs $(r,q)$. Since the operators $H^{\frac{1}{4}}$ and $e^{itH}$ commute, we get $$ \big\| e^{itH}f \big\|_{L^{q}_{t}(\mathbb{R}; L^{r}_{x}(\mathbb{R}^n))} \leq C \big\| f \big\|_{L^2_x(\mathbb{R}^n)} $$ from \eqref{commute_relation}, and this completes the proof. \section*{Acknowledgment} Y. Koh was partially supported by NRF Grant 2016R1D1A1B03932049 (Republic of Korea). The authors thank the referee for careful reading of the manuscript and many invaluable comments. \end{document}
\begin{document} \title{\textbf{Compatibility of subsystem states}} \author{Paul Butterley,$^1$ Anthony Sudbery$^2$ and Jason Szulc$^3$\\[10pt] \small Department of Mathematics, University of York, \\[-2pt] \small Heslington, York, England YO10 5DD\\ \small $^1$ [email protected] \quad $^2$ [email protected] \quad $^3$ [email protected]} \date{22 April 2005\\------\\\normalsize \textbf{In memoriam Asher Peres}} \maketitle \begin{abstract} We examine the possible states of subsystems of a system of bits or qubits. In the classical case (bits), this means the possible marginal distributions of a probability distribution on a finite number of binary variables; we give necessary and sufficient conditions for a set of probability distributions on all proper subsets of the variables to be the marginals of a single distribution on the full set. In the quantum case (qubits), we consider mixed states of subsets of a set of qubits; in the case of three qubits, we find \emph{quantum Bell inequalities} --- necessary conditions for a set of two-qubit states to be the reduced states of a mixed state of three qubits. We conjecture that these conditions are also sufficient. \end{abstract} \section{Introduction} What can we believe about some parts of a system without contradicting what we believe about other parts? If the system is described by a set of numbers, and our beliefs are the probabilities that these numbers take given values, then a part of the system is described by a subset of the numbers and our beliefs about it will be given by marginal probabilities derived from the probability distribution of the full set of numbers. The marginal distributions of different parts are constrained by the fact that they all come from a single set of probabilities on the full system. Bell's inequalities are an example of such constraints. 
The conclusion of the EPR argument is that a single quantum system like an electron has a set of numbers giving the results of all possible measurements, even though these cannot be measured simultaneously. Wigner \cite{Wigner:ineq} presented Bell's theorem by considering the probabilities for subsets of electron observables which could be measured simultaneously (either directly, or by measuring the electron's partner in a singlet state), and showing that these subset probabilities, if they derived from a single probability distribution on the full set, would be constrained by inequalities which were not satisfied by the predictions of quantum mechanics. Other forms of Bell inequalities can also be understood in this way, as compatibility conditions on the marginal distributions of subsets. Asher Peres \cite{Peres:allBell} has considered this problem in complete generality, bringing out its formidable computational complexity. In this paper we solve the special case in which one is given joint probability distributions for all proper subsets of a set of binary variables, finding necessary and sufficient conditions for these distributions to be the marginals of a single distribution on the full set. The motivation for this study is to investigate our initial question for quantum systems. In this case our knowledge of the system is represented by a mixed state, or density matrix, and our knowledge of a part of the system is given by the reduced state, obtained by tracing the full density matrix over the rest of the system. What are the constraints on these reduced states? Our answer to the classical problem yields a possible answer to the quantum question, as the conditions on marginal probability distributions have immediate analogues for quantum states of a finite set of qubits. 
They can be translated into conditions on the density matrices of proper subsets of the qubits, which we prove, in the case of a system of three qubits, to be necessary for the density matrices to be the reductions of a (mixed) state of the full set of qubits. We conjecture that these \emph{quantum Bell-Wigner inequalities} are also sufficient conditions. For more than three qubits the corresponding conditions are not even necessary; this gives rise to new separability criteria (the \emph{generalised reduction criteria}) \cite{Bill:reduction}.

A still more general problem in classical probability, which was introduced by George Boole \cite{Boole:Laws}, is to ask when a set of real numbers $p_{ijk\cdots}$ can be simultaneous probabilities $P(E_i\,\& \,E_j\,\& \,E_k\,\&\,\ldots)$ for some events $E_i$. This problem has been investigated by Pitowsky \cite{Pitowsky:book, Pitowsky:range}, who has shown \cite{Pitowsky:polytopes} that the problem of deciding whether the relevant conditions are satisfied is NP-complete. The relation to the problem considered here (and by Peres) is that we assume that the full sample space is a Cartesian product of finite sets and that our events $E_i$ are slices of this product. Work on this problem appears to have concentrated on a (discrete or continuous) infinity of real-valued variables, i.e.\ a stochastic process, in which case there are no conditions other than the obvious ones (see \eqref{obvious} below); the Kolmogorov-Daniell theorem \cite{Kolmogorov} asserts essentially that if these are satisfied for all finite subsets of the variables, then there is a stochastic process of which they are the finite-time marginals. The focus then is on the range of possible processes having these marginals. This problem for bipartite quantum states has been studied by Parthasarathy \cite{Parthasarathy} and Rudolph \cite{Rudolph:marginal}.
The situation in the quantum problem for pure states is, in a sense, inverse to this. It is not at all easy to construct an overall pure state which has given marginals: there are other conditions to be satisfied in addition to the obvious ones \cite{polygon, Atsushi:3qutrit, Bravyi, HanZhangGuo}, and there is usually only one state with these marginals (this is the generic situation if one is given the reduced states of subsets containing more than half of the total number of qubits \cite{NickNoah:parts, Diosi:reconstruct}). This can be interpreted \cite{NoahSanduBill:power} as meaning that irreducible $n$-way correlation is exceptional in pure $n$-qubit states. However, it is not surprising that the quantum pure-state problem should be different from the general classical problem, since the classical pure-state problem is also very different. Classically, a pure probability distribution consists of certainty; its marginals are also pure, the only conditions to be satisfied by them are the obvious compatibility conditions \eqref{obvious}, and the marginals of singleton subsets uniquely determine the overall distribution. The quantum analogue of the non-trivial classical problem is to ask when a set of subsystem states is compatible with a mixed overall state. For identical particles, this problem has been much studied \cite{Coleman:book}, but the case of distinguishable particles has only recently received attention. One approach to it is outlined in \cite{NickNoah:parts}: in this paper we suggest another line of attack.

The paper is organised as follows. In section 2 we consider the classical problem and present necessary and sufficient conditions for compatibility of probability distributions on proper subsets of a finite set of binary variables.
In section 3 we describe the quantum problem, prove necessary conditions for compatibility of reduced states of two-qubit subsystems of a system of three qubits, and show that the corresponding conditions are not necessarily satisfied for a system of more than three qubits. In an appendix we review other work on the quantum marginal problem.

\section{Classical marginals}

The general classical problem is as follows. Let $S = \{X_1,\ldots,X_n\}$ be a set of random variables, with $X_i$ taking values in a finite set $V_i$. Let $A,B,\ldots$ be a set of subsets of $\{1,\ldots,n\}$, and let $S_A, S_B,\ldots \subset S$ be the corresponding sets of variables: $S_A = \{X_i: i\in A\}$. Suppose we are given joint probability distributions $P_A, P_B,\ldots$ for these sets of variables. What are the conditions for these to be the marginal distributions of a single probability distribution $P(x_1,\ldots,x_n)$? This means that if, for example, $A=\{1,\ldots,r\}$, then \[ P_A(x_1,\ldots,x_r) = \sum_{x_{r+1},\ldots,x_n}P(x_1,\ldots,x_n) \] which we write as \[ P_A = \Sigma_{S \setminus A}(P). \] There are some obvious necessary conditions: \begin{equation}\label{obvious} \Sigma_B(P_{A\cup B}) = \Sigma_C(P_{A\cup C}) \quad \text{if} \quad A\cap B = A\cap C = \emptyset. \end{equation} In particular, $P_A$ is determined by $P_S$ if $A\subset S$. We may therefore assume that in our given set of subsets, none is contained in another. We will say that the subset distributions are \emph{equimarginal} if they satisfy the conditions \eqref{obvious}. We ask what further conditions must be satisfied.
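The marginalisation operator $\Sigma$ and the equimarginality condition \eqref{obvious} are straightforward to check numerically. The following sketch is our own illustration, assuming joint distributions stored as \texttt{numpy} arrays indexed by $(x_1,\ldots,x_n)$; the helper name \texttt{marginal} is ours.

```python
import numpy as np

rng = np.random.default_rng(0)

def marginal(P, keep):
    # Sigma_{S \ A}: sum the joint distribution over the axes not kept
    drop = tuple(i for i in range(P.ndim) if i not in keep)
    return P.sum(axis=drop)

# a generic joint distribution P(x1, x2, x3) on three binary variables
P = rng.random((2, 2, 2))
P /= P.sum()

P12 = marginal(P, {0, 1})
P13 = marginal(P, {0, 2})

# condition (obvious): marginalising P12 over x2 and P13 over x3
# must yield the same one-variable distribution P1
assert np.allclose(P12.sum(axis=1), P13.sum(axis=1))
```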
The simplest non-trivial case --- which we discuss separately, for ease of reading, even though it is contained in the general case which follows --- is where $S$ is a set of three binary variables and $A,B,C$ are the three two-element subsets, so that we are considering three marginal two-variable distributions $P_{12}(x,y)$, $P_{13}(x,z)$ and $P_{23}(y,z)$ where $x,y,z\in\{0,1\}$. Wigner \cite{Wigner:ineq} pointed out that these must satisfy \begin{equation}\label{Wigner} P_{12}(x,y) \le P_{13}(x,z) + P_{23}(y,\overline{z}) \end{equation} where $\overline{z}=1-z$ (but these inequalities are not satisfied by the predictions of quantum mechanics for the measurements of the spin components of an electron in three directions, where joint measurements in two different directions are performed by measuring two electrons in a singlet state). Pitowsky \cite{Pitowsky:book} showed that the inequalities \eqref{Wigner}, and the inequalities related to them by permuting $(1,2,3)$, are sufficient for $P_{12}(x,y)$, $P_{13}(x,z)$ and $P_{23}(y,z)$ to be the marginals of a single three-variable distribution $P(x,y,z)$.

To put these inequalities in a form which has a quantum analogue, we regard $P_{12}(x,y)$ as a function of three variables $x,y,z$ which is constant in $z$, and similarly for $P_{13}(x,z)$ and $P_{23}(y,z)$. Then the functions $P_{12}, P_{13}, P_{23}$ are equimarginal if they satisfy three equations like \[ P_{12}(x,y,z) + P_{12}(x,\overline{y},z) = P_{13}(x,y,z) + P_{13}(x,y,\overline{z}) = P_1(x,y,z) \] where $P_1$ is constant in $y$ and $z$.
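Marginals that do come from a joint distribution satisfy \eqref{Wigner} automatically, since $P_{13}(x,z) + P_{23}(y,\overline{z}) \ge P(x,y,z) + P(x,y,\overline{z}) = P_{12}(x,y)$. A small numerical sketch of this (our own illustration, assuming \texttt{numpy}):

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(1)

# a genuine joint distribution on three binary variables
P = rng.random((2, 2, 2))
P /= P.sum()

# its three two-variable marginals
P12, P13, P23 = P.sum(axis=2), P.sum(axis=1), P.sum(axis=0)

# Wigner's inequalities hold for every choice of x, y, z
for x, y, z in product((0, 1), repeat=3):
    assert P12[x, y] <= P13[x, z] + P23[y, 1 - z] + 1e-12
```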
Now the observation of Wigner and Pitowsky can be expressed in terms of three-variable functions as

\begin{theorem}\label{3class} Three equimarginal two-variable functions of three binary variables, $P_{12}$, $P_{13}$ and $P_{23}$, are the two-variable marginals of a three-variable probability distribution if and only if \begin{equation}\label{ineq3class} 0 \le \Delta(x,y,z) \le 1 \qquad \text{for all}\quad x,y,z\in\{0,1\} \end{equation} where \[ \Delta = 1 - P_1 - P_2 - P_3 + P_{12} + P_{13} + P_{23}. \] \end{theorem}

\begin{proof} For $x\in\{0,1\}$, define $\sigma(x) = (-1)^x$, and write \begin{equation} \sigma_1(x,y,z) = \sigma(x), \quad \sigma_2(x,y,z) = \sigma(y), \quad \sigma_3(x,y,z) = \sigma(z). \end{equation} Then any probability distribution $P$ on $\{0,1\}^3$ can be written \begin{equation}\label{sigmarep} P = \tfrac{1}{8} + a\sigma_1 + b\sigma_2 + c\sigma_3 + d\sigma_1\sigma_2 + e\sigma_1\sigma_3 + f\sigma_2\sigma_3 + g\sigma_1\sigma_2\sigma_3 \end{equation} for some real constants $a,\ldots,g$. The marginals of $P$ are given by \begin{align}\label{marginals} P_{12} &= \quarter + 2a\sigma_1 + 2b\sigma_2 + 2d\sigma_1\sigma_2,\notag\\ P_{13} &= \quarter + 2a\sigma_1 + 2c\sigma_3 + 2e\sigma_1\sigma_3,\\ P_{23} &= \quarter + 2b\sigma_2 + 2c\sigma_3 + 2f\sigma_2\sigma_3\notag \end{align} and \vspace{-\baselineskip} \[ P_1 = \half + 4a\sigma_1, \quad P_2 = \half + 4b\sigma_2, \quad P_3 = \half + 4c\sigma_3. \] Hence \begin{equation} \Delta = \quarter + 2(d\sigma_1\sigma_2 + e\sigma_1\sigma_3 + f\sigma_2\sigma_3), \end{equation} i.e. \vspace{-\baselineskip} \begin{equation}\label{Delta} \Delta(x,y,z)= P(x,y,z) + P(\overline{x},\overline{y},\overline{z}). \end{equation} It follows that the inequality \eqref{ineq3class} is a necessary condition for the existence of the probability distribution $P(x,y,z)$.
To prove that it is sufficient, note that the equimarginal condition forces the $P_{ij}$ to be of the form \eqref{marginals}. We have to prove that there is a value of $g$ such that $P$ defined by \eqref{sigmarep} is a positive function. Let \[ Q = \tfrac{1}{8} + a\sigma_1 + b\sigma_2 + c\sigma_3 + d\sigma_1\sigma_2 + e\sigma_1\sigma_3 + f\sigma_2\sigma_3; \] then the conditions on $g$ are \begin{equation} - Q(x,y,z) \le g \le 1 - Q(x,y,z) \quad\quad\text{if }\;\sigma(x)\sigma(y)\sigma(z) = 1,\label{gineq1} \end{equation} and \begin{equation} -1 + Q(x,y,z) \le g \le Q(x,y,z) \quad\quad\text{if }\;\sigma(x)\sigma(y)\sigma(z) = -1.\label{gineq2} \end{equation} But \[ Q = \quarter(P_{12} + P_{13} + P_{23} + \Delta) - \tfrac{1}{8}. \] Hence the condition $0\le\Delta\le 1$, together with $0\le P_{ij}\le 1$, gives \[ -\tfrac{1}{8} \le Q(x,y,z)\le \tfrac{7}{8}. \] It follows that every lower bound is less than every upper bound in \eqref{gineq1} for different values of $(x,y,z)$; the same is true of \eqref{gineq2}; and every lower bound in \eqref{gineq2} is less than every upper bound in \eqref{gineq1}. Now suppose that $\sigma(x)\sigma(y)\sigma(z) = 1$ and $\sigma(x^\prime)\sigma(y^\prime)\sigma(z^\prime) = -1$. Then in the equations \[ \sigma(x) = \pm\sigma(x^\prime),\quad \sigma(y) = \pm\sigma(y^\prime), \quad \sigma(z) = \pm\sigma(z^\prime) \] either one or three of the signs are negative. If all three are negative, then \begin{align*} Q(x,y,z) + Q(x^\prime,y^\prime,z^\prime) &= \quarter + 2\big(d\sigma(x)\sigma(y) + e\sigma(x)\sigma(z) + f\sigma(y)\sigma(z)\big)\\ &= \Delta(x,y,z). \end{align*} If just one sign is negative, say the first, then \begin{align*} Q(x,y,z) + Q(x^\prime,y^\prime,z^\prime) &= \quarter + 2\big(b\sigma(y) + c\sigma(z) + f\sigma(y)\sigma(z)\big)\\ &= P_{23}(y,z).
\end{align*} In both cases we have \begin{equation}\label{Qineq1} Q(x,y,z) + Q(x^\prime,y^\prime,z^\prime) \ge 0 \end{equation} so that every lower bound in \eqref{gineq1} is less than every upper bound in \eqref{gineq2}. Thus there is a $g$ satisfying all of these inequalities and giving the required probability distribution $P(x,y,z)$. \end{proof}

A classical probabilist would (probably) find it more natural to prove necessity from the inclusion-exclusion principle, which gives $1 - \Delta(x,y,z)$ as the probability that $X_1 = x$ or $X_2 = y$ or $X_3 = z$. We have given our rather clumsier proof because it connects both with the proof of sufficiency and with the quantum problem.

We now move on to the general case of $n$ binary variables $x_1,\ldots,x_n$. Let $N = \{1,\ldots,n\}$; for subsets of $N$, we write $A\subset B$ to mean that $A$ is a proper subset of $B$, writing $A\subseteq B$ when we want to allow $A=B$; and $|A|$ denotes the number of elements of $A$. We consider probability distributions $P_A$ for subsets $A\subset N$, regarding $P_A$ as a function of $(x_1,\ldots,x_n)$ which is constant in $x_i$ for $i\notin A$. If $P(x_1,\ldots,x_n)$ is a probability distribution on all $n$ variables, its marginal distributions $P_A$ can be written in terms of operators $M_i$ on functions of $n$ binary variables defined by \[ M_if(x_1,\ldots,x_n) = f(x_1,\ldots,x_n) + f(x_1,\ldots,\overline{x_i},\ldots,x_n). \] Then \[ P_A = M_{i_1}\ldots M_{i_r}P \qquad \text{where} \quad N \setminus A = \{i_1,\ldots,i_r\}. \] The distribution $P(x_1,\ldots,x_n)$ can be expanded as \begin{equation}\label{sigmaexpansion} P = \sum_{A\subseteq N}c_A\sigma_A \end{equation} where the $c_A$ are real coefficients, with $c_\emptyset = 2^{-n}$, and \[ \sigma_A(x_1,\ldots,x_n) = \prod_{i\in A}\sigma(x_i), \qquad \sigma_\emptyset = 1.
\] Then the corresponding expansion of the marginal $P_A$ is \begin{equation}\label{marginal} P_A = 2^{n-|A|}\sum_{B\subseteq A}c_B\sigma_B. \end{equation} This equation can be inverted to give $c_A\sigma_A$ in terms of the marginals $P_A$: \begin{equation}\label{sigmaterm} c_A\sigma_A = \sum_{B\subseteq A}\frac{(-1)^{|A|-|B|}}{2^{n-|B|}}P_B. \end{equation} We can now state the generalisation of \thmref{3class} to any number of variables:

\begin{theorem}\label{nclass} Let $P_A$ ($A\subset N$) be an equimarginal set of probability distributions on subsets of the variables $x_1,\ldots,x_n$. These are the marginals of a single distribution $P(x_1,\ldots,x_n)$ if and only if for each subset $A\subseteq N$ with an odd number of elements, \begin{equation}\label{genTony} 0 \;\le\; \sum_{\substack{A\cup B = N\\ B\subset N}}(-1)^{|A\cap B|}P_B(\mathbf{x}) \;\le\; 1 \end{equation} for all $\,\mathbf{x}\in\{0,1\}^n$. \end{theorem}

\begin{proof} To prove that the condition is necessary, suppose the distribution $P$ exists and let $A$ be a subset of $N$ with an odd number of elements. Let $\mathbf{x} = (x_1,\ldots,x_n)$ and let $\mathbf{x}^\prime$ be the sequence which differs from $\mathbf{x}$ just in places belonging to $A$: \[ x_i^\prime = \begin{cases}\overline{x_i} & \text{if } i\in A\\ x_i & \text{if } i\notin A.\end{cases} \] Then \[ 0\le P(\mathbf{x}) + P(\mathbf{x}^\prime) \le 1. \] Expanding $P$ as in \eqref{sigmaexpansion}, we have \[ P(\mathbf{x}) + P(\mathbf{x}^\prime) = 2 \sum_{|A\cap B|\text{ even}}c_B\sigma_B(\mathbf{x}). \] Using \eqref{sigmaterm}, we can express this in terms of the probability distributions $P_B$; the result is the sum in \eqref{genTony}.
This can be verified by using \eqref{marginal} to expand \eqref{genTony}: \begin{equation}\label{expansion} \sum_{N\setminus A\,\subseteq\, B\,\subset\, N}(-1)^{|A\cap B|}P_B = \sum_{N\setminus A \,\subseteq \,B\, \subset\, N}(-1)^{|A\cap B|}2^{n-|B|} \sum_{D\subseteq B}c_D\sigma_D \end{equation} in which the coefficient of $c_D\sigma_D$ is \begin{align*} \sum_{\substack{B\supseteq D\\ N\setminus A \subseteq B \subset N}}(-1)^{|A\cap B|}2^{n-|B|} &= \sum_{m=|A\cap D|}^{|A|-1}(-1)^m 2^{|A| - m} \begin{pmatrix}|A| - |A\cap D|\\ m-|A\cap D|\end{pmatrix} \qquad (\text{writing } m = |A\cap B|)\\ &= (-1)^{|A\cap D|}2^{|A\setminus D|}\left\{\left(1-\half\right)^{|A\setminus D|} - \left(-\half\right)^{|A\setminus D|}\right\}\\ &=(-1)^{|A\cap D|}\left\{ 1 - (-1)^{|A\setminus D|}\right\}, \end{align*} so the right-hand side of \eqref{expansion} is \[ 2\sum_{|A\setminus D| \text{ odd}}c_D\sigma_D\; =\; 2\sum_{|A\cap D| \text{ even}} c_D\sigma_D \] since $|A|$ is odd. Thus if the distribution $P$ exists, the inequality \eqref{genTony} must be satisfied for each subset $A$ with an odd number of elements.

To show that these inequalities are sufficient for the existence of the distribution $P$, we first note, as in \thmref{3class}, that the equimarginality of the distributions $P_A$ gives us coefficients $c_B$ such that \[ P_A = \sum_{B\subseteq A}c_B\sigma_B. \] We have to prove that the stated conditions are sufficient to ensure that there is a coefficient $c_N$ such that \[ P = \sum_{A\subset N} c_A\sigma_A + c_N\sigma_N \] satisfies $0\le P(\mathbf{x}) \le 1$ for all $\mathbf{x}\in\{0,1\}^n$.
Writing \[ Q(\mathbf{x}) = \sum_{A\subset N} c_A\sigma_A(\mathbf{x}), \] we therefore need to be able to satisfy the inequalities \begin{equation}\label{cNineq1} - Q(\mathbf{x}) \le c_N \le 1 - Q(\mathbf{x}) \quad \text{whenever }\; \sigma_N(\mathbf{x}) = 1 \end{equation} and \begin{equation}\label{cNineq2} -1 + Q(\mathbf{x}^\prime) \le c_N \le Q(\mathbf{x}^\prime) \quad \text{whenever }\; \sigma_N(\mathbf{x}^\prime) = -1. \end{equation} Using \eqref{sigmaterm}, we can express $Q(\mathbf{x})$ in terms of the distributions $P_A(\mathbf{x})$ as \begin{align} Q &= \sum_{A\subset N}\sum_{B\subseteq A} \frac{(-1)^{|A|-|B|}}{2^{n-|B|}}P_B \notag\\ &= \sum_{B\subset N}\frac{P_B}{2^{n-|B|}}\sum_{B\subseteq A \subset N} (-1)^{|A|-|B|}\notag\\ &= \sum_{B\subset N}\frac{P_B}{2^{n-|B|}}\sum_{m=|B|}^{n-1} (-1)^{m-|B|}\begin{pmatrix}n-|B|\\ m-|B|\end{pmatrix}\notag\\ &= \sum_{B\subset N}\frac{(-1)^{n-|B|-1}}{2^{n-|B|}}P_B.\label{Jason} \end{align} We will now show that the inequalities \eqref{genTony} imply \begin{equation}\label{Qineq} -\frac{1}{2^n} \le Q \le 1 - \frac{1}{2^n}.
\end{equation} Indeed, summing these inequalities over all subsets $A$ with an odd number of elements (of which there are $2^{n-1}$) gives \[ 0\;\le\; \sum_{B\subset N} d_B P_B(\mathbf{x})\; \le\; 2^{n-1} \] where \begin{align*} d_B &= \sum_{\substack{A\cup B = N\\ |A|\text{ odd}}}(-1)^{|A\cap B|}\\ &= \sum_{r=0}^{|B|}\sum_{s\text{ odd}}(-1)^r \times \text{(number of $s$-element subsets $A$ with $|A\cap B| = r$ and $A\cup B = N$)}\\ &= \sum_{\substack{r=0\\ n-|B|+r\text{ odd}}}^{|B|}(-1)^r \begin{pmatrix}|B|\\ r\end{pmatrix}\\ &=\begin{cases}1 & \text{if $B = \emptyset$ and $n$ is odd}\\ 0 & \text{if $B = \emptyset$ and $n$ is even}\\ (-1)^{n-|B|+1}2^{|B|-1} & \text{otherwise}\end{cases} \end{align*} since the sum of every other binomial coefficient in the $m$th row of Pascal's triangle is $2^{m-1}$ if $m \ge 1$. Hence \[ 0\; \le\; \frac{1}{2} + \sum_{B\subset N}(-1)^{n-|B|+1}2^{|B|-1}P_B\; \le\; 2^{n-1} \] which, together with \eqref{Jason}, gives \eqref{Qineq}. It follows from \eqref{Qineq} that if the inequalities \eqref{genTony} are satisfied, then every lower bound is less than every upper bound in \eqref{cNineq1}, and therefore it is possible to satisfy all of these inequalities with a single choice of $c_N$; and the same is true of \eqref{cNineq2}. To be able to satisfy both sets of inequalities simultaneously, we need \[ 0 \le Q(\mathbf{x}) + Q(\mathbf{x}^\prime) \le 2 \qquad \text{whenever } \sigma_N(\mathbf{x}) = 1 \text{ and } \sigma_N(\mathbf{x}^\prime) = -1. \] If $\sigma_N(\mathbf{x}) = 1$ and $\sigma_N(\mathbf{x}^\prime) = -1$, $\mathbf{x}$ and $\mathbf{x}^\prime$ must differ in an odd number of places.
Let $A$ be the set of indices $i$ such that $x_i \neq x_i^\prime$; then $\sigma_B(\mathbf{x}) = -\sigma_B(\mathbf{x}^\prime)$ if and only if $|A\cap B|$ is odd, so \[ Q(\mathbf{x}) + Q(\mathbf{x}^\prime) = 2\sum_{|A\cap B| \text{ even}}c_B\sigma_B(\mathbf{x}), \] which, as we have already shown, is equal to the sum in \eqref{genTony}. Hence if \eqref{genTony} is satisfied, then $Q(\mathbf{x}) + Q(\mathbf{x}^\prime) \ge 0$, so no lower bound in \eqref{cNineq1} is greater than any upper bound in \eqref{cNineq2}; and $Q(\mathbf{x}) + Q(\mathbf{x}^\prime) \le 2$, so no lower bound in \eqref{cNineq2} is greater than any upper bound in \eqref{cNineq1}. It follows that it is possible to find a suitable coefficient $c_N$, i.e.\ the conditions are sufficient for the existence of a distribution $P$. \end{proof}

The proof of this theorem suggests an alternative set of necessary and sufficient conditions. Define the ``bit flip'' operator $\kappa_i$ on functions of $n$ binary variables $x_i\in\{0,1\}$ by \begin{equation} (\kappa_i f)(x_1,\ldots,x_n) = f(x_1,\ldots,x_{i-1},\overline{x_i},x_{i+1},\ldots,x_n), \end{equation} and for any subset $A = \{i_1,\ldots,i_r\}$, let $\kappa_A = \kappa_{i_1}\cdots\kappa_{i_r}$. Then

\begin{theorem}\label{classJason} Let $P_A$ ($A\subset N$) be an equimarginal set of probability distributions on subsets of the variables $x_1,\ldots,x_n$. These are the marginals of a single distribution $P(x_1,\ldots,x_n)$ if and only if, for all $\,\mathbf{x}\in\{0,1\}^n$, \begin{equation}\label{genJason} -\frac{1}{2^n} \le Q(\mathbf{x}) \le 1 - \frac{1}{2^n} \end{equation} and, for each subset $A\subset N$ with an odd number of elements, \begin{equation}\label{genJason2} 0\le Q(\mathbf{x}) + \kappa_A Q (\mathbf{x}) \le 2 \end{equation} where \[ Q = \sum_{A\subset N}\frac{(-1)^{n-|A|-1}}{2^{n-|A|}}P_A.
\] \end{theorem}

\begin{proof} If the distribution $P(x_1,\ldots,x_n)$ exists, then we can expand it in terms of the functions $\sigma_A$ for subsets $A\subset N$ as in \eqref{sigmaexpansion}, and we have $Q(\mathbf{x}) = P(\mathbf{x}) - c_N\sigma_N(\mathbf{x})$, as in \eqref{Jason}. The inequalities \eqref{cNineq1} and \eqref{cNineq2} follow, giving \[ 0 \le Q(\mathbf{x}) + Q(\mathbf{x}^\prime) \le 2 \qquad \text{whenever} \quad \sigma_N(\mathbf{x}) = 1 \text{ and } \sigma_N(\mathbf{x}^\prime) = -1. \] This is equivalent to \eqref{genJason2}. Moreover, if $P$ exists then \thmref{nclass} holds and the inequalities \eqref{genJason} follow, as was shown in the proof of \thmref{nclass}.

Conversely, the stated inequalities on $Q$ guarantee that every left-hand side is less than every right-hand side in both \eqref{cNineq1} and \eqref{cNineq2}, and therefore there exists a coefficient $c_N$ such that $P = Q + c_N\sigma_N$ is a probability distribution. As in \thmref{nclass}, the equimarginal distributions $P_A$ can be expanded as \[ P_A(\mathbf{x}) = \sum_{B\subseteq A}c_B\sigma_B(\mathbf{x}) \] and then, by \eqref{Jason}, \[ Q(\mathbf{x}) = \sum_{B\subset N} c_B\sigma_B(\mathbf{x}). \] Hence the marginal distribution of $P = Q + c_N\sigma_N$ over the subset $A$ is \[ \Sigma_{N\setminus A}(P) = \Sigma_{N\setminus A}(Q) = \sum_{B\subseteq A}c_B\sigma_B = P_A, \] as required. \end{proof}

\section{Quantum reduced states}

The general quantum problem concerns subsystems of a multipartite system, with state space $\mathcal{H}=\mathcal{H}_1\otimes\cdots\otimes\mathcal{H}_n$ where $\mathcal{H}_1,\ldots,\mathcal{H}_n$ are the state spaces of the individual parts of the system. For each subset $A\subset N = \{1,\ldots,n\}$, we denote the state space of the corresponding subsystem by $\mathcal{H}_A = \bigotimes_{i\in A}\mathcal{H}_i$.
Then the problem is: Given a set of subsets $A,B,\ldots$ and states $\rho_A,\rho_B,\ldots$ (density matrices on $\mathcal{H}_A,\mathcal{H}_B,\ldots$), does there exist a state $\rho$ on $\mathcal{H}$ whose reduction to $\mathcal{H}_A$ is $\rho_A$, i.e. \begin{equation}\label{reduced} \rho_A = \operatorname{tr}_{\bar{A}}(\rho)\;? \end{equation} (Here $\bar{A}$ is the complement of $A$ in $\{1,\ldots,n\}$, and $\operatorname{tr}_{\bar{A}}$ denotes the trace over $\mathcal{H}_{\bar{A}}$ in the decomposition $\mathcal{H} = \mathcal{H}_A \otimes \mathcal{H}_{\bar{A}}$.) The obvious compatibility conditions, corresponding to the classical conditions \eqref{obvious}, are \begin{equation}\label{qobvious} \operatorname{tr}_B(\rho_{A\cup B}) = \operatorname{tr}_C(\rho_{A\cup C}) \quad \text{if} \quad A\cap B = A\cap C = \emptyset. \end{equation} As in the classical case, we will call a set of states \emph{equimarginal} if they satisfy these conditions, and we can assume that none of the subsets $A,B,\ldots$ is a subset of any other.

There is a further question in the quantum case: as well as asking whether there is \emph{any} overall state with the given subsystem states as reduced states, one can ask whether there is a pure state with this property. This problem has a simplest case for which the classical and mixed problems are trivial: if the given marginals are those of all one-element subsets, then one can always construct the classical probability distribution \[ f(x_1,\ldots,x_n) = f_1(x_1)\ldots f_n(x_n) \] with one-variable marginals $f_1,\ldots,f_n$, and one can always construct the quantum multipartite mixed state \[ \rho = \rho_1\otimes\cdots\otimes\rho_n \] with one-party reduced states $\rho_1,\ldots,\rho_n$ (though not for fermions: see the appendix). But it is not always possible to find a pure state with these reductions.
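The reduction map \eqref{reduced} is easy to realise numerically as a partial trace. The sketch below is our own illustration, assuming \texttt{numpy} (the helper names \texttt{random\_qubit\_state} and \texttt{reduce\_to} are ours); it builds the product state $\rho_1\otimes\rho_2\otimes\rho_3$ and checks that it reduces back to its one-qubit factors.

```python
import numpy as np

rng = np.random.default_rng(2)

def random_qubit_state(rng):
    # a random one-qubit density matrix: positive semi-definite, trace one
    A = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
    rho = A @ A.conj().T
    return rho / np.trace(rho)

def reduce_to(rho, keep, n):
    # tr_{A-bar}: trace an n-qubit density matrix over qubits not in `keep`
    T = rho.reshape((2,) * (2 * n))
    for q in sorted(set(range(n)) - set(keep), reverse=True):
        T = np.trace(T, axis1=q, axis2=q + T.ndim // 2)
    k = len(keep)
    return T.reshape(2**k, 2**k)

# the product state rho1 (x) rho2 (x) rho3 reduces back to its factors
rhos = [random_qubit_state(rng) for _ in range(3)]
rho = np.kron(np.kron(rhos[0], rhos[1]), rhos[2])
for i in range(3):
    assert np.allclose(reduce_to(rho, {i}, 3), rhos[i])
```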
For a set of qubits, necessary and sufficient conditions were found in \cite{polygon}:

\begin{theorem} Let $\rho_1,\ldots,\rho_n$ be a set of one-qubit density matrices, and let $\lambda_i$ be the smaller eigenvalue of $\rho_i$. Then there is an $n$-qubit pure state $|\Psi\rangle$ with one-qubit reduced states $\rho_1,\ldots,\rho_n$ if and only if $\lambda_1,\ldots,\lambda_n$ satisfy the polygon inequalities \begin{equation}\label{polygon} \lambda_i \le \sum_{j\ne i}\lambda_j\,. \end{equation} \end{theorem}

This result has been extended and generalised by a number of authors. Details are given in the appendix.

Now let us consider the conditions for the existence of a mixed state with given reduced states. The simplest case, as for the classical problem, is a system of three qubits for which we are given two-qubit reduced states $\rho_{12}, \rho_{13}, \rho_{23}$. The form in which we have given the classical necessary and sufficient conditions can be immediately translated into quantum conditions by replacing probability distributions by density matrices, and inequalities between functions (holding for all values of the variables) by inequalities between expectation values of operators, holding for all states --- that is, positivity conditions on operators. We can prove that this results in necessary conditions for the quantum problem, and we conjecture that they are also sufficient. We will regard the reduced density matrix of a subsystem as an operator on the full system by supposing that it acts as the identity on the remaining factors of the full tensor product state space. That is, for three qubits, we identify $\rho_{12}$ with $\rho_{12}\otimes\mathbf{1}$, $\rho_2$ with $\mathbf{1}\otimes\rho_2\otimes\mathbf{1}$, etc.
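The polygon inequalities \eqref{polygon} amount to a one-line test on the smaller eigenvalues; a minimal sketch (the function name is ours):

```python
def pure_state_compatible(lams):
    # polygon inequalities: the smaller eigenvalues lams[i] of the
    # one-qubit reduced states arise from an n-qubit pure state iff
    # each one is at most the sum of all the others
    s = sum(lams)
    return all(l <= s - l for l in lams)

# maximally mixed one-qubit marginals (e.g. from a GHZ state) pass...
assert pure_state_compatible([0.5, 0.5, 0.5])
# ...but one fairly mixed qubit cannot have two almost-pure partners
assert not pure_state_compatible([0.4, 0.05, 0.05])
```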
Then we have

\begin{theorem}\label{3quant} \emph{\textbf{(Quantum Bell-Wigner inequalities)}} Suppose $\rho_{12}, \rho_{13}, \rho_{23}$ are the two-qubit reductions of a three-qubit mixed state. Then \[ 0\le \langle\Psi|\Delta|\Psi\rangle \le 1 \] for all normalised pure three-qubit states $|\Psi\rangle$, where \[ \Delta = \mathbf{1} - \rho_1 - \rho_2 - \rho_3 + \rho_{12} + \rho_{13} + \rho_{23}. \] \end{theorem}

\begin{proof} This can be proved in a similar way to the classical version, \thmref{3class}, with the help of the antiunitary ``universal NOT'' operator $\tau$ defined for one qubit by \[ \tau(a|0\rangle + b|1\rangle) = a^*|1\rangle - b^*|0\rangle. \] This operator satisfies $\tau^2 = -\mathbf{1}$ and anticommutes with all three Pauli operators $\sigma_i$ ($i=1,2,3$). It is antiunitary, i.e. \begin{equation}\label{antiuni} \tau|\phi\rangle = |\overline{\phi}\rangle,\; \tau|\psi\rangle = |\overline{\psi}\rangle \;\Rightarrow\; \langle\overline{\phi}|\overline{\psi}\rangle = \langle\phi|\psi\rangle^*. \end{equation} We extend this to three-qubit states and define \begin{equation}\label{tau} \tau\left(\sum_{\alpha\beta\gamma}c_{\alpha\beta\gamma}|\alpha\rangle|\beta\rangle|\gamma\rangle\right) = \sum_{\alpha\beta\gamma}(-1)^{\alpha + \beta + \gamma}c_{\alpha\beta\gamma}^* |\overline{\alpha}\rangle|\overline{\beta}\rangle|\overline{\gamma}\rangle \end{equation} where $\alpha, \beta, \gamma \in \{0,1\}$ and $\overline{\alpha} = 1 - \alpha$, etc.
This three-qubit operator is also antiunitary and squares to $-\mathbf{1}$, which implies the ``universal NOT'' property that it takes every pure state to an orthogonal state. It anticommutes with the single-qubit Pauli operators $\sigma_i\otimes\mathbf{1}\otimes\mathbf{1}$, $\mathbf{1}\otimes\sigma_j\otimes\mathbf{1}$ and $\mathbf{1}\otimes\mathbf{1}\otimes\sigma_k$. Any three-qubit mixed state can be written as \begin{align}\label{expandrho} \rho &= \eighth\mathbf{1} + a_i\sigma_i\otimes\mathbf{1}\otimes\mathbf{1} + b_j\mathbf{1}\otimes\sigma_j\otimes\mathbf{1} + c_k\mathbf{1}\otimes\mathbf{1}\otimes\sigma_k\\ &\phantom{=} + d_{ij}\sigma_i\otimes\sigma_j\otimes\mathbf{1} + e_{ik}\sigma_i\otimes\mathbf{1}\otimes\sigma_k + f_{jk}\mathbf{1}\otimes\sigma_j\otimes\sigma_k + g_{ijk}\sigma_i\otimes\sigma_j\otimes\sigma_k \notag \end{align} (using the summation convention for repeated indices), with real coefficients $a_i,\ldots,g_{ijk}$. The reduced states of $\rho$ are \begin{align}\label{reductions} \rho_{12} &= \quarter\mathbf{1} + 2a_i\sigma_i\otimes\mathbf{1} + 2b_j\mathbf{1}\otimes\sigma_j + 2d_{ij}\sigma_i\otimes\sigma_j,\notag\\ \rho_{13} &= \quarter\mathbf{1} + 2a_i\sigma_i\otimes\mathbf{1} + 2c_k\mathbf{1}\otimes\sigma_k + 2e_{ik}\sigma_i\otimes\sigma_k,\\ \rho_{23} &= \quarter\mathbf{1} + 2b_j\sigma_j\otimes\mathbf{1} + 2c_k\mathbf{1}\otimes\sigma_k + 2f_{jk}\sigma_j\otimes\sigma_k\notag \end{align} and \vspace{-\baselineskip} \[ \rho_1 = \half\mathbf{1} + 4a_i\sigma_i, \quad \rho_2 = \half\mathbf{1} + 4b_j\sigma_j, \quad \rho_3 = \half\mathbf{1} + 4c_k\sigma_k. \] Hence \begin{align*} \Delta &= \quarter\mathbf{1} + 2(d_{ij}\sigma_i\otimes\sigma_j\otimes\mathbf{1} + e_{ik}\sigma_i\otimes\mathbf{1}\otimes\sigma_k + f_{jk}\mathbf{1}\otimes\sigma_j\otimes\sigma_k)\\ &= \rho + \tau^{-1}\rho\tau \end{align*} since $\tau$ anticommutes with single-qubit Pauli operators.
Thus \begin{align} \langle\Psi|\Delta|\Psi\rangle &= \langle\Psi|\rho|\Psi\rangle + \langle\overline{\Psi}|\rho|\overline{\Psi}\rangle \quad \text{where} \quad |\overline{\Psi}\rangle = \tau|\Psi\rangle\label{qDelta}\\ &\ge 0 \quad \text{since $\rho$ is a positive operator.}\notag \end{align} Since $|\overline{\Psi}\rangle$ is orthogonal to $|\Psi\rangle$, \eqref{qDelta} also gives \[ \langle\Psi|\Delta|\Psi\rangle \le \operatorname{tr}\rho = 1, \] establishing the theorem. \end{proof}

We conjecture that the condition $0\le\Delta\le 1$ is also sufficient for the existence of a three-qubit state with marginals $\rho_{12}, \rho_{13}, \rho_{23}$.

In the general multipartite case, the classical compatibility conditions of \thmref{nclass} also have quantum analogues, namely \begin{equation}\label{genqTony} 0 \le \sum_{\substack{A\cup B = N\\ B\subset N}} (-1)^{|A\cap B|}\langle\Psi|\rho_B|\Psi\rangle \le 1 \end{equation} where $A\subseteq N$ is a subset with an odd number of elements. However, for $n > 3$ these are not even necessary conditions for compatibility (except for the case $A = N$, $n$ odd \cite{Bill:reduction}). The proof given above for $n=3$ fails because the universal-NOT operator $\tau$ is antilinear, not linear (which has the consequence that $\tau\otimes\mathbf{1}$ does not commute with $\mathbf{1}\otimes\sigma_i$). We illustrate this failure with a counter-example for $n=4$. In this case \eqref{genqTony} becomes \begin{equation}\label{qineq4} 0\le \langle\Psi|\Delta_i|\Psi\rangle \le 1, \qquad i = 1,2,3,4 \end{equation} where \[ \Delta_1 = \rho_1 - \rho_{12} - \rho_{13} - \rho_{14} + \rho_{123} + \rho_{124} + \rho_{134} \] and $\Delta_2,\Delta_3,\Delta_4$ are defined similarly.
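The failure of \eqref{qineq4} for $n=4$ can be confirmed numerically. The following self-contained sketch is our own illustration, assuming \texttt{numpy} (the helper name is ours); it builds $\Delta_1$ from partial traces of the state $\tfrac{1}{\sqrt{2}}(|0000\rangle + |1100\rangle)$ and exhibits the negative eigenvalue.

```python
import numpy as np
from itertools import combinations

def reduced_embedded(rho, keep, n):
    # trace an n-qubit state over the qubits not in `keep`, then tensor
    # with identities (in the right slots) so it acts on the full space
    keep = sorted(keep)
    rest = [q for q in range(n) if q not in keep]
    k, m = len(keep), len(rest)
    T = rho.reshape((2,) * (2 * n))
    for q in reversed(rest):
        T = np.trace(T, axis1=q, axis2=q + T.ndim // 2)
    T = np.tensordot(T, np.eye(2**m).reshape((2,) * (2 * m)), axes=0)
    # current axes: keep-rows, keep-cols, rest-rows, rest-cols;
    # permute back into qubit order
    row = {q: i for i, q in enumerate(keep)}
    row.update({q: 2 * k + i for i, q in enumerate(rest)})
    col = {q: k + i for i, q in enumerate(keep)}
    col.update({q: 2 * k + m + i for i, q in enumerate(rest)})
    perm = [row[q] for q in range(n)] + [col[q] for q in range(n)]
    return T.transpose(perm).reshape(2**n, 2**n)

n = 4
ket = np.zeros(2**n)
ket[0b0000] = ket[0b1100] = 1 / np.sqrt(2)   # (|0000> + |1100>)/sqrt(2)
rho = np.outer(ket, ket)

# Delta_1 = rho_1 - rho_12 - rho_13 - rho_14 + rho_123 + rho_124 + rho_134
D = reduced_embedded(rho, [0], n)
for j in (1, 2, 3):
    D -= reduced_embedded(rho, [0, j], n)
for j, k in combinations((1, 2, 3), 2):
    D += reduced_embedded(rho, [0, j, k], n)

print(np.linalg.eigvalsh(D).min())   # -> -0.5, so 0 <= Delta_1 fails
```

The minimum eigenvalue $-\tfrac{1}{2}$ is consistent with the eigenvector $\tfrac{1}{\sqrt{2}}(|0011\rangle + |1111\rangle)$.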
But consider
\[
\rho = |\Psi\rangle\langle\Psi| \quad \text{ where } \quad |\Psi\rangle = \tfrac{1}{\sqrt{2}}(|0000\rangle + |1100\rangle).
\]
We find that for this state
\[
\Delta_1 = \half(\mathbf{1}\otimes\mathbf{1} - 2P_+)\otimes P_1\otimes P_1 + P_+\otimes P_0 \otimes P_0
\]
where $P_+$ is the two-qubit projector onto the maximally entangled state $\tfrac{1}{\sqrt{2}}\left(|00\rangle + |11\rangle\right)$, and $P_0$ and $P_1$ are the one-qubit projectors onto $|0\rangle$ and $|1\rangle$. Thus $\Delta_1$ has a negative eigenvalue $-\half$ with eigenvector $\tfrac{1}{\sqrt{2}}(|0011\rangle + |1111\rangle)$. Since the classical inequalities are satisfied by all classical states, it is not surprising to find that the quantum analogues like \eqref{qineq4} are satisfied by separable states \cite{Bill:reduction}. Thus they constitute a set of separability criteria. These multipartite versions of the reduction criterion \cite{Hor2reduction,Cerf:reduction} have been investigated by Hall \cite{Bill:reduction}.
\vspace{\baselineskip}
\noindent\large\textbf{Acknowledgement}\quad\normalsize We are grateful to Sam Braunstein, whose remark about the Wigner inequalities set us going in the right direction.
\appendix
\section{Appendix: Beyond Qubits}
Tripartite systems made up of state spaces with dimensions $d_i$ not all equal to 2 have been studied by Higuchi ($d_1=d_2=d_3=3$) and Bravyi ($d_1=d_2=2, d_3=4$), who have found necessary conditions for a set of one-party mixed states to be the reductions of a pure tripartite state.
Their results are as follows:
\begin{theorem}
\emph{(Higuchi \cite{Atsushi:3qutrit})} Three $3\times 3$ hermitian matrices $\rho_a$ $(a = 1,2,3)$ with eigenvalues $\lambda_1^{(a)}\le\lambda_2^{(a)}\le\lambda_3^{(a)} = 1 - \lambda_1^{(a)} - \lambda_2^{(a)}$ are the reduced one-qutrit states of a pure three-qutrit state if and only if
\begin{align*}
\alpha_a &\le \alpha_b + \alpha_c,\\
\beta_a &\le \alpha_b + \beta_c,\\
\gamma_a &\le \alpha_b + \beta_c,\\
\delta_a &\le \delta_b + \delta_c,\\
\epsilon_a &\le \delta_b + \epsilon_c,\\
\zeta_a &\le \delta_b + \zeta_c,\\
\text{and } \quad \zeta_a &\le \epsilon_b + \eta_c
\end{align*}
\begin{align*}
\text{where}\qquad \qquad \alpha_a = \lambda_1^{(a)} + \lambda_2^{(a)}, \quad \beta_a = &\lambda_1^{(a)} + \lambda_3^{(a)}, \quad \gamma_a = \lambda_2^{(a)} + \lambda_3^{(a)},\\
\delta_a = \lambda_1^{(a)} + 2\lambda_2^{(a)},\quad \epsilon_a = 2\lambda_1^{(a)} + \lambda_2^{(a)},&\quad \zeta_a = 2\lambda_2^{(a)} + \lambda_3^{(a)},\quad \eta_a = 2\lambda_3^{(a)} + \lambda_2^{(a)}
\end{align*}
and $\{a,b,c\} = \{1,2,3\}$ in any order.
\end{theorem}
\begin{theorem}
\emph{(Bravyi \cite{Bravyi})} Let $\rho_1$ and $\rho_2$ be two $2\times 2$ density matrices with eigenvalues $\lambda_a\le\mu_a=1-\lambda_a\; (a = 1,2)$, and let $\rho_3$ be a $4\times 4$ density matrix with eigenvalues $\lambda_3\le\mu_3\le\nu_3\le\xi_3 = 1 - \lambda_3 - \mu_3 - \nu_3$.
Then $\rho_1, \rho_2$ and $\rho_3$ are the reduced states of a pure state in $\mathbb{C}^2\otimes\mathbb{C}^2\otimes\mathbb{C}^4$ if and only if
\begin{align*}
\lambda_a &\ge \lambda_3 + \mu_3 \quad (a = 1,2),\\
\lambda_1 + \lambda_2 &\ge 2\lambda_3 + \mu_3 + \nu_3,\\
\text{and } \qquad |\lambda_1 - \lambda_2| &\le \min\{\nu_3 - \lambda_3,\; \xi_3 - \mu_3\}.
\end{align*}
\end{theorem}
The general version of this inequality has been found by Han, Zhang and Guo \cite{HanZhangGuo}, who, however, only proved that it is necessary:
\begin{theorem}
\emph{(Han, Zhang and Guo \cite{HanZhangGuo})} Let $\rho_1,\ldots ,\rho_n$ be the reduced one-particle density matrices of a pure state of a system of $n$ particles, each with an $m$-dimensional state space. Let $\lambda_i^{(a)}\ (i=1,\ldots ,m)$ be the eigenvalues of $\rho_a$, with $\lambda_1^{(a)}\le\cdots\le\lambda_m^{(a)}$. Then for each pair $(a,b)$ of distinct particles and for each $p=1,2,\ldots, m-1$,
\[
\sum_{i=1}^p\lambda_i^{(a)} \le \sum_{i=1}^p\lambda_i^{(b)} + \sum_{\substack{c=1\\ c\neq a,b}}^n\sum_{i=1}^{m-1}\lambda_i^{(c)}.
\]
\end{theorem}
These results can now be seen as special cases of a very general theorem due to Klyachko \cite{Klyachko}. A pure state is a special case of a mixed state with a given spectrum $(1,0,\ldots,0)$. One can consider a mixed state with any given spectrum and then, given a set of one-particle states, ask whether there is a mixed many-particle state with that spectrum which yields the given one-particle states. Klyachko has shown how to obtain sets of linear inequalities which give necessary and sufficient conditions on the one-particle spectra. For systems larger than four qubits, there are thousands of inequalities.
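As a numerical illustration (our own sketch, not taken from the references), one can sample random pure states of three qutrits and confirm that the Han--Zhang--Guo inequalities above hold for their one-particle spectra:

```python
import numpy as np

np.random.seed(2)
n, m = 3, 3                      # three qutrits

def one_particle_spectra(psi):
    """Sorted (ascending) eigenvalues of each one-particle reduced state."""
    t = psi.reshape([m] * n)
    spectra = []
    for a in range(n):
        mat = np.moveaxis(t, a, 0).reshape(m, -1)
        rho_a = mat @ mat.conj().T          # reduced state of particle a
        spectra.append(np.sort(np.linalg.eigvalsh(rho_a)))
    return spectra

for _ in range(200):
    psi = np.random.randn(m ** n) + 1j * np.random.randn(m ** n)
    psi /= np.linalg.norm(psi)
    lam = one_particle_spectra(psi)
    for a in range(n):
        for b in range(n):
            if a == b:
                continue
            # sum over the other particles of their m-1 smallest eigenvalues
            slack = sum(lam[c][: m - 1].sum() for c in range(n) if c not in (a, b))
            for p in range(1, m):
                assert lam[a][:p].sum() <= lam[b][:p].sum() + slack + 1e-9
```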
Klyachko's methods belong to symplectic geometry, and are similar to the methods he used to solve the long-standing problem of Horn, who asked ``What are the possible spectra of a sum of hermitian matrices with given spectra?" Daftuar and Hayden have used these methods to find the possible spectra of a single reduced state $\rho_A$ obtained from a bipartite state $\rho_{AB}$; their paper \cite{DaftuarHayden} contains a very readable introduction to the relevant ideas from algebraic topology and symplectic geometry. There is a surprising connection to the representation theory of the symmetric group, which was also found by Christandl and Mitchison \cite{MatthiasGraeme}; roughly speaking, their result is that if the spectra of $\rho_{AB}, \rho_A$ and $\rho_B$ approximate the ratios of row lengths of Young diagrams $\lambda, \mu, \nu$, each with $N$ boxes, then the representation of the symmetric group $S_N$ labelled by $\lambda$ must occur in the tensor product of the representations labelled by $\mu$ and $\nu$. Finally, we note that for fermions there is a non-trivial compatibility problem for the one-particle reduced states of a mixed state. The solution is as follows:
\begin{theorem}
\emph{(Coleman \cite{Coleman:fermions})} An $m\times m$ density matrix $\rho$ is the reduced one-party state of a system of $n$ fermions if and only if each of its eigenvalues $\lambda$ satisfies $0\le\lambda\le 1/n$.
\end{theorem}
\begin{thebibliography}{10}
\bibitem{Boole:Laws} George Boole,
\newblock {\em An investigation of the laws of thought}
\newblock (1854; Dover, New York, 1951).
\bibitem{Bravyi} S.~Bravyi,
\newblock ``Requirements for compatibility between local and multipartite quantum states."
\newblock {\em Quantum Inf. Comp.} {\bf 4}, 12 (2004).
\newblock {\tt quant-ph/0301014}.
\bibitem{Cerf:reduction} N.~J. Cerf, C.~Adami, and R.~M.
Gingrich,
\newblock ``Quantum conditional operator and a criterion for separability."
\newblock {\em Phys. Rev. A} {\bf 60}, 893 (1999).
\newblock {\tt quant-ph/9710001}.
\bibitem{MatthiasGraeme} M.~Christandl and G.~Mitchison,
\newblock ``The spectra of density operators and the Kronecker coefficients of the symmetric group."
\newblock {\tt quant-ph/0409016}.
\bibitem{Coleman:fermions} A.~J. Coleman,
\newblock ``Structure of fermion density matrices."
\newblock {\em Rev. Mod. Phys.} {\bf 35}, 668 (1963).
\bibitem{Coleman:book} A.~J. Coleman and V.~I. Yukalov,
\newblock {\em Reduced density matrices: {C}oulson's challenge}
\newblock (Springer, Berlin, 2000).
\bibitem{DaftuarHayden} S.~Daftuar and P.~Hayden,
\newblock ``Quantum state transformations and the Schubert calculus."
\newblock {\em Ann. Phys.}, to appear (2005).
\newblock {\tt quant-ph/0410052}.
\bibitem{Diosi:reconstruct} L.~Di\'osi,
\newblock ``Three-party pure quantum states are determined by two two-party reduced states."
\newblock {\em Phys. Rev. A} {\bf 70}, 10302 (2004).
\newblock {\tt quant-ph/0403200}.
\bibitem{HanZhangGuo} Y.-J. Han, Y.-S. Zhang, and G.-C. Guo,
\newblock ``Compatibility relations between the reduced and global density matrices."
\newblock {\tt quant-ph/0403151}.
\bibitem{Atsushi:3qutrit} A.~Higuchi,
\newblock ``On the one-particle reduced density matrices of a pure three-qutrit quantum state."
\newblock {\em J. Math. Phys.}, to appear.
\newblock {\tt quant-ph/0309186}.
\bibitem{polygon} A.~Higuchi, A.~Sudbery, and J.~Szulc,
\newblock ``One-qubit reduced states of a pure many-qubit state: polygon inequalities."
\newblock {\em Phys. Rev. Lett.} {\bf 90}, 107902 (2003).
\newblock {\tt quant-ph/0209085}.
\bibitem{Hor2reduction} M.~Horodecki and P.~Horodecki,
\newblock ``Reduction criterion of separability and limits for a class of protocols of entanglement distillation."
\newblock {\em Phys. Rev. A} {\bf 59}, 4206 (1999).
\newblock {\tt quant-ph/9708015}.
\bibitem{NickNoah:parts} N.~S. Jones and N.~Linden,
\newblock ``Parts of quantum states."
\newblock {\em Phys. Rev. A} {\bf 71}, 012324 (2005).
\newblock {\tt quant-ph/0407117}.
\bibitem{Klyachko} A.~Klyachko,
\newblock ``Quantum marginal problem and representations of the symmetric group."
\newblock {\tt quant-ph/0409113}.
\bibitem{Kolmogorov} A.~N. Kolmogorov,
\newblock {\em Foundations of probability theory}
\newblock (Chelsea, London, 1950).
\bibitem{NoahSanduBill:power} N.~Linden, S.~Popescu, and W.~K. Wootters,
\newblock ``Almost every pure state of three qubits is completely determined by its two-particle reduced density matrices."
\newblock {\em Phys. Rev. Lett.} {\bf 89}, 207901 (2002).
\newblock {\tt quant-ph/0207109}.
\bibitem{Parthasarathy} K.~R. Parthasarathy,
\newblock ``Extremal quantum states in coupled systems."
\newblock {\tt quant-ph/0307182}.
\bibitem{Peres:allBell} A.~Peres,
\newblock ``All the {B}ell inequalities."
\newblock {\em Found. Phys.} {\bf 29}, 589 (1999).
\newblock {\tt quant-ph/9807017}.
\bibitem{Pitowsky:range} I.~Pitowsky,
\newblock ``The range of quantum probability."
\newblock {\em J. Math. Phys.} {\bf 27}, 1556 (1986).
\bibitem{Pitowsky:book} I.~Pitowsky,
\newblock {\em Quantum Probability --- Quantum Logic}
\newblock (Springer, Berlin, 1989).
\bibitem{Pitowsky:polytopes} I.~Pitowsky,
\newblock ``Correlation polytopes: their geometry and complexity."
\newblock {\em Math. Programming} {\bf 50}, 395 (1991).
\bibitem{Rudolph:marginal} O.~Rudolph,
\newblock ``On extremal quantum states of composite systems with fixed marginals."
\newblock {\em J. Math. Phys.} {\bf 45}, 4035 (2004).
\newblock {\tt quant-ph/0406021}.
\bibitem{Bill:reduction} W.~Hall,
\newblock ``Multipartite reduction criteria."
\newblock {\tt quant-ph/0504154}.
\bibitem{Wigner:ineq} E.~P. Wigner,
\newblock ``On hidden variables and quantum mechanical probabilities."
\newblock {\em Am. J. Phys.} {\bf 38}, 1005 (1970).
\end{thebibliography}
\end{document}
\begin{document}
\title[Sequences of consecutive squares on elliptic curves] {On sequences of consecutive squares on elliptic curves}
\author[M. Kamel] {Mohamed~Kamel}
\address{Department of Mathematics, Faculty of Science, Cairo University, Giza, Egypt}
\email{[email protected]}
\author[M. Sadek] {Mohammad~Sadek}
\address{American University in Cairo, Mathematics and Actuarial Science Department, AUC Avenue, New Cairo, Egypt}
\email{[email protected]}
\date{}
\begin{abstract}
Let $C$ be an elliptic curve defined over ${\mathbb Q}$ by the equation $y^2=x^3+Ax+B$ where $A,B\in{\mathbb Q}$. A sequence of rational points $(x_i,y_i)\in C({\mathbb Q}),\,i=1,2,\ldots,$ is said to form a sequence of consecutive squares on $C$ if the sequence of $x$-coordinates, $x_i,i=1,2,\ldots$, consists of consecutive squares. We produce an infinite family of elliptic curves $C$ with a $5$-term sequence of consecutive squares. Furthermore, this sequence consists of five independent rational points in $C({\mathbb Q})$. In particular, the rank $r$ of $C({\mathbb Q})$ satisfies $r\ge 5$.
\end{abstract}
\maketitle
\section{Introduction}
In \cite{Bremner}, Bremner initiated the discussion of certain arithmetic questions on rational points of elliptic curves, attempting to relate the group structure of an elliptic curve $E$ to the additive group structure of the rational line. He raised the question of the existence of a sequence of rational points in $E({\mathbb Q})$ whose $x$-coordinates form an arithmetic progression in ${\mathbb Q}$. Such a sequence is called an arithmetic progression sequence in $E({\mathbb Q})$. A variety of questions may be posed: for instance, how long these sequences can be, and how many elliptic curves have such long sequences of rational points. The existence of infinitely many elliptic curves with arithmetic progressions of length $8$ was proved.
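As a toy illustration of such a sequence (our own minimal example, not one of the length-$8$ progressions from the literature), the curve $y^2=x^3-x$ contains the rational $2$-torsion points $(-1,0),(0,0),(1,0)$, whose $x$-coordinates form the arithmetic progression $-1,0,1$:

```python
from fractions import Fraction as F

# toy example: three rational points on y^2 = x^3 - x whose
# x-coordinates form the arithmetic progression -1, 0, 1
pts = [(F(-1), F(0)), (F(0), F(0)), (F(1), F(0))]

for x, y in pts:
    assert y ** 2 == x ** 3 - x           # each point lies on the curve

xs = [x for x, _ in pts]
diffs = [xs[i + 1] - xs[i] for i in range(len(xs) - 1)]
assert len(set(diffs)) == 1               # common difference: an arithmetic progression
```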
Several authors have introduced different approaches to find infinitely many elliptic curves with longer arithmetic progression sequences; see \cite{Alvarado, Campbell, Macleod, Ulas1}. In \cite{BremnerUlas}, the study of sequences of rational points on elliptic curves whose $x$-coordinates form a geometric progression in ${\mathbb Q}$ was initiated. An infinite family of elliptic curves having geometric progression sequences of length 4 was exhibited. It was remarked that infinitely many elliptic curves with $5$-term geometric progression sequences can be constructed.

In this note, we discuss sequences of rational points on elliptic curves whose $x$-coordinates form a sequence of consecutive squares. We consider elliptic curves defined by the equation $y^2=ax^3+bx+c$ over ${\mathbb Q}$. We show that elliptic curves defined by the latter equation with 5-term sequences of rational points whose $x$-coordinates are elements in a sequence of consecutive squares in ${\mathbb Q}$ are parametrized by an elliptic surface whose rank is positive. Hence, one deduces the existence of infinitely many such elliptic curves. Moreover, we show that the five rational points forming the sequence are linearly independent in the group of rational points of the elliptic curve they lie on. In particular, we introduce an infinite family of elliptic curves of rank $\ge 5$.
\section{Sequences of Consecutive Squares}
\begin{Definition}
Let $C$ be an elliptic curve defined over a number field $K$ by the Weierstrass equation $y^2+a_1xy+a_3y=x^3+a_2x^2+a_4x+a_6,\,a_i\in K$. The sequence $(x_i,y_i)\in C(K)$ is said to be a {\em sequence of consecutive squares} on $C$ if there is a $u\in K$ such that $x_i=(u+i)^2$, $i=1,2,\ldots$.
\end{Definition}
The following proposition shows that any sequence of consecutive squares on an elliptic curve is finite.
\begin{Proposition} \label{prop1} Let $C$ be an elliptic curve defined over a number field $K$ by a Weierstrass equation of the form \[y^2+a_1xy+a_3y=x^3+a_2x^2+a_4x+a_6,\,a_i\in K.\] Let $(x_i,y_i)\in C(K)$ be a sequence of consecutive squares on $C$. Then the sequence $(x_i,y_i)$ is finite.
\end{Proposition}
\begin{Proof}
We can assume without loss of generality that $x_i=(u+i)^2$, $i=1,2,\ldots$, $u\in K$. This sequence gives rise to a sequence of rational points on the genus $2$ hyperelliptic curve \[{\mathcal C}:y^2+a_1x^2y+a_3y=x^6+a_2x^4+a_4x^2+a_6,\] namely the points $(u+i,y_i)\in {\mathcal C}(K)$. According to Faltings' Theorem, \cite{Falting}, one knows that ${\mathcal C}(K)$ is finite, hence the sequence is finite.
\end{Proof}
Based on the above proposition, one may present the following definition.
\begin{Definition}
Let $C$ be an elliptic curve over ${\mathbb Q}$ defined by a Weierstrass equation. Let $(x_i,y_i)\in C({\mathbb Q}),\,i=1,2,\ldots,n,$ be a sequence of consecutive squares on $C$. Then $n$ is said to be the {\em length} of the sequence.
\end{Definition}
\section{Constructing elliptic curves with long sequences of consecutive squares}
In this note, we focus our attention on the family of elliptic curves given by the affine equation $C: y^2=a x^3+b x +c$ over ${\mathbb Q}$. We will show that there are infinitely many elliptic curves defined by the latter equation containing 5-term sequences of consecutive squares. One observes that if $(t^2,d),((t+1)^2,e)$, and $((t+2)^2,f)$ lie in $C({\mathbb Q})$, where $t\in{\mathbb Q}$, then these rational points form a 3-term sequence of consecutive squares. Indeed, one has \begin{eqnarray*} d^2&=&at^6+bt^2+c\\e^2&=&a(t+1)^6+b(t+1)^2+c\\f^2&=&a(t+2)^6+b(t+2)^2+c.
\end{eqnarray*} It is a standard linear algebra exercise to show that {\footnotesize \begin{eqnarray} \label{eqabc} a&=&\frac{ (3 + 2t)d^2-4(1 + t)e^2+(1 + 2 t)f^2}{4 (15 + 73 t + 135 t^2 + 125 t^3 + 60 t^4 + 12 t^5)}\nonumber\\ b&=&\frac{(3+2 t) (3+3 t+t^2) (7+9 t+3 t^2)d^2+(1+t)(-4 (4+2 t+t^2) (4+6 t+3 t^2)e^2) }{ 4(1 + 2 t) (15 + 43 t + 49 t^2 + 27 t^3 + 6 t^4)}\nonumber\\ &+&\frac{ (1+t)(4+2 t+t^2) (4+6 t+3 t^2)f^2}{ 4(1 + 2 t) (15 + 43 t + 49 t^2 + 27 t^3 + 6 t^4)}\nonumber\\ c&=&\frac{(2 + t)^2 (15 + 43 t + 46 t^2 + 22 t^3 + 4 t^4)d^2 -8t^2(2 + t)^2 (2 + 2 t + t^2)e^2 +t^2 (1 + 5 t + 10 t^2 + 10 t^3 + 4 t^4)f^2}{4 (1 + 2 t) (15 + 28 t + 21 t^2 + 6 t^3)}.\nonumber\\ \end{eqnarray}} In particular, one has the following result. \begin{Remark} \label{Rem1} The above argument indicates that given $d,e,f\in{\mathbb Q}(t)$, there exist $a,b,c\in{\mathbb Q}(t)$ such that the ordered pairs $(t^2,d),((t+1)^2,e)$ and $((t+2)^2,f)$ are three rational points on the elliptic surface $y^2=ax^3+bx+c$. \end{Remark} Now, if $((t+3)^2,g)\in C({\mathbb Q})$, then one has a 4-term sequence of consecutive squares on $C$. In fact, using the above values for $a,b,c$, one then sees that {\footnotesize\begin{eqnarray} \label{eq1} g^2=\frac{ (5 + 2 t)((2 + t) (14 + 12 t + 3 t^2)d^2 -3(1 + t) (13 + 10 t + 3 t^2)e^2)+3 (2 + t) (1 + 2 t) (10 + 8 t + 3 t^2)f^2 }{(1 + t) (1 + 2 t) (5 + 6 t + 3 t^2)}.\nonumber\\ \end{eqnarray}} Therefore, in view of Remark \ref{Rem1}, one needs to find the elements $d,e,f$ and $g$ in ${\mathbb Q}(t)$ satisfying the latter equation in order to construct an elliptic curve $C$ with a 4-term sequence of consecutive squares.
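The linear solve behind (\ref{eqabc}) can be sanity-checked with exact rational arithmetic. The sketch below (ours, with arbitrary sample data $t=1$, $(d,e,f)=(2,3,5)$) solves the $3\times 3$ system directly by Cramer's rule, verifies that the three points lie on $y^2=ax^3+bx+c$, and confirms the displayed closed forms for $a$ and $c$ at this sample point:

```python
from fractions import Fraction as F

t = F(1)
d, e, f = F(2), F(3), F(5)

# solve a*x^3 + b*x + c = y^2 at x = t^2, (t+1)^2, (t+2)^2
xs = [(t + i) ** 2 for i in range(3)]
ys2 = [d ** 2, e ** 2, f ** 2]

def det3(M):
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
            - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
            + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

M = [[x ** 3, x, F(1)] for x in xs]
D = det3(M)

def replace_col(k):
    return [[ys2[r] if c == k else M[r][c] for c in range(3)] for r in range(3)]

a, b, c = (det3(replace_col(k)) / D for k in range(3))

# the three points lie on the curve
for x, y2 in zip(xs, ys2):
    assert a * x ** 3 + b * x + c == y2

# closed forms for a and c from (eqabc)
a_formula = ((3 + 2 * t) * d ** 2 - 4 * (1 + t) * e ** 2 + (1 + 2 * t) * f ** 2) \
    / (4 * (15 + 73 * t + 135 * t ** 2 + 125 * t ** 3 + 60 * t ** 4 + 12 * t ** 5))
c_formula = ((2 + t) ** 2 * (15 + 43 * t + 46 * t ** 2 + 22 * t ** 3 + 4 * t ** 4) * d ** 2
             - 8 * t ** 2 * (2 + t) ** 2 * (2 + 2 * t + t ** 2) * e ** 2
             + t ** 2 * (1 + 5 * t + 10 * t ** 2 + 10 * t ** 3 + 4 * t ** 4) * f ** 2) \
    / (4 * (1 + 2 * t) * (15 + 28 * t + 21 * t ** 2 + 6 * t ** 3))
assert a == a_formula
assert c == c_formula
```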
In fact, since $(d,e,f,g)=(1,1,1,1)$ is a solution for equation (\ref{eq1}), the general solution $(d,e,f,g)$ is given by the following parametrization: {\footnotesize \begin{eqnarray}\label{eq2}d&=& (2 + t) (5 + 2 t) (14 + 12 t + 3 t^2)p^2+3 (1 + t) (5 + 2 t) (13 + 10 t + 3 t^2)q^2-3 (2 + t) (1 + 2 t) (10 + 8 t + 3 t^2) w^2 \nonumber\\ &-&6 (65+141 t+111 t^2+41 t^3+6 t^4)pq+6 (20+66 t+66 t^2+31 t^3+6 t^4)pw,\nonumber\\ e&=& -(2 + t) (5 + 2 t) (14 + 12 t + 3 t^2)p^2-3 (1 + t) (5 + 2 t) (13 + 10 t + 3 t^2)q^2-3 (2 + t) (1 + 2 t) (10 + 8 t + 3 t^2) w^2 \nonumber\\ &+&2 (140+246 t+166 t^2+51 t^3+6 t^4)p q+6 (20+66 t+66 t^2+31 t^3+6 t^4)q w,\nonumber\\ f&=& -(2 + t) (5 + 2 t) (14 + 12 t + 3 t^2)p^2+3 (1 + t) (5 + 2 t) (13 + 10 t + 3 t^2)q^2+3 (2 + t) (1 + 2 t) (10 + 8 t + 3 t^2) w^2\nonumber\\ &-&6 (1 + t) (5 + 2 t) (13 + 10 t + 3 t^2) q w+2 (2 + t) (5 + 2 t) (14 + 12 t + 3 t^2)p w,\nonumber\\ g&=&-p^2(140+ 246t+ 166t^2+ 51t^3+6t^4)+3(q^2(65+ 141t+ 111t^2+ 41t^3+6t^4)\nonumber\\&-&(20+ 66t+ 66t^2+ 31t^3+6t^4)w^2).\nonumber\\ \end{eqnarray}} Consult \cite[\S 7]{Mordell} for finding parametric rational solutions of a homogeneous polynomial of degree $2$ in several variables. \begin{Remark} \label{Rem2} The points $(t^2,d),((t+1)^2,e),((t+2)^2,f), ((t+3)^2,g)$, where $d,e,f,g\in {\mathbb Q}(t,p,q,w)$ are given as above, are rational points on the elliptic surface $y^2=ax^3+bx+c$, where $a,b,c$ are defined in (\ref{eqabc}). \end{Remark} Now, we assume that $((t+4)^2,h)$ is a rational point on the elliptic curve $y^2=ax^3+bx+c$. In particular, there exists a $5$-term sequence of consecutive squares on the latter curve.
Then one has \begin{eqnarray} \label{eq:ellipticsurface} h^2=A p^4+B p^3 +C p^2 +D p + E \end{eqnarray} with {\footnotesize\begin{eqnarray*} A&=&(140 + 246 t + 166 t^2 + 51 t^3 + 6 t^4)^2\\ B&=&\frac{4(5 + 2 t)^2 (196 + 322 t + 202 t^2 + 57 t^3 +6 t^4) ( (87 + 83 t + 27 t^2 + 3 t^3)q -3 (52 + 58 t + 22 t^2 + 3 t^3) w)}{3 + 2 t}\\ C&=&\frac{2(5 +2 t) }{3 + 2 t}(5 (78330 + 250402 t + 346118 t^2 + 271991 t^3 +132943 t^4 + 41217 t^5 + 7851 t^6 + 828 t^7\\ &+& 36 t^8)q^2-12 (41580 + 154358 t + 243358 t^2 + 217563 t^3 + 121708 t^4 + 43727 t^5 + 9852 t^6 + 1272 t^7 \\&+& 72 t^8)q w +15 (2 + t)^2 (-2828 - 2552 t + 784 t^2 + 2364 t^3 + 1395 t^4 +360 t^5 + 36 t^6) w^2)\\ D&=&\frac{12 (35 + 24 t +4 t^2)}{3 + 2 t} ( (5655 + 17662 t + 23115 t^2 + 16782 t^3 + 7345 t^4 +1938 t^5 + 285 t^6 + 18 t^7)q^3\\ &+& (6660 + 18502 t + 22620 t^2 + 15567 t^3 + 6515 t^4 +1683 t^5 + 255 t^6 + 18 t^7)q^2 w -5 (3708 + 11842 t\\ &+& 16104 t^2 + 12237 t^3 + 5651 t^4 +1593 t^5 + 255 t^6 + 18 t^7)q w^2 + 3 (2 + t)^2 (260 + 888 t + 972 t^2 + 544 t^3\\ &+& 153 t^4 + 18 t^5) w^3)\\ E&=&9 ( (65 + 141 t + 111 t^2 + 41 t^3 + 6 t^4)^2 q^4 +8 (22750 + 79965 t + 121251 t^2 + 105282 t^3 + 57708 t^4\\ &+&20529 t^5 + 4643 t^6 + 612 t^7 + 36 t^8) w q^3 -2 (120300 + 457050 t + 737244 t^2 + 678163 t^3 + 394077 t^4 \\&+&149001 t^5 + 35957 t^6 + 5088 t^7 + 324 t^8) w^2 q^2 +8 (2 + t)^2 (1750 + 6380 t + 7959 t^2 + 5294 t^3 + 1987 t^4 \\&+&408 t^5 + 36 t^6)q w^3 + (20 + 66 t + 66 t^2 + 31 t^3 +6 t^4)^2 w^4). \end{eqnarray*}} \begin{Theorem} \label{thm1} The curve ${\mathcal C}:Y^2=A X^4+B X^3 +C X^2 +D X + E$ defined over ${\mathbb Q}(t)$ is birationally equivalent over ${\mathbb Q}(t,p,q,w)$ to an elliptic curve ${\mathcal E}$ with $\operatorname{rank} {\mathcal E}({\mathbb Q}(t,p,q,w))\ge 1$.
\end{Theorem}
\begin{Proof}
After homogenizing the equation describing ${\mathcal C}$, one obtains $Y^2=A X^4+B X^3Z +C X^2Z^2 +D XZ^3 + EZ^4$ with a rational point $R=(X:Y:Z)=(1:140 + 246 t + 166 t^2 + 51 t^3 + 6 t^4:0)$. The curve ${\mathcal C}$ is birationally equivalent to the cubic curve ${\mathcal E}$ defined by the equation $V^2=U^3-27 I U-27J$, \cite{StollCremona}, where $I=12AE-3BD+C^2$ and $J=72 ACE + 9 BCD -27 AD^2 -27B^2E -2 C^3$. The discriminant $\Delta({\mathcal E})$ of ${\mathcal E}$ is given by $(4I^3-J^2)/27$, and the specialization of ${\mathcal E}$ is singular only if $\Delta({\mathcal E})=0$. Moreover, the point $P=\displaystyle \left( 3\frac{3B^2 -8AC}{4A}, 27\frac{B^3 + 8A^2D - 4ABC}{8A^{3/2}}\right)$ lies in ${\mathcal E}({\mathbb Q}(t,p,q,w))$ since $A$ is a square. One considers the specialization $\displaystyle t=1 , q=\frac{81}{40} ,w=1 $ to obtain the specialization $\displaystyle \widetilde{P}=\left(\frac{-4786935489}{100},\frac{-56568093052527}{50}\right)$ of the point $P$ on the specialized elliptic curve $$\widetilde{{\mathcal E}}: y^2=x^3-\frac{147183268996968521373}{10000}x+\frac{171278570868444028577352480093}{250000}.$$ Using {\sf MAGMA}, \cite{Bosma}, one checks that $\widetilde{P}$ is a point of infinite order on $\widetilde{{\mathcal E}}$. Therefore, according to Silverman's Specialization Theorem, the point $P$ is of infinite order on ${\mathcal E}$.
\end{Proof}
\begin{Corollary} \label{cor1} For any nontrivial sequence of consecutive rational squares $t_0^2,(t_0+1)^2,(t_0+2)^2,(t_0+3)^2, (t_0+4)^2$, there exist infinitely many elliptic curves $E_m:y^2=a_mx^3+b_mx+c_m,\;m\in{\mathbb Z}\setminus\{0\},$ such that $(t_0+i)^2,i=0,1,2,3,4,$ is the $x$-coordinate of a rational point on $E_m$. Moreover, these five rational points are independent.
\end{Corollary}
\begin{Proof}
We fix $t=t_0$, $q=q_0$, and $w=w_0$ in ${\mathbb Q}$.
Substituting these values into (\ref{eq:ellipticsurface}), one obtains the elliptic curve \[{\mathcal C}_{t_0,q_0,w_0}:h^2=Ap^4+Bp^3+Cp^2+Dp+E,\; A,B,C,D,E\in{\mathbb Q},\] with positive rank, see Theorem \ref{thm1}. Now, one fixes a point $P=(p,h)$ of infinite order in ${\mathcal C}_{t_0,q_0,w_0}({\mathbb Q})$. For any nonzero integer $m$, we set $mP=(p_m,h_m)$ to be the $m$-th multiple of the point $P$ in ${\mathcal C}_{t_0,q_0,w_0}({\mathbb Q})$. Now, one substitutes $t=t_0,q=q_0,w=w_0$, and $p=p_m$ into the formulas for $d,e,f,g\in{\mathbb Q}(t,p,q,w)$ in (\ref{eq2}) in order to obtain the rational numbers $d_m,e_m,f_m,g_m$, respectively. Then one substitutes $d_m,e_m,f_m$ into the formulas for $a,b,c\in{\mathbb Q}(t,d,e,f)$ in (\ref{eqabc}) to get the rational numbers $a_m,b_m,c_m$, respectively. To sum up, one has constructed an infinite family of elliptic curves $E_m:y^2=a_mx^3+b_mx+c_m$, where $m$ is a nonzero integer. The latter infinite family $E_m$ of elliptic curves satisfies the property that the points $(t_0^2,d_m),((t_0+1)^2,e_m),((t_0+2)^2,f_m),((t_0+3)^2,g_m),((t_0+4)^2,h_m)\in E_m({\mathbb Q})$. Thus, one obtains an infinite family of elliptic curves with a $5$-term sequence of rational points whose $x$-coordinates form a sequence of consecutive squares in ${\mathbb Q}$.

To show that the points $(t_0^2,d_m),((t_0+1)^2,e_m),((t_0+2)^2,f_m),((t_0+3)^2,g_m),((t_0+4)^2,h_m)\in E_m({\mathbb Q})$ are independent, one specializes $\displaystyle t=1 , q=81/40 ,w=1 $, which yields the point of infinite order $\displaystyle (p,h)=\left(\frac{2201}{2320},\frac{-62736289}{18852320}\right)\in {\mathcal C}_{1,81/40,1}({\mathbb Q})$.
Therefore, the specialization $t=1,q=81/40,w=1,p=2201/2320$ gives us the specialized elliptic curve \[E_1: {\footnotesize y^{2}=\frac{42674183}{52786496000}x^3-\frac{612989889}{7540928000}x+\frac{1180698375893607}{2487869785676800}}\] with the following set of rational points in $E_1({\mathbb Q})$: \[\left(1,\frac{-2367005}{3770464}\right), \left(2^2,\frac{8455597}{18852320}\right), \left(3^2,\frac{-10868031}{18852320}\right), \left(4^2,\frac{-29720351}{18852320}\right), \left(5^2,\frac{-62736289}{18852320}\right).\] Using {\sf MAGMA}, \cite{Bosma}, one checks that these rational points are independent. According to Silverman's Specialization Theorem, it follows that the points $(t_0^2,d_m),((t_0+1)^2,e_m),((t_0+2)^2,f_m),((t_0+3)^2,g_m),((t_0+4)^2,h_m)$ are independent in $E_m$ over ${\mathbb Q}(t_0,q_0,w_0,p_m)$.
\end{Proof}
\begin{Remark}
Corollary \ref{cor1} implies the existence of an infinite family of elliptic curves whose rank satisfies $r\ge 5$.
\end{Remark}
\begin{Remark}
One notices that a sequence of consecutive squares on an elliptic curve gives rise to a set of rational points on some hyperelliptic curve of genus $2$, see the proof of Proposition \ref{prop1}. Therefore, according to Corollary \ref{cor1}, we are able to construct an infinite family of hyperelliptic curves ${\mathcal C}$ such that $|{\mathcal C}({\mathbb Q})|\ge 5$.
\end{Remark}
\hskip-12pt\emph{\bf{Acknowledgements.}} We would like to thank Professor Nabil Youssef, Cairo University, for his support, thorough reading of the manuscript, and several useful suggestions.
\end{document}
\begin{document}
\title{Space-time dual quantum Zeno effect: Interferometric engineering of open quantum system dynamics}
\author{Jhen-Dong Lin}
\email{[email protected]}
\affiliation{Department of Physics, National Cheng Kung University, 701 Tainan, Taiwan}
\affiliation{Center for Quantum Frontiers of Research \& Technology, NCKU, 70101 Tainan, Taiwan}
\author{Ching-Yu Huang}
\affiliation{Department of Physics, National Cheng Kung University, 701 Tainan, Taiwan}
\affiliation{Center for Quantum Frontiers of Research \& Technology, NCKU, 70101 Tainan, Taiwan}
\author{Neill Lambert}
\affiliation{Theoretical Quantum Physics Laboratory, RIKEN Cluster for Pioneering Research, Wako-shi, Saitama 351-0198, Japan}
\author{Guang-Yin Chen}
\affiliation{Department of Physics, National Chung Hsing University, Taichung 402, Taiwan}
\author{Franco Nori}
\affiliation{Theoretical Quantum Physics Laboratory, RIKEN Cluster for Pioneering Research, Wako-shi, Saitama 351-0198, Japan}
\affiliation{RIKEN Center for Quantum Computing (RQC), Wako-shi, Saitama 351-0198, Japan}
\affiliation{Department of Physics, The University of Michigan, Ann Arbor, 48109-1040 Michigan, USA}
\author{Yueh-Nan Chen}
\email{[email protected]}
\affiliation{Department of Physics, National Cheng Kung University, 701 Tainan, Taiwan}
\affiliation{Center for Quantum Frontiers of Research \& Technology, NCKU, 70101 Tainan, Taiwan}
\date{\today}
\begin{abstract}
Superposition of trajectories, which modifies quantum evolution by superposing paths through interferometry, has been utilized to enhance various quantum communication tasks. However, little is known about its impact from the viewpoint of open quantum systems. Thus, we examine this subject from the perspective of system-environment interactions. We show that the superposition of multiple trajectories can result in quantum state freezing, suggesting a space-time dual to the quantum Zeno effect.
Moreover, non-trivial Dicke-like super(sub)radiance can be triggered without utilizing multi-atom correlations. \end{abstract} \maketitle \section{Introduction} Controlling quantum dynamics is an essential part of quantum information science, which becomes richer and more challenging when considering dissipative open systems. In this regime, several unique control, or engineering, approaches are available. For example: (i) quantum reservoir engineering, which steers open system dynamics by directly manipulating an artificial environment~\cite{myatt2000decoherence,liu2011experimental,chiuri2012linear,PhysRevA.88.063806,PhysRevA.95.033610,liu2018experimental,PhysRevA.99.022107,garcia2020ibm,PhysRevLett.124.210502}; (ii) feedback-based control~\cite{PhysRevA.49.2133,PhysRevA.62.012105,PhysRevA.65.010101,wiseman2009quantum,zhang2017quantum,PhysRevA.81.062306}, where the dynamics is modified by closed-loop controls; (iii) dynamical decoupling~\cite{PhysRevLett.82.2417,PhysRevLett.83.4888,PhysRevLett.95.180501,yang2011preserving,du2009preserving,PhysRevLett.90.037901,de2010universal,liu2013noise,alonso2016generation}, which is an open-loop design to counter the effect of system-environment couplings; and (iv) the quantum Zeno effect~\cite{misra1977zeno,PhysRevA.41.2295,kofman2000acceleration,PhysRevLett.87.040402,PhysRevLett.89.080401,koshino2005quantum,PhysRevA.90.012101,chaudhry2016general,PhysRevA.77.062339,PhysRevA.80.062109,PhysRevA.82.022119,CAO2012349,ZHANG20131837,Ai2013}, where dissipation can be suppressed through frequent measurements on the open system. Recently, an interferometric scheme known as \textit{superposed trajectories} has drawn considerable research interest~\cite{PhysRevLett.91.067902,PhysRevA.72.012338,chiribella2019quantum,PhysRevA.101.012340,abbott2020communication,kristjansson2020resource,PhysRevResearch.3.013093,PhysRevD.103.065013,PhysRevD.102.085013,PhysRevLett.125.131602,ban2021two,ban2020relaxation,PhysRevA.103.032223}. 
This approach utilizes a quantum control of evolution paths to let the target system go through different evolution paths in a quantum superposition. In principle, the superposition of paths can be implemented by a Mach-Zehnder type interferometer~\cite{PhysRevLett.73.58,PhysRevA.89.062316,RevModPhys.81.1051,nairz2003quantum,sadana2019double,carine2020multi,margalit2021realization}, as illustrated in Fig.~\ref{ill_sup_traj}. The quantum interference between different evolution paths can reduce the noise effect. Thus, it is beneficial for quantum communication~\cite{PhysRevLett.91.067902, PhysRevA.72.012338,chiribella2019quantum, PhysRevA.101.012340,abbott2020communication,kristjansson2020resource, PhysRevResearch.3.013093}, quantum metrology~\cite{lee2022steering}, and quantum thermodynamics~\cite{chan2022maxwell}. Also, it has potential applications in relativistic quantum theory~\cite{PhysRevD.103.065013, PhysRevD.102.085013,PhysRevLett.125.131602}. The ability to mitigate quantum decoherence has also stimulated the quantum open-system community to investigate the concept of superposed trajectories in more detail~\cite{ban2021two,ban2020relaxation, PhysRevA.103.032223}; however, various questions remain. For instance, in many previous works, only two paths were considered, which may limit the utility of the quantum interference effect. Further, these works only consider what we call the independent-environments scenario, where the environments inside the interferometer are considered to be separated and independent from each other. This simplified scenario may deviate from real-world considerations; for instance, these environments could either be correlated or be different regions of a single environment. Here, we explore both open questions. First, we extend the exploration of the independent-environments scenario to multiple evolution paths.
Our primary result here is revealing an unexpected connection between superposed trajectories and the quantum Zeno effect. For concreteness, we consider the dissipative and the pure dephasing spin-boson models~\cite{breuer2002theory}. We find that quantum state freezing occurs when the number of superposed evolution paths reaches infinity. Moreover, we show that the effective decay can, in general, be characterized by an overlap-integral expression, which serves as a universal tool to study the quantum Zeno effect~\cite{kofman2000acceleration} and noise spectroscopy~\cite{bylander2011noise,PhysRevLett.107.230501,PhysRevLett.108.140403, PhysRevLett.129.030401}. Our result could also open an alternative approach to investigating the space-time ``dual'' quantum Zeno effect~\cite{PRXQuantum.2.040319, PhysRevLett.126.060501}, because we replace the \textit{temporal sequence} of measurements performed at one location (of the atom) with a \textit{single} measurement done on the multiple paths followed by the atom in the interferometer. In other words, a temporal sequence of measurements in one location is replaced by a single measurement at one time, but over many paths (i.e., many locations). Second, we consider an indefinite-position scenario, wherein an initially excited two-level atom is placed inside a bosonic vacuum where its position is indefinite because of the superposition of paths. The modified decay can exhibit signatures of both the superradiant and subradiant emission effects~\cite{PhysRev.93.99, PhysRevLett.76.2049,gross1982superradiance,chen2005proposal, PhysRevA.88.052320, PhysRevLett.90.166802,brandes2005coherent,chen2013examining,PhysRevA.98.063815}. It is well known that Dicke first proposed the idea of the superradiance effect induced by an ensemble of correlated atoms~\cite{PhysRev.93.99}, wherein the quantum correlations make them behave like a giant dipole moment.
Our result implies that the formation of a giant dipole moment can be emulated by only one atom with superposed trajectories. We expect that this single-atom collective effect can be utilized to design new Dicke quantum batteries~\cite{PhysRevLett.120.117702, quach2022superabsorption} and heat engines~\cite{PhysRevLett.128.180602}. \begin{figure*} \caption{Superposed quantum dynamics achieved by a multi-arm interferometer. To produce a superposition of paths, the qubit is first sent into the multi-port beam splitter (MBS1) such that the path of the qubit is prepared in $|\chi_C\rangle = \sum_{i=1}^{N}|i_C\rangle/\sqrt{N}$.\label{ill_sup_traj}} \end{figure*} \section{Independent-environments scenario} We formalize the scenario depicted in Fig.~\ref{ill_sup_traj}(a). We consider that the path of a traveling qubit $Q$ inside the interferometer is characterized by a quantum system $C$. More specifically, we introduce $N$ orthonormal states $\{|i_C\rangle\}_{i=1\cdots N}$ to describe $N$ possible paths for the qubit. When $C$ is prepared in the state $|i_C\rangle$, the qubit goes through the path labeled by $i$ and interacts with the environment $\mathcal{E}_i$. One can also prepare $C$ in a superposition state, i.e., a superposition of paths, by sending the qubit into a multi-port beam splitter [the MBS1 in Fig.~\ref{ill_sup_traj}(a)], such that $Q$ interacts with all environments $\left\{\mathcal{E}_i\right\}$ as a coherent superposition. Therefore, $C$ acts as a quantum control that determines which environment $Q$ interacts with. Further, we assume that $C$ does not directly interact with these environments. Thus, inside the interferometer, the total Hamiltonian of $C$, $Q$, and $\left\{\mathcal{E}_i\right\}_{i=1\cdots N}$ can be written as \begin{equation} H_{\text{tot}} = \sum_{i=1}^N |i_C\rangle\langle i_C| \otimes H_{Q\mathcal{E}_i} , \label{Htot} \end{equation} where $H_{Q\mathcal{E}_i}$ represents the interaction Hamiltonian of the qubit $Q$ and the environment $\mathcal{E}_i$.
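The block structure of Eq.~\eqref{Htot} can be made concrete with a small numerical sketch: in a toy two-path model where hypothetical single-qubit unitaries $U_i$ stand in for the coupled evolutions $e^{-iH_{Q\mathcal{E}_i}t}$ (no environmental degrees of freedom are traced out here), the $(i,j)$ block of the controlled state is $\chi_i\chi_j^*\,U_i\rho_Q(0)U_j^\dagger$, with the off-diagonal blocks carrying the path interference. All parameters below are arbitrary choices for illustration.

```python
import numpy as np

# Toy model: a qubit Q whose evolution path is controlled by a two-path
# system C.  Each path i applies a different unitary U_i to Q, so the total
# evolution is block diagonal: U_tot = sum_i |i_C><i_C| (x) U_i.
N = 2
theta = [0.3, 1.1]                       # hypothetical rotation angles per path
U = [np.array([[np.cos(t), -1j*np.sin(t)],
               [-1j*np.sin(t), np.cos(t)]]) for t in theta]   # X-rotations

chi_C = np.ones(N) / np.sqrt(N)          # |chi_C> = sum_i |i_C>/sqrt(N)
psi_Q = np.array([1.0, 0.0])             # qubit initial state |e>

# Total state after the controlled evolution, in the C (x) Q ordering
psi_tot = np.concatenate([chi_C[i] * (U[i] @ psi_Q) for i in range(N)])
rho_tot = np.outer(psi_tot, psi_tot.conj())

# Block (i,j) of rho_CQ; the off-diagonal block (0,1) carries the
# interference between the two paths.
rho_01 = rho_tot[0:2, 2:4]
print(np.allclose(rho_01,
                  chi_C[0]*chi_C[1]*np.outer(U[0] @ psi_Q, (U[1] @ psi_Q).conj())))
```

With environments attached, each block additionally acquires an environmental trace, which is precisely the structure of the reduced dynamics used in this section.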
The initial states of $Q$ and $\left\{\mathcal{E}_i\right\}$ are $\rho_{Q}(0)$ and $\left\{ \rho_{\mathcal{E}_i}(0)\right\}$, respectively. Also, by using MBS1, $C$ is prepared in \begin{equation} |\chi_C\rangle =\sum_{i=1}^N|i_C\rangle/\sqrt{N}. \end{equation} Thus, after passing through these environments, the reduced dynamics of the $CQ$ complex is \begin{align} &\rho_{CQ}(t) = \frac{1}{N}\sum_{i,j=1}^N |i_C\rangle \langle j_C| \otimes \rho_{Q,i,j}(t)\nonumber\\ &\text{with}~~ \rho_{Q,i,j}(t)=\mathrm{tr}_{\{\mathcal{E}_k\}}\big[e^{-iH_{Q\mathcal{E}_i }t}~\rho_Q(0)\bigotimes_{l=1}^N \rho_{\mathcal{E}_l}(0)~e^{iH_{Q\mathcal{E}_j }t}\big]. \end{align} Terms with $i=j$ describe the reduced dynamics obtained from sending the qubit into a single path with the label $i$, i.e., the single-path dynamics, while the terms with $i\neq j$ (i.e., the off-diagonal terms) capture the quantum interference between the paths $i$ and $j$. Before discarding $C$, one should perform a selective measurement on it to harness the quantum interference effect~\cite{PhysRevA.101.012340,abbott2020communication}. As shown in Fig.~\ref{ill_sup_traj}(a), this can be achieved by applying the second multi-port beam splitter (MBS2) with the phase shifters $\left\{\phi_i\right\}$ and selecting one of the output beams of the qubit. To illustrate this idea, we consider that the selective measurement is characterized by a projector \begin{align} &P_{C,\bm{\phi}} = |\chi_{C,\bm{\phi}}\rangle \langle \chi_{C,\bm{\phi}}|\nonumber \\ \text{with~~}&|\chi_{C,\bm{\phi}}\rangle =\sum_{i=1}^N \exp(i\phi_i)|i_C\rangle/\sqrt{N}.
\end{align} The post-measurement state of $Q$ then reads \begin{align} &\tilde{\rho}_{Q,\bm{\phi}}(t) =\langle \chi_{C,\bm{\phi}}|\rho_{CQ}(t)|\chi_{C,\bm{\phi}}\rangle \nonumber\\ &=\frac{1}{N^2}\sum_{i,j}e^{-i(\phi_i-\phi_j)}\rho_{Q,i,j}(t) \nonumber\\ &=\frac{1}{N}\rho_{Q,\mathrm{avg}}(t) +\frac{1}{N^2}\sum_{i\neq j}\left[e^{-i(\phi_i-\phi_j)}\rho_{Q,i,j}(t) \right].\label{rho_Q_t_unnorm} \end{align} Here, $\rho_{Q,\text{avg}}(t)=\sum_i \rho_{Q,i,i}(t)/N$ denotes the incoherent mixture of the single-path dynamics~\cite{megier2017eternal,PhysRevA.94.022118,breuer2018mixing,PhysRevA.101.062304,PhysRevA.103.022605}. Note that the normalized post-measurement state is written as \begin{equation} \rho_{Q,{\bm{\phi}}}(t)=\tilde{\rho}_{Q,{\bm{\phi}}}(t)/\mathrm{tr}\left[\tilde{\rho}_{Q,{\bm{\phi}}}(t)\right]. \end{equation} Equation~\eqref{rho_Q_t_unnorm} suggests that the interferometric modification of the qubit dynamics originates from both the incoherent mixing of single-path dynamics and the interference effects, i.e., the off-diagonal terms $\rho_{Q,i,j}(t)$ with $i\neq j$ and the phase shifts $\{\phi_i\}$. To simplify the following discussions, we introduce two additional assumptions. First, we assume that all single-path dynamics are identical; that is, $\rho_Q(t)=\rho_{Q,\mathrm{avg}}(t)=\rho_{Q,i,i}(t)~\forall i$ and $\beta(t) = \rho_{Q,i,j}(t)~\forall i\neq j$. In this case, the incoherent mixing does not yield a new dynamical process; thus, the modification is totally determined by the interference effect. This assumption holds when all of the environments are prepared in the same state and all of the qubit-environment interactions are identical. Second, we consider that each $\phi_i$ is either $0$ or $\pi$; hence, the projector associated with the selective measurement can be written as $P_{C,\bm{\phi}_n} = |\chi_{C,\bm{\phi}_n}\rangle\langle \chi_{C,\bm{\phi}_n}| $ with $n$ being the number of phase shifts that take the value $\pi$.
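As a quick numerical sanity check (not part of the derivation), the weight that such a projector assigns to the qubit state can be verified directly: with $n$ phases set to $\pi$, the quantity $\frac{1}{N^2}\big|\sum_i e^{-i\phi_i}\big|^2$ equals $\big((N-2n)/N\big)^2$, the factor $R_{N,n}$ that governs the simplified post-measurement state. A minimal Python sketch (the value $N=6$ is arbitrary):

```python
import numpy as np

# Check: with phase shifts restricted to {0, pi}, the measurement weight
# (1/N^2) |sum_i exp(-i*phi_i)|^2 depends only on the number n of pi-phases
# and equals R_{N,n} = ((N - 2n)/N)**2.
N = 6
for n in range(N + 1):
    phi = np.array([np.pi] * n + [0.0] * (N - n))
    weight = abs(np.exp(-1j * phi).sum())**2 / N**2
    assert np.isclose(weight, ((N - 2*n) / N)**2)
print("R_{N,n} identity verified for N =", N)
```

In particular, $n=N/2$ gives zero weight, which is the null result from completely destructive interference discussed below.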
The post-measurement unnormalized state of $Q$ can then be simplified as \begin{align} &\tilde{\rho}_{Q,{\bm{\phi}_n}}(t)=\rho_Q(t)/N+\left(R_{N,n}-1/N\right)\beta(t) \nonumber \\ \text{with~~} &R_{N,n} = \left(\frac{N-2n}{N}\right)^2. \end{align} One can find that $\rho_{Q,\bm{\phi}_{N/2-k}}(t)=\rho_{Q,\bm{\phi}_{N/2+k}}(t)$ for $k=0,1, \cdots, N/2-1$ (for even $N$). Note that at $t=0$ we have \begin{equation} \rho_{Q,\bm{\phi}_n}(0)= \begin{cases} \rho_Q(0) &\text{for}~n\neq \frac{N}{2}\\ \mathbf{0}&\text{for}~n=\frac{N}{2} \end{cases}. \end{equation} When $n=N/2$, one obtains a null result because of the completely destructive interference, i.e., $\langle \chi_{\bm{\phi}_0}|\chi_{\bm{\phi}_{N/2}}\rangle=0 $, and thus, we naturally exclude this scenario from the rest of the discussions. \section{Space-time dual quantum Zeno effect for the dissipative and the pure dephasing models} We now focus on two different types of spin-boson interactions, the dissipative and the pure dephasing models. Without loss of generality, we work within the interaction picture; the interaction Hamiltonians are \begin{align} H^{\text{diss}}_{Q\mathcal{E}_i}(t) &= \sum_k g_k e^{i(\omega_q-\omega_k)t}\sigma_+ a_{i,k} + g_k^* e^{-i(\omega_q-\omega_k)t} \sigma_-a^{\dagger}_{i,k}, \nonumber\\ H^{\text{deph}}_{Q\mathcal{E}_i}(t) &= \sigma_z \sum_k (g_k e^{-i\omega_k t} a_{i,k}+ g_k^* e^{i\omega_k t} a_{i,k}^{\dagger}). \label{interaction_diss} \end{align} Here, $\omega_q$ denotes the energy gap between the excited state $|e\rangle$ and the ground state $|g\rangle$ of the qubit, $\sigma_z = |e\rangle \langle e|-|g\rangle \langle g|$, $\sigma_+=|e\rangle \langle g|$ ($\sigma_-=|g\rangle \langle e|$) represents the qubit raising (lowering) operator, and $a_{i,k}$ ($a_{i,k}^{\dagger}$) stands for the annihilation (creation) operator of the mode $k$ for the environment $\mathcal{E}_i$.
Let us assume that the initial state of the total system is \begin{equation} \rho_{\text{tot}}(0)=|\chi_C\rangle\langle\chi_C| \otimes \rho_Q(0)\otimes |\text{vac}\rangle\langle \text{vac}|, \end{equation} where $|\text{vac}\rangle =\bigotimes_{i=1}^N|\text{vac}_i\rangle$, with $a_{i,k} |\text{vac}_i\rangle =0$. The associated reduced dynamics can be obtained analytically; detailed derivations can be found in Appendix~\ref{derivations}. As pointed out in Refs.~\cite{ban2021two,ban2020relaxation, PhysRevA.103.032223}, superposed trajectories can modulate the quantum non-Markovian effect. This tunability also holds in our models; we discuss it in detail in Appendix~\ref{non-markovian}. \begin{figure} \caption{Effective decay within a given time $t= \omega_q/5$ of the dissipative model for superposed trajectories and the usual quantum Zeno effect, which are described by a spectral density and the filter functions $\mathcal{F}$.\label{ill_zeno}} \end{figure} \begin{figure} \caption{Comparison of the two-atom super(sub)radiance, with inter-atomic distance $d$, with the single-atom decay modified by three superposed trajectories, where the position vectors form an equilateral triangle with edge length $d$. This can be implemented by a Young-type triple-slit experiment. One can place atom detectors at certain positions and perform quantum state tomography to verify the super(sub)radiant dynamics.
Here, the spontaneous emission rate is set to $\Gamma_0/\omega_q=0.01$, and the collective factor $\mathrm{sinc}(qd)$ is determined by the edge length $d$.\label{figsup}} \end{figure} If we now prepare the qubit state in $|\psi^{\text{diss}}_Q(0)\rangle=|e\rangle $ and $|\psi^{\text{deph}}_Q(0)\rangle=(|e\rangle +|g\rangle )/\sqrt{2}$ for the dissipative and the pure dephasing models, respectively, then \begin{align} \lim_{N\rightarrow\infty} \rho^{\text{diss}}_{Q,\bm{\phi}_n}(t) &= |\psi_Q^{\text{diss}}(0)\rangle \langle \psi_Q^{\text{diss}}(0)|, \nonumber \\ \lim_{N\rightarrow\infty} \rho^{\text{deph}}_{Q,\bm{\phi}_n}(t) &= |\psi_Q^{\text{deph}}(0)\rangle \langle \psi_Q^{\text{deph}}(0)|~~\forall t \end{align} with $n$ held fixed and finite. That is, the quantum states of the qubit are frozen when the number of paths goes to infinity. To gain a deeper insight, we introduce the survival probability defined as \begin{equation} p(t) = \tr [|\psi_Q(0)\rangle \langle \psi_Q(0)|~ \rho_{Q,\bm{\phi}_n}(t)]. \end{equation} We also consider the decay factor $\gamma(t)$ associated with the survival probability $p(t) = \exp[-\gamma(t)]$, or equivalently, \begin{equation} \gamma(t) = -\log p(t). \end{equation} To leading order in perturbation theory, the decay factor can be described by an overlap integral, in a manner similar to the traditional quantum Zeno effect~\cite{kofman2000acceleration,PhysRevLett.87.040402,PhysRevLett.89.080401,koshino2005quantum,PhysRevA.90.012101,chaudhry2016general}; namely, \begin{align} \gamma(t) = \int d\omega~ \mathcal{J}(\omega) \mathcal{F}(\omega,t,N,n). \label{decay_factor} \end{align} Here, $\mathcal{J}(\omega)$ denotes the system-environment coupling spectral density, and the filter function $\mathcal{F}(\omega,t,N,n)$ can be expressed as \begin{align} \mathcal{F}^{\text{diss}}(\omega,t,N,n)&= \frac{N}{(N-2n)^2}t^2\mathrm{sinc}^2\left[\frac{(\omega-\omega_q)}{2}t\right]\nonumber\\ \mathcal{F}^{\text{deph}}(\omega,t,N,n) &= \frac{1}{2} \frac{N}{(N-2n)^2}\frac{1-\cos(\omega t)}{\omega^2}.
\label{filter_superposed} \end{align} Note that $\mathrm{sinc}(x)=\sin(x)/x$. Equations \eqref{decay_factor} and \eqref{filter_superposed} show that one can modify the decay either by varying the number of paths $N$ or by modulating the phase shifts, i.e., changing the value of $n$. We emphasize that this overlap-integral expression can be derived for a more general class of open-system models without imposing the two assumptions mentioned earlier, namely, (1) all environments are identical, and (2) each $\phi_i$ is either $0$ or $\pi$ (see Appendix~\ref{general_zeno} for detailed derivations). The only requirement for the validity of such an expression is weak system-environment coupling, such that the reduced dynamics can be perturbatively approximated. One distinct feature of the traditional Zeno effect is that frequent measurements broaden the filter functions, as shown in Fig.~\ref{ill_zeno}. For dissipative processes, the broadening has been interpreted as a consequence of energy-time uncertainty~\cite{kofman2000acceleration} because the system energy is measured frequently. In contrast, superposed trajectories only modify the overall magnitude of the filter function without broadening it. This is physically reasonable because the open system is not measured frequently in the superposed-trajectories scenario, and therefore, the energy-time uncertainty does not come into play. \section{Indefinite-position scenario and single-atom Dicke-like decay} We now consider a scenario that generates behavior normally observed under the Dicke effect. The simplest model to illustrate the Dicke effect is that of two identical two-level atoms, with an energy gap $\omega_q$, embedded in a bosonic vacuum (see, e.g., Refs.~\cite{brandes2005coherent,PhysRevA.98.063815}).
By utilizing either the Fermi golden rule or the master-equation approach, one can predict two split decay rates, $\Gamma_\pm=\Gamma_0[1\pm \mathrm{sinc}(qd)]$, where $\Gamma_0$ represents the single-atom spontaneous decay rate, $d$ is the distance between the two atoms, and $q=\omega_q/c$, with $c$ being the speed of light in vacuum. The factor $\mathrm{sinc}(qd)$ can be interpreted as the collective effect for this two-atom model. When $\mathrm{sinc}(qd)>0$, we have $\Gamma_+>\Gamma_0$ ($\Gamma_-<\Gamma_0$), which is known as Dicke superradiance (subradiance). Inspired by this model, we consider that a single qubit $Q$ interacts with a single bosonic vacuum, where the location of $Q$ is coherently controlled by $C$, as depicted by Fig.~\ref{ill_sup_traj}(b). We model the total Hamiltonian as \begin{align} &\tilde{H}_{\text{tot}}=\sum_{i=1}^N |i_C\rangle\langle i_C|\otimes \tilde{H}(\mathbf{r}_i) \nonumber\\ \text{with~~}&\tilde{H}(\mathbf{r}_i) = \omega_q\sigma_z/2 + \sum_{\mathbf{k}} \omega_k a^\dagger_\mathbf{k}a_\mathbf{k}\nonumber\\ &~~~~~~~~~+\sigma_x \sum_{\mathbf{k}} g_{\mathbf{k}} \left( a_\mathbf{k}e^{i\mathbf{k}\cdot \mathbf{r}_i}+ a^\dagger_\mathbf{k}e^{-i\mathbf{k}\cdot \mathbf{r}_i} \right). \end{align} Here, $\{\mathbf{r}_i\}$ denotes the possible positions of $Q$ that are controlled by $C$. By taking the Born--Markov and secular approximations, the time evolution of the $CQ$ complex is governed by the following master equation \begin{align} \frac{\partial\rho_{CQ}(t)}{\partial t}=&\Gamma_0 \sum_{i=1}^N L_i \rho_{CQ}(t) L_i^{\dagger} -\frac{1}{2}\{L_i^\dagger L_i,\rho_{CQ}(t)\} \nonumber \\ +&\Gamma_0 \sum_{i\neq j } \mathrm{sinc}\big(q|\mathbf{r}_i-\mathbf{r}_j| \big) L_i \rho_{CQ}(t) L_j^\dagger. \label{master_eq_dicke} \end{align} Here, $L_i = |i_C\rangle \langle i_C| \otimes \sigma_-$. The evolution governed by the Lamb-shifted Hamiltonian is neglected for simplicity.
In Eq.~\eqref{master_eq_dicke}, we observe the emergence of the factor $\mathrm{sinc}\big(q|\mathbf{r}_i-\mathbf{r}_j| \big)$, which also appears in the two-atom example above, suggesting that a Dicke-like collective effect plays a non-trivial role here as well. We initialize the state of systems $C$ and $Q$ as $\rho_{CQ}(0) = |\chi_C\rangle \langle \chi_C| \otimes |e\rangle \langle e|$ to investigate the effective population decay of $Q$ modified by the superposed trajectories. For simplicity, we consider $|\mathbf{r}_i-\mathbf{r}_j|=d,~\forall i\neq j$. That is, the position vectors $\{\mathbf{r}_i\}$ form an equilateral triangle for $N=3$ or a regular tetrahedron for $N=4$ with the edge length $d$. Following the procedure described in the previous sections, we perform the projective measurement $P_{C,\bm{\phi}_n}$ on $C$ so that the effective dynamics of the excited-state population after the post-selection can be expressed as \begin{align} P_e(t,N,n) = \frac{\langle \chi_{\bm{\phi}_n}|\langle e|\rho_{CQ}(t)|\chi_{\bm{\phi}_n}\rangle|e\rangle}{\tr\big[\langle \chi_{\bm{\phi}_n}|\rho_{CQ}(t)|\chi_{\bm{\phi}_n}\rangle\big]} \nonumber ~~~~~~~~~~\\ =\frac{R_{N,n} e^{-\Gamma_0 t}}{\frac{1}{N}+\big(R_{N,n}-\frac{1}{N}\big)\big[e^{-\Gamma_0 t}+\mathrm{sinc}(qd)(1-e^{-\Gamma_0 t})\big]}. \end{align} Therefore, the effective decay of $Q$ depends on the factors $(N,n)$, which are determined by the superposed-trajectories setup, and most importantly, on the collective factor $\mathrm{sinc}(qd)$. In Fig.~\ref{figsup}, we present a comparison between the two-atom super(sub)radiant decay at distance $d$ [and hence with collective factor $\mathrm{sinc}(qd)$] and the single-atom effective decay obtained from three superposed trajectories, where the position vectors form an equilateral triangle with edge length $d$. We set the spontaneous emission rate $\Gamma_0/\omega_q$ to $0.01$.
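The closed-form expression for $P_e(t,N,n)$ can be cross-checked by integrating the master equation \eqref{master_eq_dicke} numerically and post-selecting. The sketch below uses hypothetical values for $\mathrm{sinc}(qd)$, $\Gamma_0$, and the time step (simple Euler stepping); the numerical result agrees with the analytic expression up to discretization error.

```python
import numpy as np

N, n, Gamma0, sinc_qd, t_end = 3, 1, 1.0, 0.6, 2.0    # hypothetical parameters
sm = np.array([[0, 0], [1, 0]], dtype=complex)        # sigma_- in the {|e>,|g>} basis
L = [np.kron(np.diag([1.0 if k == i else 0.0 for k in range(N)]), sm)
     for i in range(N)]                               # L_i = |i_C><i_C| (x) sigma_-

chi0 = np.ones(N, dtype=complex) / np.sqrt(N)                       # |chi_C>
chi_m = np.exp(1j*np.pi*np.array([1]*n + [0]*(N - n))) / np.sqrt(N)  # |chi_{phi_n}>
rho = np.outer(np.kron(chi0, [1, 0]), np.kron(chi0, [1, 0]).conj())  # |chi_C,e>

def drho(r):
    # Right-hand side of the master equation, including the sinc cross-terms
    out = np.zeros_like(r)
    for i in range(N):
        for j in range(N):
            s = 1.0 if i == j else sinc_qd
            out += Gamma0 * s * (L[i] @ r @ L[j].conj().T)
        LdL = L[i].conj().T @ L[i]
        out -= 0.5 * Gamma0 * (LdL @ r + r @ LdL)
    return out

dt = 1e-4
for _ in range(int(t_end / dt)):                      # simple Euler integration
    rho = rho + dt * drho(rho)

vec = np.kron(chi_m, [1, 0])                          # |chi_{phi_n}, e>
num = np.real(vec.conj() @ rho @ vec)
den = np.real(np.trace(np.kron(np.outer(chi_m, chi_m.conj()), np.eye(2)) @ rho))

R, Et = (N - 2*n)**2 / N**2, np.exp(-Gamma0 * t_end)
Pe_closed = R*Et / (1/N + (R - 1/N)*(Et + sinc_qd*(1 - Et)))
print("deviation from closed form:", abs(num/den - Pe_closed))
```

Note that the cross anticommutator terms of the usual two-atom Dicke master equation vanish identically here, since $L_j^\dagger L_i \propto |j_C\rangle\langle j_C|i_C\rangle\langle i_C| = 0$ for $i\neq j$.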
The single-atom effective decay can either be faster than the two-atom superradiance, i.e., for $(N=3,n=1)$, or slower than the two-atom subradiance, i.e., for $(N=3,n=0)$. For the experimental realization, a Young-type experiment (as illustrated in Fig.~\ref{figsup}), which has been applied to large molecules~\cite{nairz2003quantum}, can be considered a natural test-bed for such superposed trajectories. A beam of atoms passes through a plate pierced by two or three slits, and therefore, an atom may interact with the environment at different locations. Atom detectors can be placed at different positions (on the right-hand side) to verify the super(sub)radiant effective dynamics. This is equivalent to performing a measurement on the path degrees of freedom and selecting the associated atoms. The modified dynamics can then be obtained by performing quantum state tomography on the selected atoms. Although the effective decay can be modified similarly to the traditional Dicke effect, there are non-trivial differences between these two results. First, quantum correlations between multiple atoms, which are the most important ingredient for the traditional Dicke effect, are not present in the superposed trajectories because there is only one atom in the system. Second, it is known that the strongest superradiant effect for the traditional Dicke effect occurs in the so-called small-sample limit, i.e., $q|\mathbf{r}_i - \mathbf{r}_j|\ll 1,~\forall~ i,j$. However, this is \textit{not} the case for superposed trajectories. The equation for $\tilde{H}_{\text{tot}}$ indicates that the superposition of paths cannot create indefiniteness of the qubit position when all position vectors are identical, i.e., $|\mathbf{r}_i - \mathbf{r}_j|=0,~\forall~ i,j$; therefore, $\lim_{\{q|\mathbf{r}_i - \mathbf{r}_j|\rightarrow 0\}_{i,j}}P_e(t,N,n)=\exp(-\Gamma_0 t ),~\forall~ N,n$.
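The small-sample limit quoted above is straightforward to confirm from the closed form of $P_e(t,N,n)$: setting $\mathrm{sinc}(qd)=1$ (all positions coincident), the post-selected population reduces to the bare exponential for every $(N,n)$. A minimal sketch (the $(N,n)$ pairs are arbitrary examples):

```python
import numpy as np

# Closed-form post-selected excited-state population from the main text.
def P_e(t, N, n, sinc_qd, Gamma0=1.0):
    R = (N - 2*n)**2 / N**2
    Et = np.exp(-Gamma0 * t)
    return R * Et / (1/N + (R - 1/N) * (Et + sinc_qd * (1 - Et)))

ts = np.linspace(0.0, 3.0, 31)
for (N, n) in [(2, 0), (3, 0), (3, 1), (5, 2)]:
    # sinc(qd) -> 1: the collective modification disappears entirely
    assert np.allclose(P_e(ts, N, n, sinc_qd=1.0), np.exp(-ts))
print("P_e reduces to exp(-Gamma0*t) when sinc(qd) = 1")
```
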
\section{Summary and outlook} We studied the effects of superposed trajectories from the perspective of open quantum systems. We demonstrated a space-time dual Zeno effect arising when multiple superposed trajectories are introduced in the independent-environments scenario. More specifically, we found that it is possible to express the effective decay in terms of an overlap integral. This result provides a novel physical intuition for the problem, and we expect that it could be applicable to dynamical control~\cite{PhysRevLett.87.270405} and noise spectroscopy~\cite{bylander2011noise,PhysRevLett.107.230501,PhysRevLett.108.140403, PhysRevLett.129.030401}, which are also based on overlap integrals. Moreover, it would be interesting to investigate whether the proposed interferometric setup can trigger a space-time dual of measurement-induced phase transitions~\cite{PRXQuantum.2.040319, PhysRevLett.126.060501}, which is an application of the quantum Zeno effect to many-body physics. We leave this as promising future work. We also considered an indefinite-position scenario and demonstrated that the Dicke-like superradiant (subradiant) decay, usually observed for an ensemble of atoms, can be generated by only one atom with multiple superposed trajectories. One can naturally ask whether it is possible to induce an effective superabsorption~\cite{higgins2014superabsorption, yang2021realization}, which could then open new possibilities to design quantum batteries~\cite{PhysRevLett.120.117702, quach2022superabsorption} or quantum heat engines~\cite{PhysRevLett.128.180602}. On the other hand, there exists another type of quantum-controlled evolution that induces indefinite causal order~\cite{oreshkov2012quantum,rubino2017experimental, PhysRevLett.120.120502,zych2019bell} of quantum processes. The investigation of this approach from the perspective of open systems remains an open question.
\section{Derivations of the spin-boson models \label{derivations}} \subsection{The dissipative model} The Hamiltonian of the dissipative spin-boson model is described by \begin{equation} H^{\text{diss}}_{Q\mathcal{E}_i}(t) = \sum_k g_k e^{i(\omega_q-\omega_k)t}\sigma_+ a_{i,k} + g_k^* e^{-i(\omega_q-\omega_k)t} \sigma_-a^{\dagger}_{i,k}. \end{equation} Here, $\omega_q$ denotes the energy gap between the excited state $|e\rangle$ and the ground state $|g\rangle$ for the qubit $Q$, $\sigma_+=|e\rangle \langle g|$ ($\sigma_-=|g\rangle \langle e|$) represents the qubit raising (lowering) operator, and $a_{i,k}$ ($a_{i,k}^{\dagger}$) stands for the annihilation (creation) operator of the mode $k$ for the environment $\mathcal{E}_i$. This model can be solved analytically in the single-excitation subspace spanned by the following basis states: \begin{align} \Big\{ &|\psi_{i,g}\rangle = |i_C\rangle \otimes |g\rangle \otimes |\text{vac}\rangle,~|\psi_{i,e}\rangle =|i_C\rangle \otimes |e\rangle\otimes |\text{vac}\rangle \nonumber \\ &|\psi_{i,k_j}\rangle =|i_C\rangle \otimes |g\rangle \otimes |k_j\rangle \Big\}_{i,j=1,\cdots,N}, \end{align} where $|\text{vac}\rangle =\bigotimes_{i=1}^N|\text{vac}_i\rangle$ with $a_{i,k} |\text{vac}_i\rangle =0$ and $|k_j\rangle =a_{j,k}^{\dagger}|\text{vac}\rangle$. When the initial state of the total system lies in this subspace, the quantum state at time $t$ can be written as \begin{align} |\Psi^{\text{diss}}_\text{tot}(t)\rangle =&\sum_i\Big[c_{i,g}(t)|\psi_{i,g}\rangle +c_{i,e}(t)|\psi_{i,e}\rangle \nonumber\\ &~~~+\sum_{j,k}c_{i,k_j}(t)|\psi_{i,k_j}\rangle \Big]. \end{align} Further, the amplitudes satisfy the following coupled differential equations: \begin{equation} \begin{cases} \dot{c}_{i,g}(t)=0\\ \dot{c}_{i,e}(t) = -i\sum_k g_k e^{i(\omega_q-\omega_k)t }c_{i,k_i}(t)\\ \dot{c}_{i,k_j}(t)=-i\delta_{i,j }g_k^*e^{-i(\omega_q-\omega_k)t}c_{i,e}(t) \end{cases}~\text{for}~ i,j=1\cdots N.
\label{coupled_eqs_diss} \end{equation} Assuming that the environments are initially prepared in the vacuum state, i.e., $c_{i,k_j}(0)=0,~\forall i,j$, Eq.~\eqref{coupled_eqs_diss} can be analytically solved by Laplace transformation as \begin{align} &\begin{cases} &c_{i,g}(t) =c_{i,g}(0) \\ &c_{i,e}(t) =c_{i,e}(0)G(t) \\ &c_{i,k_j}(t) = -i\delta_{i,j} g_k^*\int_0^tdt^\prime e^{-i(\omega_q-\omega_k)t^\prime}c_{i,e}(t^\prime) \end{cases} \nonumber\\ &\text{with}~G(t) = \mathcal{L}^{-1}\big[\frac{1}{s+\hat{f}(s)}\big] \nonumber \\ &\text{and}~\hat{f}(s) = \mathcal{L}\Big[\int_0^\infty d\omega~\mathcal{J}(\omega)e^{i(\omega_q-\omega)t} \Big]. \end{align} Here, $G(t) = \mathcal{L}^{-1}\big[\frac{1}{s+\hat{f}(s)}\big]$ coincides with the dissipation function for the single-path dynamics, wherein $\mathcal{J}(\omega)=\sum_k |g_k|^2\delta(\omega-\omega_k)$ represents the spectral density function, and $\mathcal{L}$ and $\mathcal{L}^{-1}$ denote the Laplace and inverse Laplace transformations, respectively. Assuming that the initial state of the total system is written as \begin{align} |\Psi^{\text{diss}}_\text{tot}(0)\rangle &=|\chi_C\rangle \otimes \big(c_g(0)|g\rangle +c_e(0)|e\rangle \big)\otimes |\text{vac}\rangle \nonumber \\ &=\sum_{i=1}^N\frac{1}{\sqrt{N}}\big(c_g(0)|\psi_{i,g}\rangle+c_e(0)|\psi_{i,e}\rangle \big), \label{solution_diss} \end{align} where $|\chi_C\rangle =\sum_{i=1}^N |i_C\rangle /\sqrt{N}$.
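As an independent check of the dissipation function, $G(t)$ can also be obtained by directly integrating the equivalent memory-kernel (Volterra) equation $\dot{G}(t)=-\int_0^t f(t-t^\prime)G(t^\prime)\,dt^\prime$ with $f(t)=\int_0^\infty d\omega\,\mathcal{J}(\omega)e^{i(\omega_q-\omega)t}$. The sketch below uses the Lorentzian density of Eq.~\eqref{drude}, for which (extending the frequency integral over the whole real line) the kernel is exponential, $f(t)=(\gamma_0\lambda/2)e^{-\lambda t}$, and compares the numerical solution against the known damped Jaynes--Cummings-type closed form $G(t)=e^{-\lambda t/2}[\cosh(\mathrm{d}t/2)+(\lambda/\mathrm{d})\sinh(\mathrm{d}t/2)]$, $\mathrm{d}=\sqrt{\lambda^2-2\gamma_0\lambda}$. Parameters are hypothetical.

```python
import numpy as np

gamma0, lam = 1.0, 5.0               # hypothetical coupling strength and width
dt, T = 1e-3, 4.0
ts = np.arange(0.0, T + dt, dt)
f = 0.5 * gamma0 * lam * np.exp(-lam * ts)   # exponential memory kernel

# Forward-Euler integration of the Volterra equation for G(t)
G = np.empty_like(ts)
G[0] = 1.0
for k in range(len(ts) - 1):
    conv = np.trapz(f[k::-1] * G[:k + 1], dx=dt)   # \int_0^{t_k} f(t_k - s) G(s) ds
    G[k + 1] = G[k] - dt * conv

# Closed-form solution for the exponential kernel
d = np.sqrt(complex(lam**2 - 2*gamma0*lam))
G_exact = np.exp(-lam*ts/2) * (np.cosh(d*ts/2) + (lam/d)*np.sinh(d*ts/2))
print("max deviation from closed form:", np.max(np.abs(G - G_exact.real)))
```

The same routine applies unchanged to other kernels, which is convenient for spectral densities whose Laplace inversion is not available analytically.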
The reduced dynamics of systems $C$ and $Q$ can then be written as \begin{align} \rho^{\text{diss}}_{CQ}&(t)\nonumber\\=&\frac{1}{N}\sum_{i,j}|i_C\rangle \langle j_C|\otimes \big(|c_g(0)|^2|g\rangle \langle g|+c_g(0)c_e(t)^*|g\rangle \langle e|\nonumber \\ &~~~~~~~~~~~~~~~~~~+c_e(t)c_g(0)^*|e\rangle \langle g|+|c_e(t)|^2|e\rangle \langle e|\big)\nonumber\\ &+\sum_i|i_C\rangle \langle i_C|\otimes |c_{i,k_i}(t)|^2|g\rangle \langle g|, \end{align} with $c_e(t)=c_e(0)G(t).$ Consequently, the unnormalized post-measurement state will be \begin{align} \tilde{\rho}_{Q,\bm{\phi}_n}^{\text{diss}}(t)=&\langle \chi_{C,\bm{\phi}_n}|\rho^{\text{diss}}_{CQ}(t)|\chi_{C,\bm{\phi}_n}\rangle\nonumber\\ =&\big[R_{N,n}|c_g(0)|^2 +\frac{1}{N}|c_e(0)|^2(1-|G(t)|^2)\big]|g\rangle \langle g|\nonumber \\&+ R_{N,n} G^*(t)c_g(0)c_e^*(0)|g\rangle \langle e|\nonumber \\&+R_{N,n} G(t) c_e(0)c_g(0)^*|e\rangle \langle g|\nonumber \\&+R_{N,n} |G(t)|^2|c_e(0)|^2|e\rangle \langle e| \label{unnorm_post_measured_state_diss} \end{align} with $R_{N,n} =(N-2n)^2/N^2$. \subsection{The pure dephasing model} Let us now consider the pure dephasing model, for which the interaction Hamiltonian is described as \begin{equation} H^{\text{deph}}_{Q\mathcal{E}_i}(t) = \sigma_z\otimes \sum_k (g_k e^{-i\omega_k t} a_{i,k}+ g_k^* e^{i\omega_k t} a_{i,k}^{\dagger}), \label{interaction_deph} \end{equation} where $\sigma_{z} = |e\rangle \langle e| -|g\rangle \langle g|$. For this model, the unitary operator of the total system can be analytically derived as \begin{align} U_{\text{tot}}^{\text{deph}}(t) &= \mathcal{T}_+\exp \left[-i\int_0^t H_{\text{tot}}^{\text{deph}}(t^\prime)dt^\prime\right]\nonumber\\ &=\sum_{i=1}^N |i\rangle \langle i|\otimes U^{\text{deph}}_{Q\mathcal{E}_i}(t). 
\end{align} Here, \begin{align} U^{\text{deph}}_{Q\mathcal{E}_i}(t)=\exp[if(t)]\prod_k[&|e\rangle\langle e|\otimes D_i(\alpha_k)+\nonumber\\&|g\rangle\langle g|\otimes D_i(-\alpha_k)], \end{align} where $D_i(\alpha_k)=\exp(\alpha_k a_{i,k}^{\dagger} -\alpha_k^* a_{i,k})$ denotes the bosonic displacement operator with $\alpha_k = g_k^*(1-e^{i\omega_k t})/\omega_k$, and $f(t)=-\int_0^t dt^\prime_1 \int_0^{t^\prime_1}dt^\prime_2 \sum_k |g_k|^2 \sin[\omega_k(t^\prime_2-t^\prime_1)]$ characterizes an unimportant global phase. In the main text, we stated that the environments are prepared in the vacuum states. However, this model can also be solved when the environments are prepared in thermal equilibrium states. Therefore, we now consider the initial state of the total system as \begin{align} &\rho^{\text{deph}}_\text{tot}(0)=|\chi_C\rangle\langle\chi_C| \otimes \rho_Q(0)\bigotimes_{i=1}^N \rho_{\mathcal{E}_i,T}, \nonumber\\ \text{with}~~&\rho_{\mathcal{E}_i,T} = \prod_k\big[1-e^{-\omega_k/k_B T}\big]e^{-\omega_k a^{\dagger}_{i,k} a_{i,k}/k_B T}, \end{align} where $k_B$ and $T$ denote the Boltzmann constant and the temperature of the environments, respectively. After tracing out the environments, the reduced state of the systems $C$ and $Q$ can be obtained as \begin{align} \rho^{\text{deph}}_{CQ}(t)=\frac{1}{N}\big[&\sum_{i=1}^N |i_C\rangle \langle i_C|\otimes \rho_Q(t)+\nonumber\\&\sum_{i\neq j}|i_C\rangle \langle j_C|\otimes \sqrt{\phi_T(t)}\rho_Q(0)\big], \end{align} where \begin{align} \langle e|\rho_Q(t) |e\rangle &= \langle e|\rho_Q(0) |e\rangle, \nonumber \\ \langle g|\rho_Q(t) |g\rangle &= \langle g|\rho_Q(0) |g\rangle, \nonumber \\ \langle e|\rho_Q(t) |g\rangle &= \langle e|\rho_Q(0) |g\rangle \phi_T(t)=(\langle g|\rho_Q(t) |e\rangle)^*. \end{align} Here, $\phi_T(t) = \exp\big\{-4\int_0^\infty d\omega\,\frac{\mathcal{J}(\omega)}{\omega^2}\coth(\omega/2k_BT)[1-\cos(\omega t)]\big\}$ represents the dephasing factor for the single-path dynamics.
The unnormalized post-measurement state can be written as \begin{align} \tilde{\rho}_{Q,\bm{\phi}_n}^{\text{deph}}(t)=&\langle \chi_{C,\bm{\phi}_n}|\rho^{\text{deph}}_{CQ}(t)|\chi_{C,\bm{\phi}_n}\rangle \nonumber\\ &=\frac{1}{N}\rho_Q(t)+\left(R_{N,n}-\frac{1}{N}\right)\sqrt{\phi_T(t)}\rho_Q(0) \label{unnorm_post_measured_state_deph}. \end{align} For the pure dephasing model, the normalized state still undergoes pure dephasing dynamics, with the dephasing function modified as \begin{equation} \Phi(t,N,n)=\frac{\phi_T(t)+\big[(N-1)-\frac{4n}{N}(N-n)\big]\sqrt{\phi_T(t)}}{1+\big[(N-1)-\frac{4n}{N}(N-n)\big]\sqrt{\phi_T(t)}}. \end{equation} \section{Full-time dynamics and quantum non-Markovian effects \label{non-markovian}} \begin{figure} \caption{Time evolutions of the trace distance for different factors $(N,n)$. (a) For the dissipative model, we consider the Lorentzian spectral density given by Eq.~\eqref{drude}.\label{non_markovian}} \end{figure} We now discuss the full-time dynamics and the associated non-Markovian effects. A well-known indicator of quantum non-Markovianity is the non-monotonic behavior of the trace distance~\cite{PhysRevLett.103.210401, RevModPhys.88.021002}, which quantifies the distinguishability between quantum states. We consider an initial state pair $[\rho_{Q,+}(0),\rho_{Q,-}(0)]$, where \begin{align} \rho_{Q,+}(0) &= \frac{1}{2}(|e\rangle +|g\rangle )(\langle e| +\langle g|) \nonumber\\ \rho_{Q,-}(0) &= \frac{1}{2}(|e\rangle -|g\rangle )(\langle e| -\langle g|).
\end{align} The time evolutions of the trace distances for the dissipative and pure dephasing models can be derived as \begin{align} \mathcal{D}&[\rho_{Q,\bm{\phi}_n,+}^{\text{diss}}(t),\rho_{Q,\bm{\phi}_n,-}^{\text{diss}}(t)] \nonumber \\ &= \frac{2(N-2n)^2|G(t)|^2}{[(N-2n)^2-N]|G(t)|^2 + (N-2n)^2 + N}\\ \text{and}\nonumber\\ \mathcal{D}&[\rho_{Q,\bm{\phi}_n,+}^{\text{deph}}(t),\rho_{Q,\bm{\phi}_n,-}^{\text{deph}}(t)]=|\Phi(t,N,n)|, \end{align} where $\mathcal{D}(A,B)$ denotes the trace distance between $A$ and $B$. Taking the time derivative shows that the trace distances decrease monotonically whenever $|G(t)|^2$ and $|\Phi(t,N,n)|$ are monotonically decreasing functions. For the dissipative model, the criterion for monotonic decrease under superposed trajectories coincides with that of the single-path dynamics (i.e., $d|G(t)|^2/dt \leq 0$). Therefore, non-monotonic behavior cannot be activated by superposed dynamics whenever the single-path dynamics is monotonically decreasing. We present the numerical results for the non-Markovian dynamics in Fig.~\ref{non_markovian}. For the dissipative model, we consider the Lorentzian spectral density expressed as \begin{equation} \mathcal{J}_{\text{L}}(\omega)=\frac{1}{2\pi}\frac{\gamma_0 \lambda^2}{(\omega_q-\omega)^2+\lambda^2} \label{drude} \end{equation} with the width $\lambda$ and the coupling strength $\gamma_0$. The dynamics of the trace distance shows oscillatory behavior when $\gamma_0>\lambda/2$; this criterion also holds for superposed trajectories. In Fig.~\ref{non_markovian}(a), we consider $\lambda = 0.1 \gamma_0$, for which oscillations can be observed. The magnitude of the oscillations, i.e., the strength of the non-Markovian effect, can either be enhanced or suppressed based on the factors $(N,n)$ in comparison with the single-path dynamics, i.e., $(N=1,n=0)$. Let us now consider the pure dephasing model.
Here, we consider a family of Ohmic spectral densities parameterized as \begin{equation} \mathcal{J}_{\text{Ohmic}}(s,\omega) = \eta \omega^{s}\omega_c^{1-s}\exp\left(-\frac{\omega}{\omega_c}\right) \label{ohmic} \end{equation} with coupling strength $\eta=1/3$, Ohmicity $s$, and cut-off frequency $\omega_c=1$. We present the results for $s=1$ and $s=4$, which exhibit single-path Markovian and non-Markovian dephasing, respectively. In the single-path Markovian regime $(s=1)$, we find that the interferometric engineering mitigates the dephasing process for $(N=3,n=0)$. Further, the non-monotonic behavior for $(N=3, n=1)$ implies that superposed trajectories can lead to non-Markovian dynamics even when the single-path dynamics is Markovian (in contrast to the dissipative case). The trace distance (or equivalently the modified dephasing function) experiences a sudden death and a subsequent revival during the dephasing process. For the case $s=4$, the dephasing process is mitigated when $(N=3,n=0)$, and sudden death (revival) occurs when $(N=3,n=1)$. In addition, the superposed trajectories can enhance another signature of non-Markovian effects known as coherence trapping~\cite{PhysRevA.89.024101}, where the coherence saturates to a finite value. Quantum Markovianity is usually defined through the divisibility of the dynamics, characterized by a family of completely positive and trace-preserving (CPTP) maps. For the pure dephasing model, this concept of (non-)Markovianity applies directly, because the effective dynamics of the post-measurement state remains a pure dephasing process that can be described by CPTP maps. However, for the dissipative model, the dynamics cannot be characterized by CPTP maps, because $\mathrm{tr}[\rho_{Q,\bm{\phi}_n}^{\text{diss}}(t)]$ depends on the initial state of the qubit $Q$. 
Nevertheless, the dynamics of the unnormalized post-measurement state $\tilde{\rho}_{Q,\bm{\phi}_n}^{\text{diss}}(t)$ is characterized by a family of completely positive and trace non-increasing (CPTNI) maps. These maps are CP divisible when $d|G(t)|^2/dt\leq0$, which coincides with the criterion for the monotonic decrease of the trace distance. Therefore, the trace distance remains a valid indicator of quantum (non-)Markovianity in the general sense of CPTNI maps. Similar discussions can be found in Ref.~\cite{PhysRevA.103.032223}. We now derive the divisibility criterion for the dissipative model in terms of CPTNI maps. A dynamical map is usually considered as a collection of CPTP maps $\Lambda(t;0)$, which is CP divisible if, for all $t,\tau \geq 0$, $\Lambda(t+\tau;0)$ can be decomposed as \begin{equation} \Lambda(t+\tau;0) =\Lambda(t+\tau;t)\Lambda(t;0), \end{equation} where $\Lambda(t+\tau;t)$ is also a CPTP map. We relax the trace-preserving condition in this definition of CP divisibility because the dynamics of the dissipative model is described by CPTNI maps. According to Eq.~\eqref{unnorm_post_measured_state_diss}, the Choi representation of the map $\Lambda^{\text{diss}}_{\bm{\phi}_n}(t+\tau;t)$ can be derived as \begin{equation} M^{\text{diss}}_{\bm{\phi}_n}(t+\tau;t)= \begin{pmatrix} 1 & 0 & 0 & \frac{G(t+\tau)^*}{G(t)^*}\\ 0 & \bar{N}\big[ 1 - \big|\frac{G(t+\tau)}{G(t)}\big|^2\big] &0 &0\\ 0 & 0 & 0& 0 \\ \frac{G(t+\tau)}{G(t)} & 0 & 0 &\big|\frac{G(t+\tau)}{G(t)}\big|^2 \end{pmatrix}, \end{equation} where $\bar{N} = \frac{N}{(N-2n)^2}$. Then, $\Lambda^{\text{diss}}_{\bm{\phi}_n}(t+\tau;t)$ is CP if and only if $|G(t+\tau)|^2\leq |G(t)|^2$. Therefore, $\Lambda^{\text{diss}}_{\bm{\phi}_n}(t+\tau;0)$ is CP divisible (in the sense of CPTNI maps) if and only if \begin{equation} \frac{d |G(t)|^2}{dt}\leq 0. 
\end{equation} \section{General form of the overlap-integral expression\label{general_zeno}} \begin{table*} \begin{ruledtabular} \begin{tabular}{lll} & Usual quantum Zeno effect & Space-time dual Zeno effect\\ [5pt]\hline Setup & \vtop{\hbox{\strut A temporal sequence of measurements}\hbox{\strut on one atom.}} & \vtop{\hbox{\strut Only one measurement on $N$ paths}\hbox{\strut taken by the atom in an interferometer.}}\\ [15pt]\hline Behavior of the filter functions&\vtop{\hbox{\strut The filter functions smear out}\hbox{\strut when increasing the measurement frequency.}} & \vtop{\hbox{\strut The filter functions remain localized}\hbox{\strut when increasing the number of paths.}} \end{tabular} \end{ruledtabular} \caption{\label{zeno_st_zeno} Comparison of the usual quantum Zeno effect and the space-time dual Zeno effect proposed in this work.} \end{table*} In the main text, we propose a space-time dual to the quantum Zeno effect~(see Table~\ref{zeno_st_zeno}) and show that the corresponding decay factors for the dissipative and pure dephasing models can be characterized by an overlap integral. We now provide a general expression for the decay factor. In general, the Hamiltonian for the qubit $Q$ and the environment $\mathcal{E}_i$ can be written as \begin{align} H_{Q\mathcal{E}_i} = H_Q + H_{\mathcal{E}_i} + H^I_{Q\mathcal{E}_i} \nonumber\\ \text{with}~~~~~H^I_{Q\mathcal{E}_i}=\sum_\alpha A_{i,\alpha}\otimes B_{i,\alpha}. \end{align} Here, $H_Q$ and $H_{\mathcal{E}_i}$ respectively represent the free Hamiltonians of the qubit and the environment $\mathcal{E}_i$, and $H^I_{Q\mathcal{E}_i}$ denotes the interaction Hamiltonian, where $A_{i,\alpha} = A_{i,\alpha}^\dagger$ and $B_{i,\alpha} = B_{i,\alpha}^\dagger$ are operators acting on $Q$ and $\mathcal{E}_i$, respectively. 
Within the interaction picture, the interaction Hamiltonian reads \begin{align} V^I_{Q\mathcal{E}_i}(t) &= \sum_\alpha A_{i,\alpha} (t) \otimes B_{i,\alpha}(t) \nonumber \\ \text{with}~~~A_{i,\alpha}(t)&=e^{i H_Q t}A_{i,\alpha} e^{-i H_Q t}, \nonumber \\ B_{i,\alpha}(t)&=e^{i H_{\mathcal{E}_i} t} B_{i,\alpha} e^{-i H_{\mathcal{E}_i} t}. \end{align} The propagator that governs the single-path time evolution can be written as \begin{equation} U_{Q\mathcal{E}_i}(t) = \mathcal{T}_+ \exp\big[-i\int_0^t dt^\prime V^I_{Q\mathcal{E}_i}(t^\prime) \big]. \end{equation} We now assume that the propagator can be approximated to second order in perturbation theory, such that \begin{equation} U_{Q\mathcal{E}_i}(t) \approx \mathbb{1}+U_{Q\mathcal{E}_i,1}(t) + U_{Q\mathcal{E}_i,2}(t), \end{equation} where \begin{align} U_{Q\mathcal{E}_i,1}(t)&=-i\int_0^t dt_1 V^I_{Q\mathcal{E}_i}(t_1) \nonumber \\ \text{and}~~ U_{Q\mathcal{E}_i,2}(t) &= -\int_0^t dt_1 \int_0^{t_1} dt_2 V^I_{Q\mathcal{E}_i}(t_1)V^I_{Q\mathcal{E}_i}(t_2). \end{align} The time-dependent terms $\rho_{Q,i,j}(t)$ described in Eq.~\eqref{rho_Q_t_unnorm} can then be expanded to second order, i.e., \begin{equation} \rho_{Q,i,j}(t) \approx \rho_Q(0) + \rho_{Q,i,j,1}(t)+\rho_{Q,i,j,2}(t). 
\end{equation} Here, $\rho_{Q,i,j,k}(t)$ characterizes the $k$-th order correction, wherein \begin{widetext} \begin{align} \rho_{Q,i,j,1}(t)&=\tr_{\{\mathcal{E}_k\}}\Big[U_{Q\mathcal{E}_i,1}(t)\rho_Q(0)\bigotimes_{l=1}^N\rho_{\mathcal{E}_l}(0) +\rho_Q(0)\bigotimes_{l=1}^N\rho_{\mathcal{E}_l}(0)U_{Q\mathcal{E}_j,1}^\dag(t) \Big] \nonumber \\ &=\tr_{\mathcal{E}_i}[U_{Q\mathcal{E}_i,1}(t)\rho_{\mathcal{E}_i}(0)]\rho_Q(0)+ \rho_Q(0)\tr_{\mathcal{E}_j}[\rho_{\mathcal{E}_j}(0)U_{Q\mathcal{E}_j,1}^\dag(t)]\nonumber\\ &=-i\sum_{\alpha,\beta}\int_0^t~dt_1\left\{A_{i,\alpha}(t_1)\rho_Q(0)\tr[B_{i,\alpha}(t_1)\rho_{\mathcal{E}_i}(0)]-\rho_Q(0)A_{j,\beta}(t_1)\tr[\rho_{\mathcal{E}_j}(0)B_{j,\beta}(t_1)]\right\},\nonumber\\ \rho_{Q,i,j,2}(t)&=\tr_{\{\mathcal{E}_k\}}\Big[U_{Q\mathcal{E}_i,2}(t)\rho_Q(0)\bigotimes_{l=1}^N\rho_{\mathcal{E}_l}(0)+\rho_Q(0)\bigotimes_{l=1}^N\rho_{\mathcal{E}_l}(0)U_{Q\mathcal{E}_j,2}^\dagger(t)+U_{Q\mathcal{E}_i,1}(t)\rho_Q(0)\bigotimes_{l=1}^N\rho_{\mathcal{E}_l}(0)U_{Q\mathcal{E}_j,1}^\dagger(t)\Big]\nonumber\\ &=-\sum_{\alpha,\beta}\int_0^t dt_1\int_0^{t_1}dt_2\Big\{A_{i,\alpha}(t_1)A_{j,\beta}(t_2)\rho_Q(0)\tr_{\{\mathcal{E}_k\}}\Big[B_{i,\alpha}(t_1)B_{j,\beta}(t_2)\bigotimes_{l=1}^N \rho_{\mathcal{E}_l}(0)\Big] \nonumber\\ &~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~+\rho_Q(0)A_{j,\beta}(t_2)A_{i,\alpha}(t_1)\tr_{\{\mathcal{E}_k\}}\Big[\bigotimes_{l=1}^N \rho_{\mathcal{E}_l}(0)B_{j,\beta}(t_2)B_{i,\alpha}(t_1)\Big]\Big\}\nonumber\\ &~~~+\sum_{\alpha,\beta}\int_0^t dt_1 \int_0^t dt_2~ A_{i,\alpha}(t_1)\rho_Q(0)A_{j,\beta}(t_2)\tr_{\{\mathcal{E}_k\}}\left[B_{i,\alpha}(t_1)\bigotimes_{l=1}^N\rho_{\mathcal{E}_l}(0)B_{j,\beta}(t_2)\right]. 
\end{align} \end{widetext} For a large class of open-system models, the terms $\tr[B_{i,\alpha}(t)\rho_{\mathcal{E}_i}(0)]$ vanish, which implies $\rho_{Q,i,j,1}(t)=0$ and \begin{align} \rho_{Q,i,j,2}(t)=&\delta_{i,j}\sum_{\alpha,\beta}\int_0^t dt_1\int_0^{t_1} dt_2 \Big\{[A_{i,\beta}(t_2)\rho_Q(0)A_{i,\alpha}(t_1)\nonumber \\ &~~~~~~-A_{i,\alpha}(t_1)A_{i,\beta}(t_2)\rho_Q(0)]C_{i,\alpha,\beta}(t_1,t_2)+h.c.\Big\}. \end{align} Here, $h.c.$ denotes the Hermitian conjugate, and $C_{i,\alpha,\beta}(t_1,t_2)= \tr[B_{i,\alpha}(t_1) B_{i,\beta}(t_2)\rho_{\mathcal{E}_i}(0)]$ represents the two-point correlation function of the environment $\mathcal{E}_i$. Thus, we have \begin{widetext} \begin{align} &\tilde{\rho}_{Q,\bm{\phi}}(t)=\frac{1}{N^2}\sum_{i,j}e^{-i(\phi_i-\phi_j)}\rho_{Q,i,j}(t)\nonumber\\ &\approx\frac{1}{N^2}\left\{\sum_{i,j}e^{-i(\phi_i-\phi_j)}\rho_Q(0)+\sum_{i,\alpha,\beta}\int_0^t dt_1\int_0^{t_1}dt_2\left\{[A_{i,\beta}(t_2)\rho_Q(0)A_{i,\alpha}(t_1)-A_{i,\alpha}(t_1)A_{i,\beta}(t_2)\rho_Q(0)]C_{i,\alpha,\beta}(t_1,t_2)+h.c.\right\}\right\}. \end{align} \end{widetext} We assume that the second-order corrections are sufficiently small, i.e., $\|\rho_{Q,i,j,2}(t)\|_\mathrm{tr}\ll1$, so that $\tr[\tilde{\rho}_{Q,\bm{\phi}}(t)] \approx \sum_{i,j}e^{-i(\phi_i-\phi_j)}/N^2$. Further, the correlation function usually takes the form $C_{i,\alpha,\beta}(t_1,t_2) = \int d\omega\, \mathcal{J}_i(\omega)f_{i,\alpha,\beta}(\omega,t_1,t_2)$, where $\mathcal{J}_i(\omega)$ denotes the coupling spectral density between the qubit and the environment $\mathcal{E}_i$, and $f_{i,\alpha,\beta}(\omega,t_1,t_2)$ summarizes the remaining information about the correlation function. Suppose that the initial qubit state is $\rho_Q(0)=|\psi\rangle \langle \psi|$. 
One can then express the decay factor as an average of $N$ overlap integrals, namely, \begin{align} \frac{1}{N}\sum_{i=1}^N\int d\omega\, \mathcal{J}_i(\omega) F_i(\omega,t,N,\bm{\phi}), \end{align} with the filter function \begin{align} &F_i(\omega,t,N,\bm{\phi})\nonumber\\ &=\frac{2N}{\sum_{k,l}e^{-i(\phi_k-\phi_l)}}\text{Re}\Big\{\sum_{\alpha,\beta}\int_0^t dt_1 \int_0^{t_1} dt_2~ f_{i,\alpha,\beta}(\omega,t_1,t_2)\nonumber\\ &~~~~~~~~~~~~~~~~~~~~~~~~~\tr\big[P_{\psi^\perp}A_{i,\beta}(t_2)\rho_Q(0)A_{i,\alpha}(t_1)\big] \Big\}. \end{align} Here, $P_{\psi^\perp}$ denotes the projector onto the subspace orthogonal to $|\psi\rangle\langle \psi|$. \begin{thebibliography}{94} \makeatletter \providecommand \@ifxundefined [1]{ \@ifx{#1\undefined} } \providecommand \@ifnum [1]{ \ifnum #1\expandafter \@firstoftwo \else \expandafter \@secondoftwo \fi } \providecommand \@ifx [1]{ \ifx #1\expandafter \@firstoftwo \else \expandafter \@secondoftwo \fi } \providecommand \natexlab [1]{#1} \providecommand \enquote [1]{``#1''} \providecommand \bibnamefont [1]{#1} \providecommand \bibfnamefont [1]{#1} \providecommand \citenamefont [1]{#1} \providecommand \href@noop [0]{\@secondoftwo} \providecommand \href [0]{\begingroup \@sanitize@url \@href} \providecommand \@href[1]{\@@startlink{#1}\@@href} \providecommand \@@href[1]{\endgroup#1\@@endlink} \providecommand \@sanitize@url [0]{\catcode `\\12\catcode `\$12\catcode `\&12\catcode `\#12\catcode `\^12\catcode `\_12\catcode `\%12\relax} \providecommand \@@startlink[1]{} \providecommand \@@endlink[0]{} \providecommand \url [0]{\begingroup\@sanitize@url \@url } \providecommand \@url [1]{\endgroup\@href {#1}{\urlprefix }} \providecommand \urlprefix [0]{URL } \providecommand \Eprint [0]{\href } \providecommand \doibase [0]{http://dx.doi.org/} \providecommand \selectlanguage [0]{\@gobble} \providecommand \bibinfo [0]{\@secondoftwo} \providecommand \bibfield [0]{\@secondoftwo} \providecommand \translation [1]{[#1]} \providecommand \BibitemOpen 
[0]{} \providecommand \bibitemStop [0]{} \providecommand \bibitemNoStop [0]{.\EOS\space} \providecommand \EOS [0]{\spacefactor3000\relax} \providecommand \BibitemShut [1]{\csname bibitem#1\endcsname} \let\auto@bib@innerbib\@empty \bibitem [{\citenamefont {Myatt}\ \emph {et~al.}(2000)\citenamefont {Myatt}, \citenamefont {King}, \citenamefont {Turchette}, \citenamefont {Sackett}, \citenamefont {Kielpinski}, \citenamefont {Itano}, \citenamefont {Monroe},\ and\ \citenamefont {Wineland}}]{myatt2000decoherence} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {C.~J.}\ \bibnamefont {Myatt}}, \bibinfo {author} {\bibfnamefont {B.~E.}\ \bibnamefont {King}}, \bibinfo {author} {\bibfnamefont {Q.~A.}\ \bibnamefont {Turchette}}, \bibinfo {author} {\bibfnamefont {C.~A.}\ \bibnamefont {Sackett}}, \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Kielpinski}}, \bibinfo {author} {\bibfnamefont {W.~M.}\ \bibnamefont {Itano}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Monroe}}, \ and\ \bibinfo {author} {\bibfnamefont {D.~J.}\ \bibnamefont {Wineland}},\ }\href {https://www.nature.com/articles/35002001} {\bibfield {journal} {\bibinfo {journal} {Nature}\ }\textbf {\bibinfo {volume} {403}},\ \bibinfo {pages} {269} (\bibinfo {year} {2000})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Liu}\ \emph {et~al.}(2011)\citenamefont {Liu}, \citenamefont {Li}, \citenamefont {Huang}, \citenamefont {Li}, \citenamefont {Guo}, \citenamefont {Laine}, \citenamefont {Breuer},\ and\ \citenamefont {Piilo}}]{liu2011experimental} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {B.-H.}\ \bibnamefont {Liu}}, \bibinfo {author} {\bibfnamefont {L.}~\bibnamefont {Li}}, \bibinfo {author} {\bibfnamefont {Y.-F.}\ \bibnamefont {Huang}}, \bibinfo {author} {\bibfnamefont {C.-F.}\ \bibnamefont {Li}}, \bibinfo {author} {\bibfnamefont {G.-C.}\ \bibnamefont {Guo}}, \bibinfo {author} {\bibfnamefont {E.-M.}\ \bibnamefont {Laine}}, \bibinfo {author} {\bibfnamefont {H.-P.}\ 
\bibnamefont {Breuer}}, \ and\ \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Piilo}},\ }\href {https://www.nature.com/articles/nphys2085} {\bibfield {journal} {\bibinfo {journal} {Nat. Phys.}\ }\textbf {\bibinfo {volume} {7}},\ \bibinfo {pages} {931} (\bibinfo {year} {2011})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Chiuri}\ \emph {et~al.}(2012)\citenamefont {Chiuri}, \citenamefont {Greganti}, \citenamefont {Mazzola}, \citenamefont {Paternostro},\ and\ \citenamefont {Mataloni}}]{chiuri2012linear} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Chiuri}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Greganti}}, \bibinfo {author} {\bibfnamefont {L.}~\bibnamefont {Mazzola}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Paternostro}}, \ and\ \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Mataloni}},\ }\href {https://www.nature.com/articles/srep00968} {\bibfield {journal} {\bibinfo {journal} {Sci. Rep.}\ }\textbf {\bibinfo {volume} {2}},\ \bibinfo {pages} {968} (\bibinfo {year} {2012})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Zou}\ \emph {et~al.}(2013)\citenamefont {Zou}, \citenamefont {Chen}, \citenamefont {Xiong}, \citenamefont {Sun}, \citenamefont {Zou}, \citenamefont {Han},\ and\ \citenamefont {Guo}}]{PhysRevA.88.063806} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {C.-L.}\ \bibnamefont {Zou}}, \bibinfo {author} {\bibfnamefont {X.-D.}\ \bibnamefont {Chen}}, \bibinfo {author} {\bibfnamefont {X.}~\bibnamefont {Xiong}}, \bibinfo {author} {\bibfnamefont {F.-W.}\ \bibnamefont {Sun}}, \bibinfo {author} {\bibfnamefont {X.-B.}\ \bibnamefont {Zou}}, \bibinfo {author} {\bibfnamefont {Z.-F.}\ \bibnamefont {Han}}, \ and\ \bibinfo {author} {\bibfnamefont {G.-C.}\ \bibnamefont {Guo}},\ }\href {\doibase 10.1103/PhysRevA.88.063806} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. 
A}\ }\textbf {\bibinfo {volume} {88}},\ \bibinfo {pages} {063806} (\bibinfo {year} {2013})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Yuan}\ \emph {et~al.}(2017)\citenamefont {Yuan}, \citenamefont {Xing}, \citenamefont {Kuang},\ and\ \citenamefont {Yi}}]{PhysRevA.95.033610} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {J.-B.}\ \bibnamefont {Yuan}}, \bibinfo {author} {\bibfnamefont {H.-J.}\ \bibnamefont {Xing}}, \bibinfo {author} {\bibfnamefont {L.-M.}\ \bibnamefont {Kuang}}, \ and\ \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Yi}},\ }\href {\doibase 10.1103/PhysRevA.95.033610} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {95}},\ \bibinfo {pages} {033610} (\bibinfo {year} {2017})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Liu}\ \emph {et~al.}(2018)\citenamefont {Liu}, \citenamefont {Lyyra}, \citenamefont {Sun}, \citenamefont {Liu}, \citenamefont {Li}, \citenamefont {Guo}, \citenamefont {Maniscalco},\ and\ \citenamefont {Piilo}}]{liu2018experimental} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {Z.-D.}\ \bibnamefont {Liu}}, \bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Lyyra}}, \bibinfo {author} {\bibfnamefont {Y.-N.}\ \bibnamefont {Sun}}, \bibinfo {author} {\bibfnamefont {B.-H.}\ \bibnamefont {Liu}}, \bibinfo {author} {\bibfnamefont {C.-F.}\ \bibnamefont {Li}}, \bibinfo {author} {\bibfnamefont {G.-C.}\ \bibnamefont {Guo}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Maniscalco}}, \ and\ \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Piilo}},\ }\href {https://www.nature.com/articles/s41467-018-05817-x} {\bibfield {journal} {\bibinfo {journal} {Nat. 
Commun.}\ }\textbf {\bibinfo {volume} {9}},\ \bibinfo {pages} {3453} (\bibinfo {year} {2018})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Khurana}\ \emph {et~al.}(2019)\citenamefont {Khurana}, \citenamefont {Agarwalla},\ and\ \citenamefont {Mahesh}}]{PhysRevA.99.022107} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Khurana}}, \bibinfo {author} {\bibfnamefont {B.~K.}\ \bibnamefont {Agarwalla}}, \ and\ \bibinfo {author} {\bibfnamefont {T.~S.}\ \bibnamefont {Mahesh}},\ }\href {\doibase 10.1103/PhysRevA.99.022107} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {99}},\ \bibinfo {pages} {022107} (\bibinfo {year} {2019})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Garc{\'\i}a-P{\'e}rez}\ \emph {et~al.}(2020)\citenamefont {Garc{\'\i}a-P{\'e}rez}, \citenamefont {Rossi},\ and\ \citenamefont {Maniscalco}}]{garcia2020ibm} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Garc{\'\i}a-P{\'e}rez}}, \bibinfo {author} {\bibfnamefont {M.~A.}\ \bibnamefont {Rossi}}, \ and\ \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Maniscalco}},\ }\href {https://www.nature.com/articles/s41534-019-0235-y} {\bibfield {journal} {\bibinfo {journal} {npj Quantum Inf.}\ }\textbf {\bibinfo {volume} {6}},\ \bibinfo {pages} {1} (\bibinfo {year} {2020})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Lu}\ \emph {et~al.}(2020)\citenamefont {Lu}, \citenamefont {Zhang}, \citenamefont {Liu}, \citenamefont {Nori}, \citenamefont {Fan},\ and\ \citenamefont {Pan}}]{PhysRevLett.124.210502} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {Y.-N.}\ \bibnamefont {Lu}}, \bibinfo {author} {\bibfnamefont {Y.-R.}\ \bibnamefont {Zhang}}, \bibinfo {author} {\bibfnamefont {G.-Q.}\ \bibnamefont {Liu}}, \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Nori}}, \bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Fan}}, \ and\ \bibinfo {author} {\bibfnamefont {X.-Y.}\ \bibnamefont 
{Pan}},\ }\href {\doibase 10.1103/PhysRevLett.124.210502} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {124}},\ \bibinfo {pages} {210502} (\bibinfo {year} {2020})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Wiseman}(1994)}]{PhysRevA.49.2133} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {H.~M.}\ \bibnamefont {Wiseman}},\ }\href {\doibase 10.1103/PhysRevA.49.2133} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {49}},\ \bibinfo {pages} {2133} (\bibinfo {year} {1994})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Doherty}\ \emph {et~al.}(2000)\citenamefont {Doherty}, \citenamefont {Habib}, \citenamefont {Jacobs}, \citenamefont {Mabuchi},\ and\ \citenamefont {Tan}}]{PhysRevA.62.012105} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {A.~C.}\ \bibnamefont {Doherty}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Habib}}, \bibinfo {author} {\bibfnamefont {K.}~\bibnamefont {Jacobs}}, \bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Mabuchi}}, \ and\ \bibinfo {author} {\bibfnamefont {S.~M.}\ \bibnamefont {Tan}},\ }\href {\doibase 10.1103/PhysRevA.62.012105} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {62}},\ \bibinfo {pages} {012105} (\bibinfo {year} {2000})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Lloyd}\ and\ \citenamefont {Viola}(2001)}]{PhysRevA.65.010101} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Lloyd}}\ and\ \bibinfo {author} {\bibfnamefont {L.}~\bibnamefont {Viola}},\ }\href {\doibase 10.1103/PhysRevA.65.010101} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. 
A}\ }\textbf {\bibinfo {volume} {65}},\ \bibinfo {pages} {010101} (\bibinfo {year} {2001})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Wiseman}\ and\ \citenamefont {Milburn}(2010)}]{wiseman2009quantum} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {H.~M.}\ \bibnamefont {Wiseman}}\ and\ \bibinfo {author} {\bibfnamefont {G.~J.}\ \bibnamefont {Milburn}},\ }\href@noop {} {\emph {\bibinfo {title} {Quantum measurement and control}}}\ (\bibinfo {publisher} {Cambridge university press},\ \bibinfo {year} {2010})\BibitemShut {NoStop} \bibitem [{\citenamefont {Zhang}\ \emph {et~al.}(2017)\citenamefont {Zhang}, \citenamefont {Liu}, \citenamefont {Wu}, \citenamefont {Jacobs},\ and\ \citenamefont {Nori}}]{zhang2017quantum} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Zhang}}, \bibinfo {author} {\bibfnamefont {Y.-x.}\ \bibnamefont {Liu}}, \bibinfo {author} {\bibfnamefont {R.-B.}\ \bibnamefont {Wu}}, \bibinfo {author} {\bibfnamefont {K.}~\bibnamefont {Jacobs}}, \ and\ \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Nori}},\ }\href {https://www.sciencedirect.com/science/article/pii/S0370157317300479} {\bibfield {journal} {\bibinfo {journal} {Phys. Rep.}\ }\textbf {\bibinfo {volume} {679}},\ \bibinfo {pages} {1} (\bibinfo {year} {2017})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Schirmer}\ and\ \citenamefont {Wang}(2010)}]{PhysRevA.81.062306} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {S.~G.}\ \bibnamefont {Schirmer}}\ and\ \bibinfo {author} {\bibfnamefont {X.}~\bibnamefont {Wang}},\ }\href {\doibase 10.1103/PhysRevA.81.062306} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. 
A}\ }\textbf {\bibinfo {volume} {81}},\ \bibinfo {pages} {062306} (\bibinfo {year} {2010})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Viola}\ \emph {et~al.}(1999{\natexlab{a}})\citenamefont {Viola}, \citenamefont {Knill},\ and\ \citenamefont {Lloyd}}]{PhysRevLett.82.2417} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {L.}~\bibnamefont {Viola}}, \bibinfo {author} {\bibfnamefont {E.}~\bibnamefont {Knill}}, \ and\ \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Lloyd}},\ }\href {\doibase 10.1103/PhysRevLett.82.2417} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {82}},\ \bibinfo {pages} {2417} (\bibinfo {year} {1999}{\natexlab{a}})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Viola}\ \emph {et~al.}(1999{\natexlab{b}})\citenamefont {Viola}, \citenamefont {Lloyd},\ and\ \citenamefont {Knill}}]{PhysRevLett.83.4888} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {L.}~\bibnamefont {Viola}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Lloyd}}, \ and\ \bibinfo {author} {\bibfnamefont {E.}~\bibnamefont {Knill}},\ }\href {\doibase 10.1103/PhysRevLett.83.4888} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {83}},\ \bibinfo {pages} {4888} (\bibinfo {year} {1999}{\natexlab{b}})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Khodjasteh}\ and\ \citenamefont {Lidar}(2005)}]{PhysRevLett.95.180501} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {K.}~\bibnamefont {Khodjasteh}}\ and\ \bibinfo {author} {\bibfnamefont {D.~A.}\ \bibnamefont {Lidar}},\ }\href {\doibase 10.1103/PhysRevLett.95.180501} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. 
Lett.}\ }\textbf {\bibinfo {volume} {95}},\ \bibinfo {pages} {180501} (\bibinfo {year} {2005})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Yang}\ \emph {et~al.}(2011)\citenamefont {Yang}, \citenamefont {Wang},\ and\ \citenamefont {Liu}}]{yang2011preserving} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {W.}~\bibnamefont {Yang}}, \bibinfo {author} {\bibfnamefont {Z.-Y.}\ \bibnamefont {Wang}}, \ and\ \bibinfo {author} {\bibfnamefont {R.-B.}\ \bibnamefont {Liu}},\ }\href {https://doi.org/10.1007/s11467-010-0113-8} {\bibfield {journal} {\bibinfo {journal} {Front. Phys.}\ }\textbf {\bibinfo {volume} {6}},\ \bibinfo {pages} {2} (\bibinfo {year} {2011})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Du}\ \emph {et~al.}(2009)\citenamefont {Du}, \citenamefont {Rong}, \citenamefont {Zhao}, \citenamefont {Wang}, \citenamefont {Yang},\ and\ \citenamefont {Liu}}]{du2009preserving} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Du}}, \bibinfo {author} {\bibfnamefont {X.}~\bibnamefont {Rong}}, \bibinfo {author} {\bibfnamefont {N.}~\bibnamefont {Zhao}}, \bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont {Wang}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Yang}}, \ and\ \bibinfo {author} {\bibfnamefont {R.~B.}\ \bibnamefont {Liu}},\ }\href {https://www.nature.com/articles/nature08470} {\bibfield {journal} {\bibinfo {journal} {Nature}\ }\textbf {\bibinfo {volume} {461}},\ \bibinfo {pages} {1265} (\bibinfo {year} {2009})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Viola}\ and\ \citenamefont {Knill}(2003)}]{PhysRevLett.90.037901} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {L.}~\bibnamefont {Viola}}\ and\ \bibinfo {author} {\bibfnamefont {E.}~\bibnamefont {Knill}},\ }\href {\doibase 10.1103/PhysRevLett.90.037901} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. 
Lett.}\ }\textbf {\bibinfo {volume} {90}},\ \bibinfo {pages} {037901} (\bibinfo {year} {2003})}\BibitemShut {NoStop} \bibitem [{\citenamefont {De~Lange}\ \emph {et~al.}(2010)\citenamefont {De~Lange}, \citenamefont {Wang}, \citenamefont {Riste}, \citenamefont {Dobrovitski},\ and\ \citenamefont {Hanson}}]{de2010universal} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {De~Lange}}, \bibinfo {author} {\bibfnamefont {Z.~H.}\ \bibnamefont {Wang}}, \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Riste}}, \bibinfo {author} {\bibfnamefont {V.~V.}\ \bibnamefont {Dobrovitski}}, \ and\ \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Hanson}},\ }\href {https://science.sciencemag.org/content/330/6000/60.abstract} {\bibfield {journal} {\bibinfo {journal} {Science}\ }\textbf {\bibinfo {volume} {330}},\ \bibinfo {pages} {60} (\bibinfo {year} {2010})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Liu}\ \emph {et~al.}(2013)\citenamefont {Liu}, \citenamefont {Po}, \citenamefont {Du}, \citenamefont {Liu},\ and\ \citenamefont {Pan}}]{liu2013noise} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {G.-Q.}\ \bibnamefont {Liu}}, \bibinfo {author} {\bibfnamefont {H.~C.}\ \bibnamefont {Po}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Du}}, \bibinfo {author} {\bibfnamefont {R.-B.}\ \bibnamefont {Liu}}, \ and\ \bibinfo {author} {\bibfnamefont {X.-Y.}\ \bibnamefont {Pan}},\ }\href {https://www.nature.com/articles/ncomms3254} {\bibfield {journal} {\bibinfo {journal} {Nat. 
Commun.}\ }\textbf {\bibinfo {volume} {4}},\ \bibinfo {pages} {2254} (\bibinfo {year} {2013})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Alonso}\ \emph {et~al.}(2016)\citenamefont {Alonso}, \citenamefont {Leupold}, \citenamefont {Sol{\`e}r}, \citenamefont {Fadel}, \citenamefont {Marinelli}, \citenamefont {Keitch}, \citenamefont {Negnevitsky},\ and\ \citenamefont {Home}}]{alonso2016generation} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Alonso}}, \bibinfo {author} {\bibfnamefont {F.~M.}\ \bibnamefont {Leupold}}, \bibinfo {author} {\bibfnamefont {Z.~U.}\ \bibnamefont {Sol{\`e}r}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Fadel}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Marinelli}}, \bibinfo {author} {\bibfnamefont {B.~C.}\ \bibnamefont {Keitch}}, \bibinfo {author} {\bibfnamefont {V.}~\bibnamefont {Negnevitsky}}, \ and\ \bibinfo {author} {\bibfnamefont {J.~P.}\ \bibnamefont {Home}},\ }\href {https://www.nature.com/articles/ncomms11243} {\bibfield {journal} {\bibinfo {journal} {Nat. Commun.}\ }\textbf {\bibinfo {volume} {7}},\ \bibinfo {pages} {11243} (\bibinfo {year} {2016})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Misra}\ and\ \citenamefont {Sudarshan}(1977)}]{misra1977zeno} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {B.}~\bibnamefont {Misra}}\ and\ \bibinfo {author} {\bibfnamefont {E.~G.}\ \bibnamefont {Sudarshan}},\ }\href {https://aip.scitation.org/doi/abs/10.1063/1.523304} {\bibfield {journal} {\bibinfo {journal} {J. Math. 
Phys.}\ }\textbf {\bibinfo {volume} {18}},\ \bibinfo {pages} {756} (\bibinfo {year} {1977})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Itano}\ \emph {et~al.}(1990)\citenamefont {Itano}, \citenamefont {Heinzen}, \citenamefont {Bollinger},\ and\ \citenamefont {Wineland}}]{PhysRevA.41.2295} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {W.~M.}\ \bibnamefont {Itano}}, \bibinfo {author} {\bibfnamefont {D.~J.}\ \bibnamefont {Heinzen}}, \bibinfo {author} {\bibfnamefont {J.~J.}\ \bibnamefont {Bollinger}}, \ and\ \bibinfo {author} {\bibfnamefont {D.~J.}\ \bibnamefont {Wineland}},\ }\href {\doibase 10.1103/PhysRevA.41.2295} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {41}},\ \bibinfo {pages} {2295} (\bibinfo {year} {1990})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Kofman}\ and\ \citenamefont {Kurizki}(2000)}]{kofman2000acceleration} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {A.~G.}\ \bibnamefont {Kofman}}\ and\ \bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Kurizki}},\ }\href {https://www.nature.com/articles/35014537} {\bibfield {journal} {\bibinfo {journal} {Nature}\ }\textbf {\bibinfo {volume} {405}},\ \bibinfo {pages} {546} (\bibinfo {year} {2000})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Fischer}\ \emph {et~al.}(2001)\citenamefont {Fischer}, \citenamefont {Guti\'errez-Medina},\ and\ \citenamefont {Raizen}}]{PhysRevLett.87.040402} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {M.~C.}\ \bibnamefont {Fischer}}, \bibinfo {author} {\bibfnamefont {B.}~\bibnamefont {Guti\'errez-Medina}}, \ and\ \bibinfo {author} {\bibfnamefont {M.~G.}\ \bibnamefont {Raizen}},\ }\href {\doibase 10.1103/PhysRevLett.87.040402} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. 
Lett.}\ }\textbf {\bibinfo {volume} {87}},\ \bibinfo {pages} {040402} (\bibinfo {year} {2001})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Facchi}\ and\ \citenamefont {Pascazio}(2002)}]{PhysRevLett.89.080401} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Facchi}}\ and\ \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Pascazio}},\ }\href {\doibase 10.1103/PhysRevLett.89.080401} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {89}},\ \bibinfo {pages} {080401} (\bibinfo {year} {2002})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Koshino}\ and\ \citenamefont {Shimizu}(2005)}]{koshino2005quantum} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {K.}~\bibnamefont {Koshino}}\ and\ \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Shimizu}},\ }\href {https://www.sciencedirect.com/science/article/pii/S0370157305001080} {\bibfield {journal} {\bibinfo {journal} {Phys. Rep.}\ }\textbf {\bibinfo {volume} {412}},\ \bibinfo {pages} {191} (\bibinfo {year} {2005})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Chaudhry}\ and\ \citenamefont {Gong}(2014)}]{PhysRevA.90.012101} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {A.~Z.}\ \bibnamefont {Chaudhry}}\ and\ \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Gong}},\ }\href {\doibase 10.1103/PhysRevA.90.012101} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {90}},\ \bibinfo {pages} {012101} (\bibinfo {year} {2014})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Chaudhry}(2016)}]{chaudhry2016general} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {A.~Z.}\ \bibnamefont {Chaudhry}},\ }\href {https://www.nature.com/articles/srep29497} {\bibfield {journal} {\bibinfo {journal} {Sci. 
Rep.}\ }\textbf {\bibinfo {volume} {6}},\ \bibinfo {pages} {29497} (\bibinfo {year} {2016})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Wang}\ \emph {et~al.}(2008)\citenamefont {Wang}, \citenamefont {You},\ and\ \citenamefont {Nori}}]{PhysRevA.77.062339} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {X.-B.}\ \bibnamefont {Wang}}, \bibinfo {author} {\bibfnamefont {J.~Q.}\ \bibnamefont {You}}, \ and\ \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Nori}},\ }\href {\doibase 10.1103/PhysRevA.77.062339} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {77}},\ \bibinfo {pages} {062339} (\bibinfo {year} {2008})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Zhou}\ \emph {et~al.}(2009)\citenamefont {Zhou}, \citenamefont {Yang}, \citenamefont {Liu}, \citenamefont {Sun},\ and\ \citenamefont {Nori}}]{PhysRevA.80.062109} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {L.}~\bibnamefont {Zhou}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Yang}}, \bibinfo {author} {\bibfnamefont {Y.-x.}\ \bibnamefont {Liu}}, \bibinfo {author} {\bibfnamefont {C.~P.}\ \bibnamefont {Sun}}, \ and\ \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Nori}},\ }\href {\doibase 10.1103/PhysRevA.80.062109} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. 
A}\ }\textbf {\bibinfo {volume} {80}},\ \bibinfo {pages} {062109} (\bibinfo {year} {2009})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Cao}\ \emph {et~al.}(2010)\citenamefont {Cao}, \citenamefont {You}, \citenamefont {Zheng}, \citenamefont {Kofman},\ and\ \citenamefont {Nori}}]{PhysRevA.82.022119} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {X.}~\bibnamefont {Cao}}, \bibinfo {author} {\bibfnamefont {J.~Q.}\ \bibnamefont {You}}, \bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Zheng}}, \bibinfo {author} {\bibfnamefont {A.~G.}\ \bibnamefont {Kofman}}, \ and\ \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Nori}},\ }\href {\doibase 10.1103/PhysRevA.82.022119} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {82}},\ \bibinfo {pages} {022119} (\bibinfo {year} {2010})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Cao}\ \emph {et~al.}(2012)\citenamefont {Cao}, \citenamefont {Ai}, \citenamefont {Sun},\ and\ \citenamefont {Nori}}]{CAO2012349} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {X.}~\bibnamefont {Cao}}, \bibinfo {author} {\bibfnamefont {Q.}~\bibnamefont {Ai}}, \bibinfo {author} {\bibfnamefont {C.-P.}\ \bibnamefont {Sun}}, \ and\ \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Nori}},\ }\href {\doibase https://doi.org/10.1016/j.physleta.2011.11.045} {\bibfield {journal} {\bibinfo {journal} {Phys. Lett. 
A}\ }\textbf {\bibinfo {volume} {376}},\ \bibinfo {pages} {349} (\bibinfo {year} {2012})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Zhang}\ \emph {et~al.}(2013)\citenamefont {Zhang}, \citenamefont {Kofman}, \citenamefont {Zhuang}, \citenamefont {You},\ and\ \citenamefont {Nori}}]{ZHANG20131837} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {W.}~\bibnamefont {Zhang}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Kofman}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Zhuang}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {You}}, \ and\ \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Nori}},\ }\href {\doibase https://doi.org/10.1016/j.physleta.2013.05.029} {\bibfield {journal} {\bibinfo {journal} {Phys. Lett. A}\ }\textbf {\bibinfo {volume} {377}},\ \bibinfo {pages} {1837} (\bibinfo {year} {2013})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Ai}\ \emph {et~al.}(2013)\citenamefont {Ai}, \citenamefont {Xu}, \citenamefont {Yi}, \citenamefont {Kofman}, \citenamefont {Sun},\ and\ \citenamefont {Nori}}]{Ai2013} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {Q.}~\bibnamefont {Ai}}, \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Xu}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Yi}}, \bibinfo {author} {\bibfnamefont {A.~G.}\ \bibnamefont {Kofman}}, \bibinfo {author} {\bibfnamefont {C.~P.}\ \bibnamefont {Sun}}, \ and\ \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Nori}},\ }\href {\doibase 10.1038/srep01752} {\bibfield {journal} {\bibinfo {journal} {Sci. Rep.}\ }\textbf {\bibinfo {volume} {3}},\ \bibinfo {pages} {1752} (\bibinfo {year} {2013})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Oi}(2003)}]{PhysRevLett.91.067902} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {D.~K.~L.}\ \bibnamefont {Oi}},\ }\href {\doibase 10.1103/PhysRevLett.91.067902} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. 
Lett.}\ }\textbf {\bibinfo {volume} {91}},\ \bibinfo {pages} {067902} (\bibinfo {year} {2003})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Gisin}\ \emph {et~al.}(2005)\citenamefont {Gisin}, \citenamefont {Linden}, \citenamefont {Massar},\ and\ \citenamefont {Popescu}}]{PhysRevA.72.012338} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {N.}~\bibnamefont {Gisin}}, \bibinfo {author} {\bibfnamefont {N.}~\bibnamefont {Linden}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Massar}}, \ and\ \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Popescu}},\ }\href {\doibase 10.1103/PhysRevA.72.012338} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {72}},\ \bibinfo {pages} {012338} (\bibinfo {year} {2005})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Chiribella}\ and\ \citenamefont {Kristj{\'a}nsson}(2019)}]{chiribella2019quantum} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Chiribella}}\ and\ \bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Kristj{\'a}nsson}},\ }\href {https://royalsocietypublishing.org/doi/full/10.1098/rspa.2018.0903} {\bibfield {journal} {\bibinfo {journal} {Proc. R. Soc. A}\ }\textbf {\bibinfo {volume} {475}},\ \bibinfo {pages} {20180903} (\bibinfo {year} {2019})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Loizeau}\ and\ \citenamefont {Grinbaum}(2020)}]{PhysRevA.101.012340} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {N.}~\bibnamefont {Loizeau}}\ and\ \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Grinbaum}},\ }\href {\doibase 10.1103/PhysRevA.101.012340} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. 
A}\ }\textbf {\bibinfo {volume} {101}},\ \bibinfo {pages} {012340} (\bibinfo {year} {2020})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Abbott}\ \emph {et~al.}(2020)\citenamefont {Abbott}, \citenamefont {Wechs}, \citenamefont {Horsman}, \citenamefont {Mhalla},\ and\ \citenamefont {Branciard}}]{abbott2020communication} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {A.~A.}\ \bibnamefont {Abbott}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Wechs}}, \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Horsman}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Mhalla}}, \ and\ \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Branciard}},\ }\href {https://doi.org/10.22331/q-2020-09-24-333} {\bibfield {journal} {\bibinfo {journal} {Quantum}\ }\textbf {\bibinfo {volume} {4}},\ \bibinfo {pages} {333} (\bibinfo {year} {2020})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Kristj{\'a}nsson}\ \emph {et~al.}(2020)\citenamefont {Kristj{\'a}nsson}, \citenamefont {Chiribella}, \citenamefont {Salek}, \citenamefont {Ebler},\ and\ \citenamefont {Wilson}}]{kristjansson2020resource} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Kristj{\'a}nsson}}, \bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Chiribella}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Salek}}, \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Ebler}}, \ and\ \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Wilson}},\ }\href {https://iopscience.iop.org/article/10.1088/1367-2630/ab8ef7/meta} {\bibfield {journal} {\bibinfo {journal} {New J. 
Phys.}\ }\textbf {\bibinfo {volume} {22}},\ \bibinfo {pages} {073014} (\bibinfo {year} {2020})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Rubino}\ \emph {et~al.}(2021)\citenamefont {Rubino}, \citenamefont {Rozema}, \citenamefont {Ebler}, \citenamefont {Kristj\'ansson}, \citenamefont {Salek}, \citenamefont {Allard~Gu\'erin}, \citenamefont {Abbott}, \citenamefont {Branciard}, \citenamefont {Brukner}, \citenamefont {Chiribella},\ and\ \citenamefont {Walther}}]{PhysRevResearch.3.013093} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Rubino}}, \bibinfo {author} {\bibfnamefont {L.~A.}\ \bibnamefont {Rozema}}, \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Ebler}}, \bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Kristj\'ansson}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Salek}}, \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Allard~Gu\'erin}}, \bibinfo {author} {\bibfnamefont {A.~A.}\ \bibnamefont {Abbott}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Branciard}}, \bibinfo {author} {\bibfnamefont {{\v{C}}.}~\bibnamefont {Brukner}}, \bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Chiribella}}, \ and\ \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Walther}},\ }\href {\doibase 10.1103/PhysRevResearch.3.013093} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. Research}\ }\textbf {\bibinfo {volume} {3}},\ \bibinfo {pages} {013093} (\bibinfo {year} {2021})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Foo}\ \emph {et~al.}(2021)\citenamefont {Foo}, \citenamefont {Mann},\ and\ \citenamefont {Zych}}]{PhysRevD.103.065013} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Foo}}, \bibinfo {author} {\bibfnamefont {R.~B.}\ \bibnamefont {Mann}}, \ and\ \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Zych}},\ }\href {\doibase 10.1103/PhysRevD.103.065013} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. 
D}\ }\textbf {\bibinfo {volume} {103}},\ \bibinfo {pages} {065013} (\bibinfo {year} {2021})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Foo}\ \emph {et~al.}(2020)\citenamefont {Foo}, \citenamefont {Onoe},\ and\ \citenamefont {Zych}}]{PhysRevD.102.085013} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Foo}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Onoe}}, \ and\ \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Zych}},\ }\href {\doibase 10.1103/PhysRevD.102.085013} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. D}\ }\textbf {\bibinfo {volume} {102}},\ \bibinfo {pages} {085013} (\bibinfo {year} {2020})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Henderson}\ \emph {et~al.}(2020)\citenamefont {Henderson}, \citenamefont {Belenchia}, \citenamefont {Castro-Ruiz}, \citenamefont {Budroni}, \citenamefont {Zych}, \citenamefont {Brukner},\ and\ \citenamefont {Mann}}]{PhysRevLett.125.131602} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {L.~J.}\ \bibnamefont {Henderson}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Belenchia}}, \bibinfo {author} {\bibfnamefont {E.}~\bibnamefont {Castro-Ruiz}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Budroni}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Zych}}, \bibinfo {author} {\bibfnamefont {{\v{C}}.}~\bibnamefont {Brukner}}, \ and\ \bibinfo {author} {\bibfnamefont {R.~B.}\ \bibnamefont {Mann}},\ }\href {\doibase 10.1103/PhysRevLett.125.131602} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {125}},\ \bibinfo {pages} {131602} (\bibinfo {year} {2020})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Ban}(2021)}]{ban2021two} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Ban}},\ }\href {https://www.sciencedirect.com/science/article/pii/S0375960120308033} {\bibfield {journal} {\bibinfo {journal} {Phys. Lett. 
A}\ }\textbf {\bibinfo {volume} {385}},\ \bibinfo {pages} {126936} (\bibinfo {year} {2021})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Ban}(2020)}]{ban2020relaxation} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Ban}},\ }\href {https://link.springer.com/article/10.1007/s11128-020-02856-6} {\bibfield {journal} {\bibinfo {journal} {Quantum Inf. Process.}\ }\textbf {\bibinfo {volume} {19}},\ \bibinfo {pages} {351} (\bibinfo {year} {2020})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Siltanen}\ \emph {et~al.}(2021)\citenamefont {Siltanen}, \citenamefont {Kuusela},\ and\ \citenamefont {Piilo}}]{PhysRevA.103.032223} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {O.}~\bibnamefont {Siltanen}}, \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Kuusela}}, \ and\ \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Piilo}},\ }\href {\doibase 10.1103/PhysRevA.103.032223} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {103}},\ \bibinfo {pages} {032223} (\bibinfo {year} {2021})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Reck}\ \emph {et~al.}(1994)\citenamefont {Reck}, \citenamefont {Zeilinger}, \citenamefont {Bernstein},\ and\ \citenamefont {Bertani}}]{PhysRevLett.73.58} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Reck}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Zeilinger}}, \bibinfo {author} {\bibfnamefont {H.~J.}\ \bibnamefont {Bernstein}}, \ and\ \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Bertani}},\ }\href {\doibase 10.1103/PhysRevLett.73.58} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. 
Lett.}\ }\textbf {\bibinfo {volume} {73}},\ \bibinfo {pages} {58} (\bibinfo {year} {1994})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Bouland}\ and\ \citenamefont {Aaronson}(2014)}]{PhysRevA.89.062316} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Bouland}}\ and\ \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Aaronson}},\ }\href {\doibase 10.1103/PhysRevA.89.062316} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {89}},\ \bibinfo {pages} {062316} (\bibinfo {year} {2014})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Cronin}\ \emph {et~al.}(2009)\citenamefont {Cronin}, \citenamefont {Schmiedmayer},\ and\ \citenamefont {Pritchard}}]{RevModPhys.81.1051} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {A.~D.}\ \bibnamefont {Cronin}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Schmiedmayer}}, \ and\ \bibinfo {author} {\bibfnamefont {D.~E.}\ \bibnamefont {Pritchard}},\ }\href {\doibase 10.1103/RevModPhys.81.1051} {\bibfield {journal} {\bibinfo {journal} {Rev. Mod. Phys.}\ }\textbf {\bibinfo {volume} {81}},\ \bibinfo {pages} {1051} (\bibinfo {year} {2009})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Nairz}\ \emph {et~al.}(2003)\citenamefont {Nairz}, \citenamefont {Arndt},\ and\ \citenamefont {Zeilinger}}]{nairz2003quantum} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {O.}~\bibnamefont {Nairz}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Arndt}}, \ and\ \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Zeilinger}},\ }\href {https://doi.org/10.1119/1.1531580} {\bibfield {journal} {\bibinfo {journal} {American J. 
Phys.}\ }\textbf {\bibinfo {volume} {71}},\ \bibinfo {pages} {319} (\bibinfo {year} {2003})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Sadana}\ \emph {et~al.}(2019)\citenamefont {Sadana}, \citenamefont {Sanders},\ and\ \citenamefont {Sinha}}]{sadana2019double} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Sadana}}, \bibinfo {author} {\bibfnamefont {B.~C.}\ \bibnamefont {Sanders}}, \ and\ \bibinfo {author} {\bibfnamefont {U.}~\bibnamefont {Sinha}},\ }\href {https://iopscience.iop.org/article/10.1088/1367-2630/ab4f46/meta} {\bibfield {journal} {\bibinfo {journal} {New J. Phys.}\ }\textbf {\bibinfo {volume} {21}},\ \bibinfo {pages} {113022} (\bibinfo {year} {2019})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Cari{\~n}e}\ \emph {et~al.}(2020)\citenamefont {Cari{\~n}e}, \citenamefont {Ca{\~n}as}, \citenamefont {Skrzypczyk}, \citenamefont {{\v{S}}upi{\'c}}, \citenamefont {Guerrero}, \citenamefont {Garcia}, \citenamefont {Pereira}, \citenamefont {Prosser}, \citenamefont {Xavier}, \citenamefont {Delgado} \emph {et~al.}}]{carine2020multi} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Cari{\~n}e}}, \bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Ca{\~n}as}}, \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Skrzypczyk}}, \bibinfo {author} {\bibfnamefont {I.}~\bibnamefont {{\v{S}}upi{\'c}}}, \bibinfo {author} {\bibfnamefont {N.}~\bibnamefont {Guerrero}}, \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Garcia}}, \bibinfo {author} {\bibfnamefont {L.}~\bibnamefont {Pereira}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Prosser}}, \bibinfo {author} {\bibfnamefont {G.~B.}\ \bibnamefont {Xavier}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Delgado}}, \emph {et~al.},\ }\href {https://www.osapublishing.org/optica/fulltext.cfm?uri=optica-7-5-542&id=431844} {\bibfield {journal} {\bibinfo {journal} {Optica}\ }\textbf {\bibinfo {volume} {7}},\ \bibinfo {pages} {542} (\bibinfo 
{year} {2020})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Margalit}\ \emph {et~al.}(2021)\citenamefont {Margalit}, \citenamefont {Dobkowski}, \citenamefont {Zhou}, \citenamefont {Amit}, \citenamefont {Japha}, \citenamefont {Moukouri}, \citenamefont {Rohrlich}, \citenamefont {Mazumdar}, \citenamefont {Bose}, \citenamefont {Henkel},\ and\ \citenamefont {Folman}}]{margalit2021realization} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont {Margalit}}, \bibinfo {author} {\bibfnamefont {O.}~\bibnamefont {Dobkowski}}, \bibinfo {author} {\bibfnamefont {Z.}~\bibnamefont {Zhou}}, \bibinfo {author} {\bibfnamefont {O.}~\bibnamefont {Amit}}, \bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont {Japha}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Moukouri}}, \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Rohrlich}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Mazumdar}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Bose}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Henkel}}, \ and\ \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Folman}},\ }\href {\doibase 10.1126/sciadv.abg2879} {\bibfield {journal} {\bibinfo {journal} {Sci.
Adv.}\ }\textbf {\bibinfo {volume} {7}},\ \bibinfo {pages} {eabg2879} (\bibinfo {year} {2021})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Lee}\ \emph {et~al.}(2022)\citenamefont {Lee}, \citenamefont {Lin}, \citenamefont {Miranowicz}, \citenamefont {Nori}, \citenamefont {Ku},\ and\ \citenamefont {Chen}}]{lee2022steering} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {K.-Y.}\ \bibnamefont {Lee}}, \bibinfo {author} {\bibfnamefont {J.-D.}\ \bibnamefont {Lin}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Miranowicz}}, \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Nori}}, \bibinfo {author} {\bibfnamefont {H.-Y.}\ \bibnamefont {Ku}}, \ and\ \bibinfo {author} {\bibfnamefont {Y.-N.}\ \bibnamefont {Chen}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {arXiv:2206.03760}\ } (\bibinfo {year} {2022})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Chan}\ \emph {et~al.}(2022)\citenamefont {Chan}, \citenamefont {Huang}, \citenamefont {Lin}, \citenamefont {Ku}, \citenamefont {Chen}, \citenamefont {Chen},\ and\ \citenamefont {Chen}}]{chan2022maxwell} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {F.-J.}\ \bibnamefont {Chan}}, \bibinfo {author} {\bibfnamefont {Y.-T.}\ \bibnamefont {Huang}}, \bibinfo {author} {\bibfnamefont {J.-D.}\ \bibnamefont {Lin}}, \bibinfo {author} {\bibfnamefont {H.-Y.}\ \bibnamefont {Ku}}, \bibinfo {author} {\bibfnamefont {J.-S.}\ \bibnamefont {Chen}}, \bibinfo {author} {\bibfnamefont {H.-B.}\ \bibnamefont {Chen}}, \ and\ \bibinfo {author} {\bibfnamefont {Y.-N.}\ \bibnamefont {Chen}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {arXiv:2206.05921}\ } (\bibinfo {year} {2022})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Breuer}\ and\ \citenamefont {Petruccione}(2002)}]{breuer2002theory} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {H.-P.}\ \bibnamefont {Breuer}}\ and\ \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Petruccione}},\ }\href@noop {} {\emph 
{\bibinfo {title} {The theory of open quantum systems}}}\ (\bibinfo {publisher} {Oxford University Press on Demand},\ \bibinfo {year} {2002})\BibitemShut {NoStop} \bibitem [{\citenamefont {Bylander}\ \emph {et~al.}(2011)\citenamefont {Bylander}, \citenamefont {Gustavsson}, \citenamefont {Yan}, \citenamefont {Yoshihara}, \citenamefont {Harrabi}, \citenamefont {Fitch}, \citenamefont {Cory}, \citenamefont {Nakamura}, \citenamefont {Tsai},\ and\ \citenamefont {Oliver}}]{bylander2011noise} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Bylander}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Gustavsson}}, \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Yan}}, \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Yoshihara}}, \bibinfo {author} {\bibfnamefont {K.}~\bibnamefont {Harrabi}}, \bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Fitch}}, \bibinfo {author} {\bibfnamefont {D.~G.}\ \bibnamefont {Cory}}, \bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont {Nakamura}}, \bibinfo {author} {\bibfnamefont {J.-S.}\ \bibnamefont {Tsai}}, \ and\ \bibinfo {author} {\bibfnamefont {W.~D.}\ \bibnamefont {Oliver}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Nat. Phys.}\ }\textbf {\bibinfo {volume} {7}},\ \bibinfo {pages} {565} (\bibinfo {year} {2011})}\BibitemShut {NoStop} \bibitem [{\citenamefont {\'Alvarez}\ and\ \citenamefont {Suter}(2011)}]{PhysRevLett.107.230501} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {G.~A.}\ \bibnamefont {\'Alvarez}}\ and\ \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Suter}},\ }\href {\doibase 10.1103/PhysRevLett.107.230501} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. 
Lett.}\ }\textbf {\bibinfo {volume} {107}},\ \bibinfo {pages} {230501} (\bibinfo {year} {2011})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Bretschneider}\ \emph {et~al.}(2012)\citenamefont {Bretschneider}, \citenamefont {\'Alvarez}, \citenamefont {Kurizki},\ and\ \citenamefont {Frydman}}]{PhysRevLett.108.140403} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {C.~O.}\ \bibnamefont {Bretschneider}}, \bibinfo {author} {\bibfnamefont {G.~A.}\ \bibnamefont {\'Alvarez}}, \bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Kurizki}}, \ and\ \bibinfo {author} {\bibfnamefont {L.}~\bibnamefont {Frydman}},\ }\href {\doibase 10.1103/PhysRevLett.108.140403} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {108}},\ \bibinfo {pages} {140403} (\bibinfo {year} {2012})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Virz\`{\i}}\ \emph {et~al.}(2022)\citenamefont {Virz\`{\i}}, \citenamefont {Avella}, \citenamefont {Piacentini}, \citenamefont {Gramegna}, \citenamefont {Opatrn\'y}, \citenamefont {Kofman}, \citenamefont {Kurizki}, \citenamefont {Gherardini}, \citenamefont {Caruso}, \citenamefont {Degiovanni},\ and\ \citenamefont {Genovese}}]{PhysRevLett.129.030401} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Virz\`{\i}}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Avella}}, \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Piacentini}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Gramegna}}, \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Opatrn\'y}}, \bibinfo {author} {\bibfnamefont {A.~G.}\ \bibnamefont {Kofman}}, \bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Kurizki}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Gherardini}}, \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Caruso}}, \bibinfo {author} {\bibfnamefont {I.~P.}\ \bibnamefont {Degiovanni}}, \ and\ \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Genovese}},\ }\href
{\doibase 10.1103/PhysRevLett.129.030401} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {129}},\ \bibinfo {pages} {030401} (\bibinfo {year} {2022})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Lu}\ and\ \citenamefont {Grover}(2021)}]{PRXQuantum.2.040319} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {T.-C.}\ \bibnamefont {Lu}}\ and\ \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Grover}},\ }\href {\doibase 10.1103/PRXQuantum.2.040319} {\bibfield {journal} {\bibinfo {journal} {PRX Quantum}\ }\textbf {\bibinfo {volume} {2}},\ \bibinfo {pages} {040319} (\bibinfo {year} {2021})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Ippoliti}\ and\ \citenamefont {Khemani}(2021)}]{PhysRevLett.126.060501} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Ippoliti}}\ and\ \bibinfo {author} {\bibfnamefont {V.}~\bibnamefont {Khemani}},\ }\href {\doibase 10.1103/PhysRevLett.126.060501} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {126}},\ \bibinfo {pages} {060501} (\bibinfo {year} {2021})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Dicke}(1954)}]{PhysRev.93.99} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {R.~H.}\ \bibnamefont {Dicke}},\ }\href {\doibase 10.1103/PhysRev.93.99} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev.}\ }\textbf {\bibinfo {volume} {93}},\ \bibinfo {pages} {99} (\bibinfo {year} {1954})}\BibitemShut {NoStop} \bibitem [{\citenamefont {DeVoe}\ and\ \citenamefont {Brewer}(1996)}]{PhysRevLett.76.2049} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {R.~G.}\ \bibnamefont {DeVoe}}\ and\ \bibinfo {author} {\bibfnamefont {R.~G.}\ \bibnamefont {Brewer}},\ }\href {\doibase 10.1103/PhysRevLett.76.2049} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. 
Lett.}\ }\textbf {\bibinfo {volume} {76}},\ \bibinfo {pages} {2049} (\bibinfo {year} {1996})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Gross}\ and\ \citenamefont {Haroche}(1982)}]{gross1982superradiance} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Gross}}\ and\ \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Haroche}},\ }\href {https://doi.org/10.1016/0370-1573(82)90102-8} {\bibfield {journal} {\bibinfo {journal} {Phys. Rep.}\ }\textbf {\bibinfo {volume} {93}},\ \bibinfo {pages} {301} (\bibinfo {year} {1982})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Chen}\ \emph {et~al.}(2005)\citenamefont {Chen}, \citenamefont {Li}, \citenamefont {Chuu},\ and\ \citenamefont {Brandes}}]{chen2005proposal} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {Y.-N.}\ \bibnamefont {Chen}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Li}}, \bibinfo {author} {\bibfnamefont {D.-S.}\ \bibnamefont {Chuu}}, \ and\ \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Brandes}},\ }\href {https://iopscience.iop.org/article/10.1088/1367-2630/7/1/172/meta} {\bibfield {journal} {\bibinfo {journal} {New J. 
Phys.}\ }\textbf {\bibinfo {volume} {7}},\ \bibinfo {pages} {172} (\bibinfo {year} {2005})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Chen}\ \emph {et~al.}(2013{\natexlab{a}})\citenamefont {Chen}, \citenamefont {Chen}, \citenamefont {Lambert}, \citenamefont {Li}, \citenamefont {Chen},\ and\ \citenamefont {Nori}}]{PhysRevA.88.052320} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {Y.-N.}\ \bibnamefont {Chen}}, \bibinfo {author} {\bibfnamefont {S.-L.}\ \bibnamefont {Chen}}, \bibinfo {author} {\bibfnamefont {N.}~\bibnamefont {Lambert}}, \bibinfo {author} {\bibfnamefont {C.-M.}\ \bibnamefont {Li}}, \bibinfo {author} {\bibfnamefont {G.-Y.}\ \bibnamefont {Chen}}, \ and\ \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Nori}},\ }\href {\doibase 10.1103/PhysRevA.88.052320} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {88}},\ \bibinfo {pages} {052320} (\bibinfo {year} {2013}{\natexlab{a}})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Chen}\ \emph {et~al.}(2003)\citenamefont {Chen}, \citenamefont {Chuu},\ and\ \citenamefont {Brandes}}]{PhysRevLett.90.166802} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {Y.~N.}\ \bibnamefont {Chen}}, \bibinfo {author} {\bibfnamefont {D.~S.}\ \bibnamefont {Chuu}}, \ and\ \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Brandes}},\ }\href {\doibase 10.1103/PhysRevLett.90.166802} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {90}},\ \bibinfo {pages} {166802} (\bibinfo {year} {2003})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Brandes}(2005)}]{brandes2005coherent} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Brandes}},\ }\href {https://doi.org/10.1016/j.physrep.2004.12.002} {\bibfield {journal} {\bibinfo {journal} {Phys. 
Rep.}\ }\textbf {\bibinfo {volume} {408}},\ \bibinfo {pages} {315} (\bibinfo {year} {2005})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Chen}\ \emph {et~al.}(2013{\natexlab{b}})\citenamefont {Chen}, \citenamefont {Chen}, \citenamefont {Li},\ and\ \citenamefont {Chen}}]{chen2013examining} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {G.-Y.}\ \bibnamefont {Chen}}, \bibinfo {author} {\bibfnamefont {S.-L.}\ \bibnamefont {Chen}}, \bibinfo {author} {\bibfnamefont {C.-M.}\ \bibnamefont {Li}}, \ and\ \bibinfo {author} {\bibfnamefont {Y.-N.}\ \bibnamefont {Chen}},\ }\href {https://www.nature.com/articles/srep02514?origin=ppub} {\bibfield {journal} {\bibinfo {journal} {Sci. Rep.}\ }\textbf {\bibinfo {volume} {3}},\ \bibinfo {pages} {2514} (\bibinfo {year} {2013}{\natexlab{b}})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Shammah}\ \emph {et~al.}(2018)\citenamefont {Shammah}, \citenamefont {Ahmed}, \citenamefont {Lambert}, \citenamefont {De~Liberato},\ and\ \citenamefont {Nori}}]{PhysRevA.98.063815} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {N.}~\bibnamefont {Shammah}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Ahmed}}, \bibinfo {author} {\bibfnamefont {N.}~\bibnamefont {Lambert}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {De~Liberato}}, \ and\ \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Nori}},\ }\href {\doibase 10.1103/PhysRevA.98.063815} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev.
A}\ }\textbf {\bibinfo {volume} {98}},\ \bibinfo {pages} {063815} (\bibinfo {year} {2018})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Ferraro}\ \emph {et~al.}(2018)\citenamefont {Ferraro}, \citenamefont {Campisi}, \citenamefont {Andolina}, \citenamefont {Pellegrini},\ and\ \citenamefont {Polini}}]{PhysRevLett.120.117702} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Ferraro}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Campisi}}, \bibinfo {author} {\bibfnamefont {G.~M.}\ \bibnamefont {Andolina}}, \bibinfo {author} {\bibfnamefont {V.}~\bibnamefont {Pellegrini}}, \ and\ \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Polini}},\ }\href {\doibase 10.1103/PhysRevLett.120.117702} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {120}},\ \bibinfo {pages} {117702} (\bibinfo {year} {2018})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Quach}\ \emph {et~al.}(2022)\citenamefont {Quach}, \citenamefont {McGhee}, \citenamefont {Ganzer}, \citenamefont {Rouse}, \citenamefont {Lovett}, \citenamefont {Gauger}, \citenamefont {Keeling}, \citenamefont {Cerullo}, \citenamefont {Lidzey},\ and\ \citenamefont {Virgili}}]{quach2022superabsorption} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {J.~Q.}\ \bibnamefont {Quach}}, \bibinfo {author} {\bibfnamefont {K.~E.}\ \bibnamefont {McGhee}}, \bibinfo {author} {\bibfnamefont {L.}~\bibnamefont {Ganzer}}, \bibinfo {author} {\bibfnamefont {D.~M.}\ \bibnamefont {Rouse}}, \bibinfo {author} {\bibfnamefont {B.~W.}\ \bibnamefont {Lovett}}, \bibinfo {author} {\bibfnamefont {E.~M.}\ \bibnamefont {Gauger}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Keeling}}, \bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Cerullo}}, \bibinfo {author} {\bibfnamefont {D.~G.}\ \bibnamefont {Lidzey}}, \ and\ \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Virgili}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Sci. 
adv.}\ }\textbf {\bibinfo {volume} {8}},\ \bibinfo {pages} {eabk3160} (\bibinfo {year} {2022})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Kamimura}\ \emph {et~al.}(2022)\citenamefont {Kamimura}, \citenamefont {Hakoshima}, \citenamefont {Matsuzaki}, \citenamefont {Yoshida},\ and\ \citenamefont {Tokura}}]{PhysRevLett.128.180602} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Kamimura}}, \bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Hakoshima}}, \bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont {Matsuzaki}}, \bibinfo {author} {\bibfnamefont {K.}~\bibnamefont {Yoshida}}, \ and\ \bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont {Tokura}},\ }\href {\doibase 10.1103/PhysRevLett.128.180602} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {128}},\ \bibinfo {pages} {180602} (\bibinfo {year} {2022})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Megier}\ \emph {et~al.}(2017)\citenamefont {Megier}, \citenamefont {Chru{\'s}ci{\'n}ski}, \citenamefont {Piilo},\ and\ \citenamefont {Strunz}}]{megier2017eternal} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {N.}~\bibnamefont {Megier}}, \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Chru{\'s}ci{\'n}ski}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Piilo}}, \ and\ \bibinfo {author} {\bibfnamefont {W.~T.}\ \bibnamefont {Strunz}},\ }\href {https://www.nature.com/articles/s41598-017-06059-5} {\bibfield {journal} {\bibinfo {journal} {Sci. 
Rep.}\ }\textbf {\bibinfo {volume} {7}},\ \bibinfo {pages} {6379} (\bibinfo {year} {2017})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Chru\ifmmode \acute{s}\else \'{s}\fi{}ci\ifmmode~\acute{n}\else \'{n}\fi{}ski}\ and\ \citenamefont {Siudzi\ifmmode~\acute{n}\else \'{n}\fi{}ska}(2016)}]{PhysRevA.94.022118} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Chru\ifmmode \acute{s}\else \'{s}\fi{}ci\ifmmode~\acute{n}\else \'{n}\fi{}ski}}\ and\ \bibinfo {author} {\bibfnamefont {K.}~\bibnamefont {Siudzi\ifmmode~\acute{n}\else \'{n}\fi{}ska}},\ }\href {\doibase 10.1103/PhysRevA.94.022118} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {94}},\ \bibinfo {pages} {022118} (\bibinfo {year} {2016})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Breuer}\ \emph {et~al.}(2018)\citenamefont {Breuer}, \citenamefont {Amato},\ and\ \citenamefont {Vacchini}}]{breuer2018mixing} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {H.-P.}\ \bibnamefont {Breuer}}, \bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Amato}}, \ and\ \bibinfo {author} {\bibfnamefont {B.}~\bibnamefont {Vacchini}},\ }\href {https://iopscience.iop.org/article/10.1088/1367-2630/aab2f9/meta} {\bibfield {journal} {\bibinfo {journal} {New J. Phys.}\ }\textbf {\bibinfo {volume} {20}},\ \bibinfo {pages} {043007} (\bibinfo {year} {2018})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Jagadish}\ \emph {et~al.}(2020)\citenamefont {Jagadish}, \citenamefont {Srikanth},\ and\ \citenamefont {Petruccione}}]{PhysRevA.101.062304} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {V.}~\bibnamefont {Jagadish}}, \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Srikanth}}, \ and\ \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Petruccione}},\ }\href {\doibase 10.1103/PhysRevA.101.062304} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. 
A}\ }\textbf {\bibinfo {volume} {101}},\ \bibinfo {pages} {062304} (\bibinfo {year} {2020})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Siudzi\ifmmode~\acute{n}\else \'{n}\fi{}ska}(2021)}]{PhysRevA.103.022605} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {K.}~\bibnamefont {Siudzi\ifmmode~\acute{n}\else \'{n}\fi{}ska}},\ }\href {\doibase 10.1103/PhysRevA.103.022605} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {103}},\ \bibinfo {pages} {022605} (\bibinfo {year} {2021})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Kofman}\ and\ \citenamefont {Kurizki}(2001)}]{PhysRevLett.87.270405} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {A.~G.}\ \bibnamefont {Kofman}}\ and\ \bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Kurizki}},\ }\href {\doibase 10.1103/PhysRevLett.87.270405} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {87}},\ \bibinfo {pages} {270405} (\bibinfo {year} {2001})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Higgins}\ \emph {et~al.}(2014)\citenamefont {Higgins}, \citenamefont {Benjamin}, \citenamefont {Stace}, \citenamefont {Milburn}, \citenamefont {Lovett},\ and\ \citenamefont {Gauger}}]{higgins2014superabsorption} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {K.~D.~B.}\ \bibnamefont {Higgins}}, \bibinfo {author} {\bibfnamefont {S.~C.}\ \bibnamefont {Benjamin}}, \bibinfo {author} {\bibfnamefont {T.~M.}\ \bibnamefont {Stace}}, \bibinfo {author} {\bibfnamefont {G.~J.}\ \bibnamefont {Milburn}}, \bibinfo {author} {\bibfnamefont {B.~W.}\ \bibnamefont {Lovett}}, \ and\ \bibinfo {author} {\bibfnamefont {E.~M.}\ \bibnamefont {Gauger}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Nat. 
Commun.}\ }\textbf {\bibinfo {volume} {5}},\ \bibinfo {pages} {4705} (\bibinfo {year} {2014})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Yang}\ \emph {et~al.}(2021)\citenamefont {Yang}, \citenamefont {Oh}, \citenamefont {Han}, \citenamefont {Son}, \citenamefont {Kim}, \citenamefont {Kim}, \citenamefont {Lee},\ and\ \citenamefont {An}}]{yang2021realization} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Yang}}, \bibinfo {author} {\bibfnamefont {S.-h.}\ \bibnamefont {Oh}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Han}}, \bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Son}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Kim}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Kim}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Lee}}, \ and\ \bibinfo {author} {\bibfnamefont {K.}~\bibnamefont {An}},\ }\href {https://www.nature.com/articles/s41566-021-00770-6} {\bibfield {journal} {\bibinfo {journal} {Nat. Photon.}\ }\textbf {\bibinfo {volume} {15}},\ \bibinfo {pages} {272} (\bibinfo {year} {2021})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Oreshkov}\ \emph {et~al.}(2012)\citenamefont {Oreshkov}, \citenamefont {Costa},\ and\ \citenamefont {Brukner}}]{oreshkov2012quantum} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {O.}~\bibnamefont {Oreshkov}}, \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Costa}}, \ and\ \bibinfo {author} {\bibfnamefont {{\v{C}}.}~\bibnamefont {Brukner}},\ }\href {https://doi.org/10.1038/ncomms2076} {\bibfield {journal} {\bibinfo {journal} {Nat. 
Commun.}\ }\textbf {\bibinfo {volume} {3}},\ \bibinfo {pages} {1092} (\bibinfo {year} {2012})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Rubino}\ \emph {et~al.}(2017)\citenamefont {Rubino}, \citenamefont {Rozema}, \citenamefont {Feix}, \citenamefont {Ara{\'u}jo}, \citenamefont {Zeuner}, \citenamefont {Procopio}, \citenamefont {Brukner},\ and\ \citenamefont {Walther}}]{rubino2017experimental} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Rubino}}, \bibinfo {author} {\bibfnamefont {L.~A.}\ \bibnamefont {Rozema}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Feix}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Ara{\'u}jo}}, \bibinfo {author} {\bibfnamefont {J.~M.}\ \bibnamefont {Zeuner}}, \bibinfo {author} {\bibfnamefont {L.~M.}\ \bibnamefont {Procopio}}, \bibinfo {author} {\bibfnamefont {{\v{C}}.}~\bibnamefont {Brukner}}, \ and\ \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Walther}},\ }\href {https://www.science.org/doi/10.1126/sciadv.1602589} {\bibfield {journal} {\bibinfo {journal} {Sci. Adv.}\ }\textbf {\bibinfo {volume} {3}},\ \bibinfo {pages} {e1602589} (\bibinfo {year} {2017})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Ebler}\ \emph {et~al.}(2018)\citenamefont {Ebler}, \citenamefont {Salek},\ and\ \citenamefont {Chiribella}}]{PhysRevLett.120.120502} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Ebler}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Salek}}, \ and\ \bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Chiribella}},\ }\href {\doibase 10.1103/PhysRevLett.120.120502} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. 
Lett.}\ }\textbf {\bibinfo {volume} {120}},\ \bibinfo {pages} {120502} (\bibinfo {year} {2018})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Zych}\ \emph {et~al.}(2019)\citenamefont {Zych}, \citenamefont {Costa}, \citenamefont {Pikovski},\ and\ \citenamefont {Brukner}}]{zych2019bell} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Zych}}, \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Costa}}, \bibinfo {author} {\bibfnamefont {I.}~\bibnamefont {Pikovski}}, \ and\ \bibinfo {author} {\bibfnamefont {{\v{C}}.}~\bibnamefont {Brukner}},\ }\href {https://doi.org/10.1038/s41467-019-11579-x} {\bibfield {journal} {\bibinfo {journal} {Nat. Commun.}\ }\textbf {\bibinfo {volume} {10}},\ \bibinfo {pages} {3772} (\bibinfo {year} {2019})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Breuer}\ \emph {et~al.}(2009)\citenamefont {Breuer}, \citenamefont {Laine},\ and\ \citenamefont {Piilo}}]{PhysRevLett.103.210401} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {H.-P.}\ \bibnamefont {Breuer}}, \bibinfo {author} {\bibfnamefont {E.-M.}\ \bibnamefont {Laine}}, \ and\ \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Piilo}},\ }\href {\doibase 10.1103/PhysRevLett.103.210401} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {103}},\ \bibinfo {pages} {210401} (\bibinfo {year} {2009})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Breuer}\ \emph {et~al.}(2016)\citenamefont {Breuer}, \citenamefont {Laine}, \citenamefont {Piilo},\ and\ \citenamefont {Vacchini}}]{RevModPhys.88.021002} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {H.-P.}\ \bibnamefont {Breuer}}, \bibinfo {author} {\bibfnamefont {E.-M.}\ \bibnamefont {Laine}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Piilo}}, \ and\ \bibinfo {author} {\bibfnamefont {B.}~\bibnamefont {Vacchini}},\ }\href {\doibase 10.1103/RevModPhys.88.021002} {\bibfield {journal} {\bibinfo {journal} {Rev. Mod. 
Phys.}\ }\textbf {\bibinfo {volume} {88}},\ \bibinfo {pages} {021002} (\bibinfo {year} {2016})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Addis}\ \emph {et~al.}(2014)\citenamefont {Addis}, \citenamefont {Brebner}, \citenamefont {Haikka},\ and\ \citenamefont {Maniscalco}}]{PhysRevA.89.024101} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Addis}}, \bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Brebner}}, \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Haikka}}, \ and\ \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Maniscalco}},\ }\href {\doibase 10.1103/PhysRevA.89.024101} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {89}},\ \bibinfo {pages} {024101} (\bibinfo {year} {2014})}\BibitemShut {NoStop} \end{thebibliography} \end{document}
\begin{document} \begin{titlepage} {\vspace*{-1in}\hspace*{-.5in} \parbox{7.25in}{ \setlength{\baselineskip}{13pt} \makebox{\ }\hfill {\footnotesize College of Engineering and Applied Sciences} \\ \makebox{\ }\hfill {\footnotesize Computer Science Department} \\ \makebox{\ }\\ } \vspace{-.775in}} \epsfxsize=3.15in \epsfclipon \hspace{-0.5in}{\raggedright{ \epsffile{logo_A2_pms124_269.eps} }} \vspace{2in} \begin{center} {\Huge\bf Notes on Lynch-Morawska Systems}\\[+25pt] \end{center} \vspace{1.5in} \begin{center} {\large\bf Daniel S. Hono II\\[+3pt] Namrata Galatage\\[+3pt] Kimberly A. Gero\\[+3pt] Paliath Narendran\\[+3pt] Ananya Subburathinam}\\ \end{center} \end{titlepage} \date{} \thispagestyle{plain} \begin{abstract} In this paper we investigate convergent term rewriting systems that conform to the criteria set out by Christopher Lynch and Barbara Morawska in their seminal paper \emph{``Basic Syntactic Mutation.''} The equational unification problem modulo such a rewrite system is solvable in polynomial time. In this paper, we derive properties of such a system, which we call an $LM$-system. We show, in particular, that the rewrite rules in an $LM$-system have no left- or right-overlaps. We also show that despite the restricted nature of an $LM$-system, important problems, such as the \emph{deduction problem} in cryptographic protocol analysis (also called the \emph{cap problem}), remain undecidable for $LM$-systems.

\noindent {\em Keywords:} Equational unification, Term rewriting, Polynomial-time complexity, \textbf{NP}-completeness. \end{abstract} \section{Introduction} Unification modulo an equational theory $E$ (equational unification or $E$-unification) is an undecidable problem in general. Even in cases where it is decidable, it is often of high complexity.
In their seminal paper ``Basic Syntactic Mutation,'' Christopher Lynch and Barbara Morawska present syntactic criteria on equational axioms~$E$ that guarantee a polynomial-time algorithm for the corresponding $E$-unification problem. As far as we know, these are the only purely syntactic criteria that ensure a polynomial-time algorithm for unifiability. \ignore{ We also exhibit theories whose unification problems do not fit all of the constraints but still have polynomial time algorithms. } In~\cite{NotesOnBSM} it was shown that relaxing any of the constraints imposed upon a term-rewriting system $R$ by the conditions given by Lynch and Morawska results in a unification problem that is $NP$-hard. Thus, these conditions are tight in the sense that relaxing any of them leads to an intractable unification problem (assuming $P \not = NP$). In this work, we continue to investigate the consequences of the syntactic criteria given in ``Basic Syntactic Mutation.'' As in~\cite{NotesOnBSM}, we consider the case where $E$ forms a convergent and forward-closed term-rewriting system, which we call an $LM$-System. This definition differs from that of~\cite{LynchMorawska}, in which $E$ is saturated by paramodulation and not necessarily convergent. The criterion of \emph{determinism} introduced in~\cite{LynchMorawska} remains essentially unchanged. We give a structural characterization of these systems by showing that if $R$ is an $LM$-System, then there are no overlaps between the left-hand sides of any rules in $R$ and there are no forward-overlaps between any right-hand side and a left-hand side. This characterization shows that $LM$-Systems form a very restricted subclass of term-rewriting systems. Any term-rewriting system that contains overlaps of these kinds cannot be an $LM$-System.
Using these results, we show that saturation by paramodulation is equivalent to forward-closure when considering convergent term-rewriting systems that satisfy all of the remaining conditions for $LM$-Systems. Despite their restrictive character, we show in Section~5 that the cap problem, which is undecidable in general, remains undecidable when restricted to $LM$-Systems. The cap problem (also called the deduction problem) originates from the field of cryptographic protocol analysis. This result shows that $LM$-Systems are nevertheless strong enough to encode important undecidable problems. The reduction considered is essentially the same as that given in~\cite{NotesOnBSM} to show that it is undecidable whether a term-rewriting system satisfying all the other conditions of $LM$-Systems is subterm-collapsing. \ignore{ In the interest of brevity we have omitted most of the proofs of results appearing in this paper. The interested reader can find all of the details in the technical report.\footnote{arxiv link here} } \section{Notation and Preliminaries} We assume the reader is familiar with the usual notions and concepts in term rewriting systems~\cite{Term} and equational unification~\cite{BaaderSnyd-01}. We consider rewrite systems over ranked signatures, usually denoted $\Sigma$, and a possibly infinite set of variables, usually denoted $\mathcal{X}$. The set of all terms over~$\Sigma$ and~$\mathcal{X}$ is denoted by $T(\Sigma, \mathcal{X})$. An \emph{equation} is an ordered pair of terms $(s, \, t)$, usually written as $s \approx t$. Here $s$~is the left-hand side and $t$~is the right-hand side of the equation~\cite{Term}. A rewrite rule is an equation $s \approx t$ where $\var(t) \subseteq \var(s)$, usually written as~$s \to t$. A term rewriting system is a set of rewrite rules.
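These basic definitions can be made concrete in a short executable sketch. The tuple encoding of terms and the function names below are our own illustrative assumptions, not part of the paper:

```python
# Hypothetical encoding used for illustration (not from the paper): a term
# is a variable, written as a plain string such as 'x', or a tuple
# (symbol, arg1, ..., argn); a constant is a 1-tuple such as ('b',).
def variables(t):
    """Return the set var(t) of variables occurring in the term t."""
    if isinstance(t, str):
        return {t}
    return set().union(*(variables(a) for a in t[1:]))

def is_rewrite_rule(lhs, rhs):
    """The equation lhs ~ rhs can be oriented into the rewrite rule
    lhs -> rhs exactly when var(rhs) is a subset of var(lhs)."""
    return variables(rhs) <= variables(lhs)

# f(x, i(x)) -> g(x) is a rule; g(x) -> f(x, y) is not, since y is fresh.
assert is_rewrite_rule(('f', 'x', ('i', 'x')), ('g', 'x'))
assert not is_rewrite_rule(('g', 'x'), ('f', 'x', 'y'))
```

The same encoding is convenient for the notions of reducibility and overlap that appear later.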
A set of equations $E$ is \emph{subterm-collapsing}\footnote{Non-subterm-collapsing theories are called \emph{simple} theories in~\cite{BHSS}.} if and only if there are terms $t$ and $u$ such that $t$ is a proper subterm of $u$ and $E$ $\vdash$ $t \approx u$ (or $t =_E^{} u$)~\cite{BHSS}. A set of equations $E$ is \emph{variable-preserving}\footnote{Variable-preserving theories are also called \emph{non-erasing} or \emph{regular} theories~\cite{Term}.} if and only if for every equation $t \approx u$ in $E$, $\var(t) = \var(u)$~\cite{Ohlebusch95}. A term rewriting system is \emph{convergent} if and only if it is confluent and terminating~\cite{Term}. If $R$ is a term rewriting system, we denote by $IRR(R)$ the set of all $R$-irreducible terms, i.e., the set of all $R$-normal forms. A term $t$ is \emph{$\bar{\epsilon}$-irreducible} modulo a rewriting system~$R$ if and only if every \emph{proper} subterm of~$t$ is irreducible. A term $t$ is an {\em innermost redex\/} of a rewrite system $R$ if all proper subterms of $t$ are irreducible and $t$ is \emph{reducible}, i.e., $t$ is \emph{$\bar{\epsilon}$-irreducible} and $t$ is an instance of the left-hand-side of a rule in $R$. The following definition is used in later sections to simplify the exposition. It is related to the above notion of $\bar{\epsilon}$-irreducibility: specifically, we define a normal form of a term that also depends on the reduction path used. More formally, we have: \begin{defn} Let $R$ be a rewrite-system. A term~$t$ is said to be an \textbf{$\bar{\epsilon}$-normal-form} of a term~$s$ if and only if $s \, \rightarrow_{R}^* \, t$, no proper subterm of~$t$ is reducible, and none of the rewrites in the sequence of reduction steps is at the root. \end{defn} Given a set of equations $E$, the set of ground instances of $E$ is denoted by $Gr(E)$. We assume a reduction order $\prec$ on terms which is total on ground terms.
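The reducibility and innermost-redex notions above are likewise executable. The sketch below is our own illustration, under an assumed tuple encoding of terms (variables as plain strings, applications as tuples); one-way matching stands in for the instance-of test:

```python
# Illustrative sketch (not the paper's notation): a term is a variable
# (a plain string) or a tuple (symbol, arg1, ..., argn).
def match(pattern, term, subst):
    """One-way matching: extend subst so subst(pattern) == term, else None."""
    if isinstance(pattern, str):                       # pattern variable
        if pattern in subst:
            return subst if subst[pattern] == term else None
        subst[pattern] = term
        return subst
    if isinstance(term, str) or pattern[0] != term[0] or len(pattern) != len(term):
        return None
    for p, t in zip(pattern[1:], term[1:]):
        if match(p, t, subst) is None:
            return None
    return subst

def reducible(term, rules):
    """term is reducible if some rule's lhs matches term or a subterm of it."""
    if isinstance(term, str):
        return False
    if any(match(lhs, term, {}) is not None for lhs, _ in rules):
        return True
    return any(reducible(arg, rules) for arg in term[1:])

def innermost_redex(term, rules):
    """term is an innermost redex iff it is reducible at the root while
    every proper subterm is irreducible (epsilon-bar-irreducible)."""
    at_root = not isinstance(term, str) and \
        any(match(lhs, term, {}) is not None for lhs, _ in rules)
    proper_irreducible = isinstance(term, str) or \
        not any(reducible(arg, rules) for arg in term[1:])
    return at_root and proper_irreducible

# With R = {f(x, i(x)) -> g(x), g(b) -> c}, the term f(b, i(b)) is an
# innermost redex, while g(f(b, i(b))) is not (its argument is reducible).
R = [(('f', 'x', ('i', 'x')), ('g', 'x')), (('g', ('b',)), ('c',))]
assert innermost_redex(('f', ('b',), ('i', ('b',))), R)
assert not innermost_redex(('g', ('f', ('b',), ('i', ('b',)))), R)
```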
We extend this order to equations as $(s \approx t) \prec (u \approx v)$ iff $\{s, t\} \prec_{mul} \{u, v\}$, where $\prec_{mul}$ is the multiset order induced by $\prec$. An equation~$e$ is \emph{redundant in $E$} if and only if every ground instance $\sigma(e)$~of~$e$ is a consequence of equations in~$Gr(E)$ which are smaller than $\sigma(e)$ modulo~$\prec$~\cite{LynchMorawska}. \subsection{Paramodulation} Lynch and Morawska define \emph{paramodulation}, which is an extension of the critical-pair rule. Since our focus is only on convergent term rewriting systems, this definition can be modified for rewrite rules as the following inference rule: \[\infer{\sigma(u[t]_p) \approx \sigma(v)}{u[s']_p \approx v &\quad s \to t}\] where $\sigma = mgu(s =_{}^{?} s')$ and $p \in \fpos(u)$. A set of equations $E$ is \emph{saturated by paramodulation} if all inferences among equations in $E$ using the above rule are redundant. In this work, we consider rewrite systems that are \emph{forward-closed} as opposed to saturated by paramodulation. A result in Section~4 will show that saturation by paramodulation and forward-closure are equivalent properties when all of the other $LM$-conditions obtain. \section{Forward Closures} Following Hermann~\cite{Hermann}, the {\em forward-closure\/} of a convergent term rewriting system~$R$ is defined in terms of the following operation on rules in~$R$: let $\rho_1^{} : l_1^{} \rightarrow r_1^{}$ and $\rho_2^{} : l_2^{} \rightarrow r_2^{}$ be two rules in~$R$ and let $p \in {\fpos}(r_1^{})$. Then \[ \rho_1^{} \, \rightsquigarrow_p^{} \, \rho_2^{} ~ = ~ \sigma (l_1^{} \rightarrow r_1^{}[r_2^{}]_p^{}) \] where $\sigma = mgu( r_1^{}|_p^{} =_{}^? l_2^{} )$.
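The operation $\rho_1 \rightsquigarrow_p \rho_2$ can be sketched directly from this definition. The code below is our own illustration, under an assumed tuple encoding of terms (variables as plain strings); positions are tuples of argument indices, with $()$ denoting the root, and variable renaming between the two rules is omitted:

```python
# Illustrative sketch of the forward-closure step rho1 ~>_p rho2.
def occurs(v, t):
    return t == v if isinstance(t, str) else any(occurs(v, a) for a in t[1:])

def apply_subst(s, t):
    if isinstance(t, str):
        return apply_subst(s, s[t]) if t in s else t
    return (t[0],) + tuple(apply_subst(s, a) for a in t[1:])

def unify(t1, t2, s=None):
    """Most general unifier of t1 and t2, as a dict, or None."""
    s = {} if s is None else s
    t1, t2 = apply_subst(s, t1), apply_subst(s, t2)
    if t1 == t2:
        return s
    if isinstance(t1, str):
        return None if occurs(t1, t2) else {**s, t1: t2}
    if isinstance(t2, str):
        return unify(t2, t1, s)
    if t1[0] != t2[0] or len(t1) != len(t2):
        return None
    for a, b in zip(t1[1:], t2[1:]):
        s = unify(a, b, s)
        if s is None:
            return None
    return s

def subterm(t, p):
    for i in p:
        t = t[i]
    return t

def replace(t, p, u):
    """Replace the subterm of t at position p by u."""
    if not p:
        return u
    return t[:p[0]] + (replace(t[p[0]], p[1:], u),) + t[p[0] + 1:]

def fc_step(rule1, rule2, p):
    """rho1 ~>_p rho2: unify r1|_p with l2 and rewrite inside r1."""
    (l1, r1), (l2, r2) = rule1, rule2
    sigma = unify(subterm(r1, p), l2)
    if sigma is None:
        return None
    return (apply_subst(sigma, l1), apply_subst(sigma, replace(r1, p, r2)))

# f(x, i(x)) -> g(x)  ~>_()  g(b) -> c  yields the rule  f(b, i(b)) -> c.
rho1 = (('f', 'x', ('i', 'x')), ('g', 'x'))
rho2 = (('g', ('b',)), ('c',))
assert fc_step(rho1, rho2, ()) == ((('f', ('b',), ('i', ('b',)))), ('c',))
```

A single step thus compresses two rewrites into one rule, which is the mechanism by which forward closure lets innermost redexes reach their normal forms in one step.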
Given rewrite systems $R_1^{}$ and $R_2^{}$ such that $R_2^{} \subseteq R_1^{}$, we define $R_1^{} \rightsquigarrow R_2^{}$ as the rules in: \begin{center} $\left\{ (l_1^{} \rightarrow r_1^{}) \, \rightsquigarrow_p^{} \, (l_2^{} \rightarrow r_2^{}) ~ \big| ~ (l_1^{} \rightarrow r_1^{}) \in R_1^{}, \; (l_2^{} \rightarrow r_2^{}) \in R_2^{} \; \mathrm{and} \; p \in {\fpos}(r_1^{}) \right\}$ \end{center} which are not redundant in~$R_1^{}$. We now define \begin{align*} {FC}_0^{} (R) &= R \shortintertext{and} {FC}_{k+1}^{} (R) &= {FC}_k^{}(R) \; \cup \; ({FC}_k^{}(R) \rightsquigarrow R) \shortintertext{for all $k \ge 0$. Finally,} FC(R) &= \bigcup\limits_{i=0}^{\infty} \, FC_i^{} (R) \intertext{Note that $FC_j^{} (R) \subseteq FC_{j+1}^{} (R)$ for all $j \ge 0$. A set of rewrite rules $R$ is forward-closed if and only if $FC(R) = R$. We also define the sets of ``new rules''} NR_0(R) &= R \shortintertext{and} NR_{k+1}(R) &= FC_{k+1}(R) \smallsetminus FC_k(R) \intertext{Then} FC(R) &= \bigcup\limits_{i=0}^{\infty} \, NR_i^{} (R) \end{align*} \ignore{ \begin{proposition} Given a convergent rewrite system $R$ and innermost redex $t$ with normal form $\widehat{t}$, if $t \to_R^k \widehat{t}$, then $t \to_{FC_{k'}(R)} \widehat{t}$, for any $k' \geq k - 1$. \label{lemma-fc-compress} \end{proposition} } \ignore{ \begin{proposition} Let $R$ be a convergent rewrite system, and $t$, $t'$ be terms, where $t$ is an innermost redex and $t \to_R^k t'$ for some $k$. Then $t \to_{FC_{k'}(R)} t'$ for any $k' \geq k - 1$. \label{prop-fc-compress} \end{proposition} \begin{proof} We will prove this by induction. When $k = 1$, $t \to_R^{} t'$ and thus $t \to_{FC_{0}^{} (R)} t'$. Since $FC_i^{} (R) \subseteq FC_{i+1}^{} (R)$ for all natural numbers~$i$, the result follows. Suppose that, for all $k \leq n$, given $t \to_R^k t'$, then $t \to_{FC_{k'}(R)} t'$ for any $k' \geq k - 1$. We will show that, given $t \to_R^{n+1} t'$, then $t \to_{FC_{k''}(R)} t'$ for any $k'' \geq n$.
If $t \to_R^{n+1} t'$, then there is a term $t''$ such that \[t \longrightarrow_R^n t'' \longrightarrow_R t'\] By our inductive hypothesis, $t \to_{FC_{k'}(R)} t''$ for any $k' \geq n - 1$. Let $k'$ be any such number. Let $\rho_1 = (l_1^{} \to r_1^{})$ be the rule in $FC_{k'}(R)$ such that $t \to_{\rho_1}^{} t''$. Thus $t = l_1^{} \sigma$ and $t'' = r_1^{} \sigma$, for some substitution~$\sigma$. The substitution $\sigma$ has to be a normalized substitution since otherwise $t$ would not have been an innermost redex. Let $\rho_2 = (l_2^{} \to r_2^{})$ be the rule in $R$ such that $t'' = r_1^{} \sigma \to_{\rho_2}^{} t'$, and let $p$ be the position in $t''$ where $\rho_2$ is applied, i.e., $t' = r_1^{} \sigma [r_2^{} \delta]_p^{}$ where $\delta$ is a substitution that matches $l_2^{}$ with $(r_1^{} \sigma)|_p^{}$. The position~$p$ has to belong to ${\fpos}(r_1^{})$ since $\sigma$, as mentioned above, is a normalized substitution. Thus $\rho_1 \rightsquigarrow_p \rho_2$ is defined (constructible) and it is either a rule in $FC_{k' + 1}(R)$ or redundant in $FC_{k'}(R)$. Thus, $t \to_{FC_{k'+1}(R)} t'$. Let $k'' = k' + 1$. Then $t \to_{FC_{k''}(R)} t'$. \end{proof} This is the intuition behind forward closure; each sequence of rewrite rules from $R$ of length $k$ or less is combined into a single rewrite rule in $FC_k(R)$. Then, in $FC(R)$, all (possibly infinite) sequences of rewrite rules are represented as a single rule each. \begin{corollary} If $R$ is a convergent rewrite system and $t$ an innermost redex with normal form~$\widehat{t}$, then $t \to_{FC(R)}^{} \widehat{t}$. \end{corollary} } \ignore{ \begin{Lemma} Let $R$ be a convergent rewrite system and $l \approx r$ be an equation such that $l \rightarrow_{R}^! r$, i.e., $r$ is the $R$-normal-form of~$l$. Then $l \approx r$ is redundant in~$R$ if and only if a proper subterm of $l$ is reducible.
\end{Lemma} \begin{proof} For a given ground equation $s \approx t$ such that $s \succ t$, we define the following (possibly infinite) ground term rewriting system: \[ \mathcal{G}_{}^{\prec (s \approx t)} = \left\{ \sigma (l) \to \sigma(r) ~ \big| ~ (l \to r) \in R \mathrm{~ and ~} (\sigma (l) \to \sigma(r)) \prec (s \approx t) \right\} \] We can now prove\\ \noindent {\bf Claim 1}: $\mathcal{G}_{}^{\prec (s \approx t)}$ is convergent.\\ \noindent {\bf Claim 2}: An equation $e = (s_1^{} \approx s_2^{})$ is redundant in~$R$ if and only if for every ground instance~$\delta (s_1^{}) \approx \delta (s_2^{})$ of~$e$, $\delta (s_1^{})$ and $\delta (s_2^{})$ are joinable modulo~$\mathcal{G}_{}^{\prec (s_1 \approx s_2)}$.\\ \noindent {\bf Claim 3}: Let $t$, $l$ and $r$ be terms such that $t \succ l \succ r$. If $l \downarrow_R^{} r$, then every term that appears in the rewrite proof (``valley proof'') is~$\prec l$.\\ If a proper subterm of $l$ is reducible, then there must be a rule $l' \to r'$, a position~$p \neq \epsilon$ and a substitution~$\sigma$ such that $l|_p^{} = \sigma(l')$ and $l \to_R^{} l[\sigma(r')]_p^{}$. Now $\sigma (l') \prec l$ because of the subterm property and thus $\sigma (r') \prec \sigma (l') \prec l$. The terms $l[\sigma(r')]_p^{}$ and~$r$ are joinable modulo~$R$ and by Claim~3 we are done. Suppose $l \approx r$ is redundant in~$R$. Let $\theta$ be a substitution that replaces every variable in~$l$ with a distinct free constant. Then $\theta (l)$ and $\theta(r)$ are joinable modulo~$\mathcal{G}_{}^{\prec (\theta(l) \approx \theta(r))}$ by Claim~2. If a proper subterm of $\theta (l)$ is reducible, then we are done. Otherwise, there must be a ground rule $l_g^{} \to r_g^{}$ in~$\mathcal{G}_{}^{\prec (s_1 \approx s_2)}$ such that $\theta (l) = l_g^{}$. But~$r_g^{}$ cannot be lower than~$\theta(r)$ since~$\theta(r)$ is in normal form.
\end{proof} \begin{Lemma} Let $R$ be a convergent rewrite system and $t, t'$ be terms where $t$ is an innermost redex. If $t \to_{FC_{k'}(R)} t'$ then $t \to_R^k t'$ for some $k \le k' + 1$. \end{Lemma} } The following theorem, shown in~\cite{BGLN}, gives necessary and sufficient conditions for a rewrite-system to be forward-closed. This property will be used repeatedly in the sequel. \begin{theorem} \emph{\cite{BGLN}} \label{thm-fc-irb} A convergent rewrite system $R$ is forward-closed if and only if every innermost redex can be reduced to its $R$-normal~form in one step. \end{theorem} We next show that there are ways to reduce a rewrite system $R$ while still maintaining the properties that we are interested in. More precisely, given a convergent, forward-closed rewrite system $R$ we can reduce the right-hand sides of rules in $R$ while maintaining convergence, forward-closure and the equational theory generated by $R$. Let $R$ be a convergent rewrite system. Following~\cite{gramlich}, we define \[ R\!\downarrow ~ = ~ \left\{ l \rightarrow r\!\downarrow ~ | ~ (l \rightarrow r) \in R \vphantom{b_b^b} \right\} \] \begin{Lemma} \label{RightReduceEquiv} Let $R$ be a convergent, forward-closed rewrite system. Then $R\!\downarrow$ is convergent, equivalent to~$R$ (i.e., they generate the same congruence), and forward-closed. \end{Lemma} \begin{proof} The cases of convergence and equivalence of $R\!\downarrow$ are handled in~\cite{gramlich}; it remains to show that $R\!\downarrow$ is forward-closed. By Theorem~\ref{thm-fc-irb} it suffices to show that every innermost redex modulo~$R\!\downarrow$ is reducible to its normal form in a single step. Let $t$ be an innermost redex of $R\!\downarrow$. Since the passage from $R$ to $R\!\downarrow$ preserves the left-hand sides of the rules, $t$ must also be an innermost redex of~$R$. Since $R$ is forward-closed, there exist a rule $l \rightarrow r \in R$ and a substitution~$\sigma$ such that $t = \sigma(l)$ and $\sigma(r) \in IRR(R)$. Then $l \rightarrow r\!\downarrow \in R\!\downarrow$.
Since $r \rightarrow^{!} r\!\downarrow$ and $\rightarrow$ is closed under substitutions, we have that $\sigma(r) \rightarrow^{!} \sigma(r\!\downarrow)$, but $\sigma(r)$ is irreducible, thus $\sigma(r) = \sigma(r\!\downarrow)$. Therefore, $t$ reduces to its normal form modulo $R\!\downarrow$ in a single step. \end{proof} A convergent rewrite system~$R$ is \emph{right-reduced} if and only if $R ~ = ~ R\!\downarrow$. From the above lemma, it is clear that right-reduction does not affect forward-closure. However, full interreduction, where one also deletes rules whose left-hand sides are reducible by \emph{other} rules, will not preserve forward-closure. The following example illustrates this: \begin{eqnarray*} f(x, i(x)) & \rightarrow & g(x)\\ g(b) & \rightarrow & c\\ f(b, i(b)) & \rightarrow & c \end{eqnarray*} The last rule can be deleted since its left-hand side is reducible by the first rule. This will preserve convergence, but forward-closure will be lost since $f(b, i(b))$, an innermost redex, cannot be reduced in \emph{one step} to~$c$ in the absence of the third rule. However, the following lemma enables us to do a restricted deletion of superfluous rules: \begin{Lemma} \label{semileftreduced} Let $R$ be a convergent, forward-closed, term rewriting system. Let $l_i \rightarrow r_i \in R$ for $i \in \{1, 2\}$ such that $\exists p \in \fpos(l_1) : p \not = \epsilon \text{ and } l_{1}|_{p} = \sigma(l_2)$ for some substitution $\sigma$. That is, $l_1$ contains a proper subterm that is an instance of the left-hand side of another rule in $R$. Then, $R' = R \smallsetminus \left\{ l_1 \rightarrow r_1 \right\}$ is convergent, forward-closed and equivalent to~$R$. \end{Lemma} \begin{proof} For the sake of deriving a contradiction, assume that $R'$ is not forward-closed. Then there must exist some term $t$ such that $t$ is an innermost redex modulo $R'$ and $t$ is not reducible to its normal form in a single step. 
Since $R$ is forward-closed, $t$ must be reducible to its normal form in a single step modulo $R$, and since $R$ differs from $R'$ by the rule $l_1 \rightarrow r_1$, it must be that $t = \theta(l_1)$ for some substitution $\theta$. However, since $l_{1}|_p = \sigma(l_2)$ we have that $\theta(l_{1})|_p = \theta(\sigma(l_2))$, but since $\theta(l_{1}) = t$ we also have that $\theta(l_1)|_{p} = t|_{p}$. Thus, $t|_p = \theta(\sigma(l_2))$, but this contradicts the assumption that $t$ is an innermost redex since $p \not = \epsilon$. Therefore, $R'$ must be forward-closed. The termination of $R'$ follows from the fact that $R$ is terminating and $R' \subset R$. Towards showing confluence and equivalence, we show that $IRR(R') = IRR(R)$. First, it is clear that $IRR(R) \subseteq IRR(R')$ since $R' \subseteq R$. For the reverse containment, suppose that $t \in IRR(R')$ but $t \not \in IRR(R)$. That is, $t$ must be reducible modulo $R$. However, since $R$ and $R'$ differ by only a single rule, it must be the case that $t$ is reducible by the rule $l_1 \rightarrow r_1$, but $l_1$ is reducible by $l_2 \rightarrow r_2$, thus $t$ would also be reducible by $l_2 \rightarrow r_2 \in R'$. But this contradicts the assumption that $t \in IRR(R')$. Now, suppose that $R'$ is not confluent. Then there must be a term $t$ with two distinct $R'$-normal forms $t'$ and $t''$, but $t' \downarrow_{R} t''$ as $R$ is confluent. However, this contradicts the above fact that $IRR(R) = IRR(R')$ as at least one of $t', t''$ must be reducible. Thus, $R'$ is confluent, and we have established that $R'$ is convergent. Finally, we show that $R'$ is equivalent to $R$. Since $R' \subset R$ we have that $\leftrightarrow_{R'}^* \, \subseteq \, \leftrightarrow_{R}^*$.
For the reverse containment, suppose there are two distinct $R'$-irreducible terms $s$ and $t$ such that $s \leftrightarrow_{R}^* t$; but then since $IRR(R) = IRR(R')$, $s = t$, which contradicts the assumption that they are distinct terms. Thus $R'$ is equivalent to $R$. \end{proof} We call systems that have no rules for which the conditions of Lemma~\ref{semileftreduced} obtain \textit{almost-left-reduced}. Note, however, that they are not fully left-reduced as there is still the possibility of overlaps at the \emph{root}. If a convergent, forward-closed and right-reduced $R$ is not almost-left-reduced, then the above lemma tells us that we may delete such rules and obtain an equivalent system. \section{Lynch-Morawska Conditions} In this section we define the Lynch-Morawska conditions. We also derive some preliminary results on convergent term rewriting systems that satisfy the Lynch-Morawska conditions. A new concept introduced by Lynch and Morawska is that of a {\em Right-Hand-Side Critical Pair,\/} defined as follows: \vspace{2 mm} {\large \begin{center} \fbox{ \begin{minipage}{3.75in} \centerline{$\infer{s\sigma \approx u\sigma} {s \approx t ~ \qquad \qquad ~ u \approx v}$} \vspace{0.1in} {$\mathrm{where ~} s\sigma \nprec t\sigma, u\sigma \nprec v\sigma, \sigma = mgu(v, t) ~ \mathrm{and} ~ s\sigma \neq u\sigma$} \end{minipage} } \end{center} } Since our focus is only on convergent term rewriting systems, this definition can be modified as follows: \vspace{2 mm} {\large \begin{center} \fbox{ \begin{minipage}{3.75in} \centerline{$\infer{s\sigma \approx u\sigma} {s \rightarrow t ~ \qquad \qquad ~ u \rightarrow v}$} \vspace{0.1in} {$\mathrm{where ~} \sigma = mgu(v, t) ~ \mathrm{and} ~ s\sigma \neq u\sigma$} \end{minipage} } \end{center} } \vspace{2 mm} For instance, the right-hand-side critical pair of $f(x, s(y)) \rightarrow s(f(x, y))$ and $f(s(x), y) \rightarrow s(f(y, x))$ is $f(x, s(x)) \approx f(s(x), x)$. 
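The right-hand-side critical pair in this example can be reproduced mechanically with a small syntactic-unification sketch (an illustrative implementation, not from the paper; variables are lowercase strings, and the two rules deliberately share variables, as in the example above):

```python
# Terms are nested tuples: ("f", "x", ("s", "y")) stands for f(x, s(y)).
# Plain strings are variables. This is an illustrative sketch only.

def is_var(t):
    return isinstance(t, str)

def walk(t, subst):
    # Chase variable bindings.
    while is_var(t) and t in subst:
        t = subst[t]
    return t

def unify(s, t):
    """Return an mgu of s and t as a dict, or None if they are not unifiable."""
    subst, stack = {}, [(s, t)]
    while stack:
        a, b = stack.pop()
        a, b = walk(a, subst), walk(b, subst)
        if a == b:
            continue
        if is_var(a):
            subst[a] = b          # occurs check omitted for brevity
        elif is_var(b):
            subst[b] = a
        elif a[0] == b[0] and len(a) == len(b):
            stack.extend(zip(a[1:], b[1:]))
        else:
            return None
    return subst

def substitute(t, subst):
    t = walk(t, subst)
    if is_var(t):
        return t
    return (t[0],) + tuple(substitute(a, subst) for a in t[1:])

# Rules f(x, s(y)) -> s(f(x, y)) and f(s(x), y) -> s(f(y, x)).
l1, r1 = ("f", "x", ("s", "y")), ("s", ("f", "x", "y"))
l2, r2 = ("f", ("s", "x"), "y"), ("s", ("f", "y", "x"))

sigma = unify(r1, r2)                        # mgu of the right-hand sides
pair = (substitute(l1, sigma), substitute(l2, sigma))
print(pair)  # i.e., f(x, s(x)) ≈ f(s(x), x)
```

Unifying the two right-hand sides and applying the resulting mgu to the left-hand sides yields exactly the conclusion $f(x, s(x)) \approx f(s(x), x)$ of the inference rule above.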
Also note (as pointed out in~\cite{LynchMorawska}) that the rule $f(x, x) \rightarrow 0$ has a right-hand-side critical pair: $f(x, x) \approx f(x', x')$. For an equational theory $E$, {\em RHS(E)\/} = $\left\{ \, e ~ \big| ~ e \right.$ is the conclusion of a Right-Hand-Side Critical Pair inference of two members of $E$ $\left. \vphantom{\big|} \right\}$ $\cup$ $E$~\cite{LynchMorawska}. A set of equations~$E$ is \emph{quasi-deterministic} if and only if \begin{enumerate} \item No equation in~$E$ has a variable as its left-hand side or right-hand side, \item No equation in $E$ is \emph{root-stable}---i.e., no equation has the same root symbol on its left- and right-hand side, and \item $E$ has no \emph{root pair repetitions}---i.e., no two equations in $E$ have the same pair of root symbols on their sides. \end{enumerate} The following lemma was proved in~\cite{NotesOnBSM}. \begin{Lemma} Suppose $R$ is a variable-preserving convergent rewrite system and $R$ is quasi-deterministic. Then $RHS(R)$ is not quasi-deterministic if and only if $RHS(R)$ has a root pair repetition. \label{lemma-quasi} \end{Lemma} A theory $E$ is \emph{deterministic} if and only if it is quasi-deterministic and non-subterm-collapsing. \noindent A \emph{Lynch-Morawska term rewriting system} or \emph{LM-system} is a convergent, almost-left-reduced and right-reduced term rewriting system~$R$ which satisfies the following conditions: \begin{itemize} \item[(i)] $R$ is non-subterm-collapsing, \\[-18pt] \item[(ii)] $R$ is forward-closed, and \\[-18pt] \item[(iii)] $RHS(R)$ is quasi-deterministic. \end{itemize} The goal of the remainder of this section is to show that, given an \emph{LM-system} $R$, there can be no overlaps between the left-hand sides of any rules in $R$ and that there can be no forward-overlaps. These notions are defined precisely below. 
Further, we use those results to derive the equivalence of forward-closure and saturation by paramodulation when $R$ is an \emph{LM-system}. These results show that \emph{LM-systems} are a highly restrictive subclass of term-rewriting systems. However, in a later section, we show that there are important decision problems that remain undecidable when restricted to \emph{LM-systems}. The first of these results, Lemma~\ref{ExactlyTwo} and its proof, are used multiple times to prove other results. It concerns how two terms $s$ and $t$, such that $s$ is an innermost redex and $t$ is $\bar{\epsilon}$-irreducible, can be joined. It establishes that there are only two possible cases, and further, only one of these cases can hold at a time. \begin{Lemma} \label{ExactlyTwo} Let $R$ be an $LM$-system, $s = f(s_1^{}, \ldots , s_m^{})$ an innermost redex and $t = g(t_1^{}, \ldots , t_n^{})$ an $\bar{\epsilon}$-irreducible term such that~$f \neq g$. Then $s$ and $t$ are joinable modulo~$R$ if and only if \emph{exactly one} of the following conditions holds: \begin{itemize} \item[(a)] there is a unique rule $l \rightarrow r$ with root pair $(f, g)$ and \( \displaystyle{s \mathop{\xrightarrow{\hspace*{0.75cm}}}_{l \rightarrow r}^{} t} \), $\; ~ ~$\emph{or} \item[(b)] there are unique rules $l_1^{} \rightarrow r_1^{}$ and $l_2^{} \rightarrow r_2^{}$ with root pairs $(f, h)$ and $(g, h)$ such that $\displaystyle{s \mathop{\xrightarrow{\hspace*{0.75cm}}}_{l_1 \rightarrow r_1}^{} \widehat{t}}$ and $\displaystyle{t \mathop{\xrightarrow{\hspace*{0.75cm}}}_{l_2 \rightarrow r_2}^{} \widehat{t}}$ for some term~$\widehat{t}$. \end{itemize} \end{Lemma} \begin{proof} $(\Rightarrow)$ There are two cases to consider. \begin{itemize} \item[\emph{(i)}] $t = g(t_1, \ldots, t_n)$ is in normal form. \item[\emph{(ii)}] $t$ is an innermost redex. \end{itemize} \textit{Case (i)}. 
Since $s \downarrow_R t$ by assumption, $s$ must reduce to $t$ in a single step (as $s$ is an innermost redex and $t$ cannot be reduced modulo $R$ any further). Therefore there exists a rule $\rho = l \rightarrow r \in R$ that reduces $s$ to $t$, and this rule is unique as there can be no root-pair repetitions. \textit{Case (ii)}. Since $t$ is an innermost redex in this case and $s$ is an innermost redex by assumption, they both must be reducible to their normal forms in one step, and since $s \downarrow_R t$ these normal forms must be equal. Let $\widehat{t}$ be the normal form of $s$ and $t$. Since there can be no root-stable equations, $\widehat{t}(\epsilon) = h$ with $h \not= f, g$. Thus, $\exists! \; \rho_{i} = l_i \rightarrow r_i \in R$ for $i \in \{1, 2\}$ with root pairs $(f, h)$ and $(g, h)$ respectively such that $s \rightarrow_{\rho_{1}} \widehat{t}$ and $t \rightarrow_{\rho_{2}} \widehat{t}$. It remains to show that case~$(a)$ and case~$(b)$ cannot both obtain simultaneously. For the sake of deriving a contradiction, assume that both case~$(a)$ and case~$(b)$ hold. The only way this could occur is with case~$(ii)$ above. Without loss of generality assume that $\mathcal{V}ar(\rho_1) \cap \mathcal{V}ar(\rho_2) = \varnothing$. Since $s$ and $t$ are $\bar{\epsilon}$-irreducible we have that $\sigma_1(r_1) = \widehat{t} = \sigma_2(r_2)$ where $\sigma_1 = mgu(l_1 \, \lesssim_{}^? \, s)$ and $\sigma_2 = mgu(l_2 \, \lesssim_{}^? \, t)$. And so by defining $\sigma := \sigma_1 \cup \sigma_2$ we have that $\sigma(r_1) = \sigma(r_2)$, i.e., $\sigma$ is a unifier of $r_1$ and~$r_2$. We can thus perform an RHS inference step using $\rho_1$ and $\rho_2$, i.e., \[\frac{ l_1 \rightarrow r_1 \; \qquad \; l_2 \rightarrow r_2}{\theta(l_1) \approx \theta(l_2)} \] \noindent to get that $\theta(l_1) \approx \theta(l_2) \in RHS(R)$ where $\theta = mgu(r_1 =^{?} r_2)$. This equation has root-pair $\{f, g\}$. 
But since $RHS(R)$ can have no root-pair repetition, it must be that $\theta(l_1) \approx \theta(l_2) = l \approx r$, but then $r$ is reducible by $l_2 \rightarrow r_2$, which contradicts the assumption that $R$ is right-reduced. $(\Leftarrow)$ If exactly one of the two conditions holds, then $s$ and $t$ are joinable by definition. \end{proof} We present an interesting consequence of the above lemma. Namely, if a term $t$ such that $t(\epsilon) = f$ has as normal form a term $s$ such that $s(\epsilon) = g$ for $g$ differing from $f$, then we can establish some information on the rules in $R$. \begin{corollary} \label{NormalFormPair} If $f( s_1^{} , \ldots , s_m^{} ) \; \rightarrow_R^! \; g( t_1^{} , \ldots , t_n^{} )$ where~$f \neq g$, then $(f, \, g)$ is a root pair in~$R$. \end{corollary} \begin{proof} Let $s = f(s_1, \ldots, s_m)$ and $t = g(t_1, \ldots, t_n)$. Suppose that $s \rightarrow_{R}^{!} t$. Then by definition $s \downarrow_R t$. Since $t$ is a normal form it is also an $\bar{\epsilon}$-irreducible term modulo~$R$. Let $s' = f(s_1', \ldots, s_m')$ be the term obtained from $s$ by reducing all top-level subterms of $s$ to their normal forms modulo~$R$. Thus we have the following situation: $s \rightarrow_{R}^{*} s' \rightarrow_{R}^{!} t$, where $s'$ must be an $\bar{\epsilon}$-irreducible term and hence also an innermost redex in this case. Therefore we can apply \textit{case~(a)} of Lemma~\ref{ExactlyTwo} to conclude that there must be a unique rule $\rho = l \rightarrow r \, \in \, R$ that reduces $s'$ to $t$ with root pair~$(f, g)$. 
\end{proof} Given Lemma~\ref{ExactlyTwo}, we can immediately derive two results that will be useful in proving the main result of this section. \begin{corollary} \label{UniqueRule} Suppose $l \rightarrow r \in R$ is a rule with root-pair $(f, g)$, $s = f(s_1, \ldots, s_m)$ and $t = g(t_1, \ldots, t_n)$, and $s$ and $t$ are $\bar{\epsilon}$-irreducible. Then, $s \downarrow_{R} t$ if and only if $\;$ \( \displaystyle{s ~ \mathop{\xrightarrow{\hspace*{0.75cm}}}_{l^{} \rightarrow r^{}}^{} ~ t}. \) \end{corollary} \begin{proof} This result follows from Lemma~\ref{ExactlyTwo} and its proof. \end{proof} \begin{corollary} \label{UniqueRule2} Let $R$ be an LM-System. Suppose $l \rightarrow r \in R$ is a rule with root-pair $(f, g)$, and let $s = f(s_1, \ldots, s_m)$ and $t = g(t_1, \ldots, t_n)$ be terms that are joinable. Let $\widehat{s_1}, \ldots , \widehat{s_m}, \, \widehat{t_1}, \ldots, \widehat{t_n}$ be respectively the normal forms of $s_1 , \ldots , s_m, \, t_1 , \ldots , t_n$. Then \[ f(\widehat{s_1}, \ldots , \widehat{s_m}) ~ \mathop{\xrightarrow{\hspace*{0.75cm}}}_{l^{} \rightarrow r^{}}^{} ~ g(\widehat{t_1}, \ldots , \widehat{t_n}) . \] (Thus the normal form of $s$ and $t$ is an instance of the right-hand side~$r$.) \end{corollary} \begin{proof} Since $s$ and $t$ are joinable, it must be the case that $\widehat{s} = f(\widehat{s_1}, \ldots, \widehat{s_m})$ and $\widehat{t} = g(\widehat{t_1}, \ldots, \widehat{t_n})$ are joinable, but since each $\widehat{s_i}$ and $\widehat{t_i}$ are in normal form, $\widehat{s}$ and $\widehat{t}$ must be $\bar{\epsilon}$-irreducible; therefore we can apply Corollary~\ref{UniqueRule}. \end{proof} The above two corollaries, along with Lemma~\ref{ExactlyTwo}, allow us to state the first of the results concerning the non-overlapping property of LM-systems. 
The following establishes that there can be no overlaps between left-hand sides of two rules occurring at the root-position and that there can be no overlaps between a right-hand side of a rule and a left-hand side of another rule at the root position. This is achieved by showing that these terms cannot be unified. \begin{corollary} \label{noRootOverlaps} Let $R$ be an LM-System and let $l_1^{} \rightarrow r_1^{}$ and $l_2^{} \rightarrow r_2^{}$ be \emph{distinct} rules in~$R$. Then \begin{itemize} \item[(a)] $l_1^{}$ and $l_2^{}$ are not unifiable, and \item[(b)] $r_1^{}$ and $l_2^{}$ are not unifiable. \end{itemize} \end{corollary} \begin{proof} Suppose that the rule $l_1 \rightarrow r_1$ has root pair $(f, g)$ and the rule $l_2 \rightarrow r_2$ has root pair $(h, i)$. We show that each case above leads to a contradiction. Thus, for case (a), towards deriving such a contradiction suppose that $\theta ~ = ~ mgu(l_1^{} \; =_{}^? \; l_2^{})$. Then, we have that $\theta(l_1) \rightarrow \theta(r_1)$, and so $\theta(l_1)$ and $\theta(r_1)$ are obviously joinable. Note that, since $l_1$ is unifiable with $l_2$, $l_1(\epsilon) = l_2(\epsilon)$. Thus, $f = h$ above, and $g \not = i$ as the contrary would induce a root-pair repetition in $R$. By applying Corollary~\ref{UniqueRule2} to $\theta(l_1)$ and $\theta(r_1)$ we can conclude that their normal form must be some instance of $r_1$. Likewise, we can apply the same corollary on $\theta(l_1)$ and $\theta(r_2)$, i.e. their normal form must be an instance of $r_2$. But since $R$ is convergent, $\theta(r_1)$ and $\theta(r_2)$ must be joinable and thus must have the same normal form, but this is impossible as $r_1$ and $r_2$ have different root symbols. For case (b), suppose that $\beta ~ = ~ mgu(r_1^{} \; =_{}^? \; l_2^{})$. Thus $\beta (l_1^{}) \, \rightarrow \, \beta (r_1^{}) \, \rightarrow \, \beta(r_2^{})$. Hence $\beta (l_1^{})$ and $\beta (r_1^{})$ are joinable, and $\beta (l_1^{})$ and $\beta (r_2^{})$ are joinable. 
The rest of the argument is the same as for the above case. \end{proof} Next, we work towards showing that the other possible overlaps cannot occur either. The next lemma, and its extension, are used towards this goal. The technical result is used in the proofs of various other lemmas and corollaries. \begin{Lemma} \label{CommutingSquare} Let $R$ be an LM-System, and suppose $f(s_1, \ldots, s_m) \displaystyle{ ~ \mathop{\xrightarrow{\hspace*{0.75cm}}}_{l^{} \rightarrow r^{}}^{}} ~ g(t_1, \ldots, t_n)$. Then the following diagram commutes. \begin{center} \begin{tikzpicture} \node (A) at (0, 0) {$f(s_1, \ldots, s_m)$}; \node[right=of A] (B) {$g(t_1, \ldots, t_n)$}; \node[below=of A] (C) {$f(s_1', \ldots, s_m')$}; \node[below=of B] (D) {$g(t_1', \ldots, t_n')$}; \draw[->] (A)--(B) node [below, pos=.5] {$l \rightarrow r$}; \draw[->] (A) --(C) node [left, pos=.8] {$*$}; \draw[->, dashed] (C) -- (D) node [below, pos=.5] {$l \rightarrow r$}; \draw[->] (B) -- (D) node [right, pos=.8] {$*$}; \end{tikzpicture} \end{center} \noindent where $s_1', \ldots, s_m', t_1', \ldots, t_n'$ are the normal forms of $s_1, \ldots, t_n$ respectively. \end{Lemma} \begin{proof} We have that $f(s_1, \ldots, s_m) \to^* f(s_1', \ldots, s_m')$ and $f(s_1, \ldots, s_m) \to^+ g(t_1', \ldots, t_n')$, and since $R$ is confluent $s' = f(s_1', \ldots, s_m') \; \downarrow_R \; g(t_1', \ldots, t_n') = t'$. Since both $s'$ and $t'$ are $\bar{\epsilon}$-irreducible, they must be joinable in a single step. Let $\widehat{t}$ be their normal form. There are three cases corresponding to Lemma~\ref{ExactlyTwo}. Cases $1$ and $2$ below correspond to case $(a)$ of Lemma~\ref{ExactlyTwo} and case $3$ corresponds to case $(b)$ of Lemma~\ref{ExactlyTwo}. We have: \begin{itemize} \item[1.] $s' \rightarrow t'$ by a unique rule in $R$ with root-pair $(f, g)$, \item[2.] $t' \rightarrow s'$, by a unique rule in $R$ with root-pair $(g, f)$, \item[3.] 
$s' \rightarrow \widehat{t}$ and $ t' \rightarrow \widehat{t}$, by two unique rules with root-pairs $(f, h)$ and $(g, h)$ for some $h$. \end{itemize} However, case 2 leads to a contradiction as it would imply that there exists a rule in $R$ with root pair $(g, f)$, which would be a root pair repetition in~$E$. Suppose case 3 were true. Let $\rho_i = l_i \rightarrow r_i \in R$ for $i \in \{1, 2\}$ be the unique rules with root pairs $(f, h)$ and $(g, h)$ respectively that reduce $s'$ and $t'$ to $\widehat{t}$. Without loss of generality assume that $\mathcal{V}ar(\rho_1) \cap \mathcal{V}ar(\rho_2) = \varnothing$. Then, $r_1$ and $r_2$ are unifiable and therefore we can perform an RHS inference step to get $\theta(l_1) \approx \theta(l_2)$. However, there is a root-rewrite step along the path from $s$ to $t$. Let $l \rightarrow r \in R$ be the rule that induces the root-rewrite step. Thus $\theta(l_1) \approx \theta(l_2) = l \approx r$ as $l \rightarrow r$ must be unique. This implies, however, that $r$ is reducible by $l_2 \rightarrow r_2$, which contradicts the assumption that $R$ is right-reduced. We are then left only with case 1. This rule is unique by Lemma~\ref{ExactlyTwo}, and since $l \rightarrow r$ has root pair $(f, g)$ these rules must be the same. \end{proof} We now extend the previous result by induction. \begin{Lemma} \label{CommutingLemma} Let $R$ be an LM-System and suppose $f(s_1, \ldots, s_m) \rightarrow_{R}^{+} g(t_1, \ldots, t_n)$ are terms such that $f \not = g$. 
Then the following diagram commutes: \begin{center} \begin{tikzpicture} \node (A) at (0, 0) {$f(s_1, \ldots, s_m)$}; \node[right=of A] (B) {$g(t_1, \ldots, t_n)$}; \node[below=of A] (C) {$f(s_1', \ldots, s_m')$}; \node[below=of B] (D) {$g(t_1', \ldots, t_n')$}; \draw[->] (A)--(B) node [above, pos=1] {$+$}; \draw[->] (A) --(C) node [left, pos=.8] {$*$}; \draw[->, dashed] (C) -- (D) node [above, pos=1] {$+$}; \draw[->] (B) -- (D) node [right, pos=.8] {$*$}; \end{tikzpicture} \end{center} \noindent where $s_i' = s_i\downarrow_R$ and $t_j' = t_j\downarrow_R$ for $1 \leq i \leq m$, $1 \leq j \leq n$. \end{Lemma} \begin{proof} The proof proceeds by induction on the number of rewrites occurring at the root along the chain from $s = f(s_1, \ldots, s_m)$ to $t = g(t_1, \ldots, t_n)$. Let $q$ be the number of root rewrite steps occurring as stated above. \\ \noindent \textbf{Base Step:} The base step, where $q = 1$, corresponds to Lemma~\ref{CommutingSquare}. \\ \noindent \textbf{Inductive Step:} Assume that the result holds for $q = k$. We show that the result is also true for $q = k+1$. Since there can be no root-pair repetitions in~$R$, we must have that there is a sequence of root pairs starting with $f$ and ending with $g$ corresponding to the rules used in the reductions. In the diagram below, the first square commutes by the base case, and the rest of the chains can be filled in to create commuting squares by the induction hypothesis up to the $(k+1)$-st root rewrite. 
That is: \begin{center} \begin{tikzpicture} \node (F) at (0, 0) {$f(s_1, \ldots, s_m)$}; \node (H0) [right=of F] {$h_{0}(u_1, \ldots, u_s)$}; \node (DOTS) [right=of H0] {$\cdots$}; \node (Hk) [right=of DOTS] {$h_{k}(u_1, \ldots, u_j)$}; \node (G) [right=of Hk] {$g(t_1, \ldots, t_n)$}; \node (FP) [below=of F] {$f(s_1', \ldots, s_m')$}; \node (H0P) [below=of H0] {$h_{0}(u_1', \ldots, u_s')$}; \node (LDOTS) [right=of H0P] {$\ldots$}; \node (Hkp) [below=of Hk] {$h_{k}(u_1', \ldots, u_j')$}; \node (Gp) [below=of G] {$g(t_1', \ldots, t_n')$}; \draw[->] (F)--(H0) node [above, midway] {$+$}; \draw[->] (H0)--(DOTS) node [above, midway] {$*$}; \draw[->] (DOTS)--(Hk) node [above, midway] {$*$}; \draw[->] (Hk)--(G) node [above, midway] {$+$}; \draw[->] (FP)--(H0P) node [above, midway] {$+$}; \draw[->] (H0P)--(LDOTS) node [above, midway] {$*$}; \draw[->] (LDOTS)--(Hkp) node [above, midway] {$*$}; \draw[->] (F)--(FP) node [left, midway] {$*$}; \draw[->] (H0)--(H0P) node [left, midway] {$*$}; \draw[->] (Hk)--(Hkp) node [left, midway] {$*$}; \draw[->] (G)--(Gp) node [left, midway] {$*$}; \end{tikzpicture} \end{center} \noindent Therefore, we only need to fill in the final square. However, a similar argument as in the proof of Lemma~\ref{CommutingSquare} applies to the terms $h_k(u_1, \ldots, u_j) \to^+ g(t_1', \ldots, t_n')$ and $h_k(u_1, \ldots, u_j) \to^* h_k(u_1', \ldots, u_j')$. \end{proof} The following corollary establishes that, given an LM-system $R$, if $l \rightarrow r \in R$ is any rule with root-pair $(f, g)$, then no term with $g$ as its root symbol can be reduced modulo~$R$ to a term with $f$ at its root. \begin{corollary} \label{NoReversals} Let $f( s_1^{} , \ldots , s_m^{} ) \; \rightarrow \; g( t_1^{} , \ldots , t_n^{} )$ be a rule in~$R$ where~$f \neq g$. Then no term with $g$ as its root symbol can be reduced modulo~$R$ to a term with $f$ at its root. \end{corollary} \begin{proof} The proof is by contradiction. 
Suppose that there exists a reduction chain $g(s_1, \ldots, s_m) \to^{+} f(t_1, \ldots, t_n)$ such that $g \not = f$. Applying Lemma~\ref{CommutingLemma} we obtain a reduction chain $s' = g(s_1', \ldots, s_m') \to^+ f(t_1', \ldots, t_n')= t'$ where $s_i' = s_i\downarrow_R$ and $t_j' = t_j\downarrow_R$ for $1 \leq i \leq m$ and $1 \leq j \leq n$. Thus, $s'$ and $t'$ are joinable, and $s'$ must be an innermost redex as it is not in normal form, therefore we can apply Lemma~\ref{ExactlyTwo} and get two cases. Case ($a$) leads to a contradiction as $(f, g)$ is already a root pair in $R$ by assumption, thus, $\{f, g\}$ is a root pair in~$E$ and therefore there cannot be a root pair $(g, f)$ as this would contradict the assumption that $R$ is an $LM$-system. Therefore, since exactly one of the two cases must obtain, case~($b$) must be true. Thus there exist unique rules $\rho_i = l_i \to r_i$ for $i \in \{1, 2\}$ such that $s' \to_{\rho_1} \widehat{t}$ and $t' \to_{\rho_2} \widehat{t}$ with root pairs $(g, h)$ and $(f, h)$ respectively. However, again we have that $r_i$ for $i \in \{1, 2\}$ have a common instance and hence are unifiable, and so we can perform an RHS inference to get $\theta(l_1) \approx \theta(l_2) \in RHS(R)$ where $\theta = mgu(r_1 =^? r_2)$. Since we assumed that $l \to r = f(s_1, \ldots, s_m) \to g(t_1, \ldots, t_m)$ is a rule in $R$ this inference step must yield that $\theta(l_1) \approx \theta(l_2) = l \approx r$ as it has root pair $\{f, g\}$. This leads to a contradiction however, as it implies that $r$ is reducible by $l_2 \rightarrow r_2$ which contradicts the assumption that $R$ is right-reduced. \end{proof} We next establish two corollaries that give us information about where reductions can take place. 
\begin{corollary} \label{InnermostReductions} Let $R$ be an LM-System; then $s = f( s_1^{} , \ldots , s_n^{} ) \; \rightarrow_R^* \; f( t_1^{} , \ldots , t_n^{} ) = t \,$ if and only if \[ \forall i \in \{1, \ldots, n\}: [s_i^{} \; \rightarrow_R^* \; t_i^{}] . \] \end{corollary} \begin{proof} The ``if'' part is obvious. For the ``only if'' part, suppose $s \; \rightarrow_R^* \; t$ as above and $s_j \not\rightarrow_R^{*} t_j$ for some~$1 \leq j \leq n$. Then some rewrite step in the sequence from $f(s_1, \ldots, s_n)$ to $f(t_1, \ldots, t_n)$ must have occurred at the root. Thus there must be a rule with (directed) root pair $(f, h)$ for some $h \not = f$. However, $t$ has root symbol~$f$. Thus, a term with $h$ as the root symbol must be reducible to a term with $f$ at the root symbol, but this contradicts Corollary~\ref{NoReversals}. \end{proof} \begin{defn} Let $s_1^{}, \, s_2^{}$ be terms. A position $p \in \mathcal{P}os(s_1^{}) \cup \mathcal{P}os(s_2^{})$ is said to be an \emph{outermost distinguishing position} between $s_1^{}$ and $s_2^{}$ if and only if $s_1^{} (p) \neq s_2^{} (p)$ and $s_1^{} (p') = s_2^{} (p')$ for all proper prefixes~$p'$ of~$p$. The set of all outermost distinguishing positions between two terms $s$ and~$t$ is denoted by $ODP(s, t)$. \end{defn} Note that $ODP(s, s) = \emptyset$. \begin{corollary} \label{ODPReductions} Let $R$ be an LM-System. Then $s \xrightarrow{*} t \; \, \text{if and only if} \; \, \forall p \in ODP(s, t): \left[s|_{p} \xrightarrow{*} t|_p\right]$. \end{corollary} \begin{proof} The ``if'' direction is straightforward. We prove the ``only if'' direction by induction on triples (w.r.t.\ the induced lexicographic order) $(|s|, |t|, |p|)$ where $s, t$ are terms and $p$ is a position such that $s \rightarrow_{R}^{*} t$ and $p \in ODP(s, t)$. \\ \noindent \textbf{Basis.} The ``least'' (i.e., lowest in the ordering) such triple is $(1, 1, 0)$, corresponding to terms such as $s = a$ and $t = b$, i.e., constants. 
Suppose that $s \rightarrow^{*} t$. Then, $ODP(s, t) = \{\epsilon\}$ and therefore $s|_p = s \rightarrow^{*} t = t|_p$, which is exactly the statement of the corollary. Thus, we can conclude that the base case holds. \\ \noindent \textbf{Inductive Step.} Assume that the result is true for all triples $C \prec (|s'|, |t'|, |p'|)$. We show that the result holds for the triple $(|s'|, |t'|, |p'|)$ itself. Suppose $s' \rightarrow^{*} t'$ and $p' \in ODP(s', t')$. Again, since the result clearly holds for $p' = \epsilon$ we assume that $p' \not = \epsilon$. Then $s' = f(s_1, \ldots, s_n)$ and $t' = f(t_1, \ldots, t_n)$ for some~$f$ and terms~$s_1, \ldots , s_n, t_1, \ldots , t_n$. By Corollary~\ref{InnermostReductions} it must be the case that $\forall i \in \{1, \ldots, n\} : \left[ \vphantom{b^b} s_i \rightarrow^{*} t_i \right]$. Since $|s_i| < |s'|$ and $|t_i| < |t'|$ we invoke the induction hypothesis to conclude that $(\forall i \in \{1, \ldots, n\})(\forall q \in ODP(s_i, t_i))\left[ \vphantom{b^b} s_i|_q \rightarrow^{*} t_i|_q \right]$. But, we then have that $p' = i \cdot q$ for some $i \in \{1, \ldots, n\}$ and some $q \in ODP(s_i, t_i)$, and so we have that $s'|_{i \cdot q}^{} = s'|_{p'} \rightarrow^{*} t'|_{i \cdot q}^{} = t'|_{p'}$. We can therefore conclude that the result must hold for all triples. \end{proof} \begin{defn} (\emph{non-overlay superpositions}) Let $R$ be a rewrite-system; then \[ NOSU\!P(R) ~ := ~ \left\{ \, \sigma( l_1^{} [l_2^{}]_p^{} ) ~ \; \big| \; ~ \vphantom{c_c^c} p \in \fpos( l_1^{} ) \smallsetminus \{\epsilon \} ~ \text{and} ~ \sigma = mgu( l_1^{} |_p^{} \, =^? \, l_2^{} ) \,, l_1 \rightarrow r_1,\; l_2 \rightarrow r_2 \in R\; \right\} \] \end{defn} The previous results of this section are now brought together to prove the following main results about LM-systems concerning the status of overlaps. Namely, we show that there are no non-overlay superpositions and no forward-overlaps. 
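The outermost distinguishing positions used in the arguments above admit a direct recursive computation; the following is a minimal illustrative sketch (not from the paper), with terms as nested tuples whose first component is the root symbol and positions as tuples of 1-based argument indices:

```python
# ODP(s, t): the outermost positions at which two terms disagree.
# Terms are nested tuples, e.g. ("f", ("a",), ("g", ("b",))) for f(a, g(b));
# positions are tuples of argument indices, with () denoting the root.

def odp(s, t, pos=()):
    if s[0] != t[0]:
        # The symbols differ here, so pos is an outermost distinguishing
        # position; by definition we do not descend any further.
        return {pos}
    out = set()
    for i, (a, b) in enumerate(zip(s[1:], t[1:]), start=1):
        out |= odp(a, b, pos + (i,))
    return out

s = ("f", ("a",), ("g", ("b",)))
t = ("f", ("c",), ("g", ("d",)))
print(sorted(odp(s, t)))   # [(1,), (2, 1)]
print(odp(s, s))           # set(): ODP(s, s) is empty
```

Positions below a disagreement are never reported, matching the requirement that $s_1(p') = s_2(p')$ for all proper prefixes~$p'$ of~$p$.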
We first establish that there are no superpositions occurring between the left-hand sides of two distinct rules in $R$. Formally, this amounts to showing that $NOSU\!P(R) = \emptyset$. The main idea of the proof is to show that such superpositions would induce critical pairs, and that these critical pairs cannot be joinable, which would contradict the confluence of our system ($R$ is convergent by definition). \begin{corollary} \label{UnjoinableCriticalPairs} $NOSU\!P(R) = \emptyset \,$ for all LM-systems~$R$. \end{corollary} \begin{proof} We prove the following: if $R$ is an LM-System, $l_1 \to r_1$ and $l_2 \to r_2$ rules in~$R$, $p \in \fpos(l_1^{})$ a non-root position, and $\sigma( l_1^{} [l_2^{}]_p^{})$ a superposition, then the critical pair $\langle \sigma ( l_1^{} [ r_2^{} ]_p^{} ), \; \sigma ( r_1^{} ) \rangle$ is not joinable modulo~$R$. Assume towards deriving a contradiction that $s = \sigma ( l_1^{} [ r_2^{} ]_p^{} )$ and $t = \sigma ( r_1^{} )$ are joinable modulo $R$. Since $l_1^{} \to r_1^{} \in R$ there must be distinct function symbols $f$ and~$g$ such that $s(\epsilon) = f$ and $t(\epsilon) = g$ as $R$ is an LM-System, i.e., $l_1^{} \to r_1^{}$ must have root-pair $(f, g)$ for $f \not = g$. Then, it must be that $s = f(s_1, \ldots, s_m)$ and $t = g(t_1, \ldots, t_n)$. Since $s$ and $t$ are assumed to be joinable, and $l_1^{} \to r_1^{}$ is a rule in $R$ with root pair $(f, g)$, we can apply Corollary~\ref{UniqueRule2} to get that: \[ f(\widehat{s_1}, \ldots , \widehat{s_m}) ~ \mathop{\xrightarrow{\hspace*{0.75cm}}}_{l_1^{} \rightarrow r_1^{}}^{} ~ g(\widehat{t_1}, \ldots , \widehat{t_n}) \] \noindent where $\widehat{s_1}, \ldots, \widehat{s_m}, \, \widehat{t_1}, \ldots, \widehat{t_n}$ are the $R$-normal forms of $s_1, \ldots, s_m, \, t_1, \ldots, t_n$ respectively. 
However, since $f(\widehat{s_1}, \ldots , \widehat{s_m})$ is $\bar{\epsilon}$-irreducible, the rule must be applied at the root, which gives us that $f(\widehat{s_1}, \ldots , \widehat{s_m}) = \theta(l_1^{})$ for some substitution $\theta$. Putting this all together we then get the following reduction: \[ \sigma ( l_1^{} [ r_2^{} ]_p^{} ) \, \longrightarrow^{*}_{R} \, \theta(l_1) \] Thus, Corollary~\ref{ODPReductions} applies and $\forall q \in ODP(\sigma ( l_1^{} [ r_2^{} ]_p^{} ), \; \theta(l_1)) : \, \left[ \vphantom{b_b^b} \sigma ( l_1^{} [ r_2^{} ]_p^{} )|_q \rightarrow^{*} \theta(l_1)|_q \right]$. Since $p$ belongs to $ODP(\sigma ( l_1^{} [ r_2^{} ]_p^{} ), \; \theta(l_1))$, it follows that $\sigma(r_2^{}) \rightarrow^{*} \theta(l_1)|_p = \theta(l_1|_p)$. But since $l_1|_p$ is unifiable with $l_2$, we have $l_1 |_p^{} (\epsilon) = l_1 (p) = l_2 (\epsilon)$ and this contradicts Corollary~\ref{NoReversals}. \end{proof} We now turn to the case of forward-overlaps as defined in the section on forward-closure. \begin{Lemma} \label{NoForwardOverlaps} Let $R$ be an LM-System; then $R \rightsquigarrow R = \varnothing$. \end{Lemma} \begin{proof} Recall that $R \rightsquigarrow R = \left \{(l_1 \rightarrow r_1) \rightsquigarrow_p (l_2 \rightarrow r_2) \; \big| \; l_1 \rightarrow r_1, \, l_2 \rightarrow r_2 \in R \; \wedge \; p \in \fpos(r_1) \vphantom{b^b} \right\}$, restricted to those forward overlaps that are not redundant in $R$. The proof proceeds by contradiction. Suppose that the above set is non-empty. By Corollary~\ref{noRootOverlaps}, no forward overlap can occur at $p = \epsilon$. Thus, there exist at least two rules, $l_1 \rightarrow r_1$ and $l_2 \rightarrow r_2$ in $R$, such that $p \in \fpos(r_1)$ is a non-root position, and $\sigma ~ = ~ mgu(r_1|_p =^? l_2)$ exists. Suppose that $l_2^{} \rightarrow r_2^{}$ has root-pair $(f, g)$. Forward closure gives us the rule $\sigma(l_1^{}) \rightarrow \sigma(r_1[r_2]_p)$. 
By~Corollary~\ref{UniqueRule2} the normal form of $\sigma(l_1^{})$ and $\sigma(r_1^{})$ must be an instance of~$r_1^{}$, i.e., $\beta (r_1^{})$ for some substitution~$\beta$. The normal form of $\sigma(r_1[r_2]_p)$ must also be the same~$\beta (r_1^{})$. But note that $p \in ODP( \sigma(r_1[r_2]_p) , \, \beta (r_1^{}) )$ since $r_1^{} (p) = l_2^{} (\epsilon) = f \neq r_2^{} (\epsilon) = g$. Thus $\sigma (r_2^{}) \, \rightarrow_{}^+ \, \beta (r_1^{})|_p^{}$. But, as mentioned, the root symbol of~$r_1^{}|_p^{}$ (and hence that of~$\beta (r_1^{})|_p^{}$) is the same as the root symbol of~$l_2^{}$. This contradicts Corollary~\ref{NoReversals}. \end{proof} We can now state the following lemma, which follows easily from the above results concerning overlaps. First we introduce the following definition: \begin{defn} A term-rewriting system $R$ is said to be \emph{non-overlapping} if and only if there are no left-hand side superpositions and no forward-overlaps. \end{defn} \begin{Lemma} Every LM-system is non-overlapping. \end{Lemma} \begin{proof} Let $R$ be an LM-system. Suppose $l_i \rightarrow r_i \in R$ for $i \in \{1, 2\}$ are distinct rules. Then, there can be no overlaps between $l_1$ and $l_2$ at position $p = \epsilon$ by Corollary~\ref{noRootOverlaps}. Corollary~\ref{UnjoinableCriticalPairs} establishes that $l_1$ and $l_2$ cannot overlap at position $p \not = \epsilon$. Finally, by Lemma~\ref{NoForwardOverlaps}, no overlaps can occur between $r_1$ and $l_2$. \end{proof} Finally, using the results derived above about \emph{LM-systems}, we show that every \emph{LM-system} is saturated by paramodulation. It is clear that every rewrite system saturated by paramodulation is also forward-closed. The next result establishes that for \emph{LM-systems}, these two concepts are equivalent. Specifically, an \emph{LM-system} is trivially saturated by paramodulation as there can be no overlaps into the left-hand side of an equation nor the right-hand side of an equation. 
\begin{corollary} \label{SaturatedByParamod} Every LM-System is saturated by paramodulation. \end{corollary} \begin{proof} Let $R$ be an LM-System and $E$ be the set of equations obtained from~$R$. Suppose $u[s']_p \approx v$ in $E$ and $s ~ \rightarrow ~ t$ in $R$ induces a paramodulation inference. There are two cases depending on whether $u[s']_p$ is the lhs or rhs of some rule in~$R$. If it is the lhs, then this would contradict Corollary~\ref{UnjoinableCriticalPairs}. If $u[s']_p$ is the rhs of some rule in $R$, then this would contradict Lemma~\ref{NoForwardOverlaps}. Thus, each case leads to a contradiction, and so no paramodulation inference steps can be performed. \end{proof} \section{The Cap Problem Modulo LM-Systems} In this section we prove that although \emph{LM-systems} are a restrictive subclass of term-rewriting systems, there are still important problems that are undecidable when restricted to \emph{LM-systems}. Specifically, we show that the \emph{cap problem}\footnote{Also known as the \emph{deduction problem}.}, which has important applications in cryptographic protocol analysis, is undecidable even when the rewrite system~$R$ is an \emph{LM-system}. The cap problem is defined as follows: \noindent \\ \underline{\underline{\textbf{Instance:}}} ~ An LM-System $R$, a set $S$ of ground terms representing the intruder knowledge, and a ground term~$M$. \underline{\underline{\textbf{Question:}}} Does there exist a cap term $C(\diamond_1, \ldots, \diamond_n)$ such that $C[\diamond_1:= s_{i_1}, \ldots, \diamond_n := s_{i_n}] \, \rightarrow^{*}_{R} \, M$? \\ We show that the above problem is undecidable by a many-one reduction from the halting problem for reversible deterministic 2-counter Minsky machines (which are known to be equivalent to Turing machines). The construction below is very similar to the one given in~\cite{NotesOnBSM}. Originally, the construction was used to show the undecidability of the subterm-collapse problem for LM-Systems.
Here it is modified slightly to account for the cap problem; however, the majority of the rules remain unchanged. A reversible deterministic 2-counter Minsky machine (henceforth a Minsky machine) is described as a tuple $N = (Q, \delta, q_0, q_L)$ where $Q$ is a finite non-empty set of states, $q_0, q_L \in Q$ are the initial and final states respectively, and $\delta$ is the transition relation. The elements of the transition relation $\delta$ are represented as 4-tuples of the following form: \[ \left[q_i, j, k, q_{i'} \right] \text{ or } \left[q_i, j, d, q_{i'} \right] \] \noindent where $q_i, q_{i'} \in Q, \; j \in \{1, 2\}, \; k \in \{Z, P\}, \; d \in \{0, +, -\}$. Tuples of the first form represent that the machine is in state $q_i$, checks if counter $j$ is zero $(Z)$ or positive $(P)$, and transitions to state $q_{i'}$. Tuples of the second form represent that the machine is in state $q_i$, and either decrements $(-)$, increments $(+)$, or does nothing $(0)$ to counter $j$ and transitions to state~$q_{i'}$. Each configuration of machine $N$ is written as $(q_i, C_1, C_2)$ where $q_i \in Q$ is the current state of the machine, and $C_1, C_2$ are the values of the counters. We encode such configurations as terms. The initial and final configurations of $N$ are encoded by $c(q_0, s^k(0), s^p(0), 0)$ and $c(q_L, s^{k'}(0), s^{p'}(0), s^n(0))$ respectively. The fourth argument of~$c$ corresponds to the number of steps the machine has taken. In the sequel we need the fact that $N$ is deterministic and reversible to establish that the construction provided actually produces an LM-System.
Namely, we need the following fact: for every pair of tuples $[q_{i_1}, j_1, x_1, q_{i'_1}]$ and $[q_{i_2}, j_2, x_2, q_{i'_2}]$ in $\delta$ we have that \[ (i_1 = i_2) \vee (i'_1 = i'_2) \Rightarrow (j_1 = j_2 \wedge \{x_1, x_2\} = \{Z, P\}) \] This means that $N$ can leave or enter the same state on two different transitions only when the same counter is being checked and different checks are being performed on that counter. Let $N$ be a Minsky machine. We construct a term-rewriting system $R_N$ over the signature \[ \Sigma \, = \, \bigcup\limits_{i = 1}^{L} \{f_i, f'_i, q_i \} \cup \left\{c, s, 0, g, g', e \right\} \] and show that the resulting TRS is an LM-System such that if $N$ starts in the initial configuration and halts in a final configuration then there exists a cap term $C$ such that the ground term $c(e, 0, 0, 0)$ (playing the role of $M$ in the description of the problem above) can be deduced. We begin the construction by initializing $R_N$ with the following rules: \begin{align*} f_L(c(q_L, s^{k'}(0), s^{p'}(0), z)) \rightarrow g(c(e, 0, 0, z)) \\[+4pt] g'(g(c(e, 0, 0, s(z)))) \rightarrow c(e, 0, 0, z) \end{align*} We then add the following rules to $R_N$, each of which encodes a possible move of the machine. That is, each rule represents an element of $\delta$.
\begin{enumerate} \item[(a1)] $\, [ \, q_i, 1, P, q_j \, ]\colon R_N \, := \, R_N \cup \{ \, f_i(c(q_i, s(x), y, z)) \to c(q_j, s(x), y, s(z)) \, \}$ \item[(a2)] $\, [ \, q_i, 2, P, q_j \, ]\colon R_N \, := \, R_N \cup \{ \, f_i(c(q_i, x, s(y), z)) \to c(q_j, x, s(y), s(z)) \, \}$ \vspace{2ex} \item[(b1)] $\, [ \, q_i, 1, Z, q_j \, ]\colon R_N \, := \, R_N \cup \{ \, f'_i(c(q_i, 0, y, z)) \to c(q_j, 0, y, s(z)) \, \}$ \item[(b2)] $\, [ \, q_i, 2, Z, q_j \, ]\colon R_N \, := \, R_N \cup \{ \, f'_i(c(q_i, x, 0, z)) \to c(q_j, x, 0, s(z)) \, \}$ \vspace{2ex} \item[(c1)] $\, [ \, q_i, 1, +, q_j \, ]\colon R_N \, := \, R_N \cup \{ \, f_i(c(q_i, x, y, z)) \to c(q_j, s(x), y, s(z)) \, \}$ \item[(c2)] $\, [ \, q_i, 2, +, q_j \, ]\colon R_N \, := \, R_N \cup \{ \, f_i(c(q_i, x, y, z)) \to c(q_j, x, s(y), s(z)) \, \}$ \vspace{2ex} \item[(d1)] $\, [ \, q_i, 1, 0, q_j \, ]\colon R_N \, := \, R_N \cup \{ \, f_i(c(q_i, x, y, z)) \to c(q_j, x, y, s(z)) \, \}$ \item[(d2)] $\, [ \, q_i, 2, 0, q_j \, ]\colon R_N \, := \, R_N \cup \{ \, f_i(c(q_i, x, y, z)) \to c(q_j, x, y, s(z)) \, \}$ \vspace{2ex} \item[(e1)] $\, [ \, q_i, 1, -, q_j \, ]\colon R_N \, := \, R_N \cup \{ \, f_i(c(q_i, s(x), y, z)) \to c(q_j, x, y, s(z)) \, \}$ \item[(e2)] $\, [ \, q_i, 2, -, q_j \, ]\colon R_N \, := \, R_N \cup \{ \, f_i(c(q_i, x, s(y), z)) \to c(q_j, x, y, s(z)) \, \}$ \end{enumerate} \vspace{1ex} We now state the following theorem, and hold off on providing a proof until various claims have been shown to hold. The result will then follow as an easy corollary. \begin{theorem} \label{CapUndec} The cap problem modulo LM-Systems is undecidable. \end{theorem} We first begin by showing that given a reversible deterministic 2-counter Minsky machine $N$, the rewrite system constructed above, $R_N$, is convergent. \begin{claim} \label{RNConvergence} Let $N$ be a Minsky machine, then the TRS $R_N^{}$ is convergent.
\end{claim} \begin{proof} Define $\succ$ on $\Sigma$ as follows: $f_i \succ f'_i \succ f_L \succ g \succ g' \succ c \succ s \succ q_j \succ e \succ 0$. Then termination of $R_N$ is apparent by applying a recursive path ordering induced by $\succ$. We show that $R_N$ is confluent by showing that $R_N$ has no critical pairs. By construction, there are clearly no superpositions that can occur at positions other than $\epsilon$. However, by the determinism of $N$, if the index $i$ occurs in the lhs of a rule whose root symbol is $f_i$, then $i$ can only occur again as the index of the root symbol $f'_{i}$ of the lhs of another rule. Thus, there can be no critical pairs. \end{proof} The next claims establish that $R_N$ is also forward-closed and that $RHS(R_N)$ is quasi-deterministic, each of which is a condition for a rewrite system to be an LM-System. \begin{claim} \label{RNFowardClosed} Let $N$ be a Minsky machine, then $R_N$ is forward-closed. \end{claim} \begin{proof} No root symbol of the lhs of any rule in $R_N$ occurs in the rhs of any other rule. That is, there is no way to unify a subterm of the rhs of any rule with the lhs of any other rule. Thus, there can be no forward-overlaps and therefore $R_N$ is forward-closed. \end{proof} \begin{claim} \label{RNQDet} Let $N$ be a Minsky machine, then $RHS(R_N)$ is quasi-deterministic. \end{claim} \begin{proof} We first show that there are no RHS overlaps in $R_N$. Suppose $l_i \rightarrow r_i \in R_N$ for $i \in \{1, 2\}$ induces a RHS overlap. By construction of $R_N$, it must be that $r_1(\epsilon) = c = r_2(\epsilon)$ and $r_1(1) = q_i = r_2(1)$. Since $N$ is deterministic, the only way this could occur is between a rule from set $(a)$ and a rule from set $(b)$, but then $r_1$ and $r_2$ are not unifiable as there would be a function clash between ``$s$" and ``0". Thus, there are no RHS overlaps. It then suffices to show that $R_N$ itself is quasi-deterministic.
Clearly, no rule contains a variable as its left-hand side or its right-hand side, and no rule is root-stable. It remains to show that there are no root-pair repetitions. Suppose $l_i \rightarrow r_i \in R_N$ for $i \in \{1, 2 \}$ induces a root-pair repetition, and suppose $(h, c)$ is the repeated root-pair. This implies that there is a pair of 4-tuples $x, y \in \delta$ such that the first coordinates of $x$ and $y$ are the same. Let $x = [q_\alpha, j_1, m, q_p]$ and $y = [q_\alpha, j_2, n, q_k]$ be such tuples. Since $N$ is deterministic and reversible, it must be that $j_1 = j_2$ and $\{m, n\} = \{Z, P\}$, but then $l_1 \rightarrow r_1$ would have root-pair $(f_\alpha, c)$ and $l_2 \rightarrow r_2$ would have root-pair~$(f'_\alpha, c)$. Thus, assuming there is a root-pair repetition contradicts the construction of $R_N$ from $\delta$. Therefore, there are no root-pair repetitions in $R_N$, and therefore $RHS(R_N)$ is quasi-deterministic. \end{proof} Finally, the claim below, along with the claims above, establishes that $R_N$ is an LM-System. Namely, we prove that $R_N$ is non-subterm-collapsing, and thus we can conclude that $RHS(R_N)$ is deterministic. Then, we establish that $N$ halts if and only if there exists a cap term that allows the deduction of $M$ modulo $R_N$. Thus, putting it all together, we establish a many-one reduction and provide a proof of Theorem~\ref{CapUndec}. \begin{claim} \label{RNNonSubterm} Let $N$ be a Minsky machine, then $R_N$ is non-subterm-collapsing. \end{claim} \begin{proof} Suppose $t \rightarrow_{R_N}^{+} t'$ such that $t'$ is a proper subterm of~$t$. By construction of $R_N$,~ $t|_p(\epsilon) = c$ for some position~$p$. Since rules $(a1) - (e2)$ and the second rule in the initialization of $R_N$ produce a ``$c$" term with an extra $s$ added to or removed from the last argument, the only rule that could have been used to produce the collapse was the first rule of the initialization of~$R_N$.
But in this case, the $q_L$ is replaced with an $e$ and thus the resulting term in the reduction could not be a proper subterm of~$t$. Thus we have a contradiction and can conclude that $R_N$ is non-subterm-collapsing. \end{proof} \begin{claim} \label{RNLMSystem} Let $N$ be a Minsky machine. Then the TRS $R_N^{}$ is an LM-System. \end{claim} \begin{proof} Claim~\ref{RNConvergence} shows that $R_N$ is convergent and Claim~\ref{RNFowardClosed} shows that $R_N$ is forward-closed. Claims~\ref{RNQDet} and \ref{RNNonSubterm} show that $RHS(R_N)$ is deterministic. Therefore, $R_N$ is an LM-System. \end{proof} \begin{claim} \label{CapiffHalt} Let $N$ be a Minsky machine. Then starting in the initial configuration \( \left( \vphantom{b_b^b} q_0^{}, k, p \right) \) $N$ halts in \( \left( \vphantom{b_b^b} q_L^{}, k', p' \right) \) iff there exists a cap term $C(\diamond)$ such that if $M = c(e, 0, 0, 0)$ and $S = \{c(q_0, s^k(0), s^p(0), 0) \}$ then $C[ \diamond := c(q_0, s^k(0), s^p(0), 0)] \, \rightarrow_{R_N}^{*} \, M$. \end{claim} \begin{proof} $(\Rightarrow)$ Suppose $N$ halts; then it does so in a finite number of steps. Let $n \in \mathbb{N}$ be the number of steps $N$ takes. The proof then proceeds similarly to that of Lemma 4.4 in~\cite{NotesOnBSM}. That is, let $(\tau_i)_{i = 1}^{n}$ be the sequence of transitions that $N$ passes through on its computation run. Then, $\tau_1 = [q_0, j_1, d_1, q_i]$ and $\tau_n = [q_{i'}, j_n, d_n, q_L]$.
For each transition $\tau_i = [q_j, l, d, q_{j'}]$, define $f^{*}_{i}$ as follows: $$ f^*_i = \begin{cases} f'_{j} & \mbox{ if } d = Z \\ f_j & \mbox{otherwise} \end{cases} $$ We can then construct the cap term $C(\diamond)$ as follows: \[ C(\diamond) = (g' \circ g)^{n-1} ( g'(f_L((f^*_n \circ \dots \circ f^*_1)(\diamond)))) \] Since the rules of $R_N$ simulate the sequence of configurations of $N$ and $N$ halts, we can instantiate $C[\diamond := c(q_0, s^k(0), s^p(0), 0)]$ and use the rules of $R_N$ to reduce this term: \[ C[\diamond := c(q_0, s^k(0), s^p(0), 0)] \rightarrow_{R_N}^{+} (g' \circ g)^{n-1} (g'(g(c(e, 0, 0, s^n(0))))) \] which can then be reduced using the second rule of $R_N$ to get the term $(g' \circ g)^{n-1}(c(e, 0, 0, s^{n-1}(0)))$. This can then be reduced by further applications of the same rule to get the term $g'(g(c(e, 0, 0, s(0))))$. At this point, one more application of the second rule will yield $c(e, 0, 0, 0) = M$. Thus, $M$ can be deduced. $(\Leftarrow)$ Suppose that there exists a cap $C(\diamond)$ such that $C[\diamond := c(q_0, s^k(0), s^p(0), 0)] \rightarrow_{R_N}^{*} M$. Let $t = C[\diamond := c(q_0, s^k(0), s^p(0), 0)]$. Thus, the above says that there is a reduction chain starting with $t$ and ending in the term $M$. That is, $t \rightarrow t_1 \rightarrow t_2 \rightarrow \dots \rightarrow t_{\kappa} = c(e, 0, 0, 0)$. Since the only rule that can introduce an ``$e$" term is the first rule, there must be a sub-chain of reductions $t \rightarrow t_1 \rightarrow \dots \rightarrow t_i$ such that $t_i = \sigma(f_L(c(q_L, s^{k'}(0), s^{p'}(0), z)))$. By design of the rewrite system $R_N$ from $N$, one can see that the chain above corresponds to a sequence of configurations of $N$ that eventually leads to the configuration $(q_L, k', p')$. Therefore, $N$, when started in $(q_0, k, p)$, halts in configuration~$(q_L, k', p')$. \end{proof} We can now state the proof of Theorem~\ref{CapUndec}. \begin{proof} Given a Minsky machine $N$, let $G$ be the function such that $G(N) = R_N$, i.e.
the function that takes as input a Minsky machine and produces the corresponding system $R_N$ as described above. Then $G$ is clearly a total recursive function. Claim~\ref{CapiffHalt} says then that $G$ is in fact a many-one reduction. Thus, since the halting problem for Minsky machines is undecidable, so is the cap problem modulo LM-Systems. \end{proof} \section{Appendix} \begin{Lemma} It is not the case that the equational theory of every $LM$-system is {\em finite\/} (i.e., all congruence classes are finite). \label{inf-cong} \end{Lemma} \begin{proof} The system \[ f(g(h(x))) \rightarrow g(x) \] is an $LM$-system, but the congruence class of $g(y)$ is clearly infinite. \end{proof} \begin{Lemma} Every $LM$-system is free over the signature. \label{free-sig} \end{Lemma} \begin{proof} Let $R$ be an $LM$-system and $f \in \Sigma^{(n)}$. Let $s~=~f( s_1^{} , \ldots , s_n^{} )$ and $t = f( t_1^{} , \ldots , t_n^{} )$ be $\bar{\epsilon}$-irreducible terms that are joinable modulo~$R$. (Thus $s_1^{} , \ldots , s_n^{}, t_1^{} , \ldots , t_n^{}$ are in normal form.) Then either $t$ is in normal form and $s \to t$, or $s \to \widehat{t}$ and $t \to \widehat{t}$ where $\widehat{t}$ is the normal form of $s$ and $t$. (Note that since $R$ is forward-closed, reduction to a normal form will take only one step.) But the former is impossible because no rule in~$R$ can have the same root symbol on both the left-hand side and the right-hand side. The latter is ruled out because no two rules can have the same pair of root symbols on their two sides, i.e., root-pair repetitions are not allowed. \end{proof} \begin{Lemma} \label{UniqueEquation} Let $R$ be an $LM$-system and $s = f(s_1^{}, \ldots , s_m^{})$ and $t = g(t_1^{}, \ldots , t_n^{})$ be $\bar{\epsilon}$-irreducible terms such that~$f \neq g$.
Then $s$ and $t$ are joinable modulo~$R$ if and only if there is a unique \emph{equation} $e_1^{} \approx e_2^{} \; \in \; RHS(R)$ with root pair $(f, g)$ such that \( \displaystyle{s ~ \mathop{\xrightarrow{\hspace*{0.75cm}}}_{e_1^{} \rightarrow e_2^{}}^{} ~ t}. \) \end{Lemma} \begin{proof} The result follows from Lemma~\ref{ExactlyTwo} and its proof. \end{proof} \end{document}
\begin{document} \setcounter{page}{1001} \issue{XXI~(2014)} \sloppy \title{Universality and Almost Decidability} \author{Cristian S. Calude$^{1}$, Damien Desfontaines$^{2}$\\[2ex] $^{1}$Department of Computer Science\\ University of Auckland, Private Bag 92019, Auckland, New Zealand\\ \url{www.cs.auckland.ac.nz/~cristian}\\ $^{2}$\'{E}cole Normale Sup\'erieure, 45 rue d'Ulm, 75005 Paris, France\\ \url{desfontain.es/serious.html} } \maketitle \runninghead{C. S. Calude, D. Desfontaines}{Universality and Almost Decidability} \begin{abstract} We present and study new definitions of universal and programmable universal unary functions and consider a new simplicity criterion: almost decidability of the halting set. A set of positive integers $S$ is almost decidable if there exists a decidable and generic (i.e.\ a set of natural density one) set whose intersection with $S$ is decidable. Every decidable set is almost decidable, but the converse implication is false. We prove the existence of infinitely many universal functions whose halting sets are generic (negligible, i.e.\ have density zero) and (not) almost decidable. One result---namely, the existence of infinitely many universal functions whose halting sets are generic (negligible) and not almost decidable---solves an open problem in \cite{HM}. We conclude with some open problems. \end{abstract} \begin{keywords}Universal function, halting set, density, generic and negligible sets, almost decidable set\end{keywords} \section{Universal Turing Machines and Functions} The first universal Turing machine was constructed by Turing \cite{AT,ATc}. In Turing's words: \begin{quote}\it \dots a single special machine of that type can be made to do the work of all. It could in fact be made to work as a model of any other machine. The special machine may be called the universal machine. 
\end{quote} Shannon \cite{CS} proved that two symbols are sufficient for constructing a universal Turing machine, provided enough states can be used. According to Margenstern~\cite{Maurice-2010}: ``Claude Shannon raised the problem of what is now called the {\em descriptional complexity} of Turing machines: how many states and letters are needed in order to get universal machines?'' Notable universal Turing machines include the machines constructed by Minsky (7-state 4-symbol) \cite{MM}, Rogozhin (4-state 6-symbol) \cite{YR}, and Neary--Woods (5-state 5-symbol) \cite{NW}. Herken's book \cite{Herken} celebrates the first 50 years of universality. Woods and Neary present a survey in \cite{WN}; Margenstern's paper~\cite[p.\ 30--31]{Maurice-2010} also presents a time line of the main results. Roughly speaking, a universal machine is a machine capable of simulating any other machine. There are a few definitions of universality, the most important being {\em universality in Turing's sense} and {\em programmable universality} in the sense of Algorithmic Information Theory~\cite{Ca,DH}. In the following we denote by $\mathbb{Z}^{+}$ the set of positive integers $\left\{ 1,2,\ldots\right\} $, and $\overline{\mathbb{Z}^{+}}=\mathbb{Z}^{+}\cup\left\{ \infty\right\}$. The cardinality of a set $S$ is denoted by $\#S$. The domain of a partial function $F\colon \mathbb{Z}^{+} \longrightarrow\overline{\mathbb{Z}^{+}}$ is $\dom(F)=\{x \in \mathbb{Z}^{+}\mid F(x)\not=\infty\}$. We assume familiarity with the basics of computability theory~\cite{Cooper-2004,YM2010}. We now define universality for unary functions.
A partially computable function $U \colon\mathbb{Z}^{+}\longrightarrow\overline{\mathbb{Z}^{+}}$ is called {\em (Turing) universal} if there exists a computable function $C_U \colon\mathbb{Z}^{+}\times\mathbb{Z}^{+}\longrightarrow\mathbb{Z}^{+}$ such that for any partially computable function $F\colon\mathbb{Z}^{+}\longrightarrow\overline{\mathbb{Z}^{+}}$ there exists an integer $g_{U,F}$ (called a {\em G\"{o}del number} of $F$ for $U$) such that for all $ x\in\mathbb{Z}^{+}$ we have: $U \left(C_U\left(g_{U,F},x\right)\right)=F\left(x\right)$. Following \cite{YM2012,CD} we say that a partially computable function $U \colon\mathbb{Z}^{+}\longrightarrow\overline{\mathbb{Z}^{+}}$ is {\em programmable universal} if for every partially computable function $F\colon\mathbb{Z}^{+}\longrightarrow\overline{\mathbb{Z}^{+}}$ there exists a constant $k_{U,F}$ such that for every $x\in\mathbb{Z}^{+}$ there exists $y\le k_{U,F}\cdot x$ with $U (y) = F (x).$\footnote{For the programming-oriented reader we note that the property ``programmable universal" corresponds to being able to write a compiler.} \begin{thm} \label{mainprogramuniv} A partially computable function $U \colon\mathbb{Z}^{+}\longrightarrow\overline{\mathbb{Z}^{+}}$ is programmable universal iff there exists a partially computable function $C_U \colon\mathbb{Z}^{+}\times\mathbb{Z}^{+}\longrightarrow\overline{\mathbb{Z}^{+}}$ such that for any partially computable function $F\colon\mathbb{Z}^{+}\longrightarrow\overline{\mathbb{Z}^{+}}$ there exist two integers $g_{U,F}, c_{U,F}$ such that for all $ x\in\mathbb{Z}^{+}$ we have \begin{equation} \label{suniv1} U \left(C_U\left(g_{U,F},x\right)\right)=F\left(x\right) \end{equation} and \begin{equation} \label{suniv2}C_U\left(g_{U,F},x\right)\leq c_{U,F}\cdot x. 
\end{equation} \end{thm} \begin{proof}First we construct a partially computable function $V\colon\mathbb{Z}^{+}\longrightarrow\overline{\mathbb{Z}^{+}}$ and a partially computable function $C_V \colon\mathbb{Z}^{+}\times\mathbb{Z}^{+}\longrightarrow\overline{\mathbb{Z}^{+}}$ such that for every partially computable function $F$, (\ref{suniv1}) and (\ref{suniv2}) are satisfied. Indeed, the classical Enumeration Theorem~\cite{Cooper-2004} shows the existence of a partial computable function $\Gamma \colon\mathbb{Z}^{+}\times \mathbb{Z}^{+} \longrightarrow\overline{\mathbb{Z}^{+}}$ such that for every partial computable function $F\colon \mathbb{Z}^{+} \longrightarrow\overline{\mathbb{Z}^{+}}$ there exists $e \in \mathbb{Z}^{+}$ such that $F(x) = \Gamma(e, x)$, for all $x\in \mathbb{Z}^{+}$. Consider the computable function $f \colon \mathbb{Z}^{+}\times \mathbb{Z}^{+} \longrightarrow\mathbb{Z}^{+}$ defined as follows: if $e_{1}e_{2}\dots e_{n}$ and $x_{1}x_{2}\dots x_{m}$ are the binary expansions of $e$ and $x$, respectively, then the binary expansion of $f(e,x)$ is $e_{1}0e_{2}0\dots e_{n}1x_{1}x_{2}\dots x_{m}$. Then $f$ is injective, because from this expansion we can uniquely recover first $e$ and then $x$.
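As a concrete illustration, the bit-level coding above is easy to implement. The sketch below is our own code, not part of the proof: it builds $f(e,x)$ with binary expansion $e_1 0 e_2 0 \dots e_n 1 x_1 \dots x_m$ and recovers $e$ and $x$ from it (playing the role of the partial inverses $f_1, f_2$).

```python
def f(e, x):
    # Binary expansion of f(e, x): e1 0 e2 0 ... e_{n-1} 0 e_n 1 x1 ... xm.
    ebits, xbits = bin(e)[2:], bin(x)[2:]
    coded = "".join(b + "0" for b in ebits[:-1]) + ebits[-1] + "1" + xbits
    return int(coded, 2)

def f_inverse(n):
    # Partial inverse, defined only on the image of f: read (bit, flag) pairs
    # until the flag 1 that terminates e; the remaining bits encode x.
    s = bin(n)[2:]
    ebits, i = [], 0
    while True:
        ebits.append(s[i])
        i += 2
        if s[i - 1] == "1":
            break
    return int("".join(ebits), 2), int(s[i:], 2)

# f is injective and f_inverse recovers both components.
pairs = [(e, x) for e in range(1, 30) for x in range(1, 30)]
assert len({f(e, x) for e, x in pairs}) == len(pairs)
assert all(f_inverse(f(e, x)) == (e, x) for e, x in pairs)
```

For example, $f(2,3)$ interleaves $10$ (the bits of $2$) with the padding $0\ldots 1$ and appends $11$ (the bits of $3$), giving the binary string $100111$.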
If $f_1, f_2 \colon \mathbb{Z}^{+} \longrightarrow \mathbb{Z}^{+}$ are computable partial inverses of $f$, i.e.\ $ f(f_1(x), f_2(x)) = x$, for all $x\in f(\mathbb{Z}^{+}\times\mathbb{Z}^{+})$, then the function $V(x) = \Gamma(f_1(x), f_2(x))$ satisfies (\ref{suniv1}) and (\ref{suniv2}) with $C_{V}=f$.\footnote{This construction suggests that the function $C_{U}$ may be taken to be computable.} Let $U$ be programmable universal, that is, for every partially computable function $F\colon\mathbb{Z}^{+}\longrightarrow\overline{\mathbb{Z}^{+}}$ there exists a constant $k_{U,F}$ such that for every $x\in\mathbb{Z}^{+}$ there exists $y\le k_{U,F}\cdot x$ with $U (y) = F (x).$ We shall use $V$ to prove that $U$ satisfies the condition in the statement of the theorem. Let $b\colon \mathbb{Z}^{+}\times\mathbb{Z}^{+} \longrightarrow\mathbb{Z}^{+}$ be a computable bijection and $b_1, b_2$ the components of its inverse. We define the partially computable function $C_U$ as follows. We consider first the set $ S(z,x)=\{y \in\dom(U)\mid y \le b_1(z)\cdot x, U(y)=V(C_V(b_2(z),x))\}$ and then we define $C_U(z,x) $ to be the first element of $S(z,x)$ according to some computable enumeration of $\dom(U)$. Formally, let $E$ be a computable one-one enumeration of $\dom(U)$ and define \[C_U(z,x) = E\left(\inf\{y\:|\:E(y) \le b_1(z)\cdot x \text{ and }U(E(y))=V(C_V(b_2(z),x))\}\right).\] We now prove that $U$ satisfies the condition in the statement of the theorem via $C_U$. To this aim let $F$ be a partially computable function and let $g_{V,F}, c_{V,F}$ be the constants associated with $V$ and $F$. \\[-2ex] Put $g_{U,F} = b(k_{U,F}, g_{V,F})$ and $c_{U,F}= k_{U,F}$.
\\[-2ex] We have: \begin{eqnarray*} C_{U}(g_{U,F},x) & = & E\left(\inf\{y\:|\: E(y) \le b_1(g_{U,F})\cdot x \text{ and }U(E(y))=V(C_V(b_2(g_{U,F}),x))\}\right)\\ & = & E\left(\inf\{y\:|\: E(y)\le k_{U,F}\cdot x \text{ and } U(E(y))=V(C_V(g_{V,F},x))\}\right)\\ & = & E\left(\inf\{y\:|\: E(y) \le k_{U,F}\cdot x \text{ and } U(E(y))=F(x)\}\right)\\ & \le & k_{U,F}\cdot x=c_{U,F}\cdot x, \end{eqnarray*} and $U(C_{U}(g_{U,F},x))=F(x)$. Conversely, if $V$ satisfies (\ref{suniv1}) and (\ref{suniv2}) with the partially computable function $C_{V}$, then $V$ is programmable universal: given a partially computable function $F$ and $x\in\mathbb{Z}^{+}$, take $y=C_{V}(g_{V,F},x)$ and $k_{V,F}=c_{V,F}$. \end{proof} Universal and programmable universal functions exist and can be effectively constructed. Every programmable universal function is universal, but the converse implication is false. \section{The Halting Set and Almost Decidability} Interesting classes of Turing machines have decidable halting sets: for example, Turing machines with two letters and two states~\cite{Maurice-2010}. In contrast, the most (in)famous result in computability theory is that {\em the halting set $\text{Halt}(U)=\dom(U)$ of a universal function $U$ is undecidable.} However, the halting set $\text{Halt}(U)$ is computably enumerable (see~\cite{Cooper-2004,YM2010}). How ``undecidable'' is $\text{Halt}(U)$? To answer this question we formalise the following notion: a set $S$ is ``almost decidable'' if there exists a ``large'' decidable set whose intersection with $S$ is also decidable.
In other words, the undecidability of $S$ can be located to a ``small'' set. To define ``large'' sets we can employ measure theoretical or topological tools adapted to the set of positive integers (see \cite{Ca}). In what follows we will work with the {\it (natural) density} on $\mathcal{P}\left(\mathbb{Z}^{+}\right)$. Its motivation is the following. If a positive integer is ``randomly'' selected from the set $\{1,2,\dots ,N\}$, then the probability that it belongs to a given set $A \subset \mathbb{Z}^{+}$ is \[ p_{N}\left(A\right)=\frac{\#\left(\left\{ 1,\ldots,N\right\} \cap A\right)}{N}\raisebox{.5ex}. \] If $\lim_{N\longrightarrow\infty}p_{N}\left(A\right)$ exists, then the set $A\subset \mathbb{Z}^{+}$ has {\em density}: \[ d\left(A\right)=\lim_{N\longrightarrow\infty}\frac{\#\left\{1\leq x\leq N\:|\: x\in A\right\} }{N}\raisebox{.5ex}. \] \begin{defn}A set is {\em generic} if it has density one; a set of density zero is called {\em negligible}. A set $S\subset \mathbb{Z}^{+}$ is {\em almost decidable} if there exists a generic decidable set $R\subset \mathbb{Z}^{+}$ such that $R \cap S$ is decidable. \end{defn} Every decidable set is almost decidable, but, as we shall see below, there exist almost decidable sets which are not decidable. A set which is not almost decidable contains no generic decidable subset; of course, this result is non-trivial if the set itself is generic. \begin{thm}[\cite{HM}, Theorem 1.1] \label{hautm} There exists a universal Turing machine whose halting set is negligible and almost decidable (in polynomial time). \end{thm} A single semi-infinite tape, single halt state, binary alphabet universal Turing machine satisfies Theorem~\ref{hautm}; other examples are provided in \cite{HM}. Negligibility reduces to some extent the power of almost decidability in Theorem~\ref{hautm}. This deficiency is overcome in the next result: the price paid is in the redundancy of the universal function. 
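To make the density definition concrete, the following sketch (our own illustrative code, with function names of our choosing) computes the empirical densities $p_N$ of the squares and of their complement, the two sets that reappear in the constructions below.

```python
import math

def p_N(A, N):
    # p_N(A) = #({1, ..., N} ∩ A) / N for a membership predicate A.
    return sum(1 for n in range(1, N + 1) if A(n)) / N

def is_square(n):
    return math.isqrt(n) ** 2 == n

# #squares in {1..N} is isqrt(N), so p_N(squares) -> 0: the squares are
# negligible (density zero) and their complement is generic (density one).
for N in (10**2, 10**4, 10**6):
    print(N, p_N(is_square, N), p_N(lambda n: not is_square(n), N))
```

Since exactly $\lfloor\sqrt{N}\rfloor$ squares lie in $\{1,\dots,N\}$, the printed densities are $\lfloor\sqrt{N}\rfloor/N$ and $1-\lfloor\sqrt{N}\rfloor/N$.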
\begin{prop} \label{genericnad}There exist infinitely many universal functions whose halting sets are generic and almost decidable (in polynomial time). \end{prop} \begin{proof}Let $V$ be a universal function and define $U$ by the formula: \[U(x) = \left\{ \begin{array}{ll} V(y), & \mbox{\rm if $x=y^{2}$, for some $y\in\mathbb{Z}^{+}$}, \\ 0, & \mbox{\rm otherwise} \,. \end{array} \right.\] Clearly, $U$ is universal, $\text{Halt}(U)$ is generic, the set $S=\{ y \in\mathbb{Z}^{+}\mid y\not= x^{2} \text { for every } x\in\mathbb{Z}^{+}\}$ is generic and decidable (in polynomial time) and $S\cap \text{Halt}(U)$ is generic and decidable (in polynomial time). By varying the universal function $V$ we obtain infinitely many such functions $U$. \end{proof} \begin{cor} There exist infinitely many almost decidable but not decidable sets. \end{cor} Does there exist a universal function $U$ whose halting set is not almost decidable? This problem was left open in \cite{HM}: here we answer it in the affirmative. \begin{thm} \label{notalmostdecid} There exist infinitely many universal functions whose halting sets are not negligible and not almost decidable.
\end{thm} \begin{proof} We start with an arbitrary universal function $V$ and construct a new universal function $U$ whose halting set $\text{Halt}(U)$ is not almost decidable.\\ First we define the computable function $\varphi\colon \mathbb{Z}^{+}\longrightarrow\mathbb{Z}^{+}$ by $ \varphi (n)=\max\{ k\in\mathbb{Z}^{+} \mid 2^{k-1} \mbox{ divides } n\}.$ \if01 Here are the first values of $\varphi$: \noindent \begin{center} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|} \hline $x$ & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 \tabularnewline \hline $\varphi\left(x\right)$ & 1 & 2 & 1 & 3 & 1 & 2 & 1 & 4 & 1 & 2 \tabularnewline \hline \hline $x$ & 11 & 12 & 13 & 14 & 15 & 16 & 17 & 18 & 19 & 20\tabularnewline \hline $\varphi\left(x\right)$ & 1 & 3 & 1 & 2 & 1 & 5 & 1 & 2 & 1 & 3\tabularnewline \hline \end{tabular} \par\end{center} \fi The function $\varphi$ has the following properties:\\[-2ex] \begin{enumerate} \item[(a)] $\varphi(2^{m-1}(2k+1))=m$, for every $m,k\in\mathbb{Z}^{+}$, so $\varphi$ outputs every positive integer infinitely many times. \item[(b)] $\varphi^{-1}(n) = \{ k\in\mathbb{Z}^{+}\mid 2^{n-1} \text { divides } k \text{ but } 2^{n} \text{ does not divide } k\}$. \item[(c)] $d(\varphi^{-1}(n))=2^{-n}$, for all $n\in\mathbb{Z}^{+}$. \item[(d)] If $S\subseteq \mathbb{Z}^{+}$ and $d(S)=1$, then for every $n\in\mathbb{Z}^{+}$, $\varphi^{-1}\left(n\right)\cap S\neq\emptyset$.\\[-2ex] \end{enumerate} For (d) we note that if $\varphi^{-1}\left(n\right)\cap S=\emptyset$, then $d\left(S\right)\leq1-2^{-n}$, a contradiction. Next we define $U(x)=V(\varphi (x))$ and prove that $U$ is universal. We consider the partially computable function $C_{U}(z,x) = \inf\{s\in \mathbb{Z}^{+} \mid \varphi(s) = C_{V}(z,x)\}$ and note that: 1) by (a), $\dom (C_{U}) = \dom (C_{V})$, and 2) $\varphi (C_{U}(z,x))= C_{V}(z,x)$, for all $(z,x)\in \dom (C_{V})$. 
Consequently, for every partially computable function $F\colon\mathbb{Z}^{+}\longrightarrow\overline{\mathbb{Z}^{+}}$ we have $F(x) = V(C_{V}(g_{V,F},x)) = V(\varphi(C_{U}(g_{V,F},x)))$, so $g_{U,F}=g_{V,F}$. Assume, for the sake of a contradiction, that there exists a generic decidable set $S\subseteq \mathbb{Z}^{+} $ such that $S\cap \text{Halt}(U)$ is decidable. Define the partial function $\theta \colon \mathbb{Z}^{+}\longrightarrow\overline{\mathbb{Z}^{+}}$ by $\theta(n) = \inf\{k\in S\mid \varphi\left(k\right)=n\}.$ As $S$ is decidable, $\theta$ is partially computable; by (a) ($\varphi$ is surjective) and by (d) (as $d(S)=1$, for all $n\in\mathbb{Z}^{+}$, $ \varphi^{-1}(n) \cap S \not=\emptyset$) it follows that $\theta$ is computable. Furthermore, the computable function $\theta$ has the following two properties: for all $n\in\mathbb{Z}^{+}$, $\varphi(\theta(n))=n$ and $\theta(n)\in S$. We next prove that for all $n\in\mathbb{Z}^{+}$, \begin{equation} \label{hequiv} n \in \text{Halt}(V) \, \text{ iff } \, \theta(n) \in S\cap \text{Halt}(U). \end{equation} Indeed, \begin{eqnarray*} n \in \text{Halt}(V) &\Longleftrightarrow & V(n)<\infty\\ &\Longleftrightarrow & V(\varphi(\theta(n)))<\infty \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,(\varphi(\theta(n)) = n)\\ &\Longleftrightarrow & U(\theta (n))<\infty\, \, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, (\text{definition of } U)\\ &\Longleftrightarrow & \theta (n) \in \text{Halt}(U)\\ & \Longleftrightarrow & \theta (n) \in S\cap \text{Halt}(U). \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, (\theta (n) \in S) \end{eqnarray*} From (\ref{hequiv}) it follows that $\text{Halt}(V)$ is decidable because $S\cap \text{Halt}(U)$ is decidable, a contradiction. Finally, $d(\text{Halt}(U))>0$ because $\text{Halt}(U)= \varphi^{-1}(\text{Halt}(V))$. By varying the universal function $V$ we get infinitely many examples of universal functions $U$.
\end{proof} \begin{cor}There exist infinitely many universal functions $U$ such that for any generic computably enumerable set $S\subseteq\mathbb{Z}^{+}$, $S\cap \text{Halt}\left(U\right)$ is not decidable. \end{cor} \begin{proof} Assume $S$ is computably enumerable and $d(S)=1$. If we replace the computable function $\theta$ with the computable function $\Gamma (n) = E(\min\{i\in\mathbb{Z}^{+}\mid \varphi(E(i))=n\})$, where $E \colon \mathbb{Z}^{+}\longrightarrow\overline{\mathbb{Z}^{+}}$ is a computable injective function such that $E(\mathbb{Z}^{+})=S$ ($S$ is infinite) in the proof of Theorem~\ref{notalmostdecid}, then we obtain that $S\cap\text{Halt}\left(U\right)$ is not decidable. \end{proof} There are six possible relations between the notions of negligible, generic and almost decidable sets. The above results looked at three of them: here we show that the remaining three possibilities can be realised too. First, it is clear that there exist non-negligible and decidable sets, hence non-negligible and almost decidable sets. The next result is a stronger form of Theorem~\ref{notalmostdecid}: its proof depends on a set $A$ and works for other interesting sets as well. \begin{thm} \label{gnad} There exist infinitely many universal functions whose halting sets are generic and not almost decidable. \end{thm} \begin{proof} We use a computably enumerable generic set $A$ which has no generic decidable subset (see Theorem~2.22 in \cite{JS2012}) to construct a universal function as in the statement above. Assume $A= \text{Halt}(F)$ for some partially computable function $F$. Let $V$ be an arbitrary universal function and define $U$ by: \[U(x) = \left\{ \begin{array}{ll} V(y), & \mbox{\rm if $x=y^{2}$, for some $y\in\mathbb{Z}^{+}$}, \\ F(x), & \mbox{\rm otherwise} \,. \end{array} \right.\] Clearly $U$ is universal and $\text{Halt}(U)$ is generic.
For the sake of a contradiction assume that $\text{Halt}(U)$ is almost decidable by $S$, i.e.\ $S$ is a generic decidable set such that $\text{Halt}(U) \cap S$ is decidable. We now prove that $\text{Halt}(F)$ is almost decidable by $S'=S\cap \overline{P}$, where $P$ is the set of square positive integers (note that $P$ is decidable and negligible) and $\overline{P}$ is the complement of $P$. It is clear that $S'$ is generic and decidable, so we need only to show that $\text{Halt}(F) \cap S' = \text{Halt}(F) \cap S \cap \overline{P}$ is decidable. We note that $\text{Halt}(U)$ is a disjoint union of the sets $\{x\in\mathbb{Z}^{+} \mid x=y^{2}, \text{ for some } y\in \text{Halt}(V)\}$ and $\text{Halt}(F) \cap \overline{P}$, and the first set is a subset of $P$. To test whether $x$ is in $\text{Halt}(F) \cap S'$ we proceed as follows: a) if $x\in P$, then $x\not\in \text{Halt}(F) \cap S'$, b) if $x\not \in P$, then $x\in \text{Halt}(F) \cap S'$ iff $x\in \text{Halt}(U)\cap S$. Hence, $\text{Halt}(F) \cap S'$ is decidable because $\text{Halt}(U) \cap S$ is decidable, so $\text{Halt}(F)$ is almost decidable. We have obtained a contradiction because $\text{Halt}(F) \cap S'$ is a generic decidable subset of $A$, hence $\text{Halt}(U)$ is not almost decidable. \end{proof} Let $r \in (0,1]$. We say that a set $S\subset \mathbb{Z}^{+}$ is {\em $r$-decidable} if there exists a decidable set $R\subset \mathbb{Z}^{+}$ such that $d(R)=r$ and $R \cap S$ is decidable; a set $S\subset \mathbb{Z}^{+}$ is {\em weakly decidable} if $S$ is $r$-decidable for some $r \in (0,1)$. With this terminology, almost decidable sets coincide with $1$-decidable sets. Theorem 3.18 of \cite{DJS2012} states that there is a computably enumerable generic set that has no decidable subset of density in $(0,1)$.
Using this set in the proof of Theorem~\ref{gnad} we get the following stronger result: \begin{thm}There exist infinitely many universal functions whose halting sets are generic and not weakly decidable. \end{thm} A simple set is a co-infinite computably enumerable set whose complement has no infinite computably enumerable subset; the existence of a negligible simple set is shown in the proof of Proposition 2.15 in \cite{JS2012}. If in the proof of Theorem~\ref{gnad} we use a negligible simple set instead of the computably enumerable generic set which has no generic decidable subset we obtain the following result: \begin{thm}There exist infinitely many universal functions whose halting sets are negligible and not almost decidable. \end{thm} \section{A Simplicity Criterion for Universal Functions and Open Problems} Universality is one of the most important concepts in computability theory. However, not all universal machines are made equal. The most popular criterion for distinguishing between universal Turing machines is the number of states/symbols. Three other criteria of simplicity for universal prefix-free Turing machines have been studied in \cite{cris-2010}. The property of almost decidability is another criterion of simplicity for universal functions. The universal function $U$ constructed in the proof of Theorem~\ref{notalmostdecid} is {\em not} programmable universal. Theorems 2 and 8 in \cite{CNSS} show that the halting sets of programmable universal string functions (plain or prefix-free) are never negligible. {\em Are there programmable universal functions not almost decidable?} The notion of almost decidability suggests the possibility of an approximate (probabilistic) solution for the halting problem (see also \cite{CS2008,CD}). Assume that the halting set $\text{Halt}(U)$ is almost decidable via the generic decidable set $S$ and we wish to test whether an arbitrary $x\in\mathbb{Z}^{+}$ is in $\text{Halt}(U)$.
If $x\in S$, then $x\in \text{Halt}(U)$ iff $x\in S \cap \text{Halt}(U)$. If $x\not\in S$, then we don't know whether $x\in \text{Halt}(U)$ or $x\not\in \text{Halt}(U)$ (the undecidability is located in $ \overline{S}\cap \text{Halt}(U)$). Should we conclude that $x\in \text{Halt}(U)$ or $x\not\in \text{Halt}(U)$? Density does not help because $d(\overline{S} \cap \text{Halt}(U))= d(\overline{S} \cap \overline{\text{Halt}(U)})=0$. It is an open problem to find a solution. The notion of almost decidability can be refined in different ways, e.g.\ by looking at the computational complexity of the decidable sets appearing in Theorem~\ref{notalmostdecid}. Also, it will be interesting to study the property of {\em almost decidability} topologically or for other densities. \end{document}
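The function $\varphi$ used in the proofs above is elementary to implement, and its properties (a) and (c) can be checked mechanically. A minimal Python sketch (illustrative only, not part of the paper; `phi` is our name for $\varphi$, and the count in property (c) is exact on initial segments $\{1,\dots,N\}$ whenever $2^n$ divides $N$):

```python
def phi(n):
    # phi(n) = max{k : 2**(k-1) divides n} = 1 + (2-adic valuation of n)
    k = 1
    while n % 2 == 0:
        n //= 2
        k += 1
    return k

# Property (a): phi(2**(m-1) * (2k+1)) == m, so phi takes every value
# infinitely many times.
assert all(phi(2 ** (m - 1) * (2 * k + 1)) == m
           for m in range(1, 8) for k in range(8))

# Property (c): phi^{-1}(n) has density 2**(-n); on {1, ..., N} with
# 2**n dividing N, the number of elements with phi == n is exactly N / 2**n.
N = 1 << 16
for n in range(1, 6):
    assert sum(1 for x in range(1, N + 1) if phi(x) == n) == N // 2 ** n
```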
\begin{document} \title{Regularity of $R(X)$ does not pass to finite unions} \begin{abstract} We show that there are compact plane sets $X$, $Y$ such that $R(X)$ and $R(Y)$ are regular but $R(X \cup Y)$ is not regular. \end{abstract} \section{Introduction} In \cite{1963M}, McKissick gave the first example of a non-trivial, regular uniform algebra. His example was $R(X)$ for a suitable compact plane set $X$. (We recall the relevant definitions in the next section.) In \cite[Corollary 7.7]{2016FMY} it was shown that if $X$ and $Y$ are compact plane sets such that $R(X)$ and $R(Y)$ are regular and $X \cap Y$ is countable, then $R(X \cup Y)$ is also regular. This note addresses the question of whether $R(X \cup Y)$ is regular whenever $R(X)$ and $R(Y)$ are regular, without any additional assumptions on the compact plane sets $X$ and $Y$. If the answer were positive, the result would clearly also hold for all finite unions. Bearing this in mind, we show that the answer is negative, in Theorem 6 below, by the slightly indirect method of finding \textbf{four} compact plane sets $X_k$, $1 \leq k \leq 4$, such that each $R(X_k)$ is regular, but such that $\displaystyle R\left(\bigcup_{k=1}^4 X_k\right)$ is not regular. We describe this result by saying that \emph{regularity of $R(X)$ does not pass to finite unions}. Along the way, in Theorem 5, we give a slightly easier example showing that (in an obvious sense) regularity of $R(X)$ does not pass to countable unions. \section{Preliminaries} Throughout this note, by a \textit{compact space} we shall mean a non-empty, compact, Hausdorff topological space; by a \textit{compact plane set} we shall mean a {non-empty}, compact subset of the complex plane. We shall use the term \emph{clopen} to describe sets which are both open and closed in a given topological space (typically a compact plane set). Let $a\in\mathbb{C}$ and let $r>0$.
We denote the open disk of radius $r$ and centre $a$ by $D(a,r)$ and the corresponding closed disk by $\bar D(a,r)$. We denote the diameter of a non-empty, bounded subset $E$ of $\mathbb{C}$ by diam$(E)$. \vskip 0.3cm We assume that the reader has some familiarity with uniform algebras. We refer the reader to \cite{browder,Ga,St-book} for further background. For the general theory of commutative Banach algebras, the reader may consult \cite{Dales,Kaniuth}. \vskip 0.3cm Let $X$ be a {compact space}, and let $C(X)$ be the algebra of all continuous complex-valued functions on $X$. For each function $f \in C(X)$ and each non-empty subset $E$ of $X$, we denote the uniform norm of the restriction of $f$ to $E$ by ${\abs {f}}_E$. In particular, we denote by ${\abs {\,\cdot\,}}_X$ the uniform norm on $X$. When endowed with the norm ${\abs {\,\cdot\,}}_X$, $C(X)$ is a Banach algebra. A \emph{uniform algebra} on $X$ is a closed subalgebra of $C(X)$ that contains the constant functions and separates the points of $X$. We say that a uniform algebra $A$ on $X$ is {\it nontrivial\/} if $A\neq C(X)$, and is {\it natural\/} (on $X$) if $X$ is the character space of $A$ (under the usual identification of points of $X$ with evaluation functionals). Let $A$ be a natural uniform algebra on $X$, and let $x\in X$. We denote by $J_x$ the ideal of functions $f$ in $A$ such that $x$ is in the interior of the zero set of $f$, $f^{-1}(\{0\})$. We denote by $M_x$ the ideal of functions $f$ in $A$ such that $f(x) = 0$. We say that $x$ is a \emph{point of continuity} (for $A$) if, for all $y \in X \setminus \{x\}$ we have $ J_y\nsubseteq M_x$; we say that $x$ is an \emph{R-point} (for $A$) if, for all $y \in X \setminus \{x\}$ we have $J_x\nsubseteq M_y$. We say that $A$ is \emph{regular} if, for every closed subset $F$ of $X$ and every $y\in X\setminus F$, there exists $f\in A$ with $f(y)=1$ and $f(F)\subseteq \{0\}$. 
The natural uniform algebra $A$ is regular if and only if every point of $X$ is a point of continuity, and this is also equivalent to the condition that every point of $X$ is an R-point (\cite{2012FeinsteinMortini,FeinsteinSomerset2000}). \vskip 0.3cm Let $X$ be a compact plane set. By $R(X)$ we denote the set of those functions $f\in C(X)$ which can be uniformly approximated on $X$ by rational functions with no poles on $X$. It is standard that $R(X)$ is a natural uniform algebra on $X$. Let $x \in X$. As we will sometimes need to work with different compact plane sets simultaneously, in this note we will denote the ideals $J_x$ and $M_x$ in $R(X)$ by $J^X_x$ and $M^X_x$ respectively. We will occasionally use the fact that all idempotents in $C(X)$ are automatically in $R(X)$. This does not require the full force of the Shilov Idempotent Theorem, as for $R(X)$ it follows from Runge's theorem. \vskip 0.3cm Let $X$ be a compact plane set, let $Y$ be a compact subset of $X$ and suppose that $x,y \in Y$ with $x \neq y$. Trivially the restriction $R(X)|_Y$ is contained in $R(Y)$. It follows immediately that if $J^X_x \nsubseteq M^X_y$ then $J^Y_x \nsubseteq M^Y_y$. Thus if $x$ is an R-point for $R(X)$ then $x$ is also an R-point for $R(Y)$. Similarly, if $x$ is a point of continuity for $R(X)$ then $x$ is a point of continuity for $R(Y)$. If $R(X)$ is regular, then so is $R(Y)$. \vskip 0.3cm We now discuss some conditions under which converses to some of these implications hold. \begin{lemma} Let $X$ be a compact plane set and suppose that $E$ is a non-empty, clopen subset of $X$. Let $x \in E$. If $x$ is an R-point for $R(E)$ then $x$ is an R-point for $R(X)$. Similarly, if $x$ is a point of continuity for $R(E)$ then $x$ is a point of continuity for $R(X)$. \end{lemma} \noindent {\bf Proof. } This is almost trivial in view of the idempotents available in $R(X)$. 
The only point that may be worth noting is the fact that, for all $y \in E$, each function in $J_y^E$ has an extension in $J_y^X$. This is because every function in $R(E)$ can be extended to give a function in $R(X)$ which is constantly zero on $X\setminus E$. $\Box$ Let $X$ and $Y$ be compact plane sets, with $Y \subseteq X$, and suppose that $X$ contains no bounded component of $\mathbb{C} \setminus Y$. Then, by Runge's theorem, $R(X)|_Y$ is dense in $R(Y)$. This condition concerning the bounded components of $\mathbb{C} \setminus Y$ is, of course, automatically satisfied if $X$ has empty interior. \begin{theorem} Let $X$ and $Y$ be compact plane sets, with $Y \subseteq X$ and suppose that $x,y \in Y$ with $x \neq y$. Suppose that $X$ contains no bounded component of $\mathbb{C} \setminus Y$ and that $X \setminus Y$ is the union of a sequence of pairwise disjoint, non-empty relatively clopen subsets $Y_n$ of $X$ whose diameters tend to $0$ as $n \to \infty$. Then $Y$ is a peak set for $R(X)$ and $R(X)|_Y = R(Y)$. Moreover the following implications hold. \begin{enumerate} \item[(i)]If $J^Y_x \nsubseteq M^Y_y$ then $J^X_x \nsubseteq M^X_y$. \item[(ii)]If $x$ is an R-point for $R(Y)$ then $x$ is an R-point for $R(X)$. \item[(iii)]If $x$ is a point of continuity for $R(Y)$ then $x$ is a point of continuity for $R(X)$. \end{enumerate} \end{theorem} \noindent {\bf Proof. } In this proof, we work on $X$ when discussing clopen sets or characteristic functions: the characteristic functions of the clopen sets are then precisely the idempotents in $R(X)$. Note that clopen subsets of $X$ are automatically peak sets (using idempotents in $R(X)$). Clearly $Y$ is an intersection of (a sequence of) clopen subsets of $X$, so $Y$ is also a peak set for $R(X)$. Thus $R(X)|_Y$ is a uniformly closed subset of $R(Y)$. As $X$ contains no bounded component of $\mathbb{C} \setminus Y$, $R(X)|_Y$ is dense in $R(Y)$, so equality holds. 
Using idempotents, we also see that, for each $z \in X \setminus Y$, we have $J^X_x \nsubseteq M^X_z$ and $J^X_z\nsubseteq M^X_x$. Thus (ii) and (iii) follow quickly from (i). To prove (i), suppose that $J^Y_x \nsubseteq M^Y_y$. Choose $f \in J^Y_x \setminus M^Y_y$. We shall show that $f$ has an extension in $J^X_x$, from which (i) follows. We know that $f$ has an extension, say $F$, in $R(X)$. However $F$ might not be in $J^X_x$, so we modify $F$ as follows. Choose $r>0$ such that $f(\bar D(x,r) \cap Y) = \{0\}$. Set \[ S=\{n \in \mathbb{N}: Y_n \cap \bar D(x,r) \neq \emptyset\}\,. \] If $S$ is finite, then $F$ is already in $J^X_x$. So we may assume that $S$ is infinite, say $S=\{n_1,n_2,\dots\}$ with $n_1<n_2<\cdots$. Set $U = \bigcup_{n \in S} Y_n = \bigcup_{k=1}^\infty Y_{n_k}$. We define $\tilde f : X \to \mathbb{C}$ by $\tilde f = (1-\chi_U) F$. We claim that $\tilde f$ is the extension of $f$ that we need. Clearly $\tilde f$ vanishes on $X \cap D(x,r)$. It remains to show that $\tilde f \in R(X)$ (and, in particular, that $\tilde f$ is continuous). For each $m \in \mathbb{N}$, set $U_m=\bigcup_{k=1}^m Y_{n_k}$ and $V_m = \bigcup_{k=m+1}^\infty Y_{n_k}$. Then $U_m$ is clopen in $X$, so $\chi_{U_m} \in R(X)$ and hence $(1 - \chi_{U_m}) F \in R(X)$. Since diam$(Y_{n_k}) \to 0$ as $k \to \infty$, we see that the sequence of clopen sets $Y_{n_k}$ can accumulate only at points of $\bar D(x,r) \cap Y$, and that dist$(Y_{n_k},\bar D(x,r) \cap Y) \to 0$ as $k \to \infty$. It then follows that $|F|_{V_m} \to 0$ as $m \to \infty$, and hence $(1 - \chi_{U_m}) F \to \tilde f$ uniformly on $X$. The result follows. $\Box$ \vskip 0.3cm Using Theorem 2 and our earlier observations, we immediately obtain the following corollary concerning the regularity of $R(X)$ for this type of compact plane set $X$. \begin{corollary} Let $X$ and $Y$ be compact plane sets, with $Y \subseteq X$.
Suppose that $X$ contains no bounded component of $\mathbb{C} \setminus Y$ and that $X \setminus Y$ is the union of a sequence of pairwise disjoint, non-empty relatively clopen subsets $Y_n$ of $X$ whose diameters tend to $0$ as $n \to \infty$. Then $R(X)$ is regular if and only if $R(Y)$ is regular and each $R(Y_n)$ is regular. \end{corollary} \section{Main results} We are almost ready to construct the examples mentioned in the introduction. First we need the following result, which is taken from \cite{2001Feinstein} (see also \cite{FM,FY}). \begin{proposition} There exist a compact plane set $X$ and a non-degenerate closed interval $J$ in $\mathbb{R}$ with $J\subseteq X$ such that $R(X)$ is not regular, but every point of $X\setminus J$ is a point of continuity for $R(X)$. \end{proposition} Let $X$, $J$ be as in Proposition 4. Clearly $X$ has empty interior. For every non-empty compact subset $K$ of $X\setminus J$, $R(K)$ is regular, because every point of $K$ is a point of continuity for $R(K)$. We have $R(J)=C(J)$, and so $R(J)$ is regular. However, $R(X)$ is not regular, so not every point of $X$ is a point of continuity for $R(X)$. The points of $X$ which are not points of continuity for $R(X)$ must all be in $J$. By Lemma 1, $J$ is not clopen in $X$. In particular, $X \neq J$. \vskip 0.3cm Using this example, we next show that, in a suitable sense, regularity of $R(X)$ does not pass to countable unions. \begin{theorem} There exist a compact plane set $X$ and a sequence of compact plane sets $X_k~(k \in \mathbb{N})$ such that $X=\bigcup_{k\in \mathbb{N}} X_k$ and each $R(X_k)$ is regular, but $R(X)$ is not regular. \end{theorem} \textbf{Proof.} Let $X$ and $J$ be as in Proposition 4. Scaling $X$ if necessary, we may assume that $\{z\in X:\textrm{dist}(z,J)\geq 1/2\} \neq \emptyset$. Set $X_1=J$ and, for $k \geq 2$, set $X_k=\{z\in X:\textrm{dist}(z,J) \geq 1/k\,\}$. 
Then $R(X)$ is not regular, but each $R(X_k)$ is regular, and $X=\bigcup_{k\in \mathbb{N}} X_k$ as required. $\Box$ \vskip 0.3cm Thus regularity of $R(X)$ does not pass to countable unions. We finish by showing that regularity of $R(X)$ does not pass to finite unions either. \vskip 0.3cm \begin{theorem} There exist a compact plane set $X$ and four compact plane sets $X_k$, $k=1,2,3,4$, such that $\displaystyle X=\bigcup_{k=1}^{4} X_k$ and each $R(X_k)$ is regular, but $R(X)$ is not regular. \end{theorem} \textbf{Proof.} Again, let $X$ and $J$ be as in Proposition 4. Scaling $X$ if necessary, we may assume that $X \subseteq \{z\in \mathbb{C}:\textrm{dist}(z,J)\leq 1\}$. We begin by defining subsets $K_o$ and $K_e$ as follows. Set \[ K_o=J \cup \bigcup_{n\in\mathbb{N}} \{z \in X: 1/(2n+1) \leq \textrm{dist}(z,J) \leq 1/2n\} \] and \[ K_e=J \cup \bigcup_{n\in\mathbb{N}} \{z \in X: 1/2n \leq \textrm{dist}(z,J) \leq 1/(2n-1)\}\,. \] Then $X = K_o \cup K_e$, and $R(X)$ is not regular. It is not clear whether or not $R(K_o)$ or $R(K_e)$ are regular. In the following, we assume that $J$ is clopen in neither $K_o$ nor $K_e$ (otherwise the arguments may be simplified). We now show that each of $K_o$ and $K_e$ is a union of two sets where we can establish regularity using Corollary 3. We may write $K_o=X_1 \cup X_3$ and $K_e=X_2 \cup X_4$, where the sets $X_k$ are defined with the help of vertical bands as follows: \[ X_1=J \cup \bigcup_{n\in\mathbb{N}} \{z \in X: 1/(2n+1) \leq \textrm{dist}(z,J) \leq 1/2n~\text{and}~\sin(n\,\mathrm{Re}(z))\leq 0 \}\,; \] \[ X_2=J \cup \bigcup_{n\in\mathbb{N}} \{z \in X: 1/2n \leq \textrm{dist}(z,J) \leq 1/(2n-1)~\text{and}~\sin(n\,\mathrm{Re}(z))\leq 0 \}\,; \] \[ X_3=J \cup \bigcup_{n\in\mathbb{N}} \{z \in X: 1/(2n+1) \leq \textrm{dist}(z,J) \leq 1/2n~\text{and}~\sin(n\,\mathrm{Re}(z))\geq 0 \}\,; \] \[ X_4=J \cup \bigcup_{n\in\mathbb{N}} \{z \in X: 1/2n \leq \textrm{dist}(z,J) \leq 1/(2n-1)~\text{and}~\sin(n\,\mathrm{Re}(z))\geq 0 \}\,.
\] Then, for $1 \leq k \leq 4$, it is easy to see that $X_k$ is the disjoint union of $J$ with a sequence $(Y_{k,n})_{n=1}^\infty$ of pairwise disjoint, non-empty relatively clopen subsets of $X_k$ whose diameters tend to $0$. Each of the sets $X_k$ then satisfies the conditions of Corollary 3 with $Y=J$ and $Y_n=Y_{k,n}$ $(n\in \mathbb{N})$. Since $R(J)$ and all of the $R(Y_{k,n})$ are regular, each $R(X_k)$ is regular by Corollary 3. Certainly $\displaystyle X=\bigcup_{k=1}^{4} X_k$, and $R(X)$ is not regular, as required. $\Box$ \vskip 0.3cm As noted earlier, it follows that regularity of $R(X)$ does not pass to unions of two sets either. However, we do not know which pair of sets mentioned above demonstrates this. Clearly at least one of the following pairs of sets must provide a suitable example: $X_1$ and $X_3$; $X_2$ and $X_4$; $K_o$ and $K_e$. \vskip 0.3cm We do not know whether or not regularity of $R(X)$ \emph{does} pass to unions of two sets if the sets are assumed to be connected. \vskip 0.3cm We are grateful to the referee for a careful reading and for useful comments and suggestions. \end{document}
\begin{document} \title{On Evans' and Choquet's theorems for polar sets} \section{Introduction and main results}\label{intro} By classical results of G.C.\,Evans and G.\,Choquet on ``good kernels $G$ in potential theory'', for every polar $K_\sigma$-set $P$, there exists a finite measure~$\mu$ on~$P$ such that $G\mu=\infty$ on $P$, and a set $P$ admits a finite measure $\mu$ on $P$ such that $\{G\mu=\infty\}=P$ if and only if $P$ is a polar $G_\delta$-set. We recall that Evans' theorem yields the solutions of the generalized Dirichlet problem for open sets by the Perron-Wiener-Brelot method using only \emph{harmonic} upper and lower functions (see \cite{cornea} and Corollary \ref{evans-corollary}). In this paper we intend to show that such results can be obtained, by elementary ``metric'' considerations and without using any potential theory, for general kernels~$G$ locally satisfying \begin{equation*} G(x,z)\wedge G(y,z)\le C G(x,y).\footnote{We write $a\wedge b$ for the minimum and $a\vee b$ for the maximum of $a$ and $b$.} \end{equation*} The particular case $G(x,y)=|x-y|^{\alpha-N}$ on $\mathbbm{R}^N$, $2<\alpha<N$, solves a long-standing open problem (see \cite[p.\,407, III.1.1]{landkof}). {\bfseries ASSUMPTION.} {\sl Let $X$ be a~locally compact space with countable base and let $G\colon X\times X\to [0,\infty]$ be Borel measurable, $G>0$, such that the following holds: \begin{itemize} \item[\rm (1)] For every $x\in X$, $\lim_{y\to x} G(x,y)=G(x,x)=\infty$ and $G(x,\cdot)$ is bounded outside every neighborhood of $x$. \item[\rm (2)] $G$ has the local triangle property.
\end{itemize} } We recall that $G$ is said to have the \emph{triangle property} if, for some $C>0$, \begin{equation}\label{TP} G(x,z)\wedge G(y,z)\le C G(x,y) \qquad\mbox{ for all }x,y,z\in X, \end{equation} and that $G$ has the \emph{local triangle property} if every point in $X$ admits an open neighborhood $U$ such that $G|_{U\times U}$ has the triangle property. It is well known that $G$ has the triangle property if and only if there exist a~metric~$d$ on~$X$ and $\gamma,C\in (0,\infty)$ such that, for all $x,y\in X$, \begin{equation}\label{triangle} C^{-1} d(x,y)^{-\gamma}\le G(x,y)\le C d(x,y)^{-\gamma}, \end{equation} where, by assumption (1), every such metric yields the initial topology on $X$. To be more explicit we observe that (\ref{TP}) means that $\tilde G(x,y):=G(y,x)\le C G(x,y)$ (take $z=x$) and that $\rho:=G^{-1}+\tilde G^{-1}$ is a \emph{quasi-metric} on $X$, that is, for some $C>0$, $\rho(x,y)\le C(\rho(x,z)+\rho(y,z))$ for all $x,y,z\in X$. A metric $d$ satisfying~(\ref{triangle}) is then obtained fixing $\gamma\ge 2 \log_2 C$ and defining \begin{equation*} d(x,y):=\inf\{\sum\nolimits_{0< j<n} \rho(z_j,z_{j+1})^{1/\gamma}\colon n\ge 2,\, z_1=x, z_n=y,\, z_j\in X\} \end{equation*} (see \cite[Proposition 14.5]{heinonen}). Conversely, every $G$ satisfying~(\ref{triangle}) has the triangle property, since $d(x,z)\vee d(y,z)\ge d(x,y)/2$. More generally, (\ref{TP}) holds~if \begin{equation*} g\circ d_0\le G\le c g\circ d_0 \end{equation*} for some metric $d_0$ for $X$ and a~decreasing function $g$ on $[0,\sup\{d_0(x,y)\colon x,y\in X\}]$ satisfying $0<g(r/2)\le cg(r)<\infty$ for $r>0$ and $\lim_{r\to 0}g(r)=g(0)=\infty$.
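On a finite set of points, the chain infimum defining $d$ above is exactly an all-pairs shortest-path computation with edge weights $\rho(z_j,z_{j+1})^{1/\gamma}$. A minimal Python sketch (illustrative only; the kernel $G(x,y)=|x-y|^{-1/2}$, the grid of points and the choice $\gamma=2$ are ours, and we verify only the metric axioms, not the two-sided comparison with $G$):

```python
import itertools

def chain_metric(points, rho, gamma):
    """d(x, y) = inf over finite chains x = z_1, ..., z_n = y of
    sum_j rho(z_j, z_{j+1})**(1/gamma); on a finite point set this is
    all-pairs shortest paths (Floyd-Warshall on the complete graph)."""
    n = len(points)
    d = [[rho(points[i], points[j]) ** (1.0 / gamma) for j in range(n)]
         for i in range(n)]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d

# Quasi-metric rho = 1/G + 1/G~ for the symmetric kernel G(x, y) = |x - y|**(-1/2),
# so rho(x, y) = 2 * |x - y|**(1/2); take gamma = 2.
pts = [i / 10.0 for i in range(11)]
rho = lambda x, y: 2.0 * abs(x - y) ** 0.5
d = chain_metric(pts, rho, gamma=2.0)

# d is a metric: zero diagonal, symmetry, triangle inequality.
n = len(pts)
for i, j, k in itertools.product(range(n), repeat=3):
    assert d[i][i] == 0.0
    assert abs(d[i][j] - d[j][i]) < 1e-12
    assert d[i][j] <= d[i][k] + d[k][j] + 1e-12
```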
In particular, the local triangle property holds for the classical Green function not only for domains in $\mathbbm{R}^N$, $N\ge 3$, but also on domains $X$ in $\mathbbm{R}^2$ such that $\mathbbm{R}^2\setminus X$ is not polar ($\log (2/r)\le 2\log(1/r)$ for $0<r\le 1/2$), as well as for Green functions associated with very general L\'evy processes (see \cite{hansen-liouville-wiener}). For every Borel set $B$ in $X$, let $\mathcal M(B)$ denote the set of all \emph{finite} positive Radon measures~$\mu$ on~$X$ such that $\mu(X\setminus B)=0$, and let $\mathcal M_\eta(B)$, $\eta>0$, be the set of all measures $\mu\in\mathcal M(B)$ such that the total mass $\|\mu\|:=\mu(B)$ is at most $\eta$. Let \begin{equation*} G\mu(x):=\int G(x,y)\,d\mu(y), \qquad \mu\in \mathcal M(X), \, x\in X. \end{equation*} By assumption (1), for every $\mu\in\mathcal M(X)$, \begin{equation}\label{gmu-finite} G\mu<\infty \quad\text{ on } X\setminus \operatorname*{supp}(\mu). \end{equation} For every $A\subset X$, let \begin{equation*} c^\ast(A):=\inf\{\|\mu\|\colon \mu\in \mathcal M(X), \, G\mu\ge 1\mbox{ on }A\}. \end{equation*} We observe that $c^\ast(A)=0$ if and only if there exists $\mu\in \mathcal M(X)$ such that $G\mu=\infty$ on $A$. Indeed, if there are $\mu_n \in\mathcal M(X)$, $n\in\mathbbm{N}$, such that $\|\mu_n\|< 2^{-n}$ and $G\mu_n\ge 1$ on $A$, then obviously $\mu:=\sum_{n\in\mathbbm{N}} \mu_n\in \mathcal M_1(X)$ and $G\mu=\infty$ on $A$. Conversely, if~$\mu\in \mathcal M(X)$ with $G\mu=\infty$ on $A$, then $\nu_n:=(1/n)\mu$ satisfies $G\nu_n=\infty\ge 1$ on $A$ and~$\lim_{n \to \infty} \|\nu_n\|=0$, hence $c^\ast(A)=0$.
As already indicated, the main results of this paper are the next two theorems obtained by G.C.\,Evans \cite{evans} in the classical case (where $c^\ast(P)$ is the outer capacity of~$P$, and $P$ is polar if and only if $c^\ast(P)=0$, cf.\ \cite[Corollary 5.5.7]{AG}) and G.\,Choquet for ``good kernels in potential theory'' \cite{choquet-cr,choquet-gd}. \begin{theorem}\label{main-first} Let $P$ be an $F_\sigma$-set in $X$, $P=\bigcup_{m\in\mathbbm{N}} A_m$ with closed sets~$A_m$. Then $c^\ast(P)=0$ if and only if there is a measure $\mu\in \mathcal M(P)$ with $G\mu=\infty$ on $P$. \end{theorem} \begin{corollary}\label{p0-evans} Let $P$ be an $F_\sigma$-set in~$X$ with $c^\ast(P)=0$. Let $P_0\subset P$ be countable and $A_m$, $m\in \mathbbm{N}$, be closed sets such that $\bigcup_{m\in\mathbbm{N}} A_m=P$ and every intersection $P_0\cap A_m$ is dense in $A_m$. Then there is a~measure $\mu\in \mathcal M(P_0)$ such that $G\mu=\infty$ on $P$. \end{corollary} \begin{theorem}\label{main-second} Let $P\subset X$ and let $P_0$ be a~countable dense set in $P$. The following are equivalent: \begin{itemize} \item[\rm(i)] $P$ is a $G_\delta$-set and $c^\ast(P)=0$. \item[\rm (ii)] There exists $\mu\in\mathcal M(P)$ such that $\{G\mu=\infty\}=P$. \item[\rm (iii)] There exists $\mu\in\mathcal M(P_0)$ such that $\{G\mu=\infty\}=P$. \end{itemize} \end{theorem} \begin{remark}{\rm Let us note that J.\,Deny \cite{deny} had made a step in the direction of Choquet's result in proving that, for every $G_\delta$-set $P$ in $\mathbbm{R}^N$ which is polar (with respect to classical potential theory), there exists a~measure~$\mu$ on~$\mathbbm{R}^N$ such that $\{G\mu=\infty\}=P$.
} \end{remark} \section{The case where $G$ has the triangle property on $X$} \label{special} Let us consider first the case where $G$ has the triangle property on $X$, and therefore $C^{-1} d^{-\gamma}\le G\le C d^{-\gamma}$ for some metric $d$ for $X$ and $\gamma,C\in (0,\infty)$. Defining $\tilde G:=d^{-\gamma}$ we then have $C^{-1} \tilde G\mu \le G\mu\le C \tilde G\mu$, and hence $\{\tilde G\mu=\infty\}=\{G\mu=\infty\}$ for every $\mu\in \mathcal M(X)$. So the implications of Theorems \ref{main-first} and \ref{main-second} follow immediately if they hold for $\tilde G$. Thus we may and shall assume in this section without loss of generality that \begin{equation*} G(x,y)=d(x,y)^{-\gamma}, \qquad x,y\in X. \end{equation*} Then, for every $\mu\in\mathcal M(X)$, the ``potential'' $G\mu$ is lower semicontinuous on $X$ and continuous outside the support of $\mu$. Moreover, if $A$ and $B$ are Borel sets in $X$ such that $d(A,B):=\inf\{d(x,y)\colon x\in A, y\in B\}>0$, then \begin{equation*} G\mu(x) \le d(A,B)^{-\gamma}\|\mu\| \qquad\mbox{ for all $x\in A$ and $\mu\in \mathcal M(B)$.} \end{equation*} The key for Theorems \ref{main-first} and \ref{main-second} in our setting will be Lemmas \ref{key}, \ref{key-0} and \ref{key-finite}. \subsection{Proof of Theorem \ref{main-first}} \begin{lemma}\label{key} Let $\emptyset \ne A \subset X$ be closed, let $A_0\subset A$ be a Borel set which is dense in $A$, and let $\mu\in \mathcal M(X)$. \begin{itemize} \item[\rm (a)] If $\mu(A)=0$, there exists $\nu\in \mathcal M(A_0)$ such that \begin{equation}\label{eins} \|\nu\|=\|\mu\| \quad\mbox{ and }\quad \mbox{$G\nu\ge 3^{-\gamma} G\mu$ on $A$.} \end{equation} \item[\rm (b)] There exists $\nu\in\mathcal M(A)$ such that {\rm(\ref{eins})} holds.
\end{itemize} \end{lemma} \begin{proof} (a) Assuming $\mu(A)=0$, we consider ``shells'' $S(A,r)$ around $A$ defined by \begin{equation*} S(A,r):=\{y\in X\colon 3r\le d(A,\{y\}) < 4r\}, \qquad r>0. \end{equation*} Since $X\setminus A$ is obviously covered by the ``shells'' $S(A,(4/3)^k)$, $k\in\mathbbm{Z}$, we may suppose without loss of generality that $\mu$ is supported by some $S(A,r)$, $r>0$. For $x\in X$ and $R>0$, let $B(x,R):=\{y\in X\colon d(y,x)<R\}$. There is a~sequence~$(x_n)$ in~$A_0$ such that $A$ is covered by the open balls $B(x_n,r)$, $n\in\mathbbm{N}$, and hence $S(A,r)$ is the union of the sets \begin{equation*} A_n:=S(A,r)\cap B(x_n,5r). \end{equation*} So there exist $\mu_n\in\mathcal M(A_n)$, $n\in\mathbbm{N}$, such that $\sum_{n\in\mathbbm{N}} \mu_n=\mu$. For the moment, let us fix $n\in\mathbbm{N}$ and $x\in A$. If $y\in A_n$, then $d(y,x_n)<5r$ and $3r\le d(x,y)$, hence $d(x,x_n)\le d(x, y)+d(y,x_n) < 3 d(x,y)$. \begin{center} \includegraphics[width=8cm]{pic21-eps-converted-to.pdf}\\ {\small Figure 1. ``Sweeping'' of $\mu_n$ from $A_n$ to $x_n$} \end{center} So $d(x,x_n)^{-\gamma}>3^{-\gamma} d(x,y)^{-\gamma}$. Integrating with respect to $\mu_n\in \mathcal M(A_n)$ we see that \begin{equation*} \|\mu_n\|G(x,x_n)\ge 3^{-\gamma} G\mu_n(x). \end{equation*} Clearly, $\nu:=\sum_{n\in\mathbbm{N}} \|\mu_n\|\,\delta_{x_n}\in \mathcal M(A_0)$ (where $\delta_{x_n}$ denotes Dirac measure at~$x_n$) and the measure $\nu$ satisfies (\ref{eins}). (b) Let $\mu':=1_{A^c}\mu$ and $\mu'':=1_{A} \mu$. By (a), there exists $\nu'\in \mathcal M(A)$ such that $\|\nu'\|=\|\mu'\|$ and $G\nu'\ge 3^{-\gamma} G\mu'$ on $A$. Since $\mu=\mu'+\mu''$, we obtain that $\nu:=\nu'+\mu''\in \mathcal M(A)$ and (\ref{eins}) holds.
\end{proof} \begin{remark} {\rm It is easily seen that, for every $\varepsilon>0$, we can get (a) in Lemma~\ref{key} with $(2+\varepsilon)^{-\gamma}$ in place of $3^{-\gamma}$, replacing $3r$ and $4r$ in the definition of $S(A,r)$ by $Mr$ and $(M+1)r$, $M$ sufficiently large. } \end{remark} \begin{proof}[Proof of Theorem \ref{main-first}] Let $\mu\in \mathcal M_1(X)$, $G\mu=\infty$ on $P$. By Lemma \ref{key}, there are $\nu_m\in \mathcal M_1(A_m)$ with $G\nu_m=\infty$ on $A_m$, $m\in\mathbbm{N}$. Then $\nu:=\sum_{m\in\mathbbm{N}} 2^{-m}\nu_m\in\mathcal M_1(P)$ and $G\nu=\infty$ on~$P$. \end{proof} We might note that until now we did not use local compactness of $X$. \subsection{Proof of Corollary \ref{p0-evans}} We shall need a lemma which follows from the weak$^\ast$-lower semicontinuity of the mappings $\nu\mapsto G\nu(x)=\sup_{m\in\mathbbm{N}}(G\wedge m)\nu(x)$, $x\in X$, and the lower semicontinuity of the functions $G\nu$ (see \cite[p.\,26]{brelot} or \cite[Lemme 1]{choquet-gd}). \begin{lemma}\label{weak} Let $K,L$ be compacts in $X$, $L\subset K$. Let $\mu\in \mathcal M(K)$ and $\varphi$ be a~continuous function on $L$ such that $G\mu>\varphi$ on $L$. Then there exists a weak$^\ast$-neighborhood $\mathcal N$ of $\mu$ in $\mathcal M(K)$ such that, for every $\nu\in \mathcal N$, $G\nu>\varphi$ on $L$.
\end{lemma} Given a compact $K\ne \emptyset$ in $X$, a measure $\mu\in\mathcal M(K)$ and a dense set~$A_0$ in~$K$, we construct \emph{approximating measures} \begin{equation}\label{approx} \mu^{(n)}=\sum\nolimits_{1\le j\le M_n} \alpha_{nj} \delta_{x_{nj}}, \quad \alpha_{nj}\ge 0,\ x_{nj}\in A_0, \quad n\in\mathbbm{N}, \end{equation} in the following way: Having fixed $n\in\mathbbm{N}$, we take $x_1,\dots, x_M\in A_0$ such that the balls $B_j:=B(x_j,1/n)$ cover $K$, choose $\mu_j\in \mathcal M(B_j)$ with $\mu=\sum_{1\le j\le M} \mu_j$, and define $ \mu^{(n)}:=\sum_{1\le j\le M} \|\mu_j\| \delta_{x_j}$. Of course, $\|\mu^{(n)}\|=\|\mu\|$. Clearly, the sequence $ (\mu^{(n)})$ is weak$^{\ast}$-convergent to~$\mu$, and, for every open neighborhood $W$ of $K$, the sequence $(G\mu^{(n)})$ converges to $G\mu$ uniformly on $W^c$, since the functions $y\mapsto d(x,y)^{-\gamma}$, $x\in W^c$, are equicontinuous on~$K$. \begin{proof}[Proof of Corollary \ref{p0-evans}] We may assume without loss of generality that $(A_m)$ is an increasing sequence of compacts. Given $m\in\mathbbm{N}$, there exists $\mu_m\in \mathcal M(A_m)$ such that $G\mu_m=\infty$ on $A_m$, by Theorem \ref{main-first}. By Lemma \ref{weak} and using approximating discrete measures, we obtain $\nu_m\in \mathcal M_1(P_0\cap A_m)$ such that $G\nu_{m}> 2^m$ on~$A_m$. Then~$ \nu:=\sum_{m \in\mathbbm{N}} 2^{-m} \nu_m\in \mathcal M_1(P_0)$ and $G\nu=\infty$ on~$P$. \end{proof} \subsection{Proof of Choquet's theorem} The implication (iii)\,$\Rightarrow$\,(ii) holds trivially. Since, for every $\mu\in \mathcal M(X)$, the function~$ G\mu$ is lower semicontinuous, the set $ \{G\mu=\infty\} =\bigcap_{n\in\mathbbm{N}}\{ G\mu>n\}$ is a~$G_\delta$-set, and hence (ii) implies (i). Based on Lemma \ref{key}, the next two lemmas are the additional ingredients for the~proof of the implication (i)\,$\Rightarrow$\,(iii) in our setting.
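Before turning to these lemmas, let us spell out the uniform convergence of $(G\mu^{(n)})$ on $W^c$ claimed in the construction (\ref{approx}) above; the following one-line estimate is an elaboration added for the reader's convenience and uses only the definitions above.

```latex
\begin{equation*}
 \big|G\mu^{(n)}(x)-G\mu(x)\big|
   \le \sum\nolimits_{1\le j\le M}\int
       \big| d(x,x_j)^{-\gamma}-d(x,y)^{-\gamma}\big|\,d\mu_j(y)
   \le \|\mu\|\,\omega_x(1/n),
   \qquad x\in W^c,
\end{equation*}
where
$\omega_x(s):=\sup\{|d(x,y')^{-\gamma}-d(x,y)^{-\gamma}|\colon y,y'\in K,\ d(y,y')\le s\}$
is the modulus of continuity of $y\mapsto d(x,y)^{-\gamma}$ on $K$ (note that
$x_j\in A_0\subset K$ and $d(x_j,y)<1/n$ for $y$ in the support of $\mu_j$).
By the equicontinuity of these functions,
$\sup_{x\in W^c}\omega_x(1/n)\to 0$ as $n\to\infty$.
```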
They replace the potential-theoretic Lemme 3 in \cite{choquet-gd}. The other steps can, more or less, be taken as in \cite{choquet-gd}. A sequence $(U_n)$ of open sets in $X$ with $\bigcup_{n\in\mathbbm{N}} U_n=U$ will be called an \emph{exhaustion of~$U$} provided, for every $n\in \mathbbm{N}$, the closure $\overline U_n$ is a compact subset of $U_{n+1}$. \begin{lemma}\label{key-finite} Let $V\subset X $ be open, $\nu_0\in \mathcal M(V)$, and $M>0$. There exists~\hbox{$\nu\in \mathcal M(V)$} such that $\nu\le \nu_0$, $G\nu<\infty$ on $V^c$ and $G\nu>M$ on~ $\{G\nu_0>M+1\}\cap V$. \end{lemma} \begin{proof} Let us choose exhaustions $(V_n)$ and $(W_n)$ of $V$ and $W:=\{G\nu_0>M+1\}$, respectively. We~claim that there are measures $\nu_0\ge \nu_1\ge \nu_2 \ge \dots$ such that \begin{equation}\label{ind} G\nu_n>M+2^{-n} \mbox{ on }W , \qquad G( \nu_{n-1}-\nu_n)<2^{-n} \mbox{ on } V_n. \end{equation} Let us fix $n\in \mathbbm{N}$ and suppose that we have $\nu_{n-1}\le \nu_0 $ such that $G\nu_{n-1}>M+2^{-(n-1)}$ on $W$ (which holds if $n=1$). Since $V_m\uparrow V$ and therefore $G(1_{V_m}\nu_{n-1})\uparrow G\nu_{n-1}$ as~$m\to \infty$, there exists $m>n$ such that \begin{equation*} B_n:= W_n\cap (V\setminus V_m)\subset W_n \setminus V_{n+1} \quad\mbox{ and }\quad \nu_n:=1_{V\setminus B_n}\nu_{n-1} \end{equation*} (see Figure 2) satisfy \begin{equation}\label{crucial} \mbox{$\nu_0(B_n)< 2^{-n} d(W_n\setminus V_{n+1}, W_{n+1}^c\cup V_n) ^{\gamma} $} \mbox{ \ and \ } \mbox{$ G\nu_n>M+2^{-n} $ on $\overline W_{n+1}$.} \end{equation} \begin{center} \includegraphics[width=8.8cm]{pic24-eps-converted-to.pdf}\\ {\small Figure 2. The choice of $B_n$} \end{center} Then $ G(\nu_{n-1}-\nu_n)\le G(1_{B_n}\nu_0)< 2^{-n}$ on $W_{n+1}^c\cup V_n$. In particular, \begin{equation*} G\nu_n\ge G\nu_{n-1} -2^{-n}>M+2^{-(n-1)}-2^{-n}= M+2^{-n} \mbox{ on }W\setminus W_{n+1}.
\end{equation*} Having the second inequality in~(\ref{crucial}) we conclude that~(\ref{ind}) holds. The sequence $(\nu_n)$ is decreasing to a measure $\nu$ such that, for every $n\in\mathbbm{N}$, \begin{equation*} G(\nu_n-\nu)=\sum\nolimits_{j=n}^\infty G(\nu_j-\nu_{j+1}) <\sum\nolimits_{j=n}^\infty 2^{-(j+1)} = 2^{-n} \mbox{ on } V_n, \end{equation*} and hence $G\nu>M$ on $W\cap V_n$, by (\ref{ind}). So $ G\nu>M$ on $W\cap V$. Of course, $G\nu<\infty$ on $\overline V^c\cup W^c$. By our construction, for every $n\in\mathbbm{N}$, the support of $\nu_n$ does not intersect $W_n\cap \partial V$, and hence $G\nu\le G\nu_n<\infty$ on $W_n\cap \partial V$. Therefore $G\nu<\infty$ on $W\cap \partial V$, and we finally obtain that $G\nu<\infty$ on $V^c$. \end{proof} \begin{lemma}\label{key-0} Let $U$ be a relatively compact open set in $X$, let $ P$ be a subset of $U$ with $c^\ast(P)=0$, and let $\varepsilon>0$. Then there exist an open neighborhood $V$ of~$P$ in~$U$ and~$\mu\in\mathcal M_\varepsilon(\overline P\cap V)$ such that $G\mu<\infty$ on $V^c$ and $G\mu>2$ on $\overline P\cap V$. \end{lemma} \begin{proof} Let $\nu_0\in \mathcal M_\varepsilon(X)$ with $G\nu_0=\infty$ on $P$. Since $G(1_{U^c}\nu_0)<\infty$ on $U$, by (\ref{gmu-finite}), we may assume that $\nu_0$ is supported by $U$. Of course, \begin{equation*} P\subset V:=\{x\in U\colon G\nu_0(x)>9^{\gamma+1}+1\}\subset U. \end{equation*} By Lemma \ref{key-finite}, there exists $\nu\in\mathcal M_\varepsilon(U)$ with $G\nu<\infty$ on $U^c$ and $G\nu>9^{\gamma+1}$ on $V$. Let $\nu_1:=1_V\nu$, $\nu_2:=1_{U\setminus \overline V}\nu$ and $\sigma:=1_{U\cap \partial V}\nu$ so that $\nu=\nu_1+\nu_2+\sigma$. \begin{center} \includegraphics[width=6.7cm]{pic25-eps-converted-to.pdf}\\ {\small Figure 3.
Decomposition of $\nu$} \end{center} By Lemma \ref{key},(a), applied to $U$, $A:=\overline V\cap U$, $A_0:=V$, there exists $\tilde \nu_2\in \mathcal M(V)$ such that \begin{equation*} \|\tilde \nu_2\|=\|\nu_2\| \quad\mbox{ and }\quad G\tilde \nu_2\ge 3^{-\gamma} G\nu_2\mbox{ on }\overline V\cap U. \end{equation*} Of course, $G\sigma<\infty$ outside the boundary $\partial V$ supporting $\sigma$. Further, $G\sigma\le G\nu<\infty$ on ${U^c}$ and, by definition of $V$, on $U\setminus V$. So~$G\sigma<\infty$ on $X$. By Lusin's theorem and a~version of the continuity principle of Evans-Vasilesco (see \cite[pp.\,97--98]{HN-(H)}), there are $\sigma_n\in \mathcal M(X)$ such that each $G\sigma_n$ is continuous on $X$ and $\sigma=\sum_{n\in\mathbbm{N}} \sigma_n$. Using (\ref{approx}) and Lemma \ref{weak} with $K=L=\overline V$, we get $\tilde\sigma_n\in\mathcal M(V)$ with \begin{equation*} \|\tilde \sigma_n\|= \| \sigma_n\| \quad\mbox{ and }\quad G\tilde\sigma_n>G\sigma_n-2^{-n} \mbox{ on }\overline V. \end{equation*} We define $ \tilde \sigma=\sum\nolimits_{n\in\mathbbm{N}} \tilde \sigma_n $ and $ \tilde \nu:=\nu_1+\tilde \nu_2+\tilde \sigma$. Then \begin{equation*} \tilde \nu\in \mathcal M_\varepsilon(V) \quad\mbox{ and }\quad G\tilde \nu\ge 3^{-\gamma}G\nu-1>3^{\gamma+2}-1>3^{\gamma+1}\mbox{ on }V. \end{equation*} Applying Lemma \ref{key},(b) to $V$, we get $\mu_0\in \mathcal M(\overline P\cap V)$ with $ \|\mu_0\|=\|\tilde \nu\|\le \varepsilon $ and $G\mu_0\ge 3^{-\gamma}G\tilde \nu>3$ on $\overline P\cap V$. Finally, by Lemma \ref{key-finite}, we obtain a measure $\mu\le \mu_0$ such that $G\mu< \infty$ on ${V^c}$ and $G\mu>2$ on $\overline P\cap V$. \end{proof} \begin{lemma}\label{lemme-2} Let $ P\subset X$ be such that $c^\ast(P)=0$, let $U$ be an open neighborhood of~$P$ and~$0<\varepsilon\le 1$.
There are an open neighborhood $V$ of $P$ in $U$ and $\mu\in \mathcal M_\varepsilon (\overline P\cap V)$ such that~$G\mu>2$ on $\overline P\cap V$, $G\mu<\infty$ on $V^c$, and $G\mu<\varepsilon$ on $U^c$. \end{lemma} \begin{proof} Let $(W_n)$ be an exhaustion of $U$. For $n\in\mathbbm{N}$, let \begin{equation*} U_n:=W_{n+1}\setminus \overline W_{n-1}, \quad P_n:=P\cap U_n, \quad \varepsilon_n:= 2^{-n} \varepsilon \, (1\wedge d(U_n, W_{n-2}\cup W_{n+2}^c)^\gamma ) \end{equation*} (with $W_{-1}=W_0=\emptyset$). By Lemma \ref{key-0}, there exist an open neighborhood~$V_n$ of~$P_n$ in~$U_n$ and~$\mu_n\in\mathcal M_{\varepsilon_n} (\overline P\cap V_n)$ such that $G\mu_n>2$ on $\overline P\cap V_n$ and $G\mu_n<\infty$ on~$V_n^c$. \begin{center} \includegraphics[width=8.6cm]{pic26-eps-converted-to.pdf}\\ {\small Figure 4. Choice of $V_n$} \end{center} By our choice of $\varepsilon_n$, $G\mu_n\le 2^{-n}\varepsilon $ on $W_{n-2}\cup W_{n+2}^c$. It is immediately verified that~$\mu:=\sum_{n\in\mathbbm{N}} \mu_n$ and $V:=\bigcup_{n\in\mathbbm{N}} V_n$ have the desired properties. \end{proof} We may now continue similarly as in \cite{choquet-gd}, but in a~slightly simpler way using approximating sequences instead of weak$^\ast$-neighborhoods. \begin{lemma}\label{lemme-4} Let $ P\subset X$ with $c^\ast(P)=0$ and let $P_0$ be a countable, dense set in $P$. Let $U$ be an open neighborhood of $P$ and $\varepsilon>0$. There exists $\nu\in \mathcal M_\varepsilon(P_0)$ such that \begin{equation*} \mbox{ $G\nu >1 $ on $P$},\quad \mbox{$G\nu<\infty$ on $X\setminus P_0$} \quad\mbox{ and }\quad \mbox{$G\nu< \varepsilon$ on $U^c$}. \end{equation*} \end{lemma} \begin{proof}[Proof {\rm (cf.\ the proof of \cite[Lemme 2]{choquet-gd})}] Let $\delta:= 1\wedge(\varepsilon/2)$.
By Lemma \ref{lemme-2}, there exist an open neighborhood $V$ of $P$ in $U$ and $\mu\in \mathcal M_\delta(\overline P\cap V) $ such that \begin{equation}\label{previous} G\mu>2 \mbox{ on } \overline P\cap V, \quad G\mu<\infty\mbox { on }V^c\quad\mbox{ and }\quad G\mu<\delta\mbox{ on }U^c. \end{equation} Let $(V_k)$ be an exhaustion of $V$. For $k\in\mathbbm{N}$, we define (taking $V_{-1}=V_0:=\emptyset$) \begin{equation*} P_k:= P_0\cap(V_k\setminus V_{k-1}) \quad\mbox{ and }\quad W_k:= V_{k+1}\setminus \overline V_{k-2} \end{equation*} so that $W_k$ is an open neighborhood of $\overline P_k$, see Figure 5. \begin{center} \includegraphics[width=8cm]{pic27-eps-converted-to.pdf}\\ {\small Figure 5. The open neighborhood $W_k$ of $\overline P_k$} \end{center} Clearly, every point in $\overline P\cap (V_k\setminus V_{k-1})$ is contained in the closure of $P_{k-1}\cup P_k$. Hence we have $\bigcup_{k\in\mathbbm{N}} \overline P_k=\overline P\cap V$. We choose $\mu_k\in \mathcal M(\overline P_k)$ with $\mu=\sum_{k\in\mathbbm{N}} \mu_k$ and approximating sequences~$(\mu_k^{(n)})_{n\in\mathbbm{N}}$ for~$\mu_k$ in~$\mathcal M(P_k)$, see (\ref{approx}). So, for every $k\in\mathbbm{N}$, the sequence $(\mu_k^{(n)}) $ is weak$^\ast$-convergent to $\mu_k$ and the sequence $(G\mu_k^{(n)}) $ converges to $G\mu_k$ uniformly on $W_k^c$. For the moment, we fix $k\in\mathbbm{N}$. There exists $l_k\in\mathbbm{N}$ such that, for all $n\ge l_k$, \begin{equation}\label{est-k} |G\mu_k^{(n)}-G\mu_k|< 2^{-k}\delta \quad\text{ on } W_k^c. \end{equation} Let $\tau_k:=\sum_{|m-k|>1} \mu_m$.
Then $G\tau_k$ is continuous on $\overline P_k$ and, by (\ref{previous}), \begin{equation*} G(\mu_{k-1}+\mu_k+\mu_{k+1})>2-G\tau_k \quad\text{ on } \overline P_k.\footnote{To be formally correct we have to omit, here and below, the term with subscript $k-1$ if $k=1$.} \end{equation*} By Lemma \ref{weak} (applied with $K:=\overline P_{k-1}\cup \overline P_k\cup \overline P_{k+1}$ and $L:=\overline P_k$), there exists $m_k\ge l_k$ such that, for all $n\ge m_k$, \begin{equation}\label{kkk} G(\mu_{k-1} ^{(n)}+\mu_k ^{(n)}+\mu_{k+1} ^{(n)} )> 2-G\tau_k \quad\text{ on } \overline P_k. \end{equation} We now define \begin{equation*} n_k:=m_{k-1}\vee m_k\vee m_{k+1} , \quad \nu_k :=\mu_k^{(n_k)} \quad\mbox{ and }\quad \nu:= \sum\nolimits_{k\in\mathbbm{N}}\nu_k. \end{equation*} Then $\nu\in \mathcal M (P_0)$, $\|\nu\|=\|\mu\|\le \delta$, and $G\nu<\infty$ on $V\setminus P_0$, since $\nu$ is supported by a~subset of $P_0$ having no accumulation points in $V$. By (\ref{est-k}), \begin{equation*} G\nu\le G\mu + \sum\nolimits_{k\in\mathbbm{N}} |G\nu_k-G\mu_k|< G\mu+\delta \quad\text{ on } V^c. \end{equation*} So, by (\ref{previous}), $G\nu<\infty$ on $V^c$ and $G\nu<2\delta\le \varepsilon$ on $U^c$. By (\ref{kkk}), for every $k\in\mathbbm{N}$, \begin{equation*} G\nu\ge G(\nu_{k-1}+\nu_k+\nu_{k+1}+\tau_k)-\sum\nolimits_{|m-k|>1}|G\nu_m-G\mu_m| > 2-\delta\ge 1 \end{equation*} on $ \overline P_k$. Thus $G\nu>1$ on the set $\overline P\cap V$ containing $P$. \end{proof} \begin{proof}[Proof of {\rm (i)\,$\Rightarrow$\,(iii)} in Theorem \ref{main-second}] Let $(U_m)$ be a decreasing sequence of open sets in $X$ such that $P=\bigcap_{m\in\mathbbm{N}} U_m$. By Lemma \ref{lemme-4}, there are $\nu_m\in \mathcal M(P_0)$, $m\in\mathbbm{N}$, such that $\|\nu_m\|\le 2^{-m}$, $G\nu_m> 1$ on $P $, $G\nu_m<\infty$ on $X\setminus P_0$ and $G\nu_m<2^{-m}$ on~$X\setminus U_m$.
Obviously, $\nu:=\sum_{m\in\mathbbm{N}}\nu_m\in \mathcal M(P_0)$ and $\{G\nu=\infty\}=P$. \end{proof} \section{The general case} Assuming that $G$ has the local triangle property, there is a locally finite covering of~$X$ by relatively compact open sets $U_n$ such that, for each $n\in\mathbbm{N}$, the restriction of~$G$ to $U_n\times U_n$ has the triangle property. Let us consider $ P\subset X$, a countable dense set $P_0$ in $P$, and $\mu\in \mathcal M_1(X)$ such that $G\mu=\infty$ on $P$. For $n\in\mathbbm{N}$, we introduce \begin{equation*} P_n:=P\cap U_n \quad\mbox{ and }\quad \mu_n:=1_{U_n}\mu\in\mathcal M_1(U_n). \end{equation*} By (\ref{gmu-finite}), $ G(1_{U_n^c}\mu)<\infty$ on $U_n$, and hence \begin{equation}\label{Gnn} G\mu_n=\infty\mbox{ on } P_n. \end{equation} a) If $P$ is an $F_\sigma$-set, then every $P_n$ is an $F_\sigma$-set, and applying Theorem \ref{main-first} to~$U_n$ and $G|_{U_n\times U_n}$, we obtain $\nu_n\in\mathcal M_1(P_n)$ such that $G\nu_n=\infty$ on $P_n$. Then clearly $\nu:=\sum_{n\in\mathbbm{N}} 2^{-n}\nu_n\in \mathcal M_1(P)$ and $G\nu=\infty$ on $ P$, completing the proof of Theorem~\ref{main-first}. A straightforward modification yields Corollary~\ref{p0-evans}. b) Let us next suppose that $P$ is a $G_\delta$-set and let $n\in\mathbbm{N}$. Then $P_n$ is a $G_\delta$-set and an application of Theorem \ref{main-second} to $U_n$ and $G|_{U_n\times U_n}$ yields $ \nu_n\in\mathcal M_1(P_0\cap U_n) $ such that $\{x\in U_n: G\nu_n(x)=\infty\}=P_n$. By (\ref{gmu-finite}), $G\nu_n<\infty$ on $U_n^c$, and therefore \begin{equation}\label{choqu} \{x\in X\colon G\nu_n(x)=\infty\}=P_n. \end{equation} Obviously, \begin{equation*} \nu:=\sum\nolimits_{n\in\mathbbm{N}} 2^{-n} \nu_n \in \mathcal M_1(P_0) \quad\mbox{ and }\quad G\nu=\infty\mbox{ on } P.
\end{equation*} To show that $G\nu<\infty$ outside $P$, we fix $n\in\mathbbm{N}$ and note that the set $I_n$ of all $k\in\mathbbm{N}$ such that $U_k\cap U_n\ne\emptyset$ is finite. Defining $\rho_n:=\sum_{k\in I_n^c} 2^{-k}\nu_k$, we know that $G\rho_n<\infty$ on $U_n$, by (\ref{gmu-finite}). Hence, using (\ref{choqu}), \begin{equation*} G\nu=G\rho_n+\sum\nolimits_{k\in I_n} 2^{-k} G\nu_k< \infty \quad\text{ on } U_n\setminus P. \end{equation*} Thus $\{G\nu=\infty\}=P$. c) To complete the proof of Theorem \ref{main-second}, we suppose that $\{G\mu=\infty\}=P$ and have to show that $P$ is a $G_\delta$-set. Let $n\in\mathbbm{N}$. By (\ref{Gnn}), $\{x\in U_n\colon G\mu_n(x)=\infty\}=P_n$. By Theorem \ref{main-second}, applied to $U_n$ and $G|_{U_n\times U_n}$, we obtain that $P_n$ is a $G_\delta$-set. So there exist open neighborhoods $W_{nm}$, $m\in\mathbbm{N}$, of $P_n$ in $U_n$ such that $W_{nm}\downarrow P_n$ as~$m\to \infty$. Since $I_n$ is finite, we then easily see that $ W_m:=\bigcup\nolimits_{n\in\mathbbm{N}} W_{nm} $ is decreasing to $\bigcup\nolimits_{n\in\mathbbm{N}} P_n=P$ as~$m\to \infty$. Thus $P$ is a $G_\delta$-set, and we are done. \section{Appendix: Application to the PWB-method} In this section we shall recall the solution to the generalized Dirichlet problem for balayage spaces by the Perron-Wiener-Brelot method and how (a~generalization of) Evans' theorem enables us to use only \emph{harmonic} upper and lower functions. So let $(X,\mathcal W)$ be a balayage space (where the assumption in Section \ref{intro} may be satisfied or not). Let $\mathcal B(X)$, $\mathcal C(X)$ denote the sets of all Borel measurable numerical functions on~$X$ and of all continuous real functions on~$X$, respectively.
Let $\mathcal P$ be the set of all \emph{continuous real potentials} for $(X,\mathcal W)$, that is, \begin{equation*} \mathcal P:=\{p\in \mathcal W\cap \mathcal C(X)\colon \exists\ q\in \mathcal W\cap \mathcal C(X),\, q>0, \,p/q\to 0 \mbox{ at infinity}\}, \end{equation*} see \cite{BH, H-course} for a thorough treatment and, for example, \cite{ H-compact, H-prag-87} for an introduction to balayage spaces. We recall that, for all open sets $V$ in $X$ and $x\in X$, we have positive Radon measures $\ve_x^{V^c}$ on $X$, supported by ${V^c}$ and characterized by \begin{equation*} \int p\,d \ve_x^{V^c}=R_p^{V^c}(x):=\inf\{w(x)\colon w\in \mathcal W,\,w\ge p\mbox{ on }V^c\} , \qquad p \in \mathcal P, \end{equation*} so that, obviously, $\ve_x^{V^c}=\delta_x$ if $x\in {V^c}$. They lead to \emph{harmonic kernels} $H_V$ on $X$: \begin{equation*} H_Vf(x):=\int f\,d\ve_x^{V^c}, \qquad f\in\mathcal B^+(X),\, x\in X. \end{equation*} Let us now fix an open set $U$ in $X$ for which we shall consider the generalized Dirichlet problem (see \cite[Chapter VII]{BH}). Let $\mathcal V(U)$ denote the set of all open sets~$V$ such that $\overline V$ is compact in $U$, and let ${}^\ast {\mathcal H}(U)$ be the set of all functions $u\in\mathcal B(X)$ which are \emph{hyperharmonic on~$U$}, that is, are lower semicontinuous on $U$ and satisfy \begin{equation*} -\infty< H_Vu(x)\le u(x)\mbox{ for all }x\in V\in \mathcal V(U). \end{equation*} Then ${\mathcal H}(U):= {}^\ast {\mathcal H}(U)\cap (-{}^\ast {\mathcal H}(U))$ is the set of functions which are \emph{harmonic on~$U$}, \begin{equation*} {\mathcal H}(U) =\{h\in \mathcal B(X)\colon h|_U\in \mathcal C(U),\ H_Vh(x)=h(x) \mbox{ for all }x\in V\in\mathcal V(U)\}. \end{equation*} A function $f\colon X\to \overline{\mathbbm{R}}$ is called \emph{lower $\mathcal P$-bounded}, \emph{$\mathcal P$-bounded} if there is some $p\in\mathcal P$ such that $f\ge -p$, $|f|\le p$, respectively.
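For orientation, it may help to keep in mind the classical specialization of these objects; the following summary is standard background (cf.\ \cite{AG}, \cite{BH}) recalled here only as an aside and not used in what follows.

```latex
% Classical example (assumed standard; recalled for orientation only):
%   X = R^d with d >= 3,
%   W = the cone of positive hyperharmonic (i.e. lower semicontinuous
%       superharmonic) functions for the Laplacian.
% Then P contains the continuous Newtonian potentials of compactly
% supported measures, eps_x^{V^c} is the classical harmonic measure
% of x with respect to the open set V, and
\begin{equation*}
  H_V f(x)=\int f\,d\ve_x^{V^c}, \qquad x\in V,
\end{equation*}
% is the Perron-Wiener-Brelot solution on V with boundary data f,
% while eps_x^{V^c} = delta_x for x in V^c, as noted above.
```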
For every numerical function~$f$ on~$X$, we have the set of all \emph{upper functions} \begin{equation*} {\mathcal U}_f^U:=\{u\in {}^\ast {\mathcal H}(U)\colon u\ge f\mbox{ on }{U^c},\ u\mbox{ lower $\mathcal P$-bounded and l.s.c.\ on $X$}\}, \end{equation*} the set $ \mathcal L_f^U:=-{\mathcal U}_{-f}^U$ of all \emph{lower functions for $f$ with respect to $U$}, and the definitions \begin{equation*}\label{bar} {\overline H}_f^U :=\inf \,{\mathcal U}_f^U, \qquad {\underline H}_f^U:=\sup \,\mathcal L_f^U. \end{equation*} For every $p\in \mathcal P$, there exists $q\in \mathcal P$, $q>0$, such that $p/q\to 0$ at infinity. Hence we may replace ${\mathcal U}_f^U$ by the smaller set of upper functions which are positive outside a~compact in $X$, without changing the infimum (if $f\ge -p$, consider $f+\varepsilon q$, $\varepsilon>0$). To avoid technicalities we state the \emph{resolutivity result} (see \cite[VIII.2.12]{BH}) only for $\mathcal P$-bounded functions: \begin{theorem}\label{pwb} For every $\mathcal P$-bounded $f\in\mathcal B(X)$, \begin{equation*} H_Uf={\overline H}_f^U={\underline H}_f^U \in {\mathcal H}(U). \end{equation*} \end{theorem} \begin{remark}\label{harmonic-case} {\rm Let us indicate how the general approach above yields the solution to the generalized Dirichlet problem for harmonic spaces in the way the reader may be more familiar with. So let us assume for a moment that the harmonic measures $\ve_x^{V^c}$, $x\in V$, for our balayage space are supported by~$\partial V$ so that (hyper)harmonicity on $U$ does not depend on values on ${U^c}$, and let us identify functions on $U$ with functions on $X$ vanishing outside $U$.
Let $f$ be a Borel measurable function on $\partial U$ which is $\mathcal P$-bounded (amounting to boundedness if $U$ is relatively compact) and let $\tilde {\mathcal U}_f^U$ be the set of all functions~$u$ on~$U$ which are hyperharmonic on $U$ and satisfy \begin{equation}\label{liminf} \liminf\nolimits_{x\in U,\, x\to z}u(x)\ge f(z)\quad\mbox{ for every }z\in \partial U. \end{equation} If $u\in {\mathcal U}_f^U$, then $\tilde u:=1_Uu$ is hyperharmonic on $U$ and $\liminf\nolimits_{x\to z}\tilde u(x)\ge u(z)\ge f(z)$ for every $z\in \partial U$, hence $\tilde u\in \tilde {\mathcal U}_f^U$. If, conversely, $\tilde u$ is a function in $\tilde {\mathcal U}_f^U$, then, extending it to $X$ by $\liminf_{x\to z} \tilde u(x)$ for $z\in\partial U$ and by $\infty$ on $X\setminus \overline U$, we get a~function $u\in {\mathcal U}_f^U$. Therefore Theorem \ref{pwb} yields that $h\colon x\mapsto \ve_x^{U^c}(f)$, $x\in U$, is harmonic on $U$ and \begin{equation*} h(x)=\inf \tilde {\mathcal U}_f^U(x)=\sup \tilde{\mathcal L}_f^U(x) \quad\mbox{ for every } x\in U. \end{equation*} } \end{remark} Let {$\partial_{\mbox{\rm\small reg}}U$} denote the set of \emph{regular} boundary points $z$ of $U$, that is, $z\in \partial U$ such that $\lim_{x\to z} H_Uf(x)=f(z)$ for all $\mathcal P$-bounded $f\in \mathcal C(X)$, and let $\partial_{\mbox{\rm \small irr}} U$ be the set of \emph{irregular} boundary points of $U$, $\partial_{\mbox{\rm \small irr}} U:=\partial U\setminus \partial_{\mbox{\rm\small reg}}U$. \begin{corollary}\label{evans-corollary} Suppose that there is a~lower semicontinuous function~$h_0\ge 0$ on~$X$ which is harmonic on $U$ and satisfies $h_0=\infty$ on~$\partial_{\mbox{\rm \small irr}} U$. Then \begin{equation*} H_Uf=\inf \, {\mathcal U}_f^U\cap {\mathcal H}(U) =\sup \,\mathcal L_f^U\cap {\mathcal H}(U) \quad\mbox{ for every $\mathcal P$-bounded } f\in\mathcal B(X).
\end{equation*} \end{corollary} \begin{proof} a) Let $g$ be $\mathcal P$-bounded and lower semicontinuous on $X$. Then there exist $\mathcal P$-bounded $\varphi_n$ in~$\mathcal C(X)$, $n\in\mathbbm{N}$, such that $\varphi_n\uparrow g$. For all $z\in \partial_{\mbox{\rm\small reg}}U$ and $n\in\mathbbm{N}$, \begin{equation*} \liminf\nolimits_{x\to z} H_Ug(x)\ge \liminf\nolimits_{x\to z} H_U\varphi_n(x)=\varphi_n(z), \end{equation*} and hence $ \liminf_{x\to z} H_Ug(x)\ge g(z)$. Clearly, \begin{equation*} h_n:=H_Ug+(1/n) h_0\in {\mathcal H}(U) \end{equation*} satisfies $\lim_{x\to z} h_n(x)=\infty$ for all $z\in\partial_{\mbox{\rm \small irr}} U$, and $h_n$ is lower semicontinuous on~$X$. Thus $ h_n\in {\mathcal U}_g^U\cap {\mathcal H}(U)$. b) Let $f\in \mathcal B(X) $ be $\mathcal P$-bounded, $x\in X$. There exists a decreasing sequence~$(g_n)$ of $\mathcal P$-bounded lower semicontinuous functions on $X$ such that $g_n\ge f$ for every $n\in\mathbbm{N}$ and \begin{equation*} \int f\, d\ve_x^{U^c}=\inf\nolimits_{n\in\mathbbm{N}} \int g_n\,d\ve_x^{U^c}, \end{equation*} that is, $H_Uf(x)=\inf_{n\in\mathbbm{N}} H_Ug_n(x)$. Hence \begin{equation*} H_Uf(x)=\inf\nolimits_{n\in\mathbbm{N}} (H_U g_n +(1/n)h_0) (x), \end{equation*} where $H_Ug_n+(1/n)h_0\in {\mathcal U}_f^U\cap {\mathcal H}(U)$, by (a). Thus $H_Uf=\inf {\mathcal U}_f^U\cap {\mathcal H}(U)$. c) Further, $H_Uf=-H_U(-f)=- \inf {\mathcal U}_{-f}^U\cap {\mathcal H}(U)= \sup \mathcal L_f^U\cap {\mathcal H}(U)$. \end{proof} \begin{remarks} {\rm 1. For harmonic spaces, the result in Corollary \ref{evans-corollary} has been proven in \cite{cornea}, where the solution to the generalized Dirichlet problem is obtained using \emph{controlled convergence}. 2. In general, the set $\partial_{\mbox{\rm \small irr}} U$ is a semipolar $F_\sigma$-set.
Of course, if $(X,\mathcal W)$ satisfies Hunt's hypothesis (H), that is, if every semipolar set is polar, then $\partial_{\mbox{\rm \small irr}} U$ is polar for every $U$. Let us note that (H) holds if $X$ is an abelian group such that $\mathcal W$ is invariant under translations and $(X,\mathcal W)$ admits a Green function having the local triangle property (see \cite{HN-(H)}). By Theorem \ref{main-first}, we obtain that in this situation (which covers the classical case, many translation-invariant second order PDO's as well as Riesz potentials, that is, $\alpha$-stable processes, and many more general L\'evy processes) the assumption of Corollary~\ref{evans-corollary} holds. } \end{remarks} \bibliographystyle{plain} \begin{thebibliography}{10} \bibitem{AG} D.H.\,Armitage and S.J.\,Gardiner. \newblock{\em Classical Potential Theory}. \newblock Springer Monographs in Mathematics, Springer, 2001. \bibitem{BH} J.~Bliedtner and W.~Hansen. \newblock {\em {Potential Theory -- An Analytic and Probabilistic Approach to Balayage}}. \newblock Universitext. Springer, Berlin-Heidelberg-New York-Tokyo, 1986. \bibitem{brelot} M.\,Brelot. \newblock{\em Lectures on potential theory.} \newblock Notes by K. N. Gowrisankaran and M. K. Venkatesha Murthy. Lectures on Mathematics, No. 19. Tata Institute of Fundamental Research, Bombay, 1967. \bibitem{choquet-cr} G.~Choquet. \newblock Potentiels sur un ensemble de capacit\'e nulle. \newblock{\em C.\ R.\ Acad.\ Sci.\ Paris}, 244: 1710--1712, 1957. \bibitem{choquet-gd} G.~Choquet. \newblock Sur les $G_\delta$ de capacit\'e nulle. \newblock{\em Ann. Inst. Fourier}, 9: 103--109, 1959. \bibitem{cornea-resolution} A.\,Cornea. \newblock R\'esolution du probl\`eme de Dirichlet et comportement des solutions \`a la fronti\`ere \`a l'aide des fonctions de contr\^ole. \newblock{\em C. R. Acad. Sci. Paris S\'er. I Math.}, 320: 159--164, 1995. \bibitem{cornea} A.\,Cornea. \newblock Applications of controlled convergence in analysis.
\newblock In: {\em Analysis and topology}, pp.\ 257--275, World Sci. Publ., River Edge, NJ, 1998. \bibitem{deny} J.\,Deny. \newblock Sur les infinis d'un potentiel. \newblock{\em C. R. Acad. Sci. Paris}, 224: 524--525, 1947. \bibitem{evans} G.C.\,Evans. \newblock Potentials and positively infinite singularities of harmonic functions. \newblock{\em Monatshefte f\"ur Math. u. Phys.}, 43: 419--424, 1936. \bibitem{H-prag-87} W.\,Hansen. \newblock Balayage spaces -- a natural setting for potential theory. \newblock In: {\em Potential Theory -- Surveys and Problems}, Proceedings, Prague 1987, Lecture Notes in Mathematics 1344 (1987), 98--117. \bibitem{H-course} W.\,Hansen. \newblock{\em Three views on potential theory}. \newblock A course at Charles University (Prague), Spring 2008. \newblock http://www.karlin.mff.cuni.cz/~hansen/lecture/course-07012009.pdf. \bibitem{hansen-liouville-wiener} W.\,Hansen. \newblock Liouville property, Wiener's test and unavoidable sets for Hunt processes. \newblock{\em Potential Anal.}, 44: 639--653, 2016. \bibitem{H-compact} W.\,Hansen. \newblock Equicontinuity of harmonic functions and compactness of potential kernels. \newblock arXiv:1904.12726v2, 2018. \bibitem{HN-(H)} W.\,Hansen and I.\,Netuka. \newblock Hunt's hypothesis (H) and triangle property of the Green function. \newblock{\em Expo. Math.}, 34: 95--100, 2016. \bibitem{heinonen} J.\,Heinonen. \newblock{\em Lectures on analysis on metric spaces}. \newblock Springer, New York, 2001. \bibitem{landkof} N.S.\,Landkof. \newblock{\em Foundations of Modern Potential Theory}. \newblock Grundl. d. math. Wiss. 180, Springer, Berlin-Heidelberg-New York, 1972.
\end{thebibliography} {\small \noindent Wolfhard Hansen, Fakult\"at f\"ur Mathematik, Universit\"at Bielefeld, 33501 Bielefeld, Germany, e-mail: hansen$@$math.uni-bielefeld.de}\\ {\small \noindent Ivan Netuka, Charles University, Faculty of Mathematics and Physics, Mathematical Institute, Sokolovsk\'a 83, 186 75 Praha 8, Czech Republic, email: [email protected]} \end{document}
\begin{document} \title[Expansivity of ergodic measures with positive entropy]{Expansivity of ergodic measures with positive entropy} \author{A. Arbieto, C. A. Morales} \address{Instituto de Matem\'atica, Universidade Federal do Rio de Janeiro, P. O. Box 68530, 21945-970 Rio de Janeiro, Brazil} \ead{[email protected],[email protected]} \begin{abstract} We prove that for every ergodic invariant measure with positive entropy of a continuous map on a compact metric space there is $\delta>0$ such that the dynamical $\delta$-balls have measure zero. We use this property to prove, for instance, that the stable classes have measure zero with respect to any ergodic invariant measure with positive entropy. Moreover, continuous maps which either have countably many stable classes or are Lyapunov stable on their recurrent sets have zero topological entropy. We also apply our results to Li-Yorke chaos. \end{abstract} \noindent{\it 2000 Mathematics Subject Classification: Primary 37A25, Secondary 37A35.} \noindent{\it Keywords}: Expansive, Entropy, Ergodic. \section{Introduction} \noindent Ergodic measures with positive entropy for continuous maps on compact metric spaces have been studied in the recent literature. For instance, \cite{bhr} proved that the set of points belonging to a proper asymptotic pair (i.e. points whose stable classes are not singletons) constitutes a full measure set. Moreover, \cite{h} proved that if $f$ is a homeomorphism with positive entropy $h_\mu(f)$ with respect to one of such measures $\mu$, then there is a full measure set $A$ such that for all $x\in A$ there is a closed subset $A(x)$ in the stable class of $x$ satisfying $h(f^{-1},A(x))\geq h_\mu(f)$, where $h(\cdot,\cdot)$ is Bowen's entropy operation \cite{b}. We can also mention \cite{cj}, which proved that every ergodic endomorphism on a Lebesgue probability space having positive entropy on finite measurable partitions formed by continuity sets is pairwise sensitive.
In this paper we prove that these measures have an additional property closely related to \cite{cj}. More precisely, we prove that for every ergodic invariant measure with positive entropy of a continuous map on a compact metric space there is $\delta>0$ such that the dynamical $\delta$-balls have measure zero. Measures with this property will be called {\em expansive measures}, in part motivated by the classical definition of {\em expansive map}, which requires $\delta>0$ such that {\em the dynamical $\delta$-balls reduce to singletons} \cite{u}. We shall prove that expansive measures satisfy certain properties which are interesting by themselves. With the aid of these properties we prove that, on compact metric spaces, every stable class has measure zero with respect to any ergodic measure with positive entropy (this seems to be new as far as we know). We also prove, through the use of expansive measures, that every continuous map on a compact metric space exhibiting countably many stable classes has zero topological entropy (a similar result with different techniques has been obtained in \cite{hy}, but in the {\em transitive} case). Still in the compact case we prove that every continuous map which is Lyapunov stable on its recurrent set has zero topological entropy too (this result is well known but for {\em one-dimensional maps} \cite{fs}, \cite{si}, \cite{z}). Finally we use expansive measures to give necessary conditions for a continuous map on a Polish space to be chaotic in the sense of Li and Yorke \cite{liy}. \section{Definition of expansive measure} \noindent In this section we define expansive measures, present some examples and give necessary and sufficient conditions for a Borel probability to be expansive. To motivate the definition we first recall the notion of an expansive map. Let $(X,d)$ be a metric space and $f:X \to X$ be a map.
Given $x\in X$ and $\delta>0$ we define the dynamical ball of radius $\delta$, $$ \Phi_\delta(x)=\{y\in X:d(f^i(y),f^i(x))\leq \delta, \quad\forall i\in \mathbb{N}\}. $$ We say that $f$ is {\em expansive} (or {\em positively expansive}) if there is $\delta>0$ (called an {\em expansivity constant}) such that for all $x\neq y$ in $X$ there is $n\in\mathbb{N}$ such that $d(f^n(x),f^n(y))> \delta$ (cf. \cite{hk}). Equivalently, $f$ is expansive if there is $\delta>0$ such that $$ \Phi_\delta(x)=\{x\}, \quad \quad \forall x\in X. $$ In such a case every nonatomic Borel measure $\mu$ of $X$ satisfies \begin{equation} \label{paratodo} \mu(\Phi_\delta(x))=0,\quad\quad \forall x\in X. \end{equation} This suggests the following definition, in which the term {\em measurable} means measurable with respect to the Borel $\sigma$-algebra. \begin{defi} \label{expansive-measure} An {\em expansive measure}(\footnote{the name {\em positively expansive measure} seems to be the correct one. Other possible names are {\em pairwise sensitive measure} or {\em symmetrically sensitive measure}.}) of a measurable map $f:X\to X$ is a Borel measure $\mu$ for which there is $\delta>0$ satisfying (\ref{paratodo}). \end{defi} Notice that this definition does not assume that the map $f$ (resp. the measure $\mu$) is {\em measure-preserving} (resp. {\em invariant}), i.e., $\mu=\mu\circ f^{-1}$. In fact, this hypothesis will not be assumed unless otherwise stated. Here are some examples. \begin{ex} \label{exa1} Neither the identity nor the constant maps on separable metric spaces have expansive measures. On the other hand, every expansive measure is nonatomic. The converse is true for expansive maps, i.e., every nonatomic Borel probability measure is expansive with respect to any expansive map. On the other hand, there are nonexpansive continuous maps on certain compact metric spaces for which every nonatomic measure is expansive \cite{m1}.
The homeomorphism $f(x)=2x$ in $\mathbb{R}$ exhibits expansive measures (e.g. the Lebesgue measure) but not expansive {\em invariant} ones. \end{ex} Hereafter all expansive measures will be probability measures. Now we present a useful characterization of expansive measures. \begin{lemma} \label{forall} A Borel probability measure $\mu$ is expansive for a measurable map $f$ if and only if there is $\delta>0$ such that \begin{equation} \label{quasitodo} \mu(\Phi_\delta(x))=0, \quad\quad \mbox{for } \mu\mbox{-a.e. } x\in X. \end{equation} \end{lemma} \noindent{{\bf Proof.}} We only have to prove that (\ref{quasitodo}) implies that $\mu$ is expansive. Fix $\delta>0$ satisfying (\ref{quasitodo}) and suppose by contradiction that $\mu$ is not expansive. Then, there is $x_0\in X$ such that $\mu(\Phi_{\delta/2}(x_0))>0$. Denote $X_\delta=\{x\in X:\mu(\Phi_\delta(x))=0\}$, so $\mu(X_\delta)=1$. Since $\mu$ is a probability we obtain $X_\delta\cap \Phi_{\frac{\delta}{2}}(x_0)\neq\emptyset$, so there is $y_0\in \Phi_{\frac{\delta}{2}}(x_0)$ such that $\mu(\Phi_\delta(y_0))=0$. Now if $x\in \Phi_{\frac{\delta}{2}}(x_0)$ we have $d(f^i(x),f^i(x_0))\leq \frac{\delta}{2}$ (for all $i\in \mathbb{N}$) and, since $y_0\in \Phi_{\frac{\delta}{2}}(x_0)$, we obtain $d(f^i(y_0),f^i(x_0))\leq \frac{\delta}{2}$ (for all $i\in \mathbb{N}$), so $d(f^i(x),f^i(y_0))\leq d(f^i(x),f^i(x_0))+d(f^i(x_0),f^i(y_0))\leq \frac{\delta}{2}+\frac{\delta}{2}=\delta$ (for all $i\in \mathbb{N}$), proving $x\in \Phi_\delta(y_0)$. Therefore $\Phi_{\frac{\delta}{2}}(x_0)\subset \Phi_\delta(y_0)$, so $\mu(\Phi_{\frac{\delta}{2}}(x_0))\leq \mu(\Phi_\delta(y_0))=0$, which is absurd. This proves the result. \opensquare This lemma together with the corresponding definition for expansive maps suggests the following. \begin{defi} \label{sensitivity-constant} An {\em expansivity constant} of an expansive measure $\mu$ is a constant $\delta>0$ satisfying either (\ref{paratodo}) or (\ref{quasitodo}).
\end{defi} \section{Some properties of expansive measures} \noindent In this section we select the properties of expansive measures we shall use in the next section. For the first one we need the following definition. \begin{defi} \label{def-stable-set} Given a map $f: X\to X$ and $p\in X$ we define $W^s(p)$, the {\em stable set} of $p$, as the set of points $x$ for which the pair $(p,x)$ is asymptotic, i.e., $$ W^s(p)=\left\{x\in X:\lim_{n\to\infty}d(f^n(x),f^n(p))=0\right\}. $$ By a {\em stable class} we mean a subset equal to $W^s(p)$ for some $p\in X$. \end{defi} The following shows that every stable class is negligible with respect to any expansive {\em invariant} measure. \begin{prop} \label{asymptotic-pair} The stable classes of a measurable map have measure zero with respect to any expansive {\em invariant} measure. \end{prop} \noindent{{\bf Proof.}} Let $f:X\to X$ be a measurable map and $\mu$ be an expansive invariant measure. Denoting by $B[\cdot,\cdot]$ the closed ball operation one gets $$ W^s(p)=\bigcap_{i\in \mathbb{N}^+}\bigcup_{j\in \mathbb{N}}\bigcap_{k\geq j}f^{-k}\left( B\left[f^k(p),\frac{1}{i}\right]\right). $$ As clearly $$ \bigcup_{j\in \mathbb{N}}\bigcap_{k\geq j}f^{-k}\left(B\left[f^k(p),\frac{1}{i+1}\right]\right) \subseteq \bigcup_{j\in \mathbb{N}}\bigcap_{k\geq j}f^{-k}\left(B\left[f^k(p),\frac{1}{i}\right]\right), \quad\forall i\in\mathbb{N}^+, $$ we obtain \begin{equation} \label{zero} \mu(W^s(p))\leq \lim_{i\to\infty} \sum_{j\in \mathbb{N}}\mu\left(\bigcap_{k\geq j}f^{-k}\left(B\left[f^k(p),\frac{1}{i}\right]\right) \right). \end{equation} On the other hand, $$ \bigcap_{k\geq j}f^{-k}\left(B\left[f^k(p),\frac{1}{i}\right]\right)=f^{-j}\left(\Phi_{\frac{1}{i}}(f^j(p))\right) $$ so $$ \mu\left( \bigcap_{k\geq j}f^{-k}\left(B\left[f^k(p),\frac{1}{i}\right]\right) \right)=\mu\left( f^{-j}\left(\Phi_{\frac{1}{i}}(f^j(p))\right) \right)=\mu\left(\Phi_{\frac{1}{i}}(f^j(p))\right) $$ since $\mu$ is invariant.
Then, taking $i$ large, namely, $i>\frac{1}{\epsilon}$ where $\epsilon$ is an expansivity constant of $\mu$ (cf. Definition \ref{sensitivity-constant}), we obtain $\mu\left(\Phi_{\frac{1}{i}}(f^j(p))\right)=0$, so $$ \mu\left( \bigcap_{k\geq j}f^{-k}\left(B\left[f^k(p),\frac{1}{i}\right]\right) \right)=0. $$ Replacing in (\ref{zero}) we get the result. \opensquare For the second property we will use the following definition \cite{fs}. \begin{defi} A map $f: X\to X$ is said to be {\em Lyapunov stable} on $A\subset X$ if for any $x\in A$ and any $\epsilon>0$ there is a neighborhood $U(x)$ of $x$ such that $d(f^n(x),f^n(y))< \epsilon$ whenever $n\geq 0$ and $y\in U(x)\cap A$. \end{defi} (Notice the difference between this definition and the corresponding one in \cite{si}.) The following implies that measurable sets where the map is Lyapunov stable are negligible with respect to any expansive measure (invariant or not). \begin{prop} \label{maluco} If a measurable map of a separable metric space is Lyapunov stable on a measurable set $A$, then $A$ has measure zero with respect to any expansive measure. \end{prop} \noindent{{\bf Proof.}} Fix a measurable map $f: X\to X$ of a separable metric space $X$, an expansive measure $\mu$ and $\Delta>0$. Since $\mu$ is regular there is a closed subset $C\subset A$ such that $$ \mu(A\setminus C)\leq \Delta. $$ Let us compute $\mu(C)$. Fix an expansivity constant $\epsilon$ of $\mu$ (cf. Definition \ref{sensitivity-constant}). Since $f$ is Lyapunov stable on $A$ and $C\subset A$, for every $x\in C$ there is a neighborhood $U(x)$ such that \begin{equation} \label{fedorenko} d(f^n(x),f^n(y))< \epsilon \quad\quad\forall n\in \mathbb{N}, \forall y\in U(x)\cap C. \end{equation} On the other hand, $C$ is separable (since $X$ is) and hence Lindel\"of with the induced topology.
Consequently, the open covering $\{U(x)\cap C:x\in C\}$ of $C$ admits a countable subcovering $\{U(x_i)\cap C:i\in \mathbb{N}\}$. Then, \begin{equation} \label{balurdo} \mu(C)\leq\sum_{i\in \mathbb{N}}\mu\left(U(x_i)\cap C\right). \end{equation} Now fix $i\in \mathbb{N}$. Applying (\ref{fedorenko}) to $x=x_i$ we obtain $ U(x_i)\cap C\subset \Phi_\epsilon(x_i) $ and then $ \mu\left(U(x_i)\cap C\right)\leq \mu(\Phi_\epsilon(x_i))=0 $ since $\epsilon$ is an expansivity constant. As $i$ is arbitrary we obtain $\mu(C)=0$ by (\ref{balurdo}). To finish we observe that $$ \mu(A)=\mu(A\setminus C)+\mu(C)=\mu(A\setminus C)\leq \Delta $$ and so $\mu(A)=0$ since $\Delta$ is arbitrary. This ends the proof. \opensquare From these propositions we obtain the following corollary. Recall that the {\em recurrent set} of $f: X\to X$ is defined by $R(f)=\{x\in X:x\in \omega_f(x)\}$, where $ \omega_f(x)=\left\{y\in X:y=\lim_{k\to\infty}f^{n_k}(x)\mbox{ for some sequence } n_k\to\infty\right\}. $ \begin{clly} \label{maps-zero-entropy-0} A measurable map of a separable metric space which either has countably many stable classes or is Lyapunov stable on its recurrent set has no expansive invariant measures. \end{clly} \noindent{{\bf Proof.}} First consider the case when there are countably many stable classes. Suppose by contradiction that there exists an expansive invariant measure. Since the collection of stable classes is a partition of the space, it would follow from Proposition \ref{asymptotic-pair} that the space has measure zero, which is absurd. Now consider the case when the map $f$ is Lyapunov stable on $R(f)$. Again suppose by contradiction that there is an expansive invariant measure $\mu$. Denote by $supp(\mu)$ the support of $\mu$. Since $\mu$ is invariant we have $supp(\mu)\subset R(f)$ by Poincar\'e recurrence.
However, since $f$ is Lyapunov stable on $R(f)$ we obtain $\mu(R(f))=0$ from Proposition \ref{maluco}, so $\mu(supp(\mu))=\mu(R(f))=0$, which is absurd. This proves the result. \opensquare \section{Applications} \label{sec-entropy} \noindent We start this section by proving that positive entropy implies expansiveness among ergodic invariant measures for continuous maps on compact metric spaces. Afterward we include some short applications. Recall that a Borel probability measure $\mu$ in $X$ is {\em ergodic} with respect to a map if every invariant measurable set has measure zero or one. For the notion of entropy $h_\mu(f)$ of an invariant measure $\mu$ see \cite{w}. \begin{thm} \label{thA} Every ergodic invariant measure with positive entropy of a continuous map on a compact metric space is expansive. \end{thm} \noindent{{\bf Proof.}} Consider an ergodic invariant measure $\mu$ with positive entropy $h_\mu(f)>0$ of a continuous map $f$ on a compact metric space $X$. Fix $\delta>0$ and define $$ X_\delta=\{x\in X:\mu(\Phi_\delta(x))=0\}. $$ By Lemma \ref{forall} we are left to prove that there is $\delta>0$ such that $\mu(X_\delta)=1$. Fix $x\in X$. It follows from the definition of $\Phi_\delta(x)$ that $ \Phi_\delta(x)\subset f^{-1}(\Phi_\delta(f(x))) $ so $$ \mu(\Phi_\delta(x))\leq \mu(\Phi_\delta(f(x))) $$ since $\mu$ is invariant. Then, $\mu(\Phi_\delta(x))=0$ whenever $x\in f^{-1}(X_\delta)$, yielding $$ f^{-1}(X_\delta)\subset X_\delta. $$ Denote by $A\Delta B$ the symmetric difference of the sets $A,B$. Since $\mu(f^{-1}(X_\delta))=\mu(X_\delta)$ the above inclusion implies that $X_\delta$ is {\em essentially invariant}, i.e., $\mu(f^{-1}(X_\delta)\Delta X_\delta)=0$. Since $\mu$ is ergodic we conclude that $\mu(X_\delta)\in\{0,1\}$ for all $\delta>0$. Then, we are left to prove that there is $\delta>0$ such that $\mu(X_\delta)>0$. To find it we proceed as follows. For all $\delta>0$ we define the map $\phi_\delta: X\to \mathbb{R}\cup\{\infty\}$, $$ \phi_\delta(x)=\liminf_{n\to\infty}-\frac{\log\mu(B[x,n,\delta])}{n} $$ where $B[x,n,\delta]=\bigcap_{i=0}^{n-1}f^{-i}(B[f^i(x),\delta])$ and $B[\cdot,\cdot]$ stands for the closed ball operation. Define $h=\frac{h_\mu(f)}{2}$ (thus $h>0$) and $$ X^m=\left\{x\in X:\phi_{\frac{1}{m}}(x)>h\right\}, \quad\quad\forall m\in \mathbb{N}^+. $$ Notice that $\phi_\delta(x)\geq \phi_{\delta'}(x)$ whenever $0<\delta<\delta'$. From this it follows that $X^m\subset X^{m'}$ for $m\leq m'$ and further $$ \left\{ x\in X:\sup_{\delta>0}\phi_\delta(x)=h_\mu(f) \right\} \subset\bigcup_{m\in \mathbb{N}^+}X^m. $$ Then, $$ \mu \left( \left\{ x\in X:\sup_{\delta>0}\phi_\delta(x)=h_\mu(f) \right\} \right)\leq \lim_{m\to\infty}\mu(X^m). $$ On the other hand, $\mu$ is nonatomic since it is ergodic invariant with positive entropy. So, the Brin-Katok Theorem \cite{bk} implies $$ \mu \left( \left\{ x\in X:\sup_{\delta>0}\phi_\delta(x)=h_\mu(f) \right\} \right) =1. $$ Then, $$ \lim_{m\to \infty}\mu(X^m)=1. $$ Consequently, we can fix $m\in \mathbb{N}^+$ such that $$ \mu(X^m)>0. $$ We shall prove that $\delta=\frac{1}{m}$ works. Let us take $x\in X^m$. It follows that $\mu(B[x,n,\delta])<e^{-hn}$ for all $n$ large. Since $h>0$ we conclude that $$ \lim_{n\to\infty}\mu(B[x,n,\delta])=0. $$ But it follows from the definition of $\Phi_\delta(x)$ that $$ \Phi_\delta(x)= \bigcap_{n=0}^\infty B[x,n,\delta]. $$ In addition, $B[x,n',\delta]\subset B[x,n,\delta]$ whenever $n\leq n'$, therefore $$ \mu(\Phi_\delta(x))= \lim_{n\to\infty}\mu(B[x,n,\delta])=0. $$ This proves $x\in X_\delta$. As $x\in X^m$ is arbitrary we obtain $X^m\subset X_\delta$, whence $$ 0<\mu(X^m)\leq \mu(X_\delta) $$ and the proof follows. \opensquare The converse of the above theorem is false, i.e., an expansive measure may have zero entropy even in the ergodic invariant case. A counterexample is as follows.
\begin{ex} \label{circle-interval} There are continuous maps in the circle exhibiting ergodic invariant measures with zero entropy which, however, are expansive. \end{ex} \noindent{{\bf Proof.}} Recall that a {\em Denjoy map} is a nontransitive homeomorphism with irrational rotation number of the circle $S^1$ (cf. \cite{hk}). Since all circle homeomorphisms have zero topological entropy it remains to prove that every Denjoy map $h$ exhibits expansive measures. As is well known, $h$ is uniquely ergodic and the support of its unique invariant measure $\mu$ is a {\em minimal set}, i.e., a set which is minimal with respect to the property of being compact invariant. We shall prove that this measure is expansive. Denote by $E$ the support of $\mu$. It is well known that $E$ is a Cantor set. Let $\alpha$ be half of the length of the biggest interval $I$ in the complement $S^1-E$ of $E$ and take $0<\delta<\alpha/2$. Fix $x\in S^1$ and denote by $Int(\cdot)$ the interior operation. We claim that $Int(\Phi_\delta(x))\cap E=\emptyset$. Otherwise, there is some $z\in Int(\Phi_\delta(x))\cap E$. Pick $w\in \partial I$ (thus $w\in E$). Since $E$ is minimal there is a sequence $n_k\to\infty$ such that $h^{-n_k}(w)\to z$. Since the intervals $\{h^{-n}(I):n\in \mathbb{N}\}$ are pairwise disjoint and $S^1$ has finite length, the lengths of the intervals $h^{-n_k}(I)$ tend to $0$ as $k\to\infty$. It turns out that there is some integer $k$ such that $h^{-n_k}(I)\subset \Phi_\delta(x)$. From this and the fact that $h(\Phi_\delta(x))\subset \Phi_\delta(h(x))$ one sees that $I\subset B[h^{n_k}(x),\delta]$, which is clearly absurd because the length of $I$ is greater than $\alpha>2\delta$. This contradiction proves the claim. Since $\Phi_\delta(x)$ is either a closed interval or $\{x\}$, the claim implies that $\Phi_\delta(x)\cap E=\partial\Phi_\delta(x)\cap E$ consists of at most two points.
Since $\mu$ is clearly nonatomic we conclude that $\mu(\Phi_\delta(x))=0$. Since $x\in S^1$ is arbitrary we are done. \opensquare \begin{rk} Altogether the above proof and \cite{m} characterize the Denjoy maps as those circle homeomorphisms exhibiting expansive measures. \end{rk} A first application of Theorem \ref{thA} is as follows. \begin{thm} \label{stable-set-0} The stable classes of a continuous map of a compact metric space have measure zero with respect to any ergodic invariant measure with positive entropy. \end{thm} \noindent{{\bf Proof.}} In fact, since these measures are expansive by Theorem \ref{thA}, we obtain the result from Proposition \ref{asymptotic-pair}. \opensquare We can also use Theorem \ref{thA} to compute the topological entropy of certain continuous maps on compact metric spaces (for the related concepts see \cite{akm} or \cite{w}). As a motivation let us mention the known facts that both {\em transitive} continuous maps with countably many stable classes on compact metric spaces and continuous maps of the {\em interval} or the {\em circle} which are Lyapunov stable on their recurrent sets have zero topological entropy (see Corollary 2.3, p. 263 in \cite{hy}, \cite{fs}, Theorem B in \cite{si} and \cite{z}). Indeed we improve these results in the following way. \begin{thm} \label{maps-zero-entropy} A continuous map of a compact metric space which either has countably many stable classes or is Lyapunov stable on its recurrent set has zero topological entropy. \end{thm} \noindent{{\bf Proof.}} If the topological entropy were not zero, the variational principle \cite{w} would imply the existence of ergodic invariant measures with positive entropy. But by Theorem \ref{thA} these measures are expansive, contradicting Corollary \ref{maps-zero-entropy-0}. \opensquare \begin{ex} An example satisfying the first part of Theorem \ref{maps-zero-entropy} is the classical North-South diffeomorphism on spheres.
In fact, the only stable sets of this diffeomorphism are the stable sets of the poles. The Morse-Smale diffeomorphisms \cite{hk} are basic examples where these hypotheses are fulfilled. \end{ex} Now we use expansive measures to study chaoticity in the sense of Li and Yorke \cite{liy}. Recall that if $\delta\geq 0$ a {\em $\delta$-scrambled set} of $f:X\to X$ is a subset $S\subset X$ satisfying \begin{equation} \label{delta-scrambled} \liminf_{n\to\infty}d(f^n(x),f^n(y))=0\quad \mbox{ and } \quad \limsup_{n\to\infty}d(f^n(x),f^n(y))>\delta \end{equation} for all different points $x,y\in S$. Recalling that a {\em Polish space} is a complete separable metric space we obtain the following result. \begin{thm} \label{cadre->li} A continuous map of a Polish space carrying an uncountable $\delta$-scrambled set for some $\delta>0$ also carries expansive measures. \end{thm} \noindent{{\bf Proof.}} Let $X$ be a Polish space and $f: X\to X$ be a continuous map carrying an uncountable $\delta$-scrambled set for some $\delta>0$. Then, by Theorem 16 in \cite{bhs}, there is a {\em closed} uncountable $\delta$-scrambled set $S$. As $S$ is closed and $X$ is Polish, we have that $S$ is also a Polish space with respect to the induced metric. As $S$ is uncountable, we have from \cite{prv} that there is a nonatomic Borel probability measure $\nu$ in $S$. Let $\mu$ be the Borel probability induced by $\nu$ in $X$, i.e., $\mu(A)=\nu(A\cap S)$ for every Borel set $A\subset X$. We shall prove that this measure is expansive. If $x\in S$ and $y\in \Phi_{\frac{\delta}{2}}(x)\cap S$ we have that $x,y\in S$ and $d(f^n(x),f^n(y))\leq \frac{\delta}{2}$ for all $n\in \mathbb{N}$, therefore $x=y$ by the second inequality in (\ref{delta-scrambled}). We conclude that $\Phi_{\frac{\delta}{2}}(x)\cap S=\{x\}$ for all $x\in S$. As $\nu$ is nonatomic we obtain $\mu(\Phi_{\frac{\delta}{2}}(x))=\nu(\Phi_{\frac{\delta}{2}}(x)\cap S)=\nu(\{x\})=0$ for all $x\in S$.
On the other hand, it is clear that every open set which does not intersect $S$ has $\mu$-measure $0$, so $\mu$ is supported in the closure of $S$. As $S$ is closed we obtain that $\mu$ is supported on $S$. We conclude that $\mu(\Phi_{\frac{\delta}{2}}(x))=0$ for $\mu$-a.e. $x\in X$, so $\mu$ is expansive by Lemma \ref{forall}. \opensquare Now recall that a continuous map is {\em Li-Yorke chaotic} if it has an uncountable $0$-scrambled set. Until the end of this section $M$ will denote either the interval $I=[0,1]$ or the unit circle $S^1$. \begin{clly} Every Li-Yorke chaotic map in $M$ carries expansive measures. \end{clly} \noindent{{\bf Proof.}} The theorem on p. 260 of \cite{d} together with Theorems A and B in \cite{ku} imply that every Li-Yorke chaotic map in $I$ or $S^1$ has an uncountable $\delta$-scrambled set for some $\delta>0$. Then, we obtain the result from Theorem \ref{cadre->li}. \opensquare It follows from Example \ref{circle-interval} that there are continuous maps with zero topological entropy in the circle exhibiting expansive {\em invariant} measures. This leads to the question whether the same result is true on compact intervals. The following consequence of the above corollary gives a partial positive answer to this question. \begin{ex} There are continuous maps with zero topological entropy in the interval carrying expansive measures. \end{ex} Indeed, by \cite{j} there is a continuous map of the interval, with zero topological entropy, exhibiting a $\delta$-scrambled set of positive Lebesgue measure for some $\delta>0$. Since sets with positive Lebesgue measure are uncountable, we obtain an expansive measure from Theorem \ref{cadre->li}. Another interesting example is the one below. \begin{ex} \label{tent} The Lebesgue measure is an ergodic invariant measure with positive entropy of the tent map $f(x)=1-|2x-1|$ in $I$. Therefore, this measure is expansive by Theorem \ref{thA}.
\end{ex} It follows from this example that there are continuous maps in $I$ carrying expansive measures $\mu$ {\em with full support} (i.e. $supp(\mu)=I$). These maps also exist in $S^1$ (e.g. an expanding map). Now, we prove that Li-Yorke chaos and positive topological entropy are equivalent properties among these maps in $I$. But previously we need a result based on the following well-known definition. A {\em wandering interval} of a map $f: M\to M$ is an interval $J\subset M$ such that $f^n(J)\cap f^m(J)=\emptyset$ for all different integers $n,m\in \mathbb{N}$ and no point in $J$ belongs to the stable set of some periodic point. \begin{lemma} \label{non-I} If $f: M\to M$ is continuous, then every wandering interval has measure zero with respect to every expansive measure. \end{lemma} \noindent{{\bf Proof.}} Let $J$ be a wandering interval and $\mu$ be an expansive measure with expansivity constant $\epsilon$ (cf. Definition \ref{sensitivity-constant}). To prove $\mu(J)=0$ it suffices to prove $Int(J)\cap supp(\mu)=\emptyset$ since $\mu$ is nonatomic. As $J$ is a wandering interval one has $\lim_{n\to\infty}|f^n(J)|=0$, where $|\cdot |$ denotes the length operation. From this there is a positive integer $n_0$ satisfying \begin{equation} \label{mao} |f^n(J)|<\epsilon, \quad\quad\forall n\geq n_0. \end{equation} Now, take $x\in Int(J)$. Since $f$ is clearly uniformly continuous and $n_0$ is fixed, we can select $\delta>0$ such that $B[x,\delta]\subset Int(J)$ and $|f^n(B[x,\delta])|<\epsilon$ for $0\leq n\leq n_0$. This together with (\ref{mao}) implies $|f^n(x)-f^n(y)|<\epsilon$ for all $n\in\mathbb{N}$ and all $y\in B[x,\delta]$, therefore $B[x,\delta]\subset \Phi_\epsilon(x)$, so $\mu(B[x,\delta])=0$ since $\epsilon$ is an expansivity constant. Thus $x\not\in supp(\mu)$ and we are done. \opensquare From this we obtain the following corollary. \begin{clly} A continuous map with expansive measures of the circle or the interval has no wandering intervals.
Consequently, a continuous map of the interval carrying expansive measures with full support is Li-Yorke chaotic if and only if it has positive topological entropy. \end{clly} \noindent{{\bf Proof.}} The first part is a direct consequence of Lemma \ref{non-I} while the second follows from the first, since a continuous interval map without wandering intervals is Li-Yorke chaotic if and only if it has positive topological entropy \cite{smi}. \opensquare \section*{References} \end{document}
\begin{document} \title{Repulsive Mixtures} \author[1]{\rm Francesca Petralia} \author[2]{\rm Vinayak Rao} \author[1]{\rm David B. Dunson} \affil[1]{Department of Statistical Science, Box 90251, Duke University, Durham, North Carolina 27708, U.S.A. } \affil[2]{Gatsby Computational Neuroscience Unit, University College London, London WC1N3AR, United Kingdom } \date{} \maketitle \begin{abstract} Discrete mixture models are routinely used for density estimation and clustering. While conducting inferences on the cluster-specific parameters, current frequentist and Bayesian methods often encounter problems when clusters are placed too close together to be scientifically meaningful. Current Bayesian practice generates component-specific parameters independently from a common prior, which tends to favor similar components and often leads to substantial probability assigned to redundant components that are not needed to fit the data. As an alternative, we propose to generate components from a repulsive process, which leads to fewer, better separated and more interpretable clusters. We characterize this repulsive prior theoretically and propose a Markov chain Monte Carlo sampling algorithm for posterior computation. The methods are illustrated using simulated data as well as real datasets. \end{abstract} Key Words: Bayesian nonparametrics; Dirichlet process; Gaussian mixture model; Model-based clustering; Repulsive point process; Well separated mixture \section{ Introduction} Finite mixture models characterize the density of $y \in \mathcal{Y} \subset \Re^m$ as \begin{equation} f(y) = \sum_{h=1}^k p_h \phi( y; \gamma_h), \label{eq:mix} \end{equation} where $p = (p_1,\ldots,p_k)^T$ is a vector of probabilities summing to one, and $\phi(\cdot; \gamma)$ is a kernel depending on parameters $\gamma \in \Gamma$, which may consist of location and scale parameters. 
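As a quick numerical sanity check (our own illustration, not part of the paper), the finite mixture density in (\ref{eq:mix}) can be evaluated directly for a univariate Gaussian kernel; the weights, locations and scales below are arbitrary:

```python
import math

def normal_pdf(y, mu, sigma):
    """Gaussian kernel phi(y; mu, sigma)."""
    z = (y - mu) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2.0 * math.pi))

def mixture_pdf(y, p, mu, sigma):
    """Finite mixture f(y) = sum_h p_h * phi(y; mu_h, sigma_h)."""
    return sum(ph * normal_pdf(y, mh, sh) for ph, mh, sh in zip(p, mu, sigma))

# Two well separated components (arbitrary illustrative values).
p, mu, sigma = [0.3, 0.7], [-2.0, 2.0], [0.5, 1.0]

# Riemann-sum check that the mixture density integrates to (approximately) one.
n = 200000
grid = [-10.0 + 20.0 * i / n for i in range(n + 1)]
mass = sum(mixture_pdf(y, p, mu, sigma) for y in grid) * (20.0 / n)
```

The check that `mass` is close to one is only a crude Riemann sum; it is meant to confirm that the weights $p_h$ sum to one and each kernel is a proper density.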
There is a rich literature on inference for finite mixture models from both a frequentist (\cite{unsuplearning}; \cite{finiteEM}) and Bayesian \citep{reversible} perspective. \indent In analyses of finite mixture models, a common concern is over-fitting in which {\em redundant} mixture components having similar locations and scales are introduced. Over-fitting can have an adverse impact on density estimation, since this leads to an unnecessarily complex model. Another common goal of finite mixture modeling is clustering \citep{raftery2}, and having components with similar locations leads to overlapping kernels and lack of interpretability. Introducing kernels with similar locations but different scales may be necessary to fit heavy-tailed and skewed densities, and hence low separation in clustering and over-fitting are distinct problems. This article develops a repulsive mixture modeling approach which can be applied to both these problems. \indent Recently, \cite{Rousseau} studied the asymptotic behavior of the posterior distribution in over-fitted Bayesian mixture models having more components than needed. They showed that a carefully chosen prior will lead to asymptotic emptying of the redundant components. However, several challenging practical issues arise. For their prior and in standard Bayesian practice, one assumes that $\gamma_h \sim P_0$ independently {\em a priori}. For example, if we consider a finite location-scale mixture of multivariate Gaussians, one may choose $P_0$ to be multivariate Gaussian-inverse Wishart. However, the behavior of the posterior can be sensitive to $P_0$ for finite samples, with higher variance $P_0$ favoring allocation to fewer clusters. In addition, drawing the component-specific parameters from a common prior tends to favor components located close together unless the variance is high.
\indent Regardless of the specific $P_0$ chosen, for small to moderate sample sizes, the weight assigned to redundant components is often substantial. This can be attributed to identifiability problems that arise from a difficulty in distinguishing between models that partition each of a small number of well separated components into a number of essentially identical components. This issue leads to substantial uncertainty in clustering and estimation of the number of components, and is not specific to over-fitted mixture models; similar behavior occurs in placing a prior on $k$ or using a nonparametric Bayes approach such as the Dirichlet process. \indent The problem of separating components has been studied for Gaussian mixture models (\cite{Dasgupta}; \cite{DasguptaSchulman}). Two Gaussians can be separated by placing an arbitrarily chosen lower bound on the distance between their means. Separated Gaussians have been mainly utilized to speed up convergence of the Expectation-Maximization (EM) algorithm. In choosing a minimal separation level, it is not clear how to obtain a good compromise between values that are too low to solve the problem and ones that are so large that one obtains a poor fit. As an alternative, we propose a repulsive prior that discourages closeness among component-specific parameters without a hard constraint. \indent In contrast to the vast majority of the recent Bayesian literature on discrete mixture models, instead of drawing the component-specific parameters $\{ \gamma_h \}$ independently from a common prior $P_0$, we propose a joint prior for $\{ \gamma_1,\ldots,\gamma_k \}$ that is chosen to assign low density to $\gamma_h$'s located close together. We consider two types of repulsive priors, (i) priors guarding against over-fitting by penalizing redundant kernels having close to identical locations and scales and case (ii) priors discouraging closeness in only the locations to favor well separated clusters. 
\section{ Bayesian Repulsive Mixture Models} \subsection{Background on Bayesian mixture modeling} Considering the finite mixture model in expression (\ref{eq:mix}), a Bayesian specification is completed by choosing priors for the number of components $k$, the probability weights $p$, and the component-specific parameters $\gamma = (\gamma_1, \ldots, \gamma_k)^T$. Typically, $k$ is assigned a Poisson or multinomial prior, $p$ a $Dirichlet(\alpha)$ prior with $\alpha=(\alpha_1, \ldots, \alpha_k)^T$, and $\gamma_h \sim P_0$ independently, with $P_0$ often chosen to be conjugate to the kernel $\phi$. Posterior computation can proceed via a reversible jump Markov chain Monte Carlo algorithm involving moves for adding or deleting mixture components. Unfortunately, in making a $k \to k+1$ change in model dimension, efficient moves critically depend on the choice of proposal density. \indent It has become popular to use over-fitted mixture models in which $k$ is chosen as a conservative upper bound on the number of components under the expectation that only relatively few of the components will be occupied by subjects in the sample. As motivated in \cite{IshwaranZarepour}, simply letting $\alpha_h = c/k$ for $h=1,\ldots, k$ and a constant $c>0$ leads to an approximation to a Dirichlet process mixture model for the density of $y$, which is obtained in the limit as $k$ approaches infinity. An alternative finite approximation to a Dirichlet process mixture is obtained by truncating the stick-breaking representation of Sethuraman (1994), leading to a similarly simple Gibbs sampling algorithm \citep{IshwaranJames}. These approaches are now used routinely in practice. \subsection{ Repulsive densities} We seek a prior on the component parameters in (\ref{eq:mix}) that automatically favors spread out components near the support of the data. 
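The truncated stick-breaking approximation to the Dirichlet process recalled in the background above can be sketched in a few lines. This is our own illustrative code (the concentration $c$ and truncation level $k$ are arbitrary), not the authors' implementation:

```python
import random

def stick_breaking_weights(c, k, rng):
    """Truncated stick-breaking: p_h = v_h * prod_{l<h}(1 - v_l),
    with v_h ~ Beta(1, c) and v_k = 1 so the k weights sum to one."""
    weights, remaining = [], 1.0
    for h in range(k):
        v = 1.0 if h == k - 1 else rng.betavariate(1.0, c)
        weights.append(v * remaining)
        remaining *= 1.0 - v
    return weights

rng = random.Random(0)
p = stick_breaking_weights(c=1.0, k=25, rng=rng)
```

Setting the last stick fraction to one is the usual truncation device: the telescoping product guarantees the weights sum to one exactly, whatever the earlier Beta draws.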
Instead of generating the atoms $\gamma_h$ independently from $P_0$, one could generate them from a repulsive process that automatically pushes the atoms apart. This idea is conceptually related to the literature on repulsive point processes \citep{huber}. In the spatial statistics literature, a variety of repulsive processes have been proposed. One such model assumes that points are clustered spatially, with the vector of cluster centers $\gamma$ having a Strauss density \citep{StraussForClustering}, that is $p(k,\gamma) \propto \beta^k \rho^{r(\gamma)}$ where $k$ is the number of clusters, $\beta>0$, $0 < \rho \leq 1$ and $r(\gamma)$ is the number of pairs of centers that lie within a pre-specified distance $r$ of each other. A possibly unappealing feature is that repulsion is not directly dependent on the pairwise distances between the clusters. We propose an alternative class of priors, which smoothly push apart components based on their pairwise distances. \begin{definition}\label{def1} A density $h(\gamma)$ is repulsive if for any $\delta>0$ there is a corresponding $\epsilon>0$ such that $h(\gamma)<\delta$ for all $\gamma \in \Gamma \setminus G_{\epsilon}$, where $G_{\epsilon} = \{ \gamma : d(\gamma_s,\gamma_j)>\epsilon; s=1, \ldots, k; j<s \}$ and $d$ is a distance. \end{definition} \indent We consider two special cases (i) $d(\gamma_s,\gamma_j)$ is the distance between the $s$th and $j$th kernel, (ii) $d(\gamma_s,\gamma_j)$ is the distance between sub-vectors of $\gamma_s$ and $\gamma_j$ corresponding to only locations. Priors following definition \ref{def1}(i) limit over-fitting in density estimation, while priors following definition \ref{def1}(ii) favor well-separated clusters.
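For concreteness, the pair-count statistic $r(\gamma)$ appearing in the Strauss density above can be computed as follows (a minimal sketch of ours, with scalar cluster centers assumed for illustration):

```python
def strauss_pair_count(centers, r):
    """Number of unordered pairs of cluster centers within distance r,
    i.e. the exponent r(gamma) in the Strauss density beta^k * rho^r(gamma)."""
    count = 0
    for s in range(len(centers)):
        for j in range(s):
            if abs(centers[s] - centers[j]) <= r:
                count += 1
    return count
```

The count jumps discretely as centers cross the threshold $r$, which is exactly the feature noted above: the Strauss repulsion does not vary smoothly with the pairwise distances.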
\indent As a convenient class of repulsive priors which smoothly push components apart, we propose \begin{equation}\pi(\gamma) = c_1\left( \prod_{j=1}^k g_0(\gamma_j)\right) h(\gamma), \label{joint1} \end{equation} with $c_1$ a normalizing constant that can be intractable to calculate. The dependence of $c_1$ on $k$ leads to complications in estimating $k$ that motivate the use of an over-specified mixture treating $k$ as an upper bound on the number of components. The proposed prior is closely related to a class of point processes from the statistical physics and spatial statistics literature called Gibbs processes \citep{DalVer2008a}. We assume $g_0: \Gamma \to \Re_+$ and $h: \Gamma^k \to [0,\infty)$ are continuous with respect to Lebesgue measure, and that $h$ is bounded above by a positive constant $c_2$ and is repulsive according to definition \ref{def1}, with $d$ differing across cases. It follows that the density $\pi$ defined in (\ref{joint1}) is also repulsive. For location-scale kernels, let $\gamma_j=(\mu_j,\Sigma_j)$ and $g_0(\mu_j,\Sigma_j)=\xi(\mu_j) \psi(\Sigma_j)$, with $\mu_j$ and $\Sigma_j$ respectively the location and scale parameters. A hardcore repulsion is produced if the repulsion function is zero whenever at least one pairwise distance is smaller than a pre-specified threshold. Such a density implies choosing a minimal separation level between the atoms. \indent We avoid hard separation thresholds by considering repulsive priors that smoothly push components apart.
In particular, we propose two repulsion functions defined as\\ \begin{minipage}{0.5\linewidth} \begin{equation}h(\gamma)=\prod_{ \{(s, j) \in A\} } g\{d(\gamma_s, \gamma_j)\} \label{repul2}\end{equation} \end{minipage} \begin{minipage}{0.5\linewidth} \begin{equation}h(\gamma)= \min_{ \{(s, j) \in A\} } g\{d(\gamma_s, \gamma_j)\} \label{repul}\end{equation}\end{minipage} with $A = \{ (s,j): s=1,\ldots, k; j < s \}$ and $g:\Re_+ \to [0,M]$ a strictly monotone differentiable function with $g(0)=0$, $g(x)>0$ for all $x>0$ and $M<\infty$. It is straightforward to show that $h$ in (\ref{repul2}) and (\ref{repul}) is integrable and satisfies definition \ref{def1}. The two alternative repulsion functions differ in their dependence on the relative distances between components, with all the pairwise distances playing a role in (\ref{repul2}), while (\ref{repul}) only depends on the minimal separation. A flexible choice of $g$ corresponds to \begin{eqnarray} g\{ d(\gamma_s,\gamma_j) \} = \exp\big[ - \tau \{d(\gamma_s,\gamma_j)\}^{-\nu} \big], \label{gfunction} \end{eqnarray} where $\tau>0$ is a scale parameter and $\nu$ is a positive integer controlling the rate at which $g$ approaches zero as $d(\gamma_s,\gamma_j)$ decreases. Figure \ref{prior_nutau} shows contour plots of the prior $\pi(\gamma_1,\gamma_2)$ defined as (\ref{joint1}) and satisfying definition \ref{def1}(ii) with $\gamma_1, \gamma_2 \in \mathbb{R}$, $d$ the Euclidean distance, $g_0$ the standard normal density, the repulsive function defined as (\ref{repul2}) or (\ref{repul}) and $g$ defined as (\ref{gfunction}) for different values of $(\tau,\nu)$. As $\tau$ and $\nu$ increase, the prior increasingly favors well separated components. 
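A minimal Python sketch of the two repulsion functions (\ref{repul2}) and (\ref{repul}), with $g$ as in (\ref{gfunction}) and Euclidean distance between hypothetical location atoms; the values of $\tau$ and $\nu$ below are illustrative defaults, not the calibrated choices discussed later.

```python
import itertools
import math

def g(d, tau=1.0, nu=2):
    # g{d} = exp(-tau * d^(-nu)): strictly increasing in d, with g(0) = 0.
    if d == 0.0:
        return 0.0
    return math.exp(-tau * d ** (-nu))

def h_product(atoms, tau=1.0, nu=2):
    # Repulsion (2): product of g over all pairwise distances.
    out = 1.0
    for x, y in itertools.combinations(atoms, 2):
        out *= g(math.dist(x, y), tau, nu)
    return out

def h_min(atoms, tau=1.0, nu=2):
    # Repulsion (3): since g is increasing, the minimum of g over pairs
    # is g evaluated at the minimal pairwise distance.
    dmin = min(math.dist(x, y) for x, y in itertools.combinations(atoms, 2))
    return g(dmin, tau, nu)
```

Both functions vanish as any two atoms approach each other; the product form also rewards spreading out the remaining pairs, while the min form only reacts to the single worst-separated pair.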
\begin{figure} \caption{Contour plots of the repulsive prior $\pi(\gamma_1,\gamma_2)$ satisfying definition \ref{def1}} \label{prior_nutau} \end{figure} \subsection{Theoretical properties}\label{theory} In this section, theoretical properties of the proposed prior are considered under definition \ref{def1}(ii) for simplicity, though all results can be modified to accommodate definition \ref{def1}(i). For some results, the kernel will be assumed to depend only on location parameters, while for others on both location and scale parameters. Let $\Pi$ be the prior induced on $\cup_{k=1}^{\infty} \mathcal{F}_k$, where $\mathcal{F}_k$ is the space of all distributions defined as (\ref{eq:mix}). Let $\|\cdot\|_1$ denote the $L_1$ norm and $KL(f_0,f) = \int f_0 \log(f_0/f)$ the Kullback-Leibler (K-L) divergence between $f_0$ and $f$. A density $f_0$ belongs to the K-L support of the prior $\Pi$ if $\Pi\{ f: KL(f_0,f)< \epsilon \}>0$ for all $\epsilon>0$. Let the true density $f_0:\Re^m \to \Re_+$ be defined as $f_0=\sum_{h=1}^{k_0} p_{0h} \phi(\gamma_{0h})$ with $\gamma_{0h}\in \Gamma$ and the $\gamma_{0j}$ satisfying $\min_{\{(s,j): s<j\}} d(\gamma_{0s},\gamma_{0j})\geq \epsilon_1$ for some $\epsilon_1>0$, $d$ being the Euclidean distance between the sub-vectors of $\gamma_{0j}$ and $\gamma_{0s}$ corresponding to locations only. Let $f=\sum_{h=1}^{k}p_h \phi(\gamma_h)$ with $\gamma_h \in \Gamma$. Let $\gamma \sim \pi$ and $\pi$ satisfy definition \ref{def1}(ii). Let $p \sim \lambda$ with $\lambda=Dirichlet(\alpha)$ and $k \sim \vartheta$ with $\vartheta(k=k_0)>0$. Let $\theta=(p,\gamma)$. These assumptions on $f_0$ and $f$ will be referred to as condition B0. The next lemma provides sufficient conditions under which the true density is in the K-L support of the prior for location kernels. \begin{lemma}\label{kl} Assume condition B0 is satisfied with $m=1$. Let $D_0$ be a compact set containing the location parameters $(\gamma_{01}, \ldots, \gamma_{0k_0})$.
Let $\phi$ and $\pi$ satisfy the following conditions: \indent A1. for any $y \in \mathcal{Y}$, the map $\gamma \to \phi(y; \gamma)$ is uniformly continuous \indent A2. for any $y \in \mathcal{Y}$, $\phi(y; \gamma)$ is bounded above by a constant \indent A3. $\int f_0 \left|\log \left\lbrace \sup_{\gamma \in D_{0}} \phi(\gamma)\right\rbrace -\log\left\lbrace\inf_{\gamma \in D_{0}} \phi(\gamma)\right\rbrace\right| < \infty$ \indent A4. $\pi$ is continuous with respect to Lebesgue measure and for any vector $x \in \Gamma^k$ \indent \indent with $\min_{\{(s,j): s<j\}} d(x_{s},x_{j})\geq \upsilon$ for $\upsilon>0$ there is a $\delta>0$ such that $\pi(\gamma)>0$ for all $\gamma$ \indent \indent satisfying $|| \gamma-x ||_1 < \delta$. Then $f_0$ is in the K-L support of the prior $\Pi$. \end{lemma} \begin{lemma}\label{A4} The repulsive density in (\ref{joint1}) with $h$ defined as either (\ref{repul2}) or (\ref{repul}) satisfies condition A4 in lemma \ref{kl}. \end{lemma} The next lemma formalizes the posterior rate of concentration for univariate location mixtures of Gaussians. \begin{lemma}\label{contraction:rate} Let condition B0 be satisfied, let $m=1$ and let $\phi$ be the normal kernel depending on a location parameter $\mu$ and a scale parameter $\sigma$. Assume that conditions $(i)$, $(ii)$ and $(iii)$ of theorem 3.1 in \cite{Scricciolo2011} and assumption A4 in lemma \ref{kl} are satisfied.
Furthermore, assume that \indent C1) the joint density $\pi$ leads to exchangeable random variables and for all $k$ the marginal \indent density of $\mu_1$ satisfies $\pi_m(|\mu_1|\geq t) \lesssim \exp\left(-q_1 t^2\right)$ for a given $q_1>0$ \indent C2) there are constants $u_1, u_2, u_3 >0$, possibly depending on $f_0$, such that for any $\epsilon \leq u_3$ \[\pi(||\mu-\mu_0 ||_1\leq \epsilon) \geq u_1 \exp(-u_2 k_0 \log(1/\epsilon))\] Then the posterior rate of convergence relative to the $L_1$ metric is $\epsilon_n= n^{-1/2} \log n$.\end{lemma} Lemma \ref{contraction:rate} is essentially a modification of theorem 3.1 in \cite{Scricciolo2011} to our proposed repulsive mixture model. Lemma \ref{lemma_condition} gives sufficient conditions for $\pi$ to satisfy conditions C1 and C2 in lemma \ref{contraction:rate}. \begin{lemma} \label{lemma_condition} Let $\pi$ be defined as (\ref{joint1}) and $h$ be defined as either (\ref{repul2}) or (\ref{repul}); then $\pi$ satisfies condition C2 in lemma \ref{contraction:rate}. Furthermore, if for a positive constant $n_1$ the function $\xi$ satisfies $\xi(|x|\geq t)\lesssim \exp(-n_1 t^2)$, then $\pi$ satisfies condition C1 in lemma \ref{contraction:rate}. \end{lemma} As motivated above, when the number of mixture components is chosen to be conservatively large, it is appealing for the posterior distribution of the weights of the extra components to concentrate near zero. Theorem \ref{weights:theorem} formalizes the rate of this concentration with increasing sample size $n$. One of the main assumptions required in theorem \ref{weights:theorem} is that the posterior rate of convergence relative to the $L_1$ metric is $\delta_n=n^{-1/2}(\log n)^q$ with $q\geq 0$. We provided this contraction rate, under the proposed prior specification and a univariate Gaussian kernel, in lemma \ref{contraction:rate}. However, theorem \ref{weights:theorem} is a more general statement and applies to multivariate mixture densities with any kernel.
\begin{theorem}\label{weights:theorem} Let assumptions $B0-B5$ be satisfied. Let $\pi$ be defined as (\ref{joint1}) and $h$ be defined as either (\ref{repul2}) or (\ref{repul}). If $\bar{\alpha}=\max(\alpha_1, \ldots, \alpha_k)<m/2$ and for positive constants $r_1, r_2, r_3$ the function $g$ satisfies $g(x) \leq r_1 x^{r_2}$ for $0\leq x<r_3$, then \[\lim_{M\to\infty}\limsup_{n \to \infty} E^0_n \left[ P \left\lbrace \min_{\{\iota \in S_k\}} \left( \sum_{i=k_0+1}^{k} p_{\iota(i)}\right)> M n^{-1/2} (\log n)^{q(1+s(k_0,\alpha)/s_{r_2})} \right\rbrace\right] = 0\] with $s(k_0,\alpha)=k_0-1+m k_0+\bar{\alpha}(k-k_0)$, $s_{r_2}=r_2+m/2-\bar{\alpha}$ and $S_k$ the set of all possible permutations of $\{1, \ldots, k\}$. \end{theorem} Theorem \ref{weights:theorem} is a modification of theorem 1 in \cite{Rousseau} to our proposed repulsive mixture model. Theorem \ref{weights:theorem} implies that the posterior expectation of the weights of the extra components is of order $O(n^{-1/2} (\log n)^{q(1+s(k_0,\alpha)/s_{r_2})})$. When $g$ is defined as (\ref{gfunction}), the parameters $r_1$ and $r_2$ can be chosen such that $r_1=\tau$ and $r_2=\nu$. \indent When the number of components is unknown, with only an upper bound known, the posterior rate of convergence is equivalent to the parametric rate $n^{-1/2}$ \citep{ratepost}. In this case, the rate in theorem \ref{weights:theorem} is $n^{-1/2}$ under either usual priors or our repulsive prior. However, in our experience with usual priors, the sum of the extra component weights can be substantial in small to moderate sample sizes, and often has high variability. As we show in Section \ref{simsection}, for repulsive priors the sum of the extra component weights is close to zero and has small variance for small as well as large sample sizes. When an upper bound on the number of components is unknown, the posterior rate of concentration is $n^{-1/2} (\log n)^q$ with $q>0$.
In this case, theorem \ref{weights:theorem} shows that our prior specification improves the logarithmic factor in theorem 1 of \cite{Rousseau}. \section{Parameter Calibration and Posterior Computation} An important issue in implementing repulsive mixture models is elicitation of the repulsion hyper-parameters $(\tau,\nu)$. Although a variety of strategies can be considered, we propose a simple approach that can be used to obtain a default hyper-parameter choice in general applications. In case (i) we choose $d(\cdot,\cdot)$ as the symmetric Kullback-Leibler divergence, defined for Gaussian kernels as \[ s_{12} = d(\gamma_1,\gamma_2)=tr(\Sigma_1\Sigma_2^{-1})+tr(\Sigma_1^{-1} \Sigma_2)-2m+(\mu_1-\mu_2)^T(\Sigma_1^{-1}+\Sigma_2^{-1})(\mu_1-\mu_2), \] while in case (ii) we use the Euclidean distance between the location parameters. For both cases, define $\bar{d}$ as the mean of the pairwise distances between atoms, $\bar{d}=\frac{1}{n(A)}\sum_{(s,j)\in A} d(\gamma_s,\gamma_j)$ with $A=\{ (s,j): s=1, \ldots, k; j<s \}$ and $n(A)$ the cardinality of the set $A$. Let $f_1$ and $f_2$ denote the densities of $\bar{d}$ under repulsive and non-repulsive priors respectively, with $(\varrho_j,\ \varsigma_j)$ the mean and standard deviation of $f_j$ for $j=1,2$. We choose $(\tau,\nu)$ so that $f_1$ and $f_2$ are well separated, using the following definition of separation \citep{Dasgupta}. \begin{definition}\label{def2} Given a positive constant $c$, $f_1$ and $f_2$ are $c$-separated if $\varrho_1-\varrho_2 \geq c \max(\varsigma_1,\varsigma_2)$. \end{definition} We have found that $\nu=2$ and $\nu=1$ provide good default values in cases (i) and (ii) respectively, and we fix $\nu$ at these values in all our applications below.
For a given value of $\nu$, $\tau$ is found by starting with small values, estimating the mean and variance of $\bar{d}$ through Monte Carlo draws, and incrementing $\tau$ until definition \ref{def2} is satisfied for a pre-specified $c$. We use $c=4$ in our implementations. \indent For posterior computation, we use a slice sampling algorithm \citep{neal}, a class of Markov chain Monte Carlo algorithms widely used for posterior inference in infinite mixture models \citep{slice}. Letting $g_0$ be a conjugate prior, introduce a latent variable $u$ which is jointly modeled with $\gamma$ through \begin{equation*}\pi(\gamma_1, \ldots, \gamma_k, u) \propto \left(\prod_{h=1}^k g_0(\gamma_h ) \right) 1\left\{ h(\gamma_1, \ldots, \gamma_k)>u\right\}. \end{equation*} Here $1(B)$ is the indicator function, equalling $1$ if the event $B$ occurs and $0$ otherwise. Marginalizing out $u$, we recover the original density $\pi(\gamma_1, \ldots, \gamma_k)$. For a repulsion function defined as (\ref{repul}), let $B_j \equiv \bigcap_{\{s: s\not= j\}}\left[ \gamma_j: g\{d(\gamma_s,\gamma_j)\}>u\right]$. As long as $g$ is invertible in its argument, the set $B_j$ can be calculated, making sampling straightforward. When the repulsion function is defined as (\ref{repul2}), one can introduce a latent variable for each product term. Under repulsive priors satisfying definition \ref{def1}(i), the set $B_j$ might not be easy to compute. However, when covariance matrices are constrained to be diagonal, the vectors $\gamma_j$ can easily be sampled element-wise. For multivariate observations, the location parameter vector can be sampled element-wise from truncated distributions. Details can be found in the supplementary materials. \section{Synthetic Examples} \label{simsection} Simulation examples were considered to assess the performance of the repulsive prior in density estimation, clustering and emptying of extra components.
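For the repulsion function (\ref{repul}) with $g$ as in (\ref{gfunction}), the slice constraint $g\{d(\gamma_s,\gamma_j)\}>u$ inverts in closed form, $d > \{\tau/(-\log u)\}^{1/\nu}$, so $B_j$ is the complement of balls around the other atoms. The Python sketch below shows one such move for a univariate location, with a standard normal stand-in for $g_0$ and naive rejection sampling; these choices are illustrative assumptions, not the samplers used in the paper.

```python
import math
import random

def g_inverse(u, tau=1.0, nu=2):
    # Invert g(d) = exp(-tau * d^(-nu)): g(d) > u  iff  d > (tau / -log u)^(1/nu).
    return (tau / (-math.log(u))) ** (1.0 / nu)

def slice_update_mu(j, mu, u, tau=1.0, nu=2, max_tries=10_000):
    # One slice move for location mu_j: propose from a N(0,1) base prior
    # (a stand-in for g_0) until the proposal lies in B_j, i.e. is farther
    # than r(u) from every other location.
    r = g_inverse(u, tau, nu)
    for _ in range(max_tries):
        prop = random.gauss(0.0, 1.0)
        if all(abs(prop - mu[s]) > r for s in range(len(mu)) if s != j):
            return prop
    raise RuntimeError("slice region B_j too small for naive rejection")

random.seed(1)
mu = [-1.0, 0.0, 1.0]
u = random.uniform(0.0, math.exp(-1.0))  # u below the current value of h under (3)
mu[1] = slice_update_mu(1, mu, u)
```

In practice one would sample from $g_0$ truncated to $B_j$ directly, since $B_j$ is a union of intervals in this univariate case; rejection from the untruncated $g_0$ is shown here only to keep the sketch short.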
Figure \ref{figure:datasets} plots the true densities in the various cases that we considered. For each synthetic dataset, repulsive and non-repulsive mixture models were compared considering a fixed upper bound on the number of components; extra components should be assigned small probabilities and hence effectively excluded. The slice sampler was run for $10,000$ iterations with a burn-in of $5,000$. The chain was thinned by keeping every 10th draw. To overcome the label switching problem, the samples were post-processed following the algorithm of \cite{Stephens}. Details on the parameters involved in the true densities, the choice of prior distributions and the methods used to compute the quantities presented in this section can be found in the supplement. \begin{figure} \caption{$(I)$ Standard normal density (solid), two-component mixture of normals sharing the same location parameter (dash) and Student's t density (dash-dot), referred to as $(I a, I b, I c)$; $(II)$ two-component mixtures of poorly (solid) and well separated (dot-dash) Gaussian densities, referred to as $(II a, II b)$; $(III)$ mixtures of poorly (dot-dash) and well separated (solid) Gaussian and Pearson densities, referred to as $(III a, III b)$; $(IV)$ two-component mixture of two-dimensional non-spherical Gaussians} \label{figure:datasets} \end{figure} Repulsive mixtures satisfying definition \ref{def1}(i) and non-repulsive mixtures were compared. For this experiment, $1,000$ draws from a standard normal density and from a two-component mixture of overlapping normals were considered. Both repulsive and non-repulsive mixtures were run with six as the upper bound on the number of components. Table \ref{table:par} shows posterior summaries of the parameters involved in the components with highest weights. Clearly, repulsive mixtures lead to a more parsimonious representation of the true densities and more accurate parameter estimates.
The mean and standard deviation of the K-L divergence under the first data example were $(0$$\cdot$$003,0$$\cdot$$002)$ and $(0$$\cdot$$004,0$$\cdot$$002)$ for non-repulsive and repulsive mixtures respectively, while under the second data example they were $(0$$\cdot$$006,0$$\cdot$$003)$ and $(0$$\cdot$$009,0$$\cdot$$003)$. Therefore, repulsive mixtures were able to concentrate more on the reduced model while performing similarly to non-repulsive mixtures in estimating the true density. Repulsive mixtures satisfying definition \ref{def1}(ii) and non-repulsive mixtures were compared to assess clustering performance. Table \ref{table:mis} shows summary statistics of the K-L divergence, the misclassification error and the sum of the extra weights under repulsive and non-repulsive mixtures with six mixture components as the upper bound. Table \ref{table:mis} also shows the misclassification error resulting from hierarchical clustering \citep{hierarchical}. In practice, observations drawn from the same mixture component were considered as belonging to the same category and for each dataset a similarity matrix was constructed. The misclassification error was computed as a divergence between the true similarity matrix and the posterior similarity matrix. As shown in table \ref{table:mis}, the K-L divergences under repulsive and non-repulsive mixtures become more similar as the sample size increases. For smaller sample sizes, the results are more similar when components are very well separated. Since a repulsive prior tends to discourage overlapping mixture components, a repulsive model might not estimate the density quite as accurately when a mixture of closely overlapping components is needed. However, as the sample size increases, the fitted density approaches the true density regardless of the degree of closeness among clusters.
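The similarity-matrix comparison described above can be sketched as follows; the mean absolute difference between matrices is used here as one plausible divergence, and the allocation draws are hypothetical.

```python
def similarity_matrix(labels):
    # S[i][j] = 1 when observations i and j share a cluster label.
    n = len(labels)
    return [[1.0 if labels[i] == labels[j] else 0.0 for j in range(n)]
            for i in range(n)]

def posterior_similarity(draws):
    # Average the co-clustering indicator over posterior allocation draws.
    n = len(draws[0])
    S = [[0.0] * n for _ in range(n)]
    for z in draws:
        for i in range(n):
            for j in range(n):
                if z[i] == z[j]:
                    S[i][j] += 1.0 / len(draws)
    return S

def misclassification(true_labels, draws):
    # Mean absolute divergence between true and posterior similarity matrices.
    S0 = similarity_matrix(true_labels)
    S = posterior_similarity(draws)
    n = len(true_labels)
    return sum(abs(S0[i][j] - S[i][j]) for i in range(n) for j in range(n)) / n ** 2
```

Recovering the true partition in every draw gives error zero, while merging two true clusters in every draw inflates the off-block entries of the posterior similarity matrix.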
Again, though repulsive and non-repulsive mixtures perform similarly in estimating the true density, repulsive mixtures place considerably less probability on extra components leading to more interpretable clusters. In terms of misclassification error, the repulsive model outperforms the other two approaches while, in most cases, the worst performance was obtained by the non-repulsive model. Potentially, one may favor fewer clusters, and hence possibly better separated clusters, by penalizing the introduction of new clusters more through modifying the precision in the Dirichlet prior for the weights; in the supplemental materials, we demonstrate that this cannot solve the problem. \begin{table}[h!] \centering \caption{Posterior mean and standard deviation of weights, location and scale parameters under dataset drawn from densities $(Ia, Ib)$}{ \scalebox{0.9}{ \begin{tabular}{lccccccccccccccccccc} &\multicolumn{3}{c}{Density Ia}&\multicolumn{6}{c}{Density Ib} \\ &\multicolumn{3}{c}{Comp 1}&\multicolumn{3}{c}{Comp 1}&\multicolumn{3}{c}{Comp 2} \\ &$\hat{p}_1$&$\hat{\mu}_1$&$\hat{\sigma}_1$ &$\hat{p}_1$&$\hat{\mu}_1$&$\hat{\sigma}_1$&$\hat{p}_2$&$\hat{\mu}_2$&$\hat{\sigma}_2$\\ \begin{footnotesize} True \end{footnotesize} &$1$&$0$&$1$&$0$$\cdot$$7$&$0$&$0$$\cdot$$2$&$0$$\cdot$$3$&$0$&$2$\\ \begin{footnotesize} N-R \end{footnotesize} & $0$$\cdot$$53$&$-0$$\cdot$$01$ &$0$$\cdot$$85$&$0$$\cdot$$44$ &$0$$\cdot$$08$&$1$$\cdot$$21$&$0$$\cdot$$34$&$0$$\cdot$$12$&$1$$\cdot$$33$\\ & $$\mbox{\footnotesize$(0$$\cdot$$16)$}$$ &$$\mbox{\footnotesize$(0$$\cdot$$04)$}$$&$$\mbox{\footnotesize$(0$$\cdot$$25)$}$$&$$\mbox{\footnotesize$(0$$\cdot$$06)$}$$&$$\mbox{\footnotesize$(0$$\cdot$$10)$}$$&$$\mbox{\footnotesize$(1$$\cdot$$05)$}$$&$$\mbox{\footnotesize$(0$$\cdot$$06)$}$$ &$$\mbox{\footnotesize$(0$$\cdot$$16)$}$$&$$\mbox{\footnotesize$(1$$\cdot$$11)$}$$\\ \begin{footnotesize} R \end{footnotesize} 
&$0$$\cdot$$87$&$-0$$\cdot$$00$&$0$$\cdot$$84$&$0$$\cdot$$67$&$-0$$\cdot$$02$&$0$$\cdot$$28$&$0$$\cdot$$27$&$0$$\cdot$$09$&$2$$\cdot$$36$\\ &$$\mbox{\footnotesize$(0$$\cdot$$07)$}$$ &$$\mbox{\footnotesize$(0$$\cdot$$01)$}$$&$$\mbox{\footnotesize$(0$$\cdot$$04)$}$$&$$\mbox{\footnotesize$(0$$\cdot$$05)$}$$&$$\mbox{\footnotesize$(0$$\cdot$$03)$}$$&$$\mbox{\footnotesize$(0$$\cdot$$02)$}$$&$$\mbox{\footnotesize$(0$$\cdot$$09)$}$$&$$\mbox{\footnotesize$(0$$\cdot$$23)$}$$&$$\mbox{\footnotesize$(0$$\cdot$$75)$}$$\\ \end{tabular}}}\label{table:par} \end{table} \begin{small} \begin{table}[h!] \centering \caption{Mean and standard deviation of K-L divergence, misclassification error and sum of extra weights resulting from non-repulsive mixture and repulsive mixture with a maximum number of clusters equal to six under different synthetic data scenarios.}{ \scalebox{0.9}{ \begin{tabular}{lccccccccccccccccccccc} &\multicolumn{6}{c}{n=100}&\multicolumn{6}{c}{n=1000} \\ & $$\mbox{\footnotesize$I c$}$$ & $$\mbox{\footnotesize$IIa$}$$ & $$\mbox{\footnotesize$IIb$}$$ & $$\mbox{\footnotesize$IIIa$}$$& $$\mbox{\footnotesize$IIIb$}$$&$$\mbox{\footnotesize$IV$}$$& $$\mbox{\footnotesize$I c$}$$ & $$\mbox{\footnotesize$IIa$}$$ & $$\mbox{\footnotesize$IIb$}$$ & $$\mbox{\footnotesize$IIIa$}$$& $$\mbox{\footnotesize$IIIb$}$$&$$\mbox{\footnotesize$IV$}$$\\ \multicolumn{10}{l}{\begin{footnotesize}K-L divergence \end{footnotesize}}& \\ \begin{footnotesize} N-R \end{footnotesize} &$0$$\cdot$$05$&$0$$\cdot$$03$&$0$$\cdot$$07$&$0$$\cdot$$05$&$0$$\cdot$$08$&$0$$\cdot$$22$&$0$$\cdot$$01$&$0$$\cdot$$01$&$0$$\cdot$$01$&$0$$\cdot$$01 $&$0$$\cdot$$01 $&$0$$\cdot$$02$ \\ &$$\mbox{\footnotesize$(0$$\cdot$$03)$}$$&$$\mbox{\footnotesize$(0$$\cdot$$01)$}$$&$$\mbox{\footnotesize$(0$$\cdot$$02)$}$$&$$\mbox{\footnotesize$(0$$\cdot$$02)$}$$&$$\mbox{\footnotesize$(0$$\cdot$$03)$}$$&$$\mbox{\footnotesize$(0$$\cdot$$05)$}$$&$$\mbox{\footnotesize$(0$$\cdot$$00)$}$$ &$$\mbox{\footnotesize$(0$$\cdot$$00)$}$$ 
&$$\mbox{\footnotesize$(0$$\cdot$$00)$}$$&$$\mbox{\footnotesize$(0$$\cdot$$00)$}$$&$$\mbox{\footnotesize$(0$$\cdot$$00)$}$$&$$\mbox{\footnotesize$(0$$\cdot$$01)$}$$\\ \begin{footnotesize} R \end{footnotesize}&$0$$\cdot$$06$&$0$$\cdot$$05$&$0$$\cdot$$08$&$0$$\cdot$$07$&$0$$\cdot$$09$&$0$$\cdot$$28$&$ 0$$\cdot$$01$&$0$$\cdot$$01$&$0$$\cdot$$01$&$0$$\cdot$$01$&$0$$\cdot$$01$&$0$$\cdot$$03$ \\ &$$\mbox{\footnotesize$(0$$\cdot$$03)$}$$&$$\mbox{\footnotesize$(0$$\cdot$$02)$}$$&$$\mbox{\footnotesize$(0$$\cdot$$03)$}$$&$$\mbox{\footnotesize$(0$$\cdot$$03)$}$$&$$\mbox{\footnotesize$(0$$\cdot$$03)$}$$&$$\mbox{\footnotesize$(0$$\cdot$$04)$}$$&$$\mbox{\footnotesize$(0$$\cdot$$00)$}$$&$$\mbox{\footnotesize$(0$$\cdot$$00)$}$$&$$\mbox{\footnotesize$(0$$\cdot$$00)$}$$&$$\mbox{\footnotesize$(0$$\cdot$$00)$}$$&$$\mbox{\footnotesize$(0$$\cdot$$00)$}$$ &$$\mbox{\footnotesize$(0$$\cdot$$01)$}$$\\ \multicolumn{10}{l}{\begin{footnotesize}Misclassification \end{footnotesize}}& \\ \multirow{1}{*}{\begin{footnotesize} HCT \end{footnotesize}}& $0$$\cdot$$12$ &$0$$\cdot$$11$ &$0$$\cdot$$04$&$0$$\cdot$$12$&$0$$\cdot$$08$&$0$$\cdot$$21$& $0$$\cdot$$05$&$0$$\cdot$$42$&$0$$\cdot$$01$&$0$$\cdot$$42$&$0$$\cdot$$01$&$0$$\cdot$$20$\\ \begin{footnotesize} N-R \end{footnotesize} & $0$$\cdot$$69$&$ 0$$\cdot$$26$&$0$$\cdot$$06$&$0$$\cdot$$17$&$0$$\cdot$$05$&$0$$\cdot$$13 $&$0$$\cdot$$65$&$0$$\cdot$$24$&$0$$\cdot$$03$&$0$$\cdot$$14$&$0$$\cdot$$03$&$0$$\cdot$$19$\\ & $$\mbox{\footnotesize$(0$$\cdot$$10)$}$$&$$\mbox{\footnotesize$(0$$\cdot$$10)$}$$&$$\mbox{\footnotesize$(0$$\cdot$$04)$}$$&$$\mbox{\footnotesize$(0$$\cdot$$09)$}$$&$$\mbox{\footnotesize$(0$$\cdot$$06)$}$$&$$\mbox{\footnotesize$(0$$\cdot$$05)$}$$&$$\mbox{\footnotesize$(0$$\cdot$$11)$}$$&$$\mbox{\footnotesize$(0$$\cdot$$08)$}$$&$$\mbox{\footnotesize$(0$$\cdot$$04)$}$$&$$\mbox{\footnotesize$(0$$\cdot$$09)$}$$&$$\mbox{\footnotesize$(0$$\cdot$$03)$}$$&$$\mbox{\footnotesize$(0$$\cdot$$02)$}$$\\ \begin{footnotesize} R \end{footnotesize} & 
$0$$\cdot$$53$&$0$$\cdot$$18$&$0$$\cdot$$01$&$0$$\cdot$$10$&$0$$\cdot$$01$&$0$$\cdot$$05$&$0$$\cdot$$46$&$0$$\cdot$$13$&$0$$\cdot$$00$&$0$$\cdot$$03$&$0$$\cdot$$00$&$0$$\cdot$$17$\\ &$$\mbox{\footnotesize$(0$$\cdot$$10)$}$$&$$\mbox{\footnotesize$(0$$\cdot$$09)$}$$&$$\mbox{\footnotesize$(0$$\cdot$$02)$}$$&$$\mbox{\footnotesize$(0$$\cdot$$05)$}$$&$$\mbox{\footnotesize$(0$$\cdot$$01)$}$$&$$\mbox{\footnotesize$(0$$\cdot$$02)$}$$&$$\mbox{\footnotesize$(0$$\cdot$$16)$}$$&$$\mbox{\footnotesize$(0$$\cdot$$04)$}$$&$$\mbox{\footnotesize$(0$$\cdot$$01)$}$$&$$\mbox{\footnotesize$(0$$\cdot$$02)$}$$&$$\mbox{\footnotesize$(0$$\cdot$$01)$}$$&$$\mbox{\footnotesize$(0$$\cdot$$01)$}$$\\ \multicolumn{10}{l}{\begin{footnotesize}Sum of extra weights \end{footnotesize}}& \\ \begin{footnotesize} N-R \end{footnotesize} &$0$$\cdot$$30$&$0$$\cdot$$21$&$0$$\cdot$$09$&$0$$\cdot$$16$&$0$$\cdot$$07$&$0$$\cdot$$13$&$0$$\cdot$$30$&$0$$\cdot$$21$&$0$$\cdot$$03$&$0$$\cdot$$16$&$0$$\cdot$$03$&$0$$\cdot$$29$\\ &$$\mbox{\footnotesize$(0$$\cdot$$10)$}$$&$$\mbox{\footnotesize$(0$$\cdot$$11)$}$$&$$\mbox{\footnotesize$(0$$\cdot$$07)$}$$&$$\mbox{\footnotesize$(0$$\cdot$$09)$}$$&$$\mbox{\footnotesize$(0$$\cdot$$07)$}$$&$$\mbox{\footnotesize$(0$$\cdot$$08)$}$$&$$\mbox{\footnotesize$(0$$\cdot$$11)$}$$&$$\mbox{\footnotesize$(0$$\cdot$$11)$}$$&$$\mbox{\footnotesize$(0$$\cdot$$04)$}$$&$$\mbox{\footnotesize$(0$$\cdot$$10)$}$$&$$\mbox{\footnotesize$(0$$\cdot$$03)$}$$&$$\mbox{\footnotesize$(0$$\cdot$$03)$}$$\\ \begin{footnotesize} R \end{footnotesize} &$0$$\cdot$$08$&$0$$\cdot$$08$&$0$$\cdot$$02$&$0$$\cdot$$04$&$0$$\cdot$$02$&$0$$\cdot$$06$&$0$$\cdot$$10$&$0$$\cdot$$09$&$0$$\cdot$$00$&$0$$\cdot$$01$&$0$$\cdot$$00$&$0$$\cdot$$25$\\ 
&$$\mbox{\footnotesize$(0$$\cdot$$05)$}$$&$$\mbox{\footnotesize$(0$$\cdot$$07)$}$$&$$\mbox{\footnotesize$(0$$\cdot$$02)$}$$&$$\mbox{\footnotesize$(0$$\cdot$$05)$}$$&$$\mbox{\footnotesize$(0$$\cdot$$02)$}$$&$$\mbox{\footnotesize$(0$$\cdot$$03)$}$$&$$\mbox{\footnotesize$(0$$\cdot$$04)$}$$&$$\mbox{\footnotesize$(0$$\cdot$$06)$}$$&$$\mbox{\footnotesize$(0$$\cdot$$00)$}$$&$$\mbox{\footnotesize$(0$$\cdot$$01)$}$$&$$\mbox{\footnotesize$(0$$\cdot$$00)$}$$&$$\mbox{\footnotesize$(0$$\cdot$$03)$}$$\\ \end{tabular}}} \label{table:mis} \end{table} \end{small} \section{Real data} We tested the performance of our proposed prior specification on three real datasets. The first involves $82$ measurements of the velocities in km/s of galaxies diverging from our own \citep{escobar,reversible}, the second consists of the acidity index measured in a sample of $155$ lakes in north central Wisconsin \citep{reversible}, and the third consists of $150$ observations from three different species of iris, each with four measurements \citep{clusterCV}. For the first two datasets, a repulsive mixture satisfying definition \ref{def1}(i) was considered and a five-component mixture model was fit, while for the third dataset a repulsive mixture satisfying definition \ref{def1}(ii) was considered and both six and ten components were considered as the upper bound. The same prior specification, Markov chain Monte Carlo sampler, and relabeling technique as in section \ref{simsection} were utilized. \begin{figure} \caption{Histogram of galaxy data (I) and acidity data (II) overlaid with a nonparametric density estimate using Gaussian kernel density estimation} \label{data:real} \end{figure} For the galaxy data, figure \ref{data:real} reveals that there are three non-overlapping clusters, with the one close to the origin relatively large compared to the others.
Although this large cluster might be interpreted as two highly overlapping clusters, it appears to be well approximated by a single normal density. \cite{reversible} and \cite{escobar} estimated the number of components, obtaining a posterior distribution on $k$ concentrating on values ranging from $5$ to $7$. This may be due to the non-repulsive prior allowing closely overlapping components, favoring relatively large values of $k$. Figure \ref{real:clusters} reveals that the non-repulsive prior specification leads to two overlapping and essentially indistinguishable clusters. Under repulsive priors, no clusters overlap significantly and unnecessary components receive a weight close to zero. \indent For the acidity data, figure \ref{data:real} suggests that two clusters are involved. Since one of them appears to be highly skewed, we expect that three clusters might be needed to approximate this density well. \cite{reversible} obtained a posterior for $k$ almost equally concentrated on values of $k$ ranging from $3$ to $5$. Figure \ref{real:clusters} shows the estimated clusters for both repulsive and non-repulsive priors. With non-repulsive priors, four clusters receive significant weight and two of them overlap significantly. With repulsive priors, only three clusters receive significant weight and all of them appear fairly separated. The iris data were previously analyzed by \cite{sugar:iris} and \cite{clusterCV} using new methods to estimate the number of clusters based on minimizing loss functions. They concluded the optimal number of clusters was two. This result did not agree with the number of species due to low separation in the data between two of the species. Such point estimates of the number of clusters do not provide a characterization of uncertainty in clustering in contrast to Bayesian approaches. Repulsive and non-repulsive mixtures were fitted under different choices of upper bound on the number of components. 
Since the data contain three true biological clusters, with two of these having similar distributions of the available features, we would expect the posterior to concentrate on two or three components. Posterior means and standard deviations of the three highest weights were $(0$$\cdot$$30, 0$$\cdot$$23,0$$\cdot$$13)$ and $(0$$\cdot$$05, 0$$\cdot$$04,0$$\cdot$$04)$ for non-repulsive and $(0$$\cdot$$56, 0$$\cdot$$29,0$$\cdot$$08)$ and $(0$$\cdot$$05, 0$$\cdot$$04,0$$\cdot$$03)$ for repulsive mixtures. Clearly, repulsive priors lead to a posterior more concentrated on two components, and assign low probability to more than three components. Figure \ref{real:iris} shows the density of the total probability assigned to the extra components. This quantity was computed considering the number of species as the true number of clusters. According to figure \ref{real:iris}, our repulsive prior specification leads to extra component weights very close to zero regardless of the upper bound on the number of components. The posterior uncertainty is also small. Non-repulsive mixtures assign large weight to extra components, with posterior uncertainty increasing considerably as the number of components increases. \begin{figure} \caption{Estimated clusters under galaxy data for non-repulsive (I) and repulsive (II) priors and under acidity data for non-repulsive (III) and repulsive (IV) priors} \label{real:clusters} \end{figure} \begin{figure} \caption{Density of the sum of extra weights under $k=6$ for non-repulsive (solid) and repulsive (dash) priors and under $k=10$ components for non-repulsive (dash-dot) and repulsive (dot) priors} \label{real:iris} \end{figure} \section*{Supplementary Material} The supplementary material includes the proofs of lemma 2 and lemma 4, assumptions B1-B5, conditions $(i)$, $(ii)$ and $(iii)$ of theorem 3.1 in \cite{Scricciolo2011} and theorem 2.1 in \cite{ghosal2000}.
\appendix \section*{Appendix} \noindent Throughout the appendix we set all constants whose values are of no consequence equal to one. \begin{proof} [of lemma~\ref{kl}] By assumption B0, $\vartheta(k=k_0)>0$. We consider the case in which $f$ is a finite mixture with $k_0$ components. By assumption A1, for each $\eta >0$ there is a corresponding $\delta >0 $ such that, for any given $y\in \mathcal{Y}$ and for all $\gamma_1, \gamma_2 \in \Gamma$ with $|\gamma_1 - \gamma_2|<\delta$, we have $|\phi(y; \gamma_1) - \phi(y;\gamma_2)| < \eta$. Let $S_{\delta}=P_{\delta} \times \Gamma_{\delta}$ with $\Gamma_{\delta}=\left\lbrace \gamma : |\gamma_j - \gamma_{0j}| \leq \delta, j\leq k_{0} \right\rbrace$ and $P_{\delta}=\left\lbrace p : |p_{j} - p_{0 j}| \leq \delta, j \leq k_{0} \right\rbrace$. By assumptions A1 and A2, for any given $y$ and for any $\eta> 0$, there is a $\delta >0$ such that $|f_{0} - f| \leq \eta$ if $\theta \in S_{\delta}$. This means that $f \to f_0$ as $\theta \to \theta_0$, for any given $y$. Equivalently, $|\log(f_{0}/f)| \to 0$ pointwise as $\theta \to \theta_0$. Notice that \[\left|\log\left(f_{0}/f\right)\right| \leq \left|\log\left\lbrace\sup_{\gamma \in D_{0}} \phi(\gamma)\right\rbrace - \log\left\lbrace\inf_{\gamma \in D_{0}} \phi(\gamma)\right\rbrace\right|.\] By assumption A3 and the dominated convergence theorem, for any $\epsilon > 0$ there is a $\delta >0$ such that $\int f_0 \log(f_{0}/f) < \epsilon$ if $\theta \in S_{\delta}$. By the independence of the weights and the parameters of the kernel, \begin{equation*}\Pi(KL(f_0,f)< \epsilon)\geq \lambda( P_{\delta} )\pi(\Gamma_{\delta}).\end{equation*} Assumption A4, combined with the fact that $\{ \gamma: ||\gamma -\gamma_0||_1 \leq \delta\} \subseteq \Gamma_{\delta}$, results in $\pi(\Gamma_{\delta})>0$.
Finally, since $\lambda= \mathrm{Dirichlet}(\alpha)$, it can be shown that $\lambda( P_{\delta})>0$.\end{proof} \begin{proof} [of lemma~\ref{contraction:rate}] \indent To prove lemma \ref{contraction:rate} we need to show that the three conditions of theorem 2.1 in \cite{ghosal2000} are satisfied. First, define $D(\epsilon, \mathcal{F},d_s)$ as the maximum number of points in $\mathcal{F}$ such that the distance, with respect to the metric $d_s$, between each pair is at least $\epsilon$. Let $d_s$ be either the Hellinger metric or the one induced by the $L_1$-norm. For given sequences $k_n, a_n, u_n \uparrow \infty$ and $b_n\downarrow 0$ define $$\mathcal{F}^{(k)}_n=\left\{ f: f=\sum_{j=1}^{k} p_{j} \phi(\mu_{j},\sigma), \mu \in (-a_n,a_n)^k, \sigma \in (b_n,u_n)\right\}$$ and $\mathcal{F}_n=\cup_{j=1}^{k_n} \mathcal{F}^{(j)}_n$. As shown in \cite{Scricciolo2011}, for constants $f_2 \geq f_1 >0$ and $l_1,l_2,l_3>0$, derived below to satisfy conditions (2) and (3) in \cite{ghosal2000}, defining $f_1 \log n \leq k_n \leq f_2 \log n$, $a_n=l_3\left( \log \bar{\epsilon}_n^{-1}\right)^{1/2}$, $b_n=l_1 (\log \bar{\epsilon}_n^{-1})^{-1/e_2}$ and $u_n=\bar{\epsilon}_n^{-l_2}$ yields $\log D(\bar{\epsilon}_n,\mathcal{F}_n,d_s)\lesssim n \bar{\epsilon}_n^2$ with $\bar{\epsilon}_n=n^{-1/2} \log n$. \indent Let $A_{n,j}=(-a_n,a_n)^j$. In order to show condition (2) of theorem 2.1 in \cite{ghosal2000}, we need to show that there is a constant $q_1>0$ such that $\pi(A_{n,k}^C) \lesssim \exp(- q_1 a_n^2)$. From the exchangeability assumption it follows that \begin{align*} pr(A_{n,k}^C\mid k=s)&=\sum_{j=1}^s \frac{s!}{j!\, (s-j)!}\, \pi\left(A_{n,j}^C \times A_{n,s-j}\right)\\ &\leq s \sum_{j=1}^s \frac{(s-1)!}{(j-1)!\, (s-j)!}\, \pi\left(A_{n,j}^C\times A_{n,s-j}\right) \leq s\, \pi_m(A_{n,1}^C). \end{align*} Therefore, condition C1 implies that, for a positive constant $q_1$, we have $\pi(A_{n,k}^C)\lesssim E(k)\exp(-q_1 a_n^2)$ with $E(k)<\infty$ by condition (ii) of theorem 3.1 
in \cite{Scricciolo2011}. Given a positive constant $z_2$ chosen to satisfy condition (3) in theorem 2.1 of \cite{ghosal2000}, let $f_1 \geq (z_2 + 4)/d_2$, $l_1 \leq \left\lbrace e_1/4 (z_2+4)\right\rbrace^{1/e_2}$, $ l_2 \geq 4(z_2 +4)/e_3$ and $l_3 \geq \left\lbrace4(z_2 +4)/q_1\right\rbrace^{1/2}$. Under these values of $f_1, l_1, l_2$ and $l_3$, following \cite{Scricciolo2011}, assumptions (i), (ii) of theorem 3.1 in \cite{Scricciolo2011} combined with assumption C1 imply $\Pi(\mathcal{F}\setminus \mathcal{F}_n) \lesssim \exp\left\lbrace-(z_2 + 4)n \tilde{\epsilon}_n^2\right\rbrace$ with $\tilde{\epsilon}_n=n^{-1/2}(\log n)^{1/2}$. \indent To show condition (3) of theorem 2.1 in \cite{ghosal2000}, we can again follow the proof of theorem 3.1 in \cite{Scricciolo2011}. The only thing we need to show is that there are constants $u_1, u_2, u_3>0$ such that for any $\epsilon_n \leq u_3$ \[\pi(||\mu-\mu_0 ||_1\leq \epsilon_n) \geq u_1 \exp\left\lbrace-u_2 k_0 \log(1/\epsilon_n)\right\rbrace,\] which is guaranteed by condition C2. Therefore, it can easily be shown that, for sufficiently large $n$, $z_2>0$ and $\tilde{\epsilon}_n=n^{-1/2}(\log n)^{1/2}$, $\Pi\left\lbrace B_{KL}(f_0,\tilde{\epsilon}_n^2) \right\rbrace \gtrsim \exp(-z_2 n \tilde{\epsilon}_n^2)$. \end{proof} \begin{proof}[of theorem~\ref{weights:theorem}] Only for this proof and for ease of notation the density $f$ will be referred to as $f_{\theta}$. Define the non-identifiability set as $T=\{\theta: f_{\theta} = f_{0} \}$. In order to define each vector in $T$, let $0 = t_0 < t_1< t_2 \ldots < t_{k_0} \leq k$ and $\gamma_j = \gamma_{0i}$ for $j \in I_i=\{t_{i-1}+1,\ldots, t_i \}$. Let $p_{0i}=\sum_{j=t_{i-1}+1}^{t_i}p_j$ and $p_{j}=0$ for $j>t_{k_0}$. Define $q_j=p_j/p_{0i}$ for $j \in I_i$. \\ \indent Define $A_n=\left\lbrace \min_{\{\iota \in S_k\}}\left( \sum_{i=1}^{k-k_0} p_{\iota(i)}\right) > \delta_n M_n \right\rbrace$ and $A'_n=A_n \cap \{ \|f-f_0\|_1\leq M \delta_n \}$. 
Let $D_n= \int_{ \{ \|f-f_0\|_1<\delta_n \}} \exp(l_n(\theta) - l_n(\theta_0)) d(\pi \times \lambda)(\theta)$ with $l_n(\theta_0)$ being the log-likelihood evaluated at $\theta_0$. Along the lines of \cite{Rousseau}'s proof, to prove theorem 1 we need to show that for any $\epsilon>0$ there are positive constants $m_1, m_2$ and a permutation $\iota \in S_k$ such that\\ \begin{minipage}{0.5\linewidth} \begin{equation}D_n \geq m_1 n^{-s(k_0,\alpha)/2} \label{condDn}\end{equation} \end{minipage} \begin{minipage}{0.5\linewidth} \begin{equation} \Pi(A_n')\leq m_2 \delta_n^{s(k_0,\alpha)} M_n^{\bar{\alpha}-m/2} \label{condAn}\end{equation}\end{minipage}\\ with $s(k_0,\alpha)=k_0-1+m k_0+\sum_{j=1}^{k-k_0}\alpha_{\iota(j)}$. Following \cite{Rousseau}'s proof, we can show that, under condition B5, (\ref{condDn}) is satisfied for sufficiently large $n$. Concerning (\ref{condAn}), \cite{Rousseau} showed that on $A_n'$ there is a set $I_i$ containing indices $j_1$ and $ j_2$ such that \[|\gamma_{j_1}-\gamma_{0i}| \leq \left(\delta_n/q_{j_1}\right)^{1/2}, \quad |\gamma_{j_2}-\gamma_{0i}|\leq \left(\delta_n/q_{j_2}\right)^{1/2}\] with $q_{j_1}>\epsilon/k_0$ and $q_{j_2}>\delta_n M_n/2$. Therefore, from the triangle inequality it follows that \[|\gamma_{j_1}-\gamma_{j_2}|\leq \left\lbrace2\delta_n/\min(q_{j_1},q_{j_2})\right\rbrace^{1/2}.\] Now, for sufficiently large $n$, $\min(q_{j_1},q_{j_2})> \delta_n M_n/2$ and therefore $|\gamma_{j_1}-\gamma_{j_2}|\leq M_n^{-1/2}$. Since $g$ is bounded above by a positive constant, there exists a constant $c>0$ such that \begin{equation}h(\gamma)\leq c g\left\lbrace d(\gamma_{j_1},\gamma_{j_2})\right\rbrace \leq c g\left(M_n^{-1/2}\right)\label{bound}\end{equation} \noindent Let the prior probability of the set $A'_n$ be defined as $\Pi(A'_n) = \int_{A'_n} d(\pi \times \lambda)(\gamma \times p)$. 
To find an upper bound for this integral, we can directly apply the proof of \cite{Rousseau}, which shows that $\Pi(A'_n)\leq g\left(M_n^{-1/2}\right)\delta_n^{s(k_0,\alpha)} M_n^{\bar{\alpha} - m/2}$. By assumption, for sufficiently large $n$, $g\left(M_n^{-1/2}\right)\leq r_1 M_n^{-r_2}$. Letting $s_{r_2}=r_2+m/2-\bar{\alpha}$, it follows that \[\Pi(A'_n)\leq M_n^{-s_{r_2}} (\log n)^{q s(k_0,\alpha)} D_n.\] Therefore, $M_n=(\log n)^{q s(k_0,\alpha)/s_{r_2}}$ implies $\Pi(A'_n)=O_p(D_n)$.\end{proof} \end{document}
\begin{document} \title{Equitable $d$-degenerate choosability of graphs\thanks{The short version of the paper was accepted to be published in the proceedings of IWOCA2020}} \author{Ewa Drgas-Burchardt\inst{1} \and Hanna Furma\'nczyk\inst{2} \and El\.zbieta Sidorowicz\inst{1}} \authorrunning{E. Drgas-Burchardt, H. Furma\'nczyk, E. Sidorowicz} \institute{Faculty of Mathematics, Computer Science and Econometrics,\ University of Zielona G\'ora,\ Prof. Z. Szafrana 4a,\ 65-516 Zielona G\'ora,\ Poland \email{\{E.Drgas-Burchardt,E.Sidorowicz\}@wmie.uz.zgora.pl}\\ \and Institute of Informatics,\ Faculty of Mathematics, Physics and Informatics,\ University of Gda\'nsk,\ 80-309 Gda\'nsk,\ Poland\\ \email{[email protected]}} \maketitle \begin{abstract} Let ${\mathcal D}_d$ be the class of $d$-degenerate graphs and let $L$ be a list assignment for a graph $G$. A colouring of $G$ such that every vertex receives a colour from its list and the subgraph induced by the vertices coloured with one colour is a $d$-degenerate graph is called an $(L,{\mathcal D}_d)$-colouring of $G$. For a $k$-uniform list assignment $L$ and $d\in\mathbb{N}_0$, a graph $G$ is equitably $(L,{\mathcal D}_d)$-colourable if there is an $(L,{\mathcal D}_d)$-colouring of $G$ such that the size of any colour class does not exceed $\left\lceil|V(G)|/k\right\rceil$. An equitable $(L,{\mathcal D}_d)$-colouring is a generalization of an equitable list colouring, introduced by Kostochka et al., and an equitable list arboricity, presented by Zhang. Such a model can be useful in network decomposition where some structural properties are imposed on the subnetworks. In this paper we give a polynomial-time algorithm that, for a given $(k,d)$-partition of $G$ with a $t$-uniform list assignment $L$ and $t\geq k$, returns its equitable $(L,\mathcal{D}_{d-1})$-colouring. In addition, we show that 3-dimensional grids are equitably $(L,\mathcal{D}_1)$-colourable for any $t$-uniform list assignment $L$ where $t\geq 3$. 
\keywords{equitable choosability\and hereditary classes \and $d$-degenerate graph} \end{abstract} \section{Motivation and preliminaries} In recent decades, social network graphs, describing real-life relationships, have become very popular and ubiquitous. Understanding key structural properties of large-scale data networks has become crucial for analyzing and optimizing their performance, as well as for improving their security. This topic has recently attracted the attention of many researchers (see \cite{dragan,lei,miao,deg}). We consider one of the problems connected with the decomposition of networks into smaller pieces fulfilling some structural properties. For example, we may desire that, for security reasons, the pieces are acyclic or even independent. This is because in such a piece we can easily and effectively identify a node failure, since the local structure around such a node is simple enough to be tested using classic algorithmic tools \cite{deg}. Sometimes it is also desirable that the sizes of the pieces are balanced, which helps to maintain the whole communication network effectively. Such a problem can be modelled by minimization problems in graph theory, called the \emph{equitable vertex arboricity} and the \emph{equitable vertex colourability} of graphs. Sometimes there are additional requirements on vertices/nodes that can be modelled by lists of available colours. Hence, we are interested in the list version, introduced by Kostochka, Pelsmajer and West \cite{Ko03} (the independent case) and by Zhang \cite{Zh16} (the acyclic case). In the colourability and arboricity models the properties of a network can be described in terms of an upper bound on the minimum degree, i.e. each colour class induces a graph each of whose induced subgraphs has minimum degree bounded from above by zero or one, respectively. 
In this paper we consider a generalization of these models in which each colour class induces a graph each of whose induced subgraphs has minimum degree bounded from above by some natural constant. Let $\mathbb{N}_0 = \mathbb{N}\cup\{0\}$. For $d\in \mathbb{N}_0$, the graph $G$ is $d$-{\it degenerate} if $\delta(H)\le d$ for any subgraph $H$ of $G$, where $\delta(H)$ denotes the minimum degree of $H$. The class of all $d$-degenerate graphs is denoted by ${\cal D}_d$. In particular, ${\cal D}_0$ is the class of all edgeless graphs and ${\cal D}_1$ is the class of all forests. A ${\cal D}_d$-{\it colouring} is a mapping $c:V(G)\rightarrow \mathbb{N}$ such that for each $i\in \mathbb{N}$ the set of vertices coloured with $i$ induces a $d$-degenerate graph. A \textit{list assignment} $L$, for a graph $G$, is a mapping that assigns a nonempty subset of $\mathbb{N}$ to each vertex $v\in V(G)$. Given $k\in \mathbb{N}$, a list assignment $L$ is $k$-\textit{uniform} if $|L(v)|=k$ for every $v\in V(G)$. Given $d\in \mathbb{N}_0$, a graph $G$ is $(L,{\mathcal D}_d)$-{\it colourable} if there exists a colouring $c: V(G) \to \mathbb{N}$, such that $c(v) \in L(v)$ for each $v \in V(G)$, and for each $ i\in \mathbb{N}$ the set of vertices coloured with $i$ induces a $d$-degenerate graph. Such a mapping $c$ is called an {\it $(L,{\mathcal D}_d)$-colouring} of $G$. An $(L,{\mathcal D}_d)$-colouring of $G$ is also called an $L$-colouring of $G$. If $c$ is any colouring of $G$, then its restriction to $V^\prime$, $V^\prime \subseteq V(G)$, is denoted by $c|_{V'}$. For a partially coloured graph $G$, let $N_G^{col}(d,v)=\{w\in N_G(v): w\; \mbox{has}\; d \;\mbox{neighbours coloured with}\; c(v)\}$, where $N_G(v)$ denotes the set of vertices of $G$ adjacent to $v$. We refer the reader to \cite{Dist00} for terminology not defined in this paper. 
Given $k\in \mathbb{N}$ and $d\in \mathbb{N}_0$, a graph $G$ is \textit{equitably $(k,{\mathcal D}_d)$-choosable} if for any $k$-uniform list assignment $L$ there is an $(L,{\mathcal D}_d)$-colouring of $G$ such that the size of any colour class does not exceed $\left\lceil |V(G)|/k\right\rceil$. The notion of equitable $(k,{\mathcal D}_0)$-choosability was introduced by Kostochka et al. \cite{Ko03} whereas the notion of equitable $(k,{\mathcal D}_1)$-choosability was introduced by Zhang \cite{Zh16}. Let $k,d\in\mathbb{N}$. A partition $S_1\cup \cdots \cup S_{\eta+1}$ of $V(G)$ is called a \textit{$(k,d)$-partition} of $G$ if $|S_1| \leq k$, and $|S_j|=k$ for $j\in \{2, \ldots ,\eta+1\}$, and for each $j\in \{2, \ldots ,\eta+1\}$ there is an ordering $\{x_1^j,\ldots ,x_k^j\}$ of the vertices of $S_j$ such that \begin{equation} |N_G(x_i^j) \cap (S_1\cup\cdots \cup S_{j-1})| \leq di-1,\label{part-cond} \end{equation} for every $i\in \{1, \ldots ,k\}$. Observe that if $S_1\cup \cdots \cup S_{\eta+1}$ is a $(k,d)$-partition of $G$, then $\eta+1=\left\lceil |V(G)|/k\right\rceil$. Moreover, immediately by the definition, each $(k,d)$-partition of $G$ is also its $(k,d+1)$-partition. Surprisingly, the monotonicity of the $(k,d)$-partition with respect to the parameter $k$ is not so easy to analyze. We illustrate this fact by Example \ref{ex:Gq}. Note that for integers $k,d$ the complexity of deciding whether $G$ has a $(k,d)$-partition is unknown. The main result of this paper is as follows. \begin{theorem}\label{thm:1} Let $k,d,t\in \mathbb{N}$ and $t\geq k$. If a graph $G$ has a $(k,d)$-partition, then it is equitably $(t,{\mathcal D}_{d-1})$-choosable. Moreover, there is a polynomial-time algorithm that for any graph with a given $(k,d)$-partition and for any $t$-uniform list assignment $L$ returns an equitable $(L,{\mathcal D}_{d-1})$-colouring of $G$. 
\end{theorem} The first statement of Theorem \ref{thm:1} generalizes the result obtained in \cite{DrFrDy18} for $d\in \{1,2\}$. In this paper we present an algorithm that confirms both the first and the second statement of Theorem \ref{thm:1} for all possible $d$. The algorithm, given in Section \ref{algor}, for a given $(k,d)$-partition of $G$ with a $t$-uniform list assignment $L$ returns its equitable $(L,\mathcal{D}_{d-1})$-colouring. Moreover, in Section \ref{grids} we give a polynomial-time algorithm that for a given 3-dimensional grid finds its $(3,2)$-partition, which, in consequence, implies the equitable $(t,\mathcal{D}_1)$-choosability of 3-dimensional grids for every $t\geq 3$. \section{The proof of Theorem \ref{thm:1}} \label{algor} \subsection{Background} For $S\subseteq V(G)$ by $G-S$ we denote the subgraph of $G$ induced by $V(G)\setminus S$. We start with a generalization of some results given in \cite{Ko03,Pe04,Zh16} for the classes ${\mathcal D}_0$ and ${\mathcal D}_1$. \begin{proposition}\label{l:2} Let $k,d\in \mathbb N$ and let $S$ be a set of distinct vertices $x_1, \ldots ,x_k$ of a graph $G$. If $G-S$ is equitably $(k,{\mathcal D}_{d-1})$-choosable and $$|N_G(x_i)\setminus S|\le di-1$$ holds for every $i\in \{1, \ldots ,k\}$, then $G$ is equitably $(k,{\mathcal D}_{d-1})$-choosable. \end{proposition} \begin{proof} Let $L$ be a $k$-uniform list assignment for $G$ and let $c$ be an equitable $(L|_{V(G)\setminus S},{\mathcal D}_{d-1})$-colouring of $G-S$. Thus each colour class in $c$ has cardinality at most $\left\lceil (|V(G)|-k)/k\right\rceil$ and induces in $G-S$, and consequently in $G$, a graph from ${\mathcal D}_{d-1}$. We extend $c$ to $(V(G)\setminus S) \cup \{x_k\}$ by assigning to $x_k$ a colour from $L(x_k)$ that is used on vertices in $N_G(x_k)\setminus S$ at most $d-1$ times. 
Such a colour always exists because $|N_G(x_k)\setminus S|\le dk-1 \hbox{ and } |L(x_k)|=k.$ Next, we colour the vertices $x_{k-1}, \ldots, x_1$, sequentially, assigning to $x_i$ a colour from its list that is different from the colours of all vertices $x_{i+1}, \ldots ,x_k$ and that is used at most $d-1$ times in $N_G(x_i)\setminus S$. Observe that there are at least $i$ colours in $L(x_i)$ that are different from $c(x_{i+1}), \ldots ,c(x_{k})$, and, since $|N_G(x_i)\setminus S|\le di-1$, $i\in \{1, \ldots ,k-1\}$, such a choice of $c(x_i)$ is always possible. Moreover, the colouring procedure ensures that the cardinality of every colour class in the extended colouring $c$ is at most $\left\lceil |V(G)|/k\right\rceil$. Let $G_i=G[(V(G)\setminus S)\cup \{x_{i}, \ldots ,x_k\}]$. It is easy to see that for each $i\in \{1, \ldots ,k\}$ each colour class in $c|_{V(G_i)}$ induces a graph belonging to ${\mathcal D}_{d-1}$. In particular this condition is satisfied for $G_1$, i.e. for $G$. Hence $c$ is an equitable $( L,{\mathcal D}_{d-1})$-colouring of $G$ and $G$ is equitably $(k,{\mathcal D}_{d-1})$-choosable. $\Box$ \end{proof} Note that if a graph $G$ has a $(k,d)$-partition, then one can prove that $G$ is equitably $(k,{\mathcal D}_{d-1})$-choosable by applying Proposition \ref{l:2} several times. In general, the equitable $(k,{\mathcal D}_{d-1})$-choosability of $G$ does not imply the equitable $(t,{\mathcal D}_{d-1})$-choosability of $G$ for $t\geq k$. Unfortunately, if $G$ has a $(k,d)$-partition, then $G$ may have neither a $(k+1,d)$-partition nor a $(k-1,d)$-partition. The infinite family of graphs defined in Example 1 confirms the last fact. \begin{example} Let $q\in \mathbb{N}$ and let ${G_1,\ldots,G_{2q+1}}$ be vertex-disjoint copies of $K_6$ such that $V(G_i)=\{v_1^i,\ldots ,v_6^i\}$ for $i\in \{1, \ldots, 2q+1\}$. Let $G(q)$ $($cf. Fig. 
\ref{fig:exg2}$)$ be the graph resulting from adding to ${G_1,\ldots,G_{2q+1}}$ edges that join vertices of $G_i$ with vertices of $G_{i-1}$, $i\in \{2,\ldots,2q+1\}$, in the following way: $$ \begin{array}{ll} \mbox{for } i \mbox{ even:} &\mbox{for } i \mbox{ odd:} \\ N_{G_{i-1}}(v_1^i)= \emptyset & N_{G_{i-1}}(v_1^i)=\{v_2^{i-1},v_3^{i-1},v_4^{i-1},v_5^{i-1},v_6^{i-1}\}\\ N_{G_{i-1}}(v_2^i)=\{v_1^{i-1}\} & N_{G_{i-1}}(v_2^i)=\{v_1^{i-1},v_4^{i-1},v_5^{i-1},v_6^{i-1}\}\\ N_{G_{i-1}}(v_3^i)=\{v_2^{i-1},v_3^{i-1}\}& N_{G_{i-1}}(v_3^i)=\{v_1^{i-1},v_2^{i-1},v_3^{i-1}\}\\ N_{G_{i-1}}(v_4^i)=\{v_1^{i-1},v_2^{i-1},v_3^{i-1}\} & N_{G_{i-1}}(v_4^i)=\{v_2^{i-1},v_3^{i-1}\}\\ N_{G_{i-1}}(v_5^i)=\{v_1^{i-1},v_4^{i-1},v_5^{i-1},v_6^{i-1}\} & N_{G_{i-1}}(v_5^i)=\{v_1^{i-1}\}\\ N_{G_{i-1}}(v_6^i)=\{v_2^{i-1},v_3^{i-1},v_4^{i-1},v_5^{i-1},v_6^{i-1}\}\ & N_{G_{i-1}}(v_6^i)=\emptyset \end{array} $$ \begin{figure} \caption{A draft of graph $G(2)$ from Example \ref{ex:Gq}} \label{fig:exg2} \end{figure} The construction of $G(q)$ immediately implies that for every $q\in \mathbb{N}$ the graph $G(q)$ has a $(6,1)$-partition. Also, observe that $deg_{G(q)}(v)\geq 5$ for each vertex $v$ of $G(q)$. Suppose that $G(q)$ has a $(5,1)$-partition $S_1\cup\cdots\cup S_{\eta+1}$ with $S_{\eta+1}=\{x_1^{\eta+1}, \ldots ,x_5^{\eta+1}\}$ such that $|N_{G(q)}(x_i^{\eta+1}) \cap (S_1\cup\cdots \cup S_{\eta})| \leq i-1$. Thus $|N_{G(q)}(x_1^{\eta+1}) \cap (S_1\cup\cdots \cup S_{\eta})| = 0$ and consequently $deg_{G(q)}(x_1^{\eta+1})\le 4$, contradicting our previous observation. Hence, $G(q)$ has no $(5,1)$-partition. Next, we will show that $G(q)$ has no $(7,1)$-partition for $q\geq 2$. \begin{proposition}\label{prop:1} For every integer $q$ satisfying $q\geq 2$ the graph $G(q)$ constructed in {\rm Example \ref{ex:Gq}} has no $(7,1)$-partition. \end{proposition} \begin{proof} Let $G=G(q)$ and suppose that $S^\prime_1\cup \cdots \cup S^\prime_{\eta+1}$ is a $(7,1)$-partition of $G$. 
Let $x_1^{\eta+1},x_2^{\eta+1},\ldots, x_7^{\eta+1}$ be an ordering of vertices of $S^\prime_{\eta+1}$ such that $|N_G(x_i^{\eta+1}) \cap (S^\prime_1\cup\cdots \cup S^\prime_{\eta})| \leq i-1$. Thus $\deg_G(x_1^{\eta+1})\le 6$. Since for every vertex $v\in V(G)\setminus \{v_5^{2q+1},v_6^{2q+1}\}$ it holds that $\deg_G(v)\ge 7$, we have $x_1^{\eta+1}=v_6^{2q+1}$ or $x_1^{\eta+1}=v_5^{2q+1}$ and furthermore $N_G(x_1^{\eta+1})\subseteq S^\prime_{\eta+1}$. So, $\{v_1^{2q+1},\ldots,v_6^{2q+1}\}\subseteq S^\prime_{\eta+1}$ and we must identify the last vertex of $S^\prime_{\eta+1}$. Since every vertex in $S^\prime_{\eta+1}$ has at most $6$ neighbours outside $S^\prime_{\eta+1}$, it follows that either $v_1^{2q}\in S^\prime_{\eta+1}$ or $v_2^{2q}\in S^\prime_{\eta+1}$. Let $G^\prime=G-S^\prime_{\eta+1}$, $S^\prime_{\eta}=\{x_1^{\eta},x_2^{\eta},\ldots, x_7^{\eta}\}$, and $|N_{G^\prime}(x_i^{\eta}) \cap (S^\prime_1\cup\cdots \cup S^\prime_{\eta-1})| \leq i-1$. \noindent {\it Case} 1. $v_1^{2q}\in S^\prime_{\eta+1}$ Since $\deg_{G^\prime}(x_1^{\eta})\le 6$, we have that either $x_1^{\eta}=v_2^{2q}$ or $x_1^{\eta}=v_3^{2q}$. In the first case, $\{v_1^{2q-1},v_2^{2q},\ldots,v_6^{2q}\}\subseteq S^\prime_{\eta}$ and we must identify the last vertex of $S^\prime_{\eta}$. However, $v_1^{2q-1}$ has 10 neighbours outside $\{v_1^{2q-1},v_2^{2q},\ldots,v_6^{2q}\}$. This contradicts the condition $v_1^{2q-1}\in S^\prime_{\eta}$, since every vertex in $S^\prime_{\eta}$ can have at most 6 neighbours outside. In the second case, $\{v_2^{2q-1},v_3^{2q-1},v_2^{2q},\ldots,v_6^{2q}\}= S^\prime_{\eta}$. Similarly, we can observe that $v_2^{2q-1}$ has $8$ neighbours outside $S^\prime_{\eta}$, which contradicts the fact that $v_2^{2q-1}\in S^\prime_{\eta}$. \noindent {\it Case} 2. $v_2^{2q}\in S^\prime_{\eta+1}$ Since $\deg_{G^\prime}(x_1^{\eta})\le 6$, we have that either $x_1^{\eta}=v_1^{2q}$ or $x_1^{\eta}=v_3^{2q}$. 
By the same argument as in \emph{Case} $1$ we can observe that $x_1^{\eta}\neq v_3^{2q}$, and so we may assume that $x_1^{\eta}=v_1^{2q}$ and $\{v_1^{2q},v_3^{2q},\ldots ,v_6^{2q}\}\subseteq S^\prime_{\eta}$. Thus we must identify the last two vertices of $S^\prime_{\eta}$. Since each of $v_1^{2q-1},v_2^{2q-1},v_3^{2q-1}$ has at least $8$ neighbours outside $\{v_1^{2q},v_3^{2q},\ldots ,v_6^{2q}\}$, we have $\{v_1^{2q-1},v_2^{2q-1},v_3^{2q-1}\}\cap S^\prime_{\eta}=\emptyset $. Hence we conclude that $v^{2q}_3,v^{2q}_4,v^{2q}_5,v^{2q}_6$ have at least two neighbours outside $S^\prime_{\eta}$, and so none of them is $x_2^{\eta}$, because $x_2^{\eta}$ has at most one neighbour outside $S^\prime_{\eta}$. Furthermore, every vertex in $V(G^\prime)\setminus \{v_1^{2q},v_3^{2q},\ldots ,v_6^{2q}\}$ has at least $5$ neighbours outside $\{v_1^{2q},v_3^{2q},\ldots ,v_6^{2q}\}$, thus none of them is $x_2^{\eta}$ either. Since $x_2^{\eta}$ does not exist, the $(7,1)$-partition of $G$ does not exist either. \end{proof} \label{ex:Gq} \end{example} \subsection{Algorithm}\label{subal} Now we are ready to present the algorithm that confirms both statements of Theorem \ref{thm:1}. Note that Proposition \ref{l:2} and an induction procedure could be used to prove the first statement of Theorem \ref{thm:1} for $t=k$, but this approach seems to be useless for $t> k$, as we have observed in Example \ref{ex:Gq}. Mudrock et al. \cite{mudrock} proved the lack of monotonicity of the equitable $(k,\mathcal{D}_0)$-choosability with respect to the parameter $k$. This motivates our approach to solving the problem. To aid understanding, we first give the main idea of the algorithm presented below. It can be expressed in a few steps (see also Fig. 
\ref{fig:my_label}): \begin{itemize} \item on the basis of the given $(k,d)$-partition $S_1\cup \cdots \cup S_{\eta+1}$ of $G$, we create the list $S$, consisting of the elements of $V(G)$, whose order corresponds to the order in which the colouring is expanded to successive vertices in each $S_j$ (cf. the proof of Proposition \ref{l:2}); \item let $|V(G)|=\beta t +r_2$, $1 \leq r_2 \leq t$; we colour $r_2$ vertices from the beginning of $S$ taking into account the lists of available colours; we delete the colour assigned to $v$ from the lists of available colours for vertices from $N^{col}_G(d,v)$; \item let $|V(G)|=\beta (\gamma k +r)+r_2=\beta \gamma k+ (\rho k +x) +r_2$; observe that $r \equiv t \pmod k$ and $\rho k +x\equiv 0 \pmod r$; we colour $\rho k +x$ vertices taking into account the lists of available colours in such a way that every sublist of length $k$ is formed by vertices coloured differently (consequently, every sublist of length $r$ is coloured differently); we divide the vertices coloured here into $\beta$ sets, each one of cardinality $r$; we delete $c(v)$ from the lists of vertices from $N^{col}_G(d,v)$; \item we extend the list colouring to the uncoloured $\beta\gamma k$ vertices by colouring $\beta$ groups of $\gamma k$ vertices; first, we associate each group of $\gamma k$ vertices with a set of $r$ vertices coloured in the previous step (for different groups these sets are disjoint); next, we colour the vertices of each group using $\gamma k$ different colours that are also different from the colours of the $r$ vertices of the set associated with this group; \item our final equitable list colouring is the consequence of a partition of $V(G)$ into $\beta+1$ coloured sets, each one of size at most $t$ and each one formed by vertices coloured differently. 
\end{itemize} \begin{figure} \caption{An exemplary illustration of the input of the \textsc{Equitable $(L,{\mathcal D}_{d-1})$-colouring} algorithm} \label{fig:my_label} \end{figure} \SetKwInOut{Input}{Input} \SetKwInOut{Output}{Output} \setlength{\intextsep}{0pt} \begin{algorithm}\label{al:1} \caption{\textsc{Equitable $(L,{\mathcal D}_{d-1})$-colouring}($G$)} \Input {Graph $G$ on $n$ vertices; $L$ - $t$-uniform list assignment; a $(k,d)$-partition $S_1\cup \cdots \cup S_{\eta+1}$ of $G$, given by lists $S_1=(x^1_1,\ldots,x^1_{r_1})$ and $S_j=(x^j_1,\ldots,x^j_k)$ for $j\in \{2, \ldots ,\eta+1\}$.} \Output{Equitable $(L,{\mathcal D}_{d-1})$-colouring of $G$.} initialization\; $S:=empty$; $L_R:=empty$; $L_X:=empty$;\\ \For{$j:=1$ \textbf{to} $\eta+1$}{ add \textsc{reverse}$(S_j)$ to $S$; //\textsc{reverse} is the procedure for reversing lists\label{procedureReverse}} $\beta:=\left\lceil n/t\right\rceil-1$;\\ \eIf {$n \equiv0 \pmod t$}{$r_2:=t$}{$r_2:=n \pmod t;$} $\gamma := t \div k$; $r:= t \pmod k$; $\rho:=\beta r \div k$; $x:=\beta r \pmod k$;\\ take and delete $r_2$ elements from the beginning of $S$, and add them, vertex \\ \hspace{2cm} by vertex, to list $L_R$; \label{r2}\\ \textsc{colour\_List}$(L_R,r_2)$; \label{colour-list-one}\\ take and delete $x$ elements from the beginning of $S$, and add them, vertex\\ \hspace{2cm} by vertex, to list $L_X$ ;\\ \textsc{colour\_List}$(L_X,x)$; \label{colour-list-two}\\ $S_{col}:=L_X$;\\ \For{$j=1$ \textbf{to} $\rho$}{\label{procedureOne-for-one} take and delete $k$ elements from the beginning of $S$, and add them, vertex\\ \hspace{2cm} by vertex, to list $S'$;\\ \textsc{colour\_List}$(S',k)$; \label{colour-list-three}\\ $S_{col}:=S_{col} + S'$;} \textsc{Reorder}($S_{col}$); \label{reorder}\\ $\overline{S}:=S$; //an auxiliary list\\ \label{modify-colour-one} \textsc{Modify\_colourLists}$(S_{col},\overline{S})$; \label{modify-list}\\ \textsc{colour\_List}$(S,\gamma k)$;\label{colour-list-four}\\ \end{algorithm} \setlength{\intextsep}{0pt} 
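The core greedy step used by the procedures, namely picking a colour from $L(v)$ and then deleting $c(v)$ from the lists of all vertices in $N_G^{col}(d,v)$, can be mirrored in a short Python sketch. This is only an illustration of the update, not the paper's pseudocode: the data structures (`adj`, `L`, `c`) are our own, and the choice of the minimum colour is made only for determinism (any colour from the list works):

```python
def colour_vertex(v, L, c, adj, d):
    """Colour v with some colour from its current list L[v], then remove that
    colour from the list of every neighbour w that now has d neighbours
    coloured c(v) (i.e. w is in N^col(d, v)), so no colour class can ever
    contain a vertex with d same-coloured neighbours."""
    c[v] = min(L[v])                    # deterministic choice for illustration
    for w in adj[v]:
        same = sum(1 for u in adj[w] if c.get(u) == c[v])
        if same == d:                   # w belongs to N^col(d, v)
            L[w].discard(c[v])
    return c[v]

# tiny demo on a triangle with d = 1 (colour classes must be independent)
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
L = {0: {1, 2}, 1: {1, 2}, 2: {1, 2, 3}}
c = {}
for v in [0, 1, 2]:
    colour_vertex(v, L, c, adj, d=1)
```

In the demo every vertex ends up in its own colour class, as forced by the list updates on the triangle.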
\begin{algorithm}\renewcommand{\algorithmcfname}{Procedure} \caption{\textsc{colour\_List}($S',p$)}\label{colour-list} \Input {List $S'$ of vertices; integer $p$.//the length of $S'$ is a multiple of $p$} \Output {$L$-colouring of the vertices from $S'$.\\//The procedure also modifies a global variable of the list assignment $L$. } initialization\; \While{$S'\neq empty$}{\label{procedureTwo-while-one} let $S''$ be the list of the $p$ first elements of $S'$;\\ $C:=\emptyset$; //set $C$ is reserved for the colours being assigned to vertices of $S''$\\ \While{$S''\neq empty$}{\label{procedureTwo-while-two} let $v$ be the first element of $S''$;\label{vdef}\\ $L(v):=L(v) \backslash C$;\\ $c(v):=$\textsc{colour\_Vertex}$(v)$; \\ delete vertex $v$ from $S'$ and $S''$; $C:=C\cup \{c(v)\}$;\\ } } \end{algorithm} \setlength{\intextsep}{0pt} \begin{algorithm}\renewcommand{\algorithmcfname}{Procedure} \caption{\textsc{colour\_Vertex}($v$)} \label{colour-vertex} \Input{Vertex $v$ of the graph $G$.} \Output{$L$-colouring of the vertex $v$.\\//The procedure also modifies a global variable of the list assignment $L$.} initialization\; $c(v):=$ any colour from $L(v)$;\\ delete $c(v)$ from $L(w)$ for all $w \in N_G^{col}(d,v)$;\label{delete} //$d$ is a global variable\\ \textbf{return} $c(v)$;\\ \end{algorithm} \setlength{\intextsep}{0pt} \begin{algorithm}\renewcommand{\algorithmcfname}{Procedure} \caption{\textsc{Reorder}($S'$)} \Input {List $S'$ of coloured vertices of $G$.} \Output{List $S'$ - reordered in such a way that every sublist of length $k$ is formed by vertices coloured with different colours.} initialization\; $S_{aux}:=empty$; //an auxiliary list\\ take and delete $r_1$ elements from $S'$, and add them, vertex by vertex, to $S_{aux}$;\\ \For{$j=1$ \textbf{\emph{to}} $\eta-\beta \gamma$}{ $P:=\emptyset$;\\ take and delete first $k$ elements from $S'$, and add them to set $P$;\\ \For{$i=1$ \textbf{\emph{to}} $k$}{ let $v$ be a vertex from $P$ such that $c(v)$ is different 
from colours of the last $k-1$ vertices of $S_{aux}$; add $v$ to the end of $S_{aux}$;\\ } } $S':=S_{aux}$;\\ \end{algorithm} \setlength{\intextsep}{0pt} \begin{algorithm}\renewcommand{\algorithmcfname}{Procedure} \caption{\textsc{Modify\_colourLists}($L_1,L_2$)}\label{modify-colour} \Input {List $L_1$ of $\beta r$ coloured vertices and list $L_2$ of $\beta \gamma k$ uncoloured vertices.} \Output {Modified colour list assignment $L$ for vertices of $L_2$.\\ //$L$ is a global variable } initialization\; $C:=\emptyset$; //$C$ is a set of colours of vertices from the depicted part of $L_1$\\ \For{$i=1$ \textbf{\emph{to}} $\beta$}{ take and delete first $r$ vertices from $L_1$;\\ let $C$ be the set of colours assigned to them;\\ \For{$j=1$ \textbf{\emph{to}} $\gamma k$}{ let $v$ be the first vertex from $L_2$;\\ $L(v):=L(v)\backslash C$; delete $v$ from $L_2$;\\ } } \end{algorithm} Now we illustrate \textsc{Equitable $(L,{\mathcal D}_{d-1})$-colouring} using a graph constructed in Example \ref{ex:1}. \begin{example}\label{ex:1} Let $G_1,G_2$ be two vertex-disjoint copies of $K_5$ and $V(G_i)=\{v_1^i,\ldots,$ $v_5^i\}$ for $i\in \{1,2\}$. We join every vertex $v_j^2$ to $v_j^1,v_{j+1}^1, \ldots, v_5^1$ for $j\in\{1,2,3,4,5\}$. Next, we add a vertex $w^i_j$ and join it with $v^i_j$ for $i\in\{1,2\}$, $j\in\{1,2,3,4,5\}$. In addition, we join $w^i_j$ to two arbitrary vertices in $\{v^q_p:(q<i)\vee (q=i\wedge p<j)\}\cup \{w^q_p:(q<i)\vee (q=i\wedge p<j)\}$, $i\in\{2,3,4,5\}$, $j\in\{1,2\}$. Let $G$ be the resulting graph. Observe that $|V(G)|=20$ and the partition $S_1\cup S_2\cup\ldots \cup S_{10}$ of $V(G)$ such that $S_{p+1}=\{v^{s+1}_{r+1},w^{s+1}_{r+1}\}$ for $p\in \{0,\ldots, 9\}$, where $s=\left\lfloor \frac{p}{5}\right\rfloor, r\equiv p \pmod 5$, is a $(2,3)$-partition of $G$. 
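A claimed $(k,d)$-partition can be verified mechanically against inequality (\ref{part-cond}): since the bound $di-1$ increases with $i$, it suffices to order the vertices of each block by their number of neighbours in earlier blocks. The following Python sketch is our own illustration (function name and dict-of-sets graph representation are assumptions, not part of the paper):

```python
def is_kd_partition(parts, adj, k, d):
    """Check whether parts = [S_1, ..., S_{eta+1}] is a (k,d)-partition of the
    graph given by adjacency sets adj: |S_1| <= k, |S_j| = k for j >= 2, and
    each block admits an ordering satisfying |N(x_i) ∩ earlier blocks| <= d*i - 1.
    Sorting back-degrees ascending is enough, as the bound is increasing in i."""
    if len(parts[0]) > k or any(len(S) != k for S in parts[1:]):
        return False
    earlier = set(parts[0])
    for S in parts[1:]:
        back = sorted(len(adj[v] & earlier) for v in S)  # back-degrees, ascending
        if any(b > d * (i + 1) - 1 for i, b in enumerate(back)):
            return False
        earlier |= set(S)
    return True
```

For instance, the path $0-1-2-3$ with blocks $\{0,1\},\{2,3\}$ passes the check for $k=2$, $d=1$, while the same blocks fail on $K_4$, where every vertex of the second block has two back-neighbours.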
\begin{figure} \caption{An exemplary graph $G$ depicted in Example \ref{ex:1}} \label{ex:alg} \end{figure} For the purpose of Example \ref{ex:1}, we assume the following 3-uniform list assignment for the graph from Figure \ref{ex:alg}: $L(v^i_j)=\{1,2,3\}$, for $i\in \{1,2\}$, $L(w^1_j)=\{2,3,4\}$, and $L(w^2_j)=\{1,2,4\}$, $j\in \{1,2,3,4,5\}$, while the given $(2,3)$-partition of $G$ is: $S_p=\{w^{s+1}_{r+1},v^{s+1}_{r+1}\}$, where $s=\left\lfloor \frac{p}{5}\right\rfloor, r\equiv p \pmod 5$, $p\in[10]$. Thus \textsc{Equitable $(L,{\mathcal D}_{d-1})$-colouring} returns an equitable $(L,\mathcal{D}_2)$-colouring of $G$. Note that $20=|V(G)|=\eta \cdot k+r_1=9\cdot 2 +2$. On the other hand, we have $20=|V(G)|=\beta \cdot t +r_2=6 \cdot 3+2$. Observe that $x=0$. When we colour a vertex, we always choose the first colour on its list. The list $S$ determined in lines 3-5 of \textsc{Equitable $(L,{\mathcal D}_{d-1})$-colouring} and the colours assigned to the first part of the vertices of $S$ (lines 18-25) are as follows: $$ \begin{array}{rc|c|c} S=&(v_1^1,w_1^1,& v_2^1,w_2^1,v_3^1,w_3^1,v_4^1,w_4^1,&v_5^1,w_5^1,v_1^2,w_1^2,\ldots,v_5^2,w_5^2) \\ &r_2 &\rho k&\beta \gamma k\\ \mbox{\emph{ colours:}} & 1 \ \ \ 2 & 1\ \ \ 2\ \ \ 1^*\ \ \ 2\ \ \ 2\ \ \ 3\ \ & \\ \end{array} $$ \noindent $^*$: after colouring $v_3^1$ with 1, $L(v_1^2)=\{2,3\}$ - the result of line \ref{delete} in the \textsc{colour\_Ver\-tex} procedure. List $S_{col}$ after \textsc{Reorder}{($S_{col}$)}: $(v_2^1,w_2^1,v_3^1,w_3^1,w_4^1,v_4^1)$ with corresponding colours: $(1,2,1,2,3,2)$. 
$$ \begin{array}{*{13}c} \overline{S}= & (v_5^1, &w_5^1, & v_1^2, & w_1^2,& v_2^2, & w_2^2,& v_3^2, & w_3^2,& v_4^2, & w_4^2,& v_5^2, &w_5^2) \\\hline \rm{lists\ after} & 2& 2& 3 & 1& 2& 2& 1& 1& 1& 1& 1& 1 \\ \rm{procedure} & 3 & 3 & & 4 &3 & 4 & 3 & 4 & 2 & 2 & 3 & 4 \\ \textsc{Modify\_colourList}& & 4 & & & & & & & & 4 & & \\\hline {\rm final}\ c(v) & 2 & 3 & 3 & 1 & 2 & 4 & 1 & 4 & 1 & 2 & 1&4\\ \end{array} $$ \end{example} To prove the correctness of the \textsc{Equitable $(L,{\mathcal D}_{d-1})$-colouring} algorithm, we give some observations and lemmas. \begin{observation}\label{obs:al} The colour function $c$ returned by \textsc{Equitable $(L,{\mathcal D}_{d-1})$-col\-ouring} is constructed step by step. In each step, $c(v)$ is a result of {\rm \textsc{colour\_Ver\-tex}{($v$)}} and this value is not changed further. \end{observation} \begin{observation}\label{obs:al:1} The list assignment $L$, as a part of the input of \textsc{Equitable $(L,{\mathcal D}_{d-1})$-colouring}, is modified for a vertex $v$ by \textsc{colour\_List} or by \textsc{Modify\_colourLists}. \end{observation} \begin{lemma}\label{lem:1} Every time when \textsc{Equitable $(L,{\mathcal D}_{d-1})$-colouring}$(G)$ calls {\rm \textsc{col\-our\_Ver\-tex}{($v$)}}, the list $L(v)$ for $v$ is non-empty, i.e. {\rm \textsc{colour\_Vertex}{($v$)}} is always executable. \end{lemma} \begin{proof} Note that \textsc{colour\_Vertex} is called by \textsc{colour\_List}. Let \noindent $R=\{v\in V(G): \textsc{colour\_Vertex}{(v)} \mbox{ is called when }\; \textsc{colour\_List}(L_R,r_2) \linebreak \mbox{ in line \ref{colour-list-one}}\; \mbox{of }\; \textsc{Equitable} (L,{\mathcal D}_{d-1})\textsc{-colouring} \mbox{ is executed}\}$, \noindent $X=\{v\in V(G): \textsc{colour\_Vertex}{(v)} \mbox{ is called when }\; \textsc{colour\_List}(L_X,x) \linebreak \mbox{ in line \ref{colour-list-two}}\; \mbox{of }\; \textsc{Equitable} (L,{\mathcal D}_{d-1})\textsc{-colouring} \mbox{ is executed}\}$. 
\noindent Let $V_1:=S_1\cup \cdots \cup S_{\eta+1-\beta\gamma}$, $V_2:=V(G)\setminus V_1=S_{\eta+1-(\beta\gamma-1)}\cup \cdots \cup S_{\eta+1}$. Note that \noindent $V_1\setminus (R \cup X)=\{v\in V(G):$ \textsc{colour\_Vertex}${(v)}$ is called when \textsc{colour\_List}$(S',k)$ in line \ref{colour-list-three} of \textsc{Equitable} $(L,{\mathcal D}_{d-1})$ \textsc{-colouring} is executed$\}$ \noindent $V_2=\{v\in V(G):$ \textsc{colour\_Vertex}${(v)}$ is called when \textsc{colour\_List}$(S,\gamma k)$ in line \ref{colour-list-four} of \textsc{Equitable} $(L,{\mathcal D}_{d-1})$\textsc{-colouring} is executed$\}$. \noindent Observe that $|R|=r_2,|X|=x, |V_1\setminus (R\cup X)|=\rho k, |V_2|=\beta\gamma k$. \noindent {\it Case} 1. $v\in R$. In this case, the vertex $v$ is coloured by \textsc{colour\_List}{($L_R,r_2$)} in line \ref{colour-list-one} of \textsc{Equitable $(L,{\mathcal D}_{d-1})$-colouring}. Since $|R|=r_2$, the {\bf while} loop in line \ref{procedureTwo-while-one} of \textsc{colour\_List} is executed only once. The {\bf while} loop in line \ref{procedureTwo-while-two} of \textsc{colour\_List} is executed $r_2$ times. Suppose $v$ is a vertex such that \textsc{colour\_Vertex}{($v$)} is called in the $i$-th execution of the {\bf while} loop in line \ref{procedureTwo-while-two} of \textsc{colour\_List}. By Observation \ref{obs:al:1}, the fact that \textsc{Equitable $(L,{\mathcal D}_{d-1})$-colouring} has not called \textsc{Modify\_colourLists} so far, and because it is the first time when \textsc{colour\_List} works, we have $|C|=i-1$, and $L(v)\setminus C$ is the current list of $v$. Since $t\ge r_2$, the list of $v$ is non-empty. \noindent {\it Case} 2. $v\in X$. This time, the vertex $v$ is coloured by \textsc{colour\_List}{($L_X,x$)} called in line \ref{colour-list-two} of \textsc{Equitable $(L,{\mathcal D}_{d-1})$-colouring}.
Similarly as in \emph{Case} 1, the {\bf while} loop in line \ref{procedureTwo-while-one} of \textsc{colour\_List} is executed only once and the {\bf while} loop in line \ref{procedureTwo-while-two} of \textsc{colour\_List} is executed $x$ times. Suppose that $v$ is a vertex such that \textsc{colour\_Vertex}{($v$)} is called in the $i$-th iteration of the {\bf while} loop in line \ref{procedureTwo-while-two} of \textsc{colour\_List}. Observe that properties of the $(k, d)$-partition $S_1\cup \cdots \cup S_{\eta+1}$ of $G$ (given as the input of \textsc{Equitable $(L,{\mathcal D}_{d-1})$-colouring}) and the \textsc{Reverse} procedure from line \ref{procedureReverse} of \textsc{Equitable $(L,{\mathcal D}_{d-1})$-colouring} imply that $v$ has at most $(x-i+1)d-1+(k-x)$ neighbours $w$ for which \textsc{colour\_Vertex}{($w,d$)} was executed earlier than \textsc{colour\_List}{($L_X,x$)}. More precisely, by the definition of the $(k, d)$-partition, $v$ has at most $(x-i+1)d-1$ neighbours in $R\setminus Y$, where $Y$ consists of the last $k-x$ vertices $w$ for which \textsc{colour\_Vertex}{($w$)} was executed (being called by \textsc{colour\_List}{($L_R,r_2$)}). Thus, at most $k-i$ colours were deleted from $L(v)$ before \textsc{colour\_List}{($L_X,x$)} began. If the {\bf while} loop in line \ref{procedureTwo-while-two} of \textsc{colour\_List} is called for the $i$-th time, then $|C|=i-1$ and so, from the current list $L(v)$ at most $i-1$ elements were deleted. Furthermore, \textsc{Equitable $(L,{\mathcal D}_{d-1})$-colouring} has not called \textsc{Modify\_colourLists} so far. Thus the current size of $L(v)$ is at least $t-k+1$, by Observation \ref{obs:al:1}. Since $t\geq k$, the list of $v$ is non-empty. \noindent {\it Case} 3. $v\in V_1\setminus (R\cup X)$. In this case, the vertex $v$ is coloured during the execution of the {\bf for} loop in line \ref{procedureOne-for-one} of \textsc{Equitable $(L,{\mathcal D}_{d-1})$-colouring}.
Observe that this loop is executed $\rho$ times, and in each of the executions the {\bf while} loop in line \ref{procedureTwo-while-one} of \textsc{colour\_List} is executed only once, while the {\bf while} loop in line \ref{procedureTwo-while-two} of \textsc{colour\_List} is executed $k$ times. Suppose that $v$ is a vertex such that \textsc{colour\_Vertex}{($v$)} is called in the $j$-th execution of the {\bf for} loop in line \ref{procedureOne-for-one} of \textsc{Equitable $(L,{\mathcal D}_{d-1})$-colouring} and the $i$-th execution of the {\bf while} loop in line \ref{procedureTwo-while-two} of \textsc{colour\_List}. Observe that properties of the $(k, d)$-partition $S_1\cup \cdots \cup S_{\eta+1}$ of $G$ (given as input of \textsc{Equitable $(L,{\mathcal D}_{d-1})$-colouring}) and the \textsc{Reverse} procedure from line \ref{procedureReverse} of \textsc{Equitable $(L,{\mathcal D}_{d-1})$-colouring} imply that $v$ has at most $(k-i+1)d-1$ neighbours $w$ for which \textsc{colour\_Vertex}{($w$)} was executed before the $j$-th execution of the {\bf for} loop in line \ref{procedureOne-for-one} of \textsc{Equitable $(L,{\mathcal D}_{d-1})$-colouring} started. Thus from the list of $v$ at most $k-i$ colours were deleted before the $j$-th execution of the {\bf for} loop in line \ref{procedureOne-for-one} of \textsc{Equitable $(L,{\mathcal D}_{d-1})$-colouring} started. If the {\bf while} loop in line \ref{procedureTwo-while-two} of \textsc{colour\_List} is called for the $i$-th time, $|C|=i-1$ and so, from the current list of $v$ at most $i-1$ elements were deleted. Furthermore, \textsc{Equitable $(L,{\mathcal D}_{d-1})$-colouring} has not called \textsc{Modify\_colourLists} so far. Thus the current size of the list $L(v)$ is at least $t-k+1$, by Observation \ref{obs:al:1}. Since $t\geq k$, the list of $v$ is non-empty. \noindent {\it Case} 4. $v\in V_2$.
In this case, the vertex $v$ is coloured by the \textsc{colour\_List}{($S,\gamma k$)} procedure, called by \textsc{Equitable $(L,{\mathcal D}_{d-1})$-colouring} in line \ref{colour-list-four}. Since $|V_2|=\beta\gamma k$, the {\bf while} loop in line \ref{procedureTwo-while-one} of \textsc{colour\_List} is executed $\beta$ times and the {\bf while} loop in line \ref{procedureTwo-while-two} of \textsc{colour\_List} is executed $\gamma k$ times. Suppose that $v$ is a vertex for which \textsc{colour\_Vertex}{($v,d$)} is called in the $j$-th execution of the {\bf while} loop in line \ref{procedureTwo-while-one} of \textsc{colour\_List} and in the $z$-th execution of the {\bf while} loop in line \ref{procedureTwo-while-two} of \textsc{colour\_List}. Let $z=yk+i$, where $0\le y\le \gamma -1,\; 1\le i\le k$. Similarly as before, properties of the $(k, d)$-partition $S_1\cup \cdots \cup S_{\eta+1}$ of $G$ and the \textsc{Reverse} procedure imply that $v$ has at most $(k-i+1)d-1$ neighbours $w$ for which \textsc{colour\_Vertex}{($w$)} was executed before the $j$-th execution of the {\bf while} loop in line \ref{procedureTwo-while-one} of \textsc{colour\_List} began. Thus from the list of $v$ at most $k-i$ elements were deleted before the $j$-th execution of the {\bf while} loop in line \ref{procedureTwo-while-one} of \textsc{colour\_List} started. Furthermore, after the execution of \textsc{Modify\_colourLists} in line \ref{modify-colour-one} from the list of every vertex in $V_2$ at most $r$ elements were removed. If the {\bf while} loop in line \ref{procedureTwo-while-two} of \textsc{colour\_List} is called for the $z$-th time, then $|C|=z-1$ and so, from the initial list $L(v)$ at most $z-1$ elements were deleted. Thus, when \textsc{colour\_Vertex}{($v$)} is called, the size of the current list $L(v)$ is at least $t-r-(k-i)-(z-1)=t-r-k-yk+1$. Since $y\le \gamma-1$ and $t=\gamma k+r$, the list of $v$ is non-empty.
$\Box$ \end{proof} \begin{lemma}\label{com:col} The output of {\rm \textsc{Equitable $(L,{\mathcal D}_{d-1})$-colouring}($G$)} is an $(L,{\mathcal D}_{d-1})$-colouring of $G$. \end{lemma} \begin{proof} We will show that if \textsc{colour\_Vertex}{($v$)} is executed, then the output $c(v)$ always has the following property. For each subgraph $H$ of $G$ induced by vertices $x$ for which \textsc{colour\_Vertex}{($x,d$)} was executed so far with the output $c(x)=c(v)$, the condition $\delta(H)\le d-1$ holds. By Observation \ref{obs:al} and Lemma \ref{lem:1}, this implies that the output $c$ of \textsc{Equitable $(L,{\mathcal D}_{d-1})$-colouring} is an $(L,{\mathcal D}_{d-1})$-colouring of $G$. Note that it is enough to show this fact for $H$ satisfying $v\in V(H)$. Suppose, for a contradiction, that $v$ is a vertex for which the output $c(v)$ does not satisfy the condition, i.e. $v$ has at least $d$ neighbours in the set of vertices for which \textsc{colour\_Vertex} was already executed with the output $c(v)$. But this is not possible because $c(v)$ was removed from $L(v)$ when the last (in the sense of the algorithm steps) of the neighbours of $v$, say $x$, obtained the colour $c(v)$ (\textsc{colour\_Vertex}{($x$)} removed $c(x)$ from $L(v)$ since $v\in N_G^{col}(d,x)$ in this step). $\Box$ \end{proof} \begin{lemma}\label{com:eq} The output colour function $c$ of {\rm \textsc{Equitable $(L,{\mathcal D}_{d-1})$-colouring}($G$)} satisfies $|C_i|\le \left\lceil |V(G)|/t\right\rceil$, where $t$ is part of the input of \textsc{Equitable $(L,{\mathcal D}_{d-1})$-colouring}$(G)$ and $C_i=\{v\in V(G):\; c(v)=i\}$. \end{lemma} \begin{proof} Recall that $\left\lceil |V(G)|/t\right\rceil=\beta+1$. We will show that there exists a partition of $V(G)$ into $\beta+1$ sets, say $W_1\cup \cdots \cup W_{\beta+1}$, such that for each $i\in \{1, \ldots ,\beta+1\}$ any two vertices $x,y$ in $W_i$ satisfy $c(x)\neq c(y)$.
This implies that the cardinality of every colour class in $c$ is at most $\beta+1$, giving the assertion. Note that after the last, $\rho$-th execution of the {\bf for} loop in line \ref{procedureOne-for-one} of \textsc{Equitable $(L,{\mathcal D}_{d-1})$-colouring} the list $S_{col}$ consists of the coloured vertices of the set $V_1\setminus R$ (observe that $|V_1\setminus R|=\beta r$). The elements of $S_{col}$ are ordered in such a way that the first $x$ of them have different colours and, for every $i\in \{1, \ldots ,\rho\}$, the $i$-th subsequent block of $k$ elements has different colours. Now the \textsc{Reorder}($S_{col}$) procedure in line \ref{reorder} of \textsc{Equitable $(L,{\mathcal D}_{d-1})$-colouring} changes the ordering of elements of $S_{col}$ in such a way that every $k$ consecutive elements have different colours. Since $r \in \{0,\ldots,k-1\}$, it follows that every $r$ consecutive elements of this list also have different colours. The execution of \textsc{Reorder}($S_{col}$) is always possible because of the previous assumptions on $S_{col}$. For $i\in \{1, \ldots ,\beta\}$ let $H_i=S_{\eta+1-((\beta-i+1)\gamma-1)}\cup S_{\eta+1-((\beta-i+1)\gamma-2)}\cup \cdots \cup S_{\eta+1-(\beta-i)\gamma}$. Thus $H_1\cup \cdots \cup H_{\beta}$ is a partition of $V_2$ into $\beta$ sets, each of the cardinality $\gamma k$. Note that the vertices of $H_i$ are coloured when \textsc{colour\_List}($S,\gamma k$) in line \ref{colour-list-four} of \textsc{Equitable $(L,{\mathcal D}_{d-1})$-colouring} is executed. More precisely, it is during the $i$-th execution of the {\bf while} loop in line \ref{procedureTwo-while-one} of \textsc{colour\_List}. This guarantees that the vertices of $H_i$ obtain pairwise different colours.
Moreover, in line \ref{modify-list} of \textsc{Equitable $(L,{\mathcal D}_{d-1})$-colouring} the lists of vertices of $H_i$ were modified in such a way that the colours of the $i$-th $r$ elements from the current list $S_{col}$ are removed from the list of each element in $H_i$. Hence, after the execution of \textsc{colour\_List}($S,\gamma k$) in line \ref{colour-list-four} of \textsc{Equitable $(L,{\mathcal D}_{d-1})$-colouring} the elements in $H_i$ obtain colours that are pairwise different and also different from all the colours of the $i$-th $r$ elements from the list $S_{col}$ (recall that $S_{col}$ consists of the ordered vertices of $V_1\setminus R$). Hence, for every $i\in \{1, \ldots ,\beta\}$ the elements of $H_i$ and the $i$-th $r$ elements of $S_{col}$ have pairwise different colours in $c$ and can constitute $W_i$. Moreover, the elements of $R$ constitute $W_{\beta+1}$. Thus $|W_{\beta+1}|=r_2$, which finishes the proof. $\Box$ \end{proof} \begin{theorem} For a given graph $G$ on $n$ vertices, a $t$-uniform list assignment $L$, and a $(k,d)$-partition of $G$, the {\rm \textsc{Equitable $(L,{\mathcal D}_{d-1})$-colouring}($G$)} algorithm returns an equitable $(L,{\mathcal D}_{d-1})$-colouring of $G$ in polynomial time. \end{theorem} \begin{proof} The correctness of the algorithm follows immediately from Lemmas \ref{com:col}, \ref{com:eq}. To determine the complexity of the entire \textsc{Equitable $(L,{\mathcal D}_{d-1})$-colouring}$(G)$, let us first analyse its consecutive instructions: \begin{itemize} \item creating the list $S$: the {\bf for} loop in line \ref{procedureReverse} can be done in $O(n)$ time \item \textsc{colour\_List}{($L_R,r_2$)} in line \ref{colour-list-one}. This procedure calls \textsc{colour\_Vertex}{($v,d$)}, in which the most costly operation is the one depicted in line \ref{delete} of \textsc{colour\_Vertex}. Since we colour the $r_2$ initial vertices, we do not need to check $N_G^{col}(d,v)$ for the first $d$ vertices.
If $r_2\leq d$, then the cost of \textsc{colour\_List}{($L_R,r_2$)} is $O(r_2)$, otherwise $O(r_2 \Delta^2(G))$. \item \textsc{colour\_List}{($L_X,x$)} in line \ref{colour-list-two}. Similarly to the previous case, the complexity of this part can be bounded by $O(x\Delta^2(G))$. \item the {\bf for} loop in line \ref{procedureOne-for-one}. This loop is executed $\eta$ times, $\rho \leq \eta \leq \lceil n/k \rceil$. The complexity of the internal \textsc{colour\_List}{($S',k$)} procedure is $O(k\Delta^2(G))$. So, we have $O(n\Delta^2(G))$ in total. \item \textsc{Reorder}{($S_{col}$)} in line \ref{reorder}. Since $(\eta - \beta \gamma)k\leq n$, we get $O(n)$. \item \textsc{Modify\_colourLists}{($S_{col}, \overline{S}$)}. The double {\bf for} loop implies the complexity $O(\beta \gamma k)=O(n)$. \item \textsc{colour\_List}{($S,\gamma k$)}. The list $S$ is of length $\beta \gamma k$, so the {\bf while} loop in line \ref{procedureTwo-while-one} of \textsc{colour\_List} is executed $\beta$ times, while the internal {\bf while} loop is executed $\gamma k$ times. Taking into account the complexity of \textsc{colour\_Vertex}, we get $O(n\Delta^2(G))$. \end{itemize} We get $O(n\Delta^2(G))$ as the complexity of the entire \textsc{Equitable $(L,{\mathcal D}_{d-1})$-colouring}$(G)$. $\Box$ \end{proof} \section{Grids}\label{grids} Given two graphs $G_1$ and $G_2$, the \emph{Cartesian product} of $G_1$ and $G_2$, $G_1 \square G_2$, is defined to be the graph whose vertex set is $V(G_1) \times V(G_2)$ and whose edge set consists of all edges joining vertices $(x_1,y_1)$ and $(x_2,y_2)$ when either $x_1=x_2$ and $y_1y_2 \in E(G_2)$ or $y_1=y_2$ and $x_1x_2\in E(G_1)$. Note that the Cartesian product is commutative and associative. Hence the graph $G_1 \square \cdots \square G_d$ is unambiguously defined for any $d \in \mathbb{N}$. Let $P_n$ denote a path on $n$ vertices.
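The definition of the Cartesian product translates directly into code. The following Python sketch is ours, not part of the paper; the function and variable names are illustrative. It builds the vertex and edge sets of $G_1 \square G_2$ from vertex lists and edge sets, and checks the identity $|E(G_1 \square G_2)| = |E_1|\,|V_2| + |E_2|\,|V_1|$ on $P_3 \square P_2$.

```python
from itertools import product

def cartesian_product(V1, E1, V2, E2):
    """Cartesian product of two graphs.

    Vertices are given as lists, edges as sets of frozensets.
    (x1, y1) is adjacent to (x2, y2) iff x1 == x2 and {y1, y2} is an
    edge of G2, or y1 == y2 and {x1, x2} is an edge of G1.
    """
    V = list(product(V1, V2))
    E = set()
    for (x1, y1), (x2, y2) in product(V, V):
        if (x1, y1) >= (x2, y2):
            continue  # consider each unordered pair of vertices once
        if (x1 == x2 and frozenset((y1, y2)) in E2) or \
           (y1 == y2 and frozenset((x1, x2)) in E1):
            E.add(frozenset(((x1, y1), (x2, y2))))
    return V, E

# P3 square P2: the 2-dimensional grid with 3*2 = 6 vertices and
# |E1||V2| + |E2||V1| = 2*2 + 1*3 = 7 edges.
P3 = ([0, 1, 2], {frozenset((0, 1)), frozenset((1, 2))})
P2 = ([0, 1], {frozenset((0, 1))})
V, E = cartesian_product(P3[0], P3[1], P2[0], P2[1])
print(len(V), len(E))  # 6 7
```

The same helper can be iterated to obtain $d$-dimensional grids, since the product is associative.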
If each factor $G_i$ is a path on at least two vertices, then $G_1 \square \cdots \square G_d$ is a $d$-\emph{dimensional grid} (cf. Fig.~\ref{p34}). Note that the $d$-dimensional grid $P_{n_1}\square \cdots \square P_{n_d}$, $d \geq 3$, may be viewed as $n_1$ layers, each of which is the $(d-1)$-dimensional grid $P_{n_2}\square \cdots \square P_{n_d}$. We assume $n_1 \geq \cdots \geq n_d$. \begin{figure} \caption{The 3-dimensional grid $P_5 \square P_3 \square P_2$.} \label{p34} \end{figure} Let $P_{n_1} \sqsupset \ldots \sqsupset P_{n_d}$ denote an \textit {incomplete} $d$-dimensional grid, i.e. a connected graph being a subgraph of $P_{n_1} \square \ldots \square P_{n_d}$ such that some of its initial layers may be empty, the first non-empty layer may be incomplete, while every subsequent layer is complete (cf. Fig.~\ref{incom}). Note that every grid is a particular incomplete grid. \begin{figure} \caption{An incomplete grid $P_n \sqsupset P_3 \sqsupset P_2$, $n\geq 5$.} \label{incom} \end{figure} In this subsection we construct a polynomial-time algorithm that, for each $3$-dimensional grid, finds a $(3,2)$-partition (\textsc{Partition3d}($G$)). Application of Theorem \ref{thm:1} implies the main result of this subsection. Note that, using a completely different method, the first statement of Theorem \ref{main3} has already been proven in \cite{DrFrDy18}. \begin{theorem}\label{main3} Let $t\geq 3$ be an integer. Every $3$-dimensional grid is equitably $(t,{\mathcal D}_1)$-choosable. Moreover, there is a polynomial-time algorithm that for every $t$-uniform list assignment $L$ of the $3$-dimensional grid $G$ returns an equitable $(L,{\mathcal D}_{1})$-colouring of $G$. \end{theorem} \begin{algorithm} \renewcommand{Procedure}{Procedure} \caption{\textsc{Corner}($G$)} \Input {Incomplete non-empty $d$-dimensional grid $G=P_{n_1} \sqsupset \ldots \sqsupset P_{n_d}$, $d \geq 2$.} \Output {Vertex $y=(a_1,\ldots,a_d) \in V(G)$ such that $\deg_G(y)\leq d$.
} initialization\; let $a_1$ be the number of the incomplete layer of $G$;\\ \For{$i=2$ \textbf{\emph{to}} $d-1$}{ $a_i=\min\{x_i: \exists_{x_{i+1},\ldots, x_d}(a_1,\ldots, a_{i-1}, x_i, \ldots,x_d)\in V(G)\}.$} $a_d:=\min\{x_d: (a_1,\ldots, a_{d-1}, x_d)\in V(G)\}.$\\ \textbf{return} $(a_1,\ldots,a_d)$; \end{algorithm} \begin{algorithm}[htb] \caption{\textsc{Partition3d}($G$)}\label{alg3d} \Input{$3$-dimensional grid $G=P_{n_1} \square P_{n_2} \square P_{n_3}$.} \Output {A $(3,2)$-partition $S_1 \cup \cdots \cup S_{\alpha+1}$ of $G$.} initialization;\\ $\alpha:=\lceil \frac{n_1n_2n_3}{3} \rceil -1$;\\ \If {$\alpha \geq 1$}{\label{ifalpha} \For{$j:=\alpha +1$ \textbf{\emph{downto}} $2$}{\label{firstfor} $y_1^j=(a_1,a_2,a_3):=$\textsc{Corner}($G$);\label{firstcorner}\\ \If {$\deg(y_1^j)=1$}{\label{deg1} $y_2^j:=$\textsc{Corner}($G - y_1^j$);\label{secondcorner}\\ \eIf {$y_1^j$ is the only vertex on layer $a_1$}{ let $y_3^j$ be any vertex on layer $a_1+1$ such that $y_3^j \neq y_2^j$} { let $y_3^j$ be any vertex on layer $a_1$ such that $y_3^j \neq y_1^j$ and $y_3^j \neq y_2^j$, if one exists, otherwise $y_3^j$ is any vertex on layer $a_1+1$}} \If {$\deg(y_1^j)=2$}{\label{deg2} let $y_2^j$ be the neighbour of $y_1^j$ lying on the same layer as $y_1^j$; \\ let $y_3^j$ be any vertex on layer $a_1$, if one exists, otherwise $y_3^j$ is any vertex on layer $a_1+1$} \If {$\deg(y_1^j)=3$}{\label{deg3} $y_2^j:=(a_1,a_2,a_3+1)$; $y_3^j:=(a_1,a_2+1,a_3)$; } $S_j:=\{y_1^j,y_2^j,y_3^j\}$; $G:=G-S_j$;\\ } } $S_1:=V(G)$;\label{rest}\\ \end{algorithm} \begin{theorem} For a given $3$-dimensional grid $G$ the {\rm {\textsc{Partition3d}($G$)}} algorithm returns a $(3,2)$-partition of $G$ in polynomial time.\label{3grid} \end{theorem} \begin{proof} First, observe that thanks to the condition of the \textbf{if} instruction in line \ref{ifalpha}, the \textbf{for} loop in line \ref{firstfor} is correctly defined and its instructions are performed each time for a graph with at least three
vertices. Thus, the \textsc{Corner} procedure in both line \ref{firstcorner} and line \ref{secondcorner} is called for a non-empty graph, so the returned vertices $y_1^j$ and $y_2^j$ are correctly defined. There is no doubt that the remaining instructions are also correctly defined, and, in consequence, we get sets $S_1,\ldots, S_{\alpha+1}$ fulfilling the conditions: $|S_1|\leq 3$ and $|S_j| = 3$, $j \in \{2,\ldots,\alpha+1\}$. Hence, all we need is to prove that the partition $S_1 \cup \cdots \cup S_{\alpha+1}$ also fulfills condition (\ref{part-cond}) in the definition of the $(k,s)$-partition of $G$. Indeed, let us consider a set $S_j=\{y_1^j, y_2^j, y_3^j\}$. If the condition from line \ref{deg1} of \textsc{Partition3d}($G$) is true, then $\deg_{G-S_j}(y_1^j)=1$, $\deg_{G-S_j}(y_2^j)\leq 3$, while $\deg_{G- S_j}(y_3^j)\leq 5$. The same inequalities hold whenever the condition in line \ref{deg2} holds. If the condition in line \ref{deg3} is true, then $\deg_{G- S_j}(y_1^j)\leq 1$, $\deg_{G-S_j}(y_2^j)\leq 3$, while $\deg_{G-S_j}(y_3^j)\leq 4$. It is easy to see that the complexity of \textsc{Partition3d}($G$) is linear in the number of vertices of $G$. $\Box$ \end{proof} As a consequence of the above theorem and Theorem \ref{thm:1} we get the statement of Theorem \ref{main3}. \section{Concluding remarks} In Subsection \ref{subal} we have proposed a polynomial-time algorithm that finds an equitable $(L,{\mathcal D}_{d-1})$-colouring of a given graph $G$, assuming that we know a $(k,d)$-partition of $G$ ($L$ is a $t$-uniform list assignment for $G$, $t\geq k$). In this context the following open question seems to be interesting: what is the complexity of recognising graphs that have a $(k,d)$-partition? \textbf{Acknowledgment.} The authors thank their colleague Janusz Dybizbański for making several useful suggestions improving the presentation. \end{document}
\begin{document} \title{Convergence of stochastic nonlinear systems and implications for stochastic Model Predictive Control} \author{Diego Mu\~{n}oz-Carpintero and Mark Cannon \thanks{This work was supported in part by FONDECYT-CONICYT Postdoctorado N\textsuperscript{o} 3170040.} \thanks{D.~Mu\~{n}oz-Carpintero is with the Institute of Engineering Sciences, Universidad de O'Higgins, Rancagua, 2841959, Chile, and the Department of Electrical Engineering, FCFM, University of Chile, Santiago 8370451, Chile (e-mail: [email protected]).} \thanks{M.~Cannon is with the Department of Engineering Science, University of Oxford, OX1 3PJ U.K. (e-mail: [email protected]).}} \maketitle \begin{abstract} The stability of stochastic Model Predictive Control (MPC) subject to additive disturbances is often demonstrated in the literature by constructing Lyapunov-like inequalities that ensure closed-loop performance bounds and boundedness of the state, but tight ultimate bounds for the state and non-conservative performance bounds are typically not determined. In this work we use an input-to-state stability property to find conditions that imply convergence with probability~1 of a disturbed nonlinear system to a minimal robust positively invariant set. We discuss implications for the convergence of the state and control laws of stochastic MPC formulations, and we prove convergence results for several existing stochastic MPC formulations for linear and nonlinear systems. \end{abstract} \section{Introduction} \label{sec:introduction} Stochastic Model Predictive Control (MPC) takes account of stochastic disturbances affecting a system using knowledge of their probability distributions or samples generated by an oracle~\cite{kouvaritakisandcannon2015,mesbah2016}.
Stochastic predictions generated by a system model are used to evaluate a control objective, which is typically the expected value of a sum of stage costs, and probabilistic constraints, which allow constraint violations up to some specified maximum probability. The motivation for this approach is to avoid the conservatism of worst-case formulations of robust MPC and to account for stochastic disturbances in the optimization of predicted performance~\cite{mayne2014}. Stability analyses of stochastic MPC can be divided into cases in which disturbances are either additive or multiplicative. For regulation problems with multiplicative disturbances, Lyapunov stability and convergence of the system state to the origin can often be guaranteed \cite{cannonetal2009d,bernardinibemporad2012,farinaetal2016}. However, convergence of stochastic MPC with additive disturbances is often either ignored \cite{matuskoborrelli2012,fagianokhammash2012,schilbachetal2014}, or else analysed using Lyapunov-like functions without identifying ultimate bounds on the state and asymptotic properties of the control law \cite{paulsonetal2015,lietal2018,santosetal2019}. In~\cite{cannonetal2009a,cannonetal2009b,kouvaritakisetal2010} this difficulty is tackled by redefining the cost function and determining asymptotic bounds on the time-average stage cost, which converges to the value associated with the unconstrained optimal control law. In~\cite{chatterjeeandlygeros2015} a detailed analysis of the convergence and performance of stochastic MPC techniques is provided, considering typical stability notions of Markov chains and exploring the effect on stability and performance of the properties of the resulting Lyapunov-like inequalities (or geometric drift conditions). In particular, certain Lyapunov-like inequalities provide the robust notion of input-to-state stability (ISS), which may be used to derive (potentially conservative) ultimate boundedness conditions. 
These results are used for example in~\cite{paulsonetal2015,mishraetal2016} to analyse the stability of particular stochastic MPC formulations. A difficulty with the stability analyses discussed so far is that stability and convergence properties of stochastic MPC may be stronger than what can be inferred directly from Lyapunov-like conditions. These may imply bounds on stored energy, measured for example by a quadratic functional of the system state~\cite{paulsonetal2015}, or that the state is bounded~\cite{goulartandkerrigan2008} without providing a tight limit set. Likewise, limit average performance bounds may be found~\cite{kouvaritakisetal2013} without guarantees that these bounds are tight. However~\cite{lorenzenetal2017} presents a stochastic MPC strategy and a convergence analysis showing almost sure convergence of the system to the minimal robust positively invariant (RPI) set associated with a terminal control law of the MPC formulation. This implies a bound on time-average performance, and moreover the ultimate bounds for the state and average performance are tight. The approach of~\cite{lorenzenetal2017} uses the statistical properties of the disturbance input to show that, for any given initial condition and terminal set (which can be any set that is positively invariant under the terminal control law), there exists a finite interval on which the state of the closed-loop system will reach this terminal set with a prescribed probability. The Borel-Cantelli lemma is then used to show that the terminal set (and therefore the minimal RPI set associated with the terminal control law) is reached with probability 1 on an infinite time interval. It follows that average performance converges with probability 1 to that associated with the MPC law on the terminal set. 
These results are derived from the properties of a particular MPC strategy that may not apply to other MPC formulations; namely that if a predicted control sequence is feasible for the MPC optimization at a sufficiently large number of consecutive sampling instants, then the state necessarily reaches the desired terminal set. Here we extend the analysis of \cite{lorenzenetal2017} to general nonlinear stochastic systems with ISS Lyapunov inequalities. We analyse convergence of the state to any invariant subset of the closed-loop system state space under the assumption that arbitrarily small disturbances have a non-vanishing probability. We thus derive tight ultimate bounds on the state and, for the case that the closed-loop dynamics are linear within the limit set, non-conservative bounds on time-average performance. The convergence analysis applies to stochastic MPC algorithms that ensure ISS, either for input- or state-constrained linear systems (for which the closed-loop dynamics are generally nonlinear), or for nonlinear systems. We apply the analysis to the stochastic MPC strategies of~\cite{kouvaritakisetal2013,goulartetal2006} for linear systems and to that of~\cite{santosetal2019} for nonlinear systems, deriving new convergence results for these controllers. In particular, we establish tight ultimate bounds that had not previously been proved: on the state, for the controllers of \cite{kouvaritakisetal2013,santosetal2019}, and on the time-average performance, for the controller of \cite{kouvaritakisetal2013}. Our analysis uses arguments similar to those in~\cite{lorenzenetal2017}. Specifically, for any given initial condition and any invariant set containing the origin of state space, an ISS Lyapunov function is used to show that the probability of the state not entering this set converges to zero on an infinite time interval. It can be concluded that the state converges with probability~1 to the minimal RPI set for the system.
This analysis is used to show that the closed-loop system state under a stochastic MPC law satisfying a suitable Lyapunov inequality converges with probability~1 to the minimal RPI set associated with the MPC law. This set and the limit average performance can be evaluated non-conservatively (to any desired precision) if the dynamics within the set are linear. An analysis yielding similar results was recently published in~\cite{munozandcannon2019}. Using results given in~\cite{meynandtweedie2009} on the convergence of Markov chains,~\cite{munozandcannon2019} demonstrates convergence (in distribution) to a stationary terminal distribution supported in the minimal RPI set of the system. This convergence was proved by assuming linearity of the dynamics in this terminal set and by imposing a controllability assumption on the system within this set. On the other hand, the analysis presented here uses the properties of ISS Lyapunov functions to provide stronger convergence results that hold with probability~1, and moreover these are demonstrated without the need for linearity or controllability assumptions. Nevertheless, for the special case of linear dynamics, we show here that it is possible to compute the tight ultimate bounds and non-conservative limit average performance bounds. The structure of the paper is as follows. Section 2 introduces the setting of the problem. Section 3 presents the stability analysis for general nonlinear systems under the specified assumptions. Section 4 discusses the implications for stochastic MPC by applying the analysis to derive convergence properties of several existing stochastic MPC formulations (for systems with linear and nonlinear dynamics, and MPC strategies with implicit and explicit predicted terminal controllers). Finally, Section 5 provides concluding remarks.
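As a toy numerical illustration of the kind of convergence discussed above (this is our own example, not one of the cited MPC formulations; the scalar dynamics and bounds are assumptions made for illustration), consider $x_{k+1} = 0.5\,x_k + w_k$ with $|w_k| \le 1$. Here the interval $[-2,2]$ is the minimal RPI set, and the contraction $|x_{k+1}| \le 0.5\,|x_k| + 1$ forces every trajectory into it regardless of the disturbance realisation.

```python
import random

def simulate(x0, steps, seed=0):
    """Simulate x_{k+1} = 0.5 * x_k + w_k with w_k uniform on [-1, 1]."""
    rng = random.Random(seed)
    x = x0
    for _ in range(steps):
        x = 0.5 * x + rng.uniform(-1.0, 1.0)
    return x

# From |x_{k+1}| <= 0.5 * |x_k| + 1 it follows that
# |x_k| <= 0.5**k * |x_0| + 2 * (1 - 0.5**k),
# so every trajectory enters the minimal RPI set [-2, 2].
finals = [simulate(10.0, 200, seed=s) for s in range(100)]
print(max(abs(x) for x in finals))  # all sampled trajectories end inside [-2, 2]
```

The geometric decay in the bound mirrors the role of the ISS Lyapunov inequality in the analysis: the contribution of the initial condition vanishes, leaving only the disturbance-driven term that defines the limit set.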
\subsection{Basic definitions and notation} The sets of non-negative integers and non-negative reals are denoted ${\mathbb N}$ and ${\mathbb R}_+$, $\mathbb{N}_{[a,b]}$ is the set of integers $\{a,a+1,\ldots,b\}$ and $\mathbb{N}_k = \mathbb{N}_{[0,k]}$. For a sequence $\{x_0,x_1,\ldots\}$, $x_{j|k}$ for $j\in\mathbb{N}$ denotes the predicted value of $x_{k+j}$ made at time $k$. For sets $X,Y\subseteq{\mathbb R}^n$, the Minkowski sum is given by $X\oplus Y=\{x+y : x\in X,\, y\in Y \}$. The Minkowski sum of a sequence of sets $\{X_j , \, j\in\mathbb{N}_k\}$ is denoted $\bigoplus_{j=0}^k X_j$. For $X\subseteq{\mathbb R}^n$, ${\bf 1}_X(x)$ is the indicator function of $x\in X$. The open unit ball in ${\mathbb R}^n$ is ${\mathbb B}$. A function $\phi:{\mathbb R}_+ \rightarrow {\mathbb R}_+$ is a ${\mathcal K}$-function if it is continuous, strictly increasing and $\phi(0)=0$, and it is a ${\mathcal K}_\infty$-function if it is a ${\mathcal K}$-function and $\phi(s)\rightarrow \infty$ as $s\rightarrow \infty$. A continuous function $\phi:{\mathbb R}_+\times{\mathbb R}_+\rightarrow {\mathbb R}_+$ is a ${\mathcal K} {\mathcal L}$-function if $\phi(\cdot,t)$ is a ${\mathcal K}$-function for all $t\in{\mathbb R}_+$ and if $\phi(s,\cdot)$ is decreasing for all $s\in{\mathbb R}_+$ with $\phi(s,t)\rightarrow 0$ as $t\rightarrow \infty$. The probability of an event $A$ is denoted ${\mathbb P}(A)$. We use the term \textit{limit set} to refer to an invariant subset of the state space to which the system converges. \section{Problem Setting}\label{sec2} Consider a discrete time nonlinear system given by \begin{equation} \label{eq1} x_{k+1} =f(x_k,w_k), \end{equation} where $x_{k}\in \mathbb{X}\subseteq{\mathbb R}^{n} $ is the state, $w_{k}\in\mathbb{W}\subseteq{\mathbb R}^{n_w}$ is the disturbance input, $f:\mathbb{X}\times\mathbb{W}\rightarrow \mathbb{X}$ is a function with ${f(0,0)=0}$, and $\mathbb{X}$ contains the origin in its interior.
Current and future values of $w_k$ are unknown. \begin{assum}\label{distass} The disturbance sequence $\{w_0, w_1,\ldots\}$ is independent and identically distributed (i.i.d.) with ${\mathbb E}\{w_k\}=0$. The probability density function (PDF) of $w$ is supported in $\mathbb{W}$, a bounded set that contains the origin in its interior. Additionally, ${\mathbb P}\{\norm{w}\le \lambda \}>0$ for all $\lambda>0$. \end{assum} The requirement that disturbance inputs are i.i.d.\ and zero-mean is a standard assumption in stochastic MPC formulations in the literature. The assumption that ${\mathbb P}\{\norm{w}\le \lambda \}>0$ for all $\lambda>0$ clearly excludes certain disturbance distributions, but we note that it does not require continuity of the PDF and is satisfied by uniform and Gaussian distributions (or truncated Gaussian, to comply with Assumption \ref{distass}), among many others. While the dynamics of system \eqref{eq1} are defined on $\mathbb{X}$, we consider conditions for convergence to an invariant set $\Omega\subseteq\mathbb{X}$, characterized in Assumption \ref{terminalass}. The analysis of convergence is based on the notion of input-to-state stability~\cite{jiangandwang2001}. \begin{assum}\label{terminalass} The set $\Omega$ is bounded, contains the origin in its interior and is RPI, i.e.\ $f(x,w)\in\Omega$ for all $(x,w)\in\Omega\times\mathbb{W}$. \end{assum} \begin{defn}[Input-to-state stability] System \eqref{eq1} is input-to-state stable (ISS) with region of attraction $\mathbb{X}$ if there exist a ${\mathcal K}{\mathcal L}$-function $\beta(\cdot,\cdot)$ and a ${\mathcal K}$-function $\gamma(\cdot)$ such that, for all $k\in{\mathbb N}$, all $x_0\in\mathbb{X}$ and all admissible disturbance sequences $\{w_0,\ldots,w_{k-1}\} \in \mathbb{W}^k$, the state of~\eqref{eq1} satisfies $x_k\in\mathbb{X}$ and \begin{equation} \label{iss} \norm{x_k}\le \beta(\norm{x_0},k)+\gamma\left( \max_{j\in \{0,\ldots,k-1 \}} \norm{w_j}\right). 
\end{equation} \end{defn} The following theorem provides a necessary and sufficient condition for system (\ref{eq1}) to be ISS. \begin{thm}[\cite{jiangandwang2001}]\label{lemiss} System \eqref{eq1} is ISS with region of attraction~$\mathbb{X}$ if, and only if, there exist ${\mathcal K}_\infty$-functions $\alpha_1(\cdot)$, $\alpha_2(\cdot)$, $\alpha_3(\cdot)$, a ${\mathcal K}$-function $\sigma(\cdot)$ and a continuous function $V:\mathbb{X}\rightarrow {\mathbb R}_+$ such that \begin{subequations} \begin{align} & \alpha_1(\norm{x})\le V(x) \le \alpha_2(\norm{x}), \quad \forall x\in \mathbb{X}, \label{lyapkappa} \\ & V\left(f(x,w)\right)-V(x)\le -\alpha_3(\norm{x})+\sigma(\norm{w}), \label{isslyap} \end{align} \end{subequations} for all $(x,w)\in\mathbb{X}\times\mathbb{W}$. In this case we say that $V(\cdot)$ is an ISS-Lyapunov function. \end{thm} \begin{assum}\label{lyapunov} System \eqref{eq1} is ISS with region of attraction~$\mathbb{X}$. \end{assum} Assumption~\ref{lyapunov} implies that: (i) the origin of the state-space of the system $x_{k+1}=f(x_k,0)$ is asymptotically stable; (ii) all trajectories of \eqref{eq1} are bounded since $\mathbb{W}$ is a bounded set; and (iii) all trajectories of \eqref{eq1} converge to the origin if $w_k\rightarrow0$ as $k\rightarrow\infty$. For details we refer the reader to~\cite{jiangandwang2001}. We assume throughout this work that the system~\eqref{eq1} possesses an ISS-Lyapunov function, and hence that Assumption~\ref{lyapunov} holds. This assumption does not directly guarantee convergence to the set $\Omega$, however. Instead we combine this property with the stochastic nature of disturbances satisfying Assumption~\ref{distass} to prove convergence to $\Omega$ in Section~\ref{sec3}. \section{Main result}\label{sec3} As discussed above, ISS is not generally sufficient to determine non-conservative ultimate bounds for the state of~\eqref{eq1}.
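Before proceeding, a concrete example of an ISS-Lyapunov function may help fix ideas. For the scalar system $x_{k+1}=x_k/2+w_k$ (our illustration, not taken from~\cite{jiangandwang2001}), completing the square gives $(x/2+w)^2-x^2\le -x^2/2+2w^2$, so the conditions of Theorem~\ref{lemiss} hold with $V(x)=x^2$, $\alpha_1(s)=\alpha_2(s)=s^2$, $\alpha_3(s)=s^2/2$ and $\sigma(s)=2s^2$. The following sketch checks the dissipation inequality numerically on a sample grid.

```python
import itertools

# Scalar illustration (ours): x+ = f(x, w) = x/2 + w with candidate
# ISS-Lyapunov function V(x) = x^2. Completing the square shows
# V(f(x,w)) - V(x) <= -x^2/2 + 2 w^2, i.e. alpha3(s) = s^2/2, sigma(s) = 2 s^2.

def f(x, w):
    return 0.5 * x + w

def V(x):
    return x * x

def iss_inequality_holds(xs, ws, tol=1e-12):
    """Check V(f(x,w)) - V(x) <= -x^2/2 + 2*w^2 for all sample pairs."""
    return all(
        V(f(x, w)) - V(x) <= -0.5 * x * x + 2.0 * w * w + tol
        for x, w in itertools.product(xs, ws)
    )

xs = [k / 10.0 for k in range(-50, 51)]    # states in [-5, 5]
ws = [k / 100.0 for k in range(-10, 11)]   # disturbances in [-0.1, 0.1]
print(iss_inequality_holds(xs, ws))        # True
```

The inequality holds for all $(x,w)$, since it is equivalent to $0\le (x/2-w)^2$.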
However, Section \ref{ssmainXf} shows that under Assumptions~\ref{distass} and~\ref{lyapunov} the state of~\eqref{eq1} converges with probability 1 to any set $\Omega$ satisfying Assumption~\ref{terminalass}, and thus to the minimal RPI set defined by the intersection of all such sets. Section \ref{secterminallinear} considers the special case of linear dynamics on $\Omega$ (for which the minimal RPI set can be determined with arbitrary precision); while~\cite{lorenzenetal2017} proves this for a particular MPC algorithm, in this section we extend the treatment to more general systems. \subsection{Convergence to $\Omega$}\label{ssmainXf} This section uses the ISS property of Assumption~\ref{lyapunov} to demonstrate almost sure convergence to a set $\Omega$ satisfying Assumption~\ref{terminalass} if the disturbance satisfies Assumption~\ref{distass}. The general idea is as follows. We note that for any state $x \notin \Omega$ there exists a set $W\subseteq \mathbb{W}$ such that the ISS-Lyapunov function decreases if $w\in W$ and such that $w\in W$ occurs with non-zero probability. It follows that, for any given $x_0\in \mathbb{X}$, there exists $N_f\in\mathbb{N}$ such that if $w\in W$ for $N_f$ consecutive time steps, then the Lyapunov function decreases enough to ensure that the state enters $\Omega$. This observation is used to show that, for any given $p\in(0,1]$, there exists a finite horizon over which $x$ reaches $\Omega$ with probability at least $1-p$. Finally, the Borel-Cantelli lemma is used to conclude that the state converges to $\Omega$ with probability 1, which implies that, for any $x_0\in \mathbb{X}$, there is zero probability of a disturbance sequence realisation $\{w_0,w_1,\ldots\}$ such that $x_k$ does not converge to $\Omega$ as $k\to\infty$.
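This mechanism can be observed in a small Monte Carlo experiment (our sketch; the system, disturbance distribution and sample sizes are illustrative assumptions). For $x_{k+1}=x_k/2+w_k$ with $w_k$ uniform on $[-1/2,1/2]$, the minimal RPI set is $[-1,1]$, and every simulated trajectory ends inside it.

```python
import random

# Monte Carlo sketch (ours): x+ = x/2 + w with w ~ Uniform[-1/2, 1/2].
# The minimal RPI set is [-1, 1]; the argument above predicts that every
# trajectory reaches it (up to numerical tolerance) with probability 1.

def simulate(x0, steps, rng):
    x = x0
    for _ in range(steps):
        x = 0.5 * x + rng.uniform(-0.5, 0.5)
    return x

rng = random.Random(0)
finals = [simulate(10.0, 200, rng) for _ in range(1000)]
inside = sum(abs(x) <= 1.0 + 1e-9 for x in finals)
print(inside == len(finals))  # True: all sampled trajectories end in [-1, 1]
```

Here the bound is in fact deterministic after long horizons, since $|x_k|\le 1+ 9\cdot 2^{-k}$ from any $x_0=10$; the probabilistic argument above is what guarantees the state actually enters the set in finite time.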
\begin{prop}\label{propW} Under Assumption~\ref{lyapunov}, for given $\lambda \in (0,1)$ let $W(z) = \{w:\sigma(\norm{w})\le \lambda \alpha_3(z)\}$, where $\sigma(\cdot)$ and $\alpha_3(\cdot)$ are a $\mathcal{K}$-function and a $\mathcal{K}_\infty$-function satisfying \eqref{isslyap}, and where $z\in \mathbb{R}_+$. Then there exists a ${\mathcal K}$-function $\xi(\cdot)$ such that the ISS-Lyapunov function $V(\cdot)$ satisfies \begin{equation}\label{lyapstrict} V\bigl( f(x,w) \bigr)- V(x) \leq -\xi(\norm{x}) , \end{equation} whenever $w\in W(\norm{x})$. \end{prop} \begin{IEEEproof} Note that $-\alpha_3(\norm{x})+\sigma(\norm{w})\le -(1-\lambda)\alpha_3(\norm{x})$ if $w\in W(\norm{x})$. It follows immediately that \eqref{isslyap} implies \eqref{lyapstrict} with $\xi(\norm{x})=(1-\lambda)\alpha_3(\norm{x})$. \end{IEEEproof} \begin{prop}\label{propNfXf} Under Assumptions \ref{terminalass} and \ref{lyapunov}, for any $x_0\in \mathbb{X}$ there exists an integer $N_f=\lceil \alpha_2(r(x_0))/\xi(\epsilon) \rceil$ such that, if $w_j\in W(\epsilon)$ for $j=k,k+1,\ldots k+N_f-1$, where $\epsilon = \sup \{ \rho : \rho {\mathbb B}\subseteq \Omega\}$ and $k\in\mathbb{N}$ is arbitrary, then $x_{k+N_f}\in \Omega$. \end{prop} \begin{IEEEproof} The positive invariance of $\Omega$ in Assumption~\ref{terminalass} implies that $x_{k+N}\in \Omega$ for all $N\in\mathbb{N}$ if $x_{k} \in \Omega$. Suppose therefore that $x_j\notin \Omega$ for $j=k,\ldots,k+N-1$ for given $N\in \mathbb{N}$. In this case $W(\norm{x_j}) \supseteq W(\epsilon)$.
Then, if $w_j\in W(\epsilon)$ for $j=k,\ldots,k+N-1$, Proposition \ref{propW} implies \[ \sum_{j=k}^{k+N-1} \xi(\norm{x_j}) \leq V(x_k) - V(x_{k+N}) , \] and since Assumption~\ref{lyapunov} implies $V(x_k) \leq \alpha_2(\norm{x_k}) \leq \alpha_2(r(x_0))$, where $r(x_0) = \beta(\norm{x_0},0) + \gamma(\sup_{w\in\mathbb{W}}\norm{w})$ for some ${\mathcal K}{\mathcal L}$-function $\beta(\cdot,\cdot)$ and some ${\mathcal K}$-function $\gamma(\cdot)$, the right-hand side of this inequality can be no greater than $\alpha_2(r(x_0))$. Furthermore, $x_j\notin \Omega$ implies $\xi(\norm{x_j}) \geq \xi(\epsilon)$ and hence $N \leq \alpha_2(r(x_0))/\xi(\epsilon)$. Choosing $N=N_f=\lceil \alpha_2(r(x_0))/\xi(\epsilon) \rceil$ therefore ensures that $x_{k+N_f}\in \Omega$. \end{IEEEproof} \begin{lem}\label{lemkey} Under Assumptions \ref{distass}-\ref{lyapunov}, for any $x_0\in \mathbb{X}$ and any given $p\in (0,1]$ there exists an integer $N_p$ such that \begin{equation}\label{lemkeyeq} {\mathbb P} \{ x_{N_p} \in \Omega \} \geq 1 - p. \end{equation} \end{lem} \begin{IEEEproof} Proposition~\ref{propNfXf} and the positive invariance of $\Omega$ in Assumption~\ref{terminalass} ensure that, for any $x_0$ and $k\in{\mathbb N}$, ${x_{k+{N_f}}\in\Omega}$ whenever $w_j\in W(\epsilon)$ for all $j \in\{ k,\ldots,k+N_f-1\}$ with $N_f=\lceil \alpha_2(r(x_0))/\xi(\epsilon) \rceil$. Let $p_\epsilon$ denote the probability that $w_j\in W(\epsilon)$; note that $p_\epsilon>0$ by Assumption~\ref{distass}, since $W(\epsilon)$ contains a neighbourhood of the origin. The i.i.d.\ property of Assumption~\ref{distass} then implies $\mathbb{P}\{{ w_j \in W(\epsilon)}, \, j = k,\ldots,k+N_f-1\} = p_{\epsilon}^{N_f} > 0$ and, since the event that $w_j \notin W(\epsilon)$ for some $j \in \{k,\ldots,k+N_f-1\}$ does not guarantee that $x_{k+N_f}\notin\Omega$, we obtain the bound \[ \mathbb{P} \{ x_{k+N_f}\!\notin\!\Omega\} \leq 1 - \mathbb{P} \bigl\{ w_j \!\in\!
W(\epsilon), \, j = k,\ldots,k+N_f-1\bigr\} = 1 - p_{\epsilon}^{N_f} . \] Now consider $k > N_f$, and choose $j_0,j_1,\ldots , j_{\lfloor k/N_f \rfloor}$ so that $j_0 = 0$, $j_{\lfloor k/N_f \rfloor} \leq k$ and $j_{i+1} - j_i \geq N_f$ for all $i$. Then $x_{k}\notin\Omega$ only if $x_{j_i}\notin \Omega$ for all $i=0,\ldots, \lfloor k/N_f \rfloor$. Hence \begin{equation}\label{eq:pAc} \mathbb{P} \{ x_{k}\notin\Omega\} \leq (1 - p_\epsilon^{N_f})^{\lfloor k/ N_f \rfloor} \end{equation} and (\ref{lemkeyeq}) therefore holds with $N_p = N_f \lceil \log p /\log (1-p_\epsilon^{N_f})\rceil$ if $p_\epsilon < 1$ or $N_p = N_f$ if $p_\epsilon = 1$. \end{IEEEproof} Proposition~\ref{propNfXf} establishes that the state of (\ref{eq1}) reaches $\Omega$ in finite time if the disturbance input is small enough for a sufficiently large number of consecutive time steps. Lemma~\ref{lemkey} uses a lower bound on the probability of this event to show that the probability of the state reaching this set is arbitrarily close to 1 on a long enough (but finite) horizon. Furthermore, for any $k \in\mathbb{N}$, the argument used to prove Lemma~\ref{lemkey} implies \[ \mathbb{P} \{ x_k \in \Omega \} \geq 1 - (1 - p_\epsilon^{N_f})^{\lfloor k/ N_f \rfloor} , \] and an immediate consequence is that $x_k$ converges in probability to $\Omega$, since \[ \lim_{k\to\infty} \mathbb{P} \{ x_k \in \Omega \} = 1. \] We next use the Borel-Cantelli lemma (in a similar fashion to~\cite{lorenzenetal2017}) to prove the slightly stronger property that the state converges to $\Omega$ with probability 1. \begin{thm}\label{mainthm1} Under Assumptions \ref{distass}-\ref{lyapunov}, for any $x_0\in\mathbb{X}$ we have \begin{equation}\label{eqthm1} \mathbb{P} \Bigl\{ \lim_{k\rightarrow\infty} 1_{ \Omega}(x_k)=1 \Bigr\} =1.
\end{equation} \end{thm} \begin{IEEEproof} Let $A_k$ denote the event $x_k\notin \Omega$; then \eqref{eq:pAc} in the proof of Lemma \ref{lemkey} implies \[ \sum_{k=0}^\infty \mathbb{P} \{ A_k\} \leq \sum_{k=0}^\infty \bigl( 1 - p_\epsilon^{N_f}\bigr)^{\lfloor k/N_f\rfloor} = N_fp_\epsilon^{-N_f} < \infty \] and the Borel-Cantelli lemma therefore implies \[ \mathbb{P} \biggl\{ \bigcap_{k=0}^\infty \bigcup_{j=k}^\infty A_j \biggr\} = 0 . \] But $A_{k+1} \subseteq A_k$ for all $k\in\mathbb{N}$ since $\Omega$ is RPI due to Assumption \ref{terminalass}. It follows that $\mathbb{P} \{ \cap_{k=0}^\infty A_k \} = 0$, which is equivalent to $\mathbb{P} \{ \lim_{k\to \infty} A_k \} = 0$ and hence~(\ref{eqthm1}). \end{IEEEproof} \begin{defn}[Minimal RPI set] The minimal RPI set for system (\ref{eq1}) containing the origin, denoted $\mathbb{X}_\infty$, is defined as the intersection of all sets $X$ such that $0\in X \subseteq \mathbb{X}$ and $f(x,w)\in X$ for all $(x,w)\in X\times \mathbb{W}$. \end{defn} \begin{rem}~\label{rem:mRPI} Theorem \ref{mainthm1} applies to any RPI set $\Omega$ that contains the origin in its interior. In particular $\Omega=\mathbb{X}_\infty$ is the smallest set that can satisfy Assumption \ref{terminalass}~\cite[{Prop.~6.13}]{blanchini08}. Moreover, since any smaller set is not robustly invariant and the state can therefore escape it, $\mathbb{X}_\infty$ is the smallest set to which the state of (\ref{eq1}) converges with probability 1. \end{rem} \begin{cor}\label{mainthm2} Let Assumptions~\ref{distass}-\ref{lyapunov} hold and let $x_0\in\mathbb{X}$. Then \begin{equation}\label{convlimit} \mathbb{P} \Bigl\{ \lim_{k\rightarrow\infty} 1_{ \mathbb{X}_\infty}(x_k)=1 \Bigr\} = 1 . \end{equation} \end{cor} \begin{IEEEproof} If Assumption \ref{terminalass} holds for some RPI set, then the minimal RPI set $\mathbb{X}_\infty$ exists and also satisfies this assumption. The result then follows directly from Theorem \ref{mainthm1}.
\end{IEEEproof} In this section we have demonstrated convergence with probability 1 of the state of~\eqref{eq1} to any RPI set containing the origin in its interior. Remark~\ref{rem:mRPI} and Corollary~\ref{mainthm2} therefore imply that the minimal RPI set $\mathbb{X}_\infty$ is a tight limit set of~\eqref{eq1}. This improves on the result of~\cite{munozandcannon2019}, where convergence to $\mathbb{X}_\infty$ in probability is shown for the case that $f(x,w)=Ax+Dw$ for all $(x,w)\in\Omega\times\mathbb{W}$, where $(A,D)$ is controllable. \subsection{Convergence to a limit set with linear dynamics} \label{secterminallinear} Of particular interest when analysing stochastic MPC algorithms for constrained linear systems is the case in which the dynamics of system (\ref{eq1}) are linear on an RPI set containing the origin. In this case the minimal RPI set $\mathbb{X}_\infty$ defining ultimate bounds for the state and the limit average performance can be explicitly determined. \begin{assum}\label{terminallinearass} There exists an RPI set $\Gamma\subseteq\mathbb{X}$, such that $f(x,w)=\Phi x+Dw$ for all $(x,w)\in\Gamma\times\mathbb{W}$, where $\Phi,D$ are matrices with appropriate dimensions and $\Phi$ is Schur stable. \end{assum} Corollary~\ref{mainthm2} implies that the state of (\ref{eq1}) converges with probability~1 from any initial condition in $\Gamma$ to the minimal RPI set $\mathbb{X}_\infty$ given by \begin{equation}\label{eq:mRPIset} \mathbb{X}_\infty=\lim_{k\rightarrow\infty}\bigoplus_{j=0}^{k-1} \Phi^j D\mathbb{W} . \end{equation} This set can be computed with arbitrary precision (see e.g.~\cite{blanchini08} for details). We next consider the asymptotic time-average value of a quadratic function of the state of (\ref{eq1}), representing, for example, a quadratic performance cost. \begin{thm}\label{mainthm3} Let Assumptions~\ref{distass}-\ref{terminallinearass} hold and let $x_0\in\mathbb{X}$.
Then \begin{equation}\label{perflimit} \lim_{k\rightarrow\infty} \frac{1}{k}\sum_{j=0}^{k-1}{\mathbb E}\{x_j^\top Sx_j \} \leq l_{ss} \end{equation} for any given $S=S^\top\succ 0$ where $l_{ss}={\mathbb E}\{w^\top D^\top PDw \}$ with $P\succ 0$ satisfying $P-\Phi^\top P\Phi = S$. \end{thm} \begin{IEEEproof} Let $V(x)=x^\top Px$. If $x_j\in\Gamma$, then \[ {\mathbb E}\{V(x_{j+1}) \}-{\mathbb E}\{V(x_j) \}= -{\mathbb E}\{x_j^\top Sx_j \}+{\mathbb E}\{w_j^\top D^\top PDw_j \}. \] On the other hand, if $x_j\notin \Gamma$, then \begin{align*} &{\mathbb E}\{V(x_{j+1}) \}-{\mathbb E}\{V(x_j) \} \\ &= {\mathbb E}\{f(x_j,w_j)^\top Pf(x_j,w_j) \}-{\mathbb E}\{x_j^\top Px_j \} \\ & ={\mathbb E}\{\delta(x_j,w_j)\}\!+\!{\mathbb E}\{(\Phi x_{j\!}\!+\! D w_{j\!})^{\!\!\top\!} \! P (\Phi x_{j\!}\!+\! D w_{j\!} )\} \!-\!{\mathbb E}\{x_{j\!}^{\!\top} \! Px_{j\!}\} \\ & ={\mathbb E}\{\delta(x_j,w_j)\}-{\mathbb E}\{x_j^\top Sx_j \}+{\mathbb E}\{w_j^\top D^\top PDw_j \}, \end{align*} where $\delta(x_{j}, w_{j}) = f(x_{j}, w_{j})^{\top} P f(x_{j}, w_{j}) - (\Phi x_{j}+ Dw_{j})^{\top} P(\Phi x_{j}+ Dw_{j})$ and $\mathbb{E}\{\delta(x_j,w_j)\}\le \nu$ for finite $\nu$ since $x_j$, $w_j$ are bounded due to \eqref{iss} and Assumption~\ref{distass}. Therefore \[ {\mathbb E}\{V(x_{j+1}) \}-{\mathbb E}\{V(x_j) \}\le \nu{\mathbb P}\{x_j\!\notin\! \Gamma\} - {\mathbb E}\{x_j^\top Sx_j \} + l_{ss}. \] Summing both sides of this inequality over $0\leq j < k$ yields \[ {\mathbb E}\{V(x_{k}) \}-{\mathbb E}\{V(x_0) \} \!\leq\! \sum_{j=0}^{k-1}\bigl( \nu{\mathbb P}\{x_j\!\notin\! \Gamma \} -{\mathbb E}\{x_j^\top Sx_j \}\bigr) + kl_{ss} . \] Here $\sum_{j=0}^{k-1}{\mathbb P}\{x_j\notin \Gamma \} \leq N_f p_\epsilon^{-N_f\!\!}$ and ${\mathbb E}\{V(x_{k}) \}$ is finite for all $k$ due to (\ref{iss}). We therefore obtain (\ref{perflimit}) in the limit as $k\to\infty$. 
\end{IEEEproof} Under Assumptions \ref{distass}-\ref{terminallinearass}, the state of \eqref{eq1} therefore converges with probability 1 to the minimal RPI set $\mathbb{X}_\infty$ and the time-average performance converges to its limit average on this set. Moreover the bound in \eqref{perflimit} is tight because $l_{ss}$ is equal to the time-average performance associated with the linear dynamics defined in Assumption \ref{terminallinearass}. \section{Implications for stability and convergence of stochastic MPC}\label{sec4} This section uses the results of Section~\ref{sec3} to analyse the convergence of three existing stochastic MPC algorithms to a limit set of the closed-loop system. The first of these is for linear systems and assumes a control policy that is an affine function of the disturbance input~\cite{goulartetal2006,goulartandkerrigan2008}. For this approach, convergence to the minimal RPI set was shown in~\cite{wangetal2008} by redefining the cost function and control policy; here we provide analogous results for the original formulation in~\cite{goulartandkerrigan2008}. The second MPC algorithm is also for linear systems, but it assumes an affine disturbance feedback law with a different structure (striped and extending across an infinite prediction horizon), for which the gains are computed offline~\cite{kouvaritakisetal2013}. The proof of convergence for this second MPC formulation is provided here for the first time. The third is a generic stochastic MPC algorithm for nonlinear systems based on constraint-tightening~\cite{santosetal2019}. Only ISS is proved in~\cite{santosetal2019}; here we demonstrate convergence to the minimal RPI set. The system dynamics for the nonlinear stochastic MPC formulation are defined in Section \ref{nlsmpc}.
On the other hand, Sections~\ref{sec:gandk} and~\ref{sec:kcandm} assume dynamics defined by \begin{equation}\label{lindyn} x_{k+1}=Ax_k+Bu_k+Dw_k \end{equation} where $A,B,D$ are matrices of compatible dimensions, and $(A,B)$ is stabilizable. A measurement of the current state, $x_k$, is assumed to be available at time $k$, but current and future values of $w_k$ are unknown. In each case the disturbance sequence $\{w_0,w_1,\ldots\}$ is assumed to be i.i.d.\ with $\mathbb{E}\{w_k\} = 0$, and the PDF of $w$ is supported in a bounded set $\mathbb{W}$ containing the origin in its interior. These assumptions are included in~\cite{goulartandkerrigan2008,kouvaritakisetal2013, santosetal2019}; here we assume in addition that ${\mathbb P}\{\norm{w}\le \lambda \}>0$ for all $\lambda>0$ so that Assumption~\ref{distass} holds.
To avoid the computational load of optimizing an arbitrary feedback policy, predicted control inputs are parameterized for $i\in\mathbb{N}_{N-1}$ as \begin{equation} u_{i|k}=v_{i|k}+\sum_{j=0}^{i-1}M_{i,j|k}w_{j|k} , \end{equation} where the open-loop control sequence ${\bf v}_k=\{v_{i|k}, i\in\mathbb{N}_{N-1}\}$ and feedback gains ${\bf M}_k =\{M_{i,j|k}, \, j\in\mathbb{N}_{i-1}, \, i\in\mathbb{N}_{[1,N-1]}\}$ are decision variables at time $k$. For all $i\geq N$, predicted control inputs are defined by $u_{i|k}=Kx_{i|k}$, where $A+BK$ is Schur stable. The predicted cost at time $k$ is defined as \[ J(x_k,{\bf v}_k,{\bf M}_k) ={\mathbb E}\Bigl\{x_{N|k}^\top Px_{N|k}+\sum_{i=0}^{N-1} (x_{i|k}^\top Qx_{i|k}+u_{i|k}^\top Ru_{i|k}) \Bigr\} \] where $Q \succeq 0$, $R\succ 0$, $(A,Q^{1/2})$ is detectable, and $P\succ 0$ is the solution of the algebraic Riccati equation $P=Q+A^\top PA-K^\top (R+B^\top PB)K$, $K=-(R+B^\top PB)^{-1}B^\top PA$~\cite{goulartandkerrigan2008}. A terminal constraint, $x_{N|k}\in\mathbb{X}_f$, is included in the optimal control problem, where $\mathbb{X}_f$ is an RPI set for the system \eqref{lindyn} with control law $u_k=Kx_k$ and constraints $(x_k,Kx_k)\in\mathbb{Z}$. The optimal control problem solved at the $k$th instant is \begin{align*} \mathcal{P}_1: \ \ & \min_{{\bf v}_k,{\bf M}_k} & & J(x_k,{\bf v}_k,{\bf M_k}) \quad \text{s.t.} \quad \forall w_{i|k}\in\mathbb{W}, \ \forall i\in\mathbb{N}_{N-1} \\ &&& u_{i|k}=v_{i|k}+\sum_{j=0}^{i-1}M_{i,j|k}w_{j|k} \\ &&& (x_{i|k},u_{i|k})\in\mathbb{Z} \\ &&& x_{i+1|k}=Ax_{i|k}+Bu_{i|k}+Dw_{i|k} \\ &&& x_{N|k}\in \mathbb{X}_f \\ & && x_{0|k}=x_k \end{align*} For polytopic $\mathbb{Z}$ and $\mathbb{X}_f$, this problem is a convex QP or SOCP if $\mathbb{W}$ is polytopic or ellipsoidal, respectively.
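The terminal cost $P$ and gain $K$ defined above can be obtained by standard Riccati value iteration. The scalar sketch below (ours; the numbers $A=1.2$, $B=Q=R=1$ are illustrative assumptions) iterates the scalar version of the Riccati equation and checks that $A+BK$ is Schur stable.

```python
# Scalar sketch (ours) of the Riccati quantities used by P_1: iterate
#   P <- Q + A^2 P - (A B P)^2 / (R + B^2 P)
# to a fixed point, then recover K = -(R + B^2 P)^{-1} B P A.
A, B, Q, R = 1.2, 1.0, 1.0, 1.0

P = Q
for _ in range(500):
    P = Q + A * A * P - (A * B * P) ** 2 / (R + B * B * P)

K = -(B * P * A) / (R + B * B * P)
residual = abs(P - (Q + A * A * P - (A * B * P) ** 2 / (R + B * B * P)))
print(round(P, 4), round(K, 4), residual < 1e-9, abs(A + B * K) < 1.0)
# 1.9522 -0.7935 True True  (A + BK is Schur stable)
```

In the scalar case the fixed point can be checked by hand: the iteration converges to the positive root of $P^2 - 1.44P - 1 = 0$.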
For all $x_k\in\mathbb{X}$, where $\mathbb{X}$ is the set of feasible states for $\mathcal{P}_1$, a receding horizon control law is defined as $u_k=v_{0|k}^*(x_k)$, where $(\cdot)^*$ denotes an optimal solution of $\mathcal{P}_1$. For all $x_0\in\mathbb{X}$, the closed-loop system \begin{equation}\label{goulart_dynamics} x_{k+1}=Ax_k+Bv_{0|k}^*(x_k)+Dw_k , \end{equation} satisfies $x_k\in\mathbb{X}$ for all $k\in\mathbb{N}$~\cite{goulartandkerrigan2008}. It is also shown in~\cite{goulartandkerrigan2008} that the system~\eqref{goulart_dynamics} is ISS with region of attraction $\mathbb{X}$. \begin{prop}\label{asspropgoulart} Assumptions \ref{distass}-\ref{terminallinearass} hold for the closed-loop system~(\ref{goulart_dynamics}) with $\Gamma = \Omega =\mathbb{X}_f$ and $\Phi = A+BK$. \end{prop} \begin{IEEEproof} Assumption \ref{distass} holds due to the assumptions on $w_k$. Assumptions \ref{terminalass} and \ref{terminallinearass} hold because $\Omega=\mathbb{X}_f\subseteq \mathbb{Z}$ is by assumption bounded and RPI for (\ref{lindyn}) under $u_k=Kx_k$, and since the solution of $\mathcal{P}_1$ is $v_{0|k}^\ast(x_k) = Kx_k$ for all $x_k\in\mathbb{X}_f$ due to the definition of $P$ and $K$. Assumption~\ref{lyapunov} holds because (as proved in~\cite{goulartandkerrigan2008}) $J^\ast(x_k)$, the optimal value of the cost in $\mathcal{P}_1$, is an ISS-Lyapunov function for system \eqref{goulart_dynamics} satisfying the conditions of Theorem~\ref{lemiss}, thus guaranteeing that the system is ISS with region of attraction~$\mathbb{X}$. \end{IEEEproof} Proposition~\ref{asspropgoulart} implies almost sure convergence to the minimal RPI set in~(\ref{eq:mRPIset}) and ensures bounds on average performance. \begin{cor} For all $x_0\in\mathbb{X}$, the closed-loop system \eqref{goulart_dynamics} satisfies \eqref{convlimit} and \eqref{perflimit} with $S=Q+K^\top R K$.
\end{cor} \begin{IEEEproof} This is a direct consequence of Corollary \ref{mainthm2} and Theorem \ref{mainthm3} since Assumptions \ref{distass}-\ref{terminallinearass} hold. \end{IEEEproof} Similar convergence results presented in~\cite{wangetal2008} required a redefinition of the cost and the control policy used in~\cite{goulartandkerrigan2008}. The results presented here apply to the original algorithm in~\cite{goulartandkerrigan2008} under the mild additional assumption that arbitrarily small disturbances have non-zero probability. \subsection{Striped affine in the disturbance stochastic MPC} \label{sec:kcandm} The predicted control policy of~\cite{kouvaritakisetal2013} is again an affine function of future disturbance inputs. However, there are several differences from the formulation of Section~\ref{sec:gandk}: (i) a state feedback law with fixed gain is included in the predicted control policy; (ii) disturbance feedback gains are computed offline in order to reduce online computation; (iii) the disturbance feedback has a striped structure that extends over an infinite horizon. A consequence of (ii) is that state and control constraints can be enforced robustly by means of constraint tightening parameters computed offline, while (iii) has the effect of relaxing terminal constraints~\cite{kouvaritakisetal2013}. The system is subject to mixed input-state hard and probabilistic constraints, defined for all $k\in\mathbb{N}$ by \begin{subequations} \label{koucons} \begin{align} &(x_k,u_k)\in \mathbb{Z} \label{kouvaritakis_hcons}, \\ &\mathbb{P}\{f^\top_j x_{k+1}+g^\top_j u_k\le h_j \}\ge p_j, \quad j\in\mathbb{N}_{[1,n_c]} \label{kouvaritakis_cons} \end{align} \end{subequations} where $\mathbb{Z}$ is a convex compact polyhedral set containing the origin in its interior, $f_j\in \mathbb{R}^{n}$, $g_j\in \mathbb{R}^{n_u}$, $h_j\in\mathbb{R}$, $p_j\in(0, 1]$ and $n_c$ is the number of probabilistic constraints.
The predicted control policy has the structure \begin{equation}\label{kouvaritakis_control_law} u_{i|k}= \begin{cases} Kx_{i|k}+c_{i|k}+\sum_{j=1}^{i-1}L_{j}w_{i-j|k}, & i\in\mathbb{N}_{N-1} \\ Kx_{i|k}+\sum_{j=1}^{N-1}L_{j}w_{i-j|k}, & i \geq N \end{cases} \end{equation} where ${\bf c}_k = \{c_{i|k}\}_{i\in\mathbb{N}_{N-1}}$ are decision variables at time $k$ and $K$ satisfies the algebraic Riccati equation $P=Q+A^\top PA-K^\top (R+B^\top PB)K$, $K=-(R+B^\top PB)^{-1}B^\top PA$. Using \eqref{lindyn} we note that constraint (\ref{kouvaritakis_cons}) can be imposed as \begin{equation}\label{tightened_prob} \tilde{f}^\top_j {x}_{k}+\tilde{g}^\top_j {u}_{k}+\gamma_j\le h_j,\quad j\in\mathbb{N}_{[1,n_c]} \end{equation} where $\tilde{f}^\top_j=f_j^\top A$, $\tilde{g}^\top_j=f_j^\top B+g_j^\top$ and $\gamma_j$ is a tightening parameter that accounts for the stochastic disturbance term $f_j^\top Dw_k$. Specifically, $\gamma_j$ is computed using the PDF (or a collection of samples) of $w$ so that the satisfaction of the tightened constraint \eqref{tightened_prob} ensures satisfaction of \eqref{kouvaritakis_cons}. Constraint \eqref{tightened_prob} is equivalent to a polytopic constraint: $(x_k,u_k)\in{\mathbb S}=\{(x,u): \tilde{f}^\top_j {x}+\tilde{g}^\top_j {u}+\gamma_j\le h_j, \, j\in\mathbb{N}_{[1,n_c]}\}$. Let $\bar{x}_{i|k},\bar{u}_{i|k}$ be nominal state and input predictions satisfying $\bar{x}_{0|k}=x_k$, $\bar{x}_{i+1|k}=A\bar{x}_{i|k}+B\bar{u}_{i|k}$ and $\bar{u}_{i|k}=u_{i|k}$ (i.e.~assuming $w_{i|k}=0$ for all $i\in\mathbb{N}$). Then (\ref{koucons}) can be imposed with constraints on the nominal sequences \begin{equation} \label{tightened_constraints} (\bar{x}_{i|k},\bar{u}_{i|k}) \in \tilde{\mathbb{S}}_i , \quad (\bar{x}_{i|k},\bar{u}_{i|k}) \in \tilde{\mathbb{Z}}_i , \end{equation} where $\tilde{\mathbb{S}}_i$ and $\tilde{\mathbb{Z}}_i$ are tightened versions of $\mathbb{S}$ and $\mathbb{Z}$.
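In the sample-based case, $\gamma_j$ can be taken as the $p_j$-quantile of the scalar disturbance term $f_j^\top Dw$, since $f_j^\top x_{k+1}+g_j^\top u_k=\tilde{f}_j^\top x_k+\tilde{g}_j^\top u_k+f_j^\top Dw_k$. The sketch below (ours; the uniform distribution and all numbers are illustrative assumptions) computes such a $\gamma_j$ empirically.

```python
import random

# Sample-based tightening (our sketch): gamma_j is the p_j-quantile of the
# scalar term f_j^T D w, estimated from i.i.d. samples of the disturbance.

def tightening(samples, p):
    """Empirical p-quantile of the sampled values of f_j^T D w."""
    s = sorted(samples)
    idx = min(len(s) - 1, int(p * len(s)))
    return s[idx]

rng = random.Random(2)
# Illustrative assumption: f_j^T D w is uniform on [-0.3, 0.3].
samples = [rng.uniform(-0.3, 0.3) for _ in range(100_000)]
gamma = tightening(samples, 0.9)
print(0.23 < gamma < 0.25)  # True: the 0.9-quantile of U[-0.3, 0.3] is 0.24
```

With $\gamma_j$ chosen this way, satisfying the tightened deterministic constraint implies the chance constraint holds at (approximately) the prescribed probability level.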
Here $\tilde{\mathbb{Z}}_i$ is computed by considering the worst-case of uncertain components of the predictions for the hard constraint $(x_{i|k},u_{i|k})\in\mathbb{Z}$, while $\tilde{\mathbb{S}}_i$ enforces the probabilistic constraint \eqref{tightened_prob} and includes a worst-case tightening to ensure recursive feasibility. The disturbance feedback gains $L_{j}$, $j\in \mathbb{N}_{[1,N-1]}$, are computed sequentially offline so as to minimize the tightening of constraints \eqref{tightened_constraints}. Specifically, $L_1$ is first chosen so as to minimize the effect of the disturbance $w_{0|k}$ on the constraints \eqref{tightened_constraints} at prediction instant $i=2$, then $L_2$ is chosen so as to minimize the effect of $\{w_{0|k}, w_{1|k}\}$ on these constraints at prediction instant $i=3$, and so on until all $N-1$ gains have been chosen (we refer the reader to~\cite{kouvaritakisetal2013} for further details). The cost function is given by \begin{equation}\label{kouvaritakis_cost} J(x_k,{\bf c}_k)= \mathbb{E}\Bigl\{\sum_{i=0}^{\infty} (x_{i|k}^\top Qx_{i|k}+u_{i|k}^\top Ru_{i|k} -L_{ss} ) \Bigr\}, \end{equation} where $Q,R\succ0$ and $L_{ss}=\lim_{i\rightarrow\infty}\mathbb{E} \{x_{i|k}^\top Qx_{i|k}+u_{i|k}^\top Ru_{i|k}\}$ can be computed using the predicted control law for $i\geq N$ and the second moments of the disturbance input. It can be shown (e.g.~\cite[Thm.~4.2]{kouvaritakisetal2013}) that $J(x_k,{\bf c}_k)$ can be replaced, without changing the solution of the optimization problem, by \begin{equation}\label{eq_eqcost} V(x_k,{\bf c}_k) = x_k^\top P x_k+ {\bf c}_k^\top P_c {\bf c}_k \end{equation} where $P_c = I \otimes (R + B^\top P B)\succ 0$.
The optimal control problem solved at the $k$th instant is therefore \begin{align*} \mathcal{P}_2: \ \ & \min_{{\bf c}_k} & & V(x_k,{\bf c}_k) \quad \text{s.t.} \quad \ \forall i\in\mathbb{N}_{N+N_2-1} \\ &&& \bar{u}_{i|k}=c_{i|k}+K\bar{x}_{i|k}\\ &&& (\bar{x}_{i|k},\bar{u}_{i|k}) \in \tilde{\mathbb{S}}_i\cap \tilde{\mathbb{Z}}_i \\ &&& \bar{x}_{i+1|k}=A\bar{x}_{i|k}+B\bar{u}_{i|k} \\ &&& \bar{x}_{0|k}=x_k \end{align*} where $N_2$ is large enough to ensure that $(x_{i|k},u_{i|k})\in \tilde{\mathbb{S}}_i\cap\tilde{\mathbb{Z}}_i$ holds for all $i\geq N$. The solution ${\bf c}_k^\ast(x_k)$ defines the MPC law $u_k={c_{0|k}^*(x_k)+Kx_k}$ and the closed-loop dynamics are given, for $\Phi = A+BK$, by \begin{equation}\label{dynamics_kouvaritakis} x_{k+1}=\Phi x_k+Bc^*_{0|k}(x_k)+Dw_k . \end{equation} The set of states for which $\mathcal{P}_2$ is feasible, denoted by $\mathbb{X}$, is robustly invariant under the closed-loop dynamics~\cite{kouvaritakisetal2013}. Asymptotically optimal performance is obtained if $u_k$ converges to the unconstrained optimal control law $u_k=Kx_k$. However, the bound $\lim_{k\to\infty}\mathbb{E}\bigl\{x_{k}^\top Qx_{k}+u_{k}^\top Ru_{k}\bigr\} \leq L_{ss}$ is derived in~\cite[Thm~4.3]{kouvaritakisetal2013}, where $L_{ss} = l_{ss}+{\mathbb E}\{w^\top P_w w \}$ for ${P_w\succeq 0}$, and $l_{ss}={\mathbb E}\{w^\top D^\top P D w\}$ is the asymptotic value of $\mathbb{E}\bigl\{x_{k}^\top Qx_{k}+u_{k}^\top Ru_{k}\bigr\}$ for~\eqref{lindyn} with $u_k=Kx_k$. Thus, although~\cite{kouvaritakisetal2013} provides an asymptotic bound on closed-loop performance, this bound does not ensure convergence to the unconstrained optimal control law since $L_{ss} \geq l_{ss}$. From \eqref{eq_eqcost} it follows that the optimal solution is ${\bf c}_k^\ast(x_k) = 0$ whenever the constraints of $\mathcal{P}_2$ are inactive, and in particular we have ${\bf c}_k^\ast(0) = 0$.
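The value of $l_{ss}$ can be made concrete with a scalar example (ours; all numbers are illustrative assumptions). For closed-loop dynamics $x_{k+1}=\Phi x_k+Dw_k$ with $\Phi=1/2$, $D=1$ and $w$ uniform on $[-1/2,1/2]$, the Lyapunov equation $P-\Phi^\top P\Phi=S$ with $S=1$ gives $P=4/3$, so $l_{ss}=\mathbb{E}\{w^\top D^\top PDw\}=P/12=1/9$; a long simulation reproduces this time-average.

```python
import random

# Scalar sketch (ours): for x+ = phi x + d w with phi = 1/2, d = 1, S = 1 and
# w ~ Uniform[-1/2, 1/2], the Lyapunov equation P - phi^2 P = S gives P = 4/3,
# hence l_ss = E{w^2} d^2 P = (1/12)(4/3) = 1/9.
phi, d, S = 0.5, 1.0, 1.0
P = S / (1.0 - phi * phi)            # solves P - phi^T P phi = S
l_ss = (1.0 / 12.0) * d * d * P      # E{w^2} = 1/12 for Uniform[-1/2, 1/2]

# Compare with the simulated time-average of S x^2.
rng = random.Random(1)
x, acc, N = 0.0, 0.0, 200_000
for _ in range(N):
    acc += S * x * x
    x = phi * x + d * rng.uniform(-0.5, 0.5)
print(round(l_ss, 4), abs(acc / N - l_ss) < 0.01)
# 0.1111 True: the simulated time-average matches l_ss = 1/9
```

The same computation for a system operating away from the linear regime would produce a time-average exceeding $l_{ss}$, which is the gap captured by $L_{ss}$.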
To ensure that the dynamics of (\ref{dynamics_kouvaritakis}) are linear on this system's limit set, we make the following assumption about the set $\mathbb{X}_{uc} = \{x\in\mathbb{X} : {c^\ast_{0|k}(x) = 0}\}$. \begin{assum}\label{assterminalregimestriped} The minimal RPI set (\ref{eq:mRPIset}) satisfies $\mathbb{X}_\infty \subseteq \mathbb{X}_{uc}$. \end{assum} It was proved in \cite{kouvaritakisetal2013} that Assumption \ref{assterminalregimestriped} guarantees the existence of an invariant set $\Omega$ such that $\mathbb{X}_\infty\subseteq \Omega\subseteq \mathbb{X}_{uc}$, and that if the state reaches $\Omega$, then it necessarily converges to $\mathbb{X}_\infty$. No guarantee is given in \cite{kouvaritakisetal2013} that the state will reach $\mathbb{X}_\infty$, but we can now apply the results of Sections~\ref{sec2} and~\ref{sec3} to the control policy of~\cite{kouvaritakisetal2013}. \begin{thm}\label{cor1_kouvaritakis} Let $V^\ast(x_k) = V\bigl(x_k,{\bf c}_k^\ast(x_k)\bigr)$, then $V^\ast(\cdot)$ is an ISS-Lyapunov function, and the closed-loop system \eqref{dynamics_kouvaritakis} is ISS with region of attraction $\mathbb{X}$. \end{thm} \begin{IEEEproof} First note that $\mathcal{P}_2$ is a convex QP since ${P,P_c\succ 0}$. Therefore $V^\ast(\cdot)$ is strictly convex and continuous and the optimizer ${\bf c}_k^\ast(\cdot)$ is also continuous~\cite[Thm.~4]{bemporad02}. It follows that (i) $V^\ast(\cdot)$ is Lipschitz continuous on $\mathbb{X}$, (ii) the function $f(x_k,w_k) = \Phi x_k+Bc_{0|k}^*(x_k)+Dw_k$ defining the closed-loop dynamics in \eqref{dynamics_kouvaritakis} is continuous in $x_k$, and (iii) condition (\ref{lyapkappa}) holds since $V^\ast(\cdot)$ is positive definite. Next, from~\eqref{dynamics_kouvaritakis}, \eqref{eq_eqcost} and $P-\Phi^\top P \Phi = Q + K^\top R K$ we have $V^\ast(f(x_k,0))-V^\ast(x_k) \leq -(x_k^\top Qx_k + u^{\top}_kRu_k)\le-x_k^\top Qx_k$.
Then, since $Q\succ 0$, there exists a ${\mathcal K}_\infty$ function $\alpha_3(\cdot)$ such that $V^\ast\left(f(x,0)\right)-V^\ast(x)\le -\alpha_3(\norm{x})$. Applying \cite[Lem.~22]{goulartetal2006}, these conditions imply that \eqref{dynamics_kouvaritakis} is ISS with region of attraction $\mathbb{X}$. \end{IEEEproof} \begin{prop}\label{asspropkovaritakis} Assumptions \ref{distass}-\ref{terminallinearass} hold for the closed-loop system (\ref{dynamics_kouvaritakis}) with $\Gamma = \Omega \subseteq \mathbb{X}_{uc}$ if Assumption~\ref{assterminalregimestriped} holds. \end{prop} \begin{IEEEproof} Assumption \ref{distass} holds due to the assumptions on $w_k$. Assumptions~\ref{terminalass} and~\ref{terminallinearass} hold because $\mathbb{X}_\infty$ is RPI, $\mathbb{Z}$ is a bounded set, and ${\bf c}_k^\ast(x_k) = 0$ for all $x_k\in\mathbb{X}_\infty$ under Assumption~\ref{assterminalregimestriped}. Assumption~\ref{lyapunov} holds by Theorem~\ref{cor1_kouvaritakis}. \end{IEEEproof} This allows us to conclude the following convergence results for the state and limit average performance. \begin{cor} For all $x_0\in\mathbb{X}$, the closed-loop system \eqref{dynamics_kouvaritakis} satisfies \eqref{convlimit} and \eqref{perflimit} with $S = Q + K^\top R K$. \end{cor} \begin{IEEEproof} The bounds in \eqref{convlimit} and \eqref{perflimit} follow from Corollary~\ref{mainthm2} and Theorem~\ref{mainthm3} since Assumptions \ref{distass}-\ref{terminallinearass} hold. \end{IEEEproof} \subsection{Nonlinear stochastic MPC based on constraint-tightening}\label{nlsmpc} This section considers the convergence properties of a nonlinear system with the stochastic MPC algorithm of~\cite{santosetal2019}. Assuming that arbitrarily small disturbances have a non-vanishing probability, we use the results of~\cite{santosetal2019} and Section \ref{sec3} to show that the closed-loop system converges with probability~1 to the minimal RPI set associated with the MPC law.
We consider the system with state $x\in{\mathbb R}^n$, disturbance input $w\in\mathbb{W}\subset\mathbb{ R}^{n}$ and control input $u\in{\mathbb R}^{n_u}$ governed by \begin{equation}\label{nldynamics} x_{k+1}=f(x_k,u_k)+w_k. \end{equation} The function $f(\cdot,\cdot)$ satisfies $f(0,0)=0$ and is assumed to be uniformly continuous in its arguments for any feasible pair $(x,u)$. The system is subject to chance constraints on its state and mixed input-state hard constraints of the form \begin{subequations} \label{conssantos} \begin{align} &(x_k,u_k)\in \mathbb{Z} \label{santoshardc}, \\ \label{santosprobc} &\mathbb{P}\{g^\top_j x_{k+1}\le h_j \}\ge p_j, \quad j\in\mathbb{N}_{[1,n_c]} \end{align} \end{subequations} with $g_j\in \mathbb{R}^{n}$, $h_j\in\mathbb{R}$, $p_j\in(0, 1]$. Here $n_c$ is the number of probabilistic constraints and for simplicity we assume that the projection $\{x : \exists u, \, (x,u) \in \mathbb{Z}\}$ is bounded. The predicted control law is parameterized by decision variables $v_{i|k}$ so that $u_{i|k}=\pi(x_{i|k},v_{i|k})$, where $\pi(\cdot,\cdot)$ is assumed to be continuous on the feasible domain. Defining $f_\pi(x,v)=f(x,\pi(x,v))$, the predicted states therefore evolve according to \begin{equation}\label{nlcldynamics} x_{i+1|k}=f_\pi(x_{i|k},v_{i|k})+w_{i|k}. \end{equation} The cost and constraints in the formulation of the optimal control problem are defined over the nominal state predictions $\bar{x}_{i|k}$, obtained from \eqref{nlcldynamics} but assuming $w_{i|k}=0$ for all $i$. Noting that \eqref{santosprobc} can be imposed as a polyhedral constraint $x_{k+1}\in{\mathbb S}$, constraints \eqref{conssantos} are imposed in the optimal control problem for a prediction horizon $N$ by \[ (x_{i|k},v_{i|k})\in \tilde{{\mathbb{Z}}}_i, \quad x_{i+1|k}\in\tilde{\mathbb S}_i, \] where $\tilde{\mathbb{Z}}_i$ and $\tilde{\mathbb S}_i$ are tightened versions of $\mathbb{Z}$ and ${\mathbb S}$.
These are computed by considering worst-case disturbances for the hard constraints \eqref{santoshardc} and a combination of worst-case and stochastic disturbances to ensure recursive feasibility of the probabilistic constraints \eqref{santosprobc}. The construction of the required tightened sets relies on the uniform continuity of $f(x,u)$ for any feasible pair $(x,u)$ and can be performed offline. For further details we refer the reader to \cite{santosetal2019}. The stochastic MPC formulation assumes a terminal control law $v_t(\cdot)$ (so that $u_{i|k} = \pi\bigl(x_{i|k},v_t(x_{i|k})\bigr)$ for $i\geq N$), a terminal constraint ${\mathbb X}_f$ and a terminal cost $V_f(\cdot)$ that satisfy the following conditions. (i) $\pi\bigl(0,v_t(0)\bigr)=0$. (ii) ${\mathbb X}_f$ is RPI, with $f_\pi(x,v_t(x)) + w \in{\mathbb X}_f$ for all $(x,w)\in{\mathbb X}_f\times\mathbb W$, and ${\mathbb X}_f\subseteq\tilde{\mathbb S}_N\cap\{ x:(x,v_t(x))\in\tilde{\mathbb Z}_N\}$. (iii) $V_f(x)$ is a positive definite function satisfying, for all $x,x_1,x_2\in{\mathbb X}_f$: $\alpha_{1f}(\|x\|)\le V_f(x)\le \alpha_{2f}(\|x\|)$, $V_f\bigl(f_\pi(x,v_t(x))\bigr)-V_f(x)\le-L_\pi(x,v_t(x))$ and $|V_f(x_1)-V_f(x_2)|\le\delta(\|x_1-x_2\|)$, where $\alpha_{1f}(\cdot)$, $\alpha_{2f}(\cdot)$ and $\delta(\cdot)$ are $\mathcal{K}$-functions. The cost function is given by \begin{equation}\label{santos_cost} J(x_k,{\bf v}_k)=V_f(\bar{x}_{N|k}) + \sum_{i=0}^{N-1} L_\pi(\bar{x}_{i|k},v_{i|k} ), \end{equation} where ${\bf v}_k = \{v_{i|k}\}_{i\in\mathbb{N}_{N-1}}$ and $L_\pi(x,v)$ is a positive definite and uniformly continuous function for all feasible $(x,v)$.
The optimal control problem solved at the $k$th instant is \begin{align*} \mathcal{P}_3: \ \ & \min_{{\bf v}_k} & & J(x_k,{\bf v}_k) \quad \text{s.t.} \quad \forall i\in\mathbb{N}_{N-1} \\ &&& \bar{x}_{i+1|k}\in \tilde{\mathbb S}_i \\ &&& (\bar{x}_{i|k},v_{i|k}) \in \tilde{\mathbb{Z}}_i \\ &&& \bar{x}_{i+1|k}=f_\pi(\bar{x}_{i|k},v_{i|k}) \\ &&& \bar{x}_{0|k}=x_k, \; \bar{x}_{N|k}\in {\mathbb X}_f. \end{align*} Let $\mathbb{X}$ be the set of all states $x_k$ such that ${\mathcal P}_3$ is feasible. Then for all $x_k\in\mathbb{X}$ the solution $v^*_{0|k}(x_k)$ defines the MPC law $u_k=\pi(x_k,v_{0|k}^*(x_k))$ and the closed-loop system is given by \begin{equation}\label{santosclosedloop} x_{k+1}=f_\pi(x_k,v_{0|k}^*(x_k))+w_k. \end{equation} It can be shown~\cite[Lemma 3.1]{santosetal2019} that $x_{k+1}\in\mathbb{X}$ if $x_{k}\in\mathbb{X}$, so the constraints \eqref{conssantos} are satisfied for all $k\in\mathbb{N}$ if $x_0\in\mathbb{X}$, and furthermore the system~\eqref{santosclosedloop} is ISS~\cite[Appendix B]{santosetal2019}. To find a tight limit set, let ${\mathbb X}_\infty$ be the minimal RPI set of \eqref{santosclosedloop}. The properties of this closed-loop system that allow the results of Section \ref{sec3} to be applied are summarized as follows. \begin{prop} Assumptions \ref{distass}-\ref{lyapunov} hold for the closed-loop system \eqref{santosclosedloop} and the set $\Omega =\mathbb{X}_\infty$. \end{prop} \begin{IEEEproof} Assumption \ref{distass} holds due to the assumptions on $w_k$, Assumption \ref{terminalass} holds because $\mathbb{X}_\infty$ is bounded, being contained in the bounded projection of $\mathbb{Z}$ onto the state space, and Assumption~\ref{lyapunov} follows because \eqref{santosclosedloop} is ISS. \end{IEEEproof} Corollary \ref{mainthm2} therefore implies that the state of the closed-loop system (\ref{santosclosedloop}) converges to ${\mathbb X}_\infty$ with probability~1.
\subsection{Discussion} \textit{On the use of the proposed analysis for MPC of linear and nonlinear systems:} We first highlight that even for linear systems under a stochastic MPC law, the analysis must be posed for nonlinear systems in order to use the results of Section \ref{sec3} to analyse convergence. In fact, even though system \eqref{lindyn} is linear, the closed-loop dynamics under the stochastic MPC laws in Sections~\ref{sec4}.A and \ref{sec4}.B are in general (except on the MPC terminal set) nonlinear. We note that stability analyses based on ISS properties have been developed for several nonlinear stochastic MPC formulations in the literature~\cite{sehrbitmead2017,santosetal2019,lorenzenetal2019}. This suggests that the analysis discussed here is a generally useful tool for stochastic MPC. \textit{On the MPC law in the limit set:} We have shown that the strategies of Sections \ref{sec4}.A and \ref{sec4}.B recover their respective optimal control laws in the limit set, and that their limit average performance is optimal for the MPC cost function. These results are obtained even though the terminal predicted control law of the MPC formulation in Section \ref{sec4}.B incorporates sub-optimal disturbance compensation terms and is thus sub-optimal with respect to the MPC cost. In fact, the MPC law in the limit set is not necessarily the same as the terminal control law of the MPC formulation. This highlights that in order to design a stochastic MPC strategy with optimal limit average performance it is not necessary that the terminal control law is optimal. Instead, we require that the MPC law equals the optimal control law on an RPI set that contains the origin in its interior. The procedure consists of checking Assumptions~\ref{distass}-\ref{lyapunov} and invoking Corollary \ref{mainthm2} to conclude that the state converges to the minimal RPI set for the closed-loop system.
Furthermore, if the dynamics of the closed-loop system are linear within some RPI set, then a tight limit set can be determined explicitly (e.g.~\cite{blanchini08}). \section{Conclusions} This paper extends and generalizes methods for analysing the convergence of disturbed nonlinear systems, which can be applied to stochastic MPC formulations. We define a set of conditions on the stochastic additive disturbances (Assumption~\ref{distass}), on the existence of an invariant set (Assumption~\ref{terminalass}), and on the existence of an ISS-Lyapunov function (Assumption~\ref{lyapunov}) for the closed-loop system. We show that a nonlinear stochastic system satisfying these conditions converges almost surely to a limit set defined by the minimal RPI set of the system. For the case where the dynamics are linear in the limit set, we show that the asymptotic average performance is tightly bounded by the performance associated with those linear dynamics. The results are obtained using the ISS property of the system, but the limits directly implied by the ISS Lyapunov inequality would yield worse asymptotic bounds. These conditions are commonly met by stochastic MPC strategies, and we illustrate the use of the convergence analysis by applying it to three existing formulations of stochastic MPC. In each of these applications our analysis allows for improved ultimate bounds on state and performance. \end{document}
\begin{document} \title{Atom Counting in Expanding Ultracold Clouds} \author{Sibylle Braungardt\(^1\), Mirta Rodr\' iguez\(^2\), Aditi Sen(De)\(^3\), Ujjwal Sen\(^3\), and Maciej Lewenstein\(^{1,4}\)} \affiliation{\(^1\)ICFO-Institut de Ci\`encies Fot\`oniques, Mediterranean Technology Park, 08860 Castelldefels (Barcelona), Spain\\ \(^2\) Instituto de Estructura de la Materia, CSIC, Serrano 121, 28006 Madrid, Spain\\\(^3\) Harish-Chandra Research Institute, Chhatnag Road, Jhunsi, Allahabad 211 019, India\\ \(^4\) ICREA - Instituci\'{o} Catalana de Recerca i Estudis Avan\c{c}ats, Passeig Lluis Companys 23, E-08010 Barcelona, Spain} \begin{abstract} We study the counting statistics of ultracold bosonic atoms that are released from an optical lattice. We show that the counting probability distribution of the atoms collected at a detector located far away from the optical lattice can be used as a method to infer the properties of the initially trapped states. We consider initial superfluid and insulating states with different occupation patterns. We analyze how the correlations between the initially trapped modes that develop during the expansion in the gravitational field are reflected in the counting distribution. We find that for detectors that are large compared to the size of the expanded wave function, the long-range correlations of the initial states can be distinguished by observing the counting statistics. We consider counting at one detector, as well as the joint probability distribution of counting particles at two detectors. We show that using detectors that are small compared to the size of the expanded wave function, insulating states with different occupation patterns, as well as supersolid states with different density distributions can be distinguished.
\end{abstract} \maketitle \def\com#1{{\tt [\hskip.5cm #1 \hskip.5cm ]}} \def\bra#1{\langle#1|} \def\ket#1{|#1\rangle} \def \av#1{\langle #1\rangle} \section{Introduction} Experiments with ultracold particles trapped in optical lattices aim towards the engineering of exotic many-body quantum states \cite{adp}. Recently, the trapping and cooling of dipolar gases have attracted much attention \cite{Lahaye2009}. The dipole moments induce long-range interactions between the particles, and new phases appear \cite{metastable}. In the strongly correlated regime, it has been shown that there are many quasi-degenerate metastable insulating states with defined occupation patterns \cite{Menotti2007,Trefzger2008,Capogrosso2010,Pollet2010}. These metastable states could be used for the storage and processing of quantum information in analogy to classical neural networks, where the information is robustly encoded in the distributed stable states of a complex system \cite{Dorner2003,Pons2007}. Another way to induce long-range interactions between atoms trapped in an optical lattice is via coupling to an external cavity mode. This has recently been achieved experimentally, and a transition from a checkerboard to a supersolid phase has been observed \cite{Baumann2010}. The detection of exotic strongly correlated phases requires novel experimental techniques that give access to high-order correlation functions. Proposals for detection techniques typically make use of shot-noise measurements \cite{shotnoise} or atom-light interfaces \cite{ali}. Also, the counting statistics of atoms has been suggested as a technique able to distinguish strongly correlated \cite{amader_counting,demler} and fermionic \cite{fermiones} Hamiltonians, both at zero and finite temperature \cite{braungardt2011}. The detection of single atoms trapped in the optical lattice has become experimentally available only recently \cite{Gericke2008,Chen2009,Bakr2010,Sherson2010}.
Most counting experiments are performed after switching off the trapping potential and letting the atoms propagate in the gravitational field. The counting statistics of Rb atoms falling through a high-finesse cavity have been reported in Ref. \cite{esslinger04}. Also, fermionic and bosonic counting probability distributions have been measured for metastable helium atoms falling onto a microchannel plate \cite{Aspect_science, Aspect_nature}. The theoretical analysis of the counting process has so far mainly considered atoms trapped in the lattice. Propagation in the gravitational field mixes the initial modes of the atoms, such that the counting statistics in the lattice and after propagation are not expected to be the same. In this paper, we study the role of expansion in the counting process. We show that the mixing of the initial modes during the expansion becomes evident in the counting distribution when the detector is small compared to the size of the expanded wave function. We illustrate the effect by analyzing the counting statistics for bosons after time-of-flight expansion from the lattice. We consider initial states with different occupation patterns in the insulating regime and supersolid states with different density distributions in the superfluid regime. We calculate both the counting probabilities at a single detector and the joint probabilities at two detectors as a function of the horizontal distance between them. We show that a superfluid (SF) and Mott insulator (MI) state can be readily distinguished by their counting statistics. We further show that a suitable choice of the detector geometry allows for the detection of different occupation patterns in the insulating regime and different supersolid states. The paper is organized as follows. In Sec. \ref{s1} we review the propagation of the atomic wave functions and the atom counting formalism. In Sec.
\ref{s2} we analyze the intensity of particles arriving at the detector, which consists of auto-correlation terms and crossed-correlations between the different expanded modes. Depending on the size and geometry of the detector, the ratio between the auto-correlations and the crossed-correlation terms changes. In Sec. \ref{s3} we obtain closed expressions for the counting distributions for expanded superfluid and insulating bosonic states. We consider the counting statistics when using one detector and the joint counting distribution at two detectors. In Sec. \ref{res}, we show our results and compare the SF with MI states and insulating states with different occupation patterns. \section{Description of the System \label{s1}} We consider neutral bosonic atoms trapped in an optical lattice. The system can be described using the Bose-Hubbard model \cite{bose_hubbard}, which includes the hopping of the particles between neighbouring sites and the on-site two-body interactions. At zero temperature, the two limiting cases of the phase diagram are the SF state, where the hopping term dominates, and the MI state, where local interactions are dominant. The field operator $\Psi (\textbf{r},t)$ of the many-body system can be expanded into the $N$ modes $a_i$ \begin{equation} \Psi (\textbf{r},t)=\sum_i\phi_i(\textbf{r},t) a_i.\label{eq-psi} \end{equation} For atoms trapped in an optical lattice, $a_i$ describes the destruction of a particle on site $i$. The corresponding initial wave functions are Wannier functions, which we approximate by Gaussian functions centered at $\textbf{r}_i$ \begin{equation} \phi_i(\textbf{r},t=0)=\frac{1}{(\pi\omega^2)^{3/4}}e^{-(\textbf{r}-\textbf{r}_i)^2/2\omega^2},\label{eq-phi} \end{equation} where the width $\omega$ is chosen such that the initial wave functions at different sites $i$ do not overlap. The atoms are released from the optical lattice and expand in the gravitational field.
At finite $t$, we can apply the single-particle expansion \begin{equation} \phi_i(\textbf{r},t)=\int d \textbf{r}' K(\textbf{r},\textbf{r}',t) \phi_i(\textbf{r}',0) \end{equation} where the propagator for the free expansion in the gravitational field reads \cite{propagator} \begin{equation} K(\textbf{r},\textbf{r}',t)=\left(\frac{m}{2\pi i\hbar t} \right)^{3/2}e^{\frac{im(\textbf{r}-\textbf{r}')^2}{2\hbar t}-\frac{imgt(z+z')}{2\hbar}-\frac{img^2t^3}{24\hbar}}. \end{equation} The full propagated wave function is then written as \begin{eqnarray} \phi_i(\textbf{r},t) = \frac{e^{-\frac{img^2t^3}{24\hbar}}}{\pi^{3/4}(i \omega_t+\omega)^{3/2}}e^{-\frac{(\textbf{r}_t-\textbf{r}_i)^2}{2(\omega_t^2+\omega^2)}}e^{-i\frac{(\textbf{r}_t-\textbf{r}_i)^2\omega_t}{2\omega(\omega_t^2+\omega^2)}}, \label{eq:fi} \end{eqnarray} where $\textbf{r}_t=\textbf{r}+\textbf{z}_t$, with $\textbf{z}_t=(0,0,gt^2/2)$, and we have used that $|\textbf{r}_t-\textbf{r}_i|\gg \omega$. Note that in the limit of $\omega_t \gg \omega$, the expanded wave function is, up to a phase factor, a Gaussian function centered around $\textbf{z}_t$ with a width $\omega_t=\hbar t/(m \omega)$. \subsection{Atom counting} We describe a counting process in which the probability $p(m)$ of counting $m$ particles within a time interval $\tau$ is measured at a detector located at a distance $z_0$ from the lattice.
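A quick order-of-magnitude check of the expanded width $\omega_t=\hbar t/(m\omega)$: for $^{87}$Rb atoms falling to a detector at $z_0=1$ cm, and assuming (hypothetically, since the paper does not state it here) an initial Wannier-function width of $\omega\approx 40$ nm, one recovers the $\omega_{t_d}\approx 0.8$ mm quoted later in the text:

```python
# Numeric sketch: expanded width omega_t = hbar*t/(m*omega) at the detector.
# The initial width omega = 40 nm is an assumed illustrative value.
import math

hbar = 1.054571817e-34   # J s
m_Rb = 1.44316060e-25    # kg, mass of 87Rb
g = 9.81                 # m/s^2
omega0 = 40e-9           # m, assumed initial Wannier width
z0 = 1e-2                # m, detector distance

t_d = math.sqrt(2 * z0 / g)            # arrival time of the cloud center
omega_t = hbar * t_d / (m_Rb * omega0)

print(f"t_d = {t_d*1e3:.1f} ms, omega_t = {omega_t*1e3:.2f} mm")
```

The cloud thus expands by roughly four orders of magnitude before detection, which is why the crossed terms between different lattice sites can overlap at the detector.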
The probability of detecting $m$ particles can be expressed as \cite{Glauber,Cahill_Glauber} \begin{equation}\label{eq-p} p(m)=\frac{(-1)^m}{m!}\frac{d^m}{d\lambda^m}\mathcal{Q}\Big|_{\lambda=1}, \end{equation} where the generating function $\mathcal{Q}(\lambda)$ is given by the expectation value of a normally ordered exponential of the intensity $\mathcal{I}$, \begin{equation} \mathcal{Q}(\lambda)=\mbox{Tr}(\rho:e^{-\lambda\mathcal{I}}:).\label{eq-Q} \end{equation} For photons, the intensity is proportional to an integral over the product of the negative-frequency part and the positive-frequency part of the field. The normal ordering $: ... :$ reflects the detection mechanism, in which the photons are absorbed at the detector, typically a photomultiplier or an avalanche photodiode. For the detection of atoms using microchannel plates, the detection process can be treated in an analogous way. Since typically not all the particles are counted, the intensity depends on the efficiency $\epsilon$ of the detector and the detection time $\tau$. When the dynamics of the measurement are fast in comparison to the dynamics of the system, the intensity is proportional to the factor $\kappa\equiv1-e^{-\epsilon\tau}$. For typical experimental situations, the dynamics of the system are determined by the expansion of the atomic cloud in the gravitational field, given by $\omega_t$, and the intensity can be described by the integral over the detector volume $\Omega$ of the positive-frequency and negative-frequency parts of the quantum fields describing the particles to be counted, multiplied by the efficiency factor $\kappa$ \cite{grochmalicki}, \begin{eqnarray} \mathcal{I}=\kappa \int_{\Omega} d\textbf{r}\Psi^\dagger (\textbf{r},t_d) \Psi (\textbf{r},t_d)\label{intensity}, \end{eqnarray} where $t_d$ denotes the time at which the instantaneous measurement is performed.
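As a minimal check of eq. (\ref{eq-p}): for a single mode with $\mathcal{Q}(\lambda)=e^{-\lambda\bar{\mathcal{I}}}$ (a coherent state, anticipating the superfluid case below), symbolic differentiation recovers a Poisson distribution with mean $\bar{\mathcal{I}}$. The mean value used here is illustrative, and \texttt{sympy} is assumed available:

```python
# Sketch: apply p(m) = (-1)^m/m! d^m Q/dlambda^m |_{lambda=1}
# to Q(l) = exp(-l*I) and compare with the Poisson distribution.
import math
import sympy as sp

lam = sp.symbols("lambda")
I_mean = 2.5                      # illustrative mean intensity
Q = sp.exp(-lam * I_mean)

p = [
    float(((-1) ** m / math.factorial(m)) * sp.diff(Q, lam, m).subs(lam, 1))
    for m in range(10)
]
poisson = [I_mean**m / math.factorial(m) * math.exp(-I_mean) for m in range(10)]
assert all(abs(a - b) < 1e-12 for a, b in zip(p, poisson))
print([round(x, 4) for x in p[:5]])
```

The same recipe applies verbatim to the multimode generating functions derived in Sec. III, where $\mathcal{Q}$ becomes a polynomial in $\lambda$.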
The formalism described above is easily generalized to the case of detection with multiple detectors \cite{Arecchi1966}. For detection with $M$ detectors, the generating function reads \begin{equation} \mathcal{Q}_M(\lambda_1,\lambda_2,..,\lambda_M)=\mbox{Tr}(\rho:e^{-\sum_i\lambda_i\mathcal{I}_i}:), \end{equation} where the single detector intensity $\mathcal{I}_i$ for each of the detectors is given by eq. (\ref{intensity}). For a configuration with two detectors, the joint probability distribution of counting $m$ atoms at detector $1$ and $n$ atoms at detector $2$ is given by \begin{equation} p(m,n)=\frac{(-1)^{m+n}}{m!n!}\frac{d^{m+n}}{d\lambda_1^md\lambda_2^n}\mathcal{Q}_2\Big|_{\lambda_1=1,\lambda_2=1}.\label{joined_p} \end{equation} We study the correlations $\hbox{corr}(m,n)$ between the counting events detected at each detector by observing the ratio between the covariance and the single detector variances, \begin{equation} \hbox{corr}(m,n)=\frac{\hbox{cov}(m,n)}{\sigma^2(m)\sigma^2(n)},\label{eq-correlations} \end{equation} where $\hbox{cov}(m,n)=\sum_{m,n} m n p(m,n)-\bar{m}\bar{n}$, $\bar{m}$ denotes the mean and $\sigma^2(m)$ the variance of $p(m)$. \section{Detection of expanding atoms \label{s2}} Let us now discuss the counting process for the detection of atoms expanding in the gravitational field. We consider a cubic detector located at a distance $z_0$ from the lattice center with edge lengths $\Delta_x, \Delta_y, \Delta_z$ . For simplicity, all through this paper we consider $t_d=\sqrt{2 z_0/g}$ which is the time when the center of the cloud arrives at the detector. The intensity $\mathcal{I}$ of atoms registered at the detector defined in eq. (\ref{intensity}) is thus determined by the expanded field operator of the atoms at the time $t_d$ of detection, $\Psi(z_0,t_d)$. Using eqs. 
(\ref{eq-psi}) and (\ref{intensity}), the intensity $\mathcal{I}$ takes the form \begin{equation} \mathcal{I}=\sum_{ij}A_{ij} a_i^\dag a_j, \end{equation} where \begin{equation} A_{ij}(z_0,\Omega,\kappa)=\kappa\int_{\Omega} d\textbf{r}\,\phi_i^*(\textbf{r},t_d)\phi_j(\textbf{r},t_d).\label{A} \end{equation} The elements of the correlation matrix $A_{ij}$ defined in eq. (\ref{A}) describe the interference and autocorrelation terms between different modes registered at the detector. The diagonal terms represent the on-site correlations, whereas the off-diagonal terms represent the crossed-correlations between single-particle modes initially located at different sites with distance $|i-j|$. Before studying the full counting distribution, let us consider the correlations given by the matrix elements $A_{ij}$. Using eq. (\ref{eq:fi}) and assuming $\omega_{t_d}\gg \omega$, the autocorrelation elements are given by \begin{equation} A_{ii}=\kappa \int_{\Omega} d\textbf{r}\frac{1}{\pi^{3/2} \omega_{t_d}^3}e^{-\frac{(\textbf{r}-\textbf{r}_i)^2}{\omega_{t_d}^2}}.\label{eq-diag-A} \end{equation} For expanded wave functions at $\textbf{r} \gg \textbf{r}_i$, the autocorrelations become equal and independent of the original lattice site $i$. The crossed-correlations are given by \begin{equation} A_{ij}=\kappa \int_{\Omega} d\textbf{r} \frac{1}{\pi^{3/2} \omega_{t_d}^3}e^{-\frac{(\textbf{r}-\textbf{r}_i)^2}{\omega_{t_d}^2}}e^{-i\frac{\textbf{r}\cdot(\textbf{r}_i-\textbf{r}_j)}{\omega\omega_{t_d}}}.\label{eq-off-diag-A} \end{equation} The ratio between the crossed-correlations and the auto-correlations depends crucially on the geometry of the detector. In Fig. \ref{fig-Elements-A}, we show the on-site correlations eq. (\ref{eq-diag-A}) and the interference terms eq. (\ref{eq-off-diag-A}) as a function of the size of the detector. We consider a one-dimensional array in the $z$-direction and plot the correlations at the location of the detector at $(0,0,z_0)$.
We consider a fixed detector size in the $x$-$y$-plane, $\Delta=\Delta_x=\Delta_y$, and vary its width $\Delta_z$. Depending on the size of the detector, the whole cloud or a fraction of it is registered. For $z_0=1$ cm, the size of the expanded single-particle wave function at the detector is $\omega_{t_d}=0.8$ mm. \begin{figure} \caption{The ratio between the on-site correlations and the interference terms depends on the size of the detector. a) Wide detector limit $\Delta_z \simeq \omega_t$. b) Narrow detector limit $\Delta_z\ll\omega_t$. We plot $A_{ii}$ and $|A_{ij}|$ as a function of the detector width $\Delta_z$.\label{fig-Elements-A}} \end{figure} In Fig. \ref{fig-Elements-A}a, we show that for detectors of size $\Delta_z > 0.2$ mm, the interference terms are negligible. This is easily understood from eq. (\ref{eq-diag-A}), as the auto-correlations are given by an integral over the detector volume around the center of a Gaussian function. For detectors that are large compared to the size of the cloud, the on-site correlations approach unity. In contrast, the interference terms eq. (\ref{eq-off-diag-A}) are given by the integral over a Gaussian function multiplied by a highly oscillating phase, such that they approach zero as the size of the detector increases. Thus, for on-site counting and for detectors which are larger than the size of the cloud, the crossed-correlations disappear, $A_{ij}\simeq 0$ for $i \neq j$, while the auto-correlations approach $A_{ii} \simeq \kappa$. As we show below, the detection of the auto-correlations between different modes is sufficient to distinguish the long-range correlations in the system. In particular, we show that a MI state can be distinguished from a SF. On the contrary, as the auto-correlation terms for different sites are equal, distinguishing states with different occupation patterns cannot be achieved in this limit. Fig. \ref{fig-Elements-A}b shows that for small detector sizes, the interference terms are of the order of the on-site correlations.
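The two limits above can be reproduced with a toy one-dimensional version of the integrals in eqs. (\ref{eq-diag-A}) and (\ref{eq-off-diag-A}). In this sketch the parameter values are illustrative assumptions chosen to match the scales quoted in the text; the crossed term is of the order of the autocorrelation for a narrow detector and vanishes once the detector covers the cloud:

```python
# Toy 1D sketch: ratio |A_ij|/A_ii for two modes separated by d,
# for a narrow and a wide detector. Parameter values are assumptions.
import numpy as np

omega = 40e-9      # initial width (assumed)
omega_t = 0.8e-3   # expanded width at the detector (from the text)
d = 0.5e-6         # assumed lattice spacing |r_i - r_j|

def A_elem(Dz, dij, n=200001):
    """z-integral of the Gaussian envelope times the interference phase."""
    z = np.linspace(-Dz / 2, Dz / 2, n)
    dz = z[1] - z[0]
    integrand = np.exp(-z**2 / omega_t**2) * np.exp(-1j * z * dij / (omega * omega_t))
    return np.sum(integrand) * dz

for Dz in (0.01 * omega_t, 6 * omega_t):   # narrow vs. wide detector
    ratio = abs(A_elem(Dz, d)) / abs(A_elem(Dz, 0.0))
    print(f"Dz/omega_t = {Dz / omega_t:5.2f}:  |A_ij|/A_ii = {ratio:.4f}")
```

For the narrow detector the phase $z\,d/(\omega\omega_t)$ barely varies over the integration range, while for the wide detector the oscillations average the crossed term to essentially zero.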
We will show that in this limit, different occupation patterns are distinguishable from the counting distribution. \section{Atom Counting Statistics \label{s3}} Let us now consider the counting distributions measured at the detector after the expansion for different initial states of the system of atoms trapped in the lattice. \subsection{Superfluid state} First, let us focus on an SF state, the ground state of the Bose-Hubbard model for very shallow lattices. We derive the counting distribution using the Gutzwiller ansatz \cite{gutzwiller}, which assumes that the wave function is a product of on-site coherent states. The initial state of the atoms in the lattice with \(N\) sites then reads: \begin{equation} \ket{\psi}=\prod_i^N\ket{\alpha_i}_i, \label{sf} \end{equation} where $\ket{\alpha_i}_i$ is the coherent state on site $i$, \begin{equation} \ket{\alpha_i}_i=e^{-|\alpha_i|^2/2}\sum_{n=0}^\infty \frac{\alpha_i^n}{\sqrt{n!}}\ket{n}_i\,\label{eq:cs} \end{equation} and $\ket{n}_i=(a_i^\dagger)^n\ket{0}/\sqrt{n!}$ is a Fock state with $n$ particles. Note that $\ket{\psi}$ is an eigenstate of the annihilation operator $\Psi(\textbf{r},t)$ of the expanded atoms, \begin{eqnarray} \Psi(\textbf{r},t)\ket{\psi}= \sum_i\phi_i(\textbf{r},t)\alpha_i\ket{\psi}, \end{eqnarray} where $\phi_i$ is given by eq. (\ref{eq:fi}). We can therefore write the generating function as $\mathcal{Q}(\lambda)=e^{-\lambda\sum_{ij}\alpha_i^*\alpha_j A_{ij}}$. Using eq. (\ref{eq-p}) the counting distribution $p(m)$ reads \begin{eqnarray} \label{eq-p_SF} && p(m)=\frac{\left( \sum_{ij}\alpha_i^*\alpha_jA_{ij}\right)^m}{m!}e^{- \sum_{ij}\alpha_i^*\alpha_jA_{ij}}, \end{eqnarray} where $A_{ij}$ is given by eq. (\ref{A}).
For a homogeneous superfluid with equal mean number of particles per site, $\alpha_i=\alpha$ for all $i$, and in the limit of large detectors where the diagonal elements of the matrix $A_{ij}$ are much larger than the off-diagonal elements, the counting distribution of the SF simplifies to \begin{equation} p(m)=\frac{(N|\alpha|^2A_d)^m}{m!}e^{-N|\alpha|^2A_{d}},\label{eq-p-sf} \end{equation} which corresponds to a Poissonian distribution with mean (and thus also variance) $\bar{m}=\sigma^2(m)=N|\alpha|^2 A_d$. \subsection{Mott Insulator state} Let us now consider the Mott insulating regime. We first study a Mott insulator state with one particle per site, $\ket{\psi}=\ket{11..11}$. In this case, the generating function eq. (\ref{eq-Q}) reads \begin{eqnarray} \mathcal{Q}(\lambda)&=&\bra{11..11}:e^{-\lambda\kappa\int_\Omega d\textbf{r}\Psi^\dag(\textbf{r},t_d)\Psi(\textbf{r},t_d)}:\ket{11..11} \nonumber\\ &=&1-\lambda\sum_iA_{ii}+\lambda^2\sum_{i<j}(A_{ii}A_{jj}+|A_{ij}|^2)-... .\label{eq-mottA} \end{eqnarray} We can rewrite eq. (\ref{eq-mottA}) using the permanents of the submatrices of $A$, \begin{eqnarray} \mathcal{Q}(\lambda)=1+\sum_{k=1}^N (-1)^k\lambda^k \hbox{M}_{+}(A,k),\label{Q_Mottbf} \end{eqnarray} where M$_{+}(A,k)$ denotes the sum of the permanents perm$(M)=\sum_{\sigma\in S_k}\Pi_{i=1}^kM_{i,\sigma(i)}$ of the principal submatrices $M$ of size $k$ of the matrix $A$. Note that M$_{+}(A,k)$ is closely related to the principal minors of the matrix, which are defined as the determinants of the respective block matrices. The counting distribution $p(m)$ can then be calculated using Eqs.~(\ref{eq-p}) and (\ref{Q_Mottbf}). As was outlined above, in typical experimental situations the detector is far away from the lattice and much larger than the cloud, such that the off-diagonal elements of $A_{ij}$ are negligible and the diagonal elements $A_{ii}$ are equal for all $i$.
In this case the generating function $\mathcal{Q}$ for the Mott insulator state with unit filling is given by \begin{equation} \mathcal{Q}(\lambda)=\sum_{k=0}^N\binom{N}{k}(-\lambda A_{d})^k=(1-A_d\lambda)^N,\label{MI_high_eff} \end{equation} where $A_{d}$ denotes any of the (equal) diagonal elements. The counting distribution $p(m)$ is then given by \begin{eqnarray} p(m)=\binom{N}{m}A_{d}^m(1-A_{d})^{N-m}. \end{eqnarray} This binomial distribution corresponds to the counting statistics of a Fock state. The mean $\bar{m}$ and variance $\sigma^2(m)$ of the distribution are given by \begin{equation} \bar{m}=N A_d,\,\,\,\,\,\, \sigma^2(m)=N A_{d}(1-A_d). \end{equation} Let us now consider the different occupation patterns that arise in the strongly correlated regime. In particular, we focus on such states where at most one particle occupies each site. The generating function is then calculated by eq. (\ref{Q_Mottbf}), with a correlation matrix $A'$, composed of the elements of the correlation matrix $A$ in eq. (\ref{A}) multiplied by the occupation numbers $n_i$ and $n_j$ of the involved sites, \begin{equation} A'_{ij}=n_i n_j A_{ij}.\label{A-patterns} \end{equation} Finally, for a symmetric superposition of all possible states with filling factor $N_p/N_s$, where $N_p$ is the number of particles, $N_s$ denotes the number of sites and $N_p \leq N_s$, the generating function reads \begin{eqnarray} \mathcal{Q}=1+\sum_m(-1)^m\lambda^m\mathcal{F}_{MI} (A,m,N_p,N_s) ,\label{eq-Q-Mott-general} \end{eqnarray} where \begin{eqnarray} \mathcal{F}_{MI} (A,m,N_p,N_s)=\frac{\binom{N_s-m}{N_p-m}}{\binom{N_s}{N_p}}M^+(A,m)\nonumber\\ +\frac{\binom{N_s-2m}{N_p-m}}{\binom{N_s}{N_p}}2^m\mathcal{K}(m),\label{eq:FMIl} \end{eqnarray} where $\mathcal{K}(m)$ is defined as the $m$-fold product over the sum with non-repeated indices of the real part of $A_{ij}$, $\sum_{i<j}\hbox{Re}(A_{ij})$. For example, for $m=2$,
$M_{+}(A,2)=\sum_{i<j}(A_{ii}A_{jj}+|A_{ij}|^2)$ and $\mathcal{K}(2)=\hbox{Re}(A_{ij})\hbox{Re}(A_{kl})$ with $k,l \neq i,j$. \subsection{Counting at two detectors} In this section, we consider the detection of the MI and SF state using two detectors and study the correlations between the counting events. For the MI state, the joint counting distribution $p(m,n)$ of counting $m$ particles at one detector and $n$ particles at the other is given by eq. (\ref{joined_p}), where the generating function for two detectors is given by \begin{equation} \mathcal{Q}_{2}= 1+\sum_{k=1}^N (-1)^k M^+(\lambda_1A^{(1)}+\lambda_2A^{(2)},k).\label{eq-MI-joint}\end{equation} For detectors that are located symmetrically with respect to the origin in the $x$-$y$-plane, in typical experimental situations the off-diagonal elements of $A_{ij}$ are negligible (see fig. \ref{fig-Elements-A}), and the diagonal elements $A_d$ are all equal for both detectors, $A^{(1)}_d=A^{(2)}_d=A_d$. The generating function thus simplifies to \begin{eqnarray} \mathcal{Q}_2(\lambda_1,\lambda_2)=\sum_{k=0}^N\binom{N}{k}(-A_{d})^k(\lambda_1+\lambda_2)^k\nonumber\\ =(1-A_d(\lambda_1+\lambda_2))^N,\label{MI_high_eff_2det} \end{eqnarray} and the counting distribution is the trinomial law \begin{eqnarray} p(m,n)=\frac{N!}{m!\,n!\,(N-m-n)!}\,A_d^{m+n}\,(1-2A_d)^{N-m-n}. \end{eqnarray} For the SF state, the joint counting distribution $p_{SF}(m,n)$ is the product of the two single-detector distributions $p_1(m)$ and $p_2(n)$ given by eq. (\ref{eq-p_SF}). The counting events at the two detectors are thus not correlated. \section{Results \label{res}} \subsection{Mott Insulator and Superfluid state} We consider the counting distributions of a SF and a MI state of bosons with the same average number of particles released from a three-dimensional optical lattice. We assume the limit of a large detector, where the counting distribution is determined by the on-site correlation terms. In Fig.
\ref{fig:distributions_mi_sf}, we plot the counting distributions for a SF and a MI state at different distances between the detector and the lattice. With increasing distance between the lattice and the detector, a smaller fraction of the expanded wave function is registered. The difference between the MI and the SF becomes less visible, and the mean of the counting distribution decreases. In Fig. \ref{fig:mi-sf-mean-var}, we plot the mean and the variance of the counting distributions, both normalized by dividing by $N$, for the superfluid and the Mott insulator state for a detector with fixed size at different distances $z_0$ from the lattice. \begin{figure} \caption{MI vs. SF as a function of distance from detector. Probability distribution for MI (black bars) and superfluid (white bars) states in a $3\times 3\times 3$ lattice. $\Delta_x=\Delta_y=2$ mm, $\Delta_z=2$ cm, $\kappa=1$. a) $z_0=1$ cm, b) $z_0=3$ cm, c) $z_0=5$ cm.} \label{fig:distributions_mi_sf} \end{figure} \begin{figure} \caption{$\sigma^2(m)/N$ of the counting distribution for the MI (blue squares) and SF (green circles) state with respect to the distance from the detector $z_0$. $\Delta_x=\Delta_y=2$ mm, $\Delta_z=2$ cm, $\kappa=1$.} \label{fig:mi-sf-mean-var} \end{figure} \subsection{Mott Insulator and Superfluid state with two detectors} Let us now consider two detectors of the same size that are placed symmetrically at positions $\textbf{x}_1=(x_d,0,z_0)$ and $\textbf{x}_2=(-x_d,0,z_0)$ relative to the lattice center. In the limit of large detectors, we study the joint counting distribution of the SF and MI state for different distances between the detectors. Fig. \ref{2det_mi_sf} shows the counting distributions for two overlapping detectors (left column) and for two detectors separated by $2x_d=1$ cm (right column). For the SF state, shown in the lower row of fig. \ref{2det_mi_sf}, the joint counting distribution is a Gaussian function for both cases. This is expected, as the joint counting distribution eq.
(\ref{eq-p_SF}) is a product of the single-detector counting distributions. This is analogous to the detection of coherent states of light. For the MI state shown in the upper row in fig. \ref{2det_mi_sf}, we observe a squeezed distribution, indicating the correlations of the atoms counted at the two detectors. Note that as the distance between the detectors increases, the squeezing of the distribution is less pronounced. The correlations between the counting events at the two detectors can be seen more clearly when looking at the correlation function eq. (\ref{eq-correlations}). Note that for the superfluid state, there is no difference between the joint counting distribution and the product of the single-detector distributions. For the Mott state, we study the correlations for varying distance between the two detectors $x_d$. In Fig. \ref{fig-correlations}, we show how the correlations decrease when increasing the distance between detectors $x_d$. Note that the distance $x_d$ denotes the distance between the center of the two detectors. For $x_d=0$, the detectors fully overlap, and for $x_d>\Delta$ the detectors are completely separated. \begin{figure} \caption{Joint probability distribution of an expanded MI (upper row) and a SF (lower row) in a $4\times 4$ lattice with two symmetrically placed detectors. In panels a) and c) $x_d=0$; in panels b) and d) $x_d=1$ cm. Parameters used: $z_0=1$ cm, $\Delta_z=2$ mm, $\Delta_x=\Delta_y=2$ cm, $\kappa=0.5$.} \label{2det_mi_sf} \end{figure} \begin{figure} \caption{Correlations of the joint probability distribution for an expanded MI state with two symmetrically placed detectors. As the distance between the detectors increases, the counting events at the two detectors are no longer correlated.
Parameters used: $z_0=1$ cm, $\Delta_z=2$ mm, $\Delta_x=\Delta_y=2$ cm, $\kappa=0.5$.} \label{fig-correlations} \end{figure} \subsection{Detection of insulating states with different occupation patterns}\label{sec-checkboard} Let us now focus on the detection of insulating states with different occupation patterns by particle counting. As discussed above, in order to detect the different patterns, the crossed-correlations have to be of the order of the autocorrelations. This is clear since, far away from the lattice, all the on-site correlation terms become equal. Let us discuss the example of a checkerboard state, where every second site is occupied, and a state with stripes, where every second line is occupied. For the striped state, the leading crossed-correlation terms eq. (\ref{eq-off-diag-A}) are the ones that correspond to the nearest neighbors. For the checkerboard state, where neighboring sites are not occupied, the leading terms are the ones that correspond to diagonally adjacent sites. In order to distinguish the different patterns, it is thus essential that these two leading crossed-terms are sufficiently different and at the same time comparable to the on-site correlations. From Fig. \ref{fig-Elements-A}, we see that this implies that the limit of small detectors has to be considered. However, if the detector is very small, all the terms are equal and the patterns are not distinguishable. One should thus consider an intermediate detector size. In Fig. \ref{fig-checkerboard}, we illustrate the effect for a 1D system of $N=12$ particles. We compare the counting distributions of a checkerboard-like state, where every second site is occupied, and a state where a block of six sites is occupied and a block of six sites is empty. To distinguish the two states, guided by Fig. \ref{fig-Elements-A} we choose a detector size of $\Delta=0.02$ mm, such that the ratio of the crossed-correlation terms between neighboring sites and the autocorrelations is $0.6$. Fig.
\ref{fig-checkerboard} shows that the different occupation patterns are reflected in the counting distribution. \begin{figure} \caption{The counting distributions of an expanded one-dimensional checkerboard (black bars) and striped insulating pattern (white bars) are clearly distinguishable. Parameters used: $z_0=1$ cm, $\Delta=0.1$ cm, $\Delta_z=0.02$ mm, $\kappa=1$, $N_p=12$.} \label{fig-checkerboard} \end{figure} \subsection{Detection of a Supersolid state} As for the detection of states with different occupation patterns in the insulating regime, the detection of supersolid states \cite{Legget1970,Prokof2007} requires the limit where the crossed-correlation terms for neighboring sites are comparable to the auto-correlation terms. We consider a supersolid state with $N$ sites and alternating coherent amplitudes $\alpha_{2i}=\beta$ and $\alpha_{2i-1}=\gamma$. For the limit where the crossed-correlation terms for neighboring sites are the only non-negligible interference terms, the counting distribution eq. (\ref{eq-p_SF}) is given by a Poissonian distribution with mean \begin{eqnarray} \label{eq-p-SS} && \bar{m}=\frac{N}{2}A_d(\beta^2+\gamma^2)+2 NA_{NN}\beta\gamma, \end{eqnarray} where $A_d$ denotes the diagonal elements corresponding to the on-site correlations and $A_{NN}$ denotes the nearest-neighbor crossed-correlation terms. Let us compare this to a superfluid state with a homogeneous density per site, $|\alpha_i|^2=\frac{|\beta|^2+|\gamma|^2}{2}$ for all $i$. The counting distribution eq. (\ref{eq-p_SF}) is thus given by a Poissonian distribution with mean \begin{eqnarray} \label{eq-p-SF2} \bar{m}= \frac{N}{2}A_d(\beta^2+\gamma^2)+NA_{NN}(\beta^2+\gamma^2). \end{eqnarray} From eqs. (\ref{eq-p-SS}) and (\ref{eq-p-SF2}) it is clear that a supersolid state can be distinguished from a superfluid state by particle counting. In Fig. \ref{fig-SS-SF} we illustrate this by comparing a supersolid state to a superfluid state.
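The comparison of the two Poissonian means can be made quantitative: they differ by $NA_{NN}(\beta-\gamma)^2$, which vanishes exactly for a homogeneous density. A short sketch with the densities $|\beta|^2=0.5$, $|\gamma|^2=1.5$ used in the figure (the values of $N$, $A_d$ and $A_{NN}$ below are illustrative assumptions, not taken from the text):

```python
def ss_mean(N, A_d, A_nn, beta, gamma):
    # Supersolid mean: alternating amplitudes beta (even sites), gamma (odd sites)
    return N / 2 * A_d * (beta**2 + gamma**2) + 2 * N * A_nn * beta * gamma

def sf_mean(N, A_d, A_nn, beta, gamma):
    # Superfluid mean: homogeneous state with the same average density per site
    return N / 2 * A_d * (beta**2 + gamma**2) + N * A_nn * (beta**2 + gamma**2)

# Illustrative parameters (assumptions)
N, A_d, A_nn = 10, 0.3, 0.1
beta, gamma = 0.5**0.5, 1.5**0.5  # |beta|^2 = 0.5, |gamma|^2 = 1.5

diff = sf_mean(N, A_d, A_nn, beta, gamma) - ss_mean(N, A_d, A_nn, beta, gamma)
# The gap equals N*A_nn*(beta - gamma)**2 > 0 whenever beta != gamma, so the
# two Poissonians have different means and are distinguishable by counting.
```

For $\beta=\gamma$ the gap closes and the supersolid prediction reduces to the superfluid one, consistent with the discussion above.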
\begin{figure} \caption{The counting distributions of an expanded supersolid state with $|\beta|^2=0.5$ and $|\gamma|^2=1.5$ (black bars) and a superfluid state $|\alpha|^2=1$ (white bars) are clearly distinguishable. Parameters used: $z_0=1$ cm, $\Delta=1$ cm, $\Delta_z=0.02$ mm, $\kappa=1$.} \label{fig-SS-SF} \end{figure} \section{Summary} We have studied the counting distributions of atoms falling from an optical lattice and propagating in the gravitational field. The intensity of atoms recorded at a detector located far away from an optical lattice can be decomposed into autocorrelations and crossed-correlations between the expanding modes. The ratio between these terms depends crucially on the geometry of the detector. In the limit when the detector is large compared to the expanded modes, the crossed-correlation terms are negligible and only long-range correlations of different states can be distinguished. In this limit, a SF state has a Poissonian number distribution while a MI state has a sub-Poissonian number distribution for a detector of finite size located at a distance $z_0$ from the lattice. The two states can also be readily distinguished from the joint probability distribution of counting the particles at two detectors. In the SF regime, the joint probability distribution is a product of the two independent number distributions, while in the MI regime, the distributions are highly correlated. When the detector is small compared to the expanded wave function, the crossed-correlation terms for adjacent sites are of the order of the auto-correlations. We have shown that by choosing the size of the detector in an appropriate way, different occupation patterns can be distinguished by particle counting after expansion both in the insulating and in the superfluid regime.
\begin{acknowledgments} We acknowledge financial support from the Spanish MICINN project FIS2008-00784 (TOQATA), FIS2010-18799, Consolider Ingenio 2010 QOIT, EU-IP Project AQUTE, EU STREP project NAMEQUAM, ERC Advanced Grant QUAGATUA, the Ministry of Education of the Generalitat de Catalunya, and from the Humboldt Foundation. M.R. is grateful to the MICINN of Spain for a Ram\'on y Cajal grant, and M.L. acknowledges the Alexander von Humboldt Foundation and the Hamburg Theoretical Physics Prize. \end{acknowledgments} \end{document}
\begin{document} \title{Almost non-degenerate abelian fibrations} 1. \emph{Introduction.} Let $S$ be an algebraic space and $X$ an $S$-algebraic space. One says that $X$ is a \emph{non-degenerate} $S$-\emph{abelian fibration} if there is an $S$-abelian algebraic space $A$ such that $X$ is an $A$-torsor on $S$ for the \'{e}tale topology in which case $A$ is then the \emph{albanese} of this fibration and hence is uniquely determined. In the following \S 3 we define the notion for $X$ to be an \emph{almost non-degenerate} $S$-\emph{abelian fibration} and with each such associate its \emph{albanese} over its \emph{ramification} $S$-\emph{stack} and our goal is to support the \emph{Principle} : \emph{Non-uniruled abelian fibrations are almost non-degenerate}. This support is however very partial, as the purity theorem below (\cite{grothendieck_abelian} 4.5) on which our arguments at one step crucially rely fails in positive or mixed characteristics : \emph{If $T$ is a locally noetherian regular algebraic space of residue characteristics zero and if $U$ is an open sub-algebraic space of $T$ such that $\mathrm{codim}(T-U, T)\geq 2$, then the functor $A\mapsto A|U$, from the category of $T$-abelian algebraic spaces to the category of $U$-abelian algebraic spaces, is an equivalence.} When the conclusion of this purity theorem holds, it suffices largely to consider non-uniruled abelian degenerations over spectra of discrete valuation rings, for which our results are in \S 4. The extension to the global situation in characteristic zero with hypothesis of purity and para-factoriality is in the last section \S 5. Besides some terminologies, in \S 2 there is a write-up of Gabber's theorem of \emph{purity of branch locus}. 2. \emph{Rational curves, uniruled irreducible components, regular minimal models and purity of branch locus.} Recall some terminologies. Let $k$ be an algebraically closed field. 
A $k$-\emph{rational curve} is by definition a separated integral $k$-scheme $C$ of finite type such that $k(\eta)$ is a purely transcendental extension of $k$ of transcendence degree $1$, where $\eta$ denotes the generic point of $C$. Let $V$ be a $k$-algebraic space. One says that $V$ \emph{does not contain $k$-rational curves} if every $k$-morphism from a $k$-rational curve $C$ to $V$ factors as $C\to \mathrm{Spec}(k)\to V$ for a certain $k$-point $\mathrm{Spec}(k)\to V$ of $V$. If an algebraic space $V$ is quasi-separated locally of finite type over $k$ and does not contain $k$-rational curves, then the base change of $V$ to every algebraically closed extension $k'$ of $k$ does not contain $k'$-rational curves. For a proper $k$-algebraic space $V$, saying that $V$ does not contain $k$-rational curves amounts to saying that every $k$-morphism from the $k$-projective line to $V$ factors through a $k$-point of $V$. Thus, as the $k$-projective line is simply connected, if a proper $k$-algebraic space does not contain $k$-rational curves, neither does its quotient by any finite \'{e}tale $k$-equivalence relation. The following result of Murre and Chow, which has origin in one of Zariski's proofs of his Connectedness/Main Theorem, explains the significance of non-existence of rational curves. {\bf Lemma 2.1.} --- \emph{Let $S$ be an algebraic space and $T\to S$ a morphism to $S$ from a connected locally noetherian regular algebraic space $T$. Let $X$ be a proper $S$-algebraic space whose geometric $S$-fibers do not contain rational curves.} \emph{Then every $S$-morphism from a non-empty open sub-algebraic space of $T$ to $X$ extends uniquely to an $S$-morphism from $T$ to $X$.} \begin{proof} This is \cite{grothendieck_rational} 2.6. \end{proof} Let $F$ be a finitely generated field over $k$. 
One says that $F$ is \emph{ruled} over $k$ or that $F/k$ is \emph{ruled}, if $F$ has a $k$-sub-field $K$ such that $\mathrm{Spec}(F)\to\mathrm{Spec}(K)$ is geometrically regular, that is, the extension $F/K$ is \emph{regular} in the sense of Weil, and furthermore such that $\overline{K}\otimes_KF$ is a purely transcendental extension of $\overline{K}$, where $\overline{K}$ is an algebraic closure of $K$. One says that $F$ is \emph{uniruled} over $k$ or that $F/k$ is \emph{uniruled}, if $F$ has a finite extension $F'$ such that $F'/k$ is ruled. Let $V$ be a quasi-separated algebraic space locally of finite type over $k$, $\eta$ a maximal point of $V$ and $Z$ the irreducible component of $V$ with reduced induced structure with generic point $\eta$. If $k(\eta)/k$ is ruled (resp. uniruled), one says that $V$ is \emph{ruled} (resp. \emph{uniruled}) at $\eta$ and that $Z$ is a \emph{ruled} (resp. \emph{uniruled}) irreducible component of $V$. {\bf Lemma 2.2.} --- \emph{An abelian variety over an algebraically closed field $k$ does not contain $k$-rational curves and thus in particular is not uniruled. A connected smooth algebraic group over an algebraically closed field is not ruled only if it is an abelian variety.} \begin{proof} See \cite{neron_model} 9.2/4, 9.2/1. \end{proof} Let $S$ be the spectrum of a discrete valuation ring and $t$ the generic point of $S$. Let $A$ be an $S$-abelian scheme and $X_t$ an $A_t$-torsor on $t$ for the \'{e}tale topology. Recall (\cite{raynaud_minimal}, p. 82, line -2) that $X_t$ with its $A_t$-action admits a unique extension to an $S$-scheme $X$ with an action by $A$ such that $X$ is projective flat over $S$, regular and such that the morphism \[A\times_SX\to X\times_SX,\ (a, x)\mapsto (a+x, x)\] is finite surjective. The geometric $S$-fibers of $X$ are irreducible and do not contain rational curves. Following N\'{e}ron--Raynaud, one says that $X$ is the \emph{regular} $S$-\emph{minimal model of} $X_t$.
The formation of regular minimal models commutes with every formally smooth faithfully flat base change $S'\to S$ of spectra of discrete valuation rings. {\bf Lemma 2.3.} --- \emph{Let $S$ be the spectrum of a discrete valuation ring, $t$ the generic point of $S$, $A$ an $S$-abelian scheme and $X_t$ an $A_t$-torsor on $t$ for the \'{e}tale topology. Assume that $X_t$ extends to an $S$-algebraic space $X$ which is proper over $S$, regular and with an irreducible closed $S$-fiber.} \emph{Then $X$ is the regular $S$-minimal model of $X_t$.} \begin{proof} Let $s$ be the closed point of $S$ and $x$ the generic point of $X_s$. Notice that $X$ is connected and that there is an open neighborhood of $x$ in $X$ which is a scheme (\cite{raynaud_specialization} 3.3.2). Let $Y$ be the regular $S$-minimal model of $X_t$. The identity $X_t=Y_t$ extends uniquely by (2.1) to an $S$-morphism $f: X\to Y$. Being proper birational, $f$ maps $x$ to the generic point of $Y_s$ and induces an isomorphism between spectra of discrete valuation rings \[\mathrm{Spec}(\mathcal{O}_{X, x})\ \widetilde{\to}\ \mathrm{Spec}(\mathcal{O}_{Y, f(x)}).\] By the theorem of \emph{purity of branch locus} (2.4) below, $f$ is \'{e}tale and hence is an isomorphism. \end{proof} The following theorem answers the question EGA IV 21.12.14 (v). {\bf Theorem 2.4} (Gabber). --- \emph{Let $f: X\to Y$ be a morphism, essentially of finite type, from a normal scheme $X$ to a locally noetherian regular scheme $Y$ such that $f$ is essentially \'{e}tale at all points of $X$ of codimension $\leq 1$.} \emph{Then $f$ is essentially \'{e}tale.} \begin{proof} When $f$ is moreover finite, $f$ is \'{e}tale by SGA 2 X 3.4. It follows that $f$ is essentially \'{e}tale at a point $x\in X$ if it is essentially quasi-finite at $x$. For, letting $X_{(x)}$ (resp. $Y_{(f(x))}$, resp. $f_{(x)}$) be the henselization of $X$ (resp. $Y$, resp. $f$) at $x$ (resp. $f(x)$, resp.
$x$), one can apply \emph{loc.cit.} to $f_{(x)}: X_{(x)}\to Y_{(f(x))}$, as $f_{(x)}$ is finite by Zariski's Main Theorem. So it amounts to showing that $f$ is essentially quasi-finite. It suffices to see that $f$ is essentially quasi-finite at a point $x\in X$ if it is at all generalizations of $x$. One may assume that $X$ is local of dimension $\geq 2$ with closed point $x$, that $Y$ is local with closed point $f(x)$, that $f|(X-\{x\})$ is essentially \'{e}tale and that $x=f^{-1}(f(x))$. Notice that $f$ is essentially quasi-finite at $x$ if and only if the extension $k(x)/k(f(x))$ is finite. --- \emph{Reduction to the case where $Y$ is excellent }: The completion $Y'$ of $Y$ along $f(x)$ is excellent, so is $X'=X\times_YY'$, and the normalization $X''$ of the excellent scheme $X'$ is finite over $X'$. Write $y'$ for the closed point of $Y'$ and let $x'$ be the unique point of $X'$ with image $x$ in $X$ and with image $y'$ in $Y'$. The projection $X''\to X'$ restricts to an isomorphism over $X'-\{x'\}$, since $X'-\{x'\}$, being essentially \'{e}tale over $Y'$, is regular. It suffices to show that the composition $X''\to X'\to Y'$ is essentially quasi-finite at a point $x''$ of $X''$ above $x'$. For then the extension $k(x'')/k(y')$, a priori $k(x')/k(y')$ as well, is finite and hence the extension $k(x)/k(f(x))$ is finite. --- \emph{Assume $Y$ excellent. Reduction to the case $\mathrm{dim}(X)=2$ }: One has $\mathrm{dim}(Y)>0$. Let $h\in\Gamma(Y, \mathcal{O}_Y)$ be part of a regular system of parameters at $f(x)$, let $Y'=V(h)$ and let $X'=V(h\mathcal{O}_X)=X\times_YY'$. The normalization $X''$ of the excellent scheme $X'$ is finite over $X'$ and the projection $X''\to X'$ restricts to an isomorphism over $X'-\{x\}$, as $X'-\{x\}$, being essentially \'{e}tale over $Y'$, is regular. 
It suffices to show that the composition $X''\to X'\to Y'$ is essentially quasi-finite at a point $x''\in X''$ above $x\in X'$, for the extensions $k(x'')/k(f(x))$ and $k(x)/k(f(x))$ are then finite. Note that $X''$ is purely of dimension $\mathrm{dim}(X)-1$, which is $\geq 2$ if and only if $\mathrm{dim}(X)>2$. --- \emph{Assume $Y$ excellent and $\mathrm{dim}(X)=2$. Then $f^{!}\mathcal{O}_Y=\mathcal{O}_X$ }: Now $Y$ being regular local, $\mathcal{O}_Y$ is a dualizing object on $Y$, so is $f^{!}\mathcal{O}_Y$ on $X$. As $X$ being normal local of dimension $2$ is Cohen--Macaulay, $f^{!}\mathcal{O}_Y$ has only one non-zero cohomology, say $L$, which is Cohen--Macaulay, concentrated in degree, say $d$. Let $U=X-\{x\}$ and $j: U\to X$ the open immersion. Since $f$ is essentially \'{e}tale on $U$, \[(fj)^{!}\mathcal{O}_Y=j^{!}f^{!}\mathcal{O}_Y=j^*f^{!}\mathcal{O}_Y=j^*(L[-d])=(L|U)[-d]\] is canonically isomorphic to $\mathcal{O}_U$ in $D(U, \mathcal{O}_U)$. That is, $d=0$ and $L|U$ is canonically isomorphic to $\mathcal{O}_U$. Such an isomorphism uniquely extends to an isomorphism from $\mathcal{O}_X$ onto $L=f^{!}\mathcal{O}_Y$, as $\mathrm{prof}_x(\mathcal{O}_X)=\mathrm{prof}_x(L)=2$. --- \emph{Assume $Y$ excellent and $\mathrm{dim}(X)=2$. Then $X$ is regular }: One has $\mathrm{dim}(Y)>0$. Let $h\in\Gamma(Y, \mathcal{O}_Y)$ be part of a regular system of parameters at $f(x)$, let $Y'=V(h)$, $i: Y'\to Y$ the canonical closed immersion, $f'=f\times_YY': X'\to Y'$, let the normalization of $X'=V(h\mathcal{O}_X)=X\times_YY'$ be $p: X''\to X'$ and let $f''=f'p: X''\to X'\to Y'$ be the composition. Notice that $f: X\to Y$ and $i: Y'\to Y$ are tor-independent over $Y$. So, from the identity $f^{!}\mathcal{O}_Y=\mathcal{O}_X$, it follows that $f^{'!}\mathcal{O}_{Y'}=\mathcal{O}_{X'}$. Notice next that $f''$ is essentially of complete intersection, for $Y'$ is regular and $X''$, being normal of dimension $1$, is regular.
As $f''$ is furthermore essentially \'{e}tale at all points of $X''$ above $X'-\{x\}$, the object $f^{''!}\mathcal{O}_{Y'}$ has a unique non-zero cohomology in degree $0$, which is an invertible $\mathcal{O}_{X''}$-module, and there is a canonical homomorphism, the fundamental class of $f''$, $c(f''): \mathcal{O}_{X''}\to f^{''!}\mathcal{O}_{Y'}$. This $c(f'')$ corresponds by duality to a morphism $p_{*}\mathcal{O}_{X''}\to f^{'!}\mathcal{O}_{Y'}$, which is injective and which when composed with the canonical homomorphism $\mathcal{O}_{X'}\to p_{*}\mathcal{O}_{X''}$ induces the above identity $\mathcal{O}_{X'}=f^{'!}\mathcal{O}_{Y'}$, as one verifies at each point of $X'-\{x\}$ over which $p$ is an isomorphism. So $\mathcal{O}_{X'}=p_{*}\mathcal{O}_{X''}$ and $X'$ is regular. So $X$ is regular. --- \emph{Assume $Y$ excellent, $X$ regular and $\mathrm{dim}(X)=2$. Then $f$ is essentially \'{e}tale }: Now $f$ is essentially of complete intersection. Its cotangent complex $L_f$ is a perfect complex in $D^{[-1,0]}(X, \mathcal{O}_X)$. With $L_f$ one associates a canonical ``theta divisor'' homomorphism \[\theta: \mathcal{O}_X\to \mathrm{det}(L_f),\] which, as $f|(X-\{x\})$ is essentially \'{e}tale, is an isomorphism outside $x$. As $\mathrm{prof}_x(X)=2$ and $\mathrm{det}(L_f)$ is an invertible $\mathcal{O}_X$-module, $\theta$ is an isomorphism also at $x$. So $f$ is essentially \'{e}tale at $x$ by the Jacobian Criterion. \end{proof} 3. \emph{Almost non-degenerate abelian fibrations.} {\bf Definition 3.1.} --- \emph{Let $S$ be an algebraic space. 
We say that an $S$-algebraic space $X$ is an almost non-degenerate $S$-abelian fibration if there exists in the category of $S$-algebraic spaces a groupoid whose nerve $(X_., d_., s_.)$ satisfies the conditions a) and b) }: \emph{a) $X=X_o$.} \emph{b) $d_1: X_1\to X_o$ is the structural morphism of an $X_o$-abelian algebraic space with zero section $s_o: X_o\to X_1$.} \emph{If for every smooth morphism $S'\to S$ $\mathrm{Coker}(d_o\times_SS', d_1\times_SS')=S'$ in the category of $S'$-algebraic spaces, we say that $X$ is a geometric almost non-degenerate $S$-abelian fibration. If $\mathrm{Coker}(d_o\times_SS', d_1\times_SS')=S'$ holds in the category of $S'$-algebraic spaces for every base change $S'\to S$, we say that $X$ is a universal almost non-degenerate $S$-abelian fibration.} \emph{We say that the $S$-stack $[X_.]$ defined by the groupoid $X_.$ is the ramification $S$-stack of $X$.} \emph{By a morphism $X\to X'$ of almost non-degenerate $S$-abelian fibrations, we mean a morphism of $S$-groupoids $X_.\to X'_.$.} Given every such morphism $f$ from $X$ to $X'$, $f_1: X_1\to X'_1$ is an $f_o$-homomorphism with respect to the abelian algebraic space structures $(d_1, s_o, +)$ on $X_1/X_o$ and on $X'_1/X'_o$ (``Geometric Invariant Theory'' 6.4). The isomorphism (``renversement des fl\`{e}ches'') $X_1\to X_1$ transports to $d_o: X_1\to X_o$ from $d_1: X_1\to X_o$ an $X_o$-abelian algebraic space structure with zero section $s_o: X_o\to X_1$. The structural morphism $X\to S$ factors, which is the essence of (3.1), as \[X\to [X_.]\to S\] in such a tautological way that the projection $X\to [X_.]$ is a torsor for the lisse-\'{e}tale topology under an $[X_.]$-abelian algebraic stack $A$ verifying $A\times_{[X_.]}X=X_1$. We call $A/[X_.]$ the \emph{albanese} of $X/S$, cf. \cite{basic} 7.2+7.3. Notice that $X/S$ is flat (resp. proper, resp. of finite presentation) if and only if $[X_.]/S$ is flat (resp. proper, resp. of finite presentation). 
For every $S$-algebraic space $S'$, $X\times_SS'$ is an almost non-degenerate $S'$-abelian fibration with defining $S'$-groupoid $X_.\times_SS'$. Let $E$ be an algebraic stack. We say that an $E$-algebraic stack $F$ is an \emph{almost non-degenerate} $E$-\emph{abelian fibration} if there is a smooth surjective morphism from an algebraic space $S$ to $E$ such that $X=F\times_ES$ is an almost non-degenerate $S$-abelian fibration endowed with a descent datum on its $S$-groupoid $X_.$ relative to $S\to E$. We say that $F/E$ is \emph{geometric} (resp. \emph{universal}) if $X/S$ is. If the $S$-groupoid $X_.$ is simply connected with $\mathrm{Coker}(d_o, d_1)=S$, that is, if $[X_.]=S$, then $X/S$ is a \emph{non-degenerate abelian fibration}, namely, a torsor for the \'{e}tale topology under an $S$-abelian algebraic space. One says correspondingly that an almost non-degenerate abelian fibration $F/E$ as above is \emph{non-degenerate} when $F\times_ES/S$ is non-degenerate. 4. \emph{Non-uniruled abelian fibrations over spectra of discrete valuation rings.} {\bf Theorem 4.1.} --- \emph{Let $S$ be the spectrum of a discrete valuation ring, $t$ the generic point of $S$, $s$ the closed point and $\overline{s}$ the spectrum of an algebraic closure of $k(s)$. Let $A_t$ be a $t$-abelian variety and $X_t$ an $A_t$-torsor on $t$ for the \'{e}tale topology. Assume that $X_t$ extends to a separated faithfully flat $S$-algebraic space $X$ of finite type such that not all irreducible components of $X_{\overline{s}}$ are uniruled.} \emph{Then }: \emph{a) There is a spectrum $t'$ of a finite separable extension of $k(t)$ such that, if $S'$ denotes the normalization of $S$ in $t'$, $A_t\times_tt'$ extends to an $S'$-abelian scheme $A'$.} \emph{b) Exactly one irreducible component of $X_{\overline{s}}$ is not uniruled. In particular, $X_{\overline{s}}$ is irreducible if it does not have uniruled irreducible components.} {\bf Theorem 4.2.} --- \emph{Keep the assumptions of $(4.1)$.
Assume furthermore that $X_{\overline{s}}$ is connected, proper and separable.} \emph{Then $X$ is proper over $S$ and normal, $X_{\overline{s}}$ has a unique non-uniruled irreducible component, $A_t$ extends to an $S$-abelian scheme $A$, the regular $S$-minimal model of $X_t$ is an $A$-torsor $F$ on $S$ for the \'{e}tale topology, the canonical birational map from $X$ to $F$ is an $S$-rational map $p$ whose domain of definition contains all points where $X$ is geometrically factorial of equal characteristic or regular, $p$ is \'{e}tale at precisely the points of $\mathrm{Dom}(p)$ outside the image in $X$ of the uniruled irreducible components of $X_{\overline{s}}$ and $p^{-1}$ extends to a proper $S$-birational morphism from $F$ onto $X$ if $X_{\overline{s}}$ does not contain $\overline{s}$-rational curves.} {\bf Corollary 4.3.} --- \emph{Keep the assumptions of $(4.1)$. Assume furthermore that $X_{\overline{s}}$ is proper, separable and does not have uniruled irreducible components and that $X$ is at each of its geometric codimension $\geq 2$ points either geometrically factorial of equal characteristic or regular.} \emph{Then $A_t$ extends to an $S$-abelian scheme $A$ and $X$ is an $A$-torsor on $S$ for the \'{e}tale topology.} {\bf Lemma 4.4.} --- \emph{Let $S$ be the spectrum of an excellent discrete valuation ring with generic point $t$.
Let $S'\to S$ be a surjective morphism to $S$, essentially of finite type, from a local integral scheme $S'$ of dimension $1$ with generic point $t'$ such that the extension $k(t')/k(t)$ is regular of transcendence degree $d\geq 1$.} \emph{Then there is an $S$-scheme $S_o$, which is the spectrum of a discrete valuation ring and which is quasi-finite surjective over $S$, and there exists a sequence of morphisms, $S_d\to S_{d-1}\to\cdots\to S_1\to S_o$, each of which is smooth, surjective, purely of relative dimension $1$, with geometrically connected fibers, such that the normalization $S''$ of $S'\times_SS_o$ is $S_o$-isomorphic to a localization of $S_d$. In particular, $S''$ is essentially smooth over $S_o$.} \begin{proof} This is \cite{dejong} 2.13. \end{proof} {\bf Lemma 4.5.} --- \emph{Let $p: V\to S$ be a morphism, essentially of finite type, from a regular local scheme $V$ of dimension $1$ with closed point $v$ to a regular local scheme $S$ of dimension $>1$ with closed point $s$ such that $p$ is birational and that $p(v)=s$.} \emph{Then $k(v)$ has a sub-$k(s)$-extension $K$ such that the extension $k(v)/K$ is purely transcendental.} \begin{proof} Recall the argument of Zariski. Let $f: X\to S$ be the blow-up of $S$ along $s$ (EGA II 8.1.3). Then $X$ is regular (EGA IV 19.4.3, 19.4.4), $f$ is proper and there is one and only one $S$-morphism $p_1: V\to X$ by the valuative criterion of properness. Write $s_1=p_1(v)$, $S_1=\mathrm{Spec}(\mathcal{O}_{X, s_1})$. Denote the canonical morphism $V\to S_1$ again by $p_1$. Blowing up $S_1$ along $s_1$ and localizing, one obtains similarly $p_2: V\to S_2$. Continuing this way, one finds a projective system of regular local schemes ``$\underleftarrow{\mathrm{Lim}}$'' $S_i$, indexed by $\mathbf{N}$, with $S_o=S$, with each transition morphism being birational, local and essentially of finite type. There is a unique $S$-morphism $(p_i): V\to$ ``$\underleftarrow{\mathrm{Lim}}$'' $S_i$ with $p_o=p$. 
Write $s_n$ for the closed point of $S_n$, $n\in\mathbf{N}$. It suffices to show that the projective system ``$\underleftarrow{\mathrm{Lim}}$'' $S_i$ is essentially constant and has $V$ as its limit. For, if $n$ is the smallest integer such that $V=S_n$, the extension $k(v)/k(s_{n-1})$ is purely transcendental. Let $\mathfrak{m}$ denote the ideal of $\mathcal{O}_S$ defining the closed point $s$. For every coherent $S$-ideal $I$ that is non-zero and distinct from $\mathcal{O}_S$, $I\mathcal{O}_X$ is a non-zero sub-ideal of $\mathfrak{m}\mathcal{O}_X=\mathcal{O}_X(1)$ (EGA II 8.1.7). Thus \[I\mathcal{O}_X\otimes_X\mathcal{O}_X(-1)=I_X(-1)\] is a non-zero ideal of $\mathcal{O}_X$. Write $I_1$ for the localization of $I_X(-1)$ at $s_1$. Then $I\mathcal{O}_V$ is \emph{strictly} contained in $I_1\mathcal{O}_V$. If $I_1$ is distinct from $\mathcal{O}_{S_1}$ and if $S_2$ is distinct from $S_1$, one obtains as above an ideal $I_2$ of $\mathcal{O}_{S_2}$ such that $I_1\mathcal{O}_V$ is strictly contained in $I_2\mathcal{O}_V$. As $V$ is noetherian, this process of producing ideals $I_1, I_2, \cdots$ from a given $I$ eventually stops. That is, for a certain integer $N\geq 1$, either $S_N$ is of dimension $1$ (thus is equal to $V$) or $I_N=\mathcal{O}_{S_N}$. Assume that the latter case holds for each coherent $S$-ideal $I$ that is distinct from $0$ and $\mathcal{O}_S$. For otherwise the above assertion is already proven. Choose $a_j, b_j\in\Gamma(S, \mathcal{O}_S)$, $a_j\mathcal{O}_V=b_j\mathcal{O}_V$ non-zero, $j=1,\cdots, d$, where $d=\mathrm{deg.tr.}(k(v)/k(s))$, such that the images of the fractions \[a_1/b_1,\ \cdots,\ a_d/b_d\] in $k(v)$ form a basis of transcendence over $k(s)$. By the assumption just made applied to the ideals $a_j\mathcal{O}_S, b_j\mathcal{O}_S$, there exists an integer $N_j\geq 1$ for each $j=1,\cdots, d$ such that either $a_j/b_j$ or $b_j/a_j$ is a section of $\mathcal{O}_{S_{N_j}}$ over $S_{N_j}$. 
Being invertible on $V$, $a_j/b_j$ or equivalently $b_j/a_j$ is invertible on $S_{N_j}$. Hence, for some integer $N\geq 1$, $k(v)$ is algebraic over $k(s_N)$ and $V$ is essentially quasi-finite over $S_N$. By Zariski's Main Theorem, $V$ is isomorphic to $S_N$. \end{proof} 4.6. \emph{Proof of} (4.1). See \cite{basic} \S 10 for an application of (4.1) \emph{b}). We reduce the proof to that of (4.2). --- \emph{Reduction to the case where $S$ is strictly henselian }: If $\overline{t}$ denotes the spectrum of a separable closure of $k(t)$ and if $\ell$ is a prime number prime to the characteristic of $k(s)$, the claim \emph{a}) is equivalent to the claim that the $\ell$-adic monodromy representation associated with the $t$-abelian variety $A_t$, \[\rho_{\ell, \overline{t}}: \pi_1(t, \overline{t})\to \mathrm{GL}(H^1(A_{\overline{t}}, \mathbf{Q}_{\ell})),\] when restricted to an inertia subgroup relative to $S$, has finite image (\cite{neron_model} 7.4/5). For both assertions \emph{a}), \emph{b}), replacing $S$ by its strict henselization $S_{(\overline{s})}$ at $\overline{s}$, $X$ by $X\times_SS_{(\overline{s})}$ and $A_t$ by $A_t\times_tt^{hs}$, where $t^{hs}$ denotes the generic point of $S_{(\overline{s})}$, one may assume that $S$ is strictly henselian. --- \emph{Assume $S$ strictly henselian. Reduction to the case where $S$ is complete }: Let $S'$ be the completion of $S$ along $s$, $t'$ the generic point of $S'$ and $\overline{t'}$ the spectrum of a separable closure of $k(t')$ containing a separable closure $k(\overline{t})$ of $k(t)$. The projection $t'\to t$ induces by SGA 4 X 2.2.1 an isomorphism \[\pi_1(t', \overline{t'})\ \widetilde{\to}\ \pi_1(t, \overline{t}),\] relative to which the proper base change isomorphism (SGA 4 XII 5.1) \[H^1(A_{\overline{t}}, \mathbf{Q}_{\ell})\ \widetilde{\to}\ H^1(A_{\overline{t'}}, \mathbf{Q}_{\ell})\] is equivariant. 
Replacing $S$ by $S'$, $X$ by $X\times_SS'$ and $A_t$ by $A_t\times_tt'$, one may assume moreover that $S$ is complete. --- \emph{Assume $S$ strictly henselian and complete. Reduction to the case where $X$ is a proper flat $S$-scheme }: Let the maximal points of $X_s$ be $x_1,\cdots, x_n$. There is an open neighborhood $V$ of $\{x_1,\cdots, x_n\}$ in $X$ which is a scheme (\cite{raynaud_specialization} 3.3.2). Replacing $X$ by $V$, one may assume that $X$ is a separated flat $S$-scheme of finite type. By Nagata, $X$ admits an open $S$-immersion into a proper $S$-scheme $P$. One may then replace $X$ by its closed image in $P$ and assume that $X$ is a proper flat $S$-scheme. --- \emph{Assume $S$ strictly henselian, complete and that $X$ is a proper flat $S$-scheme. Reduction to the case where $X_s$ is separable over $k(s)$ }: Notice that $X$ is integral. For, one has $\mathrm{Ass}(X)=\mathrm{Ass}(X_t)$, as $X$ is flat over $S$ (EGA IV 3.3.1). Notice also that $S$, being complete, is excellent. Let the maximal points of $X_s$ be $x_1,\cdots, x_n$. By applying (4.4) to the morphisms $\mathrm{Spec}(\mathcal{O}_{X, x_i})\to S$, one finds an $S$-scheme $S_o$, which is the spectrum of a discrete valuation ring and which is finite surjective over $S$, such that the normalization of $X\times_SS_o$, $X_o$, is smooth over $S_o$ at all maximal points of the closed fiber of $X_o/S_o$. Again, at least one irreducible component of the geometric closed fiber of $X_o/S_o$ is not uniruled. Observe that $X/S$ verifies the claims \emph{a})+\emph{b}) as long as $X_o/S_o$ does. Thus, replacing $S$ by $S_o$, $X$ by $X_o$ and $A_t$ by $A_t\times_tt_o$, where $t_o$ denotes the generic point of $S_o$, one may assume that $X$ is smooth over $S$ at all maximal points of $X_s$. The fiber $X_{\overline{s}}$ is connected, as $X$ is proper over $S$. To finish, it suffices to apply (4.2). \begin{flushright} $\square$ \end{flushright} 4.7. \emph{Proof of} (4.2), (4.3). 
See \cite{basic} 10.2 for an application of (4.3). One may assume that $S$ is strictly henselian. Being separated faithfully flat over $S$ of finite type with geometric fibers proper and connected, $X$ is by EGA IV 15.7.10 proper over $S$. Moreover, as $X_s$ is separable, $X$ is normal. And, the open sub-algebraic space $V$ of $X$ consisting of all points at which $X\to S$ is smooth is $S$-schematically dense in $X$. Fix an $S$-section $o$ of $V$ (EGA IV 17.16.3) by means of which one identifies $X_t=V_t$ with $A_t$. Consider a smoothening of $X$ (\cite{neron_model} 3.1/1, 3.1/3), $f: X'\to X$, which is constructed as the composition of a finite sequence of blow-ups with centers lying above the complement of $V$ in $X$. Let $W'$ be the open sub-algebraic space of $X'$ consisting of all points at which $X'\to S$ is smooth. Naturally, $V$ can be identified with an open sub-algebraic space of $W'$ and one has $W'_t=V_t$. Let $d=\mathrm{dim}(A_t)$. Choose a non-zero section $\omega'\in\Gamma(W', \Omega^d_{W'/S})$ such that the support of the divisor $\mathrm{Div}_{W'}(\omega')$ is \emph{strictly} contained in $W'_s$. Such a section $\omega'$ clearly exists and is unique up to multiplication by a unit of $\Gamma(S, \mathcal{O}_S)$. The identification $W'_t=A_t$, provided by the above section $o$, has a unique extension to an $S$-morphism \[p: W'\to A,\] where $A$ denotes the $S$-N\'{e}ron model of $A_t$. In the language of N\'{e}ron models, $W'$ is a weak $S$-N\'{e}ron model of $W'_t$ (\cite{neron_model} 3.5/1, 3.5/2). Its open sub-algebraic space $U'=W'-\mathrm{Supp}(\mathrm{Div}_{W'}(\omega'))$ admits a canonical $S$-birational group law (\emph{loc.cit.} 4.3/5) which extends the group structure of $U'_t=A_t$ over $t$. 
And, the restriction of $p$ to $U'$, $p|U': U'\to A$, is an $S$-schematically dense open immersion, which solves the universal problem of extending the $S$-birational group law of $U'$ to an $S$-group law (\emph{loc.cit.} 4.3/6, 5.1/5). Let the maximal points of $X_s$ be $x_1,\cdots, x_n$. They lie in $V$ and can thus be considered as points of $W'$. Notice that there is an open neighborhood of $\{x_1,\cdots, x_n\}$ in $W'$ which is a scheme (\cite{raynaud_specialization} 3.3.2). Observe that if a point $x$ among $x_1,\cdots, x_n$ belongs to $V-U'$, that is, if $\mathrm{Div}_{W'}(\omega')_{x}$ is not zero, then \[p: \mathrm{Spec}(\mathcal{O}_{W', x})\to \mathrm{Spec}(\mathcal{O}_{A, p(x)})\] is not an isomorphism. This implies by (4.5) that $X_{\overline{s}}$ is uniruled at $x$. Here we have denoted again by $x$ the unique point of $X_{\overline{s}}$ that projects to $x$ in $X_s$. By hypothesis there is at least one point of $\{x_1,\cdots, x_n\}$, say $x_1$, at which $X_{\overline{s}}$ is not uniruled. One finds thus $x_1\in V\cap U'$ and that $A_{\overline{s}}$ is not uniruled at $p(x_1)$. So $A_{\overline{s}}$ is an $\overline{s}$-abelian variety (2.2). So $A$ is an $S$-abelian scheme, $U'_{\overline{s}}$ is irreducible, $X_{\overline{s}}$ is by (4.5) uniruled at all its maximal points other than $x_1$ and $p$ is \'{e}tale at precisely the points of $U'$. If $X_{\overline{s}}$ does not contain $\overline{s}$-rational curves, the rational map $p^{-1}$ extends by (2.1) to an $S$-morphism, hence a proper $S$-birational morphism, from $A$ onto $X$. Finally, being a trivial $A_t$-torsor, $X_t$ has a trivial $A$-torsor $F$ as its regular $S$-minimal model. The assertion on the points where the canonical birational map from $X$ to $F$ is \'{e}tale (resp. is defined) follows by (2.4) (resp. (5.2)+(5.3) below). Recall (EGA IV 21.13.9, 21.13.11) that a noetherian normal local scheme of dimension $\geq 2$ is factorial (resp. 
geometrically factorial) if and only if it is para-factorial (resp. geometrically para-factorial) at all of its points (resp. geometric points) of codimension $\geq 2$. With the conditions of (4.3), such a birational map is everywhere defined and is an $S$-isomorphism. \begin{flushright} $\square$ \end{flushright} {\bf Lemma 4.8.} --- \emph{Let $n, \delta_1,\cdots, \delta_n$ be integers $\geq 1$ such that the greatest common divisor of $\delta_1,\cdots, \delta_n$ is $1$. Let $S$ be an algebraic space, $A$ an $S$-abelian algebraic space, $X$ an $A$-torsor on $S$ for the \'{e}tale topology and $S_i$ an $S$-algebraic space finite flat of finite presentation over $S$ of constant degree $\delta_i$, $i=1,\cdots, n$. Suppose that $X$ has sections over all $S_i$, $i=1,\cdots, n$.} \emph{Then $X$ is a trivial $A$-torsor.} \begin{proof} By considering $A$ as $\mathrm{Pic}^o_{A^{*}/S}$, $A^{*}$ being the dual abelian algebraic space of $A$, one defines for each $i=1,\cdots, n$ the norm homomorphism \[N_i: \prod_{S_i/S} A_{S_i}\to A.\] The composition \[A\to \prod_{S_i/S}A_{S_i}\stackrel{N_i}{\longrightarrow} A\] is equal to $\delta_i\mathrm{Id}_A$, where \[A\to \prod_{S_i/S}A_{S_i},\ x\mapsto x_{S_i}=x\times_SS_i\] is the adjunction morphism associated with the pair of adjoint functors \[\prod_{S_i/S}-\ ,\ S_i\times_S-.\] Let $\sigma_i$ be a section of $X$ over $S_i$, $i=1,\cdots, n$, and choose integers $e_1,\cdots, e_n$ such that $e_1\delta_1+\cdots+e_n\delta_n=1$. Consider the $S$-morphism \[q: X\to A,\ x\mapsto \sum^n_{i=1}e_i.N_i(x_{S_i}-\sigma_i),\] where $x_{S_i}-\sigma_i$ denotes the unique local section $a_i$ of $A_{S_i}$ satisfying $a_i+\sigma_i=x_{S_i}$. For each local $S$-section $a$ (resp. $x$) of $A$ (resp. $X$), one has \[q(a+x)=\sum^n_{i=1}e_i(N_i(a_{S_i})+N_i(x_{S_i}-\sigma_i))=(\sum^n_{i=1}e_i\delta_i.a)+q(x)=a+q(x).\] Being thus an $A$-equivariant morphism between $A$-torsors, $q$ is an isomorphism. 
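For instance, with illustrative values (not imposed by the statement) $n=2$, $\delta_1=2$ and $\delta_2=3$, one may take $e_1=-1$ and $e_2=1$, since $(-1)\cdot 2+1\cdot 3=1$, so that the trivialization above reads \[q: X\to A,\ x\mapsto N_2(x_{S_2}-\sigma_2)-N_1(x_{S_1}-\sigma_1).\]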
\end{proof} {\bf Theorem 4.9.} --- \emph{Keep the assumptions of $(4.1)$. Assume furthermore that $X_{\overline{s}}$ is connected, proper, of total multiplicity prime to the characteristic of $k(\overline{s})$ and that $X$ is regular.} \emph{Then there is a non-degenerate abelian fibration $F/E$ over an $S$-algebraic stack $E$ with $E\times_St=t$ which extends $X_t$ over $t$, where $E$ is finite flat over $S$, tame along $s$ and regular. The identity $X_t=F\times_EE_t$ extends to a proper $S$-morphism $p$ from $X$ to $F$ which is \'{e}tale at precisely the points outside the image of the uniruled irreducible components of $X_{\overline{s}}$. Such $(F/E, p)$ is unique up to unique $S$-isomorphisms and its formation commutes with every formally smooth faithfully flat base change $T\to S$ of spectra of discrete valuation rings.} {\bf Theorem 4.10.} --- \emph{Keep the hypotheses of $(4.9)$, except the regularity of $X$. Assume that $X$ is at each of its geometric points either geometrically factorial of equal characteristic or regular. Assume furthermore that $X_{\overline{s}}$ does not have uniruled irreducible components.} \emph{Then $X$ is regular and is a universal almost non-degenerate $S$-abelian fibration and $X_{\overline{s}}$ does not contain $\overline{s}$-rational curves. If $A/E$ denotes the albanese of $X/S$, the action of $A_t$ on $X_t$ extends uniquely to an action on $X$ by the $S$-N\'{e}ron model \[\prod_{E/S}A\] of $A_t$.} 4.11. \emph{Proof of} (4.9). --- \emph{Case where $S$ is strictly henselian }: Let $\overline{t}$ be the spectrum of a separable closure of $k(t)$ and let $\overline{\eta}$ be a geometric generic point of $X_{\overline{t}}$. 
The projection $X_{\overline{t}}\to X$ induces the specialization homomorphism (SGA 1 X 2) \[sp: \pi_1(X_{\overline{t}}, \overline{\eta})\to \pi_1(X, \overline{\eta}).\] The image of $sp$ is a normal subgroup of finite index (\cite{raynaud_specialization} 6.3.5) and its associated monodromy representation \[\pi_1(X, \overline{\eta})\to \mathrm{Coker}(sp)=G\] corresponds by Galois theory to an $X$-algebraic space $X'$ which is connected and finite \'{e}tale Galois over $X$ with Galois group $G$. Let $S'=\mathrm{Spec}\ \Gamma(X', \mathcal{O}_{X'})$, which is the spectrum of a discrete valuation ring. Let the generic (resp. closed) point of $S'$ be $t'$ (resp. $s'$). Then $X'_{t'}=X_t\times_tt'$. By \cite{raynaud_specialization} 6.3.5+6.3.7, $X'_{s'}$ is of total multiplicity $1$, $k(s)=k(s')$ and $G$ is cyclic of order equal to the total multiplicity $\delta$ of $X_s$, as $\delta$ is prime to the characteristic of $k(s)$ and as $X$, being regular, has geometrically factorial local rings. Let the maximal points of $X'_{s'}$ be $x'_1,\cdots, x'_n$ and let $Z'_i$ be the closed image of $\mathrm{Spec}(\mathcal{O}_{X'_{s'}, x'_i})\to X'_{s'}$ in $X'_{s'}$, $i=1,\cdots, n$. By \cite{raynaud_specialization} 7.1.2, for each $i=1,\cdots, n$, there is a regular closed $S'$-immersion $S'_i\hookrightarrow X'$ such that $S'_i$ is finite flat over $S'$ of rank equal to the total multiplicity $\delta'_i$ of $Z'_i/s'$ and such that $S'_i$ intersects $X'_{s'}$ at a unique point of $Z'_i\setminus\bigcup_{j\neq i}Z'_j$. The greatest common divisor of $\delta'_1,\cdots, \delta'_n$ is by definition the total multiplicity of $X'_{s'}$, that is, $1$. Thus by (4.8) the $(A_t\times_tt')$-torsor $X'_{t'}=X_t\times_tt'$ admits a $t'$-point; by means of one such $t'$-point, one identifies $X'_{t'}$ with $A_t\times_tt'$. Write $W'$ for the open sub-algebraic space of $X'$ which consists of all points at which $X'\to S'$ is smooth. 
Each $t'$-point of $X'_{t'}$ uniquely extends to an $S'$-section of $W'$, as $X'$ is proper over $S'$ and regular. That is, $W'$ is a weak $S'$-N\'{e}ron model of $W'_{t'}=X'_{t'}$ (\cite{neron_model} 3.5/1). Let $d=\mathrm{dim}(W'_{t'})$. If one chooses a non-zero section $\omega'\in\Gamma(W', \Omega^d_{W'/S'})$ such that the divisor $\mathrm{Div}_{W'}(\omega')$ has support strictly contained in $W'_{s'}$, the open $U'=W'-\mathrm{Supp}(\mathrm{Div}_{W'}(\omega'))$ has an $S'$-birational group law which extends the group structure of $U'_{t'}=X'_{t'}=A_t\times_tt'$ over $t'$. One argues as in (4.7) that the $S'$-N\'{e}ron model of $A_t\times_tt'$ is an $S'$-abelian scheme $A'$ and that the regular $S'$-minimal model $F'$ of $X'_{t'}$ is a trivial $A'$-torsor. The identity $X'_{t'}=F'_{t'}$ extends uniquely by (2.1) to an $S'$-morphism $p': X'\to F'$, which is equivariant with respect to the canonical action of $G$ on $X'$ and on $F'$. Moreover, $p'$ is proper surjective and is \'{e}tale at precisely the points of $U'$ and $U'$ is the complement of the image of the uniruled irreducible components of $X'_{\overline{s'}}$, where $\overline{s'}$ denotes the spectrum of an algebraic closure of $k(s')$. The quotient of $p'$ by $G$, \[p=[p'/G]: [X'/G]=X\to [F'/G]=F,\] is consequently proper and is \'{e}tale at precisely the points outside the image of the uniruled irreducible components of $X_{\overline{s}}$. The projection \[[F'/G]=F\to [S'/G]=E\] is a non-degenerate abelian fibration with albanese $[A'/G]=A$. The algebraic stack $E$ is finite flat over $S$, regular and tame along $s$, as $S'$ is. Over $t$, one has $E\times_St=[S'_{t}/G]=[t'/G]=t$. The formation of $(F/E, p)$ evidently commutes with every formally smooth faithfully flat base change $T\to S$ of spectra of strictly henselian discrete valuation rings. It remains to characterize $(F/E, p)$ up to unique $S$-isomorphisms. 
Let $(F^{\natural}/E^{\natural}, p^{\natural})$ be an alternative with albanese $A^{\natural}$. Let $U=U'/G$ be the complement in $X$ of the image of the uniruled irreducible components of $X_{\overline{s}}$. There is a unique $U$-isomorphism of $U$-abelian algebraic spaces $A^{\natural}\times_{E^{\natural}}U=A\times_EU$ extending the identity morphism of $A_t\times_tX_t$. For, the restriction functor from the category of $U$-abelian algebraic spaces to the category of $X_t$-abelian algebraic spaces is fully faithful. Let $U_1^{\natural}$ be the open sub-$U$-algebraic space of $A^{\natural}\times_{E^{\natural}}U=A\times_EU=A_U$ which has image $U\times_{E^{\natural}}U$ by the isomorphism \[r^{\natural}=(\mu^{\natural}, p_2^{\natural}): A^{\natural}\times_{E^{\natural}}F^{\natural}\ \widetilde{\to}\ F^{\natural}\times_{E^{\natural}}F^{\natural},\] where $p_2^{\natural}$ (resp. $\mu^{\natural}$) is the second projection (resp. represents the action of $A^{\natural}$ on $F^{\natural}$). Write $U_1^{\natural'}$ for the inverse image of $U_1^{\natural}$ by the projection $A'_{U'}\to A_U$ and write $x'$ (resp. $y'$) for the generic point of $U'_{s'}$ (resp. $A'_{U'}\times_{S'}s'$). Now $U_1^{\natural'}$ contains $y'$, and $r^{\natural}$ induces an $S$-morphism \[(\mathrm{Spec}(\mathcal{O}_{U^{\natural'}_1, y'})\rightrightarrows\mathrm{Spec}(\mathcal{O}_{U', x'}))\to (U\times_{E^{\natural}}U\rightrightarrows U),\] which in turn by quotient induces an $S$-morphism $S'\to E^{\natural}$. This latter is smooth surjective, since the composition $U'\to S'\to E^{\natural}$ is. As both $S'$ and $E^{\natural}$ are finite over $S$, $S'\to E^{\natural}$ is finite \'{e}tale surjective and hence, for each integer $n\geq 0$, $\mathrm{cosq}_o(S'/E^{\natural})_n$ is the normalization of $S$ in $\mathrm{cosq}_o(t'/t)_n$. So $E^{\natural}=E$. 
To prove that $A^{\natural}=A$, it suffices to prove the equality $A^{\natural}\times_ES'=A\times_ES'=A'$ and that the descent data on $A'$ relative to $S'\to E$ corresponding to $A^{\natural}$ and to $A$ coincide. One has a unique $S'$-isomorphism of $S'$-abelian algebraic spaces $A^{\natural}\times_ES'=A\times_ES'$ which extends the identity morphism of $A'_{t'}$, for the restriction functor from the category of $S'$-abelian algebraic spaces to the category of $t'$-abelian algebraic spaces is fully faithful. The coincidence of the two descent data is deduced in the same way, since the restriction functor from the category of abelian algebraic spaces over $G_{S'}$ (resp. $(G\times G)_{S'}$) to the category of abelian algebraic spaces over $G_{t'}$ (resp. $(G\times G)_{t'}$) is fully faithful. The identity $F^{\natural}=F$ follows by a similar argument based on the uniqueness of regular minimal models. Finally, $p^{\natural}=p$, as one has $(p^{\natural}\times_ES')|X'_{t'}=(p\times_ES')|X'_{t'}$. --- \emph{General case }: Let $S_{(\overline{s})}$ denote the strict henselization of $S$ at $\overline{s}$. Let $\pi\in\Gamma(S, \mathcal{O}_S)$ be a uniformizer and $f: X\to S$ the structural morphism. Notice that the cycle $\Delta=f^*\mathrm{Div}_S(\pi)/\delta$ is integral, where $\delta$ denotes the total multiplicity of $X_{\overline{s}}$, for $\Delta\times_SS_{(\overline{s})}$ is integral on $X\times_SS_{(\overline{s})}$. With $\Delta$ one associates a canonical $\mu_{\delta}$-torsor on $X$ for the \'{e}tale topology, $X'\to X$, which after the base change $S_{(\overline{s})}\to S$ corresponds to the specialization homomorphism of fundamental groups above. In particular, it suffices to define $E$ to be $[S'/\mu_{\delta}]$, where $S'$ is defined to be $\mathrm{Spec}\ \Gamma(X', \mathcal{O}_{X'})$. There is by quotient by $\mu_{\delta}$ an $S$-morphism \[[X'/\mu_{\delta}]=X\to [S'/\mu_{\delta}]=E.\] Let $t'$ be the generic point of $S'$. 
One verifies after the base change $S_{(\overline{s})}\to S$ that the $S'$-N\'{e}ron model of $A_t\times_tt'$ is an $S'$-abelian scheme $A'$ and that the regular $S'$-minimal model of $X'_{t'}=X_t\times_tt'$ is an $A'$-torsor $F'$ on $S'$ for the \'{e}tale topology. On $F'\to S'$, $\mu_{\delta}$ acts compatibly. The non-degenerate abelian fibration \[[F'/\mu_{\delta}]=F\to [S'/\mu_{\delta}]=E\] has albanese $[A'/\mu_{\delta}]=A$. The identity $X'_{t'}=F'_{t'}$ extends by (2.1) to a unique $S'$-morphism $p': X'\to F'$. Write $p=[p'/\mu_{\delta}]$, which is proper and is \'{e}tale at precisely the points outside the image of the uniruled irreducible components of $X_{\overline{s}}$. This $(F/E, p)$ is unique up to unique $S$-isomorphisms and its formation commutes with every formally smooth faithfully flat base change $T\to S$ of spectra of discrete valuation rings, as one verifies after the base change $S_{(\overline{s})}\to S$. \begin{flushright} $\square$ \end{flushright} 4.12. \emph{Proof of} (4.10). Keep the notations of (4.11). As $X_{\overline{s}}$ by hypothesis does not have uniruled irreducible components, the morphism $p': X'\to F'$ is an isomorphism (4.2) and $X'_{\overline{s'}}$, hence $X_{\overline{s}}$ as well, does not contain rational curves. Clearly, $X=F$ is an almost non-degenerate $S$-abelian fibration with ramification stack $E$ and albanese $A$. This fibration is universal. For, as $\delta$ is prime to the residue characteristics of $S$, the formation of the quotient $S'/\mu_{\delta}$ commutes with every base change $T\to S$. Write \[\overline{A}=\prod_{E/S}A,\] which is the kernel of the diagram \[(d_o^*, d_1^*): \prod_{S'/S}A'\rightrightarrows \prod_{S''/S}A'',\] where $S''=\mu_{\delta}\times_SS'$, where $d_1, d_o: S''\to S'$ denote respectively the second projection and the morphism representing the action of $\mu_{\delta}$ on $S'$, and $A''=A\times_ES''=d_o^*A'=d_1^*A'$. 
In particular, $\overline{A}$ is a separated $S$-group scheme of finite type, since both \[\prod_{S'/S}A',\ \prod_{S''/S}A''\] are separated $S$-group schemes of finite type (\cite{neron_model} 7.6/4). Moreover, $\overline{A}(t^{hs})=\overline{A}(S_{(\overline{s})})$, where $t^{hs}$ denotes the generic point of $S_{(\overline{s})}$, as $A'$ is an $S'$-abelian scheme. Next, let $T$ be an affine $S$-scheme and let $T_o$ be a closed sub-$S$-scheme of $T$ defined by an ideal $I$ with $I^2=0$. By applying the functor $H^0(\mu_{\delta}, -)$ to the exact sequence of $\mu_{\delta}$-modules \[0\to\Gamma(T\times_SS', I\otimes_S\mathrm{Lie}(A'/S'))\to A'(T\times_SS')\to A'(T_o\times_SS')\to 0\] one obtains a surjection ($\delta$ invertible on $S$) \[\overline{A}(T)=H^0(\mu_{\delta}, A'(T\times_SS'))\to \overline{A}(T_o)=H^0(\mu_{\delta}, A'(T_o\times_SS')).\] This shows that $\overline{A}$ is formally smooth over $S$. So $\overline{A}$ is the $S$-N\'{e}ron model of $\overline{A}_t=A_t$. Write the action of $A_t$ on $X_t$ as $\mu_t: A_t\times_tX_t\to X_t$. By (2.1) $\mu_t$ uniquely extends to an $S$-morphism $\mu: \overline{A}\times_SX\to X$, as $\overline{A}\times_SX$ is regular connected and as $X_{\overline{s}}$ does not contain $\overline{s}$-rational curves. The $S$-binary law $\mu$ is associative and hence represents an action of $\overline{A}$ on $X$, as $A_t\times_tA_t\times_tX_t$ is dense in $\overline{A}\times_S\overline{A}\times_SX$ and $X$ is $S$-separated. \begin{flushright} $\square$ \end{flushright} 5. \emph{Non-uniruled abelian fibrations in characteristic zero. Purity.} {\bf Lemma 5.1.} --- \emph{Let $S$ be a locally noetherian algebraic space and $U$ an open sub-algebraic space of $S$ with $\mathrm{prof}_{S-U}(S)\geq 2$.} \emph{Then the functor $A\mapsto A|U$, from the category of $S$-abelian algebraic spaces to the category of $U$-abelian algebraic spaces, is fully faithful. 
It is an equivalence if $S$ is normal of residue characteristics zero and pure along $S-U$} (SGA 2 X 3.1)\emph{, in particular, if $S$ is regular of residue characteristics zero.} \begin{proof} The full faithfulness of the functor $A\mapsto A|U$ follows by \cite{lemme de Gabber} Proposition (1), 3). The assertion on the equivalence is \cite{grothendieck_abelian} 4.2+4.5. \end{proof} {\bf Lemma 5.2.} --- \emph{Let $S$ be a noetherian local scheme with closed point $s$. Let $U=S-\{s\}$. Let $A$ be an $S$-abelian algebraic space with structural morphism $f$ and zero section $e$. Let $A^*=\mathrm{Pic}^o_{A/S}$ be the dual $S$-abelian algebraic space of $A$ with structural morphism $f^*$ and zero section $e^*$.} \emph{Then the following two statements hold when $A$ is para-factorial} (SGA 2 XI 3.1)\emph{ along $f^{-1}(s)$ }: 1) \emph{Each $U$-section of $f^*|U$ extends uniquely to an $S$-section of $f^*$.} 2) \emph{Each $f|U$-fiberwise numerically trivial invertible module on $f^{-1}(U)$ rigidified along $e|U$ extends uniquely to an $f$-fiberwise numerically trivial invertible module on $A$ rigidified along $e$.} \begin{proof} Note that these are two formulations of the same statement. By the hypothesis that $A$ is para-factorial along $f^{-1}(s)$, one has $\mathrm{prof}_s(S)\geq 2$ and that each invertible module $L$ on $f^{-1}(U)$ extends up to unique isomorphisms to a unique invertible module $\overline{L}$ on $A$. It is evident that $\overline{L}$ is $f$-fiberwise numerically trivial (resp. has a unique rigidification along $e$ extending that of $L$ along $e|U$) if $L$ is $f|U$-fiberwise numerically trivial (resp. rigidified along $e|U$). \end{proof} {\bf Lemma 5.3.} --- \emph{Let $S$ be a noetherian local scheme with closed point $s$. Let $X$ be an $S$-smooth algebraic space with structural morphism $f$. 
Then $X$ is para-factorial along $f^{-1}(s)$ in the following cases }: i) \emph{$S$ is regular of dimension $\geq 2$.} ii) \emph{$S$ is of equal characteristic and geometrically para-factorial at $s$.} \begin{proof} Let $\overline{s}$ be the spectrum of a separable closure of $k(s)$ and $S_{(\overline{s})}$ the strict localization of $S$ at $\overline{s}$. Recall that one says that $S$ is \emph{geometrically para-factorial} at $s$ if $S_{(\overline{s})}$ is para-factorial along $\overline{s}$. Case i) is classical and ii) is \cite{boutot} III 2.14. \end{proof} {\bf Definition 5.4.} --- \emph{Let $S$ be an algebraic space and $U$ an open sub-algebraic space of $S$. We say that $S$ is $A$-pure along $S-U$ if for every smooth morphism $S'\to S$ the functor $A\mapsto A|U'$, from the category of $S'$-abelian algebraic spaces to the category of $U'$-abelian algebraic spaces, is an equivalence, where $U'=U\times_SS'$. We say that $S$ is strictly $A$-pure along $S-U$ if furthermore for every smooth morphism $S'\to S$ with $U'=U\times_SS'$ and every $S'$-abelian algebraic space $A$, each $U'$-section of $A$ extends uniquely to an $S'$-section of $A$. We say that $S$ is $A$-pure (resp. strictly $A$-pure) at a geometric point $s$ if its strict localization $S_{(s)}$ at $s$ is $A$-pure (resp. strictly $A$-pure) along $s'$, where $s'$ is the closed point of $S_{(s)}$.} {\bf Example 5.5.} --- By (5.1) and by SGA 4 XV 2.1, if an algebraic space $S$ is locally noetherian, normal, of residue characteristics zero and \emph{pure} (SGA 2 X 3.2) at a geometric point $s$, then $S$ is $A$-pure at $s$. If furthermore the strict localization $S_{(s)}$ is para-factorial along its closed point, then by (5.2)+(5.3) $S$ is strictly $A$-pure at $s$. {\bf Lemma 5.6.} --- \emph{Let $S$ be an algebraic space and $U$ an open sub-algebraic space of $S$ such that $S$ is strictly $A$-pure along $S-U$. 
For $i=1, 2$, let $A_i$ be an $S$-abelian algebraic space and $X_i$ an $A_i$-torsor on $S$ for the \'{e}tale topology.} \emph{Then each $U$-morphism from $X_1|U$ to $X_2|U$ extends uniquely to an $S$-morphism from $X_1$ to $X_2$.} \begin{proof} The question being an \'{e}tale local question on $S$, one may assume the torsors $X_i$, $i=1, 2$, to be trivial. Each $U$-morphism $q: A_1|U\to A_2|U$ is the unique composite of a translation ($a_2\mapsto a_2+q(0)$) and a $U$-group homomorphism $p: A_1|U\to A_2|U$ (``Geometric Invariant Theory'' 6.4). As by hypothesis $S$ is strictly $A$-pure along $S-U$, the $U$-section $q(0)=\sigma$ and the $U$-group homomorphism $p$ extend uniquely to an $S$-section $\overline{\sigma}$ and an $S$-group homomorphism $\overline{p}$, hence the claim. \end{proof} {\bf Proposition 5.7.} --- \emph{Let $S$ be an algebraic space and $U\to X$ an open immersion of $S$-algebraic spaces such that $X$ is strictly $A$-pure along $X-U$. Assume that on $U$ there is given an almost non-degenerate $S$-abelian fibration structure with defining $S$-groupoid $U_.$.} \emph{Then up to unique isomorphisms there exists a unique almost non-degenerate $S$-abelian fibration structure on $X$ with $S$-groupoid $X_.$ such that $d_1: X_1\to X_o=X$ restricts to $d_1: U_1\to U_o=U$ on $U$.} \begin{proof} As $X$ is $A$-pure along $X-U$, there exist unique cartesian diagrams in the category of $S$-algebraic spaces : {\[\xymatrix{ U_1 \ar[r]^{j'} \ar[d]_{d_1} & A' \ar[d]^{f'} \\ U \ar[r]_{} & X}\] } {\[\xymatrix{ U_1 \ar[r]^{j} \ar[d]_{d_o} & A\ar[d]^{f} \\ U \ar[r]_{} & X}\] }whose vertical arrows have abelian algebraic space structures. Let $p'$ (resp. $p$) denote the projection of $A'\times_XA$ onto $A'$ (resp. $A$). 
The diagonal immersion \[(j', j): U_1\to A'\times_XA\] satisfies \[p'(j', j)=j',\ p(j', j)=j.\] --- \emph{There is a unique section $i'$ of the abelian algebraic space structure $p'$ such that $i'j'=(j', j)$.} Indeed, as $j'$ is the base change of $U\hookrightarrow X$ by the smooth morphism $f'$, $A'$ is strictly $A$-pure along $A'-j'(U_1)$. So $(j', j)$, considered as a $U_1$-section of $p'$, extends uniquely to a section $i'$ of $p'$. That is, $p'i'=\mathrm{Id}_{A'}$ and $i'j'=(j', j)$. --- \emph{The following diagrams are commutative and cartesian }: {\[\xymatrix{ U_1 \ar[r]^{j'} \ar[d]_{d_1} & A' \ar[d]^{f'} \\ U \ar[r]_{} & X}\] } {\[\xymatrix{ U_1 \ar[r]^{j'} \ar[d]_{d_o} & A' \ar[d]^{fpi'} \\ U \ar[r]_{} & X}\] }which, from now on, we rewrite as : {\[\xymatrix{ U_1 \ar[r]^{} \ar[d]_{d_1} & X_1 \ar[d]^{d_1} \\ U_o \ar[r]_{} & X_o}\] } {\[\xymatrix{ U_1 \ar[r]^{} \ar[d]_{d_o} & X_1 \ar[d]^{d_o} \\ U_o \ar[r]_{} & X_o}\] }Next, form the cartesian diagram : {\[\xymatrix{ X_2 \ar[r]^{d_o} \ar[d]_{d_2} & X_1 \ar[d]^{d_1} \\ X_1 \ar[r]_{d_o} & X_o}\] } --- \emph{There is a unique morphism $d_1: X_2\to X_1$ such that the following diagram commutes and is cartesian }: {\[\xymatrix{ X_2 \ar[r]^{d_1} \ar[d]_{d_2} & X_1 \ar[d]^{d_1} \\ X_1 \ar[r]_{d_1} & X_o}\] } Indeed, as $U_1\hookrightarrow X_1$ is the base change of $U\hookrightarrow X$ by the smooth morphism $d_1$, $X_1$ is strictly $A$-pure along $X_1-U_1$. 
By (5.6) the cartesian diagram (SGA 3 V 1) {\[\xymatrix{ U_2 \ar[r]^{d_1} \ar[d]_{d_2} & U_1 \ar[d]^{d_1} \\ U_1 \ar[r]_{d_1} & U_o}\] }whose vertical arrows have abelian algebraic space structures has a unique extension as above claimed, for one verifies that : \noindent i) \emph{The base change of $d_2: X_2\to X_1$ by $U_1\hookrightarrow X_1$ is $d_2: U_2\to U_1$.} \noindent ii) \emph{The base change of $d_1: X_1\to X_o$ by $U_1\hookrightarrow X_1\stackrel{d_1}{\longrightarrow}X_o$ is the base change of $d_1: U_1\to U_o$ by $d_1: U_1\to U_o$.} --- \emph{The above $d_1: X_2\to X_1$ fits into the following diagram which is commutative and cartesian }: {\[\xymatrix{ X_2 \ar[r]^{d_1} \ar[d]_{d_o} & X_1 \ar[d]^{d_o} \\ X_1 \ar[r]_{d_o} & X_o}\] }For, $X_1$ is strictly $A$-pure along $X_1-U_1$ and one has similarly the cartesian diagram : {\[\xymatrix{ U_2 \ar[r]^{d_1} \ar[d]_{d_o} & U_1 \ar[d]^{d_o} \\ U_1 \ar[r]_{d_o} & U_o}\] } It is now immediate that one has obtained the desired $S$-groupoid $X_.$ (cf. SGA 3 V 1). \end{proof} {\bf Proposition 5.8.} --- \emph{Let $S$ be a noetherian normal integral scheme, $t$ the generic point of $S$, $A_t$ a $t$-abelian variety and $X_t$ an $A_t$-torsor on $t$ for the \'{e}tale topology. Assume that, for each strict henselization $S'$ of $S$ at a geometric codimension $1$ point $s$, if $t'$ (resp. $s'$) denotes the generic (resp. 
closed) point of $S'$, $X_t\times_tt'$ extends to a separated $S'$-algebraic space $X'$ of finite type such that $X'$ is normal integral and, at each of its geometric codimension $\geq 2$ points, either regular, or pure and geometrically para-factorial of equal characteristic, and that $X'_{s'}$ is non-empty, separable, proper and does not have uniruled irreducible components.} \emph{Then, if $S$ is $A$-pure at all its geometric points of codimension $\geq 2$, there exists up to unique isomorphisms a unique $S$-abelian algebraic space $A$ extending $A_t$.} \begin{proof} Recall that the formation of N\'{e}ron models commutes with strict localization. Thus, the N\'{e}ron model of $A_t$ at every codimension $1$ point of $S$ is by (4.3) an abelian scheme. So, as $S$ is $A$-pure at all its geometric points of codimension $\geq 2$, there is up to unique isomorphisms a unique extension of $A_t$ to an $S$-abelian algebraic space $A$. \end{proof} {\bf Proposition 5.9.} --- \emph{Keep the notations of $(5.8)$. Assume that $S$ is of residue characteristics zero, pure at all its points of codimension $\geq 2$ and that there is an open sub-scheme $R$ of $S$ which consists precisely of all points of $S$ where $S$ is regular.} \emph{Then $X_t$ extends to an $A$-torsor $X$ on $S$ for the \'{e}tale topology. Such an extension is unique up to unique isomorphisms if $S$ is geometrically para-factorial along $S-R$.} \begin{proof} Note that $S$ is by (5.5) $A$-pure at all its geometric points of codimension $\geq 2$. So (5.8) applies. As the formation of regular minimal models commutes with strict localization, the regular minimal model of $X_t$ at each codimension $1$ point $s$ of $S$ is by (4.3) a torsor for the \'{e}tale topology under the localization of $A$ at $s$. 
As $R$ is strictly $A$-pure at all its points of codimension $\geq 2$, there exist, by (5.6) and a ``passage \`{a} la limite'', an open sub-scheme $V$ of $R$ with $\mathrm{codim}(R-V, R)\geq 2$ and an $A|V$-torsor $Z$ on $V$ for the \'{e}tale topology such that $Z$ extends $X_t$. This torsor $Z$ is by \cite{raynaud_thesis} XIII 2.8 iv) of finite order. Namely, there exist an integer $n\geq 1$ and an ${}_nA|V$-torsor $P$ on $V$ for the \'{e}tale topology such that \[Z=P\stackrel{{}_nA|V}{\wedge}A|V,\] where ${}_nA=\mathrm{Ker}(n.\mathrm{Id}_A)$. As $S$ is pure at all its points of codimension $\geq 2$, thus in particular pure along $S-V$, there is a unique finite \'{e}tale $S$-scheme $\overline{P}$ which restricts to $P$ on $V$. By the purity of $S$ along $S-V$ again, $\overline{P}$ is in a unique way an ${}_nA$-torsor on $S$ for the \'{e}tale topology and hence \[X=\overline{P}\stackrel{{}_nA}{\wedge}A\] extends $X_t$. Such an extension is by (5.6) unique up to unique isomorphisms if $S$ is geometrically para-factorial at all its points of codimension $\geq 2$, or equivalently, at all points of $S-R$. \end{proof} {\bf Theorem 5.10.} --- \emph{Let $S$ be an integral scheme with generic point $t$ and $X$ an $S$-algebraic space with structural morphism $f$. Assume that $X$ is locally noetherian normal integral of residue characteristics zero and at all its geometric codimension $\geq 2$ points pure and geometrically para-factorial. 
Assume furthermore that $f^{-1}(t)$ is a non-degenerate $t$-abelian fibration and that, for each geometric codimension $1$ point $\overline{x}$ of $X$, $f\times_SS_{(\overline{s})}$ is separated of finite type and flat at $\overline{x}$ and the geometric fiber $f^{-1}(\overline{s})$ is proper and does not have uniruled irreducible components, where $S_{(\overline{s})}$ denotes the strict henselization of $S$ at $\overline{s}=f(\overline{x})$.} \emph{Then there exists a unique almost non-degenerate abelian fibration structure on $f$ extending that of $f_t$.} \begin{proof} One applies (4.10), (5.5) and (5.7). \end{proof} {\bf Proposition 5.11.} --- \emph{Keep the notations of $(5.10)$. Let $(X_., d_., s_.)$ denote the $S$-groupoid of $X/S$. Consider the following conditions }: 1) \emph{$f$ is proper, $S$ is excellent regular.} 2) \emph{$f$ is proper, $S$ is locally noetherian normal and at each of its points satisfies the condition $(W)$ $(\mathrm{EGA\ IV}\ 21.12.8)$.} \emph{Then, if $1)$ (resp. $2)$) holds, $S$ is the cokernel of $(d_o, d_1)$ in the full sub-category of the category of $S$-algebraic spaces consisting of the $S$-algebraic spaces (resp. $S$-schemes) which are $S$-separated and locally of finite type over $S$.} \begin{proof} Let $Z$ be an $S$-separated algebraic space locally of finite type over $S$ and $p: X\to Z$ an $S$-morphism satisfying $pd_o=pd_1$. As the $t$-groupoid $X_{.t}$ is simply connected, $p_t: X_t\to Z_t$ factors through a unique $t$-point, say $\sigma_t$, of $Z_t$. It amounts to showing that when 1) holds (resp. when 2) holds and $Z$ is a scheme) such a $t$-point uniquely extends to an $S$-section of $Z$. Replacing $Z$ by the closed image of $\sigma_t$ in $Z$, one may assume that $Z$ is integral and birational over $S$. In case $1)$, as $p$ is dominant, $X$ normal and $S$ excellent, one may by replacing $Z$ by its normalization assume that $Z$ is normal. As $f$ is proper and $Z$ is $S$-separated, $p$ is proper and hence surjective. 
It suffices to show that in case $1)$ (resp. $2)$ where $Z$ is a scheme) $Z$ is \'{e}tale over $S$ (resp. $Z\to S$ is a local isomorphism at every point of $Z$). For, being proper birational, $Z\to S$ is then an isomorphism. When 1) holds (resp. when 2) holds and $Z$ is a scheme), it suffices by the theorem of \emph{purity of branch locus} (2.4) (resp. the theorem of \emph{purity of branch locus} of van der Waerden, EGA IV 21.12.12) to show that $Z$ is $S$-\'{e}tale at each geometric codimension $1$ point $\overline{z}$ of $Z$ (resp. $Z\to S$ is a local isomorphism at each codimension $1$ point $z$ of $Z$). Now each geometric maximal point $\overline{x}$ of $p^{-1}(\overline{z})$ (resp. each maximal point $x$ of $p^{-1}(z)$) is of codimension $\leq 1$ in $X$, and the image of $\overline{x}$ (resp. $x$) in $S$, which is also the image of $\overline{z}$ (resp. $z$), is of codimension $\leq 1$ in $S$ by hypothesis. The projection $Z\to S$ being proper birational is an isomorphism when localized at every codimension $\leq 1$ point of $S$ and in particular is \'{e}tale at $\overline{z}$ (resp. a local isomorphism at $z$). \end{proof} 5.12. \emph{Question}: In (5.11), does $\mathrm{Coker}(d_o, d_1)=S$ hold in the category of $S$-algebraic spaces? \end{document}
\begin{document} \begin{abstract} The aim of this article is to give a self-contained account of the algebra and model theory of Cohen rings, a natural generalization of Witt rings. Witt rings are valuation rings only when the residue field is perfect, and Cohen rings arise as the Witt ring analogue over imperfect residue fields. Just as one studies truncated Witt rings to understand Witt rings, we study Cohen rings of positive characteristic as well as of characteristic zero. Our main results are a relative completeness and a relative model completeness result for Cohen rings, which imply the corresponding Ax--Kochen/Ershov type results for unramified henselian valued fields also in the case that the residue field is imperfect. \end{abstract} \maketitle \section{Introduction} The aim of this paper is to give an introduction to the model theory of complete Noetherian local rings $A$ which have maximal ideal $pA$. From an algebraic point of view, the theory of such rings is classical. Under the additional hypothesis of regularity, they are valuation rings, and their study goes back to work of Krull (\cite{Kru37}) and many others. Structure theorems were obtained by Hasse and Schmidt (\cite{HS34}), although there were deficiencies in the case that $A/pA$ is not perfect. Further structural results were obtained by Witt (\cite{Wit37}) and Teichm\"{u}ller (\cite{Tei36b}). In particular, Teichm\"{u}ller gave a brief but precise account of the structure of such rings, even in the case that $A/pA$ is imperfect. This was followed by Mac Lane (\cite{Mac39c}), who improved upon Teichm\"{u}ller's theory and proved relative structure theorems. Mac Lane built his work upon his study of Teichm\"{u}ller's notion of $p$-independence in \cite{Tei36a}. For further historical information, especially on this early period, the reader is encouraged to consult Roquette's article \cite{Roq03} on the history of valuation theory. 
Turning away from the hypothesis of regularity, Cohen (\cite{Coh46}) gave an account of the structure of such rings. In fact, his context was even more general: he did not assume Noetherianity. Despite all of this work, more modern treatments (e.g.~Serre, \cite{Ser79}) of this subject are often restricted to the case that $A/pA$ is perfect. Consequently, the literature on the model theory of complete Noetherian local rings is sparse. For example, \cite{vdD14} also assumes that $A/pA$ is perfect. We became interested in the model theory of complete Noetherian local rings when we started to construct examples of NIP henselian valued fields with imperfect residue field in order to obtain an understanding of NIP henselian valued fields (\cite{AJ3}). After getting acquainted with the algebra of these rings as scattered in the literature detailed above, we realized that, with a bit of tweaking, the proof ideas of these (classical) results can be used to gain an understanding of the model theory of such rings. To start with, this requires a careful recapitulation of the known algebraic (or structural) theory of such rings, bringing older results together in one framework. This overview is given in Part I of the article. In this first part, many of the proof ideas are inspired by the work of others (and we point to the original sources), but we take care to prove everything which cannot be cited directly from elsewhere. The underlying definition of a Cohen ring is the following: \begin{definition}[{Cf.~Definitions \ref{def:preCohen.ring} and \ref{def:Cohen.ring}}] A {\bf Cohen ring} is a complete Noetherian local ring $A$ with maximal ideal $pA$, where $p$ is the residue characteristic of $A$. \end{definition} A Cohen ring may either have characteristic $0$ (in which case we call it strict) or $p^{n}$, where $p$ is the characteristic of the residue field $A/pA$. 
In the second section, we introduce Cohen rings and recall that, for a given field $k$ of positive characteristic, Cohen rings with residue field $k$ exist in every possible characteristic. In the third section, we discuss and develop the machinery of multiplicative representatives, namely good sections of the residue map from the perfect core of the residue field into the Cohen ring $A$ (Definition~\ref{def:Teichmuller}). We also comment on the extent to which these sections are unique (see Theorem~\ref{thm:representatives}), and on how one can use them to generate Cohen rings (Proposition~\ref{prp:generation}). In the fourth section, we prove that Teichm\"uller's embedding technique works in this context: we embed a Cohen ring with residue field $k$, with a choice of representatives, into the corresponding Cohen ring over the perfect hull of $k$ (see Theorem~\ref{thm:TEP}). Building on this and using ideas from Cohen, we show that any two Cohen rings of the same characteristic and over the same residue field, both equipped with representatives, are isomorphic. In fact, there is a unique isomorphism which respects the choices of representatives and is the identity on the residue field (Cohen Structure Theorem, \ref{cor:Cohen_structure_1}). In the final section of the first part of the paper, we compare Cohen rings to Witt rings. In the second part we begin a model-theoretic study, including describing the complete theories of Cohen rings of a fixed characteristic, over a given residue field. We work in the language $\mathfrak{L}_{\mathrm{vf}} = \mathfrak{L}_\mathrm{ring}\cup \{\mathcal{O}\}$ of valued fields. We then show relative completeness, using a classical proof strategy together with the Cohen Structure Theorem from section 6. In particular, this result gives the following Ax--Kochen/Ershov principle: \begin{theorem}[{Cf.~Corollary \ref{cor:AKE}}]\label{thm:intro_AKE} Let $(K,v)$ and $(L,w)$ be two unramified henselian valued fields. 
Then $$ \underbrace{Kv \equiv Lw}_{\textrm{in }\mathfrak{L}_\mathrm{ring}} \textrm{ and } \underbrace{vK \equiv wL}_{\textrm{in }\mathfrak{L}_\mathrm{oag}} \, \Longleftrightarrow \, \underbrace{(K,v)\equiv (L,w)}_{\textrm{in }\mathfrak{L}_\mathrm{vf}}. $$ \end{theorem} Note that this was already claimed by B\'elair in \cite[Corollaire 5.2(1)]{Bel99}. However, since his proof crucially relies on Witt rings, it only works for perfect residue fields. Moreover, we prove the following relative model-completeness result, which again essentially builds on the Cohen Structure Theorem. \begin{theorem}[{Cf.~Corollary \ref{cor:AKE-E}}]\label{thm:intro_AKE_MC} Let $(K,v)\subseteq (L,w)$ be two unramified henselian valued fields. Then, we have $$ \underbrace{Kv \preceq Lw}_{\textrm{in }\mathfrak{L}_\mathrm{ring}} \textrm{ and } \underbrace{vK \preceq wL}_{\textrm{in }\mathfrak{L}_\mathrm{oag}} \, \Longleftrightarrow \, \underbrace{(K,v) \preceq (L,w)}_{\textrm{in }\mathfrak{L}_\mathrm{vf}}. $$ \end{theorem} In the penultimate section, we prove an embedding lemma (Proposition \ref{prp:emb}) similar to one proved by Kuhlmann for tame fields in \cite{FVK}. This is the most delicate proof in the model-theoretic part of the paper. We then apply the embedding lemma to show relative existential closedness of unramified henselian valued fields assuming that the residue fields have a fixed finite degree of imperfection (Theorem \ref{thm:AKEE}). Finally, applying the embedding lemma once again, we argue that in any unramified henselian valued field, there is no new structure induced on the residue field and value group: \begin{theorem}[Cf.~Theorem \ref{thm:SEk}] Let $(K,v)$ be an unramified henselian valued field. Then the value group $vK$ and the residue field $Kv$ are both stably embedded, as a pure ordered abelian group and as a pure field, respectively. 
\end{theorem} We conclude by giving an example of a finitely ramified henselian valued field in which the residue field is no longer stably embedded (Example \ref{ex:fr}). \part{The structure of Cohen rings} \label{part:1} \section{Pre-Cohen rings and Cohen rings} \label{section:Cohen} Throughout this paper, $A,B,C$ will denote rings, which will always have a multiplicative identity $1$ and be commutative; and $k,l$ will be fields of characteristic $p$, which is a fixed prime number. A ring $A$ is {\bf local} if it has a unique maximal ideal, which we will usually denote by $\mathfrak{m}$. A local ring is equipped with the {\bf local topology}\footnote{The local topology is also known as the {\em $\mathfrak{m}$-adic topology}.}, which is the ring topology defined by declaring the descending sequence of ideals $\mathfrak{m}\supseteq\mathfrak{m}^{2}\supseteq\cdots$ to be a base of neighbourhoods of $0$. The {\bf residue field} of a local ring $A$, which we usually denote by $k$, is the quotient ring $A/\mathfrak{m}$, and the natural quotient map $$ \res:A\longrightarrow k $$ is called the {\bf residue map}. The {\bf residue characteristic} of $A$ is by definition the characteristic of $k$. For the sake of clarity, since maps between residue fields of local rings are of central importance in this paper, it will be suggestive to work with pairs $(A,k)$ consisting of a local ring $A$, together with its residue field $k$. Of course, such a pair is already determined by the local ring $A$, and this notation fails to explicitly mention the maximal ideal or the residue map. Without risk of confusion, we will also refer to such pairs as local rings. \begin{lemma}[{Krull, \cite[Theorem 2]{Kru38}}]\label{lem:Noetherian} Let $A$ be a Noetherian local ring. Then $\bigcap_{n\in\mathbb{N}}\mathfrak{m}^{n}=\{0\}$. In other words, $A$ is Hausdorff with respect to the local topology. 
\end{lemma} \begin{remark}[Other terminology] Before we give our main definitions, namely Definitions \ref{def:preCohen.ring} and \ref{def:Cohen.ring}, we note that many closely related ideas have been named in the literature, both in original papers and textbooks. Mac Lane, in \cite{Mac39c}, works with `{$p$-adic fields}' and `{$\mathfrak{p}$-adic fields}'; whereas Cohen, in \cite{Coh46}, prefers to work with `{local rings}' (which, for Cohen, are necessarily Noetherian), `{generalized local rings}', and `{$v$-rings}'. Serre, in \cite[Chapter II, \S5]{Ser79}, defines a `{$p$-ring}' to be a ring $A$ which is Hausdorff and complete in the topology defined by a decreasing sequence $\mathfrak{a}_{1}\supset\mathfrak{a}_{2}\supset...$ of ideals, such that $\mathfrak{a}_{m}\mathfrak{a}_{n}\subseteq\mathfrak{a}_{m+n}$, and for which $A/\mathfrak{a}_{1}$ is a perfect ring of characteristic $p$. More recently, van den Dries, in {\cite[p.~132]{vdD14}}, defines a `{local $p$-ring}' to be a complete local ring $A$ with maximal ideal $pA$ and perfect residue field $A/pA$. To minimise the risk of confusion with existing terminology, we will not work with $v$-rings, $p$-adic fields, $\mathfrak{p}$-adic fields, $p$-rings, or local $p$-rings. Instead, since Warner's point of view, in \cite[Chapter IX]{War93}, is closer to our own, it is his definition of `{Cohen ring}' that we adopt. We hope the reader will forgive us for this, but we feel that none of the other notions (several of which are arguably more standard in the literature) exactly captures the right context for this paper. \end{remark} \begin{definition}\label{def:preCohen.ring} A {\bf pre-Cohen ring} is a local ring $(A,k)$ such that $A$ is Noetherian and the maximal ideal $\mathfrak{m}$ is $pA$. \end{definition} In particular, pre-Cohen rings are of residue characteristic $p$. Turning to the question of the characteristic of $A$ itself, we note that a pre-Cohen ring need not even be an integral domain. 
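The last point can be seen in the smallest possible case (a standard illustration, ours rather than one spelled out in the sources above):

```latex
% A pre-Cohen ring which is not an integral domain.
\[
  (A,k)=\bigl(\mathbb{Z}/p^{2}\mathbb{Z},\;\mathbb{F}_{p}\bigr),
  \qquad \mathfrak{m}=pA .
\]
% A is finite, hence Noetherian, and local with maximal ideal pA,
% so (A,k) is a pre-Cohen ring of characteristic p^2; yet
% p \cdot p = p^2 = 0 with p \neq 0 in A, so A has zero divisors.
```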
However, a pre-Cohen ring is either of characteristic $0$ or of characteristic $p^{m}$, for some $m\in\mathbb{N}_{>0}$. \begin{lemma}\label{lem:strict} For a pre-Cohen ring $(A,k)$, the following are equivalent: \begin{enumerate} \item $A$ is of characteristic zero, \item $A$ is an integral domain, \item $A$ is a valuation ring. \end{enumerate} In this case, the corresponding valuation $v_{A}$ on the quotient field of $A$ is of mixed characteristic $(0,p)$, has value group isomorphic to $\mathbb{Z}$, with $v_{A}(p)$ minimum positive, and has residue field $k$. \end{lemma} \begin{proof} This is a special case of {\cite[21.4 Theorem]{War93}}. \end{proof} \begin{definition}\label{def:strict} If any (equivalently, all) of the conditions of Lemma \ref{lem:strict} are satisfied, then we say that $(A,k)$ is {\bf strict}. \end{definition} The word `strict' is borrowed from Serre, \cite[II,\S5]{Ser79}. \begin{remark} In \cite{Coh46}, Cohen writes in terms of {regular} Noetherian local rings. A local ring is {\bf regular} if its Krull dimension is equal to the number of generators of its unique maximal ideal. In the case of a pre-Cohen ring $(A,k)$, the maximal ideal is by definition generated by one element, namely $p$. Therefore, $(A,k)$ is regular if and only if its Krull dimension is $1$, which in turn holds if and only if $(A,k)$ is strict. \end{remark} A {\bf morphism} of pre-Cohen rings, which we write as $\varphi:(A_{1},k_{1})\longrightarrow(A_{2},k_{2})$, is a pair $\varphi=(\varphi_{A},\varphi_{k})$ of ring homomorphisms $\varphi_{A}:A_{1}\longrightarrow A_{2}$ and $\varphi_{k}:k_{1}\longrightarrow k_{2}$, such that \begin{enumerate} \item $\mathfrak{m}_{1}=\varphi_{A}^{-1}(\mathfrak{m}_{2})$, i.e.~$\varphi_{A}$ is a morphism of local rings, and \item $\varphi_{k}\circ\res=\res\circ\varphi_{A}$. \end{enumerate} This is nothing more than a way of speaking about morphisms of local rings as pairs of maps, to match the pairs $(A,k)$. 
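By way of illustration (our own example, using only the definition above), the reduction map from $\mathbb{Z}_{p}$ to $\mathbb{Z}/p^{2}\mathbb{Z}$ yields a morphism of pre-Cohen rings:

```latex
% varphi = (varphi_A, varphi_k), with varphi_A reduction modulo p^2
% and varphi_k the identity on the common residue field F_p.
\[
  \varphi_{A}:\mathbb{Z}_{p}\longrightarrow\mathbb{Z}/p^{2}\mathbb{Z},
  \qquad
  \varphi_{k}=\mathrm{id}_{\mathbb{F}_{p}} .
\]
% Condition (1): varphi_A^{-1}(p(Z/p^2 Z)) = pZ_p, so varphi_A is a
% morphism of local rings.
% Condition (2): both composites k_1 -> k_2 send a p-adic integer a
% to (a mod p), so varphi_k o res = res o varphi_A.
```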
Every morphism $\varphi_{A}$ of local rings {induces} a ring homomorphism $\varphi_{k}:k_{1}\longrightarrow k_{2}$ such that $(\varphi_{A},\varphi_{k})$ is a morphism of pre-Cohen rings. From now on, by `morphism' we mean a morphism of pre-Cohen rings. We will often (but not always) be concerned with morphisms $\varphi=(\varphi_{A},\varphi_{k})$ such that $k_{2}/\varphi_{k}(k_{1})$ is separable. By an {\bf embedding}, we mean a morphism $\varphi=(\varphi_{A},\varphi_{k})$ such that $\varphi_{A}$ is injective. In the obvious way, we write $(A_{1},k_{1})\subseteq(A_{2},k_{2})$ if $A_{1}$ is a subring of $A_{2}$, $k_{1}$ is a subfield of $k_{2}$, and the inclusion maps form an embedding $(A_{1},k_{1})\longrightarrow(A_{2},k_{2})$. \begin{definition}[{Cf.~\cite[21.3 Definition]{War93}}]\label{def:Cohen.ring} A pre-Cohen ring $(A,k)$ is a {\bf Cohen} ring if it is also complete, i.e.\ complete with respect to the local topology. \end{definition} \begin{example} $(\mathbb{Z}_{p},\mathbb{F}_{p})$ is a strict Cohen ring. For each $m\in\mathbb{N}_{>0}$, $(\mathbb{Z}_{p}/p^{m}\mathbb{Z}_{p},\mathbb{F}_{p})$ is a non-strict Cohen ring of characteristic $p^{m}$. \end{example} \begin{lemma} Every pre-Cohen ring of positive characteristic is already a Cohen ring. \end{lemma} \begin{proof} In a non-strict pre-Cohen ring the topology is discrete. Thus it is complete. \end{proof} Note that Cohen rings exist, for any residue field and any characteristic. This foundational existence result goes back to the work of Hasse and Schmidt. \begin{theorem}[{Existence Theorem}, {\cite[Theorem 20, p63]{HS34}}]\label{thm:HS} Let $k$ be a field of characteristic $p$. There exists a strict Cohen ring $(A,k)$. Moreover, for each $m\in\mathbb{N}_{>0}$, there exists a Cohen ring $(A_{m},k)$ of characteristic $p^{m}$. 
\end{theorem} \begin{remark}[Inverse systems]\label{rem:inverse_system} Let $(A,k)$ be a non-strict Cohen ring of characteristic $p^{m}$, and let $n\in\mathbb{N}_{>0}$ such that $n\leq m$. The image of $A$ under the quotient map $\res_{m,n}:A\longrightarrow A/\mathfrak{m}^{n}$ is a non-strict Cohen ring of characteristic $p^{n}$, again with residue field $k$. Similarly, if $(A,k)$ is a strict Cohen ring, and $n\in\mathbb{N}_{>0}$, the image of $A$ under the quotient map $r_{n}:A\longrightarrow A/\mathfrak{m}^{n}$ is a non-strict Cohen ring of characteristic $p^{n}$, and as before with residue field $k$. In both cases, the residue map of the quotient is induced by the residue map of $A$. These quotient maps are compatible in the sense that, for $n_{1}\leq n_{2}\leq m$, we have $r_{n_{1}}=\res_{n_{2},n_{1}}\circ r_{n_{2}}$ and $\res_{m,n_{1}}=\res_{n_{2},n_{1}}\circ\res_{m,n_{2}}$. In this way, given a strict Cohen ring $(A,k)$, we obtain an inverse system of non-strict Cohen rings $(A_{m},k)_{m>0}$ and maps $(\res_{m,n})_{n\leq m}$. Conversely, beginning with an inverse system of non-strict Cohen rings $(A_{m},k)_{m>0}$, where each $A_{m}$ has characteristic $p^{m}$, and quotient maps $(\res_{m,n})_{n\leq m}$, the inverse limit is a strict Cohen ring $(A,k)$, with quotient maps $(r_{m})_{m>0}$. Clearly, these constructions are mutually inverse. Furthermore, if $\varphi:(A,k)\longrightarrow(B,l)$ is an embedding of Cohen rings that are both of characteristic $p^{m}$ (resp., both strict), then $\varphi$ induces embeddings $\varphi_{n}:(A/\mathfrak{m}^{n},k)\longrightarrow(B/\mathfrak{m}^{n},l)$, for all $n\leq m$ (resp., for all $n>0$). 
On the other hand, if we begin with two inverse systems $(A_{n},k)_{n>0}$ and $(B_{n},l)_{n>0}$, such that $\mathrm{char}(A_{n})=\mathrm{char}(B_{n})=p^{n}$, and a compatible family of embeddings $\varphi_{n}:(A_{n},k)\longrightarrow(B_{n},l)$, for $n>0$, there is a unique compatible embedding $\varphi:(A,k)\longrightarrow(B,l)$, where $(A,k)$ and $(B,l)$ are the corresponding inverse limits. \end{remark} \section{Representatives} \label{section:representatives} \subsection{Teichm\"{u}ller's multiplicative representatives} The notion of `representatives' plays a key role in this subject. \begin{definition}[Cf.~{\cite[\S 4.]{Tei36b}}]\label{def:Teichmuller} Let $(A,k)$ be a pre-Cohen ring, and let $\alpha\in k$. A {\bf representative} of $\alpha$ is some $a\in A$ with $\res(a)=\alpha$. A {\bf multiplicative representative} $a$ of $\alpha$ is a representative which is also a $p^{n}$-th power in $A$, for all $n\in\mathbb{N}$. A {\bf choice of representatives} is a partial function \begin{align*} s:k&\dasharrow A \end{align*} such that $s(\alpha)$ is a representative of $\alpha$. To say that such a choice is {\bf for} $P$ means that $P$ is the domain of $s$, i.e.~$s:P\longrightarrow A$. Obviously, such a map is a {\bf choice of multiplicative representatives} if $s(\alpha)$ is a multiplicative representative of $\alpha$, for all $\alpha$ in the domain of $s$. \end{definition} We observe that, for any pre-Cohen ring $(A,k)$, there exist many choices of representatives for $k$, and of course for any subset $P$ of $k$. It is obvious that the largest subfield of $k$ for which multiplicative representatives may be chosen is the perfect core $k^{(p^{\infty})}$, which is by definition the subfield of elements which are $p^{n}$-th powers, for all $n\in\mathbb{N}$. Note that $k^{(p^{\infty})}$ is the largest perfect subfield of $k$. For any subset $X$ of a ring, and any $n\in\mathbb{N}$, denote by $X^{(n)}=\{x^{n}\mid x\in X\}$ the set of $n$-th powers of elements of $X$. 
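Although the development here is purely algebraic, in the simplest case $k=\mathbb{F}_{p}$ (which is perfect, so $k^{(p^{\infty})}=k$) the multiplicative representatives in the Cohen ring $\mathbb{Z}/p^{m}\mathbb{Z}$ are readily computable. The following sketch is our own computational aside, not part of the paper's development; it uses the standard fact that the sequence $a, a^{p}, a^{p^{2}},\dots$ stabilises modulo $p^{m}$, and that its limit is the unique lift of $(a \bmod p)$ fixed by $x\mapsto x^{p}$:

```python
# Computational sketch (not from the paper): multiplicative
# (Teichmueller) representatives in Z/p^m Z for k = F_p, obtained by
# iterating the Frobenius x -> x^p until it reaches a fixed point.
# For a unit a, the result is the unique lift of (a mod p) that is a
# p^n-th power for all n -- equivalently, a (p-1)-st root of unity.

def teichmuller(a: int, p: int, m: int) -> int:
    """Multiplicative representative of (a mod p) in Z/p^m Z."""
    N = p ** m
    x = a % N
    while pow(x, p, N) != x:   # iterate Frobenius until fixed
        x = pow(x, p, N)
    return x

# Example: in Z/125Z the representative of 2 in F_5 is 57, which
# satisfies 57^4 = 1 (mod 125) and 57 = 2 (mod 5).
```

The loop terminates because, by the congruence of Lemma~3.* above, $a^{p^{n+1}}\equiv a^{p^{n}}\pmod{p^{n+1}}$, so at most $m$ iterations are needed.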
The following straightforward lemma is the starting point for the study of multiplicative representatives. \begin{lemma}[{\cite[Cf.~Hilfssatz 8]{Tei37}}]\label{lem:binomial} Let $(A,k)$ be a pre-Cohen ring, let $a,b\in A$, and let $m,n\in\mathbb{N}$. If $a\equiv b\pmod{\mathfrak{m}^{m}}$, then $a^{p^{n}}\equiv b^{p^{n}}\pmod{\mathfrak{m}^{m+n}}$. \end{lemma} Perhaps the most important result about multiplicative representatives is Theorem \ref{thm:representatives}, which is also due to Teichm\"{u}ller. \begin{theorem}[Cf.~{\cite[\S 4.~Satz]{Tei36b}}]\label{thm:representatives} Let $(A,k)$ be a Cohen ring. There exists a unique choice of multiplicative representatives for $k^{(p^{\infty})}$: \begin{align*} s:k^{(p^{\infty})}&\longrightarrow A. \end{align*} \end{theorem} The proof can be found in many places, for example \cite[Lemma 7]{Coh46}. In fact, such a map $s$ is also multiplicative in a stronger sense, namely that $s(\alpha)s(\beta)=s(\alpha\beta)$, for all $\alpha,\beta\in k^{(p^{\infty})}$, cf.~\cite[Proposition 8(iii), \S4, Ch.~II]{Ser79}. \begin{lemma}\label{lem:pm-representatives} Let $(A,k)$ be a non-strict Cohen ring of characteristic $p^{m+1}$. There is a unique choice of representatives \begin{align*} s:k^{(p^{m})}\longrightarrow A^{(p^{m})}\subseteq A. \end{align*} \end{lemma} \begin{proof} For existence, note that every element of $k^{(p^{m})}$ is of the form $\alpha^{p^{m}}$ for some $\alpha\in k$, and if $a\in A$ is any representative of $\alpha$, then $a^{p^{m}}\in A^{(p^{m})}$ is a representative of $\alpha^{p^{m}}$. For uniqueness, let $\alpha\in k$, let $a,b\in A$, and suppose that both $a^{p^{m}}$ and $b^{p^{m}}$ are representatives of $\alpha^{p^{m}}$. Then both $a$ and $b$ are representatives of $\alpha$ (as the Frobenius on $k$ is injective), and in particular $a\equiv b\pmod{\mathfrak{m}}$. By Lemma~\ref{lem:binomial}, $a^{p^{m}}\equiv b^{p^{m}}\pmod{\mathfrak{m}^{m+1}}$. Since the characteristic of $A$ is $p^{m+1}$, we have $a^{p^{m}}=b^{p^{m}}$. \end{proof} The unique choice of representatives in Lemma~\ref{lem:pm-representatives} will be called the {\bf $p^{m}$-representatives}. 
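A worked instance of the uniqueness statement above (our own computation, with $p=3$ and $m=1$, so that $A=\mathbb{Z}/9\mathbb{Z}$ and $k=\mathbb{F}_{3}$, which is perfect, whence $k^{(p^{m})}=k$):

```latex
% p = 3, m = 1: A = Z/9Z has characteristic p^{m+1} = 9.
% The three lifts of \overline{2} \in F_3 in A are 2, 5 and 8, and
\[
  2^{3}=8,\qquad 5^{3}=125\equiv 8\pmod{9},\qquad 8^{3}=512\equiv 8\pmod{9},
\]
% so every lift has the same cube, and the p^m-representative is
\[
  s(\overline{2})=s\bigl(\overline{2}^{\,3}\bigr)=8\in\mathbb{Z}/9\mathbb{Z}.
\]
```

Note that $8\equiv -1$ here, so this representative is indeed a root of unity, in accordance with the multiplicative representatives discussed above.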
\subsection{\texorpdfstring{$\lambda$}{λ}-maps} \label{section:lamba.maps} A subset $\bfbeta\subseteq k$ is {\bf $p$-independent} over a subfield $C\subseteq k$ if $[k^{(p)}C(\beta_{1},\dots,\beta_{r}):k^{(p)}C]=p^{r}$, for all pairwise distinct elements $\beta_{1},\dots,\beta_{r}\in\bfbeta$, and for all $r\in\mathbb{N}$; and $\bfbeta$ is a {\bf $p$-basis} over $C$ if furthermore $k=k^{(p)}C(\bfbeta)$. Equivalently, a $p$-basis is a maximal $p$-independent subset. If $C$ is the prime field $\mathbb{F}_{p}$, then we just speak of `$p$-independence' and `$p$-bases'. The cardinality of a $p$-basis of $k$ does not depend on the choice of any particular $p$-basis, and it is called the {\bf imperfection degree}\footnote{Imperfection degree is sometimes called {\em Ershov degree} or {\em $p$-degree}.} of $k$. See \cite{Tei36a}, \cite{Mac39a}, and \cite{Mac39b}, for more information on $p$-independence and $p$-bases. Our next task is to develop the theory of $\lambda$-maps and $\lambda$-representatives with respect to arbitrary $p$-independent subsets $\bfbeta$, which certainly may be infinite, since in general the imperfection degree of a field may be any cardinal number. For a cardinal $\nu$, and $n\in\mathbb{N}$, we define \begin{align*} P_{\nu,n}&:=\Big\{(i_{\mu})_{\mu<\nu}\;\Big|\;\mbox{$|\{\mu<\nu\;|\;i_{\mu}\neq0\}|<\infty$ and $\forall \mu<\nu,\;0\leq i_{\mu}<p^{n}$ }\Big\} \end{align*} to be the set of the multi-indices of finite support, in $\nu$-many elements, and in which each index is a non-negative integer strictly less than $p^{n}$. In this context, `finite support' means that any such multi-index contains only finitely many non-zero indices. We emphasise that this set is just a technical device to facilitate our analysis of $p$-independence, and we will be mostly interested in the case $n=1$. 
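A simple running example (ours) may help to fix the notation: for $k=\mathbb{F}_{p}(t)$ the singleton $\{t\}$ is a $p$-basis, so $k$ has imperfection degree $1$.

```latex
% k = F_p(t), k^{(p)} = F_p(t^p), and t is a root of X^p - t^p
% over k^{(p)}, which is irreducible, so
\[
  [\,k^{(p)}(t):k^{(p)}\,]=p ,
\]
% i.e. {t} is p-independent; moreover k = k^{(p)}(t), so {t} is a
% p-basis.  Here nu = 1, and for n = 1 the index set P_{1,1} is
% identified with {0,1,...,p-1}: every alpha in k is uniquely
\[
  \alpha=\sum_{i=0}^{p-1}t^{i}\,c_{i}^{\,p},\qquad c_{i}\in k .
\]
```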
For a $p$-independent set $\bfbeta=(\beta_{\mu})_{\mu<\nu}$, and $I=(i_{\mu})_{\mu<\nu}\in P_{\nu,n}$, we write \begin{align*} \beta^{I}&:=\prod_{\mu<\nu}\beta_{\mu}^{i_{\mu}} \end{align*} for the $I$-th monomial of $\bfbeta$. The set $\{\beta^{I}\mid I\in P_{\nu,n}\}$ is a $k^{(p^{n})}$-linear basis of $k^{(p^{n})}(\bfbeta)$. Therefore, for each $\alpha\in k^{(p^{n})}(\bfbeta)$, there is a unique family $(\lambda^{I}_{\bfbeta}(\alpha))_{I\in P_{\nu,n}}$ of elements of $k$ such that \begin{align}\label{eq:lambda}\tag{3.2.1} \alpha&=\sum_{I\in P_{\nu,n}}\beta^{I}\lambda^{I}_{\bfbeta}(\alpha)^{p^{n}}. \end{align} Note that this sum is finite since $\lambda^{I}_{\bfbeta}(\alpha)$ is zero for cofinitely many $I\in P_{\nu,n}$. We refer to \begin{align*} \lambda^{I}_{\bfbeta}:k^{(p^{n})}(\bfbeta)&\longrightarrow k\\ \alpha&\longmapsto\lambda^{I}_{\bfbeta}(\alpha) \end{align*} as the $I$-th {\bf $\lambda$-map} with respect to $\bfbeta$. \subsection{Representatives and subrings} We prove in Proposition~\ref{prp:generation} that each Cohen ring of characteristic $p^{m+1}$ is generated by the union of the $p^{m}$-representatives and a set of representatives for a $p$-basis of its residue field; this is a key ingredient of Theorem~\ref{thm:MIT}. We begin with a lemma. \begin{lemma}\label{lem:expansion} Let $(A,k)$ be a non-strict Cohen ring of characteristic $p^{m+1}$. Let $s:k\longrightarrow A$ be a choice of representatives. Then every element of $A$ admits a unique representation as a sum $\sum_{i=0}^{m}s(\alpha_{i})p^{i}$, where $\alpha_{i}\in k$. In particular, $A$ is generated as a ring by the image $s(k)$ of $s$. \end{lemma} \begin{proof} We claim that, for all $n\in\mathbb{N}$ and for all $a\in A$, there exist $\alpha_{0},\ldots,\alpha_{n}\in k$ such that $a-\sum_{i=0}^{n}s(\alpha_{i})p^{i}\in\mathfrak{m}^{n+1}$. We prove this by induction on $n$. The base case $n=0$ follows from the fact that $s$ is a choice of representatives for $k$. 
Suppose the statement holds for some $n\in\mathbb{N}$. Let $a\in A$ and denote $\alpha:=\res(a)\in k$. Let $\hat{a}\in A$ be such that $a=s(\alpha)+p\hat{a}$. By the inductive hypothesis, there exist $\alpha_{0},\ldots,\alpha_{n}\in k$ such that $\hat{a}-\sum_{i=0}^{n}s(\alpha_{i})p^{i}\in\mathfrak{m}^{n+1}_{A}$. Denote $b:=s(\alpha)+\sum_{i=0}^{n}s(\alpha_{i})p^{i+1}$. Now $a-b=p\hat{a}-p\sum_{i=0}^{n}s(\alpha_{i})p^{i}\in\mathfrak{m}^{n+2}_{A}$. The uniqueness part of the statement follows from the fact that multiplication by $p^{i}$ is an isomorphism between the additive groups $A/pA$ and $p^{i}A/p^{i+1}A$, for $i\leq m$. Since the characteristic of $A$ is $p^{m+1}$, we have $\mathfrak{m}^{m+1}_{A}=0$, so applying the claim with $n=m$ yields the desired representation; in particular, $A$ is generated by $s(k)$. \end{proof} \begin{proposition}\label{prp:generation} Let $(A,k)$ be a non-strict Cohen ring of characteristic $p^{m+1}$, and let $\bfbeta$ be a $p$-basis of $k$ with representatives $s:\bfbeta\longrightarrow A$. Denote by $s_{A}:k^{(p^{m})}\longrightarrow A^{(p^{m})}$ the choice of $p^{m}$-representatives from Lemma~\ref{lem:pm-representatives}. Then $A$ is generated by $s(\bfbeta)\cup s_{A}(k^{(p^{m})})$. \end{proposition} \begin{proof} Let $B:=[s(\bfbeta)\cup s_{A}(k^{(p^{m})})]$ be the subring of $A$ generated by the images of $s$ and $s_{A}$. By Lemma~\ref{lem:expansion}, it suffices to show that $B$ contains a representative of each element of $k$. For $I=(i_{\mu})_{\mu<\nu}\in P_{\nu,m}$, we write \begin{align*} s(\beta^{I})&:=\prod_{\mu<\nu}(s(\beta_{\mu}))^{i_{\mu}}. \end{align*} We now define a function $S:k\longrightarrow A$ by setting \begin{align*} S(\alpha)&:=\sum_{I\in P_{\nu,m}}s(\beta^{I})s_{A}(\lambda^{I}_{\bfbeta}(\alpha)^{p^{m}}), \end{align*} for $\alpha \in k$. Applying the residue map, and comparing with Equation~(\ref{eq:lambda}), we can see that $S(\alpha)$ is a representative of $\alpha$. Moreover, $S(\alpha)$ is an element of $B$, as required. 
\end{proof} \begin{remark}[Inverse systems with representatives]\label{rem:inverse_system_2} Let $k$ be a field of characteristic $p$. Let $(A_{m},k)_{m\in\mathbb{N}}$ be an inverse system of Cohen rings, in the sense of Remark~\ref{rem:inverse_system}, where each $A_{m}$ has characteristic $p^{m+1}$, and denote by $(\res_{n,m})_{n\geq m}$ the transition maps. Let $\bfbeta$ be a $p$-basis of $k$ and, for each $m$, let $s_{m}:\bfbeta\longrightarrow A_{m}$ be a choice of representatives. Suppose that the maps $s_{m}$ are compatible in the sense that $\res_{n,m}\circ s_{n}=s_{m}$, for $n\geq m$. Then $(A_{m},k,s_{m})_{m\in\mathbb{N}}$ forms an inverse system. Let $(A,k)$ be the inverse limit of $(A_{m},k)_{m\in\mathbb{N}}$ with projections $(r_{m})_{m\in\mathbb{N}}$. By Remark~\ref{rem:inverse_system}, $(A,k)$ is a strict Cohen ring. The inverse limit of the maps $s_{m}$ is a choice of representatives $s:\bfbeta\longrightarrow A$. \end{remark} \section{The Teichm\"{u}ller Embedding Process} At the heart of all the structural arguments about Cohen rings is Teichm\"{u}ller's embedding process, which we discuss in this section. The original formulation can be found in \cite[\S 7]{Tei36b}. Indeed, Mac Lane attributes this technique to Teichm\"{u}ller, and describes it as the `Teichm\"{u}ller embedding process'. See {\cite[Theorem 6]{Mac39c}} for Mac Lane's version. In \cite[Lemma 12]{Coh46}, Cohen rewrote Teichm\"{u}ller's embedding process for an arbitrary complete local ring. \begin{theorem}[{Teichm\"{u}ller Embedding Process}]\label{thm:TEP} Let $(A,k)$ be a Cohen ring, let $\bfbeta\subseteq k$ be $p$-independent with representatives $s:\bfbeta\longrightarrow A$. 
There exists a Cohen ring $(A^{T},k^{T})\supseteq(A,k)$ such that \begin{enumerate} \item $k^{T}=k(\bfbeta^{(p^{-\infty})})$, where $\bfbeta^{(p^{-\infty})}=\{\beta^{p^{-n}}\mid\beta\in\bfbeta,n\in\mathbb{N}\}$, \item $s$ coincides with the restriction to $\bfbeta$ of the unique choice of multiplicative representatives $(k^{T})^{(p^{\infty})}\longrightarrow A^{T}$. \end{enumerate} \end{theorem} \begin{proof} This proof is closely based on those of Teichm\"{u}ller (\cite[\S7]{Tei36b}) and Cohen (\cite[Lemma 12]{Coh46}). It is a recursive construction. We begin by formally adjoining a $p$-th root of $s(\beta)$, for each $\beta\in\bfbeta$. More constructively, we introduce a family of new variables $(X_{\beta}:\beta\in\bfbeta)$, and let $$ A':=A[X_{\beta}:\beta\in\bfbeta]/\big(X_{\beta}^{p}-s(\beta):\beta\in\bfbeta\big). $$ That is, $A'$ is the quotient of the ring $A[X_{\beta}:\beta\in\bfbeta]$ by the ideal generated by the polynomials $X_{\beta}^{p}-s(\beta)$, for $\beta\in\bfbeta$. The natural map $A\longrightarrow A'$ is injective, and we identify $A$ with its image in $A'$. Taking the quotient of $A'$ by $pA'$ yields the field $k':=k(\beta^{p^{-1}}:\beta\in\bfbeta)$, and so $pA'$ is maximal. Indeed, since $A$ is local with unique maximal ideal $pA$, the maximal ideals of $A'$ are those lying over $pA$, which shows that $pA'$ is the unique maximal ideal of $A'$. Thus $(A',k')$ is a pre-Cohen ring, and we have $(A,k)\subseteq (A',k')$. Note that $\bfbeta^{(p^{-1})}:=\{\beta^{p^{-1}}\mid\beta\in\bfbeta\}$ is $p$-independent in $k'$. Indeed, for each $\beta\in\bfbeta$, we write $s'(\beta^{p^{-1}})$ for the image of $X_{\beta}$ in the quotient ring $A'$. Then $s':\bfbeta^{(p^{-1})}\longrightarrow A'$ is a choice of representatives, and $$ s'(\beta^{p^{-1}})^{p}=s(\beta), $$ for all $\beta\in\bfbeta$. Beginning with $(A,k)$, we continue this process recursively, with recursive step $(A,k)\longmapsto(A',k')$. 
In this way, we construct a chain $(A_{n},k_{n})_{n\in\mathbb{N}}$ of pre-Cohen rings, such that $\bfbeta^{(p^{-n})}:=\{\beta^{p^{-n}}\mid\beta\in\bfbeta\}$ is $p$-independent in $k_{n}=k(\bfbeta^{(p^{-n})})$, and $s_{n}:\bfbeta^{(p^{-n})}\longrightarrow A_{n}$ is a choice of representatives with \begin{align*} s_{n}(\beta^{p^{-n}})^{p^{n}}&=s(\beta), \end{align*} for all $n\in\mathbb{N}$ and all $\beta\in\bfbeta$. The morphisms in this chain are embeddings, which we may even view as inclusions, by identifying each $(A_{n},k_{n})$ with its image in $(A_{n+1},k_{n+1})$. The direct limit is a pre-Cohen ring $(A_{\infty},k_{\infty})\supseteq(A,k)$. Taking the completion, we obtain a Cohen ring $(A^{T},k^{T})\supseteq(A,k)$. The union $s^{T}:=\bigcup_{n}s_{n}$ is a choice of representatives for $\bfbeta^{T}:=\bigcup_{n}\bfbeta^{(p^{-n})}$ which commutes with the Frobenius map. By construction, we have $k^{T}=k(\bfbeta^{(p^{-\infty})})$, and so $\bfbeta^{T}\subseteq (k^{T})^{(p^{\infty})}$. Therefore $s^{T}$ coincides with the restriction to $\bfbeta^{T}$ of the unique choice of multiplicative representatives $(k^{T})^{(p^{\infty})}\longrightarrow A^{T}$, as required. \end{proof} \section{Mac Lane's Identity Theorem} In this section we consider Cohen subrings of Cohen rings. We study the `identity' of such subrings inside their overrings: in Theorem \ref{thm:MIT}, which was first clearly articulated by Mac Lane, we show that such a subring is determined by a choice of representatives of a $p$-basis of its residue field. Teichm\"{u}ller's discussion of this issue can be found in \cite[\S8]{Tei36b}. Developing these ideas, Mac Lane's theorems \cite[Theorem 7]{Mac39c} and \cite[Theorem 12]{Mac39c} show that a complete subfield of a $\mathfrak{p}$-adic field, in his language, is determined by a choice of representatives for a $p$-basis of the residue field. Indeed, in our view, Mac Lane is the first to have clearly articulated this portion of the overall argument. 
Nevertheless, we closely follow Cohen's exposition, particularly relevant parts of his proof of \cite[Theorem 11]{Coh46}, which is in fact the theorem we will discuss in the next section. \begin{theorem}[{Mac Lane's Identity Theorem}]\label{thm:MIT} Let $(B,l)$ be a pre-Cohen ring, with pre-Cohen subrings $(A_{1},k_{1})$ and $(A_{2},k_{2})$. Suppose that $(A_{2},k_{2})$ is a Cohen ring, i.e.~is complete, and that $k_{1}\subseteq k_{2}$. Let $\bfbeta$ be a $p$-basis of $k_{1}$ with representatives $s:\bfbeta\longrightarrow A_{1}$. If $s(\bfbeta)\subseteq A_{2}$ then $(A_{1},k_{1})\subseteq (A_{2},k_{2})$. \end{theorem} \begin{proof} First we work in the case that $(B,l)$ is non-strict of characteristic $p^{m+1}$. For $i\in\{1,2\}$, let $s_{i}:k_{i}^{(p^{m})}\longrightarrow A_{i}^{(p^{m})}\subseteq A_{i}$, and let $s_{B}:l^{(p^{m})}\longrightarrow B^{(p^{m})}\subseteq B$, be the unique choices of $p^{m}$-representatives from Lemma~\ref{lem:pm-representatives}. Let $\alpha\in k_{i}^{(p^{m})}$. By Lemma~\ref{lem:pm-representatives}, $s_{B}(\alpha)$ is the unique element of $B^{(p^{m})}$ with residue $\alpha$; but $s_{i}(\alpha)$ is another such element that happens to lie in $A_{i}^{(p^{m})}\subseteq B^{(p^{m})}$. Therefore $s_{i}(\alpha)=s_{B}(\alpha)$, which means that $s_{B}$ extends $s_{i}$. Since $k_{1}\subseteq k_{2}\subseteq l$, we have $k_{1}^{(p^{m})}\subseteq k_{2}^{(p^{m})}\subseteq l^{(p^{m})}$. It follows that $s_{2}$ extends $s_{1}$. In particular, the image $s_{1}(k_{1}^{(p^{m})})$ of $s_{1}$ is contained in the image of $s_{2}$, which in turn is contained in $A_{2}$. Also note that $s(\bfbeta)\subseteq A_{2}$, by assumption. By Proposition~\ref{prp:generation}, the subring of $B$ generated by $s_{1}(k_{1}^{(p^{m})})\cup s(\bfbeta)$ is $A_{1}$. Therefore $A_{1}\subseteq A_{2}$. Finally, we suppose that $(B,l)$ is strict. 
For all $m\in\mathbb{N}$, by the preceding paragraph, the residue ring of $(A_{1},k_{1})$ of characteristic $p^{m+1}$ is a subring of the corresponding residue ring of $(A_{2},k_{2})$. Since $(A_2,k_2)$ is complete by assumption, Remark \ref{rem:inverse_system} implies that $(A_{1},k_{1})$ is a subring of $(A_{2},k_{2})$. \end{proof} \section{Cohen's Homomorphism Theorem and Structure Theorem} \label{section:structure} The remaining ingredient of a structure theorem is the relationship between two arbitrary Cohen rings with the same residue field. Such a relationship exists, in the form of a morphism, and such a morphism is uniquely determined by specifying the image of a set of representatives of a $p$-basis of the residue field. Cohen's paper \cite{Coh46} appears to be the first to study the case of characteristic $p^{m}$, $m>0$. In this section we state and prove a version of Cohen's Theorem, \cite[Theorem 11]{Coh46}, suitable for our setting. \begin{definition}\label{def:lift} Let $(A,k)$ and $(B,l)$ be pre-Cohen rings, and let $\varphi=(\varphi_{A},\varphi_{k}):(A,k)\longrightarrow(B,l)$ be a morphism. Also, let $\bfbeta\subseteq k$ be a $p$-basis of $k$, and let $s_{A}:\bfbeta\longrightarrow A$ and $s_{B}:\varphi_{k}(\bfbeta)\longrightarrow B$ be representatives. We say that $\varphi$ {\bf respects} $s_{A}$ and $s_{B}$ if $\varphi_{A}\circ s_{A}=s_{B}\circ\varphi_{k}|_{\bfbeta}$. \end{definition} \begin{figure} \caption{Illustration of Definition \ref{def:lift}} \label{fig:3} \end{figure} \begin{theorem}[{Cohen's Homomorphism Theorem}]\label{thm:CoHo} Let $(A,k)$ and $(B,l)$ be Cohen rings, and let $\varphi_{k}:k\longrightarrow l$ be an embedding of fields such that $l/\varphi_{k}(k)$ is a separable extension. Let $\bfbeta$ be a $p$-basis of $k$ and let $s_{A}:\bfbeta\longrightarrow A$ and $s_{B}:\varphi_{k}(\bfbeta)\longrightarrow B$ be representatives. Suppose that $(A,k)$ is strict or that $(A,k)$ is non-strict of characteristic at least that of $(B,l)$. 
Then there exists a unique ring homomorphism $\varphi_{A}:A\longrightarrow B$ such that $$ \varphi=(\varphi_{A},\varphi_{k}):(A,k)\longrightarrow(B,l) $$ is a morphism which respects $s_{A}$ and $s_{B}$. Moreover, if $(A,k)$ and $(B,l)$ have the same characteristic, then $\varphi$ is an embedding. \end{theorem} \begin{proof} This proof is a construction closely based on that of Cohen (\cite[Theorem 11]{Coh46}). For notational simplicity, we identify $k$ with its image in $l$ under the embedding $\varphi_{k}$. Then $\varphi_{k}$ is the inclusion map $\mathrm{id}$, and $l/k$ is a separable extension. It suffices to construct a ring homomorphism $A\longrightarrow B$, which induces the inclusion map $k\longrightarrow l$. Throughout, we denote the residue maps by $\mathrm{res}_A: A \longrightarrow k$ and $\mathrm{res}_B: B \longrightarrow l$ respectively. To begin with, we suppose that $(A,k)$ is strict and that $k$ is perfect. Thus $\bfbeta$ is empty, and we dispense with both of the maps $s_{A}$ and $s_{B}$. Since $(A,k)$ is a strict Cohen ring, we have $(\mathbb{Z}_{p},\mathbb{F}_{p})\subseteq(A,k)$, and there is the following natural ring homomorphism: \begin{align*} \varphi_{0}:\mathbb{Z}_{p}\longrightarrow B. \end{align*} Let $T$ be a transcendence basis of $k/\mathbb{F}_{p}$. Since $k$ is perfect, we have $T\subseteq k^{(p^{\infty})}=k\subseteq l^{(p^{\infty})}\subseteq l$. By Theorem \ref{thm:representatives}, there are unique choices of multiplicative representatives: $s_{A,0}:k\longrightarrow A$ and $s_{B,0}:l^{(p^{\infty})}\longrightarrow B$. Note that $s_{A,0}(T)$ is algebraically independent over $\mathbb{Z}_{p}$ since $T$ is algebraically independent over $\mathbb{F}_{p}$. We consider the subring $\mathbb{Z}_{p}[s_{A,0}(T)]\subseteq A$, with $\res_{A}(\mathbb{Z}_{p}[s_{A,0}(T)])=\mathbb{F}_{p}[T]$. 
We may extend $\varphi_{0}$ to a ring homomorphism \begin{align*} \varphi_{1,0}:\mathbb{Z}_{p}[s_{A,0}(T)]\longrightarrow B \end{align*} by declaring $\varphi_{1,0}(s_{A,0}(t))=s_{B,0}(t)$, for each $t\in T$. In fact, for each $n\in\mathbb{N}$, we consider the subring $\mathbb{Z}_{p}[s_{A,0}(T^{(p^{-n})})]\subseteq A$, with $\res_A(\mathbb{Z}_{p}[s_{A,0}(T^{(p^{-n})})])=\mathbb{F}_{p}[T^{(p^{-n})}]$. As in the case $n=0$, we construct a ring homomorphism \begin{align*} \varphi_{1,n}:\mathbb{Z}_{p}[s_{A,0}(T^{(p^{-n})})]\longrightarrow B, \end{align*} by declaring $\varphi_{1,n}(s_{A,0}(t^{p^{-n}}))=s_{B,0}(t^{p^{-n}})$, for each $t\in T$. Since $s_{A,0}$ and $s_{B,0}$ are multiplicative, the family $(\varphi_{1,n})_{n\in\mathbb{N}}$ of ring homomorphisms forms an increasing chain. Taking the direct limit (i.e.\ union), we obtain the subring $A_{0}:=\mathbb{Z}_{p}[s_{A,0}(T^{(p^{-n})})\;|\;n\in\mathbb{N}]\subseteq A$, with $\res_{A}(A_{0})=\mathbb{F}_{p}[T^{(p^{-n})}\mid n\in\mathbb{N}]$, and we obtain the ring homomorphism \begin{align*} \varphi_{2}:A_{0}\longrightarrow B. \end{align*} Localising $A_{0}$ at $A_{0}\cap pA$, we obtain the local ring $A_{1}:=(A_{0})_{A_{0}\cap pA}\subseteq A$, with $\res_{A}(A_{1})=\mathbb{F}_{p}(T)^{\mathrm{perf}}$, and we extend $\varphi_{2}$ to a ring homomorphism \begin{align*} \varphi_{3}:A_{1}\longrightarrow B. \end{align*} The final part of this construction is to extend $\varphi_{3}$ to have domain $A$. Since strict Cohen rings are henselian valuation rings, and $k/\mathbb{F}_{p}(T)^{\mathrm{perf}}$ is separable algebraic, this prolongation can be accomplished by a direct application of Hensel's Lemma, as in e.g.~\cite[Lemma 9.30]{Kuhlmann2011}. More precisely, for a separable irreducible polynomial $f\in A_{1}[X]$ and $\alpha\in k$ with $\mathrm{res}_{A}(f)(\alpha)=0$, by Hensel's Lemma we obtain $a\in A$ such that $f(a)=0$. Likewise, we obtain $b\in B$ with $\varphi_{3}(f)(b)=0$. 
We now extend $\varphi_{3}$ to a morphism \begin{align*} \varphi_{4}:A_{1}[a]&\longrightarrow B \end{align*} by sending $a\longmapsto b$. Note that $\res_{A}(A_{1}[a])=\mathbb{F}_{p}(T)^{\mathrm{perf}}(\alpha)$. Taking the direct limit of ring homomorphisms constructed in this way, we obtain a ring homomorphism \begin{align*} \varphi:A&\longrightarrow B \end{align*} that induces the inclusion map on the residue fields and respects $s_{A}$ and $s_{B}$, as required. It remains to show that $\varphi$ is the unique such morphism, but this follows from Theorem \ref{thm:MIT}, applied to the case that $\bfbeta$ is empty. Remaining under the assumption that $(A,k)$ is strict, we turn to the case that $k$ is imperfect. We are given a $p$-basis $\bfbeta$ of $k$ with representatives $s_{A}:\bfbeta\longrightarrow A$ and $s_{B}:\bfbeta\longrightarrow B$. Note that $\bfbeta$ is $p$-independent in $l$, by our assumption that $l/k$ is separable. By Theorem \ref{thm:TEP}, there exists a Cohen ring $(A^{T},k^{T})\supseteq(A,k)$ such that \begin{enumerate} \item $k^{T}=k^{\mathrm{perf}}$, and \item $s_{A}$ is the restriction to $\bfbeta$ of the multiplicative representatives $s_{A}^{T}:k^{T}\longrightarrow A^{T}$. \end{enumerate} By another application of Theorem \ref{thm:TEP}, there exists a Cohen ring $(B^{T},l^{T})\supseteq(B,l)$ such that \begin{enumerate} \setcounter{enumi}{2} \item $l^{T}=l(\bfbeta^{(p^{-\infty})})$, and \item $s_{B}$ is the restriction to $\bfbeta$ of the multiplicative representatives $s_{B}^{T}:(l^{T})^{(p^{\infty})}\longrightarrow B^{T}$. \end{enumerate} Since $k^{T}\subseteq l^{T}$ and $k^{T}$ is perfect, by the first part of this proof there exists a unique morphism \begin{align*} \varphi:A^{T}\longrightarrow B^{T} \end{align*} that induces the inclusion map on the residue fields. 
By {\bf(ii)}, the composition $\varphi\circ s_{A}:\bfbeta\longrightarrow\varphi(A^{T})$ coincides with the unique choice of multiplicative representatives for $\bfbeta$ in $\varphi(A^{T})$; and by {\bf(iv)}, also $s_{B}$ coincides with the unique choice of multiplicative representatives for $\bfbeta$ in $B^{T}$. Applying Theorem~\ref{thm:representatives}, and since $\varphi((A^{T})^{(p^{\infty})})\subseteq(B^{T})^{(p^{\infty})}$, we have $\varphi\circ s_{A}=s_{B}$. It follows that both subrings $\varphi(A)$ and $B$ of $B^{T}$ contain $\varphi(s_{A}(\bfbeta))=s_{B}(\bfbeta)$. Since also $k\subseteq l$, we may apply Theorem~\ref{thm:MIT} to deduce that $\varphi(A)\subseteq B$. Therefore $\varphi$ restricts to a ring homomorphism $A\longrightarrow B$ that induces the inclusion map on the residue fields and respects $s_{A}$ and $s_{B}$. The uniqueness of $\varphi$ again follows from Theorem \ref{thm:MIT}. Note that the non-trivial proper ideals of $A$ are $\mathfrak{m}_{A}^{n}$, for $n>0$, and the quotient $A/\mathfrak{m}_{A}^{n}$ has characteristic $p^{n}$. Thus the morphism we have constructed is an embedding if and only if $(B,l)$ is also strict. This completes the proof in the case that $(A,k)$ is strict. Finally, we turn to the case that $(A,k)$ and $(B,l)$ are non-strict, of characteristics $p^{n}$ and $p^{m}$, respectively, for $n\geq m$. By Theorem~\ref{thm:HS}, there exists a strict Cohen ring $(C,k)$. Choose representatives $s_{C}:\bfbeta\longrightarrow C$. By the strict case established above, there is a unique ring homomorphism $\varphi:C\longrightarrow A$ which induces the identity map on the residue field and respects $s_{C}$ and $s_{A}$. Again note that the non-trivial proper ideals in $(C,k)$ are $\mathfrak{m}_{C}^{n}$, for $n\in\mathbb{N}_{>0}$. Therefore $\varphi$ is the composition of the quotient map $C\longrightarrow C/\mathfrak{m}_{C}^{n}$ with an isomorphism $C/\mathfrak{m}_{C}^{n}\longrightarrow A$. 
Likewise there is a unique ring homomorphism $\psi:C\longrightarrow B$ which induces the inclusion map on the residue fields and respects $s_{C}$ and $s_{B}$. Again $\psi$ is the composition of the quotient map $C\longrightarrow C/\mathfrak{m}_{C}^{m}$ with an embedding $C/\mathfrak{m}_{C}^{m}\longrightarrow B$. Since $n\geq m$, $\mathfrak{m}_{C}^{n}\subseteq\mathfrak{m}_{C}^{m}$, and thus these homomorphisms give rise to a ring homomorphism $A\longrightarrow B$ that induces the inclusion map on the residue fields and respects $s_{A}$ and $s_{B}$. Once again, the uniqueness follows from Theorem~\ref{thm:MIT}. Note that the morphism is an embedding if and only if $m=n$. \end{proof} \begin{remark}\label{rem:quotient} In the setting of Theorem~\ref{thm:CoHo}, and in the case that $(A,k)$ is strict and $(B,l)$ is of characteristic $p^{m}$, the resulting morphism $\varphi$ factors into a composition of the natural quotient map $(A,k)\longrightarrow(A/\mathfrak{m}_{A}^{m},k)$ and an embedding $(A/\mathfrak{m}_{A}^{m},k)\longrightarrow(B,l)$. \end{remark} In our applications of Cohen's Homomorphism Theorem in the second part of the paper, we require the following consequence: \begin{corollary}[Relative Embedding Theorem]\label{cor:CoHo_relative} Let $(A_{1},k_{1})$ and $(A_{2},k_{2})$ be two Cohen rings, and let $(A_{0},k_{0})$ be a Cohen subring common to both. Assume we are given an embedding of residue fields $\varphi_k:k_1 \longrightarrow k_2$ over $k_0$ and that both $k_1/k_0$ and $k_2/\varphi_k(k_1)$ are separable. Then, there is an embedding $\varphi$ of $(A_{1},k_{1})$ into $(A_{2},k_{2})$ which induces $\varphi_k$ and fixes $A_{0}$ pointwise. Moreover, if $\varphi_{k}$ is an isomorphism then $\varphi$ is an isomorphism. \end{corollary} \begin{proof} Note that by assumption, $(A_1,k_1), (A_2,k_2)$ and $(A_0,k_0)$ have the same characteristic. Let $\bfbeta_0$ be a $p$-basis of $k_0$, and $s_0:\bfbeta_0 \longrightarrow A_0$ be a choice of representatives. 
We first show the existence of an embedding of Cohen rings $\varphi:A_1 \longrightarrow A_2$ which induces $\varphi_k$ and fixes $s_0(\bfbeta_{0})$. Since $k_1/k_0$ is separable, we can find a $p$-basis $\bfbeta_1$ of $k_{1}$ prolonging $\bfbeta_0$, and a choice of representatives $s_1:\bfbeta_1 \longrightarrow A_1$ prolonging $s_0$. Note that since $\varphi_k$ restricts to the identity on $k_0$, $\varphi_{k}(\bfbeta_1)$ also contains $\bfbeta_0$. We now choose representatives $s_2:\varphi_{k}(\bfbeta_1) \longrightarrow A_2$ such that $s_2$ prolongs $s_0$. By Theorem \ref{thm:CoHo}, and since $k_{2}/\varphi_{k}(k_{1})$ is separable, there is a morphism $\varphi:A_1 \longrightarrow A_2$ that respects $s_1$ and $s_2$ (and hence fixes $s_0(\bfbeta_{0})$) and induces $\varphi_k$. Since $A_{1}$ and $A_{2}$ have the same characteristic, $\varphi$ is an embedding. Now, let $\varphi:A_1\longrightarrow A_2$ be any embedding which induces $\varphi_k$ and fixes $s_0(\bfbeta_{0})$. Then the restriction $\varphi_0$ of $\varphi$ to $A_0$ is a ring isomorphism between $A_0$ and $\varphi_0(A_0)$. Since $s_0(\bfbeta_0)$ is contained in $\varphi_0(A_0)$, by Mac Lane's Identity Theorem (Theorem~\ref{thm:MIT}) we get $A_0 \subseteq \varphi_0(A_0)$. Symmetrically, as $s_0$ is also a choice of representatives for a $p$-basis of the residue field of $\varphi_0(A_0)$, we get $\varphi_0(A_0) \subseteq A_0$. Therefore $\varphi_{0}$ is an automorphism of $A_{0}$. By assumption, $\varphi_k$ restricts to the identity on $k_0$, and so $\varphi$ induces the identity on the residue field of $A_{0}$. Since $\varphi$ fixes $s_0(\bfbeta_{0})$, so does $\varphi_{0}$. Hence in particular $\varphi_{0}$ respects $s_{0}$ and $s_{0}$ (note that $s_{0}$ is a choice of representatives of the domain and codomain of $\varphi_{0}$, which are both $A_{0}$). Theorem~\ref{thm:CoHo} implies that there is a unique automorphism of $A_{0}$ with these properties. 
As the identity map from $A_0$ to $A_0$ also induces the identity on $k_0$ and fixes $s_0(\bfbeta_{0})$, we conclude $\varphi_0 = \mathrm{id}_{A_0}$. Finally, we show that if $\varphi_k$ is an isomorphism, then $\varphi$ is an isomorphism: if $\varphi_k$ is an isomorphism and $\bfbeta_{1}$ is a $p$-basis of $k_1$, then $\varphi_k(\bfbeta_{1})$ is a $p$-basis of $k_2$. Thus, $\varphi(A_1)$ contains the lift of a $p$-basis for $k_2$, and hence we have $A_2 \subseteq \varphi(A_1)$ by Mac Lane's Identity Theorem (Theorem~\ref{thm:MIT}). \end{proof} More explicitly, given an isomorphism between the residue fields of two Cohen rings of the same characteristic, we get a complete understanding of its lifts to isomorphisms of Cohen rings: \begin{corollary}[{Cohen Structure Theorem, v.1}]\label{cor:Cohen_structure_1} Let $(A_{1},k_{1})$ and $(A_{2},k_{2})$ be two Cohen rings of the same characteristic, let $\varphi_{k}:k_{1}\longrightarrow k_{2}$ be an isomorphism of residue fields, and let $\bfbeta\subseteq k_{1}$ be a $p$-basis. Consider representatives $s_{1}:\bfbeta\longrightarrow A_{1}$ and $s_{2}:\varphi_{k}(\bfbeta)\longrightarrow A_{2}$. There exists a unique isomorphism of Cohen rings $$ \varphi=(\varphi_{A},\varphi_{k}):(A_{1},k_{1})\longrightarrow(A_{2},k_{2}), $$ which respects $s_{1}$ and $s_{2}$, and which is $\varphi_{k}$ on the residue fields. \end{corollary} \begin{proof} If both $(A_{1},k_1)$ and $(A_{2},k_2)$ are strict then both existence and uniqueness follow from Theorem \ref{thm:CoHo}. Suppose next that both $(A_{1},k_1)$ and $(A_{2},k_2)$ are of characteristic $p^{m}$. Let $(B,k_1)$ be a strict Cohen ring with representatives $s:\bfbeta\longrightarrow B$. 
By Theorem \ref{thm:CoHo} there are unique morphisms $\varphi^{1}=(\varphi^{1}_{A},\mathrm{id}_{k_1}):(B,k_1)\longrightarrow(A_{1},k_1)$ and $\varphi^{2}=(\varphi^{2}_{A},\varphi_{k}):(B,k_1)\longrightarrow(A_{2},k_2)$ which respect $s$ and $s_1$ (resp.~$s_{2}$) and which induce the identity (resp.~$\varphi_{k}$) on the residue field. Moreover, both $\varphi^{i}_{A}$ are surjective and both factor through the quotient map $B\longrightarrow B/\mathfrak{m}^{m}$ (cf.~Remark~\ref{rem:quotient}). Thus, by the Isomorphism Theorem, both $(A_{i},k_i)$ are isomorphic to $(B/\mathfrak{m}^{m},k_1)$. Therefore there is an isomorphism between them that respects $s_{1}$ and $s_{2}$, and induces $\varphi_{k}$ on the residue field. This isomorphism is unique, by Theorem~\ref{thm:CoHo}. \end{proof} As long as one is only interested in the existence of an isomorphism of Cohen rings, the following simplified version of the above is sufficient: \begin{corollary}[{Cohen Structure Theorem, v.2}]\label{cor:Cohen_structure_2} Let $(A_{1},k_1)$ and $(A_{2},k_2)$ be Cohen rings of the same characteristic, and assume that $\varphi_{k}:k_1 \longrightarrow k_2$ is an isomorphism of the residue fields. There exists an isomorphism of Cohen rings $$ \varphi=(\varphi_{A},\varphi_k):(A_{1},k_1)\longrightarrow(A_{2},k_2), $$ which is $\varphi_k$ on the residue fields. \end{corollary} \begin{proof} Immediate from Corollary \ref{cor:Cohen_structure_1}. \end{proof} Our aim is now to apply Cohen's Homomorphism Theorem to give a clear statement of the relative structure of Cohen rings. That is, we will describe the morphisms between Cohen rings which extend a given morphism between subrings. Although we will not refer to the statement later on in this paper, we state and prove it for future reference. It should be noted once again that this is closely based on the work of Teichm\"{u}ller, Mac Lane, Cohen, and others. See for example \cite{Tei36b}, \cite{Mac39c}, and \cite{Coh46}. 
\begin{theorem}[Relative Homomorphism Theorem] \label{thm:relative_homomorphism_theorem} Let $(A_{1},k_{1})\subseteq(A_{2},k_{2})$ and $(B_{1},l_{1})\subseteq(B_{2},l_{2})$ be two extensions of Cohen rings, let \begin{align*} \varphi=(\varphi_{A},\varphi_{k}):(A_{1},k_{1})\longrightarrow(B_{1},l_{1}) \end{align*} be a morphism, and let $\rmPhi_{k}:k_{2}\longrightarrow l_{2}$ be an embedding of fields which extends $\varphi_{k}$. Suppose that both $l_{2}/\rmPhi_{k}(k_{2})$ and $k_{2}/k_{1}$ are separable. Let $\bfbeta$ be a $p$-basis of $k_{2}$ over $k_{1}$, and let $s_{A}:\bfbeta\longrightarrow A_{2}$ and $s_{B}:\rmPhi_{k}(\bfbeta)\longrightarrow B_{2}$ be choices of representatives. Then, there exists a unique morphism of Cohen rings $$ \rmPhi:=(\rmPhi_{A},\rmPhi_{k}):(A_{2},k_{2})\longrightarrow(B_{2},l_{2}), $$ that respects $s_{A}$ and $s_{B}$, that induces $\rmPhi_{k}$ on the residue fields, and that extends $\varphi$. \end{theorem} \begin{figure} \caption{Illustration of Theorem~\ref{thm:relative_homomorphism_theorem}} \label{fig:4} \end{figure} \begin{proof} We are given a $p$-basis $\bfbeta$ of $k_{2}$ over $k_{1}$, that is, each $\beta\in\bfbeta$ is of degree $p$ over $k_{2}^{(p)}k_{1}(\bfbeta\setminus\{\beta\})$, and $k_{2}=k_{2}^{(p)}k_{1}(\bfbeta)$. Choose any $p$-basis $\bfbeta_{A,1}$ of $k_{1}$ and any representatives $s_{A,1}:\bfbeta_{A,1}\longrightarrow A_{1}$. Since $k_{2}/k_{1}$ is separable, $\bfbeta_{A,2}:=\bfbeta\sqcup\bfbeta_{A,1}$ is a $p$-basis of $k_{2}$. We define \begin{align*} s_{A,2}:\bfbeta_{A,2}&\longrightarrow A_{2}\\ \beta&\longmapsto\left\{ \begin{array}{ll} s_{A,1}(\beta)&\beta\in\bfbeta_{A,1}\\ s_{A}(\beta)&\beta\in\bfbeta, \end{array}\right. \end{align*} which is a choice of representatives for $\bfbeta_{A,2}$. Next we let $\bfbeta_{B,2}:=\varphi_{k}(\bfbeta_{A,1})\sqcup\rmPhi_{k}(\bfbeta)$. 
We define \begin{align*} s_{B,2}:\bfbeta_{B,2}&\longrightarrow B_{2}\\ \beta&\longmapsto\left\{ \begin{array}{ll} \varphi_{A}(s_{A,1}(\varphi_{k}^{-1}(\beta)))&\beta\in\varphi_{k}(\bfbeta_{A,1})\\ s_{B}(\beta)&\beta\in\rmPhi_{k}(\bfbeta), \end{array}\right. \end{align*} which is a choice of representatives for $\bfbeta_{B,2}$. It follows from Theorem \ref{thm:CoHo} that there is a unique morphism $$ \rmPhi=(\rmPhi_{A},\rmPhi_{k}):(A_{2},k_{2})\longrightarrow(B_{2},l_{2}), $$ which respects $s_{A,2}$ and $s_{B,2}$, and which is $\rmPhi_{k}$ on the residue fields. Observe that $\rmPhi$ extends $\varphi$ since in particular $\rmPhi$ respects $s_{A,1}$ and $\varphi_{A}\circ s_{A,1}\circ\varphi_{k}^{-1}|_{\varphi_{k}(\bfbeta_{A,1})}$ (the latter being a choice of representatives for $\varphi_{k}(\bfbeta_{A,1})$). This proves the existence part of our claim. For uniqueness, if $\rmPsi$ is any other morphism which extends $\varphi$ and respects $s_{A}$ and $s_{B}$ then we may argue that it also respects $s_{A,2}$ and $s_{B,2}$, just as for $\rmPhi$. Therefore $\rmPhi=\rmPsi$ by Theorem \ref{thm:CoHo}. \end{proof} \section{Cohen--Witt rings} Let $k$ denote a field of characteristic $p>0$. For each natural number $n\in\mathbb{N}$, we denote the {\bf $n$-th Witt ring} over $k$ by $W_{n+1}(k)$, and the {\bf infinite Witt ring} we denote by $W[k]$, as described, for example, in \cite{vdD14} and in many other places. If $k$ is perfect, then $W[k]$ is a complete discrete valuation ring of characteristic zero with residue field $k$. That is, $(W[k],k)$ is a strict Cohen ring. By Theorem \ref{thm:CoHo}, $(W[k],k)$ may be viewed as providing the canonical example of a Cohen ring with residue field $k$, canonical in the sense that for perfect $k$ there is a canonical isomorphism between any two strict Cohen rings with residue field $k$. Likewise, $(W_{n}(k),k)$ is the canonical example of a Cohen ring with residue field $k$, of characteristic $p^{n}$. 
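For orientation, we note the most familiar instances (standard facts, recalled here for the reader's convenience):

```latex
% Standard examples of Witt rings over perfect residue fields.
For example, $W[\mathbb{F}_{p}]\cong\mathbb{Z}_{p}$, the ring of $p$-adic
integers, and $W_{n}(\mathbb{F}_{p})\cong\mathbb{Z}/p^{n}\mathbb{Z}$.
More generally, for $q=p^{r}$, the ring $W[\mathbb{F}_{q}]$ is the
valuation ring of the unramified extension of $\mathbb{Q}_{p}$ of degree
$r$.
```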
On the other hand, if $k$ is imperfect, then $W[k]$ fails to be a valuation ring. There is a less well-known construction, appropriate for the case of imperfect residue fields, which constructs Cohen rings as subrings of Witt rings (see e.g.~\cite{Sch72}). To mitigate the conflict with our own terminology, we will refer to these more concrete rings as `Cohen--Witt rings'. We fix a $p$-basis $\bfbeta$ of $k$. For each $n\in\mathbb{N}$, the {\bf $n$-th Cohen--Witt ring} over $k$, which we denote by $C_{n+1}(k)$, is the subring of $W_{n+1}(k)$ generated by $W_{n+1}(k^{(p^{n})})$ and the elements $[\beta]=(\beta,0,\ldots)$, for $\beta\in\bfbeta$. That is, \begin{align*} C_{n+1}(k)&:=W_{n+1}(k^{(p^{n})})\big([\beta]\mid\beta\in\bfbeta\big). \end{align*} We note that $C_{n+1}(k)$ is a local ring, with maximal ideal $(p)$ and residue field $k$. Thus $(C_{n+1}(k),k)$ is indeed a Cohen ring. There are representatives $s_{n}:\bfbeta\longrightarrow C_{n+1}(k)$, given by $s_{n}(\beta)=[\beta]$, for $\beta\in\bfbeta$. The maps $\pi_{n}:W_{n+1}(k)\longrightarrow W_{n}(k)$, which are given by the truncation of the Witt vectors, restrict to surjections \begin{align*} \pi_{n}|_{C_{n+1}(k)}:C_{n+1}(k)&\longrightarrow C_{n}(k). \end{align*} Just as with the Witt rings, the Cohen--Witt rings equipped with these maps form an inverse system, as in Remark~\ref{rem:inverse_system}, the inverse limit of which is the {\bf strict Cohen--Witt ring} over $k$: \begin{align*} C[k]&:=\lim_{\longleftarrow}C_{n+1}(k). \end{align*} The field of fractions of $C[k]$ is often denoted by $C(k)$. This system may be enriched with a compatible system of representatives $s_{n}:\bfbeta\longrightarrow C_{n+1}(k)$, as in Remark~\ref{rem:inverse_system_2}. It is a consequence of Corollary \ref{cor:Cohen_structure_2} that any strict Cohen ring $(A,k)$ is isomorphic to the strict Cohen--Witt ring $C[k]$, though the isomorphism is not canonical in the sense that it depends on our choice of $\bfbeta$ and of the representatives $s_{n}$. 
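Two boundary cases may help to situate the construction (these observations are immediate from the definitions, and are included for illustration):

```latex
% Perfect vs. imperfect residue fields in the Cohen--Witt construction.
If $k$ is perfect, then $k^{(p^{n})}=k$ and $\bfbeta=\emptyset$, so
$C_{n+1}(k)=W_{n+1}(k)$ and $C[k]=W[k]$. If $k$ is imperfect, say
$k=\mathbb{F}_{p}(t)$ with $\bfbeta=\{t\}$, then $C_{2}(k)$ is a proper
subring of $W_{2}(k)$: since $p\cdot(a,b)=(0,a^{p})$ in $W_{2}(k)$, the
ideal $pW_{2}(k)=\{(0,c)\mid c\in k^{(p)}\}$ is strictly contained in the
ideal of non-units $\{(0,b)\mid b\in k\}$, so $(W_{2}(k),k)$ is not a
Cohen ring; and an element such as $(0,t)$ lies outside $C_{2}(k)$, whose
maximal ideal is $(p)$.
```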
\part{The Model Theory}\label{part:2} \section{Theories and Completeness} Having developed the algebraic theory of Cohen rings, we are now in a position to describe their first-order theories. Let $\mathfrak{L}_{\mathrm{ring}}=\{+,-,\cdot,0,1\}$ denote the first-order language of rings, and $\mathfrak{L}_{\mathrm{vf}} = \mathfrak{L}_\mathrm{ring}\cup \{\mathcal{O}\}$ the expansion of $\mathfrak{L}_\mathrm{ring}$ by a unary predicate (usually interpreted as the valuation ring). To study the structure of the value group, we also consider the language of ordered abelian groups $\mathfrak{L}_{\mathrm{oag}}=\{0,+,\leq\}$. We consider the following theories: \begin{definition} Let $k$ be a field of characteristic $p$. \begin{itemize} \item Let $T_\mathrm{pc}$ be the $\mathfrak{L}_{\mathrm{ring}}$-theory of commutative rings with unity in which $(p)$ is the unique maximal ideal. \item Let $T_\mathrm{pc}(k,n)$ be the $\mathfrak{L}_{\mathrm{ring}}$-theory consisting of the union of $T_\mathrm{pc}$ with axioms that assert of a model $B$ that its characteristic is $p^{n}$ and that its residue field $k_{B}$ is elementarily equivalent to $k$. \item Let $T_\mathrm{ur}$ be the $\mathfrak{L}_{\mathrm{vf}}$-theory that asserts of a model $(K,v)$ that $v$ is henselian, that $\mathcal{O}_{v}$ is a valuation ring on $K$ which is a model of $T_\mathrm{pc}$, and that the characteristic of $K$ is zero. \item Let $T_\mathrm{ur}(k,\rmGamma)$ be the $\mathfrak{L}_{\mathrm{vf}}$-theory extending $T_\mathrm{ur}$ which requires in addition for a model $(K,v)$ that $Kv$ is elementarily equivalent to $k$ and that $vK$ is elementarily equivalent to $\rmGamma$. \end{itemize} \end{definition} Note that if $\rmGamma$ is an ordered abelian group \emph{without minimum positive element}, then $T_\mathrm{ur}(k,\rmGamma)$ is not consistent. We do not define $T_{\mathrm{ur}}(k,\rmGamma)$ for a field $k$ of characteristic $0$.
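For example, $(\mathbb{Q}_{p},v_{p})\models T_{\mathrm{ur}}(\mathbb{F}_{p},\mathbb{Z})$, since $\mathbb{Z}_{p}$ is a model of $T_{\mathrm{pc}}$. The inconsistency noted above can also be made explicit (our own remark): in any model $(K,v)\models T_{\mathrm{ur}}$ the maximal ideal of $\mathcal{O}_{v}$ is $(p)$, so
\[
0<v(x)\;\Longrightarrow\;x\in\mathfrak{m}_{v}=(p)\;\Longrightarrow\;v(x)\geq v(p),
\]
i.e.~$v(p)$ is the minimum positive element of $vK$; since having a minimum positive element is a first-order property, $vK$ cannot be elementarily equivalent to an ordered abelian group without one.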
The aim of this section is to show that $T_\mathrm{pc}(k,n)$ and $T_\mathrm{ur}(k,\rmGamma)$ are complete and to deduce the usual AKE-type consequences from this. The non-strict case is a simple application of the Structure Theorem for Cohen rings of positive characteristic: \begin{theorem}\label{thm:completeness_nonstrict} For any field $k$ of positive characteristic, the $\mathfrak{L}_\mathrm{ring}$-theory $T_\mathrm{pc}(k,n)$ is complete. \end{theorem} \begin{proof} Let $B_{1},B_{2}\models T_\mathrm{pc}(k,n)$. By the Keisler--Shelah Theorem (\cite{She71}), replacing $B_{1}$ and $B_{2}$ with suitable ultrapowers if necessary, we may assume that there is an $\mathfrak{L}_{\mathrm{ring}}$-isomorphism $\varphi_{k}:k_{B_{1}}\longrightarrow k_{B_{2}}$. Applying the Structure Theorem for Cohen rings (Corollary \ref{cor:Cohen_structure_2}), we get an isomorphism $\varphi:B_{1}\longrightarrow B_{2}$ that induces $\varphi_{k}$. In particular, this implies that $B_{1}$ and $B_{2}$ are elementarily equivalent. \end{proof} For the general case, we use the usual proof method of combining the Structure Theorem with a coarsening argument which allows us to apply the Ax--Kochen/Ershov Theorem in the equicharacteristic $0$ case: \begin{theorem}\label{thm:completeness} For any field $k$ of positive characteristic and any ordered abelian group $\rmGamma$ with minimum positive element, the $\mathfrak{L}_\mathrm{vf}$-theory $T_\mathrm{ur}(k,\rmGamma)$ is complete. \end{theorem} \begin{proof} Let $(K_{1},v_{1}),(K_{2},v_{2})\models T_{\mathrm{ur}}(k,\rmGamma)$. By the Keisler--Shelah Theorem (\cite{She71}), replacing each structure with a suitable ultrapower if necessary, we may assume that both $(K_{1},v_{1})$ and $(K_{2},v_{2})$ are $\aleph_{1}$-saturated, and that there is an $\mathfrak{L}_\mathrm{ring}$-isomorphism $\varphi_{k}:K_{1}v_{1}\longrightarrow K_{2}v_{2}$ and an $\mathfrak{L}_\mathrm{oag}$-isomorphism $\varphi_{\rmGamma}:v_{1}K_{1}\longrightarrow v_{2}K_{2}$.
For $i=1,2$, let $w_{i}$ denote the finest proper coarsening of $v_{i}$ (note that $w_i$ exists because $v_iK_i$ has a minimum positive element) and let $\bar{v}_{i}$ denote the valuation induced by $v_{i}$ on the residue field $K_{i}w_{i}$. By $\aleph_{1}$-saturation, both valued fields $(K_{i}w_{i}, \bar{v_i})$ are spherically complete and hence the valuation rings $\mathcal{O}_{\bar{v}_{i}}$ are strict Cohen rings. By the Structure Theorem for Cohen rings (Corollary \ref{cor:Cohen_structure_2}), there exists an isomorphism $\varphi:\mathcal{O}_{\bar{v}_{1}}\longrightarrow\mathcal{O}_{\bar{v}_{2}}$ that induces $\varphi_{k}$. Note that $\varphi_{\rmGamma}$ also induces an isomorphism $\bar{\varphi}_{\rmGamma}:w_{1}K_{1}\longrightarrow w_{2}K_{2}$ because any isomorphism of ordered abelian groups sends a minimum positive element (and hence the generated convex subgroup) to another such. Since both $(K_{i},w_{i})$ are henselian of equicharacteristic zero, the Ax--Kochen/Ershov principle (\cite[AKE-Theorem 5.1]{vdD14}) implies that $(K_{1},w_{1})$ and $(K_{2},w_{2})$ are elementarily equivalent. It follows by the $\emptyset$-definability of the valuation $v_{i}$ in $K_{i}$ (this is a variant of Robinson's classical definition of $\mathbb{Z}_p$ in $\mathbb{Q}_p$, see, e.g., \cite[Corollary 2]{Hon}) that $(K_{1},v_{1})$ and $(K_{2},v_{2})$ are elementarily equivalent, as required. \end{proof} \begin{remark} In fact, B\'elair proves relative quantifier elimination for the valued field sort of an unramified henselian valued field in the $\omega$-sorted language $\mathcal{L}_{\mathrm{co}_{\omega}}$ down to the sorts for the residue rings $\mathcal{O}/\mathfrak{m}^n$ (cf.~\cite[Th\'eor\`{e}me 5.1]{Bel99}). Applying this quantifier elimination, Theorem \ref{thm:completeness} can also be deduced from Theorem \ref{thm:completeness_nonstrict}. 
\end{remark} In particular, Theorem \ref{thm:completeness} immediately implies the following Ax--Kochen/Ershov-type result, sometimes referred to as an AKE$_\equiv$-principle: \begin{corollary}[Ax--Kochen/Ershov principle for unramified henselian valued fields] \label{cor:AKE} Let $(K,v)$ and $(L,w)$ be two unramified henselian valued fields. Then $$ \underbrace{Kv \equiv Lw}_{\textrm{in }\mathfrak{L}_\mathrm{ring}} \textrm{ and } \underbrace{vK \equiv wL}_{\textrm{in }\mathfrak{L}_\mathrm{oag}} \, \Longleftrightarrow \, \underbrace{(K,v)\equiv (L,w)}_{\textrm{in }\mathfrak{L}_\mathrm{vf}}. $$ \end{corollary} \begin{remark} Corollary \ref{cor:AKE} above is essentially claimed by B\'elair in \cite[Corollaire 5.2]{Bel99}. However, B\'{e}lair's proof only goes through in the case of a perfect residue field, since it uses the rings of Witt vectors. \end{remark} The Ax--Kochen/Ershov principle above immediately gives an axiomatisation of the complete theories of unramified henselian valued fields, as follows. \begin{corollary}\label{cor:axiomatisation} Let $(K,v)$ be an unramified henselian valued field of mixed characteristic. The complete $\mathfrak{L}_{\mathrm{vf}}$-theory of $(K,v)$ is axiomatised by \begin{enumerate} \item $(K,v)$ is a henselian valued field of mixed characteristic $(0,p)$, \item the value group is elementarily equivalent to $vK$ and $v(p)$ is its minimum positive element, and \item the residue field is elementarily equivalent to $Kv$. \end{enumerate} \end{corollary} In particular, we get an axiomatisation of the $\mathfrak{L}_\mathrm{vf}$-theory of $(C(k),v)$, relative to the $\mathfrak{L}_\mathrm{ring}$-theory of the residue field $k$. \section{Relative model completeness} In analogy to the case of unramified henselian fields with perfect residue field (cf.~\cite[Theorem 7.2]{vdD14}), we also get an AKE$_\preceq$-principle for the case of arbitrary residue fields.
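For orientation in the coarsening arguments used above and in the next sections, the finest proper coarsening admits the following explicit description (a standard computation, recorded here for the reader's convenience). If $v(p)$ is the minimum positive element of $vK$, then $\Delta=\mathbb{Z}\cdot v(p)$ is the smallest nonzero convex subgroup of $vK$, and the associated coarsening $w$ satisfies
\[
\mathcal{O}_{w}=\mathcal{O}_{v}[1/p],\qquad wK=vK/\Delta,
\]
while the valuation $\bar{v}$ induced by $v$ on the residue field $Kw$ has value group $\Delta\cong\mathbb{Z}$ and residue field $(Kw)\bar{v}=Kv$. In particular, $(K,w)$ is henselian of equicharacteristic zero, and $\mathcal{O}_{\bar{v}}$ is a candidate for a strict Cohen ring with residue field $Kv$.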
To prove relative completeness, or in other words an AKE$_\equiv$-principle for unramified henselian valued fields, it is sufficient to know that Cohen rings are unique up to isomorphism. However, in order to prove the AKE$_\preceq$-principle for unramified henselian valued fields, we need to apply the Relative Structure Theorem (Corollary~\ref{cor:CoHo_relative}). We first state the non-strict version: \begin{proposition}[Relative model completeness, non-strict version]\label{prp:MC_nonstrict} Given $A, B \models T_\mathrm{pc}(k,n)$ with $A \subseteq B$ such that the induced embedding of residue fields $k_A \subseteq k_B$ is an elementary embedding (in $\mathfrak{L}_\mathrm{ring}$), we have $$A \preceq B.$$ \end{proposition} \begin{proof} Let $B_{1},B_{2},A\models T_{\mathrm{pc}}(k,n)$ with $A\subseteq B_{1},B_{2}$ two extensions. We suppose that the induced extensions $k_{A}\preceq k_{B_{i}}$, for $i=1,2$, are elementary. We claim that $B_1$ and $B_2$ are elementarily equivalent over $A$, symbolically $B_{1}\equiv_{A}B_{2}$. By the Keisler--Shelah Theorem (\cite{She71}), replacing $B_{1}$ and $B_{2}$ with suitable ultrapowers if necessary, we may assume that there is an $\mathfrak{L}_{\mathrm{ring}}$-isomorphism $\varphi_{k}:k_{B_{1}}\longrightarrow k_{B_{2}}$ that fixes $k_{A}$ pointwise. By Corollary~\ref{cor:CoHo_relative}, there is an isomorphism $\varphi:(B_{1},k_{B_{1}})\longrightarrow(B_{2},k_{B_{2}})$ that induces $\varphi_{k}$ and fixes $A$ pointwise. In particular, $B_{1}$ and $B_{2}$ are elementarily equivalent over $A$. This proves the claim. We return to the setting of an extension $A\subseteq B$ of models of $T_{\mathrm{pc}}(k,n)$ for which $k_{A}\preceq k_{B}$ is elementary. From the claim it follows that $A\equiv_{A}B$, equivalently the extension $A\subseteq B$ is elementary.
\end{proof} For the strict version of the relative model completeness theorem, we combine the Relative Structure Theorem (Corollary \ref{cor:CoHo_relative}) with the coarsening method and well-known results from the equicharacteristic zero world. \begin{theorem}[Relative model completeness]\label{thm:MC_strict} Given an extension $(K,v) \subseteq (L,w)$ of unramified henselian valued fields such that the induced embeddings of residue fields $Kv \subseteq Lw$ and value groups $vK \subseteq wL$ are elementary (in $\mathfrak{L}_\mathrm{ring}$ and $\mathfrak{L}_\mathrm{oag}$ respectively), we have $$(K,v)\preceq (L,w).$$ \end{theorem} \begin{proof} Let $(K_{i},v_{i})\models T_{\mathrm{ur}}(k,\rmGamma)$, for $i=0,1,2$, be such that $(K_{0},v_{0})\subseteq(K_{1},v_{1}),(K_{2},v_{2})$ are two extensions of valued fields. We suppose that the residue field extensions $K_{0}v_{0}\preceq K_{i}v_{i}$, and the value group extensions $v_{0}K_{0}\preceq v_{i}K_{i}$, both for $i=1,2$, are elementary. We claim that $(K_{1},v_{1})$ and $(K_{2},v_{2})$ are elementarily equivalent over $(K_{0},v_{0})$, symbolically $(K_{1},v_{1})\equiv_{(K_{0},v_{0})}(K_{2},v_{2})$. By the Keisler--Shelah Theorem (\cite{She71}), replacing each valued field with a suitable ultrapower if necessary, we may assume that all three valued fields are $\aleph_{1}$-saturated and that there is an isomorphism $\varphi_{k}:K_{1}v_{1}\longrightarrow K_{2}v_{2}$ that fixes $K_{0}v_{0}$ pointwise, and an isomorphism $\varphi_{\rmGamma}:v_{1}K_{1}\longrightarrow v_{2}K_{2}$ that fixes $v_{0}K_{0}$ pointwise. For $i=0,1,2$, let $\hat{v}_{i}$ be the finest proper coarsening of $v_{i}$, and let $\bar{v}_{i}$ be the valuation induced on $K_{i}\hat{v}_{i}$ by $v_{i}$. By $\aleph_{1}$-saturation, all three $(K_{i}\hat{v}_{i},\bar{v}_{i})$ are spherically complete, and hence the valuation rings $\mathcal{O}_{\bar{v}_{i}}$ are strict Cohen rings.
Note also that these coarsenings are compatible in the sense that, for both $i=1,2$, we have extensions $(K_{0},\hat{v}_{0})\subseteq(K_{i},\hat{v}_{i})$ and $(K_{0}\hat{v}_{0},\bar{v}_{0})\subseteq(K_{i}\hat{v}_{i},\bar{v}_{i})$. Moreover $\varphi_{\rmGamma}$ induces an isomorphism $\hat{\varphi}_{\rmGamma}:\hat{v}_{1}K_{1}\longrightarrow\hat{v}_{2}K_{2}$ that fixes $\hat{v}_{0}K_{0}$ pointwise, since $\varphi_{\rmGamma}$ restricts to an isomorphism between the convex subgroups $\langle v_{1}(p)\rangle$ and $\langle v_{2}(p)\rangle$, and since $\hat{v}_{i}K_{i}$ is the quotient of $v_{i}K_{i}$ by $\langle v_{i}(p)\rangle$. By Corollary~\ref{cor:CoHo_relative}, there is an isomorphism $\varphi:(K_{1}\hat{v}_{1},\bar{v}_{1})\longrightarrow(K_{2}\hat{v}_{2},\bar{v}_{2})$ that induces $\varphi_{k}$ and fixes $K_{0}\hat{v}_{0}$ pointwise. In particular $K_{1}\hat{v}_{1}$ and $K_{2}\hat{v}_{2}$ are elementarily equivalent over $K_{0}\hat{v}_{0}$. Therefore $(K_{1},\hat{v}_{1})$ and $(K_{2},\hat{v}_{2})$ are two henselian valued fields of equicharacteristic zero, both extending $(K_{0},\hat{v}_{0})$, with value groups isomorphic over $\hat{v}_{0}K_{0}$ and residue fields elementarily equivalent over $K_{0}\hat{v}_{0}$. It follows from \cite[Theorem 7.1]{FVK} that $(K_{1},\hat{v}_{1})$ and $(K_{2},\hat{v}_{2})$ are elementarily equivalent over $(K_{0},\hat{v}_{0})$, i.e.~$(K_{1},\hat{v}_{1})\equiv_{(K_{0},\hat{v}_{0})}(K_{2},\hat{v}_{2})$. Since the valuation rings of all three valuations $v_{i}$ are $\emptyset$-definable by the same $\mathfrak{L}_{\mathrm{ring}}$-formula in each field $K_{i}$ (again, this is a variant of Robinson's classical definition of $\mathbb{Z}_p$ in $\mathbb{Q}_p$, see, e.g., \cite[Corollary 2]{Hon}), it follows that $(K_{1},v_{1})$ and $(K_{2},v_{2})$ are elementarily equivalent over $(K_{0},v_{0})$, i.e.~$(K_{1},v_{1})\equiv_{(K_{0},v_{0})}(K_{2},v_{2})$. This proves the claim. 
We return to the setting of an extension $(K,v)\subseteq(L,w)$ of models of $T_{\mathrm{ur}}$ for which $Kv\preceq Lw$ and $vK\preceq wL$ are elementary. From the claim it follows that $(K,v)\equiv_{(K,v)}(L,w)$, equivalently the extension $(K,v)\subseteq(L,w)$ is elementary. \end{proof} \begin{remark} In fact, the relative model completeness for non-strict Cohen rings (Proposition~\ref{prp:MC_nonstrict}) can also be combined with a result by B\'elair (\cite[Corollaire 5.2(2)]{Bel99}) to show model completeness, albeit in a slightly different ($\omega$-sorted) language. \end{remark} As a consequence, we get the following embedding version of the Ax--Kochen/Ershov result: \begin{corollary}\label{cor:AKE-E} Let $(K,v)\subseteq (L,w)$ be two unramified henselian valued fields. Then, we have $$ \underbrace{Kv \preceq Lw}_{\textrm{in }\mathfrak{L}_\mathrm{ring}} \textrm{ and } \underbrace{vK \preceq wL}_{\textrm{in }\mathfrak{L}_\mathrm{oag}} \, \Longleftrightarrow \, \underbrace{(K,v) \preceq (L,w)}_{\textrm{in }\mathfrak{L}_\mathrm{vf}}. $$ \end{corollary} For Cohen fields, or more generally unramified henselian valued fields taking values in a $\mathbb{Z}$-group, our result simplifies to: \begin{corollary} Let $(K,v)\subseteq (L,w)$ be two unramified henselian valued fields with value groups $vK\equiv wL\equiv\mathbb{Z}$. Then, we have $$ \underbrace{Kv \preceq Lw}_{\textrm{in }\mathfrak{L}_\mathrm{ring}} \, \Longleftrightarrow \, \underbrace{(K,v) \preceq (L,w)}_{\textrm{in }\mathfrak{L}_\mathrm{vf}}. $$ \end{corollary} \section{Embedding lemma and relative existential completeness} The aim of this section is to prove an embedding lemma for unramified henselian valued fields. This will be applied to prove relative existential completeness of unramified henselian valued fields of fixed finite degree of imperfection as well as to show stable embeddedness of residue field and value group in the subsequent section.
The proof of the embedding lemma (i.e.~the next proposition) is a refined version of the proofs given in \cite[Lemmas 5.6 and 6.4]{FVK}. In Kuhlmann's terminology, we show that the class of $\aleph_1$-saturated models of $T_\mathrm{ur}$ satisfies an appropriate version of the Relative Embedding Property (cf.~\cite[p.~31]{FVK}). \begin{proposition}[Embedding Lemma] \label{prp:emb} Let $(L_{1},v_{1})$ and $(L_{2},v_{2})$ be extensions of $(K,v)$, and assume that all three are $\aleph_1$-saturated models of $T_\mathrm{ur}$. Suppose that $L_{1}v_{1}/Kv$ is separable and $v_{1}L_{1}/vK$ is torsion-free. Moreover, assume that $(L_{2},v_{2})$ is $|L_{1}|^{+}$-saturated and that there are embeddings $\varphi_{k}:L_{1}v_{1}\longrightarrow L_{2}v_{2}$ over $Kv$ and $\varphi_{\rmGamma}:v_{1}L_{1}\longrightarrow v_{2}L_{2}$ over $vK$. Suppose that $L_{2}v_{2}/\varphi_{k}(L_{1}v_{1})$ is separable. Then, there is an embedding $\varphi:(L_1,v_1) \longrightarrow (L_2,v_2)$ over $K$ that induces $\varphi_k$ and $\varphi_\rmGamma$ on residue field and value group respectively. Moreover, if $\varphi_{k}$ and $\varphi_{\rmGamma}$ are elementary embeddings, then any such embedding $\varphi$ is elementary. \end{proposition} \begin{proof} Since $(K,v)$ is henselian and unramified, it is defectless; therefore our assumptions on value group and residue field imply that $K$ is relatively algebraically closed in $L_{1}$. Let $w$ denote the finest proper coarsening of $v$ on $K$, and, likewise, let $w_i$ denote the finest proper coarsening of $v_i$ on $L_i$ (for $i=1,2$). Note that $w$ is the restriction of each $w_{i}$ to $K$. By our saturation assumption, the valued fields $(Kw, \bar{v})$ and $(L_iw_i, \bar{v}_{i})$ are all Cohen fields. The inclusion $(K,w)\subseteq(L_{1},w_1)$ gives rise to an inclusion $Kw \subseteq L_1w_1$.
Moreover, $\varphi_k$ induces a (not necessarily unique) embedding $$\psi_k: (L_1w_1, \bar{v}_{1}) \longrightarrow (L_2w_2, \bar{v}_{2})$$ over $(Kw,\bar{v})$ by Cohen's Homomorphism theorem (Theorem \ref{thm:CoHo}). We now fix one such $\psi_k$. We note that, in order to show that an embedding $\varphi$ of a subfield of $(L_1,v_1)$ into $(L_2,v_2)$ induces $\varphi_k$, it suffices to show that it induces $\psi_k$: \begin{claim} \label{psi} For any $(F,w_1,v_{1})\subseteq(L_{1},w_{1},v_{1})$ extending $(K,w,v)$, if $\varphi:(F,w_{1})\longrightarrow(L_{2},w_{2})$ is an embedding over $K$ that induces $\psi_{k}$, then $\varphi$ is also an embedding $\varphi:(F,v_{1})\longrightarrow(L_{2},v_{2})$ that induces $\varphi_{k}$. \end{claim} \begin{claimproof} Note that $\psi_{k}$ is an embedding of valued fields $(L_{1}w_{1},\bar{v}_{1})\longrightarrow(L_{2}w_{2},\bar{v}_{2})$. For $a\in F$, we have \begin{align*} v_{1}(a)\geq0&\Leftrightarrow w_{1}(a)\geq0\;\textrm{and}\;\bar{v}_{1}(aw_{1})\geq0\\ &\Leftrightarrow w_{2}(\varphi(a))\geq0\;\textrm{and}\;\bar{v}_{2}(\psi_{k}(aw_{1}))\geq0\\ &\Leftrightarrow w_{2}(\varphi(a))\geq0\;\textrm{and}\;\bar{v}_{2}((\varphi(a))w_{2})\geq0\\ &\Leftrightarrow v_{2}(\varphi(a))\geq0. \end{align*} Since $\varphi$ induces $\psi_{k}$, and $\psi_{k}$ is an embedding $(Fw_{1},\bar{v}_{1})\longrightarrow(L_{2}w_{2},\bar{v}_{2})$ that induces $\varphi_{k}$, it follows that $\varphi$ induces $\varphi_{k}$. \end{claimproof} We now adapt the proof of \cite[Lemmas 5.6 and 6.4]{FVK} carefully to our setting, in order to construct an embedding $\varphi: (L_1,v_1) \longrightarrow (L_2,v_2)$ over $K$ that induces $\varphi_k$ and $\varphi_\rmGamma$. We also use $\varphi$ to denote the restriction of $\varphi$ to any subfield of $L_1$. Let $\mathcal{T}$ be a standard valuation transcendence basis of $(L_{1},w_{1})/(K,w)$, i.e.
$$\mathcal{T} =\{x_{i}, y_{j}\mid i\in I, j\in J \}$$ such that the set of values $\{w_1(x_i)\}_{i\in I}$ is a maximal rationally independent set in $w_1L_1$ over $wK$ and such that the set of residues $\{y_jw_1\}_{j \in J}$ is a transcendence base of $L_1w_1$ over $Kw$. Let $K'$ be the relative algebraic closure of $K(\mathcal{T})$ in $L_1$, and, by an abuse of notation, let $w_1$ also denote the restriction of $w_1$ to $K'$ and its subfields. Note that $(K',w_{1})/(K,w)$ is without transcendence defect, by \cite[Corollary 2.4]{FVK}. See \cite[p.~4]{FVK} for the definition of `without transcendence defect'. {\bf Step 1: Extending to valued function fields without transcendence defect.} Let $L$ be a subfield of $K'$ that is a finitely generated extension of $K$. Since $K$ is relatively algebraically closed in $L$, $(L,w_{1})/(K,w)$ is a valued function field without transcendence defect. By \cite[Theorem 1.9]{FVK}, $(L,w_{1})/(K,w)$ is strongly inertially generated, i.e.~there is a transcendence basis $\mathcal{T}_L=\{x_{i},y_{j}\mid i\in I_L,j\in J_L\}\subseteq L$ such that \begin{enumerate} \item $w_{1}L=w_{1}K(\mathcal{T}_L)=wK\oplus\bigoplus_{i}\mathbb{Z}\cdot w_{1}(x_{i})$, \item $\{y_{j}w_{1}\}_{j \in J_L}$ is a separating transcendence base of $Lw_{1}/Kw$, and \item there is an element $a\in L^{h}$ (the henselization of $L$ with respect to $w_{1}$) such that $L^{h}=K(\mathcal{T}_{L})^{h}(a)$, $w_{1}(a)=0$, and $\big(K(\mathcal{T}_{L})w_{1}\big)(aw_{1})/K(\mathcal{T}_{L})w_{1}$ is a separable extension of degree $[K(\mathcal{T}_{L})^{h}(a):K(\mathcal{T}_{L})^{h}]$. \end{enumerate} Note that in our case, the separability in {\bf(ii)} and {\bf(iii)} is automatic, since $w_{1}$ is of residue characteristic zero. We now explore these three properties, one after the other, in order to construct an embedding $\varphi:(L,v_{1})\longrightarrow(L_{2},v_{2})$ over $K$ that induces $\varphi_{k}$ and $\varphi_{\rmGamma}$.
\begin{claim} $v_{1}L=vK\oplus\bigoplus_{i}\mathbb{Z}\cdot v_{1}(x_{i})$. \end{claim} \begin{claimproof} Suppose there are $n_{i}\in\mathbb{Q}$ and $\alpha\in vK$ such that $\alpha+\sum_{i}n_{i}\cdot v_{1}(x_{i})=0$. Then $(\alpha+\mathbb{Z})+\sum_{i}n_{i}\cdot w_{1}(x_{i})=0+\mathbb{Z}$. By $\mathbb{Q}$-linear independence of the $w_{1}(x_{i})$ over $wK$ in $w_{1}L$, all the $n_{i}$ are zero. Therefore the $v_{1}(x_{i})$ are $\mathbb{Q}$-linearly independent from $vK$ in $v_{1}L$. Thus $vK\oplus\bigoplus_{i}\mathbb{Z}\cdot v_{1}(x_{i})\leq v_{1}L$. Let $\gamma\in v_{1}L$. There exist $n_{i}\in\mathbb{Z}$ and $\alpha\in vK$ such that $(\gamma+\mathbb{Z})=(\alpha+\mathbb{Z})+\sum_{i}n_{i}w_{1}(x_{i})$. Therefore for some $\beta\in vK$ we have $\gamma=\beta+\sum_{i}n_{i}v_{1}(x_{i})$. \end{claimproof} Next, we choose $\mathcal{T}_{L}'=\{x_{i}',y_{j}'\mid i\in I_{L},j\in J_{L}\}\subseteq L_{2}$ such that \begin{enumerate} \item $v_{2}(x_{i}')=\varphi_{\rmGamma}(v_{1}(x_{i}))$, for each $i\in I_{L}$, and \item $y_{j}'w_{2}=\psi_{k}(y_{j}w_{1})$, for each $j\in J_{L}$. \end{enumerate} Writing $\psi_{\rmGamma}:w_{1}L_{1}\longrightarrow w_{2}L_{2}$ for the embedding induced by $\varphi_{\rmGamma}$ (which exists since $\varphi_{\rmGamma}$ maps the convex subgroup generated by $v_{1}(p)$ into the one generated by $v_{2}(p)$), we immediately get $w_{2}(x_{i}')=\psi_{\rmGamma}(w_{1}(x_{i}))$, for each $i\in I_{L}$. Exactly as in the proof of \cite[Lemma 5.6]{FVK}, there is an isomorphism $\varphi:(K(\mathcal{T}_{L}),w_{1})\longrightarrow(K(\mathcal{T}_{L}'),w_{2})$ that maps \begin{align*} x_{i}&\longmapsto x_{i}'\\ y_{j}&\longmapsto y_{j}', \end{align*} and which therefore induces $\psi_{k}$ and $\psi_{\rmGamma}$. \begin{claim} $\varphi$ is also an isomorphism $\varphi:(K(\mathcal{T}_{L}),v_{1})\longrightarrow(K(\mathcal{T}_{L}'),v_{2})\subseteq(L_{2},v_{2})$ which induces $\varphi_{k}$ and $\varphi_{\rmGamma}$. \end{claim} \begin{claimproof} Let $f\in K[\mathcal{T}_{L}]$, written $f=\sum_{k}c_{k}\prod_{i}x_{i}^{\mu_{k,i}}\prod_{j}y_{j}^{\nu_{k,j}}$, for $c_{k}\in K$. We already know that $\varphi$ is an isomorphism of fields, inducing both $\psi_{k}$ and $\psi_{\rmGamma}$.
By Claim \ref{psi}, $\varphi$ is moreover an isomorphism of valued fields with respect to the $v_i$'s that induces $\varphi_{k}$. We now check that it induces $\varphi_{\rmGamma}$, first working in the case $f\in K[x_{i}\mid i\in I_L]$. \begin{align*} v_{2}(\varphi(f))&=\min_{k}\{v_{2}(c_{k})+\sum_{i}\mu_{k,i}v_{2}x_{i}'\}\\ &=\min_{k}\{v_{2}(c_{k})+\sum_{i}\mu_{k,i}\varphi_{\rmGamma}(v_{1}x_{i})\}\\ &=\varphi_{\rmGamma}(\min_{k}\{v_{1}(c_{k})+\sum_{i}\mu_{k,i}v_{1}x_{i}\})\\ &=\varphi_{\rmGamma}(v_{1}(f)). \end{align*} Since the value group of $K[x_{i}\mid i\in I_L]$ with respect to $v_{1}$ is already $vK\oplus\bigoplus_{i}\mathbb{Z}\cdot v_{1}x_{i}=v_{1}K(\mathcal{T}_{L})$, indeed $\varphi$ induces $\varphi_{\rmGamma}$. \end{claimproof} By the universal property of henselizations, $\varphi$ extends to an isomorphism $$\varphi: (K(\mathcal{T}_{L})^{h},w_{1})\longrightarrow(K(\mathcal{T}_{L}')^{h},w_{2}) \subseteq (L_2,w_2)$$ where the henselizations are taken with respect to $w_i$ (for $i \in \{1,2\}$). Note that $\varphi$ still induces both $\psi_{k}$ and $\psi_{\rmGamma}$ since henselizations are immediate extensions. Thus, by Claim \ref{psi}, $\varphi$ is also an isomorphism $(K(\mathcal{T}_{L})^{h},v_{1})\longrightarrow (K(\mathcal{T}_{L}')^{h},v_{2})$. Since the henselization $K(\mathcal{T}_{L})^{h}$ of $K(\mathcal{T}_{L})$ with respect to $w_1$ is a subfield of the henselization with respect to $v_1$, which is an immediate extension of $(K(\mathcal{T}_{L}),v_{1})$, $\varphi$ induces $\varphi_{k}$ and $\varphi_{\rmGamma}$. Recall that by property {\bf(iii)} of strong inertial generation, there exists an element $a\in L^{h}$ such that $L^{h}=K(\mathcal{T}_{L})^{h}(a)$, $w_{1}(a)=0$, and $(K(\mathcal{T}_{L})^{h}w_{1})(aw_{1})/K(\mathcal{T}_{L})^{h}w_{1}$ is a separable extension of degree $[K(\mathcal{T}_{L})^{h}(a):K(\mathcal{T}_{L})^{h}]$. 
Let $f\in\mathcal{O}_{(K(\mathcal{T}_{L})^{h},w_{1})}[X]$ be such that $fw_{1}$ is the minimal polynomial of $aw_{1}$ over $K(\mathcal{T}_{L})^{h}w_{1}$. By henselianity, there exists a unique root $a'\in L_{2}$ of $\varphi(f)$ such that $a'w_{2}=\psi_{k}(aw_{1})$. \begin{claim} $\varphi$ extends to an isomorphism $\varphi:(K(\mathcal{T}_{L})^{h}(a),v_{1})\longrightarrow (K(\mathcal{T}_{L}')^{h}(a'),v_{2})\subseteq (L_{2},v_{2})$ which maps $a$ to $a'$ and induces $\varphi_{k}$ and $\varphi_{\rmGamma}$. \end{claim} \begin{claimproof} Mapping $a\longmapsto a'$ allows us to extend $\varphi$ to an isomorphism of fields $\varphi:K(\mathcal{T}_{L})^{h}(a)\longrightarrow K(\mathcal{T}_{L}')^{h}(a')$. By henselianity of $(K(\mathcal{T}_{L})^{h},w_{1})$, $\varphi$ is an isomorphism of valued fields $(K(\mathcal{T}_{L})^{h}(a),w_{1})\longrightarrow(K(\mathcal{T}_{L}')^{h}(a'),w_{2})$. We now argue that $\varphi$ induces $\psi_{k}$. Set $n=[K(\mathcal{T}_{L})^{h}(a):K(\mathcal{T}_{L})^{h}]$. Since $1,aw_{1},\ldots,a^{n-1}w_{1}$ are linearly independent over $K(\mathcal{T}_{L})^{h}w_{1}$, we have $w_{1}\big(\sum_{i<n}c_{i}a^{i}\big)=\min_{i} w_{1}(c_{i})$, for all $c_{i}\in K(\mathcal{T}_{L})^{h}$. In particular, we have $$\mathcal{O}_{(K(\mathcal{T}_{L})^{h}(a),w_{1})} =\Big\{\sum_{i<n}c_{i}a^{i}\;\Big|\; c_{i}\in\mathcal{O}_{(K(\mathcal{T}_{L})^{h},w_{1})}\Big\}.$$ Let $g\in\mathcal{O}_{(K(\mathcal{T}_{L})^{h},w_{1})}[X]$. Then $\varphi$ induces $\psi_{k}$ since we have $$(\varphi(g(a)))w_{2}=(\varphi(g)(a'))w_{2}=(\varphi(g)w_{2})(a'w_{2})=(\psi_{k}(gw_{1}))(\psi_k(aw_{1}))=\psi_k((g(a))w_1).$$ Thus, by Claim \ref{psi}, $\varphi: (K(\mathcal{T}_{L})^{h}(a),v_{1})\longrightarrow (K(\mathcal{T}_{L}')^{h}(a'),v_{2})$ is an isomorphism that induces $\varphi_k$. Finally, the value group of $(K(\mathcal{T}_{L})^{h}(a),v_{1})$ is torsion over the value group of $(K(\mathcal{T}_{L})^{h},v_{1})$: for each $\gamma\in v_{1}K(\mathcal{T}_{L})^{h}(a)$ we have $n\gamma\in v_{1}K(\mathcal{T}_{L})^{h}$.
Since the restriction of $\varphi$ to $K(\mathcal{T}_{L})^{h}$ induces $\varphi_{\rmGamma}$ on $v_{1}K(\mathcal{T}_{L})^{h}$, and multiplication by $n$ in ordered abelian groups is injective, we conclude that $\varphi$ induces $\varphi_{\rmGamma}$. \end{claimproof} This finishes the construction of an embedding $\varphi:(L,v_{1})\longrightarrow(L_{2},v_{2})$ that induces $\varphi_{k}$ and $\varphi_{\rmGamma}$. Therefore, we have completed Step 1. By saturation, we obtain an embedding $$\varphi:(K',v_{1})\longrightarrow(L_{2},v_{2})$$ over $K$ that induces $\varphi_{k}$ and $\varphi_{\rmGamma}$. More precisely, we realise the finitely consistent type $$\mathrm{tp}_{\mathrm{qf}}((K',v_1)/(K,v))\cup \{v(x_{c})=\varphi_{\rmGamma}(v_{1}(c))\mid c\in K'\}\cup\{x_{c}v=\varphi_{k}(cv_{1})\mid c\in \mathcal{O}_{(K',v_{1})}\},$$ which is a type over $|K'|$-many parameters. {\bf Step 2: Extending to immediate function fields.} Our present aim is to extend $\varphi$ to an embedding $(L_{1},v_{1})\longrightarrow(L_{2},v_{2})$ over $K$ that induces $\varphi_{k}$ and $\varphi_{\rmGamma}$. Since $K'$ contains a standard valuation transcendence basis for $(L_{1},w_{1})/(K,w)$, we have that $w_{1}L_{1}/w_{1}K'$ is torsion and $L_{1}w_{1}/K'w_{1}$ is algebraic. Since $K'$ is relatively algebraically closed in $L_{1}$, it follows that $(K',v_{1})$ and $(K',w_1)$ are henselian. Because $\mathrm{char}(K'w_1)=0$, we conclude that the extension $(L_1,w_1)/(K',w_1)$ is immediate (for example see \cite[Lemma 3.7]{FVK}). Thus, so is $(L_1,v_1)/(K',v_1)$. Therefore any extension of $\varphi$ to an embedding $(L_1,v_1) \longrightarrow (L_2,v_2)$ automatically induces $\varphi_k$ and $\varphi_\rmGamma$. Consider a finitely generated subextension $F/K'$ of $L_{1}/K'$; then $(F,v_{1})/(K',v_{1})$ is an immediate function field. In fact, proceeding iteratively, it suffices to find a way to extend $\varphi$ to immediate function fields of transcendence degree $1$. 
By \cite[Theorem 2.2]{Kuh19}, such extensions are henselian rational, i.e.~subextensions of the henselization of a simple transcendental and immediate extension of transcendental type. Suppose we have extended $\varphi$ to an embedding $\varphi:(F_{0},v_{1})\longrightarrow(L_{2},v_{2})$ over $K$ that induces $\varphi_{k}$ and $\varphi_{\rmGamma}$, where $(F_{0},v_{1})$ is the henselization of a finitely generated extension of $K'$. Let $b\in L_{1}$ be transcendental over $F_{0}$. Then $b$ is a pseudo-limit of a pseudo-Cauchy sequence $(b_{\rho})_{\rho<\sigma}\subseteq F_{0}$ of transcendental type, with respect to $v_{1}$: as $(F_{0},v_1)$ is henselian and algebraically maximal, all its immediate extensions are of transcendental type. Then $(\varphi(b_{\rho}))_{\rho<\sigma}$ is a pseudo-Cauchy sequence in $L_{2}$, also of transcendental type, now with respect to $v_{2}$. By saturation, this sequence has a pseudo-limit $b'\in L_{2}$. We extend $\varphi$ by mapping $b\longmapsto b'$ to an isomorphism of fields $F_{0}(b)\longrightarrow\varphi(F_{0})(b')$. Automatically, we get that $$\varphi:(F_{0}(b),v_{1})\longrightarrow(\varphi(F_{0})(b'),v_{2})\subseteq (L_2,v_2)$$ is an isomorphism of valued fields since pseudo-Cauchy sequences of transcendental type determine the isomorphism type of the valued field generated by a pseudo-limit (see \cite[Theorem 2]{Kaplansky}). Finally, we extend $\varphi$ to the henselization of $(F_0(b),v_{1})$, which is a subfield of $(L_1,v_1)$. This is accomplished by applying the universal property of the henselization once again. We have now constructed an embedding $\varphi: (L_1,v_1) \longrightarrow (L_2,v_2)$ over $K$ which induces $\varphi_k$ and $\varphi_\rmGamma$. Last but not least, if both $\varphi_k$ and $\varphi_\rmGamma$ are elementary embeddings, the AKE$_{\preceq}$-principle (Corollary \ref{cor:AKE-E}) implies that $\varphi$ is in fact an elementary embedding of unramified henselian valued fields, as desired.
This finishes the proof. \end{proof} \begin{theorem} \label{thm:AKEE} Let $(K,v) \subseteq (L,w)$ be an extension of unramified henselian valued fields such that $Kv$ and $Lw$ have the same finite degree of imperfection. If the induced embeddings of residue field $Kv \subseteq Lw$ and value group $vK \subseteq wL$ are existentially closed (in $\mathfrak{L}_\textrm{ring}$ and $\mathfrak{L}_\textrm{oag}$ respectively), we have $$(K,v) \preceq_\exists (L,w).$$ \end{theorem} \begin{proof} By taking ultrapowers if necessary, we may assume that both $(K,v)$ and $(L,w)$ are $\aleph_{1}$-saturated. Let $(K^{*},v^{*})$ be a $|L|^{+}$-saturated elementary extension of $(K,v)$. Since $Kv\preceq_{\exists}Lw$ and $vK\preceq_{\exists}wL$, there are embeddings $\varphi_{k}:Lw\longrightarrow (Kv)^{*}=K^{*}v^{*}$ over $Kv$ and $\varphi_{\rmGamma}:wL\longrightarrow (vK)^{*}=v^{*}K^{*}$ over $vK$. Let $\bfbeta\subseteq Kv$ be a $p$-basis. Since $Kv\preceq_{\exists}Lw$, $\bfbeta$ is also $p$-independent in $Lw$; in particular $Lw/Kv$ is separable. Since the degree of imperfection of $Lw$ is the same as that of $Kv$, $\bfbeta$ is a $p$-basis of $Lw$. Since $K^{*}v^{*}$ is an elementary extension of $Kv$, $\bfbeta$ is also a $p$-basis of $K^{*}v^{*}$. Therefore $K^{*}v^{*}/\varphi_{k}(Lw)$ is separable. Finally, since $vK\preceq_{\exists}wL$, the group $wL/vK$ is torsion-free. Thus we may apply Proposition~\ref{prp:emb} to obtain an embedding $\varphi:(L,w)\longrightarrow(K^{*},v^{*})$ over $K$ that induces $\varphi_{k}$ and $\varphi_{\rmGamma}$. In particular, $(K,v)\preceq_{\exists}(L,w)$. \end{proof} Note that the reason why we require a fixed finite degree of imperfection in Theorem \ref{thm:AKEE} is that when we embed $Lw$ into an elementary extension of $Kv$ in the proof, we need to make sure that this latter extension is separable in order to apply Proposition \ref{prp:emb}. 
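Recall for context (standard facts, not specific to this text) that a field $k$ of characteristic $p$ has degree of imperfection $m$ precisely when $[k:k^{p}]=p^{m}$, equivalently when every $p$-basis of $k$ has size $m$. For instance,
\[
\big[\mathbb{F}_{p}(t_{1},\dots,t_{m}):\mathbb{F}_{p}(t_{1},\dots,t_{m})^{p}\big]=p^{m},
\]
so $\mathbb{F}_{p}(t_{1},\dots,t_{m})$ has degree of imperfection $m$, with $p$-basis $\{t_{1},\dots,t_{m}\}$, while perfect fields are exactly those of degree of imperfection $0$.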
As a consequence of Theorem \ref{thm:AKEE}, we get the following existential version of the Ax-Kochen/Ershov result: \begin{corollary}\label{cor:AKE-EE} Let $(K,v)\subseteq (L,w)$ be two unramified henselian valued fields such that $Kv$ and $Lw$ have the same finite degree of imperfection. Then, we have $$ \underbrace{Kv \preceq_\exists Lw}_{\textrm{in }\mathfrak{L}_\mathrm{ring}} \textrm{ and } \underbrace{vK \preceq_\exists wL}_{\textrm{in }\mathfrak{L}_\mathrm{oag}} \, \Longleftrightarrow \, \underbrace{(K,v) \preceq_\exists (L,w)}_{\textrm{in }\mathfrak{L}_\mathrm{vf}}. $$ \end{corollary} \section{Stable embeddedness} In this final section, we comment on the structure induced on the residue field and value group in unramified henselian valued fields. \begin{definition} Let $\mathcal{M}$ be a structure and let $P\subseteq M^{k}$ be a definable set. We say that $P$ is {\em stably embedded} if for all formulas $\varphi(x,y)$ and all $b\in M^{|y|}$, $\varphi(M^{k|x|},b)\cap P^{|x|}$ is $P$-definable. \end{definition} In order to be able to consider the residue field and the value group as definable sets in a valued field (respectively, in a non-strict Cohen ring), it is necessary to switch from the language of valued fields (respectively, the language of rings) to a multisorted setting. Neither the results nor the proofs in this section are sensitive to this technicality. An alternative approach is to widen the definition of stable embeddedness to also apply to interpretable sets, as discussed in \cite{Pillay}. In the appendix to \cite{CH}, Chatzidakis and Hrushovski show that stable embeddedness of a type-definable set is equivalent to an automorphism-lifting criterion in saturated models. This shows: \begin{theorem}\label{thm:SEk_non-strict} Let $A\models T_{\mathrm{pc}}(k,n)$. Then $k_{A}$ is stably embedded in $A$.
\end{theorem} \begin{proof} By the Structure Theorem (Corollary \ref{cor:Cohen_structure_2}), each automorphism of the residue field $k_{A}$ lifts to an automorphism of the (non-strict) Cohen ring $A$. In particular this holds for sufficiently saturated models, and thus $k_A$ is stably embedded as a pure field by \cite[Appendix. Lemma 1]{CH}. \end{proof} Our aim is now to show that both residue field and value group are stably embedded in unramified henselian valued fields. Following the approach in \cite[Lemma 3.1]{JS20}, we do this via our embedding lemma. More precisely, stable embeddedness follows from Proposition \ref{prp:emb} in a straightforward manner since the type of an element of the residue field (respectively, the value group) over the residue field (resp., value group) of an elementary submodel determines the type of that element over the submodel: \begin{theorem} \label{thm:SEk} \label{thm:stably.embedded} Let $(K,v)$ be an unramified henselian valued field. Then the value group $vK$ and the residue field $Kv$ are both stably embedded, as a pure ordered abelian group and as a pure field, respectively. \end{theorem} \begin{proof} In light of Proposition \ref{prp:emb}, this is exactly the same argument as the proof of \cite[Lemma 3.1]{JS20}. \end{proof} As an immediate consequence, we get the following generalization of \cite[Theorem 7.3]{vdD14} to the case of imperfect residue fields: \begin{corollary} Let $(K,v)$ be an unramified henselian valued field. Then each subset of $Kv^n$ which is $\mathfrak{L}_\mathrm{vf}$-definable in $(K,v)$ is already $\mathfrak{L}_\mathrm{ring}$-definable in $Kv$. \end{corollary} We now give an example showing that stable embeddedness of the residue field no longer holds for finite extensions of unramified henselian valued fields. A valued field $(K,v)$ of mixed characteristic $(0,p)$ is {\bf finitely ramified} if the interval $(0,v(p)]$ is finite.
The following example shows that an analogue of Theorem \ref{thm:SEk} does not hold for all finitely ramified henselian valued fields. \begin{example} \label{ex:fr} Consider a field $k$ of characteristic $p>2$ with elements $\alpha_{1},\alpha_{2}\in k$ such that \begin{enumerate} \item there is an automorphism $\varphi$ of $k$ which maps $\alpha_{1}$ to $\alpha_{2}$, and \item $\alpha_{1}$ and $\alpha_{2}$ lie in different multiplicative cosets of $k^{\times2}$. \end{enumerate} Let $(K,v)$ be an unramified henselian valued field with residue field $k$. We distinguish a representative $a_{1}\in\res^{-1}(\alpha_{1})$ of $\alpha_{1}$, and let $(L,w)$ be the extension of $(K,v)$ given by adjoining a square-root $b_{1}$ of $pa_{1}$. Then $(L,w)$ is henselian and finitely ramified, and $(L,w)/(K,v)$ is a quadratic extension with ramification degree $e=2$ and inertia degree $f=1$. It is also easy to see that $L$ contains no element $b_{2}$ such that $b_{2}^{2}= pa_{2}$, where $a_{2}$ is any representative of $\alpha_{2}$. Suppose that $\rmPhi$ is an automorphism of $(L,w)$ which lifts $\varphi$. Then \begin{align*} \res(\rmPhi(b_{1})^2/p) = \res(\rmPhi(a_{1})) = \varphi(\res(a_{1})) = \alpha_{2}, \end{align*} showing that $\rmPhi(b_{1})$ is just such a non-existent element $b_{2}$ of $L$, which is a contradiction. Note that there are elementary extensions $(L^{*},w^{*})$ of $(L,w)$ with an automorphism $\varphi^*$ on the residue field $L^*w^*$ such that (i) and (ii) still hold. These can be obtained by adding a symbol for the automorphism $\varphi$ to the language on the residue field before saturating. The argument above in fact shows that $\varphi^*$ cannot be lifted to an automorphism of $L^*$ in any of these models. Therefore the residue field $Lw=k$ is not stably embedded in $(L,w)$.
Finally, note that there are many such fields $k$; for example consider $k=\mathbb{F}_{p}(\alpha_{1},\alpha_{2})$ where $\alpha_{1},\alpha_{2}$ are algebraically independent over $\mathbb{F}_{p}$. \end{example} \end{document}
\begin{document} \title{Rational Shi tableaux and the skew length statistic} \author{Robin Sulzgruber}\thanks{Research supported by the Austrian Science Fund (FWF), grant S50-N15 in the framework of the Special Research Program ``Algorithmic and Enumerative Combinatorics'' (SFB F50).} \date{September~2016} \address{Fakult{\"a}t f{\"u}r Mathematik, Universit{\"a}t Wien, Oskar-Morgenstern-Platz 1, 1090 Wien, Austria} \email{[email protected]} \begin{abstract} We define two refinements of the skew length statistic on simultaneous core partitions. The first one relies on hook lengths and is used to prove a refined version of the theorem stating that the skew length is invariant under conjugation of the core. The second one is equivalent to a generalisation of Shi tableaux to the rational level of Catalan combinatorics. These rational Shi tableaux encode dominant $p$-stable elements in the affine symmetric group. We prove that the rational Shi tableau is injective, that is, each dominant $p$-stable affine permutation is determined uniquely by its Shi tableau. Moreover, we provide a uniform generalisation of rational Shi tableaux to Weyl groups, and conjecture injectivity in the general case. \end{abstract} \thispagestyle{empty} \maketitle \pagenumbering{arabic} \pagestyle{headings} The skew length is a statistic on simultaneous core partitions, which counts the number of certain cells in the Young diagram of the core. It was invented by Armstrong~\cite{AHJ2014} who used it to give a combinatorial formula for the rational $q,t$-Catalan numbers (see also~\cite{GM2013,ALW2014:sweep_maps}) \begin{align*} C_{n,p}(q,t) =\sum_{\kappa\in\mathfrak{C}_{n,p}} q^{\ell(\kappa)} t^{(n-1)(p-1)/2-\op{skl}(\kappa)}. \end{align*} Here $n$ and $p$ are relatively prime, $\mathfrak C_{n,p}$ denotes the set of $n,p$-cores, $\ell(\kappa)$ is the length of $\kappa$ and $\op{skl}(\kappa)$ denotes the skew length (see \refs{conj} for all definitions).
The intriguing property of these polynomials is the apparent symmetry $C_{n,p}(q,t)=C_{n,p}(t,q)$, for which there is no proof except in very special cases. The skew length was further studied in~\cite{Xin2015} and~\cite{CDH2015}. For coprime $n$ and $p$ it was proven that $\op{skl}(\kappa)$ is invariant under conjugation of the core and independent of whether $\kappa$ is viewed as an $n,p$-core or as a $p,n$-core. The first main contribution of our paper is a refinement of the skew length statistic in terms of the multiset of hook lengths of certain cells of the core. We obtain a refined version of the two results mentioned above in \reft{Hab} providing a conceptual explanation for these phenomena. Moreover our method of proof relies on induction and does not use rational Dyck paths. This has the advantage that we do not need to assume that $n$ and $p$ are relatively prime. \reft{Hab} therefore extends to previously untreated cases. Shi tableaux~\cite{FTV2011} encode the dominant regions of the ($m$-extended) Shi arrangement, which are counted by the Fu{\ss}--Catalan numbers. The Shi arrangement and its regions are intimately related to the affine symmetric group. In~\cite{GMV2014} Gorsky, Mazin and Vazirani define the so-called $p$-stable affine permutations, which can be seen as a generalisation of the regions of the Shi arrangement to the rational level of Catalan combinatorics. In \refs{shitab} we define rational Shi tableaux, which encode dominant $p$-stable affine permutations. Our main result concerning rational Shi tableaux is \reft{shiA}, asserting that each dominant $p$-stable affine permutation is uniquely determined by its Shi tableau. \reft{shiA} is especially interesting in view of the fact that the rational Pak--Stanley labelling~\cite{GMV2014} can be obtained by taking the row-sums of the rational Shi tableau. The Shi arrangement can be defined for any irreducible crystallographic root system $\Phi$.
Furthermore, $p$-stable affine Weyl group elements have been considered in~\cite{Thiel2015}. We provide a uniform definition of the rational Shi tableaux of dominant $p$-stable elements of the affine Weyl group of $\Phi$. We conjecture that dominant $p$-stable Weyl group elements are determined uniquely by their Shi tableaux (\refcj{shi}). In \refs{codinv} we tie our previous results together by relating dominant $p$-stable affine permutations to rational Dyck paths and simultaneous cores. This is achieved using the Anderson bijection of~\cite{GMV2014}. We show that the rational Shi tableau of a dominant $p$-stable affine permutation can be computed from the corresponding rational Dyck path as the codinv tableau of the path (\reft{dinvshi}). The codinv statistic on rational Dyck paths, that is, the sum of the entries of the codinv tableau, corresponds naturally to the skew length statistic on cores (\refc{codinv}). Hence we find that rational Shi tableaux can in fact be regarded as another refinement of the skew length statistic. The contents of \refs{codinv} and in particular the connection between rational Shi tableaux and the zeta map on rational Dyck paths~\cite{GM2013,ALW2014:sweep_maps} (\reft{zeta}) are the main tools used in the proof of \reft{shiA}. An extended abstract~\cite{Sul2016} of this paper has appeared in the proceedings of FPSAC~2016 in Vancouver. \section{Skew-length and conjugation}\label{Section:conj} \begin{figure} \caption{A simultaneous core $\kappa\in\mathfrak C_{7,16}$.} \label{Figure:core} \end{figure} Throughout this section $n$ and $p$ are positive integers (not necessarily coprime!). For any positive integer $z$ we set $[z]=\{1,\dots,z\}$. A \emph{partition} $\lambda$ is a weakly decreasing sequence $\lambda_1\geq\lambda_2\geq\dots\geq\lambda_{\ell}>0$ of positive integers. The number of \emph{parts} or \emph{summands} $\lambda_i$ is called the \emph{length} of the partition and is denoted by $\ell(\lambda)$.
The sum $\sum_i{\lambda_i}$ is called the \emph{size} of $\lambda$. A partition is often identified with its \emph{Young diagram} $\{(i,j):i\in[\ell(\lambda)],j\in[\lambda_i]\}$. We call the elements of the Young diagram \emph{cells} of the partition. The \emph{conjugate} of the partition $\lambda$ is the partition $\lambda'$ where $\lambda'_i=\#\{j:\lambda_j\geq i\}$. Equivalently, if we view partitions as Young diagrams then $(j,i)\in\lambda'$ if and only if $(i,j)\in\lambda$. The \emph{hook length} of a cell of $\lambda$ is defined as $h_{\lambda}(i,j)=\lambda_i-j+\lambda'_j-i+1$. A partition $\kappa$ is called an \emph{$n$-core} if no cell of $\kappa$ has hook length $n$. A partition is called an $n,p$-core if it is both an $n$-core and a $p$-core. We denote the set of $n$-core partitions by $\mathfrak C_n$ and the set of simultaneous $n,p$-cores by $\mathfrak C_{n,p}$. Clearly $\lambda\in\mathfrak{C}_n$ if and only if $\lambda'\in\mathfrak{C}_n$. Let $\lambda$ be a partition. Given the hook length $h$ of a cell $x$ in the top row of $\lambda$ we denote by $H^c(h)$ the set of hook lengths of cells in the same column as $x$. Given the hook length $h$ of a cell $x$ in the first column of $\lambda$ we denote by $H^r(h)$ the set of hook lengths of cells in the same row as $x$. If $h$ is not the hook length of a suitable cell, set $H^r(h)=\emptyset$ resp.~$H^c(h)=\emptyset$. For example, in \reff{core} we have $H^c(12)=\{12,5\}$ and $H^r(12)=\{12,10,5,3,1\}$ and $H^r(11)=\emptyset$. A subset $A\subseteq\mathbb{Z}$ is called an \emph{abacus} if there exist integers $a,b\in\mathbb{Z}$ such that $z\in A$ for all $z$ with $z<a$, and $z\notin A$ for all $z$ with $z>b$. We call the elements of $A$ \emph{beads} and the elements of $\mathbb{Z}-A$ \emph{gaps}. An abacus is \emph{normalised} if zero is a gap and there are no negative gaps. An abacus is \emph{balanced} if the number of positive beads equals the number of non-positive gaps.
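The definitions above are easy to make computational. The following Python sketch (our own illustration, not part of the paper) computes conjugates and hook lengths with $1$-based cells $(i,j)$, and tests the $n$-core property on the staircase $(2,1)$, whose hook lengths are $\{3,1,1\}$.

```python
# Illustrative sketch (not from the paper): conjugate, hook lengths,
# and the n-core test for integer partitions, with 1-based cells (i, j).

def conjugate(la):
    """lambda'_i = #{j : lambda_j >= i}."""
    return [sum(1 for part in la if part >= i) for i in range(1, la[0] + 1)] if la else []

def hook(la, i, j):
    """h_lambda(i, j) = lambda_i - j + lambda'_j - i + 1."""
    return la[i - 1] - j + conjugate(la)[j - 1] - i + 1

def hooks(la):
    """Multiset of all hook lengths of la."""
    return [hook(la, i, j) for i in range(1, len(la) + 1)
                           for j in range(1, la[i - 1] + 1)]

def is_core(la, n):
    """la is an n-core iff no cell has hook length n."""
    return n not in hooks(la)

print(sorted(hooks([2, 1])))                   # [1, 1, 3]
print(is_core([2, 1], 2), is_core([2, 1], 3))  # True False
```

Note that the self-conjugate partition $(3,1,1)$ has hook multiset $\{5,2,2,1,1\}$, so it is simultaneously a $3$-core and a $4$-core; it reappears as the running example below.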
An abacus $A$ is \emph{$n$-flush} if $z-n\in A$ for all $z\in A$. The theorem below is a version of the classical result that $n$-cores correspond to abacus diagrams that are $n$-flush (see for example \cite[2.7.13]{JamesKerber}). Let $\kappa\in\mathfrak C_n$ be an $n$-core with maximal hook length $m$, that is, $m$ is the hook length of the top left corner. Define $\alpha(\kappa)=\{\kappa_i+1-i:i\geq1\}$ and $\beta(\kappa)=H^c(m)\cup\{z\in\mathbb{Z}:z<0\}$, where by convention $\kappa_i=0$ when $i>\ell(\kappa)$. Note that $\beta(\kappa)=\{z+\ell(\kappa)-1:z\in\alpha(\kappa)\}$. \begin{mythrm}{coreabacus} The map $\alpha$ is a bijection between $n$-cores and balanced $n$-flush abaci. The map $\beta$ is a bijection between $n$-cores and normalised $n$-flush abaci. \end{mythrm} For our purposes in this section we need the following simple consequence of \reft{coreabacus}. \begin{mylem}{flush} Let $\kappa\in\mathfrak C_n$ be an $n$-core and $z\geq0$. Then $z+n\in H^r(h)$ implies $z\in H^r(h)$, and $z+n\in H^c(h)$ implies $z\in H^c(h)$. \end{mylem} In fact, the set of $n$-cores is characterised by the property that $z+n\in H^c(m)$ implies $z\in H^c(m)$ for all $z\geq0$, where $m$ is the maximal hook length of $\kappa$. Let $\kappa\in\mathfrak C_{n,p}$ be a simultaneous core with maximal hook length $m$. Moreover choose an element $h\in H^c(m)$. We call the row of $\kappa$ whose leftmost cell has hook length $h$ an $n$-row (resp.~$p$-row) if $h+n\notin H^c(m)$ (resp.~$h+p\notin H^c(m)$). Similarly, given $h\in H^r(m)$ we call the column whose top cell has hook length $h$ an $n$-column (resp.~$p$-column) if $h+n\notin H^r(m)$ (resp.~$h+p\notin H^r(m)$). Denote by $H_{n,p}(\kappa)$ the multiset of hook lengths of cells that are contained both in an $n$-row and in a $p$-column. In \reff{core} the leftmost hook lengths of the $7$-rows are $31,15,13,12$ and $4$, and the top hook lengths of the $7$-columns are $31,29,20,12,11$ and $9$. 
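To make the definitions of $n$-rows, $p$-columns and $H_{n,p}(\kappa)$ concrete, here is a small Python sketch (our own illustration, not part of the paper; the partition helpers are repeated so the block is self-contained). On the $3,4$-core $\kappa=(3,1,1)$ it returns the multiset $\{1,2,5\}$ in either order of $n$ and $p$, in accordance with \reft{Hab} below.

```python
# Illustrative sketch (not from the paper): compute the multiset
# H_{n,p}(kappa) of hook lengths of cells lying in an n-row and a p-column.

def conjugate(la):
    return [sum(1 for part in la if part >= i) for i in range(1, la[0] + 1)] if la else []

def hook(la, i, j):
    return la[i - 1] - j + conjugate(la)[j - 1] - i + 1

def H(la, n, p):
    ell = len(la)
    col_hooks = [hook(la, i, 1) for i in range(1, ell + 1)]    # H^c(m)
    row_hooks = [hook(la, 1, j) for j in range(1, la[0] + 1)]  # H^r(m)
    # Row i is an n-row iff its leftmost hook h satisfies h + n not in H^c(m);
    # column j is a p-column iff its top hook h satisfies h + p not in H^r(m).
    n_rows = [i for i in range(1, ell + 1) if hook(la, i, 1) + n not in col_hooks]
    p_cols = [j for j in range(1, la[0] + 1) if hook(la, 1, j) + p not in row_hooks]
    # Collect hook lengths of cells (i, j) that actually lie in the diagram.
    return sorted(hook(la, i, j) for i in n_rows for j in p_cols if j <= la[i - 1])

kappa = [3, 1, 1]          # a 3,4-core with hook lengths {5, 2, 2, 1, 1}
print(H(kappa, 3, 4))      # [1, 2, 5]
print(H(kappa, 4, 3))      # [1, 2, 5]
```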
This section's main result is a surprising symmetry property of the multiset $H_{n,p}(\kappa)$. \begin{mythrm}{Hab} Let $\kappa\in\mathfrak C_{n,p}$ be an $n,p$-core. Then $H_{n,p}(\kappa)=H_{p,n}(\kappa)$. \end{mythrm} \begin{proof} We prove the claim by induction on the size of $\kappa$. Denote by $\bar\kappa$ the partition obtained from $\kappa$ by deleting the first column. Clearly $\bar\kappa\in\mathfrak C_{n,p}$ and we may assume that $H_{n,p}(\bar\kappa)=H_{p,n}(\bar\kappa)$. Let $m$ denote the maximal hook length in $\kappa$. Note that each $n$-row of $\bar\kappa$ is an $n$-row of $\kappa$. The only $p$-column of $\bar\kappa$ that is not a $p$-column of $\kappa$ has maximal hook length $m-p$. Thus there exist sets $A\subseteq H^c(m)$ and $B\subseteq H^c(m-p)$ with $H_{n,p}(\kappa)=(H_{n,p}(\bar\kappa)\cup A)-B$. Similarly $H_{p,n}(\kappa)=(H_{p,n}(\bar\kappa)\cup C)-D$ for some sets $C\subseteq H^c(m)$ and $D\subseteq H^c(m-n)$. It suffices to show that $A-B=C-D$ and $B-A=D-C$. Suppose $z\in A$ but $z\notin B$. Then $z\in H^c(m)$ and $z+n\notin H^c(m)$. On the one hand we obtain $z\notin H^c(m-n)$ and $z\notin D$. It follows that $A\cap D=\emptyset$. On the other hand we obtain $z+n+p\notin H^c(m)$ and therefore $z+n\notin H^c(m-p)$. Since $z\notin B$ this implies $z\notin H^c(m-p)$ and consequently $z+p\notin H^c(m)$. We obtain $z\in C$. Therefore $A-B\subseteq C-D$, and $A-B=C-D$ by symmetry. Conversely suppose $z\in B$ but $z\notin A$. By symmetry we have $B\cap C=\emptyset$ and $z\notin C$. On the other hand $z+n\notin H^c(m-p)$ implies $z+n+p\notin H^c(m)$ and thus $z+p\notin H^c(m-n)$. Moreover $z\in H^c(m-p)$ implies $z+p\in H^c(m)$ and therefore $z\in H^c(m)$. Since $z\notin A$ we obtain $z+n\in H^c(m)$ and $z\in H^c(m-n)$. We conclude that $z\in D$ and the proof is complete. 
\end{proof} The \emph{skew length} $\op{skl}(\kappa)$ of an $n,p$-core was defined by Armstrong, Hanusa and Jones~\cite{AHJ2014} as the number of cells that are contained in an $n$-row of $\kappa$ and have hook length less than $p$. The set $H_{n,p}(\kappa)$ allows for a new equivalent definition. \begin{myprop}{skl} Let $\kappa\in\mathfrak C_{n,p}$ be an $n,p$-core. Then $\op{skl}(\kappa)=\#H_{n,p}(\kappa)$. \end{myprop} \begin{proof} Fix an $n$-row with largest hook length $h$. On the one hand by \refl{flush} a cell $x$ in this row has hook length less than $p$ if and only if $h_{\kappa}(x)$ is the minimal representative of its residue class modulo $p$ in $H^r(h)$. On the other hand $x$ is contained in a $p$-column if and only if $h_{\kappa}(x)$ is the maximal representative of its residue class modulo $p$ in $H^r(h)$. Thus both $\op{skl}(\kappa)$ and $\#H_{n,p}(\kappa)$ count the number of residue classes modulo $p$ with a representative in $H^r(h)$. \end{proof} From \reft{Hab} we immediately recover two results that were recently proven by Guoce Xin~\cite{Xin2015}, and independently by Ceballos, Denton and Hanusa~\cite{CDH2015} when $n$ and $p$ are relatively prime. \begin{mycor}{skewsym1} The skew length of an $n,p$-core is independent of the order of $n$ and $p$. \end{mycor} \begin{mycor}{skewsym2} Let $\kappa\in\mathfrak C_{n,p}$ be an $n,p$-core with conjugate $\kappa'$. Then $\op{skl}(\kappa)=\op{skl}(\kappa')$. \end{mycor} Indeed, with our alternative definition of skew length given in \refp{skl} the statements of Corollaries~\ref{Corollary:skewsym1} and~\ref{Corollary:skewsym2} are identical. We close this section by stating two equivalent conjectural properties of the multiset $H_{n,p}(\kappa)$ that need further investigation. \begin{myconj}{inH} Let $n$ and $p$ be relatively prime, and $\kappa\in\mathfrak{C}_{n,p}$. Then the hook length of each cell of $\kappa$ appears in $H_{n,p}(\kappa)$ with multiplicity at least one. 
\end{myconj} \begin{myconj}{invH} Let $n$ and $p$ be relatively prime, $z\geq0$ and $\kappa\in\mathfrak{C}_{n,p}$. Then $z+n\in H_{n,p}(\kappa)$ implies $z\in H_{n,p}(\kappa)$. \end{myconj} \section{Rational Shi tableaux}\label{Section:shitab} In the beginning of this section we recall some facts about root systems and Weyl groups. For further details we refer the reader to~\cite{Humphreys}. \sk Let $\Phi$ be an irreducible crystallographic root system with ambient space $V$, positive system $\Phi^+$ and simple system $\Delta=\{\sigma_1,\dots,\sigma_r\}$. Any root $\alpha\in\Phi$ can be written as a unique integer linear combination $\alpha=\sum_{i=1}^rc_i\sigma_i$, where all coefficients $c_i$ are non-negative if $\alpha\in\Phi^+$, or all coefficients are non-positive if $\alpha\in-\Phi^+$. We define the \emph{height} of the root $\alpha$ by $\op{ht}(\alpha)=\sum_{i=1}^rc_i$. Thereby $\op{ht}(\alpha)>0$ if and only if $\alpha\in\Phi^+$, and $\op{ht}(\alpha)=1$ if and only if $\alpha\in\Delta$. Moreover there exists a unique \emph{highest root} $\tilde\alpha$ such that $\op{ht}(\tilde\alpha)\geq\op{ht}(\alpha)$ for all $\alpha\in\Phi$. The \emph{Coxeter number} of $\Phi$ can be defined as $h=\op{ht}(\tilde\alpha)+1$. Let $\delta$ be a formal variable. We define the set of \emph{affine roots} as \begin{align*} \widetilde\Phi =\{\alpha+k\delta:\alpha\in\Phi,k\in\mathbb{Z}\}\subseteq V\oplus\mathbb{R}\delta. \end{align*} The \emph{height} of an affine root is given by $\op{ht}(\alpha+k\delta)=\op{ht}(\alpha)+kh$. The sets of \emph{positive} and \emph{simple affine roots} are defined as \begin{align*} \widetilde\Phi^+ =\Phi^+\cup\{\alpha+k\delta\in\widetilde\Phi:\alpha\in\Phi,k>0\} &&\text{and}&& \widetilde\Delta =\Delta\cup\{-\tilde\alpha+\delta\}.
\end{align*} Thus $\alpha+k\delta\in\widetilde\Phi^+$ if and only if $\op{ht}(\alpha+k\delta)>0$, and $\alpha+k\delta\in\widetilde\Delta$ if and only if $\op{ht}(\alpha+k\delta)=1$. The \emph{Coxeter arrangement} $\op{Cox}(\Phi)$ consists of all hyperplanes of the form $H_{\alpha}=\{x\in V:\skal{x,\alpha}=0\}$ for $\alpha\in\Phi$. Its regions, that is, the connected components of $V-\bigcup_{\alpha\in\Phi}H_{\alpha}$, are called \emph{chambers}. We define the \emph{dominant chamber} as \begin{align*} C=\big\{x\in V:\skal{x,\alpha}>0\text{ for all }\alpha\in\Delta\big\}. \end{align*} The \emph{Weyl group} $W$ of $\Phi$ is the group of linear automorphisms of $V$ generated by all reflections in a hyperplane in $\op{Cox}(\Phi)$. The Weyl group acts simply transitively on the chambers. Thus identifying the identity $e\in W$ with the dominant chamber, each chamber corresponds to a unique Weyl group element. The \emph{affine arrangement} $\op{Aff}(\Phi)$ consists of all hyperplanes of the form $H_{\alpha,k}=\{x\in V:\skal{x,\alpha}=k\}$, where $\alpha\in\Phi$ and $k\in\mathbb{Z}$. Its regions are called \emph{alcoves}. We define the \emph{fundamental alcove} as \begin{align*} A_{\circ} =\big\{x\in V:\skal{x,\alpha}>0\text{ for all }\alpha\in\Delta\text{ and }\skal{x,\tilde\alpha}<1\big\}. \end{align*} The \emph{affine Weyl group} $\widetilde{W}$ of $\Phi$ is the group of affine transformations of $V$ that is generated by all reflections in a hyperplane in $\op{Aff}(\Phi)$. The affine Weyl group acts simply transitively on the set of alcoves. By identifying the identity $e\in\widetilde{W}$ with the fundamental alcove, every alcove corresponds to a unique element of $\widetilde{W}$. An element $\omega\in\widetilde{W}$ is called \emph{dominant} if and only if the alcove $\omega(A_{\circ})$ is contained in the dominant chamber $C$.
Given a root $\alpha\in\Phi$ we define its \emph{coroot} as $\alpha^{\vee}=2\alpha/\skal{\alpha,\alpha}$. The \emph{coroot lattice} is the integer span of all coroots $\check{Q}=\sum_{\alpha\in\Phi}\mathbb{Z}\alpha^{\vee}\subseteq V$. For each $q\in\check{Q}$ the translation $t_q:V\to V$ defined by $t_q(x)=x+q$ for all $x\in V$ is an element of the affine Weyl group. Identifying $\check{Q}$ with its translation group we obtain $\widetilde{W}=W\ltimes\check{Q}$. Note that if $\omega\in\widetilde{W}$ is dominant and $\omega=t_qs$, where $q\in\check{Q}$ and $s\in W$, then in particular $q$ lies in the closure of the dominant chamber. Thus $\skal{\alpha,q}\geq0$ for each positive root $\alpha\in\Phi^+$. \sk The Weyl group is a parabolic subgroup of the affine Weyl group. Each element $\omega\in\widetilde{W}$ can be assigned a \emph{length} $\ell(\omega)$ indicating the minimal number of elements of a special set of generators needed to express $\omega$. Each coset $\omega W\in\widetilde{W}/W$ contains a unique representative of minimal length. These representatives are called \emph{Gra{\ss}mannian}. An element $\omega\in\widetilde{W}$ is Gra{\ss}mannian if and only if $\omega^{-1}$ is dominant. Fix a root system of type $A_{n-1}$ by declaring the roots, positive roots and simple roots to be \begin{align*} \Phi=\{e_i-e_j:i,j\in[n],i\neq j\}, \Phi^+=\{e_i-e_j:i,j\in[n],i<j\}, \Delta=\{e_i-e_{i+1}:i\in[n-1]\}. \end{align*} We now describe a combinatorial model for the affine Weyl group of type $A_{n-1}$. A detailed exposition is found in~\cite[Sec.~8.3]{BjoBre}. The \emph{affine symmetric group} $\affS_n$ is the group of bijections $\omega:\mathbb{Z}\to\mathbb{Z}$ such that $\omega(i+n)=\omega(i)+n$ for all $i\in\mathbb{Z}$ and $\omega(1)+\dots+\omega(n)=n(n+1)/2$. Such a bijection $\omega$ is called \emph{affine permutation}.
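As a quick illustration (ours, not the paper's), the defining conditions on an affine permutation can be checked mechanically from its values on $[n]$: bijectivity amounts to the $n$ values hitting every residue class modulo $n$, and the normalisation is the sum condition. The sketch below verifies this for the window $[-2,15,-1,16,-14,10,4]$ with $n=7$.

```python
# Illustrative sketch (not from the paper): an affine permutation is
# determined by its values on [n] via the rule w(i + n) = w(i) + n.

def affine_perm(window):
    """Extend the values [w(1), ..., w(n)] to the bijection w : Z -> Z."""
    n = len(window)
    def w(i):
        a, b = divmod(i - 1, n)        # i = a*n + (b + 1) with b + 1 in [n]
        return window[b] + a * n
    return w

def is_window(window):
    """w is a bijection iff the entries hit every residue class mod n;
    the normalisation is w(1) + ... + w(n) = n(n + 1)/2."""
    n = len(window)
    return (sorted(v % n for v in window) == list(range(n))
            and sum(window) == n * (n + 1) // 2)

win = [-2, 15, -1, 16, -14, 10, 4]     # n = 7, sum = 28 = 7 * 8 / 2
w = affine_perm(win)
print(is_window(win))                  # True
print(w(8) == w(1) + 7)                # True: w(i + n) = w(i) + n
```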
Each affine permutation is uniquely determined by its \emph{window} $[\omega(1),\omega(2),\dots,\omega(n)]$. The group $\affS_n$ has a set of generators called \emph{simple transpositions} given by $s_i=[1,\dots,i+1,i,\dots,n]$ for $i\in[n-1]$ and $s_n=[0,2,\dots,n-1,n+1]$. The affine symmetric group and the set of simple transpositions form a Coxeter system isomorphic to the affine Weyl group of type $A_{n-1}$. The symmetric group $\mathfrak{S}_n$ can be seen as the subgroup of $\affS_n$ consisting of all affine permutations whose window is a permutation of $[n]$. Moreover the symmetric group $\mathfrak{S}_n$ is the parabolic subgroup of $\affS_n$ generated by $s_i$ for $i\in[n-1]$. The \emph{length} $\ell(\omega)$ of an affine permutation $\omega\in\affS_n$ is the minimal number $\ell$ of simple transpositions in an expression of the form $\omega=s_{i_1}s_{i_2}\cdots s_{i_{\ell}}$, where $i_j\in[n]$ for all $j\in[\ell]$. The \emph{Gra{\ss}mannian affine permutations}, that is, the minimal length representatives of the cosets in $\affS_n/\mathfrak{S}_n$, are described in \cite[Prop.~8.3.4]{BjoBre}. \begin{myprop}{grass} An affine permutation $\omega\in\affS_n$ is the minimal length representative of its coset $\omega\mathfrak{S}_n\in\affS_n/\mathfrak{S}_n$ if and only if its window is increasing, that is, \begin{align*} \omega(1)<\omega(2)<\dots<\omega(n). \end{align*} \end{myprop} We now give a combinatorial description of the decomposition of the affine symmetric group as the semidirect product of translations and permutations. The coroot lattice of type $A_{n-1}$ is given by \begin{align*} \check{Q}=\big\{x\in\mathbb{Z}^n:x_1+\dots+x_n=0\big\}.
\end{align*} The affine symmetric group acts on the coroot lattice via the following rules \begin{align*} s_i\cdot x=(x_1,\dots,x_{i+1},x_i,\dots,x_n)&&\text{for }i\in[n-1]\text{ and}&& s_n\cdot x=(x_n+1,x_2,\dots,x_{n-1},x_1-1). \end{align*} Given $\omega\in\affS_n$ write $\omega(i)=a_in+b_i$ for each $i\in\mathbb{Z}$, where $a_i\in\mathbb{Z}$ and $b_i\in[n]$. Set $\sigma(\omega,i)=b_i$, $\mu(\omega,b_i)=a_i$ and $\nu(\omega,i)=-a_i$. \begin{mylem}{sn} \begin{enumerate}[(i)] \item\label{item:sigma} The assignment $i\mapsto\sigma(\omega,i)$ defines a permutation $\sigma(\omega)\in\mathfrak{S}_n$. \item\label{item:munu} The vectors $\mu(\omega)=(\mu(\omega,1),\dots,\mu(\omega,n))$ and $\nu(\omega)=(\nu(\omega,1),\dots,\nu(\omega,n))$ lie in the coroot lattice $\check{Q}$ and for all $i\in[n]$ we have \begin{align*} \mu(s_i\omega)=s_i\cdot\mu(\omega),&& \nu(\omega s_i)=s_i\cdot\nu(\omega). \end{align*} \item\label{item:atzero} We have $\omega\cdot0=\mu(\omega)$ and $\omega^{-1}\cdot0=\nu(\omega)$. \item\label{item:munuinv} We have $\mu(\omega^{-1})=\nu(\omega)$ and $\sigma(\omega^{-1})=\sigma(\omega)^{-1}$. \item\label{item:munusigma} We have $\mu(\omega)=-\sigma(\omega)\cdot\nu(\omega)$. \end{enumerate} \end{mylem} \begin{proof} Claims~\refi{sigma} and~\refi{munusigma} are straightforward. Claims~\refi{atzero} and~\refi{munuinv} follow immediately from~\refi{munu}, which can be shown by induction on the length of $\omega$. Clearly $\mu(e)=\nu(e)=(0,\dots,0)\in\check{Q}$. Set $\sigma=\sigma(\omega)$ and suppose that $\mu(\omega)\in\check{Q}$. Fix $i\in[n-1]$ and choose $j,k\in[n]$ such that $\sigma(j)=i$ and $\sigma(k)=i+1$. Then $s_i\omega(\ell)=\omega(\ell)$ for all $\ell\in[n]-\{j,k\}$ and hence $\mu(s_i\omega,\ell)=\mu(\omega,\ell)$ for all $\ell\in[n]-\{i,i+1\}$. Furthermore, $s_i\omega(j)=s_i(a_jn+i)=a_jn+i+1$ and $s_i\omega(k)=s_i(a_kn+i+1)=a_kn+i$.
It follows that $\mu(s_i\omega,i)=a_k=\mu(\omega,i+1)$ and $\mu(s_i\omega,i+1)=a_j=\mu(\omega,i)$. Next choose $j,k\in[n]$ such that $\sigma(j)=1$ and $\sigma(k)=n$. Then $s_n\omega(\ell)=\omega(\ell)$ for all $\ell\in[n]-\{j,k\}$. Thus $\mu(s_n\omega,\ell)=\mu(\omega,\ell)$ for all $\ell\in[n]-\{1,n\}$. Moreover, $s_n\omega(j)=s_n(a_jn+1)=a_jn+0=(a_j-1)n+n$ and $s_n\omega(k)=s_n(a_kn+n)=a_kn+(n+1)=(a_k+1)n+1$. Thus $\mu(s_n\omega,1)=a_k+1=\mu(\omega,n)+1$ and $\mu(s_n\omega,n)=a_j-1=\mu(\omega,1)-1$. We obtain $\mu(s_i\omega)=s_i\cdot\mu(\omega)$, and in particular $\mu(s_i\omega)\in\check{Q}$, for all $i\in[n]$. Now suppose $\nu(\omega)\in\check{Q}$ and fix $i\in[n-1]$. Since $\omega s_i(j)=\omega(j)$ for all $j\in[n]-\{i,i+1\}$ we have $\nu(\omega s_i,j)=\nu(\omega,j)$ for all $j\in[n]-\{i,i+1\}$. Furthermore $\omega s_i(i)=\omega(i+1)$ and $\omega s_i(i+1)=\omega(i)$, thus $\nu(\omega s_i,i)=\nu(\omega,i+1)$ and $\nu(\omega s_i,i+1)=\nu(\omega,i)$. Finally since $\omega s_n(j)=\omega(j)$ for all $j\in[n]-\{1,n\}$ we have $\nu(\omega s_n,j)=\nu(\omega,j)$ for all $j\in[n]-\{1,n\}$. Moreover $\omega s_n(1)=\omega(0)=\omega(n-n)=\omega(n)-n=(a_n-1)n+b_n$, hence $\nu(\omega s_n,1)=-a_n+1=\nu(\omega,n)+1$. On the other hand $\omega s_n(n)=\omega(n+1)=n+\omega(1)=(a_1+1)n+b_1$. Thus $\nu(\omega s_n,n)=-a_1-1=\nu(\omega,1)-1$. We obtain that $\nu(\omega s_i)=s_i\cdot\nu(\omega)$, and in particular $\nu(\omega s_i)\in\check{Q}$, for all $i\in[n]$. This concludes the proof of claim \refi{munu} and thus the proof of the lemma. \end{proof} For $q\in\check{Q}$ define an affine permutation $t_q\in\affS_n$ by $t_q(i)=q_in+i$ for $i\in[n]$. We call an affine permutation $\omega\in\affS_n$ a \emph{translation} if there exists $q\in\check{Q}$ such that $\omega\cdot x=q+x$ for all $x\in\check{Q}$. 
\begin{mythrm}{translation} \begin{enumerate}[(i)] \item\label{item:tmusigma} Let $\omega\in\affS_n$ be an affine permutation and set $s=\sigma(\omega)$, $x=\mu(\omega)$ and $y=\nu(\omega)$. Then $\omega=t_xs=st_{-y}$. \item\label{item:coQ} Let $x,y\in\check{Q}$. Then $t_xt_y=t_{x+y}$ and $(t_x)^{-1}=t_{-x}$. Hence we may view the coroot lattice $\check{Q}$ as a subgroup of $\affS_n$. \item\label{item:translation} An affine permutation $\omega\in\affS_n$ is a translation if and only if $\omega=t_q$ for some $q\in\check{Q}$. \item\label{item:semidirect} The affine symmetric group is the semidirect product of the symmetric group and the coroot lattice, that is, $\affS_n=\mathfrak{S}_n\ltimes\check{Q}$. \end{enumerate} \end{mythrm} \begin{proof} Claims~\refi{tmusigma} and~\refi{coQ} are straightforward calculations. Claim~\refi{tmusigma} follows from \begin{align*} t_xs(i) &=\mu(\omega,s(i))n+s(i) =\omega(i) =-\nu(\omega,i)n+s(i) =s(-\nu(\omega,i)n+i) =st_{-y}(i). \end{align*} On the other hand \begin{align*} t_xt_y(i) &=t_x(y_in+i) =y_in+t_x(i) =(x_i+y_i)n+i =t_{x+y}(i) \end{align*} implies~\refi{coQ}. To see~\refi{translation} note that $t_q\cdot x=t_qt_x\cdot0=t_{q+x}\cdot0=q+x$. Conversely, if $\omega$ is a translation by $q\in\check{Q}$ then $q+x=\omega\cdot x=q+\sigma(\omega)\cdot x$ for all $x\in\check{Q}$, which implies $\sigma(\omega)=e$. Finally, $\mathfrak{S}_n\check{Q}=\affS_n$ and $\mathfrak{S}_n$ normalises $\check{Q}$ by~\refi{tmusigma}. Since $\mathfrak{S}_n\cap \check{Q}=\{e\}$, we obtain~\refi{semidirect}. \end{proof} Thus the decomposition of an affine permutation into a product of a translation and a permutation can be obtained from its window in a simple and direct fashion. We remark that this combinatorial decomposition has appeared in the literature before, for example in~\cite{BjoBre1996}.
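The window-level decomposition of \reft{translation} can be carried out directly. The sketch below (our own illustration, not part of the paper) splits a window into the translation part $x=\mu(\omega)$ and the permutation part $s=\sigma(\omega)$ by writing $\omega(i)=a_in+b_i$, and reassembles the window from them.

```python
# Illustrative sketch (not from the paper): decompose w = t_x . s by
# writing w(i) = a_i * n + b_i with b_i in [n]; then s(i) = b_i and
# x_{b_i} = a_i, so that w(i) = x_{s(i)} * n + s(i).

def decompose(window):
    n = len(window)
    s, x = [0] * n, [0] * n
    for i, v in enumerate(window, start=1):
        a, b = divmod(v - 1, n)        # v = a * n + (b + 1) with b + 1 in [n]
        s[i - 1] = b + 1               # the permutation part sigma(w)
        x[b] = a                       # the translation part mu(w)
    return x, s

def recompose(x, s):
    """Window of t_x . s, using t_x(i) = x_i * n + i for i in [n]."""
    n = len(s)
    return [x[s[i] - 1] * n + s[i] for i in range(n)]

win = [-2, 15, -1, 16, -14, 10, 4]     # n = 7
x, s = decompose(win)
print(x, sum(x))                       # x lies in the coroot lattice: sum(x) == 0
print(s)                               # a permutation of [7]
print(recompose(x, s) == win)          # True: w = t_x . s
```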
However, the author is unaware of a reference explaining explicitly its connection to the algebraic decomposition into a semidirect product, which exists for any affine Weyl group. The affine symmetric group possesses an involutive automorphism owing to the symmetry of the Dynkin diagram of type $A_{n-1}$. Set $s_i^*=s_{n-i}$ for $i\in[n-1]$ and $s_n^*=s_n$. This correspondence extends to an automorphism $\omega\mapsto\omega^*$ on $\affS_n$, where $\omega^*$ is obtained by replacing all instances of $s_i$ in any expression of $\omega$ in terms of the simple transpositions by $s_i^*$. The involutive automorphism has a simple explicit description in window notation and fulfils many desirable properties. Some of these properties are presented with proofs in this paper, although they are already well-known to experts. \begin{mylem}{aut} \begin{enumerate}[(i)] \item\label{item:window} Let $\omega\in\affS_n$ be an affine permutation and $i\in\mathbb{Z}$. Then $\omega^*(i)=1-\omega(1-i)$. In particular the window of $\omega^*$ is given by $[n+1-\omega(n),\dots,n+1-\omega(1)]$. \item\label{item:aut} The involutive automorphism preserves $\mathfrak{S}_n$, translations, Gra{\ss}mannian affine permutations and dominant affine permutations. \end{enumerate} \end{mylem} \begin{proof} We prove claim~\refi{window} by induction on the length of $\omega$. Clearly~\refi{window} holds for the identity, as $e^*=e$. Thus assume it is true for $\omega$. For $i\in[n-1]$ the right multiplication of $\omega$ by $s_i$ corresponds to exchanging the two numbers $\omega(i),\omega(i+1)$ in the window of $\omega$. On the other hand multiplying $\omega^*$ by $s_i^*$ from the right exchanges the numbers $n+1-\omega(i+1),n+1-\omega(i)$. Claim~\refi{window} therefore also holds for $\omega s_i$. For $i=n$ right multiplication by $s_n$ replaces the window entries $\omega(1)$ and $\omega(n)$ by $\omega(n)-n$ and $\omega(1)+n$ respectively, and a direct check shows that~\refi{window} also holds for $\omega s_n$. Claim~\refi{aut} follows from~\refi{window} and the fact that $(\omega^{-1})^*=(\omega^*)^{-1}$.
\end{proof} \begin{figure} \caption{The inversion table of the affine permutation $\omega=[-2,15,-1,16,-14,10,4]\in\affS_7$.} \label{Figure:kij} \end{figure} Let $\omega\in\affS_n$ be an affine permutation. An \emph{affine inversion} of $\omega$ is a pair $(i,j)\in[n]\times\mathbb{N}$ such that $i<j$ and $\omega(i)>\omega(j)$. For $i,j\in[n]$ with $i<j$ define \begin{align}\label{eq:kij} k_{i,j}(\omega) &=\abs{\floor{\frac{\omega^{-1}(j)-\omega^{-1}(i)}{n}}}. \end{align} The numbers $k_{i,j}(\omega)$ were first considered by Shi who proved that $\sum_{i,j}k_{i,j}(\omega)=\ell(\omega)$ for all $\omega\in\affS_n$~\cite[Lem.~4.2.2]{Shi:Kazhdan_Lusztig}. We arrange these numbers in a staircase tableau as in \reff{kij}. The following lemma explains that $k_{i,j}(\omega)$ counts certain affine inversions of $\omega$. It should be compared to the discussion preceding~\cite[Prop.~8.3.1]{BjoBre}. \begin{mylem}{invA} Let $\omega\in\affS_n$ be an affine permutation. \begin{enumerate}[(i)] \item\label{item:inv} Then the number of affine inversions of $\omega$ is given by $\sum_{i,j}k_{i,j}(\omega)$. \item\label{item:invij} If $\omega$ is dominant then $k_{i,j}(\omega)$ equals the number of affine inversions $(a,b)$ of $\omega$, such that $\omega(a)\equiv i$ and $\omega(b)\equiv j$ modulo $n$. \end{enumerate} \end{mylem} \begin{proof} Let $i,j\in[n]$ with $i<j$. If $\omega^{-1}(i)<\omega^{-1}(j)$ then $(j,i+kn)$ is an affine inversion of $\omega^{-1}$ if and only if $k\geq1$ and \begin{align*} \omega^{-1}(j)>\omega^{-1}(i+kn) &\Leftrightarrow\omega^{-1}(j)-\omega^{-1}(i)>kn \Leftrightarrow\floor{\frac{\omega^{-1}(j)-\omega^{-1}(i)}{n}}\geq k. \end{align*} On the other hand if $\omega^{-1}(i)>\omega^{-1}(j)$ then $(i,j+kn)$ is an affine inversion of $\omega^{-1}$ if and only if $k\geq0$ and \begin{align*} k\leq\floor{\frac{\omega^{-1}(i)-\omega^{-1}(j)}{n}} =\abs{\floor{\frac{\omega^{-1}(j)-\omega^{-1}(i)}{n}}}-1.
\end{align*} This proves the first claim since $\omega$ and $\omega^{-1}$ have the same number of inversions. If $\omega$ is dominant then $\omega^{-1}$ is Gra{\ss}mannian. Thus $\omega^{-1}(i)<\omega^{-1}(j)$ and $k_{i,j}(\omega)$ counts the number of inversions of $\omega^{-1}$ of the form $(j,i+kn)$ as above. But these inversions correspond exactly to the inversions $(a,b)$ of $\omega$ with $\omega(a)\equiv i$ and $\omega(b)\equiv j$ modulo $n$. Claim~\refi{invij} follows. \end{proof} For example the dominant affine permutation $\omega=[-2,15,-1,16,-14,10,4]$ from \reff{kij} has three affine inversions $(a,b)$ such that $\omega(a)\equiv3$ and $\omega(b)\equiv7$ modulo $7$, namely $(6,12),(6,19)$ and $(6,26)$. Thus $k_{3,7}(\omega)=3$. Motivated by \refl{invA} we call the collection of numbers $k_{i,j}(\omega)$ for $i,j\in[n]$ with $i<j$ the \emph{inversion table} of $\omega\in\affS_n$. A simple computation reveals the effect of the involutive automorphism on the inversion table. \begin{myprop}{invconj} Let $\omega\in\affS_n$ be an affine permutation. Then the inversion table of $\omega^*$ is the transpose of the inversion table of $\omega$. That is, $k_{n+1-j,n+1-i}(\omega^*)=k_{i,j}(\omega)$ for all $i,j\in[n]$ with $i<j$. \end{myprop} \begin{proof} We compute \begin{align*} k_{n+1-j,n+1-i}(\omega^*) &=\abs{\floor{\frac{(\omega^*)^{-1}(n+1-i)-(\omega^*)^{-1}(n+1-j)}{n}}}\\ &=\abs{\floor{\frac{(\omega^{-1})^*(n+1-i)-(\omega^{-1})^*(n+1-j)}{n}}}\\ &=\abs{\floor{\frac{n+1-\omega^{-1}(i)-(n+1)+\omega^{-1}(j)}{n}}} =k_{i,j}(\omega).\qedhere \end{align*} \end{proof} There is a natural way to generalise the inversion table to affine Weyl groups. Let $\Phi$ be an irreducible crystallographic root system with affine Weyl group $\widetilde{W}$. Let $\omega\in\widetilde{W}$ have the decomposition $\omega=t_qs$, where $q\in\check{Q}$ and $s\in W$.
The affine Weyl group acts on the set of affine roots by \begin{align*} \omega\cdot(\alpha+k\delta) =s\cdot\alpha+(k-\skal{q,s\cdot\alpha})\delta. \end{align*} A positive affine root $\alpha+k\delta\in\widetilde{\Phi}^+$ is an \emph{affine inversion} of $\omega$ if $\omega\cdot(\alpha+k\delta)\in-\widetilde{\Phi}^+$. Denote the set of affine inversions of $\omega$ by $\op{Inv}(\omega)=\widetilde{\Phi}^+\cap\omega^{-1}\cdot\big(-\widetilde{\Phi}^+\big)$. We remark that the affine inversions of $\omega$ correspond to the affine hyperplanes in $\op{Aff}(\Phi)$ separating $\omega(A_{\circ})$ from the fundamental alcove. Thus the number of inversions equals the length of $\omega$. For $\omega\in\widetilde{W}$ and a positive root $\alpha\in\Phi^+$ define \begin{align}\label{eq:kalpha} k_{\alpha}(\omega) &=\#(\op{Inv}(\omega^{-1})\cap\{\pm\alpha+k\delta:k\in\mathbb{Z}\}). \end{align} \begin{mylem}{kalpha} Let $\alpha\in\Phi^+$ be a positive root and $\omega\in\widetilde{W}$ an element of the affine Weyl group. If $\omega=t_qs$, where $q\in\check{Q}$ and $s\in W$, then \begin{align*} k_{\alpha}(\omega)= \begin{cases} \abs{\skal{\alpha,q}}&\quad\text{if }s^{-1}\cdot\alpha\in\Phi^+,\\ \abs{\skal{\alpha,q}-1}&\quad\text{if }s^{-1}\cdot\alpha\in-\Phi^+. \end{cases} \end{align*} \end{mylem} \begin{proof} Set $\beta=s^{-1}\cdot\alpha$. For $k\geq0$ \begin{align*} \omega^{-1}\cdot(\alpha+k\delta) &=\beta+(k+\skal{\alpha,q})\delta \in-\widetilde\Phi^+ \end{align*} if and only if either $k<-\skal{\alpha,q}$ or both $k=-\skal{\alpha,q}$ and $\beta\in-\Phi^+$. On the other hand for $k\geq1$ \begin{align*} \omega^{-1}\cdot(-\alpha+k\delta) &=-\beta+(k-\skal{\alpha,q})\delta \in-\widetilde\Phi^+ \end{align*} if and only if $k<\skal{\alpha,q}$ or both $k=\skal{\alpha,q}$ and $\beta\in\Phi^+$.
Combining the two cases implies the claim. \end{proof} As a consequence we deduce that the collection of numbers $k_{\alpha}(\omega)$ defined in \refq{kalpha} really is a generalisation of the inversion table defined in \refq{kij} for type $A_{n-1}$. \begin{myprop}{kalpha} Let $i,j\in[n]$ with $i<j$ and $\omega\in\affS_n$. Then $k_{i,j}(\omega)=k_{e_i-e_j}(\omega)$. \end{myprop} \begin{proof} Write $\omega=t_qs$, where $q\in\check{Q}$ and $s\in\mathfrak{S}_n$. The claim follows from \refl{kalpha} and \begin{align*} \abs{\floor{\frac{\omega^{-1}(j)-\omega^{-1}(i)}{n}}} &=\abs{\floor{\frac{-q_jn+s^{-1}(j)-(-q_in+s^{-1}(i))}{n}}}\\ &= \begin{cases} \abs{q_i-q_j}&\quad\text{if }s^{-1}(i)<s^{-1}(j),\\ \abs{q_i-q_j-1}&\quad\text{if }s^{-1}(i)>s^{-1}(j). \end{cases}\qedhere \end{align*} \end{proof} Note that two distinct elements of $\widetilde{W}$ may well have the same inversion table. For example the two affine permutations $[0,1,5]$ and $[-1,3,4]$ in $\affS_3$ have the same inversion table. However, the inversion table is closely related to another construction studied by Shi. Let $\omega\in\widetilde{W}$ and $\alpha\in\Phi^+$. Define $\tilde{k}_{\alpha}(\omega)\in\mathbb{Z}$ such that \begin{align*} \tilde{k}_{\alpha}(\omega) <\skal{\alpha,x} <\tilde{k}_{\alpha}(\omega)+1 \end{align*} for all $x\in\omega(A_{\circ})$. The collection of numbers $\tilde{k}_{\alpha}(\omega)$ for $\alpha\in\Phi^+$ is called the \emph{address} or \emph{Shi coordinates} of the alcove $\omega(A_{\circ})$. Clearly, each alcove is determined uniquely by its Shi coordinates, and $\tilde{k}_{\alpha}(\omega)\geq0$ for all $\alpha\in\Phi^+$ if and only if $\omega$ is dominant.
Moreover it is not difficult to show that \begin{align*} \tilde{k}_{\alpha}(\omega)= \begin{cases} \skal{\alpha,q}&\quad\text{if }s^{-1}\cdot\alpha\in\Phi^+,\\ \skal{\alpha,q}-1&\quad\text{if }s^{-1}\cdot\alpha\in-\Phi^+, \end{cases} \end{align*} where $q\in\check{Q}$ and $s\in W$ are chosen such that $\omega=t_qs$ (see~\cite[Thm.~3.3]{Shi1987:alcoves}). Thus we obtain $k_{\alpha}(\omega)=\abs{\smash{\tilde{k}}_{\alpha}(\omega)}$ for all $\omega\in\widetilde{W}$ and $\alpha\in\Phi^+$. In particular it follows that no two distinct dominant\,(!) elements of the affine Weyl group have the same inversion table. \sk Similar to the inversion table, the entries of the rational Shi tableau count certain inversions of an affine permutation. However, as we have just seen, the set of all affine inversions might be too big to obtain a nice correspondence between permutations and tableaux. In the case of the inversion table it is more fruitful to restrict ourselves to dominant affine permutations. In the case of the rational Shi tableau the appropriate set is even smaller, in fact finite. Let $p$ be a positive integer that is relatively prime to $n$. An affine permutation $\omega\in\affS_n$ is called \emph{$p$-stable} if $\omega(i)<\omega(i+p)$ for all $i\in\mathbb{Z}$. The set of $p$-stable affine permutations in $\affS_n$, denoted by $\affS_n^p$, was first considered by Gorsky, Mazin and Vazirani~\cite{GMV2014}. As a consequence of~\refl{aut} we see that the involutive automorphism $\omega\mapsto\omega^*$ preserves $\affS_n^p$. If $p=mn+1$ then $\affS_n^p$ corresponds to the set of minimal alcoves of the regions of the $m$-extended Shi arrangement. If $p=mn-1$ then $\affS_n^p$ is related to the bounded regions of the $m$-extended Shi arrangement. Thus $p$-stable affine permutations should be viewed as a ``rational'' analogue of the regions of the Shi arrangement. Let $\omega\in\affS_n^p$ be a dominant $p$-stable affine permutation.
Let $\omega^{-1}=t_qs$, where $q\in\check{Q}$ and $s\in\mathfrak{S}_n$, and $p=mn+r$ with $r\in[n-1]$. We define the \emph{rational Shi tableau} of $\omega$ as the collection of integers \begin{align}\label{eq:tij} t^p_{i,j}(\omega)= \begin{cases} \min(k_{i,j}(\omega),m)&\quad\text{if }r+s(i)<s(j)\text{ or }s(i)+r-n<s(j)<s(i),\\ \min(k_{i,j}(\omega),m+1)&\quad\text{otherwise,} \end{cases} \end{align} where $i,j\in[n]$ such that $i<j$. See \reff{tij} for an example. Note that $k_{i,j}(\omega)\leq m$ whenever $s(j)-s(i)\in\{r-n,r\}$. This follows from \refl{kalpha} and the fact that $\omega$ is dominant $p$-stable. As a consequence $t_{i,j}^p(\omega)=\min(k_{i,j}(\omega),m)$ if $p=mn+1$, and $t_{i,j}^p(\omega)=\min(k_{i,j}(\omega),m+1)$ if $p=mn+(n-1)$. For $p=mn+1$ we recover the Shi tableau of Fishel, Tzanaki and Vazirani~\cite{FTV2011}. \begin{figure} \caption{The rational Shi tableau of $\omega=[-2,15,-1,16,-14,10,4]\in\affS_7^{16}$.} \label{Figure:tij} \end{figure} \sk The following proposition shows that the rational Shi tableau behaves similarly to the inversion table under the involutive automorphism. \begin{myprop}{shiconj} Let $\omega\in\affS_n^p$ be a dominant $p$-stable affine permutation. Then the rational Shi tableau of $\omega^*$ is the transpose of the rational Shi tableau of $\omega$. That is, $t^p_{n+1-j,n+1-i}(\omega^*)=t^p_{i,j}(\omega)$ for all $i,j\in[n]$ with $i<j$. \end{myprop} \begin{proof} The claim follows from \refp{invconj} and \begin{align*} r+s(i)<s(j) &\Leftrightarrow r+n+1-s^*(n+1-i)<n+1-s^*(n+1-j)\\ &\Leftrightarrow r+s^*(n+1-j)<s^*(n+1-i),\\ s(i)+r-n<s(j) &\Leftrightarrow n+1-s^*(n+1-i)+r-n<n+1-s^*(n+1-j)\\ &\Leftrightarrow s^*(n+1-j)+r-n<s^*(n+1-i),\\ s(j)<s(i) &\Leftrightarrow s^*(n+1-i)<s^*(n+1-j).\qedhere \end{align*} \end{proof} The rational Shi tableau generalises naturally to affine Weyl groups.
Let $\Phi$ be an irreducible crystallographic root system, and $p$ be relatively prime to the Coxeter number $h$ of $\Phi$. Let $\widetilde\Phi_p$ denote the set of affine roots of height $p$. In~\cite[Sec.~8]{Thiel2015} Thiel defines \emph{$p$-stable} affine Weyl group elements as the set \begin{align*} \widetilde{W}^p=\big\{\omega\in\widetilde{W}:\omega\cdot\widetilde\Phi_p\subseteq\widetilde\Phi^+\big\}. \end{align*} Let $\alpha\in\Phi^+$ be a positive root and $\omega\in\widetilde{W}^p$ a dominant $p$-stable element of the affine Weyl group. Define \begin{align}\label{eq:talpha} t_{\alpha}^p(\omega) &=\#\big(\omega\cdot(-\widetilde\Phi_{<p}^+)\cap\{-\alpha+k\delta:k\geq1\}\big), \end{align} where $\widetilde\Phi^+_{<p}$ denotes the set of positive affine roots with height less than $p$. The \emph{rational Shi tableau} of $\omega$ is the collection of numbers $t_{\alpha}^p(\omega)$ for $\alpha\in\Phi^+$. \begin{mylem}{dominant} Let $\alpha\in\Phi^+$ be a positive root and $\omega\in\widetilde{W}$ be dominant with $\omega=t_qs$, where $q\in\check{Q}$ and $s\in W$. If $\skal{\alpha,q}=0$ then $s^{-1}\cdot\alpha\in\Phi^+$. \end{mylem} \begin{proof} Suppose $\skal{\alpha,q}=0$. The height function $\op{ht}:\Phi\to\mathbb{R}$ extends to a linear functional on $V$. Thus we may choose $v\in V$ with $\skal{\beta,v}=\op{ht}(\beta)/h$ for all $\beta\in\Phi$, where $h$ is the Coxeter number of $\Phi$. Note that $v\in A_{\circ}$ by definition. Thus $\skal{\alpha,\omega\cdot v}>0$ since $\omega$ is dominant.
We compute \begin{align*} \frac{\op{ht}(s^{-1}\cdot\alpha)}{h} &=\skal{s^{-1}\cdot\alpha,v} =\skal{\alpha,s\cdot v} =\skal{\alpha,q+s\cdot v} =\skal{\alpha,\omega\cdot v} >0.\qedhere \end{align*} \end{proof} Using \refl{dominant} we obtain that $\omega^{-1}\cdot(\alpha+k\delta)\in\widetilde\Phi^+$ for all $\alpha\in\Phi^+$ and $k\geq0$ whenever $\omega$ is dominant. Hence, \begin{align*} t_{\alpha}^p(\omega) =\#(\op{Inv}(\omega^{-1})\cap\{\pm\alpha+k\delta:k\in\mathbb{Z}\}\cap\omega\cdot(-\widetilde\Phi^+_{<p})). \end{align*} We see that, like $k_{\alpha}(\omega)$, the number $t_{\alpha}^p(\omega)$ counts certain affine inversions $\pm\alpha+k\delta$ of $\omega^{-1}$, but with an additional restriction on the height of $\omega^{-1}\cdot(\pm\alpha+k\delta)$. \begin{myprop}{talpha} Let $m\geq0$ and $r\in[h-1]$, where $h$ is the Coxeter number of $\Phi$, and $\alpha\in\Phi^+$ be a positive root. Set $p=mh+r$ and let $\omega\in\widetilde{W}^p$ be dominant $p$-stable. If $\omega=t_qs$, where $q\in\check{Q}$ and $s\in W$, then \begin{align*} t_{\alpha}^p(\omega) &= \begin{cases} \min(k_{\alpha}(\omega),m)&\quad\text{if }r-h<\op{ht}(s^{-1}\cdot\alpha)<0\text{ or }r<\op{ht}(s^{-1}\cdot\alpha),\\ \min(k_{\alpha}(\omega),m+1)&\quad\text{otherwise.} \end{cases} \end{align*} \end{myprop} \begin{proof} Set $\beta=s^{-1}\cdot\alpha$. Recall that $\skal{\alpha,q}\geq0$ because $\omega$ is dominant. If $\op{ht}(\beta)=r$ then $\beta+m\delta\in\widetilde\Phi_p$ and therefore \begin{align*} \alpha+(m-\skal{\alpha,q})\delta &=\omega\cdot(\beta+m\delta) \in\widetilde\Phi^+ \end{align*} since $\omega\in\widetilde{W}^p$. If instead $\op{ht}(\beta)=r-h$ then $\beta+(m+1)\delta\in\widetilde\Phi_p$ and \begin{align*} \alpha+(m+1-\skal{\alpha,q})\delta &=\omega\cdot(\beta+(m+1)\delta) \in\widetilde\Phi^+.
\end{align*} Consequently $\op{ht}(\beta)=r$ implies $\skal{\alpha,q}\leq m$, and $\op{ht}(\beta)=r-h$ implies $\skal{\alpha,q}\leq m+1$. Now let $k\geq1$. Then \begin{align*} 0<\op{ht}\big(-\omega^{-1}\cdot(-\alpha+k\delta)\big) =\op{ht}(\beta)+(\skal{\alpha,q}-k)h<p=mh+r \end{align*} if and only if one of the following (mutually exclusive) cases occurs \begin{align*} m>0\text{ and }k&=\skal{\alpha,q}\text{ and }0<\op{ht}(\beta),\\ m=0\text{ and }k&=\skal{\alpha,q}\text{ and }0<\op{ht}(\beta)<r,\\ -m&<k-\skal{\alpha,q}<0,\\ m>0\text{ and }-m&=k-\skal{\alpha,q}\text{ and }\op{ht}(\beta)<r,\\ -m-1&=k-\skal{\alpha,q}\text{ and }\op{ht}(\beta)<-h+r. \end{align*} Equivalently $-\alpha+k\delta$ contributes to $t_{\alpha}^p(\omega)$ if and only if \begin{align*} k\in\{1,2,\dots\}\cap \begin{cases} \{\skal{\alpha,q}-m+1,\dots,\skal{\alpha,q}\}&\quad\text{if }r\leq\op{ht}(\beta),\\ \{\skal{\alpha,q}-m,\dots,\skal{\alpha,q}\}&\quad\text{if }0<\op{ht}(\beta)\leq r,\\ \{\skal{\alpha,q}-m,\dots,\skal{\alpha,q}-1\}&\quad\text{if }r-h\leq\op{ht}(\beta)<0,\\ \{\skal{\alpha,q}-m-1,\dots,\skal{\alpha,q}-1\}&\quad\text{if }\op{ht}(\beta)\leq r-h. \end{cases} \end{align*} The claim now follows from \refl{kalpha}. Note that in the last two cases $\skal{\alpha,q}-1\geq0$ is ensured by \refl{dominant}. \end{proof} With this description of $t_{\alpha}^p(\omega)$ we are able to verify that the rational Shi tableau defined in \refq{talpha} really specialises to the rational Shi tableau defined in \refq{tij} for type $A_{n-1}$. \begin{myprop}{tijalphaA} Let $n,p$ be positive coprime integers, $i,j\in[n]$ with $i<j$, and $\omega\in\affS_n^p$ be a dominant $p$-stable affine permutation. Then $t_{i,j}^p(\omega)=t_{e_i-e_j}^p(\omega)$, which counts the number of affine inversions $(a,b)$ of $\omega$ such that $b<a+p$, and $\omega(a)\equiv i$ and $\omega(b)\equiv j$ modulo $n$.
\end{myprop} \begin{proof} This is a consequence of \refp{kalpha} and \refp{talpha}. \end{proof} The following is the main conjecture of this paper. \begin{myconj}{shi} Let $\Phi$ be an irreducible crystallographic root system with affine Weyl group $\widetilde{W}$ and Coxeter number $h$, and let $p$ be a positive integer relatively prime to $h$. Then each dominant $p$-stable element of the affine Weyl group is determined uniquely by its rational Shi tableau. \end{myconj} \refcj{shi} is known to be true in the Fu{\ss}--Catalan case where $p=mh+1$. In the next section we exploit the connections of the affine symmetric group to the combinatorics of rational Dyck paths to prove the conjecture if $\Phi$ is of type $A_{n-1}$. That is, we prove the following theorem. (The proof is found at the end of \refs{codinv}.) \begin{mythrm}{shiA} Let $n,p$ be two positive coprime integers. Then each dominant $p$-stable affine permutation in $\affS_n$ is determined uniquely by its rational Shi tableau. \end{mythrm} We remark that the sum of the entries of the Shi tableau generalises the height statistic used by Stump in~\cite[Conj.~3.14]{Stump2010}. Thus the $q$-Fu{\ss}--Catalan numbers proposed therein are generalised by the polynomials $q^{(p-1)r/2}C_{\Phi,p}(q^{-1})$, where $r$ is the rank of $\Phi$ and \begin{align} C_{\Phi,p}(q) =\sum_{\omega} \prod_{\alpha\in\Phi^+}q^{t_{\alpha}^p(\omega)} \end{align} the sum being taken over all dominant $p$-stable elements of the affine Weyl group. Pak and Stanley~\cite{Stanley1996,Stanley1998} found a bijection between the regions of the ($m$-extended) Shi arrangement of type $A_{n-1}$ and the set of ($m$-)parking functions of length $n$. Gorsky, Mazin and Vazirani~\cite[Def.~3.8]{GMV2014} generalised this bijection to a map from $\affS_n^p$ to the set of rational parking functions.
Their \emph{rational Pak--Stanley labelling} $f(\omega)$ is defined by \begin{align*} f_i(\omega) =\#\big\{(a,b)\in[n]\times\mathbb{N}:a<b<a+p,\omega(a)>\omega(b)\text{ and }\omega(b)\equiv i\text{ modulo }n\big\}, \end{align*} where $i\in[n]$. If $\omega$ is a dominant $p$-stable affine permutation then the Pak--Stanley labelling is obtained by taking the row-sums of the Shi tableau of $\omega$. That is, \begin{align*} f_j(\omega)=\sum_{i=1}^{j-1}t_{i,j}^p(\omega). \end{align*} Consequently the \emph{dual Pak--Stanley labelling}, which we define by $f_i^*(\omega)=f_i(\omega^*)$, is obtained by taking the column-sums of the Shi tableau of $\omega$. That is, \begin{align*} f_i^*(\omega)=\sum_{j=n-i+2}^nt_{n-i+1,j}^p(\omega). \end{align*} Part of the motivation for studying rational Shi tableaux comes from the fact that, in view of the above identities, they can be regarded as an intermediate step between dominant $p$-stable affine permutations and their images under the Pak--Stanley labelling. If one replaces a dominant $p$-stable affine permutation $\omega$ by its rational Shi tableau then some information is apparently lost, since not all inversions of $\omega$ are taken into account. Moving on to the Pak--Stanley labelling $f(\omega)$ gives up even more information on the nature of these inversions. Nevertheless there seems to be just enough information left to determine $\omega$ uniquely. While the injectivity of the Pak--Stanley labelling on the set of all $p$-stable affine permutations remains an open problem~\cite[Conj.~1.4]{GMV2014}, it follows from the work of Thomas and Williams~\cite{ThoWil2016} that the Pak--Stanley labelling is injective on the set of dominant $p$-stable affine permutations (see also the remarks at the end of \refs{codinv}). In view of \reft{shiA} it is an interesting question whether Shi tableaux can offer new insights regarding this problem.
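The identities above are easy to test numerically. The following Python sketch (our own illustration; all function names are ours, not from the paper) computes the rational Shi tableau of the running example $\omega=[-2,15,-1,16,-14,10,4]\in\affS_7^{16}$ via the inversion count of \refp{tijalphaA}, evaluates the Pak--Stanley labelling directly from its defining formula, and checks that the latter equals the sums $f_j=\sum_{i<j}t^p_{i,j}(\omega)$.

```python
# Illustration only; function names are ours.  Rational Shi tableau of the
# running example w = [-2, 15, -1, 16, -14, 10, 4] in \affS_7^{16}, computed
# from the type-A description: t^p_{i,j}(w) counts affine inversions (a, b)
# with a in [n], a < b < a + p, w(a) > w(b), w(a) = i and w(b) = j modulo n.

window, n, p = [-2, 15, -1, 16, -14, 10, 4], 7, 16

def w(i):
    """Evaluate the affine permutation at any integer i."""
    return window[(i - 1) % n] + ((i - 1) // n) * n

def residue(v):
    """Representative of v modulo n lying in [n] = {1, ..., n}."""
    return (v - 1) % n + 1

shi = {}
for a in range(1, n + 1):
    for b in range(a + 1, a + p):
        if w(a) > w(b):                      # affine inversion with b < a + p
            key = (residue(w(a)), residue(w(b)))
            shi[key] = shi.get(key, 0) + 1

# For a dominant permutation every inversion has residue(w(a)) < residue(w(b)).
assert all(i < j for i, j in shi)

# Pak--Stanley labelling straight from the defining formula ...
f = [sum(1 for a in range(1, n + 1) for b in range(a + 1, a + p)
         if w(a) > w(b) and residue(w(b)) == i) for i in range(1, n + 1)]
# ... agrees with the sums f_j = sum_{i < j} t^p_{i,j}.
assert f == [sum(shi.get((i, j), 0) for i in range(1, j)) for j in range(1, n + 1)]

# The entry k_{3,7}(w) = 3 computed in the text, via the defining formula.
winv = lambda j: next(z for z in range(-500, 500) if w(z) == j)
assert abs((winv(7) - winv(3)) // n) == 3
```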
Another major open problem concerning the Pak--Stanley labelling is to find an analogous labelling for different affine Weyl groups. Notably, this problem can be solved for rational Shi tableaux (recall the definition in~\refq{talpha}). To conclude this section we want to connect the world of cores and abaci to dominant affine permutations. This connection is achieved by the next theorem, which follows essentially from the work of Lascoux~\cite{Lascoux2001}. Let $\omega\in\affS_n$ be a dominant affine permutation, and define $\gamma(\omega)=\{z\in\mathbb{Z}:\omega(z)\leq0\}$. \begin{mythrm}{gamma} The map $\gamma$ is a bijection between dominant affine permutations in $\affS_n$ and balanced $n$-flush abaci. \end{mythrm} It is easy to adapt \reft{gamma} to involve $p$-stable affine permutations. \begin{myprop}{pgamma} Let $\omega\in\affS_n$ be a dominant affine permutation. Then the abacus $\gamma(\omega)$ is $p$-flush if and only if $\omega\in\affS_n^p$ is $p$-stable. \end{myprop} Recall the map $\alpha$ from cores to balanced flush abaci that was introduced in~\refs{conj}. We obtain a bijection $\alpha^{-1}\circ\gamma$ between the dominant affine permutations in $\affS_n$ and $n$-cores. In particular $\alpha^{-1}\circ\gamma$ restricts to a bijection between dominant $p$-stable affine permutations and $\mathfrak C_{n,p}$. The following result appears to be new. \begin{myprop}{conjcore} Let $\omega\in\affS_n$ be a dominant affine permutation. Then $\alpha^{-1}\circ\gamma(\omega^*)$ is the conjugate partition of $\alpha^{-1}\circ\gamma(\omega)$. \end{myprop} \begin{proof} This is best understood using a different description of the map $\alpha^{-1}\circ\gamma$. Following \cite[Sec.~1.2]{kschur} we read the one-line notation of $\omega$ from left to right, drawing a North step for each encountered non-positive number and an East step for each encountered positive number.
The resulting path $P$ outlines the South-West boundary of the partition $\alpha^{-1}\circ\gamma(\omega)$. By \refl{aut}~\refi{window} we have $\omega^*(i)=1-\omega(1-i)$. Hence $\omega^*(i)\leq0$ if and only if $\omega(1-i)$ is positive. Reading the one-line notation of $\omega^*$ from left to right and drawing a path as prescribed therefore yields the reverse path of $P$ with North and East steps exchanged. \end{proof} For example, consider the dominant affine permutation $\omega=[-2,15,-1,16,-14,10,4]\in\affS_7^{16}$. Reading only the signs (zero counting as a minus) of the values of $\omega$ we obtain the sequence \begin{align*} \cdots-+-+----+-+-+-[-+-+-++]++++-++++++-+\cdots \end{align*} where the initial dots encode minuses, and the dots at the end encode pluses. Drawing the associated path we obtain the boundary of the $7,16$-core in \reff{core}. \section{The codinv statistic}\label{Section:codinv} \begin{figure} \caption{A rational Dyck path $x\in\mathfrak D_{7,16}$.} \label{Figure:dyck} \end{figure} Let $n$ and $p$ be positive coprime integers. A \emph{rational Dyck path} is a lattice path $x$ that starts at $(0,0)$, consists of $n$ North steps $N=(0,1)$ and $p$ East steps $E=(1,0)$, and never goes below the diagonal with rational slope $n/p$. Denote the set of all rational Dyck paths by $\mathfrak{D}_{n,p}$. For $(i,j)$ with $0\leq i\leq p$ and $0\leq j\leq n$ place the label $\ell_{i,j}=jp-in$ in the unit square with bottom right corner $(i,j)$. Given a rational Dyck path $x\in\mathfrak D_{n,p}$, we assign to each of its steps the label $\ell_{i,j}$, where $(i,j)$ is the starting point of the step. Let $(\ell_1,\ell_2,\dots,\ell_n)$ be the vector consisting of the labels of the North steps of $x$ (indicated in red in \reff{dyck}) ordered increasingly. Thus in \reff{dyck} we have $(\ell_1,\ell_2,\ell_3,\ell_4,\ell_5,\ell_6,\ell_7)=(0,2,11,19,20,22,38)$. Denote by $H(x)$ the set of positive labels below $x$ (indicated in green in \reff{dyck}).
These correspond exactly to those boxes below $x$ but strictly above the diagonal. Hence we have $\op{area}(x)=\#H(x)$. We introduce the following definitions. A \emph{codinv pair} of $x$ is a pair of integers $(\ell,h)$ such that $\ell$ is the label of a North step of $x$ and $h\in H(x)$ and $\ell<h<\ell+p$. The \emph{codinv tableau} of $x$ is the collection of numbers \begin{align} d_{i,j}(x) =\#\big\{(\ell_i,h):h\in H(x),\ell_i<h<\ell_i+p\text{ and }h\equiv\ell_j\mod n\big\} \end{align} where $i,j\in[n]$ with $i<j$. For example consider the Dyck path in \reff{dyck}, which has codinv pairs \begin{align*} \begin{matrix} (0,4),(0,6),(0,13),(0,1),(0,8),(0,15),(0,3),(0,10),(0,5),(0,12),\\ (2,4),(2,6),(2,13),(2,8),(2,15),(2,3),(2,10),(2,17),(2,5),(2,12),\\ (11,13),(11,15),(11,17),(11,24),(11,12),\\ (19,24),(19,31),(20,24),(20,31),(22,24),(22,31). \end{matrix} \end{align*} Its codinv tableau is found in \reff{codinv}. \begin{figure} \caption{The codinv tableau of the Dyck path in \reff{dyck}.} \label{Figure:codinv} \end{figure} \sk Similar constructions have appeared before. First we remark that the codinv tableau is related to, albeit not the same as, the laser fillings of Ceballos, Denton and Hanusa~\cite[Def.~5.13]{CDH2015}. The row-sums and column-sums of the codinv tableau and the laser filling of a Dyck path agree. However, the codinv tableau is always of staircase shape while the laser filling sits inside the boxes below the rational Dyck path. Secondly we note that codinv pairs have been considered by Gorsky and Mazin~\cite{GM2013} using slightly different notation. However, they only considered the column-sums of the codinv tableau. See also the remarks following \reft{zeta}. Our next aim is to relate rational Shi tableaux to codinv tableaux. Our starting point is an elegant bijection $\varphi:\mathfrak D_{n,p}\to\mathfrak C_{n,p}$ between rational Dyck paths and simultaneous cores discovered by Anderson~\cite{Anderson2002}.
For any finite set $H=\{h_1,\dots,h_k\}$ of positive integers there exists a unique partition $\lambda$ such that $H$ is the set of hook lengths of the cells in the first column of $\lambda$. For any $x\in\mathfrak D_{n,p}$ let $\varphi(x)$ be the partition such that the set of hook lengths of the cells in its first column equals $H(x)$. The Dyck path in \reff{dyck} thereby corresponds to the core in \reff{core}. \begin{mythrm}{anderson}~\textnormal{\cite[Prop.~1]{Anderson2002}} The map $\varphi:\mathfrak D_{n,p}\to\mathfrak C_{n,p}$ is a bijection. \end{mythrm} Gorsky, Mazin and Vazirani~\cite[Sec.~3.1]{GMV2014} defined a generalised Anderson map $\mathcal A$ that bijectively maps the set of $p$-stable affine permutations to the set of rational parking functions. We are interested in the restriction of the Anderson map to the set of dominant $p$-stable affine permutations, which can be written as the composition $\mathcal A=\varphi^{-1}\circ\alpha^{-1}\circ\gamma$ of maps we have already discussed. Here the inverse of Anderson's bijection $\varphi$ makes an appearance, which accounts for the name of the Anderson map. We use the Anderson map to relate rational Shi tableaux to the codinv statistic. \begin{figure} \caption{The balanced abacus $A=\gamma(\omega)$, where $\omega=[-2,15,-1,16,-14,10,4]$ (left), and the normalised abacus $B=\beta\circ\alpha^{-1}\circ\gamma(\omega)$ (right).} \label{Figure:abaci} \end{figure} \begin{mythrm}{dinvshi} Let $n,p$ be positive coprime integers and $\omega\in\affS_n^p$ be a dominant $p$-stable affine permutation. Then the rational Shi tableau of $\omega$ equals the codinv tableau of $\mathcal A(\omega)$. That is, $t_{i,j}^p(\omega)=d_{i,j}(\mathcal A(\omega))$ for all $i,j\in[n]$ with $i<j$. \end{mythrm} \begin{proof} Let $\omega^{-1}=t_qs$ where $q\in\check{Q}$ and $s\in\mathfrak{S}_n$, and fix $i,j\in[n]$ such that $i<j$.
By \refp{tijalphaA} $t_{i,j}^p(\omega)$ equals the number of inversions $(j,kn+i)$ of $\omega^{-1}$ such that $\omega^{-1}(j)-\omega^{-1}(kn+i)<p$. Let $A=\gamma(\omega)$ be an abacus on $n$ runners (see \reff{abaci}). Then $\omega^{-1}(j)$ is the minimal gap of $A$ in the runner $s(j)$. Moreover, $(j,kn+i)$ is an inversion of $\omega^{-1}$ contributing to $t_{i,j}^p(\omega)$ if and only if $\omega^{-1}(kn+i)$ is a non-minimal gap of $A$ in the runner $s(i)$ and \begin{align*} \omega^{-1}(j)-p<\omega^{-1}(kn+i)<\omega^{-1}(j). \end{align*} Hence $t_{i,j}^p(\omega)$ counts the number of non-minimal gaps $g$ in runner $s(i)$ such that $m-p<g<m$ where $m$ is the minimal gap in runner $s(j)$. Equivalently $t_{i,j}^p(\omega)$ counts the number of beads $b$ in runner $s(j)$ such that $m<b<m+p$ where $m$ is the minimal gap in runner $s(i)$. Define another abacus on $n$ runners, namely \begin{align*} B=\beta\circ\alpha^{-1}\circ\gamma(\omega)=\{z+\ell-1:z\in A\}, \end{align*} where $\ell$ is the length of the partition $\alpha^{-1}\circ\gamma(\omega)$. Moreover define $\sigma\in\mathfrak{S}_n$ by $\sigma(i)\equiv s(i)+\ell-1$ modulo $n$. Then $t_{i,j}^p(\omega)$ counts the number of beads $b$ in the runner $\sigma(j)$ of $B$ such that $m<b<m+p$ where $m$ is the minimal gap in the runner $\sigma(i)$ of $B$. Since $B$ is normalised, the minimal gap of each runner of $B$ is non-negative. Thus we may restrict attention to the positive beads of $B$. But the positive beads of $B$ are just the hook-lengths of the cells in the first column of $\alpha^{-1}\circ\gamma(\omega)$ and therefore make up the set $H(\mathcal A(\omega))$. Moreover the minimal gaps of the runners of $B$ are just the labels of the North steps of $\mathcal A(\omega)$. The theorem now follows from the observation that $\sigma$ sorts the minimal gaps of $B$ increasingly. This is implied by the fact that $s$ sorts the minimal gaps of $A$ increasingly, which form the window of the affine Gra{\ss}mannian permutation $\omega^{-1}$.
\end{proof} From \reft{dinvshi}, \refp{shiconj} and \refp{conjcore} we obtain an interesting result for free. \begin{mycor}{conjdinv} Let $n,p$ be positive coprime integers and $x\in\mathfrak{D}_{n,p}$ be a rational Dyck path. Then the codinv tableau of $x$ is the transpose of the codinv tableau of $\varphi^{-1}(\varphi(x)')$. \end{mycor} Note that the map $x\mapsto\varphi^{-1}(\varphi(x)')$, which is evidently an involution on $\mathfrak{D}_{n,p}$, has been called the \emph{rank complement} by Xin, while Ceballos, Denton and Hanusa call it \emph{conjugation} on Dyck paths. \begin{figure} \caption{The image $\zeta(x)$ of the rational Dyck path in \reff{dyck}.} \label{Figure:zeta} \end{figure} The zeta map $\zeta:\mathfrak D_{n,p}\to\mathfrak D_{n,p}$ is a map on rational Dyck paths that has appeared many times in the literature. Andrews, Krattenthaler, Orsina and Papi~\cite{AKOP2002} first defined the inverse of the zeta map in the Catalan case $\mathfrak D_{n,n+1}$. Haglund~\cite{Haglund2003} studied the Catalan instance of the zeta map in connection with diagonal harmonics. More recently multiple descriptions of the rational zeta map have been found (see~\cite{GM2013,ALW2014:sweep_maps}). The \emph{zeta map} can be defined as follows. Let $x=x_1x_2\dots x_{n+p}\in\mathfrak{D}_{n,p}$ be a rational Dyck path with steps $x_i\in\{N,E\}$. For $i\in[n+p]$ let $\ell(x_i)$ be the label of the step $x_i$. Then $\zeta(x)=x_{\sigma(1)}x_{\sigma(2)}\dots x_{\sigma(n+p)}$, where $\sigma\in\mathfrak{S}_{n+p}$ is the unique permutation such that $\ell(x_{\sigma(1)})<\ell(x_{\sigma(2)})<\dots<\ell(x_{\sigma(n+p)})$. The zeta map is accompanied by a second map on rational Dyck paths that is called the \emph{eta map} in~\cite{CDH2015}. It is defined by $\eta(x)=\zeta\circ\varphi^{-1}(\varphi(x)')$, where $\lambda'$ denotes the conjugate partition of $\lambda$ and $\varphi$ is Anderson's bijection. See \reff{zeta}. 
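The sorting rule above is an instance of the sweep map of Armstrong, Loehr and Warrington and is easy to experiment with by computer. The following Python sketch uses one common labelling convention, namely that a step starting at the lattice point $(a,b)$ receives the level $pb-na$ and the steps are sorted increasingly by level; the labelling $\ell$ used in this paper may differ from this choice by a normalisation, so the sketch illustrates the sorting mechanism rather than giving a verbatim implementation of $\zeta$.

```python
def dyck_paths(n, p):
    """All N/E lattice paths from (0,0) to (p,n) staying weakly above
    the diagonal, i.e. with p*b - n*a >= 0 at every visited point (a,b)."""
    paths = []
    def rec(word, a, b):
        if a == p and b == n:
            paths.append(word)
            return
        if b < n:                          # a North step never leaves the region
            rec(word + 'N', a, b + 1)
        if a < p and p * b - n * (a + 1) >= 0:
            rec(word + 'E', a + 1, b)
    rec('', 0, 0)
    return paths

def sweep(word, n, p):
    """Sort the steps of the path by the level p*b - n*a of their initial
    points; for gcd(n,p) = 1 the levels of initial points are distinct."""
    a = b = 0
    labelled = []
    for step in word:
        labelled.append((p * b - n * a, step))
        if step == 'N':
            b += 1
        else:
            a += 1
    return ''.join(step for _, step in sorted(labelled))

# On the two paths of D_{2,3} the map acts as a transposition:
# sweep('NNEEE', 2, 3) == 'NENEE' and sweep('NENEE', 2, 3) == 'NNEEE'.
```

On every small coprime case this convention permutes the set of rational Dyck paths, in line with the bijectivity of the zeta map established by Thomas and Williams.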
Using codinv tableaux we reprove a connection between the Anderson map, the zeta map and the Pak--Stanley labelling that was already observed in \cite[Thm.~5.3]{GMV2014}. Note that each rational Dyck path $x\in\mathfrak D_{n,p}$ is the South-East boundary of a partition $\lambda$ fitting inside the $p\times n$ rectangle. We call $\lambda$ the \emph{complement} of $x$. For example, the complement of the Dyck path $x$ in \reff{dyck} is the partition $\lambda=(11,6,6,4,3,2,0)$. The complements of the Dyck paths $\zeta(x)$ and $\eta(x)$ in \reff{zeta} are $(13,6,5,5,2,0,0)$ and $(10,10,5,2,2,2,0)$ respectively. \begin{mythrm}{zeta} Let $n,p$ be positive coprime integers and $\omega\in\affS_n^p$ be a dominant $p$-stable affine permutation. Then the complement of $\zeta\circ\mathcal A(\omega)$ equals the (reversed\footnote{In our convention partitions are weakly decreasing while $f(\omega)$ is weakly increasing.}) Pak--Stanley labelling $f(\omega)$. Moreover, the complement of $\eta\circ\mathcal A(\omega)$ equals the (reversed) dual Pak--Stanley labelling $f^*(\omega)$. \end{mythrm} \begin{proof} Let $x=\mathcal{A}(\omega)$. By \reft{dinvshi} the Pak--Stanley labelling $f(\omega)$ is given by the row-sums of the codinv tableau of $x$. Consider the $j$-th North step of $\zeta(x)$. An East step of $x$ precedes this North step in $\zeta(x)$ if and only if its label is less than $\ell_j$. Let $L$ be the set of lines of slope $n/p$ going through the bottom right corner of a cell labelled by $\ell_j-kn$ for some $k>0$. The label of an East step of $x$ is less than $\ell_j$ if and only if the East step is intersected by a line in $L$. Moreover each East step of $x$ is intersected by at most one such line. Thus the number of East steps preceding the $j$-th North step of $\zeta(x)$ is counted by the number of intersections of an East step of $x$ and a line in $L$. For each line in $L$ the number of intersected East steps equals the number of intersected North steps. 
Thus the number of East steps preceding the $j$-th North step of $\zeta(x)$ is counted by the number of intersections of a North step of $x$ and a line in $L$. It takes a moment of thought to verify that this number is given by the sum $\sum_{i=1}^{j-1}d_{i,j}(x)$ of a row of the codinv tableau. Indeed the number $d_{i,j}(x)$ counts the number of intersections of the North step of $x$ labelled $\ell_i$ and a line in $L$. We conclude that the $j$-th part of the complement of $\zeta(x)$ equals $f_{n-j+1}(\omega)$. The dual statement concerning the eta map could be deduced analogously. However, it follows immediately using the involutive automorphism (see \refp{shiconj} and \refp{conjcore}). \end{proof} Our proof is inspired by the resemblance of codinv tableaux and laser fillings. \reft{zeta} should therefore be compared to~\cite[Thm.~5.15]{CDH2015}. Moreover we remark that taking the column-sums of the codinv tableau actually coincides with the definition of a map of Gorsky and Mazin~\cite[Def.~3.3]{GM2013}, which is the eta map in our notation. Thus it is well worth comparing \reft{zeta} to~\cite[Thm.~4.12]{ALW2014:sweep_maps}. It is well known that $\op{skl}(\varphi(x))=(n-1)(p-1)/2-\op{area}(\zeta(x))$. This follows from Armstrong's definition of the zeta map~\cite[Sec.~4.2]{ALW2014:sweep_maps}, which uses cores. As a first consequence of \reft{zeta} we obtain a justification for calling the codinv tableau a refinement of the skew length statistic. \begin{mycor}{codinv} Let $x\in\mathfrak{D}_{n,p}$. Then \begin{align*} \sum_{i,j}d_{i,j}(x)&=\frac{(n-1)(p-1)}{2}-\op{area}(\zeta(x)). \end{align*} \end{mycor} Furthermore we may now use a result of Ceballos, Denton and Hanusa to prove that each rational Dyck path is determined uniquely by its codinv tableau, and equivalently, that each dominant $p$-stable affine permutation is determined uniquely by its rational Shi tableau. 
\begin{mythrm}{unique} Let $x,y\in\mathfrak D_{n,p}$ such that $d_{i,j}(x)=d_{i,j}(y)$ for all $i,j\in[n]$ with $i<j$. Then $x=y$. \end{mythrm} \begin{proof}[Proof of Theorems \ref{Theorem:shiA} and \ref{Theorem:unique}] We deduce the claims from \cite[Thm.~6.3]{CDH2015}, which asserts that any rational Dyck path $x$ can be reconstructed from the pair $(\zeta(x),\eta(x))$. By Theorems~\ref{Theorem:dinvshi} and~\ref{Theorem:zeta} the codinv tableau of $x$ encodes both $\zeta(x)$ and $\eta(x)$ in terms of column-sums and row-sums. Therefore it contains enough information to determine the path $x$ uniquely. Using the Anderson map again we obtain the analogous statement for rational Shi tableaux and dominant $p$-stable affine permutations. \end{proof} We close with an exciting recent result due to Thomas and Williams. \begin{mythrm}{zetabij}~\textnormal{\cite[Cor.~6.4]{ThoWil2016}} The zeta map $\zeta:\mathfrak{D}_{n,p}\to\mathfrak{D}_{n,p}$ is a bijection. \end{mythrm} It should be clear from the above discussion that \reft{zetabij} implies that each dominant $p$-stable affine permutation is determined uniquely by its Pak--Stanley labelling. Thus the stronger \reft{zetabij} could replace \cite[Thm.~6.3]{CDH2015} in the proof of \reft{shiA}. \end{document}
\begin{document} \begin{abstract} The concordance genus of a knot $K$ is the minimum Seifert genus of all knots smoothly concordant to $K$. Concordance genus is bounded below by the $4$-ball genus and above by the Seifert genus. We give a lower bound for the concordance genus of $K$ coming from the knot Floer complex of $K$. As an application, we prove that there are topologically slice knots with $4$-ball genus equal to one and arbitrarily large concordance genus. \end{abstract} \title{On the concordance genus of topologically slice knots} \section{Introduction} The \emph{concordance genus} of a knot $K$, $g_c(K)$, is the minimum genus of all knots smoothly concordant to $K$. The concordance genus is bounded below by the $4$-ball genus and above by the genus; that is, \[ g_4(K) \leq g_c(K) \leq g(K). \] Note that taking the connected sum with a slice knot does not change the value of $g_c$, but can increase the genus. In this manner, the gap between $g_c(K)$ and $g(K)$ can be made arbitrarily large. For many knots, $g_4(K)=g_c(K)$. For example, a consequence of the Milnor conjecture, first proved by Kronheimer and Mrowka \cite{KronMrowka}, is that $g_4(K)=g_c(K)=g(K)$ for torus knots. In \cite[Problem 14]{Gordonproblems}, Gordon asks if $g_4(K)=g_c(K)$ in general. Nakanishi \cite{Nakanishi} answered the question in the negative, using Alexander polynomials to show that the gap between $g_4(K)$ and $g_c(K)$ can be arbitrarily large. The more subtle question of whether there are algebraically slice knots for which the gap between $g_4(K)$ and $g_c(K)$ can be arbitrarily large was answered by Livingston in \cite{Livingstonconcordancegenus}, where he used Casson-Gordon invariants to find algebraically slice knots with $4$-ball genus equal to one and arbitrarily large concordance genus. Neither the Alexander polynomial nor Casson-Gordon invariants suffice to extend these results to topologically slice knots. 
In this paper, we give a lower bound for $g_c(K)$ coming from the knot Floer complex of $K$, and use this bound to give a family of topologically slice knots with smooth $4$-ball genus equal to one and arbitrarily large concordance genus. To a knot $K$ in $S^3$, Ozsv\'ath and Szab\'o \cite{OSknots}, and independently Rasmussen \cite{R}, associate a $\mathbb{Z} \oplus \mathbb{Z}$-filtered chain complex, $CFK^{\infty}(K)$, whose filtered chain homotopy type is an invariant of $K$. Associated to this chain complex are several concordance invariants; in this paper, we focus on the invariant $\varepsilon(K)$, a $\{-1, 0, 1\}$-valued invariant defined in \cite{Homsmooth}, and to a lesser extent, the invariant $\tau(K)$, defined in \cite{OS4ball}. Both $\varepsilon$ and $\tau$ are defined by studying certain natural maps on homology induced by inclusions and projections of appropriate subquotient complexes of $CFK^{\infty}(K)$. We say that two $\mathbb{Z} \oplus \mathbb{Z}$-filtered chain complexes, $C_1$ and $C_2$, are \emph{$\varepsilon$-equivalent} if \[ \varepsilon(C_1 \otimes C_2^*)=0,\] where $C^*$ denotes the dual of $C$. We say that two knots, $K_1$ and $K_2$, are \emph{$\varepsilon$-equivalent} if their knot Floer complexes are $\varepsilon$-equivalent, that is, if \[\ \varepsilon \big(CFK^{\infty}(K_1) \otimes CFK^{\infty}(K_2)^* \big)=0. \] As seen in the following theorem, $\varepsilon$-equivalence is closely related to concordance: \begin{Theorem}[{\cite{Homsmooth}}] \label{thm:epsilonequivalent} If two knots are concordant, then they are $\varepsilon$-equivalent. \end{Theorem} \noindent We define the \emph{breadth} of a $\mathbb{Z} \oplus \mathbb{Z}$-filtered chain complex $C$, $b(C)$, to be \[ b(C)=\textup{max} \{ j \ | \ H_*(C(0, j)) \neq 0 \}, \] where $C(i, j)$ denotes the $(i, j)$-graded summand of the associated graded complex. Recall from \cite[Theorem 1.2]{OSgenusbounds} that \[ g(K)=b(CFK^{\infty}(K)). 
\] The invariant $\gamma(K)$ is defined to be the minimum breadth of all filtered chain complexes $\varepsilon$-equivalent to $CFK^{\infty}(K)$: \[ \gamma(K)= \textup{min} \{ b(C) \ | \ \varepsilon(CFK^{\infty}(K) \otimes C^*)=0 \}.\] \begin{Theorem} \label{thm:gamma} The invariant $\gamma(K)$ gives a lower bound on the smooth concordance genus of $K$; that is, \[g_c(K) \geq \gamma(K).\] \end{Theorem} \noindent At first glance, this may seem like an intractable invariant, as the set of chain complexes $\varepsilon$-equivalent to $CFK^{\infty}(K)$ is infinite. However, in many situations, there are tractable numerical invariants associated to the $\varepsilon$-equivalence class of $K$ giving lower bounds for $\gamma(K)$, and hence also for $g_c(K)$. In the next theorem, we use these bounds to prove a result concerning the concordance genus of a family of topologically slice knots. Let $D$ denote the (positive, untwisted) Whitehead double of the right-handed trefoil, and let $K_{p,q}$ denote the $(p, q)$-cable of $K$, where $p$ indicates the longitudinal winding and $q$ the meridional winding. We write $-K$ to denote the reverse of the mirror of $K$. \begin{Theorem} \label{thm:theknots} Let $K_p=D_{p, 1} \# -D_{p-1, 1}$. Then $K_p$ is topologically slice with $g_4(K_p)=1$ and $g_c(K_p)\geq p$. \end{Theorem} \noindent In \cite[Theorem 1.5]{Livingstonconcordancegenus}, Livingston constructs algebraically slice knots with $4$-ball genus equal to one and arbitrarily large concordance genus. However, his proof relies on Casson-Gordon invariants, and so his examples are not topologically slice. He also remarks on the inherent challenge in bounding the concordance genus: one must show that the given knot is not concordant to any knot in the infinite family of knots with genus less than a given $N$. The invariant $\gamma$ can help significantly in this regard. 
Moreover, the invariant $\gamma$ can give useful bounds on the concordance genus of topologically slice knots, while the techniques of \cite{Livingstonconcordancegenus} cannot. \noindent \textbf{Organization.} In Section \ref{sec:bounding}, we recall the necessary properties of Heegaard Floer homology and knot Floer homology, and use them to prove Theorem \ref{thm:gamma}. In Section \ref{sec:theknots}, we apply those results to give a family of topologically slice knots with $4$-ball genus one and arbitrarily large concordance genus. We work with coefficients in $\mathbb{F}=\mathbb{Z}/2\mathbb{Z}$ throughout. Unless otherwise stated, we work in the smooth category. \noindent \textbf{Acknowledgements.} I would like to thank Chuck Livingston and Peter Horn for many helpful conversations. \section{Bounding the concordance genus} \label{sec:bounding} We recall the basic definitions of knot Floer homology, assuming that the reader is familiar with these invariants; for an expository overview, we suggest \cite{OSsurvey}. In this paper, we concern ourselves primarily with the algebraic properties of the invariant. To a knot $K \subset S^3$, Ozsv\'ath-Szab\'o \cite{OSknots}, and independently Rasmussen \cite{R}, associate $CFK^{\infty}(K)$, a $\mathbb{Z}$-graded, $\mathbb{Z}$-filtered freely generated chain complex over the ring $\mathbb{F}[U, U^{-1}]$, where $U$ is a formal variable. The filtered chain homotopy type of $CFK^{\infty}(K)$ is an invariant of the knot $K$. The differential does not decrease the $U$-exponent, and the $U$-exponent (more precisely, the negative of the $U$-exponent) induces a second $\mathbb{Z}$-filtration, giving $CFK^{\infty}(K)$ the structure of a $\mathbb{Z} \oplus \mathbb{Z}$-filtered chain complex. The ordering on $\mathbb{Z} \oplus \mathbb{Z}$ is given by $(i, j) \leq (i', j')$ if $i \leq i'$ and $j \leq j'$. 
This chain complex is freely generated over $\mathbb{F} [U, U^{-1}]$ by tuples of intersection points in a doubly pointed Heegaard diagram for $S^3$ compatible with the knot $K$. Each generator $x$ comes with a homological, or Maslov, grading, $M(x)$, and an Alexander filtration, $A(x)$. The differential, $\partial$, decreases the Maslov grading by one, and respects the Alexander filtration; that is, \[ M(\partial x) = M(x) -1 \qquad \textup{and} \qquad A(\partial x ) \leq A(x).\] Multiplication by $U$ shifts the Maslov grading by two and the Alexander filtration by one: \[ M(U \cdot x) = M(x) -2 \qquad \textup{and} \qquad A(U \cdot x) = A(x) -1. \] It is often convenient to graphically represent this complex in the $(i, j)$-plane, where the $i$-axis corresponds to $-(U\textup{-exponent})$, and the $j$-axis corresponds to the Alexander filtration. The Maslov grading is suppressed from this picture. A generator $x$ is placed at $(0, A(x))$, and an element of the form $U^n \cdot x$ is placed at $(-n, A(x)-n)$. Given a $\mathbb{Z} \oplus \mathbb{Z}$-filtered chain complex $C$ and $S \subset \mathbb{Z} \oplus \mathbb{Z}$, we write $C\{ S \}$ to denote the set of elements in the plane whose $(i, j)$-coordinates are in $S$, together with the arrows between them. If $S$ has the property that $(i, j) \in S$ implies that $(i', j') \in S$ for all $(i', j') \leq (i, j)$, then $C\{S\}$ is a subcomplex of $C$. We write $C(i, j)$ to denote the subquotient complex with coordinates $(i, j)$, that is, $C \{ (i, j) \}$. The $\mathbb{Z}$-filtered complex $\widehat{CFK}(K)$ is the subquotient complex consisting of the $j$-axis, i.e., $C\{ i \leq 0 \}/C\{ i<0 \}$. The homology of the associated graded object of $\widehat{CFK}(K)$ is $\widehat{HFK}(K)$. The groups $\widehat{HFK}(K)$ can themselves be viewed as a chain complex, with the differential induced by the higher order, i.e., non-filtration preserving, differentials on $\widehat{CFK}(K)$. 
Moreover, up to filtered chain homotopy equivalence, $\widehat{HFK}(K)$ is a basis over $\mathbb{F}[U, U^{-1}]$ for $CFK^{\infty}(K)$. Choosing $\widehat{HFK}(K)$ as a basis for $CFK^{\infty}(K)$ has the advantage that it is \emph{reduced}; that is, the differential strictly lowers the filtration. Graphically, this means that each arrow will point strictly downward or to the left (or both). We have the following chain homotopy equivalences \cite[Theorem 7.1 and Section 3.5]{OSknots}: \begin{align*} CFK^{\infty}(K_1 \# K_2) &\simeq CFK^{\infty}(K_1) \otimes_{\mathbb{F}[U, U^{-1}]} CFK^{\infty}(K_2) \\ CFK^{\infty}(-K) &\simeq CFK^{\infty}(K)^* \end{align*} where $CFK^{\infty}(K)^*$ denotes the dual of $CFK^{\infty}(K)$, i.e., $\textup{Hom}_{\mathbb{F}[U, U^{-1}]}(CFK^{\infty}(K), \mathbb{F}[U, U^{-1}])$. To fully exploit the richness of the invariant $CFK^{\infty}(K)$, it is helpful to study certain induced maps on homology. For example, the Ozsv\'ath-Szab\'o concordance invariant $\tau$ is defined in \cite{OS4ball} to be \[ \tau(K) = \textup{min} \{ s \ | \ \iota : C\{i=0, j \leq s \} \rightarrow C\{ i=0 \} \textup{ induces a non-trivial map on homology} \}, \] where $\iota$ is the natural inclusion of chain complexes. Note that $H_*(C\{ i=0\})\cong \widehat{HF}(S^3) \cong \mathbb{F}$. The invariant $\tau(K)$ provides a lower bound on the $4$-ball genus of $K$, and gives a surjective homomorphism from the smooth concordance group to the integers \cite{OS4ball}. More recently, the $\{-1, 0, 1\}$-valued concordance invariant $\varepsilon(K)$ has been defined in \cite{Homsmooth}. To define $\varepsilon$, one first considers the map on homology, $F_*$, induced by the chain map \[ F: C\{ i=0 \} \rightarrow C\{ \textup{min}(i, j-\tau)=0 \} \] where $\tau=\tau(K)$, and the chain map consists of quotienting by $C\{ i=0, j< \tau \}$ followed by the inclusion of $C \{ i=0, j \geq \tau \}$ into $C\{ \textup{min}(i, j-\tau)=0 \}$. 
Similarly, we consider the map $G_*$, induced by \[ G: C\{ \textup{max}(i, j-\tau)=0 \} \rightarrow C\{ i=0 \}, \] the composition of quotienting by $C \{ i < 0, j=\tau \}$ and including $C \{ i=0, j \leq \tau \}$ into $C\{ i=0 \}$. \begin{figure} \caption{Left, the subquotient complex $C \{ i=0 \}$.} \end{figure} \begin{definition} The invariant $\varepsilon$ is defined in terms of $F_*$ and $G_*$ as follows: \begin{itemize} \item $\varepsilon(K)=1$ if $F_*$ is trivial (in which case $G_*$ is necessarily non-trivial). \item $\varepsilon(K)=-1$ if $G_*$ is trivial (in which case $F_*$ is necessarily non-trivial). \item $\varepsilon(K)=0$ if $F_*$ and $G_*$ are both non-trivial. \end{itemize} \end{definition} \noindent See \cite[Section 3]{Homsmooth} for details. Two knots $K_1$ and $K_2$ are \emph{$\varepsilon$-equivalent} if \[ \varepsilon(K_1 \# -K_2)=0.\] Concordant knots are $\varepsilon$-equivalent \cite[Theorem 2]{Homsmooth}. \begin{proof}[Proof of Theorem \ref{thm:gamma}] The proof that $\gamma(K)$ gives a lower bound on concordance genus is an immediate consequence of the definition of $\gamma(K)$, as follows. By Theorem \ref{thm:epsilonequivalent}, any two concordant knots are $\varepsilon$-equivalent. Since $g(K)=b(CFK^{\infty}(K))$ by \cite[Theorem 1.2]{OSgenusbounds} and \[ \gamma(K)= \textup{min} \{ b(C) \ | \ C \textup{ is $\varepsilon$-equivalent to } CFK^{\infty}(K) \}, \] it follows immediately that \[ g_c(K) \geq \gamma(K).\] \end{proof} Further invariants are defined in \cite[Section 6]{Homsmooth}. Suppose $\varepsilon(K)=1$, and consider the map on homology $H_s$ induced by the chain map \[ C\{ i=0 \} \rightarrow C\{ \textup{min} (i, j-\tau)=0, i \leq s\}, \] where $s$ is a non-negative integer, and the map consists of quotienting by $C\{ i=0, j < \tau \}$, followed by inclusion. When $s$ is sufficiently large, the map $H_s$ is trivial since $\varepsilon(K)=1$, while when $s=0$, it is not difficult to see that the map $H_s$ is non-trivial. 
Thus, one can define \[a_1= \textup{min} \{ s \ | \ H_s \textup{ is trivial} \}. \] Going even further, consider the map on homology $H_{a_1, s}$ induced by \[ C\{ i=0 \} \rightarrow C \big\{ \{ \textup{min} (i, j-\tau)=0, i \leq a_1\} \cup \{ i=a_1, \tau-s \leq j < \tau\} \big\}, \] where the map consists of quotienting by $C\{ i=0, j < \tau \}$, followed by inclusion. Define \[a_2= \textup{min} \{ s \ | \ H_{a_1, s} \textup{ is non-trivial} \}. \] The set $\{ s \ | \ H_{a_1, s} \textup{ is non-trivial} \}$ may be empty -- there is no reason why the map $H_{a_1, s}$ must be non-trivial for any $s$ -- in which case the invariant $a_2(K)$ is undefined. \begin{lemma}[{\cite[Lemma 6.2]{Homsmooth}}] \label{lem:basis} Let $a_1=a_1(K)$ and let $a_2=a_2(K)$ be well-defined. Then there exists a basis $\{x_i\}$ over $\mathbb{F}[U, U^{-1}]$ for $CFK^{\infty}$ with basis elements $x_0$, $x_1$, and $x_2$ with the property that \begin{itemize} \item \label{it:a_1} There is a horizontal arrow of length $a_1$ from $x_1$ to $x_0$. \item There are no other horizontal or vertical arrows to or from $x_0$. \item There are no other horizontal arrows to or from $x_1$. \item \label{it:a_2} There is a vertical arrow of length $a_2$ from $x_1$ to $x_2$. \item There are no other vertical arrows to or from $x_1$ or $x_2$. \end{itemize} \end{lemma} \noindent See Figure \ref{fig:basis}. \begin{figure} \caption{Left, the complex $C\{ \textup{min}(i, j-\tau)=0 \}$.} \label{fig:basis} \end{figure} The numbers $a_1$ and $a_2$ are invariants of $\varepsilon$-equivalence \cite[Lemma 6.1]{Homsmooth}. We recall the proof here. If $K_1$ and $K_2$ are $\varepsilon$-equivalent, then $\varepsilon(K_1 \# -K_2)=0$ and by \cite[Lemma 3.3]{Homcables}, there exists a basis for $CFK^{\infty}(K_1 \# -K_2)$ with a distinguished basis element $x_0$ with no incoming or outgoing vertical or horizontal arrows. 
Similarly, there exists a basis for $CFK^{\infty}(K_2 \# -K_2)$ with a distinguished basis element $y_0$. The knot $K_1 \# -K_2 \# K_2$ is $\varepsilon$-equivalent to $K_1$ and $K_2$, and we may compute $a_1(K_1 \# -K_2 \# K_2)$ and $a_2(K_1 \# -K_2 \# K_2)$ by considering either \[ \{x_0\} \otimes CFK^{\infty}(K_2) \qquad \textup{or} \qquad CFK^{\infty}(K_1) \otimes \{y_0\}. \] The former gives us $a_1(K_2)$ and $a_2(K_2)$, and the latter gives us $a_1(K_1)$ and $a_2(K_1)$, completing the proof. At times, it may be difficult to compute $\gamma(K)$ directly, but we can bound it using the invariants $\tau(K)$, $a_1(K)$, and $a_2(K)$. \begin{lemma} \label{lem:gamma} Suppose that $\varepsilon(K)=1$, and $a_2(K)$ is defined. Then \[ \gamma(K) \geq |\tau(K)-a_1(K)-a_2(K)|. \] \end{lemma} \begin{proof} From the basis found in Lemma \ref{lem:basis} and the fact that $\tau$, $a_1$, and $a_2$ are invariants of $\varepsilon$-equivalence, it follows that \[ H_*\Big(C\big(0, \tau(K)-a_1(K)-a_2(K) \big) \Big) \neq 0, \] for any complex $C$ that is $\varepsilon$-equivalent to $CFK^{\infty}(K)$. Using the various symmetry properties of $CFK^{\infty}(K)$ \cite[Section 3.5]{OSknots}, it follows that \[ H_*\Big(C\big(0, |\tau(K)-a_1(K)-a_2(K)|\big)\Big) \neq 0, \] as well. This implies that $b(C) \geq |\tau(K)-a_1(K)-a_2(K)|$ for any $C$ that is $\varepsilon$-equivalent to $CFK^{\infty}(K)$, giving the desired bound. \end{proof} \section{The knots $D_{p, 1} \# -D_{p-1, 1}$} \label{sec:theknots} Let $D$ denote the (positive, untwisted) Whitehead double of the right-handed trefoil. Let $K_{p,q}$ denote the $(p, q)$-cable of $K$, where $p$ indicates the longitudinal winding and $q$ the meridional winding. We will study various properties of the family of knots \[ D_{p, 1} \# -D_{p-1, 1}, \quad p>1.\] Since the Alexander polynomial of $D$ is equal to one, by Freedman \cite{Freedman} $D$ is topologically slice. 
Hence the $(p, 1)$-cable of $D$ is topologically concordant to the underlying pattern torus knot, which is unknotted. It follows that the knot $D_{p, 1} \# -D_{p-1, 1}$ is topologically slice. In the following lemma, we will show that these knots are never smoothly slice. \begin{lemma} \label{lem:4genus1} The smooth $4$-ball genus of the knot $D_{p, 1} \# -D_{p-1, 1}$ is equal to one. \end{lemma} \begin{proof} A genus $p$ Seifert surface for $D_{p, 1}$ can be built from $p$ parallel copies of a genus one Seifert surface for $D$, and $p-1$ half-twisted bands connecting them. Likewise, we may build a genus $p-1$ Seifert surface for $-D_{p-1, 1}$. Connecting these two Seifert surfaces together with a band yields a genus $2p-1$ Seifert surface $F$ for $D_{p, 1} \# -D_{p-1, 1}$. The slice knot $D_{p-1, 1} \# -D_{p-1, 1}$ sits on $F$, and furthermore bounds a subsurface of genus $2p-2$. We may perform surgery on $F$ in $B^4$ along $D_{p-1, 1} \# -D_{p-1, 1}$, yielding a genus one slice surface for $D_{p, 1} \# -D_{p-1, 1}$. By \cite{HeddenWhitehead}, $\tau(D)=1$, and by \cite[Theorem 1.2]{HeddencablingII} (cf. \cite[Theorem 1]{Homcables}), it follows that $\tau(D_{p, 1})=p$. Therefore, $\tau(D_{p, 1} \# -D_{p-1, 1})=1$, which is a lower bound on the $4$-ball genus of the knot \cite{OS4ball}. Since this bound can be realized, it follows that $g_4(D_{p, 1} \# -D_{p-1, 1})=1$. \end{proof} To bound the concordance genus of $K_p=D_{p, 1} \# -D_{p-1, 1}$, we consider its knot Floer complex. We do this using the tools of \cite{Homsmooth} together with the bordered Floer homology package of Lipshitz, Ozsv\'ath, and Thurston \cite{LOT}, as applied to cables by Petkova \cite{Petkova}. The knot $D$ is $\varepsilon$-equivalent to the $(2,3)$-torus knot $T_{2,3}$ \cite[Lemma 6.12]{Homsmooth}. Moreover, if two knots are $\varepsilon$-equivalent, then so are their satellites \cite[Proposition 4]{Homsmooth}. 
Therefore, to understand $D$ and its satellites from the perspective of $\varepsilon$-equivalence, we may instead work with $T_{2,3}$ and its satellites. The advantage of this is that the knot Floer complex of $T_{2,3}$ is simpler to work with from a computational perspective. It has rank three, and is homologically thin, meaning that $\widehat{HFK}(T_{2,3})$ is supported on a single diagonal with respect to its bigrading. Cables of homologically thin knots are studied by Petkova in \cite{Petkova}, where she describes $\widehat{HFK}(K_{p, pn+1})$ for any homologically thin knot $K$ in terms of the Alexander polynomial of $K$, $\tau(K)$, $p$, and $n$. The proof of her main result relies on bordered Floer homology, and the same techniques can be used to determine the $\mathbb{Z}$-filtered chain complex $\widehat{CFK}(K_{p, pn+1})$. Since $T_{2,3}$ is homologically thin, we may use Theorem 1 of \cite{Petkova} to compute the $\mathbb{Z}$-filtered chain complex $\widehat{CFK}(T_{2,3; p, 1})$, from which we can determine certain information about $CFK^{\infty}(T_{2,3;p,1})$, which is $\varepsilon$-equivalent to $CFK^{\infty}(D_{p,1})$. More precisely, this information will be the invariants $a_1$ and $a_2$, which will determine the bounds on concordance genus necessary for Theorem \ref{thm:theknots}. Towards this end, a useful tool is the well-known ``edge reduction'' procedure for filtered chain complexes over $\mathbb{F}$; see, for example, \cite[Section 2.6]{Levine}. That is, we may depict a filtered chain complex as a directed graph, where there is an arrow from $x_i$ to $x_j$ if $x_j$ appears with non-zero coefficient in $\partial x_i$. We label the arrow from $x_i$ to $x_j$ with the Alexander filtration difference between $x_i$ and $x_j$. 
If there is an arrow from $x_i$ to $x_j$ that preserves filtration, we may cancel it by deleting $x_i$ and $x_j$ from the graph, and for each $k$ and $\ell$ with edges \begin{align*} x_k &\xrightarrow{a} x_j \\ x_i &\xrightarrow{b} x_\ell, \end{align*} we either add an arrow from $x_k$ to $x_\ell$ if one was not there previously, or delete the arrow from $x_k$ to $x_\ell$ if there was one. See Figure \ref{fig:edgereduction}. If we add an arrow from $x_k$ to $x_\ell$, then its filtration shift is $a+b$, where $a$ and $b$ were the filtration shifts of the arrows from $x_k$ to $x_j$ and from $x_i$ to $x_\ell$, respectively. This procedure corresponds to the following chain homotopy equivalence, consisting of a change of basis which yields an acyclic subcomplex: \begin{itemize} \item For each $x_k$ with an arrow to $x_j$, we replace $x_k$ with $x_k+x_i$. \item The basis element $x_j$ is replaced with $\partial x_i$. \item The subcomplex spanned by $x_i$ and $\partial x_i$ is acyclic. \end{itemize} We make use of this procedure in the following proposition. \begin{figure} \caption{An example of edge reduction. Left, before reduction; right, after.} \label{fig:edgereduction} \end{figure} \begin{proposition} \label{prop:cable} The group $\widehat{HFK}(T_{2,3;p,1})$ has rank $6p-5$. The generators are listed in Table \ref{tab:cable}, and the non-zero higher differentials are \begin{align*} \partial b_1 v_1 &= b_1\mu_1[p] \\ \partial b_j v_1 &= b_{2p-j-1}v_1[p-j] \qquad 2 \leq j \leq p-1 \\ \partial b_jv_2 &= b_{j+1}\mu_1[1] \ \quad \qquad 1 \leq j \leq p-2 \\ \partial b_{p-1}v_2 &= b_pv_2[1] \\ \partial b_j \mu_2 &= b_{2p-j-1}\mu_2[p-j] \quad \quad 1 \leq j \leq p-1, \end{align*} where the brackets denote the drop in Alexander filtration, e.g., the Alexander filtration of $b_1 \mu_1$ is $p$ less than that of $b_1v_1$. \end{proposition} \begin{table}[htb!] 
\begin{center} \begin{tabular}{llllll} \hline &Generator \qquad \qquad & $(M, A)$ \qquad \qquad \qquad \qquad & $M+2p-2A$ &&\\\hline &$au_1$ & $(0, p)$ & $0$ &\\ &$b_1v_1$ & $(-1, p-1)$ & $1$ &\\ &$b_1\mu_1$ & $(-2, -1)$ & $2p$ &\\ &$b_jv_2$ & $(-2j-1, -j)$ & $2p-1$ &$1 \leq j \leq p-2$&\\ &$b_{j+1}\mu_1$ \qquad \quad & $(-2j-2, -j-1)$ & $2p$ &$1 \leq j \leq p-2$&\\ &$b_{p-1}v_2 $ & $(-2p+1, -p+1)$ & $2p-1$ &&\\ &$b_pv_2$ & $(-2p, -p)$ & $2p$ & \\ &$b_jv_1$ & $(-1, -j+p)$ & $-1+2j$ &$2 \leq j \leq p-1$&\\ &$b_{2p-1-j}v_1$ & $(-2, 0)$ & $2p-2$ &$2 \leq j \leq p-1$&\\ &$b_j\mu_2$ & $(0, -j+p)$ & $2j$ &$1 \leq j \leq p-1$ &\\ &$b_{2p-1-j}\mu_2$ & $(-1, 0)$ & $2p-1$ &$1 \leq j \leq p-1$ &\\\hline \end{tabular} \end{center} \caption{$\widehat{HFK}(T_{2,3;p,1})$} \label{tab:cable} \end{table} \begin{proof} We use \cite[Theorem 1]{Petkova} to determine $\widehat{HFK}(T_{2,3;p,1})$. We also need the higher differentials (i.e., those that do not preserve the Alexander grading) in order to determine the values of $a_1$ and $a_2$, and so we repeat the calculation of $\widehat{HFK}(T_{2,3;p,1})$ below, keeping track of this additional data. For background on bordered Floer homology, see \cite{LOT} or \cite[Section 2]{Homcables}. We prefer to work with the $\mathbb{Z}$-filtered chain complex $\widehat{CFK}$ rather than the $\mathbb{F}[U]$-module $gCFK^-$, and so we use the basepoint conventions described in \cite[Remark 4.2]{Homcables}. In particular, the $\mathcal{A}_\infty$ relations on $\widehat{CFA}$ now each contribute a filtration shift, denoted with square brackets. \begin{figure} \caption{A genus one bordered Heegaard diagram $\mathcal{H}$.} \label{fig:cablepattern} \end{figure} We use the notation of \cite{Petkova}, which matches that of \cite{LOT}. Let $\widehat{CFA}(p, 1)$ denote the type $A$ structure associated to the diagram in Figure \ref{fig:cablepattern}. 
The diagram describes the $(p, 1)$-torus knot in the solid torus, where $p$ denotes the longitudinal winding. There are $2p-1$ generators, $a, b_1, b_2, \ldots, b_{2p-2}$, where $a \cdot \iota_0=a$, and $b_i \cdot \iota_1 = b_i$. In \cite[Section 4.1]{Homcables}, the $\mathcal{A}_\infty$ relations on $\widehat{CFA}(p, 1)$ are determined to be \begin{equation*} \begin{array}{lll} m_{3+i}(a, \rho_3, \overbrace{\rho_{23}, \ldots, \rho_{23}}^{i}, \rho_2)=a[pi+p], &&i \geq 0 \\ m_{4+i+j}(a, \rho_3, \overbrace{\rho_{23}, \ldots, \rho_{23}}^{i}, \rho_2, \overbrace{\rho_{12}, \ldots, \rho_{12}}^{j}, \rho_1)=b_{j+1}[pi+j+1], && 0\leq j \leq p-2\\ && i \geq 0\\ m_{2+j}(a, \overbrace{\rho_{12}, \ldots, \rho_{12}}^{j}, \rho_1)=b_{2p-j-2}[0], &&0\leq j \leq p-2 \\ &&\\ m_1(b_j)=b_{2p-j-1}[p-j], &&1\leq j \leq p-1 \\ m_{3+i}(b_j, \rho_2, \overbrace{\rho_{12}, \ldots, \rho_{12}}^{i}, \rho_1)=b_{j+i+1}[i+1], &&1\leq j \leq p-2\\ && 0\leq i \leq p-j-2 \\ m_{3+i}(b_j, \rho_2, \overbrace{\rho_{12}, \ldots, \rho_{12}}^{i}, \rho_1)=b_{j-i-1}[0], &&p+1\leq j \leq 2p-2\\ && 0 \leq i \leq j-p-1.\\ \end{array} \end{equation*} Let $Y$ denote the $0$-framed complement of the right-handed trefoil. By \cite[Theorem A.11]{LOT}, the type $D$ structure $\widehat{CFD}(Y)$ is as shown in Figure \ref{fig:CFDRHT}, where the generators $u_i$ are in the idempotent $\iota_0$, i.e., $\iota_0 \cdot u_i = u_i$, and the remaining generators are in the idempotent $\iota_1$. 
\begin{figure} \caption{$\widehat{CFD}(Y)$} \label{fig:CFDRHT} \end{figure} The generators and differentials of $\widehat{CFA}(p, 1) \boxtimes \widehat{CFD}(Y)$ are \begin{equation*} \begin{array}{ll} \partial (au_1) = 0 &\\ \partial (au_2) = au_3[p] + b_1\mu_1[1] + b_{2p-2}v_1[0] \quad &\\ \partial (au_3) = b_{2p-2}\mu_1[0] &\\ \partial (b_jv_1) = b_{2p-j-1}v_1[p-j], & 1 \leq j \leq p-1 \\ \partial (b_jv_2) = b_{2p-j-1}v_2[p-j] + b_{j+1}\mu_1[1], & 1 \leq j \leq p-2 \\ \partial (b_{p-1}v_2) = b_pv_2[1] &\\ \partial (b_j \mu_1) = b_{2p-j-1}\mu_1[p-j], & 1\leq j \leq p-1 \\ \partial (b_j \mu_2) = b_{2p-j-1}\mu_2[p-j], & 1 \leq j \leq p-1 \\ \partial (b_pv_1) = 0 &\\ \partial (b_pv_2) = 0 &\\ \partial (b_p\mu_1) = 0 &\\ \partial (b_p\mu_2) = 0 &\\ \partial (b_jv_1) = 0, & p+1 \leq j \leq 2p-2 \\ \partial (b_jv_2) = b_{j-1}\mu_1[0], & p+1 \leq j \leq 2p-2 \\ \partial (b_j\mu_1) = 0, & p+1 \leq j \leq 2p-2 \\ \partial (b_j\mu_2) = 0, & p+1 \leq j \leq 2p-2. \\ \end{array} \end{equation*} The change in Alexander filtration, denoted in square brackets, can be determined from the relative Alexander filtration shifts in the $\mathcal{A}_\infty$ relations on $\widehat{CFA}(p, 1)$. There is a summand of $\widehat{CFK}(T_{2,3;p,1})$ consisting of the generators \[ au_2, \ \ au_3, \ \ b_1v_1, \ \ b_1\mu_1, \ \ b_{2p-2}v_1, \ \ b_{2p-2}\mu_1 \] with the nonzero differentials \begin{align*} \partial (au_2) &= au_3[p] + b_1\mu_1[1] + b_{2p-2}v_1[0] \\ \partial (b_1v_1) &= b_{2p-2}v_1[p-1] \\ \partial (b_1\mu_1) &= b_{2p-2}\mu_1[p-1] \\ \partial (au_3) &= b_{2p-2}\mu_1[0]. \end{align*} See Figure \ref{subfig:summand1a}. We cancel the edge between $au_2$ and $b_{2p-2}v_1$, and the edge between $au_3$ and $b_{2p-2}\mu_1$, which introduces an edge between $b_1v_1$ and $b_1\mu_1$. The summand now consists of \[ b_1v_1, \ \ b_1\mu_1 \] with the differential \[ \partial (b_1v_1) = b_1\mu_1[p].\] See Figure \ref{subfig:summand1}.
Similarly, when $p\geq 3$, there is a summand of $\widehat{CFK}(T_{2,3;p,1})$ consisting of the generators \[ b_jv_2, \ \ b_{2p-j-1}v_2, \ \ b_{j+1}\mu_1, \ \ b_{2p-j-2}\mu_1 \] for $1 \leq j \leq p-2$, with the following nonzero differentials \begin{align*} \partial (b_jv_2) &= b_{2p-j-1}v_2[p-j]+b_{j+1}\mu_1[1] \\ \partial (b_{2p-j-1}v_2) &= b_{2p-j-2}\mu_1[0]\\ \partial (b_{j+1}\mu_1) &= b_{2p-j-2}\mu_1[p-j-1]. \end{align*} After canceling the edge between $b_{2p-j-1}v_2$ and $b_{2p-j-2}\mu_1$, we reduce the summand to \[ b_jv_2, \ \ b_{j+1}\mu_1 \] with the nonzero differential \[ \partial (b_jv_2) = b_{j+1}\mu_1[1].\] See Figure \ref{fig:summand2}. The remaining summands of $\widehat{CFK}(T_{2,3;p,1})$ are shown in Figure \ref{fig:summand3}. \begin{figure} \caption{A summand of $\widehat{CFK}(T_{2,3;p,1})$} \label{subfig:summand1a} \label{subfig:summand1} \label{fig:summand1} \end{figure} \begin{figure} \caption{A summand of $\widehat{CFK}(T_{2,3;p,1})$} \label{subfig:summand2} \label{fig:summand2} \end{figure} \begin{figure} \caption{The remaining summands of $\widehat{CFK}(T_{2,3;p,1})$} \label{fig:summand3} \end{figure} After applying the edge reduction procedure, the nonzero higher differentials on $\widehat{HFK}(T_{2,3;p,1})$ are \begin{align*} \partial (b_1v_1) &= b_1\mu_1[p] \\ \partial (b_iv_2)&= b_{i+1}\mu_1[1], \quad \qquad 1 \leq i \leq p-2 \\ \partial (b_{p-1}v_2) &= b_pv_2[1] \\ \partial (b_iv_1) &= b_{2p-1-i}v_1[p-i], \qquad 2 \leq i \leq p-1 \\ \partial (b_{i}\mu_2 ) &= b_{2p-1-i}\mu_2[p-i], \qquad 1 \leq i \leq p-1, \end{align*} as depicted in Figures \ref{subfig:summand1}, \ref{subfig:summand2}, and \ref{fig:summand3}. We determine the gradings in Table \ref{tab:cable} using \cite[Theorem 1]{Petkova}. Due to our choice of basepoint conventions, our gradings differ from those in \cite{Petkova} in the following ways: our Alexander grading $A$ is the negative of Petkova's, and our Maslov grading $M$ is Petkova's $N$. This completes the proof of the proposition.
\end{proof} The basis for $\widehat{HFK}$ in the above proposition has a particularly simple form. In the language of \cite[Definition 11.25]{LOT}, it is \emph{simplified}; that is, there is at most one arrow starting or ending at each basis element. \begin{figure} \caption{$\widehat{CFK}(T_{2,3;p,1})$} \end{figure} \begin{lemma} \label{lem:a1a21p} Let $D$ denote the (positive, untwisted) Whitehead double of the right-handed trefoil, and $D_{p,1}$ its $(p,1)$-cable, $p>1$. Then $a_1(D_{p,1})=1$ and $a_2(D_{p,1})=p$. \end{lemma} \begin{proof} The knot $T_{2,3;p,1}$ is $\varepsilon$-equivalent to $D_{p,1}$, so we will study $CFK^{\infty}(T_{2,3;p,1})$ instead of $CFK^{\infty}(D_{p,1})$. By \cite[Theorem 2]{Homcables}, we know that $\varepsilon(T_{2,3;p,1})=1$. By Proposition \ref{prop:cable}, we know that $au_1$ is a generator of the total homology $H_*(\widehat{CFK}(T_{2,3;p,1}))$. We will now find a basis satisfying the conditions in Lemma \ref{lem:basis}, and in doing so, will determine the values of $a_1(T_{2,3;p,1})$ and $a_2(T_{2,3;p,1})$. In order to accomplish this, we will need to find an element whose horizontal boundary in $CFK^{\infty}(T_{2,3;p,1})$ is $au_1$. We will view the $\mathbb{Z} \oplus \mathbb{Z}$-filtered chain complex $CFK^{\infty}(K)$ in the $(i,j)$-plane. The complex $CFK^{\infty}(K)$ is filtered chain homotopic to a complex generated over $\mathbb{F}[U, U^{-1}]$ by $\widehat{HFK}(K)$, and thus the complex $\widehat{HFK}(K)$ can be viewed as the subquotient complex of $CFK^{\infty}(K)$ consisting of elements with $i$-coordinate equal to zero. See Figure \ref{subfig:vertical}. We place a generator $x$ at the lattice point $(0, A(x))$, where $A(x)$ denotes the Alexander grading of $x$. For example, the generator $b_1v_1$ has coordinates $(0, p-1)$. Multiplication by $U$ decreases both the $i$- and $j$-coordinates by one.
The Maslov grading is suppressed from the picture, although we will still keep track of it; recall that an element $U^n \cdot x$ has $(i,j)$-coordinates $(-n, A(x)-n)$ and Maslov grading $M(x)-2n$. We would like to find an element with $j$-coordinate equal to $\tau(T_{2,3;p,1})$ whose horizontal boundary is equal to $au_1$. In particular, we would like to find an element with $j$-coordinate equal to $p$, $i$-coordinate greater than zero, and Maslov grading one, which is one more than the Maslov grading of $au_1$. To find the elements with $j$-coordinate equal to $p$, we view the appropriate $U$-translates of elements in $\widehat{HFK}(K)$. More specifically, given a generator $x$ of $\widehat{HFK}(K)$, the translate $U^{A(x)-p} \cdot x$ will be in the $p^{\textup{th}}$-row, with \[ A(U^{A(x)-p} \cdot x)=p \qquad \textup{and} \qquad M(U^{A(x)-p} \cdot x)=M(x)+2p-2A(x).\] See Figure \ref{subfig:horizontal}. By considering the gradings in the third column of Table \ref{tab:cable}, which are the Maslov gradings of the elements in the $p^{\textup{th}}$-row, we see that the only element in that row with Maslov grading one is $U^{-1} \cdot b_1v_1$. \begin{figure} \caption{Left, the complex $\widehat{HFK}(K)$; right, its $U$-translates} \label{subfig:vertical} \label{subfig:horizontal} \end{figure} Thus, by grading considerations, we have concluded that $au_1$ is the horizontal boundary of $U^{-1}\cdot b_1v_1$. The vertical boundary of $U^{-1} \cdot b_1v_1$ is $U^{-1} \cdot b_1\mu_1$, and \[ A(U^{-1} \cdot b_1v_1) = A(U^{-1} \cdot b_1\mu_1) +p. \] It follows that \[a_1(T_{2,3;p,1})=1 \qquad \textup{and} \qquad a_2(T_{2,3;p,1})=p, \] and since $T_{2,3;p,1}$ and $D_{p,1}$ are $\varepsilon$-equivalent, the result follows.
\end{proof} \begin{figure} \caption{The elements of interest in the proof of Lemma \ref{lem:a1a21p}} \end{figure} We are now ready to prove Theorem \ref{thm:theknots}, giving an infinite family of topologically slice knots with $4$-ball genus one and arbitrarily large concordance genus. \begin{proof}[Proof of Theorem \ref{thm:theknots}] By Lemma \ref{lem:a1a21p}, \[ a_1(D_{p,1})=1 \qquad \textup{and} \qquad a_2(D_{p,1})=p. \] In the proof of \cite[Lemma 6.4]{Homsmooth}, it is shown that given knots $J$ and $K$, if $a_1(J)=a_1(K)$ and $a_2(J)>a_2(K)$, then \[ a_1(J \# -K)=a_1(J) \qquad \textup{and} \qquad a_2(J \# -K)=a_2(J). \] In particular, \[ a_1(D_{p,1} \# -D_{p-1,1})=1 \qquad \textup{and} \qquad a_2(D_{p,1} \# -D_{p-1,1})=p. \] At the beginning of this section, it was observed that the knots $D_{p,1} \# -D_{p-1,1}$ are topologically slice, and in Lemma \ref{lem:4genus1}, we saw that $g_4(D_{p,1} \# -D_{p-1,1})=1$. By Theorem \ref{thm:gamma} and Lemma \ref{lem:gamma}, we see that \begin{align*} g_c(D_{p,1} \# -D_{p-1,1}) &\geq |\tau(D_{p,1} \# -D_{p-1,1})-a_1(D_{p,1} \# -D_{p-1,1})-a_2(D_{p,1} \# -D_{p-1,1})| \\ &= |1-1-p| \\ &= p, \end{align*} completing the proof of the theorem. \end{proof} \end{document}
\begin{document} \title{On the generation of the coefficient field of a newform by a single Hecke eigenvalue} \author{Koopa Tak-Lun Koo\footnote{Department of Mathematics, University of Washington, Seattle, Box 354350 WA 98195, USA; e-mail: {\tt [email protected]}}\,\, and William Stein\footnote{Department of Mathematics, University of Washington, Seattle, Box 354350 WA 98195, USA; e-mail: {\tt [email protected]}}\,\, and Gabor Wiese\footnote{Institut f\"ur Experimentelle Mathematik, Universit\"at Duisburg-Essen, Ellernstra{\ss}e 29, 45326 Essen, Germany; e-mail: {\tt [email protected]}}} \maketitle \begin{abstract} Let $f$ be a non-CM newform of weight $k \ge 2$. Let $L$ be a subfield of the coefficient field of~$f$. We completely settle the question of the density of the set of primes $p$ such that the $p$-th coefficient of~$f$ generates the field~$L$. This density is determined by the inner twists of~$f$. As a particular case, we obtain that in the absence of non-trivial inner twists, the density is~$1$ for $L$ equal to the whole coefficient field. We also present some new data on reducibility of Hecke polynomials, which suggest questions for further investigation. Mathematics Subject Classification (2000): 11F30 (primary); 11F11, 11F25, 11F80, 11R45 (secondary). \end{abstract} \section{Statement of the results}\label{secone} The principal result of this paper is the following theorem. Its corollaries below completely resolve the question of the density of the set of primes~$p$ such that the $p$-th coefficient of~$f$ generates a given field. \begin{thm}\label{main density} Let $f$ be a newform (i.e., a new normalized cuspidal Hecke eigenform) of weight $k \ge 2$, level $N$ and Dirichlet character~$\chi$ which does not have complex multiplication (CM, see \cite[p.~48]{Rib80}). Let $E_f = {\mathbf Q}(a_n(f) \,:\,(n,N)=1)$ be the field of coefficients of~$f$ and $F_f = {\mathbf Q}\left(\frac{a_n(f)^2}{\chi(n)} \,:\,(n,N)=1\right)$.
The set $$\left\{p \; \textnormal{prime}: {\mathbf Q}\left(\frac{a_p(f)^2}{\chi(p)}\right) = F_f \right\}$$ has density $1$. \end{thm} A twist of~$f$ by a Dirichlet character~$\epsilon$ is said to be {\em inner} if there exists a (necessarily unique) field automorphism $\sigma_\epsilon: E_f \to E_f$ such that \begin{equation}\label{eqin} a_p (f \otimes \epsilon) = a_p(f) \epsilon(p) = \sigma_\epsilon (a_p(f)) \end{equation} for almost all primes~$p$. For a discussion of inner twists we refer the reader to \cite[\S3]{Rib80} and \cite[\S3]{Rib85}. Here we give several statements that will be needed for the sequel. The $\sigma_\epsilon$ belonging to the inner twists of~$f$ form an abelian subgroup~$\Gamma$ of the automorphism group of~$E_f$. The field $F_f$ is the subfield of~$E_f$ fixed by~$\Gamma$. It is well-known that the coefficient field~$E_f$ is either a CM field or totally real. In the former case, the formula \begin{equation}\label{eqcc} \overline{a_p(f)} = \chi(p)^{-1} a_p(f), \end{equation} which is easily derived from the behaviour of the Hecke operators under the Petersson scalar product, shows that $f$ has a non-trivial inner twist by $\chi^{-1}$ with $\sigma_{\chi^{-1}}$ being complex conjugation. If $N$ is square free, $k=2$ and the Dirichlet character $\chi$ of $f$ is the trivial character, then there are no nontrivial inner twists of $f$. \begin{lem} The field $F_f$ is totally real and ${\mathbf Q}(a_p(f))$ contains $\frac{a_p(f)^2}{\chi(p)}$. \end{lem} \begin{proof} Equation~\ref{eqcc} gives $\frac{a_p(f)^2}{\chi(p)} = a_p(f) \overline{a_p(f)}$, whence $F_f$ is totally real. Since every subfield of a CM field is preserved by complex conjugation, ${\mathbf Q}(a_p(f))$ contains $\overline{a_p(f)}$, thus it also contains $\frac{a_p(f)^2}{\chi(p)}$. \end{proof} We immediately obtain the following two results.
\begin{cor}\label{cor density} Let $f$ and $E_f$ be as in Theorem~\ref{main density}. If $f$ does not have any nontrivial inner twists (e.g.\ if $k=2$, $N$ is square free and $\chi$ is trivial), then the set $$\left\{p \; \textnormal{prime}: {\mathbf Q}(a_p(f)) = E_f \right\}$$ has density $1$. \end{cor} \begin{cor}\label{cor Ff} Let $f$ and $F_f$ be as in Theorem~\ref{main density}. The set $$\left\{p \; \textnormal{prime}: F_f \subseteq {\mathbf Q}(a_p(f)) \right\}$$ has density $1$. \end{cor} To any subgroup $H$ of $\Gamma$, we associate a number field $K_H$ as follows. Consider the inner twists as characters of the absolute Galois group $\operatorname {Gal}({\overline{\QQ}}/{\mathbf Q})$ and let $\epsilon_1,\dots,\epsilon_r$ be the inner twists such that $H = \{\sigma_{\epsilon_1},\dots,\sigma_{\epsilon_r}\}$. Let $K_H$ be the minimal number field on which all $\epsilon_i$ for $1\le i \le r$ are trivial, i.e.\ the field such that its absolute Galois group is the kernel of the map $$\operatorname {Gal}({\overline{\QQ}}/{\mathbf Q}) \xrightarrow{\epsilon_1,\dots,\epsilon_r} {\mathbf C}^\times \times \dots \times {\mathbf C}^\times.$$ We use this field to express the density of the set of primes~$p$ such that $a_p(f)$ is {\em contained} in a given subfield of the coefficient field. \begin{cor} Let $f$, $E_f$ and $F_f$ be as in Theorem~\ref{main density}. Let $L$ be any subfield of~$E_f$. Let $M_L$ be the set $$ \left\{p \; \textnormal{prime}: a_p(f) \in L \right\}.$$ \begin{enumerate}[(a)] \item If $L$ does not contain $F_f$, then $M_L$ has density~$0$. \item If $L$ contains $F_f$, then $L = E_f^H$ for some subgroup $H \subseteq \Gamma$ and $M_L$ has density $1/[K_H : {\mathbf Q}]$. \end{enumerate} \end{cor} \begin{proof} Suppose first that $L$ does not contain~$F_f$. Then $a_p(f) \in L$ implies that $F_f$ is not a subfield of~${\mathbf Q}(a_p(f))$.
Thus by Corollary~\ref{cor Ff}, $M_L$ is a subset of a set of density~$0$ and is consequently itself of density~$0$. We now assume that $L=E_f^H$. Then we have \begin{align*} M_L & = \left\{p \; \textnormal{prime}: \sigma(a_p(f)) = a_p(f) \, \forall \sigma \in H \right\}\\ & = \left\{p \; \textnormal{prime}: a_p(f) \epsilon_i(p) = a_p(f) \, \forall i \in \{1,\dots,r\}\right\}. \end{align*} Since the set of~$p$ with $a_p(f)=0$ has density~$0$ (see for instance \cite{Serre}, p.~174), the density of $M_L$ is equal to the density of $$\left\{p \; \textnormal{prime}: \epsilon_i(p) = 1 \, \forall i \in \{1,\dots,r\} \right\} = \left\{p \; \textnormal{prime}: p \textnormal{ splits completely in } K_H \right\},$$ yielding the claimed formula. \end{proof} A complete answer as to the density of the set of~$p$ such that $a_p(f)$ {\em generates} a given field $L \subseteq E_f$ is given by the following immediate result. \begin{cor} Let $f$, $E_f$ and $F_f$ be as in Theorem~\ref{main density}. Let $L$ be $E_f^H$ with $H$ some subgroup of~$\Gamma$. The density of the set $$ \left\{p \; \textnormal{prime}: {\mathbf Q}(a_p(f)) = L \right\}$$ is equal to the density of the set $$\left\{p \; \textnormal{prime}: \epsilon_i(p) = 1 \, \forall i \in \{1,\dots,r\} \textnormal{ and } \epsilon_j(p) \neq 1 \, \forall j \in \{r+1,\dots,s\} \right\},$$ where the $\epsilon_j$ for $j \in \{r+1,\dots,s\}$ are the inner twists of~$f$ that belong to elements of~$\Gamma - H$. \end{cor} This corollary means that the above density is completely determined by the inner twists of~$f$. We illustrate this by giving two examples. In weight~$2$ there is a newform on $\Gamma_0(63)$ with coefficient field~${\mathbf Q}(\sqrt{3})$. It has an inner twist by the Legendre symbol $p \mapsto \left(\frac{p}{3}\right)$. Consequently, the field $F_f$ is ${\mathbf Q}$ and the set of~$p$ such that $a_p(f) \in {\mathbf Q}$ has density~$\frac{1}{2}$.
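To make the $\Gamma_0(63)$ density explicit, the following worked computation (our own illustration, using only the statements above and quadratic reciprocity) traces the example through the corollary: here $H = \Gamma = \{1, \sigma_\epsilon\}$ with $\epsilon(p) = \left(\frac{p}{3}\right)$, so $L = E_f^\Gamma = {\mathbf Q}$ and $K_H$ is the field cut out by the kernel of~$\epsilon$.

```latex
% Up to the density-zero set of primes p with a_p(f) = 0:
\begin{align*}
\left\{p \; \textnormal{prime}: a_p(f) \in {\mathbf Q} \right\}
  &= \left\{p \; \textnormal{prime}: \epsilon(p) = 1 \right\}
   = \left\{p \; \textnormal{prime}: p \equiv 1 \pmod{3} \right\}\\
  &= \left\{p \; \textnormal{prime}: p \textnormal{ splits completely in }
       K_H = {\mathbf Q}(\sqrt{-3}) \right\},
\end{align*}
% so by Chebotarev the density is 1/[K_H : Q] = 1/2, as claimed.
```

The same template computes the density for any non-CM newform whose inner twists are quadratic: one passes from sign conditions on the twists to splitting conditions in the field they cut out.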
For the next example we consider the newform of weight~$2$ on $\Gamma_0(512)$ whose coefficient field has degree~$4$ over~${\mathbf Q}$. More precisely, the coefficient field $E_f$ is ${\mathbf Q}(\sqrt{2},\sqrt{3})$ and $F_f = {\mathbf Q}$. Hence, $\Gamma = {\mathbf Z}/2{\mathbf Z} \times {\mathbf Z}/2{\mathbf Z} = \{1, \sigma_1,\sigma_2,\sigma_3\}$. There are thus nontrivial inner twists $\epsilon_1$, $\epsilon_2$ and $\epsilon_3$, all of which are quadratic, as their values must be contained in the totally real field~$E_f$. As $\sigma_1 \sigma_2 = \sigma_3$, it follows that $\epsilon_1(p) \epsilon_2(p) = \epsilon_3(p)$. This equation already excludes the possibility that all $\epsilon_i(p) \neq 1$, whence there is not a single $p$ such that $a_p(f)$ generates~$E_f$. Furthermore, the density of the set of~$p$ such that $a_p$ generates the quadratic field $E_f^{\langle \sigma_1 \rangle}$ is equal to the density of $\left\{p \; \textnormal{prime}: \epsilon_1(p)=1 \textnormal{ and } \epsilon_2(p) \neq 1 \right\}$, which is~$\frac{1}{4}$. Similar arguments apply to the other two quadratic fields. The set of~$p$ such that $a_p \in {\mathbf Q}$ also has density~$\frac{1}{4}$. In the literature there are related but weaker results concerning Corollary~\ref{cor density}, which are situated in the context of Maeda's conjecture, i.e., they concern the case of level~$1$ and assume that the space $S_k(1)$ of cusp forms of weight~$k$ and level~$1$ consists of a single Galois orbit of newforms (see, e.g., \cite{JOno} and \cite{BM2003}). We now show how Corollary~\ref{cor density} extends the principal results of these two papers. Let $f$ be a newform of level~$N$, weight~$k \ge 2$ and trivial Dirichlet character $\chi=1$ which neither has CM nor nontrivial inner twists. This is for instance true when $N=1$.
Let ${\mathbb T}$ be the ${\mathbf Q}$-algebra generated by all $T_n$ with $n\ge 1$ inside $\operatorname {End}(S_k(N,1))$ and let ${\mathfrak P}$ be the kernel of the ${\mathbf Q}$-algebra homomorphism ${\mathbb T} \xrightarrow{T_n \mapsto a_n(f)} E_f$. As ${\mathbb T}$ is reduced, the map ${\mathbb T}_{\mathfrak P} \xrightarrow{T_n \mapsto a_n(f)} E_f$ is a ring isomorphism with ${\mathbb T}_{\mathfrak P}$ the localization of ${\mathbb T}$ at~${\mathfrak P}$. Non-canonically, ${\mathbb T}_{\mathfrak P}$ is also isomorphic as a ${\mathbb T}_{\mathfrak P}$-module (equivalently, as an $E_f$-vector space) to its ${\mathbf Q}$-linear dual, which can be identified with the localization at~${\mathfrak P}$ of the ${\mathbf Q}$-vector space $S_k(N,1; {\mathbf Q})$ of cusp forms in $S_k(N,1)$ with $q$-expansion in ${\mathbf Q}[[q]]$. Hence, ${\mathbf Q}(a_p(f)) = E_f$ precisely means that the characteristic polynomial~$P_p \in {\mathbf Q}[X]$ of~$T_p$ acting on the localization at~${\mathfrak P}$ of $S_k(N,1;{\mathbf Q})$ is irreducible. Corollary~\ref{cor density} hence shows that the set of primes~$p$ such that $P_p$ is irreducible has density~$1$. This extends Theorem 1 of~\cite{JOno} and Theorem 1.1 of~\cite{BM2003}. Both theorems restrict to the case $N=1$ and assume that there is a unique Galois orbit of newforms, i.e., a unique~${\mathfrak P}$, so that no localization is needed. Theorem~1 of~\cite{JOno} says that $$\# \{p < X \, \textnormal{prime} \, : P_p \textnormal{ is irreducible in } {\mathbf Q}[X]\} \gg \frac{X}{\log X}$$ and Theorem 1.1 of~\cite{BM2003} states that there is $\delta > 0$ such that $$\# \{p < X \, \textnormal{prime} \, : P_p \textnormal{ is reducible in } {\mathbf Q}[X]\} \ll \frac{X}{(\log X)^{1+\delta}}.$$ \noindent{\bf Acknowledgements.} The authors would like to thank the MSRI, where part of this research was done, for its hospitality. The first author would like to thank his advisor Ralph Greenberg for suggesting the problem.
The second author acknowledges partial support from the National Science Foundation grant No.\ 0555776, and also used \cite{sage} for some calculations related to this paper. All three authors thank Jordi Quer for useful discussions. \section{Group theoretic input} \begin{lem}\label{fh} Let $q$ be a prime power and $\epsilon$ a generator of the cyclic group ${\mathbb F}_q^{{\times}}.$ \begin{enumerate}[(a)] \item The conjugacy classes $c$ in $\operatorname {GL}_2({\mathbb F}_q)$ have the following four kinds of representatives: $$ S_a = \begin{pmatrix}a & 0 \\ 0 & a \end{pmatrix}, \; \; T_{a} = \begin{pmatrix}a & 0 \\ 1 & a \end{pmatrix}, \; \; U_{a, b} = \begin{pmatrix}a & 0 \\ 0 & b \end{pmatrix}, \; \; V_{x, y} = \begin{pmatrix}x & \epsilon y \\ y & x \end{pmatrix}$$ where $a \not= b$ and $y \not= 0.$ \item The number of elements in each of these conjugacy classes is $1$, $q^2 - 1$, $q^2 + q$ and $q^2 - q$, respectively. \end{enumerate} \end{lem} \begin{proof} See Fulton--Harris~\cite{FH}, page 68. \end{proof} We use the notation $[g]_G$ for the conjugacy class of~$g$ in~$G$. \begin{prop}\label{gpthy} Let $q$ be a prime power and $r$ a positive integer. Let further $R \subseteq \widetilde{R} \subseteq {\mathbb F}_{q^{r}}^\times$ be subgroups. Put $\sqrt{\widetilde{R}} = \{ s \in {\mathbb F}_{q^r}^\times \,:\, s^2 \in \widetilde{R}\}$. Set $$ H = \{ g \in \operatorname {GL}_2({\mathbb F}_{q}) \,:\, \det (g) \in R \} $$ and let $$ G \subseteq \{ g \in \operatorname {GL}_2({\mathbb F}_{q^r}) \,:\, \det (g) \in \widetilde{R} \} $$ be any subgroup such that $H$ is a normal subgroup of~$G$. Then the following statements hold. \begin{enumerate}[(a)] \item The group $G/(G \cap {\mathbb F}_{q^r}^\times)$ (with ${\mathbb F}_{q^r}^\times$ identified with scalar matrices) is either equal to $\operatorname {PSL}_2({\mathbb F}_q)$ or to $\operatorname {PGL}_2({\mathbb F}_q)$.
More precisely, if we let $\{s_1,\dots,s_n\}$ be a system of representatives for $\sqrt{\widetilde{R}}/R$, then for all $g \in G$ there is $i$ such that $g \mat {s_i^{-1}} 00 {s_i^{-1}} \in G \cap \operatorname {GL}_2({\mathbb F}_q)$ and $\mat {s_i}00{s_i} \in G$. \item Let $g \in G$ such that $g \mat {s_i^{-1}} 00 {s_i^{-1}} \in G \cap \operatorname {GL}_2({\mathbb F}_q)$ and $\mat {s_i}00{s_i} \in G$. Then $$[g]_G = [g \mat {s_i^{-1}} 00 {s_i^{-1}}]_{G \cap \operatorname {GL}_2({\mathbb F}_q)} \mat {s_i}00{s_i}.$$ \item Let $P(X) = X^2 - aX + b \in {\mathbb F}_{q^r}[X]$ be a polynomial. Then the inequality $$ \sum_C | C | \;\le\; 2 | \widetilde{R}/R | (q^2 + q)$$ holds, where the sum runs over the conjugacy classes $C$ of~$G$ with characteristic polynomial equal to~$P(X)$. \end{enumerate} \end{prop} \begin{proof} (a) The classification of the finite subgroups of $\operatorname {PGL}_2({\overline{\FF}}_q)$ yields that the group $G/(G \cap {\mathbb F}_{q^r}^\times)$ is either $\operatorname {PGL}_2({\mathbb F}_{q^u})$ or $\operatorname {PSL}_2({\mathbb F}_{q^u})$ for some $u \mid r$. This, however, can only occur with $u=1$, as $\operatorname {PSL}_2({\mathbb F}_{q^u})$ is simple. The rest is only a reformulation. (b) This follows from (a), since scalar matrices are central. (c) From (b) we get the inclusion $$\bigsqcup_C C \subseteq \bigsqcup_{i=1}^n \bigsqcup_D D \mat {s_i}00{s_i},$$ where $C$ runs over the conjugacy classes of $G$ with characteristic polynomial equal to $P(X)$ and $D$ runs over the conjugacy classes of $G\cap \operatorname {GL}_2({\mathbb F}_q)$ with characteristic polynomial equal to $X^2 - a s_i^{-1} X + b s_i^{-2}$ (such a conjugacy class is empty if the polynomial is not in ${\mathbb F}_q[X]$). The group $G\cap \operatorname {GL}_2({\mathbb F}_q)$ is normal in $\operatorname {GL}_2({\mathbb F}_q)$, as it contains $\operatorname {SL}_2({\mathbb F}_q)$.
Hence, any conjugacy class of $\operatorname {GL}_2({\mathbb F}_q)$ either has an empty intersection with $G\cap \operatorname {GL}_2({\mathbb F}_q)$ or is a disjoint union of conjugacy classes of $G\cap \operatorname {GL}_2({\mathbb F}_q)$. Consequently, by Lemma~\ref{fh}, the disjoint union $\bigsqcup_D D \mat {s_i}00{s_i}$ is equal to one of \begin{enumerate}[(i)] \item $[U_{a,b}]_{\operatorname {GL}_2({\mathbb F}_q)} \mat {s_i}00{s_i}$, \item $[V_{x,y}]_{\operatorname {GL}_2({\mathbb F}_q)} \mat {s_i}00{s_i}$ or \item $[S_a]_{\operatorname {GL}_2({\mathbb F}_q)} \mat {s_i}00{s_i} \sqcup [T_a]_{\operatorname {GL}_2({\mathbb F}_q)} \mat {s_i}00{s_i}$. \end{enumerate} Still by Lemma~\ref{fh}, the first set contains $q^2 + q$, the second set $q^2-q$ and the third one $q^2$ elements. Hence, the set $\bigsqcup_C C$ contains at most $2 | \widetilde{R}/R | (q^2+q)$ elements. \end{proof} \section{Proof} The proof of Theorem~\ref{main density} relies on the following important theorem by Ribet, which, roughly speaking, says that the image of the mod~${\ell}$ Galois representation attached to a fixed newform is as big as it can be for almost all primes~${\ell}$. \begin{thm}[Ribet]\label{ribet} Let $f$ be a Hecke eigenform of weight $k \ge 2$, level~$N$ and Dirichlet character $\chi: ({\mathbf Z}/N{\mathbf Z})^\times \to {\mathbf C}^\times$. Suppose that $f$ does not have CM. Let $E_f$ and $F_f$ be as in Theorem~\ref{main density} and denote by $\mathcal{O}_{E_f}$ and $\mathcal{O}_{F_f}$ the corresponding rings of integers. For almost all prime numbers~${\ell}$ the following statement holds: \begin{quote} Let $\widetilde{{\mathcal L}}$ be a prime ideal of $\mathcal{O}_{E_f}$ dividing~${\ell}$. Put ${\mathcal L} = \widetilde{{\mathcal L}} \cap \mathcal{O}_{F_f}$ and $\mathcal{O}_{F_f}/{\mathcal L} \cong {\mathbb F}$.
Consider the residual Galois representation $${\overline{\rho}}_{f,\widetilde{{\mathcal L}}}: \operatorname {Gal}({\overline{\QQ}}/{\mathbf Q}) \to \operatorname {GL}_2(\mathcal{O}_{E_f}/\widetilde{{\mathcal L}})$$ attached to~$f$. Then the image ${\overline{\rho}}_{f,\widetilde{{\mathcal L}}}(\operatorname {Gal}({\overline{\QQ}}/K_\Gamma))$ is equal to $$ \{g \in \operatorname {GL}_2({\mathbb F}) \,:\, \det(g) \in {\mathbb F}_{\ell}^{\times (k-1)} \},$$ where $K_\Gamma$ is the field defined in Section~\ref{secone}. \end{quote} \end{thm} \begin{proof} It suffices to take Ribet~\cite[Thm.~3.1]{Rib85} mod~$\widetilde{{\mathcal L}}$. \end{proof} \begin{thm}\label{global_density} Let $f$ be a non-CM newform of weight $k \ge 2$, level $N$ and Dirichlet character~$\chi$. Let $F_f$ be as in Theorem~\ref{main density} and let $L \subset F_f$ be any proper subfield. Then the set $$\left\{p \; \textnormal{prime}: \frac{a_p(f)^2}{\chi(p)} \in L\right\}$$ has density zero. \end{thm} \begin{proof} Let $L \subsetneq F_f$ be a proper subfield and ${\mathcal O}_L$ its integer ring. We define the set $$ S := \{ {\mathcal L} \subset {\mathcal O}_{F_f} \textnormal{ prime ideal}: [{\mathcal O}_{F_f}/{\mathcal L} : {\mathcal O}_L / (L \cap {\mathcal L})] \ge 2 \}.$$ Notice that this set is infinite. For, if it were finite, then all but finitely many primes would split completely in the extension $F_f/L$, which is not the case by Chebotarev's density theorem. Let ${\mathcal L} \in S$ be any prime, ${\ell}$ its residue characteristic and $\widetilde{{\mathcal L}}$ a prime of ${\mathcal O}_{E_f}$ lying over~${\mathcal L}$. Put ${\mathbb F}_q = {\mathcal O}_L / (L \cap {\mathcal L})$, ${\mathbb F}_{q^r} = {\mathcal O}_{F_f}/{\mathcal L}$ and ${\mathbb F}_{q^{rs}} = {\mathcal O}_{E_f}/\widetilde{{\mathcal L}}$. We have $r \ge 2$.
Let $W$ be the subgroup of ${\mathbb F}_{q^{rs}}^\times$ consisting of the values of~$\chi$ modulo~$\widetilde{{\mathcal L}}$; its size $|W|$ is less than or equal to $| ({\mathbf Z}/N{\mathbf Z})^\times|$. Let $R = {\mathbb F}_{\ell}^{\times (k-1)}$ be the subgroup of $(k-1)$st powers of elements in the multiplicative group ${\mathbb F}_{\ell}^{\times}$ and let $\widetilde{R} = \langle R, W \rangle \subset {\mathbb F}_{q^{rs}}^\times$. The size of $\widetilde{R}$ is less than or equal to $|R| \cdot |W|$. Let $H = \{g \in \operatorname {GL}_2({\mathbb F}_{q^r}) \,:\, \det(g) \in R \}$ and $G = \operatorname {Gal}({\overline{\QQ}}^{\ker{{\overline{\rho}}_{f,\widetilde{{\mathcal L}}}}}/{\mathbf Q})$. By Galois theory, $G$ can be identified with the image of the residual representation ${\overline{\rho}}_{f,\widetilde{{\mathcal L}}}$, and we shall make this identification from now on. By Theorem~\ref{ribet} we have the inclusion of groups $$ H \subseteq G \subseteq \{g \in \operatorname {GL}_2({\mathbb F}_{q^{rs}}): \det(g) \in \widetilde{R} \}$$ with $H$ being normal in~$G$.
If $C$ is a conjugacy class of~$G$, by Chebotarev's density theorem the density of $$\{p \, \textnormal{prime} : \, [{\overline{\rho}}_{f,\widetilde{{\mathcal L}}}(\operatorname {Frob}_p)]_G = C\}$$ equals $|C|/|G|.$ We consider the set $$ M_{\mathcal L} := \bigsqcup_C \{ p \, \textnormal{prime} : \, [{\overline{\rho}}_{f,\widetilde{{\mathcal L}}}(\operatorname {Frob}_p)]_G = C\} \supseteq \left\{ p \, \textnormal{prime} : \, \overline{\left(\frac{a_p(f)^2}{\chi(p)}\right)} \in {\mathbb F}_q \right\},$$ where the reduction modulo~${\mathcal L}$ of an element $x \in \mathcal{O}_{F_f}$ is denoted by $\overline{x}$ and $C$ runs over the conjugacy classes of~$G$ with characteristic polynomials equal to some $X^2-aX+b \in {\mathbb F}_{q^{rs}}[X]$ such that $$a^2 \in \{ t \in {\mathbb F}_{q^{rs}} \, : \, \exists u \in {\mathbb F}_q \; \exists w \in W : t = uw \}$$ and automatically $b \in \widetilde{R}$. The set $M_{\mathcal L}$ has the density $\delta(M_{\mathcal L}) = \sum_{C}\frac{|C|}{|G|}$ with $C$ as before. There are at most $2q |W|^2 \cdot |R|$ such polynomials. We are now precisely in the situation to apply Prop.~\ref{gpthy}, Part~(c), which yields the inequality $$ \delta (M_{\mathcal L}) \le \frac{4 |W|^3 q (q^{2r} +q^{r})}{q^{3r} - q^r} = O\left(\frac{1}{q^{r - 1}}\right) \le O\left(\frac{1}{q}\right), $$ where for the denominator we used $|G| \ge |H| = |R| \cdot |\operatorname {SL}_2({\mathbb F}_{q^r})|$. Since $q$ is unbounded for ${\mathcal L} \in S$, the intersection $M := \bigcap_{{\mathcal L} \in S} M_{\mathcal L}$ is a set having a density and this density is~$0$. The inclusion $$ \left\{ p \, \textnormal{prime} : \, \frac{a_p(f)^2}{\chi(p)} \in L \right\} \subseteq M $$ finishes the proof. \end{proof} \begin{proof}[Proof of Theorem~\ref{main density}] It suffices to apply Theorem~\ref{global_density} to each of the finitely many subextensions of~$F_f$.
\end{proof} \section{Reducibility of Hecke polynomials: questions} Motivated by a conjecture of Maeda, there has been some speculation that for every integer $k$ and prime number $p$, the characteristic polynomial of $T_p$ acting on $S_k(1)$ is irreducible. See, for example, \cite{fj}, which verifies this for all $k<2000$ and $p<2000$. The most general such speculation might be the following question: {\em if~$f$ is a non-CM newform of level $N\geq 1$ and weight $k \ge 2$ such that some $a_p(f)$ generates the field $E_f = {\mathbf Q}(a_n(f) : n\geq 1)$, do all but finitely many prime-indexed Fourier coefficients $a_p(f)$ generate~$E_f$?} The answer in general is no. An example is given by the newform in level~$63$ and weight~$2$ that has an inner twist by $\left(\frac{\cdot}{3}\right)$. Also for non-CM newforms of weight~$2$ without nontrivial inner twists such that $[E_f:{\mathbf Q}]=2$, we think that the answer is likely no. Let $f\in S_k(\Gamma_0(N))$ be a newform of weight $k$ and level $N$. The {\em degree} of $f$ is the degree of the field $E_f$, and we say that~$f$ is a {\em reducible newform} if $a_p(f)$ does not generate~$E_f$ for infinitely many primes~$p$. For each even weight $k\leq 12$ and degree $d=2,3,4$, we used \cite{sage} to find newforms $f$ of weight $k$ and degree $d$. For each of these forms, we computed the {\em reducible primes} $p<1000$, i.e., the primes such that $a_p(f)$ does not generate~$E_f$. The result of this computation is given in Table~\ref{counting}. Table~\ref{newforms2} contains the number of reducible primes $p<10000$ for the first $20$ newforms of degree $2$ and weight $2$. This data inspires the following question. \begin{question} If $f\in S_2(\Gamma_0(N))$ is a newform of degree $2$, is $f$ necessarily reducible? That is, are there infinitely many primes $p$ such that $a_p(f)\in {\mathbf Z}$?
\end{question} Tables~\ref{newforms3}--\ref{newforms389} contain additional data about the first few newforms of given degree and weight, which may suggest other similar questions. In particular, Table~\ref{tab:million} contains data for all primes up to $10^6$ for the first degree 2 form $f$ with $L(f,1)\neq 0$, and for the first degree 2 form $g$ with $L(g,1) = 0$. We find that there are 386 primes $<10^6$ with $a_p(f) \in {\mathbf Z}$ and $309$ with $a_p(g)\in {\mathbf Z}$. \begin{question} If $f\in S_2(\Gamma_0(N))$ is a newform of degree $2$, can the asymptotic behaviour of the function $$ N(x) := \#\{ p \, \textnormal{prime} : \, p < x, a_p(f) \in {\mathbf Z} \} $$ be described as a function of~$x$? \end{question} The authors intend to investigate these questions in a subsequent paper. \begin{table} \begin{center} \caption{Counting Reducible Characteristic Polynomials\label{counting}} \begin{tabular}{|l|l|l|l|}\hline $k$ & $d$ & $N$ & reducible $p<1000$\\\hline 2 & 2 & 23 & 13, 19, 23, 29, 43, 109, 223, 229, 271, 463, 673, 677, 883, 991\\ 2 & 3 & 41 & 17, 41\\ 2 & 4 & 47 & 47 \\\hline 4 & 2 & 11 & 11 \\ 4 & 3 & 17 & 17 \\ 4 & 4 & 23 & 23 \\\hline 6 & 2 & 7 & 7 \\ 6 & 3 & 11 & 11 \\ 6 & 4 & 17 & 17 \\\hline 8 & 2 & 5 & 5 \\ 8 & 3 & 17 & 17 \\ 8 & 4 & 11 & 11 \\\hline 10 & 2 & 5 & 5 \\ 10 & 3 & 7 & 7 \\ 10 & 4 & 13 & 13 \\\hline 12 & 2 & 5 & 5 \\ 12 & 3 & 7 & 7 \\ 12 & 4 & 21 & 3, 7 \\ \hline \end{tabular} \end{center} \end{table} \begin{table} \begin{center} \caption{First 20 Newforms of Degree 2 and Weight 2\label{newforms2}} \begin{tabular}{|l|l|l|c|}\hline $k$ & $d$ & $N$ & \#\{reducible $p<10000$\}\!\\\hline 2 & 2 & 23 & 47 \\ 2 & 2 & 29 & 42 \\ 2 & 2 & 31 & 78 \\ 2 & 2 & 35 & 48 \\ 2 & 2 & 39 & 71 \\ 2 & 2 & 43 & 43 \\ 2 & 2 & 51 & 64 \\ 2 & 2 & 55 & 95 \\ 2 & 2 & 62 & 77 \\ 2 & 2 & 63 & 622 (inner twist by $\left(\frac{\cdot}{3}\right)$)\\ \hline \end{tabular}\hspace{1em}\begin{tabular}{|l|l|l|c|}\hline $k$ & $d$ & $N$ & \#\{reducible
$p<10000$\}\!\\\hline 2 & 2 & 65 & 43 \\ 2 & 2 & 65 & 90 \\ 2 & 2 & 67 & 51 \\ 2 & 2 & 67 & 19 \\ 2 & 2 & 68 & 53 \\ 2 & 2 & 69 & 47 \\ 2 & 2 & 73 & 43 \\ 2 & 2 & 73 & 55 \\ 2 & 2 & 74 & 52 \\ 2 & 2 & 74 & 21 \\ \hline \end{tabular} \end{center} \end{table} \begin{table}\label{tab:million} \begin{center} \caption{Newforms 23a and 67b: values of $\psi(x) = \#\{\text{reducible }p< x\cdot 10^5\}$ \label{newforms1000000}} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|c|}\hline $k$ & $d$ & $N$ & $r_{\mathrm{an}}$ & $1$ & $2$ & $3$ &$4$ &$5$ &$6$ &$7$ &$8$ &$9$ & $10$ \\\hline 2 & 2 & $23$ & 0 & 127 & 180 & 210 & 243 & 277 & 308 & 331 & 345 & 360 & 386 \\\hline 2 & 2 & $67$ & 1 & 111 & 159 & 195 & 218 & 240 & 257 & 276 & 288 & 301 & 309 \\\hline \end{tabular} \end{center} \end{table} \begin{table} \begin{center} \caption{First 5 Newforms of Degrees 3, 4 and Weight 2\label{newforms3}} \begin{tabular}{|l|l|l|l|}\hline $k$ & $d$ & $N$ & reducible $p<10000$\\ \hline 2 & 3 & 41 & 17, 41 \\ 2 & 3 & 53 & 13, 53 \\ 2 & 3 & 61 & 61, 2087 \\ 2 & 3 & 71 & 23, 31, 71, 479, \\ &&&647, 1013, 3181\\ 2 & 3 & 71 & 13, 71, 509, 3613 \\ \hline\end{tabular} \hspace{2em} \begin{tabular}{|l|l|l|l|}\hline $k$ & $d$ & $N$ & reducible $p<10000$\\ \hline 2 & 4 & 47 & 47 \\ 2 & 4 & 95 & 5, 19 \\ 2 & 4 & 97 & 97 \\ 2 & 4 & 109 & 109, 4513 \\ 2 & 4 & 111 & 3, 37 \\ &&&\\ \hline\end{tabular} \end{center} \end{table} \begin{table} \begin{center} \caption{First 5 Newforms of Degrees 2, 3 and Weight 4\label{newforms2w4}} \begin{tabular}{|l|l|l|l|}\hline $k$ & $d$ & $N$ & reducible $p<1000$\\ \hline 4 & 2 & 11 & 11 \\ 4 & 2 & 13 & 13 \\ 4 & 2 & 21 & 3, 7 \\ 4 & 2 & 27 & {\tiny 3, 7, 13, 19, 31, 37, 43, 61, 67, 73, 79, 97, 103,}\\ &&&{\tiny 109, 127, 139, 151, 157, 163, 181, 193, 199, 211,}\\ &&&{\tiny 223, 229, 241, 271, 277, 283, 307, 313, 331, 337,}\\ &&&{\tiny 349, 367, 373, 379, 397, 409, 421, 433, 439, 457,}\\ &&&{\tiny 463, 487, 499, 523, 541, 547, 571, 577, 601, 607,}\\ &&&{\tiny 613,
619, 631, 643, 661, 673, 691, 709, 727, 733,}\\ &&&{\tiny 739, 751, 757, 769, 787, 811, 823, 829, 853, 859,}\\ &&&{\tiny 877, 883, 907, 919, 937, 967, 991, 997} \\ &&&(has inner twists)\\ 4 & 2 & 29 & 29 \\ \hline \end{tabular} \begin{tabular}{|l|l|l|l|}\hline $k$ & $d$ & $N$ & reducible $p<1000$ \\\hline 4 & 3 & 17 & 17 \\ 4 & 3 & 19 & 19 \\ 4 & 3 & 35 & 5, 7 \\ 4 & 3 & 39 & 3, 13 \\ 4 & 3 & 41 & 41 \\ &&&\\&&&\\&&&\\&&&\\&&&\\&&&\\ &&&\\ &&&\\ \hline \end{tabular} \end{center} \end{table} \begin{table} \begin{center} \caption{Newforms on $\Gamma_0(389)$ of Weight $2$\label{newforms389}} \begin{tabular}{|l|l|l|l|}\hline $k$ & $d$ & $N$ & reducible $p<10000$ \\\hline 2 & 1 & 389 & none (degree 1 polynomials are all irreducible) \\ 2 & 2 & 389 & {\tiny 5, 11, 59, 97, 157, 173, 223, 389, 653, 739, 859, 947, 1033, 1283, 1549, 1667, 2207, 2417, 2909, 3121, 4337,}\\ &&&{\tiny 5431, 5647, 5689, 5879, 6151, 6323, 6373, 6607, 6763, 7583, 7589, 8363, 9013, 9371, 9767} \\ 2 & 3 & 389 & 7, 13, 389, 503, 1303, 1429, 1877, 5443 \\ 2 & 6 & 389 & 19, 389\\ 2 & 20 & 389 & 389 \\ \hline \end{tabular} \end{center} \end{table} \end{document}
\begin{document} \title[Stubborn Set Reduction for Two-Player Reachability Games] {Stubborn Set Reduction for \texorpdfstring{\\}{} Two-Player Reachability Games} \author[F.M.~B{\o}nneland]{Frederik~Meyer~B{\o}nneland} \author[P.G.~Jensen]{Peter~Gj{\o}l~Jensen} \author[K.G.~Larsen]{Kim~Guldstrand~Larsen} \author[M.~Mu\~{n}iz]{Marco~Mu\~{n}iz} \author[J.~Srba]{\texorpdfstring{Ji\v{r}\'{\i}}{Jir\'{\i}}~Srba\texorpdfstring{ }{}} \address{Department of Computer Science, Aalborg University, Denmark} \email{\{frederikb,pgj,kgl,muniz,srba\}@cs.aau.dk} \begin{abstract} Partial order reductions have been successfully applied to model checking of concurrent systems and practical applications of the technique show nontrivial reduction in the size of the explored state space. We present a theory of partial order reduction based on stubborn sets in the game-theoretical setting of 2-player games with reachability objectives. Our stubborn reduction allows us to prune the interleaving behaviour of both players in the game, and we formally prove its correctness on the class of games played on general labelled transition systems. We then instantiate the framework to the class of weighted Petri net games with inhibitor arcs and provide its efficient implementation in the model checker TAPAAL\@. Finally, we evaluate our stubborn reduction on several case studies and demonstrate its efficiency. \end{abstract} \maketitle \section{Introduction} The state space explosion problem is the main obstacle for model checking of concurrent systems. Even simple processes running in parallel can produce an exponentially large number of interleavings, making full state space search practically intractable. A family of methods for taming this problem is that of partial order reductions~\cite{G:96,peled1993stubborn,V:90}, which exploit the commutativity of independent concurrent processes.
Variants of partial order reductions include persistent sets~\cite{G:96,godefroid1990using,godefroid1993using}, ample sets~\cite{peled1993stubborn,peled1996combining,peled1998ten}, and stubborn sets~\cite{V:90,valmari1992attack,valmari1993partial,valmari2017stubborn}. As our main contribution, we generalise the theory of the stubborn set variant of partial order reductions into the setting of 2-player games. We exploit the observation that one of the two players is often left with no actions to propose, leaving the opponent to independently dictate the behaviour of the system for a limited, consecutive sequence of actions. In such cases we may apply the classical stubborn set reductions in order to reduce the number of interleavings of independent actions. To preserve the winning strategies of both players, a number of conditions on the reduction have to be satisfied. We define the notion of a \emph{stable} stubborn set reduction by a set of sufficient conditions that guarantee the preservation of winning strategies for both players. Furthermore, we formally prove the correctness of stable reductions in the setting of general game labelled transition systems, and instantiate our framework to weighted Petri net games with inhibitor arcs. We propose approximate syntax-driven conditions of a stable Petri net game reduction satisfying the sufficient conditions for our stable reductions and demonstrate their applicability in an efficient, open source implementation in the model checker TAPAAL~\cite{david2012tapaal}. Our implementation is based on dependency graphs, following the approach from~\cite{DEFJJJKLNOPS:FI:18,jensen2016real}, and we demonstrate on several case studies that the computation of the stubborn sets has only a minor overhead while having the potential of achieving exponential reduction both in the running time and in the number of searched configurations.
To the best of our knowledge, this is the first efficient implementation of a 2-player game partial order reduction technique for Petri net games. \emph{Related Work.} Partial order reductions in the non-game setting for linear time properties have previously been studied~\cite{laarman2014real,lehmann2012stubborn,peled1993stubborn,valmari1992attack}; these lend themselves to the safety and liveness properties we wish to preserve for winning states. Originally, Peled and Valmari presented partial order reductions for general stuttering-insensitive LTL~\cite{peled1993stubborn,valmari1992attack}, and Lehmann et al.\ subsequently studied stubborn sets applied to a subset of LTL properties, called simple linear time properties, allowing them to utilise a relaxed set of conditions compared to those for general LTL preservation~\cite{lehmann2012stubborn}. The extension of partial order reductions to game-oriented formalisms and verification tasks has not yet received much attention in the literature. In~\cite{jamroga2018towards} partial order reductions for LTL without the next operator are adapted to a subset of alternating-time temporal logic and applied to multi-agent systems. The authors consider games with imperfect information; however, they also show that their technique is inapplicable for perfect information games. In our work, we assume an antagonistic environment and focus on preserving the existence of winning strategies with perfect information, reducing the state space, and improving existing controller synthesis algorithms. Partial order reduction for the problem of checking bisimulation equivalence between two labelled transition systems is presented in~\cite{valmari1997set,huhn1998partial,gerth1999partial}. Our partial order reduction is applied directly to a labelled transition system while theirs are applied to the bisimulation game graph.
While the setting is distinctly different, our approach is more general as we allow for mixed states and allow for reduction in both controllable as well as environmental states. Moreover, we provide an implementation of the on-the-fly strategy synthesis algorithm and argue by a number of case studies for its practical applicability. The work on partial order reductions for weak modal $\mu$-calculus and CTL (see e.g.~\cite{RS:CONCUR:97,561357}) allows us in principle to encode the game semantics as a part of a $\mu$-calculus formula. Recently, partial order reduction techniques for parity games have been proposed by Neele et al.~\cite{neele2020partial}, which allows for model checking the full modal $\mu$-calculus. However, the use of more general partial order reduction methods may waste reduction potential, as the more general methods usually generate larger stubborn sets to preserve properties that are not required in the 2-player game setting. Complexity and decidability results for control synthesis in Petri net games are not encouraging. The control synthesis problem is for many instances of Petri net formalisms undecidable~\cite{alechina2016complexity,berard2012concurrent}, including those that allow for inhibition~\cite{berard2012concurrent} which we utilise to model our case studies. If the problem is decidable for a given instance of a Petri net formalism (like e.g.\ for bounded nets) then it is usually of high computational complexity. In fact, most questions about the behaviour of bounded Petri nets are at least PSPACE-hard~\cite{esparza1998decidability}. We opt to use efficient overapproximation algorithms using both syntactic and local state information to generate stable stubborn sets. The work presented in this article is an extended version with full proofs of our conference paper~\cite{boenneland2019partial}. 
The stubborn set conditions presented in~\cite{boenneland2019partial} were insufficient to guarantee the preservation of reachability, while condition {\bf C} from the conference paper was found to be redundant. These issues are fixed, and in the present article we add an additional visibility condition on player 2 actions and we elaborate on its syntax-based algorithmic overapproximation for the Petri net games. The implementation is accordingly fixed and the efficiency of the method is still confirmed on an extended set of case studies compared to~\cite{boenneland2019partial}. \section{Preliminaries} We shall first introduce the basic notation and definitions. \begin{defi}[Game Labelled Transition System] A (deterministic) Game Labelled Transition System (GLTS) is a tuple $G = (S, A_1, A_2, {\rightarrow}, \mathit{Goal})$ where \begin{itemize} \item $S$ is a set of states, \item $A_1$ is a finite set of actions for player~$1$ (the controller), \item $A_2$ is a finite set of actions for player~$2$ (the environment) where $A_1 \cap A_2 = \emptyset$ and $A = A_1 \cup A_2$, \item ${\rightarrow} \subseteq S \times A \times S$ is a transition relation such that if $(s,a,s') \in
\ensuremath{en}\xspacesuremath{G}\xspaceedges$ and $(s,a,s'') \in \ensuremath{en}\xspacesuremath{G}\xspaceedges$ then $s' = s''$, and \item $\ensuremath{en}\xspacesuremath{\mathit{Goal}}\xspace \subseteq \ensuremath{en}\xspacesuremath{G}\xspacestates$ is a set of goal states. \ensuremath{en}\xspaced{itemize} \ensuremath{en}\xspaced{defi} \noindent Let $\ensuremath{en}\xspacesuremath{G}\xspace = (\ensuremath{en}\xspacesuremath{G}\xspacestates, \ensuremath{en}\xspacesuremath{G}\xspacelabels_1, \ensuremath{en}\xspacesuremath{G}\xspacelabels_2, \ensuremath{en}\xspacesuremath{G}\xspaceedges, \ensuremath{en}\xspacesuremath{\mathit{Goal}}\xspace)$ be a fixed GLTS for the remainder of the section. Whenever $(s,a,s') \in \ensuremath{en}\xspacesuremath{G}\xspaceedges$ we write $s \rtrans[a] s'$ and say that $a$ is enabled in $s$ and can be \emph{executed} in $s$ yielding $s'$. Otherwise we say that $a$ is \emph{disabled} in $s$. The set of \emph{enabled} player~$i$ actions where $i \in \{1,2\}$ in a state $s \in \ensuremath{en}\xspacesuremath{G}\xspacestates$ is given by $\ensuremath{en}\xspace_i(s) = \{ a \in \ensuremath{en}\xspacesuremath{G}\xspacelabels_i \mid \exists s' \in \ensuremath{en}\xspacesuremath{G}\xspacestates.\ s \rtrans[a] s'\}$. The set of all enabled actions is given by $\ensuremath{en}\xspace(s) = \ensuremath{en}\xspace_1(s) \cup \ensuremath{en}\xspace_2(s)$. For a state $s \in \ensuremath{en}\xspacesuremath{G}\xspacestates$ where $\ensuremath{en}\xspace(s) \neq \emptyset$ if $\ensuremath{en}\xspace_2(s) = \emptyset$ then we call $s$ a player~$1$ state, if $\ensuremath{en}\xspace_1(s) = \emptyset$ then we call $s$ a player~$2$ state, and otherwise we call it a \emph{mixed} state. If $\ensuremath{en}\xspace(s) = \emptyset$ then we call $s$ a \emph{deadlock} state. The GLTS $\ensuremath{en}\xspacesuremath{G}\xspace$ is called \emph{non-mixed} if all states are either player~$1$, player~$2$, or deadlock states. 
For a sequence of actions $w = a_1 a_2 \cdots a_n \in A^*$ we write $s \rtrans[w] s'$ if $s \rtrans[a_1] s_1 \rtrans[a_2] \cdots \rtrans[a_n] s'$ and say it is \emph{executable}. If $w \in A^{\omega}$, i.e.\ if it is infinite, then we write $s \rtrans[w]$. Actions that are a part of $w$ are said to occur in $w$. A sequence of states induced by $w \in A^* \cup A^{\omega}$ is called a \emph{run} and is written as $\pi = s_0 s_1\cdots$. We use $\Pi_{G}(s)$ to denote the set of all runs starting from a state $s \in S$ in GLTS $G$, s.t.\ for all $s_0 s_1 \cdots \in \Pi_G(s)$ we have $s_0 = s$, and $\Pi_{G} = \bigcup_{s \in S} \Pi_{G}(s)$ as the set of all runs. The number of actions in a run $\pi$ is given by the function $\ell : \Pi_G \to \mathbb{N}^0 \cup \{ \infty \}$ s.t.\ for a run $\pi = s_0 \cdots s_n$ we have $\ell(\pi) = n$ if $\pi$ is finite and otherwise $\ell(\pi) = \infty$. A position in a run $\pi = s_0 s_1 \ldots \in \Pi_{G}(s)$ is a natural number $i \in \mathbb{N}^0$ that refers to the state $s_i$ and is written as $\pi_i$.
A position $i$ can range from $0$ to $\ell(\pi)$ s.t.\ if $\pi$ is infinite then $i \in \mathbb{N}^0$ and otherwise $0 \leq i \leq \ell(\pi)$. Let $\Pi^{\mathit{max}}_{G}(s)$ be the set of all maximal runs starting from $s$, defined as $\Pi^{\mathit{max}}_{G}(s) = \{ \pi \in \Pi_{G}(s) \mid \ell(\pi) \neq \infty \implies \mathit{en}(\pi_{\ell(\pi)}) = \emptyset \}$. We omit the GLTS $G$ from the subscript of run sets if it is clear from the context. A reduced game is defined by a function called a reduction. \begin{defi}[Reduction] Let $G = (S, A_1, A_2, {\rightarrow}, \mathit{Goal})$ be a GLTS\@. A \emph{reduction} is a function $\reduction : S \to 2^{A}$. \end{defi} \begin{defi}[Reduced Game] Let $G = (S, A_1, A_2, {\rightarrow}, \mathit{Goal})$ be a GLTS and $\reduction$ be a reduction.
The \emph{reduced game} of $G$ by the reduction $\reduction$ is given by $G_{\reduction} = (S, A_1, A_2, \rtrans[][\reduction], \mathit{Goal})$ where $s \rtrans[a][\reduction] s'$ iff $s \rtrans[a] s'$ and $a \in \reduction(s)$. \end{defi} The set of actions $\reduction(s)$ is the \emph{stubborn set} of $s$ with the reduction $\reduction$. The set of non-stubborn actions for $s$ is defined as $\overline{\reduction(s)} = A \setminus \reduction(s)$. A (memoryless) strategy is a function that proposes the next action player~$1$ wants to execute. \begin{defi}[Strategy] Let $G = (S, A_1, A_2, {\rightarrow}, \mathit{Goal})$ be a GLTS\@. A \emph{strategy} is a function $\sigma: S \to A_1 \cup \{ \bot \}$ where for all $s \in S$ we have that if $\mathit{en}_1(s) \neq \emptyset$ then $\sigma(s) \in \mathit{en}_1(s)$ else $\sigma(s) = \bot$. \end{defi} The intuition is that in order to ensure progress, player~$1$ always has to propose an action if she has an enabled action. Let $\sigma$ be a fixed strategy for the remainder of the section.
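The construction of the reduced game is purely a restriction of the transition relation, which can be sketched in a few lines (our own illustration; the reduction `st` below is a hypothetical stubborn-set assignment, not one computed by the conditions of this paper):

```python
# Sketch: build the transition relation of the reduced game from a GLTS given as
# edges {(state, action): successor} and a reduction st: state -> set of actions.
# A transition survives only if its action is stubborn in its source state.
def reduced_edges(edges, st):
    return {(s, a): t for (s, a), t in edges.items() if a in st(s)}

edges = {("s1", "a"): "s2", ("s1", "c"): "s4", ("s2", "b"): "s3"}

# Hypothetical reduction: only action "a" is stubborn in s1, everything elsewhere.
st = lambda s: {"a"} if s == "s1" else {"a", "b", "c"}
```

With this reduction the transition $s_1 \rtrans[c] s_4$ is pruned while all other transitions are kept.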
We define a function $\stratnext{\sigma}{s}$ that returns the set of actions considered at $s \in S$ under $\sigma$ as: \[ \stratnext{\sigma}{s} = \begin{cases} \mathit{en}_2(s) \cup \{\sigma(s)\}\ &\text{if } \sigma(s) \neq \bot \\ \mathit{en}_2(s)\ &\text{otherwise.} \end{cases} \] Let $\Pi^{\mathit{max}}_{\sigma}(s) \subseteq \Pi^{\mathit{max}}(s)$ be the set of maximal runs subject to $\sigma$ starting at $s \in S$, defined as: \[\Pi^{\mathit{max}}_{\sigma}(s) = \{\pi \in \Pi^{\mathit{max}}(s) \mid \forall i \in \{1,\ldots,\ell(\pi)\}.\ \exists a \in \stratnext{\sigma}{\pi_{i-1}}.\ \pi_{i-1} \rtrans[a] \pi_{i}\} \ . \] \begin{defi}[Winning Strategy] Let $G = (S, A_1, A_2, {\rightarrow}, \mathit{Goal})$ be a GLTS and $s \in S$ be a state. A strategy $\sigma$ is a \emph{winning strategy} for player~$1$ at $s$ in $G$ iff for all $\pi \in \Pi^{\mathit{max}}_{\sigma}(s)$ there exists a position $i$ s.t.\ $\pi_i \in \mathit{Goal}$. A state $s$ is called \emph{winning} if there is a winning strategy for player~$1$ at $s$. \end{defi} If a state is winning for player~$1$ in $G$ then no matter what action sequence the environment chooses, eventually a goal state is reached.
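For finite games, the winning states of player~$1$ admit a simple least-fixpoint computation: a non-goal state is winning iff player~$1$ can fix one of her enabled actions (or none, when she has none) such that the resulting set of considered actions is nonempty and every successor is already winning. The sketch below is our own illustration of the definition under a dict-based encoding of the transition relation; it is not the dependency-graph algorithm used in the TAPAAL implementation.

```python
# Least-fixpoint computation of player-1 winning states for a finite GLTS
# given as edges {(state, action): successor} (illustrative sketch only).
def winning_states(states, a1, a2, edges, goal):
    def en(s, acts):
        return [a for a in acts if (s, a) in edges]
    win = set(goal)
    changed = True
    while changed:
        changed = False
        for s in states:
            if s in win:
                continue
            e1, e2 = en(s, a1), en(s, a2)
            # Player 1 must commit to one enabled action when she has any;
            # every enabled player-2 action remains possible.
            choices = [[c] for c in e1] or [[]]
            for c in choices:
                nxt = e2 + c
                if nxt and all(edges[(s, a)] in win for a in nxt):
                    win.add(s)
                    changed = True
                    break
    return win

# The game of Figure 1 with goal {s6}.
states = {"s1", "s2", "s3", "s4", "s5", "s6", "s7"}
edges = {("s1", "a"): "s2", ("s2", "b"): "s3", ("s3", "c"): "s6",
         ("s1", "c"): "s4", ("s4", "a"): "s5", ("s5", "b"): "s6",
         ("s5", "d"): "s7"}
win = winning_states(states, {"a", "b", "c"}, {"d"}, edges, {"s6"})
```

On this game the fixpoint yields $\{s_1, s_2, s_3, s_6\}$: the mixed state $s_5$ is not winning because the environment may fire $d$ into the deadlock $s_7$.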
Furthermore, for a given winning strategy $\sigma$ at $s$ in $G$, there is a finite number $n \in \mathbb{N}^0$ such that we always reach a goal state with at most $n$ action firings, which we prove in Lemma~\ref{lemma:depth}. We call this minimum number the \emph{strategy depth} of $\sigma$. \begin{defi}[Strategy Depth] Let $G = (S, A_1, A_2, {\rightarrow}, \mathit{Goal})$ be a GLTS, $s \in S$ a winning state for player~$1$ in $G$ and $\sigma$ a winning strategy at $s$ in $G$. Then $n \in \mathbb{N}^0$ is the \emph{depth} of $\sigma$ at $s$ in $G$ if: \begin{itemize} \item for all $\pi \in \Pi^{\mathit{max}}_{\sigma}(s)$ there exists $0 \leq i \leq n$ s.t.\ $\pi_i \in \mathit{Goal}$, and \item there exists $\pi' \in \Pi^{\mathit{max}}_{\sigma}(s)$ s.t.\ $\pi'_n \in \mathit{Goal}$ and for all $0 \leq j < n$ we have $\pi'_j \notin \mathit{Goal}$.
\end{itemize} \end{defi} \begin{lem}\label{lemma:depth} Let $G = (S, A_1, A_2, {\rightarrow}, \mathit{Goal})$ be a GLTS, $s \in S$ a winning state for player~$1$ in $G$, and $\sigma$ a winning strategy at $s$ in $G$. Then \begin{enumerate} \item\label{stratA} there exists $n \in \mathbb{N}$ that is the depth of $\sigma$ at $s$ in $G$, and \item\label{stratB} if $s \notin \mathit{Goal}$ then for all $a \in \stratnext{\sigma}{s}$ where $s \rtrans[a] s'$, the depth of $\sigma$ at $s'$ in $G$ is $m$ such that $0 \leq m < n$. \end{enumerate} \end{lem} \begin{proof} (\ref{stratA}): Due to $A_1$ and $A_2$ being finite and any $G$ being deterministic, we know that every state $s \in S$ is finitely branching. Since $s$ is a winning state for player~$1$ in $G$, we get that every run leads to a goal state in a finite number of actions. Therefore, due to K\"onig's lemma, the tree induced by all runs starting from $s$, with the leaves being the first occurring goal states, is a finite tree and hence such $n$ exists. (\ref{stratB}): Let $n$ be the depth of $\sigma$ at $s$ in $G$ and let $s \rtrans[a] s'$ such that $a \in \stratnext{\sigma}{s}$.
By contradiction let us assume that the depth of $\sigma$ at $s'$ is larger than or equal to $n$. However, this implies the existence of a run $\pi$ from $s'$ that contains $n$ or more non-goal states before reaching the goal. The run $s \pi$ now contradicts that the depth of $s$ is $n$. \end{proof} A set of actions for a given state and a given set of goal states is called an \emph{interesting set} if for any run leading to any goal state at least one action from the set of interesting actions has to be executed. \begin{defi}[Interesting Actions] Let $G = (S, A_1, A_2, {\rightarrow}, \mathit{Goal})$ be a GLTS and $s \in S$ a state. A set of actions $\interest{s}{\mathit{Goal}} \subseteq A$ is called an \emph{interesting set} of actions for $s$ and $\mathit{Goal}$ if whenever $s \notin \mathit{Goal}$, $w = a_1 \cdots a_n \in A^*$, $s \rtrans[w] s'$, and $s' \in \mathit{Goal}$ then there exists $i$, $1 \leq i \leq n$, such that $a_i \in \interest{s}{\mathit{Goal}}$.
\end{defi} \begin{exa}\label{ex:interesting} In Figure~\ref{fig:safe-interesting-example} we see an example of a GLTS $G = (S, A_1, A_2, {\rightarrow}, \mathit{Goal})$ where $S = \{ s_1, s_2, s_3, s_4, s_5, s_6, s_7 \}$ are the states denoted by circles, $A_1 = \{a,b,c\}$ is the set of player~$1$ actions, $A_2 = \{d\}$ is the set of player~$2$ actions, and ${\rightarrow}$ is denoted by the solid (controllable) and dashed (uncontrollable) transitions between states, labelled by the corresponding actions for player~$1$ and~$2$, respectively. Let $\mathit{Goal} = \{ s_6 \}$. We now consider different proposals for a set of interesting actions for the state $s_1$. The set $\{ b \}$ is an interesting set of actions in $s_1$ since the goal state $s_6$ cannot be reached without firing $b$ at least once. Furthermore, the sets $\{ a \}$ and $\{ c \}$ are also sets of interesting actions for the state $s_1$.
\begin{figure}[t] \centering \begin{tikzpicture}[font=\scriptsize,xscale=2.2,yscale=1.3] \tikzstyle{state}=[inner sep=0pt,circle,draw=black,very thick,fill=white,minimum height=5mm, minimum width=5mm,font=\small] \tikzstyle{empty}=[rectangle,draw=none,font=\small] \tikzstyle{reducedstate}=[inner sep=0pt,circle,draw=black,fill=white,minimum height=5mm, minimum width=5mm,font=\small] \tikzstyle{every label}=[black] \tikzstyle{playerArc}=[->,>=stealth,very thick] \tikzstyle{reducedplayerArc}=[->,>=stealth] \tikzstyle{opponentArc}=[->,>=stealth,dashed,very thick] \tikzstyle{reducedopponentArc}=[->,>=stealth,dashed] \begin{scope} \node [state] at (0,0) (s1) {$s_1$}; \node [state] at (1.2,0) (s2) {$s_2$}; \node [state] at (2.4,0) (s3) {$s_3$}; \node [state] at (0,-1.2) (s4) {$s_4$}; \node [state] at (1.2,-1.2) (s5) {$s_5$}; \node [state,label=right:$\in \mathit{Goal}$] at (2.4,-1.2) (s6) {$s_6$}; \node [state] at (1.2,-2.4) (s7) {$s_7$}; \node [empty] at (0,-2.4) {$\safe{s_1} = \{ a \}$}; \node [empty] at (0,-2.1) {$\interest{s_1}{\{s_6\}} = \{ a \}$}; \draw[playerArc] (s1) -- (s2) node[midway,above]{$a$} {}; \draw[playerArc] (s2) -- (s3) node[midway,above]{$b$} {}; \draw[playerArc] (s1) -- (s4) node[midway,left]{$c$} {}; \draw[playerArc] (s4) -- (s5) node[midway,above]{$a$} {}; \draw[playerArc] (s5) -- (s6) node[midway,above]{$b$} {}; \draw[playerArc] (s3) -- (s6) node[midway,right]{$c$} {}; \draw[opponentArc] (s5) -- (s7) node[midway,right]{$d$} {}; \end{scope} \end{tikzpicture} \caption{Example of safe and interesting sets of actions for a state $s_1$} \label{fig:safe-interesting-example} \end{figure} \end{exa} Player~$1$ also has to consider her safe actions.
A player~$1$ action is \emph{safe} in a given player~$1$ state if, for any player~$1$ action sequence not containing the safe action that does not enable any player~$2$ action, prefixing this sequence with the safe action will (in case the result is executable) also not enable any player~$2$ action. \begin{defi}[Safe Action] Let $G = (S, \Sigma_1, \Sigma_2, \rightarrow, \mathit{Goal})$ be a GLTS and $s \in S$ a state such that $\ensuremath{en}\xspace_2(s) = \emptyset$. An action $a \in \ensuremath{en}\xspace_1(s)$ is \emph{safe} in $s$ if whenever $w \in {(\Sigma_1 \setminus \{a\})}^*$ with $s \rtrans[w] s'$ s.t.\ $\ensuremath{en}\xspace_2(s') = \emptyset$ and $s \rtrans[aw] s''$ then $\ensuremath{en}\xspace_2(s'') = \emptyset$. The set of all safe actions for $s$ is written as $\safe{s}$. \end{defi} \begin{exa}\label{ex:safe} Consider again the GLTS in Figure~\ref{fig:safe-interesting-example}. We reasoned in Example~\ref{ex:interesting} that the set $\{ b \}$ is an interesting set of actions in the state $s_1$. However, $b$ is not a safe player~$1$ action in $s_1$ since by definition $b$ has to be enabled at $s_1$ to be safe. The set of enabled actions in $s_1$ is $\ensuremath{en}\xspace(s_1) = \{ a, c \}$, and of these two actions only $a$ is safe. The action $c$ is not safe since we have $s_1 \rtrans[a] s_2$ and $\ensuremath{en}\xspace_2(s_2) = \emptyset$ but $s_1 \rtrans[ca] s_5$ and $\ensuremath{en}\xspace_2(s_5) \neq \emptyset$. It is clear that $s_1$ is a winning state for player~$1$ and that player~$1$ must initially play $a$, as playing $c$ will eventually bring us to the mixed state $s_5$ from which player~$1$ does not have a winning strategy.
\end{exa} \section{Stable Reduction} In this section we introduce the notion of a \emph{stable} reduction $\reduction$ that provides at each state $s$ the set of actions $\reduction(s)$ that are sufficient to be explored so that the given reachability property is preserved in the reduced game. In the game setting, we have to guarantee the preservation of winning strategies for both players in the game. In what follows, we shall introduce a number of conditions (formulated in general terms of game labelled transition systems) that guarantee that a given reduction preserves winning strategies, and we shall call reductions satisfying these conditions \emph{stable}. For the remainder of the section let $s \in S$ be a state and $\mathit{Goal} \subseteq S$ be a set of goal states, and let $\interest{s}{\mathit{Goal}}$ be an arbitrary but fixed set of interesting actions for $s$ and $\mathit{Goal}$. \begin{defi}[Stable Reduction Conditions] A reduction $\reduction$ is called \emph{stable} if $\reduction$ satisfies for every $s \in S$ Conditions~\ref{rule:stub-init},~\ref{rule:stub},~\ref{rule:reach},~\ref{rule:game-1},~\ref{rule:game-2},~\ref{rule:safe},~\ref{rule:visible} and~\ref{rule:dead}. \begin{itemize}[left=6mm] \item[\textbf{I}]\namedlabel{rule:stub-init}{\textbf{I}} If $\ensuremath{en}\xspace_1(s) \neq \emptyset$ and $\ensuremath{en}\xspace_2(s) \neq \emptyset$ then $\ensuremath{en}\xspace(s) \subseteq \reduction(s)$. \item[\textbf{W}]\namedlabel{rule:stub}{\textbf{W}} For all $w \in \overline{\reduction(s)}^*$ and all $a \in \reduction(s)$, if $s \rtrans[wa] s'$ then $s \rtrans[aw] s'$.
\item[\textbf{R}]\namedlabel{rule:reach}{\textbf{R}} $\interest{s}{\mathit{Goal}} \subseteq \reduction(s)$. \item[\textbf{G1}]\namedlabel{rule:game-1}{\textbf{G1}} For all $w \in \overline{\reduction(s)}^*$, if $\ensuremath{en}\xspace_2(s) = \emptyset$ and $s \rtrans[w] s'$ then $\ensuremath{en}\xspace_2(s') = \emptyset$. \item[\textbf{G2}]\namedlabel{rule:game-2}{\textbf{G2}} For all $w \in \overline{\reduction(s)}^*$, if $\ensuremath{en}\xspace_1(s) = \emptyset$ and $s \rtrans[w] s'$ then $\ensuremath{en}\xspace_1(s') = \emptyset$. \item[\textbf{S}]\namedlabel{rule:safe}{\textbf{S}} $\ensuremath{en}\xspace_1(s) \cap \reduction(s) \subseteq \safe{s}$ or $\ensuremath{en}\xspace_1(s) \subseteq \reduction(s)$. \item[\textbf{V}]\namedlabel{rule:visible}{\textbf{V}} If there exists $w \in \Sigma_2^*$ s.t.\ $s \rtrans[w] s'$ and $s' \in \mathit{Goal}$ then $\ensuremath{en}\xspace_2(s) \subseteq \reduction(s)$. \item[\textbf{D}]\namedlabel{rule:dead}{\textbf{D}} If $\ensuremath{en}\xspace_2(s) \neq \emptyset$ then there exists $a \in \ensuremath{en}\xspace_2(s) \cap \reduction(s)$ s.t.\ for all $w \in \overline{\reduction(s)}^*$ where $s \rtrans[w] s'$ we have $a \in \ensuremath{en}\xspace_2(s')$. \end{itemize} \end{defi} If $s$ is a mixed state then Condition~\ref{rule:stub-init} ensures that all enabled actions are included in the reduction; that is, we do not attempt to reduce the state space from this state. Condition~\ref{rule:stub} states that we can swap the ordering of action sequences such that performing stubborn actions first still ensures that we can reach a given state (i.e.\ a stubborn action commutes with any sequence of nonstubborn actions). Condition~\ref{rule:reach} ensures that a goal state cannot be reached solely by exploring actions not in the stubborn set (i.e.\ we preserve the reachability of goal states).
Conditions~\ref{rule:game-1} and~\ref{rule:game-2} ensure that from any state belonging to player~$1$ (resp.\ player~$2$) it is not possible to reach any player~$2$ (resp.\ player~$1$) state or a mixed state by exploring only nonstubborn actions (i.e.\ the reachability of mixed states and opposing player states is preserved in the reduction). Condition~\ref{rule:safe} ensures that either all enabled stubborn player~$1$ actions are also safe, or if this is not the case then all enabled player~$1$ actions are included in the stubborn set. Condition~\ref{rule:visible} checks whether it is possible to reach a goal state by firing exclusively player~$2$ actions, and if so includes all enabled player~$2$ actions in the stubborn set. Condition~\ref{rule:dead} ensures that at least one player~$2$ action cannot be disabled solely by exploring nonstubborn actions. \begin{exa}\label{ex:stable} In Figure~\ref{fig:reduction-example} we see an example of a GLTS using the previously introduced graphical notation. Let $\mathit{Goal} = \{ s_8 \}$ be the set of goal states and let $\interest{s_1}{\mathit{Goal}} = \{ a \}$ be a fixed set of interesting actions. For the state $s_1$ we assume $\reduction(s_1) = \{a, c\}$ as this stubborn set satisfies the stable reduction conditions. We satisfy~\ref{rule:game-1} since $c$ has to be fired before we can reach the player~$2$ state $s_9$. For $s_1 \rtrans[ba] s_5$ and $s_1 \rtrans[bc] s_7$ we also have $s_1 \rtrans[ab] s_5$ and $s_1 \rtrans[cb] s_7$, so~\ref{rule:stub} is satisfied as well. Clearly $\reduction(s_1)$ contains the interesting set $\interest{s_1}{\mathit{Goal}}$ that we fixed to $\{a\}$, so~\ref{rule:reach} is satisfied. Condition~\ref{rule:safe} is satisfied since $\reduction(s_1) \cap \ensuremath{en}\xspace(s_1) \subseteq \safe{s_1}$.
We have that~\ref{rule:stub-init},~\ref{rule:game-2},~\ref{rule:visible}, and~\ref{rule:dead} are satisfied as well since their antecedents are not true. Thick lines in the figure indicate transitions and states that are preserved by a stable reduction $\reduction$, while thin lines indicate transitions and states that are removed by the same reduction. \begin{figure}[t] \centering \begin{tikzpicture}[font=\scriptsize,xscale=2.2,yscale=1.3] \tikzstyle{state}=[inner sep=0pt,circle,draw=black,very thick,fill=white,minimum height=5mm, minimum width=5mm,font=\small] \tikzstyle{empty}=[rectangle,draw=none,font=\small] \tikzstyle{reducedstate}=[inner sep=0pt,circle,draw=black,fill=white,minimum height=5mm, minimum width=5mm,font=\small] \tikzstyle{every label}=[black] \tikzstyle{playerArc}=[->,>=stealth,very thick] \tikzstyle{reducedplayerArc}=[->,>=stealth] \tikzstyle{opponentArc}=[->,>=stealth,dashed,very thick] \tikzstyle{reducedopponentArc}=[->,>=stealth,dashed] \begin{scope} \node [state] at (0,0) (s1) {$s_1$}; \node [state] at (1.2,1.2) (s2) {$s_2$}; \node [reducedstate] at (1.2,0) (s3) {$s_3$}; \node [state] at (1.2,-1.2) (s4) {$s_4$}; \node [reducedstate] at (2.4,1.2) (s5) {$s_5$}; \node [state] at (2.4,0) (s6) {$s_6$}; \node [state] at (2.4,-1.2) (s7) {$s_7$}; \node [state,label=right:$\in \mathit{Goal}$] at (3.6,0) (s8) {$s_8$}; \node [state] at (3.6,-1.2) (s9) {$s_9$}; \node [state] at (4.8,-1.2) (s10) {$s_{10}$}; \node [empty] at (0,1.5) {$\safe{s_1} = \{ a, b, c \}$}; \node [empty] at (0,1.2) {$\interest{s_1}{\{s_8\}} = \{ a \}$}; \draw[playerArc] (s1) -- (s2) node[midway,above]{$a$} {}; \draw[reducedplayerArc] (s1) -- (s3) node[midway,above]{$b$} {}; \draw[playerArc] (s1) -- (s4) node[midway,above]{$c$} {}; \draw[reducedplayerArc] (s2) -- (s5) node[midway,above]{$b$} {}; \draw[playerArc] (s2) -- (s6) node[pos=0.3,above]{$c$} {}; \draw[reducedplayerArc] (s3) -- (s5) node[pos=0.7,above]{$a$} {}; \draw[reducedplayerArc] (s3) -- (s7) node[pos=0.3,above]{$c$} {}; \draw[playerArc] (s4) -- (s6) node[pos=0.7,above]{$a$} {}; \draw[playerArc] (s4) -- (s7) node[midway,above]{$b$} {}; \draw[reducedplayerArc] (s5) -- (s8) node[midway,above]{$c$} {}; \draw[playerArc] (s6) -- (s8) node[midway,above]{$b$} {}; \draw[playerArc] (s7) -- (s8) node[midway,above]{$a$} {}; \draw[playerArc] (s7) -- (s9) node[midway,above]{$d$} {}; \draw[opponentArc] (s9) -- (s10) node[midway,above]{$e$} {}; \end{scope} \end{tikzpicture} \caption{Example of a stable reduction for a state $\mathindex{s}{1}$} \label{fig:reduction-example} \end{figure} \end{exa} We shall now prove the correctness of our stubborn set reduction. We first observe that if a goal state is reachable from some state, then the state has at least one enabled action that is also in the stubborn set. \begin{lem}\label{lemma:early-termination} Let $G = (S, \Sigma_1, \Sigma_2, \rightarrow, \mathit{Goal})$ be a GLTS, $\reduction$ a reduction that satisfies Conditions~\ref{rule:stub} and~\ref{rule:reach}, and $s \in S \setminus \mathit{Goal}$ a state. If there exists $w \in \Sigma^*$ s.t.\ $s \rtrans[w] s'$ and $s' \in \mathit{Goal}$ then $\reduction(s) \cap \ensuremath{en}\xspace(s) \neq \emptyset$. \end{lem} \begin{proof} Assume that there exists $w = a_1 \cdots a_n \in \Sigma^*$ s.t.\ $s \rtrans[w] s'$ and $s' \in \mathit{Goal}$.
If $w \in \overline{\reduction(s)}^*$ then by Condition~\ref{rule:reach} we must have $s' \notin \mathit{Goal}$, which contradicts our assumption. Therefore there must exist an action occurring in $w$ that is in the stubborn set of $s$. Let $a_i \in \reduction(s)$ be the first such action, i.e.\ for all $j$, $1 \leq j < i$, we have $a_j \notin \reduction(s)$. Clearly, we have $a_1 \cdots a_{i-1} \in \overline{\reduction(s)}^*$ and by Condition~\ref{rule:stub} we have $a_i \in \reduction(s) \cap \ensuremath{en}\xspace(s)$. \end{proof} The correctness of stable stubborn reductions is proved by the next two lemmas. Both lemmas are proved by induction on the depth of a winning strategy for player~$1$ in the game. \begin{lem}\label{lemma1} Let $G = (S, \Sigma_1, \Sigma_2, \rightarrow, \mathit{Goal})$ be a GLTS and $\reduction$ a stable reduction. If a state $s \in S$ is winning for player $1$ in $G$ then $s$ is also winning for player $1$ in $G_{\reduction}$. \end{lem} \begin{proof} Assume that $s \in S$ is a winning state for player $1$ in $G$. By definition we have that there exists a player $1$ strategy $\sigma$ such that for all $\pi \in \Pi^{\mathit{max}}_{G,\sigma}(s)$ there exists a position $i$ s.t.\ $\pi_i \in \mathit{Goal}$.
By induction on $n$ we now prove the induction hypothesis $\mathit{IH}(n)$: ``If $s$ is a winning state for player $1$ in $G$ with a strategy with a depth of $n$ then $s$ is a winning state for player $1$ in $G_{\reduction}$.'' \emph{Base step}. Let $n = 0$. Then since $n$ is the depth at $s$ in $G$ we must have $s \in \mathit{Goal}$ and so $s$ is trivially a winning state for player $1$ also in $G_{\reduction}$. \emph{Induction step}. Let $n > 0$ and let $\sigma$ be a winning strategy with depth $n$ for $s$. There are three cases: (1) $\ensuremath{en}\xspace_1(s) \neq \emptyset$ and $\ensuremath{en}\xspace_2(s) \neq \emptyset$, (2) $\ensuremath{en}\xspace_2(s) = \emptyset$, and (3) $\ensuremath{en}\xspace_1(s) = \emptyset$. A deadlock at $s$, i.e.\ $\ensuremath{en}\xspace(s) = \emptyset$, cannot occur as we would otherwise have $n = 0$. Case (1): Let $\ensuremath{en}\xspace_1(s) \neq \emptyset$ and $\ensuremath{en}\xspace_2(s) \neq \emptyset$. We assume that $s$ is a winning state for player~$1$ in $G$ with a strategy $\sigma$ with a depth of $n$ and we want to show that there exists a strategy $\sigma'$ s.t.\ $s$ is a winning state for player $1$ in $G_{\reduction}$ with $\sigma'$. Since $s$ is a winning state for player $1$ in $G$ with $\sigma$, if $s \rtrans[a] s'$ where $a \in \stratnext{\sigma}{s}$ then $s'$ is a winning state for player $1$ in $G$ with some $m < n$ as the depth of $\sigma$ at $s'$ in $G$, due to property~\ref{stratB} of Lemma~\ref{lemma:depth}.
By the induction hypothesis $s'$ is a winning state for player $1$ in $G_{\reduction}$ and there exists a strategy $\sigma'$ s.t.\ $\sigma'$ is a winning strategy for player $1$ at $s'$ in $G_{\reduction}$. By Condition~\ref{rule:stub-init} we know $\ensuremath{en}\xspace_1(s) \subseteq \reduction(s)$, implying that $\sigma(s) \in \reduction(s)$. Player $1$ can therefore choose the same action as proposed in the original game s.t.\ $\sigma'(s) = \sigma(s)$. From the definition of a winning strategy we have that no matter what action player $2$ chooses, the resulting state is a winning state for player $1$, and hence $s$ is a winning state for player $1$ in $G_{\reduction}$. Case (2): Let $\ensuremath{en}\xspace_2(s) = \emptyset$. Assume that $s$ is a winning state for player $1$ in $G$ with a strategy $\sigma$ with a depth of $n$. We want to show that there exists a strategy $\sigma'$ s.t.\ $s$ is a winning state for player $1$ in $G_{\reduction}$ with $\sigma'$. Let $\pi \in \Pi^{\mathit{max}}_{G,\sigma}(s)$ be any run and $\pi_0 = s$. Since $s$ is a winning state for player $1$ in $G$ with $\sigma$ we know there exists an $m \leq n$ s.t.\ $\pi_0 \rtrans[a_1] \pi_1 \rtrans[a_2] \cdots \rtrans[a_m] \pi_m$ and $\pi_m \in \mathit{Goal}$. Let $w = a_1 \cdots a_m$. We start by showing that there exists $i$, $1 \leq i \leq m$, such that $a_i \in \reduction(s)$. Assume that $w \in \overline{\reduction(s)}^*$. Then we have $\pi_m \notin \mathit{Goal}$ due to Condition~\ref{rule:reach}, a contradiction. Therefore there must exist $i$, $1 \leq i \leq m$, s.t.\ $a_i \in \reduction(s)$.
Let $i$ be minimal in the sense that for all $j$, $1 \leq j < i$, we have $a_j \notin \reduction(s)$. We can then divide $w$ s.t.\ $w = va_{i}u$ where $v \in \overline{\reduction(s)}^*$, and we have $s \rtrans[a_i] s'_0 \rtrans[v] \pi_i \rtrans[u] \pi_m$ due to Condition~\ref{rule:stub}, as well as $s \rtrans[a_i][\reduction] s'_0$. There are two subcases: (2.1) $a_i \in \safe{s}$ or (2.2) $a_i \notin \safe{s}$. \begin{itemize} \item Case (2.1): Let $a_i \in \safe{s}$. For all $1 \leq j < i$ we have $\ensuremath{en}\xspace_2(\pi_j) = \emptyset$ due to $i$ being minimal and Condition~\ref{rule:game-1}. Hence, since $a_i \in \safe{s}$, all intermediate states in $s \rtrans[a_{i}v] \pi_i$ are player $1$ states, as otherwise $a_i$ would not be a safe action by the definition of safe actions. We have that $s'_0$ is a player $1$ state and let $v = a_1 a_2 \cdots a_{i-1}$ s.t.\ $s'_0 \rtrans[a_1] s'_1 \rtrans[a_2] \cdots \rtrans[a_{i-1}] \pi_i$ and for all $k$, $1 \leq k < i-1$, we have $\ensuremath{en}\xspace_2(s'_k) = \emptyset$. Let $\sigma''$ be defined such that for all $j$, $0 < j < i$, we have $\sigma''(s'_{j-1}) = a_j$, and let $\sigma''$ from $\pi_i$ onwards be defined as $\sigma$. Clearly, $\sigma''$ is a winning strategy for player $1$ at $s'_0$ in $G$. Due to property~\ref{stratB} of Lemma~\ref{lemma:depth} the depth of $\sigma''$ at $\pi_i$ in $G$ is some $k \leq n-i$. Since $G$ is deterministic, by following the strategy $\sigma''$ from $s'_0$ we always reach $\pi_i$ in $i-1$ actions. From this we can infer that the depth of $\sigma''$ at $s'_0$ in $G$ is at most $k+i-1$, which is clearly smaller than $n$.
Therefore $s'_0$ is a winning state for player $1$ in $G$ with at most $k+i-1 < n$ as the depth of $\sigma''$ at $s'_0$ in $G$. By the induction hypothesis $s'_0$ is a winning state for player $1$ in $G_{\reduction}$ and there exists a strategy $\sigma'$ s.t.\ $\sigma'$ is a winning strategy for player $1$ at $s'_0$ in $G_{\reduction}$. Player $1$ can then choose $a_i$ in the reduced game such that $\sigma'(s) = a_i$, and $s$ is a winning state for player $1$ in $G_{\reduction}$. \item Case (2.2): Let $a_i \notin \safe{s}$. Since $a_i \notin \safe{s}$ we have $\reduction(s) \cap \ensuremath{en}\xspace_1(s) \nsubseteq \safe{s}$ and thus $\ensuremath{en}\xspace_1(s) \subseteq \reduction(s)$ by Condition~\ref{rule:safe}. If $s \rtrans[\sigma(s)] s'$ then $s'$ is a winning state for player $1$ in $G$ with some $m < n$ as the depth of $\sigma$ at $s'$ in $G$, following property~\ref{stratB} of Lemma~\ref{lemma:depth}. By the induction hypothesis $s'$ is a winning state for player $1$ in $G_{\reduction}$ and there exists a strategy $\sigma'$ s.t.\ $\sigma'$ is a winning strategy for player $1$ at $s'$ in $G_{\reduction}$. Player $1$ can choose the same action as proposed in the original game such that $\sigma'(s) = \sigma(s)$, and $s$ is a winning state for player $1$ in $G_{\reduction}$. \end{itemize} Case (3): Let $\ensuremath{en}\xspace_1(s) = \emptyset$. Assume that $s$ is a winning state for player $1$ in $G$ with $\sigma$ as the winning strategy.
We want to show that there exists a strategy $\sigma'$ s.t.\ $s$ is a winning state for player $1$ in $G_{\reduction}$ with $\sigma'$. Since $\ensuremath{en}\xspace_1(s) = \emptyset$ we have $\sigma(s) = \sigma'(s) = \bot$ by the definition of strategies. We have from the definition of a winning strategy that no matter what action player~$2$ chooses, the resulting state is a winning state for player $1$. What remains to be shown is that at least one enabled player $2$ action is included in $\reduction(s)$. As $\ensuremath{en}\xspace_2(s) \neq \emptyset$, due to Condition~\ref{rule:dead} we get that there exists $a \in \ensuremath{en}\xspace_2(s) \cap \reduction(s)$, and this last case is also established. \end{proof} \begin{lem}\label{lemma2} Let $G = (S, \Sigma_1, \Sigma_2, \rightarrow, \mathit{Goal})$ be a GLTS and $\reduction$ a stable reduction. If a state $s \in S$ is winning for player $1$ in $G_{\reduction}$ then $s$ is also winning for player $1$ in $G$. \end{lem} \begin{proof} Assume that $s \in S$ is a winning state for player $1$ in $G_{\reduction}$. By definition we have that there exists a strategy $\sigma$ s.t.\ for all $\pi \in \Pi^{\mathit{max}}_{G_{\reduction},\sigma}(s)$ there exists a position $i$ s.t.\ $\pi_i \in \mathit{Goal}$. Let $\sigma$ be fixed for the remainder of the proof.
Let $n$ be the depth of $\sigma$ at $s$ in $G_{\reduction}$. By induction on $n$ we prove the induction hypothesis $\mathit{IH}(n)$: ``If $s$ is a winning state for player $1$ in $G_{\reduction}$ with a strategy with a depth of $n$ then $s$ is a winning state for player $1$ in $G$.'' \emph{Base step}. If $n = 0$ then since $n$ is the depth at $s$ in $G_{\reduction}$ we must have $s \in \mathit{Goal}$, implying that $s$ is a winning state for player $1$ also in $G$. \emph{Induction step}. Let $n > 0$ and let $s$ be a winning state for player $1$ in $G_{\reduction}$ with a strategy with a depth of $n$. There are three cases: (1) $\ensuremath{en}\xspace_1(s) \cap \reduction(s) \neq \emptyset$ and $\ensuremath{en}\xspace_2(s) \cap \reduction(s) \neq \emptyset$, (2) $\ensuremath{en}\xspace_2(s) \cap \reduction(s) = \emptyset$, and (3) $\ensuremath{en}\xspace_1(s) \cap \reduction(s) = \emptyset$. A deadlock at $s$ in $G_{\reduction}$ such that $\ensuremath{en}\xspace(s) \cap \reduction(s) = \emptyset$ is not possible as otherwise we would have the case where $n = 0$. Case (1): Let $\ensuremath{en}\xspace_1(s) \cap \reduction(s) \neq \emptyset$ and $\ensuremath{en}\xspace_2(s) \cap \reduction(s) \neq \emptyset$. We assume that $s$ is a winning state for player $1$ in $G_{\reduction}$ with a strategy $\sigma$ with a depth of $n$. We want to show that there exists a strategy $\sigma'$ s.t.\ $s$ is a winning state for player $1$ in $G$ with $\sigma'$.
Since $s$ is a winning state for player $1$ in $G_{\reduction}$ with $\sigma$, whenever $s \rtrans[\sigma(s)][\reduction] s'$ or $s \rtrans[a][\reduction] s'$ where $a \in \ensuremath{en}\xspace_2(s) \cap \reduction(s)$ then $s'$ is a winning state for player $1$ in $G_{\reduction}$ with some $m < n$ as the depth of $\sigma$ at $s'$ in $G_{\reduction}$, following property~\ref{stratB} of Lemma~\ref{lemma:depth}. By the induction hypothesis $s'$ is a winning state for player $1$ in $G$ and there exists a strategy $\sigma'$ s.t.\ $\sigma'$ is a winning strategy for player $1$ at $s'$ in $G$. Since $s$ is a mixed state in $G_{\reduction}$, $s$ must also be a mixed state in $G$ due to $\rtrans[][\reduction] \subseteq \rightarrow$. This implies that $s \rtrans[\sigma(s)] s'$. Therefore player $1$ can choose the same action as proposed in the reduced game such that $\sigma'(s) = \sigma(s)$. Furthermore we have $\ensuremath{en}\xspace_2(s) \cap \reduction(s) = \ensuremath{en}\xspace_2(s)$ from Condition~\ref{rule:stub-init}. From this we can conclude that $s$ is a winning state for player $1$ in $G$ with strategy $\sigma'$. Case (2): Let $\ensuremath{en}\xspace_2(s) \cap \reduction(s) = \emptyset$. Assume that $s$ is a winning state for player $1$ in $G_{\reduction}$ with a strategy $\sigma$ with a depth of $n$. We want to show that there exists a strategy $\sigma'$ s.t.\ $s$ is a winning state for player $1$ in $G$ with $\sigma'$.
Since $s$ is a winning state for player $1$ in $G_{\reduction}$ with $\sigma$ we have $s \rtrans[\sigma(s)][\reduction] s'$ and $s'$ is a winning state for player $1$ in $G_{\reduction}$ with some $m < n$ as the depth of $\sigma$ at $s'$ in $G_{\reduction}$, following property~\ref{stratB} of Lemma~\ref{lemma:depth}. By the induction hypothesis $s'$ is a winning state for player $1$ in $G$ and there exists a strategy $\sigma'$ s.t.\ $\sigma'$ is a winning strategy for player $1$ at $s'$ in $G$. Trivially we have that $s \rtrans[\sigma(s)] s'$ since $\rtrans[][\reduction] \subseteq \rightarrow$. Therefore player $1$ can choose the same action as proposed in the reduced game, i.e.\ $\sigma'(s) = \sigma(s)$. Next we show by contradiction that $s$ is a player~$1$ state also in $G$. Assume $\ensuremath{en}\xspace_2(s) \neq \emptyset$, i.e.\ that $s$ is a mixed state in $G$. From this we can infer by Condition~\ref{rule:stub-init} that $\ensuremath{en}\xspace(s) \subseteq \reduction(s)$ and $\ensuremath{en}\xspace_2(s) \cap \reduction(s) \neq \emptyset$, which is a contradiction. Therefore we have $\ensuremath{en}\xspace_2(s) = \emptyset$, i.e.\ $s$ is a player $1$ state also in $G$, and $s$ is a winning state for player $1$ in $G$ with strategy $\sigma'$. Case (3): Let $\ensuremath{en}\xspace_1(s) \cap \reduction(s) = \emptyset$. Assume that $s$ is a winning state for player $1$ in $G_{\reduction}$ with a strategy $\sigma$ with a depth of $n$.
We want to show that there exists a strategy $\sigma'$ s.t.\ $s$ is a winning state for player $1$ in $G$ with $\sigma'$. Since $\ensuremath{en}\xspace_1(s) \cap \reduction(s) = \emptyset$ we have $\sigma(s) = \bot$. Furthermore, we have $\ensuremath{en}\xspace_1(s) = \emptyset$ since otherwise with Condition~\ref{rule:stub-init} we would be able to infer that $\ensuremath{en}\xspace_1(s) \cap \reduction(s) \neq \emptyset$, which is a contradiction. We define $\sigma'=\sigma$. What remains to be shown is that $s$ is a winning state for player $1$ in $G$. For the sake of contradiction assume that this is not the case, i.e.\ that there exists $\pi \in \Pi^{\mathit{max}}_{G,\sigma'}(s)$ such that \[s=\pi_0 \rtrans[a_1] \pi_1 \rtrans[a_2] \pi_2 \rtrans[a_3] \cdots\] and $\pi_i \notin \mathit{Goal}$ for all positions $i$. We shall first argue that $a_1 \notin \reduction(s)$. If this is not the case, then $\pi_0 \rtrans[a_1][\reduction] \pi_1$ also in the reduced game $G_{\reduction}$. Due to our assumption that $s = \pi_0$ is a winning state for player $1$ in $G_{\reduction}$ and $a_1 \in \Sigma_2$, we know that also $\pi_1$ is a winning state for player $1$ in $G_{\reduction}$ with some $m < n$ as the depth of $\sigma$ at $\pi_1$ in $G_{\reduction}$, following property~\ref{stratB} of Lemma~\ref{lemma:depth}. By the induction hypothesis $\pi_1$ is a winning state for player $1$ in $G$, which contradicts the existence of the maximal path $\pi$ with no goal states. We may thus assume that $a_1 \notin \reduction(s)$.
Let $j > 1$ be the smallest index such that $a_j \in \reduction(s)$ and $a_1a_2\cdots a_{j-1}a_j \in \Sigma_2^*$. Such an index must exist because of the following case analysis. \begin{itemize} \item Either the sequence $a_1a_2\cdots$ contains an action that belongs to $\Sigma_1$ (note that by our assumption $a_1 \notin \Sigma_1$). Due to Condition~\ref{rule:game-2} there must exist an action in the sequence that is stubborn in $s$; let $j > 1$ be the smallest index such that $a_j \in \reduction(s)$. As $a_j$ is the first action that is stubborn, we get that $a_1a_2\cdots a_{j-1} a_j \in \Sigma_2^*$, as otherwise the existence of $i \leq j$ where $a_i \in \Sigma_1$ contradicts the minimality of $j$ due to Condition~\ref{rule:game-2}. \item Otherwise the sequence $a_1a_2\cdots$ consists solely of actions from $\Sigma_2$. If the sequence contains a stubborn action then we are done; similarly, if the sequence is finite and ends in a deadlock, we get by Condition~\ref{rule:dead} that there must be a $j > 1$ where $a_j \in \reduction(s)$ as required. The last option is that the sequence $a_1a_2\cdots$ is infinite and does not contain any stubborn action. By Condition~\ref{rule:dead} there exists $a \in \ensuremath{en}\xspace_2(s) \cap \reduction(s)$ such that for all $i > 0$ we have $s \rtrans[a_1 \cdots a_i] \pi_i \rtrans[a] \pi_i'$ and then by Condition~\ref{rule:stub} we get $s \rtrans[a] \pi_0' \rtrans[a_1 \cdots a_i] \pi_i'$. This implies that from $\pi_0'$ we can also execute the infinite sequence of actions $a_1a_2\cdots$, while Condition~\ref{rule:visible} guarantees that none of the states visited during this execution is a goal state.
Hence the state $\pi_0'$ must be losing for player $1$ in $\ensuremath{G}\xspace$, which contradicts the fact that, by the induction hypothesis, $\pi_0'$ is winning for player $1$ in $\ensuremath{G}\xspace$, as $s \rtrans[a] \pi_0'$ with $a \in \reduction(s)$ and the depth of the player $1$ winning strategy at $\pi_0'$ in $\ensuremath{G}\xspace_{\reduction}$ is smaller than the depth at $s$ in $\ensuremath{G}\xspace_{\reduction}$. Hence there cannot be any infinite sequence of nonstubborn actions starting from $s$. \end{itemize} As we have now established that there is a smallest index $j > 1$ such that $a_j \in \reduction(s)$ and $a_1a_2\cdots a_{j-1}a_j \in A_2^*$, the minimality of $j$ implies that $a_1a_2\cdots a_{j-1} \in \overline{\reduction(s)}^*$. This means that we can apply Condition~\ref{rule:stub} and conclude that there exists a maximal run $\pi'$ given by \[s \rtrans[a_j] s' \rtrans[a_1 a_2 \cdots a_{j-1}] \pi_j \rtrans[a_{j+1}] \pi_{j+1} \rtrans[a_{j+2}] \cdots\] that from $\pi_j$ onwards is identical to the run $\pi$. Hence $\pi_i \notin \mathit{Goal}$ for all $i \geq j$. We notice that also the intermediate states in the prefix of the run $\pi'$ cannot be goal states, which is implied by Condition~\ref{rule:visible} and the fact that $a_1 \in \mathit{en}_2(s)$, $a_1a_2\cdots a_{j-1}a_j \in A_2^*$, and $a_1 \notin \reduction(s)$.
However, as $a_j \in \reduction(s)$ we get $s \rtrans[a_j][\reduction] s'$, and because $a_j \in A_2$ we know that $s'$ is a winning state for player $1$ in $\ensuremath{G}\xspace_{\reduction}$ with $m < n$ as the depth of $\sigma$ at $s'$ in $\ensuremath{G}\xspace_{\reduction}$, following property~\ref{stratB} of Lemma~\ref{lemma:depth}. By the induction hypothesis $s'$ is a winning state for player $1$ in $\ensuremath{G}\xspace$, which contradicts the existence of a maximal run from $s'$ that contains no goal states. Hence the proof of Case (3) is finished. \end{proof} We can now present the main theorem showing that stable reductions preserve the winning strategies of both players in the game. \begin{thm}[Strategy Preservation for GLTS]\label{theorem:preservation-1} Let $\ensuremath{G}\xspace = (S, A_1, A_2, {\rightarrow}, \mathit{Goal})$ be a GLTS and $\reduction$ a stable reduction. A state $s \in S$ is winning for player $1$ in $\ensuremath{G}\xspace$ iff $s$ is winning for player $1$ in $\ensuremath{G}\xspace_{\reduction}$. \end{thm} \begin{proof} Follows from Lemmas~\ref{lemma1} and~\ref{lemma2}. \end{proof} \begin{rem}\label{rem:error} In~\cite{boenneland2019partial} we omitted Condition~\ref{rule:visible} from the definition of stable reduction, which implied that Lemma~\ref{lemma2} (as it was stated in~\cite{boenneland2019partial}) did not hold. We illustrate this in Figure~\ref{fig:counter}, where all actions are player $2$ actions and the goal state is $s_2$.
Clearly, player $1$ does not have a winning strategy, as player $2$ can play the action $b$ followed by $a$ and reach the deadlock state $s_4$ without visiting the goal state. The stubborn set $\reduction(s_1) = \{ a \}$, on the other hand, satisfies all conditions of the stable reduction except for~\ref{rule:visible}. It nevertheless breaks Lemma~\ref{lemma2}, because in the reduced system the action $b$ in $s_1$ is now excluded and the (only) stubborn action $a$ for the environment brings us to a goal state. It is therefore the case that in the original game $s_1$ is not a winning state for player $1$, but in the reduced game it is. The extra Condition~\ref{rule:visible} introduced in this article forces us to include all enabled actions in $s_1$ into the stubborn set, and hence the validity of Lemma~\ref{lemma2} is recovered. \begin{figure}[t] \centering \begin{tikzpicture}[font=\scriptsize,xscale=2,yscale=2] \tikzstyle{state}=[inner sep=0pt,circle,draw=black,very thick,fill=white,minimum height=5mm, minimum width=5mm,font=\small] \tikzstyle{empty}=[rectangle,draw=none,font=\small] \tikzstyle{reducedstate}=[inner sep=0pt,circle,draw=black,fill=white,minimum height=5mm, minimum width=5mm,font=\small] \tikzstyle{every label}=[black] \tikzstyle{playerArc}=[->,>=stealth,very thick] \tikzstyle{reducedplayerArc}=[->,>=stealth] \tikzstyle{opponentArc}=[->,>=stealth,dashed,very thick] \tikzstyle{reducedopponentArc}=[->,>=stealth,dashed] \begin{scope} \node [state] at (0,0) (s1) {$s_1$}; \node [state] at (1.2,0) (s2) {$s_2$}; \node [state] at (0,-1.2) (s3) {$s_3$}; \node [state] at (1.2,-1.2) (s4) {$s_4$}; \node [empty] at (1.7,0) {$\in \mathit{Goal}$}; \draw[opponentArc] (s1) -- (s2) node[midway,above]{$a$} {}; \draw[opponentArc] (s1) -- (s3) node[midway,left]{$b$} {}; \draw[opponentArc] (s2) -- (s4) node[midway,left]{$b$} {}; \draw[opponentArc] (s3) -- (s4) node[midway,above]{$a$} {}; \end{scope}
\end{tikzpicture} \caption{Example showing the importance of Condition~\ref{rule:visible}} \label{fig:counter} \end{figure} \end{rem} Finally, we notice that for non-mixed games we can simplify the conditions of stable reductions by removing the requirement on safe actions. \begin{thm}[Strategy Preservation for Non-Mixed GLTS]\label{theorem:preservation-2} Let $\ensuremath{G}\xspace = (S, A_1, A_2, {\rightarrow}, \mathit{Goal})$ be a non-mixed GLTS and $\reduction$ a stable reduction with Condition~\ref{rule:safe} excluded. A state $s \in S$ is winning for player $1$ in $\ensuremath{G}\xspace$ iff $s$ is winning for player $1$ in $\ensuremath{G}\xspace_{\reduction}$. \end{thm} \begin{proof} In Lemma~\ref{lemma2}, Condition~\ref{rule:safe} is not used at all. In Lemma~\ref{lemma1}, subcase (2.2) is the only one that relies on Condition~\ref{rule:safe}. Because there are no mixed states, the arguments in subcase (2.1) are valid irrespective of whether $a_i$ is safe or not. \end{proof} \section{Stable Reductions on Petri Net Games} We now introduce the formalism of Petri net games and show how to algorithmically construct stable reductions in a syntax-driven manner.
\begin{defi}[Petri Net Game] A \emph{Petri net game} is a tuple $N = (P, T_1, T_2, W, I)$ where \begin{itemize} \item $P$ and $T = T_1 \uplus T_2$ are finite sets of places and transitions, respectively, such that $P \cap T = \emptyset$ and where the transitions are partitioned into player $1$ and player $2$ transitions, \item $W: (P \times T) \cup (T \times P) \rightarrow \mathbb{N}^0$ is a weight function for regular arcs, and \item $I: (P \times T) \rightarrow \mathbb{N}^{\infty}$ is a weight function for inhibitor arcs. \end{itemize} A \emph{marking} $M$ is a function $M: P \to \mathbb{N}^0$ and $\mathcal{M}(N)$ denotes the set of all markings of $N$. \end{defi} For the rest of this section, let $N = (P, T_1, T_2, W, I)$ be a fixed Petri net game such that $T = T_1 \uplus T_2$. Let us first fix some useful notation.
For a place or transition $x$, we denote the \emph{preset} of $x$ as $\preset{x} = \{ y \in P \cup T \mid W(y,x) > 0 \}$, and the \emph{postset} of $x$ as $\postset{x} = \{ y \in P \cup T \mid W(x,y) > 0 \}$. For a transition $t$, we denote the \emph{inhibitor preset} of $t$ as ${}^{\circ}t = \{ p \in P \mid I(p,t) \neq \infty \}$, and the \emph{inhibitor postset} of a place $p$ as $p^{\circ} = \{ t \in T \mid I(p,t) \neq \infty \}$. For a place $p$ we define the \emph{increasing preset} of $p$, containing all transitions that increase the number of tokens in $p$, as $\preincr{p} = \{ t \in \preset{p} \mid W(t,p) > W(p,t) \}$, and similarly the \emph{decreasing postset} of $p$ as $\postdecr{p} = \{ t \in \postset{p} \mid W(t,p) < W(p,t) \}$. For a transition $t$ we define the \emph{decreasing preset} of $t$, containing all places that have their number of tokens decreased by firing $t$, as $\predecr{t} = \{ p \in \preset{t} \mid W(p,t) > W(t,p) \}$, and similarly the \emph{increasing postset} of $t$ as $\postincr{t} = \{ p \in \postset{t} \mid W(p,t) < W(t,p) \}$.
For a set $X$ of either places or transitions, we extend the notation as $\preset{X} = \bigcup_{x \in X} \preset{x}$ and $\postset{X} = \bigcup_{x \in X} \postset{x}$, and similarly for the other operators. A Petri net game $N = (P, T_1, T_2, W, I)$ defines a GLTS $\ensuremath{G}\xspace(N) = (S, A_1, A_2, {\rightarrow}, \mathit{Goal})$ where \begin{itemize} \item $S = \mathcal{M}(N)$ is the set of all markings, \item $A_1 = T_1$ is the set of player $1$ actions, \item $A_2 = T_2$ is the set of player $2$ actions, \item $M \rtrans[t] M'$ whenever for all $p \in P$ we have $M(p) \geq W(p,t)$, $M(p) < I(p,t)$ and $M'(p) = M(p) - W(p,t) + W(t,p)$, and \item $\mathit{Goal} \subseteq \mathcal{M}(N)$ is the set of goal markings, described by a simple reachability logic formula defined below. \end{itemize} \noindent Let $E_N$ be the set of marking expressions in $N$ given by the abstract syntax (here $e$ ranges over $E_N$): \[e ::= c \mid p \mid e_1 \oplus e_2\] where $c \in \mathbb{N}^0$, $p \in P$, and $\oplus \in \{ +, -, * \}$.
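The firing rule $M \rtrans[t] M'$ above is straightforward to prototype. The following is a minimal Python sketch (not part of the formal development), assuming markings are dictionaries from place names to token counts and that $W$ and $I$ are dictionaries keyed by arc pairs, with missing regular arcs defaulting to weight $0$ and missing inhibitor arcs to $\infty$; the names \texttt{enabled} and \texttt{fire} are our own.

```python
INF = float("inf")

def enabled(M, t, places, W, I):
    """t is enabled in M iff M(p) >= W(p,t) and M(p) < I(p,t) for all places p."""
    return all(M[p] >= W.get((p, t), 0) and M[p] < I.get((p, t), INF)
               for p in places)

def fire(M, t, places, W, I):
    """Return M' with M'(p) = M(p) - W(p,t) + W(t,p); requires t enabled in M."""
    assert enabled(M, t, places, W, I)
    return {p: M[p] - W.get((p, t), 0) + W.get((t, p), 0) for p in places}
```

For example, with $W(p_1,t)=1$, $W(t,p_2)=2$ and inhibitor weight $I(p_2,t)=3$, firing $t$ in the marking $M(p_1)=1$, $M(p_2)=0$ yields $M'(p_1)=0$, $M'(p_2)=2$, while $t$ is disabled once $p_2$ holds three tokens.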
An expression $e \in E_N$ is evaluated relative to a marking $M \in \mathcal{M}(N)$ by the function $\eval_M : E_N \to \mathbb{Z}$ where $\eval[M][c] = c$, $\eval[M][p] = M(p)$ and $\eval[M][e_{1} \oplus e_{2}] = \eval[M][e_{1}] \oplus \eval[M][e_{2}]$. In Table~\ref{tab:incr-and-decr} we define the functions $\mathit{incr}_M: E_N \to 2^T$ and $\mathit{decr}_M: E_N \to 2^T$ that, given an expression $e \in E_N$, return the set of transitions that can (when fired) increase resp.\ decrease the evaluation of $e$. We note that the transitions in $\mathit{incr}_M(e)$ and $\mathit{decr}_M(e)$ are not necessarily enabled in $M$; however, due to Lemma~\ref{lemma:incr}, if a transition firing increases the evaluation of $e$ then the transition must be in $\mathit{incr}_M(e)$, and similarly for $\mathit{decr}_M(e)$. \begin{lemC}[\cite{boenneland2018start}]\label{lemma:incr} Let $N = (P, T_1, T_2, W, I)$ be a Petri net game and $M \in \mathcal{M}(N)$ a marking. Let $e \in E_N$ and let $M \rtrans[w] M'$ where $w=t_1t_2\dots t_n \in T^*$. \begin{itemize} \item If $\eval[M][e] < \eval[M'][e]$ then there is $i$, $1 \leq i \leq n$, such that $t_i \in \incr[M][e]$. \item If $\eval[M][e] > \eval[M'][e]$ then there is $i$, $1 \leq i \leq n$, such that $t_i \in \decr[M][e]$.
\end{itemize} \end{lemC} \begin{table}[t] \centering \scalebox{0.94}{ \def\arraystretch{1.2} \begin{tabular}{lll} Expression $e$ & $\incr[M][e]$ & $\decr[M][e]$ \\ \toprule $c$ & $\emptyset$ & $\emptyset$ \\ $p$ & $\preincr{p}$ & $\postdecr{p}$ \\ $e_1 + e_2$ & $\incr[M][e_1] \cup \incr[M][e_2]$ & $\decr[M][e_1] \cup \decr[M][e_2]$ \\ $e_1 - e_2$ & $\incr[M][e_1] \cup \decr[M][e_2]$ & $\decr[M][e_1] \cup \incr[M][e_2]$ \\ $e_1 \cdot e_2$ & $\incr[M][e_1] \cup \decr[M][e_1] \cup {}$ & $\incr[M][e_1] \cup \decr[M][e_1] \cup {}$ \\ & \;$ \incr[M][e_2] \cup \decr[M][e_2]$ & \;$ \incr[M][e_2] \cup \decr[M][e_2]$ \end{tabular} } \caption{Increasing and decreasing transitions for expression $e \in E_N$} \label{tab:incr-and-decr} \end{table} We can now define the set of reachability formulae $\Phi_N$ that are evaluated over the markings of $N$ as follows: \[\varphi ::= \mathit{true} \mid \mathit{false} \mid t \mid e_1 \bowtie e_2 \mid \ensuremath{\mathit{deadlock}}\xspace \mid \varphi_{1} \land \varphi_{2} \mid \varphi_{1} \lor \varphi_{2} \mid \neg\varphi\] where $e_1,e_2 \in E_N$, $t \in T$ and $\bowtie \in \{<, \leq, =, \neq, >, \geq\}$.
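The rules of Table~\ref{tab:incr-and-decr} are directly recursive and can be sketched in a few lines of Python (our own illustration, not part of the formal development). We assume expressions are encoded as nested tuples: an integer constant, a place name (a string), or \texttt{(op, e1, e2)} with \texttt{op} in \texttt{\{'+', '-', '*'\}}, and that $\preincr{p}$ and $\postdecr{p}$ are precomputed maps from places to transition sets; note that for this fragment the result does not depend on the marking $M$.

```python
def incr(e, pre_incr, post_decr):
    """Transitions that can increase the evaluation of e (Table rules)."""
    if isinstance(e, int):
        return set()
    if isinstance(e, str):                      # a place p
        return set(pre_incr.get(e, ()))
    op, e1, e2 = e
    if op == '+':
        return incr(e1, pre_incr, post_decr) | incr(e2, pre_incr, post_decr)
    if op == '-':
        return incr(e1, pre_incr, post_decr) | decr(e2, pre_incr, post_decr)
    # multiplication: any transition touching either subexpression
    return (incr(e1, pre_incr, post_decr) | decr(e1, pre_incr, post_decr) |
            incr(e2, pre_incr, post_decr) | decr(e2, pre_incr, post_decr))

def decr(e, pre_incr, post_decr):
    """Transitions that can decrease the evaluation of e (Table rules)."""
    if isinstance(e, int):
        return set()
    if isinstance(e, str):
        return set(post_decr.get(e, ()))
    op, e1, e2 = e
    if op == '+':
        return decr(e1, pre_incr, post_decr) | decr(e2, pre_incr, post_decr)
    if op == '-':
        return decr(e1, pre_incr, post_decr) | incr(e2, pre_incr, post_decr)
    return (incr(e1, pre_incr, post_decr) | decr(e1, pre_incr, post_decr) |
            incr(e2, pre_incr, post_decr) | decr(e2, pre_incr, post_decr))
```

For instance, for $e = p - q$ the increasing transitions are those that increase $p$ or decrease $q$, exactly as in the subtraction row of the table.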
The satisfaction relation for a formula $\varphi \in \Phi_N$ in a marking $M$ is defined as expected: \begin{align*} &M \models \mathit{true} && \\ &M \models t && \mbox{iff } t \in \mathit{en}(M) \\ &M \models e_1 \bowtie e_2 && \mbox{iff } \eval[M][e_1] \bowtie \eval[M][e_2] \\ &M \models \ensuremath{\mathit{deadlock}}\xspace &&\mbox{iff } \mathit{en}(M) = \emptyset \\ &M \models \varphi_1 \land \varphi_2 &&\mbox{iff } M \models \varphi_1 \mbox{ and } M \models \varphi_2 \\ &M \models \varphi_1 \lor \varphi_2 &&\mbox{iff } M \models \varphi_1 \mbox{ or } M \models \varphi_2 \\ &M \models \neg \varphi&&\mbox{iff } M \not\models \varphi \end{align*} We want to preserve at least one execution to the set $\mathit{Goal}=\{ M \in \mathcal{M}(N) \mid M \models \varphi \}$ for a given formula $\varphi$ describing the set of goal markings. In order to achieve this, we define the set of interesting transitions $\interesting[M][{\varphi}]$ for a formula $\varphi$ so that any firing sequence of transitions from a marking that does not satisfy $\varphi$ leading to a marking that satisfies $\varphi$ must contain at least one interesting transition. Table~\ref{tab:bool-formula} provides the definition of $\interesting[M][{\varphi}]$, which is similar to the one presented in~\cite{boenneland2018start} for the non-game setting, except for conjunction, where in our setting we use Equation~(\ref{eq:interest-and}); it provides an optimisation based on Condition~\ref{rule:safe} and possibly yields a smaller set of interesting transitions.
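The satisfaction relation above is a standard recursive evaluator, which we can sketch as follows (a hypothetical encoding of our own: formulas are nested tuples, \texttt{en} is the set of transitions enabled in $M$, and \texttt{eval\_expr} mirrors $\eval_M$).

```python
import operator

CMP = {'<': operator.lt, '<=': operator.le, '=': operator.eq,
       '!=': operator.ne, '>': operator.gt, '>=': operator.ge}

def eval_expr(e, M):
    """eval_M: constants, place lookups, and binary arithmetic nodes."""
    if isinstance(e, int):
        return e
    if isinstance(e, str):
        return M[e]
    op, e1, e2 = e
    f = {'+': operator.add, '-': operator.sub, '*': operator.mul}[op]
    return f(eval_expr(e1, M), eval_expr(e2, M))

def sat(phi, M, en):
    """M |= phi, with `en` the set of transitions enabled in M."""
    kind = phi[0]
    if kind == 'true':
        return True
    if kind == 'false':
        return False
    if kind == 't':                       # transition enabledness atom
        return phi[1] in en
    if kind == 'cmp':                     # ('cmp', op, e1, e2)
        return CMP[phi[1]](eval_expr(phi[2], M), eval_expr(phi[3], M))
    if kind == 'deadlock':
        return not en
    if kind == 'and':
        return sat(phi[1], M, en) and sat(phi[2], M, en)
    if kind == 'or':
        return sat(phi[1], M, en) or sat(phi[2], M, en)
    return not sat(phi[1], M, en)         # ('not', phi')
```

As a usage example, in a marking with $M(p)=2$, $M(q)=1$ and only $t_1$ enabled, the formula $t_1 \land (p+q = 3)$ is satisfied, while $\mathit{deadlock}$ holds exactly when no transition is enabled.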
\begin{table}[t] \centering \scalebox{0.94}{ \def\arraystretch{1.2} \begin{tabular}{lll} $\varphi$ & $\interest{M}{{\varphi}}$ & $\interesting[M][{\neg\varphi}]$ \\ \toprule $\mathit{deadlock}$ & $\{t\} \cup \postdecr{(\preset{t})} \cup \preincr{({}^{\circ}t)}$ for some selected $t \in \mathit{en}(M)$ & $\emptyset$ \\[2mm] $t$ & \pbox{20cm}{$\preincr{p}$ for some selected $p \in \preset{t}$ where $M(p) < W(p,t)$, or \\ $\postdecr{p}$ for some selected $p \in {}^{\circ}t$ where $M(p) \geq I(p,t)$} & $\postdecr{(\preset{t})}\cup \preincr{({}^{\circ}t)}$ \\[3mm] $e_1 < e_2$ & $\decr[M][e_1] \cup \incr[M][e_2]$ & $\interesting[M][{e_1 \geq e_2}]$ \\ $e_1 \leq e_2$ & $\decr[M][e_1] \cup \incr[M][e_2]$ & $\interesting[M][{e_1 > e_2}]$ \\ $e_1 > e_2$ & $\incr[M][e_1] \cup \decr[M][e_2]$ & $\interesting[M][{e_1 \leq e_2}]$ \\ $e_1 \geq e_2$ & $\incr[M][e_1] \cup \decr[M][e_2]$ & $\interesting[M][{e_1 < e_2}]$ \\[2mm] $e_1 = e_2$ & \pbox{10cm}{ $\decr[M][e_1] \cup \incr[M][e_2]$ if $\eval[M][e_1] > \eval[M][e_2]$ \\ $\incr[M][e_1] \cup \decr[M][e_2]$ if $\eval[M][e_1] < \eval[M][e_2]$} & $\interesting[M][{e_1 \neq e_2}]$ \\[3mm] $e_1 \neq e_2$ & $\incr[M][e_1] \cup \decr[M][e_1] \cup\incr[M][e_2] \cup \decr[M][e_2]$ & $\interesting[M][{e_1 = e_2}]$ \\ $\varphi_1 \land \varphi_2$ & Defined in Equation~(\ref{eq:interest-and}) & $\interesting[M][{\neg\varphi_1\lor\neg\varphi_2}]$ \\ $\varphi_1 \lor \varphi_2$ & $\interesting[M][{\varphi_1}] \cup \interesting[M][{\varphi_2}]$ & $\interesting[M][{\neg\varphi_1\land\neg\varphi_2}]$ \end{tabular} } \caption{Interesting transitions of $\varphi$ (assuming $M \not\models \varphi$; otherwise $\interesting[M][{\varphi}] = \emptyset$)} \label{tab:bool-formula} \end{table}
\begin{equation}\label{eq:interest-and} \interest{M}{{\varphi_1 \land \varphi_2}} = \begin{cases} \interest{M}{{\varphi_1}} & \text{if } M\models\varphi_2\\ \interest{M}{{\varphi_2}} & \text{if } M\models\varphi_1\\ \interest{M}{{\varphi_1}} & \text{if } M\not\models\varphi_1 \text{ and } \interest{M}{{\varphi_1}} \subseteq \safe{M}\\ \interest{M}{{\varphi_2}} & \text{if } M\not\models\varphi_2 \text{ and } \interest{M}{{\varphi_2}} \subseteq \safe{M}\\ \interest{M}{{\varphi_i}} & \text{otherwise where } i \in \{1,2\} \end{cases} \end{equation} The desired property of the set of interesting transitions is formulated below. \begin{lem}\label{lemma:reach} Let $N = (P, T_1, T_2, W, I)$ be a Petri net game, $M \in \mathcal{M}(N)$ a marking, and $\varphi \in \Phi_N$ a formula. If $M \not\models \varphi$ and $M \rtrans[w] M'$ where $w \in \overline{\interest{M}{{\varphi}}}^*$ then $M' \not\models \varphi$. \end{lem} \begin{proof} Assume that $M \not\models \varphi$. The proof proceeds by structural induction on $\varphi$. All cases, with the exception of $\varphi_1 \land \varphi_2$, are proved in Lemma 2 presented in~\cite{boenneland2018start}. Let $\varphi = \varphi_1 \land \varphi_2$. There are five subcases defined by Equation~(\ref{eq:interest-and}): (1) $M\models\varphi_2$, (2) $M\models\varphi_1$, (3) $M\not\models\varphi_1$ and $\interest{M}{\varphi_1} \subseteq \safe{M}$, (4) $M\not\models\varphi_2$ and $\interest{M}{\varphi_2} \subseteq \safe{M}$, and (5) the default case. \begin{itemize} \item Case (1): Let $M\models\varphi_2$. Since we have $M \not\models \varphi$ and $M\models\varphi_2$, we must have $M \not\models \varphi_1$ by the semantics of $\varphi$.
By Equation~(\ref{eq:interest-and}), since $M\models\varphi_2$, we have $\interest{M}{\varphi_1 \land \varphi_2} = \interest{M}{\varphi_1}$. By the induction hypothesis this implies $M' \not\models \varphi_1$, and from this and the semantics of $\varphi$ we have $M' \not\models\varphi$. \item Case (2): Let $M\models\varphi_1$. This case is symmetric to Case (1) and follows the same approach. \item Case (3): Let $M\not\models\varphi_1$ and $\interest{M}{\varphi_1} \subseteq \safe{M}$. By Equation~(\ref{eq:interest-and}) we have $\interest{M}{\varphi_1 \land \varphi_2} = \interest{M}{\varphi_1}$. By the induction hypothesis this implies $M' \not\models \varphi_1$, and from this and the semantics of $\varphi$ we have $M' \not\models\varphi$. \item Case (4): Let $M\not\models\varphi_2$ and $\interest{M}{\varphi_2} \subseteq \safe{M}$. This case is symmetric to Case (3) and follows the same approach. \item Case (5): Default case. We have $M\not\models\varphi_1$ and $M\not\models\varphi_2$ due to Equation~(\ref{eq:interest-and}) and $\interest{M}{\varphi_1 \land \varphi_2} = \interest{M}{\varphi_i}$ for some $i \in \{1,2\}$. By the induction hypothesis this implies $M' \not\models \varphi_i$, and from this and the semantics of $\varphi$ we have $M' \not\models\varphi$. \qedhere \end{itemize} \end{proof} \begin{algorithm}[t] \SetKwInOut{Input}{input} \SetKwInOut{Output}{output} \Input{$N=(P, T_1, T_2, W, I)$ with $M \in \mathcal{M}(N)$ and a formula $\varphi \in \Phi_N$} \Output{If there is $w \in T_2^*$ s.t.\ $M \rtrans[w] M'$ and $M' \models \varphi$ then the algorithm returns \emph{true}.} We assume that all negations in $\varphi$ are only in front of atomic propositions (if not, we can use De Morgan's laws in order to guarantee this).
$\mathit{ub}(x) := \infty$ for all $x \in P \cup T_2$;\label{line:init-inf}

$\mathit{ub}(p) := M(p)$ for all $p \in P$ such that $W(p,t) \geq W(t,p)$ for every $t \in \preset{p} \cap T_2$;\label{line:cond1}

\Repeat{$\mathit{ub}(x)$ stabilises for all $x \in P \cup T_2$}{
\ForEach{$t \in T_2$}{
$\displaystyle \mathit{ub}(t) := \min_{ p \in \predecr{t}} \left\lfloor \frac{\mathit{ub}(p)}{W(p,t)-W(t,p)}\right\rfloor$\label{line:tup}
}
\ForEach{$p \in P$}{
$\displaystyle \mathit{ub}(p) := M(p) + \sum_{\substack{t \in\; \preset{p} \; \cap \; T_2 \\ W(t,p) > W(p,t)}} \mathit{ub}(t) \cdot \big(W(t,p) - W(p,t)\big) $\label{line:pup}
}
}
\ForEach{$p \in P$}{
$\displaystyle \mathit{lb}(p) := M(p) - \sum_{\substack{t \in T_2 \\ W(p,t) > W(t,p)}} \mathit{ub}(t) \cdot \big( W(p,t) - W(t,p) \big)$\label{line:lb}
}
\Return{$\mathit{lb}, \mathit{ub} \models \varphi$ (see the definition in Table~\ref{tab:lusat})}
\caption{$\mathit{reach}(N,M,\varphi)$: overapproximation for checking whether $\varphi$ can be satisfied by performing only player $2$ transitions, assuming that $\min \emptyset = \infty$ and $\sum \emptyset = 0$}
\label{alg:reachphi}
\end{algorithm} \begin{table}[t] \begin{align*} &\mathit{lb},\mathit{ub} \models \mathit{true} && \\ &\mathit{lb},\mathit{ub} \models t && \mbox{iff } \mathit{ub}(p) \geq W(p,t) \text{ for all } p \in \preset{t} \text{ and } \mathit{lb}(p) < I(p,t) \text{ for all } p \in {}^{\circ}t \\ &\mathit{lb},\mathit{ub} \models \neg t && \mbox{iff } \mathit{lb}(p) < W(p,t) \text{ for some } p \in \preset{t} \text{ or } \mathit{ub}(p) \geq I(p,t) \text{ for some } p \in {}^{\circ}t \\ &\mathit{lb},\mathit{ub} \models e_1 < e_2 && \mbox{iff } \mathit{lb}(e_1) < \mathit{ub}(e_2) \\ &\mathit{lb},\mathit{ub} \models e_1 \leq e_2 && \mbox{iff } \mathit{lb}(e_1) \leq \mathit{ub}(e_2) \\ &\mathit{lb},\mathit{ub} \models e_1 = e_2 && \mbox{iff } \max\{\mathit{lb}(e_1), \mathit{lb}(e_2)\} \leq \min\{\mathit{ub}(e_1),\mathit{ub}(e_2)\} \\ &\mathit{lb},\mathit{ub} \models e_1 \not= e_2 && \mbox{iff it is not the case that } \mathit{lb}(e_1)=\mathit{lb}(e_2)=\mathit{ub}(e_1)=\mathit{ub}(e_2) \\ &\mathit{lb},\mathit{ub} \models e_1 \geq e_2 && \mbox{iff } \mathit{ub}(e_1) \geq \mathit{lb}(e_2) \\ &\mathit{lb},\mathit{ub} \models e_1 > e_2 && \mbox{iff } \mathit{ub}(e_1) > \mathit{lb}(e_2) \\ &\mathit{lb},\mathit{ub} \models \ensuremath{\mathit{deadlock}}\xspace &&\mbox{iff } \mathit{lb},\mathit{ub} \not\models t \text{ for all } t \in T \\ &\mathit{lb},\mathit{ub} \models \neg\ensuremath{\mathit{deadlock}}\xspace &&\mbox{iff } \mathit{lb},\mathit{ub} \models t \text{ for some } t \in T \\ &\mathit{lb},\mathit{ub} \models \varphi_1 \land \varphi_2 &&\mbox{iff } \mathit{lb},\mathit{ub}
\models \varphi_1 \mbox{ and } \mathit{lb},\mathit{ub} \models \varphi_2 \\ &\mathit{lb},\mathit{ub} \models \varphi_1 \lor \varphi_2 &&\mbox{iff } \mathit{lb},\mathit{ub} \models \varphi_1 \mbox{ or } \mathit{lb},\mathit{ub} \models \varphi_2 \end{align*} \begin{align*} \mathit{lb}(c) & =c \text{ \ \ \ where $c$ is a constant} \\ \mathit{ub}(c) & =c \text{ \ \ \ where $c$ is a constant} \\ \mathit{lb}(e_1 + e_2) & = \mathit{lb}(e_1) + \mathit{lb}(e_2) \\ \mathit{ub}(e_1 + e_2) & = \mathit{ub}(e_1) + \mathit{ub}(e_2) \\ \mathit{lb}(e_1 - e_2) & = \mathit{lb}(e_1) - \mathit{ub}(e_2) \\ \mathit{ub}(e_1 - e_2) & = \mathit{ub}(e_1) - \mathit{lb}(e_2) \\ \mathit{lb}(e_1 * e_2) & = \min \{ \mathit{lb}(e_1)\cdot\mathit{lb}(e_2), \mathit{lb}(e_1)\cdot\mathit{ub}(e_2), \mathit{ub}(e_1)\cdot\mathit{lb}(e_2), \mathit{ub}(e_1)\cdot\mathit{ub}(e_2)\} \\ \mathit{ub}(e_1 * e_2) & = \max \{ \mathit{lb}(e_1)\cdot\mathit{lb}(e_2), \mathit{lb}(e_1)\cdot\mathit{ub}(e_2), \mathit{ub}(e_1)\cdot\mathit{lb}(e_2), \mathit{ub}(e_1)\cdot\mathit{ub}(e_2)\} \end{align*} \caption{Definition of $\mathit{lb},\mathit{ub} \models \varphi$ assuming that $\mathit{lb}(p)$ and $\mathit{ub}(p)$ are given for all $p \in P$} \label{tab:lusat} \end{table} \noindent As a next step, we provide an algorithm that returns \emph{true} whenever there is a sequence of player $2$ actions that leads to a marking satisfying a given formula $\varphi$ (and hence overapproximates Condition \textbf{V} from the definition of a stable reduction). The pseudocode is given in Algorithm~\ref{alg:reachphi}.
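The bound computation at the heart of Algorithm~\ref{alg:reachphi} can be sketched as a small fixed-point iteration. The following is a minimal Python illustration of our own (the helper name \texttt{reach\_bounds} is hypothetical, the final check $\mathit{lb},\mathit{ub} \models \varphi$ is omitted), assuming markings and weights are dictionaries with missing arcs defaulting to weight $0$.

```python
import math

INF = math.inf

def reach_bounds(places, T2, W, M):
    """Fixed-point computation of ub over places and player-2 transitions,
    followed by the lower bounds lb, mirroring the pseudocode lines."""
    w = lambda x, y: W.get((x, y), 0)
    ub = {x: INF for x in list(places) + list(T2)}
    for p in places:
        # ub(p) := M(p) if no player-2 transition can increase p
        if all(w(p, t) >= w(t, p) for t in T2):
            ub[p] = M[p]
    changed = True
    while changed:                    # repeat until ub stabilises
        changed = False
        for t in T2:                  # bound the number of firings of t
            cands = [ub[p] // d for p in places
                     if (d := w(p, t) - w(t, p)) > 0 and ub[p] != INF]
            new = min(cands, default=INF)
            if new != ub[t]:
                ub[t], changed = new, True
        for p in places:              # bound the number of tokens in p
            inc = sum(ub[t] * (w(t, p) - w(p, t))
                      for t in T2 if w(t, p) > w(p, t))
            new = M[p] + inc
            if new != ub[p]:
                ub[p], changed = new, True
    lb = {p: M[p] - sum(ub[t] * (w(p, t) - w(t, p))
                        for t in T2 if w(p, t) > w(t, p))
          for p in places}
    return lb, ub
```

For example, for a net with one player-$2$ transition $t$ that consumes a token from $p$ and produces one in $q$, starting from $M(p)=2$, $M(q)=0$, the iteration bounds $t$ to at most two firings and hence $q$ to at most two tokens, while $p$ can drop to zero.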
The algorithm uses an extended definition of formula satisfiability that, instead of asking whether a formula holds in a given marking, specifies a range of markings by two functions: $\mathit{lb} : P \to \mathbb{N}^0$ fixing a lower bound on the number of tokens in places, and $\mathit{ub} : P \to \mathbb{N}^0 \cup \{ \infty \}$ specifying an upper bound. A marking $M$ belongs to the range $\mathit{lb}, \mathit{ub}$ iff for all places $p \in P$ we have $\mathit{lb}(p) \leq M(p) \leq \mathit{ub}(p)$. The extended satisfiability predicate $\mathit{lb}, \mathit{ub} \models \varphi$ is given in Table~\ref{tab:lusat}, and it must hold whenever there is a marking in the range specified by $\mathit{lb}$ and $\mathit{ub}$ such that the marking satisfies the formula $\varphi$. Finally, Algorithm~\ref{alg:reachphi} computes a safe overapproximation of the lower and upper bounds such that if $M \rtrans[w] M'$ for some $w \in T_2^*$ then $\mathit{lb}(p) \leq M'(p) \leq \mathit{ub}(p)$ for all $p \in P$. \begin{lem}\label{lemma:reach2} Let $N=(P, T_1, T_2, W, I)$ be a Petri net game, $M \in \mathcal{M}(N)$ a marking of $N$ and $\varphi \in \Phi_N$ a formula. If there is $w \in T_2^*$ s.t.\ $M \rtrans[w] M'$ and $M' \models \varphi$ then $\mathit{reach}(N,M,\varphi)=\mathit{true}$. \end{lem} \begin{proof} Algorithm~\ref{alg:reachphi} first computes for each place $p \in P$ the upper bound $\mathit{ub}(p)$ and lower bound $\mathit{lb}(p)$ on the number of tokens that can appear in $p$ by performing any sequence of player $2$ transitions, starting from the marking $M$.
The bounds are then used to return the value of the expression $\mathit{lb},\mathit{ub} \models \varphi$ that is defined in Table~\ref{tab:lusat}. We shall first notice that if there is a marking $M'$ such that $\mathit{lb}(p) \leq M'(p) \leq \mathit{ub}(p)$ for all $p \in P$ and $M' \models \varphi$, then $\mathit{lb},\mathit{ub} \models \varphi$ holds. This can be proved by a straightforward structural induction on $\varphi$, following the cases in Table~\ref{tab:lusat}, where the functions $\mathit{lb}$ and $\mathit{ub}$ are extended to the arithmetical expressions used in the query language such that for every marking $M'$ (as given above) and for every arithmetical expression $e$ we have $\mathit{lb}(e) \leq \eval[M'][e] \leq \mathit{ub}(e)$. What remains to be established is that Algorithm~\ref{alg:reachphi} correctly computes the lower and upper bounds for all places in the net. We do this by proving the following invariant for the repeat-until loop: for every $w \in T_2^*$ such that $M \rtrans[w] M'$ we have \begin{enumerate} \item\label{inv1} $\displaystyle M(p) + \sum_{\substack{t \in w \\ W(t,p) > W(p,t)}} \big(W(t,p) - W(p,t)\big) \leq \mathit{ub}(p)$ for all $p \in P$, and \item\label{inv2} $\#_t(w) \leq \mathit{ub}(t)$ for all $t \in T_2$, where $\#_t(w)$ denotes the number of occurrences of the transition $t$ in the sequence $w$. \end{enumerate} Here the notation $t \in w$ means that a summand is added for every occurrence of $t$ in the sequence $w$.
We note that invariant (\ref{inv1}) clearly implies that whenever $M \rtrans[w] M'$ for $w \in \ensuremath{T}\xspace_2^*$ then $M'(p) \leq \mathit{ub}(p)$ for all $p \in \ensuremath{P}\xspace$. Notice that the repeat-until loop in Algorithm~\ref{alg:reachphi} clearly terminates since during the iteration of the loop $\mathit{ub}$ can only become smaller. First, we notice that before entering the repeat-until loop, the invariant holds because initially the upper bound values are all set to $\infty$ and only at line~\ref{line:cond1} the upper bound for a place $p$ is set to $M(p)$, provided that the firing of any transition $t \in \ensuremath{T}\xspace_2$ can never increase the number of tokens in $p$. This clearly satisfies invariant (\ref{inv1}). Let us now assume that both (\ref{inv1}) and (\ref{inv2}) hold at the beginning of the execution of the repeat-until loop. Suppose that the value $\mathit{ub}(t)$ is decreased for some transition $t$ by the assignment at line~\ref{line:tup}. This means that there is a place $p \in \predecr{t}$ such that $\ensuremath{W}\xspace(p,t)>\ensuremath{W}\xspace(t,p)$, meaning that firing of $t$ removes $\ensuremath{W}\xspace(p,t)-\ensuremath{W}\xspace(t,p)$ tokens from $p$. As there can be at most $\mathit{ub}(p)$ tokens in the place $p$ due to invariant (\ref{inv1}), this limits the number of times that the transition $t$ can fire to $\lfloor \frac{\mathit{ub}(p)}{\ensuremath{W}\xspace(p,t)-\ensuremath{W}\xspace(t,p)} \rfloor$ and hence it preserves invariant (\ref{inv2}). Similarly, suppose that the value of $\mathit{ub}(p)$ is decreased for some place $p$ by the assignment at line~\ref{line:pup}.
Due to invariant (\ref{inv2}), we know that every transition $t \in \ensuremath{T}\xspace_2$ can be fired at most $\mathit{ub}(t)$ times and hence adds at most $\mathit{ub}(t) \cdot \big(\ensuremath{W}\xspace(t,p) - \ensuremath{W}\xspace(p,t)\big)$ tokens to $p$. As we add those contributions for all such transitions together with the number $M(p)$ of tokens in the starting marking $M$, we satisfy also invariant (\ref{inv1}). Finally, the assignment at line~\ref{line:lb} provides a safe lower bound on the number of tokens that can be in the place $p$: due to invariant (\ref{inv2}) we know that $\mathit{ub}(t)$ is the maximum number of times a transition $t$ can fire, and we subtract for each such $t$ the maximum number $\mathit{ub}(t) \cdot \big(\ensuremath{W}\xspace(p,t) - \ensuremath{W}\xspace(t,p)\big)$ of tokens that it can remove from $p$. Hence, we can conclude that whenever $M \rtrans[w] M'$ for $w \in \ensuremath{T}\xspace_2^*$ then $\mathit{lb}(p) \leq M'(p) \leq \mathit{ub}(p)$ for all $p \in \ensuremath{P}\xspace$ and the correctness of the lemma is established. \end{proof}

\begin{exa}\label{ex:algo-v} In Figure~\ref{fig:algo-v-fig} we see a Petri net consisting of four places $\ensuremath{P}\xspace = \{p_1, p_2, p_3, p_4\}$ and three player $2$ transitions $\ensuremath{T}\xspace_2 = \{t_1, t_2, t_3\}$. The weights are given as seen in the figure (arcs without any annotation have the default weight $1$) and the initial marking contains three tokens in the place $p_1$. Initially, for all $x \in \ensuremath{P}\xspace \cup \ensuremath{T}\xspace$ we have $\mathit{ub}(x) = \infty$ as seen in line~\ref{line:init-inf} of Algorithm~\ref{alg:reachphi}.
In line~\ref{line:cond1} we can set the upper bound of a place if its number of tokens is non-increasing, i.e.\ for all $t \in \preset{p} \cap \ensuremath{T}\xspace_2$ we have $\ensuremath{W}\xspace(p,t) \geq \ensuremath{W}\xspace(t,p)$. In Figure~\ref{fig:algo-v-fig} this is the case only for $p_1$; we therefore have $\mathit{ub}(p_1) = M(p_1) = 3$. Next, the upper bounds for all places and transitions are calculated through a repeat-until loop. The upper bounds for transitions are found by checking, given the current upper bounds on places, how many times we can fire a transition. In line~\ref{line:tup} we get \[\mathit{ub}(t_1) = \left\lfloor\frac{\mathit{ub}(p_1)}{\ensuremath{W}\xspace(p_1,t_1)-\ensuremath{W}\xspace(t_1,p_1)}\right\rfloor = \left\lfloor\frac{3}{1-0}\right\rfloor = 3\] and \begin{align*} \mathit{ub}(t_2) &= \min \left\{\left\lfloor\frac{\mathit{ub}(p_1)}{\ensuremath{W}\xspace(p_1,t_2)-\ensuremath{W}\xspace(t_2,p_1)}\right\rfloor, \left\lfloor\frac{\mathit{ub}(p_3)}{\ensuremath{W}\xspace(p_3,t_2)-\ensuremath{W}\xspace(t_2,p_3)}\right\rfloor\right\} \\ &= \min \left\{ \left\lfloor\frac{3}{2-0}\right\rfloor, \left\lfloor\frac{\infty}{1-0} \right\rfloor\right\} \\ &= \min\{1,\infty\} = 1. \end{align*} In the next iteration, at line~\ref{line:pup} we get \[ \mathit{ub}(p_2) = M(p_2) + \mathit{ub}(t_1) \cdot (\ensuremath{W}\xspace(t_1,p_2) - \ensuremath{W}\xspace(p_2,t_1)) = 0 + 3 \cdot (1 - 0) = 3\] and similarly \[\mathit{ub}(p_4) = M(p_4) + \mathit{ub}(t_2) \cdot (\ensuremath{W}\xspace(t_2,p_4) - \ensuremath{W}\xspace(p_4,t_2)) = 0 + 1 \cdot (1 - 0) = 1.\] Afterwards, there are no further changes to be made to the upper bounds and the repeat-until loop terminates.
Finally, the calculated lower bounds for all places are $0$ in our example. \end{exa}

\begin{figure}[t] \centering \begin{tikzpicture}[font=\scriptsize,xscale=2.4,yscale=1.8] \tikzstyle{arc}=[->,>=stealth,thick] \tikzstyle{every place}=[minimum size=6mm,thick] \tikzstyle{every transition}=[fill=black,minimum width=2mm,minimum height=5mm] \tikzstyle{oppTransition}=[draw=black,fill=white,minimum width=2mm,minimum height=5mm,thick] \tikzstyle{token}=[fill=white,text=black] \begin{scope} \node [place,label=$p_1$] at (0,0.6) (p1) {$\bullet \bullet \bullet$}; \node [oppTransition,label=$t_1$] at (1,0.6) (t1) {}; \node [oppTransition,label=$t_2$] at (1,-0.6) (t2) {}; \node [place,label=$p_2$] at (2,0.6) (p2) {}; \node [place,label=$p_3$] at (0,-0.6) (p3) {}; \node [oppTransition,label=$t_3$] at (-1,-0.6) (t3) {}; \node [place,label=$p_4$] at (2,-0.6) (p4) {}; \draw [->,>=stealth,thick] (p1) to (t1); \draw [->,>=stealth,thick] (p1) to node[pos=0.5,above]{$2$} (t2); \draw [->,>=stealth,thick] (t1) to (p2); \draw [->,>=stealth,thick] (p3) to (t2); \draw [->,>=stealth,thick] (t3) to (p3); \draw [->,>=stealth,thick] (t2) to (p4); \end{scope} \end{tikzpicture} \caption{Example Petri net for Algorithm~\ref{alg:reachphi}} \label{fig:algo-v-fig} \end{figure}

Before we can state our main theorem, we need to find an overapproximation method for determining safe transitions. This can be done by analysing the increasing presets and postsets of transitions as demonstrated in the following lemma.

\begin{lem}[Safe Transition]\label{lemma:trans-safe} Let $N = \ensuremath{(\places,\transitions_1, \transitions_2, \weights,\inhib)}\xspace$ be a Petri net game and $t \in \ensuremath{T}\xspace$ a transition.
If $\postincr{t} \cap \preset{\ensuremath{T}\xspace_2} = \emptyset$ and $\predecr{t} \cap \inhibpreset{\ensuremath{T}\xspace_2} = \emptyset$ then $t$ is safe in any marking of $N$. \end{lem}

\begin{proof} Assume $\postincr{t} \cap \preset{\ensuremath{T}\xspace_2} = \emptyset$ and $\predecr{t} \cap \inhibpreset{\ensuremath{T}\xspace_2} = \emptyset$, and let $M$ be any marking of $N$. We prove directly that $t$ is safe in $M$. Let $w \in {(\ensuremath{T}\xspace_1 \setminus \{ t \})}^*$ s.t. $M \rtrans[w] M'$, $\en_2(M') = \emptyset$, and $M \rtrans[tw] M''$. The only difference between $M'$ and $M''$ is that $t$ is fired first and we have $M''(p') = M'(p') + \ensuremath{W}\xspace(t,p') - \ensuremath{W}\xspace(p',t)$ for all $p' \in \ensuremath{P}\xspace$. Then for all $t' \in \ensuremath{T}\xspace_2$ we have that there either exists $p \in \preset{t'}$ s.t. $M'(p) < \ensuremath{W}\xspace(p,t')$, or there exists $p' \in \inhibpreset{t'}$ s.t. $M'(p') \geq \ensuremath{I}\xspace(p',t')$. In the first case, since $\postincr{t} \cap \preset{\ensuremath{T}\xspace_2} = \emptyset$, we must have $\ensuremath{W}\xspace(t,p) \leq \ensuremath{W}\xspace(p,t)$ which implies $M''(p) \leq M'(p)$ and $t' \notin \en(M'')$. In the second case, since $\predecr{t} \cap \inhibpreset{\ensuremath{T}\xspace_2} = \emptyset$, we must have $\ensuremath{W}\xspace(t,p') \geq \ensuremath{W}\xspace(p',t)$, which implies $M''(p') \geq M'(p')$ and $t' \notin \en(M'')$. Therefore $t$ is safe in $M$. \end{proof}

We can now provide a list of syntactic conditions that guarantee the stability of a given reduction and state the main theorem of this section.
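The interval over-approximation computed by Algorithm~\ref{alg:reachphi} and illustrated in Example~\ref{ex:algo-v} can be sketched as the following fixed-point computation. This is a minimal Python sketch under our own encoding of the net (the dictionaries `W_in`/`W_out` and the function name `compute_bounds` are illustrative and not part of the implementation described in the paper):

```python
import math

def compute_bounds(places, transitions, W_in, W_out, M):
    """Over-approximate token bounds reachable by firing the given
    (player 2) transitions only, starting from marking M.
    W_in[(p, t)] encodes W(p, t), W_out[(t, p)] encodes W(t, p);
    missing keys mean weight 0."""
    ub_p = {p: math.inf for p in places}
    ub_t = {t: math.inf for t in transitions}
    # A place whose token count no transition can ever increase
    # is bounded by its initial marking (line cond1).
    for p in places:
        if all(W_out.get((t, p), 0) <= W_in.get((p, t), 0) for t in transitions):
            ub_p[p] = M[p]
    changed = True
    while changed:  # repeat-until loop
        changed = False
        for t in transitions:
            # ub(t): how often can t fire, given the place bounds?
            cand = min((ub_p[p] if ub_p[p] == math.inf else ub_p[p] // d
                        for p in places
                        if (d := W_in.get((p, t), 0) - W_out.get((t, p), 0)) > 0),
                       default=math.inf)
            if cand < ub_t[t]:
                ub_t[t], changed = cand, True
        for p in places:
            # ub(p): initial tokens plus what the transitions can add
            cand = M[p] + sum(ub_t[t] * d for t in transitions
                              if (d := W_out.get((t, p), 0) - W_in.get((p, t), 0)) > 0)
            if cand < ub_p[p]:
                ub_p[p], changed = cand, True
    # lb(p): initial tokens minus what the transitions can remove
    lb_p = {p: max(0, M[p] - sum(ub_t[t] * d for t in transitions
                                 if (d := W_in.get((p, t), 0) - W_out.get((t, p), 0)) > 0))
            for p in places}
    return lb_p, ub_p, ub_t

# The net of Example ex:algo-v: t1 moves a token p1 -> p2,
# t2 consumes 2 tokens from p1 and one from p3 and produces into p4,
# t3 produces into p3.
places = ["p1", "p2", "p3", "p4"]
transitions = ["t1", "t2", "t3"]
W_in = {("p1", "t1"): 1, ("p1", "t2"): 2, ("p3", "t2"): 1}
W_out = {("t1", "p2"): 1, ("t2", "p4"): 1, ("t3", "p3"): 1}
M = {"p1": 3, "p2": 0, "p3": 0, "p4": 0}
lb, ub, ubt = compute_bounds(places, transitions, W_in, W_out, M)
```

On this input the sketch reproduces the bounds derived in the example: $\mathit{ub}(p_1)=3$, $\mathit{ub}(t_1)=3$, $\mathit{ub}(t_2)=1$, $\mathit{ub}(p_2)=3$, $\mathit{ub}(p_4)=1$, with $p_3$ and $t_3$ unbounded and all lower bounds $0$.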
\begin{thm}[Stable Reduction Preserving Closure]\label{prop:turn-game-preservation} Let $N = \ensuremath{(\places,\transitions_1, \transitions_2, \weights,\inhib)}\xspace$ be a Petri net game, $\varphi$ a formula, and \reduction a reduction of $\ensuremath{G}\xspace(N)$ such that for all $M \in \ensuremath{\mathcal{M}(N)}\xspace$ the following conditions hold.
\begin{enumerate}
\item\label{item:contested} If $\en_1(M) \neq \emptyset$ and $\en_2(M) \neq \emptyset$ then $\en(M) \subseteq \reduction(M)$.
\item\label{item:safe} If $\en_1(M) \cap \reduction(M) \nsubseteq \safe{M}$ then $\en_1(M) \subseteq \reduction(M)$.
\item\label{item:interesting} $\interest{M}{\varphi} \subseteq \reduction(M)$.
\item\label{item:opponent-1} If $\en_1(M) = \emptyset$ then $\ensuremath{T}\xspace_1 \subseteq \reduction(M)$.
\item\label{item:opponent-2} If $\en_2(M) = \emptyset$ then $\ensuremath{T}\xspace_2 \subseteq \reduction(M)$.
\item\label{item:prop-disabled} For all $t \in \reduction(M)$ if $t \notin \en(M)$ then either
\begin{enumerate}
\item\label{item:disabled-1} there exists $p \in \preset{t}$ s.t. $M(p) < \ensuremath{W}\xspace(p,t)$ and $\preincr{p} \subseteq \reduction(M)$, or
\item\label{item:disabled-2} there exists $p \in \inhibpreset{t}$ s.t. $M(p) \geq \ensuremath{I}\xspace(p,t)$ and $\postdecr{p} \subseteq \reduction(M)$.
\end{enumerate}
\item\label{item:prop-enabled} For all $t \in \reduction(M)$ if $t \in \en(M)$ then
\begin{enumerate}
\item\label{item:enabled-1} for all $p \in \predecr{t}$ we have $\postset{p} \subseteq \reduction(M)$, and
\item\label{item:enabled-2} for all $p \in \postincr{t}$ we have $\inhibpostset{p} \subseteq \reduction(M)$.
\end{enumerate}
\item\label{item:deadlock} If $\en_2(M) \neq \emptyset$ then there exists $t \in \en_2(M) \cap \reduction(M)$ s.t. $\postdecr{(\preset{t})} \cup \preincr{(\inhibpreset{t})} \subseteq \reduction(M)$.
\item\label{item:visible} If $\en_1(M) = \emptyset$ and $\mathit{reach}(N,M,\varphi)=\mathit{true}$ then $\en(M) \subseteq \reduction(M)$.
\end{enumerate}
Then \reduction satisfies~\ref{rule:stub-init},~\ref{rule:stub},~\ref{rule:reach},~\ref{rule:game-1},~\ref{rule:game-2},~\ref{rule:safe},~\ref{rule:visible} and~\ref{rule:dead}. \end{thm}

\begin{proof} We shall argue that any reduction $\reduction$ satisfying the conditions of the theorem also satisfies the~\ref{rule:stub-init},~\ref{rule:stub},~\ref{rule:reach},~\ref{rule:game-1},~\ref{rule:game-2},~\ref{rule:safe},~\ref{rule:visible}, and~\ref{rule:dead} conditions.
\begin{itemize}[leftmargin=10mm]
\item[(\ref{rule:stub-init})] Follows from Condition~\ref{item:contested}.
\item[(\ref{rule:stub})] Let $M,M' \in \ensuremath{\mathcal{M}(N)}\xspace$ be markings, $t \in \reduction(M)$, and $w \in \overline{\reduction(M)}^*$. We will show that if $M \rtrans[wt] M'$ then $M \rtrans[tw] M'$. Let $M_w \in \mathcal{M}(N)$ be a marking s.t. $M \rtrans[w] M_w$. Assume for the sake of contradiction that $t \notin \en(M)$.
As $t$ is disabled in $M$, there must be $p \in \preset{t}$ such that $M(p) < \ensuremath{W}\xspace(p,t)$ or there is $p \in \inhibpreset{t}$ such that $M(p) \geq \ensuremath{I}\xspace(p,t)$. In the first case, due to Condition~\ref{item:disabled-1} all the transitions that can add tokens to $p$ are included in $\reduction(M)$. Since $w \in \overline{\reduction(M)}^*$ this implies that $M_w(p) < \ensuremath{W}\xspace(p,t)$ and $t \notin \en(M_w)$, contradicting our assumption that $M_w \rtrans[t] M'$. In the second case, due to Condition~\ref{item:disabled-2} all the transitions that can remove tokens from $p$ are included in $\reduction(M)$. Since $w \in \overline{\reduction(M)}^*$ this implies that $M_w(p) \geq \ensuremath{I}\xspace(p,t)$ and $t \notin \en(M_w)$, contradicting our assumption that $M_w \rtrans[t] M'$. Therefore we must have that $t \in \en(M)$. Since $t \in \en(M)$ there is $M_t \in \ensuremath{\mathcal{M}(N)}\xspace$ s.t. $M \rtrans[t] M_t$. We have to show that $M_t \rtrans[w] M'$ is possible. For the sake of contradiction, assume that this is not the case. Then there must exist a transition $t'$ that occurs in $w$ that became disabled because $t$ was fired. There are two cases: $t$ removed one or more tokens from a shared pre-place $p \in \predecr{t} \cap \preset{t'}$ or added one or more tokens to a place $p \in \postincr{t} \cap \inhibpreset{t'}$. In the first case, due to Condition~\ref{item:enabled-1} all the transitions that can remove tokens from $p$ are included in $\reduction(M)$, implying that $t' \in \reduction(M)$. Since $w \in \overline{\reduction(M)}^*$ such a $t'$ cannot exist.
In the second case, due to Condition~\ref{item:enabled-2} all the transitions that can add tokens to $p$ are included in $\reduction(M)$, implying that $t' \in \reduction(M)$. Since $w \in \overline{\reduction(M)}^*$ such a $t'$ cannot exist. Therefore we must have that $M_t \rtrans[w] M'$ and we can conclude with $M \rtrans[tw] M'$.
\item[(\ref{rule:reach})] Follows from Condition~\ref{item:interesting} and Lemma~\ref{lemma:reach}.
\item[(\ref{rule:game-1})] Let $M \in \ensuremath{\mathcal{M}(N)}\xspace$ be a marking and $w \in \overline{\reduction(M)}^*$ s.t. $M \rtrans[w] M'$. We will show that if $\en_2(M) = \emptyset$ then $\en_2(M') = \emptyset$. Assume that $\en_2(M) = \emptyset$. Then by Condition~\ref{item:opponent-2} we have $\ensuremath{T}\xspace_2 \subseteq \reduction(M)$. Let $t \in \ensuremath{T}\xspace_2$ be a player $2$ transition. By Condition~\ref{item:prop-disabled} we know that either there exists $p \in \preset{t}$ s.t. $M(p) < \ensuremath{W}\xspace(p,t)$ and $\preincr{p} \subseteq \reduction(M)$, or there exists $p \in \inhibpreset{t}$ s.t. $M(p) \geq \ensuremath{I}\xspace(p,t)$ and $\postdecr{p} \subseteq \reduction(M)$. In the first case, in order to enable $t$ at least one transition from $\preincr{p}$ has to be fired. However, $\preincr{p} \subseteq \reduction(M)$ holds, and therefore none of the transitions in $\preincr{p}$ can occur in $w$, which implies $t \notin \en_2(M')$. In the second case, in order to enable $t$ at least one transition from $\postdecr{p}$ has to be fired. However, $\postdecr{p} \subseteq \reduction(M)$ holds, and therefore none of the transitions in $\postdecr{p}$ can occur in $w$, which implies $t \notin \en_2(M')$.
These two cases together imply that $\en_2(M') = \emptyset$.
\item[(\ref{rule:game-2})] Follows the same approach as~\ref{rule:game-1}.
\item[(\ref{rule:safe})] Follows from Condition~\ref{item:safe}.
\item[(\ref{rule:visible})] Follows from Condition~\ref{item:visible} and Lemma~\ref{lemma:reach2}. Notice that if $\en_1(M) \neq \emptyset$ then either $\en_2(M) = \emptyset$, in which case the antecedent of Condition~\ref{rule:visible} never holds unless $M$ is already a goal marking, or $M$ is a mixed marking, in which case the consequent of Condition~\ref{rule:visible} always holds due to Condition~\ref{rule:stub-init}.
\item[(\ref{rule:dead})] Let $M \in \ensuremath{\mathcal{M}(N)}\xspace$ be a marking and $w \in \overline{\reduction(M)}^*$ s.t. $M \rtrans[w] M'$. We will show that if $\en_2(M) \neq \emptyset$ then there exists $t \in \en_2(M) \cap \reduction(M)$ s.t. $t \in \en_2(M')$. Assume that $\en_2(M) \neq \emptyset$. From Condition~\ref{item:deadlock} we know that there exists $t \in \en_2(M) \cap \reduction(M)$ s.t. $\postdecr{(\preset{t})} \cup \preincr{(\inhibpreset{t})} \subseteq \reduction(M)$. Assume for the sake of contradiction that $t \notin \en_2(M')$. In this case there must either exist $p \in \preset{t}$ s.t. $M'(p) < \ensuremath{W}\xspace(p,t)$, or there exists $p \in \inhibpreset{t}$ s.t. $M'(p) \geq \ensuremath{I}\xspace(p,t)$. In the first case, since $t \in \en_2(M)$ we have that $M(p) \geq \ensuremath{W}\xspace(p,t)$. Therefore at least one transition from $\postdecr{p}$ has to have been fired.
However, $\postdecr{(\preset{t})} \subseteq \reduction(M)$ holds, and therefore none of the transitions in $\postdecr{p}$ can occur in $w$, which implies $M'(p) \geq \ensuremath{W}\xspace(p,t)$, a contradiction. In the second case, since $t \in \en_2(M)$ we have that $M(p) < \ensuremath{I}\xspace(p,t)$. Therefore at least one transition from $\preincr{p}$ has to have been fired. However, $\preincr{(\inhibpreset{t})} \subseteq \reduction(M)$ holds, and therefore none of the transitions in $\preincr{p}$ can occur in $w$, which implies $M'(p) < \ensuremath{I}\xspace(p,t)$, a contradiction. Therefore $t \notin \en_2(M')$ cannot be true, and we must have that $t \in \en_2(M')$.
\end{itemize}
This completes the proof of the theorem. \end{proof}

In Algorithm~\ref{alg:saturation} we provide pseudocode for calculating stubborn sets for a given marking. It essentially rephrases Theorem~\ref{prop:turn-game-preservation} into executable code. The algorithm calls Algorithm~\ref{alg:saturation-2} that saturates a given set so as to satisfy Conditions~\ref{item:prop-disabled} and~\ref{item:prop-enabled} of Theorem~\ref{prop:turn-game-preservation}.
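The saturation step described above can be sketched in Python for nets without inhibitor arcs. This is a minimal illustration closing a set under Conditions 6a and 7a only; the `W_in`/`W_out` encoding and the function names are our own and not taken from the tool:

```python
def enabled(t, M, places, W_in):
    # t is enabled iff every pre-place holds enough tokens (no inhibitor arcs)
    return all(M[p] >= W_in.get((p, t), 0) for p in places)

def saturate(Y, M, places, transitions, W_in, W_out):
    """Close the set Y under a simplified version of the saturation rules:
    for a disabled transition, add the transitions that can increase a
    disabling pre-place (Condition 6a); for an enabled transition, add all
    transitions consuming from places it decreases (Condition 7a)."""
    X, Y = set(), set(Y)
    while Y:
        t = Y.pop()
        if not enabled(t, M, places, W_in):
            # pick some pre-place with too few tokens
            p = next(q for q in places if M[q] < W_in.get((q, t), 0))
            Y |= {u for u in transitions
                  if W_out.get((u, p), 0) > W_in.get((p, u), 0)} - X - {t}
        else:
            for p in places:
                if W_in.get((p, t), 0) > W_out.get((t, p), 0):  # p decreased by t
                    Y |= {u for u in transitions
                          if W_in.get((p, u), 0) > 0} - X - {t}
        X.add(t)
    return X

# Same net as in the figure: t1: p1 -> p2, t2: 2*p1 + p3 -> p4, t3: -> p3,
# with three tokens initially in p1.
places = ["p1", "p2", "p3", "p4"]
transitions = ["t1", "t2", "t3"]
W_in = {("p1", "t1"): 1, ("p1", "t2"): 2, ("p3", "t2"): 1}
W_out = {("t1", "p2"): 1, ("t2", "p4"): 1, ("t3", "p3"): 1}
M = {"p1": 3, "p2": 0, "p3": 0, "p4": 0}
X1 = saturate({"t2"}, M, places, transitions, W_in, W_out)
X2 = saturate({"t1"}, M, places, transitions, W_in, W_out)
```

Starting from the disabled transition $t_2$, the closure adds $t_3$ (the only transition able to mark the disabling place $p_3$); starting from the enabled transition $t_1$, it also pulls in $t_2$ as a competitor for tokens in $p_1$, and transitively $t_3$.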
\begin{algorithm}[t] \SetKwInOut{Input}{input} \SetKwInOut{Output}{output} \Input{A Petri net game $N=\ensuremath{(\places,\transitions_1, \transitions_2, \weights,\inhib)}\xspace$, a marking $M \in \ensuremath{\mathcal{M}(N)}\xspace$ and a formula $\varphi$} \Output{$X \subseteq \ensuremath{T}\xspace$ where $X$ is a stable stubborn set for $M$} \If{$\en(M) = \emptyset$}{ \Return{$\ensuremath{T}\xspace$}; } \If{$\en_1(M) \neq \emptyset \land \en_2(M) \neq \emptyset$}{ \Return{$\ensuremath{T}\xspace$};\label{line:mixed} } $Y$ := $\emptyset$; \eIf{$\en_1(M) = \emptyset$}{ \If{$\mathit{reach}(N,M,\varphi)$}{ \Return{$\ensuremath{T}\xspace$};\label{line:visible} } Pick any $t \in \en_2(M)$;\label{line:dead-pick} $Y$ := $\ensuremath{T}\xspace_1 \cup \{t\} \cup \postdecr{(\preset{t})} \cup \preincr{(\inhibpreset{t})}$;\label{line:p1} }{ $Y$ := $\ensuremath{T}\xspace_2$;\label{line:p2} } $Y$ := $Y \cup \interest{M}{\varphi}$;\label{line:interesting} $X$ := $\mathit{Saturate}(Y)$; \If{$X \cap \en_1(M) \nsubseteq \safe{M}$}{ \Return{$\ensuremath{T}\xspace$};\label{line:safe} } \Return{$X$};\label{line:terminate} \caption{Computation of $\reduction(M)$ for some stable reduction $\reduction$} \label{alg:saturation} \end{algorithm}

\begin{algorithm} $X$ := $\emptyset$; \While{$Y \neq \emptyset$} { Pick any $t \in Y$; \eIf{$t \notin \en(M)$}{ \eIf{$\exists p \in \preset{t}.\ M(p) < \ensuremath{W}\xspace(p,t)$}{ Pick any $p \in \preset{t}$ s.t.
$M(p) < \ensuremath{W}\xspace(p,t)$;\label{line:pick-dis-1} $Y$ := $Y \cup (\preincr{p} \setminus X)$;\label{line:dis-1-add} }{ Pick any $p \in \inhibpreset{t}$ s.t. $M(p) \geq \ensuremath{I}\xspace(p,t)$;\label{line:pick-dis-2} $Y$ := $Y \cup (\postdecr{p} \setminus X)$;\label{line:dis-2-add} } }{ $Y$ := $Y \cup ((\postset{(\predecr{t})} \cup \inhibpostset{(\postincr{t})}) \setminus X)$;\label{line:enabled} } $X$ := $X \cup \{t\}$;\label{line:retire} $Y$ := $Y \setminus \{t\}$; } \Return{$X$};\label{line:final-return} \caption{$\mathit{Saturate}(Y)$} \label{alg:saturation-2} \end{algorithm}

\begin{thm}\label{thm:alg} Algorithm~\ref{alg:saturation} terminates and returns $\reduction(M)$ for some stable reduction $\reduction$. \end{thm}

\begin{proof} \textit{Termination}. If $\en_1(M) \neq \emptyset$ and $\en_2(M) \neq \emptyset$ then we terminate in line~\ref{line:mixed}. Otherwise $Y \neq \emptyset$ and we enter the while-loop in Algorithm~\ref{alg:saturation-2}. Notice that $X \cap Y = \emptyset$ holds throughout the execution of Algorithm~\ref{alg:saturation-2} and we never remove transitions from $X$ after they have been added. Therefore, since in line~\ref{line:retire} of Algorithm~\ref{alg:saturation-2} a new transition is added to $X$ at the end of each loop iteration, the loop can iterate at most once for each transition. Since $\ensuremath{T}\xspace$ is finite by the definition of a Petri net game, the loop iterates a finite number of times, and Algorithm~\ref{alg:saturation-2} terminates. If $\en_1(M) \cap X \nsubseteq \safe{M}$ then we terminate in line~\ref{line:safe} of Algorithm~\ref{alg:saturation}, and otherwise we return in line~\ref{line:terminate} and Algorithm~\ref{alg:saturation} terminates. \textit{Correctness}.
It was shown that the construction in Theorem~\ref{prop:turn-game-preservation} results in a set that is a stubborn set of a stable reduction. It is therefore sufficient to show that Algorithm~\ref{alg:saturation} replicates the construction. Notice that every transition that is added to $Y$ is eventually added to $X$ in line~\ref{line:retire} and returned in line~\ref{line:final-return} of Algorithm~\ref{alg:saturation-2}. We now discuss why all conditions of Theorem~\ref{prop:turn-game-preservation} hold upon termination.
\begin{itemize}
\item Condition~\ref{item:contested}: If $\en_1(M) \neq \emptyset$ and $\en_2(M) \neq \emptyset$ then we return $\ensuremath{T}\xspace$ in line~\ref{line:mixed} of Algorithm~\ref{alg:saturation}.
\item Condition~\ref{item:safe}: If $\en_1(M) \cap \reduction(M) \nsubseteq \safe{M}$ then we return $\ensuremath{T}\xspace$ in line~\ref{line:safe} of Algorithm~\ref{alg:saturation}.
\item Condition~\ref{item:interesting}: We have $\interest{M}{\varphi} \subseteq Y$ in line~\ref{line:interesting} of Algorithm~\ref{alg:saturation}.
\item Condition~\ref{item:opponent-1}: We have $\ensuremath{T}\xspace_1 \subseteq Y$ in line~\ref{line:p1} of Algorithm~\ref{alg:saturation}.
\item Condition~\ref{item:opponent-2}: We have $\ensuremath{T}\xspace_2 \subseteq Y$ in line~\ref{line:p2} of Algorithm~\ref{alg:saturation}.
\item Condition~\ref{item:disabled-1}: In line~\ref{line:pick-dis-1} we pick any $p \in \preset{t}$ s.t. $M(p) < \ensuremath{W}\xspace(p,t)$, and in line~\ref{line:dis-1-add} of Algorithm~\ref{alg:saturation-2} we add $\preincr{p}$ to $Y$.
\item Condition~\ref{item:disabled-2}: In line~\ref{line:pick-dis-2} we pick any $p \in \inhibpreset{t}$ s.t.
$M(p) \geq \ensuremath{I}\xspace(p,t)$, and in line~\ref{line:dis-2-add} of Algorithm~\ref{alg:saturation-2} we add $\postdecr{p}$ to $Y$.
\item Condition~\ref{item:enabled-1}: In line~\ref{line:enabled} of Algorithm~\ref{alg:saturation-2} we add $\postset{(\predecr{t})}$ to $Y$.
\item Condition~\ref{item:enabled-2}: In line~\ref{line:enabled} of Algorithm~\ref{alg:saturation-2} we add $\inhibpostset{(\postincr{t})}$ to $Y$.
\item Condition~\ref{item:deadlock}: In line~\ref{line:dead-pick} of Algorithm~\ref{alg:saturation} we pick any $t' \in \en_2(M)$ and in line~\ref{line:p1} we add $\postdecr{(\preset{t'})} \cup \preincr{(\inhibpreset{t'})}$ to $Y$.
\item Condition~\ref{item:visible}: If $\en_1(M) = \emptyset$ and $\mathit{reach}(N,M,\varphi)=\mathit{true}$ then we return $\ensuremath{T}\xspace$ at line~\ref{line:visible} of Algorithm~\ref{alg:saturation}. \qedhere
\end{itemize}
\end{proof}

\begin{rem} In the actual implementation of the algorithm, we first saturate only over the set of interesting transitions and in the case that $\mathit{Saturate}(\interest{M}{\varphi}) \cap \en(M) = \emptyset$, we do not explore any of the successors of the marking $M$ as we know that no goal marking can be reached from $M$ (this follows from Lemma~\ref{lemma:early-termination}). \end{rem}

\section{Implementation and Experiments}

We extend the Petri net verification engine \texttt{verifypn}~\cite{jensen2016tapaal}, a part of the TAPAAL tool suite~\cite{david2012tapaal}, to experimentally demonstrate the viability of our approach.
The synthesis algorithm for solving Petri net games is an adaptation of the dependency graph fixed-point computation from~\cite{jensen2018discrete,jensen2016real} that we reimplement in \texttt{C++} while utilising PTries~\cite{JLS:ICTAC:17} for efficient state storage. The source code is available under GPLv3~\cite{repeatability}. We conduct a series of experiments using the following scalable case studies.
\begin{itemize}
\item In \emph{Autonomous Intersection Management} (AIM) vehicles move at different speeds towards an intersection and we want to ensure the absence of collisions. We model the problem as a Petri net game and refer to each instance as AIM-$W$-$X$-$Y$-$Z$ where $W$ is the number of intersections with lanes of length $X$, $Z$ is the number of cars, and $Y$ is the number of different speeds for each car. The controller assigns speeds to cars while the environment aims to cause a collision. The goal marking is reached when all cars arrive at their destinations without any collision.
\item We reformulate the classical \emph{Producer Consumer System} (PCS) as a Petri net game. In each instance PCS-$N$-$K$ a total of $N$ consumers (controlled by the environment) and $N$ producers (controlled by the controller) share $N$ buffers. Each consumer and producer has a fixed buffer to consume/produce from/to, and each consumer/producer has $K$ different randomly chosen consumption/production rates. The game alternates in rounds where the players choose for each consumer/producer appropriate buffers and rates. The goal of the game is to ensure that the consumers always have enough products in the selected buffers, while at the same time the buffers have limited capacity and may not overflow.
\item The \emph{Railway Scheduling Problem} contains four instances modeling the Danish train station Lyngby and three of its smaller variants.
The scheduling problem, including the station layout, was originally described as a game in~\cite{kasting2016synthesis} and each instance is annotated by a number $N$ representing the number of trains that migrate through the railway network. The controller controls the lights and switches, while the environment moves the trains. The goal of the controller is to make sure that all trains reach (without any collisions) their final destinations.
\item The \emph{Nim} (NIM-$K$-$S$) Petri net game was described in~\cite{T:CIMCA:08} as a two player game where the players in rounds repeatedly remove between $1$ and $K$ pebbles from an initial stack containing $S$ pebbles. The player that has a turn and an empty stack of pebbles loses. In our (equivalent) model, we instead add pebbles to an initially empty stack and the player that first brings the stack to $S$ or more pebbles loses.
\item The \emph{Manufacturing Workflow} (MW) contains instances of a software product line Petri net model presented in~\cite{QUINTANILLA2013342}. The net describes a series of possible ways of configuring a product (performed by the environment) while the controller aims to construct a requested product. The model instance MW-$N$ contains $N$ possible choices of product features.
\item The \emph{Order Workflow} (OW) Petri net game model is taken from~\cite{10.1007/978-3-540-30468-511} and the goal of the game is to synthesise a strategy that guarantees workflow soundness, irrespective of the choices made by the environment. We scale the workflow by repeatedly re-initialising the workflow $N$ times (denoted by OW-$N$).
\item In \emph{Flexible Manufacturing Systems} (FMS) we use the Petri net models from~\cite{LZ:04,AE:98} modeling different production lines with shared resources. The Petri nets FMS-D~\cite{AE:98} and FMS-C~\cite{LZ:04} both contain a deadlock and the problem is to control a small subset of transitions so that the deadlock can be avoided.
The models are scaled by the number of resources and products in the line. The goal in the FMS-N~\cite{LZ:04} model is to control a subset of transitions in the net in order to guarantee that a given resource (Petri net place) never becomes empty.
\end{itemize}

\noindent All experiments are run on AMD Epyc 7551 processors with a 110 GB memory limit and a 12 hour timeout (we measure only the execution time without the parsing time of the models). For all experiments we use the depth-first search strategy and we only report the examples where the algorithms both with and without partial order reduction returned a result within the time and memory limits. We provide a reproducibility package with all models and experimental data~\cite{repeatability}.

\input{sections/results}

\section{Conclusion}

We generalised the partial order reduction technique based on stubborn sets from plain reachability to a game theoretical setting. This required a nontrivial extension of the classical conditions on stubborn sets so that a state space reduction can be achieved for both players in the game. In particular, the computation of the stubborn sets for player 2 (uncontrollable transitions) needed a new technique for interval approximation of the number of tokens in reachable markings. We proved the correctness of our approach and instantiated it to the case of Petri net games. We provided (to the best of our knowledge) the first implementation of partial order reduction for Petri net games and made it available as a part of the model checker TAPAAL\@. The experiments show promising results on a number of case studies, achieving in general a substantial state space reduction with only a small overhead for computing the stubborn sets.
In future work, we plan to combine our contribution with a recent insight on how to effectively use partial order reduction in the timed setting~\cite{boenneland2018start} in order to extend our framework to general timed games. \textbf{Acknowledgments.} We are grateful to Thomas Neele from Eindhoven University of Technology for letting us know about the false claim in Lemma~\ref{lemma2} that was presented in the conference version of this article. The counterexample, presented in Remark~\ref{rem:error}, is attributed to him. We are obliged to Antti Valmari for noticing that condition {\bf C} in our conference paper is redundant and can be substituted by conditions {\bf W} and {\bf D}, as is done in this article. We also thank the anonymous reviewers for their numerous suggestions that helped us to improve the quality of the presentation. The research leading to these results has received funding from the project \mbox{DiCyPS} funded by the Innovation Fund Denmark, the ERC Advanced Grant LASSO and the DFF project QASNET\@. \end{document}
\begin{document} \title{Toeplitz and Toeplitz-block-Toeplitz matrices and their correlation with syzygies of polynomials} \begin{abstract} In this paper, we re-investigate the resolution of Toeplitz systems $T\, u =g$ from a new point of view, by correlating the solution of such problems with syzygies of polynomials or moving lines. We show an explicit connection between the generators of a Toeplitz matrix and the generators of the corresponding module of syzygies. We show that this module is generated by two elements of degree $n$, and that the solution of $T\,u=g$ can be reinterpreted as the remainder of the division of an explicit vector depending on $g$ by these two generators. This approach extends naturally to multivariate problems, and we describe the structure of the corresponding generators for Toeplitz-block-Toeplitz matrices. \end{abstract} \begin{keywords} Toeplitz matrix, rational interpolation, syzygies \end{keywords} \section{Introduction} Structured matrices appear in various domains, such as scientific computing and signal processing. They usually express, in a linearized way, a problem which depends on fewer parameters than the number of entries of the corresponding matrix. An important area of research is devoted to the development of methods for the treatment of such matrices, which exploit the actual parameters involved in these matrices. Among well-known structured matrices, Toeplitz and Hankel structures have been intensively studied \cite{MR782105, MR1355506}. Nearly optimal algorithms are known for the multiplication and for the resolution of linear systems with such structures. Namely, if $A$ is a Toeplitz matrix of size $n$, multiplying it by a vector or solving a linear system with $A$ requires $\tilde{\mathcal{O}}(n)$ arithmetic operations (where $\tilde{\mathcal{O}}(n)=\mathcal{O}(n \log^{c}(n))$ for some $c>0$) \cite{LDRBA80, MR1871324}.
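To make the $\tilde{\mathcal{O}}(n)$ multiplication claim concrete, the following is a minimal numerical sketch in Python with numpy (our own illustration, not code from this paper): a Toeplitz matrix--vector product computed in $\mathcal{O}(n\log n)$ by embedding $T$ into a $2n\times 2n$ circulant matrix, whose action is diagonalized by the FFT.

```python
import numpy as np

def toeplitz_matvec(c, r, u):
    """Product T @ u for the real Toeplitz matrix T with first column c and
    first row r (with r[0] == c[0]), in O(n log n) operations: T is embedded
    into a 2n x 2n circulant matrix, which the FFT diagonalizes."""
    n = len(u)
    # First column of the circulant embedding: [c, arbitrary entry, tail of r reversed].
    col = np.concatenate([c, [0.0], r[:0:-1]])
    # Circulant times [u; 0] via pointwise multiplication in the Fourier domain.
    prod = np.fft.ifft(np.fft.fft(col) * np.fft.fft(np.concatenate([u, np.zeros(n)])))
    return prod[:n].real  # .real assumes real input data
```

The first $n$ entries of the circulant product coincide with $T\,u$; the same evaluation-interpolation viewpoint underlies the polynomial-product interpretation recalled below.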
Such algorithms are called super-fast, as opposed to fast algorithms requiring $\mathcal{O}(n^{2})$ arithmetic operations. The fundamental ingredients in these algorithms are the so-called generators \cite{MR1355506}, which encode the minimal information stored in these matrices and onto which the matrix transformations are translated. The correlation with other types of structured matrices has also been well developed in the literature \cite{MR1843842,MR1755557}, allowing other structures, such as Vandermonde or Cauchy-like structures, to be treated just as efficiently. Such problems are strongly connected to polynomial problems \cite{LDRFuh96b, MR1289412}. For instance, the product of a Toeplitz matrix by a vector can be deduced from the product of two univariate polynomials, and thus can be computed efficiently by evaluation-interpolation techniques, based on FFT. The inverse of a Hankel or Toeplitz matrix is connected to the Bezoutian of the polynomials associated with its generators. However, most of these methods involve univariate polynomials. So far, few investigations have been pursued for the treatment of multilevel structured matrices \cite{LDRTyr85}, related to multivariate problems. Such linear systems appear for instance in resultant or residue constructions, in normal form computations, or more generally in multivariate polynomial algebra. We refer to \cite{MR1762401} for a general description of such correlations between multi-structured matrices and multivariate polynomials. Surprisingly, they also appear in numerical schemes and preconditioners. A main challenge here is to devise super-fast algorithms of complexity $\tilde{\mathcal{O}}(n)$ for the resolution of multi-structured systems of size $n$. In this paper, we consider block-Toeplitz matrices, where each block is a Toeplitz matrix.
Such a structure, which is the first step towards multi-level structures, is involved in many bivariate problems and in numerical linear problems. We first re-investigate the resolution of Toeplitz systems $T\, u =g$ from a new point of view, by correlating the solution of such problems with syzygies of polynomials or moving lines. We show an explicit connection between the generators of a Toeplitz matrix and the generators of the corresponding module of syzygies. We show that this module is generated by two elements of degree $n$, and that the solution of $T\,u=g$ can be reinterpreted as the remainder of the division of an explicit vector depending on $g$ by these two generators. This approach extends naturally to multivariate problems, and we describe the structure of the corresponding generators for Toeplitz-block-Toeplitz matrices. In particular, we show the known result that the module of syzygies of $k$ non-zero bivariate polynomials is free of rank $k-1$, by a new elementary proof. Exploiting the properties of moving lines associated with Toeplitz matrices, we give a new point of view on the resolution of Toeplitz-block-Toeplitz systems. In the next section we study the scalar Toeplitz case; in Section 3 we consider the Toeplitz-block-Toeplitz case. Let $R=\mathbb{K}[x]$. For $n \in \mathbb{N}$, we denote by $\mathbb{K}[x]_{n}$ the vector space of polynomials of degree $\le n$. Let $L=\mathbb{K}[x,x^{-1}]$ be the set of Laurent polynomials in the variable $x$. For any polynomial $p=\sum_{i=-m}^{n} p_{i}\, x^{i} \in L$, we denote by $p^{+}$ the sum of the terms with nonnegative exponents: $p^{+}=\sum_{i=0}^{n} p_{i}\, x^{i}$, and by $p^{-}$ the sum of the terms with strictly negative exponents: $p^{-}=\sum_{i=-m}^{-1} p_{i}\, x^{i}$. We have $p=p^{+} +p^{-}$. For $n\in \mathbb{N}$, we denote by $\Unit{n}=\{\omega;\ \omega^{n}=1\}$ the set of roots of unity of order $n$.
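The decomposition $p=p^{+}+p^{-}$ and the sets $\Unit{n}$ are used repeatedly below. The following small Python sketch (our own illustration, with hypothetical helper names `split`, `fold`, `ev`) represents a Laurent polynomial as a dict from exponents to coefficients, splits it, and checks that lifting the negative part by $x^{2n}$ leaves the values on $\Unit{2n}$ unchanged, since $\omega^{e+2n}=\omega^{e}$ for $\omega^{2n}=1$.

```python
import numpy as np

def split(p):
    """Split a Laurent polynomial {exponent: coeff} into p_plus (exponents >= 0)
    and p_minus (exponents < 0), so that p = p_plus + p_minus."""
    p_plus = {e: c for e, c in p.items() if e >= 0}
    p_minus = {e: c for e, c in p.items() if e < 0}
    return p_plus, p_minus

def fold(p, n):
    """Replace p by the ordinary polynomial p_plus + x^{2n} * p_minus
    (for exponents >= -2n); it agrees with p at every 2n-th root of unity."""
    p_plus, p_minus = split(p)
    q = dict(p_plus)
    for e, c in p_minus.items():
        q[e + 2 * n] = q.get(e + 2 * n, 0) + c
    return q

def ev(p, x):
    """Evaluate a (Laurent) polynomial given as {exponent: coeff} at x."""
    return sum(c * x ** e for e, c in p.items())
```

This folding mechanism is exactly what makes a Laurent symbol and its ordinary-polynomial surrogate interchangeable on $\Unit{2n}$.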
\section{Univariate case} We begin with the univariate case and the following problem: \begin{problem} Given a Toeplitz matrix $T=(t_{i-j})_{i,j=0}^{n-1}\in \mathbb{K}^{n\times n}$ ($T=(T_{ij})_{i,j=0}^{n-1}$ with $T_{ij}=t_{i-j}$) of size $n$ and $g=(g_0,\dots,g_{n-1}) \in \mathbb{K}^{n}$, find $u=(u_0,\dots,u_{n-1}) \in \mathbb{K}^{n}$ such that \begin{equation}\label{pb:toep} T\,u=g. \end{equation} \end{problem} Let $E=\{1,\dots,x^{n-1}\}$, and let $\Pi_{E}$ be the projection of $R$ on the vector space generated by $E$, along $\langle x^{n},x^{n+1},\ldots\rangle$. \begin{definition} We define the following polynomials: \begin{itemize} \item $T(x)=\displaystyle\sum _{i=-n+1}^{n-1}t_ix^i,$ \item $\tilde{T}(x)=\displaystyle\sum_{i=0}^{2n-1}\tilde{t}_ix^i$ with $\tilde{t}_i=\left\{ \begin{array}{ll} t_i&\textrm{ if } i< n\\ t_{i-2n}&\textrm{ if } i\ge n \end{array}\right.$, \item $u(x)=\displaystyle\sum_{i=0}^{n-1}u_ix^i,\:g(x)=\sum_{i=0}^{n-1}g_ix^i$. \end{itemize} \end{definition} \noindent{}Notice that $\tilde{T}= T^{+} + x^{2\,n}\, T^{-}$ and $T(\omega)=\tilde{T}(\omega)$ if $\omega \in \Unit{2\,n}$. We also have (see \cite{MR1762401}) $$ T\,u=g\Leftrightarrow\Pi_{E}(T(x)u(x))=g(x). $$ For any polynomial $u \in \mathbb{K}[x]$ of degree $d$, we write $u(x)= \ud{u}(x) + x^{n} \ov{u}(x)$ with $\deg(\ud{u})\le n-1$ and $\deg(\ov{u}) \le d-n$ if $d\ge n$, and $\ov{u}=0$ otherwise. Then, we have \begin{eqnarray} T(x)\, u(x) &=& T(x) \ud{u}(x) + T(x) x^{n} \ov{u}(x) \nonumber \\ & = & \Pi_{E}(T(x) \ud{u}(x)) + \Pi_{E}(T(x) x^{n} \ov{u}(x)) \nonumber \\ & & + (\alpha_{-n+1}x^{-n+1}+\dots+\alpha_{-1}x^{-1}) \nonumber \\ & & + (\alpha_{n}x^n+\dots+\alpha_{n+m}x^{n+m}) \nonumber \\ \label{eq:TU1} &=& \Pi_{E}(T(x) \ud{u}(x)) + \Pi_{E}(T(x) x^{n} \ov{u}(x)) \nonumber \\ & & + x^{-n+1} A(x) + x^{n} B(x), \end{eqnarray} with $m= \max(n-2, d-1),$ \begin{eqnarray}\label{eq:TU2} && A(x) = \alpha_{-n+1}+\dots+\alpha_{-1}x^{n-2}, \nonumber \\ && B(x) = \alpha_{n}+\dots+\alpha_{n+m}x^{m}.
\end{eqnarray} See \cite{MR1762401} for more details on the correlation between structured matrices and (multivariate) polynomials. \subsection{Moving lines and Toeplitz matrices} We consider here another problem, related to interesting questions in effective algebraic geometry. \begin{problem}\label{pb:mvlines} Given three polynomials $a, b, c \in R$, respectively of degree $<l, <m, <n$, find three polynomials $p, q, r \in R$ of degree $< \nu-l, <\nu-m, <\nu-n$, such that \begin{equation} \label{eq:mvlines} a(x)\, p(x) + b(x)\, q(x) + c(x)\, r(x) =0. \end{equation} \end{problem} We denote by $\mathcal{L}(a,b,c)$ the set of $(p,q,r)\in\mathbb{K}[x]^{3}$ which are solutions of \eqref{eq:mvlines}. It is a $\mathbb{K}[x]$-submodule of $\mathbb{K}[x]^{3}$. The solutions of Problem~\ref{pb:mvlines} are the elements of $\mathcal{L}(a,b,c) \cap \mathbb{K}[x]_{\nu-l-1}\times\mathbb{K}[x]_{\nu-m-1}\times\mathbb{K}[x]_{\nu-n-1}$. Given a new polynomial $d(x)\in \mathbb{K}[x]$, we denote by $\mathcal{L}(a,b,c;d)$ the set of $(p,q,r)\in\mathbb{K}[x]^{3}$ such that $$ a(x)\, p(x) + b(x)\, q(x) + c(x)\, r(x) = d(x). $$ \begin{theorem} For any non-zero vector of polynomials $(a,b,c)\in \mathbb{K}[x]^{3}$, the $\mathbb{K}[x]$-module $\mathcal{L}(a,b,c)$ is free of rank $2$. \end{theorem} \begin{proof} By Hilbert's syzygy theorem, the ideal $I$ generated by $(a,b,c)$ has a free resolution of length at most $1$, that is, of the form: $$ 0\rightarrow\mathbb{K}[x]^p\rightarrow \mathbb{K}[x]^3\rightarrow \mathbb{K}[x] \rightarrow \mathbb{K}[x]/I \rightarrow 0. $$ As $I\neq 0$, for rank reasons, we must have $p=2$. \end{proof} \begin{definition} A $\mu$-base of $\mathcal{L}(a,b,c)$ is a basis $(p,q,r)$, $(p',q',r')$ of $\mathcal{L}(a,b,c)$, with $(p,q,r)$ of minimal degree $\mu$. \end{definition} Notice that if $\mu_1$ is the smallest degree of a generator and $\mu_2$ the degree of the second generator $(p',q',r')$, we have $d=\max(\deg(a),\deg(b),\deg(c))=\mu_1+\mu_2$.
Indeed, we have \begin{eqnarray*} \lefteqn{0 \rightarrow \mathbb{K}[x]_{\nu-d-\mu_1} \oplus \mathbb{K}[x]_{\nu-d-\mu_2} \rightarrow} \\ & &\mathbb{K}[x]_{\nu-d}^3\rightarrow \mathbb{K}[x]_{\nu} \rightarrow \mathbb{K}[x]_{\nu}/(a,b,c)_{\nu} \rightarrow 0, \end{eqnarray*} for $\nu \gg 0$. As the alternating sum of the dimensions of the $\mathbb{K}$-vector spaces is zero and $\mathbb{K}[x]_{\nu}/(a,b,c)_{\nu}$ is $0$ for $\nu \gg 0$, we have \begin{eqnarray*} 0 & = & 3\,(d-\nu-1) +\nu -\mu_1- d +1 + \nu -\mu_2 -d +1 + \nu +1\\ & = & d -\mu_1 -\mu_2. \end{eqnarray*} For $\mathcal{L}(\tilde{T}(x), x^{n}, x^{2n}-1)$, we have $\mu_1+\mu_2=2\,n$. We are now going to show that in fact $\mu_1=\mu_2=n$: \begin{proposition}\label{prop:ML} The $\mathbb{K}[x]$-module $\mathcal{L}(\tilde{T}(x), x^{n}, x^{2n}-1)$ has an $n$-basis. \end{proposition} \begin{proof} Consider the map \begin{eqnarray}\label{syzfunction1} \mathbb{K}[x]_{n-1}^3 &\rightarrow & \mathbb{K}[x]_{3n-1}\\ (p(x),q(x),r(x)) & \mapsto & \tilde{T}(x) p(x) + x^{n} q(x) +(x^{2n}-1) r(x)\nonumber \end{eqnarray} whose $3n \times 3n$ matrix is of the form \begin{equation}\label{form:S} S:= \left( \begin{array}{c|c|c} T_{0} &\mathbf{0} & -\mathbb{I}_{n} \\ T_{1} & \mathbb{I}_{n} & \mathbf{0} \\ T_{2} & \mathbf{0} & \ \, \mathbb{I}_{n} \\ \end{array} \right), \end{equation} where $T_{0}, T_{1}, T_{2}$ are the coefficient matrices of $(\tilde{T}(x)$, $x\, \tilde{T}(x)$, $\ldots,$ $x^{n-1}\tilde{T}(x))$, respectively for the lists of monomials $(1,\ldots,x^{n-1})$, $(x^{n},\ldots,x^{2n-1})$, $(x^{2n},\ldots, x^{3n-1})$. Notice in particular that $T= T_{0}+T_{2}$. Reducing the first rows $(T_{0}| \mathbf{0} | -\mathbb{I}_{n})$ by the last rows $(T_{2}| \mathbf{0} | \mathbb{I}_{n})$, we replace them by the block $(T_{0}+T_{2}| \mathbf{0} | \mathbf{0})$, without changing the rank of $S$. As $T=T_{0}+T_{2}$ is invertible, this shows that the matrix $S$ is of rank $3n$. Therefore, there are no syzygies in degree $n-1$.
Since there are no syzygies in degree $n-1$, the minimal degrees $\mu_1,\mu_2$ of a pair of generators of $\mathcal{L}(\tilde{T}(x), x^{n}, x^{2n}-1)$ satisfy $\mu_{1}\ge n$ and $\mu_{2}\ge n$; as $\mu_1+\mu_2=2n$, we have $\mu_1=\mu_2=n$. Thus there exist two linearly independent syzygies $(u_1,v_1,w_1)$, $(u_2,v_2,w_2)$ of degree $n$, which generate $\mathcal{L}(\tilde{T}(x), x^{n}, x^{2n}-1)$. \end{proof} A similar result can also be found in \cite{MR1871324}, but the proof, much longer than this one, is based on interpolation techniques and explicit computations. Let us now describe how to construct explicitly two generators of $\mathcal{L}(\tilde{T}(x), x^{n}, x^{2n}-1)$ of degree $n$ (see also \cite{MR1871324}). As $\tilde{T}(x)\,x^{n}$ is of degree $\le 3\,n -1$ and the map \eqref{syzfunction1} is surjective, there exists $(u,v,w) \in \mathbb{K}[x]_{n-1}^3$ such that \begin{equation}\label{base1} \tilde{T}(x) u(x) + x^n v(x) + (x^{2\,n}-1)\, w(x) = \tilde{T}(x)\, x^n, \end{equation} and we deduce that $(u_1,v_1,w_1)=(x^n-u, -v, -w) \in \mathcal{L}(\tilde{T}(x), x^{n}, x^{2n}-1)$. As there exists $(u',v',w') \in \mathbb{K}[x]_{n-1}^3$ such that \begin{equation}\label{base2} \tilde{T}(x) u'(x) + x^n v'(x) + (x^{2\,n}-1)\, w'(x) =1 = x^n\, x^n - (x^{2\,n}-1), \end{equation} we deduce that $(u_2,v_2,w_2)=(-u',x^n -v', -w' - 1) \in \mathcal{L}(\tilde{T}(x), x^{n}, x^{2n}-1)$. Now, the vectors $(u_1,v_1,w_1)$, $(u_2,v_2,w_2)$ of $\mathcal{L}( \tilde{T}(x),x^{n},x^{2n}-1)$ are linearly independent, since by construction the coefficient vectors of $x^{n}$ in $(u_1,v_1,w_1)$ and $(u_2,v_2,w_2)$ are respectively $(1,0,0)$ and $(0,1,0)$.
\begin{proposition}\label{division} The vector $u$ is a solution of \eqref{pb:toep} if and only if there exist $v(x) \in \mathbb{K}[x]_{n-1}$, $w(x) \in \mathbb{K}[x]_{n-1}$ such that $$ (u(x), v(x), w(x)) \in \mathcal{L}(\tilde{T}(x), x^{n}, x^{2n}-1; g(x) ). $$ \end{proposition} \begin{proof} The vector $u$ is a solution of \eqref{pb:toep} if and only if we have $$ \Pi_{E}(T(x)u(x))=g(x). $$ As $u(x)$ is of degree $\le n-1$, we deduce from \eqref{eq:TU1} and \eqref{eq:TU2} that there exist polynomials $A(x) \in \mathbb{K}[x]_{n-2}$ and $B(x) \in \mathbb{K}[x]_{n-1}$ such that $$ T(x)u(x) - x^{-n+1} A(x) - x^{n} B(x) = g(x). $$ By evaluation at the roots $\omega \in \Unit{2n}$, and since $\omega^{-n} = \omega^{n}$ and $\tilde{T}(\omega)=T(\omega)$ for $\omega\in \Unit{2n}$, we have $$ \tilde{T}(\omega) u(\omega) + \omega^{n} v(\omega) = g(\omega), \quad \forall \omega \in \Unit{2n}, $$ with $v(x)= -x\, A(x)-B(x)$ of degree $\le n-1$. We deduce that there exists $w(x)\in\mathbb{K}[x]$ such that $$ \tilde{T}(x) u(x) + x^{n} v(x) + (x^{2n}-1) w(x)= g(x). $$ Notice that $w(x)$ is of degree $\le n-1$, because $(x^{2n}-1)\, w(x)$ is of degree $\le 3n-1$. Conversely, a solution $(u(x), v(x), w(x)) \in \mathcal{L}(\tilde{T}(x),x^{n},x^{2n}-1; g(x) )\cap \mathbb{K}[x]_{n-1}^{3}$ yields a solution $(u,v,w)\in \mathbb{K}^{3\,n}$ of the linear system $$ S \, \left( \begin{array}{c} u\\ v\\ w\\ \end{array} \right) = \left( \begin{array}{c} g\\ 0\\ 0\\ \end{array} \right), $$ where $S$ has the block structure \eqref{form:S}, so that $T_{2}\, u + w =0$ and $T_{0}\, u - w = (T_{0}+T_{2})\, u=g$. As we have $T_{0}+T_{2}=T$, the vector $u$ is a solution of \eqref{pb:toep}, which ends the proof of the proposition.
\end{proof} \subsection{Euclidean division} As a consequence of Proposition \ref{prop:ML}, we have the following property: \begin{proposition}\label{divi} Let $\{(u_1,v_1,w_1),(u_2,v_2,w_2)\}$ be an $n$-basis of $\mathcal{L}(\tilde{T}(x),x^{n},x^{2n}-1)$. The remainder of the division of $\begin{pmatrix}0\\x^n\,g\\-g\end{pmatrix}$ by $\begin{pmatrix} u_1&u_2\\v_1&v_2\\w_1&w_2\end{pmatrix}$ is the solution vector given in Proposition~\ref{division}. \end{proposition} \begin{proof} The vector $\begin{pmatrix}0\\x^n\, g\\ -g\end{pmatrix}$ belongs to $\mathcal{L}(\tilde{T}(x), x^{n}, x^{2\,n}-1;g)$ (it is a particular solution). Dividing it by $\begin{pmatrix}u_1&u_2\\v_1&v_2\\w_1&w_2\end{pmatrix}$, we obtain $$\begin{pmatrix}u\\v\\w\end{pmatrix}=\begin{pmatrix}0\\x^n\,g\\-g\end{pmatrix}-\begin{pmatrix}u_1&u_2\\v_1&v_2\\w_1&w_2\end{pmatrix}\begin{pmatrix}p\\q\end{pmatrix}. $$ Since $(u,v,w)$ is the remainder of the division, we have $(u,v,w)\in\mathbb{K}[x]^3_{n-1}\cap\mathcal{L}(\tilde{T}(x), x^{n}, x^{2\,n}-1;g)$. Moreover, $(u,v,w)$ is the unique vector in $\mathbb{K}[x]^3_{n-1}\cap\mathcal{L}(\tilde{T}(x), x^{n}, x^{2\,n}-1;g)$, because if there were another one, their difference would be in $\mathcal{L}(\tilde{T}(x),x^{n},x^{2\,n}-1) \cap \mathbb{K}[x]_{n-1}^{3}$, which is equal to $\{(0,0,0)\}$. \end{proof} \begin{problem}\label{pb:division} Given a matrix of polynomials $\begin{pmatrix}e(x)&e'(x)\\f(x)&f'(x)\end{pmatrix}$ of degree $n$ and a vector of polynomials $\begin{pmatrix}p(x)\\q(x)\end{pmatrix}$ of degree $m\geq n$, such that $\begin{pmatrix}e_n&e_n'\\f_n&f_n' \end{pmatrix}$ is invertible, find the remainder of the division of $\begin{pmatrix}p(x)\\q(x)\end{pmatrix} $ by $\begin{pmatrix}e(x)&e'(x)\\f(x)&f'(x)\end{pmatrix}$.
\end{problem} \begin{proposition} The first coordinate of the remainder vector of the division of $\begin{pmatrix}0\\x^n g\end{pmatrix}$ by $\begin{pmatrix} u_1&u_2\\v_1&v_2\end{pmatrix}$ is the polynomial $u(x)$ associated with the solution of \eqref{pb:toep}. \end{proposition} We describe here a generalized Euclidean division algorithm to solve Problem~\ref{pb:division}. Let $E(x)=\begin{pmatrix}p(x)\\q(x)\end{pmatrix}$ of degree $m$ and $B(x)=\begin{pmatrix}e(x)&e'(x)\\f(x)&f'(x)\end{pmatrix}$ of degree $n\leq m$, so that $E(x)=B(x)Q(x)+R(x)$ with $\deg(R(x))<n$ and $\deg(Q(x))\leq m-n$. Let $z=\frac{1}{x}$. Then \begin{eqnarray}\label{div} &E(x)&=B(x)Q(x)+R(x)\nonumber\\ \Leftrightarrow& E(\displaystyle \frac{1}{z})&=B(\frac{1}{z})Q(\frac{1}{z})+R(\frac{1}{z})\nonumber\\ \Leftrightarrow& z^{m}E(\displaystyle \frac{1}{z})&=z^nB(\frac{1}{z})z^{m-n}Q(\frac{1}{z})+z^{m-n+1}z^{n-1}R(\frac{1}{z})\nonumber\\ \Leftrightarrow& \hat{E}(z)&= \hat{B}(z) \hat{Q}(z)+z^{m-n+1} \hat{R}(z), \end{eqnarray} where $ \hat{E}(z), \hat{B}(z), \hat{Q}(z), \hat{R}(z)$ are the polynomials obtained by reversing the order of the coefficients of $E(x),B(x),Q(x),R(x)$. \begin{eqnarray*} (\ref{div})&\Rightarrow& \frac{ \hat{E}(z)}{ \hat{B}(z)}= \hat{Q}(z)+z^{m-n+1}\frac{ \hat{R}(z)}{ \hat{B}(z)}\\ &\Rightarrow& \hat{Q}(z)=\frac{ \hat{E}(z)}{ \hat{B}(z)} \mod z^{m-n+1}. \end{eqnarray*} The inverse $\displaystyle\frac{1}{ \hat{B}(z)}$ exists as a power series because the constant coefficient of $\hat{B}$, i.e.\ the leading coefficient matrix of $B$, is invertible. Thus $\hat{Q}(z)$ is obtained by computing the first $m-n+1$ coefficients of $\displaystyle\frac{ \hat{E}(z)}{ \hat{B}(z)}$. To find $W(z)=\displaystyle\frac{1}{\hat{B}(z)}$ we use Newton's iteration. Let $f(W)=\hat{B}-W^{-1}$. Solving $f(W_l)+f'(W_l).(W_{l+1}-W_l)=0$, with $f'(W_l).(W_{l+1}-W_l)=W_l^{-1}(W_{l+1}-W_l)W_l^{-1}$, gives $$W_{l+1}=2W_l-W_l\hat{B}W_l,$$ starting from $W_0=\hat{B}(0)^{-1}$, which exists.
\begin{eqnarray*} W-W_{l+1}&=&W-2W_l+W_l\hat{B}W_l\\ &=&W(\mathbb{I}_2-\hat{B}W_l)^2\\ &=&(W-W_l)\hat{B}(W-W_l). \end{eqnarray*} Thus $W_l(z)=W(z) \mod z^{2^{l}}$, for $l=0,\dots,\lceil\log_2(m-n+1) \rceil$. \begin{proposition} We need $\mathcal{O}(n\log(n)\log(m-n)+m\log m)$ arithmetic operations to solve Problem~\ref{pb:division}. \end{proposition} \begin{proof} We must perform $\lceil\log_2(m-n+1) \rceil$ Newton iterations to obtain the first $m-n+1$ coefficients of $\displaystyle\frac{1}{\hat{B}}=W(z)$, and each iteration requires $\mathcal{O}(n\log n)$ arithmetic operations (multiplications of polynomials of degree $n$). We then need $\mathcal{O}(m\log m)$ arithmetic operations to compute the product $\hat{E}\cdot\displaystyle\frac{1}{\hat{B}}$. \end{proof} \subsection{Construction of the generators} The canonical basis of $\mathbb{K}[x]^3$ is denoted by $\sigma_1,\sigma_2,\sigma_3$. Let $\rho_1,\,\rho_2$ be the generators of $\mathcal{L}(\tilde{T}(x),x^n,x^{2n}-1)$ of degree $n$ given by \begin{equation}\label{base3} \begin{array}{l}\rho_1=x^n\sigma_1-(u,v,w)=(u_1,v_1,w_1)\\ \rho_2=x^n\sigma_2-(u',v',w')=(u_2,v_2,w_2)\end{array} \end{equation} where $(u,v,w)$ and $(u',v',w')$ are the vectors given in \eqref{base1} and \eqref{base2}. We now describe how to compute $(u_1,v_1,w_1)$ and $(u_2,v_2,w_2)$. We will give two methods; the second one is the method given in \cite{MR1871324}, while the first one uses the extended Euclidean algorithm. We first recall the algebraic and computational properties of the well-known extended Euclidean algorithm (see \cite{MR2001757}). Given two polynomials $p(x), p'(x)$ of degrees $m$ and $m'$ respectively, let $$\begin{array}{ll} r_0=p,\qquad&r_1=p',\qquad\\s_0=1,&s_1=0,\\t_0=0,&t_1=1, \end{array}$$ and define \begin{eqnarray*} r_{i+1}&=&r_{i-1}-q_ir_i,\\ s_{i+1}&=&s_{i-1}-q_is_i,\\ t_{i+1}&=&t_{i-1}-q_it_i, \end{eqnarray*} where $q_i$ results when the division algorithm is applied to $r_{i-1}$ and $r_i$, i.e.
$r_{i-1}=q_ir_i+r_{i+1}$, with $\deg r_{i+1}<\deg r_{i}$, for $i=1,\ldots,l$, where $l$ is such that $r_l=0$; therefore $r_{l-1}=\gcd(p(x),p'(x))$. \begin{proposition}\label{eea} The following relations hold: $$s_ip+t_ip'=r_i\quad \textrm{ and }\quad(s_i,t_i)=1\quad\textrm{ for }i=1,\ldots,l$$ and $$\left\{\begin{array}{l} \deg r_{i+1}<\deg r_i, \quad i=1,\ldots,l-1\\ \deg s_{i+1}>\deg s_i\quad\textrm{ and }\quad \deg t_{i+1}>\deg t_i,\\ \deg s_{i+1}=\deg(q_i\,s_i)=\deg p'-\deg r_i,\\ \deg t_{i+1}=\deg(q_i\,t_i)=\deg p-\deg r_i. \end{array}\right.$$ \end{proposition} \begin{proposition} By applying the extended Euclidean algorithm to $p(x)=x^{n-1}T(x)$ and $p'(x)=x^{2n-1}$ down to remainder degrees $n-1$ and $n-2$, we obtain $\rho_1$ and $\rho_2$ respectively. \end{proposition} \begin{proof} We saw that $Tu=g$ if and only if there exist $A(x)$ and $B(x)$ such that $$\bar{T}(x)u(x)+x^{2n-1}B(x)=x^{n-1}g(x)+A(x)$$ with $\bar{T}(x)=x^{n-1}T(x)$ a polynomial of degree $\leq2n-2$. In \eqref{base1} and \eqref{base2} we saw that for $g(x)=1$ $(g=e_1)$ and $g(x)=x^nT(x)$ $(g=(0,t_{-n+1},\ldots,t_{-1})^T)$ we obtain a basis of $\mathcal{L}(\tilde{T}(x),x^n,x^{2n}-1)$. $Tu_1=e_1$ if and only if there exist $A_1(x)$, $B_1(x)$ such that \begin{equation}\label{eea1}\bar{T}(x)u_1(x)+x^{2n-1}B_1(x)=x^{n-1}+A_1(x)\end{equation} and $Tu_2=(0,t_{-n+1},\ldots,t_{-1})^T$ if and only if there exist $A_2(x)$, $B_2(x)$ such that \begin{equation}\label{eea2}\bar{T}(x)(u_2(x)+x^{n})+x^{2n-1}B_2(x)=A_2(x)\end{equation} with $\deg A_1(x)\leq n-2$ and $\deg A_2(x)\leq n-2$.
Thus, by applying the extended Euclidean algorithm to $p(x)=x^{n-1}T(x)$ and $p'(x)=x^{2n-1}$ until we have $\deg r_l(x)=n-1$ and $\deg r_{l+1}(x)\leq n-2$, we obtain $$u_1(x)=\frac{1}{c_1}s_l(x),\quad B_1(x)=\frac{1}{c_1}t_l(x),\quad x^{n-1}+A_1(x)=\frac{1}{c_1}r_l(x)$$ and $$x^n+u_2(x)=\frac{1}{c_2}s_{l+1}(x),\quad B_2(x)=\frac{1}{c_2}t_{l+1}(x),\quad A_2(x)=\frac{1}{c_2}r_{l+1}(x),$$ where $c_1$ and $c_2$ are the leading coefficients of $r_l(x)$ and $s_{l+1}(x)$ respectively. Indeed, the equation \eqref{eea1} is equivalent to $$ \begin{array}{r} \overbrace{\phantom{mmmmmmmm}}^{n}\qquad\overbrace{\phantom{mmmmmmm}}^{n-1}\qquad\\ \begin{array}{r} \left.\begin{array}{l} {}_{\displaystyle{n-1}}\\\phantom{r} \end{array}\right\{\\\phantom{r}\\ \left.\begin{array}{l} \phantom{r}\\n\\\phantom{r} \end{array}\right\{\\\phantom{r}\\ \left.\begin{array}{l} {}_{\displaystyle{n-1}}\\\phantom{r} \end{array}\right\{ \end{array} \left( \begin{array}{ccc|ccc} t_{-n+1}&&&&&\\ \vdots&\ddots&&&&\\ \hline t_0&\dots&t_{-n+1}&&&\\ \vdots&\ddots&\vdots&&&\\ t_{n-1}&\dots&t_0&&&\\ \hline &\ddots&\vdots&\;\;1\;\;&&\\ &&&&\;\ddots\;&\\ &&t_{n-1}&&&\;\;1\;\; \end{array} \right) \end{array} \begin{pmatrix} \phantom{r}\\u_1\\\phantom{r}\\B_1\\\phantom{r} \end{pmatrix} =\begin{pmatrix}\phantom{r}\\A_1\\\phantom{r}\\\hline 1\\0\\\vdots\\0\end{pmatrix} $$ Since $T$ is invertible, the $(2n-1)\times(2n-1)$ block at the bottom is invertible, so $u_1$ and $B_1$ are unique; therefore $u_1$, $B_1$ and $A_1$ are unique. By Proposition~\ref{eea}, $\deg r_l=n-1$ (with $r_l=c_1(x^{n-1}+A_1(x))$), so $\deg s_{l+1}=(2n-1)-(n-1)=n$ and $\deg t_{l+1}=(2n-2)-(n-1)=n-1$; thus, by the same proposition, $\deg s_l\leq n-1$ and $\deg t_l\leq n-2$. Therefore $\frac{1}{c_1} s_l=u_1$ and $\frac{1}{c_1} t_l=B_1$.
Finally, $Tu=e_1$ if and only if there exist $v(x)$, $w(x)$ such that \begin{equation}\tilde{T}(x)u(x)+x^nv(x)+(x^{2n}-1)w(x)=1.\end{equation} Since $\tilde{T}(x)=T^{+}(x)+x^{2n}T^{-}(x)=T(x)+(x^{2n}-1)T^{-}(x)$, we have \begin{equation}\label{syz} T(x)u(x)+x^nv(x)+(x^{2n}-1)(w(x)+T^-(x)u(x))=1. \end{equation} On the other hand, $T(x)u(x)-x^{-n+1}A_1(x)+x^nB_1(x)=1$ and $x^{-n+1}A_1(x)=x^n(xA_1(x))-x^{-n}(x^{2n}-1)xA_1(x)$, thus \begin{equation}\label{syz2}T(x)u(x)+x^{n}(B_1(x)-xA_1(x))+(x^{2n}-1)x^{-n+1}A_1(x)=1.\end{equation} By comparing \eqref{syz} and \eqref{syz2}, and as $1=x^nx^n-(x^{2n}-1)$, we obtain the proposition, with $w(x)=x^{-n+1}A_1(x)-T^{-}(x)u(x)+1$, which is the part of nonnegative degree of $-T^{-}(x)u(x)+1$. \end{proof} \begin{remark} A superfast Euclidean gcd algorithm, which uses no more than $\mathcal{O}(n\log^2 n)$ arithmetic operations, is given in \cite{MR2001757}, Chapter 11. \end{remark} The second method to compute $(u_1,v_1,w_1)$ and $(u_2,v_2,w_2)$ is given in \cite{MR1871324}. We are interested in computing the coefficients of $\sigma_1,\,\sigma_2$; the coefficients of $\sigma_3$ correspond to elements in the ideal $(x^{2n}-1)$ and thus can be obtained by reduction of $(\tilde{T}(x)\;x^n)\cdot B(x)$ by $x^{2n}-1$, with $B(x)=\begin{pmatrix}x^n-u_0&-v_0\\-u_1&x^n-v_1\end{pmatrix}=\begin{pmatrix}u(x)&v(x)\\u'(x)&v'(x)\end{pmatrix}$. A superfast algorithm to compute $B(x)$ is given in \cite{MR1871324}. Let us describe how to compute it. By evaluation of \eqref{base3} at the roots $\omega_j\in \Unit{2n}$, we deduce that $(u(x)\,v(x))^T$ and $(u'(x)\,v'(x))^T$ are solutions of the following rational interpolation problem: $$\left\{\begin{array}{l}\tilde{T}(\omega_j)u(\omega_j)+\omega_j^nv(\omega_j)=0\\ \tilde{T}(\omega_j)u'(\omega_j)+\omega_j^nv'(\omega_j)=0\end{array}\right.
\textrm{ with } $$ $$\left\{\begin{array}{l}u_n=1,\,v_n=0\\u'_n=0,\,v'_n=1.\end{array}\right.$$ \begin{definition} The $\tau$-degree of a vector polynomial $w(x)=(w_1(x)\,w_2(x))^T$ is defined as $$\tau\textrm{-}\deg w(x):=\max\{\deg w_1(x),\,\deg w_2(x)-\tau\}.$$ \end{definition} $B(x)$ is an $n$-reduced basis of the module of all vector polynomials $r(x)\in\mathbb{K}[x]^2$ that satisfy the interpolation conditions $$f_j^Tr(\omega_j)=0,\;\;j=0,\ldots,2n-1,$$ with $f_j=\begin{pmatrix}\tilde{T}(\omega_j)\\\omega^n_j\end{pmatrix}$. $B(x)$ is called a $\tau$-reduced basis (with $\tau=n$) corresponding to the interpolation data $(\omega_j,f_j),\,j=0,\ldots,2n-1$. \begin{definition} A set of vector polynomials in $\mathbb{K}[x]^{2}$ is called $\tau$-reduced if their $\tau$-highest-degree coefficients are linearly independent. \end{definition} \begin{theorem} Let $\tau=n$. Suppose $J$ is a positive integer. Let $\sigma_1,\ldots,\sigma_J\in\mathbb{K}$ and $\phi_1,\ldots,\phi_J\in\mathbb{K}^2$ which are $\neq(0\,0)^T$. Let $1\leq j\leq J$ and $\tau_J\in\mathbb{Z}$. Suppose that $B_j(x)\in\mathbb{K}[x]^{2\times2}$ is a $\tau_J$-reduced basis matrix with basis vectors having $\tau_J$-degrees $\delta_1$ and $\delta_2$, respectively, corresponding to the interpolation data $\{(\sigma_i,\phi_i); i=1,\ldots,j\}$. Let $\tau_{j\rightarrow J}:=\delta_1-\delta_2$. Let $B_{j\rightarrow J}(x)$ be a $\tau_{j\rightarrow J}$-reduced basis matrix corresponding to the interpolation data $\{(\sigma_i,B_j^T(\sigma_i)\phi_i); i=j+1,\ldots,J\}$. Then $B_J(x):=B_j(x)B_{j\rightarrow J}(x)$ is a $\tau_J$-reduced basis matrix corresponding to the interpolation data $\{(\sigma_i,\phi_i); i=1,\ldots,J\}$. \end{theorem} \begin{proof} For the proof, see \cite{MR1871324}.
\end{proof} When we apply this theorem with the $\omega_j\in\Unit{2n}$ as interpolation points, we obtain a superfast algorithm ($\mathcal{O}(n\log^2n)$) which computes $B(x)$ \cite{MR1871324}. \section{Bivariate case} Let $m,n\in\mathbb{N}$. In this section we denote by $E=\{(i,j);\;0\leq i\leq m-1,\,0\leq j\leq n-1\}$, and $R=\mathbb{K}[x,y]$. We denote by $\mathbb{K}[x,y]_{\substack{m\\n}}$ the vector space of bivariate polynomials of degree $\leq m$ in $x$ and $\leq n$ in $y$. \begin{notation} For a block matrix $M$ with $n\times n$ blocks, each block of size $m\times m$, we will use the following notation: \begin{equation} M=\left(M_{(i_1,i_2),(j_1,j_2)}\right)_{\substack{0\leq i_1,j_1\leq m-1\\0\leq i_2,j_2\leq n-1}}=(M_{\alpha\beta})_{\alpha,\beta\in E}, \end{equation} where $(i_2,j_2)$ gives the position of the block and $(i_1,j_1)$ the position within the block. \end{notation} \begin{problem} Given a Toeplitz-block-Toeplitz matrix $T=(t_{\alpha-\beta})_{\alpha\in E,\beta\in E}\in\mathbb{K}^{mn\times mn}$ ($T=(T_{\alpha\beta})_{\alpha,\beta\in E}$ with $T_{\alpha\beta}=t_{\alpha-\beta}$) of size $mn$ and $g=(g_{\alpha})_{\alpha\in E}\in\mathbb{K}^{mn}$, find $u=(u_{\alpha})_{\alpha\in E}$ such that \begin{equation}\label{pb:toblto} T\, u=g. \end{equation} \end{problem} \begin{definition} We define the following polynomials: \begin{itemize} \item $T(x,y):=\displaystyle\sum_{(i,j)\in E-E}t_{i,j}x^iy^j$, \item $\tilde{T}(x,y):=\displaystyle\sum_{i=0}^{2m-1}\sum_{j=0}^{2n-1}\tilde{t}_{i,j}x^iy^j$ with\\ $\tilde{t}_{i,j}:=\left\{\begin{array}{ll}t_{i,j}&\textrm{ if }i<m,\ j<n\\ t_{i-2m,j}&\textrm{ if }i\geq m,\ j<n\\ t_{i,j-2n}&\textrm{ if }i<m,\ j\geq n\\ t_{i-2m,j-2n}&\textrm{ if }i\geq m,\ j\geq n \end{array}\right.$, \item $u(x,y):=\displaystyle\sum_{(i,j)\in E}u_{i,j}\, x^iy^j$, $g(x,y):=\displaystyle\sum_{(i,j)\in E}g_{i,j} x^{i} y^{j}$.
\end{itemize} \end{definition} \subsection{Moving hyperplanes} For any non-zero vector of polynomials $\mathbf{a}=(a_{1},\ldots,a_{n})\in \mathbb{K}[x,y]^{n}$, we denote by $\mathcal{L}(\mathbf{a})$ the set of vectors $(h_{1},\ldots,h_{n})\in\mathbb{K}[x,y]^n$ such that \begin{equation} \label{eq:mvplanes} \sum_{i=1}^{n} a_{i} \, h_{i} = 0. \end{equation} It is a $\mathbb{K}[x,y]$-submodule of $\mathbb{K}[x,y]^n$. \begin{proposition}\label{division2} The vector $u$ is a solution of \eqref{pb:toblto} if and only if there exist $h_{2}, \ldots, h_{9} \in \mathbb{K}[x,y]_{\substack{m-1\\n-1}}$ such that $(u(x,y),h_{2}(x,y),\ldots,h_{9}(x,y))$ belongs to\\ $\mathcal{L}(\tilde{T}(x,y),x^{m}, x^{2\, m} -1, y^{n}, x^{m}\,y^{n},(x^{2\,m}-1)\, y^{n}, y^{2\,n}-1, x^{m} (y^{2\,n}-1), (x^{2\,m}-1)\, (y^{2\,n}-1))$. \end{proposition} \begin{proof} Let $L=\{x^{\alpha_{1}}y^{\alpha_{2}},\ 0\le \alpha_{1} \le m-1,\ 0 \le \alpha_{2} \le n-1\}$, and let $\Pi_{E}$ be the projection of $R$ on the vector space generated by $L$. By \cite{MR1762401}, we have \begin{equation}\label{dv} T\,u=g \Leftrightarrow \Pi_{E}(T(x,y)\,u(x,y))=g(x,y), \end{equation} which implies that \begin{eqnarray} \label{eq:sol1} T(x,y)u(x,y) & = & g(x,y)+x^{m}y^{n}A_1(x,y)+x^{m}y^{-n}A_2(x,y)+x^{-m}y^{n}A_3(x,y)+ x^{-m}y^{-n}A_4(x,y) \nonumber\\ & + & x^{m}A_5(x,y)+x^{-m} A_6(x,y)+y^{n}A_7(x,y)+y^{-n}A_8(x,y), \end{eqnarray} where the $A_i(x,y)$ are polynomials of degree at most $m-1$ in $x$ and $n-1$ in $y$. Since $\omega^{m} = \omega^{-m}$, $\upsilon^{n}=\upsilon^{-n}$, and $\tilde{T}(\omega,\upsilon)=T(\omega,\upsilon)$ for $\omega\in \Unit{2\,m}$, $\upsilon\in \Unit{2\,n}$, we deduce by evaluation at the roots $\omega\in \Unit{2\,m}$, $\upsilon\in \Unit{2\,n}$ that \begin{equation*} R(x,y):= \tilde{T}(x,y)u(x,y)+x^{m} h_{2}(x,y)+ y^{n}h_{4}(x,y)+ x^{m}y^{n} h_{5}(x,y) - g(x,y) \in (x^{2\,m}-1,y^{2\,n}-1), \end{equation*} with $h_{2}=-(A_{5}+A_{6})$, $h_{4}=-(A_{7}+A_{8})$, $h_{5}=-(A_{1}(x,y)+A_{2}(x,y)+A_{3}(x,y)+A_{4}(x,y))$.
By reduction by the polynomials $x^{2\,m}-1$, $y^{2\, n}-1$, and as $R(x,y)$ is of degree $\le 3m -1$ in $x$ and $\le 3n-1$ in $y$, there exist $h_{3}, h_{6}, h_{7}, h_{8}, h_{9} \in\mathbb{K}[x,y]_{\substack{m-1\\n-1}}$ such that \begin{eqnarray}\label{reduc1} &&\tilde{T}(x,y)u(x,y)+x^m\, h_2(x,y)+(x^{2m}-1)h_{3}(x,y)+y^n h_{4}(x,y)+ x^my^nh_{5}(x,y)+\\\nonumber &&(x^{2m}-1)y^nh_{6}(x,y)+(y^{2n}-1)h_{7}(x,y)+x^m(y^{2n}-1)h_{8}(x,y)+(x^{2m}-1)(y^{2n}-1)h_{9}(x,y)=g(x,y). \end{eqnarray} Conversely, a solution of \eqref{reduc1} can be transformed into a solution of \eqref{eq:sol1}, which ends the proof of the proposition. \end{proof} In the following, we denote by $\mathbf{T}$ the vector $\mathbf{T}=(\tilde{T}(x,y),x^{m}, x^{2\, m} -1, y^{n}, x^{m}\,y^{n},(x^{2\,m}-1)\, y^{n},y^{2\,n}-1,x^{m} (y^{2\,n}-1),(x^{2\,m}-1)\, (y^{2\,n}-1))$. \begin{proposition}\label{syzygy:2d:deg} There are no non-zero elements of $\mathbb{K}[x,y]_{\substack{m-1\\n-1}}^9$ in $\mathcal{L}(\mathbf{T})$. \end{proposition} \begin{proof} We consider the map \begin{eqnarray}\label{syzfunction2} \mathbb{K}[x,y]_{\substack{m-1\\n-1}}^9&\rightarrow & \mathbb{K}[x,y]_{\substack{3m-1\\3n-1}}\\ p(x,y)=(p_1(x,y),\ldots,p_9(x,y)) & \mapsto & \mathbf{T}\cdot p, \nonumber \end{eqnarray} whose $9mn \times 9mn$ matrix is of the form \begin{equation}S:=\left(\begin{array}{c|c|c|c} \begin{matrix} \\ T_0\\ \end{matrix} & \begin{matrix} E_{21}&-E_{11}+E_{31}\\ \vdots&\vdots\\ E_{2n}&-E_{1n}+E_{3n} \end{matrix} & & \begin{matrix} -E_{11}&-E_{21}&E_{11}-E_{31}\\ \vdots&\vdots&\vdots\\ -E_{1n}&-E_{2n}&E_{1n}-E_{3n} \end{matrix} \\\hline \begin{matrix} \\ T_1\\ \end{matrix} & & \begin{matrix} E_{11}&E_{21}&-E_{11}+E_{31}\\ \vdots&\vdots&\vdots\\ E_{1n}&E_{2n}&-E_{1n}+E_{3n} \end{matrix} & \\\hline \begin{matrix} \\ T_2\\ \end{matrix} &&& \begin{matrix} E_{11}&E_{21}&-E_{11}+E_{31}\\ \vdots&\vdots&\vdots\\ E_{1n}&E_{2n}&-E_{1n}+E_{3n} \end{matrix} \end{array}\right) \end{equation} where $E_{ij}$ is the $3m\times mn$ matrix
$e_{ij}\otimes I_m$, and $e_{ij}$ is the $3\times n$ matrix whose entries are all zero except the $(i,j)$th entry, which equals $1$. The matrix $\begin{pmatrix}T_0\\T_1\\T_2 \end{pmatrix}$ is the following $9mn\times mn$ matrix $$ \begin{pmatrix} t_0&0&\ldots&0\\t_1&t_0&\ldots&0\\ \vdots&\ddots&\ddots&\vdots\\ t_{n-1}&\ldots&t_1&t_0\\ \hline 0&t_{n-1}&\ldots&t_1\\t_{-n+1}&0&\ddots&\vdots\\ \vdots&\ddots&\ddots&t_{-n+1}\\ t_{-1}&\ldots&t_{-n+1}&0\\ \hline 0&t_{-1}&\ldots&t_{-n+1}\\\vdots&\ddots&\ddots&\vdots\\ \vdots&&\ddots&t_{-1}\\ 0&\ldots&\ldots&0\\ \end{pmatrix} \textrm{ and } t_i=\begin{pmatrix} t_{i,0}&0&\ldots&0\\t_{i,1}&t_{i,0}&\ldots&0\\ \vdots&\ddots&\ddots&\vdots\\ t_{i,m-1}&\ldots&t_{i,1}&t_{i,0}\\ \hline 0&t_{i,m-1}&\ldots&t_{i,1}\\t_{i,-m+1}&0&\ddots&\vdots\\ \vdots&\ddots&\ddots&t_{i,-m+1}\\ t_{i,-1}&\ldots&t_{i,-m+1}&0\\ \hline 0&t_{i,-1}&\ldots&t_{i,-m+1}\\\vdots&\ddots&\ddots&\vdots\\ \vdots&&\ddots&t_{i,-1}\\ 0&\ldots&\ldots&0\\ \end{pmatrix} $$ For the same reasons as in the proof of Proposition \ref{prop:ML}, the matrix $S$ is invertible. \end{proof} \begin{theorem}\label{free:mod} For any non-zero vector of polynomials $\mathbf{a}=(a_{i})_{i=1,\ldots, n}\in \mathbb{K}[x,y]^{n}$, the $\mathbb{K}[x,y]$-module $\mathcal{L}(a_{1},\ldots, a_{n})$ is free of rank $n-1$. \end{theorem} \begin{proof} Consider first the case where the $a_{i}= x^{\alpha_{i}}y^{\beta_{i}}$ are monomials, sorted in lexicographic order with $x<y$, $a_{1}$ being the largest and $a_{n}$ the smallest. Then the module of syzygies of $\mathbf{a}$ is generated by the $S$-polynomials: $$ S(a_{i},a_{j}) = \mathrm{lcm}(a_{i},a_{j})({\sigma_{i}\over a_{i}}- {\sigma_{j}\over a_{j}}), $$ where $(\sigma_{i})_{i=1,\ldots,n}$ is the canonical basis of $\mathbb{K}[x,y]^{n}$ \cite{MR1322960}.
We easily check that $S(a_{i},a_{k}) = {\mathrm{lcm}(a_{i},a_{k})\over\mathrm{lcm}(a_{i},a_{j})} S(a_{i},a_{j}) - {\mathrm{lcm}(a_{i},a_{k})\over\mathrm{lcm}(a_{j},a_{k})} S(a_{j},a_{k})$ for pairwise distinct $i, j, k$ such that $\mathrm{lcm} (a_{i},a_{j})$ divides $\mathrm{lcm}(a_{i},a_{k})$. Therefore $\mathcal{L}(\mathbf{a})$ is generated by the $S(a_{i},a_{j})$ which are minimal for the division, that is, by the $S(a_{i},a_{i+1})$ for $i=1,\ldots,n-1$, since the monomials $a_{i}$ are sorted lexicographically. As each syzygy $S(a_{i},a_{i+1})$ involves the basis elements $\sigma_{i},\sigma_{i+1}$, these syzygies are linearly independent over $\mathbb{K}[x,y]$, which shows that $\mathcal{L}(\mathbf{a})$ is a free module of rank $n-1$ and that we have the following resolution: $$ 0 \rightarrow \mathbb{K}[x,y]^{n-1} \rightarrow \mathbb{K}[x,y]^{n} \rightarrow (\mathbf{a}) \rightarrow 0. $$ Suppose now that the $a_{i}$ are general polynomials in $\mathbb{K}[x,y]$, and let us compute a Gr\"obner basis of the ideal $(\mathbf{a})$, for a monomial ordering refining the degree \cite{MR1322960}. We denote by $m_{1},\ldots, m_{s}$ the leading terms of the polynomials in this Gr\"obner basis, sorted by lexicographic order. The previous construction yields a resolution of $(m_{1}$, $\ldots$, $m_{s})$: $$ 0 \rightarrow \mathbb{K}[x,y]^{s-1} \rightarrow \mathbb{K}[x,y]^{s} \rightarrow (m_{i})_{i=1,\ldots,s} \rightarrow 0. $$ Using \cite{MR839576} (or \cite{MR1322960}), this resolution can be deformed into a resolution of $(\mathbf{a})$, of the form $$ 0 \rightarrow \mathbb{K}[x,y]^{p} \rightarrow \mathbb{K}[x,y]^{n} \rightarrow (\mathbf{a}) \rightarrow 0, $$ which shows that $\mathcal{L}(\mathbf{a})$ is also a free module. Its rank $p$ is necessarily equal to $n-1$, since the alternating sum of the dimensions of the vector spaces of elements of degree $\le \nu$ in each module of this resolution must be $0$, for every $\nu \in \mathbb{N}$.
\end{proof} \subsection{Generators and reduction} In this section, we describe an explicit set of generators of $\mathcal{L}(\mathbf{T})$. The canonical basis of $\mathbb{K}[x,y]^9$ is denoted by $\sigma_1,\ldots, \sigma_9$. First, as $\tilde{T}(x,y)$ is of degree $\le 2\,m-1$ in $x$ and $\le 2\, n-1$ in $y$ and the map \eqref{syzfunction2} is surjective, there exist $u_1, u_2 \in \mathbb{K}[x,y]_{\substack{m-1\\n-1}}^9$ such that $\mathbf{T}\cdot u_1 = \tilde{T}(x,y)\, x^m$, $\mathbf{T}\cdot u_2 = \tilde{T}(x,y)\, y^n$. Thus, $$ \begin{array}{l} \rho_{1} = x^m \sigma_1 - u_1 \in \mathcal{L}(\mathbf{T}),\\ \rho_{2} = y^n \sigma_1 - u_2 \in \mathcal{L}(\mathbf{T}). \end{array} $$ We also have $u_3 \in \mathbb{K}[x,y]_{\substack{m-1\\n-1}}^9$ such that $\mathbf{T}\cdot u_3 = 1 = x^m \, x^m - (x^{2\,m}-1) = y^n\, y^n - (y^{2\,n} -1)$. We deduce that $$ \begin{array}{l} \rho_{3} = x^m \sigma_2 - \sigma_3 -u_3 \in \mathcal{L}(\mathbf{T}),\\ \rho_{4} = y^n \sigma_4 - \sigma_7 -u_3 \in \mathcal{L}(\mathbf{T}). \end{array} $$ Finally, we have the obvious relations: $$ \begin{array}{l} \rho_{5} = y^n \sigma_2 - \sigma_5 \in \mathcal{L}(\mathbf{T}),\\ \rho_{6} = x^m \sigma_4 - \sigma_5 \in \mathcal{L}(\mathbf{T}), \\ \rho_{7} = x^m \sigma_5 - \sigma_6 - \sigma_4 \in \mathcal{L}(\mathbf{T}), \\ \rho_{8} = y^n \sigma_5 - \sigma_8 - \sigma_2 \in \mathcal{L}(\mathbf{T}). \\ \end{array} $$ \begin{proposition}\label{basis} The relations $\rho_{1},\ldots,\rho_{8}$ form a basis of $\mathcal{L}(\mathbf{T})$. \end{proposition} \begin{proof} Let $\mathbf{h}=(h_1,\ldots,h_9)\in \mathcal{L}(\mathbf{T})$. By reduction by the previous elements of $\mathcal{L}(\mathbf{T})$, we can assume that the coefficients $h_{1}, h_{2}, h_{4}, h_{5}$ are in $\mathbb{K}[x,y]_{\substack{m-1\\n-1}}$. Thus, $\tilde{T}(x,y) h_{1} + x^{m} h_{2} + y^{n} h_{4} + x^{m} y^{n} h_{5} \in (x^{2\,m}-1,y^{2\,n}-1)$.
As this polynomial is of degree $\le 3\,m-1$ in $x$ and $\le 3\, n-1$ in $y$, by reduction by the polynomials $x^{2\,m}-1$ and $y^{2\,n}-1$, we can further assume that the coefficients $h_{3},h_{6},\ldots,h_{9}$ are in $\mathbb{K}[x,y]_{\substack{m-1\\n-1}}$. By Proposition \ref{syzygy:2d:deg}, there is no non-zero syzygy in $\mathbb{K}[x,y]_{\substack{m-1\\n-1}}^{9}$. Thus we have $\mathbf{h}=0$, and every element of $\mathcal{L}(\mathbf{T})$ can be reduced to $0$ by the previous relations. In other words, $\rho_{1},\ldots, \rho_{8}$ is a generating set of the $\mathbb{K}[x,y]$-module $\mathcal{L}(\mathbf{T})$. By Theorem \ref{free:mod}, the relations $\rho_{i}$ cannot be dependent over $\mathbb{K}[x,y]$ and thus form a basis of $\mathcal{L}(\mathbf{T})$. \end{proof} \subsection{Interpolation} Our aim is now to compute efficiently a system of generators of $\mathcal{L}(\mathbf{T})$. More precisely, we are interested in computing the coefficients of $\sigma_{1},\sigma_{2},\sigma_{4}, \sigma_{5}$ in $\rho_{1},\rho_{2},\rho_{3}$. Let us call $B(x,y)$ the corresponding coefficient matrix, which is of the form: \begin{equation}\label{form:B} \left( \begin{array}{ccc} x^m & y^n & 0 \\ 0 & 0 & x^{m} \\ 0 & 0 & 0 \\ 0 & 0 & 0 \\ \end{array} \right) + \mathbb{K}[x,y]_{\substack{m-1\\n-1}}^{4,3} \end{equation} Notice that the other coefficients of the relations $\rho_{1},\rho_{2},\rho_{3}$ correspond to elements in the ideal $(x^{2\,m}-1,y^{2\,n}-1)$ and thus can be obtained easily by reduction of the entries of $(\tilde{T}(x,y),x^m,y^n,x^m\,y^n)\cdot B(x,y)$ by the polynomials $x^{2\,m}-1,y^{2\,n}-1$. Notice also that the relation $\rho_{4}$ can easily be deduced from $\rho_{3}$, since we have $\rho_{3} -x^{m}\sigma_{2} +\sigma_{3}+y^n\, \sigma_{4} -\sigma_{7} = \rho_{4}$. Since the other relations $\rho_{i}$ (for $i>4$) are explicit and independent of $\tilde{T}(x,y)$, we can easily deduce a basis of $\mathcal{L}(\mathbf{T})$ from the matrix $B(x,y)$.
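The reductions modulo $x^{2\,m}-1$ and $y^{2\,n}-1$ used above act on monomials simply by taking exponents modulo $2m$ and $2n$. A minimal computational sketch (the sparse dictionary representation and the name \texttt{reduce\_mod} are our own illustration, not part of the text):

```python
def reduce_mod(poly, m, n):
    """Reduce a bivariate polynomial modulo the ideal (x^(2m)-1, y^(2n)-1).

    `poly` maps exponent pairs (a, b) to coefficients.  Since x^(2m) = 1
    and y^(2n) = 1 modulo the ideal, each exponent is taken modulo 2m
    (resp. 2n) and like terms are collected; zero terms are dropped.
    """
    out = {}
    for (a, b), c in poly.items():
        key = (a % (2 * m), b % (2 * n))
        out[key] = out.get(key, 0) + c
    return {k: c for k, c in out.items() if c != 0}
```

For instance, with $m=3$ the polynomial $x^{6}-1$ reduces to $0$, reflecting that it lies in the ideal.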
By Proposition \ref{division2}, the solution of $T\,u=g$ corresponds to the unique vector $\mathbf{h}$ with all coordinates in $\mathbb{K}[x,y]_{\substack{m-1\\n-1}}$ such that $\mathbf{T}\cdot \mathbf{h} = g$ (uniqueness follows from Proposition \ref{syzygy:2d:deg}, since the difference of two such vectors is a syzygy of low degree). By computing the basis given in Proposition \ref{basis} and reducing, we can obtain this vector, whose first coordinate is the solution $u$ of $Tu=g$. We can give a fast algorithm for these two steps, but a superfast algorithm is not available. \section{Conclusions} We have exhibited in this paper a connection between the solution of a Toeplitz system and the syzygies of univariate polynomials, and generalized it to a connection between the solution of a Toeplitz-block-Toeplitz system and the syzygies of bivariate polynomials. In the univariate case, this connection can be exploited to obtain a superfast resolution algorithm. Its generalization to the bivariate case is not yet clear and remains an important challenge. \end{document}
\begin{document} \title{Transience, Recurrence and the Speed of a Random Walk in a Site-Based Feedback Environment} \author{Ross G. Pinsky\thanks{Department of Mathematics, Technion--Israel Institute of Technology. E-mail - \texttt{[email protected]}. } ~~and~ Nicholas F. Travers\thanks{Department of Mathematics, Technion--Israel Institute of Technology. E-mail - \texttt{[email protected]}. }} \date{} \maketitle \begin{abstract} We study a random walk on $\mathbb{Z}$ which evolves in a dynamic environment determined by its own trajectory. Sites flip back and forth between two modes, $p$ and $q$. $R$ consecutive right jumps from a site in the $q$-mode are required to switch it to the $p$-mode, and $L$ consecutive left jumps from a site in the $p$-mode are required to switch it to the $q$-mode. From a site in the $p$-mode the walk jumps right with probability $p$ and left with probability $1-p$, while from a site in the $q$-mode these probabilities are $q$ and $1-q$. We prove a sharp cutoff for right/left transience of the random walk in terms of an explicit function of the parameters $\alpha = \alpha(p,q,R,L)$. For $\alpha > 1/2$ the walk is transient to $+\infty$ for any initial environment, whereas for $\alpha < 1/2$ the walk is transient to $-\infty$ for any initial environment. In the critical case, $\alpha = 1/2$, the situation is more complicated and the behavior of the walk depends on the initial environment. Nevertheless, we are able to give a characterization of transience/recurrence in many instances, including when either $R=1$ or $L=1$ and when $R=L=2$. In the noncritical case, we also show that the walk has positive speed, and in some situations are able to give an explicit formula for this speed. \end{abstract} \section{Introduction and Statement of Results} \label{sec:Intro} In this paper we introduce a process we call a site-based feedback random walk on $\mathbb{Z}$.
The process $(X_n)_{n \geq 0}$ is a nearest neighbor random walk governed by four parameters: $p,q\in(0,1)$ and $R,L\in\mathbb{N}$. An informal description is as follows. Initially each site $x \in \mathbb{Z}$ is set to either the $p$-mode or the $q$-mode. From a site in the $p$-mode the walk jumps right with probability $p$ and left with probability $1-p$, whereas from a site in the $q$-mode these probabilities are $q$ and $1-q$, respectively. A site $x$ switches from the $q$-mode to the $p$-mode after the walk jumps right from $x$ on $R$ consecutive visits to $x$, and a site $x$ switches from the $p$-mode to the $q$-mode after the walk jumps left from $x$ on $L$ consecutive visits to $x$. In light of this description, we say the random walk $(X_n)$ has \emph{positive feedback} if $q < p$ and \emph{negative feedback} if $q > p$. Of course, if $q = p$ the situation is trivial; we just have a simple random walk of bias $p$. We now give the formal description and set some notation. \begin{itemize} \item $\Lambda = \{(p,0),...,(p,L-1),(q,0),...,(q,R-1)\}$ is the set of \emph{single site configurations}. A typical configuration is denoted by $\lambda = (r,i)$, where $r \in \{p,q\}$ is the \emph{mode} and $i$ is the number of \emph{charges} in favor of the alternative mode. \item $\Lambda_p = \{(p,0),...,(p,L-1)\}$ is the set of $p$-configurations, and \\ $\Lambda_q = \{(q,0),...,(q,R-1)\}$ is the set of $q$-configurations. \item $\omega = \{\omega(x)\}_{x \in \mathbb{Z}} \in \Lambda^{\mathbb{Z}}$ is the \emph{initial environment}. $\omega_n$ is the (random) environment at time $n \geq 0$, $\omega_0 = \omega$. \item At each step the walk $(X_n)$ jumps right or left according to the following rules: \begin{align*} \mbox{ If } \omega_n(X_n) \in \Lambda_p, ~\mbox{ then } \left\{ \begin{array}{l} \mathbb{P}(X_{n+1} = X_n + 1) = p, \\ \mathbb{P}(X_{n+1} = X_n - 1) = 1- p.
\end{array} \right. \end{align*} \begin{align*} \mbox{ If } \omega_n(X_n) \in \Lambda_q, ~\mbox{ then } \left\{ \begin{array}{l} \mathbb{P}(X_{n+1} = X_n + 1) = q, \\ \mathbb{P}(X_{n+1} = X_n - 1) = 1- q. \end{array} \right. \end{align*} \item \noindent For all $x \not= X_n$, $\omega_{n+1}(x) = \omega_n(x)$. The configuration at the current position of the walk $X_n$ is updated as follows, depending on the direction of the next jump: \begin{align*} & \mbox{ If } \omega_n(X_n) \in \Lambda_p \cup \{(q, R -1)\} \mbox{ and } X_{n+1} = X_n + 1, \mbox{ then } \omega_{n+1}(X_n) = (p,0). \\ & \mbox{ If } \omega_n(X_n) = (q,i), 0 \leq i \leq R-2, \mbox{ and } X_{n+1} = X_n + 1, \mbox{ then } \omega_{n+1}(X_n) = (q,i+1). \\ & \mbox{ If } \omega_n(X_n) \in \Lambda_q \cup \{(p, L -1)\} \mbox{ and } X_{n+1} = X_n - 1, \mbox{ then } \omega_{n+1}(X_n) = (q,0). \\ & \mbox{ If } \omega_n(X_n) = (p,i), 0 \leq i \leq L-2, \mbox{ and } X_{n+1} = X_n - 1, \mbox{ then } \omega_{n+1}(X_n) = (p,i+1). \end{align*} \end{itemize} This site-based feedback random walk is motivated by so-called cookie random walks and shares certain fundamental properties of two outgrowths of the basic cookie random walk. A basic cookie random walk on $\mathbb{Z}$ is defined as follows. Let $M\ge1$ be a positive integer. At each site $x \in \mathbb{Z}$, place a pile of $M$ ``cookies'' with values $\omega(x,k)\in[0,1]$, $k=1,\ldots, M$. For $k\le M$, the $k$-th time the process reaches site $x$, it eats the $k$-th cookie at that site, whose value is $\omega(x,k)$, and this empowers the process to jump to the right with probability $\omega(x,k)$ and to the left with probability $1-\omega(x,k)$.
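The update rules above translate directly into code. A minimal simulation sketch (in Python, for illustration only; the function name, the dictionary environment, and the choice of $(q,0)$ as the default configuration for unvisited sites are our own assumptions, not part of the model's definition):

```python
import random

def step(x, env, p, q, R, L, rng=random):
    """Perform one step of the site-based feedback random walk from site x.

    `env` maps sites to configurations (mode, charges); sites not yet in
    `env` default to ('q', 0) here (an illustrative choice).  The update
    rules mirror the four cases displayed above.
    """
    mode, i = env.get(x, ('q', 0))
    prob_right = p if mode == 'p' else q
    if rng.random() < prob_right:
        # Right jump: reset to (p,0) if already in p-mode or at (q,R-1);
        # otherwise add a charge toward the p-mode.
        env[x] = ('p', 0) if (mode == 'p' or i == R - 1) else ('q', i + 1)
        return x + 1
    # Left jump: reset to (q,0) if already in q-mode or at (p,L-1);
    # otherwise add a charge toward the q-mode.
    env[x] = ('q', 0) if (mode == 'q' or i == L - 1) else ('p', i + 1)
    return x - 1
```

Iterating `step` from $X_0 = 0$ with an empty dictionary reproduces the walk's dynamics in the constant $(q,0)$ initial environment.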
After the site $x$ has been visited $M$ times, whenever the process visits that site again, it behaves like an ordinary simple, symmetric random walk, jumping left or right with equal probability. Cookie random walks were first introduced by Benjamini and Wilson \cite{Benjamini2003}; see the survey paper of Kosygina and Zerner \cite{Kosygina2013} for more on cookie random walks and a bibliography. We now describe two outgrowths of the basic cookie random walk. Kozma, Orenshtein, and Shinkar \cite{Kozma2013} recently considered a \emph{periodic cookie} random walk. Instead of having a cookie only the first $M$ times the process is at a given site, consider periodic cookies with period $M$, and assume that these cookies are identical at each $x \in \mathbb{Z}$. Thus, one defines $\omega(k)$, $k\in\mathbb{N}$, with $\omega(k+M)=\omega(k)$. For each $x \in \mathbb{Z}$, the $k$th time the process is at $x$ it jumps right or left with probabilities $\omega(k)$ and $1-\omega(k)$ respectively. In particular, the process never reverts to a simple, symmetric random walk at any site. Another outgrowth of the basic cookie random walk is the ``have your cookie and eat it'' random walk \cite{Pinsky2010}. Now there is only one cookie at each site; call it $\omega(x), x \in \mathbb{Z}$, and assume $\omega(x)>1/2$. When the process first reaches $x$, it jumps right with probability $\omega(x)$ and left with probability $1-\omega(x)$. For each site $x$, as long as the process continues to jump to the right from $x$, it continues to use this right-biased cookie; but after the first time the process jumps to the left from $x$, the cookie at $x$ is removed. From then on, whenever the process is at $x$, it behaves like a simple, symmetric random walk, jumping left or right with equal probability. The site-based feedback random walk has something in common with each of the two above processes. 
In particular, the sequence of configurations encountered on repeated visits to a given site in the site-based feedback case is a finite-state Markov chain, and thus ``roughly periodic'' on long time scales, while the jump mechanism at a given site in the site-based feedback case depends not only on the number of visits to that site but also on the direction of the jumps on these visits, as in the ``have your cookie and eat it'' random walk. However, the site-based feedback random walk is also fundamentally different from both of the above processes, in that it has \emph{both} of these properties simultaneously, and in that it has persistent interactions with its environment, whereas in the ``have your cookie and eat it'' case the interactions at a given site $x$ terminate after the first leftward jump. In this paper we study the transience/recurrence properties of the site-based feedback random walk, and in the transient case we study the speed of the process. Some new features occur that were not present in other cookie random walk models. In particular, the initial environment can have a dramatic influence on the behavior for certain critical values of the parameters $p,q,R,L$. Before stating the results, we need to introduce a bit more notation and terminology. Let $\mathbb{P}_{\omega,k}$ denote the probability measure for the random walk started at $X_0 = k$ in the initial environment $\omega$, and let $\mathbb{P}_{\omega} = \mathbb{P}_{\omega, 0}$. Also, let $\mathbb{E}_{\omega}$ and $\mathbb{E}_{\omega,k}$ denote, respectively, expectations with respect to the measures $\mathbb{P}_{\omega}$ and $\mathbb{P}_{\omega,k}$. Finally, for $x \in \mathbb{Z}$, let $N_x$ be the total number of visits to site $x$: \begin{align} \label{eq:DefNx} N_x = |\{n \geq 0 : X_n = x\}|. \end{align} We say that the random walk path $(X_n)$ is \footnotemark{}: \begin{itemize} \item \emph{recurrent} if $N_0 = \infty$.
\item \emph{right transient}, or \emph{transient to $+\infty$}, if $\lim_{n \to \infty} X_n = +\infty$, and \emph{left transient}, or \emph{transient to $-\infty$}, if $\lim_{n \to \infty} X_n = -\infty$. \item \emph{ballistic} if $\liminf_{n \to \infty} |X_n|/n > 0$. \end{itemize} \footnotetext{Note that these definitions do not have any a.s. qualifications, and are simply statements about the (random) path $(X_n) = (X_0, X_1,...)$. Thus, the random walk $(X_n)$ has some probability of being right transient, some probability of being left transient, and some probability of being recurrent. Typically one says that a random walk $(X_n)$ is recurrent/right transient/left transient if, according to our definitions, it is a.s. recurrent/right transient/left transient. However, for our model there are some situations (see Theorem \ref{LR2crit}) where there is positive probability both for transience to $+\infty$ and for transience to $-\infty$, so for consistency we will speak of all of these properties probabilistically.} Our first theorem gives the cutoff point for left/right transience. \begin{The} \label{thm:RightLeftTransienceCutoff} Define $\alpha = \alpha(p,q,R,L) \in (0,1)$ by \begin{align} \label{eq:DefAlpha} \alpha = \frac{p \cdot \big[(1-q)q^R(1 - (1-p)^L)\big] ~+~ q \cdot \big[p(1-p)^L(1-q^R)\big]} {\big[(1-q)q^R(1 - (1-p)^L)\big] ~+~ \big[p(1-p)^L(1-q^R)\big]}. \end{align} \begin{itemize} \item If $\alpha > 1/2$ then the random walk $(X_n)$ is $\mathbb{P}_{\omega}$ a.s. right transient, for any initial environment $\omega$. \item If $\alpha < 1/2$ then the random walk $(X_n)$ is $\mathbb{P}_{\omega}$ a.s. left transient, for any initial environment $\omega$. \end{itemize} \end{The} We will call the vector $(p,q,R,L)$ the \emph{parameter quadruple} for the random walk $(X_n)$.
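The cutoff criterion of Theorem \ref{thm:RightLeftTransienceCutoff} is straightforward to evaluate numerically. A sketch (illustrative only; the function names are ours, and the classification follows the two bullet points above, with exact float comparison against $1/2$):

```python
def alpha(p, q, R, L):
    """The transience parameter alpha(p,q,R,L) from eq. (eq:DefAlpha)."""
    A = (1 - q) * q**R * (1 - (1 - p)**L)  # weight multiplying p
    B = p * (1 - p)**L * (1 - q**R)        # weight multiplying q
    return (p * A + q * B) / (A + B)

def transience_direction(p, q, R, L):
    """'right' if alpha > 1/2, 'left' if alpha < 1/2, else 'critical'."""
    a = alpha(p, q, R, L)
    if a > 0.5:
        return "right"
    if a < 0.5:
        return "left"
    return "critical"
```

Note that $\alpha(p,p,R,L)=p$, consistent with the trivial case $q=p$ of a simple random walk of bias $p$; and for $L=1$ one can check that $\alpha = 1/2$ exactly at $p_0 = (1-2q+q^{R+1})/(1-2q+q^R)$, the critical point given later in \eqref{p_0}.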
In light of Theorem \ref{thm:RightLeftTransienceCutoff}, we say that the parameter quadruple $(p,q,R,L)$ is \emph{critical} if $\alpha(p,q,R,L) = 1/2$, and \emph{noncritical} otherwise. Our next theorem shows that in the noncritical case, the random walk is not just transient, but in fact ballistic. \begin{The} \label{thm:BallisticityWhenNonCritical} If $\alpha(p,q,R,L) \neq 1/2$, then there exists a $\beta = \beta(p,q,R,L) > 0$ such that, for any initial environment $\omega$, \begin{align} \label{eq:LiminfXnnGreaterDelta} \liminf_{n \to \infty} \frac{|X_n|}{n} \geq \beta, ~ \mathbb{P}_{\omega} \mbox{ a.s. } \end{align} Moreover, if $\alpha>1/2$ ($\alpha<1/2$) and the initial environment $\omega(x)$ is constant for $x \geq m$ ($x \leq - m$) then $\mathbb{E}_\omega(N_x)$ is also constant for $x \geq m$ ($x \leq - m$), and denoting this common value by $\gamma$, \begin{align} \label{eq:SpeedENxInverse} \lim_{n \to \infty} \frac{|X_n|}{n} = \frac1\gamma,~ \mathbb{P}_{\omega} \mbox{ a.s.} \end{align} Here, $m$ can be any nonnegative integer. \end{The} The following proposition characterizes some properties of the fundamental function $\alpha$. We choose to analyze $\alpha$ as a function of $p$ for fixed $R,L,q$; of course, a similar analysis also works to analyze $\alpha$ as a function of $q$ for fixed $R,L,p$. \begin{Prop} \label{prop:PropertiesOfAlpha} Let $R,L,q$ be fixed and consider $\alpha$ as a function of $p$, $\alpha(p) \equiv \alpha(p,q,R,L)$. \begin{itemize} \item[(i)] If $q = 1/2$, then \begin{equation*} \begin{aligned} \label{eq:alphaHalf} \alpha(1/2) = 1/2 ~,~ \alpha(p) < 1/2 \mbox{ for } p < 1/2 ~,~ \alpha(p) > 1/2 \mbox{ for } p > 1/2.
\end{aligned} \end{equation*} \item[(ii)] If $q < 1/2$, then there exists a unique critical point $p_0 = p_0(q,R,L) \in (1/2,1)$ such that \begin{equation} \begin{aligned} \label{eq:alphap0} \alpha(p_0) = 1/2 ~,~ \alpha(p) < 1/2 \mbox{ for } p < p_0 ~,~ \alpha(p) > 1/2 \mbox{ for } p > p_0. \end{aligned} \end{equation} \item[(iii)] For $q < 1/2$ the critical point $p_0$ from (ii) satisfies \begin{align} \label{eq:qPlusp0} &q+p_0(q,R,L)<1, \ \text{if}\ R<L; \nonumber \\ &q+p_0(q,R,L)>1, \ \text{if}\ R>L. \end{align} Also, for any fixed $R$ and $L$, $p_0(q,R,L)$ is a decreasing function of $q$, for $q \in (0,1/2)$. \item[(iv)] If $q < 1/2$ and $L=1$, then \begin{equation}\label{p_0} p_0 = \frac{1-2q+q^{R+1}}{1-2q+q^R}. \end{equation} If $q>1/2$ and $L=1$, then (\ref{eq:alphap0}) still holds with $p_0 = \frac{1-2q+q^{R+1}}{1-2q+q^R}$ as long as $1 - 2q + q^{R+1} > 0$. However, if $1 - 2q + q^{R+1} \leq 0$, then $\alpha(p) > 1/2$, for all $p \in (0,1)$. \item[(v)] If $q < 1/2$ and $L=R$, then $p_0 =1 - q.$ If $q > 1/2$ and $L=R$, then $1-q$ is still a critical point (i.e. $\alpha(1-q) = 1/2$), but it is not always unique. \item[(vi)] For any $R,L,q$, $\lim_{p \to 1} \alpha(p) = 1$. In particular, $\alpha > 1/2$ for all sufficiently large $p$. \end{itemize} \end{Prop} \noindent \bf Remark 1.\rm\ Part (v) shows that $\alpha(p)$ is not always a monotonic function of $p$, and, in fact, often it is not. Consequently, increasing $p$ (with fixed $q$, $R$, $L$) may sometimes change the process from the right transient regime to the left transient regime. However, this phenomenon can only occur when $q > 1/2$, by part (ii), in which case the process has negative feedback at all critical points. Illustrative plots are given in Figure \ref{fig:AlphaPlot}. \noindent \bf Remark 2.\rm\ As noted before the proposition, we could have considered $\alpha$ as a function of $q$ for fixed $p,R,L$.
We note, in particular, that in the case that $p>1/2$, there exists a unique critical point $q_0=q_0(p,R,L) \in (0,1/2)$, and when in addition, $R=1$, one has \begin{equation}\label{q_0} q_0=\frac{p(1-p)^L}{2p-1+(1-p)^L}. \end{equation} Moreover, if $R = 1$ and $p \leq 1/2$, then there is still a unique critical point $q_0$ given by \eqref{q_0} as long as $2p-1 + (1-p)^L > 0$. However, if $2p-1+(1-p)^L \leq 0$, then $\alpha < 1/2$, for all $q \in (0,1)$. \begin{figure} \caption{Plots of $\alpha(p)$ with $L = 10, R = 10, q = 0.75$ (left) and $L = 20, R = 10, q = 0.75$ (right). In both cases, as $p$ increases from $0$ to $1$ the parameter quadruple $(p,q,R,L)$ passes from right transient ($\alpha > 1/2$), to left transient ($\alpha < 1/2$), and back to right transient.} \label{fig:AlphaPlot} \end{figure} In general for cookie-type random walks, it is very difficult to obtain an explicit formula for the speed in the ballistic regime. However, the additional level of interaction between the random walker and the environment in the site-based feedback case makes a calculation of the speed possible in some situations. Before moving on to the critical case, we present two results that give an exact characterization of the limiting speed with $R$ or $L$ equal to $1$, in certain initial environments. We assume that $\alpha > 1/2$, but analogous results for $\alpha < 1/2$ are easily inferred by symmetry considerations. Specifically, if $\alpha(p,q,R,L) < 1/2$ then $\alpha(1-q, 1-p, L, R) > 1/2$, and the speed to $-\infty$ with parameters $p,q,R,L$ in an initial environment $\omega$ is the same as the speed to $+\infty$ with parameters $1-q,1-p,L,R$ in an initial environment $\omega'$ defined by $\omega'(x) = \omega(-x)^*, x \in \mathbb{Z}$, where $(q,i)^* = (1-q,i)$ and $(p,i)^* = (1-p,i)$. \begin{The} \label{thm:L1Speed} Let $L = 1$ and $\alpha > 1/2$. 
If $\omega(x) = (q,0)$ in a neighborhood of $+\infty$, then \begin{align} \label{eq:L1Speed} \lim_{n \to \infty} \frac{X_n}{n} = \frac{1 - t_*}{1 + t_*},~ \mathbb{P}_{\omega} \mbox{ a.s.}, \end{align} where $t_*$ is the unique root of the polynomial \begin{align} \label{eq:DefPolynomialP} P(t) = (1 - q) + (pq - p - 1)t + (p+q)t^2 - pqt^3 - (p-q) q^R (t^R - t^{R+1}) \end{align} in the interval $(1-q,1)$. \end{The} \begin{The} \label{thm:R1Speed} Let $R = 1$ and $\alpha > 1/2$. Assume that the limiting right density of $(p,i)$ sites $d_i \equiv \lim_{n \to \infty} \frac{1}{n} |\{0 \leq x \leq n-1: \omega(x) = (p,i)\}|$ exists, for each $0 \leq i \leq L-1$, and let $d_L = 1 - \sum_{i=0}^{L-1} d_i$ denote the limiting right density of $(q,0)$ sites. Then, \begin{align} \label{eq:R1Speed} \lim_{n \to \infty} \frac{X_n}{n} = \frac{1}{\sum_{i = 0}^L a_i d_i}~,~ \mathbb{P}_{\omega} \mbox{ a.s.,} \end{align} where \begin{align} \label{eq:Def_a0} a_0 = \frac{1 + (p/q-1)(1-p)^L}{(2p-1) - (p/q - 1)(1-p)^L}~, \end{align} and, for $1 \leq i \leq L$, \begin{align} \label{eq:Def_ai} a_i = \frac{1 + (p/q-1)(1-p)^{L-i}}{p} ~+~ \left( \frac{(1-p) + (p/q-1)(1-p)^{L-i}}{p} \right) a_0. \end{align} \end{The} \noindent \bf Remark.\rm\ In the case $R = L = 1$ with $\alpha > 1/2$ (i.e. with $p+q > 1$), the speed $s = \lim_{n \to \infty} \frac{X_n}{n}$ from \eqref{eq:R1Speed} reduces to \begin{align*} s = \frac{p+q-1}{1 + (1-2d)(p-q)}~, \end{align*} where $d = d_0$ is the limiting right density of $(p,0)$ sites in the initial environment $\omega$. Interestingly, this formula for $s$ is invariant under the interchange $p \leftrightarrow q$, $d \leftrightarrow 1-d$. That is, the speed of the walk is the same with positive or negative reinforcement, as long as the pair of bias parameters and the limiting right density of sites initially in the configuration with the more favorable bias for jumping right remain the same. We now turn to the critical case, $\alpha=1/2$.
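The closed-form speed in the last remark, and its invariance under $p \leftrightarrow q$, $d \leftrightarrow 1-d$, can be checked directly. A sketch (illustrative; assumes $R=L=1$ and $p+q>1$ as in the remark, with a function name of our own choosing):

```python
def speed_R1_L1(p, q, d):
    """Speed s = lim X_n / n for R = L = 1 with p + q > 1 (so alpha > 1/2).

    d is the limiting right density of (p,0) sites in the initial
    environment; the formula is the one displayed in the remark
    following Theorem thm:R1Speed.
    """
    return (p + q - 1) / (1 + (1 - 2 * d) * (p - q))
```

For $p=q$ the formula reduces to $2p-1$, the speed of a simple random walk of bias $p$, independently of $d$, as expected.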
Here, there are two possibilities: positive feedback with $q < 1/2 < p$ or negative feedback with $p < 1/2 < q$. In the case of positive feedback the situation is somewhat simpler, but in both cases the analysis is more delicate than before, and the transience/recurrence of the random walk often depends heavily on the initial environment. Our first result shows that there always exist initial environments for which the random walk is recurrent. \begin{The} \label{recurrpossible} If $\alpha(p,q,R,L)=1/2$, then there exist initial environments $\omega$ for which the random walk $(X_n)$ is a.s. recurrent. In particular, in the positive feedback case, $q < p$, the random walk $(X_n)$ is $\mathbb{P}_{\omega}$ a.s. recurrent for any initial environment $\omega$ with $\omega(x)=(q,0)$ for $x$ in a neighborhood of $+\infty$ and $\omega(x)=(p,0)$ for $x$ in a neighborhood of $-\infty$. \end{The} The next two theorems, and concomitant corollary and proposition, concern the situation that either $R$ or $L$ is 1. In this case, we can give an essentially complete description of when the random walk is recurrent, right transient, or left transient. However, for technical reasons, we will need to assume in many instances that the initial environment $\omega$ is constant either in a neighborhood of $+\infty$, a neighborhood of $-\infty$, or both. Our first result indicates that, when $R$ or $L$ is equal to 1, only one of the two directions is possible for transience. \begin{The} \label{RorL1} Assume $\alpha = 1/2$. \begin{itemize} \item If $R = 1$ and $q < p$, then the random walk $(X_n)$ is $\mathbb{P}_{\omega}$ a.s. not transient to $+\infty$, for any initial environment $\omega$. \item If $R = 1$ and $p < q$, then the random walk $(X_n)$ is $\mathbb{P}_{\omega}$ a.s. not transient to $+\infty$, for any environment $\omega$ which is constant in a neighborhood of $+\infty$.
\item If $L = 1$ and $q < p$, then the random walk $(X_n)$ is $\mathbb{P}_{\omega}$ a.s. not transient to $-\infty$, for any initial environment $\omega$. \item If $L = 1$ and $p < q$, then the random walk $(X_n)$ is $\mathbb{P}_{\omega}$ a.s. not transient to $-\infty$, for any environment $\omega$ which is constant in a neighborhood of $-\infty$. \end{itemize} \end{The} The following corollary is an immediate consequence of this theorem and part (ii) of Lemma \ref{lem:SimpleTransConditions} in section \ref{subsec:BasicLemmas}. \begin{Cor} \label{RandL1} Let $R=L=1$ and $p=1-q$; so $\alpha=1/2$. \begin{itemize} \item If $q < p$ then the random walk $(X_n)$ is $\mathbb{P}_{\omega}$ a.s. recurrent, for any initial environment $\omega$. \item If $p < q$ then the random walk $(X_n)$ is $\mathbb{P}_{\omega}$ a.s. recurrent, for any initial environment $\omega$ which is constant in neighborhoods of $+ \infty$ and $- \infty$. \end{itemize} \end{Cor} Our next theorem gives specific conditions to determine if the random walk is recurrent or transient to $+\infty$ in the case $L = 1$ and $R > 1$ (which, by Theorem \ref{RorL1}, are the only possibilities). By symmetry considerations, if $R=1$, instead of $L=1$, then the result obtained for $L=1$ will hold with the roles of $q,p,R$ and $\pm\infty$ replaced by $1-p,1-q,L$ and $\mp\infty$ respectively. \begin{The} \label{L1andEnvironment} Assume that $L=1$, $R \geq 2$, and $\alpha = 1/2$. Thus, by \eqref{p_0}, $p=p_0=\frac{1-2q+q^{R+1}}{1-2q+q^R}$. In the case of negative feedback, $p < q$, assume also that the initial environment $\omega$ is constant in a neighborhood of $-\infty$. \begin{itemize} \item[(i)] If $\omega(x) = (q,0)$ in a neighborhood of $+\infty$, then the random walk $(X_n)$ is $\mathbb{P}_{\omega}$ a.s. recurrent.
\item[(ii)] If $\omega(x) = (q,i)$ in a neighborhood of $+\infty$, $1 \leq i \leq R-1$, then the random walk $(X_n)$ is
\begin{align*}
\mathbb{P}_{\omega}& \mbox{ a.s. recurrent if } P_{R,i}(q) \geq 0, \mbox{ and }\\
\mathbb{P}_{\omega}& \mbox{ a.s. transient to $+\infty$ if } P_{R,i}(q) < 0,
\end{align*}
where
\begin{align} \label{poly}
P_{R,i}(q) & = (2R-1)q^{R+2}-(3R+1)q^{R+1}+(R+1)q^R \nonumber \\
&~~~ -2q^{R+2-i}+3q^{R+1-i}-q^{R-i}+(1-2q)^2.
\end{align}
\item[(iii)] If $\omega(x) = (p,0)$ in a neighborhood of $+\infty$, then the random walk $(X_n)$ is
\begin{align*}
\mathbb{P}_{\omega}& \mbox{ a.s. recurrent if } P_{R,R}(q) \geq 0, \mbox{ and }\\
\mathbb{P}_{\omega}& \mbox{ a.s. transient to $+\infty$ if } P_{R,R}(q) < 0,
\end{align*}
where
\begin{align} \label{polyR}
P_{R,R}(q)=(2R-1)q^{R+2}-(3R+1)q^{R+1}+(R+1)q^{R}+2q^2-q.
\end{align}
Moreover, for each $R \geq 2$ there exists a unique root $q_*(R)\in(0,1/2)$ of the polynomial $P_{R,R}(q)$, $P_{R,R}(q) > 0$ for $q > q_*(R)$, $P_{R,R}(q) < 0$ for $q < q_*(R)$, and $\lim_{R\to\infty}q_*(R)=1/2$.
\end{itemize}
\end{The}

\noindent \bf Remark 1. \rm\ In the case of positive feedback, $q < p$, one can also determine between right transience and recurrence for some environments that are not constant in a neighborhood of $+\infty$, using the comparison lemma given in section \ref{subsec:ComparisonOfEnvironments}. In particular, $(p,0)$ is the most favorable environment for right transience in the positive feedback case, so if the random walk is not right transient with the $(p,0)$ environment in a neighborhood of $+\infty$, then it is not right transient for any initial environment.

\noindent \bf Remark 2. \rm\ $P_{R,R}(q)$ is the same polynomial one obtains by substituting $i = R$ into the definition of $P_{R,i}(q)$ in \eqref{poly}.

\noindent \bf Remark 3. \rm\ For any $1 \leq i \leq R$, $P_{R,i}(q)$ has a double root at $1$.
$P_{R,R}(q)$ also has a single root at $0$ and factors as $P_{R,R}(q) = q(1-q)^2 \widetilde{P}_{R,R}(q)$ where
\begin{align} \label{eq:FactoredPolyR}
\widetilde{P}_{R,R}(q)=-1+\sum_{j=1}^{R-3}jq^{j+1}+(2R-1)q^{R-1}.
\end{align}
Here, the sum is defined to be 0, for $R = 2,3$.

\noindent \bf Remark 4. \rm\ Using \eqref{eq:FactoredPolyR} one finds that $q_*(2) = 1/3$ and $q_*(3) = 1/\sqrt{5} \approx 0.447$. Using a combination of analytical techniques and computer generated plots one also finds the following behavior for $P_{R,i}$, $1 \leq i \leq R-1$. For $R=2,3,4$, $P_{R,i}(q) \geq 0$ for all $1 \leq i \leq R-1$ and $q \in (0,1)$. For $R = 5,6$, $P_{R,i}(q) \geq 0$ for all $1 \leq i \leq R-2$ and $q \in (0,1)$. However, $P_{5,4}(q) < 0$ if (and only if) $q \in (a,b)$, where $a\approx.410$ and $b\approx.473$, and $P_{6,5}(q) < 0$ if (and only if) $q \in (a,b)$, where $a\approx.391$ and $b\approx.490$. For $i = 5,6$ there are ranges of $q$ for which $P_{7,i}(q) < 0$.

The next proposition characterizes asymptotic properties of the function $P_{R,i}(q)$ in the limit of large $R$, for two different cases of $i = i_R$. In the first case, $R-i_R$ grows to infinity, so the process must jump right many consecutive times from a given site to switch it to the $p$-mode, starting in the $(q,i_R)$ initial environment. In the second case, $i_R = R-k$, for a fixed $k$, so the process need jump right only $k$ consecutive times from any site to switch it to the $p$-mode, starting in the $(q,i_R)$ initial environment. The proof of both cases is straightforward, and is left to the reader.
\begin{Prop} ~\ \label{Rlarge}
\begin{itemize}
\item[(i)] If $(R - i_R) \rightarrow \infty$ as $R \rightarrow \infty$ then, for any fixed $q \in (0,1)$, $P_{R,i_R}(q) > 0$ for all sufficiently large $R$.
\item[(ii)] Let $i_R = R-k$, and define
\begin{align} \label{Qpoly}
Q_k(q)= (1-2q)^2 -q^k + 3q^{k+1} - 2q^{k+2}.
\end{align}
If $Q_k(q) > 0$ then $P_{R,i_R}(q) > 0$ for sufficiently large $R$, and if $Q_k(q) < 0$ then $P_{R,i_R}(q) < 0$ for sufficiently large $R$.
\end{itemize}
\end{Prop}

\noindent \bf Remark.\rm\ The polynomial $Q_k(q)$ factors as $(2q-1)(2q-1+q^k-q^{k+1})$. Using this representation it is not hard to verify the following facts: $Q_k(q)>0$ for $q>1/2$, and there exists an $a_k\in(\frac12(3-\sqrt5),1/2)\approx(.382,1/2)$ such that $Q_k(q)>0$ for $q\in(0,a_k)$ and $Q_k(q)<0$ for $q\in(a_k,1/2)$. Furthermore, $a_k$ is increasing in $k$ and $\lim_{k\to\infty}a_k=1/2$.

Corollary \ref{RandL1} showed that if $R=L=1$ then in the critical case $p = 1-q$ the random walk is always recurrent (assuming a constant initial environment in neighborhoods of $\pm \infty$ in the negative feedback regime). When $R=L=2$ the behavior in the critical case is much more complicated; in particular, for appropriate initial environments, there is a positive probability for both transience to $+\infty$ and transience to $-\infty$.
\begin{The}\label{LR2crit}
Let $R=L=2$ and assume that $p=1-q$; so $\alpha=\frac12$. In the negative feedback case, $q>\frac12$, assume also that the initial environment $\omega$ is constant in a neighborhood of $+\infty$ and in a neighborhood of $-\infty$. Let
$$ q^*_1=\frac{\sqrt{13}-3}2\approx0.303 $$
and let
$$ q^*_2\approx .682\ \text{ be the unique root in $(0,1)$ of } q^3+q-1. $$
\begin{itemize}
\item[(i)] Let $q<q_1^*$.

a. If $\omega(x) =(p,0)$ in a neighborhood of $+\infty$ and $\omega(x) \neq(q,0)$ for any $x$ in a neighborhood of $-\infty$, then the random walk $(X_n)$ is $\mathbb{P}_\omega$ a.s. transient to $+\infty$.

b. If $\omega(x) =(q,0)$ in a neighborhood of $-\infty$ and $\omega(x) \neq(p,0)$ for any $x$ in a neighborhood of $+\infty$, then the random walk $(X_n)$ is $\mathbb{P}_\omega$ a.s. transient to $-\infty$.

c.
If $\omega(x) \neq(p,0)$ for any $x$ in a neighborhood of $+\infty$ and $\omega(x) \neq(q,0)$ for any $x$ in a neighborhood of $-\infty$, then the random walk $(X_n)$ is $\mathbb{P}_\omega$ a.s. recurrent.

d. If $\omega(x) =(p,0)$ in a neighborhood of $+\infty$ and $\omega(x) = (q,0)$ in a neighborhood of $-\infty$, then $\mathbb{P}_\omega(X_n\to+\infty) = 1 - \mathbb{P}_\omega(X_n\to-\infty) \in (0,1).$
\item[(ii)] Let $q>q^*_2$. Then (a)-(d) of (i) hold with the roles of $(p,0)$ and $(q,0)$ reversed.
\item[(iii)] Let $q\in[q^*_1,q^*_2]$. Then the random walk $(X_n)$ is $\mathbb{P}_\omega$ a.s. recurrent.
\end{itemize}
\end{The}

\noindent \bf Remark.\rm\ The proof of Theorem \ref{LR2crit} relies on the eigen-decomposition of a $2 \times 2$ matrix. In principle, one could also apply similar methods to characterize the transience/recurrence properties for general $R, L \geq 2$. Specifically, to determine if there is positive probability of right transience one needs to diagonalize an $L \times L$ matrix, and to determine if there is positive probability of left transience one needs to diagonalize an $R \times R$ matrix. For $R = L = 3$ this is possible analytically, but in general it is not. Nevertheless, it could be done numerically for reasonably sized $R$ and $L$. In the case $R \not= L$, one would also need to determine numerically the critical value(s) of $p$ such that $\alpha(p,q,R,L) = 1/2$, as the entries of these matrices depend on both $p$ and $q$.

We close this introductory section with an \emph{open problem}. In the non-critical case, Theorem \ref{thm:BallisticityWhenNonCritical} shows that the random walk is ballistic. The open problem is to show that in the critical case, if the random walk is transient, then it is not ballistic; that is, $\lim_{n\to\infty}\frac{X_n}n=0$ a.s.
Note that if this were not always true, then in light of Theorem \ref{thm:BallisticityWhenNonCritical}, there would be cases (that is, choices of $R$ and $L$) for which the speed would have a discontinuity as a function of $p$ and $q$.
\\
The remainder of the paper is organized as follows. In section \ref{sec:Preliminaries} we introduce some important constructions that will be central to our proofs and establish a number of simple lemmas. In section \ref{sec:NoncriticalCase} we prove Theorems \ref{thm:RightLeftTransienceCutoff}--\ref{thm:R1Speed} concerning the behavior of the random walk $(X_n)$ in the noncritical case. In section \ref{sec:CriticalCase} we prove Theorems \ref{recurrpossible}--\ref{LR2crit} concerning the behavior of the random walk in the critical case. Finally, in section \ref{sec:AnalysisOfAlpha} we prove Proposition \ref{prop:PropertiesOfAlpha}, which characterizes properties of the important function $\alpha$.
\section{Preliminaries} \label{sec:Preliminaries}
In this section we introduce a basic framework for proving the theorems stated above and establish a number of useful lemmas. Section \ref{subsec:AuxilliaryMarkovChains} gives constructions of the single site Markov chains $(Y_n^x)_{n \in \mathbb{N}}$ and the right jumps Markov chain $(Z_x)_{x \geq 0}$, which will be the primary tools used in the proofs of Theorems \ref{thm:RightLeftTransienceCutoff}, \ref{thm:BallisticityWhenNonCritical}, \ref{recurrpossible}, \ref{L1andEnvironment} and \ref{LR2crit}. Section \ref{subsec:BasicLemmas} gives three simple lemmas that will be used in a number of places. The first two concern conditions for transience, and the third relates hitting times to speed. Finally, section \ref{subsec:ComparisonOfEnvironments} gives an important lemma comparing the possibility of transience in different environments.
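As a concrete companion to the constructions that follow, the walk itself is easy to simulate directly from the site dynamics: a $p$-mode site counts consecutive left jumps and switches to $q$-mode after $L$ of them, while a $q$-mode site counts consecutive right jumps and switches to $p$-mode after $R$ of them. The Python sketch below is ours, not part of the paper; the function and variable names are invented for illustration.

```python
import random

# A site configuration is a pair (mode, i): mode "p" with counter i in
# {0,...,L-1} counting consecutive left jumps, or mode "q" with counter
# i in {0,...,R-1} counting consecutive right jumps.
def simulate_walk(p, q, R, L, n_steps, omega0, seed=0):
    """Simulate n_steps of the walk started at 0; omega0(x) gives the
    initial configuration of site x."""
    rng = random.Random(seed)
    env = {}          # current configuration of each visited site
    X = 0
    path = [0]
    for _ in range(n_steps):
        mode, i = env.get(X, omega0(X))
        prob_right = p if mode == "p" else q
        right = rng.random() <= prob_right
        if mode == "p":
            # a right jump resets the counter; L consecutive left
            # jumps switch the site to mode q
            env[X] = ("p", 0) if right else (("p", i + 1) if i < L - 1 else ("q", 0))
        else:
            # a left jump resets the counter; R consecutive right
            # jumps switch the site to mode p
            env[X] = ("q", 0) if not right else (("q", i + 1) if i < R - 1 else ("p", 0))
        X += 1 if right else -1
        path.append(X)
    return path

# Degenerate sanity check: with p = q = 1 every jump is to the right.
path = simulate_walk(1.0, 1.0, R=2, L=2, n_steps=10, omega0=lambda x: ("q", 0))
assert path == list(range(11))
```

The update rule above mirrors the single site transition matrix $M$ defined in the next subsection, in which a right jump from a $p$-site resets the configuration to $(p,0)$ and a left jump from $(p,L-1)$ produces $(q,0)$, and symmetrically for $q$-sites.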
\subsection{Auxiliary Markov Chains} \label{subsec:AuxilliaryMarkovChains}
\subsubsection{The Single Site Markov Chains $(Y_n^x)_{n \in \mathbb{N}}$} \label{subsubsec:SingleSiteMarkovChain}
Let $M$ be the stochastic transition matrix on the set of single site configurations $\Lambda$, with nonzero entries defined as follows:
\begin{align*}
& M_{(p,i)(p,0)} = p, M_{(p,i)(p,i+1)} = 1-p~,~ \mbox{ for } 0 \leq i \leq L-2. \\
& M_{(p,L-1)(p,0)} = p, M_{(p,L-1)(q,0)} = 1-p. \\
& M_{(q,i)(q,0)} = 1-q, M_{(q,i)(q,i+1)} = q~,~ \mbox{ for } 0 \leq i \leq R-2. \\
& M_{(q,R-1)(q,0)} = 1-q, M_{(q,R-1)(p,0)} = q.
\end{align*}
For $x \in \mathbb{Z}$, let $(Y_n^x)_{n \in \mathbb{N}}$ be the Markov chain with state space $\Lambda$, transition matrix $M$, and initial state $\omega(x)$. We refer to the chain $(Y_n^x)_{n \in \mathbb{N}}$ as the \emph{single site Markov chain} at $x$. It is the Markovian sequence of configurations at site $x$ that would occur if $x$ were to be visited infinitely often. That is,
\begin{align*}
\mathbb{P}(C_{n+1}^x = \lambda'|C_{n}^x = \lambda, N_x \geq n + 1) = M_{\lambda \lambda'},~ \lambda, \lambda' \in \Lambda
\end{align*}
where $N_x$ is the total number of visits to site $x$, as above, and $C_n^x$ is the configuration at site $x$ immediately after the $n$-th visit. The \emph{extended single site chain} at $x$, $(\widehat{Y}_n^x)_{n \in \mathbb{N}} = (Y_n^x, J_n^x)_{n \in \mathbb{N}}$, is the Markov chain whose states are pairs $(\lambda,j)$, where $\lambda \in \Lambda$ denotes the current configuration at site $x$ and $j \in \{1,-1\}$ represents the next jump from $x$ ($1$ for right, $-1$ for left). The state space of this chain is $\widehat{\Lambda} = \Lambda \times \{1,-1\}$ and the transition matrix $\widehat{M}$ is defined by:
\begin{align*}
& \widehat{M}_{((p,i),1)((p,0),1)} = p, \widehat{M}_{((p,i),1)((p,0),-1)} = 1-p ~,~\mbox{ for } 0 \leq i \leq L-1.
\\
& \widehat{M}_{((p,i),-1)((p,i+1),1)} = p, \widehat{M}_{((p,i),-1)((p,i+1),-1)} = 1-p ~,~\mbox{ for } 0 \leq i \leq L-2. \\
& \widehat{M}_{((p,L-1),-1)((q,0),1)} = q, \widehat{M}_{((p,L-1),-1)((q,0),-1)} = 1-q. \\
& \widehat{M}_{((q,i),-1)((q,0),1)} = q, \widehat{M}_{((q,i),-1)((q,0),-1)} = 1-q ~,~\mbox{ for } 0 \leq i \leq R-1. \\
& \widehat{M}_{((q,i),1)((q,i+1),1)} = q, \widehat{M}_{((q,i),1)((q,i+1),-1)} = 1-q ~,~\mbox{ for } 0 \leq i \leq R-2. \\
& \widehat{M}_{((q,R-1),1)((p,0),1)} = p, \widehat{M}_{((q,R-1),1)((p,0),-1)} = 1-p.
\end{align*}
The initial state $\widehat{Y}_1^x$ for the chain has the following distribution:
\begin{itemize}
\item If $\omega(x) = (p,i)$, for some $0 \leq i \leq L-1$, then
\begin{align} \label{eq:DistFrom_pSite}
\mathbb{P}\left(\widehat{Y}_1^x = ((p,i),1)\right) = p,~ \mathbb{P}\left(\widehat{Y}_1^x = ((p,i),-1)\right) = 1-p.
\end{align}
\item If $\omega(x) = (q,i)$, for some $0 \leq i \leq R-1$, then
\begin{align} \label{eq:DistFrom_qSite}
\mathbb{P}\left(\widehat{Y}_1^x = ((q,i),1)\right) = q,~ \mathbb{P}\left(\widehat{Y}_1^x = ((q,i),-1)\right) = 1-q.
\end{align}
\end{itemize}
By construction, the sequence of site configurations $(Y_n^x)$ obtained by projection from this extended Markov chain state sequence $(\widehat{Y}_n^x)$ with transition matrix $\widehat{M}$ and initial state distributed according to (\ref{eq:DistFrom_pSite}) and (\ref{eq:DistFrom_qSite}) has the same law as above, when defined directly by the transition matrix $M$ with initial state $\omega(x)$.
\\
\noindent \emph{Coupling to the Random Walk $(X_n)$} \\
For a given initial position $x_0$ and initial environment $\omega = \{\omega(x)\}_{x \in \mathbb{Z}}$ one can construct the random walk $(X_n)_{n \geq 0}$ according to the following two step procedure, similar to that given in \cite{Amir2013} for cookie random walks.
\begin{enumerate}
\item Run the extended single site Markov chains $(\widehat{Y}_n^x)_{n \in \mathbb{N}}$ at each site $x$ independently.
\item Walk deterministically from the initial point $x_0$ according to the corresponding ``jump pattern'' $\{J_n^x\}_{n \in \mathbb{N}, x \in \mathbb{Z}}$. That is, upon the $k$-th visit to site $x$, the walk jumps right if $J_k^x = 1$ and left if $J_k^x = -1$. Formally, we have:
\begin{itemize}
\item $X_0 = x_0$.
\item For $n \geq 0$, $X_{n+1} = X_n + J_{K_n}^{X_n}$ where $K_n = |\{0 \leq m \leq n: X_m = X_n\}|$.
\end{itemize}
\end{enumerate}
By definition of the extended single site chains, the random walk $(X_n)_{n \geq 0}$ constructed by this two step procedure will have the correct law, and in the sequel we always assume our random walk $(X_n)$ to be defined in this fashion. We also denote by $\mathbb{P}_{\omega}$ the probability measure for the extended single site chains, run independently at each site $x$, with initial environment $\omega = \{\omega(x)\}_{x \in \mathbb{Z}}$. This is a slight abuse of notation, since the probability measure $\mathbb{P}_{\omega} \equiv \mathbb{P}_{\omega,0}$ introduced in section \ref{sec:Intro} also specifies the initial position of the random walk as $X_0 = 0$. However, things should be clear from the context.
\\
\noindent \emph{Stationary Distribution}\\
Since $\Lambda$ is finite and $M$ is an irreducible transition matrix, there exists a unique stationary probability distribution $\pi$ on $\Lambda$ satisfying $\pi = \pi M$. Solving the linear system $\{\pi = \pi M, \sum_{\lambda \in \Lambda} \pi_{\lambda} = 1\}$, one obtains the following explicit form for $\pi$ (see Appendix \ref{subsec:StationaryDistributionSingleSiteMC}):
\begin{align} \label{eq:pipi_piqi}
\pi_{(p,i)} & = \frac{p(1-q)q^R}{(1-q)q^R(1 - (1-p)^L) + p(1-p)^L(1-q^R)} \cdot (1-p)^i ~,~ 0 \leq i \leq L-1.
\nonumber \\
\pi_{(q,i)} & = \frac{p(1-q)(1-p)^L}{(1-q)q^R(1 - (1-p)^L) + p(1-p)^L(1-q^R)} \cdot q^i ~,~ 0 \leq i \leq R-1.
\end{align}
In particular,
\begin{align} \label{eq:pippiq}
\pi_p & \equiv \sum_{i=0}^{L-1} \pi_{(p,i)} = \frac{(1-q)q^R(1 - (1-p)^L)}{(1-q)q^R(1 - (1-p)^L) + p(1-p)^L(1-q^R)} ~, \mbox{ and } \nonumber \\
\pi_q & \equiv \sum_{i=0}^{R-1} \pi_{(q,i)} = \frac{p(1-p)^L(1-q^R)}{(1-q)q^R(1 - (1-p)^L) + p(1-p)^L(1-q^R)}.
\end{align}
So, defining $\widehat{\varphi} : \widehat{\Lambda} \rightarrow \{0,1\}$ by $\widehat{\varphi}(\lambda,j) = \mathds{1}\{j= 1\}$, we have
\begin{align} \label{eq:ExpectedValuePhiEqualAlpha}
\mathbb{E}_{\widehat{\pi}}(\widehat{\varphi}) = p \cdot \pi_p + q \cdot \pi_q = \alpha ,
\end{align}
where $\widehat{\pi}$ is the stationary distribution for the transition matrix $\widehat{M}$, and $\alpha \in (0,1)$ is as in Theorem \ref{thm:RightLeftTransienceCutoff}. It follows, by the ergodic theorem for finite-state Markov chains, that the limiting fraction of right jumps in the sequence $(J_n^x)_{n \in \mathbb{N}}$ is equal to $\alpha$ a.s., for each site $x$.
\subsubsection{The Right Jumps Markov Chain $(Z_x)_{x \geq 0}$} \label{subsubsec:RightJumpsMarkovChain}
The \emph{right jumps Markov chain} $(Z_x)_{x \geq 0}$ is defined as follows:
\begin{itemize}
\item $Z_0 = 1$.
\item For $x \geq 1$,
\begin{align} \label{eq:DefRightJumpsMC}
Z_x = \Theta_x - Z_{x-1} ~~\mbox{ where }~~ \Theta_x = \inf \Big\{n \geq 0: \sum_{m=1}^{n} \mathds{1}\{J_m^x = -1\} = Z_{x-1} \Big\},
\end{align}
\end{itemize}
with the convention that $\sum_{m=1}^0\mathds{1}\{J_m^x = -1\} =0$. That is, $\Theta_x$ is the first time that there are $Z_{x-1}$ left jumps in the sequence $(J_n^x)_{n \in \mathbb{N}}$, and $Z_x = \Theta_x - Z_{x-1}$ is the total number of right jumps in the sequence $(J_n^x)_{n \in \mathbb{N}}$ before there are $Z_{x-1}$ left jumps.
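The definition \eqref{eq:DefRightJumpsMC} is easy to exercise on a finite jump pattern. The following Python sketch is ours, not part of the paper; it assumes each listed jump sequence is long enough to contain the required number of left jumps.

```python
# Construct the right jumps chain Z_x from per-site jump sequences,
# following the definition of Theta_x and Z_x in the text.
def right_jumps_chain(jump_patterns):
    """jump_patterns[x-1] lists the jumps (+1 right, -1 left) made from
    site x = 1, 2, ...; returns [Z_0, Z_1, ...] with Z_0 = 1."""
    Z = [1]  # Z_0 = 1
    for J in jump_patterns:
        lefts = 0
        theta = 0
        while lefts < Z[-1]:          # stop at the Z_{x-1}-th left jump
            lefts += (J[theta] == -1)
            theta += 1
        Z.append(theta - Z[-1])       # right jumps before Z_{x-1} lefts
    return Z

# Example: from site 1 the pattern is right, right, left, so there are
# two right jumps before the first (Z_0 = 1) left jump: Z_1 = 2. From
# site 2 there is one right jump before the second left jump: Z_2 = 1.
assert right_jumps_chain([[1, 1, -1]]) == [1, 2]
assert right_jumps_chain([[1, 1, -1], [1, -1, -1]]) == [1, 2, 1]
```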
For an initial environment $\omega$, we denote the probability measure for the right jumps chain $(Z_x)$ also by $\mathbb{P}_{\omega}$. This is simply the projection of the measure $\mathbb{P}_{\omega}$ for the extended single site Markov chains, of which the right jumps chain is a deterministic function.
\\
\noindent \emph{Relation to the Random Walk $(X_n)$} \\
We denote by $T_x$ the first hitting time of site $x$,
\begin{align*}
T_x = \inf \{n \geq 0 : X_n = x\} ~,~ x \in \mathbb{Z}.
\end{align*}
Also, we say that a jump pattern $\{J_n^x\}_{n \in \mathbb{N}, x \in \mathbb{Z}}$ is \emph{non-degenerate} if
\begin{align*}
|\{n : J_{n+1}^x \not= J_n^x \}| = \infty ~,~ \mbox{for each } x \in \mathbb{Z}.
\end{align*}
Clearly, for any initial environment $\omega$, the corresponding jump pattern $\{J_n^x\}_{n \in \mathbb{N}, x \in \mathbb{Z}}$ is non-degenerate $\mathbb{P}_{\omega}$ a.s. The following important proposition relating transience/recurrence of the random walk $(X_n)$ to survival of the Markov chain $(Z_x)$ is shown in \cite{Amir2013}\footnotemark{}. \footnotetext{The terminology there is slightly different. The jump pattern is referred to as an \emph{arrow environment} and denoted by $a$. After the arrow environment is chosen (according to some random rule which differs depending on the model) the walker follows the directional arrows deterministically on its walk.}
\begin{Prop} \label{prop:SurvivalZxTransienceXn}
If $X_0 = 1$ and $\{J_n^x\}_{n \in \mathbb{N}, x \in \mathbb{Z}}$ is non-degenerate, then
\begin{align} \label{eq:SurvivalZxTransienceXn}
T_0 = \infty ~\mbox{ if and only if } Z_x > 0, \mbox{ for all } x > 0.
\end{align}
Moreover, if $T_0 < \infty$ then, for each $x \in \mathbb{N}$, $Z_x$ is equal to the number of right jumps of the process $(X_n)$ from site $x$ before hitting 0.
\end{Prop}
\subsection{Basic Lemmas} \label{subsec:BasicLemmas}
For $n \geq 0$ we denote by $\mathcal{A}_n^+$ the event that the random walk steps right at time $n$ and never returns to its time-$n$ location, and by $\mathcal{A}_n^{-}$ the event that the random walk steps left at time $n$ and never returns:
\begin{align} \label{eq:DefAnpm}
\mathcal{A}_n^+ = \{X_m > X_n, \forall m > n\} ~\mbox{ and }~ \mathcal{A}_n^- = \{X_m < X_n, \forall m > n\}.
\end{align}
The following simple facts will be needed in several instances below. A proof is provided in Appendix \ref{sec:BasicTransienceConditions}.
\begin{Lem} \label{lem:SimpleTransConditions}
For any initial environment $\omega$:
\begin{itemize}
\item[(i)] $\mathbb{P}_{\omega}(\mathcal{A}_0^+) > 0$ if and only if $\mathbb{P}_{\omega}(X_n \rightarrow \infty) > 0$, and \\ $\mathbb{P}_{\omega}(\mathcal{A}_0^-) > 0$ if and only if $\mathbb{P}_{\omega}(X_n \rightarrow -\infty) > 0$.
\item[(ii)] $\mathbb{P}_{\omega}(X_n \rightarrow \infty) = \mathbb{P}_{\omega}(\liminf_{n \to \infty} X_n > -\infty)$, and \\ $\mathbb{P}_{\omega}(X_n \rightarrow -\infty) = \mathbb{P}_{\omega}(\limsup_{n \to \infty} X_n < \infty)$.
\item[(iii)] $\mathbb{P}_{\omega}(X_n \rightarrow \infty) = 1$ if $\mathbb{P}_{\omega}(X_n \rightarrow \infty) > 0$ and $\mathbb{P}_{\omega}(X_n \rightarrow -\infty) = 0$. \\ $\mathbb{P}_{\omega}(X_n \rightarrow -\infty) = 1$ if $\mathbb{P}_{\omega}(X_n \rightarrow -\infty) > 0$ and $\mathbb{P}_{\omega}(X_n \rightarrow \infty) = 0$.
\end{itemize}
\end{Lem}
Combining Proposition \ref{prop:SurvivalZxTransienceXn} and part (i) of Lemma \ref{lem:SimpleTransConditions} gives the following useful lemma.
\begin{Lem} \label{lem:ProbSurvivalZxProbTransienceXn}
For any initial environment $\omega$,
\begin{align} \label{eq:ProbA0Plus}
\mathbb{P}_{\omega}(\mathcal{A}_0^+) = \mathbb{P}_{\omega}(X_1 = 1) \cdot \mathbb{P}_{\omega}(Z_x > 0, \forall x > 0).
\end{align}
Consequently, $\mathbb{P}_{\omega}(X_n \rightarrow \infty) > 0 \mbox{ if and only if } \mathbb{P}_{\omega}(Z_x > 0, \forall x > 0) > 0$.
\end{Lem}
\begin{proof}
Fix any initial environment $\omega$, and let $\omega'$ denote the environment at time $1$ induced by jumping right from $X_0 = 0$ starting in $\omega$:
\begin{align*}
\{\omega_0 = \omega, X_0 = 0, X_1 = 1\} \Longrightarrow \omega_1 = \omega'.
\end{align*}
Since $\omega(x) = \omega'(x)$, for all $x > 0$, the distribution of the random variables $(J_n^x)_{n,x > 0}$ is the same in the two environments $\omega$ and $\omega'$. Thus,
\begin{align*}
\mathbb{P}_{\omega}(Z_x > 0, \forall x > 0) = \mathbb{P}_{\omega'}(Z_x > 0, \forall x > 0).
\end{align*}
So,
\begin{align*}
\mathbb{P}_{\omega}(\mathcal{A}_0^+) & \equiv \mathbb{P}_{\omega,0}(X_n > 0, \forall n > 0) \\
& = \mathbb{P}_{\omega,0}(X_1 = 1) \cdot \mathbb{P}_{\omega,0}(X_n > 0, \forall n > 1|X_1 = 1) \\
& = \mathbb{P}_{\omega,0}(X_1 = 1) \cdot \mathbb{P}_{\omega',1}(X_n > 0, \forall n > 0) \\
& \stackrel{(*)}{=} \mathbb{P}_{\omega,0}(X_1 = 1) \cdot \mathbb{P}_{\omega'}(Z_x > 0, \forall x > 0) \\
& = \mathbb{P}_{\omega,0}(X_1 = 1) \cdot \mathbb{P}_{\omega}(Z_x > 0, \forall x > 0).
\end{align*}
This proves \eqref{eq:ProbA0Plus}, and the ``consequently'' part of the lemma follows immediately from part (i) of Lemma \ref{lem:SimpleTransConditions}. Step $(*)$ follows from Proposition \ref{prop:SurvivalZxTransienceXn}\footnotemark{}.
\end{proof}
\footnotetext{In the proof we have used the explicit notation $\mathbb{P}_{\omega,0}$, rather than simply $\mathbb{P}_{\omega}$, for the random walk variables $X_n$, $n \geq 0$, to emphasize that the initial position $X_0 = 0$ plays a role in their distribution.
By contrast, $\mathbb{P}_{\omega}$, $\mathbb{P}_{\omega'}$ are used for the distribution of the right jumps Markov chain $(Z_x)_{x \geq 0}$, where the initial position of the random walk plays no role.}
For the proofs of Theorems \ref{thm:BallisticityWhenNonCritical} and \ref{thm:R1Speed} we will need the following lemma relating hitting times to speed. The same result is shown in \cite[Lemma 2.1.17]{Zeitouni2004}, for the case $C < \infty$, without the a priori assumption that $X_n \rightarrow \infty$. It is easy to see that with this assumption the claim also holds in the case $C = \infty$.
\begin{Lem} \label{lem:HittingTimesVersusSpeed}
If $\lim_{n \to \infty} X_n = \infty$ and $\lim_{x \to \infty} T_x/x = C \in (0,\infty]$, then
\begin{align*}
\lim_{n \to \infty} X_n/n = 1/C.
\end{align*}
\end{Lem}
We note that, although stated in \cite[Lemma 2.1.17]{Zeitouni2004} in the context of random walks in random environment, the proof is entirely non-probabilistic and holds for \emph{any} nearest neighbor walk trajectory $(X_0,X_1,...)$ such that $X_n \rightarrow \infty$ and $\lim_{x \to \infty} T_x/x = C$.
\subsection{Comparison of Environments} \label{subsec:ComparisonOfEnvironments}
Let $\prec$ be the ordering on the set of single site configurations $\Lambda$ defined by
\begin{align*}
(q,0) \prec ... \prec (q,R-1) \prec (p,L-1) \prec ... \prec (p,0).
\end{align*}
We write $\lambda \preceq \widetilde{\lambda}$ if $\lambda \prec \widetilde{\lambda}$ or $\lambda = \widetilde{\lambda}$, and $\omega \preceq \widetilde{\omega}$ if $\omega(x) \preceq \widetilde{\omega}(x)$, for all $x \in \mathbb{Z}$. In this case, we also say that the environment $\widetilde{\omega}$ \emph{dominates} the environment $\omega$. The following lemma relating the possibility of right transience in different environments will be important for the analysis of transience and recurrence in the critical case $\alpha = 1/2$.
\begin{Lem} \label{lem:ComparisonOfEnvironments}
If $q < p$ and $\omega \preceq \widetilde{\omega}$, then $\mathbb{P}_{\omega}(\mathcal{A}_0^+) \leq \mathbb{P}_{\widetilde{\omega}}(\mathcal{A}_0^+)$. In particular, by Lemma \ref{lem:SimpleTransConditions}, if $q < p$, $\omega \preceq \widetilde{\omega}$, and $\mathbb{P}_{\widetilde{\omega}}(X_n \rightarrow \infty) = 0$, then $\mathbb{P}_{\omega}(X_n \rightarrow \infty) = 0$.
\end{Lem}
For the proof it will be convenient to introduce the following definitions.
\begin{itemize}
\item The threshold function $f: \Lambda \times [0,1] \rightarrow \{1,-1\}$ is defined by
\begin{align*}
f(\lambda, u) & = \mathds{1} \{u \leq p \} - \mathds{1} \{u > p \} ~,~ \mbox{ if } \lambda \in \Lambda_p, \\
f(\lambda, u) & = \mathds{1} \{u \leq q \} - \mathds{1} \{u > q \} ~,~ \mbox{ if } \lambda \in \Lambda_q.
\end{align*}
\item The transition function $g: \Lambda \times \{1,-1\} \rightarrow \Lambda$ is defined by
\begin{align*}
g(\lambda,j) = {\lambda'} \Longleftrightarrow \{ Y_n^x = \lambda, J_n^x = j \} \mbox{ implies } Y_{n+1}^x = \lambda'.
\end{align*}
That is, $g(\lambda,j)$ is the (deterministic) next configuration at site $x$ if the walk jumps in direction $j$ from site $x$ when $x$ is in configuration $\lambda$.
\end{itemize}
\begin{proof}[Proof of Lemma \ref{lem:ComparisonOfEnvironments}]
For $x \in \mathbb{Z}$, let $(Y_n^x, J_n^x)_{n \in \mathbb{N}}$ and $(\widetilde{Y}_n^x, \widetilde{J}_n^x)_{n \in \mathbb{N}}$ denote, respectively, the state sequences of the extended single site Markov chains at $x$ for the environments $\omega$ and $\widetilde{\omega}$. Also, let $(U_n^x)_{x \in \mathbb{Z}, n \in \mathbb{N}}$ be i.i.d. uniform([0,1]) random variables. For each $x$, we will use the i.i.d.
sequence $(U_n^x)_{n \in \mathbb{N}}$ to couple the state sequences $(Y_n^x, J_n^x)_{n \in \mathbb{N}}$ and $(\widetilde{Y}_n^x, \widetilde{J}_n^x)_{n \in \mathbb{N}}$ in such a way that $J_n^x \leq \widetilde{J}_n^x$, for all $n$. By independence, this coupling at each individual site $x$ passes to a coupling of the entire joint processes $(Y_n^x, J_n^x)_{x \in \mathbb{Z}, n \in \mathbb{N}}$ and $(\widetilde{Y}_n^x, \widetilde{J}_n^x)_{x \in \mathbb{Z}, n \in \mathbb{N}}$, with the correct law. This final larger coupling will be used to show that $\mathbb{P}_{\omega}(\mathcal{A}_0^+) \leq \mathbb{P}_{\widetilde{\omega}}(\mathcal{A}_0^+)$.
\\
\noindent \emph{Step 1}: The Coupling \\
For a fixed site $x$, we construct the sequences $(Y_n^x, J_n^x)_{n \in \mathbb{N}}$ and $(\widetilde{Y}_n^x, \widetilde{J}_n^x)_{n \in \mathbb{N}}$ inductively from the i.i.d. random variables $(U_n^x)_{n \in \mathbb{N}}$ as follows.
\begin{itemize}
\item $Y_1^x = \omega(x)$ and $\widetilde{Y}_1^x = \widetilde{\omega}(x)$.
\item For $n \geq 1$,
\begin{align*}
J_n^x = f(Y_n^x,U_n^x), Y_{n+1}^x = g(Y_n^x, J_n^x) ~\mbox{ and }~ \widetilde{J}_n^x = f(\widetilde{Y}_n^x,U_n^x), \widetilde{Y}_{n+1}^x = g(\widetilde{Y}_n^x, \widetilde{J}_n^x).
\end{align*}
\end{itemize}
Clearly, the sequences $(Y_n^x,J_n^x)_{n \in \mathbb{N}}$ and $(\widetilde{Y}_n^x,\widetilde{J}_n^x)_{n \in \mathbb{N}}$ each have the appropriate marginal laws under this coupling. Moreover, by considering the various possible cases for $Y_n^x, \widetilde{Y}_n^x \in \Lambda$ and possible ranges for $U_n^x \in [0,1]$ one finds that, since $q < p$, whatever the value of $U_n^x$ is:
\begin{align*}
Y_n^x \preceq \widetilde{Y}_n^x \Longrightarrow J_n^x \leq \widetilde{J}_n^x \mbox{ and } Y_{n+1}^x \preceq \widetilde{Y}_{n+1}^x.
\end{align*}
Since $Y_1^x = \omega(x) \preceq \widetilde{\omega}(x) = \widetilde{Y}_1^x$ it follows, by induction, that
\begin{align} \label{eq:DominationJumpSequences}
J_n^x \leq \widetilde{J}_n^x, \mbox{ for all } n.
\end{align}
\noindent \emph{Step 2}: Relation to the probability of $\mathcal{A}_0^+$ \\
Let $(Z_x)_{x \geq 0}$ and $(\widetilde{Z}_x)_{x \geq 0}$ denote, respectively, the right jumps Markov chains constructed from the jump patterns $(J_n^x)_{x \in \mathbb{Z}, n \in \mathbb{N}}$ and $(\widetilde{J}_n^x)_{x \in \mathbb{Z}, n \in \mathbb{N}}$ according to (\ref{eq:DefRightJumpsMC}). Also, for $x, k \geq 0$ define $\Theta_{x,k}$ and $\widetilde{\Theta}_{x,k}$ by
\begin{align*}
\Theta_{x,k} = \inf \Big\{n : \sum_{m=1}^n \mathds{1} \{J_m^x = -1\} = k \Big\} ~,~ \widetilde{\Theta}_{x,k} = \inf \Big\{n : \sum_{m=1}^n \mathds{1} \{ \widetilde{J}_m^x = -1\} = k \Big\}.
\end{align*}
If $Z_{x-1} \leq \widetilde{Z}_{x-1}$, then applying the definition (\ref{eq:DefRightJumpsMC}) gives
\begin{align*}
Z_x & = \sum_{m=1}^{\Theta_{x,Z_{x-1}}} \mathds{1} \{J_m^x = 1\} \stackrel{(a)}{\leq} \sum_{m=1}^{\widetilde{\Theta}_{x,Z_{x-1}}} \mathds{1} \{J_m^x = 1\} \\
& \leq \sum_{m=1}^{\widetilde{\Theta}_{x,\widetilde{Z}_{x-1}}} \mathds{1} \{J_m^x = 1\} \stackrel{(b)}{\leq} \sum_{m=1}^{\widetilde{\Theta}_{x,\widetilde{Z}_{x-1}}} \mathds{1} \{\widetilde{J}_m^x = 1\} = \widetilde{Z}_x.
\end{align*}
Here, (a) follows from (\ref{eq:DominationJumpSequences}), which implies $\Theta_{x,k} \leq \widetilde{\Theta}_{x,k}$ for any $k$, and (b) follows directly from (\ref{eq:DominationJumpSequences}). Since $Z_0 = \widetilde{Z}_0 = 1$, it follows, by induction, that
\begin{align} \label{eq:DominationRightJumpsChains}
Z_x \leq \widetilde{Z}_x , \mbox{ for all } x \in \mathbb{Z}.
\end{align}
Now, since (\ref{eq:DominationJumpSequences}) and (\ref{eq:DominationRightJumpsChains}) both hold with probability 1 under our coupling, it follows from Lemma \ref{lem:ProbSurvivalZxProbTransienceXn} that
\begin{align*}
\mathbb{P}_{\omega}(\mathcal{A}_0^+) & = \mathbb{P}_{\omega}(X_1 = 1) \cdot \mathbb{P}_{\omega}(Z_x > 0, \forall x > 0) \\
& = \mathbb{P}_{\omega}(J_1^0 = 1) \cdot \mathbb{P}_{\omega}(Z_x > 0, \forall x > 0) \\
& \leq \mathbb{P}_{\widetilde{\omega}}(J_1^0 = 1) \cdot \mathbb{P}_{\widetilde{\omega}}(Z_x > 0, \forall x > 0) \\
& = \mathbb{P}_{\widetilde{\omega}}(X_1 = 1) \cdot \mathbb{P}_{\widetilde{\omega}}(Z_x > 0, \forall x > 0) \\
& = \mathbb{P}_{\widetilde{\omega}}(\mathcal{A}_0^+).
\end{align*}
Here, we have dropped the tildes on all random variables corresponding to the initial environment $\widetilde{\omega}$, since the probability measure $\mathbb{P}_{\widetilde{\omega}}$ is now explicit.
\end{proof}
\section{The Noncritical Case} \label{sec:NoncriticalCase}
Here we analyze the behavior of the random walk $(X_n)$ for $\alpha \not= 1/2$, proving Theorems \ref{thm:RightLeftTransienceCutoff}--\ref{thm:R1Speed}. We begin in section \ref{subsec:SurvivalRightJumpsMarkovChain} with a key lemma for the survival probability of the right jumps Markov chain $(Z_x)$, from which we derive a number of useful corollaries. Using these results, Theorem \ref{thm:RightLeftTransienceCutoff} on the cutoff for right/left transience and Theorem \ref{thm:BallisticityWhenNonCritical} on ballisticity of the random walk are then proved in sections \ref{subsec:CutoffForRightLeftTransience} and \ref{subsec:Ballisticity}. Theorems \ref{thm:L1Speed} and \ref{thm:R1Speed} on the exact speed of the random walk in certain special cases are proved afterward in sections \ref{subsec:SpeedWithL1} and \ref{subsec:SpeedWithR1}.
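The arguments of this section lean on the closed-form stationary probabilities $\pi_p$ and $\pi_q$ of \eqref{eq:pippiq} through the identity \eqref{eq:ExpectedValuePhiEqualAlpha}. As a numerical sanity check of \eqref{eq:pippiq}, one can power-iterate the single site transition matrix $M$; the Python sketch below is ours, with arbitrarily chosen parameter values.

```python
def stationary_pi(p, q, R, L, iters=5000):
    """Stationary distribution of the single site chain, found by
    repeatedly applying the transition matrix M to a distribution."""
    states = [("p", i) for i in range(L)] + [("q", i) for i in range(R)]

    def step(dist):
        new = {s: 0.0 for s in states}
        for (mode, i), mass in dist.items():
            if mode == "p":
                new[("p", 0)] += mass * p                       # right jump
                tgt = ("p", i + 1) if i < L - 1 else ("q", 0)   # left jump
                new[tgt] += mass * (1 - p)
            else:
                new[("q", 0)] += mass * (1 - q)                 # left jump
                tgt = ("q", i + 1) if i < R - 1 else ("p", 0)   # right jump
                new[tgt] += mass * q
        return new

    dist = {s: 1.0 / len(states) for s in states}
    for _ in range(iters):
        dist = step(dist)
    return dist

p, q, R, L = 0.7, 0.4, 2, 3
pi = stationary_pi(p, q, R, L)
D = (1-q)*q**R*(1-(1-p)**L) + p*(1-p)**L*(1-q**R)   # common denominator
pi_p = (1-q)*q**R*(1-(1-p)**L) / D                  # closed form for pi_p
pi_q = p*(1-p)**L*(1-q**R) / D                      # closed form for pi_q
assert abs(sum(pi[s] for s in pi if s[0] == "p") - pi_p) < 1e-9
assert abs(sum(pi[s] for s in pi if s[0] == "q") - pi_q) < 1e-9
assert abs(pi_p + pi_q - 1.0) < 1e-9
```

Since the chain is irreducible and aperiodic (the state $(p,0)$ has a self-loop of probability $p$), the iteration converges geometrically, so the agreement with the closed form is well within the stated tolerance.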
Throughout we use the following notation: \begin{itemize} \item $T_x^{(i)}$, $x \in \mathbb{Z}$ and $i \in \mathbb{N}$, is the $i$-th hitting time of site $x$. \begin{align} \label{eq:Def_ith_HittingTimes} T_x^{(1)} = T_x ~~\mbox{ and }~~ T_x^{(i+1)} = \inf\{n > T_x^{(i)}: X_n = x\}, \end{align} with the convention $T_x^{(j)} = \infty$, for all $j > i$, if $T_x^{(i)} = \infty$. \item $N_x$ is the total number of visits to site $x$, as in (\ref{eq:DefNx}), and $N_x^y$ is the number of visits to site $x$ up to time $T_y$. \begin{align*} N_x & = |\{n \geq 0: X_n = x\}| ~,~ x \in \mathbb{Z}. \\ N_x^y & = |\{0 \leq n \leq T_y: X_n = x\}| ~,~ x,y \in \mathbb{Z}. \end{align*} \item $R_x$ is the total number of right jumps from site $x$, and $L_x$ is the total number of left jumps from site $x$. \begin{align*} R_x & = |\{n \geq 0: X_n = x, X_{n+1} = x + 1\}| ~,~ x \in \mathbb{Z}. \\ L_x & = |\{n \geq 0: X_n = x, X_{n+1} = x - 1\}| ~,~ x \in \mathbb{Z}. \end{align*} \item $B_x$ is the farthest distance the random walk ever steps backward from site $x$ after hitting $x$ for the first time. \begin{align*} B_x = \sup \{k \geq 0: \exists n \geq T_x \mbox{ with } X_n = x - k \} ~,~ x \in \mathbb{Z}. \end{align*} In the case $T_x = \infty$, $B_x \equiv 0$. \item $\mathcal{A}_n^+$, given by (\ref{eq:DefAnpm}), is the event that the random walk steps to the right at time $n$ and never returns to its time-$n$ location. \item $\mathcal{B}_{\epsilon}$, $0 < \epsilon < 1$, is the event that $B_x \leq \epsilon x$, for all sufficiently large $x$. \begin{align} \label{eq:DefBepsilon} \mathcal{B}_{\epsilon} = \left\{\exists N \in \mathbb{N} \mbox{ s.t. } B_x \leq \epsilon x, \forall x \geq N \right\}.
\end{align} \end{itemize} \subsection{Survival of Right Jumps Markov Chain $(Z_x)$} \label{subsec:SurvivalRightJumpsMarkovChain} \begin{Lem} \label{Lem:ProbZxGreater0} If $\alpha = \alpha(p,q,R,L) > 1/2$, then there exists some $\beta = \beta(p,q,R,L) > 0$ such that, for any initial environment $\omega$, \begin{align*} \mathbb{P}_{\omega}(Z_x > 0, \forall x > 0) \geq \beta. \end{align*} \end{Lem} \begin{proof} Fix $p,q,R,L$ such that $\alpha > 1/2$ and any initial environment $\omega$. Define $0 < \epsilon < 1/4$ by the relation $\alpha = 1/2 + 2 \epsilon$, and for $\widehat{\lambda} = (\lambda,j) \in \widehat{\Lambda}$, let $\phi(\widehat{\lambda}) = \mathds{1}\{j = 1\}$. By (\ref{eq:ExpectedValuePhiEqualAlpha}) we have $\alpha = \mathbb{E}_{\widehat{\pi}}(\phi)$, where $\widehat{\pi}$ is the stationary distribution for the extended single site transition matrix $\widehat{M}$. So, by standard large deviation bounds for finite-state Markov chains, there exist some $0 < a < 1$ and $n_0 \in \mathbb{N}$ such that for any initial state $\widehat{\lambda} \in \widehat{\Lambda}$ the Markov chain $(\widehat{Y}_n)$ with transition matrix $\widehat{M}$ satisfies \begin{align*} \mathbb{P}_{\widehat{\lambda}}\left(\frac{1}{n} \sum_{m=1}^{n} \phi(\widehat{Y}_m) \leq 1/2 + \epsilon \right) = \mathbb{P}_{\widehat{\lambda}}\left(\frac{1}{n} \sum_{m=1}^{n} \phi(\widehat{Y}_m) \leq \mathbb{E}_{\widehat{\pi}}(\phi) - \epsilon \right) \leq a^n~, n \geq n_0.
\end{align*} Using this estimate we obtain the following important inequality: \begin{align} \label{eq:Zx_given_ZxMinus1} \mathbb{P}_{\omega} (Z_x& \leq n(1/2 + \epsilon)/(1/2 - \epsilon) ~|~ Z_{x-1} = n) \nonumber \\ & = \mathbb{P}_{\omega}(\Theta_{x,n} \leq n/(1/2 - \epsilon) ~|~ Z_{x-1} = n) \nonumber \\ & = \mathbb{P}_{\omega}\left( \exists~ n \leq m \leq n/(1/2-\epsilon) : \sum_{i = 1}^{m} (1 - \phi(\widehat{Y}_i^x)) = n \right) \nonumber \\ & = \mathbb{P}_{\omega}\left( \exists~ n \leq m \leq n/(1/2-\epsilon) : \frac{1}{m} \sum_{i = 1}^{m} \phi(\widehat{Y}_i^x) = \frac{m - n}{m} \right) \nonumber \\ & \leq \mathbb{P}_{\omega}\left( \exists~ n \leq m \leq n/(1/2-\epsilon) : \frac{1}{m} \sum_{i = 1}^{m} \phi(\widehat{Y}_i^x) \leq 1/2 + \epsilon \right) \nonumber \\ & \leq \mathbb{P}_{\omega}\left( \exists m \geq n : \frac{1}{m} \sum_{i = 1}^{m} \phi(\widehat{Y}_i^x) \leq 1/2 + \epsilon \right) \nonumber \\ & \leq \sum_{m=n}^{\infty} a^{m} = \frac{a^n}{1-a} ~,~ \mbox{for all $n \geq n_0$ and $x \in \mathbb{N}$.} \end{align} Now, define $b > 1$ by $b = \frac{1/2 + \epsilon}{1/2 - \epsilon}$, and take $n_1 \geq n_0$ sufficiently large that $\frac{a^{n_1}}{1-a} < 1$. Thus, $\frac{a^{\lceil n_1 b^{x-1}\rceil}}{1-a} < 1$, $\forall x \in \mathbb{N}$. Applying the inequality (\ref{eq:Zx_given_ZxMinus1}) gives \begin{align*} \mathbb{P}_{\omega}( & Z_x > 0, \forall x > 0) \\ & \geq \mathbb{P}_{\omega}( Z_x \geq n_1 b^x, \forall x > 0) \\ & = \mathbb{P}_{\omega}(Z_1 \geq n_1 b) \cdot \prod_{x=2}^{\infty} \mathbb{P}_{\omega}(Z_x \geq n_1 b^x | Z_1 \geq n_1 b, ... , Z_{x-1} \geq n_1 b^{x-1}) \\ & \geq \mathbb{P}_{\omega}(Z_1 \geq n_1 b) \cdot \prod_{x=2}^{\infty} \mathbb{P}_{\omega}(Z_x \geq n_1 b^x | Z_{x-1} = \lceil n_1 b^{x-1} \rceil) \\ & \geq (\min\{p,q\})^{\lceil n_1 b \rceil} \cdot \prod_{x=2}^{\infty} \left(1 - \frac{a^{\lceil n_1 b^{x-1}\rceil}}{1-a} \right) \equiv \beta.
\end{align*} Note that $\sum_{x=2}^{\infty} \frac{a^{\lceil n_1 b^{x-1}\rceil}}{1-a} < \infty$, so $\prod_{x=2}^{\infty} \left(1 - \frac{a^{\lceil n_1 b^{x-1}\rceil}}{1-a} \right) > 0$. \end{proof} \begin{Cor} \label{cor:PrAnPlusGreaterEqualBeta} If $\alpha = \alpha(p,q,R,L) > 1/2$ then there exists some $\beta = \beta(p,q,R,L) > 0$, such that for any initial environment $\omega$ and random walk path $(x_0, x_1,...,x_n)$, \begin{align} \label{eq:PrAnPlusGreaterEqualBeta} \mathbb{P}_{\omega,x_0}(\mathcal{A}_n^+|X_0 = x_0,...,X_n = x_n) \geq \beta. \end{align} \end{Cor} \begin{proof} Since the claimed bound is uniform in the initial environment $\omega$, it suffices to consider the case $x_0 = n = 0$. By Lemma \ref{Lem:ProbZxGreater0}, there exists some $\beta' > 0$ such that $\mathbb{P}_{\omega}(Z_x > 0, \forall x > 0) \geq \beta'$, for any initial environment $\omega$. Thus, by Lemma \ref{lem:ProbSurvivalZxProbTransienceXn}, \begin{align*} \mathbb{P}_{\omega}(\mathcal{A}_0^+) \geq \min\{p,q\} \cdot \beta' \equiv \beta \end{align*} for any initial environment $\omega$. \end{proof} \begin{Cor} \label{cor:NxDominatedByGeometric} If $\alpha > 1/2$ then, for any initial environment $\omega$ and site $x \geq 0$, \begin{align*} \mathbb{P}_{\omega}(N_x \geq k) \leq (1 - \beta)^{k-1} ~,~ \mbox{ for all } k \geq 1, \end{align*} where $\beta > 0$ is the constant in Corollary \ref{cor:PrAnPlusGreaterEqualBeta}. \end{Cor} \begin{proof} Let $A_x^{(i)}$ be the set of all random walk paths $(x_0, x_1, ..., x_n)$, of any length $n$, which end in an $i$-th hitting time of site $x$. That is, $\{X_0 = x_0, X_1 = x_1,..., X_n = x_n\} \Longrightarrow T_x^{(i)} = n$. For brevity we denote $(X_0,...,X_n)$ as $X_0^n$ and $(x_0,...,x_n)$ as $x_0^n$.
By Corollary \ref{cor:PrAnPlusGreaterEqualBeta}, for any $i \geq 1$, we have \begin{align*} & \mathbb{P}_{\omega}(T_x^{(i+1)} < \infty | T_x^{(i)} < \infty) \\ & = \sum_{x_0^n \in A_x^{(i)}} \mathbb{P}_{\omega}(X_0^n = x_0^n | T_x^{(i)} < \infty) \cdot \mathbb{P}_{\omega} (T_x^{(i+1)} < \infty | T_x^{(i)} < \infty, X_0^n = x_0^n) \\ & = \sum_{x_0^n \in A_x^{(i)}} \mathbb{P}_{\omega}(X_0^n = x_0^n | T_x^{(i)} < \infty) \cdot \mathbb{P}_{\omega} (T_x^{(i+1)} < \infty | X_0^n = x_0^n) \\ & \leq \sum_{x_0^n \in A_x^{(i)}} \mathbb{P}_{\omega}(X_0^n = x_0^n | T_x^{(i)} < \infty) \cdot \mathbb{P}_{\omega}((\mathcal{A}_n^+)^c|X_0^n = x_0^n) \\ & \leq (1- \beta). \end{align*} Hence, for each $k \geq 1$, \begin{align*} \mathbb{P}_{\omega}(N_x \geq k) = \mathbb{P}_{\omega}(T_x^{(1)} < \infty) \cdot \prod_{i = 1}^{k-1} \mathbb{P}_{\omega} (T_x^{(i+1)} < \infty | T_x^{(i)} < \infty) \leq (1-\beta)^{k-1}. \end{align*} \end{proof} \begin{Cor} \label{cor:BxDominatedByGeometric} If $\alpha > 1/2$ then, for any initial environment $\omega$ and site $x \geq 0$, \begin{align} \label{eq:ExponentialBoundOnBx} \mathbb{P}_{\omega}(B_x \geq k) \leq (1 - \beta)^k ~,~ \mbox{ for all } k \geq 1, \end{align} where $\beta > 0$ is the constant in Corollary \ref{cor:PrAnPlusGreaterEqualBeta}. In particular, by the Borel-Cantelli lemma, \begin{align*} \mathbb{P}_{\omega}(\mathcal{B}_{\epsilon}) = 1, \mbox{ for each } 0 < \epsilon < 1. \end{align*} \end{Cor} \begin{proof} The proof is similar to that of Corollary \ref{cor:NxDominatedByGeometric}. For $x \in \mathbb{Z}$, let $\tau_x^{(0)}$ be the first hitting time of site $x$, and let $\tau_x^{(i)}$, $i \in \mathbb{N}$, be the first time greater than $\tau_x^{(i-1)}$ at which the walk steps backward from its position $x - (i-1)$ at time $\tau_x^{(i-1)}$.
That is, $\tau_x^{(0)} = T_x$, and for $i \geq 1$, \begin{align*} \tau_x^{(i)} & = \inf \{n > \tau_x^{(i-1)} : X_n < X_{\tau_x^{(i-1)}} \} \\ & = \inf \{n > T_x : X_n = x - i \}, \end{align*} with the convention $\tau_x^{(j)} = \infty$, for all $j > i$, if $\tau_x^{(i)} = \infty$. Also, let $A_x^{(i)}$ be the set of all random walk paths $(x_0, x_1, ..., x_n)$, of any length $n$, which end in an $i$-th ``back step time'' from site $x$. That is, $\{X_0 = x_0, X_1 = x_1,..., X_n = x_n\} \Longrightarrow \tau_x^{(i)} = n$. As above, we denote $(X_0,...,X_n)$ as $X_0^n$ and $(x_0,...,x_n)$ as $x_0^n$. By Corollary \ref{cor:PrAnPlusGreaterEqualBeta}, for any $i \geq 0$, we have \begin{align*} & \mathbb{P}_{\omega}(\tau_x^{(i+1)} < \infty | \tau_x^{(i)} < \infty) \\ & = \sum_{x_0^n \in A_x^{(i)}} \mathbb{P}_{\omega}(X_0^n = x_0^n | \tau_x^{(i)} < \infty) \cdot \mathbb{P}_{\omega} (\tau_x^{(i+1)} < \infty | \tau_x^{(i)} < \infty, X_0^n = x_0^n) \\ & = \sum_{x_0^n \in A_x^{(i)}} \mathbb{P}_{\omega}(X_0^n = x_0^n | \tau_x^{(i)} < \infty) \cdot \mathbb{P}_{\omega} (\tau_x^{(i+1)} < \infty | X_0^n = x_0^n) \\ & \leq \sum_{x_0^n \in A_x^{(i)}} \mathbb{P}_{\omega}(X_0^n = x_0^n | \tau_x^{(i)} < \infty) \cdot \mathbb{P}_{\omega}((\mathcal{A}_n^+)^c|X_0^n = x_0^n) \\ & \leq (1- \beta). \end{align*} So, for each $k \geq 1$, \begin{align*} \mathbb{P}_{\omega}(B_x \geq k) = \mathbb{P}_{\omega}(\tau_x^{(0)} < \infty) \cdot \prod_{i = 0}^{k-1} \mathbb{P}_{\omega} (\tau_x^{(i+1)} < \infty | \tau_x^{(i)} < \infty) \leq (1-\beta)^k. \end{align*} \end{proof} \subsection{Proof of Theorem \ref{thm:RightLeftTransienceCutoff}} \label{subsec:CutoffForRightLeftTransience} \begin{proof}[Proof of Theorem \ref{thm:RightLeftTransienceCutoff}] If $\alpha > 1/2$ then Corollary \ref{cor:BxDominatedByGeometric} implies that $B_0$ is $\mathbb{P}_{\omega}$ a.s. finite, for any initial environment $\omega$.
Thus, by part (ii) of Lemma \ref{lem:SimpleTransConditions}, for $\alpha > 1/2$ we must have $X_n \rightarrow \infty$, $\mathbb{P}_{\omega}$ a.s., for any initial environment $\omega$. It follows by symmetry that, for $\alpha < 1/2$ and any initial environment $\omega$, $X_n \rightarrow -\infty$, $\mathbb{P}_{\omega}$ a.s. \end{proof} \subsection{Proof of Theorem \ref{thm:BallisticityWhenNonCritical}} \label{subsec:Ballisticity} For the proof of Theorem \ref{thm:BallisticityWhenNonCritical} we will assume that $\alpha > 1/2$; the case $\alpha < 1/2$ follows by symmetry considerations. The primary ingredients for the proof are Corollaries \ref{cor:NxDominatedByGeometric} and \ref{cor:BxDominatedByGeometric}, above, and Lemmas \ref{lem:SimpleConsequencesAlphaGreaterHalf} and \ref{lem:StrongLawForNx}, given below. Lemma \ref{lem:SimpleConsequencesAlphaGreaterHalf} is a simple consequence of Theorem \ref{thm:RightLeftTransienceCutoff}. Lemma \ref{lem:StrongLawForNx} shows that, when $\alpha > 1/2$, the sequence $(N_x)$ obeys a strong law of large numbers. The proof of this fact is somewhat lengthy and is deferred to Appendix \ref{sec:StrongLawForNx}. \begin{Lem} \label{lem:SimpleConsequencesAlphaGreaterHalf} Assume that $\alpha > 1/2$ and $X_0 = 0$. \begin{itemize} \item[(i)] For all $x \geq 0$, the random variables $N_x$, $L_x$, and $R_x$ are each independent of the environment to the left of site $x$ when site $x$ is first reached: \begin{align*} N_x, L_x, R_x \perp \{ \omega_{T_x}(y), y < x \}. \end{align*} \item[(ii)] $N_x^y$ and $N_y$ are independent, for all $0 \leq x < y$. \item[(iii)] If, for some $y \geq 0$, $\omega(x)$ is constant for $x \geq y$, then $N_x$ and $N_y$ have the same distribution for all $x \geq y$. Similarly, if $\omega(x) = \omega(y)$ for all $x \geq y$, then $R_x$ and $R_y$ have the same distribution for all $x \geq y$, and $L_x$ and $L_y$ have the same distribution for all $x \geq y$.
\end{itemize} \end{Lem} \begin{proof} Since $\alpha > 1/2$ and $X_0 = 0$, Theorem \ref{thm:RightLeftTransienceCutoff} shows that $T_x$ is a.s. finite, for each $x \geq 0$, and that regardless of the environment to the left of site $x$ at time $T_x$, the walk returns to site $x$ with probability 1 each time it steps left from $x$. This implies (i). Now, (ii) and (iii) follow easily, since (i) shows that the distributions of $N_x$, $L_x$, and $R_x$ are each entirely determined by the values of $\omega_{T_x}(y), y \geq x$, which are the same as the original values $\omega(y), y \geq x$. \end{proof} \begin{Lem} \label{lem:StrongLawForNx} If $\alpha > 1/2$ then, for any initial environment $\omega$, \begin{align*} \lim_{n \to \infty} \frac{1}{n} \sum_{x = 1}^n (N_x - \mathbb{E}_{\omega}(N_x)) = 0, ~\mathbb{P}_{\omega} \mbox{ a.s. } \end{align*} \end{Lem} \begin{proof}[Proof of Theorem \ref{thm:BallisticityWhenNonCritical}, Equation (\ref{eq:LiminfXnnGreaterDelta}), with $\alpha > 1/2$] Let $\alpha > 1/2$, and fix any initial environment $\omega$. Also, let $\beta > 0$ be the constant defined in Corollary \ref{cor:PrAnPlusGreaterEqualBeta}. We will show that: \begin{itemize} \item[(i)] $\limsup_{x \to \infty} \frac{1}{x} \sum_{y = 1}^x N_y \leq 1/\beta,~ \mathbb{P}_{\omega} \mbox{ a.s. }$ \item[(ii)] $\limsup_{x \to \infty} T_x/x \leq \limsup_{x \to \infty} \frac{1}{x} \sum_{y = 1}^x N_y,~ \mathbb{P}_{\omega} \mbox{ a.s. }$ \item[(iii)] $\liminf_{n \to \infty} X_n/n \geq \left(\limsup_{x \to \infty} T_x/x\right)^{-1},~ \mathbb{P}_{\omega} \mbox{ a.s. }$ \end{itemize} The result (\ref{eq:LiminfXnnGreaterDelta}) follows directly from these three facts. \\ \noindent \emph{Proof of (i)}: This is immediate from Lemma \ref{lem:StrongLawForNx} and Corollary \ref{cor:NxDominatedByGeometric}. \\ \noindent \emph{Proof of (ii)}: Since $\alpha > 1/2$, $X_n \rightarrow \infty$ $\mathbb{P}_{\omega}$ a.s.
So, $\sum_{x \leq 0} N_x$ is $\mathbb{P}_{\omega}$ a.s. finite. Thus, $\mathbb{P}_{\omega}$ a.s. we have \begin{align*} \limsup_{x \to \infty} \frac{T_x}{x} = \limsup_{x \to \infty} \frac{1}{x} \sum_{y = -\infty}^x N_y^x \leq \limsup_{x \to \infty} \frac{1}{x} \sum_{y = -\infty}^x N_y = \limsup_{x \to \infty} \frac{1}{x} \sum_{y = 1}^x N_y. \end{align*} \noindent \emph{Proof of (iii)}: For $0 < \epsilon < 1$, let $\mathcal{B}_{\epsilon}' = \mathcal{B}_{\epsilon} \cap \{T_x < \infty, \forall x > 0\}$, where $\mathcal{B}_{\epsilon}$ is defined by (\ref{eq:DefBepsilon}). On the event $\mathcal{B}_{\epsilon}'$, for all sufficiently large $x$ and $T_x \leq n < T_{x+1}$, we have \begin{align*} \frac{X_n}{n} \geq \frac{x - \epsilon x}{T_{x+1}} = (1-\epsilon) \frac{x+1}{T_{x+1}} - \frac{1-\epsilon}{T_{x+1}}. \end{align*} So, \begin{align*} \liminf_{n \to \infty} \frac{X_n}{n} \geq \liminf_{x \to \infty}~ \left( (1- \epsilon) \frac{x+1}{T_{x+1}} - \frac{1-\epsilon}{T_{x+1}} \right) = (1-\epsilon) \cdot \left(\limsup_{x \to \infty} T_x/x\right)^{-1}. \end{align*} The result follows since $\mathbb{P}_{\omega}(\mathcal{B}_{\epsilon}') = 1$, for each $\epsilon > 0$, due to Corollary \ref{cor:BxDominatedByGeometric} and the fact that the random walk $(X_n)$ is a.s. right transient when $\alpha > 1/2$. \end{proof} \begin{proof}[Proof of Theorem \ref{thm:BallisticityWhenNonCritical}, Equation (\ref{eq:SpeedENxInverse}), with $\alpha > 1/2$] By assumption $\omega(x)$ is constant for $x \geq m$, so Lemma \ref{lem:SimpleConsequencesAlphaGreaterHalf} implies $N_x$ and $N_m$ are equal in law, for all $x \geq m$, under $\mathbb{P}_{\omega}$. Thus, \begin{align} \label{eq:ENxEqualEN0} \mathbb{E}_{\omega}(N_x) = \mathbb{E}_{\omega}(N_m) \equiv \gamma, \mbox{ for all } x \geq m.
\end{align} To show that $\lim_{n \to \infty} X_n/n = 1/\gamma$, $\mathbb{P}_{\omega}$ a.s., note first that (\ref{eq:ENxEqualEN0}) and Lemma \ref{lem:StrongLawForNx} imply that \begin{align} \label{eq:LimNxAvg} \lim_{x \to \infty} \frac{1}{x} \sum_{y = 1}^x N_y = \gamma,~ \mathbb{P}_{\omega} \mbox{ a.s. } \end{align} So, by point (ii) above, we have \begin{align} \label{eq:LimsupTxxBound} \limsup_{x \to \infty} T_x/x \leq \gamma, ~\mathbb{P}_{\omega} \mbox{ a.s. } \end{align} On the other hand, on the event $\mathcal{B}_{\epsilon}' = \mathcal{B}_{\epsilon} \cap \{T_x < \infty, \forall x > 0\}$, we have \begin{align} \label{eq:AddNyNotTx} \liminf_{x \to \infty} \frac{T_x}{x} = \liminf_{x \to \infty} \frac{1}{x} \sum_{y={-\infty}}^x N_y^x \geq \liminf_{x \to \infty} \frac{1}{x} \sum_{y=1}^{\lfloor (1-\epsilon)x \rfloor} N_y^x = \liminf_{x \to \infty} \frac{1}{x} \sum_{y=1}^{\lfloor (1-\epsilon)x \rfloor} N_y. \end{align} Since $\mathbb{P}_{\omega}(\mathcal{B}_{\epsilon}') = 1$, for each $\epsilon > 0$, and the RHS of (\ref{eq:AddNyNotTx}) is equal to $(1-\epsilon) \gamma$, $\mathbb{P}_{\omega}$ a.s., by (\ref{eq:LimNxAvg}), this implies \begin{align} \label{eq:LiminfTxxBound} \liminf_{x \to \infty} T_x/x \geq \gamma, ~\mathbb{P}_{\omega} \mbox{ a.s. } \end{align} Together, (\ref{eq:LimsupTxxBound}) and (\ref{eq:LiminfTxxBound}) imply $\lim_{x \to \infty} T_x/x = \gamma$, $\mathbb{P}_{\omega}$ a.s., so the result follows from Lemma \ref{lem:HittingTimesVersusSpeed}. \end{proof} \subsection{Proof of Theorem \ref{thm:L1Speed}} \label{subsec:SpeedWithL1} The proof of Theorem \ref{thm:L1Speed} is based on the speed formula given in Theorem \ref{thm:BallisticityWhenNonCritical}, and uses the assumptions on $L$ and $\omega$ to obtain a more explicit expression for $\gamma$. \begin{proof}[Proof of Theorem \ref{thm:L1Speed}] We will prove the theorem under the assumption $\omega(x) = (q,0)$, for all $x \geq 0$.
The case $\omega(x) = (q,0)$ in a neighborhood of $+\infty$ follows immediately from this. The main observation is that since $L = 1$ and the random walk starts from $X_0 = 0$ in an environment $\omega$ satisfying $\omega(x) = (q,0)$, for all $x \geq 0$, we have \begin{align*} \omega_n(x) = (q,0), \mbox{ for each $n \geq 0$ and $x > X_n$}. \end{align*} That is, the environment to the right of the current position of the random walk always consists entirely of sites in the $(q,0)$ configuration. Consequently, when the walk jumps right the environment both at its current position and to its right consists entirely of sites in the $(q,0)$ configuration: \begin{align} \label{eq:q0CurrentAndToRight} \{ X_{n-1} = x-1 \mbox{ and } X_n = x \}~\Longrightarrow~ \omega_n(y) = (q,0), \mbox{ for all } y \geq x. \end{align} Using this fact we will show that: \begin{itemize} \item[(i)] $\gamma \equiv \mathbb{E}_{\omega}(N_0) = \frac{1 + \eta}{1 - \eta}$, where $\eta \equiv \mathbb{P}_{\omega}(T_{-1} < \infty)$. \item[(ii)] $\eta$ satisfies $P(\eta) = 0$, where $P$ is as in \eqref{eq:DefPolynomialP}. \end{itemize} Also, using direct calculus arguments we will show that: \begin{itemize} \item[(iii)] The polynomial $P(t)$ has a unique real root in the interval $(0,1)$. \end{itemize} Clearly, $\eta > \mathbb{P}_{\omega}(T_{-1} = 1) = 1-q$, and by Corollary \ref{cor:PrAnPlusGreaterEqualBeta}, we know $\eta < 1$. Thus, the theorem follows from points (i)--(iii) and Theorem \ref{thm:BallisticityWhenNonCritical}.
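Granting (i)--(iii), the speed is easy to evaluate numerically for concrete parameters. The following minimal Python sketch (the values of $p$, $q$, $R$ are illustrative assumptions, not taken from the theorem) locates the root $\eta \in (1-q, 1)$ by bisection, using the factorization $P(t) = (1-t)Q(t)$ derived in the proof of (iii) below, and then evaluates $\gamma = (1+\eta)/(1-\eta)$ and the speed $1/\gamma$.

```python
def Q(t, p, q, R):
    # Q(t) from the factorization P(t) = (1 - t) Q(t) used in the proof of (iii).
    return (1 - q) + (p*q - p - q)*t + p*q*t**2 - (p - q)*(q**R)*(t**R)

def eta_root(p, q, R, tol=1e-12):
    # Bisection on (1 - q, 1): the proof gives 1 - q < eta < 1 and Q(1) < 0,
    # so the sign change brackets the unique root of Q in (0, 1).
    lo, hi = 1 - q, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if Q(lo, p, q, R) * Q(mid, p, q, R) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

# Illustrative parameters (chosen so that alpha(p, q, R, 1) > 1/2); not from the paper.
p, q, R = 0.8, 0.7, 2
eta = eta_root(p, q, R)
gamma = (1 + eta) / (1 - eta)  # = E_omega(N_0), by point (i)
speed = 1 / gamma              # limiting speed, via the ballisticity theorem
```

A quick sanity check on these parameters: $1/q \approx 1.4286$ is indeed a root of $Q$, as claimed in the proof of (iii), while the bisection converges to the other root $\eta \approx 0.411$ inside $(0,1)$.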
\\ \noindent \emph{Proof of (i):} Since $\alpha > 1/2$ the random walk returns to site $0$ with probability $1$ every time it steps left from $0$, and by (\ref{eq:q0CurrentAndToRight}), applied in the case $x = 0$, we know that at each time $n$ when the random walk returns to site $0$ after stepping left on its last visit, we have $\omega_n(x) = \omega_0(x) = (q,0)$, for all $x \geq 0$. Therefore, since $\mathbb{P}_{\omega'}(T_{-1} < \infty)$ does not depend on the values of $\omega'(x)$, $x < 0$, it follows that $L_0$ is a geometric random variable with distribution \begin{align*} \mathbb{P}_{\omega}(L_0 = k) = \eta^k (1 - \eta),~ k \geq 0. \end{align*} Hence, by Lemma \ref{lem:SimpleConsequencesAlphaGreaterHalf}, \begin{align*} \mathbb{E}_{\omega}(N_0) = \mathbb{E}_{\omega}(R_0 + L_0) \stackrel{(*)}{=} [\mathbb{E}_{\omega}(L_1) + 1] + \mathbb{E}_{\omega}(L_0) = 2 \mathbb{E}_{\omega}(L_0) + 1 = \frac{1 + \eta}{1 - \eta}. \end{align*} Step (*) follows from the fact that $R_0 = L_1 + 1$ a.s., since the random walk is a.s. transient to $+\infty$. \\ \noindent \emph{Proof of (ii):} For $i \geq 0$, let $A_i$ be the event that the random walk steps right from site $0$ and eventually returns $i$ times without ever stepping left from $0$, and let $A_i'$ be the event that the random walk steps right from site $0$ and eventually returns $i$ times without stepping left from $0$, but then does step left on its next visit: \begin{align*} A_i & = \{N_0 \geq i + 1, T_{-1} > T_0^{(i+1)}\}, \\ A_i' & = \{N_0 \geq i+1, T_{-1} = T_0^{(i+1)} + 1\}. \end{align*} Clearly, $\mathbb{P}_{\omega}(A_0) = 1$. We claim also that: \begin{align} \label{eq:ProbAiPrimeGivenAi} \mathbb{P}_{\omega}(A_i'|A_i) = \left\{ \begin{array}{l} (1-q)~, \mbox{ for } 0 \leq i \leq R-1\\ (1-p)~, \mbox{ for } i \geq R \end{array} \right.
\end{align} and \begin{align} \label{eq:ProbAi1GivenAi} \mathbb{P}_{\omega}(A_{i+1}|A_i) = \left\{ \begin{array}{l} q \eta ~, \mbox{ for } 0 \leq i \leq R-1\\ p \eta ~, \mbox{ for } i \geq R. \end{array} \right. \end{align} To see (\ref{eq:ProbAiPrimeGivenAi}), note that after jumping right from site $0$ and returning $i$ times in a row, site $0$ will be in configuration $(q,i)$, for $0 \leq i \leq R-1$, and in configuration $(p,0)$ for $i \geq R$. Thus, for $0 \leq i \leq R-1$, we have \begin{align*} \mathbb{P}_{\omega}(A_i'|A_i) = \mathbb{P}_{\omega}\big(X_{T_0^{(i+1)}+1} = -1 ~\big|~ \omega_{T_0^{(i+1)}}(0) = (q,i)\big) = (1 - q) \end{align*} and, for $i \geq R$, we have \begin{align*} \mathbb{P}_{\omega}(A_i'|A_i) = \mathbb{P}_{\omega}\big(X_{T_0^{(i+1)}+1} = -1 ~\big|~ \omega_{T_0^{(i+1)}}(0) = (p,0)\big) = (1 - p). \end{align*} Now, (\ref{eq:ProbAi1GivenAi}) follows from (\ref{eq:ProbAiPrimeGivenAi}) and the following calculation, which is valid for all $i \geq 0$: \begin{align*} \mathbb{P}_{\omega}(A_{i+1}|A_i) & = \mathbb{P}_{\omega}(X_{T_0^{(i+1)}+1} = 1| A_i) \cdot \mathbb{P}_{\omega}(T_0^{(i+2)} < \infty|A_i, X_{T_0^{(i+1)} + 1} = 1) \\ & = \mathbb{P}_{\omega}((A_i')^c| A_i) \cdot \eta. \end{align*} The second equality above follows from (\ref{eq:q0CurrentAndToRight}), which implies that on the event $\{X_{T_0^{(i+1)} + 1} = 1\}$, all sites $x \geq 1$ are in the $(q,0)$ configuration at time ${T_0^{(i+1)} + 1}$. Now, from (\ref{eq:ProbAiPrimeGivenAi}) and (\ref{eq:ProbAi1GivenAi}), along with the fact $\mathbb{P}_{\omega}(A_0) = 1$, we conclude that \begin{align*} \mathbb{P}_{\omega}(A_i') & = \bigg(\prod_{j = 1}^i \mathbb{P}_{\omega}(A_j|A_{j-1}) \bigg) \cdot \mathbb{P}_{\omega}(A_i'|A_i) = \left\{ \begin{array}{l} (q\eta)^i(1-q)~, \mbox{ for } 0 \leq i \leq R-1\\ (q\eta)^R (p\eta)^{i - R} (1-p)~, \mbox{ for } i \geq R. \end{array} \right.
\end{align*} So, \begin{align*} \eta = \mathbb{P}_{\omega}(T_{-1} < \infty) = \sum_{i = 0}^{\infty} \mathbb{P}_{\omega}(A_i') = (1-q) \frac{1 - (q \eta)^R}{1 - q \eta} + (1-p) \frac{(q \eta)^R}{1 - p \eta}. \end{align*} For $0 < \eta < 1$, this condition is equivalent to $P(\eta) = 0$. \\ \noindent \emph{Proof of (iii):} From point (ii) above we know that $\eta\in(0,1)$ is a root of the polynomial $P(t)$. We now show that there cannot be any other roots in $(0,1)$. Observe that $t=1$ is a root of $P$ and that $P$ factors as $$ P(t)=(1-t)\Big(1-q+(pq-p-q)t+pqt^2-(p-q)q^Rt^R\Big)\equiv(1-t)Q(t). $$ Thus, we need to show that the only root of $Q$ in $(0,1)$ is $\eta$. Observe that $\frac{1}{q}>1$ is a root of $Q$. For $R \geq 3$, we have $$ Q''(t) = 2pq-(p-q)q^R R(R-1)t^{R-2}. $$ So, if $R \geq 3$ and $q\ge p$ then $Q$ is convex and can have at most two real roots. Also, if $R \in \{1,2\}$ then $Q$ is quadratic and, thus, has at most two real roots. In either case, since $\frac{1}{q} > 1$ is one of these roots, $Q$ has at most one root in $(0,1)$, and the proof is complete. Now assume that $R \geq 3$ and $q<p$. In this case, $Q''$ has one real root. Thus, $Q'$ can have no more than two real roots. Let $t^+$ denote the largest root of $Q$ in $(0,1)$. We will show below that $Q(1)<0$. Using this along with the facts that $Q(t^+)=Q(\frac{1}{q})=0$ and $Q(t)<0$ for sufficiently large $t$, it follows that $Q'$ has two roots in $(t^+,\infty)$. If there were another root $t^-\in(0,1)$ of $Q$, then $Q'$ would have to have a root in $(t^-,t^+)$, but this is impossible since $Q'$ cannot have more than two real roots. It remains to show that $Q(1)<0$. We have \begin{equation}\label{Q(1)} Q(1)=1-2q+q^{R+1}-p(1-2q+q^R). \end{equation} Since we are assuming that $\alpha>\frac{1}{2}$, it follows from Proposition 1 that $p>\frac{1-2q+q^{R+1}}{1-2q+q^R}\equiv p_0$, if $1-2q+q^{R+1}>0$. On the other hand, if $1-2q+q^{R+1}\le0$, then $p\in(0,1)$ is unrestricted.
In the former case, it follows from \eqref{Q(1)} that, for any $q$, $Q(1)<1-2q+q^{R+1}-p_0(1-2q+q^R)=0$. In the latter case: \begin{itemize} \item If $1 - 2q + q^R = 0$, then \eqref{Q(1)} implies $Q(1) = 1 - 2q + q^{R+1} < 0$. \item If $1 - 2q + q^R > 0$, then \eqref{Q(1)} implies $Q(1) < 1 - 2q + q^{R+1} \leq 0$, for all $p \in (0,1)$. \item If $1 - 2q + q^R < 0$, then \eqref{Q(1)} implies $Q(1) < 1 - 2q + q^{R+1} - (1 - 2q + q^R) < 0$, for all $p \in (0,1)$. \end{itemize} \end{proof} \subsection{Proof of Theorem \ref{thm:R1Speed}} \label{subsec:SpeedWithR1} Unlike the proof of Theorem \ref{thm:L1Speed} for the speed with $L = 1$, the proof of Theorem \ref{thm:R1Speed} for the speed with $R=1$ does not rely on the implicit characterization of the speed given by Theorem \ref{thm:BallisticityWhenNonCritical} in terms of $\gamma$. Instead, the proof is based on a direct method for estimating the hitting times $T_x$ for large $x$. \begin{proof}[Proof of Theorem \ref{thm:R1Speed}] For $0 \leq i \leq L-1$, we define $a_i$ to be the expected hitting time of site $1$, starting from site $0$, in an initial environment with all sites $x < 0$ in the $(p,0)$ configuration and site $0$ in the $(p,i)$ configuration. Also, we define $a_L$ to be the expected hitting time of site $1$, starting from site $0$, in an initial environment with all sites $x < 0$ in the $(p,0)$ configuration and site $0$ in the $(q,0)$ configuration. \begin{align*} a_i = \mathbb{E}_{\omega^{(i)}}(T_1) ~,~ 0 \leq i \leq L, \end{align*} where the environments $\omega^{(i)}$ satisfy: \begin{align*} \omega^{(i)}(x) & = (p,0) ~,~ x < 0 ~\mbox{ and }~ 0 \leq i \leq L. \\ \omega^{(i)}(0) &= (p,i) ~,~0 \leq i \leq L-1. \\ \omega^{(L)}(0) & = (q,0). \end{align*} The proof proceeds in two steps.
First we set up a linear system of equations for the $a_i$'s, which can be solved to obtain the desired speed formula in the case that the initial environment $\omega$ satisfies $\omega(x) = (p,0)$, for all $x < 0$. Then, using this result, we show that the same speed formula holds in the general case. \\ \noindent \emph{Case (1)}: $\omega(x) = (p,0)$, for all $x < 0$. \\ Since $\alpha > 1/2$, $T_x$ is a.s. finite for each $x > 0$, and we define $\Delta_x$, $x \geq 0$, by \begin{align*} \Delta_x = T_{x+1} - T_x. \end{align*} The key observation is that because $R = 1$ and the random walk starts at $X_0 = 0$ in an environment $\omega$ satisfying $\omega(x) = (p,0)$, for all $x < 0$, we have \begin{align*} \omega_n(x) = (p,0), ~\mbox{ for each } n \geq 0 \mbox{ and } x < X_n. \end{align*} That is, the environment to the left of the current position of the random walk always consists entirely of sites in the $(p,0)$ configuration. Applying this fact at the random time $T_x$ it follows that, for each $x > 0$, $\Delta_x$ is independent of $\Delta_0,...,\Delta_{x-1}$ and has distribution: \begin{align*} \mathbb{P}_{\omega}(\Delta_x = k) & = \mathbb{P}_{\omega^{(i)}}(T_1 = k) ~, \mbox{ if } \omega(x) = (p,i) ~,~ 0 \leq i \leq L-1. \\ \mathbb{P}_{\omega}(\Delta_x = k) & = \mathbb{P}_{\omega^{(L)}}(T_1 = k) ~, \mbox{ if } \omega(x) = (q,0). \end{align*} Thus, defining \begin{align*} A_i^x & = \{ 0 \leq y \leq x-1 : \omega(y) = (p,i)\} ~,~0 \leq i \leq L-1 \\ A_L^x & = \{ 0 \leq y \leq x-1: \omega(y) = (q,0)\} \end{align*} and applying the strong law of large numbers for the i.i.d. random variables $\{\Delta_y: \omega(y) = (p,i)\}$ and $\{\Delta_y: \omega(y) = (q,0)\}$ we have that $\mathbb{P}_{\omega}$ a.s.
\begin{align} \label{eq:SLLNDecompositionTxx} \lim_{x \to \infty} T_x/x = \lim_{x \to \infty} \frac{1}{x} \sum_{i = 0}^L \sum_{y \in A_i^x} \Delta_y = \lim_{x \to \infty} \sum_{i = 0}^L \frac{|A_i^x|}{x} \left( \frac{1}{|A_i^x|} \sum_{y \in A_i^x} \Delta_y \right) = \sum_{i = 0}^L d_i a_i. \footnotemark{} \end{align} \footnotetext{Of course, in order to apply the strong law to conclude that $\lim_{x \to \infty} \frac{1}{|A_i^x|} \sum_{y \in A_i^x} \Delta_y = a_i$, we need $|A_i^x| \rightarrow \infty$. However, if $|A_i^x| \not\rightarrow \infty$, for some $i$, then $d_i = 0$. So, $\lim_{x \to \infty} \frac{1}{x} \sum_{y \in A_i^x} \Delta_y = 0 = d_i a_i$, and (\ref{eq:SLLNDecompositionTxx}) still holds.} So, by Lemma \ref{lem:HittingTimesVersusSpeed}, \begin{align} \label{eq:SpeedGivenByHittingTimes} \lim_{n \to \infty} \frac{X_n}{n} = \frac{1}{\sum_{i=0}^L d_i a_i} ~,~ \mathbb{P}_{\omega} \mbox{ a.s. } \end{align} Now, by conditioning on the first step of the walk it is easy to see that the following relations between the $a_i$'s hold: \begin{align} \label{eq:LinearSystemExpectedHittingTimes} a_i & = p \cdot 1 ~+~ (1-p) \cdot (1 + a_0 + a_{i+1}) ~,~ 0 \leq i \leq L-1. \nonumber \\ a_L & = q \cdot 1 ~+~ (1-q) \cdot (1 + a_0 + a_L). \end{align} One possible solution to the system (\ref{eq:LinearSystemExpectedHittingTimes}) is $a_0 = a_1 = ... = a_L = \infty$. However, by (\ref{eq:SpeedGivenByHittingTimes}), this would imply $X_n/n \rightarrow 0$, $\mathbb{P}_{\omega}$ a.s., which contradicts Theorem \ref{thm:BallisticityWhenNonCritical}. Also, if $a_j = \infty$, for any $j$, then to satisfy (\ref{eq:LinearSystemExpectedHittingTimes}) we must have $a_i = \infty$, for all $i$, which, as just shown, cannot happen. Over the real numbers the system (\ref{eq:LinearSystemExpectedHittingTimes}) has a unique solution, given by (\ref{eq:Def_a0}) and (\ref{eq:Def_ai}); this is shown in Appendix \ref{subsec:ExpectedHittingTimes}.
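Since each equation in (\ref{eq:LinearSystemExpectedHittingTimes}) expresses $a_i$ as an affine function of $a_0$ and $a_{i+1}$, the finite real solution can be computed by back-substitution. A minimal Python sketch (the values of $p$, $q$, $L$ are illustrative assumptions, with $\alpha > 1/2$ taken for granted so that the finite solution is the relevant one):

```python
def hitting_time_means(p, q, L):
    """Solve a_i = p + (1-p)(1 + a_0 + a_{i+1}), 0 <= i <= L-1, and
    a_L = q + (1-q)(1 + a_0 + a_L), over the reals, by writing each
    a_i = c_i + m_i * a_0 and back-substituting from i = L down to i = 0."""
    # From the a_L equation: a_L = (1 + (1-q) a_0) / q.
    c, m = 1.0 / q, (1.0 - q) / q
    for _ in range(L):  # i = L-1, ..., 0: c_i = 1 + (1-p) c_{i+1}, m_i = (1-p)(1 + m_{i+1})
        c, m = 1.0 + (1.0 - p) * c, (1.0 - p) * (1.0 + m)
    a0 = c / (1.0 - m)  # from a_0 = c_0 + m_0 a_0
    a = [a0]
    for _ in range(L):
        # invert a_i = 1 + (1-p)(a_0 + a_{i+1})  =>  a_{i+1} = (a_i - 1)/(1-p) - a_0
        a.append((a[-1] - 1.0) / (1.0 - p) - a0)
    return a

# Illustrative parameters; the limiting speed is then 1 / sum_i d_i a_i
# for the limiting right densities d_i of the initial environment.
a = hitting_time_means(p=0.8, q=0.7, L=2)
```

Each $a_i$ returned is an expected hitting time and so is at least $1$; substituting the result back into (\ref{eq:LinearSystemExpectedHittingTimes}) verifies the solution.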
\\ \noindent \emph{Case (2)}: General Case\\ Fix any initial environment $\omega$ such that the limiting right densities $d_i$ exist, and let $s = 1/ (\sum_{i=0}^L d_i a_i)$. Also, for an arbitrary environment $\omega'$, let $\tau$ denote the last hitting time of site $0$ (which is a.s. finite by Theorem \ref{thm:RightLeftTransienceCutoff}). We observe that: \begin{enumerate} \item For any $\omega'$, \begin{align*} \mathbb{P}_{\omega'}(\tau = 0) = \mathbb{P}_{\omega''}(\tau = 0) > 0, \end{align*} by Corollary \ref{cor:PrAnPlusGreaterEqualBeta}, where $\omega''$ is the environment defined by \begin{align} \label{eq:DefOmegaDoublePrime} \omega''(x) = \omega'(x) ~,~ x \geq 0 ~~\mbox{ and }~~\omega''(x) = (p,0) ~,~ x < 0. \end{align} \item For any environment $\omega'$ with $\mathbb{P}_{\omega}(\omega_{\tau} = \omega') > 0$, we have \begin{align*} \mathbb{P}_{\omega}(X_n/n \rightarrow s | \omega_{\tau} = \omega') = \mathbb{P}_{\omega'}(X_n/n \rightarrow s| \tau = 0) = \mathbb{P}_{\omega''}(X_n/n \rightarrow s| \tau = 0), \end{align*} where $\omega''$ is defined by (\ref{eq:DefOmegaDoublePrime}). \item For any environment $\omega'$ with $\mathbb{P}_{\omega}(\omega_{\tau} = \omega') > 0$, $\omega'(x) = \omega(x)$ for all but finitely many $x$. So, the limiting right densities $d_i'$ of sites in each configuration for the environment $\omega'$ are the same as the limiting right densities $d_i$ for the initial environment $\omega$. \end{enumerate} It follows from these three observations and the result for Case (1) that, for any environment $\omega'$ with $\mathbb{P}_{\omega}(\omega_{\tau} = \omega') > 0$, \begin{align*} \mathbb{P}_{\omega}(X_n/n \rightarrow s | \omega_{\tau} = \omega') = \mathbb{P}_{\omega''}(X_n/n \rightarrow s| \tau = 0) = \mathbb{P}_{\omega''}(X_n/n \rightarrow s) = 1. \end{align*} Hence, $X_n/n \rightarrow s$, $\mathbb{P}_{\omega}$ a.s.
\end{proof} \section{The Critical Case} \label{sec:CriticalCase} Here we analyze the transience/recurrence properties of the random walk $(X_n)$ in the critical case $\alpha = 1/2$, proving Theorems \ref{recurrpossible}--\ref{LR2crit}. We begin in section \ref{subsec:TransienceRecurrenceMConN0} with an important lemma on transience/recurrence of Markov chains on $\mathbb{N}_0$. Then, in section \ref{subsec:StepDistributionZxChain}, we establish a framework relating the right jumps Markov chain $(Z_x)$ to the setup of this lemma. Using this framework, Theorem \ref{recurrpossible} is proved in section \ref{subsec:ProofThmRecurrPossible}, Theorem \ref{L1andEnvironment} in section \ref{subsec:ProofThmL1andEnvironment}, and Theorem \ref{LR2crit} in section \ref{subsec:ProofThmLR2crit}. Theorem \ref{RorL1} is proved in section \ref{subsec:ProofThmRorL1}, using other methods. \subsection{Transience and Recurrence for Markov Chains on $\mathbb{N}_0$} \label{subsec:TransienceRecurrenceMConN0} Let $(M_x)_{x \geq 0}$ be a time-homogeneous Markov chain on the state space $\mathbb{N}_0 = \{0,1,2,\dots\}$ with step distribution $U(n)$. That is, \begin{align*} \mathbb{P}(M_{x+1} = m | M_x = n) = \mathbb{P}( U(n) = m ) ~, ~~ n,m \geq 0. \end{align*} We will say that the chain $(M_x)$ is \emph{irreducible and aperiodic with the exception of state $0$} if $0$ is an absorbing state, but it is possible to redefine the transition probabilities from $0$ so as to make the chain irreducible and aperiodic. Also, we will say that the chain $(M_x)$ is \emph{irreducible and aperiodic with the possible exception of state $0$} if it is either irreducible and aperiodic or irreducible and aperiodic with the exception of state $0$.
Finally, we will say that the step distribution $U(n)$ is \emph{well concentrated} if \begin{equation}\label{def:mu} \mu \equiv \lim_{n \to \infty} \mathbb{E}(U(n))/n \end{equation} exists and there exist constants $C, c > 0$ and $N \in \mathbb{N}$ such that: \begin{align} \label{eq:UnEpsilonSquaredBound} & \mathbb{P}\left(|U(n) - \mu n| > \epsilon n\right) \leq C e^{-c \epsilon^2 n} , \mbox{ for $0 < \epsilon \leq 1$ and $n \geq N$.} \\ \label{eq:UnEpsilonBound} & \mathbb{P}\left(|U(n) - \mu n| > \epsilon n\right) \leq C e^{-c \epsilon n} , \mbox{ for $\epsilon \geq 1$ and $n \geq N$.} \end{align} In this case, we also define the quantities $\rho(n)$, $\nu(n)$, and $\theta(n)$ by \begin{equation}\label{def:rhonutheta} \rho(n) = \mathbb{E}(U(n) - \mu n),~ \ \nu(n) = \mathbb{E}((U(n) - \mu n )^2)/n,~ \ \theta(n) = 2 \rho(n)/\nu(n). \end{equation} The following lemma is essentially Theorem 1.3 from \cite{Kozma2013}\footnotemark{}. \begin{Lem} \label{lem:ThetaLessGreater1} Let $(M_x)_{x \geq 0}$ be a time-homogeneous Markov chain on state space $\mathbb{N}_0$, which is irreducible and aperiodic with the possible exception of state $0$ and has well concentrated step distribution $U(n)$. Also, denote by $\mathbb{P}_k$ the probability measure for the chain $(M_x)$ started from $M_0 = k$. Then the following hold for any initial state $k \geq 1$. \begin{itemize} \item[(i)] If $\mu < 1$, then $\mathbb{P}_k(M_x > 0, \forall x \geq 0) = 0$. \item[(ii)] If $\mu > 1$, then $\mathbb{P}_k(M_x > 0, \forall x \geq 0) > 0$. \item[(iii)] If $\mu = 1$, $\liminf_{n \to \infty} \nu(n) > 0$, and $\theta(n) < 1 + \frac{1}{\ln(n)} - \frac{a(n)}{n^{1/2}}$ for sufficiently large $n$, for some function $a(n) \rightarrow \infty$, then $\mathbb{P}_k(M_x > 0, \forall x \geq 0) = 0$.
\item[(iv)] If $\mu = 1$, $\liminf_{n \to \infty} \nu(n) > 0$, and $\theta(n) > 1 + \frac{2}{\ln(n)} + \frac{a(n)}{n^{1/2}}$ for sufficiently large $n$, for some function $a(n) \rightarrow \infty$, then $\mathbb{P}_k(M_x > 0, \forall x \geq 0) > 0$. \end{itemize} \end{Lem} \footnotetext{ There are three small differences. First, in Theorem 1.3 of \cite{Kozma2013} the chain $(M_x)$ is required to be truly irreducible and aperiodic, without the possible exception of state $0$. Second, instead of \eqref{eq:UnEpsilonSquaredBound} and \eqref{eq:UnEpsilonBound}, the following somewhat stronger concentration condition for $U(n)$ is assumed: there exist $c > 0$ and $N \in \mathbb{N}$ such that \begin{align} \label{eq:UnGeneralEpsilonSquareBound} \mathbb{P}\left(|U(n) - \mu n| > \epsilon n\right) \leq 2 e^{-c \epsilon^2 n}, \mbox{ for all $\epsilon > 0$ and $n \geq N$}. \end{align} Finally, there is no assumption that $\liminf_{n \to \infty} \nu(n) > 0$ in cases (iii) and (iv). Allowing the possible exception of state $0$ in the irreducible and aperiodic hypothesis clearly has no effect, since the probability of ever hitting state $0$, starting from a state $k \geq 1$, depends only on the transition probabilities from the nonzero states. Also, the concentration condition (\ref{eq:UnGeneralEpsilonSquareBound}) is used in \cite{Kozma2013} only to bound the error terms in certain Taylor series expansions, and these estimates remain valid if (\ref{eq:UnEpsilonSquaredBound}) and (\ref{eq:UnEpsilonBound}) hold instead, so there is no issue with using the weaker concentration condition. However, the proof of cases (iii) and (iv) given in \cite{Kozma2013} actually works as stated only if $\liminf_{n \to \infty} \nu(n) > 0$, so we require this condition also in our statement.
} \subsection{Step Distribution of the Right Jumps Markov Chain} \label{subsec:StepDistributionZxChain} By definition (\ref{eq:DefRightJumpsMC}) of the right jumps Markov chain $(Z_x)_{x \geq 0}$, \begin{align*} \mathbb{P}(Z_{x+1} = m |Z_x = n) = \mathbb{P}(U(n,x+1) = m) ~,~n,m,x \geq 0, \end{align*} where $U(n,x)$ is the (random) number of right jumps in the sequence $(J_k^x)_{k \in \mathbb{N}}$ before the time of the $n$-th left jump: \begin{align} \label{eq:UofnDistribution} U(n,x)=\inf \Big\{\ell \geq 0: \sum_{k=1}^{\ell} \mathds{1}\{J_k^x = -1\} =n\Big\}-n. \end{align} If the initial environment $\omega(x)$ is constant for all $x \geq 0$, then the distribution of the jump sequence $(J_k^x)_{k \in \mathbb{N}}$ is the same for all $x \geq 0$, so the distribution of $U(n,x)$ is also the same for all $x \geq 0$. In this case, the right jumps chain $(Z_x)_{x \geq 0}$ is time-homogeneous (where $x$ is the time variable) with step distribution $U(n) = U(n,x)$. It is also irreducible and aperiodic with the exception of state $0$. For example, redefining the transition probabilities from state $0$ as $\mathbb{P}(Z_{x+1} = 1|Z_x = 0) = 1$ would make the chain irreducible and aperiodic. For the remainder of this section we assume $\omega(x) = \omega(0)$, for all $x \geq 0$. For our analysis of the step distribution $U(n)$ we fix an arbitrary site $x \geq 0$ and decompose $U(n) = U(n,x)$ as \begin{align} \label{eq:UnEqualSumGammaj} U(n)=\sum_{j=1}^n\Gamma_j, \end{align} where $\Gamma_j$ is the number of right jumps in the sequence $(J_k^x)_{k \in \mathbb{N}}$ between the $(j-1)$-th and $j$-th left jumps. That is, $\Gamma_j = k_j - k_{j-1} - 1$, where $k_0 = 0$ and, for $j \geq 1$, $k_j = \inf \{k > k_{j-1}: J_{k}^x = -1 \}$. We think of the $(\Gamma_j)_{j=1}^n$ as the values obtained in $n$ ``sessions,'' and denote by $\omega^j = \omega^j(x)$ the configuration at site $x$ at the beginning of the $j$-th session.
Thus, $\omega^1 = \omega(x)$ and, for $j \geq 2$, $\omega^j = Y_{k_{j-1}+1}^x$ is the configuration at site $x$ immediately after the $(j-1)$-th left jump in the sequence $(J_k^x)_{k \in \mathbb{N}}$. It is straightforward to see that, conditioned on $\omega^j$, $\Gamma_j$ is independent of $\Gamma_1,\dots,\Gamma_{j-1}$ and has the following distribution: \begin{align} \label{eq:GammajDist} \Gamma_j & \sim S_i, \mbox{ if } \omega^j = (q,i) , \mbox{ for some } 0 \leq i \leq R-1, \mbox{ and } \nonumber \\ \Gamma_j & \sim S_R, \mbox{ if } \omega^j = (p,i), \mbox{ for some } 0 \leq i \leq L-1, \end{align} where $S_0, \dots ,S_R$ are random variables with law \begin{equation} \label{Sidist} \mathbb{P}(S_i=k)=\begin{cases} q^k(1-q),\ 0 \leq k \leq R-i-1;\\ q^{R-i}p^{k-(R-i)}(1-p),\ k\ge R-i.\end{cases} \end{equation} In particular, $S_R$ is a standard geometric random variable with parameter $1-p$. Now, the configuration $\omega^{j+1}$ at the beginning of the next session is determined entirely by $\omega^j$ and $\Gamma_j$. More precisely, $\omega^{j+1}$ is the (deterministic) configuration obtained by jumping right $\Gamma_j$ times from site $x$, starting in configuration $\omega^j$, and then jumping left once.
Thus, assuming that $L\ge2$: \begin{align} \label{eq:omegajplus1_determined} & \mbox{ If } \omega^j = (q,i), 0 \leq i \leq R-1, \mbox{ then } \omega^{j+1} = \begin{cases} (p,1) ~, \mbox{ if } \Gamma_j \geq R-i; \nonumber\\ (q,0)~, \mbox{ if } \Gamma_j < R-i.\end{cases} \\ & \mbox{ If } \omega^j = (p,i), 0 \leq i \leq L-2, \mbox{ then } \omega^{j+1} = \begin{cases} (p,1) ~, \mbox{ if } \Gamma_j \geq 1; \nonumber\\ (p,i+1)~, \mbox{ if } \Gamma_j = 0.\end{cases} \\ & \mbox{ If } \omega^j = (p,L-1), \mbox{ then } \omega^{j+1} = \begin{cases} (p,1) ~, \mbox{ if } \Gamma_j \geq 1;\\ (q,0)~, \mbox{ if } \Gamma_j = 0.\end{cases} \end{align} If $L=1$, then the configuration $\omega^j$ at the beginning of each of the right jumps sessions after the first is always $(q,0)$, since the configuration at site $x$ immediately after a left jump is $(q,0)$. From \eqref{eq:GammajDist}--\eqref{eq:omegajplus1_determined} it follows that, for any $L \geq 2$, the sequence of configurations $(\omega^j)_{j=1}^n$ is a Markov chain with (initial state $\omega(x)$ and) transition matrix $\hat{A}$ given by \begin{equation}\label{Ahatmatrix} \begin{aligned} & \hat{A}_{(q,i),(q,0)} =1-q^{R-i} ~,~ \hat{A}_{(q,i),(p,1)}=q^{R-i}~\mbox{ for }~ 0 \leq i \leq R-1; \\ & \hat{A}_{(p,i),(p,1)} =p ~,~ \hat{A}_{(p,i),(p,i+1)}=1-p ~\mbox{ for }~ 0 \leq i \leq L-2; \\ & \hat{A}_{(p,L-1),(p,1)} =p ~,~ \hat{A}_{(p,L-1),(q,0)}=1-p. \end{aligned} \end{equation} In the case $L=1$, $(\omega^j)_{j=1}^n$ is still a Markov chain, but it is degenerate: the transition matrix $\hat{A}$ has $\hat{A}_{\lambda,(q,0)} = 1$, for all $\lambda \in \Lambda$.
In either case, the transition matrix $\hat{A}$ is indecomposable, and the $L$ states $\Lambda_0 \equiv \{(q,0),(p,1),\dots,(p,L-1)\}$ constitute a closed, irreducible set of states. We denote by $A$ the corresponding transition matrix obtained from $\hat{A}$ by restricting to these $L$ states, and by $\psi$ the unique invariant measure for $A$ (for $L=1$, $A = \psi = 1$). Also, we denote by $e_{(p,i)}$ the unit $L$-vector with a $1$ in the position of state $(p,i)$, and by $e_{(q,0)}$ the unit $L$-vector with a $1$ in the position of $(q,0)$. Finally, we let $E$ denote the $L$-vector with components $E_{(q,0)}=\mathbb{E}(S_0)$ and $E_{(p,i)}=\mathbb{E}(S_R)$, for $i\in\{1,\ldots, L-1\}$. \\ \noindent \emph{Basic Lemmas} \\ The following lemmas characterize some key properties of the step distribution $U(n) = \sum_{j=1}^n \Gamma_j$ in the critical case, $\alpha = 1/2$. Proofs are deferred to Appendix \ref{sec:ProofOfUnLemmas}, but in all three cases they use the underlying Markov chain $(\omega^j)_{j=1}^n$. \begin{Lem} \label{lem:muEqual1} If $\alpha = 1/2$, then \begin{align} \label{eq:muEqual1} \mu \equiv \lim_{n \to \infty} \frac{\mathbb{E}(U(n))}{n} = \left<\psi, E \right> = 1, \end{align} where $\left<\cdot, \cdot \right>$ denotes the standard inner product of two real vectors. \end{Lem} \begin{Lem} \label{lem:ConcentrationEstimate} If $\alpha = 1/2$, then the step distribution $U(n)$ is well concentrated. \end{Lem} \begin{Lem} \label{lem:nuLowerBound} If $\alpha = 1/2$, then $\liminf_{n \to \infty} \nu(n) > 0$, where $\nu(n)$ is given by \eqref{def:rhonutheta}. \end{Lem} \noindent \textbf{Remark.} The assumption $\alpha = 1/2$ is actually not necessary for the conclusions of Lemmas \ref{lem:ConcentrationEstimate} and \ref{lem:nuLowerBound} to hold, but it simplifies the writing of the proofs slightly and is the only case in which we will apply them.
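Lemma \ref{lem:muEqual1} can be sanity-checked numerically in the simplest case $L=1$, where $\psi = 1$ and $\left<\psi, E\right> = \mathbb{E}(S_0)$, using the $L=1$ criticality relation $p_0 = (1-2q+q^{R+1})/(1-2q+q^R)$ (recorded as \eqref{pqcrit} below). The following sketch, in exact rational arithmetic, confirms $\mathbb{E}(S_0)=1$ at criticality for a few illustrative values of $(q,R)$:

```python
from fractions import Fraction as F

def p_critical(q, R):
    # L = 1 criticality relation: p_0 = (1 - 2q + q^{R+1}) / (1 - 2q + q^R)
    return (1 - 2*q + q**(R + 1)) / (1 - 2*q + q**R)

def ES0(q, p, R):
    # E(S_0) from the law (Sidist) with i = 0:
    #   sum_{0 <= k < R} k q^k (1-q)  +  sum_{k >= R} k q^R p^{k-R} (1-p),
    # with the geometric tail summed in closed form: q^R [ p/(1-p) + R ].
    head = sum(k * q**k * (1 - q) for k in range(R))
    tail = q**R * (p / (1 - p) + R)
    return head + tail

# At criticality, <psi, E> = E(S_0) = 1 exactly (psi = 1 when L = 1).
for q in (F(1, 3), F(2, 5)):
    for R in (2, 3, 5):
        assert ES0(q, p_critical(q, R), R) == 1
```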
\\ \noindent \emph{Setup for the Proofs of Theorems \ref{recurrpossible}, \ref{L1andEnvironment}, and \ref{LR2crit}} \\ For the proofs of Theorems \ref{recurrpossible}, \ref{L1andEnvironment}, and \ref{LR2crit} below we will adopt the framework given here for analyzing the step distribution $U(n)$ of the right jumps Markov chain, without further mention, whenever the initial environment $\omega$ is constant over all $x \geq 0$. In this case, since $\alpha = 1/2$ for all three of these theorems, we know $\mu = 1$ by Lemma \ref{lem:muEqual1}. Thus, \eqref{def:rhonutheta} becomes \begin{equation}\label{def:rhonuthetaalphahalf} \rho(n) = \mathbb{E}(U(n) - n),~ \ \nu(n) = \mathbb{E}((U(n) - n)^2)/n,~ \ \theta(n) = 2 \rho(n)/\nu(n), \end{equation} and it follows from Lemmas \ref{lem:ProbSurvivalZxProbTransienceXn}, \ref{lem:ThetaLessGreater1}, \ref{lem:ConcentrationEstimate}, and \ref{lem:nuLowerBound} that \begin{align} \label{eq:ThetaTransienceRecurrenceCondition} & \mathbb{P}_{\omega}(X_n \rightarrow \infty) = 0 , ~\mbox{ if }~ \theta(n) \leq 1 + O(\tfrac1n), ~\mbox{ and } \nonumber \\ & \mathbb{P}_{\omega}(X_n \rightarrow \infty) > 0 , ~\mbox{ if } \lim_{n \to \infty} \theta(n) > 1. \end{align} In all cases one of these two possibilities for $\theta(n)$ will occur.
\subsection{Proof of Theorem \ref{recurrpossible}} \label{subsec:ProofThmRecurrPossible} \begin{proof}[Proof of Theorem \ref{recurrpossible}] If the initial environment $\omega$ satisfies $\omega(x) = \lambda \in \Lambda_0$ for all $x \geq 0$, then the Markov chain representation of section \ref{subsec:StepDistributionZxChain} and Lemma \ref{lem:muEqual1} give \begin{align*} \mathbb{E}&(\Gamma_j) = \sum_{\lambda'} \mathbb{P}(\omega^j = \lambda'|\omega^1 = \lambda) \cdot \mathbb{E}(\Gamma_j|\omega^j = \lambda') \\ & = \left<e_{\lambda} A^{j-1}, E \right> = \left<\psi, E \right> + \left<(e_{\lambda} A^{j-1} - \psi), E \right> = 1 + O(a^j), \end{align*} for some $0 < a < 1$, which depends on the matrix $A$. Thus, in this case, for all $n \geq j$ we have from \eqref{def:rhonuthetaalphahalf} \begin{align*} \rho(n) = \sum_{i = 1}^n \mathbb{E}(\Gamma_i) - n = \bigg(\sum_{i = 1}^{j} \mathbb{E}(\Gamma_i) - j \bigg) + O(a^j) = \rho(j) + O(a^j). \end{align*} Also, by Lemma \ref{lem:nuLowerBound}, we know that there exists some $\epsilon > 0$, which can be chosen uniformly over $\lambda \in \Lambda_0$, such that $\liminf_{n \to \infty} \nu(n) \geq \epsilon$ if $\omega(x) = \lambda$, for $x \geq 0$. Combining these observations, we see that there exists some $n_0 \in \mathbb{N}$ satisfying \begin{align} \label{eq:WhenBiggern0} \nu^{(\lambda)}(n) \geq \epsilon/2 ~\mbox{ and }~ \rho^{(\lambda)}(n) - \rho^{(\lambda)}(n_0) \leq \epsilon/4, \mbox{ for all $\lambda \in \Lambda_0$ and $n \geq n_0$}, \end{align} where $\rho^{(\lambda)}(n)$ and $\nu^{(\lambda)}(n)$ are the quantities $\rho(n)$ and $\nu(n)$ with initial environment $\omega(x) = \lambda$, $x \geq 0$.
Now, for any fixed $j$, Lemma \ref{lem:muEqual1} implies \begin{align*} & j = \sum_{i=1}^j \left<\psi, E \right> = \sum_{i=1}^j \left<\psi A^{i-1}, E \right> = \sum_{i=1}^j \sum_{\lambda \in \Lambda_0} \psi_{\lambda} \left<e_{\lambda} A^{i-1}, E \right> \\ & = \sum_{i=1}^j \sum_{\lambda \in \Lambda_0} \psi_{\lambda} \cdot \mathbb{E}(\Gamma_i^{(\lambda)}) = \sum_{\lambda \in \Lambda_0} \psi_{\lambda} \cdot \mathbb{E}(U^{(\lambda)}(j)), \end{align*} where $\Gamma_i^{(\lambda)}$ and $U^{(\lambda)}(j)$ are the random variables $\Gamma_i$ and $U(j)$, with initial environment $\omega(x) = \lambda$, $x \geq 0$. Thus, for any fixed $j$, \begin{align*} \sum_{\lambda \in \Lambda_0} \psi_{\lambda} \cdot \big[ \mathbb{E}(U^{(\lambda)}(j)) - j\big] = \sum_{\lambda \in \Lambda_0} \psi_{\lambda} \cdot \rho^{(\lambda)}(j) = 0, \end{align*} so there exists some $\lambda_j \in \Lambda_0$ such that \begin{align} \label{eq:rholambdaj} \rho^{(\lambda_j)}(j) \leq 0. \end{align} Define $\lambda^* = \lambda_{n_0}$. Then, by \eqref{eq:WhenBiggern0} and \eqref{eq:rholambdaj}, $\rho^{(\lambda^*)}(n) \leq \rho^{(\lambda^*)}(n_0) + \epsilon/4 = \rho^{(\lambda_{n_0})}(n_0) + \epsilon/4 \leq \epsilon/4$, for $n \geq n_0$. So, by \eqref{eq:WhenBiggern0}, \begin{align*} \theta^{(\lambda^*)}(n) = \frac{2 \rho^{(\lambda^*)}(n)}{\nu^{(\lambda^*)}(n)} \leq \frac{2 \cdot (\epsilon/4)}{\epsilon/2} = 1, \mbox{ for } n \geq n_0. \end{align*} It follows, from \eqref{eq:ThetaTransienceRecurrenceCondition}, that $\mathbb{P}_{\omega}(X_n \rightarrow \infty) = 0$ for any initial environment $\omega$ with $\omega(x) = \lambda^*$, for all $x \geq 0$. Thus, also, $\mathbb{P}_{\omega}(X_n \rightarrow \infty) = 0$ for any initial environment $\omega$ which is equal to $\lambda^*$ in a neighborhood of $+\infty$.
An analogous argument shows that there exists some $\lambda_{*}$ such that $\mathbb{P}_{\omega}(X_n \rightarrow -\infty) = 0$, for any initial environment $\omega$ which is equal to $\lambda_{*}$ in a neighborhood of $-\infty$. So, by part (ii) of Lemma \ref{lem:SimpleTransConditions}, the random walk $(X_n)$ is $\mathbb{P}_{\omega}$ a.s. recurrent for any initial environment $\omega$ which is equal to $\lambda^*$ in a neighborhood of $+\infty$ and equal to $\lambda_{*}$ in a neighborhood of $-\infty$. By Lemma \ref{lem:ComparisonOfEnvironments}, and symmetry considerations, we may take $\lambda^* = (q,0)$ and $\lambda_{*} = (p,0)$ in the case of positive feedback, $q < p$. \end{proof} \subsection{Proof of Theorem \ref{L1andEnvironment}} \label{subsec:ProofThmL1andEnvironment} For notational convenience in the proof of Theorem \ref{L1andEnvironment}, we define \begin{align*} \lambda_0 = (q,0),\dots,\lambda_{R-1} = (q,R-1), \lambda_R = (p,0). \end{align*} As discussed above, in the case $L=1$ the transition matrix $\hat{A}$ is degenerate, with $\hat{A}_{\lambda,(q,0)} = 1$ for all $\lambda \in \Lambda$. Thus, in this case, $\omega^j = (q,0)$, for all $j \geq 2$ (independent of the values of the $\Gamma_j$'s). Also, with $L = 1$, $\psi$ is simply the length-$1$ vector $1$ and $E$ is simply the length-$1$ vector $\mathbb{E}(S_0)$. The following facts are immediate from this. \begin{enumerate} \item If $L = 1$ and $\omega(x) = \lambda_i$, $x \geq 0$, then \begin{align} \label{eq:L1GammajDist} \Gamma_1, \Gamma_2,\dots \mbox{ are independent, with } \Gamma_1 \sim S_i ~\mbox{ and }~ \Gamma_j \sim S_0,~ j \geq 2. \end{align} \item If $L=1$ and $\alpha = 1/2$ then, by Lemma \ref{lem:muEqual1}, \begin{align} \label{eq:ExS0is1} \mathbb{E}(S_0) = \left<\psi, E \right> = 1. \end{align} \end{enumerate} Using these facts we now prove Theorem \ref{L1andEnvironment}.
\begin{proof}[Proof of Theorem \ref{L1andEnvironment}] By assumption, $\alpha = 1/2$ and $L=1$, and the initial environment is constant in a neighborhood of $-\infty$ in the case of negative feedback, $p < q$. Thus, by Theorem \ref{RorL1}, the probability of the random walk $(X_n)$ being transient to $-\infty$ is equal to $0$\footnotemark{}. So, by Lemma \ref{lem:SimpleTransConditions}, the probability of being transient to $+\infty$ is either $0$ or $1$, and if it is $0$, the process is recurrent. Moreover, without loss of generality, we may clearly assume that the initial environment $\omega$ is constant for all $x \geq 0$ (rather than only in a neighborhood of $+\infty$). Thus, to establish the transience/recurrence claims in the theorem, it suffices to show the following: \footnotetext{Theorem \ref{RorL1} is not proved until later, in section \ref{subsec:ProofThmRorL1}, but its proof is independent of the proof of this theorem.} \begin{align} \label{eq:BulletConditions} \bullet & \mbox{ If $\omega(x) = \lambda_0$ for all $x \geq 0$, then $\mathbb{P}_{\omega}(X_n \rightarrow \infty) = 0$. } \nonumber \\ \bullet & \mbox{ If $\omega(x) = \lambda_i$ for all $x \geq 0$, $1 \leq i \leq R$, then $\mathbb{P}_{\omega}(X_n \rightarrow \infty) = 0$ } \nonumber \\ & \mbox{ if and only if $P_{R,i}(q) \geq 0$. } \end{align} For the remainder of the proof we assume that \begin{align} \label{eq:ConstantEnvironmentToRight} \omega(x) = \lambda_i,~ x \geq 0, \end{align} for some $0 \leq i \leq R$. To determine whether there is positive probability of transience to $+\infty$, we will calculate $\theta(n) = 2 \rho(n) / \nu(n)$ and apply \eqref{eq:ThetaTransienceRecurrenceCondition}. We begin with $\rho(n)$. Since $L=1$ and $\alpha = 1/2$, we have $\mathbb{E}(S_0) = 1$, by \eqref{eq:ExS0is1}.
Thus, by \eqref{def:rhonuthetaalphahalf} and \eqref{eq:L1GammajDist}, \begin{align} \label{eq:rhosimple} \rho(n) & = \mathbb{E}(S_i) + (n-1) \mathbb{E}(S_0) - n = \mathbb{E}(S_i) - 1. \end{align} A direct computation yields \begin{equation}\label{ExpSi} \begin{aligned} \mathbb{E}(S_i) & = \sum_{k = 0}^{R-i-1} k \cdot q^k(1-q) + \sum_{k = R - i}^{\infty} k \cdot q^{R-i} p^{k-(R-i)}(1-p) \\ & =\frac1{1-q}\big[-(1-q)q^{R-i}(R-i)+(1-q^{R-i})q\big] ~+\\ &~~~~~~ \frac1{1-p}\Big(\frac qp\Big)^{R-i}\big[(1-p)p^{R-i}(R-i)+p^{R-i+1}\big]. \end{aligned} \end{equation} Since $L=1$ and $\alpha = 1/2$, Proposition \ref{prop:PropertiesOfAlpha} implies \begin{equation} \label{pqcrit} p=p_0=\frac{1-2q+q^{R+1}}{1-2q+q^R},\ \ 1-p=1-p_0=\frac{q^R-q^{R+1}}{1-2q+q^R}. \end{equation} Substituting for $p$ above we obtain, after some lengthy simplifications, $$ \mathbb{E}(S_i)=\frac{1-2q+q^{i+1}}{q^i(1-q)}. $$ Thus, by \eqref{eq:rhosimple}, \begin{align} \label{eq:rho} \rho(n) =\frac{1-2q-q^i+2q^{i+1}}{q^i(1-q)} ~,~ \mbox{ for all } n. \end{align} We now turn to the calculation of $\nu(n)$. From \eqref{def:rhonuthetaalphahalf} we recall that $\nu(n) = \mathbb{E}[(U(n)-n)^2]/n$. Using the independence of the random variables $\Gamma_1,\dots,\Gamma_n$, we have \begin{equation} \label{U(n)-n} \begin{aligned} &\mathbb{E}\left[(U(n)-n)^2\right]=\mathbb{E}\Big[\Big(\sum_{j=1}^n\Gamma_j-(n-1+\mathbb{E}(S_i))-(1-\mathbb{E}(S_i))\Big)^2\Big]=\\ &\mbox{Var}(S_i)+(n-1)\mbox{Var}(S_0)+(1 - \mathbb{E}(S_i))^2 = n[\mathbb{E}(S_0^2)-1]+ O(1).
\end{aligned} \end{equation} A tedious computation gives \begin{equation}\label{expS0squared} \begin{aligned} \mathbb{E}(S_0^2) & = \sum_{k = 0}^{R-1} k^2 \cdot q^k(1-q) + \sum_{k = R}^{\infty} k^2 \cdot q^R p^{k-R}(1-p) \\ & = \frac{q+q^2-q^R\Big(R^2-(2R^2-2R-1)q+(R-1)^2q^2\Big)}{(1-q)^2} ~+\\ & ~~~~~~~~ \frac{q^R\Big(R^2-(2R^2-2R-1)p+(R-1)^2p^2\Big)}{(1-p)^2}. \end{aligned} \end{equation} Substituting for $p$ from \eqref{pqcrit} and doing a good deal of algebra, one eventually finds that \begin{equation}\label{ES02} \mathbb{E}(S_0^2)=\frac1{q^R(1-q)^2}\Big[2-8q+8q^2+(2R+1)q^R+(2-6R)q^{R+1}+(4R-5)q^{R+2}\Big]. \end{equation} Finally, from \eqref{U(n)-n} and \eqref{ES02}, we have \begin{equation}\label{eq:nu} \begin{aligned} &\nu(n)= \frac{\mathbb{E}[(U(n)-n)^2]}n=\mathbb{E}(S_0^2)-1+O(\tfrac1n)=\\ &\frac1{q^R(1-q)^2}\Big[2-8q+8q^2+2Rq^R+(4-6R)q^{R+1}+(4R-6)q^{R+2}\Big]+O(\tfrac1n). \end{aligned} \end{equation} Now, combining \eqref{eq:rho} and \eqref{eq:nu} shows that $\theta(n) = \theta + O(\frac1n)$, where \begin{equation}\label{thetaagain} \begin{aligned} &~\theta =\frac{q^{R-i}(1-q)(1-2q-q^i+2q^{i+1})}{1-4q+4q^2+Rq^R+(2-3R)q^{R+1}+(2R-3)q^{R+2}} \\ &= \frac{q^{R-i}-3q^{R-i+1}+2q^{R-i+2}-q^R+3q^{R+1}-2q^{R+2}}{1-4q+4q^2+Rq^R+(2-3R)q^{R+1}+(2R-3)q^{R+2}}. \end{aligned} \end{equation} For $1 \leq i \leq R$, $\theta \leq 1$ is equivalent to $P_{R,i}(q) \geq 0$, and in the case $i = 0$, $\theta$ is $0$. Thus, \eqref{eq:BulletConditions} follows from \eqref{eq:ThetaTransienceRecurrenceCondition}. It remains only to show the claims concerning the polynomial $P_{R,R}(q)$. For these we will use the factored representation $P_{R,R}(q) = q(1-q)^2 \tilde{P}_{R,R}(q)$, where $\tilde{P}_{R,R}(q)=-1+\sum_{j=1}^{R-3}jq^{j+1}+(2R-1)q^{R-1}$, as in \eqref{eq:FactoredPolyR}.
Since $P_{R,R}(q)$ and $\tilde{P}_{R,R}(q)$ have the same sign for all $q \in (0,1)$, it suffices to prove the claims for the polynomial $\tilde{P}_{R,R}(q)$. Now, clearly, $\tilde{P}_{R,R}$ is increasing, and $\tilde{P}_{R,R}(0)=-1$. For $R\ge4$, one can rewrite $\tilde{P}_{R,R}$ as $\tilde{P}_{R,R}(q)=-1+(\frac q{1-q})^2[1-(R-2)q^{R-3}+(R-3)q^{R-2}]+(2R-1)q^{R-1}$. Using this, we find that $\tilde{P}_{R,R}(\frac12)=(\frac12)^{R-1}$, for all $R\ge2$. Consequently, $\tilde{P}_{R,R}$ has a unique root $q_*(R)\in(0,\frac12)$, with $\tilde{P}_{R,R}(q) < 0$ for $q<q_*(R)$ and $\tilde{P}_{R,R}(q) > 0$ for $q > q_*(R)$. Furthermore, \begin{align} \label{PRdec} &\tilde{P}_{R+1,R+1}(q)-\tilde{P}_{R,R}(q)=(2R+1)q^R-(R+1)q^{R-1}= \nonumber \\ &q^{R-1}\big[(2R+1)q-(R+1)\big]\le -\frac12q^{R-1} < 0,\ \text{for}\ q\in[0,\tfrac12]. \end{align} So, $q_*(R)$ is increasing in $R$. Also, we have $\tilde{P}_{\infty,\infty}(q)\equiv\lim_{R\to\infty}\tilde{P}_{R,R}(q)=\frac{2q-1}{(1-q)^2}$. Since the root of $\tilde{P}_{\infty,\infty}$ is at $q=\frac12$, it follows that $\lim_{R\to\infty}q_*(R)=\frac12$. \end{proof} \subsection{Proof of Theorem \ref{LR2crit}} \label{subsec:ProofThmLR2crit} For the proof of Theorem \ref{LR2crit} we will need the following lemma. \begin{Lem} \label{lem:MustBeTransientTopmInfinity} If the initial environment $\omega$ is constant in a neighborhood of $+\infty$ (respectively, $-\infty$) and $\mathbb{P}_{\omega}(X_n \rightarrow +\infty) > 0$ (respectively, $\mathbb{P}_{\omega}(X_n \rightarrow -\infty) > 0$), then \begin{align*} \mathbb{P}_{\omega}(X_n \rightarrow +\infty) + \mathbb{P}_{\omega}(X_n \rightarrow -\infty) = 1. \end{align*} \end{Lem} \begin{proof} We will prove the claim for the case of a constant initial environment in a neighborhood of $+\infty$; the other claim follows from symmetry considerations.
Thus, we assume that the environment $\omega$ satisfies $\mathbb{P}_{\omega}(X_n \rightarrow \infty) > 0$ and $\omega(x) = \lambda$, $x \geq N$, for some $\lambda \in \Lambda$ and $N \in \mathbb{N}$. Also, we let $\omega'$ be any environment with $\omega'(x) = \lambda$, for all $x \geq 0$. Since we assume $\mathbb{P}_{\omega}(X_n \rightarrow \infty) > 0$, it follows from part (i) of Lemma \ref{lem:SimpleTransConditions} that $\mathbb{P}_{\omega'}(X_n \rightarrow \infty), \mathbb{P}_{\omega'}(A_0^+) > 0$, and we define $\delta = \mathbb{P}_{\omega'}(A_0^+)$. If the random walk $(X_n)$ is run starting in $\omega$, then at every time $n$ that it first hits a site $x \geq N$, the environment at time $n$ is equal to $\lambda$ at all sites $y \geq x$. Thus, \begin{align} \label{eq:DeltaChanceOfEscape} & \mathbb{P}_{\omega}\big(A_n^+ | X_0 = x_0, \dots, X_n = x_n \big) = \mathbb{P}_{\omega'}(A_0^+) = \delta, \nonumber \\ & \mbox{for any path $(x_0,\dots,x_n)$ satisfying $x_0 = 0, x_n \geq N, x_m < x_n$ for $m < n$}. \end{align} We define stopping times $\tau_i, \tau_i'$ and stopping points $z_i$ as follows: \begin{itemize} \item $\tau_0 = T_N$, $z_0 = N$. \item For $i \geq 1$, \begin{align*} & \tau_i' = \inf \{n > \tau_{i-1}: X_n = z_{i-1}\}, \\ & z_i = \sup\{X_n : \tau_{i-1} \leq n < \tau_i' \}, \\ & \tau_i = \inf \{n > \tau_i' : X_n = z_i + 1\}. \end{align*} \end{itemize} Of course, not all of these stopping times are necessarily finite. We think of the times as an ordered list $\tau_0, \tau_1', \tau_1, \tau_2', \tau_2,\dots$, and if any element in this list is equal to $\infty$, then all elements to its right are defined to be $\infty$ as well.
By construction, the list is an increasing sequence of positive integers up to the point of the first $\infty$, and it follows from \eqref{eq:DeltaChanceOfEscape} that, for any $i \geq 1$, \begin{align*} \mathbb{P}_{\omega}(\tau_i' < \infty|\tau_{i-1} < \infty) \leq 1 - \delta. \end{align*} Thus, for any $i \geq 1$, \begin{align*} \mathbb{P}_{\omega}(\tau_i < \infty) = \mathbb{P}_{\omega}(\tau_0 < \infty) \cdot \prod_{j=1}^i \Big( \mathbb{P}_{\omega} (\tau_j' < \infty|\tau_{j-1} < \infty) \cdot \mathbb{P}_{\omega}(\tau_j < \infty|\tau_j' < \infty) \Big) \leq (1-\delta)^i. \end{align*} So, $\mathbb{P}_{\omega}(\tau_i < \infty, \mbox{ for all } i > 0) = 0$. However, by part (ii) of Lemma \ref{lem:SimpleTransConditions}, we also have $\mathbb{P}_{\omega}(X_n \not\rightarrow +\infty \mbox{ and } X_n \not\rightarrow -\infty) \leq \mathbb{P}_{\omega}(\tau_i < \infty, \mbox{ for all } i > 0)$. Thus, $\mathbb{P}_{\omega}(X_n \rightarrow +\infty) + \mathbb{P}_{\omega}(X_n \rightarrow -\infty) = 1$. \end{proof} \begin{proof}[Proof of Theorem \ref{LR2crit}] We will show that if $\omega(x)=(p,0)$ for $x \geq 0$, then $\theta(n)$ from \eqref{def:rhonuthetaalphahalf} satisfies $\theta(n) \leq 1+O(\frac1n)$, if $q\ge q_1^*$, and $\lim_{n\to\infty}\theta(n)>1$, if $q<q_1^*$. We will also show that if $\omega(x)=(p,1)$ for $x \geq 0$ or $\omega(x)=(q,1)$ for $x \geq 0$, then $\theta(n)$ satisfies $\theta(n)\le 1+O(\frac1n)$, for all $q\in(0,1)$. Finally, we will show that if $\omega(x)=(q,0)$ for $x \geq 0$, then $\theta(n)$ satisfies $\theta(n)\le 1+O(\frac1n)$, if $q\le q_2^*$, and $\lim_{n\to\infty}\theta(n)>1$, if $q>q_2^*$.
It follows, by \eqref{eq:ThetaTransienceRecurrenceCondition}, that: \begin{align} \label{eq:TransiencPossibilitiesConstantEnvironmentsRL2} & \bullet \mbox{ If $\omega(x) = (p,0)$ for $x \geq 0$, then $\mathbb{P}_{\omega}(X_n \rightarrow \infty) > 0$ if and only if $q < q_1^*$.} \nonumber \\ & \bullet \mbox{ If $\omega(x) = (q,0)$ for $x \geq 0$, then $\mathbb{P}_{\omega}(X_n \rightarrow \infty) > 0$ if and only if $q > q_2^*$.} \nonumber \\ & \bullet \mbox{ If $\omega(x) = (p,1)$ for $x \geq 0$ or $\omega(x) = (q,1)$ for $x \geq 0$, then $\mathbb{P}_{\omega}(X_n \rightarrow \infty) = 0$.} \end{align} Clearly, \eqref{eq:TransiencPossibilitiesConstantEnvironmentsRL2} remains valid if the hypothesis ``$x \geq 0$'' is changed to ``$x$ in a neighborhood of $+\infty$''. Thus, this will prove the theorem, in light of Lemmas \ref{lem:SimpleTransConditions} and \ref{lem:MustBeTransientTopmInfinity}, and the symmetry that holds because $R=L$ and $p = 1-q$, along with Lemma \ref{lem:ComparisonOfEnvironments} in the case $q<p$, where the theorem sometimes allows for nonconstant environments in a neighborhood of $+\infty$ or $-\infty$. By definition, $\theta(n) = 2\rho(n)/\nu(n)$. The calculation of the two components $\rho(n)$ and $\nu(n)$ will be done separately, but we begin first with some general setup that will be used in both cases. Throughout, we will implicitly use the following basic fact many times, which is immediate from the construction of the joint process $(\omega^j,\Gamma_j)$: \begin{align*} \mbox{ Conditioned on $\omega^i$, $(\omega^j,\Gamma_j)_{j=1}^{i-1}$ and $(\Gamma_j)_{j=i}^{\infty}$ are independent. } \end{align*} \noindent \emph{Setup} \\ Since $L=2$, the transition matrix $A$ defined in section \ref{subsec:StepDistributionZxChain} corresponds to the recurrent states $(p,1)$ and $(q,0)$.
Ordering the states in the order they appear here, and using the fact that $R=2$ and the assumption $p = 1-q$, it follows from \eqref{Ahatmatrix} that \begin{equation*} A= \left(\begin{matrix} p& 1-p \\ q^2& 1-q^2 \end{matrix}\right) = \left(\begin{matrix} 1-q& q \\ q^2& 1-q^2 \end{matrix}\right). \end{equation*} This matrix $A$ has eigenvalues \begin{align} \label{eq:RL2Eigenvalues} \eta_1 = 1 ~,~ \eta_2 = 1 - q - q^2 \end{align} with corresponding left eigenvectors \begin{align} \label{eq:RL2Eigenvectors} w_1 = (q,1) ~,~ w_2 = (-1,1). \end{align} For future reference we observe that $|\eta_2| < 1$, for any $q \in (0,1)$, and that the unit vectors $e_1 = (1,0)$ and $e_2 = (0,1)$ decompose as \begin{align} \label{eq:e1e2Decomposition} e_1 = c_1 w_1 + c_2 w_2 ~,~ e_2 = d_1 w_1 + d_2 w_2 \end{align} where \begin{align} \label{eq:c1c2d1d2} c_1 = \frac{1}{q+1} ~,~ c_2 = - \frac{1}{q+1} ~,~ d_1 = \frac{1}{q+1} ~,~ d_2 = \frac{q}{q+1}. \end{align} With $R=2$ and $p = 1-q$ the distributions of the random variables $S_0, S_1$ and $S_2$ given in \eqref{Sidist} become: \begin{align} \label{SidistRL2} & \mathbb{P}(S_0 = k) = q^k(1-q),~ k = 0,1 ~\mbox{ and }~ \mathbb{P}(S_0 = k) = q^3 (1-q)^{k-2}, ~k \geq 2. \nonumber \\ & \mathbb{P}(S_1 = 0) = 1-q ~\mbox{ and }~ \mathbb{P}(S_1 = k) = q^2 (1-q)^{k-1}, ~k \geq 1. \nonumber \\ & \mathbb{P}(S_2 = k) = q(1-q)^k,~ k \geq 0. \end{align} Direct calculations yield \begin{align} \label{eq:ListOfExpectations} & \mathbb{E}(S_0) = 2q ~,~ \mathbb{E}(S_1) = 1 ~,~ \mathbb{E}(S_2) = (1-q)/q, \nonumber \\ & \mathbb{E}(S_0^2) = 2(1+q) ~,~\mathbb{E}(S_2^2) = (1-q)(2-q)/q^2, \nonumber \\ & \mathbb{E}(S_0|S_0 \geq 2) = (1+q)/q ~,~ \mathbb{E}(S_2|S_2 \geq 1) = 1/q.
\end{align} By \eqref{eq:GammajDist} we have $\mathbb{E}(\Gamma_i|\omega^i = (p,1)) = \mathbb{E}(S_2)$ and $\mathbb{E}(\Gamma_i|\omega^i = (q,0)) = \mathbb{E}(S_0)$, and with our chosen state ordering $(p,1),(q,0)$ the expectation vector $E$ from section \ref{subsec:StepDistributionZxChain} becomes $E = (\mathbb{E}(S_2), \mathbb{E}(S_0))$. Thus, since $\omega^{i+\ell}$ is distributed as $e_1A^{\ell}$ when $\omega^i = (p,1)$ and as $e_2 A^{\ell}$ when $\omega^i = (q,0)$, we have \begin{align} \label{eq:ExTil_given_omegaip1} \mathbb{E}\Big(&\Gamma_{i+\ell}\Big|\omega^i = (p,1)\Big) \nonumber \\ & = \mathbb{P}\Big(\omega^{i+\ell} = (p,1)\Big|\omega^i = (p,1)\Big) \cdot \mathbb{E}(S_2) ~+~ \mathbb{P}\Big(\omega^{i+\ell} = (q,0)\Big|\omega^i = (p,1)\Big) \cdot \mathbb{E}(S_0) \nonumber \\ & = \left<e_1A^{\ell}, E \right> = c_1 \left<w_1,E\right> \eta_1^{\ell} + c_2 \left<w_2,E\right> \eta_2^{\ell} = 1 + \Big(\frac{1-2q}{q} \Big) \eta_2^{\ell}, \end{align} and \begin{align} \label{eq:ExTil_given_omegaiq0} \mathbb{E}\Big(&\Gamma_{i+\ell}\Big|\omega^i = (q,0)\Big) \nonumber \\ & = \mathbb{P}\Big(\omega^{i+\ell} = (p,1)\Big|\omega^i = (q,0)\Big) \cdot \mathbb{E}(S_2) ~+~ \mathbb{P}\Big(\omega^{i+\ell} = (q,0)\Big|\omega^i = (q,0)\Big) \cdot \mathbb{E}(S_0) \nonumber \\ &= \left<e_2A^{\ell}, E \right> = d_1 \left<w_1,E\right> \eta_1^{\ell} + d_2 \left<w_2,E\right> \eta_2^{\ell} = 1 + (2q-1) \eta_2^{\ell}, \end{align} for any $\ell \geq 0$.
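As a quick consistency check (this verification is ours and is not part of the original argument), \eqref{eq:ExTil_given_omegaip1} and \eqref{eq:ExTil_given_omegaiq0} behave correctly at the two extremes of $\ell$:
\begin{align*}
\ell = 0: &~~ 1 + \frac{1-2q}{q} = \frac{1-q}{q} = \mathbb{E}(S_2) ~~\mbox{ and }~~ 1 + (2q-1) = 2q = \mathbb{E}(S_0); \\
\ell \to \infty: &~~ \mbox{both right hand sides tend to } \frac{q}{1+q}\, \mathbb{E}(S_2) + \frac{1}{1+q}\, \mathbb{E}(S_0) = \frac{(1-q) + 2q}{1+q} = 1,
\end{align*}
where $\big(\frac{q}{1+q}, \frac{1}{1+q}\big)$ is the stationary distribution of $A$, obtained by normalizing the left eigenvector $w_1$.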
Similarly, denoting $E' = (\mathbb{E}(S_2^2), \mathbb{E}(S_0^2))$, we have \begin{align} \label{eq:ExTil2_given_omegaip1} \mathbb{E}\Big(&\Gamma_{i+\ell}^2\Big|\omega^i = (p,1)\Big) \nonumber \\ &= \left<e_1A^{\ell}, E' \right> = c_1 \left<w_1, E' \right> \eta_1^{\ell} + c_2 \left<w_2, E' \right> \eta_2^{\ell} = \frac{2-q+3q^2}{q(1+q)} + O(|\eta_2|^{\ell}). \end{align} \noindent \emph{Calculation of $\widehat{\rho}(n)$} \\ We will write $\widehat{\rho}^{(p,0)}(n)$ for the quantity $\widehat{\rho}(n)$ when the initial environment is $\omega(x) = (p,0)$, $x \geq 0$, and similarly we denote by $\widehat{\rho}^{(q,0)}(n)$, $\widehat{\rho}^{(p,1)}(n)$, and $\widehat{\rho}^{(q,1)}(n)$ the quantity $\widehat{\rho}(n)$ with initial environments $(q,0)$, $(p,1)$, and $(q,1)$ for $x \geq 0$. In all cases, we have, from \eqref{def:rhonuthetaalphahalf}, $\widehat{\rho}(n) = \mathbb{E}(U(n)) - n = \sum_{j=1}^n \mathbb{E}(\Gamma_j) - n$, where the expectation is (implicitly) the expectation conditioned on $\omega^1 = (p,0),(q,0),(p,1)$, or $(q,1)$. Using \eqref{eq:ExTil_given_omegaip1} with $i = 1$, along with \eqref{eq:RL2Eigenvalues}, gives \begin{align} \label{eq:rhop1n} & \widehat{\rho}^{(p,1)}(n) = \sum_{j=1}^n \mathbb{E}\Big(\Gamma_j\Big|\omega^1 = (p,1)\Big) - n = \sum_{j=1}^n \left[1 + \Big(\frac{1-2q}{q} \Big) \eta_2^{j-1}\right] - n \nonumber \\ & = \Big(\frac{1-2q}{q}\Big) \frac{1}{1 - \eta_2} + O(|\eta_2|^n) = \frac{1-2q}{q^2(1+q)} + O(|\eta_2|^n).
\end{align} Similarly, using \eqref{eq:ExTil_given_omegaiq0} with $i = 1$, along with \eqref{eq:RL2Eigenvalues}, gives \begin{align} \label{eq:rhoq0n} & \widehat{\rho}^{(q,0)}(n) = \sum_{j=1}^n \mathbb{E}\Big(\Gamma_j\Big|\omega^1 = (q,0)\Big) - n = \sum_{j=1}^n \left[1 + (2q-1) \eta_2^{j-1}\right] - n \nonumber \\ & = (2q-1) \frac{1}{1 - \eta_2} + O(|\eta_2|^n) = \frac{2q-1}{q(1+q)} + O(|\eta_2|^n). \end{align} Now, if $\omega^1 = (p,0)$ then $\Gamma_1$ is distributed according to $S_2$, and $\omega^2 = (p,1)$ with probability 1, by \eqref{eq:omegajplus1_determined}. Thus, by \eqref{eq:ListOfExpectations} and \eqref{eq:rhop1n}, \begin{align} \label{eq:rhop0n} & \widehat{\rho}^{(p,0)}(n) = \Big[\mathbb{E}\Big(\Gamma_1\Big|\omega^1 = (p,0)\Big) - 1\Big] + \Big[\sum_{j=2}^n \mathbb{E}\Big(\Gamma_j\Big|\omega^2 = (p,1)\Big) - (n-1)\Big] \nonumber \\ & = [\mathbb{E}(S_2) - 1]+ \widehat{\rho}^{(p,1)}(n-1) = \frac{(1-2q)(1+q+q^2)}{q^2(1+q)} + O(|\eta_2|^n). \end{align} Here we used $\mathbb{E}(S_2) - 1 = \frac{1-2q}{q}$ and $\frac{1-2q}{q} + \frac{1-2q}{q^2(1+q)} = \frac{(1-2q)(1+q+q^2)}{q^2(1+q)}$. Finally, when $\omega^1 = (q,1)$ it follows from \eqref{eq:omegajplus1_determined} that $\omega^2 = (p,1)$ if $\Gamma_1 \geq 1$ and $\omega^2 = (q,0)$ if $\Gamma_1 = 0$.
Since $\Gamma_1$ is distributed according to $S_1$ when $\omega^1 = (q,1)$, we have \begin{align} \label{eq:rhoq1n} & \widehat{\rho}^{(q,1)}(n) = \mathbb{E}\Big(\Gamma_1\Big|\omega^1 = (q,1)\Big) + \sum_{j=2}^n \mathbb{E}\Big(\Gamma_j\Big|\omega^1 = (q,1)\Big) - n \nonumber \\ & = \mathbb{E}(S_1) + \mathbb{P}(S_1 = 0) \sum_{j=2}^n \mathbb{E}\Big(\Gamma_j\Big|\omega^2 = (q,0)\Big) + \mathbb{P}(S_1 \geq 1) \sum_{j=2}^n \mathbb{E}\Big(\Gamma_j\Big|\omega^2 = (p,1)\Big) - n \nonumber \\ & = \mathbb{E}(S_1) - 1 + (1-q) \widehat{\rho}^{(q,0)}(n-1) + q \widehat{\rho}^{(p,1)}(n-1) = \frac{1-2q}{1+q} + O(|\eta_2|^n), \end{align} by \eqref{eq:ListOfExpectations}, \eqref{eq:rhop1n}, and \eqref{eq:rhoq0n}. \\ \noindent \emph{Calculation of $\nu(n)$} \\ From \eqref{def:rhonuthetaalphahalf} we have \begin{equation}\label{eq:nuDef} \nu(n) = \frac{1}{n} \mathbb{E}[(U(n)-n)^2] = \frac{1}{n} \mathbb{E}\Big[\Big(\sum_{i=1}^n\Gamma_i-n\Big)^2\Big]. \end{equation} We will show below that with $\omega(x) = (p,1)$ for $x \geq 0$, i.e.\ with $\omega^1 = (p,1)$, \begin{equation}\label{eq:nun} \begin{aligned} \nu(n) = \frac{2(1-q)(1-q+q^2)}{q^2(q+1)} ~+~ O(\frac1n). \end{aligned} \end{equation} Similar calculations show that \eqref{eq:nun} also holds with $\omega^1 = (p,0), (q,0)$ and $(q,1)$. The latter are omitted for the sake of brevity. However, this asymptotic equivalence of $\nu(n)$ for different values of $\omega^1$ is to be expected from \eqref{eq:nuDef}, since the distribution of $\omega^j$ converges exponentially fast to the stationary distribution of the matrix $A$, for any value of the initial state $\omega^1$.
Thus, one can couple the joint processes $(\omega^j,\Gamma_j)_{j=1}^\infty$ starting from two different values of $\omega^1$ in such a way that the probability that the tails $(\omega^j,\Gamma_j)_{j=n}^\infty$ are not the same in the two processes decays exponentially in $n$. We now proceed to the calculation of $\nu(n)$ with $\omega^1 = (p,1)$; thus, henceforth, all expectations are conditioned on $\omega^1 = (p,1)$ if not otherwise stated. From \eqref{eq:nuDef} we have \begin{equation}\label{nu(n)3Terms} \nu(n) = \frac{1}{n} \sum_{i=1}^n \mathbb{E}(\Gamma_i^2) + \frac{2}{n} \sum_{1 \leq i<j \leq n} \mathbb{E}(\Gamma_i\Gamma_j) - 2 \sum_{i=1}^n \mathbb{E}(\Gamma_i) + n. \end{equation} By \eqref{eq:ExTil2_given_omegaip1}, the first term on the right hand side of \eqref{nu(n)3Terms} is \begin{align} \label{eq:FirstTermnu(n)} \frac{1}{n} \sum_{i=1}^n \mathbb{E}(\Gamma_i^2) = \frac{1}{n} \sum_{i=1}^n \left[ \frac{2-q+3q^2}{q(1+q)} + O(|\eta_2|^{i-1}) \right] = \frac{2-q+3q^2}{q(1+q)} + O(\frac1n). \end{align} Also, by \eqref{eq:rhop1n}, the third term on the right hand side of \eqref{nu(n)3Terms} is \begin{align} \label{eq:ThirdTermnu(n)} - 2 \sum_{i=1}^n \mathbb{E}(\Gamma_i) = - 2(n + \widehat{\rho}^{(p,1)}(n)) = -2n + \frac{2(2q-1)}{q^2(1+q)} + O(|\eta_2|^n). \end{align} To compute the second term on the right hand side of \eqref{nu(n)3Terms} we observe that, by \eqref{eq:GammajDist} and \eqref{eq:omegajplus1_determined}, the following hold. \begin{enumerate} \item[(a)] If $\omega^i = (p,1)$ then: \\ $~~~~\Gamma_i$ is distributed as $S_2$, and $\omega^{i+1} = (p,1)$ if $\Gamma_i \geq 1$. \item[(b)] If $\omega^i = (q,0)$ then: \\ $~~~~\Gamma_i$ is distributed as $S_0$, and $\omega^{i+1} = (p,1)$ if $\Gamma_i \geq 2$ and $(q,0)$ otherwise.
\end{enumerate} From (a) and \eqref{SidistRL2}-\eqref{eq:ExTil_given_omegaip1} we have \begin{align} \label{eq:ExGammaijGivenp1} \mathbb{E}\Big(\Gamma_i \Gamma_j \Big|\omega^i = (p,1)\Big) & = \mathbb{P}( S_2 \geq 1) \cdot \mathbb{E}(S_2 | S_2 \geq 1) \cdot \mathbb{E}\Big(\Gamma_j\Big|\omega^{i+1} = (p,1)\Big) \nonumber \\ & = \frac{1-q}{q} + \Big[ \frac{(1-q)(1-2q)}{q^2} \Big] \eta_2^{j - (i+1)}, \end{align} and from (b) and \eqref{SidistRL2}-\eqref{eq:ExTil_given_omegaiq0} we have \begin{align} \label{eq:ExGammaijGivenq0} \mathbb{E}\Big(\Gamma_i \Gamma_j \Big|\omega^i = (q,0)\Big) & = \mathbb{P}(S_0 \geq 2) \cdot \mathbb{E}(S_0| S_0 \geq 2) \cdot \mathbb{E}\Big(\Gamma_j\Big|\omega^{i+1} = (p,1)\Big) \nonumber \\ &~~~ + \mathbb{P}(S_0 = 1) \cdot 1 \cdot \mathbb{E}\Big(\Gamma_j\Big|\omega^{i+1} = (q,0)\Big) \nonumber \\ & = 2q + (1-2q)(1+q^2) \eta_2^{j-(i+1)}. \end{align} Since $\omega^i$ is distributed as $e_1A^{i-1}$, under our assumption $\omega^1 = (p,1)$, \eqref{eq:ExGammaijGivenp1} and \eqref{eq:ExGammaijGivenq0} give \begin{align*} \mathbb{E}(\Gamma_i \Gamma_j) & = \mathbb{P}\Big(\omega^i = (p,1)\Big) \cdot \mathbb{E}\Big(\Gamma_i \Gamma_j \Big| \omega^i = (p,1)\Big) ~+~ \mathbb{P}\Big(\omega^i = (q,0)\Big) \cdot \mathbb{E}\Big(\Gamma_i \Gamma_j \Big| \omega^i = (q,0)\Big) \nonumber \\ & = \left<e_1A^{i-1}, \left( \frac{1-q}{q} + \Big[ \frac{(1-q)(1-2q)}{q^2} \Big] \eta_2^{j - (i+1)} ~,~ 2q + (1-2q)(1+q^2) \eta_2^{j-(i+1)} \right) \right>.
\end{align*} Using the decomposition \eqref{eq:e1e2Decomposition}-\eqref{eq:c1c2d1d2}, this simplifies, after a bit of algebra, to \begin{align} \label{eq:ExGammaij} \mathbb{E}(\Gamma_i \Gamma_j) = 1 + \Big[\frac{(1-2q)(1-q+q^2)}{q}\Big] \eta_2^{j-(i+1)} + \Big[\frac{1-2q}{q}\Big] \eta_2^{i-1} + C \eta_2^{j-2}, \end{align} where $C$ is an unimportant constant which depends only on $q$. So, using \eqref{eq:RL2Eigenvalues}, we find \begin{align} \label{eq:SecondTermnu(n)} \frac{2}{n}& \sum_{1 \leq i<j \leq n} \mathbb{E}(\Gamma_i\Gamma_j) \nonumber \\ & = \frac{2}{n} \left( \frac{n(n-1)}{2} + \Big[\frac{(1-2q)(1-q+q^2)}{q}\Big] \frac{n}{1 - \eta_2} + \Big[\frac{1-2q}{q}\Big] \frac{n}{1 - \eta_2} + O(1) \right) \nonumber \\ & = n - 1 + \frac{2(1-2q)(2-q+q^2)}{q^2(1+q)} ~+~ O(\frac1n). \end{align} Combining \eqref{nu(n)3Terms}-\eqref{eq:ThirdTermnu(n)} and \eqref{eq:SecondTermnu(n)} and simplifying, one arrives at \eqref{eq:nun}. \\ \noindent \emph{Calculation and Analysis of $\widehat{\theta}(n)$}\\ Recall that, in general, $\widehat{\theta}(n) = 2\widehat{\rho}(n)/\nu(n)$. We denote by $\widehat{\theta}^{(p,0)}(n)$ the quantity $\widehat{\theta}(n)$ with $\omega^1 = (p,0)$ and define $\widehat{\theta}^{(p,0)} = \lim_{n \to \infty} \widehat{\theta}^{(p,0)}(n)$. Also, we define the analogous quantities for $(q,0), (p,1)$, and $(q,1)$.
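For completeness, we record the arithmetic (ours, not spelled out in the original) by which \eqref{eq:FirstTermnu(n)}, \eqref{eq:ThirdTermnu(n)}, and \eqref{eq:SecondTermnu(n)} combine into \eqref{eq:nun}: in \eqref{nu(n)3Terms} the order-$n$ terms $n - 2n + n$ cancel, and, up to $O(\frac1n)$, the constant term is
\begin{align*}
\frac{2-q+3q^2}{q(1+q)} - 1 + \frac{2(1-2q)(2-q+q^2)}{q^2(1+q)} + \frac{2(2q-1)}{q^2(1+q)} & = \frac{q(2-q+3q^2) + 2(1-2q)(2-q+q^2) + 2(2q-1) - q^2(1+q)}{q^2(1+q)} \\
& = \frac{2(1 - 2q + 2q^2 - q^3)}{q^2(1+q)} = \frac{2(1-q)(1-q+q^2)}{q^2(1+q)},
\end{align*}
in agreement with \eqref{eq:nun}.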
From \eqref{eq:rhop1n}-\eqref{eq:rhoq1n} and \eqref{eq:nun} we have \begin{align} \label{eq:thetap0} & \widehat{\theta}^{(p,0)}(n) = \widehat{\theta}^{(p,0)} + O(\frac1n) ~~\mbox{ where }~~ \widehat{\theta}^{(p,0)} = \frac{(1-2q)(1+q+q^2)}{(1-q)(1-q+q^2)}, \\ \label{eq:thetap1} & \widehat{\theta}^{(p,1)}(n) = \widehat{\theta}^{(p,1)} + O(\frac1n) ~~\mbox{ where }~~ \widehat{\theta}^{(p,1)} = \frac{1-2q}{(1-q)(1-q+q^2)}, \\ \label{eq:thetaq0} & \widehat{\theta}^{(q,0)}(n) = \widehat{\theta}^{(q,0)} + O(\frac1n) ~~\mbox{ where }~~ \widehat{\theta}^{(q,0)} = \frac{(2q-1)q}{(1-q)(1-q+q^2)}, \\ \label{eq:thetaq1} & \widehat{\theta}^{(q,1)}(n) = \widehat{\theta}^{(q,1)} + O(\frac1n) ~~\mbox{ where }~~ \widehat{\theta}^{(q,1)} = \frac{q^2(1-2q)}{(1-q)(1-q+q^2)}. \end{align} From \eqref{eq:thetap0} it follows that $\widehat{\theta}^{(p,0)} > 1$ is equivalent to $1-3q-q^2 > 0$, since $(1-2q)(1+q+q^2) - (1-q)(1-q+q^2) = q(1-3q-q^2)$. Consequently, $\widehat{\theta}^{(p,0)} > 1$ if and only if $q\in(0,q_1^*)$, where $q_1^*$ is as in the statement of the theorem. From \eqref{eq:thetap1} it follows that $\widehat{\theta}^{(p,1)}>1$ is equivalent to $q>2$; thus, we always have $\widehat{\theta}^{(p,1)}\leq1$. From \eqref{eq:thetaq1} it follows that $\widehat{\theta}^{(q,1)} > 1$ is equivalent to $q^3+q^2-2q+1<0$. Since $q^3+q^2-2q+1=q^3+(1-q)^2$, we always have $\widehat{\theta}^{(q,1)}\leq1$. Finally, from \eqref{eq:thetaq0} it follows that $\widehat{\theta}^{(q,0)} > 1$ is equivalent to $q^3+q-1>0$. Thus, $\widehat{\theta}^{(q,0)}>1$ if and only if $q>q^*_2$, where $q^*_2$ is as in the statement of the theorem. This establishes all claims made in the first paragraph of the proof. \end{proof} \subsection{Proof of Theorem \ref{RorL1}} \label{subsec:ProofThmRorL1} Thus far the proofs of transience/recurrence for the random walk $(X_n)$ have centered around an analysis of the right jumps Markov chain $(Z_x)$. For the proof of Theorem \ref{RorL1}, we will need to construct another auxiliary process called the left jumps Markov chain.
Consider the random walk $(X_n)_{n \geq 0}$ started from $X_0 = 0$ and restricted to $\mathbb{N}\cup\{0,-1\}$ by the following modification of its transition mechanism: when the walker is at a site $x\ge0$, it behaves as before, but at the site $-1$ it jumps right with probability one. Denote the modified random walk by $(\widetilde{X}_n)_{n \geq 0}$. Note that the modified random walk can be defined in terms of the extended single site Markov chains, $(\widehat{Y}_n^x)_{n \in \mathbb{N}} = (Y_n^x, J_n^x)_{n \in \mathbb{N}}$, $x\ge0$, along with an appropriately defined deterministic single site mechanism at $x=-1$. Fix $N\in\mathbb{Z}^+$ and let $\widetilde{T}_N=\inf\{n\ge0:\widetilde{X}_n=N\}$ denote the first time the modified random walk hits $N$. Note that $\widetilde{T}_N$ is almost surely finite. We define a process $(\widetilde{W}^{(N)}_x)_{x=0}^N$ by setting $\widetilde{W}_x^{(N)}$ equal to the number of times the modified walk $(\widetilde{X}_n)$ jumps left from site $x$ before time $\widetilde{T}_N$. That is, $$ \widetilde{W}^{(N)}_x=|\{n\le \widetilde{T}_N-1:\widetilde{X}_n=x, \widetilde{X}_{n+1}=x-1\}|. $$ We will refer to this process $(\widetilde{W}^{(N)}_x)_{x=0}^N$ as the \emph{left jumps $N$-chain}. It can also be defined directly in terms of the jump sequences $(J_k^x)_{k \in \mathbb{N}}$, $0 \leq x \leq N$: \begin{equation}\label{W} \widetilde{W}^{(N)}_N\equiv0, \ \widetilde{W}^{(N)}_x=\Theta^{(N)}_x- \widetilde{W}^{(N)}_{x+1}-1,\ \ x\in\{N-1,N-2,\ldots, 0\}, \end{equation} where \begin{equation} \label{Wtheta} \Theta^{(N)}_x=\inf \Big\{n\ge1: \sum_{k=1}^n \mathds{1} \{J_k^x=1\}= \widetilde{W}^{(N)}_{x+1}+1\Big\}. \end{equation} That is, $\widetilde{W}^{(N)}_x$ is the number of left jumps in the jump sequence $(J_k^x)_{k \in \mathbb{N}}$ before the $(\widetilde{W}^{(N)}_{x+1} + 1)$-th right jump.
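A concrete illustration of \eqref{W}-\eqref{Wtheta} (ours, with hypothetical values): suppose $\widetilde{W}^{(N)}_{x+1} = 1$ and the jump sequence at site $x$ begins $(J_1^x, J_2^x, J_3^x, J_4^x, J_5^x, \ldots) = (0, 1, 0, 0, 1, \ldots)$, where $1$ denotes a right jump. Then $\Theta^{(N)}_x = 5$ is the index of the second right jump, and \eqref{W} gives $\widetilde{W}^{(N)}_x = 5 - 1 - 1 = 3$, counting the three left jumps that precede it.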
In particular, $\widetilde{W}^{(N)}_x$ is independent of $\widetilde{W}^{(N)}_{x+2},\ldots,\widetilde{W}^{(N)}_{N}$ conditioned on $\widetilde{W}^{(N)}_{x+1}$, so the sequence $(\widetilde{W}^{(N)}_N, \ldots, \widetilde{W}^{(N)}_0)$ is Markovian. The distribution of the jump sequence $(J_k^x)_{k \in \mathbb{N}}$ is the same for all $x \geq 0$, if the initial environment $\omega$ is constant for all $x \geq 0$. So, in this case, the transition probabilities \begin{align*} \mathbb{P}(&\widetilde{W}^{(N)}_x = \ell | \widetilde{W}^{(N)}_{x+1} = m) \\ & ~~= \mathbb{P} \Big( \inf \Big\{ n \geq 1: \sum_{k = 1}^n \mathds{1} \{J_k^x = 1\} = m + 1 \Big\} - (m+1) = \ell \Big) \end{align*} are independent of $N$ and $x \in \{0,\ldots,N-2\}$, and we may define a single time-homogeneous Markov chain $(W_n)_{n=0}^\infty$ such that $(\widetilde{W}^{(N)}_N, \widetilde{W}^{(N)}_{N-1}, \ldots, \widetilde{W}^{(N)}_0)$ has the same distribution as $(W_0,W_1,\ldots, W_N)$, for all $N$. We call $(W_n)_{n=0}^\infty$ the \emph{left jumps Markov chain}. The following proposition characterizes the transience or recurrence of the original random walk $(X_n)$ in terms of whether or not the left jumps Markov chain is positive recurrent. \begin{Prop}\label{leftjumps} If $X_0 = 0$ and the initial environment $\omega(x)$ is constant for $x \geq 0$, then the random walk $(X_n)$ has positive probability of being transient to $+\infty$ if and only if the left jumps Markov chain $(W_n)$ is positive recurrent. \end{Prop} \begin{proof} Arguments exactly like the proof of part (iii) of Lemma \ref{lem:SimpleTransConditions} show that the modified random walk $(\widetilde{X}_n)$ either has probability 1 of being transient to $+\infty$ or probability 1 of being recurrent, and clearly the former occurs if and only if the original random walk $(X_n)$ has a positive probability of being transient to $+\infty$.
Thus, it suffices to show that the left jumps Markov chain $(W_n)$ is positive recurrent if and only if the modified random walk $(\widetilde{X}_n)$ is transient to $+\infty$. Now, by construction of the left jumps Markov chain $(W_n)$, we know $W_N$ and $\widetilde{W}^{(N)}_0$ have the same distribution for each $N > 0$, where $\widetilde{W}^{(N)}_0$ is the number of jumps of the modified random walk $(\widetilde{X}_n)$ from 0 to $-1$ before it first reaches $N$. Thus, the distribution of $W_N$ is stochastically increasing in $N$, and it converges to a limiting finite distribution if and only if the modified random walk $(\widetilde{X}_n)$ is transient. On the other hand, since $(W_n)_{n \geq 0}$ is a (time-homogeneous) irreducible, aperiodic Markov chain, the distribution of $W_N$ converges to a finite limiting distribution if and only if this chain is positive recurrent. \end{proof} We now use Proposition \ref{leftjumps} to prove Theorem \ref{RorL1}. \begin{proof}[Proof of Theorem \ref{RorL1}] By symmetry it suffices to treat the case $R=1$. In the statement of the theorem, it is assumed that the initial environment is constant in a neighborhood of $+\infty$ in the negative feedback case. For the proof, we will make this assumption even in the positive feedback case. This causes no problem because, in the positive feedback case, if we can prove that the probability of transience to $+\infty$ is 0 for any constant environment then, by Lemma \ref{lem:ComparisonOfEnvironments}, the same is true for any non-constant initial environment. Without loss of generality, we may also assume that the initial environment is constant for all $x\ge0$. In this case, by Proposition \ref{leftjumps}, it suffices to show the left jumps Markov chain $(W_n)$ is not positive recurrent.
By construction of the left jumps chain we have \begin{align*} \mathbb{P}(W_n=\cdot|W_{n-1}=m) = \mathbb{P}(\widetilde{W}_x^{(N)}=\cdot|\widetilde{W}_{x+1}^{(N)}=m), \end{align*} where the right hand side is independent of $N$ and $x \in \{0,\ldots,N-2\}$ (due to the assumption on the initial environment). Now, if we condition on $\widetilde{W}_{x+1}^{(N)} = m$, it follows from \eqref{W} and \eqref{Wtheta} that $\widetilde{W}_x^{(N)}$ is equal to the number of left jumps in the jump sequence $(J_k^x)_{k \in \mathbb{N}}$ before the time of the $(m+1)$-th right jump. Similarly to the analysis of the right jumps chain, we decompose $\widetilde{W}_x^{(N)}$ as \begin{align*} \widetilde{W}_x^{(N)} = \sum_{j = 1}^{m+1} V_j, \end{align*} where $V_j$ is the number of left jumps in the sequence $(J_k^x)_{k \in \mathbb{N}}$ between the $(j-1)$-th and $j$-th right jumps. Since $R = 1$, the configuration at site $x$ is always $(p,0)$ immediately after a right jump from site $x$. So, the starting configuration for each of the ``left jump sessions'' after the first one is $(p,0)$, independent of the number of left jumps in all previous sessions. It follows that the random variables $V_1,\ldots,V_{m+1}$ are independent and $V_2,\ldots,V_{m+1}$ are i.i.d.\ with common distribution $V$, which is the distribution of the number of left jumps from site $x$ before the first right jump, starting in the $(p,0)$ configuration: \begin{equation*} \mathbb{P}(V=k)=\begin{cases} (1-p)^k p, \ 0 \leq k \leq L-1;\\ (1-p)^L(1-q)^{k-L}q,\ k\ge L.\end{cases} \end{equation*} (This is analogous to the situation $L=1$ for the right jumps Markov chain, where $U(m) = \sum_{j = 1}^m \Gamma_j$ with $\Gamma_1,\ldots,\Gamma_m$ independent and $\Gamma_2, \ldots , \Gamma_m$ i.i.d.) We now show that since $\alpha=\frac12$, $\mathbb{E}(V)=1$. After a somewhat messy calculation and some algebraic simplification, one finds that $$ \mathbb{E}(V)=\frac{(1-p)q+(1-p)^L(p-q)}{pq}.
$$ From this it follows that $\mathbb{E}(V)=1$ if and only if $q=\frac{p(1-p)^L}{2p-1+(1-p)^L}$. Since $R = 1$ and $\alpha = 1/2$, we know that $q= q_0 = \frac{p(1-p)^L}{2p-1+(1-p)^L}$ by Remark 2 after Proposition \ref{prop:PropertiesOfAlpha}. So, we conclude that $\mathbb{E}(V)=1$. We have now shown that \begin{align*} \mathbb{P}(W_n=\cdot|W_{n-1}=m) = \mathbb{P} \Big(\sum_{j=1}^{m+1} V_j = \cdot \Big), \end{align*} where $V_1,\ldots,V_{m+1}$ are independent and $V_2,\ldots,V_{m+1}$ are i.i.d.\ with mean $1$. So, the Markov chain $(W_n)$ has the transition probabilities of a critical branching process with immigration. The immigration term $V_1$ depends on the initial environment, but is always nonnegative, not identically zero, and has finite mean. Also, clearly $\mathbb{E}(V^2) < \infty$, so the branching terms have finite variance. It thus follows from \cite{Seneta1970} that $\frac{W_n}n$ converges in law to a certain nonzero limiting distribution, which implies that the Markov chain $(W_n)_{n \geq 0}$ cannot be positive recurrent. \end{proof} \section{Analysis of $\alpha$} \label{sec:AnalysisOfAlpha} In this section we prove Proposition \ref{prop:PropertiesOfAlpha}, which characterizes some properties of the important quantity \begin{align} \label{eq:AlphaFormula} \alpha = p \cdot \pi_p + q \cdot \pi_q \end{align} that determines the direction of transience for our random walk (away from the borderline critical case). We recall from \eqref{eq:pippiq} that \begin{align*} \pi_p & = \frac{(1-q)q^R(1 - (1-p)^L)}{(1-q)q^R(1 - (1-p)^L) + p(1-p)^L(1-q^R)} ~, \\ \pi_q & = \frac{p(1-p)^L(1-q^R)}{(1-q)q^R(1 - (1-p)^L) + p(1-p)^L(1-q^R)}. \end{align*} The various pieces of the proposition will be proved separately, but we begin first with two useful observations.
\begin{itemize} \item[(I)] For any fixed $q, R, L$ the quantity \begin{align} \label{eq:Ratio_pippiq} \frac{\pi_p}{\pi_q} = \frac{(1-q)q^R}{1-q^R} \cdot \frac{1 - (1-p)^L}{p(1-p)^L} \end{align} satisfies $\lim_{p \rightarrow 1} \left( \frac{\pi_p}{\pi_q} \right) = \infty.$ Since $\pi_p + \pi_q = 1$, this implies $\lim_{p \to 1} \pi_p = 1$. \item[(II)] For any fixed $q, R, L$ the quantity $\frac{\pi_p}{\pi_q}$ satisfies \begin{align*} \frac{d}{dp} \left( \frac{\pi_p}{\pi_q} \right) = \frac{(1-q)q^R}{1-q^R} \cdot \frac{p(L+1) + (1-p)^{L+1} - 1}{p^2 (1-p)^{L+1}} > 0 ~,~ \forall p \in (0,1). \end{align*} Since $\pi_p + \pi_q = 1$, this implies $\frac{d}{dp}(\pi_p) > 0$, $\forall p \in (0,1)$. So, \begin{align} \label{eq:DerivativeAlpha} &\frac{d}{dp} (\alpha) = \frac{d}{dp} (p \cdot \pi_p + q \cdot \pi_q) \nonumber = \frac{d}{dp} (p \cdot \pi_p + q \cdot (1-\pi_p)) \nonumber \\ & = \pi_p + (p-q) \cdot \frac{d}{dp} (\pi_p) > 0, \mbox{ for all } p \geq q. \end{align} \end{itemize} \noindent \emph{Proof of (vi):} By (I), $\lim_{p \to 1} \alpha = \lim_{p \to 1} (p \cdot \pi_p + q \cdot \pi_q) = 1 \cdot 1 ~+~ q \cdot 0 = 1$. \\ \noindent \emph{Proof of (i):} This is immediate from (\ref{eq:AlphaFormula}) since $\pi_p + \pi_q = 1$ and $\pi_p, \pi_q > 0$, for any $p,q$. \\ \noindent \emph{Proof of (ii):} If $q < 1/2$, then $\alpha < 1/2$ for all $p \leq 1/2$, by (\ref{eq:AlphaFormula}). But, by (II) and (vi), we also know that $\alpha(p)$ is monotonically increasing on the interval $[1/2,1) \subset [q,1)$, with $\lim_{p \to 1} \alpha(p) = 1$. Thus, the claim follows by continuity of $\alpha(p)$. \\ \noindent \emph{Proof of (iii):} Plugging $p = 1-q$ into (\ref{eq:DefAlpha}) and simplifying, one finds that \begin{align*} \alpha(1-q) < 1/2 & \Longleftrightarrow q^R(1/2 - q) - q^L(1/2-q) < 0, \mbox{ and }\\ \alpha(1-q) > 1/2 & \Longleftrightarrow q^R(1/2 - q) - q^L(1/2-q) > 0.
\end{align*} Thus, for $q < 1/2$ and $R > L$, $\alpha(1-q) < 1/2$, which implies $p_0 > 1-q$; while, for $q < 1/2$ and $R < L$, $\alpha(1-q) > 1/2$, which implies $p_0 < 1-q$. This proves \eqref{eq:qPlusp0}. Now, by (II) and symmetry considerations, for any fixed $R,L,p$ we know that $d/dq(\alpha) > 0$ for $q \leq p$. Thus, for any $0 < q < q' < 1/2$, we have \begin{align*} \alpha(p_0(q,R,L),q',R,L) > \alpha(p_0(q,R,L),q,R,L) = 1/2, \end{align*} which implies $p_0(q',R,L) < p_0(q,R,L)$. So, $p_0$ is a decreasing function of $q$, for $q \in (0,1/2)$. \\ \noindent \emph{Proof of (iv):} Plugging $L=1$ into (\ref{eq:DefAlpha}) and simplifying, one finds that \begin{align*} \alpha = 1/2 \Longleftrightarrow p(1 - 2q + q^R) = 1 - 2q + q^{R+1} \end{align*} and, similarly, \begin{align*} \alpha < 1/2 \Longleftrightarrow p(1 - 2q + q^R) < 1 - 2q + q^{R+1}, \\ \alpha > 1/2 \Longleftrightarrow p(1 - 2q + q^R) > 1 - 2q + q^{R+1}. \end{align*} (iv) follows by considering separately the two cases $1 - 2q + q^{R+1} > 0$ and $1 - 2q + q^{R+1} \leq 0$. \\ \noindent \emph{Proof of (v):} If $L = R$, then plugging $p = 1-q$ into (\ref{eq:DefAlpha}) gives $\alpha = 1/2$. So, by (ii), if $q < 1/2$ then $p_0 = 1 - q$ is the unique critical point. On the other hand, for any $q > 1/2$, if $L = R$ is sufficiently large then there exists another critical point $p_0' > 1 - q$. This follows from (vi), continuity of $\alpha$, and the following claim. \\ \noindent \emph{Claim}: For any fixed $q > 1/2$, if $L = R$ is sufficiently large then $\frac{d}{dp}(\alpha) |_{p = 1-q} < 0$. Thus, there exists some $\epsilon > 0$ such that $\alpha(1-q + \epsilon) < 1/2$.
\\ \noindent \emph{Proof}: Computing $\frac{d}{dp}(\alpha)$ directly from (\ref{eq:DefAlpha}) and then substituting $L = R$ and $p = 1 - q$, one finds, after some lengthy simplifications, that the condition $\frac{d}{dp}(\alpha) |_{p = 1-q} < 0$ is equivalent to the condition \begin{align*} R(1-q)(1-2q) + q(1-q^R) < 0. \end{align*} For fixed $q > 1/2$, this condition is satisfied for all sufficiently large $R$. $\square$ \appendix \section{Solution of Linear Systems} \label{sec:SolutionLinearSystems} \subsection{Stationary Distribution of Single Site Markov Chains} \label{subsec:StationaryDistributionSingleSiteMC} Here we solve the linear system $\{ \pi = \pi M , \sum_{\lambda} \pi_\lambda = 1\}$ for the stationary distribution $\pi$ of the single site Markov chain transition matrix $M$. In expanded form this system becomes \begin{align} \label{eq:pi_piEqual} & \pi_{(p,i)} = (1-p) \cdot \pi_{(p,i-1)} ~,~ 1 \leq i \leq L-1\\ \label{eq:pi_p0Equal} & \pi_{(p,0)} = p \cdot \pi_p + q \cdot \pi_{(q,R-1)} \\ \label{eq:pi_qiEqual} & \pi_{(q,i)} = q \cdot \pi_{(q, i -1)} ~,~ 1 \leq i \leq R-1 \\ \label{eq:pi_q0Equal} & \pi_{(q,0)} = (1-q) \cdot \pi_q + (1-p) \cdot \pi_{(p,L-1)} \\ \label{eq:pipPluspiq_Equal1} & \pi_p + \pi_q = 1, \end{align} where $\pi_p = \sum_{i=0}^{L-1} \pi_{(p,i)}$ and $\pi_q = \sum_{i=0}^{R-1} \pi_{(q,i)}$. Applying (\ref{eq:pi_piEqual}) and (\ref{eq:pi_qiEqual}) repeatedly gives \begin{align} \label{eq:pi_piEq} \pi_{(p,i)} & = (1-p)^i \cdot \pi_{(p,0)} ~,~0 \leq i \leq L-1; \\ \label{eq:pi_qiEq} \pi_{(q,i)} & = q^i \cdot \pi_{(q,0)} ~,~0 \leq i \leq R-1. \end{align} Hence, \begin{align} \label{eq:pi_pEq} \pi_p &= \sum_{i=0}^{L-1} (1-p)^i \cdot \pi_{(p,0)} = \frac{1 - (1-p)^L}{p} \cdot \pi_{(p,0)}~, \\ \label{eq:pi_qEq} \pi_q & = \sum_{i=0}^{R-1} q^i \cdot \pi_{(q,0)} = \frac{1 - q^R}{1 - q} \cdot \pi_{(q,0)}. 
\end{align} Plugging (\ref{eq:pi_qiEq}) and (\ref{eq:pi_pEq}) into (\ref{eq:pi_p0Equal}) gives \begin{align*} \pi_{(p,0)} = p \cdot \left( \frac{1 - (1-p)^L}{p} \cdot \pi_{(p,0)} \right) ~+~ q \cdot \left( q^{R-1} \cdot \pi_{(q,0)} \right), \end{align*} which implies \begin{align} \label{eq:pi_p0_Expression1} \pi_{(p,0)} = \pi_{(q,0)} \cdot \frac{q^R}{(1-p)^L}. \end{align} But, by (\ref{eq:pipPluspiq_Equal1}), (\ref{eq:pi_pEq}), and (\ref{eq:pi_qEq}), we also have \begin{align*} \frac{1 - (1-p)^L}{p} \cdot \pi_{(p,0)} ~+~ \frac{1 - q^R}{1 - q} \cdot \pi_{(q,0)} = 1 \end{align*} or, equivalently, \begin{align} \label{eq:pi_p0_Expression2} \pi_{(p,0)} = \left(1 - \pi_{(q,0)} \frac{1 - q^R}{1-q} \right) \cdot \frac{p}{1 - (1-p)^L}. \end{align} Equating the right hand sides of (\ref{eq:pi_p0_Expression1}) and (\ref{eq:pi_p0_Expression2}) and solving for $\pi_{(q,0)}$ gives \begin{align*} \pi_{(q,0)} = \frac{p(1-q)(1-p)^L}{(1-q)q^R(1 - (1-p)^L) + p(1-p)^L(1-q^R)}. \end{align*} Substituting this value of $\pi_{(q,0)}$ into \eqref{eq:pi_p0_Expression1} gives an explicit expression for $\pi_{(p,0)}$, and the values of $\pi_{(q,i)}, 1 \leq i \leq R-1$, and $\pi_{(p,i)}, 1 \leq i \leq L-1$, are then easily found by substituting the expressions for $\pi_{(p,0)}$ and $\pi_{(q,0)}$ in (\ref{eq:pi_piEq}) and (\ref{eq:pi_qiEq}), giving \eqref{eq:pipi_piqi}. \subsection{Expected Hitting Times with $R=1$} \label{subsec:ExpectedHittingTimes} Here we solve the linear system (\ref{eq:LinearSystemExpectedHittingTimes}) for the expected hitting times $a_i$, $0 \leq i \leq L$. As shown in the proof of Theorem \ref{thm:R1Speed}, using soft methods, these expected hitting times must all be finite. For simplicity of notation we define $b_i = a_{L-i}$, $0 \leq i \leq L$. After rearranging slightly, the system (\ref{eq:LinearSystemExpectedHittingTimes}) becomes \begin{align*} b_{i+1} & = 1 + (1-p)(a_0 + b_i) ~,~0 \leq i \leq L-1 \\ b_0 & = \frac{1}{q} + \left(\frac{1-q}{q}\right) a_0.
\end{align*} Thus, for each $0 \leq i \leq L$, we have \begin{align*} b_i = u_i + v_i \cdot a_0 \end{align*} where the sequences $(u_i)_{i=0}^L$ and $(v_i)_{i=0}^L$ are defined recursively by \begin{align*} u_0 & = 1/q ~~\mbox{ and }~~ u_{i+1} = 1 + (1-p)u_i,~ 0 \leq i \leq L-1,\\ v_0 & = (1-q)/q ~~\mbox{ and }~~ v_{i+1} = (1-p)(1 + v_i),~ 0 \leq i \leq L-1. \end{align*} By induction on $i$, we find that, for each $1 \leq i \leq L$, \begin{align*} u_i & = \frac{(1-p)^i}{q} + \sum_{j = 0}^{i-1} (1-p)^j = \frac{1 + (p/q - 1)(1-p)^i}{p}~, \\ v_i & = \frac{(1-p)^i}{q} + \sum_{j = 1}^{i-1} (1-p)^j = \frac{1 - p + (p/q - 1)(1-p)^i}{p}. \end{align*} Substituting, first for the $b_i$'s and then for the $a_i$'s with $a_i = b_{L-i}$, one obtains (\ref{eq:Def_a0}) and (\ref{eq:Def_ai}). \section{Proof of Lemma \ref{lem:SimpleTransConditions}} \label{sec:BasicTransienceConditions} Here we prove Lemma \ref{lem:SimpleTransConditions} from section \ref{subsec:BasicLemmas}. The three parts are proved separately. In each case, we prove only the first of the two statements, since the second follows by symmetry. The following notation will be used for the proofs. \begin{itemize} \item $T_x^{(i)}$ is the $i$-th hitting time of site $x$: \begin{align*} T_x^{(1)} = T_x ~~\mbox{ and }~~ T_x^{(i+1)} = \inf\{n > T_x^{(i)}: X_n = x\}, \end{align*} with the convention $T_x^{(j)} = \infty$, for all $j > i$, if $T_x^{(i)} = \infty$. \item $m_i = \sup \{ X_n : n \leq T_0^{(i)} \}$ is the maximum position of the random walk up to the $i$-th hitting time of site $0$. \item For an initial environment $\omega$ and path $\zeta = (x_0,...,x_n)$, $\omega^{(\zeta)}$ is the environment induced at time $n$ by following the path $\zeta$ starting in $\omega$: \begin{align*} \{\omega_0 = \omega, X_0 = x_0,...,X_n = x_n\} \Longrightarrow \omega_n = \omega^{(\zeta)}.
\end{align*} \end{itemize} \noindent \emph{Proof of (ii):} Clearly, $\mathbb{P}_{\omega}(X_n \rightarrow \infty) \leq \mathbb{P}_{\omega}(\liminf_{n \to \infty} X_n > -\infty)$. To show that the reverse inequality also holds, observe that, for any $k \in \mathbb{Z}$, $\mathbb{P}_{\omega}(\liminf_{n \to \infty} X_n = k) = 0$. Thus, \begin{align*} \mathbb{P}_{\omega}\left(\liminf_{n \to \infty} X_n > -\infty, X_n \not\rightarrow \infty \right) = \mathbb{P}_{\omega}\left(-\infty< \liminf_{n \to \infty} X_n <\infty \right) = 0. \end{align*} \noindent \emph{Proof of (i):} By (ii), $\mathbb{P}_{\omega}(X_n \rightarrow \infty) \geq \mathbb{P}_{\omega}(\mathbb{A}M_0^+)$. Thus, $\mathbb{P}_{\omega}(X_n \rightarrow \infty) > 0$, if $\mathbb{P}_{\omega}(\mathbb{A}M_0^+) > 0$. On the other hand, if $\mathbb{P}_{\omega}(X_n \rightarrow \infty) > 0$ then there exists some finite path $\zeta = (x_0,...,x_n)$, such that $x_0 = 0$, $x_n = 2$, and \begin{align*} \mathbb{P}_{\omega}(X_m > 1, \forall m \geq n|X_0 = x_0, ..., X_n = x_n) > 0. \end{align*} We construct from $\zeta = (x_0,...,x_n)$ the reduced path $\widetilde{\zeta} = (\widetilde{x}_0,...,\widetilde{x}_{\widetilde{n}})$ by setting $\widetilde{x}_0 = x_0 = 0$, and then removing from the tail $(x_1,...,x_n)$ all steps before the first hitting time of site $1$ and all steps in any leftward excursions from site $1$. For example, \begin{align*} \mbox{ if } \zeta & = (0,\textbf{-1},\textbf{0},1,2,1,\textbf{0},\textbf{1}, 2, 1, \textbf{0} ,\textbf{-1},\textbf{-2},\textbf{-1},\textbf{-2},\textbf{-1},\textbf{0},\textbf{1},2,3,2), \\ \mbox{ then } \widetilde{\zeta} & = (0,1,2,1,2,1,2,3,2) \end{align*} (where we denote the removed steps in bold for visual clarity). By construction, $\omega^{(\widetilde{\zeta})}(x) = \omega^{(\zeta)}(x)$, for all $x \geq 2$.
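As a side check (ours, not the authors'), the reduction rule just illustrated is easy to sketch in code; applied to the example path it reproduces the reduced path $\widetilde{\zeta}$ above. The helper name `reduce_path` is our own.

```python
# Illustration (not part of the paper): reduce a path by dropping everything
# before the first visit to site 1 and excising every leftward excursion
# from site 1, as described above.
def reduce_path(path):
    """Return the reduced path; assumes path[0] == 0 and that site 1 is visited."""
    reduced = [path[0]]
    i = path.index(1)  # skip all steps before the first hitting time of site 1
    while i < len(path):
        reduced.append(path[i])
        if path[i] == 1 and i + 1 < len(path) and path[i + 1] == 0:
            i += 1
            while path[i] != 1:  # excise the leftward excursion from site 1
                i += 1
            i += 1               # drop the duplicate visit to 1 on return
        else:
            i += 1
    return reduced

zeta = [0, -1, 0, 1, 2, 1, 0, 1, 2, 1, 0, -1, -2, -1, -2, -1, 0, 1, 2, 3, 2]
assert reduce_path(zeta) == [0, 1, 2, 1, 2, 1, 2, 3, 2]
```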
So, $\mathbb{P}_{\omega}(X_m > 1, \forall m \geq \widetilde{n}|(X_0, ..., X_{\widetilde{n}}) = \widetilde{\zeta}) = \mathbb{P}_{\omega}(X_m > 1, \forall m \geq n|(X_0,...,X_n) = \zeta) > 0$. Thus, \begin{align*} \mathbb{P}_{\omega}(\mathbb{A}M_0^+) \geq \mathbb{P}_{\omega}( (X_0,...,X_{\widetilde{n}}) = \widetilde{\zeta}) \cdot \mathbb{P}_{\omega}(X_m > 1, \forall m \geq \widetilde{n}|(X_0,...,X_{\widetilde{n}}) = \widetilde{\zeta}) > 0. \end{align*} \noindent \emph{Proof of (iii):} Since we assume $\mathbb{P}_{\omega}(X_n \rightarrow - \infty) = 0$, it follows from (ii) that (a) $T_x$ is $\mathbb{P}_{\omega}$ a.s. finite for each $x \geq 0$, and (b) every time the random walk steps left from a site $x$ it will eventually return with probability $1$. Now (b) implies that the probability that the walk is transient to $+\infty$, after first hitting a site $x \geq 0$, is independent of the trajectory taken to get to $x$. That is, $\mathbb{P}_{\omega}(X_n\rightarrow \infty|(X_0,...,X_n) = \zeta) = \mathbb{P}_{\omega}(X_n\rightarrow \infty|T_x < \infty)$, for any $x \geq 0$ and path $\zeta = (x_0, ... ,x_n)$ such that $x_0 = 0, x_n = x$, and $x_m < x$ for $m < n$. Combining this last observation with (a) shows that \begin{align*} & \mathbb{P}_{\omega}(X_n\rightarrow \infty|T_0^{(i)} < \infty, m_i = x) = \mathbb{P}_{\omega}(X_n \rightarrow \infty|T_0^{(i)} < \infty, m_i = x, T_{x+1} < \infty) \nonumber \\ &= \mathbb{P}_{\omega}(X_n \rightarrow \infty|T_{x+1} < \infty) = \mathbb{P}_{\omega}(X_n \rightarrow \infty), \mbox{ for all $x \geq 0$ and $i \geq 1$. } \end{align*} So, $\mathbb{P}_{\omega}(X_n\rightarrow \infty|T_0^{(i)} < \infty) = \mathbb{P}_{\omega}(X_n\rightarrow \infty)$, for all $i \geq 1$. Thus, by (ii), \begin{align*} \mathbb{P}_{\omega}(X_n \not\rightarrow \infty) = \mathbb{P}_{\omega}(X_n \not\rightarrow \infty|T_0^{(i)} < \infty) = \prod_{j=i}^{\infty} \mathbb{P}_{\omega}(T_0^{(j+1)} < \infty|T_0^{(j)} < \infty), \forall i \geq 1.
\end{align*} Since the LHS is independent of $i$, the product on the RHS is constant for $i \geq 1$. Thus, there are two possibilities: either the product is $0$ (for all $i \geq 1$) or $\mathbb{P}_{\omega}(T_0^{(j+1)} < \infty|T_0^{(j)} < \infty) = 1$, for all $j \geq 1$. In the latter case, $\mathbb{P}_{\omega}(X_n \not\rightarrow \infty) = 1$, which contradicts the assumption that $\mathbb{P}_{\omega}(X_n \rightarrow \infty) > 0$. In the former case, $\mathbb{P}_{\omega}(X_n \rightarrow \infty) = 1$, as required. $\square$ \section{Proof of Lemma \ref{lem:StrongLawForNx}} \label{sec:StrongLawForNx} The following strong law for sums of dependent random variables is a special case of \cite[Theorem 1]{Etemadi1983} with $w_i = 1$ and $W_i = i$. \begin{The} \label{thm:StrongLawDepedentRVs} Let $(\xi_i)_{i \in \mathbb{N}}$ be a sequence of nonnegative random variables satisfying: \begin{enumerate} \item $\sup_i \mathbb{E}(\xi_i) < \infty$. \item $\mathbb{E}(\xi_i^2) < \infty$, for each $i$. \item $\sum_{j = 1}^{\infty} \sum_{i=1}^j \frac{1}{j^2} \cdot Cov^+(\xi_i,\xi_j) < \infty$. \end{enumerate} Then \begin{align*} \frac{1}{n} \sum_{i=1}^n (\xi_i - \mathbb{E}(\xi_i)) \stackrel{a.s.}{\longrightarrow} 0, \mbox{ as } n \rightarrow \infty. \end{align*} \end{The} Using this theorem we will prove Lemma \ref{lem:StrongLawForNx}. Throughout our proof the initial environment $\omega$ is fixed, and all random variables are distributed according to the measure $\mathbb{P}_{\omega}$, which we will abbreviate simply as $\mathbb{P}$. Also, $\beta > 0$ is the constant given in Corollary \ref{cor:PrAnPlusGreaterEqualBeta}. \begin{proof}[Proof of Lemma \ref{lem:StrongLawForNx}] By Corollary \ref{cor:NxDominatedByGeometric}, \begin{align} \label{eq:ENxVarNxBounds} \mathbb{E}(N_x) \leq \frac{1}{\beta} ~~\mbox{ and }~~ \mathbb{E}(N_x^2) \leq \frac{2 - \beta}{\beta^2}~,~ \mbox{ for each $x \in \mathbb{N}$}.
\end{align} Thus, by Theorem \ref{thm:StrongLawDepedentRVs}, it suffices to show that \begin{align*} \sum_{y = 1}^{\infty} \sum_{x = 1}^{y} ~ \frac{1}{y^2} Cov^{+} (N_x,N_y) < \infty. \end{align*} Since $N_x$ and $N_y$ are nonnegative integer-valued random variables, $Cov(N_x,N_y)$ can be represented as the following absolutely convergent double sum: \begin{align} \label{eq:CovNxNySumExpression} Cov(N_x,N_y) = \sum_{j = 1}^{\infty} \sum_{k = 1}^{\infty} \Big( \mathbb{P}(N_x \geq k, N_y \geq j) - \mathbb{P}(N_x \geq k) \mathbb{P}(N_y \geq j) \Big). \end{align} To bound this sum we will need the following two estimates for the differences $D_{k,j} \equiv \mathbb{P}(N_x \geq k, N_y \geq j) - \mathbb{P}(N_x \geq k) \mathbb{P}(N_y \geq j)$: \begin{align} \label{eq:jkEstimate} & \mbox{ For any $1 \leq x < y$ and $k,j \in \mathbb{N}$}, D_{k,j} \leq (1-\beta)^{\max\{j,k\}-1}. \\ \label{eq:xyEstimate} & \mbox{ For any $1 \leq x < y$ and $k,j \in \mathbb{N}$}, D_{k,j} \leq (1 - \beta)^{y-x}. \end{align} \eqref{eq:jkEstimate} follows from Corollary \ref{cor:NxDominatedByGeometric}: \begin{align*} D_{k,j} & \equiv \mathbb{P}( N_x \geq k , N_y \geq j) - \mathbb{P}(N_x \geq k) \mathbb{P}(N_y \geq j) \\ & \leq \mathbb{P}(N_x \geq k, N_y \geq j) \leq \min\{ \mathbb{P}(N_x \geq k), \mathbb{P}(N_y \geq j) \} \leq (1-\beta)^{\max\{j,k\}-1}. \end{align*} To see \eqref{eq:xyEstimate}, recall that $N_x^y$ and $N_y$ are independent for all $1 \leq x < y$, by Lemma \ref{lem:SimpleConsequencesAlphaGreaterHalf}.
Thus, for any $1 \leq x < y$, we have \begin{align*} \mathbb{P}(N_x \geq k, N_y \geq j) & = \mathbb{P}(N_x^y \geq k, N_y \geq j) + \mathbb{P}(N_x^y < k, N_x \geq k, N_y \geq j) \\ & = \mathbb{P}(N_x^y \geq k) \mathbb{P}(N_y \geq j) + \mathbb{P}(N_x^y < k, N_x \geq k, N_y \geq j) \\ & \leq \mathbb{P}(N_x \geq k) \mathbb{P}(N_y \geq j) + \mathbb{P}(B_y \geq y - x) \\ & \leq \mathbb{P}(N_x \geq k) \mathbb{P}(N_y \geq j) + (1 - \beta)^{y - x} \end{align*} by Corollary \ref{cor:BxDominatedByGeometric}. Now, for given $1 \leq x < y$, let $n = y - x$ and let $N = \floor{(1-\beta)^{-n/4}}$. Breaking the (absolutely convergent) double sum in (\ref{eq:CovNxNySumExpression}) into pieces and applying Fubini's Theorem gives \begin{align*} Cov(N_x,N_y) & = \sum_{j=1}^N \sum_{k = 1}^N D_{k,j} ~+~ \sum_{j=1}^N \sum_{k = N+1}^{\infty} D_{k,j} ~+~ \sum_{k=1}^N \sum_{j = N+1}^{\infty} D_{k,j} \\ &~~+ \sum_{k=N+1}^{\infty} \sum_{j = k}^{\infty} D_{k,j} ~+~ \sum_{j=N+1}^{\infty} \sum_{k = j+1}^{\infty} D_{k,j}. \end{align*} By \eqref{eq:xyEstimate}, the first term on the RHS of this equation is bounded above by \\ $N^2 (1-\beta)^n$. Similarly, by \eqref{eq:jkEstimate}: \begin{itemize} \item The second term is bounded by $N \cdot \sum_{k=N+1}^{\infty} (1-\beta)^{k-1} = N (1-\beta)^N/\beta$. \item The third term is bounded by $N \cdot \sum_{j=N+1}^{\infty} (1-\beta)^{j-1} = N (1-\beta)^N/\beta$. \item The fourth term is bounded by $\sum_{k=N+1}^{\infty} \sum_{j = k}^{\infty} (1-\beta)^{j-1} = (1 - \beta)^N/\beta^2$. \item The fifth term is bounded by $\sum_{j=N+1}^{\infty} \sum_{k = j+1}^{\infty} (1-\beta)^{k-1} = (1 - \beta)^{N+1}/\beta^2$. \end{itemize} Since $N \leq (1-\beta)^{-n/4}$, the upper bound on the first term is at most $(1-\beta)^{n/2}$, and the same is also true for the upper bounds on each of the other four terms for all sufficiently large $n$, since $N$ grows exponentially in $n$.
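As a numerical cross-check (ours, not from the paper), the closed forms of the geometric tail sums in the second, fourth, and fifth bullet points can be verified by truncating the infinite sums at a large cutoff; the parameter values below are arbitrary.

```python
# Numerical check of the geometric tail sums used in the term-by-term
# bounds above, with r = 1 - beta; truncation error is negligible here.
beta, N, cutoff = 0.3, 5, 400
r = 1 - beta

second = N * sum(r**(k - 1) for k in range(N + 1, cutoff))   # N * r^N / beta
fourth = sum(r**(j - 1) for k in range(N + 1, cutoff)
             for j in range(k, cutoff))                       # r^N / beta^2
fifth = sum(r**(k - 1) for j in range(N + 1, cutoff)
            for k in range(j + 1, cutoff))                    # r^(N+1) / beta^2

assert abs(second - N * r**N / beta) < 1e-9
assert abs(fourth - r**N / beta**2) < 1e-9
assert abs(fifth - r**(N + 1) / beta**2) < 1e-9
```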
Thus, there exists some $n_0 \in \mathbb{N}$ such that \begin{align*} Cov(N_x,N_y) \leq 5 (1-\beta)^{n/2}, \mbox{ whenever } y - x = n \geq n_0. \end{align*} But, for any $1 \leq x \leq y$ such that $y - x = n < n_0$ we also have \begin{align*} Cov(N_x,N_y) \leq \mathbb{E}(N_x^2)^{1/2} \cdot \mathbb{E}(N_y^2)^{1/2} \leq \frac{2 - \beta}{\beta^2} \leq \left( \frac{2 - \beta}{\beta^2 (1-\beta)^{n_0 - 1}} \right) (1-\beta)^n \end{align*} by \eqref{eq:ENxVarNxBounds}. Thus, for all $1 \leq x \leq y$, \begin{align*} Cov(N_x,N_y) \leq C (1 - \beta)^{n/2}, \mbox{ where } C \equiv \max \left\{5, \frac{2 - \beta}{\beta^2 (1-\beta)^{n_0 - 1}}\right\} \mbox{ and } n = y - x. \end{align*} So, \begin{align*} \sum_{y = 1}^{\infty} \sum_{x = 1}^{y} ~ \frac{1}{y^2} Cov^{+} (N_x,N_y) \leq \sum_{y = 1}^{\infty} \sum_{x = 1}^{y} ~ \frac{1}{y^2} \cdot C(1-\beta)^{(y-x)/2} < \infty. \end{align*} \end{proof} \section{Proofs of Lemmas \ref{lem:muEqual1}, \ref{lem:ConcentrationEstimate}, and \ref{lem:nuLowerBound}} \label{sec:ProofOfUnLemmas} \begin{proof}[Proof of Lemma \ref{lem:muEqual1}] Since $U(n) = \sum_{j = 1}^n \Gamma_j$, it follows from the Markov chain representation of section \ref{subsec:StepDistributionZxChain} and the ergodic theorem for finite-state Markov chains along with \eqref{eq:GammajDist} that \begin{align*} \lim_{n \to \infty} \frac{\mathbb{E}(U(n))}{n} = \lim_{j \to \infty} \mathbb{E}(\Gamma_j) = \left<\psi,E \right> ~\mbox{ and }~ \lim_{n \to \infty} \frac{1}{n} \sum_{j = 1}^n \Gamma_j = \left<\psi,E \right>,~ \mbox{ a.s. } \end{align*} By definition, $\Gamma_j$ is the number of right jumps (i.e. $1$'s) in the jump sequence $(J_k^x)_{k \in \mathbb{N}}$ between the $(j-1)$-th and $j$-th left jumps.
So, this implies \begin{align*} \lim_{m \to \infty} \frac{1}{m} \sum_{k=1}^m \mathds{1}\{J_k^x = 1\} = \lim_{n \to \infty} \left( \frac{ \sum_{j=1}^n \Gamma_j }{n + \sum_{j=1}^n \Gamma_j} \right) = \frac{ \left<\psi,E \right> }{1 + \left<\psi,E \right> } ~,\mbox{ a.s. } \end{align*} On the other hand, as noted at the end of section \ref{subsubsec:SingleSiteMarkovChain}, \begin{align*} \lim_{m \to \infty} \frac{1}{m} \sum_{k = 1}^m \mathds{1}\{J_k^x = 1\} = \alpha ~, \mbox{ a.s. } \end{align*} Since $\alpha = 1/2$, it follows that $\left<\psi,E \right> = 1$. \end{proof} \begin{proof}[Proof of Lemma \ref{lem:nuLowerBound}] We consider separately the cases $L = 1$ and $L \geq 2$. In both cases, since $\alpha = 1/2$ we have $\mu = 1$, by Lemma \ref{lem:muEqual1}. Thus, $\nu(n) = \mathbb{E} [ (U(n)-n)^2 ]/n$. \\ \noindent \emph{Case 1}: $L = 1$.\\ In this case $\omega^j = (q,0)$ for all $j \geq 2$, regardless of the values of the $\Gamma_j$'s. Thus, $\Gamma_1, ..., \Gamma_n$ are independent and $\Gamma_2,...,\Gamma_n$ are i.i.d. distributed as $S_0$. So, \begin{align*} \liminf_{n \to \infty} \nu(n) = \liminf_{n \to \infty} \frac{\mathbb{E} [ (U(n)-n)^2 ]}{n} \geq \liminf_{n \to \infty} \frac{\mbox{Var}(U(n))}{n} = \mbox{Var}(S_0) > 0. \end{align*} \noindent \emph{Case 2}: $L \geq 2$.\\ By construction $\omega^{j+1}$ is a deterministic function of $\omega^j$ and $\Gamma_j$. For $\lambda, \lambda' \in \Lambda$, we define $K_{\lambda,\lambda'} = \{k \geq 0: \omega^{j+1} = \lambda', \mbox{ if } \omega^j = \lambda \mbox{ and } \Gamma_j = k \}$. We say a sequence of configurations $\vec{\lambda} = (\lambda_1,...,\lambda_{n+1}) \in \Lambda^{n+1}$ is \emph{allowable} if $|K_{\lambda_i, \lambda_{i+1}}| > 0$ for all $1 \leq i \leq n$, and denote by $G_{n+1}$ the set of all allowable length-$(n+1)$ configuration sequences.
For each allowable configuration sequence $\vec{\lambda} \in G_{n+1}$ we define $(\Gamma_{j,\vec{\lambda}})_{j=1}^n$ to be independent random variables with distribution \begin{align*} \mathbb{P}(\Gamma_{j,\vec{\lambda}} = k) & = \mathbb{P}(\Gamma_j = k|\omega^j = \lambda_j, \omega^{j+1} = \lambda_{j+1}) \\ & = \mathbb{P}(\Gamma_j = k|\omega^j = \lambda_j, \Gamma_j \in K_{\lambda_j, \lambda_{j+1}}). \end{align*} Also, we define $U_{\vec{\lambda}}(n) = \sum_{j=1}^n \Gamma_{j,\vec{\lambda}}$. By construction of the joint process $(\omega^j,\Gamma_j)$, it follows that $U(n)$ conditioned on $(\omega^1,...,\omega^{n+1}) = \vec{\lambda}$ is distributed as $U_{\vec{\lambda}}(n)$. Thus, denoting $\vec{\omega} = (\omega^1,...,\omega^{n+1})$, we have \begin{align} \label{eq:ConditionalSumNun} \mathbb{E}[(U(n)-n)^2] & = \sum_{\vec{\lambda} \in G_{n+1}} \mathbb{P}(\vec{\omega} = \vec{\lambda}) \cdot \mathbb{E}[(U(n)-n)^2 | \vec{\omega} = \vec{\lambda}] \nonumber \\ & = \sum_{\vec{\lambda} \in G_{n+1}} \mathbb{P}(\vec{\omega} = \vec{\lambda}) \cdot \mathbb{E}[(U_{\vec{\lambda}}(n)-n)^2] \nonumber \\ & \geq \sum_{\vec{\lambda} \in G_{n+1}} \mathbb{P}(\vec{\omega} = \vec{\lambda}) \cdot \mbox{Var}( U_{\vec{\lambda}}(n) ) \nonumber \\ & = \sum_{\vec{\lambda} \in G_{n+1}} \mathbb{P}(\vec{\omega} = \vec{\lambda}) \sum_{j=1}^n \mbox{Var}( \Gamma_{j,\vec{\lambda}} ). \end{align} The lemma follows easily from this since the pair $((p,1),(p,1))$ is a recurrent state for the Markov chain over configuration pairs $(\omega^j,\omega^{j+1})_{j \in \mathbb{N}}$ and the distribution of $\Gamma_j$ conditioned on $\omega^j = \omega^{j+1} = (p,1)$ is non-degenerate.
Indeed, denoting the variance in the distribution of $\Gamma_j$ conditioned on $\omega^j = \omega^{j+1} = (p,1)$ as $V_{(p,1),(p,1)}$ and the stationary probability of the pair $((p,1),(p,1))$ as $\psi_{(p,1),(p,1)}$, \eqref{eq:ConditionalSumNun} implies \begin{align*} \liminf_{n \to \infty} \nu(n) = \liminf_{n \to \infty} \frac{\mathbb{E}[(U(n)-n)^2]}{n} \geq V_{(p,1),(p,1)} \cdot \psi_{(p,1),(p,1)} > 0. \end{align*} \end{proof} We now proceed to the proof of Lemma \ref{lem:ConcentrationEstimate}. This is based on the following basic facts concerning large deviations of i.i.d. random variables and finite-state Markov chains: \\ \noindent \emph{Fact 1}: If $\xi$ is a random variable with exponential tails and $\xi_1, \xi_2,...$ are i.i.d. random variables distributed as $\xi$, then there exist constants $b_1, b_2 > 0$ such that the empirical means $\overline{\xi}_n \equiv \frac{1}{n} \sum_{i=1}^n \xi_i$ satisfy: \begin{align} \label{eq:iidSmallEpsBound} \mathbb{P}( |\overline{\xi}_n - \mathbb{E}(\xi)| > \epsilon ) & \leq b_1 \exp(-b_2 \epsilon^2 n) ~,~ \mbox{for all } 0 < \epsilon \leq 1 \mbox{ and } n \in \mathbb{N}; \\ \label{eq:iidLargeEpsBound} \mathbb{P}( |\overline{\xi}_n - \mathbb{E}(\xi)| > \epsilon ) & \leq b_1 \exp(-b_2 \epsilon n) ~,~ \mbox{for all } \epsilon > 1 \mbox{ and } n \in \mathbb{N}. \end{align} \noindent \emph{Fact 2}: If $(\xi_n)_{n \in \mathbb{N}}$ is an irreducible Markov chain on a finite state space $S$ with stationary distribution $\widehat{\pi}$, then there exist constants $b_1, b_2 > 0$ such that the empirical state frequencies $\widehat{\pi}_n(s) \equiv \frac{1}{n} \sum_{i = 1}^n \mathds{1}\{\xi_i = s\}$ satisfy \begin{align*} \mathbb{P}_{s'}( |\widehat{\pi}_n(s) - \widehat{\pi}(s)| > \epsilon ) & \leq b_1 \exp(-b_2 \epsilon^2 n) ~,~ \mbox{for all } s, s' \in S,~ \epsilon > 0, \mbox{ and } n \in \mathbb{N}.
\end{align*} Here $\mathbb{P}_{s'}(\cdot) \equiv \mathbb{P}(\cdot | \xi_1 = s')$ is the probability measure for the Markov chain $(\xi_n)$ started from state $s'$. \\ Fact 1 can be proved using the standard Chernoff-Hoeffding method for establishing large deviation bounds of independent random variables. Fact 2 follows from Fact 1, since for a finite-state, irreducible Markov chain the return times to a given state are i.i.d. with exponential tails. \begin{proof}[Proof of Lemma \ref{lem:ConcentrationEstimate}] Throughout the proof we assume $\omega(x) = \lambda_0$, $x \geq 0$, for some $\lambda_0 \in \Lambda_0 = \{(p,1),...,(p,L-1),(q,0)\}$. The result for general $\lambda \in \Lambda$ follows directly from this since, for any initial state $\lambda \in \Lambda$, the Markov chain $(\omega^j)_{j \in \mathbb{N}}$ collapses to the recurrent state set $\Lambda_0$ with probability 1 after a single transition and the random variable $\Gamma_1$ has an exponential tail. The bounds for small $\epsilon$ and large $\epsilon$ are established separately. Specifically, we will show that there exist constants $c_1, c_2, \epsilon_0 > 0$ and other constants $c_1', c_2', \epsilon_0' > 0$ such that the empirical means $\overline{\Gamma}_n \equiv \frac{1}{n} \sum_{j=1}^n \Gamma_j$ satisfy: \begin{align} \label{eq:GammabSmallEpsilon} & \mathbb{P}(|\overline{\Gamma}_n - 1| > \epsilon) \leq c_1 \exp(- c_2 \epsilon^2 n) ~,~ \mbox{for all } 0 < \epsilon \leq \epsilon_0 \mbox{ and } n \in \mathbb{N}, \\ \label{eq:GammabLargeEpsilon} & \mathbb{P}(|\overline{\Gamma}_n - 1| > \epsilon) \leq c_1' \exp(- c_2' \epsilon n) ~,~ \mbox{for all } \epsilon \geq \epsilon_0' \mbox{ and } n \in \mathbb{N}.
\end{align} Together \eqref{eq:GammabSmallEpsilon} and \eqref{eq:GammabLargeEpsilon} show that \eqref{eq:UnEpsilonSquaredBound} and \eqref{eq:UnEpsilonBound} hold, with $\mu = 1$ and $N=1$, for some constants $C,c > 0$ depending on $c_1,c_2,c_1',c_2',\epsilon_0, \epsilon_0'$. For the proofs in both cases below we use the following notation for states $\lambda \in \Lambda_0$. \begin{itemize} \item $\psi(\lambda) \equiv \psi_{\lambda}$ is the stationary probability of state $\lambda$, as defined in Section \ref{subsec:StepDistributionZxChain}, and $\psi_n(\lambda) \equiv \frac{1}{n} \sum_{j=1}^n \mathds{1}\{\omega^j = \lambda\}$ is the empirical frequency of state $\lambda$. \item $\Gamma_j(\lambda) \equiv \Gamma_{\tau_j(\lambda)}$, where $\tau_j(\lambda)$ is the $j$-th visit time to state $\lambda$ for the Markov chain $(\omega^i)_{i \in \mathbb{N}}$: $\tau_{j+1}(\lambda) = \inf \{i > \tau_j(\lambda) : \omega^i = \lambda\} ~\mbox{ with }~ \tau_0(\lambda) \equiv 0.$ \item $E(\lambda) \equiv \mathbb{E}(\Gamma_j(\lambda)) = \mathbb{E}(\Gamma_j|\omega^j = \lambda)$. \end{itemize} \noindent \emph{Proof of \eqref{eq:GammabSmallEpsilon}}: For each $\lambda \in \Lambda_0$, $(\Gamma_j(\lambda))_{j \in \mathbb{N}}$ is a sequence of i.i.d. random variables with mean $E(\lambda)$ and exponential tails. Thus, by Fact 1, there exist constants $b_1, b_2 > 0$ such that for each $\lambda \in \Lambda_0$, \begin{align} \label{eq:EachLambdaGammaBound} \mathbb{P}(|\overline{\Gamma}_n(\lambda) - E(\lambda)| > \epsilon) \leq b_1 \exp(-b_2 \epsilon^2 n) ~,~ \mbox{for all } 0 < \epsilon \leq 1, n \in \mathbb{N}.
\end{align} Also, by Fact 2, there exist constants $b_3, b_4 > 0$ such that for each $\lambda \in \Lambda_0$, \begin{align} \label{eq:EachLambdaPsiBound} \mathbb{P}(|\psi_n(\lambda) - \psi(\lambda)| > \epsilon) \leq b_3 \exp(- b_4 \epsilon^2 n) ~,~ \mbox{for all } \epsilon > 0, n \in \mathbb{N}. \end{align} Finally, using nonnegativity of the sequence $(\Gamma_j(\lambda))_{j \in \mathbb{N}}$ one may show that, for any $0 < \epsilon \leq 1/3$ and $n \in \mathbb{N}$, the following holds: \begin{align} \label{eq:ConcentrationBetweenEndpoints} & \mbox{ If } |\overline{\Gamma}_{j_{\min}}(\lambda) - E(\lambda)| \leq \epsilon \mbox{ and } |\overline{\Gamma}_{j_{\max}}(\lambda) - E(\lambda)| \leq \epsilon, \nonumber \\ & \mbox{ then } |\overline{\Gamma}_j(\lambda) - E(\lambda)| \leq \epsilon b_5, \mbox{ for all } n \psi(\lambda) (1-\epsilon) \leq j \leq n \psi(\lambda) (1+\epsilon), \end{align} where \begin{align*} & j_{\min} = j_{\min}(n,\lambda, \epsilon) \equiv \ceil{n \psi(\lambda) (1 - \epsilon)}, \\ & j_{\max} = j_{\max}(n,\lambda, \epsilon) \equiv \max\{j_{\min}, \floor{n \psi(\lambda) (1 + \epsilon)} \}, \\ & b_5 \equiv \max_{\lambda \in \Lambda_0} \{3E(\lambda) + 2\}. \end{align*} Now, define $G_{n,\epsilon}$ to be the ``good event'' that for each $\lambda \in \Lambda_0$ the following two conditions are satisfied: \begin{enumerate} \item $|\psi_n(\lambda) - \psi(\lambda)| \leq \epsilon \psi(\lambda)$. \item $|\overline{\Gamma}_j(\lambda) - E(\lambda)| \leq \epsilon b_5$, for all $n \psi(\lambda) (1-\epsilon) \leq j \leq n \psi(\lambda) (1+\epsilon)$.
\end{enumerate} By \eqref{eq:EachLambdaGammaBound} and \eqref{eq:ConcentrationBetweenEndpoints} together with the union bound, we have \begin{align*} \mathbb{P}\big( |\overline{\Gamma}_j&(\lambda) - E(\lambda)| > \epsilon b_5, \mbox{ for some } n \psi(\lambda) (1-\epsilon) \leq j \leq n \psi(\lambda) (1+\epsilon) \big) \\ & \leq 2 b_1 \exp[-b_2 \epsilon^2 (n\psi(\lambda)(1-\epsilon))] \leq 2 b_1 \exp[- (2/3) b_2 \psi(\lambda) \epsilon^2 n] \end{align*} for each $\lambda \in \Lambda_0$, $n \in \mathbb{N}$, and $\epsilon \leq 1/3$. Thus, by \eqref{eq:EachLambdaPsiBound} and the union bound, \begin{align} \label{eq:ProbGnepsComplimentBound} \mathbb{P}(G_{n,\epsilon}^c) & \leq 2Lb_1 \exp(- (2/3) b_2 \psi_{\min} \epsilon^2 n) + Lb_3 \exp(- b_4 \psi_{\min}^2 \epsilon^2 n) \nonumber \\ & \leq b_6 \exp(- b_7 \epsilon^2 n), \end{align} for all $n \in \mathbb{N}$ and $\epsilon \leq 1/3$, where \begin{align*} \psi_{\min} = \min_{\lambda \in \Lambda_0} \psi(\lambda),~ b_6 = 2 L b_1 + L b_3,~ \mbox{and } b_7 = \min\{ (2/3) b_2 \psi_{\min}, b_4 \psi_{\min}^2 \}. \end{align*} Since $\alpha = 1/2$, Lemma \ref{lem:muEqual1} implies $\sum_{\lambda \in \Lambda_0} \psi(\lambda) E(\lambda) = \left<\psi, E \right> = 1$.
Thus, on the event $G_{n,\epsilon}$, $\epsilon \leq 1/3$, we have \begin{align} \label{eq:ConcentrationOnGneps} |\overline{\Gamma}_n - 1| & = \bigg| \sum_{\lambda \in \Lambda_0} \bigg( \sum_{j=1}^{n \psi_n(\lambda)} \frac{\Gamma_j(\lambda)}{n} - E(\lambda) \psi(\lambda) \bigg) \bigg| \nonumber \\ & \leq \sum_{\lambda \in \Lambda_0} \bigg( \psi_n(\lambda) \bigg| \sum_{j=1}^{n \psi_n(\lambda)} \frac{\Gamma_j(\lambda)}{n \psi_n(\lambda)} - E(\lambda) \bigg| ~+~ E(\lambda) \big| \psi_n(\lambda) - \psi(\lambda) \big| \bigg) \nonumber \\ & \leq \sum_{\lambda \in \Lambda_0} \Big( \psi_n(\lambda) \cdot \epsilon b_5 ~+~ E(\lambda) \cdot \epsilon \psi(\lambda) \Big) \nonumber \\ & \leq b_8 \epsilon, \mbox{ where } b_8 \equiv b_5 + \max_{\lambda \in \Lambda_0} E(\lambda). \end{align} Together \eqref{eq:ProbGnepsComplimentBound} and \eqref{eq:ConcentrationOnGneps} show that, for any $0 < \epsilon \leq 1/3$ and $n \in \mathbb{N}$, \begin{align*} \mathbb{P}(|\overline{\Gamma}_n - 1| > b_8 \epsilon) \leq b_6 \exp(- b_7 \epsilon^2 n), \end{align*} which is equivalent to \eqref{eq:GammabSmallEpsilon}, for $0 < \epsilon \leq \epsilon_0 \equiv b_8/3$, with $c_1 = b_6$ and $c_2 = b_7/b_8^2$. \\ \noindent \emph{Proof of \eqref{eq:GammabLargeEpsilon}}: Let $r = \max \{p, q\}$ and let $\xi$ be a geometric random variable with parameter $1-r$ started from $0$, i.e. $\mathbb{P}(\xi = k) = r^k (1-r)$, $k \geq 0$. Then, $S_0$ and $S_R$ are both stochastically dominated by $\xi$, so $\sum_{j = 1}^n \Gamma_j$ is stochastically dominated by $\sum_{j = 1}^n \xi_j$, for each $n \in \mathbb{N}$, where $\xi_1, \xi_2,...$ are i.i.d. distributed as $\xi$.
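As a quick sanity check (ours, not from the paper), the mean of the dominating geometric variable, $\mathbb{E}(\xi) = r/(1-r)$, which is used implicitly in the next display, can be confirmed numerically:

```python
# Numerical check: a geometric variable started from 0 with
# P(xi = k) = r^k (1 - r) has mean E(xi) = r / (1 - r).
r, cutoff = 0.6, 3000
mean = sum(k * r**k * (1 - r) for k in range(cutoff))
assert abs(mean - r / (1 - r)) < 1e-9  # here r / (1 - r) = 1.5
```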
Further, by Fact 1, there exist constants $b_1,b_2 > 0$ such that for all $\epsilon \geq 1$ and $n \in \mathbb{N}$, \begin{align*} \mathbb{P}\Big(\overline{\xi}_n - \frac{r}{1-r} > \epsilon \Big) = \mathbb{P}(\overline{\xi}_n - \mathbb{E}(\xi) > \epsilon) \leq b_1 \exp( - b_2 \epsilon n). \end{align*} Now, since $\alpha = 1/2$, either $p$ or $q$ must be at least $1/2$, so $r/(1-r) \geq 1$. Thus, for $\epsilon \geq \epsilon_0' \equiv 2r/(1-r)$ we have \begin{align*} \mathbb{P}(\overline{\Gamma}_n - 1 > \epsilon) \leq \mathbb{P}(\overline{\xi}_n > \epsilon) \leq \mathbb{P}\Big(\overline{\xi}_n - \frac{r}{1-r} > \frac{\epsilon}{2} \Big) \leq b_1 \exp( - (b_2/2) \epsilon n). \end{align*} On the other hand, for all $\epsilon \geq \epsilon_0'$ we also have \begin{align*} \mathbb{P}(\overline{\Gamma}_n - 1 < -\epsilon) = 0, \end{align*} since $\overline{\Gamma}_n$ is nonnegative and $\epsilon_0' > 1$. Hence, \eqref{eq:GammabLargeEpsilon} holds with $c_1' = b_1$ and $c_2' = b_2/2$. \end{proof} \end{document}
\begin{document} \title{ Completeness Results for Parameterized~Space~Classes } \begin{abstract} The parameterized complexity of a problem is generally considered ``settled'' once it has been shown to lie in FPT or to be complete for a class in the W-hierarchy or a similar parameterized hierarchy. Several natural parameterized problems have, however, resisted such a classification. At least in some cases, the reason is that upper and lower bounds for their parameterized \emph{space} complexity have recently been obtained that rule out completeness results for parameterized \emph{time} classes. In this paper, we make progress in this direction by proving that the associative generability problem and the longest common subsequence problem are complete for parameterized space classes. These classes are defined in terms of different forms of bounded nondeterminism and in terms of simultaneous time--space bounds. As a technical tool we introduce a ``union operation'' that translates between problems complete for classical complexity classes and for W-classes. \end{abstract} \section{Introduction} Parameterization has become a powerful paradigm in complexity theory, both in theory and practice. Instead of just considering the runtime of an algorithm as a function of the input \emph{length}, we analyse the runtime as a multivariate function depending on a number of different problem \emph{parameters}, the input length being just one of them. While in classical complexity theory instead of ``runtime'' many other resource bounds have been studied in great detail, in the parameterized world the focus has lain almost entirely on time complexity. This changed when in a number of papers~\cite{Caietal1997,ElberfeldST2012,Guillemot2011} it was shown for different natural problems, including the vertex cover problem, the feedback vertex set problem, and the longest common subsequence problem, that their parameterized \emph{space} complexity is of interest. 
Indeed, the parameterized space complexity of natural problems can explain why some problems in $\Class{FPT}$ are easier to solve than others (namely, because they lie in much smaller space classes) and why some problems cannot be classified as complete for levels of the weft hierarchy (namely, because upper and lower bounds on their space complexity rule out such completeness results unless unlikely collapses occur). \paragraph{Our Contributions.} In the present paper, we present completeness results of natural parameterized problems for different parameterized space complexity classes. The classes we study are of two kinds: First, parameterized classes of \emph{bounded nondeterminism} and, second, parameterized classes where the \emph{space and time} resources of the machines are bounded \emph{simultaneously.} In both cases, we introduce the classes for systematic reasons, but also because they are needed to classify the complexity of the natural problems that we are interested in. In the context of bounded nondeterminism, we introduce a general ``union operation'' that turns any language into a parameterized problem in such a way that completeness of the language for some complexity class $C$ carries over to completeness of the parameterized problem for a class ``$\Para[W]C$,'' which we will define rigorously later. Building on this result, we show that many union versions of graph problems are complete for $\Para[W]\Class{NL}$ and $\Para[W]\Class L$, but the theorem can also be used to show that $\PLang{weighted-sat}$ is complete for $\Para[W]\Class{NC}^1$. Our technically most challenging result is that the associative generability problem parameterized by the generator set size is complete for the class $\Para[WN]\Class{L}$. 
Regarding time--space classes, we present different problems that are complete for the class of problems solvable ``nondeterministically in fixed-parameter time and slice-wise logarithmic space.'' Among these problems are the longest common subsequence problem parameterized by the number of strings, but also the acceptance problem for certain cellular automata parameterized by the number of cells and also a simple but useful pebble game. \paragraph{Related Work.} Early work on parameterized space classes is due to Cai et al.~\cite{Caietal1997} who introduced the classes $\Para\Class L$ and $\Para\Class{NL}$, albeit under different names, and showed that several important problems in $\Class{FPT}$ lie in these classes: the parameterized vertex cover problem lies in $\Para\Class L$ and the parameterized $k$-leaf spanning tree problem lies in $\Para\Class{NL}$. Later, Flum and Grohe~\cite{FlumG2003} showed that the parameterized model checking problem of first-order formulas on graphs of bounded degree lies in~$\Para\Class L$. In particular, standard parameterized graph problems belong to $\Para\Class L$ when we restrict attention to bounded-degree graphs. Recently, Guillemot~\cite{Guillemot2011} showed that the longest common subsequence problem (\textsc{lcs}) is equivalent under fpt-reductions to the short halting problem for \textsc{ntm}s, where the time and space bounds are part of the input and the space bound is the parameter. Our results differ from Guillemot's insofar as we use weaker reductions (para-$\Class L$- rather than fpt-reductions) and prove completeness for a class defined using a machine model rather than for a class defined as a reduction closure. The paper \cite{ElberfeldST2012} by Elberfeld and us is similar to the present paper insofar as it also introduces new parameterized space complexity classes and presents upper and lower bounds for natural parameterized problems. 
The core difference is that in the present paper we focus on completeness results for natural problems rather than ``just'' on upper and lower bounds. \paragraph{Organisation of This Paper.} In Section~\ref{classes} we review the parameterized space classes previously studied in the literature and introduce some new classes that will be needed in the later sections. For some of the classes from the literature we propose new names in order to systematise the naming and to make connections between the different classes easier to spot. In Section~\ref{section-bounded-non} we study problems complete for classes defined in terms of bounded nondeterminism; in Section~\ref{section-space-time} we do the same for time--space classes. Due to lack of space, some proofs have been omitted; they can be found in the technical report version of this paper. \section{Parameterized Space Classes} \label{classes} Before we turn our attention to parameterized \emph{space} classes, let us first review some basic terminology. As in~\cite{ElberfeldST2012}, we define a \emph{parameterized problem} as a pair~$(Q,\kappa)$ of a language $Q \subseteq \Sigma^*$ and a parameterization $\kappa\colon\Sigma^*\to\mathbb N$ that maps input instances to parameter values and that is computable in logarithmic space.\footnote{In the classical definition, Downey and Fellows~\cite{DowneyF1999} just require the parameterization to be computable, while Flum and Grohe~\cite{FlumG2006} require it to be computable in polynomial time. Whenever the parameter is part of the input, it is certainly computable in logarithmic space.} For a classical complexity class~$C$, a parameterized problem $(Q,\kappa)$ belongs to the \emph{para-class} $\Para C$ if there are an alphabet~$\Pi$, a computable function $\pi\colon\mathbb N\to\Pi^*$, and a language $A\subseteq\Sigma^*\times\Pi^*$ with $A\in C$ such that for all $x\in \Sigma^*$ we have $x\in Q\iff\big(x,\pi\big(\kappa(x)\big)\big)\in A$.
The problem is in the \emph{X-class} $\Class{X}C$ if for every number $w \in \mathbb N$ the slice $Q_w=\{\,x\mid\text{$x\in Q$ and $\kappa(x)=w$}\}$ lies in~$C$. It is immediate from the definition that $\Para C \subseteq \Class XC$ holds. The ``popular'' class $\Class{FPT}$ is the same as $\Para\Class P$. In terms of the $O$-notation, a parameterized problem $(Q,\kappa)$ is in $\Para\Class P$ if there is a function $f\colon\mathbb N\to\mathbb N$ such that the question $x\in Q$ can be decided within time $f(\kappa(x)) \cdot |x|^{O(1)}$. By comparison, $(Q,\kappa)$ is in $\Para\Class L$ if $x\in Q$ can be decided within space $f(\kappa(x)) + O(\log|x|)$; and for $\Para\Class{PSPACE}$ the space requirement is $f(\kappa(x)) \cdot |x|^{O(1)}$. The class $\Class{XP}$ is in wide use in parameterized complexity theory; the logarithmic space classes $\Class{XL}$ and $\Class{XNL}$ have previously been studied by Chen et al.~\hbox{\cite{FlumG2003,ChenFG2003}}. To simplify the notation, let us write $f_x$~for $f\big(\kappa(x)\big)$ and $n$~for $|x|$ in the following. Then the time bound for $\Para\Class P$ can be written as $f_x n^{O(1)}$ and the space bound for $\Para\Class L$ as $f_x + O(\log n)$. \emph{Parameterized logspace reductions} ($\Para\Class L$-reductions) are the natural restriction of fpt-reductions to logarithmic space: A $\Para\Class L$-reduction from a parameterized problem $(Q_1,\kappa_1)$ to $(Q_2,\kappa_2)$ is a mapping $r\colon\Sigma_1^*\to\Sigma_2^*$ such~that \begin{enumerate} \item for all $x\in\Sigma_1^*$ we have $x\in Q_1\iff r(x)\in Q_2$, \item $\kappa_2\big(r(x)\big)\leq g\big(\kappa_1(x)\big)$ for some computable function~$g$, and, \item $r$ is $\Para\Class L$-computable with respect to $\kappa_1$ (that is, there is a Turing machine that outputs $r(x)$ on input $x$ and needs space at most $f(\kappa_1(x))+O(\log |x|)$ for some computable function $f$). 
\end{enumerate} Using standard arguments one can show that all classes in this paper are closed with respect to $\Para\Class L$-reductions; with the possible exception of $\Para[W]\Class{NC}^1$, a class we encounter in Theorem~\ref{theorem-nc1}. Throughout this paper, all completeness and hardness results are meant with respect to $\Para\Class L$-reductions. \subsection{Parameterized Bounded Nondeterminism} \label{section-def-bounded-nondet} While the interplay of nondeterminism and parameterized space may seem to be simple at first sight ($\Class{NL}$ is closed under complement and $\Class{NPSPACE}$ is even equal to $\Class{PSPACE}$, so only $\Class{XNL}$ and $\Para\Class{NL}$ appear interesting), a closer look reveals that useful and interesting new classes arise when we bound the amount of nondeterminism used by machines in dependence on the parameter. For this, it is useful to view nondeterministic computations as deterministic computations using ``choice tapes'' or ``tapes filled with nondeterministic bits.'' These are extra tapes for a deterministic Turing machine, and an input word is accepted if there is at least one bitstring that we can place on this extra tape at the beginning of the computation such that the Turing machine accepts. It is well known that $\Class{NP}$ and $\Class{NL}$ can be defined in this way using deterministic polynomial-time or logarithmic-space machines, respectively, that have \emph{one-way} access to a choice tape. (For $\Class{NP}$ it makes no difference whether we have one- or two-way access, but logspace \textsc{dtm}s with access to a two-way choice tape can accept all of $\Class{NP}$.) Classes of \emph{bounded} nondeterminism arise when we restrict the length of the bitstrings on the choice tape. 
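As an aside, the choice-tape view of nondeterminism is easy to prototype. The following Python sketch (ours, purely illustrative; the predicate \texttt{M} is a stand-in for a deterministic Turing machine, not part of the formal development) realises existential acceptance over a length-bounded choice tape by brute force over all admissible bitstrings:

```python
from itertools import product

def accepts_with_choice_tape(M, x, max_len):
    """Existential acceptance with a bounded choice tape: accept x
    iff some bitstring b with len(b) <= max_len makes the
    deterministic predicate M(x, b) accept."""
    return any(M(x, ''.join(bits))
               for length in range(max_len + 1)
               for bits in product('01', repeat=length))

# Example stand-in machine: the choice tape guesses (the binary
# encoding of) a position in x, and M checks that it holds an 'a'.
def M(x, b):
    return len(b) > 0 and int(b, 2) < len(x) and x[int(b, 2)] == 'a'
```

Bounding \texttt{max\_len} by, say, $f(\kappa(x)) \cdot \lceil\log_2 |x|\rceil$ is exactly the length restriction that the bounded-nondeterminism classes below impose.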
For instance, the classes $\beta^h$ for $h \ge 1$, see~\cite{KintalaF1980} and also~\cite{BussG1993} for variants, are defined in the same way as $\Class{NP}$ above, only the length of the bitstring on the choice tape may be at most $O(\log^h n)$. Classes of \emph{parameterized bounded} nondeterminism arise when we restrict the length of the bitstring on the choice tape in dependence not only on the input length, but also on the parameter. Furthermore, in the context of bounded space computations, it also makes a difference whether we have one-way or two-way access to the choice tapes. \begin{definition} Let $C$ be a complexity class defined in terms of a deterministic Turing machine model (like $\Class L$ or $\Class P$). We define $\Para[\exists^{\leftrightarrow}]C$ as the class of parameterized problems $(Q, \kappa)$ for which there exists a $C$-machine $M$, an alphabet $\Pi$, and a computable function $\pi\colon\mathbb N\to\Pi^*$ such that: For every $x \in \Sigma^*$ we have $x \in Q$ if, and only if, there exists a bitstring $b \in \{0,1\}^*$ such that $M$ accepts with $(x,\pi(\kappa(x)))$ on its input tape and $b$ on the two-way choice tape. We define $\Para[\exists^{\to}]C$ similarly, only access to the choice tape is now one-way. We define $\Para[\exists^{\leftrightarrow}_{\mathit f\!\log}]C$ and $\Para[\exists^{\to}_{\mathit f\!\log}]C$ in the same way, but the length of $b$ may be at most $|\pi(\kappa(x))| \cdot O(\log n)$. \end{definition} \begin{figure} \caption{In this inclusion diagram bounded nondeterminism classes are shown in red and time--space classes in blue. The X-classes are shown twice to keep the diagram readable. All known inclusions are indicated, where $C \to D$ means $C \supseteq D$.} \label{fig-inclusions} \end{figure} Observe that, as argued earlier, $\Para[\exists^{\leftrightarrow}]\Class L = \Para[\exists^{\leftrightarrow}]\Class P = \Para[\exists^{\to}] \Class P = \Para\Class{NP}$ and $\Para[\exists^{\to}]\Class L = \Para\Class{NL}$.
Also observe that $\Para[\exists^{\leftrightarrow}_{\mathit f\!\log}] \Class P = \Para[\exists^{\to}_{\mathit f\!\log}] \Class P = \Class W[\Class P]$ by one of the many possible definitions of~$\Class W[\Class P]$. The above definition can easily be extended to the case where a universal quantifier is used instead of an existential one and where \emph{sequences} of quantifiers are used. This is interpreted in the usual way as having a choice tape for each quantifier and the different ``exists \dots\,for all''-conditions must be met in the order the quantifiers appear. For instance, for problems in $\Para[\exists^{\leftrightarrow}_{\mathit f\!\log}\exists^{\to}]\Class L$ we have $x \in Q$ if, and only if, there exists a bitstring of length $f_x\log_2 n$ for the first, two-way-readable choice tape for which an $\Class{NL}$-machine accepts. The classes $\Class{para\text-NL}[f\log]$, $\Class{para\text-L\text-cert}$, and $\Class{para\text-NL\text-cert}$ introduced in an ad hoc manner by Elberfeld et al.\ in \cite{ElberfeldST2012} can now be represented systematically: They are $\Para[\exists^{\to}_{\mathit f\!\log}]\Class L$, $\Para[\exists^{\leftrightarrow}_{\mathit f\!\log}]\Class L$, and $\Para[\exists^{\leftrightarrow}_{\mathit f\!\log}\exists^{\to}]\Class L$, respectively. In order to make the notation more useful in practice, instead of ``$\exists^{\to}$'' let us write ``$\Class{N}$'' and instead of ``$\exists^{\to}_{\mathit f\!\log}$'' we write ``$\beta$'' as is customary. As a new notation, instead of ``$\exists^{\leftrightarrow}_{\mathit f\!\log}$'' and ``$\forall^{\leftrightarrow}_{\mathit f\!\log}$'' we write ``$\Class{W}$'' and ``$\Class{W_\forall}$,'' respectively. The three classes of \cite{ElberfeldST2012} now become $\Para[\beta] \Class L$, $\Para[W] \Class L$, and $\Para[W N] \Class L$.
Our reasons for using ``W'' to denote $\exists^{\leftrightarrow}_{\mathit f\!\log}$ will be explained fully in Section~\ref{section-justification-w}; for the moment just observe that $\Class W[\Class P] = \Para[W] \Class P$ holds. To get a better intuition on the W-operator, note that it provides machines with ``$f_x \log_2 n$ bits of nondeterministic information'' or, equivalently, with ``$f_x$ many nondeterministic positions in the input'' and these bits are provided as part of the input. This allows us to also apply the W-operator to classes like $\Class{NC}^1$ that are not defined in terms of Turing machines. The right half of Figure~\ref{fig-inclusions} depicts the known inclusions between the introduced classes, the left half shows the classes introduced next. \subsection{Parameterized Time--Space Classes} \label{section-intro-ftpxp} In classical complexity theory, the major complexity classes are either defined in terms of time complexity ($\Class P$, $\Class{NP}$, $\Class{EXP}$) or in terms of space complexity ($\Class L$, $\Class{NL}$, $\Class{PSPACE}$), but not both at the same time: by the well-known inclusion chain $\Class L \subseteq \Class{NL} \subseteq \Class P \subseteq \Class{NP} \subseteq \Class{PSPACE} = \Class{NPSPACE} \subseteq \Class {EXP}$ space and time are intertwined in such a way that bounding either automatically bounds the other in a specific way (at least for the major complexity classes). In the parameterized world, interesting new classes arise when we restrict time and space simultaneously: namely whenever the time is ``para-restricted'' while space is ``X-restricted'' or \emph{vice versa.} \begin{definition} For a space bound $s$ and a time bound $t$, both of which may depend on a parameter $k$ and the input length $n$, let $\Class D[t,s]$ denote the class of all parameterized problems that can be accepted by a deterministic Turing machine in time $t(\kappa(x),|x|)$ and space $s(\kappa(x),|x|)$. 
Let $\Class N[t,s]$ denote the corresponding nondeterministic class. \end{definition} Four cases are of interest: First, $\Class{D}[f \mathrm{poly}, f\log]$, meaning that $t(k,n) = f(k) \cdot n^{O(1)}$ and $s(k,n) = f(k) \cdot O(\log n)$, contains all problems that are ``fixed parameter tractable via a machine needing only slice-wise logarithmic space,'' and, second, the nondeterministic counterpart $\Class N[f \mathrm{poly}, f\log]$. The two other cases are $\Class D[n^f, f\mathrm{poly}]$ and $\Class N[n^f, f\mathrm{poly}]$, which contain problems that are ``in slice-wise polynomial time via machines that need only fixed parameter polynomial space.'' See Figure~\ref{fig-inclusions} for the trivial inclusions between the classes. In Section~\ref{section-lcs} we will see that these classes are not only of scholarly interest. Rather, we will show that $\Lang{lcs}$ parameterized by the number of input strings is complete for $\Class N[f \mathrm{poly}, f\log]$. \section{Complete Problems for Bounded Nondeterminism} \label{section-bounded-non} \label{section-justification-w} In this section we present new natural problems that are complete for $\Para[W]\Class{NL}$ and $\Para[W]\Class{L}$. Previously, it was only known that the following ``colored reachability problem''~\cite{ElberfeldST2012} is complete for $\Para[W]\Class{NL}$: We are given an edge-colored graph, two vertices $s$ and $t$, and a parameter~$k$. The question is whether there is a path from $s$ to $t$ that uses only $k$ colors. Our key tool for proving new completeness results will be the introduction of a ``union operation,'' which turns $\Class P$-, $\Class{NL}$-, and $\Class L$-complete problems into $\Para[W]\Class P$-, $\Para[W]\Class{NL}$-, and $\Para[W]\Class L$-complete problems, respectively. Building on this, we prove the parameterized associative generability problem to be complete for $\Para[W]\Class{NL}$. 
Note that the underlying classical problem is well-known to be $\Class{NL}$-complete and, furthermore, if we drop the requirement of associativity, the parameterized and classical versions are known to be complete for $\Para[W]\Class P$ and~$\Class P$, respectively. At this point, we remark that Guillemot, in a paper~\cite{Guillemot2011} on parameterized \emph{time} complexity, uses ``$\Class{WNL}$'' to denote a class different from the class $\Para[W]\Class{NL}$ defined in this paper. Guillemot chose the name because his definition of the class is derived from one possible definition of $\Class{W}[1]$ by replacing a time by a space constraint. Nevertheless, we believe that our definition of a ``W-operator'' yields the ``right analogue'' of $\Class W[\Class P]$: First, there is the above pattern that parameterized versions of problems complete for $\Class P$, $\Class{NL}$, and $\Class L$ tend to be complete for $\Para[W]\Class P$, $\Para[W]\Class{NL}$, and $\Para[W]\Class L$, respectively. Furthermore, in Section~\ref{section-lcs} we show that the class $\Class{WNL}$ defined and studied by Guillemot is exactly the fpt-reduction closure of the time--space class $\Class N[f\operatorname{poly},f\log]$. \paragraph{Union Problems.} For numerous problems studied in complexity theory the input consists of a string in which some positions can be ``selected'' and the objective is to select a ``good'' subset of these positions. For instance, for the satisfiability problem we must select some variables such that setting them to true makes a formula true; for the circuit satisfiability problem we must select some input gates such that when they are set to~$1$ the circuit evaluates to~$1$; and for the exact cover problem we must select some sets from a family of sets so that they form a partition of the union of the family. In the following, we introduce some terminology that allows us to formulate all of these problems in a uniform way and to link them to the W-operator.
Let $\Sigma$ be an alphabet that contains none of the three special symbols $?$, $0$, and~$1$. We call a word $t \in (\Sigma \cup \{?\})^*$ a \emph{template}. We call a word $s \in (\Sigma \cup \{0,1\})^*$ an \emph{instantiation of $t$} if $s$ is obtained from $t$ by replacing exactly the $?$-symbols arbitrarily by $0$- or $1$-symbols. Given instantiations $s_1, \dots, s_k$ of the same template~$t$, their \emph{union} $s$ is the instantiation of $t$ that has a $1$ exactly at those positions $i$ where at least one $s_j$ has a $1$ at position~$i$ (the union is the ``bitwise or'' of the instantiated positions and is otherwise equal to the template). Given a language $A \subseteq (\Sigma\cup\{0,1\})^*$, we define three different kinds of union problems for~$A$. Each of them is a parameterized problem where the parameter is~$k$. As we will see in a moment, the first kind is linked to the W-operator while the last kind links several well-known languages from classical complexity theory to well-known parameterized problems. We will also see that the three kinds of union problems for a language~$A$ often all have the same complexity. \begin{enumerate} \item The input for $\PLang{family-union-}A$ are a template $t \in (\Sigma \cup \{?\})^*$ and a family $(S_1, \dots, S_k)$ of $k$ sets of instantiations of~$t$. The question is whether there are $s_i \in S_i$ for $i \in \{1,\dots,k\}$ such that the union of $s_1,\dots,s_k$ lies in $A$. \item The input for $\PLang{subset-union-}A$ are a template $t \in (\Sigma \cup \{?\})^*$, a set $S$ of instantiations of~$t$, and a number~$k$. The question is whether there exists a subset $R \subseteq S$ of size $|R| = k$ such that the union of $R$'s elements lies in $A$. \item The input for $\PLang{weighted-union-}A$ are a template $t \in (\Sigma \cup \{?\})^*$ and a number~$k$. The question is whether there exists an instantiation $s$ of $t$ containing exactly $k$ many $1$-symbols such that $s \in A$.
\end{enumerate} To get an intuition for these definitions, think of instantiations as words written on transparencies with $0$ rendered as an empty box and $1$ as a checked box. Then for the family union problem we are given $k$ heaps of transparencies and the task is to pick one transparency from each heap such that ``stacking them on top of each other'' yields an element of~$A$. For the subset union problem, we are only given one stack and must pick $k$ elements from it. We call the weighted union problem a ``union'' problem partly in order to avoid a clash with existing terminology and partly because the weighted union problem is the same as the subset union problem for the special set $S$ containing all instantiations of the template of weight~$1$. Concerning the promised link between well-known languages and parameterized problems, consider $A= \Lang{circuit-value-problem}$ ($\Lang{cvp}$) where we use $\Sigma$ to encode a circuit and use $0$'s and $1$'s solely to describe an assignment to the input gates. Then the input for \PLangText{weighted-union-cvp} are a circuit with $?$-symbols instead of a concrete assignment together with a number $k$, and the question is whether we can replace exactly $k$ of the $?$-symbols by $1$'s (and the others by $0$'s) so that the resulting instantiation lies in $\Lang{cvp}$. Clearly, $\PLang{weighted-union-cvp}$ is exactly the $\Class W[\Class P]$-complete problem $\PLang{circuit-sat}$, which asks whether there is a satisfying assignment for a given circuit that sets exactly $k$ input gates to~$1$. Concerning the promised link between the union problems and the W-op\-er\-a\-tor, recall that the operator provides machines with $f_x$ nondeterministic indices as part of the input. In particular, a W-machine can mark $f_x$ different ``parts'' of the input -- like one element from each of $f_x$ many sets in a family, like the elements of a size-$f_x$ subset of some set, or like $f_x$ many positions in a template.
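As a purely illustrative sketch (ours, not part of the formal development), the template machinery and a brute-force test for $\PLang{family-union-}A$, with the language $A$ given as a Python predicate, look as follows:

```python
from itertools import product

def union(template, instantiations):
    """Union of instantiations of the same template: a '?'-position
    becomes '1' iff some instantiation has a '1' there (the 'bitwise
    or'); all non-'?' positions are copied from the template."""
    return ''.join(
        ('1' if any(s[i] == '1' for s in instantiations) else '0')
        if c == '?' else c
        for i, c in enumerate(template))

def family_union(template, family, A):
    """Decide family-union-A by brute force: pick one instantiation
    from each of the k sets and test whether their union lies in A."""
    return any(A(union(template, choice)) for choice in product(*family))
```

The brute-force search over \texttt{product(*family)} is of course exponential in $k$; the point of the W-operator is precisely that the $k$ choices fit into $k \cdot \lceil\log_2 n\rceil$ nondeterministic bits.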
With this observation it is not difficult to see that if $A \in C$, then all union versions of $A$ lie in $\Para[W]C$. A much deeper observation is that the union versions are also often \emph{complete} for these classes. In the next theorem, which states this claim precisely, the following definition of a \emph{compatible logspace projection $p$} from a language $A$ to a language~$B$ is used: First, $p$ must be a logspace reduction from $A$ to~$B$. Second, $p$ is a projection, meaning that each symbol of $p(x)$ depends on at most one symbol of~$x$. Third, for each word length $n$ there is a single template $t_n$ such that for all $x\in\Sigma^n$ the word $p(x)$ is an instantiation of~$t_n$. \begin{theorem}\label{theorem-comp} Let $C \in \{\Class{NC}^1, \Class L, \Class{NL}, \Class P\}$. Let $A$ be complete for~$C$ via compatible logspace projections. Then $\PLang{family-union-}A$ is complete for $\Para[W]C$ under para-$\Class L$-reductions.\footnote{The proof shows that the theorem actually also holds for any ``reasonable'' class $C$ and any ``reasonable'' weaker reduction.} \end{theorem} \begin{proof} For containment, on input of a template $t$ and family $(S_1,\dots,\penalty0 S_k)$ of sets of instantiations of~$t$, a $\Para[W]C$-\penalty0machine or -circuit interprets its $k \log_2 n$ nondeterministic bits as $k$ indices, one for each $S_i$. Let $s_i \in S_i$ be the elements selected in this way. We run a simulation of the $C$-machine (or $C$-circuit) that decides $A$ on the union $s$ of $s_1,\dots,s_k$. For logspace machines, we may not have enough space to write $s$ on a tape, so whenever the machine would like to know the $j$th bit of $s$, we simply compute the bitwise-or of the $j$th positions of the $s_i$. For hardness, consider any problem $(Q,\kappa) \in \Para[W]C$.
By definition, this means the following: There are a language $X \subseteq \Gamma^*$ in~$C$ and computable functions $\pi \colon \mathbb N \to \Pi^*$ and $f \colon \mathbb N \to \mathbb N$ such that $x \in Q$ if, and only if, there is a string $b \in \{0,1\}^{f_x \log_2 n}$ with $(x,\pi(\kappa(x)),b) \in X$. Furthermore, since $A$ is complete for~$C$ via compatible logspace projections, we can reduce $X$ to $A$ via some~$p$. (As always, $n = |x|$ and $f_x = f(\kappa(x))$.) For the reduction of $(Q,\kappa)$ to $\PLang{family-union-}A$, let an input $x$ be given. Our para-$\Class L$-re\-duc\-tion first computes $\pi(\kappa(x))$. Since our reduction $p$ is compatible, for all possible $b$ the string $p(x,\pi(\kappa(x)),b)$ will have $0$-symbols and $1$-symbols at the same positions and all other positions will not vary with~$b$ at all. Our template~$t$ will be the string $p(x,\pi(\kappa(x)),b)$ with $?$-symbols placed at these positions (as argued, we can use any $b$). To define the sets of instances $S_i$, observe that the strings $b \in \{0,1\}^{f_x \log_2 n}$ can be thought of as sequences of $f_x$ symbols from the alphabet $\Delta = \{0,1\}^{\log_2 n}$, whose elements we call \emph{blocks}. For $i \in \{1,\dots,f_x\}$ let $S_i = \{m_i^\delta \mid \delta \in \Delta\}$ where $m_i^\delta$ replaces the $r$th $?$-symbol of the template~$t$ by $c \in \{0,1\}$ as follows: If the $r$th position depends on a symbol in $(x,\pi(\kappa(x)),b)$ that lies in a block of $b$, but not in the $i$th block, let $c = 0$. Otherwise, let $c$ be whatever symbol ($0$ or $1$) the reduction outputs when the $i$th block is set to~$\delta$. This concludes the construction. As an example for the construction, suppose the reduction $p$ simply doubles its input (so $w$ is mapped to $w w$) and $\pi$ just returns the empty string, and $\Sigma = \{\alpha,\beta,\gamma\}$. Consider, say, $x = \alpha\beta\gamma\alpha$ and assume $f_x=2$. We then have $\Delta = \{00,01,10,11\}$. 
The reduction would produce two sets $S_1$ and $S_2$. For $S_1$, we have a look at what $p$ does on input of a string like $(x,\pi(\kappa(x)),b)$. For simplicity let us ignore parentheses and commas, and consider $b=1111$, so this string would just be $\alpha\beta\gamma\alpha1111$. The reduction maps this to $\alpha\beta\gamma\alpha1111\alpha\beta\gamma\alpha1111$. In this string, the fifth, sixth, thirteenth, and fourteenth bits actually depend on the first block of $\alpha\beta\gamma\alpha1111$, so the reduction would produce the first set \begin{align*} S_1 = \{ &\alpha\beta\gamma\alpha0000\alpha\beta\gamma\alpha0000,\\ &\alpha\beta\gamma\alpha0100\alpha\beta\gamma\alpha0100,\\ &\alpha\beta\gamma\alpha1000\alpha\beta\gamma\alpha1000,\\ &\alpha\beta\gamma\alpha1100\alpha\beta\gamma\alpha1100\}. \end{align*} In a similar manner, the reduction would produce the second set \begin{align*} S_2 = \{ &\alpha\beta\gamma\alpha0000\alpha\beta\gamma\alpha0000,\\ &\alpha\beta\gamma\alpha0001\alpha\beta\gamma\alpha0001,\\ &\alpha\beta\gamma\alpha0010\alpha\beta\gamma\alpha0010,\\ &\alpha\beta\gamma\alpha0011\alpha\beta\gamma\alpha0011\}. \end{align*} Observe that, indeed, we can get every string $p(x,\pi(\kappa(x)),b)$ by taking the union of one string from $S_1$ and one string from~$S_2$. To see that the reduction is correct, consider the union of the elements of any set $\smash{\{m_1^{\delta_1}},\dots,\penalty 0m_{f_x}^{\delta_{f_x}}\}$ where the $m_i^{\delta_i}$ are chosen from the different $S_i$. By construction, their union will be exactly the image of $(x,\pi(\kappa(x)),\delta_1\dots\delta_{f_x})$ under~$p$. In particular, $x \in Q$ holds if, and only if, we can choose one instantiation from each $S_i$ such that their union is in~$A$. \end{proof} \paragraph{Parameterized Satisfiability Problems.} Recall that the problem \PLangText{weighted-union-cvp} equals \PLangText{circuit-sat}. 
Since one can reduce \PLangText{family-union-cvp} to \PLangText{weighted-union-cvp} (via essentially the same reduction as that used in the proof of Theorem~\ref{theorem-nc1} below), Theorem~\ref{theorem-comp} provides us with a direct proof that \PLangText{circuit-sat}${}={}$\PLangText{weighted-union-cvp} is complete for $\Para[W]\Class P$. We get an even more interesting result when we apply the theorem to $\Lang{bf}$, the propositional formula evaluation problem. We encode pairs of formulas and assignments in the straightforward way by using $0$ and $1$ solely for the assignment. Since $\Lang{bf}$ is complete for $\Class{NC}^1$ under compatible logspace projections, see~\cite{Buss1987,Bussetal1992}, \PLangText{family-union-bf} is complete for $\Para[W]\Class{NC}^1$ by Theorem~\ref{theorem-comp}. By further reducing the problem to \PLangText{weighted-union-bf}, we obtain: \begin{theorem}\label{theorem-nc1} $\PLang{weighted-union-bf}$ is para-$\Class L$-complete\footnote{As in Theorem~\ref{theorem-comp} one can also use weaker reductions.} for $\Para[W]\Class{NC}^1$. \end{theorem} \begin{proof} The language $\Lang{bf}$ is complete for $\Class{NC}^1$, see~\cite{Buss1987,Bussetal1992}, and completeness can be achieved by compatible projections: Indeed, for input words of the same length, the reduction will map them to the same formula, only the assignment to the variables will differ (the input word is encoded solely in this assignment). Thus, by Theorem~\ref{theorem-comp} we get that $\PLang{family-union-bf}$ is complete for $\Para[W]\Class{NC}^1$ under para-$\Class L$-reductions (actually, also under weaker reductions like parameterized first-order reductions, but they are not the focus of this paper). We now show that $\PLang{family-union-bf}$ reduces to $\PLang{subset-union-bf}$, which in turn reduces to $\PLang{weighted-union-bf}$. For the first reduction, let the sets $S_1$ to $S_k$ be given as input.
All elements $s_{ij}$ of the $S_i$ represent assignments to the variables of the same formula~$\phi$. Our aim is to construct a set $S$ and a new formula $\phi' = \phi \land \psi$, where the job of $\psi$ is to ensure that any selection of $k$ elements from $S$ can only lead to $\phi'$ being true if the selection corresponds to picking ``exactly one element from each $S_i$.'' In detail, for each $s_{ij}$ we introduce a new variable $v_{ij}$. The assignment $s'_{ij}$ for $\phi'$ is the same as $s_{ij}$ for the ``old'' variables and is $1$ only for $\smash{v_{ij}}$ among the new variables ($\smash{v_{ij}}$ ``tags'' $\smash{s_{ij}}$). As an example, suppose there are three variables $x$, $y$, and $z$ in $\phi$ and suppose $S_1 = \{\phi000, \phi001\}$ (meaning that one assignment sets all variables to false and the other sets only $z$ to true) and $S_2 = \{\phi001\}$. Then there would be three additional new variables and $S = \{\phi'000\,100, \phi'001\,010, \phi'001\,001\}$. Now, setting $\psi = \bigwedge_{i=1}^k \bigvee_{j=1}^{|S_i|} v_{ij}$ ensures that $\psi$ will only be true for the union of $k$ assignments taken from $S$ if exactly one assignment was taken from each~$S_i$. Next, we reduce $\PLang{subset-union-bf}$ to $\PLang{weighted-union-bf}$. Towards this end, let $S = \{s_1,\dots,s_n\}$ be given as input and let $\phi$ be the formula underlying the~$s_i$. Our new formula $\phi'$ has exactly $n$ variables $v_1$ to~$v_n$ and is obtained from $\phi$ by leaving the structure of $\phi$ identical, but substituting each occurrence of a variable $x$ as follows: let $X \subseteq \{1,\dots,n\}$ be the set of indices $i$ such that in $s_i$ the variable $x$ is set to~$1$. Then we substitute $x$ by $\bigvee_{i\in X} v_i$. The output $S'$ of the reduction is $\phi'$ together with all assignments making exactly one of the variables $v_i$ true. As an example, let $\phi = x \land (y \to x) \land z$ and let $S = \{\phi000, \phi101, \phi010, \phi011\}$.
Then there would be four variables $v_1$ to $v_4$ and the formula $\phi'$ would be $v_2 \land ((v_3 \lor v_4) \to v_2) \land (v_2 \lor v_4)$ and the set $S'$ would be $\{\phi'0001, \phi'0010, \phi'0100, \phi'1000\}$. To see that this reduction is correct, first assume that we have $(t,S,k) \in \PLang{subset-union-bf}$ via a selection $\{s_1,\dots,s_k\} \subseteq S$. Then we also have $(t',S',k) \in \PLang{weighted-union-bf}$ via the $k$ elements of $S'$ where exactly the variables corresponding to $s_1$ to $s_k$ are set to true: in $\phi'$ the expression $\bigvee_{i\in X} v_i$ that was substituted for a variable $x$ will be true exactly if one of the $s_i$ has set $x$ to~$1$. Thus, $\phi'$ will evaluate to $1$ for the assignment in which exactly the selected $v_i$ are true if, and only if, $\phi$ evaluates to true for the ``bitwise or'' of the assignments $s_1,\dots,s_k$ -- which it does by assumption. For the other direction, assume that $(t',S',k) \in \PLang{weighted-union-bf}$. Then, by essentially the same argument, we obtain a subset of $S$ whose bitwise or makes $\phi$ evaluate to~$1$. \end{proof} By definition, $\Class W[\Lang{sat}]$ is the fpt-reduction closure of $\PLang{weighted-sat}$, which is the same as \PLangText{weighted-union-bf}. Thus, by the theorem, $\Class W[\Lang{sat}]$ is also the fpt-reduction closure of $\Para[W]\Class{NC}^1$ --~a result that may be of independent interest. For example, it shows that $\Class{NC}^1 = \Class P$ implies $\Class W[\Lang{sat}] = \Class W[\Class P]$. Note that we do not claim $\Class W[\Lang{sat}]=\Para[W]\Class{NC}^1$ since $\Para[W]\Class{NC}^1$ is presumably not closed under fpt-reductions. \paragraph{Graph Problems.} In order to apply Theorem~\ref{theorem-comp} to standard graph problems like $\Lang{reach}$ or $\Lang{cycle}$, we encode graphs using adjacency matrices consisting of $0$- and $1$-symbols. Then a template is always a string of $n^2$ many $?$-symbols for $n$-vertex graphs.
The ``colored reachability problem'' mentioned at the beginning of this section equals $\PLang{subset-union-reach}$.\footnote{For exact equality, in the colored reachability problem we must allow edges to have several colors, but this does not change the complexity of the problem.} Note that any reduction to a union problem for this encoding is automatically compatible as long as the number of vertices in the reduction's output depends only on the length of its input. Applying Theorem~\ref{theorem-comp} to standard $\Class{L}$- or $\Class{NL}$-complete problems yields that their family union versions are complete for $\Para[W]\Class L$ and $\Para[W]\Class{NL}$, respectively. By reducing the family versions further to the subset union version, we get the following: \begin{theorem}\label{theorem-unions} For $A \in \{\Lang{reach}, \Lang{dag-reach}, \Lang{cycle}\}$, $\PLang{subset-union-}A$ is complete for $\Para[W]\Class{NL}$, while for $B \in \{\Lang{undirected-reach},$ $\Lang{tree},$ $\Lang{forest},$ $\Lang{undirected-cycle} \}$, the problem $\PLang{subset-union-}B$ is complete for $\Para[W]\Class L$. \end{theorem} \begin{proof} For each of the problems, we reduce its family union version to its subset union version. This suffices: By Theorem~\ref{theorem-comp} and the fact that the underlying problems like $\Lang{reach}$ are complete for $\Class{NL}$ and $\Class L$ under compatible logspace projections (even under first-order projections), the family versions are complete for the respective classes. Recall that the difference between the problems $\PLang{family-union-}A$ and $\PLang{subset-union-}A$ is that in the first we are given $k$ sets $S_i$ from each of which we must choose one element, while for the latter we can pick $k$ elements from a single set $S$ arbitrarily. If the reduction were to just set $S$ to the union of the $S_i$, then many choices of $k$ elements of $S$ would correspond to taking multiple elements from a single~$S_i$.
In such cases, their union should \emph{not} be an element of~$A$. \begin{figure} \caption{An example of the reduction from a family union graph problem to a subset union graph problem. In the example, $V = \{x,y,z\}$.} \label{fig-red} \end{figure} To achieve the effect that the union of a subset of $S$ with multiple elements from the same $S_i$ is not in~$A$, we use the same construction for all $A$, except for $A = \Lang{forest}$. The construction works as follows: Since the $S_i$ are compatible, they are defined over the same set~$V$ of vertices. Each $s \in S_i$ encodes an edge set $E_s \subseteq V^2$. We construct a new vertex set $V' \supseteq V$ as follows: For each pair $(a,b) \in V^2$ we introduce $k$ new vertices $v_{ab}^1$, \dots, $v_{ab}^k$ and add them to~$V'$. For each $s\in S_i$ we define a new edge set $E'_s \subseteq V' \times V'$ as follows: First, for each $(a,b) \in V^2$ let $(v_{ab}^{i-1},v_{ab}^i) \in E'_s$, where $v_{ab}^0 = a$. Second, for each $(a,b) \in E_s$, let $(v_{ab}^k,b) \in E'_s$. Let $s'$ be the bitstring encoding the adjacency matrix of~$E'_s$. We set $S = \{\,s' \mid s \in S_i \text{ for some $i$}\}$. An example of how this reduction works is depicted in Figure~\ref{fig-red}. In order to argue that the reduction works for all problems, we make two observations. Given any subset $\{s'_1,\dots,s'_k\} \subseteq S$, for each $s'_i$ there is a unique corresponding~$s_i$, lying in (some) $S_j$. Let $G' = (V',E')$ denote the graph whose adjacency matrix is the union of $\{s'_1,\dots,s'_k\}$ and, correspondingly, let $G = (V,E)$ denote the graph whose adjacency matrix is the union of $\{s_1,\dots,s_k\}$. Now, first assume that, indeed, we have $s_i \in S_i$ for all $i \in \{1,\dots,k\}$. Then for every pair $(a,b) \in V^2$ the new vertices $v_{ab}^1$ to $v_{ab}^k$ will form a path in $G'$ attached to~$a$. Furthermore, for every edge $(a,b) \in E$ there is a path from $a$ to $b$ in $G'$.
On the other hand, for $(a,b) \notin E$, we cannot get from $a$ to $b$ in $G'$ using only new vertices: the edge $v_{ab}^k \to b$ will be missing. This proves our first observation: for vertices $a,b \in V$ there is a path from $a$ to $b$ in $G'$ if, and only if, there is such a path in $G$. Our second observation concerns the case that there are two strings $s'_i$ and $s'_j$ such that $s_i$ and $s_j$ lie in the same set $S_x$. In this case, for every two vertices $a,b \in V$ at least one edge is missing along the path $v_{ab}^0$ to $v_{ab}^k$. Thus, we can observe that there is no path from any $a \in V$ to any other $b \in V$ in $G'$. Let us now argue that the reduction is correct: For the reachability problem, by the first observation reachability is correctly transferred from $G$ to $G'$ and by the second observation no ``wrong'' choice of $s'_i$ will induce reachability. The exact same argument holds for undirected reachability and reachability in \textsc{dag}s. For trees and cycles, the argument also works since trees and cycles remain trees and cycles for ``correct'' choices of the $s'_i$ and they get destroyed for any ``wrong'' choice. For forests, the reduction described above does not work since in case several $s'_i$ are picked such that their $s_i$ stem from the same~$S_j$, the graph $G'$ becomes a collection of small trees: a forest -- and this is exactly what should \emph{not} happen. The trick is to use a different reduction: For every pair $s_i, s_j \in S_x$ for $x \in \{1,\dots,k\}$ we add three new vertices to the graph: $a$, $b$, and $c$. In $s'_i$ we add the edges $(a,b)$ and $(b,c)$, in $s'_j$ we add the edge~$(c,a)$. Now, clearly, whenever $s'_i$ and $s'_j$ are picked stemming from the same $S_x$, a cycle will ensue; and if only one $s_i$ is picked from each $S_i$, paths of length $1$ or $2$ will result among the new vertices, which do not influence whether the graph is a forest or not.
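The gadget construction for the reachability case can be sketched in Python; the tuple encoding of the new vertices $v_{ab}^i$ and all names below are conventions of this sketch, not part of the formal reduction:

```python
from collections import deque
from itertools import product

def gadget_reduction(V, families):
    """Turn compatible families S_1,...,S_k of edge sets over V into the
    single set S' of the subset-union construction.  The new vertices
    v_ab^1,...,v_ab^k are represented as tuples ('v', a, b, i)."""
    k = len(families)
    S_prime = []
    for i, S_i in enumerate(families, start=1):
        for E_s in S_i:
            E = set()
            for a, b in product(V, repeat=2):
                pred = a if i == 1 else ('v', a, b, i - 1)  # v_ab^0 = a
                E.add((pred, ('v', a, b, i)))               # level-i path edge
            for a, b in E_s:
                E.add((('v', a, b, k), b))                  # closing edge for (a,b)
            S_prime.append(E)
    return S_prime

def reachable(edges, s, t):
    """Plain BFS reachability in a directed graph given as an edge set."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, []).append(v)
    seen, queue = {s}, deque([s])
    while queue:
        u = queue.popleft()
        if u == t:
            return True
        for w in adj.get(u, ()):
            if w not in seen:
                seen.add(w)
                queue.append(w)
    return False
```

Picking one string per family preserves reachability between original vertices, while picking two strings from the same family leaves a level of path edges missing and disconnects all of $V$, matching the two observations above.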
\end{proof} To conclude this section on union graph problems, we would like to point out that one can also ask which problems are complete for the ``co-W-classes'' $\Para[W_\forall]\Class{NL}$ and $\Para[W_\forall]\Class L$. It is straightforward to see that an analogue of Theorem~\ref{theorem-comp} holds if we define problems $\PLang{partitioned-\penalty0union$_\forall$-}A$ as a ``universal version'' of the partitioned union problem (we ask whether \emph{for all} choices of $b_i$ their union is in $A$). For instance, $\PLang{partitioned-union$_\forall$-cycle}$ is complete for $\Para[W_\forall]\Class L$. It is also relatively easy to employ the same ideas as those from the proof of Theorem~\ref{theorem-unions} to show that the universal union versions of all problems mentioned in Theorem~\ref{theorem-unions} are complete for $\Para[W_\forall]\Class{NL}$ and $\Para[W_\forall]\Class L$ \emph{except} for $\PLang{union$_\forall$-tree}$, whose complexity remains open. \paragraph{Associative Generability.} The last union problem we study is based on the \textsc{generators} problem, which contains tuples $(U,\circ,x,G)$ where $U$ is a set, $\circ\colon U^2 \to U$ is (the table of) a binary operation, $x\in U$, and $G \subseteq U$ is a set. The question is whether the closure of $G$ under~$\circ$ (the smallest superset of $G$ closed under~$\circ$) contains~$x$. A restriction of this problem is $\Lang{associative-generator}$, where $\circ$ must be associative. By two classical results, $\Lang{generators}$ is $\Class P$-complete~\cite{JonesL1976} and $\Lang{associative-generator}$ is $\Class{NL}$-complete~\cite{JonesLL1976}. In order to apply the union operation to generator problems, we encode $(U,\circ,x,G)$ as follows: $U$, $\circ$, and $x$ are encoded in some sensible way using the alphabet~$\Sigma$. To encode $G$, we add a $1$ after the elements of $U$ that are in $G$ and we add a $0$ after \emph{some} elements of $U$ that are not in~$G$.
This means that in the underlying templates we get the freedom to specify that only some elements of~$U$ may be chosen for~$G$. Now, $\PLang{weighted-union-generators}$ equals the problem known as $\PLang{generators}$ in the literature: Given $\circ$, a subset~$C\subseteq U$ of generator candidates, a parameter $k$, and a target element~$x$, the question is whether there exists a set $G \subseteq C$ of size $|G| = k$ such that the closure of~$G$ under $\circ$ contains~$x$. Flum and Grohe~\cite{FlumG2006} have shown that $\PLang{generators}$ is complete for $\Class W[\Class P] = \Para[W] \Class P$ (using a slightly different problem definition that has the same complexity, however). Similarly, \PLangText{weighted-union-associative-generator} is also known as $\PLang{agen}$ and we show: \begin{theorem}\label{theorem:agen} $\PLang{agen}$ is complete for $\Para[W]\Class{NL}$. \end{theorem} \begin{proof} Clearly, $\PLang{agen} \in \Para[W]\Class{NL}$ since the nondeterministic bits provided by the W-operator suffice to describe the generator set and since testing whether a set is, indeed, a generator is well-known to lie in $\Class{NL}$. For hardness note that $\Lang{agen}$ is complete for $\Class{NL}$ under compatible logspace projections, see~\cite{JonesLL1976}. By Theorem~\ref{theorem-comp} we then have that \PLangText{family-union-agen} is complete for $\Para[W]\Class{NL}$. We now show that this problem reduces to \PLangText{subset-union-agen}, which in turn reduces to \PLangText{weighted-union-agen}, i.\,e., $\PLang{agen}$. \emph{Hardness of \PLangText{subset-union-agen}.} For the first reduction let the compatible sets $S_1,\dots,S_k$ be given as input. The template encodes a universe~$U$, a set of generator candidates~$C\subseteq U$, a target element~$x\in U$, and an operation~$\circ\colon U^2\to U$. The instantiations encode subsets of the generator candidates.
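For intuition, the closure test that underlies all of these generator problems can be sketched as a naive fixed-point iteration; the dict-based table encoding of $\circ$ is a convention of this sketch only:

```python
def closure(op, G):
    """Smallest superset of G closed under the binary operation op,
    where op is given as a table mapping pairs (a, b) to elements."""
    C = set(G)
    changed = True
    while changed:
        changed = False
        for a in list(C):
            for b in list(C):
                c = op[(a, b)]
                if c not in C:
                    C.add(c)
                    changed = True
    return C
```

With this, $(U,\circ,x,G)$ is a yes-instance of \textsc{generators} exactly if `x in closure(op, G)`; the weighted variants additionally quantify over the choice of a size-$k$ set $G \subseteq C$.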
Our aim is to construct a new instance of \PLangText{subset-union-agen}, i.\,e., a single set~$S'$ of compatible strings encoding a universe~$U'$, a set of generator candidates~$C'\subseteq U'$, a target element~$x'$, and an operation~$\circ'$ such that there are $k$~elements of~$S'$ that induce a generating set for~$x'$ if, and only if, there are $k$ elements~$s_i$, one from every~$S_i$, such that they induce a generating set of~$x$. To achieve this we first set $U'=U\cup\{e_1,\dots,e_k\}$ for new elements~$e_i$, one for each $S_i$, and also add them to the new set of generator candidates $C'=C\cup\{e_1,\dots,e_k\}$. We then augment the operation~$\circ$ to $\circ'$ with respect to the new elements requiring that no $e_i$ can be generated by any combination of two other elements from the universe and that no $e_i$ can be used to generate elements from the universe other than itself (we achieve this by actually using whole strings as elements of our universe, as will be discussed later in this proof). Furthermore, we insert a new target element~$x'$ into the universe. Our aim is to enforce that~$x'$ can only be generated via the expression~$x\circ' e_1\circ' e_2\circ'\dots\circ' e_k$. Finally, we add an \emph{error element}~$\hbox{\tt error}$ to the universe that we will use to create dead ends in the evaluation of expressions: Any expression that does not make sense or contains the error element is evaluated to $\hbox{\tt error}$. The set $S'$ then contains a string~$s_{ij}'$ for every $s_{ij}\in S_i$ that is essentially $s_{ij}$ adjusted to $U'$, $C'$, $x'$, and $\circ'$, where we require that the binary string that selects a set of generators from $C'$ also selects $e_i$ and no other of the introduced elements $e_j$.
From this we have that there is a selection of $k$ elements of $S'$ that induces a set of generators whose closure contains $x,e_1,\dots,e_k$ and therefore also $x'$ if, and only if, there is a set of $k$~strings $s_i\in S_i$ describing a set of generators whose closure contains~$x$. Unfortunately, our operator~$\circ'$ is binary and, therefore, we cannot evaluate expressions like $x\circ' e_1\circ' e_2\circ'\dots\circ' e_k$ in a single step. Moreover, because of the required associativity of~$\circ'$, it has to be possible to completely evaluate any subexpression of a larger expression. To achieve this, we actually use strings, instead of single symbols, as elements of our universe; these strings are evaluated ``as far as possible.'' For instance, the expression $a\circ' b\circ' c\circ' e_1\circ' e_2$ evaluates to $d\circ' e_1\circ' e_2$ if the expression $a\circ' b\circ' c$ evaluates to~$d$. Since $d\circ' e_1\circ' e_2$ cannot be evaluated further, we want the string $d e_1 e_2$ to be part of our universe. To formalize the idea of ``strings evaluated as far as possible,'' we need some definitions. Given an alphabet $\Gamma$, let us call a set $R$ of rules of the form $w \to w'$ with $w,w' \in \Gamma^*$ a \emph{replacement system}. An \emph{application} of a rule $w \to w'$ takes a word $u w v$ and yields the word $u w' v$; we write $u w v \Rightarrow_R u w' v$ in this case. A word is \emph{irreducible} if no rule can be applied to it. Let $\equiv_R$ be the reflexive, symmetric, transitive closure of $\Rightarrow_R$. Given a word $u$, let $[u]_R = \{v \mid u \equiv_R v\}$ be the equivalence class of~$u$. We use $\Gamma^* /_{\equiv_R} = \{ [v]_R \mid v \in \Gamma^*\}$ to denote the set of all equivalence classes of $\Gamma^*$. Observe that we can define a natural concatenation operation $\circ_R$ on the elements of $\Gamma^* /_{\equiv_R}$: Let $[u]_R \circ_R [v]_R = [u\circ v]_R$. Clearly, this operation is well-defined and associative.
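A minimal sketch of such a replacement system, with words as tuples and leftmost rule application; termination is not guaranteed for arbitrary rule sets, but it holds here because every rule used in the reduction shortens the word:

```python
def normalize(word, rules):
    """Apply rules (lhs, rhs) to word until it is irreducible.  Assumes a
    terminating system, e.g. one in which every rule shortens the word."""
    word = tuple(word)
    while True:
        for lhs, rhs in rules:
            n = len(lhs)
            pos = next((i for i in range(len(word) - n + 1)
                        if word[i:i + n] == lhs), None)
            if pos is not None:
                word = word[:pos] + tuple(rhs) + word[pos + n:]
                break
        else:
            return word  # no rule applies: the word is irreducible
```

The irreducible result plays the role of the representative of $[u]_R$, and $\circ_R$ can be computed by concatenating two representatives and normalizing again.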
An \emph{irreducible representative system} of $R$ is a set of irreducible words that contains exactly one word from each equivalence class in $\Gamma^* /_{\equiv_R}$. In the context of our reduction, $\Gamma$ will be $U'$ and $R$ contains the following rules: First, for elements $a,b,c \in U$ of the original universe $U$ with $a \circ b = c$, we have the rule $ab \to c$. Second, we have the rule $x e_1 \dots e_k \to x'$. Third, we have the rules $\hbox{\tt error} u \to \hbox{\tt error}$ and $u \hbox{\tt error} \to \hbox{\tt error}$ for all $u \in U'$. Fourth, we have the rules $e_i u \to \hbox{\tt error}$ for all $u \in U' \setminus \{e_{i+1}\}$ and $x' u \to \hbox{\tt error}$ for all $u \in U'$. We can now, finally, describe the sets to which the reduction actually maps an input $(U,\circ,x,C)$: The universe $U''$ is an irreducible representative system of~$R$, the operation $\circ''$ maps a pair $(u,v)$ to the representative of $[u\circ v]_R$, let the target element $x''$ be the representative of $[x']_R$, and let $C''$ contain all representatives of $[c]_R$ for $c \in C'$. Our first observation is that $(U')^* /_{\equiv_R}$ (and hence also $U''$) has polynomial size: Consider any $[w]_R$ and let $w$ be irreducible. If $w$ does not happen to be the error symbol itself, it cannot contain the error symbol (by the third rule). Furthermore, in~$w$ there cannot be any element from~$U$ to the right of any~$e_i$ or of~$x'$ (by the fourth rule). Thus, it must be of the form $w_1 w_2$ with $w_1 \in U^*$ and $w_2 \in \{e_1,\dots,e_k,x'\}^*$. Then $w_1$ must actually be a single letter (by the first rule) and $w_2$ must be $x'$ or a sequence $e_i e_{i+1} \dots e_j$ for some $i \le j$ (by the fourth rule). This shows that the total number of different equivalence classes is at most $1+ |U'| (k^2+1)$. The second observation concerns the equivalence class of $[x']_R$, which contains the string $x e_1\dots e_k$.
We can only generate this class from elements $[c]_R$ with $c \in C'$ if these elements include all $[e_i]_R$ and also the equivalence classes $[c]_R$ of elements $c \in C$ that suffice to generate~$x$. This shows the correctness of the reduction. \def\g{ \hbox to 2.5pt{\hfil\vrule width.6pt height.125ex depth.125ex\hfil} } \emph{Hardness of \PLangText{weighted-union-agen}.} Given a compatible set $S=\{s_1,\dots,\penalty0 s_k\}$ whose strings encode a universe~$U$, a set of generator candidates~$C\subseteq U$, an associative operation~$\circ\colon U^2\to U$, and a target element~$x\in U$, together with a selection of generator candidates, we have to construct an instance~$S'$ such that every string only selects a single generator candidate. To achieve this we construct a new universe $U'$ that contains the elements of the old universe~$U$ together with new elements described below. As in the previous reduction, we define replacement rules alongside these new elements and then use an irreducible representative system of the rules as our universe. \begin{enumerate} \item We have an error element $\hbox{\tt error}$ with rules similar to the ones above. \item We have an \emph{end element} $\hbox{$\triangleleft$}$. No rule has $\hbox{$\triangleleft$}$ on its right-hand side. Therefore, $\hbox{$\triangleleft$}$ has to be an element of any generating set~$G$. We require this element for technical reasons that we will discuss later. There are rules $\hbox{$\triangleleft$} u \to \hbox{\tt error}$ for all $u \in U'$. \item We have a \emph{counter element} $\g$. Like the end symbol, this symbol cannot be generated by any expression and has to be an element of any generating set. \item We have elements $\sigma_i$ for each $s_i\in S$, which we call \emph{selector elements}.
The idea behind these elements is that we will use them together with the counter element to enumerate all the elements $u_1,\dots,u_l$ of the generator candidates selected by a string $s_i\in S$. The objective is that strings like $\sigma_i\g\g\g\g$ can be replaced by $u_4$. We will give rules for this in a moment. \end{enumerate} In our new template, the candidates are (the representatives of the equivalence classes of) the $\sigma_i$ as well as $\hbox{$\triangleleft$}$, $\g$, and $\hbox{\tt error}$. Now, there is a selection of $k+3$ elements of $S'$ that forms a generating set for the target element if, and only if, there is a selection of $k$ elements of~$S$ that forms a generating set. It remains to explain how rules can be set up such that $\sigma_i\g\g\g\g$ gets replaced by $u_4$. Consider the expression $\sigma_1\g\g \sigma_1\g \sigma_3\g\g\g \sigma_2\g\g$. Here, $\sigma_1\g\g$ can be replaced by some $u$ and $\sigma_3\g\g\g$ by some $u'$, but $\sigma_2\g\g$ cannot yet be replaced since it is not clear what element will be appended to the expression (if there is another element). To fix this, we use the end symbol~$\hbox{$\triangleleft$}$ that has to be appended to every expression. It marks the right end of the expression and enforces the unambiguous evaluation of the very last subexpression and, therefore, the whole expression. Translated into rules, this means that if, for instance, $\sigma_i\g\g\g\g$ should select $u_4$, then we have rules like $\sigma_i\g\g\g\g u \to u_4u$ if $u \neq \g$, but do not have the rule $\sigma_i\g\g\g\g \to u_4$. The target is the irreducible string $x\hbox{$\triangleleft$}$. Again, the number of equivalence classes is polynomial in the size of the universe. Therefore, the reduction can be computed in $\Para\Class L$.
\end{proof} With the machinery introduced in this section, this result may not seem surprising: \textsc{as\-so\-ci\-a\-tive-generators} is known to be complete for $\Class{NL}$ via compatible logspace reductions and, thus, by Theorem~\ref{theorem-comp}, \PLangText{family-union-as\-so\-ci\-a\-tive-gen\-er\-a\-tors} is complete for $\Para[W]\Class{NL}$. To prove Theorem~\ref{theorem:agen} we ``just'' need to further reduce to the weighted union version. However, unlike for satisfiability and graph problems, this reduction turns out to be technically difficult. \section{Problems Complete for Time--Space Classes} \label{section-space-time} The classes $\Para\Class P = \Class{FPT}$ and $\Class{XL}$ appear to be incomparable: Machines for the first class may use $f_x n^{O(1)}$ time and as much space as they want (which will be at most $f_x n^{O(1)}$), while machines for the second class may use $f_x \log n$ space and as much time as they want (which will be at most $n^{f_x}$). A natural question is which problems are in the intersection $\Para\Class P \cap \Class{XL}$ or -- even better -- in the class $\Class D[f \operatorname{poly}, f \log]$, which means that there is a \emph{single} machine using only fixed-parameter time and slice-wise logarithmic space simultaneously. It is not particularly hard to find artificial problems that are complete for the different time--space classes introduced in Section~\ref{section-intro-ftpxp}; we present such problems at the beginning of this section. We then move on to automata problems, but some ad hoc restrictions are still needed to make the problems complete for time--space classes. The real challenge lies in finding problems together with a \emph{natural} parameterization that are complete. We present one such problem: the longest common subsequence problem parameterized by the number of strings.
\paragraph{Resource-Bounded Machine Acceptance.} A good starting point for finding complete problems for new classes is typically some variant of Turing machine acceptance (or halting). Since we study machines with simultaneous time--space limitations, it makes sense to start with the following ``time and space bounded computation'' problems: For $\Lang{dtsc}$ the input is a single-tape \textsc{dtm} $M$ together with two numbers $s$ and $t$ given in unary. The question is whether $M$ accepts the empty string making at most $t$ steps and using at most $s$ tape cells. The problem $\Lang{ntsc}$ is the nondeterministic variant. As observed by Cai et al.~\cite{CaiCDF1997}, the fpt-reduction closure of $\PLang[\mathit t]{ntsc}$ (that is, the problem parameterized by $t$) is exactly $\Class W[1]$. In analogy, Guillemot~\cite{Guillemot2011} proposed the name ``$\Class{WNL}$'' for the fpt-reduction closure of $\PLang[\mathit s]{ntsc}$ (now parameterized by~$s$ rather than~$t$). As pointed out in Section~\ref{section-justification-w}, we believe that this name should be reserved for the class resulting from applying the operator $\exists^{\leftrightarrow}_{f\log}$ to the class $\Class{NL}$. Furthermore, the following theorem shows that $\PLang[\mathit s]{ntsc}$ is better understood in terms of time--space classes: \begin{theorem}\label{theorem-dtsc} The problems $\PLang[\mathit s]{dtsc}$ and $\PLang[\mathit s]{ntsc}$ are complete for the classes $\Class D[f\operatorname{poly},f\log]$ and $\Class N[f\operatorname{poly},f\log]$, respectively. \end{theorem} \begin{proof} We only prove the claim for the deterministic case, the nondeterministic case works exactly the same way. For containment, on input of a machine $M$, a time bound $t$ in unary, and a space bound $s$ in unary, a $\Class D[f\operatorname{poly},f\log]$-\penalty0machine can simply simulate $M$ for $t$ steps, making sure that no more than $s$ tape cells are used. 
Clearly, the time needed for this simulation is a fixed polynomial in $t$ and, hence, in the input length. The space needed to store the $s$ tape cells is clearly $O(s \log n)$ since $O(\log n)$ bits suffice to store the contents of a tape cell (the amount needed is not $O(1)$ since the tape alphabet is part of the input). For hardness, consider any problem $(Q,\kappa) \in \Class D[f\operatorname{poly},f\log]$ via some machine~$M$. Let $t_M(k,n)$ and $s_M(k,n)$ be the time and space bounds of $M$, respectively. The reduction must now map inputs $x$ to triples $(M',1^t,1^s)$. The reduction faces two problems: First, while $M$ has an input tape and a work tape, $M'$ has no input tape and starts with the empty string. Second, while $t$ can simply be set to $t_M(\kappa(x),|x|)$, $s$ cannot be set to $s_M(\kappa(x),|x|)$ since this only lies in $O\bigl(f(\kappa(x)) \log |x|\bigr)$ for some function~$f$ -- while in a parameterized reduction the new parameter may only depend on the old one ($\kappa(x)$) and not on the input length. The first problem can be overcome using a standard trick: $M'$ simulates $M$ and uses its tape to store the contents of the work tape of~$M$. Concerning the input tape (which $M'$ does not have), when $M$ accesses an input symbol, $M'$ has this symbol ``stored in its state,'' which means that there are $|x|$ many copies of $M$'s state set inside $M'$, one for each possible position of the head on the input tape. A movement of the head corresponds to switching between these copies. In each copy, the behaviour of the machine $M$ for the specific input symbol represented by this copy is hard-wired. The second problem is a bit harder to tackle, but one can also apply standard tricks. Instead of mapping to~$M'$, we actually map to a new machine $M''$ that performs the following space compression trick: For each $\log_2 |x|$ many tape cells of $M'$, the machine $M''$ uses only one tape cell.
This can be achieved by enlarging the tape alphabet of $M'$: If the old alphabet was $\Gamma$, we now use $\Gamma^{\log_2 |x|}$, which is still polynomial in $|x|$. Naturally, we now have to adjust the transitions and states of $M'$ so that a step of $M'$ for its old tape is now appropriately simulated by one step of $M''$ for its compressed tape. Putting it all together, we map $x$ to $(M'', t,s)$ where $t$ is as indicated above and $s = s_M(\kappa(x),|x|) / \log_2 |x|$, which is bounded by a function depending only on $\kappa(x)$. Clearly, the reduction is correct. \end{proof} \paragraph{Automata.} A classical result of Hartmanis~\cite{Hartmanis1972} states that $\Class L$ contains exactly the languages accepted by finite multi-head automata. In \cite{ElberfeldST2012}, Elberfeld et al.\ used this to show that $\PLang[heads]{mdfa}$ (the multi-head automata acceptance problem parameterized by the number of heads) is complete for $\Class{XL}$. It turns out that multi-head automata can also be used to define a (fairly) natural complete problem for $\Class D[f \operatorname{poly}, f\log]$: A \emph{\textsc{dag}-automaton} is an automaton whose transition graph is a topologically sorted \textsc{dag} (formally, the states must form the set $\{1,\dots, |Q|\}$ and the transition function must map each state to a strictly greater state). Clearly, a \textsc{dag}-automaton will never need more than $|Q|$ steps to accept a word, which allows us to prove the following theorem: \begin{theorem}\label{theorem-multi} The problems $\PLang[heads]{dag-mdfa}$ and $\PLang[heads]{dag-mnfa}$ are complete for the classes $\Class D[f \operatorname{poly}, f\log]$ and $\Class N[f \operatorname{poly}, f\log]$, respectively. \end{theorem} \begin{proof} We only prove the claim for deterministic automata; the argument works exactly the same way for nondeterministic ones. Let $(Q,\kappa) \in \Class D[f \operatorname{poly},f \log]$ via some~$M$.
Then $(Q,\kappa) \in \Class{XL}$ will hold via the same machine~$M$. In \cite{ElberfeldST2012} it is shown that every problem in $\Class{XL}$ can be reduced to $\PLang[heads]{mdfa}$ via a simulation dating back to the work of Hartmanis~\cite{Hartmanis1972}: Each step of~$M$ is simulated by a number of movements of the heads of an automaton~$A$. The positions of a fixed number of heads of~$A$ store the contents of one work-tape cell and for each step of~$M$ the heads of~$A$ perform a complicated ballet to determine the current contents of certain tape cells and to adjust the heads accordingly. For our purposes, it is only important that each step of $M$ gives rise to a polynomial number (in the input length) of steps of $A$ for some fixed polynomial independent of the number of heads. In particular, to simulate $f(\kappa(x)) n^c$ steps of~$M$, the automaton~$A$ needs to perform $f'(\kappa(x)) n^{O(c)}$ steps. Thus, to reduce $(Q,\kappa)$ to $\PLang[heads]{dag-mdfa}$, we first compute $A$ as in the reduction to $\PLang[heads]{mdfa}$, but make $f'(\kappa(x)) n^{O(c)}$ copies of~$A$. We then modify the transitions such that when a transition in the $i$th copy of $A$ maps a state $q$ to a state $q'$, it instead maps $q$ to the state $q'$ of the $(i+1)$st copy. Clearly, the resulting automaton is a \textsc{dag}-automaton and it accepts an input word if, and only if, $M$ accepts it in time $f(\kappa(x)) n^c$ and using only $f(\kappa(x)) \log n$ space. \end{proof} Instead of \textsc{dag}-automata, we can also consider a ``bounded time version'' of $\Lang{mdfa}$ and $\Lang{mnfa}$, where we ask whether the automaton accepts within $s$ steps ($s$ being given in unary). Both versions are clearly equivalent: The number of nodes in the \textsc{dag} bounds the number of steps the automaton can make and cyclic transition graphs can be made acyclic by making $s$ layered copies.
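The layering trick can be sketched as follows; the pair encoding $(q,i)$ for ``state $q$ in layer $i$'' is a convention of this sketch only:

```python
def layer_automaton(delta, accepting, steps):
    """Replace a (possibly cyclic) deterministic transition table delta by
    `steps` layered copies: ((q, i), a) -> (delta[q, a], i + 1).  Every
    transition strictly increases the layer, so the result is a DAG."""
    new_delta = {((q, i), a): (q2, i + 1)
                 for i in range(steps)
                 for (q, a), q2 in delta.items()}
    new_accepting = {(q, i) for q in accepting for i in range(steps + 1)}
    return new_delta, new_accepting

def run(delta, accepting, start, word):
    """Run a deterministic automaton on word; reject on missing transitions."""
    q = start
    for a in word:
        if (q, a) not in delta:
            return False
        q = delta[(q, a)]
    return q in accepting
```

A run of the layered automaton started in $(q_0,0)$ mirrors a run of the original automaton of length at most `steps`, which is exactly the equivalence of the two versions claimed above.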
Another rather natural kind of automaton is the \emph{cellular automaton}, where there is one instance of the automaton (called a \emph{cell}) for each input symbol. The cells perform individual synchronous computations, but ``see'' the states of the two neighbouring cells (we only consider one-dimensional automata, but the results hold for any fixed number of dimensions). Formally, the transition function of such an automaton is a function $\delta \colon Q^3 \to Q$ (for the cells at the left and right end this has to be modified appropriately). The ``input'' is just a string $q_1\dots q_k \in Q^*$ of states and the question is whether $k$ cells started in the states $q_1$ to $q_k$ will arrive at a situation where one of them is in an accepting state (one can also require all of them to be in an accepting state; this makes no difference). Let $\Lang{dca}$ be the language $\bigl\{(C,q_1\dots q_k) \mid C$ is a deterministic cellular automaton that accepts $q_1\dots q_k\smash{\bigr\}}$. Let $\Lang{nca}$ denote the nondeterministic version and let $\Lang{dag-dca}$ and $\Lang{dag-nca}$ be the versions where $C$ is required to be a \textsc{dag}-auto\-ma\-ton (meaning that $\delta$ must always output a number strictly larger than all its inputs). The following theorem states the complexity of the resulting problems when we parameterize by $k$ (the number of cells): \begin{theorem}\label{theorem-cellular} The problems $\PLang[cells]{dca}$ and $\PLang[cells]{nca}$ are complete for $\Class{XL}$ and $\Class{XNL}$, respectively. The problems $\PLang[cells]{dag-dca}$ and $\PLang[cells]{dag-nca}$ are complete for $\Class D[f \operatorname{poly},f \log]$ and $\Class N[f \operatorname{poly},f \log]$, respectively. \end{theorem} \begin{proof} We start with containment and then prove hardness for all problems. \emph{Containment.} Clearly, $\PLang[cells]{dca}$ lies in $\Class{XL}$ since we can keep track of the $k$ states of the $k$ cells in space $O(k \log n)$.
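A synchronous step of such a deterministic cellular automaton is easy to sketch; passing $\delta$ as a function and marking the boundary with \texttt{\#} are conventions of this sketch only:

```python
def ca_step(delta, states, boundary='#'):
    """One synchronous step: every cell reads its left neighbour, itself,
    and its right neighbour; boundary cells see the marker instead."""
    ext = [boundary] + list(states) + [boundary]
    return [delta(ext[i - 1], ext[i], ext[i + 1])
            for i in range(1, len(ext) - 1)]
```

Iterating `ca_step` and checking for an accepting state decides acceptance; note that only the $k$ current states need to be stored, which is the source of the $O(k \log n)$ space bound.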
To see that $\PLang[cells]{dag-dca} \in \Class D[f \operatorname{poly},\penalty100 f\log]$, just observe that for \textsc{dag}-automata no computation can take more than a linear number of steps. The arguments for the nondeterministic versions are the same. \emph{Hardness for Deterministic Cellular Automata.} To prove hardness of $\PLang[cells]{dca}$ for $\Class{XL}$, we reduce from a canonically complete problem for $\Class{XL}$. Such a problem can easily be obtained from $\PLang[\mathit s]{dtsc}$ by lifting the restriction on the time allowed to the machine, leading to the following problem: \begin{problem}[{$\PLang[\mathit s]{deterministic-space-bounded-computation}$ ($\PLang[\mathit s]{dsc}$)}] \begin{parameterizedproblem} \item[\labelstyle Instance:] (The code of) a single-tape machine $M$, a number $s$. \item[\labelstyle Parameter:] $s$. \item[\labelstyle Question:] Does $M$ accept on an initially empty tape using at most $s$ tape cells? \end{parameterizedproblem} \end{problem} Proving that this problem is complete for $\Class{XL}$ follows exactly the same argument as that used in Theorem~\ref{theorem-dtsc}. Let us now reduce $\PLang[\mathit s]{dsc}$ to $\PLang[cells]{dca}$. The input for the reduction is a pair $(M,s)$. We must map this to some cellular automaton $C$ and an initial string of states. The obvious idea is to have one automaton for each tape cell that can be used by~$M$. In detail, let $Q$ be the set of states of $M$ and let $\Gamma$ be the tape alphabet of $M$. The state set of $C$ will be $R = (Q \cup \{\bot\}) \times \Gamma$, where $\bot$ is used to indicate that the head is elsewhere. Clearly, a state string from $R^s$ allows us to encode a configuration of~$M$. 
Furthermore, we can now set up the transition relation of~$C$ in such a way that one parallel step of the $s$ automata corresponds exactly to one computational step of~$M$: as an example, suppose in state $q$ for the symbol $a$ the machine $M$ will not move its head, write~$b$, and switch to state $q'$. Then in $C$ for every $x,y \in \{\bot\} \times \Gamma$ there would be a transition mapping $(x, (q,a), y)$ to $(q',b)$ and also transitions mapping $((q,a),x,y)$ to~$x$ and $(x,y,(q,a))$ to~$y$. For triples corresponding to situations that ``cannot arise'' like the head being in two places at the same time, the transition function can be set up arbitrarily. The initial string of states for the cellular automaton is of course $(q_0,\Box)(\bot,\Box)\dots(\bot,\Box)$, where $\Box$ is the blank symbol and $q_0$ is the initial state of~$M$. With this setup, the strings of states of the cellular automaton are in one-to-one correspondence with the configurations of~$M$. In particular, we will reach a state string containing an accepting state if, and only if, $M$ accepts when started with an empty tape. Clearly, the reduction is a para-$\Class L$-reduction. \emph{Hardness for Nondeterministic Cellular Automata.} One might expect that one can use the exact same argument for nondeterministic automata and simply use the same reduction, but starting from $\PLang[\mathit s]{nsc}$. However, there is a complication: The cells work independently of one another. In particular, there is no guarantee that a nondeterministic decision taken by one cell is also taken by a neighboring cell. To illustrate this point, consider the situation where the machine $M$ can nondeterministically step ``left or right'' in some state~$q$. Now assume that some cell $c$ is in state $(q,x)$ and consider the cells $c-1$ and $c+1$.
For both of them, there would now be a transition allowing them to ``take over the head'' and both could nondeterministically decide to do so -- which is wrong, of course; only one of them may receive the head. To solve this problem, we must ensure that a nondeterministic decision is taken ``by only one cell.'' Towards this aim, we first modify~$M$, if necessary, so that every nondeterministic decision is a binary decision. Next, we change the state set of~$C$: Instead of $(Q \cup \{\bot\}) \times \Gamma$ we use $(Q \times \{0,1\} \cup Q \cup \{\bot\}) \times \Gamma$. In other words, in addition to the normal states from $Q$ we add two copies of the state set, one tagged with~$0$ and one tagged with~$1$. The idea is that when a cell is in state $(q,x) \in Q \times \Gamma$, it can nondeterministically reach $((q,0),x)$ or $((q,1),x)$. However, from those states, we can \emph{deterministically} make the next step: if the state is tagged by~$0$, both the cell and the neighboring cells continue according to what happens for the first of the two possible nondeterministic choices; if the state is tagged by~$1$, the other choice is used. Note that as long as the state is not yet tagged, the neighboring cells do not change their state. With these modifications, we arrive at a new cellular automaton with the property that after every two computational steps of the automaton its string of states encodes the configuration resulting from one of the two possible next computational steps of the machine~$M$. This shows that the reduction is correct. \emph{Hardness for Cellular \textsc{dag}-Automata.} To prove hardness of $\PLang[cells]{dag-dca}$ for the class $\Class D[f \operatorname{poly}, f\log]$, we reduce from $\PLang[\mathit s]{dtsc}$, which is complete for the class by Theorem~\ref{theorem-dtsc}. On input $(M,1^t,1^s)$, the reduction is initially exactly the same as for $\PLang[cells]{dca}$ and we just ignore the time bound~$t$.
Once an automaton $C$ has been computed, we can turn it into a \textsc{dag}-automaton and incorporate the time bound as follows: We create $t$ many copies of $C$ and transitions that used to be inside one copy of~$C$ now lead to the next copy (this is the same idea as in the proof of Theorem~\ref{theorem-multi}). This construction ensures that the automaton will accept the initial sequence if, and only if, $M$ accepts on an empty input tape in time~$t$ using space~$s$. For the nondeterministic case, we combine the constructions we employed for $\PLang[cells]{dag-dca}$ and for $\PLang[cells]{nca}$. \end{proof} We remark that, for once, the nondeterministic cases need special arguments. \paragraph{Pebble Games.} Pebble games are played on graphs on whose vertices we place pebbles (a \emph{pebbling} is thus a subset of the set of vertices) and, over time, we (re)move and add pebbles according to different rules. Depending on the rules and the kind of graphs, the resulting computational problems are complete for different complexity classes, which is why pebble games have received a lot of attention in the literature. We introduce a simple pebble game played by a single player, different versions of which turn out to be complete for different parameterized space complexity classes: A \emph{threshold pebble game (\textsc{tpg})} consists of a directed graph $G= (V,E)$ together with a threshold function $t \colon V \to \mathbb N$. Given a pebbling $X \subseteq V$, a vertex~$v$ \emph{can be pebbled after~$X$} if the number of $v$'s pebbled predecessors is at least $v$'s threshold, that is, $\bigl| \{\, p \mid (p,v) \in E\} \cap X\bigr| \ge t(v)$. Given a pebbling~$X$, \emph{a next pebbling} is any set $Y$ of vertices that can be pebbled after~$X$. The \emph{maximum next pebbling} is the largest such~$Y$, that is, the set of all vertices that can be pebbled after~$X$.
The language $\Lang{tpg}$ contains all threshold pebble games together with two pebblings $S$ and $T$ such that we can reach $T$ when we start with $S$ and apply the next pebbling operation repeatedly (always replacing the current pebbling $X$ completely by~$Y$). For the $\Lang{tpg-max}$ problem, $Y$ is always chosen as the maximum next pebbling (which makes the game deterministic). For $\Lang{dag-tpg}$ and $\Lang{dag-tpg-max}$, the graph is restricted to be a \textsc{dag}. In the following theorem, we parameterize by the maximum number of pebbles that may be present in any step. \begin{theorem}\label{theorem-tpg1} The problems $\PLang[pebbles]{tpg-max}$ and $\PLang[pebbles]{tpg}$ are complete for $\Class{XL}$ and $\Class{XNL}$, respectively. The problems $\PLang[pebbles]{dag-tpg-max}$ and $\PLang[pebbles]{dag-tpg}$ are complete for $\Class D[f \mathrm{poly},f \log]$ and $\Class N[f \mathrm{poly},f \log]$, respectively. \end{theorem} \begin{proof} We first prove containment for all problems. Then we prove completeness first for the \textsc{dag} versions and then for the general version. \emph{Containment.} To see that $\PLang[pebbles]{tpg-max}\in\Class{XL}$ holds, observe that a deterministic Turing machine can store a pebbling in space $\kappa(x) \cdot O(\log n)$. Starting with $S$, the machine can compute the successive next pebblings and accept when $T$ is reached. Similarly, $\PLang[pebbles]{tpg} \in \Class{XNL}$ since we can nondeterministically guess the correct subset of the next pebbling that has to be chosen. For the problems for \textsc{dag}s, observe that by the acyclicity the maximal distance of a pebble to any sink in the graph gets reduced by~$1$ in every step. Since the maximal distance at the start of the simulation is bounded by~$|V|$, we must reach $T$ after at most $|V|$ steps and, thus, the simulation can be done in both time $k\cdot |V|^{O(1)}$ and space $k\cdot O(\log|V|)$.
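To make the deterministic game concrete, here is a small Python sketch (function names are ours, not from the paper) of the $\Lang{tpg-max}$ dynamics: the current pebbling is repeatedly replaced, in its entirety, by the maximum next pebbling.

```python
def max_next_pebbling(preds, t, X):
    """The set of all vertices whose number of pebbled predecessors
    meets their threshold -- the maximum next pebbling after X."""
    return {v for v in t if len(preds[v] & X) >= t[v]}

def tpg_max_reaches(preds, t, S, T, limit):
    """Play the deterministic game from S for at most `limit` rounds
    and report whether the target pebbling T is reached."""
    X = set(S)
    for _ in range(limit):
        if X == set(T):
            return True
        X = max_next_pebbling(preds, t, X)
    return X == set(T)

# Tiny DAG: vertex 3 has predecessors {1, 2} and threshold 2,
# so it can be pebbled exactly when both predecessors are pebbled.
preds = {1: set(), 2: set(), 3: {1, 2}}
t = {1: 1, 2: 1, 3: 2}
assert tpg_max_reaches(preds, t, {1, 2}, {3}, limit=5)
```

Raising the threshold of vertex~3 to~3 makes the target unreachable, since no next pebbling ever contains it.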
\begin{figure} \caption{Example of two layers of a threshold game constructed for $Q = \{q_1,q_2\}$.}\label{fig-tpg} \end{figure} \emph{Completeness of the maximum \textsc{dag}-version.} We reduce $\PLang[cells]{dag-dca}$ to the problem $\PLang[pebbles]{dag-tpg-max}$, which proves the claim by Theorem~\ref{theorem-cellular}. Let $(C,s)$ be given as input for the reduction and let $Q$ be $C$'s state set. The reduction outputs a pebble graph that encodes the computation of $s$ many copies of $C$ in the following way: It consists of $t = |Q|$ layers, each encoding a configuration during the computation. Each layer consists of $s$ blocks of $|Q|$ vertices and placing one pebble in each block clearly encodes exactly one state string. To connect two layers $L$ and $L'$, we first insert an auxiliary layer between them with $s \cdot |Q|^3$ vertices, namely one vertex for each cell and each triple consisting of a state of the preceding cell, of the cell itself, and of the next cell. The threshold of the auxiliary vertex is $3$. (Actually, for the first and last cell, only $|Q|^2$ auxiliary vertices are needed and the threshold is $2$. However, to simplify the presentation, we ignore these special cells in the following.) Note that in a game step, an auxiliary vertex can be pebbled if, and only if, the previous, current, and next cell were in specific states. Also note that there will be exactly $s$ vertices on each auxiliary level that can be pebbled in such a step if the level before it corresponded to a configuration. We now connect the vertices of the auxiliary layer to~$L'$. Let $(c,q)$ be a vertex of $L'$, corresponding to a cell position $c$ and a state $q$. Its predecessors will be all auxiliary vertices $(c,q_1,q_2,q_3)$ of the preceding layer such that $C$'s transition function maps $(q_1,q_2,q_3)$ to $q$. The threshold of all layer vertices is~$1$. Figure~\ref{fig-tpg} depicts an example of this construction.
Since the auxiliary vertices can be pebbled exactly if the automaton reaches the three states $(q_1,q_2,q_3)$, exactly those layer vertices can be pebbled that correspond to the next configuration of the cellular automaton. To conclude the description of the reduction, let $S$ be the pebbling placing vertices on the first layer corresponding to the initial state string (which is part of the input) and let $T$ be the pebbling placing vertices on the last layer corresponding to the (only) accepting configuration (this can be achieved by an appropriate modification of $C$, if necessary). By construction, the deterministic game played on the graph starting with $S$ will end in $T$ if, and only if, the $s$-cell version of~$C$ accepts the state sequence after $t$ steps. This shows that the reduction is correct. Clearly, the reduction is a para-$\Class L$-reduction. \emph{Completeness of the \textsc{dag}-version.} For the completeness for $\Class N[f \mathrm{poly},f \log]$ of the problem $\PLang[pebbles]{dag-tpg}$ we use the same reduction as above, but start from $\PLang[cells]{dag-nca}$. It is not immediately obvious that this reduction is correct since in a non-maximal pebble game the nondeterminism of the game allows us to ``forget'' pebbles and, possibly, this could ``make room'' for ``illegal'' pebbles to appear that could disrupt the whole simulation. To see that this does not happen, let us introduce the following notion: For a cell number~$c$ let us say that a vertex \emph{belongs} to $c$ if it is either a vertex on the main layers of the form $(c,q)$ or a vertex on an auxiliary layer of the form $(c,q_1,q_2,q_3)$. Clearly, every vertex belongs to exactly one cell and in the deterministic case in each step for each cell exactly one vertex belonging to this cell is pebbled. The crucial observation is that if in a non-maximal step we do not pebble any vertex of a cell~$c$, we cannot pebble any vertex of cell~$c$ in any later step.
This is due to the way the edges are set up: In order to pebble a vertex belonging to a cell~$c$ it is always a prerequisite that at least one vertex belonging to $c$ was pebbled in the previous step. Now, in the target pebbling $T$ for each cell one vertex belonging to this cell is pebbled. By the above observation, in order to reach $T$ this must have been the case in all intermediate steps. Because of the upper bound of $s$ on the number of pebbles, we know that on each layer \emph{exactly} $s$ vertices are pebbled and, thus, exactly one vertex belonging to each cell is pebbled. This proves that, indeed, on each main layer the pebbled vertices correspond exactly to possible cell contents of a computation of the automaton. \emph{Completeness of the maximal general version.} Let us now prove that the problem $\PLang[pebbles]{tpg-max}$ is complete for $\Class{XL}$ by reducing from $\PLang[cells]{dca}$. We basically proceed as for the \textsc{dag} version, but no longer need to produce an acyclic graph and no longer have a time bound in the input. The idea is to let the computation continue ``as long as necessary'': Instead of constructing a graph consisting of many identical main layers that alternate with identical auxiliary layers, we put only \emph{one} main layer in the graph and only \emph{one} auxiliary layer. The predecessors of the auxiliary layer's vertices are, as before, the vertices of the main layer. However, the successors of the auxiliary layer's vertices are no longer the vertices on the next main layer, but the corresponding vertices on the first (and only) main layer. With this construction, the pebbling will alternate between the main layer and the auxiliary layer and, after every two game steps, the main layer will encode the next configuration of the input automaton. By having $S$ encode the start configuration and $T$ encode the only accepting configuration (on the main layer), we get a reduction.
\emph{Completeness of the general version.} For the general version, as for the \textsc{dag} version, one can argue that the same reduction as for the maximal case works for the general case. \end{proof} Another natural parameterization is by the number of steps needed rather than by the number of pebbles. It is easily seen that $\PLang[steps]{tpg} \in \Para\Class{NP}$ and $\PLang[steps]{tpg-max} \in \Para\Class P$. Furthermore, $\PLang[steps]{tpg-max} \in \Class{XL}$ holds also since we can compute one next step in the game in space $O(\log n)$ and by the standard trick of chaining together two logspace computations, we can compute $k$ steps in space $O(k \log n)$. Interestingly, the argument can neither be used to show that $\PLang[steps]{tpg}$ lies in $\Class{XNL}$ nor that $\PLang[steps]{tpg-max}$ lies in $\Class D[f\operatorname{poly},f\log]$. We were not able to prove completeness of either problem for a parameterized class. One can also consider a ``power parameterization'' similar to that of $\PLang{power-ntsc}$: in $\PLang[step\ power]{tpg}$ we are given a parameter $k$ along with a threshold pebble game and ask whether $T$ can be reached from $S$ in $n^k$ steps, where $n$ is the order of the graph. As for the generic Turing machine problem, the power parameterization results in problems that are complete for $\Class N[n^f,f \mathrm{poly}]$ (for $\PLang[step\ power]{tpg}$) and for $\Class D[n^f,f \mathrm{poly}]$ (for $\PLang[step\ power]{tpg-max}$). The proof is essentially the same as in the above theorem, only the machine can now use much more space ($f_x\cdot n^c$ for some constant~$c$ instead of $f_x\cdot O(\log n)$), but we can also use more pebbles (up to $n$ many instead of just $k$). \paragraph{Longest Common Subsequence.} \label{section-lcs} The input for the longest common subsequence problem $\Lang{lcs}$ is a set $S$ of strings over some alphabet $\Sigma$ together with a number~$l$.
The question is whether there is a string $c \in \Sigma^l$ that is a \emph{subsequence} of all strings in $S$, meaning that for every $s \in S$ we can arrive at~$c$ just by removing symbols from~$s$. There are several natural parameterizations of $\Lang{lcs}$: We can parameterize by the number of strings in~$S$, by the size of the alphabet, by the length~$l$, or any combination thereof. Guillemot has shown \cite{Guillemot2011} that $\PLang[strings,length]{lcs}$ is fpt-complete for $\Class W[1]$, while $\PLang[strings]{lcs}$ is fpt-equivalent to $\PLang[\mathit s]{ntsc}$. Hence, by Theorem~\ref{theorem-dtsc}, both problems are complete under fpt-reductions for the fpt-reduction closure of $\Class N[f \operatorname{poly}, f\log]$. We tighten this in Theorem~\ref{theorem-lcs} below (using a weaker reduction is more than a technicality: $\Class N[f \operatorname{poly}, f\log]$ is presumably not even closed under fpt-reduction, while it \emph{is} closed under para-$\Class L$-reductions). As a preparation for the proof of Theorem~\ref{theorem-lcs}, we first present a simpler-to-prove result: Let $\Lang{lcs-injective}$ denote the restriction of $\Lang{lcs}$ where all input words must be \emph{$p$-sequences} \cite{FellowsHS2003}, which are words containing any symbol at most once (the function mapping word indices to word symbols is injective). \begin{theorem}\label{theorem-lcs-injective} $\Lang{lcs-injective}$ is $\Class{NL}$-complete and this holds already under the restriction $|S| \le 4$. \end{theorem} \begin{proof} The problem $\Lang{lcs-injective}$ lies in $\Class{NL}$ via the following algorithm: We guess the common subsequence $c$ nondeterministically and use a counter to ensure that $c$ has length at least~$l$. The problem is that we cannot remember more than a fixed number of letters of~$c$ without running out of space. Fortunately, we do not need to: We only ever keep track of the last two guessed symbols.
For each such pair $(a,b)$, we check whether $a$ appears before $b$ in all strings in $S$. If so, we move on to the next pair, and so on. Clearly, this algorithm needs only logarithmic space and correctly decides $\Lang{lcs-injective}$. To prove hardness for $|S| = 4$, we reduce from the $\Class{NL}$-complete language $\Lang{layered-reach}$, where the input is a layered graph $G$ (each vertex is assigned a layer number and all edges only go from one layer to the next), the source vertex $s$ is the (only) vertex on layer~$1$ and the target $t$ is the (only) vertex on the last layer~$m$. The question is whether there is a path from $s$ to~$t$. For the reduction to $\Lang{lcs-injective}$ we introduce a symbol for each edge of~$G$. The common subsequence will then be exactly the sequence of edges along a path from $s$ to~$t$. We consider the layers $L_1$, $L_2$, \dots, $L_m$ in order and, for each of them, append edge symbols to the four strings as described in the following. Consider a layer $L_i$, containing vertices $\{v_1, \dots, v_n\}$. Assume $i$ is odd. We go over the vertices $v_1$ to $v_n$ in that order. For $v_1$, first consider all edges that end at $v_1$. They must come from layer $i-1$. We add these edges in some order to the first string (for instance, in the order of the index of the start vertex of these edges). Still considering $v_1$, we then consider all outgoing edges and append them in some fixed order. Then we move on to $v_2$ and add edge symbols in the same way for it, and so on. If $i$ is even rather than odd, we add the same edge symbols to the third rather than to the first string. For the second (or, for even $i$, the fourth string), we go over the vertices in decreasing order. We start with $v_n$. We consider the incoming edges for $v_n$ and add them to the second string, but in reverse order compared to the order we used for the first string. Next, we append the outgoing edges, again in reverse order. 
Then we consider $v_{n-1}$ and proceed in the same way. As an example, consider the following layered graph: \begin{tikzpicture} \graph[graph, no placement, typeset=$v_\tikzgraphnodetext$, nodes=node] { { [x=0] 1[y=1], 2[y=2], 3[y=3]}, { [x=2] 4[y=1], 5[y=2], 6[y=3]}, { [x=4] 7[y=1], 8[y=2], 9[y=3]}; 1 ->["$a$"] 4 -> ["$f$"] 7; 2 ->["$b$"] 4 -> ["$g$"] 8; 2 ->["$c$"] 5 -> ["$h$" near start] 9; 2 ->["$d$"] 6 -> ["$i$" near start] 8; 3 ->["$e$"] 6 -> ["$j$"] 9; }; \end{tikzpicture} This results in the following strings, where the spaces and the gray symbols $v_i$ have been added for clarity only and are not part of the strings (so the second string is actually $edcbajhigf$): \def\v#1{\textcolor{black!50}{v_#1}} \begin{align*} &\v1 a\quad \v2 bcd\quad \v3 e\qquad f\v7\quad gi\v8\quad hj\v9\\ &\v3 e\quad \v2 dcb\quad \v1 a\qquad jh\v9\quad ig\v8\quad f\v7\\ &ab \v4 fg\quad c \v5 h\quad de \v6 ij\\ &ed \v6 ji\quad c \v5 h\quad ba \v4 gf \end{align*} We make two crucial observations. First, if an edge is included in the common subsequence, no other edge starting at the same layer can be included as well: The edge symbols of one layer come in one order in the first (or third) string and in the reverse order in the second (or fourth) string. Thus, there cannot be two of them in the common subsequence. For the same reason, there can only be one edge arriving at a layer in the common subsequence. The second crucial observation is that if the sequence contains an edge $e$ arriving at a vertex $v$ and contains any edge leaving from $v$'s layer at all, then that edge must leave from vertex~$v$: Only an edge $e'$ leaving $v$ can come after $e$ in both the first and second (or third and fourth) string.
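As a sanity check of this example (an illustration by us, not part of the proof), a few lines of Python verify that the length-$2$ common subsequences of the four strings are exactly the two-edge paths of the layered graph:

```python
def is_subseq(c, s):
    """True if c is a subsequence of s."""
    it = iter(s)
    return all(ch in it for ch in c)

# The four strings of the example (without the gray v_i symbols).
strings = ["abcdefgihj", "edcbajhigf", "abfgchdeij", "edjichbagf"]
# Each edge symbol with its (start, end) vertex.
edge = {"a": (1, 4), "b": (2, 4), "c": (2, 5), "d": (2, 6), "e": (3, 6),
        "f": (4, 7), "g": (4, 8), "h": (5, 9), "i": (6, 8), "j": (6, 9)}

# Length-2 common subsequences of all four strings ...
common = {x + y for x in edge for y in edge
          if all(is_subseq(x + y, s) for s in strings)}
# ... versus two-edge paths (end vertex of x = start vertex of y).
paths = {x + y for x in edge for y in edge if edge[x][1] == edge[y][0]}
assert common == paths == {"af", "ag", "bf", "bg", "ch", "di", "dj", "ei", "ej"}
```

The membership test `ch in it` consumes the iterator, so it checks that the symbols of `c` occur in `s` in order.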
Putting it all together, we get the following: There is a path from $s$ to $t$ in $G$ if, and only if, there is a common subsequence of length $m-1$ in the constructed strings: If there is a path, the sequence of the edges on it form a subsequence; and if there is such a subsequence, because of its length, it must contain exactly one edge leaving from each layer except the last -- and these edges must form a path as we just argued. \end{proof} Although we do not prove this, we remark that $\Class{NL}$-completeness already holds for $|S| = 3$, while for $|S| = 2$ the complexity appears to drop significantly. \begin{corollary}\label{corollary-lcs-injective} $\PLang[strings]{lcs-injective}$ is para-L-complete for $\Para\Class{NL}$. \end{corollary} \begin{proof} The problem lies in $\Para\Class{NL}$ since by Theorem~\ref{theorem-lcs-injective} we can solve any instance in $\Class{NL}$ without even using the parameter. On the other hand, the theorem also shows that a slice of the parameterized problem (namely for 4 strings) is already hard for $\Class{NL}$. It is a well-known fact that in this case the parameterized problem is hard for the corresponding para-class, which happens to be $\Para\Class{NL}$. \end{proof} \begin{theorem}\label{theorem-lcs} $\PLang[strings]{lcs}$ is para-L-complete for $\Class N[f\mathrm{poly}, f\log]$. \end{theorem} \begin{proof} Clearly, $\PLang[strings]{lcs} \in \Class N[f\mathrm{poly}, f\log]$ since a nondeterministic machine can guess the common subsequence on the fly and only needs to keep track of $k$ pointers into the strings, which can be done in space $O(k \log_2 n)$. To prove hardness, we reduce from the $\Class N[f\mathrm{poly}, f\log]$-complete problem $\PLang[cells]{dag-nca}$, the acceptance problem for nondeterministic cellular \textsc{dag}-automata, see Theorem~\ref{theorem-cellular}. 
Our first step is to tackle the problem that in an \textsc{lcs} instance we choose ``one symbol after the other'' whereas in a cellular automaton all cells make one step in parallel. To address this, we introduce a new intermediate problem $\PLang[cells]{dag-nca-sequential}$ where the model of computation of the cellular automaton is modified as follows: Instead of all $k$ cells making one parallel step, initially only the first cell makes a transition, then the second cell makes a transition (seeing already the new state reached at the first cell, but still the initial state of the third cell), then the third cell (seeing the new state of cell two and the old of cell four), and so on up to the $k$th cell. Then, we begin again with the first cell, followed by the second cell, and so on. \begin{claim} $\PLang[cells]{dag-nca}$ reduces to $\PLang[cells]{dag-nca-sequential}$. \end{claim} \begin{proof}[of the claim] The trick is to have cells ``remember'' the states they were in: On input of $(C,\penalty0q_1\dots q_k)$, we construct a ``sequential'' cellular automaton $C'$ as follows. If $Q$ is the state set of $C$, the state set of $C'$ is $Q \times Q$. Each state of $C'$ is now a pair $(q^{\mathrm{previous}}, q^{\mathrm{current}})$. The transition relation is adjusted as follows: If there used to be a transition $(q_{\mathrm{left}},q_{\mathrm{old}},q_{\mathrm{right}},q_{\mathrm{new}}) \in Q^4$, meaning that a cell of the parallel automaton~$C$ can switch to state $q_{\mathrm{new}}$ if it is in state $q_{\mathrm{old}}$, its left neighbor is in state $q_{\mathrm{left}}$, and its right neighbor is in state $q_{\mathrm{right}}$, then we now have the following transitions in $C'$: $((q_{\mathrm{left}},x),(y,q_{\mathrm{old}}),(z,q_{\mathrm{right}}),(q_{\mathrm{old}},q_{\mathrm{new}}))$ where $x,y,z \in Q$ are arbitrary.
Indeed, this transition will switch a cell's state based on the \emph{previous} state of the cell before it and on the \emph{current} state of the cell following it and will store that previous state. For the first and last cells, this construction is adapted in the obvious manner. Clearly, the resulting sequential automaton will arrive in a sequence $(x_1,q_1)\dots(x_k,q_k)$ of states for some $x_i \in Q$ after $t\cdot k$ steps if, and only if, the original automaton arrives in states $q_1\dots q_k$ after $t$~steps. This proves the reduction. \end{proof} \emph{The basic idea.} We now show how $\PLang[cells]{dag-nca-sequential}$ can be reduced to $\PLang[strings]{lcs}$. Before we plunge into the details, let us first outline the basic idea: Each cell of a cellular \textsc{dag}-automaton ``behaves somewhat like a reachability problem,'' meaning that we must find out whether the automaton will arrive in the accepting state starting from the initial state. Thus, as in the proof of Theorem~\ref{theorem-lcs-injective}, we use four strings to represent a cell of the automaton, giving a total of $4k$ strings, where $k$ is the number of cells. However, the cells do not act independently; rather, each step of a cell depends on the states of the two neighboring cells. Fortunately, this ``control'' effect can be modelled by adding an ``edge's'' symbol (actually, a transition's symbol) not only to the four strings of the cell, but also to the four strings of the predecessor and successor cells at the right position (namely ``before the required state symbol''). In the following, we explain the idea just outlined in detail. Let $(C,q_1\dots q_k)$ be given as input for the reduction. Since $C$ is sequential and also a \textsc{dag}-automaton, its steps can be grouped into at most $t$ many groups (``major steps'') of $k$ sequential steps (``minor steps'') taken by cells $1$ to $k$ in that order, where $t$ depends linearly on the size of~$C$.
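To illustrate the difference between the parallel and the sequential mode of operation (a toy example of ours, not the automaton from the reduction), consider cells that replace their state by the sum of their own state and their left neighbor's state; fired sequentially, cell $i$ already sees the updated neighbor:

```python
def parallel_step(states):
    """All cells fire at once, each seeing the *old* neighbor states."""
    old = list(states)
    return [(old[i - 1] if i > 0 else 0) + old[i] for i in range(len(old))]

def sequential_major_step(states):
    """Cells 1..k fire in order; cell i sees the *already updated*
    state of cell i-1 (and would see the old state of cell i+1)."""
    s = list(states)
    for i in range(len(s)):
        s[i] = (s[i - 1] if i > 0 else 0) + s[i]
    return s

assert parallel_step([1, 2, 3]) == [1, 3, 5]
assert sequential_major_step([1, 2, 3]) == [1, 3, 6]  # prefix sums
```

One major step of the sequential automaton thus propagates information all the way from the first cell to the last, which a single parallel step cannot do.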
By modifying $C$, if necessary, we may assume that $C$ makes exactly $t\cdot k$ sequential steps when it accepts the input and, otherwise, makes strictly fewer steps. We use $s$ to denote a major step number. \emph{Construction of the strings.} We map $(C,q_1\dots q_k)$ to $4k$ strings $s_1^1$, $s_2^1$, $s_3^1$, $s_4^1$, \dots, $s_1^k$, $s_2^k$, $s_3^k$, $s_4^k$ and ask whether they have a common subsequence of length $t\cdot k$. Each group of four strings is set up similarly to the four strings from the proof of Theorem~\ref{theorem-lcs-injective}: $s_1^i$ and $s_2^i$ model the states (vertices) the $i$th cell has just before odd major steps~$s$; and $s_3^i$ and $s_4^i$ model the states the cell has before even major steps~$s$. Consider cell $i$ and its four strings $s_1^i$ to $s_4^i$. Recall that in Theorem~\ref{theorem-lcs-injective} we conceptually added the vertices of the first layer in opposite orders to $s_1^i$ and~$s_2^i$, although in reality these vertices were not part of the final strings and were added to make it easier to explain where the actual symbols (the edges) were placed in the strings. In our setting, the role of the vertices on the first layer is taken by the states $Q = \{q_1,\dots,q_n\}$ of the automaton $C$ tagged by the major step number~$1$. Thus, $s_1^i$ starts (conceptually) with $(q_1,1) \dots (q_n,1)$ and $s_2^i$ starts with $(q_n,1) \dots (q_1,1)$. Next come tagged versions of the states just before the third major step, so $s_1^i$ continues with $(q_1,3) \dots (q_n,3)$ and $s_2^i$ with $(q_n,3) \dots (q_1,3)$. We continue in this way for all odd major steps. For even major steps, we add analogous strings to $s_3^i$ and $s_4^i$. Continuing the idea from Theorem~\ref{theorem-lcs-injective}, we now add ``edges'' to the strings.
However, instead of an edge from one vertex to another, the transition relation of a cellular automaton contains 4-tuples $f = (f_{\mathrm{left}},f_{\mathrm{old}},f_{\mathrm{right}},f_{\mathrm{new}}) \in Q^4$ of states, which allow a cell to switch to state $f_{\mathrm{new}}$ when it was in state $f_{\mathrm{old}}$ and its left neighbor was in state $f_{\mathrm{left}}$ and the right neighbor was in state~$f_{\mathrm{right}}$. Recall that in Theorem~\ref{theorem-lcs-injective}, for each edge $e$ from some vertex $a$ on an odd layer to a vertex $b$, we added the symbol $e$ \emph{after} $a$ in the first two strings and \emph{before} $b$ in the last two strings. In a similar way, for the cellular automaton for each 4-tuple $f$ we add new ``symbols'' $(f,s,i)$ consisting of a transition, a major step number, and a cell index~$i$ to the strings. This symbol is added at several places to the strings, sometimes even more than once (we assume that $s$ is odd; for even $s$, exchange the roles of the first two and the last two strings everywhere). The rules are as follows: \begin{enumerate} \item Iterate over all $(f,s,i)$ in some order and insert $(f,s,i)$ directly after $(f_{\mathrm{old}},s)$ in $s_1^i$. \item Next, again iterate over all $(f,s,i)$, but now in reverse order, and insert $(f,s,i)$ after $(f_{\mathrm{old}},s)$ in $s_2^i$. \end{enumerate} Note that using the two opposite orderings, as in Theorem~\ref{theorem-lcs-injective}, for each $(f_{\mathrm{old}},s)$ at most one $(f,s,i)$ can be part of a common subsequence. \begin{enumerate}\setcounter{enumi}{2} \item Next, iterate over all $(f,s,i)$ in some order and insert $(f,s,i)$ directly before $(f_{\mathrm{new}},s+1)$ in $s_3^i$. \item Next, iterate over all $(f,s,i)$ in reverse order and insert $(f,s,i)$ directly before $(f_{\mathrm{new}},s+1)$ in $s_4^i$. \end{enumerate} The effect of the above is to make the automaton switch to $f_{\mathrm{new}}$ in cell~$i$ after major step~$s$.
Now, we still need to ensure that this switch is only possible when the preceding cell has already switched to state $f_{\mathrm{left}}$ after step~$s$ and the next cell is in state $f_{\mathrm{right}}$ before step~$s$. \begin{enumerate}\setcounter{enumi}{4} \item Next, iterate over all $(f,s,i)$ and insert $(f,s,i)$ directly after $(f_{\mathrm{left}},s+1)$ in $s_3^{i-1}$ and $s_4^{i-1}$. For $i=1$, no symbols are added. \item Next, iterate over all $(f,s,i)$ and insert $(f,s,i)$ directly after $(f_{\mathrm{right}},s)$ in $s_1^{i+1}$ and $s_2^{i+1}$. For $i=k$, no symbols are added. \end{enumerate} Note that since the last two steps are applied later, the added symbols are ``nearer'' to the state symbols than the symbols added in the first two steps. In particular, a common subsequence can contain first a symbol added in step~6 after some $(q,s+1)$, then a symbol added after $(q,s)$ in step~5, and then symbols added before or after $(q,s)$ in one of the first four steps. The last rule ensures that when a tuple $(f,s,i)$ is not mentioned for a string by one of the first six rules, we can always make it part of a common subsequence: \begin{enumerate}\setcounter{enumi}{6} \item Finally, iterate over the $4k$ strings. For each such string $s_j^i$, consider the set $X$ of all $(f,s,i)$ that are not present in $s_j^i$. Add all symbols of $X$ once after each letter of $s_j^i$. \end{enumerate} As the last step of the construction of the strings, in order to model the initial configuration $q_1\dots q_k$ of the automaton, for each $i \in \{1,\dots,k\}$ in $s_1^i$ to $s_4^i$ we remove all symbols before $(q_i,1)$. \emph{Correctness: First direction.} Having finished the description of the reduction, we now argue that it is correct. For this, first assume that the automaton does, indeed, accept the input sequence $q_1\dots q_k$. By assumption, this means that the automaton will make $t \cdot k$ sequential steps.
Assume that in major step~$s$ and minor step~$i$ the automaton makes transition $f^{s,i}$, meaning that the $i$th cell switches its state from $f^{s,i}_{\mathrm{old}}$ to $f^{s,i}_{\mathrm{new}}$. We claim that the sequence $(f^{1,1},1,1)\penalty0(f^{1,2},1,2)\dots\penalty0 (f^{1,k},1,k)\penalty0 (f^{2,1},2,1) \dots\penalty0 (f^{t,k},t,k)$ is a common subsequence of all $s_j^i$. To see this, consider the first symbol $(f^{1,1},1,1)$. It will be present in both $s_1^1$ and $s_2^1$ since for the first transition the first cell was exactly in state $q_1 = f^{1,1}_{\mathrm{old}}$ and, thus, this symbol \emph{followed} $(q_1,1)$ in the construction and was not removed in the last construction step. The symbol is also present in $s_3^1$ and $s_4^1$, namely right before the (``virtual'') pair $(f^{1,1}_{\mathrm{new}},2)$. The symbol will also be present in $s_1^2$ and $s_2^2$ since $q_2 = f^{1,1}_{\mathrm{right}}$ and we added $(f^{1,1},1,1)$ to both $s_1^2$ and $s_2^2$ in step~6. Finally, the symbol will be present in all other strings near the beginning because of step~7. Next, consider the second symbol $(f^{1,2},1,2)$, which corresponds to the second step the automaton has taken. Here, the second cell switches from $f^{1,2}_{\mathrm{old}}$ to $f^{1,2}_{\mathrm{new}}$ because the first cell has already switched to $f^{1,2}_{\mathrm{left}} = f^{1,1}_{\mathrm{new}}$ during the first transition and the third cell is still in $f^{1,2}_{\mathrm{right}} = q_3$. Now, observe that in all strings $(f^{1,2},1,2)$ does, indeed, come after $(f^{1,1},1,1)$: For $s_1^2$ to $s_4^2$ this is because of steps 1 to~4. For $s_3^1$ and $s_4^1$, we have, indeed, $(f^{1,2},1,2)$ following $(f^{1,1},1,1)$ by step~5. For $s_1^3$ and $s_2^3$, the symbol $(f^{1,2},1,2)$ is present by step~6. All other strings contain the symbol by step~7 near the beginning.
Continuing in a similar fashion with the other symbols, we see that the sequence \begin{align*} (f^{1,1},1,1)\dots(f^{t,k},t,k) \end{align*} is a common subsequence of all strings and it clearly has length $t\cdot k$. \emph{Correctness: Second direction.} It remains to argue that if there is a common subsequence of the strings of length $t\cdot k$, then the automaton accepts the input. First observe that the common subsequence must be of the form $(f^{1,1},1,1)\penalty0(f^{1,2},1,2)\dots\penalty0 (f^{1,k},1,k)\penalty0 (f^{2,1},2,1) \dots\penalty0 (f^{t,k},t,k)$. The reason is that for any two symbols $(f,s,i)$ and $(f',s',i')$ if $s <s'$ then the first of these symbols always comes before the second in all strings. The same is true if $s = s'$ and $i < i'$. Finally, for $s=s'$ and $i=i'$, the opposite orderings for the symbols in steps 1 and~2 (and, also, in steps 3 and~4) ensure that at most one of the two symbols can be present in a common subsequence. Thus, the indices stored in the symbols of the common subsequence must strictly increase and, since the length of the sequence is $t\cdot k$, all possible indices must be present. We must now argue that the $f^{s,i}$ form a sequence of transitions that make the automaton accept. For this, we perform an induction on the length of an initial segment up to some symbol $(f^{s_0,i_0},s_0,i_0)$ of the common sequence. For each cell index~$i$, let $f^i = (f^{s,i},s,i)$ be the last symbol in the segment whose last component is $i$. Let $q^i = f^i_{\mathrm{new}}$ or, if the segment is so short that there is no $f^i$, let $q^i$ be the initial state~$q_i$. The inductive claim is that after $(s_0-1) \cdot k + i_0$ steps of the automaton, the cells will have reached exactly states $q^1,\dots,q^k$. Clearly, this is correct at the start. 
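As an aside, the check that recurs in both directions of this argument — whether a candidate sequence occurs as a common subsequence of every string in the family — is easy to state programmatically. A minimal Python sketch, with symbols modelled as $(f,s,i)$ tuples (the helper names and the toy family are ours, purely for illustration, not part of the reduction):

```python
def is_subsequence(seq, s):
    """True if seq occurs in s as a (not necessarily contiguous) subsequence."""
    it = iter(s)
    return all(sym in it for sym in seq)  # 'in' advances the iterator past each match

def is_common_subsequence(seq, strings):
    """True if seq is a subsequence of every string in the family."""
    return all(is_subsequence(seq, s) for s in strings)

# Toy family with symbols as (f, s, i) tuples:
family = [
    [("f11", 1, 1), ("pad", 0, 0), ("f12", 1, 2)],
    [("pad", 0, 0), ("f11", 1, 1), ("f12", 1, 2)],
]
assert is_common_subsequence([("f11", 1, 1), ("f12", 1, 2)], family)
assert not is_common_subsequence([("f12", 1, 2), ("f11", 1, 1)], family)
```

The second assertion illustrates the ordering constraint exploited above: a sequence listing the transition symbols out of order is no longer a common subsequence.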
For the inductive step, the crucial observation is that steps 1 to~6 guarantee that for $i_0 < k$ the only symbol $(f^{s_0,i_0+1},s_0,i_0+1)$ that can follow $(f^{s_0,i_0},s_0,i_0)$ in a common subsequence is one that makes cell $i_0+1$ change its state according to the transition $f^{s_0,i_0+1}$. For $i_0 = k$, we similarly have that only symbols $(f^{s_0+1,1},s_0+1,1)$ can follow that make cell $1$ change its state according to the transition $f^{s_0+1,1}$. \end{proof} \section{Conclusion} Bounded nondeterminism plays a key role in parameterized complexity theory since it lies at the heart of the definition of important classes like $\Class W[\Class P]$, but also of $\Class W[1]$. In the present paper we introduced a ``W-operator'' that can not only be applied to $\Class P$, yielding $\Para[W]\Class P$, but also to classes like $\Class{NL}$ or $\Class{NC}^1$. We showed that ``union versions'' of problems complete for $\Class P$, $\Class{NL}$, and $\Class L$ tend to be complete for $\Para[W]\Class P$, $\Para[W]\Class{NL}$, and $\Para[W]\Class L$. Several important problems studied in parameterized complexity turn out to be union problems, including $\PLang{circuit-sat}$ and $\PLang{weighted-sat}$, and we could show that the latter problem is complete for $\Para[W]\Class{NC}^1$. For the associative generability problem $\PLang{agen}$, which is also a union problem, we established its $\Para[W]\Class{NL}$-completeness. An interesting open problem is determining the complexity of the ``universal'' version of $\Lang{agen}$, where the question is whether \emph{all} size-$k$ subsets of the universe are generators. Possibly, this problem is complete for $\Para[W_\forall]\Class{NL}$. We showed that different problems are complete for the time--space class $\Class N[f \operatorname{poly}, f\log]$.
We shied away from presenting complete problems for the classes $\Class D[n^f, f \operatorname{poly}]$ and $\Class N[n^f,\penalty0 f \operatorname{poly}]$ because in their definition we need restrictions like ``the machine may make at most $n^k$ steps, where $k$ is the parameter.'' Such artificial parameterizations have been studied, though: In \cite[Theorem 2.25]{FlumG2006} Flum and Grohe show that ``$\PLang{exp-dtm-halt}$'' is complete for $\Class{XP}$. Adding a unary upper bound on the number of steps to the definition of the problem yields a problem easily seen to be complete for $\Class D[n^f, f \operatorname{poly}]$. Finding a \emph{natural} problem complete for the latter class is, however, an open problem. \subsubsection*{Acknowledgements.} We would like to thank Michael Elberfeld for helping us with the proof of Theorem~\ref{theorem:agen}. \end{document}
\begin{document}
\title{Improving the Fidelity of Optical Zeno Gates via Distillation}
\author{Patrick M. Leung}
\email{[email protected]}
\author{Timothy C. Ralph}
\affiliation{Centre for Quantum Computer Technology, Department of Physics, University of Queensland, Brisbane 4072, Australia}
\date{\today}
\pacs{03.67.Lx, 42.50.-p}
\begin{abstract}
We have modelled the Zeno effect Control-Sign gate of Franson et al (PRA 70, 062302, 2004) and shown that high two-photon to one-photon absorption ratios, $\kappa$, are needed for high fidelity free-standing operation. Hence we instead employ this gate for cluster state fusion, where the requirement for $\kappa$ is less restrictive. With the help of partially offline one-photon and two-photon distillations, we can achieve a fusion gate with unit fidelity but non-unit probability of success. We conclude that for $\kappa > 2200$, the Zeno fusion gate will outperform the equivalent linear optics gate.
\end{abstract}
\maketitle
\section{Introduction}
Quantum bits (qubits) based on polarization or spatial degrees of freedom of optical modes have several advantages: they are easily manipulated and measured; they exist in a low noise environment; and they are easily communicated over comparatively long distances.
Recently, considerable progress has been made on implementing two qubit gates in optics using the measurement-induced non-linearities proposed by Knill, Laflamme and Milburn \cite{KNI01}. Non-deterministic experimental demonstrations have been made \cite{PIT03, OBR03, GAS04} and theory has found significant ways to reduce the resource overheads \cite{YOR03,NIE04,HAY04,BRO05}. Nevertheless, the number of photons and gate operations required to implement a near deterministic two qubit gate remains high. A possible solution to this problem is the optical quantum Zeno gate suggested by Franson et al \cite{ref:Franson,ref:Jacobs}. This gate uses passive two-photon absorption to suppress gate failure events associated with photon bunching at the linear optical elements, using the quantum Zeno effect \cite{ref:Kaiser}. In principle a near deterministic, high fidelity control-sign (CZ) gate can be implemented between a pair of photonic qubits in this way. However, the slow convergence of the Zeno effect to the ideal result, with ensuing loss of fidelity, and the effect of single photon loss raise questions about the practicality of this approach. Here we consider a model of the gate that includes the effects of finite two-photon absorption and non-negligible single photon absorption. We obtain analytic expressions for the fidelity of the gate and its probability of success in several scenarios and show how the inclusion of optical distilling elements~\cite{ref:Thew} can lead to high fidelity operation under non-ideal conditions for tasks such as cluster state construction~\cite{NIE04}. The paper is arranged in the following way. We begin in the next section by introducing our model in an idealized and then more realistic setting and obtain results for a free-standing CZ gate. In section 3 we focus on using the gate as a fusion element~\cite{BRO05} for the construction of, for example, optical cluster states.
We introduce a distillation protocol that significantly improves the operation of the gate in this scenario. In section 4 we summarize and conclude.\\
\section{Model of Zeno CZ Gate}
Franson et al~\cite{ref:Franson} suggested using a pair of optical fibres weakly evanescently coupled and doped with two-photon absorbing atoms to implement the gate. As the photons in the two fibre modes couple, the occurrence of two-photon state components is suppressed by the presence of the two-photon absorbers via the Zeno effect. After a length of fibre corresponding to a complete swap of the two modes, a $\pi$ phase difference is produced between the $|11\rangle$ term and the others. If the fibre modes are then swapped back by simply crossing them, a CZ gate is achieved. We model this system as a succession of $n$ weak beamsplitters followed by 2-photon absorbers as shown in Fig.~\ref{fig:OurCsign}. As $n \to \infty$ the model tends to the continuous coupling limit envisaged for the physical realization. The gate operates on the single-rail encoding~\cite{LUN02} for which $|0\rangle_{L}=|0\rangle$ and $|1\rangle_{L}=|1\rangle$ with the kets representing photon Fock states.
Fig.~\ref{fig:CZ} shows how the single rail CZ can be converted into a dual rail CZ with logical encoding $|0\rangle_{L}=|H\rangle=|10\rangle$ and $|1\rangle_{L}=|V\rangle=|01\rangle$, with $|ij\rangle$ a Fock state with $i$ photons in the horizontal polarization mode and $j$ photons in the vertical.\\
\begin{figure}
\caption{Construction of our CZ gate.}
\end{figure}
\begin{figure}
\caption{CZ gate in dual rail implementation.}
\end{figure}
The general symmetric beam splitter matrix has the form:
\[ e^{i\delta} \left[ \begin{array}{cc} \cos\theta & \pm i\sin\theta \\ \pm i\sin\theta & \cos\theta \end{array} \right]\]
According to Figure~\ref{fig:OurCsign}, after the first beam splitter, the four computational photon number states become:
\begin{eqnarray}
|00\rangle & \rightarrow & |00\rangle \nonumber\\
|01\rangle & \rightarrow & e^{i\delta}(\cos\theta|01\rangle \pm i\sin\theta|10\rangle)\nonumber\\
|10\rangle & \rightarrow & e^{i\delta}(\pm i\sin\theta|01\rangle +\cos\theta|10\rangle)\nonumber\\
|11\rangle & \rightarrow & e^{i2\delta}(\cos2\theta|11\rangle \pm \frac{i}{\sqrt{2}}\sin2\theta(|02\rangle+|20\rangle))
\end{eqnarray}
\subsection{Ideal Two-Photon Absorption}
To illustrate the operation of the gate we first assume ideal two-photon absorbers, i.e.\ they completely block the two-photon state components but do not cause any single photon loss. Propagation through the first pair of ideal two-photon absorbers gives the mixed state
\begin{equation}
\rho^{(1)} = P_s^{(1)} | \phi \rangle^{(1)} \langle \phi|^{(1)} + P_f^{(1)} |vac \rangle \langle vac|
\end{equation}
where $|\phi \rangle^{(1)}$ is the evolved two-mode input state obtained for the case of no two-photon absorption event and $|vac \rangle$ is the vacuum state obtained in the case a two-photon absorption event occurs.
The individual components of $|\phi \rangle^{(1)}$ transform as
\begin{eqnarray}
|00\rangle & \rightarrow & |00\rangle \nonumber\\
|01\rangle & \rightarrow & e^{i\delta}(\cos\theta|01\rangle \pm i\sin\theta|10\rangle)\nonumber\\
|10\rangle & \rightarrow & e^{i\delta}(\pm i\sin\theta|01\rangle +\cos\theta|10\rangle)\nonumber\\
|11\rangle & \rightarrow & e^{i2\delta}\cos2\theta|11\rangle
\label{eqn:complete}
\end{eqnarray}
Notice that, because we are embedded in a dual rail circuit, we can distinguish between the $|00\rangle$ state that corresponds to input state $|HH\rangle$ and the $|vac \rangle$ state that results from two-photon absorption of input state $|VV\rangle$. Equation~(\ref{eqn:complete}) describes the transformation of each unit, hence repeating the procedure $n$ times gives
\begin{eqnarray}
|00\rangle & \rightarrow & |00\rangle\nonumber\\
|01\rangle & \rightarrow & e^{in\delta}(\cos n\theta|01\rangle \pm i\sin n\theta|10\rangle)\nonumber\\
|10\rangle & \rightarrow & e^{in\delta}(\pm i\sin n\theta|01\rangle + \cos n\theta|10\rangle)\nonumber\\
|11\rangle & \rightarrow & e^{i2n\delta}(\cos2\theta)^n|11\rangle
\end{eqnarray}
describing the transformations giving the evolved input state after $n$ units, $|\phi \rangle^{(n)}$. There are three conditions to satisfy for building a CZ gate. The first condition is $n\theta=\frac{\pi}{2}$, so that $|01\rangle \rightarrow e^{i(n\delta\pm\frac{\pi}{2})}|10\rangle$ and $|10\rangle \rightarrow e^{i(n\delta\pm\frac{\pi}{2})}|01\rangle$. The second condition is $n\delta\pm\frac{\pi}{2}=k\pi$ (equivalently, $\delta=\frac{\pi}{2n}+\frac{k\pi}{n}$), where $k$ is any integer, so that $|10\rangle \rightarrow e^{ik\pi}|01\rangle$ and $|01\rangle \rightarrow e^{ik\pi}|10\rangle$ and $|11\rangle \rightarrow -(\cos 2\theta)^n|11\rangle$.
Phase shifters are needed to correct the sign of the output state of $|01\rangle$ and $|10\rangle$ for odd $k$, but here we simply set $k=0$. The last condition is ``$\cos2\theta > 0$" (i.e. $0< \theta < \frac{\pi}{4}$), such that a minus sign is induced on $|11\rangle$. This condition is always true because we are using many weak beam splitters (i.e. $\theta$ is small). Let $\tau = (\cos 2\theta)^n = (\cos\frac{\pi}{n})^n \ge 0$, then swapping the fibres gives the transformations \begin{eqnarray} |00\rangle & \rightarrow & |00\rangle\noindent}\def\non{\nonumbernumber\\ |01\rangle & \rightarrow & |01\rangle\noindent}\def\non{\nonumbernumber\\ |10\rangle & \rightarrow & |10\rangle\noindent}\def\non{\nonumbernumber\\ |11\rangle & \rightarrow & -\tau|11\rangle \end{eqnarray} Clearly, the above is a controlled sign operation with a skew (quantified as $\tau$) on the probability amplitude of the $|11\rangle$ state. If there is some way to herald failure, i.e. two-photon absorption events, then the fidelity of the gate will be $F_h = |\langlengle T|\phi \ranglengle^{(n)}|^2$, where $|T \ranglengle$ is the target state, and the probability of success will be $P_s^{(n)}$. On the other hand if two-photon absorption events are unheralded then the fidelity will be $F_{uh} = F_h P_s^{(n)}$. For simplicity we consider the equally weighted superposition input state $\frac{1}{2}(|00\rangle+|01\rangle+|10\rangle+|11\rangle)$. The corresponding $|\phi \ranglengle^{(n)}$ after the Zeno-CZ gate is $\frac{1}{2}(|00\rangle+|01\rangle+|10\rangle-\tau|11\rangle)$, to be compared with the target state $|T \ranglengle = \frac{1}{2}(|00\rangle+|01\rangle+|10\rangle- |11\rangle)$. The heralded fidelity and probability of success are then $F_h =\frac{(3+\tau)^2}{4(3+\tau^2)}$ and $P_s=\frac{3+\tau^2}{4}$ respectively. As $n$ becomes very large and hence tends to the continuous limit, $\tau$ tends to one, and so both $F_h$ and $P_s$ approach one. 
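These closed-form figures of merit are easy to evaluate numerically; a minimal Python sketch (the function name is ours) of $\tau$, $F_h$ and $P_s$ for the equally weighted input state, in the ideal two-photon-absorption model:

```python
import math

def ideal_zeno_figures(n):
    """Skew tau, heralded fidelity F_h and success probability P_s for
    the equally weighted input state after n units, assuming ideal
    two-photon absorbers (no single photon loss)."""
    tau = math.cos(math.pi / n) ** n               # (cos 2theta)^n with theta = pi/(2n)
    F_h = (3 + tau) ** 2 / (4 * (3 + tau ** 2))    # heralded fidelity
    P_s = (3 + tau ** 2) / 4                       # probability of success
    return tau, F_h, P_s

for n in (10, 100, 1000):
    tau, F_h, P_s = ideal_zeno_figures(n)
    print(f"n={n}: tau={tau:.5f}, F_h={F_h:.6f}, P_s={P_s:.5f}")
```

As expected, all three quantities approach one as $n$ grows, since $n\ln\cos(\pi/n)\approx-\pi^2/(2n)\to 0$.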
\subsection{Incomplete Two-Photon Absorption with Single Photon Loss}
The previous analysis is clearly unrealistic as it assumes infinitely strong two-photon absorption but negligible single photon absorption. We now include the effect of finite two-photon absorption and non-negligible single photon loss. Let $\gamma_{1}=\exp(\frac{-\lambda}{n\kappa})$ and $\gamma_{2}=\exp(\frac{-\lambda}{n})$ be the probability of single photon and two-photon transmission respectively for one absorber. Here the parameter $\lambda=\chi L$, where $L$ is the length of the absorber and $\chi$ is the corresponding proportionality constant related to the absorption cross section. Furthermore, $\kappa$ specifies the relative strength of the two transmissions and relates them by $\gamma_{2}=\gamma_{1}^{\kappa}$. Now each unit of weak beam splitter and absorbers does the following transformation on the computational states of $|\phi \rangle$
\begin{eqnarray}
|00\rangle & \rightarrow & |00\rangle \nonumber\\
|01\rangle & \rightarrow & e^{i\delta}\sqrt{\gamma_{1}}(\cos\theta|01\rangle \pm i\sin\theta|10\rangle)\nonumber\\
|10\rangle & \rightarrow & e^{i\delta}\sqrt{\gamma_{1}}(\pm i\sin\theta|01\rangle +\cos\theta|10\rangle)\nonumber\\
|11\rangle & \rightarrow & e^{i2\delta}\gamma_{1}\bigl(\cos2\theta|11\rangle \pm \frac{i\sqrt{\gamma_{2}}\sin2\theta}{\sqrt{2}}(|02\rangle+|20\rangle)\bigr)\nonumber\\
|02\rangle & \rightarrow & e^{i2\delta}\gamma_{1}(\frac{\pm i\sin2\theta}{\sqrt{2}}|11\rangle + \sqrt{\gamma_{2}}(\cos^2\theta|02\rangle-\sin^2\theta|20\rangle))\nonumber\\
|20\rangle & \rightarrow & e^{i2\delta}\gamma_{1}(\frac{\pm i\sin2\theta}{\sqrt{2}}|11\rangle - \sqrt{\gamma_{2}}(\sin^2\theta|02\rangle-\cos^2\theta|20\rangle))
\end{eqnarray}
Repeating the procedure $n$ times with the aforementioned conditions on
$\theta$ gives the following
\begin{eqnarray}
|00\rangle & \rightarrow & |00\rangle\nonumber\\
|01\rangle & \rightarrow & \gamma_{1}^{n/2}|01\rangle\nonumber\\
|10\rangle & \rightarrow & \gamma_{1}^{n/2}|10\rangle\nonumber\\
|11\rangle & \rightarrow & -\gamma_{1}^{n}\tau|11\rangle + f(|02 \rangle, |20 \rangle)
\label{eqn:incomplete}
\end{eqnarray}
where the new expression for $\tau$ is given by:
\begin{eqnarray}
\tau_{n,\lambda} & = & \frac{2^{-\frac{3}{2}-n}}{d}\bigl( (g+\frac{d}{\sqrt{2}})^{n}(\sqrt{2}d-h) \nonumber\\
& & + (g-\frac{d}{\sqrt{2}})^{n}(\sqrt{2}d+h)\bigr)\nonumber\\
d_{n,\lambda} & = & \sqrt{(1+\cos\frac{2\pi}{n})(1+\gamma_{2})+2\sqrt{\gamma_{2}}(\cos(\frac{2\pi}{n})-3)}\nonumber\\
g_{n,\lambda} & = & (\cos\frac{\pi}{n})(\sqrt{\gamma_{2}}+1)\nonumber\\
h_{n,\lambda} & = & 2(\cos\frac{\pi}{n})(\sqrt{\gamma_{2}}-1)
\end{eqnarray}
and we have suppressed the explicit form of the $|02 \rangle, |20 \rangle$ state components as they lie outside the computational basis and so do not explicitly contribute to the fidelity. These expressions can be used to calculate the unheralded fidelity, the heralded fidelity and the probability of success. Our numerical evaluations are all carried out in the (near) continuous limit of large $n$.
\subsection{Free Standing Gate}
For a free-standing gate, as depicted in Fig.\ref{fig:CZ}, gate failure events are not heralded, thus the unheralded fidelity is appropriate to consider. The fidelity is a function of $\lambda$. As the length of the interaction region is increased ($\lambda$ increased) the effective strength of the two-photon absorption is increased, leading to an improvement in the heralded fidelity, $F_h$.
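As a sanity check on these expressions, the per-unit map can also be iterated numerically. In the ideal-absorber limit $\gamma_1=1$, $\gamma_2=0$ the $|11\rangle$ amplitude should reduce to $-(\cos\frac{\pi}{n})^n$, i.e.\ $-\tau$ from the ideal analysis. A minimal sketch (our code, using the `+' sign choice and $\delta=\theta=\frac{\pi}{2n}$, i.e.\ $k=0$):

```python
import cmath
import math

def run_units(n, gamma1, gamma2):
    """Iterate the per-unit map on the (|11>, |02>, |20>) amplitudes,
    with theta = delta = pi/(2n) and the '+' sign choice."""
    theta = math.pi / (2 * n)
    c2, s2 = math.cos(2 * theta), math.sin(2 * theta)
    ph = cmath.exp(2j * theta) * gamma1          # e^{i 2 delta} gamma_1 prefactor
    rg2 = math.sqrt(gamma2)
    cc, ss = math.cos(theta) ** 2, math.sin(theta) ** 2
    a11, a02, a20 = 1.0 + 0j, 0j, 0j             # start in |11>
    for _ in range(n):
        b11 = ph * (c2 * a11 + 1j * s2 / math.sqrt(2) * (a02 + a20))
        b02 = ph * (1j * rg2 * s2 / math.sqrt(2) * a11 + rg2 * (cc * a02 - ss * a20))
        b20 = ph * (1j * rg2 * s2 / math.sqrt(2) * a11 - rg2 * (ss * a02 - cc * a20))
        a11, a02, a20 = b11, b02, b20
    return a11, a02, a20

# Ideal-absorber limit: gamma1 = 1, gamma2 = 0 should give -(cos(pi/n))**n.
amp, _, _ = run_units(200, 1.0, 0.0)
print(amp)  # real part ≈ -0.9756 for n = 200, up to float error
```

One can verify by hand that the closed form above collapses to $(\cos\frac{\pi}{n})^n$ when $\gamma_2\to 0$: then $d=\sqrt{2}\cos\frac{\pi}{n}$, $g=\cos\frac{\pi}{n}$, $h=-2\cos\frac{\pi}{n}$, and only the $(g+\frac{d}{\sqrt{2}})^n$ term survives.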
However, at the same time, the level of single photon absorption is also increasing with the length, acting to decrease the probability of success, $P_s$. As the unheralded fidelity is $F_{uh} = F_h P_s$, there is a trade-off between these two effects leading to an optimum value of $\lambda$ for sufficiently large $\kappa$. An example of the dependence is shown in Fig.\ref{fig:fidelity_vs_lambda}. The fidelity is plotted as a function of $\kappa$ with $\lambda$ optimized for each point in Fig.\ref{fig:fidelity_vs_kappa}. For large ratios of two-photon absorption to single-photon absorption, $\kappa$, we tend to the ideal case of unit fidelity. However, the conditions required are demanding, with absorption ratios of a million to one required for $F_{uh} > 0.99$ and 100 million to one for $F_{uh} > 0.999$. Recent estimates suggest $\kappa$'s of ten thousand to one may be achievable~\cite{ref:Franson3}, well short of these numbers. In the following we will consider a different scenario in which the gate can be usefully employed with less stringent conditions on $\kappa$.\\
\begin{figure}
\caption{Unheralded fidelity versus $\lambda$ for the CZ gate shown in Fig.\ref{fig:CZ}.}
\end{figure}
\begin{figure}
\caption{Unheralded fidelity versus $\log\kappa$ (in base 10) for the CZ gate shown in Fig.\ref{fig:CZ}.}
\end{figure}
\section{Zeno Fusion Gate}
We have seen that the requirements on high fidelity operation for the free-standing gate are quite extreme. We now consider an alternate scenario in which probability of success is traded off against fidelity by heralding failure events through direct detection. In particular we consider using the Zeno gate to implement the fusion technique \cite{BRO05}. Fusion can be used to efficiently construct cluster states \cite{ref:RAU01}, or re-encode parity states \cite{ref:Ralph}. We will specifically consider cluster state construction here.
Essentially, the gate is used to make a Bell measurement on a pair of qubits, as depicted in Fig.\ref{fig:fusion}. One of the qubits comes from the cluster we are constructing, whilst the other comes from a resource cluster state, in a known logical state. The Bell measurement has the effect of ``fusing'' the resource state onto the existing state. By careful choice of the resource state, large 2-dimensional cluster states, suitable for quantum computation, can be constructed~\cite{ref:Dawson}. Because the Bell measurement ends with the direct detection of the qubits, the loss of one or both of the photons, or the bunching of two photons in a single qubit mode, can immediately be identified in the detection record, and hence failure events will be heralded. Effectively we will postselect the density operator $\rho = P_s | \phi' \rangle \langle \phi'| + \rho_r$, where $|\phi' \rangle$ is the component of the output state which remains in the computational basis and $\rho_r$ collects all the components that do not. The measurement record then allows us to herald the first term of the density operator as successful operation, with fidelity $F_h = |\langle T|\phi' \rangle|^2$ and probability of success $P_s$, and the second term as failure. We now consider techniques for improving the heralded fidelity of the gate and then evaluate its performance as a fusion gate.
\subsection{Single Photon Distillation}
From equation~(\ref{eqn:incomplete}), we can see that $\gamma_{1}<1$ lowers the probability amplitudes of the four computational states unevenly, as previously discussed by Jacobs et al~\cite{ref:Jacobs}.
By distilling the states with beam splitters and detectors \cite{ref:Thew} (see Figure~\ref{fig:cz_distill}), where each beam splitter has a transmission coefficient equal to $\gamma_{1}^{n/2}$, the four computational states of $|\phi' \rangle$ become:
\begin{eqnarray}
|00\rangle & \rightarrow & \gamma_{1}^{n}|00\rangle\nonumber\\
|01\rangle & \rightarrow & \gamma_{1}^{n}|01\rangle\nonumber\\
|10\rangle & \rightarrow & \gamma_{1}^{n}|10\rangle\nonumber\\
|11\rangle & \rightarrow & -\gamma_{1}^{n}\tau|11\rangle
\end{eqnarray}
\begin{figure}
\caption{CZ gate in dual rail implementation with beam splitter distillation to improve average fidelity.}
\end{figure}
The distillation is successful when the (ideal) detectors measure no photon. The fidelity and probability of success of this scheme are $F_h=\frac{(3+\tau)^2}{4(3+\tau^2)}$ and $P_s=\gamma_{1}^{2n}(\frac{3+\tau^2}{4}) = e^{-2\lambda/\kappa}(\frac{3+\tau^2}{4})$ respectively. As $\lambda$ tends to infinity $F_h \to 1$; however, at the same time $P_s \to 0$. In order to achieve unit fidelity \emph{independent} of $\lambda$, we now apply two-photon distillation.
\subsection{Two-Photon Distillation}
As shown previously, after the CZ gate and single photon distillation, the input state $\frac{1}{2}(|00\rangle+|01\rangle+|10\rangle+|11\rangle)$ becomes $\frac{\gamma^{n}_{1}}{2}(|00\rangle+|01\rangle+|10\rangle-\tau|11\rangle)$. Now we require two-photon distillation to renormalise the input state by inducing $\tau$ on the other three computational states, as shown in Figure~\ref{fig:two_photon_distill}.
To do so, we first apply a bit-flip on the control qubit, then apply a $\tau$-gate (see Figure~\ref{fig:tau_gate}) together with a single photon distiller on the control qubit with transmission coefficient $\sqrt{\gamma_{1}'}$ and another single photon distiller on the target qubit with transmission coefficient $\sqrt{\gamma_{1}'}\tau$, and then undo the previous bit-flip by applying another bit-flip on the control qubit. The $\tau$-gate does the same operation as the aforementioned CZ gate (excluding the single photon distillation) except that no minus sign is induced on the output of $|11\rangle$. The construction of a $\tau$-gate is described in the next subsection. In summary the two-photon distillation circuit does the following:
\begin{eqnarray}
|00\rangle & \rightarrow & \gamma_{1}'\tau|00\rangle\nonumber\\
|01\rangle & \rightarrow & \gamma_{1}'\tau|01\rangle\nonumber\\
|10\rangle & \rightarrow & \gamma_{1}'\tau|10\rangle\nonumber\\
|11\rangle & \rightarrow & \gamma_{1}'|11\rangle
\end{eqnarray}
\begin{figure}
\caption{Two-photon distillation. Schematic of operation sequence: CZ gate, bit-flip, $\tau$-gate with two single photon distillers, and then bit-flip.}
\end{figure}
After the above operations, the input state $\frac{1}{2}(|00\rangle+|01\rangle+|10\rangle+|11\rangle)$ becomes $\frac{\gamma^{n}_{1}\gamma_{1}'\tau}{2}(|00\rangle+|01\rangle+|10\rangle-|11\rangle)$. Now the state can be renormalised to achieve unit fidelity \emph{independent} of $\lambda$. The explicit expression for the probability of success is $P_s = \gamma^{2n}_{1}\gamma_{1}'^{2}\tau^{2} = e^{-2\lambda/\kappa}\tau^{2+2/\kappa}$. Figure~\ref{fig:two_photon_distill_psuccess} shows the probability of success of this gate for different values of $\kappa$.\\
\begin{figure}
\caption{Probability of success of the CZ gate with two-photon and single photon distillation plotted against $\log\kappa$ (in base 10).
Fidelity is always one.}
\end{figure}
\subsection{The $\tau$-Gate Circuit}
We can construct a $\tau$-gate with two 50-50 beam splitters, a pair of two-photon absorbers, and some phase shifters, as shown in Figure~\ref{fig:tau_gate}. The first beam splitter performs $|01\rangle\rightarrow|10\rangle$, $|10\rangle\rightarrow|01\rangle$ and $|11\rangle\rightarrow\frac{i}{\sqrt{2}}(|02\rangle+|20\rangle)$. The pair of two-photon absorbers then induce $\sqrt{\gamma_{1}'}$ on both $|01\rangle$ and $|10\rangle$ due to single photon loss, and induce $\gamma_{1}'\gamma_{2}'$ on $\frac{i}{\sqrt{2}}(|02\rangle+|20\rangle)$ due to both single photon and two-photon loss. The second beam splitter undoes the operation of the first beam splitter. Then, with some phase shifters to correct the relative phase between the terms and having $\gamma_{1}'^\kappa=\gamma_{2}'=\tau$, we have a $\tau$-gate that does the following operation:
\begin{eqnarray}
|00\rangle & \rightarrow & |00\rangle\nonumber\\
|01\rangle & \rightarrow & \sqrt{\gamma_{1}'}|01\rangle\nonumber\\
|10\rangle & \rightarrow & \sqrt{\gamma_{1}'}|10\rangle\nonumber\\
|11\rangle & \rightarrow & \gamma_{1}'\tau|11\rangle
\end{eqnarray}
\begin{figure}
\caption{$\tau$-gate.}
\end{figure}
\subsection{Performance of the Zeno Fusion Gate}
The fusion approach is important because it is the most efficient method known for performing quantum computation using only linear optics. Linear optics allows a partial Bell measurement to be made with a probability of success of 50\% (assuming ideal detectors). In addition, the failure mode measures the qubits in the computational basis, which does not affect the state of the remaining qubits in the cluster or parity state. Thus a failure event only sacrifices a single qubit from the cluster being constructed, and the probability of destroying $N$ qubits in the process of achieving a successful fusion is $P_l = 2^{-N}$.
In contrast, many of the failure events for the Zeno gate will simply erase the photon, giving no knowledge about its state. For simplicity, and to be conservative, we will assume all failure events lead to complete erasure of the photon state. In order to recover from this situation the adjoining qubit in the cluster must be measured in the logical basis, thus removing the effect of the erasure \cite{ref:Lim, ref:Duan}. This means that every failure event sacrifices two qubits from the cluster being constructed, and the probability of destroying $N$ qubits in the process of achieving a successful fusion is $P_z = (1-P_s)^{N/2}$. Requiring $P_l = P_z$ we estimate that the Zeno gate must have $P_s > 0.75$ to offer an advantage over linear optics.\\
\begin{figure}
\caption{Zeno Fusion gate with partial offline distillation.}
\end{figure}
We can make one final improvement to the set-up by relocating the distillation process for the resource qubit offline (see Fig.5), which boosts the probability of success. The probability of success is then given by $P=\frac{2\gamma_{1}^{2n}\gamma_{1}'^2\tau^2}{1+\gamma_{1}^{n}\gamma_{1}'\tau} = \frac{2e^{-2\lambda/\kappa}\tau^{(2+2/\kappa)}}{1+e^{-\lambda/\kappa}\tau^{(1+1/\kappa)}}$. The plots of the probability of success versus $\kappa$ and of the optimal $\lambda$ versus $\kappa$ are shown in Figure~\ref{fig:PsuccessVsNoffline} and Figure~\ref{fig:OptimalVsNoffline} respectively.\\
\begin{figure}
\caption{Probability of success of the Zeno fusion gate with partially offline two-photon and single photon distillation plotted against $\log\kappa$ (in base 10).}
\end{figure}
\begin{figure}
\caption{Optimal $\lambda$ for the probability of success of the Zeno fusion gate with partially offline two-photon and single photon distillation plotted against $\log\kappa$ (in base 10).}
\end{figure}
The break-even point between linear optics and the Zeno gate is at $\kappa=2200$, where the probability of success is about $0.75$.
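The $P_s > 0.75$ threshold follows directly from equating the two loss probabilities; a small sketch of the arithmetic:

```python
# Linear optics destroys N qubits per successful fusion with probability
# P_l = 2**(-N); the erasure-model Zeno gate with success probability p
# destroys N qubits with probability P_z = (1 - p)**(N/2).
# Requiring P_l = P_z for every N forces (1 - p) = 1/4, i.e. p = 0.75.
p_breakeven = 1 - 0.25
print(p_breakeven)  # 0.75

# Sanity check for a few cluster sizes N:
for N in (2, 4, 8, 20):
    assert abs(2.0 ** (-N) - (1 - p_breakeven) ** (N / 2)) < 1e-15
```

Any $p$ above this value makes $P_z < P_l$ for all $N$, which is the sense in which the Zeno gate offers an advantage.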
When $\kappa=10000$ the probability of success is about $0.87$. Thus we conclude that an absorption ratio of ten thousand to one or more would produce a Zeno gate with a significant advantage over linear fusion techniques.\\
\textbf{Conclusion}\\
In this paper, we have modelled Franson et al's CZ gate with a succession of $n$ weak beam-splitters followed by two-photon absorbers, in the (near) continuous limit of large $n$. We analysed this CZ gate for both the ideal two-photon absorption case and the incomplete two-photon absorption with single photon loss case, giving analytical and numerical results for the fidelity and probability of success. The results show that for a free-standing gate we need an absorption ratio $\kappa$ of a million to one to achieve $F>0.99$ and 100 million to one to achieve $F>0.999$, whereas recent estimates suggest only that $\kappa\approx10000$ may be achievable. We therefore employ this gate for qubit fusion, where the requirement for $\kappa$ is less restrictive. With the help of partially offline one-photon and two-photon distillations, we can achieve a CZ gate with unit fidelity and a probability of success of about 0.87 for $\kappa=10000$. We conclude that when employed as a fusion gate, the Zeno gate could offer significant advantages over linear techniques for reasonable parameters.\\
\textbf{Acknowledgement}\\
We thank W.~J.~Munro, A.~Gilchrist and C.~Myers for useful discussions. This work was supported by the Australian Research Council and the DTO-funded U.S. Army Research Office Contract No. W911NF-05-0397.\\
\end{document}
\begin{document} \title{Tropical Images of Intersection Points} \begin{abstract} A key issue in tropical geometry is the lifting of intersection points to a non-Archimedean field. Here, we ask: ``Where can classical intersection points of planar curves tropicalize to?'' An answer should have two parts: first, identifying constraints on the images of classical intersections, and, second, showing that all tropical configurations satisfying these constraints can be achieved. This paper provides the first part: images of intersection points must be linearly equivalent to the stable tropical intersection by a suitable rational function. Several examples provide evidence for the conjecture that our constraints may suffice for part two. \end{abstract} \section{Introduction}\label{section:intro} Let $K$ be an algebraically closed non-Archimedean field with a nontrivial valuation $\text{val}:K^*\rightarrow\ensuremath{\mathbb{R}}$. The examples throughout this paper will use $K=\ensuremath{\mathbb{C}}\{\!\{t\}\!\}$, the field of Puiseux series over the complex numbers with indeterminate $t$. This is the algebraic closure of the field of Laurent series over $\ensuremath{\mathbb{C}}$, and can be defined as $$\ensuremath{\mathbb{C}}\{\!\{t\}\!\}=\left\{\sum_{i=k}^\infty a_i t^{i/n}\,:\, a_i\in \ensuremath{\mathbb{C}}, n,k\in\ensuremath{\mathbb{Z}}, n>0 \right\}, $$ with $\text{val}\left(\sum_{i=k}^\infty a_i t^{i/n}\right)=k/n$ if $a_k\neq 0$. In particular, $\text{val}(t)=1$. The tropicalization map $\text{trop}:(K^*)^n\rightarrow\ensuremath{\mathbb{R}}^n$ sends points in the $n$-dimensional torus $(K^*)^n$ into Euclidean space under coordinate-wise valuation: $$\text{trop}:(a_1,\ldots,a_n)\mapsto(\text{val}(a_1),\ldots,\text{val}(a_n)).$$ In tropical geometry, we consider the tropicalization map on a variety $X\subset (K^*)^n$. 
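The maps just defined are easy to experiment with once a Puiseux series is stored sparsely. The sketch below (the dictionary representation and function names are our own, for illustration only) computes $\text{val}$ and $\text{trop}$ exactly using rational exponents.

```python
from fractions import Fraction

def val(series):
    """Valuation of a nonzero Puiseux series stored as a dictionary
    {exponent: coefficient}: the least exponent with nonzero coefficient."""
    return min(e for e, c in series.items() if c != 0)

def trop(point):
    """The tropicalization map (K^*)^n -> R^n: coordinate-wise valuation."""
    return tuple(val(a) for a in point)

# a = t + 3t^{3/2} has valuation 1;  b = 5t^{-1/2} + t^2 has valuation -1/2,
# so trop((a, b)) = (1, -1/2).
a = {Fraction(1): 1, Fraction(3, 2): 3}
b = {Fraction(-1, 2): 5, Fraction(2): 1}
assert trop((a, b)) == (1, Fraction(-1, 2))
```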
Since the value group is dense in $\ensuremath{\mathbb{R}}$, we take the Euclidean closure of $\text{trop}(X)$ in $\ensuremath{\mathbb{R}}^n$, and call this the \emph{tropicalization of $X$}, denoted $\text{Trop}(X)$. The tropicalization of a variety is a piece-wise linear subset of $\ensuremath{\mathbb{R}}^n$, and has the structure of a balanced weighted polyhedral complex. In the case where $X$ is a hypersurface, the combinatorics of the tropicalization can be found from a subdivision of the Newton polytope of $X$. For more background on tropical geometry, see \cite{Gu} and \cite{MS}. Consider two curves $X,Y\subset (K^*)^2$ intersecting in a finite number of points. We are interested in the image of the intersection points under tropicalization; that is, in $\text{Trop}(X\cap Y)$ inside of $\text{Trop}(X)\cap\text{Trop}(Y)\subset\ensuremath{\mathbb{R}}^2$. It was shown in \cite[Theorem 1.1]{OP} that if $\text{Trop}(X)\cap\text{Trop}(Y)$ is zero dimensional in a neighborhood of a point in the intersection, then that point is in $\text{Trop}(X\cap Y)$. More generally, they showed this for varieties $X$ and $Y$ under the assumption that $\text{Trop}(X)\cap\text{Trop}(Y)$ has codimension $\text{codim }X+\text{codim }Y$ in a neighborhood of the point. It follows that if $\text{Trop}(X)\cap\text{Trop}(Y)$ is a finite set, then $\text{Trop}(X\cap Y)=\text{Trop}(X)\cap\text{Trop}(Y)$. It is possible for $\text{Trop}(X)\cap\text{Trop}(Y)$ to have higher dimensional components, namely finite unions of line segments and rays. It was shown in \cite{OR} that if $\text{Trop}(X)\cap\text{Trop}(Y)$ is bounded, then each connected component of $\text{Trop}(X)\cap\text{Trop}(Y)$ has the ``right'' number of images of points in $X\cap Y$, counted with multiplicity. 
In this context, the ``right'' number is the number of points in the stable tropical intersection of that connected component; the stable tropical intersection is $\lim_{\varepsilon\rightarrow 0}(\text{Trop}(X)+\varepsilon\cdot v)\cap \text{Trop}(Y)$, where $v$ is a generic vector and $\varepsilon$ is a real number \cite[\S4]{OR}. They further showed that the theorem holds for components of $\text{Trop}(X)\cap\text{Trop}(Y)$ that are unbounded, after a suitable compactification. We offer the following example to illustrate this higher dimensional component phenomenon. This will motivate the following question: as we vary $X$ and $Y$ over curves with the same tropicalizations, what are the possibilities for the varying set $\text{Trop}(X\cap Y)$ inside of the fixed set $\text{Trop}(X)\cap\text{Trop}(Y)$? \begin{example}\label{motivating_example} Let $K=\ensuremath{\mathbb{C}}\{\!\{t\}\!\}$ and let $f,g\in K[x,y]$ be $f(x,y)=c_1+c_2x+c_3y $ and $g(x,y)=c_4x+c_5xy+tc_6y$, where $c_i\in K$ and $\text{val}(c_i)=0$ for all $i$. Let $X,Y\subset (K^*)^2$ be the curves defined by $f$ and $g$, respectively. \begin{figure} \caption{ $\text{Trop} \label{figure:line_conic} \end{figure} Regardless of our choice of $c_i$, $\text{Trop}(X)$ and $\text{Trop}(Y)$ will be as pictured in Figure \ref{figure:line_conic}, with $\text{Trop}(X)$ and $\text{Trop}(Y)$ intersecting in the line segment $L$ from $(0,0)$ to $(1,0)$. However, $X$ and $Y$ only intersect in two points (or one point with multiplicity $2$). The natural question is: as we vary the coefficients while keeping valuations (and thus tropicalizations) fixed, what are the possible images of the two intersection points within $L$? A reasonable guess is that the intersection points map to the stable tropical intersection $\{(0,0),(1,0)\}$, and indeed this does happen for a generic choice of coefficients. 
However, as shown in Example \ref{motivating_example_detailed}, one can choose coefficients such that the intersection points map to any pair of points in $L$ of the form $(r,0)$ and $(1-r,0)$, where $0\leq r\leq \frac{1}{2}$. These possible configurations are illustrated in Figure \ref{figure:line_conic_configurations}. \begin{figure} \caption{Possible images of $X\cap Y$ in $\text{Trop} \label{figure:line_conic_configurations} \end{figure} \end{example} The main result of this paper is that the points $\text{Trop}(X\cap Y)$ inside of $\text{Trop}(X)\cap\text{Trop}(Y)$ must be linearly equivalent to the stable tropical intersection via particular \emph{tropical rational functions}, defined in Section \ref{section:background}. To distinguish tropical rational functions from classical rational functions, they will be written as $f^{\text{trop}}$, $g^{\text{trop}}$, or $h^{\text{trop}}$ instead of $f$, $g$, or $h$. See \cite{BN} and \cite{GK} for more background. In all the examples discussed in Section \ref{section:examples}, essentially every such configuration is achievable. Conjecture \ref{main_conjecture} expresses our hope that this always holds. \begin{theorem}\label{main_theorem} Let $X,Y\subset (K^*)^2$ where $X\cap Y$ is equal to the multiset $\{p_1,\ldots,p_n\}$ and where $\text{Trop}(X)$ is smooth. Let $E$ be the stable intersection divisor of $\text{Trop}(X)$ and $\text{Trop}(Y)$, and let $D$ be $$D=\sum_i\text{trop}(p_i).$$ Then there exists a tropical rational function $h^{\text{trop}}$ on $\text{Trop}(X)$ such that $(h^{\text{trop}})=D-E$ and $\text{supp}(h^{\text{trop}})\subset \text{Trop}(X)\cap\text{Trop}(Y)$. \end{theorem} We will present two proofs of this theorem. In Sections \ref{section:background} and \ref{section:main_result} we approach the question from the perspective of Berkovich theory, which in the smooth case allows us to tropicalize rational functions on classical curves. 
In Section \ref{section:modifications} we present an alternate argument using tropical modifications, which allows us to drop the smoothness assumption. \begin{example}\label{motivating_example_extended} Let $X$ and $Y$ be as in Example \ref{motivating_example}. We will consider tropical rational functions on $\text{Trop}(X)\cap \text{Trop}(Y)$ such that \begin{itemize} \item[(i)] the stable intersection points are the poles (possibly canceling with zeros), and \item[(ii)] the tropical rational function takes on the same value at every boundary point of $\text{Trop}(X)\cap\text{Trop}(Y)$. \end{itemize} If we insist that the ``same value'' in condition (ii) is $0$, we may extend these tropical rational functions to all of $\text{Trop}(X)$ by setting them equal to $0$ on $\text{Trop}(X)\setminus \text{Trop}(Y)$. This yields tropical rational functions on $\text{Trop}(X)$ with $\text{supp}(h^{\text{trop}})\subset \text{Trop}(X)\cap\text{Trop}(Y)$, as in Theorem \ref{main_theorem}. Instances of such tropical rational functions on $L=\text{Trop}(X)\cap\text{Trop}(Y)$ from our example are illustrated in Figure \ref{figure:line_conic_rational_functions}. \begin{figure} \caption{Graphs of tropical rational functions on $\text{Trop} \label{figure:line_conic_rational_functions} \end{figure} As asserted by Theorem \ref{main_theorem}, all possible image intersection sets in $\text{Trop}(X)\cap\text{Trop}(Y)$ arise as the zero set of such a tropical rational function. Equivalently, the stable intersection divisor and the image of intersection divisor are linearly equivalent via one of these functions. \end{example} \begin{remark}It is not quite the case that the zero set of every such tropical rational function (from Example \ref{motivating_example_extended}) is attainable as the image of the intersections of $X$ and $Y$ (with changed coefficients). 
For instance, such a tropical rational function could have zeros at $(\frac{\sqrt{2}}{2},0)$ and $(1-\frac{\sqrt{2}}{2},0)$, which cannot be the images of \emph{any} points on $X$ and $Y$ since they have irrational coordinates. However, if we insist that the tropical rational functions have zeros at points with rational coordinates (since $\mathbb{Q}=\text{val}(K^*)$), all zero sets can be achieved as the images of intersections. This is the content of Conjecture \ref{main_conjecture}. \end{remark} \section{Tropicalizations of Rational Functions}\label{section:background} In this section we present background information on tropical rational function theory, and use some Berkovich theory to define the tropicalization of a rational function. For the theory of tropical rational functions, we consider abstract tropical curves $\Gamma$, which are weighted metric graphs with finitely many edges and vertices, where the edges have possibly infinite lengths. See \cite{BPR} for background on Berkovich spaces, and \cite{Mi} for more background on tropical rational functions. Tropical rational functions on tropical curves are analogous to classical rational functions on classical curves. A \emph{divisor} on a tropical curve $\Gamma$ is a finite formal sum of points in $\Gamma$ with coefficients in $\ensuremath{\mathbb{Z}}$. If $D=\sum_ia_iP_i$, the \emph{degree} of $D$ is $\deg D:=\sum_ia_i$. The \emph{support} of $D$ is the set of all points $P_i$ with $a_i\neq 0$, and $D$ is called \emph{effective} if all $a_i$'s are nonnegative. \begin{defn}\rm{ A \emph{rational function} on a tropical curve $\Gamma$ is a continuous function $f^{\text{trop}}:\Gamma\rightarrow \ensuremath{\mathbb{R}}\cup\{\pm\infty\}$ such that the restriction of $f^{\text{trop}}$ to any edge of $\Gamma$ is a piecewise linear function with integer slopes and only finitely many pieces. 
This means that $f^{\text{trop}}$ can only take on the values of $\pm\infty$ at the unbounded ends of $\Gamma$. The \emph{associated divisor} of $f^{\text{trop}}$ is $(f^{\text{trop}})=\sum_{P\in\Gamma}\text{ord}_P(f^{\text{trop}})\cdot P$}, where $\text{ord}_P(f^{\text{trop}})$ is minus the sum of the outgoing slopes of $f^{\text{trop}}$ at a point $P$. If $D$ and $E$ are divisors such that $D-E=(f^{\text{trop}})$ for some tropical rational function $f^{\text{trop}}$, we say that $D$ and $E$ are \emph{linearly equivalent}. \end{defn} \begin{figure} \caption{The graph of a rational function $f^{\text{trop} \label{rationalfunction} \end{figure} As an example, consider Figure \ref{rationalfunction}. Here $\Gamma$ consists of four vertices and three edges arranged in a Y-shape, and the image of $\Gamma$ under a rational function $f^{\text{trop}}$ is illustrated lying above it. The leftmost vertex is a zero of order $2$, since there is an outgoing slope of $-2$ and no other outgoing slopes. The next kink in the graph is a pole of order $1$, since the outgoing slopes are $2$ and $-1$ and $2+(-1)=1$. Moving along in this direction we have a pole of order $4$, a zero of order $4$, at one endpoint a pole of order $1$, and at the other endpoint no zeros or poles. Note that, counting multiplicity, there are six zeros and six poles. The numbers agree, as in the classical case. Since we can tropicalize a curve to obtain a tropical curve, we would like to tropicalize a rational function on a curve and obtain a tropical rational function on a tropical curve. A na\"{i}ve definition of ``tropicalizing a rational function'' would be as follows. \begin{ndefn}Let $h$ be a rational function on a curve $X$. Define the \emph{tropicalization of $h$}, denoted $\text{trop}(h)$, as follows. 
For every point $w$ in the image of $X\setminus\{\text{zeros and poles of $h$}\}$ under tropicalization, lift that point to $p\in X$, and define $$\text{trop}(h)(w)=\text{val}(h( p)).$$ Extend this function to all of $\text{Trop}(X)$ by continuity. \end{ndefn} Unfortunately this is not quite well-defined, because $\text{val}(h( p))$ depends on which lift $p$ of $w$ we choose. However, as suggested to the author by Matt Baker, this definition can be made rigorous if at least one of the tropicalizations is suitably faithful in a Berkovich sense. Let $h$ be a rational function on $X$, and assume that there is a canonical section $s$ to the map $X^{an}\rightarrow \text{Trop}(X)$, where $X^{an}$ is the analytification of $X$. For $w\in \text{Trop}(X)$, define $$\text{trop}(h)(w)=\log|h|_{s(w)},$$ where $|\cdot|_{s(w)}$ is the seminorm corresponding to the point $s(w)$ in $X^{an}$. This rational function has the desired properties. \begin{remark}\label{remark_suitably_faithful} In \cite{BPR} one can find conditions to guarantee that there exists a canonical section $s$ to the map $X^{an}\rightarrow \text{Trop}(X)$. For instance, if $\text{Trop}(X)$ is smooth in the sense that it comes from a unimodular triangulation of its Newton polygon, such a section will exist. \end{remark} \section{Main Result and a Conjecture}\label{section:main_result} We are ready to prove Theorem \ref{main_theorem}. \begin{proof}[Proof of Theorem \ref{main_theorem}] Let $f$ and $g$ be the defining equations of $X$ and $Y$, respectively. Let $g'\in K[x,y]$ have the same tropical polynomial as $g$, and let $Y'$ be the curve defined by $g'$. We have that $\text{Trop}(Y)=\text{Trop}(Y')$, and for generic $g'$ we have that $\text{Trop}(X\cap Y')$ is the stable tropical intersection of $\text{Trop}(X)$ and $\text{Trop}(Y)$. Recall that $p_1,\ldots,p_n$ denote the intersection points of $X$ and $Y$, possibly with repeats. 
Let $p'_1,\ldots,p'_m$ denote the intersection points of $X$ and $Y'$, with duplicates in the case of multiplicity. Note that $m$ and $n$ will be equal unless $X$ and $Y$ have intersection points outside of $(K^*)^2$; this is discussed in Remark \ref{intersections_at_infinity}. Consider the rational function $h=\frac{g}{g'}$ on $X$, which has zeros at the intersection points of $X$ and $Y$ and poles at the intersection points of $X$ and $Y'$. Since $\text{Trop}(X)$ is smooth, by Remark \ref{remark_suitably_faithful} we may tropicalize $h$. This gives a tropical rational function $\text{trop}(h)$ on $\text{Trop}(X)$ with divisor $$(\text{trop}(h))=\text{trop}(p_1)+\ldots+\text{trop}(p_n)-\text{trop}(p'_1)-\ldots-\text{trop}(p'_m)=D-E.$$ We claim that $\text{trop}(h)$ is the desired $h^{\text{trop}}$ from the statement of the theorem. All that remains to show is that $\text{supp}(\text{trop}(h))\subset \text{Trop}(X)\cap\text{Trop}(Y)$. If $w\in \text{Trop}(X)\setminus \text{Trop}(Y)$, then $|g|_{s(w)}=|g'|_{s(w)}$ because $g$ and $g'$ both have bend locus $\text{Trop}(Y)$, and $w$ is away from $\text{Trop}(Y)$. This means that $\text{trop}(h)(w)=\log|h|_{s(w)}=\log|g|_{s(w)}-\log|g'|_{s(w)}=0$ on $\text{Trop}(X)\setminus \text{Trop}(Y)$. This completes the proof. \end{proof} \begin{remark} The argument and result will hold even if $\text{Trop}(X)$ is not smooth as long as there exists a section $s$ to $X^{an}\rightarrow \text{Trop}(X)$. \end{remark} \begin{remark} Since we have our result in terms of linear equivalence, we get as a corollary that the configurations of points differ by a sequence of chip firing moves by \cite{HMY}. \end{remark} \begin{remark}\label{intersections_at_infinity} If $\text{Trop}(X)\cap\text{Trop}(Y)$ is unbounded (for instance, if $\text{Trop}(X)=\text{Trop}(Y)$), then it is possible to have zeros of the rational function ``at infinity.'' This causes no difficulty, and can be made sense of using a compactifying fan as in \cite[\S3]{OR}. 
See Example \ref{doubleline} for an instance of this phenomenon. \end{remark} Our theorem has placed a constraint on the configurations of intersection points mapping into tropicalizations. The following conjecture posits that essentially all these configurations are attainable. \begin{conjecture}\label{main_conjecture} Assume we are given $\text{Trop}(X)$ and $\text{Trop}(Y)$ and a tropical rational function $h^{\text{trop}}$ on $\text{Trop}(X)$ with simple poles precisely at the stable tropical intersection points and zeros in some configuration (possibly canceling some of the poles) with coordinates in the value group ($\mathbb{Q}$ for $\ensuremath{\mathbb{C}}\{\{t\}\}$), such that $\text{supp}(h^{\text{trop}})\subset \text{Trop}(X)\cap\text{Trop}(Y)$. Then it is possible to find $X$ and $Y$ with the given tropicalizations such that $\text{trop}(p_1),\ldots,\text{trop}(p_n)$ are the zeros of $h^{\text{trop}}$. \end{conjecture} \bpf[Proof Strategy] We will consider the space of all configurations of zeros of rational functions on $\text{Trop}(X)\cap\text{Trop}(Y)$ satisfying the given properties. This will form a polyhedral complex. \bi \item First, we will prove that we can achieve the configurations corresponding to the vertices of this complex. \item Next, let $E$ be an edge connecting $V$ and $V'$, where the configuration given by $V$ is achieved by $X$ and $Y$ and the configuration given by $V'$ is achieved by $X'$ and $Y'$. We will prove that we can achieve any configuration along the edge by somehow deforming $(X,Y)$ to $(X',Y')$. This will show that all points on edges of the complex correspond to achievable configurations. \item We will continue this process (vertices give edges, edges give faces, etc.) to show that all points in the complex correspond to achievable configurations. \ei \epf For an illustration of this process, see Example \ref{example_cc} and Figure \ref{mscc}. 
\section{Tropical Modifications}\label{section:modifications} In this section we outline an alternate proof of Theorem \ref{main_theorem} using tropical modifications. See \cite[\textsection 4]{BL} for background on this subject. \begin{proof}[Outline of proof of Theorem \ref{main_theorem} using tropical modifications] Let $X$, $Y$, $f$, $g$, $g'$, $D$, and $E$ be as in the proof from Section \ref{section:main_result}. Let $g_{\text{trop}}$ and $g'_{\text{trop}}$ be the tropical polynomials defined by $g$ and $g'$, respectively. Let $g(X)\subset (K^*)^2\times K$ be the curve that is the closure of $\{(p,g(p))\,|\,p\in X\}$. Its tropicalization $\text{Trop}(g(X))$ is contained in the tropical hypersurface in $\mathbb{R}^3$ determined by the polynomial $z=g_{\text{trop}}$, and projects onto $\text{Trop}(X)$. Call this projection $\pi$. Note that outside of $\text{Trop}(Y)$, $\pi$ is one-to-one, and $\text{Trop}(g(X))$ agrees with $\text{Trop}(g'(X))$. By \cite[Lemma 4.4]{BL}, the infinite vertical rays in $\pi^{-1}(\text{Trop}(X)\cap\text{Trop}(Y))$ correspond to the intersection points of $X$ and $Y$, and so lie above the support of the divisor $D$ on $\text{Trop}(X)$. Delete the vertical rays from $\pi^{-1}(\text{Trop}(X)\cap\text{Trop}(Y))$, and decompose the remaining line segments into one or more layers, where each layer gives the graph of a piecewise linear function on $\text{Trop}(X)\cap\text{Trop}(Y)$. (If deleting the vertical rays makes $\pi$ a bijection, there will be only one layer.) Call these piecewise linear functions $\ell_1,\ldots,\ell_k$. The tropical rational function $$h^{\text{trop}}=\sum_{i=1}^k(\ell_i-g'_{\text{trop}}) $$ has value $0$ outside of $\text{Trop}(X)\cap\text{Trop}(Y)$ because of the agreement of $\text{Trop}(g(X))$ and $\text{Trop}(g'(X))$, and has divisor $D-E$. \end{proof} This argument gives us a slightly stronger version of Theorem \ref{main_theorem}, in that it does not require the assumption that $\text{Trop}(X)$ is smooth. 
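The examples in the next section repeatedly read off valuations of intersection coordinates from the Newton polygon of a resultant: each segment of the lower hull of the points $(i,\text{val}(a_i))$ with slope $m$ and horizontal length $\ell$ accounts for $\ell$ roots of valuation $-m$. This standard computation can be sketched as follows (the helper functions are our own, not taken from any cited source):

```python
from fractions import Fraction

def lower_hull(pts):
    """Lower convex hull of points given in order of increasing x."""
    hull = []
    for p in pts:
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = hull[-2], hull[-1]
            # pop hull[-1] if it lies on or above the segment hull[-2] -> p
            if (x2 - x1) * (p[1] - y1) <= (p[0] - x1) * (y2 - y1):
                hull.pop()
            else:
                break
        hull.append(p)
    return hull

def root_valuations(coeff_vals):
    """Valuations of the roots of sum_i a_i x^i, given val(a_i) for each i:
    a lower-hull segment of slope m and width w yields w roots of valuation -m."""
    hull = lower_hull(list(enumerate(coeff_vals)))
    vals = []
    for (x1, y1), (x2, y2) in zip(hull, hull[1:]):
        slope = Fraction(y2 - y1) / (x2 - x1)
        vals += [-slope] * (x2 - x1)
    return sorted(vals)

# A quadratic whose coefficient valuations are 1, r, 0 (constant term first)
# has root valuations {r, 1-r} when r <= 1/2, and {1/2, 1/2} when r >= 1/2.
r = Fraction(1, 4)
assert root_valuations([1, r, 0]) == [r, 1 - r]
r = Fraction(3, 4)
assert root_valuations([1, r, 0]) == [Fraction(1, 2), Fraction(1, 2)]
```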
\section{Evidence for Conjecture \ref{main_conjecture}}\label{section:examples} In these examples we consider curves $X$ and $Y$ over the field of Puiseux series $\mathbb{C}\{\{t\}\}$. \begin{example}\label{motivating_example_detailed} Let $f$ and $g$ be as in Example \ref{motivating_example}. Treating them as elements of $(K[x])[y]$, their resultant is $$-c_2c_5x^2 +(c_3c_4 -c_1c_5- tc_2c_6)x - tc_1c_6.$$ The two roots of this quadratic polynomial in $x$, which are the $x$-coordinates of the two points in $X\cap Y$, have valuations equal to the slopes of the Newton polygon. Generically the valuations of the coefficients are $0$, $0$, and $1$, giving slopes $0$ and $1$. For any rational number $r>0$ we may choose $c_1=1-t^r-t$ and all other $c_i=1$, giving $\text{val}(c_3c_4 -c_1c_5- tc_2c_6)=\text{val}(t^r)=r$. If $r\leq\frac{1}{2}$ this gives slopes of $r$ and $1-r$, and if $r\geq\frac{1}{2}$ this gives two slopes of $\frac{1}{2}$. These cases are illustrated in Figure \ref{figure:line_conic_configurations} and correspond to rational functions illustrated in Figure \ref{figure:line_conic_rational_functions}. This means all possible images of intersections allowed by Theorem \ref{main_theorem} with rational coordinates are achievable, so Conjecture \ref{main_conjecture} holds for this example. \end{example} \begin{example}\label{example_cc} Consider conic curves $X$ and $Y$ given by the polynomials $f(x,y)=c_1x+c_2y+c_3xy$ and $g(x,y)=c_4x+c_5y+c_6xy+t(c_7x^2+c_8y^2+c_9)$, where $\text{val}(c_i)=0$ for all $i$. The tropicalizations of $X$ and $Y$ are shown in Figure \ref{cc}, and intersect in three line segments joined at a point. \begin{figure} \caption{$\text{Trop} \label{cc} \end{figure} The stable tropical intersection consists of four points: $(-1,0)$, $(0,-1)$, $(1,1)$, and $(0,0)$. The possible images of $\text{Trop}(X\cap Y)$ must be linearly equivalent to these via a rational function equal to $0$ on the three exterior points. 
This gives us intersection configurations of three possible types: \bi \item[(i)]$\{(-(p-r),0),(0,-p), (p,p), (-r,0)\}$ where $0\leq r\leq p/2$; \item[(ii)] $\{(-p,0),(0,-(p-r)), (p,p), (0,-r)\}$ where $0\leq r\leq p/2$; and \item[(iii)] $\{(-p,0),(0,-p), (p-r,p-r), (r,r)\}$ where $0\leq r\leq p/2$. \ei To achieve a type (i) configuration, set $f(x,y)=x+y+xy$ and $g(x,y)=(1+2t^{1-p+r})x+(1+t^{1-p})y+xy+t(x^2+y^2+1)$; if $r>0$, the $2$ can be omitted from the coefficient of $x$ in $g$. The Newton polygons of two polynomials, namely the resultants of $f$ and $g$ with respect to $x$ and with respect to $y$, show that $\text{Trop}(X\cap Y)=\{(-(p-r),0),(0,-p), (p,p), (-r,0)\}$. Type (ii) and (iii) are achieved similarly, so Conjecture \ref{main_conjecture} holds for this example. For instance, if $f(x,y)=x+y+xy$ and $g(x,y)=(1+t^{1/2})x+(1+t^{1/3})y+xy+t(x^2+y^2+1)$, then $\text{Trop}(X\cap Y)=\{(2/3,2/3),(0,-2/3),(-1/2,0),(-1/6,0)\}$. The formal sum of these points is linearly equivalent to the stable intersection divisor, as illustrated by the rational function in Figure \ref{trophgraph}. This is the tropicalization of the rational function $h(x,y)=\frac{(1+t^{1/2})x+(1+t^{1/3})y+xy+t(x^2+y^2+1)}{2x+4y+xy+t(x^2+y^2+1)}$, where $g'(x,y):=2x+4y+xy+t(x^2+y^2+1)$ was chosen so that $\text{Trop}(X)\cap\text{Trop}(V(g'))$ is the stable tropical intersection of $\text{Trop}(X)$ and $\text{Trop}(Y)$. \begin{figure} \caption{The graph of $\text{trop} \label{trophgraph} \end{figure} \begin{figure} \caption{The moduli space $M$ of intersection configurations, with six examples.} \label{mscc} \end{figure} We can also consider this example in view of the outlined method of proof for Conjecture \ref{main_conjecture}. Considering each intersection configuration as a point in $\ensuremath{\mathbb{R}}^8$ (natural for four points in $\ensuremath{\mathbb{R}}^2$), we obtain a moduli space $M$ for the possible tropical images of $X\cap Y$. 
The structure of this space is related to the notion of tropical convexity, as discussed in \cite{Lu}. As illustrated in Figure \ref{mscc}, $M$ consists of three triangles glued along one edge. The hope is that if vertices like $A$ and $C$ can be achieved, then it is possible to slide along the edge and achieve points like $D$. For instance, if we set $$f_A(x,y)=f_C(x,y)=f_{{AC},r}=x+y+xy$$ $$g_A=(1+t^0)x+4y+xy+t(x^2+y^2+1)$$ $$g_C=(1+t^{1/2})x+4y+xy+t(x^2+y^2+1)$$ $$g_{{AC},r}=(1+t^{r})x+4y+xy+t(x^2+y^2+1),$$ then $f_A$ and $g_A$ give configuration $A$, $f_C$ and $g_C$ give configuration $C$, and $f_{AC,r}$ and $g_{AC,r}$ give all configurations along the edge $AC$ as $r$ varies from $0$ to $\frac{1}{2}$. \end{example} \begin{example}\label{doubleline} Let $X$ and $Y$ be distinct lines defined by $f(x,y)=c_1+c_2x+c_3y$ and $g(x,y)=c_6+c_4x+c_5y$ with $\text{val}(c_i)=0$ for all $i$. These lines tropicalize to the same tropical line centered at the origin, with stable tropical intersection equal to the single point $(0,0)$. Any point on $\text{Trop}(X)=\text{Trop}(X)\cap\text{Trop}(Y)$ is linearly equivalent to $(0,0)$ via a tropical rational function on $\text{Trop}(X)$, so Theorem \ref{main_theorem} puts no restrictions on the image of $p=X\cap Y$ under tropicalization. In keeping with Conjecture \ref{main_conjecture}, all possibilities can be achieved: \bi \item[(i)] For $\text{trop}(p)=(r,0)$, let $f(x,y)=1+x+y$, $g(x,y)=(1+t^r)+2x+y$. \item[(ii)] For $\text{trop}(p)=(0,r)$, let $f(x,y)=1+x+y$, $g(x,y)=(1+t^r)+x+2y$. \item[(iii)] For $\text{trop}(p)=(-r,-r)$, let $f(x,y)=1+x+y$, $g(x,y)=1+2x+(2+t^r)y$. \ei The point $(0,0)$ is also linearly equivalent to points at infinity, as witnessed by rational functions with constant slope $1$ on an entire infinite ray. Mapping $p$ ``to infinity'' means that $X$ and $Y$ cannot intersect in $(K^*)^2$, so we can choose equations for $X$ and $Y$ that give $p$ a coordinate equal to $0$, such as $x+y+1=0$ and $x+2y+1=0$. 
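Such lifts can be checked directly with exact arithmetic on sparse Puiseux polynomials. The sketch below (our own illustrative representation) verifies that the lines $1+x+y$ and $(1+t^r)+2x+y$, both with unit-valuation coefficients and the same tropicalization, meet in a single torus point tropicalizing to $(r,0)$, here with $r=1/2$.

```python
from fractions import Fraction

def add(p, q):
    """Sum of sparse Puiseux polynomials stored as {exponent: coefficient}."""
    out = dict(p)
    for e, c in q.items():
        out[e] = out.get(e, 0) + c
    return {e: c for e, c in out.items() if c != 0}

def scale(k, p):
    return {e: k * c for e, c in p.items()}

def val(p):
    return min(p)

r = Fraction(1, 2)
one = {Fraction(0): 1}

# Candidate intersection point of the lines 1+x+y and (1+t^r)+2x+y:
x = {r: -1}                   # x = -t^r       (valuation r)
y = {Fraction(0): -1, r: 1}   # y = -1 + t^r   (valuation 0)

f_at_p = add(add(one, x), y)                               # 1 + x + y
g_at_p = add(add({Fraction(0): 1, r: 1}, scale(2, x)), y)  # (1+t^r) + 2x + y
assert f_at_p == {} and g_at_p == {}   # the point lies on both lines
assert (val(x), val(y)) == (r, 0)      # trop(p) = (r, 0)
```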
\end{example} \begin{example} Let $X$ and $Y$ be the curves defined by $$f(x,y)=xy+t(c_1x+c_2y^2+c_3x^2y)$$ $$g(x,y)=xy+t(d_1x+d_2y^2+d_3x^2y) $$ respectively, where $\text{val}(c_i)=\text{val}(d_i)=0$ for all $i$. This means $\text{Trop}(X)$ and $\text{Trop}(Y)$ are the same, and are as pictured in Figure \ref{figure:cubic_cubic}. \begin{figure} \caption{ $\text{Trop} \label{figure:cubic_cubic} \end{figure} The resultant of $f$ and $g$ with respect to the variable $y$ is \begin{align*} &t^4( c_2^2d_1^2 - 2c_1c_2d_1d_2 + c_1^2d_2^2)x^2+ t^2(c_1c_2 - c_2d_1 - c_1d_2 + d_1d_2)x^3 \\&+ t^3(-c_2c_3d_1 - c_1c_3d_2 + 2c_3d_1d_2 + 2c_1c_2d_3 - c_2d_1d_3 - c_1d_2d_3)x^4 \\&+t^4 (c_3^2d_1d_2 - c_2c_3d_1d_3 - c_1c_3d_2d_3 + c_1c_2d_3^2)x^5, \end{align*} and the resultant of $f$ and $g$ with respect to the variable $x$ is \begin{align*} &t^4(c_2c_3d_1^2 - c_1c_3d_1d_2 - c_1c_2d_1d_3 + c_1^2d_2d_3)y^3 \\&+t^3(2c_2c_3d_1 - c_1c_3d_2 - c_3d_1d_2 - c_1c_2d_3 - c_2d_1d_3 + 2c_1d_2d_3)y^4 \\&+t^2( c_2c_3 - c_3d_2 - c_2d_3 + d_2d_3)y^5 +t^4( c_3^2d_2^2 - 2c_2c_3d_2d_3 + c_2^2d_3^2)y^6. \end{align*} The stable tropical intersection consists of the three vertices of the triangle. Let us consider possible configurations of the three intersection points that have all three intersection points lying on the triangle, rather than on the unbounded rays. These are the configurations of zeros of rational functions with poles precisely at the three vertices; let $h^{\text{trop}}$ be such a function. Label the vertices clockwise starting with $(-1,1)$ as $v_1$, $v_2$, $v_3$. Starting from $v_1$ and going clockwise, label the zeros of $h^{\text{trop}}$ as $w_1$, $w_2$, $w_3$. Let $\delta_i$ denote the signed lattice distance between $v_i$ and $w_i$, with counterclockwise distance negative. 
Then a necessary condition for the $w_i$'s to be the zeros of $h^{\text{trop}}$ is $\delta_1+\delta_2+\delta_3=0$; and in fact this condition is sufficient to guarantee the existence of such an $h^{\text{trop}}$. It follows that the $w_i$'s cannot be in all different or all the same line segment of the triangle, as all different would have $\delta_1+\delta_2+\delta_3>0$ and all the same would have $\delta_1+\delta_2+\delta_3\neq0$. Hence we need only show that each configuration with exactly two $w_i$'s on the same edge satisfying $\delta_1+\delta_2+\delta_3=0$ is achievable. There are six cases to handle, since there are three choices for the edge with a pair of points and then two choices for the edge with the remaining point. We will focus on the case where $w_1$ and $w_2$ are on the edge connecting $v_1$ and $v_2$, and $w_3$ is on the edge connecting $v_2$ and $v_3$, as shown in Figure \ref{figure:triangle}. Let $\delta_1=r$ and $\delta_2=-s$, where $r,s>0$, and $2-s\geq -1+r$. It follows that $\delta_3=-(r-s)$, and that $r>s$ by the position of $w_3$. \begin{figure} \caption{The desired configuration of intersection points, where $\delta_1=r>0$, $\delta_2=-s<0$, and $\delta_3=-(r-s)<0$.} \label{figure:triangle} \end{figure} To achieve the configuration specified by $r$ and $s$, set $$c_1=3+t^r,c_2=3,c_3=1,d_1=3, d_2=3+2t^{r-s},d_3=2. $$ The valuations of the coefficients of the resultant polynomial in $x$ are $4+2(r-s)$ for $x^2$, $2+2r-s$ for $x^3$, $3+r-s$ for $x^4$, and $4$ for $x^5$. It follows that the valuations of the $x$-coordinates are $2-s$, $-1+r$, and $-1-s+r$. When coupled with rational function restrictions, this implies that the intersection points of $X$ and $Y$ tropicalize to $(-1+r,1)$, $(2-s,1)$, and $(-1-s+r, -2-s+r)$, which are indeed the points $w_1$, $w_2$, and $w_3$ we desired. 
The five other cases with all three intersection points in the triangle are handled similarly, and the cases with one or more intersection points on an infinite ray are even simpler. \end{example} These examples provide not only a helpful check of Theorem \ref{main_theorem}, but also evidence that all possible intersection configurations can in fact be achieved. Future work towards proving this might be of a Berkovich flavor, as in Sections \ref{section:background} and \ref{section:main_result}, or may have more to do with tropical modifications, as presented in Section \ref{section:modifications}. Regardless of the approach, future investigations should not only look towards proving Conjecture \ref{main_conjecture}, but also towards algorithmically lifting tropical intersection configurations to curves yielding them. \end{document}
\begin{document} \title{Tractability of multivariate analytic problems} \author{Peter Kritzer\thanks{P.~Kritzer gratefully acknowledges the support of the Austrian Science Fund, Project P23389-N18 and Project F5506-N26, which is part of the Special Research Program ``Quasi-Monte Carlo Methods: Theory and Applications''.}, Friedrich Pillichshammer\thanks{F.~Pillichshammer is supported by the Austrian Science Fund (FWF) Project F5509-N26, which is part of the Special Research Program ``Quasi-Monte Carlo Methods: Theory and Applications''.}, Henryk Wo\'zniakowski\thanks{H.~Wo\'zniakowski is supported in part by the National Science Foundation.}} \maketitle \begin{abstract} In the theory of tractability of multivariate problems one usually studies problems with finite smoothness. Then we want to know which $s$-variate problems can be approximated to within $\varepsilon$ by using, say, polynomially many in $s$ and $\varepsilon^{-1}$ function values or arbitrary linear functionals. There is a recent stream of work for multivariate analytic problems for which we want to answer the usual tractability questions with $\varepsilon^{-1}$ replaced by $1+\log \varepsilon^{-1}$. In this vein of research, multivariate integration and approximation have been studied over Korobov spaces with exponentially fast decaying Fourier coefficients. This is work of J. Dick, G. Larcher, and the authors. There is a natural need to analyze more general analytic problems defined over more general spaces and obtain tractability results in terms of $s$ and $1+\log \varepsilon^{-1}$. The goal of this paper is to survey the existing results, present some new results, and propose further questions for the study of tractability of multivariate analytic problems. 
\end{abstract} \noindent\textbf{Keywords:} Tractability, Korobov space, numerical integration, $L_2$-approximation.\\ \noindent\textbf{2010 MSC:} 65D15, 65D30, 65C05, 11K45.\\ \section{Introduction} In this paper we discuss algorithms for multivariate integration or approximation of $s$-variate functions defined on the unit cube $[0,1]^s$. These problems have been studied in a large number of papers from many different perspectives. The focus of this article is to discuss algorithms for high-dimensional problems defined for functions from certain Hilbert spaces. There exist many results for such algorithms, and much progress has been made on this subject over the past decades. It is the goal of this review to focus on a recent vein of research that deals with function spaces containing analytic periodic functions with exponentially fast decaying Fourier coefficients. We present necessary and sufficient conditions that allow us to obtain exponential error convergence and various notions of tractability. We consider algorithms that use finitely many information evaluations. For multivariate integration, algorithms use $n$ information evaluations from the class $\Lambda^{\rm{std}}$ of standard information, which consists of function evaluations only. For multivariate approximation in the $L_2$-norm, algorithms use $n$ information evaluations either from the class $\Lambda^{\rm{all}}$ of all continuous linear functionals or from the class $\Lambda^{\rm{std}}$. Since we approximate functions from the unit ball of the corresponding space, without loss of generality we restrict ourselves to linear algorithms that use nonadaptive information evaluations. In all cases, we measure the error in the worst-case setting. For large~$s$, it is essential to control not only how the error of an algorithm depends on $n$, but also how it depends on $s$.
To this end, we consider the information complexity, $n({\varepsilon},s)$, which is the minimal number $n$ for which there exists an algorithm using $n$ information evaluations with an error of at most ${\varepsilon}$ for the $s$-variate functions. In all cases considered in this survey, the information complexity is proportional to the minimal cost of computing an ${\varepsilon}$-approximation since linear algorithms are optimal and their implementation cost is proportional to $n({\varepsilon},s)$. We would like to control how $n({\varepsilon},s)$ depends on ${\varepsilon}^{-1}$ and $s$. This is the subject of tractability. In the standard theory of tractability, see~\cite{NW08,NW10,NW12}, \emph{weak tractability} means that $n({\varepsilon},s)$ is \emph{not} exponentially dependent on ${\varepsilon}^{-1}$ and $s$, \emph{polynomial tractability} means that $n({\varepsilon},s)$ is polynomially bounded in ${\varepsilon}^{-1}$ and $s$, and \emph{strong polynomial tractability} means that $n({\varepsilon},s)$ is polynomially bounded in ${\varepsilon}^{-1}$ independently of $s$. Typically, $n({\varepsilon},s)$ is polynomially dependent on ${\varepsilon}^{-1}$ and $s$ for weighted classes of smooth functions. The notion of weighted function classes means that the dependence of functions on successive variables and groups of variables is moderated by certain weights. For sufficiently fast decaying weights, the information complexity depends at most polynomially on ${\varepsilon}^{-1}$ and $s$; hence we obtain \emph{polynomial} tractability, or even \emph{strong polynomial} tractability. These notions of tractability are suitable for problems with finite smoothness, that is, when functions from the problem space are differentiable only finitely many times.
Then the minimal errors $e(n,s)$ of algorithms that use $n$ information evaluations typically enjoy polynomial convergence, i.e., $e(n,s)=\mathcal{O}(n^{-p})$, where the factor in the big~$\mathcal{O}$ notation as well as a positive $p$ may depend on $s$. The case of analytic or infinitely many times differentiable functions is also of interest. For such classes of functions we would like to replace polynomial convergence by \textit{exponential convergence}, and study similar notions of tractability in terms of $(1+\log\,{\varepsilon}^{-1},s)$ instead of $({\varepsilon}^{-1},s)$. By exponential convergence we mean that $e(n,s)=\mathcal{O}(q^{[\mathcal{O}(n)]^p})$ with $q\in(0,1)$, where the factors in the big $\mathcal{O}$ notation as well as a positive $p$ may depend on $s$. Exponential convergence with various notions of tractability was studied in the papers~\cite{DLPW11} and~\cite{KPW12} for multivariate integration in weighted Korobov spaces with exponentially fast decaying Fourier coefficients. In the paper \cite{DKPW13}, multivariate $L_2$-approximation in the worst-case setting for the same class of functions was considered. In this article, we give an overview of recent results on exponential convergence with different notions of tractability such as weak, polynomial and strong polynomial tractability in terms of $1+\log\,{\varepsilon}^{-1}$ and $s$. We also present a few new results and compare conditions which are needed for the standard and new tractability notions. In Section~\ref{sectractability}, we give a short overview of $s$-variate problems, describe how we measure errors, and give precise definitions of various notions of tractability. In Section~\ref{secKor}, we introduce the function class under consideration here, which is a special example of a reproducing kernel Hilbert space that was also studied in \cite{DKPW13, DLPW11, KPW12}. 
In Sections~\ref{secint} and~\ref{secapp}, we provide details on the particular problems of $s$-variate numerical integration and $L_2$-approximation by linear algorithms. We summarize and give an outlook on some related open questions in Section~\ref{secconcl}. \section{Tractability}\label{sectractability} We consider Hilbert spaces $H_s$ of $s$-variate functions defined on $[0,1]^s$, and we assume that there is a family of continuous linear operators $S_s: H_s\rightarrow G_s$ for $s\in\mathbb{N}$, where $G_s$ is a normed space. Later, we will introduce a special choice of a Hilbert space $H_s$ (cf. Section~\ref{secKor}) and study two particular examples of $s$-variate problems, namely: \begin{itemize} \item Numerical integration of functions $f\in H_s$, see Section~\ref{secint}. In this case, we have $S_s (f)=\int_{[0,1]^s} f(\boldsymbol{x})\,\mathrm{d}\boldsymbol{x}$ and $G_s=\mathbb{R}$. \item $L_2$-approximation of functions $f\in H_s$, see Section~\ref{secapp}. In this case, we have $S_s(f)=f$ and $G_s=L_2 ([0,1]^s)$. \end{itemize} As already mentioned, without loss of generality, we approximate $S_s$ by a linear algorithm $A_{n,s}$ using $n$ information evaluations which are given by linear functionals from the class $\Lambda\in\{\Lambda^{\rm{all}},\Lambda^{\rm{std}}\}$. That is, $$ A_{n,s}(f)=\sum_{j=1}^nL_j(f)\,a_j \ \ \ \ \ \mbox{for all}\ \ \ \ \ f\in H_s, $$ where $L_j\in \Lambda$ and $a_j\in G_s$ for all $j=1,2,\dots,n$. For $\Lambda=\Lambda^{\rm{all}}$ we have $L_j\in H_s^*$, whereas for $\Lambda=\Lambda^{\rm{std}}$ we have $L_j(f)=f(x_j)$ for all $f\in H_s$ and for some $x_j\in[0,1]^s$. For $\Lambda^{\rm{std}}$, we choose $H_s$ as a reproducing kernel Hilbert space so that $\Lambda^{\rm{std}}\subset\Lambda^{\rm{all}}$.
We measure the error of an algorithm $A_{n,s}$ in terms of the \textit{worst-case error}, which is defined as \[ e(H_s,A_{n,s}):=\sup_{f \in H_s \atop \|f\|_{H_s} \le 1} \norm{S_s(f)-A_{n,s}(f)}_{G_s}, \] where $\norm{\cdot}_{H_s}$ denotes the norm in $H_s$, and $\norm{\cdot}_{G_s}$ denotes the norm in $G_s$. The \textit{$n$th minimal (worst-case) error} is given by $$ e(n,s):=\operatornamewithlimits{inf\phantom{p}}_{A_{n,s}} e(H_s, A_{n,s}), $$ where the infimum is taken over all admissible algorithms $A_{n,s}$. For $n=0$, we consider algorithms that do not use information evaluations and therefore we use $A_{0,s}\equiv 0$. The error of $A_{0,s}$ is called the \textit{initial (worst-case) error} and is given by $$ e(0,s):=\sup_{f \in H_s \atop \|f\|_{H_s} \le 1} \norm{S_s(f)}_{G_s}=\norm{S_s}. $$ When studying algorithms $A_{n,s}$, we want to control not only how their errors depend on $n$, but also how they depend on the dimension $s$. This is of particular importance for high-dimensional problems. To this end, we define, for ${\varepsilon}\in(0,1)$ and $s\in\mathbb{N}$, the {\it information complexity} by \[ n(\varepsilon,s):= \min\left\{n\,:\,e(n,s)\le\varepsilon \right\} \] as the minimal number of information evaluations needed to obtain an ${\varepsilon}$-approximation to $S_s$. In this case, we speak of the \textit{absolute error criterion}. Alternatively, we can also define the information complexity as \[ n(\varepsilon,s):= \min\left\{n\,:\,e(n,s)\le\varepsilon e(0,s)\right\}, \] i.e., as the minimal number of information evaluations needed to reduce the initial error by a factor of ${\varepsilon}$. In this case we speak of the \textit{normalized error criterion}. The examples considered in this paper have the convenient property that the initial errors are one, and the absolute and normalized error criteria coincide.
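As a small numerical aside, the growth of the information complexity under polynomial versus exponential error decay can be compared directly. The error bounds below are hypothetical toy choices (they are not derived from any concrete problem $S_s$); with initial error one, the absolute and normalized criteria coincide, as noted above.

```python
import math

# Toy error bounds (illustrative only): polynomial decay e_poly(n) = n^(-1)
# versus exponential decay e_exp(n) = (1/2)^n.  The information complexity
# n(eps) is the smallest n with e(n) <= eps.
eps = 2.0 ** -20  # accuracy demand, exactly representable in binary

n_poly = math.ceil(1.0 / eps)            # smallest n with n^(-1) <= eps
n_exp = math.ceil(math.log2(1.0 / eps))  # smallest n with (1/2)^n <= eps

# Exponential convergence turns the eps-dependence into a logarithm:
print(n_poly, n_exp)  # 1048576 versus 20
```

Replacing ${\varepsilon}^{-1}$ by $1+\log\,{\varepsilon}^{-1}$ in the tractability notions discussed in this section mirrors exactly this change of scale.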
For problems for which the initial errors are not one, the results for the absolute and normalized error criteria may be quite different; we refer the interested reader to the monographs \cite{NW08, NW10, NW12} for further details. \vskip 1pc The subject of tractability deals with the question of how the information complexity depends on ${\varepsilon}^{-1}$ and $s$. Roughly speaking, tractability means that the information complexity lacks a certain disadvantageous dependence on ${\varepsilon}^{-1}$ and $s$. The standard notions of tractability were introduced in such a way that positive results were possible for problems with finite smoothness. In this case, one is usually interested in when $n({\varepsilon},s)$ depends at most polynomially on ${\varepsilon}^{-1}$ and $s$. The following notions have been frequently studied. We say that we have: \begin{itemize} \item[(a)] The \emph{curse of dimensionality} if there exist positive $c,\tau$ and ${\varepsilon}_0$ such that $$ n({\varepsilon},s)\ge c\,(1+\tau)^s \ \ \ \ \ \mbox{for all\, ${\varepsilon}\le {\varepsilon}_0$\, and infinitely many $s$}. $$ \item[(b)] \emph{Weak Tractability (WT)} if $$ \lim_{s+{\varepsilon}^{-1}\to\infty}\frac{\log\ n(\varepsilon,s)}{s+{\varepsilon}^{-1}}=0\ \ \ \ \ \mbox{with}\ \ \ \log\,0=0\ \ \mbox{by convention}. $$ \item[(c)] \emph{Polynomial Tractability (PT)} if there exist non-negative numbers $c,\tau_1,\tau_2$ such that $$ n(\varepsilon,s)\le c\,s^{\,\tau_1}\,({\varepsilon}^{-1})^{\,\tau_2}\ \ \ \ \ \mbox{for all}\ \ \ \ s\in\mathbb{N}, \ {\varepsilon}\in(0,1). $$ \item[(d)] \emph{Strong Polynomial Tractability (SPT)} if there exist non-negative numbers $c$ and $\tau$ such that $$ n(\varepsilon,s)\le c\,({\varepsilon}^{-1})^{\,\tau}\ \ \ \ \ \mbox{for all}\ \ \ \ s\in\mathbb{N}, \ {\varepsilon}\in(0,1).
$$ The exponent $\tau^*$ of strong polynomial tractability is defined as the infimum of $\tau$ for which strong polynomial tractability holds. \end{itemize} It turns out that many multivariate problems defined over standard spaces of functions suffer from the curse of dimensionality. The reason for this negative result is that for standard spaces all variables and groups of variables are equally important. If we introduce weighted spaces, in which the importance of successive variables and groups of variables is monitored by corresponding weights, we can vanquish the curse of dimensionality and obtain weak, polynomial or even strong polynomial tractability depending on the decay of the weights. Furthermore, this holds for weighted spaces with finite smoothness. We refer to \cite{NW08, NW10, NW12} for the current state of the art in this field of research. However, the particular weighted function space we are going to define in Section~\ref{secKor} is such that its elements are infinitely many times differentiable and even analytic. Therefore, it is natural to demand more of the $n$th minimal errors $e(n,s)$ and of the information complexity $n({\varepsilon}, s)$ than for those cases where we only have finite smoothness. To be more precise, we are interested in obtaining exponential or uniform exponential convergence of the minimal errors $e(n,s)$ for problems with unbounded smoothness. We now explain how these notions are defined. By exponential convergence we mean that there exist functions $q:\mathbb{N}\to(0,1)$ and $p,C:\mathbb{N}\to (0,\infty)$ such that $$ e(n,s)\le C(s)\, q(s)^{\,n^{\,p(s)}} \ \ \ \ \ \mbox{for all} \ \ \ \ \ s, n\in \mathbb{N}. $$ Obviously, the functions $q(\cdot)$ and $p(\cdot)$ are not uniquely defined. For instance, we can take an arbitrary number $q\in(0,1)$, define the function $C_1$ as $$ C_1(s)=\left(\frac{\log\,q}{\log\,q(s)}\right)^{1/p(s)}, $$ and then $$ C(s)\, q(s)^{\,n^{\,p(s)}}=C(s)\,q^{\,(n/C_1(s))^{p(s)}}.
$$ We prefer to work with the latter bound which was also considered in~\cite{DKPW13, KPW12}. We say that we achieve \emph{exponential convergence} (EXP) for $e(n,s)$ if there exist a number $q\in(0,1)$ and functions $p,C,C_1:\mathbb{N}\to (0,\infty)$ such that \begin{equation}\label{exrate} e(n,s)\le C(s)\, q^{\,(n/C_1(s))^{\,p(s)}} \ \ \ \ \ \mbox{for all} \ \ \ \ \ s, n\in \mathbb{N}. \end{equation} If \eqref{exrate} holds we would like to find the largest possible rate $p(s)$ of exponential convergence defined as $$ p^*(s)=\sup\{\,p(s)\ :\ \ p(s)\ \ \mbox{satisfies \eqref{exrate}}\,\}. $$ We say that we achieve \emph{uniform exponential convergence} (UEXP) for $e(n,s)$ if the function $p$ in \eqref{exrate} can be taken as a constant function, i.e., $p(s)=p>0$ for all $s\in\mathbb{N}$. Similarly, let $$ p^*=\sup\{\,p\ :\ \ p(s)=p>0\ \ \mbox{satisfies \eqref{exrate} for all $s\in\mathbb{N}$}\,\} $$ denote the largest rate of uniform exponential convergence. \vskip 1pc Exponential convergence implies that asymptotically, with respect to ${\varepsilon}$ tending to zero, we need $\mathcal{O}(\log^{1/p(s)} {\varepsilon}^{-1})$ information evaluations to compute an ${\varepsilon}$-approximation. However, it is not clear how long we have to wait to see this nice asymptotic behavior, especially for large $s$. This, of course, depends on how $C(s),C_1(s)$ and $p(s)$ depend on $s$, and it is therefore natural to adapt the concepts (b)--(d) of tractability to exponential error convergence. Indeed, we would like to replace ${\varepsilon}^{-1}$ by $1+\log\,{\varepsilon}^{-1}$ in the standard notions (b)--(d), which yields new versions of weak, polynomial, and strong polynomial tractability. The following new tractability versions (e), (f), and (g) were already introduced in \cite{DKPW13, DLPW11, KPW12}. We use a new kind of notation in order to be able to distinguish (b)--(d) from (e)--(g).
We say that we have: \begin{itemize} \item[(e)] \emph{Exponential Convergence-Weak Tractability (EC-WT)} if $$ \lim_{s+\log\,{\varepsilon}^{-1}\to\infty}\frac{\log\ n(\varepsilon,s)}{s+\log\,{\varepsilon}^{-1}}=0\ \ \ \ \ \mbox{with}\ \ \ \log\,0=0\ \ \mbox{by convention}. $$ \item[(f)] \emph{Exponential Convergence-Polynomial Tractability (EC-PT)} if there exist non-nega\-tive numbers $c,\tau_1,\tau_2$ such that $$ n(\varepsilon,s)\le c\,s^{\,\tau_1}\,(1+\log\,{\varepsilon}^{-1})^{\,\tau_2}\ \ \ \ \ \mbox{for all}\ \ \ \ s\in\mathbb{N}, \ {\varepsilon}\in(0,1). $$ \item[(g)] \emph{Exponential Convergence-Strong Polynomial Tractability (EC-SPT)} if there exist non-negative numbers $c$ and $\tau$ such that $$ n(\varepsilon,s)\le c\,(1+\log\,{\varepsilon}^{-1})^{\,\tau}\ \ \ \ \ \mbox{for all}\ \ \ \ s\in\mathbb{N}, \ {\varepsilon}\in(0,1). $$ The exponent $\tau^*$ of EC-SPT is defined as the infimum of $\tau$ for which EC-SPT holds. \end{itemize} Let us give some comments on these definitions. First, we remark that the use of the prefix EC (exponential convergence) in (e)--(g) is motivated by the fact that EC-PT (and therefore also EC-SPT) implies exponential convergence (cf.~Theorem \ref{thmintectract}). Also EC-WT implies that $e(n,s)$ converges to zero faster than any power of $n^{-1}$ as $n$ goes to infinity, i.e., for any $\alpha >0$ we have \begin{equation}\label{eqlimit} \lim_{n \rightarrow \infty} n^{\alpha} e(n,s)=0. \end{equation} This can be seen as follows. Let $\alpha >0$ and choose $\delta\in (0,\frac{1}{\alpha})$. For a fixed dimension~$s$, EC-WT implies the existence of an $M=M(\delta) >0$ such that for all $\varepsilon>0$ with $\log \varepsilon^{-1} >M$ we have $$ \frac{\log n(\varepsilon,s)}{\log \varepsilon^{-1}} < \delta \ \Leftrightarrow \ n(\varepsilon,s) < \varepsilon^{-\delta}. $$ This implies that for large enough $n \in \mathbb{N}$ we have $e(n,s)< n^{-1/\delta}$.
Hence, we have $n^{\alpha} e(n,s) < n^{\alpha-1/\delta} \rightarrow 0$ as $n \rightarrow \infty$. Furthermore we note, as in \cite{DKPW13, DLPW11}, that if \eqref{exrate} holds then \begin{equation}\label{exrate2} n({\varepsilon},s) \le \left\lceil C_1(s) \left(\frac{\log C(s) + \log {\varepsilon}^{-1}}{\log q^{-1}}\right)^{1/p(s)}\right\rceil \ \ \ \ \ \mbox{for all}\ \ \ s\in \mathbb{N}\ \ \mbox{and}\ \ {\varepsilon}\in (0,1). \end{equation} Moreover, if~\eqref{exrate2} holds then $$ e(n+1,s)\le C(s)\, q^{\,(n/C_1(s))^{\,p(s)}}\ \ \ \ \ \mbox{for all}\ \ \ s,n\in \mathbb{N}. $$ This means that~\eqref{exrate} and~\eqref{exrate2} are practically equivalent. Note that $1/p(s)$ determines the power of $\log\,{\varepsilon}^{-1}$ in the information complexity, whereas $\log\,q^{-1}$ affects only the multiplier of $\log^{1/p(s)}{\varepsilon}^{-1}$. From this point of view, $p(s)$ is more important than $q$. In particular, EC-WT means that we rule out the cases for which $n({\varepsilon},s)$ depends exponentially on $s$ and $\log\,{\varepsilon}^{-1}$. For instance, assume that~\eqref{exrate} holds. Then uniform exponential convergence (UEXP) implies EC-WT if $$ C(s)=\exp\left(\exp\left(o(s)\right)\right) \ \ \ \mbox{and}\ \ \ C_1(s)=\exp(o(s)) \ \ \ \ \ \mbox{as}\ \ \ \ \ s\to\infty. $$ These conditions are rather weak since $C(s)$ can be almost doubly exponential and $C_1(s)$ almost exponential in $s$. The definition of EC-PT (and EC-SPT) implies that we have uniform exponential convergence with $C(s)={\rm e}$ (where ${\rm e}$ denotes $\exp(1)$), $q=1/{\rm e}$, $C_1(s)=c\,s^{\,\tau_1}$ and $p=1/\tau_2$. Obviously, EC-SPT implies $C_1(s)=c$ and $\tau^*\le1/p^*$.
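The practical equivalence of the error bound \eqref{exrate} and the ceiling bound \eqref{exrate2} can be checked numerically. In the following sketch, $C(s)$, $C_1(s)$, $p(s)$ and $q$ are arbitrary illustrative choices (not values coming from a concrete space); we verify that the $n$ produced by the ceiling formula indeed brings the assumed error bound below ${\varepsilon}$.

```python
import math

q = 0.5                                   # illustrative constants throughout
def C(s):  return math.exp(math.sqrt(s))  # sub-exponential growth in s
def C1(s): return 2.0 * s
def p(s):  return 1.0

def err_bound(n, s):
    # assumed bound e(n, s) <= C(s) * q^((n / C1(s))^p(s)), cf. (exrate)
    return C(s) * q ** ((n / C1(s)) ** p(s))

def n_eps(eps, s):
    # ceiling formula solving err_bound(n, s) <= eps for n, cf. (exrate2)
    x = (math.log(C(s)) + math.log(1.0 / eps)) / math.log(1.0 / q)
    return math.ceil(C1(s) * x ** (1.0 / p(s)))

for s in (1, 5, 20):
    for eps in (1e-2, 1e-8):
        n = n_eps(eps, s)
        assert err_bound(n, s) <= eps * (1 + 1e-12)
```

With $p(s)=1$ the ceiling solves the bound exactly, so the assertions hold up to floating-point slack.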
If~\eqref{exrate2} holds then we have EC-PT if $p:=\inf_s p(s)>0$ and there exist non-negative numbers $A,A_1$ and $\eta,\eta_1$ such that $$ C(s)\le \exp\left(A s^{\eta}\right)\ \ \ \mbox{and}\ \ \ C_1(s)\le A_1\,s^{\eta_1}\ \ \ \ \ \mbox{for all}\ \ \ \ \ s\in\mathbb{N}. $$ The condition on $C(s)$ seems to be quite weak since even for singly exponential $C(s)$ we have EC-PT. Then $\tau_1=\eta_1+\eta/p$ and $\tau_2=1/p$. EC-SPT holds if $C(s)$ and $C_1(s)$ are uniformly bounded in $s$, and then $\tau^*\le 1/p$. We briefly mention a recent paper~\cite{PP13}, where a new notion of weak tractability is defined similarly to EC-WT. Namely, let $\kappa\ge1$. Then it is required that \begin{equation}\label{PP13} \lim_{s+\log\,{\varepsilon}^{-1}\to\infty}\frac{\log\ n(\varepsilon,s)}{s+[\log\,{\varepsilon}^{-1}]^\kappa}=0\ \ \ \ \ \mbox{with}\ \ \ \log\,0=0\ \ \mbox{by convention}. \end{equation} Obviously, for $\kappa=1$ this is the same as EC-WT. However, for $\kappa>1$ the condition on WT is relaxed. This is essential and leads to new results for linear unweighted tensor product problems. \vskip 1pc In the following sections, we are going to discuss a special choice of $H_s$ and study the problems of $s$-variate integration and $L_2$-approximation. \section{A weighted Korobov space of analytic functions}\label{secKor} In this article, we choose for the Hilbert space $H_s$ a weighted Korobov space of periodic and smooth functions, which is probably the most popular kind of space used to analyze periodic functions.
Such Korobov spaces can be defined via a reproducing kernel (for general information on reproducing kernel Hilbert spaces, see~\cite{Aron}) of the form \begin{equation}\label{defkernel} K_s(\boldsymbol{x},\boldsymbol{y})=\sum_{\boldsymbol{h} \in \mathbb{Z}^s} \rho_{\boldsymbol{h}} \exp(2 \pi \mathtt{i} \boldsymbol{h} \cdot (\boldsymbol{x}-\boldsymbol{y})) \ \ \ \mbox{for all}\ \ \ \boldsymbol{x},\boldsymbol{y}\in[0,1]^s \end{equation} with the usual dot product $$ \boldsymbol{h}\cdot(\boldsymbol{x}-\boldsymbol{y})=\sum_{j=1}^sh_j(x_j-y_j), $$ where $h_j,x_j,y_j$ are the $j$th components of the vectors $\boldsymbol{h},\boldsymbol{x},\boldsymbol{y}$, respectively. Furthermore, $\mathtt{i}=\sqrt{-1}$. The nonnegative $\rho_{\boldsymbol{h}}$ for $\boldsymbol{h} \in \mathbb{Z}^s$, which may also depend on $s$ and other parameters, are chosen such that $\sum_{\boldsymbol{h} \in \mathbb{Z}^s} \rho_{\boldsymbol{h}} < \infty$. This choice guarantees that the kernel $K_s$ is well defined, since $$ |K_s(\boldsymbol{x},\boldsymbol{y})| \le K_s(\boldsymbol{x}, \boldsymbol{x}) = \sum_{\boldsymbol{h} \in \mathbb{Z}^s} \rho_{\boldsymbol{h}} < \infty. $$ Obviously, the function $K_s$ is symmetric in $\boldsymbol{x}$ and $\boldsymbol{y}$ and it is easy to show that it is also positive definite. Therefore, $K_s(\boldsymbol{x}, \boldsymbol{y})$ is indeed a reproducing kernel. The corresponding Korobov space is denoted by $H(K_s)$. The smoothness of the functions from $H(K_s)$ is determined by the decay of the $\rho_{\boldsymbol{h}}$'s. A very well studied case in the literature is that of Korobov spaces of {\it finite} smoothness~$\alpha$.
Here $\rho_{\boldsymbol{h}}$ is of the form $$ \rho_{\boldsymbol{h}}=r_{\alpha,\boldsymbol{\gamma}}(\boldsymbol{h}), $$ where $\alpha> 1$ is a real, $\boldsymbol{\gamma}= (\gamma_1, \gamma_2, \ldots)$ is a sequence of positive reals, and for $\boldsymbol{h} = (h_1, \ldots , h_s)$ we have $$ r_{\alpha,\boldsymbol{\gamma}}(\boldsymbol{h}) =\prod_{j=1}^s r_{\alpha,\gamma_j}(h_j), $$ with $r_{\alpha,\gamma}(0) =1$ and $r_{\alpha,\gamma}(h)=\gamma |h|^{-\alpha}$ whenever $h \not= 0$. Hence the $\rho_{\boldsymbol{h}}$'s decay polynomially in the components of $\boldsymbol{h}$. The parameter $\alpha$ guarantees the existence of some partial derivatives of the functions and the so-called weights $\boldsymbol{\gamma}$ model the influence of the different components on the variation of the functions from the Korobov space. More information can be found in \cite[Appendix~A.1]{NW08}. The idea of introducing weights stems from Sloan and Wo\'{z}niakowski and was first discussed in \cite{SW98}. For multivariate integration defined over weighted Korobov spaces of smoothness $\alpha$, algorithms based on $n$ function evaluations can obtain the best possible convergence rate of order $\mathcal{O}(n^{-\alpha/2+\delta})$ for any $\delta>0$. Under certain conditions on the weights, weak, polynomial or even strong polynomial tractability in the sense of (b)--(d) can be achieved. We refer to \cite{NW08,NW10,NW12} and the references therein and to the recent survey \cite{DKS14} for further details. Besides the case of finite smoothness, Korobov spaces of {\it infinite} smoothness were also considered. In this case, the $\rho_{\boldsymbol{h}}$'s decay to zero exponentially fast in $\boldsymbol{h}$. Multivariate integration and $L_2$-approximation for such Korobov spaces have been analyzed in \cite{DKPW13, DLPW11, KPW12}. To model the influence of different components we use two weight sequences $$ \boldsymbol{a}=\{a_j\}_{j \ge 1}\ \ \ \mbox{and}\ \ \ \boldsymbol{b}=\{b_j\}_{j \ge 1}. 
$$ In order to guarantee that the kernel that we will introduce in a moment is well defined we must assume that $a_j>0$ and $b_j>0$. In fact, we assume a little more throughout the paper, namely that with the proper ordering of variables we have \begin{equation}\label{aabb} 0<a_1\le a_2\le \cdots\ \ \ \ \mbox{ and }\ \ \ b_\ast =\inf b_j >0. \end{equation} Let $a_{\ast}=\inf a_j$, which is $a_1$ in our case. Fix $\omega\in(0,1)$ and put in \eqref{defkernel} \begin{equation}\label{formofomega} \rho_{\boldsymbol{h}}=\omega_{\boldsymbol{h}}:=\omega^{\sum_{j=1}^{s}a_j \abs{h_j}^{b_j}} \qquad\mbox{for all}\qquad \boldsymbol{h}=(h_1,h_2,\dots,h_s)\in\mathbb{Z}^s. \end{equation} For this choice of $\rho_{\boldsymbol{h}}$ we denote the kernel in \eqref{defkernel} by $K_{s,\boldsymbol{a},\boldsymbol{b}}$. We suppress the dependence on $\omega$ in the notation since $\omega$ will be fixed throughout the paper and $\boldsymbol{a}$ and $\boldsymbol{b}$ will be varied. Note that $K_{s,\boldsymbol{a},\boldsymbol{b}}$ is well defined since \begin{eqnarray*} \sum_{\boldsymbol{h} \in \mathbb{Z}^s} \omega_{\boldsymbol{h}} = \prod_{j=1}^s \left(1+2 \sum_{h=1}^{\infty} \omega^{a_j h^{b_j}}\right) \le \left(1+2 \sum_{h=1}^{\infty} \omega^{a_\ast h^{b_\ast}}\right)^s <\infty. \end{eqnarray*} The last series is finite by the comparison test because $a_\ast >0$ and $b_\ast >0$. The Korobov space with reproducing kernel $K_{s,\boldsymbol{a},\boldsymbol{b}}$ is denoted by $H(K_{s,\boldsymbol{a},\boldsymbol{b}})$. Clearly, functions from $H(K_{s,\boldsymbol{a},\boldsymbol{b}})$ are infinitely many times differentiable, see \cite{DLPW11}, and they are even analytic as shown in \cite[Proposition~2]{DKPW13}.
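The factorization of $\sum_{\boldsymbol{h}\in\mathbb{Z}^s}\omega_{\boldsymbol{h}}$ into a product of one-dimensional sums, used in the well-definedness argument above, is easy to confirm numerically on a truncated index set. The parameters $\omega$, $\boldsymbol{a}$, $\boldsymbol{b}$ and the truncation $H$ below are arbitrary illustrative choices.

```python
import itertools
import math

omega, a, b, H = 0.5, (1.0, 2.0), (1.0, 2.0), 30  # illustrative parameters
s = len(a)

# direct truncated sum over h in {-H, ..., H}^s of omega^{sum_j a_j |h_j|^{b_j}}
direct = sum(
    omega ** sum(a[j] * abs(h[j]) ** b[j] for j in range(s))
    for h in itertools.product(range(-H, H + 1), repeat=s)
)

# product of the correspondingly truncated one-dimensional sums
product = math.prod(
    1 + 2 * sum(omega ** (a[j] * h ** b[j]) for h in range(1, H + 1))
    for j in range(s)
)

assert math.isclose(direct, product, rel_tol=1e-12)
```

At a common truncation the two expressions agree exactly; the neglected tails are negligible here since $\omega^{a_jh^{b_j}}$ decays at least geometrically in $h$.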
For $f\in H(K_{s,\boldsymbol{a},\boldsymbol{b}})$ we have $$ f(\boldsymbol{x})=\sum_{\boldsymbol{h}\in\mathbb{Z}^s} \widehat f(\boldsymbol{h})\,\exp(2\pi \mathtt{i} \boldsymbol{h} \cdot\boldsymbol{x}) \ \ \ \mbox{for all}\ \ \ \boldsymbol{x}\in [0,1]^s, $$ where $\widehat{f}(\boldsymbol{h}) = \int_{[0,1]^s} f(\boldsymbol{x}) \exp(-2 \pi \mathtt{i} \boldsymbol{h} \cdot \boldsymbol{x}) \,\mathrm{d} \boldsymbol{x}$ is the $\boldsymbol{h}$th Fourier coefficient of $f$. The inner product of $f$ and $g$ from $H(K_{s,\boldsymbol{a},\boldsymbol{b}})$ is given by $$ \left< f,g\right>_{H(K_{s,\boldsymbol{a},\boldsymbol{b}})}=\sum_{\boldsymbol{h}\in \mathbb{Z}^s}\widehat f(\boldsymbol{h})\, \overline{\widehat g(\boldsymbol{h})}\, \omega_{\boldsymbol{h}}^{-1} $$ and the norm of $f$ from $H(K_{s,\boldsymbol{a},\boldsymbol{b}})$ by $$ \|f\|_{H(K_{s,\boldsymbol{a},\boldsymbol{b}})}=\left(\sum_{\boldsymbol{h}\in \mathbb{Z}^s}|\widehat f(\boldsymbol{h})|^2\omega_{\boldsymbol{h}}^{-1}\right)^{1/2}<\infty. $$ Define the functions \begin{equation}\label{basis} e_{\boldsymbol{h}}(\boldsymbol{x})=\exp(2\pi\mathtt{i}\,\boldsymbol{h}\cdot\boldsymbol{x})\, \omega_{\boldsymbol{h}}^{1/2}\ \ \ \ \ \mbox{for all}\ \ \ \ \ \boldsymbol{x} \in[0,1]^s. \end{equation} Then $\{e_{\boldsymbol{h}}\}_{\boldsymbol{h}\in\mathbb{Z}^s}$ is a complete orthonormal basis of the Korobov space $H(K_{s,\boldsymbol{a},\boldsymbol{b}})$. \section{Integration in $H(K_{s,\boldsymbol{a},\boldsymbol{b}})$}\label{secint} In this section we study numerical integration, i.e., we are interested in the numerical approximation of the values of integrals \[I_s (f)=\int_{[0,1]^s}f(\boldsymbol{x})\,\mathrm{d}\boldsymbol{x}\ \ \ \ \ \mbox{for all}\ \ \ f\in H(K_{s,\boldsymbol{a},\boldsymbol{b}}).\] Using the general notation from Section~\ref{sectractability}, we now have $S_s(f)=I_s(f)$ for functions $f\in H_s=H(K_{s,\boldsymbol{a},\boldsymbol{b}})$, and $G_s=\mathbb{C}$.
We approximate $I_s(f)$ by means of linear algorithms $Q_{n,s}$ of the form \[Q_{n,s}(f):=\sum_{k=1}^n q_k f(\boldsymbol{x}_k),\] where the coefficients $q_k\in \mathbb{C}$ and the sample points $\boldsymbol{x}_k\in[0,1)^s$ can be chosen freely. If we choose $q_k=1/n$ for all $k=1,2,\ldots , n$ then we obtain so-called {\it quasi-Monte Carlo (QMC) algorithms}, which are often used in practical applications, especially if $s$ is large. For recent overviews of the study of QMC algorithms we refer to \cite{DKS14,DP10,KSS11}. The $n$th minimal worst-case error is given by $$ e^{\mathrm{int}}(n,s)=\operatornamewithlimits{inf\phantom{p}}_{q_k,\boldsymbol{x}_k,\ k=1,2,\dots,n}\ \sup_{f\in H(K_{s,\boldsymbol{a},\boldsymbol{b}}) \atop \|f\|_{H(K_{s,\boldsymbol{a},\boldsymbol{b}})}\le1\ } \bigg|I_s(f)-\sum_{k=1}^nq_k f(\boldsymbol{x}_k)\bigg|. $$ It is well known, see for instance \cite{NW10,TWW88}, that \begin{equation}\label{wellknown} e^{\mathrm{int}}(n,s)=\operatornamewithlimits{inf\phantom{p}}_{\boldsymbol{x}_k,\ k=1,2,\dots,n}\ \ \sup_{f\in H(K_{s,\boldsymbol{a},\boldsymbol{b}}), \ f(\boldsymbol{x}_k)=0, \ k=1,2,\dots,n \atop \|f\|_{H(K_{s,\boldsymbol{a},\boldsymbol{b}})}\le1\ } |I_s(f)|. \end{equation} For $n=0$, the best we can do is to approximate $I_s(f)$ simply by zero, and $$ e^{\mathrm{int}}(0,s)=\|I_s\|=1\ \ \ \ \ \mbox{for all}\ \ \ \ \ s\in \mathbb{N}. $$ Hence, the integration problem is well normalized for all $s$. We now summarize the main results regarding numerical integration in $H(K_{s,\boldsymbol{a},\boldsymbol{b}})$. Here and in the following, we will be using the notational abbreviations \begin{center} EXP\qquad UEXP\\ WT\qquad PT\qquad SPT\\ EC-WT\qquad \ EC-PT\qquad \ EC-SPT\\ \end{center} to denote exponential and uniform exponential convergence, and weak, polynomial and strong polynomial tractability in terms of (b)--(d) and (e)--(g).
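As a minimal concrete sketch of the QMC algorithms just mentioned, the following implements a rank-1 lattice rule $Q_{n,s}(f)=\frac1n\sum_{k=0}^{n-1}f(\{k\boldsymbol{z}/n\})$. The generating vector $\boldsymbol{z}=(1,40)$ and $n=101$ are arbitrary illustrative choices rather than optimized parameters; the test integrand is a degree-one trigonometric polynomial with integral $1$, which this rule integrates exactly because no nonzero frequency $\boldsymbol{h}\in\{-1,0,1\}^2$ of the integrand satisfies $\boldsymbol{h}\cdot\boldsymbol{z}\equiv 0 \pmod{n}$.

```python
import math

n, z = 101, (1, 40)  # illustrative lattice-rule parameters

def lattice_points(n, z):
    # rank-1 lattice: x_k = fractional part of k*z/n, k = 0, ..., n-1
    for k in range(n):
        yield tuple((k * zj / n) % 1.0 for zj in z)

def qmc(f, n, z):
    # equal-weight (QMC) rule with q_k = 1/n
    return sum(f(x) for x in lattice_points(n, z)) / n

def f(x):
    # periodic test integrand with exact integral 1 over [0,1]^2
    return math.prod(1 + math.sin(2 * math.pi * xj) for xj in x)

est = qmc(f, n, z)
assert abs(est - 1.0) < 1e-9
```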
We now state relations between these concepts as well as necessary and sufficient conditions on $\boldsymbol{a}$ and $\boldsymbol{b}$ for which these concepts hold. As we shall see, in the settings considered in this paper, many conditions for obtaining these concepts are equivalent. We first state a theorem which describes conditions on the weight sequences $\boldsymbol{a}$ and~$\boldsymbol{b}$ to obtain exponential (EXP) and uniform exponential (UEXP) convergence. This theorem is from \cite{DKPW13, KPW12}. \begin{theorem}\label{thmint(u)exp} Consider integration defined over the Korobov space $H(K_{s,\boldsymbol{a},\boldsymbol{b}})$ with weight sequences $\boldsymbol{a}$ and $\boldsymbol{b}$ satisfying~\eqref{aabb}. \begin{itemize} \item\label{intexp} EXP holds for all considered $\boldsymbol{a}$ and $\boldsymbol{b}$ and $$ p^{*}(s)=\frac{1}{B(s)} \ \ \ \ \ \mbox{with}\ \ \ \ \ B(s):=\sum_{j=1}^s\frac1{b_j}. $$ \item\label{intuexp} UEXP holds iff $\boldsymbol{b}$ is such that $$ B:=\sum_{j=1}^\infty\frac1{b_j}<\infty. $$ If so then $p^*=1/B$. \end{itemize} \end{theorem} Theorem~\ref{thmint(u)exp} states that we always have exponential convergence. However, a necessary and sufficient condition for uniform exponential convergence is that the weights~$b_j$ go to infinity so fast that $B:=\sum_{j=1}^\infty b_j^{-1}<\infty$, with no extra conditions on $a_j$ and $\omega$. The largest exponent $p^*$ of uniform exponential convergence is $1/B$. Hence for small $B$ the exponent $p^*$ is large. For instance, for $b_j=j^2$ we have $B=\pi^2/6$ and $p^*=6/\pi^2=0.6079\dots$. Next, we consider standard notions of tractability, (b)--(d). They have not yet been studied for the Korobov space $H(K_{s,\boldsymbol{a},\boldsymbol{b}})$ and therefore we need to prove the next theorem.
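The numerical values in the example above can be confirmed with a quick partial-sum computation: for $b_j=j^2$ one has $B=\sum_{j\ge1}j^{-2}=\pi^2/6$, and hence $p^*=1/B=6/\pi^2=0.6079\dots$.

```python
import math

# Partial sum of B = sum_{j >= 1} 1/b_j for b_j = j^2; the tail beyond N
# lies between 1/(N+1) and 1/N, so 2/N is a safe absolute tolerance.
N = 10 ** 6
B = sum(1.0 / (j * j) for j in range(1, N + 1))

assert math.isclose(B, math.pi ** 2 / 6, abs_tol=2.0 / N)
assert math.isclose(1.0 / B, 6 / math.pi ** 2, abs_tol=1e-5)  # p* = 1/B
```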
\begin{theorem}\label{PTint} Consider integration defined over the Korobov space $H(K_{s,\boldsymbol{a},\boldsymbol{b}})$ with weight sequences $\boldsymbol{a}$ and $\boldsymbol{b}$ satisfying~\eqref{aabb}. For simplicity, assume that $$ A:=\lim_{j \rightarrow \infty} \frac{a_j}{\log j} $$ exists. \begin{itemize} \item\label{SPT} SPT holds if $A >\frac{1}{\log \omega^{-1}}$. In this case the exponent $\tau^\ast$ of SPT satisfies $$ \tau^\ast \le \min\left(2, \frac{2}{A \log \omega^{-1}}\left( 1+\frac1{A \log \omega^{-1}}\right)\right). $$ On the other hand, if we have SPT with exponent $\tau^\ast$, then $A \ge \frac{1}{\tau^{\ast} \log \omega^{-1}}$. \item\label{PT} PT holds if there is an integer $j_0\ge2$ such that $$ \frac{a_j}{\log\,j}\ge\frac1{\log\,\omega^{-1}}\ \ \ \ \mbox{for all}\ \ \ \ \ j\ge j_0. $$ \item WT holds if $\lim_{j \rightarrow \infty} a_j=\infty$. \end{itemize} \end{theorem} \begin{proof} It is well known that integration is no harder than $L_2$-approximation for the class $\Lambda^{\rm{std}}$. For the Korobov class the initial errors of integration and approximation are $1$. Therefore the corresponding notions of tractability for approximation imply the same notions of tractability for integration. From Theorem~\ref{PTapprox}, presented in the next section, we thus conclude SPT, PT and WT also for integration. The second bound on the exponent $\tau^*$ of SPT also follows from Theorem~\ref{PTapprox}. It remains to prove that $\tau^*\le 2$. It is known, see, e.g., \cite[Theorem 10.4]{NW10}, that $$ [e^{\rm int}(n,s)]^2\le \frac1{n}\,\int_{[0,1]^s}K_{s,\boldsymbol{a},\boldsymbol{b}}(\boldsymbol{x},\boldsymbol{x})\,{\rm d}\boldsymbol{x}\le\frac1n\,\prod_{j=1}^s\left(1+ 2\sum_{h=1}^\infty \omega^{a_jh^{b_\ast}}\right). $$ It is shown in the proof of Theorem~\ref{PTapprox} (with $\tau=1$) that $A>0$ implies the existence of $C\in(0,\infty)$ such that $2\sum_{h=1}^\infty\omega^{a_jh^{b_\ast}}\le C\,\omega^{a_j}$.
Therefore $$ [e^{\rm int}(n,s)]^2\le \frac1n\prod_{j=1}^s\left(1+C\,\omega^{a_j}\right)\le \frac1n\, \exp\left(C\sum_{j=1}^s\omega^{a_j}\right). $$ Note that for $j\ge2$ we have $\omega^{a_j}=j^{-a_j\,(\log \omega^{-1})/\log j}$. Since $A>1/(\log\,\omega^{-1})$, for large~$j$ we conclude that $\omega^{a_j}\le j^{-\beta}$ for some $\beta\in(1,A \log\,\omega^{-1})$. Hence $\sum_{j=1}^\infty \omega^{a_j}<\infty$ and $e^{\rm int}(n,s)\le {\varepsilon}$ for $n=\mathcal{O}({\varepsilon}^{-2})$ with the factor in the big $\mathcal{O}$ notation independent of~$s$. This implies SPT with exponent at most $2$. It remains to show the necessary condition for SPT with exponent $\tau^\ast$. First we show the estimate \begin{equation}\label{lowest} e^{\mathrm{int}}(s,s)\ge\frac{\omega^{a_s}}{\sqrt{1+\omega^{2a_s}}} \ \ \ \ \ \mbox{for all}\ \ \ \ \ s\in\mathbb{N}. \end{equation} Let $\boldsymbol{h}^{(0)}=(0,0,\dots,0)\in\mathbb{Z}^s$. For $j=1,2,\dots,s$, let $$ \boldsymbol{h}^{(j)}=(0,0,\dots,1,0,\dots,0)\in\mathbb{Z}^s\ \ \ \mbox{with $1$ in the $j$th place}. $$ For $\boldsymbol{h}\in\mathbb{Z}^s$, let $$ c_{\boldsymbol{h}}(\boldsymbol{x})=\exp\left(2\pi\,{\rm i}\,\sum_{j=1}^sh_jx_j\right)\ \ \ \ \ \mbox{for all}\ \ \ \ \ \boldsymbol{x}\in[0,1]^s. $$ For $j=1,2,\dots,s$, note that $c_{\boldsymbol{h}^{(j)}}(\boldsymbol{x})=\exp(2\,\pi\,{\rm i}\, x_j)$ and $$ c_{\boldsymbol{h}^{(j)}}(\boldsymbol{x})\,\overline{c_{\boldsymbol{h}^{(k)}}(\boldsymbol{x})}= c_{\boldsymbol{h}^{(j)}-\boldsymbol{h}^{(k)}}(\boldsymbol{x}). $$ Consider the function $$ f(\boldsymbol{x})=\sum_{j=0}^s \alpha_j\,c_{\boldsymbol{h}^{(j)}}(\boldsymbol{x})\ \ \ \ \ \mbox{for all}\ \ \ \ \ \boldsymbol{x}\in[0,1]^s $$ for some complex numbers $\alpha_j$. We know that adaption does not help for the integration problem. Suppose that we sample functions at $s$ nonadaptive points $\boldsymbol{x}_1,\boldsymbol{x}_2,\dots,\boldsymbol{x}_s\in[0,1]^s$.
We choose numbers $\alpha_j$ such that $$ f(\boldsymbol{x}_j)=0\ \ \ \ \ \mbox{for all}\ \ \ \ \ j=1,2,\dots,s. $$ This corresponds to $s$ homogeneous linear equations in $(s+1)$ unknowns. Therefore there exists a nonzero solution $\alpha_0,\alpha_1,\dots,\alpha_s$ which we may normalize such that $$ \sum_{j=0}^s|\alpha_j|^2=1. $$ Let $$ g(\boldsymbol{x})=\overline{f(\boldsymbol{x})}\,f(\boldsymbol{x})=\sum_{j,k=0}^s\alpha_j\, \overline{\alpha}_k\,c_{\boldsymbol{h}^{(j)}-\boldsymbol{h}^{(k)}}(\boldsymbol{x})\ \ \ \ \ \mbox{for all}\ \ \ \ \ \boldsymbol{x}\in[0,1]^s. $$ Clearly, $g(\boldsymbol{x}_j)=0$ for all $j=1,2,\dots,s$. Since $I_s(c_{\boldsymbol{h}^{(j)}-\boldsymbol{h}^{(k)}})=0$ for $j\not=k$ and $1$ for $j=k$ we obtain $$ I_s(g)=\sum_{j=0}^s|\alpha_j|^2=1. $$ Now it follows from \eqref{wellknown} that $$ e^{\mathrm{int}}(s,s)\ge I_s\left(\frac{g}{\|g\|_{H(K_{s,\boldsymbol{a},\boldsymbol{b}})}}\right)= \frac1{\|g\|_{H(K_{s,\boldsymbol{a},\boldsymbol{b}})}}. $$ Therefore we need to estimate the norm of $g$ from above. Note that \begin{eqnarray*} \|g\|_{H(K_{s,\boldsymbol{a},\boldsymbol{b}})}^2&=& \left< g,g\right>_{H(K_{s,\boldsymbol{a},\boldsymbol{b}})}\\&=&\left< \sum_{j_1,k_1=0}^s\alpha_{j_1}\,\overline{\alpha_{k_1}}\, c_{\boldsymbol{h}^{(j_1)}-\boldsymbol{h}^{(k_1)}}, \sum_{j_2,k_2=0}^s\alpha_{j_2}\,\overline{\alpha_{k_2}}\, c_{\boldsymbol{h}^{(j_2)}-\boldsymbol{h}^{(k_2)}}\right>_{H(K_{s,\boldsymbol{a},\boldsymbol{b}})} \\ &=& \sum_{j_1,k_1,j_2,k_2=0}^s\alpha_{j_1}\,\overline{\alpha_{j_2}}\, \overline{\alpha_{k_1}}\,\alpha_{k_2}\, \left< c_{\boldsymbol{h}^{(j_1)}-\boldsymbol{h}^{(k_1)}}, c_{\boldsymbol{h}^{(j_2)}-\boldsymbol{h}^{(k_2)}}\right>_{H(K_{s,\boldsymbol{a},\boldsymbol{b}})}.
\end{eqnarray*} For $\boldsymbol{h}^{(j_1)}-\boldsymbol{h}^{(k_1)}\not=\boldsymbol{h}^{(j_2)}-\boldsymbol{h}^{(k_2)}$ we have $$ \left< c_{\boldsymbol{h}^{(j_1)}-\boldsymbol{h}^{(k_1)}}, c_{\boldsymbol{h}^{(j_2)}-\boldsymbol{h}^{(k_2)}}\right>_{H(K_{s,\boldsymbol{a},\boldsymbol{b}})}=0, $$ whereas for $\boldsymbol{h}^{(j_1)}-\boldsymbol{h}^{(k_1)}=\boldsymbol{h}^{(j_2)}-\boldsymbol{h}^{(k_2)}$ we have $$ \left< c_{\boldsymbol{h}^{(j_1)}-\boldsymbol{h}^{(k_1)}}, c_{\boldsymbol{h}^{(j_2)}-\boldsymbol{h}^{(k_2)}}\right>_{H(K_{s,\boldsymbol{a},\boldsymbol{b}})}= \omega^{-1}_{\boldsymbol{h}^{(j_1)}-\boldsymbol{h}^{(k_1)}}. $$ Therefore it is enough to consider $$ \boldsymbol{h}^{(j_1)}-\boldsymbol{h}^{(k_1)}=\boldsymbol{h}^{(j_2)}-\boldsymbol{h}^{(k_2)}. $$ Suppose first that $j_1\not=k_1$. Then $\boldsymbol{h}^{(j_1)}-\boldsymbol{h}^{(k_1)}=\boldsymbol{h}^{(j_2)}-\boldsymbol{h}^{(k_2)}$ implies that $j_2=j_1$ and $k_2=k_1$ and $$ \omega^{-1}_{\boldsymbol{h}^{(j_1)}-\boldsymbol{h}^{(k_1)}}=\omega^{-a_{j_1}-a_{k_1}}, $$ with the convention $a_0:=0$. On the other hand, if $j_1=k_1$ then $\boldsymbol{h}^{(j_1)}-\boldsymbol{h}^{(k_1)}=\boldsymbol{h}^{(0)}$ which implies that $j_2=k_2$ and $$ \omega^{-1}_{\boldsymbol{h}^{(j_1)}-\boldsymbol{h}^{(k_1)}}=1. $$ Therefore \begin{eqnarray*} \|g\|_{H(K_{s,\boldsymbol{a},\boldsymbol{b}})}^2&=& \sum_{j_1,k_1,j_2,k_2=0, \ \boldsymbol{h}^{(j_1)}-\boldsymbol{h}^{(k_1)}= \boldsymbol{h}^{(j_2)}-\boldsymbol{h}^{(k_2)}}^s\alpha_{j_1}\, \overline{\alpha_{j_2}}\,\overline{\alpha_{k_1}}\, \alpha_{k_2}\,\omega^{-1}_{\boldsymbol{h}^{(j_1)}-\boldsymbol{h}^{(k_1)}}\\ &=& \sum_{j_1=0}^s\,\sum_{k_1=0,k_1\not=j_1}^s|\alpha_{j_1}|^2\,|\alpha_{k_1}|^2\, \omega^{-a_{j_1}-a_{k_1}}\, +\, \sum_{j_1=0}^s|\alpha_{j_1}|^2\,\sum_{j_2=0}^s|\alpha_{j_2}|^2\\ &=&\sum_{j=0}^s|\alpha_j|^2\omega^{-a_j}\,\left(-|\alpha_j|^2\omega^{-a_j}\,+\, \sum_{k=0}^s|\alpha_k|^2\omega^{-a_k}\right)+1\\ &\le& \left(\sum_{j=0}^s|\alpha_j|^2\omega^{-a_j}\right)^2+1 \le \omega^{-2a_s}+1.
\end{eqnarray*} Hence, $$ \|g\|_{H(K_{s,\boldsymbol{a},\boldsymbol{b}})}\le \sqrt{1+\omega^{-2a_s}}=\frac{\sqrt{1+\omega^{2a_s}}}{\omega^{a_s}}. $$ Finally, $$ e^{\mathrm{int}}(s,s)\ge \frac1{\|g\|_{H(K_{s,\boldsymbol{a},\boldsymbol{b}})}}\ge \frac{\omega^{a_s}}{\sqrt{1+\omega^{2a_s}}}, $$ and thus \eqref{lowest} is shown. Assume now that we have SPT with the exponent $\tau^*$. This means that for any positive $\delta$ there exists a positive number $C_\delta$ such that $$ n({\varepsilon},s)\le C_\delta\,{\varepsilon}^{-(\tau^*+\delta)}\ \ \ \ \ \mbox{for all}\ \ \ \ \ {\varepsilon}\in(0,1),\ s\in\mathbb{N}. $$ Let $n=n({\varepsilon}):=\lfloor C_{\delta}\,{\varepsilon}^{-(\tau^*+\delta)}\rfloor$. Then $$ e^{\mathrm{int}}(n({\varepsilon}),s)\le {\varepsilon} \ \ \ \ \ \mbox{for all}\ \ \ \ \ s\in\mathbb{N}. $$ Taking $s=n({\varepsilon})$, we conclude from~\eqref{lowest} that $$ \frac{\omega^{a_s}}{\sqrt{1+\omega^{2a_s}}} \le e^{\mathrm{int}}(s,s)\le {\varepsilon}, $$ which implies $$ (1-{\varepsilon}^2)\omega^{2a_s} \le {\varepsilon}^2. $$ Taking logarithms, this means that $$ \frac{a_s}{\log\,{\varepsilon}^{-1}}\ge \frac{1+o(1)}{\log\,\omega^{-1}}\ \ \ \ \ \mbox{as}\ \ \ \ \ {\varepsilon}\to0. $$ Since $\log\,{\varepsilon}^{-1}=(1+o(1))(\tau^*+\delta)^{-1}\,\log\,s$ we finally have $$ A=\lim_{s\to \infty}\frac{a_s}{\log\,s}\ge \frac1{(\tau^*+\delta)\,\log\, \omega^{-1}}. $$ Since $\delta$ can be arbitrarily small, the proof is completed. \end{proof} We stress that for integration we only know sufficient conditions on $\boldsymbol{a}$ and $\boldsymbol{b}$ for the standard notions PT and WT. Obviously, it would be desirable to find also necessary conditions and to verify whether they match the conditions presented in the last theorem. For SPT we have a sufficient condition and a necessary condition, but there remains a (small) gap between them. Again, it would be desirable to find matching sufficient and necessary conditions for SPT.
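As a worked instance of the bounds in Theorem~\ref{PTint}, one may take the illustrative (hypothetical) choice $\omega=e^{-1}$ and $a_j=2\log(j+1)$, so that $\log\omega^{-1}=1$ and $A=2>1/\log\omega^{-1}$:

```latex
% Illustrative parameters (our choice, not taken from the cited results):
% omega = e^{-1}, a_j = 2 log(j+1), hence log(omega^{-1}) = 1 and A = 2.
\begin{align*}
\tau^\ast &\le \min\left(2,\;\frac{2}{A\log\omega^{-1}}
   \left(1+\frac{1}{A\log\omega^{-1}}\right)\right)
   =\min\left(2,\;\frac{2}{2}\Bigl(1+\frac{1}{2}\Bigr)\right)=\frac{3}{2},\\
\tau^\ast &\ge \frac{1}{A\log\omega^{-1}}=\frac{1}{2},
\end{align*}
```

where the lower bound is the necessary condition $A\ge 1/(\tau^\ast\log\omega^{-1})$ rewritten for $\tau^\ast$. Thus for this choice of weights SPT holds with an exponent between $1/2$ and $3/2$; the theorem does not determine the exact value.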
Note that it may happen that $A=\infty$. This happens when the $a_j$'s go to infinity faster than $\log\,j$. In this case, the exponent of SPT is zero. This means that for any positive $\delta$, no matter how small, $n({\varepsilon},s)=\mathcal{O}({\varepsilon}^{-\delta})$ with the factor in the big $\mathcal{O}$ notation independent of~$s$. We also stress that the conditions for all standard notions of tractability depend only on $\boldsymbol{a}$ and are independent of $\boldsymbol{b}$. Finally, we have a result regarding the EC notions of tractability, (e)--(g). The subsequent theorem follows by combining the findings in \cite{KPW12} and \cite[Section~9]{DKPW13}. \begin{theorem}\label{thmintectract} Consider integration defined over the Korobov space $H(K_{s,\boldsymbol{a},\boldsymbol{b}})$ with weight sequences $\boldsymbol{a}$ and $\boldsymbol{b}$ satisfying~\eqref{aabb}. Then the following results hold: \begin{itemize} \item\label{intecpt} EC-PT (and, of course, EC-SPT) implies UEXP. \item\label{intecwt} We have \begin{eqnarray*} \mbox{EC-WT}\ &\Leftrightarrow&\ \lim_{j\to\infty}a_j=\infty,\\ \mbox{EC-WT+UEXP}\ &\Leftrightarrow&\ B<\infty\ \ \mbox{and}\ \ \lim_{j\to\infty}a_j=\infty. \end{eqnarray*} \item\label{intecequiv} The following notions are equivalent: \begin{multline*} \ \ \ \ \mbox{EC-PT}\ \Leftrightarrow\ \mbox{EC-PT+EXP}\ \Leftrightarrow\ \mbox{EC-PT+UEXP} \\ \Leftrightarrow \mbox{EC-SPT}\ \Leftrightarrow\ \mbox{EC-SPT+EXP}\ \Leftrightarrow\ \mbox{EC-SPT+UEXP}. \end{multline*} \item \label{intecspt} EC-SPT+UEXP holds iff the $b_j^{-1}$'s are summable and the $a_j$'s are exponentially large in~$j$, i.e., $$ B:=\sum_{j=1}^\infty\frac1{b_j}<\infty\quad\mbox{and}\quad \alpha^*:=\liminf_{j\to\infty}\frac{\log\,a_j}j>0. $$ In this case the exponent $\tau^*$ of EC-SPT satisfies $$ \tau^*\in \left[B,B+\min\left(B,\frac{\log\,3}{\alpha^*}\right)\right]. $$ In particular, if $\alpha^* =\infty$ then $\tau^* = B$.
\end{itemize} \end{theorem} Theorem~\ref{thmintectract} states that EC-PT implies UEXP and hence $B<\infty$. The notion of EC-PT is therefore stronger than the notion of uniform exponential convergence. EC-WT holds if and only if the $a_j$'s tend to infinity. This holds independently of the weights~$\boldsymbol{b}$ and independently of the rate of convergence of $\boldsymbol{a}$ to infinity. As already shown, this implies that \eqref{eqlimit} holds. Furthermore, EC-WT+UEXP holds if additionally $B< \infty$. Hence for $\lim_j a_j=\infty$ and $B=\infty$, EC-WT holds without UEXP. It is a bit surprising that all the notions of EC-tractability with uniform exponential convergence are equivalent. Necessary and sufficient conditions for EC-SPT with uniform exponential convergence are $B< \infty$ and $\alpha^{\ast}>0$. The last condition means that the $a_j$'s are exponentially large in $j$ for large $j$. \section{$L_2$-approximation in $H(K_{s,\boldsymbol{a},\boldsymbol{b}})$}\label{secapp} Let us now turn to approximation in the space $H(K_{s,\boldsymbol{a},\boldsymbol{b}})$. We study $L_2$-approximation of functions from $H(K_{s,\boldsymbol{a},\boldsymbol{b}})$. This problem is defined as an approximation of the embedding from the space $H(K_{s,\boldsymbol{a},\boldsymbol{b}})$ to the space $L_2([0,1]^s)$, i.e., $$ {\rm EMB}_s:H(K_{s,\boldsymbol{a},\boldsymbol{b}}) \rightarrow L_2([0,1]^s)\ \ \ \mbox{given by}\ \ \ {\rm EMB}_s(f)=f. $$ In terms of the notation in Section~\ref{sectractability}, $S_s(f)={\rm EMB}_s(f)=f$ for $f\in H(K_{s,\boldsymbol{a},\boldsymbol{b}})$, and $G_s=L_2 ([0,1]^s)$.
Without loss of generality, see again \cite{NW08,TWW88}, we approximate ${\rm EMB}_s$ by linear algorithms~$A_{n,s}$ of the form \begin{equation}\label{linalg} A_{n,s}(f) = \sum_{k=1}^{n}\alpha_k L_k(f)\ \ \ \ \mbox{for} \ \ \ \ \ f \in H(K_{s,\boldsymbol{a},\boldsymbol{b}}), \end{equation} where each $\alpha_k$ is an element of $L_{2}([0,1]^{s})$ and each $L_k$ is a continuous linear functional defined on $H(K_{s,\boldsymbol{a},\boldsymbol{b}})$ from a permissible class $\Lambda$ of information, $\Lambda\in\{\Lambda^{\rm{all}},\Lambda^{\rm{std}}\}$. Since $H(K_{s,\boldsymbol{a},\boldsymbol{b}})$ is a reproducing kernel Hilbert space, function evaluations are continuous linear functionals and therefore $\Lambda^{\mathrm{std}}\subseteq \Lambda^{\mathrm{all}}$. Let $e^{L_2-\mathrm{app},\Lambda}(n,s)$ be the $n$th minimal worst-case error, $$ e^{L_2-\mathrm{app},\Lambda}(n,s) = \inf_{A_{n,s}} e^{L_2-\mathrm{app}}(H(K_{s,\boldsymbol{a},\boldsymbol{b}}),A_{n,s}), $$ where the infimum is taken over all linear algorithms $A_{n,s}$ of the form \eqref{linalg} using information from the class $\Lambda\in\{\Lambda^{\mathrm{all}},\Lambda^{\mathrm{std}}\}$. For $n=0$ we simply approximate $f$ by zero, and the initial error is $$ e^{L_2-\mathrm{app},\Lambda}(0,s) = \|{\rm EMB}_s\|= \sup_{f \in H(K_{s,\boldsymbol{a},\boldsymbol{b}}) \atop \norm{f}_{H(K_{s,\boldsymbol{a},\boldsymbol{b}})}\le 1} \norm{f}_{L_{2}([0,1]^s)} = 1. $$ This means that also $L_2$-approximation is well normalized for all $s\in\mathbb{N}$. Let us now outline the main results regarding $L_2$-approximation in $H(K_{s,\boldsymbol{a},\boldsymbol{b}})$. Again, we start with results on EXP and UEXP. The following result was proved in~\cite{DKPW13}. \begin{theorem}\label{thmapp(u)exp} Consider $L_2$-approximation defined over the Korobov space $H(K_{s,\boldsymbol{a},\boldsymbol{b}})$ with weight sequences $\boldsymbol{a}$ and $\boldsymbol{b}$ satisfying~\eqref{aabb}.
Then the following results hold for both classes $\Lambda^{\rm{all}}$ and $\Lambda^{\rm{std}}$: \begin{itemize} \item\label{appexp} EXP holds for all considered $\boldsymbol{a}$ and $\boldsymbol{b}$ with $$ p^{*}(s)=\frac{1}{B(s)} \ \ \ \ \ \mbox{with}\ \ \ \ \ B(s):=\sum_{j=1}^s\frac1{b_j}. $$ \item\label{appuexp} UEXP holds iff $\boldsymbol{a}$ is an arbitrary sequence and $\boldsymbol{b}$ is such that $$ B:=\sum_{j=1}^\infty\frac1{b_j}<\infty. $$ If so then $p^*=1/B$. \end{itemize} \end{theorem} Note that the conditions are the same as for the integration problem in Theorem~\ref{thmint(u)exp}. Hence the comments following Theorem~\ref{thmint(u)exp} also apply for approximation. Beyond that, it is interesting that we have the same conditions for $\Lambda^{\rm{all}}$ and $\Lambda^{\rm{std}}$, although the class $\Lambda^{\rm{std}}$ is much smaller than the class $\Lambda^{\rm{all}}$. We now address conditions on the weights $\boldsymbol{a}$ and $\boldsymbol{b}$ for the standard concepts of tractability. This has not yet been done for $\omega_{\boldsymbol{h}}$ of the form~\eqref{formofomega}, and therefore we need to prove the next theorem. \begin{theorem}\label{PTapprox} Consider $L_2$-approximation defined over the Korobov space $H(K_{s,\boldsymbol{a},\boldsymbol{b}})$ with arbitrary sequences $\boldsymbol{a}$ and $\boldsymbol{b}$ satisfying~\eqref{aabb}. Assume for simplicity that $$ A:=\lim_{j\to\infty}\frac{a_j}{\log\,j} $$ exists. Then the following results hold: For $\Lambda^{\rm{all}}$ we have: \begin{itemize} \item $ \mbox{SPT}\ \ \ \Leftrightarrow\ \ \ A>0.$ In this case, the exponent of SPT is $$ [\tau^{\rm all}]^*= \frac{2}{A\,\log\,\omega^{-1}}. $$ \item $ \mbox{PT}\ \ \ \Leftrightarrow\ \ \ \mbox{SPT}. $ \item \mbox{WT} holds for all considered $\boldsymbol{a}$ and $\boldsymbol{b}$. \end{itemize} For $\Lambda^{\rm{std}}$ we have: \begin{itemize} \item SPT holds if $A>1/(\log \omega^{-1})$.
In this case, the exponent $[\tau^{{\rm std}}]^\ast$ satisfies $$ [\tau^{\rm all}]^*\le [\tau^{{\rm std}}]^\ast \le [\tau^{\rm{all}}]^\ast + \frac{1}{2}([\tau^{\rm{all}}]^\ast)^2 < [\tau^{\rm{all}}]^\ast +2.$$ On the other hand, if we have SPT with exponent $[\tau^{\rm{std}}]^\ast$, then $A \ge \frac{1}{[\tau^{\rm{std}}]^\ast \log \omega^{-1}}$. \item PT holds if there is an integer $j_0\ge2$ such that $$ \frac{a_j}{\log\,j}\ge\frac1{\log\,\omega^{-1}}\ \ \ \ \mbox{for all}\ \ \ \ \ j\ge j_0. $$ \item WT holds if $\lim_{j \rightarrow \infty} a_j=\infty$. \end{itemize} \end{theorem} \begin{proof} Consider first the class $\Lambda^{\rm{all}}$. \begin{itemize} \item From \cite[Theorem~5.2]{NW08} it follows that SPT for $\Lambda^{\rm{all}}$ is equivalent to the existence of a number $\tau>0$ such that $$ C_{{\rm SPT},\tau}:= \sup_s \left( \sum_{\boldsymbol{h} \in \mathbb{Z}^s} \omega_{\boldsymbol{h}}^{\tau}\right)^{1/\tau} < \infty. $$ Note that $$ \sum_{\boldsymbol{h} \in \mathbb{Z}^s} \omega_{\boldsymbol{h}}^{\tau} = \prod_{j=1}^s \left(1+2 \sum_{h=1}^\infty \omega^{\tau a_j h^{b_j}}\right)= \prod_{j=1}^s \left(1+2\omega^{\tau a_j}\,\sum_{h=1}^\infty \omega^{\tau a_j (h^{b_j}-1)}\right). $$ We have $$ 1\le\sum_{h=1}^\infty \omega^{\tau a_j(h^{b_j}-1)} \le \sum_{h=1}^\infty \omega^{\tau a_*(h^{b_*}-1)} =:A_\tau. $$ We can rewrite $A_\tau$ as $$ A_\tau=\sum_{h=1}^\infty h^{-x_h}, $$ where $x_1=1$ and for $h \ge 2$ we have $$ x_h=\tau\,a_* (\log\,\omega^{-1})\,\frac{h^{b_*}-1}{\log\, h}. $$ Since $\lim_hx_h=\infty$ the last series is convergent and therefore $A_\tau<\infty$. This proves that $$ \sum_{\boldsymbol{h} \in \mathbb{Z}^s} \omega_{\boldsymbol{h}}^{\tau}=\prod_{j=1}^s\left (1+2A(\tau)\,\omega^{\tau a_j}\right)\ \ \ \mbox{with}\ \ \ \ A(\tau)\in\left[1,A_\tau\right].
$$ This implies that $$ \sup_s\,\left(\sum_{\boldsymbol{h} \in \mathbb{Z}^s} \omega_{\boldsymbol{h}}^{\tau}\right)^{1/\tau}= \prod_{j=1}^\infty \left(1+2A(\tau)\, \omega^{\tau a_j}\right)^{1/\tau}<\infty\ \ \ \ \mbox{iff}\ \ \ \ \sum_{j=1}^\infty \omega^{\tau a_j}<\infty. $$ We now show that $$ \sum_{j=1}^\infty \omega^{\tau a_j}<\infty\ \ \mbox{for some}\ \ \tau \ \ \ \ \mbox{iff}\ \ \ \ A >0. $$ Indeed, for $j\ge2$ we can write $\omega^{\tau a_j}=j^{-y_j}$ with $$ y_j=\tau \log\,\omega^{-1}\ \frac {a_j}{\log\,j}. $$ If $A>0$ then for an arbitrary positive $\delta$ we can choose $\tau$ such that $y_j\ge 1+\delta$ for sufficiently large $j$ and therefore the series $$ \sum_{j=1}^\infty\omega^{\tau a_j}=\omega^{\tau a_1}+\sum_{j=2}^\infty j^{-y_j} $$ is convergent. If $A=0$ then independently of $\tau$ the series $\sum_{j=1}^\infty \omega^{\tau a_j}$ is divergent. Indeed, then $\lim_jy_j=0$ and for an arbitrary positive $\delta\le 1$ and $\tau$ we can choose $j(\delta,\tau)$ such that $y_j\in(0,\delta)$ for all $j\ge j(\delta,\tau)$ and $$ \sum_{j=1}^\infty\omega^{\tau a_j}\ge \sum_{j=j(\delta,\tau)}^\infty j^{-\delta}=\infty, $$ as claimed. This proves that SPT holds iff $A>0$. Furthermore, \cite[Theorem~5.2]{NW08} states that the exponent of SPT is $2\tau^*$, where $\tau^*$ is the infimum of $\tau$ for which $C_{{\rm SPT},\tau}<\infty$. In our case, it is clear that we must have $\tau\ge (1+\delta)/((A-\delta)\log\,\omega^{-1})$ for arbitrary $\delta\in (0,A)$. This completes the proof of this point. \item To show that PT is equivalent to SPT, it is obviously enough to show that PT implies SPT. According to \cite[Theorem~5.2]{NW08}, PT for $\Lambda^{\rm{all}}$ is equivalent to the existence of numbers $\tau>0$ and $q \ge 0$ such that $$ C_{{\rm PT}}:= \sup_s \left( \sum_{\boldsymbol{h} \in \mathbb{Z}^s} \omega_{\boldsymbol{h}}^{\tau}\right)^{1/\tau} s^{-q} < \infty. 
$$ This means that \begin{equation}\label{eqlogCPT} \log\, \sum_{\boldsymbol{h} \in \mathbb{Z}^s} \omega_{\boldsymbol{h}}^{\tau} \le \tau\left(\log\,C_{\rm PT}\ +\ q\,\log\,s\right). \end{equation} From the previous considerations we know that $$ \log\, \sum_{\boldsymbol{h} \in \mathbb{Z}^s} \omega_{\boldsymbol{h}}^{\tau}=\log\,\prod_{j=1}^s\left(1+2A(\tau)\omega^{\tau a_j}\right)=\sum_{j=1}^s\log\, (1+2A(\tau)\omega^{\tau a_j}). $$ Assume that $A=0$. Suppose first that the $a_j$'s are uniformly bounded. Then\linebreak $\log\, \sum_{\boldsymbol{h} \in \mathbb{Z}^s} \omega_{\boldsymbol{h}}^{\tau}$ is of order $s$ which contradicts the inequality \eqref{eqlogCPT}. Assume now that $\lim_ja_j=\infty$. Then $\log\, \sum_{\boldsymbol{h} \in \mathbb{Z}^s} \omega_{\boldsymbol{h}}^{\tau}$ is of order $\sum_{j=2}^s\omega^{\tau a_j}=\sum_{j=2}^sj^{-y_j}$. Since $\lim_jy_j=0$ we have for $\delta\in(0,1)$, as before, $j^{-y_j}\ge j^{-\delta}$ for large $j$. This proves that $\sum_{j=2}^sj^{-\delta}\approx \int_2^sx^{-\delta}\,{\rm d}x$ is of order $s^{1-\delta}$ which again contradicts the inequality \eqref{eqlogCPT}. Hence, $A>0$ and we have SPT. \item We now show WT for all $\boldsymbol{a}$ and $\boldsymbol{b}$ with $a_*,b_*>0$. We have $$ \omega_{\boldsymbol{h}}=\omega^{\sum_{j=1}^sa_j|h_j|^{b_j}}\le \omega_{*,\boldsymbol{h}}:=\omega^{a_*\,\sum_{j=1}^s|h_j|^{b_*}}. $$ Note that for $\boldsymbol{h}={\bf 0}$ we have $\omega_{\boldsymbol{h}}=\omega_{*,\boldsymbol{h}}=1$. This shows that the approximation problem with $\omega_{\boldsymbol{h}}$ is not harder than the approximation problem with $\omega_{*,\boldsymbol{h}}$. The latter problem is a linear tensor product problem with the univariate eigenvalues of $W_1={\rm EMB}_1^*\,{\rm EMB}_1: H(K_{1,a_*,b_*})\to H(K_{1,a_*,b_*})$ given by $$ \lambda_1=1, \ \ \ \ \lambda_{2j}=\lambda_{2j+1}=\omega^{a_*\,j^{b_*}} \ \ \ \ \mbox{for all}\ \ \ \ j\ge1.
$$ Clearly, $\lambda_2<\lambda_1$ and $\lambda_j$ goes to zero faster than polynomially in $j$. This implies WT due to \cite[Theorem~5.5]{NW08}.\footnote{ In fact, we also have quasi-polynomial tractability, i.e., $n({\varepsilon},s)\le C\,\exp(t(1+\log\,{\varepsilon}^{-1})(1+\log\,s))$ for some $C>0$ and $t\approx 2/(a_*\log\,\omega^{-1})$, see \cite{GW11}.} \end{itemize} We now turn to the class $\Lambda^{\rm{std}}$. \begin{itemize} \item For $A>1/(\log\,\omega^{-1})$ we have $[\tau^{\rm all}]^*<2$. From \cite[Theorem 26.20]{NW12} we get SPT for $\Lambda^{\rm{std}}$ as well as the bounds on $[\tau^{\rm std}]^*$. The necessary condition for SPT with exponent $[\tau^{\rm std}]^*$ follows from Theorem~\ref{PTint}. \item To obtain PT we use \cite[Theorem 26.13]{NW12} which states that polynomial tractabilities for $\Lambda^{\rm{std}}$ and $\Lambda^{\rm{all}}$ are equivalent if ${\rm trace}(W_s)=\mathcal{O}(s^{\,q})$ for some $q\ge0$, where ${\rm trace}(W_s)$ is the sum of the eigenvalues of the operator $$W_s={\rm EMB}_s^*\,{\rm EMB}_s: \ H(K_{s,\boldsymbol{a},\boldsymbol{b}})\to H(K_{s,\boldsymbol{a},\boldsymbol{b}}). $$ In our case, $W_s$ is given by $$ W_sf=\sum_{\boldsymbol{h}\in\mathbb{Z}^s}\omega_{\boldsymbol{h}}\left< f,e_{\boldsymbol{h}}\right>_{H(K_{s,\boldsymbol{a},\boldsymbol{b}})}e_{\boldsymbol{h}} $$ with $e_{\boldsymbol{h}}$ given by~\eqref{basis}. The eigenpairs of $W_s$ are $(\omega_{\boldsymbol{h}},e_{\boldsymbol{h}})$ since $$ W_se_{\boldsymbol{h}}=\omega_{\boldsymbol{h}}e_{\boldsymbol{h}}= \omega^{\,\sum_{j=1}^sa_j|h_j|^{b_j}}\, e_{\boldsymbol{h}}\ \ \ \ \ \mbox{for all}\ \ \ \ \ \boldsymbol{h}\in \mathbb{Z}^s $$ and hence $$ {\rm trace}(W_s) =\prod_{j=1}^s\left(1+2A(1)\omega^{a_j}\right)\le \exp\left(2A(1)\sum_{j=1}^s\omega^{a_j}\right). $$ Due to the assumption $a_j/\log\,j\ge 1/(\log\,\omega^{-1})$ for $j\ge j_0$ we have $\omega^{a_j}\le j^{-1}$ for $j\ge j_0$.
Therefore there is a positive $C$ such that $$ {\rm trace}(W_s)\le C\,\exp\left(2A(1)\sum_{j=j_0}^sj^{-1}\right)\le C\,s^{2A(1)}. $$ This proves that PT for $\Lambda^{\rm{std}}$ holds iff PT for $\Lambda^{\rm{all}}$ holds. As we already proved, the latter holds iff $A>0$. The assumption on $a_j$ implies that $A\ge1/(\log\,\omega^{-1})>0$. \item To obtain WT we use \cite[Theorem 26.11]{NW12}. This theorem states that weak tractabilities for the classes $\Lambda^{\rm{std}}$ and $\Lambda^{\rm{all}}$ are equivalent if $\log\,{\rm trace}(W_s)=o(s)$. The proof of Theorem~\ref{thmapp(u)exp} in \cite{DKPW13} yields that $\lim_ja_j=\infty$ implies $\sum_{j=1}^s\omega^{a_j}=o(s)$. Hence, $$ \log\,{\rm trace}(W_s)\le \log\left(\exp\left(2A(1)\,o(s)\right)\right)=o(s), $$ as needed. \end{itemize} \end{proof} We briefly comment on Theorem~\ref{PTapprox}. For the class $\Lambda^{\rm{all}}$ we know necessary and sufficient conditions for SPT, PT and WT if the limit of $a_j/\log\, j$ exists. It would be interesting to study the case when this limit does not exist. It is easy to check that $A_{\rm inf}:=\liminf_ja_j/\log\,j>0$ implies SPT, but it is not clear whether SPT implies $A_{\rm inf}>0$. For the class $\Lambda^{\rm{std}}$ we only know sufficient conditions for PT and WT. It would be of interest to verify whether these conditions are also necessary. For SPT, as for multivariate integration, there remains a (small) gap between the sufficient and necessary conditions. Again it would be desirable to close this gap. Finally, we have results regarding the EC-notions of tractability, (e)--(g). The subsequent theorem has been shown in \cite{DKPW13}. \begin{theorem}\label{thmappectract} Consider $L_2$-approximation defined over the Korobov space $H(K_{s,\boldsymbol{a},\boldsymbol{b}})$ with arbitrary sequences $\boldsymbol{a}$ and $\boldsymbol{b}$ satisfying~\eqref{aabb}.
Then the following results hold for both classes $\Lambda^{\rm{all}}$ and $\Lambda^{\rm{std}}$: \begin{itemize} \item\label{appecpt} EC-PT (and, of course, EC-SPT) implies uniform exponential convergence, $ \mbox{EC-PT}\ \ \Rightarrow\ \ \mbox{UEXP}. $ \item\label{appecwt} We have \begin{eqnarray*} \mbox{EC-WT}\ &\Leftrightarrow&\ \lim_{j\to\infty}a_j=\infty,\\ \mbox{EC-WT+UEXP}\ &\Leftrightarrow&\ B<\infty\ \ \mbox{and}\ \ \lim_{j\to\infty}a_j=\infty. \end{eqnarray*} \item\label{appecequiv} The following notions are equivalent: \begin{eqnarray*} \ \ \ \ \mbox{EC-PT}\ &\Leftrightarrow&\ \mbox{EC-PT+EXP}\ \Leftrightarrow\ \mbox{EC-PT+UEXP} \\ \ \ \ \ & \Leftrightarrow&\ \mbox{EC-SPT}\ \Leftrightarrow\ \mbox{EC-SPT+EXP}\ \Leftrightarrow \ \mbox{EC-SPT+UEXP}. \end{eqnarray*} \item \label{appecspt} EC-SPT+UEXP holds iff the $b_j^{-1}$'s are summable and the $a_j$'s are exponentially large in~$j$, i.e., $$ \mbox{EC-SPT+UEXP}\ \ \Leftrightarrow\ \ B:=\sum_{j=1}^\infty\frac1{b_j}<\infty\quad\mbox{and}\quad \alpha^*:=\liminf_{j\to\infty}\frac{\log\,a_j}j>0. $$ In this case the exponent $\tau^*$ of EC-SPT satisfies $$ \tau^*\in \left[B,B+\min\left(B,\frac{\log\,3}{\alpha^*}\right)\right]. $$ In particular, if $\alpha^* =\infty$ then $\tau^* = B$. \end{itemize} \end{theorem} Again, the conditions are the same as for the integration problem in Theorem~\ref{thmintectract} and we have the same conditions for $\Lambda^{\rm{all}}$ and $\Lambda^{\rm{std}}$. The comments following Theorem~\ref{thmintectract} apply also for approximation. We remark that the results are constructive. The corresponding algorithms for the classes $\Lambda^{\rm{all}}$ and $\Lambda^{\rm{std}}$ can be found in \cite{DKPW13}. We want to stress that for the class $\Lambda^{\rm{std}}$ we obtain the results of Theorem~\ref{thmappectract} by computing function values at grid points with varying mesh-sizes for successive variables.
Such grids are also successfully used for multivariate integration in \cite{DKPW13,KPW12}. This relatively simple design of sample points should be compared with the design of (almost) optimal sample points for analogous problems defined over spaces of finite smoothness. In that case, the design is much harder and requires the use of the deep theory of digital nets and low discrepancy points, see~\cite{DP10,N92}. It is worth adding that if we use the definition~\eqref{PP13} of WT with $\kappa>1/b_*$ then it is proved in~\cite{PP13} that WT holds even for $a_j=a_1>0$ and $b_j=b_*>0$. Hence, the condition $\lim_ja_j=\infty$, which is necessary and sufficient for EC-WT, is now not needed. \section{Conclusion and Outlook}\label{secconcl} The study of tractability with exponential convergence is a new research subject. We presented a handful of results only for multivariate integration and approximation problems defined over Korobov spaces of analytic functions. Obviously, such a study should be performed for more general multivariate problems defined over more general spaces of $C^\infty$ or analytic functions. It would be very desirable to characterize multivariate problems for which the various notions of tractability with exponential convergence hold. In this survey we presented the notions of EC-WT, EC-PT and EC-SPT. We believe that other notions of tractability with exponential convergence should also be studied. In fact, all notions which were presented for tractability with respect to the pairs $({\varepsilon}^{-1},s)$ can be easily generalized and studied for the pairs $(1+\log\,{\varepsilon}^{-1},s)$. In particular, the notions of EC-QPT (exponential convergence-quasi-polynomial tractability) and EC-UWT (exponential convergence-uniform weak tractability) are probably the first candidates for such a study. Quasi-polynomial tract\-a\-bi\-lity was briefly mentioned in the footnote of Section~\ref{secapp}.
Uniform weak tractability generalizes the notion of weak tractability and means that $n({\varepsilon},s)$ is not exponential in ${\varepsilon}^{-\alpha}$ and $s^{\,\beta}$ for all positive $\alpha$ and $\beta$, see~\cite{S13}. The proof technique used for EC-tractability of integration and approximation is quite different from the proof technique used for standard tractability. Furthermore, it seems that some results are easier to prove for EC-tractability than their counterparts for standard tractability. In particular, the optimal design of sample points seems to be such an example. We are not sure if this holds for other multivariate problems. We hope that exponential convergence and tractability will be an active research field in the future. \begin{thebibliography}{99} \bibitem{Aron} N. Aronszajn. Theory of reproducing kernels. Trans. Amer. Math. Soc. 68, 337--404, 1950. \bibitem{DKPW13} J.~Dick, P.~Kritzer, F.~Pillichshammer, H.~Wo\'{z}niakowski. Approximation of analytic functions in Korobov spaces. To appear in J. Complexity, 2014. \bibitem{DKS14} J.~Dick, F.Y.~Kuo, I.H.~Sloan. High dimensional integration---the quasi-Monte Carlo way. Acta Numer. 22, 133--288, 2013. \bibitem{DLPW11} J.~Dick, G.~Larcher, F.~Pillichshammer, H.~Wo\'{z}niakowski. Exponential convergence and tractability of multivariate integration for Korobov spaces. Math. Comp. 80, 905--930, 2011. \bibitem{DP10} J.~Dick, F.~Pillichshammer. \textit{Digital Nets and Sequences. Discrepancy Theory and Quasi-Monte Carlo Integration}. Cambridge University Press, Cambridge, 2010. \bibitem{GW09} M. Gnewuch, H. Wo\'zniakowski. Generalized tractability for multivariate problems, Part II: Linear tensor product problems, linear information, unrestricted tractability. Found. Comput. Math. 9, 431--460, 2009. \bibitem{GW11} M. Gnewuch, H. Wo\'zniakowski. Quasi-polynomial tractability. J. Complexity 27, 312--330, 2011. \bibitem{KPW12} P. Kritzer, F. Pillichshammer, H. Wo\'{z}niakowski.
Multivariate integration of infinitely many times differentiable functions in weighted Korobov spaces. To appear in Math. Comp., 2014. \bibitem{KSS11} F.Y.~Kuo, Ch.~Schwab, I.H.~Sloan. Quasi-Monte Carlo methods for high dimensional integration: the standard (weighted Hilbert space) setting and beyond. ANZIAM J. 53, 1--37, 2011. \bibitem{N92} H.~Niederreiter. \textit{Random Number Generation and Quasi-Monte Carlo Methods}. SIAM, Philadelphia, 1992. \bibitem{NW08} E.~Novak, H.~Wo\'zniakowski. \textit{Tractability of Multivariate Problems, Volume I: Linear Information}. EMS, Z\"{u}rich, 2008. \bibitem{NW10} E.~Novak, H.~Wo\'zniakowski. \textit{Tractability of Multivariate Problems, Volume II: Standard Information for Functionals}. EMS, Z\"{u}rich, 2010. \bibitem{NW12} E.~Novak, H.~Wo\'zniakowski. \textit{Tractability of Multivariate Problems, Volume III: Standard Information for Operators}. EMS, Z\"{u}rich, 2012. \bibitem{PP13} A. Papageorgiou, I. Petras. A new criterion for tractability of multivariate problems. Submitted, 2013. \bibitem{S13} P. Siedlecki. Uniform weak tractability. J. Complexity 29, 438--453, 2013. \bibitem{SW98} I.H.~Sloan, H.~Wo\'{z}niakowski. When are quasi-Monte Carlo algorithms efficient for high dimensional integrals? J. Complexity 14, 1--33, 1998. \bibitem{TWW88} J.F.~Traub, G.W.~Wasilkowski, H.~Wo\'{z}niakowski. \textit{Information-Based Complexity}. Academic Press, New York, 1988. \end{thebibliography} \end{document}
\begin{document} \maketitle \begin{abstract} \paragraph{Abstract.} In this paper, we present Sch\"utzenberger's factorization in different combinatorial contexts and show that its validity is not restricted to these cases but can be extended to every Lie algebra endowed with an ordered basis. We also expose some elements of the relations between the Poincar\'e-Birkhoff-Witt bases of the enveloping algebra and their dual families. \paragraph{R\'esum\'e.} Dans cet article, nous pr\'esentons la factorisation de Sch\"utzenberger dans diff\'erents contextes combinatoires avant de montrer que c'est en fait une relation g\'en\'erale valide pour toute alg\`ebre de Lie dot\'ee d'une base ordonn\'ee. Nous pr\'esentons par ailleurs quelques \'el\'ements relatifs aux liens qui unissent les bases de Poincar\'e-Birkhoff-Witt de l'alg\`ebre enveloppante de l'alg\`ebre de Lie consid\'er\'ee et leurs familles duales. \end{abstract} \section{Introduction} From a computational standpoint, the Hausdorff group has not received the attention it deserves. Though its definition is simple (it is the group of group-like elements in a suitable complete bialgebra), it seems that, even in the free context (free Lie algebra), this group encompasses many of the combinatorial tools desired for a theory of infinite dimensional (Combinatorial) Lie Groups. This is done through Sch\"utzenberger's factorization formula in its resolution-of-unity-like formulation\footnote{Here indexed by the partially commutative monoid $w\in M(X,\theta)$ (\cite{CaFo}).} \begin{equation}\label{SF1} \sum_{w\in X^*} w\otimes w = \prod_{l \in \rm{Lyn}(X)}^{\searrow} e^{S_l\otimes P_l} \end{equation} which provides a beautiful framework for ``local coordinates'' in this infinite dimensional Lie Group. The paper is organized as follows. Section \ref{generalities} is devoted to generalities.
In Section \ref{examples}, we present four combinatorial examples: the non commutative, partially commutative and commutative free algebras, and the case of the stuffle algebra. Finally, in Section \ref{generalsettings}, we give the general theorem and present the duality between Radford bases and bases of Poincar\'e-Birkhoff-Witt type.\\ \section{Generalities} \label{generalities} \textbf{Multiindex notation:} If $Y=(y_i)_{i\in I}$ is a totally ordered family in an algebra $\mathcal{A}$ and $\alpha\in {\mathbb N}^{(I)}$, one defines $Y^{\alpha}$ by \begin{equation} y_{i_1}^{\alpha(i_1)}y_{i_2}^{\alpha(i_2)}\cdots y_{i_k}^{\alpha(i_k)} \end{equation} for every subset $J=\{i_1,i_2,\cdots, i_k\}\ , \ i_1 > i_2 > \cdots > i_k$, of $I$ which contains the support of $\alpha$ (it is easily shown that the value of $Y^\alpha$ does not depend on the choice of $J\supset \mathrm{supp}(\alpha)$).\\ In particular, if $(e_i)_{i \in I}$ denotes the canonical basis of ${\mathbb N}^{(I)}$, one has $Y^{e_i} = y_i$. \noindent \textbf{Characteristic:} Throughout the paper, $k$ denotes a field of characteristic $0$. \section{Combinatorial examples} \label{examples} \subsection{Non Commutative case} Let $X$ be an alphabet (totally) ordered with $<$. We denote by $\rm{Lyn}(X)$ the set of Lyndon words with letters in $X$. The standard factorization (\cite{Reutenauer93}) of $l \in \rm{Lyn}(X)$ is denoted by $\sigma(l) = ( l_1 , l_2 )$, where $l_2$ is the Lyndon proper right-factor of $l$ of maximal length.
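As a small illustrative worked case (added here for illustration, not from the original text), take $X=\{x,y\}$ with $x<y$:

```latex
% Lyndon words over X = {x,y} (x < y) of length at most 3:
\[
  \mathrm{Lyn}(X)\cap X^{\le 3}=\{x,\ y,\ xy,\ xxy,\ xyy\}.
\]
% Standard factorizations (l_2 = Lyndon proper right-factor of maximal length):
\[
  \sigma(xy)=(x,y),\qquad \sigma(xxy)=(x,xy),\qquad \sigma(xyy)=(xy,y);
\]
% for xxy the Lyndon proper right-factors are y and xy, and xy is the longer one;
% for xyy the factor yy is not Lyndon (it is not primitive), so l_2 = y.
```

Every word then factorizes as a nonincreasing product of Lyndon words, e.g. $yxxy = y\cdot xxy$ with $y > xxy$.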
The standard factorization and the fact that every word $w \in X^*$ can be factorized as a decreasing product of Lyndon words allow us to define a triangular basis $(P_{w})_{w \in X^*}$, of \emph{Poincar\'e-Birkhoff-Witt} type, for the free algebra $\ncp{k}{X}$ as follows: \begin{equation} \label{bracketing} P_w = \left\{ \begin{aligned} w \quad \quad \quad \quad \quad \, \, \, & \, \text{ if }|w|=1 ; \\ \left[P_{l_1} ,P_{l_2} \right]\quad \quad \quad \quad \, & \, \text{ if } w=l \in {\rm Lyn}(X) \text{ and } (l_1 , l_2) = \sigma(l) ; \\ P_{l_1}^{\alpha_1} \dots P_{l_n}^{\alpha_n} \quad \quad \quad \, & \, \text{ if } w = l_{1}^{\alpha_1} \dots l_{n}^{\alpha_n} \text{ with } l_1 > \dots > l_n\ . \end{aligned} \right. \end{equation} This basis is triangular because one has $$ P_w = w + \sum_{u > w} \scal{P_w}{u} u\ . $$ Because of the multihomogeneity of $(P_{w})_{w \in X^*}$, it is possible to construct a basis $(S_w)_{w \in X^*}$ of $\ncp{k}{X}$ satisfying $\scal{S_u}{P_v} = \kd{u}{v}$, for all $u,v \in X^*$. One can show (\cite{Reutenauer93}) that $S_w$ is given by \begin{equation} S_w = \left\{ \begin{aligned} w \quad \quad \quad \quad \, & \, \text{if }|w|=1 ; \\ x S_u \quad \quad \quad \, \, \, \, \, & \, \text{if }w=xu \text{ and }w \in \text{Lyn}(X) ; \\ \frac{S_{l_{1}}^{\, \shuffle \, \alpha_1} \shuffle \, \dots \shuffle \, S_{l_{n}}^{\, \shuffle \, \alpha_n}}{\alpha_1 ! \dots \alpha_n !} \quad \, & \, \text{ if } w = l_{1}^{\alpha_1} \dots l_{n}^{\alpha_n} \text{ (decreasing factorization)} \ . \end{aligned} \right. \end{equation} With these notations, the following equality holds: \begin{equation} \sum_{w \in X^*} w \otimes w = \prod_{l \in \rm{Lyn}(X)}^{\searrow} \exp(S_l \otimes P_l) \end{equation} where the product in the right-hand side is taken with the shuffle product on the left and the concatenation product on the right.
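As a quick degree-2 sanity check (worked out here, not part of the original text), take $X=\{x,y\}$ with $x<y$; then $xy$ is Lyndon and $yx=y\cdot x$ is a decreasing product of Lyndon words:

```latex
\[
  P_{xy}=[x,y]=xy-yx,\qquad P_{yx}=P_{y}P_{x}=yx,\qquad
  S_{xy}=xy,\qquad S_{yx}=\frac{S_{y}\,\shuffle\,S_{x}}{1!\,1!}=xy+yx .
\]
% Duality check on the degree-2 component:
%   <S_{xy}, P_{xy}> = <xy, xy - yx> = 1,   <S_{xy}, P_{yx}> = <xy, yx> = 0,
%   <S_{yx}, P_{xy}> = 1 - 1 = 0,           <S_{yx}, P_{yx}> = 1.
```

so $\scal{S_u}{P_v}=\kd{u}{v}$ indeed holds for all words of length $2$.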
\subsection{Partially commutative case} Let $X$ be a set and $\theta \subset X \times X$ a symmetric and antireflexive relation on $X$ (here, antireflexive means that for all $x \in X, \, (x,x) \notin \theta$). We denote by $M(X , \theta)$ the free partially commutative monoid over $X$ (\cite{CaFo}, \cite{vie86}). It is defined by generators and relations by \begin{equation} M(X,\theta)=\langle X , \left\{ ( x y , y x ) \right\}_{(x,y) \in \theta} \rangle_{\rm Mon}\ . \end{equation} Let $\ncp{k}{X,\theta}$ denote the partially commutative free algebra over $X$ (\cite{DuchampKrob}), defined by generators and relations by $\langle X, ( xy = yx)_{(x,y) \in \theta} \rangle_{k\text{-alg}}$, and $k \left[ M(X,\theta ) \right]$ the algebra of the partially commutative free monoid. By universal arguments, one can easily see that \begin{equation} \ncp{k}{X,\theta} \cong k \left[ M(X , \theta ) \right]\ . \end{equation} Therefore, it is possible to consider the elements of $k\left[ M(X,\theta ) \right]$ as polynomials over the partially commutative free monoid and set, for all $P \in k \left[ M(X,\theta ) \right]$, \begin{equation} P = \sum_{m \in M(X,\theta)} \scal{P}{m} m\ . \end{equation} We are interested in the Hopf algebra structure of $(\ncp{k}{X,\theta} , \mu, 1_{M(X,\theta)} , \Delta , \epsilon , S)$ where ($\mu$ and $1_{M(X,\theta)}$ being straightforward) $\epsilon (P) = \scal{P}{1}$, $\Delta(x) = x \otimes 1 + 1 \otimes x$ and $S(x_1 \dots x_n) = (-1)^{n} x_n \dots x_1$. It is known that the primitive elements of $\ncp{k}{X,\theta}$ are the elements of the free partially commutative Lie algebra ${\mathscr L}_k (X , \theta)$: ${\rm Prim} (\ncp{k}{X,\theta}) = {\mathscr L}_k (X , \theta)$.
Moreover, it is possible to generalize Lyndon words to partially commutative monoids (\cite{lal}): a partially commutative Lyndon word is a non-empty, primitive (partially commutative) word which is minimal (for the order on $M(X,\theta)$ induced by the lexicographic order on well chosen normal forms (\cite{LalondeKrob})) in its conjugacy class. We denote their set by $\text{Lyn}(X , \theta)$. Krob and Lalonde have generalized the standard factorization of Lyndon words to the partially commutative case: \begin{proposition} Let $w$ belong to $M(X , \theta )$ with length $\geq 2$. Then there exists a unique factorization $w = fn$, called the \emph{standard factorization} of $w$, this unique pair being denoted by $\sigma (w) = ( f , n )$, such that \begin{enumerate} \item $f \neq 1$ ; \item $n \in {\rm Lyn}(X, \theta)$ ; \item $n$ is minimal among all possible partially commutative Lyndon words that provide a factorization of $w$. \end{enumerate} Moreover, if $l \in {\rm Lyn}(X,\theta)$ with length $\geq 2$ and with $a_i$ as unique initial letter, and if $\sigma(l) = (f,n)$ is the standard factorization of $l$, then $f \in {\rm Lyn}(X , \theta)$ with $a_i$ as unique initial letter and $f < l < n$. \end{proposition} These properties allow us to construct a family $(P_l)_{l \in {\rm Lyn}(X,\theta)}$ in the same way as in the non commutative case (see Eq. (\ref{bracketing})). One can show (\cite{lal}) that this family forms a basis of ${\mathscr L}_k(X,\theta)$ and that $P_l$ satisfies \begin{equation} P_l = l + \sum_{\genfrac{}{}{0pt}{}{l' > l}{l' \in \text{Lyn}(X,\theta)}} \alpha_{l'} l'. \end{equation} Moreover, it is possible to show that each partially commutative word admits a unique nonincreasing factorization in terms of (partially commutative) Lyndon words.
Therefore, one can define $P_w, \, w \in M(X,\theta)$ and show that (for example by translating the proof of \cite{Reutenauer93} into the language of partially commutative words) \begin{equation} P_w = w + \sum_{u > w \in M(X,\theta)} \scal{P_w}{u} u\ . \end{equation} Finally, the non commutative construction can be extended to the dual family $S_w$ and this allows us to write the following factorization \begin{equation} \sum_{w \in M(X,\theta)} w \otimes w = \prod_{l \in \rm{Lyn}(X , \theta)}^{\searrow} \exp(S_l \otimes P_l)\ . \end{equation} \begin{remark} Note that Reutenauer noticed that the construction of the dual basis is possible in every enveloping algebra (see Theorem 5.3 and Section 5.7 of \cite{Reutenauer93}). \end{remark} \subsection{Commutative case} The commutative case is obtained from the partially commutative setting by choosing $\theta = X \times X - \text{diag}(X)$ (where $\text{diag} (X) = \left\{ (x,x) , \, x \in X \right\}$). Then $\ncp{k}{X,\theta} = k \left[ X \right]$ is the algebra of commutative polynomials and the primitive elements are the homogeneous polynomials of degree 1: \begin{equation} \text{Prim}(k \left[ X \right]) = k . X = \left\{ P = \sum_{x \in X} \scal{P}{x} x \right\}\ . \end{equation} The set of Lyndon words is $X$. The specialization of equation (\ref{SF1}) yields \begin{equation} \prod_{x \in X} \exp(x \otimes x) = \sum_{\alpha \in {\mathbb N}^{(X)}} X^\alpha \otimes X^\alpha= \sum_{w \in X^{\oplus}} w \otimes w\ . \end{equation} Indeed, with $x$ a letter, one has \begin{equation} x^k \, \shuffle \, x = \frac{(k+1)!}{k!} x^{k+1} = (k+1) x^{k+1}\ . \end{equation} Thus, \begin{equation} \prod_{x \in X} \sum_{n \geq 0} \frac{x^{\shuffle \, n} \otimes x^n}{n!} = \prod_{x \in X} \sum_{n \geq 0} x^n \otimes x^n \end{equation} and one easily recovers the result. \subsection{Stuffle algebra} Let $Y = \left\{ y_i \right\}_{i \geq 1}$.
We endow $\ncp{k}{Y}$ with the stuffle product given by the following recursion: for all $y_i, \, y_j \in Y$ and for all $u , \, v \in Y^*$, \begin{equation} \left\lbrace \begin{aligned} u \stuffle 1 & = 1 \stuffle u = u ; \\ y_i u \stuffle y_j v & = y_i ( u \stuffle y_j v ) + y_j ( y_i u \stuffle v ) + y_{i+j} ( u \stuffle v)\ . \end{aligned} \right. \end{equation} We define on $\ncp{k}{Y}$ a gradation with values in ${\mathbb N}$ given by an integer valued weight function on $Y^*$. It is a morphism of monoids given on the letters by $|y_s| = s$. Thus \begin{equation} |w| = \sum_{k=1}^{\ell(w)}|w\left[ k \right]|\ , \end{equation} the only word of weight $0$ is the empty word and the number of words of weight $n>0$ is $2^{n-1}$. Therefore, $(\ncp{k}{Y} , \stuffle , 1_{Y^*} )$ is graded in finite dimensions. The stuffle product then admits a dual law denoted by $\Delta_{\stuffle}$. It satisfies \begin{equation} \scal{ u \stuffle v}{w} = \scal{u \otimes v}{\Delta_{\stuffle} (w)}, \, \text{ for all } \, u , \, v , \, w \, \in Y^*\ . \end{equation} It is given on the letters by \begin{equation} \Delta_{\stuffle}(y_s) = y_s \otimes 1 + 1 \otimes y_s + \sum_{s_1+s_2 =s} y_{s_1} \otimes y_{s_2} \end{equation} and one can prove that $\Delta_{\stuffle}$ is a morphism of algebras from $\ncp{k}{Y}$ to $\ncp{k}{Y} \otimes \ncp{k}{Y}$. Then, $(\ncp{k}{Y} , conc, 1_{Y^*} , \Delta_{\stuffle} , \epsilon)$ is an ${\mathbb N}$-graded cocommutative bialgebra. Thus, it is possible to apply the Cartier-Quillen-Milnor-Moore theorem which ensures that $\ncp{k}{Y}$ is the enveloping algebra of the (Lie) algebra of its primitive elements (we recall that $\text{char}(k) = 0$): \begin{equation} \ncp{k}{Y} \equiv {\mathscr U} ( \text{Prim}(\ncp{k}{Y}))\ .
\end{equation} Let $(B,<)$ be any totally ordered basis of $\text{Prim}(\ncp{k}{Y})$ and $(S_\alpha)_{\alpha\in {\mathbb N}^{(B)}}$ the dual basis of $(B^\alpha)_{\alpha\in {\mathbb N}^{(B)}}$. By the general setting (see below), one has \begin{equation} \prod_{b \in B}^{\searrow} \exp(S_b \otimes b) = \sum_{w \in Y^*} w \otimes w\ . \end{equation} \section{General setting} \label{generalsettings} \subsection{From PBW to Radford} Since potential combinatorial applications include the free Lie algebra (noncommutative or with partial commutations, as presented above) and finite dimensional Lie algebras, where the factorization has only a finite number of terms, we prefer to state the result in its full generality (\textit{i.e.} considering an arbitrary Lie algebra). Let us first give the context. Let ${\mathfrak g}$ be a $k$-Lie algebra and $B=(b_i)_{i\in I}$ an ordered basis of it. The PBW theorem states exactly that $(B^\alpha)_{\alpha\in {\mathbb N}^{(I)}}$ is a basis of $\mathcal{U}({\mathfrak g})$.\\ Now, one considers $(S_\alpha)_{\alpha\in {\mathbb N}^{(I)}}$, the dual family, i.e. the family of linear forms on $\mathcal{U}({\mathfrak g})$ defined by \begin{equation} \scal{S_\alpha}{B^{\beta}}=\delta_{\alpha,\beta}\ .
\end{equation} One has \begin{eqnarray} \label{mult} S_\alpha*S_\beta \stackrel{(1)}{=} \sum_{\gamma\in {\mathbb N}^{(I)}} \scal{S_\alpha*S_\beta}{B^{\gamma}}S_{\gamma}= \sum_{\gamma\in {\mathbb N}^{(I)}} \scal{S_\alpha\otimes S_\beta}{\Delta(B^{\gamma})}^{\otimes 2}S_{\gamma}=\cr \sum_{\gamma\in {\mathbb N}^{(I)}} \scal{S_\alpha\otimes S_\beta} {\sum_{\gamma_1+\gamma_2=\gamma}\frac{\gamma !}{\gamma_1 !\,\gamma_2 !}\, B^{\gamma_1}\otimes B^{\gamma_2}}^{\otimes 2}S_{\gamma}= \frac{(\alpha+\beta) !}{\alpha !\,\beta !}\, S_{\alpha+\beta} \end{eqnarray} which shows that the family $T_\alpha=\alpha !\, S_\alpha$ is multiplicative ($T_\alpha*T_\beta=T_{\alpha+\beta}$)\footnote{As this family is (linearly) free, the correspondence $k[I]\rightarrow \mathcal{U}^*$ is an isomorphism onto its image. This image is exactly the space of linear forms that are of finite support {\it on the PBW basis $(B^\alpha)_{\alpha\in {\mathbb N}^{(I)}}$}.}. \begin{remark} At first, the right-hand-side member of equality $(1)$ of relation (\ref{mult}) may provide an infinite sum (we do not know whether $S_\alpha*S_\beta \in \mathrm{span}_{\gamma \in {\mathbb N}^{(I)}} (S_{\gamma})$) as, for a suitable topology, every $\phi \in \mathcal{U}({\mathfrak g})^*$ reads \[ \phi = \sum_{\gamma \in {\mathbb N}^{(I)}} \scal{\phi}{B^{\gamma}} S_{\gamma}\ . \] \end{remark} Since the setting of identity \mref{SF1} requires, in general, infinite sums and products, we need to have at our disposal a topology, a convergence criterion or some limiting process. This will be done by endowing $\mathcal{U}({\mathfrak g})$ with the discrete topology and $\mathrm{End}_k(\mathcal{U}({\mathfrak g}))$ with the topology of pointwise convergence.
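For instance (a one-line instance of relation (\ref{mult}), added here for illustration), taking $\alpha=\beta=e_i$:

```latex
\[
  S_{e_i}*S_{e_i}=\frac{(2e_i)!}{e_i!\,e_i!}\,S_{2e_i}=2\,S_{2e_i},
  \qquad\text{so}\qquad
  T_{e_i}*T_{e_i}=S_{e_i}*S_{e_i}=2!\,S_{2e_i}=T_{2e_i}.
\]
```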
This means that a net $(f_i)_{i\in A}$ ($A$ is a directed set\footnote{A directed (or filtered) set is an ordered set $(A,<)$ such that every pair of elements is bounded above, i.e. \begin{equation} (\forall a,b\in A)(\exists c\in A)(a\leq c,\, b\leq c)\ . \end{equation} }) converges to $g\in \mathrm{End}_k(\mathcal{U}({\mathfrak g}))$ iff \begin{equation} (\forall b\in \mathcal{U}({\mathfrak g}))(\exists N\in A)(\forall i\geq N)(f_i(b)=g(b))\ . \end{equation} This gives the two following derived criteria considering the partial sums and products.\\ A family $(f_i)_{i\in J}$ is said to be {\it summable} if the net of partial sums $$ S_F=\sum_{j\in F}f_j $$ (for $F$ any finite subset of $J$) converges to some $g\in \mathrm{End}_k(\mathcal{U}({\mathfrak g}))$. This can be formalized as \begin{equation}\label{summable} (\forall b\in \mathcal{U}({\mathfrak g}))(\exists F \subset_{finite} J)(\forall F')(F\subset F' \subset_{finite} J\Longrightarrow \Big(\sum_{j\in F'}f_j\Big)(b)=g(b))\ . \end{equation} Similarly, a family $(f_i)_{i\in J}$ (this time we need that $J$ be totally ordered) is said to be {\it multipliable} (w.r.t. convolution) if the net of partial products $$ M_F=\prod^{\rightarrow}_{j\in F}f_j $$ (for $F$ any finite subset of $J$) converges to some $g\in \mathrm{End}_k(\mathcal{U}({\mathfrak g}))$. This will be formalized as \begin{equation}\label{multipliable} (\forall b\in \mathcal{U}({\mathfrak g}))(\exists F \subset_{finite} J)(\forall F')(F\subset F' \subset_{finite} J\Longrightarrow \Big(\prod^{\rightarrow}_{j\in F'}f_j\Big)(b)=g(b))\ . \end{equation} We are now in a position to state the general factorization theorem. \begin{theorem} Let $k$ be a field of characteristic zero, ${\mathfrak g}$ a $k$-Lie algebra, $B=(b_i)_{i\in I}$ an ordered basis of it and $(B^\alpha)_{\alpha\in {\mathbb N}^{(I)}}$ be the associated PBW basis.
Denoting $(S_\alpha)_{\alpha\in {\mathbb N}^{(I)}}$ the dual family of $(B^\alpha)_{\alpha\in {\mathbb N}^{(I)}}$ in $\mathcal{U}^*$, one gets the following \begin{equation}\label{maim_fact_thm} \sum_{\alpha\in {\mathbb N}^{(I)}} S_\alpha\otimes B^\alpha=\prod^{\rightarrow}_{i\in I} \exp\,(S_{e_i}\otimes B^{e_i}) \end{equation} where $e_i$ denotes the canonical basis of ${\mathbb N}^{(I)}$ (given by $e_i(j)=\delta_{ij}$). \end{theorem} \begin{remark} The two members of \mref{maim_fact_thm} are in fact a resolution of the identity through the mapping $$ \Phi : V^* \otimes V \rightarrow \mathrm{End}^{\rm finite} (V) $$ which associates to each separated tensor $f\otimes v\in V^* \otimes V$ the endomorphism $\Phi(f\otimes v) : b \mapsto f(b) \cdot v\ .$ This mapping extends by continuity to series and gives $\sum_{\alpha\in {\mathbb N}^{(I)}} S_\alpha\otimes B^\alpha$ as an expression of $\mathrm{Id}_{\mathcal{U}}$. \end{remark} \subsection{From Radford to PBW} In this paragraph, we take the problem the other way round, starting from a family of linear forms (on $\mathcal{U}({\mathfrak g})$) $(T_\alpha)_{\alpha \in {\mathbb N}^{(I)}}$ such that $T_\alpha \star T_\beta = T_{\alpha + \beta}$ (such a family is called a \emph{Radford family}) and that is in duality with some basis $( B^{\left[ \alpha \right]} )_{\alpha \in {\mathbb N}^{(I)}}$ of $\mathcal{U}({\mathfrak g})$ (that is, one has $\displaystyle \scal{T_\alpha}{B^{\left[ \beta \right]}} = \delta_{\alpha \beta}$). Here, the brackets around the multiindex recall that $B^{\left[ \beta \right]}$ is not a product as defined in Section \ref{generalities}. \begin{theorem} Let $(T_\alpha)_{\alpha \in {\mathbb N}^{(I)}}$ be a multiplicative basis of $\mathcal{U}({\mathfrak g})^*$ in duality with some basis $(B^{\left[ \alpha \right]})_{\alpha \in {\mathbb N}^{(I)}}$ of $\mathcal{U}({\mathfrak g})$. Then $(B^{\left[ e_i \right]})_{i \in I}$ is a basis of ${\mathfrak g}$.
\end{theorem} \begin{remark} This technique originates from the application of the CQMM Theorem to the stuffle algebra $(\ncp{k}{Y} , conc, 1_{Y^*} , \Delta_{\stuffle} , \epsilon )$ with $\text{\rm Prim}(\ncp{k}{Y}) = {\mathfrak g}$ and $\ncp{k}{Y} = \mathcal{U}({\mathfrak g})$. Note that, in that case, $y_p$ is not primitive anymore if $p>1$. Indeed, one can use $\log_*(I)$ which is a projector on the space of primitive elements $\text{\rm Prim}(\ncp{k}{Y})$: \begin{equation} \begin{aligned} \log_*(I)(y_p) & = y_p - \frac{1}{2} \sum_{p_1+p_2=p} y_{p_1}y_{p_2} \\ & + \frac{1}{3} \sum_{p_1+p_2+p_3=p} y_{p_1}y_{p_2} y_{p_3} \\ & - \frac{1}{4} \dots \end{aligned} \end{equation} For example, \begin{equation} \begin{aligned} \log_*(I)(y_4) & = y_4 - \frac{1}{2} ( y_{1}y_{3} + y_{2}y_{2} + y_{3}y_{1} ) \\ & + \frac{1}{3} ( y_{1}y_{1} y_{2} + y_{1}y_{2} y_{1} + y_{2}y_{1} y_{1} ) \\ & - \frac{1}{4} y_1^4\ . \end{aligned} \end{equation} \end{remark} For the stuffle product, the set $Y$ forms a transcendence basis. We recall here the recursive definition of the dual coproduct of the stuffle product: \begin{equation} \begin{aligned} \Delta_{\stuffle} (y_n w) & = \Delta_{\stuffle}(y_n) \Delta_{\stuffle}(w) ; \\ \Delta_{\stuffle} (y_n ) & = y_n \otimes 1 + 1 \otimes y_n + \sum_{\genfrac{}{}{0pt}{}{p+q = n}{p,q \geq 1}} y_p \otimes y_q\ . \end{aligned} \end{equation} One has \begin{equation*} \scal{ u \stuffle v }{ w } = \scal{ u \otimes v }{ \Delta_{\stuffle} w }\ .
\end{equation*} Thus \begin{equation} \begin{aligned} y_p u \stuffle y_q v & = \sum_{w \in Y^*} \scal{ y_p u \stuffle y_q v }{ w } w \\ & = \sum_{w \in Y^*} \scal{ y_p u \otimes y_q v }{ \Delta_{\stuffle}(w) } w \\ & = \scal{ y_p u \otimes y_q v }{ \Delta_{\stuffle}(1) } + \sum_{w \in Y^+} \scal{ y_p u \otimes y_q v }{ \Delta_{\stuffle}(w) } w \\ & = \sum_{\genfrac{}{}{0pt}{}{r \geq 1}{w \in Y^*}} \scal{ y_p u \otimes y_q v }{ \Delta_{\stuffle} (y_r w) } y_r w \\ & = \sum_{\genfrac{}{}{0pt}{}{r \geq 1}{w \in Y^*}} \scal{ y_p u \otimes y_q v }{ \Big(y_r \otimes 1 + 1 \otimes y_r + \sum_{r_1+r_2=r} y_{r_1} \otimes y_{r_2}\Big) \Delta_{\stuffle} (w) } y_r w \\ & = \sum_{\genfrac{}{}{0pt}{}{r \geq 1}{w \in Y^*}} \scal{ y_p u \otimes y_q v }{ (y_r \otimes 1) \Delta_{\stuffle} (w) } y_r w \\ & + \sum_{\genfrac{}{}{0pt}{}{r \geq 1}{w \in Y^*}} \scal{ y_p u \otimes y_q v }{ (1 \otimes y_r) \Delta_{\stuffle} (w) } y_r w \\ & + \sum_{\genfrac{}{}{0pt}{}{r \geq 1}{w \in Y^*}} \sum_{r_1 + r_2 =r} \scal{ y_p u \otimes y_q v }{ (y_{r_1} \otimes y_{r_2}) \Delta_{\stuffle} (w) } y_r w \\ & = \sum_{w \in Y^*} \scal{ u \otimes y_q v }{ \Delta_{\stuffle} (w) } y_p w + \sum_{w \in Y^*} \scal{ y_p u \otimes v }{ \Delta_{\stuffle} ( w ) } y_q w + \sum_{w \in Y^*} \scal{ u \otimes v }{ \Delta_{\stuffle} ( w ) } y_{p+q} w \\ & = y_p(u \stuffle y_q v ) + y_q (y_p u \stuffle v ) + y_{p+q} (u \stuffle v)\ . \end{aligned} \end{equation} This proves that $\left( \text{Lyn}(Y)^{\stuffle \alpha} \right)_{\alpha \in {\mathbb N}^{(\text{Lyn}(Y))}}$ is homogeneous with $|y_i|=i$. Now, one can consider the set of products of the form $\tilde{B}^\alpha = \left[ (B^{\left[ e_i \right]})_{i \in I} \right]^\alpha$ and address the question whether $\tilde{B}^\alpha = B^{\left[ \alpha \right]}$. This is true in the case where $B^\alpha$ is a PBW basis w.r.t. the chosen order.
Therefore, when one starts with a Poincar\'e-Birkhoff-Witt basis, the factorization (\ref{maim_fact_thm}) holds. If one starts with a multiplicative family $(T_\alpha)_{\alpha \in {\mathbb N}^{(I)}}$, the following identity holds \begin{equation} \sum_{\alpha\in {\mathbb N}^{(I)}} S_\alpha \otimes \tilde{B}^\alpha=\prod^{\rightarrow}_{i\in I} \exp\,(S_{e_i} \otimes B^{\left[ e_i \right]}) \end{equation} (where $T_\alpha = \alpha ! S_\alpha$). But it remains to be proved that the products of the $B^{\left[ e_i \right]}$'s yield the elements $B^{ \left[ \alpha \right]}$ if one wants to have a factorization of the form (\ref{SF1}). \section{Conclusion} Though it frequently appears in relation to the free algebra, Sch\"utzenberger's factorization is not a specific property of this structure. On the contrary, it is a very general relation which holds in every enveloping algebra, as shown by our theorem. \\ It is also interesting because it underlines the duality between bases of Poincar\'e-Birkhoff-Witt type and Radford bases. We have not yet fully investigated the relations between these structures and hope to shed more light on this subject by computing more combinatorial examples. \acknowledgements \label{sec:ack} The authors wish to acknowledge support from Agence Nationale de la Recherche (Paris, France) under Program No. ANR-08-BLAN-0243-2 as well as support from ``Projet interne au LIPN'' ``Polyz\^eta functions''. \label{sec:biblio} \end{document}
\begin{document} \title{Lehmer-type congruences for lacunary harmonic sums modulo $p^2$} \author{Hao Pan} \address{Department of Mathematics, Nanjing University, Nanjing 210093, People's Republic of China} \email{[email protected]} \subjclass[2000]{Primary 11A07; Secondary 11B65} \thanks{The author was supported by the National Natural Science Foundation of China (Grant No. 10771135).}\keywords{} \date{}\maketitle \begin{abstract} In this paper, we establish some Lehmer-type congruences for lacunary harmonic sums modulo $p^2$. \end{abstract} \section{Introduction} \setcounter{equation}{0} \setcounter{Thm}{0} \setcounter{Lem}{0} \setcounter{Cor}{0} Wolstenholme's well-known harmonic series congruence asserts that \begin{equation} \label{wolstenholme} \sum_{k=1}^{p-1}\frac{1}{k}\equiv 0\pmod{p^2} \end{equation} for each prime $p\geq 5$. With the help of (\ref{wolstenholme}), Wolstenholme \cite{Wolstenholme62} proved that $$\binom{mp}{np}\equiv\binom{m}{n}\pmod{p^3} $$ for any $m,n\geq 1$ and prime $p\geq 5$. In 1938, Lehmer \cite{Lehmer38} discovered an interesting congruence as follows: \begin{equation*} \sum_{j=1}^{(p-1)/2}\frac{1}{j}\equiv-\frac{2^p-2}{p}+\frac{(2^{p-1}-1)^2}{p}\pmod{p^2} \end{equation*} for each prime $p\geq 3$. Define $$ {\mathcal H}_{r,m}(n)=\sum_{\substack{1\leq k\leq n\\ k\equiv r\pmod{m}}}\frac{1}{k}. $$ Clearly, with the help of (\ref{wolstenholme}), Lehmer's congruence can be rewritten as \begin{equation} \label{lehmer2} {\mathcal H}_{p,2}(p-1)\equiv\frac{2^{p-1}-1}{p}-\frac{(2^{p-1}-1)^2}{2p}\pmod{p^2}.
\end{equation} In fact, Lehmer also proved three other congruences of the same flavor: \begin{equation} \label{lehmer3} {\mathcal H}_{p,3}(p-1)\equiv\frac{3^{p-1}-1}{2p}-\frac{(3^{p-1}-1)^2}{4p}\pmod{p^2}, \end{equation} \begin{equation} \label{lehmer4} {\mathcal H}_{p,4}(p-1)\equiv\frac{3(2^{p-1}-1)}{4p}-\frac{3(2^{p-1}-1)^2}{8p}\pmod{p^2} \end{equation} and \begin{equation} \label{lehmer6} {\mathcal H}_{p,6}(p-1)\equiv\frac{2^{p-1}-1}{3p}+\frac{3^{p-1}-1}{4p}-\frac{(2^{p-1}-1)^2}{6p}-\frac{(3^{p-1}-1)^2}{8p}\pmod{p^2}, \end{equation} where $p\geq 5$ is a prime. The proofs of (\ref{lehmer2}), (\ref{lehmer3}), (\ref{lehmer4}) and (\ref{lehmer6}) are based on the values of the Bernoulli polynomial $B_{p(p-1)}(x)$ at $x=1/2,1/3,1/4,1/6$. However, no other congruence for ${\mathcal H}_{p,m}(p-1)$ modulo $p^2$ is known, partly because very little is known about the values of $B_{p(p-1)}(n/m)$ when $m\not=1,2,3,4,6$. Some Lehmer-type congruences modulo $p$ (not modulo $p^2$!) have been proved in \cite{Williams82,Sun92,Sun93,Sun08,SunSun92,Sun02}. In this paper, we shall investigate Lehmer-type congruences modulo $p^2$. Define $$ {\mathcal T}_{r,m}(n)=\sum_{\substack{0\leq k\leq n\\ k\equiv r\pmod{m}}}\binom{n}{k} \qquad\text{and}\qquad {\mathcal T}_{r,m}^*(n)=\sum_{\substack{0\leq k\leq n\\ k\equiv r\pmod{m}}}(-1)^k\binom{n}{k}. $$ Clearly ${\mathcal T}_{r,m}^*(n)=(-1)^n{\mathcal T}_{n-r,m}^*(n)$ and $$ {\mathcal T}_{r,m}^*(n)=\begin{cases} (-1)^r{\mathcal T}_{r,m}(n)&\qquad\text{if }m\text{ is even},\\ (-1)^r({\mathcal T}_{r,2m}(n)-{\mathcal T}_{m+r,2m}(n))&\qquad\text{if }m\text{ is odd}.
\end{cases} $$ As we shall see soon, it is not difficult to show that $$ {\mathcal H}_{r,m}(p-1)\equiv-\frac{{\mathcal T}_{r,m}^*(p)-\delta_{r,m}(p)}p\pmod{p}, $$ where \begin{equation} \label{delta} \delta_{r,m}(p)=\begin{cases} 1&\qquad\text{if }r\equiv 0\pmod{m},\\ -1&\qquad\text{if }r\equiv p\pmod{m},\\ 0&\qquad\text{otherwise}. \end{cases} \end{equation} \begin{Thm} \label{t1} Let $m\geq 2$ be an integer and let $p>3$ be a prime with $p\not=m$. Then \begin{align} \label{t1e} {\mathcal H}_{p,m}(p-1)\equiv&-\frac{2{\mathcal T}_{p,m}^*(p)+2}p+\frac{{\mathcal T}_{p,m}^*(2p)+2}{4p}\pmod{p^2}. \end{align} \end{Thm} Let us see how (\ref{lehmer2}) follows from Theorem \ref{t1}. Clearly we have ${\mathcal T}_{0,2}^*(n)=2^{n-1}$ and ${\mathcal T}_{1,2}^*(n)=-2^{n-1}$. Hence in view of (\ref{t1e}), for any prime $p\geq 5$, \begin{align*} {\mathcal H}_{p,2}(p-1)\equiv&-\frac{2{\mathcal T}_{p,2}^*(p)+2}p+\frac{{\mathcal T}_{p,2}^*(2p)+2}{4p}= \frac{2^p-2}p-\frac{2^{2p-1}-2}{4p}\pmod{p^2}. \end{align*} In \cite{Sun02}, Sun showed that ${\mathcal T}_{r,m}(n)$ can be expressed in terms of some linearly recurrent sequences with orders not exceeding $\phi(m)/2$, where $\phi$ is the Euler totient function. Thus in view of Theorem \ref{t1}, for each $m$, we always have a Lehmer-type congruence for ${\mathcal H}_{p,m}(p-1)$ modulo $p^2$, involving some linearly recurrent sequences. However, as we shall see later, (\ref{t1e}) is not suitable for deriving (\ref{lehmer3}), (\ref{lehmer4}) and (\ref{lehmer6}). So we need the following theorem. \begin{Thm} \label{t2} Let $m\geq 2$ be an integer and let $p>3$ be a prime with $p\not=m$. Then \begin{equation} \label{t2e1} {\mathcal H}_{p,m}(p-1)\equiv-\frac{{\mathcal T}_{p,m}^*(2p)+2}{4p}-\frac p2\sum_{\substack{1\leq r\leq m\\ 2r\not\equiv p\pmod{m}}}{\mathcal H}_{r,m}(p-1)^2\pmod{p^2}.
\end{equation} \end{Thm} When $m=3$, we have ${\mathcal T}_{p,3}^*(2p)=-2\times 3^{p-1}$ (cf. \cite[Theorem 1.9]{Sun92} and \cite[Theorem 3.2]{Sun02}). Thus by (\ref{t2e1}), we get $$ {\mathcal H}_{p,3}(p-1) \equiv-\frac{{\mathcal T}_{p,3}^*(2p)+2}{4p}-p\bigg(\frac{{\mathcal T}_{p,3}^*(2p)+2}{4p}\bigg)^2=\frac{3^{p-1}-1}{2p}-\frac{(3^{p-1}-1)^2}{4p}\pmod{p^2}, $$ since $$ {\mathcal H}_{0,3}(p-1)\equiv-{\mathcal H}_{p,3}(p-1)\equiv\frac{{\mathcal T}_{p,3}^*(2p)+2}{4p}\pmod{p}. $$ Let us apply Theorem \ref{t2} to obtain more Lehmer-type congruences. The Fibonacci numbers $F_0,F_1,F_2,\ldots$ are given by $F_0=0$, $F_1=1$ and $F_{n}=F_{n-1}+F_{n-2}$ for $n\geq 2$. It is well-known that $F_{p}\equiv\jacob{5}{p}\pmod{p}$ and $F_{p-\jacob{5}{p}}\equiv0\pmod{p}$ for prime $p\not=2,5$, where $\jacob{\cdot}{p}$ is the Legendre symbol. Williams \cite{Williams82} proved that $$ \frac{2}{5}\sum_{1\leq k\leq 4p/5-1}\frac{(-1)^k}{k}\equiv\frac{F_{p-\jacob{5}{p}}}{p}\pmod{p} $$ for prime $p\not=2,5$. Subsequently Sun and Sun \cite[Corollary 3]{SunSun92} proved that \begin{equation} \label{SunSun} {\mathcal H}_{2p,5}(p-1)\equiv-{\mathcal H}_{-p,5}(p-1)\equiv-\frac{F_{p-\jacob{5}{p}}}{2p}\pmod{p}. \end{equation} We have a Lehmer-type congruence as follows. \begin{Thm} \label{t3} Suppose that $p>5$ is a prime. Then \begin{equation} \label{t3e} {\mathcal H}_{p,5}(p-1)\equiv\frac{5^{\frac{p-1}{2}}F_{p}-1}{p}-\frac{5^{p-1}F_{2p-\jacob{5}{p}}-1}{4p}\pmod{p^2}. \end{equation} \end{Thm} The Pell numbers $P_0,P_1,P_2,\ldots$ are given by $P_0=0$, $P_1=1$ and $P_{n}=2P_{n-1}+P_{n-2}$ for $n\geq 2$. We know $P_{p}\equiv\jacob{2}{p}\pmod{p}$ and $P_{p-\jacob{2}{p}}\equiv0\pmod{p}$ for every odd prime $p$.
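As a quick numerical sanity check of these classical congruences (computed here, not part of the original text), take $p=7$, so that $\jacob{5}{7}=-1$ and $\jacob{2}{7}=1$:

```latex
\[
  F_{7}=13\equiv-1\pmod{7},\qquad F_{7-(-1)}=F_{8}=21\equiv0\pmod{7},
\]
\[
  P_{7}=169\equiv1\pmod{7},\qquad P_{7-1}=P_{6}=70\equiv0\pmod{7}.
\]
```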
In \cite{Sun93}, Sun proved that $$ (-1)^{\frac{p-1}2}\sum_{1\leq k\leq (p+1)/4}\frac{(-1)^k}{2k-1}\equiv-\frac{1}{4}\sum_{k=1}^{\frac{p-1}2}\frac{2^k}{k}\equiv\frac{P_{p-\jacob{2}{p}}}{p}\pmod{p} $$ for odd prime $p$. Similarly, we have a Lehmer-type congruence involving Pell numbers. \begin{Thm} \label{t4} Suppose that $p>3$ is a prime. Then \begin{equation} \label{t4e} {\mathcal H}_{p,8}(p-1)\equiv\frac{2^{2p-4}+2^{p-3}+2^{\frac{p-3}{2}}P_{p}-1}{p}-\frac{2^{4p-6}+2^{2p-4}+2^{p-2}P_{2p-\jacob{2}{p}}-1}{4p}\pmod{p^2}. \end{equation} \end{Thm} We shall prove Theorems \ref{t1} and \ref{t2} in Section 2, and the proofs of Theorems \ref{t3} and \ref{t4} will be given in Section 3. \section{Proofs of Theorems \ref{t1} and \ref{t2}} \setcounter{equation}{0} \setcounter{Thm}{0} \setcounter{Lem}{0} \setcounter{Cor}{0} \begin{Lem} Suppose that $p$ is a prime. Then \begin{equation} \label{l1e1} \frac{1}{p}\sum_{\substack{1\leq k\leq p-1\\ k\equiv r\pmod{m}}}(-a)^k\binom{p}{k}\equiv-\sum_{\substack{1\leq k\leq p-1\\ k\equiv r\pmod{m}}}\frac{a^k}{k}+p\sum_{\substack{1\leq j<k\leq p-1\\ k\equiv r\pmod{m}}}\frac{a^k}{jk}\pmod{p^2} \end{equation} and \begin{align} \label{l1e2} \frac{1}{2p}\sum_{\substack{1\leq k\leq 2p-1,\ k\not=p\\ k\equiv r\pmod{m}}}(-a)^k\binom{2p}{k} \equiv&-\sum_{\substack{1\leq k\leq p-1\\ k\equiv r\pmod{m}}}\frac{a^k}{k}-\sum_{\substack{1\leq k\leq p-1\\ k\equiv 2p-r\pmod{m}}}\frac{a^{2p-k}}{k}\notag\\&+2p\sum_{\substack{1\leq j<k\leq p-1\\ k\equiv r\pmod{m}}}\frac{a^k}{jk}+2p\sum_{\substack{1\leq j<k\leq p-1\\ k\equiv 2p-r\pmod{m}}}\frac{a^{2-k}}{jk}\pmod{p^2}.
\end{align} \end{Lem} \begin{proof} \begin{align*} \frac{1}{p}\sum_{\substack{1\leq k\leq p-1\\ k\equiv r\pmod{m}}}(-a)^k\binom{p}{k}=& \sum_{\substack{1\leq k\leq p-1\\ k\equiv r\pmod{m}}}\frac{(-a)^k}{k}\prod_{j=1}^{k-1}\bigg(\frac{p}{j}-1\bigg)\\ \equiv&-\sum_{\substack{1\leq k\leq p-1\\ k\equiv r\pmod{m}}}\frac{a^k}{k}+\sum_{\substack{2\leq k\leq p-1\\ k\equiv r\pmod{m}}}\frac{a^k}{k}\sum_{j=1}^{k-1}\frac{p}{j}\pmod{p^2}. \end{align*} Similarly, \begin{align*} &\frac{1}{2p}\sum_{\substack{1\leq k\leq 2p-1,\ k\not=p\\ k\equiv r\pmod{m}}}(-a)^k\binom{2p}{k}\\ =&\sum_{\substack{1\leq k\leq p-1\\ k\equiv r\pmod{m}}}\frac{(-a)^k}{k}\binom{2p-1}{k-1}+\sum_{\substack{1\leq k\leq p-1\\ k\equiv 2p-r\pmod{m}}}\frac{(-a)^{2p-k}}{2p-k}\binom{2p-1}{k}. \end{align*} We have $$ \sum_{\substack{1\leq k\leq p-1\\ k\equiv r\pmod{m}}}\frac{(-a)^k}{k}\binom{2p-1}{k-1}\equiv-\sum_{\substack{1\leq k\leq p-1\\ k\equiv r\pmod{m}}}\frac{a^k}{k}+2p\sum_{\substack{1\leq j<k\leq p-1\\ k\equiv r\pmod{m}}}\frac{a^k}{jk}\pmod{p^2}. $$ And \begin{align*} &\sum_{\substack{1\leq k\leq p-1\\ k\equiv 2p-r\pmod{m}}}\frac{(-a)^{2p-k}}{2p-k}\binom{2p-1}{k}\\ \equiv&\sum_{\substack{1\leq k\leq p-1\\ k\equiv 2p-r\pmod{m}}}\frac{a^{2p-k}}{2p-k}-2p\sum_{\substack{1\leq k\leq p-1\\ k\equiv 2p-r\pmod{m}}}\frac{a^{2-k}}{2p-k}\sum_{j=1}^{k}\frac{1}{j}\\ \equiv&-\sum_{\substack{1\leq k\leq p-1\\ k\equiv 2p-r\pmod{m}}}\bigg(\frac{a^{2p-k}}{k}+2p\cdot\frac{a^{2-k}}{k^2}\bigg)+2p\sum_{\substack{1\leq j\leq k\leq p-1\\ k\equiv 2p-r\pmod{m}}}\frac{a^{2-k}}{jk}\pmod{p^2}. \end{align*} We are done. \end{proof} Define $$ {\mathcal S}_{r,m}(n)=\sum_{\substack{2\leq k\leq n\\ k\equiv r\pmod{m}}}\frac{1}{k}\sum_{j=1}^{k-1}\frac{1}j. $$ Substituting $a=1$ in (\ref{l1e1}), we get \begin{Cor} Suppose that $m\geq 2$. 
Then \begin{equation} \label{c1e1} {\mathcal H}_{r,m}(p-1)\equiv-\frac{{\mathcal T}_{r,m}^*(p)-\delta_{r,m}(p)}p+p{\mathcal S}_{r,m}(p-1)\pmod{p^2}, \end{equation} where $\delta_{r,m}(p)$ is the same as the one defined in (\ref{delta}). In particular, \begin{equation} \label{c1e2} {\mathcal H}_{p,m}(p-1)\equiv-\frac{{\mathcal T}_{p,m}^*(p)+1}p+p{\mathcal S}_{p,m}(p-1)\pmod{p^2}. \end{equation} \end{Cor} Substituting $r=p, p+m/2$ and $a=1$ in (\ref{l1e2}) and noting that $\binom{2p}{p}\equiv 2\pmod{p^3}$, we have \begin{Cor} Suppose that $m\geq 2$. Then \begin{equation} \label{c2e1} {\mathcal H}_{p,m}(p-1)\equiv-\frac{{\mathcal T}_{p,m}^*(2p)+2}{4p}+2p{\mathcal S}_{p,m}(p-1)\pmod{p^2}. \end{equation} And if $m$ is even, then \begin{equation} \label{c2e2} {\mathcal H}_{p+m/2,m}(p-1)\equiv-\frac{{\mathcal T}_{p+m/2,m}^*(2p)}{4p}+2p{\mathcal S}_{p+m/2,m}(p-1)\pmod{p^2}. \end{equation} \end{Cor} Combining (\ref{c1e2}) and (\ref{c2e1}), we get $$ p{\mathcal S}_{p,m}(p-1)\equiv -\frac{{\mathcal T}_{p,m}^*(p)+1}p+\frac{{\mathcal T}_{p,m}^*(2p)+2}{4p}\pmod{p^2}, $$ and Theorem \ref{t1} easily follows. \begin{Lem} \label{l2} $$ \sum_{r=1}^{m}{\mathcal T}_{r,m}^*(n){\mathcal T}_{r+s,m}^*(n)=(-1)^n{\mathcal T}_{n+s,m}^*(2n). $$ \end{Lem} \begin{proof} Let $\zeta$ be a primitive $m$-th root of unity. Clearly, \begin{align*} {\mathcal T}_{r,m}^*(n)=&\frac1m\sum_{k=0}^{n}(-1)^k\binom{n}{k}\sum_{t=1}^{m}\zeta^{(k-r)t}=\frac1m\sum_{t=1}^{m}\zeta^{-rt}(1-\zeta^t)^n. \end{align*} Hence \begin{align*} \sum_{r=1}^{m}{\mathcal T}_{r,m}^*(n){\mathcal T}_{r+s,m}^*(n)=&\sum_{r=1}^m\frac1{m^2}\sum_{1\leq t_1,t_2\leq m}\zeta^{-r(t_1+t_2)-st_2}(1-\zeta^{t_1})^n(1-\zeta^{t_2})^n\\ =&\frac{(-1)^n}m\sum_{t=1}^{m}\zeta^{-(n+s)t}(1-\zeta^{t})^{2n} =(-1)^n{\mathcal T}_{n+s,m}^*(2n). 
\end{align*} \end{proof} By Lemma \ref{l2}, we have $$ {\mathcal T}_{p,m}^*(2p)=-\sum_{r=1}^{m}{\mathcal T}_{r,m}^*(p)^2. $$ Hence \begin{align} {\mathcal S}_{p,m}(p-1)\equiv&\frac{{\mathcal T}_{0,m}^*(p)-1}{2p^2}-\frac{{\mathcal T}_{p,m}^*(p)+1}{2p^2}-\frac{\sum_{r=1}^{m}{\mathcal T}_{r,m}^*(p)^2-2}{4p^2}\notag\\ =&-\sum_{\substack{1\leq r\leq m\\ r\not\equiv 0,p\pmod{m}}}\frac{{\mathcal T}_{r,m}^*(p)^2}{4p^2}-\frac{({\mathcal T}_{0,m}^*(p)-1)^2}{4p^2}-\frac{({\mathcal T}_{p,m}^*(p)+1)^2}{4p^2}\notag\\ \equiv&-\frac{1}{4}\sum_{r=1}^m{\mathcal H}_{r,m}(p-1)^2\pmod{p}. \end{align} And since ${\mathcal H}_{p-r,m}(p-1)\equiv-{\mathcal H}_{r,m}(p-1)\pmod{p}$, we have ${\mathcal H}_{r,m}(p-1)\equiv0\pmod{p}$ provided that $2r\equiv p\pmod{m}$. So we also have \begin{equation} {\mathcal S}_{p,m}(p-1)\equiv-\frac{1}{4}\sum_{\substack{1\leq r\leq m\\ 2r\not\equiv p\pmod{m}}}{\mathcal H}_{r,m}(p-1)^2\pmod{p}. \end{equation} Thus by (\ref{c2e1}), the proof of Theorem \ref{t2} is complete. \section{Fermat's Quotient and Pell's Quotient} \setcounter{equation}{0} \setcounter{Thm}{0} \setcounter{Lem}{0} \setcounter{Cor}{0} Let $L_n$ be the Lucas numbers given by $L_0=2$, $L_1=1$ and $L_{n}=L_{n-1}+L_{n-2}$ for $n\geq 2$. We require the following result of Sun and Sun on ${\mathcal T}_{r,10}(n)$. \begin{Lem}{{\cite[Theorem 1]{SunSun92}}} \label{SunSunFib} Let $n$ be a positive odd integer. If $n\equiv 1\pmod{4}$, then $$ \begin{array}{ccc} &10{\mathcal T}_{\frac{n-1}{2},10}(n)=2^n+L_{n+1}+5^{\frac{n+3}4}F_{\frac{n+1}2},\quad &10{\mathcal T}_{\frac{n+3}{2},10}(n)=2^n-L_{n-1}+5^{\frac{n+3}4}F_{\frac{n-1}2},\\ &10{\mathcal T}_{\frac{n+7}{2},10}(n)=2^n-L_{n-1}-5^{\frac{n+3}4}F_{\frac{n-1}2},\quad &10{\mathcal T}_{\frac{n+11}{2},10}(n)=2^n+L_{n+1}-5^{\frac{n+3}4}F_{\frac{n+1}2}. 
\end{array} $$ And if $n\equiv 3\pmod{4}$, then $$ \begin{array}{ccc} &10{\mathcal T}_{\frac{n-1}{2},10}(n)=2^n+L_{n+1}+5^{\frac{n+1}4}L_{\frac{n+1}2},\quad &10{\mathcal T}_{\frac{n+3}{2},10}(n)=2^n-L_{n-1}+5^{\frac{n+1}4}L_{\frac{n-1}2},\\ &10{\mathcal T}_{\frac{n+7}{2},10}(n)=2^n-L_{n-1}-5^{\frac{n+1}4}L_{\frac{n-1}2},\quad &10{\mathcal T}_{\frac{n+11}{2},10}(n)=2^n+L_{n+1}-5^{\frac{n+1}4}L_{\frac{n+1}2}. \end{array} $$ Furthermore, for every odd $n$, $$ 10{\mathcal T}_{\frac{n+13}{2},10}(n)=2^n-2L_n. $$ \end{Lem} For each odd $n\geq 1$, since $$ {\mathcal T}_{n,m}^*(2n)={\mathcal T}_{n,m}^*(2n-1)-{\mathcal T}_{n-1,m}^*(2n-1)=-2{\mathcal T}_{n-1,m}^*(2n-1) $$ and $$ {\mathcal T}_{n+m,2m}^*(2n)={\mathcal T}_{n+m,2m}^*(2n-1)-{\mathcal T}_{n+m-1,2m}^*(2n-1)=-2{\mathcal T}_{n+m-1,2m}^*(2n-1), $$ by Lemma \ref{SunSunFib}, we get \begin{equation} {\mathcal T}_{n,5}^*(2n)=-2\cdot5^{\frac{n-1}2}F_{n}. \end{equation} Let $p>5$ be a prime. By (\ref{t2e1}), $$ {\mathcal H}_{p,5}(p-1)\equiv-\frac{{\mathcal T}_{p,5}^*(2p)+2}{4p}-p({\mathcal H}_{p,5}(p-1)^2+{\mathcal H}_{2p,5}(p-1)^2)\pmod{p^2}. $$ By (\ref{SunSun}), we have \begin{align*} {\mathcal H}_{p,5}(p-1)\equiv&\frac{5^{\frac{p-1}2}F_{p}-1}{2p}-p\bigg(\bigg(\frac{F_{p-\jacob{5}{p}}}{2p}\bigg)^2+\bigg(\frac{5^{\frac{p-1}2}F_{p}-1}{2p}\bigg)^2\bigg)\\ \equiv&\frac{5^{\frac{p-1}2}F_{p}-1}{p}-\frac{5^{p-1}\big(F_{p-\jacob{5}{p}}^2+F_{p}^2\big)-1}{4p}\\ =&\frac{5^{\frac{p-1}2}F_{p}-1}{p}-\frac{5^{p-1}F_{2p-\jacob{5}{p}}-1}{4p}\pmod{p^2}, \end{align*} where in the last step we use the fact $F_{2n-1}=F_n^2+F_{n-1}^2$. Thus the proof of Theorem \ref{t3} is complete. 
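Theorem \ref{t3} (and, in exactly the same way, Theorem \ref{t4}) can be confirmed numerically for small primes. The following Python sketch (ours, not part of the paper) computes ${\mathcal H}_{p,m}(p-1)=\sum_{1\le k\le p-1,\ k\equiv p\,(\mathrm{mod}\ m)}1/k$ as an exact rational number and tests the asserted congruences modulo $p^2$:

```python
from fractions import Fraction

def fib(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def pell(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, 2 * b + a
    return a

def is_zero_mod_p2(x, p):
    # x = 0 (mod p^2) p-adically: p^2 divides the numerator, p not the denominator
    return x.denominator % p != 0 and x.numerator % p ** 2 == 0

def check_t3(p):                                 # Theorem t3, for primes p > 5
    e = 1 if p % 5 in (1, 4) else -1             # the Legendre symbol (5/p)
    lhs = sum(Fraction(1, k) for k in range(1, p) if k % 5 == p % 5)
    rhs = (Fraction(5 ** ((p - 1) // 2) * fib(p) - 1, p)
           - Fraction(5 ** (p - 1) * fib(2 * p - e) - 1, 4 * p))
    return is_zero_mod_p2(lhs - rhs, p)

def check_t4(p):                                 # Theorem t4, for primes p > 3
    e = 1 if p % 8 in (1, 7) else -1             # the Legendre symbol (2/p)
    lhs = sum(Fraction(1, k) for k in range(1, p) if k % 8 == p % 8)
    rhs = (Fraction(2 ** (2 * p - 4) + 2 ** (p - 3)
                    + 2 ** ((p - 3) // 2) * pell(p) - 1, p)
           - Fraction(2 ** (4 * p - 6) + 2 ** (2 * p - 4)
                      + 2 ** (p - 2) * pell(2 * p - e) - 1, 4 * p))
    return is_zero_mod_p2(lhs - rhs, p)

assert all(check_t3(p) for p in [7, 11, 13, 17, 19, 23])
assert all(check_t4(p) for p in [5, 7, 11, 13, 17, 19])
```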
\begin{Rem} Similarly, we can get \begin{align} &\sum_{\substack{1\leq k\leq p-1\\ k\equiv p\pmod{5}}}\frac{(-1)^k}{k}\notag\\ \equiv& \frac{5(2^{4p-1}-2^{2p+3})+12L_{4p}+L_{4p-4\jacob{5}{p}}-112L_{2p}-4L_{2p-2\jacob{5}{p}}+378}{400p} \pmod{p^2}.\end{align} \end{Rem} Let $Q_n$ be the Pell-Lucas numbers given by $Q_0=2$, $Q_1=2$ and $Q_{n}=2Q_{n-1}+Q_{n-2}$ for $n\geq 2$. For ${\mathcal T}_{r,8}(n)$, Sun proved the following. \begin{Lem}{{\cite[Theorem 2.2]{Sun93}}}\label{SunPell} Let $n$ be a positive odd integer. If $n\equiv 1\pmod{4}$, then $$ \begin{array}{ccc} &8{\mathcal T}_{\frac{n-1}{2},8}(n)=2^n+2^{\frac{n+1}2}+2^{\frac{n+7}4}P_{\frac{n+1}2},\quad &8{\mathcal T}_{\frac{n+3}{2},8}(n)=2^n-2^{\frac{n+1}2}+2^{\frac{n+7}4}P_{\frac{n-1}2},\\ &8{\mathcal T}_{\frac{n+7}{2},8}(n)=2^n-2^{\frac{n+1}2}-2^{\frac{n+7}4}P_{\frac{n-1}2},\quad &8{\mathcal T}_{\frac{n+11}{2},8}(n)=2^n+2^{\frac{n+1}2}-2^{\frac{n+7}4}P_{\frac{n+1}2}. \end{array} $$ And if $n\equiv 3\pmod{4}$, then $$ \begin{array}{ccc} &8{\mathcal T}_{\frac{n-1}{2},8}(n)=2^n+2^{\frac{n+1}2}+2^{\frac{n+1}4}Q_{\frac{n+1}2},\quad &8{\mathcal T}_{\frac{n+3}{2},8}(n)=2^n-2^{\frac{n+1}2}+2^{\frac{n+1}4}Q_{\frac{n-1}2},\\ &8{\mathcal T}_{\frac{n+7}{2},8}(n)=2^n-2^{\frac{n+1}2}-2^{\frac{n+1}4}Q_{\frac{n-1}2},\quad &8{\mathcal T}_{\frac{n+11}{2},8}(n)=2^n+2^{\frac{n+1}2}-2^{\frac{n+1}4}Q_{\frac{n+1}2}. \end{array} $$ \end{Lem} Thus we have \begin{equation} {\mathcal T}_{n,8}^*(2n)=-2^{2n-3}-2^{n-2}-2^{\frac{n-1}2}P_{n} \end{equation} and \begin{equation} {\mathcal T}_{n+4,8}^*(2n)=-2^{2n-3}-2^{n-2}+2^{\frac{n-1}2}P_{n} \end{equation} for odd $n\geq 1$. Applying (\ref{t2e1}), $$ {\mathcal H}_{p,8}(p-1)\equiv-\frac{{\mathcal T}_{p,8}^*(2p)+2}{4p}-p\sum_{0\leq j\leq 3}{\mathcal H}_{p+2j,8}(p-1)^2\pmod{p^2}. 
$$ By (\ref{c2e1}) and (\ref{c2e2}), we have $$ {\mathcal H}_{p,8}(p-1)\equiv-\frac{{\mathcal T}_{p,8}^*(2p)+2}{4p}\pmod{p} $$ and $$ {\mathcal H}_{p+4,8}(p-1)\equiv-\frac{{\mathcal T}_{p+4,8}^*(2p)}{4p}\pmod{p}. $$ And in view of (\ref{c1e1}) and Lemma \ref{SunPell}, we have \begin{align} \label{hp26} &{\mathcal H}_{p+2,8}(p-1)^2+{\mathcal H}_{p+6,8}(p-1)^2\notag\\ \equiv&\begin{cases} \sum_{i=0}^1p^{-2}\big(2^{p-3}-\jacob{2}{p}2^{\frac{p-5}2}+(-1)^i2^{\frac{p-5}4}P_{(p-\jacob{2}{p})/2}\big)^2\pmod{p}&\text{if }p\equiv1\pmod{4},\\ \sum_{i=0}^1p^{-2}\big(2^{p-3}-\jacob{2}{p}2^{\frac{p-5}2}+(-1)^i2^{\frac{p-11}4}Q_{(p-\jacob{2}{p})/2}\big)^2\pmod{p}&\text{if }p\equiv3\pmod{4}. \end{cases} \end{align} \begin{Lem} \begin{equation} \label{pellcong} \begin{array}{ccc} P_{\frac{p-1}2}\equiv0\pmod{p},&P_{\frac{p+1}2}\equiv(-1)^{\frac{p-1}8}2^{\frac{p-1}4}\pmod{p},&\text{if }p\equiv 1\pmod{8},\\ P_{\frac{p-1}2}\equiv(-1)^{\frac{p-3}8}2^{\frac{p-3}4}\pmod{p},&P_{\frac{p+1}2}\equiv(-1)^{\frac{p+5}8}2^{\frac{p-3}4}\pmod{p},&\text{if }p\equiv 3\pmod{8},\\ P_{\frac{p-1}2}\equiv(-1)^{\frac{p-5}8}2^{\frac{p-1}4}\pmod{p},&P_{\frac{p+1}2}\equiv0\pmod{p},&\text{if }p\equiv 5\pmod{8},\\ P_{\frac{p-1}2}\equiv(-1)^{\frac{p+1}8}2^{\frac{p-3}4}\pmod{p},&P_{\frac{p+1}2}\equiv(-1)^{\frac{p+1}8}2^{\frac{p-3}4}\pmod{p},&\text{if }p\equiv 7\pmod{8}, \end{array} \end{equation} and \begin{equation} \label{pelllucascong} \begin{array}{ccc} Q_{\frac{p-1}2}\equiv(-1)^{\frac{p-1}8}2^{\frac{p+3}4}\pmod{p},&Q_{\frac{p+1}2}\equiv(-1)^{\frac{p-1}8}2^{\frac{p+3}4}\pmod{p},&\text{if }p\equiv 1\pmod{8},\\ Q_{\frac{p-1}2}\equiv(-1)^{\frac{p+5}8}2^{\frac{p+5}4}\pmod{p},&Q_{\frac{p+1}2}\equiv0\pmod{p},&\text{if }p\equiv 3\pmod{8},\\ Q_{\frac{p-1}2}\equiv(-1)^{\frac{p+3}8}2^{\frac{p+3}4}\pmod{p},&Q_{\frac{p+1}2}\equiv(-1)^{\frac{p-5}8}2^{\frac{p+3}4}\pmod{p},&\text{if }p\equiv 5\pmod{8},\\ 
Q_{\frac{p-1}2}\equiv0\pmod{p},&Q_{\frac{p+1}2}\equiv(-1)^{\frac{p+1}8}2^{\frac{p+1}4}\pmod{p},&\text{if }p\equiv 7\pmod{8}. \end{array} \end{equation} \end{Lem} \begin{proof} The congruences in (\ref{pellcong}) were obtained by Sun \cite[Theorem 2.3]{Sun93}. And the congruences in (\ref{pelllucascong}) follow from (\ref{pellcong}), by noting that $Q_{n}=2P_{n+1}-2P_{n}$ and $Q_{n+1}=2P_{n+1}+2P_{n}$. \end{proof} Thus since $P_{(p-\jacob{2}{p})/2}Q_{(p-\jacob{2}{p})/2}=P_{p-\jacob{2}{p}}$, by (\ref{hp26}), we have \begin{align*} {\mathcal H}_{p+2,8}(p-1)^2+{\mathcal H}_{p+6,8}(p-1)^2 \equiv\frac{2^{p-1}(2^{\frac{p-1}2}-\jacob{2}{p})^2+P_{p-\jacob{2}{p}}^2}{8p^2}\pmod{p}. \end{align*} Observe that $$ \frac{2^{p-1}-1}{p}=\frac{(2^{\frac{p-1}2}+\jacob2p)(2^{\frac{p-1}2}-\jacob2p)}{p}\equiv2\jacob2p\frac{2^{\frac{p-1}2}-\jacob2p}{p}\pmod{p}. $$ Hence \begin{align*} {\mathcal H}_{p,8}(p-1) \equiv&\frac{2^{2p-4}+2^{p-3}+2^{\frac{p-3}2}P_{p}-1}{p}-\frac{(2^{2p-3}+2^{p-2}+2^{\frac{p-1}2}P_{p})^2-4}{16p}\\ &-\frac{(2^{2p-3}+2^{p-2}-2^{\frac{p-1}2}P_{p})^2}{16p}-\frac{2^{p-1}(2^{\frac{p-1}2}-\jacob{2}{p})^2+2^{p-1}P_{p-\jacob{2}{p}}^2}{8p}\\ \equiv&\frac{2^{2p-4}+2^{p-3}+2^{\frac{p-3}2}P_{p}-1}{p}-\frac{2^{4p-6}+2^{2p-4}+2^{p-2}P_{2p-\jacob{2}{p}}-1}{4p}\pmod{p^2}, \end{align*} by noting that $P_{p-\jacob{2}{p}}^2+P_{p}^2=P_{2p-\jacob{2}{p}}$. This concludes the proof of Theorem \ref{t4}.\qed \begin{Rem} The Bernoulli polynomials $B_n(x)$ are given by $$ \frac{te^{xt}}{e^{t}-1}=\sum_{n=0}^\infty\frac{B_n(x)}{n!}t^n. $$ In particular, define the Bernoulli number $B_n=B_n(0)$. Granville and Sun \cite{GranvilleSun96} proved that $$ B_{p-1}(\{p\}_5/5)-B_{p-1}\equiv\frac{5}{4p}F_{p-\jacob5p}+\frac{5^p-5}{4p}\pmod{p} $$ and $$ B_{p-1}(\{p\}_8/8)-B_{p-1}\equiv\frac{2}{p}P_{p-\jacob2p}+\frac{2^{p+1}-4}{p}\pmod{p} $$ for prime $p\not=2,5$, where $\{p\}_m$ denotes the least non-negative residue of $p$ modulo $m$. 
In \cite[Theorem 3.3]{Sun08}, Sun also proved that $$ m{\mathcal H}_{p,m}(p-1)\equiv\frac{B_{2p-2}(\{p\}_m/m)-B_{2p-2}}{2p-2}-2\frac{B_{p-1}(\{p\}_m/m)-B_{p-1}}{p-1}\pmod{p^2}. $$ Now using Theorems \ref{t3} and \ref{t4}, it is easy to deduce that \begin{align} \frac{B_{p(p-1)}(\{p\}_5/5)-B_{p(p-1)}}{5p(p-1)} \equiv-\frac{5^{\frac{p-1}{2}}F_{p}-1}{p}+\frac{5^{p-1}F_{2p-\jacob{5}{p}}-1}{4p}\pmod{p^2} \end{align} for prime $p>5$, and \begin{align} &\frac{B_{p(p-1)}(\{p\}_8/8)-B_{p(p-1)}}{8p(p-1)}\notag\\ \equiv&-\frac{2^{2p-4}+2^{p-3}+2^{\frac{p-3}{2}}P_{p}-1}{p}+\frac{2^{4p-6}+2^{2p-4}+2^{p-2}P_{2p-\jacob{2}{p}}-1}{4p}\pmod{p^2} \end{align} for prime $p>3$. \end{Rem} \end{Ack} \end{document}
\begin{document} \author{Axel Hultman} \address{Department of Mathematics, KTH-Royal Institute of Technology, SE-100 44, Stockholm, Sweden.} \email{[email protected]} \author{Jakob Jonsson} \address{Department of Mathematics, KTH-Royal Institute of Technology, SE-100 44, Stockholm, Sweden.} \email{[email protected]} \title[Matrices of Barvinok rank $2$]{The topology of the space of matrices of Barvinok rank two} \begin{abstract} The Barvinok rank of a $d \times n$ matrix is the minimum number of points in $\mathbb{R}^d$ such that the tropical convex hull of the points contains all columns of the matrix. The concept originated in work by Barvinok and others on the travelling salesman problem. Our object of study is the space of real $d \times n$ matrices of Barvinok rank two. Let $B_{d,n}$ denote this space modulo rescaling and translation. We show that $B_{d,n}$ is a manifold, thereby settling a conjecture due to Develin. In fact, $B_{d,n}$ is homeomorphic to the quotient of the product of spheres $S^{d-2} \times S^{n-2}$ under the involution which sends each point to its antipode simultaneously in both components. In addition, using discrete Morse theory, we compute the integral homology of $B_{d,n}$. Assuming $d \ge n$, for odd $d$ the homology turns out to be isomorphic to that of $S^{d-2} \times \mathbb{RP}^{n-2}$. This is true also for even $d$ up to degree $d-3$, but the two cases differ from degree $d-2$ and up. The homology computation straightforwardly extends to more general complexes of the form $(S^{d-2} \times X)/\mathbb{Z}_2$, where $X$ is a finite cell complex of dimension at most $d-2$ admitting a free $\mathbb{Z}_2$-action. \end{abstract} \maketitle \section{Introduction} In the {\em tropical semiring} $(\mathbb{R},\odot,\oplus)$ one defines ``multiplication'' and ``addition'' by $a\odot b = a+b$ and $a\oplus b = \min(a,b)$, respectively. 
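In code the two tropical operations are simply $\min$ and $+$; a two-line illustration (ours, not part of the paper), using the worked example $0\odot 3 \oplus (-2)\odot 3 = 1$ that appears below:

```python
def t_add(a, b):
    # tropical addition: a (+) b = min(a, b)
    return min(a, b)

def t_mul(a, b):
    # tropical multiplication: a (.) b = a + b
    return a + b

# 0 (.) 3 (+) (-2) (.) 3 = min(0 + 3, (-2) + 3) = 1
assert t_add(t_mul(0, 3), t_mul(-2, 3)) == 1
```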
Real $n$-space $\mathbb{R}^n$ has the natural structure of a semimodule over the tropical semiring so that one obtains a theory of ``tropical geometry''.\footnote{Some authors require of a semiring the existence of an additively neutral element, which the tropical semiring lacks as we have defined it. The issue is of no importance to us, but could be rectified by incorporating an infinity element.} Given any classical geometric notion, one may try to construct reasonable analogues in tropical geometry and study their properties. This has been a very lively direction of research over recent years. A brief introduction is \cite{Mikhalkin}. For more background the reader may e.g.\ consult the extensive list of references appearing in \cite{Litvinov}. Consider the classical concept of $n$ points on a line in $\mathbb{R}^d$. Such point sets are characterized by the property that their convex hull is at most one-dimensional. An equivalent characterization is that all points lie in the convex hull of at most two of the points. In tropical geometry, there is a natural notion of {\em tropical convex hull} which leads to obvious analogues of the above characterizations. However, the two analogues differ; the latter is more restrictive. Considering the points as columns of matrices, the former situation deals with matrices of {\em tropical rank} at most $2$, whereas the latter leads one to consider matrices of {\em Barvinok rank} at most $2$. Various tropically motivated definitions of rank are discussed in \cite{DSS}. The concept of Barvinok rank has found applications in optimization theory. Motivating the nomenclature, Barvinok et al.\ \cite{BFJTWW} showed that the maximum version of the travelling salesman problem can be solved in polynomial time if the Barvinok rank of the distance matrix is fixed (with $\oplus$ denoting $\max$ rather than $\min$). Let $\mathcal{M}(d,n)$ denote the set of $d\times n$ matrices with real entries. 
In this paper, we are interested in the space of matrices $M\in \mathcal{M}(d,n)$ with Barvinok rank $2$, corresponding to sets of $n$ marked points in $\mathbb{R}^d$ whose tropical convex hull is generated by two of the points. The space contains some topologically less interesting features in that it is invariant under rescaling and translation. Taking the quotient by the equivalence relation generated by these operations, one obtains a space $B_{d,n}$ which is our object of study. Develin \cite{Develin} conjectured that $B_{d,n}$ is a manifold and that its homology ``does not increase in complexity as $n$ gets large''. He confirmed the conjecture for $d=3$ and computed the integral homology in a few more accessible cases. Our main results are summarized in the following two theorems: \begin{theorem} \label{th:manifold} The space $B_{d,n}$ is homeomorphic to $(S^{d-2}\times S^{n-2})/\mathbb{Z}_2$, the quotient of the product of two spheres under the involution which sends each point to its antipode simultaneously in both components. \end{theorem} \begin{theorem}\label{th:homology} The reduced integral homology groups of $B_{d,n}$ are given by \[ \widetilde{H}_i(B_{d,n};\mathbb{Z}) \cong \mathbb{Z}^{f(i)}\oplus \mathbb{Z}_2^{t(i)}, \] where \[ f(i)= \begin{cases} 2 & \text{if $i+2=d=n$ and $i$ is odd,}\\ 1 & \text{if $i+2=n\neq d$ and $i$ is odd,}\\ 1 & \text{if $i+2=d\neq n$ and $i$ is odd,}\\ 1 & \text{if $i=d+n-4$ and $i$ is even,}\\ 0 & \text{otherwise,} \end{cases} \] and \[ t(i) = \begin{cases} 1 & \text{if $1\le i \le \min\{d,n\}-3$ and $i$ is odd,}\\ 1 & \text{if $\max\{d,n\}-2\le i \le d+n-5$ and $i$ is even,}\\ 0 & \text{otherwise.} \end{cases} \] \end{theorem} When a finite group acts freely on a manifold, the quotient is again a manifold. In particular, therefore, the manifold part of Develin's conjecture follows from Theorem \ref{th:manifold}, whereas the ``complexity of homology'' part is implied by Theorem \ref{th:homology}. 
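As a quick consistency check on Theorem \ref{th:homology} (ours, not part of the paper): since the $\mathbb{Z}_2$-action on $S^{d-2}\times S^{n-2}$ is free, the Euler characteristic of the quotient is half that of the product, and the free ranks $f(i)$ must reproduce this. A short Python verification:

```python
def f(i, d, n):
    """Free rank of the reduced homology H_i(B_{d,n}), per Theorem th:homology."""
    if i % 2 == 1 and i + 2 == d == n:
        return 2
    if i % 2 == 1 and (i + 2 == n != d or i + 2 == d != n):
        return 1
    if i % 2 == 0 and i == d + n - 4:
        return 1
    return 0

# chi(B_{d,n}) = chi(S^{d-2}) * chi(S^{n-2}) / 2, since the Z_2-action is free
for d in range(3, 9):
    for n in range(3, 9):
        chi = 1 + sum((-1) ** i * f(i, d, n) for i in range(d + n - 3))
        chi_product = (2 if d % 2 == 0 else 0) * (2 if n % 2 == 0 else 0)
        assert chi == chi_product // 2
```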
For $d \ge n$, note that the homology of $B_{d,n}$ coincides with that of $S^{d-2} \times (S^{n-2}/\mathbb{Z}_2) \cong S^{d-2} \times \mathbb{R}P^{n-2}$ for odd $d$. For even $d$, the homology groups of the two manifolds differ in higher degrees for reasons to be explained in Section~\ref{se:homology}. For the computations that lead to Theorem \ref{th:homology}, we have opted to work with slightly more general chain complexes, since this requires no additional effort. An advantage of this approach is that it highlights the connection between $B_{d,n} \cong (S^{d-2}\times S^{n-2})/\mathbb{Z}_2$ and $S^{d-2} \times (S^{n-2}/\mathbb{Z}_2)$. Specifically, for any finite cell complex $X$ with a free $\mathbb{Z}_2$-action, we show that $(S^{d-2}\times X)/\mathbb{Z}_2$ and $S^{d-2} \times (X/\mathbb{Z}_2)$ have the same homology whenever $\dim X \le d-2$ and $d$ is odd. For even $d$, there is still a connection, but the situation is slightly more complicated; see Theorem~\ref{th:Vhemi} for details. The remainder of the paper is organized as follows. We review some concepts from tropical geometry in the next section. In Section \ref{se:BDN}, $B_{d,n}$ is defined and given an explicit simplicial decomposition in terms of trees. From it, Theorem \ref{th:manifold} is deduced. The last section is devoted to the proof of Theorem~\ref{th:homology}. Our approach is based on Forman's discrete Morse theory \cite{Forman}. \section{Tropical convexity and notions of rank} Recall that in the tropical semiring we define $a\odot b = a+b$ and $a\oplus b = \min(a,b)$ for $a,b\in \mathbb{R}$. For example, \[ 0\odot 3 \oplus (-2)\odot 3 = 1. \] A natural semimodule structure on $\mathbb{R}^n$ is provided by the ``addition'' \[ (x_1, \dots, x_n)\oplus (y_1, \dots, y_n) = (x_1\oplus y_1, \dots, x_n \oplus y_n) \] and the ``multiplication by scalar'' \[ \lambda \odot (x_1, \dots, x_n) = (\lambda \odot x_1, \dots, \lambda \odot x_n). 
\] Following \cite{DS} we say that $S\subseteq \mathbb{R}^n$ is {\em tropically convex} if $\lambda\odot x \oplus \mu \odot y\in S$ for all $x,y\in S$ and $\lambda, \mu\in \mathbb{R}$. Note that if $S$ is tropically convex, then $\lambda \odot x \in S$ for all $\lambda \in \mathbb{R}$, $x\in S$. Defining {\em tropical projective space} \[ \mathcal{T}P^{n-1} = \mathbb{R}^n/(1,\dots,1)\mathbb{R}, \] any tropically convex set in $\mathbb{R}^n$ is therefore uniquely determined by its image in $\mathcal{T}P^{n-1}$. We obtain a convenient set of representatives for the elements of tropical projective space by requiring the first coordinate to be zero. The {\em tropical convex hull} $\mathrm{tconv}(S)$ is the smallest tropically convex set which contains $S\subseteq \mathbb{R}^n$. It coincides with the set of finite tropical linear combinations: \[ \mathrm{tconv}(S) = \left \{\bigoplus_{x\in X}\lambda_x\odot x : \lambda_x\in \mathbb{R}, \, \emptyset\neq X \subseteq S, \, |X|< \infty \right \}; \] see \cite{DS}. An important observation is that the tropical convex hull of two points in $\mathbb{R}^n$ forms a piecewise linear curve in $\mathcal{T}P^{n-1}$. This curve is the {\em tropical line segment} between the two points. \begin{figure} \caption{The tropical convex hulls of the point sets $\{(0,0,1),(0,3,2),(0,2,4)\} \label{fi:hulls} \end{figure} In Figure \ref{fi:hulls}, the tropical convex hulls of two three-point sets in $\mathcal{T}P^2$ are shown. According to the next definition, the $3\times 3$ matrix with the three left-hand points as columns has tropical rank $3$, whereas the points to the right form a matrix of tropical rank $2$. \begin{definition}[cf.\ Theorem 4.2 in \cite{DSS}] The {\em tropical rank} of $M\in \mathcal{M}(d,n)$ equals one plus the dimension in $\mathcal{T}P^{d-1}$ of the tropical convex hull of the columns of $M$. 
\end{definition} In particular, a matrix has tropical rank at most $2$ if and only if the tropical convex hull of its columns is the union of the tropical line segments between all pairs of columns. \begin{definition} The {\em Barvinok rank} of $M\in \mathcal{M}(d,n)$ equals the smallest number of points in $\mathbb{R}^d$ whose tropical convex hull contains all columns of $M$. \end{definition} It is easy to see that the Barvinok rank cannot be smaller than the tropical rank. In general, the two notions are different. For example, the two matrices whose columns are the point sets in Figure \ref{fi:hulls} both have Barvinok rank $3$. Observe that a matrix has Barvinok rank at most $2$ if and only if the tropical convex hull of its columns is a tropical line segment. \section{The manifold of Barvinok rank $2$ matrices}\label{se:BDN} In this section, we shall deduce Theorem \ref{th:manifold}. To begin with, we define the space $B_{d,n}$ which encodes the topologically interesting part of the space of matrices of Barvinok rank $2$. We think of $B_{d,n}$ as sitting inside $\mathbb{R}^{dn}$ with the subspace topology. Fix positive integers $d$ and $n$. Let $M\in \mathcal{M}(d,n)$. As usual, we consider the columns of $M$ as a collection of $n$ marked points in $\mathbb{R}^d$. Adding any real number to any row or any column of $M$ preserves the Barvinok rank; adding to a row merely translates the point set, whereas adding to a column yields another representative for the same point in $\mathcal{T}P^{d-1}$. Similarly, multiplying $M$ by any $\lambda\in \mathbb{R}$ does not increase the Barvinok rank (which is preserved if $\lambda \neq 0$). In order to get unique representatives for matrices under the operations just described, the following definition is convenient. \begin{definition}\label{de:representatives} Let $B_{d,n}$ be the set of matrices $M\in \mathcal{M}(d,n)$ satisfying \begin{itemize} \item[(i)] The first row of $M$ is zero. 
\item[(ii)] The smallest entry in every row of $M$ is zero. \item[(iii)] As a point in $\mathbb{R}^{dn}$, $M$ is on the unit sphere. \end{itemize} \end{definition} \subsection{A simplicial complex of trees} Let $P=\{p_1, \dots, p_d\}$ and $Q=\{q_1, \dots, q_n\}$ be disjoint sets of cardinality $d$ and $n$, respectively. We now describe an abstract simplicial complex with $B_{d,n}$ as geometric realization. The simplices are encoded by combinatorial trees whose leaves are marked using $P$ and $Q$ as label sets. This model is equivalent to that given by Develin in \cite[\S~ 3]{Develin}.\footnote{To translate from our trees to those of \cite{Develin}, simply replace the leaf labelled $p_i$ and its incident edge by a leaf ``heading off to infinity in the $i$-th coordinate direction'', and replace the leaf labelled $q_i$ and its incident edge by the $i$-th marked point.} A completely analogous description in the context of matrices of tropical rank $2$ was given by Markwig and Yu \cite{MY}; their complex is denoted $\mathcal{T}_{d,n}$ below. Consider the set of trees $\mathcal{T}$ with leaf set $P\cup Q$ such that every internal vertex (i.e.\ non-leaf) has degree at least three. We may think of $\mathcal{T}$ as a simplicial complex in the following way. The vertex set of the complex consists of all bipartitions of $P\cup Q$, and we identify a tree $\tau\in \mathcal{T}$ with the simplex comprised of the bipartitions induced by the connected components that arise when an internal edge (one not incident to a leaf) of $\tau$ is removed. (It is well-known, and easy to see, that there is at most one tree giving rise to any given set of bipartitions.) Clearly, the simplices in the boundary of $\tau$ are those obtained by contracting internal edges. Let $\mathcal{T}_{d,n}$ be the subcomplex of $\mathcal{T}$ which is induced by the bipartitions where both parts have nonempty intersection with both $P$ and $Q$. 
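The vertex set of $\mathcal{T}_{d,n}$ can be enumerated by brute force: each part of such a bipartition meets $P$ in a proper nonempty subset and likewise for $Q$, which suggests a count of $(2^d-2)(2^n-2)/2$ vertices. A small Python enumeration (ours, not part of the paper) confirms this for small $d$ and $n$:

```python
from itertools import combinations

def t_vertices(d, n):
    """Vertices of T_{d,n}: bipartitions {A, A^c} of P | Q in which both parts
    meet both P and Q.  Here P = {0,...,d-1} and Q = {d,...,d+n-1}."""
    ground = frozenset(range(d + n))
    P, Q = set(range(d)), set(range(d, d + n))
    verts = set()
    for r in range(1, d + n):
        for A in combinations(sorted(ground), r):
            A = frozenset(A)
            B = ground - A
            if all(S & T for S in (A, B) for T in (P, Q)):
                verts.add(frozenset({A, B}))
    return verts

for d, n in [(2, 2), (2, 3), (3, 3), (3, 4)]:
    assert len(t_vertices(d, n)) == (2 ** d - 2) * (2 ** n - 2) // 2
```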
Our main object of study is the subcomplex of $\mathcal{T}_{d,n}$ which consists of the trees whose internal vertices form a path as induced subgraph. Let us denote this complex by $\B{d}{n}$. \begin{proposition}[\S~3 in \cite{Develin}] \label{pr:realization} A geometric realization of $\B{d}{n}$ is given by $B_{d,n}$. \end{proposition} Specifically, a matrix $M\in B_{d,n}$ is associated with a tree in $\B{d}{n}$ as follows. Let $c_1, \dots, c_n$ denote the columns of $M$. Then, $T=\mathrm{tconv}(c_1, \dots, c_n)$ is a tropical line segment in $\mathcal{T}P^{d-1}$. Now construct a tree $\tau=\tau(M)$ whose internal vertex set is the union of $\{c_1, \dots, c_n\}$ and the set of points where the curve $T$ is not smooth. Two internal vertices $v_1\neq v_2$ are adjacent if and only if $\mathrm{tconv}(v_1, v_2)$ (which is a subset of $T$) contains no other internal vertex. The leaf set of $\tau$ is $P\cup Q$. The leaf $p_i$ is adjacent to the internal vertex which is closest to the origin (which, by Condition (ii) of Definition \ref{de:representatives}, is necessarily an internal vertex) among those where the $i$-th coordinate is maximized. Finally, the leaf $q_i$ is adjacent to the vertex $c_i$. The resulting tree (seen as an abstract, leaf-labelled tree) is $\tau$. \begin{example}\label{ex:example} Consider the $6\times 5$ matrix \[ M = \left( \begin{array}{crrrr} 6 & 1 & 4 & 6 & 3\\ 2 & -3 & -1 & 2 & -1\\ 5 & -2 & 0 & 4 & 2\\ 5 & -2 & 0 & 4 & 2\\ 0 & -5 & -1 & 0 & -3\\ 6 & -2 & 0 & 4 & 4 \end{array} \right). \] We shall see shortly that $M$ has Barvinok rank $2$. However, $M$ does not satisfy the conditions of Definition \ref{de:representatives}, hence does not belong to $B_{6,5}$. 
Subtracting the first row of $M$ from each row, adding an appropriate amount to each row and rescaling, we obtain the following matrix which represents the equivalence class of $M$ in $B_{6,5}$: \[ M^\prime = \frac{1}{\sqrt{97}}\left( \begin{array}{ccccc} 0 & 0 & 0 & 0 & 0\\ 1 & 1 & 0 & 1 & 1\\ 3 & 1 & 0 & 2 & 3\\ 3 & 1 & 0 & 2 & 3\\ 0 & 0 & 1 & 0 & 0\\ 4 & 1 & 0 & 2 & 5 \end{array} \right). \] The tropical convex hull of the columns of $M^\prime$ is shown in Figure \ref{fi:example}. It is generated by the third and fifth columns. Thus, $M^\prime$ (and $M$) has Barvinok rank $2$. The associated tree $\tau(M^\prime)$ encoding the simplex in $\B{6}{5}$ which contains $M^\prime$ is also displayed. \end{example} \begin{figure} \caption{(Left) The tropical convex hull of five points on a tropical line segment in $\mathcal{T} \label{fi:example} \end{figure} \subsection{Proof of Theorem \ref{th:manifold}} Our next goal is to prove that $\B{d}{n}$ is homeomorphic to $(S^{d-2}\times S^{n-2})/\mathbb{Z}_2$, where the generator of $\mathbb{Z}_2$ acts by the antipodal map on both components. For a set $S$, let $\bool{S}$ denote the proper part of the Boolean lattice on $S$. Thus, $\bool{S}$ is the poset of all proper, nonempty subsets of $S$ ordered by inclusion. Recall our sets $P$ and $Q$. The chains in $\bool{P}\times \bool{Q}$ can be thought of as compositions (i.e.\ ordered set partitions) of $P\cup Q$ such that the first and the last block both have nonempty intersection with both $P$ and $Q$; let us for brevity call such compositions {\em balanced}. Namely, a chain $(S_1,T_1)< \cdots < (S_k,T_k)$ corresponds to the balanced composition $(C_1, \dots, C_{k+1})$, where \[ C_i = \left(\bigcup_{j=1}^i (S_j\cup T_j)\right)\setminus \left(\bigcup_{j=1}^{i-1} (S_j\cup T_j)\right), \] identifying $S_{k+1}$ and $T_{k+1}$ with $P$ and $Q$, respectively. Under this bijection, inclusion among chains corresponds to refinement among compositions. 
Let $\Delta(\cdot)$ denote order complex\footnote{The order complex of a finite poset is the abstract simplicial complex whose simplices are the totally ordered subsets.}. We have a map of simplicial complexes $\varphi: \Delta \left(\bool{P}\times \bool{Q}\right)\to \B{d}{n}$ by sending a balanced composition $C = (C_1, \dots, C_k)$ to the unique tree $\varphi(C)\in \B{d}{n}$ in which $C_i$ is the set of leaves adjacent to the $i$th internal vertex, counting along the internal path from one of the endpoints. As an example, there are two balanced compositions that are mapped to the tree in Figure \ref{fi:example}, namely $(p_5q_3,p_1,p_2q_2,q_4,p_3p_4,q_1,p_6q_5)$ and its reverse composition $(p_6q_5,q_1,p_3p_4,q_4,p_2q_2,p_1,p_5q_3)$. Conversely, for $\tau\in \B{d}{n}$, suppose the path from one endpoint of the internal path to the other traverses the internal vertices in the order $v_1, \dots, v_k$. Let $C_i\subset P\cup Q$ be the set of leaves adjacent to $v_i$. Clearly, \[ \varphi^{-1}(\tau)=\{(C_1, \dots, C_k), (C_k,\dots, C_1)\}. \] This shows that $\varphi$ induces an isomorphism of simplicial complexes: \[ \Delta\left(\bool{P}\times \bool{Q}\right)/\mathbb{Z}_2 \cong \B{d}{n}, \] where the generator of $\mathbb{Z}_2$ acts by taking complement inside $\bool{P}$ and $\bool{Q}$ simultaneously. Given finite posets $\Pi$ and $\Sigma$, the product space $\Delta(\Pi)\times\Delta(\Sigma)$ has a natural cell complex structure, where the cells are of the form $\Delta(C^\Pi)\times \Delta(C^\Sigma)$ for chains $C^\Pi\subseteq \Pi$, $C^\Sigma\subseteq \Sigma$. It is well known \cite[Lemma 8.9]{ES} that $\Delta(\Pi\times \Sigma)$ is a simplicial subdivision of $\Delta(\Pi)\times\Delta(\Sigma)$, a cell $\Delta(C^\Pi)\times \Delta(C^\Sigma)$ being subdivided by $\Delta(C^\Pi\times C^\Sigma)$. In particular, $\Delta\left(\bool{P}\times \bool{Q}\right) \cong \Delta\left(\bool{P}\right) \times \Delta\left(\bool{Q}\right)$. 
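The $2$-to-$1$ behaviour of $\varphi$ described below is easy to confirm by brute force for small $d$ and $n$. The following Python enumeration (ours, not part of the paper) lists the balanced compositions with at least two blocks (those encoding the nonempty chains) and checks that reversal pairs them up without fixed points:

```python
from itertools import combinations

def compositions(S):
    """All ordered set partitions of the finite set S, as tuples of frozensets."""
    S = frozenset(S)
    if not S:
        yield ()
        return
    items = sorted(S)
    for r in range(1, len(items) + 1):
        for first in combinations(items, r):
            first = frozenset(first)
            for tail in compositions(S - first):
                yield (first,) + tail

def balanced_compositions(P, Q):
    """Compositions of P | Q with >= 2 blocks whose first and last blocks
    both meet P and Q."""
    return [c for c in compositions(P | Q)
            if len(c) >= 2 and all(c[i] & X for i in (0, -1) for X in (P, Q))]

for P, Q in [(frozenset('ab'), frozenset('xy')), (frozenset('abc'), frozenset('xy'))]:
    bal = balanced_compositions(P, Q)
    # reversal preserves balancedness...
    assert {tuple(reversed(c)) for c in bal} == set(bal)
    # ... and has no fixed points (the first and last blocks are disjoint),
    # so the balanced compositions fall into pairs: phi is exactly 2-to-1
    assert all(c != tuple(reversed(c)) for c in bal)
    assert len({frozenset({c, tuple(reversed(c))}) for c in bal}) == len(bal) // 2
```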
Moreover, the $\mathbb{Z}_2$-action clearly respects the subdivision so that we obtain \[ \B{d}{n} \cong \left(\Delta\left(\bool{P}\right) \times \Delta\left(\bool{Q}\right)\right)/\mathbb{Z}_2, \] where the generator of $\mathbb{Z}_2$ acts by taking complementary chains in both $\bool{P}$ and $\bool{Q}$. Finally, it is well known that $\Delta\left(\bool{S}\right)$ is homeomorphic to the $(|S|-2)$-sphere, and that the complement map on $\bool{S}$ corresponds to the antipodal map on the sphere. This concludes the proof of Theorem \ref{th:manifold}. \section{Computing the homology} \label{se:homology} The main aim of the remainder of the paper is to compute the integral homology groups of $B_{d,n}$, thereby proving Theorem \ref{th:homology}. By Theorem~\ref{th:manifold}, $B_{d,n}$ is homeomorphic to $(S^{d-2}\times S^{n-2})/\mathbb{Z}_2$. To compute the homology of this manifold, we consider the standard cell decomposition into hemispheres of each of $S^{d-2}$ and $S^{n-2}$; see Section~\ref{hemisphere-sec} for a description. It is useful, however, to work with slightly more general chain complexes. Thus, we shift gears and temporarily forget about the context of the previous sections. Let $R$ be a principal ideal domain of odd or zero characteristic. Let \[ \begin{CD} \mathsf{V} : \cdots @>\partial>> V_{d+1} @>\partial>> V_{d} @>\partial>> V_{d-1} @>\partial>> \cdots \\ \mathsf{W} : \cdots @>\partial>> W_{d+1} @>\partial>> W_{d} @>\partial>> W_{d-1} @>\partial>> \cdots \end{CD} \] be chain complexes of $R$-modules equipped with a degree-preserving $\mathbb{Z}_2$-action. This means that for each of the two chain complexes there is a degree-preserving involutive automorphism $\iota$ commuting with $\partial$. 
Consider the tensor product $\mathsf{V} \otimes \mathsf{W}$ over $R$; the $k$th chain group is equal to \[ \bigoplus_{i+j = k} V_i \otimes W_j, \] and the boundary map is given by \[ \partial(v \otimes w) = \partial(v) \otimes w + (-1)^i v \otimes \partial(w) \] for $v \in V_i$ and $w \in W_j$. We obtain a $\mathbb{Z}_2$-action on $\mathsf{V} \otimes \mathsf{W}$ by \[ \iota(v \otimes w) = \iota(v) \otimes \iota(w). \] For any chain complex $\mathsf{C}$ equipped with a $\mathbb{Z}_2$-action induced by the involution $\iota$, let $\mathsf{C}^+$ be the chain complex obtained by identifying an element $c$ with zero whenever $c+\iota(c) = 0$. Moreover, define $\mathsf{C}^-$ to be the chain complex obtained in the similar manner by identifying an element $c$ with zero whenever $c-\iota(c) = 0$. Our goal is to examine $(\mathsf{V} \otimes \mathsf{W})^+$. \subsection{The hemispherical chain complex} \label{hemisphere-sec} Let us consider the special case of interest in our calculation of the homology of $B_{d,n}$. Write $D = d-2$ and $N = n-2$. For $0 \le i \le D$, let $V_i$ be a free $R$-module generated by two elements $\sigma_i^+ = \sigma_{i}^{+1}$ and $\sigma_i^- = \sigma_{i}^{-1}$; set $V_i = 0$ for $i<0$ and $i>D$. We define \begin{equation} \partial(\sigma_{i}^\epsilon) = \sigma_{i-1}^\epsilon + (-1)^i \sigma_{i-1}^{-\epsilon} \label{hemiboundary-eq} \end{equation} for $\epsilon = \pm 1$. This means that $\mathsf{V}$ is the unreduced chain complex corresponding to the standard hemispherical cell decomposition of the $D$-sphere. A $\mathbb{Z}_2$-action is given by mapping $\sigma_{i}^+$ to $\sigma_{i}^{-}$ and vice versa. This corresponds to the antipodal action on the sphere, and $\mathsf{V}^+$ consequently corresponds to the minimal cell decomposition of real projective $D$-space \cite[Ex. 2.42]{Hatcher}. We refer to $\mathsf{V}$ as the standard hemispherical chain complex over $R$ of degree $D$. Using Theorem~\ref{th:manifold}, we deduce the following. 
\begin{lemma} Let $\mathsf{V}$ and $\mathsf{W}$ be the standard hemispherical chain complexes over $R$ of degree $D = d-2$ and $N = n-2$, respectively. Then, the homology of $\left(\mathsf{V} \otimes \mathsf{W}\right)^+$ is isomorphic to the unreduced homology over $R$ of $B_{d,n}$. \end{lemma} \begin{lemma} \label{lem:proj} Suppose that $\mathsf{V}$ is the standard hemispherical chain complex over $R$ of degree $D$. Then, \[ H_i(\mathsf{V}^+) \cong \left\{ \begin{array}{ll} R & \text{if $i=0$,}\\ R/(2R) & \text{if $1\le i < D$ and $i$ is odd,}\\ R & \text{if $i=D$ and $D$ is odd,}\\ 0 & \text{otherwise,} \end{array} \right. \] and \[ H_i(\mathsf{V}^-) \cong \left\{ \begin{array}{ll} R/(2R) & \text{if $0 \le i < D$ and $i$ is even,}\\ R & \text{if $i=D$ and $D$ is even,}\\ 0 & \text{otherwise.} \end{array} \right. \] \end{lemma} \begin{proof} For each $k$, we have that $\sigma_k^+ = \sigma_k^-$ in the group $V_k^+$. Hence, \[ \partial(\sigma_{i}^+) = \sigma_{i-1}^+ + (-1)^i \sigma_{i-1}^- = (1+(-1)^i)\sigma_{i-1}^+ \] in $V_{i-1}^+$, which is $2\sigma_{i-1}^+$ if $i$ is even and $0$ if $i$ is odd. Since the characteristic of $R$ is odd or zero, the first claim follows. For the second claim, note that $\sigma_k^- = - \sigma_k^+$ in the group $V_k^-$. Therefore, \[ \partial(\sigma_{i}^+) = \sigma_{i-1}^+ + (-1)^i \sigma_{i-1}^- = (1-(-1)^i)\sigma_{i-1}^+ \] in $V_{i-1}^-$, which is $2\sigma_{i-1}^+$ if $i$ is odd and $0$ if $i$ is even. This proves the claim. \end{proof} \subsection{$2$ is a unit in $R$} \label{se:2unit} First we consider the case that $2$ is a unit in $R$. Let $\mathsf{C}$ be a chain complex of $R$-modules with an involution $\iota$. Then, we may write each element $c$ uniquely as a sum $c = a + b$ such that $a = \iota(a)$ and $b = - \iota(b)$. Namely, $a = \frac{1}{2}(c + \iota(c))$ and $b = \frac{1}{2}(c - \iota(c))$. 
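The stated formulas are easily checked:
\[
\iota(a) = \tfrac{1}{2}\bigl(\iota(c) + \iota^2(c)\bigr) = \tfrac{1}{2}\bigl(\iota(c) + c\bigr) = a,
\qquad
\iota(b) = \tfrac{1}{2}\bigl(\iota(c) - \iota^2(c)\bigr) = -b,
\]
and $a+b = c$. The decomposition is also unique: if $a+b = a'+b'$ with $\iota(a')=a'$ and $\iota(b')=-b'$, then $x = a-a' = b'-b$ satisfies $\iota(x) = x = -x$, so $2x = 0$ and hence $x = 0$, since $2$ is a unit in $R$.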
This means that $\mathsf{C}^+$ can be identified with the subcomplex of elements $a$ satisfying $a - \iota(a) = 0$ and $\mathsf{C}^-$ with the subcomplex of elements $b$ satisfying $b + \iota(b) = 0$. Moreover, we may identify $\mathsf{C}$ with the direct sum $\mathsf{C}^+ \oplus \mathsf{C}^-$. Applying the above to each of $\mathsf{V}$ and $\mathsf{W}$, we obtain that \[ \mathsf{V} \otimes \mathsf{W} = \left(\mathsf{V}^+ \otimes \mathsf{W}^+\right) \oplus \left(\mathsf{V}^- \otimes \mathsf{W}^-\right) \oplus \left(\mathsf{V}^+ \otimes \mathsf{W}^-\right) \oplus \left(\mathsf{V}^- \otimes \mathsf{W}^+\right). \] We have that $\iota(x) = x$ if \[ x \in \left(\mathsf{V}^+ \otimes \mathsf{W}^+\right) \oplus \left(\mathsf{V}^- \otimes \mathsf{W}^-\right) \] and $\iota(x) = -x$ if \[ x \in \left(\mathsf{V}^+ \otimes \mathsf{W}^-\right) \oplus \left(\mathsf{V}^- \otimes \mathsf{W}^+\right). \] As a consequence, \[ \left(\mathsf{V} \otimes \mathsf{W}\right)^+ = \left(\mathsf{V}^+ \otimes \mathsf{W}^+\right) \oplus \left(\mathsf{V}^- \otimes \mathsf{W}^-\right). \] Applying K\"unneth's theorem \cite[Th. 3B.5]{Hatcher}, we obtain the following result. \begin{proposition} \label{prop:kunneth} If $H_i(\mathsf{V}^+)$ and $H_i(\mathsf{V}^-)$ are finitely generated free $R$-modules for each $i$, then \[ H_d(\left(\mathsf{V} \otimes \mathsf{W}\right)^+) \cong \sum_{i+j = d} \left(H_i(\mathsf{V}^+) \otimes H_j(\mathsf{W}^+)\right) \oplus \left(H_i(\mathsf{V}^-) \otimes H_j(\mathsf{W}^-)\right). \] \end{proposition} Suppose that $\mathsf{V}$ is the standard hemispherical chain complex of degree $D$. By Lemma~\ref{lem:proj}, we have that $H_i(\mathsf{V}^+) \cong H_i(\mathsf{V}^-) \cong 0$ unless $i = 0$ or $i=D$. Moreover, $H_0(\mathsf{V}^+) \cong R$ and $H_0(\mathsf{V}^-) \cong 0$. Finally, if $D$ is odd, then $H_D(\mathsf{V}^+) \cong R$ and $H_D(\mathsf{V}^-) \cong 0$. If instead $D$ is even, then $H_D(\mathsf{V}^+) \cong 0$ and $H_D(\mathsf{V}^-) \cong R$. 
The following assertion is an immediate consequence: \begin{proposition} \label{prop:kunneth-hemi} If $\mathsf{V}$ is the standard hemispherical chain complex of degree $D$, then the homology of $\left(\mathsf{V} \otimes \mathsf{W}\right)^+$ consists of one copy of $H_j(\mathsf{W}^+)$ in degree $j$ for each $j$ and one copy of either $H_j(\mathsf{W}^+)$ or $H_j(\mathsf{W}^-)$ in degree $D+j$ for each $j$. The former is the case if $D$ is odd, the latter if $D$ is even. \end{proposition} Identifying $\mathsf{V}$ and $\mathsf{W}$ with the cellular chain complexes corresponding to the hemispherical cell decompositions of the $(d-2)$-sphere and the $(n-2)$-sphere, respectively, yields the following corollary, which could also be deduced using transfer methods; see e.g.\ Bredon \cite[\S~ III]{Bredon}. \begin{corollary} \label{cor:freepart} If $2$ is invertible in the coefficient ring $R$, then the reduced homology of $B_{d,n}$ is given by \[ \widetilde{H}_i(B_{d,n}) \cong \begin{cases} R^2 & \text{if $i+2=d=n$ and $i$ is odd,}\\ R & \text{if $i+2=n\neq d$ and $i$ is odd,}\\ R & \text{if $i+2=d \neq n$ and $i$ is odd,}\\ R & \text{if $i=d+n-4$ and $i$ is even,}\\ 0 & \text{otherwise.} \end{cases} \] In particular, the free part of the integral homology of $B_{d,n}$ is as described in Theorem~\ref{th:homology}. \end{corollary} \begin{proof} This is an immediate consequence of Lemma~\ref{lem:proj} and Proposition~\ref{prop:kunneth-hemi}. For the free part of the integral homology, set $R = \mathbb{Q}$ and apply the universal coefficient theorem; see e.g.\ \cite[Cor. 3A.6]{Hatcher}. \end{proof} \subsection{Discrete Morse theory on chain complexes} \label{se:dmt} To compute the homology of $\left({\sf V} \otimes {\sf W}\right)^+$ in the case that $2$ is not a unit in $R$, we will use an algebraic version \cite[\S~4.4]{thesis} of discrete Morse theory \cite{Forman}. The general situation is that we have a chain complex ${\sf C}$ of finitely generated $R$-modules $C_i$. 
Write $C = \bigoplus_i C_i$. Assume that $C$ can be written as a direct sum of three $R$-modules $A$, $B$, and $U$ such that $f = \alpha \circ \partial$ defines an isomorphism $f: B \rightarrow A$, where $\alpha(a+b+u) = a$ for $a \in A, b \in B, u \in U$. Let $h: A \rightarrow B$ be the inverse of $f|_B$. For any chain group element $x$, define $\beta(x) = h \circ f(x)$ and $\hat{U} = (\text{id} -\beta)(U)$. Let $\hat{U}_k$ be the component of $\hat{U}$ in degree $k$. \begin{proposition}[{\cite[Th. 4.16, Cor. 4.17]{thesis}}] \label{prop:dmt} With notation as above, we have that \[ \begin{CD} \hat{\mathsf{U}} : \cdots @>\partial>> \hat{U}_{k+1} @>\partial>> \hat{U}_{k} @>\partial>> \hat{U}_{k-1} @>\partial>> \cdots \end{CD} \] forms a chain complex with the same homology as the original chain complex $\mathsf{C}$. Moreover, for each $u \in U$, the element $\beta(u)$ is the unique element $b \in B$ with the property that $\partial(u-b) \in B+U$. \end{proposition} \begin{proof} For the reader's convenience, we give a proof outline. Let $u \in U$. Note that \[ f(u-\beta(u)) = f(u) - f\circ h\circ f(u) = f(u) - f(u) = 0; \] hence $\partial(u - \beta(u))$ is of the form $u_0 - b_0$, where $u_0 \in U$ and $b_0 \in B$. Since $u_0-b_0$ is a cycle, we have that $f(b_0) = f(u_0)$ and hence \[ u_0 - b_0 = u_0 - h \circ f(b_0) = u_0 - h \circ f(u_0) = u_0 - \beta(u_0), \] which implies that $\partial(\hat{U}) \subseteq \hat{U}$. We claim that we may write $C$ as a direct sum $\partial(B) \oplus B \oplus \hat{U}$. Namely, let $x \in C$, and define $\hat{a} = \partial \circ h \circ \alpha(x)$. We have that $\alpha(\hat{a}) = \alpha(x)$, which implies that $x - \hat{a} = b+u$ for some $b \in B$ and $u \in U$. Defining $\hat{b} = b + \beta(u)$ and $\hat{u} = u - \beta(u)$, we obtain that we may write $x = \hat{a} + \hat{b} + \hat{u}$, where $\hat{a} \in \partial(B)$, $\hat{b} \in B$, and $\hat{u} \in \hat{U}$. 
It is easy to show that this decomposition of $x$ is unique; hence we obtain the claim. Write $M = \partial(B) \oplus B$. As $\partial(M) \subseteq M$, we deduce that ${\sf C}$ splits into the direct sum of $\hat{\sf U}$ and \[ \begin{CD} \mathsf{M} : \cdots @>\partial>> M_{k+1} @>\partial>> M_{k} @>\partial>> M_{k-1} @>\partial>> \cdots \end{CD}. \] The homology of the latter complex is zero, because $\partial : B \rightarrow \partial(B)$ is an isomorphism. As a consequence, we are done. The very last statement in the proposition is immediate from the fact that $f : B \rightarrow A$ is an isomorphism. \end{proof} For the connection to discrete Morse theory \cite{Forman}, consider a matching on the set of cells in a cell complex such that each pair in the matching is of the form $(\sigma,\tau)$, where $\sigma$ is a regular codimension one face of $\tau$. Let $A$ be the free $R$-module generated by cells matched with larger cells, let $B$ be generated by cells matched with smaller cells, and let $U$ be generated by unmatched cells. Then, the map $f : B \rightarrow A$ is an isomorphism if the matching is a Morse matching \cite{Chari}, and $\hat{\sf U}$ is the Morse complex associated to the matching. \subsection{$2$ is not a unit in $R$} \label{se:2nonunit} In the case that $2$ is not a unit, the discussion in Section~\ref{se:2unit} does not apply, as the chain complex no longer splits in the manner described. In fact, the situation is considerably more complicated. For this reason, we only examine the special case that ${\sf V}$ is the standard hemispherical chain complex of degree $D$. We also need some assumptions on $\mathsf{W}$. Specifically, we assume that we may write $W_j$ as a direct product $\LG{j} \times \LG{j}$, where $\LG{j}$ is a finitely generated $R$-module for each $j \in \mathbb{Z}$. Moreover, we assume that $\iota(w_0,w_1) = (w_1,w_0)$ for each element $(w_0,w_1) \in W_j$. 
For our main result to hold, we must assume that $L_j = 0$ unless $0 \le j \le D$. However, we will not actually use this assumption until Lemma~\ref{lem:U0}. We make no specific assumptions on the boundary operator on $\mathsf{W}$, which hence is of the general form \[ \partial(w_0,w_1) = (p(w_0)+r(w_1), q(w_0) + s(w_1)), \] where $p, q, r, s$ are maps $\LG{j} \rightarrow \LG{j-1}$ such that $\partial^2 = 0$ and $\iota \partial = \partial \iota$. Since $\iota(w,0) = (0,w)$, we have that \[ (q(w),p(w)) = \iota \circ \partial(w,0) = \partial\circ\iota(w,0) = (r(w),s(w)) \] and hence that \[ \partial(w_0,w_1) = (p(w_0)+ q(w_1), p(w_1) + q(w_0)). \] For $\partial^2$ to be zero, it is necessary and sufficient that $p^2+q^2 = pq + qp = 0$. Recall that we want to examine $(\mathsf{V} \otimes \mathsf{W})^+$. In this chain complex, we have for each $i$ and $w_0, w_1 \in \LG{j}$ the identity \[ \sigma_i^- \otimes (w_0,w_1) = \sigma_i^+ \otimes (w_1,w_0). \] In particular, we may identify $(V_i \otimes W_j)^+$ with $\langle \sigma_i^+\rangle \otimes W_j \cong W_j \mathbf{e}_{i,j}$, where $\mathbf{e}_{i,j}$ is a formal variable. Writing $\negone{i}$ for $(-1)^i$ for brevity, also note that \begin{eqnarray*} \partial(\sigma_i^+ \otimes (w_0,w_1)) &=& (\sigma_{i-1}^+ + \negone{i}\sigma_{i-1}^-) \otimes (w_0,w_1) + \negone{i}\sigma_i^+ \otimes \partial(w_0,w_1) \\ &=& \sigma_{i-1}^+ \otimes (w_0+\negone{i}w_1,w_1+\negone{i}w_0) \mbox{}+ \negone{i} \sigma_i^+ \otimes \partial(w_0,w_1). \end{eqnarray*} Identifying $(V_i \otimes W_j)^+$ with $W_j \mathbf{e}_{i,j}$ as described above, we may express this as \[ \partial((w_0,w_1)\mathbf{e}_{i,j}) = (w_0+\negone{i}w_1,w_1+\negone{i}w_0)\mathbf{e}_{i-1,j} + \negone{i} \partial(w_0,w_1)\mathbf{e}_{i,j-1}. \] We want to use Proposition~\ref{prop:dmt} to simplify $(\mathsf{V} \otimes \mathsf{W})^+$.
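For completeness, the identity $p^2+q^2 = pq+qp = 0$ stated above can be read off from the computation
\[
\partial^2(w,0) = \partial\bigl(p(w),\, q(w)\bigr) = \bigl(p^2(w)+q^2(w),\; pq(w)+qp(w)\bigr),
\]
the case $(0,w)$ following by applying $\iota$. As a concrete check in the case relevant to $B_{d,n}$, where $\mathsf{W}$ is the standard hemispherical chain complex of degree $N$, we have $\LG{j} = R$ with $p = \mathrm{id}$ and $q = \negone{j}\,\mathrm{id}$ in degree $j$, and indeed $p^2+q^2 = 1 + \negone{j-1}\negone{j} = 0$ and $pq+qp = \negone{j} + \negone{j-1} = 0$.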
For $0 \le i \le D$ and $j \in \mathbb{Z}$, define \begin{eqnarray*} {\bf a}_{i,j} &=& (1,0)\mathbf{e}_{i,j},\\ {\bf b}_{i,j} &=& (0,1)\mathbf{e}_{i,j}. \end{eqnarray*} Note that $(V_i \otimes W_j)^+ \cong (\LG{j} \times \LG{j})\mathbf{e}_{i,j}$ is isomorphic to the direct sum of $\LG{j} {\bf a}_{i,j}$ and $\LG{j}{\bf b}_{i,j}$. Define \begin{table}[htb] \caption{The table indicates for different $i$ the groups in which $L_j{\bf a}_{i,j}$ and $L_j{\bf b}_{i,j}$ are contained.} \begin{tabular}{|c||l|l|} \hline $i$ & $L_j{\bf a}_{i,j}$ & $L_j{\bf b}_{i,j}$ \\ \hline $D$ & $U^{(D)}$ & $B$ \\ $D-1$ & $A$ & $B$ \\ $\cdots$ & $\cdots$ & $\cdots$ \\ $1$ & $A$ & $B$ \\ $0$ & $A$ & $U^{(0)}$ \\ \hline \end{tabular} \label{tab:ABU} \end{table} \begin{eqnarray*} A &=& \bigoplus_{i=0}^{D-1} \bigoplus_{j} \LG{j}{\bf a}_{i,j}, \\ B &=& \bigoplus_{i=1}^D \bigoplus_{j} \LG{j}{\bf b}_{i,j}, \\ U^{(D)} &=& \bigoplus_{j} U_{D+j}^{(D)}, \text{where } U_{D+j}^{(D)} = \LG{j}{\bf a}_{D,j}, \\ U^{(0)} &=& \bigoplus_{j} U_j^{(0)}, \text{where } U_j^{(0)} = \LG{j}{\bf b}_{0,j}. \end{eqnarray*} See Table~\ref{tab:ABU} for a schematic description. Write $U = U^{(0)} \oplus U^{(D)}$. Note that the direct sum of $A$, $B$, and $U$ constitutes the full chain complex $\left({\sf V} \otimes {\sf W}\right)^+$. It is clear that we obtain an isomorphism $g : B \rightarrow A$ by assigning $g(w{\bf b}_{i,j}) = w{\bf a}_{i-1,j}$ for each $i$ and $j$ and each $w \in L_j$. We now show that discrete Morse theory indeed yields an isomorphism between $B$ and $A$, though the assignment is slightly more complicated. Let $\alpha$ be the projection map from $A + B + U$ to $A$ as defined in Section~\ref{se:dmt}, and let $f = \alpha \circ \partial$. \begin{lemma} \label{lem:fiso} We have that $f|_B : B \rightarrow A$ defines an isomorphism. 
\end{lemma} \begin{proof} For $w \in \LG{j}$, note that \begin{eqnarray*} \partial(w{\bf b}_{i,j}) &=& (\negone{i}w,w){\bf e}_{i-1,j} + \negone{i}(q(w),p(w)){\bf e}_{i,j-1} \\ &=& \negone{i}w{\bf a}_{i-1,j} + w{\bf b}_{i-1,j} + \negone{i}q(w){\bf a}_{i,j-1} + \negone{i}p(w){\bf b}_{i,j-1}. \end{eqnarray*} In particular, \[ f(w{\bf b}_{i,j}) = \left\{ \begin{array}{ll} \negone{i}w{\bf a}_{i-1,j} & \text{if $i = D$},\\ \negone{i}(w{\bf a}_{i-1,j} + q(w) {\bf a}_{i,j-1}) & \text{if $1\le i\le D-1$}. \end{array} \right. \] For $i=D$, the term $q(w){\bf a}_{D,j-1}$ is not present, as it belongs to $U$ rather than $A$. Each element $x$ of degree $k$ in $B$ is of the form $x = \sum_{i=1}^D w_{k-i} {\bf b}_{i,k-i}$, where $w_{k-i} \in \LG{k-i}$. We may express $f(x)$ in operator matrix form as \[ f(x) = \left( \begin{array}{ccccccc} \negone{1}I & 0 & 0 & \cdots & 0 & 0 \\ \negone{1}q & \negone{2}I & 0 & \cdots & 0 & 0 \\ 0 & \negone{2}q & \negone{3}I & \cdots & 0 & 0 \\ \cdots & \cdots & \cdots & \cdots& \cdots& \cdots\\ 0 & 0 & 0 & \cdots & \negone{D-1}I & 0 \\ 0 & 0 & 0 & \cdots & \negone{D-1}q & \negone{D}I \end{array} \right) \left( \begin{array}{l} w_{k-1}\\ w_{k-2} \\ w_{k-3} \\ \cdots \\ w_{k-D+1} \\ w_{k-D} \end{array} \right) \] in the basis $({\bf a}_{0,k-1}, {\bf a}_{1,k-2}, \ldots, {\bf a}_{D-3,k-D+2}, {\bf a}_{D-2,k-D+1}, {\bf a}_{D-1,k-D})$. Now, the operator matrix is invertible; its inverse is \[ \left( \begin{array}{llllll} \negone{1}I & 0 & 0 & \cdots & 0 & 0 \\ \negone{1}q & \negone{2}I & 0 & \cdots & 0 & 0 \\ \negone{1}q^2 & \negone{2}q & \negone{3}I & \cdots & 0 & 0 \\ \cdots & \cdots & \cdots & \cdots& \cdots& \cdots\\ \negone{1}q^{D-2} & \negone{2}q^{D-3} & \negone{3}q^{D-4} & \cdots & \negone{D-1}I & 0 \\ \negone{1}q^{D-1} & \negone{2}q^{D-2} & \negone{3}q^{D-3} & \cdots & \negone{D-1}q & \negone{D}I \end{array} \right). \] Since this is true in each degree $k$, we deduce that $f|_B$ is an isomorphism. 
\end{proof} Let $\beta$ and $\hat{\sf U}$ be defined as in Proposition~\ref{prop:dmt}. \begin{corollary} \label{cor:dmt} We have that \[ \begin{CD} \hat{\mathsf{U}} : \cdots @>\partial>> \hat{U}_{k+1} @>\partial>> \hat{U}_{k} @>\partial>> \hat{U}_{k-1} @>\partial>> \cdots \end{CD} \] forms a chain complex with the same homology as $(\mathsf{V} \otimes \mathsf{W})^+$. \end{corollary} \begin{proof} Apply Proposition~\ref{prop:dmt} and Lemma~\ref{lem:fiso}. \end{proof} Write $\hat{u} = u - \beta(u)$, $\hat{U}_k^{(D)} = (\text{id} - \beta)(U_k^{(D)})$, and $\hat{U}_k^{(0)} = (\text{id} - \beta)(U_k^{(0)})$. Moreover, define ${\bf \hat{a}}_{D,j} = {\bf a}_{D,j} + \negone{D+1}{\bf b}_{D,j} = (1,\negone{D+1}){\bf e}_{D,j}$. \begin{lemma} \label{lem:Un} Consider an element in $U^{(D)}$ of the form $u = w {\bf a}_{D,j}$, where $w \in \LG{j}$. Then, we have that $\hat{u} = w {\bf \hat{a}}_{D,j}$ and \[ \partial(w {\bf \hat{a}}_{D,j}) = \negone{D}(p(w)+\negone{D+1}q(w)){\bf \hat{a}}_{D,j-1}. \] In particular, the groups $\hat{U}_j^{(D)}$ constitute a subcomplex $\hat{\sf U}^{(D)}$ of $\hat{\sf U}$, and \[ H_*(\hat{\sf U}^{(D)}) \cong \left\{ \begin{array}{ll} H_{*-D}({\sf W}^+) & \text{if $D$ is odd},\\ H_{*-D}({\sf W}^-) & \text{if $D$ is even}. \end{array} \right. \] \end{lemma} \begin{proof} The formula for $\partial(w {\bf \hat{a}}_{D,j})$ is just a straightforward computation. By Proposition~\ref{prop:dmt}, it follows that $\hat{u} = w {\bf \hat{a}}_{D,j}$. For the last statement, we may identify $\mathsf{W}^+$ with the chain complex with chain groups $\LG{j}$ and with the boundary map given by \[ \partial(w) = p(w)+q(w). \] Similarly, we may identify $\mathsf{W}^-$ with the chain complex with chain groups $\LG{j}$ and with the boundary map given by \[ \partial(w) = p(w)-q(w). \] Up to a shift in degree by $D$, the chain groups and the boundary map of $\hat{\sf U}^{(D)}$ are isomorphic to those of either $\mathsf{W}^+$ or $\mathsf{W}^-$, depending on the parity of $D$. 
As a consequence, we obtain the statement. \end{proof} For $a \in A$, $b \in B$, and $u \in U$, write $\gamma(a+b+u) = u$. Recall the assumption that $L_j = 0$ unless $0 \le j \le D$. \begin{lemma} \label{lem:U0} Let $0 \le k \le D$ and let $u = w{\bf b}_{0,k}$, where $w \in \LG{k}$. Then, \begin{equation} \hat{u} = \varphi(w{\bf b}_{0,k}) := \sum_{i=0}^D q^i(w){\bf b}_{i,k-i} \label{eq:hatu0} \end{equation} and \begin{equation} \partial(\varphi(w{\bf b}_{0,k})) = \varphi((p(w)+q(w)){\bf b}_{0,k-1}). \label{eq:hatu0b} \end{equation} In particular, the groups $\hat{U}_j^{(0)}$ constitute a subcomplex $\hat{\sf U}^{(0)}$ of $\hat{\sf U}$, and \[ H_*(\hat{\sf U}^{(0)}) \cong H_{*}({\sf W}^+). \] \end{lemma} \begin{proof} The boundary of the right-hand side of (\ref{eq:hatu0}) equals \begin{eqnarray*} & & \sum_{i=1}^D q^i(w)(\negone{i}{\bf a}_{i-1,k-i}+{\bf b}_{i-1,k-i}) \\ && \mbox{} + \sum_{i=0}^{D} \negone{i}q^{i+1}(w){\bf a}_{i,k-i-1} + \sum_{i=0}^{D} \negone{i} pq^i(w) {\bf b}_{i,k-i-1} \\ &=& \sum_{i=0}^{D-1} \negone{i} (pq^i(w) +\negone{i}q^{i+1}(w)) {\bf b}_{i,k-i-1} \\ && + \mbox{} \negone{D} q^{D+1}(w){\bf a}_{D,k-D-1} + \negone{D} pq^D(w) {\bf b}_{D,k-D-1} \\ &=& \sum_{i=0}^{D-1} \negone{i} (pq^i(w) +\negone{i}q^{i+1}(w)) {\bf b}_{i,k-i-1} \\ &=& \sum_{i=0}^{D-1} q^i(p(w)+q(w)) {\bf b}_{i,k-i-1} = \varphi((p(w)+q(w)){\bf b}_{0,k-1}). \end{eqnarray*} The second equality is because $k\le D$ and hence $pq^D(w) = q^{D+1}(w) = 0$. The third equality follows from repeated application of the identity $pq = - qp$. This yields (\ref{eq:hatu0b}). Since the element in the right-hand side of (\ref{eq:hatu0b}) is an element in $B+U^{(0)}$, Proposition~\ref{prop:dmt} yields (\ref{eq:hatu0}). For the final claim in the lemma, note that the chain groups and the boundary map of $\hat{U}^{(0)}$ are isomorphic to those of ${\sf W}^+$.
\end{proof} Without the assumption that $\LG{j} = 0$ unless $0 \le j \le D$, (\ref{eq:hatu0b}) would not necessarily be true, and the groups $\hat{U}_j^{(0)}$ might not constitute a subcomplex of $\hat{\sf U}$. Namely, the term $q^{D+1}(w){\bf a}_{D,k-D-1} \in U^{(D)}$ in the expansion of $\partial(\hat{u})$ is nonzero if $q^{D+1}(w)$ is nonzero. \begin{corollary} \label{cor:Usplit} $\hat{\mathsf{U}}$ splits into two complexes \[ \begin{CD} \hat{\mathsf{U}}^{(D)} : \hat{U}^{(D)}_{2D} @>\partial>> \hat{U}^{(D)}_{2D-1} @>\partial>> \cdots @> \partial>> \hat{U}^{(D)}_{D+1} @>\partial>> \hat{U}^{(D)}_{D},\\ \hat{\mathsf{U}}^{(0)} : \hat{U}^{(0)}_{D} @>\partial>> \hat{U}^{(0)}_{D-1} @>\partial>> \cdots @> \partial>> \hat{U}^{(0)}_{1} @>\partial>> \hat{U}^{(0)}_{0}. \end{CD} \] \end{corollary} \begin{proof} This is an immediate consequence of Lemmas~\ref{lem:Un} and \ref{lem:U0}. \end{proof} Applying Lemmas~\ref{lem:Un} and \ref{lem:U0} and using Corollaries~\ref{cor:dmt} and \ref{cor:Usplit}, we obtain a description of the homology of $\left(\mathsf{V} \otimes \mathsf{W}\right)^+$ in terms of $\mathsf{W}^+$ and $\mathsf{W}^-$. \begin{theorem} \label{th:Vhemi} If $D$ is odd, then \begin{eqnarray*} H_i(\left(\mathsf{V} \otimes \mathsf{W}\right)^+) &\cong& H_i(\mathsf{W}^+) \oplus H_{i-D}(\mathsf{W}^+)\\ &\cong& \left\{ \begin{array}{ll} H_i(\mathsf{W}^+) & \text{if $0 \le i \le D-1$}, \\ H_D(\mathsf{W}^+) \oplus H_0(\mathsf{W}^+) & \text{if $i = D$}, \\ H_{i-D}(\mathsf{W}^+) & \text{if $D+1 \le i \le 2D$},\\ 0 & \text{otherwise.} \end{array} \right. \end{eqnarray*} If $D$ is even, then \begin{eqnarray*} H_i(\left(\mathsf{V} \otimes \mathsf{W}\right)^+) &\cong& H_i(\mathsf{W}^+) \oplus H_{i-D}(\mathsf{W}^-)\\ &\cong& \left\{ \begin{array}{ll} H_i(\mathsf{W}^+) & \text{if $0 \le i \le D-1$}, \\ H_D(\mathsf{W}^+) \oplus H_0(\mathsf{W}^-) & \text{if $i = D$}, \\ H_{i-D}(\mathsf{W}^-) & \text{if $D+1 \le i \le 2D$},\\ 0 & \text{otherwise.} \end{array} \right.
\end{eqnarray*} \end{theorem} In the case of $B_{d,n}$, we already know the free part of the homology by Corollary~\ref{cor:freepart}; hence we may focus on the torsion part. \begin{corollary} \label{cor:torpart} Let $d \ge n$. Then, the torsion part $T_*(B_{d,n};\mathbb{Z})$ of $H_*(B_{d,n};\mathbb{Z})$ is an elementary $2$-group satisfying \[ T_i(B_{d,n};\mathbb{Z}) \cong \left\{ \begin{array}{ll} \mathbb{Z}_2 & \text{if $1 \le i \le n-3$ and $i$ is odd}, \\ \mathbb{Z}_2 & \text{if $d-2 \le i \le d+n-5$ and $i$ is even}, \\ 0 & \text{otherwise}. \end{array} \right. \] \end{corollary} \begin{proof} This is an immediate consequence of Lemma~\ref{lem:proj} and Theorem~\ref{th:Vhemi}; let $\mathsf{W}$ be the standard hemispherical cell decomposition of the $(n-2)$-sphere. \end{proof} Combining Corollaries~\ref{cor:freepart} and \ref{cor:torpart}, we obtain Theorem~\ref{th:homology}. \end{document}
\begin{document} \title{The Strong Tree Property at Successors of Singular Cardinals} \author[Laura Fontanella ]{Laura Fontanella} \urladdr{http://www.logique.jussieu.fr/$\sim$fontanella} \address{ Equipe de Logique Math\'ematique, Universit\'e Paris Diderot Paris 7, UFR de math\'ematiques case 7012, site Chevaleret, 75205 Paris Cedex 13, France} \address{Kurt G\"{o}del Research Center for Mathematical Logic, University of Vienna, Department of Mathematics\\ W\"{a}hringer Strasse $25,$ Vienna $1090$ (Austria) } \email{[email protected]} \subjclass[2010]{03E55 } \keywords{tree property, large cardinals, forcing.} \date{9 February 2012} \maketitle \begin{abstract} An inaccessible cardinal is strongly compact if, and only if, it satisfies the strong tree property. We prove that if there is a model of ${\rm ZFC}$ with infinitely many supercompact cardinals, then there is a model of ${\rm ZFC}$ where $\aleph_{\omega+1}$ has the strong tree property. Moreover, we prove that every successor of a singular limit of strongly compact cardinals has the strong tree property. \end{abstract} \ \section{Introduction} The strong tree property is a strong generalization of the usual tree property. Given a regular cardinal $\kappa,$ we say that $\kappa$ has the tree property when every $\kappa$-tree (i.e. every tree of height $\kappa$ with levels of size less than $\kappa$) has a branch of length $\kappa.$ K\"{o}nig's Lemma establishes that the tree property holds at $\aleph_0.$ On the other hand, $\aleph_1$ does not satisfy the tree property, and for larger regular cardinals whether or not they satisfy the tree property is independent of ${\rm ZFC}.$ It is well known that the tree property provides a combinatorial characterization of weak compactness. \noindent {\bf Theorem:} (Erd\H{o}s and Tarski \cite{ErdosTarski} $1961$) Assume $\kappa$ is an inaccessible cardinal, then $\kappa$ is weakly compact if and only if it satisfies the tree property.
Strongly compact and supercompact cardinals admit similar characterizations. \noindent {\bf Theorem:} If $\kappa$ is an inaccessible cardinal, then \begin{enumerate} \item $\kappa$ is strongly compact if and only if it satisfies the strong tree property (Jech \cite{Jech} $1973,$ Di Prisco - Zwicker \cite{DiPriscoZwicker} $1980$ and Donder - Weiss \cite{WeissPhd} $2010$); \item $\kappa$ is supercompact if and only if it satisfies the super tree property (Jech \cite{Jech} $1973,$ Magidor \cite{Magidor} $1974$ and Donder - Weiss \cite{WeissPhd} $2010$).\\ \end{enumerate} \noindent The strong and super tree properties generalize the usual tree property to the combinatorics of $[\lambda]^{<\kappa};$ in fact, they concern special structures known as \emph{$(\kappa,\lambda)$-trees} that can be seen as ``trees over $[\lambda]^{<\kappa}$'' whose ``levels'' have size less than $\kappa$ (this notion will be defined in \S \ref{sec:maindef}). The super tree property implies the strong tree property, which in turn entails the usual tree property. While the previous characterizations date back to the early $1970$s, a systematic study of the strong and the super tree properties has only recently been undertaken by Weiss\footnote{In Weiss' terminology, the strong tree property at a regular cardinal $\kappa$ corresponds to the property $(\kappa, \lambda)$-TP for all $\lambda\geq \kappa,$ while the super tree property corresponds to $(\kappa, \lambda)$-ITP for all $\lambda\geq \kappa.$} who worked on these properties in his Ph.D.\ thesis \cite{WeissPhd} and proved that even small cardinals can consistently satisfy the strong and the super tree properties, if we assume large cardinals.\\ There is a huge literature concerning the construction of models of set theory in which several distinct regular cardinals satisfy the usual tree property. We list a few classical results of that sort.
\begin{enumerate} \item (Mitchell \cite{Mitchell72} $1972$) Let $\tau$ be a regular cardinal such that $\tau^{<\tau}= \tau.$ Assume there is a model of ZFC with a weakly compact cardinal, then there is a model of ZFC where $\tau^{++}$ has the tree property. \item (Cummings and Foreman \cite{CummingsForeman} $1998$) Assume there is a model of ZFC with infinitely many supercompact cardinals, then there is a model of ZFC where every cardinal of the form $\aleph_n$ with $2\leq n<\omega$ has the tree property. \item (Magidor and Shelah \cite{MagidorShelah} $1996$) Assume there is a model of ZFC with an increasing sequence $\langle \lambda_n\rangle_{n<\omega}$ such that \begin{enumerate} \item if $\lambda= \sup_{n\geq 0} \lambda_n,$ then $\lambda_n$ is $\lambda^+$-supercompact, for all $n>0;$ \item $\lambda_0$ is the critical point of an embedding $j: V\to M$ where $j(\lambda_0)= \lambda_1$ and ${}^{{\lambda}^+}M\subseteq M.$ \end{enumerate} Then there is a model of ZFC where $\aleph_{\omega+1}$ has the tree property. \item (Sinapova \cite{Sinapova} $2012$) Assume there is a model of ZFC with infinitely many supercompact cardinals, then there is a model of ZFC where $\aleph_{\omega+1}$ has the tree property. \item (Neeman \cite{Neeman} $2012$) Assume there is a model of ZFC with infinitely many supercompact cardinals, then there is a model of ZFC where the tree property holds at every $\aleph_n$ with $n\geq 2$ and at $\aleph_{\omega+1}.$ \end{enumerate} All these results were oriented toward the construction of a model where the tree property holds simultaneously at every regular cardinal --- whether such a model can be found is still an open question. Some of these theorems can be generalized to the strong or the super tree property. 
In fact, Weiss proved that for every integer $n\geq 2,$ if we force with Mitchell's forcing over a supercompact cardinal, we get a model of set theory where even the super tree property holds at $\aleph_n.$ The author \cite{Fontanella, Fontanella2} (and independently Unger \cite{Unger}) proved that Weiss' result can be generalized to get a model where all cardinals of the form $\aleph_n$ with $2\leq n<\omega$ simultaneously satisfy the super tree property, starting from infinitely many supercompact cardinals. Indeed, a forcing construction by Cummings and Foreman produces a model where all the $\aleph_n$'s satisfy the super tree property. We are going to prove from large cardinals that even $\aleph_{\omega+1}$ can consistently satisfy the strong tree property. More precisely, we will prove the following theorem. {\bf Theorem:} If there is a model of ZFC with infinitely many supercompact cardinals, then there is a model of ZFC where $\aleph_{\omega+1}$ has the strong tree property. \noindent The proof of this theorem is motivated by Neeman's paper \cite{Neeman}. By generalizing a theorem by Magidor and Shelah \cite{MagidorShelah}, we will also prove the following result. {\bf Theorem:} If $\nu$ is a singular limit of strongly compact cardinals, then the strong tree property holds at $\nu^+.$ Moreover, we will weaken the hypothesis of the latter theorem by using a partition property satisfied by strongly compact cardinals. \section{Preliminaries and Notation} It may be useful to recall some terminology. The main reference for basic set theory is \cite{Jech}, while we will refer to \cite{Kanamori} for large cardinal notions and to \cite{Kunen} for the forcing technique.
We denote by $[A]^{<\kappa}$ the set of all subsets of $A$ of size less than $\kappa.$ We recall the definition of closed unbounded subset of $[A]^{<\kappa}$ (club) and stationary subset of $[A]^{<\kappa}.$ \begin{definition} Assume $\kappa$ is a cardinal, $A$ is a set of size $\geq \kappa$ and $C\subseteq [A]^{<\kappa}.$ \begin{enumerate} \item $C$ is \emph{unbounded}\index{unbounded} if for every $x\in [A]^{<\kappa}$ there exists $y\in C$ such that $x\subseteq y.$ \item $C$ is \emph{closed}\index{closed} if for any $\subseteq$-increasing chain $\langle x_{\gamma}\rangle_{\gamma<\alpha}$ of sets in $C,$ with $\alpha<\kappa,$ the union $\bigcup_{\gamma<\alpha} x_{\gamma}\in C.$ \item $C$ is a \emph{club}\index{club!-- of $[A]^{<\kappa}$} if it is closed and unbounded. \item $C$ is \emph{stationary}\index{stationary!-- subset of $[A]^{<\kappa}$} if $C$ has non-empty intersection with every club of $[A]^{<\kappa}.$ \end{enumerate} \end{definition} We will often use the following lemma. \begin{lemma}(Pressing Down Lemma)\index{Lemma!Pressing Down --} If $f$ is a regressive function on a stationary set $S\subseteq [A]^{<\kappa}$ (i.e. $f(x)\in x,$ for every non-empty $x\in S$), then there exists a stationary set $T\subseteq S$ such that $f$ is constant on $T.$ \end{lemma} For a proof of that lemma see \cite[Theorem 8.24]{Jechbook}.\\ Given a forcing $\mathbb{P}$ and conditions $p,q\in \mathbb{P},$ we use $p\leq q$ in the sense that $p$ is stronger than $q.$ If $\mathbb{P}$ is a forcing notion in a model $V,$ we will use $V^{\mathbb{P}}$ to denote the class of $\mathbb{P}$-names.
If $G\subseteq \mathbb{P}$ is a generic filter over $V,$ then $V[G]$ denotes the generic extension of $V$ determined by $G.$ If $a\in V^{\mathbb{P}}$ and $G\subseteq \mathbb{P}$ is generic over $V,$ then $a^G$ denotes the interpretation of $a$ in $V[G].$ Every element $x$ of the ground model $V$ is represented in a canonical way by a name $\check{x}.$ However, to simplify the notation, we will use just $x$ instead of $\check{x}$ in forcing formulas. \begin{definition} Given a forcing $\mathbb{P}$ and a cardinal $\kappa,$ we say that \begin{enumerate} \item $\mathbb{P}$ is \emph{$\kappa$-closed}\index{$\kappa$-closed} if and only if every decreasing sequence of conditions of $\mathbb{P}$ of size less than $\kappa$ has an infimum; \item $\mathbb{P}$ is \emph{$\kappa$-c.c.}\index{$\kappa$-c.c.} when every antichain of $\mathbb{P}$ has size less than $\kappa;$ \item $\mathbb{P}$ has the \emph{$\kappa$-covering property} if $\mathbb{P}$ preserves $\kappa$ as a cardinal and for every filter $G\subseteq \mathbb{P}$ generic over $V,$ every set $X\subseteq V$ in $V[G]$ of cardinality less than $\kappa$ is contained in a set $Y\in V$ of cardinality less than $\kappa$ in $V.$ \end{enumerate} \end{definition} We will use the following forcing notions. \begin{definition}(The L\'evy Collapse) Let $\kappa<\lambda$ be two cardinals with $\kappa$ regular, \begin{enumerate} \item we denote by ${\rm Coll}(\kappa, \lambda)$ the set of all partial functions $p$ from $\kappa$ to $\lambda$ with $\vert {\rm dom}(p)\vert<\kappa,$ ordered by reverse inclusion; \item if $\lambda$ is inaccessible, then ${\rm Coll}(\kappa, <\lambda):= \Pi_{\alpha<\lambda} {\rm Coll}(\kappa, \alpha).$ \end{enumerate} \end{definition} \begin{lemma}(L\'evy) Let $\kappa<\lambda$ be two cardinals with $\kappa$ regular. Then ${\rm Coll}(\kappa, \lambda)$ collapses $\lambda$ onto $\kappa,$ i.e. $\lambda$ has cardinality $\kappa$ in the generic extension.
Moreover, \begin{enumerate} \item every cardinal $\alpha\leq \kappa$ in $V$ remains a cardinal in $V[G];$ \item if $\lambda^{<\kappa}= \lambda,$ then every cardinal $\alpha>\lambda$ remains a cardinal in the extension. \end{enumerate} \end{lemma} For a proof of that lemma see for example \cite[Lemma 15.21]{Jech}. \begin{lemma}(L\'evy) Let $\kappa$ be regular and $\lambda>\kappa$ inaccessible. Then for every $G\subseteq {\rm Coll}(\kappa, <\lambda)$ generic over $V,$ \begin{enumerate} \item every $\alpha$ such that $\kappa\leq \alpha<\lambda$ has cardinality $\kappa$ in $V[G];$ \item every cardinal $\leq \kappa$ and every cardinal $\geq \lambda$ remains a cardinal in $V[G].$ \end{enumerate} Hence $V[G]\models \lambda= \kappa^+.$ \end{lemma} For a proof of that lemma see for example \cite[Theorem 15.22]{Jech}. We will assume familiarity with the theory of large cardinals and elementary embeddings, as developed for example in \cite{Kanamori}. We recall the definition of strongly compact and supercompact cardinals. \begin{definition} Let $\kappa$ be a regular uncountable cardinal, \begin{enumerate} \item $\kappa$ is \emph{strongly compact} if and only if every $\kappa$-complete filter on a set $S$ can be extended to a $\kappa$-complete ultrafilter on $S;$ \item $\kappa$ is \emph{supercompact} if and only if for every cardinal $\lambda\geq \kappa,$ there exists an elementary embedding $j: V\to M$ with critical point $\kappa$ such that $j(\kappa)>\lambda$ and $M$ is closed under subsets of size $\lambda.$ \end{enumerate} \end{definition} The following two lemmas will be used extensively in this paper.
\begin{lemma} (Laver) \cite{Laver} If $\kappa$ is a supercompact cardinal, then there exists $L: \kappa \to V_{\kappa}$ such that: for all $\lambda,$ for all $x\in H_{\lambda^+},$ there is an elementary embedding $j: V\to M$ with critical point $\kappa$ such that $j(\kappa)>\lambda,$ ${}^\lambda M\subseteq M$ and $j(L)(\kappa)= x.$ \end{lemma} \begin{lemma} (Silver) Let $j: M\to N$ be an elementary embedding between inner models of {\rm ZFC}. Let $\mathbb{P}\in M$ be a forcing and suppose that $G$ is $\mathbb{P}$-generic over $M,$ $H$ is $j(\mathbb{P})$-generic over $N,$ and $j[G]\subseteq H.$ Then there is a unique $j^*: M[G]\to N[H]$ such that $j^*\upharpoonright M= j$ and $j^*(G)= H.$ \end{lemma} \begin{proof} If $j[G]\subseteq H,$ then the map $j^*(\dot{x}^{G})= j(\dot{x})^{H}$ is well defined and satisfies the required properties. \end{proof} \section{The Strong and the Super Tree Properties}\label{sec:maindef} In this section we introduce the strong and super tree properties. Although the main results presented in this paper do not concern the super tree property (just the strong tree property), for the sake of completeness we include the definition of this property as well. In order to define the strong and the super tree properties, we need to introduce the notion of $(\kappa,\lambda)$-tree\footnote{In Weiss' PhD thesis \cite{WeissPhd} $(\kappa, \lambda)$-trees were called \emph{$\mathscr{P}_{\kappa}\lambda$-thin lists}.}.
\begin{definition}\label{main definition} Given a regular cardinal $\kappa\geq \omega_2$ and an ordinal $\lambda\geq \kappa,$ a \emph{$(\kappa, \lambda)$-tree}\index{$(\kappa, \lambda)$-tree} is a set $F$ satisfying the following properties: \begin{enumerate} \item for every $f\in F,$ $f: X\to 2,$ for some $X\in [\lambda]^{<\kappa};$ \item for all $f\in F,$ if $X\subseteq {\rm dom}(f),$ then $f\upharpoonright X\in F;$ \item the set ${\rm Lev}_X(F):= \{f\in F;\textrm{ } {\rm dom}(f)=X \}$ is non-empty, for all $X\in [\lambda]^{<\kappa};$ \item $\vert {\rm Lev}_X(F) \vert<\kappa ,$ for all $X\in [\lambda]^{<\kappa}.$ \end{enumerate} \end{definition} The elements of a $(\kappa, \lambda)$-tree are called \emph{nodes}. Note that, despite the name, a $(\kappa, \lambda)$-tree is not a tree. In fact, for a given node $f$ on some level ${\rm Lev}_X,$ the set of all its \emph{predecessors} is $\{f\upharpoonright Y;\ Y\subseteq X\}$ and it is not well ordered. So the main difference between a $\kappa$-tree and a $(\kappa, \lambda)$-tree is that in the former the levels are indexed by ordinals which are well ordered, while in the latter we have a level for every set in $[\lambda]^{<\kappa}$ which is not even linearly ordered. As usual, when there is no ambiguity, we will simply write ${\rm Lev}_X$ instead of ${\rm Lev}_X(F).$ \begin{definition}\label{branches} Given a regular $\kappa\geq \omega_2,$ an ordinal $\lambda\geq \kappa$ and a $(\kappa, \lambda)$-tree $F,$ \begin{enumerate} \item a \emph{cofinal branch}\index{branch!
cofinal -- of a $(\kappa, \lambda)$-tree} for $F$ is a function $b: \lambda \to 2$ such that $b\upharpoonright X\in {\rm Lev}_X(F),$ for all $X\in[\lambda]^{<\kappa};$ \item an \emph{$F$-level sequence}\index{$F$-level sequence}\index{level sequence} is a function $D: [\lambda]^{<\kappa}\to F$ such that for every $X\in [\lambda]^{<\kappa},$ $D(X)\in {\rm Lev}_X(F);$ \item given an $F$-level sequence $D,$ an \emph{ineffable branch} for $D$ is a cofinal branch $b: \lambda \to 2$ such that $\{X\in [\lambda]^{<\kappa};\textrm{ } b\upharpoonright X= D(X) \}$ is stationary. \end{enumerate} \end{definition} \begin{definition}\label{def: TP ITP} Given a regular cardinal $\kappa\geq \omega_2$ and an ordinal $\lambda\geq \kappa,$ \begin{enumerate} \item $(\kappa, \lambda)$-{\rm TP } \index{$(\kappa, \lambda)$-TP} holds if every $(\kappa, \lambda)$-tree has a cofinal branch; \item $(\kappa, \lambda)$-{\rm ITP } \index{$(\kappa, \lambda)$-ITP} holds if for every $(\kappa, \lambda)$-tree $F$ and for every $F$-level sequence $D,$ there is an ineffable branch for $D;$ \item we say that $\kappa$ satisfies the \emph{strong tree property}\index{strong tree property} if $(\kappa, \mu)$-{\rm TP } holds, for all $\mu\geq \kappa;$ \item we say that $\kappa$ satisfies the \emph{super tree property}\index{super tree property} if $(\kappa,\mu)$-{\rm ITP } holds, for all $\mu\geq \kappa.$ \end{enumerate} \end{definition} We prove a simple result that will be used repeatedly. \begin{lemma}\label{simple lemma} Let $\kappa$ be a regular cardinal and $\lambda\geq \kappa.$ For every $\lambda^* >\lambda,$ every $(\kappa, \lambda)$-tree with no cofinal branches can be extended to a $(\kappa, \lambda^*)$-tree with no cofinal branches.
\end{lemma} \begin{proof} Let $F$ be a $(\kappa, \lambda)$-tree with no cofinal branches and let $\lambda^*>\lambda.$ We define a $(\kappa, \lambda^*)$-tree $F^*$ as follows: for every $X\in [\lambda^*]^{<\kappa},$ we let \begin{center} $f: X\to 2 \in F^* \Longleftrightarrow_{def} \ f\upharpoonright (X\cap \lambda) \in F$ and for every $\alpha\in X\setminus \lambda,$ $f(\alpha)= 0.$ \end{center} It is clear that $F^*$ extends $F,$ i.e. for every $X\in [\lambda]^{<\kappa},$ ${\rm Lev}_X(F)= {\rm Lev}_X(F^*).$ We check that $F^*$ is a $(\kappa, \lambda^*)$-tree. Conditions $1$ and $3$ of Definition \ref{main definition} are trivially satisfied. Condition $2$ is easily proved: if $f: X\to 2$ is in $F^*$ and $Y\subseteq X,$ then by definition $f\upharpoonright (X\cap \lambda)\in F,$ hence $f\upharpoonright (Y\cap \lambda)\in F.$ Moreover, for every $\alpha\in Y\setminus \lambda,$ we have $f\upharpoonright Y(\alpha)= f(\alpha)=0.$ Therefore $f\upharpoonright Y\in F^*.$ It remains to prove that for every $X\in [\lambda^*]^{<\kappa},$ the level ${\rm Lev}_X(F^*)$ has size less than $\kappa.$ But the function $f\mapsto f\upharpoonright \lambda$ defines a bijection of ${\rm Lev}_X(F^*)$ onto ${\rm Lev}_{(X\cap \lambda)}(F),$ so ${\rm Lev}_X(F^*)$ has size less than $\kappa.$ If $F^*$ has a cofinal branch $b^*: \lambda^*\to 2,$ then $b^*\upharpoonright \lambda$ is a cofinal branch for $F$ as well, because for every $X\in [\lambda]^{<\kappa},$ $b^*\upharpoonright X\in {\rm Lev}_X(F^*)= {\rm Lev}_X(F).$ Since $F$ has no cofinal branches, $F^*$ has no cofinal branches as required. \end{proof} \section{The Strong Tree Property at Successors of Singular Cardinals}\label{sec: successors} To prove the consistency of the usual tree property at $\aleph_{\omega+1},$ Magidor and Shelah first proved a more general result concerning the tree property at successors of singular cardinals.
\begin{theorem} (Magidor and Shelah \cite{MagidorShelah}) If $\nu$ is a singular limit of strongly compact cardinals, then $\nu^+$ has the tree property. \end{theorem} In this section we prove that under the same assumptions, even the \emph{strong} tree property is satisfied at $\nu^+.$ The structure of the proof is very close to Magidor and Shelah's proof of the previous theorem, although we will prove that to get the strong tree property at $\nu^+,$ it is enough for $\nu$ to be a singular limit of cardinals satisfying a nice partition property. This result is very important in what follows, since the proof of the consistency of the strong tree property at $\aleph_{\omega+1}$ will mimic the proof of this theorem. \begin{notation} Let $\mu$ be a regular cardinal and let $\lambda\geq \mu$ be any ordinal. For every cofinal set $I\subseteq [\lambda]^{<\mu}$ we denote by $[[\ I\ ]]^2$ the set of all pairs $(X,Y)\in I\times I$ such that $X\subseteq Y.$ \end{notation} \begin{definition} Let $\mu>\kappa$ be two regular cardinals, let $\lambda\geq \mu$ be an ordinal, and let $S\subseteq [\lambda]^{<\mu}$ be a cofinal set and $c: [[\ S\ ]]^2\to \gamma$ a function with $\gamma<\kappa.$ We say that a cofinal set $H\subseteq S$ is a \emph{quasi-homogeneous} set of color $i<\gamma$ iff for every $X,Y\in H$ there is $W\supseteq X,Y$ in $H$ such that $c(X,W)=i= c(Y,W).$\end{definition} \begin{definition} Given two regular cardinals $\nu\geq \kappa,$ we say that the principle $\varphi(\kappa, \nu)$ holds when for every $\lambda\geq \nu,$ if $S\subseteq [\lambda]^{<\nu}$ is a stationary set, then every function $c: [[\ S\ ]]^2\to \gamma$ with $\gamma<\kappa$ has a quasi-homogeneous set $H$ which is also stationary. \end{definition} We now prove that strongly compact cardinals satisfy $\varphi$ everywhere.
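Before giving the proof, we record a degenerate instance of the above definitions (an elementary observation, included only for orientation and not used later): when $\gamma=1,$ the coloring $c$ is constantly $0,$ and \emph{every} cofinal $H\subseteq S$ is quasi-homogeneous of color $0.$ Indeed, given $X,Y\in H,$ the set $X\cup Y$ belongs to $[\lambda]^{<\mu},$ so by cofinality of $H$ there is $W\in H$ with $W\supseteq X\cup Y,$ and trivially $c(X,W)= 0= c(Y,W).$ In particular, since every stationary subset of $[\lambda]^{<\mu}$ is cofinal, the principle $\varphi(\kappa, \nu)$ has content only for colorings with more than one color.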
\begin{theorem}\label{coloring} Let $\kappa$ be a strongly compact cardinal. Then $\varphi(\kappa, \nu)$ holds for every regular $\nu\geq \kappa.$ \end{theorem} \begin{proof} Fix $\lambda\geq \nu,$ a stationary set $S\subseteq [\lambda]^{<\nu},$ and a function $c: [[\ S\ ]]^2\to \gamma$ where $\gamma<\kappa.$ Consider all the sets of the form $C\cap S$ where $C\subseteq [\lambda]^{<\nu}$ is a club; they generate a $\kappa$-complete filter. Since $\kappa$ is strongly compact, there exists a $\kappa$-complete ultrafilter $U$ that contains all these sets. Note that every set in $U$ is stationary. In fact if $H\in U$ and $C$ is a club, then by definition $C\cap S\in U,$ hence $H\cap C\cap S$ is in $U$ as well, and it is non-empty. First we show that for every $X\in S,$ there is $i_X<\gamma$ and a set $H_X\subseteq S$ in $U$ such that for every $Y$ in $H_X$ we have $c(X,Y)= i_X.$ Assume for a contradiction that for every $i<\gamma,$ the set $K_i:= \{Y\in S;\ Y\supseteq X \textrm{ and }\ c(X,Y)\neq i \}$ belongs to $U;$ then, by the $\kappa$-completeness of $U,$ the intersection $\bigcap_{i<\gamma}K_i$ is in $U$ and it is empty, a contradiction. A similar argument proves that the function $X\mapsto i_X$ is constant on a set $H\in U;$ let $i$ be such that $i= i_X,$ for every $X\in H.$ Now, it is easy to see that $H$ is quasi-homogeneous of color $i.$ Indeed, if $X,Y\in H,$ then $H_X\cap H_Y\cap H$ belongs to $U$ and it is, therefore, non-empty. Let $Z\in H_X\cap H_Y\cap H;$ then we have $c(X,Z)= i= c(Y,Z)$ as required. \end{proof} \begin{theorem}\label{phi} Let $\nu$ be a singular cardinal such that $\nu= \lim_{i<{\rm cof}(\nu)} \kappa_i$ where every $\kappa_i$ is an uncountable cardinal satisfying $\varphi(\kappa_i, \nu^+).$ Then $\nu^+$ has the strong tree property.
\end{theorem} \begin{proof} To simplify the notation we will assume that $\nu$ has countable cofinality, so $\nu= \lim_{n<\omega} \kappa_n.$ Suppose without loss of generality that $\langle \kappa_n\rangle_{n<\omega}$ is increasing. Let $\mu\geq \nu^+$ and let $F$ be a $(\nu^+, \mu)$-tree. For every $X\in [\mu]^{<\nu^+},$ let $\{f_i^X\}_{i<\vert {\rm Lev}_X(F)\vert}$ be an enumeration of ${\rm Lev}_X(F).$ First we ``shrink'' the tree as follows. \begin{lemma} There exists $n<\omega$ and a stationary set $S\subseteq [\mu]^{<\nu^+},$ such that for all $X,Y\in S,$ there are $\zeta, \eta<\kappa_n$ such that $f_{\zeta}^X \upharpoonright (X\cap Y) = f_{\eta}^Y\upharpoonright (X\cap Y).$\end{lemma} \begin{proof} Given a function $f\in {\rm Lev}_X,$ we write $\#f = i$ for $i<\nu,$ when $f= f_i^X.$ Define $c: [[\ [\mu]^{<\nu^+} ]]^2\to \omega $ by $c(X,Y)=\min \{i;\ \#(f_0^Y\upharpoonright X)<\kappa_i \}.$ By hypothesis $\varphi(\kappa_0, \nu^+)$ holds, hence there is a stationary quasi-homogeneous set $S\subseteq [\mu]^{<\nu^+}$ of color $n<\omega.$ Then, for every $X,Y\in S,$ there is $Z\supseteq X,Y$ in $S$ such that $c(X,Z)= n= c(Y,Z).$ This means that $\#(f_0^Z\upharpoonright X),\ \#(f_0^Z\upharpoonright Y)<\kappa_n,$ namely there are $\zeta,\eta<\kappa_n$ such that $f_0^Z\upharpoonright X= f_\zeta^X$ and $f_0^Z\upharpoonright Y= f_\eta^Y.$ So we have $$f_\zeta^X \upharpoonright (X\cap Y) = f_0^Z\upharpoonright (X\cap Y)= f_\eta^Y\upharpoonright (X\cap Y),$$ as required. That completes the proof of the lemma. \end{proof} Let $n$ and $S$ be as above; we prove the following fact. \begin{lemma} There is a cofinal $S'\subseteq S$ and an ordinal $\zeta<\kappa_n$ such that for all $X,Y\in S',$ we have $f_{\zeta}^X \upharpoonright (X\cap Y)= f_{\zeta}^Y \upharpoonright (X\cap Y)$ (the set $S'$ is even stationary).
\end{lemma} \begin{proof} For every $(X,Y)\in [[\ S\ ]]^2,$ we define $\bar{c}(X,Y)$ as the minimum couple $(\zeta,\eta)\in \kappa_n\times \kappa_n,$ in the lexicographical order, such that $f_\eta^Y\upharpoonright X= f_\zeta^X$ --- the function is well defined by definition of $n$ and $S.$ We can apply $\varphi(\kappa_{n+1}, \nu^+)$ to $\bar{c}$ as this can be seen as a function from $[[\ S\ ]]^2$ into $\kappa_n$ --- take any bijection $h: \kappa_n\times \kappa_n\to \kappa_n$ and apply $\varphi(\kappa_{n+1}, \nu^+)$ to $h \circ \bar{c}: [[\ S\ ]]^2 \to \kappa_n.$ So there exists a quasi-homogeneous stationary set $S'$ of color $(\zeta, \eta)\in \kappa_n\times \kappa_n,$ hence for every $X,Y\in S',$ there is $Z\supseteq X,Y$ in $S'$ such that $\bar{c}(X,Z)= (\zeta, \eta) =\bar{c}(Y,Z).$ This means that $f_\eta^Z\upharpoonright X= f_\zeta^X$ and $f_\eta^Z\upharpoonright Y= f_\zeta^Y.$ It follows that $$f_\zeta^X \upharpoonright (X\cap Y)= f_\eta^Z\upharpoonright (X\cap Y)= f_\zeta^Y\upharpoonright (X\cap Y).$$ That completes the proof of the lemma. \end{proof} Now we conclude the proof of the theorem by defining a cofinal branch. Let $b:= \bigcup_{X\in S'} f_{\zeta}^X;$ by the previous lemma, $b$ is a function. Moreover, for every $Y\in S'$ we have $$b\upharpoonright Y = \bigcup_{X\in S'} f_{\zeta}^X \upharpoonright Y= \bigcup_{X\in S'} f_{\zeta}^X \upharpoonright (X\cap Y)= \bigcup_{X\in S'} f_{\zeta}^Y\upharpoonright (X\cap Y)= f_{\zeta}^Y.$$ It follows that $b$ is a cofinal branch for $F.$ \end{proof} \begin{coroll}\label{MS} If $\nu$ is a singular limit of strongly compact cardinals, then $\nu^+$ has the strong tree property. \end{coroll} \begin{proof} Apply Theorem \ref{phi} and Theorem \ref{coloring}. \end{proof} Whether this result can be generalized to the \emph{super} tree property is still an open problem.
Based on the analogy between supercompact cardinals and the super tree property, we can conjecture that the successor of a singular limit of \emph{supercompact} cardinals satisfies the super tree property. We conclude this section by proving the following fact. \begin{proposition} Given a regular cardinal $\kappa,$ if $\varphi(\kappa, \kappa)$ holds, then $\kappa$ has the strong tree property. \end{proposition} \begin{proof} Let $F$ be a $(\kappa, \lambda)$-tree where $\lambda\geq \kappa.$ For every $X\in [\lambda]^{<\kappa},$ let $\{f_i^X\}_{i<\gamma_X}$ be an enumeration of ${\rm Lev}_X(F).$ We can assume without loss of generality that $\lambda^{<\kappa}$ is large enough so that $\gamma_X$ (the size of ${\rm Lev}_X(F)$) is constant on a stationary set $S\subseteq [\lambda]^{<\kappa}$ --- indeed, if it is not the case we can take a larger $\lambda^*$ and use Lemma \ref{simple lemma}. So let $\gamma<\kappa$ be such that $\gamma_X= \gamma$ for every $X\in S.$ We define a function $c: [[\ S\ ]]^2\to \gamma\times \gamma$ by letting $c(X,Y)$ be the minimum couple $(i,j)$ in the lexicographical order such that $f_j^Y\upharpoonright X= f_i^X.$ The function $c$ can be seen as a function from $[[\ S\ ]]^2$ into $\gamma,$ so there exists a quasi-homogeneous and stationary set $H\subseteq S.$ Assume $H$ has color $(i,j)\in \gamma\times \gamma.$ We let $b:= \bigcup_{X\in H} f_i^X$ and we prove that $b$ is a cofinal branch.
Given $X,Y\in H,$ there is $Z\in H$ such that $X,Y\subseteq Z$ and $c(X,Z)= (i,j)= c(Y,Z).$ By definition of $c,$ we have \begin{enumerate} \item $f_j^Z\upharpoonright X= f_i^X;$ \item $f_j^Z\upharpoonright Y= f_i^Y.$ \end{enumerate} It follows that $f_i^X\upharpoonright (X\cap Y)= f_i^Y\upharpoonright (X\cap Y).$ Therefore $b$ is a function and for every $Y\in H,$ we have $$b\upharpoonright Y= \bigcup_{X\in H} f_i^X \upharpoonright Y= \bigcup_{X\in H} f_i^X\upharpoonright (X\cap Y)= \bigcup_{X\in H} f_i^Y\upharpoonright (X\cap Y)= f_i^Y,$$ so $b$ is a cofinal branch. \end{proof} \section{Systems} To prove the consistency of the strong tree property at $\aleph_{\omega+1},$ we will work with a special structure that we call a ``system''. To understand this notion, suppose we are in the following situation. Let $\nu$ be a cardinal in a model $V$ and let $\mathbb{P}$ be a forcing notion with the $\nu^+$-covering property --- so that for every $\lambda\geq \nu^+,$ the set $([\lambda]^{<\nu^+})^V$ is cofinal in the $[\lambda]^{<\nu^+}$ of the generic extension. Assume $\dot{F}\in V^{\mathbb{P}}$ is a name for a $(\nu^+, \lambda)$-tree and for every $X\in [\lambda]^{<\nu^+},$ $\dot{e}_X$ is a $\mathbb{P}$-name for an enumeration of ${\rm Lev}_X(\dot{F})$ (i.e. $\Vdash_{\mathbb{P}}\ \dot{e}_X: \nu \to {\rm Lev}_X(\dot{F})$ is onto). For every $p\in \mathbb{P},$ we define a binary relation $S_p$ over the pairs $(X, \zeta),$ where $X\in [\lambda]^{<\nu^+}$ and $\zeta<\nu:$ \begin{center} $(X, \zeta)\ S_p\ (Y, \eta) \Longleftrightarrow_{def} \ p\Vdash\ \dot{e}_X(\zeta)= \dot{e}_Y(\eta)\upharpoonright X.$ \end{center} In other words, we have $(X, \zeta)\ S_p\ (Y, \eta)$ when $p$ forces that the $\eta$-th function on level $Y$ extends the $\zeta$-th function on level $X.$ The family $\{S_p\}_{p\in \mathbb{P}}$ satisfies the following definition.
\begin{definition}\label{system} Given an ordinal $\lambda\geq \nu^+,$ a cofinal set $D\subseteq [\lambda]^{<\nu^+}$ and a family $\mathscr{S}:= \{S_i\}_{i\in I}$ of transitive, reflexive binary relations over $D\times \nu,$ we say that $\mathscr{S}$ is a \emph{system} if the following hold: \begin{enumerate} \item if $(X, \zeta)\ S_i\ (Y,\eta)$ and $(X,\zeta)\neq (Y,\eta),$ then $X\subsetneq Y;$ \item for every $X\subseteq Y,$ if both $(X,\zeta)\ S_i\ (Z, \theta)$ and $(Y, \eta)\ S_i\ (Z, \theta),$\linebreak then $(X, \zeta)\ S_i\ (Y, \eta);$ \item for every $X, Y\in D,$ there is $Z\supseteq X,Y$ and $\zeta_X, \zeta_Y, \eta\in \nu$ such that for some $i\in I$ we have $(X, \zeta_X)\ S_i\ (Z, \eta)$ and $(Y, \zeta_Y)\ S_i\ (Z,\eta)$ (in particular, if $X\subseteq Y,$ then $(X,\zeta_X)\ S_i\ (Y,\zeta_Y)$). \end{enumerate} \end{definition} To prove the consistency of the strong tree property at $\aleph_{\omega+1},$ we will have to deal with a system similar to the one defined above. In this section, we analyze some properties of these structures. The elements of $D\times \nu$ are called \emph{nodes of the system}. Given two nodes $u$ and $v,$ we say that they are \emph{$S_i$-incompatible,} for some $i \in I,$ if there is no $w\in D\times \nu$ such that $u\ S_i\ w$ and $v\ S_i\ w.$ We will say that a node $u$ belongs to a \emph{level $X$} if the first coordinate of $u$ is $X$ (i.e. $u=(X,\zeta),$ for some $\zeta\in \nu$). \ \begin{definition} Let $\{S_i\}_{i\in I}$ be a system on $D\times \nu$ and let $b: D\to \nu$ be a partial function. \begin{enumerate} \item We say that $b$ is an \emph{$S_i$-branch} for some $i\in I,$ if the following holds. For every $X\in {\rm dom}(b)$ and for every $Y\in D$ such that $Y\subseteq X,$ we have \begin{center} $Y\in {\rm dom}(b)$ iff there exists $\zeta<\nu$ such that $(Y, \zeta)\ S_i\ (X, b(X)),$ and $b(Y)$ is the unique $\zeta$ witnessing this. 
\end{center} \item We say that $b$ is a \emph{cofinal branch} for the system if it is an $S_i$-branch for some $i\in I,$ and $X\in {\rm dom}(b)$ for cofinally many $X$'s in $D.$ \end{enumerate} \end{definition} We will often work with families of branches satisfying specific conditions. \begin{definition}\label{system of branches} Let $\{S_i\}_{i\in I}$ be a system on $D\times \nu;$ a \emph{system of branches} is a family $\{b_j\}_{j\in J}$ such that \begin{enumerate} \item every $b_j$ is an $S_i$-branch for some $i\in I;$ \item for every $X\in D,$ there is $j\in J$ such that $X\in {\rm dom}(b_j).$ \end{enumerate} \end{definition} A lemma by Silver establishes that whenever we force with a forcing that has enough closure, it cannot add cofinal branches to a given tree. \begin{lemma} (Silver) Let $\tau, \kappa$ be regular cardinals, and suppose $\tau<\kappa \leq 2^{\tau}.$ Let $\mathbb{P}$ be a $\tau^+$-closed forcing in a model $V$ and let $T$ be a $\kappa$-tree. Then for every generic extension $V[G]$ by $\mathbb{P},$ every branch of $T$ in $V[G]$ is in fact a member of $V.$ \end{lemma} \begin{proof} We may assume that $\tau$ is minimal with $2^{\tau}\geq \kappa.$ Let $\dot{b}$ be a $\mathbb{P}$-name for a new branch. We build by induction for each $s\in {}^{\leq \tau} 2$ conditions $p_s$ and points $x_s$ of $T$ such that \begin{enumerate} \item if $t \sqsubseteq s,$ then $p_s\leq p_t$ and $x_s>_T\ x_t;$ \item $p_s\Vdash\ x_s \in \dot{b};$ \item for each $\alpha,$ the nodes $\{x_s;\ s\in {}^{\alpha}2 \}$ are all on the same level $\eta_{\alpha};$ \item for each $s\in {}^{<\tau} 2$ the nodes $x_{s\smallfrown 0}$ and $x_{s\smallfrown 1}$ are incompatible. \end{enumerate} By minimality of $\tau,$ for every $\alpha<\tau,$ the set $\{x_s;\ s\in {}^{\alpha} 2 \}$ has size less than $\kappa,$ so we can choose $\eta_{\alpha+1}.$ The closure of $\mathbb{P}$ guarantees that the construction works at limit stages.
In the end we have a contradiction, because the level $\eta_{\tau}$ must have fewer than $\kappa$ many nodes, yet we have constructed $2^{\tau}$ many distinct ones. \end{proof} Now we want to generalize Silver's lemma to systems. More precisely, we are going to prove that if a $\kappa$-closed forcing adds a system of branches through a ``small'' system, then a cofinal branch must already exist in the ground model (Theorem \ref{preservation theorem} below). Such a result generalizes a lemma by Sinapova (see the Preservation Lemma in \cite{Sinapova}) and will be used to prove the consistency of the strong tree property at $\aleph_{\omega+1}.$ First we prove the following lemma that provides a useful ``splitting argument''. \begin{lemma}\label{splitting} (Splitting Lemma) Let $\nu$ be a singular cardinal of countable cofinality and let $\lambda\geq \nu^+.$ Let $\{R_i\}_{i\in I}$ be a system on $D\times \tau$ (with $D\subseteq [\lambda]^{<\nu^+}$ cofinal) and let $\mathbb{P}$ be a forcing notion such that: \begin{enumerate} \item $\max(\vert I\vert, \tau )<\nu;$ \item $\mathbb{P}$ is $\kappa$-closed for some regular $\kappa$ between $\max(\vert I\vert, \tau )^+$ and $\nu;$ \item for some $p\in \mathbb{P},$ $\dot{b}\in V^{\mathbb{P}}$ and $i\in I,$ we have $$p\Vdash \dot{b}\textrm{ is a cofinal $R_i$-branch}.$$ \end{enumerate} If $V$ has no cofinal branches for the system, then for all $\eta<\kappa,$ we can find a sequence $\langle v_{\zeta};\ \zeta<\eta \rangle$ of pairwise $R_i$-incompatible elements of $D\times\tau$ such that for every $\zeta<\eta,$ there exists $q\leq p$ that forces $v_{\zeta}\in \dot{b}.$ \end{lemma} \begin{proof} It might be helpful to point out that if $G$ is a generic filter containing $p,$ then in $V[G]$ the domain of $\dot{b}^G$ is a cofinal set in $([\lambda]^{<\nu^+})^V.$ We work in $V.$ Let $R:= R_i$ and let $E:= \{ u\in D\times \tau;\ \exists q\leq p( q\Vdash u\in \dot{b})\}.$ First remark that, since $p$ forces that $\dot{b}$ is cofinal, the set
$\{X\in D;\ \exists \zeta\in \tau\ (X,\zeta)\in E\}$ is cofinal. As $V$ has no cofinal branches for the system, we can find, for all $v\in E,$ two $R$-incompatible nodes $w_1, w_2\in E$ such that $v\ R\ w_1,$ $v\ R\ w_2.$ We inductively define for all $\zeta< \eta$ two nodes $u_\zeta, v_\zeta\in E$ and a condition $p_\zeta\leq p$ such that: \begin{enumerate} \item $u_\zeta$ and $v_\zeta$ are $R$-incompatible; \item\label{claim:Rchain} for all $\varepsilon<\zeta,$ $u_{\varepsilon}\ R\ u_{\zeta}$ and $u_\varepsilon\ R\ v_\zeta;$ \item $p_{\zeta}\Vdash u_{\zeta}\in \dot{b};$ \item the sequence $\langle p_\varepsilon;\ \varepsilon\leq \zeta \rangle$ is decreasing. \end{enumerate} Let $u$ be any node in $E.$ From the remark above, there are $u_0,v_0\in E$ which are $R$-incompatible and both $u\ R\ u_0$ and $u\ R\ v_0$ hold. By definition of $E,$ there is a condition $p_0\leq p$ such that $p_0\Vdash u_0\in \dot{b}.$ Let $\zeta>0$ and assume that $u_\varepsilon,$ $v_\varepsilon,$ $p_\varepsilon$ are defined for every $\varepsilon <\zeta.$ Let $q$ be stronger than every condition in $\{p_\varepsilon;\ \varepsilon<\zeta\}.$ By the inductive hypothesis (claim \ref{claim:Rchain}), the nodes $\langle u_\varepsilon;\ \varepsilon<\zeta \rangle $ form an $R$-chain whose levels are sets in $[\lambda]^{<\nu^+}.$ The union of the levels of these nodes is a set $X$ in $[\lambda]^{<\nu^+}$ and since $\dot{b}$ is forced to be a cofinal $R$-branch, we can find a node $h$ of level above $X$ and a condition $q^*\leq q$ such that $q^*\Vdash h\in \dot{b}.$ It follows that $u_\varepsilon\ R\ h,$ for all $\varepsilon<\zeta.$ Since there is no cofinal branch in $V$ for the system, we can find two $R$-incompatible nodes $u_\zeta,v_\zeta\in E$ and a condition $p_\zeta\leq q^*$ such that $h\ R\ u_\zeta,$ $h\ R\ v_\zeta$ and $p_\zeta\Vdash u_\zeta\in \dot{b}.$ That completes the construction.
The sequence $\langle v_\zeta;\ \zeta<\eta \rangle$ is as required: for if $\zeta'<\zeta<\eta,$ then by definition $u_{\zeta'}$ and $v_{\zeta'}$ are $R$-incompatible, and $u_{\zeta'}\ R\ v_{\zeta},$ hence $v_{\zeta'}$ and $v_{\zeta}$ are $R$-incompatible as well. \end{proof} \ \begin{theorem}\label{preservation theorem} (Preservation Theorem) In a model $V,$ we let $\nu$ be a singular cardinal of countable cofinality and let $\lambda\geq \nu^+.$ Let $\{R_i\}_{i\in I}$ be a system on $D\times \tau$ (with $D\subseteq [\lambda]^{<\nu^+}$ cofinal), let $\mathbb{P}$ be a forcing notion and let $G\subseteq \mathbb{P}$ be a generic filter over $V.$ Assume that \begin{enumerate} \item $\max(\vert I\vert, \tau )<\nu;$ \item $\mathbb{P}$ is $\kappa$-closed for some regular $\kappa$ between $\max(\vert I\vert, \tau )^+$ and $\nu;$ \item in $V[G]$ there is a system of branches $\{b_j\}_{j\in J}$ through $\{R_i\}_{i\in I}$ such that \begin{enumerate} \item $J\in V$ and $\vert J\vert^+<\kappa;$ \item\label{cofinal} for some $j\in J,$ the branch $b_j$ is cofinal. \end{enumerate} \end{enumerate} Then, for some $i\in I,$ there exists in $V$ a cofinal $R_i$-branch. \end{theorem} \begin{proof} Suppose for contradiction that $V$ has no cofinal branches for the system $\{R_i\}_{i\in I}.$ Let $\{\dot{b}_j\}_{j\in J}$ be $\mathbb{P}$-names for the branches of the system of branches in the generic extension. The idea of the proof is similar to the proof of Silver's lemma above and it proceeds in three steps.
\begin{enumerate} \item We consider just the $\dot{b}_j$'s that are forced to be cofinal and for every such $\dot{b}_j,$ we use the Splitting Lemma to build $\eta$ many incompatible nodes that are forced to belong to $\dot{b}_j,$ where $\eta$ is a cardinal between $\max(\vert J\vert, \vert I\vert, \tau)$ and $\kappa.$ \item By using the $\kappa$-closure of $\mathbb{P}$ and the fact that there are fewer than $\kappa$ many possible cofinal branches, we find a name $\dot{b}$ for an $R$-branch and $\eta$ many $R$-incompatible nodes $\langle u_{\gamma};\ \gamma<\eta \rangle$ that are forced by ``nice conditions'' to belong to $\dot{b}.$ \item As $\eta<\nu^+,$ all these nodes are below some level $X\in D$ and we can find a node $w$ on a level above $X$ which is forced by those conditions to belong to $\dot{b}$ as well. Then we have a contradiction, as $w$ stands in the relation $R$ with $R$-incompatible nodes below it. \end{enumerate} We work in $V.$ Fix, for every $j\in J,$ a condition $p_j$ deciding whether or not $\dot{b}_j$ is cofinal. We can choose the $p_j$'s so that they form a decreasing sequence, then by the $\kappa$-closure of $\mathbb{P}$ (recall $\vert J\vert<\kappa$) there exists a condition $p$ deciding, for every $j\in J,$ whether or not $\dot{b}_j$ is cofinal. We let $B:= \{j\in J;\ p\Vdash \dot{b}_j\textrm{ is not cofinal} \}.$ For every $j\in B,$ fix $X_j\in [\lambda]^{<\nu^+}$ such that $p$ forces that ${\rm dom}(\dot{b}_j)$ has empty intersection with every $Y\supseteq X_j.$ Since $B$ has size less than $\nu,$ the set $X^*:= \underset{j\in B}{\bigcup} X_j$ is in $[\lambda]^{<\nu^+}.$ Let $C^*:=\{Z\in D;\ X^*\subseteq Z \}.$ Define $A:= \{j\in J;\ p\Vdash \dot{b}_j \textrm{ is cofinal} \},$ then by hypothesis $A$ is non-empty (claim \ref{cofinal}).
Moreover, by strengthening $p$ if necessary, we can assume \begin{equation} \label{eq:one} p\Vdash \forall X\in C^*\exists j\in A( X\in {\rm dom}(\dot{b}_j)) \end{equation} \ (use condition $2$ of Definition \ref{system of branches} and the definition of $C^*$). For every $a\in A,$ we denote by $R_a$ the relation in the system such that $p\Vdash \dot{b}_a \textrm{ is an $R_a$-branch}.$ Fix a regular cardinal $\eta$ between $\max(\vert J\vert, \vert I\vert, \tau )$ and $\kappa.$ We prove the following claim. \begin{claim} Let $\vartriangleleft$ be a well ordering (strict) of $A.$ For every $a\in A,$ we can define $\langle q_{\gamma}^a;\ \gamma<\eta \rangle $ and $\langle u_{\gamma}^a;\ \gamma<\eta \rangle$ such that \begin{enumerate} \item for all $\gamma<\eta,$ $q_{\gamma}^a\leq p$ and $q_{\gamma}^a\Vdash u_{\gamma}^a\in \dot{b}_a,$ \item the nodes $\langle u_{\gamma}^a;\ \gamma<\eta \rangle$ are pairwise $R_a$-incompatible, \item\label{claim:decreasing} for all $\gamma<\eta,$ the sequence $\langle q_{\gamma}^c;\ c\vartriangleleft a\rangle$ is decreasing, i.e. if $c' \vartriangleleft c,$ then $q_{\gamma}^{c}\leq q_{\gamma}^{c'}.$ \end{enumerate} \end{claim} \begin{proof} We proceed by induction on the ordering $\vartriangleleft.$ Assume that the sequences have been defined up to $a\in A$ (i.e. for every $c\vartriangleleft a$).
For every $\gamma<\eta,$ let $r_{\gamma}$ be stronger than every condition in the set $\{q_{\gamma}^{c};\ c\vartriangleleft a \}$ (the sequence $\langle q_{\gamma}^{c};\ c\vartriangleleft a \rangle$ is decreasing by condition \ref{claim:decreasing}) and let $E_{\gamma}:= \{ u\in D\times \tau;\ \exists q\leq r_{\gamma}( q\Vdash u\in \dot{b}_a)\}.$ For all $\gamma<\eta,$ there exists $\langle v_{\zeta}^{\gamma};\ \zeta<\eta \rangle$ as in the conclusion of Lemma \ref{splitting} applied to $r_{\gamma}$ and $\dot{b}_a.$ Let $X_{\gamma}\in [\lambda]^{<\nu^+}$ be such that the level of each $v_{\zeta}^{\gamma}$ is below $X_{\gamma},$ and let $X^*\in D$ be such that $X^*\supsetneq \underset{\gamma<\eta}{\bigcup} X_{\gamma}.$ We want to define the sequence $\langle u_{\gamma}^a;\ \gamma<\eta \rangle$ with each $u_{\gamma}^a\in E_{\gamma}$ belonging to a level above $X^*.$ We proceed by induction: suppose we have defined $\langle u_{\gamma}^a;\ \gamma<\delta \rangle $ for some $\delta<\eta.$ For every $\gamma<\delta,$ there is at most one $\zeta<\eta$ such that $v_{\zeta}^{\delta}\ R_a\ u_{\gamma}^a$ (because the $v_{\zeta}^{\delta}$'s are pairwise $R_a$-incompatible); let $\zeta_{\gamma}^{\delta}$ be that unique index if it exists and let $\zeta_{\gamma}^{\delta}$ be $0$ otherwise. Choose $\zeta\in \eta\setminus \{\zeta_{\gamma}^{\delta};\ \gamma<\delta \},$ then for all $\gamma<\delta,$ the nodes $v_{\zeta}^{\delta}$ and $u_{\gamma}^a$ are $R_a$-incompatible. Let $u_\delta^a\in E_\delta$ be such that $v_{\zeta}^{\delta}\ R_a\ u_\delta^a.$ Then, for all $\gamma<\delta,$ the nodes $u_{\gamma}^a$ and $u_\delta^a$ are $R_a$-incompatible. Since for every $\gamma<\eta,$ we have $u_\gamma^a\in E_\gamma,$ we can find a condition $q_\gamma^a\leq r_{\gamma}$ such that $q_{\gamma}^a\Vdash u_{\gamma}^a\in \dot{b}_a.$ That completes the proof of the claim. \end{proof} We return to the proof of the theorem.
Condition \ref{claim:decreasing} above guarantees that for every $\gamma<\eta,$ the sequence $\langle q_{\gamma}^a;\ a\in A \rangle$ is decreasing. Since $A$ has size less than $\kappa,$ we can find for every $\gamma<\eta,$ a condition $p_{\gamma}$ stronger than all the conditions $\langle q_{\gamma}^a;\ a\in A\rangle,$ and there is $Y_\gamma\in D$ such that the nodes in $\{u_\gamma^a;\ a\in A\} $ belong to levels below $Y_\gamma.$ Let $Y^*\in C^*$ be such that $Y^*\supseteq \underset{\gamma<\eta}{\bigcup}Y_{\gamma}.$ For all $\gamma<\eta,$ we fix $p_{\gamma}^*, w_{\gamma}$ and $a_{\gamma}$ such that $p_{\gamma}^*\leq p_{\gamma},$ $w_\gamma$ is a node on level $Y^*,$ $a_{\gamma}\in A$ and $p_{\gamma}^*\Vdash w_{\gamma}\in \dot{b}_{a_{\gamma}}$ (use Equation \ref{eq:one}). Since $\vert A\vert, \tau<\eta$ and $\eta$ is regular, there is $w^*$ on level $Y^*$ and $a^*\in A$ such that $w_{\gamma}=w^*$ and $a_{\gamma}= a^*$ for $\eta$ many $\gamma<\eta.$ Let $b^*:=\dot{b}_{a^*}.$ Given two distinct such $\gamma,\delta<\eta,$ if $u:= u_{\gamma}^{a^*}$ and $v:= u_{\delta}^{a^*},$ then the following hold: \begin{enumerate} \item $p_{\gamma}^*\Vdash u\in b^*,$ $p_{\gamma}^*\Vdash w^*\in b^*;$ \item $p_\delta^*\Vdash v\in b^*,$ $p_\delta^*\Vdash w^*\in b^*.$ \end{enumerate} It follows that $u\ R_{a^*}\ w^*$ and $v\ R_{a^*}\ w^*.$ However, $u$ and $v$ are $R_{a^*}$-incompatible by definition, and that leads to a contradiction. \end{proof} \section{The Strong Tree Property at $\aleph_{\omega+1}$} Now we are ready to prove the consistency of the strong tree property at $\aleph_{\omega+1}.$ The structure of the proof of this theorem is motivated by Neeman \cite{Neeman}. \begin{theorem} Let $\langle \kappa_n\rangle_{n<\omega}$ be an increasing sequence of indestructibly supercompact cardinals.
There is a strong limit cardinal $\mu<\kappa_0$ of cofinality $\omega$ such that by forcing over $V$ with the poset $${\rm Coll}(\omega, \mu)\times {\rm Coll}(\mu^+, <\kappa_0)\times \Pi_{n<\omega} {\rm Coll}(\kappa_n, <\kappa_{n+1}),$$ one gets a model where the strong tree property holds at $\aleph_{\omega+1}.$ \end{theorem} \begin{proof} Let $\kappa$ denote $\kappa_0.$ For every $\mu<\kappa,$ we let \begin{enumerate} \item $\mathbb{R}(\mu):= {\rm Coll}(\omega, \mu)\times {\rm Coll}(\mu^+, <\kappa_0)\times \Pi_{n<\omega} {\rm Coll}(\kappa_n, <\kappa_{n+1}),$ \item $\mathbb{L}(\mu):= {\rm Coll}(\omega, \mu)\times {\rm Coll}(\mu^+, <\kappa_0),$ \item $\mathbb{C}:= \Pi_{n<\omega} {\rm Coll}(\kappa_n, <\kappa_{n+1}).$ \end{enumerate} Set $\nu:= \sup_n{\kappa_n}.$ Note that for every $\mu<\kappa,$ the forcing $\mathbb{R}(\mu)$ produces a model where $\aleph_{\omega+1}= \nu^+.$ Fix $H:= \Pi_{n<\omega} H_n\subseteq \mathbb{C}$ generic over $V.$ We work in $W:=V[H].$ Assume for a contradiction that in every extension of $W$ by $\mathbb{L}(\mu)$ with $\mu<\kappa$ strong limit of cofinality $\omega,$ the strong tree property fails at $\nu^+.$ For every such $\mu,$ let $\lambda_{\mu}$ be a cardinal and let $\dot{F}(\mu)\in W^{\mathbb{L}(\mu)}$ be a name for a $(\nu^+, \lambda_{\mu})$-tree with no cofinal branches. Let $\lambda:= \sup_{\mu<\kappa } \lambda_\mu.$ Without loss of generality, we can assume that $\lambda_{\mu}= \lambda$ for every $\mu,$ since a $(\nu^+, \lambda_{\mu})$-tree with no cofinal branches can be extended to a $(\nu^+, \lambda)$-tree with no cofinal branches (by Lemma \ref{simple lemma}). Note that for every $\mu$ the poset $\mathbb{L}(\mu)$ has the $\nu^+$-covering property since it is $\kappa_0$-c.c.
Therefore, $([\lambda]^{<\nu^+})^W$ is cofinal in the $[\lambda]^{<\nu^+}$ of any generic extension of $W$ by $\mathbb{L}(\mu).$ Given $X,Y\in [\lambda]^{<\nu^+}$ and $\zeta,\eta<\nu,$ we will write $\Vdash_{\mathbb{L}(\mu)} (X,\zeta) <_{\dot{F}_{\mu}} (Y, \eta)$ when $$\Vdash_{\mathbb{L}(\mu)} \textrm{ the $\eta$'th function on level $Y$ extends the $\zeta$'th function on level $X$}$$ (i.e. for every $\mu$ and $X,$ we fix an $\mathbb{L}(\mu)$-name $\dot{e}_X^{\mu}$ for an enumeration of the level of $X$ into at most $\nu$ elements, then we write $\Vdash_{\mathbb{L}(\mu)} (X,\zeta) <_{\dot{F}_{\mu}} (Y, \eta)$ when $\Vdash_{\mathbb{L}(\mu)} \dot{e}_X^{\mu}(\zeta)=\dot{e}_Y^{\mu}(\eta)\upharpoonright X$). Consider the following set $$I:= \{(a,b,\mu);\ \mu<\kappa \textrm{ is strong limit of cof $\omega$ and }(a,b)\in \mathbb{L}(\mu)\}.$$ We define a system $\mathscr{S}= \{S_i\}_{i\in I}$ on $[\lambda]^{<\nu^+}\times \nu$ as follows. Given $i= (a,b,\mu)\in I,$ for every $X,Y\in [\lambda]^{<\nu^+}$ and for every $\zeta, \eta<\nu,$ we let $$(X, \zeta)\ S_i\ (Y,\eta) \Longleftrightarrow_{def} (a,b)\Vdash (X,\zeta)<_{\dot{F}_\mu} (Y,\eta).$$ First we prove that we can shrink the system. \begin{lemma}\label{system lemma} There is, in $W,$ an integer $n<\omega$ and a cofinal set $D\subseteq [\lambda]^{<\nu^+}$ such that $\{\ S_i\upharpoonright D\times \kappa_n\}_{i\in I}$ is a system. \end{lemma} \begin{proof} $\kappa$ is indestructibly supercompact, so we can fix a $\sigma$-supercompact elementary embedding $j: W\to W^*$ with critical point $\kappa,$ where $\sigma$ is large enough for the argument that follows.
We have $a^*:=j[\lambda]\in W^*\cap [j(\lambda)]^{<j(\nu^+)}.$ Let $F^*$ be the name $j(\dot{F})(\nu),$ where $\dot{F}$ is the map $\mu \mapsto \dot{F}(\mu).$ We denote by $\ll \lambda \gg^{<\nu^+}$ the set of all the strictly increasing sequences from an ordinal $\alpha<\nu^+$ into $\lambda.$ For every $s\in\ \ll \lambda \gg^{<\nu^+},$ the image of $s$ is an element of $[\lambda]^{<\nu^+}.$ We define a sequence $\langle (p_s, q_s, \zeta_s, n_s);\ s\in\ \ll \lambda\gg^{<\nu^+} \rangle $ such that \begin{enumerate} \item $(p_s, q_s)\in {\rm Coll}(\omega, \nu)\times {\rm Coll}(\nu^+, <j(\kappa)),$ $n_s<\omega,$ and $\zeta_s<j(\kappa_{n_s});$ \item $(p_s, q_s)\Vdash (j[Im(s)], \zeta_s)<_{F^*} (a^*, 0);$ \item for every $t\sqsubseteq s$ in $\ll \lambda\gg^{<\nu^+},$ we have $q_s\leq q_t.$ \end{enumerate} The sequence is inductively defined as follows. Let $s: \alpha\to \lambda$ be a strictly increasing sequence and assume, by inductive hypothesis, that $$\langle (p_t, q_t, \zeta_t, n_t);\ t\in\ \ll \lambda\gg^{<\alpha} \rangle $$ is defined. By condition $(3),$ the sequence $\langle q_{s\upharpoonright \beta};\ \beta<\alpha\rangle$ is decreasing. Moreover, ${\rm Coll}(\nu^+, <j(\kappa))$ is $\nu^+$-closed, so there exists a lower bound $\bar{q}_s$ for $\langle q_{s\upharpoonright \beta};\ \beta<\alpha\rangle.$ The set $j[Im(s)]$ is in $[j(\lambda)]^{<j(\nu^+)},$ so there exist $p_s\in {\rm Coll}(\omega, \nu),$ $q_s\leq \bar{q}_s$ in ${\rm Coll}(\nu^+, <j(\kappa))$ and $\zeta_s<j(\nu)$ such that $$(p_s, q_s)\Vdash (j[Im(s)], \zeta_s)<_{F^*} (a^*, 0).$$ If we let $n_s$ be the minimum integer such that $\zeta_s<j(\kappa_{n_s}),$ then $p_s, q_s,\zeta_s$ and $n_s$ satisfy conditions $1,$ $2$ and $3$ for the sequence $s.$ That completes the definition. For every $X\in [\lambda]^{<\nu^+},$ we denote by $s_X$ the unique strictly increasing sequence whose image is $X$ (i.e. $s_X: o.t.(X) \to \lambda$ and $Im(s_X)= X$).
As ${\rm Coll}(\omega, \nu)$ has size less than $\lambda^{<\nu^+},$ there is a condition $p$ and a cofinal set $D\subseteq [\lambda]^{<\nu^+}$ such that for every $X\in D,$ we have $p= p_{s_{X}}.$ By shrinking $D,$ we can also assume that there exists $n<\omega$ such that $n= n_{s_{X}},$ for every $X\in D.$ \begin{claim} $\{\ S_i\upharpoonright D\times \kappa_n\}_{i\in I}$ is a system. \end{claim} \begin{proof} We just have to prove that it satisfies condition $(3)$ of Definition \ref{system}. Fix $X,Y\in D;$ by construction we have \begin{enumerate} \item $(p, q_{s_X})\Vdash (j[X], \zeta_{s_X})<_{F^*} (a^*, 0),$ \item $(p, q_{s_Y})\Vdash (j[Y], \zeta_{s_Y})<_{F^*} (a^*, 0).$ \end{enumerate} Take any set $Z$ in $D$ such that $s_Z\sqsupseteq s_X, s_Y$ (in particular $Z\supseteq X,Y$); then $q_{s_Z}$ is stronger than both $q_{s_X}$ and $q_{s_Y}.$ Therefore, the condition $(p, q_{s_Z})$ forces that: \begin{enumerate} \item[(i)] $(j[X], \zeta_{s_X})<_{F^*} (a^*, 0);$ \item[(ii)] $(j[Z], \zeta_{s_Z})<_{F^*} (a^*, 0);$ \item[(iii)] $(j[Y], \zeta_{s_Y}) <_{F^*} (a^*, 0).$ \end{enumerate} From $(i)$ and $(ii)$ (and the fact that $j[X]\subseteq j[Z]$) follows $(p,q_{s_Z})\Vdash (j[X], \zeta_{s_X})<_{F^*} (j[Z], \zeta_{s_Z});$ from $(ii)$ and $(iii)$ follows $(p,q_{s_Z})\Vdash (j[Y], \zeta_{s_Y})<_{F^*} (j[Z], \zeta_{s_Z}).$ Then, by elementarity, there exist $\mu<\kappa$ and $(\bar{p}, \bar{q})\in \mathbb{L}(\mu)$ and $\bar{\zeta}_X, \bar{\zeta}_Y, \bar{\zeta}_Z<\kappa_n$ such that $$(\bar{p}, \bar{q})\Vdash (X, \bar{\zeta}_X)<_{\dot{F}_\mu} (Z, \bar{\zeta}_Z)\textrm{ and } (Y, \bar{\zeta}_Y)<_{\dot{F}_\mu} (Z, \bar{\zeta}_Z).$$ If $i= (\bar{p}, \bar{q}, \mu),$ then we have just proved $(X, \bar{\zeta}_X)\ S_i\ (Z, \bar{\zeta}_Z)$ and $(Y, \bar{\zeta}_Y)\ S_i\ (Z, \bar{\zeta}_Z).$ \end{proof} That completes the proof of the lemma.
\end{proof} To simplify the notation, we define $R_i:= S_i\upharpoonright D\times \kappa_n,$ for every $i\in I.$ Let $m= n+2.$ By the indestructibility of $\kappa_{m+1},$ forcing over $W= V[H]$ with ${\rm Coll}(\kappa_m, \gamma)^V$ for sufficiently large $\gamma$ adds an elementary embedding $\pi: V[H]\to M[H^*]$ with critical point $\kappa_{m+1}$ and $\pi(\kappa_{m+1})>\sup\pi[\lambda]$ (use standard arguments for extending embeddings). \begin{lemma} There is in $V[H^*]$ a system of branches $\{b_j\}_{j\in J}$ for the system $\{ R_i\}_{i\in I}$ with $J= I\times \kappa_n,$ such that for some $j\in J,$ the branch $b_j$ is cofinal. \end{lemma} \begin{proof} First note that since $\kappa_n, \vert I\vert <{\rm cr}(\pi),$ we may assume that $\pi(I)=I$ and $\pi ( \{ R_i \}_{i\in I} )= \{\pi(R_i)\}_{i\in I}.$ This is a system on $\pi(D)\times \kappa_n.$ Let $a^*$ be a set in $\pi(D)$ such that $\pi[\lambda]\subseteq a^*.$ For every $(i, \delta)\in I\times \kappa_n,$ let $b_{i,\delta}$ be the partial map sending each $X\in D$ to the unique $\zeta<\kappa_n$ such that $(\pi[X], \zeta)\ \pi(R_i)\ (a^*, \delta),$ if such $\zeta$ exists. By elementarity, every $b_{i,\delta}$ is an $R_i$-branch. Condition $(2)$ of Definition \ref{system of branches} is satisfied as well: indeed, if $X\in D,$ then by condition $(3)$ of Definition \ref{system}, there exist $\zeta,\eta<\kappa_n$ and $i\in I$ such that $(\pi[X], \zeta)\ \pi(R_i)\ (a^*, \eta),$ hence $X\in {\rm dom}(b_{i, \eta}).$ It remains to prove that for some $j\in J,$ $b_j$ is cofinal. For every $X\in D,$ we fix $i_X,\delta_X$ such that $X\in {\rm dom}(b_{i_X, \delta_X}).$ The set $I$ has size less than $\kappa_m$ in $W;$ moreover, ${\rm Coll}(\kappa_m, \gamma)^V$ is $\kappa_m$-closed in $V[H_m\times H_{m+1}\times \dots]$ and $W= V[H]$ is a $\kappa_m$-c.c.
forcing extension of $V[H_m\times H_{m+1}\times \dots],$ so $I$ has size $<\kappa_m$ even in $V[H^*].$ On the other hand $\vert D\vert\geq \kappa_m,$ so there exists a cofinal $D'\subseteq D$ and $i,\delta$ in $V[H^*]$ such that $i= i_X$ and $\delta= \delta_X,$ for every $X\in D'.$ This means that $X\in {\rm dom}(b_{i,\delta})$ for every $X\in D',$ namely $b_{i,\delta}$ is a cofinal branch. \end{proof} $V[H^*]$ is a $\kappa_m$-closed forcing extension of $V[H]= W,$ so we can apply the Preservation Theorem. Therefore a cofinal $R_i$-branch $b$ exists in $W,$ for some $i\in I.$ Assume that $i= (a,c,\mu).$ For every $X\subseteq Y$ in ${\rm dom}(b),$ we have $$(a,c)\Vdash (X, b(X))<_{\dot{F}_{\mu}} (Y, b(Y)).$$ If $G_0\times G_1\subseteq \mathbb{L}(\mu)$ is any generic filter containing the condition $(a,c),$ then the branch $b$ determines a cofinal branch for $\dot{F}_{\mu}^{G_0\times G_1}$ in $W[G_0\times G_1],$ contradicting the fact that $\dot{F}_{\mu}$ is a name for a $(\nu^+, \lambda)$-tree with no cofinal branches. This completes the proof of the theorem. \end{proof} \section{Conclusions} We proved that if infinitely many supercompact cardinals exist in a model $V,$ then there is a forcing extension of $V$ where $\aleph_{\omega+1}$ has the strong tree property. We do not know whether $\aleph_{\omega+1}$ can consistently satisfy even the \emph{super} tree property. We also know (see \cite{Fontanella2}) that from infinitely many supercompact cardinals, one can build a model where the super tree property (hence in particular the strong tree property) holds at every cardinal of the form $\aleph_{n+2},$ where $n<\omega.$ Then, it is natural to ask whether it is possible to combine the two consistency results and prove, from infinitely many supercompact cardinals, the consistency of the strong tree property ``up to'' $\aleph_{\omega+1},$ i.e. at every regular cardinal $\leq \aleph_{\omega+1}$ (above $\aleph_1$). These problems remain open. \end{document}
\begin{document} \begin{abstract} We give an explicit formula for the motivic zeta function in terms of a log smooth model. It generalizes the classical formulas for snc-models, but it gives rise to much fewer candidate poles, in general. This formula plays an essential role in recent work on motivic zeta functions of degenerating Calabi-Yau varieties by the second-named author and his collaborators. As a further illustration, we explain how the formula for Newton non-degenerate polynomials can be viewed as a special case of our results. \end{abstract} \thanks{MSC2010:14E18;14M25. Keywords: motivic zeta functions, logarithmic geometry, monodromy conjecture} \title{Computing motivic zeta functions on log smooth models} \section{Introduction} Denef and Loeser's motivic zeta function is a subtle invariant associated with hypersurface singularities over a field $k$ of characteristic zero. It can be viewed as a motivic upgrade of Igusa's local zeta function for polynomials over $p$-adic fields. The motivic zeta function is a power series over a suitable Grothendieck ring of varieties, and it can be specialized to more classical invariants of the singularity, such as the Hodge spectrum. The main open problem in this context is the so-called {\em monodromy conjecture}, which predicts that each pole of the motivic zeta function is a root of the Bernstein polynomial of the hypersurface. One of the principal tools in the study of the motivic zeta function is its explicit computation on a log resolution \cite[3.3.1]{DL-barc}. While this formula gives a complete list of candidate poles of the zeta function, in practice most of these candidates tend to cancel out for reasons that are not well understood. Understanding this cancellation phenomenon is the key to the monodromy conjecture. The aim of this paper is to establish a formula for the motivic zeta function in terms of {\em log smooth} models instead of log resolutions (Theorem \ref{thm:main}). 
These log smooth models can be viewed as partial resolutions with toroidal singularities. Our formula generalizes the computation on log resolutions, but typically gives substantially fewer candidate poles (Proposition \ref{prop:poles}). A nice bonus is that, even for log resolutions, the language of log geometry allows us to give a cleaner and more conceptual proof of the formula for the motivic zeta function in \cite{NiSe}, and to extend the results to arbitrary characteristic (Corollary \ref{cor:snc}). We will also indicate in Section \ref{sec:curves} how our formula gives a conceptual explanation for the determination of the set of poles of the motivic zeta function of a curve singularity; this is the only dimension in which the monodromy conjecture has been proven completely. A special case of our formula has appeared in the literature under a different guise, namely, the calculation of motivic zeta functions of hypersurfaces that are non-degenerate with respect to their Newton polyhedron \cite{guibert}. We will explain the precise connection (and correct some small errors in \cite{guibert}) in Section \ref{sec:nondeg}. Our results apply not only to Denef and Loeser's motivic zeta function, but also to the motivic zeta functions of degenerations of Calabi-Yau varieties that were introduced in \cite{HaNi}. Here the formula in terms of log smooth models is particularly relevant in the context of the Gross-Siebert program on toric degenerations and Mirror Symmetry, where log smooth models appear naturally in the constructions. We have already applied our formula to compute the motivic zeta function of the degeneration of the quartic surface in \cite{NOR}. Our formula is also used in an essential way in \cite{HaNi-CY} to prove an analog of the monodromy conjecture for a large and interesting class of degenerations of Calabi-Yau varieties (namely, the degenerations with monodromy-equivariant Kulikov models).
The main results in this paper form a part of the first author's PhD thesis \cite{PhD}. They were announced in \cite{cras}. \subsection*{Acknowledgements} We are grateful to Wim Veys for his suggestion to interpret the results in Section \ref{sec:curves} in the context of logarithmic geometry. We would also like to thank the referee for their valuable and thoughtful comments. The first author was supported by a PhD grant from the Fund of Scientific Research -- Flanders (FWO). The second author was supported by the ERC Starting Grant MOTZETA (project 306610) of the European Research Council, and by long term structural funding (Methusalem grant) of the Flemish Government. \subsection*{Notations and conventions} The general theory of logarithmic schemes that we will use is explained in \cite{kato-log,kato,GaRa}. For the reader's convenience, we have included a user-friendly introduction to regular log schemes and their fans in Section \ref{sec:logsch}. These results are not new, but they are scattered in the literature. This is not meant to be a self-contained introduction to logarithmic geometry: we assume that the reader is familiar with the basic definitions of the theory, as explained for instance in Sections 1--4 of \cite{kato-log}. In the remainder of the paper, we will frequently refer back to this section for auxiliary results on logarithmic geometry. Unless explicitly stated otherwise, all logarithmic structures in this paper are defined with respect to the Zariski topology, like in \cite{kato}, and all the log schemes are Noetherian and fs (fine and saturated). This means that they satisfy condition (S) in \cite[1.5]{kato}, and are, in addition, quasi-compact. We will discuss a generalisation of our results to the Nisnevich topology in Section \ref{sec:etale}. Log schemes will be denoted by symbols of the form $\mcl{X}^\dagger$, and the underlying scheme will be denoted by $\mcl{X}$.
We write $M_{\mcl{X}^\dagger}$ for the sheaf of monoids on $\mcl{X}^\dagger$. We will follow the convention in \cite{GaRa} and speak of regular log schemes and smooth morphisms between log schemes instead of log regular log schemes and log smooth morphisms. When we refer to geometric properties of the underlying schemes instead, this will always be clearly indicated. \section{Monoids} \subsection{Generalities} For general background on the algebraic theory of monoids, we refer to \cite[\S4]{GaRa}. All monoids are assumed to be commutative, and we will most often use the additive notation $(M,+)$, with neutral element $0$. The embedding of the category of abelian groups into the category of monoids has a left adjoint $(\cdot)^{\ensuremath{\mathrm{gp}}}$, which is called the groupification functor. For every monoid $M$, we have a canonical morphism of monoids $M\to M^{\ensuremath{\mathrm{gp}}}$ by adjunction. The monoid $M$ is called {\em integral} if this morphism is injective. This is equivalent to saying that the addition on $M$ satisfies the cancellation property, that is, $x+z=y+z$ implies $x=y$ for all $x$, $y$ and $z$ in $M$. An integral monoid $M$ is called {\em saturated} if an element $m$ in $M^{\ensuremath{\mathrm{gp}}}$ belongs to $M$ if and only if there exists an integer $d>0$ such that $dm$ belongs to $M$. We say that a monoid $M$ is {\em fine} if it is finitely generated and integral. The embedding of the category of integral monoids into the category of monoids has a left adjoint, which is denoted by $(\cdot)^{\ensuremath{\mathrm{int}}}$. For every monoid $M$, we have a canonical morphism $M\to M^{\ensuremath{\mathrm{int}}}$ by adjunction, and this morphism is surjective. Since every group is integral, the morphism $M\to M^{\ensuremath{\mathrm{gp}}}$ factors through $M^{\ensuremath{\mathrm{int}}}$.
The induced morphism $M^{\ensuremath{\mathrm{int}}}\to M^{\ensuremath{\mathrm{gp}}}$ is injective, so that we can identify $M^{\ensuremath{\mathrm{int}}}$ with the image of the groupification morphism $M\to M^{\ensuremath{\mathrm{gp}}}$. Likewise, the embedding of the category of saturated monoids into the category of monoids has a left adjoint $(\cdot)^{\ensuremath{\mathrm{sat}}}$, and we can identify $M^{\ensuremath{\mathrm{sat}}}$ with the submonoid of $M^{\ensuremath{\mathrm{gp}}}$ consisting of the elements $m$ such that $dm$ belongs to $M^{\ensuremath{\mathrm{int}}}$ for some integer $d>0$. A monoid $M$ is integral, resp.~saturated, if and only if the natural morphisms $M\to M^{\ensuremath{\mathrm{int}}}$, resp.~$M\to M^{\ensuremath{\mathrm{sat}}}$, are isomorphisms. For every monoid $M$, we denote by $M^\times$ the submonoid of invertible elements of $M$. The monoid $M$ is called {\em sharp} when $M^{\times}=\{0\}$. Note that sharp monoids are, in particular, torsion free, because all torsion elements are invertible. We denote by $M^\sharp$ the sharp monoid $M/M^\times$, called the {\em sharpification} of $M$. We set $M^+ = M\setminus M^\times$, the unique maximal ideal of $M$. For every monoid $M$, we denote by $M^\vee$ its dual monoid: $M^{\vee}=\ensuremath{\mathrm{Hom}}(M,\mbb{N})$. We will also consider the submonoid $M^{\vee,\ensuremath{\mathrm{loc}}}$ of $M^{\vee}$ consisting of local homomorphisms $M\to \mbb{N}$, that is, morphisms $\varphi:M\to \mbb{N}$ such that $\varphi(m)\neq 0$ for every $m\in M^+$. \begin{rmk}\label{rema:cones} When working with monoids, it is useful to keep in mind the following more concrete description: if $M$ is a fine, saturated and torsion free monoid, then we can identify $M$ with the monoid of integral points of the convex rational polyhedral cone $\sigma$ generated by $M$ in the vector space $M^{\ensuremath{\mathrm{gp}}}\otimes_{\mbb{Z}}\mbb{R}$.
Conversely, for every convex rational polyhedral cone $\sigma$ in $\mbb{R}^n$, the intersection $M=\sigma\cap \mbb{Z}^n$ is a fine, saturated and torsion free monoid, by Gordan's Lemma (see \cite[4.3.22]{GaRa}). The monoid $M$ is sharp if and only if $\sigma$ is strictly convex. This correspondence between fine, saturated and torsion free monoids and convex rational polyhedral cones preserves the faces, by \cite[4.4.7]{GaRa}: the faces of $M$ are precisely the intersections of the faces of $\sigma$ with the lattice $M^{\ensuremath{\mathrm{gp}}}$. \end{rmk} A \emph{monoidal space} is a topological space $T$ endowed with a sheaf $M$ of monoids. If $(T,M)$ is a monoidal space, we will denote by $T^\sharp$ the monoidal space obtained by equipping $T$ with the sheaf $M^\sharp = M/M^\times$, where \[M^\times(U)= \bigl(M(U)\bigr)^\times\] for every open subspace $U$ in $T$. This construction applies, in particular, to the monoidal space $(\mcl{X},M_{\loga{X}})$ associated with a log scheme $\loga{X}$; the result will be denoted by $(\loga{X})^{\sharp}$. \subsection{The root index} \begin{defi} Let $M$ be a fine and saturated monoid endowed with a morphism of monoids $\varphi:\mbb{N}\to M$. The {\em root index} of $\varphi$ is defined to be $0$ if $\varphi(1)$ is invertible. Otherwise, it is the largest positive integer $\rho$ such that the residue class of $\varphi(1)$ in $M^{\sharp}$ is divisible by $\rho$. \end{defi} Note that such a largest $\rho$ exists because $M^{\sharp}$ is a submonoid of the free abelian group of finite rank $(M^{\sharp})^{\ensuremath{\mathrm{gp}}}$. For instance, if $M=\mbb{N}^2$ (which is already sharp) and $\varphi(1)=(2,2)$, then the root index of $\varphi$ is $2$. The importance of the root index lies in the following properties. \begin{prop}\label{prop:root} Let $M$ be a fine and saturated monoid, and let $\varphi:\mbb{N}\to M$ be a morphism of monoids. Denote by $\rho$ the root index of $\varphi$.
For every positive integer $d$, we consider the monoid $$M(d)=\left(M\oplus_{\mbb{N}}\frac{1}{d}\mbb{N}\right).$$ \begin{enumerate} \item \label{it:rootfine} The monoid $M(d)$ is fine for every $d>0$. \item \label{it:rootdiv} If $d$ divides $\rho$, then $M^{\sharp}\to (M(d)^{\ensuremath{\mathrm{sat}}})^{\sharp}$ is an isomorphism, and the morphism $$\varphi_d:\frac{1}{d}\mbb{N}\to M(d)^{\ensuremath{\mathrm{sat}}}$$ has root index $\rho/d$. \item \label{it:rootsharp} If $d$ is prime to $\rho$, then the morphisms $$M^{\times}\to M(d)^{\times},\qquad M(d)^{\times}\to (M(d)^{\ensuremath{\mathrm{sat}}})^{\times}$$ are isomorphisms. In particular, if $M$ is sharp, then so are $M(d)$ and $M(d)^{\ensuremath{\mathrm{sat}}}$. \end{enumerate} \end{prop} \begin{proof} \eqref{it:rootfine} It is obvious that $M(d)$ is finitely generated. It is also integral because we can apply the criteria of \cite[4.1]{kato-log} to the morphism $\mbb{N}\to (1/d)\mbb{N}$. \eqref{it:rootdiv} Assume that $d$ divides $\rho$. We may suppose that $M$ is sharp, because the morphism $$(M(d)^{\ensuremath{\mathrm{sat}}})^{\sharp}\to (M^{\sharp}(d)^{\ensuremath{\mathrm{sat}}})^{\sharp}$$ is an isomorphism by the universal properties of sharpification, coproduct and saturation. Let $m$ be an element of $M$ such that $\rho m=\varphi(1)$. Then $$d((\rho/d)m-\varphi_d(1/d))=0,$$ so that $(\rho/d)m-\varphi_d(1/d)$ is a unit in $M(d)$ and $(\rho/d)m=\varphi_d(1/d)$ in $(M(d)^{\ensuremath{\mathrm{sat}}})^{\sharp}$.
Using the universal properties of the coproduct, saturation and sharpification, together with the fact that $M$ is sharp and saturated, we obtain a morphism of monoids $(M(d)^{\ensuremath{\mathrm{sat}}})^{\sharp}\to M$ that sends the residue class of $\varphi_d(1/d)$ to $(\rho/d)m$ and that is inverse to $M\to (M(d)^{\ensuremath{\mathrm{sat}}})^{\sharp}$. It follows at once that $$\varphi_d:\frac{1}{d}\mbb{N}\to M(d)^{\ensuremath{\mathrm{sat}}}$$ has root index $\rho/d$. \eqref{it:rootsharp} Assume that $d$ is prime to $\rho$. It suffices to prove that the composed morphism $M^{\times}\to (M(d)^{\ensuremath{\mathrm{sat}}})^{\times}$ is an isomorphism, because $M(d)^{\times}\to (M(d)^{\ensuremath{\mathrm{sat}}})^{\times}$ is injective since $M(d)$ is integral. Let $x$ be an invertible element in $M(d)^{\ensuremath{\mathrm{sat}}}$. We must show that $x$ lies in $M$. Since $M\to M(d)^{\ensuremath{\mathrm{sat}}}$ is exact by \cite[4.4.42(vi)]{GaRa}, it is enough to prove that $x\in M^{\ensuremath{\mathrm{gp}}}$. We can write $x$ as $(m,i/d)$ with $m\in M^{\ensuremath{\mathrm{gp}}}$ and $i\in \{0,\ldots,d-1\}$. Since $M$ is saturated, the element $dx$ lies in $M$, and hence in $M^{\times}$ because we can apply the same argument to the inverse of $x$. This means that $dm=-i\varphi(1)$ in $M^{\sharp}$. But $d$ is prime to $\rho$, so that $d$ divides $i$. Hence, $i=0$ and $x\in M^{\ensuremath{\mathrm{gp}}}$. \end{proof} \section{Regular log schemes}\label{sec:logsch} \subsection{Kato's definition of regularity} The notion of regularity for logarithmic schemes was introduced by K.~Kato in \cite{kato}. It can be viewed as a generalization of the theory of toroidal embeddings in \cite{KKMS}.
An important advantage of the logarithmic approach is that it works equally well in mixed characteristic; moreover, the monoidal structure on logarithmic schemes keeps track in an efficient way of the cones that describe the local toroidal structure. Let $\mcl{X}^{\dagger}$ be a log scheme. Thus $\loga{X}$ consists of a scheme $\mcl{X}$, equipped with a sheaf of monoids $M_{\loga{X}}$ and a morphism of sheaves of monoids $$\alpha\colon (M_{\loga{X}},+)\to (\mathcal{O}_{\mcl{X}},\times)$$ such that the induced morphism $\alpha^{-1}(\mathcal{O}^{\times}_{\mcl{X}})\to \mathcal{O}^{\times}_{\mcl{X}}$ is an isomorphism. In particular, $\alpha$ identifies the invertible elements in $M_{\loga{X}}$ and $\mathcal{O}_{\mcl{X}}$. For every point $x$ of $\mcl{X}$, we call the monoid $$M^{\sharp}_{\loga{X},x}=M_{\loga{X},x}/M^{\times}_{\loga{X},x}\cong M_{\loga{X},x}/\mathcal{O}^{\times}_{\mcl{X},x}$$ the {\em characteristic monoid} of $\loga{X}$ at $x$. We say that the log structure is {\em trivial} at $x$ if $M^{\sharp}_{\loga{X},x}=\{0\}$, that is, if every element in the stalk $M_{\loga{X},x}$ is invertible. Assume that $\loga{X}$ is Noetherian and fs (fine and saturated). Thus it satisfies condition (S) in \cite[1.5]{kato}. Then the characteristic monoids of $\loga{X}$ are sharp, fine and saturated. For every point $x$ of the underlying scheme $\mcl{X}$, we denote by $I_{\mcl{X}^{\dagger},x}$ the ideal in $\mathcal{O}_{\mcl{X},x}$ generated by the image of $M^+_{\loga{X},x}=M_{\loga{X},x}\setminus M^{\times}_{\loga{X},x}$ in $\mathcal{O}_{\mcl{X},x}$.
The log scheme $\loga{X}$ is called {\em regular} at $x$ if $\mathcal{O}_{\mcl{X},x}/I_{\loga{X},x}$ is regular and $$\dim(\mathcal{O}_{\mcl{X},x})=\dim(\mathcal{O}_{\mcl{X},x}/I_{\loga{X},x})+\dim(M^{\sharp}_{\loga{X},x}).$$ Here $\dim(M^{\sharp}_{\loga{X},x})$ denotes the Krull dimension of the monoid $M^{\sharp}_{\loga{X},x}$, which can also be expressed as the rank of the free abelian group $(M^{\sharp}_{\loga{X},x})^{\ensuremath{\mathrm{gp}}}$ by \cite[4.4.10]{GaRa}. An important consequence of the definition is that every regular log scheme is normal, by \cite[4.1]{kato}. Every quasi-compact fine and saturated log scheme that is smooth over a regular log scheme is itself regular, by \cite[8.2]{kato}.

The idea behind the definition of regularity for log schemes is that, when $\mcl{X}^{\dagger}$ is regular at $x$, the lack of regularity of the scheme $\mcl{X}$ at $x$ is encoded in the characteristic monoid $M^{\sharp}_{\mcl{X}^{\dagger},x}$. By Remark \ref{rema:cones}, we can think of $M^{\sharp}_{\loga{X},x}$ as the monoid of integral points in a strictly convex rational polyhedral cone, and this cone describes the toroidal structure of $\mcl{X}$ at $x$. An explicit description of the completed local ring of $\mcl{X}$ at $x$ in terms of the characteristic monoid can be found in \cite[3.2]{kato}.

\begin{eg}\label{ex:snc} An important class of logarithmic structures are the {\em divisorial} log structures. Let $\mcl{X}$ be a Noetherian integral scheme, and let $D$ be a Weil divisor on $\mcl{X}$. Then $D$ gives rise to a log structure $M$ on $\mcl{X}$, called the divisorial log structure induced by $D$. The sheaf of monoids $M$ is defined by $$M=\mathcal{O}_{\mcl{X}}\cap i_* \mathcal{O}^{\times}_{\mcl{X}\setminus D} $$ where $i\colon \mcl{X}\setminus D\to \mcl{X}$ is the open embedding of the complement of $D$ in $\mcl{X}$.
Thus the sections of $M$ are the regular functions on $\mcl{X}$ that are invertible outside $D$. The log scheme $\mcl{X}^{\dagger}=(\mcl{X},M)$ is not fine and saturated, in general. However, if $\mcl{X}$ is regular and the reduced divisor $D_{\ensuremath{\mathrm{red}}}$ has strict normal crossings, then $\mcl{X}^{\dagger}$ is regular. If $x$ is a point of $\mcl{X}$ and $(z_1,\ldots,z_n)$ is a regular system of local parameters in $\mathcal{O}_{\mcl{X},x}$ such that $D_{\ensuremath{\mathrm{red}}}$ is defined by $z_1\cdot \ldots \cdot z_r=0$ locally at $x$, for some $0\leq r\leq n$, then $I_{\mcl{X}^\dagger,x}$ is the ideal generated by $(z_1,\ldots,z_r)$, and $M_x/M^{\times}_x$ is a free monoid of rank $r$. A basis for this monoid is given by the residue classes of $z_1,\ldots,z_r$.

Conversely, let $\mcl{X}^{\dagger}$ be a log regular scheme. Let $D$ be the set of the points $x$ of $\mcl{X}$ where the log structure is nontrivial. It follows from \cite[11.6]{kato} that $D$ is a closed subset of $\mcl{X}$ of pure codimension one; thus we can view it as a reduced Weil divisor on $\mcl{X}$. Then by \cite[11.6]{kato}, the log structure on $\mcl{X}^{\dagger}$ is precisely the divisorial log structure induced by $D$. For every point $x$ of $\loga{X}$, we can interpret the characteristic monoid $$M^{\sharp}_{\loga{X},x}=M_{\loga{X},x}/\mathcal{O}^{\times}_{\mcl{X},x}$$ as the monoid of effective Cartier divisors on $\Spec \mathcal{O}_{\mcl{X},x}$ supported on the inverse image of $D$. Note, however, that $\mcl{X}$ is not necessarily regular. For instance, if $\mcl{X}$ is a toric variety over a field, and $D$ is its toric boundary, then the divisorial log structure induced by $D$ makes $\mcl{X}$ into a regular log scheme (see Example \ref{exam:toric}).
\end{eg}

\subsection{Fans and log stratifications}\label{sec:fans}

Let $\mcl{X}^\dagger$ be a regular log scheme, and consider its associated fan $F(\mcl{X}^\dagger)$ in the sense of \cite[\S10]{kato}. This is a sharp monoidal space whose underlying topological space is the subspace of $\mcl{X}$ consisting of the points $x$ such that $M_{\mcl{X}^\dagger,x}^+$ generates the maximal ideal of $\mathcal{O}_{\mcl{X},x}$. The sheaf of monoids on $F(\mcl{X}^\dagger)$ is the pullback of the sheaf $M^{\sharp}_{\loga{X}}=M_{\mcl{X}^\dagger}/\mathcal{O}_{\mcl{X}}^{\times}$ on $\mcl{X}$. A dictionary between this notion of fan and the usual notion in toric geometry is provided in Example \ref{exam:toric} below.

By \cite[10.6.9(ii)]{GaRa}, the natural morphism of monoidal spaces $$F(\mcl{X}^\dagger)\to (\loga{X})^{\sharp}=(\mcl{X},M^{\sharp}_{\loga{X}})$$ has a canonical retraction $\pi\colon (\loga{X})^{\sharp}\to F(\mcl{X}^\dagger)$. It sends a point $x$ of $\mcl{X}$ to the point of $F(\mcl{X}^\dagger)$ that corresponds to the prime ideal $I_{\loga{X},x}$ of $\mathcal{O}_{\mcl{X},x}$. The map $\pi$ is open, by \cite[10.6.9(iii)]{GaRa}. With the help of this retraction, we can enhance the construction of the Kato fan to a functor from the category of regular log schemes to the category of fans, by sending a morphism of regular log schemes $h:\loga{Y}\to \loga{X}$ to the morphism of fans obtained as the composition $$\begin{CD}F(\loga{Y})@>>> (\loga{Y})^{\sharp}@>h^{\sharp}>> (\loga{X})^{\sharp}@>\pi>> F(\loga{X}). \end{CD}$$ See \cite[\S10.6]{GaRa} for additional background.

For every point $\tau$ of $F(\loga{X})$, we denote by $r(\tau)$ the dimension of the sharp, fine and saturated monoid $M_{F(\loga{X}),\tau}=M^{\sharp}_{\loga{X},\tau}$. We call this number the {\em rank} of $\tau$.
The fiber $\pi^{-1}(\tau)$ is an irreducible locally closed subset of $\mcl{X}$ of pure codimension $r(\tau)$, and it is a regular subscheme of $\mcl{X}$ if we endow it with its reduced induced structure \cite[10.6.9(iii)]{GaRa}. We denote this subscheme by $E(\tau)^o$. Locally around each point $x$ of $E(\tau)^o$, the scheme $E(\tau)^o$ is the zero locus of the prime ideal $I_{\loga{X},x}$ of $\mathcal{O}_{\mcl{X},x}$. We write $E(\tau)$ for the schematic closure of $E(\tau)^o$ in $\mcl{X}$. Then $E(\tau)$ is the disjoint union of the sets $E(\sigma)^o$ where $\sigma$ runs through the closure of $\{\tau\}$ in $F(\loga{X})$, because the retraction $\pi$ is open. Thus the collection of subschemes $$\{E(\tau)^o\,|\,\tau\in F(\loga{X})\}$$ is a stratification of $\mcl{X}$, which is called the log stratification of $\loga{X}$. It follows immediately from the definitions that $\tau$ is the generic point of $E(\tau)^o$ and $E(\tau)$.

\begin{prop}\label{prop:cosp} Let $\loga{X}$ be a regular log scheme. Then the morphism $$\pi^{-1}M_{F(\mcl{X}^\dagger)}\to M^{\sharp}_{\loga{X}}$$ is an isomorphism of sheaves of monoids. In particular, the sheaf $M^{\sharp}_{\loga{X}}$ is constant along every stratum $E(\tau)^o$ of the logarithmic stratification. \end{prop}

\begin{proof} This is stated without proof in \cite[10.2]{kato}; as observed in \cite[10.6.9(i)]{GaRa}, it follows from the construction of the retraction $\pi$ in \cite[10.6.9(ii)]{GaRa}. \end{proof}

It follows that, for every point $x$ of $E(\tau)^o$, the monoid $M_{\loga{X},x}^{\sharp}$ has dimension $r(\tau)$; we say that $E(\tau)^o$ is a stratum of rank $r(\tau)$.

\begin{eg}\label{exam:standard} Let $R$ be a discrete valuation ring with quotient field $K$. We write $S=\Spec R$ and denote by $s$ and $\eta$ the closed and generic point of $S$, respectively.
We denote by $S^{\dagger}$ the log scheme obtained by endowing $S$ with the divisorial log structure induced by the closed point $s$; this is called the {\em standard} log structure on $S$. Then $M_{S^{\dagger}}(S)=R\setminus \{0\}$ and $M_{S^{\dagger}}(\eta)=K^{\times}$. Thus the fan $F(S^{\dagger})$ consists of the underlying topological space $|S|=\{s,\eta\}$ of $S$, endowed with the sheaf of monoids $M_{F(S^{\dagger})}$ determined by $$M_{F(S^{\dagger})}(S)=(R\setminus \{0\})/R^{\times}=\mbb{N}, \quad M_{F(S^{\dagger})}(\eta)=\{0\}.$$ \end{eg}

\begin{eg}\label{exam:toric} Let $k$ be a field, and let $Y=Y(\Sigma)$ be a toric variety over $k$, associated with a fan $\Sigma$ in $\mbb{R}^n$. We endow $Y$ with the divisorial log structure induced by the toric boundary divisor $D$; the resulting log scheme will be denoted by $Y^{\dagger}$. Let $y$ be a point of $Y$, and let $\sigma$ be the unique cone in $\Sigma$ such that $y$ is contained in the torus orbit $O(\sigma)$. We will describe the characteristic monoid $M^{\sharp}_{\loga{Y},y}$ at $y$. By definition, the monoid $M_{\loga{Y},y}$ consists of the functions in $\mathcal{O}_{Y,y}$ that are invertible outside $D$. By \cite[3.3]{fulton}, we can write such a function in a unique way as $u\chi^m$, where $u$ is a unit in $\mathcal{O}_{Y,y}$ and $\chi^m$ is the character associated with an element $m$ of $\sigma^{\vee}\cap \mbb{Z}^n$. Thus $M^{\sharp}_{\loga{Y},y}=(\sigma^{\vee}\cap \mbb{Z}^n)^{\sharp}$. Locally around $y$, the zero locus of the elements in $M^+_{\loga{Y},y}$ is the torus orbit $O(\sigma)$. Since $O(\sigma)$ is regular of dimension $$\dim(Y)-\dim(\sigma)= \dim(Y)-\dim(M^{\sharp}_{\loga{Y},y}),$$ the log scheme $\loga{Y}$ is regular at $y$. Our description of the characteristic monoids of $\loga{Y}$ also implies that the logarithmic strata of $\loga{Y}$ are precisely the torus orbits of $Y$.
Thus the points of the fan $F(\loga{Y})$ are in bijective correspondence with the cones in the fan $\Sigma$. If $y$ and $y'$ are points of $F(\loga{Y})$ and we denote by $\sigma$ and $\sigma'$ the corresponding cones in $\Sigma$, then $y'$ lies in the closure of $\{y\}$ if and only if $\sigma$ is a face of $\sigma'$. The cospecialization map $$M_{F(\loga{Y}),y'}\to M_{F(\loga{Y}),y}$$ is precisely the inclusion $$((\sigma')^{\vee}\cap \mbb{Z}^n)^\sharp\to (\sigma^{\vee}\cap \mbb{Z}^n)^\sharp$$ induced by the face map $\sigma\to \sigma'$. Thus the fan $F(\loga{Y})$ encodes the monoids of integral points in the cones of $\Sigma$ (or rather, the sharpified dual cones) and how they are glued together in the fan $\Sigma$; see also \cite[9.5]{kato}. \end{eg}

\subsection{Boundary divisor and divisorial valuations}\label{sec:val}

Let $\loga{X}$ be a regular log scheme. The {\em boundary} of $\loga{X}$ is the locus of points where the log structure is non-trivial; this is a closed subset of $\mcl{X}$ of pure codimension one, and it is equal to the union of the strata $E(\tau)^o$ such that $M_{F(\loga{X}),\tau}\neq 0$. We denote by $D$ the reduced Weil divisor supported on the boundary of $\loga{X}$. Then the log structure on $\loga{X}$ coincides with the divisorial log structure induced by $D$ (see Example \ref{ex:snc}). If we denote by $E_i,\,i\in I$ the prime components of $D$, then a point $\tau$ of $\mcl{X}$ belongs to $F(\loga{X})$ if and only if it is a generic point of $\cap_{j\in J}E_j$ for some subset $J$ of $I$, by \cite[2.2.3]{BM} (note that, when $J$ is empty, this intersection equals $\mcl{X}$). In that case, $E(\tau)$ is the connected component of $\cap_{j\in J}E_j$ that contains $\tau$, and $$E(\tau)^o=E(\tau)\setminus \bigcup_{i\notin J}E_i.$$

\begin{eg} Let $\mcl{X}$ be a quasi-compact regular scheme and let $D$ be a strict normal crossings divisor on $\mcl{X}$.
Let $\loga{X}$ be the log scheme that we obtain by endowing $\mcl{X}$ with the divisorial log structure induced by $D$ (see Example \ref{ex:snc}). Then $\loga{X}$ is log regular and its boundary coincides with $D$. Conversely, if $\loga{X}$ is a regular log scheme, then the underlying scheme $\mcl{X}$ is regular if and only if $M_{F(\loga{X}),\tau}$ is isomorphic to $\mbb{N}^{r(\tau)}$ for every $\tau$ in $F(\loga{X})$ \cite[10.5.35]{GaRa}. In that case, the boundary divisor $D$ of $\loga{X}$ has strict normal crossings \cite[2.7]{saito}. \end{eg}

Let $\loga{X}$ be any regular log scheme, and fix a point $\tau$ of the fan $F(\loga{X})$. Let $F(\tau)$ be the subspace of $F(\loga{X})$ consisting of the points $\sigma$ such that $\tau$ is contained in the closure of $\{\sigma\}$. We denote by $M_{F(\tau)}$ the restriction of the sheaf of monoids $M_{F(\loga{X})}$ to $F(\tau)$. Then the monoidal space $(F(\tau),M_{F(\tau)})$ is canonically isomorphic to the spectrum of the characteristic monoid $M_{F(\loga{X}),\tau}$, by \cite[10.1]{kato} and its proof. This implies, in particular, that the prime ideals of height one in $M_{F(\loga{X}),\tau}$ are in bijective correspondence with the strata $E(\sigma)$ such that $\sigma$ is a codimension one point in $F(\loga{X})$ whose closure contains $\tau$; these are precisely the irreducible components of $D$ that pass through $\tau$. If $\mathfrak{p}$ is a prime ideal of height one in $M_{F(\loga{X}),\tau}$ and $\sigma$ is the corresponding point of $F(\tau)$, then $$M_{F(\loga{X}),\sigma}\cong (M_{F(\loga{X}),\tau})_{\mathfrak{p}}^{\sharp}= \mbb{N}.$$ This monoid is generated by the residue class of a local equation for $E(\sigma)$ at its generic point $\sigma$. More generally, for every element $f$ in $M_{\loga{X},\sigma}$, the residue class of $f$ in $M_{F(\loga{X}),\sigma}= \mbb{N}$ is the order of $f$ along the component $E(\sigma)$.
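\begin{eg} Here is a minimal worked illustration in the strict normal crossings setting of Example \ref{ex:snc}; the notation $z_1,z_2$ for the local parameters is ours. Suppose that $\tau$ is a generic point of $E_1\cap E_2$, where $E_1=\{z_1=0\}$ and $E_2=\{z_2=0\}$ are prime components of $D$, so that $M_{F(\loga{X}),\tau}\cong \mbb{N}^2$, with basis given by the residue classes of $z_1$ and $z_2$. The height one prime ideals of $\mbb{N}^2$ are $$\mathfrak{p}_1=\{(a,b)\in \mbb{N}^2\,|\,a\geq 1\},\qquad \mathfrak{p}_2=\{(a,b)\in \mbb{N}^2\,|\,b\geq 1\},$$ corresponding to $E_1$ and $E_2$, respectively, and the localization morphisms $$\mbb{N}^2\to (\mbb{N}^2)_{\mathfrak{p}_1}^{\sharp}=\mbb{N},\ (a,b)\mapsto a, \qquad \mbb{N}^2\to (\mbb{N}^2)_{\mathfrak{p}_2}^{\sharp}=\mbb{N},\ (a,b)\mapsto b$$ are the coordinate projections. Accordingly, if $\sigma_i$ denotes the generic point of $E_i$ and $f=uz_1^{a}z_2^{b}$ with $u$ a unit, then the residue class of $f$ in $M_{F(\loga{X}),\sigma_1}=\mbb{N}$ is $a$ and its residue class in $M_{F(\loga{X}),\sigma_2}=\mbb{N}$ is $b$: the orders of $f$ along $E_1$ and $E_2$. \end{eg}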
We can also interpret this from the perspective of the dual monoid $M^{\vee}_{F(\loga{X}),\tau}$. The complement of a height one prime ideal $\mathfrak{p}$ in $M_{F(\loga{X}),\tau}$ is a codimension one face of $M_{F(\loga{X}),\tau}$. The dual face $$(M_{F(\loga{X}),\tau}\setminus \mathfrak{p})^{\bot}= \{\varphi\in M^{\vee}_{F(\loga{X}),\tau}\,|\,\varphi(m)=0 \mbox{ for all }m\notin \mathfrak{p}\}$$ of $M^{\vee}_{F(\loga{X}),\tau}$ is generated by the localization morphism $$M_{F(\loga{X}),\tau}\to (M_{F(\loga{X}),\tau})_{\mathfrak{p}}^{\sharp}= \mbb{N}.$$ The map $$\mathfrak{p}\mapsto (M_{F(\loga{X}),\tau}\setminus \mathfrak{p})^{\bot}$$ is a bijection between the set of height one prime ideals in $M_{F(\loga{X}),\tau}$ and the set of one-dimensional faces of $M^{\vee}_{F(\loga{X}),\tau}$ (in view of Remark \ref{rema:cones}, this follows from the classical duality properties of convex rational polyhedral cones).

\subsection{Subdivisions}\label{sec:subdiv}

Let $F'$ be a fine and saturated proper subdivision of the fan $F(\loga{X})$ in the sense of \cite[9.7]{kato}. Such a subdivision gives rise to a proper \'etale morphism of log schemes $h:\loga{Y}\to \loga{X}$ such that $F(\loga{Y})$ is isomorphic to $F'$ over $F(\loga{X})$. Moreover, $h$ is an isomorphism over the log trivial locus of $\loga{X}$. More precisely, the discriminant locus of $h:\mcl{Y}\to \mcl{X}$ is the union of strata $E(\tau)$ in $\mcl{X}$ such that the morphism of monoidal spaces $F'\to F(\loga{X})$ is not an isomorphism over any open neighbourhood of $\tau$ in $F(\loga{X})$.

Let $\tau'$ be a point of $F(\loga{Y})$ and denote by $\tau$ its image in $F(\loga{X})$. Then the morphism $M^{\ensuremath{\mathrm{gp}}}_{F(\loga{X}),\tau}\to M^{\ensuremath{\mathrm{gp}}}_{F(\loga{Y}),\tau'}$ is surjective by the definition of a subdivision. In particular, $r(\tau)\geq r(\tau')$.
Denote by $N$ the kernel of $M^{\ensuremath{\mathrm{gp}}}_{F(\loga{X}),\tau}\to M^{\ensuremath{\mathrm{gp}}}_{F(\loga{Y}),\tau'}$. It follows immediately from the construction of the morphism $h\colon \loga{Y}\to \loga{X}$ in the proof of \cite[9.9]{kato} that $E(\tau')^o$ is a torsor over $E(\tau)^o$ with translation group $$\Spec \mbb{Z}[N]\cong \mathbb{G}^{r(\tau)-r(\tau')}_{m,\mbb{Z}}.$$

\begin{eg} For toric varieties, the subdivisions in \cite[9.6]{kato} correspond to subdivisions of the toric fan {\em via} the dictionary provided in Example \ref{exam:toric}. The associated morphism of log schemes is precisely the toric modification associated with the subdivision of the toric fan. \end{eg}

\subsection{Charts}\label{sec:charts}

Let $\loga{X}$ be a fine and saturated log scheme. A {\em chart} for $\loga{X}$ is a strict morphism of log schemes $\loga{X}\to \Spec^\dagger \mbb{Z}[N]$, where $N$ is a monoid and we denote by $\Spec^{\dagger}\mbb{Z}[N]$ the scheme $\Spec \mbb{Z}[N]$ endowed with the log structure induced by $N\to \mbb{Z}[N]$. Here {\em strict} means that the log structure on $\loga{X}$ is the pullback of the log structure on $\Spec^{\dagger} \mbb{Z}[N]$. If $\loga{X}\to \Spec^{\dagger} \mbb{Z}[N]$ is a chart for $\loga{X}$, then for every point $x$ of $\mcl{X}$, the morphism of monoids $N\to M_{\loga{X},x}$ induces a surjection $N\to M^{\sharp}_{\loga{X},x}$. Thus, up to a multiplicative factor, every element in $M^+_{\loga{X},x}$ lifts to an element of $N^+$. Therefore, if $\loga{X}$ is regular, then locally around $x$, the logarithmic stratum that contains $x$ is the zero locus of the image of $N^+$ in $\mathcal{O}_{\mcl{X},x}$. We will use this description in the proof of Lemma \ref{lemm:root}.
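\begin{eg} As a simple illustration of the notion of a chart (this is the $n=1$ case of the standard charts introduced in Section \ref{sec:stratram} below), let $S^{\dagger}$ be the spectrum of a discrete valuation ring $R$ with its standard log structure, as in Example \ref{exam:standard}, and let $t$ be a uniformizer in $R$. The morphism of monoids $$(\mbb{N},+)\to (R,\times),\quad 1\mapsto t$$ induces a strict morphism $S^{\dagger}\to \Spec^{\dagger}\mbb{Z}[\mbb{N}]$, that is, a global chart for $S^{\dagger}$. At the closed point $s$ it induces the isomorphism $\mbb{N}\to M^{\sharp}_{S^{\dagger},s}$ sending $1$ to the residue class of $t$, while at the generic point the log structure is trivial. \end{eg}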
For every fine and saturated log scheme $\loga{X}$ and every point $x$ of $\mcl{X}$, we can find locally around $x$ a chart of the form $\loga{X}\to \Spec^{\dagger} \mbb{Z}[M^{\sharp}_{\loga{X},x}]$, by the proof of \cite[10.1.36(i)]{GaRa}. Then the induced morphism of monoids $M^{\sharp}_{\loga{X},x}\to M_{\loga{X},x}$ is a section of the projection morphism $M_{\loga{X},x}\to M^{\sharp}_{\loga{X},x}$.

If $f\colon \loga{Y}\to \loga{X}$ is a morphism of fine and saturated log schemes, then a chart for $f$ is a commutative diagram of log schemes $$\begin{CD} \loga{Y}@>>> \Spec^{\dagger}\mbb{Z}[P] \\ @VfVV @VVgV \\ \loga{X} @>>> \Spec^{\dagger}\mbb{Z}[N] \end{CD}$$ where $g$ is induced by a morphism of monoids $N\to P$ and the horizontal maps are charts. Locally on $\loga{Y}$ and $\loga{X}$, we can always find a chart for $f$ with $N$ and $P$ fine and saturated monoids. More precisely, if $\loga{X}\to \Spec^{\dagger}\mbb{Z}[N]$ is any chart with $N$ fine and saturated, then locally on $\loga{Y}$, we can find a morphism of fine and saturated monoids $N\to P$ and a chart for $f$ as in the diagram above. This follows from \cite[10.1.40]{GaRa}, except for the fact that we can choose $P$ to be saturated; that property follows from the proof of \cite[10.1.37]{GaRa}.

One usually cannot choose the chart $\loga{Y}\to \Spec^{\dagger}\mbb{Z}[P]$ independently of the morphism $f$, because there may not be a morphism of monoids $N\to P$ that makes the diagram commute. In practice, one can solve this problem by modifying the monoid $P$; let us discuss the only case that will be used in this paper (in the proof of Lemma \ref{lemm:root}). Suppose that we are given charts $\loga{X} \to \Spec^{\dagger}\mbb{Z}[N]$ and $\loga{Y} \to \Spec^{\dagger}\mbb{Z}[Q]$, where the monoid $N$ is isomorphic to $\mbb{N}$. Let $y$ be a point of $\mcl{Y}$ and set $x=f(y)$.
We denote by $\varphi$ the morphism of monoids $Q\to M_{\loga{Y},y}$ induced by the given chart for $\loga{Y}$. Denote by $h$ the image of the generator of $N$ under the composed morphism $$N\to M_{\loga{X},x}\to M_{\loga{Y},y}.$$ By the surjectivity of the morphism $Q\to M^{\sharp}_{\loga{Y},y}$, we can find a unit $u$ in $\mathcal{O}_{\mcl{Y},y}$ such that $uh$ lifts to an element $q$ in $Q$. Now we set $P=Q\times \mbb{Z}$ and, locally around $y$, we consider the chart for $\loga{Y}$ induced by the morphism of monoids $P\to M_{\loga{Y},y}$ that maps $(a,b)$ to $u^b\varphi(a)$. Then $f$ has a chart as in the diagram above, where the morphism of monoids $N\to P$ sends the generator of $N$ to $(q,-1)$.

\subsection{Smoothness versus regularity}

For the applications in Section \ref{sec:DL} we will also need the following result, which relates log regularity and log smoothness. For the definition of regularity for log schemes with respect to the \'etale topology, we refer to \cite[2.2]{niziol}. It is the direct analog of the Zariski case, replacing points by geometric points and local rings by their strict henselizations. The basic theory of logarithmic smoothness can be found in \cite[\S3.3]{kato-log}.

\begin{prop}\label{prop:regsm} Let $R$ be a discrete valuation ring with residue field $k$ and quotient field $K$, and set $S=\Spec(R)$. We denote by $S^{\dagger}$ the scheme $S$ equipped with its standard log structure (see Example \ref{exam:standard}). Let $\loga{X}$ be a fine and saturated log scheme of finite type over $S^{\dagger}$ (with respect to the Zariski or \'etale topology). \begin{enumerate} \item \label{it:flat} If $\loga{X}$ is regular, then $\mcl{X}$ is flat over $S$. \item \label{it:smreg} If $\loga{X}$ is smooth over $S^{\dagger}$, then it is regular. \item \label{it:regsm} If $k$ has characteristic zero and $\loga{X}$ is regular, then $\loga{X}$ is smooth over $S^{\dagger}$.
\end{enumerate} \end{prop}

\begin{proof} \eqref{it:flat} Assume that $\loga{X}$ is regular. Then $\mcl{X}$ is normal; in particular, it does not have any embedded components. For every point $x$ of the special fiber $\mcl{X}_k$, the morphism $\loga{X}\to S^{\dagger}$ induces a commutative diagram $$\begin{CD} M_{\loga{X},x}@>>>\mathcal{O}_{\mcl{X},x} \\ @AAA @AAA \\ M_{S^{\dagger},s}=R\setminus \{0\}@>>>R \end{CD}$$ where $s$ denotes the closed point of $S$. If $t$ is a uniformizer in $R$, then $t$ is not invertible in $\mathcal{O}_{\mcl{X},x}$, so that the log structure on $\loga{X}$ is non-trivial at every point of $\mcl{X}_k$. However, the definition of regularity implies that the log structure is trivial at every generic point of $\mcl{X}$; thus all the generic points of $\mcl{X}$ are contained in the generic fiber, so that $\mcl{X}$ is flat over $S$.

\eqref{it:smreg} Every smooth fs log scheme over $S^{\dagger}$ is regular by \cite[8.2]{kato} (we can reduce to the Zariski case by passing to an \'etale cover of $\mcl{X}$ where the log structure on $\loga{X}$ is Zariski in the sense of \cite[2.1.1]{niziol}).

\eqref{it:regsm} Assume that $k$ has characteristic zero, and that $\loga{X}$ is regular. Let $\overline{x}$ be a geometric point on $\mcl{X}_k$, and set $M=M^{\sharp}_{\loga{X},\overline{x}}$. We choose a chart $\loga{X}\to \Spec^{\dagger}\mbb{Z}[M]$ \'etale-locally around $\overline{x}$. We also choose a uniformizer $t$ in $R$; this choice determines a chart $S^{\dagger}\to \Spec^{\dagger}\mbb{Z}[\mbb{N}]$ such that the induced morphism $\mbb{N}\to M_{S^{\dagger},s}$ maps $1$ to $t$. Then we can find an element $m$ in $M$ and a unit $u$ in $\mathcal{O}_{\mcl{X},\overline{x}}$ such that $t=um$ in $\mathcal{O}_{\mcl{X},\overline{x}}$. Since $k$ has characteristic zero, we can take arbitrary roots of invertible functions on $\mcl{X}$ locally in the \'etale topology on $\mcl{X}$.
Thus there exists a morphism $\psi$ from the free abelian group $M^{\ensuremath{\mathrm{gp}}}$ to $\mathcal{O}^{\times}_{\mcl{X},\overline{x}}$ that maps $m$ to $u$. Multiplying the morphism of monoids $M\to \mathcal{O}_{\mcl{X},\overline{x}}$ by the restriction of $\psi$ to $M$, we obtain a new chart $\loga{X}\to \Spec^{\dagger}\mbb{Z}[M]$ \'etale-locally around $\overline{x}$ such that the pullback of $m$ is equal to $t$. This implies that, \'etale locally around every geometric point $\overline{x}$ on $\mcl{X}_k$, we can find a chart for $\loga{X}\to S^{\dagger}$ of the form $$\begin{CD} \loga{X}@>>> \Spec^{\dagger} \mbb{Z}[M] \\ @VVV @VVV \\ S^{\dagger}@>>> \Spec^{\dagger} \mbb{Z}[\mbb{N}] \end{CD}$$ The morphism $\mbb{N}\to M$ is injective, because $M$ is integral and $t$ is not invertible at $\overline{x}$. The order of the torsion part of $\mathrm{coker}(\mbb{N}^{\ensuremath{\mathrm{gp}}}=\mbb{Z}\to M^{\ensuremath{\mathrm{gp}}})$ is invertible in $k$, by our assumption that $k$ has characteristic zero. Moreover, the morphism of schemes $$\mcl{X}\to \Spec \mbb{Z}[M] \times_{\Spec \mbb{Z}[\mbb{N}]}S$$ is smooth in a neighbourhood of $\overline{x}$, by the local description of regular log schemes in \cite[3.2(1)]{kato}. Thus it follows from Kato's logarithmic criterion for smoothness \cite[3.5]{kato-log} that $\loga{X}\to S^{\dagger}$ is smooth. \end{proof}

Beware that Proposition \ref{prop:regsm}\eqref{it:regsm} does not extend to the case where $k$ has characteristic $p>0$. The problem is that we cannot take $p$-th roots of all invertible functions locally in the \'etale topology, and that the order of the torsion part of the cokernel of the morphism $\mbb{Z}\to M^{\ensuremath{\mathrm{gp}}}$ may not be invertible in $k$. A sufficient condition for log smoothness is given by the following statement. Let $\loga{X}$ be a regular log scheme of finite type over $S^{\dagger}$ (with respect to the \'etale topology).
Suppose moreover that $k$ is perfect, the log structure on $\loga{X}$ is vertical (that is, it is trivial on $\mcl{X}_K$), the generic fiber $\mcl{X}_K$ is smooth over $K$, and the multiplicities of the components in the special fiber are prime to $p$. Then $\loga{X}$ is smooth over $S^{\dagger}$. This follows from the same argument as in the proof of Proposition \ref{prop:regsm}.

\subsection{Fine and saturated fibered products}\label{sec:fsfib}

An important role in this paper is played by fibered products in the category of fine and saturated log schemes. Let us briefly describe their structure for further reference. Let $\loga{X}\to \loga{Z}$ and $\loga{Y}\to \loga{Z}$ be morphisms of fine and saturated log schemes. The fibered product $\loga{W}$ of $\loga{X}$ and $\loga{Y}$ over $\loga{Z}$ in the category of log schemes can be constructed by endowing the scheme $\mcl{W}=\mcl{X}\times_{\mcl{Z}}\mcl{Y}$ with the fibered coproduct of the pullbacks of the log structures on $\loga{X}$ and $\loga{Y}$ over the pullback of the log structure on $\loga{Z}$ (see \cite[1.6]{kato-log}). Let $w$ be a point of $\mcl{W}$ that maps to $x$, $y$ and $z$ in $\mcl{X}$, $\mcl{Y}$ and $\mcl{Z}$, respectively.
Then it follows directly from the construction that the characteristic monoid $M^{\sharp}_{\loga{W},w}$ is canonically isomorphic to $$\left(M^{\sharp}_{\loga{X},x}\oplus_{M^{\sharp}_{\loga{Z},z}} M^{\sharp}_{\loga{Y},y}\right)^{\sharp}.$$ If $M^{\sharp}_{\loga{Z},z}\to M^{\sharp}_{\loga{X},x}$ and $M^{\sharp}_{\loga{Z},z}\to M^{\sharp}_{\loga{Y},y}$ are injective, then the coproduct $$M^{\sharp}_{\loga{X},x}\oplus_{M^{\sharp}_{\loga{Z},z}} M^{\sharp}_{\loga{Y},y}$$ is already sharp, by \cite[4.1.12]{GaRa}.

The log structure on $\loga{W}$ is coherent \cite[2.6]{kato-log}, but in general it is neither integral, nor saturated. The fibered product $$\loga{X}\times^{\ensuremath{\mathrm{fs}}}_{\loga{Z}}\loga{Y}$$ in the category of fine and saturated log schemes can be constructed by consecutively applying the functors $(\cdot)^{\mathrm{f}}$ and $(\cdot)^{\mathrm{fs}}$ from \cite[10.2.36(i)]{GaRa} to $\loga{W}$. A subtle point of this construction is that it changes the underlying scheme $\mcl{W}$. If $\loga{W}\to \Spec^{\dagger}\mbb{Z}[N]$ is a chart for $\loga{W}$, with $N$ a finitely generated monoid, then the underlying scheme of $\loga{X}\times^{\ensuremath{\mathrm{fs}}}_{\loga{Z}}\loga{Y}$ is given by $$\mcl{W}\times_{\Spec \mbb{Z}[N]}\Spec \mbb{Z}[N^{\ensuremath{\mathrm{sat}}}].$$ The natural morphism $$\loga{X}\times^{\ensuremath{\mathrm{fs}}}_{\loga{Z}}\loga{Y}\to \loga{W}$$ is a finite morphism on the underlying schemes \cite[10.2.36(ii)]{GaRa}, and it is an isomorphism over the open subscheme of $\loga{W}$ where the log structure is trivial.
For every point $w'$ lying above $w$, we have a canonical isomorphism $$M^{\sharp}_{\loga{X}\times^{\ensuremath{\mathrm{fs}}}_{\loga{Z}}\loga{Y},w'}\cong ((M^{\sharp}_{\loga{X},x}\oplus_{M^{\sharp}_{\loga{Z},z}} M^{\sharp}_{\loga{Y},y})^{\ensuremath{\mathrm{sat}}})^{\sharp}.$$ This is proven in \cite[2.1.1]{nakayama} for log schemes with respect to the \'etale topology, but the proof is also valid for the Zariski topology. The most important class of fs fibered products for our purposes is described in the following proposition.

\begin{prop}\label{prop:fsfib} Let $R$ be a complete discrete valuation ring with quotient field $K$. Let $K'$ be a finite extension of $K$ and denote by $R'$ the integral closure of $R$ in $K'$. We denote by $S^{\dagger}$ the scheme $S=\Spec R$ endowed with its standard log structure (see Example \ref{exam:standard}). The log scheme $(S')^{\dagger}$ is defined analogously, replacing $R$ by $R'$. Let $\loga{X}$ be a fine and saturated log scheme, and let $\loga{X}\to S^{\dagger}$ be a smooth morphism of log schemes. Then the underlying scheme $\mcl{Y}$ of $\loga{Y}=\loga{X}\times^{\ensuremath{\mathrm{fs}}}_{S^{\dagger}}(S')^{\dagger}$ is the normalization of $\mcl{X}\times_S S'$. \end{prop}

\begin{proof} Smoothness is preserved by fine and saturated base change, so that the log scheme $\loga{Y}$ is smooth over $(S')^{\dagger}$. Since $S^{\dagger}$ and $(S')^{\dagger}$ are regular, the log schemes $\loga{X}$ and $\loga{Y}$ are regular as well, by Proposition \ref{prop:regsm}. Thus $\mcl{X}$ and $\mcl{Y}$ are normal. It also follows from Proposition \ref{prop:regsm} that $\mcl{X}$ is flat over $S$ and $\mcl{Y}$ is flat over $S'$.
The morphism $\mcl{Y}\to \mcl{X}\times_S S'$ is an isomorphism on the generic fibers, because the log structures on $S$ and $S'$ are trivial at their generic points, so that $\loga{X}\times_{S^{\dagger}}\Spec(K')$ is already saturated. Thus $\mcl{Y}\to \mcl{X}\times_S S'$ is birational; since it is also finite and $\mcl{Y}$ is normal, it is a normalization morphism. \end{proof}

\section{Smooth log schemes over discrete valuation rings}

\subsection{Log modifications and ramified base change}\label{sec:stratram}

Let $R$ be a complete discrete valuation ring with residue field $k$ and quotient field $K$. We write $S^\dagger$ for the scheme $S=\Spec R$ endowed with its standard log structure (see Example \ref{exam:standard}). We fix a uniformizer $t$ in $R$. For every positive integer $n$ we denote by $R(n)$ the extension $R[u]/(u^{n}-t)$ of $R$. We write $S(n)^\dagger$ for the scheme $S(n)=\Spec R(n)$ with its standard log structure. The morphism of monoids $$((1/n)\mbb{N},+)\to (R[u]/(u^n-t),\times)$$ that sends $1/n$ to $u$ induces a chart $$S(n)^{\dagger}\to \Spec^{\dagger}\mbb{Z}[(1/n)\mbb{N}]$$ that we call the standard chart for $S(n)^{\dagger}$.

Whenever $m$ is a positive multiple of $n$, we have a morphism of log schemes $S(m)^{\dagger}\to S(n)^{\dagger}$ associated with the morphism of $R$-algebras $$R[u]/(u^n-t)\to R[v]/(v^m-t)\colon u\mapsto v^{m/n}.$$ The standard charts for $S(m)^{\dagger}$ and $S(n)^{\dagger}$ fit into a chart $$\begin{CD} S(m)^{\dagger} @>>> \Spec^{\dagger}\mbb{Z}[(1/m)\mbb{N}] \\ @VVV @VVV \\ S(n)^{\dagger} @>>> \Spec^{\dagger}\mbb{Z}[(1/n)\mbb{N}] \end{CD}$$ for the morphism $S(m)^{\dagger}\to S(n)^{\dagger}$, where the morphism of monoids $(1/n)\mbb{N}\to (1/m)\mbb{N}$ is the inclusion map.

Let $\loga{X}$ be a smooth fine and saturated log scheme of finite type over $S^\dagger$.
Then $\loga{X}$ is regular, by Proposition \ref{prop:regsm}, so that we can apply the constructions from Section \ref{sec:fans} to $\loga{X}$. Proposition \ref{prop:regsm} also implies that the underlying scheme $\mcl{X}$ is flat over $S$. We denote by $e_t$ the image of the uniformizer $t$ in the monoid $M_{\loga{X}}(\mcl{X})$. Let $x$ be a point in $\mcl{X}_k$. Then the structural morphism $\loga{X}\to S^{\dagger}$ induces a local morphism of monoids $$\varphi:\mbb{N}\to M^{\sharp}_{\loga{X},x}$$ that sends $1$ to the image of $e_t$ in $M^{\sharp}_{\loga{X},x}$, which we will still denote by $e_t$. We define the {\em root index} $\rho(x)$ to be the root index of this morphism $\varphi$.

Now let $\tau$ be a point of $F(\loga{X})\cap \mcl{X}_k$. Then $M^{\sharp}_{\loga{X},\tau}=M_{F(\loga{X}),\tau}$. We set $\rho=\rho(\tau)$, and we denote by $\loga{Y}$ the fibered product $$\loga{X}\times^{\ensuremath{\mathrm{fs}}}_{S^\dagger}S(\rho)^{\dagger}$$ in the category of fine and saturated log schemes (see Section \ref{sec:fsfib}). The log scheme $\loga{Y}$ is smooth over $S(\rho)^{\dagger}$ because smoothness is preserved by fs base change. The underlying scheme $\mcl{Y}$ is the normalization of $\mcl{X}\times_S S(\rho)$, by Proposition \ref{prop:fsfib}. We set $$\widetilde{E}(\tau)^o=\left(\mcl{Y}\times_{\mcl{X}}E(\tau)^o\right)_{\ensuremath{\mathrm{red}}}.$$ This is a union of logarithmic strata of $\loga{Y}$, each of which has characteristic monoid $$(M_{F(\loga{X}),\tau}\oplus_\mbb{N}\frac{1}{\rho}\mbb{N})^{\ensuremath{\mathrm{sat}},\sharp}.$$ By Proposition \ref{prop:root}\eqref{it:rootdiv}, we know that the natural morphism $$M_{F(\loga{X}),\tau}\to (M_{F(\loga{X}),\tau}\oplus_\mbb{N}\frac{1}{\rho}\mbb{N})^{\ensuremath{\mathrm{sat}},\sharp}$$ is an isomorphism.
If $\tau$ is a point of $F(\loga{X})$ that is not contained in $\mcl{X}_k$, then we set $\widetilde{E}(\tau)^o=E(\tau)^o$. \begin{eg}\label{exam:torsor} Let $\loga{X}$ be a smooth fs log scheme of finite type over $S^{\dagger}$. Assume that the underlying scheme $\mcl{X}$ is regular; then $\mcl{X}_k$ is a divisor with strict normal crossings (see Example \ref{ex:snc}). We write $$\mcl{X}_k=\sum_{i\in I}N_i E_i$$ where $E_i,\,i\in I$ are the prime components of $\mcl{X}_k$ and the coefficients $N_i$ are their multiplicities. Let $x$ be a point of $\mcl{X}_k$ and let $J$ be the set of indices $j\in I$ such that $x$ lies on $E_j$. Then there exists an isomorphism of monoids $$\psi\colon M^{\sharp}_{\loga{X},x}\to \mbb{N}^{J}\times \mbb{N}^{h},$$ for some integer $h\geq 0$, such that $\psi(e_t)=((N_j)_{j\in J},0)$. Here $h$ is the number of irreducible components of the boundary of $\loga{X}$ that pass through $x$ and that are horizontal, that is, not contained in the special fiber $\mcl{X}_k$. The root index $\rho(x)$ is the greatest common divisor of the multiplicities $N_j,\,j\in J$. If $\tau$ is a point of $F(\loga{X})\cap \mcl{X}_k$ and $\rho(\tau)$ is not divisible by the characteristic of $k$, then $\widetilde{E}(\tau)^o\to E(\tau)^o$ has a canonical structure of a $\mu_{\rho(\tau)}$-torsor, which is described explicitly in \cite[\S2.3]{Ni-tameram}. \end{eg} The following results constitute a key step in the calculation of the motivic zeta function. \begin{lem}\label{lemm:root} Let $\loga{X}$ be a smooth fs log scheme of finite type over $S^\dagger$ and let $\tau$ be a point of $F(\loga{X})\cap \mcl{X}_k$ of root index $\rho=\rho(\tau)$. Let $m$ be any positive multiple of $\rho$ and denote by $\loga{Z}$ the fibered product $$\loga{X}\times^{\ensuremath{\mathrm{fs}}}_{S^\dagger}S(m)^{\dagger}$$ in the category of fine and saturated log schemes. 
Then the natural morphism $$(\mcl{Z}\times_{\mcl{X}}E(\tau)^o)_{\ensuremath{\mathrm{red}}}\to \widetilde{E}(\tau)^o$$ is an isomorphism. \end{lem} \begin{proof} We set $$\loga{Y}=\loga{X}\times^{\ensuremath{\mathrm{fs}}}_{S^\dagger}S(\rho)^{\dagger},$$ as before. We have already recalled that the log scheme $\loga{Y}$ is regular and that $\widetilde{E}(\tau)^o$ is a union of logarithmic strata with characteristic monoid $$M=(M_{F(\loga{X}),\tau}\oplus_\mbb{N}\frac{1}{\rho}\mbb{N})^{\ensuremath{\mathrm{sat}},\sharp}\cong M_{F(\loga{X}),\tau}.$$ The monoid $M$ is endowed with a local morphism $\varphi:(1/\rho)\mbb{N}\to M$ induced by $\loga{Y}\to S(\rho)^{\dagger}$, and the morphism $\varphi$ has root index $1$ by Proposition \ref{prop:root}\eqref{it:rootdiv}. Let $y$ be a point of $\widetilde{E}(\tau)^o\subset \mcl{Y}$. By Section \ref{sec:charts}, locally around $y$, we can find a chart for the morphism $\loga{Y}\to S(\rho)^{\dagger}$ of the form $$\begin{CD} \loga{Y}@>>> \Spec^{\dagger} \mbb{Z}[M\times \mbb{Z}] \\ @VVV @VVV \\ S(\rho)^{\dagger}@>>> \Spec^{\dagger} \mbb{Z}[\frac{1}{\rho}\mbb{N}] \end{CD}$$ where the lower horizontal morphism is the standard chart for $S(\rho)^{\dagger}$, and $(1/\rho)\mbb{N}\to M\times \mbb{Z}$ is a morphism of monoids such that the composition with the projection $M\times \mbb{Z}\to M$ coincides with $\varphi$. We set $$N=(M\times \mbb{Z})\oplus_{\frac{1}{\rho}\mbb{N}}\frac{1}{m}\mbb{N}.$$ As we have recalled in Section \ref{sec:fsfib}, the fs fibered product $\loga{Z}$ is obtained by first taking the fibered product $$\loga{W}=\loga{Y}\times_{S(\rho)^{\dagger}}S(m)^{\dagger}$$ in the category of log schemes; the underlying scheme of $\loga{W}$ is the fibered product $\mcl{Y}\times_{S(\rho)}S(m)$, and, locally around $y$, our chart for $\loga{Y}\to S(\rho)^{\dagger}$ induces a chart $\loga{W}\to \Spec^{\dagger} \mbb{Z}[N]$. 
Then, over some open neighbourhood of $y$ in $\mcl{Y}$, the underlying scheme of $\loga{Z}$ is given by $$\mcl{Z}= \left(\mcl{Y}\times_{S(\rho)}S(m)\right)\times_{\mbb{Z}[N]}\mbb{Z}[N^{\ensuremath{\mathrm{sat}}}],$$ and the morphism $\loga{Z}\to \Spec^{\dagger} \mbb{Z}[N^{\ensuremath{\mathrm{sat}}}]$ is a chart for the log structure on $\loga{Z}$. By Section \ref{sec:charts}, all the elements in the image of the morphism $M^+\times \mbb{Z}\to \mathcal{O}_{\mcl{Y},y}$ vanish in $\mathcal{O}_{\widetilde{E}(\tau)^o,y}$. By construction, $(m/\rho)N^+$ is contained in $M^+\times \mbb{Z}$. Thus for every element $h$ in the image of $N^+\to \mathcal{O}_{\mcl{W},y}$, we have that $h^{m/\rho}$ vanishes in $\mathcal{O}_{\widetilde{E}(\tau)^o,y}$. Since $\mathcal{O}_{\widetilde{E}(\tau)^o,y}$ is reduced, we conclude that the morphism $\mbb{Z}[N]\to \mathcal{O}_{\widetilde{E}(\tau)^o,y}$ factors through $\mbb{Z}[N]/\langle N^+\rangle$, where $\langle N^+\rangle$ denotes the ideal generated by $N^+$. It is obvious that the morphism $\mbb{Z}[N^{\times}]\to \mbb{Z}[N]$ induces an isomorphism $$\mbb{Z}[N^{\times}]\to \mbb{Z}[N]/\langle N^+\rangle.$$ We have a similar isomorphism $$\mbb{Z}[(N^{\ensuremath{\mathrm{sat}}})^{\times}]\to \mbb{Z}[N^{\ensuremath{\mathrm{sat}}}]/\langle (N^{\ensuremath{\mathrm{sat}}})^+\rangle$$ for the monoid $N^{\ensuremath{\mathrm{sat}}}$, and the morphism $\mbb{Z}[N^{\times}]\to \mbb{Z}[(N^{\ensuremath{\mathrm{sat}}})^{\times}]$ is an isomorphism, by Proposition \ref{prop:root}\eqref{it:rootsharp}. Moreover, $\langle(N^{\ensuremath{\mathrm{sat}}})^+\rangle$ is the radical of the ideal in $\mbb{Z}[N^{\ensuremath{\mathrm{sat}}}]$ generated by $N^+$. 
Therefore, over an open neighbourhood of $y$, the $\widetilde{E}(\tau)^o$-scheme $(\mcl{Z}\times_{\mcl{X}}E(\tau)^o)_{\ensuremath{\mathrm{red}}}$ is isomorphic to $$\left(\widetilde{E}(\tau)^o\times_{\mbb{Z}[N]}\mbb{Z}[N^{\ensuremath{\mathrm{sat}}}]\right)_{\ensuremath{\mathrm{red}}}\cong \widetilde{E}(\tau)^o\times_{\mbb{Z}[N^{\times}]}\mbb{Z}[(N^{\ensuremath{\mathrm{sat}}})^{\times}]\cong \widetilde{E}(\tau)^o.$$ \end{proof} \begin{prop}\label{prop:logblup} Let $\loga{X}$ be a smooth fs log scheme of finite type over $S^\dagger$. Let $\psi:F'\to F$ be a fine and saturated proper subdivision of $F=F(\loga{X})$, and denote by $h:\loga{(X')}\to \loga{X}$ the corresponding morphism of log schemes. Let $\tau'$ be a point of $F'$ and set $\tau=\psi(\tau')$. Then there exists a natural morphism of $E(\tau)^o$-schemes $$\widetilde{E}(\tau')^o\to \widetilde{E}(\tau)^o$$ such that $\widetilde{E}(\tau')^o$ is a $\mathbb{G}^{r(\tau)-r(\tau')}_{m,k}$-torsor over $\widetilde{E}(\tau)^o$. \end{prop} \begin{proof} Let $m$ be a positive integer that is divisible by both $\rho(\tau)$ and $\rho(\tau')$ and set $$\loga{Z}=\loga{X}\times^{\ensuremath{\mathrm{fs}}}_{S^\dagger}S(m)^{\dagger},\quad \loga{(Z')}=\loga{(X')}\times^{\ensuremath{\mathrm{fs}}}_{S^\dagger}S(m)^{\dagger}.$$ By Lemma \ref{lemm:root}, the morphism $h_m:\loga{(Z')}\to \loga{Z}$ obtained from $h$ by fs base change induces a morphism of $E(\tau)^o$-schemes $\widetilde{E}(\tau')^o\to \widetilde{E}(\tau)^o$. We will prove that the morphism $h$ induced by the subdivision $\psi$ is compatible with fs base change, in the following sense. The refinement $\psi:F'\to F$ induces a refinement $$\psi_m:F(\loga{Z})\times^{\ensuremath{\mathrm{fs}}}_{F}F' \to F(\loga{Z})$$ where the fibered product is taken in the category of fine and saturated fans. We claim that the morphism of log schemes induced by this refinement is precisely the morphism $h_m:\loga{(Z')}\to \loga{Z}$. 
Since $\widetilde{E}(\tau)^o$ is a union of logarithmic strata of rank $r(\tau)$ in $\loga{Z}$ and $\widetilde{E}(\tau')^o$ is a union of logarithmic strata of rank $r(\tau')$ in $\loga{(Z')}$, this implies that $\widetilde{E}(\tau')^o$ is a $\mathbb{G}^{r(\tau)-r(\tau')}_{m,k}$-torsor over $\widetilde{E}(\tau)^o$ (see Section \ref{sec:subdiv}). It remains to prove that $h_m$ is indeed the morphism induced by the refinement $\psi_m$. The morphism induced by $\psi_m$ is characterized by the following universal property \cite[9.9]{kato}: it is a final object in the category of logarithmic schemes $\loga{W}$ endowed with a morphism $\loga{W}\to \loga{Z}$ and a morphism of monoidal spaces $\pi':\loga{W}\to F(\loga{Z})\times^{\ensuremath{\mathrm{fs}}}_{F}F'$ such that the diagram $$\begin{CD} \loga{W}@>>> \loga{Z} \\ @V\pi' VV @VV\pi V \\ F(\loga{Z})\times^{\ensuremath{\mathrm{fs}}}_{F}F' @>>\psi_m > F(\loga{Z}) \end{CD}$$ commutes. If $\loga{W}$ is such a final object, then we have a canonical morphism $\loga{(Z')}\to \loga{W}$ of log schemes over $\loga{Z}$. Conversely, applying the universal properties for the morphism $\loga{(X')}\to \loga{X}$ and the fs base change to $S(m)^{\dagger}$, we obtain a morphism $\loga{W}\to \loga{(Z')}$ of log schemes over $\loga{Z}$. These two morphisms are mutually inverse, so that $\loga{W}$ is isomorphic to $\loga{(Z')}$ over $\loga{Z}$. \end{proof} \section{Motivic zeta functions} We denote by $R$ a complete discrete valuation ring with residue field $k$ and quotient field $K$. We assume that $k$ is perfect and we fix a uniformizer $t$ in $R$. For every positive integer $n$, we write $R(n)=R[u]/(u^n-t)$ and $K(n)=K[u]/(u^n-t)$. We write $S^\dagger$ and $S(n)^\dagger$ for the schemes $S=\Spec R$ and $S(n)=\Spec R(n)$ endowed with their standard log structures. 
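For instance, if $R=k\ensuremath{\llbracket} t\ensuremath{\rrbracket}$ has equal characteristic, then $$R(n)=k\ensuremath{\llbracket} t\ensuremath{\rrbracket}[u]/(u^n-t)\cong k\ensuremath{\llbracket} u\ensuremath{\rrbracket},\qquad K(n)\cong k((u)),$$ so that $K(n)$ is a totally ramified extension of $K=k((t))$ of degree $n$. In general, the polynomial $u^n-t$ is Eisenstein over $R$, so that $K(n)$ is a field and $R(n)$ is again a complete discrete valuation ring, with uniformizer $u$.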
\subsection{Grothendieck rings and geometric series}\label{sec:groth} If $R$ has equal characteristic, then for every noetherian $k$-scheme $X$, we denote by $\mathcal{M}_{X}$ the Grothendieck ring of varieties over $X$, localized with respect to the class $\mbb{L}$ of the affine line $\mbb{A}^1_{X}$. If $R$ has mixed characteristic, we use the same notation, but we replace the Grothendieck ring of varieties by its {\em modified} version, which means that we identify the classes of universally homeomorphic $X$-schemes of finite type -- see \cite[\S3.8]{NS-K0}. In the calculation of the motivic zeta function, we will need to consider some specific geometric series in $\mbb{L}^{-1}$. The standard technique is to pass to the completion $\widehat{\mathcal{M}}_X$ of $\mathcal{M}_X$ with respect to the dimensional filtration. However, since it is not known whether the completion morphism $\mathcal{M}_X\to \widehat{\mathcal{M}}_X$ is injective, we will use a different method to avoid any loss of information. We start with an elementary lemma. \begin{lem}\label{lemm:cone} Let $M$ be a sharp, fine and saturated monoid of dimension $d$. For every morphism of monoids $u:M\to \mbb{N}$, we denote by $u^{\ensuremath{\mathrm{gp}}}:M^{\ensuremath{\mathrm{gp}}}\to \mbb{Z}$ the induced morphism of groups. Let $m$ be an element of $M^{\ensuremath{\mathrm{gp}}}$ and let $n$ be an element of $M\setminus \{0\}$. Let $u_1,\ldots,u_r$ be the generators of the one-dimensional faces of $M^{\vee}$, and denote by $I$ the set of indices $i$ in $\{1,\ldots,r\}$ such that $u_i(n)>0$. We assume that $u_j^{\ensuremath{\mathrm{gp}}}(m)=1$ for every $j\notin I$. 
Then the series \begin{equation}\label{eq:series} (L-1)^{d-1}\sum_{u \in M^{\vee,\ensuremath{\mathrm{loc}}} } L^{-u^{\ensuremath{\mathrm{gp}}}(m)} T^{u(n)} \end{equation} in the variables $L$ and $T$ lies in the subring $$\mbb{Z}[L,L^{-1},T]\left[\frac{T}{1-L^{-u^{\ensuremath{\mathrm{gp}}}_i(m)}T^{u_i(n)}}\right]_{i\in I}$$ of $\mbb{Z}\ensuremath{\llbracket} L^{-1},T \ensuremath{\rrbracket} [L]$. \end{lem} \begin{proof} A sharp, fine and saturated monoid is called {\em simplicial} if its number of one-dimensional faces is equal to its dimension. We can subdivide $M^{\vee}$ into a fan of simplicial monoids without inserting new one-dimensional faces, and such a subdivision gives rise to a partition of $M^{\vee,\ensuremath{\mathrm{loc}}}$. Thus we may assume from the start that $M^{\vee}$ is simplicial, so that $d=r$ and $u_1,\ldots,u_r$ form a basis for the $\mbb{Q}$-vector space $(M^{\vee})^{\ensuremath{\mathrm{gp}}}\otimes_{\mbb{Z}}\mbb{Q}$. Denote by $P$ the fundamental parallelepiped $$P=\{\lambda_1u_1+\ldots+\lambda_r u_r\in M^{\vee,\ensuremath{\mathrm{loc}}}\,|\,\lambda_i\in \mbb{Q}\cap (0,1]\,\}$$ in $M^{\vee,\ensuremath{\mathrm{loc}}}$. Then $P$ is a finite set, and we have $$\eqref{eq:series}=(L-1)^{r-1}\left(\sum_{u\in P}L^{-u^{\ensuremath{\mathrm{gp}}}(m)} T^{u(n)}\right)\prod_{i=1}^r \frac{1}{1-L^{-u^{\ensuremath{\mathrm{gp}}}_i(m)}T^{u_i(n)}}.$$ Now the result follows from the assumption that for every $i$, either $u_i(n)>0$ or $u^{\ensuremath{\mathrm{gp}}}_i(m)=1$; note that at most $r-1$ of the values $u_i(n)$ vanish, because $n\neq 0$. \end{proof} Keeping the notations and assumptions of Lemma \ref{lemm:cone}, we {\em define} $$(\mbb{L}-1)^{d-1}\sum_{u \in M^{\vee,\ensuremath{\mathrm{loc}}} } \mbb{L}^{-u^{\ensuremath{\mathrm{gp}}}(m)} T^{u(n)}$$ as the value of $$(L-1)^{d-1}\sum_{u \in M^{\vee,\ensuremath{\mathrm{loc}}} } L^{-u^{\ensuremath{\mathrm{gp}}}(m)} T^{u(n)}$$ at $L=\mbb{L}$. 
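For instance, if $M=\mbb{N}$, then $d=1$, the identity $u_1=\mathrm{id}_{\mbb{N}}$ generates the unique one-dimensional face of $M^{\vee}$, and $M^{\vee,\ensuremath{\mathrm{loc}}}$ consists of the morphisms $a\cdot \mathrm{id}_{\mbb{N}}$ with $a\geq 1$. For $m\in\mbb{Z}$ and $n\in\mbb{N}\setminus\{0\}$ we obtain $$(\mbb{L}-1)^{d-1}\sum_{u \in M^{\vee,\ensuremath{\mathrm{loc}}} } \mbb{L}^{-u^{\ensuremath{\mathrm{gp}}}(m)} T^{u(n)}=\sum_{a\geq 1}\mbb{L}^{-am}T^{an}=\frac{\mbb{L}^{-m}T^{n}}{1-\mbb{L}^{-m}T^{n}};$$ these are precisely the factors that will appear in Corollary \ref{cor:snc}.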
Lemma \ref{lemm:cone} guarantees that this is a well-defined element of $\mathcal{M}_k\ensuremath{\llbracket} T \ensuremath{\rrbracket}$. \subsection{Definition of the motivic zeta function}\label{sec:defzeta} Let $\mcl{X}$ be an $R$-scheme of finite type with smooth generic fiber $\mcl{X}_K$, and let $\omega$ be a volume form on $\mcl{X}_K$ (that is, a nowhere vanishing differential form of maximal degree on each connected component of $\mcl{X}_K$). A {\em N\'eron smoothening} of $\mcl{X}$ is a morphism of finite type $h:\mcl{Y}\to \mcl{X}$ such that $\mcl{Y}$ is smooth over $R$, $h_K:\mcl{Y}_K\to \mcl{X}_K$ is an isomorphism, and the natural map $\mcl{Y}(R')\to \mcl{X}(R')$ is a bijection for every finite unramified extension $R'$ of $R$. Such a N\'eron smoothening always exists, by \cite[3.1.3]{BLR}. For every connected component $C$ of $\mcl{Y}_k$, we denote by $\ord_C\omega$ the unique integer $a$ such that $t^{-a}\omega$ extends to a relative volume form on $\mcl{Y}$ locally around the generic point of $C$. \begin{defi}[Loeser-Sebag]\label{def:motint} The motivic integral of $\omega$ on $\mcl{X}$ is defined by $$\int_{\mcl{X}}|\omega|=\sum_{C\in \pi_0(\mcl{Y}_k)}[C]\mbb{L}^{-\ord_C\omega}\quad \in \mathcal{M}_{\mcl{X}_k}$$ where $\mcl{Y}\to \mcl{X}$ is any N\'eron smoothening and $\pi_0(\mcl{Y}_k)$ is the set of connected components of $\mcl{Y}_k$. \end{defi} It is a deep fact that this definition does not depend on the choice of a N\'eron smoothening; the proof relies on the theory of motivic integration \cite{motrigid}. Definition \ref{def:motint} can be interpreted as a motivic upgrade of the integral of a volume form on a compact $p$-adic manifold \cite[\S4.6]{motrigid}. The motivic zeta function of the pair $(\mcl{X},\omega)$ is a generating series that measures how the motivic integral in Definition \ref{def:motint} changes under ramified extensions of $R$. 
For every positive integer $n$, we set $\mcl{X}(n)=\mcl{X}\times_R R(n)$, and we denote by $\omega(n)$ the pullback of $\omega$ to the generic fiber of $\mcl{X}(n)$. \begin{defi}\label{def:motzeta} The motivic zeta function of the pair $(\mcl{X},\omega)$ is the generating series $$Z_{\mcl{X},\omega}(T)=\sum_{n>0}\left(\int_{\mcl{X}(n)}|\omega(n)|\right)T^n\quad \in \mathcal{M}_{\mcl{X}_k}\ensuremath{\llbracket} T \ensuremath{\rrbracket}.$$ \end{defi} Beware that this definition depends on the choice of the uniformizer $t$, except when $k$ has characteristic zero and contains all the roots of unity: in that case, $K(n)$ is the unique degree $n$ extension of $K$, up to $K$-isomorphism. If $h:\mcl{X}'\to \mcl{X}$ is a proper morphism of $R$-schemes such that $h_K:\mcl{X}'_K\to \mcl{X}_K$ is an isomorphism, then it follows immediately from the definition that we can recover $Z_{\mcl{X},\omega}(T)$ from $Z_{\mcl{X}',\omega}(T)$ by specializing the coefficients with respect to the forgetful group homomorphism $\mathcal{M}_{\mcl{X}'_k}\to \mathcal{M}_{\mcl{X}_k}$. Thus we can compute $Z_{\mcl{X},\omega}(T)$ after a suitable proper modification of $\mcl{X}$. The principal aim of this paper is to establish an explicit formula for $Z_{\mcl{X},\omega}(T)$ in the case where $\mcl{X}$ is smooth over $S^\dagger$ with respect to a suitable choice of log structure on $\mcl{X}$. \subsection{Explicit formula on a log smooth model} Let $\loga{X}$ be a smooth fs log scheme of finite type over $S^\dagger$, and denote by $D$ its reduced boundary divisor, which was defined in Section \ref{sec:fans}. We write $F=F(\loga{X})$ for the fan associated with $\loga{X}$, and we denote by $e_t$ the image of the uniformizer $t$ in the monoid of global sections of $M_{\loga{X}}$. We write $F_k$ for the set $F\cap \mcl{X}_k$. 
This is a finite set, consisting of the points in the special fiber $\mcl{X}_k$ whose Zariski closure is a connected component of an intersection of irreducible components of $D$ (this follows from the description of the logarithmic stratification in Section \ref{sec:fans}). Let $\omega$ be a differential form of maximal degree on $\mcl{X}_K$ that is nowhere vanishing on $\mcl{X}_K\setminus D$. Then we can view $\omega$ as a rational section of the relative canonical bundle $\omega_{\loga{X}/S^\dagger}$. As such, it defines a Cartier divisor on $\mcl{X}$, which we denote by $\mathrm{div}_{\loga{X}}(\omega)$. This divisor is supported on $D$. Let $\tau$ be a point of $F$. For every element $u$ of $M_{F,\tau}^{\vee,\ensuremath{\mathrm{loc}}}$, we set $u(\omega)=u^{\ensuremath{\mathrm{gp}}}(\overline{f})\in \mbb{Z}$, where $\overline{f}$ is the residue class in $M^{\ensuremath{\mathrm{gp}}}_{F,\tau}$ of any element $f\in M^{\ensuremath{\mathrm{gp}}}_{\loga{X},\tau}$ such that $\mathrm{div}(f)=\mathrm{div}_{\loga{X}}(\omega)$ locally at $\tau$. This definition does not depend on the choice of $f$. Note that $u(\omega)>0$ if $\tau$ is not contained in $F_k$, because $\mathrm{div}_{\loga{X}}(\omega)\geq D$ on $\mcl{X}_K$. \begin{thm}\label{thm:main} Let $\loga{X}$ be a smooth fs log scheme of finite type over $S^\dagger$. We assume that the generic fiber $\mcl{X}_K$ is smooth over $K$ (but we allow the log structure on $\loga{X}$ to be nontrivial on $\mcl{X}_K$). Let $\omega$ be a volume form on $\mcl{X}_K$. 
Then for every $\tau$ in $F_k$, the expression \begin{equation}\label{eq:geometric} (\mbb{L}-1)^{r(\tau)-1} \sum_{u \in M_{F,\tau}^{\vee,\ensuremath{\mathrm{loc}}} } \mbb{L}^{-u(\omega)} T^{u(e_t)} \end{equation} is well-defined in $\mathcal{M}_{\mcl{X}_k}\ensuremath{\llbracket} T \ensuremath{\rrbracket}$, and the motivic zeta function of $(\mcl{X},\omega)$ is given by \begin{equation}\label{eq:motzeta} Z_{\mcl{X},\omega}(T)=\sum_{\tau\in F_k} [\wtl{E}(\tau)^o] (\mbb{L}-1)^{r(\tau)-1} \sum_{u \in M_{F,\tau}^{\vee,\ensuremath{\mathrm{loc}}} } \mbb{L}^{-u(\omega)} T^{u(e_t)} \end{equation} in $\mathcal{M}_{\mcl{X}_k}\ensuremath{\llbracket} T \ensuremath{\rrbracket}$. \end{thm} \begin{proof} We break up the proof into four steps. {\em Step 1: the expression \eqref{eq:geometric} is well-defined.} Since $\omega$ is a volume form on $\mcl{X}_K$, the horizontal part of the divisor $\mathrm{div}_{\loga{X}}(\omega)$ coincides with the horizontal part of the reduced boundary divisor $D$ of $\loga{X}$. This means that $u(\omega)=1$ for every $\tau\in F_k$ and every generator $u$ of a one-dimensional face of $M_{F,\tau}^{\vee}$ such that $u(e_t)=0$. Hence, Lemma \ref{lemm:cone} guarantees that \eqref{eq:geometric} is a well-defined element of $\mathcal{M}_{\mcl{X}_k}\ensuremath{\llbracket} T \ensuremath{\rrbracket}$. {\em Step 2: invariance under log modifications.} We will show that the right hand side of \eqref{eq:motzeta} is invariant under the log modification $h:\loga{(X')}\to \loga{X}$ induced by any fs proper subdivision $\psi:F'\to F$ that is an isomorphism over $F\cap \mcl{X}_K$, or equivalently, such that $h_K:\mcl{X}'_K\to \mcl{X}_K$ is an isomorphism. Let $\tau$ be a point in $F_k$. Then, by the definition of a proper subdivision, the morphism $\psi$ induces a bijection between $M_{F,\tau}^{\vee,\ensuremath{\mathrm{loc}}}$ and the disjoint union of the sets $M_{F',\tau'}^{\vee,\ensuremath{\mathrm{loc}}}$ where $\tau'$ runs through the set of points in $\psi^{-1}(\tau)$. 
Since $h$ is an \'etale morphism of log schemes, the pullback of $\mathrm{div}_{\loga{X}}(\omega)$ to $\mcl{X}'$ coincides with $\mathrm{div}_{\loga{(X')}}(\omega)$. Thus if $u$ is an element of $M_{F',\tau'}^{\vee,\ensuremath{\mathrm{loc}}}$ for some $\tau'$ in $\psi^{-1}(\tau)$, then the value $u(h^*_K\omega)$ computed on $\loga{(X')}$ coincides with the value $u(\omega)$ computed on $\loga{X}$. The same is obviously true for $u(e_t)$. Moreover, by Proposition \ref{prop:logblup}, we have \begin{equation}\label{eq:torus} [\wtl{E}(\tau')^o]=(\mbb{L}-1)^{r(\tau)-r(\tau')} [\wtl{E}(\tau)^o] \end{equation} for every point $\tau'$ in $\psi^{-1}(\tau)$. Thus the right hand side of \eqref{eq:motzeta} does not change if we replace $\loga{X}$ by $\loga{(X')}$. As a side remark, we observe that our assumption that $h_K$ is an isomorphism has only been used to ensure that $h_K^*\omega$ is a volume form on $\mcl{X}'_K$, so that the right hand side of \eqref{eq:motzeta} is still well-defined in $\mathcal{M}_{\mcl{X}_k}\ensuremath{\llbracket} T \ensuremath{\rrbracket}$ if we replace $\loga{X}$ by $\loga{(X')}$. Our proof actually shows that the right hand side of \eqref{eq:motzeta}, viewed as an element of $$\mathcal{M}_{\mcl{X}_k}\ensuremath{\llbracket} T \ensuremath{\rrbracket} \left[\frac{1}{\mbb{L}^i -1} \right]_{i>0},$$ is invariant under {\em any} proper subdivision $\psi:F'\to F$. {\em Step 3: compatibility with fs base change.} We will prove that the formula \eqref{eq:motzeta} is compatible with fs base change, in the following sense. Let $n$ be a positive integer and denote by $F(n)$ the fan of the smooth log scheme $\loga{X}\times^{\ensuremath{\mathrm{fs}}}_{S^\dagger}S(n)^{\dagger}$ over $S(n)^{\dagger}$. Let $t(n)$ be a uniformizer in $R(n)$. 
Then for every positive integer $i$, the coefficient of $T^i$ in the expression $$\sum_{\tau'\in F(n)_k} (\mbb{L}-1)^{r(\tau')-1} [\wtl{E}(\tau')^o] \sum_{u' \in M_{F(n),\tau'}^{\vee,\ensuremath{\mathrm{loc}}} } \mbb{L}^{-u'(\omega(n))} T^{u'(e_{t(n)})}$$ is equal to the coefficient of $T^{in}$ in the right hand side of \eqref{eq:motzeta}. To see this, we first observe that Lemma \ref{lemm:root} implies that, for every point $\tau$ of $F_k$, the $k$-scheme $\widetilde{E}(\tau)^o$ is isomorphic to the disjoint union of the $k$-schemes $\wtl{E}(\tau')^o$ where $\tau'$ runs over the points of $F(n)_k$ that are mapped to $\tau$ under the morphism of fans $F(n)\to F$. Moreover, $M_{F(n),\tau'}$ is canonically isomorphic to $$(M_{F,\tau}\oplus_{\mbb{N}}\frac{1}{n}\mbb{N})^{\ensuremath{\mathrm{sat}},\sharp},$$ which yields a bijective correspondence between the local morphisms $u':M_{F(n),\tau'}\to \mbb{N}$ such that $u'(e_{t(n)})=i$ and the local morphisms $u:M_{F,\tau}\to \mbb{N}$ such that $u(e_t)=in$. Now it only remains to notice that $u'(\omega(n))=u(\omega)$ because $$\omega_{\loga{X}\times^{\ensuremath{\mathrm{fs}}}_{S^\dagger}S(n)^{\dagger}/S(n)^{\dagger}}$$ is canonically isomorphic to the pullback of $\omega_{\loga{X}/S^\dagger}$, by the compatibility of relative log differentials with fs base change. {\em Step 4: proof of the formula.} By \cite[4.6.31]{GaRa}, we can find a proper subdivision $\psi\colon F'\to F$ such that, if we denote by $h\colon \loga{(X')}\to \loga{X}$ the associated morphism of log schemes, the scheme $\mcl{X}'$ is regular and the morphism $h_K\colon \mcl{X}'_K\to \mcl{X}_K$ is an isomorphism. Thus, by Step 2, we may assume right away that $\mcl{X}$ itself is regular. We write $\mathrm{Sm}(\mcl{X})$ for the $R$-smooth locus of $\mcl{X}$. Then the open immersion $\mathrm{Sm}(\mcl{X})\to \mcl{X}$ is a N\'eron smoothening, by \cite[3.1.2]{BLR} and the subsequent remark. 
By Step 3 and the definition of the motivic integral, we only need to consider the coefficient of $T$ in the right hand side of \eqref{eq:motzeta}, and prove the equality $$\sum_{\tau\in F_k} [\wtl{E}(\tau)^o] (\mbb{L}-1)^{r(\tau)-1} \sum_{u \in M_{F,\tau}^{\vee,\ensuremath{\mathrm{loc}}},\,u(e_t)=1 } \mbb{L}^{-u(\omega)} =\sum_{C\in \pi_0(\mathrm{Sm}(\mcl{X})_k)}[C]\mbb{L}^{-\ord_C\omega}$$ in $\mathcal{M}_{\mcl{X}_k}$. If $\tau$ is a point in $F_k$, then by Example \ref{exam:torsor}, there exists a local morphism $u:M_{F,\tau}\to \mbb{N}$ with $u(e_t)=1$ if and only if $\tau$ is contained in a unique component of $\mcl{X}_k$ and this component has multiplicity $1$ in $\mcl{X}_k$. This is equivalent to the condition that $\tau$ lies in $\mathrm{Sm}(\mcl{X})_k$. In that case, the root index of $\tau$ is equal to $1$, so that $\widetilde{E}(\tau)^o=E(\tau)^o$. By the explicit description of the fan $F$ in Section \ref{sec:fans}, the set $\mathrm{Sm}(\mcl{X})_k$ is a union of logarithmic strata $E(\tau)^o$. Thus for every connected component $C$ of $\mathrm{Sm}(\mcl{X})_k$, we can write $$[C]=\sum_{\tau\in F\cap C} [E(\tau)^o]$$ in $K_0(\mathrm{Var}_{\mcl{X}_k})$. Therefore, it only remains to prove the following property: let $\tau$ be a point in $F\cap \mathrm{Sm}(\mcl{X})_k$ and denote by $C(\tau)$ the unique connected component of $\mathrm{Sm}(\mcl{X})_k$ containing $\tau$. Then we have \begin{equation}\label{eq:order} (\mbb{L}-1)^{r(\tau)-1}\sum_{u \in M_{F,\tau}^{\vee,\ensuremath{\mathrm{loc}}},\,u(e_t)=1} \mbb{L}^{-u(\omega)}=\mbb{L}^{-\ord_{C(\tau)}\omega} \end{equation} in $\mathcal{M}_{k}$ (here we again use Lemma \ref{lemm:cone} to view the left hand side as an element of $\mathcal{M}_{k}$). First, we consider the case where the log structure at $\tau$ is vertical (this means that every irreducible component of the boundary divisor $D$ that passes through $\tau$ is contained in the special fiber $\mcl{X}_k$). 
Then $\tau$ is the generic point of $C(\tau)$, the monoid $M_{F,\tau}$ is isomorphic to $\mbb{N}$, and $e_t$ is its unique generator. Thus the only morphism $u$ contributing to the sum in the left hand side of \eqref{eq:order} is the identity morphism $u:\mbb{N}\to \mbb{N}$. Now the equality follows from the fact that locally around $\tau$, we have a canonical isomorphism $\omega_{\mcl{X}/S}\cong \omega_{\loga{X}/S^{\dagger}}$ because the morphism $\loga{X}\to S^{\dagger}$ is strict at $\tau$. Finally, we generalize the result to the case where the log structure at $\tau$ is not vertical. By Example \ref{exam:torsor}, there exists an isomorphism $M_{F,\tau}\to \mbb{N}\times \mbb{N}^h$ for some integer $h\geq 0$ such that the morphism $\mbb{N}\to M_{F,\tau}$ is given by $1\mapsto (1,0)$. In this case, restriction to $\mbb{N}^h$ defines a bijection between the set of local morphisms $u:M_{F,\tau}\to \mbb{N}$ mapping $e_t$ to $1$ and the set of local morphisms $u':\mbb{N}^h\to \mbb{N}$. Since $\omega$ is a volume form on $\mcl{X}_K$, we have $u(\omega)=\ord_{C(\tau)}\omega + u'(1,\ldots,1)$. Hence, \begin{eqnarray*} (\mbb{L}-1)^{r(\tau)-1}\sum_{u \in M_{F,\tau}^{\vee,\ensuremath{\mathrm{loc}}},\,u(e_t)=1} \mbb{L}^{-u(\omega)}&=& \mbb{L}^{-\ord_{C(\tau)}\omega}(\mbb{L}-1)^{h}\sum_{u' \in (\mbb{N}^h)^{\vee,\ensuremath{\mathrm{loc}}}} \mbb{L}^{-u'(1,\ldots,1)} \\ &=&\mbb{L}^{-\ord_{C(\tau)}\omega}(\mbb{L}-1)^{h}\sum_{i_1,\ldots,i_h>0}\mbb{L}^{-(i_1+\ldots +i_h)} \\&=&\mbb{L}^{-\ord_{C(\tau)}\omega} \end{eqnarray*} in $\mathcal{M}_k$. \end{proof} As a special case of Theorem \ref{thm:main}, we recover a generalization to arbitrary characteristic of the formula for strict normal crossings models from \cite[7.7]{NiSe}. Beware that in \cite{NiSe}, the motivic integrals were renormalized by multiplying them with $\mbb{L}^{-d}$, where $d$ is the relative dimension of $\mcl{X}$ over $R$. 
\begin{cor}\label{cor:snc} Let $\mcl{X}$ be a regular flat $R$-scheme of finite type such that $\mcl{X}_k$ is a strict normal crossings divisor, and write $$\mcl{X}_k=\sum_{i\in I}N_i E_i.$$ Denote by $\loga{X}$ the log scheme obtained by endowing $\mcl{X}$ with the divisorial log structure induced by $\mcl{X}_k$, and assume that $\loga{X}$ is smooth over $S^{\dagger}$ (this is automatic when $k$ has characteristic zero, by Proposition \ref{prop:regsm}). Let $\omega$ be a volume form on $\mcl{X}_K$. For every non-empty subset $J$ of $I$, we set $$E_J^o=(\bigcap_{j\in J}E_j)\setminus (\bigcup_{i\notin J}E_i)$$ and $N_J=\gcd\{N_j\,|\,j\in J\}$. We denote by $\widetilde{E}_J^o$ the inverse image of $E_J^o$ in the normalization of $\mcl{X}\times_R R(N_J)$. Let $\nu_i$ be the multiplicity of $E_i$ in the divisor $\mathrm{div}_{\loga{X}}(\omega)$, for every $i$ in $I$. Then the motivic zeta function of $(\mcl{X},\omega)$ is given by $$Z_{\mcl{X},\omega}(T)=\sum_{\emptyset \neq J\subset I}[\widetilde{E}_J^o](\mbb{L}-1)^{|J|-1}\prod_{j\in J}\frac{\mbb{L}^{-\nu_j}T^{N_j} }{1-\mbb{L}^{-\nu_j}T^{N_j}}$$ in $\mathcal{M}_{\mcl{X}_k}\ensuremath{\llbracket} T \ensuremath{\rrbracket} $. \end{cor} \begin{proof} By the explicit description of the logarithmic stratification in Section \ref{sec:fans}, the set $E_J^o$ is the union of the strata $E(\tau)^o$ where $\tau$ runs through the intersection $F(\loga{X})\cap E_J^o$. By Example \ref{exam:torsor}, the scheme $\widetilde{E}(\tau)^o$ is the reduced inverse image of $E(\tau)^o$ in the normalization of $\mcl{X}\times_R R(N_J)$. Thus by the scissor relations in the Grothendieck ring, we have $$[\widetilde{E}^o_J]=\sum_{\tau\in F(\loga{X})\cap E_J^o}[\widetilde{E}(\tau)^o].$$ Now the description of the characteristic monoids of $\loga{X}$ in Example \ref{exam:torsor} shows that the expression for $Z_{\mcl{X},\omega}(T)$ in the statement is a particular case of the formula \eqref{eq:motzeta} in Theorem \ref{thm:main}. 
\end{proof} \subsection{Poles of the motivic zeta function} Theorem \ref{thm:main} yields interesting information on the poles of the motivic zeta function. Since the localized Grothendieck ring of varieties is not a domain, the notion of a pole requires some care; see \cite{RoVe}. To circumvent this issue, we introduce the following definition. \begin{defi}\label{def:poles} Let $X$ be a Noetherian $k$-scheme and let $Z(T)$ be an element of $\mathcal{M}_X \ensuremath{\llbracket} T \ensuremath{\rrbracket}$. Let $\mathcal{P}$ be a set of rational numbers. We say that $\mathcal{P}$ is a {\em set of candidate poles} for $Z(T)$ if $Z(T)$ belongs to the subring $$\mathcal{M}_{X}\left[T,\frac{1}{1-\mbb{L}^a T^b} \right]_{(a,b)\in \mbb{Z}\times \mbb{Z}_{>0},\,a/b\in \mathcal{P}}$$ of $\mathcal{M}_{X}\ensuremath{\llbracket} T \ensuremath{\rrbracket}$. \end{defi} For any reasonable definition of a pole (in particular, the one in \cite{RoVe}), the set of rational poles is included in every set of candidate poles. \begin{prop}\label{prop:poles} Let $\loga{X}$ be a smooth fs log scheme of finite type over $S^\dagger$ such that $\mcl{X}_K$ is smooth over $K$. Let $\omega$ be a volume form on $\mcl{X}_K$. Write $\mcl{X}_k=\sum_{i\in I}N_i E_i$ and denote by $\nu_i$ the multiplicity of $E_i$ in $\mathrm{div}_{\loga{X}}(\omega)$, for every $i\in I$. Then $$\mathcal{P}(\mcl{X})=\{-\frac{\nu_i}{N_i}\,|\,i\in I\}$$ is a set of candidate poles for $Z_{\mcl{X},\omega}(T)$. \end{prop} \begin{proof} Set $F=F(\loga{X})$ and let $\tau$ be a point of $F\cap \mcl{X}_k$. 
In view of Theorem \ref{thm:main}, it suffices to show that $\mathcal{P}(\mcl{X})$ is a set of candidate poles for $$\sum_{u \in M_{F,\tau}^{\vee,\ensuremath{\mathrm{loc}}} } \mbb{L}^{-u(\omega)} T^{u(e_t)}.$$ As we have explained in Section \ref{sec:val}, the one-dimensional faces of $M^{\vee}_{F,\tau}$ correspond canonically to the irreducible components of the boundary divisor $D$ of $\loga{X}$ passing through $\tau$. If $u$ is a generator of a one-dimensional face and $E$ is the corresponding component of $D$, then $u(\omega)$ and $u(e_t)$ are the multiplicities of $E$ in $\mathrm{div}_{\loga{X}}(\omega)$ and $\mcl{X}_k$, respectively. In particular, if $E$ is not included in $\mcl{X}_k$, then $u(e_t)=0$ and $u(\omega)=1$, because $\omega$ is a volume form on $\mcl{X}_K$. Thus the result follows from Lemma \ref{lemm:cone}. \end{proof} Proposition \ref{prop:poles} tells us that, in order to find a set of candidate poles of $Z_{\mcl{X},\omega}(T)$, it is not necessary to take a log resolution of the pair $(\mcl{X},\mcl{X}_k)$, which would introduce many redundant candidate poles. This observation is particularly useful in the context of the monodromy conjecture for motivic zeta functions; see Section \ref{sec:DL}. \section{Generalizations} \subsection{Formal schemes}\label{sec-formal} The definition of the motivic zeta function (Definition \ref{def:motzeta}) can be generalized to the case where $\mcl{X}$ is a formal scheme satisfying a suitable finiteness condition (a so-called {\em special} formal scheme in the sense of \cite{berk}, which is also called a formal scheme formally of finite type in the literature). This generalization is carried out in \cite{Ni}, and it is not difficult to extend our formula from Theorem \ref{thm:main} to this setting. 
The main reason why we have chosen to work in the category of schemes in this article is the lack of suitable references for the basic properties of logarithmic formal schemes on which the proof of our formula relies. However, the proofs for log schemes carry over easily to the formal case, so that the reader who wants to apply Theorem \ref{thm:main} to formal schemes should have no difficulty in making the necessary verifications. \subsection{Nisnevich log structures}\label{sec:etale} We will now show how Theorem \ref{thm:main} can be adapted to log schemes in the Nisnevich topology. This allows us to compute motivic zeta functions on a larger class of models, whose components may have ``mild'' self-intersections in the special fiber. This generality is needed, for instance, for the applications to motivic zeta functions of Calabi-Yau varieties in \cite{HaNi-CY}. We will explain in Example \ref{ex:etale} what the advantage of the Nisnevich topology over the \'etale topology is when computing motivic zeta functions. Let $Y$ be a Noetherian scheme. A family of morphisms of schemes $$\{u_{\alpha}\colon Y_{\alpha}\to Y\,|\,\alpha\in A\}$$ is called a Nisnevich cover if each morphism is \'etale and, for every point $y$ in $Y$, there exist an element $\alpha$ in $A$ and a point $y_{\alpha}$ in $Y_{\alpha}$ such that $u_{\alpha}(y_{\alpha})=y$ and the induced morphism of residue fields $\kappa(y)\to \kappa(y_{\alpha})$ is an isomorphism. By taking for $y$ a generic point of $Y$ and applying Noetherian induction, the definition implies that there exists a finite partition of $Y$ into reduced subschemes $Z$ with the property that there exist an index $\alpha$ in $A$ and a subscheme $Z_{\alpha}$ of $Y_{\alpha}$ such that the restriction of $u_{\alpha}$ to $Z_{\alpha}$ is an isomorphism onto $Z$. These covering families generate a Grothendieck topology, which is called the Nisnevich topology.
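For instance, every Zariski open cover is a Nisnevich cover, whereas the \'etale cover $\Spec \mbb{C}\to \Spec \mbb{R}$ is not a Nisnevich cover, because the induced morphism of residue fields $\mbb{R}\to \mbb{C}$ is not an isomorphism (compare Example \ref{ex:etale}).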
Let $M_Y\to \mathcal{O}_Y$ be an fs log structure on $Y$ with respect to the \'etale topology (as in \cite{kato-log}). We say that the log structure $M_Y$ is Nisnevich if we can find charts for the log structure $M_Y$ locally in the Nisnevich topology on $Y$. Let $\loga{X}$ be a smooth fs Nisnevich log scheme of finite type over $S^\dagger$. Then the sheaf of monoids $M^{\sharp}_{\loga{X}}$ is constructible on the Nisnevich site of $\mcl{X}$, by the same proof as in \cite[10.2.21]{GaRa}. We choose a partition $\mathscr{P}$ of $\mcl{X}_k$ into irreducible locally closed subsets $U$ such that the restriction of $M^{\sharp}_{\loga{X}}$ to the Nisnevich site on $U$ is constant. We denote by $P$ the set consisting of the generic points of all the strata $U$ in $\mathscr{P}$. For every point $\tau$ in $P$ we will write $E(\tau)^o$ for the unique stratum in $\mathscr{P}$ containing $\tau$, and we denote by $r(\tau)$ the dimension of the monoid $M^{\sharp}_{\loga{X},\tau}$. We define the root index $\rho(\tau)$ and the scheme $\widetilde{E}(\tau)^o$ in exactly the same way as before, and we write $e_t$ for the image of $t$ in the monoid of global sections of $M^{\sharp}_{\loga{X}}$. If $\mcl{X}_K$ is smooth over $K$ and $\omega$ is a volume form on $\mcl{X}_K$, then we can also simply copy the definition of the value $u(\omega)$ for every local morphism $u:M^{\sharp}_{\loga{X},\tau}\to \mbb{N}$. \begin{thm}\label{thm:etale} Let $\loga{X}$ be a smooth fs Nisnevich log scheme of finite type over $S^\dagger$. We assume that the generic fiber $\mcl{X}_K$ is smooth over $K$. Let $\omega$ be a volume form on $\mcl{X}_K$.
Then the motivic zeta function of $(\mcl{X},\omega)$ is given by \begin{equation}\label{eq:motzeta-etale} Z_{\mcl{X},\omega}(T)=\sum_{\tau\in P} (\mbb{L}-1)^{r(\tau)-1} [\wtl{E}(\tau)^o] \sum_{u \in (M^{\sharp}_{\loga{X},\tau})^{\vee,\ensuremath{\mathrm{loc}}} } \mbb{L}^{-u(\omega)} T^{u(e_t)} \end{equation} in $\mathcal{M}_{\mcl{X}_k}\ensuremath{\llbracket} T \ensuremath{\rrbracket}$. \end{thm} \begin{proof} One can reduce to the Zariski case by observing that the motivic zeta function $Z_{\mcl{X},\omega}(T)$ is local with respect to the Nisnevich topology, in the following sense: if $h:\mcl{U}\to \mcl{X}$ is an \'etale morphism of finite type and $Y$ is a subscheme of $\mcl{X}_k$ such that $Y'=\mcl{U}\times_{\mcl{X}}Y\to Y$ is an isomorphism, then $Z_{\mcl{X},\omega}(T)$ and $Z_{\mcl{U},h^*_K\omega}(T)$ have the same image under the base change morphisms $\mathcal{M}_{\mcl{U}_k}\ensuremath{\llbracket} T \ensuremath{\rrbracket}\to \mathcal{M}_{Y}\ensuremath{\llbracket} T \ensuremath{\rrbracket}$ and $\mathcal{M}_{\mcl{X}_k}\ensuremath{\llbracket} T \ensuremath{\rrbracket}\to \mathcal{M}_{Y}\ensuremath{\llbracket} T \ensuremath{\rrbracket}$, respectively. This is an immediate consequence of the definition of the motivic integral. Moreover, if $\{Y_1,\ldots,Y_r\}$ is a finite partition of $\mcl{X}_k$ into subschemes and we denote by $Z_i(T)$ the image of $Z_{\mcl{X},\omega}(T)$ under the composition $$\mathcal{M}_{\mcl{X}_k}\ensuremath{\llbracket} T \ensuremath{\rrbracket}\to \mathcal{M}_{Y_i}\ensuremath{\llbracket} T \ensuremath{\rrbracket}\to \mathcal{M}_{\mcl{X}_k}\ensuremath{\llbracket} T \ensuremath{\rrbracket}$$ (base change followed by the forgetful morphism), then $$Z_{\mcl{X},\omega}(T)=Z_1(T)+\ldots+Z_r(T).$$ Thus we can compute $Z_{\mcl{X},\omega}(T)$ on a Nisnevich cover of $\mcl{X}$ where the log structure becomes Zariski in the sense of \cite[2.1.1]{niziol}.
Since the right hand side of \eqref{eq:motzeta-etale} satisfies the analogous localization property with respect to the Zariski topology, the result now follows from the Zariski case that was proven in Theorem \ref{thm:main}. \end{proof} \begin{cor}\label{cor:etpoles} Let $\loga{X}$ be a smooth fs log scheme of finite type over $S^\dagger$ (with respect to the Nisnevich topology) such that $\mcl{X}_K$ is smooth over $K$. Let $\omega$ be a volume form on $\mcl{X}_K$. Write $\mcl{X}_k=\sum_{i\in I}N_i E_i$ and denote by $\nu_i$ the multiplicity of $E_i$ in $\mathrm{div}_{\loga{X}}(\omega)$, for every $i\in I$. Then $$\mathcal{P}(\mcl{X})=\{-\frac{\nu_i}{N_i}\,|\,i\in I\}$$ is a set of candidate poles for $Z_{\mcl{X},\omega}(T)$. \end{cor} \begin{proof} The proof is almost identical to the proof of the Zariski case (Proposition \ref{prop:poles}), using the formula in Theorem \ref{thm:etale} instead of Theorem \ref{thm:main}. We no longer have a bijective correspondence between the generators $u$ of one-dimensional faces of $(M^{\sharp}_{\loga{X},\tau})^{\vee}$ and the irreducible components of the boundary $D$ containing $\tau$, in general, because one component may have multiple formal branches at $\tau$ and each of these will give rise to a one-dimensional face of $(M^{\sharp}_{\loga{X},\tau})^{\vee}$. However, it remains true that for every generator $u$ of a one-dimensional face of $(M^{\sharp}_{\loga{X},\tau})^{\vee}$, there exists an irreducible component $E$ of $D$ such that $u(e_t)$ equals the multiplicity of $E$ in $\mcl{X}_k$ and $u(\omega)$ equals the multiplicity of $E$ in $\mathrm{div}_{\loga{X}}(\omega)$. This is sufficient to prove the result. \end{proof} The following example shows that the formula in Theorem \ref{thm:etale} may fail if we replace the Nisnevich topology by the \'etale topology.
\begin{eg}\label{ex:etale} Let $R=\mbb{R}\ensuremath{\llbracket} t\ensuremath{\rrbracket}$ and set $$\mcl{X}=\Spec R[x,y]/(x^2+y^2-t).$$ We denote by $\loga{X}$ the \'etale log scheme we get by endowing $\mcl{X}$ with the divisorial log structure induced by $\mcl{X}_{\mbb{R}}$. Then $\loga{X}$ is smooth over $S^\dagger$, since the base change to $R'=\mbb{C}\ensuremath{\llbracket} t \ensuremath{\rrbracket}$ is isomorphic to $\Spec R'[u,v]/(uv-t)$. However, the log structure is not Nisnevich (that is, the \'etale sheaf $M_{\loga{X}}$ is not the pullback of a sheaf in the Nisnevich topology). The \'etale sheaf $M^{\sharp}_{\loga{X}}$ is locally constant on the complement of the origin $O$ of $\mcl{X}_\mbb{R}$, with geometric stalk $\mbb{N}$. The geometric stalk of $M^{\sharp}_{\loga{X}}$ at $O$ is isomorphic to $\mbb{N}^2$. The line bundle $\omega_{\loga{X}/S^{\dagger}}$ is trivial on $\mcl{X}$ with generator $$\omega=\frac{1}{t}(y\,dx-x\,dy).$$ Blowing up $\mcl{X}$ at $O$, we obtain a regular $R$-scheme whose special fiber has strict normal crossings. Using Corollary \ref{cor:snc}, one computes that the image of $Z_{\mcl{X},\omega}(T)$ under the forgetful morphism $\mathcal{M}_{\mcl{X}_\mbb{R}}\ensuremath{\llbracket} T \ensuremath{\rrbracket}\to \mathcal{M}_{\mbb{R}}\ensuremath{\llbracket} T \ensuremath{\rrbracket}$ is equal to $$([C]-[\Spec \mbb{C}])\frac{ T^2}{1- T^2 }+[\Spec \mbb{C}](\mbb{L}-1)\frac{T}{1-T} +[\Spec \mbb{C}](\mbb{L}-1)\frac{ T^3 }{(1-T)(1-T^2)},$$ where $C$ is a geometrically connected smooth projective rational curve over $\mbb{R}$ without rational point. The right hand side of \eqref{eq:motzeta-etale} equals $$[\Spec \mbb{C}](\mbb{L}-1)\frac{T}{1-T}+(\mbb{L}-1)\frac{T^2}{(1-T)^2},$$ which does not agree with our expression for $Z_{\mcl{X},\omega}(T)$.
Indeed, $$[C]-[\Spec \mbb{C}]\neq \mbb{L}-1$$ in $\mathcal{M}_{\mbb{R}}$, as can be seen by applying the \'etale realization morphism $$\mathcal{M}_{\mbb{R}}\to K_0(\mbb{Q}_\ell[\mbb{Z}/2\mbb{Z}]):[X]\mapsto \sum_{i\geq 0}(-1)^i [H^i_{\acute{e}t,c}(X\times_{\mbb{R}}\mbb{C},\mbb{Q}_\ell)]$$ for any prime $\ell$. Similar examples can be constructed when $k$ is algebraically closed, for instance by considering $$\mathcal{X}=\Spec R[x,y,z,z^{-1}]/(x^2+zy^2-t)$$ with $R=k\ensuremath{\llbracket} t\ensuremath{\rrbracket}$. \end{eg} \section{The monodromy action}\label{sec:monodromy} \subsection{Equivariant Grothendieck rings} Let $X$ be a Noetherian scheme and let $G$ be a finite group scheme over $\mbb{Z}$ that acts on $X$; unless explicitly stated otherwise, we will always assume that group schemes act on schemes from the left. Suppose that the action of $G$ on $X$ is {\em good}, which means that $X$ can be covered by $G$-stable affine open subschemes. Then the Grothendieck group $K^G_0(\mathrm{Var}_X)$ of $X$-schemes with $G$-action is the abelian group defined by the following presentation: \begin{itemize} \item {\em Generators:} Isomorphism classes $[Y]$ of $X$-schemes $Y$ of finite type endowed with a good action of $G$ such that the morphism $Y\to X$ is $G$-equivariant; here the isomorphism class is taken with respect to $G$-equivariant isomorphisms. \item{\em Relations:} \begin{enumerate} \item If $Y$ is an $X$-scheme of finite type with good $G$-action and $Z$ is a closed subscheme of $Y$ that is stable under the $G$-action, then $$[Y]=[Z]+[Y\setminus Z].$$ \item If $Y$ is an $X$-scheme of finite type with good $G$-action and $A\to Y$ is an affine bundle of rank $r$ endowed with an affine lift of the $G$-action on $Y$, then $$[A]=[\mbb{A}^r_{\mbb{Z}}\times_{\mbb{Z}} Y]$$ where $G$ acts trivially on $\mbb{A}^r_{\mbb{Z}}$.
\end{enumerate} \end{itemize} We define a ring structure on $K^G_0(\mathrm{Var}_X)$ by means of the multiplication rule $[Y]\cdot [Y']=[Y\times_X Y']$ where $G$ acts diagonally on $Y\times_X Y'$. We write $\mbb{L}$ for the class $[\mbb{A}^1_{\mbb{Z}}\times_{\mbb{Z}} X]$ and we set $\mathcal{M}^G_X=K^G_0(\mathrm{Var}_X)[\mbb{L}^{-1}]$. We will use this definition in the case where $G=\mu_n$, the group scheme of $n$-th roots of unity, for some positive integer $n$. If $m$ is a positive multiple of $n$, then the $(m/n)$-th power map $\mu_m\to \mu_n$ induces a ring morphism $\mathcal{M}^{\mu_n}_X\to \mathcal{M}^{\mu_m}_X$. We denote by $\widehat{\mu}$ the profinite group scheme of roots of unity and we set $$\mathcal{M}^{\widehat{\mu}}_X=\lim_{\stackrel{\longrightarrow}{n>0}}\mathcal{M}^{\mu_n}_{X}$$ where the positive integers $n$ are ordered by the divisibility relation. An action of $\widehat{\mu}$ on a Noetherian scheme is called {\em good} if it factors through a good action of $\mu_n$ for some $n>0$. We will need the following elementary result. \begin{prop}\label{prop:eqtor} Let $Y\to X$ be an equivariant morphism of Noetherian schemes with a good $\mu_n$-action, for some $n>0$. Assume that $Y$ is a $\mathbb{G}^r_{m,\mbb{Z}}$-torsor over $X$ and that the action $$\mathbb{G}^r_{m,\mbb{Z}}\times_{\mbb{Z}}Y\to Y$$ is $\mu_n$-equivariant, where $\mu_n$ acts trivially on $\mathbb{G}^r_{m,\mbb{Z}}$. Then we have $$[Y]=[X](\mbb{L}-1)^r$$ in $K^{\mu_n}_0(\mathrm{Var}_X)$. \end{prop} \begin{proof} The torsor $Y$ can be decomposed as a product $$\mathcal{L}_1^{\ast}\times_X \cdots \times_X \mathcal{L}^{\ast}_{r}$$ where $\mathcal{L}_1, \ldots,\mathcal{L}_{r}$ are $\mu_n$-equivariant line bundles on $X$ and $\mathcal{L}_i^{\ast}$ is obtained from $\mathcal{L}_i$ by removing the zero section. Now the relations in the equivariant Grothendieck ring immediately imply that $$[Y]=[X](\mbb{L}-1)^r$$ in $K^{\mu_n}_0(\mathrm{Var}_X)$.
\end{proof} \subsection{Monodromy action on the motivic zeta function} Let $k$ be a field of characteristic zero and set $R=k\ensuremath{\llbracket} t\ensuremath{\rrbracket}$ and $K=k(\negthinspace( t)\negthinspace)$. Let $\mcl{X}$ be an $R$-scheme of finite type with smooth generic fiber $\mcl{X}_K$, and let $\omega$ be a volume form on $\mcl{X}_K$. Then the definition of the motivic zeta function $Z_{\mcl{X},\omega}(T)$ (Definition \ref{def:motzeta}) can be refined in the following way. For every positive integer $n$, the finite group scheme $\mu_{n}$ of $n$-th roots of unity acts on $S(n)=\Spec R[u]/(u^n-t)$ from the right {\em via} multiplication on $u$: $$R[u]/(u^n-t)\to R[u]/(u^n-t)\otimes_{\mbb{Z}} \mbb{Z}[\zeta]/(\zeta^n-1):u\mapsto \zeta u.$$ We invert this action to obtain a left action on $S(n)$. This induces a left action of $\mu_n$ on $\mcl{X}(n)$. One can use this action to upgrade the motivic integral $$\int_{\mcl{X}(n)}|\omega(n)|$$ to an element in the equivariant Grothendieck ring $\mathcal{M}_{\mcl{X}_k}^{\mu_n}$ of $\mcl{X}_k$-varieties with $\mu_n$-action -- see \cite{hartmann}, where one can remove the assumption that $k$ contains all the roots of unity, since it is not needed in the arguments. This equivariant motivic integral can be computed by taking a quasi-projective $\mu_n$-equivariant N\'eron smoothening $\mcl{Y}\to \mcl{X}(n)$ over $R(n)$: then $$\int_{\mcl{X}(n)}|\omega(n)|=\sum_{i\in \mbb{Z}}[C(i)]\mbb{L}^{-i}\quad \in \mathcal{M}^{\mu_n}_{\mcl{X}_k}$$ where $C(i)$ is the union of the connected components $C$ of $\mcl{Y}_k$ such that $\ord_{C}\omega(n)=i$; note that $C(i)$ is stable under the action of $\mu_n$, because $\omega(n)$ is defined over $K$. A quasi-projective $\mu_n$-equivariant smoothening $\mcl{Y}\to \mcl{X}(n)$ can always be produced by means of the smoothening algorithm described in the proof of \cite[3.4.2]{BLR}; quasi-projectivity implies that the $\mu_n$-action on $\mcl{Y}$ is good.
Now we can view the motivic zeta function $$Z_{\mcl{X},\omega}(T)=\sum_{n>0}\left(\int_{\mcl{X}(n)}|\omega(n)|\right)T^n$$ as an object in $\mathcal{M}^{\widehat{\mu}}_{\mcl{X}_k}\ensuremath{\llbracket} T \ensuremath{\rrbracket}$. We will use the new notation $Z^{\widehat{\mu}}_{\mcl{X},\omega}(T)$ to indicate that we take the $\widehat{\mu}$-action into account. On the other hand, the schemes $\widetilde{E}(\tau)^o$ appearing in Theorems \ref{thm:main} and \ref{thm:etale} also carry an obvious action of the group scheme $\widehat{\mu}$, because $\mu_n$ acts on the fs base change $$\loga{X}\times_{S^{\dagger}}S(n)^{\dagger}$$ for every $n>0$ {\em via} the left action on $S(n)$. \begin{thm}\label{thm:equiv} If $k$ has characteristic zero, then the formulas in Theorems \ref{thm:main} and \ref{thm:etale} already hold for the equivariant motivic zeta function $Z^{\widehat{\mu}}_{\mcl{X},\omega}(T)$ in $\mathcal{M}^{\widehat{\mu}}_{\mcl{X}_k}\ensuremath{\llbracket} T \ensuremath{\rrbracket}$. \end{thm} \begin{proof} We can follow a similar strategy as in the proof of Theorem \ref{thm:main}. Let $k^a$ be an algebraic closure of $k$ and set $(S^a)^{\dagger}=\Spec k^a\ensuremath{\llbracket} t\ensuremath{\rrbracket}$ with its standard log structure. To compute the degree $n$ coefficient of $Z_{\mcl{X},\omega}(T)$ we can choose a regular subdivision of the fan $F^a(n)$ of $\mcl{X}(n)^{\dagger}\times_{S^\dagger}(S^a)^{\dagger}$ that is equivariant with respect to the actions of $\mu_n(k^a)$ and the Galois group $\mathrm{Gal}(k^a/k)$. This can be achieved by canonical equivariant resolution of singularities for toroidal embeddings (see for instance the remark on p.~33 of \cite{wang}). The induced log modification of $\mcl{X}(n)$ is a regular scheme and its smooth locus is a $\mu_n$-equivariant N\'eron smoothening of $\mcl{X}(n)$. Then a similar computation as in the last step of the proof of Theorem \ref{thm:main} yields the desired result.
The only step that requires further clarification is the equality \eqref{eq:torus}: we need to show that it remains valid in the equivariant Grothendieck ring $\mathcal{M}^{\widehat{\mu}}_{\mcl{X}_k}$. For every fixed point $\sigma$ of the $\mu_n(k^a)$-action on $F^a(n)$, the group $\mu_n(k^a)$ also acts trivially on the stalk of $M_{F^a(n)}$ at $\sigma$ by \cite[2.1.1]{nakayama}. In the notation of \eqref{eq:torus}, this means that the natural morphism $\widetilde{E}(\tau')^o\to \widetilde{E}(\tau)^o$ is a $\mu_n$-equivariant torsor with translation group $\mathbb{G}_{m,k}^{r(\tau)-r(\tau')}$, where $\mu_n$ acts trivially on $\mathbb{G}_{m,k}^{r(\tau)-r(\tau')}$. Now it follows from Proposition \ref{prop:eqtor} that $$[\wtl{E}(\tau')^o]=(\mbb{L}-1)^{r(\tau)-r(\tau')} [\wtl{E}(\tau)^o]$$ in $\mathcal{M}^{\widehat{\mu}}_{\mcl{X}_k}$. \end{proof} The definition of a set of candidate poles (Definition \ref{def:poles}) can be generalized to elements of $\mathcal{M}^{\widehat{\mu}}_{\mcl{X}_k}\ensuremath{\llbracket} T \ensuremath{\rrbracket}$ in the obvious way. Then we can deduce the following result from Theorem \ref{thm:equiv}. \begin{cor}\label{cor:monopoles} Assume that $k$ has characteristic zero. Let $\loga{X}$ be a smooth fs log scheme of finite type over $S^\dagger$ (with respect to the Zariski or Nisnevich topology) such that $\mcl{X}_K$ is smooth over $K$. Let $\omega$ be a volume form on $\mcl{X}_K$. Write $\mcl{X}_k=\sum_{i\in I}N_i E_i$ and denote by $\nu_i$ the multiplicity of $E_i$ in $\mathrm{div}_{\loga{X}}(\omega)$, for every $i\in I$. Then $$\mathcal{P}(\mcl{X})=\{-\frac{\nu_i}{N_i}\,|\,i\in I\}$$ is a set of candidate poles for $Z^{\widehat{\mu}}_{\mcl{X},\omega}(T)$. \end{cor} \begin{proof} The argument is entirely similar to the proofs of Proposition \ref{prop:poles} and Corollary \ref{cor:etpoles}. 
\end{proof} \section{Applications to Denef and Loeser's motivic zeta function}\label{sec:DL} \subsection{The motivic zeta function of Denef-Loeser} Let $k$ be a field of characteristic zero, let $X$ be an irreducible smooth $k$-variety and let $$f:X\to \mbb{A}^1_k=\Spec k[t]$$ be a dominant morphism of $k$-schemes. We set $X_0=f^{-1}(0)$. In \cite{DL-barc}, Denef and Loeser have defined the {\em motivic zeta function} $Z_f(T)$ of $f$, which is a power series in $\mathcal{M}^{\widehat{\mu}}_{X_0}\ensuremath{\llbracket} T \ensuremath{\rrbracket}$ that can be viewed as a motivic upgrade of Igusa's local zeta function for polynomials over $p$-adic fields. The famous {\em monodromy conjecture} predicts that the set of roots of the Bernstein polynomial of $f$ is a set of candidate poles for $Z_f(T)$ (in the sense of Definition \ref{def:poles}). This has been proven when $X$ has dimension $2$ and for some specific classes of singularities, but the conjecture is wide open in general. In fact, to the best of our knowledge, the proofs of the dimension $2$ case in the literature consider a slightly weaker conjecture, dealing only with the so-called ``na\"ive'' motivic zeta function, which can be viewed as the quotient of $Z_f(T)$ by the action of $\widehat{\mu}$ (up to multiplication by a factor $\mbb{L}-1$). We will explain below how the argument can be refined to prove the conjecture for $Z_f(T)$ (Corollary \ref{cor:twovar}). Set $R=k\ensuremath{\llbracket} t \ensuremath{\rrbracket}$, $K=k(\negthinspace( t)\negthinspace)$ and $\mcl{X}=X\times_{k[t]}R$. Let us recall how one can rewrite $Z_f(T)$ as the motivic zeta function of $(\mcl{X},\omega)$ for a suitable volume form $\omega$ on $\mcl{X}_K$. Since the definition of the motivic zeta function is local on $X$, we can assume that $X$ carries a volume form $\phi$ over $k$.
To this volume form, one can attach a so-called {\em Gelfand-Leray form} $\omega=\phi/df$, which is a volume form on $\mcl{X}_K$ \cite[9.5]{NiSe}. Theorem 9.10 in \cite{NiSe} states that $$Z_{\mcl{X},\omega}(T)=Z_f(\mbb{L} T)$$ in $\mathcal{M}_{X_0}\ensuremath{\llbracket} T \ensuremath{\rrbracket}$ (where we forget the $\widehat{\mu}$-action on the right hand side). This can also be deduced from Corollary \ref{cor:snc} and Denef and Loeser's formula for the motivic zeta function in terms of a log resolution for $f$ \cite[3.3.1]{DL-barc}. Using Theorem \ref{thm:equiv}, one can moreover show that this equality holds already for the equivariant motivic zeta function $Z^{\widehat{\mu}}_{\mcl{X},\omega}(T)$ in $\mathcal{M}^{\widehat{\mu}}_{X_0}\ensuremath{\llbracket} T \ensuremath{\rrbracket}$. More precisely, let $h:Y\to X$ be a log resolution for the pair $(X,X_0)$, and write $h^*X_0=\sum_{i\in I}N_i E_i$ and $K_{Y/X}=\sum_{i\in I}(\nu_i-1)E_i$. Set $\mcl{Y}=Y\times_{k[t]}R$ and endow it with the divisorial log structure induced by $\mcl{Y}_k$. Then the multiplicity of $E_i$ in $\mathrm{div}_{\loga{Y}}(\omega)$ equals $\nu_i-N_i$, so that the expression in Corollary \ref{cor:snc} is precisely Denef and Loeser's formula for $Z_f(\mbb{L} T)$. Hence, we obtain that $$Z^{\widehat{\mu}}_{\mcl{X},\omega}(T)=Z_f(\mbb{L} T)$$ in $\mathcal{M}^{\widehat{\mu}}_{X_0}\ensuremath{\llbracket} T \ensuremath{\rrbracket}$. Thus Theorem \ref{thm:equiv} and Corollary \ref{cor:monopoles} also apply to the motivic zeta function of Denef and Loeser. As an illustration, we will apply these results to two particular situations: the case where $X$ has dimension $2$, and the case where $f$ is a polynomial that is non-degenerate with respect to its Newton polyhedron. These cases have been studied extensively in the literature; we will explain how some of the main results can be viewed as special cases of Theorem \ref{thm:main}. 
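As a simple illustration of the Gelfand--Leray construction (with the standard normalization $df\wedge (\phi/df)=\phi$), take $X=\mbb{A}^2_k$, $f=xy$ and $\phi=dx\wedge dy$. Then $$\omega=\frac{\phi}{df}=\frac{dy}{y}=-\frac{dx}{x}$$ on $\mcl{X}_K$, since $df=y\,dx+x\,dy$ satisfies $df\wedge \frac{dy}{y}=dx\wedge dy=\phi$; the two expressions for $\omega$ agree on $\mcl{X}_K$ because the relation $xy=t$ implies $y\,dx+x\,dy=0$ there.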
\subsection{The surface case}\label{sec:curves} Assume that $X$ has dimension $2$, and let $h:Y\to X$ be a log resolution for the pair $(X,X_0)$. We write $h^*X_0=\sum_{i\in I}N_i E_i$ and $K_{Y/X}=\sum_{i\in I}(\nu_i-1)E_i$. The numbers $N_i$ and $\nu_i$ are called the {\em numerical data} associated with the component $E_i$. It follows from Denef and Loeser's formula \cite[3.3.1]{DL-barc} that $$\mathcal{P}=\{-\frac{\nu_i}{N_i}\,|\,i\in I\}$$ is a set of candidate poles for $Z_f(T)$. However, it is known that many of these candidate poles are not actual poles of $Z_f(T)$. In \cite{veys}, Veys has provided a conceptual explanation for this phenomenon by giving a formula for the topological zeta function (a coarser predecessor of the motivic zeta function) in terms of the relative log canonical model of $(X,X_0)$ over $X$. We can now upgrade this result to the motivic zeta function and understand it as a special case of Theorem \ref{thm:main}. \begin{thm}\label{thm:twovar} For every $i\in I$ such that $E_i$ is exceptional with respect to $h$, we denote by $k_i$ the field $H^0(E_i,\mathcal{O}_{E_i})$. We define $I_0$ to be the subset of $I$ consisting of the indices $i$ such that $E_i$ is an exceptional component of $h^*X_0$ satisfying $(E_i)^2\geq -2[k_i:k]$. Then $$\mathcal{P}'=\{-\frac{\nu_i}{N_i}\,|\,i\in I\setminus I_0\}$$ is still a set of candidate poles for $Z_f(T)\in \mathcal{M}^{\widehat{\mu}}_{X_0}\ensuremath{\llbracket} T \ensuremath{\rrbracket}$. \end{thm} \begin{proof} Set $\mcl{Y}=Y\times_{k[t]}R$. Contracting all the components $E_i$ with $i\in I_0$ yields a new model $\mcl{Z}$ of $\mcl{X}_K$ that is proper over $\mcl{X}$, namely, the log canonical model of $(\mcl{X},\mcl{X}_k)$ over $\mcl{X}$. We endow $\mcl{Z}$ with the divisorial log structure induced by $\mcl{Z}_k$.
It follows from \cite[\S3]{quotient} that the resulting log scheme $\loga{Z}$ is regular with respect to the \'etale topology, and since $k$ has characteristic zero, this implies that $\loga{Z}$ is smooth over $S^\dagger$ with respect to the \'etale topology (Proposition \ref{prop:regsm}). The log structure on $\loga{Z}$ fails to be Zariski precisely at the self-intersection points of components in the strict transform of $X_0$. If $k$ is algebraically closed, then the log structure is Nisnevich at these points (because they have algebraically closed residue field) and our result follows immediately from Corollary \ref{cor:monopoles}. For general $k$, we can make the log structure Zariski by blowing up $\loga{Z}$ at each of the self-intersection points (see the proof of \cite[5.4]{niziol}). These blow-ups are log blow-ups, so that the resulting morphism of log schemes $\loga{W}\to\loga{Z}$ is \'etale. Therefore, blowing up at a self-intersection point of a component $E_i$ yields an exceptional divisor with numerical data $N=2N_i$ and $\nu=2\nu_i$. Since $-\frac{2\nu_i}{2N_i}=-\frac{\nu_i}{N_i}$, this implies that $\mathcal{P}(\loga{W})=\mathcal{P}'$, so that the result follows from Corollary \ref{cor:monopoles} (applied to the smooth Zariski log scheme $\loga{W}$). \end{proof} \begin{cor}\label{cor:twovar} There exists a set of candidate poles for $Z_f(T)$ that consists entirely of roots of the Bernstein polynomial of $f$. Thus the monodromy conjecture for $Z_f(T)\in \mathcal{M}_{\mcl{X}_k}^{\widehat{\mu}}\ensuremath{\llbracket} T\ensuremath{\rrbracket} $ holds in dimension $2$. \end{cor} \begin{proof} Loeser has proven in \cite[III.3.1]{loeser} that every element of $\mathcal{P}'$ is a root of the Bernstein polynomial of $f$. \end{proof} Analogous results have previously appeared in the literature for the $p$-adic zeta function \cite{loeser, strauss}, the topological zeta function \cite{veys} and the so-called ``na{\"\i}ve'' motivic zeta function \cite{rodrigues}.
\subsection{Non-degenerate polynomials}\label{sec:nondeg} As a second illustration, we will use our results to recover the formula for the motivic zeta function of a polynomial that is non-degenerate with respect to its Newton polyhedron \cite[\S2.1]{guibert} (see also \cite[\S10]{bories} for a calculation of the local ``na{\"i}ve'' motivic zeta function at the origin). In fact, our computations show that the formula in \cite[2.1.3]{guibert} has some flaws; we will explain in Remark \ref{rem:guibert} what needs to be corrected. Let $$f=\sum_{m\in \mbb{N}^n}a_{m}x^{m}$$ be a non-constant polynomial in $k[x_1,\ldots,x_n]$, where we use the multi-index notation for $x=(x_1,\ldots,x_n)$. We assume that $f(0,\ldots,0)=0$. The {\em support} $S(f)$ of $f$ is the set of $m\in \mbb{N}^n$ such that $a_{m}$ is non-zero, and the {\em Newton polyhedron} $\Gamma(f)$ of $f$ is the convex hull of $$\bigcup_{m\in S(f)}(m+\mbb{R}_{\geq 0}^n).$$ For every face $\gamma$ of $\Gamma(f)$, we set $$f_{\gamma}=\sum_{m \in \gamma\cap \mbb{N}^n}a_{m}x^{m}.$$ Then $f$ is called {\em non-degenerate} with respect to its Newton polyhedron if, for every face $\gamma$ of $\Gamma(f)$, the polynomial $f_{\gamma}$ has no critical points in the torus $\mathbb{G}_{m,k}^n$ (this includes the case $\gamma=\Gamma(f)$). This condition was introduced by Kushnirenko in \cite{kouchnirenko}. It guarantees that many interesting invariants of the singularities of $f$ can be computed from the Newton polyhedron in a combinatorial way. In particular, every regular subdivision of the dual fan of $\Gamma(f)$ defines a toric modification of $\mbb{A}^n_k$ that is a log resolution for the pair $(\mbb{A}^n_k,\mathrm{div}(f))$. Moreover, if we fix the support $S(f)$, then $f$ is Newton non-degenerate for a generic choice of coefficients $a_{m}$.
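For instance, for $f=x^2+y^3$ the Newton polyhedron $\Gamma(f)$ is the convex hull of $((2,0)+\mbb{R}_{\geq 0}^2)\cup ((0,3)+\mbb{R}_{\geq 0}^2)$, and its unique compact edge is the segment joining $(2,0)$ and $(0,3)$. For every face $\gamma$ of $\Gamma(f)$, the polynomial $f_{\gamma}$ is one of $x^2$, $y^3$ or $x^2+y^3$, and none of these has a critical point in $\mathbb{G}^2_{m,k}$; thus $f$ is non-degenerate with respect to its Newton polyhedron.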
We denote by $\Sigma$ the dual fan of $\Gamma(f)$ and by $h:Y\to \mbb{A}^n_k$ the toric modification associated with the subdivision $\Sigma$ of $(\mbb{R}_{\geq 0})^n$. We view $Y$ as a $k[t]$-scheme {\em via} the morphism $f\circ h:Y\to \Spec k[t]$ and we set $\mcl{Y}=Y\times_{k[t]}R$. We denote by $H$ the pullback to $\mcl{Y}$ of the union of the coordinate hyperplanes in $\mbb{A}^n_k$. We endow $\mcl{Y}$ with the divisorial Zariski log structure induced by the divisor $\mcl{Y}_k+H$. The result is a Zariski log scheme $\loga{Y}$ over $S^{\dagger}$. \begin{prop}\label{prop:nondeg-smooth} The log scheme $\loga{Y}$ is fine and saturated, and smooth over $S^{\dagger}$. \end{prop} \begin{proof} By Proposition \ref{prop:regsm}, we only need to show that $\loga{Y}$ is regular. Since $f$ has no critical points on $\mathbb{G}^n_{m,k}$ by the non-degeneracy assumption, we only need to check regularity at the points $y$ on $H\cap \mcl{Y}_k$. Let $\sigma$ be the cone of $\Sigma$ such that $y$ lies on the associated torus orbit $O(\sigma)$ in $Y$, and denote by $\gamma$ the face of $\Gamma(f)$ corresponding to $\sigma$. We denote by $M$ the characteristic monoid $M^{\sharp}_{\loga{Y},y}$ of $\loga{Y}$ at $y$. If $y$ does not lie in the closure of $\mathrm{div}(f\circ h)\cap \mathbb{G}_{m,k}^n$, then locally around $y$, the log structure on $\loga{Y}$ is the pullback of the natural log structure on the toric variety $Y$, which was described in Example \ref{exam:toric}. Thus $\loga{Y}$ is fine and saturated, because the divisorial log structure on $Y$ induced by the toric boundary has these properties. It also follows that $M=(\sigma^{\vee}\cap \mbb{Z}^n)^{\sharp}$ so that $M^{\mathrm{gp}}$ has rank $r=\mathrm{dim}(\sigma)$. Locally at $y$, the maximal ideal of $M_{\loga{Y},y}$ defines the toric orbit $O(\sigma)$, which is regular of codimension $r$ in $\mcl{Y}$. Thus $\loga{Y}$ is regular at $y$.
Now suppose that $y$ lies in the schematic closure of $\mathrm{div}(f\circ h)\cap \mathbb{G}_{m,k}^n$. If $v$ is any lattice point on $\gamma+(\sigma^{\vee}\cap \mbb{Z}^n)^{\times}$, then locally at $y$ this schematic closure is the zero locus of $f'=(f_{\gamma}/x^{v})+g$ where $g$ is an element of $\mathcal{O}_{\mcl{Y},y}$ that vanishes along $O(\sigma)$. The monoid $M$ is a submonoid of the monoid $\mathcal{O}_{\mcl{Y},y}/\mathcal{O}^{\times}_{\mcl{Y},y}$ of Cartier divisors on $\Spec \mathcal{O}_{\mcl{Y},y}$, generated by $(\sigma^{\vee}\cap \mbb{Z}^n)^{\sharp}$ and $\mathrm{div}(f')$. The morphism of monoids $$ (\sigma^{\vee}\cap \mbb{Z}^n)^{\sharp}\oplus \mbb{N}\to M $$ that acts as the identity on the first summand and sends $(0,1)$ to $\mathrm{div}(f')$ is an isomorphism. In particular, $M^{\mathrm{gp}}$ has rank $r=\mathrm{dim}(\sigma)+1$. Locally at $y$, the log scheme $\loga{Y}$ has a chart of the form $\loga{Y}\to \Spec^{\dagger}\mbb{Z}[M]$ where the morphism of monoids $$M=(\sigma^{\vee}\cap \mbb{Z}^n)^{\sharp}\oplus \mbb{N}\to \mathcal{O}_{\mcl{Y},y}$$ maps $(m,1)$ to $\chi^m f'$. Here we denote by $\chi^m$ the pullback to $\mcl{Y}$ of the character of $Y$ associated with $m$. Since $M$ is fine and saturated, it follows that $\loga{Y}$ is fine and saturated, as well. Locally at $y$, the closed subscheme $Z$ of $\mcl{Y}$ defined by the maximal ideal of $M_{\loga{Y},y}$ coincides with the schematic intersection of $O(\sigma)$ with $\mathrm{div}(f')$. Since $O(\sigma)$ is canonically isomorphic to $\Spec k[(\sigma^{\vee}\cap \mbb{Z}^n)^{\times}]$ and $Z$ is the closed subscheme of $O(\sigma)$ defined by $f_{\gamma}/x^{v}$, the assumption that $f_{\gamma}$ has no critical points in $\mathbb{G}^n_{m,k}$ now implies that $Z$ is regular at $y$ of codimension $r$ in $\mcl{Y}$. Hence, $\loga{Y}$ is regular at $y$.
\end{proof} In order to write down an explicit expression for the motivic zeta function $Z_f(T)$, we need to introduce some further notation. We set $X_0=f^{-1}(0)$. We define piecewise affine functions $N$ and $\nu$ on $\Sigma$ by setting \begin{eqnarray*} N(u)&=&\min\{u(m) \,|\,m \in \Gamma(f)\} \\ \nu(u)&=&u_1+\ldots+u_n \end{eqnarray*} for every $u$ in $\mbb{R}^n$. For every face $\gamma$ of $\Gamma(f)$, we denote by $\sigma_{\gamma}$ the associated cone in the dual fan $\Sigma$ and by $\mathring{\sigma}_\gamma$ its relative interior. We write $O(\sigma_\gamma)$ for the torus orbit of $Y$ corresponding to $\sigma_\gamma$. We denote by $M_{\gamma}$ the fine and saturated monoid $\sigma_{\gamma}^{\vee}\cap \mbb{Z}^n$. Since $\sigma_{\gamma}$ is contained in $(\mbb{R}_{\geq 0})^n$, the dual cone $\sigma_\gamma^{\vee}$ contains $(\mbb{R}_{\geq 0})^n$ and, in particular, the face $\gamma$. If $v$ is any point on the relative interior of $\gamma$, then the cone $\sigma_\gamma^{\vee}$ is generated by the vectors of the form $v'-v$ with $v'$ in $\Gamma(f)$. The group of invertible elements in $\sigma^{\vee}_{\gamma}$ coincides with the subspace of $\mbb{R}^n$ generated by the translated face $\gamma-v$, and $M_{\gamma}^{\times}=(\sigma^{\vee}_{\gamma})^{\times}\cap \mbb{Z}^n$. Thus the image of $\gamma\cap \mbb{Z}^n$ under the projection $M_\gamma\to M_\gamma^{\sharp}$ consists of a unique point, which we denote by $v_\gamma$. We write $X_{\gamma}(0)$ for the closed subscheme of $\mathbb{G}_{m,k}^n$ defined by $f_{\gamma}$, endowed with the trivial $\widehat{\mu}$-action.
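For concreteness, here is a small worked illustration of the functions $N$ and $\nu$ (this example is not part of the original text).

```latex
For instance, let $f=x_1^2+x_2^3$, so that
$$\Gamma(f)=\mathrm{conv}\{(2,0),(0,3)\}+(\mbb{R}_{\geq 0})^2.$$
The dual fan $\Sigma$ has a unique ray not contained in a coordinate
axis; its primitive generator is $u=(3,2)$, and
$$N(u)=\min\{2u_1,3u_2\}=\min\{6,6\}=6,\qquad \nu(u)=u_1+u_2=5.$$
For the coordinate rays $u=(1,0)$ and $u=(0,1)$ one has $N(u)=0$,
since $(0,3)$, respectively $(2,0)$, lies on $\Gamma(f)$.
```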
We view $X_{\gamma}(0)$ as a scheme over $X_0$ {\em via} the composition $$X_\gamma(0)\subset \Spec k[\mbb{Z}^n]\to \Spec k[M_\gamma^{\times}]\cong O(\sigma_\gamma)\subset Y\to \mbb{A}^n_k.$$ Note that the morphism $X_\gamma(0)\to \mbb{A}^n_k$ factors through $X_0$, because the image of $X_\gamma(0)$ in $\Spec k[M_\gamma^{\times}]$ is precisely the intersection of $O(\sigma_\gamma)$ with the strict transform of $\mathrm{div}(f)$, by the proof of Proposition \ref{prop:nondeg-smooth}. We also define an $X_0$-scheme $X_{\gamma}(1)$ with a good $\widehat{\mu}$-action, as follows. The element $v_\gamma$ of $M_\gamma^{\sharp}$ equals $0$ if and only if $O(\sigma_\gamma)$ is not contained in the zero locus of $f\circ h$. In that case, we set $X_\gamma(1)=\emptyset$. Otherwise, we can write $v_\gamma=\rho v_\gamma^{\mathrm{prim}}$ for a unique positive integer $\rho$ and a unique primitive vector $v_\gamma^{\mathrm{prim}}$ in $M_\gamma^{\sharp}$. We choose an element $w$ in $\mbb{Z}^n$ such that $\langle m,w\rangle=\rho$ for every point $m$ on $\gamma$. We define $X_\gamma(1)$ to be the closed subscheme of $\mathbb{G}_{m,k}^n$ defined by the equation $f_\gamma=1$. We consider the left $\mu_{\rho}$-action on $\mathbb{G}^n_{m,k}$ with weight vector $w$: $$\zeta\ast (x_1,\ldots,x_n)=(\zeta^{w_1}x_1,\ldots,\zeta^{w_n}x_n).$$ The subscheme $X_{\gamma}(1)$ is stable under this action, and thus inherits a left $\mu_{\rho}$-action from $\mathbb{G}^n_{m,k}$. We again view $X_\gamma(1)$ as an $X_0$-scheme {\em via} the composition $$X_\gamma(1)\subset \Spec k[\mbb{Z}^n]\to \Spec k[M_\gamma^{\times}]\cong O(\sigma_\gamma)\to X_0.$$ The resulting morphism $X_\gamma(1)\to X_0$ is $\mu_{\rho}$-equivariant with respect to the trivial action on $X_0$.
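As a brief sanity check (an illustrative example, not part of the original text), the weight-vector construction can be unwound for the cusp, where it reproduces the familiar monodromy action on the Milnor fiber.

```latex
Take $f=x_1^2+x_2^3$ and let $\gamma$ be the compact facet
$\mathrm{conv}\{(2,0),(0,3)\}$ of $\Gamma(f)$, so that $\sigma_\gamma$
is the ray spanned by $(3,2)$ and $f_\gamma=f$. Under the isomorphism
$M_\gamma^{\sharp}\cong \mbb{N}$ induced by pairing with $(3,2)$, the
point $v_\gamma$ corresponds to $6$, so $\rho=6$, and $w=(3,2)$
satisfies $\langle m,w\rangle=6$ for every $m$ on $\gamma$. Thus
$$X_\gamma(1)=\{x_1^2+x_2^3=1\}\subset \mathbb{G}^2_{m,k},\qquad
\zeta\ast (x_1,x_2)=(\zeta^{3}x_1,\zeta^{2}x_2),\quad \zeta\in\mu_6,$$
in agreement with the order $6$ of the monodromy of the cusp.
```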
In formula \eqref{eq:nondeg} in the proof of Theorem \ref{thm:nondeg}, we will give an alternative expression for the class of $X_\gamma(1)$ in $K^{\widehat{\mu}}_0(\mathrm{Var}_{X_0})$ that implies that this class does not depend on the choice of the weight vector $w$. \begin{thm}\label{thm:nondeg} Let $f$ be a non-constant polynomial in $k[x_1,\ldots,x_n]$ such that $f$ vanishes at the origin and $f$ is non-degenerate with respect to its Newton polyhedron $\Gamma(f)$. Set $X_0=f^{-1}(0)$. Then the motivic zeta function $Z_f(T)$ can be written as $$\sum_{\gamma}\left(\left([X_{\gamma}(0)]\frac{\mbb{L}^{-1}T}{1-\mbb{L}^{-1}T}+[X_{\gamma}(1)]\right)\sum_{u\in \mathring{\sigma}_{\gamma}\cap \mbb{N}^n}\mbb{L}^{-\nu(u)}T^{N(u)}\right)$$ in $\mathcal{M}^{\widehat{\mu}}_{X_0}\ensuremath{\llbracket} T \ensuremath{\rrbracket}$, where the sum is taken over all the faces $\gamma$ of $\Gamma(f)$. \end{thm} \begin{proof} We will explain how this can be interpreted as a special case of Theorem \ref{thm:main}. We set $$\omega=\frac{dx_1\wedge \ldots \wedge dx_n}{df},$$ viewed as a volume form on $\mcl{Y}_K$. We denote by $D$ the logarithmic boundary divisor on $\loga{Y}$. By the definition of a non-degenerate polynomial, the divisor $\mathrm{div}(f\circ h)\cap \mathbb{G}^n_{m,k}$ in the dense torus of $Y$ is smooth. We denote by $D'$ the restriction to $\mcl{Y}$ of the schematic closure of this divisor in $Y$. Then $D$ is the sum of $D'$ and the restriction to $\mcl{Y}$ of the toric boundary on $Y$. By the description of the logarithmic strata on a regular log scheme in Section \ref{sec:fans}, the logarithmic strata of $\loga{Y}$ are precisely the sets $O(\sigma)\setminus D'$ and the connected components of $O(\sigma)\cap D'$, where $\sigma$ ranges through the cones of $\Sigma$. Like on any regular log scheme, the points of the fan $F(\loga{Y})$ are the generic points of these logarithmic strata.
Let $\gamma$ be a face of $\Gamma(f)$, and denote by $\sigma_{\gamma}$ the corresponding cone in the dual fan $\Sigma$. The points in $F(\loga{Y})_k$ that lie on $O(\sigma_{\gamma})$ are the generic points of $O(\sigma_{\gamma})\cap D'$ and, provided that $v_\gamma\neq 0$, also the generic point of $O(\sigma_{\gamma})$. Let $\tau$ be a point of $F(\loga{Y})_k\cap O(\sigma_{\gamma})$ and set $M=M^{\sharp}_{\loga{Y},\tau}$. As usual, we write $e_t$ for the class of $t=f\circ h$ in $M$. If $\tau$ lies on $D'$ then we have $M\cong M_\gamma^{\sharp}\oplus \mbb{N}$ and this yields an isomorphism $$M^{\vee,\ensuremath{\mathrm{loc}}}\cong (\mathring{\sigma}_{\gamma}\cap \mbb{N}^n)\oplus (\mbb{N}\setminus \{0\}).$$ The element $e_t$ of $M$ corresponds to $(v_\gamma,1)$, because $D'$ is reduced. It follows that the root index of $\mbb{N}\to M:1\mapsto e_t$ equals $1$, so that $\widetilde{E}(\tau)^o=E(\tau)^o$. This is the connected component of $O(\sigma_{\gamma})\cap D'$ that contains $\tau$. By the scissor relations in the Grothendieck ring, we have $$[O(\sigma_\gamma)\cap D']=\sum_{\tau \in F(\loga{Y})\cap O(\sigma_\gamma)\cap D'}[E(\tau)^o]$$ in $K_0(\mathrm{Var}_{X_0})$. We have seen in the proof of Proposition \ref{prop:nondeg-smooth} that $X_\gamma(0)$ is isomorphic to $(O(\sigma_{\gamma})\cap D')\times_k \mathbb{G}^{\mathrm{dim}(\sigma_\gamma)}_{m,k}$, so that $$[X_\gamma(0)]=[O(\sigma_\gamma)\cap D'](\mbb{L}-1)^{\dim(\sigma_\gamma)}$$ in $K_0(\mathrm{Var}_{X_0})$. Moreover, for every element $u'=(u,n)$ in $M^{\vee,\ensuremath{\mathrm{loc}}}$ we have $u'(e_t)=u(v_\gamma)+n=N(u)+n$ and $u'(\omega)=\nu(u)-N(u)$.
Thus the contribution of $F(\loga{Y})_k\cap O(\sigma_{\gamma})\cap D'$ to the formula for $Z_f(T)=Z^{\widehat{\mu}}_{\mcl{Y},\omega}(\mbb{L}^{-1} T)$ in Theorem \ref{thm:main} is equal to $$[X_{\gamma}(0)]\frac{\mbb{L}^{-1}T}{1-\mbb{L}^{-1}T}\sum_{u\in \mathring{\sigma}_{\gamma}\cap \mbb{N}^n}\mbb{L}^{-\nu(u)}T^{N(u)}.$$ If $\tau$ does not lie on $D'$, then $E(\tau)^o=O(\sigma_{\gamma})\setminus D'$, and $M$ is canonically isomorphic to $M_\gamma^{\sharp}$ so that we can identify $M^{\vee,\ensuremath{\mathrm{loc}}}$ with $\mathring{\sigma}_{\gamma}\cap \mbb{N}^n$. The element $e_t$ of $M$ is equal to $v_\gamma$, so that $u(e_t)=u(v_\gamma)=N(u)$ for every $u$ in $\sigma_{\gamma}\cap \mbb{N}^n$. We also have $u(\omega)=\nu(u)-N(u)$. Thus, in order to match the formula in the statement of the theorem with the one in Theorem \ref{thm:main}, it suffices to show that \begin{equation}\label{eq:nondeg} [X_\gamma(1)]=[\widetilde{E}(\tau)^o](\mbb{L}-1)^{n-\mathrm{dim}(\gamma)-1} \end{equation} in $K^{\widehat{\mu}}_0(\mathrm{Var}_{X_0})$. We write $v_\gamma=\rho v_\gamma^{\mathrm{prim}}$ for a positive integer $\rho$ and a primitive vector $v_\gamma^{\mathrm{prim}}$ in $M_\gamma^{\sharp}$. Then $\rho$ is the root index of the morphism of monoids $\mbb{N}\to M_\gamma^{\sharp}$ that maps $1$ to $e_t$. The torus orbit $O(\sigma_{\gamma})$ is canonically isomorphic to $\Spec k[M_\gamma^{\times}]$, and, locally at every point of $O(\sigma_\gamma)$, we can write $f\circ h$ as $x^{v}((f_{\gamma}/x^{v})+g)$ where $v$ is any lattice point on $\gamma+(\sigma^{\vee}_\gamma)^{\times}$ and $g$ is a regular function on $\mcl{Y}$ that vanishes along $O(\sigma_\gamma)$. The intersection $O(\sigma_{\gamma})\cap D'$ is the zero locus of $f_{\gamma}/x^{v}$. We can choose $v$ in such a way that it is divisible by $\rho$, because $v_\gamma$ is divisible by $\rho$.
Now it easily follows from the definition that $\widetilde{E}(\tau)^o$ is the cover of $E(\tau)^o$ defined by taking a $\rho$-th root of the unit $f_{\gamma}/x^{v}$: $$\widetilde{E}(\tau)^o\cong \Spec k[M_\gamma^{\times},T,T^{-1}]/((f_\gamma/x^{v})-T^{\rho}).$$ The group scheme $\mu_\rho$ acts on $\widetilde{E}(\tau)^o$ from the left by the inverse of multiplication on $T$, that is, $T \ast \zeta=\zeta^{-1}T$. We define a $\mu_\rho$-equivariant morphism of $X_0$-schemes $X_\gamma(1)\to \widetilde{E}(\tau)^o$ by means of the morphism of $k$-algebras $$ k[M_\gamma^{\times},T,T^{-1}]/((f_\gamma/x^{v})-T^{\rho})\to k[\mbb{Z}^n]/(f_\gamma-1)$$ that maps $T$ to $x^{-v/\rho}$ and that maps $x^m$ to itself, for every $m\in M_\gamma^{\times}$. The morphism $X_\gamma(1)\to \widetilde{E}(\tau)^o$ is an equivariant torsor with translation group $\Spec \mbb{Z}[V^{\bot}_{\gamma}\cap \mbb{Z}^n]$, where $V_{\gamma}$ is the sub-vector space of $\mbb{R}^n$ generated by $\gamma$, and $V^{\bot}_{\gamma}$ denotes the orthogonal subspace. The equality \eqref{eq:nondeg} now follows from Proposition \ref{prop:eqtor}. \end{proof} \begin{rmk}\label{rem:guibert} The calculation of $Z_f(T)$ in \cite[\S2.1]{guibert} contains the following flaws: the $\widehat{\mu}$-action on the schemes $X_\gamma(1)$ is ill-defined; the term involving $[X_\gamma(1)]$ should be omitted if $v_\gamma=0$; the factor $(\mbb{L}-1)$ after $[X_\gamma(0)]$ should be omitted; the $X_0$-scheme structure on $X_\gamma(0)$ and $X_\gamma(1)$ is not specified. \end{rmk} \begin{cor}\label{cor:nondeg} Let $f$ be a non-constant polynomial in $k[x_1,\ldots,x_n]$ such that $f$ vanishes at the origin and $f$ is non-degenerate with respect to its Newton polyhedron $\Gamma(f)$. Denote by $R(f)$ the set of primitive generators of the rays of the dual fan of $\Gamma(f)$ (that is, the inward-pointing primitive normal vectors on the facets of $\Gamma(f)$).
Then the set $$\{-1\}\cup \{-\frac{\nu(u)}{N(u)}\,|\,u\in R(f), \,N(u)\neq 0 \}$$ is a set of candidate poles of $Z_f(T)$. \end{cor} \begin{proof} This follows from Theorem \ref{thm:nondeg} in the same way as in the proof of Proposition \ref{prop:poles}. \end{proof} Note that this set of candidate poles is substantially smaller than the set of candidates we would get from a toric log resolution of $(\mbb{A}^n_k,X_0)$: the latter set would include the candidate poles associated with all the rays in a regular subdivision of the dual fan of $\Gamma(f)$. An analogous result for Igusa's $p$-adic zeta function was proven in \cite{denef-hoorn}. The same method of proof yields similar results for the local motivic zeta function $Z_{f,O}(T)$ of $f$ at the origin $O$ of $\mbb{A}^n_k$. This zeta function is defined as the image of $Z_f(T)$ under the base change morphism $\mathcal{M}^{\widehat{\mu}}_{X_0}\to \mathcal{M}^{\widehat{\mu}}_{O}=\mathcal{M}^{\widehat{\mu}}_{k}$. In fact, we only need to assume that $f$ is non-degenerate with respect to the {\em compact} faces of its Newton polyhedron. This means that for every compact face $\gamma$ of $\Gamma(f)$, the polynomial $f_\gamma$ has no critical points in $\mathbb{G}^n_{m,k}$. \begin{thm} We keep the notations of Theorem \ref{thm:nondeg}, but we replace the non-degeneracy assumption on $f$ by the weaker condition that $f$ is non-degenerate with respect to the compact faces of $\Gamma(f)$. Let $O$ be the origin of $\mbb{A}^n_k$. Then the motivic zeta function $Z_{f,O}(T)$ of $f$ at $O$ can be written as $$\sum_{\gamma}\left(\left([X_{\gamma}(0)]\frac{\mbb{L}^{-1}T}{1-\mbb{L}^{-1}T}+[X_{\gamma}(1)]\right)\sum_{u\in \mathring{\sigma}_{\gamma}\cap \mbb{N}^n}\mbb{L}^{-\nu(u)}T^{N(u)}\right)$$ in $\mathcal{M}^{\widehat{\mu}}_{k}$, where the sum is taken over all the compact faces $\gamma$ of $\Gamma(f)$.
\end{thm} \begin{proof} The non-degeneracy condition on $f$ guarantees that $\loga{Y}$ is smooth over $S^\dagger$ at every point of $h^{-1}(O)$, by the same arguments as in the proof of Proposition \ref{prop:nondeg-smooth}. The remainder of the argument is identical to the proof of Theorem \ref{thm:nondeg}: we only need to take into account that $O(\sigma_\gamma)$ lies in $h^{-1}(O)$ if $\gamma$ is compact, and has empty intersection with $h^{-1}(O)$ otherwise. \end{proof} \begin{rmk} The monodromy conjecture for non-degenerate polynomials in at most $3$ variables has been proven for the topological zeta function \cite{lema} and the $p$-adic and na\"\i ve motivic zeta functions \cite{bories} (in a weaker form, replacing roots of the Bernstein polynomial by local monodromy eigenvalues). See also \cite{loeser-nondeg} for partial results in arbitrary dimension in the $p$-adic setting. \end{rmk} \end{document}
\begin{document} \begin{abstract} We show that the residual categories of quadric surface bundles are equivalent to the (twisted) derived categories of some scheme under the following hypotheses. Case 1: The quadric surface bundle has a smooth section. Case 2: The total space of the quadric surface bundle is smooth and the base is a smooth surface. We provide two proofs in Case 1 describing the scheme as the hyperbolic reduction and as a subscheme of the relative Hilbert scheme of lines, respectively. In Case 2, the twisted scheme is obtained by performing birational transformations to the relative Hilbert scheme of lines. Finally, we apply the results to certain complete intersections of quadrics. \end{abstract} \title{Residual categories of quadric surface bundles} \tableofcontents \section{Introduction} For a flat family $f\colon X\to S$ of Fano varieties of index $n$, there is a semiorthogonal decomposition (SOD) \begin{equation*} \mathbf{D}^\mathrm{b}(X) = \langle \mathcal{A}_X, f^*\mathbf{D}^\mathrm{b}(S) \otimes \mathcal{O}_X(1), \dots, f^*\mathbf{D}^\mathrm{b}(S) \otimes \mathcal{O}_X(n) \rangle \end{equation*} where $\mathcal{A}_X$ is called the \textit{residual category} of $X$. In other words, the residual category $\mathcal{A}_X$ is the non-trivial component in the derived category. We can view $\mathcal{A}_X$ as a refined invariant of $X$. For example, the refined derived Torelli problem asks if $X$ is determined by $\mathcal{A}_X$. It is interesting to see when $\mathcal{A}_X$ is (twisted) geometric, i.e., equivalent to the (twisted) derived category of some scheme. In this paper, we study the problem when $f$ is a flat quadric surface bundle allowing fibers of corank $2$ and prove that their residual categories are (twisted) geometric in two cases. Let $p\colon \mathcal{Q}\to S$ be a flat quadric surface bundle where $S$ is an integral noetherian scheme over a field $\Bbbk$ with $\operatorname{char}(\Bbbk)\neq 2$.
Let $\mathcal{A}_{\mathcal{Q}}$ be the residual category of $p$. We start with Case 1. Assume that $p$ has a smooth section as in Definition \ref{regisodef} and the second degeneration locus $S_2\subset S$ is different from $S$ (the $k$-th degeneration locus $S_k$ is the locus in $S$ where fibers of $p$ have corank at least $k$). In this case, $\mathcal{A}_{\mathcal{Q}}$ is geometric. {\renewcommand{\thetheorem}{\ref{main1}} \begin{theorem} In the hypotheses of Case 1, $\mathcal{A}_{\mathcal{Q}}\cong\mathbf{D}^\mathrm{b}(\bar{\mathcal{Q}})$ where $\bar{\mathcal{Q}}$ is the hyperbolic reduction of $\mathcal{Q}$ with respect to the smooth section in Definition \ref{hypreddef}. \end{theorem} } The hyperbolic reduction $\bar{\mathcal{Q}}$ is isomorphic to a subscheme $Z$ of the relative Hilbert scheme of lines $M$ over $S$ parametrizing lines in the fibers of $p\colon \mathcal{Q} \to S$ that intersect the smooth section. Let $\mathbb{P}_Z(\mathcal{R}_Z) \subset \mathcal{Q} \times_S Z$ be the universal family of lines that $Z$ parametrizes. Using this identification, the embedding functor $\mathcal{A}_{\mathcal{Q}}\to \mathbf{D}^\mathrm{b}(\mathcal{Q})$ can be described explicitly as below. {\renewcommand{\thetheorem}{\ref{main2}} \begin{theorem} In the hypotheses of Case 1, $\mathcal{A}_{\mathcal{Q}}\cong\mathbf{D}^\mathrm{b}(Z)$ where $Z$ is introduced above. The embedding functors $ \mathbf{D}^\mathrm{b}(Z)\to \mathbf{D}^\mathrm{b}(\mathcal{Q})$ are of Fourier-Mukai type with kernels $\mathcal{S}_n^{\mathcal{R}_Z}, n\in\mathbb{Z}$ where $\mathcal{S}_n^{\mathcal{R}_Z}$ is the $n$-th spinor sheaf with respect to the isotropic subbundle $\mathcal{R}_Z$ in Definition \ref{spinordef}. \end{theorem} } In Case 2, $\mathcal{A}_{\mathcal{Q}}$ is twisted geometric. {\renewcommand{\thetheorem}{\ref{main3}} \begin{theorem} Assume $\Bbbk$ is algebraically closed and $\operatorname{char}(\Bbbk)=0$.
Let $p\colon \mathcal{Q}\to S$ be a flat quadric surface bundle where $\mathcal{Q}$ is smooth and $S$ is a smooth surface over $\Bbbk$. Then $\mathcal{A}_{\mathcal{Q}}\cong \mathbf{D}^\mathrm{b}(S^+, \mathcal{A}^+)$ where $S^+$ is the resolution of the double cover $\widetilde{S}$ over $S$ ramified along the (first) degeneration locus $S_1$ and $\mathcal{A}^+$ is an Azumaya algebra on $S^+$. In addition, the Brauer class $[\mathcal{A}^+]\in \operatorname{Br}(S^+)$ is trivial if and only if $p\colon \mathcal{Q} \to S$ has a rational section. \end{theorem} } When the quadric surface bundle $p\colon \mathcal{Q}\to S$ has {\it simple degeneration}, i.e., fibers of $p$ have corank at most $1$, or equivalently $S_2=\emptyset$, it is well-known that $\mathcal{A}_{\mathcal{Q}}$ is equivalent to a twisted derived category of the double cover $\widetilde{S}$ over $S$ ramified along $S_1$. Furthermore, the twist on $\widetilde{S}$ is closely related to the relative Hilbert scheme of lines $M$. More precisely, $\rho\colon M\to S$ factors as a smooth conic bundle $\tau\colon M\to \widetilde{S}$ followed by the double cover $\alpha\colon \widetilde{S}\to S$. The twist on $\widetilde{S}$ is given by the Azumaya algebra corresponding to $\tau$. This correspondence is a relative version of that between central simple algebras and Severi-Brauer varieties (Theorem 2.4.3, 5.2.1 in \cite{csa}). When $p\colon \mathcal{Q}\to S$ has fibers of corank $2$, $\mathcal{A}_{\mathcal{Q}}$ becomes more complicated and this paper focuses on this case. Having fibers of corank $2$ gives two challenges. On one hand, the singular locus of a quadric surface $Q$ of corank $2$ over an algebraically closed field is not isolated (it is isomorphic to $\mathbb{P}^1$). This means that $\mathcal{A}_Q$ would be ``$1$-dimensional'' because it {\it absorbs the singularity} of $Q$ (its semiorthogonal complement in $\mathbf{D}^\mathrm{b}(Q)$ is an exceptional collection).
On the other hand, the Hilbert scheme of lines on $Q$ is the union of two $\mathbb{P}^2$'s intersecting at a point, which is reducible and has higher dimension than in the case of quadric surfaces of corank at most $1$. \begin{conj}\label{conj} If a quadric surface bundle $p\colon \mathcal{Q}\to S$ has simple degeneration generically and each fiber has corank at most $2$, i.e., $S_2\neq S$ and $S_3=\emptyset$, then $\mathcal{A}_{\mathcal{Q}}\cong \mathbf{D}^\mathrm{b}(Y, \alpha_Y)$ where $(Y, \alpha_Y)$ is some twisted scheme. \end{conj} We expect the conjecture to be true because \'{e}tale locally $p$ has a smooth section (Case 1). More specifically, we expect that $Y$ is a double cover over $S\backslash S_1$ and a $\mathbb{P}^1$-bundle over $S_2$. This is supported by Theorems \ref{main1}, \ref{main2} and \ref{main3} in the paper. So far we do not have a natural way to construct such $Y$ in general. But in the two cases considered in the paper, we can make such constructions. In the remaining part of the introduction, we will describe $Y$ more explicitly and give an overview of the techniques used in the paper. In Case 1 where $p\colon \mathcal{Q}\to S$ is assumed to have a smooth section, $Y$ can be constructed as the hyperbolic reduction $\bar{\mathcal{Q}}$ with respect to the smooth section. The relative linear projection of $\mathcal{Q}$ from the smooth section identifies the blow-up of $\mathcal{Q}$ along the smooth section with a blow-up along $\bar{\mathcal{Q}}$; see the diagram (\ref{hrdiag}). The condition $S_2\neq S$ ensures that the blow-up center $\bar{\mathcal{Q}}$ is a regular embedding of codimension $2$, and then we can apply the blow-up formula \cite[Theorem 6.11]{jquot-1}. Theorem \ref{main1} follows from performing mutations under this identification. This proof is straightforward, but it has the disadvantage that the information on the embedding functor $\mathcal{A}_{\mathcal{Q}}\to \mathbf{D}^\mathrm{b}(\mathcal{Q})$ is lost under mutations.
This is why we provide a second proof in Theorem \ref{main2} and focus on working with $\mathcal{A}_{\mathcal{Q}}$ by itself. In the second proof of Case 1, we make use of the isomorphism from $\bar{\mathcal{Q}}$ to the scheme $Z$ over $S$ parametrizing lines in the fibers of $p\colon \mathcal{Q} \to S$ that intersect the smooth section. It has been shown in \cite{kuzqfib, abbqfib} that $\mathcal{A}_{\mathcal{Q}}$ is equivalent to $\mathbf{D}^\mathrm{b}(S, \mathcal{B}_0)$, the derived category of coherent sheaves on $S$ with right $\mathcal{B}_0$-module structures, where $\mathcal{B}_0$ is the even Clifford algebra of $p$. We prove in Proposition \ref{nceqprop} that $\mathbf{D}^\mathrm{b}(S,\mathcal{B}_0)\cong \mathbf{D}^\mathrm{b}(Z, \mathscr{E}\kern -1pt nd(\mathcal{I}_0^{\mathcal{R}_Z}))$ where $\mathcal{I}_0^{\mathcal{R}_Z}$ is certain Clifford ideal introduced in Section \ref{clalid}. Finally, we use Morita equivalence to deduce $\mathcal{A}_{\mathcal{Q}}\cong \mathbf{D}^\mathrm{b}(Z)$. The embedding functors $\mathbf{D}^\mathrm{b}(Z)\to \mathbf{D}^\mathrm{b}(\mathcal{Q})$ can be described explicitly because each functor involved can be described so. This approach relies on the study of derived categories of non-commutative schemes and we provide the necessary foundations in Appendix \ref{ncschsec}. In Case 2 where we assume $\mathcal{Q}$ is smooth and $S$ is a smooth surface, $Y$ can be constructed as the resolution $S^+$ of the double cover $\widetilde{S}$ over $S$. The idea is to make use of the relation between the relative Hilbert scheme of lines $M$ and the residual category $\mathcal{A}_{\mathcal{Q}}$ when $p\colon \mathcal{Q} \to S$ has simple degeneration. Although this relation fails for quadric surfaces of corank $2$, which means that $\mathcal{A}_{\mathcal{Q}}$ described in terms of the map $\tau\colon M\to \widetilde{S}$ is no longer a twisted derived category, we can fix this by making a modification to $\tau$. 
Namely, we will construct a smooth conic bundle $\tau_+\colon M^+\to S^+$ from $\tau$ such that $\mathcal{A}_{\mathcal{Q}}\cong \mathbf{D}^\mathrm{b}(S^+, \mathcal{A}^+)$ where $\mathcal{A}^+$ is Brauer equivalent to the Azumaya algebra corresponding to $\tau_+$. This construction is motivated by the work \cite{kuzline}, in which the base $S$ is a smooth $3$-fold. We will not need the additional assumptions of \cite{kuzline} on the degeneration loci $S_k$. Now we give a more detailed description of the modification. We will assume $\Bbbk$ is algebraically closed and $\operatorname{char}(\Bbbk)=0$ so that we can perform birational transformations to $M$. In this case, there are only a finite number of fibers with corank $2$ by Lemma \ref{bealem}. As pointed out before, the fiber $M_s, s\in S_2$ is a union of two $\mathbb{P}^2$'s. For each $M_s$, we will blow up one of the $\mathbb{P}^2$'s and contract the exceptional locus onto $\mathbb{P}^1$. In this process, $M_s$ becomes a Hirzebruch surface $M^+_s$, and $M^+\to \widetilde{S}$ factors as a smooth conic bundle $\tau_+\colon M^+\to S^+$ followed by the resolution $S^+\to \widetilde{S}$. Details of this process are given in Proposition \ref{birtran}. It should be pointed out that this geometric construction relies on being able to choose one of the two $\mathbb{P}^2$'s for each $M_s, s\in S_2$. Hence, it works well when $S_2$ is a finite set of points, but it would not work in general. The main results of the paper can be used to reprove results on semiorthogonal decompositions for the nodal quintic del Pezzo threefolds (Example \ref{nodaldp5}) given in \cite{xienodaldp5} and cubic $4$-folds containing a plane in non-generic cases (Example \ref{cubic4}).
Since the residual category of a Fano complete intersection of quadrics is equivalent to that of the associated net of quadrics by the Homological Projective Duality theory, we can produce new examples of Fano complete intersections of quadrics whose residual categories are twisted geometric. For example, residual categories of smooth complete intersections of three quadrics in $\mathbb{P}^{2m+3}$ for $m\leqslant 5$ are twisted geometric; see Proposition \ref{ci3q}. \noindent{\bf Convention.} Throughout the paper, we assume that $\Bbbk$ is a field with $\operatorname{char}(\Bbbk)\neq 2$ and $S$ is an integral noetherian scheme over $\Bbbk$ unless specified otherwise. In Section \ref{surfbsec} (Case 2), we assume that $\Bbbk$ is algebraically closed, and starting from Proposition \ref{birtran} till the end of the section, we assume additionally $\operatorname{char}(\Bbbk)=0$. \noindent{\bf Acknowledgements.} I would like to thank Arend Bayer, Qingyuan Jiang for numerous helpful conversations, and thank Alexander Kuznetsov for pointing out the reference \cite{moc8}. I would also like to thank the referee for a very careful reading of the paper and for many useful suggestions. The author is supported by the ERC Consolidator grant WallCrossAG, no. 819864. \section{Quadric bundles, Clifford algebras and ideals}\label{prisec} In this section, we recall some notions of quadric bundles, review definitions of Clifford algebras and introduce Clifford ideals. \subsection{Some basic notions of quadric bundles}\label{qbnotion} Assume that $\mathcal{E}$ is a vector bundle and $\mathcal{L}$ is a line bundle on $S$.
We say $q\colon \mathcal{E}\to \mathcal{L}$ is a \textit{(line-bundle valued) quadratic form} on $S$ if $q$ is an $\mathcal{O}_S$-homogeneous morphism of degree $2$ such that the associated morphism $b_q\colon \mathcal{E}\times \mathcal{E}\to \mathcal{L}$ defined by $b_q(v,w)=q(v+w)-q(v)-q(w)$ is a symmetric bilinear form. The \textit{rank} of $q\colon \mathcal{E}\to\mathcal{L}$ is the rank of $\mathcal{E}$. Assume $q$ is non-zero. Let $\pi\colon \mathbb{P}_S(\mathcal{E}) \to S$ be the projection map. Then $q$ corresponds to a non-zero section \begin{equation}\label{qsec} s_q\in \Gamma(\mathbb{P}_S(\mathcal{E}), \mathcal{O}_{\mathbb{P}_S(\mathcal{E})/S}(2)\otimes \pi^*\mathcal{L}) \cong \Gamma(S, \operatorname{Sym}^2(\mathcal{E}^\vee)\otimes \mathcal{L}), \end{equation} where $\mathcal{E}^\vee$ is the dual of $\mathcal{E}$ and $\operatorname{Sym}^2$ is the second symmetric product. Let $\mathcal{Q} \subset \mathbb{P}_S(\mathcal{E})$ be the zero locus of the section $s_q$. We write $\mathcal{Q} =\{q=0\}$ and call $p\colon \mathcal{Q}\to S$ the associated \textit{quadric bundle}. We say that $q\colon \mathcal{E}\to \mathcal{L}$ is \textit{primitive} if $q$ is non-zero over the residue field of every point on $S$. This is equivalent to $p\colon \mathcal{Q}\to S$ being a flat quadric bundle. In this paper, when we consider quadric bundles, we only require that $\mathcal{Q} \subset \mathbb{P}_S(\mathcal{E})$ has codimension $1$ and thus $p\colon \mathcal{Q}\to S$ may not be flat. Denote the \textit{$k$-th degeneration locus} of $p\colon \mathcal{Q}\to S$ by $S_k \subset S$, which is the closed subscheme of $S$ defined by the sheaf of ideals \begin{equation*} \operatorname{Im}(\Lambda^{n+1-k}\mathcal{E} \otimes \Lambda^{n+1-k} \mathcal{E} \otimes (\mathcal{L}^\vee)^{n+1-k} \xrightarrow{\Lambda^{n+1-k} b_q} \mathcal{O}_S), \end{equation*} where $n=\operatorname{rank}(\mathcal{E})$.
This means that $S_k$ is the locus where fibers of $p$ have corank at least $k$. The \textit{corank} of a quadric is the corank of its associated symmetric bilinear form. In particular, $S_1 \cong \{\det(b_q)=0\}$ is the locus of singular fibers. We say that $p\colon \mathcal{Q} \to S$ has \textit{simple degeneration} if $S_2=\emptyset$. A subbundle $\mathcal{W}$ of $q\colon \mathcal{E}\to\mathcal{L}$ is isotropic if $q|_{\mathcal{W}}=0$. This is equivalent to $\mathbb{P}(\mathcal{W})\subset \mathcal{Q}$. \begin{defn}\label{regisodef} An isotropic subbundle $\mathcal{W}$ of $q\colon \mathcal{E}\to\mathcal{L}$ is called \textit{regular} if for each geometric point $x\in S$, the fiber $\mathbb{P}(\mathcal{W})_x$ over $x$ is contained in the smooth locus of the fiber $\mathcal{Q}_x$. We call $\mathbb{P}_S(\mathcal{W})$ a \textit{smooth $r$-section} of $p\colon\mathcal{Q}\to S$ if $\mathcal{W}$ is a regular isotropic subbundle of $q$ of rank $r+1$. A smooth $0$-section is simply called a \textit{smooth section}. \end{defn} Let $\mathcal{W}$ be a regular isotropic subbundle of $q\colon \mathcal{E}\to\mathcal{L}$. Then there is an exact sequence \begin{equation*} 0\to \mathcal{W}^\perp \to \mathcal{E} \xrightarrow{b_q|_{\mathcal{E}\times \mathcal{W}}} \mathscr{H}\kern -2pt om(\mathcal{W}, \mathcal{L}) \to 0, \end{equation*} where $\mathcal{W}^\perp$ is the kernel of $b_q|_{\mathcal{E}\times \mathcal{W}}$. Since $\mathcal{W}$ is isotropic, we have $\mathcal{W} \subset \mathcal{W}^\perp$ and $\mathcal{W}$ is contained in the kernel of $q|_{\mathcal{W}^\perp}\colon \mathcal{W}^\perp \to \mathcal{L}$. It induces a new quadratic form $\bar{q}\colon \mathcal{W}^\perp/\mathcal{W} \to \mathcal{L}$. \begin{defn}\label{hypreddef} Denote $\bar{\mathcal{E}}=\mathcal{W}^\perp/\mathcal{W}$. 
The induced quadratic form $\bar{q}\colon \bar{\mathcal{E}} \to \mathcal{L}$ is called the \textit{hyperbolic reduction} of $q\colon \mathcal{E} \to \mathcal{L}$ with respect to the regular isotropic subbundle $\mathcal{W}$. Alternatively, $\bar{\mathcal{Q}}=\{\bar{q}=0\}$ is called the \textit{hyperbolic reduction} of $\mathcal{Q}=\{q=0\}$ with respect to the smooth $r$-section $\mathbb{P}_S(\mathcal{W})$, where $r=\operatorname{rank}(\mathcal{W})-1$. \end{defn} The quadric bundles $\mathcal{Q}\to S$ and $\bar{\mathcal{Q}}\to S$ related by the hyperbolic reduction share many features. For example, they have the same degeneration loci $S_k$. \subsection{Clifford algebras and ideals}\label{clalid} Let $q\colon\mathcal{E}\to\mathcal{L}$ be a non-zero quadratic form and let $p\colon \mathcal{Q}=\{q=0\} \to S$ be the associated quadric bundle. There are several equivalent definitions for even Clifford algebras and Clifford bimodules (also called the odd part of Clifford algebras) of $q$; see \cite[\S3]{bkqf}, \cite[\S1.5]{abbqfib}, \cite[\S3.3]{kuzqfib}. We will recall the construction in \cite{bkqf}. Set the degrees of elements of $\mathcal{E}$ and $\mathcal{L}$ to be $1$ and $2$, respectively. The \textit{generalized Clifford algebra} is defined by \begin{equation} \mathcal{B}= T(\mathcal{E})\otimes (\bigoplus_{n\in \mathbb{Z}} \mathcal{L}^n)/\langle (v\otimes v)\otimes 1- 1\otimes q(v)\rangle_{v\in \mathcal{E}}, \end{equation} where $T(\mathcal{E})$ is the tensor algebra of $\mathcal{E}$. Let $\mathcal{B}_n$ be the subgroup of $\mathcal{B}$ consisting of elements of degree $n\in\mathbb{Z}$. Then $\mathcal{B} \cong \bigoplus_{n\in\mathbb{Z}} \mathcal{B}_n$ is a $\mathbb{Z}$-graded algebra over $\mathcal{B}_0$. The \textit{even Clifford algebra} and the \textit{Clifford bimodule} are defined to be $\mathcal{B}_0$ and $\mathcal{B}_1$, respectively. Write $\operatorname{rank}(\mathcal{E})=2m$ or $2m+1$.
Then there are $\mathcal{O}_S$-filtrations \begin{equation}\label{b01fil} \begin{split} & \mathcal{O}_S=F_0\subset F_2\subset \dots \subset F_{2m}=\mathcal{B}_0,\\ & \mathcal{E}= F_1\subset F_3\subset \dots \subset F_{2m+1} =\mathcal{B}_1\\ \end{split} \end{equation} such that $F_{2i}/F_{2i-2} \cong \Lambda^{2i}\mathcal{E} \otimes (\mathcal{L}^\vee)^i$ and $F_{2i+1}/F_{2i-1}\cong \Lambda^{2i+1} \mathcal{E} \otimes (\mathcal{L}^\vee)^i$. Moreover, we have \begin{equation}\label{bn} \mathcal{B}_n\cong \left\{ \begin{array}{ll} \mathcal{B}_0\otimes \mathcal{L}^k, & n=2k\\ \mathcal{B}_1\otimes \mathcal{L}^k, & n=2k+1\\ \end{array} \right.. \end{equation} When $q$ is primitive, the Clifford multiplications \begin{equation}\label{primiso} \mathcal{B}_n\otimes_{\mathcal{B}_0}\mathcal{B}_m\xrightarrow{\cong} \mathcal{B}_{n+m} \end{equation} are isomorphisms for $n,m\in\mathbb{Z}$ and $\mathcal{B}_1$ is an invertible $\mathcal{B}_0$-bimodule. Note that if $\mathcal{W}$ is an isotropic subbundle, then $\bigoplus_n \Lambda^n \mathcal{W}$ is a subalgebra of $\mathcal{B}$. We define the associated Clifford ideals below. \begin{defn}\label{clideal} The \textit{$n$-th (left) Clifford ideal} $\mathcal{I}_n^{\mathcal{W}}$ of $q\colon\mathcal{E}\to \mathcal{L}$ with respect to an isotropic subbundle $\mathcal{W}$ is the degree $n$ part of the left principal ideal of $\mathcal{B}$ generated by $\det\mathcal{W}\subset \mathcal{B}_{\operatorname{rank}(\mathcal{W})}$, i.e., \begin{equation*} \mathcal{I}_n^{\mathcal{W}} =\operatorname{Im}(\mathcal{B}_{n-\operatorname{rank}(\mathcal{W})}\otimes \det\mathcal{W} \to \mathcal{B}_n), \end{equation*} where the map is given by Clifford multiplications.
Similarly, we can define the \textit{$n$-th right Clifford ideal} \begin{equation*} \mathcal{I}_n^{\circ \mathcal{W}}= \operatorname{Im}(\det\mathcal{W}\otimes \mathcal{B}_{n-\operatorname{rank}(\mathcal{W})} \to \mathcal{B}_n). \end{equation*} \end{defn} We have $\mathcal{I}_n^{\mathcal{W}}\cong\mathcal{I}_n^{\circ \mathcal{W}}$ as vector bundles on $S$ because the tensor product is commutative, and $\mathcal{I}_{n+2}^\mathcal{W} \cong \mathcal{I}_n^\mathcal{W}\otimes \mathcal{L}$ by the relations (\ref{bn}). The Clifford ideals play an important role in the study of quadric bundles. They have been studied for a quadric hypersurface and were used to define spinor sheaves for an isotropic subspace in \cite[\S2]{addspinor}. In this paper, we will provide a relative version. First, we give two lemmas about Clifford ideals that will be used later. Next, we give the relation between Clifford ideals of different isotropic subbundles and describe the dual of Clifford ideals. As in the quadric hypersurface case, we can define spinor sheaves for an isotropic subbundle, and we will discuss properties of spinor sheaves that follow from those of Clifford ideals. We primarily work with left Clifford ideals; the properties proved for them also apply to right Clifford ideals. \begin{lemma}\label{mulisolem} Assume that $q\colon \mathcal{E}\to \mathcal{L}$ is primitive and $\mathcal{W}$ is an isotropic subbundle. Then for all $m,n\in\mathbb{Z}$, (1) the Clifford multiplication induces a left $\mathcal{B}_0$-module isomorphism \begin{equation*} \sigma_{m,n} \colon \mathcal{B}_m\otimes_{\mathcal{B}_0} \mathcal{I}_n^{\mathcal{W}} \xrightarrow{\cong} \mathcal{I}_{m+n}^{\mathcal{W}}; \end{equation*} (2) there are isomorphisms of sheaves of algebras $\mathscr{E}\kern -1pt nd(\mathcal{I}_n^{\mathcal{W}})\cong \mathscr{E}\kern -1pt nd(\mathcal{I}_{m+n}^{\mathcal{W}})$. A similar result is true for right Clifford ideals.
\end{lemma} \begin{proof} (1) We have the map $\sigma_{m,n}$ because the image of the Clifford multiplication $\mathcal{B}_m\otimes \mathcal{I}_n^{\mathcal{W}} \to \mathcal{B}_{m+n}$ is $\mathcal{I}_{m+n}^{\mathcal{W}}$ and it factors through $\mathcal{B}_m\otimes_{\mathcal{B}_0} \mathcal{I}_n^{\mathcal{W}}$. Applying $\mathcal{B}_m\otimes_{\mathcal{B}_0} -$ to the map \begin{equation*} \sigma_{-m,m+n}\colon \mathcal{B}_{-m}\otimes_{\mathcal{B}_0} \mathcal{I}_{m+n}^{\mathcal{W}} \to \mathcal{I}_n^{\mathcal{W}} \end{equation*} and using the isomorphism (\ref{primiso}), we obtain the map $\mathcal{I}_{m+n}^{\mathcal{W}}\to \mathcal{B}_m\otimes_{\mathcal{B}_0} \mathcal{I}_n^{\mathcal{W}}$. This is the inverse map of $\sigma_{m,n}$. (2) is a consequence of (1) where the morphism $\mathscr{E}\kern -1pt nd(\mathcal{I}_n^{\mathcal{W}})\to \mathscr{E}\kern -1pt nd(\mathcal{I}_{m+n}^{\mathcal{W}})$ is induced by $\mathcal{B}_m\otimes_{\mathcal{B}_0} -$. \end{proof} \begin{lemma}\label{coklem} Let $\mathcal{W}$ be an isotropic subbundle of rank $r$. Then for all $n\in \mathbb{Z}$ there are exact sequences \begin{equation*} \mathcal{B}_{n-1}\otimes \mathcal{W} \to \mathcal{B}_n \to \mathcal{I}_{n+r}^{\mathcal{W}}\otimes \det(\mathcal{W}^\vee) \to 0 \end{equation*} of left $\mathcal{B}_0$-modules, where the first map is given by the Clifford multiplication $\mathcal{B}_{n-1}\otimes \mathcal{W} \subset \mathcal{B}_{n-1}\otimes \mathcal{B}_1 \to \mathcal{B}_n$. A similar result is true for right Clifford ideals. \end{lemma} \begin{proof} There is a left $\mathcal{B}_0$-module surjection $\mathcal{B}_n\twoheadrightarrow \mathcal{I}_{n+r}^{\mathcal{W}}\otimes \det(\mathcal{W}^\vee)$ induced by the multiplication $\mathcal{B}_n\otimes \det(\mathcal{W})\subset \mathcal{B}_n\otimes \mathcal{B}_r\to \mathcal{B}_{n+r}$. By construction, the kernel of the surjection is given by the image of $\mathcal{B}_{n-1}\otimes \mathcal{W} \to \mathcal{B}_n$. 
\end{proof} \begin{lemma}\label{2clideal} Let $\mathcal{W}'\subset \mathcal{W}$ be isotropic subbundles and assume $\operatorname{rank}(\mathcal{W})-\operatorname{rank}(\mathcal{W}')=1$. Let $\mathcal{L}_1=\mathcal{W}/\mathcal{W}'$. Then for all $n\in \mathbb{Z}$ there are short exact sequences \begin{equation*} 0\to \mathcal{I}_n^{\mathcal{W}}\to \mathcal{I}_n^{\mathcal{W}'}\to \mathcal{I}_{n+1}^{\mathcal{W}}\otimes \mathcal{L}_1^\vee\to 0 \end{equation*} of left $\mathcal{B}_0$-modules. Similarly, if $\mathcal{W}$ is an isotropic sub line bundle, then for all $n\in \mathbb{Z}$ there are short exact sequences \begin{equation*} 0\to \mathcal{I}_n^{\mathcal{W}}\to \mathcal{B}_n\to \mathcal{I}_{n+1}^{\mathcal{W}}\otimes \mathcal{W}^\vee\to 0. \end{equation*} A similar result is true for right Clifford ideals. \end{lemma} \begin{proof} Consider the map given by the Clifford multiplication \begin{equation*} \mathcal{I}_n^{\mathcal{W}'}\otimes \mathcal{W} \subset \mathcal{B}_n \otimes \mathcal{B}_1 \to \mathcal{B}_{n+1}. \end{equation*} The image is $\mathcal{I}_{n+1}^\mathcal{W}$ and it factors through $\mathcal{I}_n^{\mathcal{W}'}\otimes (\mathcal{W}/\mathcal{W}')$. Furthermore, the induced map \begin{equation*} \mathcal{I}_n^{\mathcal{W}'}\otimes \mathcal{L}_1 \twoheadrightarrow \mathcal{I}_{n+1}^\mathcal{W} \end{equation*} has kernel $\mathcal{I}_n^\mathcal{W} \otimes \mathcal{L}_1$. The proof when $\operatorname{rank}(\mathcal{W})=1$ is similar. \end{proof} \begin{lemma}\label{clidealdual} Let $\mathcal{W}$ be an isotropic subbundle of rank $r$. (1) If $\operatorname{rank}(\mathcal{E})=2m$, then for $k\in\mathbb{Z}$ there are right $\mathcal{B}_0$-module isomorphisms \begin{equation*} (\mathcal{I}_k^\mathcal{W})^\vee \cong \mathcal{I}_{r-k}^{\circ \mathcal{W}}\otimes \det(\mathcal{W}^\vee)\otimes \det(\mathcal{E}^\vee)\otimes \mathcal{L}^m.
\end{equation*} (2) If $\operatorname{rank}(\mathcal{E})=2m+1$, then for $k\in\mathbb{Z}$ there are right $\mathcal{B}_0$-module isomorphisms \begin{equation*} (\mathcal{I}_k^\mathcal{W})^\vee \cong \mathcal{I}_{r+1-k}^{\circ \mathcal{W}}\otimes \det(\mathcal{W}^\vee)\otimes \det(\mathcal{E}^\vee)\otimes \mathcal{L}^m. \end{equation*} \end{lemma} \begin{proof} (1) Let $\operatorname{tr}\colon \mathcal{B}_0\to \det(\mathcal{E})\otimes (\mathcal{L}^\vee)^m$ be the map $F_{2m}\to F_{2m}/F_{2m-2}$ induced by the filtration (\ref{b01fil}). For $k\in\mathbb{Z}$, there is a pairing \begin{equation*} \begin{array}{rcccl} \mathcal{B}_{-k}\otimes \mathcal{B}_{k} &\to & \mathcal{B}_0 & \xrightarrow{\operatorname{tr}} & \det(\mathcal{E})\otimes (\mathcal{L}^\vee)^m,\\ \xi \otimes \eta & \mapsto & \xi\eta & \mapsto & \operatorname{tr}(\xi\eta). \end{array} \end{equation*} When the pairing is restricted to $\mathcal{B}_{-k}\otimes \mathcal{I}_k^\mathcal{W}$, it induces \begin{equation*} f\colon \mathcal{B}_{-k} \to \mathscr{H}\kern -2pt om(\mathcal{I}_k^\mathcal{W}, \det(\mathcal{E})\otimes (\mathcal{L}^\vee)^m). \end{equation*} It is clear from the construction that $f$ is a homomorphism of right $\mathcal{B}_0$-modules. On the other hand, by Lemma \ref{coklem}, we get a right $\mathcal{B}_0$-module surjection \begin{equation*} g\colon \mathcal{B}_{-k}\twoheadrightarrow \det(\mathcal{W}^\vee)\otimes \mathcal{I}_{r-k}^{\circ \mathcal{W}}. \end{equation*} Since $f$ and $g$ have the same kernel, we have an injection \begin{equation*} \bar{f}\colon \det(\mathcal{W}^\vee)\otimes \mathcal{I}_{r-k}^{\circ \mathcal{W}} \to \mathscr{H}\kern -2pt om(\mathcal{I}_k^\mathcal{W}, \det(\mathcal{E})\otimes (\mathcal{L}^\vee)^m). \end{equation*} Note that both vector bundles above have rank $2^{\operatorname{rank}(\mathcal{E})-r-1}$. Then $\bar{f}$ is an isomorphism because it is so over the residue field of every point on $S$.
(2) The proof when $\operatorname{rank}(\mathcal{E})$ is odd is similar. The only differences are that we should use $\operatorname{tr}\colon \mathcal{B}_1\to \det(\mathcal{E})\otimes (\mathcal{L}^\vee)^m$ and the pairing $\mathcal{B}_{1-k}\otimes \mathcal{B}_{k}\to \det(\mathcal{E})\otimes (\mathcal{L}^\vee)^m$ instead. \end{proof} Now we define spinor sheaves on quadric bundles by means of Clifford ideals. Let $\pi\colon \mathbb{P}_S(\mathcal{E})\to S$ be the projection map. Regard $\mathcal{O}_{\mathbb{P}_S(\mathcal{E})/S}(-1)$ as the universal sub line bundle and consider the map \begin{equation}\label{phi_n} \begin{array}{rcl} \mathcal{O}_{\mathbb{P}_S(\mathcal{E})/S}(-1)\otimes \pi^*\mathcal{I}_{n-1}^\mathcal{W} & \xrightarrow{\phi_n} & \pi^* \mathcal{I}_n^\mathcal{W},\\ v\otimes \xi & \mapsto & v \xi. \end{array} \end{equation} Then $\phi_n\circ \phi_{n-1}=q$. Since $S$ is integral and $q\neq 0$, the map $\phi_n$ is injective and restricts to an isomorphism outside of $\mathcal{Q}=\{q=0\}$. \begin{defn}\label{spinordef} The $n$-th \textit{spinor sheaf} $\mathcal{S}_n^\mathcal{W}$ on $\mathcal{Q}=\{q=0\}$ of the non-zero quadratic form $q\colon\mathcal{E}\to\mathcal{L}$ with respect to an isotropic subbundle $\mathcal{W}$ is defined by the exact sequence \begin{equation*} 0\to \mathcal{O}_{\mathbb{P}_S(\mathcal{E})/S}(-1)\otimes \pi^*\mathcal{I}_{n-1}^\mathcal{W} \xrightarrow{\phi_n} \pi^*\mathcal{I}_n^\mathcal{W} \to i_* \mathcal{S}_n^\mathcal{W} \to 0, \end{equation*} where $\phi_n$ is constructed in (\ref{phi_n}), $\pi\colon \mathbb{P}_S(\mathcal{E})\to S$ is the projection map and $i\colon \mathcal{Q}\hookrightarrow \mathbb{P}_S(\mathcal{E})$ is the embedding. \end{defn} Again we have $\mathcal{S}_{n+2}^\mathcal{W} \cong \mathcal{S}_n^\mathcal{W} \otimes \mathcal{L}$.
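To make the definition of spinor sheaves concrete, we record the simplest absolute case as an illustrative remark; the identification below is only meant for orientation and will not be used later (it is the classical situation of \cite{addspinor}).

\begin{remark}
Let $S=\operatorname{Spec}\Bbbk$ and $\mathcal{L}=\mathcal{O}_S$, so that $q$ is an ordinary quadratic form on the vector space $\mathcal{E}$. The relation $\phi_n\circ \phi_{n-1}=q$ then exhibits the pair $(\phi_{n-1},\phi_n)$ as a matrix factorization of $q$, and $\mathcal{S}_n^{\mathcal{W}}$ is the cokernel of this factorization supported on $\mathcal{Q}$. For instance, if $\mathcal{Q}$ is a smooth quadric surface and $\mathcal{W}$ is a maximal isotropic subspace, then under an identification $\mathcal{Q}\cong \mathbb{P}^1\times \mathbb{P}^1$ the sheaves $\mathcal{S}_n^{\mathcal{W}}$ are, up to twist, the classical spinor line bundles $\mathcal{O}(-1,0)$ and $\mathcal{O}(0,-1)$.
\end{remark}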
\begin{remark}\label{spinordef2} We can also construct $\mathcal{S}_n^\mathcal{W}$ as the cokernel of $\phi_n^\circ$ where \begin{equation*} \phi_n^\circ\colon \mathcal{O}_{\mathbb{P}_S(\mathcal{E})/S}(-1)\otimes \pi^*\mathcal{I}_{n-1}^{\circ\mathcal{W}} \to \pi^*\mathcal{I}_n^{\circ\mathcal{W}} \end{equation*} is the map sending $v\otimes \xi$ to $\xi v$. \end{remark} Restricting $\phi_n$, $n\in\mathbb{Z}$, to the quadric bundle $\mathcal{Q}$, there are exact sequences \begin{equation}\label{spinorseq} \begin{split} & \dots \to \mathcal{O}_{\mathcal{Q}/S}(-2)\otimes p^*\mathcal{I}_{n-2}^\mathcal{W} \xrightarrow{\phi_{n-1}} \mathcal{O}_{\mathcal{Q}/S}(-1)\otimes p^*\mathcal{I}_{n-1}^\mathcal{W} \xrightarrow{\phi_n} p^*\mathcal{I}_n^\mathcal{W} \to \mathcal{S}_n^\mathcal{W} \to 0,\\ & 0\to \mathcal{S}_n^\mathcal{W} \to \mathcal{O}_{\mathcal{Q}/S}(1)\otimes p^*\mathcal{I}_{n+1}^\mathcal{W} \xrightarrow{\phi_{n+2}} \mathcal{O}_{\mathcal{Q}/S}(2)\otimes p^*\mathcal{I}_{n+2}^\mathcal{W} \xrightarrow{\phi_{n+3}} \dots, \end{split} \end{equation} where $p\colon \mathcal{Q}\to S$ is the quadric bundle. \begin{cor} Let $\mathcal{W}'\subset \mathcal{W}$ be isotropic subbundles and assume $\operatorname{rank}(\mathcal{W})-\operatorname{rank}(\mathcal{W}')=1$. Let $\mathcal{L}_1=\mathcal{W}/\mathcal{W}'$. Then for all $n\in \mathbb{Z}$ there are short exact sequences on $\mathcal{Q}$, \begin{equation*} 0\to \mathcal{S}_n^{\mathcal{W}}\to \mathcal{S}_n^{\mathcal{W}'}\to \mathcal{S}_{n+1}^{\mathcal{W}}\otimes p^*\mathcal{L}_1^\vee\to 0, \end{equation*} where $p\colon \mathcal{Q}\to S$ is the quadric bundle. \end{cor} \begin{proof} By Lemma \ref{2clideal}, there are short exact sequences \begin{equation*} 0\to \mathcal{I}_n^{\mathcal{W}}\to \mathcal{I}_n^{\mathcal{W}'}\to \mathcal{I}_{n+1}^{\mathcal{W}}\otimes \mathcal{L}_1^\vee\to 0.
\end{equation*} Then the result follows because these short exact sequences are compatible with the map $\phi_n$ (\ref{phi_n}) defining the spinor sheaves. \end{proof} \begin{cor} The spinor sheaf $\mathcal{S}_n^\mathcal{W}$ is reflexive on $\mathcal{Q}$. Let $\operatorname{rank}(\mathcal{W})=r$. Recall $p\colon \mathcal{Q}\to S$ is the quadric bundle. (1) If $\operatorname{rank}(\mathcal{E})=2m$, then \begin{equation*} (\mathcal{S}_n^\mathcal{W})^\vee \cong \mathcal{S}_{r-n-1}^{\mathcal{W}} \otimes \mathcal{O}_{\mathcal{Q}/S}(-1) \otimes p^*(\det(\mathcal{W}^\vee)\otimes \det(\mathcal{E}^\vee)\otimes \mathcal{L}^m). \end{equation*} (2) If $\operatorname{rank}(\mathcal{E})=2m+1$, then \begin{equation*} (\mathcal{S}_n^\mathcal{W})^\vee \cong \mathcal{S}_{r-n}^{\mathcal{W}} \otimes \mathcal{O}_{\mathcal{Q}/S}(-1) \otimes p^*(\det(\mathcal{W}^\vee)\otimes \det(\mathcal{E}^\vee)\otimes \mathcal{L}^m). \end{equation*} \end{cor} \begin{proof} The reflexivity of $\mathcal{S}_n^\mathcal{W}$ follows from the observation that taking the double dual of (\ref{spinorseq}) gives the same exact sequences. (1) Taking the dual of the second sequence in (\ref{spinorseq}), we have another exact sequence \begin{equation*} \dots \to \mathcal{O}_{\mathcal{Q}/S}(-2)\otimes p^*(\mathcal{I}_{n+2}^\mathcal{W})^\vee \xrightarrow{\phi_{n+2}^\vee} \mathcal{O}_{\mathcal{Q}/S}(-1) \otimes p^*(\mathcal{I}_{n+1}^\mathcal{W})^\vee \to (\mathcal{S}_n^\mathcal{W})^\vee \to 0. \end{equation*} The result follows from Lemma \ref{clidealdual} and Remark \ref{spinordef2} by noticing that $\phi_{n+2}^\vee \cong \phi_{r-n-1}^\circ$. (2) can be proved similarly. \end{proof} \subsection{Non-primitive quadratic forms of rank two}\label{qfr2} Given a quadric surface bundle with a smooth section, the hyperbolic reduction with respect to the smooth section as in Definition \ref{hypreddef} gives a quadratic form of rank $2$.
If $S_2\neq \emptyset$, then the new quadratic form is non-primitive. In preparation for Section \ref{smoothsec}, we will study possibly non-primitive quadratic forms of rank $2$ in this section. Let $q\colon\mathcal{E}\to\mathcal{L}$ be a non-zero quadratic form of rank $2$, i.e., $\operatorname{rank}(\mathcal{E})=2$, and let $p\colon \mathcal{Q}=\{q=0\} \to S$ be the associated quadric bundle. Then $q|_{S_2}=0$, and $q\neq 0$ implies $S_2\neq S$. In this case, $p$ has relative dimension $0$ over $S\backslash S_2$ and is not flat if $S_2\neq \emptyset$. We will give a result on the relation between Clifford algebras and Clifford ideals. The non-flatness of $p$ requires a non-trivial argument. We observe that $\mathcal{O}_{\mathcal{Q}/S}(-1)$ is an isotropic line bundle of $p^*q\colon p^*\mathcal{E}\to p^*\mathcal{L}$. The Clifford multiplication gives \begin{equation*} p^*\mathcal{B}_n\otimes \mathcal{I}_m^{\mathcal{O}_{\mathcal{Q}/S}(-1)}\to \mathcal{I}_{n+m}^{\mathcal{O}_{\mathcal{Q}/S}(-1)}. \end{equation*} Since $\mathcal{B}_n$ is locally free, the map above induces \begin{equation}\label{hommap} Lp^*\mathcal{B}_n \cong p^*\mathcal{B}_n \to \mathscr{H}\kern -2pt om_{\mathcal{O}_{\mathcal{Q}}}(\mathcal{I}_m^{\mathcal{O}_{\mathcal{Q}/S}(-1)}, \mathcal{I}_{n+m}^{\mathcal{O}_{\mathcal{Q}/S}(-1)}). \end{equation} \begin{lemma}\label{dim0lem} Let $p\colon \mathcal{Q}\to S$ be the quadric bundle associated with a non-zero quadratic form $q\colon \mathcal{E}\to \mathcal{L}$ of rank $2$. Let $\pi\colon \mathbb{P}_S(\mathcal{E})\to S$ be the projection map and let $i\colon \mathcal{Q}\hookrightarrow \mathbb{P}_S(\mathcal{E})$ be the embedding. Then $p=\pi\circ i$.
(1) There is a short exact sequence \begin{equation*} 0\to \pi^*\mathcal{B}_{n-1} \otimes \mathcal{O}_{\mathbb{P}_S(\mathcal{E})/S}(-1)\xrightarrow{\phi_n} \pi^*\mathcal{B}_n \to i_*\mathscr{H}\kern -2pt om_{\mathcal{O}_{\mathcal{Q}}}(\mathcal{I}_1^{\mathcal{O}_{\mathcal{Q}/S}(-1)}, \mathcal{I}_{n+1}^{\mathcal{O}_{\mathcal{Q}/S}(-1)})\to 0 \end{equation*} where $\phi_n$ is the map (\ref{phi_n}). (2) The map \begin{equation*} \mathcal{B}_n\to Rp_*\mathscr{H}\kern -2pt om_{\mathcal{O}_{\mathcal{Q}}}(\mathcal{I}_m^{\mathcal{O}_{\mathcal{Q}/S}(-1)}, \mathcal{I}_{n+m}^{\mathcal{O}_{\mathcal{Q}/S}(-1)}) \end{equation*} induced by (\ref{hommap}) is an isomorphism for all $n,m\in \mathbb{Z}$. \end{lemma} \begin{proof} (1) Since $S$ is integral, $\phi_n\circ \phi_{n-1}=q$, and $q\neq 0$, the map $\phi_n$ is injective and is an isomorphism outside of $\mathcal{Q}$. Moreover, $\operatorname{coker}(\phi_n)$ is supported on $\mathcal{Q}$ schematically and we can write it as $i_*\mathfrak{S}_n$. Note that $\mathcal{I}_1^{\mathcal{O}_{\mathcal{Q}/S}(-1)}\cong \mathcal{O}_{\mathcal{Q}/S}(-1)$ and $\mathfrak{S}_n\cong \operatorname{coker}(\phi_n|_{\mathcal{Q}})$. By Lemma \ref{coklem}, we get \begin{equation*} \mathfrak{S}_n\cong \mathscr{H}\kern -2pt om_{\mathcal{O}_{\mathcal{Q}}}(\mathcal{I}_1^{\mathcal{O}_{\mathcal{Q}/S}(-1)}, \mathcal{I}_{n+1}^{\mathcal{O}_{\mathcal{Q}/S}(-1)}). \end{equation*} (2) It suffices to prove the claim for $m=0,1$ and all $n$. Applying $R\pi_*$ to the sequence in (1), we see that the claim holds for $m=1$. Now we prove it for $m=0$. Let $F=\operatorname{coker}(\mathcal{O}_{\mathbb{P}_S(\mathcal{E})/S}(-1)\to \pi^*\mathcal{E})\cong \pi^*\det(\mathcal{E}) \otimes \mathcal{O}_{\mathbb{P}_S(\mathcal{E})/S}(1)$. From (1), we get \begin{equation*} \mathfrak{S}_1\cong \mathscr{H}\kern -2pt om_{\mathcal{O}_{\mathcal{Q}}}(\mathcal{I}_1^{\mathcal{O}_{\mathcal{Q}/S}(-1)}, \mathcal{I}_2^{\mathcal{O}_{\mathcal{Q}/S}(-1)}) \cong p^*\det\mathcal{E}\otimes \mathcal{O}_{\mathcal{Q}/S}(1).
\end{equation*} Comparing $F$ and $\mathfrak{S}_1$, we get $\mathfrak{S}_1 \cong F|_{\mathcal{Q}}$. We have commutative diagrams with exact rows \begin{equation*} \begin{tikzcd} 0\arrow{r} & \pi^*\mathcal{B}_n \otimes \mathcal{O}_{\mathbb{P}_S(\mathcal{E})/S}(-1)\arrow{r} \arrow[equal]{d} & \pi^*\mathcal{B}_n \otimes \pi^*\mathcal{E} \arrow{r} \arrow{d}{f} & \pi^*\mathcal{B}_n\otimes F \arrow{r} \arrow{d}{g} & 0\\ 0 \arrow{r} & \pi^*\mathcal{B}_n \otimes \mathcal{O}_{\mathbb{P}_S(\mathcal{E})/S}(-1) \arrow{r} & \pi^*\mathcal{B}_{n+1} \arrow{r} & i_*\mathfrak{S}_{n+1} \arrow{r} & 0, \end{tikzcd} \end{equation*} where $f$ is the pull-back of the Clifford multiplication $\mathcal{B}_n\otimes \mathcal{E} \to \mathcal{B}_{n+1}$ and $g$ is the map induced by the diagram. Tensoring $g$ by $F^\vee$ gives \begin{equation*} \pi^*\mathcal{B}_n \to i_*\mathfrak{S}_{n+1}\otimes F^\vee \cong i_* \mathscr{H}\kern -2pt om_{\mathcal{O}_{\mathcal{Q}}}(\mathfrak{S}_1, \mathfrak{S}_{n+1}) \cong i_* \mathscr{H}\kern -2pt om_{\mathcal{O}_{\mathcal{Q}}}(\mathcal{I}_0^{\mathcal{O}_{\mathcal{Q}/S}(-1)}, \mathcal{I}_n^{\mathcal{O}_{\mathcal{Q}/S}(-1)}). \end{equation*} Clearly, $f$ is surjective. Thus, $g$ is also surjective and $\ker(f)\cong \ker(g)$. Observe that $R\pi_*(\ker(f)\otimes F^\vee)=0$. Hence, $R\pi_*(\ker(g)\otimes F^\vee)=0$ and the claim holds for $m=0$. \end{proof} \section{Relative Hilbert schemes of lines}\label{hilbsec} In this section, we focus on describing the relative Hilbert schemes of lines of flat quadric surface bundles. In the rest of the paper, we will use the following notation. \begin{itemize} \item Let $p\colon \mathcal{Q}\to S$ be a flat quadric surface bundle with the associated quadratic form $q\colon \mathcal{E}\to \mathcal{L}$ and the generalized Clifford algebra $\mathcal{B} \cong \bigoplus_{n\in\mathbb{Z}} \mathcal{B}_n$.
\item Let $\pi\colon \mathbb{P}_S(\mathcal{E}) \to S$ be the projection map and let $i\colon \mathcal{Q}\hookrightarrow \mathbb{P}_S(\mathcal{E})$ be the embedding. Then $p=\pi\circ i$. \item Let $\widetilde{S} =\operatorname{Spec}_S(\mathcal{C}_0)$ be the \textit{discriminant cover} over $S$, where $\mathcal{C}_0$ is the center of $\mathcal{B}_0$. Denote the covering map by $\alpha\colon \widetilde{S} \to S$. The description of $\alpha$ is given in Lemma \ref{disccover} below. \item Let $\rho \colon M\to S$ be the relative Hilbert scheme of lines of $p$; it factors as $M \xrightarrow{\tau}\widetilde{S}\xrightarrow{\alpha} S$. \end{itemize} Observe that $M\subset \operatorname{Gr}_S(2, \mathcal{E})$. Let $\mathcal{R}$ be the universal subbundle on $\operatorname{Gr}_S(2, \mathcal{E})$. Then $M \subset \operatorname{Gr}_S(2,\mathcal{E})$ is the zero locus of the section \begin{equation}\label{msec} s_{M} \in \Gamma(\operatorname{Gr}_S(2,\mathcal{E}), \operatorname{Sym}^2(\mathcal{R}^\vee)\otimes \pi_{\operatorname{Gr}}^*\mathcal{L}) \cong \Gamma(S, \operatorname{Sym}^2(\mathcal{E}^\vee)\otimes \mathcal{L}) \end{equation} corresponding to the section $s_q$ in (\ref{qsec}) defining $\mathcal{Q}$. Here $\pi_{\operatorname{Gr}}\colon \operatorname{Gr}_S(2, \mathcal{E})\to S$ is the projection map. For a geometric point $s\in S$, the fiber $M_s$ is \begin{enumerate} \item a disjoint union of two smooth conics if $\mathcal{Q}_s$ is smooth; \item a smooth conic over the dual numbers $\Bbbk[\epsilon]/\epsilon^2$ if $\mathcal{Q}_s$ has corank $1$; \item a union of two planes intersecting at a point if $\mathcal{Q}_s$ has corank $2$.
\end{enumerate} When $p$ has a smooth section as in Definition \ref{regisodef}, we further denote by \begin{itemize} \item $Z\subset M$ the subscheme parametrizing lines that intersect the smooth section and $\beta\colon Z\hookrightarrow M\xrightarrow{\rho} S$ the composition map. \item $\mathcal{R}_Z$ the restriction of the universal subbundle $\mathcal{R}$ on $\operatorname{Gr}_S(2,\mathcal{E})$ to $Z$. \end{itemize} We would like to point out that whether $\rho \colon M\to S$ factors through a double cover is a subtle question in positive characteristic. This is the case when $\operatorname{char}(\Bbbk)\neq 2, 3$ because there is the decomposition \begin{equation*} \mathcal{B}_0\cong \mathcal{O}_S \oplus \Lambda^2 \mathcal{E} \otimes \mathcal{L}^\vee \oplus \det(\mathcal{E})\otimes (\mathcal{L}^\vee)^2. \end{equation*} In this case, $\widehat{\mathcal{C}_0}:= \mathcal{O}_S\oplus \det(\mathcal{E})\otimes (\mathcal{L}^\vee)^2$ is a subalgebra inside the center $\mathcal{C}_0$ and $\widehat{\alpha}\colon \widehat{S}:= \operatorname{Spec}_S(\widehat{\mathcal{C}_0})\to S$ is a double cover over $S$ ramified along $S_1$. The inclusion $\widehat{\mathcal{C}_0}\subset \mathcal{C}_0$ implies that $\rho\colon M\to S$ factors through $\widehat{\alpha}$. In general, it is unclear if we can embed $\widehat{\mathcal{C}_0}$ inside $\mathcal{C}_0$. But we still have the following when $\operatorname{char}(\Bbbk)\neq 2$. \begin{lemma}\label{disccover} The center $\mathcal{C}_0$ of $\mathcal{B}_0$ is locally free of rank $2$ in the following two cases: (a) $S_2=\emptyset$ or (b) $S$ is a locally factorial integral scheme and $S_1\neq S$. In these cases, $\alpha\colon \widetilde{S}\to S$ is a double cover over $S$ ramified along the degeneration locus $S_1$. \end{lemma} \begin{proof} (a) Let $s\in S$ be an arbitrary point.
When $S_2=\emptyset$, Lemma 1.4 in \cite{apsqsb} implies that the stalk of $\mathcal{E}$ at $s$ has an orthogonal basis $\{v_i\}_{i=1}^4$, i.e., $b_q(v_i, v_j)= 0$ for $i\neq j$. Then $\{1, v_1v_2v_3v_4\}$ is contained in the stalk of $\mathcal{C}_0$ at $s$, which gives $\dim_{\Bbbk(s)}(\mathcal{C}_0\otimes \Bbbk(s))\geqslant 2$. On the other hand, $\mathcal{B}_0\otimes \Bbbk(s)$ is an Azumaya algebra whose center has dimension $2$. Thus, $\dim_{\Bbbk(s)}(\mathcal{C}_0\otimes \Bbbk(s))=2$ and $\mathcal{C}_0$ is locally free of rank $2$. The case (b) is Lemma 1.6.1 in \cite{abbqfib}. \end{proof} If $p$ has simple degeneration, i.e., $S_2=\emptyset$, then $\tau\colon M\to \widetilde{S}$ is a smooth conic bundle. When $p$ has a smooth section, the composition $Z\hookrightarrow M \xrightarrow{\tau} \widetilde{S}$ is an isomorphism and thus $\tau$ is a $\mathbb{P}^1$-bundle. In this case, there are well-known relations among the relative Hilbert scheme of lines $M$, the even Clifford algebra $\mathcal{B}_0$, and Clifford ideals $\mathcal{I}_n^{\mathcal{R}_Z}$. \begin{lemma}\label{sdb0lem} Assume that the flat quadric surface bundle $p\colon \mathcal{Q}\to S$ has a smooth section and $S_2=\emptyset$. Then we can identify $\beta \cong \alpha \colon Z\xrightarrow{\cong}\widetilde{S}\to S$. For all $n\in \mathbb{Z}$, we have (1) $M \cong \mathbb{P}_Z(\mathcal{I}_n^{\mathcal{R}_Z})$; (2) $\mathcal{B}_0 \cong \beta_* \mathscr{E}\kern -1pt nd(\mathcal{I}_n^{\mathcal{R}_Z}).$\\ Here $\mathcal{I}_n^{\mathcal{R}_Z}$ is the $n$-th Clifford ideal of $\beta^*q$ on $Z$ in Definition \ref{clideal}.
\end{lemma} \begin{proof} (1) Consider base changes along $\beta\colon Z\to S$, \begin{equation*} \begin{tikzcd} \mathcal{Q}\times_S Z \arrow{r}{\beta_{\mathcal{Q}}} \arrow{d}{p_Z} & \mathcal{Q} \arrow{d}{p}\\ Z \arrow{r}{\beta} & S, \end{tikzcd} \quad \begin{tikzcd} \mathbb{P}_S(\mathcal{E})\times_S Z \arrow{r}{\beta_{\mathcal{E}}} \arrow{d}{\pi_Z} & \mathbb{P}_S(\mathcal{E}) \arrow{d}{\pi}\\ Z \arrow{r}{\beta} & S, \end{tikzcd} \end{equation*} and let $i_Z\colon \mathcal{Q}\times_S Z\hookrightarrow \mathbb{P}_S(\mathcal{E})\times_S Z$ be the embedding. By Definition \ref{spinordef}, we have \begin{equation}\label{spinoronz} 0 \to \beta_{\mathcal{E}}^*\mathcal{O}_{\mathbb{P}_S(\mathcal{E})/S}(-1)\otimes \pi_Z^*\mathcal{I}_{n-1}^{\mathcal{R}_Z} \to \pi_Z^*\mathcal{I}_{n}^{\mathcal{R}_Z} \to i_{Z*} \mathcal{S}_n^{\mathcal{R}_Z} \to 0. \end{equation} Let $z=[L]\in Z$ be a geometric point represented by a line $L$ in the fiber $\mathcal{Q}_{\beta(z)}$. Then the spinor sheaf $\mathcal{S}_{0}^{\mathcal{R}_Z}$ restricted at $z$ is the line bundle (resp., rank $1$ reflexive sheaf) $\mathcal{O}_{\mathcal{Q}_{\beta(z)}}(L)$ when $\mathcal{Q}_{\beta(z)}$ is smooth (resp., of corank $1$). Therefore, $M$ is the projectivization of $p_{Z*}\mathcal{S}_{0}^{\mathcal{R}_Z}$. Applying $\pi_{Z*}$ to the sequence (\ref{spinoronz}), we get $\mathcal{I}_n^{\mathcal{R}_Z} \cong p_{Z*}\mathcal{S}_{n}^{\mathcal{R}_Z}$. Hence, $M \cong \mathbb{P}_Z(\mathcal{I}_0^{\mathcal{R}_Z})$. From Lemma \ref{mulisolem} (2), we get that $\mathscr{E}\kern -1pt nd(\mathcal{I}_n^{\mathcal{R}_Z})$ are isomorphic as sheaves of algebras for all $n$. Thus, $\mathcal{I}_n^{\mathcal{R}_Z}$ for different $n$ differ only by tensoring with a line bundle. This proves (1). (2) Lemma 4.2 in \cite{kuzcubic4} shows that the push-forward $\beta_*$ of the Azumaya algebra corresponding to the smooth conic bundle $\tau\colon M\to \widetilde{S}\cong Z$ is isomorphic to $\mathcal{B}_0$.
More specifically, the left $\beta^*\mathcal{B}_0$-module structure of $\mathcal{I}_n^{\mathcal{R}_Z}$ gives $\beta^*\mathcal{B}_0\to \mathscr{E}\kern -1pt nd(\mathcal{I}_n^{\mathcal{R}_Z})$ and the map adjoint to it induces $\mathcal{B}_0 \cong \beta_*\mathscr{E}\kern -1pt nd(\mathcal{I}_n^{\mathcal{R}_Z})$. \end{proof} For the rest of the section, we will show that Lemma \ref{sdb0lem} (2) still holds when $p\colon \mathcal{Q}\to S$ does not have simple degeneration. This is given by Corollary \ref{genb0cor}. Assume that $p\colon \mathcal{Q}\to S$ has a smooth section and let $\mathcal{N}$ be the corresponding regular isotropic sub line bundle. Denote $\mathcal{N}^\perp/\mathcal{N}$ by $\bar{\mathcal{E}}$. Let $\bar{q}\colon \bar{\mathcal{E}} \to \mathcal{L}$ be the hyperbolic reduction of $q$ with respect to $\mathcal{N}$. By construction, each point on $\mathbb{P}_S(\bar{\mathcal{E}})$ corresponds to a line in the fiber of $\mathbb{P}_S(\mathcal{N}^\perp)\to S$ that intersects $\mathbb{P}_S(\mathcal{N})$. Hence, \begin{equation}\label{zeqhr} Z\cong \{\bar{q}=0\} \subset \mathbb{P}_S(\bar{\mathcal{E}}) \end{equation} is the hyperbolic reduction of $\mathcal{Q}$. The short exact sequence \begin{equation*} 0\to \mathcal{N}\to \mathcal{N}^\perp \to \bar{\mathcal{E}}\to 0 \end{equation*} induces inclusions $\mathcal{N}\otimes \bar{\mathcal{E}}\subset \Lambda^2 \mathcal{N}^\perp \subset \Lambda^2 \mathcal{E}$. Under the isomorphism $\mathbb{P}_S(\bar{\mathcal{E}}) \cong \mathbb{P}_S(\mathcal{N}\otimes \bar{\mathcal{E}}) \subset \mathbb{P}_S(\Lambda^2 \mathcal{E})$, we have \begin{equation}\label{detr} \det(\mathcal{R}_Z) \cong \beta^*\mathcal{N} \otimes \mathcal{O}_{Z/S}(-1), \end{equation} where $\mathcal{O}_{Z/S}(-1)$ is the restriction of $\mathcal{O}_{\mathbb{P}_S(\bar{\mathcal{E}})/S}(-1)$. \begin{lemma}\label{genb0lem} The following properties of $\beta\colon Z\to S$ hold.
(1) Let $Z'=\beta^{-1}(S_2)$ and let $\bar{\pi}'=\beta|_{Z'}\colon Z'\to S_2$. Denote $\bar{\mathcal{E}}|_{S_2}$ by $\bar{\mathcal{E}_2}$ and $\mathcal{I}_0^{\mathcal{R}_Z}|_{Z'}$ by $\mathcal{I}_0^{\mathcal{R}_{Z'}}$. Then $Z'\cong \mathbb{P}_{S_2}(\bar{\mathcal{E}_2})$ and \begin{equation}\label{i0dec} \mathcal{I}_0^{\mathcal{R}_{Z'}} \cong \mathcal{O}_{Z'/S}(-1)\otimes \bar{\pi}'^* \left( (\mathcal{N} \otimes \mathcal{L}^\vee)|_{S_2} \right) \oplus \bar{\pi}'^* \left( (\det(\mathcal{E})\otimes (\mathcal{L}^\vee)^2)|_{S_2} \right). \end{equation} (2) Assume $S_2\neq S$. There is an exact sequence \begin{equation*} 0\to \mathcal{O}_{\mathbb{P}_S(\bar{\mathcal{E}})/S}(-2)\otimes \bar{\pi}^*\mathcal{L}^\vee \to \mathcal{O}_{\mathbb{P}_S(\bar{\mathcal{E}})} \to \mathcal{O}_Z\to 0, \end{equation*} where $\bar{\pi}\colon \mathbb{P}_S(\bar{\mathcal{E}})\to S$ is the projection map. (3) Assume $S_2\neq S$. Then $R^1\beta_*\mathscr{E}\kern -1pt nd(\mathcal{I}_0^{\mathcal{R}_Z})=0$, $\beta_*\mathscr{E}\kern -1pt nd(\mathcal{I}_0^{\mathcal{R}_Z})$ is locally free of rank $8$, and $\det (\beta_*\mathscr{E}\kern -1pt nd(\mathcal{I}_0^{\mathcal{R}_Z})) \cong \det(\mathcal{B}_0)$. \end{lemma} \begin{proof} (1) Since $\bar{q}|_{S_2}=0$, we have $Z'\cong \mathbb{P}_{S_2}(\bar{\mathcal{E}_2})$. The filtration (\ref{b01fil}) of $\beta^*\mathcal{B}_0$ also induces one for $\mathcal{I}_0^{\mathcal{R}_Z}$, and we have \begin{equation}\label{i0fil} 0\to \det(\mathcal{R}_Z) \otimes \beta^*\mathcal{L}^\vee \to \mathcal{I}_0^{\mathcal{R}_Z} \to \beta^*(\det(\mathcal{E})\otimes (\mathcal{L}^\vee)^2) \to 0 \end{equation} as well as the same for $\mathcal{I}_0^{\circ \mathcal{R}_Z}$. Recall that $\det(\mathcal{R}_Z) \cong \beta^*\mathcal{N} \otimes \mathcal{O}_{Z/S}(-1)$ from (\ref{detr}).
Since $\bar{\pi}'\colon Z'\to S_2$ is a $\mathbb{P}^1$-bundle, there is a semiorthogonal decomposition \begin{equation}\label{sodz'} \mathbf{D}^\mathrm{b}(Z')=\langle \bar{\pi}'^{*}\mathbf{D}^\mathrm{b}(S_2) \otimes \mathcal{O}_{Z'/S}(-1), \bar{\pi}'^{*}\mathbf{D}^\mathrm{b}(S_2)\rangle. \end{equation} This implies that \begin{equation*} \operatorname{Ext}^1_{Z'}(\bar{\pi}'^* (\det(\mathcal{E})\otimes (\mathcal{L}^\vee)^2)|_{S_2}, \mathcal{O}_{Z'/S}(-1)\otimes \bar{\pi}'^*(\mathcal{N} \otimes \mathcal{L}^\vee)|_{S_2})=0. \end{equation*} Thus, the sequence (\ref{i0fil}) splits after restricting to $Z'$, and we get (\ref{i0dec}). (2) Equation (\ref{zeqhr}) implies that there is a right exact sequence \begin{equation*} \mathcal{O}_{\mathbb{P}_S(\bar{\mathcal{E}})/S}(-2)\otimes \bar{\pi}^*\mathcal{L}^\vee \to \mathcal{O}_{\mathbb{P}_S(\bar{\mathcal{E}})} \to \mathcal{O}_Z\to 0. \end{equation*} When $S_2\neq S$, we have $\bar{q}\neq 0$, which means that the kernel of the first map is torsion. Since $S$ is integral, the kernel has to be zero. (3) By Lemma \ref{clidealdual} (1), we have \begin{align*} (\mathcal{I}_0^{\mathcal{R}_Z})^\vee & \cong \mathcal{I}_2^{\circ \mathcal{R}_Z} \otimes \det(\mathcal{R}_Z^\vee) \otimes \beta^*(\det(\mathcal{E}^\vee) \otimes \mathcal{L}^2) \\ & \cong \mathcal{I}_0^{\circ \mathcal{R}_Z} \otimes \det(\mathcal{R}_Z^\vee) \otimes \beta^*(\det(\mathcal{E}^\vee) \otimes \mathcal{L}^3).
\end{align*} Combining this with (\ref{i0fil}) and (\ref{detr}), we have \begin{equation}\label{endfil} \begin{split} & 0\to \mathcal{I}_0^{\circ \mathcal{R}_Z} \otimes \beta^*(\det(\mathcal{E}^\vee)\otimes \mathcal{L}^2) \to \mathscr{E}\kern -1pt nd(\mathcal{I}_0^{\mathcal{R}_Z}) \to \mathcal{I}_0^{\circ \mathcal{R}_Z}\otimes \det(\mathcal{R}_Z^\vee) \otimes \beta^*\mathcal{L} \to 0,\\ & 0\to \mathcal{O}_{Z/S}(-1)\otimes \beta^*(\mathcal{N}\otimes \det(\mathcal{E}^\vee)\otimes \mathcal{L}) \to \mathcal{I}_0^{\circ \mathcal{R}_Z} \otimes \beta^*(\det(\mathcal{E}^\vee)\otimes \mathcal{L}^2) \to \mathcal{O}_Z \to 0,\\ & 0 \to \mathcal{O}_Z\to \mathcal{I}_0^{\circ \mathcal{R}_Z}\otimes \det(\mathcal{R}_Z^\vee) \otimes \beta^*\mathcal{L} \to \mathcal{O}_{Z/S}(1)\otimes \beta^*(\mathcal{N}^\vee \otimes \det(\mathcal{E})\otimes \mathcal{L}^\vee) \to 0. \end{split} \end{equation} Let $k=-1,0,1$. From the short exact sequence in (2) and the facts that $\bar{\pi}_*\mathcal{O}_{\mathbb{P}_S(\bar{\mathcal{E}})/S}(k-2) =0$ and $R^{\geqslant 2}\bar{\pi}_*=0$, we obtain \begin{multline*} 0 \to \bar{\pi}_*\mathcal{O}_{\mathbb{P}_S(\bar{\mathcal{E}})/S}(k) \to \beta_*\mathcal{O}_{Z/S}(k) \to R^1 \bar{\pi}_*\mathcal{O}_{\mathbb{P}_S(\bar{\mathcal{E}})/S}(k-2) \otimes \mathcal{L}^\vee \\ \to R^1\bar{\pi}_*\mathcal{O}_{\mathbb{P}_S(\bar{\mathcal{E}})/S}(k) \to R^1\beta_*\mathcal{O}_{Z/S}(k) \to 0. \end{multline*} In addition, from $R^1\bar{\pi}_* \mathcal{O}_{\mathbb{P}_S(\bar{\mathcal{E}})/S}(k)=0$, we deduce $R^1\beta_*\mathcal{O}_{Z/S}(k)=0$. By Serre duality, we have \begin{equation*} R^1 \bar{\pi}_*\mathcal{O}_{\mathbb{P}_S(\bar{\mathcal{E}})/S}(k-2) \cong (\bar{\pi}_*\mathcal{O}_{\mathbb{P}_S(\bar{\mathcal{E}})/S}(-k))^\vee \otimes \det(\bar{\mathcal{E}}).
\end{equation*} Hence, $\beta_*\mathcal{O}_{Z/S}(k)$ are locally free of rank $2$ and \begin{equation*} \det(\beta_*\mathcal{O}_{Z/S}(k)) \cong \left\{ \begin{array}{ll} \det(\bar{\mathcal{E}}^\vee), & k=1\\ \det(\bar{\mathcal{E}})\otimes \mathcal{L}^\vee, & k=0\\ \det(\bar{\mathcal{E}})^3\otimes (\mathcal{L}^\vee)^2, & k=-1 \end{array} \right.. \end{equation*} On the other hand, we have similar long exact sequences induced by sequences (\ref{endfil}). From them, we deduce that $R^1\beta_*\mathscr{E}\kern -1pt nd(\mathcal{I}_0^{\mathcal{R}_Z})=0$ and $\beta_*\mathscr{E}\kern -1pt nd(\mathcal{I}_0^{\mathcal{R}_Z})$ is locally free of rank $8$. Note that \begin{equation*} \det(\mathcal{E}) \cong \mathcal{N} \otimes \mathscr{H}\kern -2pt om(\mathcal{N},\mathcal{L}) \otimes \det(\bar{\mathcal{E}}) \cong \det(\bar{\mathcal{E}}) \otimes \mathcal{L}. \end{equation*} Then \begin{equation*} \det(\beta_*\mathscr{E}\kern -1pt nd(\mathcal{I}_0^{\mathcal{R}_Z})) \cong \det(\mathcal{E})^4 \otimes (\mathcal{L}^\vee)^8. \end{equation*} Lastly, $\mathcal{B}_0$ has a filtration (\ref{b01fil}) with factors $\mathcal{O}_S, \Lambda^2\mathcal{E} \otimes \mathcal{L}^\vee, \det(\mathcal{E}) \otimes (\mathcal{L}^\vee)^2$. Since $\det(\Lambda^2 \mathcal{E}) \cong \det(\mathcal{E})^3$, we have \begin{align*} \det(\mathcal{B}_0) & \cong \det(\Lambda^2\mathcal{E} \otimes \mathcal{L}^\vee) \otimes \det(\mathcal{E}) \otimes (\mathcal{L}^\vee)^2\\ & \cong \det(\mathcal{E})^4 \otimes (\mathcal{L}^\vee)^8\\ & \cong \det(\beta_*\mathscr{E}\kern -1pt nd(\mathcal{I}_0^{\mathcal{R}_Z})). \end{align*} \end{proof} \begin{cor}\label{genb0cor} Assume that the flat quadric surface bundle $p\colon \mathcal{Q}\to S$ has a smooth section and $S_2\neq S$. Then $\mathcal{B}_0 \cong R\beta_* \mathscr{E}\kern -1pt nd(\mathcal{I}_n^{\mathcal{R}_Z}) \cong \beta_* \mathscr{E}\kern -1pt nd(\mathcal{I}_n^{\mathcal{R}_Z})$ as sheaves of algebras for all $n\in \mathbb{Z}$.
\end{cor} \begin{proof} By Lemma \ref{mulisolem} (2), it suffices to prove it for $n=0$. The left $\beta^*\mathcal{B}_0$-module structure of $\mathcal{I}_0^{\mathcal{R}_Z}$ gives \begin{equation}\label{endmodmap} f\colon \beta^*\mathcal{B}_0 \cong L\beta^*\mathcal{B}_0 \to \mathscr{E}\kern -1pt nd(\mathcal{I}_0^{\mathcal{R}_Z}) \end{equation} and it induces \begin{equation*} g\colon \mathcal{B}_0\to R\beta_*\mathscr{E}\kern -1pt nd(\mathcal{I}_0^{\mathcal{R}_Z}). \end{equation*} We only need to show that $g$ is locally an isomorphism. Locally, we have \begin{equation*} \mathcal{E} \cong (\mathcal{N}\oplus \mathscr{H}\kern -2pt om(\mathcal{N},\mathcal{L})) \perp \bar{\mathcal{E}}, \end{equation*} where $\perp$ is the orthogonal sum of quadratic forms and the first quadratic form is given by the evaluation map $\operatorname{ev}_{\mathcal{N}}\colon \mathcal{N}\oplus \mathscr{H}\kern -2pt om(\mathcal{N},\mathcal{L})\to \mathcal{L}$. Denote by $\mathcal{B}^{\mathcal{N}}\cong \bigoplus_{n\in \mathbb{Z}} \mathcal{B}_n^{\mathcal{N}}$ and $\bar{\mathcal{B}} \cong \bigoplus_{n\in \mathbb{Z}} \bar{\mathcal{B}}_n$ the generalized Clifford algebras of $\operatorname{ev}_{\mathcal{N}}$ and $\bar{q}$, respectively. Then we have \begin{equation*} \mathcal{B}_0\cong \mathcal{B}_0^{\mathcal{N}}\otimes \bar{\mathcal{B}}_0\oplus \mathcal{B}_1^{\mathcal{N}}\otimes \bar{\mathcal{B}}_1 \otimes \mathcal{L}^\vee\cong \bar{\mathcal{B}}_0\oplus \bar{\mathcal{B}}_0 \oplus \mathcal{N}\otimes \mathcal{L}^\vee \otimes \bar{\mathcal{B}}_1 \oplus \mathcal{N}^\vee\otimes \bar{\mathcal{B}}_1. \end{equation*} In addition, locally $\mathcal{R}_Z \cong \beta^*\mathcal{N}\oplus \mathcal{O}_{Z/S}(-1)$ and \begin{equation*} \mathcal{I}_0^{\mathcal{R}_Z} \cong \mathcal{I}_0^{\mathcal{O}_{Z/S}(-1)} \oplus \beta^*(\mathcal{N}\otimes\mathcal{L}^\vee)\otimes \mathcal{I}_1^{\mathcal{O}_{Z/S}(-1)}.
\end{equation*} The left $\beta^*\mathcal{B}_0$-module structure of $\mathcal{I}_0^{\mathcal{R}_Z}$ can be seen by writing $\mathcal{B}_0$ and $\mathcal{I}_0^{\mathcal{R}_Z}$ in block matrices \begin{equation*} \mathcal{B}_0=\left( \begin{array}{cc} \bar{\mathcal{B}}_0 & \mathcal{N}^\vee\otimes \bar{\mathcal{B}}_1 \\ \mathcal{N}\otimes \mathcal{L}^\vee \otimes \bar{\mathcal{B}}_1 & \bar{\mathcal{B}}_0 \end{array} \right), \quad \mathcal{I}_0^{\mathcal{R}_Z}=\left( \begin{array}{c} \mathcal{I}_0^{\mathcal{O}_{Z/S}(-1)}\\ \beta^*(\mathcal{N}\otimes\mathcal{L}^\vee)\otimes \mathcal{I}_1^{\mathcal{O}_{Z/S}(-1)} \end{array} \right). \end{equation*} By Lemma \ref{dim0lem}, we have $\bar{\mathcal{B}}_{n-m}\cong R\beta_*\mathscr{H}\kern -2pt om(\mathcal{I}_m^{\mathcal{O}_{Z/S}(-1)}, \mathcal{I}_n^{\mathcal{O}_{Z/S}(-1)})$. This implies that $g$ is an isomorphism. \end{proof} \begin{remark} The corollary has an easier proof if $S_2\subset S$ has codimension at least $2$ or $S$ is proper and integral. The proof goes as follows. By Lemma \ref{genb0lem} (3), we have that $R\beta_*\mathscr{E}\kern -1pt nd(\mathcal{I}_0^{\mathcal{R}_Z}) \cong \beta_*\mathscr{E}\kern -1pt nd(\mathcal{I}_0^{\mathcal{R}_Z})$ is locally free and $\det(\mathcal{B}_0) \cong \det(\beta_*\mathscr{E}\kern -1pt nd(\mathcal{I}_0^{\mathcal{R}_Z}))$. Hence, $\det(g)\in \Gamma(S,\mathcal{O}_S)$. By Lemma \ref{sdb0lem} (2), the map $g$ induced in the proof of the previous corollary is an isomorphism on $S\backslash S_2$ and thus $\det(g)|_{S\backslash S_2}$ is a unit. If $S$ is proper and integral, then $\det(g)$ is a non-zero constant and $g$ is an isomorphism on $S$. On the other hand, $\{\det(g)=0\}\subset S$ is either empty or has codimension $1$, so $g$ is an isomorphism on $S$ if $S_2\subset S$ has codimension at least $2$. \end{remark} We give an explicit description of the map $\beta\colon Z\to S$ for the universal quadric surface bundle with a smooth section.
\begin{ex}\label{univqsb} The universal family $\mathcal{Q}$ of quadric surface bundles with a smooth section is parametrized by $S \cong \mathbb{A}^3 \cong \operatorname{Spec}(\Bbbk[a,b,c])$ and the quadratic form is \begin{equation*} q(x)=x_1x_2+ ax_3^2 +bx_3x_4 +c x_4^2. \end{equation*} The smooth section is given by $\{x_2=x_3=x_4=0\}$ or $\{x_1=x_3=x_4=0\}$. The hyperbolic reduction with respect to either smooth section is $\bar{q}=ax_3^2 +bx_3x_4 +c x_4^2$. Then \begin{equation*} Z \cong \operatorname{Proj} (\frac{\Bbbk[a,b,c,x_3, x_4]}{ax_3^2 +bx_3x_4 +c x_4^2}), \quad \widetilde{S} \cong \operatorname{Spec}(\frac{\Bbbk[a,b,c,d]}{d^2-b^2+4ac}), \end{equation*} where $Z$ is the hyperbolic reduction as well as the scheme parametrizing lines that intersect the smooth section, and $\widetilde{S}$ is the double cover over $\mathbb{A}^3$. Let $\sigma$ be the involution of the double cover $\widetilde{S}\to \mathbb{A}^3$. Then there is a factorization \begin{equation*} \beta\colon Z\xrightarrow{h} \widetilde{S} \to S \cong \mathbb{A}^3 \end{equation*} where $h$ and $\sigma \circ h$ are the two minimal resolutions of the affine nodal quadric threefold $\widetilde{S}$. In addition, $S_2\cong \{a=b=c=0\}$ is the origin and has codimension $3$ in $S\cong \mathbb{A}^3$. \end{ex} \section{Quadric surface bundles with a smooth section}\label{smoothsec} Let $p\colon \mathcal{Q}\to S$ be a flat quadric surface bundle with a smooth section and let $\mathcal{A}_{\mathcal{Q}}$ be its residual category. In this section, we prove that $\mathcal{A}_{\mathcal{Q}}$ is geometric. We give two proofs: the easier one in Theorem \ref{main1} and the harder one in Theorem \ref{main2}. The harder proof in addition gives an explicit description of the Fourier-Mukai kernels of the embedding functors $\Psi_n\colon \mathcal{A}_{\mathcal{Q}}\to \mathbf{D}^\mathrm{b}(\mathcal{Q})$.
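Two assertions made in Example \ref{univqsb} can be checked directly; the following remark is a quick sketch of the verification, using the coordinates of that example and $\operatorname{char}(\Bbbk)\neq 2$.

\begin{remark} In Example \ref{univqsb}, the Gram matrix of $q$ is the orthogonal sum of a hyperbolic block in $x_1, x_2$ and \begin{equation*} \operatorname{Gram}(\bar{q})= \begin{pmatrix} a & b/2 \\ b/2 & c \end{pmatrix}, \end{equation*} so $q$ has corank at least $2$ exactly when $\operatorname{Gram}(\bar{q})=0$, i.e., when $a=b=c=0$; this confirms that $S_2$ is the origin. Moreover, for $f=d^2-b^2+4ac$ the gradient $(\partial f/\partial a, \partial f/\partial b, \partial f/\partial c, \partial f/\partial d)=(4c,\,-2b,\,4a,\,2d)$ vanishes only at the origin, and $f$ is a nondegenerate quadratic form in the four variables $a,b,c,d$, so $\widetilde{S}$ is singular exactly at the origin, with an ordinary double point there. \end{remark}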
Firstly, we provide a mutation result for derived categories of not necessarily smooth schemes. Recall that a morphism is \textit{perfect} if it is pseudo-coherent and has finite Tor-dimension. A proper \textit{local complete intersection} morphism (one that locally factors as a Koszul-regular closed immersion followed by a smooth morphism) is perfect and it has \textit{invertible} relative dualizing complex (a degree shift of a line bundle); see Example 3.2 in \cite{jquot-1}. \begin{lemma}\label{mulem} Let $f\colon X\to S$ be a proper and perfect morphism of noetherian schemes. Denote by $\omega_f=f^!(\mathcal{O}_S)$ the relative dualizing complex. Assume $\omega_f$ is invertible, e.g., when $f$ is a Gorenstein or a proper local complete intersection morphism. Denote by $S_{X/S}= -\otimes \omega_f$ the equivalence functor on $\mathbf{D}^\mathrm{b}(X)$ (this is the relative Serre functor on $\mathbf{D}^{\operatorname{perf}}(X)$). Then $f^! \cong S_{X/S}\circ Lf^*$ on $\mathbf{D}^\mathrm{b}(X)$. (1) Assume there is an $S$-linear semiorthogonal decomposition \begin{equation}\label{ssod} \mathbf{D}^\mathrm{b}(X) =\langle \mathcal{A}_1, \mathcal{A}_2 \rangle \end{equation} where $S$-linear means that $\mathcal{F}\otimes f^*\mathcal{G} \in \mathcal{A}_i$ for every $\mathcal{F}\in \mathcal{A}_i$, $\mathcal{G}\in \mathbf{D}^{\operatorname{perf}}(S)$, the derived category of perfect complexes, and $i=1,2$. Assume that $\mathcal{A}_1$ or $\mathcal{A}_2$ is equivalent to $Lf^*\mathbf{D}^\mathrm{b}(S)\otimes T$ where $T\in \mathbf{D}^{\operatorname{perf}}(X)$ is relative exceptional, i.e., $Rf_*R\mathscr{H}\kern -2pt om_{\mathcal{O}_X}(T, T)\cong \mathcal{O}_S$.
Then both $\mathcal{A}_1$ and $\mathcal{A}_2$ are admissible, i.e., inclusion functors have both left and right adjoints, and there is an $S$-linear semiorthogonal decomposition \begin{equation}\label{ssodmu} \mathbf{D}^\mathrm{b}(X) =\langle \mathcal{A}_2\otimes \omega_f, \mathcal{A}_1 \rangle. \end{equation} (2) More generally, assume that there is an $S$-linear semiorthogonal decomposition \begin{equation*} \mathbf{D}^\mathrm{b}(X) =\langle \mathcal{A}_1, \dots, \mathcal{A}_n \rangle. \end{equation*} Assume that there exists some $i_0$ such that for all $i\neq i_0$, we have $\mathcal{A}_i\cong Lf^*\mathbf{D}^\mathrm{b}(S)\otimes T_i$ for relative exceptional $T_i\in \mathbf{D}^{\operatorname{perf}}(X)$. Then each $\mathcal{A}_i$, $1\leqslant i\leqslant n$, is admissible and there is an $S$-linear semiorthogonal decomposition \begin{equation}\label{nssodmu} \mathbf{D}^\mathrm{b}(X) =\langle \mathcal{A}_n\otimes \omega_f, \mathcal{A}_1, \dots, \mathcal{A}_{n-1} \rangle. \end{equation} \end{lemma} \begin{proof} The claim that $f^! \cong S_{X/S}\circ Lf^*$ is given in Theorem 3.1 (2) of \cite{jquot-1}. (1) We will make use of Lemma 2.7 in \cite{kuzbc}, which states that the pair $(\mathcal{A}_1, \mathcal{A}_2)$ is semiorthogonal, i.e., $\operatorname{Hom}_{\mathbf{D}^\mathrm{b}(X)}(\mathcal{F}_2, \mathcal{F}_1)=0$ for \textbf{all} $\mathcal{F}_i\in \mathcal{A}_i$, $i=1,2$, if and only if $Rf_* R\mathscr{H}\kern -2pt om_{\mathcal{O}_X}(\mathcal{F}_2, \mathcal{F}_1)=0$ for \textbf{all} $\mathcal{F}_i\in \mathcal{A}_i$, $i=1,2$. We assume that $\mathcal{A}_1\cong Lf^*\mathbf{D}^\mathrm{b}(S)\otimes T$ for $T\in \mathbf{D}^{\operatorname{perf}}(X)$ relative exceptional. The other case is similar.
Recall that for a full subcategory $\mathcal{A}\subset \mathbf{D}^\mathrm{b}(X)$, the \textit{right orthogonal} to $\mathcal{A}$ is \begin{equation*} \mathcal{A}^{\perp} = \{\mathcal{F}\in \mathbf{D}^\mathrm{b}(X)\,|\, \operatorname{Hom}_{\mathbf{D}^\mathrm{b}(X)}(\mathcal{K},\mathcal{F})=0, \forall \mathcal{K}\in \mathcal{A}\} \end{equation*} and the \textit{left orthogonal} to $\mathcal{A}$ is \begin{equation*} {}^{\perp}\mathcal{A} = \{\mathcal{F}\in \mathbf{D}^\mathrm{b}(X)\,|\, \operatorname{Hom}_{\mathbf{D}^\mathrm{b}(X)}(\mathcal{F}, \mathcal{K})=0, \forall \mathcal{K}\in \mathcal{A}\}. \end{equation*} From the SOD (\ref{ssod}), we get $\mathcal{A}_2\cong {}^{\perp}\mathcal{A}_1$ and $\mathcal{A}_1\cong \mathcal{A}_2^{\perp}$. The existence of the SOD (\ref{ssodmu}) is equivalent to $S_{X/S}(\mathcal{A}_2) \cong \mathcal{A}_1^{\perp}$. Firstly, we show $S_{X/S}(\mathcal{A}_2) \subset \mathcal{A}_1^{\perp}$. Let $\mathcal{F}\in \mathcal{A}_2$, $\mathcal{G}\in \mathbf{D}^\mathrm{b}(S)$. Then \begin{equation*} \begin{split} Rf_*R\mathscr{H}\kern -2pt om_{\mathcal{O}_X}(Lf^*\mathcal{G}\otimes T, S_{X/S}(\mathcal{F})) & \cong Rf_*R\mathscr{H}\kern -2pt om_{\mathcal{O}_X}(Lf^*\mathcal{G}, T^\vee \otimes S_{X/S}(\mathcal{F}))\\ & \cong R\mathscr{H}\kern -2pt om_{\mathcal{O}_S}(\mathcal{G}, Rf_*(T^\vee \otimes S_{X/S}(\mathcal{F}))). \end{split} \end{equation*} Note that $T$ is a perfect complex. The local Grothendieck-Serre duality gives \begin{equation*} Rf_*(T^\vee \otimes S_{X/S}(\mathcal{F})) \cong Rf_* R\mathscr{H}\kern -2pt om_{\mathcal{O}_X}(T, S_{X/S}(\mathcal{F})) \cong R\mathscr{H}\kern -2pt om_{\mathcal{O}_S}(Rf_* R\mathscr{H}\kern -2pt om_{\mathcal{O}_X}(\mathcal{F}, T), \mathcal{O}_S). \end{equation*} Since $\mathcal{F}\in \mathcal{A}_2\cong {}^{\perp}\mathcal{A}_1$, we have $Rf_* R\mathscr{H}\kern -2pt om_{\mathcal{O}_X}(\mathcal{F}, T)=0$. Hence, $S_{X/S}(\mathcal{A}_2) \subset \mathcal{A}_1^{\perp}$. 
Next, we show $S_{X/S}^{-1}(\mathcal{A}_1^{\perp})\subset \mathcal{A}_2 \cong {}^{\perp}\mathcal{A}_1$. Let $\mathcal{F}\in \mathcal{A}_1^{\perp}$ and $\mathcal{G}\in \mathbf{D}^\mathrm{b}(S)$. The local Grothendieck-Verdier duality gives \begin{equation*} \begin{split} Rf_*R\mathscr{H}\kern -2pt om_{\mathcal{O}_X}(S_{X/S}^{-1}(\mathcal{F}), Lf^*\mathcal{G}\otimes T) & \cong Rf_*R\mathscr{H}\kern -2pt om_{\mathcal{O}_X}(\mathcal{F}, S_{X/S}(Lf^*\mathcal{G}\otimes T))\\ & \cong Rf_*R\mathscr{H}\kern -2pt om_{\mathcal{O}_X}(\mathcal{F}, (f^!\mathcal{G})\otimes T)\\ & \cong R\mathscr{H}\kern -2pt om_{\mathcal{O}_S}(Rf_*(\mathcal{F}\otimes T^\vee), \mathcal{G}). \end{split} \end{equation*} Since $\mathcal{F}\in \mathcal{A}_1^{\perp}$, we have $Rf_*(\mathcal{F}\otimes T^\vee)=0$. Hence, $S_{X/S}^{-1}(\mathcal{A}_1^{\perp})\subset \mathcal{A}_2$. This proves $S_{X/S}(\mathcal{A}_2) \cong \mathcal{A}_1^{\perp}$. It remains to show the admissibility of $\mathcal{A}_i, i=1,2$. From the SOD (\ref{ssod}), we get that $\mathcal{A}_1$ is left admissible and $\mathcal{A}_2$ is right admissible. From the SOD (\ref{ssodmu}), we get that $\mathcal{A}_1$ is right admissible. The SOD (\ref{ssodmu}) induces a SOD $\langle \mathcal{A}_2, \mathcal{A}_1\otimes \omega_f^{-1}\rangle$ and thus $\mathcal{A}_2$ is also left admissible. (2) There are two cases. When $i_0\neq n$, let $\mathcal{A}=\langle \mathcal{A}_1, \dots, \mathcal{A}_{n-1}\rangle$; applying (1) to the SOD $\langle \mathcal{A}, \mathcal{A}_n\rangle$ gives the SOD (\ref{nssodmu}). When $i_0=n$, we can apply (1) to get a SOD \begin{equation*} \mathbf{D}^\mathrm{b}(X)=\langle \mathcal{A}_2, \dots, \mathcal{A}_n, \mathcal{A}_1\otimes \omega_f^{-1} \rangle.
\end{equation*} Applying (1) $n-2$ more times, we get \begin{equation*} \begin{split} \mathbf{D}^\mathrm{b}(X) & =\langle \mathcal{A}_n, \mathcal{A}_1\otimes \omega_f^{-1}, \dots, \mathcal{A}_{n-1}\otimes \omega_f^{-1} \rangle\\ & \cong \langle \mathcal{A}_n\otimes \omega_f, \mathcal{A}_1, \dots, \mathcal{A}_{n-1}\rangle. \end{split} \end{equation*} Each $\mathcal{A}_i$, $1\leqslant i\leqslant n$, is admissible because we can construct SODs similar to (1) with $\mathcal{A}_i$ being the leftmost or the rightmost component. \end{proof} \begin{theorem}\label{main1} Let $p\colon \mathcal{Q}\to S$ be a flat quadric surface bundle where $S$ is an integral noetherian scheme over $\Bbbk$ with $\operatorname{char}(\Bbbk)\neq 2$. Assume that $p$ has a smooth section and the second degeneration $S_2\neq S$. Then there is a semiorthogonal decomposition \begin{equation} \mathbf{D}^\mathrm{b}(\mathcal{Q}) =\langle \mathbf{D}^\mathrm{b}(\bar{\mathcal{Q}}), p^*\mathbf{D}^\mathrm{b}(S), p^*\mathbf{D}^\mathrm{b}(S) \otimes \mathcal{O}_{\mathcal{Q}/S}(1) \rangle \end{equation} where $\bar{\mathcal{Q}}$ is the hyperbolic reduction of $\mathcal{Q}$ with respect to the smooth section in Definition \ref{hypreddef}. \end{theorem} \begin{proof} Let $\mathcal{N}$ be the regular isotropic line bundle corresponding to the smooth section of $p$.
Remark 2.6 of \cite{ksgroring} provides the following picture: \begin{equation}\label{hrdiag} \begin{tikzcd} E \arrow[hook]{r}{\bar{j}} \arrow{d}[swap]{\bar{f}} & \mathcal{Q}' \arrow{d}[swap]{f} \arrow{rd}{g} & & &\\ \mathbb{P}_S(\mathcal{N}) \arrow[hook]{r}{j} & \mathcal{Q} & \mathbb{P}_S(\mathcal{E}/\mathcal{N}) \arrow[hookleftarrow]{r} & \mathbb{P}_S(\mathcal{N}^\perp/\mathcal{N}) \arrow[hookleftarrow]{r} & \bar{\mathcal{Q}} \end{tikzcd} \end{equation} where $g\circ f^{-1}\colon \mathcal{Q}\dashrightarrow \mathbb{P}_S(\mathcal{E}/\mathcal{N})$ is the relative linear projection from $\mathbb{P}_S(\mathcal{N}) \cong S$, the scheme $\bar{\mathcal{Q}}$ is the hyperbolic reduction with respect to $\mathcal{N}$, \begin{equation*} \mathcal{Q}' \cong \operatorname{Bl}_{\mathbb{P}_S(\mathcal{N})}(\mathcal{Q}) \cong \operatorname{Bl}_{\bar{\mathcal{Q}}}(\mathbb{P}_S(\mathcal{E}/\mathcal{N})), \end{equation*} the subscheme $E \cong \mathbb{P}_S(\mathcal{N}^\perp/\mathcal{N}) \subset \mathcal{Q}'$ is the exceptional locus of $f$, the map $\bar{f}=f|_E$, and maps $j, \bar{j}$ are inclusions. Let $D\subset \mathcal{Q}'$ be the exceptional locus of $g$. Let $H$ and $h$ be the relative hyperplane classes of $\mathcal{Q}$ and $\mathbb{P}_S(\mathcal{E}/\mathcal{N})$, respectively. Use the same notations for the pull-back classes on $\mathcal{Q}'$. In $\operatorname{Pic}(\mathcal{Q}')/f^*p^*\operatorname{Pic}(S)$, there are relations \begin{equation}\label{relofdiv} \left\{ \begin{array}{l} H = h+E \\ h = D+E \end{array} \right.. \end{equation} Let $\bar{q}\colon \mathcal{N}^\perp/\mathcal{N} \to \mathcal{L}$ be the hyperbolic reduction and recall $\bar{\mathcal{Q}}=\{\bar{q}=0\} \subset \mathbb{P}_S(\mathcal{N}^\perp/\mathcal{N})$. Since $S_2\neq S$ and $S$ is integral, we have $\bar{q}\neq 0$ and $\bar{\mathcal{Q}}\hookrightarrow \mathbb{P}_S(\mathcal{E}/\mathcal{N})$ is a regular immersion of codimension $2$.
Locally, $\mathbb{P}_S(\mathcal{N})\subset \mathcal{Q}$ is defined by $\{xy+\bar{q}=0\}$, where $x,y$ are variables for $\mathcal{N}$ and $\mathscr{H}\kern -2pt om(\mathcal{N},\mathcal{L})$, respectively. Restricting to $\{x\neq 0\}$, the smooth section is defined by $\{z=w=0\}$, where $z,w$ are variables for $\mathcal{N}^\perp/\mathcal{N}$. Thus, $\mathbb{P}_S(\mathcal{N})\hookrightarrow \mathcal{Q}$ is also a regular immersion of codimension $2$. The blow-up formulas for derived categories in Theorem 6.11 of \cite{jquot-1} can be applied to the maps $f$ and $g$. For the blow-up map $f$, there is a semiorthogonal decomposition \begin{equation}\label{blowupsod1} \mathbf{D}^\mathrm{b}(\mathcal{Q}') = \langle Lf^*\mathbf{D}^\mathrm{b}(\mathcal{Q}), \bar{j}_*L\bar{f}^*\mathbf{D}^\mathrm{b}(\mathbb{P}_S(\mathcal{N})) \rangle \cong \langle \mathbf{D}^\mathrm{b}(\mathcal{Q}), \mathbf{D}^\mathrm{b}(S) \otimes \bar{j}_*\mathcal{O}_E \rangle, \end{equation} where $\mathbf{D}^\mathrm{b}(S) \otimes \bar{j}_*\mathcal{O}_E$ denotes a subcategory that is equivalent to $\mathbf{D}^\mathrm{b}(S)$, obtained by pulling back objects in $\mathbf{D}^\mathrm{b}(S)$ via the map $f'^*$ and then tensoring with $\bar{j}_*\mathcal{O}_E$. Here $f'=p\circ f\colon \mathcal{Q}'\to S$ is the composition map and $f'$ is flat. The equivalence of the second components in the SOD (\ref{blowupsod1}) comes from the observation that $p\circ j\colon \mathbb{P}_S(\mathcal{N})\to S$ is an isomorphism and we have \begin{equation*} \bar{f}\cong p\circ j\circ\bar{f} \cong p\circ f\circ \bar{j} \cong f'\circ \bar{j}. \end{equation*} Similarly for the blow-up map $g$, there is a semiorthogonal decomposition \begin{equation}\label{blowupsod2} \mathbf{D}^\mathrm{b}(\mathcal{Q}') = \langle \mathbf{D}^\mathrm{b}(\bar{\mathcal{Q}}), \mathbf{D}^\mathrm{b}(\mathbb{P}_S(\mathcal{E}/\mathcal{N})) \rangle.
\end{equation} There is a semiorthogonal decomposition \begin{equation*} \mathbf{D}^\mathrm{b}(\mathcal{Q})= \langle \mathcal{A}_{\mathcal{Q}}, p^*\mathbf{D}^\mathrm{b}(S), p^*\mathbf{D}^\mathrm{b}(S)\otimes \mathcal{O}(H)\rangle \cong \langle \mathcal{A}_{\mathcal{Q}}, \mathbf{D}^\mathrm{b}(S)\otimes \mathcal{O}_{\mathcal{Q}}, \mathbf{D}^\mathrm{b}(S)\otimes \mathcal{O}(H)\rangle \end{equation*} where $\mathcal{A}_{\mathcal{Q}}$ is the residual category and for the equivalence of components, similar notations are used as in the SOD (\ref{blowupsod1}). Therefore, the SOD (\ref{blowupsod1}) can be expanded as \begin{align} \mathbf{D}^\mathrm{b}(\mathcal{Q}') & = \langle \mathcal{A}_{\mathcal{Q}}, \mathbf{D}^\mathrm{b}(S)\otimes \mathcal{O}_{\mathcal{Q}'}, \mathbf{D}^\mathrm{b}(S)\otimes \mathcal{O}(H), \mathbf{D}^\mathrm{b}(S)\otimes \bar{j}_*\mathcal{O}_E\rangle \\ & \cong \langle \mathcal{A}_{\mathcal{Q}}, \mathbf{D}^\mathrm{b}(S)\otimes \mathcal{O}_{\mathcal{Q}'}, \mathbf{D}^\mathrm{b}(S)\otimes \mathcal{O}(H-E), \mathbf{D}^\mathrm{b}(S)\otimes \mathcal{O}(H)\rangle \label{mut1}\\ & \cong \langle \mathbf{D}^\mathrm{b}(S)\otimes \mathcal{O}(H-3h+D), \mathcal{A}_{\mathcal{Q}}, \mathbf{D}^\mathrm{b}(S)\otimes \mathcal{O}_{\mathcal{Q}'}, \mathbf{D}^\mathrm{b}(S)\otimes \mathcal{O}(h) \rangle \label{mut2}\\ & \cong \langle \mathcal{A}_{\mathcal{Q}}, \mathbf{D}^\mathrm{b}(S)\otimes \mathcal{O}(-h), \mathbf{D}^\mathrm{b}(S)\otimes \mathcal{O}_{\mathcal{Q}'}, \mathbf{D}^\mathrm{b}(S)\otimes \mathcal{O}(h) \rangle \label{mut3}\\ & \cong \langle \mathcal{A}_{\mathcal{Q}}, \mathbf{D}^\mathrm{b}(\mathbb{P}_S(\mathcal{E}/\mathcal{N})) \rangle. \label{mut4} \end{align} The equivalences above are obtained by mutations. Denote by $\mathbb{L}_T$ and $\mathbb{R}_T$, respectively, the left and right mutation functors through $\mathbf{D}^\mathrm{b}(S)\otimes T$ when $T$ is a relative exceptional object over $S$ (cf. \S 3.11 in \cite{jquot-1}).
Note that up to divisors of $S$, we have $H-E=h$, $H-3h+D=-h$ by relations (\ref{relofdiv}) and the relative canonical divisor $K_{\mathcal{Q}'/S}=-3h+D$. In addition, up to a degree shift the relative dualizing complex $\omega_{f'}\cong \mathcal{O}(K_{\mathcal{Q}'/S})$. (\ref{mut1}) applies $\mathbb{L}_{\mathcal{O}(H)}$ to $\mathbf{D}^\mathrm{b}(S) \otimes \bar{j}_*\mathcal{O}_E$; (\ref{mut2}) applies $-\otimes \omega_{f'}$ to $\mathbf{D}^\mathrm{b}(S)\otimes \mathcal{O}(H)$, which uses Lemma \ref{mulem} (2); and (\ref{mut3}) applies $\mathbb{L}_{\mathcal{O}(-h)}$ to $\mathcal{A}_{\mathcal{Q}}$. Comparing SODs (\ref{mut4}) and (\ref{blowupsod2}), we have $\mathcal{A}_{\mathcal{Q}}\cong \mathbf{D}^\mathrm{b}(\bar{\mathcal{Q}})$. \end{proof} Since $\mathcal{A}_{\mathcal{Q}}\cong \mathbf{D}^\mathrm{b}(\bar{\mathcal{Q}})$ is obtained by mutations, we lose information on the embedding functor $\mathcal{A}_{\mathcal{Q}}\to \mathbf{D}^\mathrm{b}(\mathcal{Q})$. To remedy this problem, we focus on working with $\mathcal{A}_{\mathcal{Q}}$ by itself and show that $\mathcal{A}_{\mathcal{Q}}$ is geometric using Corollary \ref{genb0cor} and the known description of $\mathcal{A}_{\mathcal{Q}}$ below.
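Before moving on, we record for the reader's convenience the divisor arithmetic used in the mutations in the proof of Theorem \ref{main1}. Modulo divisors pulled back from $S$, the relations (\ref{relofdiv}) give $D=h-E$, and hence \begin{align*} H-E &= (h+E)-E = h,\\ H-3h+D &= (h+E)-3h+(h-E) = -h. \end{align*} In particular, tensoring $\mathbf{D}^\mathrm{b}(S)\otimes \mathcal{O}(H)$ by $\omega_{f'}$ (which is $\mathcal{O}(-3h+D)$ up to a degree shift) indeed produces $\mathbf{D}^\mathrm{b}(S)\otimes \mathcal{O}(-h)$, as used in (\ref{mut2}).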
Theorem 4.2 in \cite{kuzqfib} ($\Bbbk$ algebraically closed and $\operatorname{char}(\Bbbk)=0$) and Theorem 2.2.1 in \cite{abbqfib} (arbitrary field $\Bbbk$) state that for all $n\in\mathbb{Z}$, there are semiorthogonal decompositions \begin{equation}\label{kuzsod} \mathbf{D}^\mathrm{b}(\mathcal{Q}) =\langle \Phi_n(\mathbf{D}^\mathrm{b}(S,\mathcal{B}_0)), p^*\mathbf{D}^\mathrm{b}(S)\otimes \mathcal{O}_{\mathcal{Q}/S}(1), p^*\mathbf{D}^\mathrm{b}(S) \otimes \mathcal{O}_{\mathcal{Q}/S}(2) \rangle, \end{equation} where \begin{equation}\label{phin} \Phi_n\colon \mathbf{D}^\mathrm{b}(S,\mathcal{B}_0) \to \mathbf{D}^\mathrm{b}(\mathcal{Q}), \quad \mathcal{F} \mapsto p^*(\mathcal{F})\otimes_{p^*\mathcal{B}_0}^{\mathbb{L}} \mathcal{K}_n \end{equation} are fully faithful functors of Fourier-Mukai type with kernels $\mathcal{K}_n$. The kernels $\mathcal{K}_n$ are left $\mathcal{B}_0$-modules constructed by \begin{equation}\label{fmkseq} 0\to \mathcal{O}_{\mathbb{P}_S(\mathcal{E})/S}(-1)\otimes \pi^*\mathcal{B}_{n-1} \xrightarrow{\phi_n^\circ} \pi^*\mathcal{B}_{n} \to i_* \mathcal{K}_n \to 0, \end{equation} where $\pi\colon \mathbb{P}_S(\mathcal{E})\to S$ is the projection, $i\colon \mathcal{Q}\hookrightarrow \mathbb{P}_S(\mathcal{E})$ is the embedding and $\phi_n^\circ$ is the map defined in Remark \ref{spinordef2}. We can regard $\mathcal{K}_n$ as the $n$-th spinor sheaf with respect to the zero isotropic subbundle. We use the same notations as in Section \ref{hilbsec} and consider $Z\subset M$ the subscheme parametrizing lines in the fibers of $p\colon \mathcal{Q}\to S$ that intersect the smooth section.
Note that $\beta\colon Z\to S$ together with $f\colon \beta^*\mathcal{B}_0\to \mathscr{E}\kern -1pt nd(\mathcal{I}_0^{\mathcal{R}_Z})$ from (\ref{endmodmap}) give a morphism \begin{equation}\label{ncmor} \gamma=(\beta,f)\colon (Z, \mathscr{E}\kern -1pt nd(\mathcal{I}_0^{\mathcal{R}_Z})) \to (S,\mathcal{B}_0) \end{equation} of non-commutative schemes as in Definition \ref{ncdef}. \begin{prop}\label{nceqprop} $R\gamma_*\colon \mathbf{D}^*(Z,\mathscr{E}\kern -1pt nd(\mathcal{I}_0^{\mathcal{R}_Z}))\to \mathbf{D}^*(S,\mathcal{B}_0)$ is an equivalence for $*=-,\mathrm{b}$. \end{prop} \begin{proof} Consider functors \begin{equation*} R\gamma_*\colon \mathbf{D}^-(Z,\mathscr{E}\kern -1pt nd(\mathcal{I}_0^{\mathcal{R}_Z}))\to \mathbf{D}^-(S,\mathcal{B}_0), \quad L\gamma^*\colon \mathbf{D}^-(S,\mathcal{B}_0) \to \mathbf{D}^-(Z,\mathscr{E}\kern -1pt nd(\mathcal{I}_0^{\mathcal{R}_Z})) \end{equation*} defined in Appendix \ref{ncschsec}. Firstly, we show $R\gamma_*$ and $L\gamma^*$ are inverse functors. Corollary \ref{genb0cor} indicates that $\mathcal{B}_0\cong R\gamma_*\mathscr{E}\kern -1pt nd(\mathcal{I}_0^{\mathcal{R}_Z})$. Then $R\gamma_* L\gamma^*$ is the identity by the projection formula (Proposition \ref{projf}). Conversely, we claim that $R\gamma_*\mathcal{F}=0$ implies $\mathcal{F}=0$ for all $\mathcal{F}\in\mathbf{D}^-(Z, \mathscr{E}\kern -1pt nd(\mathcal{I}_0^{\mathcal{R}_Z}))$. Denote by $\mathcal{H}^i$ the $i$-th cohomology sheaf. Using the spectral sequence $R^i\gamma_*\mathcal{H}^j(\mathcal{F})\Rightarrow R^{i+j}\gamma_* \mathcal{F}$, we can reduce to the case that $\mathcal{F}\in \operatorname{Coh}(Z, \mathscr{E}\kern -1pt nd(\mathcal{I}_0^{\mathcal{R}_Z}))$. Adopt the notations in Lemma \ref{genb0lem} (1) and let $j\colon Z'\hookrightarrow Z$ be the inclusion.
Since $\beta|_{Z\backslash Z'}$ is finite, the condition $R\gamma_*\mathcal{F}=0$ implies that $\mathcal{F}$ is supported on $Z'\cong \mathbb{P}_{S_2}(\bar{\mathcal{E}_2})$. If additionally we can prove that $j^*\mathcal{F}=0$, then $\mathcal{F}=0$. Let $\mathcal{F}_1= j^*\mathcal{F} \in \operatorname{Coh}(Z',\mathscr{E}\kern -1pt nd(\mathcal{I}_0^{\mathcal{R}_{Z'}}))$. Since there is an equivalence \begin{equation*} -\otimes (\mathcal{I}_0^{\mathcal{R}_{Z'}})^\vee\colon \operatorname{Coh}(Z')\xrightarrow{\cong} \operatorname{Coh}(Z',\mathscr{E}\kern -1pt nd(\mathcal{I}_0^{\mathcal{R}_{Z'}})), \end{equation*} we have $\mathcal{F}_1 \cong \mathcal{G} \otimes (\mathcal{I}_0^{\mathcal{R}_{Z'}})^\vee$ for some $\mathcal{G}\in \operatorname{Coh}(Z')$. From the decomposition (\ref{i0dec}), we deduce that \begin{equation*} \mathcal{G} \otimes (\mathcal{I}_0^{\mathcal{R}_{Z'}})^\vee \cong \mathcal{G}\otimes \bar{\pi}'^* \left( (\det(\mathcal{E}^\vee)\otimes \mathcal{L}^2)|_{S_2}\right) \oplus \mathcal{G}(1) \otimes \bar{\pi}'^*\left( (\mathcal{N}^\vee \otimes \mathcal{L})|_{S_2}\right). \end{equation*} Note that $\beta\colon Z\to S$ is a proper morphism with fibers of dimension at most $1$ and $\bar{\pi}'\colon Z'\to S_2$ is the base change of $\beta$ along the inclusion $S_2\hookrightarrow S$. Together with $R\gamma_*\mathcal{F}\cong R\beta_* \mathcal{F}=0$, Proposition 2.11 in \cite{bbflop} implies that $R\bar{\pi}'_*\mathcal{F}_1=0$. This means that $R\bar{\pi}'_*\mathcal{G}=0$ and $R\bar{\pi}'_*\mathcal{G}(1)=0$, or equivalently \begin{equation*} \operatorname{Hom}_{\mathbf{D}^\mathrm{b}(Z')}(\bar{\pi}'^*\mathcal{K}, \mathcal{G})=0, \quad \operatorname{Hom}_{\mathbf{D}^\mathrm{b}(Z')}((\bar{\pi}'^*\mathcal{K})(-1), \mathcal{G})=0 \end{equation*} for all $\mathcal{K}\in \mathbf{D}^\mathrm{b}(Z')$.
The SOD (\ref{sodz'}) \begin{equation*} \mathbf{D}^\mathrm{b}(Z')=\langle \bar{\pi}'^{*}\mathbf{D}^\mathrm{b}(S_2)\otimes \mathcal{O}_{Z'/S}(-1), \bar{\pi}'^{*}\mathbf{D}^\mathrm{b}(S_2)\rangle \end{equation*} implies that $\mathcal{G}=0$. Thus, $\mathcal{F}_1=0$ and $\mathcal{F}=0$. For every $\mathcal{F}\in\mathbf{D}^-(Z,\mathscr{E}\kern -1pt nd(\mathcal{I}_0^{\mathcal{R}_Z}))$, consider the exact triangle \begin{equation*} L\gamma^* R\gamma_* \mathcal{F} \to \mathcal{F} \to \mathcal{K} \end{equation*} where $\mathcal{K}$ is the cone of the first map. Applying $R\gamma_*$ to the exact triangle, we get $R\gamma_*\mathcal{K}=0$. Hence, $\mathcal{K}=0$ and $L\gamma^* R\gamma_*$ is the identity. The equivalence on $\mathbf{D}^-$ implies that \begin{equation*} R\gamma_*\colon \mathbf{D}^\mathrm{b}(Z,\mathscr{E}\kern -1pt nd(\mathcal{I}_0^{\mathcal{R}_Z}))\to \mathbf{D}^\mathrm{b}(S,\mathcal{B}_0) \end{equation*} is fully faithful. Recall that $R\gamma_* L\gamma^*\cong \operatorname{id}$. From the proofs of Lemma 2.4 and Corollary 2.5 in \cite{kuzdp6}, we get that $R\gamma_*$ is also essentially surjective on $\mathbf{D}^\mathrm{b}$. Thus, $R\gamma_*$ is an equivalence on $\mathbf{D}^\mathrm{b}$. \end{proof} \begin{theorem}\label{main2} Let $p\colon \mathcal{Q}\to S$ be a flat quadric surface bundle where $S$ is an integral noetherian scheme over $\Bbbk$ with $\operatorname{char}(\Bbbk)\neq 2$. Assume that $p$ has a smooth section and the second degeneration $S_2\neq S$.
Then for all $n\in \mathbb{Z}$ there are semiorthogonal decompositions \begin{equation} \mathbf{D}^\mathrm{b}(\mathcal{Q}) =\langle \Psi_n(\mathbf{D}^\mathrm{b}(Z)), p^*\mathbf{D}^\mathrm{b}(S)\otimes \mathcal{O}_{\mathcal{Q}/S}(1), p^*\mathbf{D}^\mathrm{b}(S) \otimes \mathcal{O}_{\mathcal{Q}/S}(2) \rangle, \end{equation} where $Z$ is the scheme over $S$ parametrizing lines in the fibers of $p$ that intersect the smooth section and $\Psi_n\colon \mathbf{D}^\mathrm{b}(Z) \to \mathbf{D}^\mathrm{b}(\mathcal{Q})$ are fully faithful functors of Fourier-Mukai type. In addition, $Z$ is isomorphic to the hyperbolic reduction $\bar{\mathcal{Q}}$ in Theorem \ref{main1}. More specifically, let $\mathbb{P}_Z(\mathcal{R}_Z) \subset \mathcal{Q} \times_S Z$ be the universal family of lines that $Z$ parametrizes. Then the kernel of $\Psi_n$ is $\mathcal{S}_n^{\mathcal{R}_Z}$, the $n$-th spinor sheaf with respect to the isotropic subbundle $\mathcal{R}_Z$ in Definition \ref{spinordef}. \end{theorem} \begin{proof} For simplicity, denote by $\otimes, g_*, g^*$ the derived tensor product and the derived push-forward and pull-back functors of a map $g$, respectively. The isomorphism $Z\cong \bar{\mathcal{Q}}$ was noted in (\ref{zeqhr}). Consider the composition \begin{equation}\label{psin} \Psi_n\colon \mathbf{D}^\mathrm{b}(Z) \xrightarrow[\cong]{-\otimes \mathcal{I}_0^{\circ \mathcal{R}_Z}} \mathbf{D}^\mathrm{b}(Z, \mathscr{E}\kern -1pt nd(\mathcal{I}_0^{\mathcal{R}_Z})) \xrightarrow[\cong]{\gamma_*} \mathbf{D}^\mathrm{b}(S,\mathcal{B}_0) \xrightarrow{\Phi_n} \mathbf{D}^\mathrm{b}(\mathcal{Q}), \end{equation} where $\Phi_n$ is defined in (\ref{phin}) and $\gamma=(\beta,f)\colon (Z, \mathscr{E}\kern -1pt nd(\mathcal{I}_0^{\mathcal{R}_Z})) \to (S,\mathcal{B}_0)$ in (\ref{ncmor}).
From Lemma \ref{clidealdual}, we get \begin{equation*} \mathcal{I}_0^{\circ \mathcal{R}_Z} \cong (\mathcal{I}_0^{\mathcal{R}_Z})^\vee \otimes \det(\mathcal{R}_Z)\otimes \beta^*(\det(\mathcal{E})\otimes (\mathcal{L}^\vee)^3). \end{equation*} Thus, the first functor in (\ref{psin}) is an equivalence because $-\otimes (\mathcal{I}_0^{\mathcal{R}_Z})^\vee$ is one. From Proposition \ref{nceqprop}, we get that $\gamma_*$ is also an equivalence. Thus, $\Psi_n$ is fully faithful and for every $\mathcal{F}\in\mathbf{D}^\mathrm{b}(Z)$, \begin{equation*} \Psi_n(\mathcal{F}) = p^*\beta_*(\mathcal{F}\otimes \mathcal{I}_0^{\circ \mathcal{R}_Z})\otimes_{p^*\mathcal{B}_0} \mathcal{K}_n. \end{equation*} Consider the Cartesian squares \begin{equation*} \begin{tikzcd} \mathcal{Q}\times_S Z \arrow[hook]{r}{i_Z} \arrow{d}{\beta_{\mathcal{Q}}} & \mathbb{P}_S(\mathcal{E})\times_S Z \arrow{r}{\pi_Z} \arrow{d}{\beta_{\mathcal{E}}} & Z\arrow{d}{\beta}\\ \mathcal{Q} \arrow[hook]{r}{i} & \mathbb{P}_S(\mathcal{E}) \arrow{r}{\pi} & S. \end{tikzcd} \end{equation*} Recall that $p=\pi\circ i$ and denote $p_Z=\pi_{Z}\circ i_Z$. Let $\theta=\pi\circ \beta_{\mathcal{E}} = \beta\circ \pi_Z$. Since $\pi$ is flat, the right Cartesian square is exact. We have \begin{align*} i_*\Psi_n(\mathcal{F}) & \cong \pi^*\beta_*(\mathcal{F}\otimes \mathcal{I}_0^{\circ \mathcal{R}_Z})\otimes_{\pi^*\mathcal{B}_0} i_*\mathcal{K}_n\\ & \cong \beta_{\mathcal{E}*}\pi_Z^*(\mathcal{F}\otimes \mathcal{I}_0^{\circ \mathcal{R}_Z})\otimes_{\pi^*\mathcal{B}_0} i_*\mathcal{K}_n\\ & \cong \beta_{\mathcal{E}*} \left( \pi_Z^*(\mathcal{F}\otimes \mathcal{I}_0^{\circ \mathcal{R}_Z})\otimes_{\theta^*\mathcal{B}_0} \beta_{\mathcal{E}}^*i_*\mathcal{K}_n \right).
\end{align*} On the other hand, applying $\pi_Z^*\mathcal{I}_0^{\circ \mathcal{R}_Z}\otimes_{\theta^*\mathcal{B}_0} \beta_{\mathcal{E}}^*(-)$ to the sequence (\ref{fmkseq}) and using the isomorphism from Lemma \ref{mulisolem} (1), we have \begin{equation*} 0 \to \beta_{\mathcal{E}}^*\mathcal{O}_{\mathbb{P}_S(\mathcal{E})/S}(-1)\otimes \pi_Z^*\mathcal{I}_{n-1}^{\circ \mathcal{R}_Z} \xrightarrow{\phi_n^{\circ}} \pi_Z^* \mathcal{I}_{n}^{\circ \mathcal{R}_Z} \to \pi_Z^*(\mathcal{I}_0^{\circ \mathcal{R}_Z})\otimes_{\theta^*\mathcal{B}_0}\beta_{\mathcal{E}}^* (i_* \mathcal{K}_n) \to 0. \end{equation*} From Definition \ref{spinordef}, we get that the last term is $i_{Z*}\mathcal{S}_n^{\mathcal{R}_Z}$. Then we have \begin{align*} i_*\Psi_n(\mathcal{F}) & \cong \beta_{\mathcal{E}*}(\pi_Z^*\mathcal{F} \otimes i_{Z*}\mathcal{S}_n^{\mathcal{R}_Z})\\ & \cong \beta_{\mathcal{E}*}i_{Z*}(p_Z^*\mathcal{F} \otimes \mathcal{S}_n^{\mathcal{R}_Z})\\ & \cong i_*\beta_{\mathcal{Q}*}(p_Z^*\mathcal{F} \otimes \mathcal{S}_n^{\mathcal{R}_Z}). \end{align*} Hence, $\Psi_n$ are Fourier-Mukai functors with kernels $\mathcal{S}_n^{\mathcal{R}_Z}$. \end{proof} \section{Quadric surface bundles over surfaces}\label{surfbsec} We use the same notation as in Section \ref{hilbsec}. Throughout this section, assume that $\Bbbk$ is algebraically closed and $p\colon \mathcal{Q}\to S$ is a flat quadric surface bundle (except for Lemma \ref{bealem}) where $\mathcal{Q}$ is smooth and $S$ is a smooth surface. Recall that $S_k\subset S$ is the $k$-th degeneration locus. We will show that the residual category $\mathcal{A}_{\mathcal{Q}}$ is twisted geometric by performing birational transformations on the relative Hilbert scheme of lines, following the approach of \cite{kuzline}. Recall that the relative Hilbert scheme of lines $\rho \colon M\to S$ factors as $M\xrightarrow{\tau} \widetilde{S} \xrightarrow{\alpha} S$.
From Lemma \ref{disccover} and Lemma \ref{bealem}, we get that the discriminant cover $\alpha$ in this section is a double cover ramified along $S_1$. When $p\colon \mathcal{Q}\to S$ has simple degeneration, the map $\tau\colon M\to \widetilde{S}$ is a smooth conic bundle and the Azumaya algebra $\widetilde{\mathcal{A}}$ corresponding to $\tau$ satisfies $\alpha_*\widetilde{\mathcal{A}}\cong \mathcal{B}_0$. This implies that $\mathcal{A}_{\mathcal{Q}}$ is equivalent to the twisted derived category $\mathbf{D}^\mathrm{b}(\widetilde{S}, \widetilde{\mathcal{A}})$. But when $p$ does not have simple degeneration, $\tau$ is no longer a smooth conic bundle. The main idea of this section is to modify $\tau$ so that we can still describe $\mathcal{A}_{\mathcal{Q}}$ as a twisted derived category. In the first part, we will perform birational transformations on $\tau$ and obtain a smooth conic bundle $\tau_+\colon M^+\to S^+$ in Proposition \ref{birtran}. The second part is devoted to the proof of Theorem \ref{main3}, where we show $\mathcal{A}_{\mathcal{Q}}\cong \mathbf{D}^\mathrm{b}(S^+, \mathcal{A}^+)$ for $\mathcal{A}^+$ an Azumaya algebra Brauer equivalent to the one corresponding to $\tau_+$. When the total space of a quadric bundle (not necessarily a quadric surface bundle) is smooth and the base is a smooth surface, the fibers of the quadric bundle are not too singular: \begin{lemma}[{\cite[Proposition 1.2 (iii)]{beaprym}}] \label{bealem} Let $p\colon \mathcal{Q}\to S$ be a flat quadric bundle where $\mathcal{Q}$ is smooth and $S$ is a smooth surface. Then $S_3=\emptyset$, $S_1\subset S$ is a curve with at most a finite number of ordinary double points, and the singular locus of $S_1$ is $S_2$. \end{lemma} Since $S_3=\emptyset$, for every geometric point $s\in S_2$, the geometric fiber $M_s = \Sigma_s^+\cup \Sigma_s^-$ is the union of two planes $\Sigma_s^{\pm}\cong \mathbb{P}^2$ intersecting at a point. Denote by $m_s=\Sigma_s^+\cap \Sigma_s^-$ the intersection point.
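Though not needed for the proofs, the following local sketch (stated under the simplifying assumption that $\widetilde{S}$ is cut out locally by the equation $w^2=\det(b_q)$, up to units) indicates how $\widetilde{S}$ acquires an ordinary double point over each $s\in S_2$, a fact used after Lemma \ref{mgeom}. By Lemma \ref{bealem}, $S_1=\{\det(b_q)=0\}$ has an ordinary double point at $s$, so in local coordinates $(t_1,t_2)$ at $s$ we may write
\begin{equation*}
\widetilde{S}=\{w^2=\det(b_q)\}, \qquad \det(b_q)=q_2(t_1,t_2)+(\text{terms of degree} \geqslant 3),
\end{equation*}
with $q_2$ a nondegenerate quadratic form in $t_1,t_2$. Hence, after a coordinate change, $\widetilde{S}$ is locally $\{w^2=t_1t_2\}$, i.e., it has an ordinary double point over $s$.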
The geometry of $M$ is described below. \begin{lemma}\label{mgeom} Let $p\colon \mathcal{Q}\to S$ be a flat quadric surface bundle where $\mathcal{Q}$ is smooth and $S$ is a smooth surface. Then the relative Hilbert scheme of lines $M$ has at most a finite number of ordinary double points $\{m_s\}_{s\in S_2}$. \end{lemma} \begin{proof} By Lemma \ref{bealem}, $S_2$ is a finite set of points and thus so is $\{m_s\}_{s\in S_2}$. For a point $x\in S$, denote by $K_x\subset \mathcal{E}_x$ the kernel of the quadratic form $q_x$ over the residue field of the point $x$. A point of $M$ is represented by $(x, K)$ where $x$ is a point of $S$ and $K$ is a $2$-dimensional subspace of $\mathcal{E}_x$. Proposition 2.1 in \cite{kuzline} states that $M$ is smooth at $(x,K)$ if $\dim(K\cap K_x)\leqslant 1$. Hence, $M$ is smooth away from $\{m_s\}_{s\in S_2}$. To study the singularity of $M$, we can replace $S$ by $\operatorname{Spec}(\widehat{\mathcal{O}_{S,s}})$ where $\widehat{\mathcal{O}_{S,s}}\cong \Bbbk \llbracket t_1, t_2 \rrbracket$ is the formal completion at a point $s\in S_2$. Note that all units in $\Bbbk \llbracket t_1, t_2 \rrbracket$ are squares. By Corollary 3.3 in \cite{baeqf}, the quadratic form can be written as \begin{equation*} q=ax_1^2+bx_1x_2+cx_2^2 +x_3^2+x_4^2 \end{equation*} where $a,b,c$ are contained in the ideal $(t_1,t_2)$. Recall that $M\subset S\times \operatorname{Gr}(2,4)$ is the zero locus of the section $s_M$ in (\ref{msec}). We can refer to Section 3.3 in \cite{hvav} for the explicit correspondence. Denote the variables of $\operatorname{Gr}(2,4)$ by $\{y_{ij}\}_{1\leqslant i<j\leqslant 4}$ and assume that the singular point of $M$ is $(0, [1:0:\dots :0])$. Consider the open neighborhood $U:=S\times \{y_{12}=1\}$ of the singular point.
Then $M\cap U$ is defined by \begin{equation}\label{meq1} \left\{ \begin{array}{l} y_{23}^2+ y_{24}^2=-2a\\ y_{13}y_{23}+y_{14}y_{24}=b\\ y_{13}^2+y_{14}^2=-2c \end{array} \right.. \end{equation} By Lemma \ref{bealem}, $S_1=\{\det(b_q)=4ac-b^2=0\}$ has an ordinary double point at $0\in S$. The degree $2$ term of $\det(b_q)$ is $4l_al_c-l_b^2$ where $l_a, l_b, l_c$ are the linear terms of $a,b,c$, respectively. Let $\Delta$ be the discriminant of $4l_al_c-l_b^2$. From $\Delta\neq 0$, we get that $l_a, l_b, l_c$ are not proportional. We can assume $a=t_1, c=t_2$ and $l_b=\lambda t_1+ \mu t_2$ for $\lambda,\mu \in\Bbbk$. Then $\Delta=16(1-\lambda\mu)\neq 0$: indeed, $4t_1t_2-(\lambda t_1+\mu t_2)^2 = -\lambda^2 t_1^2+(4-2\lambda\mu)t_1t_2-\mu^2 t_2^2$ has discriminant $(4-2\lambda\mu)^2-4\lambda^2\mu^2=16(1-\lambda\mu)$. Substituting $t_1=-\tfrac{1}{2}(y_{23}^2+y_{24}^2)$ and $t_2=-\tfrac{1}{2}(y_{13}^2+y_{14}^2)$ from the first and third equations into the second, Equations (\ref{meq1}) reduce to \begin{equation}\label{meq2} \left\{ \frac{\lambda}{2}y_{23}^2 +y_{13}y_{23} +\frac{\mu}{2}y_{13}^2 +\frac{\lambda}{2} y_{24}^2 +y_{14}y_{24} +\frac{\mu}{2}y_{14}^2 +\epsilon=0 \right\} \subset \mathbb{A}_{\Bbbk}^4, \end{equation} where $\epsilon$ is a sum of degree $\geqslant 3$ terms in $y_{13}, y_{23}, y_{14}, y_{24}$. Since $1-\lambda\mu\neq 0$, the discriminants of \begin{equation*} \frac{\lambda}{2}y_{2i}^2 +y_{1i}y_{2i} +\frac{\mu}{2}y_{1i}^2, \quad i=3,4 \end{equation*} are non-zero. Hence, the degree $2$ part of (\ref{meq2}) has full rank and $M$ has an ordinary double point. \end{proof} We will construct a minimal resolution $\widetilde{M}$ of the nodal $3$-fold $M$. This is obtained by blowing up $M$ along $\bigsqcup_{s\in S_2} \Sigma_s^+$ (or alternatively $\bigsqcup_{s\in S_2} \Sigma_s^-$). Lemma \ref{nblem} and Example \ref{minresex} serve as preparations for this step. \begin{lemma}\label{nblem} Under the assumptions of Lemma \ref{mgeom}, denote by $M^{\circ} = M - \{m_s\}_{s\in S_2}$ the smooth locus of $M$. Write $\Sigma=\Sigma_s^*$ for $*=\pm$ and $\Sigma^{\circ}=\Sigma-\{m_s\}$. (1) The normal bundle $N_{\Sigma^{\circ}/M^{\circ}}$ is isomorphic to $\mathcal{O}_{\Sigma^{\circ}}(-2)$.
(2) $\mathcal{O}_{\Sigma}(-K_M) \cong \mathcal{O}_{\Sigma}(1)$. \end{lemma} \begin{proof} (1) There is an exact sequence of normal bundles \begin{equation*} 0\to N_{\Sigma^{\circ}/M^{\circ}} \to N_{\Sigma^{\circ}/\operatorname{Gr}_S(2, \mathcal{E})} \to (N_{M^{\circ}/\operatorname{Gr}_S(2,\mathcal{E})})|_{\Sigma^{\circ}} \to 0. \end{equation*} From the inclusions $\Sigma^{\circ}\subset \operatorname{Gr}(2,4) \subset\operatorname{Gr}_S(2, \mathcal{E})$, a similar exact sequence of normal bundles gives \begin{equation*} N_{\Sigma^{\circ}/\operatorname{Gr}_S(2, \mathcal{E})} \cong N_{\Sigma^{\circ}/\operatorname{Gr}(2,4)} \oplus (T_s S \otimes \mathcal{O}_{\Sigma^{\circ}}) \cong \mathcal{R}^\vee|_{\Sigma^{\circ}} \oplus \mathcal{O}_{\Sigma^{\circ}}^2, \end{equation*} where $T_s S$ is the tangent space of $S$ at $s$. From (\ref{msec}), we get \begin{equation*} N_{M^{\circ}/\operatorname{Gr}_S(2,\mathcal{E})} \cong \operatorname{Sym}^2(\mathcal{R}^\vee)\otimes \pi_{\operatorname{Gr}}^*\mathcal{L}, \end{equation*} where $\pi_{\operatorname{Gr}}\colon \operatorname{Gr}_S(2, \mathcal{E})\to S$ is the projection map. These imply \begin{equation*} \det(N_{\Sigma^{\circ}/\operatorname{Gr}_S(2, \mathcal{E})}) \cong \mathcal{O}_{\Sigma^{\circ}}(1), \quad \det((N_{M^{\circ}/\operatorname{Gr}_S(2,\mathcal{E})})|_{\Sigma^{\circ}}) \cong \mathcal{O}_{\Sigma^{\circ}}(3). \end{equation*} Hence, $N_{\Sigma^{\circ}/M^{\circ}} \cong \det( N_{\Sigma^{\circ}/M^{\circ}}) \cong \mathcal{O}_{\Sigma^{\circ}}(-2)$. (2) From (1), we have $\mathcal{O}_{\Sigma^{\circ}}(\Sigma) \cong \mathcal{O}_{\Sigma^{\circ}}(-2)$.
We get \begin{align*} \mathcal{O}_{\Sigma^{\circ}}(-K_M) & \cong \mathcal{O}_{\Sigma^{\circ}}(-K_M-\Sigma) \otimes \mathcal{O}_{\Sigma^{\circ}}(\Sigma)\\ & \cong \mathcal{O}_{\Sigma^{\circ}}(-K_{\Sigma}) \otimes \mathcal{O}_{\Sigma^{\circ}}(-2)\\ & \cong \mathcal{O}_{\Sigma^{\circ}} (1). \end{align*} Since $M$ is Gorenstein, we have that $\mathcal{O}_{\Sigma}(-K_M)$ is a line bundle and the isomorphism extends to $\Sigma$. \end{proof} \begin{ex}\label{minresex} Let $X =\{xy-zw=0\} \subset \mathbb{P}^4$ be the nodal quadric $3$-fold. It is a cone over $\mathbb{P}^1 \times \mathbb{P}^1$ with the vertex $O=\{x=y=z=w=0\}$. Let $\varphi\colon X \dashrightarrow \mathbb{P}^1 \times \mathbb{P}^1$ be the projection from the vertex. Write $\Sigma_t=\varphi^{-1}(\mathbb{P}^1\times \{t\}) \cong \mathbb{P}^2, t\in \mathbb{P}^1$. Let $Y$ be the minimal resolution of $X$. Then $Y\cong \mathbb{P}(\mathcal{O}_{\mathbb{P}^1}\oplus \mathcal{O}_{\mathbb{P}^1}(-1)^2)\subset \mathbb{P}^4 \times \mathbb{P}^1$ and $Y \cong \operatorname{Bl}_{\Sigma_t} X$ is the blow-up of $X$ along $\Sigma_t$ for every $t\in \mathbb{P}^1$. Fix a point $0\in \mathbb{P}^1$ and let $\psi\colon X\dashrightarrow \mathbb{P}^1$ be the linear projection from $\Sigma_0$. The resolution of the indeterminacy of $\psi$ gives \begin{equation*} \begin{tikzcd}[column sep = small] & Y\arrow{ld}[swap]{f} \arrow{rd}{g} &\\ X \arrow[dotted]{rr}{\psi} & & \mathbb{P}^1, \end{tikzcd} \end{equation*} where $f$ is the blow-up of $X$ along $\Sigma_0$. For every $t\in \mathbb{P}^1$, the pre-image $\tilde{\Sigma}_t=f^{-1}(\Sigma_t)$ is the Hirzebruch surface $\operatorname{Bl}_O(\Sigma_t)$, where $O\in \Sigma_t$ is the vertex of $X$. Let $H$ and $h$ be the pull-backs of the hyperplane classes of $X$ and $\mathbb{P}^1$ to $Y$, respectively. Then we have the relation $\tilde{\Sigma}_t=H-h$. Let $l \cong \mathbb{P}^1$ be the exceptional locus of $f$.
Then $\mathcal{O}_l(\tilde{\Sigma}_t) \cong \mathcal{O}_l(H-h) \cong \mathcal{O}_l(-1)$. \end{ex} The locus $S_2$ can be embedded into the double cover $\widetilde{S}$ and for $s\in S_2\subset \widetilde{S}$, we have that $M_s = \Sigma_s^+\cup \Sigma_s^-$ is also the scheme-theoretic fiber of $\tau\colon M\to \widetilde{S}$. Moreover, $\widetilde{S}$ has ordinary double points at $S_2$. Recall $m_s=\Sigma_s^+\cap \Sigma_s^-$ and $M^{\circ} = M - \{m_s\}_{s\in S_2}$. By Lemma \ref{mgeom}, $M$ has a finite number of ordinary double points $\{m_s\}_{s\in S_2}$. That is, near the point $m_s$, the $3$-fold $M$ is the nodal quadric. Since $\Sigma_s^+$ and $\Sigma_s^-$ intersect at only one point, they are planes lying over the same ruling of $\mathbb{P}^1\times \mathbb{P}^1$. Let \begin{equation*} \xi\colon \widetilde{M} \to M \end{equation*} be the blow-up of $M$ along $\bigsqcup_{s\in S_2} \Sigma_s^+$. Then $\widetilde{M}$ is a small resolution of $M$ and $\xi^{-1}(M^{\circ}) \to M^{\circ}$ is an isomorphism. Write $l_s =\xi^{-1}(m_s) \cong \mathbb{P}^1$ and let $*=\pm$. Example \ref{minresex} tells us that \begin{equation*} \tilde{\Sigma}_s^* := \xi^{-1}(\Sigma_s^*) \cong \operatorname{Bl}_{m_s} \Sigma_s^* \end{equation*} and $l_s$ is the $(-1)$-curve on $\operatorname{Bl}_{m_s} \Sigma_s^*$. Denote the fiber classes of the projection $\tilde{\Sigma}_s^* \to l_s$ by $h_s^*$. Then \begin{equation*} h_s^*\cdot l_s=1, \quad l_s^2=-1, \quad (h_s^*)^2=0 \end{equation*} on $\tilde{\Sigma}_s^*$. In addition, $\xi^*\mathcal{O}_{\Sigma_s^*}(1) \cong \mathcal{O}_{\tilde{\Sigma}_s^*}(h_s^* +l_s)$. \begin{prop}\label{birtran} Assume $\Bbbk$ is algebraically closed and $\operatorname{char}(\Bbbk)=0$.
(1) There exists a relative contraction map $\xi_+\colon \widetilde{M} \to M^+$ over $\widetilde{S}$ where $\xi_+$ is an isomorphism on $\widetilde{M}-\bigsqcup_{s\in S_2} \tilde{\Sigma}_s^+$ and $\xi_+|_{\tilde{\Sigma}_s^+}$ is the projection onto $l_s$. Namely, we have birational morphisms \begin{equation*} M\xleftarrow{\xi} \widetilde{M} \xrightarrow{\xi_+} M^+ \end{equation*} over $\widetilde{S}$ and the corresponding fibers over $s\in S_2\subset \widetilde{S}$ are \begin{equation*} \Sigma_s^+\cup \Sigma_s^-\leftarrow \tilde{\Sigma}_s^+ \cup \tilde{\Sigma}_s^- \to \tilde{\Sigma}_s^-. \end{equation*} Furthermore, $M^+$ is smooth and $\xi_+$ is the blow-up of $M^+$ along $\bigsqcup_{s\in S_2} l_s$. (2) Let $\eta\colon S^+=\operatorname{Bl}_{S_2} \widetilde{S} \to \widetilde{S}$ be the resolution of $\widetilde{S}$. Then the map $\tau'\colon M^+\to \widetilde{S}$ obtained from (1) fits into the commutative diagram \begin{equation*} \begin{tikzcd}[column sep = small] & \widetilde{M}\arrow{ld}[swap]{\xi} \arrow{rd}{\xi_+} & \\ M \arrow{d}[swap]{\tau} & & M^+ \arrow{d}{\tau_+} \arrow{lld}[swap]{\tau'}\\ \widetilde{S} & & S^+ \arrow{ll}{\eta} \end{tikzcd} \end{equation*} and $\tau_+$ is a smooth conic bundle. \end{prop} \begin{proof} Denote by $\widetilde{S}^{\circ}=\widetilde{S}\backslash S_2$ and let $*=\pm$. (1) By Lemma \ref{nblem} (1), $N_{\tilde{\Sigma}_s^*/\widetilde{M}} \cong \mathcal{O}_{\tilde{\Sigma}_s^*}(-2h_s^*+nl_s)$ for some $n\in\mathbb{Z}$. Since $\mathcal{O}_{l_s}(\tilde{\Sigma}_s^*) \cong \mathcal{O}_{l_s}(-1)$, we have $n=-1$ and \begin{equation}\label{nbeq} N_{\tilde{\Sigma}_s^*/\widetilde{M}} \cong \mathcal{O}_{\tilde{\Sigma}_s^*}(\tilde{\Sigma}_s^*) \cong \mathcal{O}_{\tilde{\Sigma}_s^*}(-2h_s^*-l_s). \end{equation} Let \begin{equation}\label{condiv} D=-K_{\widetilde{M}}-\sum_{s\in S_2}\tilde{\Sigma}_s^- = -\xi^*K_{M}-\sum_{s\in S_2}\tilde{\Sigma}_s^-. \end{equation} Denote $\widetilde{\tau}=\tau\circ\xi\colon \widetilde{M} \to \widetilde{S}$.
We claim that $D$ is $\widetilde{\tau}$-nef and $D-K_{\widetilde{M}}$ is $\widetilde{\tau}$-ample. See \cite[Tag 01VH]{stacks-project} for the definition of relative ampleness and recall that a divisor is relatively nef if its intersection with every curve in the fibers is non-negative. Note that $\widetilde{\tau}^{-1}(\widetilde{S}^{\circ})\cong \tau^{-1}(\widetilde{S}^{\circ}) \to \widetilde{S}^{\circ}$ is a smooth conic bundle and $D|_{\widetilde{\tau}^{-1}(\widetilde{S}^{\circ})} =-K_{\widetilde{\tau}^{-1}(\widetilde{S}^{\circ})}$ is relatively ample. It suffices to study $D$ and $D-K_{\widetilde{M}}$ on $\tilde{\Sigma}_s^+\cup \tilde{\Sigma}_s^-$. The relative nefness and ampleness can be checked on the curve classes $h_s^{\pm}$ and $l_s$. By Lemma \ref{nblem} (2), \begin{equation*} \mathcal{O}_{\tilde{\Sigma}_s^*}(-K_{\widetilde{M}}) \cong \xi^* \mathcal{O}_{\Sigma_s^*}(-K_M) \cong \mathcal{O}_{\tilde{\Sigma}_s^*}(h_s^*+l_s). \end{equation*} Hence, $(-K_{\widetilde{M}})\cdot h_s^* = 1$ and $(-K_{\widetilde{M}})\cdot l_s=0$; combining these with (\ref{nbeq}) and the equality $\tilde{\Sigma}_s^+\cap \tilde{\Sigma}_s^-=l_s$, we obtain \begin{equation}\label{nefness} D\cdot h_s^+=0, \quad D\cdot h_s^-=2, \quad D\cdot l_s=1. \end{equation} From these, we get that $D$ is relatively nef, and by Kleiman's ampleness criterion (Theorem 1.44 in \cite{kmbirgeo}), $D-K_{\widetilde{M}}$ is relatively ample. By the relative basepoint-free theorem (Theorem 3.24 in \cite{kmbirgeo}; this is where $\operatorname{char}(\Bbbk)=0$ is used), $mD$ is $\widetilde{\tau}$-free for $m\gg 0$. We can construct $\xi_+\colon \widetilde{M} \to M^+$ by taking the Stein factorization of $|mD|, m\gg 0$. From Equations (\ref{nefness}), we get that $\xi_+$ has the required properties. From (\ref{nbeq}), we get $N_{\tilde{\Sigma}_s^+/\widetilde{M}}|_{h_s^+} \cong \mathcal{O}_{h_s^+}(-1)$. Then \cite{funainv} or \cite[Theorem 2.3]{andoexray} implies that $M^+$ is smooth and $\xi_+$ is the blow-up. (2) Note that from (1), $M^+$ is a smooth $3$-fold.
The fiber of $\tau'\colon M^+\to \widetilde{S}$ over $s\in S_2$ is a Cartier divisor $\tilde{\Sigma}_s^-$ of $M^+$. From the universal property of blowing up, we get that $\tau'$ factors as $M^+\xrightarrow{\tau_+} S^+\xrightarrow{\eta} \widetilde{S}$. By construction, $\eta^{-1}(\widetilde{S}^{\circ})\cong \widetilde{S}^{\circ}$, and $\tau_+^{-1}(\widetilde{S}^{\circ}) \to \widetilde{S}^{\circ}$ is isomorphic to $\tau^{-1}(\widetilde{S}^{\circ}) \to \widetilde{S}^{\circ}$. We claim that $\tau_+|_{\tilde{\Sigma}_s^-}\colon \tilde{\Sigma}_s^-\to \eta^{-1}(s)$ is the projection from the Hirzebruch surface onto $\mathbb{P}^1$. Note that $\xi_+^{-1}(\tilde{\Sigma}_s^-)=\tilde{\Sigma}_s^+ \cup \tilde{\Sigma}_s^-$ and $\xi_+|_{\tilde{\Sigma}_s^-}$ is an isomorphism. Then from (\ref{nbeq}), we get \begin{equation}\label{nbm+} N_{\tilde{\Sigma}_s^-/M^+} \cong \mathcal{O}_{M^+}(\tilde{\Sigma}_s^-)|_{\tilde{\Sigma}_s^-}\cong \mathcal{O}_{\widetilde{M}}(\tilde{\Sigma}_s^+ + \tilde{\Sigma}_s^-)|_{\tilde{\Sigma}_s^-} \cong \mathcal{O}_{\tilde{\Sigma}_s^-}(-2h_s^-). \end{equation} By the construction of $\tau_+$, the pull-back of $\mathcal{O}_{\eta^{-1}(s)}(1)\cong \mathcal{O}_{\mathbb{P}^1}(2)$ is the conormal bundle $N_{\tilde{\Sigma}_s^-/M^+}^\vee$. Thus, the pull-back of $\mathcal{O}_{\mathbb{P}^1}(1)$ is $\mathcal{O}_{\tilde{\Sigma}_s^-}(h_s^-)$ and $\tau_+|_{\tilde{\Sigma}_s^-}$ is the claimed projection. Lastly, we show that $\tau_+$ is a smooth conic bundle. Recall that a map $f\colon X\to Y$ is a smooth conic bundle if each geometric fiber $X_y$ is $\mathbb{P}^1$ and there exists a line bundle $L$ on $X$ such that $L|_{X_y}\cong \mathcal{O}_{\mathbb{P}^1}(2)$. We have seen that the geometric fibers of $\tau_+$ are $\mathbb{P}^1$'s. Now we will show that there is a line bundle $L$ on $M^+$ such that $\xi_+^*L \cong \mathcal{O}_{\widetilde{M}}(D)$ with $D$ defined in (\ref{condiv}) and that $L$ makes $\tau_+$ a smooth conic bundle.
Since $\xi_+$ is a smooth blow-up by (1), the SOD of $\mathbf{D}^\mathrm{b}(\widetilde{M})$ obtained from the blow-up formula implies that $\mathcal{O}_{\widetilde{M}}(D)$ is the pull-back of a line bundle $L$ on $M^+$ if $\mathcal{O}_{\widetilde{M}}(D)|_{\tilde{\Sigma}_s^+}$ is the pull-back of a line bundle on $l_s$. Computations in (1) give \begin{equation*} \mathcal{O}_{\tilde{\Sigma}_s^+}(D) \cong \mathcal{O}_{\tilde{\Sigma}_s^+}(h_s^+) \cong \xi_+^* \mathcal{O}_{l_s}(1), \quad \mathcal{O}_{\tilde{\Sigma}_s^-}(D) \cong \mathcal{O}_{\tilde{\Sigma}_s^-}(3h_s^-+2l_s). \end{equation*} Therefore, such an $L$ exists; write $L=\mathcal{O}_{M^+}(D^+)$. Note that $\mathcal{O}_M(-K_M)$ restricted to $\tau^{-1}(\widetilde{S}^{\circ})$ makes $\tau^{-1}(\widetilde{S}^{\circ}) \to \widetilde{S}^{\circ}$ a smooth conic bundle. Since $\mathcal{O}_{M^+}(D^+)$ and $\mathcal{O}_M(-K_M)$ restricted to $\tau_+^{-1}(\widetilde{S}^{\circ}) \cong \tau^{-1}(\widetilde{S}^{\circ})$ are isomorphic and \begin{equation*} D^+\cdot h_s^- =D \cdot h_s^-= 2, \end{equation*} we have that $\mathcal{O}_{M^+}(D^+)$ makes $\tau_+$ a smooth conic bundle. \end{proof} For the rest of the section, we will show that $\mathcal{A}_{\mathcal{Q}}\cong \mathbf{D}^\mathrm{b}(S^+, \mathcal{A}^+)$ where $\mathcal{A}^+$ is Brauer equivalent to the Azumaya algebra corresponding to $\tau_+\colon M^+\to S^+$. Recall that $\mathcal{R}$ is the universal subbundle on $\operatorname{Gr}_S(2,\mathcal{E})$. Let $\mathcal{R}_M$ be the restriction of $\mathcal{R}$ to $M$. Recall the associated left Clifford ideals $\mathcal{I}_n^{\mathcal{R}_M}, n\in\mathbb{Z}$ in Definition \ref{clideal}. \begin{lemma}\label{ires} For every $n\in \mathbb{Z}$ and $*=\pm$, we have $(\xi^*\mathcal{I}_n^{\mathcal{R}_M})|_{\tilde{\Sigma}_s^*}\cong \mathcal{O}_{\tilde{\Sigma}_s^*} \oplus \mathcal{O}_{\tilde{\Sigma}_s^*}(-h_s^*-l_s)$.
\end{lemma} \begin{proof} As with the filtration (\ref{i0fil}), for $m\in\mathbb{Z}$, we have the short exact sequence \begin{equation*} 0\to \det(\mathcal{R}_M) \otimes \rho^*\mathcal{L}^{m-1} \to \mathcal{I}_{2m}^{\mathcal{R}_M} \to \rho^*(\det(\mathcal{E})\otimes \mathcal{L}^{m-2}) \to 0 \end{equation*} and $\mathcal{I}_{2m+1}^{\mathcal{R}_M}\cong \det(\mathcal{R}_M)\otimes (\rho^*\mathcal{E}/\mathcal{R}_M) \otimes \rho^*\mathcal{L}^{m-1}$. Note that $\Sigma_s^*\cong \operatorname{Gr}(2,3)\subset \operatorname{Gr}(2,4)$ and the universal quotient on $\operatorname{Gr}(2,4)$ restricted to $\operatorname{Gr}(2,3)$ is $\mathcal{O}\oplus \mathcal{O}(1)$. Then for every $n\in \mathbb{Z}$ we get $\mathcal{I}_n^{\mathcal{R}_M}|_{\Sigma_s^*}\cong \mathcal{O} \oplus \mathcal{O}(-1)$ and the result follows. \end{proof} \begin{lemma}\label{jlem} For every $n\in \mathbb{Z}$, there exists a rank $2$ vector bundle $\mathcal{J}_n$ on $M^+$ that fits into the short exact sequences \begin{equation}\label{ijseq} 0\to \xi_+^*\mathcal{J}_n\to \xi^*\mathcal{I}_n^{\mathcal{R}_M} \to \bigoplus_{s\in S_2} \mathcal{O}_{\tilde{\Sigma}_s^+}(-h_s^+ -l_s) \to 0, \end{equation} \begin{equation}\label{ijdual} 0\to \xi^*(\mathcal{I}_n^{\mathcal{R}_M})^\vee \to \xi_+^*(\mathcal{J}_n)^\vee\to \bigoplus_{s\in S_2} \mathcal{O}_{\tilde{\Sigma}_s^+}(-h_s^+) \to 0. \end{equation} \end{lemma} \begin{proof} By Lemma \ref{ires}, we have $(\xi^*\mathcal{I}_n^{\mathcal{R}_M})|_{\tilde{\Sigma}_s^+}\cong \mathcal{O} \oplus \mathcal{O}(-h_s^+-l_s)$. We can construct a surjection \begin{equation*} \xi^*\mathcal{I}_n^{\mathcal{R}_M} \twoheadrightarrow \bigoplus_{s\in S_2}(\xi^*\mathcal{I}_n^{\mathcal{R}_M})|_{\tilde{\Sigma}_s^+} \twoheadrightarrow \bigoplus_{s\in S_2} \mathcal{O}_{\tilde{\Sigma}_s^+}(-h_s^+ -l_s) \end{equation*} and denote its kernel by $\mathcal{F}$.
Restricting this short exact sequence to $\tilde{\Sigma}_s^+$, we have \begin{equation*} 0 \to \mathcal{O}_{\tilde{\Sigma}_s^+}(-h_s^+ -l_s) \otimes N_{\tilde{\Sigma}_s^+/\widetilde{M}}^\vee \to \mathcal{F}|_{\tilde{\Sigma}_s^+} \to \mathcal{O}_{\tilde{\Sigma}_s^+}\oplus \mathcal{O}_{\tilde{\Sigma}_s^+}(-h_s^+ -l_s)\to \mathcal{O}_{\tilde{\Sigma}_s^+}(-h_s^+ -l_s) \to 0. \end{equation*} Since $N_{\tilde{\Sigma}_s^+/\widetilde{M}}^\vee \cong \mathcal{O}_{\tilde{\Sigma}_s^+}(2h_s^+ +l_s)$ by (\ref{nbeq}), we have \begin{equation}\label{jres} \mathcal{F}|_{\tilde{\Sigma}_s^+} \cong \mathcal{O}_{\tilde{\Sigma}_s^+} \oplus \mathcal{O}_{\tilde{\Sigma}_s^+}(h_s^+) \end{equation} and this is the pull-back of $\mathcal{O}_{l_s}\oplus \mathcal{O}_{l_s}(1)$. Since $\xi_+\colon \widetilde{M} \to M^+$ is a smooth blow-up along $\bigsqcup l_s$ with exceptional locus $\bigsqcup \tilde{\Sigma}_s^+$ by Proposition \ref{birtran} (1), we deduce $\mathcal{F}\cong \xi_+^*\mathcal{J}_n$ for some rank $2$ bundle $\mathcal{J}_n$ on $M^+$. This gives (\ref{ijseq}). Since \begin{equation*} R\mathscr{H}\kern -2pt om(\mathcal{O}_{\tilde{\Sigma}_s^+}(-h_s^+ -l_s), \mathcal{O}_{\widetilde{M}}) \cong R\mathscr{H}\kern -2pt om(\mathcal{O}_{\tilde{\Sigma}_s^+}(-h_s^+ -l_s), N_{\tilde{\Sigma}_s^+/\widetilde{M}}[-1])\cong \mathcal{O}_{\tilde{\Sigma}_s^+}(-h_s^+)[-1], \end{equation*} we get (\ref{ijdual}) by taking the dual of (\ref{ijseq}).
\end{proof} From Proposition \ref{birtran} (2), we get the commutative diagrams \begin{equation}\label{commdiag} \begin{tikzcd}[sep = small] & \widetilde{M}\arrow{ld}[swap]{\xi} \arrow{rd}{\xi_+} & \\ M \arrow{dd}[swap]{\tau} \arrow{rd}{\rho} & & M^+ \arrow{dd}{\tau_+} \arrow{ld}[swap]{\rho_+}\\ & S & \\ \widetilde{S} \arrow{ru}[swap]{\alpha} & & S^+ \arrow{ll}{\eta} \arrow{lu}{\alpha_+} \end{tikzcd} \end{equation} \begin{lemma}\label{ijpushf} For every $n\in \mathbb{Z}$, (1) $R\rho_*\mathcal{I}_n^{\mathcal{R}_M}=0$ and $R\rho_*\mathscr{E}\kern -1pt nd(\mathcal{I}_n^{\mathcal{R}_M})\cong \mathcal{B}_0$ as sheaves of algebras; (2) $R(\rho_+)_*\mathcal{J}_n=0$ and $R(\rho_+)_*\mathscr{E}\kern -1pt nd(\mathcal{J}_n)\cong \mathcal{B}_0$ as sheaves of algebras. \end{lemma} \begin{proof} (1) In \cite[\S 3]{kuzline}, the right $\mathcal{B}_0$-module $\mathfrak{S}_n$ is constructed as the cokernel of $\mathcal{R}_M\otimes \rho^*\mathcal{B}_{n-1}\to \rho^*\mathcal{B}_n$. Comparing it with Lemma \ref{coklem}, we have $\mathfrak{S}_n\cong \mathcal{I}_n^{\circ \mathcal{R}_M}\otimes\det(\mathcal{R}_M^\vee)\otimes \rho^*\mathcal{L}$. By Lemma \ref{clidealdual}, we have \begin{equation*} \mathcal{I}_n^{\mathcal{R}_M}\cong \mathfrak{S}_{-n}^\vee \otimes \rho^*(\det(\mathcal{E})\otimes (\mathcal{L}^\vee)^2). \end{equation*} Then Corollaries 3.5 and 3.6 in \textit{loc. cit.} give (1). (2) Write $\mathcal{I}_n=\mathcal{I}_n^{\mathcal{R}_M}$. Observe from the diagram (\ref{commdiag}) that $\rho_+\xi_+=\rho\xi$ restricted to $\tilde{\Sigma}_s^+$ is the map $\tilde{\Sigma}_s^+ \to s=\operatorname{Spec}(\Bbbk)$. We first recall some results on the blow-up $X$ of $\mathbb{P}^2$ at a point that are needed for the proof. The blow-up $X\subset \mathbb{P}^2\times \mathbb{P}^1$ is a divisor in the linear system $|\mathcal{O}_{\mathbb{P}^2\times \mathbb{P}^1}(1,1)|$. Denote by $\mathcal{O}_X(a,b), a,b \in\mathbb{Z}$ the restriction of $\mathcal{O}_{\mathbb{P}^2\times \mathbb{P}^1}(a,b)$ to $X$.
Then we easily see from the short exact sequence \begin{equation*} 0\to \mathcal{O}_{\mathbb{P}^2\times \mathbb{P}^1}(a-1,b-1)\to \mathcal{O}_{\mathbb{P}^2\times \mathbb{P}^1}(a,b)\to \mathcal{O}_X(a,b) \to 0 \end{equation*} that $H^\bullet(X, \mathcal{O}_X(a,b))=0$ for $a=-1$ or $(a,b)=(0,-1)$. Observe also that \begin{equation*} R(\rho_+)_* = R(\rho_+)_* R(\xi_+)_* L\xi_+^* = R\rho_* R\xi_* L\xi_+^*. \end{equation*} Applying $R\rho_* R\xi_*$ to the sequence (\ref{ijseq}), the last term vanishes and thus $R(\rho_+)_*\mathcal{J}_n\cong R\rho_*\mathcal{I}_n=0$. We have seen from Lemma \ref{ires} and (\ref{jres}) that \begin{equation*} (\xi_+^*\mathcal{J}_n)|_{\tilde{\Sigma}_s^+} \cong \mathcal{O}_{\tilde{\Sigma}_s^+} \oplus \mathcal{O}_{\tilde{\Sigma}_s^+}(h_s^+), \quad (\xi^*\mathcal{I}_n)|_{\tilde{\Sigma}_s^+}\cong \mathcal{O}_{\tilde{\Sigma}_s^+} \oplus \mathcal{O}_{\tilde{\Sigma}_s^+}(-h_s^+-l_s). \end{equation*} Tensoring (\ref{ijseq}) with $\xi_+^*(\mathcal{J}_n)^\vee$ and (\ref{ijdual}) with $\xi^*\mathcal{I}_n$, respectively, we get \begin{equation*} \begin{split} & 0\to \xi_+^*\mathscr{E}\kern -1pt nd(\mathcal{J}_n) \to \xi^*\mathcal{I}_n\otimes \xi_+^*(\mathcal{J}_n)^\vee \to \bigoplus_{s\in S_2} (\mathcal{O}_{\tilde{\Sigma}_s^+}(-h_s^+ -l_s) \oplus \mathcal{O}_{\tilde{\Sigma}_s^+}(-2h_s^+ -l_s)) \to 0,\\ & 0\to \xi^*\mathscr{E}\kern -1pt nd(\mathcal{I}_n) \to \xi^*\mathcal{I}_n\otimes \xi_+^*(\mathcal{J}_n)^\vee \to \bigoplus_{s\in S_2} (\mathcal{O}_{\tilde{\Sigma}_s^+}(-h_s^+) \oplus \mathcal{O}_{\tilde{\Sigma}_s^+}(-2h_s^+ -l_s)) \to 0. \end{split} \end{equation*} Applying $R(\rho_+)_* R(\xi_+)_* = R\rho_* R\xi_*$ to the sequences above, the last terms in both sequences vanish.
Then \begin{equation*} R(\rho_+)_*\mathscr{E}\kern -1pt nd(\mathcal{J}_n) \cong R\rho_* R\xi_*(\xi^*\mathcal{I}_n\otimes \xi_+^*(\mathcal{J}_n)^\vee) \cong R\rho_*\mathscr{E}\kern -1pt nd(\mathcal{I}_n)\cong \mathcal{B}_0. \end{equation*} \end{proof} \begin{prop}\label{azprop} (1) For every $n\in \mathbb{Z}$, we have $\mathcal{J}_n|_{\tilde{\Sigma}_s^-} \cong \mathcal{O}_{\tilde{\Sigma}_s^-}(-h_s^- -l_s) \oplus \mathcal{O}_{\tilde{\Sigma}_s^-}(-l_s)$. (2) Let $\mathcal{B}^+$ be the Azumaya algebra on $S^+$ that corresponds to the smooth conic bundle $\tau_+\colon M^+\to S^+$. Then $\mathscr{E}\kern -1pt nd(\mathcal{J}_0)\cong \tau_+^*\mathcal{A}^+$ for some Azumaya algebra $\mathcal{A}^+$ on $S^+$ that is Brauer equivalent to $\mathcal{B}^+$. (3) $R(\alpha_+)_*\mathcal{A}^+\cong \mathcal{B}_0$ as sheaves of algebras. \end{prop} \begin{proof} (1) By restricting the sequence (\ref{ijseq}) to $\tilde{\Sigma}_s^-$ and using Lemma \ref{ires}, we obtain \begin{equation*} 0\to \mathcal{J}_n|_{\tilde{\Sigma}_s^-} \to \mathcal{O}_{\tilde{\Sigma}_s^-}\oplus \mathcal{O}_{\tilde{\Sigma}_s^-}(-h_s^- -l_s)\to \mathcal{O}_{l_s}\to 0. \end{equation*} Then $\mathcal{J}_n|_{\tilde{\Sigma}_s^-}\cong \mathcal{O}_{\tilde{\Sigma}_s^-}(-h_s^- -l_s) \oplus \mathcal{O}_{\tilde{\Sigma}_s^-}(-l_s)$ or $\mathcal{O}_{\tilde{\Sigma}_s^-}(-h_s^- -2l_s) \oplus \mathcal{O}_{\tilde{\Sigma}_s^-}$. We will show the latter is impossible. Let $s\in S_2\subset S$. By Lemma \ref{ijpushf} (2), \begin{equation*} 0=\operatorname{Ext}^\bullet(\mathcal{O}_s, R(\rho_+)_*\mathcal{J}_n) \cong \operatorname{Ext}^\bullet(L\rho_+^*\mathcal{O}_s, \mathcal{J}_n). \end{equation*} Denote by $\mathcal{H}^i= \mathcal{H}^i(L\rho_+^*\mathcal{O}_s)$. Then $\mathcal{H}^i=0$ for $i>0$ and $\mathcal{H}^0\cong \mathcal{O}_{\tilde{\Sigma}_s^-}$.
Consider the spectral sequence \begin{equation}\label{sseq} \operatorname{Ext}^j(\mathcal{H}^i,\mathcal{J}_n)\Rightarrow \operatorname{Ext}^{j-i}(L\rho_+^*\mathcal{O}_s, \mathcal{J}_n)=0. \end{equation} By Serre duality on $M^+$, we get \begin{equation}\label{extiso} \operatorname{Ext}^j(\mathcal{H}^i,\mathcal{J}_n) \cong \operatorname{Ext}^{3-j}(\mathcal{J}_n, \mathcal{H}^i\otimes \omega_{M^+})^\vee \cong H^{3-j}(M^+, \mathcal{J}_n^\vee \otimes\mathcal{H}^i\otimes \omega_{M^+})^\vee, \end{equation} where $\omega_{M^+}$ is the canonical line bundle. By Equation (\ref{nbm+}), we get \begin{equation*} \omega_{M^+}|_{\tilde{\Sigma}_s^-} \cong \omega_{\tilde{\Sigma}_s^-} \otimes N_{\tilde{\Sigma}_s^-/M^+}^\vee \cong \mathcal{O}_{\tilde{\Sigma}_s^-}(-h_s^- -2l_s). \end{equation*} Since $\mathcal{H}^i$ is supported on $\tilde{\Sigma}_s^-$, the right hand side in (\ref{extiso}) is $0$ if $j\notin \{1,2,3\}$. Thus, the line $j=2$ is stable in the spectral sequence (\ref{sseq}). If $\mathcal{J}_n|_{\tilde{\Sigma}_s^-}\cong \mathcal{O}_{\tilde{\Sigma}_s^-}(-h_s^- -2l_s) \oplus \mathcal{O}_{\tilde{\Sigma}_s^-}$, then \begin{align*} \operatorname{Ext}^2(\mathcal{H}^0, \mathcal{J}_n)& \cong H^1(M^+, (\mathcal{J}_n^\vee)|_{\tilde{\Sigma}_s^-} \otimes \omega_{M^+})^\vee\\ & \cong H^1(M^+, \mathcal{O}_{\tilde{\Sigma}_s^-} \oplus \mathcal{O}_{\tilde{\Sigma}_s^-}(-h_s^- -2l_s))^\vee\cong \Bbbk. \end{align*} This implies $\operatorname{Ext}^2(L\rho_+^*\mathcal{O}_s, \mathcal{J}_n)\neq 0$, which is a contradiction. (2) Notice that if there are rank $2$ vector bundles $\mathcal{F}_i, i=1,2$ on $M^+$ such that $\mathcal{F}_i$ restricted to each geometric fiber $M_t^+, t\in S^+$ of $\tau_+$ is $\mathcal{O}_{\mathbb{P}^1}(-1)^2$, then $\mathscr{E}\kern -1pt nd(\mathcal{F}_i)$ and $\mathscr{H}\kern -2pt om(\mathcal{F}_1, \mathcal{F}_2)$ restricted to $M_t^+$ are trivial.
Thus, there are Azumaya algebras $\mathcal{A}_i, i=1,2$ and a vector bundle $\mathcal{V}$ on $S^+$ such that $\mathscr{E}\kern -1pt nd(\mathcal{F}_i)\cong \tau_+^*\mathcal{A}_i$ and $\mathscr{H}\kern -2pt om(\mathcal{F}_1, \mathcal{F}_2) \cong \tau_+^*\mathcal{V}$. Since \begin{equation*} \mathscr{E}\kern -1pt nd(\mathcal{F}_1^\vee) \otimes \mathscr{E}\kern -1pt nd(\mathcal{F}_2)\cong \mathscr{E}\kern -1pt nd(\mathscr{H}\kern -2pt om(\mathcal{F}_1, \mathcal{F}_2)), \end{equation*} we have \begin{equation*} \mathcal{A}_1^{\operatorname{op}} \otimes\mathcal{A}_2\cong \mathscr{E}\kern -1pt nd(\mathcal{V}) \end{equation*} where $\mathcal{A}_1^{\operatorname{op}}$ is the opposite algebra of $\mathcal{A}_1$. That is, $\mathcal{A}_1$ and $\mathcal{A}_2$ are Brauer equivalent. By construction, $\tau_+^*\mathcal{B}^+$ is trivial, i.e., it is the endomorphism algebra of some rank $2$ vector bundle $\mathcal{F}$ on $M^+$. Since $\mathcal{O}_{M^+/S^+}(1)|_{M_t^+}\cong \mathcal{O}_{\mathbb{P}^1}(2)$, we can choose $\mathcal{F}$ such that $\mathcal{F}|_{M_t^+}\cong \mathcal{O}_{\mathbb{P}^1}(-1)^2$. We claim that $\mathcal{J}_n$ is such a rank $2$ vector bundle for every $n\in \mathbb{Z}$. By Lemma \ref{ijpushf} (2), $R(\rho_+)_*\mathcal{J}_n=0$. Since $\alpha_+\colon S^+\to S$ restricted to $S^+\backslash \bigsqcup l_s$ is finite, we have $R(\tau_+)_*\mathcal{J}_n=0$ on $S^+\backslash \bigsqcup l_s$ and thus $\mathcal{J}_n|_{M_t^+}\cong \mathcal{O}_{\mathbb{P}^1}(-1)^2$ for every $t\in S^+\backslash \bigsqcup l_s$. On the other hand, from (1), we get that $\mathcal{J}_n$ restricted to the fibers of $\tilde{\Sigma}_s^- \to l_s$ is also $\mathcal{O}_{\mathbb{P}^1}(-1)^2$. Thus, we get $\mathscr{E}\kern -1pt nd(\mathcal{J}_0)\cong \tau_+^*\mathcal{A}^+$ for some Azumaya algebra $\mathcal{A}^+$ on $S^+$ that is Brauer equivalent to $\mathcal{B}^+$. (3) Observe from the diagram (\ref{commdiag}) that \begin{equation*} R(\alpha_+)_*\cong R(\alpha_+)_* R(\tau_+)_* L \tau_+^* \cong R(\rho_+)_* L \tau_+^*.
\end{equation*} Then by (2) and Lemma \ref{ijpushf} (2), we get \begin{equation*} R(\alpha_+)_* \mathcal{A}^+ \cong R(\rho_+)_* \tau_+^* \mathcal{A}^+ \cong R(\rho_+)_* \mathscr{E}\kern -1pt nd(\mathcal{J}_0) \cong \mathcal{B}_0. \end{equation*} \end{proof} \begin{theorem}\label{main3} Assume $\Bbbk$ is algebraically closed and $\operatorname{char}(\Bbbk)=0$. Let $p\colon \mathcal{Q}\to S$ be a flat quadric surface bundle where $\mathcal{Q}$ is smooth and $S$ is a smooth surface over $\Bbbk$. Then there is a semiorthogonal decomposition \begin{equation}\label{sod3} \mathbf{D}^\mathrm{b}(\mathcal{Q}) =\langle \mathbf{D}^\mathrm{b}(S^+, \mathcal{A}^+), p^*\mathbf{D}^\mathrm{b}(S), p^*\mathbf{D}^\mathrm{b}(S)\otimes \mathcal{O}_{\mathcal{Q}/S}(1) \rangle \end{equation} where $S^+$ is the resolution of the double cover $\widetilde{S}$ over $S$ ramified along the (first) degeneration locus $S_1$ and $\mathcal{A}^+$ is an Azumaya algebra on $S^+$. In addition, the Brauer class $[\mathcal{A}^+]\in \operatorname{Br}(S^+)$ is trivial if and only if $p\colon \mathcal{Q} \to S$ has a rational section. \end{theorem} \begin{proof} Recall that by the SOD (\ref{kuzsod}), the non-trivial component of $\mathbf{D}^\mathrm{b}(\mathcal{Q})$ is equivalent to $\mathbf{D}^\mathrm{b}(S,\mathcal{B}_0)$. To get the SOD (\ref{sod3}), it suffices to show that $R(\alpha_+)_*\colon \mathbf{D}^\mathrm{b}(S^+, \mathcal{A}^+)\to \mathbf{D}^\mathrm{b}(S,\mathcal{B}_0)$ is an equivalence. The proof of this is similar to that of Proposition \ref{nceqprop}. By Proposition \ref{azprop} (3) and the projection formula (Proposition \ref{projf}), we get $R(\alpha_+)_*L\alpha_+^* \cong \operatorname{id}$.
On the other hand, since $S^+\backslash \bigsqcup l_s\cong \widetilde{S}\backslash S_2 \to S\backslash S_2$ is finite, we have that $R(\alpha_+)_*\mathcal{F} =0$ for $\mathcal{F}\in \operatorname{Coh}(S^+, \mathcal{A}^+)$ implies that $\mathcal{F}$ is supported on $\bigsqcup l_s$. By Proposition \ref{azprop} (1)(2), \begin{equation*} \tau_+^*(\mathcal{A}^+|_{l_s}) \cong \mathscr{E}\kern -1pt nd(\mathcal{J}_0)|_{\tilde{\Sigma}_s^-} \cong \mathscr{E}\kern -1pt nd(\mathcal{O}_{\tilde{\Sigma}_s^-} \oplus \mathcal{O}_{\tilde{\Sigma}_s^-}(-h_s^-)) \cong \tau_+^*(\mathscr{E}\kern -1pt nd(\mathcal{O}_{l_s}\oplus \mathcal{O}_{l_s}(-1))). \end{equation*} Hence, $\mathcal{A}^+|_{l_s} \cong \mathscr{E}\kern -1pt nd(\mathcal{O}_{l_s}\oplus \mathcal{O}_{l_s}(-1))$. Now the proof in Proposition \ref{nceqprop} shows that $\mathcal{F}=0$ and $L\alpha_+^* R(\alpha_+)_* \cong \operatorname{id}$. Let $U=S\backslash S_2$. Since $S^+$ is smooth and integral, we have that the composition $\operatorname{Br}(S^+) \to \operatorname{Br}(\alpha_+^{-1}(U))\to \operatorname{Br}(\Bbbk(S^+))$ is injective. Thus, the restriction $\operatorname{Br}(S^+) \to \operatorname{Br}(\alpha_+^{-1}(U))$ is injective. Observe from the diagram (\ref{commdiag}) that $\rho^{-1}(U)\to \alpha^{-1}(U)$ is isomorphic to $\rho_+^{-1}(U) \to \alpha_+^{-1}(U)$. By Proposition 2.15 in \cite{ksgroring}, $[\mathcal{A}^+]=0$ if and only if $p$ has a rational smooth ({\it non-degenerate} in {\it loc. cit.}) section. Lastly, since $\mathcal{Q}$ and $S$ are smooth, every section of $p$ is smooth by Lemma 1.3.2 in \cite{abbqfib}. \end{proof} \section{Examples}\label{exsec} In this section, we apply the main theorems to examples. We start with some remarks on nodal quintic del Pezzo $3$-folds and cubic $4$-folds containing a plane, and then we consider the most important cases of complete intersections of quadrics.
\begin{ex}[\cite{xienodaldp5}] \label{nodaldp5} Let $X_m\subset \mathbb{P}^6, m=1,2,3$ be the nodal quintic del Pezzo $3$-folds with $m$ nodes. Let $x\in X_m$ be a node. Then the embedded projective tangent space $T_x X_m$ is isomorphic to $\mathbb{P}^4$. The linear projection $X_m\dashrightarrow \mathbb{P}^1$ from $T_x X_m$ induces the map $p_m\colon Y_m \to \mathbb{P}^1$ where $f_m\colon Y_m=\operatorname{Bl}_{\mathbb{P}^4\cap X_m} X_m \to X_m$ is the blow-up. In fact, $f_m$ is a (partial) resolution of $X_m$ at the nodal point $x$. We have that $p_m$ is a quadric surface bundle, where $p_1, p_2$ have simple degeneration and $p_3$ has a fiber of corank $2$. In addition, the exceptional locus of $f_m$ is a smooth section of $p_m$. The hyperbolic reduction $C_m$ with respect to the smooth section is a nodal chain of $m$ $\mathbb{P}^1$'s. Hence, the residual category of $Y_m$ is equivalent to $\mathbf{D}^\mathrm{b}(C_m)$ by Theorem \ref{main1} or \ref{main2}. \end{ex} \begin{ex}[\cite{moc8}] \label{cubic4} Let $X\subset \mathbb{P}^5$ be a smooth cubic $4$-fold containing a plane and let $Y=\operatorname{Bl}_{\mathbb{P}^2} X$ be the blow-up of $X$ along the plane. The linear projection $X\dashrightarrow \mathbb{P}^2$ from the plane induces a quadric surface bundle $Y\to \mathbb{P}^2$, possibly with fibers of corank $2$. The Kuznetsov component of $X$ is equivalent to the residual category of $Y$. By Theorem \ref{main3}, the residual category of $Y$ is equivalent to the twisted derived category of a smooth K3 surface. This K3 surface is obtained as the resolution of the double cover over $\mathbb{P}^2$ ramified along a nodal sextic curve. The proof of the same result in \cite{moc8} uses results on quadric surface bundles over smooth $3$-folds from \cite{kuzline}.
In order to use \cite{kuzline}, $X$ is described as a hyperplane section of a smooth cubic $5$-fold containing the plane such that the induced quadric surface bundle over $\mathbb{P}^3$ satisfies the required hypotheses on degeneration loci. Section \ref{surfbsec} in this paper provides a more direct proof. \end{ex} Now we consider applications to complete intersections of quadrics. Let \begin{equation*} X^{n,k}= \bigcap_{i=1}^k Q_i \subset \mathbb{P}^{n+1}, \quad k \leqslant n \end{equation*} be the complete intersection of $k$ quadrics $Q_i=\{q_i=0\}\subset \mathbb{P}^{n+1}$. Let \begin{equation*} p^{n,k}\colon \mathcal{Q}^{n,k}\to \mathbb{P}^{k-1} \end{equation*} be the corresponding net of quadrics, i.e., the fiber over $[a_1:\dots:a_k]\in \mathbb{P}^{k-1}$ is $\{\sum_{i=1}^k a_iq_i=0\}$. Then $\dim(X^{n,k})=n+1-k$ and $p^{n,k}$ is a flat quadric bundle of relative dimension $n$ whose associated quadratic form is \begin{equation*} q^{n,k}\colon \mathcal{O}_{\mathbb{P}^{k-1}}^{n+2}\to \mathcal{O}_{\mathbb{P}^{k-1}}(1). \end{equation*} When $X^{n,k}$ is Fano or Calabi-Yau, i.e., $n\geqslant 2k-2$, Theorem 5.5 in \cite{kuzqfib} states that there is a semiorthogonal decomposition \begin{equation}\label{cisod} \mathbf{D}^\mathrm{b}(X^{n,k})=\langle \mathbf{D}^\mathrm{b}(\mathbb{P}^{k-1}, \mathcal{B}_0^{n,k}), \mathcal{O}(1), \mathcal{O}(2), \dots, \mathcal{O}(n+2-2k) \rangle \end{equation} where $\mathcal{B}_0^{n,k}$ is the even Clifford algebra of $p^{n,k}\colon \mathcal{Q}^{n,k}\to \mathbb{P}^{k-1}$. \begin{prop} Assume that $n\geqslant 2k-2$, $n$ is even and write $n=2m+2$. Assume that the smooth locus of $X^{n,k}$ contains a $\mathbb{P}^m$. 
Then $\mathbb{P}^m\times \mathbb{P}^{k-1}\subset \mathcal{Q}^{n,k}$ is a smooth $m$-section of $p^{n,k}\colon \mathcal{Q}^{n,k}\to \mathbb{P}^{k-1}$ as in Definition \ref{regisodef} and there is a semiorthogonal decomposition \begin{equation*} \mathbf{D}^\mathrm{b}(X^{n,k})=\langle \mathbf{D}^\mathrm{b}(\bar{\mathcal{Q}}), \mathcal{O}(1), \mathcal{O}(2), \dots, \mathcal{O}(n+2-2k) \rangle \end{equation*} where $\bar{\mathcal{Q}}$ is the hyperbolic reduction of $p^{n,k}$ with respect to the smooth $m$-section constructed in Definition \ref{hypreddef}. \end{prop} \begin{proof} Choose some hyperplane $\mathbb{P}(W_{m-1})\subset \mathbb{P}^m:=\mathbb{P}(W_m)$. Let $p'\colon \mathcal{Q}'\to \mathbb{P}^{k-1}$ be the hyperbolic reduction of $p^{n,k}$ with respect to the smooth $(m-1)$-section $\mathbb{P}(W_{m-1})\times \mathbb{P}^{k-1}\subset \mathcal{Q}^{n,k}$. Then $p'$ is a flat quadric surface bundle with a smooth section given by the projectivization of $(W_m/W_{m-1})\otimes \mathcal{O}_{\mathbb{P}^{k-1}}$. Proposition 1.1 (3) in \cite{kuzqbhe} implies that \begin{equation*} \mathbf{D}^\mathrm{b}(\mathbb{P}^{k-1}, \mathcal{B}_0^{n,k})\cong \mathbf{D}^\mathrm{b}(\mathbb{P}^{k-1}, \mathcal{B}_0') \end{equation*} where $\mathcal{B}_0'$ is the even Clifford algebra of $p'$. Then $\mathbf{D}^\mathrm{b}(\mathbb{P}^{k-1}, \mathcal{B}_0')\cong \mathbf{D}^\mathrm{b}(\bar{\mathcal{Q}})$ by Theorem \ref{main1} or \ref{main2}, and we obtain the semiorthogonal decomposition from the SOD (\ref{cisod}). \end{proof} \begin{remark} Note that Proposition 1.1 (3) in \cite{kuzqbhe} only applies to flat quadric bundles, and $\bar{\mathcal{Q}}\to \mathbb{P}^{k-1}$ is not flat when $p^{n,k}\colon \mathcal{Q}^{n,k}\to \mathbb{P}^{k-1}$ has fibers of corank $2$. This is why the proof has to go through the intermediate hyperbolic reduction $p'\colon \mathcal{Q}'\to \mathbb{P}^{k-1}$. \end{remark} For smooth complete intersections of three quadrics, we have better results.
\begin{prop} \label{ci3q} Assume $\Bbbk$ is algebraically closed with $\operatorname{char}(\Bbbk)=0$. Let $Y^{2m}$ be the smooth complete intersection of three quadrics in $\mathbb{P}^{2m+3}$. Assume that $Y^{2m}$ contains a $\mathbb{P}^{m-1}$ for $m\geqslant 6$, which is automatically satisfied for $1\leqslant m \leqslant 5$. Then we have a semiorthogonal decomposition \begin{equation*} \mathbf{D}^\mathrm{b}(Y^{2m}) =\langle \mathbf{D}^\mathrm{b}(S^{2m}, \mathcal{A}^{2m}), \mathcal{O}_{Y^{2m}}(1), \mathcal{O}_{Y^{2m}}(2), \dots, \mathcal{O}_{Y^{2m}}(2m-2) \rangle \end{equation*} where $S^{2m}$ is the resolution of the double cover over $\mathbb{P}^2$ ramified along a nodal curve of degree $2m+4$ and $\mathcal{A}^{2m}$ is an Azumaya algebra on $S^{2m}$. Moreover, $Y^{2m}$ for $m\geqslant 3$ is rational and $Y^4$ is rational when $[\mathcal{A}^{4}]\in \operatorname{Br}(S^4)$ is trivial. \end{prop} \begin{proof} Firstly, we claim that $Y^{2m}$ contains a $\mathbb{P}^{m-1}$ for $1\leqslant m \leqslant 5$. Let $F_{m-1}$ be the Hilbert scheme of $\mathbb{P}^{m-1}$'s on a quadric of dimension $2m+2$. It is the zero locus of a section in \begin{equation*} \Gamma(\operatorname{Gr}(m, 2m+4), \operatorname{Sym}^2\mathcal{R}_m) \end{equation*} where $\mathcal{R}_m$ is the universal subbundle on $\operatorname{Gr}(m, 2m+4)$. Thus, $F_{m-1}\subset \operatorname{Gr}(m, 2m+4)$ has codimension at most $m(m+1)/2$. Since $3m(m+1)/2\leqslant \dim \operatorname{Gr}(m, 2m+4)$ when $1\leqslant m \leqslant 5$, $Y^{2m}$ contains a $\mathbb{P}^{m-1}$ in these cases. In the previous notation, $Y^{2m} = X^{2m+2, 3}$. Let $p'\colon \mathcal{Q}'\to \mathbb{P}^2$ be the hyperbolic reduction of $p^{2m+2,3}\colon \mathcal{Q}^{2m+2,3}\to \mathbb{P}^2$ with respect to the smooth $(m-1)$-section $\mathbb{P}^{m-1}\times \mathbb{P}^2\subset \mathcal{Q}^{2m+2,3}$. Then $p'$ is a flat quadric surface bundle.
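The dimension count in the claim above can be spelled out (our verification, not part of the original text): $\dim \operatorname{Gr}(m,2m+4) = m(m+4)$ and $\operatorname{rank} \operatorname{Sym}^2\mathcal{R}_m = m(m+1)/2$, so the locus of $\mathbb{P}^{m-1}$'s on the intersection of three quadrics has codimension at most $3m(m+1)/2$ in $\operatorname{Gr}(m,2m+4)$, and the inequality used in the proof reads:

```latex
% The expected-dimension estimate for P^{m-1}'s on Y^{2m}:
\begin{equation*}
\frac{3m(m+1)}{2} \leqslant \dim \operatorname{Gr}(m,2m+4) = m(m+4)
\ \Longleftrightarrow\ 3(m+1) \leqslant 2(m+4)
\ \Longleftrightarrow\ m \leqslant 5.
\end{equation*}
```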
Proposition 1.1 (3) in \cite{kuzqbhe} yields the Morita equivalence \begin{equation*} \mathbf{D}^\mathrm{b}(\mathbb{P}^2, \mathcal{B}_0^{2m+2,3})\cong \mathbf{D}^\mathrm{b}(\mathbb{P}^2, \mathcal{B}_0') \end{equation*} where $\mathcal{B}_0'$ is the even Clifford algebra of $p'$. From the smoothness of $Y^{2m}$, we get that $\mathcal{Q}^{2m+2,3}$ and $\mathcal{Q}'$ are also smooth. In addition, the (first) degeneration locus of $q^{2m+2,3}\colon \mathcal{O}_{\mathbb{P}^2}^{2m+4}\to \mathcal{O}_{\mathbb{P}^2}(1)$ is a curve of degree $2m+4$ and the degeneration locus is preserved under hyperbolic reduction. By Lemma \ref{bealem}, the curve is nodal along the locus where fibers of $p'$ have corank $2$. From Theorem \ref{main3}, we get that \begin{equation*} \mathbf{D}^\mathrm{b}(\mathbb{P}^2, \mathcal{B}_0')\cong \mathbf{D}^\mathrm{b}(S^{2m}, \mathcal{A}^{2m}) \end{equation*} and $[\mathcal{A}^{2m}]\in \operatorname{Br}(S^{2m})$ is trivial if and only if $p'$ has a rational section. We get the semiorthogonal decomposition from the SOD (\ref{cisod}). Lastly, by Example 1.4.4 in \cite{beaprym}, we have that $Y^{2m}, m\geqslant 2$ is birational to the hyperbolic reduction $\mathcal{Q}_l$ of $p^{2m+2,3}$ with respect to a smooth $1$-section $l\times \mathbb{P}^2$ where $l \cong \mathbb{P}^1$ is a line on $Y^{2m}$. It can be computed similarly that the Hilbert scheme of planes on a quadric of dimension $2m+2$ has codimension at most $6$ in $\operatorname{Gr}(3,2m+4)$. We get that $\mathcal{Q}_l\to \mathbb{P}^2$ has a section for $m\geqslant 3$. Hence, $Y^{2m}$ is rational for $m\geqslant 3$. Moreover, if $m=2$ and $[\mathcal{A}^4]=0$, then $\mathcal{Q}_l\cong \mathcal{Q}'\to \mathbb{P}^2$ has a rational section and thus $Y^4$ is rational. \end{proof} \begin{remark} We expect that Conjecture \ref{conj} holds for every flat quadric bundle of relative even dimension under the same hypotheses.
If it holds, then there is no need to pass through hyperbolic reduction before applying the results on quadric surface bundles. In this case, the assumption that $Y^{2m}$ contains a $\mathbb{P}^{m-1}$ can be removed for getting the semiorthogonal decomposition. \end{remark} \appendix \section{Non-commutative schemes}\label{ncschsec} In the appendix, we give an overview of derived categories of non-commutative schemes as in Definition \ref{ncdef}, generalizing Appendix D in \cite{kuzhs} and \cite[\S2.2]{xienodaldp5}. We will discuss the relations among $\mathbf{D}_{\operatorname{QCoh}}$, $\mathbf{D}(\operatorname{QCoh})$, $\mathbf{D}(\operatorname{Coh})$ and prove the projection formula (Proposition \ref{projf}). \begin{defn}\label{ncdef} A pair $(X,\mathcal{A}_X)$ is a \textit{non-commutative scheme} if $X$ is a noetherian scheme and $\mathcal{A}_X$ is a sheaf of $\mathcal{O}_X$-algebras which is a quasi-coherent $\mathcal{O}_X$-module. A morphism \begin{equation*} H=(h, h_{\mathcal{A}})\colon (X,\mathcal{A}_X)\to (Y, \mathcal{A}_Y) \end{equation*} of non-commutative schemes consists of a morphism $h\colon X\to Y$ of schemes and a homomorphism $h_{\mathcal{A}}\colon h^*\mathcal{A}_Y\to \mathcal{A}_X$ of $\mathcal{O}_X$-algebras. \end{defn} Denote by \begin{itemize} \item $\mathcal{A}_X^{\operatorname{op}}$ the opposite algebra of $\mathcal{A}_X$, \item $\operatorname{Mod}(X, \mathcal{A}_X)$ the category of right $\mathcal{A}_X$-modules, \item $\operatorname{QCoh}(X,\mathcal{A}_X)$ the category of quasi-coherent sheaves on $X$ with right $\mathcal{A}_X$-module structures, \item $\operatorname{Coh}(X,\mathcal{A}_X)$ the category of coherent sheaves on $X$ with right $\mathcal{A}_X$-module structures.
\end{itemize} Further denote by \begin{itemize} \item $\mathbf{D}, \mathbf{D}^-, \mathbf{D}^\mathrm{b}$ the unbounded, bounded above and bounded derived categories, \item $\mathbf{D}^*(X,\mathcal{A}_X)$ the derived category $\mathbf{D}^*(\operatorname{Coh}(X,\mathcal{A}_X))$ for $*=\emptyset, -, \textrm{b}$, \item $\mathbf{D}_{\operatorname{QCoh}}(X, \mathcal{A}_X)$ (resp., $\mathbf{D}_{\operatorname{Coh}}(X, \mathcal{A}_X)$) the unbounded derived category of right $\mathcal{A}_X$-modules with quasi-coherent (resp., coherent) cohomologies. \end{itemize} There are pairs of adjoint functors \begin{equation}\label{adfunctor} \begin{tikzcd}[column sep =large] \operatorname{QCoh}(X) \arrow[shift left=0.5ex]{r}{-\otimes \mathcal{A}_X} & \operatorname{QCoh}(X,\mathcal{A}_X) \arrow[shift left=0.5ex]{l}{j_Q}, \end{tikzcd} \begin{tikzcd}[column sep =large] \operatorname{Mod}(X) \arrow[shift left=0.5ex]{r}{-\otimes \mathcal{A}_X} & \operatorname{Mod}(X,\mathcal{A}_X) \arrow[shift left=0.5ex]{l}{j_M} \end{tikzcd} \end{equation} where $j_Q, j_M$ are forgetful functors, and $-\otimes \mathcal{A}_X$ is left adjoint to $j_Q$ and $j_M$. When $\mathcal{A}_X$ is a coherent sheaf, there is an additional pair of adjoint functors \begin{equation*} \begin{tikzcd}[column sep =large] \operatorname{Coh}(X) \arrow[shift left=0.5ex]{r}{-\otimes \mathcal{A}_X} & \operatorname{Coh}(X,\mathcal{A}_X) \arrow[shift left=0.5ex]{l}{j} \end{tikzcd} \end{equation*} where $j\colon \operatorname{Coh}(X, \mathcal{A}_X) \to \operatorname{Coh}(X)$ is the forgetful functor. Recall that the \textit{coherator} of $X$ is the functor $Q_X$ right adjoint to the inclusion $\operatorname{QCoh}(X)\hookrightarrow \operatorname{Mod}(X)$.
For example, if $X$ is affine, then $Q_X(\mathcal{F})$ for $\mathcal{F}\in\operatorname{Mod}(X)$ is the quasi-coherent sheaf $\widetilde{\Gamma(X, \mathcal{F})}$ associated with $\Gamma(X, \mathcal{F})$. Note that $\widetilde{\Gamma(X, \mathcal{F})}\in \operatorname{QCoh}(X,\mathcal{A}_X)$ if $\mathcal{F}\in \operatorname{Mod}(X,\mathcal{A}_X)$. Thus, $Q_X$ induces a coherator functor \begin{equation} Q_{\mathcal{A}_X}\colon \operatorname{Mod}(X, \mathcal{A}_X) \to \operatorname{QCoh}(X,\mathcal{A}_X), \end{equation} which is right adjoint to the inclusion $\operatorname{QCoh}(X,\mathcal{A}_X)\hookrightarrow \operatorname{Mod}(X,\mathcal{A}_X)$. Let $T$ be an abelian category and let $K(T)$ be its homotopy category. Recall that a complex $I^\bullet$ of objects in $T$ is called \textit{K-injective} if $\operatorname{Hom}_{K(T)}(M^\bullet, I^\bullet)=0$ for every acyclic complex $M^\bullet$. In particular, a bounded below complex of injectives is K-injective. For a scheme $X$ and a complex $\mathcal{K}^\bullet$ of $\mathcal{O}_X$-modules, $\mathcal{K}^\bullet$ is called \textit{K-flat} if the complex \begin{equation*} \operatorname{Tot}(\mathcal{F}^\bullet\otimes_{\mathcal{O}_X} \mathcal{K}^\bullet) \end{equation*} is acyclic for every acyclic complex $\mathcal{F}^\bullet$ of $\mathcal{O}_X$-modules. In particular, a bounded above complex of flat $\mathcal{O}_X$-modules is K-flat. We can define a similar notion for a non-commutative scheme $(X,\mathcal{A}_X)$.
We say that a right $\mathcal{A}_X$-module $\mathcal{K}$ is \textit{right flat} if $\mathcal{K} \otimes_{\mathcal{A}_X} -$ is an exact functor on $\operatorname{Mod}(X, \mathcal{A}_X^{\operatorname{op}})$ and a complex $\mathcal{K}^\bullet$ of right $\mathcal{A}_X$-modules is \textit{right K-flat} if the complex \begin{equation*} \operatorname{Tot}(\mathcal{K}^\bullet\otimes_{\mathcal{A}_X} \mathcal{F}^\bullet) \end{equation*} is acyclic for every acyclic complex $\mathcal{F}^\bullet$ of left $\mathcal{A}_X$-modules. As before, a bounded above complex of right flat $\mathcal{A}_X$-modules is right K-flat. Replacing $\mathcal{A}_X$ by $\mathcal{A}_X^{\operatorname{op}}$, we get the notions of left flat and left K-flat. \begin{lemma}\label{injflat} (1) $\operatorname{Mod}(X,\mathcal{A}_X)$ and $\operatorname{QCoh}(X,\mathcal{A}_X)$ are Grothendieck abelian categories and every complex in $\operatorname{Mod}(X, \mathcal{A}_X)$ or $\operatorname{QCoh}(X,\mathcal{A}_X)$ has a K-injective resolution. (2) For every complex $\mathcal{G}^\bullet$ of right $\mathcal{A}_X$-modules, there exists a right K-flat complex $\mathcal{K}^\bullet$ whose terms are right flat $\mathcal{A}_X$-modules and a quasi-isomorphism $\mathcal{K}^\bullet \to \mathcal{G}^\bullet$ which is termwise surjective. The same is true for complexes of left $\mathcal{A}_X$-modules. \end{lemma} \begin{proof} (1) $\operatorname{QCoh}(X)$ is a Grothendieck abelian category. The abelian category structure, direct sums and exact filtered colimits on $\operatorname{QCoh}(X)$ carry over to $\operatorname{QCoh}(X,\mathcal{A}_X)$. By the adjointness (\ref{adfunctor}), $\operatorname{QCoh}(X,\mathcal{A}_X)$ has a generator $U\otimes \mathcal{A}_X$ where $U$ is a generator of $\operatorname{QCoh}(X)$.
Hence, $\operatorname{QCoh}(X,\mathcal{A}_X)$ is a Grothendieck abelian category. The existence of K-injective complexes follows from \cite[Tag 079P]{stacks-project}. The proof for $\operatorname{Mod}(X,\mathcal{A}_X)$ is similar. (2) We only need to prove the claim for complexes of right $\mathcal{A}_X$-modules. The proof is a modification of \cite[Tag 06YF]{stacks-project} and it suffices to show that $\operatorname{Mod}(X,\mathcal{A}_X)$ has enough right flat objects. We know that $\operatorname{Mod}(X)$ has enough flat objects. For $\mathcal{G}\in \operatorname{Mod}(X,\mathcal{A}_X)$, there is a surjection $\mathcal{F}\twoheadrightarrow j_M(\mathcal{G})$ from a flat $\mathcal{O}_X$-module $\mathcal{F}$. Then its adjoint map $\mathcal{F}\otimes \mathcal{A}_X\to \mathcal{G}$ is also a surjection and $\mathcal{F}\otimes \mathcal{A}_X$ is a right flat $\mathcal{A}_X$-module. Now $\mathcal{K}^\bullet$ can be constructed in the same way as in \textit{loc. cit.}, as the filtered colimit of a suitable sequence of bounded above complexes of right flat $\mathcal{A}_X$-modules. \end{proof} Thanks to the lemma above, it makes sense to talk about the right adjoint functor $RQ_{\mathcal{A}_X}$. \begin{lemma}\label{qceqlem} The natural functor $\mathbf{D}(\operatorname{QCoh}(X,\mathcal{A}_X))\to \mathbf{D}_{\operatorname{QCoh}}(X,\mathcal{A}_X)$ is an equivalence with quasi-inverse given by $RQ_{\mathcal{A}_X}$. \end{lemma} \begin{proof} Since the coherator functors $Q_{\mathcal{A}_X}\colon \operatorname{Mod}(X, \mathcal{A}_X)\to \operatorname{QCoh}(X,\mathcal{A}_X)$ and $Q_X\colon \operatorname{Mod}(X)\to \operatorname{QCoh}(X)$ commute with the forgetful functors, the claim is a consequence of \cite[Tag 09T4]{stacks-project}.
\end{proof} \begin{lemma}\label{coheqlem} When $\mathcal{A}_X$ is coherent, the natural functors \begin{equation*} \mathbf{D}^*(X,\mathcal{A}_X)\to \mathbf{D}_{\operatorname{Coh}}^*(\operatorname{QCoh}(X,\mathcal{A}_X)) \to \mathbf{D}_{\operatorname{Coh}}^*(X,\mathcal{A}_X) \end{equation*} for $*=-, \textrm{b}$ are equivalences. \end{lemma} \begin{proof} The equivalence of the second functor follows from Lemma \ref{qceqlem}. For the first functor, we will modify the proof in \cite[Tag 0FDA]{stacks-project}. We claim that if there is a surjection $\mathcal{G} \twoheadrightarrow \mathcal{F}$ for $\mathcal{F}\in \operatorname{Coh}(X,\mathcal{A}_X)$ and $\mathcal{G}\in \operatorname{QCoh}(X,\mathcal{A}_X)$, then there is some $\mathcal{G}'\in \operatorname{Coh}(X,\mathcal{A}_X)$ that surjects onto $\mathcal{F}$. Consequently, the first functor is an equivalence by \cite[Tag 0FCL]{stacks-project}. We know $j_Q(\mathcal{G})$ is a filtered union of coherent submodules $\mathcal{G}_i$. Let $\mathcal{G}_i'= \operatorname{Im}(\mathcal{G}_i\otimes \mathcal{A}_X\to \mathcal{G})$ be the image of the map adjoint to the inclusion $\mathcal{G}_i\hookrightarrow j_Q(\mathcal{G})$. Then $\mathcal{G}$ is the filtered union of coherent $\mathcal{A}_X$-submodules $\mathcal{G}_i'$ and one of them can be taken as $\mathcal{G}'$.
\end{proof} Given a morphism $H=(h, h_{\mathcal{A}})\colon (X,\mathcal{A}_X)\to (Y, \mathcal{A}_Y)$, a \textit{push-forward} functor is defined by \begin{equation*} H_* \colon \operatorname{QCoh}(X,\mathcal{A}_X) \to \operatorname{QCoh}(Y,\mathcal{A}_Y) \end{equation*} where, as a quasi-coherent sheaf, $H_*\mathcal{F}$ is given by $h_*\mathcal{F}$ for $\mathcal{F}\in \operatorname{QCoh}(X,\mathcal{A}_X)$, and the right $\mathcal{A}_Y$-module structure on $h_*\mathcal{F}$ is induced by \begin{equation*} (h_*\mathcal{F})\otimes \mathcal{A}_Y \cong h_*(\mathcal{F}\otimes h^*\mathcal{A}_Y) \xrightarrow{h_{\mathcal{A}}} h_*(\mathcal{F}\otimes \mathcal{A}_X)\to h_*\mathcal{F}. \end{equation*} A \textit{pull-back} functor is defined by \begin{equation*} H^*\colon \operatorname{QCoh}(Y,\mathcal{A}_Y) \to \operatorname{QCoh}(X,\mathcal{A}_X) \end{equation*} where for $\mathcal{G}\in \operatorname{QCoh}(Y,\mathcal{A}_Y)$, \begin{equation*} H^*\mathcal{G}:=(h^*\mathcal{G})\otimes_{h^*\mathcal{A}_Y} \mathcal{A}_X \cong (h^{-1}\mathcal{G})\otimes_{h^{-1}\mathcal{A}_Y} \mathcal{A}_X. \end{equation*} In the lemma below, we keep the same notations $RH_*$, $LH^*$ for the derived push-forward and pull-back functors induced from the original ones, respectively. \begin{lemma}\label{derivedfun} Let $H=(h, h_{\mathcal{A}})\colon (X,\mathcal{A}_X)\to (Y, \mathcal{A}_Y)$ be a morphism between non-commutative schemes. (1) There exists a right derived functor \begin{equation*} RH_* \colon \mathbf{D}(\operatorname{QCoh}(X,\mathcal{A}_X)) \to \mathbf{D}(\operatorname{QCoh}(Y,\mathcal{A}_Y)). \end{equation*} When $h$ is proper and $\mathcal{A}_Y$ is coherent, it induces \begin{equation*} RH_* \colon \mathbf{D}^*(X,\mathcal{A}_X) \to \mathbf{D}^*(Y,\mathcal{A}_Y) \end{equation*} for $*=-, \textrm{b}$.
(2) There exists a left derived functor \begin{equation*} LH^*\colon \mathbf{D}(\operatorname{QCoh}(Y,\mathcal{A}_Y)) \to \mathbf{D}(\operatorname{QCoh}(X,\mathcal{A}_X)). \end{equation*} When $\mathcal{A}_X$ is coherent, it induces \begin{equation*} LH^*\colon \mathbf{D}^-(Y,\mathcal{A}_Y) \to \mathbf{D}^-(X,\mathcal{A}_X). \end{equation*} (3) $H^* \dashv H_*$ and $LH^*\dashv RH_*$ are adjoint functors. \end{lemma} \begin{proof} (1) By Lemma \ref{injflat} (1), K-injective resolutions exist in $\mathbf{D}(\operatorname{QCoh}(X,\mathcal{A}_X))$ and thus the right derived functor $RH_*$ can be defined. When $h$ is proper, we have an induced functor \begin{equation*} RH_* \colon \mathbf{D}_{\operatorname{Coh}}(\operatorname{QCoh}(X,\mathcal{A}_X)) \to \mathbf{D}_{\operatorname{Coh}}(\operatorname{QCoh}(Y,\mathcal{A}_Y)). \end{equation*} When $\mathcal{A}_Y$ is coherent, from Lemma \ref{coheqlem}, we get the right derived functor \begin{equation*} RH_* \colon \mathbf{D}^*(X,\mathcal{A}_X) \to \mathbf{D}_{\operatorname{Coh}}^*(\operatorname{QCoh}(X,\mathcal{A}_X)) \to \mathbf{D}_{\operatorname{Coh}}^*(\operatorname{QCoh}(Y,\mathcal{A}_Y))\cong \mathbf{D}^*(Y,\mathcal{A}_Y) \end{equation*} for $*=-, \textrm{b}$.
(2) Given $\mathcal{G}^\bullet \in \mathbf{D}(\operatorname{QCoh}(Y,\mathcal{A}_Y))$, we define \begin{equation*} LH^*\mathcal{G}^\bullet:= RQ_{\mathcal{A}_X}(H^*\mathcal{K}^\bullet) \in \mathbf{D}(\operatorname{QCoh}(X,\mathcal{A}_X)) \end{equation*} where $\mathcal{K}^\bullet$ is the right K-flat resolution of $\mathcal{G}^\bullet$ constructed in Lemma \ref{injflat} (2) and $RQ_{\mathcal{A}_X}$ is the derived coherator in Lemma \ref{qceqlem}. Standard arguments show that this is well-defined. Given $\mathcal{G}\in \operatorname{Coh}(Y, \mathcal{A}_Y)$, we claim that $H^*\mathcal{G}\in\operatorname{Coh}(X,\mathcal{A}_X)$ when $\mathcal{A}_X$ is coherent. This is a local question. Assume that $(X,\mathcal{A}_X) \cong (\operatorname{Spec} A, \widetilde{R_A})$, $(Y,\mathcal{A}_Y) \cong (\operatorname{Spec} B, \widetilde{R_B})$ and $\mathcal{G} \cong \widetilde{M}$. There is a surjection $B^n\twoheadrightarrow M$ for some $n$ and it induces surjections \begin{equation*} R_B^n\twoheadrightarrow M, \quad R_A^n\twoheadrightarrow M\otimes_{R_B} R_A. \end{equation*} Then $M\otimes_{R_B} R_A$ is a finitely generated $A$-module because $R_A$ is a finitely generated $A$-module. Given $\mathcal{G}^\bullet \in \mathbf{D}^-(Y,\mathcal{A}_Y)$, we have \begin{equation*} LH^*\mathcal{G}^\bullet \in \mathbf{D}_{\operatorname{Coh}}^-(\operatorname{QCoh}(X,\mathcal{A}_X)) \cong \mathbf{D}^-(X,\mathcal{A}_X), \end{equation*} where the equivalence is given by Lemma \ref{coheqlem}. (3) It suffices to prove the adjointness for $H^*\dashv H_*$.
For $\mathcal{F}\in \operatorname{Mod}(X,\mathcal{A}_X)$ and $\mathcal{G}\in\operatorname{Mod}(Y,\mathcal{A}_Y)$, we have \begin{align*} \operatorname{Hom}_{\mathcal{A}_X}(H^*\mathcal{G},\mathcal{F}) & \cong\operatorname{Hom}_{\mathcal{A}_X}(h^*\mathcal{G}\otimes_{h^*\mathcal{A}_Y}\mathcal{A}_X, \mathcal{F}) \\ & \cong \operatorname{Hom}_{h^*\mathcal{A}_Y}(h^*\mathcal{G}, \mathcal{F})\\ & \cong \operatorname{Hom}_{\mathcal{A}_Y}(\mathcal{G}, H_*\mathcal{F}). \end{align*} \end{proof} \begin{prop}[Projection formula] \label{projf} Let $H=(h, h_{\mathcal{A}})\colon (X,\mathcal{A}_X)\to (Y, \mathcal{A}_Y)$ be a morphism of non-commutative schemes as in Definition \ref{ncdef}. (1) Given $\mathcal{F}\in \mathbf{D}(\operatorname{QCoh}(X,\mathcal{A}_X^{\operatorname{op}}))$ and $\mathcal{G}\in \mathbf{D}(\operatorname{QCoh}(Y,\mathcal{A}_Y))$, there is a natural map \begin{equation}\label{projmap} \mathcal{G} \otimes_{\mathcal{A}_Y}^{L} RH_*(\mathcal{F}) \to RH_*(LH^*(\mathcal{G}) \otimes_{\mathcal{A}_X}^{L} \mathcal{F}), \end{equation} and it is an isomorphism in $\mathbf{D}(\operatorname{QCoh}(Y))$. (2) Assume that $h\colon X\to Y$ is proper and $\mathcal{A}_X, \mathcal{A}_Y$ are coherent. Given $\mathcal{F}\in \mathbf{D}^-(X,\mathcal{A}_X^{\operatorname{op}})$ and $\mathcal{G}\in \mathbf{D}^-(Y,\mathcal{A}_Y)$, the natural map (\ref{projmap}) is an isomorphism in $\mathbf{D}^-(Y)$. \end{prop} \begin{proof} The derived functors involved are defined in Lemma \ref{derivedfun}, and the natural map (\ref{projmap}) is induced by the adjunction $LH^*\dashv RH_*$. The proof of the proposition is the same as that of Lemma 2.5 in \cite{xienodaldp5}. \end{proof} \end{document}
\begin{document} \title{Seed-Based Plant Propagation Algorithm: The Feeding Station Model} \noindent \textbf{Keywords:} Unconstrained global optimization, constrained optimization, engineering problems, seed-based plant propagation \begin{abstract} \noindent The seasonal production of fruit and seeds resembles the opening of a feeding station, such as a restaurant: agents/customers arrive at a certain rate and pick fruit (get served) at a certain rate, following some appropriate stochastic processes. Dispersion therefore follows the resource process. Modelling this process results in a search/optimisation algorithm that uses dispersion as an exploration tool which, if well captured, will find the optimum of a function over a given search space. This paper presents such an algorithm and tests it on non-trivial problems. \end{abstract} \section{Introduction} A variety of plants have evolved ingenious ways to propagate. Propagation through seeds is perhaps the most common of them all, and one which takes advantage of all sorts of agents, ranging from wind and water to birds and animals. Besides propagation using runners, the strawberry plant uses seeds as well. These seeds are judiciously placed on the surface of a very tasty and brightly coloured fruit, the strawberry, which attracts a variety of agents such as birds and animals, including humans, that help the propagation. Plants rely heavily on the dispersion of their seeds to colonise new territories and to improve their survival \cite{herrera2009plant,herrera2002seed}. There are many studies and models of seed dispersion, particularly for trees \cite{abrahamson1989plant,andersen1996plant,bryant1990plant,herrera2002seed,herrera2009plant}. Dispersion by wind and ballistic means are probably the most studied of all approaches \cite{glover2007understanding,yang2012flower,yang2013multi}.
However, in the case of the strawberry plant, given the way the seeds stick to the surface of the fruit, Figure (1) \cite{du1996efficient}, dispersion by wind or mechanical means is very limited. Animals, and birds in particular, are in this case the ideal agents of dispersion \cite{krefting1949role,wenny1998directed,herrera2009plant,herrera2002seed}. There are many biologically inspired optimization algorithms in the literature \cite{brownlee2011clever,yang2011nature}. The Flower Pollination Algorithm (FPA) is inspired by the pollination of flowers through different agents \cite{yang2012flower}; the Swarm data clustering algorithm is inspired by pollination by bees \cite{kazemian2006swarm}; Particle Swarm Optimization (PSO) is inspired by the foraging behavior of a school of fish or a flock of birds \cite{eberhart1995new,clerc2010particle}; Artificial Bee Colony (ABC) simulates the foraging behavior of honey bees \cite{karaboga2005idea,karaboga2008performance}; the Firefly algorithm is inspired by the flashing of fireflies trying to attract a mate \cite{yang2010firefly,gandomi2011mixed}; and Social Spider Optimization (SSO-C) is inspired by the cooperative behavior of social spiders \cite{cuevas2014new}, to name a few. The Plant Propagation Algorithm (PPA), also known as the strawberry algorithm, was inspired by the way plants, and specifically strawberry plants, propagate using runners \cite{Salhi2010PPA,sulaiman2014engineering}. The attraction of PPA is that it can be implemented easily for all sorts of optimization problems. Moreover, it has few algorithm-specific arbitrary parameters. PPA follows the principle that plants in good spots with plenty of nutrients send many short runners, and few long runners when in nutrient-poor spots. Long runners allow PPA to explore the search space, while short runners enable it to exploit the solution space well.
It is desirable to improve the performance of PPA in terms of convergence and efficiency. In this paper we present a variant of PPA, called the Seed-based Plant Propagation Algorithm (SbPPA), based on the feeding station model. The main idea is inspired by the way frugivorous birds disperse the seeds of the strawberry. The strawberry plant attracts frugivores, which spread its seeds for conservation in many habitats and over long distances \cite{telleria2005conservation}. However, the spatial distribution of the seeds depends on the availability of strawberries on the plants and on the number of visits by different agents to eat fruit. SbPPA is tested on both unconstrained and constrained benchmark problems also used in \cite{kiran2013recombination,cuevas2014new}. Experimental results are presented in Tables 3-4 in terms of best, mean, worst and standard deviation for all algorithms. The paper is organised as follows: in Section \rom{2} we briefly introduce the feeding station model, representing strawberry plants bearing fruit, and the main characteristics of the paths followed by the different agents that disperse the seeds. Section \rom{3} presents SbPPA in pseudo-code form. The experimental settings, results and convergence graphs for different problems are given in Section \rom{4}. In Section \rom{5} the conclusion and possible future work are given. \section{Aspects of the Feeding Station Model of the Strawberry Plant} Some animals and plants depend on each other to conserve their species \cite{stork1993extinction}. Thus, many plants require, for effective seed dispersal, visits by frugivorous birds or animals according to a certain distribution \cite{herrera2002seed,herrera2009plant,jordano2000fruits,debussche1994bird}. Seed dispersal by different agents is also called the ``seed shadow''; this describes the abundance of seeds spread locally or globally around the parent plants.
In this context, the strawberry feeding station model is divided into two parts: (1) the quantity of fruit or seeds available to agents, or the rate at which agents visit the plants, and (2) a probability density that describes the service rate at which agents are served by the parent plants. This model tells us the quantity of seeds that is spread locally compared to that dispersed globally \cite{janzen1970herbivores,levin1976population,geritz1984efficacy,levin1984dispersal,augspurger1987wind}. Two aspects need to be balanced: first, exploitation, which is represented by the dispersal of seeds around the parent plants; secondly, exploration, which ensures that the search space is well covered. As a queuing system \cite{cooper1972introduction}, the model has two basic components: (1) the rate at which agents arrive at the strawberry plants, and (2) the rate at which agents eat fruit and leave the plants to disperse the seeds. Agents arrive at the plants in a random process. Assume that, during any unit of time in which fruit is available, at most one agent arrives at a time; this is the orderliness condition. It is further supposed that the probability of arrival of agents at the plants remains the same for a particular period of time: the arrival rate of agents is higher when there is ripe fruit on the plants and remains the same for a further period when there is no fruit; this is the stationarity condition. The arrival of one agent does not affect the other arrivals; this is the independence condition. Based on these assumptions, the number of agents arriving during a cycle $t$ of fruit production by the strawberry plants can be modelled by a Poisson random variable $X'$ \cite{lawrence2002applied}.
This can be expressed mathematically as \begin{equation} P(X'=k)=\frac{(\lambda t)^k e^{-\lambda t}}{k!}, \end{equation} where $\lambda$ denotes the mean arrival rate of agents per unit time and $t$ is the length of the time interval. On the other hand, the time taken by an agent to successfully eat fruit and leave to disperse the seeds, in other words the service time, is described by a random variable which follows the exponential probability distribution \cite{ang2004probability}. This can be expressed as \begin{equation} S(t)=\mu e^{-\mu t}, \end{equation} where $\mu$ is the average number of agents that can eat fruit at time $t$. Since some fruit falls to the ground around the plants after becoming fully ripe, the number of arrivals is less than the number of fruits available on the plants; mathematically, the arrival rate of agents is less than the service rate, where $\lambda<\mu$. We assume that the system is in steady state. Let $A$ denote the average number of agents at the plants, and $A_q$ the average number of agents in the queue. If we denote the average number of agents eating fruit by $\frac{\lambda}{\mu}$, then by Little's formulas \cite{little1961proof} we have \begin{equation} A=A_q+\frac{\lambda}{\mu}. \end{equation} Based on Equation (3), we need to solve the following problem: \begin{equation} \mbox{Maximize } A_q=A-\frac{\lambda}{\mu}, \end{equation} subject to \begin{equation} g_{1}(\lambda,\mu)\colon \lambda,\mu > 0, \qquad g_{2}(\lambda,\mu)\colon \lambda < \mu+1, \end{equation} where $A=10$ represents the population size in the implementation, and the simple bounds on the variables are $0< \lambda,\mu\leq 100$. After solving the problem we get $\lambda=1.1$, $\mu=0.1$ and $A_q=1$.
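The arrival and service models above, Equations (1) and (2), can be evaluated directly; the following is a minimal sketch (function names are ours, not from the paper) using the mean arrival rate $\lambda=1.1$ of the implementation, and showing which arrival counts $k$ fall below the $0.05$ probability threshold that the algorithm later uses as a switch:

```python
import math

def poisson_pmf(k, lam, t=1.0):
    """Equation (1): probability that k agents arrive during a cycle of length t."""
    return (lam * t) ** k * math.exp(-lam * t) / math.factorial(k)

def service_density(t, mu):
    """Equation (2): exponential service-time density with rate mu."""
    return mu * math.exp(-mu * t)

lam = 1.1  # mean arrival rate used in the paper's implementation
probs = [poisson_pmf(k, lam) for k in range(10)]
print(round(sum(probs), 6))  # -> 1.0 (the pmf mass over k = 0..9)
# Arrival counts whose probability is below 0.05 are the rare events that
# trigger global (Levy-flight) dispersion in SbPPA.
print([k for k in range(10) if poisson_pmf(k, lam) < 0.05])  # -> [4, 5, 6, 7, 8, 9]
```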
Moreover, frugivores may travel long distances and disperse seeds far away from the parent SP; in doing so, they obey a L\'{e}vy distribution \cite{thompson1942growth,van2007dispersal,reynolds2007free}. \subsection{L\'{e}vy distribution} Randomization in metaheuristics is generally achieved by utilizing pseudorandom numbers drawn from some suitable stochastic processes. The L\'{e}vy distribution is one such probability distribution for random variables. Here the random variables represent the directions of arbitrary flights by frugivores; they range over real numbers in a domain called the ``search space''. The flight lengths of the agents served by the SP are assumed to follow a heavy-tailed power-law distribution, \begin{equation} L(s)\sim |s|^{-1-\beta}, \end{equation} where $L(s)$ denotes the L\'{e}vy distribution with index $\beta\in(0, 2)$. L\'{e}vy flights are random excursions whose step lengths are drawn from (6). Another form of the L\'{e}vy distribution can be written as \begin{equation} L(s, \gamma, \mu)= \begin{dcases} \sqrt{\frac{\gamma}{2\pi}}\exp\left[-\frac{\gamma}{2(s-\mu)}\right] \left( \frac{1}{s-\mu}\right)^{\frac{3}{2}}, & 0<\mu <s< \infty, \\ 0, & \mbox{otherwise}, \end{dcases} \end{equation} which implies that \begin{equation} \lim_{s\rightarrow\infty}L(s, \gamma, \mu)=\sqrt{\frac{\gamma}{2\pi}}\left( \frac{1}{s}\right)^{\frac{3}{2}}. \end{equation} In terms of the Fourier transform \cite{yang2011nature}, the limiting form of $L(s)$ can be written as \begin{equation} \lim_{s\rightarrow\infty}L(s)=\frac{\alpha\beta\Gamma(\beta)\sin(\frac{\pi\beta}{2})}{\pi|s|^{1+\beta}}, \end{equation} where $\Gamma(\beta)$ is the Gamma function defined by \begin{equation} \Gamma(\beta)=\int_0^\infty x^{\beta-1}e^{-x}\,dx. \end{equation} The steps are generated by using Mantegna's algorithm.
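Mantegna's algorithm is only named in the text; a common form of it (our sketch, under the usual choice of $\sigma_u$) draws a step $s = u/|v|^{1/\beta}$ with $u\sim N(0,\sigma_u^2)$ and $v\sim N(0,1)$:

```python
import math
import random

def mantegna_step(beta=1.5, rng=random):
    """One Levy-flight step length via Mantegna's algorithm:
    s = u / |v|^(1/beta), u ~ N(0, sigma_u^2), v ~ N(0, 1),
    with sigma_u chosen so the steps follow a Levy-stable law of index beta."""
    sigma_u = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
               / (math.gamma((1 + beta) / 2) * beta
                  * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.gauss(0.0, sigma_u)
    v = rng.gauss(0.0, 1.0)
    return u / abs(v) ** (1 / beta)

random.seed(42)
steps = [mantegna_step() for _ in range(1000)]
# Most steps are short (local moves), but the heavy tail produces occasional
# very long jumps -- the signature of a Levy flight.
```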
This algorithm ensures that the behaviour of the L\'{e}vy flights is symmetric and stable, as shown in Figure (3b). \begin{figure} \caption{Strawberry plant propagation through seed dispersion \cite{wikiStrawberry,Ruth,monacoeye,lifeisfull}: (a) strawberry fruit with seeds; (b) strawberry garden flower; (c) a fruit eaten by bird(s); (d) a bird eating strawberries; (e) strawberry plants spreading seeds and sending runners around them.} \label{fig:animals} \end{figure} \section{Strawberry Plant Propagation Algorithm: The Feeding Station Model} The Plant Propagation Algorithm (PPA), recently developed in \cite{Salhi2010PPA,sulaiman2014engineering}, emulates the way strawberry plants (SP) propagate by runners. Here we consider propagation through seeds. The main objective of SbPPA is the optimal reproduction of new plants through seed dispersion by different dispersal means. We assume that the arrival of different agents at the plants to eat fruit follows a Poisson distribution. The mean arrival rate is $\lambda=1.1$, and $NP=10$ is the total number of agents in our population. Let $k=1,2,\ldots,A$ be the number of agents visiting the plants per unit time. Using these assumptions we obtain Figure (2) according to Equation (1). \begin{figure} \caption{Agents arriving at strawberry plants to eat fruit and disperse seeds} \end{figure} The probability $Poiss(\lambda)<0.05$ means that the chances for seeds to be taken far away from the SP are low, and propagation is supported either by runners or by seeds fallen from the plants.
In this case, Equation (11) below is used, which helps the algorithm exploit the search space, \begin{equation} x^*_{i,j} = \begin{cases} x_{i,j}+\xi_j(x_{i,j}-x_{l,j}) &\mbox{if } PR\leq 0.8, \\ x_{i,j} & \mbox{otherwise}, \end{cases} \end{equation} where $PR$ denotes the perturbation rate, which tunes the intensity of the displacements by which the seeds are dispersed locally around the SP, $x^*_{i,j}, x_{i,j}\in[a_j, b_j]$ are the $j^{th}$ coordinates of the seeds $X^*_i$ and $X_i$ respectively, $a_j$ and $b_j$ are the $j^{th}$ lower and upper bounds defining the search space of the problem, and $\xi_j\in [-1, 1]$. The indices $l$ and $i$ are mutually exclusive. On the other hand, if $Poiss(\lambda)\geq0.05$ (we choose $0.05$ to give more weight to global dispersion), the role of global dispersion is played entirely by seeds; this is implemented using the following equation, \begin{equation} x^*_{i,j} = \begin{cases} x_{i,j}+\mathrm{L_i}(x_{i,j}-\theta_j) &\mbox{if } PR\leq 0.8,\mbox{ }\theta_j\in[a_j, b_j],\\ x_{i,j} & \mbox{otherwise}. \end{cases} \end{equation} Here $\mathrm{L_i}$ is a step drawn from the L\'{e}vy distribution \cite{yang2011nature} and $\theta_j$ is a random coordinate within the search space. The effects on the current solutions of the perturbations applied by Equations (11) and (12) are shown in Figure (3). As mentioned in the pseudo-code of SbPPA, we first collect the best solutions from the first $NP$ trial runs to form a population of potentially good solutions denoted by $pop_{best}$. The convergence rate of SbPPA is shown in Figures (4-5) for different test problems used in our experiments. The statistical results (best, worst, mean and standard deviation) are calculated based on $pop_{best}$.
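The two update rules can be sketched as follows (a minimal Python sketch with hypothetical helper names; `levy_step` stands for the Mantegna draw $\mathrm{L_i}$, and the `rand <= PR` guard follows the pseudo-code rather than the literal `PR <= 0.8` condition printed in the equations):

```python
import random

PR = 0.8  # perturbation rate

def clip(v, a, b):
    """Keep a perturbed coordinate inside the box [a, b]."""
    return min(max(v, a), b)

def local_update(x, pop, i, bounds):
    """Equation (11): local dispersion of seed i around the parent plant,
    using a randomly chosen partner seed l != i."""
    l = random.choice([k for k in range(len(pop)) if k != i])
    new = list(x)
    for j, (a, b) in enumerate(bounds):
        if random.random() <= PR:
            xi = random.uniform(-1.0, 1.0)
            new[j] = clip(x[j] + xi * (x[j] - pop[l][j]), a, b)
    return new

def global_update(x, bounds, levy_step):
    """Equation (12): global dispersion by a Levy-flight step towards a
    random point theta of the search space."""
    new = list(x)
    for j, (a, b) in enumerate(bounds):
        if random.random() <= PR:
            theta = random.uniform(a, b)
            new[j] = clip(x[j] + levy_step * (x[j] - theta), a, b)
    return new
```

Clipping to the box is an implementation choice we add so perturbed seeds stay inside $[a_j, b_j]$; the paper only states that coordinates lie in that range.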
\begin{figure} \caption{Overall performance of SbPPA on problem 16} \end{figure} The seed-based propagation process of the SP can be represented in the following steps: \begin{enumerate} \item The dispersal of seeds, or propagation by runners, in the neighbourhood of the SP, as shown in Figure ${1_e}$, is carried out either by fruit fallen from the strawberry plants after it becomes ripe or by runners. The step lengths for this phase are calculated using Equation (11). \item Seeds are spread globally by frugivores, as shown in Figure $1_{c,d}$. The step lengths for these travelling agents are drawn from the L\'{e}vy distribution. \item The probability, $Poiss(\lambda)$, that a certain number $k$ of agents will arrive at the SP to eat fruit and disperse the seeds is used as a switch between global and local search. \end{enumerate} For the implementation, we assume that each SP produces one fruit and each fruit has one seed; by a solution $X_i$ we mean the position of the $i^{th}$ seed to be dispersed. The number of seeds in the population is denoted by $NP$. Initially we generate a random population of $NP$ seeds using Equation (13), \begin{equation}x_{i,j}=a_j+(b_j-a_j)\eta_j, \quad j=1,...,n,\end{equation} \noindent where $x_{i,j}\in[a_j,b_j]$ is the $j^{th}$ entry of solution $X_i$, $a_j$ and $b_j$ are the $j^{th}$ coordinates of the bounds describing the search space of the problem, and $\eta_j\in (0,1)$. Thus $X_{i}=[x_{i,j}], \mbox{ for } j=1,...,n,$ represents the position of the $i^{th}$ seed in the population $pop$.
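Equation (13) translates directly into code; a small sketch (the function name is ours), assuming the search box is given as a list of $(a_j, b_j)$ pairs:

```python
import random

def init_population(NP, bounds):
    """Equation (13): x_ij = a_j + (b_j - a_j) * eta_j with eta_j ~ U(0, 1);
    returns NP seed positions inside the box."""
    return [[a + (b - a) * random.random() for (a, b) in bounds]
            for _ in range(NP)]

random.seed(1)
pop = init_population(10, [(-5.0, 5.0)] * 4)  # NP = 10 seeds in 4 dimensions
```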
\begin{algorithm}\label{MPPA} \caption{\textbf{Seed-based Plant Propagation Algorithm (SbPPA): The Feeding Station Model}} \begin{algorithmic}[1] \State Initialize: $g_{max}\leftarrow$ maximum number of generations, $max_{eval}\leftarrow$ maximum function evaluations, $r\leftarrow$ counter for trial runs \State Set $r=1$ \If {$r\leq NP$} \State \begin{varwidth}[t]{\linewidth}Create a random population of seeds $pop=\{X_i\mid i=1,2,...,NP\}$, using Equation (13) \par and add the best solutions from each trial run to $pop_{best}$. \end{varwidth} \State Evaluate the population. \EndIf \While {$r>NP$} \State Use population $pop_{best}$. \EndWhile \State Set $ngen=1$ \While {($ngen$ $<$ $g_{max}$) $\textbf{or}$ ($n_{eval} < max_{eval}$)} \For {$i=1$ to $NP$} \If {$Poiss(\lambda)_i \geq 0.05 $} \Comment (Global or local seed dispersion) \For {$j=1$ to $n$} \Comment ($n$ is the number of dimensions) \If {rand$ \leq PR $} \Comment (PR = perturbation rate) \State Update the current entry according to Equation (12) \EndIf \EndFor \Else \For {$j=1$ to $n$} \If {rand$ \leq PR $} \State Update the current entry according to Equation (11) \EndIf \EndFor \EndIf \EndFor \EndWhile \State \emph{\textbf{Return}:} the updated current population. \end{algorithmic} \end{algorithm} \section{Experimental Settings and Discussion} In our experiments we test SbPPA against other state-of-the-art algorithms. Our set of test problems includes benchmark constrained and unconstrained optimization problems \cite{suganthan2005problem,liang2006problem,cuevas2014new}. The results are compared in terms of the best, worst, mean and standard deviation obtained by SbPPA, ABC \cite{karaboga2005idea,karaboga2011modified}, PSO \cite{he2007hybrid}, FF \cite{gandomi2011mixed}, HPA \cite{kiran2013recombination} and SSO-C \cite{cuevas2014new}.
The detailed descriptions of these problems are given in Appendix \rom{1}. The significance of the results is indicated according to the following notation: \begin{itemize} \item (+) when SbPPA is better \item (-) when SbPPA is worse \item ($\approx$) when the results are approximately the same as SbPPA. \end{itemize} \subsection{Parameter Settings} The parameter settings are given in Tables 1-2: {\fontsize{12}{12} \selectfont \begin{table}[htp] \centering \footnotesize\setlength{\tabcolsep}{2.5pt} \begin{minipage}{15.4cm} \caption{Parameters used for each algorithm for solving unconstrained global optimization problems $f_{1}-f_{10}$. All experiments are repeated 30 times.} \begin{tabular}{l@{\hspace{6pt}} *{4}{l}} \toprule PSO \cite{eberhart1995new,kiran2013recombination} & ABC \cite{karaboga2005idea,kiran2013recombination} & HPA \cite{kiran2013recombination} & SbPPA \\ \midrule M=100 & SN=100 & Agents=100 & NP=10 \\ $G_{max}=\frac{(Dimension\times20,000)}{NP}$ & MCN=$\frac{(Dimension\times20,000)}{NP}$ & Iteration number=$\frac{(Dimension\times20,000)}{NP}$ & Iteration number=$\frac{(Dimension\times20,000)}{NP}$ \\[2mm] $c_1=2$ & MR=0.8 & $c_1=2$ & PR=0.8, $Poiss(\lambda)=0.05$ \\[2mm] $c_2=2$ & limit=$\frac{(SN\times dimension)}{2}$ & $c_2=2$ & $k=1,2,\ldots,A$ \\[2mm] W= $\frac{(G_{max}-iteration_{index})}{G_{max}}$ & - & limit=$\frac{(SN\times dimension)}{2}$ & $\lambda=1.1$ \\[2mm] - & - & W= $\frac{(G_{max}-iteration_{index})}{G_{max}}$ & - \\ \bottomrule \end{tabular} \end{minipage} \label{tab:addlabel} \end{table}} {\fontsize{12}{12} \selectfont \begin{table}[htp] \centering \footnotesize\setlength{\tabcolsep}{2.5pt} \begin{minipage}{15cm} \caption{Parameters used for each algorithm for solving constrained optimization problems $f_{11}-f_{18}$. All experiments are repeated 30 times.} \begin{tabular}{l@{\hspace{6pt}} *{5}{l}} \toprule PSO \cite{he2007hybrid} & ABC \cite{karaboga2011modified} & FF \cite{gandomi2011mixed} & SSO-C \cite{cuevas2014new} & SbPPA \\ \midrule
M=250 & sn=40 & Fireflies=25 & N=50 & NP=10 \\[2mm] $G_{max}=300$ & MCN=6000 & Iteration number= 2000 & Iteration number=500 & Iteration number=2400 \\[2mm] $c_1=2$ & MR=0.8 & q=1.5 & PF=0.7 & PR=0.8, $Poiss(\lambda)=0.05$ \\[2mm] $c_2=2$ & - & $\alpha=0.001$ & - &$k=1,2,\ldots,A$ \\[2mm] Weight factors= 0.9 to 0.4 & - & - & - & $\lambda=1.1$ \\ \bottomrule \end{tabular} \end{minipage} \label{tab:addlabel} \end{table} } {\fontsize{12}{12} \selectfont \begin{table}[htp] \centering \footnotesize\setlength{\tabcolsep}{2.5pt} \begin{minipage}{9.5cm} \caption{Results obtained by SbPPA, HPA, PSO and ABC. All problems in this table are unconstrained.} \begin{tabular}{l@{\hspace{6pt}} *{7}{l}} \toprule Fun & Dim & Algorithm & Best & Worst & Mean & SD \\ \midrule 1 & 4 & ABC & (+)\mbox{ }0.0129 & (+)\mbox{ }0.6106 &(+)\mbox{ } 0.1157 &(+) \mbox{ }0.111 \\ & & PSO & (-)\mbox{ }6.8991E-08 & (+)\mbox{ }0.0045 & (+)\mbox{ }0.001 & (+)\mbox{ }0.0013 \\ & & HPA & (+)\mbox{ }2.0323E-06 & (+)\mbox{ }0.0456 & (+)\mbox{ }0.009 &(+) \mbox{ }0.0122 \\ & & SbPPA & 1.08E-07 & 7.05E-06 & 3.05E-06 & 3.14E-06 \\[0.7mm] 2 & 2 & ABC &(+)\mbox{ }1.2452E-08 &(+)\mbox{ }8.4415E-06 & (+)\mbox{ }1.8978E-06 & (+)\mbox{ }1.8537E-06 \\ & & PSO &($\approx$)\mbox{ }0 & ($\approx$)\mbox{ }0 & ($\approx$)\mbox{ }0 & ($\approx$)\mbox{ }0 \\ & & HPA &($\approx$)\mbox{ }0 &($\approx$)\mbox{ }0 &($\approx$)\mbox{ }0 &($\approx$)\mbox{ }0 \\ & & SbPPA & 0 & 0 & 0 & 0 \\[0.7mm] 3 & 2 & ABC & ($\approx$)\mbox{ }0 & (+)\mbox{ }4.8555E-06 & (+)\mbox{ }4.1307E-07 & (+)\mbox{ }1.2260E-06 \\ & & PSO & ($\approx$)\mbox{ }0 & (+)\mbox{ }3.5733E-07 & (+)\mbox{ }1.1911E-08 & (+)\mbox{ }6.4142E-08 \\ & & HPA & ($\approx$)\mbox{ }0 & ($\approx$)\mbox{ }0 & ($\approx$)\mbox{ }0 & ($\approx$)\mbox{ }0 \\ & & SbPPA & 0 & 0 & 0 & 0 \\[0.7mm] 4 & 2 & ABC & ($\approx$)\mbox{ }-1.03163 & ($\approx$)\mbox{ }-1.03163 & ($\approx$)\mbox{ }-1.03163 & ($\approx$)\mbox{ }0 \\ & & PSO & ($\approx$)\mbox{ }-1.03163 & ($\approx$)\mbox{ 
}-1.03163 & ($\approx$)\mbox{ }-1.03163 & ($\approx$)\mbox{ }0 \\ & & HPA & ($\approx$)\mbox{ }-1.03163 & ($\approx$)\mbox{ }-1.03163 & ($\approx$)\mbox{ }-1.03163 & ($\approx$)\mbox{ }0 \\ & & SbPPA & -1.031628 & -1.031628 & -1.031628 & 0 \\[0.7mm] 5 & 6 & ABC & ($\approx$)\mbox{ }-50.0000 & ($\approx$)\mbox{ }-50.0000 & ($\approx$)\mbox{ }-50.0000 & (-)\mbox{ }0 \\ & & PSO & ($\approx$)\mbox{ }-50.0000 & ($\approx$)\mbox{ }-50.0000 & ($\approx$)\mbox{ }-50.0000 & (-)\mbox{ }0 \\ & & HPA & ($\approx$)\mbox{ }-50.0000 & ($\approx$)\mbox{ }-50.0000 & ($\approx$)\mbox{ }-50.0000 & (-)\mbox{ }0 \\ & & SbPPA & -50.0000 & -50.0000 & -50.0000 & 5.88E-09 \\[0.7mm] 6 & 10 & ABC & (+)\mbox{ }-209.9929 & (+)\mbox{ }-209.8437 & (+)\mbox{ }-209.9471 & (+)\mbox{ }0.044 \\ & & PSO & ($\approx$)\mbox{ }-210.0000 & ($\approx$)\mbox{ }-210.0000 & ($\approx$)\mbox{ }-210.0000 &(-)\mbox{ }0 \\ & & HPA & ($\approx$)\mbox{ }-210.0000 & ($\approx$)\mbox{ }-210.0000 & ($\approx$)\mbox{ }-210.0000 & (+)\mbox{ }1 \\ & & SbPPA & -210.0000 & -210.0000 & -210.0000 & 4.86E-06 \\[0.7mm] 7 & 30 & ABC & (+)\mbox{ }2.6055E-16 & (+)\mbox{ }5.5392E-16 & (+)\mbox{ }4.7403E-16 & (+)\mbox{ }9.2969E-17 \\ & & PSO & ($\approx$)\mbox{ }0 & ($\approx$)\mbox{ }0 & ($\approx$)\mbox{ }0 & ($\approx$)\mbox{ }0 \\ & & HPA & ($\approx$)\mbox{ }0 & ($\approx$)\mbox{ }0 &($\approx$)\mbox{ }0 & ($\approx$)\mbox{ }0 \\ & & SbPPA & 0 & 0 & 0 & 0 \\[0.7mm] 8 & 30 & ABC & (+)\mbox{ }2.9407E-16 & (+)\mbox{ }5.5463E-16 & (+)\mbox{ }4.8909E-16 & (+)\mbox{ }9.0442E-17 \\ & & PSO & ($\approx$)\mbox{ }0 & ($\approx$)\mbox{ }0 & ($\approx$)\mbox{ }0 & ($\approx$)\mbox{ }0 \\ & & HPA & ($\approx$)\mbox{ }0 & ($\approx$)\mbox{ }0 & ($\approx$)\mbox{ }0 & ($\approx$)\mbox{ }0 \\ & & SbPPA & 0 & 0 & 0 & 0 \\[0.7mm] 9 & 30 & ABC & ($\approx$)\mbox{ }0 & (+)\mbox{ }1.1102E-16 & (+)\mbox{ }9.2519E-17 & (+)\mbox{ }4.1376E-17 \\ & & PSO & ($\approx$)\mbox{ }0 & (+)\mbox{ }1.1765E-01 & (+)\mbox{ }2.0633E-02 & (+)\mbox{ }2.3206E-02 \\ & 
& HPA & ($\approx$)\mbox{ }0 & ($\approx$)\mbox{ }0 & ($\approx$)\mbox{ }0 & ($\approx$)\mbox{ }0 \\ & & SbPPA & 0 & 0 & 0 & 0 \\[0.7mm] 10 & 30 & ABC & (+)\mbox{ }2.9310E-14 & (+)\mbox{ }3.9968E-14 & (+)\mbox{ }3.2744E-14 & (+)\mbox{ }2.5094E-15 \\ & & PSO & ($\approx$)\mbox{ }7.9936E-15 & (+)\mbox{ }1.5099E-14 & (-)\mbox{ }8.5857E-15 & (+)\mbox{ }1.8536E-15 \\ & & HPA & ($\approx$)\mbox{ }7.9936E-15 & (+)\mbox{ }1.5099E-14 & (+)\mbox{ }1.1309E-14 & (+)\mbox{ }3.54E-15 \\ & & SbPPA & 7.994E-15 & 7.99361E-15 & 7.994E-15 & 7.99361E-15 \\ \bottomrule \end{tabular} \end{minipage} \label{tab:CF01} \end{table} } \begin{figure} \caption{Performance of SbPPA on unconstrained global optimization problems} \end{figure} {\fontsize{12}{12} \selectfont \begin{table}[htp] \centering \footnotesize\setlength{\tabcolsep}{2.5pt} \begin{minipage}{13cm} \caption{Results obtained by SbPPA, PSO, ABC, FF and SSO-C. All problems in this table are standard constrained optimization problems} \begin{tabular}{l@{\hspace{10pt}} *{8}{l}} \toprule Fun & Fun Name & Optimal & Algorithm & Best & Mean & Worst & SD \\ \midrule 11 & CP1 & -15 & PSO & ($\approx$)\mbox{ }-15 & ($\approx$)\mbox{ }-15 & ($\approx$)\mbox{ }-15 & (-)\mbox{ }0 \\ & & & ABC & ($\approx$)\mbox{ }-15 &($\approx$)\mbox{ }-15 & ($\approx$)\mbox{ }-15 & (-)\mbox{ }0 \\ & & & FF & (+)\mbox{ }14.999 & (+)\mbox{ }14.988 & (+)\mbox{ }14.798 & (+)\mbox{ }6.40E-07 \\ & & & SSO-C & ($\approx$)\mbox{ }-15 & ($\approx$)\mbox{ }-15 & ($\approx$)\mbox{ }-15 & (-)\mbox{ }0 \\ & & & SbPPA & -15 & -15 & -15 & 1.95E-15 \\[0.7mm] 12 & CP2 & -30665.539 & PSO & ($\approx$)\mbox{ }-30665.5 & (+)\mbox{ }-30662.8 & (+)\mbox{ }-30650.4 & (+)\mbox{ }5.20E-02 \\ & & & ABC & ($\approx$)\mbox{ }-30665.5 & (+)\mbox{ }-30664.9 &(+)\mbox{ }-30659.1 & (+)\mbox{ }8.20E-02 \\ & & & FF & ($\approx$)\mbox{ }-3.07E+04 & (+)\mbox{ }-30662 & (+)\mbox{ }-30649 & (+)\mbox{ }5.20E-02 \\ & & & SSO-C & ($\approx$)\mbox{ }-3.07E+04 & ($\approx$)\mbox{ }-30665.5 & 
(+)\mbox{ }-30665.1 & (+)\mbox{ }1.10E-04 \\ & & & SbPPA & -30665.5 & -30665.5 & -30665.5 & 2.21E-06 \\[0.7mm] 13 & CP3 & -6961.814 & PSO & (+)\mbox{ }-6.96E+03 & (+)\mbox{ }-6958.37 & (+)\mbox{ }-6942.09 & (+)\mbox{ }6.70E-02 \\ & & & ABC & (-)\mbox{ }-6961.81 & (+)\mbox{ }-6958.02 & (+)\mbox{ }-6955.34 & (-)\mbox{ }2.10E-02 \\ & & & FF & (+)\mbox{ }-6959.99 & (+)\mbox{ }-6.95E+03 & (+)\mbox{ }-6947.63& (-)\mbox{ }3.80E-02 \\ & & & SSO-C & (-)\mbox{ }-6961.81 & (+)\mbox{ }-6961.01 & (+)\mbox{ }-6960.92 & (-)\mbox{ }1.10E-03 \\ & & & SbPPA & -6961.5 & -6961.38 & -6961.45 & 0.043637 \\[0.7mm] 14 & CP4 & 24.306 & PSO & (-)\mbox{ }24.327 & (+)\mbox{ }2.45E+01 & (+)\mbox{ }24.843 & (+)\mbox{ }1.32E-01 \\ & & & ABC & (+)\mbox{ }24.48 & (+)\mbox{ }2.66E+01 & (+)\mbox{ }28.4 & (+)\mbox{ }1.14 \\ & & & FF & (-)\mbox{ }23.97 & (+)\mbox{ }28.54 & (+)\mbox{ }30.14 & (+)\mbox{ }2.25 \\ & & & SSO-C & (-)\mbox{ }24.306 & (-)\mbox{ }24.306 & (-)\mbox{ }24.306 & (-)\mbox{ }4.95E-05 \\ & & & SbPPA & 24.34442 & 24.37536 & 24.37021 & 0.012632 \\[0.7mm] 15 & CP5 & -0.7499 & PSO & ($\approx$)\mbox{ }-0.7499 & (+)\mbox{ }-0.749 & (+)\mbox{ }-0.7486 & (+)\mbox{ }1.20E-03 \\ & & & ABC & ($\approx$)\mbox{ }-0.7499 & (+)\mbox{ }-0.7495 & (+)\mbox{ }-0.749 & (+)\mbox{ }1.67E-03 \\ & & & FF & (+)\mbox{ }-0.7497 & (+)\mbox{ }-0.7491 & (+)\mbox{ }-0.7479 & (+)\mbox{ }1.50E-03 \\ & & & SSO-C & ($\approx$)\mbox{ }-0.7499 & ($\approx$)\mbox{ }-0.7499 & ($\approx$)\mbox{ }-0.7499 & (-)\mbox{ }4.10E-09 \\ & & & SbPPA & 0.7499 & 0.749901 & 0.7499 & 1.66E-07 \\[0.7mm] 16 & Spring & Not Known & PSO & (+)\mbox{ }0.012858 & (+)\mbox{ }0.014863 & (+)\mbox{ }0.019145 & (+)\mbox{ }0.001262 \\ &Design & & ABC & ($\approx$)\mbox{ }0.012665 & (+)\mbox{ }0.012851 & (+)\mbox{ }0.01321 & (+)\mbox{ }0.000118 \\ &Problem & & FF & ($\approx$)\mbox{ }0.012665 & (+)\mbox{ }0.012931 & (+)\mbox{ }0.01342 & (+)\mbox{ }0.001454 \\ & & & SSO-C & ($\approx$)\mbox{ }0.012665 & (+)\mbox{ }0.012765 & (+)\mbox{ }0.012868 & 
(+)\mbox{ }9.29E-05 \\ & & & SbPPA & 0.012665 & 0.012666 & 0.012666 & 3.39E-10 \\[0.7mm] 17 & Welded & Not Known & PSO & (+)\mbox{ }1.846408 & (+)\mbox{ }2.011146 & (+)\mbox{ }2.237389 & (+)\mbox{ }0.108513 \\ &Beam Design & & ABC & (+)\mbox{ }1.798173 & (+)\mbox{ }2.167358 & (+)\mbox{ }2.887044 & (+)\mbox{ }0.254266 \\ &Problem & & FF & (+)\mbox{ }1.724854 & (+)\mbox{ }2.197401 & (+)\mbox{ }2.931001 & (+)\mbox{ }0.195264 \\ & & & SSO-C & ($\approx$)\mbox{ }1.724852 & (+)\mbox{ }1.746462 & (+)\mbox{ }1.799332 & (+)\mbox{ }0.02573 \\ & & & SbPPA & 1.724852 & 1.724852 & 1.724852 & 4.06E-08 \\[0.7mm] 18 & Speed & Not Known & PSO & (+)\mbox{ }3044.453 & (+)\mbox{ }3079.262 & (+)\mbox{ }3177.515 & (+)\mbox{ }26.21731 \\ &Reducer Design & & ABC & (+)\mbox{ }2996.116 & (+)\mbox{ }2998.063 & (+)\mbox{ }3002.756 & (+)\mbox{ }6.354562 \\ &Optimization & & FF & (+)\mbox{ }2996.947 & (+)\mbox{ }3000.005 & (+)\mbox{ }3005.836 & (+)\mbox{ }8.356535 \\ & & & SSO-C & ($\approx$)\mbox{ }2996.113 & ($\approx$)\mbox{ }2996.113 & ($\approx$)\mbox{ }2996.113 & (+)\mbox{ }1.34E-12 \\ & & & SbPPA & 2996.114 & 2996.114 & 2996.114 & 0 \\ \bottomrule \end{tabular} \end{minipage} \label{tab:CF01} \end{table} } \begin{figure} \caption{Performance of SbPPA on standard constrained global optimization problems} \end{figure} \section{Conclusion} A new algorithm mimicking seed-based plant propagation (SbPPA) has been designed and implemented for both unconstrained and constrained optimization problems. The performance of SbPPA is compared with a number of well-established algorithms, and the results are compiled in terms of best, mean, worst and standard deviation. SbPPA is very easy to implement as it requires few arbitrary parameter settings. An alternative strategy is adopted to update the current population. The effects on convergence are shown through the convergence plots, Figures (4-5), for some of the solved problems.
Note that the success rate of SbPPA depends on the quality of the initial population. SbPPA is being tested on discrete real-world problems. \section{Acknowledgments} This work is supported by Abdul Wali Khan University, Mardan, Pakistan, Grant No. F.16-5/ P\& D/ AWKUM /238. \setcounter{section}{0} \section{Appendix} \setcounter{section}{0} \section{Set of Unconstrained Global Optimization Problems} {\fontsize{12}{12} \selectfont \begin{table}[htp] \hspace{-3mm} \footnotesize\setlength{\tabcolsep}{2.5pt} \begin{minipage}{16cm} \caption{Unconstrained Global Optimization Problems Used In Our Experiments.} \begin{tabular}{l@{\hspace{8pt}} *{7}{l}} \toprule Fun &Ftn. Name& D & C & Range & Min & Formulation \\[1mm] \midrule $f_1$& Colville & 4 & UN & [-10 10] & 0 &$f(x)=100(x_1^2-x_2)^2+(x_1-1)^2+(x_3-1)^2+90(x_3^2-x_4)^2+10.1((x_2-1)^2$\\&&&&&&\hspace{8mm}$+(x_4-1)^2)+19.8(x_2-1)(x_4-1)$ \\[1mm] $f_2$& Matyas& 2 & UN & [-10 10] & 0 &$f(x)=0.26(x_1^2+x_2^2)-0.48x_1x_2 $ \\[1mm] $f_3$&Schaffer & 2 & MN & [-100 100] & 0 & $f(x)=0.5+\frac{\sin^{2}(\sqrt{\sum_{i=1}^{n}x^{2}_{i}})-0.5}{(1+0.001(\sum_{i=1}^{n}x^{2}_{i}))^{2}}$ \\[2mm] $f_4$&Six Hump Camel Back & 2 & MN & [-5 5] & -1.03163 & $f(x)=4x_1^2-2.1x_1^4+\frac{1}{3}x_1^6+x_1x_2-4x_2^2+4x_2^4$ \\[1mm] $f_5$& Trid6& 6 & UN & [-36 36] & -50 & $f(x)=\sum_{i=1}^{6} (x_i-1)^2-\sum_{i=2}^{6}x_ix_{i-1}$ \\[1mm] $f_6$&Trid10 & 10 & UN & [-100 100] & -210 &$f(x)=\sum_{i=1}^{10} (x_i-1)^2-\sum_{i=2}^{10}x_ix_{i-1}$ \\[1mm] $f_7$&Sphere & 30 & US & [-100 100] & 0 &$f(x)=\sum_{i=1}^{n}x^{2}_{i}$ \\[1mm] $f_8$&SumSquares & 30 & US & [-10 10] & 0 & $f(x)=\sum_{i=1}^{n}ix^{2}_{i} $ \\[1mm] $f_9$& Griewank& 30 & MN & [-600 600] & 0 &$f(x)=\frac{1}{4000}\sum_{i=1}^{n}x^{2}_{i}-\prod_{i=1}^{n}\cos(\frac{x_{i}}{\sqrt{i}})+1$ \\[1mm] $f_{10}$&Ackley & 30 & MN & [-32 32] & 0 &$f(x)=-20\exp(-0.2\sqrt{\frac{1}{n}\sum_{i=1}^{n}x^{2}_{i}})-\exp(\frac{1}{n}\sum_{i=1}^{n}\cos(2\pi x_{i}))+20+e$ \\[1mm] \bottomrule \end{tabular} \end{minipage} 
\label{tab:CF01} \end{table} } \section{Set of Constrained Global Optimization Problems Used in Our Experiments} \subsection{CP1} \begin{table}[htp] \hspace{-2mm} \footnotesize\setlength{\tabcolsep}{2.5pt} \begin{minipage}{16cm} \begin{tabular}{l@{\hspace{8pt}} *{3}{l}} \hspace{20mm} & & Min \mbox{ }\hspace{13mm}$f(x)=5\sum_{d=1}^{4}x_d-5\sum_{d=1}^{4}x_d^2-\sum_{d=5}^{13}x_d$ \hspace{15mm}\\[2mm] & &subject to \hspace{6mm}$g_1(x)=2x_1+2x_2+x_{10}+x_{11}-10\leq0$\\[2mm] & &\hspace{18.5mm}$g_2(x)=2x_1+2x_3+x_{10}+x_{12}-10\leq0$\\[2mm] & &\hspace{18.5mm}$g_3(x)=2x_2+2x_3+x_{11}+x_{12}-10\leq0$\\[2mm] & &\hspace{18.5mm}$g_4(x)=-8x_1+x_{10}\leq0$\\[2mm] & &\hspace{18.5mm}$g_5(x)=-8x_2+x_{11}\leq0$\\[2mm] & &\hspace{18.5mm}$g_6(x)=-8x_3+x_{12}\leq0$\\[2mm] & &\hspace{18.5mm}$g_7(x)=-2x_4-x_5+x_{10}\leq0$\\[2mm] & &\hspace{18.5mm}$g_8(x)=-2x_6-x_7+x_{11}\leq0$\\[2mm] & &\hspace{18.5mm}$g_9(x)=-2x_8-x_9+x_{12}\leq0$,\\[2mm] \end{tabular} \end{minipage} \label{tab:CF01} \end{table} \noindent where the bounds are $0 \leq x_i \leq 1\mbox{ }(i = 1, \ldots, 9, 13),\mbox{ } 0 \leq x_i \leq 100\mbox{ }(i = 10, 11, 12)$. The global optimum is at $x^* = (1, 1, 1, 1, 1, 1, 1, 1, 1, 3, 3, 3, 1), f(x^*)= -15$. 
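As a quick numerical sanity check of CP1, the short Python sketch below (written for this note; the helper names are ours, not part of any benchmark suite) evaluates the objective and all nine constraints at the stated optimum and confirms that $f(x^*)=-15$ with no constraint violated.

```python
def cp1_objective(x):
    # f(x) = 5*sum_{d=1..4} x_d - 5*sum_{d=1..4} x_d^2 - sum_{d=5..13} x_d
    return 5 * sum(x[:4]) - 5 * sum(v * v for v in x[:4]) - sum(x[4:13])

def cp1_constraints(x):
    # all g_i(x) <= 0 at a feasible point
    x1, x2, x3, x4, x5, x6, x7, x8, x9, x10, x11, x12, x13 = x
    return [
        2*x1 + 2*x2 + x10 + x11 - 10,
        2*x1 + 2*x3 + x10 + x12 - 10,
        2*x2 + 2*x3 + x11 + x12 - 10,
        -8*x1 + x10,
        -8*x2 + x11,
        -8*x3 + x12,
        -2*x4 - x5 + x10,
        -2*x6 - x7 + x11,
        -2*x8 - x9 + x12,
    ]

x_star = [1]*9 + [3, 3, 3] + [1]
print(cp1_objective(x_star))          # -15
print(max(cp1_constraints(x_star)))   # 0: the active constraints hold with equality
```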
\subsection{CP2} \begin{table}[htp] \hspace{-2mm} \footnotesize\setlength{\tabcolsep}{2.5pt} \begin{minipage}{16cm} \begin{tabular}{l@{\hspace{8pt}} *{3}{l}} \hspace{20mm} & & Min \mbox{ }\hspace{13mm}$f(x)=5.3578547x_3^2 +0.8356891x_1x_5+ 37.293239x_1 - 40792.141$\\[2mm] & &subject to \hspace{6mm}$g_1(x)=85.334407+0.0056858x_2x_5+0.0006262x_1x_4-0.0022053x_3x_5-92\leq0$\\[2mm] & &\hspace{18.5mm}$g_2(x)=-85.334407-0.0056858x_2x_5-0.0006262x_1x_4+0.0022053x_3x_5\leq0$\\[2mm] & &\hspace{18.5mm}$g_3(x)=80.51249+0.0071317x_2x_5+0.0029955x_1x_2+0.0021813x_3^2-110\leq0$\\[2mm] & &\hspace{18.5mm}$g_4(x)=-80.51249-0.0071317x_2x_5-0.0029955x_1x_2-0.0021813x_3^2+90\leq0$\\[2mm] & &\hspace{18.5mm}$g_5(x)=9.300961+0.0047026x_3x_5+0.0012547x_1x_3+0.0019085x_3x_4-25\leq0$\\[2mm] & &\hspace{18.5mm}$g_6(x)=-9.300961-0.0047026x_3x_5-0.0012547x_1x_3-0.0019085x_3x_4+20\leq0$,\\[2mm] \end{tabular} \end{minipage} \label{tab:CF01} \end{table} \noindent where $78 \leq x_1\leq 102$, $33 \leq x_2 \leq 45$, $27 \leq x_i\leq 45$ $(i = 3, 4, 5)$. The optimum solution is $x^* = (78, 33, 29.995256025682, 45, 36.775812905788)$, where $f(x^*)= - 30665.539$. Constraints $g_1$ and $g_6$ are active. \subsection{CP3} \begin{table}[htp] \hspace{-2mm} \footnotesize\setlength{\tabcolsep}{2.5pt} \begin{minipage}{16cm} \begin{tabular}{l@{\hspace{8pt}} *{3}{l}} \hspace{20mm} & & Min \mbox{ }\hspace{13mm}$f(x)=(x_1-10)^3+(x_2-20)^3$\\[2mm] & &subject to \hspace{6mm}$g_1(x)=-(x_1-5)^2-(x_2 -5)^2+100 \leq0$\\[2mm] & &\hspace{18.5mm}$g_2(x)=(x_1-6)^2 +(x_2-5)^2-82.81 \leq0$,\\[2mm] \end{tabular} \end{minipage} \label{tab:CF01} \end{table} \noindent where $13 \leq x_1 \leq 100 $ and $0 \leq x_2 \leq 100$. The optimum solution is $x^* = (14.095, 0.84296)$ where $f(x^*)= -6961.81388$. Both constraints are active. 
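The reported optimum of CP3 is easy to verify numerically; the following Python sketch (illustrative only, with function names of our choosing) plugs $x^*=(14.095, 0.84296)$ into the objective and both constraints.

```python
def cp3_objective(x1, x2):
    # f(x) = (x1 - 10)^3 + (x2 - 20)^3
    return (x1 - 10)**3 + (x2 - 20)**3

def cp3_constraints(x1, x2):
    g1 = -(x1 - 5)**2 - (x2 - 5)**2 + 100
    g2 = (x1 - 6)**2 + (x2 - 5)**2 - 82.81
    return g1, g2

x1, x2 = 14.095, 0.84296
f = cp3_objective(x1, x2)
g1, g2 = cp3_constraints(x1, x2)
print(f)        # approximately -6961.814
print(g1, g2)   # both approximately 0: the two constraints are active
```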
\subsection{CP4} \begin{table}[htp] \hspace{-2mm} \footnotesize\setlength{\tabcolsep}{2.5pt} \begin{minipage}{16cm} \begin{tabular}{l@{\hspace{8pt}} *{3}{l}} \hspace{20mm} & & Min \mbox{ }\hspace{13mm}$f(x)=x_1^2+x_2^2+x_1x_2-14x_1-16x_2+(x_3-10)^2+ 4(x_4 - 5)^2 + (x_5 - 3)^2 + 2(x_6 -1)^2 $\\[2mm] & &\hspace{28.5mm}$+ 5x_7^2 + 7(x_8 - 11)^2+ 2(x_9 - 10)^2 + (x_{10}-7)^2 + 45$\\[2mm] & &subject to \hspace{6mm}$g_1(x)=-105+4x_1+5x_2-3x_7+9x_8 \leq0$\\[2mm] & &\hspace{18.5mm}$g_2(x)=10x_1 - 8x_2 - 17x_7 + 2x_8 \leq0$\\[2mm] & &\hspace{18.5mm}$g_3(x)=-8x_1 + 2x_2 + 5x_9 - 2x_{10} - 12 \leq0$\\[2mm] & &\hspace{18.5mm}$g_4(x)=3(x_1 - 2)^2 + 4(x_2 - 3)^2 + 2x_3^2 - 7x_4 - 120 \leq0$\\[2mm] & &\hspace{18.5mm}$g_5(x)=5x_1^2 + 8x_2 + (x_3 - 6)^2 - 2x_4 - 40 \leq0$\\[2mm] & &\hspace{18.5mm}$g_6(x)=x_1^2 + 2(x_2 - 2)^2 - 2x_1x_2 + 14x_5 - 6x_6 \leq0$\\[2mm] & &\hspace{18.5mm}$g_7(x)=0.5(x_1 - 8)^2 + 2(x_2 - 4)^2 + 3x_5^2 - x_6 - 30 \leq0$\\[2mm] & &\hspace{18.5mm}$g_8(x)=-3x_1 + 6x_2 + 12(x_9 - 8)^2 - 7x_{10} \leq0$,\\[2mm] \end{tabular} \end{minipage} \label{tab:CF01} \end{table} \noindent where $-10 \leq x_i \leq 10$ $(i = 1, \ldots, 10)$. The global optimum is\\ $x^* = (2.171996, 2.363683, 8.773926, 5.095984, 0.9906548, 1.430574, 1.321644, 9.828726, 8.280092, 8.375927)$, where $f(x^*) = 24.3062091$. Constraints $g_1,\mbox{ }g_2,\mbox{ } g_3,\mbox{ } g_4,\mbox{ } g_5$ and $g_6$ are active. \subsection{CP5} \begin{table}[htp] \hspace{-2mm} \footnotesize\setlength{\tabcolsep}{2.5pt} \begin{minipage}{16cm} \begin{tabular}{l@{\hspace{8pt}} *{3}{l}} \hspace{20mm} & & Min \mbox{ }\hspace{13mm}$f(x)=x_1^2 + (x_2 - 1)^2$\\[2mm] & &subject to \hspace{6mm}$g_1(x)=x_2 - x_1^2=0$,\\[2mm] \end{tabular} \end{minipage} \label{tab:CF01} \end{table} \noindent where $-1 \leq x_1 \leq 1$, $-1 \leq x_2 \leq 1$. The optimum solution is $x^*= (\pm 1/\sqrt{2}, 1/2)$, \\where $f(x^*) = 0.7499$. 
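For CP5 the optimum can even be checked in closed form: on the constraint curve $x_2=x_1^2$ the objective becomes $t+(t-1)^2$ with $t=x_1^2$, which is minimized at $t=1/2$ with value $3/4$ (the value $0.7499$ quoted here is this $0.75$ as rounded in the source literature). A minimal Python check:

```python
import math

def cp5_objective(x1, x2):
    # f(x) = x1^2 + (x2 - 1)^2
    return x1**2 + (x2 - 1)**2

x1, x2 = 1 / math.sqrt(2), 0.5
f = cp5_objective(x1, x2)
g1 = x2 - x1**2            # equality constraint, should vanish
print(f, g1)               # 0.75 (up to floating-point rounding) and ~0
```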
\subsection{Welded Beam Design Optimisation} The welded beam design is a standard test problem for constrained design optimisation \cite{cagnina2008solving,yang2010engineering}. There are four design variables: the width $w$ and length $L$ of the welded area, the depth $d$ and thickness $h$ of the main beam. The objective is to minimise the overall fabrication cost, under the appropriate constraints of shear stress $\tau$, bending stress $\sigma$, buckling load $P$ and maximum end deflection $\delta$. The optimization model is summarized as follows, where $x^T=(w,L,d,h).$ \begin{minipage}{12cm} \begin{equation} \hspace{-7mm} Minimise\hspace{5mm}f(x) = 1.10471w^2L + 0.04811dh(14.0 + L), \end{equation} \hspace{15mm}subject to \begin{eqnarray} \begin{aligned} &g_{1}(x) = w-h \leq 0, \\ &g_{2}(x) = \delta(x)-0.25 \leq 0, \\ &g_{3}(x) = \tau(x)-13,600 \leq 0, \\ &g_{4}(x) = \sigma(x)-30,000\leq 0, \\ &g_{5}(x) = 1.10471w^2 + 0.04811dh(14.0 + L)-5.0 \leq 0, \\ &g_{6}(x) = 0.125-w\leq 0, \\ &g_{7}(x) = 6000-P(x) \leq 0,\\ \end{aligned} \end{eqnarray} \end{minipage}\\ where \begin{equation} \begin{aligned} &\sigma(x) =\frac{504,000}{hd^2},\\[2mm] &D = \frac{1}{2}\sqrt{L^2+(w+d)^2},\\[2mm] &\delta =\frac{65,856}{30,000hd^3},\\[2mm] &\alpha = \frac{6000}{\sqrt{2}wL},\\[2mm] &P =0.61423 \times 10^6 \frac{dh^3}{6}\left(1-\frac{d\sqrt{\frac{30}{48}}}{28}\right).\\ \end{aligned} \begin{aligned} &Q =6000\left(14+\frac{L}{2}\right),\\[2mm] &J = \sqrt{2}w L \left(\frac{L^2}{6}+\frac{(w+d)^2}{2}\right),\\[2mm] &\beta = \frac{QD}{J},\\[2mm] \indent \indent&\tau(x) =\sqrt{\alpha^2+\frac{\alpha\beta L}{D}+\beta^2}.\\ \end{aligned} \end{equation} \subsection{Speed Reducer Design Optimization} The problem of designing a speed reducer \cite{golinski1974adaptive} is a standard test problem. 
Its design variables are: the face width $x_1$, module of teeth $x_2$, number of teeth on pinion $x_3$, length of the first shaft between bearings $x_4$, length of the second shaft between bearings $x_5$, diameter of the first shaft $x_6$, and diameter of the second shaft $x_7$ (all variables are continuous except $x_3$, which is integer). The weight of the speed reducer is to be minimized subject to constraints on bending stress of the gear teeth, surface stress, transverse deflections of the shafts and stresses in the shafts \cite{cagnina2008solving}. The mathematical formulation of the problem, where $x^T=(x_1,x_2,x_3,x_4,x_5,x_6,x_7)$, is as follows. \begin{multline} \begin{aligned} Minimise \indent f(x) = &0.7854x_1x_2^2(3.3333x_3^2 + 14.9334x_3 - 43.0934)\\ &-1.508x_1(x_6^2 + x_7^2 ) + 7.4777(x_6^3 + x_7^3 )+ 0.7854(x_4x_6^2 + x_5x_7^2 ), \end{aligned} \end{multline} \hspace{5mm}subject to \begin{eqnarray} \begin{aligned} &g_{1}(x) = \frac{27}{x_1x_2^2x_3}-1\leq 0, \\ &g_{2}(x) = \frac{397.5}{x_1x_2^2x_3^2}-1 \leq 0, \\ &g_{3}(x) = \frac{1.93x_4^3}{x_2x_3x_6^4}-1 \leq 0, \\ &g_{4}(x) = \frac{1.93x_5^3}{x_2x_3x_7^4}-1\leq 0, \\ &g_{5}(x) = \frac{1.0}{110x_6^3}\sqrt{\left(\frac{745.0x_4}{x_2x_3}\right)^2+16.9\times10^6}-1 \leq 0, \\ &g_{6}(x) = \frac{1.0}{85x_7^3}\sqrt{\left(\frac{745.0x_5}{x_2x_3}\right)^2+157.5\times10^6}-1 \leq 0, \\ &g_{7}(x) = \frac{x_2x_3}{40}-1 \leq 0,\\ &g_{8}(x) = \frac{5x_2}{x_1}-1\leq 0, \\ &g_{9}(x) = \frac{x_1}{12x_2}-1 \leq 0, \\ &g_{10}(x) = \frac{1.5x_6+1.9}{x_4}-1\leq 0, \\ &g_{11}(x) = \frac{1.1x_7+1.9}{x_5}-1 \leq 0.\\ \end{aligned} \end{eqnarray} The simple limits on the design variables are \\ $2.6\leq x_1\leq3.6 $, $0.7 \leq x_2 \leq0.8$, \\ $17\leq x_3\leq28$, $7.3\leq x_4\leq8.3 $, $7.8 \leq x_5 \leq8.3$, \\ $2.9\leq x_6\leq3.9$ and $5.0\leq x_7\leq5.5$. 
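The eleven constraints above are simple ratios, so feasibility of a candidate design is cheap to test. The Python sketch below (our own helper names; the design vector is a hand-picked feasible point within the stated variable limits, not the reported optimum) evaluates the weight and checks $g_1,\dots,g_{11}\leq 0$.

```python
import math

def speed_reducer(x):
    """Weight and constraint values of the speed reducer design problem."""
    x1, x2, x3, x4, x5, x6, x7 = x
    f = (0.7854 * x1 * x2**2 * (3.3333 * x3**2 + 14.9334 * x3 - 43.0934)
         - 1.508 * x1 * (x6**2 + x7**2)
         + 7.4777 * (x6**3 + x7**3)
         + 0.7854 * (x4 * x6**2 + x5 * x7**2))
    g = [
        27.0 / (x1 * x2**2 * x3) - 1,
        397.5 / (x1 * x2**2 * x3**2) - 1,
        1.93 * x4**3 / (x2 * x3 * x6**4) - 1,
        1.93 * x5**3 / (x2 * x3 * x7**4) - 1,
        math.sqrt((745.0 * x4 / (x2 * x3))**2 + 16.9e6) / (110 * x6**3) - 1,
        math.sqrt((745.0 * x5 / (x2 * x3))**2 + 157.5e6) / (85 * x7**3) - 1,
        x2 * x3 / 40 - 1,
        5 * x2 / x1 - 1,
        x1 / (12 * x2) - 1,
        (1.5 * x6 + 1.9) / x4 - 1,
        (1.1 * x7 + 1.9) / x5 - 1,
    ]
    return f, g

# a feasible (not optimal) design; all g_i come out non-positive
f, g = speed_reducer([3.5, 0.7, 17, 7.3, 7.8, 3.36, 5.3])
print(round(f, 1), max(g))   # weight, and the largest constraint value
```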
\subsection{Spring Design Optimisation} The main objective of this problem \cite{arora2004introduction,belegundu1985study} is to minimize the weight of a tension/compression spring, subject to constraints of minimum deflection, shear stress, surge frequency, and limits on outside diameter and on design variables. There are three design variables: the wire diameter $x_1$, the mean coil diameter $x_2$, and the number of active coils $x_3$ \cite{cagnina2008solving}. The mathematical formulation of this problem, where $x^T=(x_1,x_2,x_3)$, is as follows. \begin{eqnarray} \begin{aligned} \hspace{-25.5mm}Minimize \mbox{ }f(x)= (x_3+2)x_2x_1^2, \end{aligned} \end{eqnarray} \hspace{35.5mm}subject to \begin{eqnarray} \begin{aligned} &g_{1}(x) = 1-\frac{x_2^3x_3}{71,785x_1^4}\leq 0, \\ &g_{2}(x) = \frac{4x_2^2-x_1x_2}{12,566(x_2x_1^3-x_1^4)}+\frac{1}{5,108x_1^2}-1 \leq 0, \\ &g_{3}(x) = 1-\frac{140.45x_1}{x_2^2x_3} \leq 0, \\ &g_{4}(x) = \frac{x_2+x_1}{1.5}-1\leq 0. \\ \end{aligned} \end{eqnarray} The simple limits on the design variables are $0.05\leq x_1\leq 2.0 $, $0.25 \leq x_2 \leq1.3$ \\and $2.0\leq x_3\leq15.0.$ \end{document}
\begin{document} \title{On periodicity of generalized pseudostandard words} \begin{abstract} Generalized pseudostandard words were introduced by de Luca and De Luca in 2006~\cite{LuDeLu}. In comparison to the palindromic and pseudopalindromic closure, little is known about the generalized pseudopalindromic closure and the associated generalized pseudostandard words. In this paper we provide a necessary and sufficient condition for their periodicity over binary and ternary alphabet. More precisely, we describe how the directive bi-sequence of a~generalized pseudostandard word has to look in order to correspond to a~periodic word. We state moreover a~conjecture concerning a~necessary and sufficient condition for periodicity over any alphabet. \noindent \textbf{Keywords:} palindrome, palindromic closure, pseudopalindrome, pseudopalindromic closure, pseudostandard words \end{abstract} \section{Introduction} This paper focuses on a recent topic of combinatorics on words: generalized pseudostandard words. Such words were defined by de Luca and De Luca in 2006~\cite{LuDeLu} and generalize standard episturmian words, resp., pseudostandard words -- instead of the palindromic closure, resp., one pseudopalindromic closure, an infinite sequence of involutory antimorphisms is considered. While standard episturmian and pseudostandard words have been studied intensively and a~lot of their properties are known (see for instance~\cite{BuLuZa,DrJuPi,Lu,LuDeLu}), little has been shown so far about the generalized pseudopalindromic closure that gives rise to generalized pseudostandard words. In the paper~\cite{LuDeLu} the authors defined generalized pseudostandard words and proved that the famous Thue--Morse word is an example of such words. Jajcayov\'a et al.~\cite{JaPeSt} have characterized generalized pseudostandard words in the class of generalized Thue--Morse words. 
Jamet et al.~\cite{JaPaRiVu} deal with fixed points of the palindromic and pseudopalindromic closure and formulate an open problem concerning fixed points of the generalized pseudopalindromic closure. The most detailed study of generalized pseudostandard words has been so far provided by Blondin Mass\'e et al.~\cite{MaPa}: \begin{itemize} \item An algorithm for normalization over binary alphabet is described. This algorithm transforms the directive bi-sequence in such a~way that the obtained word remains unchanged and no pseudopalindromic prefix is missed during the construction. \item An effective algorithm -- the generalized Justin's formula -- for producing generalized pseudostandard words is presented. \item The standard Rote words are proven to be generalized pseudostandard words and the infinite sequence of antimorphisms that generates such words is studied. \end{itemize} In this paper we provide a sufficient and necessary condition for periodicity of generalized pseudostandard words over binary and ternary alphabet. More precisely, we describe how the directive bi-sequence of a~generalized pseudostandard word has to look in order to correspond to a~periodic word. The text is organized as follows. In Section~\ref{sec:CoW} we introduce basics from combinatorics on words. In Section~\ref{sec:generalized_pseudopalindrome}, the generalized pseudopalindromic closure is defined, the normalization algorithm over binary alphabet from~\cite{MaPa} is recalled and some new partial results on normalization over ternary alphabet are provided. The main results are presented in the following two sections. A~sufficient and necessary condition for periodicity of generalized pseudostandard words is given in Section~\ref{sec:per} over binary alphabet and in Section~\ref{sec:perTernary} over ternary alphabet. Finally, a~conjecture concerning a~necessary and sufficient condition for periodicity over any alphabet is stated in the last section. 
\section{Basics from combinatorics on words}\label{sec:CoW} Any finite set of symbols is called an \emph{alphabet} $\mathcal A$, the elements are called \emph{letters}. A~\emph{(finite) word} $w$ over $\mathcal A$ is any finite sequence of letters. Its length $|w|$ is the number of letters it contains. The \emph{empty word} -- the neutral element for concatenation of words -- is denoted $\varepsilon$ and its length is set $|\varepsilon|=0$. The symbol ${\mathcal A}^*$ stands for the set of all finite words over $\mathcal A$. An \emph{infinite word} $\mathbf u$ over $\mathcal A$ is any infinite sequence of letters. A finite word $w$ is a~\emph{factor} of the infinite word $\mathbf u=u_0u_1u_2\ldots$ with $u_i \in \mathcal A$ if there exists an index $i\geq 0$ such that $w=u_iu_{i+1}\ldots u_{i+|w|-1}$. Let $u,v,w \in {\mathcal A}^*$, then for $w=uv$ we mean by $wv^{-1}$ the word $u$ and by $u^{-1}w$ the word $v$. The symbol ${\mathcal L}(\mathbf u)$ is used for the set of factors of $\mathbf u$ and is called the \emph{language} of $\mathbf u$, similarly ${\mathcal L}_n(\mathbf u)$ stands for the set of factors of $\mathbf u$ of length $n$. \noindent Let $w \in \mathcal{L}({\mathbf u})$. A~\emph{left extension} of $w$ is any word $aw \in \mathcal{L}(\mathbf{u})$, where $a \in \mathcal A$. The factor $w$ is called \emph{left special} if $w$ has at least two left extensions. The \emph{(factor) complexity} of $\mathbf{u}$ is the map $\mathcal{C}_{{\mathbf u}}: {\mathbb N} \rightarrow {\mathbb N}$ defined as $$\mathcal{C}_{{\mathbf u}}(n) =\#\mathcal{L}_n(\mathbf u).$$ The following results on complexity come from~\cite{MoHe}. If an infinite word is eventually periodic, i.e., it is of the form $wv^{\omega}$, where $w,v$ are finite words ($w$ may be empty -- in such a case we speak about a~purely periodic word) and $\omega$ denotes an infinite repetition, then its factor complexity is bounded. 
An infinite word is not eventually periodic -- such a~word is called aperiodic -- if and only if its complexity satisfies: ${\mathcal C}(n)\geq n+1$ for all $n \in \mathbb N$. If an infinite word $\mathbf u$ contains for every length $n$ a~left special factor of length $n$, the complexity is evidently strictly growing, hence $\mathbf u$ is aperiodic. An infinite word $\mathbf u$ is called \emph{recurrent} if each of its factors occurs infinitely many times in $\mathbf u$. It is said to be \emph{uniformly recurrent} if for every $n \in \mathbb N$ there exists a~length $r(n)$ such that every factor of length $r(n)$ of $\mathbf u$ contains all factors of length $n$ of $\mathbf u$. An \emph{involutory antimorphism} is a~map $\vartheta: {\mathcal A}^* \rightarrow {\mathcal A}^*$ such that for every $v, w \in {\mathcal A}^*$ we have $\vartheta(vw) = \vartheta(w) \vartheta(v)$ and moreover $\vartheta^2$ equals identity. It is clear that in order to define an antimorphism, it suffices to provide letter images. There are only two involutory antimorphisms over the alphabet $\{0,1\}$: the \emph{reversal (mirror) map} $R$ satisfying $R(0)=0,\ R(1)=1$, and the \emph{exchange antimorphism} $E$ given by $E(0)=1,\ E(1)=0$. We use the notation $\overline{0} = 1$ and $\overline{1} = 0$, $\overline{E} = R$ and $\overline{R} = E$. There are only four involutory antimorphisms over the alphabet $\{0, 1, 2\}$: the reversal map $R$ satisfying $R(0)=0,\ R(1)=1,\ R(2)=2$, and three exchange antimorphisms $E_0,\ E_1,\ E_2$ given by $$\begin{array}{lll} E_0(0)=0,&\ E_0(1)=2,&\ E_0(2)=1\\ E_1(0)=2,&\ E_1(1)=1,&\ E_1(2)=0\\ E_2(0)=1,&\ E_2(1)=0,&\ E_2(2)=2\; . \end{array}$$ Consider an involutory antimorphism $\vartheta$ over $\mathcal A$. A finite word $w$ is a~$\vartheta$-\emph{palindrome} if $w = \vartheta(w)$. The $\vartheta$-\emph{palindromic closure} $w^{\vartheta}$ of a~word $w$ is the shortest $\vartheta$-palindrome having $w$ as prefix. 
For instance, over binary alphabet $011^R=0110,\ 011^{E}=011001$. We speak about the \emph{palindromic closure} if $\vartheta=R$ and the \emph{pseudopalindromic closure} if we do not need to specify which antimorphism $\vartheta$ is used. \section{Generalized pseudopalindromic closure}\label{sec:generalized_pseudopalindrome} Generalized pseudostandard words form a~generalization of infinite words generated by the palindromic or pseudopalindromic closure (see~\cite{BuLuZa,DrJuPi,Lu,LuDeLu} for more details on pseudostandard words); such a~construction was first described and studied in~\cite{LuDeLu}. Let us start with their definition and known properties; we use the papers~\cite{JaPaRiVu,LuDeLu,MaPa}. \subsection{Definition of generalized pseudostandard words} \begin{definition} Let $\mathcal A$ be an alphabet and $G$ be the set of all involutory antimorphisms on ${\mathcal A}^*$. Let $\Delta = \delta_1 \delta_2 \ldots$ and $\Theta = \vartheta_1 \vartheta_2 \ldots$, where $\delta_i \in {\mathcal A}$ and $\vartheta_i \in G$ for all $i \in \mathbb{N}$. The infinite \emph{generalized pseudostandard word} $\mathbf{u}(\Delta, \Theta)$ generated by the generalized pseudopalindromic closure is the word whose prefixes $w_n$ are obtained from the recurrence relation $$w_{n+1} = (w_n \delta_{n+1})^{\vartheta_{n+1}},$$ $$w_0 = \varepsilon.$$ The sequence $\Lambda = (\Delta, \Theta)$ is called the \emph{directive bi-sequence} of the word $\mathbf{u}(\Delta, \Theta)$. \end{definition} If $\Theta = \vartheta^{\omega}$ in the previous definition, then we deal with well known \emph{pseudostandard words}. In such a~case the sequence $(w_n)_{n \geq 0}$ is known to contain all $\vartheta$-palindromic prefixes of $\mathbf{u}(\Delta, \Theta)$. 
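Both closures, and the recurrence $w_{n+1}=(w_n\delta_{n+1})^{\vartheta_{n+1}}$ defining a generalized pseudostandard word, are straightforward to prototype. The Python sketch below (binary alphabet only; the function names are ours) constructs $w^{\vartheta}$ by stripping the longest $\vartheta$-palindromic suffix, and reproduces the two closures computed above.

```python
# involutory antimorphisms over {0,1}: a letter map followed by reversal
R = {"0": "0", "1": "1"}   # reversal (mirror) map
E = {"0": "1", "1": "0"}   # exchange antimorphism

def theta(u, t):
    """Apply the involutory antimorphism given by letter map t."""
    return "".join(t[c] for c in reversed(u))

def closure(w, t):
    """Shortest theta-palindrome having w as a prefix: write w = p s with
    s the longest theta-palindromic suffix, and return p s theta(p)."""
    for i in range(len(w) + 1):
        if w[i:] == theta(w[i:], t):
            return w + theta(w[:i], t)

def prefixes(delta, thetas, n):
    """First n prefixes w_k of u(Delta, Theta) via w_{k+1} = (w_k d)^theta."""
    w, out = "", []
    for d, t in zip(delta, thetas):
        w = closure(w + d, t)
        out.append(w)
        if len(out) == n:
            break
    return out

print(closure("011", R))   # 0110
print(closure("011", E))   # 011001
```

Running `prefixes("011" * 2, [E, E, R] * 2, 4)` then yields the prefixes $01$, $011001$, $01100110$, $0110011001$ of $\mathbf{u}\bigl((011)^{\omega}, (EER)^{\omega}\bigr)$.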
We will restrict our considerations to two cases: \begin{enumerate} \item binary alphabet ${\mathcal A}=\{0,1\}$: $G=\{R, E\}$, \item ternary alphabet ${\mathcal A}=\{0,1,2\}$: $G=\{R, E_0, E_1, E_2\}$, where $E_0, E_1, E_2$ have been defined in Section~\ref{sec:CoW}. \end{enumerate} The following two properties are readily seen from the definition of $\mathbf u=\mathbf{u}(\Delta, \Theta)$: \begin{itemize} \item If an involutory antimorphism $\vartheta$ is contained in $\Theta$ infinitely many times, then the language of $\mathbf u$ is closed under the antimorphism $\vartheta$. \item The word $\mathbf u$ is uniformly recurrent. \end{itemize} \subsection{Normalization over binary alphabet} In contrast to pseudostandard words, the sequence $(w_n)_{n\geq 0}$ of prefixes of a~binary generalized pseudostandard word ${\mathbf u}(\Delta, \Theta)$ does not have to contain all palindromic and $E$-palindromic prefixes of ${\mathbf u}(\Delta, \Theta)$. Blondin Mass\'e et al.~\cite{MaPa} have introduced the notion of normalization of the directive bi-sequence. \begin{definition}\label{def:norm} A~directive bi-sequence $\Lambda=(\Delta, \Theta)$ of a~binary generalized pseudostandard word $\mathbf{u}(\Delta, \Theta)$ is called \emph{normalized} if the sequence of prefixes $(w_n)_{n\geq 0}$ of $\mathbf{u}(\Delta, \Theta)$ contains all palindromic and $E$-palindromic prefixes of ${\mathbf u}(\Delta, \Theta)$. \end{definition} \begin{example} \label{ex:norm} Let $\Lambda=(\Delta, \Theta) = ((011)^{\omega}, (EER)^{\omega})$. Let us write down the first prefixes of $\mathbf{u}(\Delta, \Theta)$: \begin{align} w_1 =& \;01 \nonumber \\ w_2 =& \;011001 \nonumber \\ w_3 =& \;01100110 \nonumber \\ w_4 =& \;0110011001. \nonumber \end{align} The sequence $w_n$ does not contain for instance palindromic prefixes $0$ and $0110$ of $\mathbf{u}(\Delta, \Theta)$. 
\end{example} The authors of~\cite{MaPa} have proven that every directive bi-sequence $\Lambda$ can be normalized, i.e., transformed to such a~form $\widetilde \Lambda$ that the new sequence $(\widetilde{w}_n)_{n\geq 0}$ contains already all palindromic and $E$-palindromic prefixes and $\widetilde \Lambda$ generates the same generalized pseudostandard word as~$\Lambda$. \begin{theorem}\label{thm:norm} Let $\Lambda = (\Delta, \Theta)$ be a~directive bi-sequence of a~binary generalized pseudostandard word. Then there exists a~normalized directive bi-sequence $\widetilde{\Lambda} = (\widetilde{\Delta}, \widetilde{\Theta})$ such that ${\mathbf u}(\Delta, \Theta) = {\mathbf u}(\widetilde{\Delta}, \widetilde{\Theta})$. Moreover, in order to normalize the sequence $\Lambda$, it suffices firstly to execute the following changes to its prefixes (if they are of the corresponding form): \begin{itemize} \item $(a\bar{a}, RR) \rightarrow (a\bar{a}a, RER)$, \item $(a^i, R^{i-1}E) \rightarrow (a^i\bar{a}, R^iE)$ for $i \geq 1$, \item $(a^i\bar{a}\bar{a}, R^iEE) \rightarrow (a^i\bar{a}\bar{a}a, R^iERE)$ for $i \geq 1$, \end{itemize} and secondly to replace step by step from left to right every factor of the form: \begin{itemize} \item $(ab\bar{b}, \vartheta\overline{\vartheta}\overline{\vartheta}) \rightarrow (ab\bar{b}b, \vartheta\overline{\vartheta}\vartheta\overline{\vartheta})$, \end{itemize} where $a, b \in \{0,1\}$ and $\vartheta \in \{E,R\}$. \end{theorem} \begin{example} \label{ex:norm2} Let us normalize the directive bi-sequence $\Lambda = ((011)^{\omega}, (EER)^{\omega})$ from Example~\ref{ex:norm}. According to the procedure from Theorem~\ref{thm:norm}, we transform first prefixes of $\Lambda$. We replace $(0,E)$ with $(01,RE)$ and get $\Lambda_1 = (01(110)^{\omega}, RE(ERE)^{\omega})$. Some prefixes of $\Lambda_1$ are still of a~forbidden form, we replace thus the prefix $(011,REE)$ with $(0110, RERE)$ and get $\Lambda_2 = (0110(101)^{\omega}, RERE(REE)^{\omega})$. 
Prefixes of $\Lambda_2$ are now correct. It remains to replace from left to right the factors $(101, REE)$ with $(1010, RERE)$. Finally, we obtain $\widetilde{\Lambda} = (0110(1010)^{\omega}, RERE(RERE)^{\omega})=(01(10)^{\omega}, (RE)^{\omega})$, which is already normalized. Let us write down the first prefixes $(\widetilde{w}_n)_{n\geq 0}$ of ${\mathbf u}(\widetilde{\Lambda})$: \begin{align} \widetilde{w}_1 =& \;0 \nonumber \\ \widetilde{w}_2 =& \;01 \nonumber \\ \widetilde{w}_3 =& \;0110 \nonumber \\ \widetilde{w}_4 =& \;011001. \nonumber \end{align} We can notice that the new sequence $(\widetilde{w}_n)_{n\geq 0}$ now contains the palindromes $0$ and $0110$ that were skipped in Example~\ref{ex:norm}. \end{example} \subsection{Normalization over ternary alphabet}\label{sec:norm ternary} A~normalized directive bi-sequence can be defined similarly as in Definition~\ref{def:norm} over multiliteral alphabet. In contrast to binary alphabet, over ternary alphabet it is still clear that every directive bi-sequence may be normalized, however the algorithm for normalization similar to the one over binary alphabet (Theorem~\ref{thm:norm}) has not been found yet. Fortunately, in order to prove a~sufficient and necessary condition for periodicity of ternary generalized pseudostandard words, even some partial results on normalization over ternary alphabet suffice. Let us introduce these partial results. We will focus on bi-sequences that contain infinitely many times exactly two distinct antimorphisms including $R$. \begin{lemma} \label{lem:norm} Let the directive bi-sequence $(\Delta, \Theta)$ of a~ternary generalized pseudostandard word $\mathbf{u}$ contain as its factor $(abc, \vartheta R R)$, resp., $(abc, R\vartheta \vartheta)$, where $\vartheta \in \{E_0,E_1,E_2\}$ and $a, b, c \in \{0,1,2\}$ satisfy $\vartheta(b) = c$. 
Denote $w_n = \vartheta(w_n)$, $w_{n+1} = R(w_{n+1})$ and $w_{n+2} = R(w_{n+2})$, resp., $w_n = R(w_n)$, $w_{n+1} = \vartheta(w_{n+1})$ and $w_{n+2} = \vartheta(w_{n+2})$, the corresponding pseudopalindromic prefixes of $\mathbf{u}$. Then between the pseudopalindromic prefixes $w_{n+1}$ and $w_{n+2}$ of the word $\mathbf{u}$ there is a~$\vartheta$-palindromic, resp., palindromic, prefix $w$ of $\mathbf{u}$ followed by the letter $b$. \end{lemma} \begin{proof} Consider the first case, i.e., $(abc, \vartheta R R)$ is a~factor of $(\Delta, \Theta)$. The proof of the second case is left for the reader since it is analogous. Denote $bsb$ the longest palindromic suffix of $w_n b$. Then $w_n = ps$, where $p$ is a~nonempty prefix of $w_n$ because $w_n$ is a~$\vartheta$-palindrome while $s$ is a~palindrome. We have $w_{n+1} = psR(p)$. When constructing $w_{n+2}$, we look for the longest palindromic suffix of $w_{n+1}c$. It is easy to see that such a~suffix is equal to $\vartheta(b)\vartheta(s)c = c\vartheta(s)c$. We can thus write $w_{n+2} = psR(p)\bigl(\vartheta(s)\bigr)^{-1}R\bigl(psR(p)\bigr) = psR(p)\bigl(\vartheta(s)\bigr)^{-1}psR(p)$. Let us rewrite the word \begin{equation}\label{w} w=psR(p)\bigl(\vartheta(s)\bigr)^{-1}ps, \end{equation} which is a~prefix of $w_{n+2}$. We have \begin{align} w = & \;psR(p)\bigl(\vartheta(s)\bigr)^{-1}\vartheta(ps) \nonumber \\[3mm] = & \;pR(s)R(p)\bigl(\vartheta(s)\bigr)^{-1}\vartheta(s)\vartheta(p) \nonumber \\[3mm] = & \;pR(ps)\vartheta(p) \nonumber \\[3mm] = & \;\vartheta(w). \nonumber \end{align} We have used the facts: $\vartheta(ps) = ps$, $R(s) = s$ and $\vartheta R=R\vartheta$. Clearly, we have that $|w_{n+1}| < |w| < |w_{n+2}|$, therefore the sequence $(\Delta, \Theta)$ is not normalized and $w$ is the searched $\vartheta$-palindromic prefix of $\mathbf{u}$. Since $w_{n+2} = wR(p)$ and $p$ ends in $b$, the word $R(p)$ starts in $b$. Consequently, $w$ is followed by $b$. 
\end{proof} \begin{corollary} \label{cor:norm} Under the assumptions of Lemma~\ref{lem:norm} we have: If the factor $(abc, \vartheta R R)$, resp., $(abc, R \vartheta \vartheta)$, of the directive bi-sequence $(\Delta, \Theta)$ of the word $\mathbf{u}$ is replaced with the factor $(abcb, \vartheta R \vartheta R)$, resp., $(abcb, R\vartheta R\vartheta)$, the same generalized pseudostandard word is obtained. \end{corollary} \begin{proof} Consider again the first case, i.e., $(abc, \vartheta R R)$ is a~factor of the directive bi-sequence, and leave the second one for the reader. Denote $w_n = \vartheta(w_n)$, $w_{n+1} = R(w_{n+1})$ and $w_{n+2} = R(w_{n+2})$ the corresponding pseudopalindromic prefixes of $\mathbf{u}$. Denote further in the same way as in Lemma~\ref{lem:norm} the skipped $\vartheta$-palindrome by $w$. We know that the prefix $w$ is followed by the letter $b$, thus it suffices to show that $(w_{n+1}c)^{\vartheta} = w$ and $(wb)^{R} = w_{n+2}$. We will start with the first claim. Assume for contradiction that there exists a~$\vartheta$-palindromic prefix $v_1$ such that $(w_{n+1}c)^{\vartheta} =v_1$ and $|v_1| < |w|$. When constructing the $\vartheta$-palindrome $v_1$, we look for the longest $\vartheta$-palindromic suffix $\vartheta(c)vc=bvc$ of the word $w_{n+1}c$. It is not difficult to see that $|w_{n}| \leq |v| <|w_{n+1}|$, where the first relation follows from the fact that $bR(w_n)c$ is a~$\vartheta$-palindromic suffix of $w_{n+1}c$. Moreover, $v \not = R(w_n)$ because otherwise $v_1=w_{n+1}(R(w_n))^{-1}\vartheta(w_{n+1})=w$. The easiest way to check this equality is to notice that both $v_1$ and $w$ are prefixes of $\mathbf u$ and have the same length according to the form of $w$ defined in~(\ref{w}). Since $v$ is a~$\vartheta$-palindromic suffix of the palindrome $w_{n+1}$, its reversal $R(v)$ is a~$\vartheta$-palindromic prefix between $w_n$ and $w_{n+1}$. 
Since $w_{n+1}$ is the shortest palindrome having the prefix $w_n b$, it has to satisfy at the same time $w_{n+1}=(w_n b)^{R} = (R(v) b)^{R}$. Let us however show that this leads to a~contradiction. Denote $bsb$ the longest palindromic suffix of $w_n b$ and $b\widetilde{s}b$ the longest palindromic suffix of $R(v)b$. Since $w_n$ is a~suffix of $R(v)$, either $|\widetilde{s}| > |w_n|$ or $|\widetilde{s}| = |s|$. If $|\widetilde{s}| > |w_n|$, then $\vartheta(\widetilde{s})$ is a~palindromic prefix shorter than $w_{n+1}$ and longer than $w_n$, which is a~contradiction. If $|\widetilde{s}| = |s|$, then $|(w_n b)^{R}| < |(R(v)b)^{R}|$, which is again a~contradiction. It remains to show that $(wb)^{R} = w_{n+2}$. We have $(w_{n+1}c)^{R} = w_{n+2}$, i.e., $w_{n+2}$ is the shortest palindrome having $w_{n+1}c$ as prefix. Hence the shortest palindrome with the prefix $wb$ has to equal $w_{n+2}$ since $w_{n+1}c$ is a~prefix of $wb$ and $wb$ is a~prefix of $w_{n+2}$. \end{proof} \begin{example} Let us illustrate Lemma~\ref{lem:norm} and Corollary~\ref{cor:norm}. Assume we have already constructed the prefix $w_k = 012$ of a~generalized pseudostandard word. Suppose further that the factor $(120,E_1RR)$ of the directive bi-sequence follows. It is readily seen that the assumptions of Lemma~\ref{lem:norm} are met (in particular we have $E_1(2) = 0$). Let us write down the prefixes $w_{k+1}$, $w_{k+2}$ and $w_{k+3}$. \begin{align} w_{k+1} = & \;0121012, \nonumber \\[3mm] w_{k+2} = & \;01210122101210, \nonumber \\[3mm] w_{k+3} = & \;0121012210121001210122101210. \nonumber \end{align} It is evident that between the prefixes $w_{k+2}$ and $w_{k+3}$ there is an $E_1$-palindrome $$012101221012100121012$$ followed by $2$. Corollary~\ref{cor:norm} moreover states that the generalized pseudostandard word remains the same if we replace the factor $(120,E_1RR)$ of the directive bi-sequence with the factor $(1202,E_1RE_1R)$ -- the reader can check it easily. 
\end{example} \begin{corollary} \label{cor:norm2} Let the directive bi-sequence $\Lambda = (\Delta, \Theta)$ of a~ternary generalized pseudostandard word $\mathbf{u}$ satisfy: The sequence $\Theta = \vartheta_1\vartheta_2\cdots$ contains infinitely many times exactly two distinct antimorphisms $\vartheta$ and $R$. The sequence $\Delta = \delta_1\delta_2\cdots$ contains infinitely many times two (not necessarily distinct) letters $a, b$ such that $\vartheta(a) =b$. Let further the bi-sequence $\Lambda$ satisfy: There exists $n_0 \in \mathbb{N}$ such that for all $n > n_0$ we have: either \begin{align} \vartheta_n = \vartheta \Rightarrow \delta_{n+1} = a \ \text{and} \ \vartheta_n=R \Rightarrow \delta_{n+1}=b, \end{align} or \begin{align} \vartheta_n = \vartheta \Rightarrow \delta_{n+1} = b \ \text{and} \ \vartheta_n=R \Rightarrow \delta_{n+1}=a. \end{align} Then there exists a~directive bi-sequence $\widetilde{\Lambda} = \bigl(v(ab)^{\omega}, \sigma(R\vartheta)^{\omega}\bigr)$, where $v \in \{0,1,2\}^*$, $\sigma \in \{E_0,E_1,E_2,R\}^*$, such that $\mathbf{u}(\Lambda) = \mathbf{u}(\widetilde{\Lambda})$. \end{corollary} \begin{proof} We can certainly find $m>n_0$ such that the sequence $\Theta$ contains -- starting from the index $m$ -- exactly two antimorphisms $\vartheta$ and $R$ (both of them infinitely many times) and the sequence $\Delta$ contains -- starting from the index $m$ -- only the letters $a$ and $b$ (not necessarily distinct). Let us find $\ell > m$ satisfying $\vartheta_{\ell} = R$ and $\vartheta_{\ell+1} = \vartheta$. Using the assumptions of Corollary~\ref{cor:norm2}, we get $\delta_{\ell+1} = b$ and $\delta_{\ell+2} = a$ (we assume without loss of generality that the antimorphism $R$ is followed by the letter $b$). We have thus found a~factor of the directive bi-sequence of the form $(cba, R\vartheta\zeta_1)$ for some $c \in \{a,b\}$ and $\zeta_1 \in \{\vartheta, R\}$.
If now $\zeta_1 = \vartheta$, the assumptions of Corollary~\ref{cor:norm} are met, and consequently this factor may be replaced with the factor $(cbab, R\vartheta R\vartheta)$ and we get the same generalized pseudostandard word. If $\zeta_1 = R$, then $\delta_{\ell+3} = b$ and we get again a~factor of the directive bi-sequence of the form $(cbab, R\vartheta R\zeta_2)$ for some $\zeta_2 \in \{\vartheta, R\}$, etc. A formal proof by induction is left as an exercise for the reader. Finally, we set $v := \delta_1\ldots\delta_{\ell+1}$ and $\sigma :=\vartheta_1\ldots\vartheta_{\ell+1}$. \end{proof} At this moment, we know that if a~directive bi-sequence satisfies the assumptions of Lemma~\ref{lem:norm}, it is not normalized. The remaining question is whether the new bi-sequence whose existence is guaranteed by Corollary~\ref{cor:norm2} is normalized (at least from a~certain moment on). A~partial answer to this question is provided in the following lemma. \begin{lemma} \label{lem:norm2} Let the directive bi-sequence $\Lambda = (\delta_1\delta_2\cdots, \vartheta_1\vartheta_2\cdots)$ of a~generalized pseudostandard word $\mathbf{u}$ be of the form $\Lambda = (v(ab)^{\omega}, \sigma(R\vartheta)^{\omega})$, where $v \in \{0,1,2\}^*$, $\sigma \in \{E_0,E_1,E_2,R\}^*$ and $|v| = |\sigma|$, $\vartheta \in \{E_0,E_1,E_2\}$ and $a, b \in \{0,1,2\}$ such that $\vartheta(a) = b$. Then, with $n_0 = |v|$, the sequence $(w_n)_{n>n_0}$ contains all palindromic, resp., $\vartheta$-palindromic, prefixes of the word $\mathbf{u}$ of length larger than $|w_{n_0}|$ that are followed by the letter $b$, resp., $a$. \end{lemma} \begin{proof} Assume for contradiction that there exists $n > n_0$ such that between the prefixes $w_n = \vartheta(w_n)$ and $w_{n+1} = R(w_{n+1})$ (the converse case is analogous) another pseudopalindromic prefix $w$ occurs: \begin{itemize} \item Either $w$ is a~palindrome.
This contradicts the fact that $w_{n+1}$ is the shortest palindrome having $w_n a$ as prefix. \item Or $w$ is a~$\vartheta$-palindrome followed by $a$ (consider the shortest such $w$). Denote $asa$ the longest palindromic suffix of $w_n a$ and $a\widetilde{s}a$ the longest palindromic suffix of $wa$. Since $w_n$ is a~suffix of $w$, either $|\widetilde{s}| > |w_n|$ or $|\widetilde{s}| = |s|$. If $|\widetilde{s}| > |w_n|$, then $\vartheta(\widetilde{s})$ is a~palindromic prefix shorter than $w_{n+1}$ and longer than $w_n$, which is a~contradiction. If $|\widetilde{s}| = |s|$, then $|(w_na)^{R}| < |(wa)^{R}|$. Since $w_{n+1}$ is the shortest palindrome having the prefix $w_n a$, it has to satisfy at the same time $w_{n+1}=(w_na)^{R} = (wa)^{R}$, and this is again a~contradiction. \end{itemize} \end{proof} \section{Periodicity of binary generalized pseudostandard words} \label{sec:per} Our first new result concerning generalized pseudostandard words is a~necessary and sufficient condition for their periodicity over a~binary alphabet. Throughout this section we thus consider binary infinite words. \begin{theorem}\label{thm:periodicitybinary} A binary generalized pseudostandard word ${\mathbf u}(\Delta, \Theta)$, where $\Delta=\delta_1\delta_2\ldots \in \{0,1\}^{\mathbb N}$ and $\Theta=\vartheta_1\vartheta_2\ldots \in \{E,R\}^{\mathbb N}$, is periodic if and only if the directive bi-sequence $(\Delta, \Theta)$ satisfies the following condition: \begin{equation}\label{eq:podm} (\exists a \in \{0,1\})(\exists \vartheta \in \{E,R\})(\exists n_0 \in \mathbb N)(\forall n>n_0, n \in \mathbb N)(\delta_{n+1}=a \Leftrightarrow \vartheta_n =\vartheta). \end{equation} \end{theorem} \begin{remark} The condition for periodicity may be rewritten in a slightly less formal way:\\ ${\mathbf u}(\Delta, \Theta)$ is periodic if and only if there exists a~bijection $\pi: \{E,R\} \to \{0,1\}$ such that $\pi(\vartheta_n)=\delta_{n+1}$ for all sufficiently large $n$.
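Over a finite window of the bi-sequence, this bijection formulation is straightforward to test mechanically. A small Python sketch (ours; $0$-indexed strings stand in for $\Delta$ and $\Theta$, and the function name is our own):

```python
from itertools import permutations

def satisfies_podm(delta, theta, n0=0):
    """Test the bijection form of condition (eq:podm) on a finite window:
    is there a bijection pi: {E, R} -> {0, 1} with pi(theta_n) = delta_{n+1}
    for all n > n0?  With 0-indexed strings, theta[n] is paired with
    delta[n + 1], mirroring the 1-indexed pairing of theta_n with delta_{n+1}."""
    for e_img, r_img in permutations("01"):
        pi = {"E": e_img, "R": r_img}
        if all(pi[theta[n]] == delta[n + 1]
               for n in range(n0, len(delta) - 1)):
            return True
    return False

# (011)^omega with (EER)^omega pairs E with 1 and R with 0, hence periodic:
assert satisfies_podm("011" * 8, "EER" * 8)

# A constant Delta against an alternating Theta would force
# pi(E) = pi(R) = 0, which is not a bijection, so the condition fails:
assert not satisfies_podm("0" * 24, "ER" * 12)
```

Note that the check only inspects the window past `n0`, matching the "for all sufficiently large $n$" in the statement.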
\end{remark} Let us point out that generalized pseudostandard words are either aperiodic or purely periodic -- it follows from the fact that they are recurrent. In order to prove Theorem~\ref{thm:periodicitybinary} we need the following lemma and remark. \begin{lemma} \label{lemma:tvar} Let $(\Delta,\Theta)$ be a normalized directive bi-sequence of a generalized pseudostandard word. Assume $(\Delta,\Theta)$ satisfies condition~\eqref{eq:podm} and both $E$ and $R$ occur in $\Theta$ infinitely many times. Then there exist $$\nu \in {\{0,1\}}^*,\ \sigma \in {\{E,R\}}^*,\ a, b \in \{0,1\}, \ \vartheta \in \{E,R\}, \ i \in \mathbb N$$ such that $$\Delta=\nu b a^i (a\bar{a})^\omega \quad \text{and} \quad \Theta=\sigma \vartheta^{i+1}(\overline{\vartheta}\vartheta)^\omega \quad \text{and $|\nu|=|\sigma|$.}$$ \end{lemma} \begin{proof} Let us set $\nu= \delta_1 \ldots \delta_{n_0}$ and $\sigma= \vartheta_1 \ldots \vartheta_{n_0}$. Let us further denote $b=\delta_{n_0+1}$ and $\vartheta=\vartheta_{n_0+1}$. Since the directive bi-sequence satisfies condition~\eqref{eq:podm}, the same letter (say $a$) has to follow $\vartheta$. Since both $E$ and $R$ occur in~$\Theta$ infinitely many times, $\vartheta$ is repeated only finitely many times (say $i+1$ times), i.e., $\vartheta_{n_0+1}=\ldots=\vartheta_{n_0+1+i}=\vartheta$ and $\vartheta_{n_0+2+i}=\overline{\vartheta}$. According to~\eqref{eq:podm} we have $\delta_{n_0+2}=\ldots=\delta_{n_0+2+i}=a$ and $\delta_{n_0+3+i}=\bar{a}$. By Theorem~\ref{thm:norm} a~normalized directive bi-sequence cannot contain the factor $(cd\bar{d},\gamma\bar{\gamma}\bar{\gamma})$ for any $c, d \in \{0,1\}$, $\gamma \in \{E,R\}$. Consequently, $\vartheta_{n_0+3+i}=\vartheta$. Consider now the prefix of $(\Delta, \Theta)$ of the form $\Lambda_k=(\nu b a^i (a\bar{a})^k, \sigma \vartheta^{i+1}(\overline{\vartheta}\vartheta)^k)$. 
Then again by Theorem~\ref{thm:norm} and using~\eqref{eq:podm}, the prefix of $(\Delta, \Theta)$ of length $|\Lambda_k|+2$ is equal to $\Lambda_{k+1}$. \end{proof} The following remark follows easily from Theorem~\ref{thm:norm}. \begin{remark} \label{rem:norm} Let a~directive bi-sequence of a generalized pseudostandard word satisfy condition~\eqref{eq:podm}. Then the corresponding normalized bi-sequence satisfies condition~\eqref{eq:podm}. \end{remark} \begin{proof}[Proof of Theorem~\ref{thm:periodicitybinary}] \noindent $(\Leftarrow):$ \begin{enumerate} \item Assume that the sequence $\Theta$ contains both $E$ and $R$ infinitely many times. Let us normalize $\Lambda=(\Delta, \Theta)$ and denote the new directive bi-sequence by $\widetilde{\Lambda}$. By Remark~\ref{rem:norm} the bi-sequence $\widetilde{\Lambda}$ satisfies condition~\eqref{eq:podm}. Applying Lemma~\ref{lemma:tvar} it is possible to write $\widetilde{\Lambda}=(\widetilde{\nu}(a\bar{a})^\omega, \widetilde{\sigma} (\overline{\vartheta}\vartheta)^\omega)$, where $|\widetilde{\nu}|=|\widetilde{\sigma}|$. Without loss of generality suppose that $\widetilde{\sigma}=\widetilde{\theta_1}\vartheta$. (Otherwise we would extend the words $\widetilde{\nu}$ and $\widetilde{\sigma}$ by the next two members.) Set $n_0 = |\widetilde{\sigma}|$. We will show that for all $n > n_0$ there exists $k \in \mathbb{N}$ such that either \begin{equation}\label{eq:tvar1} w_n = w_{n_0}[(w_{n_0}^{-1}w_{n_0+1})\vartheta\overline{\vartheta}(w_{n_0}^{-1}w_{n_0+1})]^k, \end{equation} \begin{center} or \end{center} \begin{equation}\label{eq:tvar2} w_n = w_{n_0}[(w_{n_0}^{-1}w_{n_0+1})\vartheta\overline{\vartheta}(w_{n_0}^{-1}w_{n_0+1})]^k(w_{n_0}^{-1}w_{n_0+1}), \end{equation} where $(w_n)_{n\geq 0}$ is the sequence of prefixes associated with ${\mathbf u}(\widetilde \Lambda)$ (we omit tildes for simplicity).
It follows then directly from these forms that \begin{equation}\label{eq:period} w_{n_0+1}\vartheta\overline{\vartheta}(w_{n_0}^{-1}w_{n_0+1})w_{n_0}^{-1} \end{equation} is the period of the generalized pseudostandard word ${\mathbf u}(\Delta, \Theta)$. It is not difficult to show that if $w_n$ is of the form~\eqref{eq:tvar1}, then $w_n = \vartheta(w_n)$. It suffices to take into account that $\vartheta_{n_0}=\vartheta$ and therefore $\vartheta(w_{n_0})=w_{n_0}$ and $\overline{\vartheta}(w_{n_0+1})=w_{n_0+1}$. Similarly, if $w_n$ is of the form~\eqref{eq:tvar2}, then $w_n = \overline{\vartheta}(w_n)$. Let us proceed by induction: $w_{n_0}$ and $w_{n_0+1}$ are of the form~\eqref{eq:tvar1} or~\eqref{eq:tvar2} -- it suffices to set $k=0$. Let $n>n_0+1$ and assume $w_\ell$ is of the form~\eqref{eq:tvar1} or~\eqref{eq:tvar2} for all $\ell \in \mathbb{N}$, where $n_0+1<\ell \leq n$. \begin{itemize} \item Let $w_n$ be of the form~\eqref{eq:tvar1}. Then $w_n=\vartheta(w_n)$ and by condition~\eqref{eq:podm}, we have $\delta_{n+1}=a$. When constructing $w_{n+1}$, we search for the longest $\overline{\vartheta}$-palindromic suffix of $w_n a$. Since $\widetilde{\Lambda}$ is normalized, the longest $\overline{\vartheta}$-palindromic prefix of $w_n$ is $w_{n-1}$, and consequently the longest $\overline{\vartheta}$-palindromic suffix of $w_n$ is $\vartheta(w_{n-1})$. Thanks to the form of $\widetilde{\Lambda}=(\widetilde{\nu}(a\bar{a})^\omega, \widetilde{\sigma} (\overline{\vartheta}\vartheta)^\omega)$ we further know that $w_{n-1}$ is followed by $\bar{a}$. Since $w_n$ is a~$\vartheta$-palindrome, the factor $\vartheta(w_{n-1})$ is preceded by $\vartheta(\bar{a})$. Consequently, $\vartheta(\bar{a})\vartheta(w_{n-1})a$ is a~candidate for the longest $\overline{\vartheta}$-palindromic suffix of $w_n a$. On the one hand, if $\vartheta = R$, this candidate equals $\bar{a}R(w_{n-1})a$, which is an $E$-palindrome. 
On the other hand, if $\vartheta = E$, then this candidate equals $aE(w_{n-1})a$, which is an $R$-palindrome. Thus it is indeed the longest $\overline{\vartheta}$-palindromic suffix of $w_n a$. Using the induction assumption and since $w_{n-1}$ is a~$\overline{\vartheta}$-palindrome, we have: $$w_n = w_{n_0}[(w_{n_0}^{-1}w_{n_0+1})\vartheta\overline{\vartheta}(w_{n_0}^{-1}w_{n_0+1})]^k,$$ $$w_{n-1} = w_{n_0}[(w_{n_0}^{-1}w_{n_0+1})\vartheta\overline{\vartheta}(w_{n_0}^{-1}w_{n_0+1})]^{k-1}(w_{n_0}^{-1}w_{n_0+1}).$$ Consequently, we obtain: $$w_{n+1}=(w_n a)^{\overline{\vartheta}}=(w_{n_0+1}\overline{\vartheta}(w_{n_0}^{-1})\vartheta(w_{n-1})a)^{\overline{\vartheta}}=w_n(w_{n_0}^{-1}w_{n_0+1}),$$ which corresponds to the form~\eqref{eq:tvar2}. \item For $w_n$ of the form~\eqref{eq:tvar2} we proceed analogously. \end{itemize} \item Let the directive bi-sequence be of the form $\Lambda=(\nu a^{\omega}, \sigma \vartheta^{\omega})$. (In fact, the generalized pseudostandard word in question is either an $E$-standard or an $R$-standard word with seed as defined in~\cite{BuLuZa}.) It is known in this case that the word is periodic~\cite{BuLuZa}. Let us rewrite the directive bi-sequence so that $|\nu|=|\sigma|$ and $\sigma=\theta_1 \vartheta$ and let $n_0=|\sigma|$. It can be proven similarly as in the first case that for all $n > n_0$ there exists $k\in \mathbb{N}$ such that \begin{equation} \label{eq:tvar3} w_n=w_{n_0}[w_{n_0}^{-1}w_{n_0+1}]^k. \end{equation} Therefore the period of the $E$- or $R$-standard word with seed in question is equal to $w_{n_0+1}w_{n_0}^{-1}$. \end{enumerate} \noindent $(\Rightarrow):$ We will show that if condition~\eqref{eq:podm} is not satisfied, then the generalized pseudostandard word ${\mathbf u}(\Delta, \Theta)$ is aperiodic. More precisely, we will show that each of its prefixes is a~left special factor. Let us restrict ourselves to the case where $\Theta$ contains $E$ and $R$ infinitely many times. 
Otherwise, we deal with $E$- or $R$-standard words with seed and the result is known from~\cite{BuLuZa}. The negation of condition~\eqref{eq:podm} reads: for all $a \in \{0,1\}$, for all $\vartheta \in \{E,R\}$, and for all $n_0 \in \mathbb{N}$ there exists $n>n_0$ such that \begin{equation} (\delta_{n+1}=a \wedge \vartheta_n = \overline{\vartheta}) \vee (\delta_{n+1}=\bar{a} \wedge \vartheta_n = \vartheta). \end{equation} Let $v$ be a~prefix of ${\mathbf u}(\Delta, \Theta)$. Firstly, take $a=0$, $\vartheta=R$, and $n_0 > |v|$, then there exists $n_1>n_0$ such that $(\delta_{n_1+1}=0 \wedge \vartheta_{n_1} = E)\vee (\delta_{n_1+1}=1 \wedge \vartheta_{n_1} = R)$. Secondly, choose $a=1$, $\vartheta=R$, and $n_0 > |v|$, then there exists $n_2>n_0$ such that $(\delta_{n_2+1}=1 \wedge \vartheta_{n_2} = E)\vee (\delta_{n_2+1}=0 \wedge \vartheta_{n_2} = R)$. The following four cases may occur: \begin{itemize} \item $\delta_{n_1+1}=0$, $\vartheta_{n_1} = E$ and $\delta_{n_2+1}=1$, $\vartheta_{n_2} = E$: In this case, both $w_{n_1}$ and $w_{n_2}$ are $E$-palindromes, thus $E(v)$ is a~suffix of both of them. Since $\delta_{n_1+1}=0$ and $\delta_{n_2+1}=1$, the words $E(v)0$ and $E(v)1$ are factors of ${\mathbf u}(\Delta, \Theta)$. Since the language is closed under $E$, both $1v$ and $0v$ are factors of ${\mathbf u}(\Delta, \Theta)$. \item $\delta_{n_1+1}=0$, $\vartheta_{n_1} = E$ and $\delta_{n_2+1}=0$, $\vartheta_{n_2} = R$: Now, $E(v)$ has the right extension $E(v)0$ and $R(v)$ has the right extension $R(v)0$. Using the fact that the language is closed under $E$ and $R$, one can see that both $1v$ and $0v$ are factors of ${\mathbf u}(\Delta, \Theta)$. \item $\delta_{n_1+1}=1$, $\vartheta_{n_1} = R$ and $\delta_{n_2+1}=1$, $\vartheta_{n_2} = E$: This case is analogous to the previous one. \item $\delta_{n_1+1}=1$, $\vartheta_{n_1} = R$ and $\delta_{n_2+1}=0$, $\vartheta_{n_2} = R$: This case is similar to the first one. 
\end{itemize} \end{proof} \begin{example} Consider the directive bi-sequence $\Lambda = ((011)^{\omega}, (EER)^{\omega})$ from Example~\ref{ex:norm}. This bi-sequence satisfies condition~\eqref{eq:podm}. According to Remark~\ref{rem:norm} the normalization of the directive bi-sequence preserves condition~\eqref{eq:podm}. It follows from Example~\ref{ex:norm2} that the normalized form of the directive bi-sequence is $\widetilde{\Lambda} = (01(10)^{\omega}, RE(RE)^{\omega})$. Let us write down the first few prefixes $\widetilde{w}_k$ of ${\mathbf u}(\widetilde{\Lambda})$: \begin{align} \widetilde{w}_1 =& \;0 \nonumber \\ \widetilde{w}_2 =& \;01 \nonumber \\ \widetilde{w}_3 =& \;0110 \nonumber \\ \widetilde{w}_4 =& \;011001 \nonumber \\ \widetilde{w}_5 =& \;01100110 \nonumber \\ \widetilde{w}_6 =& \;0110011001. \nonumber \end{align} In the proof of Theorem~\ref{thm:periodicitybinary}, the formula for the period (not necessarily the smallest one) of ${\mathbf u}(\Lambda)$ was given by~\eqref{eq:period}: $$w_{n_0+1}\vartheta\overline{\vartheta}(w_{n_0}^{-1}w_{n_0+1})w_{n_0}^{-1},$$ where $\vartheta=E$, $n_0=2$, $w_{n_0} = \widetilde{w}_2$, and $w_{n_0+1} = \widetilde{w}_3$. Thus the period equals $0110 = \widetilde{w}_3$. Therefore $\mathbf{u}(\Lambda)={\mathbf u}(\widetilde{\Lambda})= (0110)^{\omega}$. \end{example} \section{Periodicity of ternary generalized pseudostandard words}\label{sec:perTernary} For ternary generalized pseudostandard words, a~straightforward analogue of the binary case, i.e., of condition~\eqref{eq:podm} from Theorem~\ref{thm:periodicitybinary}, does not work. \begin{example} Consider the ternary infinite word $\mathbf u=\mathbf{u}((01)^{\omega}, (RE_1)^{\omega})$. It is easy to show that any prefix $p$ of $\mathbf u$ is left special -- both $1p$ and $2p$ are factors of $\mathbf u$, thus $\mathbf u$ is an aperiodic word. \end{example} The condition for periodicity gets more complicated.
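Both phenomena above can be observed computationally. The following Python sketch (ours, not part of the original text; it implements the pseudopalindromic closure directly, with helper names of our own) rebuilds the prefixes of $\mathbf{u}(\widetilde{\Lambda})$ listed above and confirms that they stay inside $(0110)^{\omega}$, while for $\mathbf{u}((01)^{\omega}, (RE_1)^{\omega})$ it exhibits left special prefixes:

```python
# Antimorphisms as reversal plus a letter permutation; E is the binary
# exchange 0 <-> 1, E1 is the ternary antimorphism fixing 1 and
# exchanging 0 and 2, R is plain reversal.
MAPS = {
    "R":  {"0": "0", "1": "1", "2": "2"},
    "E":  {"0": "1", "1": "0"},
    "E1": {"0": "2", "1": "1", "2": "0"},
}

def anti(theta, w):
    return "".join(MAPS[theta][c] for c in reversed(w))

def closure(theta, w):
    """(w)^theta: the shortest theta-palindrome having w as a prefix."""
    for i in range(len(w) + 1):
        if anti(theta, w[i:]) == w[i:]:
            return w + anti(theta, w[:i])

def prefixes(delta, theta):
    """The chain w_1, w_2, ... for a finite piece of a bi-sequence."""
    w, chain = "", []
    for letter, t in zip(delta, theta):
        w = closure(t, w + letter)
        chain.append(w)
    return chain

period = "0110" * 60

# Normalized bi-sequence (01(10)^omega, RE(RE)^omega): the prefixes match
# the list above and all are prefixes of (0110)^omega.
wt = prefixes("01" + "10" * 6, ["R", "E"] * 7)
assert wt[:6] == ["0", "01", "0110", "011001", "01100110", "0110011001"]
assert all(w == period[:len(w)] for w in wt)

# The original bi-sequence ((011)^omega, (EER)^omega) generates the same word.
ws = prefixes("011" * 5, ["E", "E", "R"] * 5)
assert all(w == period[:len(w)] for w in ws)

# In contrast, for u((01)^omega, (RE_1)^omega) short prefixes p are left
# special: both 1p and 2p occur as factors, so the word is aperiodic.
u = prefixes("01" * 6, ["R", "E1"] * 6)[-1]
assert all("1" + p in u and "2" + p in u for p in ["0", "01", "012"])
```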
\begin{theorem}\label{thm:ternary} Let $\mathbf u=\mathbf u(\Delta, \Theta)$ be a~ternary generalized pseudostandard word over $\{0,1,2\}$. Then $\mathbf u$ is periodic if and only if one of the following conditions is met: \begin{enumerate} \item The sequences $\Delta$ and $\Theta$ are eventually constant, i.e., $\Delta=va^{\omega}$ for some $v \in \{0,1,2\}^*$ and $a \in \{0,1,2\}$ and $\Theta=\sigma \vartheta^{\omega}$ for some $\sigma \in \{E_0, E_1, E_2, R\}^*$ and $\vartheta \in \{E_0, E_1, E_2, R\}$. \item \begin{itemize} \item $\Theta$ contains exactly two antimorphisms $\vartheta$ and $R$ infinitely many times; \item $\Delta$ contains two (not necessarily distinct) letters $a$ and $b$ infinitely many times such that $\vartheta(a)=b$; \item there exists $n_0 \in \mathbb N$ such that for every $n>n_0$ we have either $$\vartheta_n=\vartheta \Rightarrow \delta_{n+1}=a \ \wedge \ \vartheta_n=R \Rightarrow \delta_{n+1}=b,$$ or $$\vartheta_n=\vartheta \Rightarrow \delta_{n+1}=b \ \wedge \ \vartheta_n=R \Rightarrow \delta_{n+1}=a.$$ \end{itemize} \item The normalized directive bi-sequence $(\widetilde{\Delta}, \widetilde{\Theta})$ of $\mathbf u$ satisfies $$(\widetilde{\Delta}, \widetilde{\Theta})=(v(ijk)^{\omega}, \sigma(E_k E_j E_i)^{\omega}),$$ where $v \in \{0,1,2\}^*, \ \sigma \in \{E_0, E_1, E_2, R\}^*, \ |v|=|\sigma|$, and $i,j,k \in \{0,1,2\}$ are mutually different letters. \end{enumerate} \end{theorem} It is worth mentioning that for bi-sequences containing more antimorphisms, one of them being $R$, Theorem~\ref{thm:ternary} provides an easy-to-check condition for recognizing periodicity. For other bi-sequences containing more antimorphisms, however, it is not very practical, since no algorithm for normalization is known over a~ternary alphabet. \begin{example} Consider $\Lambda=(0(211)^{\omega}, (R E_0 E_0)^{\omega})$. Since $E_0(1)=2$, the second condition of Theorem~\ref{thm:ternary} is satisfied.
Let us write down the first few prefixes $w_n$ of $\mathbf u$: \begin{eqnarray} w_1 & =& 0 \nonumber \\ w_2 & =& 0210 \nonumber \\ w_3 & =& 0210120210 \nonumber \\ w_4 & =& 0210120210120. \nonumber \end{eqnarray} It is left for the reader to show that $\mathbf u=(021012)^{\omega}$. \end{example} \begin{example} Consider $\Lambda=((102)^{\omega}, (E_2 E_0 E_1)^{\omega})$. The third condition of Theorem~\ref{thm:ternary} is satisfied. Let us write down the first few prefixes $w_n$ of $\mathbf u$: \begin{eqnarray} w_1 & =& 10 \nonumber \\ w_2 & =& 1002 \nonumber \\ w_3 & =& 100221 \nonumber \\ w_4 & =& 10022110 \nonumber \\ w_5 & =& 1002211002. \nonumber \end{eqnarray} It is not difficult to see that $\mathbf u=(100221)^{\omega}$. \end{example} In order to make the proof of Theorem~\ref{thm:ternary} as comprehensible as possible, let us distinguish several cases according to the number of antimorphisms occurring infinitely many times in the directive bi-sequence. We will consider them in the following three subsections. Putting the results of Sections~\ref{sec:1}, \ref{sec:2} and~\ref{sec:3or4} together then immediately yields the proof of Theorem~\ref{thm:ternary}. \subsection{Directive bi-sequences containing one antimorphism}\label{sec:1} Let us start with the simplest case, where $\Theta$ contains only one antimorphism infinitely many times. \begin{lemma}\label{1antimorphism} Let the directive bi-sequence $\Lambda=(\Delta, \Theta)$ be of the form $\Lambda=(\Delta, \sigma \vartheta^{\omega})$ for some $\sigma \in \{E_0, E_1, E_2, R\}^*$ and $\vartheta \in \{E_0, E_1, E_2, R\}$. Then the word ${\mathbf u}={\mathbf u}(\Delta, \Theta)$ is periodic if and only if $\Delta=va^{\omega}$ for some $v \in \{0,1,2\}^*$ and $a \in \{0,1,2\}$. \end{lemma} \begin{proof} $(\Rightarrow)$: Assume the sequence $\Delta$ contains infinitely many times two distinct letters, say $a$ and $b$.
Then the antimorphism $\vartheta$ is followed infinitely many times by both letters $a$ and $b$. This implies that every prefix of $\mathbf u$ is a~left special factor. Therefore $\mathbf u$ is aperiodic.\\ \noindent $(\Leftarrow)$: Let $\Lambda=(va^{\omega}, \sigma \vartheta^{\omega})$. Denote $n_0=\max \{|v|, |\sigma|\}$. Then $w_{n_0+2}=ps\vartheta(p)$, where $\vartheta(a)sa$ is the longest $\vartheta$-palindromic suffix of the word $w_{n_0+1}a$. Let us now construct $w_{n_0+3}$. The longest $\vartheta$-palindromic suffix of $w_{n_0+2}a$ equals $\vartheta(a)w_{n_0+1}a=\vartheta(a)s\vartheta(p)a$. We have used the fact that $w_{n_0+1}=ps=\vartheta(ps)$ and $s=\vartheta(s)$ and that by the form of $\Lambda$, the factor $w_{n_0+1}$ is evidently the longest $\vartheta$-palindromic prefix of $w_{n_0+2}$ followed by $a$. Thus $w_{n_0+3}=ps(\vartheta(p))^2$. Repeating this process we get $\mathbf u=ps(\vartheta(p))^{\omega}$. \end{proof} \subsection{Directive bi-sequences containing two antimorphisms}\label{sec:2} Let us consider directive bi-sequences $(\Delta, \Theta)$, where $\Theta$ contains infinitely many times exactly two distinct antimorphisms. The first lemma holds for any two antimorphisms, while in the sequel we will consider bi-sequences in which one of the antimorphisms equals~$R$. \begin{lemma} Let the directive bi-sequence $\Lambda = (\Delta, \Theta)$ satisfy: $\Delta$ contains infinitely many times all letters $0,1,2$ and $\Theta$ contains infinitely many times exactly two antimorphisms $\vartheta_1, \vartheta_2 \in \{E_0, E_1, E_2, R\}$. Then the word $\mathbf{u} = \mathbf{u}(\Delta, \Theta)$ is aperiodic.
\end{lemma} \begin{proof} Thanks to the form of $\Lambda$ it is possible to choose two distinct letters $c_1$, $c_2$, an antimorphism $\vartheta$ and two sequences of indices, say $(k_n)^{\infty}_{n=1}$ and $(\ell_n)^{\infty}_{n=1}$, such that for all $n \in \mathbb{N}$ we have $\delta_{k_n} = c_1$, $\delta_{\ell_n} = c_2$ and $\vartheta_{k_n-1} = \vartheta_{\ell_n-1}=\vartheta$. In other words, the same antimorphism is followed infinitely many times by two distinct letters. This implies that every prefix of $\mathbf{u}$ is left special. \end{proof} \begin{proposition} \label{th:per} Let the directive bi-sequence $\Lambda = (\Delta, \Theta)$ of a~ternary generalized pseudostandard word $\mathbf{u}$ satisfy: The sequence $\Theta = \vartheta_1\vartheta_2\cdots$ contains infinitely many times exactly two distinct antimorphisms $\vartheta$ and $R$. The sequence $\Delta = \delta_1\delta_2\cdots$ contains infinitely many times two (not necessarily distinct) letters $a$ and $b$. Then the word $\mathbf{u} = \mathbf{u}(\Delta,\Theta)$ is periodic if and only if the directive bi-sequence $(\Delta, \Theta)$ satisfies: $\vartheta(a) = b$ and there exists $n_0 \in \mathbb N$ such that for all $n>n_0$ either \begin{align}\label{eq:podminka2} \vartheta_n = \vartheta \Rightarrow \delta_{n+1} = a \ \text{and} \ \vartheta_n=R \Rightarrow \delta_{n+1}=b, \end{align} or \begin{align}\label{eq:podminka2b} \vartheta_n = \vartheta \Rightarrow \delta_{n+1} = b \ \text{and} \ \vartheta_n=R \Rightarrow \delta_{n+1}=a. \end{align} \end{proposition} \begin{proof} ($\Rightarrow$): \begin{enumerate} \item If neither~\eqref{eq:podminka2} nor~\eqref{eq:podminka2b} is met, then the letters $a$ and $b$ are necessarily distinct and it happens infinitely many times that the same antimorphism is followed by two distinct letters. This implies that every prefix of $\mathbf u$ is left special.
\item Assume without loss of generality that condition~\eqref{eq:podminka2} is satisfied. If $\vartheta(a) \neq b$, then every prefix of $\mathbf{u}$ is left special: Let $v$ be a~prefix of $\mathbf{u}$. Let $v$ be contained in the prefix $w_k = R(w_k)$, where $k>n_0$. Then by condition~\eqref{eq:podminka2} $w_k b \in \mathcal{L}({\mathbf{u}})$, consequently $R(b)R(w_k) = bw_k \in \mathcal{L}({\mathbf{u}})$. Let us similarly find $\ell$, where $\ell >n_0$, such that $v$ is a~prefix of $w_\ell = \vartheta (w_\ell)$. Then $w_\ell a \in \mathcal{L}({\mathbf{u}})$, and consequently $\vartheta(a)\vartheta(w_\ell) = \vartheta(a)w_\ell \in \mathcal{L}({\mathbf{u}})$. Thanks to the assumption that $\vartheta(a) \neq b$, the prefix $v$ is a~left special factor of $\mathbf{u}$. \end{enumerate} ($\Leftarrow$): According to Corollary~\ref{cor:norm2} we can assume without loss of generality that $\Lambda = (v(ab)^{\omega}, \sigma(R\vartheta)^{\omega})$, where $v \in \{0,1,2\}^*$, $\sigma \in \{E_0,E_1,E_2,R\}^*$ and $|v| = |\sigma|$. Denote $n_0 = |\sigma |$, i.e., $w_{n_0+1} = R(w_{n_0+1})$ and $w_{n_0+2} = \vartheta(w_{n_0+2})$. Denote further $\vartheta(b)sb=asb$ the longest $\vartheta$-palindromic suffix of the word $w_{n_0+1}b$. We can thus write $w_{n_0+2} = ps\vartheta(p) = p\vartheta(s)\vartheta(p) = p\vartheta(ps)$ for some prefix $p$ of the word $w_{n_0+1}$. Thanks to Lemma~\ref{lem:norm2}, the factor $w_{n_0+1}$ is the longest palindromic prefix of $w_{n_0+2}$ followed by the letter $b$, therefore $\vartheta(b)\vartheta(w_{n_0+1})a=a\vartheta(w_{n_0+1})a=a\vartheta(ps)a$ is the longest palindromic suffix of the word $w_{n_0+2}a$. Consequently, we have $w_{n_0+3} = p\vartheta(ps)R(p) = ps\vartheta(p)R(p)$. Let us show by induction that for all $k \geq 1$ the following holds: \begin{align} w_{n_0+2k-1} &= ps(\vartheta(p)R(p))^{k-1}, \label{tvar1}\\[3mm] w_{n_0+2k} &= ps(\vartheta(p)R(p))^{k-1}\vartheta(p). 
\label{tvar2} \end{align} For $k = 1$ the relations~\eqref{tvar1} and~\eqref{tvar2} hold. Let them hold for some $k \geq 1$. Then $w_{n_0+2k+1} = (w_{n_0+2k}a)^{R}$. We know that $\vartheta(w_{n_0+2k-1})$ is a~palindromic suffix of the word $w_{n_0+2k}$ preceded by the letter $a=\vartheta(b)$ and, thanks to Lemma~\ref{lem:norm2}, this suffix is the longest palindromic suffix of the word $w_{n_0+2k}$ preceded by the letter $a=\vartheta(b)$. It follows: \begin{align} w_{n_0+2k+1} =&\;(w_{n_0+2k}a)^{R} \nonumber \\[3mm] =&\;(ps(\vartheta(p)R(p))^{k-1}\vartheta(p)a)^{R} \nonumber \\[3mm] =&\;(p\vartheta(w_{n_0+2k-1})a)^{R} \nonumber \\[3mm] =&\;p\vartheta(w_{n_0+2k-1})R(p) \nonumber \\[3mm] =&\;ps(\vartheta(p)R(p))^{k}. \nonumber \end{align} Similarly, $R(w_{n_0+2k})$ is the longest $\vartheta$-palindromic suffix of the word $w_{n_0+2k+1}$ preceded by the letter $R(a)=a$. \begin{align} w_{n_0+2(k+1)} =&\;(w_{n_0+2k+1}b)^{\vartheta} \nonumber \\[3mm] =&\;(ps(\vartheta(p)R(p))^k b)^{\vartheta} \nonumber \\[3mm] =&\;(pR(w_{n_0+2k})b)^{\vartheta} \nonumber \\[3mm] =&\;pR(w_{n_0+2k})\vartheta(p) \nonumber \\[3mm] =&\;ps(\vartheta(p)R(p))^{k}\vartheta(p). \nonumber \end{align} For arbitrary $k \geq 1$ the factor $w_{n_0+2k-1}$ is a~palindrome and is of the form~\eqref{tvar1}, hence $\mathbf{u}$ is periodic with the period $R(\vartheta(p)R(p)) = pR\vartheta(p)$. \end{proof} Let us describe all combinations of antimorphisms and letters from Proposition~\ref{th:per} for which a~periodic word occurs, i.e., for which either condition~\eqref{eq:podminka2} or condition~\eqref{eq:podminka2b} is met and at the same time $\vartheta(a) = b$. 
\begin{align} \vartheta_n &= R \Rightarrow \delta_{n+1} = 0& &\text{and} &\vartheta_n &= E_0 \Rightarrow \delta_{n+1} = 0, \nonumber \\[3mm] \vartheta_n &= R \Rightarrow \delta_{n+1} = 1& &\text{and} &\vartheta_n &= E_0 \Rightarrow \delta_{n+1} = 2, \nonumber \\[3mm] \vartheta_n &= R \Rightarrow \delta_{n+1} = 2& &\text{and} &\vartheta_n &= E_0 \Rightarrow \delta_{n+1} = 1, \nonumber \end{align} \begin{align} \vartheta_n &= R \Rightarrow \delta_{n+1} = 1& &\text{and} &\vartheta_n &= E_1 \Rightarrow \delta_{n+1} = 1, \nonumber \\[3mm] \vartheta_n &= R \Rightarrow \delta_{n+1} = 0& &\text{and} &\vartheta_n &= E_1 \Rightarrow \delta_{n+1} = 2, \nonumber \\[3mm] \vartheta_n &= R \Rightarrow \delta_{n+1} = 2& &\text{and} &\vartheta_n &= E_1 \Rightarrow \delta_{n+1} = 0, \nonumber \end{align} \begin{align} \vartheta_n &= R \Rightarrow \delta_{n+1} = 2& &\text{and} &\vartheta_n &= E_2 \Rightarrow \delta_{n+1} = 2, \nonumber \\[3mm] \vartheta_n &= R \Rightarrow \delta_{n+1} = 0& &\text{and} &\vartheta_n &= E_2 \Rightarrow \delta_{n+1} = 1, \nonumber \\[3mm] \vartheta_n &= R \Rightarrow \delta_{n+1} = 1& &\text{and} &\vartheta_n &= E_2 \Rightarrow \delta_{n+1} = 0. \nonumber \end{align} \begin{example} Consider the directive bi-sequence $\Lambda = (0(211)^{\omega}, R(E_0E_0R)^{\omega})$. According to Corollary~\ref{cor:norm2} we construct a~new bi-sequence $\widetilde{\Lambda} = (02(12)^{\omega}, RE_0(RE_0)^{\omega})$ such that $\mathbf{u}(\Lambda) = \mathbf{u}(\widetilde{\Lambda})$. Let us write down the first few prefixes of $\mathbf{u}(\widetilde{\Lambda})$: \begin{align} \widetilde{w}_1 =&\;0 \nonumber \\[3mm] \widetilde{w}_2 =&\;0210 \nonumber \\[3mm] \widetilde{w}_3 =&\;0210120 \nonumber \\[3mm] \widetilde{w}_4 =&\;0210120210 \nonumber \\[3mm] \widetilde{w}_5 =&\;0210120210120. \nonumber \end{align} Using the notation from the proof of Proposition~\ref{th:per} we have $\widetilde{w}_{n_0+1} = \widetilde{w}_3$, $s = 0120$ and $p = 021$. 
The period of the word $\mathbf{u}(\widetilde{\Lambda})$ is therefore equal to $pRE_0(p) = 021012$, i.e., $\mathbf{u}(\widetilde{\Lambda}) = (021012)^{\omega}$. \end{example} \subsection{Directive bi-sequences containing three or four antimorphisms}\label{sec:3or4} Let us now treat the remaining bi-sequences. First, we show that all generalized pseudostandard words whose directive bi-sequence contains infinitely many times three or four antimorphisms, one of them being $R$, are aperiodic. \begin{proposition}\label{more_antimorphismR} Let $\mathbf{u} = \mathbf{u}(\Delta, \Theta)$, where $\Theta$ contains infinitely many times three or four antimorphisms including $R$. Then the word $\mathbf{u}$ is aperiodic. \end{proposition} \begin{proof} Assume without loss of generality that the bi-sequence $(\Delta, \Theta)$ is normalized. (It is easy to see that a~normalized form always exists and the original bi-sequence is its subsequence.) Assume now for contradiction that the word $\mathbf{u}$ is periodic. Then the following holds: \begin{enumerate} \item There exists $n_0$ such that for all $n > n_0$ the following conditions are satisfied: \begin{itemize} \item $\vartheta_n = R \Rightarrow \delta_{n+1} = a$, \item $\vartheta_n = E_i \Rightarrow \delta_{n+1} = b$, \item $\vartheta_n = E_j \Rightarrow \delta_{n+1} = c$, \item and, if a~fourth antimorphism $E_k$ occurs infinitely many times, $\vartheta_n = E_k \Rightarrow \delta_{n+1} = d$; \end{itemize} \item \label{eq:pism} moreover $a = R(a) = E_i(b) = E_j(c) (=E_k(d))$, \end{enumerate} where $\Theta = \vartheta_1\vartheta_2\cdots$, $\Delta = \delta_1\delta_2\cdots$, $a,b,c,d,i,j,k \in \{0,1,2\}$ and $i,j,k$ are mutually different indices. If at least one of the above conditions is not met, then it can be easily shown that every prefix of $\mathbf{u}$ is left special, and the word $\mathbf{u}$ is thus aperiodic. Consider $\ell \in \{0,1,2\}$ such that $RE_\ell$ occurs infinitely many times in $\Theta$.
Denote by $(e_na,RE_\ell)$, $n > n_0$, the corresponding factors of the bi-sequence $(\Delta, \Theta)$. Denote by $w^{(n)}_m$ the corresponding palindromic prefixes and by $w^{(n)}_{m+1}$ the corresponding $E_\ell$-palindromic prefixes, and let $e$ denote the letter that follows the $E_\ell$-palindromes $w^{(n)}_{m+1}$. Let us now study the form of $w^{(n)}_{m+2}$. The longest palindromic suffix of the word $w^{(n)}_{m+1}e$ is, thanks to the normalized form of the directive bi-sequence, equal to $E_\ell(a)E_\ell(w^{(n)}_{m})e$ since according to condition~\ref{eq:pism} we have $E_\ell(a) = e$. Moreover, using again the fact that the directive bi-sequence is normalized, the factor $E_\ell(a)E_\ell(w^{(n)}_{m})e$ is the longest $\vartheta$-palindromic suffix of the word $w^{(n)}_{m+1}e$ for any antimorphism $\vartheta$. There exists $n_1 > n_0$ such that for all $n > n_1$ the factor $w^{(n)}_{m+2}$ is a~palindrome. Indeed, suppose that such an index $n_1$ does not exist. It means that for infinitely many indices $n$, when constructing $w^{(n)}_{m+2}$, we look for a~shorter $\vartheta$-palindromic suffix of the word $w^{(n)}_{m+1}e$ than its palindromic suffix $E_\ell(a)E_\ell(w^{(n)}_{m})e$. If we now extend the longest suffix $E_\ell(a)E_\ell(w^{(n)}_{m})e$ stepwise by one letter to the right and one letter to the left so that the obtained word remains a~palindrome, we get a~palindrome $s^{(n)}$ that is surrounded by distinct letters on the right and on the left. Since ${\mathcal L}(\mathbf u)$ is closed under the antimorphism $R$, the factor $s^{(n)}$ is a~bispecial factor and the word $\mathbf{u}$ is aperiodic, which is a~contradiction. Similarly, it is possible to show that there exists $n_2 > n_1$ such that for all $n > n_2$ the factor $w^{(n)}_{m+3}$ is an $E_\ell$-palindrome. The above arguments imply that $\Theta = \sigma(RE_\ell)^{\omega}$ for some $\sigma\in \{R,E_0,E_1,E_2\}^*$, which is a~contradiction.
\end{proof} \begin{example} Since the construction of the bispecial factors $s^{(n)}$ from the previous proof is quite involved, let us illustrate it with an example. Consider $\mathbf u=\mathbf u((210)^{\omega}, (E_0 E_1 R)^{\omega})$. This bi-sequence is not normalized; however, it serves as a~simple example for illustrating the previous proof. Let us write down the first five prefixes: \begin{align} {w}_1 =&\;21 \nonumber \\[3mm] {w}_2 =&\;2110 \nonumber \\[3mm] {w}_3 =&\;21100112 \nonumber \\[3mm] {w}_4 =&\; 21100112200221 \nonumber \\[3mm] {w}_5 =&\; 211\underline{00112200221100}22001122110. \nonumber \end{align} Indeed, $E_0(w_3)=12200221$ is a~palindromic suffix of $w_4$ that may be extended stepwise so that the obtained word $s=00112200221100$ is still a~palindrome. The word $s$ is underlined in the prefix $w_5$. Moreover, we can see that $s$ cannot be extended as a~palindrome any more since the next factor is $1s2$. The language ${\mathcal L}(\mathbf u)$ is closed under $R$, therefore $2s1$ is a~factor of $\mathbf u$, too. Since we work in the proof of Proposition~\ref{more_antimorphismR} with a~normalized bi-sequence, it is guaranteed that $s$ cannot be extended to a~prefix of $\mathbf u$. In such a~case $s$ would be a~skipped palindromic prefix of $\mathbf u$. \end{example} In order to treat the remaining directive bi-sequences in the last proposition, the following remark and corollary are needed. \begin{remark} \label{poz:e} For $i, j, k \in \{0,1,2\}$ mutually different, we have $E_iE_jE_k = E_j$. \end{remark} \begin{corollary}\label{cor:ecka} Let $i \in \{0,1,2\}$ and $v \in \{0,1,2\}^*$ be such that $v = E_i(v)$. Let further $j, k \in \{0,1,2\}$ be such that $i, j, k$ are mutually different. Then $E_j(v) = E_kE_j(v)$, i.e., $E_j(v)$ is an $E_k$-palindrome.
\end{corollary} \begin{proposition} Let $\mathbf{u} = \mathbf{u}(\Delta, \Theta)$ be a~ternary generalized pseudostandard word, where $\Theta$ contains infinitely many times $E_i, E_j$, or $E_i, E_j, E_k$ with $i, j, k$ mutually distinct. Then $\mathbf{u}$ is periodic if and only if \begin{equation} \label{eq:tvar} (\widetilde{\Delta},\widetilde{\Theta}) = (v(ijk)^{\omega}, \sigma(E_iE_kE_j)^{\omega}), \end{equation} where $(\widetilde{\Delta},\widetilde{\Theta})$ is the normalized form of $(\Delta, \Theta)$, $v \in \{0,1,2\}^*$, $\sigma \in \{E_0,E_1,E_2,R\}^*$ and $|v|=|\sigma|$. \end{proposition} \begin{proof} $(\Rightarrow)$: Suppose that the normalized bi-sequence is not of the form~\eqref{eq:tvar}. We will show that $\mathbf u=\mathbf u(\widetilde{\Delta},\widetilde{\Theta})$ is aperiodic. Assume: \begin{enumerate} \item There exists $n_0$ such that for all $n > n_0$ the following conditions are satisfied: \begin{itemize} \item $\vartheta_n = E_i \Rightarrow \delta_{n+1} = a$, \item $\vartheta_n = E_j \Rightarrow \delta_{n+1} = b$, \item alternatively $\vartheta_n = E_k \Rightarrow \delta_{n+1} = c$; \end{itemize} \item moreover $E_i(a) = E_j(b) (=E_k(c))$, \end{enumerate} where $\Theta = \vartheta_1\vartheta_2\cdots$, $\Delta = \delta_1\delta_2\cdots$, $a,b,c,i,j,k \in \{0,1,2\}$ and $i,j,k$ are mutually different indices. If at least one of the above conditions is not met, then it can be easily shown that every prefix of $\mathbf{u}$ is left special. Since condition~\eqref{eq:tvar} is not met, two possibilities occur: \begin{itemize} \item The bi-sequence $(\widetilde{\Delta},\widetilde{\Theta})$ contains infinitely many times the factor $(def,E_\ell E_s E_s)$ for some $d,e,f,\ell,s \in \{0,1,2\}$ and $\ell, s$ mutually distinct. Denote $w^{(n)}_{m}$, $w^{(n)}_{m+1}$ and $w^{(n)}_{m+2}$ the corresponding $\vartheta$-palindromic prefixes.
Thanks to Corollary~\ref{cor:ecka} and thanks to the fact that the bi-sequence $(\widetilde{\Delta},\widetilde{\Theta})$ is normalized, the $E_t$-palindrome $E_t(f)p^{(n)}f = E_s(e)E_s(w^{(n)}_{m})f$, where $t$ is different from $\ell,s$, is the longest $\vartheta$-palindromic suffix of the word $w^{(n)}_{m+1}f$. (We use the equalities: $E_s(f)=E_\ell(e), \ f=E_s E_\ell(e), \ E_t(f)=E_tE_s E_\ell(e)=E_s(e)$.) However, since $w^{(n)}_{m+2}$ is an $E_s$-palindrome, another shorter $\vartheta$-palindromic suffix of the word $w^{(n)}_{m+1}f$ has been chosen when constructing $w^{(n)}_{m+2}$. This implies that if we extend the $E_t$-palindrome $p^{(n)}$ stepwise by one letter from both sides so that the obtained factor is again an $E_t$-palindrome, we get some $E_t$-palindrome $q^{(n)}$ surrounded by two letters $g,h$ satisfying $E_t(g) \neq h$. Since ${\mathcal L}(\mathbf u)$ is closed under $E_t$, the factor $q^{(n)}$ is bispecial and the word $\mathbf{u}$ is thus aperiodic. \item The bi-sequence $(\widetilde{\Delta},\widetilde{\Theta})$ contains infinitely many times the factor $(def,E_\ell E_s E_\ell)$ for some $d,e,f,\ell, s \in \{0,1,2\}$ and $\ell,s$ mutually different. Analogously to the previous case, we denote by $E_t(f)p^{(n)}f = E_s(e)E_s(w^{(n)}_{m})f$ the longest $\vartheta$-palindromic suffix of the word $w^{(n)}_{m+1}f$. This time, the factor $w^{(n)}_{m+2}$ is an $E_\ell$-palindrome. Consequently, another suitable shorter $\vartheta$-palindromic suffix of the word $w^{(n)}_{m+1}f$ has been chosen when constructing $w^{(n)}_{m+2}$. Hence, exactly as above, we can find bispecial factors, and the word $\mathbf u$ is thus aperiodic. \end{itemize} To sum up, we get $\widetilde{\Theta}=\sigma(E_i E_j E_k)^{\omega}$ and the form of $\widetilde{\Delta}$ follows from conditions 1 and 2.
\noindent $(\Leftarrow)$: Assume without loss of generality that the normalized directive bi-sequence is of the form $(\widetilde{\Delta},\widetilde{\Theta}) = (v(102)^{\omega}, \sigma(E_0E_1E_2)^{\omega})$. Denote $|v| = |\sigma| = n_0$. Then we have $w_{n_0+1} = ps$, where $2s0$ is the longest $E_1$-palindromic suffix of the word $w_{n_0+1}0$. It follows that $w_{n_0+2} = psE_1(p)$. Thanks to the fact that the bi-sequence is normalized and thanks to Corollary~\ref{cor:ecka}, the longest $E_2$-palindromic suffix of the word $w_{n_0+2}2$ equals $E_1(0)E_1(w_{n_0+1})2 = 2E_1(ps)2 = 2sE_1(p)2$. Therefore $w_{n_0+3} = psE_1(p)E_2(p)$. Similarly, the longest $E_0$-palindromic suffix of the word $w_{n_0+3}1$ is equal to $E_2(2)E_2(w_{n_0+2})1 = 2E_2(psE_1(p))1 = 2sE_1(p)E_2(p)1$. We obtain $w_{n_0+4} = psE_1(p)E_2(p)E_0(p)$. Repeating the previous steps, we get $\mathbf{u} = ps(E_1(p)E_2(p)E_0(p))^{\omega}$, thus $\mathbf{u}$ is periodic. \end{proof} \section{Open problems} We have provided a~necessary and sufficient condition for periodicity of generalized pseudostandard words over binary and ternary alphabets. More precisely, we have described what the directive bi-sequence of a~generalized pseudostandard word has to look like in order to correspond to a~periodic word. The two cases are surprisingly different -- the ternary case is not at all a~simple generalization of the condition over binary alphabet. Over ternary alphabet, it may happen that in order to decide about periodicity using our result (Theorem~\ref{thm:ternary}), one needs to know the normalized directive bi-sequence. The problem is that we only know that the normalized form of every directive bi-sequence exists, but in contrast to binary alphabet, we have no algorithm for producing the normalized directive bi-sequence from a~given directive bi-sequence over ternary alphabet. Therefore, it is desirable to find such a~normalizing algorithm over ternary or even any alphabet.
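For readers who wish to experiment with directive bi-sequences, the pseudopalindromic closure construction used throughout is straightforward to implement. The following Python sketch is our own illustration (the helper names `antimorphism`, `closure` and `prefixes` are ours, not from the text); it computes the prefixes $w_n$ by iterated closure and reproduces the examples discussed above.

```python
# Sketch (illustration only): pseudopalindromic closure over {0,1,2}.
# R is mirror reversal; E_i fixes the letter i and exchanges the other two.

def antimorphism(name):
    """Return the involutory antimorphism named 'R', 'E0', 'E1' or 'E2'."""
    if name == 'R':
        table = {c: c for c in '012'}
    else:
        i = name[1]
        a, b = [c for c in '012' if c != i]
        table = {i: i, a: b, b: a}
    return lambda w: ''.join(table[c] for c in reversed(w))

def closure(w, theta):
    """Shortest theta-palindrome with prefix w: write w = p s with s the
    longest theta-palindromic suffix of w; the closure is p s theta(p)."""
    for i in range(len(w) + 1):
        if theta(w[i:]) == w[i:]:
            return w + theta(w[:i])

def prefixes(delta, thetas):
    """Prefixes w_1, w_2, ... of u(Delta, Theta) built from the bi-sequence."""
    w, out = '', []
    for d, t in zip(delta, thetas):
        w = closure(w + d, antimorphism(t))
        out.append(w)
    return out

# The example u((210)^w, (E0 E1 R)^w) from the text:
print(prefixes('21021', ['E0', 'E1', 'R', 'E0', 'E1']))
# ['21', '2110', '21100112', '21100112200221', '2110011220022110022001122110']

# The periodic example u(02(12)^w, RE0(RE0)^w) = (021012)^w:
ws = prefixes('0212121', ['R', 'E0'] * 4)
print(all(('021012' * 5).startswith(w) for w in ws))
```

Such a routine also makes it easy to test candidate normalized forms empirically, which may be of use in the search for a normalizing algorithm.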
Section~\ref{sec:norm ternary} may serve as a~hint in such an effort. Observing the results for binary and ternary alphabets, we state the following conjecture for a~multiliteral alphabet. \begin{conjecture}[Periodicity of generalized pseudostandard words]\label{multiliteral} Consider an alphabet $\mathcal A$ with $\#{\mathcal A}=d$ and $G$ the set of all involutory antimorphisms on ${\mathcal A}^*$. Let $\mathbf u(\Delta, \Theta)$ be a~$d$-ary generalized pseudostandard word, where $\Delta$ is a~sequence of letters from $\mathcal A$ and $\Theta$ is a~sequence of antimorphisms from $G$. Then $\mathbf u$ is periodic if and only if the following conditions are met: \begin{enumerate} \item The normalized directive bi-sequence is of the form $$(\widetilde{\Delta}, \widetilde{\Theta})=(v\delta_1\delta_2\delta_3\ldots, \sigma \vartheta_1 \vartheta_2\vartheta_3\ldots),$$ where $|v|=|\sigma|$ and $\vartheta_i(\delta_{i+1})=\vartheta_j(\delta_{j+1})$ for all $i,j \in \mathbb N$. \item For all $i \in \mathbb N$, if $w$ is a~$\vartheta_i$-palindrome, then $\vartheta_{i+1}(w)$ is a~$\vartheta_{i+2}$-palindrome. \end{enumerate} \end{conjecture} In order to explain that this conjecture is consistent with the results over binary and ternary alphabets, let us write down the statements for periodicity over binary and ternary alphabet using the normalized directive bi-sequence. Considering Lemma~\ref{lemma:tvar} and Remark~\ref{rem:norm} following Theorem~\ref{thm:periodicitybinary}, we have the next corollary. \begin{corollary} Let $(\widetilde{\Delta}, \widetilde{\Theta})$ be the normalized directive bi-sequence of a~binary generalized pseudostandard word $\mathbf u={\mathbf u}(\widetilde{\Delta}, \widetilde{\Theta})$. Then $\mathbf u$ is periodic if and only if one of the following conditions is met: \begin{enumerate} \item $(\widetilde{\Delta}, \widetilde{\Theta})=(va^{\omega}, \sigma\vartheta^{\omega})$ for some $v \in \{0,1\}^*, \sigma \in \{E, R\}^*, |v|=|\sigma|, a \in \{0,1\}$.
\item $(\widetilde{\Delta}, \widetilde{\Theta})=(v(a\overline{a})^{\omega}, \sigma(RE)^{\omega})$ for some $v \in \{0,1\}^*, \sigma \in \{E, R\}^*, |v|=|\sigma|$, $a \in \{0,1\}$. \end{enumerate} \end{corollary} Using Theorem~\ref{thm:ternary}, Lemma~\ref{lem:norm} and Corollary~\ref{cor:norm2}, we get the following corollary. \begin{corollary} Let $(\widetilde{\Delta}, \widetilde{\Theta})$ be the normalized directive bi-sequence of a~ternary generalized pseudostandard word $\mathbf u={\mathbf u}(\widetilde{\Delta}, \widetilde{\Theta})$. Then $\mathbf u$ is periodic if and only if one of the following conditions is met: \begin{enumerate} \item $(\widetilde{\Delta}, \widetilde{\Theta})=(va^{\omega}, \sigma\vartheta^{\omega})$ for some $v \in \{0,1,2\}^*$, $\sigma \in \{E_0, E_1, E_2, R\}^*$, $|v|=|\sigma|$, $\vartheta \in \{E_0, E_1, E_2, R\}$ and $a \in \{0,1,2\}$. \item $(\widetilde{\Delta}, \widetilde{\Theta})=(v(ab)^{\omega}, \sigma(RE_i)^{\omega})$ for some $v \in \{0,1,2\}^*$, $\sigma \in \{E_0, E_1, E_2, R\}^*$, $|v|=|\sigma|$, $i \in \{0, 1, 2\}$ and $a, b \in \{0,1, 2\}$. \item $(\widetilde{\Delta}, \widetilde{\Theta})=(v(ijk)^{\omega}, \sigma(E_k E_j E_i)^{\omega})$, where $v \in \{0,1,2\}^*, \sigma \in \{E_0, E_1, E_2, R\}^*$, $|v|=|\sigma|$ and $i,j,k \in \{0,1,2\}$ are mutually different letters. \end{enumerate} \end{corollary} It is necessary to assume in Conjecture~\ref{multiliteral} that the word $\mathbf u(\Delta, \Theta)$ is $d$-ary if we consider the set $G$ of involutory antimorphisms over a~$d$-ary alphabet as illustrated by the following example. 
\begin{example} Consider ${\mathcal A}=\{0,1,2,3,4\}$ and the following involutory antimorphisms: \begin{itemize} \item $E_{014}(0)=0, E_{014}(1)=1, E_{014}(2)=3, E_{014}(3)=2, E_{014}(4)=4,$ \item $E_2(0)=1, E_2(1)=0, E_2(2)=2, E_2(3)=4, E_2(4)=3,$ \item $R(0)=0, R(1)=1, R(2)=2, R(3)=3, R(4)=4.$ \end{itemize} Then ${\mathbf u}((01)^{\omega}, (E_{014}E_2)^{\omega})={\mathbf u}((01)^{\omega}, (RE_2)^{\omega})=(01)^{\omega}$. Hence, we can see that as soon as we do not work with a~$d$-ary word ($d=5$ in this case), it can happen that the same word is obtained using several normalized directive bi-sequences and the conditions from Conjecture~\ref{multiliteral} are not met despite the evident periodicity of the word $\mathbf u$. \end{example} \end{document}
\begin{document} \title{Computations with modified diagonals} \author{Kieran G. O'Grady\\\\ \lq\lq Sapienza\rq\rq Universit\`a di Roma} \dedicatory{Alla piccola Titti} \date{March 23 2014} \thanks{Supported by PRIN 2010} \begin{abstract} Motivated by conjectures of Beauville and Voisin on the Chow ring of Hyperk\"ahler varieties we will prove some basic results on the rational equivalence class of modified diagonals of projective varieties. \noindent \textbf{Key Words:} Chow ring, Hyperk\"ahler varieties, modified diagonals. \noindent \textbf{Mathematics Subject Classification:} 14C25, 14J28. \end{abstract} \maketitle \tableofcontents \section{Introduction} Let $X$ be an $n$-dimensional variety over a field ${\mathbb K}$ and $a\in X({\mathbb K})$. For $I\subset\{1,\ldots,m\}$ we let \begin{equation}\label{diagtwist} \Delta^m_I(X;a):=\{(x_1,\ldots,x_m)\in X^m \mid \text{$x_i=x_j$ if $i,j\in I$ and $x_i=a$ if $i\notin I$}\}. \end{equation} The \emph{$m$-th modified diagonal cycle associated to $a$} is the $n$-cycle on $X^m$ given by \begin{equation}\label{eccogamma} \Gamma^m(X;a):=\sum\limits_{\emptyset\not= I\subset \{1,2,\ldots,m\}}(-1)^{m-|I|}\Delta^m_I(X;a) \end{equation} if $n>0$, and equal to $0$ if $n=0$. Gross and Schoen~\cite{groscho} proved that if $X$ is a (smooth projective) hyperelliptic curve and $a$ is a fixed point of a hyperelliptic involution then $\Gamma^3(X;a)$ represents a torsion class in the Chow group of $X^3$. On the other hand it is known that if $X$ is a generic complex smooth plane curve and $m$ is small compared to its genus then $\Gamma^m(X;a)$ is \emph{not} algebraically equivalent to $0$, whatever $a$ is, see~\cite{voisinf} (for the link between vanishing of $\Gamma^m(X;a)$ and Voisin's result on the Beauville decomposition of the Abel-Jacobi image of a curve see the proof of Prop.~4.3 of~\cite{beauvoisin}).
Let $X$ be a complex projective $K3$ surface: Beauville and Voisin~\cite{beauvoisin} have proved that there exists $c\in X$ such that the rational equivalence class of $\Gamma^3(X;c)$ is torsion. A natural question arises: under which hypotheses does a modified diagonal cycle on a projective variety represent a torsion class in the Chow group? We should point out that such a vanishing can entail unexpected geometric properties: if $X$ is a smooth projective variety of dimension $n$ and $\Gamma^{n+1}(X;a)$ is torsion in the Chow group then the intersection of arbitrary divisor classes $D_1,\ldots,D_{n}$ on $X$ is rationally equivalent to a multiple of $a$. A set of conjectures put forth by Beauville~\cite{beauconj} and Voisin~\cite{voisinhk} predicts exactly such a degenerate behaviour for the intersection product of divisors on hyperk\"ahler varieties, i.e.~complex smooth projective varieties which are simply connected and carry a holomorphic symplectic form whose cohomology class spans $H^{2,0}$ (see~\cite{liefu,shenvial} for more results on those conjectures). Our interest in modified diagonals has been motivated by the desire to prove the conjecture on hyperk\"ahler varieties stated below. From now on the notation $A\equiv B$ for cycles $A,B$ on a variety $X$ means that for some integer $d\not=0$ the cycle $dA$ is rationally equivalent to $dB$, i.e.~we will work with the rational Chow group $\CH(X)_{{\mathbb Q}}:=\CH(X)\otimes_{{\mathbb Z}}{\mathbb Q}$. \begin{cnj}\label{cnj:diaghk} Let $X$ be a Hyperk\"ahler variety of dimension $2n$. Then there exists $a\in X$ such that $\Gamma^{2n+1}(X;a)\equiv 0$. \end{cnj} In the present paper we will \emph{not} prove~\Ref{cnj}{diaghk}; instead we will establish a few basic results on modified diagonals. Below is our first result, see~\Ref{sec}{prodotti}. \begin{prp}\label{prp:protokunn} Let $X,Y$ be smooth projective varieties.
Suppose that there exist $a\in X({\mathbb K})$, $b\in Y({\mathbb K})$ such that $\Gamma^m(X;a)\equiv 0$ and $\Gamma^n(Y;b)\equiv 0$. Then $\Gamma^{m+n-1}(X\times Y;(a,b))\equiv 0$. \end{prp} We will apply the above proposition in order to show that if $T$ is a complex abelian surface and $a\in T$ then $\Gamma^5(T;a)\equiv 0$. Notice that if $E$ is an elliptic curve and $a\in E$ then $\Gamma^3(E;a)\equiv 0$ by Gross and Schoen~\cite{groscho}. These results are particular instances of a theorem of Moonen and Yin~\cite{moy} which asserts that $\Gamma^{2g+1}(A;p)\equiv 0$ for $A$ an abelian variety of dimension $g$ and $p\in A({\mathbb K})$ (and more generally for an abelian scheme of relative dimension $g$). A word about the relation between Moonen and Yin's result and~\Ref{cnj}{diaghk}. Beauville and Voisin proved that the relation $\Gamma^3(X;c)\equiv 0$ for $X$ a complex projective $K3$ surface (and a certain $c\in X$) follows from the existence of an elliptic surface $Y$ dominating $X$ and the relation $\Gamma^3(E_t;a)\equiv 0$ for the fibers of the elliptic fibration on $Y$. We expect that the theorem of Moonen and Yin can be used to prove that~\Ref{cnj}{diaghk} holds for Hyperk\"ahler varieties which are covered generically by abelian varieties; this is the subject of work in progress. (It is hard to believe that every Hyperk\"ahler variety of dimension greater than $2$ is covered generically by abelian varieties, but certainly there are interesting codimension-$1$ families which have this property, viz.~lagrangian fibrations and Hilbert schemes of $K3$ surfaces; moreover Lang's conjectures on hyperbolicity would give that a hyperk\"ahler variety is generically covered by varieties birational to abelian varieties.)
In~\Ref{sec}{fibrazioni} we will prove that, in a certain sense, \Ref{prp}{protokunn} holds also for ${\mathbb P}^r$ fibrations over smooth projective varieties if certain hypotheses are satisfied; then we will apply the result to prove vanishing of classes of modified diagonals of symmetric products of curves of genus at most $2$. In~\Ref{sec}{scoppio} we will prove the following result. \begin{prp}\label{prp:blowdel} Let $Y$ be a smooth projective variety and $V\subset Y$ be a smooth subvariety of codimension $e$. Suppose that there exists $b\in V({\mathbb K})$ such that $\Gamma^{n+1}(Y;b)\equiv 0$ and $\Gamma^{n-e+1}(V;b)\equiv 0$. Let $f\colon X\to Y$ be the blow-up of $V$ and $a\in X({\mathbb K})$ such that $f(a)=b$. Then $\Gamma^{n+1}(X;a)\equiv 0$. \end{prp} We will apply~\Ref{prp}{blowdel} and~\Ref{prp}{protokunn} in order to show that~\Ref{cnj}{diaghk} holds for $S^{[n]}$ where $S$ is a complex $K3$ surface and $n=2,3$, see~\Ref{prp}{diaghilbk3}. In~\Ref{sec}{rivdop} we will consider double covers $f\colon X\to Y$ where $X$ is a projective variety. We will prove that if $a\in X({\mathbb K})$ is a ramification point and $\Gamma^m(Y;f(a))\equiv 0$ then $\Gamma^{2m-1}(X;a)\equiv 0$, provided $m=2,3$. The proof for $m=2$ is the proof, given by Gross and Schoen, that if $X$ is a hyperelliptic curve then $\Gamma^3(X;a)\equiv 0$ for $a\in X({\mathbb K})$ a fixed point of a hyperelliptic involution; we expect that our extension will work for arbitrary $m$ but we have not been able to carry out the necessary linear algebra computations. The result for $m=3$ allows us to give another proof that $\Gamma^5(T;a)\equiv 0$ for a complex abelian surface $T$: the equality $\Gamma^5(T;a)\equiv 0$ follows from our result on double covers and the equality $\Gamma^3(T/\langle -1\rangle;c)\equiv 0$ proved by Beauville and Voisin~\cite{beauvoisin}. \subsection{Conventions and notation} Varieties are defined over a base field $\mathbb K$.
A \emph{point of $X$} is an element of $X({\mathbb K})$. We denote the small diagonal $\Delta^m_{\{1,\ldots,m\}}(X;a)$ by $\Delta^m(X)$ and we let $\pi^m_i\colon X^m\to X$ be the $i$-th projection; we will drop the superscript $m$ if there is no potential for confusion. We let $X^{(n)}$ be the $n$-th symmetric product of $X$ i.e.~$X^{(n)}:=X^n/{\mathcal S}_n$ where ${\mathcal S}_n$ is the symmetric group on $n$ elements. \subsection{Acknowledgments} It is a pleasure to thank Lie Fu, Ben Moonen and Charles Vial for the interest they took in this work. \section{Preliminaries} \subsection{}\label{subsec:significato} \setcounter{equation}{0} Let $X$ be an $n$-dimensional projective variety over a field ${\mathbb K}$, $a\in X({\mathbb K})$ and $h$ a hyperplane class on $X$. Let $\iota\colon\Delta^m(X)\hookrightarrow X^m$ be the inclusion map. If $m\le n$ then \begin{equation}\label{iperpiano} \Gamma^{m}(X;a)\cdot\pi_1^{*}(h)\cdot\pi_2^{*}(h)\cdot\ldots\cdot\pi_{m-1}^{*}(h)\cdot\pi_{m}^{*}(h^{n-m+1})= \iota_{*}(h^n). \end{equation} Since $\deg\iota_{*}(h^n)\not=0$ it follows that $\Gamma^{m}(X;a)\not\equiv 0$ if $m\le n$. Now suppose that $\Gamma^{n+1}(X;a)\equiv 0$. Let $D_1,\ldots,D_n$ be \emph{Cartier} divisors on $X$: then \begin{equation}\label{tuttimulti} 0=\pi_{n+1,*}(\Gamma^{n+1}(X;a)\cdot\pi_1^{*}D_1\cdot\ldots\cdot \pi_n^{*}D_n)=D_1\cdot\ldots\cdot D_n-\deg(D_1\cdot\ldots\cdot D_n)a \end{equation} in $\CH_0(X)_{{\mathbb Q}}$. \begin{rmk}\label{rmk:unicopunto} Equation~\eqref{tuttimulti} shows that if $\Gamma^{n+1}(X;a)\equiv 0$ and $\Gamma^{n+1}(X;b)\equiv 0$ then $a\equiv b$. \end{rmk} \begin{expl}\label{expl:spapro} The intersection product between cycle classes of complementary dimension defines a perfect pairing on $\CH(({\mathbb P}^n)^m)$. Let $a\in{\mathbb P}^n$: since $\Gamma^{n+1}({\mathbb P}^n;a)$ pairs to $0$ with any class of complementary dimension it follows that $\Gamma^{n+1}({\mathbb P}^n;a)\equiv 0$.
\end{expl} \subsection{}\label{subsec:homcomp} \setcounter{equation}{0} In the present subsection we will assume that $X$ is a complex smooth projective variety of dimension $n$. Let $a\in X$. Let $\alpha_1,\ldots,\alpha_m\in H_{DR}(X)$ be homogeneous De Rham cohomology classes such that $\sum_{i=1}^m\deg\alpha_i=2n$. Thus it makes sense to integrate $\pi_1^{*}\alpha_1\wedge\ldots\wedge\pi_m^{*}\alpha_m$ on $\Gamma^m(X;a)$. Let \begin{equation}\label{eccoesse} s:=|\{1\le i\le m \mid \deg\alpha_i=0\}|. \end{equation} A straightforward computation gives that \begin{equation}\label{integrale} \int_{\Gamma^m(X;a)} \pi_1^{*}\alpha_1\wedge\ldots\wedge\pi_m^{*}\alpha_m = \sum_{\ell=0}^{m-1}(-1)^{\ell}{{s}\choose{\ell}}\int_X \alpha_1\wedge\ldots\wedge\alpha_m. \end{equation} \begin{prp}\label{prp:banale} Let $X$ be a smooth complex projective variety and $a\in X$. Let $n$ be the dimension of $X$ and $d$ be its Albanese dimension. The homology class of $\Gamma^{m}(X;a)$ is torsion if and only if $m>(n+d)$. \end{prp} \begin{proof} If $n=0$ the result is obvious. From now on we assume that $n>0$. By~\eqref{iperpiano} we may assume that $m>n$. The homology class of $\Gamma^{m}(X;a)$ is torsion if and only if the left-hand side of~\eqref{integrale} vanishes for every choice of homogeneous $\alpha_1,\ldots,\alpha_m\in H_{DR}(X)$ such that $\sum_{i=1}^m\deg\alpha_i=2n$. Suppose first that $n<m\le(n+d)$ and let $m=n+e$: thus $0< e\le d$. Choose a point of $X$ and let $\alb_X\colon X\to\Alb(X)$ be the associated Albanese map. Let $\theta$ be a K\"ahler form on $\Alb(X)$: by hypothesis $\dim(\im \alb_X)=d$ and hence there exist holomorphic $1$-forms $\psi_1,\ldots,\psi_e$ on $\Alb(X)$ such that \begin{equation}\label{positivo} \int_{\im(\alb_X)}\psi_1\wedge\ldots\wedge\psi_e\wedge\overline{\psi}_1\wedge\ldots\wedge\overline{\psi}_e\wedge\theta^{d-e}>0. \end{equation} For $i=1,\ldots,e$ let $\phi_i:=\alb_X^{*}\psi_i$ and $\eta:=\alb_X^{*}\theta$.
Let $\omega\in H^2_{DR}(X)$ be a K\"ahler class. Equations~\eqref{integrale} and~\eqref{positivo} give that \begin{equation} \scriptstyle \int_{\Gamma^m(X;a)} \pi_1^{*}\phi_1\wedge\ldots\wedge\pi_{e}^{*}\phi_e\wedge \pi_{e+1}^{*}\overline{\phi}_1\wedge\ldots\wedge\pi_{2e}^{*}\overline{\phi}_e\wedge \pi_{2e+1}^{*}\eta\wedge\ldots\wedge\pi_{e+d}^{*}\eta \wedge\pi_{e+d+1}^{*}\omega\wedge\ldots\wedge\pi_{m}^{*}\omega =\int_{X} \phi_1\wedge\ldots\wedge\phi_e\wedge \overline{\phi}_1\wedge\ldots\wedge\overline{\phi}_e\wedge \eta^{d-e}\wedge\omega^{n-d}>0. \end{equation} It follows that the homology class of $\Gamma^{m}(X;a)$ is not torsion. Lastly suppose that $m>(n+d)$. Let $s$ be given by~\eqref{eccoesse}: then $s\le (m-1)$ because $n>0$. It follows that if $s>0$ the right-hand side of~\eqref{integrale} vanishes (by the binomial formula). Now assume that $s=0$: by~\eqref{integrale} we have that \begin{equation}\label{lazio} \int_{\Gamma^m(X;a)} \pi_1^{*}\alpha_1\wedge\ldots\wedge\pi_m^{*}\alpha_m =\int_X \alpha_1\wedge\ldots\wedge\alpha_m. \end{equation} Let \begin{equation}\label{eccoti} t:=|\{1\le i\le m \mid \deg\alpha_i=1\}|. \end{equation} If $t>2d$ then the right-hand side of~\eqref{lazio} vanishes because every class in $H^1_{DR}(X)$ is represented by the pull-back of a closed $1$-form on $\Alb(X)$ via the Albanese map and by hypothesis $\dim(\im \alb_X)=d$. Now suppose that $t\le 2d$. Then \begin{equation} \deg(\pi_1^{*}\alpha_1\wedge\ldots\wedge\pi_m^{*}\alpha_m)\ge t+2(m-t)>2n+2d-t\ge 2n \end{equation} and hence the right-hand side of~\eqref{lazio} vanishes because the integrand is identically zero. This proves that if $m>(n+d)$ the homology class of $\Gamma^{m}(X;a)$ is torsion. \end{proof} \subsection{}\label{subsec:gradofin} \setcounter{equation}{0} Let $f\colon X\to Y$ be a map of finite non-zero degree between projective varieties. Let $a\in X$ and $b:=f(a)$. Then $f_{*}\Gamma^m(X;a)=(\deg f)\Gamma^m(Y;b)$.
It follows that if $\Gamma^m(X;a)\equiv 0$ then $\Gamma^m(Y;b)\equiv 0$. \section{Products}\label{sec:prodotti} We will prove~\Ref{prp}{protokunn} and then we will prove that if $T$ is a complex abelian surface then $\Gamma^5(T;a)\equiv 0$ for any $a\in T$. \subsection{Preliminary computations} \setcounter{equation}{0} Let $X$ and $Y$ be projective varieties and $a\in X$, $b\in Y$. Let $\emptyset\not=I\subset \{1,\ldots,r\}$ and $\emptyset\not=J\subset \{1,\ldots,s\}$. Thus $\Delta^r_{I}(X;a)\subset X^r$ and $\Delta^s_{J}(Y;b)\subset Y^s$: we let \begin{equation}\label{doppia} \Delta^{r,s}_{I,J}(X,Y;a,b):=\Delta^r_{I}(X;a)\times \Delta^s_{J}(Y;b)\subset X^r\times Y^s. \end{equation} We let $\Delta^{r,s}(X,Y)=\Delta^{r,s}_{\{1,\ldots,r\},\{1,\ldots,s\}}(X,Y;a,b)$. For the remainder of the present section we let \begin{equation}\label{semplifica} e:=m+n-1. \end{equation} We will constantly make the identification \begin{equation}\label{scambio} \begin{matrix} (X\times Y)^{e} & \overset{\sim}{\longrightarrow} & X^{e}\times Y^{e} \\ ((x_1,y_1),\ldots,(x_{e},y_{e})) & \mapsto & (x_1,\ldots,x_{e},y_1,\ldots,y_{e}) \end{matrix} \end{equation} With the above notation~\Ref{prp}{protokunn} is equivalent to the following rational equivalence: \begin{equation}\label{diagprod} \sum\limits_{\emptyset\not=I\subset \{1,\ldots,e\}}(-1)^{e-|I|}\Delta^{e,e}_{I,I}(X,Y;a,b)\equiv 0. \end{equation} \begin{prp}\label{prp:delsup} Let $X$ be a smooth projective variety and $a\in X$. Suppose that $\Gamma^m(X;a)\equiv 0$. Then \begin{equation}\label{delsup} \Delta^{m+r}(X)\equiv \sum_{1\le |J|\le (m-1)}(-1)^{m-1-|J| }{m+r-1-|J| \choose r}\Delta^{m+r}_J(X;a) \end{equation} for every $r\ge 0$. \end{prp} \begin{proof} By induction on $r$. If $r=0$ then~\eqref{delsup} is equivalent to $\Gamma^m(X;a)\equiv 0$. Let's prove the inductive step.
Since $\Gamma^m(X;a)\equiv 0$ we have that \begin{multline} \Delta^{m+r+1}(X)\equiv \pi_{1,\ldots,m+r}^{*}\Delta^{m+r}(X)\cdot \pi_{m+r,m+r+1}^{*}\Delta^2(X)\equiv \\ \equiv \pi_{1,\ldots,m+r}^{*}\left(\sum_{\substack{J\subset\{1,\ldots,m+r\} \\ 1\le |J|\le(m-1)}}(-1)^{m-1-|J| }{m+r-1-|J| \choose r}\Delta^{m+r}_J(X;a)\right)\cdot \pi_{m+r,m+r+1}^{*}\Delta^2(X). \end{multline} Next notice that \begin{equation}\label{duecasi} \pi_{1,\ldots,m+r}^{*}\Delta^{m+r}_J(X;a)\cdot \pi_{m+r,m+r+1}^{*}\Delta^2(X)\equiv \begin{cases} \Delta^{m+r+1}_J(X;a) & \text{if $(m+r)\notin J$,}\\ \Delta^{m+r+1}_{J\cup\{m+r+1\}}(X;a) & \text{if $(m+r)\in J$.} \end{cases} \end{equation} Thus $\Delta^{m+r+1}(X)$ is rationally equivalent to a linear combination of cycles $\Delta^{m+r+1}_J(X;a)$ with $|J|\le(m-1)$ and of cycles $\Delta^{m+r+1}_{K}(X;a)$ where \begin{equation}\label{kappacond} |K|=m,\qquad \{m+r,m+r+1\}\subset K. \end{equation} Let $K$ be such a subset and write $K=\{i_1,\ldots,i_m\}$ where $i_1<\ldots <i_m$. Let $\iota\colon X^{m}\to X^{m+r+1}$ be the map which composed with the $j$-th projection of $X^{m+r+1}$ is equal to the constant map to $a$ if $j\notin K$, and is equal to the $l$-th projection of $X^{m}$ if $j=i_l$. Then $\Delta^{m+r+1}_K(X;a)=\iota_{*}\Delta^{m}(X)$ and hence the equivalence $\Gamma^m(X;a)\equiv 0$ gives that \begin{equation} \Delta^{m+r+1}_{K}(X;a)\equiv \sum_{\substack{J\subset K \\ 1\le |J|\le(m-1)}}(-1)^{m-1-|J|}\Delta_J^{m+r+1}(X;a). \end{equation} Putting everything together we get an equivalence \begin{equation}\label{caino} \Delta^{m+r+1}(X)\equiv \sum_{1\le |J|\le (m-1)}(-1)^{m-1-|J| }c_J\Delta^{m+r+1}_J(X;a). \end{equation} In order to prove that $c_J={m+r-|J| \choose r+1}$ we distinguish four cases: they are indexed by the intersection \begin{equation}\label{preiti} J\cap\{m+r,m+r+1\}. \end{equation} Suppose that~\eqref{preiti} is empty.
We get a contribution (to $c_J$) of ${m+r-1-|J| \choose r}$ from the first case in~\eqref{duecasi}, and a contribution of \begin{equation} \scriptstyle |\{(J\cup\{m+r,m+r+1\})\subset K \subset\{1,\ldots,m+r+1\} \mid |K|=m\}|= {m+r-1-|J|\choose m-2-|J|}={m+r-1-|J|\choose r+1} \end{equation} from the subsets $K$ satisfying~\eqref{kappacond}. This proves that $c_J={m+r-|J| \choose r+1}$ in this case. The proof in the other three cases is similar. \end{proof} \begin{crl}\label{crl:delsup} Let $X$ be a smooth projective variety and $a\in X$. Suppose that $\Gamma^m(X;a)\equiv 0$. Let $s\ge 0$ and $I\subset \{1,\ldots,m+s\}$ be a subset of cardinality at least $m$. Then \begin{equation}\label{delsupdue} \Delta^{m+s}_I(X;a)\equiv \sum_{\substack{J\subset I \\ 1\le |J|\le(m-1)}}(-1)^{m-1-|J|}{|I|-|J| -1\choose |I|-m}\Delta^{m+s}_J(X;a). \end{equation} \end{crl} \begin{proof} Let $q:=|I|$ and $I=\{i_1,\ldots,i_q\}$ where $i_1<\ldots <i_q$. Let $\iota\colon X^{q}\to X^{m+s}$ be the map which composed with the $j$-th projection of $X^{m+s}$ is equal to the constant map to $a$ if $j\notin I$, and is equal to the $l$-th projection of $X^{q}$ if $j=i_l$. Then $\Delta^{m+s}_I(X;a)=\iota_{*}\Delta^{q}(X)$ and one gets~\eqref{delsupdue} by invoking~\Ref{prp}{delsup}. \end{proof} \begin{crl}\label{crl:emmabon} Let $X$, $Y$ be smooth projective varieties and $a\in X$, $b\in Y$. Suppose that $\Gamma^m(X;a)\equiv 0$ and $\Gamma^n(Y;b)\equiv 0$. Assume that $m\le n$. Let $I\subset \{1,\ldots,e\}$ (recall that $e=m+n-1$). \begin{enumerate} \item If $n\le |I|$ then \begin{equation}\label{moavero} \Delta^{e,e}_{I,I}(X,Y;a,b)\equiv \sum_{\substack{(J\cup K)\subset I \\ 1\le |J|\le(m-1) \\ 1\le |K|\le (n-1)}}(-1)^{m+n-|J|-|K|} {|I|-|J| -1\choose m-|J|-1}{|I|-|K|-1\choose n-|K| -1}\Delta^{e,e}_{J,K}(X,Y;a,b).
\end{equation}
\item If $m\le |I|<n$ then
\begin{equation}\label{saccomanni}
\Delta^{e,e}_{I,I}(X,Y;a,b)\equiv \sum_{\substack{J\subset I \\ 1\le |J|\le(m-1)}}(-1)^{m-1-|J|} {|I|-|J| -1\choose m-|J|-1}\Delta^{e,e}_{J,I}(X,Y;a,b).
\end{equation}
\end{enumerate}
\end{crl}
\begin{proof}
By definition $\Delta^{e,e}_{I,I}(X,Y;a,b)=\Delta^{e}_{I}(X;a)\times \Delta^{e}_{I}(Y;b)$. Now suppose that $n\le |I|$. By~\Ref{crl}{delsup} the first factor is rationally equivalent to a linear combination of $\Delta_J^e(X;a)$'s with $J\subset I$ and $1\le |J|\le(m-1)$, and the second factor is rationally equivalent to a linear combination of $\Delta_K^e(Y;b)$'s with $K\subset I$ and $1\le |K|\le(n-1)$: writing out the product one gets~\eqref{moavero}. The proof of~\eqref{saccomanni} is similar.
\end{proof}
\subsection{Linear relations between binomial coefficients.}\label{subsec:coeffbin}
\setcounter{equation}{0}
The following fact will be useful:
\begin{equation}\label{combcomb}
\sum_{t=0}^n(-1)^t p(t){n\choose t}=0\qquad \forall p\in{\mathbb Q}[x]\text{ such that $\deg p<n$.}
\end{equation}
In order to prove~\eqref{combcomb} let $d<n$: then we have
\begin{equation}
\sum_{t=0}^n(-1)^t {t\choose d}{n\choose t}={n\choose d}\sum_{t=d}^n(-1)^t {n-d\choose t-d}=(-1)^d{n\choose d}(1-1)^{n-d}=0.
\end{equation}
Since $\{{x\choose 0},{x\choose 1},\ldots,{x\choose n-1}\}$ is a basis of the vector space of polynomials of degree at most $(n-1)$, Equation~\eqref{combcomb} follows.
\subsection{Proof of the main result.}\label{subsec:combinatorica}
\setcounter{equation}{0}
We will prove~\Ref{prp}{protokunn}. As noticed above it suffices to prove that~\eqref{diagprod} holds. Without loss of generality we may assume that $m\le n$.
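The identity~\eqref{combcomb} will be invoked repeatedly below; purely as a sanity check (not part of the argument), here is the smallest non-trivial instance with a non-linear polynomial, namely $n=3$ and $p(x)={x\choose 2}$:
\begin{equation*}
\sum_{t=0}^3(-1)^t{t\choose 2}{3\choose t}={2\choose 2}{3\choose 2}-{3\choose 2}{3\choose 3}=3-3=0.
\end{equation*}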
\Ref{crl}{emmabon} gives that for each $1\le t\le e$ and $J,K\subset \{1,\ldots,e\}$ with $|J|\le(m-1)$, $|K|\le(n-1)$ there exists $c_{J,K}(t)$ such that
\begin{equation}
\sum_{|I|=t}\Delta^{e,e}_{I,I}(X,Y;a,b)\equiv \sum_{\substack{J,K\subset \{1,\ldots,e\} \\ 1\le |J|\le(m-1) \\ 1\le |K|\le (n-1)}} c_{J,K}(t)\Delta^{e,e}_{J,K}(X,Y;a,b).
\end{equation}
It will suffice to prove that for each $J,K$ as above we have
\begin{equation}\label{jannacci}
\sum_{t=1}^{e}(-1)^{t} c_{J,K}(t)=0.
\end{equation}
Equations~\eqref{moavero} and~\eqref{saccomanni} give that $c_{J,K}(t)=0$ if $t<|J\cup K|$ and that
\begin{equation}\label{cikappati}
c_{J,K}(t)= (-1)^{m+n-|J|-|K|} {t-|J| -1\choose m-|J|-1}{t-|K|-1\choose n-|K|-1}{e-|J\cup K| \choose t-|J\cup K| },\quad \max\{|J\cup K|,n\}\le t\le e.
\end{equation}
We distinguish between the four cases:
\begin{enumerate}
\item $J\not\subset K$.
\item $J\subset K$ and $m\le|K|$.
\item $J\subset K$, $J\not=K$ and $|K|<m$.
\item $J= K$ and $|K|<m$.
\end{enumerate}
Suppose that~(1) holds. Then~\Ref{crl}{emmabon} gives that $c_{J,K}(t)=0$ if $t<n$. Let $p\in{\mathbb Q}[x]$ be given by
\begin{equation}\label{bersani}
p:=(-1)^{m+n-|J|-|K|}{x-|J|-1\choose m-|J|-1}{x-|K|-1\choose n-|K|-1}.
\end{equation}
We must prove that
\begin{equation}\label{serenella}
\sum_{t=\max\{|J\cup K|,n\}}^e (-1)^t p(t){e-|J\cup K| \choose t-|J\cup K| }=0.
\end{equation}
If $n\le |J\cup K|$ then~\eqref{serenella} follows at once from~\eqref{combcomb} (notice that $\deg p<(e-|J\cup K|)$), while if $|J\cup K|<n$ then~\eqref{serenella} follows from~\eqref{combcomb} and the fact that $p(i)=0$ for $|J\cup K|\le i\le(n-1)$. This proves~\eqref{jannacci} if Item~(1) above holds. Now let's assume that Item~(2) above holds. Then $|J\cup K|=|K|<n$: it follows that if $n\le t$ then $c_{J,K}(t)$ is given by~\eqref{cikappati}.
On the other hand~\Ref{crl}{emmabon} gives that if $t<n$ and $t\not=|K|$ then $c_{J,K}(t)=0$, and
\begin{equation}
c_{J,K}(|K|)=(-1)^{m-1-|J|}{|K|-|J|-1\choose m-|J|-1}.
\end{equation}
Thus we must prove that
\begin{equation}\label{marolla}
(-1)^{|K|}(-1)^{m-1-|J|}{|K|-|J|-1\choose m-|J|-1}+\sum_{t=n}^e (-1)^t p(t){e-|J\cup K| \choose t-|J\cup K| }=0
\end{equation}
where $p$ is given by~\eqref{bersani}. Now notice that $0=p(|K|+1)=\ldots=p(n-1)$: thus~\eqref{combcomb} gives that
\begin{equation*}
\scriptstyle \sum_{t=n}^e (-1)^t p(t){e-|J\cup K| \choose t-|J\cup K| }=-(-1)^{|K|}p(|K|){e-|K|\choose 0}=(-1)^{m+n-1-|J|} {|K|-|J|-1\choose m-|J|-1}{-1\choose n-|K|-1}=(-1)^{m-|J|-|K|}{|K|-|J|-1\choose m-|J|-1}.
\end{equation*}
This proves that~\eqref{marolla} holds. If Item~(3) above holds one proves~\eqref{jannacci} arguing as in Item~(1); if Item~(4) holds the argument is similar to that given if Item~(2) holds.
\qed
\subsection{Stability.}\label{subsec:stabilizza}
\setcounter{equation}{0}
We will prove a result that will be useful later on.
\begin{prp}\label{prp:stabile}
Let $X$ be a smooth projective variety and $a\in X$. Suppose that $\Gamma^m(X;a)\equiv 0$. If $s\ge 0$ then $\Gamma^{m+s}(X;a)\equiv 0$.
\end{prp}
\begin{proof}
If $\dim X=0$ the result is trivial. Assume that $\dim X>0$. By definition
\begin{equation}
\Gamma^{m+s}(X;a):=\sum\limits_{\emptyset\not= I\subset \{1,2,\ldots,m+s\}}(-1)^{m+s-|I|}\Delta^{m+s}_I(X;a).
\end{equation}
Replacing $\Delta^{m+s}_I(X;a)$ for $m\le |I|\le(m+s)$ by the right-hand side of~\eqref{delsupdue} we get that
\begin{equation}
\Gamma^{m+s}(X;a)\equiv\sum_{1\le\ell\le(m-1)}c_{\ell} \left(\sum\limits_{| I |=\ell}\Delta^{m+s}_I(X;a)\right)
\end{equation}
where
\begin{equation}
c_{\ell}=\sum_{r=0}^s(-1)^{m-\ell-1+s-r}{m-\ell-1+r\choose m-\ell-1}{m+s-\ell\choose s-r}+(-1)^{m+s-\ell}.
\end{equation}
Thus it suffices to prove that $c_{\ell}=0$ for $1\le \ell\le(m-1)$.
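(As a quick sanity check, not needed for the proof: for $m=2$, $s=1$, $\ell=1$ the displayed formula gives
\begin{equation*}
c_1=(-1)^{1}{0\choose 0}{2\choose 1}+(-1)^{0}{1\choose 0}{2\choose 0}+(-1)^{2}=-2+1+1=0,
\end{equation*}
as it should.)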
Letting $t=s-r$ we get that
\begin{multline}
(-1)^{m-\ell-1}c_{\ell}=\sum_{t=0}^s(-1)^{t}{m-\ell-1+s-t\choose m-\ell-1}{m+s-\ell\choose t}+(-1)^{s-1}=\\
=\sum_{t=0}^{m+s-\ell}(-1)^{t}{m-\ell-1+s-t\choose m-\ell-1}{m+s-\ell\choose t}=0
\end{multline}
where the last equality follows from~\eqref{combcomb}.
\end{proof}
\subsection{Applications}\label{subsec:supab}
\setcounter{equation}{0}
\begin{prp}\label{prp:jaciper}
Suppose that $C$ is a smooth projective curve of genus $g$ and that there exists a degree-$2$ map $f\colon C\to{\mathbb P}^1$ ramified at $p\in C$. Then
\begin{enumerate}
\item $\Gamma^{2g+1}(C^g;(p,\ldots,p))\equiv 0$,
\item $\Gamma^{2g+1}(C^{(g)};gp)\equiv 0$, and
\item $\Gamma^{2g+1}(\Pic^0(C);a)\equiv 0$ for any $a\in \Pic^0(C)$.
\end{enumerate}
\end{prp}
\begin{proof}
By Proposition~4.8 of~\cite{groscho} we have $\Gamma^3(C;p)\equiv 0$. Repeated application of~\Ref{prp}{protokunn} gives the first item. The quotient map $C^g\to C^{(g)}$ is finite and the image of $(p,\ldots,p)$ is $gp$: thus Item~(2) follows from Item~(1) and~\Ref{subsec}{gradofin}. Let $u_g\colon C^{(g)}\to \Pic^0(C)$ be the map $D\mapsto [D-gp]$: since $u_g$ is birational, Item~(2) and~\Ref{subsec}{gradofin} give that $\Gamma^{2g+1}(\Pic^0(C);{\bf 0})\equiv 0$ where ${\bf 0}$ is the origin of $\Pic^0(C)$. Acting by translations we get that $\Gamma^{2g+1}(\Pic^0(C);a)\equiv 0$ for any $a\in \Pic^0(C)$.
\end{proof}
\begin{crl}\label{crl:jaciper}
If $T$ is a complex abelian surface then $\Gamma^5(T;a)\equiv 0$ for any $a\in T$.
\end{crl}
\begin{proof}
There exists a principally polarized abelian surface $J$ and an isogeny $J\to T$. By~\Ref{subsec}{gradofin} it suffices to prove that $\Gamma^5(J;b)\equiv 0$ for any $b\in J$. The surface $J$ is either a product of two elliptic curves $E_1,E_2$ or the Jacobian of a smooth genus-$2$ curve $C$. Suppose that the former holds. Let $b=(p_1,p_2)$ where $p_i\in E_i$ for $i=1,2$.
Then $\Gamma^3(E_i;p_i)\equiv 0$ by Proposition~4.8 of~\cite{groscho} and hence~\Ref{prp}{protokunn} gives that $\Gamma^5(E_1\times E_2;(p_1,p_2))\equiv 0$. If $J$ is the Jacobian of a smooth genus-$2$ curve $C$ the corollary follows at once from~\Ref{prp}{jaciper}.
\end{proof}
\section{${\mathbb P}^r$-fibrations}\label{sec:fibrazioni}
Let $Y$ be a smooth projective variety. Let ${\mathscr F}$ be a locally-free sheaf of rank $(r+1)$ on $Y$ and $X:={\mathbb P}({\mathscr F})$. Thus the structure map $\rho\colon X\to Y$ is a ${\mathbb P}^r$-fibration. Let $Z:=c_1({\mathscr O}_X(1))\in\CH^1(X)$. Suppose that there exists $b\in Y$ such that $\Gamma^m(Y;b)\equiv 0$ and let $a\in\rho^{-1}(b)$. If ${\mathbb P}({\mathscr F})$ is trivial then
\begin{equation}\label{traslo}
\Gamma^{m+r}(X;a)\equiv 0
\end{equation}
by~\Ref{expl}{spapro} and~\Ref{prp}{protokunn}. In general~\eqref{traslo} does not hold. In fact, suppose that $Y$ is a $K3$ surface, so that $\Gamma^3(Y;b)\equiv 0$ where $b$ is a point lying on a rational curve~\cite{beauvoisin}. If $\Gamma^{3+r}(X;a)\equiv 0$ then the top self-intersection of any divisor class on $X$ is a multiple of $[a]$, see~\Ref{subsec}{significato}: considering $Z^{r+2}$ we get that $c_2({\mathscr F})$ is a multiple of $[b]$. We will prove the following results.
\begin{prp}\label{prp:rigcurva}
Keep notation as above and suppose that $\dim Y=1$. If $\Gamma^m(Y;b)\equiv 0$ then $\Gamma^{m+r}(X;a)\equiv 0$.
\end{prp}
\begin{prp}\label{prp:rigsuperficie}
Keep notation as above and suppose that $\dim Y=2$. If $\Gamma^{m-1}(Y;b)\equiv 0$, or $\Gamma^{m}(Y;b)\equiv 0$ and both $c_1({\mathscr F})^2$, $c_2({\mathscr F})$ are multiples of $[b]$, then $\Gamma^{m+r}(X;a)\equiv 0$.
\end{prp}
As an application we will prove the following.
\begin{prp}\label{prp:prodsim}
Suppose that $C$ is a smooth projective curve of genus $g\le 2$ over an algebraically closed field ${\mathbb K}$ and that $p\in C$ is such that $\dim|{\mathscr O}_C(2p)| \ge 1$. Then $\Gamma^{d+g+1}(C^{(d)};dp)\equiv 0$ for any $d\ge 0$.
\end{prp}
\subsection{Comparing diagonals}
\setcounter{equation}{0}
Let $\rho^n\colon X^n\to Y^n$ be the $n$-th cartesian product of $\rho$. Let $\pi_i\colon X^n\to X$ be the $i$-th projection and $Z_i:=\pi_i^{*}Z$. Given a multi-index $E=(e_1,\ldots,e_n)$ with $0\le e_i$ for $1\le i\le n$ we let $Z^E:=Z_1^{e_1}\cdot\ldots\cdot Z^{e_n}_n$. We let
\begin{equation}
\max E:=\max\{e_1,\ldots,e_n\},\qquad |E|:=e_1+\ldots+e_n.
\end{equation}
Let $d:=\dim Y$ and $[\Delta^n(X)]\in\CH_{d+r}(X^n)$ be the class of the (smallest) diagonal. Since $\rho^n$ is a $({\mathbb P}^r)^n$-fibration we may write
\begin{equation}\label{darisolvere}
[\Delta^n(X)]=\sum_{\max E\le r}(\rho^n)^{*}(w_E({\mathscr F}))\cdot Z^E,\qquad w_E({\mathscr F})\in \CH_{|E|+d-r(n-1)}(Y^n).
\end{equation}
In order to describe the classes $w_E$ we let $\delta^n_Y\colon Y\hookrightarrow Y^n$ and $\delta^n_X\colon X\hookrightarrow X^n$ be the diagonal embeddings.
\begin{prp}\label{prp:coefficienti}
Let $r\ge 0$ and $E=(e_1,\ldots,e_n)$ be a multi-index. There exists a universal polynomial $P_E\in{\mathbb Q}[x_1,\ldots,x_q]$, where $q:=(r(n-1)-|E|)$, such that the following holds. Let ${\mathscr F}$ be a locally-free sheaf of rank $(r+1)$ on $Y$: then (notation as above) $w_E({\mathscr F})=\delta^n_{Y,*}(P_E(c_1({\mathscr F}),\ldots,c_q({\mathscr F})))$.
\end{prp}
\begin{proof}
Let $s_i({\mathscr F})$ be the $i$-th Segre class of ${\mathscr F}$ and $E^{\vee}:=(r-e_1,\ldots,r-e_n)$. Then
\begin{equation}\label{kyenge}
\rho^n_{*}([\Delta^n(X)]\cdot Z^{E^{\vee}})=\delta_{Y,*}^n(s_{|E^{\vee}|-r}({\mathscr F})).
\end{equation}
(By convention $s_{i}({\mathscr F})=0$ if $i<0$.)
On the other hand let $J=(j_1,\ldots,j_n)$ be a multi-index: then
\begin{equation}\label{ogbonna}
\rho^n_{*}\left(\left(\sum_{\max H\le r}(\rho^n)^{*}(w_H({\mathscr F}))\cdot Z^H\right)\cdot Z^{J}\right)= \sum_{\max H\le r} w_{H}({\mathscr F})\cdot \pi_1^{*}(s_{h_1+j_1-r})\cdot\ldots\cdot \pi_n^{*}(s_{h_n+j_n-r}).
\end{equation}
Equations~\eqref{kyenge} and~\eqref{ogbonna} give that
\begin{multline}\label{esame}
\delta_{Y,*}^n(s_{|E^{\vee}|-r}({\mathscr F}))=\rho^n_{*}([\Delta^n(X)]\cdot Z^{E^{\vee}})= \sum_{\max H\le r} w_{H}({\mathscr F})\cdot \pi_1^{*}(s_{h_1-e_1})\cdot\ldots\cdot \pi_n^{*}(s_{h_n-e_n})= \\
=w_{E}({\mathscr F})+\sum_{\substack{|H|>|E| \\ r\ge \max H}} w_{H}({\mathscr F})\cdot \pi_1^{*}(s_{h_1-e_1})\cdot\ldots\cdot \pi_n^{*}(s_{h_n-e_n}).
\end{multline}
Starting from the highest possible value of $|E|$, i.e.~$rn$, and going through descending values of $|E|$ one gets the proposition.
\end{proof}
\begin{rmk}\label{rmk:esplicito}
The proof of~\Ref{prp}{coefficienti} gives an iterative algorithm for the computation of $w_E({\mathscr F})$. A straightforward computation gives the formulae
\begin{equation*}\label{esplicito}
w_E({\mathscr F})=
\begin{cases}
0 & \text{if $|E|>r(n-1)$}, \\
[\Delta^n(Y)] & \text{if $|E|=r(n-1)$}, \\
(\lambda_E(1)-1)\delta^n_{Y,*}( c_1({\mathscr F}))& \text{if $|E|=r(n-1)-1$}, \\
\frac{1}{2}(\lambda_E(1) -1)(\lambda_E(1) -2)\delta^n_{Y,*}( c_1({\mathscr F})^2) + (\lambda_E(2) -1 ) \delta^n_{Y,*}(c_2({\mathscr F}))& \text{if $|E|=r(n-1)-2$},
\end{cases}
\end{equation*}
where
\begin{equation}
\lambda_E(p):=|\{ 1\le i\le n \mid e_i+p\le r \}|.
\end{equation}
\end{rmk}
\subsection{Comparing modified diagonals}
\setcounter{equation}{0}
We will compare $\Gamma^{m+r}(X;a)$ and $\Gamma^{m+r}(Y;b)$. In the present subsection $\emptyset\not=I\subset\{1,\ldots,m+r\}$ and $I^c:=(\{1,\ldots,m+r\}\setminus I)$; we let $\pi_I\colon X^{m+r}\to X^{|I|}$ be the projection determined by $I$.
We also let $H=(h_1,\ldots,h_{m+r})$ be a multi-index. If $\max H\le r$ we let $\Top H:=\{1\le i\le m+r \mid h_i=r\}$. Applying~\Ref{prp}{coefficienti} and~\Ref{rmk}{esplicito} we get that
\begin{multline}\label{barvitali}
\Delta^{m+r}_I(X;a)= (\rho^{m+r})^{*}(\Delta^{m+r}_I(Y;{b}))\cdot \sum_{\substack{\max H\le r \\ |H|=r(m+r-1) \\ I^c\subset\Top H }} Z^H+ \\
+(\rho^{m+r})^{*}\left(\pi_I^{*}\delta^{|I|}_{Y,*}(c_1({\mathscr F}))\times\pi_{I^c}^{*}(\underbrace{{b}\times\ldots\times{b}}_{|I^c|})\right) \cdot\sum_{\substack{\max H\le r \\ |H|=r(m+r-1)-1 \\ I^c\subset\Top H }} (\lambda_H(1)-1) Z^H+ \\
+(\rho^{m+r})^{*}\left(\pi_I^{*}\delta^{|I|}_{Y,*}(c_1({\mathscr F})^2)\times\pi_{I^c}^{*}(\underbrace{{b}\times\ldots\times{b}}_{|I^c|})\right) \cdot\sum_{\substack{\max H\le r \\ |H|=r(m+r-1)-2 \\ I^c\subset\Top H}} \frac{1}{2}(\lambda_H(1)-1)(\lambda_H(1)-2) Z^H+\\
+(\rho^{m+r})^{*}\left(\pi_I^{*}\delta^{|I|}_{Y,*}(c_2({\mathscr F}))\times\pi_{I^c}^{*}(\underbrace{{b}\times\ldots\times{b}}_{|I^c|})\right) \cdot\sum_{\substack{\max H\le r \\ |H|=r(m+r-1)-2 \\ I^c\subset\Top H}} (\lambda_H(2)-1) Z^H+{\mathscr R}
\end{multline}
where
\begin{equation}\label{brasile}
{\mathscr R}=\sum_{\substack{\max H\le r \\ |H|<r(m+r-1)-2}}Q_H Z^H
\end{equation}
and each $Q_H$ appearing in~\eqref{brasile} vanishes if the Chern classes of ${\mathscr F}$ of degree higher than $2$ are zero.
It follows that
\begin{multline}\label{barsport}
\Gamma^{m+r}(X;a)= \sum_{\substack{\max H\le r \\ |H|=r(m+r-1)}}(\rho^{m+r})^{*}\left(\sum_{I^c\subset\Top H}(-1)^{m+r-|I|} \Delta^{m+r}_I(Y;{b})\right)\cdot Z^H+ \\
+\sum_{\substack{\max H\le r \\ |H|=r(m+r-1)-1}}(\rho^{m+r})^{*}\left(\sum_{I^c\subset\Top H}(-1)^{m+r-|I|} \left(\pi_I^{*}\delta^{|I|}_{Y,*}(c_1({\mathscr F}))\times\pi_{I^c}^{*}(\underbrace{{b}\times\ldots\times{b}}_{|I^c|})\right)\right) \cdot \epsilon_H Z^H+ \\
+\sum_{\substack{\max H\le r \\ |H|=r(m+r-1)-2}}(\rho^{m+r})^{*}\left(\sum_{I^c\subset\Top H}(-1)^{m+r-|I|} \left(\pi_I^{*}\delta^{|I|}_{Y,*}(c_1({\mathscr F})^2)\times\pi_{I^c}^{*}(\underbrace{{b}\times\ldots\times{b}}_{|I^c|})\right)\right) \cdot \mu_H Z^H+\\
+\sum_{\substack{\max H\le r \\ |H|=r(m+r-1)-2}}(\rho^{m+r})^{*}\left(\sum_{I^c\subset\Top H}(-1)^{m+r-|I|} \left(\pi_I^{*}\delta^{|I|}_{Y,*}(c_2({\mathscr F}))\times\pi_{I^c}^{*}(\underbrace{{b}\times\ldots\times{b}}_{|I^c|})\right)\right) \cdot \nu_H Z^H+{\mathscr T}
\end{multline}
where $\epsilon_H:=(\lambda_H(1)-1)$, $\mu_H:=(\lambda_H(1)-1)(\lambda_H(1)-2)/2$, $\nu_H:=(\lambda_H(2)-1)$, and ${\mathscr T}$ has an expansion similar to that of ${\mathscr R}$, see~\eqref{brasile} and the comment following it.
\begin{rmk}\label{rmk:svanisce}
Suppose that $\Gamma^m(Y;b)=0$. Then the first addend on the right-hand side of~\eqref{barsport} vanishes. In fact it is clearly independent of the rank-$(r+1)$ locally-free sheaf ${\mathscr F}$ and it is $0$ for trivial ${\mathscr F}$ by~\Ref{prp}{protokunn}: it follows that it vanishes.
\end{rmk}
\subsection{${\mathbb P}^r$-bundles over curves}\label{subsec:rigcurva}
\setcounter{equation}{0}
We will prove~\Ref{prp}{rigcurva}. We start with an auxiliary result.
\begin{clm}\label{clm:sommalt}
Let $Y$ be a smooth projective variety and $b\in Y$. Suppose that $\Gamma^m(Y;b)=0$.
Let $\mathfrak{z}\in\CH(Y)$: then
\begin{equation}\label{sommalt}
\sum_{I\subset\{1,\ldots,(m-1)\}}(-1)^{|I|}\pi_I^{*}\delta_{Y,*}^{|I|}(\mathfrak{z})\times\pi_{I^c}^{*}(\underbrace{b,\ldots,b}_{|I^c|})=0.
\end{equation}
\end{clm}
\begin{proof}
Let $\pi_{\{1,\ldots,(m-1)\}}\colon Y^m\to Y^{m-1}$ be the projection to the first $(m-1)$ coordinates. Then
\begin{equation}\label{sgargiante}
\pi_{\{1,\ldots,(m-1)\},*}(\Gamma^m(Y;b)\cdot\pi_m^{*}\mathfrak{z})=0.
\end{equation}
The claim follows because the left-hand side of~\eqref{sgargiante} equals the left-hand side of~\eqref{sommalt} multiplied by $(-1)^m$.
\end{proof}
By~\eqref{barsport} and~\Ref{rmk}{svanisce} we must prove that if $H=(h_1,\ldots,h_{m+r})$ is a multi-index such that $\max H\le r$ and $|H|=r(m+r-1)-1$ then
\begin{equation}\label{mammamia}
\sum_{I^c\subset\Top H}(-1)^{m+r-|I|} \pi_I^{*}\delta^{|I|}_{Y,*}(c_1({\mathscr F}))\times\pi_{I^c}^{*}(\underbrace{b,\ldots,b}_{|I^c|})=0.
\end{equation}
A straightforward computation shows that $|\Top H|\ge(m-1)$: thus~\eqref{mammamia} holds by~\Ref{clm}{sommalt}.
\qed
\subsection{${\mathbb P}^r$-bundles over surfaces}
\setcounter{equation}{0}
We will prove~\Ref{prp}{rigsuperficie}. Notice that $\Gamma^m(Y;b)=0$: in fact it holds either by hypothesis or by~\Ref{prp}{stabile} if $\Gamma^{m-1}(Y;b)=0$. Moreover~\eqref{mammamia} holds in this case as well; the argument is that given in~\Ref{subsec}{rigcurva}. Thus~\eqref{barsport} and~\Ref{rmk}{svanisce} give that we must prove the following: if $H=(h_1,\ldots,h_{m+r})$ is a multi-index such that $\max H\le r$ and $|H|=r(m+r-1)-2$ then
\begin{equation}\label{papas}
\sum_{I^c\subset\Top H}(-1)^{m+r-|I|} \left(\pi_I^{*}\delta^{|I|}_{Y,*}(\mu_H c_1({\mathscr F})^2+\nu_H c_2({\mathscr F}))\times\pi_{I^c}^{*}(\underbrace{b,\ldots,b}_{|I^c|})\right)=0.
\end{equation}
A straightforward computation shows that $|\Top H|\ge(m-2)$ and that equality holds if and only if $(r-1)\le h_i\le r$ for all $1\le i\le (m+r)$ (and thus the set of indices $i$ such that $h_i=(r-1)$ has cardinality $(r+2)$). If $\Gamma^{m-1}(Y;b)=0$ then~\eqref{papas} holds by~\Ref{clm}{sommalt}. If both $c_1({\mathscr F})^2$, $c_2({\mathscr F})$ are multiples of $[b]$ then each term in the summation in the left-hand side of~\eqref{papas} is a multiple of the class of $(b,\ldots,b)$ and the coefficients sum up to $0$.
\qed
\subsection{Symmetric products of curves}
\setcounter{equation}{0}
If the genus of $C$ is $0$ then $C^{(d)}\cong{\mathbb P}^d$ and hence the result holds trivially, see~\Ref{expl}{spapro}. Suppose that the genus of $C$ is $1$. If $d=1$ then $\Gamma^3(C;p)\equiv 0$ by~\cite{groscho}. Let $d>1$ and let $u_d\colon C^{(d)}\to \Pic^0(C)$ be the map sending $D$ to $[D-d p]$. Since $u_d$ is a ${\mathbb P}^{d-1}$-fibration we get that $\Gamma^{d+2}(C^{(d)};dp)\equiv 0$ by~\Ref{prp}{rigcurva} and the equivalence $\Gamma^3(C;p)\equiv 0$. Lastly suppose that the genus of $C$ is $2$. If $d=1$ then $\Gamma^3(C;p)\equiv 0$ by~\cite{groscho} and if $d=2$ then $\Gamma^5(C^{(2)};2p)\equiv 0$ by~\Ref{prp}{jaciper}. Now assume that $d>2$ and let $u_d\colon C^{(d)}\to \Pic^0(C)$ be the map sending $D$ to $[D-d p]$. Then $u_d$ is a ${\mathbb P}^{d-2}$-fibration and we may write $C^{(d)}\cong{\mathbb P}({\mathscr E}_d)$ where ${\mathscr E}_d$ is a locally-free sheaf on $\Pic^0(C)$ such that
\begin{equation}
c_1({\mathscr E}_d)=-[\{[x-p] \mid x\in C\}],\qquad c_2({\mathscr E}_d)=[{\bf 0}],
\end{equation}
see Example~4.3.3 of~\cite{fulton}. By~\Ref{prp}{jaciper} we have $\Gamma^5(J(C);{\bf 0})\equiv 0$; since $c_1({\mathscr E}_d)^2=2[{\bf 0}]$ we get that $\Gamma^{d+3}(C^{(d)};dp)\equiv 0$ by~\Ref{prp}{rigsuperficie}.
\section{Blow-ups}\label{sec:scoppio}
We will prove~\Ref{prp}{blowdel}. First, a comment regarding the hypotheses of~\Ref{prp}{blowdel}.
Let $Y$ be a complex $K3$ surface and $X\to Y$ be the blow-up of $y\in Y$. We know (Beauville and Voisin) that there exists $c\in Y$ such that $\Gamma^3(Y;c)\equiv 0$, but if $y$ is not rationally equivalent to $c$ then there exists no $a\in X$ such that $\Gamma^3(X;a)\equiv 0$; this follows from~\Ref{rmk}{unicopunto}. If $e=0,1$ then~\Ref{prp}{blowdel} is trivial, hence we will assume that $e\ge 2$. We let $f\colon X\to Y$ be the blow-up of $V$ and $E\subset X$ the exceptional divisor of $f$. Thus $a\in E$. Let $g\colon E\to V$ be defined by the restriction of $f$ to $E$, and let $(E/V)^{t}$ be the $t$-th fibered product of $g\colon E\to V$. The following commutative diagram will play a r\^ole in the proof of~\Ref{prp}{blowdel}:
\begin{equation}
\xymatrix{
(E/V)^{t} \ar_{\gamma_{t}}[r] \ar@/^1pc/^{\alpha_{t}}[rr] \ar_{}[d] & E^{t} \ar_{\beta_{t}}[r] \ar_{g^{t}}[d] & X^{t} \ar_{f^{t}}[d] \\
\Delta^{t}(V) \ar^{}[r] & V^{t} \ar^{}[r] & Y^{t}
}
\end{equation}
(The maps which haven't been defined are the natural ones.) Whenever there is no danger of confusion we denote $\alpha_{t}((E/V)^{t})$ by $(E/V)^{t}$.
\subsection{Pull-back of the modified diagonal.}
\setcounter{equation}{0}
On $E$ we have an exact sequence of locally-free sheaves:
\begin{equation}
0\longrightarrow{\mathscr O}_E(-1)\longrightarrow g^{*}N_{V/Y}\longrightarrow Q\longrightarrow 0.
\end{equation}
For $i=1,\ldots,t$ let $Q_i(t)$ be the pull-back of $Q$ to $E^{t}$ via the $i$-th projection $E^{t}\to E$: thus $Q_i(t)$ is locally-free of rank $(e-1)$.
\begin{prp}\label{prp:eccesso}
Keep notation as above and let $d({t}):=({t}-1)(e-1)-1$.
We have the following equalities in $\CH_{\dim X}(X^{t})$:
\begin{equation}\label{eccesso}
(f^{t})^{*}\Delta^{t}(Y)=
\begin{cases}
\Delta^{t}(X) & \text{if ${t}=1$,} \\
\Delta^{t}(X)+\beta_{{t},*}((g^{t})^{*}(\Delta^{t}(V))\cdot c_{d({t})}(\oplus_{j=1}^{t} Q_j({t}))) & \text{if ${t}> 1$.}
\end{cases}
\end{equation}
\end{prp}
\begin{proof}
The equality of schemes $f^{-1}\Delta^1(Y)=\Delta^1(X)$ gives~\eqref{eccesso} for ${t}=1$. Now let's assume that ${t}>1$. The closed set $(f^{t})^{-1}\Delta^{t}(Y)$ has the following decomposition into irreducible components:
\begin{equation}
(f^{t})^{-1}\Delta^{t}(Y)=\Delta^{t}(X)\cup (E/V)^{t}.
\end{equation}
The dimension of $(E/V)^{t}$ is equal to $(\dim X+({t}-1)(e-1)-1)$ and hence is larger than the expected dimension unless $2={t}=e$. It follows that if ${t}=2$ and $e=2$ then $(f^2)^{*}\Delta^2(Y)=a\Delta^2(X)+b (E/V)^2$: one checks easily that $1=a=b$ and hence~\eqref{eccesso} holds if ${t}=2$ and $e=2$. Now suppose that ${t}>1$ and $({t},e)\not=(2,2)$. Let $U:=(X^{t}\setminus(\Delta^{t}(X)\cap (E/V)^{t}))$ and ${\mathscr Z}:= (E/V)^{t} \cap U=(E/V)^{t} \setminus \Delta^{t}(X)$. Notice that $(E/V)^{t}$ is smooth and hence the open subset ${\mathscr Z}$ is smooth as well. Let $\iota\colon {\mathscr Z}\hookrightarrow U$ be the inclusion. The restriction of $(f^{t})^{*}\Delta^{t}(Y)$ to $U$ is equal to
\begin{equation}
[\Delta^{t}(X)\cap U]+\iota_{*}(c_{d({t})}({\mathscr N}))
\end{equation}
where ${\mathscr N}$ is the obstruction bundle (see~\cite{fulton}, Cor.~8.1.2 and Prop.~6.1(a)). One easily identifies ${\mathscr N}$ with the restriction of $\oplus_{j=1}^{t} Q_j({t})$ to ${\mathscr Z}$. It follows that the restrictions to $U$ of the left and right hand sides of~\eqref{eccesso} are equal. The proposition follows because the dimension of $(X^{t}\setminus U)=\Delta^{t}(X)\cap (E/V)^{t}$ is equal to $(\dim X-1)$, which is strictly smaller than $\dim X$.
\end{proof}
\begin{crl}\label{crl:eccesso}
Keep notation and assumptions as above. Let $I\subset\{1,\ldots,(n+1)\}$ be non-empty and $I^c:=(\{1,\ldots,(n+1)\}\setminus I)$. Let $Q_j$ denote $Q_j(n+1)$ and let ${t}:=| I |$. Then
\begin{equation}
\scriptstyle (f^{n+1})^{*}\Delta_I(Y;b)=
\begin{cases}
\scriptstyle \Delta_I(X;a) & \scriptstyle \text{if $| I |=1$,} \\
\scriptstyle \Delta_I(X;a)+\beta_{n+1,*}((g^{n+1})^{*}\Delta_I(V;b)\cdot c_{d({t})}\left(\bigoplus\limits_{j\in I} Q_j\right)\cdot \prod_{j\in I^c} c_{e-1}(Q_j)) & \scriptstyle\text{if $| I |> 1$.}
\end{cases}
\end{equation}
\end{crl}
\begin{proof}
For $1\le i\le(n+1)$ let $\rho_{i}\colon X^{n+1}\to X$ be the $i$-th projection. Let $J=\{j_1,\ldots,j_{t}\}$ where $1\le j_1<\ldots < j_{t}\le(n+1)$, in particular ${t}=|J|$. We let $\pi_{J}\colon X^{n+1}\to X^{t}$ be the map such that the composition of the $i$-th projection $X^{t}\to X$ with $\pi_J$ is equal to $\rho_{j_i}$. The two maps $\pi_I\colon X^{n+1}\to X^{{t}}$ and $\pi_{I^c}\colon X^{n+1}\to X^{n+1-{t}}$ define an isomorphism $\Lambda_I\colon X^{n+1}\overset{\sim}{\longrightarrow} X^{t}\times X^{n+1-{t}}$. We have
\begin{equation}
(f^{n+1})^{*}\Delta_I(Y;b)=\Lambda_I^{*}((f^{t})^{*}\Delta^{t}(Y)\times (f^{n+1-{t}})^{*}(\{\underbrace{(b,\ldots,b)}_{n+1-{t}}\})).
\end{equation}
(Here $\times$ denotes the exterior product of cycles, see 1.10 of~\cite{fulton}.) An obstruction bundle computation gives that
\begin{equation}
(f^{n+1-{t}})^{*}(\{\underbrace{(b,\ldots,b)}_{n+1-{t}}\})=\beta_{n+1-{t},*}\left(\prod_{1\le j\le (n+1-{t})} c_{e-1}(Q_j(n+1-{t}))\right).
\end{equation}
The corollary follows from the above equations and~\Ref{prp}{eccesso}.
\end{proof}
Let $I\subset\{1,\ldots,(n+1)\}$ be non-empty and let $t:=| I |$.
We let $\Omega_I\in \CH_{\dim X}(E^{n+1})$ be given by
\begin{equation}\label{eccomega}
\scriptstyle \Omega_I:=
\begin{cases}
\scriptstyle 0 & \scriptstyle \text{if $| I |=1$,} \\
\scriptstyle (g^{n+1})^{*}\Delta_I(V;b)\cdot c_{d({t})}\left(\bigoplus\limits_{j\in I} Q_j\right)\cdot \prod_{j\in I^c} c_{e-1}(Q_j) & \scriptstyle\text{if $| I |> 1$.}
\end{cases}
\end{equation}
By~\Ref{crl}{eccesso} we have $(f^{n+1})^{*}\Delta_I(Y;b)= \Delta_I(X;a)+\beta_{n+1,*}(\Omega_I)$ and hence
\begin{equation}\label{solgam}
(f^{n+1})^{*}(\Gamma^{n+1}(Y;b))=\Gamma^{n+1}(X;a)+\beta_{n+1,*}\left(\sum_{1\le| I |\le(n+1)}(-1)^{n+1-| I |}\Omega_I\right).
\end{equation}
\subsection{The proof.}
\setcounter{equation}{0}
By~\eqref{solgam} it suffices to prove that the following equality holds in $\CH_{\dim X}(E^{n+1})_{{\mathbb Q}}$:
\begin{equation}\label{altome}
\sum_{1\le| I |\le(n+1)}(-1)^{| I |}\Omega_I=0.
\end{equation}
Let $I\subset\{1,\ldots,(n+1)\}$ be of cardinality strictly greater than $(n-e)$: \Ref{crl}{delsup} allows us to express the class of $\Delta_I(V;b)$ as a linear combination of the $\Delta_J(V;b)$'s with $J\subset I$ of cardinality at most $(n-e)$. Moreover Whitney's formula allows us to write the Chern class appearing in the definition of $\Omega_I$ as a sum of products of Chern classes of the $Q_j$'s. It follows that for each $I\subset\{1,\ldots,(n+1)\}$ we may express the class of $\Omega_I$ as a linear combination of the classes
\begin{equation}
(g^{n+1})^{*}\Delta_J(V;b)\cdot \prod_{s=1}^{n+1}c_{k_s}(Q_s),\quad 1\le |J|\le(n-e),\quad k_1+\ldots+k_{n+1}=d(n+1)=n(e-1)-1.
\end{equation}
\begin{dfn}
${\mathscr P}_n(e)$ is the set of $(n+1)$-tuples $(k_1,\ldots,k_{n+1})$ of natural numbers with $0\le k_s\le (e-1)$ whose sum equals $d(n+1)$.
\end{dfn}
Summing over all $I\subset\{1,\ldots,(n+1)\}$ of a given cardinality $t$ we get the following.
\begin{clm}
Let $1\le t\le(n+1)$.
There exists an integer $c_{J,K}(t)$ for each couple $(J,K)$ with $\emptyset\not=J\subset\{1,\ldots,(n+1)\}$ of cardinality at most $(n-e)$ and $K\in{\mathscr P}_n(e)$ such that
\begin{equation}
\sum_{| I |=t}\Omega_I=\sum_{\substack{1\le | J |\le (n-e) \\ K\in{\mathscr P}_n(e)}}c_{J,K}(t)(g^{n+1})^{*}\Delta_J(V;b)\cdot \prod_{s=1}^{n+1}c_{k_s}(Q_s).
\end{equation}
\end{clm}
It will be convenient to set $c_{J,K}(0)=0$. We will prove that
\begin{equation}\label{incredibile}
\sum_{t=0}^{n+1}(-1)^{t} c_{J,K}(t)=0.
\end{equation}
That will prove Equation~\eqref{altome} and hence also~\Ref{prp}{blowdel}. Applying~\Ref{crl}{delsup} to $(V,b)$ we get the following result.
\begin{clm}\label{clm:ritrito}
Let $I\subset\{1,\ldots,n+1\}$ be of cardinality $t\ge (n+1-e)$. Then
\begin{equation}
\Delta^{n+1}_I(V;b)\equiv \sum_{\substack{J\subset I \\ 1\le |J|\le(n-e)}}(-1)^{n-e-|J|}{t-|J| -1\choose t-n-1+e}\Delta^{n+1}_J(V;b).
\end{equation}
\end{clm}
Given $K\in{\mathscr P}_n(e)$ we let
\begin{equation}
T(K):=\{1\le i\le(n+1) \mid k_i=(e-1)\}.
\end{equation}
A simple computation gives that
\begin{equation}\label{tikapineq}
(n+1-e)\le | T(K)|.
\end{equation}
\begin{prp}
Let $\emptyset\not=J\subset\{1,\ldots,(n+1)\}$ be of cardinality at most $(n-e)$, let $K\in{\mathscr P}_n(e)$ and $0\le t\le(n+1)$. Then
\begin{equation}\label{cigeikap}
c_{J,K}(t)=(-1)^{n-|J|-e}{t-|J| -1\choose n-|J|-e}{|T(K)\cap J^c|\choose n+1-t}.
\end{equation}
\end{prp}
\begin{proof}
Suppose first that $0\le t\le(n-e)$. Then $c_{J,K}(t)=0$ unless $|J|=t$ and $J^c\subset T(K)$: if the latter holds then $c_{J,K}(t)=1$. Assume that the right-hand side of~\eqref{cigeikap} is non-zero: then the first binomial coefficient is non-zero and hence $t\le|J|$. Of course also the second binomial coefficient is non-zero: it follows that
\begin{equation}
(n+1-t)\le | T(K)\cap J^c| \le |J^c|=n+1-|J|.
\end{equation}
Since $t\le|J|$ it follows that $|J|=t$ and hence $| T(K)\cap J^c| = |J^c|$ i.e.~$J^c\subset T(K)$: a straightforward computation gives that under these assumptions the right-hand side of~\eqref{cigeikap} equals $1$. It remains to prove that~\eqref{cigeikap} holds for $(n+1-e)\le t\le(n+1)$. Looking at~\eqref{eccomega} and~\Ref{clm}{ritrito} we get that
\begin{equation}\label{lucca}
c_{J,K}(t)=(-1)^{n-e-|J|}{t-|J| -1\choose t-n-1+e}|\{I\subset\{1,\ldots,(n+1)\} \mid I^c\subset(T(K)\cap J^c), \quad |I|=t\}|.
\end{equation}
Since the right-hand side of~\eqref{lucca} is equal to the right-hand side of~\eqref{cigeikap} this finishes the proof.
\end{proof}
Let
\begin{equation}
p(x):={n-|J| -x\choose n-|J|-e}.
\end{equation}
Then $\deg p<|T(K)\cap J^c|$ because $\deg p=(n-|J|-e)$ and because~\eqref{tikapineq} gives that
\begin{equation}
|T(K)\cap J^c|\ge (n+1-e)+(n+1-|J|)-(n+1)=n-|J|-e+1.
\end{equation}
Thus~\eqref{combcomb} and~\eqref{cigeikap} give that
\begin{equation}
\scriptstyle 0=\sum_{s=0}^{n+1}(-1)^s p(s) {|T(K)\cap J^c|\choose s} =(-1)^{n+1}\sum_{t=0}^{n+1}(-1)^t {t-|J| -1\choose n-|J|-e}{|T(K)\cap J^c|\choose n+1-t}= (-1)^{1-e-|J|}\sum_{t=0}^{n+1}(-1)^t c_{J,K}(t).
\end{equation}
This finishes the proof of~\Ref{prp}{blowdel}.
\qed
\subsection{Application to Hilbert schemes of $K3$'s}
Let $S$ be a complex $K3$ surface. By Beauville and Voisin~\cite{beauvoisin} there exists $c\in S$ such that $\Gamma^3(S;c)\equiv 0$. We let $S^{[n]}$ be the Hilbert scheme parametrizing length-$n$ subschemes of $S$; Beauville~\cite{beau} proved that $S^{[n]}$ is a hyperk\"ahler variety.
\begin{prp}\label{prp:diaghilbk3}
Keep notation as above and assume that $n=2,3$. Let $a_n\in S^{[n]}$ represent a scheme supported at $c$. Then $\Gamma^{2n+1}(S^{[n]};a_n)\equiv 0$.
\end{prp}
\begin{proof}
First assume that $n=2$.
Let $\pi_1\colon X\to S\times S$ be the blow-up of the diagonal $\Delta$ and $\rho_2\colon X\to S^{(2)}$ the composition of $\pi_1$ and the quotient map $S\times S\to S^{(2)}$. There is a degree-$2$ map $\phi_2\colon X\to S^{[2]}$ fitting into a commutative diagram
\begin{equation}\label{equivoci}
\xymatrix{
X \ar^{\phi_2}[r] \ar^{\rho_2}[dr] & S^{[2]} \ar^{\gamma_2}[d] \\
 & S^{(2)}
}
\end{equation}
where $\gamma_2([Z])=\sum_{p\in S}\ell({\mathscr O}_{Z,p})\,p$ is the Hilbert-Chow morphism. Let $x\in X$ be such that $\phi_2(x)=a_2$; by~\Ref{subsec}{gradofin} it suffices to prove that $\Gamma^5(X;x)\equiv 0$. By commutativity of~\eqref{equivoci} we have $\pi_1(x)=(c,c)$. Now $\Gamma^5(S\times S;(c,c))\equiv 0$ by~\Ref{prp}{protokunn}, and since $\cod(\Delta,S\times S)=2$ it follows from~\Ref{prp}{blowdel} that $\Gamma^5(X;x)\equiv 0$. Next assume that $n=3$. Let $\pi_2\colon Y\to S^{[2]}\times S$ be the blow-up with center the tautological subscheme ${\mathscr Z}_2\subset S^{[2]}\times S$ and $\rho_3\colon Y\to S^{(3)}$ the composition of $\pi_2$ and the natural map $S^{[2]}\times S\to S^{(3)}$. There is a degree-$3$ map $\phi_3\colon Y\to S^{[3]}$ fitting into a commutative diagram
\begin{equation}\label{commedia}
\xymatrix{
Y \ar^{\phi_3}[r] \ar^{\rho_3}[dr] & S^{[3]} \ar^{\gamma_3}[d] \\
 & S^{(3)}
}
\end{equation}
where $\gamma_3$ is the Hilbert-Chow morphism. (See for example Proposition~2.2 of~\cite{ellstrom}.) On the other hand let $p_1\colon S\times S\to S$ be projection to the first factor; the map
\begin{equation}\label{nicolapiazza}
(\phi_2,p_1\circ \pi_1)\colon X\to S^{[2]}\times S
\end{equation}
is an isomorphism onto ${\mathscr Z}_2$. Let $y\in Y$ be such that $\phi_3(y)=a_3$; by~\Ref{subsec}{gradofin} it suffices to prove that $\Gamma^7(Y;y)\equiv 0$. Notice that $\pi_2(y)=(a_2,c)$ where $a_2\in S^{[2]}$ is supported at $c$.
By the case $n=2$ (that we just proved) and~\Ref{prp}{protokunn} we have $\Gamma^7(S^{[2]}\times S;(a_2,c))\equiv 0$. Let $x\in X$ be such that $\phi_2(x)=a_2$. In the proof for the case $n=2$ we showed that $\Gamma^5(X;x)\equiv 0$; since~\eqref{nicolapiazza} is an isomorphism it follows that $\Gamma^5({\mathscr Z}_2;(a_2,c))\equiv 0$. Since $\Gamma^7(S^{[2]}\times S;(a_2,c))\equiv 0$ and ${\mathscr Z}_2$ is smooth of codimension $2$, we get $\Gamma^7(Y;y)\equiv 0$ by~\Ref{prp}{blowdel}. \end{proof} Let ${\mathscr Z}_n\subset S^{[n]}\times S$ be the tautological subscheme. The blow-up of $S^{[n]}\times S$ with center ${\mathscr Z}_n$ has a natural regular map of finite (non-zero) degree to $S^{[n+1]}$ and in turn ${\mathscr Z}_n$ may be described starting from the tautological subscheme ${\mathscr Z}_{n-1}\subset S^{[n-1]}\times S$. Thus one may hope to prove by induction on $n$ that $\Gamma^{2n+1}(S^{[n]};a)\equiv 0$ for any $n$: the problem is that starting with ${\mathscr Z}_{3}$ the tautological subscheme is singular. \section{Double covers}\label{sec:rivdop} In the present section we will assume that $X$ is a projective variety over a field ${\mathbb K}$ and that $\iota\in\Aut(X)$ is a (non-trivial) involution. We let $Y:=X/\langle\iota\rangle$ and $f\colon X\to Y$ be the quotient map. We assume that there exists $a\in X({\mathbb K})$ which is fixed by $\iota$ and we let $b:=f(a)$. \begin{cnj}\label{cnj:raddoppia} Keep hypotheses and notation as above and suppose that $\Gamma^{m}(Y;b)\equiv 0$. Then $\Gamma^{2m-1}(X;a)\equiv 0$. \end{cnj} The above conjecture was proved for $m=2$ by Gross and Schoen, see Prop.~4.8 of~\cite{groscho}. We will propose a proof of~\Ref{cnj}{raddoppia} and we will show that the proof works for $m=2,3$. Of course the proof for $m=2$ is that of Gross and Schoen (with the symmetric cube of the curve replaced by the cartesian cube).
\subsection{A modest proposal}\label{subsec:insintesi} \setcounter{equation}{0} There is a well-defined pull-back homomorphism \begin{equation} (f^q)^{*}\colon Z_{*}(Y^q)_{{\mathbb Q}}\to Z_{*}(X^q)_{{\mathbb Q}} \end{equation} compatible with rational equivalence (see Ex.~1.7.6 of~\cite{fulton}): thus we have an induced homomorphism $(f^q)^{*}\colon\CH_{*}(Y^q)_{{\mathbb Q}}\to \CH_{*}(X^q)_{{\mathbb Q}}$. Let $n:=\dim X$ and $\Xi_m\in Z_{n}(X^m)_{{\mathbb Q}}$ the cycle defined by \begin{equation}\label{dramper} \Xi_m:=(f^m)^{*}\Gamma^m(Y;b). \end{equation} We will show that $\Xi_m$ is a linear combination of cycles of the type \begin{equation}\label{tipi} \{(x,\ldots,\iota(x),\ldots x,\ldots,x,a,\ldots\iota(x),\ldots,a,\ldots) \mid x\in X\}. \end{equation} Notice that the $\Delta_I(X;a)$'s are of this type. Consider the inclusions of $X^m$ in $X^{2m-1}$ which map $(x_1,\ldots,x_{m})$ to $(x_1,\ldots,x_{m},\nu(1),\ldots,\nu(m-1))$ where $\nu\colon\{1,\ldots,(m-1)\}\to\{a,x_1,\ldots,x_{m},\iota(x_1),\ldots,\iota(x_{m})\}$ is an arbitrary list. Let $\Phi_{\nu}(\Xi_m)$ be the symmetrized image of $\Xi_m$ in $Z_{n}(X^{2m-1})$ for the inclusion determined by $\nu$: it is a linear combination of cycles~\eqref{tipi}. By hypothesis $\Xi_m\equiv 0$ and hence any linear combination of the cycles $\Phi_{\nu}(\Xi_m)$ is rationally equivalent to $0$. One gets the proof if a suitable linear combination of the $\Phi_{\nu}(\Xi_m)$'s is a linear combination of the $\Delta_I(X;a)$'s with the appropriate coefficients (so that it is equal to a non-zero multiple of $\Gamma^{2m-1}(X;a)$). We will carry out the proof for $m=2,3$. \subsection{Preliminaries}\label{subsec:giocomega} \setcounter{equation}{0} Since the involution of $X$ is non-trivial the dimension of $X$ is strictly positive i.e.~$n>0$. Let $\mu\colon\{1,\ldots,q\}\to\{a,x,\iota(x)\}$.
If $\mu$ is \emph{not} the sequence $\mu(1)=\ldots=\mu(q)=a$ we let \begin{equation} \Omega(\mu(1),\ldots,\mu(q)):=\{(x_1,\ldots,x_{q})\in X^{q} \mid x_i=\mu(i),\quad x\in X\}, \end{equation} and we let $\Omega(a,\ldots,a):=0$. Thus $\Omega(\mu(1),\ldots,\mu(q))$ is an $n$-cycle on $X^{q}$. For example $\Omega(x,\ldots,x)\subset X^{q}$ is the small diagonal. Let ${\mathscr S}_{q}$ be the symmetric group on $\{ 1,\ldots,q\}$: of course it acts on $X^{q}$. For $r+s+t=q$ let \begin{equation} \overline{\Omega}(r,s,t):=\sum_{\sigma\in{\mathscr S}_{q}}\sigma(\Omega(\underbrace{a,\ldots,a}_{r},\underbrace{x,\ldots,x}_{s}, \underbrace{\iota(x),\ldots,\iota(x)}_{t})). \end{equation} Thus $\overline{\Omega}(r,s,t)$ is an $n$-cycle on $X^q$ invariant under the action of ${\mathscr S}_{q}$. Notice that \begin{equation}\label{esseti} \overline{\Omega}(r,s,t)=\overline{\Omega}(r,t,s). \end{equation} With this notation \begin{equation}\label{altdiag} \Gamma^q(X;a)=\sum_{\substack{ 0\le r,s \\ r+s=q}} \frac{(-1)^r}{r! s!}\overline{\Omega}(r,s,0). \end{equation} Let $\Xi_m$ be the cycle on $X^m$ given by~\eqref{dramper}. A straightforward computation gives that \begin{equation}\label{tirodiag} 2\Xi_m=\sum_{\substack{ 0\le r,s,t \\ r+s+t=m}}\frac{(-2)^r}{r! s! t!}\overline{\Omega}(r,s,t). \end{equation} (Equality~\eqref{esseti} is the reason for the factor of $2$ in front of $\Xi_m$.) For \begin{equation*} \nu\colon\{1,\ldots,(m-1)\}\to\{a,x_1,\ldots,x_{m},\iota(x_1),\ldots,\iota(x_{m})\} \end{equation*} we let \begin{equation} \begin{matrix} X^{m}& \overset{j_{\nu}}\longrightarrow & X^{2m-1} \\ (x_1,\ldots,x_{m}) & \mapsto & (x_1,\ldots,x_{m},\nu(1),\ldots,\nu(m-1)) \end{matrix} \end{equation} and $\Phi_{\nu}\colon Z_n(X^{m})\to Z_n(X^{2m-1})$ be the homomorphism \begin{equation} \Phi_{\nu}(\gamma):=\sum_{\sigma\in{\mathscr S}_{2m-1}}\sigma_{*}(j_{\nu,*}(\gamma)).
\end{equation} Notice that $\Phi_{\nu}$ does not change if we reorder the sequence $\nu$. \subsection{The case $m=2$}\label{subsec:emmedue} \setcounter{equation}{0} A straightforward computation (recall~\eqref{esseti}) gives that \begin{eqnarray} \Phi_a(\Xi_2) & = & \overline{\Omega}(1,2,0)-4\overline{\Omega}(2,1,0)+\overline{\Omega}(1,1,1), \\ \Phi_{x_1}(\Xi_2) & = & \overline{\Omega}(0,3,0)-2\overline{\Omega}(1,2,0)-2\overline{\Omega}(2,1,0)+\overline{\Omega}(0,2,1), \\ \Phi_{\iota(x_1)}(\Xi_2) & = & -2\overline{\Omega}(2,1,0)-2\overline{\Omega}(1,1,1)+2\overline{\Omega}(0,2,1). \end{eqnarray} Thus \begin{equation} 0\equiv -2\Phi_a(\Xi_2)+2\Phi_{x_1}(\Xi_2)-\Phi_{\iota(x_1)}(\Xi_2)=2\overline{\Omega}(0,3,0)-6\overline{\Omega}(1,2,0)+6\overline{\Omega}(2,1,0)=12\Gamma^3(X;a). \end{equation} \subsection{The case $m=3$}\label{subsec:emmetre} \setcounter{equation}{0} For every $\nu\colon\{1,2\}\to \{a,x_1,x_2,x_3,\iota(x_1),\iota(x_2),\iota(x_3)\}$ the cycle $\Phi_\nu(\Xi_3)$ is equal to the linear combination of the classes listed in the first column of Table~\eqref{coordinate} with coefficients the numbers in the corresponding column of Table~\eqref{coordinate}. For such a $\nu$ let $i(\nu)$ be its position in the first row of Table~\eqref{coordinate}: thus $i((a,a))=1$,..., $i((\iota(x_1),\iota(x_2)))=9$. Table~\eqref{coordinate} allows us to rewrite \begin{equation}\label{califano} \sum_{\nu} \lambda_{i(\nu)}\Phi_{\nu}(\Xi_3) \end{equation} as an integral linear combination of the classes listed in the first column of Table~\eqref{coordinate}, with coefficients $F_1,\ldots,F_9$ which are linear functions of $\lambda_1,\ldots,\lambda_9$.
Let's impose that $0=F_1=\ldots=F_6$: solving the corresponding linear system we get that \begin{eqnarray} \lambda_1 & = & \frac{1}{3}(-8\lambda_6-2\lambda_7-8\lambda_8-8\lambda_9),\\ \lambda_2 & = & \frac{1}{3}(14\lambda_6+8\lambda_7+14\lambda_8+20\lambda_9),\\ \lambda_3 & = & \frac{1}{3}(-6\lambda_6-6\lambda_7-6\lambda_8-12\lambda_9),\\ \lambda_4 & = & \frac{1}{3}(\lambda_6-2\lambda_7+\lambda_8+4\lambda_9),\\ \lambda_5 & = & \frac{1}{3}(-5\lambda_6-2\lambda_7-5\lambda_8-8\lambda_9). \end{eqnarray} For such a choice of coefficients $\lambda_1,\ldots,\lambda_9$ we have that \begin{equation}\label{pasqua} 0\equiv \sum\limits_{\nu} \lambda_{i(\nu)}\Phi_{\nu}(\Xi)= -\frac{4}{3}(\lambda_6+\lambda_7+\lambda_8+\lambda_9)(\overline{\Omega}(0,5,0)-5\overline{\Omega}(1,4,0)+10\overline{\Omega}(2,3,0) -10\overline{\Omega}(3,2,0)+5\overline{\Omega}(4,1,0)). \end{equation} Choosing integers $\lambda_6,\ldots,\lambda_9$ such that $(\lambda_6+\lambda_7+\lambda_8+\lambda_9)=-3$ we get that \begin{equation}\label{eccoci} 0\equiv \sum\limits_{\nu} \lambda_{i(\nu)}\Phi_{\nu}(\Xi)=4\cdot 5!\Gamma^5(X;a). \end{equation} This concludes the proof of~\Ref{cnj}{raddoppia} for $m=3$.
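As a consistency check (ours, not part of the original argument), the factor $4\cdot 5!$ in~\eqref{eccoci} can be verified directly from~\eqref{altdiag}: since $\Omega(a,\ldots,a)=0$ by convention, the term $\overline{\Omega}(5,0,0)$ vanishes, and

```latex
% Expanding \eqref{altdiag} for q=5 and using \overline{\Omega}(5,0,0)=0:
5!\,\Gamma^5(X;a)=\sum_{r+s=5}(-1)^r\frac{5!}{r!\,s!}\,\overline{\Omega}(r,s,0)
=\overline{\Omega}(0,5,0)-5\,\overline{\Omega}(1,4,0)+10\,\overline{\Omega}(2,3,0)
-10\,\overline{\Omega}(3,2,0)+5\,\overline{\Omega}(4,1,0),
```

so the parenthesized sum in~\eqref{pasqua} equals $5!\,\Gamma^5(X;a)$, and the choice $(\lambda_6+\lambda_7+\lambda_8+\lambda_9)=-3$ turns the prefactor $-\frac{4}{3}(\lambda_6+\lambda_7+\lambda_8+\lambda_9)$ into $4$.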
\begin{table}[tbp]\tiny \caption{Coordinates of $\Phi_{\nu}(\Xi)$ for $\nu=(a,a),\ldots,(\iota(x_1),\iota(x_2))$.}\label{coordinate} \vskip 1mm \centering \renewcommand{\arraystretch}{1.60} \begin{tabular}{rrrrrrrrrr} \toprule & $ (a,a)$ & $ (a,x_1)$ & $ (a,\iota(x_1))$ & $(x_1,x_1)$ & $(x_1,x_2)$ & $(x_1,\iota(x_1))$ & $(x_1,\iota(x_2))$ & $ (\iota(x_1),\iota(x_1))$ & $ (\iota(x_1),\iota(x_2))$ \\ \midrule $\overline{\Omega}(3,1,1)$ & -6 & -2 & 2 & -2 & 0 & -2 & 4 & -2 & 8 \\ \midrule $\overline{\Omega}(2,2,1)$ & 3 & -4 & -8 & 0 & -4 & 4 & -6 & 4 & -8 \\ \midrule $\overline{\Omega}(1,3,1)$ & 0 & 2 & 2 & -4 & 0 & -4 & -4 & -4 & 0 \\ \midrule $\overline{\Omega}(1,2,2)$ & 0 & 1 & 2 & 0 &-2 &-4 & 0 & -4 & -4 \\ \midrule $\overline{\Omega}(0,4,1)$ & 0 & 0 & 0 & 2 & 1 &1 & 2 & 1 & 0 \\ \midrule $\overline{\Omega}(0,3,2)$ & 0 & 0 & 0 & 1 & 2 &3 & 2 & 3 & 4 \\ \toprule $\overline{\Omega}(0,5,0)$ & 0 & 0 & 0 & 1 & 1 & 0 & 0 & 0 & 0 \\ \midrule $\overline{\Omega}(1,4,0)$ & 0 & 1 & 0 & -4 & -2 & 0 & 0 & 0 & 0 \\ \midrule $\overline{\Omega}(2,3,0)$ & 1 & -4 & 0 & 4 & -4 & 0 & -2 & 0 & 0 \\ \midrule $\overline{\Omega}(3,2,0)$ & -6 & 2 & -2 & -2 & 8 & -2 & 4 & -2 & 0 \\ \midrule $\overline{\Omega}(4,1,0)$ & 12 & 8 & 8 & 8 & 4 & 8 & 4 & 8 & 4 \\ \bottomrule \end{tabular} \end{table} \end{document}
\begin{document} \pagenumbering{arabic} \pagestyle{plain} \title{A chemotaxis-Navier-Stokes system with dynamical boundary conditions} \author {Baili Chen} \address{Baili Chen \newline Department of Mathematics and Computer Science\\ Gustavus Adolphus College\\ Saint Peter, MN 56082, USA} \email{[email protected]} \subjclass[2020]{35A01, 35D30, 35Q30, 35Q92, 35A35} \keywords{Weak solution; Dynamical boundary conditions; Chemotaxis; Navier-Stokes; Rothe's method} \begin{abstract} A chemotaxis-Navier-Stokes system is studied under dynamical boundary conditions in a bounded convex domain $\Omega\subset \mathbb{R}^3$ with smooth boundary. This models the interaction of populations of swimming bacteria with the surrounding fluid. The existence of a global weak solution is proved using multiple layers of approximations and Rothe's method for the time discretization. \end{abstract} \maketitle \numberwithin{equation}{section} \newtheorem{theorem}{Theorem}[section] \newtheorem{lemma}[theorem]{Lemma} \newtheorem{remark}[theorem]{Remark} \newtheorem{definition}[theorem]{Definition} \allowdisplaybreaks \section{Introduction} The chemotaxis-Navier-Stokes system \begin{equation} \label {e1.1} \left\{ \begin{array}{lcl} \partial_t c -\alpha \Delta c + u \cdot\nabla c + n f(c) =0 &\mbox{a.e. in} &\Omega\times(0,T), \\ \partial_t n - \nabla\cdot \left(\beta \nabla n -g(n,\ c) \nabla c \right) + u \cdot\nabla n =0 &\mbox{a.e. in} &\Omega\times(0,T),\\ \partial_t u - \nabla\cdot ( \xi \nabla u) +(u\cdot \nabla) u +\nabla p = n \nabla \sigma &\mbox{a.e. in} &\Omega\times(0,T), \\ \nabla \cdot u =0 &\mbox{a.e. in} &\Omega\times(0,T). \end{array} \right. \end{equation} in a bounded convex domain with smooth boundary describes a suspension of oxygen-driven bacteria swimming in an incompressible fluid such as water. The above system, which was first introduced by Tuval et al.
\cite{tuv}, consists of three coupled equations: an equation for the concentration of oxygen $c$, an equation for the population density $n$ of the bacteria, and the Navier-Stokes equation describing the water flow $u$. Equation \eqref{e1.1} has been studied by several authors (e.g. \cite{win1}, \cite{win2}, \cite{brau}). In their works, the equations are endowed with Neumann or Robin boundary conditions for both $c$ and $n$. In this work, we extend the model by assuming a dynamical boundary condition for the oxygen concentration $c$, which is given by \begin{equation}\label{e1.2} \partial_t c = \Delta_{\tau} c -b \partial_{\eta}c \quad \mbox{on}\quad \partial\Omega. \end{equation} Here we assume there is an oxygen source acting on the boundary, which depends on the oxygen flux $\partial_{\eta}c$ across the boundary. The Laplace-Beltrami operator $\Delta_{\tau}$ on the boundary describes the diffusion of oxygen along the boundary. The derivation of dynamical boundary conditions is introduced in \cite{gold}. To the best of our knowledge, a chemotaxis-Navier-Stokes system with the above dynamical boundary condition has not been addressed in the existing literature. To complete the system, we introduce the following boundary conditions for the bacterial and fluid fields: \begin{eqnarray} \beta \partial_{\eta}n = g(n,\ c) \partial_{\eta}c \quad \mbox{on}\ \partial\Omega\label{e1.3}\\ u=0\quad \mbox{on}\quad \partial\Omega\label{e1.4} \end{eqnarray} together with the initial conditions \begin{equation}\label{e1.5} c(x,\ 0)=c_0(x),\ n(x,\ 0)=n_0(x),\ u(x,\ 0)=u_0(x). \end{equation} The aim of this paper is to derive the existence of a global weak solution of the system \eqref{e1.1} - \eqref{e1.5}, which describes the interaction between the bacteria density $n$, the oxygen concentration $c$, the fluid velocity field $u$ and the associated pressure $p$ in a bounded convex domain $\Omega$ with smooth boundary. We set the gradient of the gravitational potential $\sigma$ to be constant (i.e.
$\nabla\sigma \equiv $ const). The existence of a global weak solution is proved by discretization in time (Rothe's method). This technique was also used in previous papers (e.g. \cite{vaz}, \cite{vla}, \cite{nec}, \cite{kac}) to solve other types of problems. To fix the notation, we denote by $H^m(\Omega)$ the standard Sobolev space in $L^2(\Omega)$ with derivatives of order less than or equal to $m$ in $L^2(\Omega)$. Let $D(\Omega)$ be the space of $C^{\infty}$ functions with compact support contained in $\Omega$. The closure of $D(\Omega)$ in $H^m(\Omega)$ is denoted by $H_0^m(\Omega)$. Let $\Upsilon$ be the space \[ \Upsilon = \left\{ u\in D(\Omega), \nabla\cdot u=0 \right\} \] The closures of $\Upsilon$ in $L^2(\Omega)$ and in $H_0^1(\Omega)$ are denoted by $H$ and $V$ respectively. We denote by $L^r(0,\ T; \ X)$ the Banach space of all measurable functions \[v:\ [0,\ T] \rightarrow X\] with norm \[ \begin{array}{ll} \ & \Vert v \Vert _{L^r(0,\ T; \ X)} = {\left( \int_0^T \Vert v \Vert _{X}^r dt \right)}^{\frac{1}{r}}, \ \mbox{for}\ 1\le r< \infty \\ \mbox{or} & \Vert v \Vert _{ L^{\infty} (0,\ T; \ X) } = \mbox{ess sup}_{0\le t\le T} \Vert v \Vert _{X}, \ \mbox{for}\ r=\infty. \end{array} \] The trace of a function is denoted by the subscript $\tau.$ For example, $c_{\tau}$ denotes the trace of the function $c$. Throughout this paper, we denote by $M$ and $C$ the constants whose values may be different at each occurrence. Before stating the main result, we make the following assumptions throughout this paper: \[ \begin{array}{lll} (H_1) &\ f(\cdot) \in C^0(R) & \mbox{with} \ f_0\le f(\cdot) \le f_1;\ f_0,\ f_1 \in R^{+}.\\ (H_2) &\ g(\cdot,\ \cdot) \in C^1(R^2) & \mbox{with} \ | g(\cdot,\ \cdot) | \le g_1;\ g_1 \in R^{+}.\\ (H_3) &\ \alpha,\ \beta, \ \xi,\ b \in R^{+}.
& \ \end{array} \] The main result of this paper is: \begin{theorem} \label{th1.1} Suppose $(H_1)-(H_3)$ hold, $(c_0,\ n_0,\ u_0)\in (L^2(\Omega))^2\times H.$ Then there exist functions $(c,\ n)\in \left( L^{\infty}(0,\ T, \ L^2(\Omega))\bigcap L^2(0,\ T, \ H^1(\Omega)) \right)^2$ and $u\in L^{\infty}(0,\ T, \ H)\bigcap L^2(0,\ T, \ V)$ such that $\left( c(0),\ n(0),\ u(0) \right)=(c_0,\ n_0,\ u_0)$ and \begin{equation} \label {e1.6} \left\{ \begin{array}{ll} \ &\int_{\Omega} \partial_t c(t)\phi_1 dx + \alpha \int_{\Omega} \nabla c(t) \nabla\phi_1 dx + \frac{\alpha}{b} \int_{\partial\Omega} \partial_t c_{\tau}(t)\phi_{1\tau} d\sigma + \frac{\alpha}{b} \int_{\partial\Omega} \nabla_{\tau} c(t) \nabla_{\tau}\phi_1 d\sigma \\ \ &\hspace{1cm} + \int_{\Omega} u\nabla c(t) \phi_1 dx= \int_{\Omega} -n(t) f(c(t))\phi_1 dx, \\ \ &\int_{\Omega} \partial_t n(t)\phi_2 dx + \int_{\Omega} \left( \beta\nabla n(t) - g( n(t),\ c(t)) \nabla c(t) \right) \nabla\phi_2 dx \\ \ &\hspace{3cm} + \int_{\Omega} u(t)\nabla n(t) \phi_2 dx=0,\\ \ &\int_{\Omega}\partial_t u(t)\phi_3 dx + \int_{\Omega} \xi \nabla u(t) \nabla\phi_3 dx + \int_{\Omega} (u(t)\cdot\nabla) u(t) \phi_3 dx=\\ \ &\hspace{3cm} \int_{\Omega} n(t) \nabla \sigma \phi_3 dx. \end{array} \right. \end{equation} for all $(\phi_1,\ \phi_2,\ \phi_3)\in (H^1(\Omega))^2\times V$ . \end{theorem} The paper is organized as follows. In the next section, we introduce some preliminary lemmas and the time-discretization scheme (Rothe's method). We also outline the approaches to prove the main result. Section 3 is devoted to proving Theorem 2.3. We use a regularization technique and a fixed-point theorem to prove the existence of a solution to an auxiliary problem, then use the Galerkin method to show the existence of solutions to the discretized scheme. In section 4, we derive several a priori estimates which will allow us to pass limits in the discretization scheme and thereby verify Theorem 1.1.
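The discrete Gronwall inequality used in Section 2 can be sanity-checked numerically. The following sketch (our illustration, with an invented non-decreasing sequence $A_i$) builds the extremal sequence satisfying the hypothesis with equality and checks it against the bound $a_i \le \frac{1}{1-k}A_i\exp\big(i\,\frac{k}{1-k}\big)$, a standard variant of the estimate stated below:

```python
# Numerical sanity check of a discrete Gronwall bound (illustrative sketch):
# if a_i <= A_i + k * sum_{j=0}^{i} a_j with A_i non-decreasing and 0 < k < 1,
# then a_i <= A_i / (1 - k) * exp(i * k / (1 - k)).
import math

def gronwall_bound(A, k):
    # right-hand side of the Gronwall estimate
    return [A[i] / (1 - k) * math.exp(i * k / (1 - k)) for i in range(len(A))]

def extremal_sequence(A, k):
    # worst case: equality a_i = A_i + k * sum_{j<=i} a_j, solved for a_i
    a = []
    for i in range(len(A)):
        a.append((A[i] + k * sum(a)) / (1 - k))
    return a

k = 0.05
A = [1.0 + 0.1 * i for i in range(50)]   # invented non-decreasing A_i
a = extremal_sequence(A, k)
b = gronwall_bound(A, k)
assert all(ai <= bi + 1e-12 for ai, bi in zip(a, b))
```

The check works because the extremal sequence satisfies $a_i = A_i (1-k)^{-(i+1)}$ when $A$ is constant, and $(1-k)^{-i} \le \exp\big(i\,\frac{k}{1-k}\big)$ for $0<k<1$.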
\section{Preliminaries } In this paper, we will use the following Gronwall's lemma in the discrete form. \begin{lemma} Let $0<k<1,\ (a_i)_{i\ge 1},\ $and $(A_i)_{i\ge 1}$ be sequences of real, non-negative numbers. Assume that $(A_i)_{i\ge 1}$ is non-decreasing and that \[ a_i \le A_i + k\sum_{j=0}^i a_j, \quad \mbox{for}\ i=0,1,2,\cdots \] then \[ a_i \le \frac{1}{1-k} A_i \exp \left( i\frac{k}{1-k} \right) , \quad \mbox{for}\ i=0,1,2,\cdots \] \end{lemma} We also need the following relation: \begin{equation}\label{e2.1} 2\int_{\Omega} a(a-b)dx = \Vert a\Vert _{L^2(\Omega)}^2 - \Vert b\Vert _{L^2(\Omega)}^2 + \Vert a-b \Vert _{L^2(\Omega)}^2. \end{equation} We will frequently use Young's inequality: \[ ab\le \delta a^2 + \frac{1}{4\delta} b^2.\] The compactness result presented in the next lemma will allow us to pass the limit in the Rothe approximation. \begin{lemma} Let $X,\ Y$ be two Banach spaces, such that $Y \subset X\ $, the injection being compact. Assume that $G$ is a family of functions in $L^{2}(0,\ T; \ Y)\bigcap L^p(0,\ T; \ X)$ for some $T>0$ and $p>1$, such that \begin{eqnarray} \ & G \ \mbox{is bounded in}\ L^{2}(0,\ T; \ Y)\ \mbox{and}\ L^p(0,\ T; \ X);\nonumber\\ \ & \sup_{g\in G} \int_0^{T-a} \Vert g(a+s)- g(s) \Vert _{X}^p ds \rightarrow 0 \quad \mbox{as}\ a\rightarrow 0, \ a>0 \label{e2.2} \end{eqnarray} Then the family $G$ is relatively compact in $L^p(0,\ T; \ X)$. \end{lemma} The proof of the main result stated in section 1 is based on Rothe's method of time discretization. We divide the time interval $[0,\ T]$ into $N$ subintervals $[t_{m-1},\ t_m ]$, $t_m=km$, $k=T/N$, $m=1,\ 2,\ \cdots ,\ N.$ The time-discretized scheme reads as \begin{equation} \label {e2.3} \left\{ \begin{array}{lcl} \delta_t c^m -\alpha \Delta c^m + u^m\cdot \nabla c^m + n^m f(c^m) =0 &\mbox{a.e. in} &\Omega\times(0,T), \\ \delta_t n^m - \nabla\cdot \left(\beta \nabla n^m -g(n^m,\ c^m) \nabla c^m \right) + u^m \cdot \nabla n^m =0 &\mbox{a.e.
in} &\Omega\times(0,T),\\ \delta_t u^m - \nabla\cdot ( \xi \nabla u^m) +(u^m\cdot \nabla) u^m +\nabla p^m = n^m \nabla \sigma &\mbox{a.e. in} &\Omega\times(0,T),\\ \nabla \cdot u^m =0 &\mbox{a.e. in} &\Omega\times(0,T),\\ \delta_t c^m = \Delta_{\tau} c^m -b \partial_{\eta}c^m &\mbox{a.e. on} &\partial\Omega,\\ \beta \partial_{\eta}n^m = g(n^m,\ c^m) \partial_{\eta}c^m &\mbox{a.e. on} &\partial\Omega,\\ u^m=0&\mbox{a.e. on} &\partial\Omega,\\ c^0=c_0(x),\ n^0=n_0(x),\ u^0=u_0(x). \end{array} \right. \end{equation} Here, we use the notation: $\delta_t c^m=\frac{c^m - c^{m-1}}{k},\ \delta_t n^m=\frac{n^m - n^{m-1}}{k}, \delta_t u^m=\frac{u^m - u^{m-1}}{k}$, where $c^m, \ n^m, \ u^m, \ m=1,2,\cdots,N$ are the approximations of $c(x,t_m),\ n(x,t_m),\ $ and $u(x,t_m)$ respectively. The existence result for the discrete scheme is stated in the following theorem, which will be proved in section 3. \begin{theorem} \label{th2.3} Suppose $(H_1)-(H_3)$ hold, $(c^0,\ n^0,\ u^0)\in (L^2(\Omega))^2\times H.$ Then there exists $(c^m,\ n^m,\ u^m)\in (L^2(\Omega))^2\times H $ solving the discrete problem \eqref{e2.3} for time step $k$ small enough. \end{theorem} With this result, we introduce the Rothe functions: \begin{equation} \left\{ \begin{array}{ll} \ &\tilde{c}_k(t) = c^{m-1} + (t-t_{m-1})\delta_t c^m,\\ \ &\tilde{c}_{k\tau}(t) = c_{\tau}^{m-1} + (t-t_{m-1})\delta_t c_{\tau}^m,\\ \ &\tilde{n}_k(t) = n^{m-1} + (t-t_{m-1})\delta_t n^m,\\ \ &\tilde{u}_k(t) = u^{m-1} + (t-t_{m-1})\delta_t u^m,\quad \mbox{for}\ t_{m-1}\le t\le t_m,\ 1\le m\le N \end{array} \right. \end{equation} and step functions: \begin{equation} \left\{ \begin{array}{ll} \ & (c_k(t),\ c_{k\tau}(t),\ n_k(t),\ u_k(t)) = (c^{m},\ c_{\tau}^{m},\ n^{m},\ u^{m}) ,\\ \ & (c_k(0),\ c_{k\tau}(0),\ n_k(0),\ u_k(0)) = (c_0,\ c_{0\tau},\ n_0,\ u_0). \end{array}\right.
\end{equation} for $t_{m-1}\le t\le t_m,\quad 1\le m \le N.$ We will prove that the Rothe functions $(\tilde{c}_k,\ \tilde{c}_{k\tau},\ \tilde{n}_k,\ \tilde{u}_k )$ and the step functions $(c_k,\ c_{k\tau},\ n_k,\ u_k)$ converge to limit functions $(c,\ c_{\tau},\ n,\ u)$ as $k\rightarrow 0$, and that the limit functions solve problem \eqref{e1.6}. Thus, the main result will be proved. \section {Proof of Theorem 2.3} The proof of Theorem 2.3 is based on a semi-Galerkin method. Let $\{w_ j\}$ be an orthogonal basis of $V$ that is orthonormal in $H$. We denote by $V_j$ the finite-dimensional space spanned by $ \{w_i\}_{1\le i \le j}.$ For a fixed $m$, let $u_j^m \in V_j$ be the Galerkin approximation of $u^m$, and assume $(c^{m-1},\ n^{m-1},\ u^{m-1})$ are given; we consider the following problem: Find $(c_j^{m},\ n_j^{m},\ u_j^{m}) \in \left( H^1(\Omega) \right)^2\times V_j$, satisfying the following elliptic system: \begin{equation}\label{e3.1} \left\{ \begin{array}{lll} \ & -k\alpha\Delta c_j^m + k (u_j^m \cdot \nabla ) c_j^m + c_j^m = -k n_j^m f( c_j^m)+h &\mbox{a.e. in}\ \Omega\times(0,T),\\ \ &-k\nabla\cdot \left( \beta\nabla n_j^m-g (n_j^m,\ c_j^m) \nabla c_j^m \right) + k (u_j^m \cdot \nabla ) n_j^m + n_j^m = l &\mbox{a.e. in}\ \Omega\times(0,T),\\ \ &-k\nabla\cdot (\xi\nabla u_j^m) + k (u_j^m \cdot \nabla ) u_j^m + k \nabla P_j^m + u_j^m=k n_j^m\nabla \sigma + q &\mbox{a.e. in}\ \Omega\times(0,T),\\ \ &\nabla\cdot u_j^m = 0 &\mbox{a.e. in}\ \Omega\times(0,T),\\ \ & k\partial_{\eta} c_j^m = \frac{k}{b} \Delta_{\tau} c_{j_\tau}^m -\frac{1}{b} c_{j_\tau}^m+ \frac{1}{b} h_{\tau} &\mbox{a.e. on}\ \partial\Omega, \\ \ & \beta \partial_{\eta}n_j^m = g(n_j^m,\ c_j^m) \partial_{\eta}c_j^m&\mbox{a.e. on}\ \partial\Omega,\\ \ &u_j^m=0 &\mbox{a.e. on}\ \partial\Omega. \end{array} \right.
\end{equation} where $(h,\ h_{\tau},\ l,\ q)=(c^{m-1},\ c_{\tau}^{m-1},\ n^{m-1},\ u^{m-1}).$ The existence result for the above problem is stated in the following theorem: \begin{theorem}\label{th3.1} Suppose $(H_1)-(H_3)$ hold, and $(h,\ l,\ q,\ h_{\tau})\in (L^2(\Omega))^3\times L^2(\partial\Omega),$ then there exists a solution $(c_j^m,\ n_j^m,\ u_j^m)\in (H^1(\Omega))^2\times V $ of problem \eqref{e3.1}. \end{theorem} We will then move on to derive estimates on the solution $(c_j^m,\ n_j^m,\ u_j^m)$ of problem \eqref{e3.1}. These estimates allow us to pass to the limit as $j\rightarrow\infty$: the sequences converge to limit functions, which solve the discrete problem \eqref{e2.3}, and Theorem 2.3 will be proved. The proof of Theorem 3.1 will use the following lemma: \begin{lemma} Suppose $(H_1)-(H_3)$ hold, and $(h,\ l,\ h_{\tau})\in (L^2(\Omega))^2\times L^2(\partial\Omega),$ then for fixed $\hat{u} \in V_j$, there exists a solution $(c,\ n)\in (H^1(\Omega))^2$ of the following problem: \begin{equation}\label{e3.2} \left\{ \begin{array}{ll} \ & -k\alpha\Delta c + k (\hat{u}\cdot \nabla c )+ c = -k n f( c)+h,\\ \ &-k\nabla\cdot \left( \beta\nabla n-g (n,\ c) \nabla c \right) + k (\hat{u}\cdot \nabla n) + n = l,\\ \ & k\partial_{\eta} c = \frac{k}{b} \Delta_{\tau} c_{\tau} -\frac{1}{b} c_{\tau}+ \frac{1}{b} h_{\tau}, \\ \ & \beta \partial_{\eta}n = g(n,\ c) \partial_{\eta}c. \end{array} \right. \end{equation} \end{lemma} {\bf 3.1 Proof of Lemma 3.2} To prove Lemma 3.2, we first consider a regularized problem: we smooth $\hat{u}$ by replacing $\hat{u}$ with $J_{\epsilon}\hat{u}$, where $J_{\epsilon}\hat{u} = \left( (\psi _{\epsilon} \hat{u} )\ast w_{\epsilon} \right)_{div}.$ Here \[ \psi _{\epsilon}(x):= \left\{ \begin{array}{ll} 0 & \mbox{if}\ \ \mbox{dist}\ (x,\ \partial \Omega) \le 2\epsilon,\\ 1 & \mbox{elsewhere} \end{array}\right.
\] $(\psi _{\epsilon} \hat{u} )\ast w_{\epsilon}$ denotes the standard regularization of $\psi _{\epsilon} \hat{u} $ with kernel $w_{\epsilon}$ having support in a ball of radius $\epsilon$. The symbol $(\cdot)_{div}$ comes from the Helmholtz decomposition, see \cite{bul}. $J_{\epsilon}\hat{u}$ preserves the Dirichlet boundary condition and the divergence-free property; therefore, we have the identity $\int_{\Omega} (J_{\epsilon} \hat{u}\cdot \nabla c) c dx=0$, see \cite{tem}. We will use this identity frequently throughout this paper. We have the following existence result for the regularized problem. \begin{lemma} Suppose $(H_1)-(H_3)$ hold, and $(h,\ l,\ h_{\tau})\in (L^2(\Omega))^2\times L^2(\partial\Omega),$ then for fixed $\epsilon\in (0,\ 1)\ $and $\hat{u} \in V_j$, there exists a solution $(c_{\epsilon},\ n_{\epsilon})\in (H^1(\Omega))^2$ of the following problem: \begin{equation}\label{e3.3} \begin{array}{ll} \ & -k\alpha\Delta c_{\epsilon} + k J_{\epsilon}\hat{u}\cdot \nabla c_{\epsilon} + c_{\epsilon} = -k n_{\epsilon} f( c_{\epsilon})+h, \\ \ &-k\nabla\cdot \left( \beta\nabla n_{\epsilon}-g (n_{\epsilon},\ c_{\epsilon}) \nabla c_{\epsilon} \right) + k J_{\epsilon}\hat{u}\cdot \nabla n_{\epsilon} + n_{\epsilon} = l, \\ \ & k\partial_{\eta} c_{\epsilon} = \frac{k}{b} \Delta_{\tau} {c_{\epsilon}}_{\tau} -\frac{1}{b} {c_{\epsilon}}_{\tau}+ \frac{1}{b} h_{\tau}, \\ \ & \beta \partial_{\eta}n_{\epsilon} = g(n_{\epsilon},\ c_{\epsilon}) \partial_{\eta}c_{\epsilon}. \end{array} \end{equation} \end{lemma} {\bf Proof of Lemma 3.3}: In the proof, we omit the subscript $\epsilon$ and write $(c_{\epsilon},\ n_{\epsilon})$ as $(c,\ n)$. We use Schaefer's fixed-point theorem.
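For the reader's convenience, we recall the form of Schaefer's fixed-point theorem used here (a standard statement from nonlinear functional analysis):

```latex
% Schaefer's fixed-point theorem (standard form):
\textbf{Theorem (Schaefer).} Let $X$ be a Banach space and let
$\Phi\colon X\to X$ be continuous and compact. If the set
\[
\{ x\in X \,:\, x=\lambda \Phi(x)\ \mbox{for some}\ 0\le\lambda\le 1 \}
\]
is bounded, then $\Phi$ has a fixed point.
```

The three hypotheses (continuity, compactness, and boundedness of the set of $\lambda$-fixed points) are exactly the three properties verified in the proof below.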
We define an operator $\Phi :\quad X \rightarrow X,\ $ where \[ X= H^1(\Omega) \times H^1(\Omega) \] Fix $(\hat{c}, \ \hat{n})\in X$ and set $\Phi (\hat{c}, \ \hat{n}) = (c,\ n)$, where $(c,\ n)$ is the solution to the following problem: \begin{eqnarray} &\ & -k\alpha\Delta c + k J_{\epsilon} \hat{u}\cdot \nabla c + c = -k \hat{n} f( \hat{c})+h,\label{e3.4} \\ &\ & -k\nabla\cdot \left( \beta\nabla n - g (\hat{n},\ c) \nabla c \right)+ k J_{\epsilon} \hat{u}\cdot \nabla n + n = l,\label{e3.5}\\ &\ & k\partial_{\eta} c = \frac{k}{b} \Delta_{\tau} c_{\tau} -\frac{1}{b} c_{\tau}+ \frac{1}{b} h_{\tau}, \label{e3.6} \\ &\ & \beta \partial_{\eta}n = g(\hat{n},\ c) \partial_{\eta}c. \label{e3.7} \end{eqnarray} We want to show that $\Phi$ is a continuous and compact mapping of $X$ into itself and that the set $\{ x\in X:\ x=\lambda \Phi (x)\ \mbox{for some}\ 0\le\lambda\le 1\} $ is bounded; then, by Schaefer's fixed-point theorem, $\Phi$ has a fixed point. Multiplying both sides of equation \eqref{e3.4} by $c$ and using $\int_{\Omega} (J_{\epsilon} \hat{u}\cdot \nabla c) c dx=0$, we arrive at \[ \begin{array}{ll} \ & -\alpha\int_{\partial \Omega} c ( h_2 - \frac{1}{b} c_{\tau} + \frac{k}{b} \Delta_{\tau} c_{\tau} ) d\sigma +\alpha k \Vert \nabla c \Vert _{L^2}^2 + \Vert c \Vert _{L^2}^2 \\ \ &\hspace{1cm} = \int_{\Omega} h_1 c.
\end{array} \] Here $ h_1= -k\hat{n}f(\hat{c}) + h ,\ h_2 = \frac{1}{b} h_{\tau}.$ Using H\"older's inequality and Young's inequality, we get \begin{equation}\label{e3.8} \begin{array}{ll} \ & \frac{\alpha k}{b} \Vert \nabla c_{\tau} \Vert _{L^2}^2 + {\alpha k} \Vert \nabla c \Vert _{L^2}^2 + (1-\delta) \Vert c \Vert _{L^2}^2 +\alpha (\frac{1}{b}-\delta)\Vert c_{\tau} \Vert _{L^2}^2 \\ \ &\hspace{0.1cm} \le \frac{1}{4\delta} \Vert h_1\Vert_{L^2}^2 + \frac{\alpha}{4\delta} \Vert h_2\Vert_{L^2}^2\\ \ &\hspace{0.1cm} \le \frac{1}{2\delta} \Vert h\Vert_{L^2}^2 + \frac{\alpha}{4\delta b^2 } \Vert h_{\tau}\Vert_{L^2}^2 + \frac{1}{2\delta} k^2 f_1^2 \Vert \hat{n}\Vert_{L^2}^2. \end{array} \end{equation} Choosing $\delta < \min (1,\frac{1}{b})$ in the above inequality, we have \begin{equation}\label{e3.8-1} \Vert c \Vert _{L^2}^2 + \Vert \nabla c \Vert_{L^2}^2 \le C(\Vert h\Vert_{L^2}^2 + \Vert h_{\tau}\Vert_{L^2}^2 + \Vert \hat{n}\Vert_{L^2}^2). \end{equation} Multiplying \eqref{e3.5} by $n$, integrating by parts, and using the boundary conditions, we get \[ \begin{array}{ll} \ & k\beta\Vert \nabla n \Vert _{L^2}^2 + \Vert n \Vert _{L^2}^2 = \int_{\Omega} ln + k\int_{\Omega} g (\hat{n},\ c) \nabla c \nabla n \\ \ &\hspace{1cm} \le \delta \Vert n \Vert _{L^2}^2 + \frac{1}{4\delta} \Vert l \Vert _{L^2}^2 + \frac{kg_1}{2} (\Vert \nabla n \Vert _{L^2}^2 +\Vert \nabla c \Vert _{L^2}^2), \end{array} \] which gives, \[ (k\beta-\frac{kg_1}{2} )\Vert \nabla n \Vert _{L^2}^2 + (1-\delta) \Vert n \Vert _{L^2}^2 \le \frac{1}{4\delta} \Vert l \Vert _{L^2}^2 +\frac{kg_1}{2}\Vert \nabla c \Vert _{L^2}^2. \] Choosing $\delta < 1$ in the above inequality and requiring $\beta > \frac{g_1}{2}$, we have \begin{equation}\label{e3.9-1} \Vert n \Vert _{L^2}^2 + \Vert \nabla n \Vert_{L^2}^2 \le C(\Vert l\Vert_{L^2}^2 +\Vert \nabla c \Vert_{L^2}^2 ).
\end{equation} This together with \eqref{e3.8-1} shows \begin{equation}\label{e3.9-2} \Vert n \Vert _{L^2}^2 + \Vert \nabla n \Vert_{L^2}^2 \le C(\Vert l\Vert_{L^2}^2 +\Vert h\Vert_{L^2}^2 + \Vert h_{\tau}\Vert_{L^2}^2 + \Vert \hat{n}\Vert_{L^2}^2 ). \end{equation} \eqref{e3.8-1} and \eqref{e3.9-2} show that $\Phi$ maps $X$ to $X$. Next, we want to show that the set \[ S= \{ (c,\ n)\in H^1 \times H^1 :\ (c,\ n)=\lambda \Phi (c,\ n)\ \mbox{for some}\ 0\le\lambda\le 1\} \] is bounded. We consider the equation $ (\frac{c}{\lambda},\ \frac{n}{\lambda} )= \Phi (c,\ n)$: \begin{eqnarray} &\ & -k\alpha\Delta c + k J_{\epsilon} \hat{u}\cdot \nabla c + c = \lambda ( -k n f( c)+h ),\label{e3.15}\\ &\ &-k\beta\Delta n + k J_{\epsilon} \hat{u}\cdot \nabla n + n = \lambda\left( -k \nabla\cdot (g (n,\ \frac{c}{\lambda}) \frac{\nabla c}{ \lambda}) + l \right),\label{e3.16}\\ &\ & k\partial_{\eta} c = \frac{k}{b} \Delta_{\tau} c_{\tau} -\frac{1}{b} c_{\tau}+ \frac{\lambda}{b} h_{\tau}, \label{e3.17} \\ &\ & \beta \partial_{\eta}n = g(n,\ \frac{c}{\lambda}) \partial_{\eta}c.\label{e3.18} \end{eqnarray} Multiplying \eqref{e3.15} by $c$ and integrating with respect to $x$, we get \[ \begin{array}{ll} \ & \frac{\alpha k}{b}\Vert \nabla_{\tau} c_{\tau} \Vert _{L^2}^2 + \frac{\alpha }{b}\Vert c_{\tau} \Vert _{L^2}^2 - \frac{\alpha \lambda}{b}\int_{\partial\Omega}ch_{\tau}d\sigma + k\alpha\Vert \nabla c \Vert _{L^2}^2 +\Vert c \Vert _{L^2}^2\\ \ &\hspace{1cm} = -\lambda\int_{\Omega}knf(c)c + \lambda\int_{\Omega} hc.
\end{array} \] Using Young's inequality, we arrive at \[ \begin{array}{ll} \ & \frac{\alpha k}{b}\Vert \nabla_{\tau} c_{\tau} \Vert _{L^2}^2 + ( \frac{\alpha }{b} - \frac{\alpha \lambda}{b}\delta) \Vert c_{\tau} \Vert _{L^2}^2 + k\alpha\Vert \nabla c \Vert _{L^2}^2 + (1-\lambda k f_1 \delta - \lambda\delta) \Vert c \Vert _{L^2}^2\\ \ &\hspace{1cm} \le \frac{\alpha \lambda}{4b\delta}\Vert h_{\tau} \Vert _{L^2}^2 + \frac{\lambda k f_1}{4 \delta} \Vert n \Vert _{L^2}^2 + \frac{\lambda}{4 \delta} \Vert h \Vert _{L^2}^2. \end{array} \] Choosing $\delta < \min(\frac{1}{\lambda}, \frac{1}{\lambda +\lambda k f_1 })$ in the above inequality, we have \begin{equation}\label{e3.19} \begin{array}{ll} \ & \alpha k \Vert \nabla c \Vert _{L^2}^2 \le \frac{1}{4\delta}\left( {\lambda} k f_1 \Vert n\Vert_{L^2}^2 + {\lambda} \Vert h\Vert_{L^2}^2\right) + \frac{ {\lambda}\alpha}{4\delta b } \Vert h_{\tau}\Vert_{L^2}^2. \end{array} \end{equation} Multiplying \eqref{e3.16} by $n$, integrating with respect to $x$, using $ \int_{\Omega} (J_{\epsilon} \hat{u})\cdot (\nabla n) n = 0$, then integrating by parts and using boundary condition \eqref{e3.18} together with H\"older's and Young's inequalities, we arrive at: \[ k\beta\Vert \nabla n \Vert _{L^2}^2 + \Vert n \Vert _{L^2}^2 \le \lambda (\hat{\delta} \Vert n \Vert _{L^2}^2 + \frac{1}{ 4{ \hat{\delta} } } \Vert l \Vert _{L^2}^2) + kg_1( \frac{1}{ 4{ \hat{\delta} } } \Vert \nabla c \Vert _{L^2}^2 + \hat{\delta} \Vert \nabla n \Vert _{L^2}^2), \] which gives, \begin{equation}\label{e3.20} \begin{array}{ll} \ & (1-\lambda \hat{\delta}) \Vert n \Vert _{L^2}^2 + (k\beta - k g_1 \hat{\delta}) \Vert \nabla n \Vert _{L^2}^2 \\ \ & \le \frac{\lambda}{4\hat{\delta}} \Vert l \Vert _{L^2}^2 + \frac{k g_1}{4 \hat{\delta}} \Vert \nabla c \Vert _{L^2}^2.
\end{array} \end{equation} Choose $\hat{\delta} < \min (\frac{1}{\lambda},\ \frac{\beta}{g_1} )$ in \eqref{e3.20}. With $\delta,\ \hat{\delta}$ fixed in \eqref{e3.19} and \eqref{e3.20}, choose $k$ small enough that \[ \frac{ {\lambda} k f_1}{4\delta} < 1-\lambda \hat{\delta}. \] We also require $\alpha$ to be a constant such that $\alpha > \frac{g_1}{4\hat{\delta}}.$ By adding \eqref{e3.19} and \eqref{e3.20}, we can absorb the terms with $\Vert \nabla c \Vert _{L^2}^2$ and $\Vert n \Vert _{L^2}^2$ on the right-hand side into the left-hand side, and obtain \begin{equation}\label{e3.21} \begin{array}{ll} \ & (\alpha k- \frac{k g_1}{4 \hat{\delta}}) \Vert c \Vert _{H^1}^2 + (k\beta - k g_1 \hat{\delta}) \Vert \nabla n \Vert _{L^2}^2 + ( 1- \lambda \hat{\delta} - \frac{ {\lambda} k f_1 } {4\delta} ) \Vert n\Vert_{L^2}^2 \\ \ & \le \frac { \lambda}{ 4\delta } \Vert h\Vert_{L^2}^2 + \frac{ {\lambda}\alpha}{4\delta b} \Vert \ h_{\tau} \Vert_{L^2}^2 + \frac { \lambda}{ 4\hat{\delta} } \Vert l\Vert_{L^2}^2. \end{array} \end{equation} Therefore, $\Vert n \Vert _{H^1}$ and $\Vert c \Vert _{H^1}$ are bounded, hence the set $S$ is bounded. Next, we check that the mapping $\Phi$ is compact. We proceed to show that $\Phi$ maps a bounded set $(\hat{c}, \ \hat{n}) \in H^1 \times H^1 $ to a bounded set $(c,\ n) = \Phi(\hat{c}, \ \hat{n}) $ in $H^2 \times H^2 $: For $(\hat{c}, \ \hat{n})$ bounded in $H^1 \times H^1 $, from equations \eqref{e3.4} and \eqref{e3.6} and the elliptic regularity theorem, we know that $\Vert c\Vert_{H^2}$ is bounded. The boundedness of $\Vert n\Vert_{H^2}$ comes from equations \eqref{e3.5} and \eqref{e3.7}, the elliptic regularity theorem, and the boundedness of $\Vert c\Vert_{H^2}$. So $(c,\ n) = \Phi(\hat{c}, \ \hat{n}) $ is bounded in $H^2 \times H^2 $. Since the embedding of $H^2 \times H^2 $ into $H^1 \times H^1 $ is compact, the bounded set $(c,\ n)$ in $H^2 \times H^2 $ is relatively compact in $H^1 \times H^1 $. Therefore, the mapping $\Phi$ is compact.
The last step is to show that $\Phi$ is continuous. Let $(\hat{c}_n,\ \hat{n}_n)$ converge to $(\hat{c},\ \hat{n})$ in $H^1 \times H^1$ strongly. Let $(c_n,\ n_n) =\Phi (\hat{c}_n,\ \hat{n}_n)$, so we have \begin{equation}\label{e3.22} \begin{array}{ll} \ & -k\alpha\Delta c_n + k J_{\epsilon} \hat{u}\cdot \nabla c_n + c_n = -k\hat{n}_n f( \hat{c}_n)+h, \\ \ &-k\beta\Delta n_n + k J_{\epsilon} \hat{u}\cdot \nabla n_n + n_n = -k \nabla\cdot (g (\hat{n}_n,\ c_n) \nabla c_n) + l ,\\ \ & k\partial_{\eta} c_n = \frac{k}{b} \Delta_{\tau} {c_n}_{\tau} -\frac{1}{b} {c_n}_{\tau}+ \frac{1}{b} h_{\tau}, \\ \ & \beta \partial_{\eta}n_n = g (\hat{n}_n,\ c_n)\partial_{\eta}c_n. \end{array} \end{equation} Since the sequence $(\hat{c}_n,\ \hat{n}_n)$ is bounded in $H^1 \times H^1$, the same argument used to show the compactness of $\Phi$ can be applied here to derive the boundedness of $(c_n,\ n_n)$ in $H^2 \times H^2$. Since $H^2 \times H^2$ is compactly embedded into $H^1 \times H^1$, there exist $(c,\ n) \in H^2 \times H^2$ and a subsequence of $(c_n,\ n_n)$, still denoted as $(c_n,\ n_n)$, s.t. \begin{eqnarray} &\ & (c_n,\ n_n) \rightharpoonup (c,\ n) \ \mbox{weakly in} \ H^2 \times H^2, \label{e3.23}\\ &\ & (c_n,\ n_n) \rightarrow (c,\ n) \ \mbox{strongly in} \ H^1 \times H^1. \label{e3.24} \end{eqnarray} With \eqref{e3.24} and the fact that $(\hat{c}_n,\ \hat{n}_n)$ converges to $(\hat{c},\ \hat{n})$ in $H^1 \times H^1$ strongly, we have \begin{equation} \label{e3.25} \begin{array}{ll} \ &(\hat{c}_n,\ \hat{n}_n,\ c_n,\ n_n) \rightarrow (\hat{c},\ \hat{n},\ c,\ n) \ \mbox{a.e. in} \ \Omega. \end{array} \end{equation} As for the traces of $(c_n,\ n_n)$, we have that $ {n_n}_{\tau}$ is bounded in $H^{ \frac{3}{2} } (\partial \Omega)\ $ (since $n_n$ is bounded in $H^2$), and therefore also bounded in $H^1(\partial \Omega)$. Since $H^1(\partial \Omega)\ $ is compactly embedded into $L^2(\partial \Omega)$, there exists a subsequence of $ {n_n}_{\tau}$, still denoted as $ {n_n}_{\tau}$, s.t.
$ {n_n}_{\tau} \rightarrow {n}_{\tau}$ strongly in $ L^2(\partial \Omega)$, so we have \begin{equation} \label{e3.26} {n_n}_{\tau} \rightarrow {n}_{\tau}\ \mbox{almost everywhere}. \end{equation} Similarly, we have \begin{equation} \label{e3.27} {c_n}_{\tau} \rightarrow {c}_{\tau} \ \mbox{almost everywhere}. \end{equation} With assumptions $(H_1) - (H_3)$, \eqref{e3.25} - \eqref{e3.27}, and the dominated convergence theorem, we conclude that \begin{eqnarray} &\ & f(\hat{c}_n) \rightarrow f(\hat{c}) \ \mbox{a.e. and strongly in} \ L^2, \label{e3.28}\\ &\ & g(\hat{n}_n,\ c_n) \rightarrow g(\hat{n},\ c) \ \mbox{a.e. and strongly in} \ L^2. \label{e3.29} \end{eqnarray} From the boundedness of $ (c_n,\ n_n)$ in $H^1 \times H^1$, we have \begin{equation}\label{e3.30} (\nabla c_n,\ \nabla n_n) \rightharpoonup (\nabla c,\ \nabla n) \ \mbox{weakly in} \ L^2\times L^2 . \end{equation} From \eqref{e3.28} - \eqref{e3.30}, we get \[ \begin{array}{ll} \ &\hat{n}_n f(\hat{c}_n) \rightarrow \hat{n} f(\hat{c}) \ \mbox{in distribution,} \\ \ & g(\hat{n}_n,\ c_n) \nabla c_n \rightarrow g(\hat{n},\ c) \nabla c\ \mbox{in distribution.} \end{array} \] Similarly, we can pass to the limit in the other terms in \eqref{e3.22} by letting $n\rightarrow \infty$; we then obtain \eqref{e3.4} - \eqref{e3.7}. Therefore, we have shown that $(c,\ n) = \Phi (\hat{c},\ \hat{n}).$ The continuity of $\Phi$ is proved. This finishes the proof of Lemma 3.3. {\bf Proof of Lemma 3.2:} Multiplying $\eqref{e3.3}_1$ by $c_{\epsilon}$ and $\eqref{e3.3}_2$ by $n_{\epsilon}$, integrating over $\Omega$, and then proceeding in an analogous way as in \eqref{e3.19} - \eqref{e3.21}, we obtain the following estimates: \begin{equation}\label{e3.31} \begin{array}{ll} \ &\Vert c_{\epsilon\tau} \Vert_{H^1}^2 + \Vert c_{\epsilon} \Vert_{H^1}^2 + \Vert n_{\epsilon} \Vert_{H^1}^2 +\Vert n_{\epsilon} \Vert_{L^2}^2 \le C \left( \Vert h \Vert_{L^2}^2 + \Vert h_{\tau} \Vert_{L^2}^2 +\Vert l \Vert_{L^2}^2 \right),
\end{array} \end{equation} where the constant $C$ is independent of $\epsilon$. We obtain from \eqref{e3.31} that $(c_{\epsilon\tau},\ c_{\epsilon},\ n_{\epsilon})$ is bounded in $H^1(\partial \Omega) \times H^1( \Omega) \times H^1(\Omega),$ uniformly w.r.t. $ \epsilon$. Since $H^1(\partial \Omega)$ and $H^1(\Omega)$ are compactly embedded in $L^2(\partial \Omega)$ and $L^2(\Omega)$ respectively, there exist $(c_{\tau},\ c,\ n) \in H^1(\partial \Omega) \times H^1( \Omega) \times H^1(\Omega) $ and a subsequence of $(c_{\epsilon\tau},\ c_{\epsilon},\ n_{\epsilon})$, still denoted as $(c_{\epsilon\tau},\ c_{\epsilon},\ n_{\epsilon})$, s.t. as $\epsilon \rightarrow 0^{+}$, we have \begin{equation}\label{e3.32} \begin{array}{ll} \ & (c_{\epsilon\tau},\ c_{\epsilon},\ n_{\epsilon}) \rightharpoonup (c_{\tau},\ c,\ n) \ \mbox{weakly in} \ H^1(\partial \Omega)\times H^1(\Omega) \times H^1(\Omega),\\ \ & (c_{\epsilon\tau},\ c_{\epsilon},\ n_{\epsilon}) \rightarrow (c_{\tau},\ c,\ n) \ \mbox{strongly in} \ L^2(\partial \Omega)\times L^2(\Omega) \times L^2(\Omega). \end{array} \end{equation} From \eqref{e3.32}, together with \begin{equation}\label{e3.33} \begin{array}{ll} \ & J_{\epsilon} \hat{u} \rightarrow \hat{u} \ \mbox{strongly in} \ L^2(\Omega), \end{array} \end{equation} we get \[ \begin{array}{ll} \ & J_{\epsilon} \hat{u} \cdot \nabla c_{\epsilon} \rightarrow \hat{u}\cdot \nabla c\ \mbox{in distribution,} \\ \ & J_{\epsilon} \hat{u} \cdot \nabla n_{\epsilon} \rightarrow \hat{u}\cdot \nabla n\ \mbox{in distribution.} \\ \end{array} \] We then proceed in an analogous way as in the proof of continuity of $\Phi$ in Lemma 3.3 to pass to the limit in \eqref{e3.3} by letting $\epsilon \rightarrow 0^{+}$ to obtain \eqref{e3.2}. The proof of Lemma 3.2 is completed.
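For the reader's convenience, the passage to the limit in the convective terms above can be sketched via the standard strong--weak splitting (for any test function $\varphi\in C_0^{\infty}(\Omega)$; the term with $n_{\epsilon}$ is handled identically):
\[
\int_{\Omega} \left( J_{\epsilon} \hat{u}\cdot \nabla c_{\epsilon} - \hat{u}\cdot \nabla c \right)\varphi
= \int_{\Omega} (J_{\epsilon} \hat{u}-\hat{u})\cdot (\nabla c_{\epsilon})\,\varphi
+ \int_{\Omega} \hat{u}\cdot (\nabla c_{\epsilon}-\nabla c)\,\varphi ,
\]
where the first term tends to zero by \eqref{e3.33} and the uniform bound on $\Vert \nabla c_{\epsilon}\Vert_{L^2}$ from \eqref{e3.31}, and the second tends to zero by the weak convergence in \eqref{e3.32}.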
\subsection*{3.2 Proof of Theorem 3.1} For simplicity of notation, in the proof of Theorem 3.1, we omit the superscript ``$m$'', and write equation \eqref{e3.1} as follows: \begin{equation}\label{e3.34} \left\{ \begin{array}{lll} \ & -k\alpha\Delta c_j + k (u_j \cdot \nabla ) c_j + c_j = -k n_j f( c_j)+h\ &\mbox{in}\ \Omega,\\ \ &-k\nabla\cdot \left( \beta\nabla n_j-g (n_j,\ c_j) \nabla c_j \right) + k (u_j \cdot \nabla ) n_j + n_j = l\ &\mbox{in}\ \Omega,\\ \ &-k\nabla\cdot (\xi\nabla u_j) + k (u_j \cdot \nabla ) u_j + k \nabla P_j + u_j=k n_j\nabla \sigma + q\ &\mbox{in}\ \Omega,\\ \ &\nabla\cdot u_j = 0\ &\mbox{in}\ \Omega,\\ \ & k\partial_{\eta} c_j = \frac{k}{b} \Delta_{\tau} c_{j_\tau} -\frac{1}{b} c_{j_\tau}+ \frac{1}{b} h_{\tau}\ &\mbox{on}\ \partial\Omega, \\ \ & \beta \partial_{\eta}n_j = g(n_j,\ c_j) \partial_{\eta}c_j\ &\mbox{on}\ \partial\Omega,\\ \ &u_j=0 \ &\mbox{on}\ \partial\Omega. \end{array}\right. \end{equation} Recall that $u_j\in V_j$ is the Galerkin approximation of $u^m$ for a fixed $m$. We define an operator $L:\ \ V_j \rightarrow V_j$ as follows: For a fixed $\hat{u}_j\in V_j,\ $ we set $L(\hat{u}_j)=u_j,\ $ where $u_j$ is the solution to the following problem: \begin{eqnarray} &\ & -k\alpha\Delta c_j + k (\hat{u}_j \cdot \nabla ) c_j + c_j = -k n_j f( c_j)+h, \label{e3.35}\\ &\ &-k\nabla\cdot \left( \beta\nabla n_j - g (n_j,\ c_j) \nabla c_j \right) + k (\hat{u}_j\cdot \nabla ) n_j + n_j = l, \label{e3.36}\\ &\ &-k\nabla\cdot (\xi\nabla u_j) + k (u_j \cdot \nabla ) u_j + k \nabla P_j + u_j=k n_j\nabla \sigma + q, \label{e3.37}\\ &\ &\nabla\cdot u_j = 0,\label{e3.38} \end{eqnarray} with boundary conditions: \begin{eqnarray} &\ & k\partial_{\eta} c_j = \frac{k}{b} \Delta_{\tau} c_{j_\tau} -\frac{1}{b} c_{j_\tau}+ \frac{1}{b} h_{\tau}, \label{e3.39} \\ &\ & \beta \partial_{\eta}n_j = g(n_j,\ c_j) \partial_{\eta}c_j, \label{e3.40} \\ &\ &u_j=0.
\label{e3.41} \end{eqnarray} Let $\hat{u}_j$ belong to a bounded set in $V_j$. We fix $\hat{u}_j$ in \eqref{e3.35} - \eqref{e3.36} and solve \eqref{e3.35}, \eqref{e3.36}, \eqref{e3.39}, \eqref{e3.40} for $(c_j,\ n_j)$. The existence of the solution $(c_j,\ n_j)$ is proved in Lemma 3.2, and we have the estimate of $n_j$: \begin{equation}\label{e3.42} \Vert n_j \Vert_{L^2}^2 \le C ( \Vert h \Vert_{L^2}^2 + \Vert h_{\tau} \Vert_{L^2}^2 +\Vert l \Vert_{L^2}^2). \end{equation} We then use this $n_j$ in equation \eqref{e3.37} and proceed to solve equations \eqref{e3.37}, \eqref{e3.38} and \eqref{e3.41} for $u_j$; since $u_j\in V_j$, which is finite-dimensional, the existence of this $u_j$ can be proved in an analogous way as in \cite[p. 164]{tem}. The operator $L$ then maps $\hat{u}_j$ to $u_j$, i.e. $u_j = L(\hat{u}_j).$ Next, we show that $L$ has a fixed point. To this end, we multiply equation \eqref{e3.37} by $u_j$, integrate over $\Omega$, and use $\int_{\Omega} (u_j\cdot \nabla )u_j\cdot u_j = 0$ to get \[ \begin{array}{ll} \ & k\xi \Vert \nabla u_j \Vert_{L^2}^2 + \Vert u_j \Vert_{L^2}^2 = k \int_{\Omega} n_j \nabla \sigma u_j + \int_{\Omega} q u_j. \end{array} \] Using Young's inequality for the terms on the right-hand side of the above equation, together with \eqref{e3.42}, we obtain \begin{equation}\label{e3.43} \begin{array}{ll} \ & k\xi \Vert \nabla u_j \Vert_{L^2}^2 + ( 1-kC\delta- \delta )\Vert u_j \Vert_{L^2}^2\\ \ &\hspace{1cm} \le \frac{kC}{4\delta} \Vert n_j \Vert_{L^2}^2 + \frac{1}{4\delta} \Vert q \Vert_{L^2}^2\\ \ &\hspace{1cm} \le \frac{kC}{4\delta}( \Vert h \Vert_{L^2}^2 + \Vert h_{\tau} \Vert_{L^2}^2 +\Vert l \Vert_{L^2}^2) + \frac{1}{4\delta} \Vert q \Vert_{L^2}^2. \end{array} \end{equation} By choosing $\delta$ small enough, from the above inequality, we see that $ u_j = L(\hat{u}_j) $ is bounded in $V_j$. Therefore, $L$ maps a bounded set in $V_j$ to a bounded set in $V_j$.
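The cancellation $\int_{\Omega} (u_j\cdot \nabla )u_j\cdot u_j = 0$ used above can be seen from the following sketch, valid since $\nabla\cdot u_j = 0$ in $\Omega$ and $u_j = 0$ on $\partial\Omega$:
\[
\int_{\Omega} (u_j\cdot \nabla )u_j\cdot u_j
= \frac{1}{2}\int_{\Omega} u_j\cdot \nabla \vert u_j \vert^2
= -\frac{1}{2}\int_{\Omega} (\nabla\cdot u_j)\, \vert u_j \vert^2
+ \frac{1}{2}\int_{\partial\Omega} \vert u_j \vert^2\, u_j\cdot \eta\, d\sigma
= 0.
\]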
Since $V_j$ is finite dimensional space, we can use Brouwer fixed-point theorem to conclude that $L$ has a fixed point, which is the solution of \eqref{e3.34}. This completes the proof of Theorem 3.1. 3.3 Proof of Theorem 2.3 With Theorem 3.1, we have shown the existence results for \eqref{e3.1} with a fixed ``$m$". Now we multiply $\eqref{e3.34}_1$ by $c_j$, $\eqref{e3.34}_2$ by $n_j$, $\eqref{e3.34}_3$ by $u_j$, integrate over $\Omega$, proceed in an analogous way as in \eqref{e3.19} - \eqref{e3.21}, \eqref{e3.43}, we see that $(c_j,\ n_j,\ u_j)$ of \eqref{e3.34} is bounded in $\left(H^1(\Omega)\right)^2 \times V$, uniformly with respect to $j$. As a result, as $j\rightarrow +\infty$, we have \[ \begin{array}{ll} \ & (c_j,\ n_j) \rightharpoonup (c,\ n) \ \mbox{weakly in} \ H^1 \times H^1,\\ \ & (c_j,\ n_j) \rightarrow (c,\ n) \ \mbox{strongly in} \ L^2 \times L^2,\\ \ & u_j \rightharpoonup u \ \mbox{weakly in} \ V,\\ \ & u_j \rightarrow u \ \mbox{strongly in} \ H. \end{array} \] We can pass the limit (letting $j\rightarrow +\infty$) in \eqref{e3.34}. This finish the proof of Theorem 2.3. \section{Proof of Theorem 1.1} We first derive aprori estimates for $(c^m,\ n^m, \ u^m)$. To this end, we multiply $\eqref{e2.3}_1$ by $c^m$, $\eqref{e2.3}_2$ by $n^m$, $\eqref{e2.3}_3$ by $u^m$, and integrate over $\Omega$, we arrive at, \[ \begin{array}{ll} \ & \frac{1}{k}\int_{\Omega} c^m (c^m-c^{m-1}) - \alpha \int_{\Omega} c^m \Delta c^m + \int_{\Omega} c^m n^m f(c^m) = 0,\\ \ & \frac{1}{k}\int_{\Omega} n^m (n^m-n^{m-1}) - \int_{\Omega} n^m \nabla \cdot\left(\beta \nabla n^m -g(n^m,\ c^m) \nabla c^m \right)=0,\\ \ & \frac{1}{k}\int_{\Omega} u^m (u^m-u^{m-1}) - \int_{\Omega} u^m \nabla \cdot ( \xi \nabla u^m)= \int_{\Omega} u^m n^m \nabla \sigma. 
\end{array} \] Integrating by parts and using the boundary conditions $\eqref{e2.3}_{5,6,7}$, we have \begin{eqnarray} \frac{1}{k} \int_{\Omega} c^m (c^m-c^{m-1}) +\alpha \Vert \nabla c^m \Vert _{L^2}^2 +\frac{\alpha}{b} \Vert \nabla_{\tau} c^m \Vert _{L^2}^2 +\frac{\alpha}{bk} \int_{\partial\Omega} c_{\tau} ^m (c_{\tau} ^m-c_{\tau} ^{m-1})\nonumber \\ = - \int_{\Omega} c^m n^m f(c^m),\label{e4.1}\\ \frac{1}{k} \int_{\Omega} n^m (n^m-n^{m-1})+\beta \Vert \nabla n^m \Vert _{L^2}^2 - \int_{\Omega} g(n^m,\ c^m) \nabla c^m\nabla n^m=0,\label{e4.2}\\ \frac{1}{k} \int_{\Omega} u^m (u^m-u^{m-1})+\xi \Vert \nabla u^m \Vert _{L^2}^2=\int_{\Omega} u^m n^m \nabla \sigma. \label{e4.3} \end{eqnarray} Using relation (2.1) in equations \eqref{e4.1} and \eqref{e4.2}, we get \begin{equation}\label{e4.4} \begin{array}{ll} \ & \frac{1}{2k}( \Vert c^m \Vert _{L^2}^2 - \Vert c^{m-1} \Vert _{L^2}^2 + \Vert c^m-c^{m-1} \Vert _{L^2}^2 )+ \alpha \Vert \nabla c^m \Vert _{L^2}^2 +\frac{\alpha}{b} \Vert \nabla_{\tau} c^m \Vert _{L^2}^2 \\ \ & \hspace{0.5cm} +\frac{\alpha}{2bk} ( \Vert c_{\tau}^m \Vert _{L^2}^2 - \Vert c_{\tau}^{m-1} \Vert _{L^2}^2 + \Vert c_{\tau}^m-c_{\tau}^{m-1} \Vert _{L^2}^2 )\\ \ & \hspace{1.5cm} \le \frac{f_1}{2} ( \Vert c^m \Vert _{L^2}^2 + \Vert n^m \Vert _{L^2}^2 ). \end{array} \end{equation} \begin{equation}\label{e4.5} \begin{array}{ll} \ & \frac{1}{2k}( \Vert n^m \Vert _{L^2}^2 - \Vert n^{m-1} \Vert _{L^2}^2 + \Vert n^m-n^{m-1} \Vert _{L^2}^2 )+ \beta \Vert \nabla n^m \Vert _{L^2}^2\\ \ & \hspace{1.5cm} \le g_1 ( \frac{1}{4\delta} \Vert \nabla c^m \Vert _{L^2}^2 + \delta \Vert \nabla n^m \Vert _{L^2}^2 ).
\end{array} \end{equation} Multiplying \eqref{e4.4} by $\frac{g_1}{4\delta}$ and \eqref{e4.5} by $\alpha$, and adding the resulting equations, we get \begin{equation}\label{e4.6} \begin{array}{ll} \ & \frac{g_1}{8k\delta}( \Vert c^m \Vert _{L^2}^2 - \Vert c^{m-1} \Vert _{L^2}^2 + \Vert c^m-c^{m-1} \Vert _{L^2}^2 )+ \frac{g_1\alpha}{4b\delta} \Vert \nabla_{\tau} c^m \Vert _{L^2}^2\\ \ & \hspace{0.5cm} +\frac{g_1\alpha}{8kb\delta} ( \Vert c_{\tau}^m \Vert _{L^2}^2 - \Vert c_{\tau}^{m-1} \Vert _{L^2}^2 + \Vert c_{\tau}^m-c_{\tau}^{m-1} \Vert _{L^2}^2 ) \\ \ & \hspace{0.5cm} + \frac{\alpha}{2k}( \Vert n^m \Vert _{L^2}^2 - \Vert n^{m-1} \Vert _{L^2}^2 + \Vert n^m-n^{m-1} \Vert _{L^2}^2 ) + (\alpha \beta - g_1\alpha\delta)\Vert \nabla n^m \Vert _{L^2}^2\\ \ & \hspace{1.5cm} \le \frac{g_1}{4\delta} \frac{f_1}{2} ( \Vert c^m \Vert _{L^2}^2 + \Vert n^m \Vert _{L^2}^2 ). \end{array} \end{equation} Choosing $\delta$ such that $\delta < \frac{\beta}{g_1}$ and multiplying \eqref{e4.6} by $\frac{8k\delta}{g_1}$, we get \begin{equation}\label{e4.7} \begin{array}{ll} \ & ( \Vert c^m \Vert _{L^2}^2 - \Vert c^{m-1} \Vert _{L^2}^2 + \Vert c^m-c^{m-1} \Vert _{L^2}^2 )+ \frac{2k\alpha}{b} \Vert \nabla_{\tau} c^m \Vert _{L^2}^2\\ \ & \hspace{0.5cm} +\frac{\alpha}{b} ( \Vert c_{\tau}^m \Vert _{L^2}^2 - \Vert c_{\tau}^{m-1} \Vert _{L^2}^2 + \Vert c_{\tau}^m-c_{\tau}^{m-1} \Vert _{L^2}^2 ) \\ \ & \hspace{0.5cm} + \frac{4\delta\alpha}{g_1}( \Vert n^m \Vert _{L^2}^2 - \Vert n^{m-1} \Vert _{L^2}^2 + \Vert n^m-n^{m-1} \Vert _{L^2}^2 ) + \frac{8k\delta}{g_1} (\alpha \beta - g_1\alpha\delta)\Vert \nabla n^m \Vert _{L^2}^2\\ \ & \hspace{1.5cm} \le k f_1 ( \Vert c^m \Vert _{L^2}^2 + \Vert n^m \Vert _{L^2}^2 ).
\end{array} \end{equation} Let $\tilde\alpha := \min\{ \frac{\alpha}{b},\ \frac{4\delta\alpha}{g_1},\ \frac{8\delta}{g_1} (\alpha \beta - g_1\alpha\delta) \}$; then from \eqref{e4.7}, we have \[ \begin{array}{ll} \ & ( \Vert c^m \Vert _{L^2}^2 - \Vert c^{m-1} \Vert _{L^2}^2 + \Vert c^m-c^{m-1} \Vert _{L^2}^2 )+ k\tilde\alpha \Vert \nabla_{\tau} c^m \Vert _{L^2}^2\\ \ & \hspace{0.5cm} +\tilde\alpha ( \Vert c_{\tau}^m \Vert _{L^2}^2 - \Vert c_{\tau}^{m-1} \Vert _{L^2}^2 + \Vert c_{\tau}^m-c_{\tau}^{m-1} \Vert _{L^2}^2 ) \\ \ & \hspace{0.5cm} + \tilde\alpha( \Vert n^m \Vert _{L^2}^2 - \Vert n^{m-1} \Vert _{L^2}^2 + \Vert n^m-n^{m-1} \Vert _{L^2}^2 ) + \tilde\alpha k\Vert \nabla n^m \Vert _{L^2}^2\\ \ & \hspace{1.5cm} \le k f_1 ( \Vert c^m \Vert _{L^2}^2 + \Vert n^m \Vert _{L^2}^2 ). \end{array} \] Summing the above inequality for $m=1,\ 2,\ 3,\ \cdots ,\ r,\quad 1\le r \le N,\ $ we find \[ \begin{array}{ll} \ & \Vert c^r \Vert _{L^2}^2 + \sum_{m=1}^r \Vert c^m-c^{m-1} \Vert _{L^2}^2 + k \tilde{\alpha} \sum_{m=1}^r \Vert \nabla_{\tau} c^m \Vert _{L^2}^2\\ \ & \hspace{0.5cm} + \tilde{\alpha} ( \Vert c_{\tau}^r \Vert _{L^2}^2+ \sum_{m=1}^r \Vert c_{\tau}^m-c_{\tau}^{m-1} \Vert _{L^2}^2 ) \\ \ & \hspace{0.5cm} + \tilde{\alpha}( \Vert n^r \Vert _{L^2}^2 + \sum_{m=1}^r \Vert n^m-n^{m-1} \Vert _{L^2}^2 ) + \tilde{\alpha} k \sum_{m=1}^r \Vert \nabla n^m \Vert _{L^2}^2\\ \ & \hspace{1.5cm} \le k f_1 \sum_{m=1}^r ( \Vert c^m \Vert _{L^2}^2 + \Vert n^m \Vert _{L^2}^2 ) + \Vert c^0 \Vert _{L^2}^2 + \tilde{\alpha} \Vert c_{\tau}^0 \Vert _{L^2}^2 + \tilde{\alpha}\Vert n^0 \Vert _{L^2}^2.
\end{array} \] From here, we derive the following inequality: \begin{equation}\label{e4.8} \begin{array}{ll} \ & \Vert c^r \Vert _{L^2}^2 + \sum_{m=1}^r \Vert c^m-c^{m-1} \Vert _{L^2}^2 + k \sum_{m=1}^r \Vert \nabla_{\tau} c^m \Vert _{L^2}^2\\ \ & \hspace{0.5cm} + \Vert c_{\tau}^r \Vert _{L^2}^2+ \sum_{m=1}^r \Vert c_{\tau}^m-c_{\tau}^{m-1} \Vert _{L^2}^2 \\ \ & \hspace{0.5cm} + \Vert n^r \Vert _{L^2}^2 + \sum_{m=1}^r \Vert n^m-n^{m-1} \Vert _{L^2}^2 + k \sum_{m=1}^r \Vert \nabla n^m \Vert _{L^2}^2\\ \ & \hspace{0.5cm} \le M( \Vert c^0 \Vert _{L^2}^2 + \Vert c_{\tau}^0 \Vert _{L^2}^2 +\Vert n^0 \Vert _{L^2}^2) +Mk \sum_{m=1}^r ( \Vert c^m \Vert _{L^2}^2 + \Vert n^m \Vert _{L^2}^2 ). \end{array} \end{equation} Using Lemma 2.1 (discrete Gronwall's Lemma), we have \begin{equation}\label{e4.9} \begin{array}{ll} \ & \max_{1 \le r \le N} \Vert c^r \Vert _{L^2}^2 + \max_{1 \le r \le N} \Vert n^r \Vert _{L^2}^2\\ \ & \hspace{0.5cm}\le M ( \Vert c^0 \Vert _{L^2}^2 + \Vert c_{\tau}^0 \Vert _{L^2}^2 +\Vert n^0 \Vert _{L^2}^2). \end{array} \end{equation} Moreover, \[ \begin{array}{ll} \ & k \sum_{m=1}^r ( \Vert \nabla_{\tau} c^m \Vert _{L^2}^2 + \Vert \nabla n^m \Vert _{L^2}^2) \\ \ & \hspace{0.5cm}\le M ( \Vert c^0 \Vert _{L^2}^2 + \Vert c_{\tau}^0 \Vert _{L^2}^2 +\Vert n^0 \Vert _{L^2}^2) +Mkr\left( M ( \Vert c^0 \Vert _{L^2}^2 + \Vert c_{\tau}^0 \Vert _{L^2}^2 +\Vert n^0 \Vert _{L^2}^2) \right),\\ \ & \hspace{0.5cm}\le (M+M^2 T)( \Vert c^0 \Vert _{L^2}^2 + \Vert c_{\tau}^0 \Vert _{L^2}^2 +\Vert n^0 \Vert _{L^2}^2). \end{array} \] Writing $M+M^2 T$ again as $M$, we have \begin{equation}\label{e4.10} \begin{array}{ll} \ & k \sum_{m=1}^r ( \Vert \nabla_{\tau} c^m \Vert _{L^2}^2 + \Vert \nabla n^m \Vert _{L^2}^2) \\ \ & \hspace{0.5cm}\le M( \Vert c^0 \Vert _{L^2}^2 + \Vert c_{\tau}^0 \Vert _{L^2}^2 +\Vert n^0 \Vert _{L^2}^2).
\end{array} \end{equation} Similarly, we have \begin{equation}\label{e4.11} \begin{array}{ll} \ & \sum_{m=1}^r (\Vert c^m-c^{m-1} \Vert _{L^2}^2 + \Vert c_{\tau}^m-c_{\tau}^{m-1} \Vert _{L^2}^2 + \Vert n^m-n^{m-1} \Vert _{L^2}^2 )\\ \ & \hspace{0.5cm}\le M( \Vert c^0 \Vert _{L^2}^2 + \Vert c_{\tau}^0 \Vert _{L^2}^2 +\Vert n^0 \Vert _{L^2}^2). \end{array} \end{equation} \begin{equation}\label{e4.12} \max_{1 \le r \le N} \Vert c_{\tau}^r \Vert _{L^2}^2 \le M( \Vert c^0 \Vert _{L^2}^2 + \Vert c_{\tau}^0 \Vert _{L^2}^2 +\Vert n^0 \Vert _{L^2}^2). \end{equation} \begin{equation}\label{e4.13} \sum_{m=1}^r \Vert c_{\tau}^m - c_{\tau}^{m-1} \Vert _{L^2}^2 \le M( \Vert c^0 \Vert _{L^2}^2 + \Vert c_{\tau}^0 \Vert _{L^2}^2 +\Vert n^0 \Vert _{L^2}^2). \end{equation} For the estimate of $u^m$, we proceed in an analogous way, and obtain the following inequality from \eqref{e4.3}. \[ \begin{array}{ll} \ & \Vert u^m \Vert _{L^2}^2 - \Vert u^{m-1} \Vert _{L^2}^2 + \Vert u^m - u^{m-1} \Vert _{L^2}^2 + 2k\xi \Vert \nabla u^m \Vert _{L^2}^2\\ \ & \hspace{0.5cm}\le kC( \Vert u^m \Vert _{L^2}^2 + \Vert n^m \Vert _{L^2}^2 ). \end{array} \] Summing the above inequality for $m=1,\ 2,\ 3,\ \cdots ,\ r,\quad 1\le r \le N,\ $ we find \[ \begin{array}{ll} \ & \Vert u^r \Vert _{L^2}^2 + \sum_{m=1}^r \Vert u^m-u^{m-1} \Vert _{L^2}^2 + 2k\xi \sum_{m=1}^r \Vert \nabla u^m \Vert _{L^2}^2\\ \ & \hspace{0.5cm}\le kC \sum_{m=1}^r ( \Vert u^m \Vert _{L^2}^2 + \Vert n^m \Vert _{L^2}^2 ) + \Vert u^0 \Vert _{L^2}^2\\ \ & \hspace{0.5cm}\le k C \sum_{m=1}^r \Vert u^m \Vert _{L^2}^2 + krC \max _ {1\le m \le r} \Vert n^m \Vert _{L^2}^2 + \Vert u^0 \Vert _{L^2}^2\\ \ & \hspace{0.5cm}\le M(\Vert u^0 \Vert _{L^2}^2+ \Vert c^0 \Vert _{L^2}^2 + \Vert c_{\tau}^0 \Vert _{L^2}^2 +\Vert n^0 \Vert _{L^2}^2) + Mk \sum_{m=1}^r \Vert u^m \Vert _{L^2}^2 . 
\end{array} \] By Lemma 2.1 (discrete Gronwall's Lemma), we have \begin{equation}\label{e4.14} \max_{1 \le r \le N} \Vert u^r \Vert _{L^2}^2 \le M( \Vert c^0 \Vert _{L^2}^2 + \Vert c_{\tau}^0 \Vert _{L^2}^2 +\Vert n^0 \Vert _{L^2}^2 + \Vert u^0 \Vert _{L^2}^2). \end{equation} \begin{equation}\label{e4.15} \sum_{m=1}^r \Vert u^m - u^{m-1} \Vert _{L^2}^2 \le M( \Vert c^0 \Vert _{L^2}^2 + \Vert c_{\tau}^0 \Vert _{L^2}^2 +\Vert n^0 \Vert _{L^2}^2 + \Vert u^0 \Vert _{L^2}^2). \end{equation} \begin{equation}\label{e4.16} k\sum_{m=1}^r \Vert \nabla u^m \Vert _{L^2}^2 \le M( \Vert c^0 \Vert _{L^2}^2 + \Vert c_{\tau}^0 \Vert _{L^2}^2 +\Vert n^0 \Vert _{L^2}^2 + \Vert u^0 \Vert _{L^2}^2). \end{equation} From \eqref{e4.9} - \eqref{e4.16}, we obtain the following estimates for Rothe functions and step functions: \begin{equation}\label{e4.17} \begin{array}{ll} \ & \Vert (c_k,\ n_k) \Vert_{\left( L^2(0,\ T;\ L^2(\Omega)) \right)^2} \le M; \hspace{0.5cm} \Vert (c_k,\ n_k) \Vert_{\left( L^2(0,\ T;\ H^1(\Omega)) \right)^2} \le M ,\\ \ & \Vert u_k \Vert_{L^2(0,\ T;\ H)} \le M; \hspace{0.5cm} \Vert u_k \Vert_{L^2(0,\ T;\ V)} \le M ,\\ \ & \Vert (\tilde{c}_k,\ \tilde{n}_k) \Vert_{\left( L^2(0,\ T;\ L^2(\Omega)) \right)^2} \le M; \hspace{0.5cm} \Vert (\tilde{c}_k,\ \tilde{n}_k) \Vert_{\left( L^2(0,\ T;\ H^1(\Omega)) \right)^2} \le M ,\\ \ & \Vert \tilde{u}_k \Vert_{L^2(0,\ T;\ H)} \le M; \hspace{0.5cm} \Vert \tilde{u}_k \Vert_{L^2(0,\ T;\ V)} \le M,\\ \ & \Vert c_{k\tau} \Vert_{L^2(0,\ T;\ L^2(\partial\Omega)) } \le M; \hspace{0.5cm} \Vert c_{k\tau} \Vert_{L^2(0,\ T;\ H^1(\partial\Omega)) } \le M,\\ \ & \Vert \tilde{c}_{k\tau} \Vert_{L^2(0,\ T;\ L^2(\partial\Omega)) } \le M; \hspace{0.5cm} \Vert \tilde{c}_{k\tau} \Vert_{L^2(0,\ T; \ H^1(\partial\Omega)) } \le M,\\ \ & \Vert (\tilde{c}_k - c_k,\ \tilde{n}_k - n_k) \Vert_{\left( L^2(0,\ T;\ L^2(\Omega)) \right)^2} \le M k; \hspace{0.5cm} \Vert \tilde{u}_k - u_k \Vert_{ L^2(0,\ T;\ H) } \le M k. 
\end{array} \end{equation} From the above estimates of the Rothe functions and step functions, we conclude that there exist subsequences, still denoted as $(c_k,\ c_{k\tau},\ n_k,\ u_k),\ (\tilde{c}_k,\ \tilde{c}_{k\tau},\ \tilde{n}_k,\ \tilde{u}_k)$, such that as $k\rightarrow 0$ we have \begin{equation}\label{e4.18} \begin{array}{ll} \ & (c_k,\ c_{k\tau},\ n_k) \rightharpoonup (c,\ c_{\tau},\ n);\ (\tilde{c}_k,\ \tilde{c}_{k\tau},\ \tilde{n}_k) \rightharpoonup (\tilde{c},\ \tilde{c}_{\tau},\ \tilde{n}) \ \mbox{weakly in} \ L^2 (0,\ T;\ L^2(\Omega)),\\ \ & (c_k,\ c_{k\tau},\ n_k) \rightharpoonup (c,\ c_{\tau},\ n);\ (\tilde{c}_k,\ \tilde{c}_{k\tau},\ \tilde{n}_k) \rightharpoonup (\tilde{c},\ \tilde{c}_{\tau},\ \tilde{n}) \ \mbox{weakly in} \ L^2 (0,\ T;\ H^1(\Omega)),\\ \ & u_k \rightharpoonup u;\ \tilde{u}_k \rightharpoonup \tilde{u}, \ \mbox{weakly in} \ L^2 (0,\ T;\ H),\\ \ & u_k \rightharpoonup u;\ \tilde{u}_k \rightharpoonup \tilde{u}, \ \mbox{weakly in} \ L^2 (0,\ T;\ V),\\ \ & (c,\ c_{\tau},\ n,\ u) = (\tilde{c},\ \tilde{c}_{\tau},\ \tilde{n},\ \tilde{u})\ \mbox{almost everywhere.} \end{array} \end{equation} Next, we want to show there exist subsequences of $(c_k,\ n_k,\ u_k)$, still denoted as $(c_k,\ n_k,\ u_k)$, such that as $k\rightarrow 0$ we have \begin{equation}\label{e4.19} \begin{array}{ll} \ & (c_k,\ n_k) \rightarrow (c,\ n)\ \mbox{strongly in} \ L^2 (0,\ T;\ L^2(\Omega)),\\ \ & u_k \rightarrow u\ \mbox{strongly in} \ L^2 (0,\ T;\ H). \end{array} \end{equation} To this end, we apply Lemma 2.2, set $Y=\left( H^1(\Omega) \right)^2\times V,\ X= \left( L^2(\Omega) \right)^2\times H,\ $ and set $p=2.\ $ The embedding of $Y$ into $X$ is compact. Let $G$ be the family of functions $\ (\tilde{c}_k,\ \tilde{n}_k,\ \tilde{u}_k)$. Then $G$ is bounded in $L^2 (0,\ T;\ Y)\ $ and $L^2 (0,\ T;\ X)\ $ due to \eqref{e4.17}. Assuming $(c^0,\ n^0,\ u^0) \in \left(H^1 \right)^2\times V$, we want to show that (2.2) in Lemma 2.2 holds.
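Assuming (2.2) is the standard translation-equicontinuity condition of compactness lemmas of Aubin--Lions/Simon type (a paraphrase, with our notation and $p=2$), the condition to be verified reads
\[
\sup_{\zeta\in G}\ \int_0^{T-a} \Vert \zeta(t+a)-\zeta(t) \Vert_{X}^{2}\, dt \ \longrightarrow\ 0
\quad \mbox{as}\ a \rightarrow 0^{+},
\]
uniformly over the family $G$; this is exactly what \eqref{e4.22} - \eqref{e4.24} establish.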
For this, we rewrite \eqref{e2.3} using the Rothe functions $(\tilde{c}_k,\ \tilde{c}_{k\tau},\ \tilde{n}_k,\ \tilde{u}_k)\ $ and step functions $ (c_k,\ c_{k\tau},\ n_k,\ u_k)$, multiply $\eqref{e2.3}_1$ by $\phi_1$, $\eqref{e2.3}_2$ by $\phi_2$, $\eqref{e2.3}_3$ by $\phi_3$, where $\phi_1, \ \phi_2$ are any functions in $H^1(\Omega)$ and $\phi_3$ is any function in $V$, and integrate by parts using the boundary conditions, to obtain \begin{equation}\label{e4.20} \begin{array}{ll} \ &\int_{\Omega} \partial_t \tilde{c}_k(t)\phi_1 dx + \alpha \int_{\Omega} \nabla c_k(t) \nabla\phi_1 dx + \frac{\alpha}{b} \int_{\partial\Omega} \partial_t \tilde{c}_{k\tau}(t)\phi_{1\tau} d\sigma +\\ \ &\quad \frac{\alpha}{b} \int_{\partial\Omega} \nabla_{\tau} c_k(t) \nabla_{\tau}\phi_1 d\sigma + \int_{\Omega} u_k(t)\nabla c_k(t) \phi_1 dx= \int_{\Omega} -n_k(t) f(c_k(t))\phi_1 dx,\\ \ & \int_{\Omega} \partial_t \tilde{n}_k(t)\phi_2 dx + \int_{\Omega} \left( \beta\nabla n_k(t) - g ( n_k(t),\ c_k(t) ) \nabla c_k(t) \right) \nabla\phi_2 dx \\ \ &\quad + \int_{\Omega} u_k(t)\nabla n_k(t) \phi_2 dx=0,\\ \ & \int_{\Omega} \partial_t \tilde{u}_k(t)\phi_3 dx + \int_{\Omega} \xi \nabla u_k(t) \nabla\phi_3 dx + \int_{\Omega} (u_k(t)\cdot\nabla) u_k(t) \phi_3 dx=\\ \ &\quad \int_{\Omega} n_k(t) \nabla \sigma \phi_3 dx.
\end{array} \end{equation} We then integrate \eqref{e4.20} between $t$ and $t+a$, for $t\in (0,\ T)$ and $a>0$, to get \begin{equation}\label{e4.21} \begin{array}{ll} \ & \int_{\Omega} (\tilde{c}_k(t+a) - \tilde{c}_k(t) ) \phi_1 dx + \frac{\alpha}{b} \int_{\partial\Omega} (\tilde{c}_{k\tau}(t+a) - \tilde{c}_{k\tau}(t))\phi_{1\tau} d\sigma = - \alpha \int_t^{t+a} \int_{\Omega} \nabla c_k(s) \nabla\phi_1 dx ds\\ \ &\quad - \frac{\alpha}{b} \int_t^{t+a} \int_{\partial\Omega} \nabla_{\tau} c_k(s) \nabla_{\tau}\phi_1 d\sigma ds -\int_t^{t+a} \int_{\Omega} u_k(s)\nabla c_k(s) \phi_1 dx ds - \int_t^{t+a} \int_{\Omega} n_k(s) f(c_k(s))\phi_1 dx ds,\\ \ & \int_{\Omega} (\tilde{n}_k(t+a) - \tilde{n}_k(t) ) \phi_2 dx= - \int_t^{t+a} \int_{\Omega} \left( \beta\nabla n_k(s) - g (n_k(s),\ c_k(s)) \nabla c_k(s) \right) \nabla\phi_2 dx ds \\ \ &\quad - \int_t^{t+a} \int_{\Omega} u_k(s)\nabla n_k(s) \phi_2 dx ds,\\ \ &\int_{\Omega} (\tilde{u}_k(t+a) - \tilde{u}_k(t) ) \phi_3 dx = - \int_t^{t+a} \int_{\Omega} \xi \nabla u_k(s) \nabla\phi_3 dx ds - \int_t^{t+a} \int_{\Omega} (u_k(s)\cdot\nabla) u_k(s) \phi_3 dx ds\\ \ &\quad + \int_t^{t+a} \int_{\Omega} n_k(s) \nabla \sigma \phi_3 dx ds. \end{array} \end{equation} In the above equations, let \[ \begin{array}{ll} \ & \phi_1 = \tilde{c}_k(t+a) - \tilde{c}_k(t),\\ \ & \phi_{1\tau} = \tilde{c}_{k\tau}(t+a) - \tilde{c}_{k\tau}(t),\\ \ & \phi_2 = \tilde{n}_k(t+a) - \tilde{n}_k(t),\\ \ & \phi_3 = \tilde{u}_k(t+a) - \tilde{u}_k(t).
\end{array} \] Then, integrating from $0$ to $T-a$, we get from $\eqref{e4.21}_1$ \[ \begin{array}{ll} \ & \int_0^{T-a} \Vert \tilde{c}_k(t+a) - \tilde{c}_k(t) \Vert_{L^2}^2 + \frac{\alpha}{b}\int_0^{T-a} \Vert \tilde{c}_{k\tau}(t+a) - \tilde{c}_{k\tau}(t) \Vert_{L^2}^2= I_1 + I_2 + I_3+I_4,\\ \end{array} \] where \[ \begin{array}{ll} \ & I_1= - \alpha\int_0^{T-a} \int_t^{t+a} \int_{\Omega} \nabla c_k(s) \nabla\phi_1 dx ds dt, \\ \ & I_2= - \frac{\alpha}{b} \int_0^{T-a} \int_t^{t+a} \int_{\partial\Omega} \nabla_{\tau} c_k(s) \nabla_{\tau}\phi_1 d\sigma ds dt,\\ \ & I_3= - \int_0^{T-a} \int_t^{t+a} \int_{\Omega} u_k(s)\nabla c_k(s) \phi_1 dx ds dt, \\ \ & I_4= - \int_0^{T-a} \int_t^{t+a} \int_{\Omega} n_k(s) f(c_k(s))\phi_1 dx ds dt. \end{array} \] Using H\"{o}lder's inequality, Fubini's Theorem, and \eqref{e4.17}, we have \[ \begin{array}{ll} \vert I_1 \vert & \le \alpha\int_0^{T-a} \int_t^{t+a} \Vert \nabla c_k(s) \Vert _{L^2} \Vert \nabla (\tilde{c}_k(t+a) - \tilde{c}_k(t)) \Vert _{L^2} ds dt\\ \ & = \alpha\int_0^{T-a} \Vert \nabla (\tilde{c}_k(t+a) - \tilde{c}_k(t)) \Vert _{L^2} \left( \int_t^{t+a} \Vert \nabla c_k(s) \Vert _{L^2} ds \right) dt\\ \ & \le a^{\frac{1}{2}} \left(\int_0^{T} \Vert \nabla c_k(s) \Vert _{L^2}^2 ds\right)^ \frac{1}{2} \alpha\int_0^{T-a} \Vert \nabla (\tilde{c}_k(t+a) - \tilde{c}_k(t)) \Vert _{L^2} dt\\ \ & \le a^{\frac{1}{2}} M \alpha {(T-a)}^{\frac{1}{2}} 2 \left( \int_0^{T} \Vert \nabla \tilde{c}_k(t) \Vert _{L^2}^2 dt \right)^{\frac{1}{2}} \\ \ & \le a^{\frac{1}{2}} M \alpha {T}^{\frac{1}{2}}.
\end{array} \] Then we have $I_1 \rightarrow 0$ as $a \rightarrow 0.$ Similarly, we have $I_2 \rightarrow 0$ as $a \rightarrow 0.$ For the estimate of $I_3$, by H\"{o}lder's inequality, Fubini's Theorem, the Sobolev embedding theorem, and \eqref{e4.17}, we have \[ \begin{array}{ll} \vert I_3 \vert & \le \int_0^{T-a}\int_t^{t+a}\Vert \ u_k(s) \Vert_{L^6} \Vert \nabla c_k(s) \Vert _{L^2} \Vert \tilde{c}_k(t+a) - \tilde{c}_k(t) \Vert _{L^3} dsdt\\ \ & \le \int_0^{T-a} 2 \Vert \nabla \tilde{c}_k(t) \Vert _{L^2} \left\{ \int_t^{t+a}\Vert u_k(s) \Vert_V \Vert \nabla c_k(s) \Vert _{L^2} ds\right\}dt\\ \ & \le \int_0^{T} \Vert u_k(s) \Vert _V \Vert \nabla c_k(s) \Vert _{L^2} \left( \int_{[s-a,\ s]\bigcap [0,\ T-a] } 2\Vert \nabla \tilde{c}_k(t) \Vert _{L^2} dt \right) ds\\ \ & \le \int_0^{T} \Vert \ u_k(s) \Vert _V \Vert \nabla c_k(s) \Vert _{L^2} \left\{ 2a^{\frac{1}{2}} \left( \int_0^T \Vert \nabla \tilde{c}_k(t) \Vert _{L^2}^2 dt \right)^{\frac{1}{2}} \right\} ds\\ \ & \le a^{\frac{1}{2}} M \int_0^{T} \Vert \ u_k(s) \Vert _V \Vert \nabla c_k(s) \Vert _{L^2} ds\\ \ & \le a^{\frac{1}{2}} M \left( \int_0^{T} \Vert \ u_k(s) \Vert _V^2 ds \right)^{\frac{1}{2}} \left( \int_0^{T} \Vert \nabla c_k(s) \Vert _{L^2}^2 ds\right)^ { \frac{1}{2} }\\ \ &\le a^{\frac{1}{2}} M. \end{array} \] Then we have $I_3 \rightarrow 0$ as $a \rightarrow 0.$ We now continue to estimate $\vert I_4 \vert:$ \[ \begin{array}{ll} \vert I_4 \vert & \le \int_0^{T-a} \int_t^{t+a} f_1 \Vert n_k(s) \Vert _{L^2} \Vert \tilde{c}_k(t+a) - \tilde{c}_k(t) \Vert _{L^2} ds dt\\ \ &= f_1 \int_0^{T-a} \Vert \tilde{c}_k(t+a) - \tilde{c}_k(t) \Vert _{L^2} \left(\int_t^{t+a}\Vert n_k(s) \Vert _{L^2} ds\right) dt\\ \ & \le f_1 a^{\frac{1}{2}} \left(\int_0^{T}\Vert n_k(s) \Vert _{L^2}^2 ds\right)^{\frac{1}{2}} \int_0^{T-a} \Vert \tilde{c}_k(t+a) - \tilde{c}_k(t) \Vert _{L^2} dt\\ \ & \le f_1 a^{\frac{1}{2}} {(T-a)}^{\frac{1}{2}} M.
\end{array} \] We have $I_4 \rightarrow 0$ as $a \rightarrow 0.$ Therefore, we have proved that \begin{equation}\label{e4.22} \begin{array}{ll} \ &\int_0^{T-a} \Vert \tilde{c}_k(t+a) - \tilde{c}_k(t) \Vert_{L^2}^2 \ \rightarrow 0 \quad \mbox{as}\ a \rightarrow 0. \end{array} \end{equation} Proceeding in an analogous way, we have \begin{eqnarray} &\ &\int_0^{T-a} \Vert \tilde{n}_k(t+a) - \tilde{n}_k(t) \Vert_{L^2}^2 \ \rightarrow 0 \quad \mbox{as}\ a \rightarrow 0. \label{e4.23}\\ &\ &\int_0^{T-a} \Vert \tilde{u}_k(t+a) - \tilde{u}_k(t) \Vert_{L^2}^2 \ \rightarrow 0 \quad \mbox{as}\ a \rightarrow 0.\label{e4.24} \end{eqnarray} With \eqref{e4.22}--\eqref{e4.24} and Lemma 2.2, we conclude that \eqref{e4.19} holds. Proceeding in the same way as in the derivation of \eqref{e3.28}--\eqref{e3.29}, we also have \begin{eqnarray} &\ & f(c_k) \rightarrow f(c) \quad \mbox{strongly in} \ L^2(0,\ T,\ L^2(\Omega)),\label{e4.26}\\ &\ & g(n_k,\ c_k) \rightarrow g(n,\ c) \quad \mbox{strongly in} \ L^2(0,\ T,\ L^2(\Omega)).\label{e4.27} \end{eqnarray} Next, we pass to the limit in \eqref{e4.20} by letting $k\rightarrow 0$; to this end, we consider $\phi_1,\ \phi_2\in C^{\infty}\bigcap H^1,\ \phi_3\in \Upsilon,$ and $(\Psi_1,\ \Psi_2,\ \Psi_3 ) \in {\left( C^1([0,\ T],\ R) \right)}^3,$ with $\Psi_1(T)=\Psi_2(T)=\Psi_3(T)=0.$ Multiplying $\eqref{e4.20}_1$ by $\Psi_1$, $\eqref{e4.20}_2$ by $\Psi_2$, and $\eqref{e4.20}_3$ by $\Psi_3$, integrating over $[0,\ T]$, and integrating by parts, we get \begin{equation}\label{e4.28} \begin{array}{ll} \ & -\int_0^T \int_{\Omega} \tilde{c}_k(t)\phi_1 \Psi_1'(t) dx dt + \alpha\int_0^T \int_{\Omega} \nabla c_k(t) \nabla\phi_1 \Psi_1(t) dx dt -\frac{\alpha}{b} \int_0^T \int_{\partial\Omega} \tilde{c}_{k\tau}(t)\phi_{1\tau} \Psi_1'(t) d\sigma dt \\ \ &\quad +\frac{\alpha}{b} \int_0^T \int_{\partial\Omega} \nabla_{\tau} c_k(t) \nabla_{\tau}\phi_1 \Psi_1(t) d\sigma dt + \int_0^T \int_{\Omega} u_k(t)\nabla c_k(t) \phi_1 \Psi_1(t) dx dt = \Psi_1(0) \int_{\Omega}
\tilde{c}_k(x,\ 0)\phi_1 dx \\ \ &\quad + \frac{\alpha}{b} \Psi_1(0) \int_{\partial\Omega} \tilde{c}_{k\tau}(x,\ 0)\phi_{1\tau} d\sigma - \int_0^T \int_{\Omega} n_k(t) f(c_k(t))\phi_1 \Psi_1(t) dx dt,\\ \ & -\int_0^T \int_{\Omega} \tilde{n}_k(t)\phi_2 \Psi_2'(t) dx dt + \int_0^T \int_{\Omega} \left( \beta\nabla n_k(t) - g (n_k(t),\ c_k(t)) \nabla c_k(t) \right) \nabla\phi_2 \Psi_2(t) dx dt\\ \ & \hspace{1cm} + \int_0^T \int_{\Omega} u_k(t)\nabla n_k(t) \phi_2 \Psi_2(t) dx dt = \Psi_2(0) \int_{\Omega} \tilde{n}_k(x,\ 0)\phi_2 dx,\\ \ & -\int_0^T \int_{\Omega} \tilde{u}_k(t)\phi_3 \Psi_3'(t) dx dt + \int_0^T \int_{\Omega} \xi \nabla u_k(t) \nabla\phi_3 \Psi_3(t) dx dt + \int_0^T \int_{\Omega} (u_k(t)\cdot\nabla) u_k(t) \phi_3 \Psi_3(t) dx dt\\ \ &\hspace{1cm}= \Psi_3(0) \int_{\Omega} \tilde{u}_k(x,\ 0)\phi_3 dx + \int_0^T \int_{\Omega} n_k(t) \nabla \sigma \phi_3 \Psi_3(t) dx dt. \end{array} \end{equation} Now we take the limit in \eqref{e4.28} by letting $k \rightarrow 0,\ $ and using \eqref{e4.18}, \eqref{e4.19}, \eqref{e4.26}, \eqref{e4.27}, we get \begin{equation}\label{e4.29} \begin{array}{ll} \ & -\int_0^T \int_{\Omega} c(t)\phi_1 \Psi_1'(t) dx dt + \alpha\int_0^T \int_{\Omega} \nabla c(t) \nabla\phi_1 \Psi_1(t) dx dt -\frac{\alpha}{b} \int_0^T \int_{\partial\Omega} c_{\tau}(t)\phi_{1\tau} \Psi_1'(t) d\sigma dt \\ \ & +\frac{\alpha}{b} \int_0^T \int_{\partial\Omega} \nabla_{\tau} c(t) \nabla_{\tau}\phi_1 \Psi_1(t) d\sigma dt + \int_0^T \int_{\Omega} u(t)\nabla c(t) \phi_1 \Psi_1(t) dx dt\\ \ & = \Psi_1(0) \int_{\Omega} c(x,\ 0)\phi_1 dx + \frac{\alpha}{b} \Psi_1(0) \int_{\partial\Omega} c_{\tau}(x,\ 0)\phi_{1\tau} d\sigma - \int_0^T \int_{\Omega} n(t) f(c(t))\phi_1 \Psi_1(t) dx dt,\\ \ & -\int_0^T \int_{\Omega} n(t)\phi_2 \Psi_2'(t) dx dt + \int_0^T \int_{\Omega} \left( \beta\nabla n(t) - g (n(t),\ c(t)) \nabla c(t) \right) \nabla\phi_2 \Psi_2(t) dx dt\\ \ & + \int_0^T \int_{\Omega} u(t)\nabla n(t) \phi_2 \Psi_2(t) dx dt = \Psi_2(0) \int_{\Omega} n(x,\ 0)\phi_2 
dx,\\ \ &-\int_0^T \int_{\Omega} u(t)\phi_3 \Psi_3'(t) dx dt + \int_0^T \int_{\Omega} \xi \nabla u(t) \nabla\phi_3 \Psi_3(t) dx dt + \int_0^T \int_{\Omega} (u(t)\cdot\nabla) u(t) \phi_3 \Psi_3(t) dx dt\\ \ &= \Psi_3(0) \int_{\Omega} u(x,\ 0)\phi_3 dx + \int_0^T \int_{\Omega} n(t) \nabla \sigma\phi_3 \Psi_3(t) dx dt. \end{array} \end{equation} Since \eqref{e4.29} holds for any $\phi_1,\ \phi_2\in C^{\infty}\bigcap H^1,\ \phi_3\in \Upsilon,$ by continuity it holds for any $\phi_1,\ \phi_2\in H^1,\ \phi_3\in V$. By choosing $(\Psi_1,\ \Psi_2,\ \Psi_3 ) \in {\left( C_0^{\infty}[0,\ T] \right)}^3,\ $ we conclude that (1.6) holds in the weak sense on $(0,\ T).$ By a standard argument, we have $ (c(0),\ n(0),\ u(0)) = (c_0,\ n_0,\ u_0).$ This completes the proof of Theorem 1.1. \end{document}
\begin{document} \title[The Brauer group of Azumaya corings ]{The Brauer group of Azumaya corings and the second cohomology group} \author{S. Caenepeel} \address{Faculty of Engineering, Vrije Universiteit Brussel, VUB, B-1050 Brussels, Belgium} \email{[email protected]} \urladdr{http://homepages.vub.ac.be/\~{}scaenepe/} \author{B. Femi\'c} \address{Faculty of Engineering, Vrije Universiteit Brussel, VUB, B-1050 Brussels, Belgium} \email{[email protected]} \thanks{} \subjclass{16W30} \keywords{Galois coring, comatrix coring, descent theory, Morita context} \begin{abstract} Let $R$ be a commutative ring. An Azumaya coring consists of a couple $(S,\mathcal{C})$, with $S$ a faithfully flat commutative $R$-algebra, and an $S$-coring $\mathcal{C}$ satisfying certain properties. If $S$ is faithfully projective, then the dual of $\mathcal{C}$ is an Azumaya algebra. Equivalence classes of Azumaya corings form an abelian group, called the Brauer group of Azumaya corings. This group is canonically isomorphic to the second flat cohomology group. We also give algebraic interpretations of the second Amitsur cohomology group and the first Villamayor-Zelinsky cohomology group in terms of corings. \end{abstract} \maketitle \section*{Introduction} Let $k$ be a field, and $l$ a Galois field extension of $k$ with group $G$. The Crossed Product Theorem states that we have an isomorphism ${\rm Br}(l/k)\cong H^2(G,l^*)$. 
The map from the second cohomology group to the Brauer group can be described easily and explicitly: if $f\in Z^2(G,l^*)$ is a $2$-cocycle, then the central simple algebra representing the class in ${\rm Br}(l/k)$ corresponding to $f$ is $$A=\bigoplus_{\sigma\in G} l u_{\sigma},$$ with multiplication rule $$(au_\sigma)(bu_\tau)=a\sigma(b)f(\sigma,\tau)u_{\sigma\tau},$$ for $a,b\in l$. From the fact that every central simple algebra can be split by a Galois extension, it follows that the full Brauer group ${\rm Br}(k)$ can be described as a second cohomology group $${\rm Br}(k)\cong H^2({\rm Gal}(k^{\rm sep}/k),k^{\rm sep*}),$$ where $k^{\rm sep}$ is the separable closure of $k$.\\ The definition of the Brauer group can be generalized from fields to commutative rings (see \cite{AG}), or, more generally, to schemes (see \cite{Gr2}). The cohomological description of the Brauer group of a commutative ring is more complicated; first of all, Galois cohomology is no longer sufficient, since not every Azumaya algebra can be split by a Galois extension. More general cohomology theories have to be introduced, such as Amitsur cohomology (over commutative rings) or \v Cech cohomology (over schemes). The Crossed Product Theorem is replaced by a long exact sequence, called the Chase-Rosenberg sequence. We can introduce the second \'etale cohomology group $H^2(R_{\rm et},{\mathbb G}_m)$, as the second right derived functor of a global section functor. If $R=k$ is a field, then this group equals the total Galois cohomology group $H^2({\rm Gal}(k^{\rm sep}/k),k^{\rm sep*})$. Then we have a monomorphism $${\rm Br}(R)\hookrightarrow H^2(R_{\rm et},{\mathbb G}_m).$$ In general, this monomorphism is not surjective, as the Brauer group is always torsion, and the second cohomology group is not torsion in general.
Gabber \cite{Gabber} proved that the Brauer group is isomorphic to the torsion part of the second cohomology group.\\ In \cite{Ta}, Taylor introduced a new Brauer group, consisting of equivalence classes of algebras that do not necessarily have a unit. The classical Brauer group is a subgroup, and it is shown in \cite{RT} that Taylor's Brauer group is isomorphic to the full second \'etale cohomology group. The proof depends on deep results, such as Artin's Refinement Theorem (see \cite{A1}); also the proof does not provide an explicit procedure producing a Taylor-Azumaya algebra out of an Amitsur cocycle.\\ In this paper, we propose a new Brauer group, and we show that it is isomorphic to the full second flat cohomology group. The elements of this new Brauer group are equivalence classes of corings. Corings were originally introduced by Sweedler \cite{Sweedler75}; inspired by an observation made by Takeuchi that a large class of generalized Hopf modules can be viewed as comodules over a coring, Brzezi\'nski \cite{Brzezinski02} revived the theory of corings. \cite{Brzezinski02} was followed by a series of papers giving new applications of corings, we refer to \cite{BrzezinskiWisbauer} for a survey.\\ Let $S$ be a commutative faithfully flat $R$-algebra. We can define a comultiplication and a counit on the $S$-bimodule $S\otimes S$, making $S\otimes S$ into a coring. This coring, called Sweedler's canonical coring, can be used to give an elegant approach to descent theory: the category of descent data is isomorphic to the category of comodules over the coring. Our starting observation is now the following: an Amitsur $2$-cocycle can be used to deform the comultiplication on $S\otimes S$, such that the new comultiplication is still coassociative. Thus the Amitsur $2$-cocycle condition should be viewed as a coassociativity condition rather than an associativity condition (in contrast with the Galois $2$-cocycle condition, which is really an associativity condition). 
In the situation where $S$ is faithfully projective as an $R$-module, we can take the dual of the coring $S\otimes S$, which is an $S$-ring, isomorphic to ${\rm End}_R(S)$. Amitsur $2$-cocycles can then be used to deform the multiplication on ${\rm End}_R(S)$, leading to an Azumaya algebra in the classical sense; this construction leads to a map $H^2(S/R,{\mathbb G}_m)\to {\rm Br}(S/R)$, and we will show that it is one of the maps in the Chase-Rosenberg sequence. The duality between the $S$-coring $S\otimes S$ and the $S$-ring ${\rm End}_R(S)$ works well in both directions if $S/R$ is faithfully projective, but fails otherwise; this provides an explanation for the fact that we need the condition that $S/R$ is faithfully projective in order to fit the relative Brauer group ${\rm Br}(S/R)$ into the Chase-Rosenberg sequence.\\ The canonical coring construction can be generalized slightly: if $I$ is an invertible $S$-module, then we can define a coring structure on $I^*\otimes I$. Such a coring will be called an elementary $S/R$-coring. Azumaya $S/R$-corings are then introduced as twisted forms of elementary $S/R$-corings. If $S/R$ is faithfully projective, then the dual of an Azumaya $S/R$-coring is an Azumaya algebra containing $S$ as a maximal commutative subalgebra. The set of isomorphism classes of Azumaya $S/R$-corings forms a group; after we divide by the subgroup consisting of elementary corings, we obtain the relative Brauer group ${\rm Br}^c(S/R)$; we will show that ${\rm Br}^c(S/R)$ is isomorphic to Villamayor and Zelinsky's cohomology group with values in the category of invertible modules $H^1(S/R,\dul{{\rm Pic}})$ \cite{ViZ}. As a consequence, ${\rm Br}^c(S/R)$ fits into a Chase-Rosenberg type sequence (even if $S/R$ is not faithfully projective).\\ An Azumaya coring will consist of a couple $(S,\mathcal{C})$, where $S$ is a (faithfully flat) commutative ring extension of $R$, and $\mathcal{C}$ is an $S/R$-coring. 
On the set of isomorphism classes, we define a Brauer equivalence relation, and show that the quotient set is a group under the operation induced by the tensor product over $R$. This group is called the Brauer group of Azumaya corings, and we can show that it is isomorphic to the full second cohomology group.\\ If $C$ is an object of a category $\mathcal{C}$, then the identity endomorphism of $C$ will also be denoted by $C$. \section{The Brauer group of a commutative ring}\selabel{1} \subsection{Amitsur cohomology}\selabel{1.1} Let $R$ be a commutative ring, and $S$ an $R$-algebra that is faithfully flat as an $R$-module. Tensor products over $R$ will be written without index $R$: $M\otimes N=M\otimes_RN$, for $R$-modules $M$ and $N$. The $n$-fold tensor product $S\otimes\cdots \otimes S$ will be denoted by $S^{\otimes n}$. For $i\in \{1,\cdots,n+2\}$, we have an algebra map $$\eta_i:\ S^{\otimes (n +1)}\to S^{\otimes (n+2)},$$ given by $$\eta_i(s_1\otimes \cdots \otimes s_{n+1})=s_1\otimes \cdots\otimes s_{i-1}\otimes 1\otimes s_i\otimes\cdots\otimes s_{n+1}.$$ Let $P$ be a covariant functor from a full subcategory of the category of commutative $R$-algebras that contains all tensor powers $S^{\otimes n}$ of $S$ to abelian groups. Then we consider $$\delta_n=\sum_{i=1}^{n+2} (-1)^{i-1}P(\eta_i):\ P(S^{\otimes (n +1)})\to P(S^{\otimes (n +2)}).$$ It is straightforward to show that $\delta_{n+1}\circ \delta_n=0$, so we obtain a complex $$0\to P(S)\rTo^{\delta_0} P(S^{\otimes 2})\rTo^{\delta_1} P(S^{\otimes 3})\rTo^{\delta_2}\cdots,$$ called the Amitsur complex $\mathcal{C}(S/R)$. We write $$Z^n(S/R, P)={\rm Ker}\,\delta_n~~;~~B^n(S/R, P)={\rm Im}\,\delta_{n-1};$$ $$H^n(S/R, P)=Z^n(S/R, P)/B^n(S/R, P).$$ $H^n(S/R, P)$ will be called the $n$-th Amitsur cohomology group of $S/R$ with values in $P$. 
Elements in $Z^n(S/R,P)$ are called $n$-cocycles, and elements in $B^n(S/R,P)$ are called $n$-coboundaries.\\ In this paper, we will mainly look at the following two examples: $P={\rm Pic}$, where ${\rm Pic}(S)$ is the Picard group of $S$, consisting of isomorphism classes of invertible $S$-modules, and $P={\mathbb G}_m$, where ${\mathbb G}_m(S)$ is the group consisting of all invertible elements of $S$.\\ If $u\in S^{\otimes n}$, then we will write $u_i=\eta_i(u)$. Observe that $u\in {\mathbb G}_m(S^{\otimes 3})$ is then a cocycle in $Z^2(S/R,{\mathbb G}_m)$ if and only if $$u_1u_2^{-1}u_3u_4^{-1}=1.$$ Amitsur cohomology was first introduced in \cite{Amitsur1} (over fields); it can be viewed as an affine version of \v Cech cohomology. For a more detailed discussion, see for example \cite{Caenepeel98,CR,KO1}. We now present some elementary properties of Amitsur cohomology groups. We will adopt the following notation: an element $u\in S^{\otimes n}$ will be written formally as $u=u^1\otimes u^2\otimes \cdots \otimes u^n$, where the summation is understood implicitly. \begin{proposition}\prlabel{am1a} Let $R$ be a commutative ring, and $f:\ S\to T$ a morphism of commutative $R$-algebras. $f$ induces maps $f_*:\ H^n(S/R,P)\to H^n(T/R,P)$. If $g:\ S\to T$ is a second algebra map, then $f_*=g_*$ (for $n\geq 1$). \end{proposition} \begin{proof} The first statement is obvious. For the proof of the second one, we refer to \cite[Prop. 5.1.7]{KO1}. \end{proof} The following result is obvious. \begin{lemma}\lelabel{am1} If $u,v\in Z^n(S/R,{\mathbb G}_m)$, then $$u\otimes v= (u^1\otimes v^1)\otimes (u^2\otimes v^2)\otimes\cdots\otimes (u^n\otimes v^n)\in Z^n(S\otimes S/R,{\mathbb G}_m).$$ If $u,v\in B^n(S/R,{\mathbb G}_m)$, then $u\otimes v\in B^n(S\otimes S/R,{\mathbb G}_m)$. \end{lemma} \begin{corollary}\colabel{am2} If $u\in Z^n(S/R,{\mathbb G}_m)$, then $[u\otimes 1]=[1\otimes u]$, and $[u\otimes u^{-1}]=1$ in $H^n(S\otimes S/R,{\mathbb G}_m)$. 
\end{corollary} \begin{proof} Apply \prref{am1a} to the algebra maps $\eta_1,\eta_2:\ S\to S\otimes_R S$, $\eta_1(s)=1\otimes s$, $\eta_2(s)=s\otimes 1$. \end{proof} \begin{lemma}\lelabel{am3} Take a cocycle $u=u^1\otimes u^2\otimes u^3\in Z^2(S/R,{\mathbb G}_m)$. Then $|u|=u^1u^2u^3\in {\mathbb G}_m(S)$ is called the norm of $u$, and we have $$u^1\otimes |u|^{-1}u^2u^3=1\otimes 1=|u|^{-1}u^1u^2\otimes u^3.$$ \end{lemma} \begin{proof} The first equality is obtained after we multiply the second, third and fourth tensor factors in the cocycle condition $u_1u_2^{-1}u_3u_4^{-1}=1$. The second equality is obtained after multiplying the first three tensor factors. \end{proof} A 2-cocycle $u$ is called normalized if $|u|=1$. \begin{lemma}\lelabel{am4} Every cocycle $u$ is cohomologous to a normalized cocycle. \end{lemma} \begin{proof} First observe that $\delta_1(|u|^{-1}\otimes 1)=1\otimes |u|^{-1}\otimes 1$. The cocycle $u\delta_1(|u|^{-1}\otimes 1)=u^1\otimes |u|^{-1}u^2\otimes u^3$ is normalized and cohomologous to $u$. \end{proof} Now we consider the Amitsur complex $\mathcal{C}(S\otimes S/R\otimes S)$. We have a natural isomorphism $$(S\otimes S)^{\otimes_{R\otimes S}n}\rTo^{\cong} S^{\otimes (n+1)},~~ (s_1\otimes t_1)\otimes\cdots\otimes (s_n\otimes t_n)\mapsto s_1\otimes\cdots\otimes s_n\otimes t_1\cdots t_n.$$ The augmentation maps ($i=1,2,3$) $$\eta_i:\ (S\otimes S)^{\otimes_{R\otimes S}2}\to (S\otimes S)^{\otimes_{R\otimes S}3}$$ can then be viewed as maps $$\eta_i:\ S^{\otimes 3}\to S^{\otimes 4},$$ and we find, for $u\in Z^2(S/R,{\mathbb G}_m)$ and $i=1,2,3$ that $\eta_i(u)=u_i$. Consequently $u\otimes 1=u_4=u_1u_2^{-1}u_3=\delta_1(u)\in B^2(S\otimes S/R\otimes S,{\mathbb G}_m)$. \begin{lemma}\lelabel{am5} If $u\in Z^2(S/R,{\mathbb G}_m)$, then $u\otimes 1\in B^2(S\otimes S/R\otimes S,{\mathbb G}_m)$. \end{lemma} \subsection{Derived functor cohomology}\selabel{1.2} Let $R$ be a commutative ring.
$\dul{\rm cat}(R_{\rm fl})$ denotes the full subcategory of the category of commutative $R$-algebras consisting of the flat, finitely presented algebras. A covariant functor $P:\ \dul{\rm cat}(R_{\rm fl})\to \dul{\rm Ab}$ is called a presheaf on $R_{\rm fl}$. The category of presheaves on $R_{\rm fl}$ and natural transformations will be denoted by $\mathcal{P}(R_{\rm fl})$.\\ A presheaf $P$ is called a sheaf if $H^0(S'/S,P)=P(S)$, for every faithfully flat $R$-algebra homomorphism $S\to S'$. The full subcategory of $\mathcal{P}(R_{\rm fl})$ consisting of sheaves is denoted by $\mathcal{S}(R_{\rm fl})$. $\mathcal{P}(R_{\rm fl})$ and $\mathcal{S}(R_{\rm fl})$ are abelian categories having enough injective objects.\\ ${\mathbb G}_a$ and ${\mathbb G}_m$ are sheaves on $R_{\rm fl}$. The embedding functor $i:\ \mathcal{S}(R_{\rm fl})\to \mathcal{P}(R_{\rm fl})$ has a left adjoint $a:\ \mathcal{P}(R_{\rm fl})\to \mathcal{S}(R_{\rm fl})$.\\ The ``global section'' functor $\Gamma:\ \mathcal{S}(R_{\rm fl})\to \dul{\rm Ab}$ is left exact, so we can consider its $n$-th right derived functor $R^n\Gamma$. We define the $n$-th flat cohomology group by $$H^n(R_{\rm fl},{\mathbb G}_m)=R^n\Gamma({\mathbb G}_m).$$ Fix a faithfully flat $R$-algebra $S$, and consider the functor $$g=H^0(S/R,-):\ \mathcal{P}(R_{\rm fl})\to \dul{\rm Ab}.$$ Then $\Gamma=g\circ i$, and $i$ takes injective objects of $\mathcal{S}(R_{\rm fl})$ to $g$-acyclics (see \cite[lemma 5.6.6]{Caenepeel98}), and we have long exact sequences, for every sheaf $F$, and for every $q\geq 0$ (see \cite{Caenepeel98,ViZ}): \begin{eqnarray}\eqlabel{et1} 0&\longrightarrow& H^1(S/R,C^q)\longrightarrow H^{q+1}(R_{\rm fl},F) \longrightarrow H^0(S/R,H^{q+1}(\bullet,F))\\ &\longrightarrow& H^2(S/R,C^q)\longrightarrow H^1(S/R,C^{q+1}) \longrightarrow H^1(S/R,H^{q+1}(\bullet,F))\nonumber\\ &\longrightarrow& \cdots\nonumber\\ &\longrightarrow& H^{p+1}(S/R,C^q)\longrightarrow H^p(S/R,C^{q+1}) \longrightarrow H^p(S/R,H^{q+1}(\bullet,F))\nonumber\\ &\longrightarrow& \cdots\nonumber.
\end{eqnarray} The sheaf $C^i$ is the $i$-th syzygy of an injective resolution $0\to F\to X^0\to X^1\to \cdots$ of $F$ in $\mathcal{S}(R_{\rm fl})$, that is, $C^i={\rm Ker}\,(X^i\to X^{i+1})$.\\ A morphism $f:\ S\to T$ of commutative faithfully flat $R$-algebras induces a map between the corresponding sequences \equref{et1}, namely we have a commutative diagram \begin{equation}\eqlabel{et1a} \begin{diagram} 0\rightarrow&H^1(S/R,C^q)&\to&H^{q+1}(R_{\rm fl},F)&\to& H^0(S/R,H^{q+1}(\bullet,F))&\to&\cdots \\ &\dTo_{f_*}&&\dTo_{=}&&\dTo_{f_*}&&\\ 0\rightarrow&H^1(T/R,C^q)&\to&H^{q+1}(R_{\rm fl},F)&\to& H^0(T/R,H^{q+1}(\bullet,F))&\to&\cdots \end{diagram} \end{equation} It is known that $H^1(R_{\rm fl},{\mathbb G}_m)={\rm Pic}(R)$, the group of rank one projective $R$-modules. Writing down \equref{et1} for $F={\mathbb G}_m$ and $q=0$, we find the exact sequence \begin{eqnarray}\eqlabel{et2} 0&\longrightarrow& H^1(S/R,{\mathbb G}_m)\longrightarrow {\rm Pic}(R) \longrightarrow H^0(S/R,{\rm Pic})\\ &\longrightarrow& H^2(S/R,{\mathbb G}_m)\longrightarrow H^1(S/R,C^{1}) \longrightarrow H^1(S/R,{\rm Pic})\nonumber\\ &\longrightarrow&H^3(S/R,{\mathbb G}_m)\longrightarrow\cdots\nonumber \end{eqnarray} Let $\mathcal{R}$ be the category with faithfully flat commutative $R$-algebras as objects. The set of morphisms between two objects $S$ and $T$ is a singleton if there exists an algebra morphism $S\to T$ (then we write $S\leq T$), and is empty otherwise. Then $\mathcal{R}$ is a directed preorder, that is a category with at most one morphism between two objects, and such that every pair of objects $(S,T)$ has a successor, namely $S\otimes T$ (see \cite[IX.1]{McLane}).\\ Let $P$ be a presheaf on $R_{\rm fl}$. It follows from \prref{am1a} that we have a functor $$H^n(\bullet/R,P):\ \mathcal{R}\to \dul{\rm Ab},$$ and we can consider the colimit $$\check H^n(R_{\rm fl},P)={\rm colim} H^n(\bullet/R,P).$$ Now let $F$ be a sheaf. 
Using the exact sequences \equref{et1} and the commutative diagrams \equref{et1a}, we find a homomorphism of abelian groups $$\check H^n(R_{\rm fl},F)\to H^n(R_{\rm fl},F).$$ If $n=1$, this map is an isomorphism. In particular, we have \begin{equation}\eqlabel{et3} H^2(R_{\rm fl},{\mathbb G}_m)\cong H^1(R_{\rm fl},C^1)\cong\check H^1(R_{\rm fl},C^1). \end{equation} The category $\dul{\rm cat}(R_{\rm fl})$ can be replaced by $\dul{\rm cat}(R_{\rm et})$, the category of \'etale $R$-algebras. All results remain valid, and, moreover, we have $$\check H^n(R_{\rm et},F)\cong H^n(R_{\rm et},F).$$ The proof of this result is based on Artin's Refinement Theorem \cite{A1}. \subsection{Amitsur cohomology with values in $\dul{{\rm Pic}}$}\selabel{1.2b} Let $R$ be a commutative ring. The category of invertible $R$-modules and $R$-module isomorphisms is denoted by $\dul{{\rm Pic}}(R)$. The Grothendieck group $K_0\dul{{\rm Pic}}(R)$ is the Picard group ${\rm Pic}(R)$. The inverse of $[I]\in {\rm Pic}(R)$ is represented by $I^*={\rm Hom}_R(I,R)$. If $I\in \dul{{\rm Pic}}(R)$, then the evaluation map ${\rm ev}_I:\ I\otimes I^*\to R$ is an isomorphism, with inverse the coevaluation map ${\rm coev}_I:\ R\to I\otimes I^*$. If ${\rm coev}_I(1)=\sum_i e_i\otimes e_i^*$, then $\{(e_i,e_i^*)~|~i=1,\cdots n\}$ is a finite dual basis for $I$.\\ Let $S$ be a commutative faithfully flat $R$-algebra. For every positive integer $n$, we have a functor $$\delta_{n-1}:\ \dul{{\rm Pic}}(S^{\otimes n})\to \dul{{\rm Pic}}(S^{\otimes (n+1)}),$$ given by $$\delta_{n-1}(I)=I_1\otimes_{S^{\otimes (n+1)}}I^*_2\otimes_{S^{\otimes (n+1)}}\cdots \otimes_{S^{\otimes (n+1)}}J_{n+1},$$ $$\delta_{n-1}(f)=f_1\otimes_{S^{\otimes (n+1)}} (f^*_2)^{-1}\otimes_{S^{\otimes (n+1)}}\cdots \otimes_{S^{\otimes (n+1)}} (g_{n+1})^{\pm 1},$$ with $J=I$ or $I^*$, $g=f$ or $f^*$ depending on whether $n$ is odd or even. 
Here $I_i= I\otimes_{S^{\otimes n}}S^{\otimes {n+1}}$, where $S^{\otimes {n+1}}$ is a left $S^{\otimes n}$-module via $\eta_i:\ S^{\otimes n}\to S^{\otimes {n+1}}$ (see \seref{1.1}). We easily compute that $$\delta_n\delta_{n-1}(I)=\bigotimes_{j=1}^{n+2}\bigotimes_{i=1}^{j-1} (I_{ij}\otimes_{S^{\otimes (n+2)}} I^*_{ij}),$$ so we have a natural isomorphism $$\lambda_I= \bigotimes_{j=1}^{n+2}\bigotimes_{i=1}^{j-1} {\rm ev}_{I_{ij}}:\ \delta_n\delta_{n-1}(I)\to S^{\otimes (n+2)}.$$ $\dul{Z}^{n-1}(S/R,\dul{{\rm Pic}})$ is the category with objects $(I,\alpha)$, with $I\in \dul{{\rm Pic}}(S^{\otimes n})$, and $\alpha:\ \delta_{n-1}(I)\to S^{\otimes(n+1)}$ an isomorphism of $S^{\otimes(n+1)}$-modules such that $\delta_n(\alpha)=\lambda_I$. A morphism $(I,\alpha)\to (J,\beta)$ is an isomorphism of $S^{\otimes n}$-modules $f:\ I\to J$ such that $\beta\circ \delta_{n-1}(f)=\alpha$. $\dul{Z}^{n-1}(S/R,\dul{{\rm Pic}})$ is a symmetric monoidal category, with tensor product $(I,\alpha)\otimes (J,\beta)=(I\otimes_{S^{\otimes n}}J,\alpha\otimes_{S^{\otimes (n+1)}}\beta)$ and unit object $(S^{\otimes n},S^{\otimes (n+1)})$. Every object in this category is invertible, and we can consider $$K_0\dul{Z}^{n-1}(S/R,\dul{{\rm Pic}})={Z}^{n-1}(S/R,\dul{{\rm Pic}}).$$ We have a strongly monoidal functor $$\delta_{n-2}:\ \dul{{\rm Pic}}(S^{\otimes(n-1)})\to \dul{Z}^{n-1}(S/R,\dul{{\rm Pic}}),$$ $\delta_{n-2}(J)=(\delta_{n-2}(J),\lambda_J)$. Consider the subgroup $B^{n-1}(S/R,\dul{{\rm Pic}})$ of $Z^{n-1}(S/R,\dul{{\rm Pic}})$, consisting of elements represented by $\delta_{n-2}(J)$, with $J\in\dul{{\rm Pic}}(S^{\otimes n-1})$. 
We then define $$ H^{n-1}(S/R,\dul{{\rm Pic}})=Z^{n-1}(S/R,\dul{{\rm Pic}})/B^{n-1}(S/R,\dul{{\rm Pic}}).$$ This definition is such that we have a long exact sequence (see \cite{ViZ}): \begin{eqnarray}\eqlabel{villa1} 0&\longrightarrow& H^1(S/R,{\mathbb G}_m)\longrightarrow {\rm Pic}(R) \longrightarrow H^0(S/R,{\rm Pic})\\ &\longrightarrow& H^2(S/R,{\mathbb G}_m)\longrightarrow H^1(S/R,\dul{{\rm Pic}}) \longrightarrow H^1(S/R,{\rm Pic})\nonumber\\ &\longrightarrow& \cdots\nonumber\\ &\longrightarrow& H^{p+1}(S/R,{\mathbb G}_m)\longrightarrow H^p(S/R,\dul{{\rm Pic}}) \longrightarrow H^p(S/R,{\rm Pic})\nonumber\\ &\longrightarrow& \cdots\nonumber. \end{eqnarray} Comparing with \equref{et1} in the situation where $F={\mathbb G}_m$ and $q=0$, we see that \begin{equation}\eqlabel{villa2} H^n(S/R,\dul{{\rm Pic}})\cong H^n(S/R,C^1), \end{equation} for all $n\geq 1$. For details, we refer to \cite{Caenepeel98,ViZ}. The following result can be viewed as an analog of \leref{am5}. \begin{lemma}\lelabel{1.2a.2} Let $(I,\alpha)\in\dul{Z}^1(S/R,\dul{{\rm Pic}})$. Then $$(I\otimes S,\alpha\otimes S)\cong \delta_0(I)~~{\rm in}~~\dul{Z}^1(S\otimes S/R\otimes S,\dul{{\rm Pic}}),$$ and consequently $[I\otimes S,\alpha\otimes S]=1$ in $H^1(S\otimes S/R\otimes S,\dul{{\rm Pic}})$. \end{lemma} \begin{proof} The isomorphism $\alpha:\ I_1\otimes_{S^{\otimes 3}}I^*_2\otimes_{S^{\otimes 3}}I_3\to S^{\otimes 3}$ induces an isomorphism $$\beta:\ I_3=I\otimes S\to I^*_1\otimes_{S^{\otimes 3}}I_2= (S\otimes I)^*\otimes_{(S\otimes S)\otimes_{R\otimes S}(S\otimes S)}(S\otimes I).$$ The fact that $\delta_2(\alpha)=\lambda_I$ implies that $\beta$ is an isomorphism in $\dul{Z}^1(S\otimes S/R\otimes S,\dul{{\rm Pic}})$. \end{proof} \begin{proposition}\prlabel{1.2a.3} Let $f:\ S\to T$ be a morphism of commutative faithfully flat $R$-algebras. $f$ induces group morphisms $f_*:\ H^n(S/R,\dul{{\rm Pic}})\to H^n(T/R,\dul{{\rm Pic}})$. If $g:\ S\to T$ is a second algebra morphism, then $f_*=g_*$.
\end{proposition} \begin{proof} We have a functor $f_*:\ \dul{Z}^{n-1}(S/R,\dul{{\rm Pic}})\to \dul{Z}^{n-1}(T/R,\dul{{\rm Pic}})$, given by $$f_*(I,\alpha)=(I\otimes_{S^{\otimes n}}T^{\otimes n},\alpha\otimes_{S^{\otimes n+1}}T^{\otimes n+1}).$$ $f_*$ induces maps $f_*:\ H^n(S/R,\dul{{\rm Pic}})\to H^n(T/R,\dul{{\rm Pic}})$.\\ $f$ and $g$ induce maps $f_*$ and $g_*$ between the exact sequence \equref{villa1} and its analog with $S$ replaced by $T$. We have seen in \prref{am1a} that these maps coincide on $H^n(S/R,{\mathbb G}_m)$ and $H^n(S/R,{\rm Pic})$. It follows from the five lemma that they also coincide on $H^n(S/R,\dul{{\rm Pic}})$. \end{proof} It follows from \prref{1.2a.3} that we have a functor $$H^n(\bullet/R,\dul{{\rm Pic}}):\ \mathcal{R}\to \dul{\rm Ab},$$ so we can consider the colimit $$\check H^n(R_{\rm fl},\dul{{\rm Pic}})={\rm colim}\, H^n(\bullet/R,\dul{{\rm Pic}}).$$ If $f:\ S\to T$ is a morphism of commutative faithfully flat $R$-algebras, then the maps $f_*$ establish a map between the corresponding exact sequences \equref{villa1}. This implies that the isomorphisms \equref{villa2} fit into commutative diagrams $$\begin{diagram} H^n(S/R,\dul{{\rm Pic}})&\rTo^{\cong}&H^n(S/R,C^1)\\ \dTo^{f_*}&&\dTo_{f_*}\\ H^n(T/R,\dul{{\rm Pic}})&\rTo^{\cong}&H^n(T/R,C^1) \end{diagram}$$ Consequently, the functors $H^n(\bullet/R,\dul{{\rm Pic}})$ and $H^n(\bullet/R,C^1)$ are isomorphic, and \begin{equation}\eqlabel{villa10} \check H^1(R_{\rm fl},\dul{{\rm Pic}})\cong \check H^1(R_{\rm fl},C^1) \cong H^2(R_{\rm fl},{\mathbb G}_m). \end{equation} \subsection{The Brauer group}\selabel{1.3} Let $R$ be a commutative ring. An $R$-algebra $A$ is called an Azumaya algebra if there exists a commutative faithfully flat $R$-algebra $S$ such that $A\otimes S \cong {\rm End}_S(P)$ for some faithfully projective $S$-module $P$. There are several equivalent characterizations of Azumaya algebras; we refer to the literature \cite{Caenepeel98,DI,KO1}.
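As a basic illustration (standard facts, not specific to this paper), endomorphism algebras of faithfully projective modules are the prototypical Azumaya algebras, and they represent the neutral element of ${\rm Br}(R)$:

```latex
% For any faithfully projective R-module P (e.g. P = R^n, giving the
% matrix algebra M_n(R)), the algebra End_R(P) is Azumaya, and
A \otimes A^{\rm op} \;\cong\; {\rm End}_R(A)
   \quad\text{for every Azumaya $R$-algebra } A,
\qquad
[{\rm End}_R(P)] = 1 \in {\rm Br}(R).
% The first isomorphism is why [A]^{-1} = [A^{\rm op}] in Br(R):
% Brauer equivalence divides out precisely the classes of the End_R(P).
```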
An Azumaya algebra over a field is nothing else than a central simple algebra.\\ Two $R$-Azumaya algebras $A$ and $B$ are called Brauer equivalent if there exist faithfully projective $R$-modules $P$ and $Q$ such that $A\otimes {\rm End}(P)\cong B\otimes {\rm End}(Q)$ as $R$-algebras. This induces an equivalence relation on the set of isomorphism classes of $R$-Azumaya algebras. The quotient set ${\rm Br}(R)$ is an abelian group under the operation induced by the tensor product. The inverse of a class represented by an algebra $A$ is represented by the opposite algebra $A^{\rm op}$.\\ If $i:\ R\to S$ is a morphism of commutative rings, then we have an associated abelian group map $${\rm Br}(i):\ {\rm Br}(R)\to {\rm Br}(S),~~{\rm Br}(i)[A]=[A\otimes S].$$ The kernel ${\rm Ker}\,({\rm Br}(i))={\rm Br}(S/R)$ is called the part of the Brauer group of $R$ split by $S$.\\ If $S/R$ is faithfully flat, then we have an embedding ${\rm Br}(S/R)\to H^1(S/R,C^1)$. This embedding is an isomorphism if $S$ is faithfully projective as an $R$-module. Consequently, we have an embedding $${\rm Br}(S/R)\to H^2(R_{\rm fl},{\mathbb G}_m),$$ and $${\rm Br}(R)\to H^2(R_{\rm fl},{\mathbb G}_m).$$ Since every $R$-Azumaya algebra can be split by an \'etale covering, $H^2(R_{\rm fl},{\mathbb G}_m)$ can be replaced by $H^2(R_{\rm et},{\mathbb G}_m)$ in the two formulas above. If $R$ is a field, or, more generally, if $R$ is a regular ring, then we have an isomorphism $${\rm Br}(R)\cong H^2(R_{\rm et},{\mathbb G}_m).$$ In general, we do not have such an isomorphism, because the Brauer group is torsion, and the second cohomology group is not (see \cite{Gr2}). Gabber (\cite{Gabber}, see also \cite{KO3}) showed that $${\rm Br}(R)\cong H^2(R_{\rm et},{\mathbb G}_m)_{\rm tors},$$ for every commutative ring $R$. Taylor \cite{Ta} introduced a Brauer group ${\rm Br}'(R)$ consisting of classes of algebras that do not necessarily have a unit, but satisfy a weaker property.
${\rm Br}'(R)$ contains ${\rm Br}(R)$ as a subgroup, and we have an isomorphism \cite{RT} $${\rm Br}'(R)\cong H^2(R_{\rm et},{\mathbb G}_m).$$ The proof is technical, and relies on Artin's Refinement Theorem \cite{A1}. It provides no explicit description of the Taylor-Azumaya algebra that corresponds to a given cocycle. \section{Some adjointness properties}\selabel{2.1} We start this technical section with the following elementary fact. For any morphism $\eta:\ R\to S$ of rings, we have an adjoint pair of functors $(F=-\otimes_R S,G)$ between the module categories $\mathcal{M}_R$ and $\mathcal{M}_S$. $F$ is called the induction functor, and $G$ is the restriction of scalars functor. For every $M\in \mathcal{M}_R$, $N\in \mathcal{M}_S$, we have a natural isomorphism $${\rm Hom}_R(M,G(N))\cong {\rm Hom}_S(M\otimes_R S,N).$$ $f:\ M\to G(N)$ and the corresponding $\tilde{f}:\ M\otimes_RS\to N$ are related by the following formula: \begin{equation}\eqlabel{2.1.0.1} \tilde{f}(m\otimes_R s)=f(m)s. \end{equation} Now assume that $R$ and $S$ are commutative rings, and consider the ring morphisms $\eta_i:\ S\otimes_R S\to S\otimes_RS\otimes_RS$ ($i=1,2,3$) introduced at the beginning of \seref{1.1}. The corresponding adjoint pairs of functors between $\mathcal{M}_{S^{\otimes 2}}$ and $\mathcal{M}_{S^{\otimes 3}}$ will be written as $(F_i,G_i)$. $M\in \mathcal{M}_{S^{\otimes 2}}$ will also be regarded as an $S$-bimodule, and we will denote $M_i=F_i(M)$. For $m\in M$, we write $$m_i=(M\otimes_{S^{\otimes 2}}\eta_i)(m).$$ In particular, $m_3=m\otimes 1$ and $m_1=1\otimes m$. \begin{lemma}\lelabel{2.1} Let $M\in \mathcal{M}_{S^{\otimes 2}}$.
Then we have an $S$-bimodule isomorphism $$G_2(M_3\otimes_{S^{\otimes 3}}M_1)\cong M\otimes_S M,$$ and an isomorphism $${}_S{\rm Hom}_S(M,M\otimes_S M)\cong {\rm Hom}_{S^{\otimes 3}}(M_2, M_3\otimes_{S^{\otimes 3}}M_1).$$ \end{lemma} \begin{proof} The map $$\alpha:\ M_3\otimes M_1\to M\otimes_S M,~~ \alpha((m\otimes s)\otimes (t\otimes n))=tm\otimes_S ns$$ induces a well-defined map $$\alpha:\ M_3\otimes_{S^{\otimes 3}}M_1\to M\otimes_S M.$$ Indeed, for all $m,n\in M$ and $s,t,u,v,w\in S$, we easily compute that \begin{eqnarray*} &&\hspace*{-2cm} \alpha\bigl((m\otimes s)(u\otimes v\otimes w)\otimes (t\otimes n)\bigr) =\alpha\bigl((umv\otimes sw)\otimes (t\otimes n)\bigr)\\ &=& tumv\otimes_S nsw=utm\otimes_S vnws\\ &=& \alpha\bigl((m\otimes s)\otimes (ut\otimes vnw)\bigr) =\alpha\bigl((m\otimes s)\otimes (u\otimes v\otimes w) (t\otimes n)\bigr). \end{eqnarray*} The map $$\beta:\ M\otimes M\to M_3\otimes_{S^{\otimes 3}}M_1,~~\beta(m\otimes n)=m_3\otimes_{S^{\otimes 3}} m_1$$ induces a well-defined map $$\beta:\ M\otimes_S M\to M_3\otimes_{S^{\otimes 3}}M_1.$$ Indeed, \begin{eqnarray*} &&\hspace*{-2cm} \beta(ms\otimes n)=(ms\otimes 1)\otimes_{S^{\otimes 3}}(1\otimes n)\\ &=& (m\otimes 1)(1\otimes s\otimes 1)\otimes_{S^{\otimes 3}}(1\otimes n)\\ &=& (m\otimes 1)\otimes_{S^{\otimes 3}}(1\otimes s\otimes 1)(1\otimes n)\\ &=&(m\otimes 1)\otimes_{S^{\otimes 3}}(1\otimes sn) =\beta(m\otimes sn). \end{eqnarray*} It is clear that $\alpha$ and $\beta$ are inverse $S$-bimodule maps. Finally, the adjunction cited above tells us that $${}_S{\rm Hom}_S(M,M\otimes_S M)\cong {\rm Hom}_{S^{\otimes 2}}(M, G_2(M_3\otimes_{S^{\otimes 3}}M_1)) \cong{\rm Hom}_{S^{\otimes 3}}(M_2, M_3\otimes_{S^{\otimes 3}}M_1).$$ \end{proof} Using \equref{2.1.0.1}, we can write an explicit formula for the map $\tilde{f}:\ M_2\to M_3\otimes_{S^{\otimes 3}}M_1$ corresponding to $f:\ M\to M\otimes_S M$. 
To this end, we first introduce the following Sweedler-type notation: $$f(m)=m_{(1)}\otimes_S m_{(2)},$$ where summation is understood implicitly. Then we have \begin{equation}\eqlabel{2.1.1} \tilde{f}(m_2)=\beta(f(m))=m_{(1)3}\otimes_{S^{\otimes 3}} m_{(2)1}. \end{equation} For $i=1,2,3$ and $j=1,2,3,4$, we now consider the ring morphisms $$\eta_{ij}=\eta_j\circ \eta_i:\ S\otimes_RS\to S\otimes_RS\otimes_RS\otimes_RS$$ and the corresponding pairs of adjoint functors $(F_{ij},G_{ij})$ between the categories $\mathcal{M}_{S^{\otimes 2}}$ and $\mathcal{M}_{S^{\otimes 4}}$. \begin{lemma}\lelabel{2.1a} Let $M\in \mathcal{M}_{S^{\otimes 2}}$.
Then, using the adjunction from the beginning of this Section, we find \begin{eqnarray*} &&\hspace*{-2cm} {}_S{\rm Hom}_S(M,M\otimes_S M\otimes_S M)\cong {\rm Hom}_{S^{\otimes 2}}(M, G_{23}(M_{34}\otimes_{S^{\otimes 4}}M_{14}\otimes_{S^{\otimes 4}}M_{12}))\\ &\cong & {\rm Hom}_{S^{\otimes 4}}(M_{23}, M_{34}\otimes_{S^{\otimes 4}}M_{14}\otimes_{S^{\otimes 4}}M_{12}). \end{eqnarray*} Using \equref{2.1.0.1}, we find that $\tilde{f}(m_{23})=\beta(f(m))$, and \equref{2.1a.1} then follows easily. \end{proof} Let $S$ be a commutative faithfully flat $R$-algebra. We have an algebra morphism $m:\ S^{\otimes n}\to S$, $m(s_1\otimes\cdots\otimes s_n)=s_1\cdots s_n$, and the corresponding induction functor $$-\otimes_{S^{\otimes n}} S=|-|:\ \mathcal{M}_{S^{\otimes n}}\to \mathcal{M}_S,$$ which is strongly monoidal since $|S^{\otimes n}|=S$, and \begin{eqnarray*} &&\hspace{-2cm} |M\otimes_{S^{\otimes n}}N|= M\otimes_{S^{\otimes n}}N\otimes_{S^{\otimes n}}S\\ &\cong& (M\otimes_{S^{\otimes n}} S)\otimes_S (N\otimes_{S^{\otimes n}} S)\cong |M|\otimes_S|N|. \end{eqnarray*} Recall from \cite[IX.4.6]{Ba2} that an $R$-module $M$ is faithfully projective if and only if there exists an $R$-module $N$ such that $M\otimes N\cong R^k$ for some positive integer $k$. This implies that $|-|$ sends faithfully projective (resp. invertible) $S^{\otimes n}$-modules to faithfully projective (resp. invertible) $S$-modules. \begin{lemma}\lelabel{2.2a} Let $M_1,\cdots, M_n \in \mathcal{M}_S$.
Then $$|M_1\otimes\cdots\otimes M_n|\cong M_1\otimes_S\cdots\otimes_S M_n.$$ \end{lemma} \begin{proof} The natural epimorphism $\pi:\ M_1\otimes\cdots\otimes M_n\to |M_1\otimes\cdots\otimes M_n|$ factors through $M_1\otimes_S\cdots\otimes_S M_n$ since \begin{eqnarray*} &&\hspace*{-20mm} \pi(m_1\otimes\cdots \otimes sm_i\otimes \cdots\otimes m_n)= (m_1\otimes\cdots \otimes sm_i\otimes \cdots\otimes m_n)\otimes_{S^{\otimes n}} 1\\ &=& (m_1\otimes\cdots \otimes m_i\otimes \cdots\otimes m_n)\otimes_{S^{\otimes n}} s\\ &=&(m_1\otimes\cdots \otimes sm_j\otimes \cdots\otimes m_n)\otimes_{S^{\otimes n}} 1, \end{eqnarray*} for all $i,j$, so we have a map $$\alpha:\ M_1\otimes_S\cdots\otimes_S M_n\to |M_1\otimes\cdots\otimes M_n|.$$ In a similar way, the quotient map $M_1\otimes\cdots\otimes M_n\to M_1\otimes_S\cdots\otimes_S M_n$ factors through $|M_1\otimes\cdots\otimes M_n|$, so we have a map $$\beta:\ |M_1\otimes\cdots\otimes M_n|\to M_1\otimes_S\cdots\otimes_S M_n,$$ which is inverse to $\alpha$. \end{proof} \section{Corings}\selabel{2.2} Let $S$ be a ring. Recall that an $S$-coring is a coalgebra (or comonoid) $\mathcal{C}$ in the category ${}_S\mathcal{M}_S$. 
This means that $\mathcal{C}$ is an $S$-bimodule, together with two $S$-bimodule maps $\Delta_\mathcal{C}:\ \mathcal{C}\to \mathcal{C}\otimes_S \mathcal{C}$ and $\varepsilon_\mathcal{C}:\ \mathcal{C}\to S$, satisfying the usual coassociativity and counit conditions: $$(\mathcal{C}\otimes_S \Delta_\mathcal{C})\circ \Delta_\mathcal{C}=(\Delta_\mathcal{C}\otimes_S \mathcal{C})\circ \Delta_\mathcal{C}~~;~~ (\mathcal{C}\otimes_S \varepsilon_\mathcal{C})\circ \Delta_\mathcal{C}=(\varepsilon_\mathcal{C}\otimes_S \mathcal{C})\circ \Delta_\mathcal{C}=\mathcal{C}.$$ For the comultiplication $\Delta_\mathcal{C}$, we use the following Sweedler-type notation: $$\Delta_\mathcal{C}(c)=c_{(1)}\otimes_S c_{(2)}.$$ A right $\mathcal{C}$-comodule $M$ is a right $S$-module together with a right $S$-linear map $\rho:\ M\to M\otimes_S \mathcal{C}$ such that $$(M\otimes_S\Delta_\mathcal{C})\circ \rho=(\rho\otimes_S \mathcal{C})\circ\rho~~;~~ (M\otimes_S\varepsilon_\mathcal{C})\circ \rho=M.$$ If $\mathcal{C}$ is an $S$-coring, then ${}_S{\rm Hom}(\mathcal{C},S)$ is an $S$-ring. This means that ${}_S{\rm Hom}(\mathcal{C},S)$ is a ring, and that we have a ring morphism $j:\ S\to {}_S{\rm Hom}(\mathcal{C},S)$. The multiplication on ${}_S{\rm Hom}(\mathcal{C},S)$ is given by the formula \begin{equation}\eqlabel{left} (g\# f)(c)=f(c_{(1)}g(c_{(2)})). \end{equation} The unit is $\varepsilon_\mathcal{C}$, and $j(s)(c)=\varepsilon_\mathcal{C}(c)s$, for all $s\in S$ and $c\in\mathcal{C}$. In a similar way, ${\rm Hom}_S(\mathcal{C},S)$ is an $S$-ring. The multiplication is now given by the formula \begin{equation}\eqlabel{right} (f\# g)(c)=f(g(c_{(1)})c_{(2)}). \end{equation} For a detailed discussion of corings and their applications, we refer to \cite{BrzezinskiWisbauer}. Let $S$ be a commutative $R$-algebra. We have seen in \seref{2.1} that we have a functor $G:\ \mathcal{M}_{S^{\otimes 2}}\to {}_S\mathcal{M}_S$.
An $S$-bimodule $M$ lies in the image of $G$ if $M^R=M$, that is, $xm=mx$, for all $m\in M$ and $x\in R$.\\ We can view $\mathcal{M}_{S^{\otimes 2}}$ as a monoidal category with tensor product $\otimes_S$ and unit object $S$. A coalgebra in this category will be called an $S/R$-coring. Thus an $S/R$-coring $\mathcal{C}$ is an $S$-coring, with the additional condition that $\mathcal{C}^R=\mathcal{C}$. \begin{example}\exlabel{2.3} Take an invertible $S$-module $I$. Then $I$ is finitely generated and projective as an $S$-module, and we have a finite dual basis $\{(e_i,f_i)\in I\times {I^*}~|~i=1,\cdots,n\}$ of $I$. Then $\sum_i e_i\otimes_S f_i=1\in I\otimes_S I^*\cong S$. We have an $S/R$-coring $$\mathcal{C}={\rm Can}_R(I;S)=I^*\otimes_R I,$$ with structure maps $$\Delta_\mathcal{C}:\ I^*\otimes_R I\to I^*\otimes_R I\otimes_SI^*\otimes_R I\cong I^*\otimes_RS\otimes_R I$$ $$\varepsilon_\mathcal{C}:\ I^*\otimes_R I\to S$$ given by $$\Delta_\mathcal{C}(f\otimes x)=\sum_i f\otimes e_i\otimes_S f_i\otimes x=f\otimes 1\otimes x~~;~~ \varepsilon_\mathcal{C}(f\otimes x)=f(x).$$ We call $\mathcal{C}$ an {\sl elementary} coring. If $I=S$, then we obtain Sweedler's canonical coring, introduced in \cite{Sweedler75}; in general, ${\rm Can}_R(I;S)$ is an example of a comatrix coring, as introduced in \cite{Kaoutit}. We also compute that $${}_S{\rm Hom}(\mathcal{C},S)={}_S{\rm Hom}(I^*\otimes_RI,S)\cong {}_R{\rm Hom}(I,I)={}_R{\rm End}(I).$$ ${}_R{\rm End}(I)$ is an $R$-algebra (under composition) and an $S$-ring, and we find an isomorphism of $S$-rings \begin{equation}\eqlabel{2.3.5} {}_S{\rm Hom}(\mathcal{C},S)\cong {}_R{\rm End}(I)^{\rm op}. \end{equation} \end{example} \begin{lemma}\lelabel{2.4} Let $S$ and $T$ be commutative $R$-algebras.
Then we have a strongly monoidal functor $$F=-\otimes_R T:\ \mathcal{M}_{S\otimes_R S}\to \mathcal{M}_{(S\otimes_R T)\otimes_T(S\otimes_R T)}=\mathcal{M}_{S\otimes_R S\otimes_R T}.$$ Consequently, if $\mathcal{C}$ is an $S/R$-coring, then $F(\mathcal{C})=\mathcal{C}\otimes_RT$ is an $S\otimes_R T/T$-coring. \end{lemma} \begin{proof} $F(M)=M\otimes_R T$ is an $S\otimes_R T$-bimodule, via $(s\otimes t)\cdot (m\otimes t'')\cdot (s'\otimes t')= sms'\otimes tt''t'$. $F$ is strongly monoidal since $F(S)=S\otimes_R T$ and \begin{eqnarray*} &&\hspace*{-2cm} F(M\otimes_S N)=(M\otimes_S N)\otimes_R T\\ &\cong & (M\otimes_R T)\otimes_{S\otimes_RT}(N\otimes_RT)= F(M)\otimes_{S\otimes_R T}F(N). \end{eqnarray*} \end{proof} \begin{example}\exlabel{2.5} Let $I$ be an invertible $S$-module. Then \begin{eqnarray*} &&\hspace*{-2cm}F({\rm Can}_R(I;S))=(I^*\otimes_RI)\otimes_R T\cong (I^*\otimes_RT)\otimes_{R\otimes_R T} (I\otimes_RT)\\ &\cong & (I\otimes_R T)^*\otimes_{R\otimes_R T} (I\otimes_RT)\cong {\rm Can}_{T}(I\otimes_R T;S\otimes_RT). \end{eqnarray*} \end{example} \section{Azumaya corings}\selabel{3} \begin{lemma}\lelabel{3.1} Let $S$ be a commutative faithfully flat $R$-algebra, and $I\in \dul{{\rm Pic}}(S\otimes S)$. Consider an $S$-bimodule map $\Delta:\ I\to I\otimes_S I$, and assume that its corresponding map $\tilde{\Delta}:\ I_2\to I_3\otimes_{S^{\otimes 3}}I_1$ in $\mathcal{M}_{S^{\otimes 3}}$ is an isomorphism. Then we have an isomorphism of $S^{\otimes 3}$-modules \begin{equation}\eqlabel{3.1.1} \alpha^{-1}=(\tilde{\Delta}\otimes_{S^{\otimes 3}}I^*_2)\circ {\rm coev}_{I_2}:\ S^{\otimes 3}\to I_2\otimes_{S^{\otimes 3}} I^*_2\to I_3\otimes_{S^{\otimes 3}}I_1\otimes_{S^{\otimes 3}} I^*_2. \end{equation} $\Delta$ is coassociative if and only if $(I,\alpha)\in \dul{Z}^1(S/R,\dul{{\rm Pic}})$. 
\end{lemma} \begin{proof} We have the following isomorphisms of $S^{\otimes 4}$-modules: $$\tilde{\Delta}_1:\ I_{21}=I_{13}\to I_{31}\otimes_{S^{\otimes 4}} I_{11}= I_{14}\otimes_{S^{\otimes 4}} I_{12};$$ $$\tilde{\Delta}_2:\ I_{22}=I_{23}\to I_{32}\otimes_{S^{\otimes 4}} I_{12}= I_{24}\otimes_{S^{\otimes 4}} I_{12};$$ $$\tilde{\Delta}_3:\ I_{23}\to I_{33}\otimes_{S^{\otimes 4}} I_{13}= I_{34}\otimes_{S^{\otimes 4}} I_{13};$$ $$\tilde{\Delta}_4:\ I_{24}\to I_{34}\otimes_{S^{\otimes 4}} I_{14}.$$ $(I,\alpha)\in \dul{Z}^1(S/R,\dul{{\rm Pic}})$ if and only if the composition $$ \begin{matrix} I_{23}&\rTo^{I_{23}\otimes {\rm coev}_{I_{13}}}& I_{23}\otimes_{S^{\otimes 4}}I^*_{13}\otimes_{S^{\otimes 4}}I_{13}\\ &\rTo^{\tilde{\Delta}_3\otimes I^*_{13}\otimes \tilde{\Delta}_1}& I_{34}\otimes_{S^{\otimes 4}}I_{13}\otimes_{S^{\otimes 4}}I^*_{13}\otimes_{S^{\otimes 4}} I_{14}\otimes_{S^{\otimes 4}}I_{12}\\ &\rTo^{I_{34}\otimes{\rm ev}_{I_{13}}\otimes I_{14}\otimes I_{12}}& I_{34}\otimes_{S^{\otimes 4}} I_{14}\otimes_{S^{\otimes 4}}I_{12} \end{matrix} $$ equals the composition $$\begin{matrix} I_{23}&\rTo^{{\rm coev}_{I_{24}}\otimes I_{23}}& I_{24}\otimes_{S^{\otimes 4}} I^*_{24}\otimes_{S^{\otimes 4}}I_{23}\\ &\rTo^{\tilde{\Delta}_4\otimes I^*_{24}\otimes\tilde{\Delta}_2}& I_{34}\otimes_{S^{\otimes 4}} I_{14}\otimes_{S^{\otimes 4}}I^*_{24}\otimes_{S^{\otimes 4}}I_{24} \otimes_{S^{\otimes 4}}I_{12}\\ &\rTo^{I_{34}\otimes I_{14}\otimes {\rm ev}_{I_{24}}\otimes I_{12}}& I_{34}\otimes_{S^{\otimes 4}} I_{14}\otimes_{S^{\otimes 4}} I_{12}. \end{matrix}$$ Let $\{(e_i,e_i^*)~|~i=1,\cdots,n\}$ be a finite dual basis of $I$. 
For all $c\in I$, we compute \begin{eqnarray*} &&\hspace*{-15mm} \Bigl(\Bigl({I_{34}\otimes{\rm ev}_{I_{13}}\otimes I_{14}\otimes I_{12}}\Bigr) \circ \Bigl(\tilde{\Delta}_3\otimes I^*_{13}\otimes \tilde{\Delta}_1 \Bigr) \circ \Bigl(I_{23}\otimes {\rm coev}_{I_{13}} \Bigr)\Bigr)(c_{23})\\ &=& \Bigl(\Bigl({I_{34}\otimes{\rm ev}_{I_{13}}\otimes I_{14}\otimes I_{12}}\Bigr) \circ \Bigl(\tilde{\Delta}_3\otimes I^*_{13}\otimes \tilde{\Delta}_1 \Bigr)\Bigr) \bigl(\sum_i c_{23}\otimes e^*_{i13}\otimes e_{i13}\bigr)\\ &=& \Bigl({I_{34}\otimes{\rm ev}_{I_{13}}\otimes I_{14}\otimes I_{12}}\Bigr) \Bigl(\sum_i c_{(1)34}\otimes c_{(2)13}\otimes e^*_{i13}\otimes \tilde{\Delta}_1(e_{i13})\Bigr)\\ &=& \sum_i c_{(1)34}\otimes\tilde{\Delta}_1((\langle c_{(2)},e_i^*\rangle e_i)_{13}) = c_{(1)34}\otimes \tilde{\Delta}_1(c_{(2)13})\\ &=& c_{(1)34}\otimes c_{(2)(1)14}\otimes c_{(2)(2)12}, \end{eqnarray*} and \begin{eqnarray*} &&\hspace*{-15mm} \Bigl(\Bigl(I_{34}\otimes I_{14}\otimes {\rm ev}_{I_{24}}\otimes I_{12} \Bigr) \circ \Bigl(\tilde{\Delta}_4\otimes I^*_{24}\otimes\tilde{\Delta}_2\Bigr) \circ \Bigl({\rm coev}_{I_{24}}\otimes I_{23}\Bigr)\Bigr)(c_{23})\\ &=& \Bigl(\Bigl(I_{34}\otimes I_{14}\otimes {\rm ev}_{I_{24}}\otimes I_{12} \Bigr) \circ \Bigl(\tilde{\Delta}_4\otimes I^*_{24}\otimes\tilde{\Delta}_2\Bigr)\Bigr) \bigl(\sum_i e_{i24}\otimes e^*_{i24}\otimes c_{23}\bigr)\\ &=& \Bigl(I_{34}\otimes I_{14}\otimes {\rm ev}_{I_{24}}\otimes I_{12} \Bigr) \Bigl(\sum_i \tilde{\Delta}_4(e_{i24})\otimes e^*_{i24}\otimes c_{(1)24}\otimes c_{(2)12}\Bigr)\\ &=& \tilde{\Delta}_4(c_{(1)24})\otimes c_{(2)12} = c_{(1)(1)34}\otimes c_{(1)(2)14}\otimes c_{(2)12}. 
\end{eqnarray*} From \leref{2.1a}, it follows that $(I,\alpha)\in \dul{Z}^1(S/R,\dul{{\rm Pic}})$ if and only if the maps in ${\rm Hom}_{S^{\otimes 4}}(I_{23}, I_{34}\otimes_{S^{\otimes 4}} I_{14}\otimes_{S^{\otimes 4}} I_{12})$ associated to $(\Delta\otimes_SI)\circ \Delta$ and $(I\otimes_S\Delta)\circ \Delta$ in ${}_S{\rm Hom}_S(I,I\otimes_SI\otimes_S I)$ are equal. This is equivalent to the coassociativity of $\Delta$. \end{proof} Observe that the map $\tilde{\Delta}$ can be recovered from $\alpha$ using the following formula \begin{equation}\eqlabel{3.1.2} \tilde{\Delta}=(I_3\otimes I_1\otimes {\rm ev}_{I_2})\circ (\alpha^{-1}\otimes I_2). \end{equation} \begin{lemma}\lelabel{3.2} Let $I,\Delta,\tilde{\Delta},\alpha$ be as in \leref{3.1}, and take $J\in \dul{{\rm Pic}}(S)$. Then we have an isomorphism of bimodules with coassociative comultiplication $I\cong{\rm Can}_R(J;S)$ if and only if $(I,\alpha) \cong \delta_0(J)$ in $\dul{Z}^1(S/R,\dul{{\rm Pic}})$. \end{lemma} \begin{proof} Take $J\in {\rm Pic}(S)$. Then $\delta_0(J)=J_1\otimes_{S^{\otimes 2}} J^*_2= J^*\otimes J$, and \begin{eqnarray*} &&\hspace*{-2cm} \delta_1\delta_0(J)=\delta_0(J)_1 \otimes_{S^{\otimes 3}} \delta_0(J)_3 \otimes_{S^{\otimes 3}} \delta_0(J^*)_2\\ &=& J_{11}\otimes_{S^{\otimes 3}}J^*_{21}\otimes_{S^{\otimes 3}} J_{13}\otimes_{S^{\otimes 3}}J^*_{23}\otimes_{S^{\otimes 3}} J^*_{12}\otimes_{S^{\otimes 3}}J_{22}\\ &=& J_{12}\otimes_{S^{\otimes 3}}J^*_{13}\otimes_{S^{\otimes 3}} J_{13}\otimes_{S^{\otimes 3}}J^*_{23}\otimes_{S^{\otimes 3}} J^*_{12}\otimes_{S^{\otimes 3}}J_{23}. \end{eqnarray*} The map $\lambda_J$ is obtained by applying the evaluation map on tensor factors 1 and 5, 2 and 3, 4 and 6. Let $\{(e_i,e_i^*)~|~i=1,\cdots, n\}$ be a finite dual basis of $J$ as an $S$-module. 
Then $$\lambda_J^{-1}(1\otimes 1\otimes 1)= \sum_{i,j,k} e_{i12}\otimes e^*_{j13}\otimes e_{j13}\otimes e^*_{k23}\otimes e^*_{i12}\otimes e_{k23}.$$ Take $x^*\otimes 1\otimes x=x_{12}\otimes_{S^{\otimes 3}} x_{23}^*\in (J^*\otimes J)_2=J^*\otimes S\otimes J=J_{12}\otimes_{S^{\otimes 3}}J^*_{23}$. We then compute, using \equref{3.1.2}, \begin{eqnarray*} &&\hspace*{-12mm} \tilde{\Delta}(x^*\otimes 1\otimes x)= (\delta_0(J)_1 \otimes_{S^{\otimes 3}} \delta_0(J)_3 \otimes_{S^{\otimes 3}} {\rm ev}_{\delta_0(J)_2}) (\lambda_J^{-1}\otimes_{S^{\otimes 3}} \delta_0(J)_2)(x_{12}\otimes x_{23}^*)\\ &=& \sum_{i,j,k} e_{i12}\otimes e^*_{j13}\otimes e_{j13}\otimes e^*_{k23}\otimes \langle e^*_{i12}, x_{12}\rangle \otimes \langle x_{23}^*,e_{k23}\rangle\\ &=& \sum_{j} x_{12}\otimes e_{j13}\otimes e^*_{j13}\otimes x^*_{23}\\ &=& \sum_j x^*\otimes (e^*_j\otimes_S e_j)\otimes x\in J^*\otimes (J^*\otimes_S J)\otimes J. \end{eqnarray*} Consequently $$\Delta(x^*\otimes x)=\sum_j x^*\otimes \langle e^*_j, e_j\rangle \otimes x=x^*\otimes 1\otimes x$$ is the comultiplication on ${\rm Can}_R(J;S)$. In a similar way, starting from the comultiplication $\Delta$ on ${\rm Can}_R(J;S)$, we find that the map $\alpha$ defined in \equref{3.1.1} is precisely $\lambda_J$. \end{proof} \begin{theorem}\thlabel{3.3} Let $\mathcal{C}$ be a faithfully projective $S\otimes S$-module, and $\Delta:\ \mathcal{C}\to \mathcal{C}\otimes_S\mathcal{C}$ an $S$-bimodule map. We consider the corresponding map $\tilde{\Delta}:\ \mathcal{C}_2\to \mathcal{C}_3\otimes_{S^{\otimes 3}}\mathcal{C}_1$ in $\mathcal{M}_{S^{\otimes 3}}$ (cf. \leref{2.1}). Then the following assertions are equivalent. 
\begin{enumerate} \item $\Delta$ is coassociative and $\tilde{\Delta}$ is an isomorphism in $\mathcal{M}_{S^{\otimes 3}}$; \item $\mathcal{C}\in \dul{{\rm Pic}}(S^{\otimes 2})$ and $(\mathcal{C},\alpha)\in \dul{Z}^1(S/R, \dul{{\rm Pic}})$, with $\alpha$ defined by \equref{3.1.1}; \item $\mathcal{C}\in \dul{{\rm Pic}}(S^{\otimes 2})$ and $\mathcal{C}\otimes S$ is isomorphic to ${\rm Can}_{R\otimes S}(\mathcal{C};S\otimes S)$ as bimodules with coassociative comultiplication; \item there exists a faithfully flat commutative $R$-algebra $T$ such that $(\mathcal{C}\otimes_R T,\Delta\otimes_R T)$ is isomorphic to ${\rm Can}_T(I;S\otimes T)$, for some $I\in \dul{{\rm Pic}}(S\otimes T)$, as a bimodule with a coassociative comultiplication; \item $(\mathcal{C},\Delta)$ is a coring and $\tilde{\Delta}$ is an isomorphism in $\mathcal{M}_{S^{\otimes 3}}$. \end{enumerate} \end{theorem} \begin{proof} $\underline{ 1)\Rightarrow 2)}$. From the fact that $\tilde{\Delta}$ is an isomorphism, it follows that $\mathcal{C}_2\cong \mathcal{C}_3\otimes_{S^{\otimes 3}}\mathcal{C}_1$. Applying the functor $|-|:\ \mathcal{M}_{S^{\otimes 3}}\to \mathcal{M}_S$, we find that $|\mathcal{C}|\cong |\mathcal{C}|\otimes_S |\mathcal{C}|$. $\mathcal{C}$ is a faithfully projective $S\otimes S$-module, so $|\mathcal{C}|$ is a faithfully projective $S$-module. Its rank is an idempotent, so it is equal to one, and $|\mathcal{C}|\in \dul{{\rm Pic}}(S)$.\\ Now switch the second and third tensor factor in $\mathcal{C}_2\cong \mathcal{C}_3\otimes_{S^{\otimes 3}}\mathcal{C}_1$, and then apply $|-|$ to the first and second factor. We find that $|\mathcal{C}|\otimes S\cong \mathcal{C}\otimes_{S^{\otimes 2}} \tau(\mathcal{C})$, with $\tau(\mathcal{C})$ equal to $\mathcal{C}$ as an $R$-module, with newly defined $S\otimes S$-action $c\triangleleft(s\otimes t)= c(t\otimes s)$. Now $|\mathcal{C}|\otimes S\in \dul{{\rm Pic}}(S\otimes S)$, and it follows that $\mathcal{C}\in \dul{{\rm Pic}}(S\otimes S)$. 
It follows now from \leref{3.1} that $(\mathcal{C},\alpha)\in \dul{Z}^1(S/R,\dul{{\rm Pic}})$.\\ $\underline{ 2)\Rightarrow 3)}$. It follows from \leref{1.2a.2} that $(\mathcal{C}\otimes S,\alpha\otimes S)\cong \delta_0(\mathcal{C})$ in $\dul{Z}^1(S\otimes S/R\otimes S,\dul{{\rm Pic}})$. From \leref{3.2}, it follows that $\mathcal{C}\otimes S\cong {\rm Can}_{R\otimes S}(\mathcal{C};S\otimes S)$ as bimodules with coassociative comultiplication.\\ $\underline{ 3)\Rightarrow 4)}$ is obvious.\\ $\underline{ 4)\Rightarrow 1)}$. After faithfully flat base extension, $\Delta$ becomes coassociative, and $\tilde{\Delta}$ becomes an isomorphism. Hence $\Delta$ is coassociative and $\tilde{\Delta}$ is an isomorphism.\\ $\underline{ 1)\Rightarrow 5)}$. We have an isomorphism of $S^{\otimes 3}$-modules $\alpha:\ \mathcal{C}^*_2\otimes_{S^{\otimes 3}}\mathcal{C}_1\otimes_{S^{\otimes 3}} \mathcal{C}_3\to S^{\otimes 3}$. Applying the functor $|-|$, we find an isomorphism of $S$-modules $|\alpha |:\ |\mathcal{C}|\to S$. Now we consider the composition $\varepsilon= |\alpha|\circ \pi:\ \mathcal{C}\to S$. In the situation where $\mathcal{C}={\rm Can}_R(I;S)$, $\varepsilon$ is the counit of $\mathcal{C}$. By 4), $\varepsilon$ has the counit property after a base extension. Hence $\varepsilon$ itself has the counit property. So $(\mathcal{C},\Delta,\varepsilon)$ is a coring.\\ $\underline{5)\Rightarrow 1)}$ is obvious. \end{proof} If $(\mathcal{C},\Delta,\varepsilon)$ satisfies the equivalent conditions of \thref{3.3}, then we call $\mathcal{C}$ an Azumaya $S/R$-coring. The connection to Azumaya algebras is discussed in the following Proposition. \begin{proposition}\prlabel{3.3a} Let $S$ be a faithfully projective commutative $R$-algebra, and $\mathcal{C}$ an Azumaya $S/R$-coring. Then ${}_S{\rm Hom}(\mathcal{C},S)$ and ${\rm Hom}_S(\mathcal{C},S)$ are Azumaya $R$-algebras split by $S$.
\end{proposition} \begin{proof} Using \thref{3.3} and \equref{2.3.5}, we find the following isomorphisms of $S$-algebras: \begin{eqnarray*} &&\hspace*{-2cm} {}_S{\rm Hom}(\mathcal{C},S)\otimes S={}_S{\rm Hom}(\mathcal{C},S)\otimes {}_S{\rm Hom}(S,S)\cong {}_{S\otimes S}{\rm Hom}(\mathcal{C}\otimes S,S\otimes S)\\ &\cong & {}_{S\otimes S}{\rm Hom}({\rm Can}_{R\otimes S}(\mathcal{C};S\otimes S),S\otimes S)\cong {}_{R\otimes S}{\rm End}(\mathcal{C})^{\rm op}. \end{eqnarray*} \end{proof} \begin{theorem}\thlabel{3.4} Let $(\mathcal{C},\Delta)$ and $(\mathcal{C}',\Delta')$ be Azumaya $S/R$-corings, and consider the corresponding $(\mathcal{C},\alpha),(\mathcal{C}',\alpha')\in \dul{Z}^1(S/R,\dul{{\rm Pic}})$. Let $f:\ \mathcal{C}\to \mathcal{C}'$ be an isomorphism in $\dul{{\rm Pic}}(S\otimes S)$. Then $f$ is an isomorphism of corings if and only if $f$ defines an isomorphism in $\dul{Z}^1(S/R,\dul{{\rm Pic}})$. \end{theorem} \begin{proof} $f$ is an isomorphism of corings if and only if the following diagram commutes: $$\begin{diagram} \mathcal{C}&\rTo^{\Delta}&\mathcal{C}\otimes_S\mathcal{C}\\ \dTo_{f}&&\dTo_{f\otimes_S f}\\ \mathcal{C}'&\rTo^{\Delta'}&\mathcal{C}'\otimes_S\mathcal{C}'. \end{diagram}$$ This is equivalent to commutativity of the diagram $$\begin{diagram} \mathcal{C}_2&\rTo^{\tilde{\Delta}}&\mathcal{C}_3\otimes_{S^{\otimes 3}}\mathcal{C}_1\\ \dTo_{f_2}&&\dTo_{f_3\otimes_{S^{\otimes 3}} f_1}\\ \mathcal{C}'_2&\rTo^{\tilde{\Delta}'}&\mathcal{C}'_3\otimes_{S^{\otimes 3}}\mathcal{C}'_1. 
\end{diagram}$$ This is equivalent to commutativity of the right square in the next diagram $$\begin{diagram} S^{\otimes 3}&\rTo^{{\rm coev}_{\mathcal{C}_2}}& \mathcal{C}^*_2\otimes_{S^{\otimes 3}}\mathcal{C}_2&\rTo^{\mathcal{C}^*_2\otimes\tilde{\Delta}}&\mathcal{C}^*_2\otimes_{S^{\otimes 3}}\mathcal{C}_3\otimes_{S^{\otimes 3}}\mathcal{C}_1\\ \dTo_{=}&& \dTo_{(f^*_2)^{-1}\otimes_{S^{\otimes 3}} f_2}&&\dTo_{(f^*_2)^{-1}\otimes_{S^{\otimes 3}} f_3\otimes_{S^{\otimes 3}} f_1}\\ S^{\otimes 3}&\rTo^{{\rm coev}_{\mathcal{C}'_2}}& {\mathcal{C}'}^*_2\otimes_{S^{\otimes 3}}\mathcal{C}'_2&\rTo^{{\mathcal{C}'}^*_2\otimes\tilde{\Delta}'}&{\mathcal{C}'}^*_2\otimes_{S^{\otimes 3}}\mathcal{C}'_3 \otimes_{S^{\otimes 3}}\mathcal{C}'_1. \end{diagram}$$ The left square is automatically commutative. Commutativity of the full diagram is equivalent to $\alpha'\circ \delta_1(f)=\alpha$, as needed. \end{proof} Let ${\dul{\rm Az}}^c(S/R)$ be the category of Azumaya $S/R$-corings and isomorphisms of corings. \begin{proposition}\prlabel{3.5} $(\dul{\rm Az}^c(S/R),\otimes_{S^{\otimes 2}},{\rm Can}_R(S;S))$ is a monoidal category. \end{proposition} \begin{proof} Take two Azumaya $S/R$-corings $(\mathcal{C},\Delta)$ and $(\mathcal{C}',\Delta')$, and let $\tilde{D}$ be the following composition \begin{eqnarray*} (\mathcal{C}\otimes_{S^{\otimes 2}}\mathcal{C}')_2 = \mathcal{C}_2\otimes_{S^{\otimes 3}}\mathcal{C}'_2 &\rTo^{\tilde{\Delta}\otimes \tilde{\Delta}'}& \mathcal{C}_3\otimes_{S^{\otimes 3}}\mathcal{C}_1 \otimes_{S^{\otimes 3}} \mathcal{C}'_3\otimes_{S^{\otimes 3}}\mathcal{C}'_1\\ &\rTo^{\mathcal{C}_3\otimes \tau\otimes \mathcal{C}'_1}& \mathcal{C}_3\otimes_{S^{\otimes 3}}\mathcal{C}'_3 \otimes_{S^{\otimes 3}} \mathcal{C}_1\otimes_{S^{\otimes 3}}\mathcal{C}'_1\\ &=& (\mathcal{C}\otimes_{S^{\otimes 2}}\mathcal{C}')_3 \otimes_{S^{\otimes 3}}(\mathcal{C}\otimes_{S^{\otimes 2}}\mathcal{C}')_1. 
\end{eqnarray*} The comultiplication on $\mathcal{C}\otimes_{S^{\otimes 2}}\mathcal{C}'$ is the corresponding map $$D:\ \mathcal{C}\otimes_{S^{\otimes 2}}\mathcal{C}' \to (\mathcal{C}\otimes_{S^{\otimes 2}}\mathcal{C}')\otimes_S (\mathcal{C}\otimes_{S^{\otimes 2}}\mathcal{C}').$$ Observe that the $S$-bimodule structure on $\mathcal{C}\otimes_{S^{\otimes 2}}\mathcal{C}'$ is given by the formulas $$s(c\otimes c')=sc\otimes c'=c\otimes sc'~~;~~ (c\otimes c')t=c\otimes c' t= ct\otimes c'.$$ We have that $$\tilde{D}(c\otimes c')_2= (c_{(1)}\otimes_{S^{\otimes 2}} c'_{(1)})_{3} \otimes_{S^{(3)}} (c_{(2)}\otimes_{S^{\otimes 2}} c'_{(2)})_{1},$$ hence $$D(c\otimes c')=(c_{(1)}\otimes_{S^{\otimes 2}} c'_{(1)}) \otimes_S (c_{(2)}\otimes_{S^{\otimes 2}} c'_{(2)}).$$ It is then easy to see that $D$ is coassociative, and that $$\mathcal{C}\otimes_{S^{\otimes 2}} {\rm Can}_R(S;S)\cong \mathcal{C}\cong {\rm Can}_R(S;S)\otimes_{S^{\otimes 2}}\mathcal{C}.$$ \end{proof} \begin{corollary}\colabel{3.6} We have a monoidal isomorphism of categories $$H:\ \dul{\rm Az}^c(S/R)\to \dul{Z}^1(S/R,\dul{{\rm Pic}}).$$ \end{corollary} Consider the subgroup ${\rm Can}^c(S/R)$ of $K_0\dul{\rm Az}^c(S/R)$ consisting of isomorphism classes represented by an elementary coring ${\rm Can}_R(I;S)$ for some $I\in \dul{{\rm Pic}}(S)$. The quotient $${\rm Br}^c(S/R)=K_0\dul{\rm Az}^c(S/R)/{\rm Can}^c(S/R)$$ is called the {\sl relative Brauer group} of Azumaya $S/R$-corings. \begin{corollary}\colabel{3.7} We have an isomorphism of abelian groups $${\rm Br}^c(S/R)\cong H^1(S/R,\dul{{\rm Pic}}).$$ Consequently, we have an exact sequence \begin{eqnarray}\eqlabel{3.7.1} 0&\longrightarrow& H^1(S/R,{\mathbb G}_m)\longrightarrow {\rm Pic}(R) \longrightarrow H^0(S/R,{\rm Pic})\\ &\longrightarrow& H^2(S/R,{\mathbb G}_m)\rTo^{\alpha} {\rm Br}^c(S/R) \longrightarrow H^1(S/R,{\rm Pic})\nonumber\\ &\longrightarrow& H^{3}(S/R,{\mathbb G}_m)\nonumber. 
\end{eqnarray} \end{corollary} Let $f:\ S\to T$ be a morphism of faithfully flat commutative $R$-algebras. Then we have a functor $\tilde{f}:\ \dul{\rm Az}^c(S/R)\to \dul{\rm Az}^c(T/R)$ such that the following diagram commutes $$\begin{diagram} \dul{\rm Az}^c(S/R)&\rTo^{H}& \dul{Z}^1(S/R,\dul{{\rm Pic}})\\ \dTo^{\tilde{f}}&&\dTo_{f_*}\\ \dul{\rm Az}^c(T/R)&\rTo^{H}& \dul{Z}^1(T/R,\dul{{\rm Pic}}). \end{diagram}$$ $\tilde{f}(\mathcal{C})=\mathcal{C}\otimes_{S^{\otimes 2}}{\rm Can}_R(T;T)$, with comultiplication $\Delta_\mathcal{C}\otimes_{S^{\otimes 2}}\Delta$, where $\Delta$ is the comultiplication on the canonical coring ${\rm Can}_R(T;T)$. This induces a commutative diagram $$\begin{diagram} {\rm Br}^c(S/R)&\rTo^{\cong}& H^1(S/R,\dul{{\rm Pic}})\\ \dTo^{\tilde{f}}&&\dTo_{f_*}\\ {\rm Br}^c(T/R)&\rTo^{\cong}& H^1(T/R,\dul{{\rm Pic}}). \end{diagram}$$ Otherwise stated, the isomorphisms in \coref{3.7} define an isomorphism of functors $${\rm Br}^c(\bullet/R)\cong H^1(\bullet/R,\dul{{\rm Pic}}):\ \mathcal{R}\to\dul{Ab},$$ and \begin{equation}\eqlabel{3.7.2} {\rm colim} {\rm Br}^c(\bullet/R)\cong \check H^1(R_{\rm fl},\dul{{\rm Pic}}) \cong H^2(R_{\rm fl},{\mathbb G}_m). \end{equation} Let us describe the map $\alpha:\ H^2(S/R,{\mathbb G}_m)\to {\rm Br}^c(S/R)$. Let $u\in Z^2(S/R,{\mathbb G}_m)$ be a cocycle, and consider the coring $$\mathcal{C}={\rm Can}_R(S;S)_u,$$ which is equal to $S\otimes S$ as an $S$-bimodule, with comultiplication \begin{equation}\eqlabel{3.7.3} \Delta_u:\ S\otimes S\to S\otimes S\otimes_S S\otimes S\cong S\otimes S\otimes S,~~ \Delta_u(s\otimes t)=u^1s\otimes u^2\otimes u^3t. \end{equation} The coassociativity follows immediately from the cocycle condition; the counit $\varepsilon$ is given by the formula (see \leref{am3}) \begin{equation}\eqlabel{3.7.4} \varepsilon(s\otimes t)=|u|^{-1}st. \end{equation} The counit property follows from \leref{am3}. 
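As an elementary sanity check, observe what these formulas give for the trivial cocycle $u=1\otimes 1\otimes 1$: the comultiplication \equref{3.7.3} reduces to $$\Delta_u(s\otimes t)=s\otimes 1\otimes t,$$ which is precisely the comultiplication of the canonical coring ${\rm Can}_R(S;S)$, and, since $|u|=1$, the counit \equref{3.7.4} reduces to $\varepsilon(s\otimes t)=st$, the canonical counit. Thus ${\rm Can}_R(S;S)_u$ deforms the canonical coring, and the deformation disappears when $u$ is trivial.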
If $u$ is normalized, then the counit coincides with the counit in ${\rm Can}_R(S;S)$.\\ Let us compute the right dual ${\rm Hom}_S(\mathcal{C},S)$. As an $R$-module, ${\rm Hom}_S(\mathcal{C},S)={\rm Hom}_S(S\otimes S,S)\cong {\rm End}_R(S)$. We transport the multiplication on ${\rm Hom}_S(\mathcal{C},S)$ to ${\rm End}_R(S)$ as follows: take $\varphi,\psi\in {\rm End}_R(S)$, and define $f,g\in {\rm Hom}_S(S\otimes S,S)$ by $$f(s\otimes t)=\varphi(s)t~~;~~g(s\otimes t)=\psi(s)t.$$ Then we find, using \equref{right}, $$(\varphi * \psi)(s)=(f\# g)(s\otimes 1)=f(\psi(su^1)u^2\otimes u^3)= \varphi(\psi(su^1)u^2)u^3,$$ or \begin{equation}\eqlabel{3.8.1} \varphi*\psi=u^3\varphi u^2\psi u^1. \end{equation} In a similar way, we find that ${}_S{\rm Hom}(\mathcal{C},S)\cong {\rm End}_R(S)$, with twisted multiplication \begin{equation}\eqlabel{3.8.2} \varphi*\psi=u^1\psi u^2\varphi u^3. \end{equation} If $S$ is faithfully projective as an $R$-module, then it is well-known that there exists a morphism $$\alpha:\ H^2(S/R, {\mathbb G}_m)\to {\rm Br}(S/R).$$ More precisely, we can associate an Azumaya algebra $A(u)$ to any cocycle $u\in Z^2(S/R, {\mathbb G}_m)$. The construction of $A(u)$ was given first in \cite[Theorem 2]{RZ}. It is explained in \cite[V.2]{KO1} and \cite[7.5]{K2} using descent theory. Let us summarize the construction of $A(u)$, following \cite{KO1}. Take a cocycle $u=u^1\otimes u^2\otimes u^3= U^1\otimes U^2\otimes U^3$ with inverse $u^{-1}= v^1\otimes v^2\otimes v^3$, and consider the map $$\Phi:\ S\otimes S\otimes {\rm End}_R(S)\to S\otimes {\rm End}_R(S)\otimes S,~~ \Phi(s\otimes t\otimes \varphi)=su^1v^1\otimes u^3\varphi v^3 \otimes tu^2v^2.$$ Then $$A(u)=\{x\in S\otimes {\rm End}_R(S)~|~x\otimes 1=\Phi(1\otimes x)\}.$$ It will be convenient to use the canonical identification ${\rm End}_R(S)\cong S^*\otimes S$. 
Then $x=\sum_i s_i\otimes t^*_i\otimes t_i\in S\otimes S^*\otimes S$ lies in $A(u)$ if and only if $$\sum_i s_i\otimes t_i^*\otimes t_i\otimes 1= \sum_i u^1v^1\otimes t_i^*v^3\otimes u^3t_i\otimes u^2v^2s_i,$$ or $$\sum_i s_i\otimes 1\otimes t_i^*\otimes t_i= \sum_i u^1v^1\otimes u^2v^2s_i\otimes t_i^*v^3\otimes u^3t_i,$$ or \begin{equation}\eqlabel{3.8.3} x_2=x_1u_3u_4^{-1}~~{\rm or}~~x_2u_4=x_1u_3. \end{equation} Let ${\rm End}_R(S)_u$ be equal to ${\rm End}_R(S)$, with twisted multiplication given by \equref{3.8.1}. We know from \prref{3.3a} that ${\rm End}_R(S)_u$ is an Azumaya algebra split by $S$. \begin{theorem}\thlabel{3.8} Let $S$ be a faithfully projective commutative $R$-algebra, and $u\in Z^2(S/R,{\mathbb G}_m)$. Then we have an isomorphism of $R$-algebras $\gamma:\ {\rm End}_R(S)_u\to A(u)$. \end{theorem} \begin{proof} We define $\gamma$ by the following formula: $$\gamma(\varphi)=u^1\otimes u^3\varphi u^2,$$ or $$\gamma(t^*\otimes t)=u^1 \otimes t^*u^2\otimes u^3t.$$ We have to show that $x=\gamma(t^*\otimes t)$ satisfies \equref{3.8.3}. Indeed, $$x_2u_4=(1\otimes 1\otimes t^*\otimes t)u_2u_4=(1\otimes 1\otimes t^*\otimes t)u_1u_3=x_1u_3.$$ Let us next show that $\gamma$ is multiplicative. We want to show that $$\gamma(\psi)\circ \gamma(\varphi)=\gamma(\psi *\varphi)$$ or $$u^1U^1\otimes u^3\psi u^2 U^3\varphi U^2= U^1\otimes U^3u^3\psi u^2\varphi u^1 U^2.$$ It suffices that $$u^1U^1\otimes u^3\otimes u^2 U^3\otimes U^2= U^1\otimes U^3u^3\otimes u^2\otimes u^1 U^2,$$ or $$u^1U^1\otimes U^2\otimes u^2 U^3\otimes u^3= U^1\otimes u^1 U^2\otimes u^2\otimes U^3u^3.$$ This is precisely the cocycle condition $u_2u_4=u_1u_3$.\\ The inverse of $\gamma$ is given by $$\gamma^{-1}(\sum_i s_i\otimes t_i^*\otimes t_i)=\sum_i t_i^*v^2\otimes v^1v^3s_it_i,$$ for all $x=\sum_i s_i\otimes t_i^*\otimes t_i\in A(u)$. 
We compute that $$\gamma(\gamma^{-1}(x))= \gamma(\sum_i t_i^*v^2\otimes v^1v^3s_it_i) = u^1\otimes t_i^*v^2u^2\otimes u^3v^1v^3s_it_i.$$ It follows from \equref{3.8.3} that $$x_2=x_1u_3u_4^{-1}=x_1u_2u_1^{-1}= u^1\otimes s_iv^1\otimes t_i^*u^2v^2\otimes t_iu^3v^3.$$ Multiplying the second and the fourth tensor factor, we obtain that $$\gamma(\gamma^{-1}(x))=u^1\otimes t_i^*v^2u^2\otimes u^3v^1v^3s_it_i=x.$$ Finally $$ \gamma^{-1}(\gamma(t^*\otimes t))=\gamma^{-1}(u^1\otimes t^*u^2\otimes u^3t)= t^*u^2v^2\otimes v^1v^3u^1u^3t =t^*\otimes t.$$ \end{proof} \section{A Normal Basis Theorem}\selabel{normal} Let $S$ be a faithfully flat commutative $R$-algebra. We say that an $S\otimes S$-module with coassociative comultiplication has a normal basis if it is isomorphic to $S\otimes S$ as an $S$-bimodule. Examples are the Azumaya $S/R$-corings ${\rm Can}_R(S;S)_u$, with $u\in Z^2(S/R,{\mathbb G}_m)$, as considered above. The category of $S/R$-corings (resp. $S\otimes S$-modules with coassociative comultiplication) with normal basis will be denoted by $\dul{F}(S/R)$ (resp. $\dul{F}'(S/R)$). $(\dul{F}(S/R),\otimes_{S^{\otimes 2}},{\rm Can}_R(S;S))$ and $(\dul{F}'(S/R),\otimes_{S^{\otimes 2}},$ ${\rm Can}_R(S;S))$ are monoidal categories, and the sets of isomorphism classes $F(S/R)$ and $F'(S/R)$ are monoids. Let $FAz(S/R)$ be the subgroup of $F(S/R)$ consisting of isomorphism classes of $S/R$-Azumaya corings with normal basis. We have inclusions $$FAz(S/R)\subset F(S/R)\subset F'(S/R).$$ We will give a cohomological description of these monoids.\\ Take $u=u^1\otimes u^2\otimes u^3\in S^{\otimes 3}$. As usual, summation is implicitly understood. We do not assume that $u$ is invertible. We call $u$ a $2$-cosickle if $u_1u_3=u_2u_4$. If, in addition, $u^1u^2\otimes u^3$ and $u^1\otimes u^2u^3$ are invertible in $S^{\otimes 2}$, then we call $u$ an almost invertible $2$-cosickle. This implies in particular that $|u|=u^1u^2u^3$ is invertible in $S$.
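Indeed, the multiplication map $m:\ S\otimes S\to S$ is an algebra map, since $S$ is commutative, and $$|u|=m(u^1u^2\otimes u^3),$$ so the invertibility of $u^1u^2\otimes u^3$ in $S^{\otimes 2}$ implies the invertibility of $|u|$ in $S$.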
Almost invertible $2$-cosickles have been introduced and studied in \cite{Haile}. Let ${S'}^2(S/R)$ be the set of $2$-cosickles and $S^2(S/R)$ the set of almost invertible $2$-cosickles. ${S}^2(S/R)$ and ${S'}^2(S/R)$ are multiplicative monoids, and we have the following inclusions of monoids: $$B^2(S/R,{\mathbb G}_m) \subset Z^2(S/R,{\mathbb G}_m)\subset S^2(S/R)\subset {S'}^2(S/R)\subset S^{\otimes 3}.$$ We consider the quotient monoids $$M^2(S/R)=S^2(S/R)/B^2(S/R,{\mathbb G}_m)~~;~~{M'}^2(S/R)={S'}^2(S/R)/B^2(S/R,{\mathbb G}_m).$$ $M^2(S/R)$ is called the second (Hebrew) Amitsur cohomology monoid; the subgroup consisting of invertible classes is the usual (French) Amitsur cohomology group $H^2(S/R,{\mathbb G}_m)$ (the Hebrew-French dictionary is explained in detail in \cite{Haile}). We have the following inclusions: $$H^2(S/R,{\mathbb G}_m)\subset M^2(S/R)\subset {M'}^2(S/R).$$ \begin{theorem}\thlabel{4a.1} Let $S$ be a commutative faithfully flat $R$-algebra. An $S\otimes S$-module with coassociative comultiplication and normal basis is an Azumaya $S/R$-coring if and only if it represents an invertible element of $F'(S/R)$. Furthermore $$F'(S/R)\cong {M'}^2(S/R),~~F(S/R)\cong M^2(S/R) ~~{\rm and}~~FAz(S/R)\cong H^2(S/R,{\mathbb G}_m).$$ \end{theorem} \begin{proof} We define a map $\alpha':\ {S'}^2(S/R)\to {F'}(S/R)$ as follows: $\alpha'(u)={\rm Can}_R(S;S)_u$, with comultiplication given by \equref{3.7.3}. It is easy to see that $\alpha'$ is a map of monoids. $\alpha'$ is surjective: let $\mathcal{C}=S^{\otimes 2}$ with a coassociative comultiplication $\Delta_\mathcal{C}$, and take $$u=u^1\otimes u^2\otimes u^3=\Delta_\mathcal{C}(1\otimes 1)\in S\otimes S\otimes_S S\otimes S\cong S^{\otimes 3}.$$ From the coassociativity of $\Delta_\mathcal{C}$, it follows that $u_1u_3=u_2u_4$, so $u\in {S'}^2(S/R)$, and $\alpha'(u)=\mathcal{C}$.\\ Take $u\in {\rm Ker}\,\alpha'$. 
We then have a comultiplication preserving $S$-bimodule isomorphism $\varphi:\ {\rm Can}_R(S;S)\to {\rm Can}_R(S;S)_u$. Put $\varphi(1\otimes 1)=v\in S^{\otimes 2}$. From the fact that $\varphi$ is an automorphism of $S^{\otimes 2}$ as an $S$-bimodule, it follows that $v^{-1}=\varphi^{-1}(1\otimes 1)$. $\varphi$ preserves comultiplication, so it follows that \begin{eqnarray*} &&\hspace*{-1cm}v_1v_3=(\varphi\otimes_S\varphi)(\Delta_1(1\otimes 1))\\ &=& \Delta_u(\varphi(1\otimes 1))=\Delta_u(v)=v^1u^1\otimes u^2\otimes u^3v^2 = v_2u, \end{eqnarray*} hence $u=\delta_1(v)\in B^2(S/R)$. It follows that $F'(S/R)\cong {M'}^2(S/R)$ as monoids.\\ If $u\in {S}^2(S/R)$, then $\alpha'(u)={\rm Can}_R(S;S)_u$ has counit given by \equref{3.7.4}. Conversely, let $\mathcal{C}\in F(S/R)$, and take $u={\alpha'}^{-1}(\mathcal{C})$. Let $v=\varepsilon_\mathcal{C}(1\otimes 1)$. Using the counit property and the fact that $\varepsilon_\mathcal{C}$ is a bimodule map, we then compute that $$1\otimes 1=\varepsilon_\mathcal{C}(u^1\otimes u^2)\otimes u^3=u^1vu^2\otimes u^3;$$ $$1\otimes 1=u^1\otimes \varepsilon_\mathcal{C}(u^2\otimes u^3)=u^1\otimes u^2vu^3.$$ It follows that $u^1u^2\otimes u^3$ and $u^1\otimes u^2u^3$ are invertible, and that $v=|u|^{-1}$. Hence $u\in S^2(S/R)$, and it follows that $\alpha'$ restricts to an epimorphism of monoids $$\alpha:\ {S}^2(S/R)\to {F}(S/R).$$ It is clear that ${\rm Ker}\,\alpha=B^2(S/R,{\mathbb G}_m)$, and it follows that $M^2(S/R)\cong F(S/R)$.\\ If $u\in Z^2(S/R,{\mathbb G}_m)$, then $\alpha'(u)={\rm Can}_R(S;S)_u$ is an Azumaya $S/R$-coring. Conversely, let $\mathcal{C}$ be an Azumaya $S/R$-coring with normal basis, and $u={\alpha'}^{-1}(\mathcal{C})$. Then $[u]$ is invertible in $M^2(S/R)$, so there exists $v\in S^2(S/R)$ such that $uv\in B^2(S/R)$. Since every element in $B^2(S/R)$ is invertible in $S^{\otimes 3}$, it follows that $u\in {\mathbb G}_m(S^{\otimes 3})$, and $u\in Z^2(S/R,{\mathbb G}_m)$. 
So $\alpha$ restricts to an epimorphism $\alpha'':\ Z^2(S/R,{\mathbb G}_m)\to FAz(S/R)$. Clearly ${\rm Ker}\,\alpha''=B^2(S/R,{\mathbb G}_m)$, hence $H^2(S/R,{\mathbb G}_m)\cong FAz(S/R)$. \end{proof} \section{The Brauer group}\selabel{5} An Azumaya coring over $R$ is a pair $(S,\mathcal{C})$, where $S$ is a faithfully flat finitely presented commutative $R$-algebra, and $\mathcal{C}$ is an Azumaya $S/R$-coring. A morphism between two Azumaya corings $(S,\mathcal{C})$ and $(T,\mathcal{D})$ over $R$ is a pair $(f,\varphi)$, with $f:\ S\to T$ an algebra isomorphism, and $\varphi:\ \mathcal{C}\to\mathcal{D}$ an $R$-module isomorphism preserving the bimodule structure and the comultiplication, that is $$\varphi(scs')=f(s)\varphi(c)f(s')~~{\rm and}~~ \Delta_\mathcal{D}(\varphi(c))=\varphi(c_{(1)})\otimes_T\varphi(c_{(2)}),$$ for all $s,s'\in S$ and $c\in \mathcal{C}$. The counit is then preserved automatically. Let $\dul{\rm Az}^c(R)$ be the category of Azumaya corings over $R$. \begin{lemma}\lelabel{5.1} Suppose that $S$ and $T$ are commutative $R$-algebras. If $M\in \mathcal{M}_{S\otimes_R S}$ and $N\in \mathcal{M}_{T\otimes_R T}$, then $M\otimes_R N\in \mathcal{M}_{(S\otimes_RT)\otimes_R(S\otimes_RT)}$.\\ If $\mathcal{C}$ is an (Azumaya) $S/R$-coring, and $\mathcal{D}$ is an (Azumaya) $T/R$-coring, then $\mathcal{C}\otimes_R\mathcal{D}$ is an (Azumaya) $S\otimes_RT/R$-coring. \end{lemma} \begin{proof} The proof of the first two assertions is easy; the structure maps are the obvious ones. Let us show that $\mathcal{C}\otimes_R\mathcal{D}$ is an Azumaya $S\otimes_RT/R$-coring. \begin{eqnarray*} &&\hspace*{-15mm} \mathcal{C}\otimes_R\mathcal{D}\otimes_RS\otimes_RT\cong \mathcal{C}\otimes_RS\otimes_R\mathcal{D}\otimes_RT\\ &\cong & {\rm Can}_S(I;S\otimes_R S)\otimes {\rm Can}_T(J;T\otimes_R T)=(I^*\otimes_SI)\otimes_R(J^*\otimes_TJ)\\ &\cong & (I^*\otimes_RJ^*)\otimes_{S\otimes T}(I\otimes_RJ)={\rm Can}_{S\otimes T}(I\otimes_RJ; S\otimes_RT\otimes_RS\otimes_RT).
\end{eqnarray*} \end{proof} Let $(\mathcal{C},\Delta)$ be an Azumaya $S/R$-coring, and consider the corresponding $(\mathcal{C},\alpha)\in \dul{Z}^1(S/R,\dul{{\rm Pic}})$. Its inverse in $Z^1(S/R,\dul{{\rm Pic}})$ is represented by $(\mathcal{C}^*,(\alpha^*)^{-1})$. The corresponding coring will be denoted by $(\mathcal{C}^*,\overline{\Delta})$. \begin{proposition}\prlabel{5.2} Let $\mathcal{C}$ be an Azumaya $S/R$-coring. Then $\mathcal{C}\otimes \mathcal{C}^*$ is an elementary coring. \end{proposition} \begin{proof} Consider $H(\mathcal{C})=(\mathcal{C},\alpha)\in \dul{Z}^1(S/R,\dul{{\rm Pic}})$, and the maps $\eta_1,\eta_2:\ S\to S\otimes S$. It follows from \prref{1.2a.3} that $$[\eta_{1*}(\mathcal{C},\alpha)]=[(\mathcal{C}\otimes S^{\otimes 2},\alpha\otimes S^{\otimes 3})] = [\eta_{2*}(\mathcal{C},\alpha)]=[(S^{\otimes 2}\otimes \mathcal{C},S^{\otimes 3}\otimes \alpha)]$$ in $H^1(S\otimes S/R,\dul{{\rm Pic}})$. Consequently $$[H^{-1}(\eta_{1*}(\mathcal{C},\alpha))]=[\mathcal{C}\otimes {\rm Can}_R(S;S)]= [H^{-1}(\eta_{2*}(\mathcal{C},\alpha))]=[{\rm Can}_R(S;S)\otimes\mathcal{C}]$$ in ${\rm Br}^c(S\otimes S/R)$. The inverse of $[{\rm Can}_R(S;S)\otimes\mathcal{C}]$ in ${\rm Br}^c(S\otimes S/R)$ is represented by ${\rm Can}_R(S;S)\otimes\mathcal{C}^*$. It follows that $$(\mathcal{C}\otimes {\rm Can}_R(S;S))\otimes_{S^{\otimes 4}} ({\rm Can}_R(S;S)\otimes\mathcal{C}^*)\cong \mathcal{C}\otimes \mathcal{C}^*$$ is an elementary coring. \end{proof} Let $(S,\mathcal{C})$ and $(T,\mathcal{D})$ be Azumaya corings over $R$. We say that $\mathcal{C}$ and $\mathcal{D}$ are Brauer equivalent (notation: $\mathcal{C}\sim \mathcal{D}$) if there exist elementary corings $\mathcal{E}_1$ and $\mathcal{E}_2$ over $R$ such that $\mathcal{C}\otimes \mathcal{E}_1\cong \mathcal{D}\otimes \mathcal{E}_2$ as Azumaya corings over $R$. Since the tensor product of two elementary corings is elementary, it is easy to show that $\sim$ is an equivalence relation. 
Let ${\rm Br}^{\rm c}_{\rm fl}(R)$ be the set of equivalence classes of isomorphism classes of Azumaya corings over $R$. \begin{proposition}\prlabel{5.4} ${\rm Br}^{\rm c}_{\rm fl}(R)$ is an abelian group under the operation induced by the tensor product $\otimes_R$, with unit element $[R]$. \end{proposition} \begin{proof} It follows from \prref{5.2} that the inverse of $[(\mathcal{C},\Delta)]$ is $[(\mathcal{C}^*,\overline{\Delta})]$. \end{proof} \begin{lemma}\lelabel{5.5} Let $\mathcal{C},\mathcal{E}$ be Azumaya $S/R$-corings, and assume that $\mathcal{E}={\rm Can}_R(J;S)$ is elementary. Then the Azumaya corings $\mathcal{C}\otimes_{S^{\otimes 2}}\mathcal{E}$ and $\mathcal{C}$ are Brauer equivalent. \end{lemma} \begin{proof} Let $H(\mathcal{C})=(\mathcal{C},\alpha)$. We know that $H(\mathcal{E})=(J^*\otimes J,\lambda_J)$, and $$[(\mathcal{C}\otimes_{S^{\otimes 2}}\mathcal{E},\alpha\otimes_{S^{\otimes 3}}\lambda_J)]= [(\mathcal{C},\alpha)]$$ in $H^1(S/R,\dul{{\rm Pic}})$. From \prref{1.2a.3}, it follows that \begin{eqnarray*} &&\hspace*{-2cm} [\eta_{1*}(\mathcal{C},\alpha)]=[(S^{\otimes 2}\otimes\mathcal{C},S^{\otimes 3}\otimes\alpha)]= [\eta_{2*}(\mathcal{C}\otimes_{S^{\otimes 2}}\mathcal{E},\alpha\otimes_{S^{\otimes 3}}\lambda_J)]\\ &=& [(\mathcal{C}\otimes_{S^{\otimes 2}}\mathcal{E})\otimes S^{\otimes 2}, (\alpha\otimes_{S^{\otimes 3}}\lambda_J)\otimes S^{\otimes 3})] \end{eqnarray*} in $H^1(S\otimes S/R,\dul{{\rm Pic}})$. Applying $H^{-1}$ to both sides, we find that $$[{\rm Can}_R(S;S)\otimes \mathcal{C}]=[(\mathcal{C}\otimes_{S^{\otimes 2}}\mathcal{E})\otimes{\rm Can}_R(S;S)]$$ in ${\rm Br}^c(S\otimes S/R)$. 
Since the inverse of $[{\rm Can}_R(S;S)\otimes \mathcal{C}]$ in ${\rm Br}^c(S\otimes S/R)$ is $[{\rm Can}_R(S;S)\otimes \mathcal{C}^*]$, we obtain that $$[({\rm Can}_R(S;S)\otimes \mathcal{C}^*)\otimes_{S^{\otimes 4}} ((\mathcal{C}\otimes_{S^{\otimes 2}}\mathcal{E})\otimes{\rm Can}_R(S;S))]= [(\mathcal{C}\otimes_{S^{\otimes 2}}\mathcal{E})\otimes \mathcal{C}^*]=1$$ in ${\rm Br}^c(S\otimes S/R)$. Consequently $(\mathcal{C}\otimes_{S^{\otimes 2}}\mathcal{E})\otimes \mathcal{C}^*=\mathcal{F}$ is an elementary coring, and $$(\mathcal{C}\otimes_{S^{\otimes 2}}\mathcal{E})\otimes \mathcal{C}^*\otimes\mathcal{C}=\mathcal{F}\otimes \mathcal{C}.$$ We have seen in \prref{5.2} that $\mathcal{C}\otimes\mathcal{C}^*$ is elementary, and it follows that $\mathcal{C}\otimes_{S^{\otimes 2}}\mathcal{E}\sim \mathcal{C}$. \end{proof} \begin{lemma}\lelabel{5.5a} Let $f:\ S\to T$ be a morphism of faithfully flat commutative $R$-algebras. If $\mathcal{C}$ is an Azumaya $S/R$-coring, then $\mathcal{C}\sim \tilde{f}(\mathcal{C})=\mathcal{C}\otimes_{S^{\otimes 2}} {\rm Can}_R(T;T)$. \end{lemma} \begin{proof} As before, let $H(\mathcal{C})=(\mathcal{C},\alpha)$. Consider the maps $\varphi,\psi:\ S\to S\otimes T$ given by $$\varphi(s)=1\otimes f(s)~~;~~\psi(s)=s\otimes 1.$$ Applying \prref{1.2a.3}, we find that \begin{eqnarray*} &&\hspace*{-2cm} [\varphi_*(\mathcal{C},\alpha)]=[(S^{\otimes 2}\otimes (\mathcal{C}\otimes_{S^{\otimes 2}}T^{\otimes 2}),\lambda_S\otimes (\alpha\otimes_{S^{\otimes 3}}T^{\otimes 3}))]\\ &=& [\psi_*(\mathcal{C},\alpha)]=[(\mathcal{C}\otimes T^{\otimes 2},\alpha\otimes T^{\otimes 3})] \end{eqnarray*} in $H^1(S\otimes T/R,\dul{{\rm Pic}})$. Consequently \begin{eqnarray*} &&\hspace*{-2cm} [H^{-1}(\varphi_*(\mathcal{C},\alpha))]=[{\rm Can}_R(S;S)\otimes (\mathcal{C}\otimes_{S^{\otimes 2}}{\rm Can}_R(T;T))]\\ &=& [H^{-1}(\psi_*(\mathcal{C},\alpha))]=[\mathcal{C}\otimes {\rm Can}_R(T;T)] \end{eqnarray*} in ${\rm Br}^c(S\otimes T/R)$.
The inverse of $[\mathcal{C}\otimes {\rm Can}_R(T;T)]$ in ${\rm Br}^c(S\otimes T/R)$ is $[\mathcal{C}^*\otimes {\rm Can}_R(T;T)]$, and it follows that \begin{eqnarray*} &&\hspace*{-2cm} (\mathcal{C}^*\otimes {\rm Can}_R(T;T))\otimes_{S\otimes S\otimes T\otimes T} ({\rm Can}_R(S;S)\otimes (\mathcal{C}\otimes_{S^{\otimes 2}}{\rm Can}_R(T;T)))\\ &\cong &\mathcal{C}^*\otimes (\mathcal{C}\otimes_{S^{\otimes 2}}{\rm Can}_R(T;T)) \cong \mathcal{E} \end{eqnarray*} where $\mathcal{E}$ is an elementary $S\otimes T/R$-coring. We then have $$\mathcal{C}\otimes \mathcal{C}^*\otimes (\mathcal{C}\otimes_{S^{\otimes 2}}{\rm Can}_R(T;T))\cong \mathcal{C}\otimes\mathcal{E}.$$ We know from \prref{5.2} that $\mathcal{C}\otimes\mathcal{C}^*$ is elementary, so we can conclude that $\mathcal{C}\sim \tilde{f}(\mathcal{C})=\mathcal{C}\otimes_{S^{\otimes 2}} {\rm Can}_R(T;T)$. \end{proof} \begin{proposition}\prlabel{5.6} Let $S$ be a commutative faithfully flat $R$-algebra. We have a well-defined group monomorphism $$i_S:\ {\rm Br}^{\rm c}(S/R)\to {\rm Br}^{\rm c}_{\rm fl}(R),~~ i_S([\mathcal{C}])=[\mathcal{C}].$$ If $f:\ S\to T$ is a morphism of commutative faithfully flat $R$-algebras, then we have a commutative diagram $$\begin{diagram} {\rm Br}^c(S/R)&\rTo^{i_S}&{\rm Br}^c_{\rm fl}(R)\\ \dTo_{\tilde{f}}&\NE^{i_T}&\\ {\rm Br}^c(T/R).&& \end{diagram}$$ \end{proposition} \begin{proof} It follows from \leref{5.5} that $i_S$ is well-defined. Let us show that $i_S$ is a group homomorphism. Consider two Azumaya $S/R$-corings $\mathcal{C}$ and $\mathcal{D}$. Then the $S\otimes S/R$-coring $\mathcal{C}^*\otimes \mathcal{C}=\mathcal{E}_1$ and the $S/R$-coring $\mathcal{C}\otimes_{S^{\otimes 2}} \mathcal{C}^*=\mathcal{E}_2$ are both elementary.
From \leref{5.5}, it follows that \begin{eqnarray*} \mathcal{C}\otimes \mathcal{D}&\sim & (\mathcal{C}\otimes \mathcal{D})\otimes_{S^{\otimes 4}} (\mathcal{C}^*\otimes \mathcal{C})\\ &\cong & (\mathcal{C}\otimes_{S^{\otimes 2}} \mathcal{C}^*)\otimes (\mathcal{D}\otimes_{S^{\otimes 2}} \mathcal{C})\\ &\sim &\mathcal{D}\otimes_{S^{\otimes 2}} \mathcal{C}\cong \mathcal{C}\otimes_{S^{\otimes 2}} \mathcal{D}. \end{eqnarray*} Consequently $$i_S[\mathcal{C}\otimes_{S^{\otimes 2}} \mathcal{D}]=[\mathcal{C}\otimes \mathcal{D}]=i_S[\mathcal{C}]i_S[\mathcal{D}].$$ It is clear that $i_S$ is injective.\\ Finally, it follows from \leref{5.5a} that $i_S[\mathcal{C}]=[\mathcal{C}]=[\mathcal{C}\otimes_{S^{\otimes 2}} {\rm Can}_R(T;T)]=(i_T\circ \tilde{f})[\mathcal{C}]$. \end{proof} \begin{theorem}\thlabel{5.7} Let $R$ be a commutative ring. Then $$ {\rm Br}_{\rm fl}^c(R)\cong {\rm colim} {\rm Br}^c(\bullet/R)\cong H^2(R_{\rm fl},{\mathbb G}_m).$$ \end{theorem} \begin{proof} It follows from \prref{5.6} and the definition of colimit that we have a map $$i:\ {\rm colim} {\rm Br}^c(\bullet/R)\to {\rm Br}_{\rm fl}^c(R).$$ Suppose that $A$ is an abelian group, and suppose that we have a collection of maps $\alpha_S:\ {\rm Br}^{\rm c}(S/R)\to A$ such that $$\alpha_T\circ \tilde{f}=\alpha_S,$$ for every morphism of faithfully flat commutative $R$-algebras $f:\ S\to T$. Take $x\in {\rm Br}^c_{\rm fl}(R)$. Then $x$ is represented by an Azumaya $S/R$-coring $\mathcal{C}$. We claim that the map $$\alpha:\ {\rm Br}^c_{\rm fl}(R)\to A,~~\alpha(x)=\alpha_S[\mathcal{C}]$$ is well-defined. Take an Azumaya $T/R$-coring $\mathcal{D}$ that also represents $x$. 
Then $$\mathcal{C}\otimes {\rm Can}_R(T;T)\sim \mathcal{C}\sim \mathcal{D}\sim \mathcal{D}\otimes {\rm Can}_R(S;S)$$ and it follows from the injectivity of $i_{S\otimes T}$ (see \prref{5.6}) that $[\mathcal{C}\otimes {\rm Can}_R(T;T)]=[\mathcal{D}\otimes {\rm Can}_R(S;S)]$ in ${\rm Br}^c(S\otimes T/R)$, hence $$\alpha_S[\mathcal{C}]=\alpha_{S\otimes T}[\mathcal{C}\otimes {\rm Can}_R(T;T)]=\alpha_{S\otimes T} [\mathcal{D}\otimes {\rm Can}_R(S;S)]=\alpha_T[\mathcal{D}],$$ as needed. We have constructed $\alpha$ in such a way that the diagrams $$\begin{diagram} {\rm Br}^c(S/R)&\rTo^{i_S}&{\rm Br}^c_{\rm fl}(R)\\ &\SE_{\alpha_S}&\dTo_{\alpha}\\ &&A \end{diagram}$$ commute. This means that ${\rm Br}^c_{\rm fl}(R)$ satisfies the required universal property. Finally, apply \equref{3.7.2}. \end{proof} \begin{corollary}\colabel{5.8} Let $S$ be a faithfully flat commutative $R$-algebra. Then $${\rm Ker}\,({\rm Br}_{\rm fl}^c(R)\to {\rm Br}_{\rm fl}^c(S))= {\rm Br}^c(S/R).$$ \end{corollary} \begin{proof} Applying \coref{3.7}, \equref{villa2}, \equref{et1} with $q=1$ and \thref{5.7}, we find that \begin{eqnarray*} {\rm Br}^c(S/R)&\cong& H^1(S/R,\dul{{\rm Pic}})\cong H^1(S/R,C^1)\\ &\cong& {\rm Ker}\,(H^2(R_{\rm fl},{\mathbb G}_m)\to H^2(S_{\rm fl},{\mathbb G}_m))\\ &\cong& {\rm Ker}\,({\rm Br}_{\rm fl}^c(R)\to {\rm Br}_{\rm fl}^c(S)). \end{eqnarray*} \end{proof} All our results remain valid if we replace the condition that $S$ is faithfully flat by the condition that $S$ is an \'etale covering, a faithfully projective extension or a Zariski covering of $R$ (see e.g. \cite{KO1} for precise definitions). It follows from Artin's Refinement Theorem \cite{A1} that the (injective) map $$\check H^2(R_{\rm et},{\mathbb G}_m)\to H^2(R_{\rm et},{\mathbb G}_m)$$ is an isomorphism. We will now present an algebraic interpretation of $\check H^2(R_{\rm fl},{\mathbb G}_m)$ independent of Artin's Theorem.
Consider the subgroup ${\rm Br}^{\rm cnb}_{\rm fl}(R)$ consisting of classes of Azumaya corings represented by an Azumaya coring with normal basis. \begin{theorem}\thlabel{5.9} Let $R$ be a commutative ring. Then $${\rm Br}^{\rm cnb}_{\rm fl}(R)\cong \check H^2(R_{\rm fl},{\mathbb G}_m).$$ \end{theorem} \begin{proof} Let $S$ be a faithfully flat commutative $R$-algebra, and consider the maps $$\begin{diagram} &&{\rm Br}^c(S/R)\\ &&\dTo^{\gamma}\\ H^2(S/R,{\mathbb G}_m)&\rTo^{\beta}&H^1(S/R,\dul{{\rm Pic}})\\ \dTo{}&&\dTo^{}\\ \check H^2(R_{\rm fl},{\mathbb G}_m)&\hookrightarrow&H^2(R_{\rm fl},{\mathbb G}_m). \end{diagram}$$ If $\mathcal{C}$ is an Azumaya $S/R$-coring with normal basis, then $\gamma[\mathcal{C}]\in {\rm Im}\,(\beta)$, so the image of $\gamma[\mathcal{C}]$ in $H^2(R_{\rm fl},{\mathbb G}_m)$ lies in the subgroup $\check H^2(R_{\rm fl},{\mathbb G}_m)$. It follows that we have a monomorphism $\kappa:\ {\rm Br}^{\rm cnb}_{\rm fl}(R) \to \check H^2(R_{\rm fl},{\mathbb G}_m)$ such that the following diagram commutes: $$\begin{diagram} {\rm Br}^{\rm cnb}_{\rm fl}(R)&\hookrightarrow & {\rm Br}^{\rm c}_{\rm fl}(R)\\ \dTo^{\kappa}&&\dTo_{\cong}\\ \check H^2(R_{\rm fl},{\mathbb G}_m)&\hookrightarrow& H^2(R_{\rm fl},{\mathbb G}_m). \end{diagram}$$ It is clear that $\kappa$ is surjective. \end{proof} \end{document}
\begin{document} \title{Self-organized PT-symmetry of exciton-polariton condensate in a double-well potential} \author{P.A. Kalozoumis} \affiliation{Materials Science Department, School of Natural Sciences, University of Patras, GR-26504 Patras, Greece} \affiliation{Hellenic American University, 436 Amherst st, Nashua, NH 0306 USA} \affiliation{Institute of Electronic Structure and Laser, FORTH, GR-70013 Heraklion, Crete, Greece} \author{D. Petrosyan} \affiliation{Institute of Electronic Structure and Laser, FORTH, GR-70013 Heraklion, Crete, Greece} \affiliation{A. Alikhanyan National Science Laboratory (YerPhI), 0036 Yerevan, Armenia} \date{\today} \begin{abstract} We investigate the dynamics and stationary states of a semiconductor exciton-polariton condensate in a double well potential. We find that upon the population build up of the polaritons by above-threshold laser pumping, coherence relaxation due to the phase fluctuations of the polaritons drives the system into a stable fixed point corresponding to a self-organized PT-symmetric phase. \end{abstract} \maketitle \section{Introduction} \label{sec:intro} One of the prominent research directions in semiconductor optics is the study of exciton-polariton condensation in microcavities. Exciton-polaritons are hybrid quasi-particles of strongly coupled quantum well excitons and cavity photons~\cite{DengRevModPhys2010,CarusottoRevModPhys2010} which retain the properties of both matter and light. The excitonic part mediates effective interactions between the polaritons, giving rise to interesting nonlinear properties, whereas the small effective mass of the photonic component enables Bose-Einstein condensation even at ambient temperatures~\cite{LagoudakisNatPhys2008,YamaguchiPRL2013,SchneiderNature2013}, in contrast to their ultracold atomic counterparts~\cite{GoblotPRL2016}. 
The short lifetime of the polariton condensate renders it an open system that requires continuous replenishing from the excitonic reservoir via external pumping. After their experimental realization~\cite{KasprzakNature2006,BaliliScience2007}, the polariton condensates have been shown to be an ideal system for studies of many effects at the interface of non-equilibrium physics and nonlinear dynamics. The intrinsic nonlinear dynamics of polariton systems lead to a variety of effects, such as the appearance of a Mach-Cherenkov cone in a supersonic flow~\cite{AmoNatPhys2009}, the formation of quantized vortices~\cite{DominiciNatComm2018}, and dark solitons~\cite{GonzalezPhysLettA2017}. Moreover, the polariton condensates can be engineered with high precision by the external laser fields \cite{BaliliScience2007,Ohadi2017, Ohadi2018, Orfanakis2021}. Finally, such systems are promising candidates for various applications in photonic devices, such as switches, gates and transistors~\cite{LagoudakisNatPhys2008}, as well as for quantum simulators of interacting spin models~\cite{BerloffNatMat2017}. The ``open'' nature of the system, featuring gain and loss, leads to interesting implications when the dissipative dynamics become pseudo-Hermitian. This is the case in parity-time (PT) symmetric setups, where dissipation losses are exactly balanced by the pumping gain. Systems with PT symmetry have been a flourishing and broad research field, extending from quantum mechanics~\cite{Bender2002} and field theory~\cite{Bender2004} to optics~\cite{Ganainy2007} and acoustics~\cite{Fleury2015}. Balancing the inherent losses against the laser pumping in such a way as to preserve the PT symmetry of the system provides an effective framework in which a polariton system can exhibit coherent, Hermitian-like dynamics for relatively long times.
Recent works have shown promising results, such as permanent Rabi oscillations~\cite{ChestnovSciRep2016}, multistability and condensation below threshold~\cite{LienPRB2015}, exceptional points in polaritonic cavities below lasing threshold~\cite{KhurginOPTICA2020}, and coherent oscillations of a two-species polariton mixture in a double well~\cite{KalozoumisEPL2020}. The latter has been shown to be able to simulate the dynamics of a pair of spin-1/2 particles (qubits) in the presence of exchange interaction. However, polariton structures in the framework of PT symmetry have not yet been extensively studied, and more effort is required to understand the rich landscape of phenomena which emerge from this framework. In this work we study the dynamics of an exciton-polariton condensate in a double well potential, in the presence of time-varying exciton populations and phase fluctuations. We consider the coupled-mode equations for the polaritons supplemented by the rate equations for the laser-pumped exciton reservoirs, and derive analytically the steady state solutions for the exciton and polariton populations as well as their coherence. We find that, when the total pumping rate is above threshold, the system automatically attains the PT symmetric state, independently of the pumping rates of the individual sites. Employing numerical simulations for several different pumping rates and initial conditions, we verify our analytical findings. We also study the stability and robustness of our results in the presence of phase noise caused by the unavoidable phase fluctuations of the polaritons. \section{The exciton-polariton system}\label{II} \begin{figure} \caption{Schematic top view (left panel) and side view (right panel) of a polariton system in a double quantum well. Spatially shaped pumping lasers populate with rates $P_L$ and $P_R$ the reservoir excitons $n_{L,R}$.} \label{fig1} \end{figure} The system under consideration is schematically illustrated in Fig.~\ref{fig1}.
One or more layers of semiconductor quantum wells are placed inside the semiconductor microcavity near the antinode of the resonant cavity field mode. Spatially shaped pumping lasers replenish continuously the exciton reservoirs and simultaneously create confining potentials for the polariton condensate. Assuming a tight-binding double-well potential, the exciton-polariton system can be described by the following set of equations for the polariton condensate wavefunctions $\psi_{L}$ and $\psi_{R}$ in the left ($L$) and right ($R$) wells \cite{WoutersPRL2007}: \begin{subequations} \label{full_model} \begin{eqnarray} i \partial_t \psi_{L} &=& \left[ \epsilon_L + \eta |\psi_{L}|^{2} \right] \psi_L + \frac{i}{2} \left[ R n_{L} - \kappa \right] \psi_L - J \; \psi_{R} , \qquad \label{site_one} \\ i \partial_t \psi_{R} &=& \left[ \epsilon_R + \eta |\psi_{R}|^{2} \right] \psi_R + \frac{i}{2} \left[ R n_{R} - \kappa \right] \psi_R - J^{*}\psi_{L} , \label{site_two} \end{eqnarray} \end{subequations} where $\epsilon_{L,R}$ are the single-particle energies, $\eta$ is the nonlinear interaction strength, $\kappa$ is the decay rate of the polaritons due to the exciton recombination and cavity photon losses (assumed the same for both wells), and $J$ is the Josephson (tunnel) coupling between the wells. The polariton equations are supplemented by the equations for the populations $n_{L,R}$ of the reservoir excitons, \begin{subequations} \label{resLR} \begin{eqnarray} \partial_t n_{L} &=& P_{L} - \Gamma n_{L} -R n_{L} |\psi_{L}|^2 , \label{res_one} \\ \partial_t n_{R} &=& P_{R} - \Gamma n_{R} -R n_{R} |\psi_{R}|^2 , \label{res_two} \end{eqnarray} \end{subequations} which are created by laser pumping with rates $P_{L,R}$, decay with rate $\Gamma$, and scatter into the polariton condensate with rate $R$. In Appendix~\ref{sec:app} we briefly outline the PT-symmetry conditions for a condensate in a double well potential. 
Neglecting for the moment the non-linearity $\eta$, the PT-symmetry condition is satisfied when $\epsilon_{L,R} = \epsilon \, (=0)$ and the gain in one well exactly compensates the losses in the other, \begin{equation} \gamma_L \equiv \frac{1}{2}[R n_{L} - \kappa] = - \frac{1}{2}[R n_{R} - \kappa] \equiv -\gamma_R , \label{gammadefs} \end{equation} as per Eqs.~(\ref{site_one}) and ~(\ref{site_two}), which leads to $n_{L} + n_R = \frac{2\kappa}{R}$. The threshold pumping at which the polariton condensate starts to form can be obtained from the condition that the sum of the gain and loss in both wells is non-negative, $\gamma_L + \gamma_R \geq 0$. With Eq.~(\ref{gammadefs}), this condition is equivalent to \begin{equation} \label{Thexcitons} n_{L} + n_R \geq \frac{2\kappa}{R}. \end{equation} Note that exactly at the threshold, this is the same condition as for the PT-symmetry. If we consider the stationary regime for the reservoir excitons, $\partial_t n_{L,R}=0$, we find from Eqs.~(\ref{resLR}) the steady-state values \begin{equation} n_{L,R} = \frac{P_{L,R}}{\Gamma+R|\psi_{L,R}|^2} . \label{excitons_stat_1} \end{equation} Exactly at the threshold for condensate formation, the values of the polariton populations in both wells, $|\psi_{L,R}|^2$, are marginally equal to zero and we have $n_{L,R} \simeq P_{L,R}/\Gamma$. Substituting these values into Eq.~(\ref{Thexcitons}), we find the threshold pumping condition \begin{equation} \label{threshold_pumping} P_L+P_R \geq \frac{2\kappa \Gamma}{R} , \end{equation} above which the condensate begins to form, while for $P_L+P_R<2\kappa \Gamma/R$ the condensate decays to zero. 
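This threshold behavior can be reproduced with a minimal numerical sketch (not part of the original analysis): a forward-Euler integration of Eqs.~(\ref{full_model})--(\ref{resLR}) with illustrative parameter values $\kappa=\Gamma=R=J=1$, $\eta=0$, $\epsilon_{L,R}=0$, so that the threshold is $P_L+P_R=2\kappa\Gamma/R=2$.

```python
# Forward-Euler integration of the coupled-mode and reservoir equations;
# illustrative units kappa = Gamma = R = J = 1, eta = 0, eps_{L,R} = 0.
kappa = Gamma = R = J = 1.0

def final_population(PL, PR, T=40.0, dt=2e-3):
    psiL = psiR = 0.1 + 0.0j          # small seed condensate in each well
    nL = nR = 0.0                     # initially empty exciton reservoirs
    for _ in range(int(T / dt)):
        gL = 0.5 * (R * nL - kappa)   # net gain/loss in the left well
        gR = 0.5 * (R * nR - kappa)   # net gain/loss in the right well
        dpsiL = gL * psiL + 1j * J * psiR
        dpsiR = gR * psiR + 1j * J * psiL
        dnL = PL - Gamma * nL - R * nL * abs(psiL) ** 2
        dnR = PR - Gamma * nR - R * nR * abs(psiR) ** 2
        psiL, psiR = psiL + dt * dpsiL, psiR + dt * dpsiR
        nL, nR = nL + dt * dnL, nR + dt * dnR
    return abs(psiL) ** 2 + abs(psiR) ** 2     # total polariton population

below = final_population(0.5, 0.5)    # P_L + P_R = 1 < 2: condensate decays
above = final_population(2.0, 2.0)    # P_L + P_R = 4 > 2: condensate grows
```

With these parameters the below-threshold run decays to a negligible population, while the above-threshold run saturates at a finite total population.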
\begin{figure*} \caption{Dynamics of the polariton populations $|\psi_{L,R}|^2$ for different pumping rates and initial conditions.} \label{fig2} \end{figure*} In the upper panels of Fig.~\ref{fig2} we show the polariton populations $|\psi_{L,R}|^2$ for different pumping rates and initial conditions, with and without non-linear interaction, as obtained from the numerical solution of Eqs.~(\ref{full_model}-\ref{resLR}). The insets show the evolution of the exciton populations $n_{L}$ and $n_{R}$ and their sum $n_{L}+n_{R}$. For pumping below threshold, we observe a decay of the initial (seed) polariton populations with rate $\gamma_L+ \gamma_R \simeq \frac{R}{2\Gamma} (P_L + P_R) -\kappa < 0$, accompanied by Rabi-like oscillations, while the exciton populations settle to $n_{L,R}=P_{L,R}/\Gamma$. For pumping above threshold, the polariton populations grow until reaching certain values $|\psi_{L,R}|^2$ at which $n_{L} + n_R \simeq \frac{2\kappa}{R}$, while the Rabi-like oscillations persist or are eventually damped, depending on the initial conditions or presence of non-linear interaction, as discussed below. Remarkably, the polariton and exciton populations increase and decrease, respectively, reaching the same stationary values which satisfy the PT-symmetry conditions, independently of the pumping rates, as long as pumping is retained above threshold.
\section{Equivalence of the PT-symmetry and steady state conditions} \label{sec:PTSS} To understand the dynamics of the system, it is convenient to express Eqs.~(\ref{full_model}) in terms of the polariton populations $|\psi_{L,R}|^2$ and coherence $\Theta \equiv \psi_L \psi_{R}^*$ as \begin{subequations} \label{population_coherence} \begin{eqnarray} \partial_t |\psi_L|^2 &=& 2 \gamma_{L} |\psi_{L}|^2 + 2J \textrm{Im} \Theta \label{population_coherence1} , \\ \partial_t |\psi_R|^2 &=& 2 \gamma_{R} |\psi_{R}|^2 - 2J \textrm{Im} \Theta \label{population_coherence2} , \\ \partial_t \Theta &=& -i \left[ \epsilon_L - \epsilon_R + \eta ( |\psi_{L}|^2 -|\psi_{R}|^2 ) \right] \Theta \nonumber \\ & & + (\gamma_L+ \gamma_R) \Theta - i J \left(|\psi_{L}|^2 - |\psi_{R}|^2 \right) \label{population_coherence3} . \end{eqnarray} \end{subequations} Note that below threshold, $(\gamma_L+ \gamma_R) < 0$, both the polariton populations and their coherence decay to zero, as already mentioned above. Let us assume $\epsilon_{L,R} = 0$ and consider first the case of vanishing nonlinearity $\eta=0$. Equation~(\ref{population_coherence3}) indicates that the coherence decays only if its real part is nonzero. In turn, the solution for the real part of the coherence is \begin{equation} \label{real_coh_sol} \textrm{Re} \Theta (t) = \textrm{Re} \Theta (0) \; e^{ \int_0^t (\gamma_L + \gamma_R) dt'} . \end{equation} Hence, if initially $\textrm{Re} \Theta(0) = 0$, it will remain so at later times, $\textrm{Re}\Theta(t) =0 \; \forall \; t >0$. Then the dynamics of the system, if pumped above threshold, will exhibit continuous Rabi-like oscillations with frequency $J$, while no steady state will be attained, as in Fig.~\ref{fig2}(b). 
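The passage from Eqs.~(\ref{full_model}) to Eqs.~(\ref{population_coherence}) is chain-rule algebra, $\partial_t |\psi_{L}|^2 = 2\,\textrm{Re}(\psi_L^* \partial_t \psi_L)$ and $\partial_t \Theta = (\partial_t\psi_L)\psi_R^* + \psi_L (\partial_t\psi_R)^*$, and can be machine-checked at a generic phase-space point (a minimal sketch, with arbitrary illustrative values and real $J$):

```python
# Verify Eqs. (7) against Eqs. (1): evaluate both sides of the identities
# for d|psi_{L,R}|^2/dt and dTheta/dt at one generic phase-space point.
epsL, epsR, eta, J, kappa, R = 0.3, -0.2, 0.7, 1.3, 0.9, 1.1
psiL, psiR = 0.4 + 0.5j, -0.2 + 0.8j
nL, nR = 1.4, 0.6
gL, gR = 0.5 * (R * nL - kappa), 0.5 * (R * nR - kappa)

# Right-hand sides of the coupled-mode equations (1a), (1b)
dpsiL = -1j * (epsL + eta * abs(psiL) ** 2) * psiL + gL * psiL + 1j * J * psiR
dpsiR = -1j * (epsR + eta * abs(psiR) ** 2) * psiR + gR * psiR + 1j * J * psiL

# Chain rule: d|psi|^2/dt = 2 Re(psi* dpsi);  dTheta/dt = dpsiL psiR* + psiL dpsiR*
Theta = psiL * psiR.conjugate()
dPL = 2 * (psiL.conjugate() * dpsiL).real
dPR = 2 * (psiR.conjugate() * dpsiR).real
dTheta = dpsiL * psiR.conjugate() + psiL * dpsiR.conjugate()

# Right-hand sides of Eqs. (7a)-(7c)
rhsPL = 2 * gL * abs(psiL) ** 2 + 2 * J * Theta.imag
rhsPR = 2 * gR * abs(psiR) ** 2 - 2 * J * Theta.imag
rhsTheta = (-1j * (epsL - epsR + eta * (abs(psiL) ** 2 - abs(psiR) ** 2)) * Theta
            + (gL + gR) * Theta - 1j * J * (abs(psiL) ** 2 - abs(psiR) ** 2))
```

The two evaluations agree to machine precision, confirming the equivalence of the two formulations.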
In practice, however, even if initially we have $\textrm{Re} \Theta(0) = 0$ [e.g., either $\psi_{L}(0) = 0$ or $\psi_{R}(0) = 0$], the unavoidable phase fluctuations of the polaritons will eventually lead to the appearance of finite $\textrm{Re} \Theta(t) \neq 0$, which in turn will result in the decay of coherence and drive the system to the steady state. Conversely, if we have initially $\textrm{Re} \Theta(0) \neq 0$, the system can still initially exhibit Rabi-like oscillations, but it will eventually attain the steady state, as in Fig.~\ref{fig2}(c). Finally, as seen from Eq.~(\ref{population_coherence3}), the nonlinear interaction couples the real and imaginary parts of the coherence $\Theta$ with the rate $\eta (|\psi_{L}|^2 -|\psi_{R}|^2)$. Hence, in the presence of nonlinearity $\eta \neq 0$, we expect the eventual decay of the coherence with the system attaining the steady state, for any initial conditions and independently of the phase fluctuations, as in Fig.~\ref{fig2}(d). Setting the time derivative on the left-hand side of Eq.~(\ref{population_coherence3}) equal to zero, we find that the steady state is reached when \begin{equation} \label{stationary_cond_1} R[n_{L}+n_{R}]-2 \kappa = 0 , \quad \mathrm{and} \quad |\psi_{L}|^2 = |\psi_{R}|^2. \end{equation} Remarkably, the first equation corresponds exactly to the PT-symmetry condition $n_{L} + n_R = \frac{2\kappa}{R}$ discussed above. Moreover, this condition is satisfied even in the presence of nonlinear interaction $\eta \neq 0$, because the equal polariton populations as per the second equation lead to exactly the same energy shifts $\eta |\psi_{L,R}|^2$ of the polaritons in both wells. In other words, for any initial conditions, and provided the total pumping is above threshold as per Eq.~(\ref{threshold_pumping}) but otherwise arbitrary $P_L$ and $P_R$, the system attains a stable fixed point corresponding to the PT-symmetric state.
Even when no steady state exists or is yet reached, the PT condition in Eq.~(\ref{stationary_cond_1}) is approximately satisfied, as seen in the insets of Fig.~\ref{fig2}. \begin{figure*} \caption{Dynamics of the polariton populations $|\psi_{L,R}|^2$.} \label{fig3} \end{figure*} Combining Eqs.~(\ref{excitons_stat_1}) and (\ref{stationary_cond_1}), we find that the steady-state polariton populations are \begin{equation} \label{stationary_polaritons} |\psi_{L}|^2=|\psi_{R}|^2=\frac{P_L+P_R}{2\kappa}-\frac{\Gamma}{R}, \end{equation} while the exciton populations are \begin{equation} \label{excitons_stat_2} n_{L,R}=\frac{2\kappa P_{L,R}}{R(P_L+P_R)}. \end{equation} Using these stationary values for $n_{L,R}$ and $|\psi_{L,R}|^2$ in Eqs.~(\ref{population_coherence1}) and (\ref{population_coherence2}) in the steady state, we obtain \begin{equation} \label{imag_coherence_stat3} \textrm{Im} (\Theta) = \frac{\kappa \Gamma ( P_{L}-P_R) }{2RJ(P_L+P_R)}-\frac{P_{L}-P_R}{4J} , \end{equation} and from $|\psi_L \psi_R^*|^2= [ \textrm{Re}(\psi_{L} \psi_{R}^*) ]^2+ [ \textrm{Im}( \psi_{L} \psi_{R}^*) ]^2$ we obtain \begin{equation} \label{real_coherence_stat} \textrm{Re}(\Theta)=\mathcal{D}\sqrt{4J^2(P_L+P_R)^2-\kappa^2 (P_L-P_R)^2} \end{equation} where \[ \mathcal{D}=\frac{P_L+P_R-2\kappa \Gamma/R}{4J\gamma(P_L+P_R)} . \] These results are verified by the numerical simulations illustrated in Fig.~\ref{fig2}, and they hold equally for any value of the nonlinearity strength $\eta$. \section{Phase fluctuations} \label{sec:PF} As mentioned above, the coherence of the polariton condensate will decay due to the phase fluctuations that are always present in realistic quantum systems. We therefore incorporate phase noise in our numerical calculations and investigate how it modifies the dynamics of the polaritons and the coherence. We model the phase fluctuations as a standard Wiener process.
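The stationary values derived above can be checked with a short numerical sketch. Since Eqs.~(\ref{full_model}) are not reproduced in this excerpt, the sketch below assumes the standard exciton-reservoir form consistent with the rates quoted in the text, $\gamma_{L,R} = (R\,n_{L,R} - \kappa)/2$ and $\partial_t n_{L,R} = P_{L,R} - \Gamma n_{L,R} - R\, n_{L,R} |\psi_{L,R}|^2$; all parameter values are illustrative, not those of the figures.

```python
import numpy as np

# Minimal sketch (NOT the authors' code): Euler integration of a standard
# exciton-reservoir model chosen to be consistent with the rates quoted in the
# text, gamma_{L,R} = (R n_{L,R} - kappa)/2.  Parameter values are illustrative.
J, kappa, Gamma, R, eta = 1.0, 1.0, 1.0, 1.0, 0.1
P_L, P_R = 3.0, 2.0                       # above threshold: P_L + P_R > 2*kappa*Gamma/R

dt, T = 0.002, 300.0
psi_L, psi_R = 0.1 + 0j, 0.05 + 0j        # small seed with Re(Theta) != 0
n_L = n_R = 0.0
hist = []
for _ in range(int(T / dt)):
    gL = 0.5 * (R * n_L - kappa)          # gain/loss rates of the text
    gR = 0.5 * (R * n_R - kappa)
    dpsi_L = (gL - 1j * eta * abs(psi_L) ** 2) * psi_L + 1j * J * psi_R
    dpsi_R = (gR - 1j * eta * abs(psi_R) ** 2) * psi_R + 1j * J * psi_L
    dn_L = P_L - Gamma * n_L - R * n_L * abs(psi_L) ** 2
    dn_R = P_R - Gamma * n_R - R * n_R * abs(psi_R) ** 2
    psi_L, psi_R = psi_L + dt * dpsi_L, psi_R + dt * dpsi_R
    n_L, n_R = n_L + dt * dn_L, n_R + dt * dn_R
    hist.append((abs(psi_L) ** 2, abs(psi_R) ** 2, n_L, n_R))

tail = np.array(hist[-int(50 / dt):]).mean(axis=0)    # late-time averages
pop_pred = (P_L + P_R) / (2 * kappa) - Gamma / R       # Eq. (stationary_polaritons)
nL_pred = 2 * kappa * P_L / (R * (P_L + P_R))          # Eq. (excitons_stat_2)
nR_pred = 2 * kappa * P_R / (R * (P_L + P_R))
print(tail, pop_pred, nL_pred, nR_pred)
```

The late-time averages reproduce the PT condition $R(n_L+n_R)=2\kappa$ and the stationary populations of Eqs.~(\ref{stationary_polaritons}) and (\ref{excitons_stat_2}) for arbitrary above-threshold $P_L \neq P_R$.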
Thus, the single-particle energies $\epsilon_{L,R}$ in Eqs.~(\ref{full_model}) become Gaussian stochastic variables with mean $\braket{\epsilon_{L,R}} = 0$ and variance $\sigma^2 = 2 \xi/\delta t$, where $\xi$ is the decoherence rate and $\delta t$ is the time step for picking a new random energy. In Fig.~\ref{fig3} we show the results of our numerical simulations, as obtained upon the ensemble average over $N=1000$ independent realizations of the system dynamics. We compute the first-order correlation functions $g^{(1)}(t)$, which quantify the coherence of the polaritonic fields, via \begin{equation} \label{acf_ens} g^{(1)}(t)= \frac{\langle \psi(t_0)\psi^{*}(t)\rangle}{\sqrt{\langle |\psi(t_0)|^2 \rangle \langle |\psi(t)|^2 \rangle}} , \end{equation} where $\psi = \psi_L$ or $\psi_R$, and $\braket{\ldots}$ denotes the ensemble average. Below the pumping threshold, the polariton fields decay with rate $\gamma_L + \gamma_R < 0$, but the phase fluctuations with rate $\xi$ cause an even faster decay of coherence, $g^{(1)}(t) \propto e^{-(\xi-|\gamma_L+\gamma_R|/2)t}$, as seen in Fig.~\ref{fig3}(a). For pumping above threshold, the phase noise causes an exponential decay of the correlation function, $g^{(1)}(t) \propto e^{-\xi t}$, while the system approaches the steady state independently of the initial value of the coherence $\textrm{Re} \Theta(0)$, as seen in Fig.~\ref{fig3}(b). Including also the nonlinear interaction $\eta \neq 0$ further accelerates the decay of the correlation function, and the system approaches the steady state even faster. We finally note that for an ergodic system the ensemble-averaged and time-averaged correlation functions are equivalent.
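The decay law $g^{(1)}(t) \propto e^{-\xi t}$ follows from pure phase diffusion alone, and can be reproduced with a minimal sketch (a single field of constant amplitude with a Wiener phase; parameter values are illustrative):

```python
import numpy as np

# Minimal sketch of pure phase diffusion: a field psi(t) = exp(i*phi(t)) of
# constant amplitude, with phi a Wiener process whose increments have variance
# 2*xi*dt (i.e., energy variance sigma^2 = 2*xi/dt, as in the text).  The
# ensemble-averaged first-order correlation then decays as g1(t) = exp(-xi*t).
rng = np.random.default_rng(1)
xi, dt, steps, N = 0.5, 0.02, 100, 20000
dphi = rng.normal(0.0, np.sqrt(2 * xi * dt), size=(N, steps))
psi = np.exp(1j * np.cumsum(dphi, axis=1))     # one row per noise realization
t = dt * np.arange(1, steps + 1)
# ensemble average of psi*(0) psi(t), with psi(0) = 1 in every realization
g1 = np.abs(psi.mean(axis=0))
print(g1[49], np.exp(-xi * t[49]))             # both close to exp(-0.5)
```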
To verify whether our polariton system is ergodic, we also compute the field correlation function \begin{equation} \label{acf_time} g^{(1)}(t)=\frac{ \int_{t_\textrm{i}}^{t_{\textrm{f}}} d \tau \psi (\tau) \psi^{*}(\tau+t)} {\sqrt{\int_{t_\textrm{i}}^{t_{\textrm{f}}} d \tau |\psi(\tau)|^2 \int_{t_\textrm{i}}^{t_{\textrm{f}}} d \tau |\psi(\tau+t)|^2}} \end{equation} resulting from a single, long-time trajectory with $t_{\textrm{f}} -t_{\textrm{i}} = 3000/J$. As seen in Fig.~\ref{fig3} (lower panels), the computed ensemble-averaged and time-averaged correlation functions coincide to a very good approximation, attesting to the ergodicity of our system. \section{Conclusions} \label{sec:conc} To summarize, we have studied an exciton-polariton system in a double-well potential, taking into account the dynamics of the reservoir excitons and the polaritons. We have found that for pumping of the excitons above the total threshold value for the formation of the polariton condensate, the exciton populations attain the values that satisfy the PT-symmetry condition for the polariton condensate, independently of the pumping rates of the individual wells. Employing the population-coherence equations, we interpreted the corresponding dynamics and revealed the stable fixed point, or the steady state, that the system approaches. To make our analysis experimentally relevant, we have also taken into account the phase fluctuations present in any realistic system, and computed the first-order correlation functions for the polariton fields, which revealed the coherence decay with the corresponding rate. We note that our results apply to moderate non-linear interaction strength and small differences in pumping rates of the two wells. For large differences in the pumping rates, the strong non-linear energy shift of the polariton condensate energy may lead to self-trapping and break-up of the PT symmetry \cite{Sukhorukov2010}. \section{Acknowledgments} We thank P.G. Savvidis, H. Ohadi, and A.F.
Tzortzakakis for fruitful discussions. This work was co-financed by Greece (General Secretariat for Research and Technology), and the European Union (European Regional Development Fund), in the framework of the bilateral Greek-Russian Science and Technology collaboration on Quantum Technologies (POLISIMULATOR project). \appendix \section{Polariton condensate in a PT-symmetric double well} \label{sec:app} Consider a polariton condensate in a double well potential described by the coupled-mode equations \begin{subequations} \label{eqs:pisLR} \begin{eqnarray} \label{site_one_plus} i \partial_t \psi_{L} &=& (\epsilon_L + i \gamma_L) \psi_{L} + \eta |\psi_{L}|^{2}\psi_{L} - J \; \psi_{R} , \\ \label{site_one_minus} i \partial_t \psi_{R} &=& (\epsilon_R + i \gamma_R) \psi_{R} + \eta |\psi_{R}|^{2}\psi_{R} - J^* \! \psi_{L} , \end{eqnarray} \end{subequations} where $\epsilon_{L,R}$ are the single-particle energies, $\gamma_{L,R}$ are the incoherent loss ($\gamma <0$) or gain ($\gamma >0$) rates at each well, $\eta$ is the nonlinear interaction strength, and $J$ is the Josephson coupling between the wells. \begin{figure} \caption{Imaginary part of the eigenvalues $\lambda_{\pm}$ versus the loss/gain parameter $\gamma$.} \label{fig_app} \end{figure} If we set $\epsilon_{L,R} =0$, assume negligibly weak nonlinearity, $\eta |\psi|^{2} \ll J$, and set $\gamma_{L} = - \gamma_{R}= \gamma$ so that the loss at the right well is exactly compensated by the gain at the left well, we obtain a PT-symmetric Hamiltonian matrix corresponding to Eqs.~(\ref{eqs:pisLR}) \cite{KalozoumisEPL2020}: \begin{equation} \label{hamiltonian} \mathcal{H} = \begin{pmatrix} i \gamma & - J \\ - J^* & - i\gamma \end{pmatrix}.
\end{equation} Its eigenvalues and the corresponding eigenvectors are given by \begin{equation} \label{eigenvalues} \lambda_{\pm} = \pm \sqrt{|J|^2-\gamma^2} \end{equation} and \begin{equation} \label{eigenstateS} |\pm \rangle = \left[ \left( \sqrt{|J|^2-\gamma^2} \pm i \gamma \right) | L \rangle \mp J^* | R \rangle \right]/N_{\pm} \end{equation} with $N_{\pm}$ the normalization factors. For $\gamma < |J|$, the eigenvalue spectrum is real and the dynamics is Hermitian-like. For $\gamma > |J|$, the eigenvalues become purely imaginary and the system enters the PT-broken phase. The case $|J|=\gamma$ corresponds to the exceptional point of the system, where the eigenvalues become degenerate and the eigenstates coalesce. Figure~\ref{fig_app} illustrates the dependence of the imaginary part of the eigenvalues on the loss/gain parameter $\gamma$. \end{document}
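The spectrum of Eq.~(\ref{eigenvalues}) across the exceptional point can be sketched numerically (illustrative values, $J=1$):

```python
import numpy as np

# Sketch: spectrum of the PT-symmetric Hamiltonian of Eq. (hamiltonian),
# H = [[i*gamma, -J], [-conj(J), -i*gamma]], on both sides of the exceptional
# point gamma = |J|.
def pt_eigenvalues(gamma, J=1.0):
    H = np.array([[1j * gamma, -J], [-np.conj(J), -1j * gamma]])
    return np.linalg.eigvals(H)

lam_pt = pt_eigenvalues(0.5)       # gamma < |J|: real spectrum +/- sqrt(|J|^2 - gamma^2)
lam_broken = pt_eigenvalues(2.0)   # gamma > |J|: imaginary spectrum +/- i*sqrt(gamma^2 - |J|^2)
print(lam_pt, lam_broken)
```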
\begin{document} \title{Experimental investigation of the dynamics of entanglement: Sudden death, complementarity, and continuous monitoring of the environment} \author{A. Salles} \email{[email protected]} \affiliation{Instituto de F\'{\i}sica, Universidade Federal do Rio de Janeiro, Caixa Postal 68528, Rio de Janeiro, RJ 21941-972, Brazil} \author{F. Melo} \affiliation{Instituto de F\'{\i}sica, Universidade Federal do Rio de Janeiro, Caixa Postal 68528, Rio de Janeiro, RJ 21941-972, Brazil} \affiliation{Albert-Ludwigs-Universit\"at Freiburg, Physikalisches Institut, Hermann-Herder-Strasse 3, D-79104 Freiburg, Germany} \author{M. P. Almeida} \affiliation{Instituto de F\'{\i}sica, Universidade Federal do Rio de Janeiro, Caixa Postal 68528, Rio de Janeiro, RJ 21941-972, Brazil} \affiliation{Centre for Quantum Computer Technology, Department of Physics, University of Queensland, QLD 4072, Brisbane, Australia.} \author{M. Hor-Meyll} \affiliation{Instituto de F\'{\i}sica, Universidade Federal do Rio de Janeiro, Caixa Postal 68528, Rio de Janeiro, RJ 21941-972, Brazil} \author{S. P. Walborn} \affiliation{Instituto de F\'{\i}sica, Universidade Federal do Rio de Janeiro, Caixa Postal 68528, Rio de Janeiro, RJ 21941-972, Brazil} \author{P. H. Souto Ribeiro} \affiliation{Instituto de F\'{\i}sica, Universidade Federal do Rio de Janeiro, Caixa Postal 68528, Rio de Janeiro, RJ 21941-972, Brazil} \author{L. Davidovich} \affiliation{Instituto de F\'{\i}sica, Universidade Federal do Rio de Janeiro, Caixa Postal 68528, Rio de Janeiro, RJ 21941-972, Brazil} \begin{abstract} We report on an experimental investigation of the dynamics of entanglement between a single qubit and its environment, as well as for pairs of qubits interacting independently with individual environments, using photons obtained from parametric down-conversion. 
The qubits are encoded in the polarizations of single photons, while the interaction with the environment is implemented by coupling the polarization of each photon with its momentum. A convenient Sagnac interferometer allows for the implementation of several decoherence channels and for the continuous monitoring of the environment. For an initially-entangled photon pair, one observes the vanishing of entanglement before coherence disappears. For a single qubit interacting with an environment, the dynamics of complementarity relations connecting single-qubit properties and its entanglement with the environment is experimentally determined. The evolution of a single qubit under continuous monitoring of the environment is investigated, demonstrating that a qubit may decay even when the environment is found in the unexcited state. This implies that entanglement can be increased by local continuous monitoring, which is equivalent to entanglement distillation. We also present a detailed analysis of the transfer of entanglement from the two-qubit system to the two corresponding environments, between which entanglement may suddenly appear, and show instances for which no entanglement is created between dephasing environments, nor between each of them and the corresponding qubit: the initial two-qubit entanglement gets transformed into legitimate multiqubit entanglement of the Greenberger-Horne-Zeilinger (GHZ) type. \end{abstract} \pacs{03.65.Yz; 03.67.Bg; 03.67.Mn; 42.50.Ex} \maketitle \section{Introduction} \label{sec:introduction} Entanglement plays a central role in quantum mechanics. The subtleties of this phenomenon were first brought to light by the seminal paper of Einstein, Podolski, and Rosen~\cite{EPR}, published in 1935, and by those of Schr\"odinger~\cite{schrodinger1,schrodinger2}, published in 1935 and 1936. 
It took, however, approximately thirty years for its essential distinction from classical physics to be unmasked by John Bell~\cite{bell}, and another thirty years for the discovery that entanglement is a powerful resource for quantum communication~\cite{ekert,bennett3,bouwmeester,boschi,gisin,bennett4}. It was also found in the 90's to play an important role in quantum computation algorithms~\cite{nielsen}. Furthermore, it plays a key role in the behavior of macroscopic quantities like the magnetic susceptibility at low temperatures~\cite{susceptibility}. Yet the dynamics of entangled systems under the unavoidable effect of the environment is still a largely unknown subject, in spite of its fundamental importance in the understanding of the quantum-classical transition, and its practical relevance for the realization of quantum computers. The absence of coherent superpositions of classically distinct states of a macroscopic object is analyzed by decoherence theory~\cite{joos,zurek:715}, which shows that the emergence of the classical world is intimately related to the extremely small decoherence time scale for macroscopic objects. Within a very short time, which decreases with the size of the system, an initial coherent superposition of two classically distinct states gets transformed into a mixture, due to the entanglement of the system with the environment. The decay dynamics is ruled, within a very good approximation, by an exponential law. Detailed consideration of the dynamics of entangled states requires defining proper measures of this quantity. For pure states, one can use the von Neumann entropy~\cite{nielsen} associated with each part, or alternatively the corresponding purity, defined by the so-called linear entropy~\cite{rungta01, mintert}. The idea is that the more entangled some partition of a multiqubit state is, the more unknown is the state of each part. However, systems undergoing decoherence do not remain pure.
A mixed state of $N$ parties is separable if it can be written as a convex sum of products of density matrices corresponding to each part~\cite{werner}: \begin{equation} \label{Rho} \varrho=\sum_{\mu} p_{\mu} \varrho_{1_{\mu}}\otimes\hdots\otimes\varrho_{N_{\mu}} , \end{equation} where the index $\mu$ refers to the $\mu$-th realization of the state and $\sum_{\mu} p_{\mu}=1$, with $p_{\mu}\geq0$. Entanglement measures for mixed states have been defined for systems with dimension up to six~\cite{wootters,peres,horodecki}, but for larger dimensions this problem has not yet been solved. For two-qubit systems, Wootters~\cite{wootters} introduced the concurrence as a measure of entanglement. It was shown by Peres~\cite{peres} that, if the partial transpose of the density matrix of a multipartite system with respect to one of its parts has negative eigenvalues, then the state is necessarily entangled. Thus, a non-negative partial transpose is a necessary condition for a state to be separable. However, this condition is also sufficient only for $2\times2$ or $2\times3$ systems, as shown in Ref.~\cite{horodecki}. The negativity, defined as the magnitude of the sum of negative eigenvalues of the partially transposed matrix, can thus be used, in these cases, as a measure of entanglement. For higher dimensions, this does not work anymore: the negativity is then an indicator of {\it distillable entanglement}. That is, if the negativity is different from zero then it is possible, through local operations and classical communication, to obtain from $n$ copies of the state a number $m$ ($m\le n$) of maximally entangled states. This process, called distillation, does not work if the partially-transposed density matrix is non-negative: any entanglement still present cannot be distilled -- it is then called bound entanglement~\cite{bound}. 
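For two qubits, the partial transpose and the negativity just described are straightforward to compute; the following sketch (with hypothetical helper names) checks them on a maximally entangled Bell state and on a product state:

```python
import numpy as np

# Sketch of the Peres criterion and negativity for two qubits.  The partial
# transpose acts on the second qubit's indices of a 4x4 density matrix.
def partial_transpose(rho):
    r = rho.reshape(2, 2, 2, 2)                    # indices (a, b, a', b')
    return r.transpose(0, 3, 2, 1).reshape(4, 4)   # swap b <-> b'

def negativity(rho):
    ev = np.linalg.eigvalsh(partial_transpose(rho))
    return float(np.sum(np.abs(ev[ev < 0])))       # magnitude of negative eigenvalues

bell = np.zeros(4); bell[0] = bell[3] = 1 / np.sqrt(2)   # (|00> + |11>)/sqrt(2)
rho_bell = np.outer(bell, bell)
rho_prod = np.diag([1.0, 0.0, 0.0, 0.0])                 # product state |00><00|
print(negativity(rho_bell), negativity(rho_prod))        # -> 0.5 and 0.0
```

The Bell state's partially transposed matrix has the single negative eigenvalue $-1/2$, so its negativity is $1/2$, while the product state's partial transpose is non-negative, consistent with separability.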
These measures allow one to study the dynamics of initially entangled states under the influence of the environment~\cite{mintert, karol, simon, diosi, dodd, dur, yu1, carvalho:230501, hein, fine:153105, santos:040305, yu:140403, almeida07, carvalho-2007, kimble07, aolita}. The outcome of these investigations is that the dynamics of entanglement can be quite different from the dynamics of decoherence: the former is not ruled by an exponential decay law, as is the latter, and entanglement can disappear at finite times, even when system coherences decay asymptotically in time. This phenomenon, known as \emph{entanglement sudden death}~\cite{yu:140403}, is a peculiar feature of global dynamics. In this article we present an all-optical device to study the interaction of simple systems (one or two qubits) with various kinds of environments, in a highly controllable fashion. The setup is extremely versatile, allowing the implementation of many different types of open system dynamics. A partial account of our experimental results was given in Ref.~\cite{almeida07}. Here we show how this setup can be used not only to demonstrate the subtle dynamics of entanglement, but also the behavior of a continuously-monitored system, as well as the dynamics of complementarity relations~\cite{englert, englert2, jakob-2003, melo} between local and global properties for a two-qubit entangled system. These complementarity relations quantify the notion that, for pure entangled states, coherences and populations of each party become uncertain: the more unknown they are, the more entangled is the state. Sections~\ref{sec:theory} and \ref{complementarity} contain the theoretical framework, in a form which is particularly suitable for the experimental investigation of the dynamics of entanglement under different kinds of environment.
Section~\ref{sec:theory} deals with open-system dynamics and Kraus operators, while Section~\ref{complementarity} discusses quantum channels, the dynamics of complementarity for each of these channels, and the transfer of entanglement from the two-qubit system to the two corresponding environments. A peculiar feature of dephasing processes is emphasized, for a family of initial states: when the two-qubit entanglement disappears, no bipartite entanglement is left in the system. The state of the two-qubit system plus corresponding environments becomes a state of GHZ (Greenberger-Horne-Zeilinger) type~\cite{GHZ}, with only genuine multiparticle entanglement. The experimental setup is introduced in Section~\ref{sec:experiment}, along with several examples of environments that we are able to implement. In Section~\ref{sec:results_qubit} we present the experimental results for the behavior of a single qubit, with and without continuous monitoring of the environment, including a detailed study of the dynamical behavior of complementarity relations between local (single-party) and global properties of the system qubit$+$environment. We show that our results on the continuously-monitored system are intimately related to the distillation of entanglement. In Section~\ref{sec:results_entanglement} we discuss the experimental investigation of the evolution of entanglement for two typical noise channels -- amplitude damping and dephasing -- including the first observation of the phenomenon of entanglement sudden death, which we had previously reported in~\cite{almeida07}. Our conclusions are summarized in Section~\ref{sec:conclusions}.
\section{Open system dynamics and Kraus operators} \label{sec:theory} A system ($S$) interacting with an environment ($E$) is described by the following Hamiltonian: \begin{equation} H = H_S\otimes\openone +\openone\otimes H_E + \lambda V_{SE}, \label{toth} \end{equation} where $H_S$ and $H_E$ are the system and environment Hamiltonians respectively, and $V_{SE}$ is the coupling term between them with coupling constant $\lambda$ (in the weak coupling limit, $\lambda \ll 1$). The system and the environment get entangled due to the interaction $V_{SE}$ -- an initially pure state of $S$ evolves to a mixed state. In quantum optics, the traditional way of dealing with open systems weakly coupled to environments with a large number of degrees of freedom is through master equations~\cite{lindblad, kossakowski}. In this approach, the equation of motion for the state $\rho_S$ of the system, given by: \begin{equation} \dot{\rho}_S=-\frac{i}{\hbar}\tr_E [H,\rho_{SE}], \label{eqmotion} \end{equation} where $\rho_{SE}$ is the $S+E$ density matrix, is approximated to first order in perturbation theory, with additional assumptions of Markov dynamics and initially uncorrelated systems. The previous expression can then be written as a sum of a unitary contribution plus a non-unitary term, which depends only on operators acting on the system $S$, and is given by the following expression: \begin{equation} \dot\rho_S^{\rm NU} = - \sum_k \left(\rho_S{\cal L}^\dagger_k {\cal L}_k +{\cal L}^\dagger_k {\cal L}_k \rho_S - 2 {\cal L}_k \rho_S {\cal L}^\dagger_k\right)\,, \end{equation} where the upper index NU stands for non-unitary, and $ {\cal L}_k $ are the so-called Lindblad operators. See References~\cite{carmichael, breuer} for a comprehensive treatment. The experimental investigation of open system dynamics can be greatly simplified by adopting an alternative formalism, based on the Kraus representation~\cite{kraus}. We summarize in the following the main ingredients of this approach.
\subsection{Kraus operators} As suggested in Eq.~(\ref{eqmotion}), the evolution of a system coupled to an environment can always be expressed as a unitary dynamics on a higher dimensional system -- Fig.~\ref{openclose} depicts this approach. \begin{figure} \caption{\footnotesize Unitary dynamics. a) Closed system b) Open system -- $\$$ describes the reduced evolution of $S$ when we trace out the environment $E$.} \label{openclose} \end{figure} Starting with uncorrelated systems, the total evolution can be written as: \begin{equation} U_{SE}(\rho_S\otimes\ket{0}_E\bra{0})U^\dag_{SE}\;; \end{equation} where $U_{SE}$ is the $S+E$ evolution operator and $\ket{0}_E$, without loss of generality, represents the initial state of the environment. If we wish to focus only on the evolution of system $S$, we take the trace over the degrees of freedom of the environment. The effective evolution, not necessarily unitary, is then given by: \begin{equation} \$(\rho_S)\begin{array}[t]{l} =\tr_E[U_{SE}(\rho_S\otimes\ket{0}_E\!\bra{0})U^\dag_{SE}]\;\\ =\sum_\mu\, \!_E\bra{\mu}U_{SE}\ket{0}_E\rho_S\,_E\!\bra{0}U^\dag_{SE}\ket{\mu}_E\;; \end{array} \label{TrKraus} \end{equation} where $\{\ket{\mu}\}$ form an orthonormal basis for $E$, and the operator $\$$ describes the evolution of the system $S$ ( $\$$ is usually called a quantum channel, in analogy with classical communication theory~\cite{nielsen} ). Finally this evolution can be expressed only in terms of operators acting on $S$ in the following form: \begin{equation} \$(\rho_S)=\sum_\mu M_\mu \rho_S M_\mu^\dag, \label{EvolKraus} \end{equation} where the operators \begin{equation}\label{kraus} M_\mu\equiv\, \!_E\bra{\mu}U_{SE}\ket{0}_E \end{equation} are the so-called Kraus operators~\cite{kraus,choi,preskill}. The property $\sum_\mu M_\mu^\dag M_\mu = \openone$ guarantees that $\tr [\$(\rho_S)] =1$, so that the operation $\$$ is trace preserving. Furthermore, the evolution given by Eq. 
(\ref{EvolKraus}) preserves the positive semi-definite character of $\rho_S$ -- this means that $\$(\rho_S)$ is also a density operator. It is important to note that the Kraus operators are not uniquely defined -- performing the trace operation in Eq.(\ref{TrKraus}) in different bases leads to different sets of equivalent operators, yielding different decompositions of the resulting density matrix. There are at most $d^2$ independent Kraus operators~\cite{nielsen,leung:528}, where $d$ is the dimension of $S$. Together with Eq.~(\ref{kraus}), this property implies that, if $\{\ket{\phi_i}\}$ is a basis in the space corresponding to $S$, then a dynamical evolution of $S$, corresponding to the Kraus operators $\{M_\mu\}$, $\mu=0,\dots,d^2-1$, can be derived from a unitary evolution of $S+E$ given by the following map: \begin{equation} \begin{array}{ccc} \ket{\phi_1}\ket{0}&\rightarrow&M_0\ket{\phi_1}\ket{0}+ \dots + M_{d^2-1}\ket{\phi_1}\ket{d^2-1}\;;\\ \ket{\phi_2}\ket{0}&\rightarrow&M_0\ket{\phi_2}\ket{0}+ \dots + M_{d^2-1}\ket{\phi_2}\ket{d^2-1}\;;\\ \vdots&\rightarrow&\vdots\\ \ket{\phi_d}\ket{0}&\rightarrow&M_0\ket{\phi_d}\ket{0}+ \dots + M_{d^2-1}\ket{\phi_d}\ket{d^2-1}\;, \end{array} \label{KrausMaps} \end{equation} where as before the operators $M_i$ act only on $S$. This map yields the guiding equations for our experiments. If the environment has many degrees of freedom (so that it can be considered a reservoir), then under Markovian and differentiability assumptions Eq.~(\ref{EvolKraus}) yields a master equation~\cite{preskill}. This is however less general than the Kraus approach, which applies even if the environment has a small number of degrees of freedom. \subsubsection{Global vs. 
Local environments} If the system $S$ is itself composed of $N$ subsystems ($S_1\;,\;\dots\;,\;S_N$), we must distinguish between two main types of environment: \noindent {\it i)} Global channels: in this case all the subsystems are embedded in the same environment, and can even communicate through it. These channels perform non-local dynamics and, in principle, can increase the entanglement among the subsystems. \noindent {\it ii)} Local channels: each subsystem interacts with its own environment, no communication is present. The total evolution can be written as $U_{S_1E_1}\otimes\cdots\otimes U_{S_NE_N}$, and Eq.(\ref{EvolKraus}) is replaced by: \begin{equation} \$(\rho_S)=\sum_{\mu\dots\nu} M^1_\mu\otimes\cdots\otimes M^N_\nu \rho_S{M^1_\mu}^\dagger\otimes\cdots\otimes {M^N_\nu}^\dagger\;. \label{EvolKrausN} \end{equation} This operation is clearly local, and therefore cannot increase the entanglement among the constituents. Obviously, for systems with $N>2$ mixed dynamics is also possible, i.e, some subsystems interact with a common environment and others with independent environments. \subsubsection{Filtering operations -- Monitoring the environment} \label{sec:environmentMonitoringTheory} Instead of directly following the dynamics of system $S$, one can infer it by monitoring its surroundings. For instance, by detecting a photon emitted by a two-level atom, we know for sure that the atom is in its ground state. This scheme is illustrated in Fig.~\ref{filtering}. \begin{figure} \caption{\footnotesize Monitoring the environment. The state of the environment is measured, post-selecting the state of $S$.} \label{filtering} \end{figure} The formalism of the preceding sections must be changed to take into account the monitoring of the environment. Rather than tracing over the environment, we perform a measurement on it. 
If the outcome $i$ is obtained, the state of $S$ evolves to: \begin{equation} \frac{M_i\rho_S M^\dagger_i}{p_i}, \end{equation} where $p_i =\tr [M_i\rho_S M^\dagger_i]$ is the probability of finding the outcome $i$. Notice that, if the state $\rho_S$ is initially pure, it will remain pure after the measurement on the environment. This application of a single Kraus operator to the state is usually called a filtering operation~\cite{verstraete:01, nielsen}. A sequence of successive evolutions and measurements defines a quantum trajectory for the state of $S$ -- each record of the state of the environment defining a quantum jump~\cite{carmichael}. \section{Complementarity, quantum channels, and the dynamics of entanglement}\label{complementarity} Up to this point we were dealing with the open dynamics in a rather general way. From now on we specialize to systems composed of qubits, which are representative of many physical systems of interest for quantum information processing. Furthermore, we consider only local environments, which implies that each qubit interacts only with its own environment. This is the situation for two decaying atoms separated by a distance much larger than the wavelength of the emitted radiation. The individual qubit dynamics is then used to describe how the initial entanglement of two qubits is degraded due to the action of these independent environments. An elegant way to illustrate how the entanglement of the system with the environment disturbs the individual properties of the subsystems is through the complementarity relations presented in Ref.~\cite{jakob-2003}. This is described in the following sub-section.
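The filtering operation described above can be sketched numerically, using the no-jump Kraus operator $M_0$ of the amplitude-damping channel of Sec.~\ref{deco1} (illustrative values of $p$, $\alpha$, $\beta$):

```python
import numpy as np

# Sketch: a filtering operation with the no-jump Kraus operator M0 of the
# amplitude-damping channel discussed below.  Conditioning on finding the
# environment unexcited still changes the qubit: the excited-state population
# decays even though no excitation was detected, while the state stays pure.
p = 0.5
M0 = np.array([[1.0, 0.0], [0.0, np.sqrt(1 - p)]])   # no-jump operator
alpha, beta = np.sqrt(0.3), np.sqrt(0.7)
rho = np.outer([alpha, beta], np.conj([alpha, beta]))

p0 = np.trace(M0 @ rho @ M0.conj().T).real           # probability of outcome 0
rho_cond = M0 @ rho @ M0.conj().T / p0               # conditioned (filtered) state

purity = np.trace(rho_cond @ rho_cond).real          # remains 1: state stays pure
pop1 = rho_cond[1, 1].real                           # excited population after no-jump
print(p0, purity, pop1, abs(beta) ** 2)              # pop1 < |beta|^2
```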
\subsection{Complementarity relations} \label{sec:complementarity} A single qubit $S$ in a pure state has two complementary aspects, particle-like, and wave-like~\cite{bohr}, which is mathematically expressed by the following relation~\cite{englert,englert2}: \begin{equation} {\cal P}^2_S+{\cal V}^2_S =1, \end{equation} where the ingredients are the single-particle predictability ${\cal P}_S$ and visibility ${\cal V}_S$. The first is a measure of the single-qubit relative population, defined as ${\cal P}_S = |\langle \sigma_z \rangle|$. The second is a measure of single-qubit coherence and is defined as ${\cal V}_S = 2 |\langle \sigma^+ \rangle|$. Here $\sigma_i$, with $i \in\{x\,,y\,,z\}$, are the Pauli matrices, and $\sigma^+=\ket{1}\!\bra{0}$. When the qubit $S$ gets entangled with an environment $E$, its state becomes mixed. This implies that another term should be included in the previous relation, which then turns into~\cite{jakob-2003}: \begin{equation} {\cal C}^2_{SE}+{\cal P}^2_S+{\cal V}^2_S =1, \label{compl} \end{equation} where ${\cal C}_{SE}$ is the concurrence~\cite{wootters}, which measures the entanglement between $S$ and $E$. Independently of the dimension of $E$, the bipartite concurrence of the pure composite state is defined as \begin{equation} {\cal C}_{SE}=\sqrt{2(1-\tr[\rho_S^2])}, \label{eq:concdef} \end{equation} where $\rho_S=\tr_E[\rho_{SE}]$ \cite{rungta01}. We see from Eq.~(\ref{compl}) that whenever the entanglement between the two systems increases, the single-particle features are reduced. When ${\cal C}_{SE}=1$, the visibility and predictability vanish -- the single-qubit state has then completely decohered. Relation (\ref{compl}) was tested experimentally, using nuclear magnetic resonance techniques, in Ref.~\cite{peng:052109}. We present in section \ref{sec:results_qubit} experimental results for the dynamics of these three quantities, obtained with a linear optics setup. 
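Relation (\ref{compl}), with the concurrence of Eq.~(\ref{eq:concdef}), can be verified numerically; the following sketch uses a single-qubit environment $E$ and a random pure $S+E$ state (the relation holds for any environment dimension):

```python
import numpy as np

# Sketch: numerical check of C_SE^2 + P_S^2 + V_S^2 = 1 for a random pure
# state of a qubit S entangled with a single-qubit environment E.
rng = np.random.default_rng(7)
psi = rng.normal(size=4) + 1j * rng.normal(size=4)
psi /= np.linalg.norm(psi)                       # random pure state of S+E

rho = np.outer(psi, psi.conj()).reshape(2, 2, 2, 2)
rho_S = np.trace(rho, axis1=1, axis2=3)          # trace out E (second qubit)

sz = np.array([[1.0, 0.0], [0.0, -1.0]])
sp = np.array([[0.0, 0.0], [1.0, 0.0]])          # sigma^+ = |1><0|
P = abs(np.trace(rho_S @ sz).real)               # predictability |<sigma_z>|
V = 2 * abs(np.trace(rho_S @ sp))                # visibility 2|<sigma^+>|
C = np.sqrt(2 * (1 - np.trace(rho_S @ rho_S).real))   # concurrence, Eq. (concdef)
print(C ** 2 + P ** 2 + V ** 2)                  # -> 1.0 for any pure state
```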
The complementarity relations among them will help us to understand the action of different types of environments on qubits. \subsection{Quantum channels} We now describe some of the most common channels for qubits: amplitude damping, dephasing, bit flip, phase flip, and bit-phase flip. \subsubsection{Amplitude damping} \label{deco1} This channel represents the dissipative interaction between the qubit and its environment. The emblematic example is given by the spontaneous emission of a photon by a two-level atom into a zero-temperature environment of electromagnetic-field modes. A simple way to gain insight about this process is through the corresponding quantum map: \bea{ccl} \label{AmplitudeDampingMap} \ket{0}_S\ket{0}_E&\rightarrow& \ket{0}_S\ket{0}_E\;;\\ \ket{1}_S\ket{0}_E&\rightarrow&\sqrt{1-p}\ket{1}_S\ket{0}_E + \sqrt{p}\ket{0}_S\ket{1}_E\;, \end{array}\end{equation} which can be traced back to the 1930 Weisskopf-Wigner treatment of spontaneous emission by an atom~\cite{weisskopf}. The first line indicates that if no excitation is present in the system, it remains in the same state and the environment is also untouched. The second line shows that when one excitation is present in the system, it can either remain there with probability $(1-p)$, or it can be transferred into the environment with probability $p$. Notice that $p$ in these equations is just a parameterization of time. The relationship between the parameter $p$ and time $t$ for an atom interacting with an infinite number of electromagnetic field modes, initially in the vacuum state, under the Markov approximation, is given by $p=(1-e^{-\Gamma t})$, where $\Gamma$ is the decay rate. In this case, the state $\ket{1}_E$ in the map above can be understood as one excitation distributed in all field modes. However, this map can also be used to describe the interaction of a two-level atom with a single mode of the electromagnetic field inside a high-quality cavity~\cite{raimond:565}.
In this case the excitation oscillates between the atom and the field, and we should take $p=\sin^2(\Omega t/2)$, where $\Omega$ is the vacuum Rabi frequency. The fact that the same set of equations describes the interaction with either a reservoir or an environment with a single degree of freedom is a consequence of the general character of the Kraus approach, as commented right after Eq.~(\ref{KrausMaps}). These remarks show that it is actually very advantageous to describe the evolution of the system through a quantum channel, rather than through a specific master equation or Hamiltonian. Together with the parameterization of the evolution in terms of $p$, thus avoiding a specific time dependence, this leads to a very general description, which includes many different processes in the same framework. In all cases, the dynamics represented by the map \eqref{AmplitudeDampingMap} has the following Kraus operators (in the computational basis $\{|0\rangle,|1\rangle\}$): \begin{eqnarray} M_0=\left(\begin{array}{cc} 1&0\\ 0&\sqrt{1-p} \end{array}\right) &\;& M_1=\left(\begin{array}{cc} 0&\sqrt{p}\\ 0&0 \end{array}\right). \label{Kraus1A} \end{eqnarray} Let $\ket{\chi}=\alpha\ket{0}+\beta\ket{1}$ be a general initial qubit state, i.e., at $p=0$. According to Eq.~(\ref{EvolKraus}), it evolves under the amplitude-damping channel to: \begin{equation} \$(\ket{\chi}\!\bra{\chi})=\left(\begin{array}{cc} |\alpha|^2+p|\beta|^2&\alpha\beta^*\sqrt{1-p}\\ \alpha^*\beta\sqrt{1-p}&(1-p)|\beta|^2 \end{array}\right). \end{equation} We can see from this state that coherence decreases with increasing $p$. Also, the population of $\ket{1}$ is transferred to $\ket{0}$. When describing spontaneous decay ($p=1-e^{-\Gamma t}$), coherence drops to zero and the system reaches the ground state only in the asymptotic limit $t\rightarrow \infty$.
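As a quick numerical check (an illustrative sketch; the values of $p$, $\alpha$ and $\beta$ are arbitrary), applying the Kraus operators of Eq.~(\ref{Kraus1A}) to $\ket{\chi}\!\bra{\chi}$ reproduces the closed-form density matrix above:

```python
import numpy as np

p = 0.3
alpha, beta = 0.6, 0.8                      # |alpha|^2 + |beta|^2 = 1

# Amplitude-damping Kraus operators, Eq. (Kraus1A).
M0 = np.array([[1, 0], [0, np.sqrt(1 - p)]])
M1 = np.array([[0, np.sqrt(p)], [0, 0]])

chi = np.array([alpha, beta])
rho0 = np.outer(chi, chi.conj())

# Kraus evolution: rho -> sum_i M_i rho M_i^dagger.
rho = M0 @ rho0 @ M0.conj().T + M1 @ rho0 @ M1.conj().T

# Closed form quoted in the text.
expected = np.array([
    [alpha**2 + p * beta**2, alpha * beta * np.sqrt(1 - p)],
    [alpha * beta * np.sqrt(1 - p), (1 - p) * beta**2],
])
print(np.allclose(rho, expected))           # -> True
```

The trace of the evolved state stays equal to one, reflecting the completeness relation $M_0^\dagger M_0 + M_1^\dagger M_1 = \openone$.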
These conclusions can also be drawn from the expressions for the visibility and the predictability: \bea{ccl} {\cal P}_S(p)& =& ||\alpha|^2-|\beta|^2+2 p |\beta|^2|=|1-2 (1-p) |\beta|^2|\;;\\ {\cal V}_S(p) &=& 2 \sqrt{1-p}|\alpha\beta|=\sqrt{1-p}\;{\cal V}_S(0)\,, \label{eq:predvis} \end{array}\end{equation} where ${\cal V}_S(0)$ is the initial visibility. Furthermore, within the entire interval $0<p<1$ the qubit state is mixed. This is confirmed by the calculation of its entanglement with the environment: \begin{equation} {\cal C}_{SE}(p)=2|\beta|^2\sqrt{p(1-p)}\;; \label{eq:concevol} \end{equation} which vanishes only at $p=0$ or $p=1$.\\ \subsubsection{Dephasing}\label{deco2} { Here the coherence of the qubit state disappears without any change in the populations. This process often occurs when a noisy field couples to a two-level system~\cite{leibfried:281}. The corresponding unitary evolution map is given by: \bea{ccl} \ket{0}_S\ket{0}_E&\rightarrow& \ket{0}_S\ket{0}_E\;,\\ \ket{1}_S\ket{0}_E&\rightarrow&\sqrt{1-p}\ket{1}_S\ket{0}_E + \sqrt{p}\ket{1}_S\ket{1}_E. \label{PhaseDampingMap} \end{array}\end{equation} It can be understood as an elastic scattering, where the state of the two-level system does not change, but the state of the environment undergoes a transition without any energy exchange, due for instance to the change of momentum of its constituent particles. Although the states of the computational basis $\{|0\rangle,|1\rangle\}$ do not change under this map, any superposition of them will get entangled with the environment. The characteristics of this type of channel can be analyzed, as before, by observing the evolution of a general state. The corresponding Kraus operators are: \begin{eqnarray} M_0=\left(\begin{array}{cc} 1&0\\ 0&\sqrt{1-p} \end{array}\right) &\;& M_1=\left(\begin{array}{cc} 0&0\\ 0&\sqrt{p} \end{array}\right).
\end{eqnarray} Therefore, the state $\ket{\chi}=\alpha\ket{0}+\beta\ket{1}$ evolves to: \begin{equation} \left(\begin{array}{cc} |\alpha|^2&\alpha\beta^*\sqrt{1-p}\\ \alpha^*\beta\sqrt{1-p}&|\beta|^2 \end{array}\right). \end{equation} As previously stated, the populations do not change, and neither does the predictability: ${\cal P}_S(p)=||\alpha|^2-|\beta|^2|={\cal P}_S(0)$. On the other hand, the visibility monotonically decreases: ${\cal V}_S(p) = 2|\alpha\beta|\sqrt{1-p}=\sqrt{1-p}\,{\cal V}_S(0)$, as the system $S$ gets entangled with the environment $E$. The entanglement between them is easily evaluated: ${\cal C}_{SE}(p) = 2 \sqrt{p}|\alpha\beta| = \sqrt{p}\,{\cal V}_S(0)$. This emphasizes the fact that states with zero initial visibility do not get entangled with this type of environment. \begin{table*} \caption{\label{ErrorChannels} Evolution of complementary aspects for the initial state $\ket{\chi}=\alpha\ket{0}+\beta\ket{1}$ under the bit-flip, phase-flip, and bit-phase-flip channels.} \begin{ruledtabular} \begin{tabular}{c|ccc} Channel & ${\cal P}_S(p)$ & ${\cal V}_S(p)$ & ${\cal C}_{SE}(p)$ \\ \hline Bit flip & $(1-p)\;{\cal P}_S(0)$ &$|(2-p)\alpha\beta^*+p \alpha^*\beta|$ & $ \sqrt{p\left( 2-p \right)}|\alpha^2-\beta^2| $ \\ Phase flip & ${\cal P}_S(0)$ &$(1-p){\cal V}_S(0)$ & $\sqrt{p(2-p)}{\cal V}_S(0)$ \\ Bit-Phase flip & $(1-p){\cal P}_S(0)$ &$|(2-p)\alpha\beta^*-p\;\alpha^*\beta|$ & $\sqrt{p(2-p)}|\alpha^2+\beta^2|$ \end{tabular} \end{ruledtabular} \end{table*} \subsubsection{Bit flip, phase flip, and bit-phase flip} In classical computation, the only error that can take place is the bit flip $0\leftrightarrow 1$. In quantum computation, however, the possibility of superposition brings also the possibility of other errors besides the usual bit flip. They are the phase flip and the bit-phase flip. The first changes the phase of the state, and the latter combines the bit and phase flips.
The set of Kraus operators for each one of these channels is given by: \begin{eqnarray} M_0= \sqrt{1-p/2}\;\openone\,, &\;& M_1^i=\sqrt{p/2}\;\sigma_i\;; \label{KrausErrors} \end{eqnarray} where $i=x$ gives the bit flip, $i=z$ the phase flip, and $i=y$ the bit-phase flip. These sets are easily interpreted as corresponding to a probability $(1-p/2)$ of remaining in the same state, and a probability $p/2$ of having an error. The factor of 2 in Eq.~(\ref{KrausErrors}) guarantees that at $p=1$ we have maximal ignorance about the occurrence of an error, and therefore minimum information about the state. The unitary maps for these channels are obtained by employing Eq.~(\ref{KrausMaps}). In Table~\ref{ErrorChannels}, the evolution of the complementary aspects, as previously defined, is summarized for these error channels. \subsection{Entanglement dynamics} \label{sec:EntanglementDynamicsTheory} Whenever the system $S$ is composed of at least two subsystems, an initial entanglement among the subsystems evolves due to the interaction with the environment~\cite{karol,simon, diosi, dodd, dur, yu1, carvalho:230501, hein, fine:153105, mintert, santos:040305, yu:140403, almeida07, carvalho-2007, kimble07, aolita}. The detailed study of this process is of crucial importance for the implementation of quantum algorithms that rely on entanglement as a resource. Here we focus on two emblematic examples of entanglement evolution: the two-qubit state $\ket{\phi}=\alpha \ket{00}+\beta\ket{11}$ under local {\it i)} amplitude damping, and {\it ii)} dephasing channels. In the following analysis, the complementarity relation is not easily handled~\cite{tessier:107,peng:052109}, since it involves multipartite entanglement of mixed states.
Nevertheless, in order to scrutinize the dynamics, we make use of similar figures of merit, namely: the bipartite visibility (${\cal V}_{S_1S_2}$), the concurrence between the subsystems (${\cal C}_{S_1S_2}$), and the concurrence (${\cal C}_{SE}$) between $S=S_1\otimes S_2$ and $E=E_1\otimes E_2$. The definitions of these quantities follow. The bipartite visibility, \begin{equation} {\cal V}_{S_1S_2}(p)=2 |\langle\, \ket{11}\!\bra{00} \, \rangle|\;, \label{eq:bivis} \end{equation} measures the two-particle coherence for the state $|\phi\rangle$ defined above. Notice that, given the initial state $\ket{\phi}$, and the fact that we are considering only local channels, this is the only coherence that plays a role in the dynamics. The initial pure state of the system $S=S_1\otimes S_2$ becomes mixed when in contact with the environment. The degradation of the initial entanglement due to the coupling with the environment is quantified by the concurrence defined in Ref.~\cite{wootters}: \begin{equation}\label{concurrence} {\cal C}_{S_1S_2}(p) =\max \{0,\Lambda\}\,, \end{equation} where $ \Lambda=\sqrt{\lambda_1}-\sqrt{\lambda_2}-\sqrt{\lambda_3}-\sqrt{\lambda_4}$, with the $\lambda_i$ the eigenvalues, in decreasing order, of: \begin{equation} \rho_{S_1S_2}(p)(\sigma_y\otimes\sigma_y)\rho^*_{S_1S_2}(p)(\sigma_y\otimes\sigma_y)\,, \end{equation} the conjugation being taken in the computational basis $\{|00\rangle, |01\rangle, |10\rangle, |11\rangle\}$, and $ \rho_{S_1S_2}(p) = \$_1\otimes \$_2 (\ket{\phi}\!\bra{\phi})$, where $\$_1$ ($\$_2$) is the channel applied to the first (second) qubit. The information spread from the initial pure state to the combined state -- system plus environment -- is related to the entanglement between $S$ and $E$. The corresponding concurrence is~\cite{rungta01}: \begin{equation} {\cal C}_{SE}(p)=\sqrt{2\left(1-\tr\left [\rho^2_{S_1S_2}(p)\right]\right)}\;.
\end{equation} {\it i) Amplitude damping --} As described before in section~\ref{deco1}, the Kraus operators for this channel are given in Eq.~(\ref{Kraus1A}). Under two identical local amplitude channels, Eq.~(\ref{EvolKrausN}) shows that the initial two-qubit state $|\phi\rangle$ evolves to the density operator \small \begin{equation} \left(\begin{array}{cccc} |\alpha|^2+p^2|\beta|^2&0&0&(1-p)\alpha\beta^*\\ 0&(1-p)p|\beta|^2&0&0\\ 0&0&(1-p)p|\beta|^2&0\\ (1-p)\alpha^*\beta&0&0&(1-p)^2|\beta|^2 \end{array}\right)\;, \label{evolS1S2A} \end{equation} \normalsize where the matrix is written in terms of the computational basis $\{|00\rangle, |01\rangle, |10\rangle, |11\rangle\}$. The two-particle visibility is then ${\cal V}_{S_1S_2}(p)=2 (1-p)|\alpha\beta|=(1-p){\cal V}_{S_1S_2}(0)$. The bipartite visibility decays linearly with $p$, reaching zero only when $p=1$. For the entanglement between the subsystems, we have: \begin{equation} {\cal C}_{S_1S_2}(p)=\max\{0,2(1-p)|\beta|(|\alpha|-p|\beta|)\}\,. \label{concESD} \end{equation} For the same initial concurrence (${\cal C}_{S_1S_2}(0)=2|\alpha\beta|$), two entanglement decay regimes are found: if $|\alpha|\ge |\beta|$, then ${\cal C}_{S_1S_2}(p)>0$ for all $p\in[0,1)$, vanishing only at $p=1$ (as the visibility). However, for $|\alpha| < |\beta|$ the entanglement between $S_1$ and $S_2$ goes to zero at $p_{ESD}=|\alpha/\beta|$ -- the so-called \emph{entanglement sudden-death}~\cite{yu:140403, almeida07}. If the parameterization $(1-p)=e^{-\Gamma t}$ is used, this implies a finite-time disentanglement, even though the bipartite coherence goes to zero only asymptotically. This phenomenon stresses that bipartite coherence is necessary for entanglement but does not coincide with it -- the latter being more fragile to noise. \begin{figure} \caption{Two possible trajectories in the space of states under the action of amplitude damping, for initial states of the form $\alpha|00\rangle+\beta|11\rangle$. 
The solid line represents a sudden-death trajectory, and the dashed line a case of infinite-time disentanglement. When $p=1$, the two qubits are in the ground state. The border of the set of states is the locus of density matrices of incomplete rank.} \label{fig:TrajAmp} \end{figure} Entanglement sudden death requires the initial population of the doubly excited state $|11\rangle$ to be larger than the population of the unexcited state $|00\rangle$. This is related to the fact that the state $|11\rangle$ is perturbed by the (zero temperature) environment, while the state $|00\rangle$ is not. Therefore, the bigger the initial ``excited'' component in $\ket{\phi}$, the stronger is the entanglement with the environment -- thus leading to a faster decay of ${\cal C}_{S_1S_2}$. Indeed, the entanglement between the system and the environment is given by: \begin{equation} {\cal C}_{SE}(p)=2\sqrt{2}|\beta|\sqrt{p(1-p)}\sqrt{1-|\beta|^2 p (1-p)}, \end{equation} which increases when $\beta$ increases and, for fixed $\beta$, reaches its maximum for $p=1/2$. This behavior is further stressed by realizing that the entanglement between each system and its own environment is also proportional to the excited-state amplitude:\\ \begin{equation} {\cal C}_{S_1E_1}(p)={\cal C}_{S_2E_2}(p)= 2 |\beta|^2 \sqrt{p(1-p)}\;; \end{equation} vanishing only at $p=0$ and $p=1$. These two possible ``trajectories''~\cite{terra:237,yu:2289} in the set of states are sketched in Fig.~\ref{fig:TrajAmp}. For $|\alpha|<|\beta|$ (solid line) the set of separable states is crossed at $p_{ESD}$, and thus the state becomes separable at finite time. However, for $|\alpha| \ge |\beta|$ (dashed line), the state becomes separable only at $p=1$, when the two qubits are in the ground state ($\ket{00}$).
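The sudden-death behavior of Eq.~(\ref{concESD}) can be verified directly from the definition (\ref{concurrence}) applied to the state (\ref{evolS1S2A}). The sketch below (illustrative only; $\alpha=0.6$, $\beta=0.8$ are chosen so that $|\alpha|<|\beta|$, giving $p_{ESD}=|\alpha/\beta|=0.75$) computes the Wootters concurrence numerically and compares it with the closed form:

```python
import numpy as np

def wootters_concurrence(rho):
    """Concurrence of a two-qubit density matrix (Wootters formula)."""
    sy = np.array([[0, -1j], [1j, 0]])
    R = rho @ np.kron(sy, sy) @ rho.conj() @ np.kron(sy, sy)
    lam = np.sqrt(np.abs(np.sort(np.linalg.eigvals(R).real)[::-1]))
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

def rho_amp(p, alpha, beta):
    """State (evolS1S2A): alpha|00>+beta|11> after two local amplitude channels."""
    a2, b2, ab = alpha**2, beta**2, alpha * beta
    return np.array([
        [a2 + p**2 * b2, 0, 0, (1 - p) * ab],
        [0, (1 - p) * p * b2, 0, 0],
        [0, 0, (1 - p) * p * b2, 0],
        [(1 - p) * ab, 0, 0, (1 - p)**2 * b2],
    ])

alpha, beta = 0.6, 0.8                       # |alpha| < |beta|: sudden death
p_esd = alpha / beta                         # = 0.75
for p in (0.5, p_esd, 0.9):
    C = wootters_concurrence(rho_amp(p, alpha, beta))
    C_formula = max(0.0, 2 * (1 - p) * beta * (alpha - p * beta))
    print(p, round(C, 6), round(C_formula, 6))
```

Both evaluations agree: the concurrence is strictly positive for $p<p_{ESD}$ and identically zero beyond it, even though the state retains bipartite coherence there.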
This type of environment acts as a swapping process at $p=1$, i.e., the state of the system (and the corresponding entanglement) is completely transferred to the environment~\cite{yonac:s45,lopez-2008}: \begin{equation} \left(\alpha\ket{00}+\beta\ket{11}\right)_S\otimes\ket{00}_E \stackrel{p=1}{\longrightarrow} \ket{00}_S\otimes\left(\alpha\ket{00}+\beta\ket{11}\right)_E\;. \end{equation} The entanglement between the two environments is given by: \begin{equation} {\cal C}_{E_1E_2}(p)=\max\{0,2p|\beta|(|\alpha|-(1-p)|\beta|)\}, \end{equation} which shows that whenever there is entanglement sudden death for the two-qubit system, there is also sudden birth of entanglement (ESB) between the two corresponding environments~\cite{lopez-2008}. The value of $p$ for which ESB occurs, $p_{ESB}$, is simply expressed in terms of the entanglement sudden death value: $p_{ESB}=1-p_{ESD}$. This expression clearly shows that entanglement sudden birth may occur before, simultaneously with, or after entanglement sudden death, depending on whether $p_{ESD}>1/2$, $p_{ESD}=1/2$, or $p_{ESD}<1/2$, respectively. {\it ii) Dephasing --} The evolved state in this case is given by: \begin{equation} \rho(p)=\left(\begin{array}{cccc} |\alpha|^2&0&0&(1-p)\alpha\beta^*\\ 0&0&0&0\\ 0&0&0&0\\ (1-p)\alpha^*\beta&0&0&|\beta|^2 \end{array}\right). \label{evolS1S2Dp} \end{equation} As above, the two-particle visibility is given by ${\cal V}_{S_1S_2}(p)=2 (1-p)|\alpha\beta|=(1-p){\cal V}_{S_1S_2}(0)$, which leads to the same behavior as before. The entanglement between the subsystems is: \begin{equation} {\cal C}_{S_1S_2}(p)=2 (1-p)|\alpha\beta|\;, \end{equation} which is precisely equal to ${\cal V}_{S_1S_2}(p)$. These two quantities have the same behavior as a function of $p$, and thus vanish at the same point, $p=1$. There is no entanglement sudden death.
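A similar check for the dephasing case (again an illustrative sketch, with arbitrary $\alpha$, $\beta$) confirms that the concurrence of the state (\ref{evolS1S2Dp}) equals the bipartite visibility $2(1-p)|\alpha\beta|$ for all $p$, so no sudden death occurs:

```python
import numpy as np

def concurrence(rho):
    """Wootters concurrence of a two-qubit density matrix."""
    sy = np.array([[0, -1j], [1j, 0]])
    R = rho @ np.kron(sy, sy) @ rho.conj() @ np.kron(sy, sy)
    lam = np.sqrt(np.abs(np.sort(np.linalg.eigvals(R).real)[::-1]))
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

alpha, beta = 0.6, 0.8
for p in (0.0, 0.4, 0.8, 1.0):
    # Two-qubit state of Eq. (evolS1S2Dp) after two local dephasing channels.
    rho = np.zeros((4, 4))
    rho[0, 0], rho[3, 3] = alpha**2, beta**2
    rho[0, 3] = rho[3, 0] = (1 - p) * alpha * beta
    # Concurrence stays equal to the visibility 2(1-p)|alpha*beta|.
    assert np.isclose(concurrence(rho), 2 * (1 - p) * alpha * beta)
```

The entanglement thus vanishes only at $p=1$, linearly in $p$, exactly tracking the two-particle coherence.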
The entanglement between the system $S$ and the environment $E$ is given by: \begin{equation} {\cal C}_{SE}(p)=2|\alpha\beta|\sqrt{p(2-p)}\;; \end{equation} which reaches its maximum, for $p$ fixed, when $|\alpha|=|\beta|=1/\sqrt{2}$. For every $\alpha$ and $\beta$ the maximum of ${\cal C}_{SE}$ as a function of $p$ is at $p=1$, i.e., only when dephasing is complete. However, increasing values of ${\cal C}_{SE}$ do not imply sudden death of entanglement, since the corresponding state trajectory does not cross the region of separable states (see Fig.~\ref{fig:TrajDep}). \begin{figure} \caption{Trajectory in the space of states under the action of dephasing, for initial states of the form $\alpha|00\rangle+\beta|11\rangle$. The state is completely decohered at $p=1$, when it reaches the borderline between entangled and separable states -- and only then becomes separable. For this case, the trajectory stays always on the border of the set of states, since, for all $p\in[0,1]$, the density matrix is not of complete rank.} \label{fig:TrajDep} \end{figure} One should note that, in contrast with the amplitude-damping case, here each system does \emph{not} get entangled with its own environment: $C_{S_1E_1}=C_{S_2E_2} = 0$ for all $p \in [0,1]$. This is expected, since the single-qubit visibility is zero at the beginning, and consequently at all subsequent times (see Section~\ref{deco2}). More surprising, though, is that apart from ${\cal C}_{S_1S_2}$, which monotonically decreases with $p$, all other two-qubit entanglements are identically zero. The decrease in the entanglement of the two-qubit system is accompanied by the creation of genuine multipartite entanglement.
For $p=1$, it is easy to see from Eq.~(\ref{PhaseDampingMap}) that \begin{equation} (\alpha|00\rangle+\beta|11\rangle)_S|00\rangle_{E} \rightarrow\alpha|00\rangle_S|00\rangle_E+\beta|11\rangle_S|11\rangle_E\,, \end{equation} which is a Greenberger-Horne-Zeilinger (GHZ)~\cite{GHZ} type of state, for which any two-qubit entanglement vanishes. For arbitrary values of $p$, one can easily calculate the generalized multipartite concurrence proposed in Ref.~\cite{mintert}: \begin{equation} {\cal C}_{N}=2^{1-N/2} \sqrt{(2^{N}-2)-\sum_{i} {\rm Tr}(\rho_{i}^{2})} \end{equation} where the sum is over all nontrivial reduced density matrices of the $N$-particle system. We get \begin{equation} {\cal C}_{S_1S_2E_1E_2}(p)= |\alpha\beta|\sqrt{4+4 p -p^2}\; \end{equation} which monotonically increases with $p$. \section{Experimental Implementation of Decoherence Channels} \label{sec:experiment} \subsection{Single photons and multiple qubits} Many experimental investigations of quantum information processes, such as basic quantum algorithms \cite{cerf98,oliveira05,walborn05c}, quantum teleportation \cite{boschi98}, and verification of new methods for measuring entanglement~\cite{walborn06b,walborn07a} have been based on the use of several degrees of freedom of single photons. While this type of approach does not lead to scalable quantum computation \cite{blume-kohout02}, taking advantage of multiple degrees of freedom of photons allows for entanglement purification \cite{pan03}, improved Bell-state analysis \cite{kwiat98a,walborn03b,walborn03c,kwiat07} and creation of high-dimensional entanglement \cite{barreiro05}. The extra degrees of freedom have also been exploited to engineer mixed states through decoherence \cite{peters04,aiello07}. \par In the following we employ the polarization degree of freedom of a photon as the qubit, while its momentum degree of freedom is used as the environment. This choice enables us to implement controlled interactions between $S$ and $E$.
As in previous works~\cite{cerf98,oliveira05}, it is possible to implement a variety of operations on these two degrees of freedom, using common optical elements such as wave plates and beam splitters. The formal correspondence between linear optics operations and one or multiple-qubit quantum operations has been provided in Ref. \cite{aiello07b}. \par \subsection{Sagnac Interferometer} Fig. \ref{fig:int} a) shows a modified Sagnac interferometer that can be used to implement the dynamics discussed in section \ref{sec:theory}. An incident photon passes through a polarizing beam splitter (PBS), which splits the horizontal ($H$) and vertical ($V$) polarization components, causing them to propagate in opposite directions within the interferometer. The interferometer is aligned so that the $H$-path and $V$-path are slightly separated, which allows us to insert different optical elements separately into each path. The two paths then recombine at the same PBS, and are reflected or transmitted into modes $0$ or $1$, depending on the polarization. HWP($\theta_{H}$) and HWP($\theta_{V}$) rotate the $H$ and $V$ polarization components of the incoming photon, respectively. If they are set at positions such that the polarizations are not rotated, the photon leaves the Sagnac interferometer in mode $0$. If, however, a photon, initially $V$-polarized, is rotated by HWP($\theta_{V}$) so that $\ket{V} \longrightarrow \alpha \ket{V} + \beta\ket{H}$, it will leave the interferometer in mode $0$ with probability $|\alpha|^2$, and in mode $1$ with probability $|\beta|^2$. The Sagnac arrangement is advantageous, since it is very robust against small mechanical fluctuations of the mirrors and polarizing beam splitter (photons in the two paths reflect off the same optical components), as well as thermal fluctuations. The two optical paths are approximately the same, since they have identical lengths and both include a single half-wave plate. 
We now discuss the implementation of decoherence channels with this interferometer. \begin{figure} \caption{Experimental apparatus for the implementation of quantum maps and tomographic analysis. a) Sagnac interferometer, with half-wave plates HWP($\theta_{H}$) and HWP($\theta_{V}$) acting on the two internal paths. b) Detection setup in which output modes $0$ and $1$ are combined incoherently, tracing out the environment. c) Detection setup in which the output modes are detected individually, monitoring the environment.} \label{fig:int} \end{figure} \subsubsection{Decoherence channels} With the half-wave plates set to angles $\theta_H$ and $\theta_V$, the Sagnac interferometer implements the transformation: \begin{subequations} \label{eq:ampdampexp1} \begin{align} \ket{H}\ket{0} & \longrightarrow \cos2\theta_H\ket{H}\ket{0} +\sin2\theta_H\ket{V}\ket{1} \,,\\ \ket{V}\ket{0} & \longrightarrow \cos2\theta_V \ket{V}\ket{0} + \sin 2\theta_V\ket{H}\ket{1}.\label{ampdampb} \end{align} \end{subequations} After the half-wave plate and the phase plate in output mode $1$, the overall transformation is \begin{subequations} \label{eq:ampdampexp2} \begin{align} \ket{H}\ket{0} \longrightarrow & \cos2\theta_H\ket{H}\ket{0} +e^{i\phi}\sin2\theta_H\sin2\theta_1\ket{H}\ket{1} \nonumber \\ & -e^{i\phi}\sin2\theta_H\cos2\theta_1\ket{V}\ket{1} \\ \ket{V}\ket{0} \longrightarrow & \cos2\theta_V \ket{V}\ket{0} + e^{i\phi}\sin 2\theta_V\cos2\theta_1\ket{H}\ket{1} \nonumber \\ & + e^{i\phi}\sin 2 \theta_V\sin2\theta_1\ket{V}\ket{1}. \end{align} \end{subequations} By associating $H$ and $V$ polarizations respectively to the ground and excited states of the qubit, output modes $0$ and $1$ to states of the environment, and choosing the appropriate wave plate angles, a number of decoherence channels can be implemented with this interferometer. For example, setting $\theta_H=0$, $\theta_1=0$, $\phi=0$ and identifying $p=\sin^2 2\theta_V$, the interferometer corresponds to the amplitude damping channel \eqref{AmplitudeDampingMap}. Using the same settings but with $\theta_1=\pi/4$ implements the phase damping channel \eqref{PhaseDampingMap}. Also, the error channels shown in Table \ref{ErrorChannels} can be implemented.
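The correspondence between wave plate settings and Kraus operators can be checked numerically. The sketch below (illustrative only; ideal lossless optics are assumed, and the angle value is arbitrary) encodes the two blocks $\bra{e}U\ket{0}$ of the transformation \eqref{eq:ampdampexp2} and verifies that the settings quoted above reproduce the amplitude-damping and dephasing Kraus operators:

```python
import numpy as np

def kraus_from_sagnac(th_H, th_V, th_1, phi):
    """Kraus operators M_e = <e|U|0> of the map in Eq. (eq:ampdampexp2).

    Polarization basis {H, V}; e = 0, 1 labels the output mode."""
    cH, sH = np.cos(2 * th_H), np.sin(2 * th_H)
    cV, sV = np.cos(2 * th_V), np.sin(2 * th_V)
    c1, s1 = np.cos(2 * th_1), np.sin(2 * th_1)
    e = np.exp(1j * phi)
    # Columns: images of |H>|0> and |V>|0>; rows: H and V amplitudes.
    M0 = np.array([[cH, 0], [0, cV]])              # mode-0 component
    M1 = e * np.array([[sH * s1, sV * c1],         # mode-1 component
                       [-sH * c1, sV * s1]])
    return M0, M1

th = 0.35
p = np.sin(2 * th) ** 2

# theta_H = theta_1 = phi = 0: amplitude damping, map (AmplitudeDampingMap).
M0, M1 = kraus_from_sagnac(0, th, 0, 0)
assert np.allclose(M0, [[1, 0], [0, np.sqrt(1 - p)]])
assert np.allclose(M1, [[0, np.sqrt(p)], [0, 0]])

# Same settings but theta_1 = pi/4: dephasing, map (PhaseDampingMap).
M0, M1 = kraus_from_sagnac(0, th, np.pi / 4, 0)
assert np.allclose(M0, [[1, 0], [0, np.sqrt(1 - p)]])
assert np.allclose(M1, [[0, 0], [0, np.sqrt(p)]])
```

In both cases the pair satisfies the completeness relation $M_0^\dagger M_0 + M_1^\dagger M_1 = \openone$, as it must for a trace-preserving channel.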
For example, $\theta_H=-\theta$, $\theta_V=\theta$, $\theta_{1}=\phi=0$ implements a bit-flip channel with $p=\sin^2 2\theta$. Table \ref{tab:1} shows the wave plate settings for several different decoherence channels. \par \label{sec:decoherencechannels} \begin{table} \caption{\label{tab:1} Wave plate angles and phase $\phi$ for different decoherence channels.} \begin{ruledtabular} \begin{tabular}{ccccc} Channel & $\theta_H$ & $\theta_V$ & $\theta_1$ & $\phi$ \\ \hline Amplitude decay & 0 & $\theta$ & 0& 0 \\ Phase decay & 0 &$\theta$ & $\pi/4$ & 0 \\ Bit flip & $-\theta$ &$\theta$ & 0& 0 \\ Phase flip & $\theta$ &$-\theta$ & $\pi/4$ & 0 \\ Bit-Phase flip & $-\theta$ &$-\theta$ & $0$ & $\pi/2$ \end{tabular} \end{ruledtabular} \end{table} In order to investigate decoherence using this interferometer, it is necessary to combine modes $0$ and $1$ incoherently for detection, which is the experimental equivalent of mathematically ``tracing out'' the environment. This was done using the two input ports of the PBS used for polarization tomography, as shown in Fig. \ref{fig:int} b). Incoherent combination is guaranteed as long as the path length difference is larger than the coherence length of the photon. Using two or more interferometers, one can use similar setups to study the evolution of multipartite states subject to different combinations of independent error channels. \subsubsection{Monitoring the environment} \label{sec:environmentMonitoringSetup} As opposed to uncontrolled physical processes that induce decoherence in other systems, our interferometric arrangement allows us to monitor the environment. This can be done for instance by simply detecting photons in momentum modes $0$ or $1$ individually, as illustrated in Fig. \ref{fig:int} c). With this setup we are able to experimentally investigate the filtering operations and the quantum jumps described in section \ref{sec:environmentMonitoringTheory}.
The corresponding experimental results are discussed in the next section. \section{Experimental Results: Single-Qubit Decay and the Dynamics of Complementarity Relations} \label{sec:results_qubit} In the experiments reported in this section and the next, we controlled the system-environment interaction by varying the parameter $p$. For each value, we performed full tomography of the single or two-photon polarization state and reconstructed the density matrix using the maximum likelihood method \cite{kwiat-tomo,altepeter}. The purity and the concurrence were obtained from the reconstructed density matrix unless otherwise noted. The theoretical predictions were obtained by evolving the reconstructed initial state, corresponding to $p=0$, using the Kraus operator formalism discussed in section \ref{sec:theory}. Vertical experimental error bars were determined by Monte Carlo simulation of experimental runs obeying the same Poissonian count statistics. The value of $p$ was determined by one of two methods. In our earlier experiments, we used the direct readout of the wave plate angle (larger horizontal error bars, due to the coarse angular setting). In later experiments, we improved on this by developing a simple way to determine $p$ empirically, which we now describe. First, we block the interferometer arm corresponding to the $H$ component [propagation in the counterclockwise direction in Fig. \ref{fig:int} a)], and measure the counts $c_0$ in output mode $0$, with the tomographic plates set for measuring $V$ polarization. Then, still blocking the $H$ interferometer arm, we measure the counts $c_1$ in output mode $1$, with the tomography system set to $H$. We obtain $p$ from $p= {c_1}/(c_0+c_1)$. This method is more precise, since the uncertainty in $p$ comes only from photon count statistics. \par \subsection{Amplitude damping channel} For the study of the amplitude decay of a single qubit, we use a c.w.
solid state laser (405 nm) to pump a 5 mm long LiIO$_3$ nonlinear crystal, producing photon pairs from spontaneous parametric down conversion (SPDC). The signal and idler photons are prepared in polarization product states with $V$ polarization. Here the idler photon is used only as a trigger, and is sent directly to a detector equipped with an interference filter centered at 800 nm (65 nm FWHM) and a 0.5 mm diameter pinhole. The signal photon goes through the interferometer shown in Fig.~\ref{fig:int} a), with wave plates aligned for implementation of the amplitude damping channel, as discussed in section \ref{sec:decoherencechannels}. After the interferometer, modes 0 and 1 propagate through wave plates and a polarizing beam splitter, necessary in the tomography process. Afterwards they are detected through an interference filter centered around 800 nm, with 10 nm bandwidth and a 0.5 mm pinhole. Coincidence counts are registered with counting electronics and a computer. \par The amplitude damping channel was implemented for a single qubit, with the detection system set to trace over the modes of the environment, using the detection setup shown in Fig.~\ref{fig:int} b). The input polarization was prepared in a superposition state $\alpha\ket{H} + \beta \ket{V}$, with $|\beta|>|\alpha|$. \par It is illustrative to view the effect of the channel by measuring the quantities involved in the complementarity relation discussed in section~\ref{sec:complementarity}. In Fig.~\ref{fig:SingleParticleAD} we show the evolution of the squared predictability ${\cal P}_S^2$, the squared visibility ${\cal V}_S^2$, and the system-environment entanglement, quantified by the squared concurrence ${\cal C}_{SE}^2$, as a function of $p$ for the same initial state as above. The concurrence ${\cal C}_{SE}$ was calculated from the density matrix using \eqref{eq:concdef}, and coincides with what is expected from Eq. \eqref{eq:concevol}.
${\cal P}_S$ and ${\cal V}_S$ were determined directly from the polarization measurements using \begin{equation} {\cal P}_S = \frac{|c_H-c_V|}{c_H+c_V} \end{equation} \begin{equation} {\cal V}_S = \sqrt{\left(\frac{2c_+}{c_H+c_V} -1\right)^2+\left(\frac{2c_R}{c_H+c_V}-1\right)^2} \end{equation} for each value of $p$, where $c_j$ is the number of counts with $j$ polarization, with $+$ and $R$ corresponding to $45^\circ$ linear polarization and right circular polarization, respectively. It can be seen from Fig.~\ref{fig:SingleParticleAD} that both quantities agree with Eqs. \eqref{eq:predvis}. Though ${\cal P}_S^2$, ${\cal V}_S^2$, and ${\cal C}_{SE}^2$ evolve with $p$, the sum of these three quantities (yellow triangles in Fig.~\ref{fig:SingleParticleAD}) satisfies relation~\eqref{compl} for all $p$. \begin{figure} \caption{(Color online) Evolution of the quantities involved in the qubit-environment complementarity relation \eqref{compl}, as functions of $p$.} \label{fig:SingleParticleAD} \end{figure} \subsection{Monitoring the environment} \label{sec:environmentMonitoringResults} We now demonstrate a peculiar effect of the dynamics of open quantum systems. If the qubit, under the action of the amplitude-damping channel, is initially in a superposition of the states $|0\rangle$ and $|1\rangle$, and we monitor the state of the environment, finding it with no excitation at all times, we still observe a decay of the system towards the ground state. This can be understood as follows: even if there is no energy transfer between system and environment, by constantly monitoring the environment and finding no excitations in it, we gain information about the system, which is expressed as a change in its state. For example, consider the arrangement used to implement the amplitude damping channel \eqref{AmplitudeDampingMap}, with an initial state $(\alpha\ket{H}+\beta\ket{V})_S\otimes\ket{0}_E$.
This state evolves to \begin{equation} |\Psi(p)\rangle=\alpha\ket{H}\ket{0} + \beta\sqrt{1-p}\ket{V}\ket{0}+\beta\sqrt{p}\ket{H}\ket{1}. \end{equation} Tracing out the environment, the polarization state is \begin{equation} \varrho(p) = \left (\begin{array}{cc} |\alpha|^2+|\beta|^2p & \alpha\beta^*\sqrt{1-p} \\ \alpha^*\beta\sqrt{1-p} & |\beta|^2(1-p) \end{array} \right )\,, \label{eq:rhonomon} \end{equation} with $p=\sin^2 2\theta_V$, whereas, projecting onto the ``unexcited'' $\ket{0}$ state of the environment, the polarization state becomes \begin{equation} \ket{\psi(p)} = \frac{\alpha\ket{H} + \beta\sqrt{1-p}\ket{V}}{[|\alpha|^2+|\beta|^2(1-p)]^{1/2}}, \end{equation} which ``decays'' to $|\psi(p=1)\rangle=|H\rangle$, just as $\varrho(p=1)$ given in \eqref{eq:rhonomon}. We illustrate this phenomenon by comparing the dynamics of a qubit under the action of the amplitude damping map~\eqref{AmplitudeDampingMap} for two cases: {\it (i)} when we trace out the environment's degrees of freedom (using the tomographic setup illustrated in Fig.~\ref{fig:int} b), and {\it (ii)} when we monitor the environment, finding it in the unexcited state (using the tomographic setup shown in Fig.~\ref{fig:int} c). As above, the input polarization is prepared in a superposition state $\alpha\ket{H} + \beta \ket{V}$, with $|\beta|>|\alpha|$. \par Figure~\ref{fig:MonitoringPopulations} shows the evolution of the population $V$ for both cases. We see that not only are the two dynamics different, but also that the decay takes place even if no excitations are transferred to the environment. When the environment is traced out, the population decays linearly in $p$, which corresponds to an exponential decay in time, while when the environment is monitored the decay is retarded. \begin{figure} \caption{(Color online) Evolution of the population $V$ when monitoring (red squares) or tracing out (blue circles) the environment. Error bars are unnoticeable in this scale.
The lines are the corresponding theoretical predictions (red solid line and blue dashed line).} \label{fig:MonitoringPopulations} \end{figure} Figure~\ref{fig:MonitoringPurities} shows the evolution of the purity in these two cases. We see that when we monitor the environment, the system is always close to a pure state. The small residual mixedness arises from the fact that our initial state is not perfectly pure. \begin{figure} \caption{(Color online) Evolution of the purity of the qubit state when monitoring (red squares) or tracing out (blue circles) the environment. Error bars are unnoticeable in this scale. The lines are the corresponding theoretical predictions (red solid line and blue dashed line).} \label{fig:MonitoringPurities} \end{figure} In these figures, the lines are the theoretical predictions, which are obtained (as introduced in section~\ref{sec:environmentMonitoringTheory}) by using the Kraus operators corresponding to the amplitude damping channel~\eqref{AmplitudeDampingMap}. When tracing out the environment, both operators $M_0$ and $M_1$ are used, while when monitoring the environment in its unexcited state, only the no-jump operator $M_0$ is used, the resulting state being renormalized afterwards. The agreement between theory and experimental data is quite good. These results show that, even though no excitation is transferred to the environment, the continuous acquisition of information about this fact changes the state of the qubit, increasing the probability that it is found in the unexcited state. \par This phenomenon allows the distillation of entanglement of a two-qubit system through continuous local monitoring of the corresponding independent environments.
Indeed, for an initial state $\alpha|00\rangle+\beta|11\rangle$, with $|\alpha|<|\beta|$, continuous monitoring of the unexcited environment leads to an increase of the relative weight of the $|00\rangle$ component, implying that the state approaches a maximally entangled state, before decaying to the state $|00\rangle$. Within the framework of the Sagnac-interferometer setup, applied to each of the two entangled photons (as described in detail in the next Section), the evolution of the system under continuous monitoring corresponds to measuring both qubits in output mode 0, while $p$ changes from $0$ to $1$. If the two-qubit state is $|\Psi(0)\rangle=\alpha|HH\rangle+\beta|VV\rangle$ for $p=0$, then the state for $p\not=0$, conditioned on finding the environment of each qubit in the state $|0\rangle$, is given by: \begin{equation} \ket{\Psi(p)}_{\rm cond} = \frac{\alpha\ket{HH} + \beta(1-p)\ket{VV}}{[|\alpha|^2+|\beta|^2(1-p)^2]^{1/2}}. \end{equation} Setting $1-p=|\alpha/\beta|$ yields a maximally entangled state. Therefore, continuous monitoring of the environment for the two-qubit case corresponds to a quantum distillation scheme~\cite{kwiat_distillation}. \section{Experimental Results: The dynamics of entanglement} \label{sec:results_entanglement} \begin{figure} \caption{(Color online) Experimental setup for the investigation of entanglement dynamics under decoherence. Polarization-entangled photon pairs were created using the two-crystal SPDC source \cite{kwiat99}.} \label{fig:esdsetup} \end{figure} Using two Sagnac interferometers, we studied the dynamical behavior of global properties of an entangled pair of photons generated with spontaneous parametric down conversion. Fig.~\ref{fig:esdsetup} shows the experimental setup.
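Before turning to the two-photon experiments, the conditional dynamics derived above can be checked numerically. The following numpy sketch (illustrative only; the amplitudes mimic, but are not taken from, the experiment) verifies the complementarity relation~\eqref{compl} along the amplitude-damping evolution, the retarded decay under monitoring, and the distillation condition $1-p=|\alpha/\beta|$:

```python
import numpy as np

alpha, beta = 1/2, np.sqrt(3)/2            # |beta| > |alpha|, as in the text

for p in np.linspace(0, 1, 11):
    # system-environment pure state |Psi(p)> = alpha|H0> + beta*sqrt(1-p)|V0>
    #                                        + beta*sqrt(p)|H1>
    # basis order: |H0>, |H1>, |V0>, |V1>
    psi = np.array([alpha, beta*np.sqrt(p), beta*np.sqrt(1-p), 0.0])
    rho = np.outer(psi, psi.conj()).reshape(2, 2, 2, 2)
    rho_s = np.trace(rho, axis1=1, axis2=3)          # reduced polarization state
    P2 = (rho_s[0, 0].real - rho_s[1, 1].real)**2    # squared predictability
    V2 = 4 * abs(rho_s[0, 1])**2                     # squared visibility
    # concurrence of a pure state a|00>+b|01>+c|10>+d|11> is 2|ad - bc|
    C2 = (2 * abs(psi[0]*psi[3] - psi[1]*psi[2]))**2
    assert abs(P2 + V2 + C2 - 1) < 1e-12             # relation (compl)

    # population of V: traced-out environment vs monitored (no excitation seen)
    traced = beta**2 * (1 - p)                                   # linear in p
    monitored = beta**2*(1-p) / (alpha**2 + beta**2*(1-p))       # renormalized
    assert monitored >= traced - 1e-12               # monitored decay is retarded

# two-qubit distillation: conditioned state ~ alpha|HH> + beta(1-p)|VV>
p_star = 1 - abs(alpha/beta)
a, d = alpha, beta * (1 - p_star)
C_cond = 2 * abs(a*d) / (a**2 + d**2)
assert abs(C_cond - 1) < 1e-12      # maximally entangled at 1-p = |alpha/beta|
```

The assertions simply re-derive, for each $p$, the identities stated in the text; nothing here depends on the experimental data.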
\par The source was arranged to generate pairs of photons in one of two non-maximally entangled states given by \begin{subequations} \begin{align} |\Theta1\rangle& =\frac{1}{2}|HH\rangle+\frac{\sqrt{3}}{2}e^{i\theta}|VV\rangle \label{eq:state1},\\ |\Theta2\rangle& =\frac{\sqrt{3}}{2}|HH\rangle+\frac{1}{2}e^{i\theta}|VV\rangle \label{eq:state2}\,. \end{align} \end{subequations} These states contain the same amount of entanglement: the initial concurrence is ideally $C=2|\alpha\beta|=\sqrt{3}/2\simeq0.87$. However, we measured $C=0.82\pm0.04$ and $C=0.79\pm0.11$, respectively, since the prepared states were not 100\% pure. The decrease in purity is mostly due to small imperfections in the mode matching of the interferometers, and to the angular dependence of the phase of the two-photon state \cite{kwiat99}. To simplify the description, we refer to the initial state in the following as either (\ref{eq:state1}) or (\ref{eq:state2}), meaning in fact that it was close to these states. The theoretical predictions are derived from the actual experimental state obtained when $p=0$, by calculating its evolution through the relevant Kraus operators. The dynamics of entanglement was investigated under the effect of two different decoherence channels, implemented by sending the twin photons through independent Sagnac interferometers. Full bipartite polarization-state tomography \cite{kwiat-tomo,altepeter} was performed for different values of $p$. \subsection{Amplitude damping - Entanglement sudden death and entanglement witness} \label{sec:GlobalAmplitudeDampingResults} Using the wave plate configurations listed in Table \ref{tab:1}, dual amplitude damping maps~\eqref{AmplitudeDampingMap} with the same $p$ were implemented for the initial state (\ref{eq:state1}). Tomographic reconstructions of the real part of the density matrix for different values of $p$ are shown in Fig.~\ref{fig:tomo1}. The corresponding analytical expression is given by Eq.~\eqref{evolS1S2A}.
They illustrate the evolution of the populations and coherences as a function of the parameter $p$. \begin{figure} \caption{(Color online) Tomographic reconstruction of the real part of the density matrix for different values of $p$ for an initial state close to $\ket{\Theta1}$.} \label{fig:tomo1} \end{figure} Figure~\ref{fig:AmplitudeDampingGlobalResults} displays the experimental results for the concurrence \eqref{concurrence}. The theoretical prediction (denoted by the full line in the figure) was obtained by applying Eq.~\eqref{concurrence} to the evolved density matrix, which in turn is determined by applying the Kraus operators \eqref{Kraus1A} to the reconstructed density matrix for $p=0$. \begin{figure} \caption{(Color online) Global properties under the amplitude damping channel for the state (\ref{eq:state1}).} \label{fig:AmplitudeDampingGlobalResults} \end{figure} The vanishing of the entanglement for a transition probability $p<1$, corresponding to a finite time, is clearly demonstrated in Fig.~\ref{fig:AmplitudeDampingGlobalResults}. This phenomenon was termed \emph{entanglement sudden death}~\cite{yu:140403}, and our setup allowed for its first observation, reported in~\cite{almeida07}. \par Also shown in Fig.~\ref{fig:AmplitudeDampingGlobalResults} are the results obtained from an entanglement witness, evaluated at each data point. An operator $W$ is an entanglement witness if ${\tr}(W\rho)\ge0$ for any separable state, and there exist entangled states $\sigma$ for which ${\tr}(W\sigma)<0$. For initial states of the form \begin{equation}\label{initialstate} |\alpha||HH\rangle+|\beta|\exp(i\theta_0)|VV\rangle \end{equation} and the amplitude decay channel, it is possible to define a ``perfect'' $p$-independent witness~\cite{santos:040305}, so that $-{\tr}(W\rho)$ coincides precisely with $\Lambda$ in Eq.~(\ref{concurrence}), thus yielding the concurrence for all $p$.
It is given by \begin{equation}\label{defwitness} \hat W_{\theta_0} \equiv {1}- 2\left| {\Phi \left( \theta_0 \right)} \right\rangle \left\langle {\Phi \left( \theta_0 \right)} \right|\,, \end{equation} where \begin{equation}\label{thetaprime} \left| {\Phi \left( \theta \right)} \right\rangle = \frac{1}{\sqrt{2}}({\left| {HH} \right\rangle + e^{i\theta} \left| {VV} \right\rangle }) \,. \end{equation} Then it is easy to show that \begin{equation} \Gamma_{\theta_0}\equiv -{\tr}\left[ {\hat W_{\theta_0} \rho \left( t \right)} \right] = 2\left[P\left( {\theta_0 ,t} \right)-{{1}\over{2}}\right]\,, \label{eq:Gamma} \end{equation} where $P( {\theta ,t})=\tr[| {\Phi( \theta)}\rangle \langle {\Phi ( \theta)} |\rho(t)]$. The concurrence \eqref{concurrence} can be written as \[ {\cal C}\left[ {\rho \left( t \right)} \right] = {\rm{max}}\left\{ {0,\Gamma_{\theta_0} } \right\}\,. \] That is, the concurrence is equal to twice the excess probability (with respect to $1/2$) of projecting the system onto the maximally entangled state $\left| {\Phi \left( \theta_0 \right)} \right\rangle$. It is remarkable that in this case the concurrence can be given a simple physical interpretation, and moreover that this is valid throughout the evolution of the system (which means, in our case, that it is independent of $p$). The concurrence could then be determined directly by measuring the probability of finding the system in the maximally-entangled state $\left| {\Phi \left( \theta_0 \right)} \right\rangle$. \begin{figure} \caption{(Color online) Global properties under the amplitude damping channel for the state \eqref{eq:state2}.} \label{fig:AmplitudeDampingGlobalResults2} \end{figure} In our experiment, however, the initial state is not pure, so $\hat W_{\theta_0}$ is not a perfect witness.
In order to compute the best witness in this case (which yields the upper bound of $-{\tr}[\hat W_\theta\rho]$), we choose $\theta$ for each data point as the argument of the $\rho_{VVHH}$ element of the reconstructed density matrix, and then obtain $\Gamma$ through Eq.~(\ref{eq:Gamma}). The same could be achieved by projecting the state of the system on $|\Phi(\theta)\rangle$ and scanning $\theta$ in order to get the maximum value of $-{\tr}[\hat W_{\theta}\rho(t)]$. One should note that the initial phase $\theta_0$ is not changed by the amplitude damping channel, as shown by Eq.~(\ref{evolS1S2A}), so in principle this procedure should be adopted only for the $p=0$ state, the corresponding witness being then valid for all values of $0<p\leq1$. However, in the experiment, changing the angle of the half-wave plate HWP$(\theta_V)$ in Fig.~\ref{fig:int} actually affects the corresponding optical path, due to imperfect alignment of this plate, so a $p$-dependent phase shows up between the states $|V\rangle|0\rangle$ and $|H\rangle|0\rangle$ on the right-hand side of Eq.~(\ref{ampdampb}). This phase does not affect the concurrence, but it implies that the best witness depends on $p$. For this reason we find $\theta$ for each data point, from the reconstructed density matrix. Fig.~\ref{fig:AmplitudeDampingGlobalResults} shows that in this case $\Gamma$ underestimates the entanglement. \par For the state $\ket{\Theta2}$ defined in Eq.~\eqref{eq:state2} the situation is drastically different. Fig.~\ref{fig:AmplitudeDampingGlobalResults2} shows the concurrence as a function of $p$. In this case the entanglement disappears only when $p=1$. Also shown is the entanglement witness \eqref{eq:Gamma} calculated from the reconstructed density matrices, which is always less than the actual value of the concurrence. The witness is not optimal since the initial state is not completely pure.
\par Figures \ref{fig:AmplitudeDampingGlobalResults} and \ref{fig:AmplitudeDampingGlobalResults2} together constitute an experimental confirmation that two states with the same initial amount of entanglement may follow different decoherence ``trajectories'' in the space of states, as discussed in section \ref{sec:EntanglementDynamicsTheory} and illustrated in Fig.~\ref{fig:TrajAmp}. \subsection{Phase damping} \label{sec:GlobalPhaseDampingResults} As shown in Table \ref{tab:1}, by adjusting the wave plate angles the Sagnac interferometers implement the phase damping channel~\eqref{PhaseDampingMap}. Experimental results for the concurrence are presented in Fig.~\ref{fig:PhaseDampingGlobalResults} for the initial state \eqref{eq:state1}. \begin{figure} \caption{(Color online) Concurrence for the initial state \eqref{eq:state1}.} \label{fig:PhaseDampingGlobalResults} \end{figure} There is no sudden death of entanglement, and the concurrence vanishes only when $p=1$. \subsection{Evolution of purity} Figure~\ref{fig:PurityEvolution} shows the evolution of the purity of the initial state \eqref{eq:state1} for the amplitude damping channel~\eqref{AmplitudeDampingMap} and the phase damping channel~\eqref{PhaseDampingMap}. \begin{figure} \caption{(Color online) Evolution of the purity for the different decoherence models: amplitude damping (red circles) and phase damping (blue squares), and their corresponding theoretical predictions.} \label{fig:PurityEvolution} \end{figure} For the phase damping channel, the change in the purity is monotonic. For amplitude damping, the purity initially decreases and then increases again, reaching 1 at $p=1$. The difference in behavior reflects the fact that amplitude damping promotes a swapping between system and environment, so that the system ends up in the state $|HH\rangle$, while dephasing leads to an increase of multipartite entanglement, with the system plus environment evolving towards a GHZ-like state.
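The qualitative behavior of the purity follows directly from the Kraus description, and can be checked with a few lines of numpy. The sketch below is illustrative: it assumes the standard amplitude-damping Kraus pair and one common parametrization of phase damping with diagonal Kraus operators ${\rm diag}(1,\sqrt{1-p})$ and ${\rm diag}(0,\sqrt{p})$ (our assumption for the form of \eqref{PhaseDampingMap}):

```python
import numpy as np

def kron_channel(kraus, rho):
    # apply the same single-qubit channel independently to both qubits
    return sum(np.kron(A, B) @ rho @ np.kron(A, B).conj().T
               for A in kraus for B in kraus)

def ad(p):   # amplitude damping
    return [np.array([[1, 0], [0, np.sqrt(1 - p)]], dtype=complex),
            np.array([[0, np.sqrt(p)], [0, 0]], dtype=complex)]

def pd(p):   # phase damping (one common parametrization)
    return [np.array([[1, 0], [0, np.sqrt(1 - p)]], dtype=complex),
            np.array([[0, 0], [0, np.sqrt(p)]], dtype=complex)]

psi = np.array([1/2, 0, 0, np.sqrt(3)/2], dtype=complex)  # ideal state (eq:state1)
rho0 = np.outer(psi, psi.conj())
purity = lambda r: np.real(np.trace(r @ r))

ps = np.linspace(0, 1, 21)
pur_ad = [purity(kron_channel(ad(p), rho0)) for p in ps]
pur_pd = [purity(kron_channel(pd(p), rho0)) for p in ps]

# amplitude damping: purity dips and then returns to 1 (final state |HH><HH|)
assert min(pur_ad) < 0.99 and abs(pur_ad[-1] - 1) < 1e-9
# phase damping: purity decreases monotonically to alpha^4 + beta^4 = 10/16
assert all(b <= a + 1e-9 for a, b in zip(pur_pd, pur_pd[1:]))
assert abs(pur_pd[-1] - (1/16 + 9/16)) < 1e-9
```

The two assertions reproduce exactly the contrast displayed in Fig.~\ref{fig:PurityEvolution}: a non-monotonic dip for amplitude damping versus a monotonic decrease for dephasing.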
\section{Conclusions} \label{sec:conclusions} We have presented a series of experiments that investigate the dynamics of entangled open quantum systems, as well as the dynamics of a single qubit under continuous monitoring of the environment. By adjusting a set of wave plates, our linear-optics setup is capable of implementing a number of single-qubit decoherence channels. We present experimental results for the amplitude-damping and phase-damping channels for single- and two-qubit systems. Decoherence of a single qubit is investigated through the use of complementarity relations. The effect of decoherence on entanglement, including the phenomenon of entanglement sudden death, is experimentally demonstrated. Our setup has an appealing feature: it allows the investigation of filtering operations, implemented by monitoring the environment. This is an experimental realization of quantum trajectories~\cite{carmichael}, which lead to a description of the interaction of a system with an environment in terms of pure states. For amplitude damping, our experimental results demonstrate that it is possible to induce decay of a system by verifying, through continuous measurements, that no excitation is transferred to the environment. We have shown that this procedure, for an initially entangled two-qubit state, is equivalent to entanglement distillation. The experimental investigation of the environment-induced decay of entanglement in other systems (see, for instance, Ref.~\cite{kimble07}) is of course of fundamental importance, and should help to throw new light on the subtle relation between local and global dynamics of entangled systems. The parametrization of the quantum channels considered in this paper, in terms of the transition probability rather than time, is very convenient.
It accommodates different kinds of dynamical behavior in a universal description, which allows one to extend the realm of application of the results obtained here: they include not only the decay of two-level systems interacting with individual and independent environments, but also the oscillatory exchange of energy between each qubit and another two-level system, which could be, for instance, the vacuum and one-photon subspace of a cavity mode. This is an advantageous strategy for investigating the dynamics of disentanglement, since it pinpoints the main features of this process within a quite general framework. In fact, rather than the investigation of a particular physical system, our procedure amounts to the experimental implementation of quantum maps which, due to their generality, play a fundamental role in quantum information. \begin{acknowledgments} We thank Leandro Aolita for helpful comments. The authors acknowledge financial support from the Brazilian funding agencies CNPq, CAPES, and FAPERJ. This work was performed as part of the Brazilian Millennium Institute for Quantum Information. F. de M. also acknowledges the support of the Alexander von Humboldt Foundation. M. P. Almeida also acknowledges the support of the Australian Research Council and the IARPA-funded U.S. Army Research Office Contract. \end{acknowledgments} \end{document}
\begin{document} \sloppy \newtheorem{Def}{Definition}[section] \newtheorem{Bsp}{Example}[section] \newtheorem{Prop}[Def]{Proposition} \newtheorem{Theo}[Def]{Theorem} \newtheorem{Lem}[Def]{Lemma} \newtheorem{Koro}[Def]{Corollary} \theoremstyle{definition} \newtheorem{Rem}[Def]{Remark} \newcommand{{\rm add}}{{\rm add}} \newcommand{{\rm gl.dim }}{{\rm gl.dim }} \newcommand{{\rm dom.dim }}{{\rm dom.dim }} \newcommand{{\rm E}}{{\rm E}} \newcommand{{\rm Morph}}{{\rm Morph}} \newcommand{{\rm E}nd}{{\rm End}} \newcommand{{\rm ind}}{{\rm ind}} \newcommand{{\rm res.dim}}{{\rm res.dim}} \newcommand{\rd} {{\rm rep.dim}} \newcommand{\overline}{\overline} \newcommand{{\rm rad}}{{\rm rad}} \newcommand{{\rm soc}}{{\rm soc}} \renewcommand{{\rm top}}{{\rm top}} \newcommand{{\rm proj.dim}}{{\rm proj.dim}} \newcommand{{\rm re.proj.dim}}{{\rm re.proj.dim}} \newcommand{{\rm inj.dim}}{{\rm inj.dim}} \newcommand{{\rm Fac}}{{\rm Fac}} \newcommand{\fd} {{\rm fin.dim }} \newcommand{\rfd} {{\rm re.fin.dim }} \newcommand{{\rm DTr}}{{\rm DTr}} \newcommand{\cpx}[1]{#1^{\bullet}} \newcommand{\D}[1]{{\mathscr D}(#1)} \newcommand{\Dz}[1]{{\mathscr D}^+(#1)} \newcommand{\Df}[1]{{\mathscr D}^-(#1)} \newcommand{\Db}[1]{{\mathscr D}^b(#1)} \newcommand{\C}[1]{{\mathscr C}(#1)} \newcommand{\Cz}[1]{{\mathscr C}^+(#1)} \newcommand{\Cf}[1]{{\mathscr C}^-(#1)} \newcommand{\Cb}[1]{{\mathscr C}^b(#1)} \newcommand{\K}[1]{{\mathscr K}(#1)} \newcommand{\Kz}[1]{{\mathscr K}^+(#1)} \newcommand{\Kf}[1]{{\mathscr K}^-(#1)} \newcommand{\Kb}[1]{{\mathscr K}^b(#1)} \newcommand{\ensuremath{\mbox{{\rm -Mod}}}}{\ensuremath{\mbox{{\rm -Mod}}}} \newcommand{\ensuremath{\mbox{{\rm -mod}}}}{\ensuremath{\mbox{{\rm -mod}}}} \newcommand{\stmodcat}[1]{#1\mbox{{\rm -{\underline{mod}}}}} \newcommand{\pmodcat}[1]{#1\mbox{{\rm -proj}}} \newcommand{\imodcat}[1]{#1\mbox{{\rm -inj}}} \newcommand{^{\rm op}}{^{\rm op}} \newcommand{\otimes^{\rm\bf L}}{\otimes^{\rm\bf L}} \newcommand{\rHom}{{\rm\bf R}{\rm Hom}} \newcommand{\pd}{{\rm 
proj.dim}} \newcommand{\Ker}{{\rm Ker}\,\,} \newcommand{\Img}{{\rm Im}\,\,} \def\bz{\bigoplus} \def\sz{\oplus} \def\inja{\hookrightarrow} \newcommand{\longrightarrow}{\longrightarrow} \newcommand{\longrightarrowf}[1]{\stackrel{#1}{\longrightarrow}} \newcommand{\rightarrow}{\rightarrow} {\Large \bf \begin{center} Finitistic dimension conjecture and extensions of algebras \end{center}} \centerline{\bf {Shufeng Guo$^{a, b}$}} \begin{center} $^{a}$ Faculty of Science, Guilin University of Aerospace Technology, 541004 Guilin, \\People's Republic of China \end{center} \begin{center} $^{b}$ School of Mathematical Sciences, Capital Normal University, 100048 Beijing, \\People's Republic of China \end{center} \renewcommand{\thefootnote}{\alph{footnote}} \setcounter{footnote}{-1} \footnote{E-mail address: [email protected]} \begin{abstract} An extension of algebras is a homomorphism of algebras preserving identities. We use extensions of algebras to study the finitistic dimension conjecture over Artin algebras. Let $f: B \to A$ be an extension of Artin algebras. We denote by $\fd(f)$ the relative finitistic dimension of $f$, which is defined to be the supremum of relative projective dimensions of finitely generated left $A$-modules of finite projective dimension. We prove that, if $B$ is representation-finite and $\fd(f)\leq 1$, then $A$ has finite finitistic dimension. For the case of $\fd(f)> 1$, we give a sufficient condition for $A$ to have finite finitistic dimension.
Also, we prove the following result: Let $I$, $J$, $K$ be three ideals of an Artin algebra $A$ such that $IJK=0$ and $K\supseteq {\rm rad}(A)$. If both $A/I$ and $A/J$ are $A$-syzygy-finite, then the finitistic dimension of $A$ is finite. \end{abstract} \noindent{\bf Keywords:} Artin algebra; Finitistic dimension; Relative finitistic dimension; Left idealized extension. \noindent{\bf 2000 Mathematics Subject Classification:} 18G20, 16G10; 16E10, 18G25. \section{Introduction} Let $A$ be an Artin algebra. The finitistic dimension of $A$ is defined to be the supremum of projective dimensions of finitely generated left $A$-modules having finite projective dimension. The famous finitistic dimension conjecture says that the finitistic dimension of any Artin algebra is finite (see \cite[Conjecture 11, pp. 410]{ARS} or \cite{B}). It is 57 years old and remains open to date. It is worth noting that the finitistic dimension conjecture is very closely related to many homological conjectures in the representation theory of algebras, such as the strong Nakayama conjecture, the generalized Nakayama conjecture, the Nakayama conjecture, the Wakamatsu tilting conjecture and the Gorenstein symmetry conjecture. If the finitistic dimension conjecture holds, then so do the above conjectures (\cite{ARE, Y}). However, there are only a few cases for which this conjecture has been verified to be true (see, for example, \cite{GZ, GKK, EHIS}). In general, this conjecture seems to be far from being solved. Recently, the work of Xi in \cite{X1, X2} shows that the finitistic dimension conjecture can be reduced to comparing the finitistic dimensions of two algebras in an extension. The basic idea is as follows: let $B$ and $A$ be Artin algebras, and $f: B \to A$ a homomorphism of algebras satisfying certain conditions. If one of them has finite finitistic dimension, is the finitistic dimension of the other finite?
From another point of view, it is reasonable to study the finitistic dimension conjecture by extensions of algebras. In fact, it is known that some classes of algebras have finite finitistic dimension, so we can use them to obtain more classes of algebras with finite finitistic dimension by means of extensions. In the literature, we have already seen some interesting results concerning this direction (see \cite{EHIS, X1, X2, XX, Wjq, Wjq2, WX}). In this note, we shall continue to study the above question. Different from the usual consideration (see, for example, \cite{X1, X2, XX}), where one often uses the information on $A$ to get the information on $B$, we use some relative homological dimension to control the extension $f: B \to A$ and employ the finitistic dimension of $B$ to study that of $A$. Here, the relative finitistic dimension of $f$, denoted by $\fd(f)$, is defined to be the supremum of relative projective dimensions of finitely generated left $A$-modules of finite projective dimension. We get the following result, which generalizes the result of E. L. Green in \cite[Theorem 1.5]{G}. \begin{Theo} Let $B$ and $A$ be Artin algebras with $B$ representation-finite. Suppose that $\varphi: B \to A$ is a homomorphism of algebras preserving identities. Then: $(1)$ If $\fd(\varphi)\leq 1$, then $A$ has finite finitistic dimension. $(2)$ If $2\leq\fd(\varphi)<\infty$ and if, for any $A$-module $X$ with finite projective dimension, $_{A}A\otimes_{B}X$ has finite projective dimension, then $A$ has finite finitistic dimension. \label{thm1} \end{Theo} In Theorem \ref{thm1}, we use the finitistic dimension of $B$ to describe that of $A$. In the following, for an extension $f: B \to A$, we shall employ the finiteness of the finitistic dimension of $A$ to approach that of $B$.
On the one hand, we establish the relationship between the finiteness of finitistic dimensions of quotient algebras and given algebras, and obtain the following result, which recovers many known results in the literature, for example, \cite[Theorem 3.2, Lemma 3.6, Corollary 3.8]{X1}, \cite[Theorem 3.1, Corollary 3.2, Corollary 3.3, Proposition 3.5]{Wjq2}, the result in \cite{W} and so on. For unexplained notions in the following result, we refer to Section \ref{pre}. \begin{Theo} Let $A$ be an Artin algebra and let $I$, $J$, $K$ be three ideals of $A$ such that $IJK=0$ and $K\supseteq {\rm rad}(A)$. If both $A/I$ and $A/J$ are $A$-syzygy-finite, then the finitistic dimension of $A$ is finite. \label{thm2} \end{Theo} On the other hand, we consider left idealized extensions to study the finitistic dimension conjecture, and get the following. \begin{Prop} Let $$B=A_{0}\subseteq A_{1}\subseteq\cdots \subseteq A_{s-1}\subseteq A_{s}=A$$ \noindent be a chain of subalgebras of an Artin algebra $A$ such that ${\rm rad} (A_{i-1})$ is a left ideal of $A_{i}$ for all $1\leq i\leq s$ with $s$ being a positive integer and that $A$ is 1-syzygy-finite. Then $\fd(B)<\infty$ provided one of the following conditions is satisfied. $(1)$ $B/{\rm rad} (A_{s-1})\cdots {\rm rad} (A_{1}){\rm rad} (A_{0})$ is $B$-syzygy-finite (for example, $B/{\rm rad} (A_{s-1})\cdots {\rm rad} (A_{1}){\rm rad}(A_{0})$ is representation-finite). $(2)$ $A_{1}/{\rm rad} (A_{s-1})\cdots {\rm rad} (A_{1})$ is $B$-syzygy-finite (for example, $A_{1}/{\rm rad} (A_{s-1})\cdots {\rm rad} (A_{1})$ is representation-finite). \label{propo1} \end{Prop} Remark that Proposition \ref{propo1} recovers \cite[Theorem 3.1]{X1} if we take $s=1$, and \cite[Theorem 4.5]{X1} if we take $s=2$. The paper is organized as follows. In Section \ref{pre} we recall some definitions and basic results which are needed in the paper.
We prove Theorem \ref{thm1} in Section \ref{rfd} and give a proof of Theorem \ref{thm2} in Section \ref{qfd}. In the last section we use left idealized extensions to study the finitistic dimension conjecture and prove Proposition \ref{propo1}. \section{Preliminaries\label{pre}} In this section, we shall fix some notations, and recall some definitions and basic results which are needed in the proofs of our main results. Throughout this paper, unless stated otherwise, all the algebras considered are Artin $R$-algebras, where $R$ is assumed to be a commutative Artin ring, and all the modules considered are finitely generated left modules over Artin algebras, so that all the homological dimensions will be assumed to be in the category of finitely generated modules. Let $A$ be an Artin algebra. We denote by $A\ensuremath{\mbox{{\rm -mod}}}$ the category of all finitely generated left $A$-modules, and by ${\rm rad}(A)$ the Jacobson radical of $A$. Given an $A$-module $M$, we denote by ${\rm proj.dim}(_{A}M)$ the projective dimension of $M$, by $\Omega_{A}^{i}(M)$ the $i$-th syzygy of $M$ (we set $\Omega_{A}^{0}(M):=M$), and by ${\rm add}(_{A}M)$ the full subcategory of $A\ensuremath{\mbox{{\rm -mod}}}$ consisting of all direct summands of finite direct sums of copies of $M$. Now let us recall some definitions concerning Artin algebras. $A$ is called representation-finite if there are only finitely many nonisomorphic indecomposable $A$-modules in $A\ensuremath{\mbox{{\rm -mod}}}$. The finitistic dimension of $A$, denoted by $\fd(A)$, is defined as $$\begin{array}{rl} \fd(A)=\mbox{Sup} \{{\rm proj.dim}(_AM) \mid M\in A\ensuremath{\mbox{{\rm -mod}}} \mbox{ and } {\rm proj.dim}(_AM)<\infty \}. \end{array}$$ \noindent And the global dimension of $A$, denoted by ${\rm gl.dim }(A)$, is defined as $$\begin{array}{rl} {\rm gl.dim }(A)=\mbox{Sup} \{{\rm proj.dim}(_AM) \mid M\in A\ensuremath{\mbox{{\rm -mod}}} \}.
\end{array}$$ Let $\mathcal{C}$ be a subcategory of $A\ensuremath{\mbox{{\rm -mod}}}$ and $m$ a natural number. We set $$ \begin{array}{rl} \Omega_{A}^{m}(\mathcal{C}):=\{\Omega_{A}^{m}(X) \mid X\in \mathcal{C} \}. \end{array}$$ \noindent $\mathcal{C}$ is said to be $m$-$A$-syzygy-finite, or simply $m$-syzygy-finite if there is no confusion, if the number of non-isomorphic indecomposable direct summands of objects in $\Omega_{A}^{m}(\mathcal{C})$ is finite, that is, there is an $A$-module $N$ such that $\Omega_{A}^{m}(\mathcal{C})\subseteq {\rm add}(_{A}N)$. Furthermore, we say that $\mathcal{C}$ is $(A)$-syzygy-finite if there is some natural number $n$ such that $\mathcal{C}$ is $n$-$(A)$-syzygy-finite. If $A\ensuremath{\mbox{{\rm -mod}}}$ is syzygy-finite, then we also say that $A$ is syzygy-finite. Let $C$ be a second Artin algebra and $f: A\to C$ a homomorphism of algebras preserving identities. Clearly, every $C$-module can be regarded as an $A$-module in the natural way, and every $C$-homomorphism can be viewed as an $A$-homomorphism. This means that $C\ensuremath{\mbox{{\rm -mod}}}$ is a subcategory of $A\ensuremath{\mbox{{\rm -mod}}}$. If $C\ensuremath{\mbox{{\rm -mod}}}$ is $A$-syzygy-finite, then we also say that $C$ is $A$-syzygy-finite. Note that if $C$ is representation-finite, then $C$ is $A$-syzygy-finite. Next we give the definition and basic properties of Igusa-Todorov function. We denote by $K_{0}(A)$ the free abelian group generated by the isomorphism classes $[M]$ of modules $M$ in $A\ensuremath{\mbox{{\rm -mod}}}$. Let $K(A)$ be the factor group of $K_{0}(A)$ modulo the following relations: (1) $[Y]=[X]+[Z]$ if $Y\simeq X\oplus Z$; (2) $[P]=0$ if $P$ is projective. \noindent Then $K(A)$ is also the free abelian group with basis the isomorphism classes of indecomposable non-projective $A$-modules in $A\ensuremath{\mbox{{\rm -mod}}}$. 
Igusa and Todorov in \cite{IT} introduced a function $\Psi: A\ensuremath{\mbox{{\rm -mod}}} \rightarrow \mathbb{N}$ on this abelian group, which is defined on the objects of $A\ensuremath{\mbox{{\rm -mod}}}$ and takes values of non-negative integers. We call it the Igusa-Todorov function. It follows from \cite{IT} that, for any Artin algebra $A$, the Igusa-Todorov function always exists. For the convenience of the reader, we give the basic properties of Igusa-Todorov function as follows. \begin{Lem} (\cite{IT}) Let $A$ be an Artin algebra and $\Psi$ be the {\rm Igusa-Todorov} function. Then the following are true. (1) For any $A$-module $M$, if $M$ has finite projective dimension, then $\Psi(M)={\rm proj.dim}(_{A}M)$. (2) If $0\rightarrow X\rightarrow Y \rightarrow Z\rightarrow 0$ is an exact sequence in $A\ensuremath{\mbox{{\rm -mod}}}$ with ${\rm proj.dim}(Z)< \infty$, then ${\rm proj.dim}(Z)\leq\Psi(X\oplus Y)+1$. (3) If $0\rightarrow X\rightarrow Y \rightarrow Z\rightarrow 0$ is an exact sequence in $A\ensuremath{\mbox{{\rm -mod}}}$ with ${\rm proj.dim}(Y)< \infty$, then ${\rm proj.dim}(Y)\leq\Psi(\Omega(X)\oplus\Omega^{2}(Z))+2$. (4) If $0\rightarrow X\rightarrow Y \rightarrow Z\rightarrow 0$ is an exact sequence in $A\ensuremath{\mbox{{\rm -mod}}}$ with ${\rm proj.dim}(X)< \infty$, then ${\rm proj.dim}(X)\leq\Psi(\Omega(Y\oplus Z))+1$. \label{lem2.1} \end{Lem} Finally, we shall recall some definitions and basic facts on relative homological algebra. Let $B$ and $A$ be Artin algebras, and $f: B\to A$ a homomorphism of algebras preserving identities. Then we say that $f$ is an extension. Clearly, every $A$-module can be regarded as a $B$-module via $f$ in the natural way. 
An exact sequence in $A\ensuremath{\mbox{{\rm -mod}}}$ $$\cdots \longrightarrow M_{i+1}\longrightarrow M_{i} \longrightarrowf {t_{i}} M_{i-1} \longrightarrow\cdots$$ is called $(A, B)$-exact if there are $B$-homomorphisms $h_{i}: M_{i} \to M_{i+1}$ such that $t_{i}=t_{i}h_{i-1}t_{i}$ for all $i$. It is easily checked that this definition is equivalent to that introduced in \cite{H1956}. Let $X$ be an $A$-module. $X$ is said to be $(A, B)$-projective, or relatively projective over $B$, if $X$ is an $A$-direct summand of $A\otimes_{B}X$. For the equivalent conditions of relatively projective modules, we refer the reader to \cite[p. 202, Proposition 3.6]{ARS} and \cite{Hirata, T}. We denote by $\mathscr{P}(A, B)$ the full subcategory of $A\ensuremath{\mbox{{\rm -mod}}}$ consisting of all $(A, B)$-projective $A$-modules. Note that $\mathscr{P}(A, B)$ is functorially finite in $A\ensuremath{\mbox{{\rm -mod}}}$ (see \cite{KP}). Given an $A$-module $X$, an $(A, B)$-projective resolution of $_AX$ is defined to be an $(A, B)$-exact sequence $$\cdots\longrightarrow P_n\longrightarrow P_{n-1}\longrightarrow\cdots\longrightarrow P_1\longrightarrow P_0\longrightarrow X\longrightarrow 0$$ in which $P_i\in \mathscr{P}(A, B)$ for each $i$. The relative projective dimension of $_AX$, denoted by ${\rm re.proj.dim}(_AX)$, is defined as $$\begin{array}{rl} {\rm re.proj.dim}(_AX)=\mbox{Inf} &\{\, n \mid 0\rightarrow P_n\rightarrow P_{n-1}\rightarrow\cdots\rightarrow P_1\rightarrow P_0\rightarrow X\rightarrow 0 \\ &\qquad \mbox{ is an } (A, B)\mbox{-projective resolution of }_AX\}. \end{array}$$ \noindent If no such finite resolution exists, we say that the relative projective dimension of $_AX$ is infinite. 
The relative global dimension of the extension $f$, denoted by ${\rm gl.dim }(f)$, is defined as $$\begin{array}{rl} {\rm gl.dim }(f)=\mbox{Sup} \{{\rm re.proj.dim}(_AX) \mid X\in A\ensuremath{\mbox{{\rm -mod}}} \}, \end{array}$$ \noindent while the relative finitistic dimension of $f$, denoted by $\fd (f)$, is defined as $$\begin{array}{rl} \fd(f)=\mbox{Sup} \{{\rm re.proj.dim}(_AX) \mid X\in A\ensuremath{\mbox{{\rm -mod}}} \mbox{ and } {\rm proj.dim}(_AX)<\infty \}. \end{array}$$ \noindent Clearly, $\fd (f) \leq {\rm gl.dim } (f)$. In particular, if ${\rm gl.dim }(A)<\infty$, then $\fd (f) = {\rm gl.dim } (f)$. Xi and Xu \cite{XX} also defined a relative finitistic dimension of $f$, denoted by $\rfd (f)$, to be the supremum of the relative projective dimensions of finitely generated left $A$-modules with finite relative projective dimension. Note that, if ${\rm gl.dim }(B)<\infty$ and $A_{B}$ is projective, then $\rfd (f)\leq \fd (f)\leq {\rm gl.dim } (f)$ by \cite[Theorem 1]{H1958}. The following result is a consequence, by induction, of the Generalized Schanuel's Lemma in \cite{T}. \begin{Lem} \label{lemma2.2}Let $f: B\to A$ be an extension of Artin algebras. Suppose that $$0\longrightarrow M\longrightarrow P_{n-1}\longrightarrow P_{n-2}\longrightarrow \cdots \longrightarrow P_{0}\longrightarrow X\longrightarrow 0$$ \noindent and $$0\longrightarrow N\longrightarrow Q_{n-1}\longrightarrow Q_{n-2}\longrightarrow \cdots \longrightarrow Q_{0}\longrightarrow X\longrightarrow 0$$ \noindent are $(A, B)$-exact sequences in which all $P_{i}$ and $Q_{i}$ are $(A, B)$-projective for $0\leq i\leq n-1$ with $n$ being a positive integer. Then we have an isomorphism $$ M \oplus Q_{n-1}\oplus P_{n-2} \oplus \cdots \oplus C \simeq N\oplus P_{n-1}\oplus Q_{n-2} \oplus \cdots \oplus C' $$ \noindent of $A$-modules, where $C= P_{0}$ and $C'= Q_{0}$ if $n$ is an even number, and $C= Q_{0}$ and $C'= P_{0}$ if $n$ is an odd number. 
\end{Lem} \section{Relative finitistic dimensions and finitistic dimensions \label{rfd}} In this section, we employ the relative finitistic dimension to control an extension $\varphi: B \to A$ and use the finitistic dimension of $B$ to approach the finiteness of the finitistic dimension of $A$. Concretely, we consider the case where $B$ is of finite representation type and give a proof of Theorem \ref{thm1}. The main result in this section is based on the following observation. \begin{Lem} Let $\varphi: B\to A$ be an extension of Artin algebras. If $B$ is representation-finite, then so is $\mathscr{P}(A, B)$. \label{lem3.1} \end{Lem} {\bf Proof.} Let $Y$ be an $A$-module in $\mathscr{P}(A, B)$. Then $Y$ is an $A$-direct summand of $A\otimes_{B}Y$, and the assertion follows from the proof of \cite[p. 200, Lemma 3.1]{ARS}. $\square$ Now we can prove Theorem \ref{thm1}. {\bf Proof of Theorem \ref{thm1}.} Since $B$ is representation-finite, the category $\mathscr{P}(A, B)$ is also representation-finite by Lemma \ref{lem3.1}, so we may assume that $\{Q_{1},\,Q_{2},\,\cdots,\,Q_{m}\}$ is a complete list of non-isomorphic indecomposable $(A,\,B)$-projective $A$-modules. Let $X$ be an $A$-module with finite projective dimension. $(1)$ If $\fd(\varphi)\leq 1$, then ${\rm re.proj.dim}(_{A}X)\leq 1$, and hence $_{A}X$ has an $(A,\,B)$-projective resolution of length 1: $$0\longrightarrow P_1\longrightarrow P_{0}\longrightarrow X\longrightarrow 0.$$ \noindent Since $_{A}P_{1}\oplus P_{0}\in \mathscr{P}(A, B)$, we can write $_{A}P_{1}\oplus P_{0} =\oplus_{j=1}^{m}Q_{j}^{s_{j}}$, where $s_{j}$ is a non-negative integer for each $j$. Now we bound the projective dimension of $_{A}X$: $$\begin{array}{rl} {\rm proj.dim} (_{A}X)&\leq \Psi(P_{1}\oplus P_{0})+1\\ &\\ &=\Psi(\oplus_{j=1}^{m}Q_{j}^{s_{j}})+1\\ &\\ &\leq \Psi(\oplus_{j=1}^{m}Q_{j})+1, \end{array}$$ \noindent where $\Psi$ is the {\rm Igusa-Todorov} function. 
Thus $\fd(A)$ is upper bounded by $\Psi(\oplus_{j=1}^{m}Q_{j})+1$, and in particular $\fd(A)<\infty$. $(2)$ If $2\leq\fd(\varphi)=n<\infty$, then by definition ${\rm re.proj.dim}(_{A}X)\leq n$, so $_{A}X$ has an $(A,\,B)$-projective resolution of length $n$. Consider the standard relative projective resolution of $_{A}X$ $$\cdots\longrightarrow C_n\longrightarrowf{\delta_{n}} C_{n-1}\longrightarrow\cdots\longrightarrow C_1\longrightarrow C_0\longrightarrow X\longrightarrow 0,$$ \noindent where $ C_0= A\otimes_{B}X$ and $C_{i}= A\otimes_{B}\Ker \delta_{i-1}$ for all $i\geq 1$. Then, by Lemma \ref{lemma2.2}, we get an $(A,\,B)$-projective resolution of $_{A}X$ of length $n$: $$0\longrightarrow \Img \delta_{n}\longrightarrow C_{n-1}\longrightarrow\cdots\longrightarrow C_1\longrightarrow C_0\longrightarrow X\longrightarrow 0.\quad \quad (\ast_{1})$$ \noindent Note that we can write $$\Img \delta_{n}=\oplus_{j=1}^{m}Q_{j}^{s_{nj}} \mbox{\;and\;} C_{i}=\oplus_{j=1}^{m}Q_{j}^{s_{ij}} ,$$ \noindent where all $s_{ij}$ are non-negative integers for $0\leq i\leq n$ and $1\leq j\leq m$. We claim that $\Img \delta_{n}$ and $C_{i}$ with $0\leq i\leq n-1$ have finite projective dimension. In fact, we may consider the exact sequence of $A$-modules $0\rightarrow \Ker \delta_{0}\rightarrow C_{0}\longrightarrowf{\delta_{0}} X\rightarrow 0$ obtained from $(\ast_{1})$. Since both $X$ and $C_{0}$ have finite projective dimension by assumption, we have ${\rm proj.dim} (\Ker \delta_{0})<\infty$. One then proceeds in the same way from $\Ker \delta_{0}$ to show that $C_{1}$ and $\Ker \delta_{1}$ have finite projective dimension, and so on. This proves the claim. 
Now by Lemma \ref{lem2.1} we can bound the projective dimension of $_{A}X$: $$\begin{array}{rl} {\rm proj.dim} (_{A}X)&\leq \mbox{max} \{{\rm proj.dim}(\Img \delta_{n}),\, {\rm proj.dim}(C_{i}),\, i=0,\,\cdots,\,n-1 \}+n\\ &\\ &=\mbox{max} \{\Psi(\Img \delta_{n}),\, \Psi(C_{i}),\, i=0,\,\cdots,\,n-1 \}+n\\ &\\ &=\mbox{max} \{\Psi(\oplus_{j=1}^{m}Q_{j}^{s_{nj}}),\, \Psi(\oplus_{j=1}^{m}Q_{j}^{s_{ij}}),\, i=0,\,\cdots,\,n-1 \}+n\\ &\\ &\leq \Psi(\oplus_{j=1}^{m}Q_{j})+n, \end{array}$$ \noindent where $\Psi$ is the {\rm Igusa-Todorov} function. Thus $\fd(A)$ is upper bounded by $\Psi(\oplus_{j=1}^{m}Q_{j})+n$. This completes the proof. $\square$ As an immediate consequence of Theorem \ref{thm1}, we have the following. \begin{Koro} Let $B$ be a subalgebra of an Artin algebra $A$ with the same identity such that ${\rm rad}(B)$ is a left ideal in $A$ and ${\rm rad}(B)A={\rm rad}(A)$. If $B$ is representation-finite, then $\fd (A)<\infty$. \end{Koro} {\bf Proof.} Consider the inclusion map $i: B\rightarrow A$. Note that, if ${\rm rad}(B)$ is a left ideal in $A$ and ${\rm rad}(B)A={\rm rad}(A)$, then $\fd(i)\leq {\rm gl.dim }(i)\leq1$ by \cite[Proposition 2.19]{XX}. Therefore, if $B$ is representation-finite, by Theorem \ref{thm1}, we have $\fd (A)<\infty$. $\square$ As another consequence of Theorem \ref{thm1}, we have the following result. \begin{Koro} Let $\varphi: B \to A$ be an extension of Artin algebras such that $2\leq\fd(\varphi)<\infty$. Suppose that ${\rm proj.dim}(_{B}A)<\infty$ and $A_{B}$ is projective. If $B$ is representation-finite, then $\fd (A)<\infty$. \end{Koro} {\bf Proof.} By Theorem \ref{thm1}, it suffices to prove that, for any $A$-module $X$ with finite projective dimension, $_{A}A\otimes_{B}X$ has finite projective dimension. Let $X$ be an $A$-module with ${\rm proj.dim}(_{A}X)<\infty$. Then, viewing $X$ as a $B$-module, we have ${\rm proj.dim}(_{B}X)\leq {\rm proj.dim}(_{A}X)+{\rm proj.dim}(_{B}A)$. 
By assumption, we get ${\rm proj.dim}(_{B}X)<\infty$; say ${\rm proj.dim}(_{B}X)=m$. Take a $B$-projective resolution of $_{B}X$ of length $m$ $$0\longrightarrow P_{m}\longrightarrow P_{m-1}\longrightarrow \cdots \longrightarrow P_{0}\longrightarrow {}_{B}X\longrightarrow 0.$$ \noindent Since $A_{B}$ is projective, the sequence $$0\longrightarrow A\otimes_{B}P_{m}\longrightarrow A\otimes_{B}P_{m-1}\longrightarrow \cdots \longrightarrow A\otimes_{B}P_{0}\longrightarrow A\otimes_{B}X\longrightarrow 0$$ \noindent is exact and hence an $A$-projective resolution of $_{A}A\otimes_{B}X$, which means that ${\rm proj.dim}(_{A}A\otimes_{B}X)<\infty$. $\square$ Remark that, more generally, the above corollary still holds whenever $\fd (B)<\infty$. In fact, let $X$ be an $A$-module with ${\rm proj.dim}(_{A}X)<\infty$. Since $2\leq\fd(\varphi)=n<\infty$, by the proof of Theorem \ref{thm1}, $_{A}X$ has an $(A,\,B)$-projective resolution of length $n$: $$0\longrightarrow Y\longrightarrow C_{n-1}\longrightarrow\cdots\longrightarrow C_1\longrightarrow C_0\longrightarrow X\longrightarrow 0,$$ \noindent such that $_{A}Y$ and $_{A}C_{i}$ with $0\leq i\leq n-1$ have finite projective dimension and such that every $C_{i}$ for $0\leq i\leq n-1$ can be expressed in the following form: $$C_{i}=A\otimes_{B}M_{i},$$ \noindent where $_{B}M_{i}$ has finite projective dimension. Assume that $\fd (B)=m<\infty$. Then $_{B}M_{i}$ has a projective resolution of length $m$ $$0\longrightarrow Q_{m}\longrightarrow Q_{m-1}\longrightarrow \cdots \longrightarrow Q_{0}\longrightarrow {}_{B}M_{i}\longrightarrow 0.$$ \noindent Since $A_{B}$ is projective, the sequence $$0\longrightarrow A\otimes_{B}Q_{m}\longrightarrow A\otimes_{B}Q_{m-1}\longrightarrow \cdots \longrightarrow A\otimes_{B}Q_{0}\longrightarrow A\otimes_{B}M_{i}\longrightarrow 0$$ \noindent is exact and hence an $A$-projective resolution of $_{A}A\otimes_{B}M_{i}$, which means that ${\rm proj.dim}(_{A}A\otimes_{B}M_{i})\leq m$. 
Note that ${\rm proj.dim}(_{B}Y)\leq {\rm proj.dim}(_{A}Y)+{\rm proj.dim}(_{B}A)<\infty$, so, in the same way, ${\rm proj.dim}(_{A}Y)\leq{\rm proj.dim}(_{A}A\otimes_{B}Y)\leq m$. Now we can estimate the projective dimension of $_{A}X$: $$\begin{array}{rl} {\rm proj.dim} (_{A}X)&\leq \mbox{max} \{{\rm proj.dim}(Y),\, {\rm proj.dim}(C_{i}),\, i=0,\,\cdots,\,n-1 \}+n\\ &\\ &\leq m+n. \end{array}$$ \noindent This implies that $\fd (A)<\infty$. \section{Quotient algebras and finitistic dimensions \label{qfd}} In this section, we shall use representation-theoretic properties of quotient algebras to approach the finiteness of the finitistic dimension of given algebras and prove Theorem \ref{thm2}. {\bf Proof of Theorem \ref{thm2}.} Let $X$ be an $A$-module with finite projective dimension. Consider the exact sequence of $A$-modules $$0\longrightarrow J\Omega_{A}(X)\longrightarrow \Omega_{A}(X)\longrightarrow\Omega_{A}(X)/J\Omega_{A}(X)\longrightarrow 0. $$ \noindent Since $IJ\Omega_{A}(X)\subseteq IJ{\rm rad}(P(X))=IJ{\rm rad}(A)P(X)\subseteq IJKP(X)=0$ by assumption, where $P(X)$ is the projective cover of $X$, the module $Y:=J\Omega_{A}(X)$ is an $A/I$-module. Clearly, $Z:=\Omega_{A}(X)/J\Omega_{A}(X)$ is an $A/J$-module. If $A/I$ and $A/J$ are $A$-syzygy-finite, then there are a non-negative integer $n$, an $A$-module $M$ and an $A$-module $N$ such that $\Omega_{A}^{n}(Y)\in {\rm add} (_{A}M)$ and $\Omega_{A}^{n}(Z)\in {\rm add} (_{A}N)$. Applying the Horseshoe Lemma to the above exact sequence, we obtain the following exact sequence $$0\longrightarrow \Omega_{A}^{n}(Y)\longrightarrow \Omega_{A}^{n+1}(X)\oplus P\longrightarrow\Omega_{A}^{n}(Z)\longrightarrow 0$$ \noindent with $P$ a projective $A$-module. 
Now we can bound the projective dimension of $_{A}X$: $$\begin{array}{rl} {\rm proj.dim} (_{A}X)&\leq {\rm proj.dim} (\Omega_{A}^{n+1}(X))+n+1\\ &\\ &={\rm proj.dim} (\Omega_{A}^{n+1}(X)\oplus P)+n+1\\ &\\ &\leq\Psi(\Omega_{A}^{n+1}(Y)\oplus \Omega_{A}^{n+2}(Z))+n+3\\ &\\ &\leq\Psi(\Omega_{A}(M)\oplus \Omega_{A}^{2}(N))+n+3, \end{array}$$ \noindent where $\Psi$ is the {\rm Igusa-Todorov} function. Thus $\fd(A)$ is upper bounded by $\Psi(\Omega_{A}(M)\oplus \Omega_{A}^{2}(N))+n+3$, and $\fd(A)<\infty$. $\square$ The proof of Theorem \ref{thm2} is similar to that of \cite[Theorem 3.2]{X1} and \cite[Theorem 3.1]{Wjq2}, in which the {\rm Igusa-Todorov} function is used. However, the difference is that syzygy-shifted sequences are employed in the theorem above. It is worth noting that our result unifies many of the results in the literature in this direction, that is, many known results can be obtained from Theorem \ref{thm2}. We illustrate this in what follows. If we take $K=A$, then we reobtain \cite[Theorem 3.2]{X1}. \begin{Koro} (\cite{X1}) Let $A$ be an Artin algebra and $I$, $J$ be two ideals of $A$ such that $IJ=0$. If $A/I$ and $A/J$ are $A$-syzygy-finite (for example, $A/I$ and $A/J$ are representation-finite), then $\fd(A)<\infty$. In particular, algebras with radical-square-zero have finite finitistic dimension. \end{Koro} If we take $K={\rm rad} (A)$, we have the following result, which recovers \cite[Theorem 3.1]{Wjq2} by \cite[Corollary 2.8]{Wjq2}. \begin{Koro} Let $A$ be an Artin algebra and let $I$, $J$ be two ideals of $A$ such that $IJ{\rm rad} (A)=0$, and that both $_{A}I$ and $_{A}J$ have finite projective dimension. If $A/I$ is $A/I$-syzygy-finite and $A/J$ is $A/J$-syzygy-finite, then the finitistic dimension of $A$ is finite. \end{Koro} If we set $I=J={\rm rad}^{n}(A)$ and $K={\rm rad} (A)$, we reobtain the main result in \cite{W}. \begin{Koro} Let $A$ be an Artin algebra with ${\rm rad}^{2n+1}(A)=0$. 
If $A/{\rm rad}^{n}(A)$ is $A$-syzygy-finite (for example, $A/{\rm rad}^{n}(A)$ is representation-finite), then $\fd(A)<\infty$. In particular, algebras with radical-cube-zero have finite finitistic dimension. \end{Koro} As another consequence of Theorem \ref{thm2}, we have the following result, generalizing the results of \cite[Lemma 3.6 and Corollary 3.8]{X1} and \cite[Corollary 3.2 and Proposition 3.5]{Wjq2}. \begin{Koro} Let $A$ be an Artin algebra with an ideal $I$ such that $A/I$ is $A$-syzygy-finite (for example, $A/I$ is representation-finite). Then $\fd(A)<\infty$ if one of the following conditions is satisfied: $(1)$\,$I{\rm rad}^{2}(A)=0$; $(2)$\,${\rm rad} (A)I{\rm rad} (A)=0$; $(3)$\,$I^{2}{\rm rad} (A)=0$. \end{Koro} \section{Left idealized extensions and finitistic dimensions \label{lfd}} In this section, we shall employ left idealized extensions to study the finitistic dimension conjecture and give a proof of Proposition \ref{propo1}. More precisely, we consider the following question: given a chain of subalgebras of an Artin algebra $A$ $$B=A_{0}\subseteq A_{1}\subseteq\cdots \subseteq A_{s-1} \subseteq A_{s}=A$$ \noindent such that ${\rm rad}(A_{i-1})$ is a left ideal of $A_{i}$ for all $1\leq i\leq s$, if $A$ is representation-finite, is the finitistic dimension of $B$ finite? It is known that an affirmative answer to this question would imply that the finitistic dimension conjecture over a field is true. It was proved by Xi in \cite{X1} that, given such a chain with $s\leq 2$, if $A$ is representation-finite, then $\fd (B)<\infty$. A natural question is: is it possible to show that the finitistic dimension of $B$ is finite if $s>2$? In this direction, Wei gave in \cite[Theorem 2.9]{Wjq} an affirmative answer under some homological conditions. In this section, we shall give a partial answer for the case $s> 2$ by imposing a condition concerning syzygy-finite algebras, which generalizes some results in \cite{X1,WX}. 
Let us start with the following two lemmas from \cite{WX,X2}, which establish a way of lifting modules over a subalgebra to modules over its extension algebra. \begin{Lem} (\cite[Lemma 3.2]{X2}) \label{lem5.1} Let $A$ be an Artin algebra and $B$ be a subalgebra of $A$ with the same identity such that ${\rm rad} (B)$ is a left ideal of $A$. Then, for any $B$-module $X$, $\Omega_{B}^{i}(X)$ is a torsionless $A$-module for all $i\geq 2$ and there is a projective $A$-module $P$ and an $A$-module $Y$ such that $\Omega_{B}^{i}(X)\simeq \Omega_{A}(Y)\oplus P$ as $A$-modules. \end{Lem} \begin{Lem} (\cite[Lemma 3.5]{WX}) \label{lem5.2} Let $A$ be an Artin algebra and $B$ be a subalgebra of $A$ with the same identity. Suppose that $I$ is an ideal of $B$ and is also a left ideal of $A$. Then, for any torsionless $B$-module $X$, $IX$ is a torsionless $A$-module and there is a projective $A$-module $Q$ and an $A$-module $Z$ such that $IX\simeq \Omega_{A}(Z)\oplus Q$ as $A$-modules. \end{Lem} In the following, we employ the finitistic dimension of bigger algebras to approach that of smaller algebras for a left idealized extension. \begin{Lem} Let $$B=A_{0}\subseteq A_{1}\subseteq\cdots \subseteq A_{s-1}\subseteq A_{s}=A$$ \noindent be a chain of subalgebras of an Artin algebra $A$ such that $I_{i-1}$ is an ideal of $A_{i-1}$ and is also a left ideal of $A_{i}$ for all $1\leq i\leq s$ with $s$ being a positive integer. If $A$ is 1-syzygy-finite and $B/I_{s-1}\cdots I_{1}I_{0}$ is $B$-syzygy-finite (for example, $B/I_{s-1}\cdots I_{1}I_{0}$ is representation-finite), then $\fd(B)<\infty$. \label{lem5.3}\end{Lem} {\bf Proof}. First observe that $I:=I_{s-1}\cdots I_{1}I_{0}$ is an ideal of $B$ contained in $I_{0}$ by assumption. Let $X$ be a $B$-module with finite projective dimension. 
Then we can form an exact sequence of $B$-modules: $$0\longrightarrow I\Omega_{B}(X)\longrightarrow \Omega_{B}(X)\longrightarrow\Omega_{B}(X)/I\Omega_{B}(X)\longrightarrow 0.$$ \noindent Clearly, $Y:=\Omega_{B}(X)/I\Omega_{B}(X)$ is a $B/I$-module, and hence there are a non-negative integer $n$ and a $B$-module $M$ such that $\Omega_{B}^{n}(Y)\in {\rm add} (_{B}M)$, since $B/I$ is $B$-syzygy-finite. Note that $\Omega_{B}(X)$ is a torsionless $B$-module, so $I_{0}\Omega_{B}(X)$ is a torsionless $A_{1}$-module by Lemma \ref{lem5.2}. Inductively, by Lemma \ref{lem5.2} again, we obtain that $Z:=I\Omega_{B}(X)$ is a torsionless $A$-module. Hence, there are a projective $A$-module $Q$ and an $A$-module $W$ such that $Z\simeq \Omega_{A}(W)\oplus Q$ as $A$-modules. Since $A$ is 1-syzygy-finite, there exists an $A$-module $N$ such that $\Omega_{A}(W)\in {\rm add} (_{A}N)$, which means that $Z\in{\rm add} (_{A}N\oplus A)$. Taking the $n$-th syzygy of the above exact sequence, by the Horseshoe Lemma, we obtain an exact sequence of $B$-modules $$0\longrightarrow \Omega_{B}^{n}(Z)\longrightarrow \Omega_{B}^{n+1}(X)\oplus P\longrightarrow\Omega_{B}^{n}(Y)\longrightarrow 0$$ \noindent with $P$ a projective $B$-module. Now we can bound the projective dimension of $_{B}X$: $$\begin{array}{rl} {\rm proj.dim} (_{B}X)&\leq {\rm proj.dim} (\Omega_{B}^{n+1}(X))+n+1\\ &\\ &={\rm proj.dim} (\Omega_{B}^{n+1}(X)\oplus P)+n+1\\ &\\ &\leq\Psi(\Omega_{B}^{n+1}(Z)\oplus \Omega_{B}^{n+2}(Y))+n+3\\ &\\ &\leq\Psi(\Omega_{B}^{n+1}(N\oplus A)\oplus \Omega_{B}^{2}(M))+n+3, \end{array}$$ \noindent where $\Psi$ is the {\rm Igusa-Todorov} function. Thus $\fd(B)$ is upper bounded by $\Psi(\Omega_{B}^{n+1}(N\oplus A)\oplus \Omega_{B}^{2}(M))+n+3$. This completes the proof. $\square$ Note that Lemma \ref{lem5.3} recovers \cite[Theorem 3.1]{X1} if we take $s=1$. The next result is a variation of Lemma \ref{lem5.3}. 
\begin{Lem} Let $$B=A_{0}\subseteq A_{1}\subseteq\cdots \subseteq A_{s-1}\subseteq A_{s}=A$$ \noindent be a chain of subalgebras of an Artin algebra $A$ such that $I_{i-1}$ is an ideal of $A_{i-1}$ and is also a left ideal of $A_{i}$ for all $1\leq i\leq s$ with $s$ being a positive integer. Suppose that $I_{0}$ is the Jacobson radical ${\rm rad} (B)$ of $B$. If $A$ is 1-syzygy-finite and $A_{1}/I_{s-1}\cdots I_{1}$ is $B$-syzygy-finite (for example, $A_{1}/I_{s-1}\cdots I_{1}$ is representation-finite), then $\fd(B)<\infty$. \label{lem5.4}\end{Lem} {\bf Proof}. Given a $B$-module $X$ with finite projective dimension, we consider $\Omega_{B}^{2}(X)$ instead of $\Omega_{B}(X)$. By Lemma \ref{lem5.1}, $\Omega_{B}^{2}(X)$ is a torsionless $A_{1}$-module. Then we can form an exact sequence of $B$-modules: $$0\longrightarrow I_{s-1}\cdots I_{1}\Omega_{B}^{2}(X)\longrightarrow \Omega_{B}^{2}(X)\longrightarrow\Omega_{B}^{2}(X)/I_{s-1}\cdots I_{1}\Omega_{B}^{2}(X)\longrightarrow 0.$$ \noindent By the argument in the proof of Lemma \ref{lem5.3}, we obtain the lemma. $\square$ Here, we understand $A_{1}/I_{s-1}\cdots I_{1}=0$ if $s=1$, in which case the condition that $A_{1}/I_{s-1}\cdots I_{1}$ is $B$-syzygy-finite holds trivially. Let us remark that Lemma \ref{lem5.4} recovers \cite[Theorem 3.1]{X1} if we take $s=1$, and extends \cite[Corollary 3.10]{WX} if we take $s=2$. Combining Lemma \ref{lem5.3} with Lemma \ref{lem5.4}, we prove Proposition \ref{propo1}. As an immediate consequence of Proposition \ref{propo1}, we have the following corollary, which is a partial answer to a question posed by Xi on his website~(see http://math0.bnu.edu.cn/~ccxi/Problems.php). \begin{Koro} Let $D\subseteq C\subseteq B\subseteq A$ be a chain of subalgebras of an Artin algebra $A$ such that the radicals of $D$, $C$ and $B$ are left ideals of $C$, $B$ and $A$, respectively. Suppose that $A$ is 1-syzygy-finite. 
If either $D/{\rm rad} (B){\rm rad} (C){\rm rad} (D)$ or $C/{\rm rad} (B){\rm rad} (C)$ is $D$-syzygy-finite, then $\fd(D)<\infty$. In particular, if either $D/{\rm rad} (B){\rm rad}(C){\rm rad} (D)$ or $C/{\rm rad} (B){\rm rad} (C)$ is representation-finite, then $\fd(D)<\infty$. \end{Koro} We end this section with an example showing that our results do apply to check the finiteness of the finitistic dimension of some algebras. \noindent{\bf Example 1.}\,\,(\cite{WX})\,\,Let $A$ be the algebra given by the quiver with relation: $$\xymatrix{\circ &\circ\ar[l]_{\lambda}^(1){5}^(0){2}&\circ\ar[l]_{\epsilon}^(0){3}&\circ\ar[l]_{\xi}^(0){1} &\circ\ar[l]_{\beta}^(){}^(0){4}&\circ\ar[l]_{\alpha}^(){}^(0){6}},\,\,\,\,\alpha\beta\xi\epsilon\lambda=0.$$ \noindent Then $A$ is a Nakayama algebra and hence 1-syzygy-finite. Let $C$ be the subalgebra of $A$ generated by the set $\{e_{1}, e_{2'}:=e_{2}+e_{4}+e_{5}, e_{3'}:=e_{3}+e_{6}, \lambda, \beta, \alpha+\epsilon, \gamma:=\xi\epsilon, \delta:=\beta\xi\}$, which is given by the quiver with relations: $$ \xymatrix{\circ\ar@<0.4ex>[r]^{\gamma} &\circ\ar@<0.4ex>[l]^{\beta}^(1){1}^(0){2'}\ar@(ur,ul)_{\lambda}\ar@<0.4ex>[r]^{\delta} &\circ\ar@<0.4ex>[l]^{\alpha+\epsilon}^(0){3'}},\quad \beta\gamma=\delta(\alpha+\epsilon), \gamma\beta=\gamma\delta=\lambda^{2}=\lambda\beta=\lambda\delta=(\alpha+\epsilon)\beta\gamma\lambda=0. $$ \noindent It is not hard to see that ${\rm rad}^{3}(C)\neq 0$ and $\ell\ell^{\infty}(C)=4$, where $\ell\ell^{\infty}(C)$ denotes the infinite-layer length of $C$ (\cite{HLM}). Note also that $C$ is neither a monomial algebra nor a special biserial algebra. It was proved in \cite{WX} that the finitistic dimension of $C$ is finite. Here, we shall use left idealized extensions to reobtain the finiteness of the finitistic dimension of $C$, though ${\rm rad}(C)$ is not a left ideal of $A$. 
In fact, let $B$ be the subalgebra of $A$ generated by the set $\{e_{1}, e_{2'}:=e_{2}+e_{4}+e_{5}, e_{3'}:=e_{3}+e_{6}, \lambda, \beta, \alpha, \epsilon, \gamma:=\xi\epsilon, \delta:=\beta\xi\}$. Then $B$ is given by the following quiver with relations: $$ \xymatrix{\circ\ar@<0.4ex>[r]^{\gamma} &\circ\ar@<0.4ex>[l]^{\beta}^(1){1}^(0){2'}\ar@(ur,ul)_{\lambda}\ar@<1.3ex>[r]^{\delta} &\circ\ar[l]|{\epsilon}^(0){3'}\ar@<1.3ex>[l]^{\alpha}},\quad \beta\gamma=\delta\epsilon, \gamma\beta=\gamma\delta=\lambda^{2}=\lambda\beta=\lambda\delta=\delta\alpha=\epsilon\beta=\epsilon\delta=\alpha\lambda=\alpha\beta\gamma\lambda=0. $$ \noindent It is easy to check that $C\subseteq B\subseteq A$ is a chain of subalgebras of $A$ such that ${\rm rad}(C)$ is a left ideal of $B$ and ${\rm rad}(B)$ is a left ideal of $A$. So we have $\fd(C)<\infty$ by Proposition \ref{propo1}. \end{document}
\begin{document} \title{Improved Bounds for Eigenpath Traversal} \author{Hao-Tien Chiang} \email{[email protected]} \affiliation{ University of New Mexico \\ Albuquerque, New Mexico 87185, USA \\ } \author{Guanglei Xu} \email{[email protected]} \affiliation{ University of Pittsburgh \\ Pittsburgh, PA 15260, USA \\ } \author{Rolando D. Somma} \email{[email protected]} \affiliation{ Los Alamos National Laboratory \\ Los Alamos, New Mexico 87545, USA } \begin{abstract} We present a bound on the length of the path defined by the ground states of a continuous family of Hamiltonians in terms of the spectral gap $\Delta$. We use this bound to obtain a significant improvement over the cost of recently proposed methods for quantum adiabatic state transformations and eigenpath traversal. In particular, we prove that a method based on evolution randomization, which is a simple extension of adiabatic quantum computation, has an average cost of order $1/\Delta^2$, and that a method based on fixed-point search has a maximum cost of order $1/\Delta^{3/2}$. Additionally, if the Hamiltonians satisfy a frustration-free property, such costs can be further improved to order $1/\Delta^{3/2}$ and $1/\Delta$, respectively. Our methods offer an important advantage over adiabatic quantum computation when the gap is small, where the cost is of order $1/\Delta^3$. \end{abstract} \date{\today} \maketitle \section{Introduction} \label{sec:intro} Numerous problems in quantum information, physics, and optimization can be solved by preparing the lowest-energy state or another eigenstate of a Hamiltonian~(cf. \cite{apolloni:qa89,finnila:qc1994a,kadowaki:qc1998a,farhi:qc2001a,sachdev_2001,santoro:qa06,somma_quantum_2008,perez_PEPS_2008,das:qa08,schwarz_peps_2011}). On a quantum system (e.g., an analogue quantum computer), such an eigenstate can be prepared by smoothly changing the interaction parameters of the controlled Hamiltonians under which the system evolves. 
That is the idea of adiabatic quantum computation (AQC), which relies on the adiabatic theorem~\cite{Born-Fock:adiabatic,messiah_1999} to assert that, at any time, the evolved state is sufficiently close to an eigenstate of the system that is continuously related to the final one. The importance of AQC for quantum speedups was demonstrated in several examples~(cf. \cite{farhi_quantum_2002,roland_quantum_2002,somma_quantum_2008,Hen_period_2013}). In particular, AQC is equivalent to the standard circuit model of quantum computing, implying that some quantum speedups obtained in one model may be carried to the other using methods that map quantum circuits to Hamiltonians and vice versa~\cite{aharonov_adiabatic_2007,mizel_equivalence_2007,oliveira_adiabatic_2008,aharonov_line_2009,somma_Feynman_2013, cleve_query_2009,wiebe_product_2010,childs_efficient_2010,BCS2013}. In AQC, we assume access to Hamiltonians $H(s)$, $0 \le s \le 1$, that have non-degenerate and continuously related eigenstates $\ket{\psi(s)}$. The goal is to prepare $\ket{\psi(1)}$ from $\ket{\psi(0)}$, up to some small approximation error $\varepsilon$, by increasing $s$ from 0 to 1 with a suitable time schedule. The cost of the algorithm in AQC is determined by the total evolution time, $T$. This time depends on properties of the Hamiltonians used in the evolution, such as their rate of change or spectral gaps. In particular, a commonly used and rigorous quantum adiabatic approximation provides an upper bound to the cost given by (cf. \cite{jansen_bounds_2007,jordan_thesis_2008}) \begin{align} \label{eq:AA} T_{\rm AQC} = \kappa \max_s \left [ \frac{ \| \ddot H \|} {\varepsilon \Delta^2}, \frac{ \| \dot H \|^2} {\varepsilon \Delta^3} \right] \; . \end{align} That is, increasing $s$ according to, for example, $s(t)=t/T_{\rm AQC}$, suffices to prepare the final eigenstate from the initial one within error $\varepsilon$. (The cost will be $T=T_{\rm AQC}$ for such a schedule.) 
Here, $\kappa$ is a constant and $\Delta$ is the spectral gap of $H$, that is, the smallest (absolute) difference between the eigenvalue of $\ket{\psi(s)}$ and any other eigenvalue. Unless stated otherwise, all quantities, states, and operators depend on $s$, and all derivatives are with respect to $s$, e.g., $\dot X = \partial X / \partial s$ and $\ddot X = \partial^2 X/\partial s^2$. For an operator or matrix $X$ and state $\ket \phi$ on a $d$-dimensional complex Hilbert space, $\| X \|$ denotes the spectral norm and $\| \ket \phi \|$ denotes the Euclidean norm. We remark that the bound in Eq.~\eqref{eq:AA} is actually tight in the sense that there exist examples (e.g., Rabi oscillations, cf.~\cite{Marzlin_Inconsistency_2004,Amin_Consistency_2009}) for which the total cost of the adiabatic evolution is also lower bounded by a quantity of order $\| \dot H \|^2/\Delta^3$, and $\| \dot H \|^2/\Delta^3 > \| \ddot H \|/\Delta^2$ in such examples. A drawback with rigorous quantum adiabatic approximations is that the dependence of $T_{\rm AQC}$ on the gap is rather poor, especially when $\Delta \ll 1$. Also, the bound given by Eq.~\eqref{eq:AA} could imply a large overestimate of the actual cost needed to prepare the final eigenstate in some cases. For these reasons, other methods for traversing the eigenstate path, which differ from AQC but have a better cost dependence on the gap, were recently proposed~\cite{wocjan_speed-up_2008,boixo:qc2009a,boixo:qc2010a}. One such method~\cite{boixo:qc2009a} is based on evolution randomization to implement a version of the quantum Zeno effect and simulate projective measurements of $\ket{\psi(s)}$. The only difference between this ``randomization method'' (RM) and AQC is that, rather than choosing the schedule $s(t)=t/T_{\rm AQC}$ for the evolution, $s(t)$ is randomly chosen according to a probability distribution that depends on the gap and the approximation error. 
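For intuition, the Zeno-type reasoning behind these measurement-based methods can be sketched as follows (our paraphrase of the standard argument, not a derivation taken from Refs.~\cite{boixo:qc2009a,boixo:qc2010a}). If the path is discretized into $q$ steps of equal length $L/q$ and $\ket{\psi(s_j)}$ is measured projectively at each step, then, using $|\langle \phi | \phi' \rangle|^2 \ge 1 - \| \ket\phi - \ket{\phi'} \|^2$ for unit states, the probability of following the eigenpath satisfies

```latex
% Success probability after q projective measurements along the eigenpath:
\begin{align*}
\prod_{j=0}^{q-1} | \langle \psi(s_{j+1}) | \psi(s_j) \rangle |^2
\;\ge\; \left( 1 - \frac{L^2}{q^2} \right)^{q}
\;\ge\; 1 - \frac{L^2}{q} \; .
\end{align*}
```

Thus a number of measurement steps $q$ of order $L^2/\varepsilon$ suffices for final error $\varepsilon$, which is the origin of the $L^c$ dependence with $c=2$ in the cost bounds discussed below.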
Another method~\cite{boixo:qc2010a} also traverses the eigenstate path by making projective measurements of $\ket{\psi(s)}$, but each measurement is implemented using the so-called phase estimation algorithm~\cite{kitaev_quantum_1995} and Grover's fixed-point search technique~\cite{grover_different_2005}. The method in Ref.~\cite{boixo:qc2010a} requires knowing the eigenvalue of $\ket{\psi(s)}$, but this can be {\em learned} as the path is traversed. The (average) cost $T$, or total time of evolution under the $H(s)$, of the previous methods for eigenpath traversal depends not only on the spectral gap but also on the eigenstate path length, $L$. This is simply the length defined in complex Hilbert space: $L = \int_0^1 ds \| |\dot \psi \rangle \|$. For error $\varepsilon<1$, the cost is upper bounded by \begin{align} \label{eq:costEPT1} T_{\rm EPT} =\kappa' \frac {L^c \log (L/\varepsilon)} {\varepsilon \min_s \Delta} \; , \end{align} with $c=1,2$ depending on the method and $\kappa'$ a constant. Having an explicit dependence on the path length is important for those cases in which $L$ can be bounded independently of the gap. This observation was used in Ref.~\cite{somma_quantum_2008} to prove a quantum speedup of the well-known simulated annealing method used for optimization~\cite{kirkpatrick_SA_83} (Sec.~\ref{sec:QSA}). For many hard optimization problems, $\Delta$ decreases exponentially in the problem size while $L$ increases only polynomially. Then, $T_{\rm AQC} \gg T_{\rm EPT}$ for these cases and the methods in Refs.~\cite{wocjan_speed-up_2008,boixo:qc2009a,boixo:qc2010a} may be used to prepare the final eigenstate with lower cost than the adiabatic method. We remark that the upper bound of Eq.~\eqref{eq:costEPT1} can only be achieved for a uniform parametrization, under which the eigenstate satisfies $\| \sket{\dot \psi } \| =L$, independently of $s$. This is a strong requirement that will not be satisfied in general. 
We then considered an upper bound $L^* \ge L$, which can be easily computed from known properties of the Hamiltonians, and used such a bound to obtain the corresponding $T_{\rm EPT}$ in Refs.~\cite{boixo:qc2009a,boixo:qc2010a} (i.e., by replacing $L \rightarrow L^*$). When $\| \dot H \|$ and $\Delta$ are known, a commonly used path length bound is \begin{align} \label{eq:Lbound0} L^* = \max_s \frac{\| \dot H \|}{\Delta} \; . \end{align} Such a bound follows easily from the eigenvalue equation, which can be used to obtain $\| |\dot \psi \rangle \| \le \| \dot H \| / \Delta$ ~\cite{NoteEPT-bounds}. Equations~\eqref{eq:costEPT1} and~\eqref{eq:Lbound0} give an upper bound for the cost of the eigenpath traversal method as \begin{align} \label{eq:costEPT2} T_{\rm EPT} = \kappa' \max_s \frac{\| \dot H \|^c}{\varepsilon \Delta^{c+1}} \log (\| \dot H \|/(\varepsilon \Delta)) \; . \end{align} For the RM, $c=2$, and $T_{\rm EPT}$ can be larger than $T_{\rm AQC}$ when the parametrization differs from the uniform one. Thus, the advantage of the RM over the adiabatic method is unclear in this case from the above upper bounds: both, $T_{\rm AQC}$ and $T_{\rm EPT}$, depend on $1/\Delta^3$. A main goal of this paper is to obtain better bounds for the cost of the methods of Refs.~\cite{wocjan_speed-up_2008,boixo:qc2009a,boixo:qc2010a} in terms of the spectral gap, the error, $\| \dot H \|$ and $\| \ddot H\|$, giving special emphasis to the RM described in Ref.~\cite{boixo:qc2009a}. Such quantities, or bounds on them, are assumed to be known. The reason we focus more on the RM than on other methods for eigenpath traversal is its simple connection with AQC. The other methods not only require evolving with the Hamiltonian, but also require implementing other operations such as those for the quantum Fourier transform in the phase estimation algorithm. Nevertheless, some of our results can also be used to improve the cost of those other methods.
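To make the bound of Eq.~\eqref{eq:Lbound0} concrete, the following minimal numerical sketch (ours, not part of the original analysis) computes the path length $L$ and the standard bound $\max_s \|\dot H\|/\Delta$ for an illustrative single-qubit linear interpolation between Pauli matrices; the path, the grid size, and the use of numpy are assumptions made only for this example.

```python
import numpy as np

# Illustrative path (our choice): H(s) = (1-s) Z + s X for Pauli Z, X.
Z = np.diag([1.0, -1.0])
X = np.array([[0.0, 1.0], [1.0, 0.0]])
H = lambda s: (1 - s) * Z + s * X

# Path length L = int_0^1 ||d psi/ds|| ds, approximated by summing the
# distances between ground states on a fine grid, with the sign of each
# eigenvector fixed so that consecutive ground states overlap positively.
n = 4000
L, prev, gaps = 0.0, None, []
for s in np.linspace(0.0, 1.0, n + 1):
    w, v = np.linalg.eigh(H(s))
    psi = v[:, 0]                      # ground state of H(s)
    gaps.append(w[1] - w[0])           # spectral gap Delta(s)
    if prev is not None:
        if np.dot(prev, psi) < 0:
            psi = -psi                 # fix the eigh sign ambiguity
        L += np.linalg.norm(psi - prev)
    prev = psi

# Standard bound: L <= max_s ||Hdot|| / Delta(s); here Hdot = X - Z.
bound = np.linalg.norm(X - Z, 2) / min(gaps)
print(L, bound)   # for this path L = pi/4, below the bound of 1
```

For this path the exact length is $\pi/4$, strictly below the bound, consistent with the discussion above; the gap here is of order one, so the looseness of Eq.~\eqref{eq:Lbound0} only becomes severe when $\Delta \ll 1$.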
Our manuscript is organized as follows. In Sec.~\ref{sec:pathlength}, we present an improved bound on the path length where, ignoring other quantities, $L^*$ is of order $1/\sqrt \Delta$ if $\ket{\psi}$ is the ground state of $H$. We study this bound for general Hamiltonian paths and focus also on those Hamiltonians that are {\em frustration free}, due to their importance in condensed matter theory~\cite{perez_PEPS_2008,feiguin_renorm_2013}, optimization~\cite{somma_thermod_2007}, and quantum information~\cite{bravyi_stoquastic_2009,beaudrap_frustfree_2010,somma_gap_2013}. Then, in Sec.~\ref{sec:RMbound}, we use the improved bound to obtain an average cost for the RM of order $1/\Delta^2$, which is much smaller than $T_{\rm AQC}$ when $\Delta \ll 1$. In Sec.~\ref{sec:weakmeasurement} we improve the analysis of Ref.~\cite{boixo:qc2009a} about the cost scaling with the error and show that the logarithmic factor present in Eqs.~\eqref{eq:costEPT1} and \eqref{eq:costEPT2} for the RM is unnecessary. In Sec.~\ref{sec:applications} we apply our results to two important problems in quantum computation, namely the preparation of projected entangled pair states~\cite{verstraete_peps_2006} (i.e., generalized matrix product states or PEPS) and the quantum simulation of classical annealing processes~\cite{kirkpatrick_SA_83,somma_quantum_2008}. We use the results for frustration-free Hamiltonians and show that the RM has an average cost of order $1/\Delta^{3/2}$ for the preparation of PEPS, while the method based on fixed-point search has cost of order $1/\Delta$ (up to a logarithmic correction). We conclude in Sec.~\ref{sec:conclusions}. \section{The path length} \label{sec:pathlength} The path length of a continuous and differentiable family of unit states $\{ \ket{\psi(s)} \}$, $0 \le s \le 1$, is \begin{align} \nonumber L=\int_0^1 ds \| \sket{\dot \psi} \| \; . \end{align} The global phase of $\ket{\psi}$ is set so that $\langle \psi \sket{\dot \psi}=0$.
$\ket{\psi}$ is a non-degenerate eigenstate of $H$ and, without loss of generality, we assume that the eigenvalue is 0. Then, $\sket{\dot \psi}=-H^{-1}\dot H \ket{\psi} $, where $H^{-1}$ has only support in the subspace orthogonal to $\ket \psi$. An upper bound of $\max_s(\|\dot H\|/\Delta) $ on $L$ simply follows. Such a bound is commonly used when deriving adiabatic approximations. Remarkably, if the state path is two times differentiable and $\ket{\psi}$ is the ground state of $H$ (i.e., the eigenstate with lowest eigenvalue), a tighter bound on $L$ in terms of the gap can be obtained. According to the Cauchy-Schwarz inequality, \begin{align} \label{eq:Lbound1} L^2 \le \int_0^1 ds \; \| \sket{\dot \psi} \|^2 \; . \end{align} By differentiation of $H \ket \psi=0$, in Appendix~\ref{app:rateofchange} we obtain \begin{align} \label{eq:Lbound2} \| \sket{\dot \psi} \|^2 \le \frac { 1}{2 \Delta} \bra \psi \ddot H \sket{ \psi} \; , \end{align} see Eq.~\eqref{eq:dotpsibound}. Equations~\eqref{eq:Lbound1} and~\eqref{eq:Lbound2} yield \begin{align} \nonumber L^2 \le \int_0^1 ds \; \frac 1 {2 \Delta} \bra{\psi} \ddot H \ket {\psi} \; . \end{align} If the lowest eigenvalue is $E \ne 0$, then \begin{align} \label{eq:Lbound5} L \le L^* =\left( \int_0^1 ds \; \frac 1 {2 \Delta} \bra{\psi} \ddot H -\ddot E \ket {\psi} \right)^{1/2}\; . \end{align} Equation~\eqref{eq:Lbound5} is our main result; its applications to eigenpath traversal will be discussed below. \subsection{General interpolations} \label{sec:generalinterpolations} In general, because $\bra{\psi} \ddot H -\ddot E \ket {\psi} \ge 0$, the {\em rhs} of Eq.~\eqref{eq:Lbound5} can be bounded so that \begin{align} \nonumber L^* & \le \max_s \sqrt{ \frac{\| \ddot H \| -( \dot E(1) - \dot E(0) ) } {2\Delta}} \\ \nonumber & \le \max_s \sqrt{ \frac{\| \ddot H \| +2 \| \dot H \|} {2 \Delta}} \; . 
\end{align} For eigenpath traversal, quantities such as $\| \dot H \|$ and $\| \ddot H \|$ are usually bounded by a polynomial on the problem size, while the spectral gap $\Delta$ can be exponentially small for hard instances. \subsection{Linear interpolations} \label{sec:linearinterpolation} A commonly used Hamiltonian path is given by the linear interpolation of two Hamiltonians, that is, $H=(1-s)H_0 + sH_f$. Here, $H_0$ and $H_f$ are the initial and final Hamiltonians, respectively. In this case, \begin{align} \nonumber L^* & \le \max_s \sqrt{\frac{\dot E(1) - \dot E(0)}{2\Delta}} \\ \nonumber & \le \max_s \sqrt{ \frac{\| \dot H \|}{ \Delta }} \; . \end{align} \subsection{Frustration-free Hamiltonians} \label{sec:frustrationfree} A Hamiltonian $H=\sum_k \Pi_k$ is said to be frustration-free if any ground state $\ket{\psi}$ of $H$ is also a ground state of every $\Pi_k$. Typically, $\Pi_k$ corresponds to local operators and we can assume that $H \ket{\psi}= \Pi_k \ket \psi=0$ for all $k$, and $\Pi_k \ge 0$. For frustration-free Hamiltonians, the {\em local} bound on the rate of change of the state in Eq.~\eqref{eq:Lbound2} applies directly because $E=0$, and then \begin{align} \label{eq:pathlengthFF} L^* \le \max_s \sqrt{ \frac{\| \ddot H\|} {2 \Delta}} \; . \end{align} \section{Improved bounds for the randomization method} \label{sec:RMbound} The ``randomization method'' (RM) described in Ref.~\cite{boixo:qc2009a} uses phase randomization to traverse the eigenpath. The basic idea of the RM is simple: For a Hamiltonian path $\{H(s) \}$, we choose a discretization $0<s_1<s_2<\ldots<s_q=1$ that depends on the final-state preparation error. At the $j$ th step of the RM, we evolve with the constant Hamiltonian $H(s_j)$ for random time $t_j$, which is drawn according to a specific distribution that depends on $\Delta(s_j)$, the gap at that step, and the error. 
A common example is to sample $t_j$ from a normal distribution of zero mean and width (standard deviation) of order $1/\Delta(s_j)$. Evolution randomization will induce phase cancellation and a reduction of the {\em coherences} between $\ket{\psi(s_j)}$ and any other state orthogonal to it (see Secs.~\ref{sec:errors} and~\ref{sec:weakmeasurement}). In other words, evolution randomization simulates a measurement of $\ket{\psi(s_j)}$. Then, due to a version of the quantum Zeno effect, a sequence of measurements of $\ket{\psi(s_1)},\ket{\psi(s_2)},\ldots$ will allow the preparation of $\ket{\psi(s_q)}$, with arbitrarily high probability for a proper choice of $s_1,s_2,\ldots,s_q$. The basic steps of the RM are depicted in Fig.~\ref{fig:RM}; more details are in Secs.~\ref{sec:errors} and~\ref{sec:weakmeasurement}. \begin{figure} \caption{Basic steps of the RM and state representation. At the $j$ th step, the RM prepares the mixed state $\rho_j$ (represented by a red arrow) that has large probability of being in $\ket{\psi(s_j)}$.} \label{fig:RM} \end{figure} The average cost of the RM is the number of steps $q$ times the average (absolute) evolution time per randomization step; the latter is proportional to the inverse spectral gap~\cite{boixo:qc2009a}. For a uniform parametrization under which $\| \sket{\dot \psi} \|=L$ for all $s$, and for error $\varepsilon$, we obtain $q \propto L^2/\varepsilon$, resulting in an optimal average cost of order $L^2/(\varepsilon\Delta)$. An additional logarithmic factor, coming from Eq.~\eqref{eq:costEPT2}, was needed for the cost analysis of Ref.~\cite{boixo:qc2009a} if the random times are nonnegative (or nonpositive). Nevertheless, the given parametrization is not uniform in general. In this case, the RM is only guaranteed to succeed if $q \propto (L^*)^2/\varepsilon$, where $L^*$ is an upper bound on $L$ that can be determined from some known properties of $H$.
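The steps depicted in Fig.~\ref{fig:RM} can be sketched numerically. In the toy simulation below (ours; the two-level path, the number of steps, and the Gaussian time distribution are illustrative assumptions), we apply the randomized-evolution channel averaged in closed form: averaging $e^{-iH(s_j)t}$ over a zero-mean normal time with standard deviation $1/\Delta(s_j)$ damps each coherence in the eigenbasis of $H(s_j)$ by $e^{-(\delta E)^2\sigma^2/2}$, where $\delta E$ is the corresponding eigenvalue difference. This simulates the (imperfect) measurement of $\ket{\psi(s_j)}$.

```python
import numpy as np

# Toy path (our choice): H(s) = (1-s) Z + s X.
Z = np.diag([1.0, -1.0])
X = np.array([[0.0, 1.0], [1.0, 0.0]])
H = lambda s: (1 - s) * Z + s * X

def randomized_step(rho, s):
    # Average of exp(-iHt) rho exp(iHt) over t ~ N(0, sigma^2): in the
    # eigenbasis of H(s), coherences shrink by exp(-(E_k-E_l)^2 sigma^2 / 2).
    w, v = np.linalg.eigh(H(s))
    sigma = 1.0 / (w[1] - w[0])              # width of order 1/Delta(s)
    damp = np.exp(-0.5 * (np.subtract.outer(w, w) * sigma) ** 2)
    r = v.T @ rho @ v                        # rotate to the eigenbasis (real here)
    return v @ (r * damp) @ v.T

q = 50                                       # illustrative number of steps
_, v0 = np.linalg.eigh(H(0.0))
rho = np.outer(v0[:, 0], v0[:, 0])           # start in the initial ground state
for j in range(1, q + 1):
    rho = randomized_step(rho, j / q)

_, vf = np.linalg.eigh(H(1.0))
fidelity = float(vf[:, 0] @ rho @ vf[:, 0])  # overlap with the final ground state
print(fidelity)
```

With these illustrative choices the final fidelity is high, in agreement with the error analysis of the following sections (the residual infidelity comes from both the discretization and the imperfect, weak measurements).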
As discussed, a standard choice for $L^*$ is the one in Eq.~\eqref{eq:Lbound0}, which results in an overall cost of order $1/\Delta^3$ if we disregard other quantities: the number of points in the discretization is $q \propto \max_s (1/\Delta^2)$. The goal of this section is to show that the upper bound obtained in Sec.~\ref{sec:pathlength} can be used to obtain a better discretization for the RM than that of Ref.~\cite{boixo:qc2009a}, resulting in an overall, improved average cost of order $\max_s( 1/\Delta^2)$. We also show how to avoid the logarithmic correction in the cost by performing a more detailed analysis of errors due to randomization, when the random times are nonnegative (or nonpositive). \subsection{Parametrization errors} \label{sec:errors} In this section we analyze the errors due to the discretization, which assumes perfect measurements of the $\ket{\psi(s)}$ in the RM. Errors from imperfect measurements due to evolution randomization are analyzed in Sec.~\ref{sec:weakmeasurement}. We let $0<s_1<s_2<\ldots <s_q=1$ determine any discretization of the interval $[0,1]$, where $q$ will be obtained below. Assuming perfect measurements of the $\ket{\psi(s_j)}$ and using the union bound, the final error or quantum {\em infidelity} ($1-F$) in the preparation of $\ket{\psi(s_q)}$ can be bounded from above as \begin{align} \nonumber 1-F &= 1-\prod_{j=1}^q \cos^2(\alpha_j) \\ \nonumber & \le \sum_{j=1}^q \sin^2 (\alpha_j) \; , \end{align} where the `angles' $\alpha_j$ are determined from $\cos \alpha_j = | \! \bra{\psi(s_{j-1})} \psi(s_j) \rangle |$ - see Fig.~\ref{fig:RM}. Without loss of generality, we can assume $\sin \alpha_j \ge 0$. In Appendix~\ref{app:bound1}, we show \begin{align} \label{eq:anglebound} \sin^2 ( \alpha_j) \le (s_j-s_{j-1}) \int_{s_{j-1}}^{s_j} ds \; \| \sket{\dot \psi}\|^2 \; , \end{align} for a differentiable path. 
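Eq.~\eqref{eq:anglebound} can be checked numerically. The sketch below (ours; the two-level path, the intervals, and the grid sizes are arbitrary illustrative choices) estimates $\|\sket{\dot\psi}\|^2$ by finite differences and verifies the inequality on a few intervals.

```python
import numpy as np

# Illustrative two-level path H(s) = (1-s) Z + s X (our choice).
Z = np.diag([1.0, -1.0])
X = np.array([[0.0, 1.0], [1.0, 0.0]])

def ground(s):
    return np.linalg.eigh((1 - s) * Z + s * X)[1][:, 0]

def dpsi_sq(s, h=1e-5):
    # ||d psi/ds||^2 via a central finite difference, sign ambiguity fixed.
    a, b = ground(s - h), ground(s + h)
    if np.dot(a, b) < 0:
        b = -b
    d = (b - a) / (2 * h)
    return float(np.dot(d, d))

checks = []
for s0, s1 in [(0.0, 0.5), (0.5, 1.0), (0.0, 1.0)]:
    cos_a = abs(float(np.dot(ground(s0), ground(s1))))
    lhs = 1.0 - cos_a ** 2                   # sin^2(alpha_j)
    m = 400                                  # midpoint rule for the integral
    mid = s0 + (np.arange(m) + 0.5) * (s1 - s0) / m
    integral = sum(dpsi_sq(s) for s in mid) * (s1 - s0) / m
    rhs = (s1 - s0) * integral               # rhs of Eq. (anglebound)
    checks.append((lhs, rhs))
    print(lhs, rhs)
```

In each case the left-hand side stays below the right-hand side, as the bound requires; the margin is smallest near $s=1/2$, where the rate of change of the ground state is largest.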
If we choose a discretization so that $s_j=j \, \delta \mspace{-1mu} s$, the choice $\delta \mspace{-1mu} s\le \varepsilon/ \int_0^1 ds \| \partial_s \ket{\psi(s)} \|^2$ suffices to guarantee a final infidelity bounded by $\varepsilon$; that is \begin{align} \label{eq:fidelity1} \sum_{j=1}^q \sin^2 (\alpha_j) \le \varepsilon \; . \end{align} We can then use the main result of Sec.~\ref{sec:pathlength} and Eq.~\eqref{eq:anglebound} to show \begin{align} \nonumber \delta \mspace{-1mu} s = \frac { \varepsilon} {(L^*)^2} \; . \end{align} This bound assumes that $\ket{\psi}$ is the ground state of $H$. The number of points in the discretization is then \begin{align} \label{eq:pointsRM} q = \frac 1 {\delta \mspace{-1mu} s}= \frac{ \int_0^1 ds \bra \psi \ddot H - \ddot E \ket \psi/(2 \Delta)} {\varepsilon} \; , \end{align} which is of order $\max_s (1/\Delta)$ if we ignore other quantities. It follows that the overall, average cost of the RM is of order $\max_s (1/\Delta^2)$, implying a better gap dependence than the one obtained in Ref.~\cite{boixo:qc2009a}. In the following section we show how the measurements can be simulated and approximated by evolution randomization. \subsection{Imperfect measurements} \label{sec:weakmeasurement} A perfect, projective measurement of $\ket{\psi}$ is one that transforms all coherences between $\ket \psi$ and its orthogonal complement to 0. That is, if $\rho$ denotes the density matrix after the perfect measurement, then $\bra \psi \rho \sket{\psi^\perp}=0$ for all states $\sket{\psi^\perp}$ satisfying $\bra \psi \psi^\perp \rangle=0$. In the RM of Ref.~\cite{boixo:qc2009a}, we showed that a perfect measurement can only be simulated if the random evolution time $t$ is drawn according to a distribution in which $t \in (-\infty,\infty)$. If $t$ can only be nonnegative (or nonpositive), the coherences are only reduced by a multiplicative factor $\varepsilon'>0$; that is, the simulated measurement is imperfect or {\em weak}. 
To achieve overall error of order $\varepsilon$ in the preparation of the final eigenstate due to imperfect measurements, in Ref.~\cite{boixo:qc2009a} we chose $\varepsilon'=\varepsilon/q$, which follows easily from a union-like bound for a sequence of quantum operations. This introduces an additional cost to the RM given by a multiplicative factor of order $\log (q/\varepsilon)$ [Eq.~\eqref{eq:costEPT1}], which can be large if $q \gg 1$. Nevertheless, we now present an error analysis of the RM improved over that of Ref.~\cite{boixo:qc2009a}, and show that if the imperfect measurements are such that $\varepsilon'$ is a constant independent of $\varepsilon$, an overall error of order $\varepsilon$ can still be achieved. This results in an improved cost for the RM: the $\log (q/\varepsilon)$ overhead is unnecessary. To demonstrate the improved scaling, it is convenient to define $\rho_j$ as the state, or density matrix, at the $j$ th step of the RM ($j=0,1,\ldots,q$); that is, the state after the randomized evolution with $H(s_j)$. Without loss of generality, we write \begin{align} \nonumber \rho_j = \mathrm{Pr}(j) \ket{\psi(s_j)} \! \bra{\psi(s_j)} + (1-\mathrm{Pr}(j))\rho_j^\perp + \\ \nonumber + \ket{\xi_j} \! \bra{\psi(s_j)} +\ket{\psi(s_j)} \! \bra{\xi_j} \; , \end{align} where $\mathrm{Pr}(j) = \bra{\psi(s_j)} \rho_j \ket{\psi(s_j)} $ is the probability of $\ket{\psi(s_j)}$ in $\rho_j$ (i.e., the fidelity). $\rho_j^\perp$ is a density matrix with support orthogonal to $\ket{\psi(s_j)}$ so that $\rho_j^\perp \ket{\psi(s_j)}=0$. The (unnormalized) state $\sket{\xi_j}$ is also orthogonal to $\ket{\psi(s_j)}$ and denotes the {\em coherences} between $\ket{\psi(s_j)}$ and its orthogonal complement. The norm of $\sket{\xi_j}$ denotes a coherence factor: \begin{align} \nonumber c_j = \| \ket{\xi_j} \| \; . \end{align} The main goal of the RM is to simulate measurements by keeping $c_j$ sufficiently small via phase or evolution randomization.
At the $j+1$ th step, we evolve with $H(s_{j+1})$ for a random time drawn from some distribution $f(t)$. Then, \begin{align} \rho_{j+1}=\int dt \; f(t) e^{-i H(s_{j+1})t} \rho_j e^{i H(s_{j+1})t} \; . \end{align} Since evolving with $H(s_{j+1})$ leaves the eigenstate $\ket{\psi(s_{j+1})}$ invariant (up to a global phase), we have \begin{align} \nonumber \mathrm{Pr}(j+1) &= \bra{\psi(s_{j+1})} \rho_{j+1} \ket{\psi(s_{j+1})} \\ \nonumber &=\bra{\psi(s_{j+1})} \rho_j \ket{\psi(s_{j+1})} \; , \end{align} with $\ket{\psi(s_{j+1})}=\cos \alpha_{j+1} \ket{\psi(s_{j})} + \sin \alpha_{j+1} \ket{\psi^\perp(s_{j})}$. Then, \begin{align} \label{eq:probstep} \mathrm{Pr}(j+1) \ge \cos^2 \alpha_{j+1} \mathrm{Pr}(j) - 2 \sin \alpha_{j+1} c_j \; . \end{align} Here, we assumed the worst-case scenario for which $\sbra{\xi_j} \psi(s_{j+1}) \rangle = -c_j \sin \alpha_{j+1}$ and used $\cos \alpha_{j+1} \le 1$. In Appendix~\ref{sec:appbound2}, Eq.~\eqref{eq:coherenceboundapp}, we show that if Eq.~\eqref{eq:fidelity1} is satisfied, \begin{align} \label{eq:coherencebound} c_j \le \frac 1 {1-\varepsilon} (\varepsilon' \sin \alpha_j + \varepsilon'^2 \sin \alpha_{j-1} + \ldots + \varepsilon'^{j} \sin \alpha_{1}) \; . \end{align} The factor $\varepsilon'<1$ denotes the reduction in coherence due to evolution randomization per step. That is, a random evolution under $H(s_{j+1})$ applied to $\rho_j$ transforms and reduces the coherences $\sket{\psi(s_{j+1})} \sbra{\psi(s^\perp_{j+1})}$ to \begin{align} \nonumber & \int dt \; f(t) e^{-i H(s_{j+1}) t} \ket{\psi(s_{j+1})} \bra{\psi(s^\perp_{j+1})}e^{i H(s_{j+1}) t} \; , \end{align} where $\sket{\psi(s_{j+1})^\perp}$ is a normalized state orthogonal to $\sket{\psi(s_{j+1})}$. Then, we can assume \begin{align} \label{eq:epsilon'} \varepsilon' = \left \| \int dt \; f(t) e^{i \Delta t} \right \| \; , \end{align} where $\Delta \le \Delta(s_{j+1})$. The RM starts with $\ket{\psi(s_0)}$, so initially $\mathrm{Pr}(0)=1$ and $c_0=0$.
By iteration of Eq.~\eqref{eq:probstep} we obtain \begin{align} \label{eq:fidelityRM} \mathrm{Pr}(q) \ge \prod_{j=1}^q \cos^2( \alpha_j )- 2 \sum_{j=1}^q \sin \alpha_j c_{j-1} \; . \end{align} The first term on the {\em rhs} of Eq.~\eqref{eq:fidelityRM} corresponds to the case where all projective measurements are implemented perfectly, i.e., when $c_j=0$ for all $j$. A lower bound to such term is given by $1-\sum_{j=1}^q \sin^2 (\alpha_j) \ge 1- \varepsilon$, as described in Sec.~\ref{sec:errors}. Using Eq.~\eqref{eq:coherencebound}, the second term on the {\em rhs} of Eq.~\eqref{eq:fidelityRM} can be upper bounded by \begin{align} \label{eq:geometricdependence} \frac 2 {1-\varepsilon} \sum_{j=1}^q \sin \alpha_j (\varepsilon' \sin \alpha_{j-1} + \varepsilon'^2 \sin \alpha_{j-2}+\ldots) \; , \end{align} and using the Cauchy-Schwarz inequality, \begin{align} \nonumber \sum_{j=1}^q \sin \alpha_j \sin \alpha_{j-k}\le \sum_{j=1}^q \sin^2 (\alpha_j )\le \varepsilon \; . \end{align} Then, the fidelity of the RM or probability of success in the preparation of $\ket{\psi(s_q)}$ is \begin{align} \label{eq:fidelity2} \mathrm{Pr}(q) \ge 1 -\varepsilon - \frac {2 \varepsilon \varepsilon'} {(1-\varepsilon)(1-\varepsilon')} \; , \end{align} which follows from summing the geometric series in $\varepsilon'$ in Eq.~\eqref{eq:geometricdependence}. \subsection{Total cost} For constant error or infidelity of order $\varepsilon <1$, it suffices to choose a constant $\varepsilon'$ in Eq.~\eqref{eq:fidelity2}. For example, a common choice for the time distribution is a normal distribution $f(t)$ with standard deviation of order $1/\Delta$. Since the Fourier transform of $f(t)$ is a normal distribution with standard deviation of order $\Delta$, Eq.~\eqref{eq:epsilon'} implies a constant upper bound for $\varepsilon'$. Then, the average cost per step of the RM is also of order $ 1/\Delta$. 
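For the normal distribution just mentioned, the coherence-reduction factor of Eq.~\eqref{eq:epsilon'} is the magnitude of the Fourier transform of $f$, $\varepsilon' = e^{-(\sigma\Delta)^2/2}$, a constant whenever $\sigma$ is of order $1/\Delta$. A quick Monte Carlo check (ours; the gap value and sample size are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)
Delta = 3.0                       # illustrative gap (arbitrary value)
sigma = 1.0 / Delta               # time-distribution width of order 1/Delta
t = rng.normal(0.0, sigma, 400_000)

# Eq. (epsilon'): magnitude of the average phase, estimated by sampling,
# versus the exact Fourier transform of the Gaussian.
eps_mc = abs(np.mean(np.exp(1j * Delta * t)))
eps_exact = np.exp(-0.5 * (sigma * Delta) ** 2)
print(eps_mc, eps_exact)          # both close to e^{-1/2} ~ 0.607
```

The estimate matches $e^{-1/2}$ up to sampling error, independently of the value of $\Delta$, which is the property used in the cost analysis above.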
Multiplying this by $q$, the total number of steps in Eq.~\eqref{eq:pointsRM}, provides an upper bound to the total average cost of the RM given by \begin{align} \label{eq:totalcostRM} \frac{(L^*)^2}{\varepsilon \Delta} \le \kappa' \max_s \frac{\| \ddot H \| + 2 \| \dot H \|} {2 \varepsilon \Delta^2(s)} \; , \end{align} for general interpolations (Sec.~\ref{sec:generalinterpolations}). $\kappa' \approx \sqrt{2/\pi}$ is also a constant~\cite{boixo:qc2009a}. Such an upper bound can be further improved for different Hamiltonians or interpolations as described in Secs. \ref{sec:linearinterpolation} and~\ref{sec:frustrationfree}. Our result in Eq.~\eqref{eq:totalcostRM} significantly improves upon the result in Ref.~\cite{boixo:qc2009a}, for which the average cost in terms of the gap only was of order $\max_s [\log(1/\Delta)/\Delta^3]$. \section{Applications} \label{sec:applications} Improved bounds on the cost of methods for eigenpath traversal may result in speedups for problems in physics, optimization, and quantum information. In this section, we apply our results to two important examples where polynomial quantum speedups are obtained. \subsection{Preparation of projected entangled pair states (PEPS)} \label{sec:PEPS} PEPS, a generalization of matrix product states to space dimensions higher than one~\cite{RO_MPS_1997,Vidal_MPS_2003}, were conjectured to approximate the ground states of physical systems with local interactions~\cite{verstraete_peps_2006}. PEPS also arise in combinatorial optimization and quantum information problems, and their preparation is paramount to solve such problems. For this reason, methods for the preparation of PEPS on a quantum computer were recently developed~\cite{schwarz_peps_2011,somma_gap_2013}. An important property of PEPS is that they can be realized as the ground states of frustration-free Hamiltonians. Then, we can analyze the cost of the RM for the preparation of PEPS.
That is, if $H(s)=\sum_{k=1}^L \Pi_k(s)$ denotes a frustration-free Hamiltonian path, using the results of Sec.~\ref{sec:frustrationfree} we obtain a cost for the RM upper bounded by \begin{align} \nonumber \max_s \frac{\| \ddot H \|} {2 \varepsilon \Delta^2} \; . \end{align} Such a cost can be further improved as follows. A remarkable property of frustration-free Hamiltonians is that their spectral gap can be amplified by constructing the related Hamiltonian \begin{align} \nonumber H' = \sqrt{\| \Pi \|} \sum_{k=1}^L \sqrt{ \Pi_k} \otimes [ \ket k \bra 0 + \ket 0 \bra k ] \; , \end{align} where $\ket k$, $k=0,1,\ldots, L$, form a basis of states of an ancillary system. $H'$ has $\ket \psi \otimes \ket 0$ as eigenstate of eigenvalue 0, and the spectral gap of $H'$ is $\Delta' \ge \sqrt{ \Delta \| \Pi \|}$, where $\| \Pi \| = \max_k \| \Pi_k \|$. These properties and the full spectrum of $H'$ were analyzed in Ref.~\cite{somma_gap_2013}. Then, if we have access to evolutions under the $\sqrt{\Pi_k(s)}$, the randomized evolution in the RM can be implemented using $H'$ instead, having an average cost of order $1/\Delta' \propto 1/\sqrt {\Delta \|\Pi \|}$ per step. This implies an overall, average cost for the RM upper bounded by \begin{align} \kappa' \max_s \frac{\| \ddot H \|} {2 \varepsilon \|\Pi\|^{1/2}}\times \frac 1 {\Delta^{3/2}}\; . \end{align} Similarly, the cost of other methods for eigenpath traversal~\cite{boixo:qc2010a} for this problem will have an improved cost bounded by \begin{align} \label{eq:PEPSbound} \kappa' \max_s \frac{\sqrt{\| \ddot H \|/2} \times \log(\sqrt{\| \ddot H \|/(2 \Delta)}/\varepsilon)} {\varepsilon \|\Pi\|^{1/2} } \times \frac 1 {\Delta} \; . \end{align} Equation~\eqref{eq:PEPSbound} follows from Eq.~\eqref{eq:costEPT1} for $c=1$, replacing $\Delta$ by $\Delta'$ and $L$ by $L^*$ as in Eq.~\eqref{eq:pathlengthFF}. The cost is almost linear in $1/\Delta$.
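A minimal numerical check of the gap-amplification step (ours; the two-qubit frustration-free Hamiltonian below is an illustrative choice): with $H=\Pi_1+\Pi_2$, where $\Pi_k$ projects onto $|1\rangle$ of qubit $k$, the ground state is $|00\rangle$, $\Delta=1$, and $\|\Pi\|=1$, so the nonzero eigenvalues of $H'$ should lie at distance at least $\sqrt{\Delta\|\Pi\|}$ from zero.

```python
import numpy as np

# Toy frustration-free Hamiltonian H = Pi_1 + Pi_2 on two qubits.
I2 = np.eye(2)
P1 = np.diag([0.0, 1.0])                 # |1><1| on a single qubit
Pi = [np.kron(P1, I2), np.kron(I2, P1)]  # the two (commuting) projectors
H = sum(Pi)
wH = np.linalg.eigvalsh(H)
Delta = wH[1] - wH[0]                    # spectral gap of H (= 1 here)

# H' = sqrt(||Pi||) sum_k sqrt(Pi_k) (x) (|k><0| + |0><k|), ancilla of
# dimension 3 (k = 0, 1, 2); sqrt(Pi_k) = Pi_k since these are projectors.
K = len(Pi)
Hp = np.zeros((4 * (K + 1), 4 * (K + 1)))
for k, P in enumerate(Pi, start=1):
    A = np.zeros((K + 1, K + 1))
    A[k, 0] = A[0, k] = 1.0              # |k><0| + |0><k| on the ancilla
    Hp += np.kron(P, A)

w = np.linalg.eigvalsh(Hp)
gap_prime = min(abs(x) for x in w if abs(x) > 1e-8)
print(Delta, gap_prime)                  # amplified gap >= sqrt(Delta ||Pi||)
```

Here the nonzero eigenvalues of $H'$ come in pairs $\pm\sqrt{E}$ for the nonzero eigenvalues $E$ of $H$, so the amplified gap equals $\sqrt{\Delta\|\Pi\|}=1$ exactly in this example.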
We note that for many frustration-free Hamiltonians, the terms $\Pi_k$ are projectors and $\sqrt{\Pi_k}=\Pi_k$. Otherwise the $\Pi_k$ may be expressed as a linear combination of projectors, so that the requirement of having access to evolutions with the $\sqrt{\Pi_k}$ is not strong. \subsection{Quantum Simulated Annealing} \label{sec:QSA} Simulated annealing is a powerful heuristic for solving combinatorial optimization problems. When implemented via Markov-chain Monte Carlo techniques, it generates a stochastic sequence of configurations that converges to the Gibbs distribution determined by the inverse {\em temperature} $\beta_q$ and an objective function $E$. For sufficiently large $\beta_q$, the final sequences are sampled from a distribution mostly weighted on those configurations $\sigma$ that minimize $E$. The process is specified by a particular annealing schedule, which consists of a finite increasing sequence of inverse temperatures $\beta_0=0<\beta_1<\ldots<\beta_q$. The cost of the method is the number of Markov steps required to sample from the desired distribution, i.e., $q$. For constant error, such a number can be upper bounded by $\propto \max_\beta 1/\Delta(\beta)$, where $\Delta(\beta)$ denotes the spectral gap of the stochastic matrix at inverse temperature $\beta$. In Ref.~\cite{somma_quantum_2008} we gave a quantum algorithm that allows us to sample from the same distribution as that approached by the simulated annealing method. The quantum algorithm uses the RM to traverse a path of states $\ket{\psi(\beta)}$. Here, $\ket{\psi(\beta)}$ is a {\em coherent} version of the corresponding Gibbs state, having amplitudes that coincide with the square root of the probabilities. That is, \begin{align} \ket{\psi(\beta)}=\frac 1 {\sqrt{\cal Z}} \sum_\sigma e^{-\beta E[\sigma]/2} \ket \sigma \; , \end{align} where the sum is over all configurations and ${\cal Z} = \sum_\sigma \exp(-\beta E[\sigma])$ is the partition function.
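The coherent Gibbs state, and the identity for its rate of change used in the cost analysis that follows, $\| \ket{\partial_\beta \psi(\beta)} \|^2 = -\partial_\beta \langle E \rangle /4$, can be checked numerically. The sketch below (ours; the three-bit objective $E[\sigma]=$ number of ones is an illustrative choice) verifies the identity by finite differences.

```python
import numpy as np

# Illustrative objective: E[sigma] = number of ones in a 3-bit string.
E = np.array([bin(s).count("1") for s in range(8)], dtype=float)

def psi(beta):
    # Coherent Gibbs state: amplitudes e^{-beta E/2} / sqrt(Z).
    amp = np.exp(-beta * E / 2.0)
    return amp / np.linalg.norm(amp)

def avgE(beta):
    # Thermodynamic expectation <E> under the Gibbs distribution.
    p = psi(beta) ** 2
    return float(np.dot(p, E))

beta, h = 0.7, 1e-5
dpsi = (psi(beta + h) - psi(beta - h)) / (2 * h)   # finite-difference d psi/d beta
lhs = float(np.dot(dpsi, dpsi))                    # ||d psi/d beta||^2
rhs = -(avgE(beta + h) - avgE(beta - h)) / (2 * h) / 4.0
print(lhs, rhs)
```

Both sides equal $\mathrm{Var}(E)/4$ at the chosen $\beta$, so the rate of change of the coherent Gibbs state is controlled by the energy fluctuations, which is what makes the improved bound on $q^*$ below possible.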
In more detail, the cost of the quantum method presented in Ref.~\cite{somma_quantum_2008} is of order \begin{align} \label{eq:QSAcost} \max_{\beta} q \log q/\sqrt{\Delta(\beta)} \end{align} with $q= \beta_q^2 E_M^2 /(4\varepsilon)$, where $E_M$ is the maximum of $|E|$. $\varepsilon$ denotes the overall error probability of finding the configuration that minimizes $E$ and $q$ is the number of points in the discretization or steps in the RM. As discussed, $q$ is related to the path length so that $q \ge L^2/\varepsilon$, with $L=\int_0^{\beta_q} d\beta \| \ket{\partial_\beta \psi(\beta)} \|$ in this case. In terms of the spectral gap $\Delta(\beta)$, the quantum algorithm of Ref.~\cite{somma_quantum_2008} provides a square root improvement over the classical method, which is important for those hard instances where $\Delta(\beta)$ is small. We can then use the results in Sec.~\ref{sec:RMbound} to search for a better bound on the path length and, ultimately, a reduction in the cost of the RM for this problem. That is, instead of using $q$ as above, we replace it by $q^*$, with $L^2 \le q^*$ and \begin{align} \nonumber q^* =\frac {\beta_q} \varepsilon \int_0^{\beta_q} d\beta \| \ket{\partial_\beta \psi(\beta)} \|^2 \; ; \end{align} see Eq.~\eqref{eq:pointsRM}. For such $\varepsilon$, $\beta_q$ is of order $\log(d/\varepsilon)/\gamma$, where $d$ is the dimension of the configuration space and $\gamma$ is the difference between the two smallest values in the range of $E$ (i.e., the spectral gap of $E$). In Appendix~\ref{app:QSA}, Eq.~\eqref{eq:QSAratechange}, we show \begin{align} \nonumber \| \ket{\partial_\beta \psi(\beta)} \|^2 = -\partial_\beta \langle E \rangle /4\; , \end{align} where $\langle E \rangle$ is the expected (thermodynamic) value of $E$. Then, we obtain \begin{align} \nonumber q^* = \frac {\beta_q ( \langle E \rangle_{0} - \langle E \rangle_{\beta_q})} {4 \varepsilon} \; .
\end{align} Without loss of generality, we assume $\langle E \rangle_{0}=0$, as we can always shift the lowest value of $E$ to satisfy the assumption. In fact, the assumption is readily satisfied for many problems of interest, such as those where $E$ describes a so-called Ising model. If $\beta_q \gg 1$, then $\langle E \rangle_{\beta_q} \approx - E_M$ and \begin{align} \nonumber q^* \le \frac {\beta_q E _M} {4 \varepsilon} \; . \end{align} Our improved average cost of the RM for this problem is then \begin{align} \label{eq:QSAcost2} T_{\rm QSA}= \kappa' \max_\beta \frac{\beta_q E _M}{4 \varepsilon \sqrt{\Delta(\beta)}} \; \end{align} ($\kappa'$ is a small constant). Equation~\eqref{eq:QSAcost2} has to be contrasted with the worse cost given by Eq.~\eqref{eq:QSAcost}, which in this case is of order \begin{align} \nonumber \max_\beta \frac{\beta_q^2 E _M^2 \log(\beta_q^2 E_M^2/\varepsilon)}{ \varepsilon \sqrt{\Delta(\beta)}} \; \end{align} and much larger than $T_{\rm QSA}$ in the large $E_M$ and $\beta_q$ limit. \section{Conclusions} \label{sec:conclusions} We presented a significantly improved upper bound on $L$, the length of the path traversed by the continuously-related ground states of a family of Hamiltonians. Such a bound is approximately the square root of standard and previously used bounds for $L$ in the literature. It results in an improved average cost of a method for adiabatic state transformations based on evolution randomization, which is a simple extension of AQC. Specifically, we prove an average cost of order $1/\Delta^2$ for the randomization method, whereas AQC has a proven cost of order $1/\Delta^3$ (i.e., the cost of AQC is upper bounded by $1/\Delta^3$, disregarding other quantities). Here, $\Delta$ is a bound on the spectral gap of the Hamiltonians. When the Hamiltonians satisfy a certain frustration-free property, the average cost of the randomization method is further improved to order $1/\Delta^{3/2}$. 
The gap $\Delta$ is very small for hard instances and thus the randomization method is a promising alternative to AQC in these cases, as it has a proven lower cost. We also improved the cost of the randomization method when the simulated measurements are imperfect. We showed that if evolution randomization induces a weak measurement, where the coherences are reduced by a constant, multiplicative factor (e.g., by 1/3), then the eigenstate of the final Hamiltonian is still prepared at small, bounded error probability. Previous analysis for the randomization method required a reduction in the coherences that depended on the path length. The randomization method outperforms AQC in certain instances (e.g., Rabi oscillations). Nevertheless, it remains open to show how generic the advantages of the randomization method over AQC are. To understand this problem better, for example, one needs to devise other instances where AQC has a cost dominated by $1/\Delta^3$, so that the cost of AQC is strictly higher than that of the randomization method. Perhaps our most important contribution is a method for eigenpath traversal that has a {\em proven} lower cost than that provided by quantum adiabatic approximations~\cite{jansen_bounds_2007,jordan_thesis_2008,regev_quantum_2004,lidar_adiabatic_2009}, since rigorously improving the latter cost in terms of the gap, even for simple cases (e.g., linear interpolations), does not seem feasible. Finally, the improved bound on $L$ can also be used to improve the cost of other methods for eigenpath traversal such as that in Ref. \cite{boixo:qc2010a}. For the most efficient known method for eigenpath traversal in the literature, our bound on $L$ implies a cost of order $1/\Delta^{3/2}$ for general Hamiltonians and order $1/\Delta$ for Hamiltonians that satisfy the frustration-free property.
GX and RS acknowledge support from AFOSR through grant number FA9550-12-1-0057. RS thanks Sandia National Laboratories, where the initial ideas of this work were developed. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000. We thank Sergio Boixo, Andrew Daley and Andrew Landahl for discussions. \begin{appendix} \section{A bound on $\| \sket{\dot \psi}\|$} \label{app:rateofchange} From $H \ket \psi=0$, where $\ket \psi$ is the ground state, we obtain \begin{align} \nonumber \sket{\dot \psi} = -H^{-1} \dot H \ket \psi \; . \end{align} $H^{-1}$ denotes the operator that is inverse to $H$ in the subspace orthogonal to $\ket{\psi}$. We assume the existence of $\dot H$ with $\| \dot H \| <\infty$. Then, \begin{align} \nonumber \| \sket{\dot \psi} \| ^2 &= \bra \psi \dot H H^{-2} \dot H \ket \psi \\ \nonumber & \le \frac 1 \Delta \bra \psi \dot H H^{-1} \dot H \ket \psi \\ \label{eq:dotpsi} & = \frac {-1} \Delta \bra \psi \dot H \sket{\dot \psi} \end{align} where we used Cauchy-Schwarz and the assumption that $H \ge 0$. In addition, \begin{align} \nonumber \dot H \sket{\dot \psi} = - \frac 1 2 [\ddot H \ket \psi + H \sket{\ddot \psi}] \; , \end{align} and using Eq.~\eqref{eq:dotpsi} we obtain the desired bound as \begin{align} \label{eq:dotpsibound} \| \sket{\dot \psi} \|^2 \le \frac {1}{2 \Delta} \bra \psi \ddot H \sket{ \psi} \; . \end{align} This assumes the existence of $\ddot H$ with $\| \ddot H \| <\infty$. \section{A bound on $\sin \alpha_j$} \label{app:bound1} As pointed out, the angles $\alpha_j$ in Sec.~\ref{sec:RMbound} (Fig.~\ref{fig:RM}) can be defined via $\cos \alpha_j = \bra{\psi(s_{j-1})} \psi(s_j) \rangle \in \mathbb {R}$. 
It follows that \begin{align} \nonumber \sin \alpha_j & = \| \ket{\psi(s_{j-1})} - \cos \alpha_j \ket{\psi(s_{j})} \| \\ \label{eq:app1} & \le \| \ket{\psi(s_{j-1})} - e^{i \phi} \ket{\psi(s_{j})} \| \; . \end{align} The phase $\phi \in \mathbb{R}$ can be arbitrary. Next, we split the interval $[s_{j-1},s_j]$ into $r$ segments of size $(s_j-s_{j-1})/r$ and define $s_j^n=s_{j-1} +(s_j-s_{j-1}) n/r $, with $n=0,1,\ldots , r$. The corresponding eigenstates are now $\ket{\psi(s_j^n)}$ and, with no loss of generality, we assume $\cos \beta_n = \bra{\psi(s_{j}^{n-1})} \psi(s_j^{n}) \rangle \in \mathbb{R}$. In particular, $\ket{\psi(s_j^0)}=\ket{\psi(s_{j-1})}$ and $\ket{\psi(s_j^r)}= e^{i\phi}\ket{\psi(s_{j})}$. From Eq.~\eqref{eq:app1} we obtain \begin{align} \nonumber \sin \alpha_j & \le \| \sum_{n=0}^{r-1} \left( \ket{\psi(s_{j}^n)} - \ket{\psi(s_{j}^{n+1})} \right) \| \\ \nonumber & \le \sum_{n=0}^{r-1} \left \| \ket{\psi(s_{j}^n)} - \ket{\psi(s_{j}^{n+1})} \right \| \; , \end{align} where we used the triangle inequality. Also, \begin{align} \nonumber \sin \alpha_j & \le \lim_{r \rightarrow \infty} \sum_{n=0}^{r-1} \frac{\left \| \ket{\psi(s_{j}^n)} - \ket{\psi(s_{j}^{n+1})} \right \|}{s_j^{n+1}-s_j^n} \frac 1 r (s_j-s_{j-1}) \\ \label{eq:app3} & \le \int_{s_{j-1}}^{s_j} ds \; \| \sket{\dot \psi } \| \; , \end{align} where the phase of $\ket{\psi}$ must be chosen so that $\langle \dot \psi \ket{\psi} \in \mathbb R$, and thus $\langle \dot\psi \ket{\psi }=0$ from the normalization condition. The inequality in Eq.~\eqref{eq:app3} requires the existence of $\sket{\dot \psi}$, i.e., a differentiable path. Since \begin{align} \nonumber \int_{s_{j-1}}^{s_j} ds \; \| \ket{\partial_s \psi(s)} \|^2 - \left( \int_{s_{j-1}}^{s_j} ds \; \| \ket{\partial_s \psi(s)} \| \right)^2 \ge 0 \; \end{align} from Cauchy--Schwarz and $s_j-s_{j-1} \le 1$, we obtain the desired bound as \begin{align} \nonumber \sin \alpha_j \le \left( \int_{s_{j-1}} ^{s_j } ds \; \| \sket{\dot \psi } \|^2 \right)^{1/2}\; . 
\end{align} \section{A bound on the coherences} \label{sec:appbound2} As explained in Sec.~\ref{sec:weakmeasurement}, we let $\rho_j$ be the density matrix for the state after the randomized evolution with $H(s_j)$, i.e., the state output at the $j$th step of the randomization method: \begin{align} \nonumber \rho_j = \mathrm{Pr}(j) \ket{\psi(s_j)} \! \bra{\psi(s_j)} + (1-\mathrm{Pr}(j))\rho_j^\perp + \\ \nonumber + \ket{\xi_j} \! \bra{\psi(s_j)} +\ket{\psi(s_j)} \! \bra{\xi_j} \; . \end{align} The coherence factor is defined as \begin{align} \nonumber c_j = \| \ket{ \xi_j} \| =\| P_j^\perp \rho_j \ket{\psi(s_j)} \| \; , \end{align} where $P_j^\perp = \one - \ket{\psi(s_j)} \bra{\psi(s_j)}$ is the projector onto the subspace orthogonal to $\ket{\psi(s_j)}$. The coherence factor at the $(j+1)$th step is then \begin{align} \nonumber c_{j+1} & = \| \ket{\xi_{j+1}} \| \\ \nonumber &=\| P_{j+1}^\perp \rho_{j+1} \ket{\psi(s_{j+1})} \| \\ \nonumber & = \| P_{j+1}^\perp \int dt \; f(t) e^{-iH(s_{j+1})t} \rho_{j} e^{iH(s_{j+1})t} \ket{\psi(s_{j+1})} \| \; , \end{align} where $f(t)$ is the distribution for the random time at that step. Since $e^{iH(s_{j+1})t}$ leaves $\ket{\psi(s_{j+1})}$ invariant (up to a global phase) and \begin{align} \nonumber \left \| \int dt \; f(t) e^{-iH(s_{j+1})t} \sket{\bar \psi^\perp(s_{j+1})} \right \| \le \varepsilon' \end{align} for any unit state $\sket{\bar \psi^\perp(s_{j+1})}$ orthogonal to $\ket{ \psi(s_{j+1})}$, we arrive at \begin{align} \label{eq:app4} c_{j+1} \le \varepsilon' \| P_{j+1}^\perp \rho_j \ket{\psi(s_{j+1})} \| \; . \end{align} The factor $\varepsilon'<1$ was defined in Eq.~\eqref{eq:epsilon'}, and is the Fourier transform of $f(t)$ at $\Delta \le \Delta(s_{j+1})$. We now bound the {\em rhs} of Eq.~\eqref{eq:app4}. 
Without loss of generality, we write $\ket{\psi(s_{j+1})} = \cos \alpha_{j+1} \ket{\psi(s_{j})} + \sin \alpha_{j+1} \ket{\psi^\perp(s_{j})}$, and obtain \begin{align} \nonumber c_{j+1} \le \varepsilon' \left[ \cos \alpha_{j+1} \| P_{j+1}^\perp \left( \mathrm{Pr}(j) \ket{\psi(s_{j})} + \ket{\xi_j} \right) \| \right. + \\ \nonumber \left. + \sin \alpha_{j+1} \| P_{j+1}^\perp \rho_j \ket{\psi^\perp(s_{j})} \| \right] \; , \end{align} where we used the triangle inequality and $\rho_j \ket{\psi(s_j)} = \mathrm{Pr}(j) \ket{\psi(s_j)} + \ket{\xi_j}$. By definition, $\sin \alpha_{j+1} = \| P_{j+1}^\perp \ket{\psi(s_{j})}\| $. Also, \begin{align} \nonumber \rho_j & \ket{\psi^\perp(s_{j})} = \\ \nonumber & = (1-\mathrm{Pr}(j)) \rho_j^\perp \ket{\psi^\perp(s_{j})} + \ket{\psi(s_j)} \! \bra{\xi_j} \psi^\perp(s_{j})\rangle \; . \end{align} By using Cauchy-Schwarz and the triangle inequalities, we obtain \begin{align} \nonumber c_{j+1} \le \varepsilon' \left[ \cos \alpha_{j+1} \mathrm{Pr}(j) \sin \alpha_{j+1} + \cos \alpha_{j+1} c_j + \right. \\ \nonumber \left. + \sin \alpha_{j+1} (1-\mathrm{Pr}(j)) + \sin^2( \alpha_{j+1}) c_j \right] \; , \end{align} and thus \begin{align} \label{eq:iterationstep} c_{j+1} \le \varepsilon' \left[ \sin \alpha_{j+1} + (1+\sin^2 (\alpha_{j+1})) c_j \right] \;. \end{align} Because the initial state (step 0) is exactly $\ket{\psi(s_0)}$, we have $c_0=0$ and, by iteration of Eq.~\eqref{eq:iterationstep}, \begin{align} \nonumber c_{j+1} \le \varepsilon' \sin \alpha_{j+1} + (\varepsilon')^2 (1+ \sin^2 (\alpha_{j+1})) \sin \alpha_{j} + \ldots \\ \nonumber \ldots + (\varepsilon')^{j+1} (1+ \sin^2( \alpha_{j+1})) \ldots (1+ \sin^2 (\alpha_2))\sin \alpha_{1} \; . \end{align} In order to relate $\varepsilon'$ with the error coming from the discretization (perfect measurements), we recall the condition \begin{align} \nonumber \sum_{j=1}^q \sin^2 (\alpha_j )\le \varepsilon \; \end{align} of Eq.~\eqref{eq:fidelity1}. 
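As a quick numerical sanity check of the step that follows (an illustrative script, not part of the derivation), one can draw angles satisfying this condition and confirm that the products of $(1+\sin^2\alpha_j)$ factors stay below $1/(1-\varepsilon)$:

```python
import numpy as np

# Check prod_j (1 + sin^2 alpha_j) <= 1/(1 - eps) whenever
# sum_j sin^2 alpha_j <= eps < 1, for randomly drawn angles.
rng = np.random.default_rng(0)
eps = 0.3
s2 = rng.random(50)
s2 *= eps / s2.sum()            # sin^2 alpha_j values summing exactly to eps
prod = float(np.prod(1.0 + s2))
print(prod, 1.0 / (1.0 - eps))  # prod stays below 1/(1-eps) ~ 1.4286
```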
Then, \begin{align} \nonumber &\prod_{j=i}^q (1+\sin^2 (\alpha_j)) \le \prod_{j=1}^q (1+\sin^2 (\alpha_j)) \\ \nonumber & \le 1 + \sum_{j=1}^q \sin^2 (\alpha_j)+ \left( \sum_{j=1}^q \sin^2 (\alpha_j) \right)^2 + \ldots \\ \nonumber & \le \sum_{j \ge 0} \varepsilon^j= 1/(1-\varepsilon)\; , \end{align} where the last inequality follows from summing the geometric series. Then, \begin{align} \label{eq:coherenceboundapp} c_j \le \frac 1 {1-\varepsilon} (\varepsilon' \sin \alpha_j + \varepsilon'^2 \sin \alpha_{j-1} + \ldots + \varepsilon'^{j} \sin \alpha_{1}) \; , \end{align} which is the desired bound. \section{Eigenstate change in QSA} \label{app:QSA} By definition, the eigenstate path in QSA is determined by \begin{align} \nonumber \ket{\psi(\beta)}=\frac 1 {\sqrt{\cal Z}} \sum_\sigma e^{-\beta E[\sigma]/2} \ket \sigma \end{align} where $0 \le \beta \le \beta_q$, $E[\sigma] \in \mathbb R$ is the value of the objective function for (classical) configuration $\sigma$, and ${\cal Z}=\sum_{\sigma} e^{-\beta E[\sigma]}$ is the partition function. Then, it is simple to show \begin{align} \label{eq:QSAratechange0} \ket{ \partial_\beta \psi(\beta)}=\frac 1 2 \left [\langle E \rangle \ket{\psi(\beta)}- \frac 1 {\sqrt{\cal Z}} \sum_\sigma E[\sigma] e^{-\beta E[\sigma]/2} \ket \sigma \right] \; , \end{align} where \begin{align} \nonumber \langle E \rangle = \frac 1 {\cal Z} \sum_\sigma E[\sigma] e^{-\beta E[\sigma]} \end{align} is the expected (thermodynamic) value of $E$ at inverse temperature $\beta$. Because $\{\ket \sigma \}$ is an orthonormal basis, Eq.~\eqref{eq:QSAratechange0} gives \begin{align} \nonumber \| \ket{ \partial_\beta \psi(\beta)} \|^2 &= \frac 1 4 \sum_\sigma (\langle E \rangle -E[\sigma])^2 \times \frac{e^{-\beta E[\sigma]}}{\cal Z} \\ \nonumber & = \frac 1 4 \left( \langle E^2 \rangle - \langle E \rangle^2 \right) \; , \end{align} relating the rate of change of the state with the thermodynamic fluctuations of $E$. 
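The identity just derived, $\| \ket{\partial_\beta \psi(\beta)} \|^2 = (\langle E^2 \rangle - \langle E \rangle^2)/4$, is easy to verify numerically; the sketch below (with a hypothetical random objective $E$ over a handful of configurations) compares a finite-difference derivative of the state against the thermodynamic variance:

```python
import numpy as np

# Verify || d/d_beta psi ||^2 = (<E^2> - <E>^2) / 4 for a random
# objective function E over 8 classical configurations.
rng = np.random.default_rng(1)
E = rng.random(8)
beta = 0.7

def psi(b):
    w = np.exp(-b * E / 2.0)        # amplitudes e^{-beta E/2}, then normalize
    return w / np.linalg.norm(w)

h = 1e-5
dpsi = (psi(beta + h) - psi(beta - h)) / (2.0 * h)
p = psi(beta) ** 2                  # Gibbs probabilities e^{-beta E} / Z
var = float(p @ E**2 - (p @ E) ** 2)
print(float(dpsi @ dpsi), var / 4.0)  # the two values agree
```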
In addition, \begin{align} \nonumber \partial _ \beta \langle E \rangle &=\partial _ \beta \frac 1 {\cal Z} \sum_\sigma E[\sigma] e^{-\beta E[\sigma]} \\ \nonumber & = \frac{-\partial_\beta \cal Z}{{\cal Z}^2}\sum_\sigma E[\sigma] e^{-\beta E[\sigma]} - \frac{1}{\cal Z}\sum_\sigma E^2[\sigma] e^{-\beta E[\sigma]} \\ \nonumber & = \langle E \rangle^2 - \langle E^2 \rangle \;, \end{align} and then \begin{align} \label{eq:QSAratechange} \| \ket{ \partial_\beta \psi(\beta)} \|^2 &= -\frac{\partial_\beta \langle E \rangle} 4 \; . \end{align} \end{appendix} \begin{thebibliography}{45} \makeatletter \providecommand \@ifxundefined [1]{ \@ifx{#1\undefined} } \providecommand \@ifnum [1]{ \ifnum #1\expandafter \@firstoftwo \else \expandafter \@secondoftwo \fi } \providecommand \@ifx [1]{ \ifx #1\expandafter \@firstoftwo \else \expandafter \@secondoftwo \fi } \providecommand \natexlab [1]{#1} \providecommand \enquote [1]{``#1''} \providecommand \bibnamefont [1]{#1} \providecommand \bibfnamefont [1]{#1} \providecommand \citenamefont [1]{#1} \providecommand \href@noop [0]{\@secondoftwo} \providecommand \href [0]{\begingroup \@sanitize@url \@href} \providecommand \@href[1]{\@@startlink{#1}\@@href} \providecommand \@@href[1]{\endgroup#1\@@endlink} \providecommand \@sanitize@url [0]{\catcode `\\12\catcode `\$12\catcode `\&12\catcode `\#12\catcode `\^12\catcode `\_12\catcode `\%12\relax} \providecommand \@@startlink[1]{} \providecommand \@@endlink[0]{} \providecommand \url [0]{\begingroup\@sanitize@url \@url } \providecommand \@url [1]{\endgroup\@href {#1}{\urlprefix }} \providecommand \urlprefix [0]{URL } \providecommand \Eprint [0]{\href } \providecommand \doibase [0]{http://dx.doi.org/} \providecommand \selectlanguage [0]{\@gobble} \providecommand \bibinfo [0]{\@secondoftwo} \providecommand \bibfield [0]{\@secondoftwo} \providecommand \translation [1]{[#1]} \providecommand \BibitemOpen [0]{} \providecommand \bibitemStop [0]{} \providecommand 
\bibitemNoStop [0]{.\EOS\space} \providecommand \EOS [0]{\spacefactor3000\relax} \providecommand \BibitemShut [1]{\csname bibitem#1\endcsname} \let\auto@bib@innerbib\@empty \bibitem [{\citenamefont {Apolloni}\ \emph {et~al.}(1989)\citenamefont {Apolloni}, \citenamefont {Caravalho},\ and\ \citenamefont {de~Falco}}]{apolloni:qa89} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {B.}~\bibnamefont {Apolloni}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Caravalho}}, \ and\ \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {de~Falco}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Stochastic Processes and their Applications}\ }\textbf {\bibinfo {volume} {33}},\ \bibinfo {pages} {233} (\bibinfo {year} {1989})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Finnila}\ \emph {et~al.}(1994)\citenamefont {Finnila}, \citenamefont {Gomez}, \citenamefont {Sebenik}, \citenamefont {Stenson},\ and\ \citenamefont {Doll}}]{finnila:qc1994a} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {A.~B.}\ \bibnamefont {Finnila}}, \bibinfo {author} {\bibfnamefont {M.~A.}\ \bibnamefont {Gomez}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Sebenik}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Stenson}}, \ and\ \bibinfo {author} {\bibfnamefont {J.~D.}\ \bibnamefont {Doll}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Chem. Phys. Lett.}\ }\textbf {\bibinfo {volume} {219}},\ \bibinfo {pages} {343} (\bibinfo {year} {1994})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Kadowaki}\ and\ \citenamefont {Nishimori}(1998)}]{kadowaki:qc1998a} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Kadowaki}}\ and\ \bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Nishimori}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. 
E}\ }\textbf {\bibinfo {volume} {58}},\ \bibinfo {pages} {5355} (\bibinfo {year} {1998})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Farhi}\ \emph {et~al.}(2001)\citenamefont {Farhi}, \citenamefont {Goldstone}, \citenamefont {Gutmann}, \citenamefont {Lapan}, \citenamefont {Lundgren},\ and\ \citenamefont {Preda}}]{farhi:qc2001a} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {E.}~\bibnamefont {Farhi}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Goldstone}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Gutmann}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Lapan}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Lundgren}}, \ and\ \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Preda}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Science}\ }\textbf {\bibinfo {volume} {292}},\ \bibinfo {pages} {472} (\bibinfo {year} {2001})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Sachdev}(2001)}]{sachdev_2001} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Sachdev}},\ }\href@noop {} {\emph {\bibinfo {title} {Quantum Phase Transitions}}}\ (\bibinfo {publisher} {Cambridge University Press, UK},\ \bibinfo {year} {2001})\BibitemShut {NoStop} \bibitem [{\citenamefont {Santoro}\ and\ \citenamefont {Tosatti}(2006)}]{santoro:qa06} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Santoro}}\ and\ \bibinfo {author} {\bibfnamefont {E.}~\bibnamefont {Tosatti}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {J. Phys. 
A}\ }\textbf {\bibinfo {volume} {39}},\ \bibinfo {pages} {R393} (\bibinfo {year} {2006})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Somma}\ \emph {et~al.}(2008)\citenamefont {Somma}, \citenamefont {Boixo}, \citenamefont {Barnum},\ and\ \citenamefont {Knill}}]{somma_quantum_2008} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {R.~D.}\ \bibnamefont {Somma}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Boixo}}, \bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Barnum}}, \ and\ \bibinfo {author} {\bibfnamefont {E.}~\bibnamefont {Knill}},\ }\href {\doibase 10.1103/PhysRevLett.101.130504} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {101}},\ \bibinfo {pages} {130504} (\bibinfo {year} {2008})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Perez-Garcia}\ \emph {et~al.}(2008)\citenamefont {Perez-Garcia}, \citenamefont {Verstraete}, \citenamefont {Cirac},\ and\ \citenamefont {Wolf}}]{perez_PEPS_2008} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Perez-Garcia}}, \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Verstraete}}, \bibinfo {author} {\bibfnamefont {J.~I.}\ \bibnamefont {Cirac}}, \ and\ \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Wolf}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Quant. Inf. Comp.}\ }\textbf {\bibinfo {volume} {8}},\ \bibinfo {pages} {0650} (\bibinfo {year} {2008})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Das}\ and\ \citenamefont {Chakrabarti}(2008)}]{das:qa08} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Das}}\ and\ \bibinfo {author} {\bibfnamefont {B.}~\bibnamefont {Chakrabarti}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Rev. Mod. 
Phys}\ }\textbf {\bibinfo {volume} {80}},\ \bibinfo {pages} {1061} (\bibinfo {year} {2008})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Schwarz}\ \emph {et~al.}(2012)\citenamefont {Schwarz}, \citenamefont {Temme},\ and\ \citenamefont {Verstraete}}]{schwarz_peps_2011} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Schwarz}}, \bibinfo {author} {\bibfnamefont {K.}~\bibnamefont {Temme}}, \ and\ \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Verstraete}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {108}},\ \bibinfo {pages} {110502} (\bibinfo {year} {2012})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Born}\ and\ \citenamefont {Fock}(1928)}]{Born-Fock:adiabatic} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Born}}\ and\ \bibinfo {author} {\bibfnamefont {V.}~\bibnamefont {Fock}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Zeitschrift f\"ur Physik}\ }\textbf {\bibinfo {volume} {3-4}},\ \bibinfo {pages} {165} (\bibinfo {year} {1928})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Messiah}(1999)}]{messiah_1999} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Messiah}},\ }\href@noop {} {\emph {\bibinfo {title} {Quantum Mechanics}}}\ (\bibinfo {publisher} {Dover Publications},\ \bibinfo {year} {1999})\BibitemShut {NoStop} \bibitem [{\citenamefont {Farhi}\ \emph {et~al.}(2002)\citenamefont {Farhi}, \citenamefont {Goldstone},\ and\ \citenamefont {Gutmann}}]{farhi_quantum_2002} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {E.}~\bibnamefont {Farhi}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Goldstone}}, \ and\ \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Gutmann}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {quant-ph/0201031}\ } (\bibinfo {year} {2002})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Roland}\ and\ \citenamefont 
{Cerf}(2002)}]{roland_quantum_2002} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Roland}}\ and\ \bibinfo {author} {\bibfnamefont {N.~J.}\ \bibnamefont {Cerf}},\ }\href {\doibase {10.1103/PhysRevA.65.042308}} {\bibfield {journal} {\bibinfo {journal} {Physical Review A}\ }\textbf {\bibinfo {volume} {65}},\ \bibinfo {pages} {042308} (\bibinfo {year} {2002})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Hen}(2013)}]{Hen_period_2013} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {I.}~\bibnamefont {Hen}},\ }\href@noop {} {\enquote {\bibinfo {title} {Period finding with adiabatic quantum computation},}\ } (\bibinfo {year} {2013}),\ \bibinfo {note} {arXiv:1307.6538}\BibitemShut {NoStop} \bibitem [{\citenamefont {Aharonov}\ \emph {et~al.}(2007)\citenamefont {Aharonov}, \citenamefont {van Dam}, \citenamefont {Kempe}, \citenamefont {Landau}, \citenamefont {Lloyd},\ and\ \citenamefont {Regev}}]{aharonov_adiabatic_2007} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Aharonov}}, \bibinfo {author} {\bibfnamefont {W.}~\bibnamefont {van Dam}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Kempe}}, \bibinfo {author} {\bibfnamefont {Z.}~\bibnamefont {Landau}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Lloyd}}, \ and\ \bibinfo {author} {\bibfnamefont {O.}~\bibnamefont {Regev}},\ }\href {\doibase {10.1137/S0097539705447323}} {\bibfield {journal} {\bibinfo {journal} {{SIAM} J. 
Comp.}\ }\textbf {\bibinfo {volume} {37}},\ \bibinfo {pages} {166} (\bibinfo {year} {2007})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Mizel}\ \emph {et~al.}(2007)\citenamefont {Mizel}, \citenamefont {Lidar},\ and\ \citenamefont {Mitchell}}]{mizel_equivalence_2007} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Mizel}}, \bibinfo {author} {\bibfnamefont {D.~A.}\ \bibnamefont {Lidar}}, \ and\ \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Mitchell}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {99}},\ \bibinfo {pages} {070502} (\bibinfo {year} {2007})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Oliveira}\ and\ \citenamefont {Terhal}(2008)}]{oliveira_adiabatic_2008} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Oliveira}}\ and\ \bibinfo {author} {\bibfnamefont {B.~M.}\ \bibnamefont {Terhal}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Quant. Inf. Comp}\ }\textbf {\bibinfo {volume} {8}},\ \bibinfo {pages} {0900} (\bibinfo {year} {2008})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Aharonov}\ \emph {et~al.}(2009)\citenamefont {Aharonov}, \citenamefont {Gottesman}, \citenamefont {Irani},\ and\ \citenamefont {Kempe}}]{aharonov_line_2009} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Aharonov}}, \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Gottesman}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Irani}}, \ and\ \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Kempe}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Comm. Math. 
Phys.}\ }\textbf {\bibinfo {volume} {287}},\ \bibinfo {pages} {41} (\bibinfo {year} {2009})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Somma}\ and\ \citenamefont {Ganti}(2013)}]{somma_Feynman_2013} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Somma}}\ and\ \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Ganti}},\ }\href@noop {} {\enquote {\bibinfo {title} {On the gap of hamiltonians for the adiabatic simulation of quantum circuits},}\ } (\bibinfo {year} {2013}),\ \bibinfo {note} {arXiv:1307.4993}\BibitemShut {NoStop} \bibitem [{\citenamefont {Cleve}\ \emph {et~al.}(2009)\citenamefont {Cleve}, \citenamefont {Gottesman}, \citenamefont {Mosca}, \citenamefont {Somma},\ and\ \citenamefont {Yonge-Mallo}}]{cleve_query_2009} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Cleve}}, \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Gottesman}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Mosca}}, \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Somma}}, \ and\ \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Yonge-Mallo}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Proceedings of the 41st Annual IEEE Symp. on Theory of Computing}\ ,\ \bibinfo {pages} {409}} (\bibinfo {year} {2009})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Wiebe}\ \emph {et~al.}(2010)\citenamefont {Wiebe}, \citenamefont {Berry}, \citenamefont {Hoyer},\ and\ \citenamefont {Sanders}}]{wiebe_product_2010} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {N.}~\bibnamefont {Wiebe}}, \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Berry}}, \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Hoyer}}, \ and\ \bibinfo {author} {\bibfnamefont {B.~C.}\ \bibnamefont {Sanders}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {J. Phys. A: Math. 
Theor.}\ }\textbf {\bibinfo {volume} {43}},\ \bibinfo {pages} {065203} (\bibinfo {year} {2010})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Childs}\ and\ \citenamefont {Kothari}(2011)}]{childs_efficient_2010} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Childs}}\ and\ \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Kothari}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Theory of Quantum Computation, Communication, and Cryptography}\ ,\ \bibinfo {pages} {94}} (\bibinfo {year} {2011})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Berry}\ \emph {et~al.}(2013)\citenamefont {Berry}, \citenamefont {Cleve},\ and\ \citenamefont {Somma}}]{BCS2013} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Berry}}, \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Cleve}}, \ and\ \bibinfo {author} {\bibfnamefont {R.~D.}\ \bibnamefont {Somma}},\ }\href@noop {} {\enquote {\bibinfo {title} {Exponential improvement in precision for hamiltonian-evolution simulation},}\ } (\bibinfo {year} {2013}),\ \bibinfo {note} {arXiv:1308.5424}\BibitemShut {NoStop} \bibitem [{\citenamefont {Jansen}\ \emph {et~al.}(2007)\citenamefont {Jansen}, \citenamefont {Ruskai},\ and\ \citenamefont {Seiler}}]{jansen_bounds_2007} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Jansen}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Ruskai}}, \ and\ \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Seiler}},\ }\href {\doibase 10.1063/1.2798382} {\bibfield {journal} {\bibinfo {journal} {J. of Math. 
Phys.}\ }\textbf {\bibinfo {volume} {48}},\ \bibinfo {pages} {102111} (\bibinfo {year} {2007})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Jordan}(2008)}]{jordan_thesis_2008} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Jordan}},\ }\href@noop {} {\emph {\bibinfo {title} {Quantum computation beyond the circuit model}}}\ (\bibinfo {publisher} {Massachusetts Institute of Technology},\ \bibinfo {year} {2008})\BibitemShut {NoStop} \bibitem [{\citenamefont {Marzlin}\ and\ \citenamefont {Sanders}(2004)}]{Marzlin_Inconsistency_2004} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {K.-P.}\ \bibnamefont {Marzlin}}\ and\ \bibinfo {author} {\bibfnamefont {B.~C.}\ \bibnamefont {Sanders}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {93}},\ \bibinfo {pages} {160408} (\bibinfo {year} {2004})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Amin}(2009)}]{Amin_Consistency_2009} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {M.~H.~S.}\ \bibnamefont {Amin}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {102}},\ \bibinfo {pages} {220401} (\bibinfo {year} {2009})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Wocjan}\ and\ \citenamefont {Abeyesinghe}(2008)}]{wocjan_speed-up_2008} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Wocjan}}\ and\ \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Abeyesinghe}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. 
A}\ ,\ \bibinfo {pages} {042336}} (\bibinfo {year} {2008})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Boixo}\ \emph {et~al.}(2009)\citenamefont {Boixo}, \citenamefont {Knill},\ and\ \citenamefont {Somma}}]{boixo:qc2009a} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Boixo}}, \bibinfo {author} {\bibfnamefont {E.}~\bibnamefont {Knill}}, \ and\ \bibinfo {author} {\bibfnamefont {R.~D.}\ \bibnamefont {Somma}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Quantum Inf. and Comp.}\ }\textbf {\bibinfo {volume} {9}},\ \bibinfo {pages} {833} (\bibinfo {year} {2009})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Boixo}\ \emph {et~al.}(2010)\citenamefont {Boixo}, \citenamefont {Knill},\ and\ \citenamefont {Somma}}]{boixo:qc2010a} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Boixo}}, \bibinfo {author} {\bibfnamefont {E.}~\bibnamefont {Knill}}, \ and\ \bibinfo {author} {\bibfnamefont {R.~D.}\ \bibnamefont {Somma}},\ }\href@noop {} {\enquote {\bibinfo {title} {Fast quantum algorithms for traversing paths of eigenstates},}\ } (\bibinfo {year} {2010}),\ \bibinfo {note} {arXiv:1005.3034}\BibitemShut {NoStop} \bibitem [{\citenamefont {Kitaev}(1995)}]{kitaev_quantum_1995} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {A.~Y.}\ \bibnamefont {Kitaev}},\ }\href@noop {} {\enquote {\bibinfo {title} {Quantum measurements and the abelian stabilizer problem},}\ } (\bibinfo {year} {1995}),\ \bibinfo {note} {quant-ph/9511026}\BibitemShut {NoStop} \bibitem [{\citenamefont {Grover}(2005)}]{grover_different_2005} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {L.~K.}\ \bibnamefont {Grover}},\ }\href@noop {} {\enquote {\bibinfo {title} {A different kind of quantum search},}\ } (\bibinfo {year} {2005}),\ \bibinfo {note} {quant-ph/0503205}\BibitemShut {NoStop} \bibitem [{\citenamefont {Kirkpatrick}\ \emph {et~al.}(1983)\citenamefont {Kirkpatrick}, \citenamefont 
{Gelatt},\ and\ \citenamefont {Vecchi}}]{kirkpatrick_SA_83} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Kirkpatrick}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Gelatt}}, \ and\ \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Vecchi}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Science}\ }\textbf {\bibinfo {volume} {220}},\ \bibinfo {pages} {671} (\bibinfo {year} {1983})}\BibitemShut {NoStop} \bibitem [{Not()}]{NoteEPT-bounds} \BibitemOpen \href@noop {} {}\bibinfo {howpublished} {With no loss of generality, we can assume that the eigenvalue is zero so that $H \ket{\psi }=0$. Then, $\sket{\dot \psi}=(-1/H)\dot H \ket \psi$, where $1/H$ has support on the space orthogonal to $ \ket{\psi}$ only.}\BibitemShut {Stop} \bibitem [{\citenamefont {Feiguin}\ \emph {et~al.}(2013)\citenamefont {Feiguin}, \citenamefont {Somma},\ and\ \citenamefont {Batista}}]{feiguin_renorm_2013} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Feiguin}}, \bibinfo {author} {\bibfnamefont {R.~D.}\ \bibnamefont {Somma}}, \ and\ \bibinfo {author} {\bibfnamefont {C.~D.}\ \bibnamefont {Batista}},\ }\href@noop {} {\enquote {\bibinfo {title} {An exact real-space renormalization method and applications},}\ } (\bibinfo {year} {2013}),\ \bibinfo {note} {arXiv:1303.0305}\BibitemShut {NoStop} \bibitem [{\citenamefont {Somma}\ \emph {et~al.}(2007)\citenamefont {Somma}, \citenamefont {Batista},\ and\ \citenamefont {Ortiz}}]{somma_thermod_2007} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Somma}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Batista}}, \ and\ \bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Ortiz}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. 
Lett.}\ }\textbf {\bibinfo {volume} {99}},\ \bibinfo {pages} {030603} (\bibinfo {year} {2007})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Bravyi}\ and\ \citenamefont {Terhal}(2009)}]{bravyi_stoquastic_2009} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Bravyi}}\ and\ \bibinfo {author} {\bibfnamefont {B.}~\bibnamefont {Terhal}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {SIAM J. Comput.}\ }\textbf {\bibinfo {volume} {39}},\ \bibinfo {pages} {1462} (\bibinfo {year} {2009})}\BibitemShut {NoStop} \bibitem [{\citenamefont {de~Beaudrap}\ \emph {et~al.}(2010)\citenamefont {de~Beaudrap}, \citenamefont {Ohliger}, \citenamefont {Osborne},\ and\ \citenamefont {Eisert}}]{beaudrap_frustfree_2010} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {N.}~\bibnamefont {de~Beaudrap}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Ohliger}}, \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Osborne}}, \ and\ \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Eisert}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {105}},\ \bibinfo {pages} {060504} (\bibinfo {year} {2010})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Somma}\ and\ \citenamefont {Boixo}(2013)}]{somma_gap_2013} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {R.~D.}\ \bibnamefont {Somma}}\ and\ \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Boixo}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {SIAM J. 
Comp}\ }\textbf {\bibinfo {volume} {42}},\ \bibinfo {pages} {593} (\bibinfo {year} {2013})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Verstraete}\ \emph {et~al.}(2006)\citenamefont {Verstraete}, \citenamefont {Wolf}, \citenamefont {Perez-Garcia},\ and\ \citenamefont {Cirac}}]{verstraete_peps_2006} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Verstraete}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Wolf}}, \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Perez-Garcia}}, \ and\ \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Cirac}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {96}},\ \bibinfo {pages} {220601} (\bibinfo {year} {2006})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Rommer}\ and\ \citenamefont {Ostlund}(1997)}]{RO_MPS_1997} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Rommer}}\ and\ \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Ostlund}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. B}\ }\textbf {\bibinfo {volume} {55}},\ \bibinfo {pages} {2164} (\bibinfo {year} {1997})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Vidal}(2003)}]{Vidal_MPS_2003} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Vidal}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. 
Lett.}\ }\textbf {\bibinfo {volume} {91}},\ \bibinfo {pages} {147902} (\bibinfo {year} {2003})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Ambainis}\ and\ \citenamefont {Regev}(2004)}]{regev_quantum_2004} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Ambainis}}\ and\ \bibinfo {author} {\bibfnamefont {O.}~\bibnamefont {Regev}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {quant-ph/0411152}\ } (\bibinfo {year} {2004})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Lidar}\ \emph {et~al.}(2009)\citenamefont {Lidar}, \citenamefont {Rezakhani},\ and\ \citenamefont {Hamma}}]{lidar_adiabatic_2009} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {D.~A.}\ \bibnamefont {Lidar}}, \bibinfo {author} {\bibfnamefont {A.~T.}\ \bibnamefont {Rezakhani}}, \ and\ \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Hamma}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {J. Math. Phys.}\ }\textbf {\bibinfo {volume} {50}},\ \bibinfo {pages} {102106} (\bibinfo {year} {2009})}\BibitemShut {NoStop} \end{thebibliography} \end{document}
\begin{document}

\title{Finite Asymptotic Clusters of Metric Spaces}
\author{Viktoriia Bilet and Oleksiy Dovgoshey}
\date{}
\maketitle

\begin{abstract}
Let $(X, d)$ be an unbounded metric space and let $\tilde r=(r_n)_{n\in\mathbb N}$ be a sequence of positive real numbers tending to infinity. A pretangent space $\Omega_{\infty, \tilde r}^{X}$ to $(X, d)$ at infinity is a limit of the rescaling sequence $\left(X, \frac{1}{r_n}d\right).$ The set of all pretangent spaces $\Omega_{\infty, \tilde r}^{X}$ is called an asymptotic cluster of pretangent spaces. Such a cluster can be considered as a weighted graph $(G_{X, \tilde r}, \rho_{X})$ whose maximal cliques coincide with $\Omega_{\infty, \tilde r}^{X}$ and whose weight $\rho_{X}$ is defined by the metrics on $\Omega_{\infty, \tilde r}^{X}$. We describe the structure of metric spaces having finite asymptotic clusters of pretangent spaces and characterize the finite weighted graphs which are isomorphic to these clusters.
\end{abstract}

\noindent\textbf{Keywords and phrases:} asymptotics of metric spaces, finite metric spaces, weighted graphs, metrization of weighted graphs, homomorphisms of graphs.

\bigskip

\noindent\textbf{2010 Mathematics subject classification:} 54E35, 05C12, 05C69

\section{Introduction}

By an asymptotic cluster of metric spaces we mean the set of metric spaces which are the limits of the rescaled metric spaces $\left(X, \frac{1}{r_n} d\right)$ for $r_n$ tending to infinity. The Gromov--Hausdorff convergence and the asymptotic cones are most often used for the construction of such limits. Both of these approaches are based on higher-order abstractions (see, for example, \cite{Ro} for details), which makes them very powerful but takes away their constructiveness. In this paper we use a more elementary, sequential approach to describing the scaling limits of unbounded metric spaces at infinity.
Let $(X,d)$ be a metric space and let $\tilde{r}=(r_n)_{n\in\mathbb{N}}$ be a sequence of positive real numbers with $\mathop{\lim}\limits_{n\to\infty} r_{n} = \infty$. In what follows $\tilde{r}$ will be called a \emph{scaling sequence} and the formula $\seq{x_n}{n} \subset A$ will mean that all elements of the sequence $\seq{x_n}{n}$ belong to the set~$A$.

\begin{definition}\label{d1.1.1}
Two sequences $\tilde{x} = \seq{x_n}{n} \subset X$ and $\tilde{y} = \seq{y_n}{n}\subset X$ are \emph{mutually stable} with respect to the scaling sequence $\tilde{r}=\seq{r_n}{n}$ if there is a finite limit
\begin{equation}\label{e1.1.1}
\lim_{n\to\infty}\frac{d(x_n,y_n)}{r_n} := \tilde{d}_{\tilde{r}}(\tilde{x},\tilde{y}) = \tilde{d}(\tilde{x}, \tilde{y}).
\end{equation}
\end{definition}

Let $p\in X$. Denote by $Seq(X, \tilde{r})$ the set of all sequences $\tilde{x} = \seq{x_n}{n} \subset X$ for which there is a finite limit
\begin{equation}\label{e1.1.2}
\lim_{n\to\infty}\frac{d(x_n, p)}{r_n} := \tilde{\tilde d}_{\tilde{r}}(\tilde{x})
\end{equation}
and such that $\mathop{\lim}\limits_{n\to\infty} d(x_n, p)=\infty$.

\begin{definition}\label{d1.1.2}
A set $F\subseteq Seq(X, \tilde{r})$ is \emph{self-stable} if any two $\tilde{x}, \tilde{y} \in F$ are mutually stable. $F$ is \emph{maximal self-stable} if it is self-stable and, for arbitrary $\tilde{y}\in Seq(X, \tilde{r})$, we have either $\tilde{y}\in F$ or there is $\tilde{x}\in F$ such that $\tilde{x}$ and $\tilde{y}$ are not mutually stable.
\end{definition}

The maximal self-stable subsets of $Seq(X, \tilde{r})$ will be denoted by $\sstable{X}{r}$.

\begin{remark}\label{r1.1.3}
If $\tilde{x}=\seq{x_n}{n} \in Seq(X, \tilde{r})$ and $p, b\in X,$ then the triangle inequality implies
\begin{equation}\label{e1.1.3}
\lim_{n\to\infty}\frac{d(x_n, p)}{r_n} = \lim_{n\to\infty}\frac{d(x_n, b)}{r_n}.
\end{equation}
In particular, $Seq(X, \tilde{r})$, the self-stable subsets and the maximal self-stable subsets of $Seq(X, \tilde{r})$ are invariant with respect to the choice of the point $p\in X$ in \eqref{e1.1.2}.
\end{remark}

Recall that a function $\mu: Y\times Y\to\mathbb R^{+}$ is called a pseudometric on a set $Y$ if for all $x, y, z\in Y$ we have
\begin{equation*}
\mu(x, x)=0, \quad \mu(x, y)=\mu(y, x)\quad\mbox{and}\quad \mu(x, z)\le\mu(x, y)+\mu(y, z).
\end{equation*}
Every metric is a pseudometric. A pseudometric $\mu: Y\times Y\to\mathbb R^{+}$ is a metric if and only if, for all $x, y\in Y,$ the equality $\mu(x, y)=0$ implies $x=y.$

Consider a function $\tilde{d}: \sstable{X}{r} \times \sstable{X}{r} \rightarrow \mathbb{R}$ satisfying \eqref{e1.1.1} for all $\tilde{x}$, $\tilde{y} \in \sstable{X}{r}$. Obviously, $\tilde{d}$ is symmetric and nonnegative. Moreover, the triangle inequality for $d$ gives us the triangle inequality for $\tilde d$,
$$
\tilde{d}(\tilde{x},\tilde{y})\leq\tilde{d}(\tilde{x},\tilde{z})+\tilde{d}(\tilde{z},\tilde{y}).
$$
Hence $(\sstable{X}{r},\tilde{d})$ is a pseudometric space. Now we are ready to define the main object of our research.

\begin{definition}\label{d1.1.4}
Let $(X,d)$ be an unbounded metric space, let $\tilde{r}$ be a scaling sequence and let $\sstable{X}{r}$ be a maximal self-stable subset of $Seq(X, \tilde{r})$. The \emph{pretangent space} to $(X, d)$ (at infinity, with respect to $\tilde{r}$) is the metric identification of the pseudometric space $(\sstable{X}{r},\tilde{d})$.
\end{definition}

Since the notion of pretangent space is basic for the paper, we recall the metric identification construction. Define a relation $\equiv$ on $Seq(X, \tilde{r})$ as
\begin{equation}\label{e1.1.4}
\left(\tilde{x}\equiv \tilde{y}\right)\Leftrightarrow \left(\tilde d_{\tilde{r}}(\tilde{x}, \tilde{y})=0\right).
\end{equation}
The reflexivity and the symmetry of $\equiv$ are evident.
Let $\tilde{x}, \tilde{y}, \tilde{z}\in Seq (X, \tilde{r})$ with $\tilde{x}\equiv\tilde{y}$ and $\tilde{y}\equiv\tilde{z}$. Then the inequality
$$
\limsup_{n\to\infty}\frac{d(x_n, z_n)}{r_n} \le \lim_{n\to\infty} \frac{d(x_n, y_n)}{r_n} + \lim_{n\to\infty} \frac{d(y_n, z_n)}{r_n}
$$
implies $\tilde{x} \equiv \tilde{z}$. Thus $\equiv$ is an equivalence relation. Write $\pretan{X}{r}$ for the set of equivalence classes generated by the restriction of $\equiv$ to the set $\sstable{X}{r}$. Using general properties of pseudometric spaces one can prove (see, for example, \cite{Kelley}) that the function $\rho \colon \pretan{X}{r} \times \pretan{X}{r} \to \mathbb{R}$ with
\begin{equation}\label{e1.1.5}
\rho(\alpha,\beta):=\tilde d_{\tilde{r}}(\tilde{x}, \tilde{y}), \quad \tilde{x}\in \alpha \in \pretan{X}{r}, \quad \tilde{y}\in \beta \in \pretan{X}{r},
\end{equation}
is a well-defined metric on~$\pretan{X}{r}$. The metric identification of $(\sstable{X}{r}, \tilde d)$ is the metric space $(\pretan{X}{r}, \rho).$

Let us denote by $\tilde{X}_{\infty}$ the set of all sequences $\seq{x_n}{n}\subset X$ satisfying the limit relation $\lim\limits_{n\to\infty}d(x_n, p)=\infty$ with $p\in X.$ It is clear that $Seq(X, \tilde{r})\subseteq\tilde{X}_{\infty}$ holds for every scaling sequence $\tilde{r}$ and, for every $\tilde{x} \in\tilde{X}_{\infty},$ there exists a scaling sequence $\tilde{r}$ such that $\tilde{x}\in Seq(X, \tilde{r}).$

For every unbounded metric space $(X, d)$ and every scaling sequence $\tilde{r}$ define the subset $\sstable{X}{r}^{0}$ of the set $Seq(X, \tilde{r})$ by the rule:
\begin{equation}\label{e1.2.4}
\left((z_n)_{n\in\mathbb{N}}\in\sstable{X}{r}^{0}\right) \Leftrightarrow \left((z_n)_{n\in\mathbb{N}} \in \tilde{X}_{\infty} \quad \mbox{and} \quad \lim_{n\to\infty} \frac{d(z_n, p)}{r_n} = 0\right),
\end{equation}
where $p$ is a point of $X$.
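For instance, let $X=\mathbb{R}$ with $d(x, y)=|x-y|$, $p=0$ and $r_n = n$. Then the sequence $\tilde{z}=(\sqrt{n})_{n\in\mathbb{N}}$ belongs to $\sstable{X}{r}^{0}$, because
$$
\lim_{n\to\infty} d(z_n, p) = \lim_{n\to\infty}\sqrt{n} = \infty \quad\mbox{and}\quad \lim_{n\to\infty} \frac{d(z_n, p)}{r_n} = \lim_{n\to\infty}\frac{\sqrt{n}}{n} = 0.
$$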
Below we collect some basic properties of the set $\sstable{X}{r}^{0}.$

\begin{proposition}\label{p1.2.2}
Let $(X, d)$ be an unbounded metric space and let $\tilde{r}$ be a scaling sequence. Then the following statements hold.
\begin{enumerate}
\item\label{p1.2.2s1} The set $\sstable{X}{r}^{0}$ is nonempty.
\item\label{p1.2.2s2} If we have $\tilde{z}\in \sstable{X}{r}^{0},$ $\tilde{y}\in\tilde{X}_{\infty}$ and $\tilde d_{\tilde{r}}(\tilde{z}, \tilde{y})=0,$ then $\tilde{y}\in\sstable{X}{r}^{0}$ holds.
\item\label{p1.2.2s3} If $F\subseteq Seq(X, \tilde{r})$ is self-stable, then $\sstable{X}{r}^{0}\cup F$ is also a self-stable subset of $Seq(X, \tilde{r}).$
\item\label{p1.2.2s4} The set $\sstable{X}{r}^{0}$ is self-stable.
\item\label{p1.2.2s5} The inclusion $\sstable{X}{r}^{0}\subseteq\sstable{X}{r}$ holds for every maximal self-stable subset $\sstable{X}{r}$ of $Seq(X, \tilde{r}).$
\item\label{p1.2.2s6} Let $\tilde{z}\in\tilde{X}_{\infty,\tilde{r}}^{0}$ and $\tilde{x}\in\tilde{X}_{\infty}.$ Then $\tilde{x}\in Seq (X, \tilde{r})$ holds if and only if $\tilde{x}$ and $\tilde{z}$ are mutually stable. For $\tilde{x}\in Seq (X, \tilde{r})$ we have $$\tilde{\tilde d}_{\tilde{r}}(\tilde{x})=\tilde d_{\tilde{r}}(\tilde{x}, \tilde{z}).$$
\item\label{p1.2.2s7} Denote by $\mathbf{\pretan{X}{r}}$ the set of all spaces pretangent to $X$ at infinity (with respect to $\tilde{r}$). Then the membership
\begin{equation*}
\sstable{X}{r}^{0}\in \bigcap_{\pretan{X}{r}\in\mathbf{\pretan{X}{r}}}\pretan{X}{r}
\end{equation*}
holds.
\end{enumerate}
\end{proposition}

The simple proof is omitted here.

\begin{remark}\label{r1.2.3}
The set $\sstable{X}{r}^{0}$ is invariant under the replacement of $p\in X$ by an arbitrary point $b\in X$ in \eqref{e1.2.4}.
\end{remark}

\begin{lemma}\label{l1.2.4}
Let $(X, d)$ be an unbounded metric space, $p\in X$ and $\tilde{y}\in\tilde{X}_{\infty}$, let $\tilde{r}$ be a scaling sequence and let $\sstable{X}{r}$ be a maximal self-stable set.
If $\tilde{y}$ and $\tilde{x}$ are mutually stable for every $\tilde{x}\in\sstable{X}{r}$, then $\tilde{y}\in\sstable{X}{r}$.
\end{lemma}

\begin{proof}
Suppose $\tilde{y}$ and $\tilde{x}$ are mutually stable for every $\tilde{x}\in\sstable{X}{r}$. To prove $\tilde{y}\in\sstable{X}{r}$ it suffices to show that there is a finite limit $\mathop{\lim}\limits_{n\to\infty} \frac{d(y_n, p)}{r_n}$, which follows from statements (\emph{v}) and (\emph{vi}) of Proposition~\ref{p1.2.2}.
\end{proof}

\begin{lemma}\label{l1.2.5}
Let $(X, d)$ be an unbounded metric space and let $\tilde{r}$ be a scaling sequence. If $\tilde{x}$, $\tilde{y}$, $\tilde t \in \tilde{X}_{\infty}$ are such that $\tilde{x}$ and $\tilde{y}$ are mutually stable with respect to $\tilde{r}$ and $\tilde d_{\tilde{r}}(\tilde{x}, \tilde t)=0$, then $\tilde{y}$ and $\tilde t$ are mutually stable with respect to $\tilde{r}$.
\end{lemma}

\begin{proof}
The statement follows from the equality $\tilde d_{\tilde{r}}(\tilde{x}, \tilde t)=0$ and the inequalities
\begin{multline*}
\tilde d_{\tilde{r}}(\tilde{x}, \tilde{y}) - \tilde d_{\tilde{r}}(\tilde{x}, \tilde t) \leq \liminf_{n\to\infty} \frac{d(y_n,t_n)}{r_n} \\
\leq \limsup_{n\to\infty} \frac{d(y_n,t_n)}{r_n} \leq \tilde d_{\tilde{r}}(\tilde{x}, \tilde{y}) + \tilde d_{\tilde{r}}(\tilde{x}, \tilde t). \tag*{\qedhere}
\end{multline*}
\end{proof}

The set $\sstable{X}{r}^{0}$ is a common distinguished point of all pretangent spaces $\pretan{X}{r}$ (with a given scaling sequence $\tilde{r}$). We will consider the pretangent spaces to $(X,d)$ at infinity as the triples $(\pretan{X}{r}, \rho, \nu_{0})$, where $\rho$ is defined by~\eqref{e1.1.5} and $\nu_{0} := \sstable{X}{r}^{0}$. The point $\nu_{0}$ can be informally described as follows. The points of a pretangent space $\pretan{X}{r}$ are infinitely removed from the initial space $(X, d)$, but $\pretan{X}{r}$ contains a unique point $\nu_{0}$ which is as close to $(X, d)$ as possible.
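To illustrate the triple $(\pretan{X}{r}, \rho, \nu_{0})$, consider $X=\mathbb{R}$ with $d(x, y)=|x-y|$, $p=0$ and $r_n = n$. The sequences $\tilde{x}^{(a)} = (an)_{n\in\mathbb{N}}$ with $a \in \mathbb{R}\setminus\{0\}$, together with $\tilde{z} = (\sqrt{n})_{n\in\mathbb{N}}$, form a self-stable family, since
$$
\lim_{n\to\infty}\frac{|an - bn|}{n} = |a-b| \quad\mbox{and}\quad \lim_{n\to\infty}\frac{|an - \sqrt{n}|}{n} = |a|.
$$
One can verify that the maximal self-stable set containing this family yields a pretangent space which is isometric to $(\mathbb{R}, |\cdot|)$, with the root $\nu_{0}$ corresponding to the point $0$.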
\begin{example}\label{ex1.2.6}
Let $\seq{x_n}{n} \subset (0, \infty)$ be an increasing sequence and let $\tilde{r} = \seq{r_n}{n}$ be a scaling sequence such that
\begin{equation}\label{ex1.2.6e1}
\lim_{n\to\infty} \frac{x_{n+1}}{x_{n}} = \infty \quad \text{and} \quad r_n = \sqrt{x_n x_{n+1}}
\end{equation}
for every $n \in \mathbb{N}$. Define a metric space $(X, d)$ as
\begin{equation}\label{ex1.2.6e2}
X := \left(\bigcup_{n\in\mathbb{N}} \{x_n\}\right) \cup \{0\}
\end{equation}
and $d(x,y) := |x-y|$ for all $x$, $y \in X$. It follows from~\eqref{ex1.2.6e1} and \eqref{ex1.2.6e2} that, for every $n \in \mathbb{N}$, we have either
\begin{equation*}\label{ex1.2.6e3}
\frac{x}{r_n} \geq \sqrt{\frac{x_{n+1}}{x_n}}
\end{equation*}
if $x \in X \cap [x_{n+1}, \infty)$, or
\begin{equation*}\label{ex1.2.6e4}
\frac{x}{r_n} \leq \sqrt{\frac{x_n}{x_{n+1}}}
\end{equation*}
if $x \in X \cap [0, x_n]$. Consequently the equality
$$
\Dist[r]{y} = 0
$$
holds for every $\tilde{y} \in Seq(X, \tilde{r})$, i.e.,
$$
Seq(X, \tilde{r}) = \sstable{X}{r}^{0}.
$$
\end{example}

Concluding this introduction, we note that there exist other techniques which allow one to investigate the asymptotic properties of metric spaces at infinity. As examples, we mention only the Gromov product, which can be used to define a metric structure on the boundaries of hyperbolic spaces \cite{BS}, \cite{Sc}, the theory of balleans~\cite{PZ} and the Wijsman convergence \cite{LechLev}, \cite{Wijs64}, \cite{Wijs66}.

\section{The cluster of pretangent spaces}

In this section, using some elements of graph theory, we introduce the concept of a \emph{cluster of pretangent spaces} which will allow us to describe the relationships between these spaces.

Recall that a \emph{graph} $G$ is an ordered pair $(V, E)$ consisting of a nonempty set $V= V(G)$ and a set $E = E(G)$ of unordered pairs of distinct elements of $V(G)$. The elements of $V$ and $E$ are called the \emph{vertices} and, respectively, the \emph{edges} of $G$.
Thus all our graphs are \emph{simple} and \emph{loopless}. In what follows we mainly use the terminology from~\cite{BM}. In particular, we say that vertices $x$ and $y$ of a graph $G$ are \emph{adjacent} if $\{x, y\} \in E(G)$.

Let $(X,d)$ be an unbounded metric space and let $\tilde{r}$ be a scaling sequence. Let us consider the graph $G_{X, \tilde{r}}$ with the vertex set $V(G_{X, \tilde{r}})$ consisting of the equivalence classes generated by the relation $\equiv$ on $Seq(X, \tilde{r})$ (see \eqref{e1.1.4}) and the edge set $E(G_{X, \tilde{r}})$ defined by the rule:
$$
u, v \in V(G_{X, \tilde{r}}) \text{ are adjacent}\quad \text{if and only if} \quad u\ne v\quad\text{and}
$$
$$
\quad \text{the limit} \quad \lim_{n\to\infty} \frac{d(x_n, y_n)}{r_n} \quad \text{exists for} \quad \tilde{x}\in u \quad \text{and} \quad \tilde{y}\in v.
$$

Recall that a \emph{clique} in a graph $G = (V,E)$ is a set $C \subseteq V$ such that every two distinct vertices of $C$ are adjacent. A \emph{maximal clique} is a clique $C_1$ such that the inclusion
$$
C_1 \subseteq C
$$
implies the equality $C_1 = C$ for every clique $C$ in $G$.

\begin{theorem}\label{t1.4.1}
Let $(X,d)$ be an unbounded metric space and let $\tilde{r}$ be a scaling sequence. A set $C \subseteq V(G_{X, \tilde{r}})$ is a maximal clique in $G_{X, \tilde{r}}$ if and only if there is a pretangent space $(\pretan{X}{r}, \rho)$ such that $C=\pretan{X}{r}$.
\end{theorem}

\begin{proof}
Lemma~\ref{l1.2.4} and Lemma~\ref{l1.2.5} imply the equality
\begin{equation}\label{t1.4.1e1}
\{\tilde{x}\in\sstable{X}{r}:\tilde d_{\tilde{r}}(\tilde{x}, \tilde{y})=0\}=\{\tilde{x}\in Seq(X, \tilde{r}): \tilde d_{\tilde{r}}(\tilde{x}, \tilde{y})=0\}
\end{equation}
for every $\tilde{y}\in\sstable{X}{r}$ and every $\sstable{X}{r}\subseteq Seq(X, \tilde r).$ Since, for every $\tilde{y}\in Seq(X, \tilde{r}),$ there is $\sstable{X}{r}$ such that $\sstable{X}{r}\ni\tilde{y},$ equality \eqref{t1.4.1e1} implies
\begin{equation}\label{t1.4.1e2}
V(G_{X, \tilde{r}}) = \bigcup_{\pretan{X}{r} \in \mathbf{\pretan{X}{r}}}\pretan{X}{r},
\end{equation}
where $\mathbf{\pretan{X}{r}}$ is the set of all spaces which are pretangent to $X$ at infinity with respect to $\tilde{r}.$ Now the theorem follows from the definitions of the pretangent spaces and the maximal cliques.
\end{proof}

Theorem~\ref{t1.4.1} gives some grounds for calling the graph $G_{X, \tilde{r}}$ a \emph{cluster of pretangent spaces} to $(X, d)$ at infinity.

Recall that a vertex $v$ of a graph $G=(V, E)$ is \emph{dominating} if $\{u, v\}\in E$ holds for all $u\in V \setminus \{v\}$. Statement (\emph{vii}) of Proposition~\ref{p1.2.2} gives us the following fact.

\begin{proposition}\label{p1.4.2}
Let $(X, d)$ be an unbounded metric space and let $\tilde{r}$ be a scaling sequence. Then the vertex $\nu_{0}=\sstable{X}{r}^{0}$ is a dominating vertex of $G_{X, \tilde{r}}$.
\end{proposition}

If $G = (V, E)$ is a simple graph and $r \in V$ is a distinguished vertex of $G$, then we will say that $G$ is a \emph{rooted} graph with the \emph{root} $r$ and write $G = G(r)$. Now we recall the definition of isomorphic rooted graphs.

\begin{definition}\label{d1.4.3}
Let $G_1 = G_1(r_1)$ and $G_2 = G_2(r_2)$ be rooted graphs.
A bijection $f\colon V(G_1) \to V(G_2)$ is an \emph{isomorphism} of $G_1(r_1)$ and $G_2(r_2)$ if $f(r_1) = r_2$ and
\begin{equation}\label{e1.4.3}
(\{u,v\} \in E(G_1)) \Leftrightarrow (\{f(u), f(v)\} \in E(G_2))
\end{equation}
holds for all $u$, $v \in V(G_1)$. The rooted graphs $G_1$ and $G_2$ are \emph{isomorphic} if there exists an isomorphism $f\colon V(G_1) \to V(G_2)$.
\end{definition}

The isomorphism of rooted graphs is a special case of the graph homomorphisms, whose theory is a relatively new but very promising branch of graph theory; see the book of Pavel Hell and Jaroslav Ne\v{s}et\v{r}il~\cite{HN2004}.

If $(X,d)$ is an unbounded metric space and $\tilde{r}$ is a scaling sequence, then we will consider the cluster $G_{X, \tilde{r}}$ as a rooted graph with the root $\nu_0 = \sstable{X}{r}^0$ and write $G_{X, \tilde{r}} = G_{X, \tilde{r}} (\nu_0).$

\begin{problem}\label{pr1.4.4}
Describe the rooted graphs which are isomorphic to the rooted clusters of pretangent spaces.
\end{problem}

\begin{remark}\label{r1.4.5}
Using Proposition~\ref{p1.4.2} we can prove that if $T(r)$ is a nontrivial rooted tree and this tree is isomorphic to a rooted cluster $G_{X, \tilde{r}} (\nu_0)$, then $T(r)$ is a star. Thus the class of rooted clusters of pretangent spaces is a proper subclass of the class of all rooted graphs.
\end{remark}

The following notion, important for us, is that of a \emph{weighted} graph, i.e., a simple graph $G = (V, E)$ together with a weight $w\colon E \to \mathbb{R}^+$. Let us define a weight $\rho_X$ on the edge set of $G_{X, \tilde{r}}$ as:
\begin{equation}\label{e1.4.4}
\rho_X(\{u, v\}) := \dist{x}{y} = \lim_{n\to\infty} \frac{d(x_n, y_n)}{r_n}, \quad \{u, v\} \in E(G_{X, \tilde{r}}),
\end{equation}
where $\tilde{x} = \seq{x_n}{n} \in u$ and $\tilde{y} = \seq{y_n}{n} \in v$.
Since for every $\{u, v\} \in E(G)$ there is a pretangent space $(\pretan{X}{r}, \rho)$ such that $u$, $v \in \pretan{X}{r}$, we have
\begin{equation}\label{e1.4.5}
\rho_X(\{u, v\}) = \rho(u, v).
\end{equation}

\begin{definition}\label{d1.4.6}
Let $G_i = G_i(w_i, r_i)$ be weighted rooted graphs with the roots $r_i$ and the weights $w_{i} \colon E(G_i) \to \mathbb{R}^+$, $i = 1$, $2$. An isomorphism $f \colon V(G_1) \to V(G_2)$ of the rooted graphs $G_1(r_1)$ and $G_2(r_2)$ is an isomorphism of the weighted rooted graphs $G_1(w_1, r_1)$ and $G_2(w_2, r_2)$ if the equality
\begin{equation}\label{d1.4.6e1}
w_2(\{f(u), f(v)\}) = w_1(\{u, v\})
\end{equation}
holds for every $\{u, v\} \in E(G_1)$. Two weighted rooted graphs are isomorphic if there is an isomorphism of these graphs.
\end{definition}

\begin{problem}\label{pr1.4.7}
Describe the weighted rooted graphs which are isomorphic to the weighted rooted clusters of pretangent spaces.
\end{problem}

Problem~\ref{pr1.4.4}, which was formulated above, is a weak version of Problem~\ref{pr1.4.7}. For finite graphs, both of these problems will be solved in the next section of the paper. The solution of these problems is based on the following fact: ``The weighted clusters $G_{X, \tilde r}(\rho_X)$ are metrizable''.

Recall that a weighted graph $G(w)$ is \emph{metrizable} if there is a metric $\delta \colon V(G) \times V(G) \to \mathbb{R}^+$ such that the equality
\begin{equation}\label{e1.4.7}
\delta(u, v) = w(\{u, v\})
\end{equation}
holds for every $\{u,v\} \in E(G)$. Similarly, $G(w)$ is \emph{pseudometrizable} if there is a pseudometric $\delta \colon V(G) \times V(G) \to \mathbb{R}^+$ such that~\eqref{e1.4.7} holds for every $\{u,v\} \in E(G)$. In this case we say that $G(w)$ is metrizable (pseudometrizable) by the metric (pseudometric) $\delta$.

Let $G(w)$ be a connected weighted graph and let $u$, $v$ be distinct vertices of $G$. Let us denote by $\mathcal{P}_{u, v}$ the set of all paths joining $u$ and $v$ in $G$.
Write
\begin{equation}\label{e1.4.8}
d_{w}^* (u, v) := \inf\left\{w(P) \colon P \in \mathcal{P}_{u, v}\right\},
\end{equation}
where $w(P) := \sum_{e \in P} w(e)$. The function $d_w^*$ is a pseudometric on the set $V(G)$ if we define $d_w^*(u, u) = 0$ for each $u \in V(G)$. This pseudometric will be termed the \emph{weighted shortest-path pseudometric}. It coincides with the usual path metric if $w(e) = 1$ for every $e \in E(G)$.

The following lemma is a simplified version of Proposition~2.1 from~\cite{DMV}.

\begin{lemma}\label{l1.4.12}
Let $G = G(w)$ be a connected weighted graph. The following statements are equivalent.
\begin{enumerate}
\item\label{l1.4.12s1} The graph $G(w)$ is pseudometrizable.
\item\label{l1.4.12s2} The graph $G(w)$ is pseudometrizable by $d_w^*$.
\end{enumerate}
\end{lemma}

The next lemma follows directly from Lemma~\ref{l1.4.12}, the triangle inequality and the definition of the shortest-path pseudometric.

\begin{lemma}\label{l1.4.13}
Let $G = G(w)$ be a connected weighted graph. If $G(w)$ is metrizable, then the shortest-path pseudometric $d_w^*$ is a metric and, moreover, if $\delta$ is a metric on $V(G)$ satisfying~\eqref{e1.4.7} for every $\{u, v\} \in E(G)$, then
$$
\delta(u, v) \leq d^*_w (u,v)
$$
holds for all $u$, $v \in V(G)$.
\end{lemma}

\begin{proposition}\label{p1.4.14}
Let $(X,d)$ be an unbounded metric space and let $\tilde{r}$ be a scaling sequence. Then the shortest-path pseudometric $d_{\rho_X}^*$ is a metric and the weighted cluster $G_{X, \tilde{r}}(\rho_X)$ is metrizable by $d_{\rho_X}^*$.
\end{proposition}

\begin{proof}
Lemma~\ref{l1.4.13} and Lemma~\ref{l1.4.12} imply that the shortest-path pseudometric $d_{\rho_X}^*$ is a metric if $G_{X, \tilde{r}} (\rho_X)$ is metrizable. Thus, it suffices to show that $G_{X, \tilde{r}} (\rho_X)$ is metrizable.
For all $u$, $v \in V(G_{X, \tilde{r}})$, write
\begin{equation}\label{p1.4.14e2}
\Delta(u, v) := \limsup_{n\to\infty} \frac{d(x_n, y_n)}{r_n},
\end{equation}
where $\seq{x_n}{n} \in u$ and $\seq{y_n}{n} \in v$. It follows directly from the definitions of $G_{X, \tilde{r}}$ and $\rho_X$ that
\begin{equation}\label{p1.4.14e3}
\Delta(u ,v) = \rho_X (\{u, v\})
\end{equation}
holds for every $\{u, v\} \in E(G_{X, \tilde{r}})$. As in \eqref{e1.1.5} we can see that $\Delta$ is well-defined on $V(G_{X, \tilde{r}}) \times V(G_{X, \tilde{r}})$.

We claim that $\Delta$ is a metric on $V(G_{X, \tilde{r}})$. The inequalities
$$
0 \leq \limsup_{n\to\infty} \frac{d(x_n, y_n)}{r_n} \leq \limsup_{n\to\infty} \frac{d(x_n, z_n)}{r_n} + \limsup_{n\to\infty} \frac{d(z_n, y_n)}{r_n}
$$
hold for all $\tilde{x}$, $\tilde{y}$, $\tilde{z} \in Seq(X, \tilde{r})$. It is clear that $\Delta(u, u) = 0$ for every $u \in V(G_{X, \tilde{r}})$ and $\Delta(u, v) = \Delta(v, u)$ for all $u$, $v \in V(G_{X, \tilde{r}})$. Hence, $\Delta$ is a pseudometric. Consequently, $\Delta$ is a metric if
$$
(\Delta(u, v) = 0) \Rightarrow (u = v)
$$
holds for all $u$, $v \in V(G_{X, \tilde{r}})$. Let $\Delta(u, v) = 0$ hold. Then from~\eqref{p1.4.14e2} it follows that
$$
\lim_{n\to\infty} \frac{d(x_n, y_n)}{r_n} = 0
$$
for $\tilde{x} \in u$ and $\tilde{y} \in v$. Thus $\tilde{x} \equiv \tilde{y}$ holds (see~\eqref{e1.1.4}), which implies $u = v$.
\end{proof}

\section{The metric spaces with finite clusters of pretangent spaces}

In this section we describe the unbounded metric spaces $(X,d)$ having finite clusters $G_{X, \tilde{r}}$ for every scaling sequence $\tilde{r}$.

Let $p$ be a point of a metric space $(X, d)$. Denote
$$
A(p, r,k) := \left\{x \in X\colon \frac{r}{k} \leq d(x,p) \leq rk\right\} \text{ and } S(p, r) := \left\{x \in X\colon d(x,p) = r\right\}
$$
for $r > 0$ and $k \geq 1$. The set $S(p, r)$ is the sphere in $(X,d)$ with the radius~$r$ and the center~$p$.
Analogously, we can consider $A(p,r,k)$ as an annulus in $(X,d)$ ``bounded'' by the concentric spheres $S(p, rk)$ and $S(p, \frac{r}{k})$. In particular, the annulus $A(p,r,1)$ coincides with the sphere $S(p,r)$.

\begin{theorem}\label{t2.2.1}
Let $(X, d)$ be an unbounded metric space, $p \in X$, and let $n \geq 2$ be an integer number. Then the inequality
\begin{equation}\label{t2.2.1e1}
|V(G_{X, \tilde{r}})| \leq n
\end{equation}
holds for every scaling sequence $\tilde{r}$ if and only if
\begin{equation}\label{t2.2.1e2}
\lim_{n\to\infty} F_n(x_1, \ldots, x_n) = 0
\end{equation}
and
\begin{equation}\label{t2.2.1e3}
\lim_{k \to 1} \lim_{r\to\infty} \frac{\diam(A(p,r,k))}{r} = \lim_{r\to\infty} \frac{\diam(S(p,r))}{r} = 0,
\end{equation}
where $r \in (0, \infty)$ and $k \in [1, \infty)$ and the function $F_n \colon X^n \to \mathbb{R}$ is defined as
\begin{equation}\label{t2.2.1e4}
F_n(x_1, \ldots, x_n) := \dfrac{\min\limits_{1\le k\le n} d(x_k, p) \prod\limits_{1\le k<l\le n} d(x_k, x_l)}{\left(\max\limits_{1\le k\le n}d(x_k, p)\right)^{\frac{n(n-1)}{2}+1}}
\end{equation}
if $(x_1, \ldots, x_n) \neq (p, \ldots, p)$ and $F_n(p, \ldots, p) := 0$.
\end{theorem}

\begin{remark}\label{r2.2.2}
Condition~\eqref{t2.2.1e3} means that the function $\Psi \colon [1, \infty) \to \mathbb{R}^+$,
$$
\Psi(k) := \limsup_{r\to\infty} \frac{\diam(A(p, r,k))}{r},
$$
is continuous at the point $1$ and $\Psi(1) = 0$ holds.
\end{remark}

\begin{remark}\label{r2.2.3}
The annulus $A(p, r, k)$ can be empty. In this case we use the convention
$$
\diam A(p, r,k) = \diam(\varnothing) = 0.
$$
\end{remark}

In order to prove Theorem~\ref{t2.2.1}, it is necessary to find a connection between conditions~\eqref{t2.2.1e2} and \eqref{t2.2.1e3} and the structure of the weighted rooted cluster $G_{X, \tilde{r}}(\rho_X, \nu_0)$. Theorem~\ref{t1.4.1} and Theorem~4.3 from \cite{BDnew} imply the following lemma.
\begin{lemma}\label{l2.3.5n}
Let $(X, d)$ be an unbounded metric space and let $n\ge 2$ be an integer number. The following statements are equivalent.
\begin{enumerate}
\item\label{l2.3.5ns1} The inequality $|C|\le n$ holds for every clique $C$ of each cluster $G_{X, \tilde r}.$
\item\label{l2.3.5ns2} Limit relation \eqref{t2.2.1e2} holds for the function $F_{n}$ defined by equality \eqref{t2.2.1e4}.
\end{enumerate}
\end{lemma}

Recall that, for given $(X, d)$ and $\tilde{r}$, the weight $\rho_X$ is defined as
$$
\rho_X (\{u ,v\}) := \lim_{n\to\infty} \frac{d(x_n, y_n)}{r_n},
$$
where $\{u ,v\} \in E(G_{X, \tilde{r}})$, $\seq{x_n}{n} \in u$ and $\seq{y_n}{n} \in v$ (see~\eqref{e1.4.4}). Now we define the \emph{labeling} $\rho^0 \colon V(G_{X, \tilde{r}}) \to \mathbb{R}^+$,
\begin{equation}\label{e2.2.4}
\rho^0(v) :=
\begin{cases}
0 & \text{if } v = \nu_0,\\
\rho_X(\{\nu_0, v\}) & \text{if } v \neq \nu_0,
\end{cases}
\end{equation}
where $\nu_0 = \sstable{X}{r}^{0}$ is the root of the cluster $G_{X, \tilde{r}}$. By Proposition~\ref{p1.4.2}, $\nu_0$ is a dominating vertex of $G_{X, \tilde{r}}$. Hence $\rho^0$ is a well-defined function on $V(G_{X, \tilde{r}})$.

Recall also that an \emph{independent} set $I$ in a graph $G$ is a subset of $V(G)$ such that no two vertices of $I$ are joined by an edge. The following lemma is an expanded version of Theorem~4.5 from \cite{BDnew}.

\begin{lemma}\label{l2.2.4}
Let $(X,d)$ be an unbounded metric space and $p \in X$. Then condition~\eqref{t2.2.1e3} from Theorem~\ref{t2.2.1} holds if and only if the labeling $\rho^0 \colon V(G_{X, \tilde{r}}) \to \mathbb{R}^+$ is an injective function on $V(G_{X, \tilde{r}})$ for every~$\tilde{r}$.
Moreover, if, for a given $\tilde{r}$, there are two distinct vertices $\nu_1$, $\nu_2 \in V(G_{X, \tilde{r}})$ and $c\in\mathbb R^{+}$ with
$$
\rho^0(\nu_1) = \rho^0(\nu_2)=c,
$$
then there exists an independent set $I \subseteq V(G_{X, \tilde{r}})$ having the cardinality of the continuum, $|I| = \mathfrak{c},$ such that
\begin{equation}\label{l2.2.4e1}
\rho^0(v) = c
\end{equation}
holds for every $v \in I.$
\end{lemma}

\begin{proof}
Suppose condition~\eqref{t2.2.1e3} holds but there are a scaling sequence $\tilde{r}$, vertices $\nu_1$, $\nu_2 \in V(G_{X, \tilde{r}})$ and $c \in \mathbb{R}^+$ such that $\nu_1 \neq \nu_2$ and
\begin{equation}\label{vspom}
\rho^0(\nu_1) = \rho^0(\nu_2) = c.
\end{equation}
Let $\seq{x_n^1}{n} \in \nu_1$ and $\seq{x_n^2}{n} \in \nu_2$. If $c = 0$, then we have
$$
\rho^0(\nu_1) = \lim_{n\to\infty} \frac{d(x_n^1, p)}{r_n} = \lim_{n\to\infty} \frac{d(x_n^2, p)}{r_n} = \rho^0(\nu_2) = 0.
$$
Consequently, by the definition of $\sstable{X}{r}^{0}$, the statements
$$
\seq{x_n^1}{n} \in \sstable{X}{r}^{0} \quad \text{and}\quad \seq{x_n^2}{n} \in \sstable{X}{r}^{0}
$$
hold. Thus, $\nu_1 = \nu_2$, which contradicts $\nu_1 \neq \nu_2$.

Assume $c > 0.$ Note that $\nu_1 \neq \nu_2$ holds if and only if there is $c_1 > 0$ such that
\begin{equation}\label{l2.2.4e3}
\limsup_{n\to\infty} \frac{d(x_n^1, x_n^2)}{r_n} = c_1.
\end{equation}
Without loss of generality we may suppose that
$$
\min\{d(x_n^1, p), d(x_n^2, p)\} > 0
$$
holds for every $n \in \mathbb{N}$. Write, for $n \in \mathbb{N}$,
$$
R_n := \bigl(\max\{d(x_n^1, p), d(x_n^2, p)\} \cdot \min\{d(x_n^1, p), d(x_n^2, p)\}\bigr)^{1/2}
$$
and
$$
k_n := \left(\frac{\max\{d(x_n^1, p), d(x_n^2, p)\}}{\min\{d(x_n^1, p), d(x_n^2, p)\}}\right)^{1/2}.
$$
From \eqref{vspom} it follows that
\begin{equation}\label{l2.2.4e4}
\lim_{n\to\infty} \frac{R_n}{r_n} = c
\end{equation}
and $\mathop{\lim}\limits_{n\to\infty} k_n = 1$.
Since we have
$$
R_n\cdot k_n = \max\{d(x_n^1, p), d(x_n^2, p)\} \quad\mbox{and}\quad R_n\cdot k_n^{-1} = \min\{d(x_n^1, p), d(x_n^2, p)\},
$$
the annulus $A(p, R_n, k_n)$ contains the points $x_n^1$ and $x_n^2$ for every $n \in \mathbb{N}$. It follows from $x_n^1$, $x_n^2 \in A(p, R_n, k_n)$, $\mathop{\lim}\limits_{n\to\infty} k_n = 1$, \eqref{l2.2.4e3} and \eqref{l2.2.4e4} that $\mathop{\lim}\limits_{n\to\infty} R_n = \infty$ and, for every $k > 1$,
\begin{multline*}
\limsup_{r\to\infty} \frac{\diam(A(p, r, k))}{r} \geq \limsup_{n\to\infty} \frac{\diam(A(p, R_n, k_n))}{R_n} \\
\geq \limsup_{n\to\infty} \frac{d(x_n^1, x_n^2)}{r_n} \frac{r_n}{R_n} = \frac{c_1}{c} > 0,
\end{multline*}
contrary to~\eqref{t2.2.1e3}. Hence condition~\eqref{t2.2.1e3} implies the injectivity of $\rho^0$.

Suppose now that the labeling $\rho^0$ is injective but condition~\eqref{t2.2.1e3} does not hold. Let us consider the function $\Psi \colon [1, \infty) \to \mathbb{R}$,
$$
\Psi(k) := \limsup_{r\to\infty} \frac{\diam(A(p,r,k))}{r}
$$
(see Remark~\ref{r2.2.3}). It is easy to see that $\Psi$ is increasing and $\Psi(k) \leq 2k$ holds for every $k \in [1, \infty)$. Consequently, there is a finite limit
$$
\lim_{\substack{k \to 1\\k \in (1, \infty)}} \Psi(k) := b \leq 2.
$$
Moreover, condition~\eqref{t2.2.1e3} does not hold if and only if $b > 0$. Let $\seq{k_n}{n} \subset (1, \infty)$ be a decreasing sequence such that
\begin{equation}\label{l2.2.4e5}
\lim_{n\to\infty} k_n = 1
\end{equation}
and let $b_1 \in (0, b)$. Then there are sequences $\tilde{x}$, $\tilde{y} \subset X$ and a sequence $\tilde{r} \subset (0, \infty)$ such that $\mathop{\lim}\limits_{n\to\infty} r_n = \infty$ and
\begin{equation}\label{l2.2.4e6}
x_n, y_n \in A(p,r_n,k_n)
\end{equation}
and
\begin{equation}\label{l2.2.4e7}
2 k_n \geq \frac{d(x_n, y_n)}{r_n} \geq b_1
\end{equation}
hold for every $n \in \mathbb{N}$.
Statement~\eqref{l2.2.4e6} implies the inequalities
\begin{equation}\label{l2.2.4e8}
\frac{1}{k_n} \leq \frac{d(p, x_n)}{r_n} \leq k_n \text{ and } \frac{1}{k_n} \leq \frac{d(p, y_n)}{r_n} \leq k_n
\end{equation}
for every $n$. Using~\eqref{l2.2.4e8} and \eqref{l2.2.4e5} we obtain
$$
\Dist[r]{x} = \lim_{n\to\infty} \frac{d(x_{n}, p)}{r_{n}} = \lim_{n\to\infty} \frac{d(y_{n}, p)}{r_{n}} = \Dist[r]{y} = 1
$$
and
$$
\limsup_{n\to\infty} \frac{d(x_{n}, y_{n})}{r_{n}} \geq b_1 > 0.
$$
Hence the labeling $\rho^0 \colon V(G_{X, \tilde{r}}) \to \mathbb{R}^+$ is not injective, contrary to our supposition.

It still remains to find an independent set $I \subseteq V(G_{X, \tilde{r}})$ with $|I| = \mathfrak{c}$ for $G_{X, \tilde{r}}$ having a non-injective labeling $\rho^0 \colon V(G_{X, \tilde{r}}) \to \mathbb{R}^+$. Suppose there exist $\tilde{r}$ and $\nu_1, \nu_2\in V(G_{X, \tilde{r}})$ such that $\nu_1\ne\nu_2$ and $\rho^0(\nu_1) = \rho^0(\nu_2)$. Let $\tilde{x}^{1} = (x_{n}^{1})_{n\in\mathbb{N}} \in \nu_{1}$ and $\tilde{x}^{2} = (x_{n}^{2})_{n\in\mathbb{N}}\in\nu_{2}$. Then we have
\begin{equation}\label{l2.2.4e9}
\lim_{n\to\infty}\frac{d(x_{n}^{1}, p)}{r_n}=\lim_{n\to\infty} \frac{d(x_{n}^{2}, p)}{r_n}>0
\end{equation}
and
\begin{equation*}
\infty>\limsup_{n\to\infty}\frac{d(x_{n}^{1}, x_{n}^{2})}{r_n}>0.
\end{equation*}
Let $\mathbb{N}_e$ be an infinite subset of $\mathbb{N}$ such that $\mathbb{N} \setminus \mathbb{N}_e$ is also infinite and
\begin{equation}\label{l2.2.4e10}
\limsup_{n\to\infty} \frac{d(x_n^{1}, x_n^{2})}{r_n} = \lim_{\substack{n\to\infty\\ n \in \mathbb{N}_e}} \frac{d(x_{n}^{1}, x_{n}^{2})}{r_n}.
\end{equation}
We can consider a relation $\asymp$ on the set $2^{\mathbb{N}_{e}}$ of all subsets of $\mathbb{N}_{e}$ defined by the rule: $A\asymp B$ if and only if the set
$$
A\bigtriangleup B=(A\setminus B)\cup (B\setminus A)
$$
is finite, $|A\bigtriangleup B|<\infty$. It is clear that $\asymp$ is reflexive and symmetric.
Since for all $A, B, C\subseteq\mathbb{N}_{e}$ we have
$$
A\bigtriangleup C\subseteq (A\bigtriangleup B)\cup (B\bigtriangleup C),
$$
the relation $\asymp$ is transitive. Thus $\asymp$ is an equivalence on $2^{\mathbb{N}_{e}}$. If $A\subseteq\mathbb{N}_{e}$, then for every $B\subseteq\mathbb{N}_{e}$ we have
\begin{equation}\label{l2.2.4e11}
B=(B\setminus A)\cup (A\setminus (A\setminus B)).
\end{equation}
For every $A\subseteq\mathbb{N}_{e}$ write
$$
[A] := \{B\subseteq\mathbb{N}_{e}: B\asymp A\}.
$$
The set of all finite subsets of $\mathbb{N}_{e}$ is countable. Consequently equality \eqref{l2.2.4e11} implies $\bigl|[A]\bigr| = \aleph_0$ for every $A\subseteq\mathbb{N}_{e}$. Hence we have
\begin{equation}\label{l2.2.4e12}
\bigl|\{[A]: A\subseteq\mathbb{N}_{e}\}\bigr| = \bigl|2^{\mathbb{N}_{e}}\bigr| = \mathfrak{c}.
\end{equation}
Let $\mathbf{\mathcal{N}}\subseteq 2^{\mathbb{N}_{e}}$ be a set such that:
\begin{itemize}
\item for every $A\subseteq\mathbb{N}_{e}$ there is $N\in\mathbf{\mathcal{N}}$ with $A\asymp N$;
\item the implication
\begin{equation}\label{l2.2.4e13}
(N_1\asymp N_2)\Rightarrow (N_1 = N_2)
\end{equation}
holds for all $N_1, N_2\in\mathbf{\mathcal{N}}$.
\end{itemize}
It follows from \eqref{l2.2.4e12} that $|\mathbf{\mathcal{N}}|=\mathfrak{c}$. For every $N \in \mathbf{\mathcal{N}}$ define the sequence $\tilde{x}(N) = (x_{n}(N))_{n\in\mathbb{N}}$ as
\begin{equation}\label{l2.2.4e14}
x_{n}(N):=
\begin{cases}
x_{n}^{1}& \mbox{if } n\in N\\
x_{n}^{2}& \mbox{if } n\in \mathbb{N} \setminus N.
\end{cases}
\end{equation}
Recall that $(x_{n}^{1})_{n\in\mathbb{N}}$, $(x_{n}^{2})_{n\in\mathbb{N}}\in Seq(X, \tilde{r})$ satisfy \eqref{l2.2.4e9} and \eqref{l2.2.4e10}. It follows from \eqref{l2.2.4e9} and \eqref{l2.2.4e10} that
\begin{equation}\label{l2.2.4e15}
\lim_{n\to\infty}\frac{d(x_{n}(N), p)}{r_n}=\tilde{\tilde d}_{\tilde{r}}(\tilde{x}^{1})=\tilde{\tilde d}_{\tilde{r}}(\tilde{x}^{2})
\end{equation}
for every $N\in\mathbf{\mathcal{N}}$. Thus $\tilde{x}(N)\in Seq(X, \tilde{r})$.
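The effect of the mixing rule \eqref{l2.2.4e14} can be reproduced in a toy computation: two index sets with infinite symmetric difference produce mixed sequences whose normalized distance oscillates between $0$ and a positive value, which is exactly the behaviour recorded in \eqref{l2.2.4e16} below. The following sketch is our own illustration; the ambient space $(\mathbb{R}, |\cdot|)$, the points $x_n^1 = r_n$, $x_n^2 = 2r_n$ with $p = 0$, and the index sets are all hypothetical choices, not data from the proof.

```python
def x_of(N, n, r):
    # Mixing rule of the construction: follow the first sequence on N and
    # the second one off N; here (illustrative choice) x_n^1 = r_n and
    # x_n^2 = 2*r_n are points of (R, |.|) with base point p = 0.
    return r(n) if n in N else 2 * r(n)

r = lambda n: 2.0 ** n                    # a scaling sequence with r_n -> infinity
N1 = frozenset(range(0, 200, 2))          # hypothetical index set: even numbers
N2 = frozenset(range(0, 200, 3))          # hypothetical index set: multiples of 3

# Normalized distances d(x_n(N1), x_n(N2)) / r_n for n = 0, ..., 199.
ratios = [abs(x_of(N1, n, r) - x_of(N2, n, r)) / r(n) for n in range(200)]

# On the symmetric difference N1 /\ N2 the mixed sequences disagree and the
# ratio equals 1; on the rest of the indices they coincide and the ratio is 0.
assert max(ratios) == 1.0
assert min(ratios) == 0.0
```

Since both index sets visited above have infinite symmetric difference and infinite agreement, the two extreme values $0$ and $1$ each occur for infinitely many $n$, mirroring the positive limsup and zero liminf of the normalized distance.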
Let $N_1$ and $N_2$ be distinct elements of $\mathbf{\mathcal{N}}$. Then, by~\eqref{l2.2.4e14}, the equality
$$
d(x_{n}(N_1), x_{n}(N_2))=d(x_{n}^{1}, x_{n}^{2})
$$
holds for every $n\in N_{1}\bigtriangleup N_{2}$. Using \eqref{l2.2.4e10} and the definition of $\asymp$ we see that the set $N_{1}\bigtriangleup N_{2}$ is infinite for all distinct $N_{1}, N_{2} \in \mathbf{\mathcal{N}}$. Consequently, we have
\begin{equation}\label{l2.2.4e16}
\limsup_{n\to\infty}\frac{d(x_{n}(N_1), x_{n}(N_2))}{r_n}>0\quad\text{and}\quad\liminf_{n\to\infty}\frac{d(x_{n}(N_1), x_{n}(N_2))}{r_n}=0.
\end{equation}
For every $N \in \mathbf{\mathcal{N}}$ we write
\begin{equation}\label{l2.2.4e17}
\nu_{N} := \{\tilde{x} \in Seq(X, \tilde{r})\colon \tilde d_{\tilde{r}}(\tilde{x}, \tilde{x}(N))=0\}.
\end{equation}
The first inequality in \eqref{l2.2.4e16} implies $\nu_{N_1} \neq \nu_{N_2}$ if $N_1\ne N_2$. Moreover, since the limit superior and the limit inferior in \eqref{l2.2.4e16} are different, the sequences $\tilde{x}(N_1)$ and $\tilde{x}(N_2)$ are not mutually stable, so that $\nu_{N_1}$ and $\nu_{N_2}$ are non adjacent in $G_{X, \tilde{r}}$. Consequently
$$
I = \{\nu_N \colon N \in \mathbf{\mathcal{N}}\}
$$
is an independent set in $G_{X, \tilde{r}}$ and $|I| = \mathfrak{c}$ holds. To complete the proof note that~\eqref{l2.2.4e15} implies~\eqref{l2.2.4e1} for every $v \in I$.
\end{proof}
\begin{remark}\label{r2.2.5}
The existence of continuum many sets $A_\gamma \subseteq \mathbb{N}$ satisfying, for all distinct $\gamma_1$ and $\gamma_2$, the equalities
$$
|A_{\gamma_1}\setminus A_{\gamma_2}| = |A_{\gamma_2}\setminus A_{\gamma_1}| = |A_{\gamma_1}\cap A_{\gamma_2}| = \aleph_0
$$
is well known. (See, for example, Problem 41 of Chapter 4 in~\cite{KT}.)
\end{remark}
Now using Lemma~\ref{l2.3.5n} and Lemma~\ref{l2.2.4}, we can reformulate Theorem~\ref{t2.2.1} as follows.
\begin{theorem}\label{t2.2.6}
Let $(X, d)$ be an unbounded metric space and let $n \geq 2$ be an integer. Then the inequality
\begin{equation}\label{t2.2.6e1}
\left|V(G_{X, \tilde{r}})\right| \leq n
\end{equation}
holds for every $\tilde{r}$ if and only if the following statements are valid for every $\tilde{r}$.
\begin{enumerate}
\item The inequality
\begin{equation}\label{t2.2.6e2}
\left|C\right| \leq n
\end{equation}
holds for all cliques $C \subseteq V(G_{X, \tilde{r}})$.
\item The labeling $\rho^0 \colon V(G_{X, \tilde{r}}) \to \mathbb{R}^+$ is injective.
\end{enumerate}
\end{theorem}
\begin{proof}
Let inequality~\eqref{t2.2.6e1} hold for every $\tilde{r}$. Then the inclusion $C \subseteq V(G_{X, \tilde{r}})$ implies~\eqref{t2.2.6e2}. The injectivity of $\rho^0$ follows from Lemma~\ref{l2.2.4}.
Conversely, suppose that, for every $\tilde{r}$, inequality~\eqref{t2.2.6e2} holds for all cliques $C \subseteq V(G_{X, \tilde{r}})$ and $\rho^0 \colon V(G_{X, \tilde{r}}) \to \mathbb{R}^+$ is injective. Assume also that there is a scaling sequence $\tilde{r}_{1}=(r_{m}^{1})_{m\in\mathbb N}$ for which
$$
\left|V(G_{X, \tilde{r}_{1}})\right|\ge n+1.
$$
Then we can find
$$\tilde{x}_{0}, \tilde{x}_{1}, \ldots, \tilde{x}_{n}\in Seq(X, \tilde{r}_{1}),\quad \tilde x_{i}=(x_{m}^{i})_{m\in\mathbb N},\ i=0, \ldots, n,$$
such that
\begin{equation}\label{t2.2.6e3}
0 = \tilde{\tilde d}_{\tilde{r}_{1}}(\tilde{x}_{0}) < \tilde{\tilde d}_{\tilde{r}_{1}}(\tilde{x}_{1}) < \ldots < \tilde{\tilde d}_{\tilde{r}_{1}}(\tilde{x}_{n}) < \infty.
\end{equation}
There is an infinite subsequence $\tilde{r}_{1}'=(r_{m_k}^{1})_{k\in\mathbb N}$ of the sequence $\tilde{r}_{1}$ such that the set
$$\{\tilde{x}'_{0}, \tilde{x}'_{1}, \ldots, \tilde{x}'_{n}\},\quad \tilde x_{i}'=(x_{m_k}^{i})_{k\in\mathbb N},\ i=0, \ldots, n,$$
is self-stable. Write $\nu_{i} := \pi(\tilde{x}'_{i})$, $i = 0$, $\ldots$, $n$, where $\pi \colon Seq(X, \tilde{r}_{1}') \to V(G_{X, \tilde{r}_1'})$ is the natural projection
$$\pi(\tilde x)=\{\tilde y\in Seq(X, \tilde r_{1}'): \tilde d_{\tilde r_{1}'}(\tilde x, \tilde y)=0\}.$$
Now \eqref{t2.2.6e3} implies that
$$
0 = \rho(\nu_{0}, \nu_{0}) < \rho(\nu_{0}, \nu_{1}) < \ldots<\rho(\nu_{0}, \nu_{n}).
$$
Consequently $\{\nu_{0}, \ldots, \nu_{n}\}$ is a clique in $G_{X, \tilde{r}_1'}$, which contradicts \eqref{t2.2.6e2}.
\end{proof}
\section{Structural characterization of finite $G_{X, \tilde r}$}
Our next goal is a structural characterization of the finite, weighted, rooted graphs which are isomorphic to the weighted rooted clusters of pretangent spaces. This characterization will be based on the concept of a \emph{cycle}. Recall that a graph $C$ is a \emph{subgraph} of the graph $G$, $C \subseteq G$, if
$$
V(C) \subseteq V(G) \quad \text{and} \quad E(C) \subseteq E(G).
$$
A finite graph $C$ is a \emph{cycle} in a graph $G$ if $C \subseteq G$ and $|V(C)| \geq 3$ and there exists a numbering $(v_1, \ldots, v_n)$ of $V(C)$ such that
\begin{equation}\label{e2.2.28}
(\{v_i, v_j\} \in E(C)) \Leftrightarrow (|i-j|=1 \text{ or } |i-j| = n-1).
\end{equation}
For a weighted graph $G = G(w)$, the \emph{length} of a cycle $C \subseteq G$ is defined as
\begin{equation}\label{e2.2.29}
w(C) := \sum_{e \in E(C)} w(e).
\end{equation}
If $(v_1, \ldots, v_n)$ is a numbering of $V(C)$ for which~\eqref{e2.2.28} holds, then we have
\begin{equation}\label{e2.2.30}
w(C) = w(\{v_n, v_1\}) + \sum_{i=1}^{n-1} w(\{v_i, v_{i+1}\}).
\end{equation}
We need several lemmas.
\begin{lemma}\label{l2.2.11}
Let $G = G(w)$ be a finite, connected, weighted graph with the weight $w$ satisfying the inequality $w(e) > 0$ $(w(e)\ge 0)$ for every $e \in E(G)$. Then $G(w)$ is metrizable (pseudometrizable) if and only if the inequality
\begin{equation}\label{l2.2.11e1}
2 \max_{e \in E(C)} w(e) \leq \sum_{e \in E(C)} w(e)
\end{equation}
holds for every cycle $C \subseteq G$.
\end{lemma}
The proof can be found in \cite[Proposition~2.1]{DMV}.
Let $(X, d)$ be an unbounded metric space, $\tilde{r}=(r_n)_{n\in\mathbb N}$ be a scaling sequence and $\tilde{r}'=(r_{n_k})_{k\in\mathbb N}$ be a subsequence of $\tilde{r}$.
Denote by $\Phi_{\tilde{r}'}$ the mapping from $Seq(X, \tilde{r})$ to $Seq(X, \tilde{r}')$ with
$$\Phi_{\tilde{r}'} (\tilde{x}) = \tilde{x}'=(x_{n_k})_{k\in\mathbb N}, \, \tilde{x}=(x_n)_{n\in\mathbb N}.$$
It is clear that
$$
(\dist{x}{y} = 0) \Rightarrow (\dist[r']{x'}{y'} = 0)
$$
and $\Dist{x} = \Dist[r']{x'}$ for every $\tilde{x} \in Seq(X, \tilde{r})$. Consequently, there is a mapping
$$
Em' \colon V(G_{X, \tilde{r}}) \to V(G_{X, \tilde{r}'})
$$
such that the diagram
\begin{equation}\label{e2.2.37}
\ctdiagram{
\ctv -50, 40: {Seq(X, \tilde{r})}
\ctv 50, 40: {Seq(X, \tilde{r}')}
\ctv -50,-40: {V(G_{X, \tilde{r}})}
\ctv 50,-40: {V(G_{X, \tilde{r}'})}
\ctet -50, 40,50, 40:{\Phi_{\tilde{r}'}}
\ctel -50, 40,-50,-40:{\pi_{\tilde{r}}}
\cter 50, 40, 50,-40: {\pi_{\tilde{r}'}}
\ctet -50,-40, 50,-40: {Em'}
}
\end{equation}
is commutative, where $\pi_{\tilde{r}}$ and $\pi_{\tilde{r}'}$ are the natural projections,
$$
\pi_{\tilde{r}} (\tilde{x}) := \{\tilde{z} \in Seq(X, \tilde{r}) \colon \dist{x}{z} = 0\}
$$
and
$$
\pi_{\tilde{r}'} (\tilde{y}) := \{\tilde{z} \in Seq(X, \tilde{r}') \colon \dist[r']{y}{z} = 0\}.
$$
Let us recall the following important definition.
\begin{definition}\label{d2.2.13}
Let $G_i = G_i(w_i, r_i)$ be weighted rooted graphs with the roots $r_i$ and the weights $w_i \colon E(G_i) \to \mathbb{R}^{+}$, $i = 1$, $2$. A mapping
$$
f \colon V(G_1) \to V(G_2)
$$
is a \emph{weight preserving homomorphism} of $G_1(w_1, r_1)$ and $G_2(w_2, r_2)$ if the following statements hold:
\begin{itemize}
\item $f(r_1) = r_2$;
\item $\{f(u), f(v)\} \in E(G_2)$ whenever $\{u, v\} \in E(G_1)$;
\item $w_2(\{f(u), f(v)\}) = w_1(\{u, v\})$ for every $\{u, v\} \in E(G_1)$.
\end{itemize}
A \emph{weight preserving monomorphism} of the graphs $G_1(w_1, r_1)$ and $G_2(w_2, r_2)$ is an injective weight preserving homomorphism of these graphs.
\end{definition}
Let $(X, d)$ be an unbounded metric space, $\tilde{r}$ be a scaling sequence and $\tilde{r}'$ be an infinite subsequence of $\tilde{r}$. Then, for arbitrary mutually stable $\tilde{x}$, $\tilde{y} \in Seq(X, \tilde{r})$, the sequences $\tilde{x}'$ and $\tilde{y}'$ are mutually stable with respect to $\tilde{r}'$ and $\dist[r']{x'}{y'} = \dist{x}{y}$ holds. Hence $Em'$ is a weight preserving homomorphism of the weighted rooted clusters $G_{X, \tilde{r}} (\rho_X, \nu_{0})$ and $G_{X, \tilde{r}'} (\rho_{X}', \nu_{0}')$, where $\nu_{0}' = \sstable{X^0}{r'}$ and $\rho_{X}'$ is defined as in~\eqref{e1.4.4} with $\tilde{r} = \tilde{r}'$, $\tilde x=\tilde x'$ and $\tilde y=\tilde y'$.
\begin{lemma}\label{l2.2.14}
Let $(X, d)$ be an unbounded metric space and let $\tilde{r}$ be a scaling sequence. Then the following statements are equivalent.
\begin{enumerate}
\item \label{l2.2.14s1} The labeling $\rho^0 \colon V(G_{X, \tilde{r}}) \to \mathbb{R}^{+}$ is injective.
\item \label{l2.2.14s2} The homomorphism $Em' \colon V(G_{X, \tilde{r}}) \to V(G_{X, \tilde{r}'})$ is a monomorphism for every infinite subsequence $\tilde{r}'$ of $\tilde{r}$.
\end{enumerate}
\end{lemma}
\begin{proof}
$\ref{l2.2.14s1} \Rightarrow \ref{l2.2.14s2}$. Suppose that $\rho^0 \colon V(G_{X, \tilde{r}}) \to \mathbb{R}^{+}$ is injective but there is an infinite subsequence $\tilde{r}'$ of $\tilde{r}$ such that the equality $Em'(\nu_1) = Em'(\nu_2)$ holds for some distinct $\nu_1$, $\nu_2 \in V(G_{X, \tilde{r}})$. Since $Em'$ is a weight preserving homomorphism of weighted rooted graphs, we obtain
\begin{equation}\label{l2.2.14e1}
\rho^0 (\nu_1) = {}'\!\rho^0 (Em'(\nu_1)) = {}'\!\rho^0 (Em'(\nu_2)) = \rho^0 (\nu_2),
\end{equation}
where ${}'\!\rho^0$ is the labeling of the graph $G_{X, \tilde{r}'}$ defined by~\eqref{e2.2.4} with $\tilde{r} = \tilde{r}'$. Thus we have
$$
\rho^0 (\nu_1) = \rho^0 (\nu_2) \text{ and } \nu_1 \neq \nu_2,
$$
contrary to the injectivity of $\rho^0$.
$\ref{l2.2.14s2} \Rightarrow \ref{l2.2.14s1}$. Suppose now that there are $\nu_1$, $\nu_2 \in V(G_{X, \tilde{r}})$ such that $\nu_1 \neq \nu_2$ and $\rho^0 (\nu_1) = \rho^0 (\nu_2)$. Let $\seq{x_n^1}{n} \in \nu_1$ and $\seq{x_n^2}{n} \in \nu_2$ and let $p \in X$. Write
\begin{equation}\label{l2.2.14e2}
y_n =
\begin{cases}
x_n^1 & \text{if $n$ is even} \\
x_n^2 & \text{if $n$ is odd}.
\end{cases}
\end{equation}
It follows from the equality $\rho^0 (\nu_1) = \rho^0 (\nu_2)$ that
$$
\lim_{n\to\infty} \frac{d(y_n, p)}{r_n} = \rho^0 (\nu_1) = \rho^0 (\nu_2).
$$
Hence $\seq{y_n}{n} \in Seq(X, \tilde{r})$. Moreover, we have
$$
0 < \limsup_{n\to\infty} \frac{d(x_n^1, x_n^2)}{r_n} \leq \limsup_{n\to\infty} \frac{d(x_n^1, y_n)}{r_n} + \limsup_{n\to\infty} \frac{d(x_n^2, y_n)}{r_n}.
$$
Consequently,
$$
\limsup_{n\to\infty} \frac{d(x_n^1, y_n)}{r_n} > 0 \text{ or } \limsup_{n\to\infty} \frac{d(x_n^2, y_n)}{r_n} > 0.
$$
Without loss of generality suppose that
\begin{equation}\label{l2.2.14e3}
\limsup_{n\to\infty} \frac{d(x_n^1, y_n)}{r_n} > 0.
\end{equation}
Write $\nu_3 := \pi_{\tilde{r}} (\tilde{y})$ (see diagram~\eqref{e2.2.37}). Inequality~\eqref{l2.2.14e3} implies that $\nu_1 \neq \nu_3$. Now, for $\tilde{r}' = \seq{r_{n_k}}{k}$ with $n_k = 2k$, equality~\eqref{l2.2.14e2} shows that
$$
Em'(\nu_1) = Em'(\nu_3).
$$
Thus $Em'$ is not a monomorphism.
\end{proof}
For every finite, connected, metrizable, weighted graph $G=G(w)$ denote by $\mathcal{M}(w)$ the set of all metrics $d \colon V(G) \times V(G) \to \mathbb{R}^{+}$ satisfying the equality
$$
d(u, v) = w(\{u,v\})
$$
for every $\{u, v\} \in E(G)$.
\begin{lemma}\label{l2.2.15}
Let $(X, d)$ be an infinite metric space, let $\tilde{r}$ be a scaling sequence and let $u^*$, $v^*$ be distinct non adjacent vertices of $G_{X, \tilde{r}}$. If $G_{X, \tilde{r}}$ is finite, then there are two metrics $d^1, d^2\in\mathcal{M}(\rho_X)$ such that
\begin{equation}\label{l2.2.15e1}
d^1(u^*, v^*) \neq d^2 (u^*, v^*).
\end{equation}
\end{lemma}
\begin{proof}
Since $\{u^*, v^*\} \notin E(G_{X, \tilde{r}})$, we have
\begin{equation}\label{l2.2.15e2}
\limsup_{n\to\infty} \frac{d(x_n, y_n)}{r_n} \neq \liminf_{n\to\infty} \frac{d(x_n, y_n)}{r_n},
\end{equation}
where $\seq{x_n}{n} \in u^*$ and $\seq{y_n}{n} \in v^*$. Let $\tilde{r}_1' = \seq{r_{n_{1,k}}}{k}$ and $\tilde{r}_2' = \seq{r_{n_{2,k}}}{k}$ be subsequences of $\tilde{r}$ satisfying the equalities
$$
\lim_{k \to \infty} \frac{d(x_{n_{1,k}}, y_{n_{1,k}})}{r_{n_{1,k}}} = \limsup_{n\to\infty} \frac{d(x_n, y_n)}{r_n}
$$
and
$$
\lim_{k \to \infty} \frac{d(x_{n_{2,k}}, y_{n_{2,k}})}{r_{n_{2,k}}} = \liminf_{n\to\infty} \frac{d(x_n, y_n)}{r_n}
$$
respectively. Suppose $G_{X, \tilde{r}}$ is finite. Then, by Lemma~\ref{l2.2.4}, the labeling $\rho^0 \colon V(G_{X, \tilde{r}}) \to \mathbb{R}^{+}$ is injective. Consequently, by Lemma~\ref{l2.2.14},
$$
Em'_1\colon V(G_{X, \tilde{r}}) \to V(G_{X, \tilde{r}_1'}) \quad \mbox{and} \quad Em'_2\colon V(G_{X, \tilde{r}}) \to V(G_{X, \tilde{r}_2'})
$$
are weight preserving monomorphisms (see diagram~\eqref{e2.2.37}). By Proposition~\ref{p1.4.14} the weighted clusters $G_{X, \tilde{r}_1'}$ and $G_{X, \tilde{r}_2'}$ are metrizable by the corresponding shortest-path metrics $d_{\rho_{X}^1}^*$ and $d_{\rho_{X}^2}^*$. Write, for all $u$, $v \in V(G_{X, \tilde{r}})$,
$$
d^{i} (u, v) := d_{\rho_{X}^i}^* (Em_i'(u), Em_i'(v)), \quad i = 1,2.
$$
Since $Em_1'$ and $Em_2'$ are weight preserving monomorphisms, the weighted cluster $G_{X, \tilde{r}}$ is metrizable by $d^1$ and $d^2$. Moreover, \eqref{l2.2.15e2} implies that $d^1(u^*, v^*) \neq d^2(u^*, v^*)$.
\end{proof}
\begin{lemma}\label{l2.2.16}
Let $G=G(w)$ be a finite, connected, weighted metrizable graph.
Then the double inequality
\begin{multline}\label{l2.2.16e1}
\max_{P \in \mathcal{P}_{\mu, \nu}} \biggl(2 \max_{e \in E(P)} w(e) - \sum_{e \in E(P)} w(e) \biggr)_{+} \\*
\leq d(\mu, \nu) \leq \min_{P \in \mathcal{P}_{\mu, \nu}} \sum_{e \in E(P)} w(e)
\end{multline}
holds for every $d \in \mathcal{M}(w)$ and all distinct, non adjacent vertices $\mu$, $\nu \in V(G)$, where $(\cdot)_{+}$ is the positive part of $(\cdot)$. Conversely, if $\mu$ and $\nu$ are some distinct, non adjacent vertices of $G$ and $t$ is a positive real number satisfying the double inequality
\begin{equation}\label{l2.2.16e2}
\max_{P \in \mathcal{P}_{\mu, \nu}} \biggl(2 \max_{e \in E(P)} w(e) - \sum_{e \in E(P)} w(e) \biggr)_{+} \leq t \leq \min_{P \in \mathcal{P}_{\mu, \nu}} \sum_{e \in E(P)} w(e),
\end{equation}
then there is $d \in \mathcal{M}(w)$ such that $d(\mu, \nu) = t$.
\end{lemma}
\begin{proof}
Let $\mu$, $\nu \in V(G)$ be distinct and non adjacent and let $d \in \mathcal{M}(w)$. Then the second inequality in~\eqref{l2.2.16e1} follows from Lemma~\ref{l1.4.13}. To prove the first inequality in~\eqref{l2.2.16e1} it suffices to show that the inequality
\begin{equation}\label{l2.2.16e3}
\biggl(2 \max_{1 \leq i \leq n-1} d(x_i, x_{i+1}) - \sum_{i=1}^{n-1} d(x_i, x_{i+1}) \biggr)_{+} \leq d(x_1, x_n)
\end{equation}
holds for every path $(x_1, \ldots, x_n) \subseteq G$ with $x_1 = \mu$ and $x_n = \nu$. If the left side of~\eqref{l2.2.16e3} is $0$, there is nothing to prove. In the opposite case, \eqref{l2.2.16e3} can be written as
$$
2 \max_{1 \leq i \leq n-1} d(x_i, x_{i+1}) \leq d(x_1, x_n) + \sum_{i=1}^{n-1} d(x_i, x_{i+1}),
$$
which immediately follows from the triangle inequality. Suppose now that $\mu$ and $\nu$ are distinct, non adjacent vertices of $G$ and $t$ is a positive real number satisfying double inequality~\eqref{l2.2.16e2}. We must find $d \in \mathcal{M}(w)$ such that $d(\mu, \nu) = t$.
Let us consider the weighted graph $\hat{G} = \hat{G}(\hat{w})$ with
$$
V(\hat{G}) := V(G), \quad E(\hat{G}) := E(G) \cup \{\{\mu, \nu\}\}
$$
and
$$
\hat{w}(e) :=
\begin{cases}
w(e) & \text{if } e \in E(G)\\
t & \text{if } e = \{\mu, \nu\}.
\end{cases}
$$
The graph $\hat{G}(\hat{w})$ is metrizable if and only if there is $d \in \mathcal{M}(w)$ such that the equality $d(\mu, \nu) = t$ holds. Consequently it suffices to show that $\hat{G}(\hat{w})$ is metrizable. By Lemma~\ref{l2.2.11}, the weighted graph $\hat{G}(\hat{w})$ is metrizable if and only if
\begin{equation}\label{l2.2.16e4}
2 \max_{e \in E(C)} \hat{w}(e) \leq \sum_{e \in E(C)} \hat{w}(e)
\end{equation}
holds for every cycle $C \subseteq \hat{G}$. If $C \subseteq G$, then~\eqref{l2.2.16e4} holds because $G(w)$ is metrizable. Let $C \nsubseteq G$. Then $\{\mu, \nu\}$ is an edge of the cycle $C$. There are two cases to consider:
\begin{enumerate}
\renewcommand{\theenumi}{\ensuremath{(i_\arabic{enumi})}}
\item\label{l2.2.16s1} $\max_{e \in E(C)} \hat{w}(e) = \hat{w}(\{\mu, \nu\})$;
\item\label{l2.2.16s2} $\max_{e \in E(C)} \hat{w}(e) > \hat{w}(\{\mu, \nu\})$.
\end{enumerate}
Let $\ol{P}$ be the path in $C$ such that $V(\ol{P}) = V(C)$ and $\{\mu, \nu\} \notin E(\ol{P})$. Then we evidently have $\ol{P} \in \mathcal{P}_{\mu, \nu}$ and
\begin{equation}\label{l2.2.16e5}
\sum_{e \in E(C)} \hat{w}(e) = t + \sum_{e \in E(\ol{P})} w(e).
\end{equation}
Consequently, in the case when~\ref{l2.2.16s1} holds, inequality~\eqref{l2.2.16e4} can be written as
$$
2 t \leq t + \sum_{e \in E(\ol{P})} w(e),
$$
or equivalently
\begin{equation}\label{l2.2.16e6}
t \leq \sum_{e \in E(\ol{P})} w(e).
\end{equation}
Since $\ol{P} \in \mathcal{P}_{\mu, \nu}$ and $\ol{P} \subseteq G$, we have
$$
\min_{P \in \mathcal{P}_{\mu, \nu}} \sum_{e \in E(P)} w(e) \leq \sum_{e \in E(\ol{P})} w(e).
$$
The last inequality and the second inequality in~\eqref{l2.2.16e2} imply~\eqref{l2.2.16e6}.
It follows from~\ref{l2.2.16s2} that
\begin{equation}\label{l2.2.16e7}
\max_{e \in E(C)} \hat{w}(e) = \max_{e \in E(\ol{P})} w(e).
\end{equation}
Using the first inequality in~\eqref{l2.2.16e2} and the membership~$\ol{P} \in \mathcal{P}_{\mu, \nu}$ we obtain
\begin{multline*}
2 \max_{e \in E(\ol{P})} w(e) - \sum_{e \in E(\ol{P})} w(e) \leq \biggl(2 \max_{e \in E(\ol{P})} w(e) - \sum_{e \in E(\ol{P})} w(e) \biggr)_{+} \\
\leq \max_{P \in \mathcal{P}_{\mu, \nu}} \biggl(2 \max_{e \in E(P)} w(e) - \sum_{e \in E(P)} w(e) \biggr)_{+} \leq t.
\end{multline*}
Thus
$$
2 \max_{e \in E(\ol{P})} w(e) \leq t + \sum_{e \in E(\ol{P})} w(e).
$$
This inequality, \eqref{l2.2.16e5} and \eqref{l2.2.16e7} imply \eqref{l2.2.16e4}.
\end{proof}
\begin{definition}\label{d2.2.18}
For a metrizable, weighted graph $G = G(w)$ we denote by:
\begin{itemize}
\item $E^{un}(G)$ the set of $2$-element subsets $\{\mu, \nu\}$ of $V(G)$ such that $\{\mu, \nu\} \notin E(G)$ and $d^{1}(\mu, \nu) = d^{2}(\mu, \nu)$ holds for all $d^{1}$, $d^{2} \in \mathcal{M}(w)$;
\item $\hat{G} = \hat{G}(\hat{w})$ the weighted graph with
$$
V(\hat{G}) := V(G), \quad E(\hat{G}) := E(G) \cup E^{un}(G)
$$
and $\hat{w} \colon E(\hat{G}) \to \mathbb{R}^{+}$ for which
$$
\hat{w}(e) :=
\begin{cases}
w(e) & \text{if } e \in E(G)\\
d(\mu, \nu) & \text{if } e = \{\mu, \nu\} \in E^{un}(G),
\end{cases}
$$
where $d \in \mathcal{M}(w)$.
\end{itemize}
\end{definition}
\begin{corollary}\label{c2.2.19}
Let $C = C(w)$ be a weighted cycle with $w(e) > 0$ for every $e \in E(C)$ and such that
\begin{equation}\label{c2.2.19e1}
\sum_{e \in E(C)} w(e) = 2\max_{e \in E(C)} w(e).
\end{equation}
Then $\hat{C} = \hat{C}(\hat{w})$ is a complete graph.
\end{corollary}
\begin{figure}[htb]
\begin{center}
\begin{tikzpicture}[scale=1]
\node at (-3, 2) {$C$};
\coordinate [label= left:{$v_1 = \mu$}] (v1) at ($(0,0) + (180:2cm)$);
\coordinate [label=above left:$v_2$] (v2) at ($(0,0) + (120:2cm)$);
\coordinate [label=above right:$v_3$] (v3) at ($(0,0) + (50:2cm)$);
\coordinate [label=right:{$\nu = v_4$}] (v4) at ($(0,0) + (10:2cm)$);
\coordinate [label=below right:$v_5$] (v5) at ($(0,0) + (-25:2cm)$);
\coordinate [label=below right:$v_6$] (v6) at ($(0,0) + (-75:2cm)$);
\coordinate [label=below:$v_7$] (v7) at ($(0,0) + (-135:2cm)$);
\draw (v1)--(v2)--(v3)--(v4)--(v5)--(v6)--(v7)--(v1);
\draw (v1)--(v4);
\draw [fill=black, draw=black] (v1) circle (2pt);
\draw [fill=black, draw=black] (v2) circle (2pt);
\draw [fill=black, draw=black] (v3) circle (2pt);
\draw [fill=black, draw=black] (v4) circle (2pt);
\draw [fill=black, draw=black] (v5) circle (2pt);
\draw [fill=black, draw=black] (v6) circle (2pt);
\draw [fill=black, draw=black] (v7) circle (2pt);
\end{tikzpicture}
\end{center}
\caption{Here $e^* = \{v_2, v_3\}$ is the edge of maximum length and $v_1 = \mu$ and $v_4 = \nu$ are non adjacent vertices of the cycle $C = \{v_1, v_2, \ldots, v_7\}$, $P_1 = (v_1, v_2, v_3, v_4)$ and $P_2 = (v_4, v_5, v_6, v_7)$.}
\label{ex2.2.19fig1}
\end{figure}
\begin{proof}
Let $\mu$ and $\nu$ be distinct, non adjacent vertices of $C$ and let $e^*$ be an edge of $C$ such that
$$
w(e^*) = \max_{e \in E(C)} w(e).
$$
Equality~\eqref{c2.2.19e1} implies that $e^*$ is the unique edge of $C$ satisfying the last equality. For the cycle $C$, the set $\mathcal{P}_{\mu, \nu}$ contains exactly two paths: $P_1$ with $e^* \in E(P_1)$ and $P_2$ with $e^* \notin E(P_2)$ (see Figure~\ref{ex2.2.19fig1}).
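The two-path computation of the proof can be cross-checked numerically against the bounds of Lemma~\ref{l2.2.16}. In the following sketch (our own illustration; the $4$-cycle, its weights and all identifiers are hypothetical choices, not data from the paper) the cycle satisfies $\sum_{e} w(e) = 2\max_{e} w(e)$, the lower and upper bounds of the lemma coincide, and the distance between the non adjacent vertices is therefore uniquely determined:

```python
def simple_paths(adj, u, v, path=None):
    # All simple u-v paths in a graph given as {vertex: set of neighbours}.
    path = path or [u]
    if u == v:
        yield path
        return
    for x in adj[u]:
        if x not in path:
            yield from simple_paths(adj, x, v, path + [x])

def lemma_bounds(adj, w, u, v):
    # Bounds of the lemma:
    #   max_P (2*max_{e in P} w(e) - sum_{e in P} w(e))_+  <=  d(u, v)
    #                                   <=  min_P sum_{e in P} w(e).
    lo, hi = 0.0, float("inf")
    for p in simple_paths(adj, u, v):
        ew = [w[frozenset(e)] for e in zip(p, p[1:])]
        lo = max(lo, max(0.0, 2 * max(ew) - sum(ew)))
        hi = min(hi, sum(ew))
    return lo, hi

# A 4-cycle with sum(w) = 2*max(w): the heavy edge {1, 2} has weight 3.
adj = {1: {2, 4}, 2: {1, 3}, 3: {2, 4}, 4: {3, 1}}
w = {frozenset(e): t for e, t in [((1, 2), 3), ((2, 3), 1), ((3, 4), 1), ((4, 1), 1)]}
lo, hi = lemma_bounds(adj, w, 1, 3)
assert lo == hi == 2.0   # d(v_1, v_3) is forced, so {v_1, v_3} lands in E^un(C)
```

The same computation for the other non adjacent pair $\{v_2, v_4\}$ also returns coinciding bounds, matching the completeness of $\hat{C}$ asserted by the corollary.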
It follows from~\eqref{c2.2.19e1} that
\begin{multline}\label{c2.2.19e2}
\max_{P \in \mathcal{P}_{\mu, \nu}} \biggl( 2 \max_{e \in E(P)} w(e) - \sum_{e \in E(P)} w(e) \biggr)_{+} = \biggl( 2 \max_{e \in E(P_1)} w(e) - \sum_{e \in E(P_1)} w(e) \biggr)_{+} \\
= \biggl( 2 w(e^*) - \sum_{e \in E(P_1)} w(e) \biggr)_{+}
\end{multline}
and
\begin{equation}\label{c2.2.19e3}
\min_{P \in \mathcal{P}_{\mu, \nu}} \sum_{e \in E(P)} w(e) = \sum_{e \in E(P_2)} w(e)
\end{equation}
and
\begin{equation}\label{c2.2.19e4}
\sum_{e \in E(C)} w(e) = w(e^*) + \sum_{e \in E(P_1)} w(e) + \sum_{e \in E(P_2)} w(e).
\end{equation}
Equalities~\eqref{c2.2.19e1}--\eqref{c2.2.19e4} imply that
$$
\max_{P \in \mathcal{P}_{\mu, \nu}} \biggl( 2 \max_{e \in E(P)} w(e) - \sum_{e \in E(P)} w(e) \biggr)_{+} = \min_{P \in \mathcal{P}_{\mu, \nu}} \sum_{e \in E(P)} w(e).
$$
Moreover, by Lemma~\ref{l2.2.11}, equality~\eqref{c2.2.19e1} also implies that $C(w)$ is metrizable. Using Lemma~\ref{l2.2.16} we obtain that $\{\mu, \nu\} \in E^{un}(C)$. Thus $\{u, v\} \in E(\hat{C})$ holds for all distinct $u$, $v \in V(\hat{C})$, i.e., $\hat{C}$ is complete.
\end{proof}
\begin{lemma}\label{l2.2.20}
Let $G = G(w)$ be a finite, connected, metrizable weighted graph and let $\mu$, $\nu$ be distinct, non adjacent vertices of $G$. Then the following statements are equivalent:
\begin{enumerate}
\item\label{l2.2.20s1} The membership $\{\mu, \nu\} \in E^{un}(G)$ is valid;
\item\label{l2.2.20s2} There is a cycle $C \subseteq G$ such that $\mu$, $\nu \in V(C)$ and~\eqref{c2.2.19e1} holds.
\end{enumerate}
\end{lemma}
\begin{proof}
$\ref{l2.2.20s1} \Rightarrow \ref{l2.2.20s2}$. Let $\{\mu, \nu\} \in E^{un}(G)$. By Lemma~\ref{l2.2.16} the last statement holds if and only if
$$
\max_{P \in \mathcal{P}_{\mu, \nu}} \biggl( 2 \max_{e \in E(P)} w(e) - \sum_{e \in E(P)} w(e) \biggr)_{+} = \min_{P \in \mathcal{P}_{\mu, \nu}} \sum_{e \in E(P)} w(e).
$$
Since $G$ is a finite graph, the last equality implies that
\begin{equation}\label{l2.2.20e1}
\biggl( 2 \max_{e \in E(P_1)} w(e) - \sum_{e \in E(P_1)} w(e) \biggr)_{+} = \biggl( 2 \max_{e \in E(P_1)} w(e) - \sum_{e \in E(P_1)} w(e) \biggr) = \sum_{e \in E(P_2)} w(e) > 0
\end{equation}
for some $P_1$, $P_2 \in \mathcal{P}_{\mu, \nu}$. Let us consider the graph $P_1 \cup P_2$,
$$
V(P_1 \cup P_2) = V(P_1) \cup V(P_2), \quad E(P_1 \cup P_2) = E(P_1) \cup E(P_2),
$$
where $P_1$, $P_2 \in \mathcal{P}_{\mu, \nu}$ are such that~\eqref{l2.2.20e1} holds. It is clear that
$$
\max_{e \in E(P_1 \cup P_2)} w(e) = \max_{e \in E(P_1)} w(e).
$$
Moreover, \eqref{l2.2.20e1} implies the inequality
\begin{equation}\label{l2.2.20e2}
2 \max_{e \in E(P_1 \cup P_2)} w(e) \geq \sum_{e \in E(P_1 \cup P_2)} w(e).
\end{equation}
It suffices to show that $P_1 \cup P_2$ is a cycle in $G$. Indeed, if $P_1 \cup P_2$ is a cycle, then the reverse inequality
$$
2 \max_{e \in E(P_1 \cup P_2)} w(e) \leq \sum_{e \in E(P_1 \cup P_2)} w(e)
$$
follows from Lemma~\ref{l2.2.11}. Hence we obtain equality~\eqref{c2.2.19e1} with $C = P_1 \cup P_2$. To prove that $P_1 \cup P_2$ is a cycle in $G$, we can consider the edge-deleted subgraph
$$
P_{1,2} := P_1 \cup P_2 - \{e^*\}
$$
of the graph $P_{1}\cup P_{2}$, where $e^* = \{u^*, v^*\}$ is the unique edge of $P_1 \cup P_2$ with
$$
\max_{e \in E(P_1 \cup P_2)} w(e) = w(e^*).
$$
It is clear that $P_{1,2}$ is connected. Consequently there is a path $P_0$ joining $u^*$ and $v^*$ in $P_{1,2}$. Then $C_0 := P_0 + e^*$ is a cycle in $P_1 \cup P_2$. By Lemma~\ref{l2.2.11} we have
\begin{equation}\label{l2.2.20e3}
2 w(e^*) = 2 \max_{e \in E(C_0)} w(e) \leq \sum_{e \in E(C_0)} w(e).
\end{equation}
Since $C_0 \subseteq P_1 \cup P_2$, the inequality
\begin{equation}\label{l2.2.20e4}
\sum_{e \in E(C_0)} w(e) \leq \sum_{e \in E(P_1 \cup P_2)} w(e)
\end{equation}
holds.
Inequalities~\eqref{l2.2.20e2}, \eqref{l2.2.20e3} and \eqref{l2.2.20e4} imply the equality
$$
\sum_{e \in E(P_1 \cup P_2)} w(e) = \sum_{e \in E(C_0)} w(e).
$$
Using the last equality and the inclusion $C_0 \subseteq P_1 \cup P_2$ we see that $C_0 = P_1 \cup P_2$. Thus $P_1 \cup P_2$ is a cycle in $G$.
$\ref{l2.2.20s2} \Rightarrow \ref{l2.2.20s1}$. Let \ref{l2.2.20s2} hold. By Corollary~\ref{c2.2.19} we have $\{\mu, \nu\} \in E^{un}(C)$, which implies $\{\mu, \nu\} \in E^{un}(G)$.
\end{proof}
We denote by $FPC$ (Finite Pretangent Clusters) the class of all weighted rooted graphs $G = G(w, r)$ for which $|V(G)| < \infty$ and there are an unbounded metric space $(X, d)$ and a scaling sequence $\tilde{r}$ such that $G(w, r)$ and $G_{X, \tilde{r}} (\rho_X, \nu_{0})$ are isomorphic as weighted rooted graphs. (See Definition~\ref{d1.4.6}.)
If for a weighted rooted graph $G(w, r)$ the root $r$ is a dominating vertex, then we can define an analog $w^0$ of the labeling $\rho^0 \colon V(G_{X, \tilde{r}}) \to \mathbb{R}^{+}$ as follows:
\begin{equation}\label{e2.2.27}
w^0(v) :=
\begin{cases}
0 & \text{if } v = r\\
w(\{r, v\}) & \text{if } v \neq r.
\end{cases}
\end{equation}
The following theorem gives us a solution of Problem~\ref{pr1.4.7} for finite graphs.
\begin{theorem}\label{t2.2.21}
Let $G = G(w, r)$ be a finite, weighted, rooted graph. Then $G \in FPC$ if and only if the following conditions hold simultaneously.
\begin{enumerate}
\item\label{t2.2.21s1} The root $r$ is a dominating vertex of $G$ and the labeling $w^0 \colon V(G) \to \mathbb{R}^{+}$ is an injective function.
\item\label{t2.2.21s2} The inequality
\begin{equation}\label{t2.2.21e1}
2 \max_{e \in E(C)} w(e) \leq \sum_{e \in E(C)} w(e)
\end{equation}
holds for every cycle $C\subseteq G$.
\item\label{t2.2.21s3} If $C$ is a cycle in $G$ and the equality
\begin{equation}\label{t2.2.21e2}
2 \max_{e \in E(C)} w(e) = \sum_{e \in E(C)} w(e)
\end{equation}
holds, then $V(C)$ is a clique in $G$.
\end{enumerate}
\end{theorem}
\begin{proof}
Let $(X, d)$ be an infinite metric space and $\tilde{r}$ be a scaling sequence for which there is an isomorphism
$$
f \colon V(G) \to V(G_{X, \tilde{r}})
$$
of the weighted rooted graphs $G(w,r)$ and $G_{X, \tilde{r}} (\rho_{X}, \nu_0)$. We must show that~\ref{t2.2.21s1}, \ref{t2.2.21s2}, and \ref{t2.2.21s3} hold.
\ref{t2.2.21s1} By Proposition~\ref{p1.4.2} the vertex $\nu_0 = \sstable{X^0}{r}$ is a dominating vertex of $G_{X, \tilde{r}}$. Since $f$ is an isomorphism of rooted graphs, we have $f(r) = \nu_0$. Consequently, $r$ is a dominating vertex of $G$. The graph $G$ is finite by hypothesis. Hence $G_{X, \tilde{r}}$ is also finite. Now, using Lemma~\ref{l2.2.4}, we obtain that the labeling $\rho^0 \colon V(G_{X, \tilde{r}}) \to \mathbb{R}^{+}$ is injective. By the definition of $w^{0}$ (see~\eqref{e2.2.27}) the equality
$$
w^{0}(v) = w(\{r, v\})
$$
holds for every $v \in V(G) \setminus \{r\}$. Hence we have
$$
w^{0} (v) = \rho_{X} (\{f(r), f(v)\}) = \rho^{0} (f(v)),
$$
which implies the injectivity of $w^{0} \colon V(G) \to \mathbb{R}^{+}$. Condition~\ref{t2.2.21s1} follows.
\ref{t2.2.21s2} Let $C = (v_1, \ldots, v_n)$ be a cycle in $G$. Then $f(C) := (f(v_1), \ldots, f(v_n))$ is a cycle in $G_{X, \tilde{r}}$ and
\begin{equation}\label{t2.2.21e3}
\max_{e \in E(C)} w(e) = \max_{e \in E(f(C))} \rho_{X}(e) \text{ and } \sum_{e \in E(C)} w(e) = \sum_{e \in E(f(C))} \rho_{X}(e).
\end{equation}
By Proposition~\ref{p1.4.14} the weighted cluster $G_{X, \tilde{r}} (\rho_{X})$ is metrizable. Hence, by Lemma~\ref{l2.2.11}, the inequality
$$
2 \max_{e \in E(f(C))} \rho_{X}(e) \leq \sum_{e \in E(f(C))} \rho_{X}(e)
$$
holds. The last inequality and~\eqref{t2.2.21e3} imply the inequality
\begin{equation}\label{t2.2.21e4}
2 \max_{e \in E(C)} w(e) \leq \sum_{e \in E(C)} w(e).
\end{equation}
Thus~\ref{t2.2.21s2} holds.
\ref{t2.2.21s3} Suppose \ref{t2.2.21s3} does not hold.
Then there is a cycle $C \subseteq G$ and some distinct vertices $\mu$, $\nu \in V(C)$ such that
$$
2 \max_{e \in E(C)} w(e) = \sum_{e \in E(C)} w(e)
$$
and $\{\mu, \nu\} \notin E(G)$. Since $f \colon V(G) \to V(G_{X, \tilde{r}})$ is an isomorphism of weighted graphs, $f(C)$ is a cycle in $G_{X, \tilde{r}}$ and $f(\mu)$, $f(\nu) \in V(f(C))$ and
\begin{equation}\label{t2.2.21e5}
\{f(\mu), f(\nu)\} \notin E(G_{X, \tilde{r}})
\end{equation}
and
$$
2 \max_{e \in E(f(C))} \rho_{X}(e) = \sum_{e \in E(f(C))} \rho_{X}(e).
$$
Lemma~\ref{l2.2.20} implies
$$
\{f(\mu), f(\nu)\} \in E^{un} (G_{X, \tilde{r}}),
$$
i.e., the equality
$$
d^{1}(f(\mu), f(\nu)) = d^{2}(f(\mu), f(\nu))
$$
holds for all $d^{1}$, $d^{2} \in \mathcal{M}(\rho_{X})$. It follows from Lemma~\ref{l2.2.15} that
$$
\{f(\mu), f(\nu)\} \in E(G_{X, \tilde{r}}),
$$
contrary to~\eqref{t2.2.21e5}. Condition~\ref{t2.2.21s3} follows.
Conversely, suppose that conditions~\ref{t2.2.21s1}, \ref{t2.2.21s2} and \ref{t2.2.21s3} hold for the weighted rooted graph $G(w, r)$. We must find an unbounded metric space $(X, d)$ and a scaling sequence $\tilde{r} = \seq{r_n}{n}$ such that $G(w,r)$ and $G_{X, \tilde{r}} (\rho_{X}, \nu_{0})$ are isomorphic as weighted rooted graphs.
Suppose first that $|V(G(w, r))|=1$. Example~\ref{ex1.2.6} describes $(X, d)$ and $\tilde{r}$ for which $|\pretan{X}{r}| = 1$, which implies $|V(G_{X, \tilde{r}})| = 1$. It is clear that any two weighted rooted graphs $G_{1} = G_{1}(w_1, r_1)$ and $G_{2} = G_{2}(w_2, r_2)$ are isomorphic if $|V(G_1)| = |V(G_2)| = 1$. Thus we may suppose that $G(w, r)$ contains at least two vertices.
Using condition~\ref{t2.2.21s1} and Lemma~\ref{l2.2.11} we can show that $w(e) > 0$ holds for every $e \in E(G)$.
Indeed, if $\{u^*, v^*\} \in E(G)$ and \begin{equation}\label{t2.2.21e6} w(\{u^*, v^*\}) = 0, \end{equation} then there is a pseudometric $d \colon V(G) \times V(G) \to \mathbb{R}^{+}$ such that $$ w^{0} (u^*) = d(r, u^*), \quad w^{0} (v^*) = d(r, v^*) $$ and $w(\{u^*, v^*\}) = d(u^*, v^*)$. Now, from~\eqref{t2.2.21e6} and the triangle inequality we have $$ |w^{0} (u^*) - w^{0} (v^*)| = |d(r, u^*) - d(r, v^*)| \leq d(u^*, v^*) = 0. $$ Thus we have the equality $|w^{0} (u^*) - w^{0} (v^*)| = 0$, which implies $w^{0} (u^*) = w^{0} (v^*)$, contrary to condition~\ref{t2.2.21s1}. The set $E^{un}(G)$ (see Definition~\ref{d2.2.18}) is empty. To see this, suppose $\{\mu, \nu\} \in E^{un}(G)$. Since $w(e) > 0$ for every $e \in E(G)$, condition~\ref{t2.2.21s2} and Lemma~\ref{l2.2.11} imply that $G(w)$ is metrizable. By Lemma~\ref{l2.2.20}, there is a cycle $C \subseteq G$ such that $$ \sum_{e \in E(C)} w(e) = 2\max_{e \in E(C)} w(e) $$ holds and $\mu$, $\nu \in V(C)$. It follows from condition~\ref{t2.2.21s3} that $V(C)$ is a clique in $G$. Hence $\{\mu, \nu\} \in E(G)$. The last statement contradicts the definition of $E^{un}(G)$. Let $\ol{G}$ be the complement of $G$, i.e., $\ol{G}$ is the graph whose vertex set is $V(G)$ and whose edges are the pairs of nonadjacent vertices of $G$ (see~\cite[Definition~1.1.17]{BM}). Since $E^{un}(G) = \varnothing$, for every $\ol{e} = \{\ol{u}, \ol{v}\} \in E (\ol{G})$ there are metrics $d^{1}$, $d^{2} \in \mathcal{M}(w)$ such that $$ d^{1} (\ol{u}, \ol{v}) \neq d^{2} (\ol{u}, \ol{v}). $$ (Recall that a metric $d \colon V(G) \times V(G) \to \mathbb{R}^{+}$ belongs to $\mathcal{M}(w)$ if and only if $G(w)$ is metrizable by $d$.) We denote by $\ol{m}$ the number of edges of $\ol{G}$. Let $(\ol{e}_1, \ldots, \ol{e}_{\ol{m}})$ and $(\ol{e}_{1+\ol{m}}, \ldots, \ol{e}_{\ol{m}+\ol{m}})$ be numberings of $E(\ol{G})$ for which the equality $$ \ol{e}_{i} = \ol{e}_{i+\ol{m}} $$ holds for $i = 1$, $\ldots$, $\ol{m}$.
Then there is a finite sequence $(\ol{d}_1, \ldots, \ol{d}_{\ol{m}}, \ldots, \ol{d}_{2\ol{m}})$ of metrics from $\mathcal{M}(w)$ such that \begin{equation}\label{t2.2.21e7} \ol{d}_{i} (\ol{u}_{i}, \ol{v}_{i}) \neq \ol{d}_{i+\ol{m}} (\ol{u}_{i}, \ol{v}_{i}) \end{equation} if $i = 1$, $\ldots$, $\ol{m}$ and $\{\ol{u}_{i}, \ol{v}_{i}\} = \ol{e}_{i}$. Let $\tilde{r} = \seq{r_n}{n}$ be a scaling sequence such that \begin{equation}\label{t2.2.21e8} \lim_{n\to\infty} \frac{r_{n+1}}{r_{n}} = \infty \end{equation} and let $\seq{V(G), d_n}{n}$ be the sequence of metric spaces with the metrics $d_n$ satisfying the equality \begin{equation}\label{t2.2.21e9} d_n = r_n \ol{d}_{i} \end{equation} if $n \equiv i \pmod{2\ol{m}}$ and $i = 1$, $\ldots$, $2\ol{m}$. Now, using the Kuratowski embedding, we will define a metric space $(X, d)$ as a subset of the $k$-dimensional normed vector space $l_{k}^{\infty}$ with $k = |V(G)|$ and the norm $$ \|x\|_{\infty} := \sup_{1\leq j\leq k} |x_j|. $$ For every $n \in \mathbb{N}$, the Kuratowski embedding $$ K_n \colon (V(G), d_n) \to (l_k^{\infty}, \|\cdot\|_{\infty}) $$ can be defined as: \begin{equation}\label{t2.2.21e11} K_n(v) := \begin{pmatrix} d_n(v, v_1) - d_n(v_1, r)\\ d_n(v, v_2) - d_n(v_2, r)\\ \hdotsfor{1}\\ d_n(v, v_k) - d_n(v_k, r) \end{pmatrix}, \quad v \in V(G), \end{equation} where $(v_1, \ldots, v_k)$ is a numbering of $V(G)$ and the metrics $d_n$, $n \in \mathbb{N}$, are defined by~\eqref{t2.2.21e9}. We set \begin{equation}\label{t2.2.21e12} X := \bigcup_{n \in \mathbb{N}} K_n(V(G)) \end{equation} and consider $X$ with the metric~$d$ induced by the norm $\|\cdot\|_{\infty}$. We claim that $G_{X, \tilde{r}} (\rho_{X}, \nu_{0})$ and $G(w, r)$ are isomorphic as weighted rooted graphs. The next part of the proof is similar to the corresponding reasoning from Example~4.14 in \cite{BDnew}. It follows directly from~\eqref{t2.2.21e11} that $$ K_n(r)=\begin{pmatrix} 0\\ \ldots\\ 0\\ \end{pmatrix} $$ holds for every $n$.
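As an aside, the distance-preserving property of Kuratowski-type maps such as~\eqref{t2.2.21e11} is easy to check numerically. The following Python sketch (the $4$-point metric space and all names are illustrative choices of ours, not taken from the text) verifies that the map $v \mapsto (d(v, v_j) - d(v_j, r))_{j}$ is an isometry into $(\mathbb{R}^k, \|\cdot\|_\infty)$ and sends the root to the origin:

```python
# Sketch: Kuratowski-type embedding of a finite metric space into
# (R^k, sup-norm).  The 4-point metric below is an illustrative example.
V = ["r", "a", "b", "c"]
d = {("r", "a"): 1.0, ("r", "b"): 1.5, ("r", "c"): 2.0,
     ("a", "b"): 1.2, ("a", "c"): 1.8, ("b", "c"): 0.9}

def dist(u, v):
    """Symmetric lookup of the metric, with dist(v, v) = 0."""
    if u == v:
        return 0.0
    return d.get((u, v), d.get((v, u)))

def K(v, root="r"):
    """Coordinates d(v, v_j) - d(v_j, root), one per vertex v_j."""
    return [dist(v, w) - dist(w, root) for w in V]

def sup_norm_dist(x, y):
    return max(abs(s - t) for s, t in zip(x, y))

# The root is sent to the origin, and sup-norm distances between
# images coincide with the original distances.
assert all(coord == 0.0 for coord in K("r"))
for u in V:
    for v in V:
        assert abs(sup_norm_dist(K(u), K(v)) - dist(u, v)) < 1e-12
```

The key point, mirrored in the proof, is that the coordinate indexed by $v_j = u$ already realizes $|K(u)_j - K(v)_j| = d(u,v)$, while the triangle inequality bounds every other coordinate from above; the offsets $-d(v_j, r)$ cancel in differences.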
For convenience we can suppose that $$ p = \begin{pmatrix} 0\\ \ldots\\ 0\\ \end{pmatrix} $$ is a distinguished point of $X$. Hence, for every $x\in X$, we have \begin{equation}\label{t2.2.21e13} d(x, p) = \|x\|_{\infty}. \end{equation} Let $\tilde{x} = \seq{x_n}{n} \in Seq (X, \tilde{r})$ be such that \begin{equation}\label{t2.2.21e14} \tilde{\tilde d}_{\tilde{r}}(\tilde{x})=\lim_{n\to\infty} \frac{d(x_n, p)}{r_n} = \lim_{n\to\infty} \frac{\|x_{n}\|_{\infty}}{r_n} > 0. \end{equation} By \eqref{t2.2.21e12}, for every $n\in\mathbb{N}$, there are $j \in \mathbb{N}$ and $v = v(n)\in V(G)$ satisfying the equality $x_{n} = K_j(v)$. It is well known that the Kuratowski embeddings are distance preserving (see, for example, \cite[the proof of Theorem~III.8.1]{Borsuk}). Consequently, we have \begin{equation}\label{t2.2.21e15} \frac{1}{r_n} \|x_n\|_{\infty} = \frac{1}{r_n} \|K_j(v)\|_{\infty} = \frac{r_j}{r_n} w^{0}(v). \end{equation} Now \eqref{t2.2.21e14} implies $v \neq r$ for all sufficiently large $n$. Moreover, using \eqref{t2.2.21e8} and \eqref{t2.2.21e15} we obtain $n=j$ if $n$ is large enough. Hence, if $\tilde{x} = \seq{x_n}{n}$ belongs to $Seq(X, \tilde{r})$ and $\tilde{\tilde d}_{\tilde{r}} (\tilde{x}) > 0$, then for every sufficiently large $n$ there is $v(n) \in V(G)$ such that $$ \frac{1}{r_n} \|x_n\|_{\infty} = w^{0}(v(n)). $$ Since the labeling $w^{0} \colon V(G) \to \mathbb{R}^{+}$ is injective, $\mathop{\lim}\limits_{n\to\infty} w^{0}(v(n))$ exists if and only if there is $v'\in V(G)$ such that $v(n) = v'$ holds for all sufficiently large $n$. Conversely, if there are $v' \in V(G)$ and $\tilde{x} = \seq{x_n}{n} \subset X$ such that the equality $$ \frac{1}{r_n} \|x_n\|_{\infty} = w^{0}(v') $$ holds for all sufficiently large $n$, then we have $\tilde{x} \in Seq(X, \tilde{r})$.
Thus there is a bijection $$ f \colon V(G) \to V(G_{X, \tilde{r}}) $$ such that $f(r) = \sstable{X^0}{r} = \nu_0$ and, by~\eqref{t2.2.21e15}, $$ w^{0}(v) = \rho_{X}^{0} (f(v)) $$ for every $v \in V(G)$. It is easy to prove that $f$ is an isomorphism of $G(w)$ and $G_{X, \tilde{r}} (\rho_{X})$. Indeed, if $u$ and $v$ are distinct vertices of $G$ and $$ \tilde{x} = \seq{x_n}{n} \in f(u), \quad \tilde{y} = \seq{y_n}{n} \in f(v), $$ then, using~\eqref{t2.2.21e8}, we obtain $$ \frac{d(x_n, y_n)}{r_n} = \frac{\|x_n - y_n\|_{\infty}}{r_n} = \frac{\|K_n(u) - K_n(v)\|_{\infty}}{r_n} = \frac{d_n(u, v)}{r_n} = \ol{d}_i(u, v) $$ for all sufficiently large $n \in \mathbb{N}$, where $i \in \{1, \ldots, 2\ol{m}\}$ and $i \equiv n \pmod{2\ol{m}}$. The equality \begin{equation}\label{t2.2.21e16} \frac{d(x_n, y_n)}{r_n} = \ol{d}_i(u, v) \end{equation} and~\eqref{t2.2.21e7} imply that $\tilde{x}$ and $\tilde{y}$ are mutually stable if and only if $\{u, v\} \in E(G)$. Moreover, it follows from $\ol{d}_i \in \mathcal{M}(w)$ and~\eqref{t2.2.21e16} that, for $\{u, v\} \in E(G)$, we have $$ \rho_{X} (\{f(u), f(v)\}) = \lim_{n\to\infty} \frac{d(x_n, y_n)}{r_n} = \ol{d}_{i} (u, v) = w(\{u, v\}). $$ Thus $G(w,r)$ and $G_{X, \tilde{r}} (\rho_{X}, \nu_{0})$ are isomorphic as weighted rooted graphs. \end{proof} The following corollary of Theorem~\ref{t2.2.21} gives us a solution of Problem~\ref{pr1.4.4} for the case of finite graphs. \begin{corollary}\label{c2.2.22} A finite rooted graph $G = G(r)$ is isomorphic to a rooted cluster $G_{X,\tilde{r}} (\nu_{0})$ for some $(X, d)$ and $\tilde{r}$ if and only if the root $r$ is a dominating vertex of $G$. \end{corollary} \begin{proof} If $|V(G)| = 1$, then the claim follows from Example~\ref{ex1.2.6}. Now let $r$ be a dominating vertex of $G$ and let $|V(G)| \geq 2$ hold. Define a weight $w$ such that $1 < w(e) < 2$, for all $e \in E(G)$, and $w(e_1) \neq w(e_2)$ if $e_1 \neq e_2$.
Then conditions~\ref{t2.2.21s1}--\ref{t2.2.21s3} of Theorem~\ref{t2.2.21} are satisfied, and, consequently, there exist $(X, d)$ and $\tilde{r}$ such that $G(w,r)$ and $G_{X, \tilde{r}} (\rho_{X}, \nu_{0})$ are isomorphic as weighted rooted graphs. Thus, $G(r)$ and $G_{X, \tilde{r}} (\nu_{0})$ are isomorphic as rooted graphs. The converse statement follows directly from Proposition~\ref{p1.4.2}. \end{proof} \begin{corollary}\label{c2.2.24} Let $(Y, \delta)$ be a finite nonempty metric space. Then the following statements are equivalent. \begin{enumerate} \item\label{c2.2.24s1} There is $y^* \in Y$ such that \begin{equation}\label{c2.2.24e1} \delta(y^*, x) \neq \delta(y^*, z) \end{equation} holds whenever $x$ and $z$ are distinct points of $Y$. \item\label{c2.2.24s2} There are an unbounded metric space $(X, d)$ and a scaling sequence $\tilde{r}$ such that $(X,d)$ has a unique pretangent space at infinity with respect to $\tilde{r}$ and this pretangent space is isometric to $(Y, \delta)$. \end{enumerate} \end{corollary} \begin{proof} $\ref{c2.2.24s1} \Rightarrow \ref{c2.2.24s2}$ Suppose~\ref{c2.2.24s1} holds. To prove~\ref{c2.2.24s2} it suffices to consider a finite, weighted rooted graph $G = G(w,r)$ such that: \begin{itemize} \item $V(G) = Y$; \item $G$ is complete, i.e., $\{x,y\} \in E(G)$ whenever $x$ and $y$ are distinct points of $Y$; \item the equality $w(\{x, y\}) = \delta(x,y)$ holds for every $\{x,y\} \in E(G)$; \item the root $r$ coincides with a point $y^*$ for which~\eqref{c2.2.24e1} holds for all distinct $x$, $y \in Y$. \end{itemize} Theorem~\ref{t2.2.21} implies the existence of $(X, d)$ and $\tilde{r}$ having the desired properties. $\ref{c2.2.24s2} \Rightarrow \ref{c2.2.24s1}$ If~\ref{c2.2.24s2} holds, then \ref{c2.2.24s1} follows from Lemma~\ref{l2.2.4}.
\end{proof} It is known that the maximum number $f(n)$ of maximal cliques possible in a finite graph with $n \geq 2$ vertices satisfies the equality \begin{equation}\label{e2.2.26} f(n) = \begin{cases} 3^{n/3} & \text{if } n \equiv 0 \pmod 3\\ 4(3^{\lfloor n/3\rfloor - 1}) & \text{if } n \equiv 1 \pmod 3\\ 2(3^{\lfloor n/3\rfloor}) & \text{if } n \equiv 2 \pmod 3, \end{cases} \end{equation} where $\lfloor\cdot\rfloor$ is the floor function. (See \cite{ER} and \cite{MM} for the proof and related results.) \begin{corollary}\label{c2.2.9} Let $(X, d)$ be an unbounded metric space and let $\tilde{r}$ be a scaling sequence. Then we have either \begin{equation}\label{c2.2.9e1} \left|\mathbf{\pretan{X}{r}}\right|\leq \begin{cases} 1 & \text{if } \left|V(G_{X, \tilde{r}})\right| \leq 2\\ f\left(\left|V(G_{X, \tilde{r}})\right|-1\right) & \text{if } 3 \leq \left|V(G_{X, \tilde{r}})\right| < \infty \end{cases} \end{equation} or $\left|\mathbf{\pretan{X}{r}}\right|\ge \mathfrak{c}$ if $\left|V(G_{X, \tilde{r}})\right|$ is infinite, where $\left|\mathbf{\pretan{X}{r}}\right|$ is the cardinal number of distinct pretangent spaces to $(X,d)$ at infinity with respect to $\tilde{r}$ and $f$ satisfies equality~\eqref{e2.2.26}. \end{corollary} \begin{proof} If the labeling $\rho^0 \colon V(G_{X, \tilde{r}}) \to \mathbb{R}^+$ is not injective, then by Lemma~\ref{l2.2.4} there is an independent set $I \subseteq V(G_{X, \tilde{r}})$ such that $$ |I| = \mathfrak{c}. $$ For every $\nu \in I$ there is $\pretan{X}{r}$ such that $\nu \in \pretan{X}{r}$ and, by virtue of the fact that $I$ is independent, the distinct points of $I$ belong to distinct pretangent spaces. Hence $\left| \mathbf{\pretan{X}{r}}\right| \geq \mathfrak{c}$ holds. Let $\rho^0 \colon V(G_{X, \tilde{r}}) \to \mathbb{R}^+$ be injective. If $\left| V(G_{X, \tilde{r}}) \right| \leq 2$, then statement (iii) of Proposition~\ref{p1.2.2} and Definition~\ref{d1.1.4} imply $\left| \mathbf{\pretan{X}{r}}\right| = 1$.
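As an aside, the Moon--Moser values~\eqref{e2.2.26} are easy to tabulate. The following Python sketch (the function name is ours) evaluates $f(n)$ directly from the three congruence cases:

```python
def f(n):
    """Maximum number of maximal cliques in a graph with n >= 2
    vertices (the Moon-Moser bound, three cases mod 3)."""
    if n < 2:
        raise ValueError("the bound is stated for n >= 2")
    if n % 3 == 0:
        return 3 ** (n // 3)
    if n % 3 == 1:
        return 4 * 3 ** (n // 3 - 1)
    return 2 * 3 ** (n // 3)

# First few values: n = 2, ..., 8
print([f(n) for n in range(2, 9)])  # [2, 3, 4, 6, 9, 12, 18]
```

For instance $f(6) = 3^2 = 9$, attained by the complete $3$-partite graph $K_{2,2,2}$, whose maximal cliques are the $2 \cdot 2 \cdot 2$ transversals of the three parts.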
Assume now that $3 \leq \left| V(G_{X, \tilde{r}}) \right| < \infty$. The point $\nu_0 = \sstable{X^0}{r}$ is a dominating vertex of $G_{X, \tilde{r}}$. Consequently, there is a one-to-one correspondence between the maximal cliques of $G_{X, \tilde{r}}$ and the maximal cliques of the vertex-deleted subgraph $G_{X, \tilde{r}} - \nu_0$. Since $2 \leq \left| V(G_{X, \tilde{r}} - \nu_0) \right| < \infty$, we may use function~\eqref{e2.2.26} to obtain the desired estimate. \end{proof} \begin{remark}\label{r2.2.10} Inequality~\eqref{c2.2.9e1} is the best possible in the sense that, for every $n \in \mathbb{N}$, there exist an unbounded metric space $(X, d)$ and a scaling sequence $\tilde{r}$ such that $|V(G_{X, \tilde{r}})| = n$ and $$ \left|\mathbf{\pretan{X}{r}}\right| = \begin{cases} 1 & \text{if } \left|V(G_{X, \tilde{r}})\right| \leq 2\\ f\left(\left|V(G_{X, \tilde{r}})\right|-1\right) & \text{if } 3 \leq \left|V(G_{X, \tilde{r}})\right| < \infty. \end{cases} $$ It follows directly from Corollary~\ref{c2.2.22} that the vertex-deleted subgraph $(G_{X, \tilde{r}} - \nu_{0})$ of the graph $G_{X, \tilde r}$ can be isomorphic to an arbitrary finite graph $G$ with $|V(G)| = |V(G_{X, \tilde{r}})| - 1$. \end{remark} We conclude this section with a brief discussion of conditions~\ref{t2.2.21s2} and \ref{t2.2.21s3} of Theorem~\ref{t2.2.21}. By Lemma~\ref{l2.2.11} condition~\ref{t2.2.21s2} means that every weighted cycle $C \subseteq G(w)$ is metrizable with the weight induced from $G(w)$. Furthermore, it was shown that condition~\ref{t2.2.21s3} is equivalent to the fact that the vertex set $V(C)$ of every uniquely metrizable cycle $C \subseteq G(w)$ is a clique in $G(w)$. For an arbitrary metrizable cycle $C = C(w)$ there is a circle $S$ in the plane and a finite subset $A$ of $S$ such that $|V(C)| = |A|$ and, for every $\{u, v\} \in E(C)$, there are $a$, $b \in A$ for which the length of the minor arc between $a$ and $b$ is equal to $w(\{u, v\})$.
So we can consider a set $A$ together with the metric defined by the minor arc length as a result of metrization of the weighted cycle $C(w)$. We know that this metrization is unique (up to an isometry) if and only if \begin{equation}\label{e2.2.76} 2 \max_{e \in E(C)} w(e) = \sum_{e \in E(C)} w(e) \end{equation} holds. If we have the strict inequality $$ 2 \max_{e \in E(C)} w(e) < \sum_{e \in E(C)} w(e) $$ and $|V(C)| \geq 4$, then, by Lemma~\ref{l2.2.16}, there are continuum many different metrizations of $C(w)$. \begin{example}\label{ex2.2.25} Let $C(w)$ be the weighted cycle depicted in Figure~\ref{ex2.2.25fig1}. Then $C(w)$ is metrizable if and only if \begin{equation*} 2 \max\{a, b, c, k\} \leq a + b + c + k. \end{equation*} \begin{figure}[h] \begin{center} \begin{tikzpicture}[scale=1,thick] \coordinate [label=below left:$\nu_1$] (v1) at (0,0.5); \coordinate [label=above left:$\nu_2$] (v2) at (1,3); \coordinate [label=above right:$\nu_3$] (v3) at (4,2); \coordinate [label=below right:$\nu_4$] (v4) at (5,0); \draw (v1) -- node[above left] {$a$} (v2) -- node[above] {$b$} (v3) -- node[above right] {$c$} (v4) -- node[below] {$k$} (v1); \draw [fill=black, draw=black] (v1) circle (2pt); \draw [fill=black, draw=black] (v2) circle (2pt); \draw [fill=black, draw=black] (v3) circle (2pt); \draw [fill=black, draw=black] (v4) circle (2pt); \end{tikzpicture} \end{center} \caption{Here $C(w)$ is a weighted cycle with $w(\{\nu_1, \nu_2\}) = a$, $w(\{\nu_2, \nu_3\}) = b$, $w(\{\nu_3, \nu_4\}) = c$ and $w(\{\nu_4, \nu_1\}) = k$.} \label{ex2.2.25fig1} \end{figure} If $C(w)$ is metrizable, then for each $d \in \mathcal{M}(w)$ we have the double inequalities $$ \max\{|b-c|, |a-k|\} \leq d(\nu_2, \nu_4) \leq \min \{b+c, a+k\} $$ and $$ \max\{|a-b|, |c-k|\} \leq d(\nu_1, \nu_3) \leq \min \{a+b, c+k\}.
$$ Conversely, if $p$ and $q$ are positive real numbers such that \begin{equation*} \max\{|b-c|, |a-k|\} \leq p \leq \min \{b+c, a+k\} \end{equation*} and \begin{equation*} \max\{|a-b|, |c-k|\} \leq q \leq \min \{a+b, c+k\}, \end{equation*} then $C(w)$ is metrizable and there is $d \in \mathcal{M}(w)$ with $$ d(\nu_2, \nu_4) = p \quad \text{and} \quad d(\nu_1, \nu_3) = q. $$ \end{example} The unique metrization of a weighted cycle $C(w)$ satisfying equality~\eqref{e2.2.76} can also be represented as a finite set of points on the real line with the standard metric $d(x,y) = |x-y|$ (see Figure~\ref{fig2.7}). The last representation is closely connected to the important concept of ``metric betweenness'', which was introduced by Menger~\cite{Menger1928} in the following form. \begin{figure}[h] \begin{center} \begin{tikzpicture}[scale=1] \coordinate (O) at (0,0); \coordinate [label=below:$v_1$] (v1) at ($(O)+(-90:2cm)$); \coordinate [label=below right:$v_2$] (v2) at ($(O)+(-30:2cm)$); \coordinate [label=right:$v_3$] (v3) at ($(O)+(30:2cm)$); \coordinate [label=above:$v_4$] (v4) at ($(O)+(90:2cm)$); \draw (O) circle (2cm); \draw [fill=black] (v1) circle (2pt); \draw [fill=black] (v2) circle (2pt); \draw [fill=black] (v3) circle (2pt); \draw [fill=black] (v4) circle (2pt); \end{tikzpicture} \hfil \begin{tikzpicture}[scale=1] \coordinate (O) at (0,0); \coordinate [label=below:$v_1$] (v1) at ($(O)$); \coordinate [label=below:$v_2$] (v2) at ($(O)+(2, 0)$); \coordinate [label=below:$v_3$] (v3) at ($(O)+(4, 0)$); \coordinate [label=below:$v_4$] (v4) at ($(O)+(6, 0)$); \draw (v1) -- (v2) -- (v3) -- (v4); \draw [fill=black] (v1) circle (2pt); \draw [fill=black] (v2) circle (2pt); \draw [fill=black] (v3) circle (2pt); \draw [fill=black] (v4) circle (2pt); \end{tikzpicture} \end{center} \caption{Two isometric metrizations of $C(w)$ satisfying equality~\eqref{e2.2.76}.} \label{fig2.7} \end{figure} Let $(X, d)$ be a metric space and let $x$, $y$ and $z$ be different points of $X$.
One says that $y$ lies between $x$ and $z$ if $$ d(x, z) = d(x,y) + d(y,z). $$ It is easy to verify that, for three different points $x$, $y$, $z \in X$, we have $$ 2 \max\{d(x,y), d(x, z), d(y,z)\} = d(x,y) + d(x, z) + d(y,z) $$ if and only if one of these points lies between the other two points. Thus equality~\eqref{e2.2.76} can be considered as a generalization of the ``metric betweenness'' relation to the case of weighted graphs. Characteristic properties of ternary relations that are ``metric betweenness'' relations were determined by Wald in \cite{Wald}. Later, the problem of metrization of ``betweenness'' relations (not necessarily by real-valued metrics) was considered in \cite{Mosz, MZl, Simko}. Analogs of the classical Sylvester--Gallai and de Bruijn--Erd\H{o}s theorems for ``metric betweenness'' relations have recently been obtained in \cite{Che, CheChv, Chv}. \bibliographystyle{amsplain} \bibliography{bibliography} \addcontentsline{toc}{chapter}{\bibname} \textbf{Viktoriia Bilet} Institute of Applied Mathematics and Mechanics of NASU, Dobrovolskogo Str. 1, Sloviansk 84100, Ukraine \textbf{E-mail:} [email protected] \bigskip \textbf{Oleksiy Dovgoshey} Institute of Applied Mathematics and Mechanics of NASU, Dobrovolskogo Str. 1, Sloviansk 84100, Ukraine \textbf{E-mail:} [email protected] \end{document}
\begin{document} \title{CYCLE CHARACTERIZATION OF THE AUBRY SET FOR WEAKLY COUPLED HAMILTON--JACOBI SYSTEMS} \author{H.~Ibrahim\thanks{Lebanese University, Mathematics Department, Hadeth, Beirut, Lebanon, {\tt [email protected]}}, \,A.~Siconolfi\thanks{Mathematics Department, University of Rome ``La Sapienza'', Italy, {\tt [email protected]}}, \, S.~Zabad\thanks{Mathematics Department, University of Rome ``La Sapienza'', Italy, {\tt [email protected]}}} \maketitle \begin{abstract} We study a class of weakly coupled systems of Hamilton--Jacobi equations using the random frame introduced in \cite{siconolfi3}. We provide a cycle condition characterizing the points of the Aubry set. This generalizes a property already known in the scalar case. \\ \bigskip \noindent \textbf{Key words.} weakly coupled systems of Hamilton--Jacobi equations, viscosity solutions, weak KAM theory. \bigskip \noindent \textbf{AMS subject classifications.} 35F21, 49L25, 37J50. \end{abstract} \section{\textbf{Introduction}} \parskip +3pt True to the title, the object of the paper is to provide a dynamical characterization of the Aubry set associated to weakly coupled Hamilton--Jacobi systems posed on the flat torus $\mathbb{T}^N$. We consider the one--parameter family $$ H_{i}(x,Du_{i})+\sum_{j=1}^{m}a_{ij}u_{j}(x)=\alpha \quad \mbox{in } \mathbb{T}^{N} \quad \mbox{ for every } i \in \{1,\cdots,m\}, $$ with $m \ge 2$ and $\alpha$ varying in $\mathbb{R}$. Here $\mathbf u=(u_1,\cdots,u_m)$ is the unknown function, $H_1, \cdots, H_m$ are continuous Hamiltonians, convex and superlinear in the momentum variable, and $A = \left(a_{ij}\right)$ is a coupling matrix.
As primarily pointed out in \cite{Fabio}, \cite{Mitake2}, this kind of system exhibits properties and phenomena similar to the ones already studied for scalar eikonal equations, under suitable assumptions on the coupling matrix. In particular the minimum $\alpha$ for which the system has subsolutions, to be understood in the viscosity or, equivalently, the a.e. sense, is the unique value for which it admits viscosity solutions. We call this threshold value critical, and denote it by $\beta$ in what follows. The obstruction to the existence of subsolutions below the critical value is not spread indistinctly on the torus, but is instead concentrated on the Aubry set, denoted by ${\cal A}$. This fact, proved in \cite{Davini}, generalizes what happens in the scalar case. It has relevant consequences on the structure of the critical subsolutions and allows defining a fundamental class of critical solutions, see Definition \ref{pdeau}. However, so far, no geometrical/dynamical description of ${\cal A}$ is available, and the aim of our investigation is precisely to fill this gap. Deepening the knowledge of the Aubry set seems important for the understanding of the interplay between equational and dynamical facts in the study of the system, which is at the core of an adapted weak KAM theory. See \cite{Fathh} for a comprehensive treatment of this topic in the scalar case. This will hopefully allow us to attack some open problems in the field, the most relevant being the existence of regular subsolutions. Another related application, at least when the Hamiltonians are of Tonelli type, is in the analysis of random evolutions associated to weakly coupled systems, see \cite{DaZaSi}. According to \cite{Davini}, there is a restriction on the values that a critical (or supercritical) subsolution can assume at any given point. This is a property which genuinely depends on the vectorial structure of the problem and has no counterpart in the scalar case.
Due to the stability properties of viscosity subsolutions and the convex nature of the problem, these admissible values make up a closed convex set at any point $y$ of the torus. We denote it by $F_\beta(y)$. The restriction becomes severe on the Aubry set, where $F_\beta(y)$ is a one--dimensional set, while we have proved, to complete the picture, that it possesses nonempty interior outside $\mathcal A$, see Proposition \ref{prop30}. In a nutshell, what we are doing in the paper is to provide a dynamical dressing to this striking dichotomy. To this purpose, we take advantage of the action functional introduced in \cite{siconolfi3} in relation to the systems. We also make crucial use of the characterization of admissible values through the action functional computed on random cycles established there, see Theorem \ref{t18}. The action functional is defined exploiting the underlying random structure given by the Markov chain with $-A$ as transition matrix. Following the approach of \cite{siconolfi3}, we provide a presentation of the random frame based on explicit computations and avoid using advanced probabilistic notions. This makes the text mostly self-contained and accessible to readers without a specific background in probability. The starting point is the cycle characterization of the Aubry set holding in the scalar case, see \cite{siconolfi}. It asserts that a point is in the Aubry set if and only if there exists, for some positive $\varepsilon$, a sequence of cycles based on it, and defined in $[0,t]$ with $t > \varepsilon$, on which the action functional is infinitesimal. Of course the role of the lower bound $\varepsilon$ is crucial, otherwise the property would be trivially true for any element of the torus. To generalize it in the context of systems, we need to use random cycles defined on intervals with a stopping time, say $\tau$, as right endpoint. We call them $\tau$--cycles, see Appendix \ref{randomset}.
This makes the adaptation of the $\varepsilon$--condition quite delicate. To perform the task, we use the notion of a stopping time strictly greater than $\varepsilon$, written $\tau \gg \varepsilon$, see Definition \ref{superstrict}, which seems rather natural but which we were not able to find in the literature. We therefore present in Section \ref{propstoptim} some related basic results. We prove, in particular, that the exponential of the coupling matrix related to a stopping time $\tau \gg \varepsilon$ is strictly positive, see Proposition \ref{corollozzo}. This property will be repeatedly used throughout the paper. We moreover provide a strengthened version of the aforementioned Theorem \ref{t18}, roughly speaking showing that the $\tau$--cycles with $\tau \gg \varepsilon$ are enough to characterize admissible values for critical subsolutions, see Theorem \ref{t20}. This result is in turn based on a cycle iteration technique we explain in Section \ref{reptcyc}. \smallskip The main output is presented in two versions, see Theorems \ref{t19}, \ref{t21}, with the latter one, somehow more geometrically flavored, exploiting the notion of characteristic vector of a stopping time, see Definition \ref{charactsc}. The paper is organized as follows: in Section \ref{settingpb} we introduce the system under study and recall some basic preliminary facts. Section \ref{propstoptim} is devoted to illustrating some properties of stopping times and the related shift flows. Section \ref{reptcyc} is about the cycle iteration technique. In Section \ref{dynpropaub} we give the main results. Finally, the two Appendices \ref{stocmatrx} and \ref{cadpath} collect basic material on stochastic matrices and spaces of c\`{a}dl\`{a}g paths. In Appendix \ref{randomset} we give a broad picture of the random frame we work within.
\smallskip \section{\textbf{Assumptions and preliminary results}}\label{settingpb} \parskip +3pt In this section we fix some notation and write down the problem with the standing assumptions. We also present some basic results on weakly coupled systems that we will need in the following. \medskip We deal with a weakly coupled system of Hamilton--Jacobi equations of the form \begin{equation}\label{e1} \tag{HJ$\alpha$} H_{i}(x,Du_{i})+\sum_{j=1}^{m}a_{ij}u_{j}(x)=\alpha \quad \mbox{in } \mathbb{T}^{N} \quad \mbox{ for every } i \in \{1,\cdots,m\}, \end{equation} \noindent where $m \geq 2$, $\alpha$ is a real constant, $A$ is an $m \times m$ matrix, the so--called coupling matrix, and $H_{1}, \cdots, H_{m}$ are Hamiltonians. The $H_i$ satisfy the following set of assumptions for all $i \in \{1,\cdots,m\}$: \begin{itemize} \item[(H1)] $ H_{i}:\mathbb{T}^{N}\times\mathbb{R}^{N}\rightarrow\mathbb{R} \quad \mbox{is continuous}$; \item[(H2)] $p\mapsto H_{i}(x,p) \quad \mbox{is convex for every } x \in \mathbb{T}^{N} $; \item[(H3)] $p\mapsto H_{i}(x,p) \quad \mbox{is superlinear for every } x \in \mathbb{T}^{N}.
$ \end{itemize} The superlinearity condition (H3) allows us to define the corresponding Lagrangians through the Fenchel transform, namely $$L_i(x,q)= \max_{p\in\mathbb{R}^{N}} \{p \cdot q - H_i(x,p)\} \quad\hbox{for any $i$}.$$ The coupling matrix $A =(a_{ij})$ satisfies: \begin{itemize} \item[(A1)] $a_{ij}\leq 0 \mbox{ for every }i\neq j$; \item[(A2)] $\sum \limits_{j=1}^{m}a_{ij} =0 \mbox { for any } i \in \{1,\cdots,m\};$ \item[(A3)] it is irreducible, i.e., for every $W \subsetneq \{1,\cdots,m\}$ there exist $i \in W $ and $j \notin W$ such that $a_{ij}<0.$ \end{itemize} \smallskip Roughly speaking, (A3) means that the system cannot split into independent subsystems. We remark that the assumptions (A1) and (A2) on the coupling matrix are equivalent to $e^{-At}$ being a stochastic matrix for any $t\geq 0$, and due to irreducibility we get that $e^{-A t}$ is positive for any $t >0$, as made precise in Appendix \ref{stocmatrx}. We will consider (sub/super) solutions of the system in the viscosity sense, see \cite{Davini}, \cite{siconolfi3} for the definition. We recall that, as usual in convex coercive problems, all the subsolutions are Lipschitz--continuous and the notions of viscosity and a.e. subsolutions are equivalent. \medskip We now define the critical value $\beta$ as $$\beta = \inf \{ \alpha \in \mathbb{R} \mid \mbox{(HJ$\alpha$) admits subsolutions}\},$$ and write down the critical system \begin{equation}\label{ecritical} \tag{HJ$\beta$} H_{i}(x,Du_{i})+\sum_{j=1}^{m}a_{ij}u_{j}(x)= \beta \quad \mbox{in } \mathbb{T}^{N} \quad \mbox{ for every } i \in \{1,\cdots,m\}. \end{equation} \smallskip The critical system (\ref{ecritical}) is the unique one in the family \eqref{e1}, $\alpha \in \mathbb R$, for which there are solutions.
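The equivalence of (A1)--(A2) with stochasticity of $e^{-At}$, and the strict positivity of $e^{-At}$ for $t>0$ under the irreducibility assumption (A3), can be illustrated numerically. In the Python sketch below, both the $3\times 3$ coupling matrix and the truncated-series exponential helper are illustrative choices of ours, not taken from the paper:

```python
import numpy as np

def expm(M, terms=80):
    """Matrix exponential via truncated Taylor series
    (adequate for the small, moderate-norm matrices used here)."""
    P = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        P = P + term
    return P

# An illustrative coupling matrix satisfying (A1)-(A3):
# nonpositive off-diagonal entries, zero row sums, irreducible
# (the directed cycle 1 -> 2 -> 3 -> 1 connects every split).
A = np.array([[ 1.0, -1.0,  0.0],
              [ 0.0,  1.0, -1.0],
              [-1.0,  0.0,  1.0]])

for t in (0.1, 1.0, 5.0):
    P = expm(-A * t)
    assert np.allclose(P.sum(axis=1), 1.0)  # stochastic: rows sum to 1
    assert (P > 0).all()                    # strictly positive for t > 0
```

The strict positivity reflects the probabilistic reading used throughout the paper: $-A$ generates a continuous-time Markov chain on the $m$ indices, and irreducibility makes every state reachable from every other in any positive time.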
By critical (sub/super) solutions, we will mean (sub/super) solutions of \eqref{ecritical}. \smallskip We deduce from the coercivity condition: \smallskip \begin{proposition} \label{lip} The family of subsolutions to \eqref{e1} is equi-Lipschitz--continuous for any $\alpha \geq \beta$. \end{proposition} \smallskip As already recalled, a relevant property of systems is that not all values in $\mathbb{R}^m$ are admissible for subsolutions to (\ref{e1}) at a given point of the torus. This rigidity phenomenon will play a major role in what follows. We define, for $\alpha \geq \beta$ and $x \in \mathbb{T}^{N}$, \begin{equation}\label{e42} F_\alpha(x)= \{ \mathbf{b} \in \mathbb {R}^{m} \mid \exists \; \mathbf{u} \; \mbox{subsolution to } \eqref{e1} \mbox{ with }\mathbf{u}(x)= \mathbf{b} \}. \end{equation} \smallskip It is clear that $$\mathbf b \in F_\alpha (x) \; \Rightarrow \; \mathbf b + \lambda \, \mathbf 1 \in F_\alpha (x) \qquad\hbox{for any $\lambda \in \mathbb{R}$,}$$ where $\mathbf{1}$ is the vector of $\mathbb R^m$ with all components equal to $1$. It is also apparent from the stability properties of subsolutions and the convex character of the Hamiltonians that $F_\alpha(x)$ is closed and convex for any $x$. We proceed by recalling the PDE definition of the Aubry set $\mathcal A$ of the system, see \cite{Davini}, \cite{siconolfi3}, \cite{SiZa}. \begin{definition}\label{pdeau} A point $y$ belongs to the Aubry set if and only if the maximal critical subsolution taking an admissible value at $y$ is a solution to (\ref{ecritical}).
\end{definition} \smallskip \begin{definition} A critical subsolution $\mathbf u$ is said to be {\em locally strict} at a point $y \in \mathbb T^N$ if there are a neighborhood $U$ of $y$ and a positive constant $\delta$ with \[ H_i(x,Du_i) +\sum_{j=1}^{m}a_{ij}u_{j}(x) \leq \beta- \delta \quad\hbox{for any $i \in \{1, \cdots, m\} $, a.e. $x \in U$.}\] \end{definition} We recall the following property: \smallskip \begin{proposition} (\cite{Davini} Theorem 3.13) \label{DZ1} A point $y \not \in {\cal A}$ if and only if there exists a critical subsolution locally strict at $y$. \end{proposition} \medskip As pointed out in the Introduction, the admissible values make up a one--dimensional set on ${\cal A}$. \smallskip \begin{proposition}\label{prop28} (\cite{Davini} Theorem 5.1) An element $y$ belongs to the Aubry set if and only if $$F_\beta(y)= \{\mathbf{b}+\lambda \, \mathbf{1} \mid \lambda \in \mathbb{R}\},$$ where $\mathbf{b}$ is some vector in $\mathbb R^m$ depending on $y$. \end{proposition} \smallskip On the contrary, if $y \not\in\mathcal A$, the admissible set possesses nonempty interior, which is characterized as follows: \smallskip \begin{proposition}\label{prop30} Given $y \notin {\mathcal A} $, the interior of $F_\beta(y)$ is nonempty, and $\mathbf b \in \mathbb{R}^m$ is an internal point of $F_\beta(y)$ if and only if there is a critical subsolution $\mathbf u$ locally strict at $y$ with $\mathbf u(y)= \mathbf b$. \end{proposition} \begin{proof} The values $\mathbf b $ corresponding to critical subsolutions locally strict at $y$ make up a nonempty set in force of Proposition \ref{DZ1}; this set is in addition convex by the convex character of the system.
We will denote it by $\widetilde F_\beta(y)$. Let $\mathbf b \in \widetilde F_\beta(y)$; we claim that there exists $\nu_0 >0$ with
\begin{equation}\label{notte1}
\mathbf{b}+\nu \, \mathbf e_{i} \in \widetilde F_{\beta}(y) \qquad\mbox{for any } i, \; \nu_0 > \nu >0.
\end{equation}
We denote by $\mathbf u$ the locally strict critical subsolution with $\mathbf u(y)= \mathbf b$; then there exist $0<\epsilon < 1$ and $\delta > 0$ such that
\begin{equation}\label{e51}
H_{i}(x,Du_{i}(x))+\sum_{j=1}^{m}a_{ij}u_{j}(x)\leq \beta - 2 \, \delta \quad \mbox{for any $i$, a.e. } x \in B(y,\epsilon).
\end{equation}
Taking $\epsilon$ smaller, if necessary, we can also assume
\begin{equation}\label{e49}
\eta(\epsilon)+a_{ii}\,\frac {\epsilon^{2}}{2} < \delta \qquad\hbox{for any $i$,}
\end{equation}
where $\eta$ is a continuity modulus for $(x,p) \mapsto H_i(x,p)$ in $\mathbb{T}^N \times B(0,\ell_\beta+1)$ and $\ell_\beta$ is a Lipschitz constant for all critical subsolutions, see Proposition \ref{lip}. We fix $i$ and define $\mathbf w: \mathbb{T}^N \to \mathbb{R}^m$ via
\[w_{j}(x) = \left \{ \begin{array}{ll} u_{j}(x) & \mbox{if $ j\neq i$,} \\ \max \{ \phi(x),u_{i}(x)\} & \mbox{if $j=i$,} \end{array} \right .\]
where
$$\phi(x):= u_{i}(x)- \frac{1}{2}\,|y-x|^{2}+ \frac{\epsilon^2}{2}.$$
Notice that
\begin{equation}\label{ecu}
w_i= \phi > u_i \quad\hbox{in } B(y,\epsilon) \qquad\hbox{and} \quad w_i=u_i \quad\hbox{outside } B(y,\epsilon).
\end{equation}
By \eqref{e51}, \eqref{e49} and the assumptions on the coupling matrix, we have, for the fixed index $i$ and a.e.
$x \in B(y,\epsilon)$,
\begin{eqnarray*}
&& H_i(x,Dw_i(x)) + \sum_{j} a_{ij} \, w_j(x) \\
&=& H_i(x,Du_i(x) + (y-x)) + \sum_{j \neq i} a_{ij} \, u_j(x) + a_{ii} \, \phi(x) \\
&\leq& H_i(x,Du_i(x))+ \eta( \epsilon) + \sum_j a_{ij} \, u_j(x) + a_{ii} \, \frac{\epsilon^2 }2 \\
&\leq& \beta - 2 \, \delta + \delta = \beta - \delta.
\end{eqnarray*}
Further, for $j\neq i$ and for a.e. $x \in B(y,\epsilon)$, we have
\begin{eqnarray*}
&& H_{j}(x,Dw_{j}(x))+\sum_k a_{jk}w_{k}(x) \\
&=& H_{j}(x,Du_{j}(x))+\sum_{k} a_{jk}u_{k}(x)+a_{ji}\left(- \frac{1}{2}\,|y-x|^{2}+ \frac{\epsilon^2}{2}\right) \\
& \leq & \beta - 2 \, \delta ,
\end{eqnarray*}
where the last inequality is due to the fact that $a_{ji} \leq 0$. The previous computations and \eqref{ecu} show that $\mathbf w$ is a critical subsolution locally strict at $y$, and this property is inherited by
\[\lambda \, \mathbf w + (1 - \lambda) \, \mathbf u\]
for any $\lambda \in [0,1]$. We therefore prove \eqref{notte1} by setting $\nu_0 = \frac {\epsilon^2}2$. Taking into account that $\mathbf b + \lambda \, \mathbf 1 \in \widetilde F_\beta(y)$ for any $\lambda \in \mathbb{R}$, and that the vectors $\mathbf e_i$, $i=1, \cdots, m$, and $- \mathbf 1$ are affinely independent, we derive from \eqref{notte1} and the convexity of $\widetilde F_\beta(y)$ that $\mathbf b$ is an internal point of $\widetilde F_\beta(y)$, and consequently that $\widetilde F_\beta(y)$ is an open set.
Finally, it is also dense in $F_\beta(y)$: if $\mathbf v$ is any critical subsolution and $\mathbf u$ is in addition locally strict at $y$, then any convex combination $\lambda \, \mathbf u + (1 - \lambda) \, \mathbf v$, $\lambda \in (0,1]$, is locally strict at $y$, and
\[ \lambda \, \mathbf u(y) + (1 - \lambda) \, \mathbf v(y) \to \mathbf v (y) \qquad\hbox{as $\lambda \to 0$.}\]
The property of being open, convex and dense in $F_\beta(y)$ implies that $\widetilde F_\beta(y)$ must coincide with the interior of $F_\beta(y)$, as claimed.
$\Box$
\end{proof}

\medskip
We proceed by introducing the action functional defined in \cite{siconolfi3}, on which our analysis is based; see Appendices \ref{cadpath}, \ref{randomset} for terminology, notation, definitions and basic facts.

\medskip
Given $\alpha \geq \beta$ and an initial point $x \in \mathbb{T}^{N}$, the action functional adapted to the system is
$$\mathbb E_{\mathbf a} \left[ \int_0^\tau L_{\omega(s)}(x + \mathcal {I}(\Xi)(s),-\Xi(s)) + \alpha \, ds \right],$$
where $\mathbf a$ is any probability vector of $\mathbb{R}^m$, $\tau$ a bounded stopping time and $\Xi$ a control.
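\smallskip
As a sanity check, we point out the scalar reduction; this is our own illustration, under the convention, recalled in the Appendices, that $\mathcal I(\Xi)(s)=\int_0^s \Xi(r)\,dr$. In the scalar case $m=1$ there is no index process, the expectation $\mathbb E_{\mathbf a}$ disappears, and the functional reduces to
\[
\int_0^\tau \Big( L\big(x+\mathcal I(\Xi)(s),-\Xi(s)\big) + \alpha \Big)\, ds.
\]
Writing $\xi(s)= x+\mathcal I(\Xi)(s)$, so that $\dot\xi=\Xi$ and $\xi(\tau)=y$ whenever $\Xi \in \mathcal K(\tau,y-x)$, and substituting $\gamma(s)=\xi(\tau-s)$, this becomes the classical action
\[
\int_0^\tau \big(L(\gamma(s),\dot\gamma(s))+\alpha\big)\,ds
\]
of a curve $\gamma$ joining $y$ to $x$ in time $\tau$, in agreement with the well--known curve characterization of subsolutions to a single equation.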
\medskip
Using the action functional, we get the following characterizations of subsolutions to the system and of admissible values:
\begin{theorem}\label{t17} A function $\mathbf u:\mathbb{T}^{N} \rightarrow \mathbb{R}^{m}$ is a subsolution of \eqref{e1}, for any $\alpha \geq \beta$, if and only if
$$\mathbb{E}_{\mathbf {a}} \big [u_{\omega(0)}(x) - u_{\omega(\tau)}(y) \big ] \leq \mathbb{E}_{\mathbf {a}} \left[ \int_0^\tau L_{\omega(s)}(x + \mathcal {I}(\Xi)(s),-\Xi(s)) + \alpha \, ds \right],$$
for any pair of points $x$, $y$ in $\mathbb{T}^{N}$, any probability vector $\mathbf {a} \in \mathbb{R}^m$, any bounded stopping time $\tau$ and any $\Xi \in \mathcal{K}(\tau, y-x)$.
\end{theorem}

\medskip
\begin{theorem}\label{t18} Given $y \in \mathbb{T}^{N}$ and $\alpha \geq \beta$, we have $\mathbf{b} \in F_{\alpha}(y)$ if and only if
\begin{equation}\label{e41}
\mathbb{E}_{i} \left[\int_0^\tau L_{\omega(s)}(y + \mathcal I(\Xi)(s),-\Xi(s)) + \alpha \, ds - b_{i} + b_{\omega(\tau)} \right] \geq 0,
\end{equation}
for any $i \in \{1, \cdots, m\}$, bounded stopping time $\tau$ and $\tau$--cycle $\Xi$.
\end{theorem}

\section{Properties of stopping times}\label{propstoptim}

Given a stopping time $\tau$, the push--forward of $\mathbb{P}_{\mathbf{a}}$ through $\omega(\tau)$ is a probability measure on the indices $\{1,\cdots,m\}$, which can be identified with an element of the simplex (denoted by $\mathcal S$) of probability vectors in $\mathbb{R}^m$. Then
$$\mathbf{a} \mapsto \omega(\tau) \# \mathbb{P}_{\mathbf{a}}$$
defines a map from $\mathcal S$ to $\mathcal S$ which is, in addition, linear.
Hence, thanks to Proposition \ref{prop32}, it can be represented by a stochastic matrix, which we denote by $e^{- A \tau}$, acting on the right, i.e.
\begin{equation}\label{e58}
\mathbf{a} \, e^{- A \tau } = \omega(\tau) \# \mathbb{P}_\mathbf{a} \qquad \mbox{for any } \mathbf{a} \in \mathcal{S}.
\end{equation}

\medskip
\begin{definition}\label{charactsc} We say that $\mathbf{a} \in \mathcal S$ is a {\em characteristic vector} of $\tau$ if it is an eigenvector of $e^{- A \tau}$ corresponding to the eigenvalue $1$, namely $\mathbf {a} = \mathbf {a} \, e^{- A\tau}$.
\end{definition}

\smallskip
\begin{rem}\label{now} According to Proposition \ref{prop33}, any stopping time possesses a characteristic vector $\mathbf{a}$, and
\[ \mathbb E_\mathbf{a} b_{\omega(\tau)} = \mathbf{a} \, e^{- A \tau} \cdot \mathbf{b}= \mathbf{a} \cdot \mathbf{b} \qquad\hbox{for every $\mathbf{b} \in \mathbb{R}^m$.}\]
\end{rem}

\smallskip
According to the remark above, Theorem \ref{t17} takes a simpler form if we just consider expectation operators $\mathbb E_{\mathbf a}$ with $\mathbf a$ a characteristic vector. This result will play a key role in Lemma \ref{post21}.

\smallskip
\begin{corollary}\label{cornow} A function $\mathbf u$ is a subsolution to \eqref{e1} if and only if
\begin{equation}\label{charstric}
\mathbf a \cdot \big ( \mathbf u(x)- \mathbf u(y) \big ) \leq \mathbb E_{\mathbf a} \left ( \int_0^\tau L_{\omega(s)}(x + \mathcal{I}(\Xi)(s),-\Xi(s)) + \beta \, ds \right ),
\end{equation}
for any pair of points $x$, $y$ in $\mathbb{T}^N$, any bounded stopping time $\tau$, any characteristic vector $\mathbf {a}$ of $\tau$, and any $\Xi \in \mathcal K(\tau, y-x)$.
\end{corollary}

\smallskip
\begin{lemma}\label{media} Take $\tau_n$ as in \eqref{splstopping}. Then
\[ e^{-A \tau_n} \to e^{-A \tau} \qquad\hbox{as $n$ goes to infinity.}\]
\end{lemma}
\begin{proof} Let $\mathbf{a} \in \mathcal S$, $\mathbf{b} \in \mathbb{R}^{m}$. Since $\omega$ is right--continuous and $\tau_n \geq \tau$, we get $\omega(\tau_n) \to \omega(\tau)$ for any $\omega \in \mathcal D$, and consequently
\[ b_{\omega(\tau_n)} \to b_{\omega(\tau)}.\]
This implies, taking into account \eqref{e58},
\[ (\mathbf{a} \, e^{- A \tau_n}) \cdot \mathbf{b} = \mathbb E_\mathbf{a} b_{\omega(\tau_n)} \to \mathbb E_\mathbf{a} b_{\omega(\tau)}= (\mathbf{a} \, e^{- A \tau}) \cdot \mathbf{b},\]
which yields the assertion.
$\Box$
\end{proof}

\begin{definition}\label{superstrict} Given a positive constant $\epsilon$, we say that $\tau$ is strongly greater than $\epsilon$, written $\tau \gg \epsilon$, to mean that $\tau - \epsilon$ is still a stopping time, or equivalently
\begin{equation}\label{greatergreater}
\tau \geq \epsilon \;\hbox{a.s.} \quad\hbox{and} \quad \{ \tau \leq t\} \in \mathcal{F}_{t- \epsilon } \quad\hbox{for any $t \geq \epsilon$.}
\end{equation}
Moreover, for $i \in \{1, \cdots, m\}$, we say
$$\tau \gg \epsilon \;\hbox{in $\mathcal D_i$}$$
to mean
\begin{equation}\label{greatergreater1}
\tau \geq \epsilon \;\hbox{a.s.
in $\mathcal D_i$} \quad\hbox{and} \quad \{ \tau \leq t\} \cap \mathcal D_i \in \mathcal F_{t- \epsilon } \quad\hbox{for any $t \geq \epsilon$.}
\end{equation}
\end{definition}

\medskip
\begin{proposition}\label{corollozzo} Let $\epsilon >0$, $i \in \{1, \cdots, m\}$. Then for every $\tau \gg \epsilon$ in $\mathcal D_i$ there exists a positive constant $\rho$, solely depending on $\epsilon$ and on the coupling matrix, such that
\begin{equation}\label{lozzo1}
\left ( e^{- A \tau} \right )_{ij} > \rho \qquad\hbox{for any } j \in \{1, \cdots, m\}.
\end{equation}
\end{proposition}
\begin{proof} We approximate $\tau$ by a sequence of simple stopping times $\tau_n$ with $\tau_n \geq \tau$, as indicated in Proposition \ref{stoppapproxi}. For a fixed $n$, we then have
$$\tau_n= \sum_j \frac j{2^n} \, \mathbb{I}(\{\tau \in [(j-1)/2^n,j/2^n)\}).$$
By the assumption on $\tau$, the set $F_j := \{\tau \in [(j-1)/2^n,j/2^n)\} \cap \mathcal D_i$ belongs to $\mathcal F_{ \frac j{2^n} - \epsilon}$.
By applying Lemma \ref{lem4}, we therefore get
\begin{eqnarray*}
\mathbf{e}_i \, e^{-A \tau_n} &=& \omega(\tau_n) \# \mathbb{P}_i= \sum_j \, \omega( j/2^n)\# (\mathbb{P}_i \mres F_j) = \left ( \sum_j \omega(j/2^n - \epsilon )\# (\mathbb{P}_i \mres F_j) \right ) \, e^{-A \epsilon} \\
&=& \big ( \omega (\tau_n - \epsilon )\# \mathbb{P}_i \big )\, e^{-A \epsilon}.
\end{eqnarray*}
Owing to $\omega (\tau_n - \epsilon )\#\mathbb{P}_i \in \mathcal S$, we deduce
\[ \mathbf{e}_i \, e^{- A \tau_n} \in \{ \mathbf{b} \, e^{-A \epsilon} \mid \mathbf{b} \in \mathcal S\}; \]
we have in addition $ e^{- A \tau_n} \to e^{- A \tau}$ by Lemma \ref{media}, and consequently
\[ \mathbf{e}_i \, e^{- A \tau} \in \{ \mathbf{b} \, e^{-A \epsilon} \mid \mathbf{b} \in \mathcal S\}. \]
This set is compact, and contained in the relative interior of $\mathcal S$ because $e^{-A \epsilon}$ is positive by Proposition \ref{prop35}. Since the components of $\mathbf{e}_i \, e^{- A \tau}$ make up the $i$--th row of $e^{- A \tau}$, we immediately derive the assertion.
$\Box$
\end{proof}

\smallskip
According to the previous proposition and Proposition \ref{prop34}, the characteristic vector of a $\tau \gg \epsilon$, for some $\epsilon >0$, is unique and positive.

\smallskip
\begin{rem}\label{corollozzobis} Take $\tau \gg \epsilon$ and denote by $\rho$ the positive constant satisfying \eqref{lozzo1} for any $i$, $j$, according to Proposition \ref{corollozzo}.
Then, since $e^{- A \tau}$ is a stochastic matrix, we have
\[ \left ( e^{- A \tau} \right )_{ij} = 1 - \sum_{k \neq j} \left ( e^{- A \tau} \right )_{ik} \leq 1 - (m-1) \rho \leq 1 - \rho.\]
\end{rem}

\smallskip
\begin{rem}\label{min} Let $\tau$, $\rho$ be as in the previous remark. If $\mathbf a$ is the characteristic vector of $\tau$, then we get, for any $i$,
\[a_i = \sum_j a_j \, \left (e ^{- A \tau} \right )_{ji} > \rho.\]
\end{rem}

\medskip
For any stopping time $\tau$, we consider the shift flow $\phi_\tau$ on $\mathcal D$ defined by
\begin{eqnarray*}
\phi_\tau: \mathcal D &\to& \mathcal D \\
\omega &\mapsto& \omega(\cdot + \tau(\omega)).
\end{eqnarray*}
We proceed by establishing some related properties.

\smallskip
\begin{lemma}\label{convergeflow} Assume that $\tau_n$ is a sequence of stopping times converging to $\tau$ uniformly in $\mathcal D$. Then
\[ \phi_{\tau_n} \to \phi_{\tau} \qquad\hbox{as $n \to + \infty$}\]
pointwise in $\mathcal D$, with respect to the Skorohod convergence; see Appendix \ref{cadpath} for the definition.
\end{lemma}
\begin{proof} We fix $\omega \in \mathcal D$ and set
\[g_n(t)= t + \tau(\omega) - \tau_n(\omega) \qquad\hbox{for any $n$, $t \geq 0$.}\]
We have, for any $t$,
\[\phi_\tau(\omega)(t)= \omega(t+ \tau_n(\omega) + (\tau(\omega) - \tau_n(\omega)))= \phi_{\tau_n}(\omega)(g_n(t)).\]
This yields the asserted convergence, because the $g_n$ are a sequence of strictly increasing functions uniformly converging to the identity.
$\Box$
\end{proof}

\medskip
\begin{proposition}\label{flowcont} The shift flow $\phi_\tau: \mathcal D \to \mathcal D$ is measurable.
\end{proposition}
\begin{proof} If $\tau$ is a simple stopping time, say of the form $\tau= \sum_k t_k \, \mathbb I(E_k)$, then
\[\phi_\tau(\omega)= \sum_k \phi_{t_k}(\omega) \, \mathbb I(E_k)(\omega),\]
and the assertion follows, $\phi_{t_k}$ and $\mathbb I(E_k)$ being measurable for any $k$. If $\tau$ is not simple then, by Proposition \ref{splstopping}, there exists a sequence of simple stopping times $\tau_n$ converging to $\tau$ uniformly in $\mathcal D$; this implies that $\phi_\tau$ is measurable as well, as a pointwise limit of measurable maps, in force of Lemma \ref{convergeflow}.
$\Box$
\end{proof}

\medskip
We can accordingly define the push--forward probability measure $\phi_\tau \# \mathbb P_{\mathbf{a}}$, for $\mathbf{a} \in \mathcal S$. The following result generalizes Proposition \ref{pushprbdet} to shifts given by stopping times. It will be used in Theorem \ref{cod} and in Lemma \ref{preprese}.
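\smallskip
For concreteness, we point out an elementary example; the example, including the specific choice of the coupling matrix, is ours and only serves as an illustration. Let $m=2$, let
\[ A= \left( \begin{array}{cc} a & -a \\ -b & b \end{array} \right), \qquad a,\, b>0, \]
so that the off--diagonal entries of $A$ are nonpositive and its rows sum to zero, and let $\tau \equiv t >0$ be a deterministic time, so that $e^{- A \tau}$ is the literal matrix exponential. Setting $\theta= e^{-(a+b)t}$, a direct computation gives
\[ e^{-At} = \frac 1{a+b} \left( \begin{array}{cc} b+a\, \theta & a-a\, \theta \\ b-b\, \theta & a+b\, \theta \end{array} \right), \]
a stochastic matrix with positive entries, consistently with Proposition \ref{corollozzo}. Moreover, the vector $\mathbf{a}= \frac 1{a+b}\,(b,\ a)$ satisfies $\mathbf{a} \, e^{-At}= \mathbf{a}$, and is therefore the unique positive characteristic vector of $\tau \equiv t$, in the sense of Definition \ref{charactsc}.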
\smallskip
\begin{theorem}\label{flowflow} Let $\mathbf{a}$ be a probability vector; then
$$ \phi_\tau \# \mathbb P_{\mathbf{a}} = \mathbb P_{\mathbf{a}\,e^{-A \tau}} .$$
\end{theorem}

\medskip
We need the following preliminary result:

\smallskip
\begin{lemma}\label{preflowflow} Let $\mathbf{a}$, $t$, $E$ be a vector in $\mathcal S$, a positive deterministic time and a set in $\mathcal F_t$, respectively; then
\[ \phi_t \# (\mathbb P_\mathbf{a} \mres E) = \mathbb P_{\mathbf{a}}(E) \, \mathbb P_{\mathbf{b}}\qquad\hbox{for some $\mathbf{b} \in \mathcal S $.} \]
\end{lemma}
\begin{proof} We first assume $E$ to be a cylinder, namely
\[E= \mathcal C(t_1, \cdots,t_k; j_1, \cdots, j_k)\]
for some times and indices; notice that the condition $E \in \mathcal F_t$ implies $t_k \leq t$. We fix $i \in \{1, \cdots, m\}$ and consider a cylinder $C \subset \mathcal D_i$, namely
\[C= \mathcal C(0, s_2, \cdots, s_m; i, i_2, \cdots, i_m)\]
for some choice of times and indices.
We set
\[F= \{\omega \mid \phi_t(\omega) \in C\} \cap E;\]
then
\[F=\mathcal C(t_1, \cdots,t_k, t, t+s_2, \cdots,t+s_m; j_1, \cdots, j_k, i, i_2, \cdots, i_m).\]
We have
\begin{eqnarray*}
\phi_t \# (\mathbb P_\mathbf{a} \mres E)(C) &=& \mathbb P_\mathbf{a} (F) \\
&=& \left (\mathbf{a} \,e^{-A t_1} \right )_{j_1} \, \prod_{l=2}^{k} \left (e^{-A (t_l-t_{l-1})} \right )_{j_{l-1} \, j_l} \, \left (e^{-A (t- t_k)} \right )_{ j_k \, i} \, \prod_{r=2}^{m} \left (e^{-A (s_r-s_{r-1})} \right )_{i_{r-1} \, i_r} \\
&=& \mathbb P_\mathbf{a} (E) \, \left (e^{-A (t- t_k)} \right )_{ j_k \, i} \, \prod_{r=2}^{m} \left (e^{-A (s_r-s_{r-1})} \right )_{i_{r-1} \, i_r};
\end{eqnarray*}
we also have
\[ \mathbb P_i(C)= \prod_{r=2}^{m} \left (e^{-A (s_r-s_{r-1})} \right )_{i_{r-1} \, i_r},\]
and we consequently get the relation
\[ \phi_t \# ( \mathbb P_\mathbf{a} \mres E)(C)= \mathbb P_\mathbf{a} (E) \, \mu_i \, \mathbb P_i(C)\]
with
\begin{equation}\label{floflo1}
\mu_i= \left (e^{-A (t- t_k)} \right )_{ j_k \, i},
\end{equation}
just depending on $E$ and $i$. If $C$ is any cylinder, we write
\begin{equation}\label{floflo2}
\phi_t \# (\mathbb P_\mathbf{a} \mres E)(C) = \sum_i \phi_t \# (\mathbb P_\mathbf{a}\mres E)(C \cap \mathcal D_i) = \mathbb P_\mathbf{a}(E) \, \sum_i \mu_i \, \mathbb P_i(C),
\end{equation}
where the $\mu_i$ are defined as in \eqref{floflo1}.
Taking into account that $\mu_i \geq 0$ for any $i$ and $\sum_i \mu_i =1$, so that $\mathbf{b} := \sum_i \mu_i \, \mathbf{e}_i \in \mathcal S$, we derive from \eqref{floflo2}
\[\phi_t \# (\mathbb P_\mathbf{a} \mres E)(C) = \mathbb P_\mathbf{a}(E) \, \mathbb P_\mathbf{b}(C).\]
This in turn implies, by Proposition \ref{klenke},
\begin{equation}\label{floflo3}
\phi_t \# (\mathbb P_\mathbf{a}\mres E) = \mathbb P_\mathbf{a}(E) \, \mathbb P_\mathbf{b},
\end{equation}
showing the assertion in the case where $E$ is a cylinder. If instead $E$ is a multi--cylinder, namely $E=\cup_j E_j$ with $E_j$ mutually disjoint cylinders, then by the previous step
\[\phi_t \# (\mathbb P_\mathbf{a}\mres E) = \sum_j \phi_t \#(\mathbb P_\mathbf{a} \mres E_j) = \sum_j \mathbb P_\mathbf{a}(E_j) \, \mathbb P_{\mathbf{b}_j},\]
which again implies \eqref{floflo3} with
\[ \mathbf{b} = \sum_j \frac{\mathbb P_\mathbf{a}(E_j)}{\mathbb P_\mathbf{a}(E)} \, \mathbf{b}_j.\]
Finally, for a general $E$, we know from Proposition \ref{klenke} that there is a sequence of multi--cylinders $E_n$ with
\begin{equation}\label{flowflow2}
\lim_n \mathbb P_\mathbf{a}(E_n \triangle E)=0.
\end{equation}
Given $F \in {\cal F}$, we set
\[C = \{\omega \mid \phi_t(\omega) \in F\};\]
we have
\[ \phi_t \# (\mathbb P_\mathbf{a} \mres E_n)(F)= \mathbb P_\mathbf{a} (C \cap E_n ) \leq \mathbb P_\mathbf{a} \big ((C\cap E) \cup (E \triangle E_n) \big ) \leq \phi_t \# (\mathbb P_\mathbf{a} \mres E)(F) + \mathbb P_\mathbf{a} (E \triangle E_n), \]
and similarly
\[ \phi_t \# (\mathbb P_\mathbf{a} \mres E)(F) \leq \phi_t \# (\mathbb P_\mathbf{a} \mres E_n)(F) + \mathbb P_\mathbf{a} (E \triangle E_n) .\]
We deduce, in force of \eqref{flowflow2},
\[\lim_n \phi_t \# (\mathbb P_\mathbf{a} \mres E_n)(F) = \phi_t \# (\mathbb P_\mathbf{a} \mres E)(F),\]
which in turn implies that $\phi_t \# (\mathbb P_\mathbf{a} \mres E_n)$ weakly converges to $\phi_t \# (\mathbb P_\mathbf{a} \mres E)$. Since, by the previous step in the proof,
\[\phi_t \# (\mathbb P_\mathbf{a} \mres E_n)= \mathbb P_\mathbf{a}(E_n) \, \mathbb P_{\mathbf{b}_n} \qquad\hbox{for some $\mathbf{b}_n \in \mathcal S$,}\]
we derive from Proposition \ref{conti} and \eqref{flowflow2}
\[\phi_t \# (\mathbb P_\mathbf{a} \mres E)= \mathbb P_\mathbf{a}(E) \, \mathbb P_{\mathbf{b}} \qquad\hbox{with $\mathbf{b} =\lim_n \mathbf{b}_n$.}\]
This concludes the proof.
$\Box$
\end{proof}

\medskip
\begin{proof} ({\em of Theorem \ref{flowflow}}) We first show that
\begin{equation}\label{flowflow1}
\phi_\tau \# \mathbb P_\mathbf{a} = \mathbb P_\mathbf{b}\qquad\hbox{for a suitable $\mathbf{b} \in \mathcal S$.}
\end{equation}
If $\tau = \sum_k t_k \, \mathbb I(E_k)$ is simple, then by Lemma \ref{preflowflow}
\[\phi_\tau \# \mathbb P_\mathbf{a} = \sum_k \phi_{t_k} \# (\mathbb P_\mathbf{a} \mres E_k) = \sum_k \mathbb P_\mathbf{a}(E_k) \, \mathbb P_{\mathbf{b}_k}\]
for some $\mathbf{b}_k \in \mathcal S$, and we deduce \eqref{flowflow1} with $\mathbf{b}= \sum_k \mathbb P_\mathbf{a}(E_k) \, \mathbf{b}_k$. Given a general stopping time $\tau$, we approximate it by a sequence of simple stopping times $\tau_n$ and, exploiting the previous step, we consider $\mathbf{b}_n \in \mathcal S$ with
\[\phi_{\tau_n} \# \mathbb P_\mathbf{a}= \mathbb P_{\mathbf{b}_n}.\]
We know from Lemma \ref{convergeflow} that
\[ \phi_{\tau_n}(\omega) \to \phi_\tau(\omega) \qquad\hbox{for any $\omega$, in the Skorohod sense,}\]
and we derive, via the Dominated Convergence Theorem,
\[ \mathbb E_\mathbf{a} f(\phi_{\tau_n}) \to \mathbb E_\mathbf{a} f( \phi_\tau)\]
for any bounded function $f: \mathcal D \to \mathbb{R}$ which is continuous with respect to the Skorohod convergence. Using the change of variables formula (\ref{changevar}), we get
\[ \int_\mathcal{D} f \, d \, \phi_{\tau_n} \# \mathbb P_\mathbf{a} \to \int_\mathcal{D} f \, d \, \phi_{\tau} \# \mathbb P_\mathbf{a},\]
or equivalently
\[ \mathbb P_{\mathbf{b}_n} = \phi_{\tau_n} \# \mathbb P_\mathbf{a} \to \phi_{\tau} \# \mathbb P_\mathbf{a}\]
in the sense of weak convergence of measures.
This in turn implies, by the continuity property stated in Proposition \ref{conti}, that $\mathbf{b}_n$ is convergent in $\mathbb{R}^{m}$ and
\[\mathbb P_{\mathbf{b}_n} \to \mathbb P_{\mathbf{b}}\qquad\hbox{with $\mathbf{b}= \lim_n \mathbf{b}_n$,}\]
which shows \eqref{flowflow1}. We can compute the components of $\mathbf{b}$ via
\[ b_i= \mathbb P_\mathbf{a}\{\omega\mid \phi_\tau(\omega) \in \mathcal D_i\}= \mathbb P_\mathbf{a}\{\omega \mid \omega(\tau(\omega)) =i\}= \big (\omega(\tau) \# \mathbb P_\mathbf{a} \big )_i= \big( \mathbf{a} \, e^{- A \tau} \big )_i.\]
This concludes the proof.
$\Box$
\end{proof}

\section{Cycle iteration}\label{reptcyc}
\parskip +3pt

It is immediate that we can construct a sequence of (deterministic) cycles going through a given closed curve any number of times. We aim at generalizing this iterative procedure to the random setting we are working with, starting from a $\tau^0$--cycle, for some stopping time $\tau^0$. In this case the construction is more involved and requires some details.

\smallskip
Let $\tau^0$, $\Xi^0$ be a simple stopping time and a $\tau^0$--cycle, respectively. We recursively define, for $j \geq 0$,
\begin{equation}\label{buckle1}
\tau^{j+1}(\omega) = \tau^0(\omega) + \tau^j(\phi_{\tau^0}(\omega))
\end{equation}
and
\begin{equation}\label{cape1}
\Xi^{j+1}(\omega)(s) = \left \{ \begin{array}{ll} \Xi^j(\omega)(s) & \hbox{for $s \in [0,\tau^j(\omega))$,} \\ \Xi^0(\phi_{\tau^j}(\omega))(s-\tau^j(\omega)) & \hbox{for $s \in [\tau^j(\omega),+ \infty)$.} \end{array} \right.
\end{equation}

\smallskip
We will prove below that the $\Xi^j$ make up the sequence of iterated random cycles we are looking for. A first step is:

\medskip
\begin{proposition}\label{cedro} The $\tau^j$, defined by \eqref{buckle1}, are simple stopping times for all $j$. If, in addition, $\tau^0 \gg \delta$ in $\mathcal D_i$, for some $i \in \{1, \cdots, m\}$ and $\delta >0$, then $\tau^j \gg \delta$ in $\mathcal D_i$.
\end{proposition}
\begin{proof} We argue by induction on $j$. The property is true for $j=0$; assume, as inductive step, that $\tau^j$ is a simple stopping time. Then, by Proposition \ref{flowcont}, $\tau^{j+1}$ is a random variable taking nonnegative values, as sum and composition of measurable maps. Assume
\begin{eqnarray}
\tau^0 &=& \sum_{l=1}^{m_0} s_l \, \mathbb I(F_l), \label{cedro1}\\
\tau^j &=& \sum_{k=1}^{m_j} t_k \, \mathbb I(E_k); \label{cedro2}
\end{eqnarray}
then the sets
\[F_l \cap \{\omega \mid \phi_{\tau^0}(\omega) \in E_k\}, \qquad l=1, \cdots, m_0, \; k=1, \cdots, m_j,\]
are mutually disjoint and their union is the whole $\mathcal D$. Moreover, if
\[\omega \in F_l \cap \{\omega \mid \phi_{\tau^0}(\omega) \in E_k\},\]
then
\[\tau^{j+1}(\omega) = \tau^0 (\omega) + \tau^j(\phi_{\tau^0}(\omega))= s_l +t_k,\]
which shows that $\tau^{j+1}$ is simple. Since $\tau^0$, $\tau^j$ are stopping times, $F_l \in \mathcal F_{s_l}$ and $E_k \in \mathcal F_{t_k}$.
By Proposition \ref{supershift},
\[F_l \cap \{\omega \mid \phi_{\tau^0}(\omega) \in E_k\} \in \mathcal F_{s_l +t_k},\]
which shows that $\tau^{j+1}$ is a stopping time.\\
Moreover, if $\tau^0 \gg \delta$ in $\mathcal D_i$, then $F_l \cap \mathcal D_i \in \mathcal F_{s_l-\delta}$ and consequently
\[F_l \cap \mathcal D_i \cap \{\omega \mid \phi_{\tau^0}(\omega) \in E_k\} \in \mathcal F_{s_l +t_k -\delta},\]
which shows that $\tau^{j+1} \gg \delta$ in $\mathcal D_i$.
$\Box$
\end{proof}

\medskip
The main result of the section is:

\smallskip
\begin{theorem}\label{cod} The $\Xi^j$, as defined in \eqref{cape1}, are $\tau^j$--cycles for all $j$.
\end{theorem}

\smallskip
A lemma is preliminary.

\smallskip
\begin{lemma}\label{precod} For any $j$, $\omega$,
\[\tau^{j+1}(\omega)= \tau^j(\omega) + \tau^0(\phi_{\tau^j}(\omega)).\]
\end{lemma}
\begin{proof} Given $j \geq 1$, we preliminarily write
\begin{eqnarray*}
\phi_{\tau^{j-1}} ( \phi_{\tau^0}(\omega))(s) &=& \phi_{\tau^0}(\omega)(s + \tau^{j-1}(\phi_{\tau^0}(\omega))) \\
&=& \omega(s + \tau^0(\omega) + \tau^{j-1}(\phi_{\tau^0}(\omega))) = \omega(s + \tau^j(\omega))= \phi_{\tau^j}(\omega)(s),
\end{eqnarray*}
which gives
\begin{equation}\label{precod1}
\phi_{\tau^{j-1}} \circ \phi_{\tau^0}= \phi_{\tau^j}.
\end{equation}
We now argue by induction on $j$. The formula in the statement is true for $j= 0$; we show that it is true for $j+1$ provided it holds for $j \geq 0$.
We have, taking into account \eqref{precod1},
\begin{eqnarray*}
\tau^{j+1}(\omega) &=& \tau^0(\omega) + \tau^j(\phi_{\tau^0}(\omega)) = \tau^0(\omega) + \tau^{j-1}(\phi_{\tau^0}(\omega)) + \tau^0(\phi_{\tau^{j-1}}(\phi_{\tau^0}(\omega))) \\
&=& \tau^j(\omega) + \tau^0(\phi_{\tau^{j-1}}(\phi_{\tau^0}(\omega))) = \tau^j(\omega) + \tau^0(\phi_{\tau^j}(\omega)),
\end{eqnarray*}
as asserted.
$\Box$
\end{proof}

\medskip
\begin{proof} (of Theorem \ref{cod}) The property is true for $j=0$; we then argue by induction on $j$. We exploit the principle that $\Xi^j$ is a control if and only if the maps $\omega \mapsto \Xi^j(\omega)(s)$ from $\mathcal D$ to $\mathbb{R}^N$ are $\mathcal F_s$--measurable for all $s$. Given $s$ and a Borel set $B$ of $\mathbb{R}^N$, we therefore need to show
\begin{equation}\label{cedro00}
\{\omega \mid \Xi^{j+1}(\omega)(s) \in B\} \in \mathcal F_s,
\end{equation}
knowing that $\Xi^0$, $\Xi^j$ are controls, the first by assumption and the latter by the inductive step. We set
\[E = \{ \tau^j > s\};\]
then we have, by the very definition of $\Xi^{j+1}$,
\begin{equation}\label{cedro0}
\{\omega \mid \Xi^{j+1}(s) \in B\}= F_1 \cup F_2
\end{equation}
with
\begin{eqnarray*}
F_1 &=& \{\omega \mid \Xi^j(s) \in B\} \cap E, \\
F_2 &=& \{\omega \mid \Xi^0(\phi_{\tau^j}(\omega))(s-\tau^j(\omega)) \in B \} \setminus E.
\end{eqnarray*}
We know that
\begin{equation}\label{cedro10}
F_1 \in \mathcal F_s,
\end{equation}
because $\tau^j$ is a stopping time and $\Xi^j$ a control.
Assume now $\tau^j$ to be of the form
\[\sum_{k=1}^{m_j} t_k \, \mathbb I(E_k);\]
then $E_k \setminus E = E_k$ or $E_k \setminus E = \emptyset$ according to whether $t_k \leq s$ or $t_k > s$, and so
\[ F_2 = \bigcup_{t_k \leq s} \, \{\omega \mid \Xi^0(\phi_{t_k}(\omega))(s-t_k) \in B \} \cap E_k. \]
Consequently, if $t_k \leq s$, $\Xi^{j+1}(s)$ is represented in $E_k$ by the composition of the following maps:
\[ \omega \xrightarrow{\psi_1} \phi_{t_k}(\omega) \xrightarrow{\psi_2} \Xi^0(\phi_{t_k}(\omega))\xrightarrow{\psi_3} \Xi^0(\phi_{t_k}(\omega))(s-t_k).\]
By the very definition of the $\sigma$--algebra $\mathcal F'_t$, $\psi_3^{-1}(\mathcal B) \subset \mathcal F'_{s-t_k}$; moreover, since $\Xi^0$ is adapted, $\psi_2^{-1}(\mathcal F'_{s-t_k}) \subset \mathcal F_{s-t_k}$, and finally $\psi_1^{-1}(\mathcal F_{s-t_k}) \subset \mathcal F_{s}$ by Proposition \ref{supershift}. We deduce, taking also into account that $E_k \in \mathcal F_{t_k} \subset \mathcal F_{s}$, that if $t_k \leq s$ then
\[\{\omega \mid \Xi^{j+1}(s) \in B\} \cap E_k = \{\omega \mid \Xi^0(\phi_{t_k}(\omega))(s-t_k) \in B \} \cap E_k \in {\cal F}_s,\]
and consequently $F_2$, being a union of sets in ${\cal F}_s$, belongs to $\mathcal F_s$ as well.
By combining this information with \varepsilonqref{cedro0}, \varepsilonqref{cedro10}, we prove \varepsilonqref{cedro00} and conclude that $\Xi^{j+1}$ is a control.\\ To show that $ \Xi^{j+1}$ is a $ \tauau^{j+1}$--cycle, we use the very definition of $\tauau^{j+1}$, $\Xi^{j+1}$ and write for any $\overlinemega$ \muathbf{b}egin{equation}{\lambda}abel{cod1} \inftynt_0^{ \tauau^{j+1}(\overlinemega)} \Xi^{j+1}(\overlinemega) \, ds = I(\overlinemega) + J(\overlinemega) \varepsilonnd{equation} with \muathbf{b}egin{eqnarray*} I(\overlinemega) &=& \inftynt_0^{ \tauau^j(\overlinemega)} \Xi^j(\overlinemega) \, ds \\ J(\overlinemega) &=& \inftynt_{\tauau^j(\overlinemega)}^{ \tauau^{j+1}(\overlinemega)} \Xi^0(\partialhi_{\tauau^j}(\overlinemega))(s-\tauau^j(\overlinemega)) \, ds. \varepsilonnd{eqnarray*} Due to $\Xi^j$ being a $\tauau^j$--cycle, we have \muathbf{b}egin{equation}{\lambda}abel{cod2} I(\overlinemega) = 0 \quad\hbox{a.s.} \varepsilonnd{equation} We change the variable in $J$, setting $t= s - \tauau^j(\overlinemega)$, and exploit Lemma \rhoef{precod} to get \muathbf{b}egin{equation}{\lambda}abel{cod20} J(\overlinemega)= \inftynt_0^{\tauau^0(\partialhi_{\tauau^j}(\overlinemega))} \Xi^0(\partialhi_{\tauau^j}(\overlinemega))(t) \, dt. \varepsilonnd{equation} Let $E$ be any set in $\muathcal F$ and $\muathbf{a}$ a positive probability vector. We integrate $J(\overlinemega)$ over $E$ with respect to $\muathbb P_\muathbf{a}$ using \varepsilonqref{cod20}, replace $\partialhi_{\tauau^j}(\overlinemega))$ by $\overlinemega$ via change of variable formula, and exploit Theorem \rhoef{flowflow}. We obtain \muathbf{b}egin{equation}{\lambda}abel{cod30} \inftynt_E J(\overlinemega) \, d\muathbb P_\muathbf{a} = \inftynt_{\partialhi_{\tauau^j}(E)} {\lambda}eft (\inftynt_0^{\tauau^0(\overlinemega))} \Xi^0(\overlinemega)(t) \, dt \rhoight ) \, d \muathbb P_{\muathbf{a} e^{- A \tauau^j}} . 
\varepsilonnd{equation} Due to $\Xi_0$ being a $\tauau^0$--cycle \[ \inftynt_0^{\tauau^0(\overlinemega)} \Xi^0(\overlinemega)(t) \, dt = 0 \qquad\hbox{a.s,}\] and therefore the integral in the right hand--side of \varepsilonqref{cod30} is vanishing and so \[ \inftynt_E J(\overlinemega) \, d\muathbb P_\muathbf{a} =0.\] Since $E$ has been arbitrarily chosen in $\muathcal F$ and $\muathbf{a} >0$, we deduce in force of Lemma \rhoef{nullo} \[J(\overlinemega)= 0 \qquad\hbox{a.s.}\] This information combined with \varepsilonqref{cod1}, \varepsilonqref{cod2} shows that $\Xi^{j+1}$ is a $\tauau^{j+1}$--cycle and conclude the proof. $ {{\cal B}ox}$ \varepsilonnd{proof} \sigmamallskip \sigmaection{Dynamical characterization of the Aubry set }{\lambda}abel{dynpropaub} In this section we give the main results of the paper on the cycle characterization of the Aubry set. \sigmamallskip As explained in the Introduction, a key step is to establish a strengthened version of Theorem \rhoef{t18}, which is based on the cycle iteration technique presented in Section \rhoef{reptcyc}. \sigmamallskip \muathbf{b}egin{theorem}{\lambda}abel{t20} Given $\varepsilonpsilonilon >0$, $\muathbf{a}lpha \gammaeq \muathbf{b}eta$, and $y \inftyn \muathbb{T}^{N}$, $\muathbf{b} \inftyn \muathcal{F}_{\muathbf{a}lpha}(y) $ if and only if \muathbf{b}egin{equation}{\lambda}abel{e57} \muathbb{E}_{k} {\lambda}eft (\inftynt_0^\tauau L_{\overlinemega(s)}(y + \muathcal I(\Xi)(s),-\Xi(s)) + \muathbf{b}eta \, ds - b_{k} + b_{\overlinemega(\tauau)} \rhoight )\gammaeq 0, \varepsilonnd{equation} for any $k \inftyn \{1, \cdots, m\}$, $\tauau \gammag \varepsilonpsilonilon$ bounded stopping times and $\tauau$--cycles $\Xi$. \varepsilonnd{theorem} We break the argument in two parts. The first one is presented in a preliminary lemma. 
\smallskip

\begin{lemma}\label{preprese} Let $i \in \{1, \cdots, m\}$, $\mathbf{b} \in \mathbb R^m$, $\delta >0$, and assume $\tau^0$ to be a simple stopping time vanishing outside $\mathcal D_i$, with $\tau^0 \gg \delta$ in $\mathcal D_i$, satisfying
\[\mathbb E_i \left (\int_0^{\tau^0} L_{\omega(s)}(y + \mathcal I(\Xi^0)(s),-\Xi^0(s)) + \beta \, ds - b_i + b_{\omega(\tau^0)} \right ) =: - \mu < 0. \]
Then for any $j \in \mathbb N$
\begin{equation}\label{crux}
\mathbb E_i \left (\int_0^{\tau^j} L_{\omega(s)}(y + \mathcal I(\Xi^j)(s),-\Xi^j(s)) + \beta \, ds - b_i + b_{\omega(\tau^j)} \right ) < - \mu \, ( 1+ \rho \, j ),
\end{equation}
where $\tau^j$, $\Xi^j$ are as in \eqref{buckle1}, \eqref{cape1}, respectively, and $\rho$ is the positive constant, provided by Proposition \ref{corollozzo}, with
\[ \left ( e^{- A \tau} \right )_{ik} > \rho \quad\hbox{for any $\tau \gg \delta$ in $\mathcal D_i$, $k= 1, \cdots, m$.}\]
\end{lemma}

\begin{proof} We denote by $I_j$ the expectation on the left-hand side of \eqref{crux} and argue by induction on $j$. Formula \eqref{crux} is true for $j=0$, and we assume, as inductive hypothesis, that it holds for some $j \geq 0$.

Taking into account that
\[ \Xi^{j+1}(\omega)(s)= \Xi^j(\omega)(s) \qquad\hbox{in $[0,\tau^j(\omega))$ for any $\omega$,}\]
we get, by applying the inductive hypothesis,
\begin{equation}\label{crux0}
I_{j+1} = I_j + K_j \leq - \mu \, ( 1+ \rho \, j ) + K_j
\end{equation}
with
\[K_j= \mathbb E_i \left (\int_{\tau^j}^{\tau^{j+1}} L_{\omega(s)}(y + \mathcal I(\Xi^{j+1})(s),-\Xi^{j+1}(s)) + \beta \, ds - b_{\omega(\tau^j)} + b_{\omega(\tau^{j+1})} \right ).\]
We further get, by applying Lemma \ref{precod} and the very definition of $\Xi^{j+1}$,
\begin{equation}\label{crux1}
K_j = \mathbb E_i\big( W(\omega)\big) + \mathbb E_i \left(- b_{\omega(\tau^j)} + b_{\omega(\tau^j + \tau^0(\phi_{\tau^j}))}\right)
\end{equation}
where
\[W(\omega) = \int_{\tau^j}^{\tau^j + \tau^0(\phi_{\tau^j})} L_{\omega(s)}(y + \mathcal I(\Xi^0(\phi_{\tau^j}))(s - \tau^j),-\Xi^0(\phi_{\tau^j})(s - \tau^j)) + \beta \, ds.\]
We fix $\omega$ and set $t =s - \tau^j(\omega)$; we have
\[ W(\omega) = \int_{0}^{\tau^0(\phi_{\tau^j}(\omega))} L_{\phi_{\tau^j}(\omega)(t)}(y + \mathcal I(\Xi^0(\phi_{\tau^j}(\omega)))(t),-\Xi^0(\phi_{\tau^j}(\omega))(t)) + \beta \, dt.
\]
By using the above relation, the change of variables formula (from $\phi_{\tau^j}(\omega)$ to $\omega$), and Theorem \ref{flowflow}, we obtain
\begin{equation}\label{crux2}
\mathbb E_i W(\omega) = \mathbb E_{\mathbf{e}_i e^{-A \tau^j}} \left (\int_0^{\tau^0} L_\omega (y+ \mathcal I(\Xi^0(\omega)), - \Xi^0(\omega)) + \beta \, ds \right ).
\end{equation}
We also have, by applying the same change of variables,
\begin{eqnarray*}
\mathbb E_i \left (- b_{\omega(\tau^j)} + b_{\omega(\tau^j + \tau^0(\phi_{\tau^j}))} \right ) &=&\mathbb E_i \left (- b_{\phi_{\tau^j}(\omega)(0)} + b_{\phi_{\tau^j}(\omega)(\tau^0)} \right ) \\
&=& \mathbb E_{\mathbf{e}_i e^{-A \tau^j}} \left (- b_{\omega(0)} + b_{\omega(\tau^0)} \right ).
\end{eqnarray*}
By using the above relation, \eqref{crux1}, \eqref{crux2} and the fact that $\tau^0$ vanishes outside $\mathcal D_i$, we obtain
\begin{eqnarray*}
K_j &=& \mathbb E_{{\mathbf{e}_i e^{-A \tau^j}}} \left (\int_0^{\tau^0} L_{\omega(s)}(y + \mathcal I(\Xi^0)(s),-\Xi^0(s)) + \beta \, ds - b_i + b_{\omega(\tau^0)} \right) \\
&=& \left (e^{-A \tau^j}\right )_{ii} \, \mathbb E_i \left (\int_0^{\tau^0} L_{\omega(s)}(y + \mathcal I(\Xi^0)(s),-\Xi^0(s)) + \beta \, ds - b_i + b_{\omega(\tau^0)} \right ) \\
&<& - \rho \, \mu.
\end{eqnarray*}
By plugging this relation into \eqref{crux0}, we end up with
\[I_{j+1} < - \mu \, ( 1+ \rho \, (j+1) ), \]
proving \eqref{crux}. $\Box$
\end{proof}

\medskip

\begin{proof} (of Theorem \ref{t20})\\
The first implication is direct by Theorem \ref{t18}.\\
Conversely, if $\mathbf b \notin \mathcal{F}_{\beta}(y)$ then there exist, by Theorem \ref{t18}, an index $i \in \{1, \cdots, m\}$, a bounded stopping time $\tau^0$ and a $\tau^0$--cycle $\Xi^0$ such that
\begin{equation} \label{pres1}
\mathbb{E}_{i} \left (\int_0^{\tau^0} L_{\omega(s)}(y + \mathcal I(\Xi^0)(s),-\Xi^0(s)) + \beta \, ds - b_{i} + b_{\omega(\tau^0)} \right ) =: - \mu < 0.
\end{equation}
We can also assume $\tau^0 = 0$ outside $\mathcal D_i$ without affecting \eqref{pres1}. We set
\[\widetilde \Xi(\omega)(s) = \left \{ \begin{array}{ll}
\Xi^0(\omega)(s), & \hbox{for $\omega \in \mathcal{D}$, $s \in [0,\tau^{0}(\omega))$} \\
0, & \hbox{for $\omega \in \mathcal D$, $s \in [\tau^0(\omega),+\infty)$.} \\
\end{array} \right.\]
We claim that $\widetilde \Xi$ is still a $\tau^0$--cycle; the only property requiring some detail is actually the nonanticipating character. We take $\omega_1=\omega_2$ in $[0,t]$, for some positive $t$, and consider two possible cases:\\

\smallskip

\noindent \underline{\textbf{Case 1:}} If $s:=\tau^0(\omega_1)\leq t$ then
$$\omega_1 \in A:=\{\omega \mid \tau^{0}(\omega)=s\} \in {\cal F}_s \subseteq {\cal F}_t,$$
which yields $\omega_2 \in A$ and hence $\tau^0(\omega_1)=\tau^0(\omega_2)=s$.\\
In this case
\begin{align*}
& \widetilde \Xi(\omega_1)= \Xi^0(\omega_1)= \Xi^0(\omega_2)=\widetilde\Xi(\omega_2) \qquad\hbox{in $[0,s]$}, \\
&\widetilde\Xi(\omega_1)= \widetilde\Xi (\omega_2)=0 \qquad\hbox{in $[s,t]$}.
\end{align*}

\smallskip

\noindent \underline{\textbf{Case 2:}} If $\tau^0(\omega_1)> t$, then
$$\omega_1 \in \{\omega \mid \tau^{0}(\omega)>t\} \in {\cal F}_t,$$
which implies that $\omega_{2}$ belongs to the above set and consequently $\tau^0(\omega_2)> t$. Therefore
$$\widetilde\Xi(\omega_1)= \Xi^0(\omega_1)= \Xi^0(\omega_2)=\widetilde\Xi(\omega_2) \qquad\hbox{in $[0,t]$}.$$
This shows the claim. Therefore we can assume that the $\Xi^0$ appearing in \eqref{pres1} vanishes when $t \geq \tau^0(\omega)$ for any $\omega$. We know from Proposition \ref{stoppapproxi} that there is a nonincreasing sequence $\tau'_n$ of simple stopping times with
\[ \tau'_n \to \tau^0 \quad\hbox{uniformly in $\mathcal D$.}\]
We define
\[ \tau_n = \left \{\begin{array}{cc}
\tau'_n + \frac 1n & \;\hbox{in $\mathcal D_i$} \\
0 & \;\quad\qquad\quad\;\hbox{in $\mathcal D_k$ for $k \neq i$.} \\
\end{array} \right .\]
The $\tau_n$ are simple stopping times; moreover, since $\tau^0$ vanishes outside $\mathcal D_i$ and $\frac 1n \to 0$, we get
\[ \tau_n \geq \tau^0 \quad\hbox{and} \quad \tau_n \to \tau^0 \quad\hbox{uniformly in $\mathcal D$;}\]
in addition
\[\{\tau_n \leq t\} \cap \mathcal D_i = \{\tau'_n + 1/n \leq t\} \cap \mathcal D_i \in \mathcal F_{t - 1/n} \quad\hbox{for $t \geq \frac {1}{n}$,}\]
which shows that
\[\tau_n \gg \frac 1n \quad\hbox{in $\mathcal D_i$.}\]
It is clear that $\widetilde \Xi$ belongs to ${\cal K}(\tau_{n},0)$; we further have
\begin{eqnarray*}
\left | \int_0^{\tau_n} L_{\omega(s)}( y+\mathcal I(\Xi^0),- \Xi^0) \, ds -\int_0^{\tau^0} L_{\omega(s)}( y+\mathcal I(\Xi^0),-\Xi^0) \, ds \right |\leq \int_{\tau^0}^{\tau_n } |L_{\omega(s)}( y,0)| \, ds.
\end{eqnarray*}
Owing to the boundedness of the integrand and the fact that $\tau_n \rightarrow \tau^0$, the right-hand side of the above formula becomes infinitesimal as $n$ goes to infinity. Therefore the strictly negative inequality in \eqref{pres1} is maintained when replacing $\tau^0$ by $\tau_n$, for a suitable $n$. Hence we can assume, without loss of generality, that the $\tau^0$ appearing in \eqref{pres1} satisfies the assumptions of Lemma \ref{preprese} for a suitable $\delta >0$. Let $\tau^j$, $\Xi^j$ be as in \eqref{buckle1}, \eqref{cape1}; we define for any $j$
\[ \widetilde\tau^j = \tau^{j} + \epsilon \]
and
\[\widetilde \Xi^j(\omega)(s) = \left \{ \begin{array}{cc}
\Xi^{j}(\omega)(s) & \quad\hbox{for $s \in [0, \tau^{j}(\omega))$} \\
0 & \quad\hbox {for $s \in [\tau^{j}(\omega),\widetilde\tau^{j}(\omega))$.} \\
\end{array} \right .\]
The $\widetilde\tau^j$ are clearly stopping times with $\widetilde\tau^j \gg \epsilon$, and the $\widetilde \Xi^j$ are $\widetilde\tau^j$--cycles.\\
We have
\[ \mathbb E_i \left (\int_0^{\widetilde\tau^j} L_{\omega(s)}(y + \mathcal I(\widetilde \Xi^j)(s),-\widetilde \Xi^j(s)) + \beta \, ds - b_i + b_{\omega(\widetilde\tau^j)} \right )= U_j + V_j\]
with
\begin{eqnarray*}
U_j &=& \mathbb E_i \left (\int_0^{\tau^{j}} L_{\omega(s)}(y + \mathcal I(\Xi^{j})(s),-\Xi^{j}(s)) + \beta \, ds - b_i + b_{\omega(\tau^{j})} \right ) \\
V_j &=& \mathbb E_i \left (\int_{\tau^{j}}^{\tau^{j} +\epsilon} L_{\omega(s)}(y,0) + \beta \, ds - b_{\omega(\tau^{j})} +b_{\omega(\tau^{j}+ \epsilon)} \right ).
\end{eqnarray*}
The term $U_j$ diverges negatively as $j \to + \infty$ by Lemma \ref{preprese}, while $V_j$ stays bounded, which implies
\[ \mathbb E_i \left (\int_0^{\widetilde\tau^j} L_{\omega(s)}(y + \mathcal I(\widetilde \Xi^j)(s),-\widetilde \Xi^j(s)) + \beta \, ds - b_i + b_{\omega(\widetilde\tau^j)} \right ) < 0\]
for $j$ large. Taking into account that $\widetilde \tau^j \gg \epsilon$ and Theorem \ref{t18}, the last inequality shows that stopping times strongly greater than $\epsilon$ and corresponding cycles based at $y$ are sufficient to characterize the values $\mathbf b \not\in \mathcal F_\beta(y)$. This concludes the proof. $\Box$
\end{proof}

\medskip

Next we state and prove the first main theorem.
\smallskip

\begin{theorem}\label{t19} Given $\epsilon >0$, $y \in \mathbb T^N$, $\mathbf b \in \mathbb{R}^m$, we consider
\begin{equation}\label{e43}
\inf \, \mathbb E_i \left [ \int_0^\tau L_{\omega(s)}(y +\mathcal I(\Xi),-\Xi) + \beta \, ds - b_{\omega(0)}+b_{\omega(\tau)} \right ],
\end{equation}
where the infimum is taken with respect to any bounded stopping time $\tau \gg \epsilon$ and $\tau$--cycle $\Xi$. The following properties are equivalent:
\begin{itemize}
\item[(i)] $y \in \mathcal A$;
\item[(ii)] the infimum in \eqref{e43} is zero for any index $i$ and any $\mathbf b \in \mathcal F_\beta(y)$;
\item[(iii)] the infimum in \eqref{e43} is zero for some $i$ and any $\mathbf b \in \mathcal F_\beta(y)$.
\end{itemize}
\end{theorem}

\smallskip

The assumption that the stopping times involved in the infimum are strongly greater than a positive constant is essentially used for proving (iii) $\Rightarrow$ (i), while in the implication (i) $\Rightarrow$ (ii) we exploit the characterization of admissible values provided in Theorem \ref{t20}.

\smallskip

\begin{proof} We start by proving the implication (i) $\Rightarrow$ (ii).\\
Let $y \in \mathcal{A}$, and assume to the contrary that
$$ \inf \limits_{\tau \gg \epsilon} \, \mathbb E_{i} \left [ \int_0^\tau L_{\omega(s)}(y +\mathcal I(\Xi),-\Xi) + \beta \, ds - b_{\omega(0)} + b_{\omega(\tau)} \right ] \neq 0 \quad\hbox{for some $i$ and $\mathbf b \in \mathcal F_\beta(y)$.} $$
We deduce from Theorem \ref{t18} that
\begin{equation} \label{e45}
\inf \limits_{\tau \gg \epsilon} \, \mathbb E_{i} \left [ \int_0^\tau L_{\omega(s)}(y +\mathcal I(\Xi),-\Xi) + \beta \, ds - b_{\omega(0)} + b_{\omega(\tau)} \right ] > 0
\end{equation}
for such $i$, $\mathbf b$. We claim that $\mathbf b +\nu \, \mathbf e_{i} \in \mathcal {F}_{\beta}(y)$ for any positive $\nu$ less than the infimum in \eqref{e45}, denoted by $\eta$. Taking into account \eqref{e45} and the fact that $e^{-A \tau}$ is stochastic, we have, for any stopping time $\tau \gg \epsilon$ and $\tau$--cycle $\Xi$,
\begin{eqnarray*}
&& \mathbb E_{i}\left [ \int_0^\tau L_{\omega(s)}(y +\mathcal I(\Xi),-\Xi) + \beta \, ds - (\mathbf b +\nu \, \mathbf e_{i})_{\omega(0)}+ (\mathbf b +\nu \, \mathbf e_{i})_{\omega(\tau)} \right ] \\
&=& \mathbb E_{i}\left [ \int_0^\tau L_{\omega(s)}(y +\mathcal I(\Xi),-\Xi) + \beta \, ds - b_{\omega(0)}+ b_{\omega(\tau)} \right ]-\nu+\nu \, \mathbf e_{i}\, e^{-A \tau} \cdot \mathbf e_{i} \\
&\geq& \eta -\nu \geq 0.
\end{eqnarray*}
We further get, for $j \neq i$, by virtue of Theorem \ref{t18},
\begin{eqnarray*}
&& \mathbb E_{j}\left [ \int_0^\tau L_{\omega(s)}(y +\mathcal I(\Xi),-\Xi) + \beta \, ds - (\mathbf b +\nu \, \mathbf e_{i})_{\omega(0)}+ (\mathbf b +\nu \, \mathbf e_{i})_{\omega(\tau)} \right ] \\
&=& \mathbb E_{j}\left [ \int_0^\tau L_{\omega(s)}(y +\mathcal I(\Xi),-\Xi) + \beta \, ds - b_{\omega(0)}+ b_{\omega(\tau)} \right ]-\nu \, \mathbf e_{j} \cdot \mathbf e_{i}+\nu \, \mathbf e_{j}\, e^{-A \tau} \cdot \mathbf e_{i} \\
&\geq& 0.
\end{eqnarray*}
Combining the information from the above computations with Theorem \ref{t20}, we get the claim, reaching a contradiction with $y$ being in $\mathcal{A}$ via Proposition \ref{prop28}.

\smallskip

It is trivial that (ii) implies (iii). We complete the proof by showing that (iii) implies (i). Let us assume that \eqref{e43} vanishes for some $i$ and any $\mathbf b \in \mathcal F_\beta(y)$. For any positive constant $\nu$, select $\delta > 0$ satisfying $\delta - \rho \, \nu < 0$, where $\rho > 0$ is given by Proposition \ref{corollozzo}. Notice that we can invoke Proposition \ref{corollozzo} because we are working with stopping times strongly greater than $\epsilon$. We fix $\mathbf b \in \mathcal F_\beta(y)$ and deduce from \eqref{e43} being zero that there exist a bounded stopping time $\tau \gg \epsilon$ and a $\tau$--cycle $\Xi$ with
\begin{equation} \label{e48}
\mathbb E_{i} \left [ \int_0^\tau L_{\omega(s)}(y +\mathcal I(\Xi),-\Xi) + \beta \, ds - b_{\omega(0)}+b_{\omega(\tau)}\right ] < \delta.
\end{equation}
Taking into account Remark \ref{corollozzobis} and \eqref{e48}, we have
\begin{eqnarray*}
&& \mathbb E_{i}\left [ \int_0^\tau L_{\omega(s)}(y +\mathcal I(\Xi),-\Xi) + \beta \, ds - (\mathbf b +\nu \, \mathbf e_{i})_{\omega(0)}+ (\mathbf b +\nu \, \mathbf e_{i})_{\omega(\tau)} \right ] \\
&=& \mathbb E_{i}\left [ \int_0^\tau L_{\omega(s)}(y +\mathcal I(\Xi),-\Xi) + \beta \, ds - b_{\omega(0)}+ b_{\omega(\tau)} \right ]-\nu+\nu \, \mathbf e_{i}\, e^{-A \tau} \cdot \mathbf e_{i} \\
&<& \delta - \rho \, \nu \\
&<& 0,
\end{eqnarray*}
which proves that $\mathbf{b}+\nu \, \mathbf e_{i} \notin \mathcal {F}_{\beta}(y)$, in view of Theorem \ref{t18}. This proves that $\mathbf b$, arbitrarily taken in $\mathcal F_\beta(y)$, is not an interior point, and consequently that the interior of $\mathcal F_\beta(y)$ must be empty. This in turn implies that $y \in \mathcal A$ by virtue of Proposition \ref{prop30}. $\Box$
\end{proof}

\medskip

Using expectation operators related to characteristic vectors of stopping times, we obtain a more geometric formulation of the cycle characterization provided in Theorem \ref{t19}, without any reference to admissible values for critical subsolutions. This is our second main result.

\smallskip

\begin{theorem}\label{t21} Given $\epsilon >0$, $y \in \mathcal{A}$ if and only if
\begin{equation}\label{e43bis}
\inf \, \mathbb E_{\mathbf{a}} \left [ \int_0^\tau L_{\omega(s)}(y +\mathcal I(\Xi),-\Xi) + \beta \, ds \right ]=0,
\end{equation}
where the infimum is taken with respect to any bounded stopping time $\tau \gg \epsilon$, $\tau$--cycle $\Xi$ and characteristic vector $\mathbf a$ of $\tau$.
\end{theorem}

\smallskip

Theorem \ref{t21} comes from Theorem \ref{t19} and the following:

\smallskip

\begin{lemma}\label{post21} Given $\epsilon >0$, $y \in \mathcal A$ and $\mathbf b \in \mathcal F_\beta(y)$, let us consider
\begin{equation}\label{main1}
\inf \, \mathbb E_i \left [ \int_0^\tau L_{\omega(s)}(y +\mathcal I(\Xi),-\Xi) + \beta \, ds - b_{\omega(0)}+b_{\omega(\tau)} \right ],
\end{equation}
\begin{equation}\label{main2}
\inf \, \mathbb E_{\mathbf{a}} \left [ \int_0^\tau L_{\omega(s)}(y +\mathcal I(\Xi),-\Xi) + \beta \, ds \right ],
\end{equation}
where both infima are taken with respect to any bounded stopping time $\tau \gg \epsilon$ and $\tau$--cycle $\Xi$, and in \eqref{main2} $\mathbf a$ is a characteristic vector of $\tau$. Then \eqref{main2} vanishes if and only if \eqref{main1} vanishes for any $i \in \{1, \cdots, m\}$.
\end{lemma}

\begin{proof} Let us assume that \eqref{main1} vanishes for any $i$; then for any $\delta >0$ and any $i$ we find a $\tau_i \gg \epsilon$ and a $\tau_i$--cycle $\Xi_i$ with
$$ \mathbb E_i \left [ \int_0^{\tau_i} L_{\omega(s)}(y +\mathcal I(\Xi_i),-\Xi_i) + \beta \, ds - b_{\omega(0)}+b_{\omega(\tau_i)} \right ] < \delta.
$$
We define a new stopping time $\tau \gg \epsilon$ and a $\tau$--cycle $\Xi$ by setting
\begin{eqnarray*}
\tau &=& \tau_i \qquad\hbox{on $\mathcal D_i$,}\\
\Xi &=& \Xi_i \qquad\hbox{on $\mathcal D_i$;}
\end{eqnarray*}
then we get
$$ \mathbb E_i \left [ \int_0^{\tau } L_{\omega(s)}(y +\mathcal I(\Xi),-\Xi) + \beta \, ds - b_{\omega(0)}+b_{\omega(\tau)} \right ] < \delta \qquad\hbox{for any $i$.} $$
Taking a characteristic vector $\mathbf{a} =(a_1, \cdots, a_m)$ of $\tau$ and making convex combinations in the previous formula with coefficients $a_i$, we get, taking into account Remark \ref{now},
$$ \delta > \mathbb E_{\mathbf a} \,\left [ \int_0^{\tau} L_{\omega}(y+ \mathcal I(\Xi),-\Xi) + \beta \, ds \right ] - \mathbf a \cdot \mathbf b + (\mathbf a \, e^{- A \tau}) \cdot \mathbf{b} = \mathbb E_{\mathbf a} \,\left [ \int_0^{\tau} L_{\omega}(y+ \mathcal I(\Xi),-\Xi) + \beta \, ds \right ].$$
Since we know that the infimum in \eqref{main2} is greater than or equal to $0$ thanks to Corollary \ref{cornow} with $x=y$, the above inequality implies that it must be $0$, as claimed.

Conversely, assume that \eqref{main2} is equal to $0$; then for any $\delta >0$ there is a stopping time $\tau \gg \epsilon$ with characteristic vector $\mathbf a$, and a $\tau$--cycle $\Xi$ with
$$ \mathbb E_{\mathbf{a}} \left [ \int_0^\tau L_{\omega(s)}(y +\mathcal I(\Xi),-\Xi) + \beta \, ds \right ] < \delta.$$
Taking into account that
$$\sum_i a_i \,\mathbb E_i [-b_{\omega(0)} + b_{\omega(\tau)}]= - \mathbf a \cdot \mathbf b + (\mathbf a \, e^{-A \tau}) \cdot \mathbf b =0 $$
for any $\mathbf b \in \mathcal F_\beta(y)$, we derive
$$ \sum_i a_i \, \mathbb E_i \left [ \int_0^\tau L_{\omega(s)}(y +\mathcal I(\Xi),-\Xi) + \beta \, ds - b_{\omega(0)} + b_{\omega(\tau)} \right ] < \delta.$$
From Remark \ref{min} and the fact that the expectations in the above inequality must be nonnegative because of Theorem \ref{t18}, we deduce
$$ \mathbb E_i \left [ \int_0^\tau L_{\omega(s)}(y +\mathcal I(\Xi),-\Xi) + \beta \, ds - b_{\omega(0)} + b_{\omega(\tau)} \right ] < \frac{\delta}{\rho} \qquad\hbox{for any $i$,}$$
where $\rho$ is the constant appearing in Proposition \ref{corollozzo}. This implies that the infima in \eqref{main1} must vanish for any $i$. $\Box$
\end{proof}

\medskip

\begin{appendices}

\section{Stochastic Matrices}\label{stocmatrx}

\parskip +3pt

In this appendix we briefly collect some elementary linear-algebraic results concerning stochastic matrices. The material is mainly taken from \cite{Meyer}, \cite{Norris}, where the reader can find more details.

\medskip

We denote by $\mathcal{S} \subset \mathbb{R}^{m}$ the simplex of probability vectors of $\mathbb{R}^{m}$, namely vectors with nonnegative components summing to $1$. It is a compact convex set.
\smallskip

\begin{definition} A positive matrix $M$ is a matrix for which all the entries are positive, and we write $M > 0$.
\end{definition}

\smallskip

\begin{definition} A right stochastic matrix is a matrix of nonnegative entries with each row summing to $1$.
\end{definition}

\smallskip

\begin{proposition}\label{prop32} A matrix $B$ is stochastic if and only if
\begin{equation}\label{e53}
\mathbf{a} \, B \in \mathcal{S} \;\; \mbox{whenever $\mathbf{a} \in \mathcal{S}$}.
\end{equation}
\end{proposition}

\begin{proof} $B$ is stochastic if and only if each one of its rows is a probability vector, i.e.
$$\mathbf e_{i} \, B \in \mathcal{S} \quad\mbox{for every $i$},$$
which in turn is equivalent to \eqref{e53}. $\Box$
\end{proof}

\smallskip

By the Perron--Frobenius theorem for nonnegative matrices, we have:

\begin{proposition}\label{prop33} Let $B$ be a stochastic matrix; then its maximal eigenvalue is $1$ and there is a corresponding left eigenvector in $\mathcal{S}$.
\end{proposition}

\smallskip

By the Perron--Frobenius theorem for positive matrices, we have:

\begin{proposition}\label{prop34} Let $B$ be a positive stochastic matrix; then its maximal eigenvalue is $1$ and is simple. In addition, there exists a unique positive corresponding left eigenvector which is an element of $\mathcal{S}$.
\end{proposition}

\medskip

In view of the application to the coupling matrices of systems, we recall:

\smallskip

\begin{proposition} \label{prop26} Let $A$ be a matrix and $t\geq 0$, and assume that (A1) and (A2) hold; then $e^{- At}$ is stochastic.
\end{proposition}

\smallskip

See \cite{siconolfi3} for the proof.

\medskip

Exploiting the irreducibility condition (A3), we also have (see Theorem 3.2.1 in \cite{Norris}):
\begin{proposition} \label{prop35} Let $A$ be a matrix satisfying (A1), (A2), (A3); then $e^{-A t}$ is positive for any $t >0$.
\end{proposition}

\section{Path spaces}\label{cadpath}

We refer readers to \cite{Pabi} for the material presented in this section.

\bigskip

The term c\`{a}dl\`{a}g indicates a function, defined on some interval of $\mathbb{R}$, which is continuous on the right and has left limits. We denote by $\mathcal D:=\mathcal D(0,+\infty;\{1, \cdots, m\})$ and $\mathcal D(0,+\infty;\mathbb{R}^{N})$ the spaces of c\`{a}dl\`{a}g paths defined in $[0,+\infty)$ with values in $\{1, \cdots, m\}$ and $\mathbb{R}^{N}$, respectively.

\bigskip

To any finite increasing sequence of times $t_1, \cdots, t_k$, with $k \in \mathbb{N}$, and indices $j_1, \cdots, j_k$ in $\{1, \cdots, m\}$ we associate a cylinder defined as
$$\mathcal {C}(t_1, \cdots,t_k;j_1, \cdots,j_k)= \{\omega \mid \omega(t_1)=j_1, \cdots, \omega(t_k)=j_k\} \subset \mathcal{D}.$$

\smallskip

We denote by $\mathcal{D}_{i}$ the cylinders of type $\mathcal C(0;i)$, for any $i \in \{1, \cdots, m\}$.\\
We call multi-cylinders the sets made up of finite unions of mutually disjoint cylinders.

\bigskip

The space $\mathcal D$ of c\`{a}dl\`{a}g paths is endowed with the $\sigma$--algebra $\mathcal F$ spanned by the cylinders of the type $\mathcal C(s;i)$, for $s \geq 0$ and $i \in \{1, \cdots, m\}$. A natural related filtration ${\cal F}_t$ is obtained by picking, as generating sets, the cylinders $\mathcal C(t_1, \cdots,t_k;j_1, \cdots,j_k)$ with $t_k \leq t$, for any fixed $t\geq 0$.
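As a concrete illustration of the cylinders just introduced (our example, not part of the original text), take $m=2$ and two times $0 < t_1 < t_2$. A two-time cylinder factors into one-time cylinders, hence belongs to the filtration at the last time, and its complement is a multi-cylinder:
\[
\mathcal C(t_1,t_2;1,2)=\mathcal C(t_1;1)\cap \mathcal C(t_2;2)\in\mathcal F_{t_2},
\qquad
\mathcal D\setminus\mathcal C(t_1,t_2;1,2)=\bigcup_{(j_1,j_2)\neq(1,2)}\mathcal C(t_1,t_2;j_1,j_2),
\]
where the three cylinders in the union on the right are mutually disjoint, since every path takes exactly one value in $\{1,2\}$ at each of the times $t_1$, $t_2$.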
\muedskip We can perform same construction in $\muathcal D(0,+\inftynfty;\muathbb{R}^{N})$, and in this case the $\sigmaigma$--algebra, denoted by ${\cal F}'$, is spanned by the sets \muathbf{b}egin{equation}{\lambda}abel{cyli} \{\xi \inftyn \muathcal D(0,+\inftynfty;\muathbb{R}^{N}) \muid \xi(s) \inftyn E\} \varepsilonnd{equation} for $s \gammaeq 0$ and $E$ varying in the Borel $\sigmaigma$--algebra related to the natural topology of $\muathbb{R}^N$. A related filtration is given by the increasing family of $\sigmaigma$--algebras ${\cal F}'_t$ spanned by cylinders in \varepsilonqref{cyli} with $s {\lambda}eq t$. \muedskip Both $\muathcal D$ and $\muathcal D(0,+\inftynfty;\muathbb{R}^{N})$ can be endowed with a metric, named after Skorohod, which makes them Polish spaces, namely complete and separable. Above $\sigmaigma$--algebras ${\cal F}$, ${\cal F}'$ are the corresponding Borel $\sigmaigma$--algebras. \muathbf{b}igskip The convergence induced by Skorohod metric is defined, say in $\muathcal D(0,+\inftynfty;\muathbb{R}^{N})$ to fix ideas, requiring that there exists a sequence $f_n$ of strictly increasing continuous functions from $[0,+\inftynfty]$ onto itself (then $f_n(0)=0$ for any $n$) such that \muathbf{b}egin{eqnarray*} f_n(s) &\tauo& s \quad\hbox{uniformly in $[0,+\inftynfty]$} \\ \xi_n(f_n(s)) &\tauo& \xi(s) \quad\hbox{uniformly in $[0,+\inftynfty]$.} \varepsilonnd{eqnarray*} \muathbf{b}igskip We consider the measurable shift flow $\partialhi_h$ on $\muathcal D$, for $h \gammaeq 0$, defined by $$ \partialhi_h(\overlinemega)(s)= \overlinemega(s+h) \qquad\hbox{for any $s \inftyn [0,+ \inftynfty)$, $\overlinemega \inftyn \muathcal D$.}$$ \muathbf{b}egin{proposition}{\lambda}abel{supershift} Given nonnegative constants $h$, $t$, we have \[ \partialhi_h^{-1}({\cal F}_t) \sigmaubset {\cal F}_{t+h}.\] \varepsilonnd{proposition} \muedskip We also consider that space $\muathcal C(0,+\inftynfty;\muathbb{T}^{N})$ of continuous paths defined in $[0,+ \inftynfty)$ 
with the local uniform convergence. We can associate to it a metric making it a Polish space. We define a map $$\mathcal I: \mathcal D(0,+\infty;\mathbb{R}^{N}) \rightarrow \mathcal C(0,+\infty;\mathbb{T}^{N})$$ via $$\mathcal {I}(\xi)(t) = \mathrm{proj} \left ( \int_0^t \xi \, ds \right ), $$ where $\mathrm{proj}$ indicates the projection from $\mathbb{R}^N$ onto $\mathbb T^N$. It is continuous with respect to the aforementioned metrics, see \cite{siconolfi3}.

\section{Random setting}\label{randomset}

The material of this section is taken from \cite{siconolfi3}. We are going to define a family of probability measures on $(\mathcal D, {\cal F})$, see \cite{siconolfi3}. We start from a preliminary result. Taking into account that ${\cal F}$, ${\cal F}_t$ are generated by cylinders, we get the following by the Approximation Theorem for Measures, see \cite[Theorem 1.65]{Kl}.
\smallskip

\begin{proposition}\label{klenke} Let $\mu$ be a finite measure on ${\cal F}$. For any $E \in {\cal F}$, there is a sequence $E_n$ of multi--cylinders in ${\cal F}$ with \[\lim_n \mu(E_n \triangle E)=0, \] where $\triangle$ stands for the symmetric difference. As a consequence, two finite measures on $\mathcal D$ that coincide on the family of cylinders are actually equal. \end{proposition}
\medskip

Given a probability vector $\mathbf a$ in $\mathbb{R}^{m}$, namely one with nonnegative components summing to 1, we define for any cylinder $\mathcal{C}(t_1, \cdots,t_k;j_1, \cdots , j_k)$ a nonnegative function $\mu_{\mathbf a}$ by
\begin{equation}\label{e54} \mu_\mathbf{a} (\mathcal C(t_1, \cdots,t_k;j_1, \cdots,j_k)) = \left (\mathbf{a} \,e^{- A t_1} \right )_{j_1} \, \prod_{l=2}^{k} \left (e^{-(t_l-t_{l-1})A} \right )_{j_{l-1}\,j_l}.
\end{equation}
We then exploit the fact that $e^{-A t}$ is stochastic to uniquely extend $\mu_{\mathbf a}$, through the Daniell--Kolmogorov Theorem, to a probability measure $\mathbb P_{\mathbf a}$ on $(\mathcal D, \mathcal F)$, see for instance \cite[Theorem 1.2]{Swart}.\\
Hence, in view of (\ref{e54}), we have

\begin{proposition}\label{conti} The map $$\mathbf{a} \rightarrow \mathbb P_{\mathbf a}$$ is injective, linear and continuous from $\mathcal S \subset \mathbb R^m$ to the space of probability measures on $\mathcal D$ endowed with the weak convergence.\\
Consequently, the measures $\mathbb P_{\mathbf a}$ are spanned by $\mathbb{P}_{i} :=\mathbb{P}_{\mathbf{e_{i}}}$, for $i \in \{1,\cdots,m\}$, and \begin{equation}\label{prlin} \mathbb{P}_{\mathbf{a}}= \sum_{i=1}^{m}a_{i} \, \mathbb{P}_{i}. \end{equation} \end{proposition}

By (\ref{e54}) we also get that the $\mathbb{P}_{i}$ are supported in $\mathcal {D}_{i}:=\mathcal{C}(0;i)$.
\smallskip

We denote by $\mathbb{E}_\mathbf{a}$ the expectation operator relative to $\mathbb{P}_{\mathbf{a}}$, and we write $\mathbb{E}_{i}$ instead of $\mathbb{E}_{\mathbf{e_{i}}}$.\\
We say that some property holds almost surely, a.s. for short, if it is valid up to a $\mathbb{P}_{\mathbf{a}}$--null set for some $\mathbf{a} > 0$. We state for later use:
\smallskip

\begin{lemma}\label{nullo} Let $f$, $\mathbf{a}$ be a real random variable and a positive probability vector, respectively. If \[\int_E f \, d\mathbb{P}_{\mathbf{a}} =0 \qquad\hbox{for any $E \in \mathcal F$,}\] then $f =0$ a.s. \end{lemma}
\smallskip

We consider the push--forward of the probability measure $\mathbb P_\mathbf{a}$, for any $\mathbf{a} \in \mathcal S$, through the flow $\phi_h$ on $\mathcal D$.
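Formula \eqref{e54} and the decomposition \eqref{prlin} lend themselves to a direct numerical sanity check: summing the cylinder measure over the state at the last time marginalizes that time out (by stochasticity of $e^{-At}$), and $\mu_{\mathbf a}$ is linear in $\mathbf a$. The two-state generator-type matrix below is an arbitrary illustrative choice, not one fixed in the paper.

```python
import numpy as np

def expm(M, terms=40):
    # Taylor-series matrix exponential, sufficient for these small matrices
    out, term = np.eye(M.shape[0]), np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

def mu(a, A, times, states):
    """Cylinder measure mu_a(C(t_1,...,t_k; j_1,...,j_k)) from formula (e54)."""
    val = (a @ expm(-A * times[0]))[states[0]]
    for l in range(1, len(times)):
        step = expm(-A * (times[l] - times[l - 1]))
        val *= step[states[l - 1], states[l]]
    return val

A = np.array([[1.0, -1.0], [-1.0, 1.0]])   # -A is a two-state generator
a = np.array([0.3, 0.7])                   # probability vector

# consistency: summing over the final state recovers the shorter cylinder
total = sum(mu(a, A, [0.5, 1.2], [0, j]) for j in range(2))
assert np.isclose(total, mu(a, A, [0.5], [0]))

# linearity in a, as in (prlin): mu_a = sum_i a_i mu_{e_i}
e = np.eye(2)
lin = sum(a[i] * mu(e[i], A, [0.5, 1.2], [0, 1]) for i in range(2))
assert np.isclose(lin, mu(a, A, [0.5, 1.2], [0, 1]))
```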
In view of (\ref{e54}), one gets:

\begin{proposition} \label{pushprbdet} For any $\mathbf{a} \in \mathcal S$, $h \geq 0$, \[ \phi_h \# \mathbb P_\mathbf{a} = \mathbb P_{\mathbf{a} \, e^{-hA}}.\] \end{proposition}
\smallskip

Accordingly, for any measurable function $f: \mathcal D \to \mathbb R$, we have by the change of variable formula \begin{equation}\label{changevar} \mathbb E_\mathbf{a} f(\phi_h) = \int_\mathcal D f(\phi_h(\omega))\, d \mathbb P_\mathbf{a}= \int_\mathcal D f(\omega) \, d \phi_h \# \mathbb P_\mathbf{a} = \mathbb E_{\mathbf{a} \, e^{-A h}} f. \end{equation}
\smallskip

The push--forward of $\mathbb{P}_{\mathbf{a}}$ through $\omega(t)$, which is a random variable for any $t$, is a probability measure on indices. More precisely, we have by (\ref{e54}) $$\omega(t) \# \mathbb{P}_{\mathbf{a}}(i)= \mathbb{P}_{\mathbf{a}}(\{\omega \mid \omega(t) = i\}) = \left (\mathbf{a} \, e^{-A t} \right )_i$$ for any index $i \in \{1,\cdots,m\}$, so that \begin{equation}\label{e55} \omega(t) \# \mathbb{P}_{\mathbf{a}} = \mathbf{a}\, e^{-A t}. \end{equation} Moreover, for $\mathbf b =(b_1, \cdots, b_m) \in \mathbb{R}^m$, we have $$\mathbb E_{\mathbf a}\, b_{\omega (t)}= \mathbf a \, e^{-At} \cdot \mathbf b.$$
\medskip

Formula (\ref{e55}) can be partially recovered for measures of the type $ \mathbb{P}_{\mathbf{a}} \mres E$, which means $ \mathbb{P}_{\mathbf{a}}$ restricted to $E$, where $E$ is any set in $\mathcal{F}$.
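Before turning to restricted measures, note that Proposition \ref{pushprbdet}, \eqref{changevar} and \eqref{e55} combine into a matrix identity for one-dimensional marginals: the law of $\omega(t)$ under $\phi_h \# \mathbb P_{\mathbf a}$ is $(\mathbf a\, e^{-Ah})\, e^{-At} = \mathbf a\, e^{-A(t+h)}$. A minimal numerical check, with an illustrative generator-type matrix $A$ and an arbitrary vector $\mathbf b$ (both hypothetical choices, not from the paper):

```python
import numpy as np

def expm(M, terms=40):
    # Taylor-series matrix exponential for small matrices
    out, term = np.eye(M.shape[0]), np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

A = np.array([[ 1.0, -1.0,  0.0],
              [-0.5,  1.0, -0.5],
              [ 0.0, -1.0,  1.0]])   # -A is a Markov generator
a = np.array([0.2, 0.5, 0.3])        # initial probability vector
t, h = 0.8, 0.4

# law of omega(t) under phi_h # P_a  ==  law of omega(t+h) under P_a
lhs = a @ expm(-A * (t + h))
rhs = (a @ expm(-A * h)) @ expm(-A * t)
assert np.allclose(lhs, rhs)

# expectation formula: E_a b_{omega(t+h)} = E_{a e^{-Ah}} b_{omega(t)}
b = np.array([1.0, -2.0, 0.5])
assert np.isclose(lhs @ b, rhs @ b)
```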
\smallskip

\begin{lemma}\label{lem4} (\cite{siconolfi3}, Lemma 3.4) \, For a given $ \mathbf{a} \in \mathcal {S}$ and $E \in \mathcal{F}_{t}$ for some $t \geq 0$, we have $$ \omega(s) \# (\mathbb{P}_{\mathbf{a}} \mres E) = \big ( \omega(t) \# ( \mathbb{P}_{\mathbf{a}} \mres E) \big ) \, e^{-A(s-t) } \quad \mbox{ for any $s \geq t$}.$$ \end{lemma}
\medskip

\noindent \textbf{Admissible controls:} We call a control any random variable $\Xi$ taking values in $\mathcal D(0,+\infty;\mathbb{R}^{N})$ such that \begin{itemize} \item[(i)] it is locally bounded (in time), i.e. for any $t >0$ there is $M >0$ with $$ \sup_{s \in [0,t]} |\Xi(s)| < M \qquad\mbox{ a.s.;} $$ \item[(ii)] it is nonanticipating, i.e. for any $t >0$ $$\omega_1=\omega_2 \;\mbox{in $[0,t]$} \;\; \Rightarrow \;\; \Xi(\omega_1)=\Xi(\omega_2) \;\mbox{in $[0,t]$.}$$ The second condition is equivalent to requiring $\Xi$ to be adapted to the filtration $\mathcal{F}_{t}$, which means that the map $$\omega \mapsto \Xi(\omega)(t)$$ from $\mathcal{D}$ to $\mathbb{R}^{N}$ is measurable with respect to $\mathcal{F}_{t}$ and the Borel $\sigma$--algebra on $\mathbb{R}^{N}$. \end{itemize} We will denote by $\mathcal K$ the class of admissible controls.
\medskip

\noindent \textbf{Stopping times:} \begin{definition} A stopping time, adapted to $\mathcal F_{t}$, is a nonnegative random variable $\tau$ satisfying $$\{ \tau \leq t\} \in \mathcal F_t \qquad\mbox{for any $t$,} $$ which also implies $ \{ \tau < t\}, \, \{ \tau = t\} \in \mathcal F_t$. \end{definition}
\smallskip

We will repeatedly use the following nonincreasing approximation of a bounded random variable $\tau$ by simple stopping times.
We set \begin{equation}\label{splstopping} \tau_n = \sum_j \frac j{2^n} \, \mathbb{I}(\{\tau \in [(j-1)/2^n,j/2^n)\}), \end{equation} where $\mathbb I(\cdot)$ stands for the {\em indicator function} of the set at the argument. We have for any $j$, $n$ \[\{\tau_n = j/2^n\} =\{\tau < j/2^n\} \cap \{\tau \geq (j-1)/2^n\} \in {\cal F}_{j/2^n};\] moreover the sum in (\ref{splstopping}) is finite, $\tau$ being bounded. Hence the $\tau_n$ are simple stopping times, and letting $n$ go to infinity we get:
\smallskip

\begin{proposition} \label{stoppapproxi} Given a bounded stopping time $\tau$, the $\tau_n$ defined as in (\ref{splstopping}) make up a sequence of simple stopping times with \[ \tau_n \geq \tau, \quad \tau_n \to \tau \quad\hbox{uniformly in $\mathcal D$.} \] \end{proposition}
\smallskip

Given a bounded stopping time $\tau$ and a pair $x$, $y$ of elements of $\mathbb{T}^{N}$, we set $$\mathcal {K}(\tau,y-x)= \left \{ \Xi \in \mathcal{K} \mid \mathcal I(\Xi)(\tau) = y-x \, \mbox{a.s.} \right \}, $$ where the symbol $-$ refers to the additive group structure on $\mathbb T^N$ induced by the projection of $\mathbb{R}^N$ onto $\mathbb T^N = \mathbb{R}^N/\mathbb{Z}^N$. The controls belonging to $\mathcal {K}(\tau,0)$ are called $\tau$--cycles.

\end{appendices}

\begin{thebibliography}{20}

\bibitem{Pabi} \textsc{P. Billingsley}, \textit{Convergence of probability measures}, John Wiley, New York (1999), doi: 10.1002/9780470316962.

\bibitem{Fabio} \textsc{F. Camilli, O. Ley, P. Loreti, V. D. Nguyen}, \textit{Large time behavior of weakly coupled systems of first-order Hamilton-Jacobi equations}, NoDEA Nonlinear Differential Equations and Applications, 19 (2012), no. 6, pp. 719--749, doi: 10.1007/s00030-011-0149-7.
\bibitem{Davini} \textsc{A. Davini and M. Zavidovique}, \textit{Aubry sets for weakly coupled systems of Hamilton-Jacobi equations}, SIAM Journal on Mathematical Analysis, 46 (2014), no. 5, pp. 3361--3389, doi: 10.1137/120899960.

\bibitem{DaZaSi} \textsc{A. Davini, M. Zavidovique, A. Siconolfi}, \textit{Random Hamiltonian evolutions and Lax-Oleinik semigroup}, preprint.

\bibitem{Fathh} \textsc{A. Fathi}, \textit{Weak KAM Theorem in Lagrangian Dynamics}, www.crm.sns.it/media/person/1235/fathi.pdf.

\bibitem{siconolfi} \textsc{A. Fathi and A. Siconolfi}, \textit{PDE aspects of Aubry-Mather theory for quasiconvex Hamiltonians}, Calc. Var. Partial Differential Equations, 22 (2005), no. 2, pp. 185--228, doi: 10.1007/s00526-004-0271-z.

\bibitem{Kl} \textsc{A. Klenke}, \textit{Probability Theory}, Springer, Berlin (2008), doi: 10.1007/978-1-84800-048-3.

\bibitem{Meyer} \textsc{C. D. Meyer}, \textit{Matrix Analysis and Applied Linear Algebra}, SIAM, Philadelphia (2000), doi: 10.1137/1.9780898719512.

\bibitem{siconolfi3} \textsc{H. Mitake, A. Siconolfi, H. V. Tran, N. Yamada}, \textit{A Lagrangian approach to weakly coupled Hamilton-Jacobi systems}, SIAM J. Math. Anal., 48 (2016), no. 2, pp. 821--846, doi: 10.1137/15M1010841.

\bibitem{Mitake2} \textsc{H. Mitake, H. V. Tran}, \textit{A dynamical approach to the large-time behavior of solutions to weakly coupled systems of Hamilton--Jacobi equations}, J. Math. Pures Appl., 101 (2014), no. 1, pp. 76--93, doi: 10.1016/j.matpur.2013.05.004.

\bibitem{Norris} \textsc{J. R. Norris}, \textit{Markov chains}, Cambridge University Press, Cambridge (1997), doi: 10.1017/CBO9780511810633.

\bibitem{SiZa} \textsc{A. Siconolfi, S. Zabad}, \textit{Reduction techniques to weakly coupled systems of Hamilton--Jacobi equations}, preprint (2016).

\bibitem{Swart} \textsc{J. Swart, A. Winter}, \textit{Markov processes: theory and examples}, mimeo, https://www.uni-due.de/~hm0110/Markovprocesses/sw20.pdf (2013).

\end{thebibliography}

\end{document}
\begin{document} \begin{center} \textbf{Phase flows and vectorial lagrangians in $J^3(\pi)$}

V.N.Dumachev \\ Voronezh Institute of the MVD of Russia\\ e-mail: [email protected] \end{center}

\textbf{Abstract.} On the basis of the Liouville theorem, a generalization of the Nambu mechanics is considered. For a three-dimensional phase space, the concepts of a vector hamiltonian and a vector lagrangian are introduced.

\textbf{1.} The standard phenomenological approach to the analysis of a dynamical system is the construction of its action functional $S=\int L\;dt$. We represent this functional as a submanifold in the jet bundle $J^n(\pi)$ of the bundle $\pi: E \to M$, \[ F(t,x_0,x_1,...,x_n)=0, \] where $t \in M \subset R$, $u = x_{0} \in U \subset R$, $x_i \in J^i(\pi) \subset R^n$, $E=M \times U$. The Euler-Lagrange equation \begin{equation} \label{eq1} \sum\limits_{k=0}^n(-1)^k\frac{d^k}{dt^k}\frac{\partial L}{\partial x_k}=0 \end{equation} describes a line (a jet) in $J^{2n}(\pi)$. The embedding $J^n(\pi) \subset J^1( J^1( J^1(... J^1(\pi))))$ allows us to rewrite a differential equation of $n$-th order as a system of $n$ equations of first order, \begin{equation} \label{eq2} \overset{\cdot}{\textbf{x}}=\textbf{Ax}. \end{equation} According to the Noether theorem, the symmetry of the functional $S$ with respect to the generator $X=\partial/\partial t$ gives us a conservation law $I$ and a hamiltonian form of the dynamics: \begin{equation} \label{eq3} \overset{\cdot}{\textbf{x}}=\{H(I),\textbf{x}\}. \end{equation}

\textbf{2.} Another approach to obtaining the Euler-Lagrange equation (\ref{eq1}) for every set of hamiltonians was described by P.A.Griffiths \cite{Griffiths}. He finds a 1-form \[ \psi=Ldt+\lambda^i \theta_i, \qquad i=0..n-1, \] which does not vary under pullback along the vector fields $(\partial/\partial \theta_i,\partial/\partial d\theta_{n-1})$. Here \[ \theta_i=dx_i-x_{i+1}\; dt \] is the contact distribution and the $\lambda^i$ are Lagrange multipliers.
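For $n=1$, equation (\ref{eq1}) is the classical $\partial L/\partial x_0-\frac{d}{dt}\,\partial L/\partial x_1=0$. A minimal numerical check, with the harmonic-oscillator Lagrangian $L=(x_1^2-x_0^2)/2$ as an illustrative choice not taken from the text: the extremal $x(t)=\cos t$ makes the Euler-Lagrange residual $-x-\ddot x$ vanish.

```python
import math

def x(t):
    """Candidate extremal for L(x0, x1) = (x1**2 - x0**2) / 2."""
    return math.cos(t)

def second_derivative(f, t, h=1e-4):
    # central finite difference for f''(t)
    return (f(t + h) - 2.0 * f(t) + f(t - h)) / h**2

# Euler-Lagrange residual  dL/dx0 - (d/dt) dL/dx1  =  -x - x''
for t in [0.1, 0.7, 1.3, 2.9]:
    residual = -x(t) - second_derivative(x, t)
    assert abs(residual) < 1e-5
```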
Restricting the form $\Psi=d\psi$ to the fields $(\partial/\partial \theta_i,\partial/\partial d\theta_{n-1})$ gives the set \[ \left\{ \begin{array}{l} \dfrac{\partial L}{\partial x_i}=\dfrac{d\lambda^i}{dt}+\lambda^{i-1}, \qquad i=0..n-1,\\ \\ \dfrac{\partial L}{\partial x_n}=\lambda^{n-1}, \end{array}\right. \] which is equivalent to equation (\ref{eq1}). The hamiltonian formulation of this theory assumes that the Lagrange multipliers $\lambda^i$ are dynamical variables, $H=H(x_i,\lambda^i)$: \[ \psi=(L-\lambda^i x_{i+1})\wedge dt+\lambda^i dx_i=-H dt+\lambda^i dx_i. \] Then restricting the form $\Psi=d\psi$ to the fields $(\partial/\partial x_i,\partial/\partial \lambda^i)$ gives \[ \frac{\partial H}{\partial x_i}=-\frac{\partial \lambda^i}{\partial t}, \qquad \frac{\partial H}{\partial \lambda^i}=\frac{\partial x_i}{\partial t}. \]

\textbf{3.} Our target is the generalization of the above scheme to the case of odd jets. To clarify the idea of the method, we shall derive the Euler-Lagrange equation for $L \in J^3(\pi)$.

\textbf{Theorem 1.} For $L \in J^3(\pi)$ the Euler-Lagrange equation has the form \begin{equation} \label{eq4} \frac{1}{2}\frac{d}{dt}\left( L_{\overset{\cdot }{x}_{k}}^{i}-L_{\overset{\cdot }{x}_{i}}^k\right) = L^k_{x_i}-L^i_{x_k}. \end{equation}

\textbf{Proof.} Let $\psi$ be the Griffiths 2-form \[ \psi=L^idx_i\wedge dt+\lambda^i \Theta_i, \] where $\Theta = \theta \wedge \theta$.
The exterior differential of this form is \begin{eqnarray*} d\psi &=&dL^{i}\wedge \omega _{i}=\left( \text{rot}\,L\right) ^{k}\Theta _{k}\wedge dt+d\lambda ^{i}\wedge \Theta _{i}+\lambda ^{i}\wedge d\Theta _{i}+ \\ &+&L_{\overset{\cdot }{x}_{3}}^{2}d\overset{\cdot }{x}_{3}\wedge dx_{2}\wedge dt+L_{\overset{\cdot }{x}_{2}}^{3}d\overset{\cdot }{x} _{2}\wedge dx_{3}\wedge dt \\ &+&L_{\overset{\cdot }{x}_{1}}^{3}d\overset{\cdot }{x}_{1}\wedge dx_{3}\wedge dt+L_{\overset{\cdot }{x}_{3}}^{1}d\overset{\cdot }{x} _{3}\wedge dx_{1}\wedge dt \\ &+&L_{\overset{\cdot }{x}_{2}}^{1}d\overset{\cdot }{x}_{2}\wedge dx_{1}\wedge dt+L_{\overset{\cdot }{x}_{1}}^{2}d\overset{\cdot }{x} _{1}\wedge dx_{2}\wedge dt \\ &+& L_{\overset{\cdot }{x}_{1}}^{1}d\overset{\cdot }{x}_{1}\wedge dx_{1}\wedge dt +L_{\overset{\cdot }{x}_{2}}^{2}d\overset{ \cdot }{x}_{2}\wedge dx_{2}\wedge dt +L_{\overset{\cdot }{x} _{3}}^{3}d\overset{\cdot }{x}_{3}\wedge dx_{3}\wedge dt. \end{eqnarray*} Restricting it to the vector fields $v=(\partial _{\Theta _{k}},\partial _{d\Theta _{k}})$ gives \begin{equation} \label{eq5}\left( \text{rot}\,L\right) ^{k}=-\overset{\cdot }{ \lambda }^k, \end{equation} \[ L_{\overset{\cdot }{x}_{2}}^{3}-L_{\overset{ \cdot }{x}_{3}}^{2}=2\lambda ^1, \qquad L_{\overset{\cdot }{x}_{3}}^{1}-L_{\overset{ \cdot }{x}_{1}}^{3}=2\lambda ^2, \qquad L_{\overset{\cdot }{x}_{1}}^{2}-L_{\overset{ \cdot }{x}_{2}}^{1}=2\lambda ^3, \] and we get the Euler-Lagrange equation (\ref{eq4}).

\textbf{4.} We now consider the construction of the vector hamiltonian $h^i$ for $L \in J^3(\pi)$.
Rewrite the Griffiths 2-form $\psi$ as \begin{eqnarray*} \psi &=&L^i\omega _i+\lambda^i\Theta _i \\ &=&\left( L^{1}-\left( \lambda ^3\overset{\cdot }{x}_{2}-\lambda ^2 \overset{\cdot }{x}_{3}\right) \right) dx_{1}\wedge dt \\ &+&\left( L^{2}-\left( \lambda ^1\overset{\cdot }{x}_{3}-\lambda ^3 \overset{\cdot }{x}_{1}\right) \right) dx_{2}\wedge dt \\ &+&\left( L^{3}-\left( \lambda ^2\overset{\cdot }{x}_{1}-\lambda ^1 \overset{\cdot }{x}_{2}\right) \right) dx_{3}\wedge dt+\lambda^idS_i \\ &=&-h^idx_i\wedge dt+\lambda ^idS_i. \end{eqnarray*} Here $dS_i=\varepsilon_{ijk}dx_j\wedge dx_k$ are the Pl\"ucker coordinates of the area element $dS$ spanned by the vectors $dx_i$.

\textbf{Definition 1.} A vector field $\textbf{f}$ is called conservative if \[ \text{div}\;\textbf{f}=0. \] In other words, a conservative vector field is divergence-free.

\textbf{Definition 2.} A phase trajectory $\textbf{x}(t)$ is called Lagrange-stable if for all $t>0$ it remains in some bounded domain of phase space. Geometrically this means that the phase flow (\ref{eq2}) must be divergence-free.

\textbf{Theorem 2.} A Lagrange-stable phase flow is hamiltonian.

\textbf{Proof.} We first calculate the exterior derivative of the closed 2-form $\psi$: \[ d\psi =\left( \text{rot}\; \textbf{h}\right)^k \Theta _k\wedge dt+\overset{\cdot }{ \lambda ^k}dt\wedge dS_{k}+ \text{div}\;\lambda^k \cdot dV=0. \] Then from $\text{div}\;\lambda ^k=0$ it follows that \[ \overset{\cdot }{\lambda }=\text{rot}\;\textbf{h}. \] That is, from the hamiltonian point of view the set (\ref{eq2}) describes the dynamics of the generalized momenta $\lambda$, which were defined in (\ref{eq5}).

\textbf{5.} The basis of deformation quantization of a dynamical system in $J^2(\pi)$ is the Liouville theorem on the preservation of the volume $\Omega = dx_0 \wedge dx_1$ by phase flows. Geometrically this means that the Lie derivative of the 2-form $\Omega $ along the vector field $X_H^1$ is zero: $\mathcal{L}_X \Omega = 0$.
In other words, if $\{ g_t\}$ denotes the one-parameter group of symplectic diffeomorphisms generated by the vector field $X_H^1$, then $g_t^\ast \Omega = \Omega$ and the phase flow $\{g_t\}$ preserves the volume form $\Omega$. To extend this construction to $J^3(\pi)$ we consider the 3-form of the phase-space volume \[ \Omega=dx_0 \wedge dx_1 \wedge dx_2. \]

\textbf{Theorem 3.} The volume 3-form $ \Omega \in \Lambda ^3$ admits two polyvector hamiltonian fields $X_H^1\in \Lambda ^1$ and $X_H^2\in \Lambda ^2$.

\textbf{Proof.} By definition, put \[ \mathcal{L}_{X} \Omega = X \rfloor d\Omega + d\left( {X\rfloor \Omega} \right) = 0. \] Since $ \Omega \in \Lambda ^ 3 $, we see that $d\Omega = 0 $ and \[ d\left( {X\rfloor \Omega} \right) = 0. \] From Poincar\'e's lemma it follows that the form $X\rfloor \Omega $ is exact, so \[ X\rfloor \Omega = \Theta = d\textbf{H}. \] 1) If $\textbf {X}_H^1 \in \Lambda ^1$, then $ \Theta \in \Lambda ^ 2 $ and $\textbf{H}=(\textbf{h} \cdot d\textbf{x}) \in \Lambda ^ 1 $.
The hamiltonian vector field has the form \begin{eqnarray} X_H^1&=&(\text{rot}\,\textbf{h} \cdot \frac{\partial}{\partial \textbf{x}})\label{eq6}\\ &=& \left(\frac{\partial h_2}{\partial x_1}- \frac{\partial h_1}{\partial x_2} \right)\frac{\partial }{\partial x_0} + \left( \frac{\partial h_0}{\partial x_2} - \frac{\partial h_2}{\partial x_0} \right)\frac{\partial}{\partial x_1} + \left( \frac{\partial h_1}{\partial x_0}-\frac{\partial h_0}{\partial x_1} \right)\frac{\partial }{\partial x_2}.\notag \end{eqnarray} \noindent 2) If $X_H^2 \in \Lambda ^2$, then $ \Theta \in \Lambda ^1 $, $H \in \Lambda ^0 $, and we obtain the hamiltonian bivector field \begin{equation} \label{eq7} X_{H}^{2} = \frac{{1}}{{2}}\left( {\frac{{\partial H}}{{\partial x_{0}} } \cdot \frac{{\partial} }{{\partial x_{1}} } \wedge \frac{{\partial }}{{\partial x_{2}} } + \frac{{\partial H}}{{\partial x_{1}} } \cdot \frac{{\partial} }{{\partial x_{2}} } \wedge \frac{{\partial} }{{\partial x_{0}} } + \frac{{\partial H}}{{\partial x_{2}} } \cdot \frac{{\partial }}{{\partial x_{0}} } \wedge \frac{{\partial} }{{\partial x_{1}} }} \right). \end{equation} A more general (but scalar) construction was considered in \cite{Dumachev}. The Poisson bracket for the vectorial hamiltonian (\ref{eq6}) has the form \begin{eqnarray*} \{ \textbf{h},G\}&=&X_H^1\rfloor dG\\ &=& \left(\frac{\partial h_2}{\partial x_1}- \frac{\partial h_1}{\partial x_2} \right)\frac{\partial G}{\partial x_0} + \left( \frac{\partial h_0}{\partial x_2} - \frac{\partial h_2}{\partial x_0} \right)\frac{\partial G}{\partial x_1} + \left( \frac{\partial h_1}{\partial x_0}-\frac{\partial h_0}{\partial x_1} \right)\frac{\partial G}{\partial x_2}, \end{eqnarray*} and the dynamical equations (\ref{eq2}) read \begin{eqnarray}\label{eq8} \overset{\cdot}{\textbf{x}}=\{ \textbf{h},\textbf{x} \}.
\end{eqnarray} The Poisson bracket for bivector fields requires the introduction of two hamiltonians, \begin{eqnarray}\label{eq9} &&X_{H}^{2} \rfloor \left( {dF \wedge dG} \right) = \left\{ H,F,G \right\}=\frac{1}{2}\left[ {\frac{{\partial H}}{{\partial x_{0}} } \cdot \left( {\frac{{\partial F}}{{\partial x_{1}} }\frac{{\partial G}}{{\partial x_{2} }} - \frac{{\partial F}}{{\partial x_{2}} }\frac{{\partial G}}{{\partial x_{1}} }} \right)}\right. \notag \\ &+&\left. {\frac{{\partial H}}{{\partial x_{1}} } \cdot \left( {\frac{{\partial F}}{{\partial x_{2}} }\frac{{\partial G}}{{\partial x_{0} }} - \frac{{\partial F}}{{\partial x_{0}} }\frac{{\partial G}}{{\partial x_{2}} }} \right) + \frac{{\partial H}}{{\partial x_{2}} } \cdot \left( {\frac{{\partial F}}{{\partial x_{0}} }\frac{{\partial G}}{{\partial x_{1} }} - \frac{{\partial F}}{{\partial x_{1}} }\frac{{\partial G}}{{\partial x_{0}} }} \right)}\right], \notag \end{eqnarray} such that the dynamical equations (\ref{eq2}) take the Nambu form \cite{Nambu} \[ \overset{\cdot}{\textbf{x}}=\{ F,G,\textbf{x} \}. \]

\textbf{Example.} Consider the dynamics of a Frenet frame with constant curvature and torsion: \begin{eqnarray}\label{eq10}\left\{ \begin{array}{l} \overset{\cdot}{x}=y\\ \overset{\cdot}{y}=z-x\\ \overset{\cdot}{z}=-y \end{array}\right. \end{eqnarray} The Lax representation for this set has the form \[ \overset{\cdot}{A}=[A,B], \quad A= \left( \begin{array}{ccc} x&y&x\\ y&2z&y\\ x&y&x \end{array}\right), \quad B= \left( \begin{array}{ccc} 0&1&0\\ -1&0&-1\\ 0&1&0 \end{array}\right) \] and gives us the following invariants \[ I_k=\frac{1}{k}\text{Tr}\,\textbf{A}^k, \] \[ I_1=x+z, \quad I_2=\frac{1}{2}(x^2+y^2+z^2), \quad I_3=\frac{1}{3}\left(x^3+\frac{3}{2}y^2(x+z)+z^3\right), \; \dots \] Let $H_1=x+z$ and $H_2=\frac{1}{2}(2xz-y^2)$ be the hamiltonians of the Frenet set; then \[ I_1=H_1, \quad I_2=\frac{1}{2}H_1^2-H_2, \quad I_3=\frac{1}{3}H_1\left(H_1^2-3H_2\right), \; \dots
\] The system (\ref{eq2}) is equivalent to the system \[ \overset{\cdot}{\textbf{x}}=\{ H_1,H_2,\textbf{x} \} \] with the Poisson bracket (\ref{eq9}). To find the vectorial hamiltonian we write the differential $\Psi=d\psi$ of the Lagrange 1-form for the Frenet set (\ref{eq10}): \[ \Psi=ydy\wedge dz+(z-x)dz\wedge dx-ydx\wedge dy, \] and, using the homotopy formula, we get expressions for the vectorial hamiltonian $h^i$ and the vectorial lagrangian $L^i$: \begin{eqnarray*} \textbf{h}=\frac{1}{3}\left( \begin{array}{l} y^2+z^2-xz\\ -y(x+z)\\ y^2+x^2-xz \end{array}\right), \qquad \textbf{L}=\left( \begin{array}{l} z\overset{\cdot}{y}-y\overset{\cdot}{z}-h_1\\ x\overset{\cdot}{z}-z\overset{\cdot}{x}-h_2\\ y\overset{\cdot}{x}-x\overset{\cdot}{y}-h_3 \end{array}\right). \end{eqnarray*} \end{document}
\begin{document} \baselineskip=1.3\baselineskip \title{\bf Multiplicative functional for reflected Brownian motion via deterministic ODE} \author{ {\bf Krzysztof Burdzy} \ and \ {\bf John M. Lee} } \address{Department of Mathematics, Box 354350, University of Washington, Seattle, WA 98195} \thanks{Research supported in part by NSF Grants DMS-0600206 and DMS-0406060. } \begin{abstract} We prove that a sequence of semi-discrete approximations converges to a multiplicative functional for reflected Brownian motion, which intuitively represents the Lyapunov exponent for the corresponding stochastic flow. The method of proof is based on a study of the deterministic version of the problem and the excursion theory. \end{abstract} \keywords{Reflected Brownian motion, multiplicative functional} \subjclass{60J65; 60J50} \maketitle \pagestyle{myheadings} \markboth{}{Multiplicative functional for reflected Brownian motion} \section{Introduction}\label{section:article_intro} This article is the first part of a project devoted to path properties of a stochastic flow of reflected Brownian motions. We will first outline the general direction of the project and then comment on the results contained in the current article. Consider a bounded smooth domain $D \subset{\bf R}^n$, $n\geq 2$, and for any $x\in \overline D$, let $X^x_t$ be reflected Brownian motion in $D$, starting from $X^x_0 = x $. Construct all processes $X^x$ so that they are driven by the same $n$-dimensional Brownian motion. It has been proved in \cite{BCJ} that in some planar domains, for any $x \ne y$, the limit $\lim_{t\rightarrow \infty} \log |X^x _t - X^y_t| /t = \Lambda(D)$ exists a.s. Moreover, an explicit formula has been given for the limit $\Lambda(D)$, in terms of geometric quantities associated with $D$. Our ultimate goal is to prove an analogous result for domains in ${\bf R}^n$ for $n\geq 3$. The higher dimensional case is more difficult to study for several reasons.
First, we believe that the multidimensional quantity analogous to $\Lambda(D)$ in the two dimensional case cannot be expressed directly in terms of geometric properties of $D$. Instead, it has to be expressed using the stationary distribution for the normalized version of the multiplicative functional studied in the present paper. Second, non-commutativity of projections is a more challenging technical problem in dimensions $n\geq 3$. The result of \cite{BCJ} mentioned above contains an implicit assertion about another limit, namely, in the space variable for a fixed time. In other words, one can informally infer the existence and value of the limit $\lim_{\varepsilon\downarrow 0} (X^{x+\varepsilon {\bf v}}_t - X^x_t)/\varepsilon= \widetilde {\mathcal A}_t {\bf v}$, for ${\bf v} \in {\bf R}^n$. The limit operator $\widetilde {\mathcal A}_t$, regarded as a function of time, is a linear multiplicative functional of reflected Brownian motion. Its form is considerably more complex and interesting in dimensions $n\ge 3$ than in two dimensions. Our overall plan is first to prove the differentiability in the space variable stated in the last paragraph. Then we will prove the existence and uniqueness of the stationary distribution for the normalized version of $\widetilde {\mathcal A}_t$. And then we will prove the formula for the rate of convergence of $|X^x_t - X^y_t| $ to 0, as $t\rightarrow \infty$. The immediate goal of the present paper is much more modest than the overall plan outlined above. We will deal with some foundational issues related to the application of our main method, excursion theory, to the convergence of semi-discrete approximations to the multiplicative functional described above. We will briefly review some of the existing literature on the subject, so that we can place our own results in an appropriate context.
The multiplicative functional $\widetilde {\mathcal A}_t$ appeared in a number of publications discussing reflected Brownian motion, starting with \cite{A,IKpaper}, and later in \cite{IKbook,H}. None of these publications contains the analysis of the deterministic version of the multiplicative functional. This is what we are going to do in Section \ref{section:determ}. In a sense, we are trying to see whether the approach of \cite{LS} could be applied in our case---that approach was to develop a deterministic theory that could be applied to stochastic processes path by path. Unfortunately, our results on deterministic ODEs do not apply to reflected Brownian motion, roughly speaking, for the same reason why the Riemann--Stieltjes integral does not work for integrals with respect to Brownian motion. Nevertheless, our deterministic results are not totally disjoint from the second, probabilistic section. In fact, our basic approach developed in Lemma \ref{lemma:finite-Y-est} is just what we need in Section \ref{section:diff}. The main theorem of Section \ref{section:diff} proves existence of the multiplicative functional using semi-discrete approximations. The result does not seem to be known in this form, although it is obviously close to some theorems in \cite{A, IKpaper, H}. However, the main point is not to give a new proof of a slightly different version of a known result but to develop estimates using excursion techniques that are analogous to those in \cite{BCJ}, and that can be applied to study $X^x_t - X^y_t$. We continue with a general review of the literature. The differentiability of $X^x_t$ in the initial data was proved in \cite{DZ} for reflected diffusions. The main difference between our project and that in \cite{DZ} is that that paper was concerned with diffusions in $(0,\infty)^n$, and our main goal is to study the effect of the curvature of $\partial D$. Deterministic transformations based on reflection were considered, for example, in \cite{LS,DI,DR}.
Synchronous couplings of reflected Brownian motions in convex domains were studied in \cite{CLJ1, CLJ2}, where it was proved that under mild assumptions, $X^x_t - X^y_t$ is not 0 at any finite time. Our estimates in Section \ref{section:diff} are so robust that they indicate that Theorem \ref{thm:diffskor} holds for the trace of a degenerate diffusion on $\partial D$, defined as in \cite{CS, MO}, with the density of jumps having different scaling properties than that for reflected Brownian motion. In other words, the main theorem of Section \ref{section:diff} is likely to hold in the case when the trace of the reflected diffusion is any ``stable-like'' process on $\partial D$. We do not present this generalization because, as far as we can tell, the multiplicative functional $\widetilde {\mathcal A}_t$ does not represent the limit $\lim_{\varepsilon\downarrow 0} (X^{x+\varepsilon {\bf v}}_t - X^x_t)/\varepsilon$ for flows of degenerate reflected diffusions. We are grateful to Elton Hsu for very helpful advice. \section{Deterministic differential equation}\label{section:determ} \subsection{Geometric Preliminaries}\label{section:intro} Throughout this section, $M$ will be a $C^2$, properly embedded, orientable hypersurface (i.e., submanifold of codimension $1$) in ${\bf R}^{n}$, endowed with a unit normal vector field ${\bf n}$. The properness condition means that the inclusion map $M\mathrel{\hookrightarrow}{\bf R}^{n}$ is a proper map (the inverse image of every compact set is compact), which is equivalent to $M$ being a closed subset of ${\bf R}^{n}$. For any $R>0$, let $M_R $ denote the intersection of $M$ with the closed ball of radius $R$ around the origin in ${\bf R}^n$, and note that $M_R$ is a compact subset of $M$. We consider $M$ as a Riemannian manifold with the induced metric.
We use the notation $\left<\mathbin{\text{\protect\raisebox{-.3ex}[1ex][0ex]{\Large{$\cdot$}}}},\mathbin{\text{\protect\raisebox{-.3ex}[1ex][0ex]{\Large{$\cdot$}}}}\right>$ for both the Euclidean inner product on ${\bf R}^n$ and its restriction to $\T _xM$ for any $x\in M$, and $\left|\mathbin{\text{\protect\raisebox{-.3ex}[1ex][0ex]{\Large{$\cdot$}}}}\right|$ for the associated norm. For any $x\in M$, let $\pi_x\colon {\bf R}^{n}\rightarrow \T _x M$ denote the orthogonal projection onto the tangent space $\T _x M$, so \begin{equation}\label{eq:pi} \pi_x {\bf z} = {\bf z} - \langle{\bf z} ,{\bf n}(x)\rangle{\bf n}(x), \end{equation} and let $\S (x)\colon \T _xM\rightarrow \T _xM$ denote the {\it shape operator} (also known as the {\it Weingarten map}), which is the symmetric linear endomorphism of $\T _xM$ associated with the second fundamental form. It is characterized by \begin{equation}\label{eq:def-S} \S (x) {\bf v} = - \partial_{\bf v} {\bf n}(x), \qquad {\bf v} \in \T _xM, \end{equation} where $\partial_{\bf v} $ denotes the ordinary Euclidean directional derivative in the direction of ${\bf v} $. If $\gamma \colon[0,T]\rightarrow M$ is a smooth curve in $M$, a {\it vector field along $\gamma $} is a smooth map ${\bf v} \colon [0,T]\rightarrow {\bf R}^{n}$ such that ${\bf v} (t)\in \T _{\gamma (t)}M$ for each $t$. The {\it covariant derivative of ${\bf v} $ along $\gamma $} is given by \begin{align*} {\mathcal D} _t{\bf v} (t) &:= {\bf v} '(t) - \langle{\bf v} (t), \S (\gamma (t)) \gamma '(t)\rangle{\bf n}(\gamma (t)) \\ &= {\bf v} '(t) + \langle{\bf v} (t), \partial_t ({\bf n}\circ \gamma )(t)\rangle {\bf n}(\gamma (t)) . \end{align*} The eigenvalues of $\S (x)$ are the principal curvatures of $M$ at $x$, and its determinant is the Gaussian curvature. We extend $\S (x)$ to an endomorphism of ${\bf R}^{n}$ by defining $\S (x){\bf n} (x) = 0$. It is easy to check that $\S (x)$ and $\pi_x$ commute, by evaluating separately on ${\bf n} (x)$ and on ${\bf v} \in \T _xM$.
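For the sphere of radius $R$ in ${\bf R}^3$ with unit normal ${\bf n}(x)=x/R$, definition \eqref{eq:def-S} gives $\S (x){\bf v}=-{\bf v}/R$ on tangent vectors, so all principal curvatures equal $-1/R$; moreover, on the sphere $\S (x)$ acts as $-(1/R)\pi_x$, which makes the commutation of $\S (x)$ and $\pi_x$ visible. The following numerical sketch (an illustration with arbitrary points and an arbitrary $R$, not part of the paper) checks these facts by finite differences, together with the unit-normal identity $\langle{\bf n}(x),{\bf n}(y)\rangle=1-\tfrac12|{\bf n}(x)-{\bf n}(y)|^2$ used in the proof of Lemma \ref{lemma:pipi-pipi}.

```python
import numpy as np

rng = np.random.default_rng(0)
R = 2.0

def n(x):
    # outward unit normal of the sphere, extended off the sphere by x/|x|
    return x / np.linalg.norm(x)

def proj(x):
    # pi_x = Id - n(x) n(x)^T, the orthogonal projection of (eq:pi)
    nx = n(x)
    return np.eye(3) - np.outer(nx, nx)

x = R * n(rng.standard_normal(3))             # a point on the sphere
v = proj(x) @ rng.standard_normal(3)          # a tangent vector at x

# S(x)v = -d/ds n(x + s v)|_{s=0}, via a central finite difference
h = 1e-6
Sv = -(n(x + h * v) - n(x - h * v)) / (2 * h)
assert np.allclose(Sv, -v / R, atol=1e-5)     # principal curvatures -1/R

# S(x) and pi_x commute: on the sphere S(x) = -(1/R) pi_x as a matrix
S = -proj(x) / R
assert np.allclose(S @ proj(x), proj(x) @ S)

# unit-normal identity from the proof of Lemma (pipi-pipi)
y = R * n(rng.standard_normal(3))
assert np.isclose(np.dot(n(x), n(y)),
                  1 - 0.5 * np.linalg.norm(n(x) - n(y)) ** 2)
```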
The following lemma expresses some elementary observations that we will use below. Most of these follow easily from the fact that smooth maps satisfy uniform local Lipschitz estimates, so we leave the proof to the reader. For any linear map ${\mathcal A}\colon {\bf R}^{n}\rightarrow {\bf R}^{n}$, we let $\|{\mathcal A}\|$ denote the operator norm. \begin{lemma}\label{lemma:globalK} For any $R>0$ and $T>0$, there exists a constant $K$ depending only on $M$, $R$, and $T$ such that the following estimates hold for all $x,y\in M_R$, $0\le l \le T$, $t\ge 0$ and ${\bf z}\in {\bf R}^n$: \begin{align} \|\pi_x - \pi_y\| &\le K|x-y|.\label{eq:pi-est}\\ \|\S (x)\|&\le K.\label{eq:S-norm-est}\\ \|\S (x)-\S (y)\| &\le K|x-y|.\\ \|e^{t \S (x)}\| &\le e^{K t}.\label{eq:e^S-est}\\ |e^{t \S (x)} {\bf z}| &\ge e^{-K t}|{\bf z}|. \label{eq:e^S-estlow}\\ \|e^{l \S (x)} - \operatorname{Id}\| &\le Kl.\label{eq:e^S-est2}\\ \|e^{l \S (x)} - e^{l \S (y)}\| &\le {K }l\,|x-y|.\label{eq:e^S-est3}\\ |{\bf n} (x)-{\bf n} (y)| &\le K|x-y|.\label{eq:N-Lip-est} \end{align} \end{lemma} Another useful estimate is the following. \begin{lemma}\label{lemma:pipi-pipi} For any $R>0$, there exists a constant $C$ depending only on $M$ and $R$ such that for all $w,x,y,z\in M_R$, the following operator-norm estimate holds: \begin{displaymath} \left\|\pi_{z} \circ \left(\pi_{y} - \pi_{x}\right)\circ \pi_{w}\right\| \le C\big(\left|w-y\right|\,\left|y-z\right| + \left|w-x\right|\,\left|x-z\right| \big). \end{displaymath} \end{lemma} \begin{proof} Using the fact that ${\bf n} $ is a unit vector field and expanding $|{\bf n} (x)-{\bf n} (y)|^2$ in terms of inner products, we obtain \begin{displaymath} \langle{\bf n} (x),{\bf n} (y)\rangle = \tfrac12 (|{\bf n} (x)|^2 + |{\bf n} (y)|^2 - |{\bf n} (x)-{\bf n} (y)|^2) = 1 - \tfrac12 |{\bf n} (x)-{\bf n} (y)|^2. \end{displaymath} Suppose $w,x,y,z\in M_R$ and ${\bf v} \in{\bf R}^n$. 
If $\pi_w{\bf v} =0$, then the estimate holds trivially, so we may as well assume that ${\bf v} \in \T _wM$. Expanding the projections as in \eqref{eq:pi} and using the fact that $\pi_w{\bf v} ={\bf v} $, we obtain \begin{align*} \pi_{z}&\bigl(\pi_{y} - \pi_x\bigr)\pi_w{\bf v}\\ &= \pi_{z}({\bf v} - \langle{\bf v} ,{\bf n} (y)\rangle{\bf n} (y)) - \pi_{z}({\bf v} - \langle{\bf v} ,{\bf n} (x)\rangle{\bf n} (x))\\ &= ({\bf v} - \langle{\bf v} ,{\bf n} (y)\rangle{\bf n} (y)) \\ &\quad- \bigl( \langle{\bf v} ,{\bf n} (z)\rangle {\bf n} (z) - \langle{\bf v} ,{\bf n} (y)\rangle\langle{\bf n} (y),{\bf n} (z)\rangle{\bf n} (z)\bigr)\\ &\quad - ({\bf v} - \langle{\bf v} ,{\bf n} (x)\rangle{\bf n} (x))\\ &\quad + \bigl( \langle{\bf v} ,{\bf n} (z)\rangle {\bf n} (z) - \langle{\bf v} ,{\bf n} (x)\rangle\langle{\bf n} (x),{\bf n} (z)\rangle{\bf n} (z)\bigr)\\ &= - \langle{\bf v} ,{\bf n} (y)\rangle{\bf n} (y) + \langle{\bf v} ,{\bf n} (x)\rangle{\bf n} (x) \\ &\quad + \langle{\bf v} ,{\bf n} (y)\rangle\bigl(1 - \tfrac12 |{\bf n} (y)-{\bf n} (z)|^2\bigr){\bf n} (z)\\ &\quad - \langle{\bf v} ,{\bf n} (x)\rangle\bigl(1 - \tfrac12 |{\bf n} (x)-{\bf n} (z)|^2\bigr){\bf n} (z)\\ &= -\langle{\bf v} ,{\bf n} (y)\rangle({\bf n} (y)-{\bf n} (z)) + \langle{\bf v} ,{\bf n} (x)\rangle({\bf n} (x)-{\bf n} (z)) \\ &\quad -\tfrac 12 \langle{\bf v} ,{\bf n} (y)\rangle|{\bf n} (y)-{\bf n} (z)|^2{\bf n} (z) + \tfrac12\langle{\bf v} ,{\bf n} (x)\rangle|{\bf n} (x)-{\bf n} (z)|^2{\bf n} (z). 
\end{align*} Using the fact that $\langle{\bf v} ,{\bf n} (w)\rangle=0$, this can be written \begin{align*} \pi_{z}\bigl(\pi_{y} - \pi_x\bigr)\pi_w{\bf v} &= -\langle{\bf v} ,{\bf n} (w) - {\bf n} (y)\rangle({\bf n} (y)-{\bf n} (z)) \\ &\quad + \langle{\bf v} ,{\bf n} (w) - {\bf n} (x)\rangle({\bf n} (x)-{\bf n} (z)) \\ &\quad -\tfrac 12 \langle{\bf v} ,{\bf n} (w)-{\bf n} (y)\rangle|{\bf n} (y)-{\bf n} (z)|^2{\bf n} (z)\\ &\quad + \tfrac12\langle{\bf v} ,{\bf n} (w) - {\bf n} (x)\rangle|{\bf n} (x)-{\bf n} (z)|^2{\bf n} (z). \end{align*} The desired estimate follows from \eqref{eq:N-Lip-est} and the fact that \begin{displaymath} |{\bf n} (x)-{\bf n} (y)|^2 \le \left( |{\bf n} (x)| + |{\bf n} (y)|\right)|{\bf n} (x)-{\bf n} (y)| \le 2K|x-y|. \end{displaymath} \end{proof} \subsection{Analytic Preliminaries} Let $T$ be a positive real number. We let ${\mathcal B}V([0,T];{\bf R})$ denote the set of functions $u\colon [0,T]\rightarrow {\bf R}$ of bounded variation, and ${{\mathcal B}bb N}BV([0,T];{\bf R})\subset {\mathcal B}V([0,T];{\bf R})$ the subset consisting of functions that are right-continuous. By convention, we will consider each $u\in {{\mathcal B}bb N}BV([0,T];{\bf R})$ to be a function defined on all of ${\bf R}$ by setting $u(t)=0$ for $t< 0$ and $u(t)=u(T)$ for $t>T$; the extended function is still right-continuous and of bounded variation. With this understanding, we will follow the conventions of \cite{Folland}, and most of the properties of ${{\mathcal B}bb N}BV([0,T];{\bf R})$ that we use can be found there. It is easy to check that ${{\mathcal B}bb N}BV([0,T];{\bf R})$ is closed under pointwise products and sums. Functions in ${{\mathcal B}bb N}BV([0,T];{\bf R})$ have bounded images, at most countably many discontinuities, and well-defined left-hand limits at each discontinuity. In particular, they are examples of {{\it c\`adl\`ag}}\ functions ({\it continue \`a droite, limites \`a gauche}). 
(In fact, $NBV([0,T];{\bf R})$ is exactly the set of {{\it c\`adl\`ag}}\ functions of bounded variation.) For any $u\in NBV([0,T];{\bf R})$ and any $s\in [0,T]$, we set \begin{displaymath} u(s-) = \lim_{t\nearrow s} u(t), \end{displaymath} and we define the {\it jump of $u$ at $s$} to be \begin{displaymath} \Delta_s(u) = u(s) - u(s-). \end{displaymath} Note that $u(0-) = 0$ and $\Delta_0(u) = u(0)$ by our conventions. It follows from elementary measure theory that for each $u\in NBV([0,T];{\bf R})$, there is a unique signed Borel measure $du$ on $[0,T]$ characterized by \begin{displaymath} du \bigl( (a,b] \bigr) = u(b) - u(a), \quad 0\le a\le b\le T. \end{displaymath} Because this measure has atoms exactly at points $t\in [0,T]$ where $u$ is discontinuous, we have to be careful to indicate whether endpoints are included or excluded in integrals. For example, we have the following versions of the fundamental theorem of calculus for $a,b\in[0,T]$: \begin{align*} \int_{(a,b]} du &= u(b) - u(a);& \int_{[a,b]} du &= u(b) - u(a-);\\ \int_{(a,b)} du &= u(b-) - u(a);& \int_{[a,b)} du &= u(b-) - u(a-). \end{align*} The {\it total variation} of $u$, denoted by $\|du\|$, is given by either of two formulas: \begin{align*} \|du\| &= \sup \left\{ \sum_{i=1}^k |u(x_i) - u(x_{i-1})|: 0=x_0<x_1<\dots < x_k=T\right\}\\ &= \int_{[0,T]}|du|. \end{align*} It follows from our conventions that $\|u\|_\infty\le \|du\|$. For $u\in NBV([0,T];{\bf R})$, we will use the notation $u_-$ to denote the function $u_-(t) = u(t-)$. Note that $u_-$ has bounded variation, but is left-continuous rather than right-continuous. \begin{lemma}\label{lemma:byparts} For any $u,v\in NBV([0,T];{\bf R})$ and $a,b\in[0,T]$, the following integration by parts formula holds: \begin{equation}\label{eq:intbyparts} \int_{(a,b]} u\,dv + \int_{(a,b]} v_-\, du = u(b)v(b) - u(a)v(a).
\end{equation} \end{lemma} \begin{proof} This follows as in \cite[Thm.\ 3.36]{Folland} by applying Fubini's theorem to the integral $\int_{\Omega} du\times dv$, where $\Omega$ is the triangle $\{(s,t): a< s\le t\le b\}$. \end{proof} \begin{lemma}\label{lemma:productrule} The following product rules hold for $u,v\in NBV([0,T];{\bf R})$: \begin{align*} d(uv) &= u\,dv + v_-\, du \\ &= u_-\,dv + v\, du\\ &= u\,dv + v\, du - \sum_i \Delta_{s_i}(u)\Delta_{s_i}(v)\delta_{s_i}, \end{align*} where $\delta_{s_i}$ is the Dirac mass at $s_i$, and the sum is over the countably many points $s_i\in[0,T]$ at which both $u$ and $v$ are discontinuous. \end{lemma} \begin{proof} The first two formulas follow immediately from \eqref{eq:intbyparts} and the definition of $d(uv)$. For the third, we just note that the measure $(v-v_-)du$ is supported on the set of points where $u$ and $v$ are both discontinuous, and for each such point $s_i$, \begin{align*} (v(s_i)-v_-(s_i))du(\{s_i\}) &= (v(s_i) - v(s_i-)) (u(s_i) - u(s_i-)) \\ &= \Delta_{s_i}(u)\Delta_{s_i}(v). \end{align*} \end{proof} We will be interested primarily in vector-valued functions. We let $NBV([0,T];{\bf R}^n)$ denote the set of functions ${\bf v} \colon [0,T]\rightarrow {\bf R}^n$ each of whose component functions is in $NBV([0,T];{\bf R})$, and $NBV([0,T];M)\subset NBV([0,T];{\bf R}^n)$ the subset of functions taking their values in $M$. The considerations above apply equally well to such vector-valued functions, with obvious trivial modifications in notation.
For example, if ${\bf v} ,{\bf w} \in NBV([0,T];{\bf R}^n)$, we consider $d{\bf v}$ and $d{\bf w}$ as ${\bf R}^n$-valued measures, and Lemma \ref{lemma:byparts} implies that \begin{displaymath} \int_{(a,b]} \langle{\bf v} ,d{\bf w} \rangle + \int_{(a,b]} \langle{\bf w} _-, d{\bf v} \rangle = \langle{\bf v} (b),{\bf w} (b)\rangle - \langle{\bf v} (a),{\bf w} (a)\rangle. \end{displaymath} If $\gamma \in NBV([0,T];M)$, we say a function ${\bf v} \in NBV([0,T];{\bf R}^n)$ is a {\it vector field along $\gamma $} if ${\bf v} (t)\in \T _{\gamma (t)}M$ for each $t\in [0,T]$. This is equivalent to the equation $\langle{\bf v} (t),{\bf n} (\gamma (t))\rangle=0$ for all $t$, or more succinctly $\langle{\bf v} ,{\bf n} \circ \gamma \rangle\equiv 0$. Note that the fact that $\gamma $ takes its values in a bounded set, on which ${\bf n} $ is uniformly Lipschitz, guarantees that ${\bf n} \circ \gamma \in NBV([0,T];{\bf R}^n)$. We generalize the notion of covariant derivative for $NBV$ vector fields by defining \begin{displaymath} {\mathcal D} {\bf v} = d{\bf v} + \langle {\bf v} _-, d({\bf n} \circ \gamma )\rangle{\bf n} \circ \gamma . \end{displaymath} One motivation for this definition is provided by the following lemma, which says that if ${\bf v} (0)$ is tangent to $M$ and ${\mathcal D} {\bf v} $ is tangent to $M$ on all of $[0,T]$, then ${\bf v} $ stays tangent to $M$. \begin{lemma}\label{lemma:Y-stays-tangent} Suppose $\gamma \in NBV([0,T];M)$ and ${\bf v} \in NBV([0,T];{\bf R}^n)$. If ${\bf v} (0)\in \T _{\gamma (0)}M$ and $\langle{\mathcal D} {\bf v} ,{\bf n} \circ \gamma \rangle \equiv 0$, then ${\bf v} (t)\in \T _{\gamma (t)}M$ for all $t\in [0,T]$.
\end{lemma} \begin{proof} Using Lemma \ref{lemma:productrule}, we compute \begin{align*} 0 &= \langle{\mathcal D} {\bf v} ,{\bf n} \circ \gamma \rangle\\ &= \langle d{\bf v} ,{\bf n} \circ \gamma \rangle + \bigl\langle \langle {\bf v} _-, d({\bf n} \circ \gamma ) \rangle{\bf n} \circ \gamma, {\bf n} \circ \gamma\bigr\rangle\\ &= d\langle{\bf v} ,{\bf n} \circ \gamma \rangle - \langle {\bf v} _-, d({\bf n} \circ \gamma )\rangle + \langle {\bf v} _-, d({\bf n} \circ \gamma ) \rangle \langle{\bf n} \circ \gamma, {\bf n} \circ \gamma\rangle\\ &= d\langle{\bf v} ,{\bf n} \circ \gamma \rangle. \end{align*} Thus if $\langle{\bf v} (0),{\bf n} (\gamma (0))\rangle=0$, we find by integration that $\langle{\bf v} (t),{\bf n} (\gamma (t))\rangle=0$ for all $t$. \end{proof} \subsection{An Existence and Uniqueness Theorem} The main purpose of this section is to prove the following theorem. \begin{theorem}\label{thm:existence-uniqueness} Let $M\subset {\bf R}^n$ be a smooth, properly embedded hypersurface, and let $\gamma \in NBV([0,T];M)$. For any ${\bf v} _0\in \T _{\gamma (0)}M$, there exists a unique $NBV$ vector field ${\bf v} $ along $\gamma $ that is a solution to the following (measure-valued) ODE initial-value problem: \begin{equation}\label{eq:ODE} \begin{aligned} {\mathcal D} {\bf v} &= (\S \circ \gamma ) {\bf v} \, dt,\\ {\bf v} (0) &= {\bf v} _0. \end{aligned} \end{equation} \end{theorem} Before proving the theorem, we will establish some important preliminary results. We begin by dispensing with the uniqueness question. \begin{lemma} Let $\gamma \in NBV([0,T];M)$. If ${\bf v} ,\widetilde {\bf v} \in NBV([0,T];{\bf R}^n)$ are both solutions to \eqref{eq:ODE} with the same initial condition, they are equal. \end{lemma} \begin{proof} Suppose ${\bf v} $ is any solution to \eqref{eq:ODE}.
Observe that Lemma \ref{lemma:Y-stays-tangent} implies that ${\bf v} (t)$ is tangent to $M$ for all $t$, so $\langle{\bf v} ,{\bf n} \circ \gamma \rangle\equiv 0$. Let $R = \|\gamma \|_\infty$, so that $\gamma $ takes its values in $M_R$. With $K$ chosen as in Lemma \ref{lemma:globalK}, define $f\in NBV([0,T];{\bf R})$ by $f(t) = e^{-2Kt} |{\bf v} (t)|^2$. Then Lemma \ref{lemma:productrule} yields \begin{align*} df &= e^{-2Kt} \biggl( -2K|{\bf v} |^2 dt + 2\langle {\bf v} , d{\bf v} \rangle -\sum_i \langle\Delta_{s_i}{\bf v} ,\Delta_{s_i}{\bf v} \rangle\delta_{s_i}\biggr)\\ &= e^{-2Kt} \biggl( -2K|{\bf v} |^2 dt - 2\langle{\bf v} ,{\bf n} \circ \gamma \rangle \langle {\bf v} _-, d({\bf n} \circ \gamma )\rangle\\ &\qquad\qquad + 2\langle {\bf v} , (\S \circ \gamma ) {\bf v} \rangle\,dt -\sum_i \langle\Delta_{s_i}{\bf v} ,\Delta_{s_i}{\bf v} \rangle\delta_{s_i}\biggr)\\ &= e^{-2Kt} \biggl( 2\bigl( \langle {\bf v} , (\S \circ \gamma ) {\bf v} \rangle - K|{\bf v} |^2\bigr)dt -\sum_i |\Delta_{s_i}{\bf v} |^2\delta_{s_i}\biggr). \end{align*} Since \eqref{eq:S-norm-est} shows that $\langle {\bf v} , (\S \circ \gamma ) {\bf v} \rangle \le K|{\bf v} |^2$, this last expression is a nonpositive measure on $[0,T]$. Integrating, we conclude that $f(t)\le f(0)$, or \begin{displaymath} |{\bf v} (t)|^2\le e^{2Kt}|{\bf v} _0|^2. \end{displaymath} In particular, the only solution with initial condition ${\bf v} _0=0$ is the zero solution. Because \eqref{eq:ODE} is linear in ${\bf v} $, this suffices. \end{proof} To prove existence, we will work first with finite approximations. Define a {\it finite trajectory} in $M$ to be a function $\gamma \in NBV([0,T];M)$ that takes on only finitely many values. This means that there exists a partition $\{0=t_0 < t_1 < \dots < t_m =T \}$ of $[0,T]$ such that $\gamma $ is constant on $[t_{i},t_{i+1})$ for each $i$.
For such a function, $d\gamma = \sum_{i=0}^m \Delta_{t_i}(\gamma )\delta_{t_i}$ and $\|d\gamma \| = \sum_{i=0}^m |\Delta_{t_i}(\gamma )|$. Suppose $\gamma $ is a finite trajectory in $M$ and ${\bf v} _0\in \T _{\gamma (0)}M$. Let $0=t_0<\dots<t_m=T$ be a finite partition of $[0,T]$ including all of the discontinuities of $\gamma $, and write $x_i = \gamma (t_i)$. Define ${\bf v} \colon [0,T]\rightarrow {\bf R}^n$ by \begin{equation}\label{eq:finite-Y} {\bf v} (t) = e^{(t-t_k)\S _{x_k}} \pi_{x_k} e^{(t_k-t_{k-1})\S _{x_{k-1}}} \pi_{x_{k-1}} \cdots e^{(t_2-t_1) \S _{x_1}} \pi_{x_1} e^{(t_1-t_0) \S _{x_0}} {\bf v} _0, \end{equation} where $k$ is the largest index such that $t_k\le t$. Observe that the definition of ${\bf v} $ is unchanged if we insert more times $t_i$ in the partition. \begin{lemma}\label{lemma:finite-Y} Let $\gamma \colon[0,T]\rightarrow M$ be a finite trajectory. For any ${\bf v} _0\in \T _{\gamma (0)}M$, the map ${\bf v} $ defined by \eqref{eq:finite-Y} is the unique solution to \eqref{eq:ODE}, and satisfies \begin{align} |{\bf v} (t)| &\le e^{Ct}|{\bf v} _0|,\label{eq:Y(t)-est}\\ \|d{\bf v} \| &\le C,\label{eq:dY-est} \end{align} where $C$ is a constant depending only on $M$, $T$, and $\|d\gamma \|$. \end{lemma} \begin{proof} An easy computation shows that \begin{align*} d{\bf v} &= (\S \circ \gamma ){\bf v} \, dt + \sum_{i=0}^m \left( \pi_{x_i} {\bf v} (t_i-) - {\bf v} (t_i-)\right) \delta_{t_i}\\ &= (\S \circ \gamma ){\bf v} \, dt + \sum_{i=0}^m \bigl\langle{\bf v} (t_i-), {\bf n} (\gamma (t_i-)) - {\bf n} (\gamma (t_i))\bigr\rangle {\bf n} (\gamma (t_i)) \delta_{t_i}, \end{align*} from which it follows that ${\bf v} $ solves \eqref{eq:ODE}. To estimate $|{\bf v} (t)|$, observe first that the operator norm of each projection $\pi_{x}$ is equal to one. Let $K$ be the constant of Lemma \ref{lemma:globalK} for $R = \|d\gamma \|$.
Using \eqref{eq:e^S-est}, we have the following operator norm estimate for any finite collection of points $x_1,\dots,x_j\in M_R$ and real numbers $l_1,\dots,l_j\in [0,T]$: \begin{equation}\label{eq:comp-est} \|e^{l_{j} \S _{x_j}} \circ\pi_{x_j}\circ \cdots \circ e^{l_{1}\S _{x_1}}\circ \pi_{x_1}\| \le e^{Kl_{j}}\cdots e^{Kl_{1}} = e^{K(l_{j}+\cdots+l_{1})}. \end{equation} Applying this to the definition of ${\bf v} $ proves \eqref{eq:Y(t)-est}. Then, using \eqref{eq:Y(t)-est} and \eqref{eq:N-Lip-est}, we estimate \begin{align*} \|d{\bf v} \| & = \int_{[0,T]} |(\S \circ \gamma ){\bf v} |\,dt + \sum_{i=0}^m \biggl |\bigl\langle{\bf v} (t_i-), {\bf n} (\gamma (t_i)) - {\bf n} (\gamma (t_i-))\bigr\rangle {\bf n} (\gamma (t_i)) \biggr|\\ &\le \int_{[0,T]} K e^{Kt}dt + \sum_{i=0}^m e^{KT} K|\gamma (t_i) - \gamma (t_i-)|\\ &\le C(1+ \|d\gamma \|). \end{align*} \end{proof} \begin{lemma}\label{lemma:finite-Y-est} Suppose $\gamma $ and $\widetilde \gamma $ are any finite trajectories in $M$ defined on $[0,T]$ and starting at the same point, and ${\bf v} $, $\widetilde {\bf v} $ are the corresponding solutions to \eqref{eq:ODE}. There is a constant $C$ depending only on $M$, $T$, $\|\gamma \|_\infty$, and $\|\widetilde \gamma \|_\infty$ such that the following estimate holds: \begin{displaymath} \|{\bf v} - \widetilde {\bf v} \|_{\infty} \le C \left( 1 + \|d\gamma \| + \|d\widetilde \gamma \|\right)\|\gamma -\widetilde \gamma \|_{\infty}|{\bf v} _0|. \end{displaymath} \end{lemma} \begin{proof} Lemma \ref{lemma:finite-Y} shows that $\|{\bf v} \|_\infty$ and $\|\widetilde {\bf v} \|_\infty$ are both bounded by $C|{\bf v} _0|$ for some $C$ depending only on $M$, $T$, $\|\gamma \|_\infty$, and $\|\widetilde \gamma \|_\infty$. Fix $t\in [0,T]$, and let $0=t_0<\dots<t_k\le t$ denote a finite partition that includes all of the discontinuities of $\gamma $ and $\widetilde \gamma $ in $[0,t]$. 
We introduce the following shorthand notations: \begin{align*} t_{k+1}&=t, & l_i &= t_{i+1} - t_i,\\ x_i &= \gamma (t_i), & \widetilde x_i &= \widetilde \gamma (t_i),\\ \S _i &= \S (x_i),& \widetilde \S _i &= \S (\widetilde x_i),\\ \pi_i &= \pi_{x_i},& \widetilde\pi_i &= \pi_{\widetilde x_i}. \end{align*} Observing that $\pi_0{\bf v} _0={\bf v} _0$ and $\widetilde \pi_{k+1}\widetilde {\bf v} (t) = \widetilde {\bf v} (t)$, we can write ${\bf v} (t)-\widetilde {\bf v} (t)$ as a telescoping sum: \begin{equation*} {\bf v} (t) - \widetilde {\bf v} (t) = \sum_{i=0}^{k} e^{l_{k} \S _{k}} \pi_k \cdots e^{l_{i+1} \S _{i+1}} \pi_{i+1} \left( e^{l_{i} \S _{i}} \pi_{i} - \widetilde \pi_{i+1} e^{l_{i} \widetilde \S _{i}} \right) \widetilde \pi_{i} \cdots e^{l_1 \widetilde \S _{1}} \widetilde \pi_{1} e^{l_0 \widetilde \S _{0}} {\bf v} _0. \end{equation*} By \eqref{eq:comp-est}, the compositions of operators before and after the parentheses in the summation above are uniformly bounded in operator norm by $e^{KT}$. Therefore, \begin{displaymath} |{\bf v} (t) - \widetilde {\bf v} (t)| \le e^{2KT} \sum_{i=0}^{k}\left\| \pi_{i+1}\circ \left( e^{l_{i} \S _{i}} \circ\pi_{i} - \widetilde \pi_{i+1} \circ e^{l_{i} \widetilde \S _{i}} \right) \circ\widetilde \pi_{i} \right\|\, |{\bf v} _0|. \end{displaymath} Using the fact that $\S _{i}$ and $\pi_{i}$ commute, as do $\widetilde \S _i$ and $\widetilde\pi_i$, we decompose the middle factors as follows: \begin{align*} \pi_{i+1}\circ \left( e^{l_{i} \S _{i}} \circ\pi_{i} - \widetilde \pi_{i+1} \circ e^{l_{i} \widetilde \S _{i}} \right) \circ\widetilde \pi_{i} &= \pi_{i+1}\circ\pi_i \circ \left( e^{l_i \S _i} - e^{l_i \widetilde \S _i} \right) \circ \widetilde \pi_i\\ &\quad + \pi_{i+1} \circ \left( \pi_i - \widetilde\pi_{i+1} \right) \circ \widetilde \pi_i \circ e^{l_i\widetilde \S _i} . \end{align*} We will deal with each of these terms separately. 
For the first term, \eqref{eq:e^S-est3} implies \begin{displaymath} \left\|e^{l_i \S _i} - e^{l_i \widetilde \S _i}\right\| \le K l_i |x_i - \widetilde x_i|\le K l_i \|\gamma -\widetilde \gamma \|_{\infty}, \end{displaymath} and after summing over $i$, we find that this is bounded by $KT\|\gamma -\widetilde \gamma \|_{\infty}$. For the second term, Lemma \ref{lemma:pipi-pipi} allows us to conclude that \begin{align*} \biggl\| \pi_{i+1} &\circ \left( \pi_i - \widetilde\pi_{i+1} \right) \circ \widetilde \pi_i \circ e^{l_i\widetilde \S _i} \biggr\| \\ &\le C \left( \left|x_{i+1} - x_i\right| \left|x_i - \widetilde x_{i}\right| + \left|x_{i+1} - \widetilde x_{i+1}\right| \left|\widetilde x_{i+1} - \widetilde x_{i}\right| \right) \, \left\|e^{l_i\widetilde \S _i}\right\|\\ &\le Ce^{KT} \|\gamma -\widetilde \gamma \|_{\infty} \left( \left|x_{i+1} - x_i\right| + \left|\widetilde x_{i+1} - \widetilde x_{i}\right| \right). \end{align*} After summing, this is bounded by $Ce^{KT} \|\gamma -\widetilde \gamma \|_{\infty}\left(\|d\gamma \| + \|d\widetilde \gamma \|\right)$. This completes the proof. \end{proof} \begin{lemma}\label{lemma:unif-approx} Let $\gamma \in NBV([0,T];M)$ be arbitrary. For any $\varepsilon>0$, there exists a finite trajectory $\widetilde \gamma \colon [0,T]\rightarrow M$ such that $\|\gamma -\widetilde \gamma \|_{\infty}<\varepsilon$ and $\|d\widetilde \gamma \|\le \|d\gamma \|$. \end{lemma} \begin{proof} Let $\varepsilon$ be given.
Since $\gamma $ is {{\it c\`adl\`ag}}, for each $a\in [0,T]$, there exists $\delta>0$ such that for $t\in [0,T]$, \begin{align} t\in [a,a+\delta) &\implies |\gamma (t)-\gamma (a)|<\varepsilon,\label{eq:jump-est-r}\\ t\in (a-\delta,a) &\implies |\gamma (t)-\gamma (a-)|<\frac{\varepsilon}{2}.\label{eq:jump-est-l} \end{align} By compactness, we can choose finitely many points $0=a_0<a_1<\dots< a_m=T$ and corresponding positive numbers $\delta_0,\dots, \delta_m$ so that $[0,T]$ is covered by the intervals $(a_i-\delta_i,a_i+\delta_i)$, $i=0,\dots,m$. Because they are a cover, for each $i=1,\dots,m$ we can choose $b_i$ such that \begin{displaymath} b_i \in (a_{i-1},a_{i-1} + \delta_{i-1}) \cap (a_{i} - \delta_{i},a_i). \end{displaymath} Now define a finite trajectory $\widetilde \gamma \colon [0,T]\rightarrow M$ by \begin{displaymath} \widetilde \gamma (t) = \begin{cases} \gamma (a_{i-1}), &t\in [a_{i-1},b_i),\\ \gamma (b_i), &t\in [b_i,a_i). \end{cases} \end{displaymath} It is clear from the definition of the total variation that $\|d\widetilde \gamma \|\le \|d\gamma \|$. We will show that $\|\gamma -\widetilde \gamma \|_\infty<\varepsilon$. Let $t\in[0,T]$ be arbitrary. For some $i$, either $t\in [a_{i-1},b_i)$ or $t\in [b_i,a_i)$. In the first case, since $[a_{i-1},b_i) \subset [a_{i-1},a_{i-1} + \delta_{i-1})$ by construction, \eqref{eq:jump-est-r} yields \begin{displaymath} |\gamma (t)-\widetilde \gamma (t)| = |\gamma (t)-\gamma (a_{i-1})| < \varepsilon. \end{displaymath} On the other hand, if $t\in [b_i,a_i)\subset (a_i-\delta_i,a_i)$, \eqref{eq:jump-est-l} yields \begin{align*} |\gamma (t)-\widetilde \gamma (t)| &= |\gamma (t)-\gamma (b_i)|\\ &\le |\gamma (t)-\gamma (a_i-)| + |\gamma (a_i-)-\gamma (b_i)|\\ &< \frac{\varepsilon}{2} + \frac{\varepsilon}{2}, \end{align*} so we reach the same conclusion.
\end{proof} \begin{lemma}\label{lemma:sequence} For any $\gamma \in NBV([0,T];M)$, there exists a sequence of finite trajectories $\gamma ^{(k)}\colon [0,T]\rightarrow M$ satisfying $\|d\gamma ^{(k)}\|\le \|d\gamma \|$ and converging uniformly to $\gamma $. \end{lemma} \begin{proof} This is an immediate consequence of Lemma \ref{lemma:unif-approx}. \end{proof} Now we can prove the existence and uniqueness theorem. \begin{proof}[Proof of Theorem \ref{thm:existence-uniqueness}] Given $\gamma $ as in the statement of the theorem, let $\gamma ^{(k)}$ be a sequence of finite trajectories converging uniformly to $\gamma $ as guaranteed by Lemma \ref{lemma:sequence}. For each $k$, let ${\bf v} ^{(k)}$ be the solution to \eqref{eq:ODE} for $\gamma =\gamma ^{(k)}$, as defined by \eqref{eq:finite-Y}. Then Lemma \ref{lemma:finite-Y-est} guarantees that the sequence ${\bf v} ^{(k)}$ is uniformly Cauchy, and hence there is a limit function ${\bf v} \colon [0,T]\rightarrow {\bf R}^n$ such that ${\bf v} ^{(k)}\rightarrow {\bf v} $ uniformly. It is straightforward to check that ${\bf v} \in NBV([0,T];{\bf R}^n)$. Moreover, since each ${\bf v} ^{(k)}$ is tangent to $M$ and ${\bf v} ^{(k)}\rightarrow {\bf v} $ uniformly, it follows that ${\bf v} $ is also tangent to $M$. We need to show that ${\bf v} $ solves \eqref{eq:ODE} for $\gamma $. It suffices to show for any ${\bf w} \in NBV([0,T];{\bf R}^n)$ that \begin{displaymath} \int_{[0,T]} \langle{\bf w} ,d{\bf v} \rangle = - \int_{[0,T]} \langle{\bf w} ,{\bf n} \circ \gamma \rangle \langle {\bf v} _-, d({\bf n} \circ \gamma )\rangle + \int_{[0,T]} \langle {\bf w} , (\S \circ \gamma ){\bf v} \rangle\,dt.
\end{displaymath} If we write ${\bf w} = {\bf w} ^\top + {\bf w} ^\perp$, where ${\bf w} ^\top$ is tangent to $M$ and ${\bf w} ^\perp$ is orthogonal to $M$, this is equivalent to the following two equations: \begin{align} \int_{[0,T]} \langle{\bf w} ^\perp,d{\bf v} \rangle &= - \int_{[0,T]}\langle{\bf w} ^\perp,{\bf n} \circ \gamma \rangle \langle {\bf v} _-, d({\bf n} \circ \gamma )\rangle,\label{eq:perp-part}\\ \int_{[0,T]} \langle{\bf w} ^\top,d{\bf v} \rangle &= \int_{[0,T]} \langle {\bf w} ^\top, (\S \circ \gamma ){\bf v} \rangle\,dt.\label{eq:tan-part} \end{align} Because ${\bf w} ^\perp$ is proportional to ${\bf n} $, ${\bf w} ^\perp = \langle{\bf w} ^\perp,{\bf n} \circ \gamma \rangle{\bf n} \circ \gamma $. The fact that ${\bf v} $ is tangent to $M$ means that $\langle{\bf n} \circ \gamma ,{\bf v} \rangle\equiv 0$, from which we conclude \begin{align*} 0 &= d\langle{\bf n} \circ \gamma ,{\bf v} \rangle= \langle {\bf n} \circ \gamma , d{\bf v} \rangle+\langle{\bf v} _-,d({\bf n} \circ \gamma )\rangle. \end{align*} Therefore, \begin{align*} \langle{\bf w} ^\perp,d{\bf v} \rangle &= \bigl\langle \langle{\bf w} ^\perp,{\bf n} \circ \gamma \rangle{\bf n} \circ \gamma , d{\bf v} \bigr\rangle\\ &= \langle{\bf w} ^\perp,{\bf n} \circ \gamma \rangle\langle {\bf n} \circ \gamma , d{\bf v} \rangle \\ &= - \langle{\bf w} ^\perp,{\bf n} \circ \gamma \rangle \langle{\bf v} _-,d({\bf n} \circ \gamma )\rangle , \end{align*} from which \eqref{eq:perp-part} follows.
On the other hand, from Lemma \ref{lemma:byparts} we conclude that \begin{align*} \int_{[0,T]} \langle{\bf w} ^\top,d{\bf v} \rangle &= \langle{\bf w} ^\top(T),{\bf v} (T)\rangle - \int_{[0,T]} \langle{\bf v} _-,d{\bf w} ^\top\rangle\\ &= \lim_{k\rightarrow \infty}\left( \langle{\bf w} ^\top(T),{\bf v} ^{(k)}(T)\rangle - \int_{[0,T]} \langle{\bf v} _-^{(k)},d{\bf w} ^\top\rangle\right)\\ &= \lim_{k\rightarrow \infty} \int_{[0,T]} \langle{\bf w} ^\top,d{\bf v} ^{(k)}\rangle\\ &= \lim_{k\rightarrow \infty}\biggl(- \int_{[0,T]}\langle{\bf w} ^\top,{\bf n} \circ \gamma ^{(k)}\rangle \langle {\bf v} ^{(k)}_-, d({\bf n} \circ \gamma ^{(k)})\rangle\\ &\qquad \qquad+ \int_{[0,T]} \langle {\bf w} ^\top, (\S \circ \gamma ^{(k)}){\bf v} ^{(k)}\rangle\,dt\biggr). \end{align*} Since $\langle{\bf w} ^\top , {\bf n} \circ \gamma ^{(k)}\rangle$ converges uniformly to $\langle{\bf w} ^\top ,{\bf n} \circ \gamma \rangle \equiv 0$, and the measures $\langle {\bf v} ^{(k)}_-, d({\bf n} \circ \gamma ^{(k)})\rangle$ have uniformly bounded total variation, the first term above vanishes in the limit. Since both ${\bf v} ^{(k)}$ and $\S \circ \gamma ^{(k)}$ converge uniformly, the last term above converges to $\int_{[0,T]} \langle {\bf w} ^\top, (\S \circ \gamma ){\bf v} \rangle\,dt$. This proves \eqref{eq:tan-part}. \end{proof} \subsection{Stability} In this section, we wish to address the stability of the solution to \eqref{eq:ODE} under perturbations of the trajectory $\gamma $. For applications to probability, we will need to consider perturbations in a weaker topology than the uniform one.
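As a simple illustration of why the uniform topology is too restrictive here, consider the scalar paths $u = 1_{[s,T]}$ and $u_\varepsilon = 1_{[s+\varepsilon,T]}$ on $[0,T]$, each right-continuous and of bounded variation. Then
\begin{displaymath}
\|u - u_\varepsilon\|_\infty = 1 \qquad \text{for every } \varepsilon\in(0,T-s),
\end{displaymath}
even though the two paths differ only by an $\varepsilon$-shift of the single jump time. A topology suited to limits of jump paths should regard such pairs as close, and this is exactly what allowing a small reparametrization of time accomplishes.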
We define a metric $d_S$ on $NBV([0,T];{\bf R}^n)$, called the {\it Skorokhod metric}, by \begin{displaymath} d_S(\gamma ,\widetilde \gamma ) = \inf_{\lambda\in\Lambda} \max\left( \|\gamma - \widetilde \gamma \circ\lambda\|_\infty, \|\lambda-\operatorname{Id}\|_\infty\right), \end{displaymath} where $\Lambda$ is the set of increasing homeomorphisms $\lambda\colon[0,T]\rightarrow [0,T]$. We wish to show that the solution to \eqref{eq:ODE} is continuous in the Skorokhod metric, as long as we stay within a set of trajectories with uniformly bounded total variation. Because the Skorokhod metric is not homogeneous with respect to constant multiples, it will not be possible to bound $d_S({\bf v} ,\widetilde {\bf v} )$ directly in terms of $d_S(\gamma ,\widetilde \gamma )$. For this reason, we will work instead with the {\it solution operator}: for any $NBV$ trajectory $\gamma \colon [0,T]\rightarrow M$, this is the endomorphism-valued function ${\mathcal A}\colon [0,T]\rightarrow \operatorname{End}({\bf R}^n)$ defined by \begin{displaymath} {\mathcal A}(t){\bf v} _0 = {\bf v} (t), \end{displaymath} where ${\bf v} $ is the solution to \eqref{eq:ODE} with initial value ${\bf v} _0$, and extended to an endomorphism of ${\bf R}^n$ by declaring ${\mathcal A}(t){\bf n} (\gamma (0)) = 0$. As before, $\|{\mathcal A}(t)\|$ will denote the operator norm of ${\mathcal A}(t)$, and we set \begin{align*} \|{\mathcal A}\|_\infty &= \sup \{ \|{\mathcal A}(t)\|: t\in [0,T]\}\\ &= \sup \left\{ \frac{|{\bf v} (t)|}{|{\bf v} _0|}: t\in [0,T], \ {\bf v} _0 \in \T _{\gamma (0)}M,\ {\bf v} _0 \ne 0\right\}. \end{align*} It follows easily from the results of the preceding section that for any $\gamma \in NBV([0,T];M)$, the solution operator ${\mathcal A}$ is in $NBV([0,T];\operatorname{End}({\bf R}^n))$, and Lemma \ref{lemma:finite-Y-est} translates immediately into the following estimate.
\begin{lemma}\label{lemma:finite-B-est} Suppose $\gamma $ and $\widetilde \gamma $ are any finite trajectories in $M$ defined on $[0,T]$ and starting at the same point, and ${\mathcal A}$, $\widetilde {\mathcal A}$ are the corresponding solution operators. There is a constant $C$ depending only on $M$, $T$, $\|d\gamma \|$, and $\|d\widetilde \gamma \|$ such that the following estimate holds: \begin{displaymath} \|{\mathcal A}- \widetilde {\mathcal A}\|_{\infty} \le C \|\gamma -\widetilde \gamma \|_{\infty}. \end{displaymath} \end{lemma} Next we need to examine the effect of a reparametrization on the solution associated with a finite trajectory. \begin{lemma}\label{lemma:lambda-estimate} Let $\gamma \colon [0,T]\rightarrow M$ be a finite trajectory, let $\lambda\colon [0,T]\rightarrow [0,T]$ be an increasing homeomorphism, and let $\widetilde \gamma = \gamma \circ\lambda$. There is a constant $C$ depending only on $M$, $T$, and $\|d\gamma \|$ such that the solutions ${\bf v} $ and $\widetilde {\bf v} $ to \eqref{eq:ODE} associated to $\gamma $ and $\widetilde \gamma $ with the same initial value ${\bf v} _0$ satisfy \begin{equation}\label{eq:dS-est-for-Y} \|{\bf v} - \widetilde {\bf v} \circ\lambda\|_\infty \le C\|\lambda-\operatorname{Id}\|_\infty|{\bf v} _0|. \end{equation} \end{lemma} \begin{proof} As in the proof of Lemma \ref{lemma:finite-Y-est}, fix $t\in [0,T]$ and let $0=t_0<t_1<\dots<t_k\le t$ be the points in $[0,t]$ at which $\gamma $ is discontinuous. Set $t_{k+1} = t$, $x_i = \gamma (t_i)$, and $\widetilde t_i = \lambda(t_i)$, so that $\gamma $ and $\widetilde \gamma $ are given by \begin{align*} \gamma (t) &= x_i \quad \text{if $t_i\le t < t_{i+1}$},\\ \widetilde \gamma (t) &= x_i \quad \text{if $\widetilde t_i\le t < \widetilde t_{i+1}$}. \end{align*} We will also use the notations \begin{align*} l_i &= t_{i+1}-t_i,\\ \widetilde l_i &= \widetilde t_{i+1} - \widetilde t_i,\\ \S _i &= \S (x_i),\\ \pi_i &= \pi_{x_i}. 
\end{align*} We can write $\widetilde {\bf v} (\lambda(t))- {\bf v} (t) = \widetilde {\bf v} (\widetilde t_{k+1}) - {\bf v} (t_{k+1})$ as a telescoping sum: \begin{align*} \widetilde {\bf v} (\lambda(t))- {\bf v} (t) &= \left( \operatorname{Id} - e^{(t_{k+1} - \widetilde t_{k+1})\S _k}\right)\widetilde {\bf v} (\lambda(t))\\ &\quad + \sum_{i=1}^{k} e^{l_{k} \S _{k}} \pi_k \cdots e^{l_{i+1}\S _{i+1}} \pi_{i+1}\circ \\ &\qquad \left( e^{(t_{i+1} - \widetilde t_i)\S _{i}} \pi_{i} e^{\widetilde l_{i-1} \S _{i-1}} - e^{l_i \S _i} \pi_i e^{(t_i - \widetilde t_{i-1})\S _{i-1}} \right)\circ\\ &\qquad\quad \pi_{i-1} e^{\widetilde l_{i-2}\S _{i-2}} \cdots e^{\widetilde l_1 \S _{1}} \pi_{1} e^{\widetilde l_0 \S _{0}}{\bf v} _0. \end{align*} By virtue of \eqref{eq:e^S-est2}, the first term is bounded by a constant multiple of $|t_{k+1} - \widetilde t_{k+1}|\,|{\bf v} _0|\le \|\lambda - \operatorname{Id}\|_\infty|{\bf v} _0|$. As before, the compositions before and after the parentheses in the summation are uniformly bounded in operator norm, so we need only estimate the sum \begin{displaymath} \sum_{i=1}^{k}\left\| e^{(t_{i+1} - \widetilde t_i)\S _{i}} \circ \pi_{i} \circ e^{\widetilde l_{i-1} \S _{i-1}} - e^{l_i \S _i} \circ \pi_i \circ e^{(t_i - \widetilde t_{i-1})\S _{i-1}} \right\|. \end{displaymath} Using the fact that $\pi_i$ commutes with $\S _i$, we can rewrite the $i$-th term in this sum as \begin{multline*} \left\| e^{l_i \S _i} \circ \pi_i \circ \left( e^{(t_i - \widetilde t_i)\S _i} - e^{(t_i - \widetilde t_i) \S _{i-1}}\right) e^{\widetilde l_{i-1} \S _{i-1}} \right\| \\ \le \left\| e^{l_i \S _i}\right\| \left\| e^{(t_i - \widetilde t_i)\S _i }- e^{(t_i - \widetilde t_i) \S _{i-1}} \right\| \left\|e^{\widetilde l_{i-1} \S _{i-1}}\right\|. \end{multline*} {}From \eqref{eq:e^S-est} and \eqref{eq:e^S-est3}, this last expression is bounded by $ C\left|t_i - \widetilde t_i\right|\, \left|x_i - x_{i-1}\right|$. 
Summing over $i$, we conclude that this is bounded by $C \|\lambda-\operatorname{Id}\|_{\infty} \|d\gamma \|$. \end{proof} \begin{lemma}\label{lemma:dS-estimate} Suppose $\gamma , \widetilde \gamma \colon [0,T]\rightarrow M$ are finite trajectories starting at the same point, and let ${\mathcal A}$, $\widetilde {\mathcal A}$ be the corresponding solution operators. There exists a constant $C$ depending only on $M$, $T$, $\|d\gamma \|$, and $\|d\widetilde \gamma \|$ such that \begin{equation}\label{eq:finite-stability-est} d_S\big({\mathcal A},\widetilde {\mathcal A}\big) \le C d_S\big(\gamma ,\widetilde \gamma \big). \end{equation} \end{lemma} \begin{proof} Let $\delta=d_S(\gamma ,\widetilde \gamma )$ and let $\varepsilon>0$ be arbitrary. By definition of the Skorokhod metric, there is an increasing homeomorphism $\lambda\colon [0,T]\rightarrow [0,T]$ such that $\|\gamma -\widetilde \gamma \circ\lambda\|_\infty\le \delta+\varepsilon$ and $\|\lambda-\operatorname{Id}\|_\infty\le \delta+\varepsilon$. Let ${\mathcal A}_1$ be the solution operator associated with $\widetilde \gamma \circ\lambda$. Then $\|{\mathcal A}-{\mathcal A}_1\|_\infty\le C(\delta+\varepsilon)$ by Lemma \ref{lemma:finite-B-est}, and $\|\widetilde {\mathcal A} - {\mathcal A}_1\circ\lambda\|_\infty \le C(\delta+\varepsilon)$ by Lemma \ref{lemma:lambda-estimate}. Thus by the triangle inequality, \begin{align*} d_S({\mathcal A},\widetilde {\mathcal A}) &\le d_S({\mathcal A},{\mathcal A}_1) + d_S({\mathcal A}_1,\widetilde {\mathcal A})\\ &\le \|{\mathcal A}-{\mathcal A}_1\|_\infty + \max\left( \|\widetilde {\mathcal A} - {\mathcal A}_1\circ\lambda\|_\infty, \|\lambda-\operatorname{Id}\|_\infty\right)\\ &\le C(\delta+\varepsilon) + \max( C(\delta+\varepsilon), \varepsilon). \end{align*} Letting $\varepsilon\rightarrow 0$, we obtain \begin{displaymath} d_S({\mathcal A},\widetilde {\mathcal A}) \le 2Cd_S(\gamma ,\widetilde \gamma ).
\end{displaymath} \end{proof} Here is our main stability result. \begin{theorem}\label{thm:stability} Given positive constants $R$ and $T$, there exists a constant $C$ depending only on $M$, $R$, and $T$ such that for any trajectories $\gamma ,\widetilde \gamma \in\mathrm{NBV}([0,T];M)$ starting at the same point and with total variation bounded by $R$, the corresponding solution operators ${\mathcal A}$ and $\widetilde {\mathcal A}$ satisfy \begin{equation}\label{eq:stability-est} d_S\big({\mathcal A},\widetilde {\mathcal A}\big) \le C d_S\big(\gamma ,\widetilde \gamma \big). \end{equation} \end{theorem} \begin{proof} By the argument in the proof of Theorem \ref{thm:existence-uniqueness}, there exist sequences of finite trajectories converging uniformly to $\gamma $ and $\widetilde \gamma $ whose solution operators converge uniformly to ${\mathcal A}$ and $\widetilde {\mathcal A}$, respectively. Thus for any $\varepsilon>0$, we can choose finite trajectories $\gamma '$ and $\widetilde \gamma '$, with corresponding solution operators ${\mathcal A}'$ and $\widetilde {\mathcal A}'$, such that \begin{align*} \|\gamma '-\gamma \|_\infty&<\varepsilon, &\|\widetilde \gamma '-\widetilde \gamma \|_\infty&<\varepsilon,\\ \|{\mathcal A}'-{\mathcal A}\|_\infty&<\varepsilon, &\|\widetilde {\mathcal A}'-\widetilde {\mathcal A}\|_\infty&<\varepsilon. \end{align*} Then by the triangle inequality, \begin{displaymath} d_S(\gamma ',\widetilde \gamma ') \le d_S(\gamma ',\gamma ) + d_S(\gamma ,\widetilde \gamma ) + d_S(\widetilde \gamma ,\widetilde \gamma ') < d_S(\gamma ,\widetilde \gamma ) + 2\varepsilon. \end{displaymath} By Lemma \ref{lemma:dS-estimate}, we have \begin{align*} d_S({\mathcal A}',\widetilde {\mathcal A}')&\le Cd_S(\gamma ',\widetilde \gamma ') \le Cd_S(\gamma ,\widetilde \gamma ) + 2C\varepsilon.
\end{align*} Thus by the triangle inequality once more, \begin{align*} d_S({\mathcal A},\widetilde {\mathcal A}) &\le d_S({\mathcal A},{\mathcal A}') + d_S({\mathcal A}',\widetilde {\mathcal A}') + d_S(\widetilde {\mathcal A}',\widetilde {\mathcal A})\\ &\le \varepsilon + (Cd_S(\gamma ,\widetilde \gamma ) + 2C\varepsilon) + \varepsilon. \end{align*} Letting $\varepsilon\rightarrow 0$ completes the proof. \end{proof} \subsection{Base trajectories of infinite variation} In the probabilistic context, we will have to analyze the situation when the base trajectory $\gamma $ does not have finite variation on finite intervals. We will now present an example showing that some of the results proved in this section do not extend to (all) functions $\gamma$ of infinite variation. Hence, arguments using piecewise-constant approximations in the probabilistic context will require some modification of our techniques. \begin{example} Let $M\subset {\bf R}^2$ be the parabola $M= \{(x_1, x_2) \in {\bf R}^2: x_2 = x_1^2\}$, with the orientation of $M$ chosen so that $\|\S_x\| < 1$ for all $x\in M$. Let $\gamma(t) = (0,0)$ for $t\in [0,1]$, and for even integers $j\geq 2$, let \begin{equation*} \gamma_j(t) = \left\{ \begin{array}{ll} x_j:=(j^{-1}, j^{-2}), & \hbox{for $t\in[2kj^{-3}, (2k+1)j^{-3})$, $k=0,1,\dots,j^3/2-1$,} \\ y_j:=(-j^{-1}, j^{-2}), & \hbox{for $t\in[(2k+1)j^{-3}, (2k+2)j^{-3})$, $k=0,1,\dots,j^3/2-1$,} \\ (j^{-1}, j^{-2}), & \hbox{for $t=1$.} \end{array} \right. \end{equation*} Clearly, $\gamma_j \rightarrow \gamma$ in the supremum norm on $[0,1]$, so $d_S(\gamma_j, \gamma) \rightarrow 0$. Let ${\bf v}_0 = (1,0)$ and let ${\bf v}_j(t)$ be defined as in (\ref{eq:finite-Y}), relative to $\gamma_j$. Similarly, let ${\bf v}(t)$ be defined by (\ref{eq:finite-Y}) relative to $\gamma$. We have ${\bf v}(1) = e^{\S_{(0,0)}} {\bf v}_0 \ne (0,0)$.
There exists $c_1>0$ such that for all $j\geq 2$, ${\bf z} \in \T_{x_j} M$, we have $|\pi_{y_j} {\bf z}| \leq (1- c_1 j^{-2}) |{\bf z}|$, and similarly, $|\pi_{x_j} {\bf z}| \leq (1- c_1 j^{-2}) |{\bf z}|$, for ${\bf z} \in \T_{y_j} M$. (Indeed, since $M$ is a curve, $\T_{x_j}M$ and $\T_{y_j}M$ are the tangent lines at $x_j$ and $y_j$, with slopes $2j^{-1}$ and $-2j^{-1}$; the angle between them is of order $j^{-1}$, and projecting one line onto the other contracts lengths by the cosine of this angle, which is $1 - O(j^{-2})$.) This implies that for some $c_2 < 1$, $|{\bf v}_j(1)| = | (\pi_{x_j} \circ \pi_{y_j})^{j^3/2} {\bf v}_0| \leq c_2 ^j$. Hence, $\lim_{j\rightarrow \infty} {\bf v}_j(1) = (0,0) \ne {\bf v}(1)$. This shows that results such as Lemma \ref{lemma:finite-Y-est} do not hold for (some) functions $\gamma$ which do not have bounded variation. \end{example} \section{Multiplicative functional for reflected Brownian motion}\label{section:diff} Suppose $D\subset{\bf R}^n$, $n\geq 2$, is an open connected bounded set with $C^2$ boundary. Recall that ${\bf n} (x)$ denotes the unit inward normal vector at $x\in\partial D$. Let $B$ be standard $d$-dimensional Brownian motion, $x_* \in \overline D$, and consider the following Skorokhod equation, \begin{equation} X_t = x_* + B_t + \int_0^t {\bf n} (X_s) dL_s, \qquad \hbox{for } t\geq 0. \label{old1.1} \end{equation} Here $L$ is the local time of $X$ on $\partial D$. In other words, $L$ is a non-decreasing continuous process which does not increase when $X$ is in $D$, i.e., $\int_0^\infty {\bf 1}_{D}(X_t) dL_t = 0$, a.s. Equation (\ref{old1.1}) has a unique pathwise solution $(X,L)$ such that $X_t \in \overline D$ for all $t\geq 0$ (see \cite{LS}). We need an extra ``cemetery point'' $\Delta$ outside ${\bf R}^n$, so that we can send processes killed at a finite time to $\Delta$. Excursions of $X$ from $\partial D$ will be denoted $e$ or $e_s$, i.e., if $s< u$, $X_s,X_u\in\partial D$, and $X_t \notin \partial D$ for $t\in(s,u)$ then $e_s = \{e_s(t) = X_{t+s} ,\, t\in[0,u-s)\}$. Let $\zeta(e_s) = u -s$ be the lifetime of $e_s$. By convention, $e_s(t) = \Delta$ for $t\geq \zeta$, so $e_t \equiv \Delta$ if $\inf\{s> t: X_s \in \partial D\} = t$.
Let $\sigma$ be the inverse local time, i.e., $\sigma_t = \inf\{s \geq 0: L_s \geq t\}$, and ${\mathcal E}_r = \{e_s: s \leq \sigma_r\}$. Fix some $r,\varepsilon >0$ and let $\{e_{t_1}, e_{t_2}, \dots, e_{t_m}\}$ be the set of all excursions $e\in {\mathcal E}_r$ with $|e(0) -e(\zeta-)| \geq \varepsilon$. We assume that excursions are labeled so that $t_k < t_{k+1}$ for all $k$ and we let $\ell_k = L_{t_k}$ for $k=1,\dots, m$. We also let $t_0 =\inf\{t\geq 0: X_t \in \partial D\}$, $\ell_0 =0 $, $\ell_{m+1} = r$, and $\Delta \ell_k = \ell_{k+1} - \ell_k$. Let $x_k = e_{t_k}(\zeta-)$ for $k=1,\dots, m$, and let $x_0=X_{t_0}$. In this section, the boundary of $D$ will play the role of the hypersurface $M$, i.e., $M=\partial D$. Recall that $\S$ denotes the shape operator and $\pi_x$ is the orthogonal projection on the tangent space $\T_x \partial D$, for $x\in \partial D$. For ${\bf v}_0\in{\bf R}^n$, let \begin{equation}\label{def:vr} {\bf v}_{r,\varepsilon} = \exp(\Delta\ell_m \S(x_m)) \pi_{x_m} \cdots \exp(\Delta\ell_1 \S(x_1)) \pi_{x_1} \exp(\Delta \ell_0 \S(x_0)) \pi_{x_0} {\bf v}_0. \end{equation} Let ${\mathcal A}_{r,\varepsilon}$ be a linear mapping defined by ${\bf v}_{r,\varepsilon} = {\mathcal A}_{r,\varepsilon} {\bf v}_0$. We point out that the ``multiplicative functional'' $\widetilde {\mathcal A}_t$ discussed in the Introduction is not the same as ${\mathcal A}_r$ defined in this section. Intuitively speaking, ${\mathcal A}_r = \widetilde {\mathcal A}_{\sigma_r}$, although we have not defined $\widetilde {\mathcal A}_t$ in a formal way. Suppose that $\partial D$ contains $n$ non-degenerate $(n-1)$-dimensional spheres, such that vectors perpendicular to these spheres are orthogonal to each other.
If the trajectory $\{X_t, 0\leq t \leq r\}$ visits the $n$ spheres and no other part of $\partial D$, then it is easy to see that ${\mathcal A}_{r,\varepsilon} = 0$ for small $\varepsilon>0$. To avoid this uninteresting situation, we impose the following assumption on $D$. \begin{assumption}\label{a:A1} For every $x\in \partial D$, the $(n-1)$-dimensional surface area measure of $\{y\in \partial D: \langle{\bf n}(y),{\bf n}(x)\rangle =0\}$ is zero. \end{assumption} \begin{theorem}\label{thm:diffskor} Suppose that Assumption \ref{a:A1} holds. With probability 1, for every $r>0$, the limit ${\mathcal A}_r := \lim_{\varepsilon\rightarrow 0} {\mathcal A}_{r,\varepsilon}$ exists and it is a linear mapping of rank $n-1$. For any ${\bf v}_0$, with probability 1, ${\mathcal A}_{r,\varepsilon}{\bf v}_0\rightarrow {\mathcal A}_r {\bf v}_0$ uniformly on compact sets. \end{theorem} \begin{remark} Intuitively speaking, ${\mathcal A}_r{\bf v}_0$ represents the solution to the following ODE, similar to (\ref{eq:ODE}). Let $\gamma(t) = X(\sigma_t)$, and suppose that ${\bf v}_0 \in {\bf R}^n$. Consider the following ODE, \begin{equation*} {\mathcal D} {\bf v} = (\S \circ \gamma ) {\bf v} \, dt, \qquad {\bf v} (0) = \pi_{x_0}{\bf v} _0. \end{equation*} Then ${\mathcal A}_r$ is defined by ${\bf v}(r)= {\mathcal A}_r{\bf v}_0$. We cannot use Theorem \ref{thm:existence-uniqueness} to justify this definition of ${\mathcal A}_r$ because $\gamma \notin \mathrm{NBV}([0,r];\partial D)$. See \cite{A}, \cite{IKpaper} or \cite{H} for various versions of the above claim with rigorous proofs. Those papers also contain proofs of the fact that ${\mathcal A}_r$ is a multiplicative functional of reflected Brownian motion. This last claim follows directly from our definition of ${\mathcal A}_r$.
\end{remark} \begin{remark} Recall that $B$ is standard $d$-dimensional Brownian motion and consider the following stochastic flow, \begin{equation} X_t^x = x + B_t + \int_0^t {\bf n} (X^x_s) dL^x_s, \qquad \hbox{for } t\geq 0, \label{old1.1new} \end{equation} where $L^x$ is the local time of $X^x$ on $\partial D$. The results in \cite{LS} are deterministic in nature, so with probability 1, for all $x\in \overline D$ simultaneously, (\ref{old1.1new}) has a unique pathwise solution $(X^x,L^x)$. In a forthcoming paper, we will prove that for every $r>0$, a.s., $\lim_{\varepsilon\rightarrow0} \sup_{{\bf v}: |{\bf v}| \leq 1} \left| (X^{x_0 + \varepsilon {\bf v}} _{\sigma_r} - X^{x_0}_{\sigma_r})/\varepsilon - {\mathcal A}_r {\bf v}\right| =0$. \end{remark} The rest of this section is devoted to the proof of Theorem \ref{thm:diffskor}. We precede the actual proof with a short review of excursion theory. See, e.g., \cite{M} for the foundations of the theory in the abstract setting and \cite{Bu} for the special case of excursions of Brownian motion. Although \cite{Bu} does not discuss reflected Brownian motion, all results we need from that book readily apply in the present context. An ``exit system'' for excursions of the reflected Brownian motion $X$ from $\partial D$ is a pair $(L^*_t, H^x)$ consisting of a positive continuous additive functional $L^*_t$ and a family of ``excursion laws'' $\{H^x\}_{x\in\partial D}$. In fact, $L^*_t = L_t$; see, e.g., \cite{BCJ}. Recall that $\Delta$ denotes the ``cemetery'' point outside ${\bf R}^n$ and let ${\mathcal C}$ be the space of all functions $f:[0,\infty) \rightarrow {\bf R}^n\cup\{\Delta\}$ which are continuous and take values in ${\bf R}^n$ on some interval $[0,\zeta)$, and are equal to $\Delta$ on $[\zeta,\infty)$.
For $x\in \partial D$, the excursion law $H^x$ is a $\sigma$-finite (positive) measure on $\mathcal C$, such that the canonical process is strong Markov on $(t_0,\infty)$, for every $t_0>0$, with transition probabilities of Brownian motion killed upon hitting $\partial D$. Moreover, $H^x$ gives zero mass to paths which do not start from $x$. We will be concerned only with ``standard'' excursion laws; see Definition 3.2 of \cite{Bu}. For every $x\in \partial D$ there exists a unique standard excursion law $H^x$ in $D$, up to a multiplicative constant. Recall that excursions of $X$ from $\partial D$ are denoted $e$ or $e_s$, i.e., if $s< u$, $X_s,X_u\in\partial D$, and $X_t \notin \partial D$ for $t\in(s,u)$ then $e_s = \{e_s(t) = X_{t+s} ,\, t\in[0,u-s)\}$ and $\zeta(e_s) = u -s$. By convention, $e_s(t) = \Delta$ for $t\geq \zeta$, so $e_t \equiv \Delta$ if $\inf\{s> t: X_s \in \partial D\} = t$. Recall that $\sigma_t = \inf\{s\geq 0: L_s \geq t\}$ and let $I$ be the set of left endpoints of all connected components of $(0, \infty)\smallsetminus \{t\geq 0: X_t\in \partial D\}$. The following is a special case of the exit system formula of \cite{M}, \begin{equation} {\bf E} \left[ \sum_{t\in I} V_t \cdot f ( e_t) \right] = {\bf E} \int_0^\infty V_{\sigma_s} H^{X(\sigma_s)}(f) ds = {\bf E} \int_0^\infty V_t H^{X_t}(f) dL_t, \label{old4.1} \end{equation} where $V_t$ is a predictable process and $f:\, {\mathcal C}\rightarrow[0,\infty)$ is a universally measurable function which vanishes on excursions $e_t$ identically equal to $\Delta$. Here and elsewhere $H^x(f) = \int_{\mathcal C} f dH^x$. The normalization of the exit system is somewhat arbitrary, for example, if $(L_t, H^x)$ is an exit system and $c\in(0,\infty)$ is a constant then $(cL_t, (1/c)H^x)$ is also an exit system. Let ${\bf P}^y_D$ denote the distribution of Brownian motion starting from $y$ and killed upon exiting $D$.
Theorem 7.2 of \cite{Bu} shows how to choose a ``canonical'' exit system; that theorem is stated for the usual planar Brownian motion but it is easy to check that both the statement and the proof apply to the reflected Brownian motion in ${\bf R}^n$. According to that result, we can take $L^*_t$ to be the continuous additive functional whose Revuz measure is a constant multiple of the surface area measure on $\partial D$ and $H^x$'s to be standard excursion laws normalized so that \begin{equation} H^x (A) = \lim_{\delta\downarrow 0} \frac1{ \delta} \,{\bf P}_D^{x + \delta{\bf n}(x)} (A),\label{old4.2} \end{equation} for any event $A$ in a $\sigma$-field generated by the process on an interval $[t_0,\infty)$, for any $t_0>0$. The Revuz measure of $L$ is the measure $dx/(2|D|)$ on $\partial D$, i.e., if the initial distribution of $X$ is the uniform probability measure $\mu$ in $D$ then ${\bf E}^\mu \int_0^1 {\bf 1}_A (X_s) dL_s = \int_A dx/(2|D|)$ for any Borel set $A\subset \partial D$, see Example 5.2.2 of \cite{FOT}. It has been shown in \cite{BCJ} that $(L^*_t, H^x)=(L_t, H^x)$ is an exit system for $X$ in $D$, assuming the above normalization. \begin{proof}[Proof of Theorem \ref{thm:diffskor}] The overall structure of our argument will be similar to that in the proof of Lemma \ref{lemma:finite-Y-est}. We will first consider the case $r=1$. Let $\varepsilon_j = 2^{-j}$, for $j\geq 1$. Fix some $j$ for now and suppose that $\varepsilon' \in[\varepsilon_{j+1}, \varepsilon_j)$. Let \begin{align*} \left\{e_{t^j_1}, e_{t^j_2}, \dots, e_{t^j_{m_j}}\right\} &=\{e\in {\mathcal E}_1: |e(0) - e(\zeta-)| \geq \varepsilon_j\}, \\ \left\{e_{t'_1}, e_{t'_2}, \dots, e_{t'_{m'}}\right\} &=\{e\in {\mathcal E}_1: |e(0) - e(\zeta-)| \geq \varepsilon'\} . \end{align*} We label the excursions so that $t^j_k < t^j_{k+1}$ for all $k$ and we let $\ell^j_k = L_{t^j_k}$ for $k=1,\dots, m_j$.
Similarly, $t'_k < t'_{k+1}$ for all $k$ and $\ell'_k = L_{t'_k}$ for $k=1,\dots, m'$. We also let $t^j_0 = t'_0 = \inf\{t\geq 0: X_t \in \partial D\}$, $\ell^j_0 = \ell'_0 =0 $, $\ell^j_{m_j+1} = \ell'_{m'+1}= 1$, $\Delta \ell^j_k = \ell^j_{k+1} - \ell^j_k$, and $\Delta \ell'_k = \ell'_{k+1} - \ell'_k$. Let $x^j_k = e_{t^j_k}(\zeta-)$ for $k=1,\dots, m_j$, and $x'_k = e_{t'_k}(\zeta-)$ for $k=1,\dots, m'$. Let $x^j_0=X_{t^j_0}$, and $x'_0=X_{t'_0}$. Let $\gamma^j(s) = x^j_k$ for $s\in[\ell^j_k, \ell^j_{k+1})$ and $k=0,1,\dots, m_j$, and $\gamma^j(1) = \gamma^j(\ell^j_{m_j})$. Let $\gamma'(s) = x'_k$ for $s\in[\ell'_k, \ell'_{k+1})$ and $k=0,1,\dots, m'$, and $\gamma'(1) = \gamma'(\ell'_{m'})$. For ${\bf v}_0\in{\bf R}^n$, let \begin{align*} {\bf v}^j &= \exp(\Delta\ell^j_{m_j} \S(x^j_{m_j})) \pi_{x^j_{m_j}} \cdots \exp(\Delta\ell^j_1 \S(x^j_1)) \pi_{x^j_1} \exp(\Delta \ell^j_0 \S(x^j_0)) \pi_{x^j_0} {\bf v}_0, \\ {\bf v}' &= \exp(\Delta\ell'_{m'} \S(x'_{m'})) \pi_{x'_{m'}} \cdots \exp(\Delta\ell'_1 \S(x'_1)) \pi_{x'_1} \exp(\Delta \ell'_0 \S(x'_0)) \pi_{x'_0} {\bf v}_0. \end{align*} Let $0=\ell_0<\dots<\ell_{m+1} = 1$ denote the ordered set of all $\ell^j_k$'s, $0\leq k \leq m_j+1$, and $\ell'_k$'s, $0\leq k \leq m'+1$. In the definition of the $\ell_k$'s, we followed the proof of Lemma \ref{lemma:finite-Y-est} word for word, for conceptual consistency, although the set of $\ell_k$'s is the same as the set of $\ell'_k$'s. We introduce the following shorthand notations, $\Delta_i = \ell_{i+1} - \ell_i$, \begin{align*} x_i &= \gamma^j(\ell_i), & \widetilde x_i &= \gamma' (\ell_i),\\ \S _i &= \S (x_i),& \widetilde \S _i &= \S (\widetilde x_i),\\ \pi_i &= \pi_{x_i},& \widetilde\pi_i &= \pi_{\widetilde x_i}.
\end{align*} Observing that $\pi_0\widetilde \pi_{0} {\bf v} _0=\widetilde \pi_{0} {\bf v} _0$ and $\widetilde \pi_{m+1} {\bf v}' = {\bf v}'$, we can write ${\bf v}^j - {\bf v}'$ as a telescoping sum: \begin{equation*} {\bf v}^j - {\bf v} ' = \sum_{i=0}^{m} e^{\Delta_{m} \S _{m}} \pi_m \cdots e^{\Delta_{i+1} \S _{i+1}} \pi_{i+1} \left( e^{\Delta_{i} \S _{i}} \pi_{i} - \widetilde \pi_{i+1} e^{\Delta_{i} \widetilde \S _{i}} \right) \widetilde \pi_{i} \cdots e^{\Delta_1 \widetilde \S _{1}} \widetilde \pi_{1} e^{\Delta_0 \widetilde \S _{0}} \widetilde \pi_{0} {\bf v} _0. \end{equation*} By \eqref{eq:comp-est}, the compositions of operators before and after the parentheses in the summation above are uniformly bounded in operator norm by a constant. Therefore, for some $c_1$ depending only on $D$, \begin{equation}\label{eq:vminv} |{\bf v}^j - {\bf v}'| \le c_1 \sum_{i=0}^{m}\left\| \pi_{i+1}\circ \left( e^{\Delta_{i} \S _{i}} \circ\pi_{i} - \widetilde \pi_{i+1} \circ e^{\Delta_{i} \widetilde \S _{i}} \right) \circ\widetilde \pi_{i} \right\|\, |{\bf v} _0|. \end{equation} Using the fact that $\S _{i}$ and $\pi_{i}$ commute, as do $\widetilde \S _i$ and $\widetilde\pi_i$, we decompose the middle factors as follows: \begin{align}\label{eq:decom} \pi_{i+1}\circ \left( e^{\Delta_{i} \S _{i}} \circ\pi_{i} - \widetilde \pi_{i+1} \circ e^{\Delta_{i} \widetilde \S _{i}} \right) \circ\widetilde \pi_{i} &= \pi_{i+1}\circ\pi_i \circ \left( e^{\Delta_i \S _i} - e^{\Delta_i \widetilde \S _i} \right) \circ \widetilde \pi_i\\ &\quad + \pi_{i+1} \circ \left( \pi_i - \widetilde\pi_{i+1} \right) \circ \widetilde \pi_i \circ e^{\Delta_i\widetilde \S _i} . \nonumber \end{align} We will deal with each of these terms separately.
For the first term, we have by (\ref{eq:e^S-est3}), \begin{equation}\label{eq:1stterm} \left\|\pi_{i+1}\circ\pi_i \circ \left( e^{\Delta_i \S _i} - e^{\Delta_i \widetilde \S _i} \right) \circ \widetilde \pi_i \right\| \leq \left\| e^{\Delta_i \S _i} - e^{\Delta_i \widetilde \S _i} \right\| \leq c_2 \Delta_i |x_i - \widetilde x_i|. \end{equation} For the second term, Lemma \ref{lemma:pipi-pipi} and (\ref{eq:e^S-est}) allow us to conclude that \begin{align} \biggl\| \pi_{i+1} \circ \left( \pi_i - \widetilde\pi_{i+1} \right) \circ \widetilde \pi_i \circ e^{\Delta_i\widetilde \S _i} \biggr\| & \le c_3 \left( \left|x_{i+1} - x_i\right| \left|x_i - \widetilde x_{i}\right| + \left|x_{i+1} - \widetilde x_{i+1}\right| \left|\widetilde x_{i+1} - \widetilde x_{i}\right| \right) \, \left\|e^{\Delta_i\widetilde \S _i}\right\| \nonumber \\ &\le c_4 \left( \left|x_{i+1} - x_i\right| \left|x_i - \widetilde x_{i}\right| + \left|x_{i+1} - \widetilde x_{i+1}\right| \left|\widetilde x_{i+1} - \widetilde x_{i}\right| \right). \label{eq:2ndterm} \end{align} We will now estimate ${\bf E} \sup_{0\leq i \leq m} |x_i - \widetilde x_i|$. Suppose that $x_i \ne \widetilde x_i$ for some $i$. Then there exist $k_1$ and $k_2$ such that $\ell^j_{k_1} < \ell'_{k_2} < \ell^j_{k_1+1}$, $x_i = x^j_{k_1}$, and $\widetilde x_i = x'_{k_2}$. Hence, \begin{equation}\label{eq:diff1} \{|x_i - \widetilde x_i| > a \} \subset \bigcup_k \left\{\sup_{t^j_{k} + \zeta(e^j_{k}) < t < t^j_{k+1}, X_t \in \partial D} |x^j_{k} - X_t| >a \right\}. \end{equation} Intuitively speaking, the last condition means that the process $X$ deviates by more than $a$ units from $x^j_{k_1}$ (the right endpoint of an excursion $e_{t^j_{k_1}}$), when $X$ is on the boundary of $D$, at some time between the lifetime of this excursion and the start of the next excursion in this family, $e_{t^j_{k_1+1}}$.
Since $\partial D$ is $C^2$, standard estimates (see, e.g., \cite{Bu}) show that for some $a_0,c_5 >0$, all $x\in \partial D$ and $a\in(0,a_0)$, \begin{equation}\label{eq:H1} 1/(c_5 a) \leq H^x\left(|e(\zeta-) - x| > a\right) \leq c_5/ a. \end{equation} It follows from this and (\ref{old4.1}) that there exists $c_6$ so large that for any stopping time $T$ and $a\in(0,a_0)$, \begin{equation}\label{eq:H2} {\bf P}\left( \exists e_s: |e_s(\zeta-) - e_s(0)| > a, s\in (T, \sigma(L_T + c_6a)) \right) \geq 3/4. \end{equation} Let $\tau_{{\mathcal B}(x,a)}$ be the exit time of $X$ from the ball ${\mathcal B}(x,a)$ in ${\bf R}^n$ with center $x$ and radius $a$. Routine estimates show that for some $c_7, a_1>0$, and all $a\in (0, a_1)$ and $x\in \partial D$, \begin{equation}\label{eq:local1} {\bf P}^x( L(\tau_{{\mathcal B}(x,c_7 a)}) > c_6 a) > 3/4. \end{equation} Let $T^j_{k,0} = t^j_k$, and $$ T^j_{k,i+1} = \inf\{t \geq T^j_{k,i}: X(t) \in \partial D, |X(t) - X(T^j_{k,i})| \geq c_7 \varepsilon_j\}, \qquad i \geq 0. $$ According to (\ref{eq:local1}), the amount of local time generated on $(T^j_{k,0}, T^j_{k,1})$ will be greater than $c_6 \varepsilon_j$ with probability greater than $3/4$. This and (\ref{eq:H2}) imply that there exists an excursion $e_s$ with $|e_s(\zeta-) - e_s(0)| > \varepsilon_j$ and $s\in (T^j_{k,0}, T^j_{k,1})$, with probability greater than $1/2$. By the strong Markov property, if there does not exist an excursion $e_s$ with $|e_s(\zeta-) - e_s(0)| > \varepsilon_j$ and $s\in (T^j_{k,0}, T^j_{k,i})$ then there exists an excursion $e_s$ with $|e_s(\zeta-) - e_s(0)| > \varepsilon_j$ and $s\in (T^j_{k,i}, T^j_{k,i+1})$, with probability greater than $1/2$. Let $M^j_k$ be the smallest $i$ with the property that there exists an excursion $e_s$ with $|e_s(\zeta-) - e_s(0)| > \varepsilon_j$ and $s\in (T^j_{k,i}, T^j_{k,i+1})$. We see that $M^j_k$ is majorized by a geometric random variable $\widetilde M^j_k$ with mean 2.
Note that \begin{equation*} |X(T^j_{k,i+1}) - X(T^j_{k,i})| \leq (c_7+1) \varepsilon_j= c_8 \varepsilon_j, \end{equation*} for $i < M^j_k$. Therefore, \begin{equation}\label{eq:xM} \sup_{t^j_{k} + \zeta(e^j_{k}) < t < t^j_{k+1}, X_t \in \partial D} |x^j_{k} - X_t| \leq c_8 M^j_k\varepsilon_j. \end{equation} It is easy to see, using the strong Markov property at the stopping times $t^j_k$, that we can assume that all $\{\widetilde M^j_k, k\geq 0\}$ are independent. Consider an arbitrary $\beta_1< -1$ and let $n_j = \varepsilon_j^{\beta_1}$. For some $c_9>0$, not depending on $j$, \begin{equation}\label{eq:geom} {\bf P}\left(\max_{1\leq k \leq n_j} c_8 \widetilde M^j_k \varepsilon _j \geq c_8 i \varepsilon_j\right) = 1 - (1 - (1/2)^i)^{n_j} \leq \begin{cases} 1& \text{if $i \leq |\beta_1| j$,} \\ c_9 n_j (1/2)^i& \text{ if $i > |\beta_1| j$}. \end{cases} \end{equation} Let $\rho_0$ be the diameter of $D$ and $j_1 $ be the largest integer smaller than $\log \rho_0$. By (\ref{eq:geom}), for any $\beta_2 < 1$, some $c_{12}<\infty$, and all $j\geq j_1$, \begin{align} {\bf E}\left (\max_{1\leq k \leq n_j} c_8 M^j_k \varepsilon _j \right) &\leq {\bf E}\left (\max_{1\leq k \leq n_j} c_8 \widetilde M^j_k \varepsilon _j \right) \nonumber \\ &\leq \sum_{i \leq |\beta_1| j} c_8 i \varepsilon_j + \sum_{i > |\beta_1| j} c_8 i \varepsilon_j c_9 n_j (1/2)^i \nonumber \\ &\leq c_{10} \varepsilon_j (\log \varepsilon_j)^2 + c_{11} \varepsilon_j |\log \varepsilon_j| \leq c_{12} \varepsilon_j^{\beta_2} .\label{eq:maxM} \end{align} Let $N_\varepsilon$ be the number of excursions $e_s$ with $s \leq \sigma_1$ and $|e_s(0) - e_s(\zeta-)| \geq \varepsilon$. For $\varepsilon = \varepsilon_j$, $N_\varepsilon = m_j$. Then (\ref{old4.1}) and (\ref{eq:H1}) imply that $N_\varepsilon$ is stochastically majorized by a Poisson random variable $\widetilde N_\varepsilon$ with mean $ c_{13} /\varepsilon$, where $c_{13}<\infty$ does not depend on $\varepsilon>0$.
We have ${\bf E} \exp(\widetilde N_\varepsilon) = \exp(c_{13} \varepsilon^{-1} (e-1))$, so for any $a>0$, \begin{equation*} {\bf P}(N_\varepsilon \geq a) \leq {\bf P}(\widetilde N_\varepsilon \geq a) = {\bf P}(\exp(\widetilde N_\varepsilon) \geq \exp( a)) \leq \exp(c_{14} \varepsilon^{-1} - a). \end{equation*} Standard calculations yield the following estimates. For any $\beta_3<-1$, $\beta_4 < 0$, $\delta_1>0$, some $ \delta_2 \in(0,\delta_1)$, and all $\delta_3,\delta_4 \in (0,\delta_2)$, \begin{equation}\label{eq:LDP1} {\bf P}(N_{\delta_3} \geq \delta_3^{\beta_3}) \leq \delta_3^2, \end{equation} and \begin{equation}\label{eq:LDP2} \sup_{\delta_4\leq \delta \leq \delta_1 } {\bf E}\left(N_\delta {\bf 1}_{\left\{N_\delta\geq \delta^{\beta_3} \delta_4^{\beta_4}\right\}}\right) \leq \delta_4^2. \end{equation} It follows from (\ref{eq:xM}), (\ref{eq:maxM}) and (\ref{eq:LDP1}) that, for any $\beta_2 < 1$, some $c_{16}$, and $j\geq j_1$, \begin{align} {\bf E}&\left( \max_{0 \leq k \leq m_j} \sup_{t^j_{k} + \zeta(e^j_{k}) < t < t^j_{k+1}, X_t \in \partial D} |x^j_{k} - X_t| \right) \nonumber \\ &\leq {\bf E}\left( \max_{0 \leq k \leq n_j} \sup_{t^j_{k} + \zeta(e^j_{k}) < t < t^j_{k+1}, X_t \in \partial D} |x^j_{k} - X_t| \right) + \rho_0 {\bf P}(m_j \geq n_j) \nonumber \\ &\leq {\bf E}\left( \max_{0 \leq k \leq n_j} c_8M^j_k\varepsilon_j \right) + c_{15} \varepsilon_j^2 \nonumber \\ &\leq c_{12} \varepsilon_j^{\beta_2} + c_{15} \varepsilon_j^2 \leq c_{16} \varepsilon_j^{\beta_2}. \label{eq:X-M} \end{align} Note that $\sum_{i=0}^m \Delta_i = 1$.
This, (\ref{eq:X-M}), (\ref{eq:1stterm}) and (\ref{eq:diff1}) imply that, \begin{align}\label{eq:1stterm1} {\bf E} &\left( \sup_{\varepsilon_{j+1} \leq \varepsilon' < \varepsilon_j} \sum_{i=0}^m \left\|\pi_{i+1}\circ\pi_i \circ \left( e^{\Delta_i \S _i} - e^{\Delta_i \widetilde \S _i} \right) \circ \widetilde \pi_i \right\| \right) \leq {\bf E}\left( \sup_{\varepsilon_{j+1} \leq \varepsilon' < \varepsilon_j} \sum_{i=0}^m c_2 \Delta_i |x_i - \widetilde x_i| \right) \\ & \leq {\bf E}\left( \sup_{\varepsilon_{j+1} \leq \varepsilon' < \varepsilon_j} \max_{0 \leq i \leq m} |x_i - \widetilde x_i| \sum_{i=0}^m c_2 \Delta_i \right) = c_2 {\bf E}\left( \sup_{\varepsilon_{j+1} \leq \varepsilon' < \varepsilon_j} \max_{0 \leq i \leq m} |x_i - \widetilde x_i| \right) \nonumber \\ & \leq c_2 {\bf E}\left( \max_{0 \leq k \leq m_j} \sup_{t^j_{k} + \zeta(e^j_{k}) < t < t^j_{k+1}, X_t \in \partial D} |x^j_{k} - X_t| \right) \leq c_{16} \varepsilon_j^{\beta_2}. \nonumber \end{align} We will now estimate the right hand side of (\ref{eq:2ndterm}). We start with an observation similar to (\ref{eq:diff1}). Suppose that $x_i \ne x_{i+1}$ for some $i$. Then there exists $k_1$ such that $x_i = x^j_{k_1}$, and $ x_{i+1} = x^{j}_{k_1+1}$. Note that $k_1$'s corresponding to distinct $i$'s are distinct. Hence, \begin{align}\label{eq:diam} &\{|x_i - x_{i+1}| > a\} \\ & \nonumber \subset \bigcup_k \left\{ \sup_{t^j_{k} + \zeta(e^j_{k}) < t < t^j_{k+1}, X_t \in \partial D} |x^j_{k} - X_t| >a /2 \right\} \cup \left\{|e_{t^{j}_{k+1}}(0)- e_{t^{j}_{k+1}}(\zeta-)| > a/2\right\} \\ & \nonumber \subset \bigcup_k \left\{ \sup_{t^j_{k} + \zeta(e^j_{k}) < t < t^j_{k+1}, X_t \in \partial D} |x^j_{k} - X_t| >a /2 \right\} \cup \bigcup_k \left\{|e_{t^{j+1}_{k}}(0)- e_{t^{j+1}_{k}}(\zeta-)| > a/2\right\}. \end{align} Similarly, suppose that $\widetilde x_i \ne \widetilde x_{i+1}$ for some $i$.
Then there exists $k_2$ such that $\widetilde x_i = x'_{k_2}$, and $ \widetilde x_{i+1} = x'_{k_2+1}$. Again, $k_2$'s corresponding to distinct $i$'s are distinct. Hence, \begin{equation*} \{|\widetilde x_i - \widetilde x_{i+1}| > a \} \subset \bigcup_k \left\{ \sup_{t'_{k} + \zeta(e'_{k}) < t < t'_{k+1}, X_t \in \partial D} |x'_{k} - X_t| >a/2 \right\} \cup \left\{|e_{t'_{k+1}}(0)- e_{t'_{k+1}}(\zeta-)| > a/2 \right\}. \end{equation*} Since $\varepsilon_{j+1} \leq \varepsilon' < \varepsilon_j$, this implies that, \begin{align}\label{eq:diam1} &\{|\widetilde x_i - \widetilde x_{i+1}| > a \} \\ &\subset \bigcup_{0 \leq k \leq m_j} \left\{ \sup_{t^j_{k} + \zeta(e^j_{k}) < t < t^j_{k+1}, X_t \in \partial D} |x^j_{k} - X_t| >a /2 \right\} \cup \bigcup_{0 \leq k \leq m_{j+1}} \left\{|e_{t^{j+1}_{k}}(0)- e_{t^{j+1}_{k}}(\zeta-)| > a/2\right\}. \nonumber \end{align} It follows from (\ref{eq:diff1}), (\ref{eq:diam}) and (\ref{eq:diam1}) that \begin{align} &\sup_{\varepsilon_{j+1} \leq \varepsilon' < \varepsilon_j} \sum_{0 \leq i \leq m} \left( \left|x_{i+1} - x_i\right| \left|x_i - \widetilde x_{i}\right| + \left|x_{i+1} - \widetilde x_{i+1}\right| \left|\widetilde x_{i+1} - \widetilde x_{i}\right|\right) \label{eq:xs} \\ &\leq 4 \sum_{0 \leq i \leq m} \left(\max_{0 \leq k \leq m_j} \sup_{t^j_{k} + \zeta(e^j_{k}) < t < t^j_{k+1}, X_t \in \partial D} |x^j_{k} - X_t| \right)^2 \nonumber \\ & \qquad + 8 \left(\max_{0 \leq k \leq m_j} \sup_{t^j_{k} + \zeta(e^j_{k}) < t < t^j_{k+1}, X_t \in \partial D} |x^j_{k} - X_t| \right) \left(\sum_{0 \leq k \leq m_{j+1}} |e_{t^{j+1}_k}(0)- e_{t^{j+1}_k}(\zeta-)| \right) \nonumber \\ &= 4 ( m+1) \left(\max_{0 \leq k \leq m_j} \sup_{t^j_{k} + \zeta(e^j_{k}) < t < t^j_{k+1}, X_t \in \partial D} |x^j_{k} - X_t| \right)^2 \nonumber \\ & \qquad + 8 \left(\max_{0 \leq k \leq m_j} \sup_{t^j_{k} + \zeta(e^j_{k}) < t < t^j_{k+1}, X_t \in \partial D} |x^j_{k} - X_t| \right) \left(\sum_{0 \leq k \leq m_{j+1}} |e_{t^{j+1}_k}(0)-
e_{t^{j+1}_k}(\zeta-)| \right). \nonumber \end{align} We have the following estimate, similar to (\ref{eq:maxM}). For any $\beta_5 < 2$, some $c_{19}<\infty$, and $j\geq j_1$, \begin{align} {\bf E}\left (\max_{1\leq k \leq n_j} c_8 M^j_k \varepsilon _j \right)^2 &\leq {\bf E}\left (\max_{1\leq k \leq n_j} c_8 \widetilde M^j_k \varepsilon _j \right)^2 \nonumber \\ &\leq \sum_{i \leq \beta_1 j} (c_8 i \varepsilon_j)^2 + \sum_{i > \beta_1 j} (c_8 i \varepsilon_j)^2 c_9 n_j (1/2)^i \nonumber \\ &\leq c_{17} \varepsilon_j^2 |\log \varepsilon_j|^3 + c_{18} \varepsilon_j^2 (\log \varepsilon_j)^2 \leq c_{19} \varepsilon_j^{\beta_5} .\label{eq:maxM2} \end{align} We now proceed as in (\ref{eq:X-M}). For any $\beta_5 < 2$ and $j \geq j_1$, \begin{align} {\bf E}&\left( \max_{0 \leq k \leq m_j} \sup_{t^j_{k} + \zeta(e^j_{k}) < t < t^j_{k+1}, X_t \in \partial D} |x^j_{k} - X_t| \right)^2 \nonumber \\ &\leq {\bf E}\left( \max_{0 \leq k \leq n_j} \sup_{t^j_{k} + \zeta(e^j_{k}) < t < t^j_{k+1}, X_t \in \partial D} |x^j_{k} - X_t| \right)^2 + \rho_0^2 {\bf P}(m_j \geq n_j) \nonumber \\ &\leq {\bf E}\left( \max_{0 \leq k \leq n_j} c_8M^j_k \varepsilon_j \right)^2 + c_{20} \varepsilon_j^2 \nonumber \\ &\leq c_{19} \varepsilon_j^{\beta_5} + c_{20} \varepsilon_j^2 \leq c_{21} \varepsilon_j^{\beta_5}. \label{eq:X-M2} \end{align} Recall that $m$ is random and note that $m \leq m_{j+1}$.
We obtain the following from (\ref{eq:LDP1}) and (\ref{eq:X-M2}), for any $\beta_7 < 1$, by choosing appropriate $\beta_5 < 2$ and $\beta_6 < -1$, \begin{align}\label{eq:msup} {\bf E}&\left( (m+1) \left(\max_{0 \leq k \leq m_j} \sup_{t^j_{k} + \zeta(e^j_{k}) < t < t^j_{k+1}, X_t \in \partial D} |x^j_{k} - X_t| \right)^2 \right) \\ &\leq {\bf E}\left( \varepsilon_j^{\beta_6} \left(\max_{0 \leq k \leq m_j} \sup_{t^j_{k} + \zeta(e^j_{k}) < t < t^j_{k+1}, X_t \in \partial D} |x^j_{k} - X_t| \right)^2 \right) + \rho_0^2 {\bf P}(m+1 \geq \varepsilon_j^{\beta_6}) \nonumber \\ &\leq c_{21} \varepsilon_j^{\beta_6 + \beta_5} + c_{22} \varepsilon_j^2 \leq c_{23} \varepsilon_j^{\beta_7}. \nonumber \end{align} Next we estimate the second term on the right hand side of (\ref{eq:xs}) as follows. The number of excursions $e_{t^{j+1}_k}$ with $|e_{t^{j+1}_k}(0)- e_{t^{j+1}_k}(\zeta-)| \in [\varepsilon_{i+1}, \varepsilon_i]$ is bounded by $m_{i+1}$, so $$ \sum_{0 \leq k \leq m_{j+1}} |e_{t^{j+1}_k}(0)- e_{t^{j+1}_k}(\zeta-)| \leq \sum_{i=j_1}^{j+1} m_i \varepsilon_{i-1}.
$$ Hence, for any $\beta_9 <0$, we can choose $\beta_8 < 0$, $\beta_1 < -1$ and $c_{23} < \infty$ so that for all $j\geq j_1$, \begin{align*} & \left(\max_{0 \leq k \leq m_j} \sup_{t^j_{k} + \zeta(e^j_{k}) < t < t^j_{k+1}, X_t \in \partial D} |x^j_{k} - X_t| \right) \left(\sum_{0 \leq k \leq m_{j+1}} |e_{t^{j+1}_k}(0)- e_{t^{j+1}_k}(\zeta-)| \right) \\ & \leq \left(\max_{0 \leq k \leq m_j} \sup_{t^j_{k} + \zeta(e^j_{k}) < t < t^j_{k+1}, X_t \in \partial D} |x^j_{k} - X_t| \right) \sum_{i=j_1}^{j+1} m_i \varepsilon_{i-1} \nonumber \\ & \leq \left(\max_{0 \leq k \leq m_j} \sup_{t^j_{k} + \zeta(e^j_{k}) < t < t^j_{k+1}, X_t \in \partial D} |x^j_{k} - X_t| \right) \sum_{i=j_1}^{j+1} \varepsilon_j^{\beta_8} n_i 2 \varepsilon_i + \rho_0 \sum_{i=j_1}^{j+1} m_i {\bf 1}_{\{m_i\geq n_i\varepsilon_j^{\beta_8}\}} 2 \varepsilon_i \nonumber \\ & \leq \left(\max_{0 \leq k \leq m_j} \sup_{t^j_{k} + \zeta(e^j_{k}) < t < t^j_{k+1}, X_t \in \partial D} |x^j_{k} - X_t| \right) 2 (j-j_1) \varepsilon_j^{\beta_8 + \beta_1+1} + 2 \rho_0 \sum_{i=j_1}^{j+1} m_i {\bf 1}_{\{m_i\geq n_i \varepsilon_j^{\beta_8}\}} \varepsilon_i \nonumber \\ & \leq c_{23} \varepsilon_j^{\beta_9} \left(\max_{0 \leq k \leq m_j} \sup_{t^j_{k} + \zeta(e^j_{k}) < t < t^j_{k+1}, X_t \in \partial D} |x^j_{k} - X_t| \right) + 2\rho_0 \sum_{i=j_1}^{j+1} m_i {\bf 1}_{\{m_i\geq n_i \varepsilon_j^{\beta_8}\}} \varepsilon_i .
\nonumber \end{align*} This, (\ref{eq:X-M}) and (\ref{eq:LDP2}) imply that for any $\beta_{10} < 1$, by choosing an appropriate $\beta_2<1$ and $\beta_8,\beta_9 < 0$, we obtain for some $c_{26} < \infty$ and $j\geq j_1$, \begin{align}\label{eq:xXd} {\bf E}& \left(\max_{0 \leq k \leq m_j} \sup_{t^j_{k} + \zeta(e^j_{k}) < t < t^j_{k+1}, X_t \in \partial D} |x^j_{k} - X_t| \right) \left(\sum_{0 \leq k \leq m_{j+1}} |e_{t^{j+1}_k}(0)- e_{t^{j+1}_k}(\zeta-)| \right) \\ & \leq c_{23} \varepsilon_j^{\beta_9} {\bf E} \left(\max_{0 \leq k \leq m_j} \sup_{t^j_{k} + \zeta(e^j_{k}) < t < t^j_{k+1}, X_t \in \partial D} |x^j_{k} - X_t| \right) + 2\rho_0 {\bf E}\left(\sum_{i=j_1}^{j+1} m_i {\bf 1}_{\{m_i\geq n_i \varepsilon_j^{\beta_8}\}} \varepsilon_i\right) , \nonumber \\ &\leq c_{24} \varepsilon_j^{\beta_9} \varepsilon_j^{\beta_2} + c_{25} \sum_{i=j_1}^{j+1} \varepsilon_j^2 \varepsilon_i \leq c_{26} \varepsilon_j^{\beta_{10}}. \nonumber \end{align} We combine (\ref{eq:xs}), (\ref{eq:msup}) and (\ref{eq:xXd}) to see that for any $\beta_{10} < 1$, some $c_{27}< \infty$ and all $j\geq j_1$, \begin{align*} {\bf E} \left(\sup_{\varepsilon_{j+1} \leq \varepsilon' < \varepsilon_j} \sum_{0 \leq i \leq m} \left( \left|x_{i+1} - x_i\right| \left|x_i - \widetilde x_{i}\right| + \left|x_{i+1} - \widetilde x_{i+1}\right| \left|\widetilde x_{i+1} - \widetilde x_{i}\right|\right) \right) &\leq c_{27} \varepsilon_j^{\beta_{10}}.
\end{align*} We use this estimate and (\ref{eq:2ndterm}) to see that for any $\beta_{10} < 1$, some $c_{27}< \infty$ and all $j\geq j_1$, \begin{align}\label{eq:2ndterm1} {\bf E} &\left( \sup_{\varepsilon_{j+1} \leq \varepsilon' < \varepsilon_j} \sum_{i=0}^m \left\| \pi_{i+1} \circ \left( \pi_i - \widetilde\pi_{i+1} \right) \circ \widetilde \pi_i \circ e^{\Delta_i\widetilde \S_i} \right\| \right) \\ & \leq {\bf E}\left( \sup_{\varepsilon_{j+1} \leq \varepsilon' < \varepsilon_j} \sum_{i=0}^m c_{4} \left( \left|x_{i+1} - x_i\right| \left|x_i - \widetilde x_{i}\right| + \left|x_{i+1} - \widetilde x_{i+1}\right| \left|\widetilde x_{i+1} - \widetilde x_{i}\right|\right) \right) \leq c_{27} \varepsilon_j^{\beta_{10}}. \nonumber \end{align} It follows from (\ref{eq:vminv}), (\ref{eq:decom}), (\ref{eq:1stterm1}), and (\ref{eq:2ndterm1}) that for any $\beta_{10} < 1$, some $c_{28}< \infty$ and all $j\geq j_1$, \begin{equation*} {\bf E} \left( \sup_{\varepsilon_{j+1} \leq \varepsilon' < \varepsilon_j} |{\bf v}^j - {\bf v} '| \right) \leq c_{28} \varepsilon_j^{\beta_{10}} |{\bf v}_0| = c_{28} 2^{-\beta_{10} j} |{\bf v}_0|. \end{equation*} This implies that $ \sum _{j\geq j_1} {\bf E} \left( \sup_{\varepsilon_{j+1} \leq \varepsilon' < \varepsilon_j} |{\bf v}^j - {\bf v}'| \right) < \infty $, and, therefore, a.s., \begin{equation}\label{eq:vcon} \sum _{j\geq j_1} \left( \sup_{\varepsilon_{j+1} \leq \varepsilon' < \varepsilon_j} |{\bf v}^j - {\bf v}'| \right) < \infty. \end{equation} We extend the notation ${\bf v}'$ from $\varepsilon'$ in the range $[\varepsilon_{j+1}, \varepsilon_j)$ to all $\varepsilon'>0$, in the obvious way. It is elementary to see that (\ref{eq:vcon}) implies that ${\bf v}_1 := \lim_{\varepsilon' \downarrow 0} {\bf v}'$ exists. For every $\varepsilon'>0$, the mapping ${\bf v}_0 \rightarrow {\bf v}'$ is linear, so the same can be said about the mapping ${\bf v}_0 \rightarrow {\bf v}_1 := {\mathcal A}_1 {\bf v}_0$.
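The step from summable expectations to the almost sure summability in (\ref{eq:vcon}) is the usual Tonelli argument; for completeness:

```latex
% Tonelli's theorem (nonnegative terms) gives
{\bf E} \sum_{j \geq j_1} \sup_{\varepsilon_{j+1} \leq \varepsilon' < \varepsilon_j} |{\bf v}^j - {\bf v}'|
  = \sum_{j \geq j_1} {\bf E} \left( \sup_{\varepsilon_{j+1} \leq \varepsilon' < \varepsilon_j} |{\bf v}^j - {\bf v}'| \right) < \infty,
% and a nonnegative random variable with finite expectation is a.s. finite.
```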
Note that the right hand side of (\ref{eq:vminv}) corresponding to $r\in[0,1)$ is less than or equal to the right hand side of (\ref{eq:vminv}) in the case $r=1$. Hence, we can strengthen (\ref{eq:vcon}) to the claim that a.s., \begin{equation*} \sum _{j\geq j_1} \left( \sup_{0\leq r \leq 1} \sup_{\varepsilon_{j+1} \leq \varepsilon' < \varepsilon_j} |{\bf v}_r^j - {\bf v}_r'| \right) < \infty, \end{equation*} where ${\bf v}_r^j$ and ${\bf v}_r'$ are defined in a way analogous to ${\bf v}^j$ and ${\bf v}'$, relative to $r\in[0,1]$. The analogous argument shows that for any integer $r_0>0$, a.s., \begin{equation*} \sum _{j\geq j_1} \left( \sup_{0\leq r \leq r_0} \sup_{\varepsilon_{j+1} \leq \varepsilon' < \varepsilon_j} |{\bf v}_r^j - {\bf v}_r'| \right) < \infty. \end{equation*} We use the same argument as above to conclude that for any ${\bf v}_0$, with probability 1, ${\mathcal A}_{r,\varepsilon}{\bf v}_0\rightarrow {\mathcal A}_r {\bf v}_0$ uniformly on compact sets. It remains to show that ${\mathcal A}_r$ has rank $n-1$. Without loss of generality, we will consider only $r=1$. Recall definition (\ref{def:vr}) of ${\bf v}_r$ and note that $\pi_{x_0} {\bf v}_0 \in \T_{x_0} \partial D$. It will suffice to show that for any ${\bf w}\in \T_{x_0} \partial D$ such that ${\bf w} \ne 0$, we have ${\mathcal A}_1 {\bf w} \ne 0$. Recall the definition of $x^j_k$'s and related notation from the beginning of the proof. Recall from (\ref{eq:e^S-estlow}) that for some $c_{29}< \infty$ depending only on $D$, all $x\in \partial D$, ${\bf z} \in {\bf R}^n$, and all $t\geq 0$, we have $|e^{t \S (x)} {\bf z}| \ge e^{-c_{29} t}|{\bf z}|$.
Therefore, for any ${\bf w}\in \T_{x_0} \partial D$, \begin{align*} |{\bf v}^j| &= | \operatorname{exp}(\Delta\ell^j_{m_j} \S(x^j_{m_j})) \pi_{x^j_{m_j}} \cdots \operatorname{exp}(\Delta\ell^j_1 \S(x^j_1)) \pi_{x^j_1} \operatorname{exp}(\Delta \ell^j_0 \S(x^j_0)) \pi_{x^j_0} {\bf w}| \\ &\geq \operatorname{exp}\left(-c_{29}\sum_{i=0}^{m_j} \Delta \ell_i \right) |\pi_{x^j_{m_j}} \pi_{x^j_{m_j-1}} \cdots \pi_{x^j_1} \pi_{x^j_0} {\bf w}| \\ & = c_{30} |\pi_{x^j_{m_j}} \pi_{x^j_{m_j-1}} \cdots \pi_{x^j_1} \pi_{x^j_0} {\bf w}|. \end{align*} It follows that \begin{align*} \frac{|{\bf v}^j|}{|{\bf w}|} = \prod _{k=1}^{m_j} \frac{|\pi_{x^j_k} \cdots \pi_{x^j_1} \pi_{x^j_0} {\bf w}|} {| \pi_{x^j_{k-1}} \cdots \pi_{x^j_1} \pi_{x^j_0} {\bf w}|}, \end{align*} and, therefore, \begin{align*} \log|{\bf v}^j| = \log |{\bf w}| + \sum _{k=1}^{m_j} \left( \log |\pi_{x^j_k} \cdots \pi_{x^j_2} \pi_{x^j_1} {\bf w}| - \log | \pi_{x^j_{k-1}} \cdots \pi_{x^j_2} \pi_{x^j_1} {\bf w}| \right). \end{align*} By the Pythagorean theorem, $|{\bf z}|^2 = |\pi_x {\bf z}|^2 + \langle{\bf z}/|{\bf z}|, {\bf n}(x)\rangle^2 |{\bf z}|^2$. This implies that for some $c_{31}<\infty$, if ${\bf z} \in \T_y \partial D$ then \begin{equation*} |\pi_x {\bf z}| \geq \left(1-c_{31} |x-y|^2 \right) |{\bf z}|. \end{equation*} Thus we can find $\rho_1 >0 $ so small that for some $c_{32}$ and all $|x-y| \leq \rho_1$ and ${\bf z} \in \T_y \partial D$, \begin{equation*} \log |\pi_x {\bf z}| \geq \log |{\bf z}| - c_{32} |x-y|^2. \end{equation*} Therefore, \begin{align}\label{eq:logest} \log|{\bf v}^j| &\geq \log |{\bf w}| - c_{32}\sum _{k=1}^{m_j} |x^j_k - x^j_{k+1}|^2 {\bf 1}_{\{|x^j_k - x^j_{k+1}| \leq \rho_1\}}\\ &\qquad + \sum _{k=1}^{m_j} \left( {\bf 1}_{\{|x^j_k - x^j_{k+1}| > \rho_1\}} \log \frac{|\pi_{x^j_k} \cdots \pi_{x^j_1} \pi_{x^j_0} {\bf w}|} {| \pi_{x^j_{k-1}} \cdots \pi_{x^j_1} \pi_{x^j_0} {\bf w}|} \right).
\nonumber \end{align} We make $\rho_1$ smaller, if necessary, so that $\rho_1/2 = \varepsilon_{j_2}$ for some integer $j_2$. Note that the set of excursions $e_{t^{j_2}_k}$ is finite, with cardinality $m_{j_2}$. The hitting distribution of $\partial D$ for any excursion law $H^x$ is absolutely continuous with respect to the surface area measure on $\partial D$, because the same is true for Brownian motion. This, (\ref{old4.1}) and Assumption \ref{a:A1} imply that with probability 1, for all $k=1,2,\dots, m_{j_2}$, we have $|\langle{\bf n}(e_{t^{j_2}_k}(0)), {\bf n}(e_{t^{j_2}_k}(\zeta-))\rangle| > \delta $, for some random $\delta >0$. For large $j$, because of continuity of reflected Brownian motion paths, and because excursions are dense in the trajectory, the only points $x^j_{k+1}$ such that $|x^j_k - x^j_{k+1}| > \rho_1$ can be the endpoints of excursions $e_{t^{j_2}_i}$, $i=1,2,\dots, m_{j_2}$. Fix a point $e_{t^{j_2}_i}$ and let $k(j)$ be such that $x^j_{k(j)} = e_{t^{j_2}_i}$. Then $x^j_{k(j)-1}\rightarrow x^j_{k(j)}$ as $j \rightarrow \infty$, again by the continuity of reflected Brownian motion, and because excursions are dense in the trajectory. It follows that for large $j$, for all pairs $(x^j_k, x^j_{k+1})$ with $|x^j_k - x^j_{k+1}| > \rho_1$, we have $|\langle{\bf n}(x^j_k), {\bf n}(x^j_{k+1})\rangle| > \delta/2 $. This implies that, a.s., for some random $U>-\infty$, and all sufficiently large $j$, \begin{equation}\label{eq:largeexc} \sum _{k=1}^{m_j} \left( {\bf 1}_{\{|x^j_k - x^j_{k+1}| > \rho_1\}} \log \frac{|\pi_{x^j_k} \cdots \pi_{x^j_1} \pi_{x^j_0} {\bf w}|} {| \pi_{x^j_{k-1}} \cdots \pi_{x^j_1} \pi_{x^j_0} {\bf w}|} \right) > U.
\end{equation} In view of (\ref{eq:diam}) and (\ref{eq:msup}), for any $\beta_7 <1$, \begin{align}\label{eq:exc} {\bf E} &\left( \sum_{i=0}^{m_j} |x^j_i - x^j_{i+1}|^2 \right) \\ &\leq 8 {\bf E} \left( m_j \left( \max_{0 \leq k \leq m_j} \sup_{t^j_{k} + \zeta(e^j_{k}) < t < t^j_{k+1}, X_t \in \partial D} |x^j_{k} - X_t| \right)^2 \right) + 8{\bf E} \left( \sum_{k=1}^{m_j} |e_{t^j_k}(0)-e_{t^j_k}(\zeta-)|^2 \right) \nonumber \\ &\leq c_{23} \varepsilon_j^{\beta_7} + 8{\bf E} \left( \sum_{k=1}^{m_j} |e_{t^j_k}(0)-e_{t^j_k}(\zeta-)|^2 \right). \nonumber \end{align} By (\ref{old4.1}) and (\ref{eq:H1}), the expected number of excursions $e_s$ with $|e_s(\zeta-) - e_s(0)| \in [2^{-i-1}, 2^{-i}]$ and $s\in[0,1]$ is bounded by $c_{33} 2^i$. It follows that for some $c_{34}<\infty$, not depending on $j$, \begin{equation*} {\bf E} \left( \sum_{k=1}^{m_j} |e_{t^j_k}(0)-e_{t^j_k}(\zeta-)|^2 \right) \leq \sum_{i=j_1}^j c_{34} 2^{-2i} 2^i < c_{35} < \infty, \end{equation*} and this combined with (\ref{eq:exc}) yields \begin{equation*} \sup_{j\geq j_1} {\bf E} \left( \sum_{i=0}^{m_j} |x^j_i - x^j_{i+1}|^2 \right) < \infty. \end{equation*} In view of (\ref{eq:logest}) and (\ref{eq:largeexc}), \begin{equation*} \liminf_{j\rightarrow \infty} {\bf E} (\log|{\bf v}^j| -U) \geq \log |{\bf w}| - \limsup_{j\rightarrow \infty} {\bf E} \left( c_{32}\sum _{k=1}^{m_j} |x^j_k - x^j_{k+1}|^2\right) > -\infty, \end{equation*} so, with probability 1, $\liminf_{j\rightarrow \infty} |{\bf v}^j| >0$, and, therefore, $|{\bf v}_1| \ne 0$. \end{proof} \end{document}
\begin{document} \title{Direct Experimental Test of Forward and Reverse Uncertainty Relations} \author{Lei Xiao} \affiliation{Department of Physics, Southeast University, Nanjing 211189, China} \affiliation{Beijing Computational Science Research Center, Beijing 100084, China} \author{Bowen Fan} \affiliation{Department of Physics, Southeast University, Nanjing 211189, China} \author{Kunkun Wang} \affiliation{Beijing Computational Science Research Center, Beijing 100084, China} \author{Arun Kumar Pati} \affiliation{Quantum Information and Computation Group, Harish-Chandra Research Institute, HBNI, Allahabad 211019, India} \author{Peng Xue}\email{[email protected]} \affiliation{Beijing Computational Science Research Center, Beijing 100084, China} \begin{abstract} The canonical Robertson-Schr\"{o}dinger uncertainty relation provides a loose bound for the product of variances of two non-commuting observables. Recently, several tight forward and reverse uncertainty relations have been proved which go beyond the traditional uncertainty relations. Here we experimentally test a variety of state-dependent uncertainty relations for the product as well as the sum of variances of two incompatible observables for photonic qutrits. These uncertainty relations are independent of any optimization and are still tighter than the Robertson-Schr\"{o}dinger uncertainty relation and other bounds found in the current literature. For the first time, we also test the state-dependent reverse uncertainty relations for the sum of variances of two incompatible observables, which reveals another unique feature of preparation uncertainty in quantum mechanics. Our experimental results not only foster insight into a fundamental limitation on preparation uncertainty but may also contribute to the study of the upper limit of precision measurements for incompatible observables.
\end{abstract} \maketitle {\it Introduction:---} Uncertainty relations~\cite{H27,R29,WZ83,MP14,MBP17} are the hallmarks of quantum physics and have been widely investigated since their inception~\cite{ESS+12,SSE+13,SSD+15,PHC+11,RDM+12,WHP+13,RBB+14,KBOE14,WZB+16,XWZ+17,FWXX18,XFWX18}. These uncertainty relations impose a fundamental limitation on the possible preparation of quantum states for which two non-commuting observables can have sharp values---often referred to as ``preparation'' uncertainty relations. They can be used in bounds for metrology~\cite{BC94,BCM96}, the security of quantum cryptography~\cite{FP96,RB09}, and the detection of nonclassical correlations~\cite{FT03,G04,CR07,ZXH+19}. Thus, the discovery of new uncertainty relations~\cite{MP14,MBP17} with tighter bounds has important potential implications for quantum information processing. Uncertainty relations in terms of variances of incompatible observables are generally expressed in the product and the sum form. Both of these kinds of uncertainty relations express limitations on the possible preparations of the system by giving a lower bound to the product or sum of the variances of two observables. Most of the stronger uncertainty bounds~\cite{MP14,SQ16,YXWP15,XJJF16} depend on a state orthogonal to the state of the system. However, for higher dimensional systems, finding such an orthogonal state may be difficult. Recently, a couple of state-dependent uncertainty relations with optimization-free uncertainty bounds, in both the sum and the product forms, were derived by Mondal et al. in~\cite{MBP17}. These authors have also proved a state-dependent upper bound for the uncertainty relation which is tight. It is quite intriguing that the enshrined uncertainty relation due to Robertson and Schr\"{o}dinger is much weaker than the tight forward uncertainty relation proved in~\cite{MBP17}.
In this work, we report an experimental test of these new uncertainty relations with optimization-free bounds and reverse uncertainty relations for single-photon measurements and demonstrate that they are valid for states of a spin-$1$ particle. The experimental results we find agree well with the predictions of quantum theory and obey these new uncertainty relations. We realize a direct measurement model and give the first experimental investigation of the strengthened forward relations and the reverse uncertainty relation. Furthermore, in our experiment, every term can be obtained directly from the outcomes of the projective measurements. Our test realizes a direct measurement model which obviates the requirement of quantum state tomography~\cite{XWZ+17,LXX+11,PHC+11}. \begin{figure} \caption{Experimental setup for testing the uncertainty relations relating to both the product and the sum of variances of two incompatible observables $L_x$ and $L_y$ for a spin-$1$ particle with a state $|\Psi\rangle=(\cos\theta, -\sin\theta, 0)^\text{T}$.} \label{fig1} \end{figure} \begin{figure} \caption{Experimental results of the uncertainty relations (\ref{eq:1})-(\ref{eq:3}).} \label{fig2} \end{figure} \begin{figure} \caption{Experimental results of the uncertainty relations (\ref{eq:4})-(\ref{eq:6}).} \label{fig3} \end{figure} {\it Theoretical framework:---} Consider a quantum system that has been prepared in the state $|\Psi\rangle$ and we perform measurements of two incompatible observables $A$ and $B$. The resulting bound on the product of uncertainties can be expressed as \begin{eqnarray} \hspace{-.2cm}\Delta A^2\Delta B^2\geq\left|\frac{1}{2}\langle[A,B]\rangle\right| ^2+\left|\frac{1}{2}\langle \{A,B\}\rangle- \langle A\rangle\langle B\rangle\right|^2, \label{eq:1} \end{eqnarray} where the expected values $\langle O\rangle = \langle \Psi|O| \Psi \rangle$ and variances $\Delta O^2=\langle O^2\rangle-\langle O\rangle^2$ are defined over an arbitrary state $|\Psi \rangle$ of the system.
This is the so-called Robertson-Schr\"{o}dinger uncertainty relation~\cite{E36}. This uncertainty relation is well known; however, its bound is not optimal. In~\cite{MBP17}, an alternative uncertainty relation with a tighter bound is provided, \begin{eqnarray} \Delta A^{2}\Delta B^{2} \geqslant \Big(\sum_{n}|\langle\Psi|\overline{A}|\psi_{n}\rangle\langle\psi_{n}|\overline{B}|\Psi\rangle|\Big)^2, \label{eq:2} \end{eqnarray} where $A=\sum_{i}a_{i}|a_{i}\rangle\langle a_{i}|$ and $B=\sum_{i}b_{i}|b_{i}\rangle\langle b_{i}|$, $\overline{A}=(A-\langle A\rangle)=\sum_{i}\tilde{a}_i|a_i\rangle\langle a_i|$ and $\overline{B}=(B-\langle B\rangle)=\sum_{i}\tilde{b}_i|b_i\rangle\langle b_i|$, and $\{|\psi_{n}\rangle\}$ is an arbitrary complete basis. Though the bound of the new uncertainty relation is indeed tighter than that of the Robertson-Schr\"{o}dinger uncertainty relation, an optimization over different bases is required for the tightest bound. Furthermore, an optimization-free uncertainty relation for two incompatible observables is derived in~\cite{MBP17}, which is given by \begin{eqnarray} &\Delta A^2\Delta B^2\geq \Bigg(\sum_{i}\sqrt{F_{\Psi}^{a_{i}}}\sqrt{F_{\Psi}^{b_{i}}} \tilde{a}_{i}\tilde{b}_{i}\Bigg)^2, \label{eq:3} \end{eqnarray} where $F_{\Psi}^{x}=|\langle\Psi|x\rangle|^2$ is the fidelity between $|\Psi \rangle$ and $|x \rangle=|a_i\rangle,|b_i\rangle$. This uncertainty relation depends on the transition probability between the state of the system $|\Psi\rangle$ and the eigenstates of the observables $|a_i\rangle$ and $|b_i\rangle$. The incompatibility is captured here not by the noncommutativity, but rather by the nonorthogonality of the state of the system and the eigenstates of the observables. The bound of this uncertainty relation~(\ref{eq:3}) is tighter than the other bounds most of the time and even surpasses the bound given by (\ref{eq:2}) without any optimization.
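As a quick numerical illustration of the optimization-free bound (\ref{eq:3}), the following sketch (in Python, illustrative only and not part of the experimental procedure) evaluates both sides for the spin-$1$ observables $L_x$ and $L_y$ and one of the states used later in the text; pairing the eigenvalues in ascending order is an assumption of this illustration:

```python
import numpy as np

# Observables A = L_x, B = L_y for a spin-1 system, as defined later in the text
s = 1 / np.sqrt(2)
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]]) * s
B = np.array([[0, -1j, 0], [1j, 0, -1j], [0, 1j, 0]]) * s

theta = np.pi / 10  # one of the prepared states |Psi> = (cos t, -sin t, 0)^T
psi = np.array([np.cos(theta), -np.sin(theta), 0], dtype=complex)

def ev(O):
    # Expectation value <Psi|O|Psi> (real for Hermitian O)
    return np.real(psi.conj() @ O @ psi)

a, Va = np.linalg.eigh(A)            # eigenvalues ascending; columns are |a_i>
b, Vb = np.linalg.eigh(B)
Fa = np.abs(Va.conj().T @ psi) ** 2  # fidelities F_Psi^{a_i}
Fb = np.abs(Vb.conj().T @ psi) ** 2
bound = np.sum(np.sqrt(Fa * Fb) * (a - ev(A)) * (b - ev(B))) ** 2

lhs = (ev(A @ A) - ev(A) ** 2) * (ev(B @ B) - ev(B) ** 2)
assert lhs >= bound  # the optimization-free bound holds for this state
```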
To fully capture the concept of incompatible observables, an uncertainty relation for the sum of variances of two incompatible observables was derived in~\cite{MP14}, \begin{eqnarray} \Delta A^2+\Delta B^2 &\geq &\frac{1}{2}\Delta (A+B)^2. \label{eq:4} \end{eqnarray} An optimization over a set of states $|\psi_n\rangle$ provides an even tighter bound, \begin{eqnarray} \Delta A^2+\Delta B^2&\geq & \frac{1}{2}\sum_{n}\Big(|\langle\psi_{n}|\overline{A}|\Psi\rangle|+|\langle\psi_{n}|\overline{B}|\Psi\rangle|\Big)^2. \label{eq:5} \end{eqnarray} Furthermore, an uncertainty relation for the sum of variances of two incompatible observables with an optimization-free bound is also derived in~\cite{MBP17}, \begin{eqnarray} \Delta A^2+\Delta B^2 &\geq &\frac{1}{2}\sum_{i}\Big(\tilde{a}_{i}\sqrt{F_{\Psi}^{a_{i}}}+ \tilde{b}_i\sqrt{F_{\Psi}^{b_{i}}}\Big)^2. \label{eq:6} \end{eqnarray} The standard preparation uncertainty relations---the forward uncertainty relations---provide a lower bound to the product or the sum of variances. However, quantum theory also restricts how large the variances can be. The upper bound to the sum of variances of two incompatible observables is expressed by the reverse uncertainty relation~\cite{MBP17}, \begin{equation} \Delta A^2+ \Delta B^2\leq \frac{2\Delta (A-B)^2}{\Big [1-\frac{\text{Cov}(A,B)}{\Delta A\Delta B}\Big ]}-2\Delta A\Delta B, \label{eq:7} \end{equation} where $\text{Cov}(A,B)=\frac{1}{2}\langle\{ A,B\}\rangle-\langle A\rangle\langle B\rangle$ is the quantum covariance of the operators $A$ and $B$. Thus, for two incompatible observables, the forward and the reverse uncertainty relations set a fundamental zone within which the quantum fluctuations are confined, i.e., the intrinsic uncertainties can be neither too small nor too large. \begin{figure} \caption{Experimental results of the reverse uncertainty relation.
Theoretical predictions of the LHS and RHS of the inequality (\ref{eq:7}).} \label{fig4} \end{figure} To test the uncertainty relations (\ref{eq:1})-(\ref{eq:6}) and the reverse uncertainty relation (\ref{eq:7}), we choose two components of the angular momentum of a spin-$1$ particle as the two observables: \begin{equation} L_x= \frac{1}{\sqrt{2}}\begin{pmatrix} 0 & 1 & 0 \\ 1 & 0 & 1 \\ 0 & 1 & 0 \\ \end{pmatrix}, L_y= \frac{1}{\sqrt{2}}\begin{pmatrix} 0 & -i & 0 \\ i & 0 & -i \\ 0 & i & 0 \\ \end{pmatrix}. \end{equation} For convenience, we also define an observable $L'=L_xL_y+L_yL_x=\begin{pmatrix} 0 & 0 & -i \\ 0 & 0 & 0 \\ i & 0 & 0 \\ \end{pmatrix}$, and the other component of the angular momentum $L_z=-i\left[L_x,L_y\right]=\begin{pmatrix} 1 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & -1 \\ \end{pmatrix}$ is used. Then the inequalities can be rewritten so that both their left- and right-hand sides can be measured directly. The inequality (\ref{eq:1}) can be rewritten as \begin{equation*} \Delta L_x^2\Delta L_y^2\geq\left|\frac{1}{2}\langle L_z\rangle\right| ^2+\left|\frac{1}{2}\langle L' \rangle- \langle L_x\rangle\langle L_y\rangle\right|^2. \end{equation*} All the left- and right-hand sides of the inequalities are expected values of the observables $L_i$ ($i=x,y,z$) and $L'$ and can be read out from the experimental results. The inequality (\ref{eq:2}) can be rewritten as \begin{equation*} \Delta L_x^2\Delta L_y^2\geq\left(\sum_{n} |C_n-D_n \langle L_x\rangle| \right)^2, \end{equation*} where $C_n=\text{Tr}\left(\rho L_x |\psi_n\rangle \langle \psi_n|L_y\right)$, $D_n=\text{Tr}\left(\rho |\psi_n\rangle \langle \psi_n|L_y\right)$, $\rho=|\Psi\rangle\langle\Psi|$, and $|\psi_1\rangle=(1,0,0)^\text{T}$, $|\psi_2\rangle=(0,1,0)^\text{T}$ and $|\psi_3\rangle=(0,0,1)^\text{T}$ are mutually orthogonal basis vectors for ${\cal H}^3$.
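The rewritten form of (\ref{eq:2}) can likewise be checked numerically; a minimal sketch (Python, illustrative only), using one of the prepared states, for which $\langle L_y\rangle=0$:

```python
import numpy as np

s = 1 / np.sqrt(2)
Lx = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]]) * s
Ly = np.array([[0, -1j, 0], [1j, 0, -1j], [0, 1j, 0]]) * s

theta = np.pi / 10
psi = np.array([np.cos(theta), -np.sin(theta), 0], dtype=complex)
rho = np.outer(psi, psi.conj())
ex = np.real(psi.conj() @ Lx @ psi)  # <L_x>

# Sum over the computational basis |psi_n> of |C_n - D_n <L_x>|
total = 0.0
for n in range(3):
    proj = np.zeros((3, 3)); proj[n, n] = 1.0  # |psi_n><psi_n|
    C = np.trace(rho @ Lx @ proj @ Ly)
    D = np.trace(rho @ proj @ Ly)
    total += abs(C - D * ex)

def var(O):
    m = np.real(psi.conj() @ O @ psi)
    return np.real(psi.conj() @ O @ O @ psi) - m ** 2

assert var(Lx) * var(Ly) >= total ** 2  # rewritten bound (2) holds
```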
The inequality (\ref{eq:3}) can be rewritten as \begin{equation*} \Delta L_x^2\Delta L_y^2 \geq \Big(\sum_{i}\tilde{a}_{i} \tilde{b}_i \sqrt{F_{\Psi}^{a_{i}}} \sqrt{F_{\Psi}^{b_{i}}}\Big)^2, \end{equation*} where $F_{\Psi}^{a_i(b_i)}=\left|\langle \Psi|a_i(b_i)\rangle\right|^2$, $|a_i(b_i)\rangle$ indicates the eigenstate of $L_x(L_y)$ with the eigenvalue $a_i(b_i)=-1,1,0$, and $\tilde{a}_{i}(\tilde{b}_{i})=a_i(b_i)-\langle L_x(L_y) \rangle$. The inequality (\ref{eq:4}) can be rewritten as \begin{align*} \Delta L_x^2+\Delta L_y^2\geq \frac{1}{2}\Big[\langle L_x^2 \rangle + \langle L_y^2 \rangle+\langle L' \rangle-\Big(\langle L_x \rangle+\langle L_y \rangle\Big)^2\Big]. \end{align*} The inequality (\ref{eq:5}) can be rewritten as \begin{equation*} \Delta L_x^2+\Delta L_y^2\geq \frac{1}{2}\sum_{n} \left(|E_n-F_n \langle L_x\rangle|+|G_n| \right)^2, \end{equation*} where the coefficients are $E_n=\langle \psi_n |L_x|\Psi \rangle$, $F_n= \langle \psi_n |\Psi\rangle$ and $G_n=\langle \psi_n |L_y|\Psi \rangle$. The inequality (\ref{eq:6}) can be rewritten as \begin{equation*} \Delta L_x^2+\Delta L_y^2 \geq \frac{1}{2}\sum_{i}\Big(\tilde{a}_{i}\sqrt{F_{\Psi}^{a_{i}}}+ \tilde{b}_i\sqrt{F_{\Psi}^{b_{i}}}\Big)^2, \end{equation*} where the coefficients $F_\Psi^{a_i(b_i)}$, $\tilde{a}_i(\tilde{b}_i)$ and $a_i(b_i)$ are defined as in the rewritten inequality (\ref{eq:3}). The reverse uncertainty relation (\ref{eq:7}) can be rewritten as \begin{equation*} \Delta L_x^2+ \Delta L_y^2\leq \frac{2\Delta (L_x-L_y)^2}{\Big [1-\frac{\langle L' \rangle/2- \langle L_x \rangle \langle L_y \rangle}{\Delta L_x\Delta L_y}\Big ]}-2\Delta L_x \Delta L_y, \end{equation*} where $\Delta (L_x-L_y)^2=\langle L_x^2 \rangle + \langle L_y^2 \rangle - \langle L' \rangle-(\langle L_x \rangle-\langle L_y \rangle)^2$.
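Both the rewritten Robertson-Schr\"{o}dinger relation and the rewritten reverse relation can be verified numerically from these expressions; a short sketch (Python, illustrative only, for one of the prepared states):

```python
import numpy as np

s = 1 / np.sqrt(2)
Lx = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]]) * s
Ly = np.array([[0, -1j, 0], [1j, 0, -1j], [0, 1j, 0]]) * s
Lz = np.diag([1.0, 0.0, -1.0])
Lp = Lx @ Ly + Ly @ Lx  # the observable L'

theta = np.pi / 10
psi = np.array([np.cos(theta), -np.sin(theta), 0], dtype=complex)
ev = lambda O: np.real(psi.conj() @ O @ psi)   # <Psi|O|Psi>
var = lambda O: ev(O @ O) - ev(O) ** 2

# Rewritten Robertson-Schrodinger bound, inequality (1)
rs = (0.5 * ev(Lz)) ** 2 + (0.5 * ev(Lp) - ev(Lx) * ev(Ly)) ** 2
assert var(Lx) * var(Ly) >= rs

# Rewritten reverse relation, inequality (7)
dx, dy = np.sqrt(var(Lx)), np.sqrt(var(Ly))
cov = 0.5 * ev(Lp) - ev(Lx) * ev(Ly)
var_diff = ev(Lx @ Lx) + ev(Ly @ Ly) - ev(Lp) - (ev(Lx) - ev(Ly)) ** 2
upper = 2 * var_diff / (1 - cov / (dx * dy)) - 2 * dx * dy
assert var(Lx) + var(Ly) <= upper
```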
Thus, all terms on both the left- and right-hand sides of the uncertainty relations and the reverse one are related to expected values of the observables $L_i$ ($i=x,y,z$) and $L'$ and can be read out directly from the outcomes of the projective measurements. {\it Experimental investigation:---}We report the experimental test of these uncertainty relations (\ref{eq:1})-(\ref{eq:6}) and the reverse uncertainty relation (\ref{eq:7}) for single-photon measurements. The experimental setup shown in Fig.~\ref{fig1} involves two stages: state preparation and projective measurement. In our experiment, a photonic qutrit is encoded in three modes and the basis states are $\{|\psi_1\rangle,| \psi_2 \rangle, | \psi_3 \rangle\}=\{(1,0,0)^\text{T},(0,1,0)^\text{T},(0,0,1)^\text{T}\}$, which indicate the states of the horizontally polarized photons in the upper spatial mode, the vertically polarized photons in the upper spatial mode, and the vertically polarized photons in the lower spatial mode, respectively. Polarization-degenerate photon pairs are produced in a type-I spontaneous parametric down-conversion (SPDC) process with the help of an interference filter which restricts the photon bandwidth to $3$nm. This trigger-herald pair is registered by a coincidence count at two single-photon avalanche photodiodes (APDs) with a $3$ns time window. Total coincidence counts are about $10^4$. In the state preparation stage, the heralded single photons pass through a polarizing beam splitter (PBS) and are split into two parallel spatial modes---upper and lower---by a beam displacer (BD) whose optical axis is cut so that vertically polarized photons are directly transmitted and horizontally polarized photons undergo a lateral displacement into a neighboring mode. Then, two half-wave plates (HWPs), H with a certain setting angle $\theta/2$ and H$_0$ at $0$, are applied to the photons in the upper mode.
The matrix form of the operation of a HWP with the setting angle $\vartheta$ is $\begin{pmatrix} \cos 2\vartheta & \sin 2\vartheta \\ \sin 2\vartheta & -\cos 2\vartheta \\ \end{pmatrix}$. We prepare a family of qutrit states $|\Psi\rangle=(\cos\theta, -\sin\theta, 0)^\text{T}$ as the system state, where $\theta=j\pi/10$ ($j=0,\cdots,10$). Thus, $11$ states in total are chosen for testing the uncertainty relations (\ref{eq:1})-(\ref{eq:6}) and the reverse one (\ref{eq:7}). These inequalities can be rewritten so that they depend only on the expected values of the observables $L_i$ ($i=x,y,z$) and $L'$, which we measure as follows. An arbitrary observable $M$ can be written as $M=\sum_{i} m_i |m_i\rangle \langle m_i|$, where $|m_i\rangle$ is an eigenstate of $M$ and $m_i$ is the corresponding eigenvalue. The expected value of $M$ is \begin{align} \langle M \rangle &= \langle \Psi| M | \Psi\rangle=\sum_{i} m_i \langle \Psi| m_i \rangle \langle m_i | \Psi \rangle\\ \nonumber &=\sum_{i} m_i \left| \langle \Psi| m_i \rangle \right|^2. \end{align} A unitary operator is defined as $U=\sum_i|i\rangle \langle m_i|$ and is applied to the system in the initial state $|\Psi\rangle$, which is then projected into the basis state $|i\rangle$ ($i=0,1,2$). The overlap $|\langle \Psi|m_i\rangle|^2=\text{Tr}\left(|\Psi\rangle\langle \Psi| U^\dagger |i\rangle\langle i| U\right)$ equals the probability $P_i$ of the photons being measured in the basis state $|i\rangle$.
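The direct measurement scheme described above amounts to $\langle M\rangle=\sum_i m_i P_i$; a minimal numerical sketch (Python, illustrative only) for $M=L_x$:

```python
import numpy as np

s = 1 / np.sqrt(2)
Lx = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]]) * s
theta = np.pi / 10
psi = np.array([np.cos(theta), -np.sin(theta), 0.0])

# Eigendecomposition L_x = sum_i m_i |m_i><m_i|
m, V = np.linalg.eigh(Lx)   # columns of V are the eigenstates |m_i>
U = V.T                     # U = sum_i |i><m_i| maps |m_i> to the basis state |i>
P = np.abs(U @ psi) ** 2    # outcome probabilities P_i = |<Psi|m_i>|^2

direct = psi @ Lx @ psi     # <Psi|L_x|Psi>
assert np.isclose(direct, m @ P)  # <L_x> recovered from projective outcomes
```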
For example, corresponding to the observable $L_x$, the unitary operator is defined \begin{equation} U=\begin{pmatrix} \frac{1}{2} & -\frac{1}{\sqrt{2}} & \frac{1}{2} \\ \frac{1}{2} & \frac{1}{\sqrt{2}} & \frac{1}{2} \\ -\frac{1}{\sqrt{2}} & 0 & \frac{1}{\sqrt{2}} \\ \end{pmatrix}, \end{equation} which can be further decomposed into \begin{align} U&=U_3 U_2 U_1\\ & =\begin{pmatrix} \frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} & 0 \\ -\frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} & 0 \\ 0 & 0 & 1\\ \end{pmatrix} \begin{pmatrix} -1 & 0 & 0 \\ 0 & \frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} \\ 0 & -\frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} \\ \end{pmatrix} \begin{pmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \\ \end{pmatrix}.\nonumber \end{align} Thus, in the measurement stage, the three unitary operators $U_{1,2,3}$ can be realized via the optical circuit in Fig.~\ref{fig1}. Each of them applies a rotation to just two of the basis states, leaving the other unchanged. For example, $U_1$ is realized by a HWP (H$_1$ at $45^\circ$) acting on the photons in the upper mode, which exchanges the polarizations of the photons in the upper mode and leaves the state of the photons in the lower mode unchanged. Similarly, $U_3$ is realized via a quarter-wave plate (QWP, Q) and a HWP (H$_4$) acting on the photons in the upper mode, which implement a rotation of the polarization states in the upper mode while leaving the lower mode unchanged. $U_2$ is realized by two BDs and two HWPs (H$_2$ and H$_3$): the BDs perform the basis transformation and the HWPs apply a rotation to the polarization states. The last BD projects the photons onto the basis states $|i\rangle$ ($i=0,1,2$). The probability of the photons being measured in $|i\rangle$ is obtained by normalizing the photon counts in the corresponding spatial mode by the total photon counts. The setting angles of the wave plates are given in Table~\ref{table1}.
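A quick numerical sanity check (a Python sketch, not part of the experimental procedure; the spin-1 matrix $L_x$ below is the standard one) confirms that $U$ is unitary, that it maps the $L_x$ eigenbasis onto the computational basis, and that the middle factor implied by the decomposition $U=U_3U_2U_1$, namely $U_2=U_3^\dagger U U_1^\dagger$, is itself unitary:

```python
import numpy as np

s = 1 / np.sqrt(2)
# Measurement unitary U for L_x; its rows are the conjugated eigenvectors <m_i|
U = np.array([[0.5, -s, 0.5],
              [0.5,  s, 0.5],
              [ -s, 0.0,  s]])
# Two-level factors realized by the wave plates
U1 = np.array([[0., 1, 0], [1, 0, 0], [0, 0, 1]])   # swap of modes 1 and 2
U3 = np.array([[s, s, 0], [-s, s, 0], [0, 0, 1]])   # rotation on modes 1 and 2

# Standard spin-1 L_x matrix (an assumption of this sketch)
Lx = np.array([[0., 1, 0], [1, 0, 1], [0, 1, 0]]) * s

assert np.allclose(U @ U.T, np.eye(3))                  # U is unitary (real here)
assert np.allclose(U @ Lx @ U.T, np.diag([-1., 1, 0]))  # U diagonalizes L_x

# The middle factor implied by U = U3 U2 U1 must itself be unitary; it acts
# nontrivially only on modes 2 and 3, up to a sign on mode 1
U2 = U3.T @ U @ U1.T
assert np.allclose(U2 @ U2.T, np.eye(3))
assert np.allclose(U3 @ U2 @ U1, U)
```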
\begin{table}[h] \caption{The setting angles of the wave plates for the projective measurement of the observables $L_x$, $L_y$, $L_z$ and $L'$. Here ``$-$'' denotes that the corresponding wave plate is removed from the optical circuit.} \begin{tabular}{c||c|c|c|c|c} \hline Observable & H$_{1}$ & H$_{2}$ & H$_{3}$ & H$_{4}$ & Q\\ \hline $L_x$ & $45.00^{\circ}$ & $22.50^{\circ}$ & $-45.00^{\circ}$ & $22.50^{\circ}$ & $-$\\ \hline $L_y$ & $-45.00^{\circ}$ & $22.50^{\circ}$ & $45.00^{\circ}$ & $22.50^{\circ}$ & $0$\\ \hline $L_z$ & $-$ & $-$ & $-$ & $-$ & $-$\\ \hline $L'$ & $0$ & $0$ & $-45.00^{\circ}$ & $22.50^{\circ}$ & $0$\\ \hline \end{tabular} \label{table1} \end{table} {\it Experimental results:---}In Fig.~\ref{fig2}, we show the direct demonstration of the uncertainty relations (\ref{eq:1})-(\ref{eq:3}) for the product of variances of two incompatible observables for photonic qutrits. The bound given by (\ref{eq:2}) is one of the tightest, but achieving it requires optimization. The bound given by (\ref{eq:3}), in contrast, is tighter than the other bounds most of the time and even surpasses the optimized bound, yet it requires no optimization. Both bounds in (\ref{eq:2}) and (\ref{eq:3}) are tighter than that given by the Schr\"{o}dinger uncertainty relation (\ref{eq:1}). As shown in Fig.~\ref{fig3}, the bound obtained in (\ref{eq:6}) is one of the tightest optimization-free bounds, whereas the bound given by (\ref{eq:5}) requires optimization over the states perpendicular to the chosen state of the system and is surpassed for only a few states of the system. Both bounds in (\ref{eq:5}) and (\ref{eq:6}) are tighter than that given by (\ref{eq:4}). Our experimental data fit the theoretical predictions and satisfy the uncertainty relations for both the product and the sum of variances of two incompatible observables. In Fig.~\ref{fig4}, we show the experimental demonstration of the reverse uncertainty relation (\ref{eq:7}).
For some states, the inequality (\ref{eq:7}) becomes an equality, which means the reverse uncertainty relation is tight. For the system-state angles $\theta=0,\pi/2,\pi$, the experimental results for the LHS and RHS of (\ref{eq:7}) are $\{0.99275\pm0.01985,0.99981\pm 0.01038\}$, $\{1.99614\pm0.01197,1.99622\pm0.01214\}$ and $\{0.99988\pm0.01028,1.00343\pm0.01967\}$, respectively, in close agreement with the theoretical predictions $1,2,1$. {\it Conclusion:---}Uncertainty relations are among the most fundamental relations in quantum theory. A correct understanding and experimental confirmation of uncertainty relations will not only foster insight into foundational problems but also advance precision measurement technology in quantum information processing. In this work, we have experimentally tested several forward as well as reverse state-dependent uncertainty relations for the product and the sum of variances of two incompatible observables for photonic qutrits. These uncertainty relations are independent of any optimization and still tighter than the Robertson-Schr\"{o}dinger uncertainty relation and others in the current literature. We have also tested, for the first time, the state-dependent reverse uncertainty relations for the sum of variances of two incompatible observables, which reveals another unique feature of quantum mechanics. The experimental test of the forward and reverse uncertainty relations vividly demonstrates that quantum fluctuations are bounded from both below and above owing to the incompatible nature of the observables. The success of our experiment relies on a stable interferometric network built from simple linear optical elements. In principle, both sides of these inequalities could be calculated from density matrices of the system states reconstructed by quantum state tomography.
In our experiment, however, every term of these inequalities is obtained directly from the outcomes of the projective measurements, and the experimental results are in excellent agreement with the theoretical predictions. Our test realizes a direct measurement model, which greatly simplifies the experimental realization and removes the need for quantum state tomography. Our experimental results not only provide deep insight into the fundamental limitations of measurements but may also contribute to the study of upper time limits on quantum evolution in the future. \begin{acknowledgments} This work has been supported by the National Natural Science Foundation of China (Grant Nos. 11674056 and U1930402), the startup funding of Beijing Computational Science Research Center, and Postgraduate Research \& Practice Innovation Program of Jiangsu Province (Grant No. KYCX18\_0056). \end{acknowledgments} \end{document}
\begin{document} \title{Superfast maximum likelihood reconstruction for quantum tomography} \author{Jiangwei \surname{Shang}} \email{Current address: Naturwissenschaftlich-Technische Fakult{\"a}t, Universit{\"a}t Siegen, Walter-Flex-Stra{\ss}e 3, 57068 Siegen, Germany.} \affiliation{Centre for Quantum Technologies, National University of Singapore, Singapore 117543, Singapore} \author{Zhengyun \surname{Zhang}} \email{Corresponding email: [email protected]} \affiliation{BioSyM IRG, Singapore-MIT Alliance for Research and Technology (SMART) Centre, Singapore 138602, Singapore} \author{Hui Khoon \surname{Ng}} \affiliation{Centre for Quantum Technologies, National University of Singapore, Singapore 117543, Singapore} \affiliation{Yale-NUS College, Singapore 138527, Singapore} \affiliation{MajuLab, CNRS-UNS-NUS-NTU International Joint Research Unit, UMI 3654, Singapore} \pacs{03.65.Wj, 03.67.-a, 02.60.Pn} \date[]{Posted on the arXiv on March 9, 2017} \begin{abstract} Conventional methods for computing maximum-likelihood estimators (MLE) often converge slowly in practical situations, leading to a search for simplifying methods that rely on additional assumptions for their validity. In this work, we provide a fast and reliable algorithm for maximum likelihood reconstruction that avoids this slow convergence. Our method utilizes the state-of-the-art convex optimization scheme---an accelerated projected-gradient method---that allows one to accommodate the quantum nature of the problem in a different way than in the standard methods. We demonstrate the power of our approach by comparing its performance with other algorithms for $n$-qubit state tomography. In particular, an 8-qubit situation that purportedly took weeks of computation time in 2005 can now be completed in under a minute for a single set of data, with far higher accuracy than previously possible. 
This refutes the common claim that MLE reconstruction is slow, and reduces the need for alternative methods that often come with difficult-to-verify assumptions. In fact, recent methods assuming Gaussian statistics or relying on compressed sensing ideas are demonstrably inapplicable for the situation under consideration here. Our algorithm can be applied to general optimization problems over the quantum state space; the philosophy of projected gradients can further be utilized for optimization contexts with general constraints. \end{abstract} \maketitle \textit{Introduction.---} Efficient and reliable characterization of properties of a quantum system, e.g., its state or the process it is undergoing, is needed for any quantum information processing task. Such are the goals of quantum tomography \cite{LNP649}, broadly classified into state tomography and process tomography. Process tomography can be recast as state tomography via the Choi-Jamiolkowski isomorphism \cite{QPT1,QPT2}; we hence restrict our attention to state tomography. Tomography is a two-step process: the first is data gathering from measurements of the quantum system; the second is the estimation of the state from the gathered data. This second step is the focus of this article. A popular estimation strategy is that of the maximum-likelihood estimator (MLE) \cite{MLEreview} from standard statistics. Computing the MLE for quantum tomography is, however, not straightforward due to the constraints imposed by quantum mechanics. While general-purpose convex optimization toolboxes (e.g., CVX \cite{CVX1,CVX2}) are available for small-sized problems, specially adapted MLE algorithms are needed for tackling useful system sizes. Past MLE algorithms \cite{DilML,TeoYS:thesis} incorporate the quantum constraints by going to the \emph{factored space} (see definition later) where the constraints are satisfied by an appropriate parameterization. 
Gradient methods are then straightforwardly employed in the now-unconstrained factored space. These algorithms can be slow in practice, with an extreme example \cite{8qubit} of an 8-qubit situation purportedly (see Refs.~\cite{CompSens,GaussianNoise,QiLRE,14qubit}) requiring \emph{weeks} of computation time to find the MLE, with bootstrapped error bars (10 MLE reconstructions in all), for the measured data \cite{RoosGuhne}. This triggered a search for alternatives to the MLE strategy \cite{CompSens,GaussianNoise,QiLRE,14qubit,MatrixProd}, specializing to circumstances where certain assumptions about the system apply, permitting simpler and faster reconstruction. Yet, the MLE approach provides a principled strategy, requiring no extraneous assumptions that may or may not be applicable, and is still one of the most popular methods for experimenters. The MLE gives a justifiable point estimate for the state \cite{Hradil}. It is the natural starting point for quantifying the uncertainty in the estimate: One can bootstrap the measured data \cite{Efron} and quantify the scatter in the MLEs for simulated data; confidence regions can be established starting from the MLE point estimator; credible regions for the actual data are such that the MLE is the unique state contained in every error region \cite{OER}. It is thus worthwhile to pursue better methods for finding the MLE. Here, we present a fast algorithm to accurately compute the MLE from tomographic data. The computation of the MLE for a single set of data for the 8-qubit situation mentioned above now takes less than a minute, and can return a far more accurate answer than previous algorithms. 
The speedup and accuracy originate from two features introduced here: (i) the ``CG-APG" algorithm that combines an accelerated projected-gradient (APG) approach, which overcomes convergence issues of previous methods, with the existing conjugate-gradient (CG) algorithm; (ii) the use of the product structure (if present) of the tomographic situation to speed up each iterative step. The CG-APG algorithm gives faster and more accurate reconstruction whether or not the product structure is present; the product structure, if present, can also speed up previous MLE algorithms. \textit{The problem setup}.--- In quantum tomography, $N$ independently and identically prepared copies of the quantum state are measured one-by-one via a set of measurement outcomes $\{\Pi_k\}_{k=1}^K$, with ${\Pi_k\ge 0}$ $\forall k$ and ${\sum_{k=1}^K\Pi_k={\bf 1}}$. $\{\Pi_k\}_{k=1}^K$ is known as a POVM (positive operator-valued measure) or a POM (probability-operator measurement). The measured data $D$ consist of a sequence of detection events $\{e_1,e_2,\ldots, e_N\}$, where $e_\alpha=k$ indicates the click of the detector for outcome $\Pi_k$, for the $\alpha$th copy measured. The likelihood for $D$ given state $\rho$ (the density matrix) is \begin{equation} L(D|\rho)=\prod_{k} p_k^{n_k}={\left\{\prod_{k}{\left[\mathrm{tr}(\rho\Pi_k)\right]}^{f_k}\right\}}^N\,, \end{equation} where $p_k=\mathrm{tr}(\rho\Pi_k)$ is the probability for outcome $\Pi_k$, $n_k$ is the total number of clicks in detector $k$, and $f_k=n_k/N$ is the relative frequency. The MLE strategy views the likelihood as a function of $\rho$ for the obtained $D$, and identifies the quantum state $\rho$ that maximizes $L(D|\rho)$ as the best guess---the MLE. 
This can be phrased as an optimization problem for the normalized negative log-likelihood, $F(\rho) \equiv-\frac{1}{N}\log L(D|\rho)$: \begin{subequations} \begin{eqnarray} \underset{\rho\in\mathcal{B}(\mathcal{H})}{\textrm{minimize}}&\quad&F(\rho) =-\sum_{k=1}^K\nolimits f_k\log\bigl(\mathrm{tr}(\rho\Pi_k)\bigr)\,,\label{optimization}\\ \text{subject to}&\quad&\rho\ge 0\quad\textrm{and}\quad \mathrm{tr}(\rho)=1\,.\label{eq:constr} \end{eqnarray} \end{subequations} The domain is the space of bounded operators $\mathcal{B}(\mathcal{H})$ on the $d$-dimensional Hilbert space $\mathcal{H}$. We refer to \eqref{eq:constr} as the quantum constraints. Any $\rho\in\mathcal{B}(\mathcal{H})$ satisfying \eqref{eq:constr} is a valid state; the convex set of all valid states is the quantum state space. $F$ is convex on the quantum state space and hence attains a unique minimum value there. $F(\rho)$ is differentiable (except on sets of measure zero) with gradient $\nabla F(\rho) = -\sum_{k=1}^K\nolimits \Pi_k\,f_k/p_k\equiv -R$, so that $\delta F(\rho)\equiv F(\rho+\delta\rho)-F(\rho)=\mathrm{tr}(\delta \rho\,\nabla F)=-\mathrm{tr}(\delta \rho\,R)$ for infinitesimal unconstrained $\delta\rho$. \textit{The problem of slow convergence}.--- Previous MLE algorithms \cite{DilML,TeoYS:thesis} converge slowly because of the ``by-construction'' incorporation of the quantum constraints \eqref{eq:constr}: One writes $\rho=A^\dagger A/\mathrm{tr}(A^\dagger A)$ for $A\in\mathcal{B}(\mathcal{H})$, and performs gradient descent in the \emph{factored space} of unconstrained $A$ operators, for $\widetilde F(A)\equiv F\bigl(\rho=A^\dagger A/\mathrm{tr}(A^\dagger A)\bigr)$. Straightforward algebra yields \begin{equation} \delta\widetilde F(A)=-\mathrm{tr}{\left(\delta A\frac{(R-1)A^\dagger}{\mathrm{tr}(A^\dagger A)}+h.c.\right)} \end{equation} to linear order in $\delta A$. $\delta\widetilde F(A)$ is negative---hence walking downhill---for $\delta A=\epsilon A(R-1)$, for a small $\epsilon$.
This choice of $\delta A$ prescribes a $\rho$-update of the form \begin{equation}\label{eq:facDescent} \rho_i\!\rightarrow \!\rho_{i+1}\!=\!\rho_i+\delta\rho_i, \textrm{ with } \delta\rho_i\!=\!\epsilon[(R-1)\rho_i+\rho_i(R-1)] \end{equation} to linear order in $\epsilon$. $\delta\rho_i$ comprises two terms, each with $\rho_i$ as a factor. When the MLE is close to the boundary of the state space---a typical situation when there are limited data (unavoidable in high dimensions) for nearly pure true states---$\rho_i$ eventually gets close to a rank-deficient state with at least one small eigenvalue. Yet, $\rho_i$ has unit trace, so its spectrum is highly asymmetric. $\delta\rho_i$ inherits this asymmetry, leading to a locally ill-conditioned problem and slow convergence. \begin{figure} \caption{\label{fig:8qubit}The negative log-likelihood $F$ along the trajectories taken by the different algorithms, from the maximally mixed state to the MLE, for the 8-qubit experimental data of Ref.~\cite{8qubit}.} \end{figure} To illustrate, consider the situation of Ref.~\cite{8qubit}: tomography of a (target) $8$-qubit $W$-state via product-Pauli measurements. Figure \ref{fig:8qubit} shows the trajectories taken by different algorithms from the maximally mixed state to the MLE---the minimum of $F$---for the experimental data of Ref.~\cite{8qubit}. The red and blue lines are for commonly used MLE methods: the diluted direct-gradient (DG) algorithm \cite{DilML} and the CG algorithm with step-size optimization via line search \cite{TeoYS:thesis}. Both algorithms walk in the factored space, with DG performing straightforward descent according to Eq.~\eqref{eq:facDescent}, while CG follows the conjugate-gradient direction. The DG and CG iterations initially decrease $F$ quickly, but the advances soon stall, with $F$ stagnating at values significantly larger than those attainable by APG and CG-APG (explained below). On average, the CG-APG and DG algorithms take comparable time per iterative step; see Appendix~\ref{App0} for Fig.~\ref{fig:8qubit} plotted against time rather than steps.
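For concreteness, the objective $F$ and its gradient $-R$ from the problem setup take only a few lines of code. The Python sketch below (a single-qubit toy with invented frequencies, not the 8-qubit experiment) includes a finite-difference check of $\delta F=-\mathrm{tr}(\delta\rho\,R)$:

```python
import numpy as np

# Single-qubit Pauli POM: six outcomes (1 ± sigma_a)/6, a = x, y, z
# (an invented toy setup, not the 8-qubit experiment)
sx = np.array([[0, 1], [1, 0]], complex)
sy = np.array([[0, -1j], [1j, 0]], complex)
sz = np.array([[1, 0], [0, -1]], complex)
povm = [(np.eye(2) + g * s) / 6 for s in (sx, sy, sz) for g in (1, -1)]
assert np.allclose(sum(povm), np.eye(2))            # outcomes sum to identity

rng = np.random.default_rng(0)
f = rng.dirichlet(np.ones(len(povm)))               # mock relative frequencies f_k

def F(rho):                                         # normalized neg. log-likelihood
    p = np.array([np.trace(rho @ Pi).real for Pi in povm])
    return -np.sum(f * np.log(p))

def R(rho):                                         # gradient of F is -R
    p = np.array([np.trace(rho @ Pi).real for Pi in povm])
    return sum(fk / pk * Pi for fk, pk, Pi in zip(f, p, povm))

# Finite-difference check of dF = -tr(drho R) for a small Hermitian step
rho = np.eye(2) / 2
drho = 1e-6 * np.array([[1, 1 - 2j], [1 + 2j, -1]])
assert abs((F(rho + drho) - F(rho)) + np.trace(drho @ R(rho)).real) < 1e-9
```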
Note that the methods of Refs.~\cite{GaussianNoise, QiLRE,14qubit} are inapplicable here. Those methods assume Gaussian statistics, valid only when every measurement outcome receives many clicks. For the 8-qubit dataset above, 82\% of the outcomes had \emph{zero} counts. This is typical of high-dimensional experiments with limited data. Near-zero counts are also unavoidable for near-rank-deficient states. The compressed-sensing approach \cite{CompSens}, which assumes a low-rank true state, is also a poor choice; see Appendix~\ref{R8qb}. \textit{The CG-APG algorithm.}--- The slowdown in convergence for DG and CG puts a severe limit on the accuracy of the MLE reconstruction: The analysis of \cite{8qubit} stopped---after a long wait \cite{RoosGuhne}---at a state with likelihood $L\simeq0.1\%L_{\max}$. That was sufficient for \cite{8qubit} to show the establishment of entanglement, but is hardly useful for further MLE analysis. The slowdown in DG and CG can be avoided by walking in the $\rho$-space. There, $F(\rho)$ has gradient $-R$ which, unlike that of $\widetilde F(A)$, is not proportional to $\rho$. Walking in the $\rho$-space, however, does not ensure satisfaction of constraints \eqref{eq:constr}. They are instead enforced by projecting $\rho$ back into the quantum state space after each unconstrained gradient step. This is an example of ``projected-gradient" methods in numerical optimization \cite{Goldstein,Levitin,Bruck,Passty,Nesterov}. In steepest-descent methods, the local condition number of the merit function [$F(\rho)$ or $\widetilde F(A)$] affects convergence. The condition number measures the asymmetry of the response of the function to changes in the input along different directions. Poor conditioning (i.e., more asymmetric) leads to a steepest-descent direction that oscillates. One smooths out the approach to the minimum by giving each step some ``momentum" from the previous step. 
CG implements this for quadratic merit functions; for projected gradients, accelerated gradient schemes \cite{FISTA} are the state of the art. Coupled with adaptive restart \cite{AdaptRes}, the APG method indirectly probes the local condition number by gradually increasing the momentum preserved (controlled by $\theta$ in the algorithm below), and resetting ($\theta=1$) whenever the momentum causes the current step to point too far from the steepest-descent direction. The APG algorithm of Refs.~\cite{FISTA,TFOCS,AdaptRes}, in $\rho$-space, thus proceeds as follows: \begin{algorithm}[H] \caption{\textbf{APG with adaptive restart}} \begin{algorithmic} \State Given $\rho_0$, $0<\beta<1$, and $t_1>0$. \State Initialize $\varrho_0=\rho_0$, $\theta_0=1$. \vspace*{0.15cm} \For {$i = 1,\cdots,$} \State Set $t_i\!=\!t_{i-1}$, $\rho_i \!= \!\proj(\varrho_{i-1}-t_i\nabla F(\varrho_{i-1}))$, $\delta_i\!=\!\rho_i-\varrho_{i-1}$. \vspace*{-0.15cm} \State (Choose step size via backtracking) \While {$F(\rho_i) > F(\varrho_{i-1}) + \left\langle\nabla F(\varrho_{i-1}),\delta_i\right\rangle + \tfrac1{2t_i}||\delta_i||_F^2$} \State Set $t_i=\beta t_i$. \State Update $\rho_i = \proj(\varrho_{i-1}-t_i\nabla F(\varrho_{i-1}))$, $\delta_i=\rho_i-\varrho_{i-1}$. \EndWhile \vspace*{0.15cm} \State Set $\hat\delta_i=\rho_i-\rho_{i-1}$; Termination criterion. \vspace*{0.15cm} \If {$\langle\delta_i,\hat\delta_i\rangle <0$} \hspace*{0.1cm}(Restart) \State $\rho_i=\rho_{i-1}$, $\varrho_i=\rho_{i-1}$, $\theta_i=1$; \vspace*{0.15cm} \Else \hspace*{0.1cm}(Accelerate) \State Set $\theta_i=\tfrac{1}{2}{\left(1+\sqrt{1+4\theta_{i-1}^2}\right)}$, $\varrho_i=\rho_i+\hat\delta_i\frac{\theta_{i-1}-1}{\theta_i}$\,. \EndIf \EndFor \end{algorithmic} \end{algorithm} \noindent $\proj(\cdot)$ projects the Hermitian argument to the nearest---in Euclidean distance---state satisfying \eqref{eq:constr} \cite{GaussianNoise}. 
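The pseudocode above can be transcribed compactly. The following Python sketch (a single-qubit toy with invented frequencies, not the authors' released MATLAB implementation) runs APG with backtracking and adaptive restart; the data are chosen so that the MLE lies on the boundary of the state space, the regime where factored-space methods struggle:

```python
import numpy as np

def proj_simplex(lam):
    # project real eigenvalues onto the probability simplex (see Appendix A)
    s = np.sort(lam)[::-1]
    css = np.cumsum(s) - 1
    j = np.arange(1, len(s) + 1)
    u = j[s - css / j > 0][-1]
    return np.maximum(lam - css[u - 1] / u, 0)

def proj(varrho):
    # nearest (Frobenius) valid state: eigendecompose, project the spectrum
    lam, V = np.linalg.eigh(varrho)
    return (V * proj_simplex(lam)) @ V.conj().T

# invented single-qubit data: Pauli POM outcomes (1 ± sigma_a)/6
sx = np.array([[0, 1], [1, 0]], complex)
sy = np.array([[0, -1j], [1j, 0]], complex)
sz = np.array([[1, 0], [0, -1]], complex)
povm = [(np.eye(2) + g * s) / 6 for s in (sx, sy, sz) for g in (1, -1)]
f = np.array([0.30, 0.03, 0.25, 0.08, 0.28, 0.06])   # relative frequencies

probs = lambda rho: np.array([np.trace(rho @ P).real for P in povm])
F = lambda rho: -f @ np.log(probs(rho))
gradF = lambda rho: -sum(fk / pk * P for fk, pk, P in zip(f, probs(rho), povm))

rho = varrho = np.eye(2) / 2
t, beta, theta = 1.0, 0.5, 1.0
F0 = F(rho)
for _ in range(400):                                  # fixed step budget
    g = gradF(varrho)
    rho_new = proj(varrho - t * g)
    d = rho_new - varrho
    # backtracking line search for the step size t
    while F(rho_new) > F(varrho) + np.trace(g @ d).real + np.sum(np.abs(d)**2) / (2*t):
        t *= beta
        rho_new = proj(varrho - t * g)
        d = rho_new - varrho
    dhat = rho_new - rho
    if np.trace(d @ dhat).real < 0:                   # adaptive restart
        varrho, theta = rho, 1.0
    else:                                             # accelerate (momentum)
        th = 0.5 * (1 + np.sqrt(1 + 4 * theta**2))
        varrho = rho_new + dhat * (theta - 1) / th
        rho, theta = rho_new, th
    if probs(varrho).min() <= 0:                      # negative-p_k guard
        varrho = rho

eigs = np.linalg.eigvalsh(rho)
assert np.isclose(np.trace(rho).real, 1) and eigs.min() > -1e-9
assert F(rho) < F0                                    # likelihood improved
assert eigs.max() > 0.9                               # MLE is near-pure here
```

The final guard resetting $\varrho_i=\rho_i$ when a probability turns non-positive follows the fix described in the appendix.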
One can modify the backtracking portion of the algorithm for better performance; see Sec.~3 in Appendix~\ref{AppA}. The \textsl{MATLAB} code for our APG and CG-APG algorithms, with accompanying documentation, is available online \cite{qMLE}. Applying the APG algorithm to the 8-qubit example, one finds fast convergence to the MLE (see Fig.~\ref{fig:8qubit}) once the walk brings us sufficiently close; no slowdown as seen in DG and CG is observed. APG with adaptive restart exhibits linear (on a log-scale) convergence in areas of strong convexity \cite{AdaptRes} sufficiently close to the minimum. The staircase pattern is expected in adaptive restart APG algorithms \cite{AdaptRes}: Flat regions occur after a reset, giving way to steep regions when the momentum is built up again. These undulations do not affect the overall rate of convergence. Far from the minimum, APG can descend slowly, as is visible in Fig.~\ref{fig:8qubit}. CG descent in the factored space, on the other hand, is rapid in this initial phase. Similar behavior is observed for other states (see a representative example in Appendix~\ref{AppB}), although the initial slow APG phase is usually markedly shorter than in the $W$-state example here. Thus, a practical strategy is to start with CG in the factored space to capitalize on its initial rapid descent, and switch over to APG in the $\rho$-space when the fast convergence of APG sets in, \emph{provided} one can determine the switchover point cheaply. Both APG and CG use a local quadratic approximation at each step, the accuracy of which relies on the local curvature, measured by the Hessian of the merit function. The advance is quick if the Hessian changes slowly from step to step so that prior-step information provides good guidance for the next step. Empirically, for nearly pure true states, we observe that the Hessian of $F(\rho)$ changes a lot initially in APG but settles down close to the MLE. 
This is because the APG trajectory quickly comes close to the state-space boundary, so that some $p_k$ values, which occur in the Hessian of $F(\rho)$ as $\sim\!\! f_k/p_k^2\equiv h_k$, can be small and unchecked by the $f_k$ values away from the MLE. In contrast, the Hessian of $\widetilde F(A)$ relevant for CG is initially slowly changing, but starts fluctuating closer to the MLE, likely due to the ill-conditioning discussed previously. The proposal is hence to start with CG in the factored space, detect when the Hessian of $F(\rho)$ settles down, at which point one switches over to APG in the $\rho$-space for rapid convergence to the minimum. The Hessian is, however, expensive to compute; one can instead get a good gauge by monitoring the $h_k$ values, cheaply computable from the $p_k$s already used in the algorithm; see Sec.~4 in Appendix~\ref{AppA}. This then is finally our CG-APG algorithm, with a superfast approach to the MLE that outperforms all other algorithms; see Fig.~\ref{fig:8qubit}. \textit{Exploiting the product structure.}--- Part of the speedup in the 8-qubit example stems from exploiting the product structure of the situation. For the four algorithms compared, one of the most expensive parts of the computation is the evaluation of the probabilities $\{p_k=\mathrm{tr}(\rho\Pi_k)\}$ needed in $F$ and $\nabla F=-R$, for $\rho$ at each iterative step. For a $d$-dimensional system and $K$ POM outcomes, the computational cost of evaluating $\{p_k\}$ is $O(Kd^2)$ [there are $K$ probabilities, each requiring $O(d^2)$ operations for the trace of a product of two $d\times d$ matrices]. For the 8-qubit example, $d=2^8=256$, and the POM has $K=6^8=1679616$ outcomes. The computational cost can be greatly reduced if one has a product structure: The system comprises $n$ registers, and the POM is a product of individual POMs on each register. 
For simplicity, we assume the $n$ registers each have dimension $d_r$, and the POM on each register is the same, written as $\{\pi_k\}_{k=1}^{K_r}$. The $n$-register POM outcome is $\Pi_{\vec k}=\pi_{k_1}\otimes\pi_{k_2}\otimes\ldots\otimes\pi_{k_n}$, with $\vec k\equiv (k_1,k_2,\ldots ,k_n)$ and $k_a=1,\ldots,K_r$. The generalization to non-identical registers and POMs is obvious. Then, $d=d_r^n$ and $K=K_r^n$. Exploiting this product structure reduces the cost of evaluating the probabilities from $O(K_r^n d_r^{2n})$ to $O(K_r^{n+1})$ (for $K_r>d_r^2$). For $n$ qubits with product-Pauli measurements ($d_r=2$, $K_r=6$), this is a huge reduction from $\sim\!6^n4^n$ to $\sim\!6^{n+1}$. The computational savings come from re-using parts of the calculation. Let $\rho_{n-1}^{(k_n)}\equiv \mathrm{tr}_n(\rho\pi_{k_n})$, the partial trace on the $n$th register, for a given $k_n$. This same $\rho_{n-1}^{(k_n)}$ can be used to evaluate $\rho_{n-2}^{(k_{n-1},k_n)}\equiv \mathrm{tr}_{n-1}{\left(\rho_{n-1}^{(k_n)}\pi_{k_{n-1}}\right)}$ for any $k_{n-1}$. One does this repeatedly, partial-tracing out the last register each time, until one arrives at $p_{\vec k}=\rho_0^{(k_1,k_2,\ldots,k_n)}$. At each stage, evaluating $\rho_{\ell-1}^{(k_{\ell},\ldots,k_n)}$ from $\rho_{\ell}^{(k_{\ell+1},\ldots,k_n)}$ involves computing the trace of $\pi_{k_\ell}$ with submatrices of $\rho_{\ell}^{(k_{\ell+1},\ldots,k_n)}$. Specifically,\begin{equation} \rho_{\ell}^{(k_{\ell+1},\ldots,k_n)}=\sum_{\vec i^{(\ell-1)},\vec j^{(\ell-1)}}|\vec i^{(\ell-1)}\rangle\langle \vec j^{(\ell-1)}|\otimes\rho_{\vec i^{(\ell-1)};\vec j^{(\ell-1)}}, \end{equation} where $\vec i^{(\ell-1)}\equiv (i_1,i_2,\ldots, i_{\ell-1})$ with $i_a=1,\ldots, d_r$ (similarly for $\vec j^{(\ell-1)}$), $\rho_{\vec i^{(\ell-1)};\vec j^{(\ell-1)}}$ is a $d_r\times d_r$ submatrix, and $\rho_{\ell}^{(k_{\ell+1},\ldots,k_n)}$ is a $(d_r)^{\ell-1}\times (d_r)^{\ell-1}$ array of these submatrices. 
Getting $\rho_{\ell-1}^{(k_{\ell},\ldots,k_n)}$ from $\rho_{\ell}^{(k_{\ell+1},\ldots,k_n)}$ requires replacing each submatrix in $\rho_{\ell}^{(k_{\ell+1},\ldots,k_n)}$ by the number $\mathrm{tr}(\rho_{\vec i^{(\ell-1)};\vec j^{(\ell-1)}}\pi_{k_\ell})$, which takes $O(d_r^2)$ computations. Since each $\rho_{\ell}^{(k_{\ell+1},\ldots,k_n)}$ need only be computed once for all subsequent $k_{j\leq\ell}$, simple counting (see Appendix~\ref{AppC}) yields a total cost of $O(K_r^{n+1})$ (for $K_r>d_r^2$) to evaluate the full set of probabilities. Constructing $R$ for given $\{p_k\}$ also requires $O(Kd^2)$ operations; the same idea of register-by-register evaluation applies for a speedier computation. \begin{figure} \caption{\label{fig:nqb}Runtimes of the different algorithms for a varying number of qubits, with and without exploiting the product structure, for the product-Pauli measurement.} \end{figure} Figure \ref{fig:nqb} shows the performance of the different algorithms for a varying number of qubits with and without exploiting the product structure, for the product-Pauli measurement. The APG and CG-APG algorithms show a substantial improvement in speed over other algorithms. With the faster speed, one can perform accurate MLE reconstruction for larger systems in the same amount of time. Observe that the CG-APG and APG runtimes are similar in Fig.~\ref{fig:nqb}, with CG-APG about 10\% faster than APG for $n=10$ (exploiting the product structure). This is because the advantage of CG-APG over APG occurs early on in the run, when the walk is far from the minimum. APG works well enough if one is seeking an accurate MLE, so that most of the runtime is spent resolving the exact MLE location in the vicinity of the minimum. However, for long runtimes, the 10\% savings from CG-APG is not inconsequential. CG-APG combines the advantages of CG and APG, without much increase in implementation complexity, and works well even in cases like that of Fig.~\ref{fig:8qubit}, where APG spends a long time wandering around far from the minimum. Furthermore, significant speedup is visible when the product structure is incorporated.
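The register-by-register recursion is short to implement. The Python sketch below (a toy 3-qubit check with a random state, not the authors' code) reuses each partial trace across all subsequent outcome indices and reproduces the brute-force probabilities:

```python
import numpy as np
from functools import reduce
from itertools import product

# single-register Pauli POM: K_r = 6 outcomes per qubit
sx = np.array([[0, 1], [1, 0]], complex)
sy = np.array([[0, -1j], [1j, 0]], complex)
sz = np.array([[1, 0], [0, -1]], complex)
pi_k = [(np.eye(2) + g * s) / 6 for s in (sx, sy, sz) for g in (1, -1)]

n = 3                                    # toy size: 3 qubits, 6^3 outcomes
d = 2 ** n
rng = np.random.default_rng(1)
A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
rho = A @ A.conj().T / np.trace(A @ A.conj().T).real   # random valid state

def ptrace_last(M, pi):
    """tr_n(M (1 x ... x 1 x pi)): contract pi with the last qubit of M."""
    dm = M.shape[0] // 2
    return np.einsum('aibj,ji->ab', M.reshape(dm, 2, dm, 2), pi)

def all_probs(M, depth):
    """Register-by-register evaluation, computing each partial trace once."""
    if depth == 0:
        return {(): M.real.item()}
    out = {}
    for k, pi in enumerate(pi_k):
        for key, v in all_probs(ptrace_last(M, pi), depth - 1).items():
            out[key + (k,)] = v            # keys end up ordered (k_1,...,k_n)
    return out

p_fast = all_probs(rho, n)

# brute force for comparison: p_k = tr(rho Pi_k1 x ... x Pi_kn)
for ks in product(range(6), repeat=n):
    Pi = reduce(np.kron, [pi_k[k] for k in ks])
    assert np.isclose(p_fast[ks], np.trace(rho @ Pi).real)
assert np.isclose(sum(p_fast.values()), 1.0)
```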
Exploiting the product structure is very different from putting in assumptions about the state or the noise: In the former, one knows the structure by design of the tomographic experiment; the latter requires checks of compliance, which need not be easy or even possible. Tomography experiments beyond a couple of qubits typically employ POMs with a product structure, because of the comparative ease in design and construction, so this product assumption often holds. For comparison, we display the runtime for the general-purpose CVX toolbox for convex optimization \cite{CVX1,CVX2}. The clear disadvantage there is the inability to capitalize on the product structure. All computations are conducted with {\sl MATLAB} on a desktop computer (3 GHz Intel Xeon CPU E5-1660). The good performance of APG/CG-APG goes beyond the product-Pauli measurement. Appendix~\ref{AppD} gives results for the product-tetrahedron POM \cite{TetraPOM} and the symmetric, informationally complete (SIC) POM \cite{SICPOM1,SICPOM}, yielding similar conclusions. This is to be expected, as the slow convergence of DG/CG, remedied by the APG algorithm, is independent of the measurement choice. \textit{Conclusion.}--- We have demonstrated that, with the right algorithm, MLE reconstruction can be done quickly and reliably, with no latent restriction on the accuracy of the MLE obtained, and no need for added, possibly unverifiable, assumptions. As the dimension increases, there is no getting around the fact that any tomographic reconstruction will become expensive, but our algorithm slows the onset of that point beyond the system size currently accessible in experiments. We note that our method can be immediately applied to process tomography. Furthermore, it is a general method for optimization in the quantum state space or other constrained spaces, and hence useful for such problems. 
\textit{Remark:} Upon completion of our work, we came to be aware \cite{Leach} of Ref.~\cite{Goncalves}, an earlier work employing projected gradient techniques for optimization over the quantum state space. In particular, MLE reconstruction was investigated as an application. However, the discussion there was restricted to two- to four-qubit tomography, and the authors do not use accelerated gradients---as we do here---crucial for fast convergence, and certainly not our hybrid CG-APG method. This work is funded by the Singapore Ministry of Education (partly through the Academic Research Fund Tier 3 MOE2012-T3-1-009) and the National Research Foundation of Singapore. The research is also supported by the National Research Foundation (NRF), Prime Minister's Office, Singapore, under its CREATE programme, Singapore-MIT Alliance for Research and Technology (SMART) BioSystems and Micromechanics (BioSyM) IRG. HKN is partly funded by a Yale-NUS College start-up grant. We thank Christian Roos and Otfried G{\"u}hne for sharing the experimental data of Ref.~\cite{8qubit} and information about the MLE reconstruction used in that work. ZZ thanks Chenglong Bao for his discussions regarding APG and George Barbastathis for general discussions. We are also grateful to Andrew Scott, Michael Hoang, Christopher Fuchs and Markus Grassl for sharing the 7-qubit fiducial state for our SIC-POM example (Appendix~\ref{AppD}). J.~Shang and Z.~Zhang contributed equally to this work. \appendix \section{Time taken for 8-qubit trajectories}\label{App0} Here we show again the trajectories taken by different algorithms as in Fig.~1 in the main text, but now plotted against time rather than iterative steps. 
\begin{figure} \caption{\label{fig:SuppMat1}} \end{figure} \section{The reconstructed 8-qubit state}\label{R8qb} For the 8-qubit dataset, the 12 largest eigenvalues of the reconstructed MLE states from our CG-APG algorithm and the original H{\"a}ffner et al.~reconstruction using DG in the factored space \cite{8qubit} are given in Table \ref{tab:eig}. \begin{table}[!h] \caption{\label{tab:eig}The 12 largest eigenvalues of the reconstructed MLE states.} \begin{tabular}{c@{\hspace*{0.8cm}}c} Our reconstruction & H{\"a}ffner et al.~\\ (using CG-APG)&(Ref.~\cite{8qubit}, using DG)\\ \hline 0.7512&0.7514\\ 0.0609&0.0609\\ 0.0458&0.0456\\ 0.0403&0.0400\\ 0.0237&0.0234\\ 0.0193&0.0189\\ 0.0178&0.0174\\ 0.0153&0.0149\\ 0.0106&0.0102\\ 0.0051&0.0048\\ 0.0039&0.0036\\ 0.0030&0.0030 \end{tabular} \end{table} Observe the close correspondence between the two reconstructed states. The table of eigenvalues also demonstrates the problem with using the compressed sensing (CS) scheme of Ref.~\cite{CompSens}. The CS approach requires an \emph{a priori} choice of the rank of the reconstructed state; specifically, it works well when that choice is one of low rank. Looking at the list of eigenvalues above, we see that one either uses a rank-1 state, or one should include many more eigenvalues, as the subsequent ones are comparable in size. Without access to an unrestricted reconstruction, attainable only by our CG-APG or other full MLE schemes, there is no way of making a verifiable rank choice for the CS approach. \section{The APG and CG-APG algorithms}\label{AppA} We discuss various technical details pertaining to the APG and/or CG-APG algorithms described in the main text. \subsection{The projection algorithm used in APG} As explained in the main text, the APG algorithm relies on a projection $\proj(\cdot)$ to enforce the quantum constraints after each gradient step.
The argument of $\proj(\cdot)$ is a Hermitian operator $\varrho$ with eigenvalues $\lambda_i$ (in descending order) and eigenvectors $|\psi_i\rangle$. One projects $\{\lambda_i\}$ onto the probability simplex so that ${\{\lambda_i\}\rightarrow\{\overline\lambda_i\}}$ with ${\overline\lambda_i\geq 0}\,\,\forall i$ and ${\sum_i\overline\lambda_i=1}$, and then rebuilds the operator with $\{\overline\lambda_i\}$, i.e., ${\proj(\varrho)=\sum_i\overline\lambda_i|\psi_i\rangle\langle\psi_i|}$. The projection of $\lambda_i$ onto the simplex is done as follows \cite{GaussianNoise}: Find ${u=\max\left\{j:\lambda_j-\frac1{j}\bigl(\sum_{i=1}^j\lambda_i-1\bigr)>0\right\}}$, then define ${w=\frac1{u}\left(\sum_{i=1}^u \lambda_i-1\right)}$. Finally we have $\overline\lambda_i=\max\{\lambda_i-w,0\}$. \subsection{Handling negative $p_k$ values} During the gradient step of APG, one can wind up outside the physical state space, i.e., $\varrho_i$ at each iterative step need not be a valid state. It can even happen that not all $p_{k,i}\equiv \mathrm{tr}(\varrho_i\Pi_k)$s needed in the iterative step are positive, for which $F(\varrho_i)$ is ill-defined because of the logarithm. We can prevent this by checking whether any $p_{k,i}$ is negative after $\varrho_i$ is computed, and set $\varrho_i=\rho_i$ if this happens to be the case. Empirically, we observe such cases to occur only very rarely. \subsection{Convergence tweaks for APG}\label{SuppMat:opt} We also incorporated a few small adjustments to APG recommended in Ref.~\cite{TFOCS}, as well as the Barzilai-Borwein (BB) method for computing step sizes \cite{BBStep}, for better step-size estimation and improved performance in the implementation of the CG-APG algorithm used to produce the figures in the main text. We list those adjustments here. 
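The projection just described is straightforward to implement. The following Python sketch (our own function names; the actual implementation in this work is in MATLAB) illustrates the simplex projection of Ref.~\cite{GaussianNoise} and the resulting state-space projection $\proj(\cdot)$:

```python
import numpy as np

def project_to_simplex(lam):
    """Project eigenvalues onto the probability simplex.

    Sort in descending order, find the pivot index u, compute the
    shift w, then clip: lam_i -> max(lam_i - w, 0).
    """
    lam = np.asarray(lam, dtype=float)
    s = np.sort(lam)[::-1]            # eigenvalues in descending order
    css = np.cumsum(s) - 1.0          # running sums minus 1
    j = np.arange(1, len(s) + 1)
    u = np.max(j[s - css / j > 0])    # largest index with positive test
    w = css[u - 1] / u
    return np.maximum(lam - w, 0.0)

def project_to_state(varrho):
    """P(varrho): diagonalize, project the spectrum, rebuild the operator."""
    # symmetrize first so tiny non-Hermitian numerical noise is harmless
    vals, vecs = np.linalg.eigh((varrho + varrho.conj().T) / 2)
    bar = project_to_simplex(vals)
    return (vecs * bar) @ vecs.conj().T
```

The output of `project_to_state` is by construction unit-trace and positive semidefinite, as required of a density operator.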
First, for iterative step $i>1$, rather than fixing the step size as $t_i=t_{i-1}$, we set \cite{BBStep} \begin{equation} t_i = \frac{\langle \varrho_i-\varrho_{i-1},\nabla F(\varrho_i) - \nabla F(\varrho_{i-1})\rangle}{\langle \nabla F(\varrho_i) - \nabla F(\varrho_{i-1}),\nabla F(\varrho_i) - \nabla F(\varrho_{i-1})\rangle} \end{equation} if there was no restart in the previous iteration and the denominator is nonzero; otherwise we set $t_i=\alpha t_{i-1}$, for some pre-chosen constant $\alpha$. We used $\alpha=1.1$ and $\beta=0.5$ (see main text) as recommended in \cite{TFOCS}. We also use the following update on $\theta_i$ and $\varrho_i$ for $i>1$ to prevent changes in $t_i$ from affecting convergence: \begin{subequations} \begin{eqnarray} \hat\theta_{i-1} &=& \theta_{i-1}\sqrt{t_{i-1}/t_i}\\ \label{5b}\theta_i &=& \frac{1}{2}{\left(1+\sqrt{1+4\hat\theta_{i-1}^2}\right)}\\ \label{5c}\varrho_i&=& \rho_i+\hat\delta_i(\hat\theta_{i-1}-1)/\theta_i \end{eqnarray} \end{subequations} The rules Eqs.~\eqref{5b} and \eqref{5c} are exactly those stated in the APG algorithm in the main text, but with $\theta_{i-1}$ replaced by $\hat\theta_{i-1}$ for the step-size adjustment. Sometimes we observe that standard APG as prescribed by \cite{AdaptRes} fails to restart early enough for good performance. We hence use a stricter restart criterion: Restart when \begin{align} \frac{\mathrm{tr}(\delta_i\hat{\delta_i})}{\sqrt{\mathrm{tr}(\delta_i^2)\mathrm{tr}\bigl(\hat\delta_i^2\bigr)}} < \gamma, \end{align} with $\gamma$ set to a small positive value ($0.01$ for the graphs in the main text). We notice that the BB method can sometimes give a larger variation in performance of the APG algorithm for different data. This is visible in Fig.~2 of the main Letter, where the 7-qubit situation for APG shows a slightly larger scatter than for other $n$ values. 
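As a concrete illustration of the two adjustments above, the BB step-size rule and the stricter restart criterion can be sketched in Python as follows (hypothetical helper names; the actual code here is MATLAB, and the Hilbert--Schmidt inner product stands in for $\langle\cdot,\cdot\rangle$):

```python
import numpy as np

def bb_step(rho_cur, rho_prev, grad_cur, grad_prev,
            t_prev, restarted, alpha=1.1):
    """Barzilai-Borwein step size; fall back to alpha*t_prev after a
    restart or when the denominator vanishes."""
    if restarted:
        return alpha * t_prev
    ds = rho_cur - rho_prev              # iterate difference
    dg = grad_cur - grad_prev            # gradient difference
    denom = np.real(np.vdot(dg, dg))     # <dg, dg>, flattened HS product
    if denom == 0.0:
        return alpha * t_prev
    return np.real(np.vdot(ds, dg)) / denom

def should_restart(delta, delta_hat, gamma=0.01):
    """Stricter restart test: restart when the normalized overlap
    tr(delta delta_hat)/sqrt(tr(delta^2) tr(delta_hat^2)) < gamma."""
    num = np.real(np.trace(delta @ delta_hat))
    den = np.sqrt(np.real(np.trace(delta @ delta)) *
                  np.real(np.trace(delta_hat @ delta_hat)))
    return num / den < gamma
```

With `gamma = 0` this reduces to the standard adaptive-restart test of \cite{AdaptRes}, which restarts only when the overlap becomes negative.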
The scatter is reduced when the restart parameter $\gamma$ is set larger so that the restart occurs earlier, suggesting a possible interference with adaptive restart. One can thus adjust the $\gamma$ parameter if one is concerned about the variation in performance; or one can turn off the BB step-size optimization altogether, and see only a very slight worsening of the average runtime; see Fig.~\ref{fig:SuppMatBB} below. Note that we did not see this variation in performance for CG-APG with the BB method in any of the cases tested. \begin{figure} \caption{\label{fig:SuppMatBB}} \end{figure} \subsection{Switchover criterion for CG-APG} For the CG-APG algorithm, as explained in the main text, one would like to start with CG iterations and switch to APG when the Hessian stabilizes, i.e., it changes only by a little with further APG steps. This happens when the trajectory is sufficiently close to the MLE. Here, we explain the technical details of this switchover. The Hessian of $F(\rho)$---its curvature---characterizes its local quadratic structure. It is the ``second derivative" of $F(\rho)$, and comes from considering the second-order variation of $F$: $\delta^2F(\rho)\equiv \delta F\bigl(\rho+\widetilde{\delta\rho}\bigr)-\delta F(\rho)$, where $\delta F(\rho)\equiv -\mathrm{tr}\Bigl(\delta\rho\, R(\rho)\Bigr)$, the first-order variation of $F$, with $R(\rho)=\sum_k\Pi_kf_k/p_k$ and $p_k\equiv \mathrm{tr}(\rho\Pi_k)$ as in the main text. Here, $\delta\rho$ and $\widetilde{\delta\rho}$ are independent infinitesimal variations of $\rho$. A little algebra gives \begin{equation} \delta^2F(\rho)=\mathrm{tr}{\left(\delta\rho\sum_k\Pi_k\mathrm{tr}\Bigl(\widetilde{\delta\rho}\,\Pi_k\Bigr)\frac{f_k}{p_k^2}\right)}, \end{equation} and we identify the linear operator [on $\mathcal{B}(\mathcal{H})$] \begin{equation} H(\rho;\,\cdot\,) = \sum_k \Pi_k \mathrm{tr}(\,\cdot\,\Pi_k )\frac{f_k}{p_k^2} \end{equation} as the Hessian of $F$ at $\rho$.
The eigenvalues of $H$ give the local quadratic structure of $F(\rho)$. Ideally, determining the right time during CG to switch to APG requires computing how much $H$ changes across successive APG steps from the current value of $\rho$. However, this would be very costly: It is as if one is running APG alongside CG, and the Hessian is a large matrix ($d^2\times d^2$ in size) and hence expensive to compute. Instead, we adopt a compromise that works well in practice: (1) we treat the $\Pi_k$s as if they were all mutually orthogonal so that the eigenvalues of $H$ would be equal to $\{f_k/p_k^2\}$, and (2) we look at the change in $H$ between iterations of CG instead of between iterations of APG. The $\Pi_k$s are never exactly mutually orthogonal for informationally complete measurements, but a good tomographic design would seek to spread out the $\Pi_k$ directions, and for large dimensional situations, their mutual overlaps will be small and $\{f_k/p_k^2\}$ is a good enough proxy for the eigenvalues of the Hessian. While looking at the change in $H$ across iterations of CG would not always guarantee a similar change for APG, a small change with CG iterations signals closeness to the MLE, or that CG has stagnated. In either case, one should switch to APG. Thus, in our implementation of CG-APG, we first initialize CG with the maximally mixed state, and switch to APG at the first iteration when the overlap, \begin{equation} \frac{\vec q_i\cdot\vec q_{i-1}}{\sqrt{|\vec q_i|^2|\vec q_{i-1}|^2}}, \end{equation} exceeds $\cos\phi$ for some chosen small $\phi$ value. Here, $\vec q_i\equiv (f_1/p_{1,i}^2,f_2/p_{2,i}^2,\ldots,f_K/p_{K,i}^2)$, where $p_{k,i}=\mathrm{tr}(\rho_i\Pi_k)$ for state $\rho_i$ of the $i$th CG iteration. The switchover thus occurs when the angle between the $\vec q$s for subsequent iterations is small enough. We find that $\phi=0.01$ radians works well in practice. 
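The switchover test itself is inexpensive. A minimal Python sketch (our own function name; $f_k$ and the $p_{k,i}$s as defined above, and the actual implementation is in MATLAB) reads:

```python
import numpy as np

def should_switch(f, p_cur, p_prev, phi=0.01):
    """CG -> APG switchover test.

    f: relative frequencies f_k; p_cur/p_prev: Born probabilities
    tr(rho_i Pi_k) at the current and previous CG iterates.
    Switch when the angle between q_i and q_{i-1} is below phi.
    """
    q_cur = f / p_cur**2     # proxy for the Hessian eigenvalues f_k/p_k^2
    q_prev = f / p_prev**2
    overlap = np.dot(q_cur, q_prev) / np.sqrt(
        np.dot(q_cur, q_cur) * np.dot(q_prev, q_prev))
    return overlap > np.cos(phi)
```

The cost per test is linear in the number of POM outcomes $K$, negligible next to a CG iteration.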
\section{Trajectories for generic states}\label{AppB} Figure 1 in the main text shows the trajectories taken by different algorithms for the experimental data of Ref.~\cite{8qubit} for a noisy $W$-state. There, we saw a long initial slow phase of APG, which is in fact atypical of the behavior seen for generic states. Figure \ref{fig:SuppMat2} shows the more representative behavior for a random 8-qubit pure state with 10\% added white noise. As in Fig.~1, $N=3^8\times 100$, with 100 copies for each of the $3^8$ settings of the 8-qubit product-Pauli POM. Observe the significantly shorter length of the initial slow phase of APG than for the noisy $W$-state in Fig.~1. \begin{figure} \caption{\label{fig:SuppMat2}} \end{figure} \section{Exploiting the product structure: Computational savings}\label{AppC} Here, we present the counting argument that gives $O(K_r^{n+1})$ as the computational cost of evaluating a full set of Born probabilities after making use of the product structure of the POM. To remind the reader of the notation: The system comprises $n$ registers each of dimension $d_r$; the POM on each register is $\{\pi_k\}_{k=1}^{K_r}$; the $n$-register POM outcome is $\Pi_{\vec k}=\pi_{k_1}\otimes\pi_{k_2}\otimes\ldots\otimes\pi_{k_n}$, with $\vec k\equiv (k_1,k_2,\ldots ,k_n)$ and $k_a=1,\ldots,K_r$; and $\rho_{\ell}^{(k_{\ell+1},\ldots,k_n)}\equiv \mathrm{tr}_{\ell+1}\{\ldots\mathrm{tr}_{n-1}\{\mathrm{tr}_n\{\rho\pi_{k_n}\}\pi_{k_{n-1}}\}\ldots\pi_{k_{\ell+1}}\}$. We also need the following basic fact: Evaluating $\mathrm{tr}\{AB\}$ for $A$ an $n\times m$ matrix and $B$ an $m\times n$ matrix requires $2mn$ operations (elementary addition/multiplication). In each step of the procedure described in the main text, one needs to evaluate $\rho_{\ell-1}^{(k_{\ell},\ldots,k_n)}=\mathrm{tr}_{\ell}\{\rho_{\ell}^{(k_{\ell+1},\ldots,k_n)}\pi_{k_\ell}\}$ for given $k_\ell,\ldots,k_n$.
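Before counting operations, we note that the recursion itself can be sketched compactly. The following Python illustration (a hypothetical NumPy implementation with our own naming; register $n$ is peeled off first, so the returned dictionary is indexed by $(k_1,\ldots,k_n)$) computes all Born probabilities for a product POM:

```python
import numpy as np

def born_probs_product(rho, pom, n, d_r):
    """All Born probabilities tr(rho Pi_{k1} x ... x Pi_{kn}) for a
    product POM, peeling off one register per level by a partial trace.
    rho is d_r^n x d_r^n; pom is a list of K_r outcomes, each d_r x d_r."""
    mats = {(): rho}
    for level in range(n):                    # peel off the last register
        dim = d_r ** (n - level - 1)
        new = {}
        for key, m in mats.items():
            blocks = m.reshape(dim, d_r, dim, d_r)
            for k, pi in enumerate(pom):
                # tr over the last register against outcome pi:
                # sum_{i,j} blocks[a,i,b,j] * pi[j,i]
                new[(k,) + key] = np.einsum('aibj,ji->ab', blocks, pi)
        mats = new
    return {key: m.real.item() for key, m in mats.items()}
```

Each intermediate operator is reused for all outcome choices on the registers already traced out, which is exactly the source of the savings counted below.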
One such evaluation requires the computation of the trace of $\pi_{k_\ell}$ with each of the $(d_r)^{\ell-1}\times (d_r)^{\ell-1}$ submatrices of $\rho_{\ell}^{(k_{\ell+1},\ldots,k_n)}$. $\pi_{k_\ell}$ and each submatrix are $d_r\times d_r$ in size, so the computational cost of evaluating $\rho_{\ell-1}^{(k_{\ell},\ldots,k_n)}$ is $2d_r^2\times d_r^{2(\ell-1)}=2d_r^{2\ell}$ operations. One incurs this cost for every choice of $k_\ell,\ldots,k_n$, so the total cost of evaluating $\rho_{\ell-1}^{(k_{\ell},\ldots,k_n)}$ for all $k_\ell,\ldots,k_n$, for given $\ell$, is $2K_r^{n-\ell+1}d_r^{2\ell}$. Adding up this cost over all values of $\ell=1,2,\ldots,n$ gives the total cost for evaluating a full set of Born probabilities as \begin{eqnarray} 2\sum_{\ell=1}^n K_r^{n-\ell+1}d_r^{2\ell}&=&2K_r^nd_r^2\sum_{\ell=0}^{n-1}{\left(\frac{d_r^2}{K_r}\right)}^\ell\nonumber\\ &=&2K_r^nd_r^2{\left[\frac{1-(d_r^2/K_r)^n}{1-(d_r^2/K_r)}\right]}. \end{eqnarray} For $K_r> d_r^2$, as is usually the case, this gives the dominant computational cost of $O(K_r^{n+1})$; for $K_r=d_r^2$, one has instead the cost of $O(nK_r^{n+1})$. \section{Performance for different measurements}\label{AppD} As stated in the main text, the edge our APG and CG-APG algorithms have over the DG and CG algorithms (in the factored space) does not depend on the measurement choice. Here, we demonstrate this point by benchmarking the performance of APG and CG-APG against that of DG and CG for two more measurements (in addition to the product-Pauli POM in the main text): the product-tetrahedron POM \cite{TetraPOM} in Fig.~\ref{fig:tetra}, and the symmetric, informationally complete POM (SIC-POM) \cite{SICPOM1,SICPOM} in Fig.~\ref{fig:SIC}.
\begin{figure} \caption{\label{fig:tetra}} \end{figure} \begin{figure} \caption{\label{fig:SIC}} \end{figure} The parameters used in both figures are identical (with the exception of $n=9$ of the product-tetrahedron POM for CG) to those of Fig.~2 in the main text, repeated here for the reader's convenience: For each $n$, 50 states are used, each a Haar-random pure state with 10\% white noise to emulate a noisy preparation. For each state, the algorithms are run for $\{f_k=p_k\}$, where $\{p_k\}$ are the Born probabilities for the state on the respective $n$-qubit POM. The MLE is hence the actual state. The lines labeled ``np" indicate runs \emph{without} using the product structure. The lines are drawn through the average time taken for each algorithm over the 50 states; the scatter of the timings is shown only for the algorithms using the product structure.

We note that the performance of the CG algorithm for the SIC-POM showed exceptional sensitivity, not observed in other cases, to the parameters that enter the line minimization of the CG algorithm. The CG plot of Fig.~\ref{fig:SIC} is given for line-minimization parameters optimized for the best CG runtime. While the previous parameters used for the product-Pauli measurement worked also for the product-tetrahedron POM (in fact, there we see little difference in the performance for different parameter choices), the CG runs did not converge at all unless we tweaked the parameters. We do not yet understand the underlying reason for this sensitivity, and it may be mitigated with the use of a more elaborate line-minimization procedure, but one should perhaps take this as an added note of caution when using the CG algorithm. \end{document}
\begin{document} \title[Singular MASAs in type III factors and Connes' Bicentralizer Property]{Singular MASAs in type III factors and \\ Connes' Bicentralizer Property} \begin{abstract} We show that any type ${\rm III_1}$ factor with separable predual satisfying Connes' Bicentralizer Property (CBP) has a singular maximal abelian $\ast$-subalgebra that is the range of a normal conditional expectation. We also investigate stability properties of CBP under finite index extensions/restrictions of type ${\rm III_1}$ factors. \end{abstract} \author{Cyril Houdayer} \address{Laboratoire de Math\'ematiques d'Orsay\\ Universit\'e Paris-Sud\\ CNRS\\ Universit\'e Paris-Saclay\\ 91405 Orsay\\ FRANCE} \email{[email protected]} \thanks{CH is supported by ERC Starting Grant GAN 637601} \author{Sorin Popa} \address{Mathematics Department \\ University of California at Los Angeles \\ CA 90095-1555 \\ USA} \email{[email protected]} \thanks{SP is supported by NSF Grant DMS-1400208, a Simons Fellowship and Chaire d'Excellence de la FSMP~2016} \subjclass[2010]{46L10, 46L36} \keywords{Connes' bicentralizer property; Singular maximal abelian $\ast$-subalgebras; Type ${\rm III}$ factors} \maketitle \section{Introduction} Let $M$ be any von Neumann algebra and $A \subset M$ any maximal abelian $\ast$-subalgebra (abbreviated MASA). Denote by $\mathcal N_M(A) = \{u \in \mathcal U(M) : u A u^* = A\}$ the group of unitaries in $M$ that normalize $A$. We say that $A \subset M$ is {\em singular} if $\mathcal N_M(A) = \mathcal U(A)$, that is, the only unitaries in $M$ that normalize $A$ are in $A$. It has been shown in \cite{Po82} that any diffuse semifinite von Neumann algebra with separable predual and any type ${\rm III_\lambda}$ factor ($0 \leq \lambda < 1$) with separable predual has a singular MASA. A new approach to this result has been recently given in \cite{Po16}.
But while many examples of type ${\rm III_1}$ factors are known to have singular MASAs, the problem of whether any type ${\rm III_1}$ factor has a singular MASA remained open. Following \cite{Co80}, if $M$ is a type ${\rm III_1}$ factor with a normal faithful state $\varphi$, then the {\em bicentralizer} of $(M, \varphi)$ is defined by $$\mathord{\text{\rm B}}(M, \varphi) = \left \{ a \in M : \lim_n \|x_n a - a x_n\|_\varphi = 0, \forall (x_n)_n \in \mathord{\text{\rm AC}}(M, \varphi) \right \}$$ where $\mathord{\text{\rm AC}}(M, \varphi) = \left\{ (x_n)_n \in \ell^\infty(\mathbf{N}, M) : \lim_n \|x_n \varphi - \varphi x_n\| = 0\right \}$. It is known that $\mathord{\text{\rm B}}(M, \varphi) \subset M$ is a von Neumann subalgebra that is globally invariant under the modular flow $\sigma^\varphi$. By Connes--St\o rmer transitivity theorem (\cite{CS76}), it follows that if $\mathord{\text{\rm B}}(M, \varphi) = \mathbf{C} 1$ for some normal faithful state $\varphi$ on $M$, then $\mathord{\text{\rm B}}(M, \psi) = \mathbf{C} 1$ for any normal faithful state $\psi$ on $M$ (cf. \cite[Corollary 1.5]{Ha85}). We say that $M$ satisfies Connes' Bicentralizer Property (abbreviated CBP) if $\mathord{\text{\rm B}}(M, \varphi) = \mathbf{C} 1$ for some (equivalently, for any) normal faithful state $\varphi$ on $M$. Haagerup showed in \cite{Ha85} that any amenable type ${\rm III_1}$ factor with separable predual satisfies CBP. Together with the work of Connes \cite{Co85}, this showed the uniqueness of the amenable factor of type ${\rm III_1}$ with separable predual. Haagerup also obtained in \cite[Theorem 3.1]{Ha85} the following characterization of CBP: a type ${\rm III_1}$ factor $M$ with separable predual satisfies CBP if and only if it has a normal faithful state $\varphi$ such that $(M_\varphi)' \cap M = \mathbf{C} 1$. 
Several classes of nonamenable type ${\rm III_1}$ factors have been shown to satisfy CBP: free Araki--Woods factors \cite{Ho08}; free product factors \cite{HU15}; nonamenable factors satisfying Ozawa's condition (AO) \cite{HI15}. However, Connes' Bicentralizer Problem is still open for arbitrary type ${\rm III_1}$ factors. In this note, we prove that every factor $M$ with separable predual that has a normal faithful state $\varphi$ satisfying the condition $(M_\varphi)' \cap M = \mathbf{C} 1$ contains an abelian $\ast$-subalgebra $A \subset M_\varphi$ that is maximal abelian and singular in $M$ (see Theorem \ref{thm-singular}). We prove this result by adapting the argument used in \cite[Theorem 2.1]{Po16} for the type ${\rm II_1}$ case. By combining our result with Haagerup's characterization of CBP \cite{Ha85} explained above, we derive that any type ${\rm III_1}$ factor with separable predual satisfying CBP has a singular MASA that is the range of a normal conditional expectation. We end the paper with some results and comments about the stability of Connes' Bicentralizer Property for inclusions of type ${\rm III_1}$ factors with normal conditional expectation, under the assumption that the inclusion has finite index (see Theorem \ref{thm-CBP-finite-index}). \subsection*{Acknowledgments} The first named author is grateful to Yusuke Isono for useful discussions. \section*{Notation} Let $M$ be any $\sigma$-finite von Neumann algebra. We denote by $(M, \mathord{\text{\rm L}}^2(M), J, \mathord{\text{\rm L}}^2(M)_+)$ the standard form of $M$. We moreover denote by $\mathcal Z(M)$ its center, by $\mathcal U(M)$ its group of unitaries and by $\mathord{\text{\rm Ball}}(M)$ its unit ball with respect to the uniform norm $\|\cdot\|_\infty$.
For any normal faithful state $\varphi$ on $M$, we denote by $\xi_\varphi \in \mathord{\text{\rm L}}^2(M)_+$ its canonical implementing vector, by $\sigma^\varphi$ its modular automorphism group and by $M_\varphi = \{x \in M : \sigma_t^\varphi(x) = x, \forall t \in \mathbf{R}\}$ its centralizer. We write $\|x\|_\varphi = \|x \xi_\varphi\|$ for every $x \in M$. We say that a von Neumann subalgebra $N \subset M$ is with {\em normal conditional expectation} (abbreviated NCE) if there exists a normal faithful conditional expectation $\mathord{\text{\rm E}}_N : M \to N$. Recall that $N \subset M$ is globally invariant under the modular automorphism group $\sigma^\varphi$ if and only if there exists a $\varphi$-preserving conditional expectation $\mathord{\text{\rm E}}_N^\varphi : M \to N$ (see \cite[Theorem IX.4.2]{Ta03}). \section{Singular MASAs in type ${\rm III_1}$ factors}\label{section:singular-MASA} We recall from \cite{Po81} two results that will be used in the proof of Theorem \ref{thm-singular}. \begin{lem}[{\cite[Theorem 2.5]{Po81}}]\label{lem-technical-1} Let $M$ be any $\sigma$-finite von Neumann algebra, $\varphi$ any normal faithful state on $M$ and $N \subset M_\varphi$ any von Neumann subalgebra such that $N' \cap M \subset N$. For any finite dimensional abelian $\ast$-subalgebra $D \subset N$, any $x_1, \dots, x_n \in M$ and any $\varepsilon > 0$, there exists a finite dimensional abelian $\ast$-subalgebra $A \subset N$ that contains $D$ and for which we have \begin{equation*} \forall 1 \leq i \leq n, \quad \|\mathord{\text{\rm E}}_{A' \cap M}^\varphi(x_i) - \mathord{\text{\rm E}}_A^\varphi(x_i)\|_\varphi \leq \varepsilon. \end{equation*} \end{lem} \begin{proof} Write $D = \bigoplus_{j \in J} \mathbf{C} e_j$ where $J$ is a nonempty finite set and $(e_j)_{j \in J}$ are the nonzero minimal projections of $D$.
For every $j \in J$, we have $e_j N e_j \subset (e_j M e_j)_{\varphi_{e_j}}$ and $(e_j N e_j)' \cap e_j M e_j \subset e_j N e_j$ (see \cite[Lemma 2.1]{Po81}). Let $x_1, \dots, x_n \in M$ and $\varepsilon > 0$. For every $j \in J$, by \cite[Theorem 2.5]{Po81}, there exists a finite dimensional abelian $\ast$-subalgebra $A_j \subset e_j N e_j$ such that \begin{equation*} \forall 1 \leq i \leq n, \quad \|\mathord{\text{\rm E}}^{\varphi_{e_j}}_{A_j' \cap e_jMe_j}(e_j x_i e_j) - \mathord{\text{\rm E}}^{\varphi_{e_j}}_{A_j}(e_jx_ie_j)\|_{\varphi_{e_j}} \leq \varepsilon. \end{equation*} Put $A = \bigoplus_{j \in J} A_j$. Then $A \subset N$ is a finite dimensional abelian $\ast$-subalgebra that contains $D$. Moreover, for all $1 \leq i \leq n$, we have \begin{align*} \|\mathord{\text{\rm E}}_{A' \cap M}^\varphi(x_i) - \mathord{\text{\rm E}}_A^\varphi(x_i)\|_\varphi^2 &= \sum_{j \in J} \varphi(e_j) \|\mathord{\text{\rm E}}^{\varphi_{e_j}}_{A_j' \cap e_jMe_j}(e_j x_i e_j) - \mathord{\text{\rm E}}^{\varphi_{e_j}}_{A_j}(e_jx_ie_j)\|_{\varphi_{e_j}}^2 \\ &\leq \sum_{j \in J} \varphi(e_j) \varepsilon^2 = \varepsilon^2. \qedhere \end{align*} \end{proof} \begin{lem}[{\cite[Theorem 3.2]{Po81}}]\label{lem-technical-2} Let $M$ be any factor with separable predual, $\varphi$ any normal faithful state on $M$ and $N \subset M_\varphi$ any von Neumann subalgebra such that $N' \cap M = \mathbf{C} 1$. For any finite dimensional abelian $\ast$-subalgebra $D \subset N$, there exists an abelian $\ast$-subalgebra $A \subset N$ that contains $D$ and that is maximal abelian in $M$. \end{lem} \begin{proof} Write $D = \bigoplus_{j \in J} \mathbf{C} e_j$ where $J$ is a nonempty finite set and $(e_j)_{j \in J}$ are the nonzero minimal projections of $D$. For every $j \in J$, we have $e_j N e_j \subset (e_j M e_j)_{\varphi_{e_j}}$ and $(e_j N e_j)' \cap e_j M e_j = \mathbf{C} e_j$ (see \cite[Lemma 2.1]{Po81}).
For every $j \in J$, by \cite[Theorem 3.2]{Po81}, there exists an abelian $\ast$-subalgebra $A_j \subset e_j N e_j$ that is maximal abelian in $e_j M e_j$. Put $A = \bigoplus_{j \in J} A_j$. Then $A \subset N$ is an abelian $\ast$-subalgebra that contains $D$ and that is maximal abelian in $M$. \end{proof} \begin{thm}\label{thm-singular} Let $M$ be any non-type ${\rm I}$ factor with separable predual, $\varphi$ any normal faithful state on $M$ and $N \subset M_\varphi$ any subalgebra such that $N' \cap M = \mathbf{C} 1$. Then there exists an abelian $\ast$-subalgebra $A \subset N$ that is maximal abelian and singular in $M$. \end{thm} \begin{proof} We follow the lines of the proof of \cite[Theorem 2.1]{Po16}. Choose a sequence $x_n \in \mathord{\text{\rm Ball}}(M)$ that is $\ast$-strongly dense in $\mathord{\text{\rm Ball}}(M)$ and a sequence of projections $e_n \in N$ that is strongly dense in the set of all projections of $N$ with $e_0 = 1$. We may further assume that each projection $e_n$ appears infinitely many times in the sequence $(e_m)_{m \in \mathbf{N}}$. We construct inductively an increasing sequence $A_n$ of finite dimensional abelian $\ast$-subalgebras of $N$ together with a sequence of projections $f_n \in A_n$ and a sequence of unitaries $v_n \in \mathcal U(A_n f_n)$ satisfying the following properties: \begin{itemize} \item [(P1)] $\|f_n - e_n\|_\varphi \leq 7 \|e_n - \mathord{\text{\rm E}}^\varphi_{A_{n - 1}' \cap N}(e_n)\|_\varphi$; \item [(P2)] $\|\mathord{\text{\rm E}}^\varphi_{A_n' \cap M}(x_i^*v_n x_j)f_n^\perp\|_\varphi \leq 2^{-n}$ for all $0 \leq i, j \leq n$; \item [(P3)] $\|\mathord{\text{\rm E}}^\varphi_{A_n' \cap M}(x_j) - \mathord{\text{\rm E}}^\varphi_{A_n}(x_j)\|_\varphi \leq 2^{-n}$ for all $0 \leq j \leq n$. \end{itemize} We put $A_{-1} = A_0 = \mathbf{C} 1$, $f_0 = v_0 = 1$. Assume that we have constructed $(A_k, f_k, v_k)$ for all $0 \leq k \leq n$.
Put $f_{n + 1} := \mathbf 1_{[1/2, 1]}(\mathord{\text{\rm E}}^\varphi_{A_n'\cap N}(e_{n + 1}))$. Then $f_{n + 1} \in A_n' \cap N$ is a projection that satisfies $\|f_{n + 1} - e_{n + 1}\|_\varphi \leq 7 \|e_{n + 1} - \mathord{\text{\rm E}}^\varphi_{A_n' \cap N}(e_{n + 1})\|_\varphi$ by \cite[Lemma 1.4]{Po82}. Then (P1) holds true for $f_{n + 1}$. Assume that $f_{n + 1} \in \{ 0, 1 \}$. Then (P2) holds true for any choice of $A_{n + 1}$. By Lemma \ref{lem-technical-1}, we may find a finite dimensional abelian $\ast$-subalgebra $A_{n + 1} \subset N$ that contains $A_n$ and that satisfies \begin{equation}\label{eq-MASA} \forall 0 \leq i \leq n + 1, \quad \|\mathord{\text{\rm E}}^\varphi_{A_{n + 1}'\cap M }(x_i) - \mathord{\text{\rm E}}^\varphi_{A_{n + 1}}(x_i) \|_\varphi \leq 2^{-(n + 1)}. \end{equation} Thus, \eqref{eq-MASA} shows that (P3) holds true for $A_{n + 1}$. Assume that $f_{n + 1} \not\in \{ 0, 1 \}$. By Lemma \ref{lem-technical-2}, there exists an abelian $\ast$-subalgebra $B \subset N$ that contains $A_n \vee \mathbf{C} f_{n + 1}$ and that is maximal abelian in $M$. Since $(A_n f_{n + 1})' \cap f_{n + 1} N f_{n + 1}$ is a type ${\rm II_1}$ von Neumann algebra and $B f_{n + 1}^\perp \subset f_{n + 1}^\perp M f_{n + 1}^\perp$ is an abelian subalgebra with normal expectation, \cite[Theorem 2.3]{HV12} (see also \cite[Theorem 2.1 and Corollary 2.3]{Po03}) implies that there exists $v_{n + 1} \in \mathcal U((A_n f_{n + 1})' \cap f_{n + 1} N f_{n + 1})$ for which we have \begin{equation}\label{eq-intertwining-1} \forall 0 \leq i, j \leq n + 1, \quad \left\|\mathord{\text{\rm E}}^{\varphi_{f_{n + 1}^\perp}}_{B f_{n + 1}^\perp}(f_{n + 1}^\perp x_i^*f_{n + 1} \, v_{n + 1} \, f_{n + 1} x_j f_{n + 1}^\perp) \right\|_\varphi < 2^{-(n + 2)}. \end{equation} Using the spectral theorem, we may further assume that $v_{n + 1} \in \mathcal U((A_n f_{n + 1})' \cap f_{n + 1} N f_{n + 1})$ has finite spectrum and still satisfies \eqref{eq-intertwining-1}.
We may then choose a finite dimensional abelian $\ast$-subalgebra $D_1 \subset Bf_{n + 1}$ that contains $A_n f_{n + 1}$ and $v_{n + 1}$. Moreover, using \cite[Lemma 1.2]{Po81} and \eqref{eq-intertwining-1}, we may choose a finite dimensional abelian $\ast$-subalgebra $D_2 \subset Bf_{n + 1}^\perp$ that contains $A_n f_{n + 1}^\perp$ and for which we have \begin{equation}\label{eq-intertwining-2} \forall 0 \leq i, j \leq n + 1, \quad \left\|\mathord{\text{\rm E}}^{\varphi_{f_{n + 1}^\perp}}_{D_2' \cap f_{n + 1}^\perp M f_{n + 1}^\perp}(f_{n + 1}^\perp x_i^*f_{n + 1} \, v_{n + 1} \, f_{n + 1} x_j f_{n + 1}^\perp) \right\|_{\varphi} < 2^{-(n + 1)}. \end{equation} Letting $D := D_1 \oplus D_2$, we can then rewrite \eqref{eq-intertwining-2} as \begin{equation}\label{eq-intertwining-3} \forall 0 \leq i, j \leq n + 1, \quad \|\mathord{\text{\rm E}}^\varphi_{D' \cap M}(f_{n + 1}^\perp x_i^*f_{n + 1} \, v_{n + 1} \, f_{n + 1} x_j f_{n + 1}^\perp)\|_\varphi < 2^{-(n + 1)}. \end{equation} By Lemma \ref{lem-technical-1}, we may find a finite dimensional abelian $\ast$-subalgebra $A_{n + 1} \subset N$ that contains $D$ (and hence that contains $A_n$) and that satisfies \eqref{eq-MASA}. Thus, \eqref{eq-MASA} shows that (P3) holds true for $A_{n + 1}$. Since $D \subset A_{n + 1}$, we have $A_{n + 1}' \cap M \subset D' \cap M$ and \eqref{eq-intertwining-3} implies that (P2) holds true for $f_{n + 1}$ and $A_{n + 1}$. Thus, we have constructed $(A_{n + 1}, f_{n + 1}, v_{n + 1})$. Put $A = \bigvee_{n \in \mathbf{N}} A_n$. Property (P3) and \cite[Lemma 1.2]{Po81} imply that $A' \cap M = A$ and hence $A$ is maximal abelian in $M$. It remains to prove that $A$ is singular in $M$. By contradiction, assume that $A \neq \mathcal N_M(A)^{\prime\prime}$. Choose $u \in \mathcal N_M(A) \setminus \mathcal U(A)$. We can then find a nonzero projection $z \in A$ such that $uzu^* \perp z$.
Denote by $h$ the unique nonsingular (possibly unbounded) positive selfadjoint operator affiliated with $A$ such that $(\varphi \circ \operatorname{Ad}(u))|_A = \varphi(h \, \cdot \,) |_A$. By \cite[Lemme 1.4.5(2)]{Co72}, we have $\sigma_t^\varphi (u) = u h^{{\rm i}t} $ for every $t \in \mathbf{R}$. Since $h$ is nonsingular, there exists a projection $p \in A$ large enough so that $pz \neq 0$ and $\delta > 0$ so that $\delta p \leq hp \leq \delta^{-1} p$. It follows that the nonzero partial isometry $up \in M$ (resp.\ $(up)^* \in M$) is entire analytic with respect to the modular automorphism group $\sigma^\varphi$. Hence, there exists $\kappa_1 \geq 1$ (resp.\ $\kappa_2 \geq 1$) such that $\|x \, up\|_\varphi \leq \kappa_1 \|x\|_\varphi$ (resp.\ $\|x \, (up)^*\|_\varphi \leq \kappa_2 \|x\|_\varphi$) for every $x \in M$. Put $q := pz \in A$. For every $n \in \mathbf{N}$, we have \begin{align}\label{eq-inequality-1} \|q f_n\|_\varphi &= \|z v_n p\|_\varphi \\ \nonumber &= \| u^* \, uz v_n pu^* \, up\|_\varphi \\ \nonumber & \leq \kappa_1 \| uz v_n p u^* \|_\varphi \\ \nonumber &= \kappa_1 \| \mathord{\text{\rm E}}^\varphi_{A_n' \cap M}(uz v_npu^*) z^\perp \|_\varphi \quad (\text{since } uz v_npu^* \in z^\perp(A_n'\cap M)z^\perp).
\end{align} Since $z, v_n \in A \subset M_\varphi$ and since $(up)^*$ is entire analytic with respect to the modular automorphism group $\sigma^\varphi$, for all $0 \leq i, j \leq n$, we further have \begin{align}\label{eq-inequality-2} \| \mathord{\text{\rm E}}^\varphi_{A_n' \cap M}(uz \, v_n \, pu^*) z^\perp \|_\varphi &\leq \| \mathord{\text{\rm E}}^\varphi_{A_n' \cap M}(x_i^* \, v_n \, pu^*) z^\perp \|_\varphi + \kappa_2 \|x_i^* - uz\|_\varphi \\ \nonumber &\leq \| \mathord{\text{\rm E}}^\varphi_{A_n' \cap M}(x_i^* \, v_n \, x_j) z^\perp \|_\varphi + \|x_j - pu^*\|_\varphi + \kappa_2 \|x_i^* - uz\|_\varphi \\ \nonumber &\leq \| \mathord{\text{\rm E}}^\varphi_{A_n' \cap M}(x_i^* \, v_n \, x_j) e_n^\perp \|_\varphi + \| e_n - z\|_\varphi + \|x_j - pu^*\|_\varphi + \kappa_2 \|x_i^* - uz\|_\varphi. \end{align} Since $z \in A \subset A_{n - 1}'\cap N$, for every $n \in \mathbf{N}$, using (P1) we have \begin{align*} \|e_n - f_n\|_\varphi &\leq 7 \|e_n - \mathord{\text{\rm E}}^\varphi_{A_{n - 1}' \cap N}(e_n)\|_\varphi \\ &= 7 \|e_n - z - \mathord{\text{\rm E}}^\varphi_{A_{n - 1}' \cap N}(e_n - z)\|_\varphi \\ & \leq 7 \|e_n - z\|_\varphi. \end{align*} Choose now $n_0 \in \mathbf{N}$ large enough so that $2^{-n_0} \leq \|q\|_\varphi/(100 \kappa_1)$. By density, we may then choose $i, j \geq 0$ and $n \geq \max(i, j , n_0)$ so that $\| e_n - z\|_\varphi + \|x_j - pu^*\|_\varphi + \kappa_2 \|x_i^* - uz\|_\varphi \leq \|q\|_\varphi/(100 \kappa_1)$.
Using \eqref{eq-inequality-2} and (P2), we then have \begin{align}\label{eq-inequality-3} \| \mathord{\text{\rm E}}^\varphi_{A_n' \cap M}(uz v_npu^*) z^\perp \|_\varphi &\leq \| \mathord{\text{\rm E}}^\varphi_{A_n' \cap M}(x_i^* \, v_n \, x_j) e_n^\perp \|_\varphi + \|q\|_\varphi/(100 \kappa_1) \\ \nonumber &\leq \| \mathord{\text{\rm E}}^\varphi_{A_n' \cap M}(x_i^* \, v_n \, x_j) f_n^\perp \|_\varphi + \|f_n - e_n\|_\varphi + \|q\|_\varphi/(100 \kappa_1) \\ \nonumber &\leq \| \mathord{\text{\rm E}}^\varphi_{A_n' \cap M}(x_i^* \, v_n \, x_j) f_n^\perp \|_\varphi + 7\| e_n - z\|_\varphi + \|q\|_\varphi/(100 \kappa_1) \\ \nonumber &\leq 2^{-n}+ 7 \|q\|_\varphi/(100 \kappa_1) + \|q\|_\varphi/(100 \kappa_1) \\ \nonumber &\leq 9 \|q\|_\varphi/(100\kappa_1) \\ \nonumber & \leq \|q\|_\varphi/(2 \kappa_1). \end{align} Combining \eqref{eq-inequality-1} and \eqref{eq-inequality-3}, we obtain \begin{equation}\label{eq-conclusion-1} \|q f_n\|_\varphi \leq \|q\|_\varphi/2. \end{equation} On the other hand, we have \begin{align}\label{eq-conclusion-2} \|q f_n - q\|_\varphi = \|q(f_n - z)\|_\varphi &\leq \|f_n - z\|_\varphi \\ \nonumber &\leq \|f_n - e_n\|_\varphi + \|e_n - z\|_\varphi \\ \nonumber & \leq 8 \|e_n - z\|_\varphi \\ \nonumber & \leq 8 \|q\|_\varphi/(100\kappa_1) \\ \nonumber &\leq \|q\|_\varphi/4. \end{align} Combining \eqref{eq-conclusion-1} and \eqref{eq-conclusion-2}, we finally obtain \begin{equation*} \|q\|_\varphi/2 \geq \|q f_n\|_\varphi \geq \|q\|_\varphi - \|qf_n - q\|_\varphi \geq 3 \|q\|_\varphi/4. \end{equation*} Since $\|q\|_\varphi \neq 0$, this is a contradiction. Therefore $A = \mathcal N_M(A)^{\prime\prime}$ and hence $A \subset M$ is singular. \end{proof} \begin{cor} Every type ${\rm III_1}$ factor $M$ with separable predual satisfying {\rm CBP} has a singular maximal abelian $\ast$-subalgebra $A \subset M$ with normal expectation.
\end{cor} \begin{proof} By \cite[Theorem 3.1]{Ha85}, there exists a faithful state $\varphi \in M_\ast$ such that $(M_\varphi)' \cap M = \mathbf{C} 1$. By Theorem \ref{thm-singular}, there exists an abelian $\ast$-subalgebra $A \subset M_\varphi$ that is maximal abelian and singular in $M$. Moreover, $A \subset M$ is the range of a normal faithful conditional expectation. \end{proof} \section{Stability of CBP under finite index extensions/restrictions}\label{section:CBP-stability} In this section we investigate the stability properties of CBP for inclusions of type ${\rm III_1}$ factors $N \subset M$ with normal faithful conditional expectation $\mathord{\text{\rm E}}_N : M \to N$. Fix a normal faithful state $\varphi$ on $M$ such that $\varphi = \varphi \circ \mathord{\text{\rm E}}_N$. The bicentralizer algebras $\mathord{\text{\rm B}}(N, \varphi)$ and $\mathord{\text{\rm B}}(M, \varphi)$ are not related in any obvious way and so it is hopeless to try to prove in general that CBP passes to subalgebras or overalgebras. In fact, any type ${\rm III_1}$ factor $N$ embeds in an irreducible way and with NCE into a type ${\rm III_1}$ factor $M$ that satisfies CBP. Indeed, choose any normal faithful state $\psi_N$ on $N$ and put $(M, \psi_M) = (N, \psi_N) \ast (\mathord{\text{\rm L}}(\mathbf{Z}_2), \tau_{\mathbf{Z}_2})$. It follows from \cite[Theorem A.1]{HU15} that $M$ is a type ${\rm III_1}$ factor that satisfies CBP, $N \subset M$ is with NCE and $N' \cap M = \mathbf{C} 1$. However, when the inclusion $N\subset M$ has finite index, we show here that $N$ satisfies CBP if and only if $M$ satisfies CBP. \begin{thm}\label{thm-CBP-finite-index} Let $N \subset M$ be any inclusion of type ${\rm III_1}$ factors with separable predual such that $N$ is the range of a normal faithful conditional expectation and $N$ has finite index in $M$. Then $N$ satisfies {\rm CBP} if and only if $M$ satisfies {\rm CBP}.
\end{thm} \begin{proof} {\bf If $N$ satisfies CBP then $M$ satisfies CBP.} Denote by $(M, \mathord{\text{\rm L}}^2(M), J, \mathord{\text{\rm L}}^2(M)_+)$ the standard form of $M$. Fix a normal faithful conditional expectation $\mathord{\text{\rm E}}_N : M \to N$ with finite index. Denote by $\langle M, N \rangle := (JNJ)' \cap \mathbf B(\mathord{\text{\rm L}}^2(M))$ the Jones basic construction and by $e_N : \mathord{\text{\rm L}}^2(M) \to \mathord{\text{\rm L}}^2(N) : x \xi_\varphi \mapsto \mathord{\text{\rm E}}_N(x)\xi_\varphi$ the Jones projection. We denote by $\Phi : \langle M, N\rangle \to M$ the canonical normal faithful conditional expectation (see \cite{Ko85}). Since $N$ satisfies CBP, by \cite[Theorem 3.1]{Ha85}, there exists a faithful state $\varphi \in M_\ast$ such that $\varphi = \varphi \circ \mathord{\text{\rm E}}_N$ and $(N_\varphi)' \cap N = \mathbf{C} 1$. Put $P = (N_\varphi)' \cap M$ and observe that $P \subset M$ is globally invariant under $\sigma^\varphi$. Let $e : \mathord{\text{\rm L}}^2(P) \to \mathbf{C} \xi_\varphi$ be the rank-one orthogonal projection. Since $(N_\varphi)' \cap N = \mathbf{C} 1$, we have $\mathord{\text{\rm E}}_N(x) = \varphi(x) 1$ for every $x \in P$. Let $\langle P, e \rangle = (P \cup \{e\})^{\prime\prime}$ and observe that $\langle P, e \rangle = \mathbf B(\mathord{\text{\rm L}}^2(P))$. Denote by $\psi_{\langle P, e \rangle}$ (resp.\ $\psi_{\langle M, N \rangle}$) the natural normal faithful semifinite weight defined on $\langle P, e \rangle$ (resp.\ $\langle M, N\rangle$). Then the map $$V : \mathord{\text{\rm L}}^2(\langle P, e\rangle, \psi_{\langle P, e\rangle}) \to \mathord{\text{\rm L}}^2(\langle M, N\rangle, \psi_{\langle M, N\rangle}) : \Lambda_{\psi_{\langle P, e\rangle}}(xey) \mapsto \Lambda_{\psi_{\langle M, N\rangle}}(xe_Ny), \quad x, y \in P$$ is an isometry.
Denote by $\mathcal P \subset \langle M, N\rangle$ the weak closure of the (possibly) nonunital $\ast$-subalgebra $\mathord{\text{\rm span}} \{x e_N y : x, y \in P\}$. Observe that $\sigma_t^{\psi_{\langle M, N\rangle}}(x \, e_N \, y) = \sigma_t^\varphi(x) \, e_N \, \sigma_t^\varphi(y)$ for all $t \in \mathbf{R}$ and all $x, y \in P$. It follows that $\mathord{\text{\rm span}} \{x e_N y : x, y \in P\}$ is globally invariant under $\sigma^{\psi_{\langle M, N\rangle}}$. Thus, we have $\sigma_t^{\psi_{\langle M, N\rangle}}(1_{\mathcal P}) = 1_{\mathcal P}$ for every $t \in \mathbf{R}$ and $\mathcal P \subset 1_{\mathcal P} \langle M, N\rangle 1_{\mathcal P}$ is globally invariant under $\sigma^{\psi_{\langle M, N\rangle}}$. Observe that $e_N \leq 1_{\mathcal P}$. Since $\psi_{\langle M, N\rangle}(1_{\mathcal P} \cdot 1_{\mathcal P})$ is semifinite on $\mathcal P$, there exists a $\psi_{\langle M, N\rangle}(1_{\mathcal P} \cdot 1_{\mathcal P})$-preserving conditional expectation $\mathord{\text{\rm E}}_{\mathcal P} : 1_{\mathcal P}\langle M, N\rangle 1_{\mathcal P} \to \mathcal P$ (see \cite[Theorem IX.4.2]{Ta03}). Then the projection $VV^*$ is nothing but the orthogonal projection $\mathord{\text{\rm L}}^2(\langle M, N\rangle, \psi_{\langle M, N\rangle}) \to \mathord{\text{\rm L}}^2(\mathcal P, \psi_{\langle M, N\rangle}(1_{\mathcal P} \cdot 1_{\mathcal P}))$. Write $\mathord{\text{\rm L}}^2(\mathcal P) = \mathord{\text{\rm L}}^2(\mathcal P, \psi_{\langle M, N\rangle}(1_{\mathcal P} \cdot 1_{\mathcal P}))$. Thus, the map $\Theta : \langle P, e\rangle \to \mathbf B(\mathord{\text{\rm L}}^2(\mathcal P))$ defined by $\Theta(a) V = V a$ for $a \in \langle P, e \rangle$ is a unital normal $\ast$-embedding that satisfies $\Theta(xey) = xe_N y$ for all $x, y \in P$. In particular, $\Theta(\langle P, e \rangle) \subset \mathcal P$ and $\Theta : \langle P, e\rangle \to \mathcal P$ is a $P$-$P$-bimodular normal faithful unital completely positive map. 
Regard $\langle P, e\rangle = \mathbf B(\mathord{\text{\rm L}}^2(P))$ and define $\Psi : \mathbf B(\mathord{\text{\rm L}}^2(P)) \to P$ by the composition $\Psi = \mathord{\text{\rm E}}_{P} \circ \Phi \circ \Theta$. Then $\Psi$ is a $P$-$P$-bimodular normal faithful completely positive map. Since $\Psi(1) \in \mathcal Z(P)_+$, we may choose a nonzero element $c \in \mathcal Z(P)_+$ so that $z := c^{1/2}\Psi(1)c^{1/2}$ is a nonzero projection in $\mathcal Z(P)$. Then the map $\Psi_z : \mathbf B(\mathord{\text{\rm L}}^2(Pz)) \to Pz$ defined by $\Psi_z := c^{1/2} \Psi (z \, \cdot \, z) c^{1/2}$ is a $Pz$-$Pz$-bimodular normal unital completely positive map and hence a normal conditional expectation. Therefore $Pz$ is a discrete von Neumann algebra. By \cite[Theorem 3.5]{HI15}, the bicentralizer algebra $\mathord{\text{\rm B}}(M, \varphi)$ satisfies the following dichotomy: either $\mathord{\text{\rm B}}(M, \varphi) = \mathbf{C}1$ or $\mathord{\text{\rm B}}(M, \varphi)$ is a type ${\rm III_1}$ factor. Since the inclusions $\mathord{\text{\rm B}}(M, \varphi) \subset (M_\varphi)' \cap M \subset (N_\varphi)' \cap M= P$ are all globally invariant under $\sigma^\varphi$ and since $P$ has a minimal projection, it follows that $\mathord{\text{\rm B}}(M, \varphi) = \mathbf{C} 1$. Therefore, $M$ satisfies CBP. {\bf If $M$ satisfies CBP then $N$ satisfies CBP.} Since the inclusion $N \subset M$ has finite index, we may choose a normal faithful conditional expectation $\mathord{\text{\rm E}}_N : M \to N$ for which there exists $\kappa > 0$ such that $\mathord{\text{\rm E}}_N(x) \geq \kappa \, x$ for every $x \in M_+$ (see \cite{PP84, Po95}). Let $\varphi$ be a normal faithful state on $M$ such that $\varphi = \varphi \circ \mathord{\text{\rm E}}_N$. 
Fix a nonprincipal ultrafilter $\omega \in \beta(\mathbf{N}) \setminus \mathbf{N}$ and denote by $M^\omega$ (resp.\ $N^\omega$) the Ocneanu ultraproduct of $M$ (resp.\ $N$) with respect to $\omega$ (see \cite{Oc85, AH12}). Following \cite[Section 1.3]{Po95}, define the normal faithful conditional expectation $\mathord{\text{\rm E}}_{N^\omega} : M^\omega \to N^\omega$ by the formula $\mathord{\text{\rm E}}_{N^\omega}((x_n)^\omega) = (\mathord{\text{\rm E}}_N(x_n))^\omega$ for every $(x_n)^\omega \in M^\omega$. Then we have $\mathord{\text{\rm E}}_{N^\omega}(x) \geq \kappa \, x$ for every $x \in (M^\omega)_+$. Put $\mathcal N = (N^\omega)_{\varphi^\omega}$ and $\mathcal M = (M^\omega)_{\varphi^\omega}$ and observe that both $\mathcal N$ and $\mathcal M$ are type ${\rm II_1}$ factors by \cite[Proposition 4.24]{AH12}. Since $\varphi^\omega = \varphi^\omega \circ \mathord{\text{\rm E}}_{N^\omega}$ and since $\mathord{\text{\rm E}}_{N^\omega}(x) \in \mathcal N$ for every $x \in \mathcal M$, we have $\mathord{\text{\rm E}}_{\mathcal N}(x) = \mathord{\text{\rm E}}_{N^\omega}(x) \geq \kappa \, x$ for every $x \in \mathcal M_+$, where $\mathord{\text{\rm E}}_{\mathcal N} : \mathcal M \to \mathcal N$ denotes the unique trace preserving conditional expectation. Thus, the inclusion $\mathcal N \subset \mathcal M$ has finite index by \cite[Theorem 2.2]{PP84}. We first prove that $\mathcal M' \cap M^\omega = \mathbf{C} 1$. Since $M$ satisfies CBP, by \cite[Theorem 3.1]{Ha85}, there exists a normal faithful state $\psi$ on $M$ such that $(M_\psi)' \cap M = \mathbf{C} 1$. Then \cite[Lemma 2.3]{Po81} implies that $((M_\psi)^\omega)' \cap M^\omega = \mathbf{C} 1$. Since $(M_\psi)^\omega \subset (M^\omega)_{\psi^\omega}$, we have $((M^\omega)_{\psi^\omega})' \cap M^\omega = \mathbf{C} 1$.
By the Connes--St\o rmer transitivity theorem \cite{CS76} (see also \cite[Theorem 4.20]{AH12}), there exists $u \in \mathcal U(M^\omega)$ such that $\psi^\omega = u \varphi^\omega u^*$. Thus, we obtain $$\mathcal M' \cap M^\omega = ((M^\omega)_{\varphi^\omega})' \cap M^\omega = u^* (((M^\omega)_{\psi^\omega})' \cap M^\omega) u = \mathbf{C} 1.$$ We next prove that $\mathcal N' \cap M^\omega$ has a nonzero minimal projection following the lines of \cite[Lemma 3.3]{Po09}. Since $\mathcal N \subset \mathcal M$ is a finite index inclusion of type ${\rm II_1}$ factors, we may choose a projection $e \in \mathcal M$ such that $\mathord{\text{\rm E}}_{\mathcal N}(e) = [\mathcal M : \mathcal N]^{-1} 1$. Put $\mathcal P = \{e\}' \cap \mathcal N$ so that $\mathcal P \subset \mathcal N$ is a finite index inclusion of type ${\rm II_1}$ factors and $\mathcal M = \langle \mathcal N, e\rangle$ (see \cite[Corollary 1.8]{PP84}). By \cite[Proposition 1.3]{PP84}, choose a finite basis $(X_j)_{j \in J}$ of $\mathcal N$ over $\mathcal P$. Recall that we have $\sum_{j \in J} X_j e X_j^* = 1$, $\sum_{j \in J} X_j X_j^* = [\mathcal M : \mathcal N]$ and for every $j \in J$, $p_j := \mathord{\text{\rm E}}_{\mathcal P}(X_j^*X_j)$ is a projection in $\mathcal P$. Since $\sum_{j \in J} X_j e \, x \, e X_j^* \in \mathcal M' \cap M^\omega = \mathbf{C} 1$ for every $x \in \mathcal N' \cap M^\omega$, we may define the state $\Psi \in (\mathcal N' \cap M^\omega)_\ast$ by the formula $\sum_{j \in J} X_j e \, x \, e X_j^* = \Psi(x) 1$. Moreover, we have $exe = \Psi(x) e$ for every $x \in \mathcal N' \cap M^\omega$. Following \cite[Lemma 3.3]{Po09}, put $b = [\mathcal M : \mathcal N] \cdot \mathord{\text{\rm E}}_{\mathcal N' \cap \mathcal M}(e) = [\mathcal M : \mathcal N] \cdot \mathord{\text{\rm E}}^{\varphi^\omega}_{\mathcal N' \cap M^\omega}(e) \in (\mathcal N' \cap \mathcal M)_+$.
For every $x \in \mathcal N' \cap M^\omega$, we have \begin{align*} \varphi^\omega \left(\sum_{j \in J} X_j e b^{-1/2} \, x \, b^{-1/2} e X_j^*\right) &= \sum_{j \in J} \varphi^\omega ( x \, b^{-1/2} e X_j^* X_j e b^{-1/2}) = \sum_{j \in J} \varphi^\omega ( x \, b^{-1/2} p_j e b^{-1/2}) \\ &= \sum_{j \in J} \varphi^\omega \circ \mathord{\text{\rm E}}^{\varphi^\omega}_{\mathcal P' \cap M^\omega}( x \, b^{-1/2} p_j e b^{-1/2}) \\ &= [\mathcal M : \mathcal N] \cdot \varphi^\omega ( x \, b^{-1/2} e b^{-1/2}) = \varphi^\omega(x). \end{align*} Since $\sum_{j \in J} X_j e b^{-1/2} \, x \, b^{-1/2} e X_j^* \in \mathcal M' \cap M^\omega = \mathbf{C} 1$ for every $x \in \mathcal N' \cap M^\omega$, we have $$\forall x \in \mathcal N' \cap M^\omega, \quad \sum_{j \in J} X_j e b^{-1/2} \, x \, b^{-1/2} e X_j^* = \varphi^\omega(x) 1.$$ Moreover, $f = b^{-1/2} e b^{-1/2} \in \mathcal M$ is a projection such that $f x f = \varphi^\omega(x) f$ for every $x \in \mathcal N' \cap M^\omega$. This implies that the von Neumann algebra $\langle \mathcal N' \cap M^\omega , f\rangle = ((\mathcal N' \cap M^\omega) \cup \{f\})^{\prime\prime}$ has a minimal projection, namely $f$. Since $f \in \mathcal M$, the von Neumann subalgebra $\langle \mathcal N' \cap M^\omega , f\rangle \subset M^\omega$ is globally invariant under $\sigma^{\varphi^\omega}$. Since the inclusions $\mathcal N' \cap M^\omega \subset \langle \mathcal N' \cap M^\omega , f\rangle \subset M^\omega$ are all globally invariant under $\sigma^{\varphi^\omega}$, we obtain that $\mathcal N' \cap M^\omega$ has a minimal projection as well. By \cite[Proposition 3.3]{HI15}, we have $\mathord{\text{\rm B}}(N, \varphi) = ((N^\omega)_{\varphi^\omega})' \cap N = \mathcal N' \cap N$.
Since the inclusion $\mathord{\text{\rm B}}(N, \varphi) = \mathcal N' \cap N \subset \mathcal N' \cap M^\omega$ is globally invariant under $\sigma^{\varphi^\omega}$ and since $\mathcal N' \cap M^\omega$ has a minimal projection, \cite[Theorem 3.5]{HI15} implies that $\mathord{\text{\rm B}}(N, \varphi) = \mathbf{C} 1$. Therefore, $N$ satisfies CBP. \end{proof} \section{Open problems} Formulated some forty years ago and still open, Connes's Bicentralizer Problem remains one of the most famous unsolved problems in von Neumann algebras. It is certainly the central, most important open problem in the theory of type ${\rm III_1}$ factors. The fundamental role it plays in unraveling the structure of type ${\rm III_1}$ factors comes from its equivalent formulation as the existence of a normal faithful state with large centralizer (due to \cite{Ha85}). In turn, this latter form of CBP (often accompanied by the Connes--St\o rmer theorem) allows adapting arguments from II$_1$ factors to the ``III$_1$ factor world''. For instance, it has been a key feature in developing a type ${\rm III_1}$ version of the second named author's deformation-rigidity theory, which was initially developed in the II$_1$ factor framework (see e.g.\ \cite{HI14, HI15}). It is well known that Connes and Haagerup strongly believed that CBP had an affirmative answer. But since all efforts to prove it have failed, during the last decade there have been attempts to produce counterexamples as well; in fact, some of the papers involving the first named author have been motivated by such attempts (see e.g.\ \cite{Ho08}). However, at this moment, both authors of this paper believe CBP has a positive answer. The purpose of the previous section was to offer some supporting evidence in this respect, with its partial results bound to become redundant if CBP is proven in its full generality. In this section we formulate several related problems, including some stronger versions of the CBP conjecture.
Let us first recall that Connes' Bicentralizer Property for a type ${\rm III_1}$ factor $M$ with separable predual was shown in \cite{Ha85} to be equivalent to the weak relative Dixmier property of the inclusion $M_\Phi \subset M$, where $\Phi$ is any normal faithful dominant weight on $M$ and $M_\Phi$ denotes the fixed point algebra of its automorphism group. The terminology {\it weak relative Dixmier property} for an inclusion of (arbitrary) von Neumann algebras $N\subset M$ is in the sense of \cite{Po98}, and it means that the convex set $\mathcal K_N(x)=\overline{\text{\rm co}}^w \{uxu^* \mid u\in \mathcal U(N)\}$ has non-empty intersection with $N'\cap M$, for any $x\in M$. Note that if $M=\mathbf B(H)$ then this condition for $N\subset M$ is equivalent to $N$ being amenable (cf.\ \cite{Sc63}). Note also that in the case $N\subset M$ is an irreducible inclusion of factors (i.e., if $N'\cap M=\mathbf C1$), for this condition to hold true it is sufficient to have an amenable (equivalently approximately finite dimensional, by \cite{Co75}) von Neumann subalgebra $B\subset N$ such that $B'\cap M\subset B$ (thus $B'\cap M = \mathcal Z(B)$). Indeed, in that case \cite{Sc63} gives $\mathcal K_B(x)\cap B'\cap B\neq \emptyset$ for all $x\in M$, and applying the Dixmier averaging theorem \cite{Di57} in the factor $N$ to an element of this intersection, one gets $\mathcal K_N(x)\cap \mathbf C1\neq \emptyset$ (see \cite[Remark 3.9]{Ha85}). In particular, if $N\subset M$ is an irreducible inclusion of factors that satisfies {\it Kadison's property}, i.e., $N$ contains an abelian von Neumann subalgebra that is a MASA in $M$, then $N\subset M$ has the weak relative Dixmier property. The existence of such a MASA in a subfactor $N\subset M$ clearly implies irreducibility, and one of the well known problems in \cite{Ka67} asks whether the converse is true as well, i.e., whether Kadison's property actually characterizes irreducibility.
It is easy to see that if $N\subset M$ is an inclusion of von Neumann algebras with NCE and $N$ is semifinite, then $N\subset M$ satisfies the weak relative Dixmier property (see \cite{Po81, Po98}). It has in fact been shown in \cite{Po81} that if in addition $N'\cap M\subset N$, then $N$ contains a MASA of $M$ with NCE (so such $N\subset M$ do satisfy Kadison's property), and that if $N\subset M$ is irreducible then $N$ contains a hyperfinite subfactor $R \subset N$ with NCE such that $R'\cap M=\mathbf C 1$. \begin{prob} Let $N \subset M$ be an irreducible inclusion with NCE of type ${\rm III_1}$ factors with separable predual such that $N$ satisfies CBP. Does $N$ contain an amenable (or even abelian) von Neumann subalgebra $B\subset N$ with NCE and such that $B'\cap M\subset B$? Is this at least true if $N$ is the hyperfinite type ${\rm III_1}$ factor? \end{prob} Note that by a result in \cite{GP96}, there do exist examples of irreducible inclusions of factors $N\subset M$ with $N$ of type II$_1$, $M$ of type II$_\infty$ such that $N$ contains no amenable von Neumann subalgebra $B$ with the property that $B'\cap M\subset B$. But the examples of irreducible inclusions in \cite{GP96} that do not satisfy Kadison's property are not with NCE. Thus, the problem of whether Kadison's criterion characterizes irreducibility for an inclusion of factors seems quite subtle, in its full generality. In turn, the weak relative Dixmier property may still be true for arbitrary irreducible inclusions. \begin{prob} Let $N \subset M$ be an arbitrary irreducible inclusion of factors with separable predual. Does $N\subset M$ have the weak relative Dixmier property? Is this at least true when the NCE condition is satisfied? \end{prob} As we mentioned before, if this is true in the case $M$ is type ${\rm III_1}$ and $N$ is its type ${\rm II_\infty}$ core then CBP holds true.
Note that by \cite{Po81}, if $N$ is any non-Gamma type ${\rm II_1}$ factor (e.g., if $N$ is the free group factor $\mathord{\text{\rm L}}(\mathbf F_n)$, cf.\ \cite{MvN43}) and $M$ is the ultrapower factor $N^\omega$, for some nonprincipal ultrafilter on $\mathbf N$, then $N\subset M$ is irreducible, yet $N$ contains no MASAs of $M$. But in these examples the larger factor is non-separable. However, such inclusions $N\subset M$ do satisfy the weak relative Dixmier property. \begin{thebibliography}{MvN43} \bibitem[AH12]{AH12} {\sc H. Ando, U. Haagerup}, {\it Ultraproducts of von Neumann algebras.} J. Funct. Anal. {\bf 266} (2014), 6842--6913. \bibitem[Co72]{Co72} {\sc A. Connes}, {\it Une classification des facteurs de type ${\rm III}$.} Ann. Sci. \'{E}cole Norm. Sup. {\bf 6} (1973), 133--252. \bibitem[Co75]{Co75} {\sc A. Connes}, {\it Classification of injective factors.} Ann. Math. {\bf 104} (1976), 73--115. \bibitem[Co80]{Co80} {\sc A. Connes}, {\it Classification des facteurs.} In ``Operator algebras and applications, Part 2 (Kingston, 1980)'' Proc. Sympos. Pure Math. {\bf 38} Amer. Math. Soc., Providence, 1982, pp.\ 43--109. \bibitem[Co85]{Co85} {\sc A. Connes}, {\it Factors of type ${\rm III_1}$, property $L'_\lambda$ and closure of inner automorphisms.} J. Operator Theory {\bf 14} (1985), 189--211. \bibitem[CS76]{CS76} {\sc A. Connes, E. St\o rmer}, {\it Homogeneity of the state space of factors of type ${\rm III_1}$.} J. Funct. Anal. {\bf 28} (1978), 187--196. \bibitem[Di57]{Di57} {\sc J. Dixmier}, {\it Les alg\`ebres d'op\'erateurs dans l'espace hilbertien.} Gauthier-Villars, Paris 1957, 1969. \bibitem[GP96]{GP96} {\sc L. Ge, S. Popa}, {\it On some decomposition properties for factors of type ${\rm II_1}$.} Duke Math. J. {\bf 94} (1998), 79--101. \bibitem[Ha85]{Ha85} {\sc U. Haagerup}, {\it Connes' bicentralizer problem and uniqueness of the injective factor of type ${\rm III_1}$.} Acta Math. {\bf 69} (1986), 95--148. \bibitem[Ho08]{Ho08} {\sc C.
Houdayer}, {\it Free Araki-Woods factors and Connes' bicentralizer problem.} Proc. Amer. Math. Soc. {\bf 137} (2009), 3749--3755. \bibitem[HI14]{HI14} {\sc C. Houdayer, Y. Isono}, {\it Free independence in ultraproduct von Neumann algebras and applications.} J. London Math. Soc. {\bf 92} (2015), 163--177. \bibitem[HI15]{HI15} {\sc C. Houdayer, Y. Isono}, {\it Unique prime factorization and bicentralizer problem for a class of type ${\rm III}$ factors.} Adv. Math. {\bf 305} (2017), 402--455. \bibitem[HU15]{HU15} {\sc C. Houdayer, Y. Ueda}, {\it Asymptotic structure of free product von Neumann algebras.} Math. Proc. Cambridge Philos. Soc. {\bf 161} (2016), 489--516. \bibitem[HV12]{HV12} {\sc C. Houdayer, S. Vaes}, {\it Type ${\rm III}$ factors with unique Cartan decomposition.} J. Math. Pures Appl. {\bf 100} (2013), 564--590. \bibitem[Ka67]{Ka67} {\sc R.V. Kadison}, {\it Problems on von Neumann algebras.} Baton Rouge Conference, 1967 (unpublished). \bibitem[Ko85]{Ko85} {\sc H. Kosaki}, {\it Extension of Jones' theory on index to arbitrary factors.} J. Funct. Anal. {\bf 66} (1986), 123--140. \bibitem[MvN43]{MvN43} {\sc F. Murray, J. von Neumann}, {\it Rings of operators ${\rm IV}$.} Ann. Math. {\bf 44} (1943), 716--808. \bibitem[Oc85]{Oc85} {\sc A. Ocneanu}, {\it Actions of discrete amenable groups on von Neumann algebras.} Lecture Notes in Mathematics, {\bf 1138}. Springer-Verlag, Berlin, 1985. iv+115 pp. \bibitem[PP84]{PP84} {\sc M. Pimsner, S. Popa}, {\it Entropy and index for subfactors.} Ann. Sci. \'Ecole Norm. Sup. {\bf 19} (1986), 57--106. \bibitem[Po81]{Po81} {\sc S. Popa}, {\it On a problem of R.V.\ Kadison on maximal abelian $\ast$-subalgebras in factors.} Invent. Math. {\bf 65} (1981), 269--281. \bibitem[Po82]{Po82} {\sc S. Popa}, {\it Singular maximal abelian $\ast$-subalgebras in continuous von Neumann algebras.} J. Funct. Anal. {\bf 50} (1983), 151--166. \bibitem[Po95]{Po95} {\sc S. 
Popa}, {\it Classification of subfactors and their endomorphisms.} CBMS Regional Conference Series in Mathematics, {\bf 86}. Published for the Conference Board of the Mathematical Sciences, Washington, DC; by the American Mathematical Society, Providence, RI, 1995. x+110 pp. \bibitem[Po98]{Po98} {\sc S. Popa}, {\it On the relative Dixmier property for inclusions of $\mathord{\text{\rm C}}^*$-algebras.} J. Funct. Anal. {\bf 171} (2000), 139--154. \bibitem[Po03]{Po03} {\sc S. Popa}, {\it Strong rigidity of $\rm II_1$ factors arising from malleable actions of w-rigid groups ${\rm I}$.} Invent. Math. {\bf 165} (2006), 369--408. \bibitem[Po09]{Po09} {\sc S. Popa}, {\it On the classification of inductive limits of ${\rm II_1}$ factors with spectral gap.} Trans. Amer. Math. Soc. {\bf 364} (2012), 2987--3000. \bibitem[Po16]{Po16} {\sc S. Popa}, {\it Constructing MASAs with prescribed properties.} To appear in Kyoto J. Math. {\tt arXiv:1610.08945} \bibitem[Sc63]{Sc63} {\sc J. Schwartz}, {\it Two finite, non-hyperfinite, non-isomorphic factors.} Comm. Pure Appl. Math. {\bf 16} (1963), 19--26. \bibitem[Ta03]{Ta03} {\sc M. Takesaki}, {\it Theory of operator algebras. ${\rm II}$.} Encyclopaedia of Mathematical Sciences, {\bf 125}. Operator Algebras and Non-commutative Geometry, 6. Springer-Verlag, Berlin, 2003. xxii+518 pp. \end{thebibliography} \end{document}
\begin{document} \title{The quaternionic Gauss-Lucas Theorem} \begin{abstract} The classic Gauss-Lucas Theorem for complex polynomials of degree $d\ge2$ has a natural reformulation over quaternions, obtained via rotation around the real axis. We prove that such a reformulation is true only for $d=2$. We present a new quaternionic version of the Gauss-Lucas Theorem valid for all $d\geq2$, together with some consequences. \end{abstract} \section{Introduction} Let $p$ be a complex polynomial of degree $d\geq2$ and let $p'$ be its derivative. The Gauss-Lucas Theorem asserts that the zero set of $p'$ is contained in the convex hull $\mathcal K(p)$ of the zero set of $p$. The classic proof uses the logarithmic derivative of $p$ and strongly depends on the commutativity of ${\mathbb{C}}$. This note deals with a quaternionic version of this classic result. We refer the reader to \cite{GeStoSt2013} for the notions and properties concerning the algebra ${\mathbb{H}}$ of quaternions that we need here. The ring ${\mathbb{H}}[X]$ of quaternionic polynomials is defined by fixing the position of the coefficients w.r.t.\ the indeterminate $X$ (e.g.\ on the right) and by imposing commutativity of $X$ with the coefficients when two polynomials are multiplied together (see e.g.\ \cite[\S 16]{Lam}). Given two polynomials $P,Q\in{\mathbb{H}}[X]$, let $P\cdot Q$ denote the product obtained in this way. If $P$ has real coefficients, then $(P\cdot Q)(x)=P(x)Q(x)$. In general, a direct computation (see \cite[\S 16.3]{Lam}) shows that if $P(x)\neq0$, then \begin{equation}\label{product} (P\cdot Q)(x)=P(x)Q(P(x)^{-1}xP(x)), \end{equation} while $(P\cdot Q)(x)=0$ if $P(x)=0$. In this setting, a \emph{(left) root} of a polynomial $P(X)=\sum_{h=0}^dX^h a_h$ is an element $x\in{\mathbb{H}}$ such that $P(x)=\textstyle\sum_{h=0}^dx^h a_h=0$.
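For instance, formula \eqref{product} shows that a root of $Q$ need not be a root of $P\cdot Q$; the following direct computation is included for illustration. Take $P(X)=X-i$ and $Q(X)=X-j$. At $x=j$ we have $P(j)=j-i\neq0$ and $P(j)^{-1}\,j\,P(j)=(j-i)^{-1}j(j-i)=-i$, whence
\[
(P\cdot Q)(j)=(j-i)\,Q(-i)=(j-i)(-i-j)=2k\neq0,
\]
even though $Q(j)=0$.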
Given $P(X)=\sum_{k=0}^dX^ka_k\in {\mathbb{H}}[X]$, consider the polynomial $P^c(X)=\sum_{k=0}^dX^k\bar a_k$ and the \emph{normal polynomial} $N(P)=P \cdot P^c=P^c\cdot P$. Since $N(P)$ has real coefficients, it can be identified with a polynomial in ${\mathbb{R}}[X]\subset{\mathbb{C}}[X]$. We recall that a subset $A$ of ${\mathbb{H}}$ is called \emph{circular} if, for each $x\in A$, $A$ contains the whole set (a 2-sphere if $x\notin{\mathbb{R}}$, a point if $x\in{\mathbb{R}}$) \begin{equation}\label{sx} {\mathbb{S}}_x=\{pxp^{-1}\in{\mathbb{H}}\;|\;p\in{\mathbb{H}}^*\}, \end{equation} where ${\mathbb{H}}^*:={\mathbb{H}}\setminus\{0\}$. In particular, for any imaginary unit $I\in{\mathbb{H}}$, ${\mathbb{S}}_I={\mathbb{S}}$ is the 2-sphere of all imaginary units in ${\mathbb{H}}$. For every subset $B$ of ${\mathbb{H}}$ we define its \emph{circularization} as the set $\bigcup_{x\in B}{\mathbb{S}}_x$. It is well-known (\cite[\S3.3]{GeStoSt2013}) that the zero set $V(N(P))\subset{\mathbb{H}}$ of the normal polynomial is the circularization of the zero set $V(P)$, which consists of isolated points or isolated 2-spheres of the form \eqref{sx} if $P\neq0$. Let the degree $d$ of $P$ be at least 2 and let $P'(X)=\sum_{k=1}^dX^{k-1}ka_k$ be the derivative of $P$. It is known (see e.g.\ \cite{GGSarxiv}) that the Gauss-Lucas Theorem does not hold directly for quaternionic polynomials. For example, the polynomial $P(X)=(X-i)\cdot (X-j)=X^2-X(i+j)+k$ has zero set $V(P)=\{i\}$, while $P'$ vanishes at $x=(i+j)/2$. Since the zero set $V(P)$ of $P$ is contained in the set $V(N(P))$, a natural reformulation in ${\mathbb{H}}[X]$ of the classic Gauss-Lucas Theorem is the following: $V(N(P'))\subset\mathcal K(N(P))$ or equivalently \begin{equation}\label{eq:g-l} V(P')\subset\mathcal K(N(P)), \end{equation} where $\mathcal K(N(P))$ denotes the convex hull of $V(N(P))$ in ${\mathbb{H}}$.
This set is equal to the circularization of the convex hull of the zero set of $N(P)$ viewed as a polynomial in ${\mathbb{C}}[X]\subset{\mathbb{H}}[X]$. Recently two proofs of the above inclusion \eqref{eq:g-l} were presented in \cite{VlacciGL,GGSarxiv}. Our next two propositions prove that inclusion \eqref{eq:g-l} is correct in its full generality only when $d=2$. \section{Gauss-Lucas polynomials} \begin{definition} Given a polynomial $P\in{\mathbb{H}}[X]$ of degree $d\geq 2$, we say that $P$ is a \emph{Gauss-Lucas polynomial} if $P$ satisfies \eqref{eq:g-l}. \end{definition} \begin{proposition} If $P$ is a polynomial in ${\mathbb{H}}[X]$ of degree $2$, then $V(N(P))={\mathbb{S}}_{x_1}\cup{\mathbb{S}}_{x_2}$ for some $x_1,x_2\in{\mathbb{H}}$ (possibly with ${\mathbb{S}}_{x_1}={\mathbb{S}}_{x_2}$) and \[ V(P')\subset\bigcup_{y_1\in{\mathbb{S}}_{x_1},y_2\in{\mathbb{S}}_{x_2}}\left\{\frac{y_1+y_2}{2}\right\}. \] In particular every polynomial $P\in{\mathbb{H}}[X]$ of degree $2$ is a Gauss-Lucas polynomial. \end{proposition} \begin{proof} Let $P(X)=X^2a_2+Xa_1+a_0\in{\mathbb{H}}[X]$ with $a_2\neq0$. Since $P\cdot a_2^{-1}=Pa_2^{-1}$ and $(P\cdot a_2^{-1})'=P'\cdot a_2^{-1}=P'a_2^{-1}$, we can assume $a_2=1$. Consequently, $P(X)=(X-x_1)\cdot(X-x_2)=X^2-X(x_1+x_2)+x_1x_2$ for some $x_1,x_2\in{\mathbb{H}}$. Then $x_1\in V(P)$ and $\bar x_2\in V(P^c)$, since $P^c(X)=(X-\bar x_2)\cdot(X-\bar x_1)$. Therefore $x_1,x_2\in V(N(P))$. On the other hand, $V(P')=\{(x_1+x_2)/2\}$, as desired. \end{proof} \begin{remark} Let $P(X)=\sum_{k=0}^dX^ka_k\in{\mathbb{H}}[X]$ be of degree $d\geq2$ and let $Q:=P\cdot a_d^{-1}$ be the corresponding monic polynomial. Since $V(P)=V(Q)$ and $V(P')=V(Q')$, $P$ is a Gauss-Lucas polynomial if and only if $Q$ is. \end{remark} \begin{proposition} \label{prop:-4} Let $P\in{\mathbb{H}}[X]$ be of degree $d\geq3$.
Suppose that $N(P)(X)=X^{2e}\cdot(X^2+1)^{d-e}$ for some $e<d$ and that $N(P')(X)=\sum_{k=0}^{2d-2}X^kb_k$ contains a unique monomial of odd degree, that is, $b_k\neq0$ for a unique odd $k$. Then $P$ is not a Gauss-Lucas polynomial. \end{proposition} \begin{proof} Since $V(N(P))\subset\{0\}\cup{\mathbb{S}}$, we have $\mathcal K(N(P))\subset\im({\mathbb{H}})=\{x\in{\mathbb{H}}\,|\,\re(x)=0\}$. Then it suffices to show that $N(P')$ has at least one root in ${\mathbb{H}}\setminus\im({\mathbb{H}})$. Let $G,L\in{\mathbb{R}}[X]$ be the unique real polynomials such that $G(t)+iL(t)=N(P')(it)$ for every $t\in{\mathbb{R}}$. Since $N(P')$ contains a unique monomial of odd degree, say $X^{2\ell+1}b_{2\ell+1}$, we have $L(t)=(-1)^\ell b_{2\ell+1}t^{2\ell+1}$ and hence $V(N(P'))\cap i{\mathbb{R}}\subset\{0\}$. Since $V(N(P'))$ is a circular set, it follows that $V(N(P'))\cap\im({\mathbb{H}})\subset\{0\}$. Since $N(P')$ contains at least two monomials, namely those of degrees $2\ell+1$ and $2d-2$, we infer that it must have a nonzero root. Therefore $V(N(P'))\not\subset\{0\}$ and hence $V(N(P'))\not\subset\im({\mathbb{H}})$, as desired. \end{proof} \begin{corollary}\label{counterexample} Let $d\geq3$ and let \[ P(X)=X^{d-3}\cdot(X-i)\cdot(X-j)\cdot(X-k). \] Then $N(P)(X)=X^{2d-6}\cdot(X^2+1)^3$ and $N(P')$ contains a unique monomial of odd degree, namely $-4X^{2d-5}$. In particular $P$ is not a Gauss-Lucas polynomial. \end{corollary} \begin{proof} By a direct computation we obtain: \begin{align} P(X)&=X^d-X^{d-1}(i+j+k)+X^{d-2}(i-j+k)+X^{d-3},\\ P'(X)&=dX^{d-1}-(d-1)X^{d-2}(i+j+k)+(d-2)X^{d-3}(i-j+k)+(d-3)X^{d-4},\label{P1} \\ N(P')(X)&=d^2X^{2d-2}+3(d-1)^2X^{2d-4}-4X^{2d-5}+3(d-2)^2X^{2d-6}+(d-3)^2X^{2d-8}. \end{align} Proposition \ref{prop:-4} implies the conclusion.
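For the reader's convenience, we also verify the coefficient $-4$ of $X^{2d-5}$ in $N(P')$ by a routine computation from \eqref{P1}. Writing $P'(X)=\sum_kX^kc_k$, the coefficient of $X^{2d-5}$ in $N(P')=P'\cdot(P')^c$ equals $\sum_{k+l=2d-5}c_k\bar c_l$, and the only contributing pairs of degrees are $(d-1,d-4)$ and $(d-2,d-3)$. Hence it equals
\[
2d(d-3)+(d-1)(d-2)\big((i+j+k)(i-j+k)+(i-j+k)(i+j+k)\big)=2d(d-3)-2(d-1)(d-2)=-4,
\]
since $(i+j+k)(i-j+k)=-1+2i-2k$ and $(i-j+k)(i+j+k)=-1-2i+2k$.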
\end{proof} Let $I\in{\mathbb{S}}$ and let ${\mathbb{C}}_I\subset{\mathbb{H}}$ be the complex plane generated by $1$ and $I$. Given a polynomial $P\in{\mathbb{H}}[X]$, we will denote by $P_I:{\mathbb{C}}_I\to{\mathbb{H}}$ the restriction of $P$ to ${\mathbb{C}}_I$. {\it If $P_I$ is not constant, we will denote by $\mathcal K_{{\mathbb{C}}_I}(P)$ the convex hull in the complex plane ${\mathbb{C}}_I$ of the zero set $V(P_I)=V(P)\cap{\mathbb{C}}_I$. If $P_I$ is constant, we set $\mathcal K_{{\mathbb{C}}_I}(P)={\mathbb{C}}_I$.} If ${\mathbb{C}}_I$ contains every coefficient of $P\in{\mathbb{H}}[X]$, then we say that $P$ is a \emph{${\mathbb{C}}_I$-polynomial}. \begin{proposition} The following holds: \begin{enumerate} \item[(1)] Every ${\mathbb{C}}_I$-polynomial of degree $\geq2$ is a Gauss-Lucas polynomial. \item[(2)] Let $d\geq3$, let ${\mathbb{H}}_d[X]=\{P\in{\mathbb{H}}[X]\,|\,\deg(P)=d\}$ and let $E_d[X]$ be the set of all elements of ${\mathbb{H}}_d[X]$ that are not Gauss-Lucas polynomials. Identify each $P(X)=\sum_{k=0}^dX^ka_k$ in ${\mathbb{H}}_d[X]$ with $(a_0,\ldots,a_d)\in{\mathbb{H}}^d\times{\mathbb{H}}^*\subset{\mathbb{R}}^{4d+4}$ and endow ${\mathbb{H}}_d[X]$ with the relative Euclidean topology. Then $E_d[X]$ is a nonempty open subset of ${\mathbb{H}}_d[X]$. Moreover, $E_d[X]$ is not dense in ${\mathbb{H}}_d[X]$, since $X^d-1$ is an interior point of its complement. \end{enumerate} \end{proposition} \begin{proof} If $P$ is a ${\mathbb{C}}_I$-polynomial, then $P_I$ can be identified with an element of ${\mathbb{C}}_I[X]$. Consequently, the classic Gauss-Lucas Theorem gives $V(P')\cap{\mathbb{C}}_I=V(P_I')\subset\mathcal K_{{\mathbb{C}}_I}(P)$. The zero set of the ${\mathbb{C}}_I$-polynomial $P'$ has a particular structure (see \cite[Lemma~3.2]{GeStoSt2013}): $V(P')$ is the union of $V(P')\cap{\mathbb{C}}_I$ with the set of spheres ${\mathbb{S}}_x$ such that $x,\bar x\in V(P'_I)$.
It follows that \[ V(P')\subset\mathcal K(N(P)). \] This proves (1). Now we prove (2). By Corollary \ref{counterexample} we know that $E_d[X]\neq\emptyset$. If $P\in E_d[X]$, then $V(N(P'))\not\subset\mathcal K(N(P))$. Both $N(P)$ and $N(P')$ are polynomials with real coefficients. Since the roots of $N(P)$ and of $N(P')$ depend continuously on the coefficients of $P$ and $\mathcal K(N(P))$ is closed in ${\mathbb{H}}$, for every $Q\in{\mathbb{H}}_d[X]$ sufficiently close to $P$, $V(N(Q'))$ is not contained in $\mathcal K(N(Q))$, that is, $Q\in E_d[X]$. To prove the last statement, observe that $P(X)=X^d-1$ is not in $E_d[X]$ by part (1). Since $V(P')=V(N(P'))=\{0\}$ is contained in the interior of the set $\mathcal K(N(P))$, for every $Q\in{\mathbb{H}}_d[X]$ sufficiently close to $P$, $V(Q')$ is still contained in $\mathcal K(N(Q))$. \end{proof} \section{A quaternionic Gauss-Lucas Theorem} Let $P(X)=\sum_{k=0}^dX^ka_k\in{\mathbb{H}}[X]$ be of degree $d\geq2$. For every $I\in{\mathbb{S}}$, let $\pi_I:{\mathbb{H}}\to{\mathbb{H}}$ be the orthogonal projection onto ${\mathbb{C}}_I$ and $\pi_I^\bot=id-\pi_I$. Let $P^I(X):=\sum_{k=0}^dX^ka_{k,I}$ be the ${\mathbb{C}}_I$-polynomial with coefficients $a_{k,I}:=\pi_I(a_k)$. \begin{definition} We define the \emph{Gauss-Lucas snail of $P$} as the following subset $\mathrm{sn}(P)$ of ${\mathbb{H}}$: \[ \mathrm{sn}(P):=\bigcup_{I\in{\mathbb{S}}}\mathcal K_{{\mathbb{C}}_I}(P^I). \] \end{definition} Our quaternionic version of the Gauss-Lucas Theorem reads as follows. \begin{theorem}\label{thm} For every polynomial $P\in{\mathbb{H}}[X]$ of degree $\geq2$, \begin{equation}\label{snail} V(P')\subset\mathrm{sn}(P). \end{equation} \end{theorem} \begin{proof} Let $P(X)=\sum_{k=0}^dX^ka_k$ in ${\mathbb{H}}_d[X]$ with $d\geq2$.
We can decompose the restriction of $P$ to ${\mathbb{C}}_I$ as $P_I=\pi_I\circ P_I+\pi_I^\bot\circ P_I={P^I}_{|{\mathbb{C}}_I}+\pi_I^\bot\circ P_I$. If $x\in{\mathbb{C}}_I$, then $P^I(x)\in{\mathbb{C}}_I$ while $(\pi_I^\bot\circ P_I)(x)\in{\mathbb{C}}_I^\bot$. The same decomposition holds for $P'$. This implies that $V(P')\cap{\mathbb{C}}_I\subset V((P^I)')\cap{\mathbb{C}}_I$. The classic Gauss-Lucas Theorem applied to $P^I$ on ${\mathbb{C}}_I$ gives $V(P')\cap{\mathbb{C}}_I\subset \mathcal K_{{\mathbb{C}}_I}(P^I)$. Since $V(P')=\bigcup_{I\in{\mathbb{S}}}(V(P')\cap{\mathbb{C}}_I)$, the inclusion \eqref{snail} is proved. \end{proof} If $P$ is monic, Theorem \ref{thm} has the following equivalent formulation: {\it For every monic polynomial $P\in{\mathbb{H}}[X]$ of degree $\geq2$, it holds} \begin{equation} \mathrm{sn}(P')\subset\mathrm{sn}(P). \end{equation} \begin{remark}\label{rem:monic} If $P$ is a nonconstant monic polynomial in ${\mathbb{H}}[X]$, then two properties hold: \begin{itemize} \item[(a)] $\mathcal K_{{\mathbb{C}}_I}(P^I)$ is a compact subset of ${\mathbb{C}}_I$ for every $I\in{\mathbb{S}}$. \item[(b)] $\mathcal K_{{\mathbb{C}}_I}(P^I)$ depends continuously on $I$. \end{itemize} Let $I\in{\mathbb{S}}$. Since $P$ is monic, $P^I$ is also a monic, nonconstant polynomial, and then $\mathcal K_{{\mathbb{C}}_I}(P^I)$ is a compact subset of ${\mathbb{C}}_I$. This proves property (a). To see that (b) holds, one can apply the Continuity theorem for monic polynomials (see e.g.~\cite[Theorem~1.3.1]{RahmanSchmeisser}). The roots of $P^I$ depend continuously on the coefficients of $P^I$, which in turn depend continuously on $I$. Therefore the convex hull $\mathcal K_{{\mathbb{C}}_I}(P^I)$ depends continuously on $I$. Observe that (a) and (b) may fail for polynomials that are not monic. For example, let $P(X)=X^2i$.
Then, given $I=\alpha_1i+\alpha_2j+\alpha_3k\in{\mathbb{S}}$, $\mathcal K_{{\mathbb{C}}_I}(P^I)=\{0\}$ if $\alpha_1\ne0$ and $\mathcal K_{{\mathbb{C}}_I}(P^I)={\mathbb{C}}_I$ if $\alpha_1=0$, since in this case $P^I$ is constant. \end{remark} \begin{remark}\label{rem:closed} If $P$ is a monic polynomial in ${\mathbb{H}}[X]$ of degree $\geq2$, then its Gauss-Lucas snail is a closed subset of ${\mathbb{H}}$. To prove this fact, consider $q\in{\mathbb{H}}\setminus\mathrm{sn}(P)$ and choose $I\in{\mathbb{S}}$ such that $q\in{\mathbb{C}}_I$. Write $q=\alpha+I\beta\in{\mathbb{C}}_I$ for some $\alpha,\beta\in{\mathbb{R}}$ and define $z:=\alpha+i\beta\in{\mathbb{C}}$. Since $P$ is monic, $\mathrm{sn}(P)\cap{\mathbb{C}}_I=\mathcal K_{{\mathbb{C}}_I}(P^I)$ is a compact subset of ${\mathbb{C}}_I$. Moreover, $\mathcal K_{{\mathbb{C}}_I}(P^I)$ depends continuously on $I$, and then there exist an open neighborhood $U_I$ of $z$ in ${\mathbb{C}}$ and an open neighborhood $W_I$ of $I$ in ${\mathbb{S}}$ such that the set \[ \textstyle [U_I,W_I]:=\bigcup_{J\in W_I}\{a+Jb\in{\mathbb{C}}_J\;|\; a+ib\in U_I\} \] is an open neighborhood of $q$ in $\bigcup_{J\in W_I}{\mathbb{C}}_J$, and it is disjoint from $\mathrm{sn}(P)$. If $q\not\in{\mathbb{R}}$ then $q$ is an interior point of ${\mathbb{H}}\setminus\mathrm{sn}(P)$, because $[U_I,W_I]$ is a neighborhood of $q$ in ${\mathbb{H}}$ as well. Now assume that $q\in{\mathbb{R}}$. Since ${\mathbb{S}}$ is compact, there exist $I_1,\ldots,I_n\in{\mathbb{S}}$ such that $\bigcup_{\ell=1}^nW_{I_\ell}={\mathbb{S}}$. It follows that $[\bigcap_{\ell=1}^nU_{I_\ell},{\mathbb{S}}]$ is a neighborhood of $q$ in ${\mathbb{H}}$, which is disjoint from $\mathrm{sn}(P)$. Consequently $q$ is an interior point of ${\mathbb{H}}\setminus\mathrm{sn}(P)$ also in this case. This proves that $\mathrm{sn}(P)$ is closed in ${\mathbb{H}}$.
In Proposition \ref{pro:sn-compact} below we will show that the Gauss-Lucas snail of a monic polynomial in ${\mathbb{H}}[X]$ of degree $\geq2$ is also a compact subset of ${\mathbb{H}}$. \end{remark} If all the coefficients of $P$ are real, then $\mathrm{sn}(P)$ is a circular set. In general, $\mathrm{sn}(P)$ is neither closed nor bounded nor circular, as shown in the next example. \begin{example} Let $P(X)=X^2i+X$. Given $I=\alpha_1i+\alpha_2j+\alpha_3k\in{\mathbb{S}}$, $P^I(X)=X^2I\alpha_1+X$ and then $\mathcal K_{{\mathbb{C}}_I}(P^I)=\{0\}$ if $\alpha_1=0$ while $\mathcal K_{{\mathbb{C}}_I}(P^I)$ is the segment from $0$ to $I{\alpha_1}^{-1}$ if $\alpha_1\ne0$. It follows that $\mathrm{sn}(P)=\{x_1i+x_2j+x_3k\in\im({\mathbb{H}})\,|\,0<x_1\le1\}\cup\{0\}$. Finally, observe that the monic polynomial $Q(X)=-P(X)\cdot~i=X^2-Xi$ corresponding to $P$ has compact Gauss-Lucas snail $\mathrm{sn}(Q)=\{x_1i+x_2j+x_3k\in\im({\mathbb{H}})\,|\,(x_1-1/2)^2+x_2^2+x_3^2\le 1/4\}$. \end{example} \begin{remark} Even for ${\mathbb{C}}_I$-polynomials, the Gauss-Lucas snail of $P$ can be strictly smaller than the circular convex hull $\mathcal K(N(P))$. For example, consider the ${\mathbb{C}}_i$-polynomial $P(X)=X^3+3X+2i$, with zero sets $V(P)=\{-i,2i\}$ and $V(P')={\mathbb{S}}$. The set $\mathcal K(N(P))$ is the closed three-dimensional disc in $\im({\mathbb{H}})$, with center at the origin and radius $2$. The Gauss-Lucas snail $\mathrm{sn}(P)$ is the subset of $\im({\mathbb{H}})$ obtained by rotating around the $i$-axis the following subset of the coordinate plane $L=\{x=x_1i+x_2j\in\im({\mathbb{H}})\,|\,x_1,x_2\in{\mathbb{R}}\}$: \[\{x=\rho\cos(\theta)i+\rho\sin(\theta)j\in L\;|\;0\le\theta\le\pi,\,0\le\rho\le2\cos(\theta/3)\}.\] Therefore $\mathrm{sn}(P)$ is a proper subset of $\mathcal K(N(P))$ (the boundaries of the two sets intersect only at the point $2i$).
Its boundary is obtained by rotating a curve that is part of the \emph{lima\c con trisectrix} (see Figure \ref{fig:limacon}). \end{remark} \begin{figure} \caption{Cross-sections of $\mathrm{sn}(P)$ and of $\mathcal K(N(P))$.} \label{fig:limacon} \end{figure} \subsection{Estimates on the norm of the critical points} Let $p(z)=\sum_{k=0}^da_kz^k$ be a complex polynomial of degree $d\ge1$. The norm of the roots of $p$ can be estimated making use of the norms of the coefficients $\{a_k\}_{k=0}^d$ of $p$. There are several classic results in this direction (see e.g.\ \cite[\S8.1]{RahmanSchmeisser}). For instance the estimate \cite[(8.1.2)]{RahmanSchmeisser} (with $\lambda=1,p=2$) asserts that \begin{equation}\label{eq:cauchy} \textstyle \max_{z\in V(p)}|z|\leq|a_d|^{-1}\sqrt{\sum_{k=0}^d|a_k|^2}\;. \end{equation} \begin{proposition}\label{pro:sn-compact} For every monic polynomial $P\in{\mathbb{H}}[X]$ of degree $d\geq2$, the Gauss-Lucas snail $\mathrm{sn}(P)$ is a compact subset of ${\mathbb{H}}$. \end{proposition} \begin{proof} Since $P=\sum_{k=0}^dX^ka_k$ is monic, every polynomial $P^I$ is monic. From \eqref{eq:cauchy} it follows that $\max_{x\in V(P^I)}|x|^2\le\sum_{k=0}^d|\pi_I(a_k)|^2\le\sum_{k=0}^d|a_k|^2$ and hence $\mathrm{sn}(P)\subset \{x\in{\mathbb{H}}\,|\, |x|^2\le \sum_{k=0}^d|a_k|^2\}$ is bounded. Since $\mathrm{sn}(P)$ is closed in ${\mathbb{H}}$, as seen in Remark \ref{rem:closed}, it is also a compact subset of ${\mathbb{H}}$.
\end{proof} Define a function $C:{\mathbb{H}}[X]\to{\mathbb{R}}\cup\{+\infty\}$ as follows: $C(a):=+\infty$ if $a$ is a quaternionic constant and \[ \textstyle C(P):=|a_d|^{-1}\sqrt{\sum_{k=0}^d|a_k|^2} \qquad \text{if $P(X)=\sum_{k=0}^dX^ka_k$ with $d\ge1$ and $a_d\neq0$.} \] \begin{proposition}\label{pro:estimate} For every polynomial $P\in{\mathbb{H}}[X]$ of degree $d\geq1$, it holds \begin{equation}\label{eq:C} \max_{x\in V(P)}|x|\leq C(P). \end{equation} \end{proposition} \begin{proof} We follow the lines of the proof of estimate \eqref{eq:cauchy} for complex polynomials given in \cite{RahmanSchmeisser}. Let $P(X)=\sum_{k=0}^dX^ka_k$ with $d\ge1$ and $a_d\neq0$. We can assume that $P(X)$ is not the monomial $X^da_d$, since in this case the thesis is immediate. Let $b_k=|a_ka_d^{-1}|$ for every $k=0,\ldots,d-1$. The real polynomial $h(z)=z^d-\sum_{k=0}^{d-1}b_kz^k$ has exactly one positive root $\rho$ and is positive for real $z>\rho$ (see \cite[Lemma~8.1.1]{RahmanSchmeisser}). Let $S:=\sum_{k=0}^{d-1}b_k^2=C(P)^2-1$. From the Cauchy-Schwarz inequality, it follows that \[\left(\sum_{k=0}^{d-1}b_kC(P)^k\right)^2\le S\sum_{k=0}^{d-1}C(P)^{2k}= (C(P)^2-1)\frac{C(P)^{2d}-1}{C(P)^2-1}<C(P)^{2d}. \] Therefore $h(C(P))>0$ and then $C(P)>\rho$. Let $x\in V(P)$. It remains to prove that $|x|\le\rho$. Since $x^d=-\sum_{k=0}^{d-1}x^ka_ka_d^{-1}$, it holds \[|x|^d\le\sum_{k=0}^{d-1}|x|^k\left|a_ka_d^{-1}\right|=\sum_{k=0}^{d-1}|x|^kb_k.\] This means that $h(|x|)\le0$, which implies $|x|\le\rho$. \end{proof} From Proposition~\ref{pro:estimate} it follows that for every polynomial $P\in{\mathbb{H}}[X]$ of degree $d\geq2$, it holds \begin{equation}\label{eq:C-derivative} \max_{x\in V(P')}|x|\leq C(P'). \end{equation} Theorem \ref{thm} allows us to obtain a new estimate.
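Before stating it, we illustrate estimate \eqref{eq:C} with an elementary worked check on a concrete quadratic. \begin{example} Let $P(X)=(X-i)\cdot(X-j)=X^2-X(i+j)+k$. Then \[ C(P)=\sqrt{|k|^2+|i+j|^2+1}=\sqrt{1+2+1}=2, \] while $N(P)(X)=(X^2+1)^2$, so every root of $P$ lies on ${\mathbb{S}}$ and has norm $1\le C(P)$. Moreover, $P'(X)=2X-(i+j)$ has the unique root $(i+j)/2$, whose norm satisfies \[ \Big|\frac{i+j}{2}\Big|=\frac{\sqrt{2}}{2}\le C(P')=\frac{1}{2}\sqrt{|i+j|^2+4}=\frac{\sqrt{6}}{2}. \] \end{example}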
\begin{proposition} Given any polynomial $P\in{\mathbb{H}}[X]$ of degree $d\geq2$, it holds: \begin{equation}\label{eq:ijk'} \max_{x\in V(P')}|x|\leq\sup_{I\in{\mathbb{S}}}\{C(P^I)\}. \end{equation} \end{proposition} \begin{proof} If $x\in V(P')\cap{\mathbb{C}}_I$, Theorem \ref{thm} implies that $x\in\mathcal K_{{\mathbb{C}}_I}(P^I)$. Therefore \begin{equation*}\label{eq:ijk} \max_{x\in V(P')\cap{\mathbb{C}}_I}|x|\leq C(P^I) \quad \text{ for every $I\in{\mathbb{S}}$ with $V(P')\cap{\mathbb{C}}_I\ne\emptyset$}, \end{equation*} from which inequality \eqref{eq:ijk'} follows. \end{proof} Our estimate \eqref{eq:ijk'} can be strictly better than the classic estimate \eqref{eq:C}, as explained below. \begin{remark} Let $d\geq3$ and let $P(X)=X^{d-3}\cdot(X-i)\cdot(X-j)\cdot(X-k)$. Using \eqref{P1}, by a direct computation we obtain: \[ C(P')=d^{-1}\sqrt{8d^2-24d+24}. \] Moreover, given $I=\alpha_1i+\alpha_2j+\alpha_3k\in{\mathbb{S}}$ for some $\alpha_1,\alpha_2,\alpha_3\in{\mathbb{R}}$ with $\alpha_1^2+\alpha_2^2+\alpha_3^2=1$, we have $ \pi_I(i+j+k)=\langle I,i+j+k\rangle I=(\alpha_1+\alpha_2+\alpha_3)I$ and $\pi_I(i-j+k)=\langle I,i-j+k\rangle I=(\alpha_1-\alpha_2+\alpha_3)I$ and hence \[ C(P^I)=\sqrt{4+4\alpha_1\alpha_3}\leq\sqrt{4+2(\alpha_1^2+\alpha_3^2)}\leq\sqrt{6}. \] This implies that \[ \sup_{I\in{\mathbb{S}}}\{C(P^I)\}\leq\sqrt{6}. \] For every $d\geq11$ it is easy to verify that $\sqrt{6}<C(P')$, so \[ \sup_{I\in{\mathbb{S}}}\{C(P^I)\}<C(P'), \] as announced. \end{remark} \begin{remark} Some of the results presented here can be generalized to real alternative *-algebras, a setting in which polynomials can be defined and share many of the properties valid on the quaternions (see \cite{GhPe_AIM}). The polynomials given in Corollary \ref{counterexample} can be defined whenever the algebra contains a Hamiltonian triple $i,j,k$.
This property is equivalent to saying that the algebra contains ${\mathbb{H}}$ as a subalgebra (see \cite[\S8.1]{Numbers}). For example, this is true for the algebra of octonions and for the Clifford algebras with signature $(0,n)$, with $n\ge2$. Therefore in all such algebras there exist polynomials for which the zero set $V(P')$ (as a subset of the \emph{quadratic cone}) is not included in the circularization of the convex hull of the zero set of $N(P)$, viewed as a complex polynomial. \end{remark} \end{document}
\begin{document} \pagestyle{plain} \thispagestyle{plain} \title[A conjectural chain model for positive $S^1$-equivariant symplectic homology of star-shaped toric domains in $\mathbb{C}^2$] {A conjectural chain model for positive $S^1$-equivariant symplectic homology of star-shaped toric domains in $\mathbb{C}^2$} \author[Kei Irie]{Kei Irie} \address{Research Institute for Mathematical Sciences, Kyoto University, Kyoto 606-8502, JAPAN} \email{[email protected]} \begin{abstract} For any star-shaped toric domain in $\mathbb{C}^2$, we define a filtered chain complex which conjecturally computes positive $S^1$-equivariant symplectic homology of the domain. Assuming this conjecture, we show that the limit $\lim_{k \to \infty} c^{\mathrm{GH}}_k(X)/k$ exists for any star-shaped toric domain $X \subset \mathbb{C}^2$, where $c^{\mathrm{GH}}_k$ denotes the $k$-th Gutt-Hutchings capacity. \end{abstract} \maketitle \section{Introduction} Let $n$ be a positive integer, and consider ${\mathbb C}^n$ with the symplectic form $\sum_{j=1}^n dx_j dy_j$. A star-shaped domain in ${\mathbb C}^n$ is a compact subset $X \subset {\mathbb C}^n$ with a $C^\infty$-boundary such that $(0,\ldots,0)$ is in the interior of $X$, and for any $z \in {\mathbb C}^n \setminus \{0\}$ the half line $\{ tz \mid t \in {\mathbb R}_{\ge 0}\}$ intersects $\partial X$ transversally at a unique point. For any such $X$ and $-\infty < a < b \le \infty$, one can define a vector space $\text{\rm SH}\,^{S^1,[a,b)}_*(X)$ called $S^1$-equivariant symplectic homology. It is well-known that $\text{\rm SH}\,^{S^1, [\delta,\infty)}_*(X) \cong H^{S^1}_{*-(n+1)}(\text{\rm pt})$ when $\delta>0$ is sufficiently close to $0$. On the other hand, this family of vector spaces (with maps between them) has rich quantitative information about $X$. In particular, one can define the Gutt-Hutchings capacities $(c^{\mathrm{GH}}_k)_{k \ge 1}$ from the ``positive part'' of $S^1$-equivariant symplectic homology.
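For instance, for any $a_1, \ldots, a_n > 0$ the ellipsoid \[ E(a_1,\ldots,a_n):= \Big\{ (z_1,\ldots,z_n) \in {\mathbb C}^n \; \Big| \; \sum_{j=1}^n \frac{\pi |z_j|^2}{a_j} \le 1 \Big\} \] is a star-shaped domain in ${\mathbb C}^n$, as is the ball $B^{2n}(a):=E(a,\ldots,a)$; these are the basic examples in this theory.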
The Gutt-Hutchings capacities were defined in \cite{Gutt_Hutchings} for Liouville domains. It is conjectured (\cite{Gutt_Hutchings} Conjecture 1.9) that the Gutt-Hutchings capacities coincide with the $S^1$-equivariant Ekeland-Hofer capacities \cite{Ekeland_Hofer} for compact star-shaped domains in ${\mathbb C}^n$. A star-shaped domain $X \subset {\mathbb C}^n$ is called a (star-shaped) toric domain if $X$ is invariant under the standard $T^n$-action on ${\mathbb C}^n$. When $X$ is a so-called ``convex'' or ``concave'' toric domain, Gutt-Hutchings \cite{Gutt_Hutchings} proved explicit formulas to compute the capacities $c^\mathrm{GH}_k(X)$ for all $k \ge 1$. One remarkable consequence of the formulas is that $\lim_{k \to \infty} \frac{c^\mathrm{GH}_k(X)}{k}$ exists if $X$ is a convex or concave toric domain. Actually, this existence of the limit holds under a much weaker assumption; see Remark 1.22 of \cite{Gutt_Hutchings}. The proof of the formulas in \cite{Gutt_Hutchings} is ``elementary'' in the sense that the proof uses only basic properties of the capacities, which are combined in a very clever way. On the other hand, it is not clear how to generalize the formulas to toric domains which are neither convex nor concave. Even for convex or concave domains, it is not clear how to obtain information beyond the capacities, such as barcodes associated to persistence modules defined from $S^1$-equivariant symplectic homology. The aim of this note is to define a filtered chain complex for any star-shaped toric domain $X \subset {\mathbb C}^2$, which conjecturally computes $\text{\rm SH}\,^{S^1, [a,b)}_*(X)$ for any $0<a<b\le \infty$. Assuming this conjecture, we show that $\lim_{k \to \infty} \frac{c^\mathrm{GH}_k(X)}{k}$ exists for any star-shaped toric domain $X \subset {\mathbb C}^2$. Let us describe the plan of this paper.
In Section 2, we define an ${\mathbb R}$-filtered chain complex $C^\Omega_*$ for any $\Omega \in \mca{S}^2$ (see Definition \ref{defn_starshaped} below). For any $\Omega \in \mca{S}^2$, we define a star-shaped domain $X_\Omega \subset {\mathbb C}^2$, formulate a conjecture that $C^\Omega_*$ computes $\text{\rm SH}\,^{S^1, [a,b)}_*(X_\Omega)$ for any $0 < a< b \le \infty$, and support this conjecture by some computations. In Section 3, we define a sequence of capacities $(c_k(\Omega))_{k \ge 1}$ for any $\Omega \in \mca{S}^2$. Assuming the above conjecture, one has $c_k(\Omega) = c^\mathrm{GH}_k(X_\Omega)$ for any $k \ge 1$ and $\Omega \in \mca{S}^2$. We compute the capacities $c_k(\Omega)$ when $\Omega$ is concave or (weakly) convex, and check that the results are consistent with the formulas in \cite{Gutt_Hutchings}. Moreover, we show that $\lim_{k \to \infty} \frac{c_k(\Omega)}{k}$ exists for any $\Omega \in \mca{S}^2$. {\bf Acknowledgement.} The author is grateful to Jean Gutt and Michael Hutchings for very helpful comments on an earlier version of this paper. The author is supported by JSPS KAKENHI Grant Numbers 18K13407 and 19H00636. {\bf Conventions.} Throughout this paper we consider vector spaces over ${\mathbb Q}$ unless otherwise specified. An ${\mathbb R}$-filtration on a vector space $V$ is a family of subspaces $(V^a)_{a \in {\mathbb R}}$ such that $a \le b \implies V^a \subset V^b$. We set $V^\infty:=V$ and $V^{-\infty}:=0$. For any $a<b$, we denote $V^{[a,b)}:= V^b/V^a$. \section{A chain model} \subsection{Definition of a chain model} Let us start with the following definition. \begin{defn}\label{defn_starshaped} For any $n \in {\mathbb Z}_{\ge 1}$, let $\Sigma^n:= \{ v \in ( {\mathbb R}_{\ge 0})^n \mid |v|=1 \}$. Let $\mca{S}^n$ denote the set of all subsets $\Omega \subset ({\mathbb R}_{\ge 0})^n$ such that there exists $r_\Omega \in C^\infty(\Sigma^n, {\mathbb R}_{>0})$ satisfying $\Omega = \{ tz \mid 0 \le t \le r_\Omega(z), \, z \in \Sigma^n \}$.
For any $\Omega \in \mca{S}^n$, let $U_\Omega:= ({\mathbb R}_{>0})^n \setminus \Omega$, and let $\bar{U}_\Omega \subset ({\mathbb R}_{\ge 0})^n$ denote the closure of $U_\Omega$ in $({\mathbb R}_{\ge 0})^n$. \end{defn} In this paper we mostly consider the case $n=2$. For any $\Omega \in \mca{S}^2$, we define a ${\mathbb Z}$-graded ${\mathbb Q}$-vector space $C^\Omega_*$ by \[ C^\Omega_*:= \bigoplus_{(m_1, m_2) \in {\mathbb Z}^2 \setminus ({\mathbb Z}_{\le 0})^2} C^{\text{\rm sing}\,}_{*+1-2(m_1+m_2)}(U_\Omega) \otimes H_*(S^1), \] where $C^{\text{\rm sing}\,}_*$ denotes the singular chain complex and $S^1:={\mathbb R}/{\mathbb Z}$. Let us define a boundary operator on $C^\Omega_*$. Let $e_0:= [\text{\rm pt}] \in H_0(S^1)$ and $e_1:=[\sigma] \in H_1(S^1)$, where $\sigma:[0,1] \to S^1; \, t \mapsto [t]$. For any homogeneous element $x \in C^\Omega_*$, let us set \[ x= \sum_{(m_1, m_2, i) \in ({\mathbb Z}^2 \setminus ({\mathbb Z}_{\le 0})^2) \times \{0, 1\}} x_{m_1, m_2, i} \otimes e_i, \] and define $\partial x$ by \begin{align*} (\partial x )_{m_1, m_2, 0 }&:= \partial^\text{\rm sing}\, x _{m_1, m_2, 0}, \\ (\partial x)_{m_1, m_2, 1}&:= \partial^\text{\rm sing}\, x_{m_1, m_2, 1} + (-1)^{|x|} ( m_2 \cdot x_{m_1+1, m_2, 0} - m_1 \cdot x_{m_1, m_2+1, 0}), \end{align*} where $\partial^\text{\rm sing}\,$ denotes the boundary operator of the singular chain complex. One can check $\partial^2=0$ by direct computations. Let us define an ${\mathbb R}$-filtration on $C^\Omega_*$. For any $(x_1, x_2) \in {\mathbb R}^2$, define $A_{x_1, x_2}: {\mathbb R}^2 \to {\mathbb R}$ by \[ A_{x_1,x_2}(y_1,y_2):= x_1 y_1 + x_2 y_2. \] For any $a \in {\mathbb R}$ and $(m_1,m_2) \in {\mathbb Z}^2$, let \[ U_\Omega(a: m_1,m_2):= \{(x_1,x_2) \in U_\Omega \mid A_{m_1,m_2}(x_1,x_2) < a\}. \] and \[ C^{\Omega,a}_*:= \bigoplus_{(m_1, m_2) \in {\mathbb Z}^2 \setminus ({\mathbb Z}_{\le 0})^2} C^{\text{\rm sing}\,}_{*+1-2(m_1+m_2)}(U_\Omega(a:m_1,m_2)) \otimes H_*(S^1). 
\] Then $(C^{\Omega, a}_*)_{a \in {\mathbb R}}$ is an ${\mathbb R}$-filtration on $C^\Omega_*$. For any $a<b$, we denote $C^{\Omega, [a,b)}_*:= C^{\Omega, b}_*/C^{\Omega, a}_*$. For any $(m_1, m_2) \in {\mathbb Z}^2 \setminus ({\mathbb Z}_{\le 0})^2$, there holds \begin{equation}\label{U_Omega_length} U_\Omega(a:m_1+1, m_2) , U_\Omega(a: m_1, m_2+1) \subset U_\Omega(a: m_1, m_2). \end{equation} Thus $\partial(C^{\Omega, a}_*) \subset C^{\Omega, a}_{*-1}$. If $\Omega_1, \Omega_2 \in \mca{S}^2$ satisfy $\Omega_1 \subset \Omega_2$, then $U_{\Omega_2} \subset U_{\Omega_1}$. Then we obtain a natural chain map $C^{\Omega_2}_* \to C^{\Omega_1}_*$ which preserves the ${\mathbb R}$-filtrations. \begin{lem}\label{lem_F_m} Let $-\infty < a<b \le \infty$. For any $m \in {\mathbb Z}$, let \[ F_mC^{\Omega, [a,b)}_*: = \bigoplus_{\substack{(m_1, m_2) \in {\mathbb Z}^2 \setminus ({\mathbb Z}_{\le 0})^2 \\ m_1+m_2 \le m}} C^{\text{\rm sing}\,}_{*+1-2(m_1+m_2)}(U_\Omega(b:m_1,m_2), U_\Omega(a:m_1,m_2)) \otimes H_*(S^1). \] Then $(F_mC^{\Omega, [a,b)}_*)_{m \in {\mathbb Z}}$ is a filtration on $C^{\Omega, [a,b)}_*$. Let $(E^r, \partial_{E^r})_{r \ge 1}$ be the spectral sequence associated to this filtration. Then the following holds. \begin{itemize} \item[(i):] There exists an isomorphism \[ E^1_{p,q} \cong \bigoplus_{\substack{m_1+m_2=p \\ i+j=q-p+1}} H_i(U_\Omega(b:m_1,m_2), U_\Omega(a:m_1,m_2)) \otimes H_j(S^1) \] such that $\partial_{E^1}: E^1_{p,q} \to E^1_{p-1,q}$ is given by \begin{align*} (\partial_{E^1} x )_{m_1, m_2, 0 }&=0, \\ (\partial_{E^1} x)_{m_1, m_2, 1}&=(-1)^{|x|} ( m_2 \cdot x_{m_1+1, m_2, 0} - m_1 \cdot x_{m_1, m_2+1, 0}). \end{align*} \item[(ii):] $\partial_{E^r}=0$ if $r \ge 2$. Moreover $H_*(C^{\Omega,[a,b)}) \cong \bigoplus_{p+q=*} E^\infty_{p,q}$. \end{itemize} \end{lem} \begin{proof} (i) is straightforward. 
To see (ii), for each $l \in {\mathbb Z}$ let \begin{align*} C^l_*:= & \bigoplus_{m_1+m_2=l} C_{*+1-2l}^{\text{\rm sing}\,}(U_\Omega(b:m_1,m_2), U_\Omega(a:m_1,m_2)) \otimes {\mathbb R} e_1 \\ \oplus &\bigoplus_{m_1+m_2=l+1} C_{*-1-2l}^{\text{\rm sing}\,}(U_\Omega(b:m_1,m_2), U_\Omega(a:m_1,m_2)) \otimes {\mathbb R} e_0. \end{align*} Then $C^l_*$ is a subcomplex of $C_*:= C^{\Omega,[a,b)}_*$, and there holds $C_* = \bigoplus_{l \in {\mathbb Z}} C^l_*$, in particular $H_*(C) \cong \bigoplus_{l \in {\mathbb Z}} H_*(C^l)$. Each $C^l$ is equipped with a filtration $F^mC^l:= F^m C \cap C^l\, (m \in {\mathbb Z})$. Let $(E^r(C^l))_{r \ge 1}$ be the spectral sequence associated to this filtration. Since $F^m C \cong \bigoplus_{l \in {\mathbb Z}} F^m C^l$ for each $m$, there holds $E^r_{p,q}(C) \cong \bigoplus_{l \in {\mathbb Z}} E^r_{p,q}(C^l)$ for any $r \ge 1$. Thus it is sufficient to show that $\partial_{E^r(C^l)}=0 \,(r \ge 2)$ and $H_*(C^l) \cong \bigoplus_{p+q=*} E^\infty_{p,q}(C^l)$ for each $l \in {\mathbb Z}$. This follows from $F^{l-1}C^l=0$ and $F^{l+1}C^l =C^l$. \end{proof} \begin{rem}\label{rem_E_1} Suppose that $H_i(U_\Omega(b:m_1,m_2), U_\Omega(a:m_1,m_2)) \ne 0$ only if $i=0$. Then $E^1_{p,q} \ne 0$ only if $q=p$ or $q=p-1$. Moreover, for any $j \in {\mathbb Z}$ \[ E^1_{p, p-1+j} \cong \bigoplus_{m_1+m_2=p} H_0(U_\Omega(b:m_1,m_2), U_\Omega(a:m_1,m_2)) \otimes H_j(S^1). \] \end{rem} \subsection{Conjectural relation to $S^1$-equivariant symplectic homology} Let $n$ be a positive integer, and let $\lambda_0:= \frac{1}{2} \sum_{j=1}^n (x_j dy_j - y_j dx_j) \in \Omega^1({\mathbb C}^n)$. For any star-shaped domain $X \subset {\mathbb C}^n$, $(X, \lambda_0)$ is a Liouville domain. For any $-\infty < a < b \le \infty$, one can define a ${\mathbb Z}$-graded vector space $\text{\rm SH}\,^{S^1, [a,b)}_*(X, \lambda_0)$, which we abbreviate by $\text{\rm SH}\,^{S^1,[a,b)}_*(X)$, called $S^1$-equivariant symplectic homology. 
The family of vector spaces $(\text{\rm SH}\,^{S^1, [a,b)}_*(X))_{a,b,X}$ is equipped with the maps (transfer morphisms) \[ \text{\rm SH}\,^{S^1, [a,b)}_*(X) \to \text{\rm SH}\,^{S^1, [a', b')}_*(X') \] for any $(a,b,X)$ and $(a', b', X')$ such that $a \le a'$, $b \le b'$ and $X' \subset X$. \begin{rem} $S^1$-equivariant symplectic homology was defined by Viterbo \cite{Viterbo_GAFA}. Bourgeois-Oancea \cite{Bourgeois_Oancea} gave alternative definitions via family Floer homology following Seidel \cite{Seidel_biased}. Gutt-Hutchings \cite{Gutt_Hutchings} uses a family Floer homology definition, following the treatment in Gutt \cite{Gutt}. \end{rem} For any $\Omega \in \mca{S}^2$, \[ X_\Omega:= \{ (z_1, z_2) \in {\mathbb C}^2 \mid (\pi|z_1|^2, \pi |z_2|^2) \in \Omega \} \] is a star-shaped domain in ${\mathbb C}^2$. Now we can state the following conjecture. \begin{conj}\label{conj_main} For any $\Omega \in \mca{S}^2$ and $0<a<b \le \infty$, one can define an isomorphism of ${\mathbb Z}$-graded vector spaces \[ F^{a,b}_\Omega: H_*(C^{\Omega, [a,b)}) \cong \text{\rm SH}\,^{S^1, [a,b)}_*(X_\Omega) \] so that the diagram \[ \xymatrix{ H_*(C^{\Omega, [a,b)})\ar[d]_-{F^{a,b}_\Omega}^-{\cong} \ar[r]& H_*(C^{\Omega',[a',b')}) \ar[d]^-{F^{a',b'}_{\Omega'}}_-{\cong} \\ \text{\rm SH}\,^{S^1,[a,b)}_*(X_\Omega) \ar[r] & \text{\rm SH}\,^{S^1, [a', b')}_*(X_{\Omega'}) } \] commutes for any $a, b, \Omega$ and $a', b', \Omega'$ such that $a \le a'$, $b \le b'$ and $\Omega' \subset \Omega$. \end{conj} Let us briefly explain an idea to obtain the conjectural isomorphism $F^{a, b}_\Omega$. 
Given $0<a<b<\infty$ such that $a,b \not\in \text{\rm Spec}\,(\Omega)$ and a positive integer $N$, take an autonomous Hamiltonian $H$ on ${\mathbb C}^2$ so that $\text{\rm SH}\,^{S^1, [a,b)}_{\le N}(X_\Omega) \cong \mathrm{HF}^{S^1, [a,b)}_{\le N}(H)$ and the following property holds: every $1$-periodic orbit $\gamma$ of $X_H$ with $\mathrm{ind}_{\mathrm{CZ}}(\gamma) \le N$ and $\mca{A}_H(\gamma)>0$ is contained in $({\mathbb C} \setminus \{0\})^2$. Here $\mathrm{ind}_{\mathrm{CZ}}$ denotes the Conley-Zehnder index and $\mca{A}_H$ denotes the Hamiltonian action functional. To compute $\mathrm{HF}^{S^1, [a,b)}_{\le N}(H)$ we take $\varepsilon>0$ and consider an almost complex structure $J^\varepsilon$ on ${\mathbb C}^2$ which satisfies \[ J^\varepsilon(\partial_{\theta_i}) = - \varepsilon r_i \partial_{r_i} \qquad (z_i = e^{r_i + \sqrt{-1} \theta_i}, \, i=1, 2) \] on the complement of a neighborhood of ${\mathbb C} \times \{0\} \cup \{0\} \times {\mathbb C}$. Conjecturally, when $\varepsilon$ is sufficiently close to $0$, $\mathrm{HF}^{S^1, [a,b)}_{\le N}(H)$ can be computed by counting certain Morse trajectories on $({\mathbb R}_{>0})^2$, and one obtains an isomorphism $\mathrm{HF}^{S^1, [a,b)}_{\le N}(H) \cong H_{\le N}(C^{\Omega, [a,b)})$ via finite-dimensional Morse theory; this gives $F^{a,b}_\Omega$ up to degree $N$. \subsection{Computations of relative homologies} In this subsection we compute relative homologies $H_*(C^{\Omega, [a,b)})$ for some special cases, verifying that Conjecture \ref{conj_main} is consistent with known properties of $S^1$-equivariant symplectic homology. We start with some preparations on toric star-shaped domains in ${\mathbb C}^2$. For any $\Omega \in \mca{S}^2$, let us define $\rho_\Omega \in C^\infty([0,\pi/2], {\mathbb R}_{>0})$ by $\rho_\Omega(\theta):=r_\Omega(\cos \theta, \sin \theta)$. 
In other words, \begin{equation}\label{eqn_Omega} \Omega = \{ (r \cos \theta, r \sin \theta) \mid 0 \le \theta \le \pi/2, \, 0 \le r \le \rho_\Omega(\theta) \}. \end{equation} Let us define \begin{align*} \partial \Omega&:= \{ (\rho_\Omega(\theta) \cos \theta, \rho_\Omega(\theta) \sin \theta) \mid 0 \le \theta \le \pi/2 \}, \\ \partial_+\Omega&:= \partial\Omega \cup \{ (t,0) \mid t \ge \rho_\Omega(0)\} \cup \{ (0,t) \mid t \ge \rho_\Omega(\pi/2)\}. \end{align*} For any $c \in {\mathbb R}$ and $(m_1,m_2) \in {\mathbb Z}^2 \setminus ({\mathbb Z}_{\le 0})^2$, let \begin{align*} \bar{U}_\Omega(c:m_1,m_2)&:= \{(x_1, x_2) \in \bar{U}_\Omega \mid A_{m_1,m_2}(x_1,x_2)<c\}, \\ \partial_+\Omega(c:m_1,m_2)&:= \{(x_1,x_2) \in \partial_+ \Omega \mid A_{m_1,m_2}(x_1,x_2)<c\}. \end{align*} \begin{lem}\label{lem_barU} For any $\Omega \in \mca{S}^2$, $c \in {\mathbb R}_{>0}$ and $(m_1,m_2) \in {\mathbb Z}^2 \setminus ({\mathbb Z}_{\le 0})^2$, \[ H_*(\bar{U}_\Omega(c:m_1,m_2), U_\Omega(c:m_1,m_2)) = H_*(\bar{U}_\Omega(c:m_1,m_2), \partial_+\Omega(c:m_1,m_2))=0. \] \end{lem} \begin{proof} Let us define \[ \mca{C}:= \{ \text{critical values of $A_{m_1,m_2}|_{\partial \Omega}$ } \} \cup \{ m_1 \rho_\Omega(0), m_2 \rho_\Omega(\pi/2) \}. \] Note that $\mca{C}$ is a null set. We may assume $c \not\in \mca{C}$, since the case $c \in \mca{C}$ follows from this case by taking limits. Then $\bar{U}_\Omega(c: m_1, m_2)$ is a manifold with corners, which implies that $H_*(\bar{U}_\Omega(c:m_1,m_2), U_\Omega(c:m_1,m_2))=0$. Next we show that $H_*(\bar{U}_\Omega(c:m_1,m_2), \partial_+\Omega(c:m_1,m_2))=0$. Let \[ \partial \Omega(c: m_1, m_2):= \{ (x_1, x_2) \in \partial \Omega \mid A_{m_1, m_2}(x_1, x_2) < c\}. \] Then it is sufficient to show that the inclusion maps \[ i: \partial \Omega(c:m_1,m_2) \to \partial_+ \Omega (c:m_1, m_2), \qquad j: \partial \Omega(c:m_1,m_2) \to \bar{U}_\Omega(c:m_1,m_2) \] are homotopy equivalences.
For any $x \in \bar{U}_\Omega(c:m_1,m_2)$, let $\rho(x) \in {\mathbb R}_{>0}$ be the unique positive real number such that $\rho(x) x \in \partial \Omega$. Then
\[
r: \bar{U}_\Omega(c:m_1,m_2) \to \partial \Omega(c:m_1,m_2); \, x \mapsto \rho(x) \cdot x
\]
is a homotopy inverse of $j$, and $r|_{\partial_+ \Omega(c:m_1,m_2)}$ is a homotopy inverse of $i$.
\end{proof}
For any $\Omega \in \mca{S}^2$, let us define
\[
P(\Omega):= \{ (\rho_\Omega(0), 0), (0, \rho_\Omega(\pi/2))\} \cup \bigcup_{(m_1, m_2) \in {\mathbb Z}^2 \setminus ({\mathbb Z}_{\le 0})^2} \mathrm{Crit}\,_+(A_{m_1,m_2}|_{\partial \Omega}),
\]
where
\[
\mathrm{Crit}\,_+(A_{m_1,m_2}):= \{ p \in \partial \Omega \mid d A_{m_1,m_2}|_{\partial \Omega}(p)=0, \quad A_{m_1,m_2}(p) > 0 \}.
\]
For any $p \in P(\Omega)$ and $m \in {\mathbb Z}_{>0}$, we define $A(p,m) \in {\mathbb R}_{>0}$ and $i(p,m) \in {\mathbb Z}$ as follows:
\begin{itemize}
\item If $p=(\rho_\Omega(0), 0)$,
\[
A(p,m):= m \cdot \rho_\Omega(0) , \quad
i(p,m):= 1 + 2(m+ [mt_1]),
\]
where $t_1 \in {\mathbb R}$ is defined so that $T_p(\partial\Omega)$ is generated by $(-t_1, 1)$.
\item If $p= (0, \rho_\Omega(\pi/2))$,
\[
A(p,m):= m \cdot \rho_\Omega(\pi/2), \quad
i(p,m):= 1 + 2(m+ [mt_2]),
\]
where $t_2 \in {\mathbb R}$ is defined so that $T_p(\partial\Omega)$ is generated by $(1,-t_2)$.
\item If $p \not\in \{ (\rho_\Omega(0), 0), (0, \rho_\Omega(\pi/2))\}$, there exists a unique $(m_1, m_2) \in {\mathbb Z}^2 \setminus ({\mathbb Z}_{\le 0} )^2$ such that $p \in \mathrm{Crit}\,_+(A_{m_1,m_2})$ and $m= \mathrm{gcd}(m_1, m_2)$. Let $\mu(p)$ denote the Morse index of $p$ as a critical point of $A_{m_1,m_2}|_{\partial \Omega}$. Then
\[
A(p,m):=A_{m_1,m_2}(p), \quad
i(p,m):= 2(m_1+m_2) + \mu(p) - 1.
\]
\end{itemize}
Let $\text{\rm Spec}\,(\Omega):= \{ A(p,m) \mid (p,m) \in P(\Omega) \times {\mathbb Z}_{>0} \} \subset {\mathbb R}_{>0}$.
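As a sanity check, the spectrum defined above can be approximated numerically for a concrete $\Omega$ given in the polar form $\rho_\Omega$. The sketch below is illustrative only: it assumes $A_{m_1,m_2}(x_1,x_2)=m_1x_1+m_2x_2$ (the convention used implicitly in the proofs below, defined earlier in the paper) and scans only small nonnegative $(m_1,m_2)$; the function name and parameters are ours, not part of the paper.

```python
import math

def approx_spec(rho, max_m=4, cutoff=4.0, grid=2000):
    """Approximate part of Spec(Omega) for Omega given in polar form by
    rho: [0, pi/2] -> R_{>0}, assuming A_{m1,m2}(x1,x2) = m1*x1 + m2*x2.
    Only small nonnegative (m1, m2) are scanned (toy illustration)."""
    vals = set()
    # corner contributions: A(p, m) = m * rho(0) and m * rho(pi/2)
    for m in range(1, max_m + 1):
        for v in (m * rho(0.0), m * rho(math.pi / 2)):
            if v < cutoff:
                vals.add(round(v, 3))
    # positive critical values of A_{m1,m2} restricted to the boundary arc
    ts = [math.pi / 2 * i / grid for i in range(grid + 1)]
    for m1 in range(max_m + 1):
        for m2 in range(max_m + 1):
            if (m1, m2) == (0, 0):
                continue
            ys = [(m1 * math.cos(t) + m2 * math.sin(t)) * rho(t) for t in ts]
            for i in range(1, grid):  # interior local extrema with positive value
                if ys[i] > 0 and (ys[i - 1] <= ys[i] >= ys[i + 1]
                                  or ys[i - 1] >= ys[i] <= ys[i + 1]):
                    vals.add(round(ys[i], 3))
    return sorted(v for v in vals if v < cutoff)
```

For $\rho \equiv 1$ (a quarter circle) the corner points contribute the integers $m$, and the interior critical points of $A_{m_1,m_2}|_{\partial\Omega}$ contribute the values $\sqrt{m_1^2+m_2^2}$.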
It is easy to see that $\text{\rm Spec}\,(\Omega)$ is of measure zero and closed in ${\mathbb R}_{\ge 0}$; in particular, $\min \text{\rm Spec}\,(\Omega)$ exists and is positive. Let $\mca{S}^2_\text{\rm nice}$ denote the set consisting of $\Omega \in \mca{S}^2$ satisfying the following conditions:
\begin{itemize}
\item For any $(m_1,m_2) \in {\mathbb Z}^2 \setminus ({\mathbb Z}_{\le 0})^2$, $(\rho_\Omega(0), 0)$ and $(0, \rho_\Omega(\pi/2))$ are not critical points of $A_{m_1,m_2}|_{\partial \Omega}$. Moreover $A_{m_1, m_2}|_{\partial \Omega}$ is a Morse function, i.e. every critical point of $A_{m_1,m_2}|_{\partial \Omega}$ is nondegenerate.
\item If $(p,m), (p', m') \in P(\Omega) \times {\mathbb Z}_{>0}$ satisfy $A(p,m)=A(p',m')$, then $(p,m) = (p',m')$.
\end{itemize}
It is easy to see that $\mca{S}^2_\text{\rm nice}$ is residual in $\mca{S}^2$ with the $C^\infty$-topology; see \cite{Irie_equidistribution} Lemma 6.1.
\begin{prop}\label{prop_01}
For any $\Omega \in \mca{S}^2$ and $0<a<b \le \infty$ such that $a,b \not\in \text{\rm Spec}\,(\Omega)$, the following holds.
\begin{itemize}
\item[(i):] If $[a,b) \cap \text{\rm Spec}\,(\Omega) = \emptyset$, then $H_*(C^{\Omega,[a,b)})=0$.
\item[(ii):] If $\Omega \in \mca{S}^2_\text{\rm nice}$ and $[a,b) \cap \text{\rm Spec}\,(\Omega)$ consists of one element $A(p,m)$ with $p \in \{(\rho_\Omega(0), 0), (0, \rho_\Omega(\pi/2))\}$, then $H_*(C^{\Omega, [a,b)}) \cong H_{*-i(p,m)}(\text{\rm pt})$.
\item[(iii):] If $\Omega \in \mca{S}^2_\text{\rm nice}$ and $[a,b) \cap \text{\rm Spec}\,(\Omega)$ consists of one element $A(p,m)$ with $p \not\in \{(\rho_\Omega(0), 0), (0, \rho_\Omega(\pi/2))\}$, then $H_*(C^{\Omega, [a,b)}) \cong H_{*-i(p,m)}(S^1)$.
\end{itemize}
\end{prop}
\begin{proof}
Let us consider the filtration $(F_m)_m$ on $C^{\Omega, [a,b)}_*$ and the associated spectral sequence as in Lemma \ref{lem_F_m}.
(i): By Lemma \ref{lem_barU} and the assumption,
\[
H_*(U_\Omega(b:m_1,m_2), U_\Omega(a:m_1,m_2)) \cong H_*(\partial_+\Omega(b:m_1, m_2), \partial_+\Omega(a:m_1,m_2)) =0
\]
for any $(m_1,m_2) \in {\mathbb Z}^2 \setminus ({\mathbb Z}_{\le 0})^2$, thus $E^1_{p,q}=0$ for any $(p,q) \in {\mathbb Z}^2$.
(ii): Let us consider the case $p=(\rho_\Omega(0), 0)$. By Lemma \ref{lem_barU} and the assumption,
\[
H_*(U_\Omega(b:m_1,m_2), U_\Omega(a:m_1,m_2)) \cong
\begin{cases}
H_*(\text{\rm pt}) &(m_1=m, \, m_2 > mt_1), \\ 0 &(\text{otherwise}).
\end{cases}
\]
Then, by Lemma \ref{lem_F_m} and Remark \ref{rem_E_1}, we obtain
\[
E^2_{k,l} \cong
\begin{cases}
{\mathbb Q} &(k=m+[mt_1]+1, \, l=m+[mt_1] ) \\
0&(\text{otherwise})
\end{cases}
\]
and $\partial_{E^2}=0$, which implies $H_*(C^{\Omega, [a,b)}) \cong H_{*-i(p,m)}(\text{\rm pt})$. The case $p=(0, \rho_\Omega(\pi/2))$ is similar and omitted.
(iii): By Lemma \ref{lem_barU} and the assumption,
\[
H_*(U_\Omega(b:m_1,m_2), U_\Omega(a:m_1,m_2)) \cong
\begin{cases}
H_{*-\mu(p)} (\text{\rm pt}) &(p \in \mathrm{Crit}\,_+(A_{m_1,m_2}), \, m = \mathrm{gcd}(m_1, m_2)), \\
0&( \text{otherwise}).
\end{cases}
\]
By Lemma \ref{lem_F_m}, we obtain
\[
E^1_{k,l} \cong
\begin{cases}
{\mathbb Q} &(k=m_1+m_2, \, l - k - \mu(p) \in \{-1, 0\}) \\
0 &(\text{otherwise})
\end{cases}
\]
and $\partial_{E^1}=0$, which implies $H_*(C^{\Omega, [a,b)}) \cong H_{*-i(p,m)}(S^1)$.
\end{proof}
For any $0<a \le \infty$, let $H^{+,a}_*(\Omega):= \mathop{\lim_{\longleftarrow}}_{\delta \to 0+} H_*(C^{\Omega, [\delta,a)})$. By Proposition \ref{prop_01} (i), $H^{+,a}_*(\Omega) \to H_*(C^{\Omega,[\delta,a)})$ is an isomorphism if $\delta \in (0, \min \text{\rm Spec}\,(\Omega))$.
\begin{prop}\label{prop_plus}
\begin{itemize}
\item[(i):] For any $\Omega \in \mca{S}^2$, there holds $H^{+,\infty}_*(\Omega) \cong H^{S^1}_{*-3}(\text{\rm pt})$.
\item[(ii):] For any $\Omega, \Omega' \in \mca{S}^2$ such that $\Omega' \subset \Omega$, the natural map $H^{+,\infty}_*(\Omega) \to H^{+,\infty}_*(\Omega')$ is an isomorphism.
\end{itemize}
\end{prop}
\begin{proof}
(i): Take $\delta \in (0, \min \text{\rm Spec}\,(\Omega))$. Then $H^{+,\infty}_*(\Omega) \cong H_*(C^{\Omega, [\delta,\infty)})$. By Lemma \ref{lem_barU}, it is easy to show that for any $(m_1,m_2) \in {\mathbb Z}^2 \setminus ({\mathbb Z}_{\le 0})^2$
\[
H_*(U_\Omega, U_\Omega(\delta: m_1, m_2)) \cong
\begin{cases}
H_*(\text{\rm pt}) &( (m_1, m_2) \in ({\mathbb Z}_{>0})^2), \\
0 &(\text{otherwise}).
\end{cases}
\]
Consider the filtration $(F_m)_{m \in {\mathbb Z}}$ on $C^{\Omega, [\delta, \infty)}_*$ as before. Then, by Lemma \ref{lem_F_m} and Remark \ref{rem_E_1}, $E^1_{p,q} \ne 0$ only if $q-p \in \{0,-1\}$, and there exists an isomorphism
\[
E^1_{p, p-1+j} \cong \bigoplus_{\substack{m_1+m_2=p \\ m_1,m_2>0}} H_0(\text{\rm pt}) \otimes H_j(S^1).
\]
For any $p \ge 2$ one has an exact sequence
\[
\xymatrix{
0 \ar[r] & {\mathbb Q} \ar[r]_-{\star}& E^1_{p,p-1} \ar[r]_-{\partial_{E^1}} & E^1_{p-1, p-1} \ar[r]& 0,
}
\]
where $\star$ maps $1$ to $\sum_{j=1}^{p-1} x_{j, p-j, 0} \otimes e_0$ such that $x_{j, p-j, 0} \ne 0$ for any $j$. Hence we obtain
\[
E^2_{p,q} \cong
\begin{cases}
{\mathbb Q} &(p \ge 2, \, q = p-1), \\
0 &(\text{otherwise}).
\end{cases}
\]
This implies $H_*(C^{\Omega, [\delta, \infty)}) \cong H^{S^1}_{*-3}(\text{\rm pt})$.
(ii): The natural chain map $C^{\Omega, [\delta,\infty)}_* \to C^{\Omega', [\delta,\infty)}_*$ respects the filtrations $(F_m)_{m \in {\mathbb Z}}$ and gives isomorphisms on $E^1$-pages.
\end{proof}
The next corollary follows from the above proof of Proposition \ref{prop_plus} (i).
\begin{cor}\label{cor_plus}
For any $\delta \in (0, \min \text{\rm Spec}\,(\Omega))$ and $k \ge 1$, any element of $H_{2k+1}(C^{\Omega, [\delta, \infty)}) \cong {\mathbb Q}$ is represented by $x= \sum_{(m_1, m_2, i) \in ({\mathbb Z}^2 \setminus ({\mathbb Z}_{\le 0})^2) \times \{0, 1\}} x_{m_1, m_2, i} \otimes e_i \in C^{\Omega, [\delta,\infty)}_{2k+1}$ such that:
\begin{itemize}
\item $x_{m_1,m_2,i}=0$ unless $m_1,m_2>0$, $m_1+m_2=k+1$ and $i=0$.
\item $x_{j, k+1-j, 0} = a_j[p] \otimes e_0$ for any $1 \le j \le k$, where $p \in U_\Omega$ and $(a_1, \ldots, a_k)$ satisfies $(k-j) \cdot a_{j+1} - j \cdot a_j=0$ for any $1 \le j \le k-1$.
\end{itemize}
\end{cor}
\section{Capacities}
\subsection{Definition and basic properties}
For any $\Omega \in \mca{S}^2$, we define a sequence $(c_k(\Omega))_{k \ge 1}$ as follows. For any $a \in {\mathbb R}_{>0}$, let
\[
(i^a_\Omega)_*: H^{+,a}_*(\Omega) \to H^{+,\infty}_*(\Omega)
\]
be the natural map. For any $k \ge 1$, let
\[
c_k(\Omega):= \inf\{ a \mid (i^a_\Omega)_{2k+1} \ne 0 \}.
\]
\begin{prop}\label{prop_property_c_k}
The following holds for any $\Omega \in \mca{S}^2$ and $k \ge 1$.
\begin{itemize}
\item[(i):] For any $\Omega' \in \mca{S}^2$ such that $\Omega' \subset \Omega$, there holds $c_k(\Omega') \le c_k(\Omega)$.
\item[(ii):] For any $c \in {\mathbb R}_{>0}$, $c_k(c\Omega)=c \cdot c_k(\Omega)$.
\item[(iii):] $c_k(\Omega) \in \text{\rm Spec}\,(\Omega)$.
\item[(iv):] $c_k(\Omega) \le c_{k+1}(\Omega)$.
\end{itemize}
\end{prop}
\begin{proof}
(i): For any $0 < a \le \infty$, let $j^a: H^{+,a}_*(\Omega) \to H^{+,a}_*(\Omega')$ be the natural map. Then $c_k(\Omega') \le c_k(\Omega)$ since $i^a_{\Omega'} \circ j^a=j^\infty \circ i^a_\Omega$ and $j^\infty$ is an isomorphism by Proposition \ref{prop_plus} (ii).
(ii) follows from the isomorphism $H^{+,a}_*(\Omega) \cong H^{+, ca}_*(c\Omega)$ defined for any $0<a \le \infty$ and $c>0$, which is induced by the scaling diffeomorphism $U_\Omega \to U_{c\Omega}; \, (x_1,x_2) \mapsto (cx_1, cx_2)$.
(iii) follows from Proposition \ref{prop_01} (i) and the fact that $\text{\rm Spec}\,(\Omega)$ is a closed set.
(iv): Let us define a linear map $u:C^\Omega_* \to C^\Omega_{*-2}$ by
\[
(ux)_{m_1,m_2, i}: = x_{m_1+1, m_2, i} + x_{m_1, m_2+1, i} \qquad(i=0, 1).
\]
By direct computations one can check that $u$ commutes with the boundary map on $C^\Omega$. $u$ respects the ${\mathbb R}$-filtration on $C^\Omega$ by (\ref{U_Omega_length}). Also Corollary \ref{cor_plus} implies that $H_*(u): H_{2k+3}(C^{\Omega, [\delta,\infty)}) \to H_{2k+1}(C^{\Omega, [\delta,\infty)})$ is an isomorphism for any $k \ge 1$, which implies that $c_k(\Omega) \le c_{k+1}(\Omega)$.
\end{proof}
\begin{rem}
It is not clear to the author whether $H_*(u)$ corresponds to the $U$-map (as defined in \cite{Gutt_Hutchings}) or not.
\end{rem}
\subsection{Conjectural relation to Gutt-Hutchings capacities}
For any Liouville domain $(X,\lambda)$, Gutt-Hutchings \cite{Gutt_Hutchings} defined a sequence $(c^{\mathrm{GH}}_k(X,\lambda))_{k \ge 1}$ called Gutt-Hutchings capacities. See \cite{Gutt_Hutchings} Definitions 4.1 and 4.4 for the definition of the capacities for general Liouville domains. Let $n$ be a positive integer, and $X$ be a star-shaped domain in ${\mathbb C}^n$. We abbreviate $c^{\mathrm{GH}}_k(X, \lambda_0)$ by $c^{\mathrm{GH}}_k(X)$. For any $0 < a \le \infty$, let ${\mathbb C}H^a_*(X):= \varprojlim_{\delta \to 0+} \text{\rm SH}\,^{S^1,[\delta,a)}_*(X)$. There exists a natural map $\delta: {\mathbb C}H^\infty_{*-n+1}(X) \to H_*(X, \partial X) \otimes H^{S^1}_*(\text{\rm pt})$; see \cite{Gutt_Hutchings} Section 3. For any $a \in {\mathbb R}_{>0}$, let $i^a: {\mathbb C}H^a_*(X) \to {\mathbb C}H^\infty_*(X)$ be the natural map.
Then
\[
c^{\mathrm{GH}}_k(X) = \inf\{ a \mid (i^a)_{2k+1} \ne 0\}
\]
for any $k \ge 1$. This follows from the following facts:
\begin{itemize}
\item $\delta: {\mathbb C}H^\infty_{n+1}(X) \to H_{2n}(X, \partial X) \otimes H^{S^1}_0(\text{\rm pt})$ is an isomorphism.
\item $U: {\mathbb C}H^\infty_{n+2k+1}(X) \to {\mathbb C}H^\infty_{n+2k-1}(X)$ is an isomorphism for any $k \ge 1$; see \cite{Gutt_Hutchings} Section 6.3 for the definition of the $U$-map.
\end{itemize}
It is now clear that Conjecture \ref{conj_main} implies the following conjecture.
\begin{conj}\label{conj_capacity}
For any $\Omega \in \mca{S}^2$ and $k \ge 1$, there holds $c_k(\Omega)=c^{\mathrm{GH}}_k(X_\Omega)$.
\end{conj}
\subsection{Computations for concave and (weakly) convex domains}
In this subsection, we compute the capacities $c_k(\Omega)$ when $\Omega \in \mca{S}^2$ is concave or weakly convex. We say $\Omega$ is weakly convex if it is a convex subset of ${\mathbb R}^2$, and $\Omega$ is concave if $U_\Omega = ({\mathbb R}_{>0})^2 \setminus \Omega$ is a convex subset of ${\mathbb R}^2$. By comparing our computations with formulas by Gutt-Hutchings (Theorems 1.6 and 1.14 in \cite{Gutt_Hutchings}), we can verify Conjecture \ref{conj_capacity} when $\Omega$ is concave or strongly convex. Here we say $\Omega$ is strongly convex if $\{(x_1, x_2) \in {\mathbb R}^2 \mid (|x_1|, |x_2|) \in \Omega\}$ is a convex subset of ${\mathbb R}^2$. Recall that $\bar{U}_\Omega$ denotes the closure of $U_\Omega$.
\begin{prop}\label{prop_concave}
If $\Omega \in \mca{S}^2$ is concave, then for any $k \ge 1$
\[
c_k(\Omega) = \max_{1 \le j \le k} ( \min_{p \in \bar{U}_\Omega} A_{j, k+1-j}(p) ).
\]
\end{prop}
\begin{proof}
We may assume that $\Omega \in \mca{S}^2_\text{\rm nice}$. This is because for any concave $\Omega \in \mca{S}^2$ and for any $\varepsilon>0$, there exists $\Omega' \in \mca{S}^2_\text{\rm nice}$ which is concave and satisfies $\Omega \subset \Omega' \subset (1+\varepsilon)\Omega$.
Let $a:=\max_{1 \le j \le k} ( \min_{p \in \bar{U}_\Omega} A_{j, k+1-j}(p) )$. To prove $c_k(\Omega)=a$, it is sufficient to show $a-\varepsilon \le c_k(\Omega) \le a+\varepsilon$ for any $\varepsilon>0$.
Let us prove $c_k(\Omega) \le a+\varepsilon$. Consider the filtration on $C^{\Omega, [a+\varepsilon,\infty)}_*$ as in Lemma \ref{lem_F_m}. Since $U_\Omega$ and $U_\Omega(a+\varepsilon:m_1,m_2)$ are both convex, we obtain
\[
H_*(U_\Omega, U_\Omega(a+\varepsilon:m_1,m_2)) \cong
\begin{cases}
H_*(\text{\rm pt}) &( U_\Omega(a+\varepsilon:m_1,m_2) = \emptyset), \\
0 &(U_\Omega(a+\varepsilon:m_1,m_2) \ne \emptyset).
\end{cases}
\]
Moreover, if $m_1+m_2 \le k+1$ then $\min_{p \in \bar{U}_\Omega} A_{m_1,m_2}(p) \le a$, thus $U_\Omega(a+\varepsilon:m_1,m_2) \ne \emptyset$. By Remark \ref{rem_E_1}, if $E^1_{p,q} \ne 0$ then $q \in \{p-1, p\}$ and $p \ge k+2$, thus $q+p \ge 2k+3$. Hence $H_{\le 2k+2}(C^{\Omega, [a+\varepsilon,\infty)})=0$. This implies that $H_{2k+1}(C^{\Omega,[\delta, a+\varepsilon)}) \to H_{2k+1}(C^{\Omega, [\delta,\infty)})$ is an isomorphism for any $\delta \in (0, a+\varepsilon)$, thus $c_k(\Omega) \le a+\varepsilon$.
Let us prove $c_k(\Omega) \ge a-\varepsilon$. Take $\delta>0$ sufficiently close to $0$ and consider the filtration on $C^{\Omega, [\delta, a-\varepsilon)}_*$ as in Lemma \ref{lem_F_m}. Then
\[
E^1_{p,p-1+j} \cong \bigoplus_{\substack{m_1+m_2=p \\ m_1,m_2>0}} H_0(U_\Omega(a-\varepsilon:m_1,m_2), U_\Omega(\delta: m_1,m_2)) \otimes H_j(S^1).
\]
This is because $H_0(U_\Omega(a-\varepsilon:m_1,m_2), U_\Omega(\delta: m_1,m_2))=0$ unless $m_1, m_2>0$. Moreover $\partial_{E^1}: E^1_{p, p-1} \to E^1_{p-1, p-1}$ is given by
\[
(\partial_{E_1} x)_{m_1, m_2, 1} = m_1 \cdot x_{m_1, m_2+1, 0} - m_2 \cdot x_{m_1+1, m_2, 0}.
\]
There exists $1 \le j \le k$ such that $a-\varepsilon < \min_{p \in \bar{U}_\Omega} A_{j, k+1-j}(p)$; then $U_\Omega(a-\varepsilon: j, k+1-j) = \emptyset$.
This implies
\[
\mathrm{Ker}( \partial_{E^1}: E^1_{k+1,k} \to E^1_{k,k})=0,
\]
then $H_{2k+1}(C^{\Omega, [\delta, a-\varepsilon)})=0$. This implies $c_k(\Omega) \ge a-\varepsilon$.
\end{proof}
\begin{prop}\label{prop_convex}
If $\Omega \in \mca{S}^2$ is weakly convex, then for any $k \ge 1$
\[
c_k(\Omega) = \min_{0 \le j \le k} ( \max_{p \in \Omega} A_{j, k-j}(p)).
\]
\end{prop}
\begin{proof}
We may assume that $\Omega \in \mca{S}^2_\text{\rm nice}$. This is because for any weakly convex $\Omega \in \mca{S}^2$ and for any $\varepsilon>0$, there exists $\Omega' \in \mca{S}^2_\text{\rm nice}$ which is weakly convex and satisfies $\Omega \subset \Omega' \subset (1+\varepsilon)\Omega$. Let $a:=\min_{0 \le j \le k} ( \max_{p \in \Omega} A_{j, k-j}(p))$. It is sufficient to show that, for any $\varepsilon>0$ there holds $a-\varepsilon \le c_k(\Omega) \le a+\varepsilon$.
Let us prove $c_k(\Omega) \le a+\varepsilon$. Take $j \in \{0, \ldots, k\}$ so that $\max_{p \in \Omega} A_{j,k-j}(p)=a$. We fix such $j$ in the following argument. There exists $\Omega' \in \mca{S}^2$ which is concave and
\[
\Omega \subset \Omega' \subset \{ p \in ({\mathbb R}_{\ge 0})^2 \mid A_{j, k-j}(p) \le a+\varepsilon\}.
\]
Then
\[
c_k(\Omega) \le c_k(\Omega') \le \max_{1 \le i \le k} \big( \min_{A_{j, k-j}(p) \ge a+\varepsilon} A_{i, k+1-i}(p) \big),
\]
where the second inequality follows from Proposition \ref{prop_concave}. Thus it is sufficient to show
\begin{equation}\label{eqn_j_k_j}
\min_{A_{j,k-j}(p) \ge a+\varepsilon} A_{i, k+1-i} (p) \le a+\varepsilon
\end{equation}
for any $i \in \{1, \ldots, k\}$. When $j=0$, the LHS is equal to $\frac{(k+1-i)(a+\varepsilon)}{k}$. When $j=k$, the LHS is equal to $\frac{i(a+\varepsilon)}{k}$. Thus (\ref{eqn_j_k_j}) holds when $j=0$ or $j=k$. Let us consider the case $0<j<k$. By $(i-j) + (k+1-i) -(k-j)=1$, we obtain
\[
\min\{ i-j, (k+1-i)-(k-j)\} \le 0 \iff \min\{ i/j, (k+1-i)/(k-j) \} \le 1.
\]
Then
\[
\min_{A_{j, k-j}(p) \ge a+\varepsilon} A_{i, k+1-i}(p) = \min \{i/j, (k+1-i)/(k-j) \} \cdot (a+\varepsilon) \le a+\varepsilon.
\]
This completes the proof of $c_k(\Omega) \le a+\varepsilon$.
Let us prove $c_k(\Omega) \ge a-\varepsilon$. It is sufficient to show that the image of
\begin{equation}\label{eqn_convex}
H_{2k+1}(C^{\Omega, [\delta, a-\varepsilon)}) \to H_{2k+1}(C^{\Omega, [\delta, \infty)})
\end{equation}
is zero for any $\delta>0$ sufficiently close to $0$. Let us first notice that for any $j \in {\mathbb Z}$
\[
H_*(\bar{U}_\Omega(a-\varepsilon:j, k+1-j), \bar{U}_\Omega(\delta:j, k+1-j)) \cong H_*(\partial_+\Omega(a-\varepsilon:j,k+1-j), \partial_+\Omega(\delta:j,k+1-j))
\]
by Lemma \ref{lem_barU}. Then we have the following observations:
\begin{itemize}
\item[(a):] $H_*(\bar{U}_\Omega(a-\varepsilon:j, k+1-j), \bar{U}_\Omega(\delta:j, k+1-j))=0$ unless $*=0$.
\item[(b):] Any element of $H_0(\bar{U}_\Omega(a-\varepsilon:j, k+1-j), \bar{U}_\Omega(\delta:j, k+1-j))$ can be written as $[a^1p_1+a^2p_2]$ with $a^1, a^2 \in {\mathbb Q}$, where $p_1:=(\rho(0), 0)$ and $p_2:=(0, \rho(\pi/2))$.
\end{itemize}
(a) holds since $\Omega$ is convex and $a-\varepsilon < a \le \max_{p \in \Omega} A_{j, k+1-j}(p)$. (b) holds since $\Omega$ is convex. Now consider the filtration on $C^{\Omega, [\delta, a-\varepsilon)}_*$ as in Lemma \ref{lem_F_m}. By (a) and Remark \ref{rem_E_1}, if $E^1_{p,q} \ne 0$ and $p+q=2k+1$, then $p=k+1$, $q=k$. Moreover there exists a natural isomorphism
\[
E^1_{k+1,k} \cong \bigoplus_{j \in {\mathbb Z}} H_0(\bar{U}_\Omega(a-\varepsilon:j, k+1-j), \bar{U}_\Omega(\delta:j, k+1-j)) \otimes H_0(S^1).
\]
Note that we can replace $U_\Omega$ with $\bar{U}_\Omega$ due to Lemma \ref{lem_barU}. We are going to prove the following claim:
\begin{quote}
If $x= \sum_{j \in {\mathbb Z}} x_{j, k+1-j} \otimes e_0 \in E^1_{k+1,k}$ satisfies $\partial_{E^1}(x)=0$, then $x_{j, k+1-j}=0$ for any $1 \le j \le k$.
\end{quote}
By (b), for any $j \in {\mathbb Z}$ there exist $a^1_j, a^2_j \in {\mathbb Q}$ such that
\[
x_{j, k+1-j} = [a^1_j p_1 + a^2_j p_2] \in H_0(\bar{U}_\Omega(a-\varepsilon:j, k+1-j), \bar{U}_\Omega(\delta: j, k+1-j)).
\]
Let us prove $[a^1_jp_1]=0$ for $0 \le j \le k$. $[a^1_0p_1]=0$ since $[p_1]=0$ in $H_0(\bar{U}_\Omega(a-\varepsilon:0, k+1), \bar{U}_\Omega(\delta:0,k+1))$. Then it is sufficient to show that if $0 \le j \le k-1$ and $[a^1_jp_1]=0$ then $[a^1_{j+1}p_1]=0$. This follows from $(\partial_{E^1}x)_{j, k-j}=0$, $k-j \ne 0$ and the following claim: $[\alpha p_1 + \beta p_2]=0 \implies [\alpha p_1]=[\beta p_2]=0$ in $H_0(\bar{U}_\Omega(a-\varepsilon:j,k-j), \bar{U}_\Omega(\delta:j,k-j))$. This claim holds since for $c \in C^0([0,1], \bar{U}_\Omega)$ such that $c(0)=p_1$ and $c(1)=p_2$, there holds
\[
\max_{t \in [0,1]} A_{j, k-j}(c(t)) \ge \max_{p \in \Omega} A_{j,k-j}(p) \ge a > a-\varepsilon.
\]
Now we have proved that $[a^1_jp_1]=0$ for any $0 \le j \le k$. By similar arguments, one can prove that $[a^2_j p_2]=0$ for any $1 \le j \le k+1$. Then, for any $1 \le j \le k$, we obtain $x_{j, k+1-j} = [ a^1_j p_1 + a^2_j p_2] = 0$. This finishes the proof of the claim.
Finally, consider the filtration on $C^{\Omega, [\delta, \infty)}_*$ as in Lemma \ref{lem_F_m}. As in the proof of Proposition \ref{prop_plus} (i), there exist natural isomorphisms
\[
E^1_{k+1,k} \cong \bigoplus_{j \in {\mathbb Z} } H_0(\bar{U}_\Omega, \bar{U}_\Omega(\delta:j, k+1-j)) \otimes H_0(S^1),
\]
and
\[
H_0(\bar{U}_\Omega, \bar{U}_\Omega(\delta:j, k+1-j)) \cong
\begin{cases}
H_0(\text{\rm pt}) &(1 \le j \le k), \\
0 &(\text{otherwise}).
\end{cases}
\]
Thus the above claim implies that the image of (\ref{eqn_convex}) is zero. This completes the proof of $c_k(\Omega) \ge a-\varepsilon$.
\end{proof}
\begin{cor}\label{cor_asymptotics}
For any $\Omega \in \mca{S}^2$,
\[
\lim_{k \to \infty} \frac{c_k(\Omega)}{k} = \max_{(x_1, x_2) \in \Omega} \min\{ x_1, x_2\}.
\]
\end{cor}
\begin{proof}
Let $a:=\max_{(x_1, x_2) \in \Omega} \min\{ x_1, x_2\}$. It is sufficient to show that, for any $\varepsilon>0$, there holds $\limsup_{k \to \infty} \frac{c_k(\Omega)}{k} \le a+\varepsilon$ and $\liminf_{k \to \infty} \frac{c_k(\Omega)}{k} \ge a-\varepsilon$.
There exists $\Omega' \in \mca{S}^2$ which is concave and
\[
\Omega \subset \Omega' \subset \{ (x_1, x_2) \in ({\mathbb R}_{\ge 0})^2 \mid \min \{x_1, x_2\} \le a+ \varepsilon \}.
\]
Then, for any $k \ge 1$,
\[
c_k(\Omega) \le c_k(\Omega') \le (k+1)(a+\varepsilon),
\]
where the second inequality follows from Proposition \ref{prop_concave}. Thus $\limsup_{k \to \infty} \frac{c_k(\Omega)}{k} \le a+\varepsilon$.
There exist $(x_1, x_2)$ and $\Omega' \in \mca{S}^2$ such that $\min \{x_1, x_2\} \ge a-\varepsilon$, $(x_1, x_2) \in \Omega' \subset \Omega$ and $\Omega'$ is weakly convex. Then, for any $k \ge 1$,
\[
c_k(\Omega) \ge c_k(\Omega') \ge k(a-\varepsilon),
\]
where the second inequality follows from Proposition \ref{prop_convex}, which implies $\liminf_{k \to \infty} \frac{c_k(\Omega)}{k} \ge a-\varepsilon$.
\end{proof}
\end{document}
\begin{document}

\title{On the Optimality of Batch Policy Optimization Algorithms}

\begin{abstract}
Batch policy optimization considers leveraging existing data for policy construction before interacting with an environment. Although interest in this problem has grown significantly in recent years, its theoretical foundations remain under-developed. To advance the understanding of this problem, we provide three results that characterize the limits and possibilities of batch policy optimization in the finite-armed stochastic bandit setting. First, we introduce a class of \emph{confidence-adjusted index} algorithms that unifies optimistic and pessimistic principles in a common framework, which enables a general analysis. For this family, we show that \emph{any} confidence-adjusted index algorithm is minimax optimal, whether it be optimistic, pessimistic or neutral. Our analysis reveals that instance-dependent optimality, commonly used to establish optimality of \emph{on-line} stochastic bandit algorithms, \emph{cannot be achieved by any algorithm} in the batch setting. In particular, for any algorithm that performs optimally in some environment, there exists another environment where the same algorithm suffers arbitrarily larger regret. Therefore, to establish a framework for distinguishing algorithms, we introduce a new \emph{weighted-minimax} criterion that considers the inherent difficulty of optimal value prediction. We demonstrate how this criterion can be used to justify commonly used pessimistic principles for batch policy optimization.
\end{abstract}
\begingroup
\section{Introduction}
We consider the problem of \emph{batch policy optimization}, where a learner must infer a behavior policy given only access to a fixed dataset of previously collected experience, with no further environment interaction available. Interest in this problem has grown recently, as effective solutions hold the promise of extracting powerful decision making strategies from years of logged experience, with important applications to many practical problems \citep{strehl11learning,swaminathan2015batch,covington2016deep,jaques2019way,levine2020offline}. Despite the prevalence and importance of batch policy optimization, the theoretical understanding of this problem has, until recently, been rather limited.
A fundamental challenge in batch policy optimization is the insufficient coverage of the dataset. In online reinforcement learning (RL), the learner is allowed to continually explore the environment to collect useful information for the learning tasks. By contrast, in the batch setting, the learner has to evaluate and optimize over various candidate policies based only on experience that has been collected a priori. The distribution mismatch between the logged experience and agent-environment interaction with a learned policy can cause erroneous value overestimation, which leads to the failure of standard policy optimization methods \citep{fujimoto2019off}.
To overcome this problem, recent studies propose to use the \emph{pessimistic principle}, by either learning a pessimistic value function \citep{swaminathan2015batch,wu2019behavior,jaques2019way,kumar2019stabilizing,kumar2020conservative} or pessimistic surrogate \citep{BuGeBe20}, or planning with a pessimistic model \citep{KiRaNeJo20,yu2020mopo}. However, it still remains unclear how to maximally exploit the logged experience without further exploration. In this paper, we investigate batch policy optimization with finite-armed stochastic bandits, and make three contributions toward better understanding the statistical limits of this problem.
\emph{First}, we prove a minimax lower bound of $\Omega(1/\sqrt{\min_i n_i})$ on the simple regret for batch policy optimization with stochastic bandits, where $n_i$ is the number of times arm $i$ was chosen in the dataset. We then introduce the notion of a confidence-adjusted index algorithm that unifies both the optimistic and pessimistic principles in a single algorithmic framework. Our analysis suggests that any index algorithm with an appropriate adjustment, whether pessimistic or optimistic, is minimax optimal.
\emph{Second}, we analyze the instance-dependent regret of batch policy optimization algorithms. Perhaps surprisingly, our main result shows that instance-dependent optimality, which is commonly used in the literature on minimizing cumulative regret of stochastic bandits, does not exist in the batch setting. Together with our first contribution, this finding challenges recent theoretical findings in batch RL that claim pessimistic algorithms are an optimal choice \citep[e.g.,][]{BuGeBe20,jin2020pessimism}. In fact, our analysis suggests that for any algorithm that performs optimally in some environment, there must always exist another environment where the algorithm suffers arbitrarily larger regret than an optimal strategy there.
Therefore, any reasonable algorithm is equally optimal, or not optimal, depending on the exact problem instance the algorithm is facing. In this sense, for batch policy optimization, there remains a lack of a well-defined optimality criterion that can be used to choose between algorithms.
\emph{Third}, we provide a characterization of the pessimistic algorithm by introducing a weighted-minimax objective. In particular, the pessimistic algorithm can be considered to be optimal in the sense that it achieves a regret that is comparable to the inherent difficulty of optimal value prediction on an instance-by-instance basis. Overall, the theoretical study we provide consolidates recent research findings on the impact of being pessimistic in batch policy optimization \citep{BuGeBe20,jin2020pessimism,kumar2020conservative,KiRaNeJo20,yu2020mopo,liu2020provably,yin2021near}.
The remainder of the paper is organized as follows. After defining the problem setup in Section \ref{sec:setup}, we present the three main contributions outlined above in Sections \ref{sec:minimax} to \ref{sec:pessimism}. Section \ref{sec:related-work} discusses related work. Section 7 gives our conclusions.
\section{Problem setup}
\label{sec:setup}
To simplify the exposition, we express our results for batch policy optimization in the setting of stochastic finite-armed bandits. In particular, assume the action space consists of $k > 0$ arms, where the available data takes the form of \hbox{$n_i>0$} real-valued observations $X_{i,1},\dots,X_{i,n_i}$ for each arm $i\in [k]:=\{1,\dots,k\}$. This data represents the outcomes of $n_i$ pulls of each arm $i$. We assume further that the data for each arm $i$ is \emph{i.i.d.} with $X_{i,j}\sim P_i$ such that $P_i$ is the reward distribution for arm $i$. Let $\mu_i=\int x P_i(dx)$ denote the mean reward that results from pulling arm $i$.
All observations in the data set $X = (X_{ij})_{i\in [k],j\in [n_i]}$ are assumed to be independent. We consider the problem of designing an algorithm that takes the counts $(n_i)_{i\in[k]}$ and observations $X\in \times_{i\in [k]} \mathbb{R}^{n_i}$ as inputs and returns the index of a single arm in $[k]$, where the goal is to select an arm with the highest mean reward. Let ${\mathcal{A}}(X)\in[k]$ be the output of algorithm ${\mathcal{A}}$. The (simple) regret of ${\mathcal{A}}$ is defined as
\begin{align*}
{\mathcal{R}}({\mathcal{A}}, \theta) = \mu^* - \mathbb{E}_{X\sim \theta}[ \mu_{{\mathcal{A}}(X)} ]\, ,
\end{align*}
where $\mu^*=\max_i \mu_i$ is the maximum reward. Here, the expectation $\mathbb{E}_{X\sim \theta}$ considers the randomness of the data $X$ generated from problem instance $\theta$, and also any randomness in the algorithm ${\mathcal{A}}$, which together induce the distribution of the random choice ${\mathcal{A}}(X)$. Note that this definition of regret depends both on the algorithm ${\mathcal{A}}$ and the problem instance $\theta = ((n_i)_{i\in [k]}, (P_i)_{i\in [k]})$. When $\theta$ is fixed, we will use ${\mathcal{R}}({\mathcal{A}})$ to reduce clutter. For convenience, we let $n = \sum_i n_i$ denote the total number of observations and $n_{\min}$ the minimum number of observations in the data. The optimal arm is $a^*$ and the suboptimality gap is $\Delta_i = \mu^* - \mu_i$. The largest and smallest non-zero gaps are $\Delta_{\max}=\max_i \Delta_i$ and $\Delta_{\min}=\min_{i:\Delta_i>0}\Delta_i$. In what follows, we assume that the distributions $P_i$ are \hbox{1-subgaussian} with means in the unit interval $[0,1]$. We denote the set of these distributions by ${\mathcal{P}}$. The set of all instances where the distributions satisfy these properties is denoted by $\Theta$. The set of instances with $\mathbf{n}=(n_i)_{i\in [k]}$ fixed is denoted by $\Theta_{\mathbf{n}}$. Thus, $\Theta = \cup_{\mathbf{n}} \Theta_{\mathbf{n}}$.
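The regret defined above can be estimated by Monte Carlo on a synthetic instance. The sketch below is illustrative only: it assumes Gaussian rewards (which are 1-subgaussian) and uses the greedy rule as a placeholder algorithm; all names are ours, not the paper's.

```python
import random

def simple_regret(alg, mus, counts, trials=2000, seed=0):
    """Monte Carlo estimate of R(alg, theta) = mu* - E[mu_{alg(X)}] on a
    Gaussian instance theta with means mus and per-arm sample sizes counts."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        # draw the batch data set X: n_i i.i.d. N(mu_i, 1) samples per arm i
        X = [[rng.gauss(mu, 1.0) for _ in range(n)] for mu, n in zip(mus, counts)]
        total += mus[alg(X)]
    return max(mus) - total / trials

# placeholder algorithm: greedy, i.e. pick the arm with the largest sample mean
greedy = lambda X: max(range(len(X)), key=lambda i: sum(X[i]) / len(X[i]))
r = simple_regret(greedy, mus=[0.9, 0.5], counts=[50, 50])
```

With gap $0.4$ and $50$ samples per arm, the greedy choice is wrong with probability roughly $\Phi(-0.4/0.2) \approx 0.02$, so the estimated regret is a small fraction of the gap.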
Finally, we define $|\mathbf{n}|=\sum_i n_i$ for $\mathbf{n}=(n_i)_{i\in [k]}$. \section{Minimax Analysis} \label{sec:minimax} In this section, we introduce the notion of a \emph{confidence-adjusted index algorithm}, and prove that a broad range of such algorithms are minimax optimal up to a logarithmic factor. A confidence-adjusted index algorithm is one that calculates an index for each arm based on the data for that arm only, then chooses an arm that maximizes the index. We consider index algorithms where the index of arm $i\in [k]$ is defined as the sum of the sample mean of that arm, $\hat{\mu}_i = \frac1{n_i} \sum_{j=1}^{n_i} X_{i,j}$, and a bias term of the form $\alpha/\sqrt{n_i}$ with $\alpha \in \mathbb{R}$. That is, given the input data $X$, the algorithm selects an arm according to \begin{align*} \argmax_{i\in[k]}\ \hat{\mu}_i + \frac{\alpha}{\sqrt{n_i}} \, . \addeq\label{eq:index} \end{align*} We call these algorithms confidence-adjusted because, for a given confidence level $\delta>0$, Hoeffding's inequality implies that \begin{align*} \mu_i\in\left[ \hat{\mu}_i - \frac{\beta_\delta}{\sqrt{n_i}} , \,\, \hat{\mu}_i + \frac{\beta_\delta}{\sqrt{n_i}} \right] \addeq\label{eq:confidence-interval} \end{align*} holds with probability at least $1-\delta$ simultaneously for all arms, where \begin{align*} \beta_\delta = \sqrt{2\log\left(\frac{k}{\delta}\right)}\, . \end{align*} Thus, the family of confidence-adjusted index algorithms consists of all algorithms that follow this strategy, where each particular algorithm is defined by a (data-independent) choice of $\alpha$. For example, the algorithm specified by $\alpha=-\beta_\delta$ chooses the arm with the highest lower-confidence bound (highest LCB value), while the algorithm specified by $\alpha=\beta_\delta$ chooses the arm with the highest upper-confidence bound (highest UCB value). Note that $\alpha=0$ corresponds to what is known as the \emph{greedy} (sample-mean maximizing) choice.
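The index rule in \eqref{eq:index} is straightforward to evaluate numerically. The following is a minimal sketch (our own illustration, not code from the paper) assuming unit-variance Gaussian rewards; the function name is ours.

```python
import numpy as np

def confidence_adjusted_index(X, counts, alpha):
    """Select an arm by maximizing  mu_hat_i + alpha / sqrt(n_i).

    X is a list of 1-D arrays of per-arm observations; counts[i] = n_i.
    alpha < 0 gives LCB (pessimism), alpha > 0 gives UCB (optimism),
    and alpha = 0 is the greedy (sample-mean) choice.
    """
    counts = np.asarray(counts, dtype=float)
    mu_hat = np.array([np.mean(x) for x in X])
    index = mu_hat + alpha / np.sqrt(counts)
    return int(np.argmax(index))

# A toy data set: arm 0 is heavily sampled, arm 1 barely sampled.
rng = np.random.default_rng(0)
counts = [50, 5]
means = [0.6, 0.7]
X = [rng.normal(m, 1.0, size=n) for m, n in zip(means, counts)]

k, delta = len(counts), 0.1
beta = np.sqrt(2 * np.log(k / delta))   # beta_delta from the text
lcb_pick = confidence_adjusted_index(X, counts, -beta)
ucb_pick = confidence_adjusted_index(X, counts, +beta)
```

On data like this, the LCB choice ($\alpha=-\beta_\delta$) tends to favor the well-sampled arm, while the UCB choice ($\alpha=+\beta_\delta$) tends to favor the under-sampled one.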
Readers familiar with the literature on batch policy optimization will recognize that $\alpha=-\beta_\delta$ implements what is known as the pessimistic algorithm \citep{jin2020pessimism,BuGeBe20,KiRaNeJo20,yin2021near}, also called the distributionally robust or risk-averse strategy. It is therefore natural to question the utility of considering batch policy optimization algorithms that \emph{maximize} UCB values (i.e., implement optimism in the face of uncertainty, or risk-seeking behavior, even though there is no opportunity for exploration). However, our first main result is that for batch policy optimization a risk-seeking (or greedy) algorithm cannot be distinguished from the more commonly proposed pessimistic approach in terms of minimax regret. To establish this finding, we first provide a lower bound on the minimax regret: \begin{theorem} \label{thm:minmax-lb} Fix $\mathbf{n} = (n_i)_{i\in [k]}$ with $n_1 \leq \cdots \leq n_k$. Then, there exists a universal constant $c > 0$ such that \begin{align*} \inf_{{\mathcal{A}}} \sup_{\theta\in \Theta_{\mathbf{n}}} {\mathcal{R}}({\mathcal{A}}, \theta) \geq c \max_{m \in [k]} \sqrt{\frac{\max(1, \log(m))}{n_m}} \,. \end{align*} \end{theorem} The assumption of increasing counts, $n_1 \leq \cdots \leq n_k$, is only needed to simplify the statement; the arm indices can always be re-ordered without loss of generality. The proof follows by arguing that the minimax regret is lower bounded by the Bayesian regret of the Bayes-optimal policy for any prior. Then, with a judicious choice of prior, the Bayes-optimal policy has a simple form. Intuitively, the available data permits estimation of the mean of action $a$ with accuracy $O(\sqrt{1/n_a})$. The additional logarithmic factor appears when $n_1,\ldots,n_m$ are relatively close, in which case the lower bound demonstrates the necessity of the union bound that appears in the upper bound that follows. The full proof appears in the supplementary material.
Next we show that a wide range of confidence-adjusted index algorithms are nearly minimax optimal when their confidence parameter is properly chosen: \begin{theorem} Fix $\mathbf{n} = (n_i)_{i\in [k]}$. Let $\delta$ be the solution of $\delta = \sqrt{32 \log(k/\delta)/\min_i n_i}$, and let ${\mathcal{I}}(\alpha)$ be the confidence-adjusted index algorithm with parameter $\alpha$. Then, for any $\alpha\in [-\beta_\delta,\beta_\delta]$, we have \begin{align*} \sup_{\theta\in \Theta_{\mathbf{n}}} {\mathcal{R}}({\mathcal{I}}(\alpha),\theta) \le 12 \sqrt{\frac{\log(k/\delta)}{\min_i n_i}}\,. \end{align*} \label{thm:minimax-upper} \end{theorem} \begin{remark} Theorem~\ref{thm:minimax-upper} also holds for algorithms that use a different $\alpha_i\in [-\beta_\delta,\beta_\delta]$ for each arm. \end{remark} Perhaps a little unexpectedly, we see that \emph{regardless} of optimism vs.\ pessimism, index algorithms with the right amount of adjustment, or \emph{even no adjustment}, are minimax optimal, up to an order $\sqrt{\log(k n)}$ factor.
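The confidence level in the theorem is defined implicitly. As a quick numerical sketch (ours, not from the paper), one can solve $\delta = \sqrt{32\log(k/\delta)/\min_i n_i}$ by fixed-point iteration, assuming $n_{\min}$ is large enough (roughly $n_{\min} \ge 32\log k$) that the solution lies in $(0,1)$:

```python
import math

def solve_delta(k, n_min, iters=100):
    """Solve delta = sqrt(32 * log(k / delta) / n_min) by fixed-point iteration.

    Assumes n_min is large enough (roughly n_min >= 32 * log k) that the
    fixed point lies in (0, 1); otherwise the iteration may leave the domain.
    """
    delta = 1.0
    for _ in range(iters):
        delta = math.sqrt(32.0 * math.log(k / delta) / n_min)
    return delta

k, n_min = 10, 1000
delta = solve_delta(k, n_min)
# The resulting minimax upper bound from the theorem:
bound = 12.0 * math.sqrt(math.log(k / delta) / n_min)
```

The map $\delta \mapsto \sqrt{32\log(k/\delta)/n_{\min}}$ is decreasing, so the iterates oscillate toward the unique fixed point; for moderate $k$ and large $n_{\min}$ the contraction is fast.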
We note that although these algorithms share the same worst-case performance, they can behave very differently on individual instances, as we show in the next section. In effect, what these two results tell us is that minimax optimality is too weak a criterion to distinguish between pessimistic and optimistic (or greedy) algorithms in the ``fixed count'' setting of batch policy optimization. This leads us to ask whether more refined optimality criteria can provide nontrivial guidance in the selection of batch policy optimization methods. One such criterion, considered next, is known as instance-optimality in the literature on cumulative regret minimization for stochastic bandits. \section{Instance-Dependent Analysis} \label{sec:instance} To better distinguish between algorithms we require a more refined notion of performance that goes beyond worst-case behavior over all problem instances. Even if two algorithms have the same worst-case performance, they can behave very differently on individual instances. Therefore, we consider the instance-dependent performance of confidence-adjusted index algorithms. \subsection{Instance-dependent Upper Bound} Our next result provides a regret upper bound for a general form of index algorithm. All upper bounds in this section hold for any $\theta \in \Theta_{\mathbf{n}}$ unless otherwise specified, and we write ${\mathcal{R}}({\mathcal{A}})$ instead of ${\mathcal{R}}({\mathcal{A}}, \theta)$ to simplify the notation. \begin{theorem} Consider a general form of index algorithm, ${\mathcal{A}}(X)=\argmax_i \hat{\mu}_i + b_i$, where $b_i$ denotes the bias for arm $i\in[k]$ specified by the algorithm.
For $2\le i\le k$ and $\eta\in \mathbb{R}$, define \begin{align*} g_i(\eta) = \sum_{j\ge i} e^{-\frac{n_j}{2} \left(\eta - \mu_j - b_j\right)_+ ^2} + \min_{j < i} e^{-\frac{n_j}{2} \left(\mu_j + b_j - \eta \right)_+ ^2} \end{align*} and $g_i^* = \min_{\eta} g_i(\eta)$. Assuming $\mu_1\ge \mu_2\ge \cdots \ge\mu_k$, for the index algorithms (\ref{eq:index}) we have \begin{align*} \Prb{{\mathcal{A}}(X)\ge i} \le \min\{ 1, g_i^* \} \addeq\label{eq:ub-cdf-general} \end{align*} and \begin{align*} {\mathcal{R}}({\mathcal{A}}) \le \sum_{2\le i \le k} \Delta_i \left(\min\{ 1, g_i^* \} - \min\{ 1, g_{i+1}^* \} \right) \addeq\label{eq:ub-exp-general} \end{align*} where we define $g_{k+1}^*=0$. \label{thm:instance-upper-general} \end{theorem} The assumption $\mu_1 \geq \mu_2 \geq \dots \geq \mu_k$ is only required to express the statement simply; the indices can be reordered without loss of generality. The expression in \eqref{eq:ub-cdf-general} is somewhat difficult to work with, so to simplify the subsequent analysis we provide a looser but more interpretable bound for general index algorithms as follows.
\begin{corollary} Following the setting of Theorem~\ref{thm:instance-upper-general}, consider any index algorithm and any $\delta\in(0, 1)$. Define $U_i=\mu_i + b_i + \beta_{\delta}/\sqrt{n_i}$ and $L_i = \mu_i + b_i - \beta_{\delta}/\sqrt{n_i}$. Let $h = \max\{i\in[k]: \max_{j<i} L_j < \max_{j'\ge i} U_{j'}\}$. Then we have \begin{align*} & {\mathcal{R}}({\mathcal{A}}) \le \Delta_h + \frac{\delta}{k}\Delta_{\max} \\ & + \frac{\delta}{k} \sum_{i>h}(\Delta_i - \Delta_{i-1}) \sum_{j\ge i} e^{-\frac{n_j}{2}\left( \max_{j'<i}L_{j'} - U_j \right)^2 } \,. \end{align*} \label{coro:instance-upper-general-simplified} \end{corollary} \begin{remark} The upper bound in Corollary~\ref{coro:instance-upper-general-simplified} can be further relaxed to ${\mathcal{R}}({\mathcal{A}}) \le \Delta_h + \delta \Delta_{\max}$. \end{remark} \begin{remark} \label{remark:recover-minimax} The minimax regret upper bound (Theorem~\ref{thm:minimax-upper}) can be recovered as a consequence of Corollary~\ref{coro:instance-upper-general-simplified} (see supplement). \end{remark} Corollary~\ref{coro:instance-upper-general-simplified} highlights an inherent optimization property of index algorithms: they work by designing an additive adjustment for each arm such that all of the bad arms ($i>h$) are eliminated efficiently, i.e., it is desirable to make $h$ as small as possible. We note that although one can directly plug the specific choices of $\{b_i\}_{i\in \iset{k}}$ into the bound to obtain instance-dependent upper bounds for different algorithms, it is not immediately clear how their performance compares. Therefore, we provide simpler relaxed upper bounds for the three specific cases, greedy, LCB and UCB, to better differentiate their performance across problem instances (see supplement for details).
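The quantities $U_i$, $L_i$, the index $h$, and the relaxed bound ${\mathcal{R}}({\mathcal{A}}) \le \Delta_h + \delta\Delta_{\max}$ of Corollary~\ref{coro:instance-upper-general-simplified} can all be evaluated directly for a given instance. The following numerical sketch is our own illustration (arms assumed sorted by decreasing mean, as in the theorem):

```python
import numpy as np

def relaxed_bound(mu, counts, b, delta):
    """Evaluate the relaxed bound  R(A) <= Delta_h + delta * Delta_max
    (a numerical sketch; mu must satisfy mu[0] >= mu[1] >= ...)."""
    mu, n, b = map(np.asarray, (mu, counts, b))
    k = len(mu)
    beta = np.sqrt(2.0 * np.log(k / delta))       # beta_delta
    U = mu + b + beta / np.sqrt(n)
    L = mu + b - beta / np.sqrt(n)
    # h = max{ i in [k] : max_{j<i} L_j < max_{j'>=i} U_j' }, 1-indexed;
    # the condition holds vacuously at i = 1, so h >= 1 always.
    h = 1
    for i in range(2, k + 1):
        if np.max(L[: i - 1]) < np.max(U[i - 1:]):
            h = i
    gaps = mu[0] - mu                              # Delta_i under the sorting
    return gaps[h - 1] + delta * gaps.max()
```

With large counts and no bias ($b_i=0$), $h$ collapses to 1 and the bound is essentially $\delta\Delta_{\max}$; with a single observation per arm, $h=k$ and the bound degrades to $\Delta_k + \delta\Delta_{\max}$, matching the intuition that index algorithms succeed by eliminating the arms above $h$.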
\begin{corollary}[Regret Upper bound for Greedy] Following the setting of Theorem~\ref{thm:instance-upper-general}, for any $0<\delta<1$, the regret of greedy ($\alpha=0$) on any problem instance is upper bounded by \begin{align*} {\mathcal{R}}({\mathcal{A}}) \le \min_{i\in \iset{k}} \left(\Delta_{i} + \sqrt{\frac{2}{n_{i}}\log\frac{k}{\delta}} + \max_{j>i} \sqrt{\frac{2}{n_{j}}\log\frac{k}{\delta}} \right) + \delta \, . \end{align*} \label{coro:ub-greedy-instance} \end{corollary} \begin{corollary}[Regret Upper bound for LCB] Following the setting of Theorem~\ref{thm:instance-upper-general}, for any $0<\delta<1$, the regret of LCB ($\alpha=-\beta_\delta$) on any problem instance is upper bounded by \begin{align*} {\mathcal{R}}({\mathcal{A}})\leq \min_{i\in \iset{k}} \Delta_i + \sqrt{\frac{8}{n_{i}}\log\frac{k}{\delta}} + \delta\, . \end{align*} \label{coro:ub-lcb-instance} \end{corollary} \begin{corollary}[Regret Upper bound for UCB] Following the setting of Theorem~\ref{thm:instance-upper-general}, for any $0<\delta<1$, the regret of UCB ($\alpha=\beta_\delta$) on any problem instance is upper bounded by \begin{align*} {\mathcal{R}}({\mathcal{A}}) \le \min_{i \in \iset{k}} \left( \Delta_i + \max_{j > i} \sqrt{\frac{8}{n_{j}}\log\frac{k}{\delta}} \right) + \delta \, . \end{align*} \label{coro:ub-ucb-instance} \end{corollary} \begin{remark} The results in these corollaries sacrifice some tightness of instance-dependence to obtain cleaner bounds for the different algorithms. The tightest instance-dependent bounds can be derived from Theorem~\ref{thm:instance-upper-general} by optimizing $\eta$. \end{remark} \paragraph{Discussion.} The regret upper bounds presented above suggest that although they are all nearly minimax optimal, UCB, LCB and greedy exhibit distinct behavior on individual instances. Each will eventually select the best arm with high probability when $n_i$ gets large for \emph{all} $i\in \iset{k}$, but their performance can be very different when $n_i$ gets large only for a \emph{subset} of arms $S\subset \iset{k}$. For example, LCB performs well whenever $S$ contains a good arm (i.e., one with small $\Delta_i$ and large $n_i$). UCB performs well when there is a good arm $i$ such that all worse arms are in $S$ ($n_j$ large for all $j>i$). For the greedy algorithm, the regret upper bound is small only when there is a good arm $i$ where $n_j$ is large for all $j \ge i$, in which situation both LCB and UCB perform well. Clearly there are instances where LCB performs much better than UCB and vice versa. Consider an environment where there are two groups of arms: one with higher rewards and another with lower rewards. The behavior policy plays a subset of the arms $S\subset\iset{k}$ a large number of times and ignores the rest. If $S$ contains at least one good arm but no bad arm, LCB will select a good played arm~(with high probability) while UCB will select a bad unplayed arm.
If $S$ consists of all bad arms, then LCB will select a bad arm by being pessimistic about the unobserved good arms, while UCB is guaranteed to select a good arm by being optimistic. This example actually raises a potential reason to favor LCB, since the condition for UCB to outperform LCB is stricter: it requires the behavior policy to play all bad arms while ignoring all good arms. To formalize this, we compare the upper bounds for the two algorithms by taking the $n_i$ for a subset of arms $i\in S\subset \iset{k}$ to infinity. For ${\mathcal{A}}\in \{\textnormal{greedy}, \textnormal{LCB}, \textnormal{UCB}\}$, let $\hat{{\mathcal{R}}}_S({\mathcal{A}})$ be the regret upper bound with $\{n_i\}_{i\in S}\to\infty$ and $\{n_i\}_{i\notin S} = 1$ while fixing $\mu_1,\dots,\mu_k$ in Corollaries~\ref{coro:ub-greedy-instance}, \ref{coro:ub-lcb-instance}, and \ref{coro:ub-ucb-instance}, respectively. Then LCB dominates the three algorithms with high probability under a uniform prior for $S$: \begin{proposition} Suppose $\mu_1>\mu_2>\cdots>\mu_k$ and $S\subset \iset{k}$ is uniformly sampled from all subsets of size $m<k$. Then \begin{align*} \Prb{\hat{{\mathcal{R}}}_S(\textnormal{LCB}) < \hat{{\mathcal{R}}}_S(\textnormal{UCB})} \ge 1 - \frac{(k-m)!m!}{k!} \,. \end{align*} \label{prop:lcb-vs-ucb} \end{proposition} This lower bound is $1/2$ when $k=2$ and approaches $1$ as $k$ increases for any $0 < m < k$, since it is always lower bounded by $1 - 1/k$. The same argument applies when comparing LCB to greedy. To summarize, when comparing the algorithms by their upper bounds, we make the following observations: (i) the algorithms behave differently on different instances, and none outperforms the others on all instances; (ii) there exist both scenarios where LCB is better and scenarios where UCB is better; (iii) LCB is more favorable when $k$ is not too small because it is the best option among these algorithms on most instances.
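To make the comparison concrete, the following Python sketch (our own illustration, not code from the paper) evaluates the lower bound $1-(k-m)!\,m!/k! = 1 - 1/\binom{k}{m}$ of Proposition~\ref{prop:lcb-vs-ucb}, and runs a small Monte Carlo estimate of simple regret for LCB and UCB on the two-group scenario described above, assuming unit-variance Gaussian rewards.

```python
import numpy as np
from math import comb

def lcb_vs_ucb_lower_bound(k, m):
    """Lower bound of the proposition:  1 - (k-m)! m! / k!  =  1 - 1/C(k, m)."""
    return 1.0 - 1.0 / comb(k, m)

def mc_regret(mu, counts, alpha, trials=2000, seed=0):
    """Monte Carlo simple regret of  argmax_i  muhat_i + alpha / sqrt(n_i),
    where muhat_i ~ N(mu_i, 1/n_i) is the sample mean under unit-variance
    Gaussian rewards."""
    rng = np.random.default_rng(seed)
    mu = np.asarray(mu, dtype=float)
    n = np.asarray(counts, dtype=float)
    total = 0.0
    for _ in range(trials):
        mu_hat = mu + rng.standard_normal(len(mu)) / np.sqrt(n)
        pick = np.argmax(mu_hat + alpha / np.sqrt(n))
        total += mu.max() - mu[pick]
    return total / trials

mu = [1.0, 1.0, 0.0, 0.0]          # two good arms, two bad arms
beta = np.sqrt(2 * np.log(len(mu) / 0.1))
covered_good = [100, 100, 1, 1]    # behavior policy plays the good arms
covered_bad = [1, 1, 100, 100]     # behavior policy plays only the bad arms
lcb_when_good = mc_regret(mu, covered_good, -beta)
ucb_when_good = mc_regret(mu, covered_good, +beta)
lcb_when_bad = mc_regret(mu, covered_bad, -beta)
ucb_when_bad = mc_regret(mu, covered_bad, +beta)
```

With these counts, LCB incurs low regret when the covered set contains a good arm and high regret in the reverse case, with UCB behaving in the opposite way, matching the discussion above.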
\paragraph{Simulation results.} Since our discussion so far compares only the upper bounds~(instead of the exact regret) of the different algorithms, it is natural to ask whether these statements still hold in terms of actual performance. To answer this question, we verify the statements through experiments on synthetic problems. The details of these synthetic experiments can be found in the supplementary material. We first verify that there exist instances where LCB is the best among the three algorithms, as well as instances where UCB is the best. For LCB to perform well, we construct two $\epsilon$-greedy behavior policies on a $100$-arm bandit where the best arm or a near-optimal arm is selected to be played with high frequency while the other arms are played uniformly with low frequency. Figures~\ref{fig:lcb-1} and~\ref{fig:lcb-2} show that LCB outperforms UCB and greedy on these two instances, verifying our observation from the upper bound~(Corollary~\ref{coro:ub-lcb-instance}) that LCB only requires a good behavior policy, while UCB and greedy require bad arms to be eliminated~(which is not the case for $\epsilon$-greedy policies). For UCB to outperform LCB, we set the behavior policy to play a set of near-optimal arms only a small number of times and to play the rest of the arms uniformly. Figures~\ref{fig:ucb-1} and~\ref{fig:ucb-2} show that UCB outperforms LCB and greedy on these two instances, verifying our observation from the upper bound~(Corollary~\ref{coro:ub-ucb-instance}) that UCB only requires all worse arms to be identified. We now verify the statement that LCB is the best option on most instances when $k$ is not too small. We verify this in two steps: First, we show that when $k=2$, LCB and UCB have an equal chance of being the better algorithm.
More specifically, we fix $n_1 > n_2$~(note that if $n_1=n_2$ all index algorithms coincide with greedy) and vary $\mu_1 - \mu_2$ from $-1$ to $1$. Intuitively, when $|\mu_1-\mu_2|$ is large, the problem is relatively easy for all algorithms. For $\mu_1-\mu_2$ in the medium range, as it becomes larger, the good arm is tried more often, so the problem becomes easier for LCB and harder for UCB. Figures~\ref{fig:two-arm-1} and~\ref{fig:two-arm-2} confirm this and show that LCB and UCB are each the best option on half of the instances. Second, we show that as $k$ grows, LCB quickly becomes the more favorable algorithm, outperforming UCB and greedy on an increasing fraction of instances. More specifically, we vary $k$ and sample a set of instances from the prior distribution introduced in Proposition~\ref{prop:lcb-vs-ucb} with $|S|=k/2$ and $|S|=k/4$. Figures~\ref{fig:frac-1} and~\ref{fig:frac-2} show that the fraction of instances where LCB is the best quickly approaches $1$ as $k$ increases. \begin{figure*} \caption{Comparing UCB, LCB and greedy on synthetic problems~(with $k=100$). (a) and (b): Problem instances where LCB has the best performance. The data set is generated by a behavior policy that pulls an arm~$i$ with \dots} \label{fig:lcb-1} \label{fig:lcb-2} \label{fig:ucb-1} \label{fig:ucb-2} \label{fig:bandit} \end{figure*} \begin{figure*} \caption{Comparing UCB, LCB and greedy on synthetic problems. (a) and (b): A set of two-armed bandit instances where LCB and UCB each dominate half of the instances. (c) and (d): For each $k$, we first sample 100 vectors \dots} \label{fig:two-arm-1} \label{fig:two-arm-2} \label{fig:frac-1} \label{fig:frac-2} \label{fig:two-armed-bandit} \end{figure*} \input{instance_negative} \section*{Instance dependent upper bounds} \begin{theorem} Consider index algorithms where $I_i = \hat{\mu}_i + b_i$ and ${\mathcal{A}}(X) = \argmax_i I_i$. For each $2\le i\le k$, define \begin{align*} g_i(\eta) = \sum_{j\ge i} e^{-\frac{n_j}{2} \left(\eta - \mu_j - b_j\right)_+ ^2} + \min_{j < i} e^{-\frac{n_j}{2} \left(\mu_j + b_j - \eta \right)_+ ^2} \end{align*} and $g_i^* = \min_{\eta} g_i(\eta)$.
Assuming $\mu_1\ge \mu_2\ge \cdots \ge \mu_k$, we have \begin{align*} \Prb{{\mathcal{A}}(X)\ge i} \le \min\{ 1, g_i^* \} \addeq\label{eq:ub-cdf-general} \end{align*} and \begin{align*} {\mathcal{R}}({\mathcal{A}}) \le \sum_{2\le i \le k} \Delta_i \left(\min\{ 1, g_i^* \} - \min\{ 1, g_{i+1}^* \} \right) \addeq\label{eq:ub-exp-general} \end{align*} where we define $g_{k+1}^*=0$. \label{thm:instance-upper-general} \end{theorem} Note that \eqref{eq:ub-cdf-general} and \eqref{eq:ub-exp-general} already give a well-defined bound on the CDF of the regret and on the expected regret, although not a particularly interpretable one. Next we instantiate the bound for specific choices of index functions $I_i = \hat{\mu}_i + b_i$. \begin{corollary}[Worst-case for any index algorithm.] Consider any index algorithm where $I_i = \hat{\mu}_i + b_i$, ${\mathcal{A}}(X) = \argmax_i I_i$, and $b_i \in \left[ - \sqrt{\frac{2}{n_{i}}\log\frac{k}{\delta}}, \sqrt{\frac{2}{n_{i}}\log\frac{k}{\delta}} \right]$ for all $i\in \iset{k}$. We have \begin{align*} \mu^* - \mu_{{\mathcal{A}}(X)} \le \sqrt{\frac{32}{n_{\min}}\log\frac{k}{\delta}} \end{align*} with probability at least $1 - \delta$ for any $\theta$. \label{coro:ub-worst-case} \end{corollary} \begin{corollary}[Upper bound for LCB.]
Consider the LCB algorithm where $I_i = \hat{\mu}_i - \sqrt{\frac{2}{n_{i}}\log\frac{k}{\delta}}$ and ${\mathcal{A}}(X) = \argmax_i I_i$. We have \begin{align*} \mu^* - \mu_{{\mathcal{A}}(X)} \le \min_{i\in \iset{k}} \Delta_i + \sqrt{\frac{8}{n_{i}}\log\frac{k}{\delta}} \end{align*} with probability at least $1 - \delta$. \label{coro:ub-lcb-instance} \end{corollary} \begin{corollary}[Upper bound for Greedy.] Consider the greedy algorithm where ${\mathcal{A}}(X) = \argmax_i \hat{\mu}_i$. We have \begin{align*} & \mu^* - \mu_{{\mathcal{A}}(X)} \\ & \le \min_{i\in \iset{k}} \left(\Delta_{i} + \sqrt{\frac{2}{n_{i}}\log\frac{k}{\delta}} + \max_{j>i} \sqrt{\frac{2}{n_{j}}\log\frac{k}{\delta}} \right) \end{align*} with probability at least $1 - \delta$. \label{coro:ub-greedy-instance} \end{corollary} \begin{corollary}[Upper bound for UCB.] Consider the UCB algorithm where $I_i = \hat{\mu}_i + \sqrt{\frac{2}{n_{i}}\log\frac{k}{\delta}}$ and ${\mathcal{A}}(X) = \argmax_i I_i$. We have \begin{align*} \mu^* - \mu_{{\mathcal{A}}(X)} \le \min_{i \in \iset{k}} \left( \Delta_i + \max_{j > i} \sqrt{\frac{8}{n_{j}}\log\frac{k}{\delta}} \right) \end{align*} with probability at least $1 - \delta$. \label{coro:ub-ucb-instance} \end{corollary} \section*{Proof for upper bounds} \begin{proof}[Proof of Theorem~\ref{thm:instance-upper-general}] Assume $\mu_1\ge \mu_2 \ge \cdots \ge \mu_k$.
If we can bound $\Prb{{\mathcal{A}}(X)\ge i} \le B_i$ where $B_1=1 \ge B_2 \ge \cdots \ge B_k$, then we can write \begin{align*} {\mathcal{R}}({\mathcal{A}}) & = \sum_{2\le i \le k} \Delta_i \Prb{{\mathcal{A}}(X)=i} \\ & = \sum_{2\le i \le k} \Delta_i \left( \Prb{{\mathcal{A}}(X)\ge i} - \Prb{{\mathcal{A}}(X) \ge i + 1}\right) \\ & = \sum_{2 \le i \le k} \left(\Delta_i - \Delta_{i-1}\right) \Prb{{\mathcal{A}}(X)\ge i} \\ & \le \sum_{2 \le i \le k} \left(\Delta_i - \Delta_{i-1}\right) B_i \\ & = \sum_{2\le i \le k} \Delta_i (B_i - B_{i+1}) \,. \end{align*} To upper bound $\Prb{{\mathcal{A}}(X)\ge i}$, let $I_i$ be the index for algorithm ${\mathcal{A}}$, i.e., ${\mathcal{A}}(X) = \argmax_i I_i$. Then \begin{align*} \Prb{{\mathcal{A}}(X)\ge i} \le \Prb{\max_{j\ge i} I_j \ge \max_{j<i} I_j} \,. \end{align*} For any threshold $\eta$, we can further write \begin{align*} \Prb{{\mathcal{A}}(X)\ge i} & \le \Prb{\max_{j\ge i} I_j \ge \max_{j<i} I_j, \max_{j<i} I_j \ge \eta} \\ & + \Prb{\max_{j\ge i} I_j \ge \max_{j<i} I_j, \max_{j<i} I_j < \eta} \\ & \le \Prb{\max_{j\ge i} I_j \ge \eta} + \Prb{\max_{j<i} I_j < \eta} \,, \addeq\label{eq:ub-eta} \end{align*} and then optimize the choice of $\eta$ according to the design of $I$ (e.g., greedy, LCB, UCB). We consider index algorithms where $I_i = \hat{\mu}_i + b_i$. Continuing from~\eqref{eq:ub-eta}, for the first term, the union bound gives \begin{align*} \Prb{\max_{j\ge i} I_j \ge \eta} \le \sum_{j\ge i} \Prb{I_j \ge \eta} \,. \end{align*} For each $j\ge i$, by Hoeffding's inequality we have \begin{align*} \Prb{I_j \ge \eta} \le e^{-\frac{n_j}{2} \left(\eta - \mu_j - b_j\right)_+ ^2} \,. \end{align*} For the second term in~\eqref{eq:ub-eta}, we have $\Prb{\max_{j<i} I_j < \eta} \le \Prb{I_j < \eta}$ for each $j < i$. By Hoeffding's inequality we have \begin{align*} \Prb{I_j < \eta} \le e^{-\frac{n_j}{2} \left(\mu_j + b_j - \eta \right)_+ ^2} \,.
\end{align*} So we have \begin{align*} \Prb{\max_{j<i} I_j < \eta} \le \min_{j<i} e^{-\frac{n_j}{2} \left(\mu_j + b_j - \eta \right)_+ ^2} \,. \end{align*} Defining \begin{align*} g_i(\eta) = \sum_{j\ge i} e^{-\frac{n_j}{2} \left(\eta - \mu_j - b_j\right)_+ ^2} + \min_{j < i} e^{-\frac{n_j}{2} \left(\mu_j + b_j - \eta \right)_+ ^2} \end{align*} and $g_i^* = \min_{\eta} g_i(\eta)$, we have \begin{align*} \Prb{{\mathcal{A}}(X)\ge i} \le \min\{ 1, g_i^* \} \,. \end{align*} Note that for each $\eta$, $g_{i+1}(\eta) \le g_i(\eta)$ (the first sum has fewer terms and the minimum in the second term is over a larger set), so $g_i^*$ is non-increasing in $i$ and $B_i = \min\{1, g_i^*\}$ satisfies the monotonicity required above. Then we can bound the expected regret as \begin{align*} {\mathcal{R}}({\mathcal{A}}) \le \sum_{2\le i \le k} \Delta_i \left(\min\{ 1, g_i^* \} - \min\{ 1, g_{i+1}^* \} \right) \end{align*} where we define $g_{k+1}^*=0$. \end{proof} \begin{proof}[Proof of Corollary~\ref{coro:ub-worst-case}] Let $\eta = \mu_1 - 2 \sqrt{\frac{2}{n_{\min}}\log\frac{k}{\delta}}$ and apply Theorem~\ref{thm:instance-upper-general}. More specifically, we can show \begin{align*} g_i^* \le \sum_{j\ge i} e^{-\frac{n_{\min}}{2} \left(\Delta_j - 3 \sqrt{\frac{2}{n_{\min}}\log\frac{k}{\delta}} \right)_+ ^2} + \frac{\delta}{k} \end{align*} and then only consider $i$ such that $\Delta_i \ge 4\sqrt{\frac{2}{n_{\min}}\log\frac{k}{\delta}}$. \end{proof} \begin{proof}[Proof of Corollary~\ref{coro:ub-lcb-instance}] Let $\eta = \max_i \mu_i - \sqrt{\frac{8}{n_i}\log\frac{k}{\delta}}$ and apply Theorem~\ref{thm:instance-upper-general}. We only consider $i$ where $\Delta_i > \mu_1 - \eta$. Considering the LCB algorithm, for each $i\ge 2$, \begin{align*} g_i(\eta) = & \sum_{j\ge i} e^{-\frac{n_j}{2} \left(\eta - \mu_j + \sqrt{\frac{2}{n_j}\log\frac{k}{\delta}} \right)_+ ^2} \\ & + \min_{j < i} e^{-\frac{n_j}{2} \left(\mu_j - \eta - \sqrt{\frac{2}{n_j}\log\frac{k}{\delta}} \right)_+ ^2} \,.
\end{align*} Define $h_i = \argmax_{j<i} \mu_j - \sqrt{\frac{8}{n_j}\log\frac{k}{\delta}}$ and $\eta_i =\mu_{h_i} - \sqrt{\frac{8}{n_{h_i}}\log\frac{k}{\delta}}$. Then we have $ e^{-\frac{n_{h_i}}{2} \left(\mu_{h_i} - \eta_i - \sqrt{\frac{2}{n_{h_i}}\log\frac{k}{\delta}} \right)_+ ^2} = \delta/k $. For $j \ge i$ we have \begin{align*} e^{-\frac{n_j}{2} \left(\eta_i - \mu_j + \sqrt{\frac{2}{n_j}\log\frac{k}{\delta}}\right)_+ ^2} \le \frac{\delta}{k} \end{align*} whenever $\eta_i - \mu_j \ge 0$, i.e., $\mu_{h_i} - \sqrt{\frac{8}{n_{h_i}}\log\frac{k}{\delta}} \ge \mu_j$. Define \begin{align*} U_i = \I{\forall j \ge i, \mu_{h_i} - \sqrt{\frac{8}{n_{h_i}}\log\frac{k}{\delta}} \ge \mu_j } \,. \end{align*} Then we have $ g_i^* U_i \le \frac{k - i + 2}{k}\delta \le \delta $. According to Theorem~\ref{thm:instance-upper-general} we have $\Prb{{\mathcal{A}}(X)\ge i} \le \min\{ 1, g_i^* \}$, so for any $i$ such that $\Prb{{\mathcal{A}}(X)\ge i} > \delta$, we must have $U_i = 0$, which is equivalent to \begin{align*} \mu_i > \max_{j<i} \mu_j - \sqrt{\frac{8}{n_j}\log\frac{k}{\delta}}\,. \addeq\label{eq:proof-lcb-cond-1} \end{align*} Let $\hat{i}$ be the maximum $i$ that satisfies \eqref{eq:proof-lcb-cond-1}. Then we have $\Prb{{\mathcal{A}}(X)\ge \hat{i} + 1} \le \delta$, and therefore $\Prb{\mu^* - \mu_{{\mathcal{A}}(X)} \le \Delta_{\hat{i}}} \ge 1 - \delta$. So it remains to upper bound $\Delta_{\hat{i}}$. For any $i \in \iset{k}$, if $\hat{i} \le i$ then $\Delta_{\hat{i}} \le \Delta_{i}$. If $\hat{i} > i$ we have \begin{align*} \mu_{\hat{i}} > \max_{j < \hat{i}} \mu_j - \sqrt{\frac{8}{n_j}\log\frac{k}{\delta}} \ge \mu_i - \sqrt{\frac{8}{n_i}\log\frac{k}{\delta}} \,. \end{align*} Therefore, \begin{align*} \Delta_{\hat{i}} = \Delta_i + \mu_{i} - \mu_{\hat{i}} \le \Delta_i + \sqrt{\frac{8}{n_{i}}\log\frac{k}{\delta}} \,, \end{align*} which concludes the proof.
\end{proof} \begin{proof}[Proof of Corollary~\ref{coro:ub-greedy-instance}] Considering the greedy algorithm, for each $i\ge 2$, \begin{align*} g_i(\eta) = \sum_{j\ge i} e^{-\frac{n_j}{2} \left(\eta - \mu_j\right)_+ ^2} + \min_{j < i} e^{-\frac{n_j}{2} \left(\mu_j - \eta \right)_+ ^2} \,.
\end{align*} Define $h_i = \argmax_{j<i} \mu_j - \sqrt{\frac{2}{n_j}\log\frac{k}{\delta}}$ and $\eta_i =\mu_{h_i} - \sqrt{\frac{2}{n_{h_i}}\log\frac{k}{\delta}}$. Then we have $ e^{-\frac{n_{h_i}}{2} \left(\mu_{h_i} - \eta_i \right)_+ ^2} = \delta/k $. For $j \ge i$ we have \begin{align*} e^{-\frac{n_j}{2} \left(\eta_i - \mu_j\right)_+ ^2} = e^{-\frac{n_j}{2} \left(\mu_{h_i} - \mu_j - \sqrt{\frac{2}{n_{h_i}}\log\frac{k}{\delta}} \right)_+ ^2} \,. \end{align*} When $\mu_{h_i} - \mu_j \ge \sqrt{\frac{2}{n_{h_i}}\log\frac{k}{\delta}} + \sqrt{\frac{2}{n_{j}}\log\frac{k}{\delta}}$ we have $e^{-\frac{n_j}{2} \left(\eta_i - \mu_j\right)_+ ^2} \le \delta/k$. Define \begin{align*} U_i = \I{\forall j \ge i, \mu_{h_i} - \mu_j \ge \sqrt{\frac{2}{n_{h_i}}\log\frac{k}{\delta}} + \sqrt{\frac{2}{n_{j}}\log\frac{k}{\delta}}} \,. \end{align*} Then we have $ g_i^* U_i \le \frac{k - i + 2}{k}\delta \le \delta $. According to Theorem~\ref{thm:instance-upper-general} we have $\Prb{{\mathcal{A}}(X)\ge i} \le \min\{ 1, g_i^* \}$, so for any $i$ such that $\Prb{{\mathcal{A}}(X)\ge i} > \delta$, we must have $U_i = 0$, which is equivalent to \begin{align*} \max_{j<i} \mu_j - \sqrt{\frac{2}{n_{j}}\log\frac{k}{\delta}} < \max_{j \ge i} \mu_j + \sqrt{\frac{2}{n_{j}}\log\frac{k}{\delta}} \,. \addeq\label{eq:proof-greedy-cond-1} \end{align*} Let $\hat{i}$ be the maximum $i$ that satisfies \eqref{eq:proof-greedy-cond-1}. Then we have $\Prb{{\mathcal{A}}(X)\ge \hat{i} + 1} \le \delta$. Therefore, we have $\Prb{\mu^* - \mu_{{\mathcal{A}}(X)} \le \Delta_{\hat{i}}} \ge 1 - \delta$. So it remains to upper bound $\Delta_{\hat{i}}$. For any $i \in \iset{k}$, if $\hat{i} \le i$ then $\Delta_{\hat{i}} \le \Delta_{i}$.
If $\hat{i} > i$ we have \begin{align*} \max_{j<\hat{i}} \mu_j - \sqrt{\frac{2}{n_{j}}\log\frac{k}{\delta}} \ge \mu_{i} - \sqrt{\frac{2}{n_{i}}\log\frac{k}{\delta}} \end{align*} and \begin{align*} \max_{j \ge \hat{i}} \mu_j + \sqrt{\frac{2}{n_{j}}\log\frac{k}{\delta}} \le \mu_{\hat{i}} + \max_{j > i} \sqrt{\frac{2}{n_{j}}\log\frac{k}{\delta}} \,. \end{align*} Applying \eqref{eq:proof-greedy-cond-1} gives \begin{align*} \Delta_{\hat{i}} - \Delta_{i} = \mu_{i} - \mu_{\hat{i}} \le \sqrt{\frac{2}{n_{i}}\log\frac{k}{\delta}} + \max_{j > i} \sqrt{\frac{2}{n_{j}}\log\frac{k}{\delta}} \,. \end{align*} So \begin{align*} \Delta_{\hat{i}} \le \Delta_{i} + \sqrt{\frac{2}{n_{i}}\log\frac{k}{\delta}} + \max_{j > i} \sqrt{\frac{2}{n_{j}}\log\frac{k}{\delta}} \end{align*} for any $i \in \iset{k}$, which concludes the proof. \end{proof} \begin{proof}[Proof of Corollary~\ref{coro:ub-ucb-instance}] Considering the UCB algorithm, for each $i\ge 2$, \begin{align*} g_i(\eta) = & \sum_{j\ge i} e^{-\frac{n_j}{2} \left(\eta - \mu_j - \sqrt{\frac{2}{n_j}\log\frac{k}{\delta}} \right)_+ ^2} \\ & + \min_{j < i} e^{-\frac{n_j}{2} \left(\mu_j - \eta + \sqrt{\frac{2}{n_j}\log\frac{k}{\delta}} \right)_+ ^2} \,. \end{align*} Pick $\eta=\mu_1$; then the second term in $g_i(\eta)$ becomes $\delta/k$. For $j$ such that $\Delta_j\ge \sqrt{\frac{8}{n_j}\log\frac{k}{\delta}}$ we have \begin{align*} e^{-\frac{n_j}{2} \left(\eta - \mu_j - \sqrt{\frac{2}{n_j}\log\frac{k}{\delta}} \right)_+ ^2} \le \frac{\delta}{k} \,. \end{align*} Define \begin{align*} U_i = \I{\forall j \ge i, \Delta_j\ge \sqrt{\frac{8}{n_j}\log\frac{k}{\delta}} } \,. \end{align*} Then we have $ g_i^* U_i \le \frac{k - i + 2}{k}\delta \le \delta $.
According to Theorem~\ref{thm:instance-upper-general} we have $\Prb{{\mathcal{A}}(X)\ge i} \le \min\{ 1, g_i^* \}$, so for any $i$ such that $\Prb{{\mathcal{A}}(X)\ge i} > \delta$, we must have $U_i = 0$, which is equivalent to \begin{align*} \max_{j \ge i} \mu_j + \sqrt{\frac{8}{n_j}\log\frac{k}{\delta}} > \mu_1\,. \addeq\label{eq:proof-ucb-cond-1} \end{align*} Let $\hat{i}$ be the maximum $i$ that satisfies \eqref{eq:proof-ucb-cond-1}. Then we have $\Prb{{\mathcal{A}}(X)\ge \hat{i} + 1} \le \delta$. Therefore, we have $\Prb{\mu^* - \mu_{{\mathcal{A}}(X)} \le \Delta_{\hat{i}}} \ge 1 - \delta$. So it remains to upper bound $\Delta_{\hat{i}}$. For any $i \in \iset{k}$, if $\hat{i} \le i$ then $\Delta_{\hat{i}} \le \Delta_{i}$. If $\hat{i} > i$ we have \begin{align*} \max_{j \ge \hat{i}} \mu_j + \sqrt{\frac{8}{n_j}\log\frac{k}{\delta}} \le \mu_{\hat{i}} + \max_{j > i} \sqrt{\frac{8}{n_{j}}\log\frac{k}{\delta}} \,. \end{align*} Applying \eqref{eq:proof-ucb-cond-1} gives \begin{align*} \Delta_{\hat{i}} = \mu_{1} - \mu_{\hat{i}} \le \max_{j > i} \sqrt{\frac{8}{n_{j}}\log\frac{k}{\delta}} \,. \end{align*} Therefore, \begin{align*} \Delta_{\hat{i}} & \le \max\left\{ \Delta_{i}, \max_{j > i} \sqrt{\frac{8}{n_{j}}\log\frac{k}{\delta}} \right\} \\ & \le \Delta_{i} + \max_{j > i} \sqrt{\frac{8}{n_{j}}\log\frac{k}{\delta}} \end{align*} for any $i \in \iset{k}$, which concludes the proof.
\end{proof} \vspace{-2mm} \section{A Characterization of Pessimism} \label{sec:pessimism} \vspace{-1mm} It is known that the pessimistic algorithm, maximizing a lower confidence bound on the value, satisfies many desirable properties: it is consistent with rational decision making using preferences that satisfy uncertainty aversion and certainty-independence \citep{GiSch89}, it avoids the optimizer's curse \citep{SmiWi06}, it allows for optimal inference in an asymptotic sense \citep{Lam2019}, and in a certain sense it is the unique strategy that achieves these properties \citep{VPEKuhn17,SuVPKuhn20}. However, a purely statistical decision-theoretic justification (in the sense of \citet{Berger85}) is still lacking. The instance-dependent lower bound presented above attempts to characterize the optimal performance of an algorithm on an instance-by-instance basis. In particular, one can interpret the objective ${{\mathcal{R}}({\mathcal{A}}, \theta)}/{{\mathcal{R}}^*_{{\mathcal{M}}_{\mathbf{n}, c}}(\theta)}$ defined in Theorem~\ref{thm:instance-negative-1} as weighting each instance $\theta$ by $1/{{\mathcal{R}}^*_{{\mathcal{M}}_{\mathbf{n}, c}}(\theta)}$, which can be interpreted as a measure of instance difficulty. It is natural to consider an algorithm to be optimal if it can perform well relative to this weighted criterion. However, given that the performance of an algorithm can be arbitrarily different across instances, no such optimal algorithm can exist under this criterion. The question we address here is whether other measures of instance difficulty might be used to distinguish some algorithms as naturally advantageous over others. In a recent study, \citet{jin2020pessimism} show that the pessimistic algorithm is minimax optimal when weighting each instance by the variance induced by the optimal policy.
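To make the contrast between confidence-adjusted choices concrete, the following is a minimal numerical sketch (ours, not code from the paper): a two-armed Gaussian instance in which the optimal arm is well covered by the batch data, so the pessimistic (LCB) choice does well while the optimistic (UCB) choice chases the poorly estimated arm. The radius $\sqrt{(2/n_i)\log(k/\delta)}$ follows the notation above; the specific means, sample counts, and $\delta$ are hypothetical.

```python
import math
import random

def choose(mu_hat, n, alpha, k, delta):
    """Confidence-adjusted index choice: argmax_i mu_hat_i + alpha * b_i with
    radius b_i = sqrt((2/n_i) * log(k/delta)); alpha = -1, 0, +1 give the
    LCB, greedy, and UCB choices respectively.  (Illustrative sketch only.)"""
    idx = [m + alpha * math.sqrt(2.0 / ni * math.log(k / delta))
           for m, ni in zip(mu_hat, n)]
    return max(range(k), key=lambda i: idx[i])

random.seed(0)
k, delta = 2, 0.1
mu = [1.0, 0.2]   # true means (hypothetical); arm 1 (optimal) is well covered
n = [100, 2]      # arm 2 has only two samples
trials = 2000
regret = {-1: 0.0, 0: 0.0, 1: 0.0}
for _ in range(trials):
    # empirical means from unit-variance Gaussian rewards
    mu_hat = [sum(random.gauss(m, 1.0) for _ in range(ni)) / ni
              for m, ni in zip(mu, n)]
    for alpha in regret:
        a = choose(mu_hat, n, alpha, k, delta)
        regret[alpha] += (max(mu) - mu[a]) / trials

print({a: round(r, 3) for a, r in regret.items()})
```

On this instance the LCB error event is contained in the greedy one, which is contained in the UCB one, so the average regrets come out ordered LCB $\le$ greedy $\le$ UCB; the point matches the discussion above, namely that pessimism helps exactly when the optimal arm's value is easy to estimate from the batch.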
In another recent paper, \citet{BuGeBe20} point out that the pessimistic choice has the property that its regret improves whenever the optimal choice's value is easier to predict. In particular, with our notation, their most relevant result (Theorem 3) implies the following: if $b_i$ defines an interval such that $\mu_i \in [\hat \mu_i-b_i,\hat \mu_i+b_i]$ for all $i\in [k]$, then for $i' = \argmax_i \hat \mu_i - b_i$ one obtains\footnote{ This inequality follows directly from the definitions: $\mu^* - \mu_{i'} \le \mu^* - (\hat \mu_{i'} - b_{i'}) \le \mu^* - (\hat \mu_{a^*} - b_{a^*}) \le 2 b_{a^*}$, and we believe it was known as a folklore result, although we are not able to point to a previous paper that includes this inequality. The logic of this inequality is the same as that used in proving regret bounds for UCB policies \citep{lai1985asymptotically,lattimore2020bandit}. It is also clear that the result holds for any data-driven stochastic optimization problem regardless of the structure of the problem. Theorem 3 of \citet{BuGeBe20} with this notation states that $\mu^*-\mu_{i'}\le \min_i \mu^*-\mu_i + 2 b_i$. } \begin{align*} \mu^* - \mu_{i'} \le 2 b_{a^*}\, . \addeq\label{eq:buckman-result} \end{align*} If we (liberally) interpret $b_{a^*}$ as a measure of how hard it is to predict the value of the optimal choice, this inequality suggests that the pessimistic choice could be justified as the choice that makes the regret comparable to the error of predicting the optimal value. To make this intuition precise, consider the same problem setup as discussed in Section~\ref{sec:setup}. Suppose that the reward distribution for each arm $i\in[k]$ is a Gaussian with unit variance. Consider the problem of estimating the optimal value $\mu^*$ when the optimal arm $a^*$ is also provided to the estimator. We define the set of minimax optimal estimators.
\vspace{-1mm} \begin{definition}[Minimax Estimator] For fixed $\mathbf{n} = (n_i)_{i\in [k]}$, an estimator is said to be minimax optimal if its worst-case error is bounded by the minimax estimation error of the problem up to a constant factor. We define the set of minimax optimal estimators as \begin{align*} {\mathcal{V}}^*_\mathbf{n} \!=\! \left\{ \nu:\! \sup_{\theta\in\Theta_\mathbf{n}}\! \mathbb{E}_\theta[|\mu^*- \nu|] \leq c \inf_{\nu'\in{\mathcal{V}}} \sup_{\theta\in\Theta_\mathbf{n}}\! \mathbb{E}_{\theta}[|\mu^* - \nu'|] \right\} \end{align*} where $c$ is a universal constant, and ${\mathcal{V}}$ is the set of all possible estimators. \end{definition} \vspace{-1mm} Now consider using this optimal value estimation problem as a measure of how difficult a problem instance is, and then use this to weight each problem instance as in the definition of the instance-dependent lower bound. In particular, let \begin{align*} {\mathcal{E}}^*(\theta) = \inf_{\nu\in{\mathcal{V}}^*_\mathbf{n}}\mathbb{E}_{\theta}[|\mu^* - \nu|] \end{align*} be the inherent difficulty of estimating the optimal value $\mu^*$ on problem instance $\theta$. The previous result \eqref{eq:buckman-result} suggests (but does not prove) that $\sup_{\theta}\frac{{\mathcal{R}}(\textnormal{LCB},\theta)}{{\mathcal{E}}^*(\theta)}<+\infty$. We now show that not only does this hold, but up to a constant factor, the LCB algorithm is nearly weighted minimax optimal with the weighting given by ${\mathcal{E}}^*(\theta)$. \iffalse \todoc{An alternative to this is redefining the estimation problem as that of estimating the value of $a^*$, the optimal choice. That is, in this variant, the data also contains $a^*$, which is guaranteed to satisfy $r_{a^*} = r^*$, while the estimator still needs to produce an estimate of $r^*$. Clearly, the estimator has more information in this setting, so this is an easier problem.
Denoting by $\tilde \Omega^*$ the set of $c$-minimax optimal estimators, we expect that $\tilde {\mathcal{E}}^*(\theta) = \inf_{\hat r \in \tilde\Omega^*} \mathbb{E}_\theta[|r^*-\hat r|] \le {\mathcal{E}}^*(\theta)$. I suspect that we can use $\tilde {\mathcal{E}}^*(\theta)$ as the inverse weight and nothing changes in the result below. } \fi \vspace{-1mm} \begin{proposition} For any $\mathbf{n} = (n_i)_{i\in [k]}$, \begin{align*} {\sup_{\theta\in\Theta_\mathbf{n}} \frac{{\mathcal{R}}(\textnormal{LCB},\theta)}{{\mathcal{E}}^*(\theta)}}<c\sqrt{\log |\mathbf{n}|} \, , \end{align*} where $c$ is some universal constant. \label{prop:weighted-minimax-lcb} \end{proposition} \vspace{-1mm} \iffalse \begin{proof} We first show $\rho^*>0$. Let $\hat \nu$ be the empirical estimator of $\mu_{a^*}$; then \begin{align*} \rho^* \geq \frac{\inf_{\mathcal{A}} \sup_{\theta\in\Theta_\mathbf{n}}{\mathcal{R}}({\mathcal{A}}, \theta)}{\sup_{\theta\in\Theta_\mathbf{n}} \mathbb{E}_{\theta}[ | \mu^* - \hat \mu |]} \geq c\, , \addeq\label{eq:weighted-minimax-proof-2} \end{align*} where the last inequality follows from Theorem~\ref{thm:minmax-lb} combined with the error of the empirical estimator. Then for any $\theta\in\Theta_\mathbf{n}$ and some universal constants $c_0$ and $c'$, \begin{align*} \frac{{\mathcal{R}}(\textnormal{LCB}, \theta)}{{\mathcal{E}}^*(\theta)} \leq \frac{\min_i\Delta_{i} + \sqrt{\frac{8}{n_{i}}\log\frac{k}{\delta}}+\delta}{c_0 / \sqrt{n_{a^*}}} \leq c'\sqrt{\log n}\, , \end{align*} where the first inequality follows from Corollary~\ref{coro:ub-greedy-instance} and the standard normal estimation lower bound, and the second follows by choosing $\delta=1/\sqrt{n}$. Combining the above, we obtain the desired result. \iffalse the maximum of LCB can be used as an estimator.
That is, consider the estimator $ \nu' = \max_i \hat \mu_i - \beta_\delta / \sqrt{n_i} $; then with probability at least $1-\delta$, we have $\mu^*\geq {\nu'}$ and $\mu^* - {\nu'} = \min_i \Delta_i + 2\beta_\delta / \sqrt{n_i}$. \fi \end{proof} \fi \begin{proposition} There exists a sequence $\{\mathbf{n}_j\}$ such that \begin{align*} &\limsup_{j\rightarrow\infty} \sup_{\theta\in\Theta_{\mathbf{n}_j}}{\frac{{\mathcal{R}}(\textnormal{UCB},\theta)}{\sqrt{\log |\mathbf{n}_j|}\cdot {\mathcal{E}}^*(\theta)}} = + \infty \\ &\limsup_{j\rightarrow\infty}\sup_{\theta\in\Theta_{\mathbf{n}_j}} {\frac{{\mathcal{R}}(\textnormal{greedy},\theta)}{\sqrt{\log |\mathbf{n}_j|}\cdot {\mathcal{E}}^*(\theta)}} = + \infty \end{align*} \label{prop:weighted-minimax-ucb-greedy} \end{proposition} That is, the pessimistic algorithm can be justified by weighting each instance using the difficulty of predicting the optimal value. We note that this result does not contradict the no-instance-optimality property of batch policy optimization with stochastic bandits (Corollary~\ref{cor:instance-negative}). In fact, it only provides a characterization of pessimism: the pessimistic choice is beneficial when the batch dataset contains enough information to predict the optimal value well. \iffalse Consider two Gaussians ${\mathcal{N}}(\mu_1, 1)$ and ${\mathcal{N}}(\mu_2, 1)$ with $\mu_1=0, \mu_2=\Delta$. Let $n$ be the sample size. For any estimator $\nu$, \begin{align*} \max\left\{ \mathbb{E}[|\mu_1 - \nu|], \mathbb{E}[|\mu_2 - \nu|] \right\} \geq {\Delta} {\mathbb{P}}\left\{ X\geq \frac{\sqrt{n} \Delta}{2} \right\} \end{align*} where $X$ is a standard Gaussian random variable. Using Gaussian tail approximations and optimizing over $\Delta$ we get \begin{align*} \max\left\{ \mathbb{E}[|\mu_1 - \nu|], \mathbb{E}[|\mu_2 - \nu|] \right\}\geq \frac{c}{\sqrt{n}}\, , \end{align*} for some constant $c$.
\fi \section{Related work} \label{sec:related-work} In the context of offline bandits and RL, a number of approaches based on the pessimistic principle have been proposed and have demonstrated great success in practical problems \citep{swaminathan2015batch,wu2019behavior,jaques2019way,kumar2019stabilizing,kumar2020conservative,BuGeBe20,KiRaNeJo20,yu2020mopo,siegel2020keep}. We refer interested readers to the survey by \citet{levine2020offline} for recent developments on this topic. To implement the pessimistic principle, distributionally robust optimization~(DRO) has become a powerful tool in bandits~\citep{FaTaVaSmDo19,karampatziakis2019empirical} and RL~\citep{xu2010distributionally,yu2015distributionally,yang2017convex,chen2019distributionally,dai2020coindice,DeSh20}. From a theoretical perspective, the statistical properties of general DRO, \emph{e.g.}, its consistency and asymptotic expansion, are analyzed in~\citep{DuGlNam16}. \citet{liu2020provably} provide a regret analysis for a pessimistic algorithm based on stationary distribution estimation in offline RL with insufficient data coverage. \citet{BuGeBe20} justify the pessimistic algorithm by providing an upper bound on its worst-case suboptimality. \citet{jin2020pessimism}, \citet{KiRaNeJo20} and \citet{yin2021near} recently prove that the pessimistic algorithm is nearly minimax optimal for batch policy optimization. However, a theoretical justification of the benefits of the pessimistic principle over alternatives has been missing in offline RL. Decision theory motivates DRO with an axiomatic characterization of min-max (or distributionally robust) utility: preferences of decision makers who face an uncertain decision problem, and whose preference relationships over their choices satisfy certain axioms, follow an ordering given by assigning max-min utility to these preferences~\citep{GiSch89}.
Thus, if we believe that the preferences of the user follow the axioms stated in the above work, one must use a distributionally optimal (pessimistic) choice. On the other hand, \citet{smith2006optimizer} raise the ``optimizer's curse'' due to a statistical effect: the resulting decision policy may disappoint on unseen out-of-sample data, \emph{i.e.}, the actual value of the candidate decision falls below its predicted value. \citet{VPEKuhn17,SuVPKuhn20} justify the optimality of DRO in combating such overfitting to avoid the optimizer's curse. Moreover, \citet{DeKiWi19} demonstrate the benefits of the randomized policies produced by DRO in the face of uncertainty, compared with deterministic policies. While reassuring, these results still leave open the question of whether there is a justification for the pessimistic choice dictated by some alternate logic, or perhaps by more direct reasoning in terms of regret in the decision problem itself~\citep{lattimore2020bandit}. Our theoretical analysis answers this question and provides a complete and direct justification for all confidence-based index algorithms. \iffalse Specifically, we show all confidence-based index algorithms are nearly minimax optimal in terms of regret. More importantly, our instance-dependent analysis shows that for any algorithm one can always find some problem instance where the algorithm will suffer arbitrarily large regret. Therefore, one cannot directly compare the performance of two algorithms without specifying the problem instance. To distinguish the pessimistic algorithm from other confidence-adjusted index algorithms, we show that the pessimistic algorithm is nearly minimax optimal when weighting each instance by its inherent difficulty of estimating the optimal value. \fi \section{Conclusion} In this paper we study the statistical limits of batch policy optimization with finite-armed bandits.
We introduce a family of confidence-adjusted index algorithms that provides a general analysis framework to unify the commonly used optimistic and pessimistic principles. For this family, we show that any index algorithm with an appropriate adjustment is nearly minimax optimal. Our analysis also reveals another important finding: for any algorithm that performs optimally in some environment, there exists another environment where the same algorithm can suffer arbitrarily large regret. Therefore, instance-dependent optimality cannot be achieved by any algorithm. To distinguish the algorithms in the offline setting, we introduce a weighted minimax objective and show that the pessimistic algorithm is nearly optimal under this criterion. \endgroup \appendix \onecolumn \begin{appendix} \thispagestyle{plain} \begin{center} {\huge Appendix} \end{center} \newcommand{\EE}[2]{\mathbb{E}_{#1}\left[#2\right]} \newcommand{\alg}{{\mathcal{A}}} \newcommand{\algmm}{\alg_{\mathrm{mm}}} \newcommand{\alglcb}{\alg_{\mathrm{LCB}}} \newcommand{\Rg}{R} \newcommand{\Rgmm}{\Rg_{\mathrm{mm}}} \newcommand{\Rgimm}{\Rg_{\mathrm{imm}}} \newcommand{\Rgadv}{\Rg_{\mathrm{adv}}} \newcommand{\Rgper}{\Rg_{\mathrm{per}}} \newcommand{\per}{\mathtt{per}} \newcommand{\cG}{{\mathcal{G}}} \newcommand{\abs}[1]{\left\lvert#1\right\rvert} \newcommand{\tTheta}{\tilde{\Theta}} \section{A Deterministic Setting} \todoy[inline]{This actually gives insights for many questions since it enables exact calculation of many quantities, which helps to understand what is going on, e.g., what are the possible arguments for which algorithm we should use.} \subsection*{Problem Setup: Observing the True Parameters} There are $1 \le n < k$ observations (samples). Each observation reveals the true $\theta_i$ for arm $i$. Each arm $i \in \iset{k}$ is observed at most once.
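As a concrete sketch of this deterministic setting, the snippet below implements the index family ${\mathcal{A}}_\alpha$ and the regret defined in the next subsection, and evaluates it on the worst-case instances of the form $\theta_{1:n} = \frac{2\alpha+1}{4}$ used later in the minimax analysis (function names are ours, for illustration only).

```python
def index_alg(x, k, alpha):
    """Deterministic index algorithm A_alpha: observed arms keep their true
    values, unobserved arms get index alpha; pick the argmax, breaking ties
    toward the smallest arm index.  alpha=0 is LCB, alpha=1 is UCB."""
    n = len(x)
    idx = list(x) + [alpha] * (k - n)
    return max(range(k), key=lambda i: (idx[i], -i))

def regret(theta, n, alpha):
    """Regret theta* - theta_{A(theta_{1:n})} of A_alpha on instance theta."""
    a = index_alg(theta[:n], len(theta), alpha)
    return max(theta) - theta[a]

# k=3, n=k-1=2: worst cases with theta_{1:n} = (2*alpha+1)/4.
print(regret([0.25, 0.25, 1.0], 2, 0.0))  # LCB ignores the hidden good arm
print(regret([0.75, 0.75, 0.0], 2, 1.0))  # UCB chases the hidden bad arm
print(regret([0.25, 0.25, 1.0], 2, 0.5),
      regret([0.75, 0.75, 0.0], 2, 0.5))  # A_{1/2} handles both
```

Both extreme index rules suffer regret $3/4 > 1/2$ on their respective bad instances, while ${\mathcal{A}}_{1/2}$ incurs zero regret on both, matching the claim that ${\mathcal{A}}_{1/2}$ is the minimax choice when $n = k-1$.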
Without loss of generality, we assume that the observed arms are $1,...,n$ and $\theta \in \Theta = [0, 1]^k$. We consider deterministic algorithms that map the observations $X = \theta_{1:n} = \left[\theta_1, ..., \theta_n\right]$ to an arm ${\mathcal{A}}(X)\in \iset{k}$. Given an instance $\theta$, the regret of an arm $i$ is defined as $R(i, \theta) = \theta^* - \theta_i$ where $\theta^* = \max_{i\in \iset{k}} \theta_i$, and the regret of an algorithm ${\mathcal{A}}$ is defined as $R({\mathcal{A}}, \theta) = R({\mathcal{A}}(\theta_{1:n}), \theta) = \theta^* - \theta_{{\mathcal{A}}(\theta_{1:n})}$. \todoy[inline]{Do we just assume $n\le k - 2$ for simplicity? $n=k-1$ needs to be discussed separately in many statements.} \subsection*{Possible Algorithms} Index algorithms ${\mathcal{A}}_{\alpha}$ with $\alpha\in [0, 1]$: define $\hat{\theta}_i = X_i$ for $i\in \iset{n}$ and $\hat{\theta}_i = \alpha$ for $i > n$. Then ${\mathcal{A}}_{\alpha}(X) = \argmax_{i\in \iset{k}} \hat{\theta}_i$. Note that $\alpha=0$ and $\alpha=1$ lead to the LCB and UCB algorithms respectively. We define the algorithm that implements the minimax strategy as $\alg_{\mathrm{mm}}(X) = \argmin_i \max_{\theta:\theta_{1:n}=X} R(i, \theta)$. Note that the LCB algorithm satisfies ${\mathcal{A}}_{0}(X) = \argmin_i \max_{\theta:\theta_{1:n}=X} -\theta_i$, which also implements the minimax strategy, but with respect to the reward rather than the regret. We assume that the algorithms break ties in $\argmin$ or $\argmax$ by picking the arm with the smallest index. \begin{remark} When $n \le k - 2$, ${\mathcal{A}}_0$ and $\alg_{\mathrm{mm}}$ are the same algorithm, with ${\mathcal{A}}_0(X) = \alg_{\mathrm{mm}}(X) = \argmax_{1\le i \le n}X_i$. When $n = k - 1$, $\alg_{\mathrm{mm}} = {\mathcal{A}}_{1/2}$. That is, $\alg_{\mathrm{mm}}(X) = \argmax_{1\le i \le n}X_i$ if $\max_{1\le i \le n} X_i \ge 1/2$ and $\alg_{\mathrm{mm}}(X) = k$ otherwise.
\end{remark} \begin{proof} ${\mathcal{A}}_0(X) = \argmax_{1\le i \le n}X_i$ by definition. Now we consider $\alg_{\mathrm{mm}}$. For $i\le n$, $\max_{\theta:\theta_{1:n}=X} R(i, \theta) = 1 - X_i$ because setting $\theta_{n+1}=1$ maximizes $R(i, \theta)$. When $n\le k-2$, for $i>n$, $\max_{\theta:\theta_{1:n}=X} R(i, \theta) = 1$ because we can always set $\theta_i=0$ and $\theta_j=1$ for some $j>n, j\ne i$. Therefore, for $n \le k - 2$, $\alg_{\mathrm{mm}}(X) = \argmax_{i\in\iset{n}} X_i$. When $n = k-1$, for $i>n$~($i=k$), $\max_{\theta:\theta_{1:n}=X} R(i, \theta) = \max_{i\in \iset{n}} X_i$, attained by setting $\theta_k=0$. Therefore, for $n = k - 1$, $\alg_{\mathrm{mm}}(X)=\argmax_{i\in\iset{n}} X_i$ if $1 - \max_{i\in \iset{n}} X_i \le \max_{i\in \iset{n}} X_i$~(i.e., $\max_{i\in \iset{n}} X_i \ge 1/2$), and $\alg_{\mathrm{mm}}(X) = k$ otherwise. \end{proof} We now derive the regret of the different algorithms. The exact regret of the index algorithms is \begin{align*} R({\mathcal{A}}_{\alpha}, \theta) = \begin{cases} \theta^* - \theta_{1:n}^* & \textrm{ if } \theta_{1:n}^* \ge \alpha, \\ \theta^* - \theta_{n+1} & \textrm{ otherwise}. \end{cases} \end{align*} Here we define $\theta_{1:n}^*=\max_{i\in \iset{n}} \theta_i$. We have $R(\alg_{\mathrm{mm}}, \theta) = R({\mathcal{A}}_{0}, \theta)$ for $n\le k - 2$, and $R(\alg_{\mathrm{mm}}, \theta) = R({\mathcal{A}}_{1/2}, \theta)$ for $n=k-1$. A similar conclusion holds as in the stochastic setting. \subsection*{Minimax Optimality} \begin{proposition} Let $\Rg_{\mathrm{mm}} = \min_{{\mathcal{A}}} \max_{\theta} R({\mathcal{A}}, \theta)$. Then $\Rg_{\mathrm{mm}} = 1$ for $n \le k-2$ and $\Rg_{\mathrm{mm}} = 1/2$ for $n=k-1$. \end{proposition} \begin{proof} For $n\le k-2$, consider $\theta_1=\dots=\theta_n=0$ and let $(\theta_{k-1}, \theta_k)$ be either $(0, 1)$ or $(1, 0)$. Then any algorithm must suffer regret $1$ in at least one of these two instances. When $n=k-1$, consider $\theta_1=\dots=\theta_{k-1}=1/2$ and $\theta_k\in \{0, 1\}$.
Then any algorithm must suffer regret at least $1/2$ in one of these two instances. On the other hand, the algorithm $\alg_{\mathrm{mm}}$ has a worst-case regret guarantee of $1/2$. Hence $1/2$ is the minimax value when $n=k-1$. \end{proof} For any constant $c\ge 1$, define \begin{align*} {\mathcal{G}}_c = \left\{{\mathcal{A}}: \max_{\theta} R({\mathcal{A}}, \theta) \le c \Rg_{\mathrm{mm}} \right\} \end{align*} to be the set of algorithms that are minimax optimal up to a multiplicative constant factor $c$. Now we characterize the minimax optimality of the different algorithms. \begin{proposition} (i) For any algorithm ${\mathcal{A}}$, we have ${\mathcal{A}} \in {\mathcal{G}}_2$. (ii) When $n\le k-2$, for any algorithm ${\mathcal{A}}$ we have ${\mathcal{A}} \in {\mathcal{G}}_1$. (iii) For $n=k-1$, among all index algorithms only ${\mathcal{A}}_{1/2}=\alg_{\mathrm{mm}} \in {\mathcal{G}}_1$. \end{proposition} \begin{proof} The first two statements are trivial. For $n=k-1$, if $\alpha<1/2$, let $\theta=[\frac{2\alpha+1}{4},...,\frac{2\alpha+1}{4}, 1]$; then the algorithm selects an arm in $\iset{k-1}$ and thus $R({\mathcal{A}}_{\alpha}, \theta) = \frac{3 - 2\alpha}{4} > 1/2$. If $\alpha > 1/2$, let $\theta=[\frac{2\alpha+1}{4},...,\frac{2\alpha+1}{4}, 0]$; then the algorithm selects arm $k$ and thus $R({\mathcal{A}}_{\alpha}, \theta) = \frac{2\alpha + 1}{4} > 1/2$. \end{proof} \subsection*{Instance-wise Optimality under the Minimax Optimality Constraint} Define $\Rg_{\mathrm{imm}}(\theta) = \min_{{\mathcal{A}} \in {\mathcal{G}}_c} R({\mathcal{A}}, \theta)$ to be the instance-dependent lower bound for any algorithm that is minimax optimal up to a constant $c$. Then we have \begin{proposition} When $n\le k-2$ or $c\ge 2$, $\Rg_{\mathrm{imm}}(\theta)=0$ for all $\theta$. When $n=k-1$ and $1\le c < 2$, let $\theta_{1:n}^*=\max_{i\in \iset{n}} \theta_i$.
Then \begin{align*} \Rg_{\mathrm{imm}}(\theta) = \begin{cases} 0 & \textrm{ if } \theta_{1:n}^* \in \left[1-\frac{c}{2}, \frac{c}{2}\right]\,, \\ \theta^* - \theta_k & \textrm{ if } \theta_{1:n}^* < 1-\frac{c}{2}\,, \\ \theta^* - \theta_{1:n}^* & \textrm{ if } \theta_{1:n}^* > \frac{c}{2}\,. \\ \end{cases} \end{align*} \end{proposition} \begin{proof} When $n\le k-2$ or $c\ge 2$, ${\mathcal{G}}_c$ contains all algorithms. So for each instance $\theta$, any algorithm that chooses the best arm when observing $\theta_{1:n}$ achieves regret $0$. Now let $n = k-1$ and $c \in [1, 2)$. We have $\Rg_{\mathrm{mm}}=1/2$ and ${\mathcal{G}}_c = \left\{{\mathcal{A}}: \max_{\theta} R({\mathcal{A}}, \theta) \le c/2 \right\}$. Notice that $\theta^* = \max\{\theta_k, \theta^*_{1:n}\}$, i.e., the best arm is either the best observed arm or arm $k$. (i) When $\theta_{1:n}^* \in \left[1-\frac{c}{2}, \frac{c}{2}\right]$, an algorithm ${\mathcal{A}}$ that selects ${\mathcal{A}}(\theta_{1:n})\in \{\argmax_{i\in \iset{n}}\theta_i, k\}$ must satisfy $\max_{\theta':\theta'_{1:n}=\theta_{1:n}}R({\mathcal{A}}, \theta') \le c/2$. So given $\theta$ that satisfies $\theta_{1:n}^* \in \left[1-\frac{c}{2}, \frac{c}{2}\right]$, the algorithm ${\mathcal{A}}$ defined by ${\mathcal{A}}(\theta_{1:n})=\argmax_i \theta_i$ and ${\mathcal{A}}(X)={\mathcal{A}}_{1/2}(X)$ otherwise satisfies both ${\mathcal{A}} \in {\mathcal{G}}_c$ and $R({\mathcal{A}}, \theta)=0$. So $\Rg_{\mathrm{imm}}(\theta)=0$. (ii) When $\theta_{1:n}^* < 1 - c/2$, any algorithm ${\mathcal{A}} \in {\mathcal{G}}_c$ must select arm $k$; otherwise we can let $\theta_k=1$, and then $R({\mathcal{A}}, \theta) > c/2$. There exists such an algorithm, ${\mathcal{A}}_{1/2} \in {\mathcal{G}}_c$, so $\Rg_{\mathrm{imm}}(\theta)=\theta^*-\theta_k$. (iii) When $\theta_{1:n}^* > c/2$, any algorithm ${\mathcal{A}} \in {\mathcal{G}}_c$ must not select arm $k$; otherwise we can let $\theta_k=0$, and then $R({\mathcal{A}}, \theta) > c/2$.
So $\Rg_{\mathrm{imm}}(\theta)\ge \theta^*-\theta_{1:n}^*$. There exists an algorithm ${\mathcal{A}}_{1/2} \in {\mathcal{G}}_c$ which achieves $R({\mathcal{A}}_{1/2}, \theta)=\theta^*-\theta_{1:n}^*$, so $\Rg_{\mathrm{imm}}(\theta)=\theta^*-\theta_{1:n}^*$. \end{proof} No algorithm is instance-wise optimal: \begin{proposition} For any $n\in \iset{k-1}$, $c\ge 1$, and any algorithm ${\mathcal{A}}$, we have $\sup_{\theta} \frac{R({\mathcal{A}}, \theta)}{\Rg_{\mathrm{imm}}(\theta)} = +\infty$.~(Here we define $0/0=1$.) \end{proposition} \begin{proof} Consider $\theta=\left[1/2,...,1/2,0 \right]$ and $\theta'=\left[1/2,...,1/2,1 \right]$. Then $\Rg_{\mathrm{imm}}(\theta)=\Rg_{\mathrm{imm}}(\theta')=0$ in all cases, including $n=k-1$. Since $\theta_{1:n}=\theta'_{1:n}$, any algorithm must output the same action for $\theta$ and $\theta'$, suffering $1/2$ regret in one of the two instances. \end{proof} \begin{remark} For $n=k-1$ and $c=1$, the algorithm $\alg_{\mathrm{mm}} = {\mathcal{A}}_{1/2}$ is instance-wise optimal except for the instances where $\theta_k > \theta_{1:n}^*=1/2$. \end{remark} \subsection*{Local minimax optimality} \todoy[inline]{ discuss why do we care about this? what are the possible definitions, what are the differences between them, etc } \subsection*{Local minimax optimality I, the adversarial instance} Define\todoy{add citation} \begin{align*} \Rg_{\mathrm{adv}}(\theta) = \max_{\lambda\in \Theta} \min_{{\mathcal{A}}} \max\left\{R({\mathcal{A}}, \lambda), R({\mathcal{A}}, \theta) \right\} \,. \end{align*} \begin{proposition} Given $\theta$, define $\theta_{n+1:k}^-=\min_{n+1\le i\le k} \theta_i$. Then \begin{align*} \Rg_{\mathrm{adv}}(\theta) = \max\big\{ & \min\left\{\theta_{1:n}^*, \theta^*-\theta_{1:n}^*\right\}, \\ & \min\left\{1 - \theta_{1:n}^*, \theta^* - \theta_{n+1:k}^- \right\} \big\}\,.
\addeq\label{eq:radv} \end{align*} \end{proposition} \begin{proof} First observe that we only need to consider $\lambda$ where $\lambda_{1:n}=\theta_{1:n}$, because otherwise the algorithm can just select the best arm in $\lambda$ and $\theta$ respectively and achieve $0$ regret in both instances. When $\lambda_{1:n}=\theta_{1:n}$, any algorithm will select the same arm in both $\lambda$ and $\theta$. So we can write $\Rg_{\mathrm{adv}}$ as \begin{align*} \Rg_{\mathrm{adv}}(\theta) = \max_{\lambda\in \Theta} \min_{i\in \iset{k}} \max\left\{\lambda^* - \lambda_i, \theta^* - \theta_i \right\} \,. \end{align*} We now discuss how to find the corresponding $\lambda$ for $\theta$. We divide all $\lambda \in \Theta$ into two categories: $\lambda^* = \lambda_{1:n}^*$ and $\lambda^* > \lambda_{1:n}^*$. ... \end{proof} \begin{proposition} discuss optimality: For $n\le k-2$, $\sup_\theta \frac{R({\mathcal{A}}_{\alpha}, \theta)}{\Rg_{\mathrm{adv}}(\theta)} = \max\{\frac{1-\alpha}{\alpha}, \frac{1}{1 - \alpha}\}$. For $n=k-1$, $\sup_\theta \frac{R({\mathcal{A}}_{\alpha}, \theta)}{\Rg_{\mathrm{adv}}(\theta)} = \max\{\frac{1-\alpha}{\alpha}, \frac{\alpha}{1 - \alpha}\}$. \end{proposition} \begin{proof} ... \end{proof} Local minimax optimality does not favor the LCB algorithm ${\mathcal{A}}_0$. \begin{remark} ${\mathcal{A}}_{1/2}$ is optimal with respect to $\Rg_{\mathrm{adv}}$ up to a constant factor of $2$: for $n\le k-2$, $\sup_\theta \frac{R({\mathcal{A}}_{1/2}, \theta)}{\Rg_{\mathrm{adv}}(\theta)} = 2$, and for $n=k-1$, $\sup_\theta \frac{R({\mathcal{A}}_{1/2}, \theta)}{\Rg_{\mathrm{adv}}(\theta)} = 1$. The LCB algorithm ${\mathcal{A}}_{0}$ is not optimal with respect to $\Rg_{\mathrm{adv}}$ up to any constant factor: $\sup_\theta \frac{R({\mathcal{A}}_{0}, \theta)}{\Rg_{\mathrm{adv}}(\theta)} = +\infty$. \end{remark} If we only consider $\theta$ away from the boundary, then any index algorithm is optimal with respect to $\Rg_{\mathrm{adv}}$.
\begin{proposition}
Let $\theta_{\min} = \min_{i\in \iset{k}}\theta_i$. Define $\Theta'=\left\{ \theta: \theta^* - \theta_{\min} \le \min\{\theta_{\min}, 1 - \theta^*\} \right\}$. Then $R({\mathcal{A}}_{\alpha}, \theta) \le \Rg_{\mathrm{adv}}(\theta)$ for all $\theta\in \Theta'$.
\end{proposition}
\begin{proof}
For $\theta\in \Theta'$, $\Rg_{\mathrm{adv}}(\theta) = \max\{\theta^* - \theta_{1:n}^*, \theta^* - \theta_{n+1:k}^-\}$, and thus the statement holds.
\end{proof}
A special category of instances is where all the unseen arms are better than the observed ones.\todoy{what can we say about this category} If we exclude this category of instances, the LCB algorithm ${\mathcal{A}}_0$ is more favorable, but only up to a factor of $2$ compared to ${\mathcal{A}}_{1/2}$:
\begin{proposition}
Define $\tilde{\Theta}=\left\{ \theta: \theta_{1:n}^* > \theta_{n+1:k}^- \right\}$. Then for $n\le k-2$, $\sup_{\theta \in \tilde{\Theta}} \frac{R({\mathcal{A}}_{\alpha}, \theta)}{\Rg_{\mathrm{adv}}(\theta)} = \frac{1}{1 - \alpha}$; for $n=k-1$, $\sup_{\theta \in \tilde{\Theta}} \frac{R({\mathcal{A}}_{\alpha}, \theta)}{\Rg_{\mathrm{adv}}(\theta)} = \max\{ \frac{\alpha}{ 1 - \alpha}, 1\}$.
\end{proposition}
\begin{proof} ... \end{proof}
\subsection*{Local minimax optimality II, the permutation model}
\todoy[inline]{The result is not really minimax, but instance-wise}
Define
\begin{align*}
\Rg_{\mathrm{per}}(\theta) = \min_{{\mathcal{A}}} \max_{\lambda\in \mathtt{per}(\theta)}R({\mathcal{A}}, \lambda) \,,
\end{align*}
where $\mathtt{per}(\theta)$ is the set of instances that are permutations~(of the arm indices) of $\theta$.
\begin{proposition}
Let $\theta_{(1)}\le \theta_{(2)} \le \dots \le \theta_{(k)}$ be the sorted permutation of $\theta$. Then $\Rg_{\mathrm{per}}(\theta) = \theta^* - \theta_{(n+1)}$.
\end{proposition}
\begin{proof} ... \end{proof}
Instance-dependent lower bound within $\mathtt{per}(\theta)$.
Let ${\mathcal{G}}_{\mathtt{per}(\theta)} = \{{\mathcal{A}}: \max_{\lambda\in \mathtt{per}(\theta)} R({\mathcal{A}}, \lambda) \le \Rg_{\mathrm{per}}(\theta)\}$ be the set of all minimax optimal algorithms with respect to $\mathtt{per}(\theta)$, and let $\tilde{R}_{\mathtt{per}}(\theta) = \min_{{\mathcal{A}} \in {\mathcal{G}}_{\mathtt{per}(\theta)}} R({\mathcal{A}}, \theta)$ be the instance-dependent lower bound under the minimax optimality constraint. Then we have
\begin{proposition}
\begin{align*}
\tilde{R}_{\mathtt{per}}(\theta) = \begin{cases} \theta^* - \theta_{1:n}^* & \textrm{ if } \theta_{1:n}^* > \theta_{n+1:k}^{-} \\ 0 & \textrm{ otherwise}. \end{cases}
\end{align*}
\end{proposition}
\begin{proof} ... \end{proof}
${\mathcal{A}}_0$ is instance-wise optimal with respect to $\tilde{R}_{\mathtt{per}}$~(and also $\Rg_{\mathrm{per}}$) on most of the instances:
\begin{proposition}
Define $\tilde{\Theta}=\left\{ \theta: \theta_{1:n}^* > \theta_{n+1:k}^- \right\}$. Then $\sup_{\theta \in \tilde{\Theta}} \frac{R({\mathcal{A}}_{0}, \theta)}{\tilde{R}_{\mathtt{per}}(\theta)} = 1$.
\end{proposition}
\begin{proof} ... \end{proof}
Unlike for $\Rg_{\mathrm{adv}}$, here we have a stronger argument for why ${\mathcal{A}}_0$ might be more favorable when compared against $\tilde{R}_{\mathtt{per}}$. That is, under any permutation-invariant prior distribution over $\Theta$, ${\mathcal{A}}_0$ minimizes, among all index algorithms, the probability that the ratio between the actual regret and $\tilde{R}_{\mathtt{per}}(\theta)$ is unbounded.
\begin{proposition}
For $n\le k-2$, let ${\mathcal{P}}$ be any permutation-invariant distribution over $\Theta$ with ${\mathcal{P}}(\exists i\ne j, \theta_i=\theta_j)=0$~($\theta$ contains no duplicate entries almost surely).
Then we have
\todoy[inline]{The following result might be inexact, needs to be double checked.}
\begin{align*}
\lefteqn{{\mathcal{P}}\left(\frac{R({\mathcal{A}}_{\alpha}, \theta)}{\tilde{R}_{\mathtt{per}}(\theta)} = +\infty \right)} \\
= &\frac{n}{k}{\mathcal{P}}(\theta^* < \alpha) + \frac{n!(k-n-1)!(k-n-1)}{k!}
\end{align*}
where we define $0/0=1$.
\end{proposition}
\begin{proof} ... \end{proof}
\todoy[inline]{Replacing $+\infty$ with a constant $c\ge 1$ will give a more complete picture and may be more realistic (does not require the observed data to contain the optimal action). But it might be a bit complicated.}
\todoy[inline]{Derive $|\tilde{\Theta}|$ under the uniform distribution over $\Theta$.}
\todoy[inline]{Try to prove similar results in the stochastic setting.}
\todoy[inline]{A separate question: Is there a notion of ``data dependent optimality'' (i.e.\ given observations, the algorithm minimizes the worst case regret among all possible~(or most) instances)? This should automatically justify the optimality of pessimism in the deterministic setting. In the stochastic setting, pessimistic algorithms can be approximately viewed as optimizing the quantile of posterior regret/reward given a (uniform?) prior. The question is whether we should be satisfied with this type of optimality, and why~(and maybe this is something that we may want to discuss as well).}
\appendix
\input{exp_sup}
\section{Proof of Minimax Results}
\subsection{Proof of \cref{thm:minmax-lb}}
Let $m \geq 2$ and $\mu^1,\ldots,\mu^m$ be a collection of vectors in $\mathbb{R}^k$ with $\mu^b_a = \Delta {\mathbb{I}}\{a = b\}$, where $\Delta > 0$ is a constant to be chosen later. Next, let $\theta_b$ be the environment in $\Theta_{\mathbf{n}}$ with $P_a$ a Gaussian distribution with mean $\mu^b_a$ and unit variance. Let $B$ be a random variable uniformly distributed on $[m]$ where $m \in [k]$.
The Bayesian regret of an algorithm ${\mathcal{A}}$ is
\begin{align*}
\mathcal B \mathcal R^* = \inf_{{\mathcal{A}}} \mathbb{E}\left[{\mathcal{R}}({\mathcal{A}}, \theta_B)\right] = \Delta \mathbb{E}\left[{\mathbb{I}}\{A \neq B\}\right]\,,
\end{align*}
where $A \in [k]$ is the $\sigma(X)$-measurable random variable representing the decision of the Bayesian optimal policy, which is $A = \argmax_{b \in [k]} {\mathbb{P}}\{B = b | X\}$. By Bayes' law and the choice of uniform prior,
\begin{align*}
{\mathbb{P}}\{B = b | X\} &\propto \exp\left(-\frac{1}{2}\sum_{a=1}^k n_a(\hat \mu_a - \mu^b_a)^2\right) \\
&= \exp\left(-\frac{1}{2} \sum_{a=1}^k n_a(\hat \mu_a - \Delta {\mathbb{I}}\{a = b\})^2\right)\,.
\end{align*}
Therefore, the Bayesian optimal policy chooses
\begin{align*}
A = \argmin_{b \in [k]} n_b (\Delta/2 - \hat \mu_b) \,.
\end{align*}
On the other hand,
\begin{align*}
\mathcal B \mathcal R^* &= \Delta {\mathbb{P}}\{A \neq B\} = \frac{\Delta}{k} \sum_{b=1}^k {\mathbb{P}}_b(A \neq b)\,,
\end{align*}
where ${\mathbb{P}}_b = {\mathbb{P}}\{\cdot | B = b\}$. Let $b \in [m]$ be arbitrary. Then,
\begin{align*}
&{\mathbb{P}}_b\{A \neq b\} \\
&\geq {\mathbb{P}}_b\left\{\hat \mu_b \leq \Delta \text{ and } \max_{a \in [m] \setminus \{b\}} \hat \mu_a \geq \frac{\Delta}{2}\left(1 + \frac{n_b}{n_a}\right)\right\} \\
&\geq \frac{1}{2} \left(1 - \prod_{a \in [m] \setminus \{b\} } \left(1 - {\mathbb{P}}_b\left\{\hat \mu_a \geq \frac{\Delta}{2} \left(1 + \frac{n_b}{n_a}\right)\right\}\right)\right) \\
&\geq \frac{1}{2} \left(1 - \prod_{a > b} \left(1 - {\mathbb{P}}_b\left\{\hat \mu_a \geq \Delta\right\}\right)\right)\,,
\end{align*}
where in the second inequality we used independence and the fact that the law of $\hat \mu_b$ under ${\mathbb{P}}_b$ is Gaussian with mean $\Delta$ and variance $1/n_b$.
The first inequality follows because
\begin{align*}
\left\{\hat \mu_b \leq \Delta \text{ and } \max_{a \neq b} \hat \mu_a \geq \frac{\Delta}{2}\left(1 + \frac{n_b}{n_a}\right) \right\} \subset \{A \neq b\} \,.
\end{align*}
Let $b < a \leq m$ and
\begin{align*}
\delta_a(\Delta) = \frac{1}{\Delta \sqrt{n_a} + \sqrt{4 + n_a \Delta^2}} \sqrt{\frac{2}{\pi}} \exp\left(-\frac{n_a \Delta^2}{2}\right) \,.
\end{align*}
Since for $a \neq b$, $\hat \mu_a$ has law ${\mathcal{N}}(0, 1/n_a)$ under ${\mathbb{P}}_b$, by standard Gaussian tail inequalities \citep[\S26]{AS88},
\begin{align*}
{\mathbb{P}}_b\{\hat \mu_a \geq \Delta\} = {\mathbb{P}}_b\{\hat \mu_a \sqrt{n_a} \geq \Delta \sqrt{n_a}\} \geq \delta_a(\Delta) \geq \delta_m(\Delta)\,,
\end{align*}
where the last inequality follows from our assumption that $n_1 \leq \cdots \leq n_k$. Therefore, choosing $\Delta$ so that $\delta_m(\Delta) = 1/(2m)$,
\begin{align*}
\mathcal B \mathcal R^* &\geq \frac{\Delta}{2m} \sum_{b \in [m]} \left(1 - (1 - \delta_m(\Delta))^{m-b}\right) \\
&\geq \frac{\Delta}{2m} \sum_{b \in [m]} \left(1 - \left(1 - \frac{1}{2m}\right)^{m-b}\right) \\
&\geq \frac{\Delta}{2m} \sum_{b \leq m/2} \left(1 - \left(1 - \frac{1}{2m}\right)^{m/2}\right) \\
&\geq \frac{\Delta (m-1)}{20m} \geq \frac{\Delta}{40}\,.
\end{align*}
A calculation shows there exists a universal constant $c > 0$ such that
\begin{align*}
\Delta \geq c \sqrt{\frac{\log(m)}{n_m}}\,,
\end{align*}
which shows there exists a (different) universal constant $c > 0$ such that
\begin{align*}
\inf_{{\mathcal{A}}} \sup_{\theta} {\mathcal{R}}({\mathcal{A}}, \theta) \geq \mathcal B \mathcal R^* \geq \max_{m \geq 2} c\sqrt{\frac{\log(m)}{n_m}}\,.
\end{align*}
The argument above relies on the assumption that $m \geq 2$. A minor modification is needed to handle the case where $n_1$ is much smaller than $n_2$.
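Before turning to that case, we note that the Gaussian lower tail bound behind $\delta_a(\Delta)$ is easy to check numerically. A minimal sketch using only the standard library (function names are ours), verifying $\Pr\{Z \ge x\} \ge \sqrt{2/\pi}\, e^{-x^2/2}/(x + \sqrt{4 + x^2})$ for several $x > 0$:

```python
import math

def gauss_upper_tail(x: float) -> float:
    # P(Z >= x) for a standard normal Z, via the complementary error function.
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def tail_lower_bound(x: float) -> float:
    # Lower bound used in the proof: sqrt(2/pi) * exp(-x^2/2) / (x + sqrt(4 + x^2)).
    return math.sqrt(2.0 / math.pi) * math.exp(-x * x / 2.0) / (x + math.sqrt(4.0 + x * x))

for x in [0.1, 0.5, 1.0, 2.0, 3.0, 5.0]:
    assert gauss_upper_tail(x) >= tail_lower_bound(x)
```

Substituting $x = \Delta\sqrt{n_a}$ recovers exactly the quantity $\delta_a(\Delta)$ defined above.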
Let $B$ be uniformly distributed on $\{1,2\}$ and let $\theta_1, \theta_2 \in \Theta_{\mathbf{n}}$ be defined as above, but with $\mu^1 = (\Delta, 0)$ and $\mu^2 = (-\Delta, 0)$ for some constant $\Delta > 0$ to be tuned momentarily. As before, the Bayesian optimal policy has a simple closed form solution, which is
\begin{align*}
A = \begin{cases} 1 & \text{if } \hat \mu_1 \geq 0 \\ 2 & \text{otherwise}\,. \end{cases}
\end{align*}
The Bayesian regret of this policy satisfies
\begin{align*}
\mathcal B \mathcal R^* &= \frac{1}{2} {\mathcal{R}}({\mathcal{A}}, \theta_1) + \frac{1}{2} {\mathcal{R}}({\mathcal{A}}, \theta_2) \geq \frac{1}{2} {\mathcal{R}}({\mathcal{A}}, \theta_1) \\
&\geq \frac{\Delta}{2} {\mathbb{P}}_1\{A = 2\} = \frac{\Delta}{2} {\mathbb{P}}_1\{\hat \mu_1 < 0\} \\
&\geq \sqrt{\frac{2}{\pi}} \frac{\Delta}{2\Delta \sqrt{n_1} + 2\sqrt{4 + n_1 \Delta^2}} \exp\left(-\frac{n_1 \Delta^2}{2}\right) \\
&\geq \frac{1}{13} \sqrt{\frac{1}{n_1}}\,,
\end{align*}
where the final inequality follows by tuning $\Delta$.
\subsection{Proof of \cref{thm:minimax-upper}}
\begin{proof}
Let $\tilde{\mu}_i$ be the index and $i' = \argmax_i \tilde{\mu}_i$. Then, given that \eqref{eq:confidence-interval} holds for all arms, which happens with probability at least $1-\delta$, we have
\begin{align*}
\mu^* - \mu_{i'} & = \mu^* - \tilde{\mu}_{a^*} + \tilde{\mu}_{a^*} - \tilde{\mu}_{i'} + \tilde{\mu}_{i'} - \mu_{i'} \\
& \leq \mu^* - \tilde{\mu}_{a^*} + \tilde{\mu}_{i'} - \mu_{i'}\\
& \leq \mu^* - \hat{\mu}_{a^*} + \hat{\mu}_{i'} - \mu_{i'} + 2\sqrt{\frac{2 \log (k / \delta)}{ \min_i n_i}} \\
& \leq \sqrt{\frac{32 \log (k / \delta)}{\min_i n_i}} \,,
\end{align*}
where the first two inequalities follow from the definition of the index algorithm, and the last follows from \eqref{eq:confidence-interval}. Using the tower rule gives the desired result.
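To make the index algorithms concrete, here is a minimal sketch (our own code and naming, not part of the theorem statement): an index algorithm picks $\argmax_i \hat{\mu}_i + b_i$, and taking $b_i \in \{0, -c_i, +c_i\}$ with $c_i = \sqrt{2\log(k/\delta)/n_i}$ gives the greedy, LCB, and UCB variants analyzed below.

```python
import math

def index_select(mu_hat, n, delta, bonus_sign):
    # Index algorithm: pick argmax_i of mu_hat_i + bonus_sign * sqrt(2 log(k/delta) / n_i).
    # bonus_sign = 0 gives greedy, -1 gives LCB, +1 gives UCB.
    k = len(mu_hat)
    width = lambda i: math.sqrt(2.0 * math.log(k / delta) / n[i])
    return max(range(k), key=lambda i: mu_hat[i] + bonus_sign * width(i))

mu_hat = [0.9, 0.5, 1.2]   # empirical means; arm 2 looks best but is barely sampled
n      = [100, 100, 1]     # per-arm sample counts
delta  = 0.1

greedy = index_select(mu_hat, n, delta, 0)    # trusts mu_hat and picks arm 2
lcb    = index_select(mu_hat, n, delta, -1)   # penalizes uncertainty and picks arm 0
ucb    = index_select(mu_hat, n, delta, +1)   # rewards uncertainty and picks arm 2
```

On this example, LCB avoids the barely sampled arm while greedy and UCB select it.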
\end{proof}
\section{Proof of Instance-dependent Results}
\subsection{Instance-dependent Upper Bound}
\begin{proof}[Proof of Theorem~\ref{thm:instance-upper-general}]
Assume $\mu_1\ge \mu_2 \ge \dots \ge \mu_k$. If we have $\Prb{{\mathcal{A}}(X)\ge i} \le b_i$, then we can write
\begin{align*}
{\mathcal{R}}({\mathcal{A}}) & = \sum_{2\le i \le k} \Delta_i \Prb{{\mathcal{A}}(X)=i} \\
& = \sum_{2\le i \le k} \Delta_i \left( \Prb{{\mathcal{A}}(X)\ge i} - \Prb{{\mathcal{A}}(X) \ge i + 1}\right) \\
& = \sum_{2 \le i \le k} \left(\Delta_i - \Delta_{i-1}\right) \Prb{{\mathcal{A}}(X)\ge i} \\
& \le \sum_{2 \le i \le k} \left(\Delta_i - \Delta_{i-1}\right) b_i \\
& = \sum_{2\le i \le k} \Delta_i (b_i - b_{i+1}) \,.
\end{align*}
To upper bound $\Prb{{\mathcal{A}}(X)\ge i}$, let $I_i$ be the index used by algorithm ${\mathcal{A}}$, i.e., ${\mathcal{A}}(X) = \argmax_i I_i$. Then
\begin{align*}
\Prb{{\mathcal{A}}(X)\ge i} \le \Prb{\max_{j\ge i} I_j \ge \max_{j<i} I_j} \,.
\end{align*}
Hence we can further write
\begin{align*}
\Prb{{\mathcal{A}}(X)\ge i} & \le \Prb{\max_{j\ge i} I_j \ge \max_{j<i} I_j, \max_{j<i} I_j \ge \eta} \\
& + \Prb{\max_{j\ge i} I_j \ge \max_{j<i} I_j, \max_{j<i} I_j < \eta} \\
& \le \Prb{\max_{j\ge i} I_j \ge \eta} + \Prb{\max_{j<i} I_j < \eta} \,.
\addeq\label{eq:ub-eta}
\end{align*}
Next we optimize the choice of $\eta$ according to the specific choice of the index. For this let $I_i = \hat{\mu}_i + b_i$. Continuing from~\eqref{eq:ub-eta}, for the first term, by the union bound we have
\begin{align*}
\Prb{\max_{j\ge i} I_j \ge \eta} \le \sum_{j\ge i} \Prb{I_j \ge \eta} \,.
\end{align*}
For each $j\ge i$, by Hoeffding's inequality we have
\begin{align*}
\Prb{I_j \ge \eta} \le e^{-\frac{n_j}{2} \left(\eta - \mu_j - b_j\right)_+ ^2} \,.
\end{align*}
For the second term in~\eqref{eq:ub-eta}, we have $\Prb{\max_{j<i} I_j < \eta} \le \Prb{I_j < \eta}$ for each $j < i$.
By Hoeffding's inequality we have
\begin{align*}
\Prb{I_j < \eta} \le e^{-\frac{n_j}{2} \left(\mu_j + b_j - \eta \right)_+ ^2} \, ,
\end{align*}
and thus
\begin{align*}
\Prb{\max_{j<i} I_j < \eta} \le \min_{j<i} e^{-\frac{n_j}{2} \left(\mu_j + b_j - \eta \right)_+ ^2} \,.
\end{align*}
Define
\begin{align*}
g_i(\eta) = \sum_{j\ge i} e^{-\frac{n_j}{2} \left(\eta - \mu_j - b_j\right)_+ ^2} + \min_{j < i} e^{-\frac{n_j}{2} \left(\mu_j + b_j - \eta \right)_+ ^2}
\end{align*}
and $g_i^* = \min_{\eta} g_i(\eta)$. Then we have
\begin{align*}
\Prb{{\mathcal{A}}(X)\ge i} \le \min\{ 1, g_i^* \} \,.
\end{align*}
Putting everything together, we bound the expected regret as
\begin{align*}
{\mathcal{R}}({\mathcal{A}}) \le \sum_{2\le i \le k} \Delta_i \left(\min\{ 1, g_i^* \} - \min\{ 1, g_{i+1}^* \} \right)
\end{align*}
where we define $g_{k+1}^*=0$.
\end{proof}
\begin{proof}[Proof of Remark~\ref{remark:recover-minimax}]
Recall the definition of $g_i(\eta)$:
\begin{align*}
g_i(\eta) = \sum_{j\ge i} e^{-\frac{n_j}{2} \left(\eta - \mu_j - b_j\right)_+ ^2} + \min_{j < i} e^{-\frac{n_j}{2} \left(\mu_j + b_j - \eta \right)_+ ^2}\, .
\end{align*}
Let $\eta = \mu_1 - 2 \sqrt{\frac{2}{n_{\min}}\log\frac{k}{\delta}}$. Then, for the second term of $g_i(\eta)$,
\begin{align*}
\min_{j < i} e^{-\frac{n_j}{2} \left(\mu_j + b_j - \eta \right)_+ ^2} \leq e^{-\frac{n_1}{2} \left( 2 \sqrt{\frac{2}{n_{\min}}\log\frac{k}{\delta}} -\sqrt{\frac{2}{n_{1}}\log\frac{k}{\delta}}\right)_{+}^2 } \leq \frac{\delta}{k}\, .
\end{align*}
For the first term,
\begin{align*}
\sum_{j\ge i} e^{-\frac{n_j}{2} \left(\eta - \mu_j - b_j\right)_+ ^2} = \sum_{j\ge i} e^{-\frac{n_j}{2} \left(\mu_1 - 2 \sqrt{\frac{2}{n_{\min}}\log\frac{k}{\delta}} - \mu_j - b_j\right)_+ ^2} \leq \sum_{j\ge i} e^{-\frac{n_{\min}}{2} \left(\Delta_j - 3 \sqrt{\frac{2}{n_{\min}}\log\frac{k}{\delta}} \right)_+ ^2}\, .
\end{align*}
Thus,
\begin{align*}
g_i^* \le \sum_{j\ge i} e^{-\frac{n_{\min}}{2} \left(\Delta_j - 3 \sqrt{\frac{2}{n_{\min}}\log\frac{k}{\delta}} \right)_+ ^2} + \frac{\delta}{k}\, .
\end{align*}
For arm $i$ such that $\Delta_i \ge 4\sqrt{\frac{2}{n_{\min}}\log\frac{k}{\delta}}$, by Theorem~\ref{thm:instance-upper-general} we have $P({\mathcal{A}}(X)\geq i) \leq g^*_i \leq \delta$. The result then follows by the tower rule.
\end{proof}
\begin{proof}[Proof of Corollary~\ref{coro:instance-upper-general-simplified}]
For each $i$, let $\eta_i = \max_{j<i} L_j$. Then,
\begin{align*}
g_i(\eta_i) = \sum_{j\ge i} e^{-\frac{n_j}{2} \left( \max_{j<i} L_j - \mu_j - b_j\right)_+ ^2} + \min_{j < i} e^{-\frac{n_j}{2} \left(\mu_j + b_j - \max_{j<i} L_j \right)_+ ^2}\, .
\end{align*}
Let $s=\argmax_{j<i} L_j$. For the second term we have
\begin{align*}
\min_{j < i} e^{-\frac{n_j}{2} \left(\mu_j + b_j - \max_{j<i} L_j \right)_+ ^2} \leq e^{-\frac{n_s}{2} \left(\mu_s + b_s - L_s \right)_+ ^2} \leq \frac{\delta}{k}\, .
\end{align*}
Next we consider the first term. Recall that $h = \max\{i\in[k]: \max_{j<i} L_j < \max_{j'\ge i} U_{j'}\}$. Then for any $i>h$, we have $\max_{j<i} L_j \ge U_{j'}$ for all $j'\ge i$.
Therefore,
\begin{align*}
\sum_{j\ge i} e^{-\frac{n_j}{2} \left( \max_{j'<i} L_{j'} - \mu_j - b_j\right)_+ ^2} = \sum_{j\ge i} e^{-\frac{n_j}{2} \left( \max_{j'<i} L_{j'} - U_j + \sqrt{\frac{2}{n_j} \log \frac{k}{\delta}}\right)_+ ^2} \leq \frac{\delta}{k}\sum_{j\ge i} e^{-\frac{n_j}{2} \left( \max_{j'<i} L_{j'} - U_j \right)^2}\, .
\end{align*}
Note that for $i\leq h$, $\Delta_i \leq \Delta_h$. Thus we have
\begin{align*}
{\mathcal{R}}({\mathcal{A}}) & \leq \Delta_h + \sum_{i>h} (\Delta_i - \Delta_{i-1} ){\mathbb{P}}({\mathcal{A}}(X)\geq i) \\
& \leq \Delta_h + \frac{\delta}{k}\Delta_{\max} + \frac{\delta}{k}\sum_{i>h} (\Delta_i - \Delta_{i-1} )\sum_{j\ge i} e^{-\frac{n_j}{2} \left( \max_{j'<i} L_{j'} - U_j \right)^2}\, ,
\end{align*}
which concludes the proof.
\end{proof}
\begin{proof}[Proof of Corollary~\ref{coro:ub-greedy-instance}]
Considering the greedy algorithm, for each $i\ge 2$,
\begin{align*}
g_i(\eta) = \sum_{j\ge i} e^{-\frac{n_j}{2} \left(\eta - \mu_j\right)_+ ^2} + \min_{j < i} e^{-\frac{n_j}{2} \left(\mu_j - \eta \right)_+ ^2} \,.
\end{align*}
Define $h_i = \argmax_{j<i} \mu_j - \sqrt{\frac{2}{n_j}\log\frac{k}{\delta}}$ and $\eta_i =\mu_{h_i} - \sqrt{\frac{2}{n_{h_i}}\log\frac{k}{\delta}}$. Then we have $ e^{-\frac{n_{h_i}}{2} \left(\mu_{h_i} - \eta_i \right)_+ ^2} = \delta/k $. Then for $j \ge i$ we have
\begin{align*}
e^{-\frac{n_j}{2} \left(\eta_i - \mu_j\right)_+ ^2} = e^{-\frac{n_j}{2} \left(\mu_{h_i} - \mu_j - \sqrt{\frac{2}{n_{h_i}}\log\frac{k}{\delta}} \right)_+ ^2} \,.
\end{align*}
When $\mu_{h_i} - \mu_j \ge \sqrt{\frac{2}{n_{h_i}}\log\frac{k}{\delta}} + \sqrt{\frac{2}{n_{j}}\log\frac{k}{\delta}}$, we have $e^{-\frac{n_j}{2} \left(\eta_i - \mu_j\right)_+ ^2} \le \delta/k$.
Define
\begin{align*}
U_i = \I{\forall j \ge i, \mu_{h_i} - \mu_j \ge \sqrt{\frac{2}{n_{h_i}}\log\frac{k}{\delta}} + \sqrt{\frac{2}{n_{j}}\log\frac{k}{\delta}}} \,.
\end{align*}
Then we have $ g_i^* U_i \le \frac{k - i + 2}{k}\delta \le \delta $. According to Theorem~\ref{thm:instance-upper-general} we have $\Prb{{\mathcal{A}}(X)\ge i} \le \min\{ 1, g_i^* \}$, so for any $i$ such that $\Prb{{\mathcal{A}}(X)\ge i} > \delta$, we must have $U_i = 0$, which is equivalent to
\begin{align*}
\max_{j<i} \mu_j - \sqrt{\frac{2}{n_{j}}\log\frac{k}{\delta}} < \max_{j \ge i} \mu_j + \sqrt{\frac{2}{n_{j}}\log\frac{k}{\delta}} \, .
\addeq\label{eq:proof-greedy-cond-1}
\end{align*}
Let $\hat{i}$ be the largest index $i$ that satisfies \eqref{eq:proof-greedy-cond-1}. Then we have $\Prb{{\mathcal{A}}(X)\ge \hat{i} + 1} \le \delta$. Therefore, we have $\Prb{\mu^* - \mu_{{\mathcal{A}}(X)} \le \Delta_{\hat{i}}} \ge 1 - \delta$, and it remains to upper bound $\Delta_{\hat{i}}$. For any $i \in \iset{k}$, if $\hat{i} \le i$ then $\Delta_{\hat{i}} \le \Delta_{i}$. If $\hat{i} > i$ we have
\begin{align*}
\max_{j<\hat{i}} \mu_j - \sqrt{\frac{2}{n_{j}}\log\frac{k}{\delta}} \ge \mu_{i} - \sqrt{\frac{2}{n_{i}}\log\frac{k}{\delta}}
\end{align*}
and
\begin{align*}
\max_{j \ge \hat{i}} \mu_j + \sqrt{\frac{2}{n_{j}}\log\frac{k}{\delta}} \le \mu_{\hat{i}} + \max_{j > i} \sqrt{\frac{2}{n_{j}}\log\frac{k}{\delta}} \,.
\end{align*}
Applying \eqref{eq:proof-greedy-cond-1} gives
\begin{align*}
\Delta_{\hat{i}} - \Delta_{i} = \mu_{i} - \mu_{\hat{i}} \le \sqrt{\frac{2}{n_{i}}\log\frac{k}{\delta}} + \max_{j > i} \sqrt{\frac{2}{n_{j}}\log\frac{k}{\delta}} \,,
\end{align*}
so
\begin{align*}
\Delta_{\hat{i}} \le \Delta_{i} + \sqrt{\frac{2}{n_{i}}\log\frac{k}{\delta}} + \max_{j > i} \sqrt{\frac{2}{n_{j}}\log\frac{k}{\delta}} \,
\end{align*}
holds for any $i \in \iset{k}$, concluding the proof.
\end{proof}
\begin{proof}[Proof of Corollary~\ref{coro:ub-lcb-instance}]
Let $\eta = \max_i \mu_i - \sqrt{\frac{8}{n_i}\log\frac{k}{\delta}}$. Considering the LCB algorithm, for each $i\ge 2$ we have
\begin{align*}
g_i(\eta) = & \sum_{j\ge i} e^{-\frac{n_j}{2} \left(\eta - \mu_j + \sqrt{\frac{2}{n_j}\log\frac{k}{\delta}} \right)_+ ^2} + \min_{j < i} e^{-\frac{n_j}{2} \left(\mu_j - \eta - \sqrt{\frac{2}{n_j}\log\frac{k}{\delta}} \right)_+ ^2} \,.
\end{align*}
Define $h_i = \argmax_{j<i} \mu_j - \sqrt{\frac{8}{n_j}\log\frac{k}{\delta}}$ and $\eta_i =\mu_{h_i} - \sqrt{\frac{8}{n_{h_i}}\log\frac{k}{\delta}}$. Then we have $ e^{-\frac{n_{h_i}}{2} \left(\mu_{h_i} - \eta_i - \sqrt{\frac{2}{n_{h_i}}\log\frac{k}{\delta}} \right)_+ ^2} = \delta/k $. Now, consider $j \ge i$. Then,
\begin{align*}
e^{-\frac{n_j}{2} \left(\eta_i - \mu_j + \sqrt{\frac{2}{n_j}\log\frac{k}{\delta}}\right)_+ ^2} \le \frac{\delta}{k}
\end{align*}
whenever $\eta_i - \mu_j \ge 0$, i.e.\ $\mu_{h_i} - \sqrt{\frac{8}{n_{h_i}}\log\frac{k}{\delta}} \ge \mu_j$. Define
\begin{align*}
U_i = \I{\forall j \ge i, \mu_{h_i} - \sqrt{\frac{8}{n_{h_i}}\log\frac{k}{\delta}} \ge \mu_j } \,.
\end{align*}
Then we have $ g_i^* U_i \le \frac{k - i + 2}{k}\delta \le \delta $. According to Theorem~\ref{thm:instance-upper-general} we have $\Prb{{\mathcal{A}}(X)\ge i} \le \min\{ 1, g_i^* \}$, so for any $i$ such that $\Prb{{\mathcal{A}}(X)\ge i} > \delta$, we must have $U_i = 0$, which is equivalent to the existence of some $s\geq i$ such that
\begin{align*}
\mu_s > \max_{j<i} \mu_j - \sqrt{\frac{8}{n_j}\log\frac{k}{\delta}}\,.
\addeq\label{eq:proof-lcb-cond-1}
\end{align*}
Let $\hat{i}$ be the largest index $i$ that satisfies \eqref{eq:proof-lcb-cond-1}. Then we have $\Prb{{\mathcal{A}}(X)\ge \hat{i} + 1} \le \delta$ and thus $\Prb{\mu^* - \mu_{{\mathcal{A}}(X)} \le \Delta_{\hat{i}}} \ge 1 - \delta$. It remains to upper bound $\Delta_{\hat{i}}$.
For any $i \in \iset{k}$, if $\hat{i} \le i$ then $\Delta_{\hat{i}} \le \Delta_{i}$. If $\hat{i} > i$ we have
\begin{align*}
\mu_{\hat{i}} > \max_{j < \hat{i}} \mu_j - \sqrt{\frac{8}{n_j}\log\frac{k}{\delta}} \ge \mu_i - \sqrt{\frac{8}{n_i}\log\frac{k}{\delta}} \,.
\end{align*}
Therefore,
\begin{align*}
\Delta_{\hat{i}} = \Delta_i + \mu_{i} - \mu_{\hat{i}} \le \Delta_i + \sqrt{\frac{8}{n_{i}}\log\frac{k}{\delta}} \,,
\end{align*}
which concludes the proof.
\if0
Pick $\eta=\mu_1$; then the second term in $g_i(\eta)$ becomes $\delta/k$. For $j$ such that $\Delta_j\ge \sqrt{\frac{8}{n_j}\log\frac{k}{\delta}}$ we have
\begin{align*}
e^{-\frac{n_j}{2} \left(\eta - \mu_j - \sqrt{\frac{2}{n_j}\log\frac{k}{\delta}} \right)_+ ^2} \le \frac{\delta}{k} \,.
\end{align*}
Define
\begin{align*}
U_i = \I{\forall j \ge i, \Delta_j\ge \sqrt{\frac{8}{n_j}\log\frac{k}{\delta}} } \,.
\end{align*}
Then we have $ g_i^* U_i \le \frac{k - i + 2}{k}\delta \le \delta $. According to Theorem~\ref{thm:instance-upper-general} we have $\Prb{{\mathcal{A}}(X)\ge i} \le \min\{ 1, g_i^* \}$, so for any $i$ such that $\Prb{{\mathcal{A}}(X)\ge i} > \delta$, we must have $U_i = 0$, which is equivalent to
\begin{align*}
\max_{j \ge i} \mu_j + \sqrt{\frac{8}{n_j}\log\frac{k}{\delta}} > \mu_1\,.
\addeq\label{eq:proof-ucb-cond-1}
\end{align*}
For any $i \in \iset{k}$, if $\hat{i} \le i$ then $\Delta_{\hat{i}} \le \Delta_{i}$. If $\hat{i} > i$ we have
\begin{align*}
\max_{j \ge \hat{i}} \mu_j + \sqrt{\frac{8}{n_j}\log\frac{k}{\delta}} \le \mu_{\hat{i}} + \max_{j > i} \sqrt{\frac{8}{n_{j}}\log\frac{k}{\delta}} \,.
\end{align*}
Applying \eqref{eq:proof-ucb-cond-1} gives
\begin{align*}
\Delta_{\hat{i}} = \mu_{1} - \mu_{\hat{i}} \le \max_{j > i} \sqrt{\frac{8}{n_{j}}\log\frac{k}{\delta}} \,.
\end{align*}
Therefore,
\begin{align*}
\Delta_{\hat{i}} & \le \max\left\{ \Delta_{i}, \max_{j > i} \sqrt{\frac{8}{n_{j}}\log\frac{k}{\delta}} \right\} \\
& \le \Delta_{i} + \max_{j > i} \sqrt{\frac{8}{n_{j}}\log\frac{k}{\delta}} \,
\end{align*}
for any $i \in \iset{k}$, which concludes the proof.
\fi
\end{proof}
\begin{proof}[Proof of Corollary~\ref{coro:ub-ucb-instance}]
Consider now the UCB algorithm. Then, for each $i\ge 2$,
\begin{align*}
g_i(\eta) = & \sum_{j\ge i} e^{-\frac{n_j}{2} \left(\eta - \mu_j - \sqrt{\frac{2}{n_j}\log\frac{k}{\delta}} \right)_+ ^2} + \min_{j < i} e^{-\frac{n_j}{2} \left(\mu_j - \eta + \sqrt{\frac{2}{n_j}\log\frac{k}{\delta}} \right)_+ ^2} \,.
\end{align*}
Pick $\eta=\mu_1$; then the second term in $g_i(\eta)$ becomes $\delta/k$. For $j$ such that $\Delta_j\ge \sqrt{\frac{8}{n_j}\log\frac{k}{\delta}}$ we have
\begin{align*}
e^{-\frac{n_j}{2} \left(\eta - \mu_j - \sqrt{\frac{2}{n_j}\log\frac{k}{\delta}} \right)_+ ^2} \le \frac{\delta}{k} \,.
\end{align*}
Define
\begin{align*}
U_i = \I{\forall j \ge i, \Delta_j\ge \sqrt{\frac{8}{n_j}\log\frac{k}{\delta}} } \,.
\end{align*}
Then we have $ g_i^* U_i \le \frac{k - i + 2}{k}\delta \le \delta $. According to Theorem~\ref{thm:instance-upper-general} we have $\Prb{{\mathcal{A}}(X)\ge i} \le \min\{ 1, g_i^* \}$, so for any $i$ such that $\Prb{{\mathcal{A}}(X)\ge i} > \delta$, we must have $U_i = 0$, which is equivalent to
\begin{align*}
\max_{j \ge i} \mu_j + \sqrt{\frac{8}{n_j}\log\frac{k}{\delta}} > \mu_1\,.
\addeq\label{eq:proof-ucb-cond-1}
\end{align*}
Let $\hat{i}$ be the largest index $i$ that satisfies \eqref{eq:proof-ucb-cond-1}. Then we have $\Prb{{\mathcal{A}}(X)\ge \hat{i} + 1} \le \delta$. Therefore, we have $\Prb{\mu^* - \mu_{{\mathcal{A}}(X)} \le \Delta_{\hat{i}}} \ge 1 - \delta$. It remains to upper bound $\Delta_{\hat{i}}$.
For any $i \in \iset{k}$, if $\hat{i} \le i$ then $\Delta_{\hat{i}} \le \Delta_{i}$. If $\hat{i} > i$, we have
\begin{align*}
\max_{j \ge \hat{i}} \mu_j + \sqrt{\frac{8}{n_j}\log\frac{k}{\delta}} \le \mu_{\hat{i}} + \max_{j > i} \sqrt{\frac{8}{n_{j}}\log\frac{k}{\delta}} \,.
\end{align*}
Applying \eqref{eq:proof-ucb-cond-1} gives
\begin{align*}
\Delta_{\hat{i}} = \mu_{1} - \mu_{\hat{i}} \le \max_{j > i} \sqrt{\frac{8}{n_{j}}\log\frac{k}{\delta}} \,.
\end{align*}
Therefore,
\begin{align*}
\Delta_{\hat{i}} \le \max\left\{ \Delta_{i}, \max_{j > i} \sqrt{\frac{8}{n_{j}}\log\frac{k}{\delta}} \right\} \le \Delta_{i} + \max_{j > i} \sqrt{\frac{8}{n_{j}}\log\frac{k}{\delta}} \,
\end{align*}
for any $i \in \iset{k}$, which concludes the proof.
\end{proof}
\begin{proof}[Proof of Proposition~\ref{prop:lcb-vs-ucb}]
Fixing $S\subset \iset{k}$, we take $\{n_i\}_{i\in S}\to\infty$ and $\{n_i\}_{i\notin S} = 1$. The upper bound for LCB in Corollary~\ref{coro:ub-lcb-instance} can be written as
\begin{align*}
\hat{{\mathcal{R}}}_S(\textnormal{LCB}) & = \min\left\{ \min_{i\in S} \Delta_i , \min_{i\notin S} \left(\Delta_i + \sqrt{8\log\frac{k}{\delta}}\right) \right\}+ \delta \\
& = \min_{i\in S} \Delta_i + \delta \\
& = \Delta_{\min\{i\in \iset{k}: i\in S\}} + \delta \,.
\end{align*}
Similarly, we have
\begin{align*}
\hat{{\mathcal{R}}}_S(\textnormal{UCB}) = \min_{i\in \iset{k}} \left(\Delta_i + \max_{j > i, j\notin S} \sqrt{8\log\frac{k}{\delta}} \right) + \delta
\end{align*}
and
\begin{align*}
\hat{{\mathcal{R}}}_S(\textnormal{greedy}) \ge \min_{i\in \iset{k}} \left(\Delta_i + \max_{j > i, j\notin S} \sqrt{2\log\frac{k}{\delta}} \right) + \delta \,.
\end{align*}
Note that for $\delta\in (0,1)$, $\sqrt{2\log\frac{k}{\delta}} > 1 \ge \Delta_{\max}$.
So we can further lower bound $\hat{{\mathcal{R}}}_S(\textnormal{UCB})$ and $\hat{{\mathcal{R}}}_S(\textnormal{greedy})$ by $\Delta_h + \delta$, where $h=\min\{i \in \iset{k}: \forall j > i, j\in S \}$. Let $m=|S|$. Notice that unless $S=\{k - m + 1, \dots, k\}$, we always have $\min\{i\in \iset{k}: i\in S\} < \min\{i \in \iset{k}: \forall j > i, j\in S \}$. So we have $\hat{{\mathcal{R}}}_S(\textnormal{LCB}) < \hat{{\mathcal{R}}}_S(\textnormal{UCB})$~(or $\hat{{\mathcal{R}}}_S(\textnormal{greedy})$) whenever $S\ne \{k - m + 1, \dots, k\}$. Under the uniform distribution over all possible subsets for $S$, the event $S=\{k - m + 1, \dots, k\}$ happens with probability $\binom{k}{m}^{-1}$, which concludes the proof.
\end{proof}
\subsection{Instance-dependent Lower Bounds}
\begin{proof}[Proof of Theorem~\ref{thm:instance-negative-1}]
We first derive an upper bound for ${\mathcal{R}}^*_{{\mathcal{M}}_{\mathbf{n}, c}}(\theta)$. Assuming $X=(X_1, X_2, \mathbf{n})$ with $X_i\sim {\mathcal{N}}(\mu_i, 1/n_i)$, for any $\beta\in \mathbb{R}$ we define the algorithm ${\mathcal{A}}_{\beta}$ as
\begin{align*}
{\mathcal{A}}_{\beta}(X) = \begin{cases} 1, & \textrm{ if } X_1 - X_2 \ge \frac{\beta}{\sqrt{n_{\min}}}\, ; \\ 2, & \textrm{ otherwise } \,. \end{cases}
\end{align*}
We now analyze the regret of ${\mathcal{A}}_{\beta}$. By Hoeffding's inequality we have the following instance-dependent regret upper bound:
\begin{proposition}
Consider any $\beta\in \mathbb{R}$ and $\theta\in\Theta_{\mathbf{n}}$. Let $\Delta=|\mu_1 - \mu_2|$. If $\mu_1\ge \mu_2$ then
\begin{align*}
{\mathcal{R}}({\mathcal{A}}_{\beta}, \theta) \le \I{\Delta \le \frac{\beta}{\sqrt{n_{\min}}}} \frac{\beta}{\sqrt{n_{\min}}} + \I{\Delta > \frac{\beta}{\sqrt{n_{\min}}} }e^{-\frac{n_{\min}}{4}\left( \Delta - \frac{\beta}{\sqrt{n_{\min}}} \right)_+^2}\,.
\end{align*}
Furthermore, if $\mu_1< \mu_2$, we have
\begin{align*}
{\mathcal{R}}({\mathcal{A}}_{\beta}, \theta) \le \I{\Delta \le \frac{-\beta}{\sqrt{n_{\min}}}} \frac{-\beta}{\sqrt{n_{\min}}} + \I{\Delta > \frac{-\beta}{\sqrt{n_{\min}}} }e^{-\frac{n_{\min}}{4}\left( \Delta + \frac{\beta}{\sqrt{n_{\min}}} \right)_+^2}\,.
\end{align*}
\end{proposition}
Maximizing over $\Delta$ gives our worst-case regret guarantee:
\begin{proposition}
For any $\beta\in \mathbb{R}$,
\begin{align*}
\sup_{\theta\in \Theta_{\mathbf{n}}} {\mathcal{R}}({\mathcal{A}}_{\beta}, \theta) \le \frac{|\beta| + 2}{\sqrt{n_{\min}}} \,.
\end{align*}
\end{proposition}
${\mathcal{A}}_{\beta}(X)$ is minimax optimal for a specific range of $\beta$:
\begin{proposition}
If $|\beta| \le cc_0 - 2$ then ${\mathcal{A}}_{\beta} \in {\mathcal{M}}_{\mathbf{n}, c}$.
\label{prop:thresholding-alg-upper}
\end{proposition}
Given $\theta\in \Theta_{\mathbf{n}}$, to upper bound ${\mathcal{R}}^*_{{\mathcal{M}}_{\mathbf{n}, c}}(\theta)$, we pick $\beta$ such that ${\mathcal{A}}_{\beta} \in {\mathcal{M}}_{\mathbf{n}, c}$ and ${\mathcal{A}}_{\beta}$ performs well on $\theta$. For $\theta$ with $\mu_1\ge \mu_2$, we set $\beta = 2 - cc_0$, so ${\mathcal{R}}^*_{{\mathcal{M}}_{\mathbf{n}, c}}(\theta) \le {\mathcal{R}}({\mathcal{A}}_{2-cc_0}, \theta)$. For $\theta$ with $\mu_1 < \mu_2$, we set $\beta = cc_0 - 2$, so ${\mathcal{R}}^*_{{\mathcal{M}}_{\mathbf{n}, c}}(\theta) \le {\mathcal{R}}({\mathcal{A}}_{cc_0-2}, \theta)$. We now construct two instances $\theta_1, \theta_2\in \Theta_{\mathbf{n}}$ and show that no algorithm can achieve regret close to ${\mathcal{R}}^*_{{\mathcal{M}}_{\mathbf{n}, c}}$ on both instances.
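As a numerical aside (our own code, not part of the argument): since $X_1 - X_2 \sim {\mathcal{N}}(\Delta, \sigma^2)$ with $\sigma^2 = 1/n_1 + 1/n_2$ when $\mu_1 - \mu_2 = \Delta \ge 0$, the exact regret of ${\mathcal{A}}_{\beta}$ is $\Delta\, \Phi\big((\beta/\sqrt{n_{\min}} - \Delta)/\sigma\big)$, and one can check over a grid of gaps that the Hoeffding bound from the first proposition dominates it, here with $n_1 = 4$, $n_2 = 9$, $\beta = 1$:

```python
import math

def regret_threshold_alg(delta_gap, beta, n1, n2):
    # Exact regret of A_beta on a two-arm Gaussian instance with mu1 - mu2 = delta_gap >= 0:
    # R = Delta * P{ X1 - X2 < beta / sqrt(n_min) },  X1 - X2 ~ N(Delta, 1/n1 + 1/n2).
    n_min = min(n1, n2)
    sigma = math.sqrt(1.0 / n1 + 1.0 / n2)
    t = (beta / math.sqrt(n_min) - delta_gap) / sigma
    return delta_gap * 0.5 * math.erfc(-t / math.sqrt(2.0))  # Phi(t)

def regret_bound(delta_gap, beta, n1, n2):
    # Hoeffding-style bound from the proposition (case mu1 >= mu2, beta >= 0).
    n_min = min(n1, n2)
    b = beta / math.sqrt(n_min)
    if delta_gap <= b:
        return b
    return math.exp(-(n_min / 4.0) * (delta_gap - b) ** 2)

for d in [i * 0.05 for i in range(1, 101)]:  # gaps 0.05 .. 5.0
    assert regret_threshold_alg(d, 1.0, 4, 9) <= regret_bound(d, 1.0, 4, 9)
```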
Fixing some $\lambda \in \mathbb{R}$ and $\eta > 0$, we define
$$\theta_1=(\mu_1, \mu_2)=\left(\lambda + \frac{\eta}{n_1}, \lambda - \frac{\eta}{n_2}\right)$$
and
$$\theta_2=(\mu'_1, \mu'_2)=\left(\lambda - \frac{\eta}{n_1}, \lambda + \frac{\eta}{n_2}\right)\,.$$
On instance $\theta_1$ we have $X_1-X_2\sim {\mathcal{N}}(( \frac{1}{n_1} + \frac{1}{n_2} )\eta, \frac{1}{n_1} + \frac{1}{n_2} )$, while on instance $\theta_2$ we have $X_1-X_2\sim {\mathcal{N}}(-( \frac{1}{n_1} + \frac{1}{n_2} )\eta, \frac{1}{n_1} + \frac{1}{n_2} )$. Let $\Phi$ be the CDF of the standard normal distribution ${\mathcal{N}}(0, 1)$, $\Delta=( \frac{1}{n_1} + \frac{1}{n_2} )\eta$, and $\sigma^2=\frac{1}{n_1} + \frac{1}{n_2}$. Then we have
\begin{align*}
{\mathcal{R}}({\mathcal{A}}_{\beta}, \theta_1) & = \Delta \Prbb{\theta_1}{{\mathcal{A}}_{\beta} = 2} \\
& = \Delta \Prbb{\theta_1}{X_1-X_2 < \frac{\beta}{\sqrt{n_{\min}}} } \\
& = \Delta \Phi\left( \frac{\beta - \Delta \sqrt{n_{\min}} }{\sigma \sqrt{n_{\min}} } \right)\,,
\end{align*}
and
\begin{align*}
{\mathcal{R}}({\mathcal{A}}_{-\beta}, \theta_2) & = \Delta \Prbb{\theta_2}{{\mathcal{A}}_{-\beta} = 1} \\
& = \Delta \Prbb{\theta_2}{X_1-X_2 \ge - \frac{\beta}{\sqrt{n_{\min}}} } \\
& = \Delta \Phi\left( \frac{\beta - \Delta \sqrt{n_{\min}} }{\sigma \sqrt{n_{\min}} } \right)\,.
\end{align*}
It follows that our upper bound on ${\mathcal{R}}^*_{{\mathcal{M}}_{\mathbf{n}, c}}$ is the same for both instances, i.e., ${\mathcal{R}}({\mathcal{A}}_{2-cc_0}, \theta_1) = {\mathcal{R}}({\mathcal{A}}_{cc_0-2}, \theta_2)$. Next we show that the greedy algorithm ${\mathcal{A}}_0$ is optimal in terms of minimizing the worst regret between $\theta_1$ and $\theta_2$.
\begin{lemma}
Let ${\mathcal{A}}_0$ be the greedy algorithm where ${\mathcal{A}}_0(X)=1$ if $X_1\ge X_2$ and ${\mathcal{A}}_0(X)=2$ otherwise.
Then we have \begin{align*} {\mathcal{R}}({\mathcal{A}}_0, \theta_1)= {\mathcal{R}}({\mathcal{A}}_0, \theta_2) = \min_{{\mathcal{A}}} \max\{{\mathcal{R}}({\mathcal{A}}, \theta_1), {\mathcal{R}}({\mathcal{A}}, \theta_2)\} \,. \end{align*} \label{lemma:greedy-optimal-two-ins} \end{lemma} \begin{proof}[Proof of Lemma~\ref{lemma:greedy-optimal-two-ins}] The first step is to show, by applying the Neyman-Pearson Lemma, that thresholding algorithms on $X_1-X_2$ perform the most powerful hypothesis tests between $\theta_1$ and $\theta_2$. Let $f_{\theta}$ be the probability density function for the observation $(X_1, X_2)$ under instance $\theta$. Then, the likelihood ratio function can be written as \begin{align*} \frac{ f_{\theta_1}(X_1, X_2) }{ f_{\theta_2}(X_1, X_2) } = \frac{ e^{-\frac{n_1}{2} (X_1 - \lambda - \eta/n_1)^2 - \frac{n_2}{2} (X_2 - \lambda + \eta/n_2)^2 } } { e^{-\frac{n_1}{2} (X_1 - \lambda + \eta/n_1)^2 - \frac{n_2}{2} (X_2 - \lambda - \eta/n_2)^2 } } = e^{2\eta(X_1 - X_2)} \,. \end{align*} Applying the Neyman-Pearson Lemma to our scenario gives the following statement: \begin{proposition}[Neyman-Pearson Lemma] For any $\gamma> 0$ let ${\mathcal{A}}^{\gamma}$ be the algorithm where ${\mathcal{A}}^{\gamma}(X)=1$ if $\frac{ f_{\theta_1}(X_1, X_2) }{ f_{\theta_2}(X_1, X_2) } \ge \gamma$ and ${\mathcal{A}}^{\gamma}(X)=2$ otherwise. Let $\alpha = \Prbb{\theta_1}{{\mathcal{A}}^{\gamma}(X)=2}$. Then for any algorithm ${\mathcal{A}}'$ such that $\Prbb{\theta_1}{{\mathcal{A}}'(X)=2}=\alpha$, we have $ \Prbb{\theta_2}{{\mathcal{A}}'(X)=1} \ge \Prbb{\theta_2}{{\mathcal{A}}^{\gamma}(X)=1}$. \label{prop:neyman-pearson} \end{proposition} Note that $\frac{ f_{\theta_1}(X_1, X_2) }{ f_{\theta_2}(X_1, X_2) } \ge \gamma$ is equivalent to $X_1-X_2\ge (2\eta)^{-1}\log \gamma$.
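The likelihood-ratio simplification above can be checked numerically. The sketch below (all parameter values and test points are arbitrary, chosen for illustration) verifies that the log-ratio of the two Gaussian densities equals $2\eta(x_1 - x_2)$:

```python
import math

# Numeric check of f_{theta1}/f_{theta2} = exp(2*eta*(x1 - x2)) for
# theta1 = (lam + eta/n1, lam - eta/n2), theta2 = (lam - eta/n1, lam + eta/n2).
def log_density(x1, x2, mu1, mu2, n1, n2):
    # log of the (unnormalised) joint Gaussian density of (X1, X2);
    # the normalising constants cancel in the ratio
    return -n1 / 2 * (x1 - mu1) ** 2 - n2 / 2 * (x2 - mu2) ** 2

n1, n2, lam, eta = 40, 10, 0.5, 3.0   # hypothetical parameters
theta1 = (lam + eta / n1, lam - eta / n2)
theta2 = (lam - eta / n1, lam + eta / n2)

for x1, x2 in [(0.3, 0.8), (1.2, -0.4), (0.0, 0.0)]:
    log_ratio = (log_density(x1, x2, *theta1, n1, n2)
                 - log_density(x1, x2, *theta2, n1, n2))
    assert math.isclose(log_ratio, 2 * eta * (x1 - x2), abs_tol=1e-9)
```

In particular the ratio depends on the data only through $x_1 - x_2$, which is why thresholding on $X_1 - X_2$ realises every Neyman-Pearson test here.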
Returning to the proof of Lemma~\ref{lemma:greedy-optimal-two-ins}, consider an arbitrary algorithm ${\mathcal{A}}'$ and let $\alpha = {\mathcal{R}}({\mathcal{A}}', \theta_1)/\Delta = \Prbb{\theta_1}{{\mathcal{A}}'(X)=2}$. Let $\gamma$ be the threshold that satisfies $\Prbb{\theta_1}{{\mathcal{A}}^{\gamma}(X)=2}=\alpha$. Such a $\gamma$ exists because $X_1, X_2$ follow a continuous distribution. According to Proposition~\ref{prop:neyman-pearson} we have $ \Prbb{\theta_2}{{\mathcal{A}}'(X)=1} \ge \Prbb{\theta_2}{{\mathcal{A}}^{\gamma}(X)=1}$. Therefore, we have shown that ${\mathcal{R}}({\mathcal{A}}^{\gamma}, \theta_1)={\mathcal{R}}({\mathcal{A}}', \theta_1)$ and ${\mathcal{R}}({\mathcal{A}}^{\gamma}, \theta_2)\le{\mathcal{R}}({\mathcal{A}}', \theta_2)$, which means that for any algorithm ${\mathcal{A}}'$ there exists some $\gamma$ such that \begin{align*} \max\{{\mathcal{R}}({\mathcal{A}}^\gamma, \theta_1), {\mathcal{R}}({\mathcal{A}}^\gamma, \theta_2)\} \le \max\{{\mathcal{R}}({\mathcal{A}}', \theta_1), {\mathcal{R}}({\mathcal{A}}', \theta_2)\} \,. \end{align*} It remains to show that $\gamma=1$ is the minimizer of $\max\{{\mathcal{R}}({\mathcal{A}}^\gamma, \theta_1), {\mathcal{R}}({\mathcal{A}}^\gamma, \theta_2)\}$. This follows because ${\mathcal{R}}({\mathcal{A}}^\gamma, \theta_1)$ is a monotonically increasing function of $\gamma$, ${\mathcal{R}}({\mathcal{A}}^\gamma, \theta_2)$ is a monotonically decreasing function of $\gamma$, and $\gamma=1$ makes ${\mathcal{R}}({\mathcal{A}}^\gamma, \theta_1)={\mathcal{R}}({\mathcal{A}}^\gamma, \theta_2)$, so $\gamma=1$ is the minimizer. \end{proof} We now continue with the proof of Theorem~\ref{thm:instance-negative-1}.
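The remainder of the argument compares Gaussian tail probabilities via the standard bounds $\frac{x}{1+x^2}\phi(x) < \Phi(-x) < \frac{1}{x}\phi(x)$ for $x>0$. As a quick numerical sanity check of these bounds (the grid of test points is an arbitrary choice):

```python
import math

# Check the standard Gaussian tail bounds
#   x/(1+x^2) * phi(x) < Phi(-x) < phi(x)/x   for x > 0.
def phi(x):
    # standard normal density
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def Phi(x):
    # standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

for x in [0.1, 0.5, 1.0, 2.0, 5.0]:
    lower = x / (1 + x * x) * phi(x)
    upper = phi(x) / x
    assert lower < Phi(-x) < upper
```

Both bounds are tight for large $x$, which is exactly the regime exercised by the exponential regret ratios below.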
Applying Lemma~\ref{lemma:greedy-optimal-two-ins} gives \begin{align*} \sup_{\theta\in \Theta_{\mathbf{n}}} \frac{{\mathcal{R}}({\mathcal{A}}, \theta)}{{\mathcal{R}}^*_{{\mathcal{M}}_{\mathbf{n}, c}}(\theta)} & \ge \max \left\{ \frac{{\mathcal{R}}({\mathcal{A}}, \theta_1)}{{\mathcal{R}}^*_{{\mathcal{M}}_{\mathbf{n}, c}}(\theta_1)}, \frac{{\mathcal{R}}({\mathcal{A}}, \theta_2)}{{\mathcal{R}}^*_{{\mathcal{M}}_{\mathbf{n}, c}}(\theta_2)}\right\} \\ & \ge \max \left\{ \frac{{\mathcal{R}}({\mathcal{A}}, \theta_1)}{{\mathcal{R}}({\mathcal{A}}_{2-cc_0}, \theta_1)}, \frac{{\mathcal{R}}({\mathcal{A}}, \theta_2)}{{\mathcal{R}}({\mathcal{A}}_{cc_0-2}, \theta_2)}\right\} \\ & = \frac{\max\left\{ {\mathcal{R}}({\mathcal{A}}, \theta_1), {\mathcal{R}}({\mathcal{A}}, \theta_2)\right\}}{{\mathcal{R}}({\mathcal{A}}_{2-cc_0}, \theta_1)} \\ & \ge \frac{{\mathcal{R}}({\mathcal{A}}_0, \theta_1)}{{\mathcal{R}}({\mathcal{A}}_{2-cc_0}, \theta_1)} \\ & = \frac{\Phi\left( - \frac{\Delta}{\sigma } \right)}{\Phi\left( - \frac{cc_0-2}{\sigma \sqrt{n_{\min}} } - \frac{\Delta}{\sigma } \right)} \,. \addeq\label{eq:cdf-ratio} \end{align*} Now we apply the fact that for $x>0$, $\frac{x}{1 + x^2}\phi(x) < \Phi(-x) < \frac{1}{x}\phi(x)$ to lower bound \eqref{eq:cdf-ratio}, where $\phi$ is the probability density function of the standard normal distribution. Choosing $\beta = cc_0 - 2$, we have \begin{align*} \frac{\Phi\left( - \frac{\Delta}{\sigma } \right)}{\Phi\left( - \frac{\beta}{\sigma \sqrt{n_{\min}} } - \frac{\Delta}{\sigma } \right)} \ge \frac{\beta + \Delta \sqrt{n_{\min}} }{\sigma \sqrt{n_{\min}} }\frac{\Delta/\sigma}{1 + (\Delta/\sigma)^2} e^{\frac{1}{2} \left( \frac{\beta^2}{\sigma^2 n_{\min}} + \frac{\beta\Delta }{\sigma^2 \sqrt{n_{\min}}} \right) } \ge \frac{\eta^2}{n_{\min} + \eta^2} e^{\frac{\beta^2}{4} + \frac{\beta\eta}{2 \sqrt{n_{\min}}} } \,.
\end{align*} Picking $\lambda=1/2$ and $\eta=n_{\min}/2$ such that $\theta_1, \theta_2 \in [0, 1]^2$, we have \begin{align*} \sup_{\theta\in \Theta_{\mathbf{n}}} \frac{{\mathcal{R}}({\mathcal{A}}, \theta)}{{\mathcal{R}}^*_{{\mathcal{M}}_{\mathbf{n}, c}}(\theta)} \ge \frac{ n_{\min} }{ n_{\min} + 4 } e^{\frac{\beta^2}{4} + \frac{\beta}{4}\sqrt{n_{\min}} }\,, \end{align*} which concludes the proof. \end{proof} \iffalse \todoy[inline]{End...} We consider the 2-armed bandit case ($k=2$) with Gaussian rewards. Let $\mu_1$ and $\mu_2$ be the mean rewards of arms 1 and 2. The variances of both arm's reward are one. In the data $X$ we have $n_1 = \pi_1 n $ and $n_2 = \pi_2 n $ with $\pi_1,\pi_2 > 0$ and $\pi_1 + \pi_2 = 1$. Let now $\Theta_n$ be the set of the corresponding problem instances. The formal definition is as follows: \begin{definition}[Instance Optimality] Fix $c,c'>0$ and let ${\mathcal{M}}$ be the set of those algorithms that are minimax optimal up to a multiplicative factor $c>0$ for every $n$ over $\Theta_n$. For $P=(P_1,P_2)\in {\mathcal{P}}^2$ let ${\mathcal{R}}^*_n(P)=\inf_{{\mathcal{A}}\in {\mathcal{M}}} {\mathcal{R}}({\mathcal{A}}, (n,P))$ where we abuse notation by using $(n,P)$ to denote the instance in $\Theta_n$ where the respective distributions of the two arms are given by $P_1$ and $P_2$. Then, an algorithm ${\mathcal{A}}$ is $(c,c')$ instance-optimal if \begin{align*} \limsup_{n\to\infty} \frac{{\mathcal{R}}({\mathcal{A}},(n,P))}{{\mathcal{R}}^*_n(P)} \le c'\,. \end{align*} \end{definition} \todoy{One major problem with this is that we are not restricting a single algorithm for all $n$ because the algorithm knows $n$. This is actually ok because the algorithm actually takes $n$ as its input.} \todoy[inline]{ Is this result satisfactory? Any slight shift of the constant in the exp will lead to infinite ratio. E.g. in best arm identification, the fixed horizon setting considers the quantity inside the exp. 
This quantity will be the same up to a constant multiplier for all index algorithms. The problem is that we consider $\pi_1$ and $\pi_2$ to be constants. What we probably need to show is to consider the quantity inside the exp, and then take the sup over $(\pi_1, \pi_2)$. Then the result might be that there cannot be a $(\pi_1, \pi_2)$-independent constant-factor instance optimal algorithm. } The main result of this section is the following. \begin{theorem} \label{thm:no-instance-optimality} No matter the choice of $c,c'>0$, there are no $(c,c')$ instance optimal algorithms for batch policy optimization in two-armed bandits. \end{theorem} That is, the instance-optimality, which is normally considered in the literature of minimizing cumulative regret for stochastic bandits, does not exist for the batch setting. As we have already excluded the minimax optimality in the previous section, there is a lack of optimality criteria to distinguish between algorithms when considering batch policy optimization with finite-armed stochastic bandits. However, recent works suggest the pessimistic algorithm (LCB) is a good choice in batch RL from both theoretical and practical perspectives \citep{BuGeBe20,DuGlNam16,FaTaVaSmDo19,yu2020mopo}. A natural question that arises here is: if pessimism is the answer, what is the question? In the next section we provide a characterization of the pessimistic algorithm based on a weighted-minimax objective. \begin{proof}[Proof of Theorem \ref{thm:no-instance-optimality}] To prove the hardness result, we consider two problem instances and show that no index algorithm can be optimal on both problems. In fact, an algorithm that is optimal in one problem can be arbitrarily far from optimal on the other problem. Let $\Delta>0$ be the reward gap. In the first problem instance $\theta_1$, $\mu_1 - \mu_2 = \Delta$, while in the second problem $\theta_2$, $\mu_2 - \mu_1 = \Delta$.
The decision of an index algorithm with parameter $\alpha\in [-\beta_\delta, \beta_\delta]$ is: \begin{itemize} \item Select Arm 1 if: \begin{align*} \hat{\mu}_1 + \frac{\alpha}{\sqrt{n_1}} \geq \hat{\mu}_2 + \frac{\alpha}{\sqrt{n_2}} \Rightarrow \ \hat{\mu}_1 - \hat{\mu}_2 \geq \frac{\alpha}{\sqrt{n_2}} - \frac{\alpha}{\sqrt{n_1}}. \end{align*} \item Select Arm 2 if: \begin{align*} \hat{\mu}_1 + \frac{\alpha}{\sqrt{n_1}} \leq \hat{\mu}_2 + \frac{\alpha}{\sqrt{n_2}} \Rightarrow \ \hat{\mu}_1 - \hat{\mu}_2 \leq \frac{\alpha}{\sqrt{n_2}} - \frac{\alpha}{\sqrt{n_1}}. \end{align*} \end{itemize} Thus, one can view the index algorithm as setting a threshold $ {\alpha} / {\sqrt{n_2}} - {\alpha} / {\sqrt{n_1}}$ as the decision boundary, where an arm is selected by comparing $\hat{\mu}_1 - \hat{\mu}_2$ with it. Then we can write the regret of an index algorithm with parameter $\alpha$ as \begin{align*} R({\mathcal{I}}(\alpha), \theta_1) & = \Delta \cdot {\mathbb{P}}_{\theta_1}\left\{ \hat{\mu}_1 - \hat{\mu}_2 < \frac{\alpha}{\sqrt{n_2}} - \frac{\alpha}{\sqrt{n_1}}\right\} \addeq\label{eq:regret-theta-1}\\ R({\mathcal{I}}(\alpha), \theta_2) & = \Delta \cdot {\mathbb{P}}_{\theta_2}\left\{ \hat{\mu}_1 - \hat{\mu}_2 >\frac{\alpha}{\sqrt{n_2}} - \frac{\alpha}{\sqrt{n_1}} \right\} \addeq\label{eq:regret-theta-2} \end{align*} We have the following result, which suggests that any index algorithm with appropriate bias is Pareto-optimal. The proof is given in the supplementary material. \begin{lemma} \label{lem:index-optimal} The confidence-biased index algorithm is optimal for the following problem, \begin{align*} \min_{{\mathcal{A}}} {\mathcal{R}}({\mathcal{A}}, \theta_1)\, ,\quad \text{s.t. } {\mathcal{R}}({\mathcal{A}}, \theta_2) \leq \eta \Delta\, , \end{align*} for any $\eta\in(0,1)$.
\end{lemma} Next we consider two index algorithms ${\mathcal{I}}(\alpha_1)$ and ${\mathcal{I}}(\alpha_2)$, and compare their regret ratios on $\theta_1$ and $\theta_2$ by using a Gaussian CDF approximation on (\ref{eq:regret-theta-1}) and (\ref{eq:regret-theta-2}) (see the supplementary material for exact computations), \begin{align*} \frac{{\mathcal{R}}({\mathcal{I}}(\alpha_1), \theta_1)}{{\mathcal{R}}({\mathcal{I}}(\alpha_2), \theta_1)}\approx \exp \Big\{ c(\alpha_2 - \alpha_1) \sqrt{n}\Delta + c' \Big\} \, , \\[0.5\baselineskip] \frac{{\mathcal{R}}({\mathcal{I}}(\alpha_1), \theta_2)}{{\mathcal{R}}({\mathcal{I}}(\alpha_2), \theta_2)}\approx \exp \Big\{ c(\alpha_1 - \alpha_2) \sqrt{n}\Delta + c' \Big\} \, , \end{align*} where $c$ and $c'$ are constants that do not depend on $n$. Without loss of generality, we assume $\alpha_1 < \alpha_2$. Let ${\mathcal{A}}$ be an algorithm that has smaller regret than ${\mathcal{I}}(\alpha_1)$ on $\theta_2$. Then by Lemma \ref{lem:index-optimal} we have ${\mathcal{R}}({\mathcal{A}}, \theta_1)\geq {\mathcal{R}}({\mathcal{I}}(\alpha_1), \theta_1)$. Thus, \begin{align*} \lim_{n\rightarrow \infty} \frac{{\mathcal{R}}({\mathcal{A}}, \theta_1)}{{\mathcal{R}}({\mathcal{I}}(\alpha_2), \theta_1)} \geq \lim_{n\rightarrow \infty} \frac{{\mathcal{R}}({\mathcal{I}}(\alpha_1), \theta_1)}{{\mathcal{R}}({\mathcal{I}}(\alpha_2), \theta_1)} = \infty \, , \end{align*} which means that the regret of ${\mathcal{A}}$ can be arbitrarily worse than that of ${\mathcal{I}}(\alpha_2)$ on $\theta_1$ as the sample size goes to infinity. Therefore, no algorithm can be instance-dependent optimal. \end{proof} \fi \section{Proof for Section~\ref{sec:pessimism}} For any $\theta$, let $\mu_1$ and $n_1$ be the reward mean and sample count for the optimal arm. We first prove that ${\mathcal{E}}^*(\theta)$ is of order $1/\sqrt{n_1}$ for any $\theta$.
\begin{proposition} There exist universal constants $c_0$ and $c_1$ such that, for any $\theta \in \Theta_{\mathbf{n}}$, $c_0/\sqrt{n_1} \le {\mathcal{E}}^*(\theta) \le c_1/\sqrt{n_1}$. \label{prop:mean-est} \end{proposition} \begin{proof}[Proof of Proposition~\ref{prop:mean-est}] For any constant $c>0$, define $\theta'\in \Theta$ such that the only difference between $\theta'$ and $\theta$ is the mean for the optimal arm: $\theta'$ has $\mu_1' = \mu_1 + \frac{4c}{\sqrt{n_1}}$. For any algorithm such that $\mathbb{E}_{\theta'}\left[|\mu_1' - \nu|\right] \le \frac{c}{\sqrt{n_1}}$, we have $\Prbb{\theta'}{\nu \ge \mu_1 + \frac{2c}{\sqrt{n_1}}} \ge \frac{1}{2}$ by Markov's inequality. We now apply the fact that, for two Bernoulli distributions with parameters $p$ and $q$, if $p\ge 1/2$ then $\mathrm{KL}(p, q)\ge \frac{1}{2}\log \frac{1}{4q}$. This gives \begin{align*} \Prbb{\theta}{\nu \ge \mu_1 + \frac{2c}{\sqrt{n_1}}} \ge \frac{1}{4} e^{- \mathrm{KL}(\theta, \theta')} = \frac{1}{4} e^{-4c^2} \,. \end{align*} Therefore, we have \begin{align*} \mathbb{E}_{\theta}\left[|\mu_1 - \nu|\right] \ge \frac{2c}{\sqrt{n_1}} \Prbb{\theta}{\nu \ge \mu_1 + \frac{2c}{\sqrt{n_1}}} \ge \frac{ce^{-4c^2}}{2\sqrt{n_1}} \,. \end{align*} Now we apply the fact that the empirical mean estimator $\nu = \hat{\mu}_1$ satisfies $\mathbb{E}_{\theta}\left[|\mu_1 - \nu|\right]\le \frac{1}{\sqrt{n_1}}$ for any $\theta$, so that $\inf_{\nu} \sup_{\theta}\mathbb{E}_{\theta}\left[|\mu_1 - \nu|\right] \le \frac{1}{\sqrt{n_1}}$. Let $c_2$ be the constant in the definition of ${\mathcal{V}}^*_\mathbf{n}$; then $\frac{c_2e^{-4c_2^2}}{2\sqrt{n_1}}$ is a lower bound on ${\mathcal{E}}^*(\theta)$ for any $\theta$, since relaxing the minimax-optimality constraint can only decrease the instance-dependent lower bound.
Since the minimax value is also an upper bound on ${\mathcal{E}}^*(\theta)$, we conclude that there exist universal constants $c_0$ and $c_1$ such that, for any $\theta \in \Theta_{\mathbf{n}}$, $c_0/\sqrt{n_1} \le {\mathcal{E}}^*(\theta) \le c_1/\sqrt{n_1}$. \end{proof} \begin{proof}[Proof of Proposition~\ref{prop:weighted-minimax-lcb}] Picking $\delta = \frac{1}{\sqrt{|\mathbf{n}|}}$ for the LCB algorithm, Corollary~\ref{coro:ub-lcb-instance} gives that there exists a universal constant $c$~(which may contain the term $\log k$) such that ${\mathcal{R}}(\textnormal{LCB}, \theta) \le \frac{c \sqrt{\log |\mathbf{n}|} }{\sqrt{n_1}}$. Applying Proposition~\ref{prop:mean-est} concludes the proof. \end{proof} \begin{proof}[Proof of Proposition~\ref{prop:weighted-minimax-ucb-greedy}] Consider a sequence of counts $\mathbf{n}_1, \mathbf{n}_2,...$ with $n_2 = 1$ and $n_1=2,3,..., +\infty$. Fix $\mu_1 = \mu_2 + 0.1$ and let $\Delta=\mu_1 - \mu_2$. For the UCB algorithm, we have \begin{align*} {\mathcal{R}}(\textnormal{UCB}, \theta) & = \Delta \Prbb{\theta}{ \hat{\mu}_2 + \frac{\beta_{\delta}}{\sqrt{n_2}} \ge \hat{\mu}_1 + \frac{\beta_{\delta}}{\sqrt{n_1}} } \\ & = 0.1 \Prbb{\theta}{ \hat{\mu}_1 - \hat{\mu}_2 \le \frac{\beta_{\delta}}{\sqrt{n_2}} - \frac{\beta_{\delta}}{\sqrt{n_1}} } \\ & \ge 0.1 \Prbb{\theta}{ \hat{\mu}_1 - \hat{\mu}_2 \le \left(1 - \frac{1}{\sqrt{2}}\right) \beta_\delta } \\ & \ge 0.1 \Prbb{\theta}{ \hat{\mu}_1 - \hat{\mu}_2 \le \left(1 - \frac{1}{\sqrt{2}}\right) } \\ & \ge 0.1 \Prbb{\theta}{ \hat{\mu}_1 - \hat{\mu}_2 \le \Delta } \\ & = 0.05 \end{align*} where we applied the fact that $\beta_\delta \ge 1$ for any $\delta \in (0, 1)$ and that the random variable $\hat{\mu}_1 - \hat{\mu}_2$ follows a Gaussian distribution with mean $\Delta$.
Applying Proposition~\ref{prop:mean-est} gives \begin{align*} \limsup_{j\rightarrow\infty} \sup_{\theta\in\Theta_{\mathbf{n}_j}}{\frac{{\mathcal{R}}(\textnormal{UCB},\theta)}{\sqrt{\log |\mathbf{n}_j|}\cdot {\mathcal{E}}^*(\theta)}} \ge \limsup_{j\rightarrow\infty} \frac{0.05 \sqrt{j + 1} }{c_1 \sqrt{\log (j + 2)}} = + \infty \end{align*} For the greedy algorithm, we have \begin{align*} {\mathcal{R}}(\textnormal{greedy}, \theta) = 0.1 \Prbb{\theta}{ \hat{\mu}_1 - \hat{\mu}_2 \le 0 } \,. \end{align*} The random variable $\hat{\mu}_1 - \hat{\mu}_2$ follows a Gaussian distribution with mean $\Delta > 0$ and variance $\frac{1}{n_1} + \frac{1}{n_2} \ge 1$. Since shrinking the variance of $\hat{\mu}_1 - \hat{\mu}_2$ will lower the probability $\Prbb{\theta}{ \hat{\mu}_1 - \hat{\mu}_2 \le 0 }$, we have ${\mathcal{R}}(\textnormal{greedy}, \theta) \ge 0.1\Phi(-0.1)$ where $\Phi$ is the CDF for the standard normal distribution. Now using a similar statement as for the UCB algorithm gives the result. \end{proof} \newcommand{\TV}[1]{\left\Vert #1\right\Vert_{\textrm{TV}}} \subsection{Proof of \cref{thm:mom}} Let $\theta \in \Theta_n$ be an instance and abbreviate $[A, B] = I(X)$, so that $A$ and $B$ are random variables. Let $m$ and $\alpha \le \max_a \mu_a$ be such that \begin{align*} \sum_{a=1}^k 1\left(\mu_a + \Delta_a \geq \alpha\right) = m\,, \end{align*} where $\Delta_a = \sqrt{\log(\epsilon m)/n_a}$ for a suitable $\epsilon \in (0,1)$ to be chosen subsequently. To simplify the notation, assume without loss of generality that the arms are ordered in such a way that \begin{align*} \mu_a + \Delta_a \geq \alpha \text{ for all } a \leq m\,.
\end{align*} Given $b \in [m]$, let $\mu^b \in \mathbb{R}^k$ be the vector with \begin{align*} \mu^b_a = \begin{cases} \mu_a & \text{if } a \neq b \\ \mu_a + \Delta_a & \text{otherwise}\,. \end{cases} \end{align*} Then, by the assumption on validity, for any $a \in [m]$, ${\mathbb{P}}_a(B \geq \alpha) \geq 1 - \delta$, which implies that \begin{align*} {\mathbb{P}}(B \geq \alpha) &\geq \frac{1}{m} \sum_{a=1}^m {\mathbb{P}}_a(B(X) \geq \alpha) - \TV{{\mathbb{P}} - \frac{1}{m} \sum_{a=1}^m {\mathbb{P}}_a} \geq 1 - \delta - \TV{{\mathbb{P}} - \frac{1}{m} \sum_{a=1}^m {\mathbb{P}}_a}\,, \end{align*} where $\TV{\cdot}$ is the usual total variation norm. Letting ${\mathbb{P}}' = \frac{1}{m} \sum_{a=1}^m {\mathbb{P}}_a$ and $\epsilon \in (0,1)$, the total variation distance satisfies \begin{align*} \TV{{\mathbb{P}} - {\mathbb{P}}'} &= \mathbb{E}\left[\left(1 - \frac{d {\mathbb{P}}'}{d {\mathbb{P}}}\right) 1\left(\frac{d {\mathbb{P}}'}{d {\mathbb{P}}} \leq 1\right)\right] \leq 1 - \frac{4}{5} {\mathbb{P}}\left(\frac{d {\mathbb{P}}'}{d {\mathbb{P}}} \geq \frac{4}{5} \right) \,. \end{align*} Let $\bar M = \frac{1}{m} \sum_{a=1}^m M_a$ with \begin{align*} M_a = \exp\left(\frac{n_a}{2}\left((\hat \mu_a - \mu_a)^2 - (\hat \mu_a - \tilde \mu_a)^2\right)\right) = \exp\left(n_a \Delta_a (\hat \mu_a - \mu_a) - \frac{n_a \Delta_a^2}{2}\right)\,, \quad \text{where } \tilde \mu_a := \mu_a + \Delta_a\,.
\end{align*} Then, by the Paley-Zygmund inequality and the fact that $\mathbb{E}[M_a] = 1$, \begin{align*} {\mathbb{P}}\left(\frac{d {\mathbb{P}}'}{d {\mathbb{P}}} \geq \frac{4}{5}\right) &= {\mathbb{P}}\left(\bar M \geq \frac{4}{5} \mathbb{E}[\bar M] \right) \geq \frac{\mathbb{E}[\bar M]^2/25}{\mathrm{Var}[\bar M] + \mathbb{E}[\bar M]^2/25} \\ &= \frac{1/25}{\mathrm{Var}[\bar M] + 1/25} = \frac{1/25}{1/25 + \frac{1}{m^2} \sum_{a=1}^m \mathrm{Var}(M_a)} \end{align*} Note that $M_a$ has a log-normal distribution and hence $\mathrm{Var}[M_a] = \exp(n_a \Delta_a^2) - 1$. Therefore, \begin{align*} {\mathbb{P}}\left(\frac{d {\mathbb{P}}'}{d {\mathbb{P}}} \geq \frac{4}{5}\right) \geq \frac{1/25}{1/25 + \frac{1}{m^2} \sum_{a=1}^m \exp(n_a \Delta_a^2)} \geq \frac{15}{16}\,, \end{align*} where in the final inequality we chose $\Delta_a = \sqrt{\log(c m) / n_a}$ for a suitable constant $c > 0$. Therefore $\TV{{\mathbb{P}} - {\mathbb{P}}'} \leq 1/4$, which implies that \begin{align*} {\mathbb{P}}(B \geq \alpha) \geq \frac{3}{4} - \delta \,. \end{align*} For the lower bound, let $\alpha$ be as stated and let $\mu'$ be the vector of means with \begin{align*} \mu'_b = \begin{cases} \alpha &\text{if } \mu_b \geq \alpha \\ \mu_b &\text{otherwise}\,. \end{cases} \end{align*} Since the maximum mean of $\mu'$ is $\alpha$, as before, by the validity of the algorithm, \begin{align*} {\mathbb{P}}'(A(X) \geq \alpha) \leq \delta\,. \end{align*} On the other hand, the relative entropy between ${\mathbb{P}}$ and ${\mathbb{P}}'$ satisfies \begin{align*} D_{\mathrm{KL}}({\mathbb{P}}, {\mathbb{P}}') = \sum_{a=1}^k n_a (\mu_a - \mu'_a)^2 / 2 \leq 1 \,. \end{align*} Since $\TV{{\mathbb{P}} - {\mathbb{P}}'} \leq \sqrt{D_{\mathrm{KL}}({\mathbb{P}}, {\mathbb{P}}')/2}$, \begin{align*} {\mathbb{P}}(A \leq \alpha) \geq {\mathbb{P}}'(A \leq \alpha) - \TV{{\mathbb{P}} - {\mathbb{P}}'} \geq 1 - \delta - \sqrt{1/2}\,.
\end{align*} A naive simplification and a union bound then show that \begin{align*} {\mathbb{P}}(I \subset [a(\theta), b(\theta)]) \geq \frac{1}{25} - \delta\,, \end{align*} which yields the result. \end{appendix} \end{document}
\begin{document} \title{On the zeros of Eisenstein series for $\Gamma_0^{*} (2)$ and $\Gamma_0^{*} (3)$} \author{Tsuyoshi Miezaki, Hiroshi Nozaki, Junichi Shigezumi} \maketitle \begin{center} Graduate School of Mathematics, Kyushu University\\ Hakozaki 6-10-1 Higashi-ku, Fukuoka, 812-8581 Japan\\ \quad \end{center} \begin{quote} {\small\bfseries Abstract.} We locate all of the zeros of the Eisenstein series associated with the Fricke groups $\Gamma_0^{*}(2)$ and $\Gamma_0^{*}(3)$ in their fundamental domains by applying and expanding the method of F. K. C. Rankin and H. P. F. Swinnerton-Dyer (``{\it On the zeros of Eisenstein series}'', 1970).\\ \noindent {\small\bfseries Key Words and Phrases.} Eisenstein series, Fricke group, locating zeros, modular forms.\\ \noindent 2000 {\it Mathematics Subject Classification}. Primary 11F11; Secondary 11F12.\\ \quad \end{quote} \section{Introduction} Let $k \geqslant 4$ be an even integer. For $z \in \mathbb{H} := \{z \in \mathbb{C} \: ; \: Im(z)>0 \}$, let \begin{equation} E_k(z) := \frac{1}{2} \sum_{(c,d)=1}(c z + d)^{- k} \label{def:e} \end{equation} be the {\it Eisenstein series} associated with $\text{SL}_2(\mathbb{Z})$. Moreover, let \begin{equation*} \mathbb{F} := \left\{|z| \geqslant 1, \: - 1 / 2 \leqslant Re(z) \leqslant 0\right\} \cup \left\{|z| > 1, \: 0 \leqslant Re(z) < 1 / 2 \right\} \end{equation*} be the {\it standard fundamental domain} for $\text{SL}_2(\mathbb{Z})$. F. K. C. Rankin and H. P. F. Swinnerton-Dyer considered the problem of locating the zeros of $E_k(z)$ in $\mathbb{F}$ \cite{RSD}. They proved that $n$ zeros are on the arc $A := \{z \in \mathbb{C} \: ; \: |z|=1, \: \pi / 2 < Arg(z) < 2 \pi / 3\}$ for $k = 12 n + s \: (s = 4, 6, 8, 10, 0, \text{ and } 14)$. They also said in the last part of the paper, ``This method can equally well be applied to Eisenstein series associated with subgroups of the modular group.'' However, it seems unclear how widely this claim holds.
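For readers who want to experiment, the defining series (\ref{def:e}) converges quickly for moderate $k$, and a direct truncation already exhibits the reality of $e^{ik\theta/2}E_k(e^{i\theta})$ on the unit circle that the Rankin--Swinnerton-Dyer argument exploits. In the sketch below the cutoff $M$ and the sample point $\theta$ are arbitrary choices; the final tolerance $1.04$ reflects the remainder bound (approximately $1.0356$ at $k=12$) established in that argument.

```python
import cmath
import math

# Truncation of E_k(z) = (1/2) * sum over coprime (c, d) of (c*z + d)^(-k),
# evaluated at z = e^{i*theta}; M is an arbitrary cutoff (the tail is
# negligible at k = 12).
k, M = 12, 30
theta = 0.55 * math.pi            # an arbitrary point with pi/2 < theta < 2*pi/3
z = cmath.exp(1j * theta)

E = 0.5 * sum(
    (c * z + d) ** (-k)
    for c in range(-M, M + 1)
    for d in range(-M, M + 1)
    if (c, d) != (0, 0) and math.gcd(abs(c), abs(d)) == 1
)
F = cmath.exp(1j * k * theta / 2) * E   # F_k(theta) = e^{ik*theta/2} E_k(e^{i*theta})

assert abs(F.imag) < 1e-9               # F_k is (numerically) real on [0, pi]
# F_k(theta) = 2*cos(k*theta/2) + R_1 with |R_1| bounded by ~1.0357 at k = 12
assert abs(F.real - 2 * math.cos(k * theta / 2)) < 1.04
```

The symmetric truncation region pairs each $(c,d)$ with $(d,c)$, which is exactly why the imaginary parts cancel.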
Here, we consider the same problem for Fricke groups $\Gamma_0^{*}(2)$ and $\Gamma_0^{*}(3)$ (See \cite{K}, \cite{Q}), which are commensurable with $\text{SL}_2(\mathbb{Z})$. For a fixed prime $p$, we define the following: \begin{equation} \Gamma_0^{*}(p) := \Gamma_0(p) \cup \Gamma_0(p) \: W_p, \end{equation} where \begin{equation} \Gamma_0(p) := \left\{ \left(\begin{smallmatrix} a & b \\ c & d \end{smallmatrix}\right) \in \text{SL}_2(\mathbb{Z}) \: ; \: c \equiv 0\pmod{p}\right\}, \quad W_p := \left(\begin{smallmatrix} 0 & -1 / \sqrt{p} \\ \sqrt{p} & 0 \end{smallmatrix}\right). \end{equation} Let \begin{equation} E_{k, p}^{*}(z) := \frac{1}{p^{k / 2}+1} \left(p^{k / 2} E_k(p z) + E_k(z) \right) \label{def:e*} \end{equation} be the Eisenstein series associated with $\Gamma_0^{*}(p)$. The regions \begin{align*} \mathbb{F}^{*}(2) &:= \left\{|z| \geqslant 1 / \sqrt{2}, \: - 1 / 2 \leqslant Re(z) \leqslant 0\right\} \bigcup \left\{|z| > 1 / \sqrt{2}, \: 0 \leqslant Re(z) < 1 / 2 \right\},\\ \mathbb{F}^{*}(3) &:= \left\{|z| \geqslant 1 / \sqrt{3}, \: - 1 / 2 \leqslant Re(z) \leqslant 0\right\} \bigcup \left\{|z| > 1 / \sqrt{3}, \: 0 \leqslant Re(z) < 1 / 2 \right\} \end{align*} are fundamental domains for $\Gamma_0^{*}(2)$ and $\Gamma_0^{*}(3)$, respectively. Define $A_2^{*} := \{z \in \mathbb{C} \: ; \: |z| = 1 / \sqrt{2}, \: \pi / 2 < Arg(z) < 3 \pi / 4\}$, and $A_3^{*} := \{z \in \mathbb{C} \: ; \: |z| = 1 / \sqrt{3}, \: \pi / 2 < Arg(z) < 5 \pi / 6\}$. We then have $\overline{A_2^{*}} = A_2^{*} \cup \{ i / \sqrt{2}, e^{i (3 \pi / 4)} / \sqrt{2} \}$ and $\overline{A_3^{*}} = A_3^{*} \cup \{ i / \sqrt{3}, e^{i (5 \pi / 6)} / \sqrt{3} \}$. In the present paper, we will apply the method of F. K. C. Rankin and H. P. F. Swinnerton-Dyer (RSD Method) to the Eisenstein series associated with $\Gamma_0^{*}(2)$ and $\Gamma_0^{*}(3)$. We will prove the following theorems: \begin{theorem} Let $k \geqslant 4$ be an even integer. 
All of the zeros of $E_{k, 2}^{*}(z)$ in $\mathbb{F}^{*}(2)$ are on the arc $\overline{A_2^{*}}$. \label{th-g0s2} \end{theorem} \begin{theorem} Let $k \geqslant 4$ be an even integer. All of the zeros of $E_{k, 3}^{*}(z)$ in $\mathbb{F}^{*}(3)$ are on the arc $\overline{A_3^{*}}$. \label{th-g0s3} \end{theorem} \section{RSD Method}\label{sect-1} At the beginning of the proof in \cite{RSD}, F. K. C. Rankin and H. P. F. Swinnerton-Dyer considered the following: \begin{equation} F_k(\theta) := e^{i k \theta / 2} E_k\left(e^{i \theta}\right), \label{def:f} \end{equation} which is real for all $\theta \in [0, \pi]$. Considering the four terms with $c^2 + d^2 = 1$, they proved that \begin{equation} F_k(\theta) = 2 \cos(k \theta / 2) + R_1, \label{eqn-fkt} \end{equation} where $R_1$ is the rest of the series ({\it i.e.} $c^2 + d^2 > 1$). Moreover they showed \begin{equation} |R_1| \leqslant 1 + \left(\frac{1}{2}\right)^{k / 2} + 4 \left(\frac{2}{5}\right)^{k / 2} + \frac{20 \sqrt{2}}{k - 3} \left(\frac{9}{2}\right)^{(3 - k) / 2}. \label{r1bound} \end{equation} They computed the value of the right-hand side of (\ref{r1bound}) at $k = 12$ to be approximately $1.03562$, which is monotonically decreasing in $k$. Thus, they could show that $|R_1| < 2$ for all $k \geqslant 12$. If $\cos (k \theta / 2)$ is $+1$ or $-1$, then $F_k(2 m \pi / k)$ is positive or negative, respectively. In order to determine the location of all of the zeros of $E_k(z)$ in $\mathbb{F}$, we need the {\it valence formula}: \begin{proposition}[valence formula] Let $f$ be a modular function of weight $k$ for $\text{\upshape SL}_2(\mathbb{Z})$, which is not identically zero. 
We have \begin{equation} v_{\infty}(f) + \frac{1}{2} v_{i}(f) + \frac{1}{3} v_{\rho} (f) + \sum_{\begin{subarray}{c} p \in \text{\upshape SL}_2(\mathbb{Z}) \setminus \mathbb{H} \\ p \ne i, \; \rho\end{subarray}} v_p(f) = \frac{k}{12}, \end{equation} where $v_p(f)$ is the order of $f$ at $p$, and $\rho := e^{i (2 \pi / 3)}$ $($See \cite{S}$)$. \label{prop-vf} \end{proposition} Write $m(k) := \left\lfloor \frac{k}{12} - \frac{t}{4} \right\rfloor$, where $t = 0 \text{ or } 2$, such that $t \equiv k \pmod{4}$. Then, $k = 12 m(k) + s \: (s = 4, 6, 8, 10, 0, \text{ and } 14)$. As F. K. C. Rankin and H. P. F. Swinnerton-Dyer observed, the fact that $E_k(z)$ has $m(k)$ zeros on the arc $A$, the valence formula, and Remark \ref{prop-bd_ord_1} below, imply that all of the zeros of $E_k(z)$ in the standard fundamental domain for $\text{SL}_2(\mathbb{Z})$ are on $A \cup \{ i, \rho \}$ for every even integer $k \geqslant 4$. \begin{remark} Let $k \geqslant 4$ be an even integer. We have \begin{center} \begin{tabular}{rcccrcc} $k \pmod{12}$ & $v_{i}(E_k)$ & $v_{\rho}(E_k)$ & \qquad & $k \pmod{12}$ & $v_{i}(E_k)$ & $v_{\rho}(E_k)$\\ \hline $0$ & $0$ & $0$ && $6$ & $1$ & $0$\\ $2$ & $1$ & $2$ && $8$ & $0$ & $2$\\ $4$ & $0$ & $1$ && $10$ & $1$ & $1$\\ \hline \end{tabular} \end{center}\label{prop-bd_ord_1} \end{remark} \section{$\Gamma_0^{*}(2)$ (Proof of Theorem \ref{th-g0s2})}\label{sect-2} \subsection{Preliminaries} We define \begin{equation} F_{k, 2}^{*}(\theta) := e^{i k \theta / 2} E_{k, 2}^{*}\left(e^{i \theta} / \sqrt{2}\right). \label{def:f*2} \end{equation} Before proving Theorem \ref{th-g0s2}, we consider an expansion of $F_{k, 2}^{*}(\theta)$.
By the definition of $E_k(z), E_{k, 2}^{*}(z)$ ({\it cf.} (\ref{def:e}), (\ref{def:e*})), we have \begin{multline*} 2 (2^{k / 2}+1) e^{i k \theta / 2} E_{k, 2}^{*}\left(e^{i \theta} / \sqrt{2}\right)\\ \quad = 2^{k / 2} \sum_{(c,d)=1}(c e^{-i \theta / 2} + \sqrt{2} d e^{i \theta / 2})^{- k} + 2^{k / 2} \sum_{(c,d)=1}(c e^{i \theta / 2} + \sqrt{2} d e^{-i \theta / 2})^{- k}. \end{multline*} Now, $(c,d) = 1$ is split in two cases, namely, $c$ is {\it odd} or $c$ is {\it even}. We consider the case in which $c$ is {\it even}. We have \begin{align*} 2^{k / 2} \sum_{\begin{subarray}{c} (c,d)=1\\ c:even\end{subarray}}(c e^{-i \theta / 2} + \sqrt{2} d e^{i \theta / 2})^{- k} &= \sum_{\begin{subarray}{c} (c,d)=1\\ d:odd\end{subarray}}(\sqrt{2} c' e^{-i \theta / 2} + d e^{i \theta / 2})^{- k} \quad (c = 2 c')\\ &= \sum_{\begin{subarray}{c} (c,d)=1\\ c:odd\end{subarray}}(c e^{i \theta / 2} + \sqrt{2} d e^{-i \theta / 2})^{- k}. \end{align*} Similarly, \begin{equation*} 2^{k / 2} \sum_{\begin{subarray}{c} (c,d)=1\\ c:even\end{subarray}}(c e^{i \theta / 2} + \sqrt{2} d e^{-i \theta / 2})^{- k} = \sum_{\begin{subarray}{c} (c,d)=1\\ c:odd\end{subarray}}(c e^{-i \theta / 2} + \sqrt{2} d e^{i \theta / 2})^{- k}. \end{equation*} Thus, we can write the following: \begin{equation} F_{k, 2}^{*}(\theta) = \frac{1}{2} \sum_{\begin{subarray}{c} (c,d)=1\\ c:odd\end{subarray}}(c e^{i \theta / 2} + \sqrt{2} d e^{-i \theta / 2})^{- k} + \frac{1}{2} \sum_{\begin{subarray}{c} (c,d)=1\\ c:odd\end{subarray}}(c e^{-i \theta / 2} + \sqrt{2} d e^{i \theta / 2})^{- k}. \end{equation} Hence, we use this expression as a definition. In the last part of this section, we compare the two series in this expression. Note that for any pair $(c,d)$, $(c e^{i \theta / 2} + \sqrt{2} d e^{-i \theta / 2})^{- k}$ and $(c e^{-i \theta / 2} + \sqrt{2} d e^{i \theta / 2})^{- k}$ are conjugates of each other. 
Thus, we have the following lemma: \begin{lemma} $F_{k, 2}^{*}(\theta)$ is real, for all $\theta \in [0, \pi]$. \label{lemma-real2} \end{lemma} \subsection{Application of the RSD Method} We will apply the method of F. K. C. Rankin and H. P. F. Swinnerton-Dyer (RSD Method) to the Eisenstein series associated with $\Gamma_0^{*}(2)$. Write $N := c^2 + d^2$. First, we consider the case of $N = 1$. Because $c$ is odd, there are two cases, $(c,d)=(1,0)$ and $(c,d)=(-1,0)$. Then, we can write: \begin{equation} F_{k, 2}^{*}(\theta) = 2 \cos(k \theta /2) + R_2^{*}, \end{equation} where $R_2^{*}$ denotes the remaining terms of the series. Now, \begin{equation*} |R_2^{*}| \leqslant \sum_{\begin{subarray}{c} (c,d)=1\\ c:odd, \; N > 1\end{subarray}}|c e^{i \theta / 2} + \sqrt{2} d e^{-i \theta / 2}|^{- k}. \end{equation*} Let $v_{k}(c,d,\theta) := |c e^{i \theta / 2} + \sqrt{2} d e^{-i \theta / 2}|^{- k}$; then $v_{k}(c,d,\theta) = 1 / \left( c^2 + 2 d^2 + 2 \sqrt{2} c d \cos\theta \right)^{k / 2}$, and $v_{k}(c,d,\theta)=v_{k}(-c,-d,\theta)$. Next, we will consider the following three cases, namely, $N = 2, 5$, and $N \geqslant 10$. Considering $\theta \in [\pi / 2, 3 \pi / 4]$, we have the following: \begin{align*} &\text{When $N = 2$,}& v_k(1, 1, \theta) &\leqslant 1, \quad &v_k(1, - 1, \theta) &\leqslant (1 / 3)^{k / 2}.\\ &\text{When $N = 5$,}& v_k(1, 2, \theta) &\leqslant (1 / 5)^{k / 2}, &v_k(1, - 2, \theta) &\leqslant (1 / 3)^{k}.\\ &\text{When $N \geqslant 10$,} \end{align*} \begin{align*} |c e^{i \theta / 2} \pm \sqrt{2} d e^{-i \theta / 2}|^2 &\geqslant c^2 + 2 d^2 - 2 \sqrt{2} |c d| |\cos\theta|\\ &\geqslant (c^2 + d^2) / 3 = N / 3, \end{align*} and the remaining problem concerns the number of terms with $c^2 + d^2 = N$. Because $c$ is odd, $|c| = 1, 3, ... , 2 N' - 1 \leqslant N^{1/2}$, so the number of $|c|$ is not more than $(N^{1/2}+1)/ 2$.
Thus, the number of terms with $c^2 + d^2 = N$ is not more than $2 (N^{1/2}+1) \leqslant 3 N^{1/2}$, for $N \geqslant 5$. Then, \begin{align*} \sum_{\begin{subarray}{c} (c,d)=1\\ c:odd, \; N \geq 10\end{subarray}}|c e^{i \theta / 2} + \sqrt{2} d e^{-i \theta / 2}|^{- k} &\leqslant \sum_{N=10}^{\infty} 3 N^{1/2} \left(\frac{N}{3}\right)^{- k / 2}\\ &\leqslant \frac{18 \sqrt{3}}{k-3} \left(\frac{1}{3}\right)^{(k-3)/2} = \frac{162}{k-3} \left(\frac{1}{3}\right)^{k / 2}. \end{align*} Thus, \begin{equation} |R_2^{*}| \leqslant 2 + 2 \left(\frac{1}{3}\right)^{k / 2} + 2 \left(\frac{1}{5}\right)^{k / 2} + 2 \left(\frac{1}{3}\right)^{k} + \frac{162}{k-3} \left(\frac{1}{3}\right)^{k / 2}. \label{r*2bound0} \end{equation} Recalling the previous section (RSD Method), we want to show that $|R_2^{*}| < 2$. However, the right-hand side is greater than 2, so this bound is not good enough. The terms with $(c,d) = \pm (1,1)$ alone contribute the bound 2. We will consider an expansion of the method in the following subsections. \subsection{Expansion of the RSD Method (1)} In the previous subsection, we could not obtain a good bound for $|R_2^{*}|$ because of the terms with $(c,d) = \pm (1,1)$. Note that ``$v_k(1,1,\theta) = 1 \Leftrightarrow \theta = 3 \pi / 4$''. Furthermore, ``$v_k(1,1,\theta) < 1 \Leftrightarrow \theta < 3 \pi / 4$''. Therefore, we expect that a good bound can be obtained for $\theta \in [\pi / 2, 3 \pi / 4 - x]$ for small $x > 0$. However, if $k = 8 n$, then we need $|R_2^{*}| < 2$ for $\theta = 3 \pi / 4$ in this method. In the next subsection, we will consider the case in which $k = 8 n, \theta = 3 \pi / 4$. Define $m_2(k) := \left\lfloor \frac{k}{8} - \frac{t}{4} \right\rfloor$, where $t=0, 2$ is chosen so that $t \equiv k \pmod{4}$, and $\lfloor n \rfloor$ is the largest integer not more than $n$. Let $k = 8 n + s \: (n = m_2(k), \: s = 4, 6, 0, \text{ and } 10)$. We may assume that $k \geqslant 8$. The first step is to consider how small $x$ should be.
We consider each of the cases $s = 4, 6, 0, \text{ and } 10$. When $s = 4$, for $\pi / 2 \leqslant \theta \leqslant 3 \pi / 4$, $(2 n + 1) \pi \leqslant k \theta / 2 \; (= (4 n + 2) \theta) \leqslant (3 n + 1) \pi + \pi / 2$. So the last integer point ({\it i.e.} where $\cos (k \theta / 2) = \pm 1$) is $k \theta / 2 = (3 n + 1) \pi$, that is, $\theta = 3 \pi / 4 - \pi / k$. Similarly, when $s = 6 \text{ and } 10$, the last integer points are $\theta = 3 \pi / 4 - \pi / 2 k, \: 3 \pi / 4 - 3 \pi / 2 k$, respectively. When $s = 0$, the second to the last integer point is $\theta = 3 \pi / 4 - \pi / k$. Thus, we need $x \leqslant \pi / 2 k$. \begin{lemma} Let $k \geqslant 8$. For all $\theta \in [\pi / 2, 3 \pi / 4 - x] \; (x = \pi / 2 k)$, $|R_2^{*}| < 2$. \label{lemma-r*2} \end{lemma} \begin{proof} Let $k \geqslant 8$ and $x = \pi / 2 k$, then $0 \leqslant x \leqslant \pi / 16$. If $0 \leqslant x \leqslant \pi / 16$, then $1 - \cos x \geqslant \frac{31}{64} x^2$. \begin{align*} |e^{i \theta / 2} + \sqrt{2} e^{-i \theta / 2}|^2 &\geqslant 3 + 2 \sqrt{2} \cos(3 \pi / 4 - x)= 1 + 2 (1 - \cos x) + 2 \sin x\\ &\geqslant 1 + 4 (1 - \cos x) \geqslant 1 + (31 / 16) x^2. \end{align*} \begin{equation*} |e^{i \theta / 2} + \sqrt{2} e^{-i \theta / 2}|^k \geqslant \left(1 + (31 / 16) x^2\right)^{k / 2} \geqslant 1 + (31 / 4) x^2. \quad (k \geqslant 8) \end{equation*} \begin{equation*} v_k(1, 1, \theta) \leqslant \frac{1}{1 + (31 / 4) x^2} \leqslant 1 - \frac{31 \times 256}{31 \pi^2 + 1024} x^2. \end{equation*} Thus, \begin{equation*} 2 v_k(1, 1, \theta) \leqslant 2 - (265 / 9) / k^2. \end{equation*} Furthermore, \begin{equation*} 2 \left(\frac{1}{3}\right)^{k / 2} + 2 \left(\frac{1}{5}\right)^{k / 2} + 2 \left(\frac{1}{3}\right)^{k} + \frac{162}{k-3} \left(\frac{1}{3}\right)^{k / 2} \leqslant 35 \left(\frac{1}{3}\right)^{k / 2} \quad (k \geqslant 8). \end{equation*} Then, we have \begin{equation*} |R_2^{*}| \leqslant 2 - \frac{265}{9} \frac{1}{k^2} + 35 \left(\frac{1}{3}\right)^{k / 2}.
\end{equation*} Next, if we can show that \begin{equation*} 35 \left(\frac{1}{3}\right)^{k / 2} < \frac{265}{9} \frac{1}{k^2} \quad \text{or, equivalently,} \quad \frac{3^{k / 2}}{35} > \frac{9}{265} k^2, \end{equation*} then the bound is less than $2$. The proof will thus be complete. Let $f(x) := (1 / 35) 3^{x/2} - \frac{9}{265} x^2$. Then, $f'(x) = (\log 3 / 70) 3^{x/2} - \frac{18}{265} x$, $f''(x) = ((\log 3)^2 / 140) 3^{x/2} - \frac{18}{265}$. First, $f''$ is monotonically increasing for $x \geqslant 8$, and $f''(8) = 0.63038... > 0$, so $f'' > 0$ for $x \geqslant 8$. Second, $f'$ is monotonically increasing for $x \geqslant 8$, and $f'(8) = 0.72785... > 0$, so $f' > 0$ for $x \geqslant 8$. Finally, $f$ is monotonically increasing for $x \geqslant 8$, and $f(8) = 0.14070... > 0$, so $f > 0$ for $x \geqslant 8$. \end{proof} \subsection{Expansion of the RSD Method (2)} For the case of ``$k = 8 n, \theta = 3 \pi / 4$'', we need the following lemma: \begin{lemma} Let $k$ be an integer such that $k = 8 n$ for some $n \in \mathbb{N}$. If $n$ is even, then $F_{k, 2}^{*}(3 \pi / 4) > 0$. On the other hand, if $n$ is odd, then $F_{k, 2}^{*}(3 \pi / 4) < 0$. \label{lemma-8n} \end{lemma} \begin{proof} Let $k = 8 n$ ($n \geqslant 1$). By the definition of $E_{k, 2}^{*}(z), F_{k, 2}^{*}(z)$ ({\it cf.} (\ref{def:e*}), (\ref{def:f*2})), we have \begin{equation*} F_{k, 2}^{*}(3 \pi / 4) = \frac{e^{i 3 (k / 8) \pi}}{2^{k / 2}+1} \left( 2^{k / 2} E_k(- 1 + i) + E_k\left(\frac{- 1 + i}{2}\right) \right). \end{equation*} By the {\it transformation rule} for $\text{SL}_2(\mathbb{Z})$, \begin{equation*} E_k(- 1 + i) = E_k(i), \quad E_k\left( (- 1 + i) / 2 \right) = (1 + i)^k E_k(1 + i) = 2^{k / 2} E_k(i). \end{equation*} Then, \begin{equation} F_{8n,2}^{*}(3 \pi / 4) = 2 e^{i n \pi} \frac{2^{4n}}{2^{4n} + 1} F_{8n}(\pi / 2), \label{eq-2ex2} \end{equation} where $\frac{2^{4n}}{2^{4n} + 1} > 0$, $F_{8n}(\pi / 2) = 2 \cos(2 n \pi) + R_1 > 0$.
The question is then: ``Which holds, $F_k(\pi / 2) < 0$ or $F_k(\pi / 2) > 0$?'' F. K. C. Rankin and H. P. F. Swinnerton-Dyer showed (\ref{eqn-fkt}) and (\ref{r1bound}) \cite{RSD}. They then proved that $|R_1| < 2$ for $k \geqslant 12$; their argument required this only for $k \geqslant 12$. Now we need $|R_1| < 2$ for $k \geqslant 8$. The value of the right-hand side of (\ref{r1bound}) at $k = 8$ is $1.29658... < 2$, and the right-hand side is monotonically decreasing in $k$. Thus, we can show \begin{equation} |R_1| < 2 \qquad \text{for all} \quad k \geqslant 8. \label{r1newbound} \end{equation} Then, the sign ($\pm$) of $F_{k, 2}^{*}(3 \pi / 4)$ is that of $e^{i n \pi}$. Thus, the proof is complete. \end{proof} We have thus proved that $E_{k, 2}^{*}(z)$ has $m_2(k)$ zeros on the arc $A_2^{*}$. In order to determine the location of all of the zeros of $E_{k, 2}^{*}(z)$ in $\mathbb{F}^{*}(2)$, we need the valence formula for $\Gamma_0^{*}(2)$: \begin{proposition} Let $f$ be a modular function of weight $k$ for $\Gamma_0^{*}(2)$, which is not identically zero. We have \begin{equation} v_{\infty}(f) + \frac{1}{2} v_{i / \sqrt{2}}(f) + \frac{1}{4} v_{\rho_2} (f) + \sum_{\begin{subarray}{c} p \in \Gamma_0^{*}(2) \backslash \mathbb{H} \\ p \ne i / \sqrt{2}, \; \rho_2\end{subarray}} v_p(f) = \frac{k}{8}, \end{equation} where $\rho_2 := e^{i (3 \pi / 4)} \big/ \sqrt{2}$. \label{prop-vf-g0s2} \end{proposition} The proof of this proposition is similar to that for Proposition \ref{prop-vf} (see \cite{S}). If $k \equiv 4, 6, \text{ or } 0 \pmod{8}$, then $k / 8 - m_2(k) < 1$. Thus, all of the zeros of $E_{k, 2}^{*}(z)$ in $\mathbb{F}^{*}(2)$ are on the arc $\overline{A_2^{*}}$. On the other hand, if $k \equiv 2 \pmod{8}$, then we have $E_{k, 2}^{*}(i / \sqrt{2}) = i^k E_{k, 2}^{*}(i / \sqrt{2})$ by the transformation rule for $\Gamma_0^{*}(2)$, and hence $E_{k, 2}^{*}(i / \sqrt{2}) = 0$. Then, we have $k / 8 - m_2(k) - v_{i / \sqrt{2}}(E_{k, 2}^{*}) / 2 < 1$.
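Both inequalities used above can be checked mechanically. The following Python snippet (a sanity check added for illustration; it is not part of the paper) computes $m_2(k)$ from its definition and verifies the two bounds for all even $8 \leqslant k \leqslant 200$:

```python
import math

def m2(k):
    """m_2(k) = floor(k/8 - t/4), where t in {0, 2} satisfies t = k (mod 4)."""
    t = k % 4  # k is even, so t is 0 or 2
    return math.floor(k / 8 - t / 4)

for k in range(8, 201, 2):
    if k % 8 in (4, 6, 0):
        assert k / 8 - m2(k) < 1
    else:  # k = 2 (mod 8): E*_{k,2} vanishes at i/sqrt(2), contributing 1/2
        assert k / 8 - m2(k) - 1 / 2 < 1
```

The residue of $k$ modulo $8$ determines the defect $k/8 - m_2(k)$, which never reaches $1$ once the elliptic-point contribution is accounted for.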
In conclusion, for every even integer $k \geqslant 4$, all of the zeros of $E_{k, 2}^{*}(z)$ in $\mathbb{F}^{*}(2)$ are on the arc $\overline{A_2^{*}}$. \begin{remark}$($See Proposition \ref{prop-bd_ord_2}$)$ Let $k \geqslant 4$ be an even integer. We have \begin{center} \begin{tabular}{rcccrcc} $k \pmod{8}$ & $v_{i / \sqrt{2}}(E_{k, 2}^{*})$ & $v_{\rho_2}(E_{k, 2}^{*})$ & \qquad & $k \pmod{8}$ & $v_{i / \sqrt{2}}(E_{k, 2}^{*})$ & $v_{\rho_2}(E_{k, 2}^{*})$\\ \hline $0$ & $0$ & $0$ && $4$ & $0$ & $2$\\ $2$ & $1$ & $3$ && $6$ & $1$ & $1$\\ \hline \end{tabular} \end{center} \end{remark}\quad \section{$\Gamma_0^{*}(3)$ (Proof of Theorem \ref{th-g0s3})}\label{sect-3} \subsection{Preliminaries} We define \begin{equation} F_{k, 3}^{*}(\theta) := e^{i k \theta / 2} E_{k, 3}^{*}\left(e^{i \theta} / \sqrt{3}\right). \label{def:f*3} \end{equation} Similar to the case of $F_{k, 2}^{*}(\theta)$, we can write the following: \begin{equation} F_{k, 3}^{*}(\theta) = \frac{1}{2} \sum_{\begin{subarray}{c} (c,d)=1\\ 3 \nmid c\end{subarray}}(c e^{i \theta / 2} + \sqrt{3} d e^{-i \theta / 2})^{- k} + \frac{1}{2} \sum_{\begin{subarray}{c} (c,d)=1\\ 3 \nmid c\end{subarray}}(c e^{-i \theta / 2} + \sqrt{3} d e^{i \theta / 2})^{- k}. \end{equation} The following lemma is then obtained: \begin{lemma} $F_{k, 3}^{*}(\theta)$ is real, for all $\theta \in [0, \pi]$. \label{lemma-real3} \end{lemma} \subsection{Application of the RSD Method} Note that $N := c^2 + d^2$. First, we consider the case of $N = 1$. We can then write the following: \begin{equation} F_{k, 3}^{*}(\theta) = 2 \cos(k \theta /2) + R_3^{*}, \end{equation} where $R_3^{*}$ denotes the remaining terms. Let $v_{k}(c, d, \theta) := |c e^{i \theta / 2} + \sqrt{3} d e^{-i \theta / 2}|^{- k}$. We will consider the following cases: $N = 2, 5, 10, 13, 17$, and $N \geqslant 25$. 
Considering $\theta \in [\pi / 2, 5 \pi / 6]$, we have the following: \begin{allowdisplaybreaks} \begin{align*} &\text{When $N = 2$,}& v_k(1, 1, \theta) &\leqslant 1, \quad &v_k(1, - 1, \theta) &\leqslant (1 / 2)^{k}.\\ &\text{When $N = 5$,}& v_k(1, 2, \theta) &\leqslant (1 / 7)^{k / 2}, &v_k(1, - 2, \theta) &\leqslant (1 / 13)^{k / 2},\\ &&v_k(2, 1, \theta) &\leqslant 1, &v_k(2, - 1, \theta) &\leqslant (1 / 7)^{k / 2}.\\ &\text{When $N = 10$,}& v_k(1, 3, \theta) &\leqslant (1 / 19)^{k / 2}, &v_k(1, - 3, \theta) &\leqslant (1 / 28)^{k / 2}.\\ &\text{When $N = 13$,}& v_k(2, 3, \theta) &\leqslant (1 / 13)^{k / 2}, &v_k(2, - 3, \theta) &\leqslant (1 / 31)^{k / 2}.\\ &\text{When $N = 17$,}& v_k(1, 4, \theta) &\leqslant (1 / 37)^{k / 2}, &v_k(1, - 4, \theta) &\leqslant (1 / 7)^{k},\\ &&v_k(4, 1, \theta) &\leqslant (1 / 7)^{k / 2}, &v_k(4, - 1, \theta) &\leqslant (1 / 19)^{k / 2}.\\ &\text{When $N \geqslant 25$,}& |c e^{i \theta / 2} \pm \sqrt{3} d e^{-i \theta / 2}|^2 \geqslant& N / 6, \end{align*} \end{allowdisplaybreaks} and the number of terms with $c^2 + d^2 = N$ is at most $(11 / 3) N^{1/2}$, for $N \geqslant 16$. Then, \begin{equation*} \sum_{\begin{subarray}{c} (c,d)=1\\ 3 \nmid c, \; N \geq 25\end{subarray}}|c e^{i \theta / 2} + \sqrt{3} d e^{-i \theta / 2}|^{- k} \leqslant \sum_{N=25}^{\infty} \frac{11}{3} N^{1/2} \left(\frac{1}{6} N\right)^{- k / 2} \leqslant \frac{352 \sqrt{6}}{k-3} \left(\frac{1}{2}\right)^{k}. \end{equation*} Thus, \begin{equation} |R_3^{*}| \leqslant 4 + 2 \left(\frac{1}{2}\right)^{k} + 6 \left(\frac{1}{7}\right)^{k / 2} + \cdots + 2 \left(\frac{1}{7}\right)^{k} + \frac{352 \sqrt{6}}{k-3} \left(\frac{1}{2}\right)^{k}. \label{r*3bound0} \end{equation} The cases of $(c, d) = \pm (1, 1), \: \pm (2, 1)$ give a bound equal to 4. We will consider an expansion of the method similar to that of $\Gamma_0^{*}(2)$.
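Each bound in the display above comes from minimizing $|c e^{i \theta / 2} + \sqrt{3} d e^{-i \theta / 2}|^2 = c^2 + 3 d^2 + 2 \sqrt{3} c d \cos\theta$ over $\theta \in [\pi / 2, 5 \pi / 6]$. The following Python snippet (added as a numerical sanity check; it is not part of the paper, and the function names are ours) confirms the claimed minima:

```python
import math

# |c e^{i t/2} + sqrt(3) d e^{-i t/2}|^2 = c^2 + 3 d^2 + 2 sqrt(3) c d cos(t)
def sq_mod(c, d, t):
    return c * c + 3 * d * d + 2 * math.sqrt(3) * c * d * math.cos(t)

# minimum over t in [pi/2, 5 pi/6], sampled on a fine grid including both endpoints
def min_sq(c, d, steps=10000):
    return min(sq_mod(c, d, math.pi / 2 + i * (math.pi / 3) / steps)
               for i in range(steps + 1))

# claimed lower bounds for |.|^2; each yields v_k(c, d, theta) <= bound^(-k/2)
claims = {(1, 1): 1, (1, -1): 4, (1, 2): 7, (1, -2): 13, (2, 1): 1,
          (2, -1): 7, (1, 3): 19, (1, -3): 28, (2, 3): 13, (2, -3): 31,
          (1, 4): 37, (1, -4): 49, (4, 1): 7, (4, -1): 19}
for (c, d), bound in claims.items():
    assert min_sq(c, d) >= bound - 1e-9
```

For $cd > 0$ the minimum is attained at $\theta = 5\pi/6$ (where $\cos\theta = -\sqrt{3}/2$), and for $cd < 0$ at $\theta = \pi/2$, which is why both endpoints are included in the grid.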
\subsection{Expansion of the RSD Method (1)} As in the case of $\Gamma_0^{*}(2)$, we will consider $\theta \in [\pi / 2, 5 \pi / 6 - x]$ for small $x > 0$. In the next subsection, we also consider the case in which $k = 12 n, \: \theta = 5 \pi / 6$. Define $m_3(k) := \left\lfloor \frac{k}{6} - \frac{t}{4} \right\rfloor$, where $t=0, 2$ is chosen so that $t \equiv k \pmod{4}$. We may assume that $k \geqslant 8$. How small should $x$ be? Let $k = 12 n + s$. Considering each of the cases $s = 4, 6, 8, 10, \text{ and } 14$, we need $x \leqslant \pi / 3 k$. \begin{lemma} Let $k \geqslant 8$. For all $\theta \in [\pi / 2, 5 \pi / 6 - x] \; (x = \pi / 3 k)$, $|R_3^{*}| < 2$. \label{lemma-r*3} \end{lemma} Before proving the above lemma, we need the following preliminaries. \begin{proposition} Let $k \geqslant 8$ be an even integer and $x = \pi / 3 k$, then \begin{equation*} 4 + 2 \sqrt{3} \cos \left(\frac{5 \pi}{6} - x\right) \geqslant \left(\frac{3}{2}\right)^{2 / k} \left(1 + \frac{256 \times 7 \times 13}{3 \times 127 \times k} x^2\right). \end{equation*} \label{prop-cos1} \end{proposition} \begin{proof} We have \begin{equation*} \left(\frac{3}{2}\right)^{2 / k} = \sum_{n = 0}^{\infty} \frac{\left(2 \log \frac{3}{2}\right)^n}{n!} \frac{1}{k^n} \leqslant 1 + \left(2 \log \frac{3}{2}\right) \frac{1}{k} + \frac{1}{2} \left(2 \log \frac{3}{2}\right)^2 \left(\frac{3}{2}\right)^{2 / k} \frac{1}{k^2}, \end{equation*} \begin{equation*} 3 + 2 \sqrt{3} \cos \left(\frac{5 \pi}{6} - \frac{\pi}{3k}\right) \geqslant \frac{\pi}{\sqrt{3}} \frac{1}{k}. \end{equation*} Let $x = \pi / 3 k$. Then, we have \begin{equation*} f_1(k) := 4 + 2 \sqrt{3} \cos \left(\frac{5 \pi}{6} - \frac{\pi}{3 k}\right) - \left(\frac{3}{2}\right)^{2 / k} \left(1 + \frac{256 \times 7 \times 13 \times \pi^2}{27 \times 127} \frac{1}{k^3}\right). \end{equation*} If $k = 8$, then $f_1(8) = 0.00012876... > 0$.
Next, if $k \geqslant 10$, then \begin{align*} f_1(k) &\geqslant \frac{1}{k} \left\{\frac{\pi}{\sqrt{3}} - 2 \log \frac{3}{2} - \frac{1}{2} \left(2 \log \frac{3}{2}\right)^2 \left(\frac{3}{2}\right)^{2 / k} \frac{1}{k} - \frac{256 \times 7 \times 13 \times \pi^2}{27 \times 127} \left(\frac{3}{2}\right)^{2 / k} \frac{1}{k^2}\right\}\\ &\geqslant \frac{1}{k} \times 0.24004... \quad (k \geqslant 10) \quad > 0. \end{align*} \end{proof} \begin{proposition}Let $k \geqslant 8$ be an even integer and $x = \pi / 3 k$, then \begin{equation*} 7 + 4 \sqrt{3} \cos \left(\frac{5 \pi}{6} - x\right) \geqslant 3^{2 / k} \left(1 + \frac{256 \times 7 \times 13}{3 \times 127 \times k} x^2\right). \end{equation*} \label{prop-cos2} \end{proposition} \begin{proof} We have \begin{gather*} 3^{2 / k} \leqslant 1 + (2 \log 3) \frac{1}{k} + \frac{1}{2} (2 \log 3)^2 3^{2 / k} \frac{1}{k^2},\\ 6 + 4 \sqrt{3} \cos \left(\frac{5 \pi}{6} - \frac{\pi}{3k}\right) \geqslant \frac{2 \pi}{\sqrt{3}} \frac{1}{k}. \end{gather*} Similar to the proof of Proposition \ref{prop-cos1}, let $x = \pi / 3 k$, and write \begin{equation*} f_2(k) := 7 + 4 \sqrt{3} \cos \left(\frac{5 \pi}{6} - \frac{\pi}{3 k}\right) - 3^{2 / k} \left(1 + \frac{256 \times 7 \times 13 \times \pi^2}{27 \times 127} \frac{1}{k^3}\right). \end{equation*} If $k = 8$, then $f_2(8) = 0.015057... > 0$. Next, if $k \geqslant 10$, then \begin{align*} f_2(k) \geqslant \frac{1}{k} \times 0.29437... \quad > 0. \end{align*} \end{proof} \begin{proof}[Proof of Lemma \ref{lemma-r*3}] Let $k \geqslant 8$ and $x = \pi / 3 k$, then $0 \leqslant x \leqslant \pi / 24$. By Proposition \ref{prop-cos1} \begin{equation*} |e^{i \theta / 2} + \sqrt{3} e^{-i \theta / 2}|^2 \geqslant \left(\frac{3}{2}\right)^{2 / k} \left(1 + \frac{256 \times 7 \times 13}{3 \times 127 \times k} x^2\right). 
\end{equation*} \begin{align*} |e^{i \theta / 2} + \sqrt{3} e^{-i \theta / 2}|^k &\geqslant \left(\frac{3}{2}\right) \left(1 + \frac{256 \times 7 \times 13}{3 \times 127 \times k} x^2\right)^{k / 2}\\ &\geqslant \frac{3}{2} + \frac{64 \times 7 \times 13}{127} x^2. \quad (k \geqslant 8) \end{align*} \begin{align*} v_k(1, 1, \theta) &\leqslant \frac{2}{3} - \frac{(128 \times 7 \times 13 / 127)}{(9 / 2) + (64 \times 3 \times 7 \times 13 / 127) x^2} x^2 \leqslant \frac{2}{3} - \frac{107}{8} x^2. \quad (x \leqslant \pi / 24) \end{align*} Similarly, by Proposition \ref{prop-cos2} \begin{equation*} |2 e^{i \theta / 2} + \sqrt{3} e^{-i \theta / 2}|^2 \geqslant 3^{2 / k} \left(1 + \frac{256 \times 7 \times 13}{3 \times 127 \times k} x^2\right). \end{equation*} \begin{equation*} |2 e^{i \theta / 2} + \sqrt{3} e^{-i \theta / 2}|^k \geqslant 3 + \frac{128 \times 7 \times 13}{127} x^2. \end{equation*} \begin{equation*} v_k(2, 1, \theta) \leqslant \frac{1}{3} - \frac{107}{16} x^2. \end{equation*} Thus, \begin{equation*} 2 v_k(1, 1, \theta) + 2 v_k(2, 1, \theta) \leqslant 2 - \frac{107 \pi^2}{24} \frac{1}{k^2}. \end{equation*} In Eq. (\ref{r*3bound0}), replace $4$ with the bound $2 - \frac{107 \pi^2}{24} \frac{1}{k^2}$. Then, \begin{equation*} |R_3^{*}| \leqslant 2 - \frac{107 \pi^2}{24} \frac{1}{k^2} + 176 \left(\frac{1}{2}\right)^{k}. \end{equation*} Similarly to the method for $\Gamma_0^{*}(2)$, we can easily show that the bound is less than two for $k \geqslant 8$. \end{proof} \subsection{Expansion of the RSD Method (2)} For the case ``$k = 12 n, \theta = 5 \pi / 6$'', we need the following lemma: \begin{lemma} Let $k$ be an integer such that $k = 12 n$ for some $n \in \mathbb{N}$. If $n$ is even, then $F_{k, 3}^{*}(5 \pi / 6) > 0$. On the other hand, if $n$ is odd, then $F_{k, 3}^{*}(5 \pi / 6) < 0$. \label{lemma-12n} \end{lemma} \begin{proof} Let $k = 12 n$ ($n \geqslant 1$).
Similarly to (\ref{eq-2ex2}), \begin{equation} F_{12n,3}^{*}(5 \pi / 6) = 2 e^{i n \pi} \frac{3^{6n}}{3^{6n} + 1} F_{12n}(2 \pi / 3), \end{equation} where $\frac{3^{6n}}{3^{6n} + 1} > 0$, $F_{12n}(2 \pi / 3) = 2 \cos(4 n \pi) + R_1 > 0$ ({\it cf.} (\ref{r1newbound})). \end{proof} \begin{proposition}[Valence formula for $\Gamma_0^{*}(3)$] Let $f$ be a modular function of weight $k$ for $\Gamma_0^{*}(3)$, which is not identically zero. We have \begin{equation} v_{\infty}(f) + \frac{1}{2} v_{i / \sqrt{3}}(f) + \frac{1}{6} v_{\rho_3} (f) + \sum_{\begin{subarray}{c} p \in \Gamma_0^{*}(3) \backslash \mathbb{H} \\ p \ne i / \sqrt{3}, \; \rho_3\end{subarray}} v_p(f) = \frac{k}{6}, \end{equation} where $\rho_3 := e^{i (5 \pi / 6)} \big/ \sqrt{3}$. \label{prop-vf-g0s3} \end{proposition} If $k \equiv 4, 8, 10, \text{ or } 0 \pmod{12}$, then $k / 6 - m_3(k) < 1$. On the other hand, if $k \equiv 2, \: 6 \pmod{12}$, we have $E_{k, 3}^{*}(i / \sqrt{3}) = 0$ and $k / 6 - m_3(k) - v_{i / \sqrt{3}}(E_{k, 3}^{*}) / 2 < 1$. In conclusion, for every even integer $k \geqslant 4$, all of the zeros of $E_{k, 3}^{*}(z)$ in $\mathbb{F}^{*}(3)$ are on the arc $\overline{A_3^{*}}$. \begin{remark}$($See Proposition \ref{prop-bd_ord_3}$)$ Let $k \geqslant 4$ be an even integer. We have \begin{center} \begin{tabular}{rcccrcc} $k \pmod{12}$ & $v_{i / \sqrt{3}}(E_{k, 3}^{*})$ & $v_{\rho_3}(E_{k, 3}^{*})$ & \qquad & $k \pmod{12}$ & $v_{i / \sqrt{3}}(E_{k, 3}^{*})$ & $v_{\rho_3}(E_{k, 3}^{*})$\\ \hline $0$ & $0$ & $0$ && $6$ & $1$ & $3$\\ $2$ & $1$ & $5$ && $8$ & $0$ & $2$\\ $4$ & $0$ & $4$ && $10$ & $1$ & $1$\\ \hline \end{tabular} \end{center} \end{remark}\quad \begin{remark} Getz \cite{G} considered a similar problem for the zeros of extremal modular forms of $\text{\upshape SL}_2(\mathbb{Z})$. It seems that similar results do not hold for extremal modular forms of $\Gamma_0^{*}(2)$ and $\Gamma_0^{*}(3)$. We plan to look into this in the near future.
\end{remark}\quad \appendix \begin{center} {\bfseries APPENDIX.}\quad{\bfseries On the space of modular forms for $\Gamma_0^{*} (2)$ and $\Gamma_0^{*} (3)$} \end{center} We need the theory of the spaces of modular forms for $\Gamma_0^{*} (2)$ and $\Gamma_0^{*} (3)$ in order to determine the orders of certain zeros. We refer to J.-P. Serre's {\it ``A Course in Arithmetic''} \cite{S}, which presents the theory of the space of modular forms for $\text{SL}_2(\mathbb{Z})$. Let $M_{k, p}$ be the space of modular forms for $\Gamma_0^{*} (p)$ of weight $k$, and let $M_{k, p}^0$ be the space of cusp forms for $\Gamma_0^{*} (p)$ of weight $k$. When we consider the map $M_{k, p} \ni f \mapsto f(\infty) \in \mathbb{C}$, the kernel of the map is $M_{k, p}^0$. So $\dim(M_{k, p} / M_{k, p}^0) \leqslant 1$, and $M_{k, p} = \mathbb{C} E_{k, p}^{*} \oplus M_{k, p}^0$. \section{$\Gamma_0^{*} (2)$} \begin{theorem}Let $k$ be an even integer, and let $\Delta_2 := \frac{17}{1152} ((E_{4,2}^{*})^2-E_{8,2}^{*})$. \renewcommand{\labelenumi}{(\arabic{enumi})} \begin{enumerate} \item For $k < 0$ and $k = 2$, $M_{k, 2} = 0$. \item For $k = 0, 4, 6$, and $10$, we have $M_{k, 2}^0 = 0$, and $\dim(M_{k, 2}) = 1$ with basis $E_{k, 2}^{*}$. \item $M_{k, 2}^0 = \Delta_2 M_{k - 8, 2}$. \end{enumerate} \renewcommand{\labelenumi}{\arabic{enumi}.} \end{theorem} We can prove the above theorem in a manner similar to the proof of Theorem 4 of \cite[Chapter VII]{S}. We use the valence formula for $\Gamma_0^{*} (2)$ (Proposition \ref{prop-vf-g0s2}). Furthermore, for a non-negative even integer $k$, $\dim(M_{k, 2}) = \lfloor k / 8 \rfloor$ if $k \equiv 2 \pmod{8}$, and $\dim(M_{k, 2}) = \lfloor k / 8 \rfloor + 1$ if $k \not\equiv 2 \pmod{8}$. We have $M_{k, 2} = \mathbb{C} E_{k - 8 n, 2}^{*} E_{8 n, 2}^{*} \oplus M_{k, 2}^0$.
Then, \begin{equation*} M_{k, 2} = E_{k - 8 n, 2}^{*} (\mathbb{C} E_{8 n, 2}^{*} \oplus \mathbb{C} E_{8 (n - 1), 2}^{*} \Delta_2 \oplus \cdots \oplus \mathbb{C} \Delta_2^n). \end{equation*} Thus, for every $p \in \mathbb{H}$ and for every $f \in M_{k, 2}$, $v_p(f) \geqslant v_p(E_{k - 8 n, 2}^{*})$. We also have $E_{10, 2}^{*} = E_{4, 2}^{*} E_{6, 2}^{*}$. Finally, we have the following proposition: \begin{proposition} Let $k \geqslant 4$ be an even integer. For every $f \in M_{k, 2}$, we have \begin{equation} \begin{split} v_{i / \sqrt{2}}(f) \geqslant s_k &\quad(s_k=0, 1 \; \text{such that} \; 2 s_k \equiv k \pmod{4}),\\ v_{\rho_2}(f) \geqslant t_k &\quad(t_k=0, 1, 2, 3 \; \text{such that} \; - 2 t_k \equiv k \pmod{8}). \end{split} \end{equation} In particular, if $f$ is a constant multiple of $E_{k, 2}^{*}$, then the equalities hold. \label{prop-bd_ord_2} \end{proposition} \section{$\Gamma_0^{*} (3)$} \begin{theorem}Let $k$ be an even integer. \renewcommand{\labelenumi}{(\arabic{enumi})} \begin{enumerate} \item For $k < 0$ and $k = 2$, $M_{k, 3} = 0$. \item For $k = 0, 4, 6$, we have $M_{k, 3}^0 = 0$, and $\dim(M_{k, 3}) = 1$ with basis $E_{k, 3}^{*}$. \item For $k = 8, 10, 14$, we have $M_{k, 3}^0 = \mathbb{C} \Delta_{3, k}$. \item For $k = 12$, we have $M_{12, 3}^0 = \mathbb{C} \Delta_{3, 12}^0 \oplus \mathbb{C} \Delta_{3, 12}^1$. \item $M_{k, 3}^0 = M_{12, 3}^0 M_{k - 12, 3}$. \end{enumerate} \renewcommand{\labelenumi}{\arabic{enumi}.} where $\Delta_{3, 8} := \frac{41}{1728} ((E_{4, 3}^{*})^2 - E_{8, 3}^{*})$, $\Delta_{3, 10} := \frac{61}{432} (E_{4, 3}^{*} E_{6, 3}^{*} - E_{10, 3}^{*})$, $\Delta_{3, 12}^0 := (\Delta_{3, 8})^2 / E_{4,3}^{*}$, $\Delta_{3, 12}^1 := \Delta_{3, 8} E_{4, 3}^{*}$, and $\Delta_{3, 14} := \Delta_{3, 10} E_{4, 3}^{*}$.
\label{th-mod_sp_3} \end{theorem} Now, we have the following table: \begin{center} \begin{tabular}{rccccccrccccc} $k$ & $f$ & $v_{\infty}$ & $v_{i / \sqrt{3}}$ & $v_{\rho_3}$ & $V_3^{*}$ &\qquad& $k$ & $f$ & $v_{\infty}$ & $v_{i / \sqrt{3}}$ & $v_{\rho_3}$ & $V_3^{*}$\\ \hline $4$ & $E_{4, 3}^{*}$ & $0$ & $0$ & $4$ & $0$ && $12$ & $E_{12, 3}^{*}$ & $0$ & $0$ & $0$ & $2$\\ $6$ & $E_{6, 3}^{*}$ & $0$ & $1$ & $3$ & $0$ && & $\Delta_{3, 12}^0$ & $2$ & $0$ & $0$ & $0$\\ $8$ & $E_{8, 3}^{*}$ & $0$ & $0$ & $2$ & $1$ && & $\Delta_{3, 12}^1$ & $1$ & $0$ & $6$ & $0$\\ & $\Delta_{3, 8}$ & $1$ & $0$ & $2$ & $0$ && $14$ & $E_{14, 3}^{*}$ & $0$ & $1$ & $5$ & $1$\\ $10$ & $E_{10, 3}^{*}$ & $0$ & $1$ & $1$ & $1$ && & $\Delta_{3, 14}$ & $1$ & $1$ & $5$ & $0$\\ & $\Delta_{3, 10}$ & $1$ & $1$ & $1$ & $0$ && & & & & & \\ \hline \end{tabular} \end{center} where $V_3^{*}$ denotes the number of simple zeros of $f$ on $A_3^{*}$. Furthermore, for a non-negative even integer $k$, $\dim(M_{k, 3}) = \lfloor k / 6 \rfloor$ if $k \equiv 2, 6 \pmod{12}$, and $\dim(M_{k, 3}) = \lfloor k / 6 \rfloor + 1$ if $k \not\equiv 2, 6 \pmod{12}$. We have $M_{k, 3} = \mathbb{C} E_{k - 12 n, 3}^{*} E_{12 n, 3}^{*} \oplus M_{k, 3}^0$. Then, \begin{equation*} M_{k, 3} = E_{k - 12 n, 3}^{*} \left\{\mathbb{C} E_{12 n, 3}^{*} \oplus E_{12 (n-1), 3}^{*} M_{12, 3}^0 \oplus \cdots \oplus (M_{12, 3}^0)^n \right\} \oplus M_{k - 12 n, 3}^0 (M_{12, 3}^0)^n. \end{equation*} In conclusion, we have the following proposition: \begin{proposition} Let $k \geqslant 4$ be an even integer. For every $f \in M_{k, 3}$, we have \begin{equation} \begin{split} v_{i / \sqrt{3}}(f) \geqslant s_k &\quad(s_k=0, 1 \; \text{such that} \; 2 s_k \equiv k \pmod{4}),\\ v_{\rho_3}(f) \geqslant t_k &\quad(t_k=0, 1, 2, 3, 4, 5 \; \text{such that} \; - 2 t_k \equiv k \pmod{12}). \end{split} \end{equation} In particular, if $f$ is a constant multiple of $E_{k, 3}^{*}$, then the equalities hold.
\label{prop-bd_ord_3} \end{proposition}\quad \begin{center} {\large Acknowledgement.} \end{center} The authors would like to thank Professor Eiichi Bannai for suggesting these problems as a master's course project. \quad \\ {\it E-mail address}: [email protected] (Tsuyoshi Miezaki), [email protected] (Hiroshi Nozaki), [email protected] (Junichi Shigezumi). \end{document}
\begin{document} \title{A note on $p$-adic denseness of quotients of values of quadratic forms} \begin{abstract} Donnay, Garcia and Rouse in \cite{DonGarRou} classified nonsingular quadratic forms $Q$ with integral coefficients and prime numbers $p$ such that the set of quotients of values of $Q$ attained for integer arguments is dense in the field of $p$-adic numbers. The aim of this note is to give another proof of this classification. \end{abstract} \section{Introduction} Let $\mathbb{N}, \mathbb{N}_+, \mathbb{Z}, \mathbb{Q}$ denote the sets of nonnegative integers, positive integers, integers and rational numbers, respectively. For each prime number $p$ and each nonzero rational number $x$ we define the $p$-adic valuation of $x$ as the integer $t$ such that $x=\frac{a}{b}p^t$ for some integers $a,b$ not divisible by $p$. We denote the $p$-adic valuation of $x$ by $\nu_p(x)$. For $x=0$ we set $\nu_p(0)=+\infty$. Next, we define the $p$-adic norm on the field of rational numbers by the formula $$||x||_p=\begin{cases} p^{-\nu_p(x)}&\mbox{ for }x\neq 0\\ 0&\mbox{ for }x=0 \end{cases}.$$ The $p$-adic norm induces the $p$-adic metric $d_p$ on $\mathbb{Q}$ by the formula $d_p(x,y)=||x-y||_p$. The field $\mathbb{Q}$ endowed with the $p$-adic metric is not a complete metric space. Thus, we pass to its completion, which has the structure of a topological field, and we call this completion the field of $p$-adic numbers $\mathbb{Q}_p$. The notions of $p$-adic valuation, $p$-adic norm and $p$-adic metric extend to the field $\mathbb{Q}_p$ by continuity. We denote the multiplicative group of the field of $p$-adic numbers by $\mathbb{Q}_p^*$. If $A$ is a set and $n\in\mathbb{N}_+$, then when writing $A^n$ we mean the Cartesian product of $n$ copies of the set $A$. The exception is the notation $(\mathbb{Q}_p^*)^2$, which denotes the multiplicative group of all nonzero squares in $\mathbb{Q}_p$.
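For rational arguments, the definitions above can be evaluated directly. The following Python sketch (added for illustration only; the function names are ours) computes $\nu_p(x)$ and $||x||_p$:

```python
from fractions import Fraction

def nu_p(x, p):
    """p-adic valuation of a nonzero rational x: the integer t with
    x = (a/b) p^t, where p divides neither a nor b."""
    num, den, t = x.numerator, x.denominator, 0
    while num % p == 0:
        num //= p
        t += 1
    while den % p == 0:
        den //= p
        t -= 1
    return t

def norm_p(x, p):
    """p-adic norm ||x||_p = p^(-nu_p(x)), with ||0||_p = 0."""
    if x == 0:
        return Fraction(0)
    return Fraction(1, p) ** nu_p(x, p)

# example: x = 50/3 has nu_5(x) = 2, hence ||x||_5 = 1/25
print(nu_p(Fraction(50, 3), 5), norm_p(Fraction(50, 3), 5))
```

Note that a rational is $p$-adically small precisely when its numerator is divisible by a high power of $p$, which is the intuition behind the denseness questions that follow.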
A quadratic form over a given integral domain $\mathcal{R}$ is a homogeneous polynomial of the second degree with coefficients in $\mathcal{R}$. If $Q=\sum_{1\leq i,j\leq n}a_{ij}X_iX_j\in\mathcal{R}[X_1,...,X_n]$, where $a_{ij}=a_{ji}$ for each $(i,j)\in\{1,...,n\}^2$ and not all $a_{ij}$ are zero, then we will say that the quadratic form $Q$ is nonsingular if $\det[a_{ij}]_{1\leq i,j\leq n}\neq 0$. If $Q$ is a quadratic form over $\mathcal{R}$ and there exists a nonzero vector $\mathbf{x}\in\mathcal{R}^n$ such that $Q(\mathbf{x})=0$, then we say that $Q$ is isotropic. Otherwise we say that $Q$ is anisotropic. The quadratic form $Q$ represents some element $a\in\mathcal{R}$ over $\mathcal{R}$ if $Q(\mathbf{x})=a$ for some $\mathbf{x}\in\mathcal{R}^n$. More generally, we say that $Q$ represents some subset $A$ of $\mathcal{R}$ over $\mathcal{R}$ if $Q$ represents every element of $A$ over $\mathcal{R}$. If $A$ is a subset of a given field, then we define its quotient set as $$R(A)=\left\{\frac{a}{b}: a,b\in A, b\neq 0\right\}.$$ The subject of denseness of quotient sets of subsets of $\mathbb{N}$ in the set of positive real numbers has been widely studied for decades and has generated a lot of literature (see \cite{BDGLS, BEST, BukSalToth, BukCsi, BukToth, GPS-JS, HedRose, HobSil, Sal, Sal2, Sta, StraToth, StraToth2}). On the other hand, the study of denseness of subsets of $\mathbb{N}$ or $\mathbb{Z}$ in the field of $p$-adic numbers is new. It was initiated in \cite{GarLuc}. A short time later there appeared \cite{San} and, probably the most comprehensive publication devoted to this issue, \cite{GHLPSSS}. The authors of \cite{GHLPSSS} presented a wide variety of interesting results and left several open problems. In view of their results on $p$-adic denseness of quotients of sums of a given number of squares or cubes, they posed two problems.
The first one \cite[Problem 4.3]{GHLPSSS} concerned $p$-adic denseness of quotients of sums of a given number of powers with a fixed exponent greater than $3$. This problem was completely solved in \cite{MisMurSan}. The second one \cite[Problem 4.4]{GHLPSSS} was devoted to quotients of values of quadratic forms. It was fully solved in \cite{DonGarRou}. Furthermore, the authors of \cite{DonGarRou} gave two proofs of the classification of nonsingular quadratic forms $Q$ with integral coefficients and prime numbers $p$ such that the set of quotients of values of $Q$ attained for integer arguments is dense in the field of $p$-adic numbers. The first proof is longer but elementary. The second one is much shorter but requires the results from \cite{Ser} on quadratic classes in $\mathbb{Q}_p$ represented by a given quadratic form. The goal of this paper is to present another, even shorter proof of the characterization of pairs $(Q,p)$, where $Q\in\mathbb{Z}[X_1,...,X_n]$ is a nonsingular quadratic form and $p$ is a prime number such that $R(Q(\mathbb{Z}^n))$ is dense in $\mathbb{Q}_p$. This proof is also based on the results from \cite{Ser} on quadratic classes in $\mathbb{Q}_p$ represented by a given quadratic form. Before the proof, let us recall the mentioned characterization. Its statement is the following. \begin{thm}\label{main} Let $p$ be a prime number, $n$ be a positive integer and $Q\in\mathbb{Z}[X_1,...,X_n]$ be a nonsingular quadratic form. Then the set $R(Q(\mathbb{Z}^n))$ is dense in $\mathbb{Q}_p$ if and only if $n\geq 3$ or $Q=aX_1^2+bX_1X_2+cX_2^2$ with $2\mid\nu_p(b^2-4ac)$ and \begin{itemize} \item $\left(\frac{p^{-\nu_p(b^2-4ac)}(b^2-4ac)}{p}\right)=1$ for $p>2$, \item $p^{-\nu_p(b^2-4ac)}(b^2-4ac)\equiv 1\pmod{8}$ for $p=2$. \end{itemize} \end{thm} \section{Proof of Theorem \ref{main}} Let us start the proof of the theorem with the remark that the conditions of denseness of the sets $R(Q(\mathbb{N}^n))$, $R(Q(\mathbb{Z}^n))$ and $R(Q(\mathbb{Z}_p^n))$ in $\mathbb{Q}_p$ are equivalent.
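For binary forms, the criterion of Theorem \ref{main} is straightforward to evaluate. The following Python sketch (an illustration added here, not part of the proof; the function names are ours, and the discriminant is assumed nonzero) tests the criterion, using Euler's criterion for the Legendre symbol when $p>2$:

```python
def nu(n, p):
    """p-adic valuation of a nonzero integer n."""
    t = 0
    while n % p == 0:
        n //= p
        t += 1
    return t

def dense_binary(a, b, c, p):
    """Criterion of the theorem for Q = a X1^2 + b X1 X2 + c X2^2 (discriminant nonzero)."""
    d = b * b - 4 * a * c
    t = nu(d, p)
    if t % 2 == 1:
        return False
    u = d // p ** t  # the unit part p^(-nu_p(d)) (b^2 - 4ac)
    if p == 2:
        return u % 8 == 1
    return pow(u, (p - 1) // 2, p) == 1  # Legendre symbol (u/p) by Euler's criterion

# Q = X1^2 + X2^2: b^2 - 4ac = -4, so R(Q(Z^2)) is dense in Q_5 but not in Q_3 or Q_2
print(dense_binary(1, 0, 1, 5), dense_binary(1, 0, 1, 3), dense_binary(1, 0, 1, 2))
```

The criterion holds exactly when $b^2-4ac$ is a square in $\mathbb{Q}_p^*$, i.e. when $Q$ is isotropic over $\mathbb{Q}_p$, which is how it will reappear in the proof below.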
\begin{lem}\label{1} Let $Q$ be any quadratic form over $\mathbb{Q}_p$. Then the following conditions are equivalent: \begin{enumerate} \item[i)] $R(Q(\mathbb{N}^n))$ is dense in $\mathbb{Q}_p$, \item[ii)] $R(Q(\mathbb{Z}^n))$ is dense in $\mathbb{Q}_p$, \item[iii)] $R(Q(\mathbb{Z}_p^n))$ is dense in $\mathbb{Q}_p$. \end{enumerate} \end{lem} \begin{proof} This follows from the denseness of $\mathbb{N}$ in $\mathbb{Z}_p$ with respect to the $p$-adic topology and the continuity of $Q$. \end{proof} \begin{lem}\label{2} Let $Q$ be any quadratic form over $\mathbb{Q}_p$. Then we have $R(Q(\mathbb{Q}_p^n))=R(Q(\mathbb{Z}_p^n))$. \end{lem} \begin{proof} Let $\mathbf{x}, \mathbf{y}\in\mathbb{Q}_p^n$ and $Q(\mathbf{y})\neq 0$. Then, for sufficiently large $k\in\mathbb{N}$ we have $p^k\mathbf{x}, p^k\mathbf{y}\in\mathbb{Z}_p^n$ and $$\frac{Q(p^k\mathbf{x})}{Q(p^k\mathbf{y})}=\frac{p^{2k}Q(\mathbf{x})}{p^{2k}Q(\mathbf{y})}=\frac{Q(\mathbf{x})}{Q(\mathbf{y})}.$$ \end{proof} At this point we split the proof into two cases depending on whether $Q$ is isotropic or not. We start with the case of an isotropic quadratic form. A direct consequence of the previous lemma is the following result. \begin{cor}\label{cor} If $Q$ is a nonsingular isotropic quadratic form over $\mathbb{Q}_p$, then $R(Q(\mathbb{Z}_p^n))=\mathbb{Q}_p$. In particular, the set $R(Q(\mathbb{Z}^n))$ is dense in $\mathbb{Q}_p$. \end{cor} \begin{proof} A classical result from the theory of quadratic forms states that if $Q$ is isotropic over a given field, then $Q$ represents all the elements of this field (see e.g. \cite[Proposition 3', p. 33]{Ser}). Thus $R(Q(\mathbb{Q}_p^n))=\mathbb{Q}_p$ and by Lemma \ref{2} we conclude that $R(Q(\mathbb{Z}_p^n))=\mathbb{Q}_p$. \end{proof} Let us now consider the case of an anisotropic quadratic form. The first lemma, however, is valid for every quadratic form over $\mathbb{Q}_p$, not necessarily anisotropic and not necessarily nonsingular. \begin{lem}\label{3} Let $Q\in\mathbb{Q}_p[X_1,...,X_n]$ be any quadratic form.
\begin{enumerate} \item[i)] The sets $Q(\mathbb{Q}_p^n)\backslash\{0\}$ and $R(Q(\mathbb{Q}_p^n))\backslash\{0\}$ are unions of some quadratic classes over $\mathbb{Q}_p$, i.e. elements of the quotient group $\mathbb{Q}_p^*/(\mathbb{Q}_p^*)^2$. \item[ii)] The set $R(Q(\mathbb{Z}^n))$ is dense in $\mathbb{Q}_p$ if and only if $R(Q(\mathbb{Q}_p^n))=\mathbb{Q}_p$ (in other words, the set $R(Q(\mathbb{Q}_p^n))\backslash\{0\}$ is the union of all quadratic classes over $\mathbb{Q}_p$). \end{enumerate} \end{lem} \begin{proof} Let us recall that if $a=Q(\mathbf{x})$ for some $\mathbf{x}\in\mathbb{Q}_p^n$, then for each $b\in\mathbb{Q}_p^*$ we have $ab^2=Q(b\mathbf{x})$. As a consequence, if $Q$ represents some nonzero $p$-adic number $a$, then $Q$ represents every element of the class $a(\mathbb{Q}_p^*)^2\in\mathbb{Q}_p^*/(\mathbb{Q}_p^*)^2$. This implies that if some element of a given class in $\mathbb{Q}_p^*/(\mathbb{Q}_p^*)^2$ can be written in the form $\frac{Q(\mathbf{x})}{Q(\mathbf{y})}$, where $\mathbf{x}, \mathbf{y}\in\mathbb{Q}_p^n$ and $Q(\mathbf{y})\neq 0$, then every element of this class can be represented as a quotient of values of the form $Q$. This proves part $i)$ of the lemma. For the proof of part $ii)$ we notice that quadratic classes over $\mathbb{Q}_p$ are open subsets of $\mathbb{Q}_p$ (see e.g. \cite[Remark 2, p. 18]{Ser}). Hence, if $R(Q(\mathbb{Z}^n))$ is dense in $\mathbb{Q}_p$, then $R(Q(\mathbb{Z}^n))$ has nonempty intersection with every quadratic class over $\mathbb{Q}_p$. In view of part $i)$ this means that $R(Q(\mathbb{Q}_p^n))\backslash\{0\}$ is the union of all quadratic classes over $\mathbb{Q}_p$. Conversely, if $R(Q(\mathbb{Q}_p^n))=\mathbb{Q}_p$, then by Lemma \ref{2} we have $R(Q(\mathbb{Z}_p^n))=\mathbb{Q}_p$ and the denseness of $R(Q(\mathbb{Z}^n))$ in $\mathbb{Q}_p$ follows from the denseness of $\mathbb{Z}$ in $\mathbb{Z}_p$ and the continuity of $Q$. 
\end{proof} The next part of the proof uses the most advanced tool in the whole paper. We will apply the knowledge about the number of classes in $\mathbb{Q}_p^*/(\mathbb{Q}_p^*)^2$ represented over $\mathbb{Q}_p$ by a given anisotropic quadratic form. \begin{lem}\label{4} Let $Q$ be a nonsingular anisotropic quadratic form over $\mathbb{Q}_p$. Then the set $R(Q(\mathbb{Z}^n))$ is dense in $\mathbb{Q}_p$ if and only if $n\geq 3$. \end{lem} \begin{proof} Let us start with the case of $n=1$. Then $Q=aX_1^2$ for some $a\in\mathbb{Q}_p^*$ and clearly $R(Q(\mathbb{Q}_p))=\{b^2/c^2: b,c\in\mathbb{Q}_p, c\neq 0\}=(\mathbb{Q}_p^*)^2\cup\{0\}$. Thus, $R(Q(\mathbb{Z}))$ cannot be dense in $\mathbb{Q}_p$. In the sequel we will use the fact that the order of the group $\mathbb{Q}_p^*/(\mathbb{Q}_p^*)^2$ is $4$ for $p>2$ and $8$ for $p=2$ (see \cite[Corollaries on p. 18]{Ser}). Consider the case of $n=2$. Since $\mathbb{Q}_p^*/(\mathbb{Q}_p^*)^2$ is a $2$-torsion group, we can treat it as a vector space over the field with two elements $\mathbb{F}_2$. If $p>2$, then $Q$ represents two quadratic classes over $\mathbb{Q}_p$ (see \cite[Remark 1, p. 38]{Ser}). Let $a$ and $b$ be their representatives. Then $R(Q(\mathbb{Q}_p^2))$ contains only the classes represented by $1$ and $ab$, so it does not contain all quadratic classes. If $p=2$, then $Q$ represents four quadratic classes over $\mathbb{Q}_2$ (see \cite[Remark 1, p. 38]{Ser}). We consider several cases. \begin{enumerate} \item When the represented classes form a hyperplane in $\mathbb{Q}_2^*/(\mathbb{Q}_2^*)^2$, then they are exactly those classes contained in $R(Q(\mathbb{Q}_2^2))$. \item When the represented classes have representatives $1,a,b,c$, where $a,b,c$ are linearly independent in $\mathbb{Q}_2^*/(\mathbb{Q}_2^*)^2$, then $R(Q(\mathbb{Q}_2^2))$ does not contain the class of $abc$. 
\item When the represented classes have representatives $a,b,c,ab$, where $a,b,c$ are linearly independent in $\mathbb{Q}_2^*/(\mathbb{Q}_2^*)^2$, then $R(Q(\mathbb{Q}_2^2))$ does not contain the class of $c$. \item When the represented classes have representatives $a,b,c,abc$, where $a,b,c$ are linearly independent in $\mathbb{Q}_2^*/(\mathbb{Q}_2^*)^2$, then $R(Q(\mathbb{Q}_2^2))$ does not contain the classes of $a,b,c$ and $abc$. \end{enumerate} In each case $R(Q(\mathbb{Q}_2^2))$ misses some quadratic class, so for $n=2$ the set $R(Q(\mathbb{Z}^2))$ is not dense in $\mathbb{Q}_p$ by Lemma \ref{3} ii). We end the proof with the case of $n\geq 3$. Then $Q$ represents more than half of the quadratic classes over $\mathbb{Q}_p$ (see \cite[Remark 1, p. 38]{Ser}). Let $a\in\mathbb{Q}_p^*$ be arbitrary. The sets $$\{b(\mathbb{Q}_p^*)^2: b\in\mathbb{Q}_p^*, b \text{ is represented by } Q \text{ over } \mathbb{Q}_p\},$$ $$\{ac(\mathbb{Q}_p^*)^2: c\in\mathbb{Q}_p^*, c \text{ is represented by } Q \text{ over } \mathbb{Q}_p\}$$ have cardinality greater than half of the order of $\mathbb{Q}_p^*/(\mathbb{Q}_p^*)^2$, hence they have nonempty intersection. This means that $ac_0(\mathbb{Q}_p^*)^2=b_0(\mathbb{Q}_p^*)^2$ for some $b_0,c_0\in\mathbb{Q}_p^*$ represented by $Q$ over $\mathbb{Q}_p$. Consequently, $a(\mathbb{Q}_p^*)^2=\frac{b_0}{c_0}(\mathbb{Q}_p^*)^2\subset R(Q(\mathbb{Q}_p^n))$. Since $a\in\mathbb{Q}_p^*$ is arbitrary, we have shown that $R(Q(\mathbb{Q}_p^n))$ contains all the quadratic classes over $\mathbb{Q}_p$. Summing up our reasoning, from Lemma \ref{3} ii) we conclude that if $Q$ is a nonsingular anisotropic quadratic form over $\mathbb{Q}_p$, then $R(Q(\mathbb{Z}^n))$ is dense in $\mathbb{Q}_p$ if and only if $n\geq 3$. \end{proof} If $n=1$, then a nonsingular quadratic form $Q\in\mathbb{Q}_p[X_1]$ is anisotropic and by Lemma \ref{4} the set $R(Q(\mathbb{Z}))$ is not dense in $\mathbb{Q}_p$. If $n\geq 3$, then it follows from Corollary \ref{cor} and Lemma \ref{4} that $R(Q(\mathbb{Z}^n))$ is dense in $\mathbb{Q}_p$ for any nonsingular quadratic form $Q\in\mathbb{Q}_p[X_1,...,X_n]$. 
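The counting step in the case $n\geq 3$ can be illustrated with a toy computation. For odd $p$ the square-class group $\mathbb{Q}_p^*/(\mathbb{Q}_p^*)^2$ is the Klein four-group, which we model below as $(\mathbb{Z}/2)^2$; the Python sketch (an illustration of ours, not part of the proof) verifies that any set containing more than half of the classes yields every class as a quotient:

```python
from itertools import combinations, product

# For p > 2 the square-class group Q_p^*/(Q_p^*)^2 is the Klein group (Z/2)^2.
CLASSES = list(product([0, 1], repeat=2))

def mul(x, y):
    """Multiply two square classes; every class is its own inverse (2-torsion)."""
    return ((x[0] + y[0]) % 2, (x[1] + y[1]) % 2)

def quotient_classes(S):
    """Classes of quotients b/c with b, c ranging over the represented set S."""
    return {mul(x, y) for x in S for y in S}  # y^{-1} = y in a 2-torsion group

def more_than_half_covers():
    """Any 3 > 4/2 represented classes give all four classes as quotients."""
    return all(quotient_classes(set(S)) == set(CLASSES)
               for S in combinations(CLASSES, 3))
```

For comparison, two represented classes (the $n=2$, $p>2$ case of the proof) give only two quotient classes, e.g. `quotient_classes({(0,0), (1,0)})` returns just those two classes, mirroring the classes of $1$ and $ab$.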
Hence, it remains to check when a nonsingular binary quadratic form is isotropic. It is a well-known (and easy to prove) fact that a nonsingular binary quadratic form $Q=aX_1^2+bX_1X_2+cX_2^2\in\mathbb{Q}_p[X_1,X_2]$ is isotropic if and only if $b^2-4ac=d^2$ for some $d\in\mathbb{Q}_p^*$, which holds exactly when $2\mid\nu_p(b^2-4ac)$ and \begin{itemize} \item $\left(\frac{p^{-\nu_p(b^2-4ac)}(b^2-4ac)}{p}\right)=1$ for $p>2$, \item $p^{-\nu_p(b^2-4ac)}(b^2-4ac)\equiv 1\pmod{8}$ for $p=2$. \end{itemize} The proof of Theorem \ref{main} is finished. \begin{rem} \rm{As we can see from the proof of Theorem \ref{main}, the assumption that the coefficients of the quadratic form $Q$ are integers is not necessary. The statement of the theorem remains true for $Q\in\mathbb{Q}_p[X_1,...,X_n]$.} \end{rem} \section{Quotients of nonnegative integers represented by a fixed quadratic form} The characterization of pairs $(Q,p)$, where $Q\in\mathbb{Z}[X_1,...,X_n]$ is a nonsingular quadratic form and $p$ is a prime number such that $R(Q(\mathbb{Z}^n))$ is dense in $\mathbb{Q}_p$, was motivated by a problem posed in \cite{GHLPSSS}. However, the problem (\cite[Problem 4.4]{GHLPSSS}) asks about the denseness of the set of quotients of nonnegative integers represented by a given quadratic form in the field of $p$-adic numbers. As we will see, this problem can be easily solved using Theorem \ref{main}. \begin{thm}\label{positive} Let $p$ be a prime number, $n$ be a positive integer and $Q\in\mathbb{Z}[X_1,...,X_n]$ be a quadratic form. Then the set $R(\mathbb{N}\cap Q(\mathbb{Z}^n))$ ($R(\mathbb{N}\cap Q(\mathbb{N}^n))$, respectively) is dense in $\mathbb{Q}_p$ if and only if $R(Q(\mathbb{Z}^n))$ ($R(Q(\mathbb{N}^n))$, respectively) is dense in $\mathbb{Q}_p$ and $\mathbb{N}_+\cap Q(\mathbb{Z}^n)\neq\varnothing$ ($\mathbb{N}_+\cap Q(\mathbb{N}^n)\neq\varnothing$, respectively). 
\end{thm} \begin{proof} Obviously, if $\mathbb{N}_+\cap Q(\mathbb{Z}^n)=\varnothing$ or $R(Q(\mathbb{Z}^n))$ is not dense in $\mathbb{Q}_p$, then $R(\mathbb{N}\cap Q(\mathbb{Z}^n))$ is not dense in $\mathbb{Q}_p$. Assume now that $\mathbb{N}_+\cap Q(\mathbb{Z}^n)\neq\varnothing$ and $R(Q(\mathbb{Z}^n))$ is dense in $\mathbb{Q}_p$. Let $\mathbf{z}\in\mathbb{Z}^n$ be such that $Q(\mathbf{z})\in\mathbb{N}_+$. Let $\mathbf{x}\in\mathbb{Z}^n$ be arbitrary. Then, for sufficiently large $k\in\mathbb{N}$ we have $Q(\mathbf{x}+p^k\mathbf{z})\in\mathbb{N}_+$. This follows from the equation $$Q(\mathbf{x}+p^k\mathbf{z})=Q(\mathbf{x})+2p^kB(\mathbf{x},\mathbf{z})+p^{2k}Q(\mathbf{z})>0,$$ where $B$ is the symmetric bilinear form associated with $Q$. The right hand side of the above equation is a quadratic trinomial with respect to $p^k$ with positive leading coefficient, thus it is positive for large $k$. On the other hand, if $B(\mathbf{x},\mathbf{z})\neq 0$, then \begin{align*} &\nu_p(Q(\mathbf{x}+p^k\mathbf{z})-Q(\mathbf{x}))=\nu_p(p^k(2B(\mathbf{x},\mathbf{z})+p^kQ(\mathbf{z})))=k+\nu_p(2B(\mathbf{x},\mathbf{z})+p^kQ(\mathbf{z}))\\ &=k+\nu_p(2B(\mathbf{x},\mathbf{z})) \end{align*} for sufficiently large $k\in\mathbb{N}$, while for $B(\mathbf{x},\mathbf{z})=0$ the valuation equals $2k+\nu_p(Q(\mathbf{z}))$; in both cases it tends to infinity with $k$. Hence, every element of $Q(\mathbb{Z}^n)$ can be approximated $p$-adically by elements from $\mathbb{N}\cap Q(\mathbb{Z}^n)$. As a result, the set $R(Q(\mathbb{Z}^n))$ is contained in the set of accumulation points of the set $R(\mathbb{N}\cap Q(\mathbb{Z}^n))$. However, $R(Q(\mathbb{Z}^n))$ is dense in $\mathbb{Q}_p$, which means that $R(\mathbb{N}\cap Q(\mathbb{Z}^n))$ is also dense in $\mathbb{Q}_p$. The proof for the set $R(\mathbb{N}\cap Q(\mathbb{N}^n))$ is completely analogous. \end{proof} \begin{rem} \rm{It is possible that $\mathbb{N}_+\cap Q(\mathbb{Z}^n)\neq\varnothing$ and $\mathbb{N}_+\cap Q(\mathbb{N}^n)=\varnothing$ for some quadratic form $Q\in\mathbb{Z}[X_1,...,X_n]$. Consider $Q=-X_1X_2$. 
Then $\mathbb{N}_+\cap Q(\mathbb{N}^n)=\varnothing$, as $Q$ attains only nonpositive values for nonnegative values of the variables $X_1$, $X_2$. On the other hand, $\mathbb{N}_+\cap Q(\mathbb{Z}^n)\neq\varnothing$ and, since $Q$ is a nonsingular isotropic quadratic form over $\mathbb{Q}$, we have by Corollary \ref{cor} and Theorem \ref{positive} that $R(\mathbb{N}_+\cap Q(\mathbb{Z}^n))$ is dense in $\mathbb{Q}_p$ for any prime number $p$.} \end{rem} \begin{rem} \rm{The assertion of Theorem \ref{positive} remains true if we assume that $Q$ is a quadratic form with rational coefficients. It suffices to note that if $Q(\mathbf{x})=\frac{a}{b}$ and $Q(\mathbf{y})=\frac{c}{d}$ for some $\mathbf{x},\mathbf{y}\in\mathbb{Z}^n$, then $Q(bd\mathbf{x})=abd^2$ and $Q(bd\mathbf{y})=cb^2d$ and $Q(bd\mathbf{x})/Q(bd\mathbf{y})=Q(\mathbf{x})/Q(\mathbf{y})$.} \end{rem} \end{document}
\begin{document} \title{On the Relationship between LTL Normal Forms and B\"uchi Automata} \begin{abstract} In this paper, we consider the problem of translating LTL formulas to B\"{u}chi automata. We first translate the given LTL formula into a special \emph{disjunctive-normal form} (DNF). The formula will be part of the state, and its DNF normal form specifies the atomic properties that should hold immediately (labels of the transitions) and the \emph{formula} that should hold afterwards (the corresponding successor state). Surprisingly, if the given formula is Until-free or Release-free, the B\"uchi automaton can be obtained directly in this manner. For a general formula, the construction is slightly more involved: an additional component will be needed in each state that helps us to identify the set of accepting states. Notably, our construction is an on-the-fly construction, and the resulting B\"uchi automaton has in the worst case $2^{2n+1}$ states, where $n$ denotes the number of subformulas. Moreover, the bound improves to $2^{n+1}$ when the formula is Until- (or Release-) free. \iffalse we explore the properties of the formula's DNF form, and then identify the corresponding accepting states. As a result, we present a DNF-based approach to generating B\"uchi automata from LTL formulas. Compared to the classic tableau construction, our approach 1) avoids generating the GBA (Generalized B\"uchi Automata); 2) discusses many interesting LTL formulas' properties which are seldom concerned before; 3) gives the more precise upper bound of $2^{2n+1}$ for the general translation, and even more has a better one of $2^{n+1}$ when the formula is Until-free (Release-free). \fi \end{abstract} \section{Introduction}\label{introduction} Translating Linear Temporal Logic (LTL) formulas to their equivalent automata (usually B\"{u}chi automata) has been studied for nearly thirty years. 
This translation plays a key role in automata-based model checking~\cite{Vardi86}: here the automaton of the negation of the LTL property is first constructed, then the verification process is reduced to the emptiness problem of the product automaton. Gerth et al.~\cite{Gerth95} proposed an on-the-fly construction approach to generating B\"{u}chi automata from LTL formulas, which means that a counterexample can be detected even when only a part of the property automaton has been generated. They called it a tableau construction approach, which became widely used, and many subsequent works~\cite{Somenzi00,Giannakopoulou02,Daniele99,Etessami00,TACAS12} on optimizing the constructed automata are based on it. \iffalse However, as we shall see, in the tableau framework the formulas are translated into the Generalized B\"uchi automata (GBA) in an intuitive way so early that deeper relationships between LTL and B\"uchi automata are ignored. The Muller automata is a generalization of B\"{u}chi automata, and it preserves all the properties of B\"{u}chi automata theoretically. For instance, the Muller automata can also be used in model checking~\cite{Thomas91}. Moreover, in automata theory, the Muller automata have nicer properties than B\"{u}chi automata. For instance, the determinization of Muller automata can be as expressive as the non-deterministic ones, whereas it does not hold for the B\"{u}chi automata. The deterministic B\"{u}chi automata can not be closed under complement, while the Muller automata can be. However there is not much research works on directly translating the LTL formulas to the Muller automata, and one often obtains the Muller automata from the existed automata~\cite{McNaughton66}. As a result, Translating the LTL formulas to equivalent Muller automata (non-deterministic) is also significant for model checking. Although the deterministic Muller automata can not be obtained directly, it is essential for further research of the applications of Muller automata. 
\fi In this paper, we propose a novel construction by making use of the notion of \emph{disjunctive-normal forms} (DNF). For an LTL formula $\varphi$, its DNF normal form is an equivalent formula of the form $\bigvee _i(\alpha_i\wedge X \varphi_i)$ where $\alpha_i$ is a finite conjunction of literals (atomic propositions or their negations), and $\varphi_i$ is a conjunctive LTL formula whose root operator is not a disjunction. We show that any LTL formula can be transformed into an equivalent DNF normal form, and refer to $\alpha_i\wedge X\varphi_i$ as a clause of $\varphi$. It is easy to see that any given LTL formula induces a labelled transition system (LTS): states correspond to formulas, and we assign a transition from $\varphi$ to $\varphi_i$ labelled with $\alpha_i$, if $\alpha_i\wedge X\varphi_i$ appears as a clause of the DNF form of $\varphi$. Figure~\ref{ltl_ts_and_ba} demonstrates our idea; the transition labels are omitted. \iffalse transition system As shown in the figure, we can first get the transition system (TS) from the formula $\varphi_1$, in which each node corresponds to a formula. Then we can extend the TS by doing the instantiation for formulas in each node to acquire the final B\"uchi automaton (BA). The details will be talked about in section~\ref{construction}. \fi \begin{figure} \caption{A demonstration of our idea} \label{ltl_ts_and_ba} \end{figure} \begin{figure} \caption{The B\"uchi automaton for $aUb$.} \label{fig:buchiforaUb} \end{figure} The LTS is the starting point of our construction. Surprisingly, for Until-free (or Release-free) formulas, the B\"uchi automaton can be obtained directly by equipping the above LTS with a set of accepting states, which is illustrated as follows. Consider the formula $aUb$, whose DNF form is $(b\wedge X(\mathsf{True}))\vee (a\wedge X(aUb))$. 
The corresponding B\"{u}chi automaton for $aUb$ is shown in Figure~\ref{fig:buchiforaUb}, where nodes $aUb$ and $\mathsf{True}$ represent the formulas $aUb$ and $\mathsf{True}$ respectively. The transitions are self-explanatory. By the semantics, we know that if the run $\xi$ satisfies a Release-free formula $\varphi$, then there must be a finite satisfying prefix $\eta$ of $\xi$ such that any path starting with $\eta$ satisfies $\varphi$ as well. Thus, for this class of formulas, the state corresponding to the formula $\mathsf{True}$ is considered as the single accepting state. The Until-free formulas can be treated in a similar way by taking the set of all states as accepting. The main contribution of the paper is to extend the above construction to general formulas. As an example we consider the formula $\psi=G(a U b)$, which has the normal form $ (b\wedge X\psi) \vee ( a\wedge X(aUb\wedge\psi))$. Note that here the formula $\mathsf{True}$ is not even reachable. The most challenging part of the construction is then the identification of the set of accepting states. For this purpose, we identify subformulas that will be reached infinitely often, which we call looping formulas. Only some of the looping formulas contribute to the set of accepting states. These formulas will be the key to our construction: we characterize for each formula a family of sets of atomic propositions, referred to as the \emph{obligation set}. Each obligation contains properties that must occur infinitely often to make the given formula satisfiable. In our construction, we add an additional component to the states to keep track of the obligations, and then define accepting states based on it -- an illustrating example can be found in Section \ref{example}. Our construction for a general formula has at most $2^{2n+1}$ states, with $n$ denoting the number of subformulas. The number of states for the Release-free (Until-free) case is bounded by $2^{n+1}$. 
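As a concrete illustration of the Release-free case for $aUb$, the following Python sketch (our own prototyping, not the paper's construction; state names and the lasso-checking routine are assumptions of ours) hand-codes the two-state LTS read off from the DNF form and checks acceptance of ultimately periodic words:

```python
# Hand-coded LTS for aUb, read off from its DNF (b ∧ X True) ∨ (a ∧ X(aUb)).
# A transition fires when all atoms of its guard occur in the input letter.
TRANS = {
    'aUb':  [({'b'}, 'True'), ({'a'}, 'aUb')],
    'True': [(set(), 'True')],   # True is an accepting sink
}

def step(states, letter):
    """Nondeterministic successor set on one input letter."""
    return {t for s in states for guard, t in TRANS[s] if guard <= set(letter)}

def accepts(prefix, loop):
    """Acceptance of the lasso word prefix · loop^ω.  Because 'True' is an
    accepting sink here, Büchi acceptance reduces to reachability of 'True'
    within boundedly many unrollings of the loop."""
    cur = {'aUb'}
    for letter in list(prefix) + list(loop) * (2 ** len(TRANS)):
        if 'True' in cur:
            return True
        cur = step(cur, letter)
    return 'True' in cur
```

For instance, the word $\{a\}\{a\}\{b\}\emptyset^\omega$ is accepted while $\{a\}^\omega$ is not, matching the semantics of $aUb$.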
Recall that the classical tableau construction has complexity $2^{O(n)}$ \cite{Gerth95}. To the best of our knowledge, this is the first time that one can give a precise bound on the exponent for such a construction. \iffalse Summarizing, the contributions of the paper are as follows: \begin{enumerate} \item We propose a \textit{Disjunctive-Normal Form} (DNF) for the LTL formula, based on which we investigate the relationship between DNF normal forms and B\"uchi automata. As a result we present a novel construction for translating LTL formulas to B\"uchi automaton. Especially for Release/Until-free formulas, our construction is very simple in theory. \item We discuss some interesting properties of LTL formulas, which demonstrate another view on the satisfiability of LTL formulas. Using these properties, we can find the translation from LTL to B\"uchi automaton is far more intuitive. \item As far as we know, compared to the complexity of $2^{O(n)}$ \cite{Gerth95} for the tableau construction, our approach gives the more precise one of $2^{2n+1}$, and even more a better one of $2^{n+1}$ when the formula is restricted into Release-free (Until-free). \end{enumerate} Etessami and Holzmann~\cite{Etessami00} have pointed out the goal of keeping the size of generated automata from LTL formulas small may not result in reducing the cost of model checking. Thus Sebastiani et al. \cite{Sebastiani03} suggested to make the generated automata ``as deterministic as possible'' and it is well accepted as a heuristic strategy by the researchers since then. However, it should be noted that keeping the generated automata small is still significant and it is still one of the key goals for the generated automata together with their determinism. Thus, the new construction proposed here also adopts this criteria that makes the number of the states and transitions of the generated automata as small as possible. 
The experiments in this paper also consider the ``determinism'' criteria and use the approach in \cite{Sebastiani03} to make the generated automata as deterministic as possible, which makes the experimental comparisons are more meaningful. \fi \subsubsection*{Related Work} There are two main approaches to constructing B\"{u}chi automata from LTL formulas. The first approach generates an alternating automaton from the LTL formula and then translates it to an equivalent B\"{u}chi automaton~\cite{Vardi96}. Gastin et al.~\cite{Gastin01} proposed a variant of this construction in 2001, which first translates the very weak alternating co-B\"uchi automaton into a generalized automaton with accepting transitions, which is then translated into a B\"uchi automaton. In particular, the experiments show that their algorithm outperforms the others if the formulas under construction are restricted to fairness conditions. Recently Babiak et al.~\cite{TACAS12} proposed some optimization strategies based on the work~\cite{Gastin01}. \iffalse They make the observation on the formula where each branch of its syntax tree containing at least one eventually and always operators. Compared to LTL2BA, their implementation tool, LTL3BA, performs more fast and deterministic. \fi The second approach, called the \textit{tableau} construction, was proposed in 1995 by Gerth et al.~\cite{Gerth95}. This approach can generate the automaton from the LTL formula on-the-fly, and it is widely used in verification tools to accelerate the automata-based verification process. An important feature of the tableau construction is the introduction of (state-based) \textit{Generalized B\"{u}chi Automata} (GBA). Daniele et al.~\cite{Daniele99} improved the tableau construction by some simple syntactic techniques. Giannakopoulou and Lerda~\cite{Giannakopoulou02} proposed another construction approach that uses the transition-based Generalized B\"{u}chi automaton (TGBA). 
Further optimization techniques~\cite{Etessami00,Somenzi00} have been proposed to reduce the size of the generated automata. For instance, Etessami and Holzmann~\cite{Etessami00} described optimization techniques including proof-theoretic reductions (formula rewriting), core algorithm tightening and automata-theoretic reductions (simulation-based). \iffalse The experiments in the literature also pointed out the smaller generated automaton does not always result in the better performance of the model checking. Based on this observation, Sebastiani et al. \cite{Sebastiani03} focused on the determinism of the generated automaton and pointed that the generated automaton from the LTL formula should be ``as deterministic as possible'', and this insight was accepted by other researchers, i.e. the tool SPOT~\cite{Duret-Lutz04} integrated this heuristics into its recent implementation. \fi \iffalse For Muller automata our paper proposes a unified framework, thus without first generating the corresponding B\"uchi automata, as done in \cite{McNaughton66}. Although Jong \cite{Jong91} proposed a translation directly from the LTL formulas to Muller automata based on the classic automata construction, the translation involves automaton complementation and intersection which makes the complexity of the approach higher. \fi \subsubsection*{Organization of the paper.} Section \ref{example} illustrates our approach by a running example. Section \ref{sec:pre} introduces preliminaries of B\"uchi automata and LTL formulas and then introduces the \emph{disjunctive-normal form} for LTL formulas. Section~\ref{construction} specifies the proposed \emph{DNF-based} construction. Section~\ref{discussion} discusses how our approach is related to the tableau construction in~\cite{Gerth95}. Section~\ref{conclusion} concludes the paper. \section{A Running Example}\label{example} We consider the formula $\varphi_1=\textrm{G}(bUc\wedge dUe)$ as our running example. 
The DNF form of $\varphi_1$ is given by: \begin{align*} \varphi_1= (c\wedge e\wedge X(\varphi_1) ) \vee (b\wedge e\wedge X(\varphi_2)) \vee (c\wedge d\wedge X(\varphi_3)) \vee (b\wedge d\wedge X(\varphi_4)) \end{align*} where $\varphi_2=bUc\wedge G(bUc\wedge dUe)$, $\varphi_3=dUe\wedge G(bUc\wedge dUe)$, $\varphi_4=bUc\wedge dUe \wedge G(bUc\wedge dUe)$. It is easy to check that the above DNF form is indeed equivalent to the formula $\varphi_1$. Interestingly, we note that $\varphi_1,\varphi_2,\varphi_3,\varphi_4$ all have the same DNF form above. \begin{figure} \caption{The B\"uchi automaton for $\varphi_1$.} \label{fig:buechi} \end{figure} The corresponding B\"uchi automaton for $\varphi_1$ is depicted in Fig.~\ref{fig:buechi}. We can see that there are four states in the generated automaton, corresponding to the four formulas $\varphi_i$ $(i = 1, 2, 3, 4)$. The state corresponding to the formula $\varphi_1$ is also the initial state. The transition relation is obtained by observing the DNF forms: for instance we have a self-loop for state $s_1$ with label $c\wedge e$. In the normal form of $\varphi_1$ there is a clause $c\wedge e\wedge X(\varphi_1)$, a conjunction of the two terms $c\wedge e$ and $X(\varphi_1)$: the formula $\varphi_1$ under the $X$ operator corresponds to the node $s_1$, and $c\wedge e$ corresponds to the self-loop edge of $s_1$. \iffalse If we look at the edge $b\wedge e$ between $s_1$ and $s_2$ in the generated automaton, we still can find a corresponding relation between the normal form of $\varphi_1$ and the automaton, e.g., the second term in the normal form of $\varphi_1$ gives the hint.\fi Thus, the \emph{disjunctive-normal form} of the formula has a very close relation with the generated automaton. The most difficult part is to determine the set of accepting states of the automaton. We thus give here a brief description of several notions introduced for this purpose in our running example. 
All four formulas $\varphi_i$ $(i=1,2,3,4)$ have the same \textit{obligation set}, i.e. $OS_{\varphi_i}=\{\{c,e\}\}$, which may vary for different formulas. In our construction, every \textit{obligation} in the \textit{obligation set} of a formula identifies the properties that need to be satisfied infinitely often if the formula is satisfiable. For example, the formulas $\varphi_i$ $(i=1,2,3,4)$ are satisfied if and only if all properties in the obligation $\{c,e\}$ are met infinitely often, according to our framework. Then, a state consists of a formula and the \textit{process set}, which records all the properties that have been met so far. For simplicity, we initialize the \textit{process set} $P_1$ of the initial state $s_1$ with the empty set. For the state $s_2$, the corresponding process set $P_2 = \{e\}$ is obtained by taking the union of $P_1$ and the label $\{b,e\}$ from $s_1$. The atom $b$ is omitted as it is not contained in the obligation. Similarly one can conclude $P_3 = \{c\}$ and $P_4 = \{true\}$: here the property $true$ indicates that no property has been met so far. When there is more than one property in the \textit{process set}, the $true$ can be erased, such as in state $s_3$. Moreover, the \textit{process set} in a state will be reset to empty if it includes one \textit{obligation} in the formula's \textit{obligation set}. For instance, the transition $s_2\tran{c\wedge d}s_1$ in the figure is due to the fact that $P_1' = P_2\cup \{c\}=\{c,e\}$, which is actually in $OS_{\varphi_1}$. So $P_1'$ is reset to the empty set. The same rule applies when the transitions $s_2\tran{c\wedge e}s_1$, $s_4\tran{c\wedge e}s_1$, $s_3\tran{b\wedge e}s_1$ occur. Throughout the paper, we will return to this example when we explain our construction approach. 
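The process-set bookkeeping described above can be prototyped as follows. This Python sketch reflects our reading of the informal rule (the formal definitions come later in the paper); we use the empty set where the figure writes $\{true\}$:

```python
def update_process_set(P, label, obligation_set):
    """One step of the process-set update: collect the atoms of the label
    that occur in some obligation, and reset once a full obligation is met."""
    relevant = set().union(*obligation_set)     # atoms appearing in obligations
    P2 = set(P) | (set(label) & relevant)       # e.g. drop b: not in {c, e}
    if any(ob <= P2 for ob in obligation_set):  # a full obligation collected,
        return frozenset()                      # e.g. s_2 --c∧d--> s_1: reset
    return frozenset(P2)
```

With $OS_{\varphi_i}=\{\{c,e\}\}$ this reproduces the updates of the running example: from $P_1=\emptyset$ and label $\{b,e\}$ we get $P_2=\{e\}$, and from $P_2$ and label $\{c,d\}$ the set $\{c,e\}$ is completed and reset to $\emptyset$.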
\iffalse \begin{table}[t] \centering \begin{tabular}{|ll|} \hline \begin{tabular}{l} \textbf{(1)}.$G(bUc\wedge dUe)$\\ $=c\wedge e\wedge X($\underline{$G(bUc\wedge dUe)$}$_{\textbf{(1)}})$\\ \hspace*{4mm}$c\wedge d\wedge X($\underline{$dUe\wedge G(bUc\wedge dUe)$}$_{\textbf{(3)}})$\\ \hspace*{4mm}$b\wedge e\wedge X($\underline{$bUc\wedge G(bUc\wedge dUe)$}$_{\textbf{(2)}})$\\ \hspace*{4mm}$b\wedge d\wedge X($\underline{$bUd\wedge dUe\wedge G(bUc\wedge dUe)$}$_{\textbf{(4)}})$\\ \end{tabular} & \begin{tabular}{l} \textbf{(2)}.$bUc\wedge G(bUc\wedge dUe)$\\ $=c\wedge e\wedge X($\underline{$G(bUc\wedge dUe)$}$_{\textbf{(1)}})$\\ \hspace*{4mm}$c\wedge d\wedge X($\underline{$dUe\wedge G(bUc\wedge dUe)$}$_{\textbf{(3)}})$\\ \hspace*{4mm}$b\wedge e\wedge X($\underline{$bUc\wedge G(bUc\wedge dUe)$}$_{\textbf{(2)}})$\\ \hspace*{4mm}$b\wedge d\wedge X($\underline{$bUd\wedge dUe\wedge G(bUc\wedge dUe)$}$_{\textbf{(4)}})$\\ \end{tabular} \\ \begin{tabular}{l} \textbf{(3)}.$dUe\wedge G(bUc\wedge dUe)$\\ $=c\wedge e\wedge X($\underline{$G(bUc\wedge dUe)$}$_{\textbf{(1)}})$\\ \hspace*{4mm}$c\wedge d\wedge X($\underline{$dUe\wedge G(bUc\wedge dUe)$}$_{\textbf{(3)}})$\\ \hspace*{4mm}$b\wedge e\wedge X($\underline{$bUc\wedge G(bUc\wedge dUe)$}$_{\textbf{(2)}})$\\ \hspace*{4mm}$b\wedge d\wedge X($\underline{$bUd\wedge dUe\wedge G(bUc\wedge dUe)$}$_{\textbf{(4)}})$\\ \end{tabular} & \begin{tabular}{l} \textbf{(4)}.$bUc\wedge dUe\wedge G(bUc\wedge dUe)$\\ $=c\wedge e\wedge X($\underline{$G(bUc\wedge dUe)$}$_{\textbf{(1)}})$\\ \hspace*{4mm}$c\wedge d\wedge X($\underline{$dUe\wedge G(bUc\wedge dUe)$}$_{\textbf{(3)}})$\\ \hspace*{4mm}$b\wedge e\wedge X($\underline{$bUc\wedge G(bUc\wedge dUe)$}$_{\textbf{(2)}})$\\ \hspace*{4mm}$b\wedge d\wedge X($\underline{$bUd\wedge dUe\wedge G(bUc\wedge dUe)$}$_{\textbf{(4)}})$\\ \end{tabular}\\ \hline \end{tabular} \caption{The formulas expanded from $G(bUc\wedge dUe)$ and their normal forms}\label{nomalform2} \end{table} \fi \section{B\"uchi 
Automaton, LTL and Disjunctive Normal Form}\label{sec:pre} \subsection{B\"uchi Automaton}\label{sec:automata} A B\"{u}chi automaton is a tuple $\mathcal{A}=(S, \Sigma, \delta , S_0, F)$, where $S$ is a finite set of states, $\Sigma$ is a finite alphabet, $\delta: S\times \Sigma \to 2^S$ is the transition relation, $S_0$ is a set of initial states, and $F\subseteq S$ is a set of accepting states of $\mathcal{A}$. We use $w, w_0\in\Sigma$ to denote letters of $\Sigma$, and $\eta, \eta_0\in \Sigma^*$ to denote finite sequences. A \emph{run} $\xi=w_0w_1w_2\ldots$ is an infinite sequence in $\Sigma^\omega$. For $\xi$ and $k\geq 1$ we use $\xi ^k=w_0w_1\ldots w_{k-1}$ to denote the prefix of $\xi$ consisting of its first $k$ elements (the $(k+1)$th element is not included), and $\xi _k=w_kw_{k+1}\ldots$ to denote the suffix of $\xi$ starting from its $(k+1)$th element (the $(k+1)$th element is included). Thus, $\xi=\xi^k \xi_k$. For notational convenience we write $\xi_0=\xi$ and $\xi^0=\varepsilon$ ($\varepsilon$ is the empty string). The run $\xi$ is accepting if it passes through some state in $F$ infinitely often. \subsection{Linear Temporal Logic}\label{sec:ltl} We recall linear temporal logic (LTL), which is widely used as a specification language to describe the properties of reactive systems. Assume $AP$ is a set of atomic properties; then the syntax of LTL formulas is defined by: \begin{align*} \varphi\ ::=\ a \mid \neg a \mid \varphi\wedge \varphi \mid \varphi\vee \varphi \mid \varphi\ U\ \varphi \mid \varphi\ R\ \varphi \mid X\ \varphi \end{align*} where $a\in AP$. We say $\varphi$ is a \emph{literal} if it is an atomic proposition or its negation. In this paper we use lower case letters to denote atomic properties, $\alpha$, $\beta$, $\gamma$ to denote propositional formulas (without temporal operators), and $\varphi$, $\psi$, $\vartheta$, $\mu$, $\nu$ and $\lambda$ to denote LTL formulas. Note that w.l.o.g. 
we are considering LTL formulas in negative normal form (NNF) -- all negations are pushed down to the literal level. LTL formulas are interpreted on infinite sequences (corresponding to runs of the automata) $\xi\in \Sigma ^\omega$ with $\Sigma =2^{AP}$. The Boolean connective case is trivial, and the semantics of the temporal operators is given by: \begin{itemize} \item $\xi \models \varphi_1\ U\ \varphi_2$ iff there exists $i\geq 0$ such that $\xi_i\models \varphi_2$ and for all $0 \leq j < i, \xi_j\models \varphi_1$; \item $\xi \models \varphi_1\ R\ \varphi_2$ iff either $\xi_i\models\varphi_2$ for all $i\geq 0$, or there exists $i\geq 0$ with $\xi_i\models\varphi_1\wedge\varphi_2$ and $\xi_j\models\varphi_2$ for all $0\leq j< i$; \item $\xi \models X\ \varphi$ iff $\xi_1\models \varphi$. \end{itemize} According to the LTL semantics, it holds that $\varphi R\psi=\neg (\neg \varphi U\neg \psi)$. We use the usual abbreviations $\mathsf{True}=a \vee \neg a$, $Fa=\mathsf{True}Ua$ and $Ga=\mathsf{False}Ra$. \textbf{Notations.} Let $\varphi$ be a formula written in \emph{conjunctive form} $\varphi = \bigwedge_{i\in I} \varphi_i$ such that the root operator of $\varphi_i$ is not a conjunction: then we define the conjunctive formula set as $CF(\varphi):=\{\varphi_i \mid i\in I\}$. When the root operator of $\varphi$ is not a conjunction, $\mathit{CF}(\varphi)$ only includes $\varphi$ itself. For technical reasons, we assume that $\mathit{CF}(\mathsf{True})=\emptyset$. Our construction requires that atoms (properties) occurring at different positions in a formula can be distinguished. For example, in the formula $aUa$ the two occurrences of $a$ are considered syntactically different; similarly for the formula $aU\neg a$. \subsection{Disjunctive Normal Form}\label{sec:dnf} We introduce the notion of \textit{disjunctive-normal form} for LTL formulas in the following. 
\begin{definition}[disjunctive-normal form]\label{def:dnf} A formula $\varphi$ is in \textit{disjunctive-normal form} (DNF) if it can be represented as $\varphi:=\bigvee _i (\alpha_i\wedge X \varphi_i)$, where $\alpha_i$ is a finite conjunction of literals, and $\varphi_i = \bigwedge \varphi_{i_j}$ where each $\varphi_{i_j}$ is either a literal, or an \textit{Until}, \textit{Next} or \textit{Release} formula. We say $\alpha_i\wedge X \varphi_i$ is a \emph{clause} of $\varphi$, and write $DNF(\varphi)$ to denote the set of all its clauses. \end{definition} As seen in the introduction and the motivating example, the DNF form plays a central role in our construction. Thus, we first show that any LTL formula $\varphi$ can be transformed into an equivalent formula in DNF form. The transformation is done in two steps; the first step is according to the following rules: \begin{lemma}\label{lemma:expansion} \begin{enumerate} \item $DNF(\alpha) =\{\alpha \wedge X(\mathsf{True})\}$ where $\alpha$ is a literal; \item $DNF(X\varphi) = \{\mathsf{True}\wedge X(\varphi)\}$; \item $DNF(\varphi_1 U \varphi_2) = DNF(\varphi_2)\cup DNF( \varphi_1 \wedge X(\varphi_1 U \varphi_2))$; \item $DNF(\varphi_1 R \varphi_2) = DNF(\varphi_1 \wedge \varphi_2) \cup DNF( \varphi_2 \wedge X(\varphi_1 R \varphi_2))$; \item $DNF(\varphi_1 \vee \varphi_2) = DNF(\varphi_1)\cup DNF(\varphi_2)$; \item $DNF(\varphi_1\wedge\varphi_2) = \{(\alpha_1\wedge\alpha_2) \wedge X(\psi_1\wedge\psi_2)\mid \forall i=\mathit{1,2}. \ \alpha_i\wedge X(\psi_i)\in DNF(\varphi_i)\}$; \end{enumerate} \end{lemma} All of the rules above are self-explanatory, following from the definition of DNF, the distributive laws, and the expansion laws. What remains is how to deal with the formulas under the \textit{Next} operator: by definition, in a clause $\alpha_i\wedge X(\varphi_i)$ the root operators in $\varphi_i$ cannot be disjunctions.
The equivalence $X (\varphi_1 \vee \varphi_2) = X \varphi_1 \vee X \varphi_2$ can be applied repeatedly to move the disjunctions out of the \textit{Next} operator. The distributive law of disjunction over conjunction then allows us to bring any formula into an equivalent DNF form: \begin{theorem}\label{thm:transform} Any LTL formula $\varphi$ can be transformed into an equivalent formula in disjunctive-normal form. \end{theorem} In our running example, we have $DNF(\varphi_1)= DNF(\varphi_2)= DNF(\varphi_3)= DNF(\varphi_4)=\{c\wedge e\wedge X(\varphi_1), b\wedge e\wedge X(\varphi_2), c\wedge d\wedge X(\varphi_3), b\wedge d\wedge X(\varphi_4) \}$. Below we discuss the set of formulas that can be reached from a given formula. \begin{definition}[Formula Expansion]\label{def:expand} We write $\varphi\tran{\alpha}\psi$ iff there exists $\alpha\wedge X(\psi) \in DNF(\varphi)$. We say $\psi$ is expandable from $\varphi$, written as $\varphi\hookrightarrow\psi$, if there exists a finite expansion $\varphi\tran{\alpha_1}\psi_1\tran{\alpha_2}\psi_2\tran{\alpha_3}\ldots \psi_n=\psi$. Let $EF(\varphi)$ denote the set of all formulas that can be expanded from $\varphi$. \end{definition} The following theorem points out that $|EF(\lambda)|$ is bounded: \begin{theorem}\label{thm:expand:bounded} For any formula $\lambda$, $|EF(\lambda)|\leq 2^{n+1}$, where $n$ denotes the number of subformulas of $\lambda$. \end{theorem} \section{\emph{DNF-based} B\"uchi Automaton Construction}\label{construction} The goal of this section is to construct the B\"uchi automaton $\mathcal{A}_\lambda$ for $\lambda$. We first establish a few simple properties of general formulas that shed light on the construction for $Release$-free ($Until$-free) formulas. We then define the labelled transition system for a formula. In the following subsections we present the constructions for $Release$-free ($Until$-free) formulas and for general formulas, respectively.
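Purely as an illustration (not part of the formal development), the expansion rules of Lemma~\ref{lemma:expansion} and the set $EF(\lambda)$ admit a direct recursive prototype. The Python sketch below uses a tuple encoding of NNF formulas of our own devising; a clause $\alpha\wedge X(\psi)$ is represented as a pair of sets, following the convention $CF(\mathsf{True})=\emptyset$:

```python
from itertools import product

# Hypothetical tuple encoding of NNF formulas (ours, for illustration only):
# ('lit', name, polarity), ('and'|'or'|'U'|'R', l, r), ('X', f), TRUE, FALSE.
TRUE, FALSE = ('true',), ('false',)

def cf(phi):
    """CF(phi): the conjuncts of a conjunctive formula; CF(True) = {}."""
    if phi == TRUE:
        return frozenset()
    if phi[0] == 'and':
        return cf(phi[1]) | cf(phi[2])
    return frozenset([phi])

def dnf(phi):
    """Clauses of phi per the expansion rules: a clause alpha /\\ X(psi)
    is encoded as (set of literals of alpha, CF-set of psi)."""
    op = phi[0]
    if op == 'true':
        return {(frozenset(), frozenset())}        # True /\ X(True)
    if op == 'false':
        return set()                               # False has no clauses
    if op == 'lit':
        return {(frozenset([phi]), frozenset())}   # rule 1
    if op == 'X':
        return {(frozenset(), cf(phi[1]))}         # rule 2
    if op == 'or':
        return dnf(phi[1]) | dnf(phi[2])           # rule 5
    if op == 'and':                                # rule 6: pairwise unions
        return {(a1 | a2, n1 | n2)
                for (a1, n1), (a2, n2) in product(dnf(phi[1]), dnf(phi[2]))}
    if op == 'U':                                  # rule 3: expansion of Until
        return dnf(phi[2]) | dnf(('and', phi[1], ('X', phi)))
    if op == 'R':                                  # rule 4: expansion of Release
        return dnf(('and', phi[1], phi[2])) | dnf(('and', phi[2], ('X', phi)))
    raise ValueError(op)

def ef(lam):
    """EF(lam) as CF-sets of next-state formulas, via a reachability fixpoint."""
    seen, todo = set(), [cf(lam)]
    while todo:
        s = todo.pop()
        if s in seen:
            continue
        seen.add(s)
        phi = TRUE                                 # conjoin all conjuncts in s
        for c in s:
            phi = ('and', phi, c)
        todo.extend(nxt for _, nxt in dnf(phi))
    return seen

a, b = ('lit', 'a', True), ('lit', 'b', True)
clauses = dnf(('U', a, b))   # the two clauses b /\ X(True) and a /\ X(a U b)
```

For $aUb$ this yields exactly the two clauses $b\wedge X(\mathsf{True})$ and $a\wedge X(aUb)$, and $EF(aUb)=\{aUb,\mathsf{True}\}$, consistent with the bound of Theorem~\ref{thm:expand:bounded}.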
In the remainder of the paper, we fix $\lambda$ as the input LTL formula. All formulas being considered vary over the set $EF(\lambda)$, $AP$ denotes the set of all literals appearing in $\lambda$, and $\Sigma =2^{AP}$. \subsection{Transition Systems for LTL Formulas} We first extend formula expansion to letters in $\Sigma$: \begin{definition}\label{def:expandset} For $\omega\in \Sigma$ and a propositional formula $\alpha$, $\omega\models \alpha$ is defined in the standard way: if $\alpha$ is a literal, $\omega\models \alpha$ iff $\alpha\in \omega$; $\omega\models \alpha_1\wedge\alpha_2$ iff $\omega\models\alpha_1\wedge\omega\models\alpha_2$; and $\omega\models \alpha_1\vee\alpha_2$ iff $\omega\models\alpha_1\vee\omega\models\alpha_2$. We write $\varphi\tran{\omega}\psi$ if $\varphi\tran{\alpha}\psi$ and $\omega\models\alpha$. For a word $\eta=\omega_0\omega_1\ldots\omega_k$, we write $\varphi\tran{\eta}\psi$ iff $\varphi\tran{\omega_0}\psi_1\tran{\omega_1}\psi_2\tran{\omega_2}\ldots\psi_{k+1}=\psi$. For a run $\xi\in\Sigma^\omega$, we write $\varphi\tran{\xi}\varphi$ iff $\xi$ can be written as $\xi=\eta_0\eta_1\eta_2\ldots$ such that each $\eta_i$ is a finite sequence and $\varphi\tran{\eta_i}\varphi$ for all $i\ge 0$. \end{definition} Below we provide a few interesting properties derived from our DNF normal forms. \begin{lemma}\label{lemma:dnf} Let $\xi$ be a run and $\lambda$ a formula. Then, for all $n\ge 1$, $\xi\vDash\lambda\Leftrightarrow\exists\varphi\cdot\lambda\tran{\xi^n}\varphi\wedge\xi_n\vDash\varphi$. \end{lemma} Essentially, $\xi\models\lambda$ is equivalent to the fact that we can reach a formula $\varphi$ along the prefix $\xi^n$ such that the suffix $\xi_n$ satisfies $\varphi$.
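As a concrete illustration of Lemma~\ref{lemma:dnf}, consider the (hypothetical) formula $\lambda=aUb$ and the run $\xi=\{a\}\{a\}\{b\}\{b\}\ldots$. Since $DNF(aUb)=\{b\wedge X(\mathsf{True}),\ a\wedge X(aUb)\}$, we obtain the expansion
\[
aUb\tran{\{a\}}aUb\tran{\{a\}}aUb\tran{\{b\}}\mathsf{True}.
\]
Taking $n=2$, we have $\lambda\tran{\xi^2}aUb$, and the suffix $\xi_2=\{b\}\{b\}\ldots$ satisfies $aUb$, as the lemma requires.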
The following corollary is a direct consequence of Lemma \ref{lemma:dnf} and the fact that we have only finitely many formulas in $EF(\lambda)$: \begin{corollary}\label{coro:expand:existcycle} If $\xi\vDash\lambda$, then there exist $n\ge 1$ and $\varphi$ such that $\lambda\tran{\xi^n}\varphi\wedge\xi_n\vDash\varphi \wedge \varphi\tran{\xi_n}\varphi$. Conversely, if $\lambda\tran{\xi^n}\varphi\wedge\xi_n\vDash\varphi \wedge \varphi\tran{\xi_n}\varphi$, then $\xi\vDash\lambda$. \end{corollary} This corollary gives the hint that after a finite prefix we can focus on whether the suffix satisfies the \emph{looping formula} $\varphi$, i.e., a formula $\varphi$ with $\varphi\hookrightarrow\varphi$. From Definition~\ref{def:expand} and the expansion rules for LTL formulas, we have the following corollary: \begin{corollary}\label{coro:expand:cycle} If $\lambda\hookrightarrow\lambda$ holds and $\lambda\neq\mathsf{True}$, then there is at least one \textit{Until} or \textit{Release} formula in $CF(\lambda)$. \end{corollary} As described previously, the elements in $EF(\lambda)$ and their corresponding DNF normal forms naturally induce a labelled transition system, which can be defined as follows: \begin{definition}[LTS for $\lambda$] The labelled transition system $TS_{\lambda}$ generated from the formula $\lambda$ is a tuple $\langle \Sigma,S, \delta, S_0\rangle$, where $\Sigma = 2^{AP}$, $S = EF(\lambda)$, $S_0=\{\lambda\}$ and $\delta$ is defined as follows: $\psi\in\delta(\varphi, \omega)$ iff $\varphi\tran{\omega}\psi$ holds, where $\varphi,\psi\in EF(\lambda)$ and $\omega\in \Sigma$. \end{definition} \subsection{B\"uchi automata for Release/Until-free Formulas} The following lemma is a special instance of our central theorem (Theorem~\ref{thm:central}). It states properties of accepting runs with respect to Release/Until-free formulas: \begin{lemma}\label{lem:releasefree} \begin{enumerate} \item Assume $\lambda$ is $Release$-free.
Then, $\xi\vDash\lambda\Leftrightarrow\exists n\cdot\lambda\tran{\xi^n}\mathsf{True}$. \item Assume $\lambda$ is $Until$-free. Then $\xi\vDash\lambda\Leftrightarrow\exists n,\varphi\cdot\lambda\tran{\xi^n}\varphi\wedge\varphi\tran{\xi_n}\varphi$. \end{enumerate} \end{lemma} Essentially, if $\lambda$ is Release-free, we will reach $\mathsf{True}$ after finitely many steps; if $\lambda$ is Until-free, we will reach a looping formula after finitely many steps. The B\"uchi automaton for Release-free or Until-free formulas is directly obtained by equipping the LTS with the set of accepting states: \begin{definition}[$\mathcal{A}_\lambda$ for Release/Until-free formulas]\label{def:rfreeautomaton} For a Release/Until-free formula $\lambda$, we define the B\"uchi automaton $\mathcal{A}_\lambda=(S, \Sigma, \delta , S_0, F)$ where $TS_{\lambda}=\langle \Sigma, S, \delta, S_0\rangle$. The set $F$ is defined by: $F=\{\mathsf{True}\}$ if $\lambda$ is Release-free, while $F=S$ if $\lambda$ is Until-free. \end{definition} Notably, $\mathsf{True}$ is the only accepting state of $\mathcal{A}_\lambda$ when $\lambda$ is Release-free, while all states are accepting if $\lambda$ is Until-free. \begin{theorem}[Correctness and Complexity]\label{bound} Assume $\lambda$ is $Until$-free or $Release$-free. Then, for any sequence $\xi\in \Sigma^\omega$, it holds that $\xi\vDash \lambda$ iff $\xi$ is accepted by $\mathcal{A}_{\lambda}$. Moreover, $\mathcal{A}_{\lambda}$ has at most $2^{n+1}$ states, where $n$ is the number of subformulas in $\lambda$.
\end{theorem} \begin{proof} The correctness is straightforward by Lemma~\ref{lem:releasefree}: 1) if $\lambda$ is Release-free, then a run $\xi$ of $\mathcal{A}_{\lambda}$ visits the $\mathsf{True}$-state\footnote{In this paper we use $\varphi$-state to denote the state representing the formula $\varphi$.} infinitely often iff it satisfies $\exists n\geq 0\cdot\lambda\tran{\xi^n}\mathsf{True}$, that is, $\xi\vDash\lambda$; 2) if $\lambda$ is Until-free, then $\xi\vDash\lambda$ iff $\exists n, \varphi\cdot\lambda\tran{\xi^n}\varphi\wedge\varphi\tran{\xi_n}\varphi$, in which case the corresponding run visits the $\varphi$-state infinitely often and is thus accepted by $\mathcal{A}_{\lambda}$ according to the construction. The upper bound is a direct consequence of Theorem \ref{thm:expand:bounded}. \end{proof} \subsection{Central Theorem for General Formulas} In the previous section we constructed the B\"uchi automaton for Release-free or Until-free formulas by equipping the defined LTS with appropriate accepting states. For general formulas, however, this is more involved. For instance, consider the LTS of the formula $\varphi=G(bUc\wedge dUe)$ in our running example: there are infinitely many runs starting from the initial state $s_1$, but which of them should be accepting? Indeed, it is not obvious how to identify the set of accepting states. In this section we present our central theorem for general formulas, aiming at identifying the accepting runs. \begin{figure} \caption{A snapshot illustrating the relation $\xi\models\lambda$} \label{fig:central_theorem} \end{figure} Assume the run $\xi=\omega_0\omega_1\ldots$ satisfies the formula $\lambda$. We refer to $\lambda(=\varphi_0)\tran{\alpha_0}\varphi_1\tran{\alpha_1}\varphi_2\ldots$ as an expansion path from $\lambda$, which corresponds to a path in the LTS $TS_\lambda$, but labelled with propositional formulas.
Obviously, $\xi\models\lambda$ implies that there exists an expansion path in $TS_\lambda$ such that $\omega_i\models\alpha_i$ for all $i\ge 0$. As the set $EF(\lambda)$ is finite, we can find a looping formula $\varphi=\varphi_i$ that occurs \emph{infinitely often} along this expansion path. Accordingly, we can \emph{partition} the run $\xi$ into sequences $\xi=\eta_0\eta_1\ldots$ such that each finite sequence $\eta_i$ is consistent with respect to one loop $\varphi \hookrightarrow \varphi$ along the expansion path. This is illustrated in Figure \ref{fig:central_theorem}. The definition below formalizes the notion of consistency for finite sequences: \begin{definition}\label{def:finitestepsat} Let $\eta=\omega_0\omega_1\ldots\omega_n$ ($n\geq 0$) be a finite sequence. Then, we say that $\eta$ satisfies the LTL formula $\varphi$, denoted by $\eta\models _f \varphi$, if the following conditions are satisfied: \begin{itemize} \item there exists $\varphi_0=\varphi\tran{\alpha_0}\varphi_1\tran{\alpha_1}\ldots\tran{\alpha_{n}}\varphi_{n+1} =\psi$ such that $\omega_i\models\alpha_i$ for $0\le i \le n$, and with $S:=\bigcup_{0\leq j\leq n} CF(\alpha_j)$, it holds \begin{enumerate} \item if $\varphi$ is a literal then $\varphi\in S$ holds; \item if $\varphi$ is $\varphi_1U\varphi_2$ or $\varphi_1R\varphi_2$ then $S\models_f\varphi_2$ holds; \item if $\varphi$ is $\varphi_1\wedge \varphi_2$ then $S\models_f\varphi_1 \wedge S\models_f\varphi_2$ holds; \item if $\varphi$ is $\varphi_1\vee \varphi_2$ then $S\models_f\varphi_1 \vee S\models_f\varphi_2$ holds; \item if $\varphi$ is $X \varphi_2$ then $S\models_f\varphi_2$ holds; \end{enumerate} \end{itemize} \end{definition} \iffalse and $\varphi$, we can decide if this sequence contributes for an accepting run of $\varphi$. First there exists the formula $\psi$ such that $\varphi\tran{\eta}\psi$ holds (1).
If $\mid\eta\mid > 1$ then $\eta$ will be flattened to a property set $S = \bigcup_{0\leq j\leq n} CF(\alpha_j)$, in which for each $\alpha_j$ it meets $\exists \alpha_j\wedge X\varphi_{j+1}\in DNF(\varphi_j)$ and $\omega_j\supseteq CF(\alpha_j)$ (3). When $\mid\eta\mid=0$, $\eta\models_f\varphi$ is defined over the formula $\varphi$. We note in the definition the set $S$ can be a special finite sequence with the length of 1. \fi This predicate specifies whether the given finite sequence $\eta$ is consistent with respect to the finite expansion $\varphi_0=\varphi\tran{\alpha_0}\varphi_1\tran{\alpha_1}\ldots\tran{\alpha_{n}}\varphi_{n+1} =\psi$. The condition $\omega_i\models\alpha_i$ requires that the finite sequence $\eta$ be consistent with the labels along the finite expansion from $\varphi_0$. The rules for literals and Boolean connectives are intuitive. For the Until operator $\varphi_1 U \varphi_2$, the predicate is defined recursively via $S\models_f\varphi_2$: for the Until subformula to be satisfied, we must make sure that $\varphi_2$ holds under $S$. Similarly, for the Release operator $\varphi_1 R\varphi_2$, we know that $\varphi_1\wedge \varphi_2$ or $\varphi_2$ plays the key role in an accepting run of $\varphi_1 R \varphi_2$. Because $\varphi_1\wedge \varphi_2$ implies $\varphi_2$, and with rule (4) in the definition, we have $S\models_f\varphi_1 R\varphi_2 \equiv S\models_f\varphi_2$. Assume $\varphi=X\varphi_2$. As $CF(\mathsf{True})$ is defined as $\emptyset$, we have $\eta\models_f \varphi$ iff $\eta'\models_f\varphi_2$ with $\eta'=\omega_1\omega_2\ldots\omega_n$. The predicate $\models_f$ characterizes whether the prefix of an accepting run contributes to the satisfiability of $\lambda$. The idea comes from Corollary~\ref{coro:expand:existcycle}: once $\varphi$ is expanded from itself infinitely often along a run $\xi$ with $\xi\models\varphi$, there must be some common feature each time $\varphi$ loops back to itself.
This common feature is what we defined via $\models_f$. In our running example, consider the finite sequence $\eta=\{b,d\}\{b,d\}\{c,e\}$ corresponding to the path $s_1s_4s_4s_1$: according to the definition, $\eta\models_f\varphi_1$ holds. For $\eta=\{b,d\}\{b,d\}\{b,d\}$, however, $\eta\not\models_f\varphi_1$. With the notation $\models_f$, we study below properties of the looping formulas, which will lead to our \textit{central theorem}. \begin{lemma}[Soundness]\label{lemma:finitesat:infsat} Given a looping formula $\varphi$ and an infinite word $\xi$, let $\xi =\eta_1\eta_2\ldots$. If $\forall i\geq 1\cdot \varphi\tran{\eta_i}\varphi\wedge \eta_i\models_f\varphi$, then $\xi\vDash \varphi$. \end{lemma} The soundness property of the looping formula says that if there exists a partitioning $\xi = \eta_1\eta_2\ldots$ such that $\varphi$ expands to itself by each $\eta_i$ and $\eta_i\models_f\varphi$ holds, then $\xi\models\varphi$. \begin{lemma}[Completeness]\label{lemma:completeness} Given a looping formula $\varphi$ and an infinite word $\xi$, if $\varphi\tran{\xi}\varphi$ and $\xi\vDash\varphi$ hold, then there exists a partitioning $\eta_1\eta_2\ldots$ of $\xi$, i.e. $\xi=\eta_1\eta_2\ldots$, such that for all $i\geq 1$, $\varphi\tran{\eta_i}\varphi\wedge \eta_i\models_f\varphi$ holds. \end{lemma} The completeness property of the looping formula states the other direction: if $\varphi\tran{\xi}\varphi$ as well as $\xi\models\varphi$, we can find a partitioning $\eta_1\eta_2\ldots$ that makes $\varphi$ expand to itself by each $\eta_i$, with $\eta_i\models_f\varphi$.
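To spell out the running-example claim above (writing $\varphi_1$ for the formula at state $s_1$, which unfolds from $G(bUc\wedge dUe)=\mathsf{False}\,R\,(bUc\wedge dUe)$), unfold Definition~\ref{def:finitestepsat} for $\eta=\{b,d\}\{b,d\}\{c,e\}$: the expansion $\varphi_1\tran{b\wedge d}\varphi_4\tran{b\wedge d}\varphi_4\tran{c\wedge e}\varphi_1$ collects
\[
S=CF(b\wedge d)\cup CF(b\wedge d)\cup CF(c\wedge e)=\{b,c,d,e\},
\]
and by rule (2) for the outer \textit{Release} operator, rule (3) for the conjunction, rule (2) for the two \textit{Until} subformulas and rule (1) for the literals, $S\models_f\varphi_1$ reduces to $c\in S\wedge e\in S$, which holds. For $\eta=\{b,d\}\{b,d\}\{b,d\}$ the collected set is $\{b,d\}$, which contains neither $c$ nor $e$, so $\eta\not\models_f\varphi_1$.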
Combining Lemma~\ref{lemma:finitesat:infsat}, Lemma~\ref{lemma:completeness} and Corollary~\ref{coro:expand:existcycle}, we have our central theorem: \begin{theorem}[Central Theorem]\label{thm:central} Given a formula $\lambda$ and an infinite word $\xi$, we have \begin{align*} \xi\vDash \lambda \Leftrightarrow \exists \varphi, n\cdot \lambda\tran{\xi^n}\varphi\wedge \exists \xi_n = \eta_1\eta_2\ldots \cdot \forall i\geq 1\cdot \varphi\tran{\eta_i}\varphi\wedge \eta_i\models_f\varphi \end{align*} \iffalse \begin{enumerate} \item either $\exists n\geq 0\cdot \lambda\tran{\xi^n}\mathsf{True}$, \item or there exists $\lambda\tran{\xi^n}\psi$ such that $\psi\tran{\xi_n}\psi$ and : $\exists \xi_n=\eta_1\eta_2\eta_3\ldots.\forall i\ge 1. \psi\tran{\eta_i}\psi\wedge satOnce(S(\eta_i),\psi)$. \end{enumerate} \fi \end{theorem} The central theorem states that, given a formula $\lambda$, we can always expand it to a looping formula which satisfies the soundness and completeness properties. Reconsider Figure~\ref{fig:central_theorem}: the formula $\lambda$ expands to the looping formula $\varphi$ by $\xi^n$, and $\xi_n$ can be partitioned into sequences $\eta_1\eta_2\ldots$. The loops from $\varphi$ correspond to these finite sequences $\eta_i$ in the sense that $\eta_i\models_f\varphi$. \iffalse\lz{delete} Remember that we attach each expanded formula from $\lambda$ (including $\lambda$ itself) with an obligation set, we can trace each expanded formula to decide wether the accumulated propositions on the transitions make its obligation set be fulfilled. Based on the central theorem, if all formulas' obligation sets are fulfilled in the loop, we can tell that the word is accepted. For instance, in our running example, we look at the loop composed by states $s_1$ and $s_2$. We know that $\varphi_1$ and $\varphi_2$ have the same obligation set $\{\{c, e\}\}$, and the propositions on two transitions are $\{b,e\}$ and $\{c,d\}$ or $\{c,e\}$.
It is easy to see that $\{b,e\}\cup \{c,d\} $ or $\{b,e\}\cup \{c,e\} $ can fulfill the obligation set $\{\{c, e\}\}$, which means that any word that infinitely visits $s_1$ and $s_2$ can be accepted by the target formula. This observation inspires the construction of B\"uchi automata introduced in the following. The obligation powerset plays an important role in the automata construction. In the next section, we will see each state is composed of an expandable formula with a \textit{process set}, that will be reset empty if the process set contains the full obligations of the expandable formula. \fi \subsection{B\"uchi automata for General Formulas}\label{sec:automationgeneration} Our central theorem sheds light on the correspondence between the accepting runs and the expansion paths from $\lambda$. However, how can we guarantee the predicate $\models_f$ for looping formulas in the theorem? We need one last ingredient for our automaton construction: we extract the \emph{obligation sets} from LTL formulas, which will enable us to characterize $\models_f$. \begin{definition}\label{def:obligationset} Given a formula $\varphi$, we define its obligation set $OS_{\varphi}$ as follows: \begin{enumerate} \item If $\varphi=p$, $OS_{\varphi}=\{ \{p\}\}$; \item If $\varphi=X\psi$, $OS_{\varphi}=OS_{\psi}$; \item If $\varphi=\psi_1\vee\psi_2$, $OS_{\varphi}=OS_{\psi_1}\cup OS_{\psi_2}$; \item If $\varphi=\psi_1\wedge\psi_2$, $OS_{\varphi}=\{S_1\cup S_2 \mid S_1\in OS_{\psi_1}\wedge S_2\in OS_{\psi_2}\}$; \item If $\varphi=\psi_1 U\psi_2$ or $\psi_1 R\psi_2$, $OS_{\varphi}=OS_{\psi_2}$; \end{enumerate} Every element $O\in OS_{\varphi}$ is called an obligation of $\varphi$. \end{definition} The obligation set collects all obligations (its elements) that the given formula is supposed to fulfill. Intuitively, a run $\xi$ satisfies a formula $\varphi$ if $\xi$ can eliminate the obligations of $\varphi$.
Take $G(a R b)$ as an example: the run $(\{b\})^{\omega}$ satisfies it, and this run fulfills the obligation $\{b\}$ infinitely often. Notice the similarity between the definition of the obligation set and the predicate $\models_f$. For instance, the obligation set of ${\varphi_1 R \varphi_2}$ is the obligation set of $\varphi_2$, which mirrors the corresponding rule in the definition of $\models_f$. The interesting rule is the conjunctive one. An obligation set $OS_{\varphi}$ may contain more than one element; however, from the view of satisfiability, if one obligation in $OS_{\varphi}$ is satisfied, we can say that the obligations of $\varphi$ are fulfilled. This view leads to the definition of the conjunctive rule. For $\psi_1\wedge\psi_2$, we need to fulfill the obligations from both $\psi_1$ and $\psi_2$, which means we have to trace all possible unions of the elements of $OS_{\psi_1}$ and $OS_{\psi_2}$. For instance, the obligation set of $G(a U b \wedge c U (d\vee e))$ is $\{\{b, d\}, \{b, e\}\}$. The following lemma gives the relationship between $\models_f$ and the \textit{obligation set}. \begin{lemma}\label{lemma:obligaionandsatonce} For all $O\in OS_{\varphi}$, it holds that $O\models_f\varphi$. Conversely, $S\models_f\varphi$ implies that $\exists O\in OS_{\varphi}\cdot O\subseteq S$. \end{lemma} \iffalse The first part of Lemma~\ref{lemma:obligaionandsatonce} says that any obligation set $O$ in $OS_\varphi$ can make predicate $satOnce(O, \varphi)$ hold. And the second part points out the other direction, i.e., if some set $S$ make predicate $S\models\varphi$ hold for $\varphi$, we can find an obligation in $OS_\varphi$ that is a subset of $S$. These two lemmas can be proved by the induction on the structure of LTL formulas. \fi For our input formula $\lambda$, we now discuss how to construct the B\"uchi automaton $\mathcal{A}_\lambda$. We first describe the states of the automaton.
A state consists of a formula $\varphi$ and a \emph{process set} that keeps track of the properties that have been satisfied so far. Formally: \begin{definition}[states of the automaton for $\lambda$] A state is a tuple $\langle\varphi,P\rangle$ where $\varphi$ is a formula from $EF(\lambda)$, and $P\subseteq AP$ is a \textit{process set}. \end{definition} Refer again to Figure \ref{fig:central_theorem}: while reading the input finite sequence $\eta_1$, the process set $P_i$ collects properties from $AP$; it is used to keep track of whether all elements of an obligation have been met upon returning to a $\varphi$-state. If $P_i=\emptyset$, we have successfully returned to an accepting state. Now we have all the ingredients for constructing our B\"uchi automaton $\mathcal{A}_\lambda$: \begin{definition}[B\"uchi Automaton $\mathcal{A}_\lambda$] The B\"uchi automaton for the formula $\lambda$ is defined as $\mathcal{A}_{\lambda}=(\Sigma, S, \delta, S_0, \mathcal{F})$, where $\Sigma = 2^{AP}$ and: \begin{itemize} \item $S=\{\langle\varphi,P\rangle\mid \varphi\in EF(\lambda)\}$ is the set of states; \item $S_0=\{\langle \lambda,\emptyset\rangle\}$ is the set of initial states; \item $\mathcal{F} =\{\langle\varphi,\emptyset\rangle\mid\varphi\in EF(\lambda)\}$ is the set of accepting states; \item Consider states $s_1=\langle\varphi_1,P_1\rangle$ and $s_2=\langle\varphi_2,P_2\rangle$, and let $\omega\in \Sigma$. Then, $s_2\in \delta(s_1, \omega)$ iff there exists $\varphi_1\tran{\alpha}\varphi_2$ with $\omega\models\alpha$ such that the corresponding $P_2$ is updated by: \begin{enumerate} \item $P_2 = \emptyset$ if $\exists O\in OS_{\varphi_2}\cdot O\subseteq P_1\cup CF(\alpha)$, \item $P_2 = P_1 \cup CF(\alpha)$ otherwise. \end{enumerate} \end{itemize} \end{definition} The transition is determined by the expansion relation $\varphi_1\tran{\alpha}\varphi_2$ such that $\omega\models\alpha$.
The process set $P_2$ is updated to $P_1\cup CF(\alpha)$ unless there exists an obligation $O\in OS_{\varphi_2}$ such that $P_1\cup CF(\alpha)\supseteq O$; in that case $P_2$ is reset to $\emptyset$ and the corresponding state is recognized as an accepting one. \iffalse \begin{example} Consider another formula $\varphi = G(bRc\vee dUe)$. One can get $DNF(\varphi) = \{b\wedge c\wedge X \varphi , c\wedge X\varphi_1, e\wedge X\varphi , d\wedge X\varphi_2\}$, $DNF(\varphi_1)=\{b\wedge c\wedge X \varphi , c\wedge X\varphi_1\}$ and $DNF(\varphi_2)=\{e\wedge X\varphi , d\wedge X\varphi_2\}$, where $\varphi_1=bRc\wedge \varphi$ and $\varphi_2=dUe\wedge \varphi$. Then according to \textbf{Definition}~\ref{def:proofobligation} we know $PO_{\varphi}=\{po=\langle G(bRc\vee dUe),\{\{b,c\},\{e\}\}\rangle, \langle bRc\wedge G(bRc\vee dUe), \{\{c\}\}\rangle\}$. And according to \textbf{Definition}~\ref{attachment}, we can obtain $PO_{\varphi}=PO_{\varphi_2}=\{po\}$, $PO_{\varphi_1}= PO_{\varphi}$. Finally we can generate the B\"uchi automaton shown in Fig.~\ref{fig:buchiandmuller} by our construction. \end{example} \begin{figure} \caption{\label{fig:buchiandmuller} \label{fig:buchiandmuller} \end{figure} \fi Now we state the correctness of our construction: \begin{theorem}[Correctness of Automata Generation]\label{thm:correct} Let $\lambda$ be the input formula. Then, for any sequence $\xi\in \Sigma^\omega$, it holds that $\xi\vDash \lambda$ iff $\xi$ is accepted by $\mathcal{A}_{\lambda}$. \end{theorem} The correctness follows mainly from the fact that our construction strictly adheres to our central theorem (Theorem \ref{thm:central}). We note that two very simple optimizations can be identified for our construction: \begin{itemize} \item If two states have the same DNF normal form and the same process set $P$, they are identical.
Precisely, we merge states $s_1=\langle \varphi_1, P_1\rangle$ and $s_2=\langle \varphi_2, P_2\rangle$ if $DNF(\varphi_1)=DNF(\varphi_2)$ and $P_1=P_2$; \item The elements of the process set $P$ can be restricted to those atomic propositions appearing in $OS_{\varphi}$ (recall that $\varphi\in EF(\lambda)$): one can observe directly that only those properties are used for checking the \textit{obligation} conditions, so all others can be omitted from the process set $P$. \end{itemize} We can now explain a final detail of our running example: \begin{example} In our running example, state $s_1$ is the accepting state of the automaton. It should be mentioned that the state $s_2$ = $\langle\varphi_2,\{e\}\rangle$ originally has an edge labeled $c\wedge d$ to the state $\langle\varphi_3,\emptyset\rangle$ according to our construction, which is a new state. However, this state is equivalent to $s_1=\langle \varphi_1,\emptyset\rangle$, as $\varphi_1$ and $\varphi_3$ have the same DNF normal form, so these two states are merged. The same happens from state $s_3$ to state $s_1$ with the edge labeled $b\wedge e$, from state $s_2$ to state $s_2$ with the edge labeled $b\wedge d$, and so on. After merging these states, we obtain the automaton depicted in Figure \ref{fig:buechi}. \end{example} \begin{theorem}[Complexity]\label{generalbound} Let $\lambda$ be the input formula. Then the B\"uchi automaton $\mathcal{A}_{\lambda}$ has at most $2^{2n+1}$ states, where $n$ is the number of subformulas in $\lambda$. \end{theorem} The number of states is bounded by $2^{n+1}\cdot 2^{|AP|} \le 2^{2n+1}$. Recall that in the construction $AP$ is the set of atomic propositions appearing in $\lambda$; thus $|AP|$ is much smaller than $n$ in general. We remark that the first part, $2^{n+1}$, is much smaller in practice due to equivalent DNF representations.
Indeed, it can be reduced to $2^{dnf(\lambda)+1}$, where $dnf(\lambda)$ denotes the number of equivalence classes of $EF(\lambda)$ induced by equivalent DNF representations. In our running example, all of the formulas have the same DNF normal form, thus this part is equal to $2^{1+1}=4$. Likewise, the second part, $2^{|AP|}$, can be further reduced to the set of atomic propositions that appear in the obligation sets: in our running example this is $|\{c,e\}|$. \section{Discussion}\label{discussion} In this section, we discuss the relationship and differences between our proposed approach and the tableau construction. Generally speaking, our approach is essentially a tableau one that is based on the expansion laws of the $Until$ and $Release$ operators. The interesting aspect of our approach is the discovery of a special normal form with its DNF-based labeled transition system, which is closely related to the B\"uchi automaton under construction. The tableau approach explicitly expands the formula recursively based on the semantics of LTL formulas, while the nodes of the potential automaton are split until no new node is generated. In contrast, our approach first studies the LTL normal forms to discover the obligations that have to be fulfilled for the automaton to be generated, and then presents a simple mapping from LTL formulas to B\"uchi automata. The insight behind our approach is adopting a different view on the accepting conditions. The tableau approach focuses on the $Until$-operator. For instance, to decide the accepting states, the tableau approach needs to trace all the $Until$-subformulas and record the ``eventuality'' of $\psi$ in $\varphi U \psi$, which leads to the introduction of \textit{Generalized B\"{u}chi Automata} (GBA) in the tableau approach. Our approach, however, focuses on the \textit{looping formulas}, which induce the potential accepting states.
Intuitively, an infinite sequence (word) satisfies the formula $\lambda$ iff $\lambda$ can expand to some looping formula $\varphi$ which is satisfied by the suffix of the word obtained by removing the finite prefix that reaches $\varphi$. The key point of our approach is to introduce the static obligation set for each formula in the DNF-based labeled transition system, which indicates that an accepting run is supposed to infinitely fulfill one of the obligations in the obligation set. Thus, the obligation set gives the ``invariability'' for general formulas instead of the ``eventuality'' for $Until$-formulas. In our approach, we use a process set to record how much of an obligation of the formula $\varphi$ has been fulfilled since its last appearance. The accepting states can then be decided easily: a state is accepting when the process set fulfills one obligation in the obligation set of $\varphi$ (we reset the process set to empty afterwards). One can also note that our approach is on-the-fly: the successors of the current state can be obtained as soon as its DNF normal form is acquired. Most interestingly, our approach gives a more precise theoretical upper bound for the complexity of the translation compared to the tableau framework (Theorem \ref{generalbound}), and an even better bound is obtained when the formulas are restricted to Release-free or Until-free ones (Theorem \ref{bound}). \section{Conclusion}\label{conclusion} \iffalse To reduce the size of the generated automata, several optimizations are proposed~\cite{Somenzi00}~\cite{Etessami00}. But most of the optimization techniques are used on the generated automata, which can also be applied to the automata generated by our approach to reduce the size of it further. However, when the formula under construction does not contain $R$ operator, our construction can generate the automata as small as possible.
The reason is that when there is no $R$ operator in the formula, its proof obligation set is empty, which makes the algorithm avoid exploring more transitions. Another interesting feature of our algorithm is that it can also generate equivalent Muller automata for LTL formulas directly. Compared to previous works, it does not need to generate B\"{u}chi automata in advance. Although Jong~\cite{Jong91} also proposed a direct translation approach from LTL formulas to Muller automata, he has to introduce the automata operations such as complementation and intersection which makes the complexity of the algorithm higher. \fi In this paper, we propose the \textit{disjunctive-normal forms} for LTL formulas. Based on the DNF representation, we introduce the DNF-based labeled transition system for a formula $\lambda$ and study the relationship between the transition system and the B\"uchi automaton for $\lambda$. In this way, a simple, on-the-fly automata construction is achieved. When the formula under construction is Release/Until-free, our construction is very straightforward and leads to at most $2^{n+1}$ states. In the general case, our approach gives a more precise bound of $2^{2n+1}$ compared to the $2^{O(n)}$ bound of the tableau construction. \begin{thebibliography}{10} \bibitem{TACAS12} Tom{\'a}s Babiak, Mojm\'{\i}r Kret\'{\i}nsk{\'y}, Vojtech Reh{\'a}k and Jan Strejcek. \newblock {LTL to B{\"u}chi Automata Translation: Fast and More Deterministic.} \newblock In {\em TACAS}, pages 95--109, 2012. \bibitem{Daniele99} Marco Daniele, Fausto Giunchiglia, and Moshe~Y. Vardi. \newblock {Improved Automata Generation for Linear Temporal Logic}. \newblock In {\em CAV}, pages 249--260, 1999. \bibitem{Duret-Lutz04} Alexandre Duret-Lutz and Denis Poitrenaud. \newblock {SPOT: an extensible model checking library using transition-based generalized B\"uchi automata}. \newblock In {\em The IEEE Computer Society's 12th Annual International Symposium}, pages 76--83, 2004.
\bibitem{Etessami00} Kousha Etessami and Gerard~J. Holzmann. \newblock {Optimizing B\"{u}chi Automata}. \newblock In {\em CONCUR}, pages 153--167, 2000. \bibitem{Gastin01} Paul Gastin and Denis Oddoux. \newblock {Fast LTL to B\"{u}chi Automata Translation}. \newblock In {\em CAV}, pages 53--65, 2001. \bibitem{Gerth95} Rob Gerth, Doron Peled, Moshe~Y. Vardi, and Pierre Wolper. \newblock {Simple on-the-fly automatic verification of linear temporal logic}. \newblock In {\em PSTV}, pages 3--18, 1995. \bibitem{Giannakopoulou02} Dimitra Giannakopoulou and Flavio Lerda. \newblock {From States to Transitions: Improving Translation of LTL Formulae to B{\"u}chi Automata}. \newblock In {\em FORTE}, pages 308--326, 2002. \bibitem{Rozier07} Kristin~Y. Rozier and Moshe~Y. Vardi. \newblock {LTL satisfiability checking}. \newblock In {\em SPIN}, pages 149--167, 2007. \bibitem{Sebastiani03} Roberto Sebastiani and Stefano Tonetta. \newblock {"More Deterministic" vs. "Smaller" B\"uchi Automata for Efficient LTL Model Checking}. \newblock In {\em CHARME}, pages 126--140, 2003. \bibitem{Somenzi00} Fabio Somenzi and Roderick Bloem. \newblock {Efficient B\"{u}chi Automata from LTL Formulae}. \newblock In {\em CAV}, pages 248--263, 2000. \bibitem{Tauriainen02} Heikki Tauriainen and Keijo Heljanko. \newblock {Testing LTL formula translation into B{\"u}chi automata}. \newblock {\em STTT}, 4(1):57--70, 2002. \bibitem{Vardi96} Moshe~Y. Vardi. \newblock {An Automata-Theoretic Approach to Linear Temporal Logic}. \newblock In {\em Banff Higher Order Workshop}, pages 238--266, 1995. \bibitem{Vardi86} Moshe~Y. Vardi and Pierre Wolper. \newblock {An Automata-Theoretic Approach to Automatic Program Verification}. \newblock In {\em LICS}, pages 332--344, 1986. \end{thebibliography} \iffalse \appendix \section{Experiments}\label{experiment} We have implemented the tool \textit{Aalta} based on our proposed approach. In this section we compare our experimental results with those from on-the-shelf tools. 
There are several implementations for automata construction, including LTL2AUT~\cite{Daniele99}, LTL2BA~\cite{Gastin01}, Wring~\cite{Somenzi00}, and ltl2Buchi~\cite{Giannakopoulou02}. The ltl2Buchi tool has integrated the LTL2AUT tool in its implementation, and it also implements the optimization techniques proposed in~\cite{Etessami00}. Wring uses similar optimization techniques to those proposed in~\cite{Etessami00} as well, so here we choose to compare with LTL2BA and ltl2Buchi. We use the benchmarks from the Wring tool, which contain 4046 formulas in total (2023 formulas together with their negations). Moreover, we use the testing method in~\cite{Tauriainen02} to test the correctness of our tool. Briefly, our construction for $\varphi$ is correct if the product of the automaton for formula $\varphi$ from \textit{Aalta} and the automaton for $\neg \varphi$ from ltl2Buchi is empty (here we assume the implementation of ltl2Buchi is correct). More details are given in Section~\ref{test} below. The generated automata for all 4046 formulas pass this test. Our experiments are conducted on a 2.66GHz Intel Core i7 CPU running Windows 7 with 4GB RAM. \subsection{Testing the Tool}\label{test} Before the experiments, we first have to test the correctness of the \textit{Aalta} implementation. The testing approach is based on the emptiness check of the product of two complementary B\"{u}chi automata, as used in lbtt~\cite{Tauriainen02}. In our testing framework, we assume the results from ltl2Buchi are correct, and we test the results from \textit{Aalta} against those from ltl2Buchi. Given an input formula $\varphi$, we construct the corresponding automaton $\mathcal{A}_{a}$ by \textit{Aalta}, and construct the automaton $\mathcal{A}_{l}$ by ltl2Buchi for $\neg \varphi$ as well. If the product of these two automata is empty, i.e. $\mathcal{A}_{a}\otimes \mathcal{A}_{l}=\emptyset$, then we can conclude the results generated by \textit{Aalta} are correct.
Note that the situations where $\mathcal{A}_{a}=\emptyset$ or $\mathcal{A}_{l}=\emptyset$ must be treated specially, since $\mathcal{A}_{a}\otimes \mathcal{A}_{l}=\emptyset$ is always true in these two cases. So when $\mathcal{A}_{a}=\emptyset$ holds, we continue to check whether $\mathcal{A}_{l}=\emptyset$ holds, and symmetrically when $\mathcal{A}_{l}=\emptyset$ holds. We use the benchmarks from Wring~\cite{Somenzi00} to design the test suites, and finally all 4046 formulas in the benchmarks are passed by \textit{Aalta}. \subsection{Experimental Results} First we present the experiments on the selected 10 formulas in Table~\ref{result1}. Five of them do not contain the $R$ operator, and the remaining five formulas are often used in property descriptions for model checking. The LTL2BA tool uses the classical construction based on alternating automata, and the ltl2Buchi tool uses the tableau construction. Some optimization techniques are also integrated in these tools, such as strongly connected component simplification (s), bisimulation (b) and fair simulation (f). The second and third columns give the experimental results from LTL2BA and ltl2Buchi without optimizations respectively, while the fourth and fifth columns provide the optimized results from LTL2BA and ltl2Buchi. The last two columns show the size of the B\"{u}chi automata from our tool without optimizations (\textit{Aalta}) and with the optimization techniques in~\cite{Etessami00} (\textit{Aalta}+sbf). Note that ``s'' denotes the number of states of the corresponding automaton, and ``t'' the number of transitions.
\begin{table}[t] \centering \begin{tabular}{|l|l|l|l|l|l|l|} \hline Formulas & LTL2BA & ltl2Buchi & LTL2BA & ltl2Buchi & \textit{Aalta} & \textit{Aalta}\\ & & & \hspace*{4mm}+s & \hspace*{3mm}+sbf & & \hspace*{1mm}+sbf \\ \hline $aUb$ & 3s, 5t & 3s, 5t & 2s, 3t & 2s, 3t & 2s, 3t & 2s, 3t \\ \hline $aU(bUc)$ & 5s, 11t & 4s, 9t & 3s, 6t & 3s, 6t & 3s, 6t & 3s, 6t\\ \hline $(aUb)Uc$ & 6s, 17t & 5s, 14t & 6s, 17t & 4s, 10t & 4s, 10t & 4s, 10t\\ \hline $aUb\wedge cUd$ & 5s, 13t & 5s, 13t & 4s, 9t & 4s, 9t & 4s, 9t & 4s, 9t\\ \hline $aUb\vee cUd$ & 4s, 9t & 4s, 9t & 4s, 9t & 4s, 9t & 4s, 9t & 4s, 9t\\ \hline $G (p\rightarrow qUr)$ & 4s, 18t & 2s, 5t & 2s, 5t & 2s, 5t & 2s, 5t & 2s, 5t\\ \hline $G F a\wedge G F b\wedge G F c$ & 21s, 250t & 20s, 244t & 4s, 13t & 4s, 13t & 8s, 64t & 4s, 13t\\ \hline $\neg (p1U(p2U(p3U(p4))))$ & 9s, 51t& 8s, 43t & 4s, 10t & 4s, 10t& 4s, 10t & 4s, 10t\\ \hline $(pU(qUr))\vee (qU(rUp))$ & 8s, 21t & 6s, 17t & 6s, 17t & 5s, 12t & 5s, 12t & 5s, 12t\\ \hline $\neg (G F p0 \wedge G F p1)\rightarrow G (q\rightarrow F r)$ &12s, 70t & 11s, 52t & 6s, 19t & 6s, 19t & 7s, 28t & 5s, 14t \\ \hline \end{tabular} \caption{The experimental results for the comparison with other tools}\label{result1} \end{table} \begin{table}[t] \centering \begin{tabular}{|l|l|l|l|l|l|l|} \hline Number & A $<$ l & A $=$ l & A $>$ l & A $<$ l+sbf & A $=$ l+sbf & A $>$ l+sbf\\ \hline 2221 & 27.0$\%$ & 57.0 $\%$ & 4.0$\%$ & 5.1$\%$ & 71.0$\%$ & 9.0$\%$\\ \hline \end{tabular} \caption{Experimental results on benchmarks without $R$ operators }\label{result2} \end{table} For the first five formulas, \textit{Aalta} outperforms LTL2BA and ltl2Buchi without optimizations (column 2 and 3), and \textit{Aalta} still outperforms LTL2BA with optimizations. It achieves the same results as ltl2Buchi with optimizations does (column 4 and 5). The reason for the performance is due to the fact that these formulas do not contain $R$ operator. 
According to the proposed \emph{DNF-based } construction, if the formula under construction does not contain $R$ operator, it has no proof obligations, which makes the size of the generated automata as small as possible. More explicit results shown in the following will support the conjecture. \iffalse \begin{table}[t] \centering \begin{tabular}{|l|l|l|l|l|l|l|} \hline Number & A $<$ l & A $=$ l & A $>$ l & A +sbf $<$ l+sbf & A+sbf $=$ l+sbf & A+sbf $>$ l+sbf\\ \hline 2023 & 33.4$\%$ & 46.7 $\%$ & 10.0$\%$ &6.6$\%$ & 76.5$\%$ & 10.4$\%$\\ \hline 2023(NEG) & 31.3$\%$ & 47.0$\%$ & 11.5$\%$ & 5.9$\%$ & 75.1$\%$& 11.3$\%$\\ \hline \end{tabular} \caption{Experimental results on benchmarks }\label{result3} \end{table} \fi \begin{table}[t] \centering \begin{tabular}{|l|l|l|l|l|l|l|} \hline Number & A $<$ l & A $=$ l & A $>$ l & A +sbf $<$ l+sbf & A+sbf $=$ l+sbf & A+sbf $>$ l+sbf\\ \hline 2023 & 37.4$\%$ & 46.2 $\%$ & 6.9$\%$ & 10.2$\%$ & 73.7$\%$ & 8.1$\%$\\ \hline 2023(NEG) & 34.7$\%$ & 47.3$\%$ & 7.4$\%$ & 9.4$\%$ & 72.9$\%$& 8.7$\%$\\ \hline \end{tabular} \caption{Experimental results on benchmarks }\label{result3} \end{table} The next five formulas in the table are selected for the reason that they are often used in property description. Again, \textit{Aalta} is better than LTL2BA or ltl2Buchi without optimizations. Compared with the optimized versions of the two tools, our tool generates the automata of the same size for three formulas, and is worse for the other two formulas. For the formula $G F a\wedge G F b\wedge G F c$, the size of the generated automaton from LTL2BA or ltl2Buchi with optimizations can be reduced to be linear with the length of the formula. However, the size of automaton generated by \textit{Aalta} for this formula is exponential to the length of the formula, which is the same case with LTL2BA and ltl2Buchi when they do not optimize the results. 
Once the resulting automata from \textit{Aalta} are optimized by the same techniques as those in ltl2Buchi (+sbf), we see almost the same results for the formulas in the table from the two tools, as shown in the fifth and last columns. Moreover, for the last formula in the table, the result (5s, 14t) from \textit{Aalta} (+sbf) outperforms the one (6s, 19t) from ltl2Buchi (+sbf). Now in Table~\ref{result2} and Table~\ref{result3} we present the experimental results of the constructions on the benchmarks, compared with ltl2Buchi. To make the comparison more reasonable, we transform the generated automata from the two tools into more deterministic versions, as described in~\cite{Sebastiani03}, by enumerating all the combinations of transitions of the automata. This is mainly motivated by the fact that non-deterministic automata with the same numbers of transitions and states do not necessarily have the same performance for model checking~\cite{Etessami00}~\cite{Rozier07}. In the two tables the field $A<l$ lists the percentage of automata from \textit{Aalta} whose states and transitions are both smaller than those from ltl2Buchi; the other fields denote the analogous percentages. Note that the automata from \textit{Aalta} and ltl2Buchi do not always fall into the situations of Columns 2, 3 and 4 in Tables~\ref{result2} and~\ref{result3}, so the total percentage of the three columns need not be 100$\%$. First we select all 2221 formulas without $R$ subformulas from the 4046 benchmark formulas to confirm the conjecture that ``the \emph{DNF-based} construction generates B\"uchi automata that are as small as possible when the formulas do not contain any $R$ operators''.
As Table~\ref{result2} shows, when considering the formulas without any $R$ operators and when both \textit{Aalta} and ltl2Buchi use no optimizations (columns 2, 3, 4), 27.0$\%$ of the automata generated by \textit{Aalta} are smaller than those from ltl2Buchi and 57.0$\%$ are equal to those from ltl2Buchi, i.e. in total about 84$\%$ of the automata from \textit{Aalta} are as small as those from ltl2Buchi. Meanwhile, only about 4.0$\%$ of the results from \textit{Aalta} are bigger than those from ltl2Buchi, which does not affect the conclusion that the \emph{DNF-based} construction outperforms the original tableau construction. When comparing to the optimized results from ltl2Buchi (+sbf), 5.1$\%$ of the automata generated by \textit{Aalta} are smaller than those from ltl2Buchi (+sbf) and 71.0$\%$ are equal to those from ltl2Buchi (+sbf), i.e. in total 76.1$\%$ of the automata from \textit{Aalta} are as small as those from ltl2Buchi (+sbf), which is still a large advantage. Finally, we present the comparison results for all 4046 formulas between \textit{Aalta} and ltl2Buchi. Processing the 4046 formulas on the experimental machine takes about 3 minutes. For the 2023 formulas in the first row of Table~\ref{result3}, 37.4$\%$ of the B\"uchi automata generated by \textit{Aalta} are smaller than those generated by ltl2Buchi (the second column), while 46.2$\%$ of the generated automata from the two tools have the same size (the third column). Compared to ltl2Buchi (+sbf), 73.7$\%$ of the B\"uchi automata generated by \textit{Aalta} (+sbf) have the same size. Surprisingly, 10.2$\%$ of the automata generated by \textit{Aalta} (+sbf) are even better. The second formula set is generated by negating the formulas in the first set (second row), and the statistics are similar.
From this table we can see that our approach also performs better than the tableau construction when both use the same optimizations. \fi \appendix \section{Proofs} \subsection{Proof of Theorem \ref{thm:transform}} Let $\varphi$ be a formula $\varphi = \bigvee_{i\in I} \varphi_i$ such that the root operator of each $\varphi_i$ is not a disjunction; then we define the disjunctive formula set as $DF(\varphi):=\{\varphi_i \mid i\in I\}$. When the root operator of $\varphi$ is not a disjunction, $DF(\varphi)$ includes only $\varphi$ itself. \begin{proof} First, we can directly use the rules in Lemma~\ref{lemma:expansion} to generate an intermediate normal form for $\varphi$, whose format is $\bigvee_i (\alpha_i\wedge X \varphi_i)$, where $\alpha_i$ is a propositional formula and $\varphi_i$ is an LTL formula, without the constraints of Definition~\ref{def:dnf}. We denote the set of intermediate normal forms of the formula $\varphi$ by $DNF_1(\varphi)$. Second, we prove that any intermediate normal form can be transformed into the \textit{disjunctive-normal form}. Intuitively, for each $\alpha_i$ and $\varphi_i$, the corresponding $DF(\alpha_i)$ and $DF(\varphi_i)$ can be obtained trivially. Then we can get the final \emph{disjunctive-normal form} through the following two steps: \begin{enumerate} \item $DNF_2(\varphi)=\{\alpha_i\wedge X\psi\mid \alpha\wedge X\psi\in DNF_1(\varphi)\wedge\alpha_i\in DF(\alpha)\}$; \item $DNF(\varphi)=\{\alpha\wedge X\psi_i\mid\alpha\wedge X\psi\in DNF_2(\varphi)\wedge\psi_i\in DF(\psi)\}$. \end{enumerate} \end{proof} \subsection{Proof of Theorem \ref{thm:expand:bounded}} Let $n$ be the number of subformulas in $\lambda$. Moreover, let $cl(\lambda)$ be the set consisting of the subformulas of $\lambda$ together with $\mathsf{True}$, so obviously $|cl(\lambda)|=n+1$. Before the proof we first introduce two lemmas. \begin{lemma}\label{lemma:dnf:finite} Let $\alpha\wedge X\psi\in DNF(\varphi)$; then $CF(\psi)\subseteq cl(\varphi)$.
\end{lemma} \begin{proof} We prove it by structural induction over $\varphi$. \begin{itemize} \item Basic step: If $\varphi$ is a literal $p$, then since $p = p\wedge X\mathsf{True}$, obviously $CF(\mathsf{True})\subseteq cl(\varphi)$. \item Inductive step: If the formulas $\varphi_i$ ($i=1,2$) satisfy $\alpha\wedge X\psi\in DNF(\varphi_i) \Rightarrow CF(\psi)\subseteq cl(\varphi_i)$, then: \begin{enumerate} \item If $\varphi=\varphi_1\vee\varphi_2$, we know $cl(\varphi)=cl(\varphi_1)\cup cl(\varphi_2)\cup \{\varphi_1\vee\varphi_2\}$. According to Lemma~\ref{lemma:expansion}.5 we have $\alpha\wedge X\psi\in DNF(\varphi)\Rightarrow\alpha\wedge X\psi\in DNF(\varphi_1)\cup DNF(\varphi_2)$; then by the induction hypothesis we have $CF(\psi)\subseteq cl(\varphi_1)\cup cl(\varphi_2)$, so $CF(\psi)\subseteq cl(\varphi)$; \item If $\varphi=X\varphi_1$, we know $cl(\varphi)=cl(\varphi_1)\cup \{X\varphi_1\}$. According to Lemma~\ref{lemma:expansion}.2 we have $\alpha\wedge X\psi\in DNF(\varphi)\Rightarrow\psi =\varphi_1$, so $CF(\psi)\subseteq cl(\varphi_1)\subseteq cl(\varphi)$; \item If $\varphi=\varphi_1\wedge\varphi_2$, we know $cl(\varphi_1\wedge\varphi_2)=\{\varphi_1\wedge\varphi_2\}\cup cl(\varphi_1)\cup cl(\varphi_2)$. According to Lemma~\ref{lemma:expansion}.6 we know $\alpha\wedge X\psi\in DNF(\varphi_1\wedge\varphi_2)\Rightarrow\exists\alpha_1\wedge X\psi_1\in DNF(\varphi_1), \alpha_2\wedge X\psi_2\in DNF(\varphi_2)\cdot\alpha = \alpha_1\wedge\alpha_2\wedge\psi = \psi_1\wedge\psi_2$. Then by the induction hypothesis we have $CF(\psi_1)\subseteq cl(\varphi_1)$ and $CF(\psi_2)\subseteq cl(\varphi_2)$, so $CF(\psi)\subseteq cl(\varphi_1)\cup cl(\varphi_2)\subseteq cl(\varphi_1\wedge\varphi_2)$; \item If $\varphi=\varphi_1 U\varphi_2$, we know $cl(\varphi_1 U\varphi_2)=cl(\varphi_1)\cup cl(\varphi_2)\cup \{\varphi_1 U \varphi_2\}$.
According to Lemma~\ref{lemma:expansion}.3, if $\alpha\wedge X\psi\in DNF(\varphi_2)$ then $CF(\psi)\subseteq cl(\varphi_2)$ directly by the induction hypothesis; else if $\alpha\wedge X\psi\in \{\alpha\wedge X(\psi_1\wedge \varphi_1 U \varphi_2)\mid \alpha\wedge X\psi_1\in DNF(\varphi_1)\}$ then by the induction hypothesis we have $CF(\psi)=CF(\psi_1)\cup\{\varphi_1 U \varphi_2\}\subseteq cl(\varphi_1)\cup \{\varphi_1 U \varphi_2\}\subseteq cl(\varphi_1 U \varphi_2)$; \item If $\varphi=\varphi_1 R \varphi_2$, one can prove in a similar way that $\alpha\wedge X\psi\in DNF(\varphi)\Rightarrow CF(\psi)\subseteq cl(\varphi)$. \end{enumerate} \end{itemize} \end{proof} \begin{lemma}\label{lemma:expand:finite} Let $\psi \in EF(\varphi)$; then $CF(\psi)\subseteq cl(\varphi)$. \end{lemma} \begin{proof} We prove it by induction over the number of steps in which $\psi$ can be reached from $\varphi$. \begin{itemize} \item Basic step: If $\alpha\wedge X\psi\in DNF(\varphi)$, then according to Lemma~\ref{lemma:dnf:finite} we know $CF(\psi)\subseteq cl(\varphi)$. \item Inductive step: If $\exists\varphi\rightarrow\psi_1\rightarrow\psi_2\rightarrow\ldots\rightarrow\psi_k=\psi$ where $k\geq 1$ and $CF(\psi)\subseteq cl(\varphi)$ hold, then according to Lemma~\ref{lemma:dnf:finite} we know that for all $\nu\in CF(\psi)$ we have $\beta\wedge X\mu\in DNF(\nu)\Rightarrow CF(\mu)\subseteq cl(\nu)\subseteq cl(\varphi)$. Then according to Lemma~\ref{lemma:expansion}.6 we know $\forall \alpha\wedge X\psi' \in DNF(\psi)\cdot CF(\psi')\subseteq cl(\varphi)$ holds. That is, if $\psi$ can be reached from $\varphi$ in $k$ steps and $CF(\psi)\subseteq cl(\varphi)$ holds, then any $\psi'$ that can be reached from $\varphi$ in $k+1$ steps also satisfies $CF(\psi')\subseteq cl(\varphi)$. \end{itemize} \end{proof} Now we prove Theorem~\ref{thm:expand:bounded}. From Lemma~\ref{lemma:expand:finite} we know that for all $\psi\in EF(\lambda)$, if $\mu\in CF(\psi)$ then $\mu\in cl(\lambda)$.
So the number of elements in $CF(\psi)$ cannot exceed that of $cl(\lambda)$, i.e. $|CF(\psi)|\leq |cl(\lambda)|$. Thus $|EF(\lambda)|\leq 2^{|cl(\lambda)|}=2^{n+1}$. \subsection{Proof of Lemma \ref{lemma:obligaionandsatonce}} We first prove the first part of the lemma by induction over the formula $\varphi$. \begin{itemize} \item Basic step: If $\varphi = p$, then $OS_{\varphi} = \{\{p\}\}$, and $\{p\}\models_f p$ obviously holds. \item Inductive step: Suppose that for the formulas $\psi_i$ ($i = 1,2$), $\forall O\in OS_{\psi_i}\cdot O\models_f\psi_i$ holds. Then we have: \begin{enumerate} \item If $\varphi = X \psi_1$, then $OS_{\varphi} = OS_{\psi_1}$. For each $O$ in $OS_{\varphi}$, the predicate $O\models_f\varphi\equiv O\models_f \psi_1$ holds by definition, and since $OS_{\varphi} = OS_{\psi_1}$ we have $O\in OS_{\psi_1}$. Then by the induction hypothesis $O\models_f\psi_1$ holds, and thus $O\models_f\varphi$ holds. \item If $\varphi = \psi_1\vee\psi_2$, then $OS_{\varphi} = OS_{\psi_1}\cup OS_{\psi_2}$, so $\forall O\in OS_{\varphi}\cdot O\in OS_{\psi_1}\vee O\in OS_{\psi_2}$. Since $O\models_f\varphi \equiv O\models_f\psi_1\vee O\models_f\psi_2$, and by the induction hypothesis $O\models_f \psi_1$ holds when $O\in OS_{\psi_1}$ while $O\models_f\psi_2$ holds when $O\in OS_{\psi_2}$, it follows from $O\in OS_{\psi_1}\vee O\in OS_{\psi_2}$ that $O\models_f\varphi$ is true. \item If $\varphi = \psi_1\wedge\psi_2$, then $OS_{\varphi} = \{S_1\cup S_2\mid S_1\in OS_{\psi_1}\wedge S_2\in OS_{\psi_2}\}$. Then $\forall O\in OS_{\varphi}\exists S_1\in OS_{\psi_1}, S_2\in OS_{\psi_2}\cdot O = S_1\cup S_2$. By the induction hypothesis, $S_1\models_f \psi_1$ and $S_2\models_f \psi_2$ are true; thus $O\models_f\varphi\equiv S_1\cup S_2\models_f \psi_1\wedge S_1\cup S_2\models_f\psi_2$ holds. \item If $\varphi = \psi_1 U\psi_2$, then $OS_{\varphi} = OS_{\psi_2}$.
Since for each $O$ in $OS_{\varphi}$ we have $O\models_f\varphi\equiv O\models_f\psi_2$, and by the induction hypothesis $O\models_f\psi_2$ holds, $O\models_f\varphi$ also holds. The case $\varphi = \psi_1 R\psi_2$ can be proven similarly and is omitted here. \end{enumerate} \end{itemize} We then prove the second part of the lemma, also by induction over the formula $\varphi$. \begin{itemize} \item Basic step: If $\varphi = p$, then $OS_{\varphi} = \{\{p\}\}$, and $S\models_f p\Rightarrow p\in S$. So obviously $\exists O\in OS_{\varphi}\cdot O\subseteq S$. \item Inductive step: Suppose that for the formulas $\psi_i$ ($i = 1,2$), $S\models_f\psi_i\Rightarrow\exists O_i\in OS_{\psi_i}\cdot O_i\subseteq S$ holds. Then we have: \begin{enumerate} \item If $\varphi = X \psi_1$, then we know $OS_{\varphi} = OS_{\psi_1}$ and $S\models_f \varphi\equiv S\models_f\psi_1$. Since by the induction hypothesis $S\models_f\psi_1\Rightarrow\exists O\in OS_{\psi_1}\cdot O\subseteq S$, and $OS_{\psi_1}=OS_{\varphi}$, we have $O \in OS_{\varphi}$. Thus $S\models_f\varphi\Rightarrow\exists O\in OS_{\varphi}\cdot O\subseteq S$ holds. \item If $\varphi = \psi_1\vee\psi_2$, then we have $OS_{\varphi} = OS_{\psi_1}\cup OS_{\psi_2}$ and $S\models_f\varphi\equiv S\models_f\psi_1\vee S\models_f\psi_2$. By the induction hypothesis $S\models_f\psi_1\Rightarrow\exists O\in OS_{\psi_1}\cdot O\subseteq S$ and $S\models_f\psi_2\Rightarrow\exists O\in OS_{\psi_2}\cdot O\subseteq S$, so $S\models_f\varphi\Rightarrow\exists O\in OS_{\psi_1}\cup OS_{\psi_2}\cdot O\subseteq S$, in which $OS_{\psi_1}\cup OS_{\psi_2}$ is exactly $OS_{\varphi}$. Thus $S\models_f\varphi\Rightarrow\exists O\in OS_{\varphi}\cdot O\subseteq S$ holds. \item If $\varphi = \psi_1\wedge\psi_2$, then $OS_{\varphi} = \{S_1\cup S_2\mid S_1\in OS_{\psi_1}\wedge S_2\in OS_{\psi_2}\}$.
Since $S\models_f \varphi\equiv S\models_f \psi_1\wedge S\models_f\psi_2$, and by the induction hypothesis we have $S\models_f\psi_i\Rightarrow\exists O_i\in OS_{\psi_i}\cdot O_i\subseteq S$, where $i = 1, 2$, it follows that $S\models_f\varphi\Rightarrow\exists O = O_1\cup O_2\cdot O\subseteq S$. Obviously $O\in OS_{\varphi}$, so $S\models_f\varphi\Rightarrow\exists O\in OS_{\varphi}\cdot O\subseteq S$ holds. \item If $\varphi = \psi_1 U\psi_2$, then we know $OS_{\varphi} = OS_{\psi_2}$ and $S\models_f \varphi\equiv S\models_f\psi_2$. By the induction hypothesis $S\models_f \psi_2\Rightarrow\exists O\in OS_{\psi_2}\cdot O\subseteq S$, and since $OS_{\varphi}= OS_{\psi_2}$, $O$ is also in $OS_{\varphi}$. Thus $S\models_f\varphi\Rightarrow\exists O\in OS_{\varphi}\cdot O\subseteq S$ holds. The case $\varphi = \psi_1 R\psi_2$ can be proven similarly and is omitted here. \end{enumerate} \end{itemize} \subsection{Proof of Lemma \ref{lemma:finitesat:infsat}} Several auxiliary lemmas need to be introduced before we prove this lemma. \begin{lemma}\label{lemma:subformula:equiv} $\mu\in cl(\nu)\wedge\nu\in cl(\mu)\Leftrightarrow\mu=\nu$. \end{lemma} \begin{proof} This follows directly from the definition of $cl$. \end{proof} \begin{lemma}\label{lemma:formulacycle:exist} $\varphi\hookrightarrow\varphi \Rightarrow\exists\mu\in CF(\varphi)\cdot cl(\mu)\cap CF(\varphi) = \{\mu\}$. \end{lemma} \begin{proof} For each $\mu$ in $CF(\varphi)$ let $S_{\mu} = cl(\mu)\cap CF(\varphi)$; then clearly $S_\mu\supseteq \{\mu\}$. If $\forall\mu\in CF(\varphi)\cdot S_{\mu}\supset\{\mu\}$, then we know $\exists\mu_1\in S_{\mu}$ and $\mu_1\neq \mu$. Since $\mu_1$ is also in $CF(\varphi)$, according to the assumption $\exists\mu_2\in S_{\mu_1}$ and $\mu_2\neq \mu_1$. Moreover, according to Lemma~\ref{lemma:subformula:equiv} $\mu_2\neq\mu$ also holds. However, $\mu_2$ is also in $CF(\varphi)$ and has at least one subformula $\mu_3$ in $CF(\varphi)$ with $\mu_3\neq\mu_2$...
Repeating this argument indefinitely would force $CF(\varphi)$ to be an infinite set, which is obviously impossible. So the lemma is true. \end{proof} \begin{lemma}\label{lemma:formulacycle:forall} $\varphi\hookrightarrow\varphi\Rightarrow\forall\varphi\tran{\eta}\varphi\exists\mu\in CF(\varphi)\cdot(\mu\tran{\eta}\mathsf{True}\vee \mu\tran{\eta}\mu)$. \end{lemma} \begin{proof} According to Lemma~\ref{lemma:formulacycle:exist} we know $\exists\mu\in CF(\varphi)\cdot cl(\mu)\cap CF(\varphi) =\{\mu\}$. Then such a $\mu$ satisfies $\mu\tran{\eta}\mathsf{True}\vee\mu\tran{\eta}\mu$, and only this, whenever $\varphi\tran{\eta}\varphi$ holds. \end{proof} \begin{lemma}\label{lemma:cycle:partialorder} If $\varphi\hookrightarrow\varphi$, then there exists $S_0\subset S_1\subset\ldots\subset S_n= CF(\varphi)$ $(n\geq 0)$ such that $\forall \mu\in S_0\forall\varphi\tran{\eta}\varphi\cdot\mu\tran{\eta}\mathsf{True}\vee\mu\tran{\eta}\mu$, and for $i\geq 1$ we have $\forall\mu\in S_i\forall\varphi\tran{\eta}\varphi\cdot\mu\tran{\eta}\mu'$ and $CF(\mu')\subseteq S_{i-1}\cup\{\mu\}$. \end{lemma} \begin{proof} From Lemma~\ref{lemma:formulacycle:forall} we know $S_0\neq\emptyset$. Then let $S_1=S_0\cup \{\mu\mid\mu\in CF(\varphi)\wedge\forall\varphi\tran{\eta}\varphi\cdot\mu\tran{\eta}\mu'\wedge CF(\mu')\subseteq S_0\cup\{\mu\}\}$. $S_1\supset S_0$ holds for the same reason as for $S_0$, namely $\exists\mu\in CF(\varphi)\cdot cl(\mu)\cap CF(\varphi) = S_0\cup\{\mu\}$, and such $\mu$'s can be added into $S_1$. Inductively we can build the set $S_n=S_{n-1}\cup \{\mu\mid\mu\in CF(\varphi)\wedge\forall\varphi\tran{\eta}\varphi\cdot\mu\tran{\eta}\mu'\wedge CF(\mu')\subseteq S_{n-1}\cup\{\mu\}\}$ ($n\geq 1$). Since $S_n\supset S_{n-1}$, $|CF(\varphi)|$ is finite, and $\forall j\geq 0\cdot S_{j}\subseteq CF(\varphi)$, we can eventually reach $S_n = CF(\varphi)$.
\end{proof} \begin{figure} \caption{A demonstration of Lemma~\ref{lemma:cycle:partialorder}} \label{fig:loopingformula} \end{figure} A demonstration of this lemma is shown in Figure~\ref{fig:loopingformula}. In this case, $CF(\varphi)=\{\varphi_0,\varphi_1,\ldots,\varphi_k\}$ and $\varphi\tran{\eta}\varphi$ holds. Then according to Lemma~\ref{lemma:cycle:partialorder} there exists $\varphi_0$ such that $\varphi_0\tran{\eta}\mathsf{True}\vee\varphi_0\tran{\eta}\varphi_0$ holds. Moreover, for $S_1=S_0\cup\{\varphi_1\}$ we have $\varphi_1\tran{\eta}\varphi'$ and $CF(\varphi')\subseteq S_0\cup\{\varphi_1\}$. Note that, beyond $S_{i-1}$, more than one formula can be added into $S_i$ at the same time: see $\varphi_1$ and $\varphi_3$ in $S_2$. This property of looping formulas plays a key role in the following proofs. \begin{lemma}\label{lemma:finitesat:expandsat} $\varphi\tran{\alpha}\psi\wedge S\models_f\psi\Rightarrow S\cup CF(\alpha)\models_f \varphi$. \end{lemma} \begin{proof} We prove it by induction over the formula $\varphi$. \begin{itemize} \item Basic step: If $\varphi = p$, then we know $DNF(\varphi)=\{p\wedge X\mathsf{True}\}$. So $\varphi\tran{\alpha}\psi\Rightarrow p\in CF(\alpha)\wedge CF(\alpha)\models_f\mathsf{True}$. Thus $S\cup CF(\alpha)\models_f\varphi$ is true. \item Inductive step: Assume $\varphi_i$ $(i=1,2)$ satisfy $\varphi_i\tran{\alpha_i}\psi_i\wedge S_i\models_f\psi_i\Rightarrow S_i\cup CF(\alpha_i)\models_f \varphi_i$; then we have: \begin{enumerate} \item If $\varphi=X\varphi_1$, then we know $DNF(\varphi)=\{\mathsf{True}\wedge X(\varphi_1)\}$. If $S\models_f\varphi_1$ holds, then since $S\cup CF(\mathsf{True})\models_f\varphi\equiv S\models_f\varphi_1$, $S\cup CF(\mathsf{True})\models_f \varphi$ holds. \item If $\varphi =\varphi_1\vee\varphi_2$, then we know $DNF(\varphi)=DNF(\varphi_1)\cup DNF(\varphi_2)$, that is, $\forall \alpha\wedge X\psi\in DNF(\varphi)\cdot \alpha\wedge X\psi\in DNF(\varphi_1)\cup DNF(\varphi_2)$.
If $S\models_f\psi$ holds, then by the induction hypothesis we have $\varphi_1(\varphi_2)\tran{\alpha}\psi\wedge S\models_f \psi\Rightarrow S\cup CF(\alpha)\models_f \varphi_1(\varphi_2)$, which implies $S\cup CF(\alpha)\models_f \varphi$ according to the definition of $\models_f$ (Definition~\ref{def:finitestepsat}). So $\varphi\tran{\alpha}\psi\wedge S\models_f \psi\Rightarrow S\cup CF(\alpha)\models_f \varphi$. \item If $\varphi =\varphi_1\wedge\varphi_2$, then we know that for all $\alpha\wedge X\psi\in DNF(\varphi)$ there exist $\alpha_i$ and $\psi_i$ $(i=1,2)$ such that $\alpha=\alpha_1\wedge\alpha_2$ and $\psi=\psi_1\wedge\psi_2$, as well as $\alpha_1\wedge X\psi_1\in DNF(\varphi_1)$ and $\alpha_2\wedge X\psi_2\in DNF(\varphi_2)$. If $S\models_f\psi$ holds, then $S\models_f\psi_i$ $(i=1,2)$ hold. By the induction hypothesis we have $\varphi_i\tran{\alpha_i}\psi_i\wedge S\models_f \psi_i\Rightarrow S\cup CF(\alpha_i)\models_f \varphi_i$ $(i=1,2)$, so $S\cup CF(\alpha_1)\cup CF(\alpha_2)\models_f \varphi_1\wedge\varphi_2$ holds. Thus $S\cup CF(\alpha)\models_f \varphi$ holds. \item If $\varphi =\varphi_1 U\varphi_2$, then we know that each $\alpha\wedge X\psi\in DNF(\varphi)$ is either in $DNF(\varphi_2)$, or $\exists\alpha\wedge X\psi_1\in DNF(\varphi_1)$ and $\psi = \psi_1\wedge \varphi$. If $S\models_f \psi$ holds, then $S\models_f \varphi$ obviously holds when $\psi = \psi_1\wedge\varphi$; thus $S\cup CF(\alpha)\models_f \varphi$ holds. And if $\alpha\wedge X\psi\in DNF(\varphi_2)$, then by the induction hypothesis we have $S\cup CF(\alpha)\models_f\varphi_2 \equiv S\cup CF(\alpha)\models_f \varphi$ directly. \item If $\varphi =\varphi_1 R\varphi_2$, then we know that each $\alpha\wedge X\psi\in DNF(\varphi)$ is either in $DNF(\varphi_1\wedge\varphi_2)$, or $\exists\alpha\wedge X\psi_2\in DNF(\varphi_2)$ and $\psi = \psi_2\wedge \varphi$. If $S\models_f \psi$ holds, then we have proven that $S\models_f \varphi$ holds when $\alpha\wedge X\psi\in DNF(\varphi_1\wedge\varphi_2)$.
And if $\psi = \psi_2\wedge\varphi$, then $S\models_f \psi$ obviously makes $S\cup CF(\alpha)\models_f \varphi$ hold. \end{enumerate} \end{itemize} \end{proof} \begin{lemma}\label{lemma:finitesat:expandsat2} Let $\varphi_0=\varphi\tran{\alpha_0}\varphi_1\tran{\alpha_1}\varphi_2\tran{\alpha_2}\ldots\tran{\alpha_n}\varphi_{n+1}=\psi$ and $T=\bigcup_{0\leq j\leq n}CF(\alpha_j)$. If $S\models_f \psi$ then $S\cup T\models_f\varphi$ holds. \end{lemma} \begin{proof} According to Lemma~\ref{lemma:finitesat:expandsat} we know $\varphi_n\tran{\alpha_n}\varphi_{n+1}=\psi\wedge S\models_f\psi\Rightarrow S\cup CF(\alpha_n)\models_f \varphi_n$ holds. Applying Lemma~\ref{lemma:finitesat:expandsat} inductively, we can finally prove the lemma. \end{proof} \begin{lemma}\label{lemma:until:finitesat} If $\varphi\tran{\eta}\varphi$, then $\forall\mu\in UCF(\varphi)\cdot\mu\tran{\eta}\mu'\wedge\mu\not\in CF(\mu')\Leftrightarrow\eta\models_f\varphi$; here $UCF(\varphi)\subseteq CF(\varphi)$ and each $\mu$ in $UCF(\varphi)$ is an Until formula. \end{lemma} \begin{proof} Let $\varphi\tran{\eta}\varphi= (\varphi_0=\varphi\tran{\omega_0}\varphi_1\tran{\omega_1}\ldots\tran{\omega_{k}}\varphi_{k+1}=\varphi\ (k\geq 0))$ and let the set $T=\bigcup_{0\leq j\leq k}CF(\alpha_j)$, where $\alpha_j\wedge X\varphi_{j+1}\in DNF(\varphi_j)\wedge\omega_j\vDash\alpha_j$ holds. ($\Rightarrow$). From Lemma~\ref{lemma:cycle:partialorder} we know there exists $S_0\subset S_1\subset\ldots\subset S_n= CF(\varphi)$ $(n\geq 0)$ such that $\forall \mu\in S_0\forall\varphi\tran{\eta}\varphi\cdot\mu\tran{\eta}\mathsf{True}\vee\mu\tran{\eta}\mu$, and for $i\geq 1$ we have $\forall\mu\in S_i\forall\varphi\tran{\eta}\varphi\cdot\mu\tran{\eta}\mu'$ and $CF(\mu')\subseteq S_{i-1}\cup\{\mu\}$. For each $\mu$ in $S_0$, if $\mu\tran{\eta}\mathsf{True}$, then according to Lemma~\ref{lemma:finitesat:expandsat2} $T\models_f \mu$ holds; and if $\mu\tran{\eta}\mu$, then since $\mu$ is not an Until formula, $\mu$ is a Release formula.
For the Release formula $\mu=\nu_1 R\nu_2$ we know that every transition $\mu\tran{\eta}\mu$ implies $\nu_2\tran{\eta}\mathsf{True}$. Thus according to Lemma~\ref{lemma:finitesat:expandsat2} we have that $T\models_f\nu_2$ holds, and then $T\models_f\mu$ holds according to its definition. So we have now proven $\forall\mu\in S_0\cdot T\models_f\mu$. Inductively, for $i> 0$, if $\mu\in S_i$ and $\mu\tran{\eta}\mu'$ where $CF(\mu')\subseteq S_{i-1}$, then since we have proven $T\models_f\mu'$, according to Lemma~\ref{lemma:finitesat:expandsat2} we know $T\models_f \mu$ holds. Else if $\mu\tran{\eta}\mu'\wedge\mu\in CF(\mu')$, then according to the assumption we know $\mu$ must be a Release formula, so for $\mu=\nu_1 R\nu_2$ we have $\nu_2\tran{\eta}\nu'$ where $CF(\nu')\subseteq S_{i-1}$. Since we have proven $T\models_f \nu'$, according to Lemma~\ref{lemma:finitesat:expandsat2} we have that $T\models_f \nu_2$ holds as well. Then according to the definition of $\models_f$ we know $T\models_f\mu$ holds. Thus we can prove $\forall\mu\in S_n=CF(\varphi)\cdot T\models_f\mu$, that is, $T\models_f\varphi$ holds. Moreover, since $\varphi\tran{\eta}\varphi$ is true, according to the definition of $\models_f$ (Definition~\ref{def:finitestepsat}) we know $\eta\models_f \varphi$ holds. ($\Leftarrow$) If $\exists\mu\in UCF(\varphi)\cdot\mu\tran{\eta}\mu'\wedge \mu\in CF(\mu')$, then according to the expansion rule $\mu=\nu_1 U\nu_2=\nu_2\vee\nu_1\wedge X(\nu_1 U\nu_2)$ we can conclude that $\eta\models_f\nu_2$ never holds, which makes $\eta\models_f\varphi$ fail to hold. So the lemma is true. \end{proof} \begin{lemma}\label{lemma:release:sat} If $\varphi$ is a Release formula, then $\varphi\tran{\xi}\varphi\Rightarrow\xi\vDash\varphi$. \end{lemma} \begin{proof} Let $\varphi = \mu R\nu$. Since $\varphi\tran{\xi}\varphi$, we have $\exists n\cdot\varphi\tran{\xi^n}\varphi\wedge\varphi\tran{\xi_n}\varphi$.
Let $\xi^n=\omega_0\omega_1\ldots\omega_n$ and $\eta_i = \omega_i\omega_{i+1}\ldots\omega_n\ (0\leq i\leq n)$. It is then easy to see that $\forall 0\leq i\leq n\cdot\nu\tran{\eta_i}\mathsf{True}$, which makes $\forall 0\leq j\leq n\cdot \xi_j\vDash\nu$. Inductively, for $\varphi\tran{\xi_n}\varphi$ we can get the same conclusion. So for all $j\geq 0$ we have $\xi_j\vDash\nu$, which makes $\xi\vDash\varphi$ according to the LTL semantics. \end{proof} Now we begin to prove Lemma~\ref{lemma:finitesat:infsat}. \begin{proof} From Lemma~\ref{lemma:cycle:partialorder} we know there exists $S_0\subset S_1\subset\ldots\subset S_n= CF(\varphi)\ (n\geq 0)$ such that $\forall \mu\in S_0\forall\varphi\tran{\eta}\varphi\cdot\mu\tran{\eta}\mathsf{True}\vee\mu\tran{\eta}\mu$, and for $i\geq 1$ we have $\forall\mu\in S_i\forall\varphi\tran{\eta}\varphi\cdot\mu\tran{\eta}\mu'$ and $CF(\mu')\subseteq S_{i-1}\cup\{\mu\}$. For the base case, for each $\mu$ in $S_0$, if $\exists\mu\tran{\eta_i}\mathsf{True}$ holds, then since $\forall 0\leq j\leq i\cdot\mu\tran{\eta_j}\mu$ we have $\forall 0\leq j\leq i\cdot\xi'=\eta_j\eta_{j+1}\ldots\vDash\mu$; and if $\forall i\geq 0\cdot\mu\tran{\eta_i}\mu$, then since $\eta_i\models_f\varphi\Rightarrow \eta_i\models_f\mu$, according to Lemma~\ref{lemma:until:finitesat} we know $\mu$ cannot be an Until formula. Then according to Corollary~\ref{coro:expand:cycle} we know $\mu$ is a Release formula. Also we have $\mu\tran{\eta_i}\mu$, and according to Lemma~\ref{lemma:release:sat}, $\forall i\geq 0\cdot\mu\tran{\eta_i}\mu$ together with $\mu$ being a Release formula implies $\forall i\geq 0\cdot \xi'=\eta_i\eta_{i+1}\ldots\vDash\mu$. So first we can prove $\forall \mu\in S_0\forall i\geq 0\cdot\eta_i\eta_{i+1}\ldots\vDash\mu$.
Inductively, for the set $S_{n+1}\ (n \geq 0)$, if $\exists\mu\in S_{n+1}\setminus S_{n}\forall i\geq 0\cdot \mu\tran{\eta_i}\mu'\wedge CF(\mu')\subseteq S_{n}$, then from the base step we know $\eta_{i+1}\eta_{i+2}\ldots\vDash\mu'$, so $\eta_i\eta_{i+1}\ldots\vDash\mu$. Moreover, we also have $\forall 0\leq j\leq i\cdot \eta_j\eta_{j+1}\ldots\vDash\mu$. If $\forall i\geq 0\cdot\mu\tran{\eta_i}\mu'\wedge\mu\in CF(\mu')$, then similarly, according to Lemma~\ref{lemma:until:finitesat} and Corollary~\ref{coro:expand:cycle}, we know $\mu$ must be a Release formula. Let $\mu=\nu_1 R\nu_2$; we know $\forall i\geq 0\cdot\nu_2\tran{\eta_i}\nu'\wedge CF(\nu')\subseteq S_n$. We have proven $\eta_{i+1}\eta_{i+2}\ldots\vDash\nu'$, so we have $\forall i\geq 0\cdot \eta_i\eta_{i+1}\ldots\vDash\nu_2$. Then according to the LTL semantics we have $\forall i\geq 0\cdot\eta_i\eta_{i+1}\ldots\vDash\mu$. So we have now proven $\forall\mu\in S_{n+1}\forall i\geq 0\cdot\eta_i\eta_{i+1}\ldots\vDash\mu$. Finally, for the set $S_n= CF(\varphi)$ we obtain $\forall\mu\in S_n\forall i\geq 0\cdot\eta_i\eta_{i+1}\ldots\vDash\mu$, which implies $\xi\vDash \varphi$. \end{proof} \subsection{Proof of Lemma \ref{lemma:completeness}} \begin{lemma}\label{lemma:complete:finiteexist} $\xi\vDash\varphi\Rightarrow\exists n\cdot \xi^n\models_f\varphi$. \end{lemma} \begin{proof} We prove it by induction over the size of the formula $\varphi$. \begin{itemize} \item Basic step: If $\varphi=p$, then $\xi\vDash\varphi\Rightarrow p\in \xi^1$. So according to Definition \ref{def:finitestepsat} we know $\xi^1\models_f \varphi$ is true. \item Inductive step: Assume that for the formulas $\varphi_i$ ($i=1,2$) we have $\xi\vDash\varphi_i\Rightarrow\exists n\cdot \xi^n\models_f\varphi_i$. Then \begin{enumerate} \item If $\varphi = X\varphi_1$, then $\xi\vDash\varphi\Rightarrow\xi_1\vDash\varphi_1$.
By induction hypothesis we know $\exists n\cdot {\xi_1}^n\models_f\varphi_1$ holds, so ${\xi}^{n+1}\models_f\varphi$ also holds. \item If $\varphi = \varphi_1\wedge\varphi_2$, then $\xi\vDash\varphi\Rightarrow\xi\vDash\varphi_1\wedge\xi\vDash\varphi_2$. By induction hypothesis we know $\exists n_1\cdot {\xi}^{n_1}\models_f\varphi_1$ and $\exists n_2\cdot {\xi}^{n_2}\models_f\varphi_2$ hold, so we can conclude that ${\xi}^{n}\models_f\varphi$ holds for some $n\geq \max(n_1,n_2)$. \item If $\varphi = \varphi_1\vee\varphi_2$, then $\xi\vDash\varphi\Rightarrow\xi\vDash\varphi_1\vee\xi\vDash\varphi_2$. By induction hypothesis we know $\exists n_1\cdot {\xi}^{n_1}\models_f\varphi_1$ or $\exists n_2\cdot {\xi}^{n_2}\models_f\varphi_2$ holds, so we can conclude that ${\xi}^{n}\models_f\varphi$ holds for $n=n_1$ or $n=n_2$. \item If $\varphi = \varphi_1 U \varphi_2$, then $\xi\vDash\varphi_1 U\varphi_2\Rightarrow\exists i\geq 0\cdot\xi_i\vDash\varphi_2$. By induction hypothesis we have that $\exists n\cdot {\xi_i}^n\models_f\varphi_2$ holds, so from Definition \ref{def:finitestepsat} we have that $\xi^{i+n}\models_f \varphi$ holds. \item If $\varphi = \varphi_1 R \varphi_2$, then $\xi\vDash\varphi_1 R\varphi_2\Rightarrow\forall i\geq 0\cdot\xi_i\vDash\varphi_2$. So $\xi\vDash\varphi_2$ holds. Then by induction hypothesis we know $\exists n\cdot \xi^n\models_f\varphi_2$, and according to Definition \ref{def:finitestepsat} we know $\xi^n\models_f\varphi$ also holds. \end{enumerate} \end{itemize} \end{proof} \begin{lemma}\label{lemma:complete:recursive} $\varphi\tran{\xi}\varphi\wedge\xi\vDash\varphi\Rightarrow\exists n\cdot\varphi\tran{\xi^n}\varphi\wedge \xi^n\models_f\varphi\wedge (\varphi\tran{\xi_n}\varphi\wedge\xi_n\vDash\varphi)$. \end{lemma} \begin{proof} We first prove $\varphi\tran{\xi}\varphi\wedge\xi\vDash\varphi\Rightarrow\exists n\cdot\varphi\tran{\xi^n}\varphi\wedge \xi^n\models_f\varphi$.
If $\forall n\cdot\varphi\tran{\xi^n}\varphi\wedge \neg (\xi^n\models_f\varphi)$, then we can conclude $\forall i\leq n\cdot \neg (\xi^i\models_f\varphi)$, contradicting Lemma~\ref{lemma:complete:finiteexist}. Moreover, since $\varphi\tran{\xi}\varphi\wedge\varphi\tran{\xi^n}\varphi\wedge\xi\vDash\varphi$, $\varphi\tran{\xi_n}\varphi\wedge\xi_n\vDash\varphi$ is also true. So this lemma is true. \end{proof} Lemma~\ref{lemma:completeness} now follows by applying Lemma~\ref{lemma:complete:recursive} inductively. \subsection{Proof of Theorem \ref{thm:central}} \begin{proof} ($\Rightarrow$). According to Corollary~\ref{coro:expand:existcycle} and Lemma~\ref{lemma:completeness} we know it is true. \hspace*{5mm}($\Leftarrow$). According to Corollary~\ref{coro:expand:existcycle} and Lemma~\ref{lemma:finitesat:infsat} it is true. \end{proof} \subsection{Proof of Theorem \ref{thm:correct}} \begin{lemma}\label{lemma:automata:runable} Let $\xi=\omega_0\omega_1\ldots$ and let $\mathcal{A}_{\lambda}$ be the B\"uchi automaton for $\lambda$ generated by the \textit{DNF-based} construction. Then $\psi_0=\lambda\tran{\omega_0}\psi_1\tran{\omega_1}\ldots\tran{\omega_{n-1}}\psi_n$ holds, where $\psi_i\in EF(\lambda)$, if and only if there is a corresponding path $s_0\tran{\omega_0}s_1\tran{\omega_1}\ldots\tran{\omega_{n-1}}s_n$ in $\mathcal{A}_{\lambda}$ where each $s_i$ is the $\psi_i$-state. \end{lemma} \begin{proof} We prove it by induction over $n$. 1). When $n = 1$, according to our construction we know directly that $\psi_0=\lambda\tran{\omega_0}\psi_1$ holds if and only if there is a transition $s_0\tran{\omega_0}s_1$ in $\mathcal{A}_{\lambda}$, where each $s_i$ is the $\psi_i$-state. 2).
When $n = k$ ($k\geq 1$), we assume that $\psi_0=\lambda\tran{\omega_0}\psi_1\tran{\omega_1}\ldots\tran{\omega_{k-1}}\psi_k$ holds if and only if there is a corresponding path $s_0\tran{\omega_0}s_1\tran{\omega_1}\ldots\tran{\omega_{k-1}}s_k$ in $\mathcal{A}_{\lambda}$, where each $s_i$ ($0\leq i\leq k$) is the $\psi_i$-state. Then from Definition~\ref{def:expand}, $\psi_0=\lambda\tran{\omega_0}\psi_1\tran{\omega_1}\ldots\tran{\omega_{k-1}}\psi_k\tran{\omega_{k}}\psi_{k+1}$ holds if and only if $\exists\alpha_k\wedge X\psi_{k+1}\in DNF(\psi_k)\wedge\omega_k\models\alpha_k$ holds. According to the construction, we know $\exists\alpha_k\wedge X\psi_{k+1}\in DNF(\psi_k)\wedge\omega_k\models\alpha_k$ holds if and only if there is a transition $s_k\tran{\omega_k}s_{k+1}$ where $s_{k+1}$ is the $\psi_{k+1}$-state. So it is true that $\psi_0=\lambda\tran{\omega_0}\psi_1\tran{\omega_1}\ldots\tran{\omega_{k-1}}\psi_k\tran{\omega_{k}}\psi_{k+1}$ holds if and only if there is a path $s_0\tran{\omega_0}s_1\tran{\omega_1}\ldots\tran{\omega_{k-1}}s_k\tran{\omega_k}s_{k+1}$ in $\mathcal{A}_{\lambda}$. The proof is done. \end{proof} Now we come to prove Theorem~\ref{thm:correct}. \begin{proof} ($\Leftarrow$) Let $\xi=\omega_0\omega_1\ldots$ be an accepting run of $\mathcal{A}_\lambda$; we want to prove that $\xi\vDash\lambda$. Let $\sigma:=s_0\tran{\omega_0}s_1\tran{\omega_1}\ldots$ be the corresponding path accepting $\xi$. Thus, $inf(\sigma)$ contains at least one accepting state $s\in F$. Assume $s=\langle\varphi, \emptyset\rangle$. Since there exists a finite path $s_0\tran{\omega_0}s_1\tran{\omega_1}s_2\ldots\tran{\omega_n}s_{n+1}=s$, where each $s_i$ is the $\varphi_i$-state, according to Lemma~\ref{lemma:automata:runable} we know $\lambda=\varphi_0\tran{\omega_0}\varphi_1\tran{\omega_1}\varphi_2\ldots\tran{\omega_n}\varphi_{n+1}=\varphi$ holds.
Then we know $\exists \xi_n=\eta_1\eta_2\ldots$ so that for each $\eta_i=\omega_{i_0}\omega_{i_1}\ldots\omega_{i_n}\ (i,n\geq 1)$ we have $s_{i_0}=s\tran{\omega_{i_0}}s_{i_1}\tran{\omega_{i_1}}\ldots\tran{\omega_{i_n}}s_{i_{n+1}}=s$, which for simplicity we denote by $s\tran{\eta_i}s$. According to Lemma~\ref{lemma:automata:runable} we know that each time $s\tran{\eta_i}s$ holds, $\varphi\tran{\eta_i}\varphi$ also holds ($s$ is the $\varphi$-state). Moreover, according to our construction and Lemma~\ref{lemma:obligaionandsatonce} we know $\eta_i\models_f\varphi$ holds. Finally, according to Theorem~\ref{thm:central} we can conclude $\xi\vDash\lambda$. ($\Rightarrow$) Let $\xi=\omega_0\omega_1\ldots$ with $\xi\vDash\lambda$; we now prove there is an accepting run $\sigma=s_0\tran{\omega_0}s_1\tran{\omega_1}\ldots$ in $\mathcal{A}_{\lambda}$. From Theorem~\ref{thm:central} we know $\xi\vDash\lambda\Rightarrow\exists\varphi\exists n\cdot\lambda\tran{\xi^n}\varphi\wedge(\exists\xi_n=\eta_1\eta_2\ldots\cdot\forall i\geq 1\cdot\varphi\tran{\eta_i}\varphi\wedge \eta_i\models_f\varphi)$. According to Lemma~\ref{lemma:automata:runable} we can find an infinite path $\sigma = s_0\tran{\omega_0}s_1\tran{\omega_1}\ldots\tran{\omega_{n-1}}s\tran{\omega_n}\ldots$ in $\mathcal{A}_{\lambda}$ on which $\xi$ can run. Here $s_0$ is the $\lambda$-state and $s$ is the $\varphi$-state, and for each $\eta_i=\omega_{i_0}\omega_{i_1}\ldots\omega_{i_n}\ (i,n\geq 1)$ we have $s_{i_0}=s\tran{\omega_{i_0}}s_{i_1}\tran{\omega_{i_1}}s_{i_2}\ldots\tran{\omega_{i_n}}s_{i_{n+1}}=s$, which for simplicity we denote by $s\tran{\eta_i}s$. Let $s_{i_j}$ ($0\leq j\leq n+1$) be the $\varphi_{i_j}$-state, and let $T=\bigcup_{0\leq k\leq n}\alpha_{i_k}$, where each $\alpha_{i_k}$ satisfies $\alpha_{i_k}\wedge X\varphi_{i_{k+1}}\in DNF(\varphi_{i_k})\wedge \omega_{i_k}\models\alpha_{i_k}$.
Since $T\models_f \varphi$ holds, according to Lemma~\ref{lemma:obligaionandsatonce} we know $\exists O\in OS_{\varphi}\cdot O\subseteq T$. Moreover, our construction guarantees that for each $s\tran{\eta_i}s$ there is an $s_{i_j}=\langle\varphi_{i_j}, P\rangle\ (0\leq j\leq n)$ such that $P=\emptyset$. Since the number of states of the form $\langle-, \emptyset\rangle$ is finite, there must be such a state in $inf(\sigma)$. This proves the theorem. \end{proof} \end{document}
\begin{document} \title{Minimum number of distinct eigenvalues of graphs \thanks{Received by the editors on Month x, 200x. Accepted for publication on Month y, 200y. Handling Editor: .}} \author{ Bahman Ahmadi\thanks{Department of Mathematics and Statistics, University of Regina, Regina, Saskatchewan, S4S 0A2. ([email protected], [email protected]). } \and Fatemeh Alinaghipour\footnotemark[2] \and Michael S. Cavers \thanks{Department of Mathematics and Statistics, University of Calgary, Calgary, AB, T2N 1N4. ([email protected])} \and Shaun Fallat\thanks{Department of Mathematics and Statistics, University of Regina, Regina, Saskatchewan, S4S 0A2. Research supported in part by an NSERC research grant. ([email protected])} \and Karen Meagher\thanks{Department of Mathematics and Statistics, University of Regina, Regina, Saskatchewan, S4S 0A2. Research supported in part by an NSERC research grant. ([email protected])} \and Shahla Nasserasr\thanks{Department of Mathematics and Statistics, University of Regina, Regina, Saskatchewan, S4S 0A2. Research supported by PIMS and Fallat's NSERC Research Grant. ([email protected])} } \pagestyle{myheadings} \markboth{B. Ahmadi, F. Alinaghipour, M.S. Cavers, S. Fallat, K. Meagher, S. Nasserasr}{Minimum Number of Eigenvalues} \maketitle \begin{abstract} The minimum number of distinct eigenvalues, taken over all real symmetric matrices compatible with a given graph $G$, is denoted by $q(G)$. Using other parameters related to $G$, bounds for $q(G)$ are proven and then applied to deduce further properties of $q(G)$. It is shown that there are many graphs $G$ for which $q(G)=2$. For some families of graphs, such as the join of a graph with itself, complete bipartite graphs, and cycles, this minimum value is determined. Moreover, examples of graphs $G$ are provided to show that adding and deleting edges or vertices can dramatically change the value of $q(G)$.
Finally, the set of graphs $G$ with $q(G)$ near the number of vertices is shown to be a subset of known families of graphs with small maximum multiplicity. \end{abstract} \begin{keywords} Symmetric matrix, Eigenvalue, Join of graphs, Diameter, Trees, Bipartite graph, Maximum multiplicity. \end{keywords} \begin{AMS} 05C50, 15A18. \end{AMS} \section{Introduction} Suppose $G=(V,E)$ is a simple graph with vertex set $V=\{1,2,\ldots,n\}$ and edge set $E$. To a graph $G$, we associate the collection of real $n \times n$ symmetric matrices defined by \[ S(G) = \{ A : A=A^{T}, \; {\rm for} \; i \neq j, \; a_{ij} \neq 0 \Leftrightarrow \{i,j\} \in E\}. \] Note that the main diagonal entries of $A$ in $S(G)$ are free to be chosen. For a square matrix $A$, we let $q(A)$ denote the number of distinct eigenvalues of $A$. For a graph $G$, we define \[ q(G) = \min \{ q(A) : A \in S(G) \}.\] It is clear that for any graph $G$ on $n$ vertices, $1 \leq q(G) \leq n$. Furthermore, it is not difficult to show that for a fixed $n$, there exists a graph $G$ on $n$ vertices with $q(G)=k$, for each $k=1,2,\ldots,n$; see Corollary~\ref{cor:anynk} for further details. The class of matrices $S(G)$ has been of interest to many researchers recently (see \cite{FH, FH2} and the references therein), and there has been considerable development of the parameters $\M(G)$ (maximum multiplicity or nullity over $S(G)$) and $\mr(G)$ (minimum rank over $S(G)$) and their positive semidefinite counterparts; see, for example, the works \cite{psd, FH, FH2}. Furthermore, as a consequence, interest has grown in connecting these parameters to various combinatorial properties of $G$. For example, the inverse eigenvalue problem for graphs (see \cite{Hog05}) continues to receive considerable and deserved attention, as it remains one of the most interesting unresolved issues in combinatorial matrix theory.
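These definitions are easy to experiment with numerically. The following sketch (in Python with NumPy; the graphs and matrix entries are our own illustrative choices, not part of the paper) checks membership in $S(G)$ and counts distinct eigenvalues for two small examples:

```python
import numpy as np

def in_S(A, edges, n, tol=1e-12):
    """Check that A is symmetric and its off-diagonal zero pattern matches the edge set."""
    if not np.allclose(A, A.T):
        return False
    return all((abs(A[i, j]) > tol) == ({i, j} in edges)
               for i in range(n) for j in range(i + 1, n))

def distinct_eigs(A, tol=1e-8):
    """Number of distinct eigenvalues of a symmetric matrix, up to a tolerance."""
    w = np.sort(np.linalg.eigvalsh(A))
    return 1 + int(np.sum(np.diff(w) > tol))

# K_3: its adjacency matrix lies in S(K_3) and has spectrum {2, -1, -1},
# so q(K_3) <= 2; since K_3 has an edge, q(K_3) = 2.
A = np.array([[0., 1., 1.],
              [1., 0., 1.],
              [1., 1., 0.]])
assert in_S(A, [{0, 1}, {0, 2}, {1, 2}], 3)
assert distinct_eigs(A) == 2

# P_3: any matrix in S(P_3) is an unreduced symmetric tridiagonal matrix,
# so all of its eigenvalues are distinct (q(P_n) = n for paths).
T = np.array([[0.3, 1.0, 0.0],
              [1.0, -0.7, 2.0],
              [0.0, 2.0, 0.1]])
assert in_S(T, [{0, 1}, {1, 2}], 3)
assert distinct_eigs(T) == 3
```

Of course, such experiments only certify upper bounds on $q(G)$; lower bounds require arguments of the kind developed in the sections that follow.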
In the context of the $(0,1)$-adjacency matrix, $A(G)$, it is well known that $q(A(G))$ is at least one more than the diameter of $G$ (denoted by ${\rm diam}(G)$) (see \cite{BR}). This result was generalized to the case of trees, by observing that $q(A) \geq {\rm diam}(G) +1$ whenever $A \in S(G)$ is an entry-wise nonnegative matrix (see \cite{LJ}). Thus if $G$ is a tree, it is known that $q(G) \geq {\rm diam}(G) +1$. However, it has been demonstrated that while this inequality is tight for some trees (e.g., the path and the star), equality need not hold for all trees (see \cite{BF}, \cite{KS2}, and also \cite{RUofWy}). Our main interest lies in studying the value of $q(G)$ for arbitrary graphs, and as such this work is a continuation of \cite{F} by de Fonseca. As with many studies of this kind, moving beyond trees leads to a number of new and interesting difficulties, and numerous exciting advances. It is clear that knowledge of $q(G)$ for a graph $G$ will impact current studies of the inverse eigenvalue problem for graphs and, in particular, the parameters $\M(G)$ and $\mr(G)$. Our work has been organized into a number of components. The next section contains necessary background information and various preliminary-type results, including connections between $q(G)$ and existing graph parameters as well as the graphs attaining the extreme values of $q(G)$. The following section provides a simple but surprisingly useful lower bound for $q(G)$. Section~\ref{sec:q=2} is devoted to studying the graphs for which $q(G)=2$, which is continued into Section 5, whereas the next section considers bipartite graphs and certain graph products. The final two sections focus on the graphs for which $q(G)=|V(G)|-1$ and some possible further work. \section{Preliminary Results} To begin, we list some basic results about the minimum number of distinct eigenvalues for graphs.
In this work, we use $K_n, K_{m,n}$ and $I_n$ to denote the complete graph on $n$ vertices, the complete bipartite graph with parts of sizes $m,n$, and the identity matrix of order $n$, respectively. The notations $M_{m,n}, M_n$ are used for the set of real matrices of order $m\times n$ and $n$, respectively. For $A\in M_n$, the set of eigenvalues of $A$ is denoted by $\sigma(A)$. For graphs $G$ and $H$, $G\cup H$ denotes the graph with vertex set $V(G)\cup V(H)$ and edges $E(G)\cup E(H)$, and is called the {\em union of $G$ and $H$}. \begin{lemma}\label{lem:q=1} For a graph $G$, $ q(G)=1$ if and only if $G$ has no edges. \end{lemma} \begin{proof} If $q(G)=1$, then there is an $A\in S(G)$ with exactly one eigenvalue; such a matrix is a scalar multiple of the identity matrix, thus the graph $G$ is the empty graph. Clearly, if $G$ is the empty graph, then $ q(G)=1$. \end{proof} \begin{lemma} For any $n\geq 2$, we have $q(K_n) = 2$. \end{lemma} \begin{proof} The adjacency matrix of $K_n$ has two distinct eigenvalues, so $q(K_n)\leq 2$, and by Lemma \ref{lem:q=1}, $q(K_n)>1$, which implies $q(K_n) = 2$. \end{proof} \begin{lemma}\label{lem:01} If $G$ is a non-empty graph, then for any two distinct real numbers $\mu_1, \mu_2$, there is an $A \in S(G)$ such that $q(A) = q(G)$ and $\mu_1,\mu_2\in \sigma(A)$. \end{lemma} {\em Proof.} Consider $B \in S(G)$ with $q(B) = q(G)$ and $\lambda_1,\lambda_2\in \sigma(B)$ with $\lambda_1\neq \lambda_2$. Then the matrix $A$ defined below satisfies $q(A) = q(B)$ and $\mu_1,\mu_2\in \sigma(A)$. \[ \displaystyle A=\frac{\mu_1-\mu_2}{\lambda_1-\lambda_2} \left(B + \displaystyle \frac{\lambda_1\mu_2-\lambda_2\mu_1}{\mu_1-\mu_2}I\right). ~~~~~~ \cvd\] \begin{corollary}\label{cor:union2} If $G$ and $H$ are non-empty graphs with $q(G) = q(H) = 2$, then $q(G \cup H)=2$. In particular, if $G$ is the union of non-trivial complete graphs, then $q(G) =2$.
\end{corollary} The parameter $q$ is related to other parameters of graphs, such as the minimum rank of the graph. \begin{proposition}\label{mrandq} For any graph $G$, we have $q(G)\leq \mr(G)+1.$ \end{proposition} \begin{proof} Consider a matrix $A\in S(G)$ with the minimum possible rank, $\mr(G)$. Then $A$ has $\mr(G)$ nonzero eigenvalues, so the number of distinct eigenvalues of $A$ is less than or equal to $\mr(G)+1$. \end{proof} Clearly, any known upper bound on the minimum rank of a graph can be used as an upper bound for the value of $q(G)$ for a graph $G$. For example, a {\em clique covering} of a graph is a collection of complete subgraphs of the graph such that every edge of the graph is contained in at least one of these subgraphs. Then the {\em clique covering number} of a graph is the fewest number of cliques in a clique covering. This number is denoted by $\cc(G)$. It is well known that for all graphs $G$, $\mr(G) \leq \cc(G)$; see \cite{FH}, and thus we have the following corollary. \begin{corollary}\label{cor:cliquecovering} Let $G$ be a graph. Then $q(G) \leq \cc(G) + 1.$ \end{corollary} In Corollary~\ref{cor:anynk} a family of graphs is given for which this bound holds with equality. We conclude this section with the exact value of $q(C_n)$, where $C_n$ is a cycle on $n$ vertices. This result can be derived from~\cite{Ferg}, but here we will prove it using work from \cite{FF}. \begin{lemma}\label{lem:cycles} Let $C_n$ be the cycle on $n$ vertices. Then \[ q(C_n) = \left\lceil \frac{n}{2} \right\rceil. \] \end{lemma} \begin{proof} First, suppose $n=2k+1$, for some $k\geq 1$. Then the adjacency matrix of $C_n$ has exactly $\frac{n-1}{2}+1$ distinct eigenvalues; these eigenvalues are $2\cos \frac{2\pi j}{n}$, $j=1,\ldots, n$. On the other hand, using \cite[Cor. 3.4]{FF}, any eigenvalue of $A\in S(C_n)$ has multiplicity at most two, so $q(C_n)\geq \frac{n-1}{2}+1$. Thus, $q(C_n)= \frac{n-1}{2}+1$. Next, consider $n=2k$, for some $k\geq 2$.
Again using \cite[Cor. 3.4]{FF}, $q(C_n)\geq k$. Moreover, by \cite[Thm. 3.3]{FF}, for any set of numbers $\lambda_1=\lambda_{2}> \lambda_3=\lambda_{4}>\ldots > \lambda_{2k-1}=\lambda_{2k}$, there is an $A\in S(C_{n})$ with eigenvalues $\lambda_i$, $i=1,\ldots, n$. This implies that if $n$ is even, then $q(C_n)=\frac{n}{2}.$\end{proof} Since $\mr(C_n)=n-2$ and $q(C_n)\approx n/2$, we know that for some graphs $G$ there can be a large gap between the parameters $\mr(G)$ and $q(G)$. \section{Unique shortest path} There is only one family of graphs for which the eigenvalues of every matrix in $S(G)$ are all distinct: the paths. This statement is Theorem 3.1 in \cite{F}, and also follows from a result by Fiedler~\cite{Fiedler}, which states that if a real symmetric matrix $A \in M_n$ satisfies $\mathrm{rank}(A + D) \geq n - 1$ for every diagonal matrix $D$, then $A\in S(P_n)$. A path on $n$ vertices is denoted by $P_n$. \begin{proposition}\label{paths} For a graph $G$, $q(G)=|V(G)|$ if and only if $G$ is a path. \end{proposition} From this we can also conclude that the parameter $q$ is not monotone on induced subgraphs, since $q(P_n)=n$ while $q(C_n)\approx n/2.$ The next result gives a very simple, but often very effective, lower bound on the minimum number of distinct eigenvalues of a graph that is based on the length of certain induced paths. Recall that the {\em length} of a path is simply the number of edges in that path, and that the {\em distance between two vertices} (in the same component) is the length of the shortest path between those two vertices. \begin{theorem}\label{thm:uniqueminpath} If there are vertices $u$, $v$ in a connected graph $G$ at distance $d$ and the path of length $d$ from $u$ to $v$ is unique, then $q(G) \geq d+1$. \end{theorem} \begin{proof} Assume that $u=v_1,v_2,\dots, v_d, v_{d+1}=v$ is the unique path of length $d$ from $u$ to $v$.
For any $A=[a_{ij}] \in S(G)$, all of the matrices $A, A^2,\ldots, A^{d-1}$ have a zero in the position $(u,v)$, while the entry $(u,v)$ of $A^d$ is equal to $\prod_{i=1}^{d}a_{v_{i} v_{i+1}}\neq 0$. Thus, the matrices $I, A, A^2, \dots , A^{d}$ are linearly independent and the minimal polynomial of $A$ must have degree at least $d+1$. \end{proof} It is important to note that the induced path from $u$ to $v$ in the proof of Theorem~\ref{thm:uniqueminpath} is the shortest path from $u$ to $v$ and that it is the only path of this length. The length of such a path is a lower bound on the diameter of the graph, and if the path is not unique, then the bound only holds for nonnegative matrices. \begin{corollary} For any connected graph $G$, if $A \in S(G)$ is nonnegative, then $q(A) \geq \diam(G) +1$. \end{corollary} We note that Theorem~\ref{thm:uniqueminpath} implies Theorem 3.1 from \cite{F}. \begin{theorem}\label{carlos} (\cite[Thm. 3.1]{F}) Suppose $G$ is a connected graph. If $P$ is the longest induced path in $G$ for which no edge of $P$ lies on a cycle, then $q(G) \geq |V(P)|$. \end{theorem} It is not true that $\diam(G) +1$ is a lower bound for the minimum number of distinct eigenvalues of an arbitrary graph $G$; see Corollary~\ref{ex:hypercube} for a counterexample. However, in the case of trees, any shortest path between two vertices is the unique shortest path, so we have \begin{corollary}\label{tree-diam} For any tree $T$, $q(T) \geq \diam(T) +1$. \end{corollary} There are several other proofs of Corollary \ref{tree-diam}; see \cite{LJ, KS1}. There are also trees with $q(T)> \diam(T)+1$; see \cite{BF}. Further, for any positive integer $d$, there exists a constant $f(d)$ such that for any tree $T$ with diameter $d$, there is a matrix $A\in S(T)$ with at most $f(d)$ distinct eigenvalues (this was shown by B. Shader~\cite{BS}, who described $f(d)$ as possibly ``super-super-exponential'').
It has been shown that $f(d)\geq (9/8)d$ for large $d$; see \cite{KS1} and \cite{RUofWy}. Using unique shortest paths, it is possible to construct a connected graph on $n$ vertices with $q(G) = k$, for any pair of integers $k,n$ with $1\leq k \leq n$. \begin{corollary}\label{cor:anynk} For any pair of integers $k,n$ with $1\leq k \leq n$, let $G(n,k)$ be the graph on vertices $v_1,\ldots,v_n$, where vertices $v_1,\ldots,v_{n-k+2}$ form a clique and vertices $v_{n-k+2},v_{n-k+3},\ldots,v_{n-1},v_{n}$ form a path of length $k-2$. Then, $q(G(n,k)) = k$. \end{corollary} \begin{proof} There is a unique shortest path of length $k-1$ from $v_{n}$ to any of the vertices $v_1,\ldots,v_{n-k+1}$, and there is a clique covering of the graph consisting of $k-1$ cliques. Thus, by Corollary~\ref{cor:cliquecovering} and Theorem~\ref{thm:uniqueminpath}, $q(G(n,k))=k$. \end{proof} \section{Graphs with two distinct eigenvalues} \label{sec:q=2} For a graph $G$, $q(G)=2$ means that there is a matrix $A\in S(G)$ such that $A$ has exactly two distinct eigenvalues, and there is no matrix in $S(G)$ with only one eigenvalue. Therefore, the minimal polynomial of $A$ has degree two; thus $A$ satisfies $A^2=\alpha A + \beta I$, for some scalars $\alpha$ and $\beta$. This implies that $A$ and $A^2$ have exactly the same zero-nonzero pattern on the off-diagonal entries. Equivalently, for any nonempty graph $G$, $q(G)=2$ if and only if $S(G)$ contains a real symmetric orthogonal matrix $Q$. Using this, we can show the following results with the aid of Theorem \ref{thm:uniqueminpath}. \begin{lemma}\label{nopendant} If $q(G)=2$, for a connected graph $G$ on $n\geq 3$ vertices, then $G$ has no pendant vertex. \end{lemma} \begin{proof} Suppose vertex $v_1$ is pendant and suppose its unique neighbor is $v_2$. Since $G$ is connected and has at least $3$ vertices, there is another vertex $v_3$ that is adjacent to $v_2$.
Thus there is a unique shortest path from $v_1$ to $v_3$ of length $2$ and the result follows from Theorem~\ref{thm:uniqueminpath}.\end{proof} The previous basic result is contained in the next slight generalization by noting that any edge incident with a pendant vertex is a {\em cut edge} (that is, its deletion results in a disconnected graph). \begin{lemma}\label{nocutedge} Suppose $G$ is a connected graph on $n$ vertices with $n\geq 3$. If $q(G)=2$, then there is no cut edge in the graph $G$. \end{lemma} \begin{proof} Assume that vertices $v_1$ and $v_2$ form a cut edge. We can assume without loss of generality that there is another vertex $v_3$ in $G$ that is adjacent to $v_2$. Thus there is a unique shortest path from $v_1$ to $v_3$ of length $2$ and the result follows from Theorem~\ref{thm:uniqueminpath}. \end{proof} The next result should be compared to Theorem \ref{carlos}. \begin{corollary} If $G$ is a graph on $n\geq 3$ vertices with $q(G)=2$, then every edge in $G$ is contained in a cycle. \end{corollary} \begin{proof} Lemmas \ref{nopendant} and \ref{nocutedge} together imply this result. \end{proof} Consider $\alpha\subseteq\{1,2,\ldots,m\}$ and $\beta\subseteq\{1,2,\ldots,n\}$. For a matrix $A\in M_{m,n}$, $A[\alpha,\beta]$ denotes the submatrix of $A$ lying in rows indexed by $\alpha$ and columns indexed by $\beta$. Recall that for any vertex $v$ of a graph $G$, the {\em neighborhood set of $v$}, denoted by $N(v)$, is the set of all vertices in $G$ adjacent to $v$. \begin{theorem}\label{neighbors} For a connected graph $G$ on $n$ vertices, if $q(G)=2$, then for any independent set of vertices $\{v_1,v_2,\dots, v_k\}$, we have \[ \left| \bigcup_{i \neq j} ( N(v_i) \cap N(v_j) ) \right | \geq k. 
\] \end{theorem} \begin{proof} For the purpose of a contradiction, suppose $G$ is a graph with $q(G)=2$ and that there exists an independent set $S$ with $|S|=k$ such that \[ \left| \bigcup_{i\neq j} (N(v_i) \cap N(v_j)) \right| < k, \] and let $X = \bigcup_{i\neq j} (N(v_i) \cap N(v_j))$. Using Lemma~\ref{lem:01}, there exists a symmetric orthogonal matrix $A\in S(G)$. Consider such $A$ and let $B=A[S,\{1,\ldots,n\}]$. Observe that $B$ is a $k \times n$ matrix and any column of $B$ not indexed with $X$ contains at most one nonzero entry. Since the rows of $B$ are orthogonal, we deduce that rows of $C=A[S,X]$ must also be orthogonal. However, $C$ is a $k \times |X|$ matrix with $k>|X|$, and orthogonality of these rows is impossible, as they are all nonzero. This completes the proof. \end{proof} The next two statements are immediate, yet interesting, consequences of Theorem \ref{neighbors}. \begin{corollary}\label{cor:22adj} Let $G$ be a connected graph on $n\geq 3$ vertices with $q(G)=2$. Then, any two non-adjacent vertices must have at least two common neighbors. \end{corollary} \begin{corollary} Suppose $q(G)=2$, for a connected graph $G$ on $n\geq 3$ vertices. If the vertex $v_1$ has degree exactly two with adjacent vertices $v_2$ and $v_3$, then every vertex $v$ that is different from $v_2$ and $v_3$, has exactly the same neighbors as $v_1$. \end{corollary} Along these lines, we also note that if $G$ is a connected graph with $q(G)=2$, then for any independent set of vertices $S$, we have $|S|\leq n/2$. Thus for any graph $G$ with $q(G)$ being two, we have a basic upper bound on the size of independent sets in $G$. As a final example, recall that $q(K_n)=2$ whenever $n \geq 2$. We can build on this result for complete graphs with a single edge deleted. \begin{proposition} Suppose $G$ is obtained from $K_n$ by deleting a single edge $e$. Then \[ q(G) = \left\{ \begin{array} {ll} 1, & {\rm if} \; n=2; \\ 3, & {\rm if } \; n=3; \\ 2, & {\rm otherwise}. 
\end{array}\right.\] \end{proposition} \begin{proof} The cases $n=2,3$ follow easily from previous facts. So suppose $n \geq 4$. We will construct a symmetric orthogonal matrix $Q$ in $S(G)$, assuming the edge deleted was $e = \{1,n\}$, without loss of generality. In this case set, \[ u_1 = \frac{1}{\sqrt{n-1}} \left[ \begin{array}{c} 1 \\ \hline e_{n-2} \\ \hline 0 \end{array} \right], \] where $e_{n-2}$ is the $(n-2)$-vector of all ones. Then choose $u_2'$ to be orthogonal to $u_1$ as follows \[ u_2' = \left[ \begin{array}{c} 0 \\ \hline e_{n-3} \\ \hline -(n-3) \\ \hline 1 \end{array} \right], \] where $e_{n-3}$ is the $(n-3)$-vector of all ones. Then set $u_2 = \frac{1}{\left\| u_2'\right\|} u_2'$. Finally, set $Q = I-2(u_1u_1^T + u_2u_2^T)$. Then it follows that $Q$ is orthogonal and a basic calculation will show that $Q \in S(G)$. Hence $q(G)=2$. \end{proof} \section{Join of two graphs} In the previous section we found several restrictions on a graph $G$ for which $q(G)=2$. In this section, we show that, despite these restrictions, a surprisingly large number of graphs satisfy this property. Let $G$ and $H$ be graphs, then the {\em join of $G$ and $H$}, denoted by $G \vee H$, is the graph with vertex set $V(G) \cup V(H)$ and edge set $E(G) \cup E(H) \cup \{\{g,h\} \;|\; g\in V(G), h\in V(H)\}$. A real matrix $R$ of order $n$ is called an {\em $M$-matrix} if it can be written in the form $R=sI-B$ for some $s>0$ and entry-wise nonnegative matrix $B$ such that its spectral radius satisfies $\rho(B)\leq s$. Recall that the {\em spectral radius} of a square matrix $B$ is defined to be $\rho(B) = \max \{ |\lambda| : \lambda \in \sigma(B) \}$. In the case that $\rho(B)<s$, then $R$ is called a {\em nonsingular $M$-matrix}. Recall that for $A \in M_n$, we call $B \in M_n$ a {\em square root of $A$} if $B^2=A$. 
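Returning briefly to the proposition above, the explicit matrix $Q=I-2(u_1u_1^T+u_2u_2^T)$ constructed for $K_n$ minus an edge is easy to verify numerically. The following sketch is purely illustrative and not part of the formal argument; the choice $n=5$ is ours.

```python
import numpy as np

# Illustrative sanity check (not part of the formal proof) of the orthogonal
# matrix constructed for K_n minus the edge e = {1, n}; the choice n = 5 is ours.
n = 5
u1 = np.concatenate(([1.0], np.ones(n - 2), [0.0])) / np.sqrt(n - 1)
u2 = np.concatenate(([0.0], np.ones(n - 3), [-(n - 3.0)], [1.0]))
u2 /= np.linalg.norm(u2)

Q = np.eye(n) - 2 * (np.outer(u1, u1) + np.outer(u2, u2))

assert np.allclose(Q, Q.T)                      # Q is symmetric
assert np.allclose(Q @ Q, np.eye(n))            # orthogonal involution, so eigenvalues are +-1
assert abs(Q[0, n - 1]) < 1e-12                 # zero exactly at the deleted edge {1, n}
for i in range(n):                              # every other off-diagonal entry is nonzero
    for j in range(i + 1, n):
        if (i, j) != (0, n - 1):
            assert abs(Q[i, j]) > 1e-12
assert len(set(np.round(np.linalg.eigvalsh(Q), 8))) == 2   # two distinct eigenvalues
```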
In \cite{AS82}, it is shown that an $M$-matrix $R$ has an $M$-matrix as a square root if and only if $R$ has a certain property (which the authors refer to as {\em property c}). It is also known that all nonsingular $M$-matrices have ``property c''. The following theorem is proved in \cite{AS82}. If $P$ is a square matrix, $\diag(P)$ means the diagonal entries of $P$. \begin{theorem}\cite[Thm. $4$]{AS82}\label{ASthm} Let $R$ be an $M$-matrix of order $n$, and let $R=s(I-P)$ be a representation of $R$ for sufficiently large $s$ such that $\diag(P)$ is entry-wise positive and $\rho(P)\leq 1$. Then $R$ has an $M$-matrix as a square root if and only if $R$ has ``property c.'' In this case, let $Y^*$ denote the limit of the sequence generated by \[ Y_{i+1}=\frac{1}{2}(P+Y_i^2),\quad Y_0=0. \] Then $\sqrt s(I-Y^*)$ is an $M$-matrix with ``property c'' which is a square root of $R$. \end{theorem} Using Theorem \ref{ASthm}, we can prove the following. \begin{theorem}\label{join:thm} Let $G$ be a connected graph, then $q(G\vee G)=2$. \end{theorem} \begin{proof} Suppose $G$ is a connected graph on $n$ vertices. The goal of this proof is to construct a matrix $P$ such that \[ Q =\left[ \begin{array}{cc} \sqrt{P} & \sqrt{I-P} \\ \sqrt{I-P} & -\sqrt{P} \end{array} \right] \] is in $S(G \vee G)$. If we can construct such a matrix $P$ then $Q^2 = \left[ \begin{array}{cc} I & 0\\ 0 & I \end{array} \right]$ and $Q$ has exactly two eigenvalues. Let $A(G)$ be the adjacency matrix of $G$ and set \[ P=\frac{2n-1}{4n^2}\left(\frac{1}{n}A(G)+I\right)^2. \] Note that $\diag(P)$ is entry-wise positive. By Gershgorin's disc theorem (see \cite[pg. 89]{BR}), every eigenvalue of $\frac{1}{n}A(G)+I$ belongs to the interval $(0,2)$, and hence, every eigenvalue of $P$ belongs to the interval $\left(0,\frac{2n-1}{n^2}\right)$. Therefore, $\rho(P)\leq 1$. Consider the matrix $R=I-P$. Then $R$ is an $M$-matrix that satisfies the conditions of Theorem \ref{ASthm} with $s=1$. 
Note that if $S$ is a matrix with eigenvalue $\lambda$, then $1-\lambda$ is an eigenvalue of $I-S$. Thus, the eigenvalues of $R$ are in the interval $\left(\left(\frac{n-1}{n}\right)^2,1\right)$. Hence, $R$ is nonsingular and thus has ``property c.'' By Theorem \ref{ASthm}, $R$ has an $M$-matrix as a square root of the form $I-Y^*$, where $Y^*$ is the limit of the sequence generated by \begin{eqnarray} Y_{i+1}=\frac{1}{2}(P+Y_i^2),\quad Y_0=0.\label{ASeqn} \end{eqnarray} Note that $Y^*$ satisfies \begin{eqnarray} (I-Y^*)^2+\frac{2n-1}{4n^2}\left(\frac{1}{n}A(G)+I\right)^2=I.\label{sq-eqn} \end{eqnarray} As $G$ is a connected graph, its adjacency matrix $A(G)$ is an irreducible nonnegative matrix. Thus, $(aA(G)+bI)^n>0$ for every $a,b>0$, and hence, $Y^*>0$ by (\ref{ASeqn}). As $P$ is a real symmetric matrix, the sequence generated by (\ref{ASeqn}) (and consequently its limit) consists of real symmetric matrices. In particular, $Y^*$ is a real symmetric matrix that may be written as a polynomial in $A(G)$. Therefore, $I-Y^*$ commutes with $A(G)$. If $\lambda_1,\lambda_2,\ldots,\lambda_n$ are the eigenvalues of $Y^*$, then \[ {\rm trace}(Y^*)=\sum_{i=1}^n \lambda_i<1 \] as each eigenvalue of $Y^*$ belongs to the interval $\left(0,\frac{1}{n}\right)$. Therefore, $\diag(I-Y^*)>0$, and since $Y^*>0$, the off-diagonal entries of $I-Y^*$ are negative; hence $I-Y^*$ is an entry-wise nonzero matrix. Finally, consider the block matrix \[ Q=\left[\begin{array}{cc} \frac{\sqrt{2n-1}}{2n}\left(\frac{1}{n}A(G)+I\right) & I-Y^*\\ I-Y^* & -\frac{\sqrt{2n-1}}{2n}\left(\frac{1}{n}A(G)+I\right)\\ \end{array}\right]. \] By (\ref{sq-eqn}), $Q$ is an orthogonal matrix with two distinct eigenvalues. As $I-Y^*$ is entry-wise nonzero, $Q\in S(G\vee G)$; hence, $q(G\vee G)=2$. \end{proof} Recall that for any graph $G=(V,E)$, the graph $\overline{G} = (V, \overline{E})$ is called the {\em complement of $G$} whenever $\overline{E} = \{ \{i,j\} | \{i,j\} \not\in E\}$.
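The construction in the proof of Theorem~\ref{join:thm} can be verified numerically on a small example. The sketch below is illustrative only (the choice $G=P_3$ and the iteration count are our assumptions, not part of the proof); it checks that $Q$ is an orthogonal involution, has two distinct eigenvalues, and has the zero/nonzero pattern of $S(G\vee G)$.

```python
import numpy as np

# Illustrative numerical check (not part of the proof) of the construction in
# Theorem join:thm; the choice G = P_3 and the iteration count are ours.
n = 3
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)  # adjacency matrix of P_3
M = A / n + np.eye(n)
P = (2 * n - 1) / (4 * n ** 2) * (M @ M)

Y = np.zeros((n, n))                 # fixed-point iteration Y_{i+1} = (P + Y_i^2)/2
for _ in range(500):
    Y = 0.5 * (P + Y @ Y)

B = np.eye(n) - Y                    # the M-matrix square root of I - P
assert np.allclose(B @ B + P, np.eye(n))
assert np.all(np.abs(B) > 1e-8)      # entrywise nonzero, giving all join edges

c = np.sqrt(2 * n - 1) / (2 * n)
Q = np.block([[c * M, B], [B, -c * M]])
assert np.allclose(Q @ Q, np.eye(2 * n))                    # orthogonal involution
assert np.isclose(Q[0, 2], 0)                               # vertices 1 and 3 of P_3 stay non-adjacent
assert len(set(np.round(np.linalg.eigvalsh(Q), 6))) == 2    # two distinct eigenvalues
```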
\begin{corollary} There are graphs $G$ for which the gap between $q(G)$ and $q(\overline{G})$ can grow without bound as a function of the number of vertices of $G$. \end{corollary} \begin{proof} Let $G = \overline{P_n} \vee \overline{P_n}$ with $n \geq 4$. Then $q(G) =2$, while $q(\overline{G}) = q(P_n \cup P_n) = n$. \end{proof} Also, note that the analogue of Theorem \ref{join:thm} need not hold for the join of two different graphs. For example, using Theorem \ref{neighbors}, we have that $q(P_1 \vee P_4) >2$. It is still unresolved whether or not the condition that $G$ be connected is required in Theorem \ref{join:thm}. \section{Bipartite Graphs and Graph Products} \label{sec:bipartite} Let $G$ be a bipartite graph with parts $X$ and $Y$ such that $0<|X|=m\leq n=|Y|$. Define $\mathcal{B}(G)$ to be the set of all real $m\times n$ matrices $B=[b_{ij}]$ whose rows and columns are indexed by $X$ and $Y$, respectively, and for which $b_{ij}\neq 0$ if and only if $\{i,j\}\in E(G)$. We have the following: \begin{theorem}\label{bipartite} For any non-empty bipartite graph $G$, if $B\in \mathcal{B}(G)$, then $q(G)\leq 2 q(BB^T) +1$. \end{theorem} \begin{proof} Let $B\in \mathcal{B}(G)$ and consider $A\in \mathcal{S}(G)$ with $A=\left[ \begin{array}{cc} 0& B \\ B^T & 0 \\ \end{array} \right].$ It is well known that $BB^T$ and $B^TB$ have the same nonzero eigenvalues, so the number of distinct nonzero eigenvalues of $A^2$ is at most $q(BB^T)$. Moreover, the eigenvalues of $A$ are of the form $\pm \sqrt{\lambda}$, where $\lambda$ is an eigenvalue of $A^2$. Thus, $A$ has at most $2q(BB^T)+1$ distinct eigenvalues. \end{proof} If $B$ is square, then $B^TB$ and $BB^T$ have the same eigenvalues, which implies the following corollary. \begin{corollary}\label{cor:bipartite2q} For any non-empty bipartite graph $G$ with equal sized parts and any $B\in \mathcal{B}(G)$, $q(G)\leq 2 q(BB^T)$.
\end{corollary} \begin{lemma}\label{lem:bipartite2} For any non-empty bipartite graph $G$, if there is a matrix $B\in \mathcal{B}(G)$ with orthogonal rows and orthogonal columns, then $q(G)=2$. \end{lemma} \begin{proof} If $B\in \mathcal{B}(G)$ has orthogonal rows and orthogonal columns, then $B$ is a square matrix. Consider $A\in \mathcal{S}(G)$ with $ A=\left[ \begin{array}{cc} 0& B \\ B^T & 0 \\ \end{array} \right].$ Then, $A^2=I$, which implies that $A$ has at most two distinct eigenvalues. Thus, by Lemma~\ref{lem:q=1}, $q(G)=2$. \end{proof} \begin{proposition}\label{m=n} Consider a bipartite graph $G$ with parts $X$ and $Y$. If $q(G)=2$, then $|X|=|Y|$ and there exists an orthogonal matrix $B\in \mathcal{B}(G)$. \end{proposition} \begin{proof} Label the vertices of $G$ so that the vertices of $X$ come first. Then, any matrix in $S(G)$ is of the form $A=\left[ \begin{array}{cc} D_1& B \\ B^T & D_2 \\ \end{array} \right],$ where $D_1\in M_{|X|}$ and $D_2\in M_{|Y|}$ are diagonal matrices and $B\in \mathcal{B}(G)$. Since $q(G)=2$, using Lemma~\ref{lem:01}, $A\in S(G)$ can be chosen with eigenvalues $-1,1$, therefore $A^2=I$. On the other hand, \[ A^2=\left[ \begin{array}{cc} D_1^2+BB^T& D_1B+BD_2 \\ B^T D_1+ D_2 B^T & B^T B +D_2^2 \\ \end{array} \right]. \] This implies that $B B^T$ and $B^T B$ are diagonal. Therefore the rows and columns of $B$ are orthogonal, and hence $|X|=|Y|$. \end{proof} For any $n\geq 1$, there is a real orthogonal $n\times n$ matrix all of whose entries are nonzero. For $n=1,2$ this is trivial, and for $n>2$, the matrix $B=I-\frac{2}{n} J$ is such an orthogonal matrix. Using the above example and Lemma~\ref{lem:bipartite2}, we have the following. \begin{corollary}\label{K_{m,n}} For any $m,n$ with $1\leq m\leq n,$ \[ q(K_{m,n}) = \left\{ \begin{array}{lr} 2, & \textrm{if } m=n; \\ 3, & \textrm{if } m<n. \end{array} \right.
\] \end{corollary} \begin{proof} If $m=n$, it is enough to use the real orthogonal matrix in the example preceding this corollary in Lemma~\ref{lem:bipartite2}. If $m<n$, then according to Proposition~\ref{m=n}, we have $q(K_{m,n})\geq 3$. On the other hand, the adjacency matrix of $K_{m,n}$ has 3 distinct eigenvalues. This completes the proof. \end{proof} Next, we consider a family of bipartite graphs for which the lower bound given in Theorem~\ref{thm:uniqueminpath} is tight. This family is closely related to the ``tadpole graphs'' discussed in \cite{F} and is of interest since its members are graphs of two {\em parallel paths} (such graphs are discussed in Section~\ref{sec:n-1}). The exact value of the maximum multiplicity of graphs of two parallel paths is known to be $2$ (see \cite{JLS}). Define $S_{m,n}$ to be the graph consisting of a $4$-cycle on vertices $v_1,u_1,v_2,u_2$ and edges $u_1v_2, v_2u_2, u_2v_1, v_1u_1$, together with a path $P_{m+1}$ starting at vertex $v_1$ and a path $P_{n+1}$ starting at $v_2$, where $P_{m+1}$ and $P_{n+1}$ are disjoint from each other and they intersect the $4$-cycle only on $v_1$ and $v_2$, respectively. Label the vertices on the paths $P_{m+1}$ and $P_{n+1}$ by $u_i$ and $v_i$, alternating the labels $u$ and $v$ so that the graph can be considered as a bipartite graph with parts consisting of vertex sets $\{u_i\}$ and $\{v_i\}$. The graph $S_{m,n}$ has $m+n+4$ vertices, and the graph $S_{4,4}$ is given in Figure~\ref{fig:dart}. \begin{figure} \caption{The graph $S_{4,4}$} \label{fig:dart} \end{figure} \begin{lemma} If $m$ and $n$ have the same parity, then $$q(S_{m,n}) = \max\{m,n\}+2.$$ \end{lemma} \begin{proof} We assume that $m$ and $n$ are both even; the case when they are both odd is similar. We use the above labeling for $S_{m,n}$, and assume that $m \geq n$. Since there is a unique shortest path with $m+2$ vertices from the pendant vertex on $P_{m+1}$ to $u_1$, by Theorem \ref{thm:uniqueminpath} we know that $q(S_{m,n}) \geq m+2$.
Define an $(m+n+4)/2 \times (m+n+4)/2$ matrix $B=[b_{ij}] \in \mathcal{B}(S_{m,n})$ with the rows labeled by the vertices $u_i$ and the columns labeled by the vertices $v_i$. Let $b_{u_2 v_1}=-1$ and $b_{u_2v_2}=b_{u_1v_1}=b_{u_1v_2}=1$. Then, with the proper ordering of the vertices, $BB^T$ has the form $\left[ \begin{matrix} X & 0 \\ 0 & Y \\ \end{matrix} \right]$ where $X$ is an $\frac{m+2}{2}\times \frac{m+2}{2}$ tridiagonal matrix and $Y$ is an $\frac{n+2}{2}\times \frac{n+2}{2}$ tridiagonal matrix. Using the inverse eigenvalue problem for tridiagonal matrices \cite{ld}, it is possible to find entries for $B$ such that the eigenvalues for $X$ are distinct and the eigenvalues for $Y$ are a subset of the eigenvalues of $X$. Thus $q(BB^T) = \frac{m+2}{2}$ and by Corollary~\ref{cor:bipartite2q}, $q(S_{m,n}) \leq m+2$. \end{proof} Since $q(P_n)=n$ and $q(C_n)\approx n/2$, we know that the addition of an edge can dramatically decrease the minimum number of distinct eigenvalues. Here we show that the addition of an edge to a graph can also dramatically increase the minimum number of distinct eigenvalues. To see this, consider the graph $G$ obtained by adding an edge between vertices $u_1$ and $u_3$ in the graph $S_{m,m}$ (see Figure~\ref{fig:dart2}). We know that $q(S_{m,m}) = m+2$, but the new graph $G$ has a unique shortest path that contains $2m+2$ vertices from a pendant vertex to another pendant vertex. Thus, by Theorem \ref{thm:uniqueminpath}, $q(G)\geq 2m+2$, and \begin{figure} \caption{The graph $S_{4,4}$ with the edge $\{u_1,u_3\}$ added} \label{fig:dart2} \end{figure} we may conclude that there exist graphs $G$ and an edge $e$ such that the gap between $q(G)$ and $q(G-e)$ can grow arbitrarily large as a function of the number of vertices. Similarly, if we consider the graph obtained from $S_{m,m}$ by adding a new vertex $w$ and edges $\{w,u_1\}$ and $\{w, u_3\}$, then this new graph has a unique shortest path between the pendant vertices that contains $2m+3$ vertices.
Hence there exists a family of graphs $G$ with a vertex $v$ of degree $2$ such that the gap between $q(G)$ and $q(G\backslash v)$ can grow arbitrarily large as a function of the number of vertices. We now switch gears and consider a graph product and a graph operation in an effort to compute $q$ for more families of graphs. The product that we consider is the {\em Cartesian product}; if $G$ and $H$ are graphs then $G \square H$ is the graph on the vertex set $V(G) \times V(H)$ with $(g_1,h_1)$ and $(g_2,h_2)$ adjacent if and only if either $g_1=g_2$ and $h_1$ and $h_2$ are adjacent in $H$ or $g_1$ and $g_2$ are adjacent in $G$ and $h_1 =h_2$. \begin{theorem}\label{thm:cartesian} Let $G$ be a graph on $n$ vertices; then $q(G \square K_2)\leq 2q(G)-2$. \end{theorem} \begin{proof} Let $A \in S(G)$ with $q(A) = q(G)=\ell$, and assume $\sigma(A)=\{\lambda_1, \lambda_2, \dots, \lambda_{\ell}\}$, where $\lambda_1=1, \lambda_2=-1$ (we may assume this normalization since $aA+bI\in S(G)$ for any $a\neq 0$). Let $\alpha,\beta$ be nonzero scalars, and consider the matrix \[ B = \left[ \begin{array}{cc} \alpha A & \beta I \\ \beta I & -\alpha A \end{array}\right]\in S(G \square K_2). \] Then, \[ B^2 = \left[ \begin{array}{cc} \alpha^2 A^2 + \beta^2I & 0 \\ 0 & \alpha^2 A^2+ \beta^2 I \end{array}\right] \] and the eigenvalues of $B^2$ are of the form $\alpha^2\lambda_i^2 + \beta^2$, for $i=1,\ldots,\ell$. If we choose $\alpha^2+\beta^2 = 1$, then two eigenvalues of $B^2$ are equal to $1$. Since every eigenvalue of $B$ is of the form $\pm\sqrt{\mu}$ for some eigenvalue $\mu$ of $B^2$, this implies that $q(B) \leq 2(\ell-1) = 2q(G) - 2$.\end{proof} The following is implied by Lemma~\ref{lem:q=1} and Theorem~\ref{thm:cartesian}. \begin{corollary}\label{thm:hypercube} If $q(G) =2$, then $q(G \square K_2) = 2$. \end{corollary} Observe that Corollary \ref{thm:hypercube} verifies that the bound in Theorem~\ref{thm:cartesian} can be tight. \begin{corollary}\label{ex:hypercube} If $n\geq 1$ is an integer, then the hypercube, $Q_n$, satisfies $q(Q_n) = 2$.
\end{corollary} \begin{proof} Recall that $Q_n$ can be defined recursively as $Q_n = Q_{n-1} \square K_2$, with $Q_1=K_2$. Since $q(K_2)=2$, the result follows by application of Corollary \ref{thm:hypercube}. \end{proof} Note that the diameter of $Q_n$ is $n$ while $q(Q_n)$ is always $2$, so for a graph that is not a tree the difference between the diameter and the minimum number of distinct eigenvalues can be arbitrarily large, as a function of the number of vertices. Next we consider an operation on a graph. Let $G$ be a graph; then the {\em corona of $G$} is the graph formed by joining a new pendant vertex to each vertex of $G$. \begin{lemma}\label{lem:1corona} Let $G$ be a graph and let $G'$ be the corona of $G$; then $q(G') \leq 2q(G)$. \end{lemma} \begin{proof} Consider the matrix $ B= \left[ \begin{array}{cc} A & I \\ I & 0 \end{array} \right]\in S(G') $ where $A\in S(G)$. Assume that $\lambda$ is an eigenvalue of $B$ with eigenvector $ \left[ \begin{array}{c} x \\ y \end{array} \right]$. Then, $Ax + y = \lambda x$ and $x = \lambda y.$ Hence, $\lambda\neq 0$, and $Ay = \frac{\lambda^2-1}{\lambda} y$. Therefore, $\mu = \frac{\lambda^2 -1}{\lambda}$ is an eigenvalue of $A$. This implies that for each eigenvalue $\mu$ of $A$ there are two real eigenvalues $\lambda=\frac{\mu\pm\sqrt{\mu^2+4}}{2}$ of $B$, which completes the proof.\end{proof} \section{Connected graphs with many distinct eigenvalues} \label{sec:n-1} In this section we address the question of ``which connected graphs have the minimum number of distinct eigenvalues near the number of vertices of the graph''. In Proposition~\ref{paths}, we observed that $q(G) = |V(G)|$ if and only if $G$ is a path. In this section we study the connected graphs $G$ with the property that $q(G) = |V(G)|-1$. To begin we apply Theorem~\ref{thm:uniqueminpath} to derive two families of graphs for which $q$ is one less than the number of vertices.
\begin{proposition} \label{q:n-1-triangle} Let $G$ be the graph with vertices $v_1,v_2,\dots ,v_n$ and edge set $E = \{\{v_1,v_2\} ,\{v_2, v_3\},\ldots ,\{v_{n-1},v_n\}, \{v_i, v_{i+2}\}\}$, where $i$ is fixed and satisfies $1 \leq i \leq n-2$. Then, $q(G) = |V(G)|-1$. \end{proposition} \begin{proposition}\label{Cor:treeswithhighq} Let $G$ be the graph with vertices $v_1,v_2,\dots ,v_n$ and edge set $E = \{\{v_1,v_2\} ,\{v_2, v_3\},\ldots ,\{v_{n-2},v_{n-1}\}, \{v_i, v_{n}\}\}$, where $i$ is fixed and satisfies $2 \leq i \leq n-2$. Then, $q(G) = |V(G)|-1$. \end{proposition} Using Proposition~\ref{mrandq}, we may deduce that any graph $G$ for which $q(G) = |V(G)|-1$ satisfies $\M(G) = 2$. However, even more can be said about such graphs. \begin{theorem}\label{Cor:graphsswithhighq} If $G$ is a graph that satisfies $q(G)=|V(G)|-1$, then $G$ has the following properties: \begin{enumerate} \item $\M(G) = 2$. \item If $A$ is in $S(G)$ and $A$ has a multiple eigenvalue, then $A$ has exactly one eigenvalue of multiplicity two, and all remaining eigenvalues are simple. \end{enumerate} \end{theorem} The next result verifies that the graphs in Proposition~\ref{Cor:treeswithhighq} are the only trees with $q(G)=|V(G)|-1$. \begin{lemma} \label{treeqn-1} Suppose $T$ is a tree. If $q(T) = |V(T)|-1$, then $T$ consists of a path $P_{|V(T)|-1}$, along with a pendant vertex adjacent to a non-pendant vertex in this path. \end{lemma} \begin{proof} Since $q(T)=|V(T)|-1$, it follows from Proposition~\ref{mrandq} that $\mr(T) \geq |V(T)|-2$. Using Theorem~\ref{Cor:graphsswithhighq} (1), $\M(T)=2$, and hence the vertices of $T$ can be covered by two vertex-disjoint paths (see \cite{JD}). Therefore, $T$ consists of two induced paths $P_1$ and $P_2$ that cover all of the vertices of $T$ along with exactly one edge connecting $P_1$ and $P_2$. Then $T$ has maximum degree equal to three and contains at most two vertices of degree three.
Using Theorem~\ref{Cor:graphsswithhighq} (2), if $q(T)=|V(T)|-1$, then any matrix $A\in S(T)$ realizing an eigenvalue of (maximum) multiplicity two has all other eigenvalues simple. In \cite{JS}, all such trees have been characterized for all values of $\M(T)$. In particular, from Theorem 1 in \cite{JS}, we may conclude that the subgraph of $T$ induced by the vertices of degree at least three must be empty. Thus, $T$ has exactly one vertex of degree three. Furthermore, deletion of the vertex of degree three yields at most two components that contain more than one vertex (see \cite[Thm. 1]{JS}), and hence $T$ must be of the claimed form. \end{proof} Characterizing general connected graphs $G$ with the property that $q(G) = |V(G)|-1$ appears to be rather more complicated. By Theorem~\ref{Cor:graphsswithhighq}, we can restrict attention to certain graphs with $\M(G) =2$. Fortunately, the graphs with $\M(G) =2$ have been characterized in~\cite{JLS} and they include the graphs known as graphs of two parallel paths. A graph $G$ is {\em a graph of two parallel paths} if there exist two disjoint induced paths (each on at least one vertex) that cover the vertices of $G$ and any edge between these two paths can be drawn so as not to cross other edges (that is, there exists a planar embedding of $G$). The graphs $S_{m,n}$ described in Section~\ref{sec:bipartite} are examples of graphs of two parallel paths that satisfy $q(S_{m,n} ) < |V(S_{m,n})| -1$. Using \cite{JLS}, our investigation reduces to testing which graphs $G$, either graphs of two parallel paths or graphs from the exceptional list given in \cite{JLS}, satisfy $q(G) = |V(G)|-1$. We first consider those graphs identified as exceptional type in \cite[Fig. B1]{JLS}.
We let $C_5$, $C_5'$ and $C_5''$ denote the graphs pictured in Figure \ref{exp-type}, and refer to them as base exceptional graphs, from which all other exceptional graphs can be formed by attaching paths of various lengths to the five vertices in each of $C_5$, $C_5'$ and $C_5''$. \begin{figure} \caption{Base Exceptional Graphs} \label{exp-type} \end{figure} \begin{lemma}\label{coreexp} Each of the graphs $C_5$, $C_5'$ and $C_5''$ in Figure \ref{exp-type} satisfies \[ q(C_5) =q(C_5') = q(C_5'') = 3. \] \end{lemma} \begin{proof} We already know that $q(C_5)=3$. For the remaining equalities it is enough to demonstrate the existence of a matrix with three distinct eigenvalues, since Theorem~\ref{thm:uniqueminpath} implies $q(C_5'), q(C_5'')\geq 3$. Consider $C_5''$ first. Let $A \in S(C_5'')$ be of the form \[ A = \left[ \begin{array}{c|c} L & b \\ \hline & \\ b^T & a \end{array} \right], \] where \[ L = \left[ \begin{array}{cccc} 1 & -1 & 0 & 0 \\ -1 & 2 & -1 & 0 \\ 0 & -1 & 2 & -1 \\ 0 & 0 & -1 & 1 \end{array} \right] \] is the Laplacian matrix for a path on four vertices, namely $\{1,2,3,4\}$. We will determine $a$ and $b$ based on some conditions in what follows. It is not difficult to check that the eigenvalues of $L$ are $\{0,2, 2\pm \sqrt{2}\}$. The objective here is to choose $a$ and $b$ so that $A \in S(C_5'')$ and the eigenvalues of $A$ are $\{ 0,0,2,2,\lambda\}$ ($\lambda \neq 0, 2$). This can be accomplished by satisfying the following conditions: \begin{enumerate} \item $b = Lu = (L-2I)w$ for some real vectors $u,w$; \item $a = u^T Lu = w^T (L-2I)w +2$; and \item $b$ has no zero entries.\end{enumerate} If the unit eigenvectors of $L$ associated with $2 \pm \sqrt{2}$ are $x_2$ and $x_3$, respectively, then we know that $u,w$ must be in the span of $\{x_2, x_3\}$. Hence we can write \[ u = \alpha_1 x_2 + \beta_1 x_3,\; {\rm and} \; w = \alpha_2 x_2 + \beta_2 x_3, \] for some scalars $\alpha_1, \beta_1, \alpha_2, \beta_2$.
In this case, (2) can be re-written as \[ (2+ \sqrt{2})\alpha_1^2+ (2-\sqrt{2})\beta_1^2 = \sqrt{2}\alpha_2^2 - \sqrt{2}\beta_2^2 +2,\] and (1) can be re-written as \[ (2+ \sqrt{2})\alpha_1 x_2+ (2-\sqrt{2})\beta_1 x_3 = \sqrt{2}\alpha_2 x_2 - \sqrt{2}\beta_2 x_3.\] Since $\{x_2,x_3\}$ forms a linearly independent set of vectors, we have \[ \alpha_2 = \left( \frac{2+\sqrt{2}}{\sqrt{2}}\right) \alpha_1, \; {\rm and} \; \beta_2 = \left( \frac{\sqrt{2}-2}{\sqrt{2}}\right) \beta_1.\] Substituting these values back into (2) gives, \[ (2+ \sqrt{2})\alpha_1^2 = (2-\sqrt{2})\beta_1^2 - \sqrt{2}.\] Thus choosing $\beta_1$ large enough suffices to satisfy all of the conditions (1)-(3) above. For example, if \[\beta_1 = 2 \; {\rm and} \; \alpha_1 = - \sqrt{\frac{(2-\sqrt{2})\beta_1^2 - \sqrt{2}}{2+\sqrt{2}}},\] then $A$, as constructed above, will have the desired form (that is $A \in S(C_5'')$) and with prescribed eigenvalues $\{0,0,2,2,18-9\sqrt{2}\}$. (The actual entries of $A$ cannot be easily simplified so we have not displayed them here.) Hence $q(C_5'')=3$. Similar arguments can be applied to the graph $C_5'$, to conclude that $q(C_5')=3$ as well. \end{proof} In fact, using the above techniques and the results obtained thus far, we may deduce the following result. \begin{theorem}\label{main-atmost5} For connected graphs $G$ on at most five vertices, only the graphs from Propositions \ref{q:n-1-triangle} and \ref{Cor:treeswithhighq} satisfy $q(G)=|V(G)|-1$. \end{theorem} We have poured considerable effort into extending the above fact to larger orders, but this still has not been resolved. However, we have a strong suspicion that this fact can be extended. For instance, by Lemma \ref{treeqn-1}, this is true for trees. Furthermore, using Lemma \ref{coreexp} and considering the result in the previous section on coronas, we feel strongly that all of the exceptional graphs $G$ listed in \cite[Fig. B1]{JLS} satisfy $q(G)< |V(G)|-1$.
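The matrix built in the proof of Lemma~\ref{coreexp} for $C_5''$ is easy to check numerically. The sketch below is illustrative only; it reconstructs $A$ from the stated choices $\beta_1=2$ and $\alpha_1$ and confirms the prescribed spectrum $\{0,0,2,2,18-9\sqrt{2}\}$.

```python
import numpy as np

# Illustrative check (not part of the proof) of the matrix built for C_5''
# with the stated choices beta1 = 2 and alpha1.
s2 = np.sqrt(2.0)
L = np.array([[1, -1, 0, 0],
              [-1, 2, -1, 0],
              [0, -1, 2, -1],
              [0, 0, -1, 1]], dtype=float)        # Laplacian of the path on 4 vertices

vals, vecs = np.linalg.eigh(L)
x2 = vecs[:, np.argmin(np.abs(vals - (2 + s2)))]   # unit eigenvector for 2 + sqrt(2)
x3 = vecs[:, np.argmin(np.abs(vals - (2 - s2)))]   # unit eigenvector for 2 - sqrt(2)

beta1 = 2.0
alpha1 = -np.sqrt(((2 - s2) * beta1 ** 2 - s2) / (2 + s2))
u = alpha1 * x2 + beta1 * x3
b = L @ u
a = u @ L @ u

A = np.block([[L, b[:, None]], [b[None, :], np.array([[a]])]])
assert np.all(np.abs(b) > 1e-8)                    # condition (3): b has no zero entries
assert np.allclose(np.sort(np.linalg.eigvalsh(A)),
                   np.sort([0.0, 0.0, 2.0, 2.0, 18 - 9 * s2]))
```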
\section{Possible future directions} There are many open questions concerning the minimum number of distinct eigenvalues of a graph. In this section we list some of them that we find interesting and provide some possible directions along these lines. We have seen that adding an edge or a vertex can dramatically change the minimum number of distinct eigenvalues of a graph, but we suspect that adding a pendant vertex to a graph can increase the minimum number of distinct eigenvalues by at most one. The next problem that we plan to work on is to determine how adding pendant vertices to a graph affects the minimum number of distinct eigenvalues. We are also interested in how other graph operations affect the minimum number of distinct eigenvalues. For example, can we determine the minimum number of distinct eigenvalues of a graph that is the vertex sum of two graphs? Or what is the value of $q(G_1 \vee G_2)$ or $q(G \vee G \vee G)$ in general? Similarly, does Theorem \ref{join:thm} still hold if $G$ is disconnected? We formulate the following unresolved question for the join of two distinct graphs. If $G_1$ and $G_2$ are connected graphs and $|q(G_1) -q(G_2)|$ is small, then is $q(G_1 \vee G_2)=2$? Another unresolved issue deals with strongly-regular graphs. For any nonempty strongly-regular graph $G$, it is clear that $2 \leq q(G)\leq 3$. Thus a key question is: which strongly-regular graphs satisfy $q(G) =2$? The complete bipartite graph $K_{n,n}$ and the complete graph $K_n$ are examples of such graphs. By Corollary~\ref{cor:22adj}, if $G$ is a strongly-regular graph with parameters $(n,k,a,c)$ (see \cite[Chap. 5]{BR}), where $c$ is the number of mutual neighbors of any two non-adjacent vertices, and $q(G) = 2$, then $c \geq 2$ (but this is hardly a strong restriction). However, this restriction on $c$ does verify that the minimum number of distinct eigenvalues for the Petersen graph is three.
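The Petersen graph computation just mentioned is simple to reproduce. The following sketch is illustrative only; it models the Petersen graph as the Kneser graph $K(5,2)$ and confirms that non-adjacent vertices have exactly $c=1$ common neighbor, while the adjacency matrix attains three distinct eigenvalues.

```python
import numpy as np
from itertools import combinations

# Illustrative check of the Petersen graph parameters quoted above, modeling
# it as the Kneser graph K(5,2): vertices are 2-subsets of {0,...,4},
# adjacent exactly when disjoint.
V = list(combinations(range(5), 2))
n = len(V)
A = np.zeros((n, n), dtype=int)
for i in range(n):
    for j in range(n):
        if i != j and not set(V[i]) & set(V[j]):
            A[i, j] = 1

assert n == 10 and A.sum(axis=1).tolist() == [3] * n   # 3-regular on 10 vertices
common = A @ A                                          # entry (i, j) counts common neighbors
for i in range(n):
    for j in range(i + 1, n):
        if A[i, j] == 0:
            assert common[i, j] == 1   # c = 1 < 2, so q = 2 is ruled out by Corollary cor:22adj
assert len(set(np.round(np.linalg.eigvalsh(A), 6))) == 3   # spectrum {3, 1, -2}: three distinct values
```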
In addition, the complete multipartite graphs of the form $G=K_{n_1, n_1, n_2, n_2, \dots, n_k, n_k}$, where the $n_i$ are arbitrary positive integers, have $q(G)=2$, because $G$ is the join of $K_{n_1,n_2,\dots, n_k}$ with itself; moreover, when all of the $n_i$ are equal, $G$ is strongly regular. Also, $q(K_{2,2,2})=2$ as a $6 \times 6$ real symmetric orthogonal matrix can be constructed with $2 \times 2$ zero blocks on the diagonal and Hadamard-like $2 \times 2$ matrices off the diagonal. Observe that $K_{2,2,2}$ is also strongly regular but is not the join of a graph with itself. At present we are still not sure about $q(K_{2,2,\dots,2})$ or $q(K_{3,3,3})$. Finally, the last outstanding issue is the characterization of all graphs $G$ for which $q(G) = |V(G)|-1$. Towards this end, as we alluded to in Section 7, we have presented a number of ideas and directions towards a general characterization. {\bf Acknowledgment:} We would like to thank Dr. Francesco Barioli and Dr. Robert Bailey for a number of interesting discussions related to this topic and its connections to certain spectral graph theory problems. \end{document}
\begin{document} \sloppy \title{The extremal number of surfaces} \begin{abstract} In 1973, Brown, Erd\H{o}s and S\'os proved that if $\mathcal{H}$ is a 3-uniform hypergraph on $n$ vertices which contains no triangulation of the sphere, then $\mathcal{H}$ has at most $O(n^{5/2})$ edges, and this bound is the best possible up to a constant factor. Resolving a conjecture of Linial, also reiterated by Keevash, Long, Narayanan, and Scott, we show that the same result holds for triangulations of the torus. Furthermore, we extend our result to every closed orientable surface $\mathcal{S}$. \end{abstract} \section{Introduction} Let $\mathcal{F}$ be a (possibly infinite) family of $r$-uniform hypergraphs. The \emph{Tur\'an number} or \emph{extremal number} of $\mathcal{F}$, denoted by $\mbox{ex}(n,\mathcal{F})$, is the maximum number of edges in an $r$-uniform hypergraph on $n$ vertices which does not contain any member of $\mathcal{F}$ as a subhypergraph. The study of extremal numbers of graphs and hypergraphs is one of the central topics in discrete mathematics, which goes back more than a hundred years to the works of Mantel \cite{Ma} in 1907 and Tur\'an \cite{Tu} in 1941. For a general reference, we refer the reader to the surveys \cite{K11,MPS}. In this paper, we are interested in the extremal number of families which arise from topology. An $r$-uniform hypergraph $\mathcal{S}$ naturally corresponds to the simplicial complex formed by the subsets of the edges of $\mathcal{S}$. Therefore, we can talk about \emph{homeomorphisms} between hypergraphs, meaning the homeomorphisms of the corresponding simplicial complexes. Linial \cite{L08,L18}, as a part of the `high-dimensional combinatorics' programme, proposed to study the following extremal question. Given an $r$-uniform hypergraph $\mathcal{S}$, at most how many edges can an $r$-uniform hypergraph on $n$ vertices have if it does not contain a homeomorphic copy of $\mathcal{S}$?
Let us denote this number by $\mbox{ex}_{hom}(n,\mathcal{S})$. In the case of $r=2$, a celebrated result of Mader \cite{M67} tells us that if a graph $G$ on $n$ vertices avoids a subdivision of the complete graph $K_t$, then $G$ has at most $cn$ edges, where $c=c(t)$ depends on $t$ only. Since every subdivision of a graph $H$ is homeomorphic to $H$, we deduce that $\mbox{ex}_{hom}(n,H)$ is linear for any fixed graph $H$. In this paper, we consider the case $r=3$. Whenever appropriate, we shall talk about triangulations of surfaces rather than homeomorphisms of hypergraphs. An old result of Brown, Erd\H{o}s and S\'os \cite{BES73} states that if a $3$-uniform hypergraph $\mathcal{H}$ on $n$ vertices does not contain a triangulation of the sphere, then $\mathcal{H}$ has $O(n^{5/2})$ edges, and this bound is the best possible up to the constant factor. Inspired by this result, it is natural to study the related problem for the torus and other surfaces as well. Indeed, Linial \cite{L08,L18} proposed the conjecture that if a $3$-uniform hypergraph $\mathcal{H}$ on $n$ vertices does not contain a triangulation of the torus, then $\mathcal{H}$ has at most $O(n^{5/2})$ edges. He proved (unpublished, see \cite{L18} for an outline of the approach he suggests) that if this is true, then this bound is best possible up to a constant factor, and together with Friedgut \cite{L08} they proved the upper bound $O(n^{3-1/3})$. Our main result is the resolution of this conjecture. \begin{theorem}\label{thm:torus} There exists a constant $c>0$ such that if $\mathcal{H}$ is a 3-uniform hypergraph on $n$ vertices which does not contain a triangulation of the torus, then $\mathcal{H}$ has at most $cn^{5/2}$ edges. \end{theorem} We can also generalize our result to every closed orientable surface. A surface is \emph{closed} if it is compact and without boundary.
By the Classification theorem of closed surfaces (see Theorem \ref{thm:class}), every closed orientable surface is homeomorphic to a sphere with $g$ handles for some $g\geq 0$. \begin{theorem}\label{thm:orientable} Let $\mathcal{S}$ be a closed orientable surface. There exists $c=c(\mathcal{S})>0$ such that if $\mathcal{H}$ is a 3-uniform hypergraph on $n$ vertices which does not contain a triangulation of $\mathcal{S}$, then $\mathcal{H}$ has at most $cn^{5/2}$ edges. \end{theorem} For completeness, we will present a proof for matching lower bounds as well. Our argument is based on the aforementioned approach of Nati Linial \cite{L18}. \begin{theorem}\label{thm:lower_bound} Let $\mathcal S$ be a closed surface. Then there exists $c=c(\mathcal{S})>0$ such that the following holds. For every positive integer $n\geq 3$, there exists a $3$-uniform hypergraph on $n$ vertices with at least $c n^{5/2}$ edges that does not contain a triangulation of $\mathcal S$. \end{theorem} In general, Keevash, Long, Narayanan and Scott \cite{KLNS20} proved that for every 3-uniform hypergraph $\mathcal{S}$ there exists $c=c(\mathcal{S})$ such that $\mbox{ex}_{hom}(n,\mathcal{S})\leq cn^{3-1/5}$. It remains open whether the exponent can be pushed down to $5/2$. If so, this would imply our main results. Our paper is organized as follows. In the next subsections, we introduce our notation and give a brief outline of the proof of Theorem~\ref{thm:torus}. In Section~\ref{sect:lower_bound} we prove Theorem~\ref{thm:lower_bound}; in Section \ref{sect:torus}, we prove Theorem \ref{thm:torus}; in Section \ref{sect:surface}, we prove Theorem \ref{thm:orientable}. \subsection{Notation} As usual, $[n]$ denotes the set $\{1,\dots,n\}$, and if $X$ is a set, $X^{(r)}$ is the family of $r$-element subsets of $X$. If $G$ is a graph and $X\subset V(G)$, then $N(X)$ denotes the \emph{neighborhood} of $X$, that is, the set of vertices $v\in V(G)\setminus X$ adjacent to at least one element of $X$.
Let $\mathcal{H}$ be a 3-uniform hypergraph. If $x,y,z\in V(\mathcal{H})$, we write $xyz$ instead of $\{x,y,z\}$ (similarly for graphs as well). If $x\in V(\mathcal{H})$, then $\mathcal{H}_{x}$ is the \emph{link-graph} of $x$. If $x,y\in V(\mathcal{H})$, then $N(x,y)$ is the set of vertices $z$ such that $xyz\in E(\mathcal{H})$. Two edges of $\mathcal{H}$ are \emph{neighboring} if they intersect in two vertices. If $G$ and $H$ are graphs on the same vertex set, then $G\cap H$ is the graph on $V(G)$ with edge set $E(G)\cap E(H)$. We omit the use of floors and ceilings whenever they are not crucial. \subsection{Outline of the proof} In this subsection, we briefly outline the proof of the upper bounds. Let us first summarize the argument of Brown, Erd\H{o}s and S\'os \cite{BES73} for finding a triangulation of the sphere. They show that if a 3-uniform hypergraph $\mathcal{H}$ has $n$ vertices and $\Omega(n^{5/2})$ edges, then $\mathcal{H}$ contains a \emph{double pyramid}. \begin{definition} A \emph{double pyramid} is a 3-uniform hypergraph on $s+2$ vertices $x,x',y_{1},\dots,y_{s}$ for some integer $s\geq 3$, whose edges are $xy_{i}y_{i+1}$ and $x'y_{i}y_{i+1}$ for $i=1,\dots,s$ (indices are taken modulo $s$). The vertices $x$ and $x'$ are called the \emph{apexes} of the double pyramid. \end{definition} Indeed, by a simple averaging argument, one can find two vertices $x,x'$ such that the graph $G:=\mathcal{H}_{x}\cap \mathcal{H}_{x'}$ has at least $n$ edges. But then $G$ contains a cycle $y_{1},\dots,y_{s}$, and $x,x',y_{1},\dots,y_{s}$ forms a double pyramid. Our approach to construct a triangulation of the torus in $\mathcal{H}$ goes by gluing double pyramids together in a cyclic fashion. In order to do this, we show that if $\mathcal{H}$ has $\Omega(n^{5/2})$ edges, then double pyramids are ``all over the place''. 
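As a quick sanity check of the definition (not part of the argument), one can verify computationally that a double pyramid is indeed a simplicial sphere: its Euler characteristic is $2$ and the link graph of every vertex is a cycle. The following Python sketch, with an ad hoc encoding and function names of our own choosing, checks this for small $s$:

```python
from itertools import combinations
from collections import defaultdict

def double_pyramid(s):
    # apexes 'a', 'b' over the cycle y_0, ..., y_{s-1} (encoded as 0..s-1)
    edges = set()
    for i in range(s):
        j = (i + 1) % s
        edges.add(frozenset(['a', i, j]))
        edges.add(frozenset(['b', i, j]))
    return edges

def euler_characteristic(triangles):
    # vertices - 2-edges + faces of the complex spanned by the triangles
    vertices = set().union(*triangles)
    pairs = {frozenset(p) for t in triangles for p in combinations(t, 2)}
    return len(vertices) - len(pairs) + len(triangles)

def link_is_cycle(triangles, v):
    # link graph of v: the 2-edges t - {v} over all triangles t containing v
    adj = defaultdict(set)
    for t in triangles:
        if v in t:
            u, w = t - {v}
            adj[u].add(w)
            adj[w].add(u)
    if not adj or any(len(nb) != 2 for nb in adj.values()):
        return False
    # a connected 2-regular graph is a single cycle: walk around it
    start = next(iter(adj))
    prev, cur, seen = start, next(iter(adj[start])), {start}
    while cur != start:
        seen.add(cur)
        prev, cur = cur, (adj[cur] - {prev}).pop()
    return len(seen) == len(adj)
```

For instance, `euler_characteristic(double_pyramid(5))` evaluates to `2`, and every link graph of `double_pyramid(5)` passes `link_is_cycle`.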
More precisely, we show that almost all pairs of neighboring edges $(e,f)$ in $\mathcal{H}$ have the property that even after deleting a large proportion of the vertices in $V(\mathcal{H})\setminus (e\cup f)$ randomly, we can find a certain double pyramid-like triangulation of the sphere containing $e$ and $f$ with high probability. Then, we find a sequence of neighboring edges forming a ``cycle'' (for the precise notion of a cycle, see Section \ref{section:topology}), and for any pair of neighboring edges $(e,f)$ in the cycle, we find a sphere containing $e$ and $f$. The `supersaturation' property for triangulations of a sphere described above allows us to choose these spheres to be pairwise disjoint outside the cycle. The union of these spheres contains a triangulation of the torus. Now let us outline the proof of Theorem \ref{thm:orientable}. If $\mathcal{S}$ is an orientable surface of genus $g$, then $\mathcal{S}$ is homeomorphic to a sphere with $g$ handles, where each handle can be thought of as a torus glued to the sphere. We show that in general, gluing hypergraphs along an edge does not increase their extremal number by much, which then implies our result. \section{Lower bound for the extremal numbers of surfaces}\label{sect:lower_bound} In this section, we present the proof of Theorem \ref{thm:lower_bound}. We remark that the lower bound construction of Brown, Erd\H{o}s and S\'os \cite{BES73} does not work in our case. Indeed, they used the observation that every triangulation of the sphere contains a hypergraph $T$ consisting of five vertices $x,y_1,y_2,y_3,y_4$ and four edges $xy_{i}y_{i+1}$ for $i=1,2,3,4$ (indices are modulo 4), and they proved that the extremal number of $T$ is already $\Omega(n^{5/2})$. Unfortunately, the torus, for example, has triangulations containing no copy of $T$. Our proof of the lower bound is based on the probabilistic deletion argument. 
\begin{proof}[Proof of Theorem \ref{thm:lower_bound}] Fix a triangulation of $\mathcal S$, and let $v,e,f$ stand for the number of vertices, edges, and faces in this triangulation. By the Classification theorem of closed surfaces (see Theorem \ref{thm:class}), there exists $g\ge 0$ (depending on $\mathcal S$ only) such that $v-e+f = 2-g.$ Moreover, we clearly have $2e = 3f,$ and thus $f = 2v-4+2g$ for any triangulation of $\mathcal S.$ Gao \cite{Gao} has determined the asymptotics (as $v\to \infty$) for the number of unlabelled rooted triangulations of $\mathcal S$ with $v$ vertices. The formula implies that the number $N_v$ of unlabelled triangulations of $\mathcal S$ is at most $C^v$ for some fixed constant $C>1$ (depending on $g$) and any $v$. We are now ready to prove the theorem. Consider a random hypergraph $\mathcal H$ on $n$ vertices, including each triple as an edge of $\mathcal H$ independently with probability $p = c_0 n^{-1/2}$, where $c_0>0$ is a sufficiently small constant to be determined later. Let us count the expected number $\eta$ of triangulations of $\mathcal S$ in $\mathcal H.$ We have $$\mathbb{E}\eta = \sum_{v=4}^n \binom{n}{v} v! N_vp^{2v-4+2g}\le \sum_{v=4}^nn^vC^v c_0^{2v-4+2g}n^{-\frac 12(2v-4+2g)}<n^2/2,$$ provided $c_0<(2C)^{-1}.$ By Markov's inequality, $\eta<n^2$ holds with probability greater than $1/2$. On the other hand, the number of edges of $\mathcal H$ has mean $p\binom{n}{3}\geq \frac{c_0}{12}n^{5/2}$ for $n$ large enough, and it is tightly concentrated around this mean, so it is at least $\frac{c_0}{24}n^{5/2}$ with probability greater than $1/2$. Thus, there exists a choice of $\mathcal H$ with at least $\frac{c_0}{24}n^{5/2}$ edges and with at most $n^2$ different triangulations of $\mathcal S.$ Delete one edge from each such triangulation, obtaining a hypergraph $\mathcal H'$ with at least $\frac{c_0}{24} n^{5/2}-n^2 = \Omega(n^{5/2})$ edges and with no triangulation of $\mathcal S.$ \end{proof} \section{Torus --- Upper bound}\label{sect:torus} In this section, we prove Theorem \ref{thm:torus}. \subsection{Admissible edges}\label{sect:admissibleedges} Let $p,\epsilon\in (0,1]$ and let $k$ be a positive integer. In what follows, we define the notion of a \emph{$(p,\epsilon,k)$-admissible edge} in a graph $G$. 
The definition might seem quite convoluted at first, but it will be convenient to work with later. \begin{definition} Let $e=xy$ be an edge of $G$. Select each vertex of $G$ independently with probability $p$, and let $U$ be the set of selected vertices. Let $A_{e}$ be the event that there are at least $k$ internally vertex disjoint paths with endpoints $x$ and $y$ in $G[U]$ (not counting the length one path $xy$), conditioned on the event that $x,y\in U$. Then $e$ is \emph{$(p,\epsilon,k)$-admissible} if $\mathbb{P}(A_{e})\geq 1-\epsilon$. \end{definition} This section is devoted to the proof of the following lemma. \begin{lemma}\label{lemma:admissible} Let $p,\epsilon\in (0,1]$, and let $k,n$ be positive integers. If $G$ is a graph on $n$ vertices, then all but at most $\frac{2k}{p^2\epsilon}n$ edges of $G$ are $(p,\epsilon,k)$-admissible. \end{lemma} In the proof of this lemma, we will use the following well-known theorem of Mader \cite{M72}, which tells us that every graph of large average degree contains a subgraph of high connectivity. \begin{claim}[Mader's theorem]\label{claim:connected} If $G$ is a graph with average degree at least $4k$, then $G$ contains a $(k+1)$-vertex-connected subgraph. \end{claim} \begin{proof}[Proof of Lemma \ref{lemma:admissible}] Say that an edge $e\in E(G)$ is \emph{bad} if it is not $(p,\epsilon,k)$-admissible, and let $N$ be the number of bad edges. Let $U$ be a subset of $V(G)$ we get by selecting each vertex of $G$ independently with probability $p$. For every edge $e$, let $B_{e}$ be the event that both endpoints of $e$ are in $U$, and there are no $k$ internally vertex disjoint paths (other than the length one path) connecting the endpoints of $e$ in $G[U]$. If $e$ is bad, then $\mathbb{P}(B_{e}|e\subset U)\geq \epsilon$, so $\mathbb{P}(B_{e})\geq \epsilon p^{2}$. Let $X=\sum_{e\in E(G)}I(B_{e})$, where $I(B_e)$ is the indicator random variable of $B_{e}$. 
Then $X=|F|$, where $F$ is the set of edges in $G[U]$ for which there are no $k$ internally vertex disjoint paths connecting its endpoints in $G[U]$. We have $\mathbb{E}(X)=\sum_{e\in E(G)}\mathbb{P}(B_{e})\geq \epsilon p^{2}N$, so there exists a choice for $U$ for which $X\geq \epsilon p^{2}N$. Consider the subgraph $H$ of $G[U]$ with edge set $F$; then there are no $k$ internally vertex disjoint paths connecting the endpoints of any edge in $H$. By Menger's theorem \cite{M27}, $H$ cannot contain a $(k+1)$-vertex-connected subgraph. By Mader's theorem, this implies that $|F|=|E(H)|<2kn$. Therefore, we get $\epsilon p^{2}N\leq|F|<2kn$, which gives $N<\frac{2k}{\epsilon p^{2}}n$. \end{proof} \subsection{Admissible pairs of hyperedges} Let $\mathcal{H}$ be a 3-uniform hypergraph. Say that a neighboring pair of edges $(e,f)$, where $e=xyz$ and $f=x'yz$, is $(p,\epsilon,k)$-admissible, if the edge $yz$ in the graph $\mathcal{H}_{x}\cap \mathcal{H}_{x'}$ is $(p,\epsilon,k)$-admissible. Also, for a positive integer $r$, say that $(e,f)$ is $(p,\epsilon,k,r)$-semi-admissible, if there exist at least $r$ edges $g=x''yz$ of $\mathcal{H}$ such that $(e,g)$ and $(g,f)$ are both $(p,\epsilon,k)$-admissible. In this section, we prove the following lemma. \begin{lemma}\label{lemma:3admissible} Let $\epsilon,p\in(0,1]$, and let $k,r,n$ be positive integers. Let $\mathcal{H}$ be a 3-uniform hypergraph with $n$ vertices and at least $\frac{12r}{p}\sqrt{\frac{k}{\epsilon}}n^{5/2}$ edges. Then $E(\mathcal{H})$ contains a subset $F$ of at least $\frac{1}{2}|E(\mathcal{H})|$ edges such that any pair of neighboring edges in $F$ is $(p,\epsilon,k,r)$-semi-admissible in $\mathcal{H}$. \end{lemma} Let us prepare the proof with a simple claim. \begin{claim}\label{claim:2path} Let $r$ be a positive integer. Let $G$ be a graph on $n$ vertices in which the number of non-edges is at most $q$. 
Then $V(G)$ contains a set $W$ of at least $n-\frac{2rq}{n}$ vertices such that any pair of vertices in $W$ is joined by at least $r$ paths of length two in $G$. \end{claim} \begin{proof} Let $v_{1},\dots,v_{r}\in V(G)$ be vertices in $G$ with the $r$ largest degrees, and let $W$ be the common neighborhood of $v_{1},\dots,v_{r}$. Then $n-|W|\leq \sum_{i=1}^{r}\mbox{deg}_{\overline{G}}(v_{i})\leq \frac{2rq}{n}.$ \end{proof} \begin{proof}[Proof of Lemma \ref{lemma:3admissible}] Let $\{y,z\}$ be a pair of vertices in $V(\mathcal{H})$, and let $n_{y,z}=|N(y,z)|$. Let $H_{y,z}$ be the graph on $N(y,z)$ in which $x$ and $x'$ are joined by an edge if the pair $(xyz,x'yz)$ is \emph{not} $(p,\epsilon,k)$-admissible. Also, let $H'_{y,z}$ be the graph on $N(y,z)$ in which $x$ and $x'$ are joined by an edge if $(xyz,x'yz)$ is \emph{not} $(p,\epsilon,k,r)$-semi-admissible. Note that by Claim \ref{claim:2path}, one can delete a set $R_{y,z}$ of at most $2r|E(H_{y,z})|/n_{y,z}$ vertices of $H'_{y,z}$ to make it an empty graph. Let $T_{y,z}=\{xyz:x\in R_{y,z}\}$ be the set of edges of $\mathcal{H}$ corresponding to the elements of $R_{y,z}$, and let $T=\bigcup_{y,z\in V(\mathcal{H})}T_{y,z}$. Then $F=E(\mathcal{H})\setminus T$ has the desired property. It remains to show that $|F|\geq \frac{1}{2}|E(\mathcal{H})|$, which will follow from the inequality $|T|\leq \frac{6r}{p}\sqrt{\frac{k}{\epsilon}}n^{5/2}$. For simplicity, write $\alpha=\frac{1}{p}\sqrt{\frac{k}{\epsilon}}$. We have $$|T|\leq \sum_{y,z\in V(\mathcal{H})}|T_{y,z}|\leq 2r\sum_{y,z\in V(\mathcal{H})}\frac{|E(H_{y,z})|}{n_{y,z}}.$$ Note that if $n_{y,z}\leq \alpha n^{1/2}$, then $\frac{|E(H_{y,z})|}{n_{y,z}}<\alpha n^{1/2}$, so the contribution of such terms to the sum is at most $\alpha n^{5/2}$. 
Therefore, we have $$|T|<2r\alpha n^{5/2}+\frac{2r}{\alpha}n^{-1/2}\sum_{y,z\in V(\mathcal{H})}|E(H_{y,z})|.$$ For a pair of vertices $\{x,x'\}$, let $G_{x,x'}$ be the subgraph of $\mathcal{H}_{x}\cap \mathcal{H}_{x'}$ formed by the not $(p,\epsilon,k)$-admissible edges of $\mathcal{H}_{x}\cap \mathcal{H}_{x'}$. Then $|E(G_{x,x'})|\leq \frac{2k}{p^{2}\epsilon}n=2\alpha^2 n$ by Lemma \ref{lemma:admissible}. Also, note that each edge of $G_{x,x'}$ corresponds to a pair of neighboring edges $(e,f)$ that is not $(p,\epsilon,k)$-admissible, so we have $$\sum_{y,z\in V(\mathcal{H})}|E(H_{y,z})|=\sum_{x,x'\in V(\mathcal{H})}|E(G_{x,x'})|\leq 2\alpha^{2}n^{3}.$$ But then $|T|\leq 6r\alpha n^{5/2},$ finishing the proof. \end{proof} \subsection{Topology}\label{section:topology} This section contains all the topological notions and results we are going to use, and it should be comprehensible with a basic knowledge of topology. Let us start with the classification theorem of closed surfaces. Recall that the Euler characteristic of a closed surface $\mathcal S$ is equal to $v-e+f$, where $v,e,f$ are the numbers of vertices, edges and faces, respectively, in a triangulation of $\mathcal S$. The Euler characteristic does not depend on the triangulation. \begin{theorem}[Classification theorem of closed surfaces]\label{thm:class} Any connected closed surface is homeomorphic to a member of one of the following families: \begin{enumerate} \item sphere with $g$ handles for $g\geq 0$ (which has Euler characteristic $2-2g$), \item sphere with $k$ cross-caps for $k\geq 1$ (which has Euler characteristic $2-k$). \end{enumerate} The surfaces in the first family are the orientable surfaces. \end{theorem} Now let us introduce our notion of cycle, which we will use to glue spheres along to get a triangulation of the torus. 
\begin{definition} A 3-uniform hypergraph $\mathcal{C}$ is called a \emph{topological cycle} if $\mathcal{C}$ is a triangulation of the cylinder $S^1\times [0,1]$ or of the M\"obius strip, such that each vertex of $\mathcal C$ lies on the boundary. \end{definition} If $\mathcal{C}$ is a topological cycle, then its edges can be ordered cyclically such that consecutive edges are neighboring, that is, they share two common vertices. Call such an ordering \emph{proper} for $\mathcal{C}$. Note that a topological cycle with $r$ edges has $r$ vertices, and the link graph of every vertex is a path. Also, the Euler characteristic of a topological cycle is 0. For example, tight cycles are topological cycles. For $r\geq 5$, a \emph{tight cycle of length $r$} is the 3-uniform hypergraph on vertices $x_{1},\dots,x_{r}$ with edges $x_{i-1}x_{i}x_{i+1}$ for $i=1,\dots,r$, where indices are meant modulo $r$. Indeed, a tight cycle of even length is a triangulation of the cylinder, while a tight cycle of odd length is a triangulation of the M\"obius strip, see Figure \ref{fig:topcycle} for an illustration. Another topological cycle of particular interest comes from double pyramids. Let $x,x',y_{1},\dots,y_{s}$ be the vertices of a double pyramid, with edges $e_{i}=xy_{i}y_{i+1},f_{i}=x'y_{i}y_{i+1}$ for $i=1,\dots,s$, where $s\geq 4$. Then for any $3\leq r\leq s-1$, the sequence $e_{1},\dots,e_{r},f_{r},\dots,f_{s},f_{1}$ is a proper ordering of a topological cycle. Say that a topological cycle $\mathcal{C}$ is \emph{torus-like}, if either \begin{itemize} \item $\mathcal{C}$ has an even number of edges, and $\mathcal{C}$ is a triangulation of the cylinder, or \item $\mathcal{C}$ has an odd number of edges, and $\mathcal{C}$ is a triangulation of the M\"obius strip. \end{itemize} If $\mathcal{C}$ is not torus-like, then say that $\mathcal{C}$ is \emph{Klein bottle-like}. Note that, for example, every tight cycle is torus-like. 
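As a small sanity check, again not needed for the proofs, the stated combinatorial properties of tight cycles (as many vertices as edges, every link graph a path, Euler characteristic $0$) can be verified for small lengths. We check lengths $5$ to $10$; note that the four triples $x_{i-1}x_ix_{i+1}$ on four vertices form the boundary of the tetrahedron, i.e., a sphere, which is why we start at $5$. The Python sketch below uses an ad hoc encoding of our own:

```python
from itertools import combinations
from collections import defaultdict

def tight_cycle(r):
    # triangles {i-1, i, i+1} modulo r, on the vertex set {0, ..., r-1}
    return {frozenset({(i - 1) % r, i, (i + 1) % r}) for i in range(r)}

def euler_characteristic(triangles):
    # vertices - 2-edges + faces of the complex spanned by the triangles
    vertices = set().union(*triangles)
    pairs = {frozenset(p) for t in triangles for p in combinations(t, 2)}
    return len(vertices) - len(pairs) + len(triangles)

def link_is_path(triangles, v):
    # link graph of v: the 2-edges t - {v} over all triangles t containing v
    adj = defaultdict(set)
    for t in triangles:
        if v in t:
            u, w = t - {v}
            adj[u].add(w)
            adj[w].add(u)
    degrees = sorted(len(nb) for nb in adj.values())
    n_edges = sum(degrees) // 2
    # a single path: maximum degree 2, exactly two leaves, tree-like edge count
    return (degrees.count(1) == 2 and all(d <= 2 for d in degrees)
            and n_edges == len(adj) - 1)
```

For example, `tight_cycle(5)` has five vertices, five triangles, and ten $2$-edges, so its Euler characteristic is $5-10+5=0$, in line with the discussion above.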
\begin{figure} \caption{A tight cycle of length 48 (left) and a tight cycle of length 49 (right).} \label{fig:topcycle} \end{figure} The next lemma tells us that if we glue spheres along a topological cycle, then we either get a torus or a Klein bottle (which is the sphere with two cross-caps). \begin{lemma}\label{lemma:toplogy} Let $\mathcal{C}$ be a topological cycle, and let $e_{1},\dots,e_{r}$ be a proper ordering of the edges. Let 3-uniform hypergraphs $\mathcal{S}_{1},\dots,\mathcal{S}_{r}$ be triangulations of the sphere such that $e_i,e_{i+1}\in E(\mathcal{S}_{i})$ for $i=1,\dots,r$, where indices are meant modulo $r$, and $\mathcal{S}_{i}$ shares no vertex with $\mathcal{C}$ or with $\mathcal{S}_{j}$ for $j\neq i$, with the possible exception of the vertices in $e_i\cup e_{i+1}$. Let $\mathcal{T}$ be the hypergraph with edge set $\bigcup_{i=1}^{r} (E(\mathcal{S}_{i})\setminus \{e_{i},e_{i+1}\})$. If $\mathcal{C}$ is torus-like, then $\mathcal{T}$ is homeomorphic to the torus, and if $\mathcal{C}$ is Klein bottle-like, then $\mathcal{T}$ is homeomorphic to the Klein bottle. \end{lemma} \begin{proof} For $i=1,\dots,r$, let $\mathcal{D}_{i}$ be the triangulation of the disc we get after removing $e_{i}$ and $e_{i+1}$ from $\mathcal{S}_{i}$. For $i=1,\dots, r$, denote by $s_i\subset e_i$ the 2-edge lying on the boundary of $\mathcal C$. Remark that the boundary of $\mathcal D_i$ contains $s_i$ and $s_{i+1}$. Put $\varepsilon_i=1$ if these two 2-edges share a common vertex, otherwise put $\varepsilon_i=-1$. Informally, $\varepsilon_i=-1$ if the 2-edges $s_i$ and $s_{i+1}$ of the boundary of $\mathcal D_i$ belong to `the locally opposite sides of $\mathcal C$', and $\varepsilon_i=1$ if they belong to `the same side of $\mathcal C$'. 
It is easy to see that if $\varepsilon_1\dots \varepsilon_r=1$, then $\mathcal C$ is a triangulation of the cylinder $S^1\times [0,1]$ (because in this case the boundary of $\mathcal C$ consists of two connected components), otherwise $\mathcal C$ is a triangulation of the M\"obius strip (because its boundary has only one connected component). Therefore, $\varepsilon_1\dots \varepsilon_r=(-1)^r$ if and only if $\mathcal C$ is torus-like. First, let us check that $\mathcal{T}$ is a simplicial manifold, that is, for every $v\in V(\mathcal T)$ the link graph of $v$ in $\mathcal T$ is a cycle (and thus $\mathcal{T}$ is homeomorphic to a closed surface). There are two possible cases: \textit{Case 1.} Suppose that $v\not\in V(\mathcal C)$, and thus $v\in V(\mathcal S_i)\setminus (e_i\cup e_{i+1})$ for some $i\in [r]$. Since $\mathcal S_i$ is a triangulation of the sphere and $v$ does not belong to other triangulations $\mathcal S_j$, the link graph of $v$ in $\mathcal{T}$ is a cycle. \textit{Case 2}. Suppose that $v\in V(\mathcal{C})$, and let $e_a=w_aw_{a+1}v, e_{a+1}=w_{a+1}w_{a+2}v, \dots, e_b=w_{b}w_{b+1}v$ be all the edges of $\mathcal{C}$ containing $v$. Let us describe the link graph $T_v$ of $v$ in $\mathcal{T}$. Since $v$ belongs to $V(\mathcal S_{a-1}), \dots, V(\mathcal S_{b})$ and does not belong to any other $V(\mathcal S_i)$, the link graph of $v$ in $\mathcal T$ naturally falls into several parts: \begin{enumerate} \item The link graph of $v$ in $\mathcal D_{a-1}=\mathcal S_{a-1}\setminus \{e_{a-1},e_a\}$ is a simple path connecting $w_a$ and $w_{a+1}$ passing through vertices of $V(\mathcal S_{a-1})\setminus (e_{a-1}\cup e_a)$. \item The link graph of $v$ in $\mathcal S_i\setminus\{e_i, e_{i+1}\}$ for $i=a,\dots, b-1$ is a simple path connecting $w_i$ and $w_{i+2}$ and passing through vertices of $V(\mathcal S_{i})\setminus (e_i\cup e_{i+1})$. 
\item The link graph of $v$ in $\mathcal S_{b}\setminus\{e_{b},e_{b+1}\}$ is a simple path connecting $w_b$ and $w_{b+1}$ passing through vertices of $V(\mathcal S_{b})\setminus (e_b\cup e_{b+1})$. \end{enumerate} From these observations, we conclude that the link graph of $v$ in $\mathcal T$ is a simple cycle; see Figure~\ref{fig:linkgraph}. \begin{figure} \caption{The link graph of $v\in V(\mathcal C)$.} \label{fig:linkgraph} \end{figure} Second, let us study the orientability of $\mathcal T$. Recall that a simplicial $2$-manifold (or a triangulation) is {\it orientable} iff we can orient the boundary of each triangle cyclically so that for any two triangles sharing an edge the orientations induced on the common edge are opposite. Remark that an orientation of the triangulation of $\mathcal D_i$ induces a cyclic orientation on the boundary of $\mathcal D_i$, and in particular on $s_i$ and $s_{i+1}$. Conversely, an orientation of $s_i$ induces an orientation of the triangulation $\mathcal D_i$ of the disc. For a given orientation of $\mathcal D_i$, put $\delta_i=1$ if the orientation of $s_i=\{x,y\}$ induced by the orientation of $\mathcal D_i$ is $[x,y]$, where $x\in e_{i-1}$ and $y\in e_{i+1}$ (in this case, we say that $s_i$ is {\it oriented forwards}), otherwise put $\delta_i=-1$ (and say that $s_i$ is {\it oriented backwards}). For a given orientation of $\mathcal D_i\cup \mathcal D_{i+1}$, we have $\delta_{i+1}=\delta_i(-\varepsilon_i)$. Indeed, if $s_i$ and $s_{i+1}$ share a vertex, then within $\mathcal D_i$ either both are oriented forwards, or both are oriented backwards. But the orientation of $s_{i+1}$ within $\mathcal D_{i+1}$ is opposite, and thus is opposite to the orientation of $s_i$ within $\mathcal D_i$. If $s_i$ and $s_{i+1}$ do not share a vertex (and thus locally `lie on the opposite sides of $\mathcal C$'), then within $\mathcal D_i$ exactly one of them is oriented forwards, and we conclude in the same way as before. 
Clearly, $\mathcal T$ is orientable if and only if for each $i=1,\ldots, r$ there exists an orientation $\eta_i$ of $\mathcal D_i\cup \mathcal D_{i+1}$ such that $\eta_i$ and $\eta_{i+1}$ induce the same orientation of $\mathcal D_{i+1}$. From the above, we conclude that such a set $\eta_1,\ldots, \eta_r$ of orientations exists iff \[ \delta_1=\delta_1(-\varepsilon_1)(-\varepsilon_2)\dots (-\varepsilon_r), \] that is, \[ \varepsilon_1\dots \varepsilon_r=(-1)^r, \] which corresponds to the case of the torus-like topological cycle $\mathcal C$. Lastly, we determine the Euler characteristic of $\mathcal{T}$. Clearly, $\mathcal{C}$ has Euler characteristic 0. Each disc $\mathcal{D}_{i}$ glued to $\mathcal{C}$ increases the Euler characteristic by 1, and removing the $r$ edges $e_{1},\dots,e_{r}$ decreases it by $r$, so the Euler characteristic of $\mathcal{T}$ is $0+r-r=0$. This implies that $\mathcal{T}$ is homeomorphic to either the torus or the Klein bottle by the Classification theorem of closed surfaces. See Figure \ref{fig:torus} for an illustration of $\mathcal{T}$. \end{proof} Somewhat surprisingly, it turns out that if $\mathcal{C}$ is a topological cycle that is also 3-partite, then $\mathcal{C}$ is always torus-like. \begin{claim}\label{claim:3partite} If $\mathcal{C}$ is a topological cycle and $\mathcal{C}$ is 3-partite, then $\mathcal{C}$ is torus-like. \end{claim} \begin{proof} Let us use the notation introduced in the proof of Lemma~\ref{lemma:toplogy}. Suppose that $V(\mathcal C)$ is partitioned into three sets $V_1$, $V_2$, and $V_3$. For $i=1,\dots,r$, put $\sigma_i=1$ if $s_i=\{x,y\}$, where $x\in e_{i-1}\cap V_j$ and $y\in e_{i+1}\cap V_{j+1}$ for some $j\in\{1,2,3\}$, where index $j+1$ is meant modulo $3$. Otherwise, put $\sigma_{i}=-1$. It is easy to check that $\sigma_{i+1}=\sigma_i(-\varepsilon_i)$. 
As in the proof above, we have $\sigma_1 = \sigma_1(-\varepsilon_1)(-\varepsilon_2)\dots (-\varepsilon_r)$, which implies $\varepsilon_1\dots\varepsilon_r=(-1)^r$. That is, $\mathcal C$ is a torus-like topological cycle. \end{proof} \begin{figure} \caption{An illustration of the torus we get after gluing spheres (more precisely, double pyramids) to the neighboring edges of a tight cycle of length 24. We use three colors to separate the spheres visually.} \label{fig:torus} \end{figure} Now let us show how finding a topological cycle helps us build a triangulation of the torus. \begin{lemma}\label{lemma:embedding_with_cyle} Let $\mathcal{H}$ be a 3-uniform hypergraph, and let $\mathcal{C}$ be a topological cycle with proper edge ordering $e_1,\dots,e_{r}$. Suppose that $(e_i,e_{i+1})$ is $(\frac{1}{2r},\frac{1}{2r+1},2r,2r)$-semi-admissible in $\mathcal{H}$ for $i=1,\dots,r$. If $\mathcal{C}$ is torus-like, then $\mathcal{H}$ contains a torus, otherwise $\mathcal{H}$ contains a Klein bottle. \end{lemma} \begin{proof} By the definition of a $(\frac{1}{2r},\frac{1}{2r+1},2r,2r)$-semi-admissible pair, for $i=1,\dots,r$, there exist at least $2r$ edges $f_{i}^1,\dots,f_{i}^{2r}$ such that $f_{i}^{j}\cap e_{i}=f_{i}^{j}\cap e_{i+1}=e_{i}\cap e_{i+1}$ for $j=1,\dots,2r$, and $(e_{i},f_i^j)$ and $(e_{i+1},f_i^j)$ are both $(\frac{1}{2r},\frac{1}{2r+1},2r)$-admissible. Hence, for $i=1,\dots,r$, we can choose an edge $f_{i}$ among $f_i^1,\dots,f_i^{2r}$ such that $|f_{i}\cap V(\mathcal{C})|=2$ and, moreover, the vertices $f_{1}\setminus V(\mathcal{C}),\dots,f_{r}\setminus V(\mathcal{C})$ are pairwise distinct. Let $X=V(\mathcal{C})\cup\bigcup_{i=1}^{r} f_{i}$ and note that $|X|=2r$. Color each vertex in $V(\mathcal{H})\setminus X$ randomly and independently with one of the $2r$ colors $c_1,\dots,c_r,c_1',\dots,c_{r}'$. 
For $i=1,\dots,r$, let $A_{i}$ be the event that there exists a sphere $\mathcal{S}_{i}$ containing $e_{i}$ and $e_{i+1}$ such that all vertices of $\mathcal{S}_{i}$, with the exception of $e_{i}\cup f_{i}\cup e_{i+1}$, are colored with $c_{i}$ or $c_{i}'$. In the next paragraph, we show that $\mathbb{P}(A_{i})\geq 1-\frac{2}{2r+1}$. Assuming this inequality, with positive probability there exists a coloring for which the events $A_{1},\dots,A_{r}$ hold simultaneously. Note that the spheres $\mathcal{S}_{i}$ are pairwise vertex disjoint outside of $\mathcal{C}$, so we can apply Lemma \ref{lemma:toplogy} to conclude that $\mathcal{H}$ contains a torus or a Klein bottle. Now let us show that $\mathbb{P}(A_{i})\geq 1-\frac{2}{2r+1}$. Fix some $i\in \{1,\dots,r\}$, let $e_{i}=xyz, e_{i+1}=x'yz$ and $f_{i}=x''yz$. Let $B_{i}^1$ be the event that there exists a double pyramid $\mathcal{S}_{i}^{1}$ such that $\mathcal{S}_{i}^{1}$ contains $e_{i}$ and $f_{i}$, and every vertex in $V(\mathcal{S}_{i}^1)\setminus (e_{i}\cup f_{i})$ is colored $c_{i}$. Similarly, let $B_{i}^2$ be the event that there exists a double pyramid $\mathcal{S}_{i}^{2}$ such that $\mathcal{S}_{i}^{2}$ contains $e_{i+1}$ and $f_{i}$, and every vertex in $V(\mathcal{S}_{i}^2)\setminus (e_{i+1}\cup f_{i})$ is colored $c_{i}'$. We show that $\mathbb{P}(B_{i}^1)\geq 1-\frac{1}{2r+1}$. Let $U$ be the set of vertices colored $c_{i}$. As the edge $yz$ is $(\frac{1}{2r},\frac{1}{2r+1},2r)$-admissible in the graph $G=\mathcal{H}_{x}\cap\mathcal{H}_{x''}$, the probability that there are $2r$ internally vertex disjoint paths between $y$ and $z$ (other than the edge $yz$) in $G[U\cup X]$ is at least $1-\frac{1}{2r+1}$. But then at least one of these paths, say $P$, does not contain a vertex of $X$. The union of the path $P$ with the edge $\{y,z\}$ forms a cycle in $\mathcal{H}_{x}\cap \mathcal{H}_{x''}$, which gives a double pyramid $\mathcal{S}_{i}^{1}$ in $\mathcal{H}$ with the desired properties. 
The proof of $\mathbb{P}(B_{i}^2)\geq 1-\frac{1}{2r+1}$ is analogous. But then $\mathbb{P}(B_{i}^1\cap B_{i}^2)\geq 1-\frac{2}{2r+1}$. Moreover, if both $B_{i}^1$ and $B_{i}^2$ occur then, taking $\mathcal{S}_{i}$ to be the union of $\mathcal{S}_{i}^1$ and $\mathcal{S}_{i}^2$ with $f_{i}$ removed, $\mathcal{S}_{i}$ is a sphere satisfying the desired properties. This concludes the proof. \end{proof} \subsection{The extremal number of the torus} After these preparations, we are almost done with the proof of Theorem \ref{thm:torus}. The last piece we are missing is that we can find short torus-like topological cycles in hypergraphs with $n$ vertices and $\Omega(n^{5/2})$ edges. Recently, it was proved by Sudakov and Tomon \cite{ST20} that if $\mathcal{H}$ has $n$ vertices and $n^{2+o(1)}$ edges, then $\mathcal{H}$ contains a tight cycle of length $O((\log n)^2)$. Also, by a much simpler argument, we can find a double pyramid of size $O(\log n)$ if $\mathcal{H}$ has $\Omega(n^{5/2})$ edges, which in turn contains a torus-like topological cycle of length $O(\log n)$. Using a topological cycle of such length, one can deduce that Theorem \ref{thm:torus} holds with the bound $O(n^{5/2}(\log n)^{3})$ instead of $O(n^{5/2})$. However, in order to prove the required bound $O(n^{5/2})$, we need to find a topological cycle of constant length. Conlon \cite{C10} proposed the conjecture that there exists a constant $c>0$ such that the extremal number of the tight cycle of length $3k$ is at most $O(n^{2+c/k})$. An unpublished result of Verstra\"ete \cite{V} states that the extremal number of the tight cycle of length 24 is $O(n^{5/2})$, which would be perfect for our purposes. However, in order to not rely on an unpublished result, we prove the following more general theorem, which can be viewed as a relaxation of Conlon's conjecture. \begin{theorem}\label{thm:cycles} Let $k\geq 2$ be an integer. 
If $\mathcal{H}$ is a 3-uniform hypergraph on $n$ vertices which contains no torus-like topological cycle of length at most $6k$, then $\mathcal{H}$ has at most $n^{2+2/(k-1)+o(1)}$ edges. \end{theorem} As the proof of this theorem is a bit of a detour, we present it later in Section \ref{sect:cycles}. One might wonder whether it is also possible to find a short Klein bottle-like topological cycle in $\mathcal{H}$ if $E(\mathcal{H})=\Omega(|V(\mathcal{H})|^{5/2})$, as this would allow us to find a triangulation of the Klein bottle. Unfortunately, this is not the case. Indeed, the complete 3-partite 3-uniform hypergraph with vertex classes of size $n/3$ has $\Omega(n^{3})$ edges and contains no Klein bottle-like topological cycle by Claim \ref{claim:3partite}. \begin{proof}[Proof of Theorem \ref{thm:torus}] Let $r=36$; then by Theorem \ref{thm:cycles}, there exists a constant $c_1$ such that every 3-uniform hypergraph on $n$ vertices with at least $c_1 n^{5/2}$ edges contains a torus-like topological cycle of length at most $r$. We show that $c=\max\{2c_1,48\cdot r^2(2r+1)\}$ suffices. Let $\mathcal{H}$ be a 3-uniform hypergraph with at least $cn^{5/2}$ edges. Then by Lemma \ref{lemma:3admissible}, $\mathcal{H}$ contains a set $F$ of at least $c_1 n^{5/2}$ edges such that any neighboring pair of edges in $F$ is $(\frac{1}{2r},\frac{1}{2r+1},2r,2r)$-semi-admissible in $\mathcal{H}$. Also, $F$ contains a torus-like topological cycle of length at most $r$. But then Lemma \ref{lemma:embedding_with_cyle} tells us that $\mathcal{H}$ contains a triangulation of the torus. \end{proof} \section{Orientable surfaces --- Upper bound}\label{sect:surface} In this section, we prove Theorem \ref{thm:orientable}. If $\mathcal{S}$ is a closed orientable surface of genus $g$, then $\mathcal{S}$ is homeomorphic to $g$ copies of the torus glued to each other one-by-one. 
We show the more general result that if we glue two hypergraphs $\mathcal{T}$ and $\mathcal{T}'$ together, then the extremal number of the resulting hypergraph is $O(\mbox{ex}(n,\mathcal{T})+\mbox{ex}(n,\mathcal{T}'))$. This might be of independent interest. More precisely, let $\mathcal{T}$ and $\mathcal{T}'$ be two $r$-uniform hypergraphs. Define $\mathcal{T}\oplus\mathcal{T}'$ to be the family of hypergraphs we get after gluing some edge $e\in\mathcal{T}$ to some edge $e'\in\mathcal{T}'$. Also, given two families $\mathcal{F}$ and $\mathcal{F}'$ of $r$-uniform hypergraphs, let $$\mathcal{F}\oplus\mathcal{F}'=\bigcup_{\mathcal{T}\in \mathcal{F},\mathcal{T}'\in\mathcal{F}'}\mathcal{T}\oplus\mathcal{T}'.$$ \begin{lemma}\label{lemma:gluing} Let $\mathcal{F}$ and $\mathcal{F}'$ be two families of $r$-uniform hypergraphs. Then $$\mbox{ex}(n,\mathcal{F}\oplus\mathcal{F}')\leq 2^{r+1}(\mbox{ex}(n,\mathcal{F})+\mbox{ex}(n,\mathcal{F}')).$$ \end{lemma} The proof of this lemma builds on ideas similar to the ones presented in Section \ref{sect:admissibleedges}. We prepare the proof with a lemma akin to Lemma \ref{lemma:admissible}. Let $\mathcal{F}$ be a family of $r$-uniform hypergraphs, and let $\mathcal{H}$ be an $r$-uniform hypergraph. For $\epsilon,p\in (0,1]$, say that an edge $e\in E(\mathcal{H})$ is \emph{$(\mathcal{F},p,\epsilon)$-rich} if the following holds. Select each vertex of $\mathcal{H}$ independently with probability $p$, and let $U$ be the set of selected vertices. Let $A$ be the event that $\mathcal{H}[U]$ contains a copy of a member of $\mathcal{F}$ which contains the edge $e$. Then $e$ is \emph{$(\mathcal{F},p,\epsilon)$-rich} if $\mathbb{P}(A| e\subset U)> 1-\epsilon$. \begin{lemma}\label{lemma:rich} Let $\mathcal{F}$ be a family of $r$-uniform hypergraphs, let $\mathcal{H}$ be an $r$-uniform hypergraph on $n$ vertices, and let $p,\epsilon\in (0,1]$. 
Then at most $\frac{\mbox{ex}(n,\mathcal{F})}{\epsilon p^{r}}$ edges of $\mathcal{H}$ are not $(\mathcal{F},p,\epsilon)$-rich. \end{lemma} \begin{proof} Select each vertex of $\mathcal{H}$ independently with probability $p$, and let $U$ be the set of selected vertices. For $e\in \mathcal{H}$, let $B_e$ be the event that $e\subset U$ and there exists no copy of a member of $\mathcal{F}$ in $\mathcal{H}[U]$ containing $e$. Also, let $\mathcal{H}'$ be the subhypergraph of $\mathcal{H}$ formed by the not $(\mathcal{F},p,\epsilon)$-rich edges. Note that for each $e\in E(\mathcal{H}')$, we have $\mathbb{P}(B_e)=p^r\mathbb{P}(B_e| e\subset U)\geq \epsilon p^{r}$. Let $\mathcal{H}''$ be the subhypergraph of $\mathcal{H}[U]$ formed by those edges $e$ for which $B_e$ happens. By linearity of expectation, we have $$\mathbb{E}(|E(\mathcal{H}'')|)\geq |E(\mathcal{H}')|\epsilon p^{r},$$ so there exists a choice for $U$ such that $|E(\mathcal{H}'')|\geq |E(\mathcal{H}')|\epsilon p^{r}$. Note that $\mathcal{H}''$ cannot contain a copy of any member of $\mathcal{F}$, so $|E(\mathcal{H}'')|\leq \mbox{ex}(n,\mathcal{F})$, which implies $|E(\mathcal{H}')|\leq \frac{\mbox{ex}(n,\mathcal{F})}{\epsilon p^{r}}.$ \end{proof} \begin{proof}[Proof of Lemma \ref{lemma:gluing}] Let $\mathcal{H}$ be a hypergraph on $n$ vertices with more than $2^{r+1}(\mbox{ex}(n,\mathcal{F})+\mbox{ex}(n,\mathcal{F}'))$ edges. By Lemma \ref{lemma:rich}, the number of edges of $\mathcal{H}$ which are not $(\mathcal{F},\frac{1}{2},\frac{1}{2})$-rich is at most $2^{r+1}\mbox{ex}(n,\mathcal{F})$, and the number of edges of $\mathcal{H}$ which are not $(\mathcal{F}',\frac{1}{2},\frac{1}{2})$-rich is at most $2^{r+1}\mbox{ex}(n,\mathcal{F}')$. Therefore, $\mathcal{H}$ has an edge $e$ which is both $(\mathcal{F},\frac{1}{2},\frac{1}{2})$-rich and $(\mathcal{F}',\frac{1}{2},\frac{1}{2})$-rich.
Color the vertices in $V(\mathcal{H})\setminus e$ red or blue independently with probability $\frac{1}{2}$, and regard the vertices of $e$ as receiving both colors. Then, the probability that there exists a red copy of a member of $\mathcal{F}$ containing $e$ is more than $\frac{1}{2}$. Also, the probability that there exists a blue copy of a member of $\mathcal{F}'$ containing $e$ is more than $\frac{1}{2}$. But then there exists a coloring such that $e$ is contained in both a red copy of some $\mathcal{T}\in\mathcal{F}$, and a blue copy of some $\mathcal{T}'\in\mathcal{F}'$, which means that $\mathcal{H}$ contains a member of $\mathcal{T}\oplus\mathcal{T}'.$ \end{proof} Let us remark that the statement of Lemma \ref{lemma:gluing} remains true even if we strengthen the definition of $\mathcal{T}\oplus\mathcal{T}'$ by specifying the edges of $\mathcal{T}$ and $\mathcal{T}'$ we wish to glue together. For ease of notation we only state the weaker version, which already serves our purposes. Now we are ready to prove the main theorem of this section. \begin{proof}[Proof of Theorem \ref{thm:orientable}] Let $g$ be the genus of $\mathcal{S}$. We prove by induction on $g$ that there exists a constant $c(g)$ such that $\mbox{ex}_{hom}(n,\mathcal{S})\leq c(g)n^{5/2}$. The case $g=1$ follows from Theorem \ref{thm:torus}. Suppose that $g>1$. Let $\mathcal{F}$ be the family of triangulations of the surface of genus $g-1$, and let $\mathcal{F}'$ be the family of triangulations of the torus. Then by Lemma \ref{lemma:gluing}, we have $$\mbox{ex}(n,\mathcal{F}\oplus\mathcal{F}')\leq 16(\mbox{ex}(n,\mathcal{F})+\mbox{ex}(n,\mathcal{F}'))\leq 16(c(g-1)+c(1))n^{5/2}.$$ But for each $\mathcal{T}\oplus\mathcal{T}'\in \mathcal{F}\oplus\mathcal{F}'$, after removing the edge $e$ we glued $\mathcal{T}$ and $\mathcal{T}'$ along, we get a triangulation of the surface of genus $g$. Therefore, we can take $c(g)=16(c(g-1)+c(1))$, finishing the proof.
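Unwinding this recursion gives an explicit, exponential-in-genus constant. For instance, a crude induction (our computation; the base of the exponent is not claimed to be sharp) yields

```latex
c(g)\;\le\;32^{\,g-1}\,c(1)\qquad\text{for all }g\ge 1,
```

since for $g\ge 2$ we have $16\bigl(32^{g-2}c(1)+c(1)\bigr)\le 16\cdot 2\cdot 32^{g-2}c(1)=32^{g-1}c(1)$.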
\end{proof} \section{Topological cycles}\label{sect:cycles} In this section, we present the proof of Theorem \ref{thm:cycles}. We will reduce the problem to finding rainbow cycles in graphs whose vertices are colored with certain colors. We present this problem in the next subsection. \subsection{Rainbow cycles} Let $G$ be a graph. If $r$ is a positive integer and $X$ is some base set, call an assignment $f:V(G)\rightarrow X^{(r)}$ an \emph{$r$-set coloring} of $G$, and for $v\in V(G)$, call the $r$-element set $f(v)$ the \emph{color} of $v$. Say that an $r$-set coloring $f$ is \emph{diverse} if for any two distinct vertices $v,w\in V(G)$, if $v$ and $w$ are neighbors, or $v$ and $w$ have a common neighbor, then $f(v)$ and $f(w)$ are disjoint. Finally, say that a subgraph of $G$ is \emph{rainbow} if any two of its vertices are colored with disjoint sets. The main result of this subsection is the following lemma. \begin{lemma}\label{lemma:rainbow} Let $r,k$ be positive integers. Then there exists $c=c(r,k)$ such that the following holds. If $G$ is a graph on $n$ vertices with a diverse $r$-set coloring which contains no rainbow cycle of length at most $2k$, then $G$ has at most $c n^{1+1/k}$ edges. \end{lemma} The proof of this lemma follows very closely the ideas presented in a recent paper of Janzer \cite{J20}. The next lemma we prove is a slight modification of Lemma 2.1 in \cite{J20}. Let $H$ and $G$ be two graphs. A \emph{homomorphism} (not to be confused with a homeomorphism) from $H$ to $G$ is a function $\phi:V(H)\rightarrow V(G)$ such that $\phi(x)\phi(y)\in E(G)$ if $xy\in E(H)$. Also, $\mbox{hom}(H,G)$ denotes the number of homomorphisms from $H$ to $G$. If $G$ is clear from the context, we write simply $\mbox{hom}(H)$ instead of $\mbox{hom}(H,G)$. As usual, $P_{\ell}$ denotes the path of length $\ell$, and $C_\ell$ denotes the cycle of length $\ell$; we also consider $C_2$ as the degenerate cycle of length 2, so $\mbox{hom}(C_2,G)=2|E(G)|$.
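For small graphs these homomorphism counts can be checked directly from the adjacency matrix via the standard identity $\mbox{hom}(C_{\ell},G)=\mathrm{tr}(A^{\ell})$ (a well-known fact that is not itself used in the argument below); a minimal sketch, assuming NumPy is available:

```python
import numpy as np

def hom_cycle(adj: np.ndarray, length: int) -> int:
    """hom(C_length, G) = trace(A^length): closed walks of the given length."""
    return int(round(np.trace(np.linalg.matrix_power(adj, length))))

# Triangle K_3, with adjacency eigenvalues 2, -1, -1.
K3 = np.ones((3, 3), dtype=int) - np.eye(3, dtype=int)

# Degenerate-cycle convention from the text: hom(C_2, G) = 2|E(G)|.
assert hom_cycle(K3, 2) == 2 * 3
# hom(C_3, K_3) = 2^3 + (-1)^3 + (-1)^3 = 6.
assert hom_cycle(K3, 3) == 6
```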
If $y,z\in V(G)$, then $\mbox{hom}_{y,z}(P_{\ell})$ denotes the number of walks of length $\ell$ with endpoints $y$ and $z$. Finally, $\Delta(G)$ denotes the maximum degree of $G$. \begin{lemma}\label{lemma:badhom} Let $G$ be a graph with a diverse $r$-set coloring $f$ of the vertices, and let $\ell\geq 2$. Then the number of homomorphisms of $C_{2\ell}$ which are not rainbow is at most $$16\ell(r\ell\Delta(G)\mbox{hom}(C_{2\ell-2})\mbox{hom}(C_{2\ell}))^{1/2}.$$ \end{lemma} \begin{proof} For a positive integer $s$, let $\alpha_s$ be the number of walks of length $\ell-1$ whose endpoints $y,z$ satisfy $2^{s-1}\leq\mbox{hom}_{y,z}(P_{\ell-1})\leq 2^{s}$, and let $\beta_{s}$ be the number of walks of length $\ell$ whose endpoints $y,z$ satisfy $2^{s-1}\leq\mbox{hom}_{y,z}(P_{\ell})\leq 2^{s}$. Then $$\sum_{s\geq 1}\alpha_{s}2^{s-1}\leq\mbox{hom}(C_{2\ell-2}),$$ and $$\sum_{s\geq 1}\beta_{s}2^{s-1}\leq\mbox{hom}(C_{2\ell}).$$ For positive integers $s$ and $t$, let $\gamma_{s,t}$ denote the number of homomorphic copies $(x_1,\dots,x_{2\ell})$ of $C_{2\ell}$ such that $f(x_1)$ and $f(x_i)$ are not disjoint for some $i\in\{2,\dots,\ell+1\}$, $2^{s-1}\leq\mbox{hom}_{x_1,x_{\ell+2}}(P_{\ell-1})<2^{s}$ and $2^{t-1}\leq \mbox{hom}_{x_2,x_{\ell+2}}(P_\ell)<2^t$. We can bound $\gamma_{s,t}$ in two ways. \begin{enumerate} \item $\gamma_{s,t}\leq \alpha_s \Delta(G)2^{t}$. Indeed, there are at most $\alpha_{s}$ ways to choose $x_{\ell+2},x_{\ell+3},\dots,x_{2\ell},x_1$, then there are at most $\Delta(G)$ ways to choose $x_2$ with $x_1$ already chosen, and there are at most $2^{t}$ ways to choose $x_3,\dots,x_{\ell+1}$ by the inequality $\mbox{hom}_{x_2,x_{\ell+2}}(P_{\ell})<2^t$. \item $\gamma_{s,t}\leq \beta_{t}r\ell2^{s}$. Indeed, there are at most $\beta_{t}$ ways to choose the vertices $x_2,\dots,x_{\ell+2}$. Then, there are at most $r\ell$ choices for $x_1$.
This is true as $f(x_1)\cap f(x_i)\neq \emptyset$ for some $i\in \{2,\dots,\ell+1\}$, but the neighbors of $x_2$ have disjoint colors by $f$ being diverse, so at most $r$ neighbors of $x_2$ have a color intersecting $f(x_i)$. Finally, there are at most $2^s$ further choices for $x_{\ell+3},\dots,x_{2\ell}$ as $\mbox{hom}_{x_1,x_{\ell+2}}(P_{\ell-1})<2^s$. \end{enumerate} The number of homomorphisms of $C_{2\ell}$ which are not rainbow is at most $2\ell\sum_{s,t\geq 1}\gamma_{s,t}$. Indeed, if $(x_1,\dots,x_{2\ell})$ is a homomorphic copy of $C_{2\ell}$ which is not rainbow, then at least one of its $2\ell$ cyclic shifts $(x_1',\dots,x_{2\ell}')$ satisfies that $f(x_1')\cap f(x_i')\neq\emptyset$ for some $i\in\{2,\dots,\ell+1\}$. Let us bound the sum $\sum_{s,t\geq 1}\gamma_{s,t}$. Let $q$ satisfy $2^{2q}=\frac{r\ell\mbox{hom}(C_{2\ell})}{\Delta(G)\mbox{hom}(C_{2\ell-2})}$. Divide the sum into two parts. Firstly, \begin{align*} \sum_{s,t:s\leq t-q}\gamma_{s,t}&\leq \sum_{s,t:s\leq t-q}\beta_{t}r\ell2^{s}\leq 2r\ell2^{-q}\sum_{t\geq 1}\beta_{t}2^{t}\\ &\leq 4r\ell2^{-q}\mbox{hom}(C_{2\ell})=4(r\ell\Delta(G)\mbox{hom}(C_{2\ell})\mbox{hom}(C_{2\ell-2}))^{1/2}. \end{align*} Secondly, \begin{align*} \sum_{s,t:s> t-q}\gamma_{s,t}&\leq \sum_{s,t:s> t-q}\alpha_{s}\Delta(G)2^{t}\leq 2\Delta(G)2^{q}\sum_{s\geq 1}\alpha_{s}2^{s}\\ &\leq 4\Delta(G)2^{q}\mbox{hom}(C_{2\ell-2})=4(r\ell\Delta(G)\mbox{hom}(C_{2\ell})\mbox{hom}(C_{2\ell-2}))^{1/2}. \end{align*} Hence, we get $\sum_{s,t\geq 1}\gamma_{s,t}\leq 8(r\ell\Delta(G)\mbox{hom}(C_{2\ell})\mbox{hom}(C_{2\ell-2}))^{1/2}$, finishing the proof. \end{proof} We prepare the proof of Lemma \ref{lemma:rainbow} with two more claims. The first one is a result of Jiang and Seiver \cite{JS12} stating that not too sparse graphs contain balanced subgraphs. \begin{claim}(Jiang, Seiver \cite{JS12})\label{claim:JS} Let $\alpha>0$, then there exist $c_{1},c_2>0$ such that the following holds for every $c>0$.
Let $G$ be a graph on $n$ vertices with at least $cn^{1+\alpha}$ edges. Then $G$ contains a subgraph $G'$ on $m$ vertices for some positive integer $m$ such that every degree of $G'$ is between $c_1cm^{\alpha}$ and $c_2cm^{\alpha}$. \end{claim} The final claim we need is that even cycles satisfy Sidorenko's conjecture \cite{S91}, and thus we have a good lower bound on the number of homomorphisms of $C_{2k}$. \begin{claim}(Sidorenko \cite{S91})\label{claim:Sidorenko} Let $G$ be a graph on $n$ vertices. Then $$\mbox{hom}(C_{2k},G)\geq \frac{(2|E(G)|)^{2k}}{n^{2k}}.$$ \end{claim} \begin{proof}[Proof of Lemma \ref{lemma:rainbow}] We show that $c=c(r,k)=\frac{256k^{3}rc_2}{c_1^2}$ suffices. Let $G$ be a graph on $n$ vertices with at least $cn^{1+1/k}$ edges, and let $f$ be a diverse $r$-set coloring of $G$. By Claim \ref{claim:JS}, $G$ contains a subgraph $G'$ with $m$ vertices such that every degree of $G'$ is between $c_1cm^{1/k}$ and $c_2cm^{1/k}$. In particular, $$ \frac{1}{2}c_1cm^{1+1/k}\leq|E(G')|\leq \frac{1}{2}c_2cm^{1+1/k}.$$ Suppose that $G'$ contains no rainbow copy of $C_{2\ell}$ for $2\leq \ell\leq k$. Then by Lemma \ref{lemma:badhom}, we have $$16\ell(r\ell c_2cm^{1/k}\mbox{hom}(C_{2\ell-2},G')\mbox{hom}(C_{2\ell},G'))^{1/2}\geq \mbox{hom}(C_{2\ell},G'),$$ or equivalently, $$256\ell^{3}rc_2cm^{1/k}\mbox{hom}(C_{2\ell-2},G')\geq \mbox{hom}(C_{2\ell},G').$$ But then $$\mbox{hom}(C_{2k},G')<\mbox{hom}(C_2,G')(256 k^{3}rc_2cm^{1/k})^{k-1}<(256k^{3}rc_{2}c)^{k}m^{2}.$$ On the other hand, by Claim \ref{claim:Sidorenko}, we have $$\mbox{hom}(C_{2k},G')\geq (c_1c)^{2k}m^{2}.$$ Comparing these two inequalities, we get $(c_1c)^{2k}<(256k^{3}rc_{2}c)^{k}$, that is, $c<\frac{256k^{3}rc_2}{c_1^{2}}$, contradicting the choice of $c$. Hence $G'$ contains a rainbow copy of $C_{2\ell}$ for some $2\leq \ell\leq k$. \end{proof} \subsection{Finding a topological cycle} Let us turn to hypergraphs. Let $\mathcal{H}$ be an $r$-partite $r$-uniform hypergraph with vertex classes $A_1,\dots,A_r$.
For $i=1,\dots,r$, let $B_{i}$ be the family of $(r-1)$-element sets $X$ in $V(\mathcal{H})$ such that $|X\cap A_j|=1$ for $j\in\{1,\dots,r\}\setminus \{i\}$. Let the \emph{degree of $X$}, denoted by $d(X)=d_{\mathcal{H}}(X)$, be the number of edges of $\mathcal{H}$ containing $X$. Finally, let $C_{i}=C_{i}(\mathcal{H})=\{X\in B_{i}:d(X)>0\}$. First, we prove a slightly weaker variant of Claim \ref{claim:JS} for hypergraphs. \begin{lemma}\label{lemma:balanced} There exist positive real numbers $c_1=c_1(r)$ and $c_2=c_2(r)$ such that the following holds. Let $h=h(r)=\binom{r+1}{2}$, and let $|A_1|=\dots=|A_r|=n$. Then there exist positive integers $t_1,\dots,t_r$, and a subhypergraph $\mathcal{H}'$ of $\mathcal{H}$ such that \begin{enumerate} \item for $i=1,\dots,r$ and $X\in C_{i}(\mathcal{H}')$, we have $t_i\leq d_{\mathcal{H}'}(X)<c_1t_i(\log n)^{h}$, \item $\mathcal{H}'$ has at least $c_2|E(\mathcal{H})|(\log n)^{-h}$ edges. \end{enumerate} \end{lemma} \begin{proof} We prove this by induction on $r$. In the case $r=1$, there is nothing to prove, so suppose that $r>1$. Let $m=|E(\mathcal{H})|$. For $\ell=1,\dots,\log_2 n=:s$, let $D_{\ell}=\{X\in B_{r}: 2^{\ell-1}\leq d(X)<2^{\ell}\}$. Each edge of $\mathcal{H}$ contains exactly one element of one of the sets $D_1,\dots,D_{s}$, so there exists $\ell$ such that at least $\frac{m}{s}$ edges contain an element of $D_{\ell}$. Delete all edges of $\mathcal{H}$ not containing an element of $D_{\ell}$, and let the resulting hypergraph be $\mathcal{H}_{0}$. Let $u_{r}=2^{\ell-1}$ and $p_r=|C_r(\mathcal{H}_{0})|$. Then $p_r\leq \frac{|E(\mathcal{H}_{0})|}{u_{r}}\leq \frac{m}{u_r}$. Now for $v\in A_r$, consider the link graph $\mathcal{H}_v$ of $v$ in $\mathcal{H}_{0}$. Let $h'=h(r-1)$, and $c_1'=c_1(r-1),c_2'=c_2(r-1)$.
Then $\mathcal{H}_v$ is an $(r-1)$-partite ${(r-1)}$-uniform hypergraph, so we can apply our induction hypothesis to conclude that there exist positive integers $t_1^{v},\dots,t_{r-1}^{v}$ and a subhypergraph $\mathcal{H}'_v$ of $\mathcal{H}_v$ such that \begin{enumerate} \item for $i=1,\dots,r-1$ and $X\in C_{i}(\mathcal{H}'_{v})$, we have $t_i^{v}\leq d_{\mathcal{H}'_{v}}(X)<c_1't_i^{v}(\log n)^{h'}$, \item $\mathcal{H}'_{v}$ has at least $c_2'|E(\mathcal{H}_{v})|(\log n)^{-h'}$ edges. \end{enumerate} For $\overline{\ell}=(\ell_1,\dots,\ell_{r-1})\in [s]^{r-1}$, let $\mathcal{H}_{\overline{\ell}}$ be the hypergraph formed by those edges $X\cup \{v\}\in\mathcal{H}_{0}$, where $X\in \mathcal{H}_v'$ and $2^{\ell_{i}-1}\leq t_{i}^{v}<2^{\ell_{i}}$ for $i=1,\dots,r-1$. We have $$\sum_{v\in A_r}|E(\mathcal{H}'_{v})|\geq c_2'|E(\mathcal{H}_{0})|(\log n)^{-h'}\geq c_2'm(\log n)^{-h'-1},$$ so there exists $\overline{\ell}$ such that $|E(\mathcal{H}_{\overline{\ell}})|\geq c_2'm(\log n)^{-h'-r}=c_2'm(\log n)^{-h}$. Let $\mathcal{H}_{1}=\mathcal{H}_{\overline{\ell}}$, let $m_1=|E(\mathcal{H}_{1})|$, and for $i=1,\dots,r-1$, let $u_{i}=2^{\ell_i-1}$. In $\mathcal{H}_{1}$, every $X\in C_{i}(\mathcal{H}_{1})$ has degree at most $2c_1'u_{i}(\log n)^{h'}$, and if $i\leq r-1$, then $d(X)\geq u_{i}$. However, if $X\in C_{r}(\mathcal{H}_{1})$, the degree of $X$ might be smaller than $u_r$. For $i\in [r-1]$, let $p_{i}=|C_{i}(\mathcal{H}_{1})|$; then $p_{i}\leq \frac{m_1}{u_i}$. Also, $p_r\leq \frac{c_2'm_1(\log n)^{h}}{u_{r}}$. For $i\in [r-1]$, let $t_{i}=\frac{1}{2r}u_{i}$, and let $t_{r}=\frac{1}{2rc_2'}u_{r}(\log n)^{-h}$. Now repeat the following procedure. If there exists $i\in [r]$ and $X\in C_{i}(\mathcal{H}_1)$ such that $d(X)<t_{i}$, then delete all edges from $\mathcal{H}_1$ containing $X$, otherwise stop. Let $\mathcal{H}'$ be the hypergraph we get at the end of the procedure. In total, we deleted at most $\sum_{i=1}^{r}t_ip_{i}$ edges of $\mathcal{H}_1$.
But by the choice of $t_{i}$ and the bounds on $p_{i}$, we have $t_{i}p_{i}\leq \frac{m_1}{2r}$ for $i\in [r]$. This means that we deleted at most half of the edges, so $\mathcal{H}'$ is a nonempty hypergraph. Furthermore, we have \begin{enumerate} \item for $i=1,\dots,r-1$, if $X\in C_{i}(\mathcal{H}')$, then $t_{i}\leq d_{\mathcal{H}'}(X)<2rc_1't_{i}(\log n)^{h'}$, \item if $X\in C_{r}(\mathcal{H}')$, then $t_{r}\leq d_{\mathcal{H}'}(X)<4rc_2't_{r}(\log n)^{h}$, \item $|E(\mathcal{H}')|\geq \frac{m_1}{2}\geq \frac{c_2'}{2}m(\log n)^{-h}$. \end{enumerate} This shows that setting $c_1=\max\{2rc_1',4rc_2'\}$ and $c_2=\frac{c_2'}{2}$ suffices. \end{proof} Let $\mathcal{H}$ be an $r$-partite $r$-uniform hypergraph with vertex classes $A_1,\dots,A_r$. If $e=x_1\dots x_r$ and $f=y_1\dots y_r$ are two disjoint edges of $\mathcal{H}$, where $x_i,y_i\in A_i$ for $i\in [r]$, write $e\rightarrow f$ if $x_1\dots x_{i}y_{i+1}\dots y_{r}\in E(\mathcal{H})$ for $i\in [r-1]$. Define the graph $L=L(\mathcal{H})$ such that the vertices of $L$ are the edges of $\mathcal{H}$, and $e,f\in V(L)$ are joined by an edge if $e\rightarrow f$ or $f\rightarrow e$. The graph $L$ is naturally $r$-set colored, where the color of each vertex is itself. \begin{lemma}\label{lemma:diverse} There exist constants $c_3=c_3(r)$ and $h_1=h_1(r)$ such that the following holds. Let $\mathcal{H}$ be an $r$-partite $r$-uniform hypergraph with vertex classes of size $n$, and suppose that $|E(\mathcal{H})|\geq dn^{r-1}$. Let $L=L(\mathcal{H})$, and consider the natural $r$-set coloring of $L$. Then $L$ contains a subgraph $L'$ with average degree $c_3d(\log n)^{-h_1}$ on which the coloring is diverse. \end{lemma} \begin{proof} Let $h,c_1,c_2$ be the constants given by Lemma \ref{lemma:balanced}. We show that $c_3=\frac{c_2}{8rc_1^{r}}$ and $h_1=(r+1)h$ suffice.
By Lemma \ref{lemma:balanced}, there exist positive integers $t_1,\dots,t_r$ and a subhypergraph $\mathcal{H}'$ of $\mathcal{H}$ such that \begin{enumerate} \item for $i=1,\dots,r$ and $X\in C_{i}(\mathcal{H}')$, we have $t_i\leq d_{\mathcal{H}'}(X)<c_1t_i(\log n)^{h}$, \item $\mathcal{H}'$ has at least $c_2dn^{r-1}(\log n)^{-h}$ edges. \end{enumerate} Let $L_1=L(\mathcal{H}')$, and let $T=t_1\dots t_r$. Note that the degree of each vertex $x_1\dots x_{r}$ of $L_1$ is between $T$ and $2Tc_1^{r}(\log n)^{rh}$. Indeed, given $y_{j+1},\dots,y_r\in V(\mathcal{H}')$ for some $j\geq 1$ such that $x_1\dots x_j y_{j+1}\dots y_r\in E(\mathcal{H}')$, there are $$t_{j}\leq d_{\mathcal{H}'}(x_1\dots x_{j-1}y_{j+1}\dots y_r)< c_1t_j(\log n)^{h}$$ choices for the vertex $y_{j}\in V(\mathcal{H}')$ such that $x_1\dots x_{j-1}y_j y_{j+1}\dots y_r\in E(\mathcal{H}')$. Also, as $|E(\mathcal{H}')|\leq |C_{i}(\mathcal{H}')|c_1t_i(\log n)^{h}\leq n^{r-1}c_1t_i(\log n)^{h}$, we get $t_{i}\geq \frac{c_2}{c_1}d(\log n)^{-2h}$ for $i\in[r]$. Define the graph $H$ whose vertices are the edges of $L_1$, and two distinct edges $ef\in E(L_1)$ and $e'f'\in E(L_1)$ are joined by an edge if $e=e'$ and $f\cap f'\neq \emptyset$. Given $e=x_1\dots x_{r}\in V(L_1)$, the number of neighbors of $e$ in $L_1$ containing a given $y_i\in A_i$ is at most $\Delta_i:=\frac{T}{t_i}(\log n)^{(r-1)h}c_1^{r-1}$.
Therefore, the degree of every vertex of $H$ is at most $$2\sum_{i=1}^{r}\Delta_i\leq 2\sum_{i=1}^r\frac{T}{t_i}(\log n)^{(r-1)h}c_1^{r-1}\leq \frac{2rT(\log n)^{h(r+1)}c_1^{r}}{c_2d}.$$ But then $H$ has an independent set of size $$\frac{|V(H)|}{\Delta(H)+1}=\frac{|E(L_1)|}{\Delta(H)+1}>\frac{|V(L_1)|T/2}{4rT(\log n)^{h(r+1)}c_1^{r}/c_2d}=\frac{c_2d|V(L_1)|}{8r(\log n)^{h(r+1)}c_1^{r}}.$$ This independent set forms a subgraph $L'$ of $L_1$ in which the natural $r$-set coloring is diverse, and $L'$ has average degree at least $\frac{c_2d}{8r(\log n)^{h(r+1)}c_1^{r}}=c_3d(\log n)^{-h_1}.$ \end{proof} Now let us show how the existence of rainbow cycles in $L(\mathcal{H})$ implies the existence of topological cycles in $\mathcal{H}$. \begin{lemma}\label{lemma:rainbow_to_topological} If $\mathcal{H}$ is a 3-partite 3-uniform hypergraph and $L(\mathcal{H})$ contains a rainbow cycle of length $2\ell$, then $\mathcal{H}$ contains a torus-like topological cycle of length at most $6\ell.$ \end{lemma} \begin{proof} As $\mathcal{H}$ is 3-partite, every topological cycle in $\mathcal{H}$ is torus-like by Claim \ref{claim:3partite}. Therefore, it is enough to show that $\mathcal{H}$ contains a topological cycle. Let $A_1,A_2,A_3$ be the 3 vertex classes of $\mathcal{H}$ and let $L=L(\mathcal{H})$. For $i\in[2\ell]$, let $f_{i}=x_{i,1}x_{i,2}x_{i,3}\in V(L)=E(\mathcal{H})$ be the vertices of a rainbow copy of $C_{2\ell}$ in $L$, where $x_{i,j}\in A_{j}$ for $j=1,2,3$. Then $x_{i,j}\neq x_{i',j'}$ for any distinct $(i,j),(i',j')\in [2\ell]\times[3]$. By the definition of $L$, we have either $f_i\rightarrow f_{i+1}$ or $f_{i+1}\rightarrow f_i$ (indices are meant modulo $2\ell$). Define the edges $f_{i}',f_{i}''$ as follows. \begin{enumerate} \item If $f_i\rightarrow f_{i+1}$, let $f_{i}'=x_{i,1}x_{i,2}x_{i+1,3}$ and $f_{i}''=x_{i,1}x_{i+1,2}x_{i+1,3}$. \item If $f_{i+1}\rightarrow f_{i}$, let $f_{i}'=x_{i+1,1}x_{i,2}x_{i,3}$ and $f_{i}''=x_{i+1,1}x_{i+1,2}x_{i,3}$.
\end{enumerate} Then $f_i',f_i''$ are also edges of $\mathcal{H}$. Consider the sequence of edges $f_1,f_1',f_1'',f_2,f_2',f_2'',\dots,f_{2\ell},f_{2\ell}',f_{2\ell}''$. This sequence might not be a topological cycle, but it contains a subsequence which is. If $i$ is an index such that $f_{i-1}\rightarrow f_{i}$ and $f_{i+1}\rightarrow f_{i}$, or $f_{i}\rightarrow f_{i-1}$ and $f_{i}\rightarrow f_{i+1}$, then remove $f_i$ from the sequence. We show that the resulting subsequence is a proper ordering of a topological cycle~$\mathcal{C}$. One way to see this is as follows. Let $i\in [2\ell]$. \begin{enumerate} \item If $f_{i}\rightarrow f_{i-1}$ and $f_{i}\rightarrow f_{i+1}$, then $f_{i-1}''=x_{i,1}x_{i,2}x_{i-1,3}$ and $f_{i}'=x_{i,1}x_{i,2}x_{i+1,3}$ are consecutive edges in the sequence, and the vertex $x_{i,2}$ does not appear in any other edge of $\mathcal{C}$. Remove $f_{i-1}''$ and $f_{i}'$ from $\mathcal{C}$ and add the 3-element set $g_{i}=x_{i,1}x_{i-1,3}x_{i+1,3}$. \item If $f_{i-1}\rightarrow f_{i}$ and $f_{i+1}\rightarrow f_{i}$, then $f_{i-1}''=x_{i-1,1}x_{i,2}x_{i,3}$ and $f_{i}'=x_{i+1,1}x_{i,2}x_{i,3}$ are consecutive edges in the sequence, and the vertex $x_{i,2}$ does not appear in any other edge of $\mathcal{C}$. Remove $f_{i-1}''$ and $f_{i}'$ from $\mathcal{C}$ and add the 3-element set $g_{i}=x_{i-1,1}x_{i+1,1}x_{i,3}$. \end{enumerate} After each such replacement, the resulting hypergraph remains homeomorphic to $\mathcal{C}$. But after every such replacement is executed, the resulting hypergraph is a tight cycle, which is a topological cycle. Therefore, $\mathcal{C}$ is a topological cycle as well. See Figure \ref{fig:rainbowcycle} for an illustration. \end{proof} \begin{proof}[Proof of Theorem \ref{thm:cycles}] Let $\mathcal{H}$ be a 3-uniform hypergraph with $n$ vertices and $dn^{2}$ edges. Then $\mathcal{H}$ contains a 3-partite subhypergraph $\mathcal{H}'$ with at least $\frac{2}{9}dn^{2}$ edges.
By adding isolated vertices, we can assume that each vertex class of $\mathcal{H}'$ has size $n$. Let $L=L(\mathcal{H}')$. Let $c_3,h_1$ be the constants given by Lemma \ref{lemma:diverse} in case $r=3$. Then $L$ has a subgraph $L'$ such that the average degree of $L'$ is at least $\frac{2c_3}{9}d(\log n)^{-h_1}$, and the natural 3-set coloring on $L'$ is diverse. Let $c=c(3,k)$ be the constant given by Lemma \ref{lemma:rainbow}. If the average degree of $L'$ is at least $2c |V(L')|^{1/k}$, then $L'$ contains a rainbow cycle of length at most $2k$. Here, $|V(L')|\leq |E(\mathcal{H}')|\leq dn^{2}$, so if $L'$ contains no rainbow cycle of length at most $2k$, then $$2c(dn^{2})^{1/k}>\frac{2c_3}{9}d(\log n)^{-h_1},$$ which gives $d\leq c'n^{2/(k-1)}(\log n)^{h'}$ with appropriate constants $c',h'>0$. But if $L'$ contains a rainbow cycle of length at most $2k$, then $\mathcal{H}$ contains a torus-like topological cycle of length at most $6k$ by Lemma \ref{lemma:rainbow_to_topological}. \end{proof} \begin{figure} \caption{An illustration of a rainbow cycle of length 4, $f_1\rightarrow f_2\rightarrow f_3\leftarrow f_4 \leftarrow f_1$ (left), and the tight cycle we get after the operations (right).} \label{fig:rainbowcycle} \end{figure} \section{Concluding remarks} In this paper, we proved that if $\mathcal{S}$ is a triangulation of an orientable surface, then $\mbox{ex}_{hom}(n,\mathcal{S})=O(n^{5/2})$. Does the same bound hold for non-orientable surfaces? In particular, we propose the following conjecture. \begin{conjecture} If $\mathcal{H}$ is a 3-uniform hypergraph with $n$ vertices which does not contain a triangulation of the real projective plane, then $\mathcal{H}$ has $O(n^{5/2})$ edges. \end{conjecture} As every non-orientable closed surface is homeomorphic to several real projective planes glued together, a positive answer to this conjecture together with Lemma \ref{lemma:gluing} would imply that every closed surface has extremal number $\Theta(n^{5/2})$.
More generally, let us highlight the conjecture of Linial \cite{L08,L18} mentioned in the introduction. \begin{conjecture} If $\mathcal{S}$ is a 3-uniform hypergraph, then there exists $c=c(\mathcal{S})>0$ such that $\mbox{ex}_{hom}(n,\mathcal{S})<cn^{5/2}$. \end{conjecture} Finally, it would be interesting to see which triangulations of the sphere appear in hypergraphs with $\Omega(n^{5/2})$ edges. As we observed, one can find double-pyramids, and one can also glue several double-pyramids together to get a triangulation of the sphere. Are there other types of triangulations one can expect? More precisely, we propose the following conjecture, which asks whether one can expect to find bounded-degree triangulations. \begin{conjecture} There exist two constants $d,c>0$ such that if $\mathcal{H}$ is a 3-uniform hypergraph with $n$ vertices and at least $cn^{5/2}$ edges, then $\mathcal{H}$ contains a triangulation of the sphere in which every vertex has degree at most $d$. \end{conjecture} \end{document}
\begin{document} \pagestyle{myheadings} \begin{center} {\huge \textbf{Inverse eigenproblems and approximation problems for the generalized reflexive and antireflexive matrices with respect to a pair of generalized reflection matrices}}\footnote{This research was supported by the Natural Science Foundation of China (11601328), the research fund of Shanghai Lixin University of Accounting and Finance (AW-22-2201-00118).} {\large \textbf{Haixia Chang}} $^{a}$School of Statistics and Mathematics, Shanghai Lixin University of Accounting and Finance,\newline Shanghai 201209, P.R. China\\ \end{center} \begin{quotation} \textbf{Abstract} A matrix $P$ is said to be a nontrivial generalized reflection matrix over the real quaternion algebra $\mathbb{H}$ if $P^{\ast }=P\neq I$ and $P^{2}=I$, where $\ast$ denotes the conjugate transpose. We say that $A\in\mathbb{H}^{n\times n}$ is generalized reflexive (or generalized antireflexive) with respect to the matrix pair $(P,Q)$ if $A=PAQ$ $($or $A=-PAQ)$, where $P$ and $Q$ are two nontrivial generalized reflection matrices of dimension $n$. Let ${\large \varphi}$ be one of the following subsets of $\mathbb{H}^{n\times n}$: (i) generalized reflexive matrices; (ii) reflexive matrices; (iii) generalized antireflexive matrices; (iv) antireflexive matrices. Let $Z\in\mathbb{H}^{n\times m}$ with rank$\left( Z\right) =m$ and $\Lambda=$ diag$\left( \lambda_{1},...,\lambda_{m}\right) .$ The inverse eigenproblem is to find a matrix $A$ such that the set ${\large \varphi }\left( Z,\Lambda\right) =\left\{ A\in{\large \varphi}\text{ }|\text{ }AZ=Z\Lambda\right\} $ is nonempty, and to find the general expression of such an $A.$\newline In this paper, we investigate the inverse eigenproblem ${\large \varphi}\left( Z,\Lambda\right) $. Moreover, the approximation problem $\underset{A\in{\large \varphi}}{\min\left\Vert A-E\right\Vert _{F}}$ is studied, where $E$ is a given matrix over $\mathbb{H}$\ and $\parallel \cdot\parallel_{F}$ is the Frobenius norm.
\newline\textbf{Keywords} inner inverse of a matrix; reflexive inverse of a matrix; generalized reflexive matrix; generalized antireflexive matrix; Frobenius norm; approximation problem\newline\textbf{2000 AMS Subject Classifications\ }{\small 15A24, 15A33, 15A57, 15A09}\newline \end{quotation} \section{Introduction} Throughout, $\mathbb{H}^{m\times n}$ denotes the set of all $m\times n$ matrices over the real quaternion algebra \[ \mathbb{H=\{}a_{0}+a_{1}i+a_{2}j+a_{3}k\text{ }|\text{ }i^{2}=j^{2} =k^{2}=ijk=-1\text{ and }a_{0},a_{1},a_{2},a_{3}\ \text{are real numbers}\mathbb{\}}; \] the symbols $I,$ $A^{\ast},$ $r(A),$ $A^{\left( 1\right) },$ $A^{+},$ $\parallel A\parallel_{F}$ stand for the identity matrix of appropriate size, the conjugate transpose, the rank, the inner inverse, the reflexive inverse, and the Frobenius norm of a matrix $A\in\mathbb{H}^{m\times n}$, respectively. Define the inner product $\left\langle A,B\right\rangle =\operatorname{trace}\left( B^{\ast}A\right) $ for all $A,$ $B\in\mathbb{H}^{m\times n}$; then $\mathbb{H}^{m\times n}$ is a Hilbert inner product space, and the norm generated by this inner product is the Frobenius norm. The two matrices $L_{A}$ and $R_{A}$ stand for $L_{A} =I-A^{+}A,$ $R_{A}=I-AA^{+},$ where $A^{+}$ is an arbitrary but fixed reflexive inverse of $A.$ $V_{n}$ denotes the $n\times n$ backward identity matrix whose elements along the southwest-northeast diagonal are ones and whose remaining elements are zeros$.$ Real quaternion matrices play an important role in computer science, quantum physics, and so on (\cite{shuai}-\cite{sla}), and they have attracted growing interest recently (e.g.,\emph{ }\cite{ZFZHEN}, \cite{DMV}). Since multiplication in $\mathbb{H}$ is noncommutative, i.e., $ab\neq ba$ for $a,b\in\mathbb{H}$ in general, some obstacles arise in the study of quaternion matrices.
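The projectors $L_{A}=I-A^{+}A$ and $R_{A}=I-AA^{+}$ can be illustrated numerically; since NumPy has no quaternion type, the sketch below uses a real matrix as a stand-in, and `numpy.linalg.pinv` supplies the Moore--Penrose inverse, which is in particular a reflexive inverse:

```python
import numpy as np

# A rank-1 real 2x3 matrix as a stand-in for a quaternion matrix.
A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])

A_plus = np.linalg.pinv(A)        # Moore-Penrose (hence reflexive) inverse
L = np.eye(3) - A_plus @ A        # L_A = I - A^+ A
R = np.eye(2) - A @ A_plus        # R_A = I - A A^+

assert np.allclose(L @ L, L)      # L_A is idempotent
assert np.allclose(R @ R, R)      # R_A is idempotent
assert np.allclose(A @ L, 0)      # A L_A = A - A A^+ A = 0
assert np.allclose(R @ A, 0)      # R_A A = A - A A^+ A = 0
```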
Many authors have dealt with this difficulty by embedding the quaternions into complex vector spaces, through symplectic representations, as D.R. Farenick and B.A.F. Pidkowich did in \cite{Farenick2003}, or through homotopy theory, as Zhang did in \cite{ZFZHEN}. Since left and right scalar multiplications are different, we must consider $Ax=\lambda x$ and $Ax=x\lambda$ separately. A quaternion $\lambda$ is said to be a left (right) eigenvalue if $Ax=\lambda x$ $\left( Ax=x\lambda \right) .$ In 1985, Wood showed that every $n\times n$ quaternion matrix has at least one left eigenvalue in $\mathbb{H}$, and Zhang in \cite{ZFZHEN} gave a proof using the homotopy theory method. However, the problem of how many left eigenvalues a square quaternion matrix has remains open. In contrast, the right eigenvalues of a square quaternion matrix have been well studied: Zhang in \cite{ZFZHEN} showed that any $n\times n$ quaternion matrix $A$ has exactly $n$ right eigenvalues which are complex numbers with nonnegative imaginary parts (the standard eigenvalues of $A$). In view of the above, we investigate only the right inverse eigenproblems of a square quaternion matrix in this paper. Reflexive and antireflexive matrices with respect to a generalized reflection matrix have been widely studied, e.g. \cite{hu2003}-\cite{Trench3}. But the generalized reflexive and generalized antireflexive matrices with respect to a pair of generalized reflection matrices have not been widely studied yet. In 1998, Chen in \cite{chen98} defined generalized reflexive and generalized antireflexive matrices and gave two special subsets of the space $\mathbb{C}^{m\times n}$. The applications of generalized reflexive (or generalized antireflexive) matrices are very wide in engineering and science, such as the altitude estimation of a level network, electric networks, and the structural analysis of trusses; see \cite{chen98}.
\begin{definition} A matrix $A\in\mathbb{H}^{n\times n}$ is generalized reflexive with respect to the matrix pair $(P,Q)$ if $A=PAQ$, and $A\in\mathbb{H}^{n\times n}$ is generalized antireflexive with respect to the matrix pair $(P,Q)$ if $A=-PAQ,$ where $P$ and $Q$ are two nontrivial generalized reflection matrices of dimension $n$, i.e., $P^{\ast}=P\neq I$, $P^{2}=I,$ and $Q^{\ast}=Q\neq I$, $Q^{2}=I$. \end{definition} The generalized reflexive and generalized antireflexive matrices are further generalizations of reflexive and antireflexive matrices, respectively.\ From this definition, it is clear that $A$ is a reflexive matrix if the generalized reflection matrices satisfy $P=Q,$ i.e., $A=PAP$, and a centrosymmetric matrix if $P=Q=V_{n},$ i.e., $A=V_{n}AV_{n}.$ Centrosymmetric and centroskew matrices were investigated by many authors, such as Andrew \cite{Andrew1},\cite{Andrew2}, Weaver \cite{Weaver}, Boullion and Atchison \cite{W.C.1}, Wang \cite{wqw2005}, and others. Define two special subspaces of $\mathbb{H}^{n\times n}$ \begin{align*} \mathbb{H}_{r}^{n\times n}\left( P,Q\right) & =\left\{ A\text{ }|\text{ }A\in\mathbb{H}^{n\times n}\text{ and }A=PAQ\right\} ,\\ \mathbb{H}_{a}^{n\times n}\left( P,Q\right) & =\left\{ A\text{ }|\text{ }A\in\mathbb{H}^{n\times n}\text{ and }A=-PAQ\right\} , \end{align*} where $P$ and $Q$ are two generalized reflection matrices of dimension $n.$ If $A\in\mathbb{H}_{r}^{n\times n}\left( P,Q\right) $ $\left( \in\mathbb{H}_{a}^{n\times n}\left( P,Q\right) \right) ,$ then $A$ is a generalized reflexive (generalized antireflexive) matrix with respect to the nontrivial generalized reflection matrix pair $\left( P,Q\right) .$ The inverse eigenproblem of quaternion matrices: given matrices $Z\in\mathbb{H}^{n\times m}$ and $\Lambda=$diag$\left( \lambda_{1} ,...,\lambda_{m}\right) ,$ find $A\in\mathbb{H}^{n\times n}$ satisfying $AZ=Z\Lambda,$ where $\lambda_{1},...,\lambda_{m}$ are complex numbers with nonnegative imaginary parts.
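As a numerical illustration of the special case $P=Q=V_{n}$ above (again in the real case, since NumPy has no quaternion type), the backward identity and the centrosymmetry condition $A=V_{n}AV_{n}$ can be checked directly; conjugating by $V_{n}$ rotates a matrix by $180$ degrees:

```python
import numpy as np

n = 4
V = np.fliplr(np.eye(n))                        # backward identity V_n
assert np.allclose(V.T, V)                      # V_n* = V_n
assert np.allclose(V @ V, np.eye(n))            # V_n^2 = I (and V_n != I)

# Symmetrize an arbitrary real matrix into a centrosymmetric one,
# i.e. a generalized reflexive matrix with respect to the pair (V_n, V_n):
B = np.arange(float(n * n)).reshape(n, n)
A = (B + V @ B @ V) / 2
assert np.allclose(A, V @ A @ V)                # A = V_n A V_n
assert np.allclose(A, np.flipud(np.fliplr(A)))  # 180-degree rotation invariance
```

The complementary part $(B - V_nBV_n)/2$ is centroskew, i.e., generalized antireflexive with respect to the same pair.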
The inverse problems and inverse eigenproblems of matrices have been widely studied (e.g., \cite{Trench2}, \cite{bai}-\cite{XIE}), with methods involving the Moore-Penrose inverse or singular value decompositions of matrices related to $\left( Z,\Lambda\right) .$ In this paper, we investigate the inverse (right) eigenproblem of $\left( Z,\Lambda\right) $: find $A\in\mathbb{H}_{r}^{n\times n}\left( P,Q\right) $ $\left( \in\mathbb{H}_{a}^{n\times n}\left( P,Q\right) \right) $ and present the general expression of $A.$ We make use of the structure properties of $A\in\mathbb{H}_{r}^{n\times n}\left( P,Q\right) $ $\left( \in \mathbb{H}_{a}^{n\times n}\left( P,Q\right) \right) $ to deal with the inverse eigenproblem. Moreover, we consider the approximation problem $\underset{A\in{\large \varphi}}{\min\left\Vert A-E\right\Vert _{F}},$ where $E$ is a given matrix over $\mathbb{H}$ and ${\large \varphi}$ is one of $\mathbb{H}_{r}^{n\times n}\left( P,Q\right) $ and $\mathbb{H}_{a}^{n\times n}\left( P,Q\right) $; the explicit expression of the solution $A$ is also presented. Furthermore, it is pointed out that some results in recent papers are special cases of those in this paper. \section{The solution of the inverse eigenproblems of $\left( Z,\Lambda \right) $} In this section, we first give a structure property of a generalized reflexive matrix and a generalized antireflexive matrix. Then we make use of the structure property to derive the general expression of $A\in\mathbb{H} _{r}^{n\times n}\left( P,Q\right) $ $\left( \in\mathbb{H}_{a}^{n\times n}\left( P,Q\right) \right) $ satisfying $AZ=Z\Lambda.$ By Proposition 2.1 in \cite{Farenick2003}, we know that $R\in\mathbb{H} ^{n\times n}$ has infinitely many nonreal right eigenvalues if $R$ has a nonreal right eigenvalue, because if $\left( \lambda,\xi\right) $ is a right eigenpair, then $\left( w^{-1}\lambda w,\xi w\right) $ is also a right eigenpair of $R$. We then introduce the orbit $\theta(\lambda),$ i.e., a class from a partition of $\mathbb{H}$. 
For example, a similarity orbit $\theta(\lambda)$ of $\lambda$: \[ \theta(\lambda)=\left\{ w^{-1}\lambda w\text{ }|\text{ }w\in\mathbb{H}\text{, }w\neq0\right\} \] and a conjugacy class $\theta(\lambda)$ of $\lambda$: \[ \theta(\lambda)=\left\{ \bar{w}\lambda w\text{ }|\text{ }w\in\mathbb{H} \text{, }\left\vert w\right\vert =1\right\} . \] \begin{lemma} $($See Theorem 3.3 in \cite{Farenick2003}$)$ If $R\in\mathbb{H}^{n\times n}$ is normal, then there are matrices $D,$ $U\in\mathbb{H}^{n\times n}$ such that: \newline$\left( 1\right) $ $U$ is a unitary matrix, $D$ is a diagonal matrix, and $U^{\ast}RU=D;$\newline$\left( 2\right) $ each diagonal entry of $D$ is a complex number contained in the closed upper halfplane $\mathbf{C} ^{+};$\newline$\left( 3\right) $ $q\in\mathbb{H}$ is a right eigenvalue of $R$ if and only if $q\in\theta(\lambda)$ for some diagonal element $\lambda$ of $D.$ \end{lemma} \begin{lemma} Suppose that $P\in\mathbb{H}^{n\times n}$ is a nontrivial generalized reflection matrix; then there exists a unitary matrix $U\in\mathbb{H}^{n\times n}$ such that \begin{equation} P=U\left[ \begin{array} [c]{cc} I_{r_{1}} & 0\\ 0 & -I_{n-r_{1}} \end{array} \right] U^{\ast}. \label{aa1} \end{equation} \end{lemma} \begin{proof} A matrix $P\in\mathbb{H}^{n\times n}$ is a nontrivial generalized reflection matrix, i.e., $P^{\ast}=P\neq I$, $P^{2}=I.$ Obviously, $P$ is a normal matrix. From the minimal polynomial of $P,$ we know that $1$ and $-1$ are exactly the right eigenvalues (or standard eigenvalues) of $P.$ By Lemma 2.1, there exists a unitary matrix $U$ satisfying (\ref{aa1}). \end{proof} If a matrix $Q\in\mathbb{H}^{n\times n}$ is a nontrivial generalized reflection matrix, then by Lemma 2.2 we get \begin{equation} Q=V\left[ \begin{array} [c]{cc} I_{r_{2}} & 0\\ 0 & -I_{n-r_{2}} \end{array} \right] V^{\ast}, \label{ab1} \end{equation} where $V$ is a unitary matrix. We can use a straightforward method to obtain $U$ and $V$ mentioned above. 
In fact, by $P^{2}=I,$ i.e., $\left( I-P\right) \left( I+P\right) =0$ or $\left( I+P\right) \left( I-P\right) =0,$ we know the column vectors of $I+P$ and $I-P$ are eigenvectors belonging to the right eigenvalues $\lambda=1$ and $\lambda=-1$ of the matrix $P,$ respectively. We need only apply the Gram-Schmidt process to the columns of $I+P$ and $I-P,$ respectively. Let \begin{align*} & I+P\text{ }\underrightarrow{Gram-Schmidt}\text{ }U_{1}\\ & I-P\text{ }\underrightarrow{Gram-Schmidt}\text{ }U_{2} \end{align*} and in (\ref{aa1}) \begin{equation} U=\left[ \begin{array} [c]{cc} U_{1}, & U_{2} \end{array} \right] \label{A1} \end{equation} where $U_{1}^{\ast}U_{1}=I_{r_{1}},$ $U_{2}^{\ast}U_{2}=I_{n-r_{1}},$ $U_{1}^{\ast}U_{2}=0.$ Similarly, we can get the factorization of $V$ in (\ref{ab1}): \begin{equation} V=\left[ \begin{array} [c]{cc} V_{1}, & V_{2} \end{array} \right] \label{A2} \end{equation} where $V_{1}^{\ast}V_{1}=I_{r_{2}},$ $V_{2}^{\ast}V_{2}=I_{n-r_{2}},$ $V_{1}^{\ast}V_{2}=0.$ By (\ref{aa1})-(\ref{A2}), we obtain \begin{align} P & =\left[ \begin{array} [c]{cc} U_{1}, & U_{2} \end{array} \right] \left[ \begin{array} [c]{cc} I_{r_{1}} & 0\\ 0 & -I_{n-r_{1}} \end{array} \right] \left[ \begin{array} [c]{c} U_{1}^{\ast}\\ U_{2}^{\ast} \end{array} \right] ,\label{A3}\\ Q & =\left[ \begin{array} [c]{cc} V_{1}, & V_{2} \end{array} \right] \left[ \begin{array} [c]{cc} I_{r_{2}} & 0\\ 0 & -I_{n-r_{2}} \end{array} \right] \left[ \begin{array} [c]{c} V_{1}^{\ast}\\ V_{2}^{\ast} \end{array} \right] , \label{A4} \end{align} where \begin{equation} U_{1}^{\ast}U_{1}=I_{r_{1}},U_{2}^{\ast}U_{2}=I_{n-r_{1}},U_{1}^{\ast}U_{2}=0 \label{A5} \end{equation} and \begin{equation} V_{1}^{\ast}V_{1}=I_{r_{2}},V_{2}^{\ast}V_{2}=I_{n-r_{2}},V_{1}^{\ast}V_{2}=0. 
\label{A6} \end{equation} \begin{theorem} A matrix $A\in\mathbb{H}^{n\times n}$ is a generalized reflexive matrix with respect to the nontrivial generalized reflection matrix pair $(P,Q)$ if and only if $A$ can be expressed as \begin{equation} A=\left[ \begin{array} [c]{cc} U_{1}, & U_{2} \end{array} \right] \left[ \begin{array} [c]{cc} A_{11} & 0\\ 0 & A_{22} \end{array} \right] \left[ \begin{array} [c]{c} V_{1}^{\ast}\\ V_{2}^{\ast} \end{array} \right], \label{aa2} \end{equation} where \begin{equation} A_{11}=U_{1}^{\ast}AV_{1}\text{, }A_{22}=U_{2}^{\ast}AV_{2} \label{AS1} \end{equation} and $U_{1},$ $U_{2},$ $V_{1},$ $V_{2}\ $are defined as (\ref{A1}) and (\ref{A2}). \end{theorem} \begin{proof} By the definition of $A\in\mathbb{H}_{r}^{n\times n}\left( P,Q\right) ,$ (\ref{A3}) and (\ref{A4}), we have \begin{equation} A=PAQ=\left[ \begin{array} [c]{cc} U_{1}, & U_{2} \end{array} \right] \left[ \begin{array} [c]{cc} I_{r_{1}} & 0\\ 0 & -I_{n-r_{1}} \end{array} \right] \left[ \begin{array} [c]{c} U_{1}^{\ast}\\ U_{2}^{\ast} \end{array} \right] A\left[ \begin{array} [c]{cc} V_{1}, & V_{2} \end{array} \right] \left[ \begin{array} [c]{cc} I_{r_{2}} & 0\\ 0 & -I_{n-r_{2}} \end{array} \right] \left[ \begin{array} [c]{c} V_{1}^{\ast}\\ V_{2}^{\ast} \end{array} \right] , \label{ab3} \end{equation} i.e., \begin{equation} \left[ \begin{array} [c]{c} U_{1}^{\ast}\\ U_{2}^{\ast} \end{array} \right] A\left[ \begin{array} [c]{cc} V_{1}, & V_{2} \end{array} \right] =\left[ \begin{array} [c]{cc} I_{r_{1}} & 0\\ 0 & -I_{n-r_{1}} \end{array} \right] \left[ \begin{array} [c]{c} U_{1}^{\ast}\\ U_{2}^{\ast} \end{array} \right] A\left[ \begin{array} [c]{cc} V_{1}, & V_{2} \end{array} \right] \left[ \begin{array} [c]{cc} I_{r_{2}} & 0\\ 0 & -I_{n-r_{2}} \end{array} \right] \label{ab2} \end{equation} Put \begin{equation} \left[ \begin{array} [c]{c} U_{1}^{\ast}\\ U_{2}^{\ast} \end{array} \right] A\left[ \begin{array} [c]{cc} V_{1}, & V_{2} \end{array} \right] =\left[ \begin{array} [c]{cc} 
A_{11} & A_{12}\\ A_{21} & A_{22} \end{array} \right] , \label{aa3} \end{equation} where $A_{11}\in\mathbb{H}^{r_{1}\times r_{2}},$ $A_{12}\in\mathbb{H} ^{r_{1}\times\left( n-r_{2}\right) },$ $A_{21}\in\mathbb{H}^{\left( n-r_{1}\right) \times r_{2}},$ $A_{22}\in\mathbb{H}^{\left( n-r_{1}\right) \times\left( n-r_{2}\right) }.$ Substituting (\ref{aa3}) into (\ref{ab2}), we have \[ \left[ \begin{array} [c]{cc} A_{11} & A_{12}\\ A_{21} & A_{22} \end{array} \right] =\left[ \begin{array} [c]{cc} I_{r_{1}} & 0\\ 0 & -I_{n-r_{1}} \end{array} \right] \left[ \begin{array} [c]{cc} A_{11} & A_{12}\\ A_{21} & A_{22} \end{array} \right] \left[ \begin{array} [c]{cc} I_{r_{2}} & 0\\ 0 & -I_{n-r_{2}} \end{array} \right] =\left[ \begin{array} [c]{cc} A_{11} & -A_{12}\\ -A_{21} & A_{22} \end{array} \right] , \] yielding $A_{12}=0$ and $A_{21}=0$, so that (\ref{ab3}) becomes (\ref{aa2}). By (\ref{aa2}), (\ref{A5}) and (\ref{A6}), it is easy to get (\ref{AS1}). Conversely, if (\ref{aa2}) holds, it is easy to verify that $A$ is a generalized reflexive matrix with respect to the nontrivial generalized reflection matrix pair $(P,Q).$ \end{proof} Similarly, we can obtain the factorization of a generalized antireflexive matrix with respect to the generalized reflection matrix pair $(P,Q).$ We have the following Theorem. \begin{theorem} A matrix $A\in\mathbb{H}_{a}^{n\times n}\left( P,Q\right) $ if and only if $A$ can be expressed as \[ A=\left[ \begin{array} [c]{cc} U_{1}, & U_{2} \end{array} \right] \left[ \begin{array} [c]{cc} 0 & A_{12}\\ A_{21} & 0 \end{array} \right] \left[ \begin{array} [c]{c} V_{1}^{\ast}\\ V_{2}^{\ast} \end{array} \right] , \] where \[ A_{12}=U_{1}^{\ast}AV_{2}\text{, \ }A_{21}=U_{2}^{\ast}AV_{1}, \] and $U_{1},$ $U_{2},$ $V_{1},$ $V_{2}\ $are defined as (\ref{A1}) and (\ref{A2}). 
\end{theorem} Now we discuss the solution of the inverse eigenproblems of $\left( Z,\Lambda\right) .$ \begin{lemma} Given $B\in\mathbb{H}^{m\times t}$ and $X\in\mathbb{H}^{s\times t}.$ Then the inverse problem \[ AX=B \] is consistent if and only if \[ BX^{+}X=B. \] In that case, the general form of the solution $A\in\mathbb{H}^{m\times s}$ is \[ A=BX^{+}+WR_{X}, \] where $R_{X}=I-XX^{+}$ and $W$ is an arbitrary matrix over $\mathbb{H}$ of appropriate size. \end{lemma} \begin{theorem} Given $Z\in\mathbb{H}^{n\times m}$, $\Lambda=$diag$\left( \lambda _{1},...,\lambda_{m}\right)$. Let $U_{1},$ $U_{2},$ $V_{1},$ $V_{2} $ be defined as (\ref{A1}), (\ref{A2}), \begin{equation} Z=\left[ \begin{array} [c]{cc} V_{1}X_{1}, & V_{2}X_{2} \end{array} \right] =\left[ \begin{array} [c]{cc} U_{1}Y_{1}, & U_{2}Y_{2} \end{array} \right] ,\text{ }\Lambda=\left[ \begin{array} [c]{cc} \Phi & 0\\ 0 & \Psi \end{array} \right] ,\text{ } \label{AK1} \end{equation} where $0<k<m$, $X_{1}\in\mathbb{H}^{r_{2}\times k},$ $X_{2}\in\mathbb{H}^{\left( n-r_{2}\right) \times\left( m-k\right) }$, $Y_{1}\in\mathbb{H}^{r_{1}\times k},$ $Y_{2}\in\mathbb{H}^{\left( n-r_{1}\right) \times\left( m-k\right) },$ $\Phi=$diag$\left( \lambda_{1},...,\lambda_{k}\right) ,$ $\Psi =$diag$\left( \lambda_{k+1},...,\lambda_{m}\right) .$ Then the set \begin{equation} {\large \varphi}_{1}\left( Z,\Lambda\right) =\left\{ A\in\mathbb{H} _{r}^{n\times n}\left( P,Q\right) \text{ }|\text{ }AZ=Z\Lambda\right\} \label{ak1} \end{equation} is nonempty if and only if \begin{equation} Y_{1}\Phi X_{1}X_{1}^{+}=Y_{1}\Phi,\text{ }Y_{2}\Psi X_{2}X_{2}^{+}=Y_{2}\Psi. 
\label{AK2} \end{equation} In that case, the general expression of $A$ is \begin{equation} A=\left[ \begin{array} [c]{cc} U_{1}, & U_{2} \end{array} \right] \left[ \begin{array} [c]{cc} Y_{1}\Phi X_{1}^{+}+W_{1}R_{X_{1}} & 0\\ 0 & Y_{2}\Psi X_{2}^{+}+W_{2}R_{X_{2}} \end{array} \right] \left[ \begin{array} [c]{c} V_{1}^{\ast}\\ V_{2}^{\ast} \end{array} \right] , \label{AK3} \end{equation} where $W_{1},W_{2}$ are arbitrary matrices over $\mathbb{H}$ with appropriate sizes. \end{theorem} \begin{proof} Since $A\in\mathbb{H}_{r}^{n\times n}\left( P,Q\right) ,$ $A$ has the form of (\ref{aa2}). By (\ref{A1}) and (\ref{A2}), substituting (\ref{aa2}) and (\ref{AK1}) into $AZ=Z\Lambda$, we get \[ \left[ \begin{array} [c]{cc} A_{11}X_{1} & 0\\ 0 & A_{22}X_{2} \end{array} \right] =\left[ \begin{array} [c]{cc} Y_{1}\Phi & 0\\ 0 & Y_{2}\Psi \end{array} \right] , \] i.e., \[ A_{11}X_{1}=Y_{1}\Phi,\text{ }A_{22}X_{2}=Y_{2}\Psi. \] The set ${\large \varphi}_{1}\left( Z,\Lambda\right) $ is nonempty if and only if the inverse problems $A_{11}X_{1}=Y_{1}\Phi$ and $A_{22}X_{2}=Y_{2}\Psi$ are consistent. By Lemma 2.5, the inverse problems $A_{11}X_{1}=Y_{1}\Phi$ and $A_{22}X_{2}=Y_{2}\Psi$ are consistent if and only if (\ref{AK2}) holds, and their general solutions are \begin{align} A_{11} & =Y_{1}\Phi X_{1}^{+}+W_{1}R_{X_{1}},\label{AK4}\\ A_{22} & =Y_{2}\Psi X_{2}^{+}+W_{2}R_{X_{2}},\nonumber \end{align} respectively. Then substituting (\ref{AK4}) into (\ref{aa2}), we get (\ref{AK3}). Conversely, if (\ref{AK2}) holds and $A$ has the expression (\ref{AK3}), it is easy to verify that $A\in{\large \varphi}_{1}\left( Z,\Lambda\right) .$ \end{proof} If $A\in\mathbb{H}_{r}^{n\times n}\left( P,P\right) ,$ i.e., $A$ is a reflexive matrix with respect to a generalized reflection matrix $P,$ we have the following Corollary. \begin{corollary} Given $Z\in\mathbb{H}^{n\times m}$, $\Lambda=$diag$\left( \lambda _{1},...,\lambda_{m}\right) . 
$ Let $U_{1},$ $U_{2},$ $V_{1},$ $V_{2}\ $ be defined as (\ref{A1}) and (\ref{A2}), \[ Z=\left[ \begin{array} [c]{cc} U_{1}X_{1}, & U_{2}X_{2} \end{array} \right] ,\text{ }\Lambda=\left[ \begin{array} [c]{cc} \Phi & 0\\ 0 & \Psi \end{array} \right] ,\text{ } \] where $0<k<m,$ $X_{1}\in\mathbb{H}^{r_{1}\times k},$ $X_{2}\in\mathbb{H}^{\left( n-r_{1}\right) \times\left( m-k\right) },$ $\Phi=$diag$\left( \lambda _{1},...,\lambda_{k}\right) ,$ $\Psi=$diag$\left( \lambda_{k+1} ,...,\lambda_{m}\right) .$ Then the set \begin{equation} {\large \varphi}_{1}^{^{\prime\prime}}\left( Z,\Lambda\right) =\left\{ A\in\mathbb{H}_{r}^{n\times n}\left( P,P\right) \text{ }|\text{ } AZ=Z\Lambda\right\} \label{ak4} \end{equation} is nonempty if and only if \[ X_{1}\Phi X_{1}X_{1}^{+}=X_{1}\Phi,\text{ }X_{2}\Psi X_{2}X_{2}^{+}=X_{2} \Psi. \] In that case, the general expression of $A$ is \[ A=\left[ \begin{array} [c]{cc} U_{1}, & U_{2} \end{array} \right] \left[ \begin{array} [c]{cc} X_{1}\Phi X_{1}^{+}+W_{1}R_{X_{1}} & 0\\ 0 & X_{2}\Psi X_{2}^{+}+W_{2}R_{X_{2}} \end{array} \right] \left[ \begin{array} [c]{c} U_{1}^{\ast}\\ U_{2}^{\ast} \end{array} \right] , \] where $W_{1},W_{2}$ are arbitrary matrices over $\mathbb{H}$ with appropriate sizes. \end{corollary} \begin{theorem} Given $Z\in\mathbb{H}^{n\times m}$, $\Lambda=$diag$\left( \lambda _{1},...,\lambda_{m}\right)$. 
Let $U_{1},$ $U_{2},$ $V_{1},$ $V_{2} $ be defined as (\ref{A1}), (\ref{A2}), \[ Z=\left[ \begin{array} [c]{cc} V_{2}X_{1}, & V_{1}X_{2} \end{array} \right] =\left[ \begin{array} [c]{cc} U_{1}Y_{1}, & U_{2}Y_{2} \end{array} \right] ,\text{ }\Lambda=\left[ \begin{array} [c]{cc} \Phi & 0\\ 0 & \Psi \end{array} \right] ,\text{ } \] where $0<l<m$, $X_{1}\in\mathbb{H}^{\left( n-r_{2}\right) \times l},$ $X_{2} \in\mathbb{H}^{r_{2}\times\left( m-l\right) },$ $Y_{1}\in\mathbb{H} ^{r_{1}\times l},$ $Y_{2}\in\mathbb{H}^{\left( n-r_{1}\right) \times\left( m-l\right) },$ $\Phi=$diag$\left( \lambda_{1},...,\lambda_{l}\right) ,$ $\Psi=$diag$\left( \lambda_{l+1},...,\lambda_{m}\right) .$ Then the set \begin{equation} {\large \varphi}_{2}\left( Z,\Lambda\right) =\left\{ A\in\mathbb{H} _{a}^{n\times n}\left( P,Q\right) \text{ }|\text{ }AZ=Z\Lambda\right\} \label{AG} \end{equation} is nonempty if and only if \[ Y_{1}\Phi X_{1}X_{1}^{+}=Y_{1}\Phi,\text{ }Y_{2}\Psi X_{2}X_{2}^{+}=Y_{2} \Psi. \] In that case, the general expression of $A$ is \[ A=\left[ \begin{array} [c]{cc} U_{1}, & U_{2} \end{array} \right] \left[ \begin{array} [c]{cc} 0 & Y_{1}\Phi X_{1}^{+}+W_{1}R_{X_{1}}\\ Y_{2}\Psi X_{2}^{+}+W_{2}R_{X_{2}} & 0 \end{array} \right] \left[ \begin{array} [c]{c} V_{1}^{\ast}\\ V_{2}^{\ast} \end{array} \right] , \] where $W_{1},W_{2}$ are arbitrary matrices over $\mathbb{H}$ with appropriate sizes. \end{theorem} \begin{proof} Similar to the proof of Theorem 2.6. \end{proof} If $A\in\mathbb{H}_{a}^{n\times n}\left( P,P\right) ,$ i.e., $A$ is an antireflexive matrix with respect to a generalized reflection matrix $P,$ we have the following Corollary. 
\begin{corollary} Given $Z\in\mathbb{H}^{n\times m}$, $\Lambda=$diag$\left( \lambda _{1},...,\lambda_{m}\right) .$ Let $U_{1},$ $U_{2}\ $ be defined as (\ref{A1}), \[ Z=\left[ \begin{array} [c]{cc} U_{1}X_{1}, & U_{2}X_{2} \end{array} \right] ,\text{ }\Lambda=\left[ \begin{array} [c]{cc} \Phi & 0\\ 0 & \Psi \end{array} \right] ,\text{ } \] where $0<l<m,$ $X_{1}\in\mathbb{H}^{\left( n-r_{1}\right) \times l},$ $X_{2} \in\mathbb{H}^{r_{1}\times\left( m-l\right) },$ $\Phi=$diag$\left( \lambda_{1},...,\lambda_{l}\right) ,$ $\Psi=$diag$\left( \lambda _{l+1},...,\lambda_{m}\right) .$ Then the set \begin{equation} {\large \varphi}_{2}^{^{\prime\prime}}\left( Z,\Lambda\right) =\left\{ A\in\mathbb{H}_{a}^{n\times n}\left( P,P\right) \text{ }|\text{ } AZ=Z\Lambda\right\} \label{AP} \end{equation} is nonempty if and only if \[ X_{1}\Phi X_{1}X_{1}^{+}=X_{1}\Phi,\text{ }X_{2}\Psi X_{2}X_{2}^{+}=X_{2} \Psi. \] In that case, the general expression of $A$ is \[ A=\left[ \begin{array} [c]{cc} U_{1}, & U_{2} \end{array} \right] \left[ \begin{array} [c]{cc} 0 & X_{1}\Phi X_{1}^{+}+W_{1}R_{X_{1}}\\ X_{2}\Psi X_{2}^{+}+W_{2}R_{X_{2}} & 0 \end{array} \right] \left[ \begin{array} [c]{c} U_{1}^{\ast}\\ U_{2}^{\ast} \end{array} \right] , \] where $W_{1},W_{2}$ are arbitrary matrices over $\mathbb{H}$ with appropriate sizes. \end{corollary} \begin{remark} We find that it is not necessary for $P$ and $Q$ to be Hermitian involutory matrices in the process of dealing with the inverse eigenproblems. So we may assume only that $P$ and $Q$ are involutory matrices and obtain some corresponding conclusions. Furthermore, when $P=Q$ is only an involutory matrix, we can recover some results on inverse eigenproblems of \cite{Trench2}. 
\end{remark} \section{The approximation problem to the solution to the inverse eigenproblems} In this section, we consider the approximation problem $\underset {A\in{\large \varphi}}{\min\left\Vert A-E\right\Vert _{F}},$ where $E\in\mathbb{H}^{n\times n}$ is a given matrix over $\mathbb{H}$ and ${\large \varphi}$ is either $\mathbb{H}_{r}^{n\times n}\left( P,Q\right) $ or $\mathbb{H}_{a}^{n\times n}\left( P,Q\right) .$ \begin{lemma} Suppose that $E\in\mathbb{H}^{m\times n},$ $\Gamma\in\mathbb{H}^{m\times m}$ and $\digamma\in\mathbb{H}^{n\times n},$ where $\Gamma^{2}=\Gamma=\Gamma^{\ast }$ and $\digamma^{2}=\digamma=\digamma^{\ast}.$ Then the minimum of $\left\Vert \Gamma X\digamma-E\right\Vert _{F}$ over $X\in\mathbb{H}^{m\times n}$ is attained at $X$ if and only if \[ \Gamma(X-E)\digamma=0. \] In that case, \[ \underset{X\in\mathbb{H}^{m\times n}}{\min}\left\Vert \Gamma X\digamma -E\right\Vert_{F}=\left\Vert \Gamma E\digamma-E\right\Vert _{F}. \] \end{lemma} \begin{proof} Note that \[ \Gamma X\digamma-E=(\Gamma E\digamma-E)+\Gamma(X-E)\digamma. \] Then \begin{equation} (\Gamma X\digamma-E)^{\ast}\left( \Gamma X\digamma-E\right) =(\Gamma E\digamma-E)^{\ast}(\Gamma E\digamma-E)+\left[ \Gamma(X-E)\digamma\right] ^{\ast}\Gamma(X-E)\digamma+G+G^{\ast} \label{w1} \end{equation} where \begin{align*} G =(\Gamma E\digamma-E)^{\ast}\Gamma(X-E)\digamma& =(\digamma E^{\ast} \Gamma-E^{\ast})\Gamma(X-E)\digamma\\ & =\left( \digamma-I\right) E^{\ast}\Gamma(X-E)\digamma \end{align*} Suppose $\left( \lambda,x\right) $ is a right eigenpair for the matrix $G\in\mathbb{H}^{n\times n},$ i.e., $Gx=x\lambda$ with $\lambda\neq0$. Since $\digamma^{2}=\digamma,$ i.e., $\digamma(\digamma-I)=0,$ we obtain \[ \digamma Gx=\digamma\left( \digamma-I\right) E^{\ast}\Gamma(X-E)\digamma x=0, \] while $\digamma Gx=\digamma x\lambda$, so that $\digamma x=0$. But then $Gx=\left( \digamma-I\right) E^{\ast}\Gamma(X-E)\digamma x=0$, i.e., $x\lambda=0$, yielding $x=0$. Hence $G$ has no nonzero right eigenvalues. 
Consequently, taking traces in (\ref{w1}) yields \[ \left\Vert \Gamma X\digamma-E\right\Vert _{F}^{2}=\left\Vert \Gamma E\digamma-E\right\Vert _{F}^{2}+\left\Vert \Gamma(X-E)\digamma\right\Vert _{F}^{2}. \] Thus $\underset{X\in\mathbb{H}^{m\times n}}{\min}\left\Vert \Gamma X\digamma -E\right\Vert _{F}$ is equivalent to $\underset{X\in\mathbb{H}^{m\times n} }{\min}\left\Vert \Gamma(X-E)\digamma\right\Vert _{F}.$ Clearly, when $X=E+L_{\Gamma}YR_{\digamma}$ (here $L_{\Gamma}=I-\Gamma$ and $R_{\digamma}=I-\digamma$), where $Y$ is an arbitrary matrix over $\mathbb{H}$ of appropriate size, we have $\underset{X\in\mathbb{H}^{m\times n}} {\min}\left\Vert \Gamma(X-E)\digamma\right\Vert _{F}=0.$ Hence \[ \underset{X\in\mathbb{H}^{m\times n}}{\min}\left\Vert \Gamma X\digamma -E\right\Vert_{F}=\left\Vert \Gamma E\digamma-E\right\Vert _{F}. \] \end{proof} \begin{theorem} Given a matrix $E\in\mathbb{H}^{n\times n}$, and let \begin{equation} \left[ \begin{array} [c]{c} U_{1}^{\ast}\\ U_{2}^{\ast} \end{array} \right] E\left[ \begin{array} [c]{cc} V_{1}, & V_{2} \end{array} \right] =\left[ \begin{array} [c]{cc} E_{11} & E_{12}\\ E_{21} & E_{22} \end{array} \right] , \label{ac1} \end{equation} where $U_{1},$ $U_{2},$ $V_{1},$ $V_{2}\ $are defined as (\ref{A1}) and (\ref{A2}) and $E_{11}\in\mathbb{H}^{r_{1}\times r_{2}},$ $E_{12}\in \mathbb{H}^{r_{1}\times\left( n-r_{2}\right) },$ $E_{21}\in\mathbb{H} ^{\left( n-r_{1}\right) \times r_{2}},$ $E_{22}\in\mathbb{H}^{\left( n-r_{1}\right) \times\left( n-r_{2}\right) }$. Then the approximation problem $\underset{A\in{\large \varphi}_{1}}{\min}\left\Vert A-E\right\Vert _{F}$ has a unique solution \begin{equation} A_{r}=\left[ \begin{array} [c]{cc} U_{1}, & U_{2} \end{array} \right] \left[ \begin{array} [c]{cc} Y_{1}\Phi X_{1}^{+}+E_{11}R_{X_{1}} & 0\\ 0 & Y_{2}\Psi X_{2}^{+}+E_{22}R_{X_{2}} \end{array} \right] \left[ \begin{array} [c]{c} V_{1}^{\ast}\\ V_{2}^{\ast} \end{array} \right] \label{c2} \end{equation} where ${\large \varphi}_{1}$ is defined as (\ref{ak1}). 
\end{theorem} \begin{proof} For $A\in{\large \varphi}_{1}$, by Theorem 2.6 and (\ref{ac1}) we have \begin{align*} \left\Vert A-E\right\Vert _{F}^{2} & =\left\Vert \left[ \begin{array} [c]{cc} Y_{1}\Phi X_{1}^{+}+W_{1}R_{X_{1}} & 0\\ 0 & Y_{2}\Psi X_{2}^{+}+W_{2}R_{X_{2}} \end{array} \right] -\left[ \begin{array} [c]{c} U_{1}^{\ast}\\ U_{2}^{\ast} \end{array} \right] E\left[ \begin{array} [c]{cc} V_{1}, & V_{2} \end{array} \right] \right\Vert _{F}^{2}\\ & =\left\Vert \left[ \begin{array} [c]{cc} Y_{1}\Phi X_{1}^{+}+W_{1}R_{X_{1}} & 0\\ 0 & Y_{2}\Psi X_{2}^{+}+W_{2}R_{X_{2}} \end{array} \right] -\left[ \begin{array} [c]{cc} E_{11} & E_{12}\\ E_{21} & E_{22} \end{array} \right] \right\Vert _{F}^{2}\\ & =\left\Vert \left[ \begin{array} [c]{cc} Y_{1}\Phi X_{1}^{+}+W_{1}R_{X_{1}}-E_{11} & -E_{12}\\ -E_{21} & Y_{2}\Psi X_{2}^{+}+W_{2}R_{X_{2}}-E_{22} \end{array} \right] \right\Vert _{F}^{2}\\ & =\left\Vert W_{1}R_{X_{1}}-\left( E_{11}-Y_{1}\Phi X_{1}^{+}\right) \right\Vert _{F}^{2}+\left\Vert W_{2}R_{X_{2}}-\left( E_{22}-Y_{2}\Psi X_{2}^{+}\right) \right\Vert _{F}^{2}+\left\Vert E_{12}\right\Vert _{F} ^{2}+\left\Vert E_{21}\right\Vert _{F}^{2}. \end{align*} Therefore, solving $\underset{A\in{\large \varphi}_{1}}{\min}\left\Vert A-E\right\Vert _{F}^{2}$ is equivalent to solving the two problems \begin{align} & \min\left\Vert W_{1}R_{X_{1}}-\left( E_{11}-Y_{1}\Phi X_{1}^{+}\right) \right\Vert _{F}^{2},\text{ }\label{RT1}\\ & \min\left\Vert W_{2}R_{X_{2}}-\left( E_{22}-Y_{2}\Psi X_{2}^{+}\right) \right\Vert _{F}^{2}. \label{RT2} \end{align} 
Since $R_{X_{_{i}}}^{2}=R_{X_{_{i}}}=R_{X_{_{i}}}^{\ast},$ $i=1,2,$ it follows from Lemma 3.1 that (\ref{RT1}) is consistent if and only if \begin{equation} \left[ W_{1}-\left( E_{11}-Y_{1}\Phi X_{1}^{+}\right) \right] R_{X_{1}}=0 \label{AG1} \end{equation} and \begin{align*} \min\left\Vert W_{1}R_{X_{1}}-\left( E_{11}-Y_{1}\Phi X_{1}^{+}\right) \right\Vert _{F} & =\left\Vert \left( E_{11}-Y_{1}\Phi X_{1}^{+}\right) -\left( E_{11}-Y_{1}\Phi X_{1}^{+}\right) R_{X_{1}}\right\Vert _{F}\\ & =\left\Vert E_{11}X_{1}X_{1}^{+}-Y_{1}\Phi X_{1}^{+}\right\Vert _{F} \end{align*} and (\ref{RT2}) is consistent if and only if \begin{equation} \left[ W_{2}-\left( E_{22}-Y_{2}\Psi X_{2}^{+}\right) \right] R_{X_{2}}=0 \label{AG2} \end{equation} and \begin{align*} \min\left\Vert W_{2}R_{X_{2}}-\left( E_{22}-Y_{2}\Psi X_{2}^{+}\right) \right\Vert _{F} & =\left\Vert \left( E_{22}-Y_{2}\Psi X_{2}^{+}\right) -\left( E_{22}-Y_{2}\Psi X_{2}^{+}\right) R_{X_{2}}\right\Vert _{F}\\ & =\left\Vert E_{22}X_{2}X_{2}^{+}-Y_{2}\Psi X_{2}^{+}\right\Vert _{F}. \end{align*} It is easy to see that the solutions of (\ref{AG1}) and (\ref{AG2}) are \begin{align} W_{1} & =E_{11}-Y_{1}\Phi X_{1}^{+}+T_{1}X_{1}X_{1}^{+},\label{AG3}\\ W_{2} & =E_{22}-Y_{2}\Psi X_{2}^{+}+T_{2}X_{2}X_{2}^{+}, \label{AG4} \end{align} respectively, where $T_{1},$ $T_{2}$ are arbitrary matrices with appropriate sizes. Substituting (\ref{AG3}) and (\ref{AG4}) into (\ref{AK3}), we easily get (\ref{c2}). \end{proof} \begin{corollary} Given a matrix $E\in\mathbb{H}^{n\times n}$, and suppose \[ \left[ \begin{array} [c]{c} U_{1}^{\ast}\\ U_{2}^{\ast} \end{array} \right] E\left[ \begin{array} [c]{cc} U_{1}, & U_{2} \end{array} \right] =\left[ \begin{array} [c]{cc} E_{11} & E_{12}\\ E_{21} & E_{22} \end{array} \right] . 
\] where $U_{1},$ $U_{2}$ are defined as (\ref{A1}) and $E_{11}\in\mathbb{H} ^{r_{1}\times r_{1}},$ $E_{12}\in\mathbb{H}^{r_{1}\times\left( n-r_{1} \right) },$ $E_{21}\in\mathbb{H}^{\left( n-r_{1}\right) \times r_{1}},$ $E_{22}\in\mathbb{H}^{\left( n-r_{1}\right) \times\left( n-r_{1}\right) } $. Then the approximation problem $\underset{A\in{\large \varphi} _{1}^{^{\prime\prime}}}{\min}\left\Vert A-E\right\Vert _{F}$ has a unique solution \[ \tilde{A}_{r}=\left[ \begin{array} [c]{cc} U_{1}, & U_{2} \end{array} \right] \left[ \begin{array} [c]{cc} X_{1}\Phi X_{1}^{+}+E_{11}R_{X_{1}} & 0\\ 0 & X_{2}\Psi X_{2}^{+}+E_{22}R_{X_{2}} \end{array} \right] \left[ \begin{array} [c]{c} U_{1}^{\ast}\\ U_{2}^{\ast} \end{array} \right] \] where ${\large \varphi}_{1}^{^{\prime\prime}}$ is defined as (\ref{ak4}). \end{corollary} \begin{theorem} Given a matrix $E\in\mathbb{H}^{n\times n}$, and suppose \[ \left[ \begin{array} [c]{c} U_{1}^{\ast}\\ U_{2}^{\ast} \end{array} \right] E\left[ \begin{array} [c]{cc} V_{1}, & V_{2} \end{array} \right] =\left[ \begin{array} [c]{cc} E_{11} & E_{12}\\ E_{21} & E_{22} \end{array} \right] , \] where $U_{1},$ $U_{2},$ $V_{1},$ $V_{2}\ $are defined as (\ref{A1}) and (\ref{A2}) and $E_{11}\in\mathbb{H}^{r_{1}\times r_{2}},$ $E_{12}\in \mathbb{H}^{r_{1}\times\left( n-r_{2}\right) },$ $E_{21}\in\mathbb{H} ^{\left( n-r_{1}\right) \times r_{2}},$ $E_{22}\in\mathbb{H}^{\left( n-r_{1}\right) \times\left( n-r_{2}\right) }$. Then the approximation problem $A_{a}=\arg\underset{A\in{\large \varphi}_{2}}{\min}\left\Vert A-E\right\Vert _{F}$ has a unique solution \[ A_{a}=\left[ \begin{array} [c]{cc} U_{1}, & U_{2} \end{array} \right] \left[ \begin{array} [c]{cc} 0 & Y_{1}\Phi X_{1}^{+}+E_{12}R_{X_{1}}\\ Y_{2}\Psi X_{2}^{+}+E_{21}R_{X_{2}} & 0 \end{array} \right] \left[ \begin{array} [c]{c} V_{1}^{\ast}\\ V_{2}^{\ast} \end{array} \right] \] where ${\large \varphi}_{2}$ is defined as (\ref{AG}). \end{theorem} \begin{proof} Combining Theorem 2.7 with an argument similar to the proof of Theorem 3.2, we easily complete the proof of the theorem. \end{proof} \begin{corollary} Given a matrix $E\in\mathbb{H}^{n\times n}$, and suppose \[ \left[ \begin{array} [c]{c} U_{1}^{\ast}\\ U_{2}^{\ast} \end{array} \right] E\left[ \begin{array} [c]{cc} U_{1}, & U_{2} \end{array} \right] =\left[ \begin{array} [c]{cc} E_{11} & E_{12}\\ E_{21} & E_{22} \end{array} \right] , \] where $U_{1},$ $U_{2}\ $are defined as (\ref{A1}) and $E_{11}\in\mathbb{H}^{r_{1} \times r_{1}},$ $E_{12}\in\mathbb{H}^{r_{1} \times\left( n-r_{1}\right) },$ $E_{21}\in\mathbb{H}^{\left( n-r_{1} \right) \times r_{1}},$ $E_{22}\in\mathbb{H}^{\left( n-r_{1}\right) \times\left( n-r_{1}\right) }$. Then the approximation problem $\tilde{A}_{a}=\arg\underset{A\in{\large \varphi}_{2}^{^{\prime\prime}}}{\min}\left\Vert A-E\right\Vert _{F}$ has a unique solution \[ \tilde{A}_{a}=\left[ \begin{array} [c]{cc} U_{1}, & U_{2} \end{array} \right] \left[ \begin{array} [c]{cc} 0 & X_{1}\Phi X_{1}^{+}+E_{12}R_{X_{1}}\\ X_{2}\Psi X_{2}^{+}+E_{21}R_{X_{2}} & 0 \end{array} \right] \left[ \begin{array} [c]{c} U_{1}^{\ast}\\ U_{2}^{\ast} \end{array} \right] \] where ${\large \varphi}_{2}^{^{\prime\prime}}$ is defined as (\ref{AP}). \end{corollary} \section{\textbf{Conclusion}} In this paper, we used the matrix decomposition method to establish the structure properties of generalized reflexive and antireflexive matrices with respect to a pair of generalized reflection matrices over the quaternion algebra. We studied the inverse eigenproblems and the approximation problems for generalized reflexive and antireflexive matrices with respect to a pair of generalized reflection matrices, and gave explicit expressions and formulas for the solutions. \end{document}
\begin{document} \title{Bijective proofs for Schur function identities} \begin{abstract} In \cite{gurevichPyatovSaponov:2009}, Gurevich, Pyatov and Saponov stated an expansion for the product of two Schur functions and gave a proof based on the Pl\"ucker relations. Here we show that this identity is in fact a special case of a quite general Schur function identity, which was stated and proved in \cite[Lemma~16]{fulmek:2001}. In \cite{fulmek:2001}, it was used to prove bijectively Dodgson's condensation formula and the Pl\"ucker relations, but did not receive further attention: so we take this opportunity to make obvious the range of applicability of this identity by giving concrete examples, accompanied by many graphical illustrations. \end{abstract} \author{Markus Fulmek} \address{Fakult\"at f\"ur Mathematik, Nordbergstra\ss e 15, A-1090 Wien, Austria} \email{{\tt [email protected]}\newline\leavevmode\indent {\it WWW}: {\tt http://www.mat.univie.ac.at/\~{}mfulmek} } \date{\today} \thanks{ Research supported by the National Research Network ``Analytic Combinatorics and Probabilistic Number Theory'', funded by the Austrian Science Foundation. } \maketitle \section{Introduction} \label{sec:intro} In \cite{gurevichPyatovSaponov:2009}, Gurevich, Pyatov and Saponov stated an expansion for the product of two Schur functions and gave a proof based on the Pl\"ucker relations. Here we show that this identity is in fact a special case of a more general Schur function identity \cite[Lemma 16]{fulmek:2001}. Since this involves a process of ``translation'' between the languages of \cite{fulmek:2001} and \cite{gurevichPyatovSaponov:2009} which might not be self--evident, we explain the corresponding combinatorial constructions again here. These constructions are best conceived by pictures, so we give a lot of figures illustrating the concepts. 
This paper is organized as follows: In Section~\ref{sec:background} we recall the basic definitions (partitions, Young tableaux, skew Schur functions and nonintersecting lattice paths). In Section~\ref{sec:bicoloured}, we present the central bijective construction (recolouring of bicoloured paths in the overlays of families of nonintersecting lattice paths corresponding to some product of skew Schur functions) and show how this yields a quite general Schur function identity (Theorem~\ref{lem:fulmek2}, a reformulation of \cite[Lemma 16]{fulmek:2001}). In Section~\ref{sec:applications} we try to exhibit the broad range of applications of Theorem~\ref{lem:fulmek2}: In particular, we show how the identity \cite[(3.3)]{gurevichPyatovSaponov:2009} appears as (a translation of) a special case of Theorem~\ref{lem:fulmek2}. \section{Basic definitions} \label{sec:background} An infinite weakly decreasing sequence of nonnegative integers $\pas{\lambda_i}_{i=1}^\infty$, where only finitely many elements are positive, is called a \EM{partition}. The largest index $i$ for which $\lambda_i>0$ is called the \EM{length} of the partition $\lambda$ and is denoted by $\length{\lambda}$. For convenience, we shall in most cases omit the trailing zeroes, i.e., for $\length{\lambda}=r$ we simply write $\lambda=\pas{\lambda_1,\lambda_2,\dots,\lambda_r}$, where $\lambda_1\geq\lambda_2\geq\dots\geq\lambda_r> 0$. The {\em Ferrers diagram\/} $F_\lambda$ of $\lambda$ is an array of cells with $\length{\lambda}$ left-justified rows and $\lambda_i$ cells in row $i$. An {\em $N$--semistandard Young tableau\/} of shape $\lambda$ is a filling of the cells of $F_\lambda$ with integers from the set $\{1,2,\dots,N\}$, such that the numbers filled into the cells weakly increase in rows and strictly increase in columns. Let ${T}$ be a semistandard Young tableau and define $m({T}, k)$ to be the number of entries $k$ in ${T}$. 
Then the weight $\omega\of{T}$ of ${T}$ is defined as follows: \begin{equation*} \omega\of{T} = \prod_{k = 1}^{N} x_k^{m({T}, k)}. \end{equation*} \EM{Schur functions}, which are irreducible general linear characters, can be combinatorially defined by means of $N$--semistandard Young tableaux (see, for instance, \cite[Definition~4.4.1]{sagan:2000}): \begin{equation*} s_\lambda(x_1, x_2, x_3, \dots, x_N) = \sum_{{T}}\omega\of{T}, \end{equation*} where the sum is over all $N$--semistandard Young tableaux ${T}$ of shape $\lambda$. Consider some partition $\lambda=\pas{\lambda_1,\dots,\lambda_r}$ with $\lambda_r>0$, and let $\mu$ be a partition such that $\mu_i\leq\lambda_i$ for all $i\geq 1$. The {\em skew Ferrers diagram\/} $F_{\lambda/\mu}$ of $\lambda/\mu$ is an array of cells with $r$ left-justified rows and $\lambda_i-\mu_i$ cells in row $i$, where the first $\mu_i$ cells in row $i$ are missing. An {\em $N$--semistandard skew Young tableau\/} of shape $\lambda/\mu$ is a filling of the cells of $F_{\lambda/\mu}$ with integers from the set $\{1,2,\dots,N\}$, such that the numbers filled into the cells weakly increase in rows and strictly increase in columns (see the left picture of Figure~\ref{fig:skew-Young-tableau} for an illustration). \begin{figure} \caption{The left picture presents a semistandard Young tableau $T$ of skew shape $\lambda/\mu$, where $\lambda=\pas{7, 4, 4, 3, 1, 1, 1}$; the right picture presents the corresponding family of nonintersecting lattice paths.} \label{fig:skew-Young-tableau} \end{figure} Then we can define the \EM{skew Schur function}: \begin{equation} \label{eq:skewSchur} s_{\lambda/\mu}(x_1, x_2, x_3, \dots, x_N) = \sum_{{T}}\omega\of{T}, \end{equation} where the sum is over all $N$--semistandard skew Young tableaux ${T}$ of shape $\lambda/\mu$, where the weight $\omega\of{T}$ of ${T}$ is defined as before. Note that for $\mu=\pas{0,0,\dots}$ the skew Schur function $s_{\lambda/\mu}$ is identical to the ``ordinary'' Schur function $s_\lambda$.
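The combinatorial definition above lends itself to direct verification by brute force. The following Python sketch (our own illustration; the function names are not from the paper) enumerates all $N$--semistandard skew Young tableaux of a given shape and collects their weights into the skew Schur polynomial:

```python
from collections import Counter

def ssyt(lam, mu, N):
    """Yield all N-semistandard skew Young tableaux of shape lam/mu,
    represented as dicts {(row, col): entry} with 0-based cells."""
    rows = len(lam)
    mu = list(mu) + [0] * (rows - len(mu))
    cells = [(i, j) for i in range(rows) for j in range(mu[i], lam[i])]

    def fill(idx, t):
        if idx == len(cells):
            yield dict(t)
            return
        i, j = cells[idx]
        lo = 1
        if (i, j - 1) in t:                 # weak increase along rows
            lo = max(lo, t[(i, j - 1)])
        if (i - 1, j) in t:                 # strict increase down columns
            lo = max(lo, t[(i - 1, j)] + 1)
        for v in range(lo, N + 1):
            t[(i, j)] = v
            yield from fill(idx + 1, t)
            del t[(i, j)]

    yield from fill(0, {})

def skew_schur(lam, mu, N):
    """Skew Schur polynomial s_{lam/mu}(x_1,...,x_N) as a Counter
    mapping exponent tuples (m(T,1),...,m(T,N)) to coefficients."""
    poly = Counter()
    for t in ssyt(lam, mu, N):
        exp = [0] * N
        for v in t.values():
            exp[v - 1] += 1
        poly[tuple(exp)] += 1
    return poly
```

For instance, `skew_schur((2,), (), 2)` yields the three monomials of $s_{(2)}(x_1,x_2)=x_1^2+x_1x_2+x_2^2$, and there are exactly $8$ tableaux of shape $(2,1)$ with entries in $\{1,2,3\}$, in accordance with the dimension of the corresponding irreducible general linear character.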
The Gessel--Viennot interpretation \cite{gessel-viennot:1998} gives an equivalent description of a semistandard Young tableau $T$ of shape $\lambda/\mu$ as an $r$--tuple $P=\pas{p_1,\dots,p_{r}}$ of nonintersecting lattice paths, where $r\defeq\length{\lambda}$: Fix some (arbitrary) integer \EM{shift} $t$ and consider paths in the lattice $\mathbb Z^2$ (i.e., in the directed graph with vertices $\mathbb Z\times\mathbb Z$ and arcs from $(j, k)$ to $(j+1, k)$ and from $(j, k)$ to $(j, k+1)$ for all $j, k$). The $i$--th path $p_i$ starts at $(\mu_i-i+t, 1)$ and ends at $(\lambda_{i}-i+t, N)$, and the $j$--th \EM{horizontal} step in $p_i$ goes from $\pas{\mu_i-i+t+j-1,h}$ to $\pas{\mu_i-i+t+j,h}$, where $h$ is the $j$--th entry in row $i$ of $T$. Note that the conditions on the entries of $T$ imply that no two paths $p_i$ and $p_j$ thus defined have a lattice point in common: such an $r$--tuple of paths is called \EM{nonintersecting} (see the right picture of Figure~\ref{fig:skew-Young-tableau} for an illustration). In fact, this translation of tableaux to nonintersecting lattice paths is a \EM{bijection} between the set of all $N$--semistandard Young tableaux of shape $\lambda/\mu$ and the set of all $r$--tuples of nonintersecting lattice paths with starting and ending points as defined above. This bijection is \EM{weight preserving} if we define the weight of an $r$--tuple $P$ of nonintersecting lattice paths in the obvious way, i.e., as \begin{equation*} \omega\of P \defeq\prod_{k = 1}^{N} x_k^{n(P,k)}, \end{equation*} where $n(P,k)$ is the number of horizontal steps at height $k$ in $P$. So in the definition \eqref{eq:skewSchur} we could equivalently replace symbol ``$T$'' by symbol ``$P$'', and sum over $r$--tuples of lattice paths with prescribed starting and ending points instead of tableaux with prescribed shape.
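The Gessel--Viennot translation just described is easy to mechanize. The following Python sketch (ours, for illustration; the function names are not from the paper) builds the $r$--tuple of lattice paths for a given skew tableau and provides a check that the resulting paths are vertex-disjoint:

```python
def tableau_to_paths(lam, mu, rows, N, t=0):
    """Convert a semistandard skew tableau (rows[i] = entries of row i+1)
    of shape lam/mu into lattice paths, following the Gessel-Viennot rule:
    path i runs from (mu_i - i + t, 1) to (lam_i - i + t, N), with its
    horizontal steps at the heights given by the entries of row i."""
    paths = []
    for i, entries in enumerate(rows, start=1):
        x, h = mu[i - 1] - i + t, 1
        verts = [(x, h)]
        for e in entries:
            while h < e:                 # vertical steps up to height e
                h += 1
                verts.append((x, h))
            x += 1                       # horizontal step at height e
            verts.append((x, h))
        while h < N:                     # remaining vertical steps
            h += 1
            verts.append((x, h))
        paths.append(verts)
    return paths

def nonintersecting(paths):
    """True iff no two paths share a lattice point."""
    seen = set()
    for p in paths:
        if seen & set(p):
            return False
        seen |= set(p)
    return True
```

For the tableau of shape $\pas{3,2}/\pas{1,0}$ with rows $(1,1)$ and $(1,2)$ and $N=2$, the two paths run from $\pas{0,1}$ to $\pas{2,2}$ and from $\pas{-2,1}$ to $\pas{0,2}$, and they share no lattice point, as guaranteed by the semistandardness of the filling.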
Note that the horizontal coordinates of starting and ending points determine uniquely the \EM{shape} $\lambda/\mu$ of the tableau, and the vertical coordinate (we shall call the vertical coordinate of points the \EM{level} in the following) of the ending points determines uniquely the \EM{set of entries} $\setof{1,2,\dots,N}$ of the tableau. (The choice of the shift parameter $t$ influences neither the shape nor the set of entries.) \section{Bicoloured paths and products of skew Schur functions} \label{sec:bicoloured} In the following, all skew Schur functions are considered as functions of the variables $\pas{x_1,\dots,x_N}$. (Equivalently, all tableaux have entries from the set $\setof{1,\dots,N}$, and all families of nonintersecting lattice paths have ending points on level $N$). Viewing the product of two skew Schur functions $$ s_{\lambda/\mu}\cdot s_{\sigma/\tau} $$ as the generating function of ``overlays of two families of nonintersecting lattice paths'' (according to definition~\eqref{eq:skewSchur}) gives rise to a bijective construction, which (to the best of our knowledge) was first used by Goulden \cite{goulden:1988}. This construction was used in \cite{fulmek:2001} to describe and prove a class of Schur function identities, special cases of which imply Dodgson's condensation formula and the Pl\"ucker relations.
\begin{figure} \caption{Illustration for the example given in Section~\ref{sec:bicoloured}.} \label{fig:twoSkewShapes} \end{figure} We shall present this construction by way of an example: Consider skew shapes $\lambda/\mu$, where \begin{align} \lambda &= \pas{14, 13, 13, 11, 11, 9, 9, 8, 8, 7, 5, 3},\notag\\ \mu &= \pas{9, 9, 9, 6, 6, 5, 4, 4, 4, 3, 1},\label{eq:lambdamu} \end{align} and $\sigma/\tau$, where \begin{align} \sigma &= \pas{14, 14, 12, 12, 11, 11, 11, 9, 8, 7, 7, 5},\notag\\ \tau &= \pas{10, 10, 8, 8, 8, 7, 6, 6, 6, 5, 2}.\label{eq:sigmatau} \end{align} For the skew shape $\lambda/\mu$, choose fixed shift $2$, and for the skew shape $\sigma/\tau$ choose fixed shift $0$, and consider the starting and ending points of the corresponding families of nonintersecting lattice paths. For instance, the ending point of the first path corresponding to $\lambda/\mu$ is $\pas{\lambda_1-1+2,N}=\pas{15,N}$, and the starting point of the last (twelfth) path corresponding to $\lambda/\mu$ is $\pas{\mu_{12}-12+2,1}=\pas{-10,1}$, and the ending point of the first path corresponding to $\sigma/\tau$ is $\pas{\sigma_1-1,N}=\pas{13,N}$. Now colour the starting/ending points corresponding to $\lambda/\mu$ \EM{white}, and the starting/ending points corresponding to $\sigma/\tau$ \EM{black}: See the upper picture of \figref{fig:twoSkewShapes}, where the starting/ending points of $\lambda/\mu$ are drawn as white circles, and the starting/ending points of $\sigma/\tau$ are drawn as black circles. All starting/ending points which are coloured both black \EM{and} white are never affected by the following constructions: In the upper picture of \figref{fig:twoSkewShapes}, these points are enclosed by grey rectangles. We call the remaining starting/ending points (which are coloured \EM{either} black \EM{or} white) the \EM{coloured} points. Note that the number of coloured points is necessarily \EM{even}.
For the coloured points, assume the circular orientation ``from right to left along level $N$, and then from left to right along level $1$''. In the upper picture of \figref{fig:twoSkewShapes}, this circular orientation is indicated by a grey circular arrow. Furthermore, assign to paths corresponding to $\lambda/\mu$ the orientation \EM{downwards}, and to paths corresponding to $\sigma/\tau$ the orientation \EM{upwards}. In the upper picture of \figref{fig:twoSkewShapes}, this orientation of paths is indicated by upwards or downwards pointing triangles. If we focus on the coloured points, we may encode the situation in a simpler picture, where the coloured starting/ending points are located on the lower/upper half of a \EM{circle}, and where the orientation of the respective path is translated to a \EM{radial orientation} (either towards the center of the circle or away from it). The lower picture of \figref{fig:twoSkewShapes} illustrates this: A grey horizontal line indicates the separation of the lower and upper half of the circle, the point labeled $1$ corresponds to the lattice point $\pas{15,N}$, the point labeled $2$ corresponds to the lattice point $\pas{6,N}$, and so on. Now consider some pair $\pas{P_1,P_2}$ of families of nonintersecting lattice paths, where $P_1$ corresponds to some tableau of shape $\lambda/\mu$, and $P_2$ corresponds to some tableau of shape $\sigma/\tau$. We call the paths of $P_1$ the \EM{white} paths and the paths of $P_2$ the \EM{black} paths, and we colour the arcs of the lattice $\mathbb Z^2$ accordingly (i.e., arcs used by some white path are coloured white, and arcs used by some black path are coloured black). As with the starting/ending points, arcs which are coloured black \EM{and} white are not affected by the following construction, and we call all arcs which are \EM{either} black \EM{or} white the \EM{coloured} arcs. 
We construct \EM{bicoloured paths} \bit \item connecting (only) coloured starting/ending points \item and using (only) coloured arcs \eit by the following algorithm: \begin{quote} We start at some coloured point $q$ and follow the path determined by the \EM{unique} coloured arc incident with it in the respective orientation (i.e., either up/right or down/left). Whenever we meet \EM{another} path on our way (necessarily, this path is of the other colour), we ``change colour and orientation'', i.e., we follow this new path \EM{and} change the orientation (i.e., if we were moving up/right along the old path, we move down/left along the new path, and vice versa). We stop if there is no possibility to go further. \end{quote} \begin{figure} \caption{The left picture presents two tableaux of the identical shape $\pas{7, 4, 4, 3, 1, 1, 1}$.} \label{fig:bicolouredPaths} \end{figure} This construction is described in detail in \cite{fulmek:2001}. Here, we simply refer to the left picture of \figref{fig:bicolouredPaths}, where the white paths are indicated by dashed lines, and all bicoloured paths are indicated by thick grey lines. The following observations are immediate: \begin{obs}[{\bf Bicoloured paths always exist}] \label{obs:arbitrary-point} For \EM{every} coloured point $q$, there exists a bicoloured path starting at $q$. \end{obs} \begin{obs}[{\bf Bicoloured paths connect points of different radial orientation}] \label{obs:different-orientation} The bicoloured paths thus constructed never connect points of the same radial orientation (i.e., two points oriented both towards or both away from the center). \end{obs} In the lower picture of \figref{fig:twoSkewShapes}, a possible pattern of ``connections by bicoloured paths'' is indicated by dashed lines.
The following observation is easy to see: \begin{obs}[{\bf Bicoloured paths connect points of different parity}] \label{obs:different-parity} Two different bicoloured paths may have lattice points in common (they may intersect), but they can never \EM{cross}. If we assume some consecutive numbering of the coloured points in their circular orientation (see the lower picture of \figref{fig:twoSkewShapes}), then this non--crossing condition implies that there can never be a bicoloured path connecting two points with numbers of the same parity. \end{obs} The non--crossing condition means that if all such connections were drawn as straight lines connecting points on the circle, then no two such lines can intersect. (In \figref{fig:twoSkewShapes}, not all connections are drawn as straight lines for graphical reasons.) Consider some bicoloured path $b$ in the overlay of nonintersecting lattice paths $\pas{P_1,P_2}$: Changing colours (black to white and vice versa) \bit \item of both ending points of $b$ \item and of all arcs of $b$ \eit gives a new overlay of nonintersecting lattice paths $\pas{P_1^\prime,P_2^\prime}$ (with different starting/ending points). It is easy to see that we have for this \EM{recolouring} of a bicoloured path: \begin{obs}[{\bf Recolouring bicoloured paths is a weight preserving involution}] \label{obs:weight-preserving-involution} The recolouring of a bicoloured path $b$ in an overlay of nonintersecting lattice paths $\pas{P_1,P_2}$ is an \EM{involutive operation} (i.e., if we obtain the overlay $\pas{P_1^\prime,P_2^\prime}$ by recolouring $b$ in $\pas{P_1,P_2}$, then recolouring $b$ \EM{again} in $\pas{P_1^\prime,P_2^\prime}$ yields the original $\pas{P_1,P_2}$), which \EM{preserves the respective weights}, i.e., $$ \omega\of{P_1}\cdot\omega\of{P_2}=\omega\of{P_1^\prime}\cdot\omega\of{P_2^\prime}.
$$ \end{obs} \begin{figure} \caption{The two (necessarily different, by Observation~\ref{obs:different-orientation}) bicoloured paths considered in the text, shown after recolouring.} \label{fig:twoSkewShapesRecoloured} \end{figure} Return to the example illustrated in \figref{fig:twoSkewShapes} and consider the white ending point $\pas{15,N}$ and the black starting point $\pas{5,1}$ there. Note that both of these points are marked with a triangle pointing \EM{downward}. The bicoloured paths ending at these points must have their other ending points marked with a triangle pointing \EM{upward}. One possible choice of these other ending points is depicted in \figref{fig:twoSkewShapesRecoloured}: The corresponding ending points are marked by white rectangles, the bicoloured paths are indicated by arrows. The picture shows the situation \EM{after} recolouring these paths. The skew shape corresponding to the white points in \figref{fig:twoSkewShapesRecoloured} is $\lambda^\prime/\mu^\prime$, where \begin{align} \lambda^\prime &=\pas{13, 13, 11, 11, 9, 9, 8, 8, 7, 5, 3},\notag\\ \mu^\prime&= \pas{9, 9, 7, 7, 7, 6, 5, 5, 5, 4, 0}.\label{eq:lambdaprime-muprime} \end{align} The skew shape corresponding to the black points in \figref{fig:twoSkewShapesRecoloured} is $\sigma^\prime/\tau^\prime$, where \begin{align} \sigma^\prime &= \pas{15, 14, 14, 12, 12, 11, 11, 11, 9, 8, 7, 7, 5},\notag\\ \tau^\prime &= \pas{10, 10, 10, 7, 7, 6, 5, 5, 5, 4, 2, 2, 0}.\label{eq:sigmaprime-tauprime} \end{align} For both skew shapes, the starting and ending points are shifted by $1$, so, for instance, the starting point of the first path corresponding to $\sigma^\prime/\tau^\prime$ is $\pas{\tau^\prime_1-1+1,1}=\pas{10,1}$ and the ending point of the last (eleventh) path corresponding to $\lambda^\prime/\mu^\prime$ is $\pas{\lambda^\prime_{11}-11+1,N}=\pas{-7,N}$.
The pictures in \figref{fig:twoSkewShapesRecoloured} contain redundant information: All uncoloured points are ``doubled'', and the colour (black or white) of the coloured points is determined uniquely by their circular orientation. So we may encode the information in a more terse way, namely as \bit \item the \EM{configuration of starting/ending points} (\EM{point configuration}, for short; see the upper picture in \figref{fig:GenericConfiguration}), \item and the \EM{circular orientation of the coloured starting/ending points} (\EM{circular orientation}, for short; see the lower picture in \figref{fig:GenericConfiguration}). \eit \begin{figure} \caption{The information of \figref{fig:twoSkewShapesRecoloured}, encoded as point configuration (upper picture) and circular orientation (lower picture).} \label{fig:GenericConfiguration} \end{figure} We call a circular orientation \EM{admissible} if it has the same number of inwardly/outwardly oriented points. \EM{Every} admissible circular orientation determines (together with the corresponding point configuration) a certain configuration of starting/ending points. However, there might be no overlay of families of nonintersecting lattice paths that connect these points (if, for instance, the $i$--th white ending point lies to the left of the $i$--th white starting point; this would correspond to an $i$--th row of length $<0$ in the corresponding shape): In this case, the corresponding skew Schur function is zero.
But if there \EM{is} an overlay of families of nonintersecting lattice paths that connect these points, then the family of all bicoloured paths determines a \EM{perfect matching} $M$ in the circular orientation (according to Observation~\ref{obs:arbitrary-point}), which is \EM{non--crossing} (according to Observation~\ref{obs:different-parity}), and where all edges of $M$ connect points of different radial orientation (according to Observation~\ref{obs:different-orientation}): We call such matchings \EM{admissible}. Note that recolouring some bicoloured path amounts to reversing the orientation of the corresponding edge in the matching $M$. We may summarize all these considerations as follows (this is a reformulation of \cite[Lemma~15]{fulmek:2001}): \begin{lem} \label{lem:fulmek1} Let $\lambda/\mu$ and $\sigma/\tau$ be two skew shapes, and let $t$ be an arbitrary integer. Consider the point configuration corresponding to the starting/ending points with shift $0$ for $\lambda/\mu$ and shift $t$ for $\sigma/\tau$. In the corresponding circular orientation, choose a nonempty subset $S$ (arbitrary, but fixed) of the points oriented towards the center.
Consider the set $V$ of \EM{all} admissible circular orientations, and consider the graph $G$ with vertex set $V$, where two vertices $v_1$, $v_2$ are connected by an edge if and only if there are overlays of lattice paths $\pas{P_1,P_2}$ and $\pas{P^\prime_1,P^\prime_2}$ for the starting/ending points corresponding to $\pas{\text{point configuration},v_1}$ and $\pas{\text{point configuration},v_2}$, respectively, such that $\pas{P^\prime_1,P^\prime_2}$ is obtained from $\pas{P_1,P_2}$ by recolouring \EM{all} bicoloured paths that are incident with some point of $S$. Obviously, this graph $G$ is \EM{bipartite} (i.e., $V=E\cup O$ with $E\cap O=\emptyset$, such that there is no edge connecting two vertices of $E$ or two vertices of $O$). Let $C$ be an arbitrary connected component of $G$ with at least $2$ vertices, and denote by $C_O$ the set of pairs of skew shapes corresponding to $\pas{\text{point configuration},x}$ for $x\in C\cap O$, and by $C_E$ the set of pairs of skew shapes corresponding to $\pas{\text{point configuration},x}$ for $x\in C\cap E$. Then we have the following identity for skew Schur functions: \begin{equation} \label{eq:lemma-general} \sum_{\pas{\lambda/\mu,\;\sigma/\tau}\in C_E} s_{\lambda/\mu}\cdot s_{\sigma/\tau} = \sum_{\pas{\lambda^\prime/\mu^\prime,\;\sigma^\prime/\tau^\prime}\in C_O} s_{\lambda^\prime/\mu^\prime}\cdot s_{\sigma^\prime/\tau^\prime}. \end{equation} \end{lem} This Lemma is rather unwieldy.
But there is a particularly simple situation which appears to be useful, so we state it as a Theorem (this is a reformulation of \cite[Lemma~16]{fulmek:2001}): \begin{thm} \label{lem:fulmek2} Under the assumptions of Lemma~\ref{lem:fulmek1}, let $c$ be the circular orientation for the pair of shapes $\pas{\lambda/\mu,\;\sigma/\tau}$, and assume that the orientation of the points in $c$ is \EM{alternating}. As in Lemma~\ref{lem:fulmek1}, let $S$ be some fixed subset of the points oriented towards the center in $c$. Consider the set of all circular orientations which can be obtained by reorienting all edges incident with points in $S$ in some admissible matching of $c$, and denote the set of pairs of skew shapes corresponding to such circular orientations by $Q$. Then we have: \begin{equation} \label{eq:lemma-special} s_{\lambda/\mu}\cdot s_{\sigma/\tau} = \sum_{\pas{\lambda^\prime/\mu^\prime,\;\sigma^\prime/\tau^\prime}\in Q} s_{\lambda^\prime/\mu^\prime}\cdot s_{\sigma^\prime/\tau^\prime}. \end{equation} \end{thm} \begin{proof} Observe that in the right hand side of \eqref{eq:lemma-special} the Schur function product $s_{\lambda^\prime/\mu^\prime}\cdot s_{\sigma^\prime/\tau^\prime}$ is either zero (if there is, in fact, no overlay of families of nonintersecting lattice paths corresponding to the respective circular orientation), or there is some corresponding overlay of families of nonintersecting lattice paths $\pas{P_1^\prime,P_2^\prime}$.
In the latter case, there \EM{are} bicoloured paths starting in the points of $S$ (by Observation~\ref{obs:arbitrary-point}), and by the combination of Observations \ref{obs:different-orientation} and \ref{obs:different-parity}, recolouring all such paths necessarily yields an overlay $\pas{P_1,P_2}$ of nonintersecting lattice paths which corresponds to the pair $\pas{\lambda/\mu,\;\sigma/\tau}$. \end{proof} \section{Applications} \label{sec:applications} Clearly, the interpretation of Schur functions as generating functions of $r$--tuples of nonintersecting lattice paths is best suited for the bijective construction of recolouring bicoloured paths. But of course, the recolouring operation can be translated into operations on the shapes of the corresponding tableaux (i.e., on the corresponding partitions, or equivalently, Ferrers diagrams). We shall show how this translation gives the identity \cite[(3.3)]{gurevichPyatovSaponov:2009} of Gurevich, Pyatov and Saponov, but before doing this we consider a simple special case, in order to illustrate the meaning of Theorem~\ref{lem:fulmek2}: \begin{ex} \label{ex:fulmek} Assume that for the shapes $\lambda/\mu$ and $\sigma/\tau$ we have \bit \item $\mu=\tau$, \item $\lambda_1>\sigma_1$, \item $\length{\lambda}=\length{\sigma}$. \eit Choose shift $0$ for the families of starting/ending points corresponding to these shapes, then there is no coloured starting point (since $\mu=\tau$): accordingly, only the ending points are shown in \figref{fig:twoSimpleShapes}. Furthermore, assume that the corresponding black and white ending points \EM{alternate} along level $N$: Then the preconditions of Theorem~\ref{lem:fulmek2} are fulfilled. Since $\lambda_1>\sigma_1$, the point $q=\pas{\lambda_1-1,N}$ is white. Consider the set $S=\setof{q}$: The bicoloured path $b$ starting in $q$ necessarily must end in a black point (by Observation~\ref{obs:different-orientation}).
Assume that there are $k$ such black points $q_1,\dots, q_k$, and let $\indexthis{\lambda}{i}$ and $\indexthis{\sigma}{i}$ be the partitions corresponding to the configuration of white and black points obtained by changing colours of $q$ and $q_i$, $i=1,\dots,k$ (i.e., colour $q$ black and $q_i$ white, and leave all other colours unchanged). Then by Theorem~\ref{lem:fulmek2} we have: $$ s_{\lambda/\mu}\cdot s_{\sigma/\mu} = \sum_{i=1}^k s_{\indexthis{\lambda}{i}/\mu}\cdot s_{\indexthis{\sigma}{i}/\mu}. $$ \end{ex} \begin{figure} \caption{Application of Theorem~\ref{lem:fulmek2}.} \label{fig:twoSimpleShapes} \end{figure} \figref{fig:twoSimpleShapes} illustrates this example for \begin{align*} \lambda&=\pas{16, 15, 15, 13, 13, 11, 11, 10, 10, 9, 7, 5},\\ \sigma&= \pas{14, 14, 12, 12, 11, 11, 11, 9, 8, 7, 7, 5}. \end{align*} From the pictures in \figref{fig:twoSimpleShapes} we see that $k=2$ in this case, with \begin{align*} \indexthis{\lambda}{1}&=\pas{14, 14, 12, 12, 11, 11, 11, 10, 10, 9, 7, 5},\\ \indexthis{\sigma}{1}&= \pas{16, 15, 15, 13, 13, 11, 11, 9, 8, 7, 7, 5}, \end{align*} (shown in the middle row of \figref{fig:twoSimpleShapes}) and \begin{align*} \indexthis{\lambda}{2}&=\pas{14, 14, 12, 12, 10, 10, 9, 9, 8, 7, 7, 5},\\ \indexthis{\sigma}{2}&= \pas{16, 15, 15, 13, 13, 12, 12, 12, 10, 9, 7, 5} \end{align*} (shown in the lower row of \figref{fig:twoSimpleShapes}). \figref{fig:FerrersDiagrams} presents the Ferrers diagrams for this example, where we chose $\mu=\tau=\pas{5,4,2,2,2,1,1,1}$. \begin{figure} \caption{The skew Ferrers diagrams corresponding to the example in \figref{fig:twoSimpleShapes}.} \label{fig:FerrersDiagrams} \end{figure} \subsection{The identity of Gurevich, Pyatov and Saponov} \label{sec:saponov} Now consider the special case $\mu=\tau$, $\lambda_1>\sigma_1$, $\length{\lambda}=\length{\sigma}+1$, with shift $0$ for all starting and ending points, and where black and white points alternate in their circular orientation.
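Identities of this type can be checked numerically on small instances. As an illustration (a small instance we derived by hand from the construction, not one of the paper's running examples), take $\mu=\tau=\pas{0,0,\dots}$, $\lambda=\pas{4,2}$ and $\sigma=\pas{3,1}$: the coloured ending points alternate, and recolouring yields $s_{\pas{4,2}}\cdot s_{\pas{3,1}} = s_{\pas{3,2}}\cdot s_{\pas{4,1}} + s_{\pas{1,1}}\cdot s_{\pas{4,4}}$. The following Python sketch (ours) checks this in two variables via the bialternant formula $s_{\pas{a,b}}(x,y)=\pas{x^{a+1}y^b-y^{a+1}x^b}/\pas{x-y}$, evaluated at several integer points:

```python
def schur2(lam, x, y):
    """s_{(a,b)}(x, y) via the two-variable bialternant formula:
    (x^(a+1) y^b - y^(a+1) x^b) / (x - y)."""
    a, b = lam
    num = x**(a + 1) * y**b - y**(a + 1) * x**b
    assert num % (x - y) == 0          # division is exact at integer points
    return num // (x - y)

# Check s_{(4,2)} s_{(3,1)} = s_{(3,2)} s_{(4,1)} + s_{(1,1)} s_{(4,4)}
# at several integer points with x > y > 0.
for x, y in [(2, 1), (3, 1), (3, 2), (5, 2), (7, 3)]:
    lhs = schur2((4, 2), x, y) * schur2((3, 1), x, y)
    rhs = (schur2((3, 2), x, y) * schur2((4, 1), x, y)
           + schur2((1, 1), x, y) * schur2((4, 4), x, y))
    assert lhs == rhs
```

Of course such spot checks do not replace the bijective proof; they merely make it easy to experiment with further instances of Theorem~\ref{lem:fulmek2}.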
As in Example~\ref{ex:fulmek}, let $q=\pas{\lambda_1-1,N}$ and choose $S=\setof{q}$. The possible other ending points of the bicoloured path $b$ starting in $q$ are \bit \item the black points at level $N$ \item and the leftmost white starting point $q_0$. \eit As our running example we choose the skew shapes $\lambda/\mu$ and $\sigma/\tau$ with \begin{align*} \lambda &= \pas{10,7,7,6,6,4,4,3,2,2}, \\ \sigma &= \pas{8,7,7,5,4,4,2,1,1}, \\ \mu=\tau &= \pas{4,3,3,1}. \end{align*} \begin{figure} \caption{Application of Theorem~\ref{lem:fulmek2}.} \label{fig:SaponovLatticePaths} \end{figure} See the upper picture in \figref{fig:SaponovLatticePaths} for an illustration. Assume that there are $k>1$ black points $q_1,\dots,q_k$ and denote the shapes corresponding to recolouring $q$ and $q_i$, $i=0,1,\dots,k$, by $\indexthis{\lambda}{i}/\mu$ and $\indexthis{\sigma}{i}/\mu$, respectively. Then by Theorem~\ref{lem:fulmek2} we have: \begin{equation} \label{eq:saponov-fulmek} s_{\lambda/\mu}\cdot s_{\sigma/\mu} = s_{\indexthis{\lambda}{0}/\mu}\cdot s_{\indexthis{\sigma}{0}/\mu} + \sum_{i=1}^k s_{\indexthis{\lambda}{i}/\mu}\cdot s_{\indexthis{\sigma}{i}/\mu}. \end{equation} See the three lower pictures in \figref{fig:SaponovLatticePaths} for an illustration (in this example, $k=2$). If we choose $\mu=0$, \eqref{eq:saponov-fulmek} amounts precisely to the identity \cite[(3.3)]{gurevichPyatovSaponov:2009}: We simply have to \EM{translate} our formulation to the language of adding and removing \EM{partial border strips} to Ferrers diagrams, which was used by Gurevich, Pyatov and Saponov \cite{gurevichPyatovSaponov:2009}. To get a first idea, have a look at \figref{fig:Saponov}, which presents the Ferrers diagrams corresponding to the concrete example of \figref{fig:SaponovLatticePaths}.
\begin{figure} \caption{The skew Ferrers diagrams corresponding to the example in \figref{fig:SaponovLatticePaths}.} \label{fig:Saponov} \end{figure} Observe that recolouring starting/ending points may be viewed as a game of inserting/removing points in the configuration of starting/ending points corresponding to some partition. Instead of giving a lengthy verbal description, we present in \figref{fig:Recolouring} the effect of inserting (read the upper picture upwards, from $\sigma$ to $\lambda$) or removing (read the upper picture downwards, from $\lambda$ to $\sigma$) points in a graphical way. Reading the upper part of \figref{fig:Recolouring} \EM{downwards} (i.e., removing the point in position $\lambda_i-i$), the Ferrers diagram of $\sigma$ is obtained from the Ferrers diagram of $\lambda$ by a \EM{down--peeling} of a \EM{partial border strip} starting at row $i$, or, in the language of \cite{gurevichPyatovSaponov:2009}: $$\sigma=\sappeeldown{\lambda}{i}.$$ The special case of this operation for $i=1$ amounts to the removal of the \EM{complete} border strip --- in the language of \cite{gurevichPyatovSaponov:2009}: $$ \sappeel{\lambda}\defeq\sappeeldown{\lambda}{1}. $$ \begin{figure} \caption{The operation of inserting/removing points in the configuration of starting/ending points can be translated to the language introduced in \cite{gurevichPyatovSaponov:2009}.} \label{fig:Recolouring} \end{figure} Now observe that a pair of partitions $\pas{\lambda,\sigma}$ fulfilling the preconditions for \eqref{eq:saponov-fulmek} can be obtained by \EM{constructing} an appropriate $\sigma$ for a given $\lambda$. This is achieved by applying to the configuration of ending points corresponding to $\lambda$ a sequence of $k$ removals/insertions of points, followed by removing the right--most ending point and inserting a new left--most starting point, see the upper picture in \figref{fig:SaponovLatticePaths}.
Again, we shall illustrate the simple procedure by pictures instead of giving a lengthy verbal description. The first step of this construction is illustrated in \figref{fig:SaponovTranslation}: The configuration of points corresponding to the partition $\lambda=\pas{10,7,7,6,6,4,4,3,2,2}$ considered in \figref{fig:SaponovLatticePaths} is presented in the upper part of the picture. This configuration is changed by adding a new point at position $7$ first (the result of this change is presented in the middle part of the picture) and then by removing the point at position $2$ (the result of this change is presented in the lower part of the picture). It is obvious that this amounts to adding a \EM{partial border strip} to the Ferrers diagram of $\lambda$, which consists of $t_1=2$ boxes in row $r_1=2$ and spans the $m_1=3$ rows $2,3$ and $4$. Stated in the language introduced in \cite{gurevichPyatovSaponov:2009}, the picture in the lower row of \figref{fig:SaponovTranslation} shows $$\sapadd{\lambda}{t_1}{(r_1,m_1)}.$$ \begin{figure} \caption{Adding some point at position $i$ and removing a point at position $j<i$ amounts to adding a \EM{partial border strip}.} \label{fig:SaponovTranslation} \end{figure} The next step of this construction is to add a point at position $-1$ and to remove the point at position $-3$. This amounts to adding a partial border strip consisting of $t_2=1$ box in row $r_2=6$ and spanning the $m_2=2$ rows $6$ and $7$. We thus obtain $\nu=\pas{10,9,8,8,6,5,5,3,2,2}$, or, in the language of \cite{gurevichPyatovSaponov:2009}: $\nu=\sapadd{\lambda}{2,1}{(2,3),(6,2)}$, see the upper picture in \figref{fig:SaponovTranslation2}. Finally, we remove the rightmost ending point at position $9$: This amounts to removing a \EM{complete} border strip from the Ferrers diagram of $\nu$, giving the partition $\sigma=\sappeel{\nu}=\pas{8,7,7,5,4,4,2,1,1}$ of our running example, see the lower picture in \figref{fig:SaponovTranslation2}.
\begin{figure} \caption{Removing the point corresponding to $\nu_1$ (marked by a white rectangle in the lower picture) amounts to removing the \EM{complete border strip}.} \label{fig:SaponovTranslation2} \end{figure} So we see that the Schur function identity stated in \figref{fig:SaponovLatticePaths} can be partially translated as: $$ s_{\lambda/\mu}\cdot s_{\sappeel{\nu}/\mu}= s_{\sappeel{\lambda}/\mu}\cdot s_{\nu/\mu}+\cdots $$ \begin{figure} \caption{Removing the point corresponding to $\lambda_1$ and inserting a new point at positions $7$ and $-1$, respectively (these points are marked by white rectangles in the pictures) yields the partitions $\indexthis{\lambda}{1}$ and $\indexthis{\lambda}{2}$.} \label{fig:SaponovTranslation3} \end{figure} To complete this translation, have a look at \figref{fig:SaponovTranslation3} and note that the partitions $\indexthis{\lambda}{1}$ and $\indexthis{\lambda}{2}$ (drawn with black lines) are obtained from the original partition $\lambda$ (drawn with grey lines) by removing a partial border strip which starts in the box marked with a small ``x'' and extends up to the first row. Note that for such \EM{up--peeling} of a border strip starting at row $i$, there are $\lambda_i-\lambda_{i+1}$ possible positions of the starting box: Number them from left to right, then the up--peeling is uniquely determined by the row number $i$ and the box number $t$. In the language of \cite{gurevichPyatovSaponov:2009}, this is denoted by $$ \sappeelup{\lambda}{i,t}, $$ i.e., we have $ \indexthis{\lambda}{1}=\sappeelup{\lambda}{1,2} $ and $ \indexthis{\lambda}{2}=\sappeelup{\lambda}{5,1}. $ Putting all these observations together, we see that the Schur function identity stated in \figref{fig:SaponovLatticePaths} can be translated as: $$ s_{\lambda/\mu}\cdot s_{\sappeel{\nu}/\mu}= s_{\sappeel{\lambda}/\mu}\cdot s_{\nu/\mu}+ s_{\sappeelup{\lambda}{1,2}/\mu}\cdot s_{\sappeeldown{\nu}{2}/\mu}+ s_{\sappeelup{\lambda}{5,1}/\mu}\cdot s_{\sappeeldown{\nu}{6}/\mu}.
$$ So it is clear that the special case considered in this section can be stated as follows: \begin{cor}[Gurevich, Pyatov and Saponov] Let $\lambda=\pas{\lambda_1,\dots,\lambda_r}$ be a partition. Assume that there are $k$ indices $2\leq r_1<\dots<r_k\leq r$ such that $\lambda_{r_i}<\lambda_{r_i-1}$, $i=1,\dots,k$. Choose integers $t_i$ and $m_i$ for $i=1,\dots,k$ subject to the restrictions \begin{align*} 1 &\leq t_i\leq\lambda_{r_i-1}-\lambda_{r_i}, \\ 1 &\leq m_i\leq r_{i+1}-r_i. \end{align*} Then we may construct $\nu=\sapadd{\lambda}{t_1,\dots,t_k}{(r_1,m_1)\dots(r_k,m_k)}$, and we have $$ s_{\lambda}\cdot s_{\sappeel{\nu}}= s_{\sappeel{\lambda}}\cdot s_{\nu}+ \sum_{i=1}^k s_{\sappeelup{\lambda}{r_i-1,t_i}}\cdot s_{\sappeeldown{\nu}{(r_i)}}. $$ \end{cor} \end{document}
\begin{document} \captionsetup[figure]{labelfont={bf},labelformat={default},labelsep=period,name={Fig.}} \begin{frontmatter} \title{An explicit Hopf bifurcation criterion of fractional-order systems with order $1<\alpha <2$} \author{Jing Yang} \author{Xiaoxue Li$^*$, Xiaorong Hou$^*$} \address{University of Electronic Science and Technology of China, School of Automation Engineering, Chengdu 611731, China} \cortext[mycorrespondingauthor]{Corresponding author} \ead{[email protected],[email protected]} \begin{abstract} A Hopf bifurcation criterion for fractional-order systems with order $1<\alpha <2$ is established in this paper, in which all conditions are expressed explicitly in the parameters, without solving for the roots of the characteristic polynomial appearing in the Hopf bifurcation conditions. This avoids the problem that existing methods may fail due to the computational complexity in the multi-parameter situation. The multi-parameter bifurcation hyper-surface can be obtained directly. \end{abstract} \begin{keyword} Hopf bifurcation\sep Fractional-order system\sep Multi-parameter\sep Generalized Routh-Hurwitz criterion \end{keyword} \end{frontmatter} \linenumbers \section{Introduction} Hopf bifurcation of fractional-order dynamical systems has been discussed numerically \cite{el2009stability,li2014hopf,vcermak2019stability,deshpande2017hopf}. These existing methods obtain Hopf bifurcation results by analyzing the eigenvalues of the Jacobian matrix of low-dimensional systems. To keep such methods feasible in the high-dimensional situation, some researchers reduce multiple parameters to a single parameter.
For example, in the problem of Hopf bifurcation for fractional-order neural network systems, several neuron parameters are set to the same bifurcation parameter, or the sum of some parameters is taken as a single bifurcation parameter \cite{huang2017dynamical,2018Effects}, even though different neuron parameters in fact take different values \cite{wei2004bifurcation,yan2006hopf}. For high-dimensional fractional-order systems with multiple parameters, an online method \cite{YANG2022111714} is suitable for determining the Hopf bifurcation hyper-surface through a visual representation of the parameter space, provided the system has few parameters. However, the online method may fail because of the computational complexity in the multi-parameter situation. \par In this paper, we establish a criterion for analyzing the Hopf bifurcation of fractional-order systems in which all expressions are explicit in the parameters. This criterion gives results directly, which avoids the problem that the existing methods may fail due to the computational complexity in the multi-parameter situation. The Caputo fractional derivative is employed in this paper \cite{1999Fractional}. \par Consider the fractional-order nonlinear system: \begin{equation}\label{eq1} \frac{d^{\alpha }x}{dt^{\alpha }}=g(x,\mu ), \end{equation} where $\alpha \in \left ( 1,2 \right )$ is the fractional order, $x=(x_1,x_2,\cdots,x_n)^T$ is the state vector, $\mu=(\mu_1,\mu_2,\cdots,\mu_n)$, $\mu_i$ are bifurcation parameters, and $g(x)=\left (g_1(x),\cdots,g_n(x)\right )^T$, where the $g_i(\cdot)$ are nonlinear functions. \par Initial conditions for system (\ref{eq1}) are $x_i(0)=x_{i0}, {x}'_i(0)={x}'_{i0}$. \par Suppose that $x^{\ast }=(x_1^{\ast },x_2^{\ast },\cdots,x_n^{\ast })^T$ is an equilibrium point of system (\ref{eq1}). We denote the Jacobian matrix of system (\ref{eq1}) at $x^{\ast }$ by $J(\mu)$.
Let the characteristic polynomial of $J(\mu)$ be \begin{equation}\label{eq4} f(\lambda;\alpha,\mu)=\lambda^n+a_1\lambda^{n-1}+\cdots+a_n, \end{equation} where $a_i=a_i(\alpha,\mu),i=1,2,\cdots,n$. \par From the local asymptotic stability theorem for fractional-order nonlinear systems \cite{ahmed2007equilibrium}, for $\alpha \in \left ( 1,2 \right )$, we denote by $\Omega =\left \{ z\in \mathbb{C}\mid \left | \arg\left ( z \right ) \right | >\frac{\alpha \pi }{2}\right \}$, $\Sigma =\left \{ z\in \mathbb{C}\mid \left | \arg\left ( z \right ) \right | <\frac{\alpha \pi }{2}\right \}$ and $\Gamma =\left \{ z\in \mathbb{C}\mid \left | \arg\left ( z \right ) \right | =\frac{\alpha \pi }{2}\right \}$ the stable region, the unstable region and the critical line of the complex plane, respectively. \par According to the fractional-order Hopf bifurcation conditions \cite{deshpande2017hopf,ma2016complexity}, system (\ref{eq1}) undergoes a Hopf bifurcation if $f(\lambda;\alpha,\mu)$ has a pair of conjugate complex roots which, as the parameter $\mu$ changes, cross the critical line $\Gamma$ from the stable region $\Omega$ to the unstable region $\Sigma$, while the other roots are all in $\Omega$. \section{Main Results} $f(\lambda;\alpha,\mu)$ in (\ref{eq4}) can be expressed as $g(r)=f\left (r\cdot e^{i\frac{\alpha -1}{2}\pi };\alpha,\mu \right )$ by rotating the coordinate system counterclockwise through the angle $\theta =\frac{\left (\alpha -1 \right )\pi }{2}$. A generalized Routh-Hurwitz matrix is constructed from $g(ir)=f\left ( r\cdot e^{i\frac{\alpha\pi}{2} };\alpha,\mu \right )$.
Suppose that the imaginary part $f_1(r;\alpha,\mu)$ and the real part $f_2(r;\alpha,\mu)$ of $g(ir)=f\left ( r\cdot e^{i\frac{\alpha\pi}{2} };\alpha,\mu \right )$ are expressed as \begin{equation}\label{eq11} \left\{\begin{matrix} f_1(r;\alpha,\mu)=\overline{a}_0r^n+\overline{a}_1r^{n-1}+\cdots+\overline{a}_{n-1}r+\overline{a}_n\\ f_2(r;\alpha,\mu)=\overline{b}_0r^n+\overline{b}_1r^{n-1}+\cdots+\overline{b}_{n-1}r+\overline{b}_n, \end{matrix}\right. \end{equation} where $\overline{a}_j=a_j\cdot \sin\left ( \frac{(n-j)\cdot \alpha \pi }{2}\right )$, $\overline{b}_j=a_j\cdot \cos\left ( \frac{(n-j)\cdot \alpha \pi }{2}\right )$, $j=0,1,\cdots,n$, with $a_0=1$. \begin{definition} For $f(\lambda;\alpha,\mu)$ in (\ref{eq4}), the $2n\times2n$ fractional-order Routh-Hurwitz matrix $H_{\alpha}$ of $f(\lambda;\alpha,\mu)$ is defined as: \begin{small} \begin{equation}\label{eq5} H_{\alpha}(\mu)=\left [ \begin{matrix} \overline{a}_0 &\overline{a}_1 & \cdots & 0 & \cdots &\cdots & 0\\ \overline{b}_0 &\overline{b}_1 & \cdots & \overline{b}_n & \cdots &\cdots & 0\\ 0 & \overline{a}_0 & \cdots &\overline{a}_{n-1} &0 &\cdots &0\\ 0 & \overline{b}_0 &\cdots & \overline{b}_{n-1}&\overline{b}_{n}&\cdots &0\\ \vdots &\vdots &\vdots & \vdots &\vdots&\vdots &\vdots\\ 0 &\cdots &\cdots & \overline{a}_0 &\cdots&\overline{a}_{n-1}&0\\ 0 &\cdots &\cdots & \overline{b}_0 &\cdots&\overline{b}_{n-1} &\overline{b}_n\\ \end{matrix} \right ]. \end{equation} \end{small} \end{definition} Denote the $2p$-th order leading principal minor of $H_\alpha(\mu)$ by $\nabla_p(\mu)(p=1,2,\cdots,n)$.
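The construction of $H_\alpha$ and its leading principal minors is purely mechanical and can be checked numerically. The following sketch (our illustration, not part of the paper; it assumes NumPy) builds the matrix exactly as in (\ref{eq5}) from the monic coefficients of $f$:

```python
import numpy as np

def frac_rh_matrix(a, alpha):
    """Fractional-order Routh-Hurwitz matrix H_alpha of
    f(lam) = lam^n + a[1] lam^(n-1) + ... + a[n]  (a[0] = 1),
    built from the shifted, interleaved rows of the coefficients
    abar_j = a_j sin((n-j) alpha pi/2) and bbar_j = a_j cos((n-j) alpha pi/2)."""
    a = np.asarray(a, dtype=float)
    n = len(a) - 1
    k = np.arange(n + 1)
    abar = a * np.sin((n - k) * alpha * np.pi / 2)
    bbar = a * np.cos((n - k) * alpha * np.pi / 2)
    H = np.zeros((2 * n, 2 * n))
    for p in range(n):                  # the p-th pair of rows is shifted p columns
        H[2 * p,     p:p + n + 1] = abar
        H[2 * p + 1, p:p + n + 1] = bbar
    return H

def leading_minors(a, alpha):
    """nabla_p = det of the top-left 2p x 2p block of H_alpha, p = 1..n."""
    H = frac_rh_matrix(a, alpha)
    n = len(a) - 1
    return [np.linalg.det(H[:2 * p, :2 * p]) for p in range(1, n + 1)]
```

Since $H_\alpha$ is, up to a row permutation, the Sylvester matrix of $f_1$ and $f_2$, $\nabla_n$ vanishes exactly when $f_1$ and $f_2$ share a real root, i.e. when $f$ has a root on $\Gamma$; this is easy to exercise with a quadratic whose roots $r_0e^{\pm i\alpha\pi/2}$ are placed on the critical line.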
\begin{theorem}\label{th1} System (\ref{eq1}) undergoes a Hopf bifurcation if there exists $\mu^*$ such that \\ 1.(eigenvalue condition) $\nabla_{n}(\mu^*)=0, \nabla_{p}(\mu^*)>0(p=1,2,\cdots,n-1), \widetilde{\nabla}(\mu^*)<0$, where $\widetilde{\nabla}(\mu)$ denotes the determinant of the $2\left (n-1 \right )\times2\left (n-1 \right )$ matrix extracted from the first $2n-2$ rows, the first $2n-3$ columns and the $(2n-1)$th column of $H_{\alpha}(\mu)$, that is \begin{small} \begin{equation} \widetilde{\nabla}(\mu)=\left | \begin{matrix} \overline{a}_0 &\overline{a}_1 & \cdots & 0 & \cdots &\cdots &0\\ \overline{b}_0 &\overline{b}_1 & \cdots & \overline{b}_n & \cdots &\cdots &0\\ \vdots &\vdots &\vdots & \vdots &\vdots&\vdots &\vdots\\ 0 &\cdots & \overline{a}_0&\cdots &\cdots & \overline{a}_{n-1} &0 \\ 0 &\cdots & \overline{b}_0 &\cdots &\cdots & \overline{b}_{n-1}&0 \\ 0 &\cdots &\cdots & \overline{a}_0 &\cdots& \overline{a}_{n-2}&0 \\ 0 &\cdots &\cdots & \overline{b}_0 &\cdots& \overline{b}_{n-2}&\overline{b}_n \\ \end{matrix} \right |. \end{equation} \end{small} \\ 2.(transversality condition) $\nabla_n(\mu)$ is indefinite in any neighborhood $\delta(\mu^*)$ of $\mu^*$, that is, $\forall \delta (\mu^*)$, $\exists \,\mu_1, \mu_2 \in \delta(\mu^*)$, s.t. $\nabla_n(\mu_1)>0,\nabla_n(\mu_2)<0$. \end{theorem} \begin{theorem}\label{th2} Denoting the pair of conjugate complex roots on the critical line $\Gamma$ by $\lambda(\mu^*),\overline{\lambda}(\mu^*)$, we have $\lambda(\mu^*),\overline{\lambda}(\mu^*)=-\frac{\widetilde{\nabla}(\mu^*)}{\nabla_{n-1}(\mu^*)}\cdot e^{\pm i\frac{\alpha \pi }{2}}$. \end{theorem} Set \begin{equation} BS=\left \{ \mu\mid \nabla_n(\mu)=0,\nabla_p(\mu)>0,p=1,2,\cdots,n-1 \right \}\\ \end{equation} called the bifurcation hyper-surface of system (\ref{eq1}). \par We denote the Hessian matrix $\left (\frac{\partial^2\nabla_n(\mu) }{\partial \mu_i\partial \mu_j}\right)$ of $\nabla_n(\mu)$ at $\mu=\mu^*$ by $H\left (\nabla_n(\mu^*) \right )$.
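Theorem \ref{th2} is easy to exercise numerically in the case $n=2$. The sketch below (our illustration, assuming NumPy; not part of the paper) places a conjugate pair $r_0e^{\pm i\alpha\pi/2}$ on $\Gamma$, builds $H_\alpha$, extracts $\widetilde{\nabla}$ and $\nabla_{n-1}$ as described in Theorem \ref{th1}, and recovers the modulus $r_0=-\widetilde{\nabla}/\nabla_{n-1}$:

```python
import numpy as np

def frac_rh_matrix(a, alpha):
    """H_alpha built as in the definition, for monic coefficients a = [1, a1, ..., an]."""
    a = np.asarray(a, dtype=float)
    n = len(a) - 1
    k = np.arange(n + 1)
    abar = a * np.sin((n - k) * alpha * np.pi / 2)
    bbar = a * np.cos((n - k) * alpha * np.pi / 2)
    H = np.zeros((2 * n, 2 * n))
    for p in range(n):
        H[2 * p,     p:p + n + 1] = abar
        H[2 * p + 1, p:p + n + 1] = bbar
    return H

def critical_modulus(a, alpha):
    """-nabla_tilde / nabla_{n-1}, where nabla_tilde uses the first 2n-2 rows and
    the first 2n-3 columns plus the (2n-1)th column of H_alpha (Theorem 1)."""
    H = frac_rh_matrix(a, alpha)
    n = len(a) - 1
    cols = list(range(2 * n - 3)) + [2 * n - 2]   # 0-based column indices
    nab_tilde = np.linalg.det(H[:2 * n - 2][:, cols])
    nab_nm1 = np.linalg.det(H[:2 * n - 2, :2 * n - 2])
    return -nab_tilde / nab_nm1
```

For a quadratic $f(\lambda)=\lambda^2-2r_0\cos(\alpha\pi/2)\lambda+r_0^2$, whose roots are exactly $r_0e^{\pm i\alpha\pi/2}$, the returned value should equal $r_0$.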
\begin{theorem}\label{th3} (1). If $\mu^* \in BS$ and $\frac{\partial \nabla_n(\mu)}{\partial \mu}\Big|_{\mu=\mu^*}\neq 0$, then $\nabla_n(\mu)$ is indefinite in any neighborhood $\delta(\mu^*)$ of $\mu^*$. \\ (2). If $\mu^* \in BS$, $\frac{\partial \nabla_n(\mu)}{\partial \mu}\Big|_{\mu=\mu^*}=0$, and $H\left (\nabla_n(\mu^*) \right )$ is indefinite, then $\nabla_n(\mu)$ is indefinite in any neighborhood ${\delta}(\mu^*)$ of $\mu^*$. \\ (3). If $\mu^* \in BS$, $\frac{\partial \nabla_n(\mu)}{\partial \mu}\Big|_{\mu=\mu^*}=0$, and $H\left (\nabla_n(\mu^*) \right )$ is positive (or negative) definite, then $\nabla_n(\mu)$ is positive (or negative) definite in some neighborhood $\delta(\mu^*)$ of $\mu^*$. \end{theorem} \section{Proofs of Theorems} Denote by $xy$ and ${x}'{y}'$ the original coordinate system and the coordinate system rotated counterclockwise through the angle $\theta =\frac{\left (\alpha -1 \right )\pi }{2}$, respectively. Since $f(\lambda;\alpha,\mu)$ is a real polynomial whose roots are symmetric about the $x$-axis, when system (\ref{eq1}) undergoes a Hopf bifurcation, one nonzero root of $f(\lambda;\alpha,\mu)$ lies on the $y'$-axis, and the others lie in the left half plane of the coordinate system ${x}'{y}'$. \par \begin{lemma}\label{le1} $f(\lambda;\alpha,\mu^*)$ has a pair of nonzero conjugate complex roots on the critical line $\Gamma$ with the other roots in the stable region $\Omega$ if and only if $\nabla_n(\mu^*)=0$, $\nabla_p(\mu^*)>0(p=1,2,\cdots,n-1),\widetilde{\nabla}(\mu^*)<0$. \end{lemma} \begin{proof} From the generalized Routh-Hurwitz criterion \cite{gantmakher2000theory}, if $f(\lambda;\alpha,\mu^*)$ has one root on the $y'$-axis (a part of the critical line $\Gamma$), then $f_1(r;\alpha,\mu^*)$ and $f_2(r;\alpha,\mu^*)$ have one common real root. From the resultant theorem, we have $\nabla_n(\mu^*)=0,\nabla_{n-1}(\mu^*)\neq0$. Suppose that the common real root is $r_0$. Eq.
(\ref{eq11}) can be rewritten as \begin{equation} \left\{\begin{matrix} f_1(r;\alpha,\mu^*)=\widetilde{f}_1\cdot \left ( r-r_0\right )\\ f_2(r;\alpha,\mu^*)=\widetilde{f}_2\cdot \left ( r-r_0\right ), \end{matrix}\right. \end{equation} where \begin{equation} \left\{\begin{matrix} \widetilde{f}_1=\widetilde{a}_0 r^{n-1}+\widetilde{a}_1 r^{n-2}+\cdots+\widetilde{a}_{n-1}\\ \widetilde{f}_2=\widetilde{b}_0 r^{n-1}+\widetilde{b}_1 r^{n-2}+\cdots+\widetilde{b}_{n-1}. \end{matrix}\right. \end{equation} Denote by \begin{small} \begin{equation} \widetilde{H}(\mu^*)=\left [ \begin{matrix} \widetilde{a}_0 &\widetilde{a}_{1}&\widetilde{a}_{2}& \cdots & 0 &\cdots &\cdots & 0\\ \widetilde{b}_0 &\widetilde{b}_{1}&\widetilde{b}_{2}& \cdots & 0 &\cdots &\cdots & 0\\ 0 & \widetilde{a}_0 &\widetilde{a}_{1}&\widetilde{a}_{2}&\cdots & 0 & \cdots & 0\\ 0 & \widetilde{b}_0 &\widetilde{b}_{1}&\widetilde{b}_{2}& \cdots & 0 & \cdots & 0\\ \vdots &\vdots &\vdots & \vdots &\vdots&\vdots &\vdots\\ 0 &\cdots &\cdots & \widetilde{a}_0 &\widetilde{a}_{1}&\widetilde{a}_{2}& \cdots &0\\ 0 &\cdots &\cdots & \widetilde{b}_0 &\widetilde{b}_{1}&\widetilde{b}_{2}& \cdots &0\\ \end{matrix} \right ], \end{equation} \end{small} \\ the $2n\times 2n$ resultant matrix of $\widetilde{f}_1$ and $\widetilde{f}_2$, and by $\widetilde{\nabla}_k(\mu^*)$ its $2k$-th order leading principal minor. Since the remaining roots of $f(\lambda;\alpha,\mu^*)$ are all in the stable region $\Omega$, applying the generalized Routh-Hurwitz criterion to $\widetilde{f}_1$ and $\widetilde{f}_2$ gives $\widetilde{\nabla}_k(\mu^*)>0(k=1,2,\cdots,n-1)$. \par Denote by $T_i(M)$ the matrix obtained by multiplying the $(i-1)$th column of $M$ by $r_0$ and adding it to the $i$th column of $M$. We have \begin{equation} H_0=H_\alpha,H_1=T_1(H_0),H_2=T_2(H_1),\cdots,\widetilde{H}= T_n(H_{n-1}). \end{equation} Thus, $\nabla_p(\mu^*)=\widetilde{\nabla}_p(\mu^*)(p=1,2,\cdots,n-1)$, and hence $\nabla_p(\mu^*)>0(p=1,2,\cdots,n-1),\nabla_n(\mu^*)=0$.
\par From the subresultant theorem \cite{ABDELJAOUED2009588}, we have \begin{equation} \nabla_{n-1}(\mu^*)\cdot r_0+\widetilde{\nabla}(\mu^*)=0. \end{equation} Since the critical root is on the critical line $\Gamma$, which lies in the left half of the $xy$-plane, we have $r_0>0$. From $\nabla_{n-1}(\mu^*)>0$, we conclude that $\widetilde{\nabla}(\mu^*)<0$. This completes the proof of Lemma \ref{le1}; moreover, $r_0=-\widetilde{\nabla}(\mu^*)/\nabla_{n-1}(\mu^*)$, which proves Theorem \ref{th2}. \end{proof} \begin{lemma}\label{le2} If $f(\lambda;\alpha,\mu)$ has a pair of conjugate complex roots $\lambda(\mu),\overline{\lambda}(\mu)$ which, as $\mu$ changes, cross the critical line $\Gamma$ at $\lambda(\mu^*),\overline{\lambda}(\mu^*)$ from the stable region $\Omega$ to the unstable region $\Sigma$, and the other roots are all in $\Omega$, then $\nabla_n(\mu)$ is indefinite in any neighborhood $\delta (\mu^*)$ of $\mu^*$, that is, $\forall \delta (\mu^*)$, $\exists\, \mu_1,\mu_2\in\delta (\mu^*)$, s.t. $\nabla_n(\mu_1)>0,\nabla_n(\mu_2)<0$. \end{lemma} \begin{proof} Since the other roots of $f(\lambda;\alpha,\mu)$ are all in the stable region $\Omega$, from the proof of Lemma \ref{le1}, we have $\nabla_p(\mu^*)>0(p=1,2,\cdots,n-1)$. Each $\nabla_p(\mu)$ is a continuous function of $\mu$, so there exists a small enough neighborhood $\delta(\mu^*)$ of $\mu^*$ such that, for every $\mu_1 \in \delta(\mu^*)$ for which the critical pair lies in $\Omega$ and every $\mu_2 \in \delta(\mu^*)$ for which it lies in $\Sigma$, we have $\nabla_p(\mu_k)>0 (k=1,2;\, p=1,2,...,n-1)$ and $\nabla_n(\mu_1)>0, \nabla_n(\mu_2)<0$. \par Lemma \ref{le1} and Lemma \ref{le2} together give the proof of Theorem \ref{th1}. \end{proof} Since $\nabla_n(\mu)=\nabla_n(\mu^*)+\frac{\partial \nabla_n(\mu)}{\partial \mu}\bigg|_{\mu=\mu^*}(\mu-\mu^*)+\frac{1}{2}(\mu-\mu^*)^T H(\nabla_n(\mu^*)) (\mu-\mu^*)+o( ||\mu-\mu^*||^2)$, Theorem \ref{th3} follows immediately. \section{Illustrative Example} To demonstrate the effectiveness of our explicit method in the multi-parameter situation, we use the same system analyzed by the online method \cite{YANG2022111714}.
Consider the fractional-order neural network system with multiple parameters as follows: \begin{equation}\label{eq17} \left\{\begin{matrix} \frac{d^{\alpha }x_{1}(t)}{dt^{\alpha }}=-\mu _{1}x_{1}(t)+k_{11}f_{11}(x_{1}(t))+k_{12}f_{12}(x_{2}(t))+k_{13}f_{13}(x_{3}(t)),\\ \frac{d^{\alpha }x_{2}(t)}{dt^{\alpha }}=-\mu _{1}x_{2}(t)+k_{21}f_{21}(x_{1}(t))+k_{22}f_{22}(x_{2}(t))+k_{23}f_{23}(x_{3}(t)), \\ \frac{d^{\alpha }x_{3}(t)}{dt^{\alpha }}=-\mu _{2}x_{3}(t)+k_{31}f_{31}(x_{1}(t))+k_{32}f_{32}(x_{2}(t))+k_{33}f_{33}(x_{3}(t)), \end{matrix}\right. \end{equation} where $\mu _{1},\mu_{2},k_{ij}(i,j=1,2,3)$ are bifurcation parameters, $\alpha =1.1$, $f_{ij}(x_{j})=\tanh(x_{j}(t))$, and the initial conditions are $x_j(0)=x_{j0}, {x}'_j(0)={x}'_{j0}$. The equilibrium point of system (\ref{eq17}) is $(0,0,0)$. \par The characteristic polynomial of the Jacobian matrix $J(\mu)$ is: \begin{equation} f(\lambda;\alpha,\mu) = \lambda ^{3}+a_1(\alpha,\mu)\lambda ^{2}+a_2(\alpha,\mu)\lambda+a_3(\alpha,\mu)=0, \end{equation} where \begin{equation} \begin{aligned} a_1(\alpha,\mu)=&\mu_2-k_{33}+2\mu_1-k_{22}-k_{11},\\ a_2(\alpha,\mu)=&k_{11}k_{22}+k_{11}k_{33}-k_{11}\mu_1-k_{11}\mu_2-k_{12}k_{21}-k_{13}k_{31}\\&+k_{22}k_{33}-k_{22}\mu_1-k_{22}\mu_2-k_{23}k_{32}-2k_{33}\mu_1+\mu_1^2+2\mu_1\mu_2,\\ a_3(\alpha,\mu)=&-k_{11}k_{22}k_{33}+k_{11}k_{22}\mu_2+k_{11}k_{23}k_{32}+k_{11}k_{33}\mu_1-k_{11}\mu_1\mu_2\\&+k_{12}k_{21}k_{33}-k_{12}k_{21}\mu_2-k_{12}k_{23}k_{31}-k_{13}k_{21}k_{32}+k_{13}k_{22}k_{31}\\&-k_{13}k_{31}\mu_1+k_{22}k_{33}\mu_1-k_{22}\mu_1\mu_2-k_{23}k_{32}\mu_1-k_{33}\mu_1^2+\mu_1^2\mu_2. \end{aligned} \end{equation} From the definition in (\ref{eq5}) and Theorem \ref{th1}, we obtain $H_{\alpha}(\mu)$ and $\widetilde{\nabla}(\mu)$ for system (\ref{eq17}).
Then \begin{equation}\nonumber \begin{aligned} \nabla_1(\mu) &=a_1\sin\left ( \frac{9\pi }{20} \right ), \\ \nabla_2(\mu) &=\left ( \frac{a_1^2a_2}{2}-\frac{a_3a_1}{2} \right )\cos\left ( \frac{\pi }{10} \right )+\left ( \frac{a_2^2}{2}-\frac{a_1a_3}{2} \right )\cos\left ( \frac{\pi }{5} \right )+\frac{a_1^2a_2}{2}-\frac{a_2^2}{2},\\ \end{aligned} \end{equation} \begin{equation} \begin{aligned} \nabla_3(\mu) &=\frac{a_3}{8}(2a_3^2\sin\left ( \frac{\pi }{20} \right )+2a_1a_2a_3\sin\left ( \frac{3\pi }{20} \right )+( 2a_1^3a_3+2a_1^2a_2^2-12a_1a_2a_3\\&+2a_2^3+6a_3^2 )\sin\left ( \frac{7\pi }{20} \right )+ ( 6a_1^2a_2^2-4a_1^3a_3-2a_2a_1a_3-4a_2^3 )\sin\left ( \frac{9\pi }{20} \right )\\&+\sqrt{2}a_1^3a_3-2\sqrt{2}a_1a_2a_3+\sqrt{2}a_2^3),\\ \widetilde{\nabla}(\mu) &=-\frac{1}{4}a_3\left ( 2a_1^2\cos\left ( \frac{7\pi }{20} \right )+\left ( 2a_1^2-2a_2 \right )\cos\left ( \frac{9\pi }{20} \right )+\sqrt{2}a_2 \right ). \end{aligned} \end{equation} Based on Theorem \ref{th1}, the bifurcation hyper-surface $BS$ of system (\ref{eq17}) is determined by \begin{figure} \caption{The bifurcation hyper-surface in the $(\mu_1,\mu_2)$-plane for fixed $k_{ij}$.\label{fig2}} \end{figure} \begin{equation}\label{eq24} \nabla_1(\mu^*)>0,\nabla_2(\mu^*)>0,\nabla_3(\mu^*)=0,\widetilde{\nabla}(\mu^*)<0. \end{equation} To make the results intuitive, we assume that $\mu_{1}$ and $\mu_2$ are the bifurcation parameters, while the other parameters are fixed as $k_{11}=k_{12}=k_{13}=k_{23}=2,k_{21}=k_{22}=k_{31}=k_{33}=-2$, $k_{32}=1$. Fig. \ref{fig2} shows the bifurcation surface of system (\ref{eq17}) in this case. \par From Theorem \ref{th3}, we know that there exists exactly one point $\mu_0\in BS$ which satisfies \begin{center} $\nabla_3(\mu)=0$, $\frac{\partial \nabla_n(\mu)}{\partial \mu}\Big|_{\mu=\mu_0}=0$, and $H\left (\nabla_n(\mu) \right )\Big|_{\mu=\mu_0}$ is negative definite, \end{center} so $\mu_0\approx (3.817533638,-4.170716050)$ does not meet the transversality condition.
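As a cross-check of this example (our sketch, assuming NumPy; the fixed $k_{ij}$ are those used for Fig. \ref{fig2}), one can compute the characteristic polynomial of $J(\mu)$ numerically and compare $\nabla_1$ obtained from the matrix definition (\ref{eq5}) with the closed form $a_1\sin(9\pi/20)$ above:

```python
import numpy as np

ALPHA = 1.1
K = np.array([[ 2.0,  2.0,  2.0],
              [-2.0, -2.0,  2.0],
              [-2.0,  1.0, -2.0]])        # fixed k_ij of the example

def jacobian(mu1, mu2):
    # f_ij = tanh, so f_ij'(0) = 1 and J(mu) = K - diag(mu1, mu1, mu2) at the origin
    return K - np.diag([mu1, mu1, mu2])

def frac_rh_matrix(a, alpha):
    """H_alpha built as in (eq5) for monic coefficients a = [1, a1, ..., an]."""
    a = np.asarray(a, dtype=float)
    n = len(a) - 1
    k = np.arange(n + 1)
    abar = a * np.sin((n - k) * alpha * np.pi / 2)
    bbar = a * np.cos((n - k) * alpha * np.pi / 2)
    H = np.zeros((2 * n, 2 * n))
    for p in range(n):
        H[2 * p,     p:p + n + 1] = abar
        H[2 * p + 1, p:p + n + 1] = bbar
    return H

mu1, mu2 = 1.0, -0.5
a = np.real(np.poly(jacobian(mu1, mu2)))  # characteristic coefficients [1, a1, a2, a3]
H = frac_rh_matrix(a, ALPHA)
nabla1 = np.linalg.det(H[:2, :2])         # should equal a1 * sin(9*pi/20)
```

With these $k_{ij}$, the printed formula for $a_1$ reduces to $a_1=2\mu_1+\mu_2+2$, which the coefficients returned by `np.poly` reproduce.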
\par If the number of uncertain parameters is more than three, it is difficult to show $BS$ in a figure. Our explicit expressions in Eq. (\ref{eq24}) remain valid in the multi-parameter situation. \section{Conclusions} Explicit Hopf bifurcation conditions for fractional-order systems with order $1<\alpha <2$ are proposed in this paper. Based on the generalized Routh-Hurwitz criterion and the subresultant theorem, we obtain matrices that determine the Hopf bifurcation hyper-surface directly. Our explicit method avoids the failure of the existing methods when the number of parameters is large. \section*{Acknowledgment} This work was partially supported by the National Natural Science Foundation of China under Grant No. 12171073. \section*{References} \biboptions{square,numbers,sort&compress} \end{document}
\begin{document} \begin{center} {\LARGE Packing 1-Plane Hamiltonian Cycles in Complete Geometric Graphs} \end{center} \vskip8pt \centerline{\small Hazim Michman Trao, Adem Kilicman \ and \ Niran Abbas Ali} \begin{center} \itshape\small Department of Mathematics, \\ Universiti Putra Malaysia, 43400 Serdang, Malaysia, \\ \end{center} \begin{abstract} Counting the number of Hamiltonian cycles contained in a geometric graph is {\bf \#P}-complete even if the graph is known to be planar \cite{lot:refer}. A relaxation for problems on plane geometric graphs is to allow the geometric graphs to be 1-plane, that is, each of their edges is crossed at most once. We consider the following question: For any set $P\/$ of $n\/$ points in the plane, how many 1-plane Hamiltonian cycles can be packed into a complete geometric graph $K_n\/$? We investigate the problem for two different configurations of $P\/$, namely, when $P\/$ is in convex position and when it is in regular wheel configuration. For points in general position we prove the lower bound of $k-1\/$, where $n=2^{k}+h\/$ and $0\leq h <2^{k}\/$. In all of these situations, we investigate the constructions of the graphs obtained. \end{abstract} \section{Introduction} Let $P\/$ be a set of $n\/$ points in general position in the plane, no three of them collinear. A {\em geometric graph\/} is a graph $G = (P,E)$ that consists of a set of vertices $P$, which are points in the plane, and a set of edges $E$, which are straight-line segments whose endpoints belong to $P$. A {\em complete\/} geometric graph $K_n\/$ is a geometric graph on a set $P\/$ of $n\/$ points that has an edge joining every pair of points in $P\/$. Two edges are disjoint if they have no point in common. Two subgraphs are {\em edge-disjoint\/} if they do not share any edge.\ A geometric graph is said to be \textit{plane} (or non-crossing) if its edges do not cross each other.
A geometric graph is said to be \textit{1-plane} if every edge is allowed to have at most one crossing. Note that the terms plane graph and 1-plane graph refer to a geometric object, while being planar or 1-planar is a property of the underlying abstract graph.\ By an \emph{edge packing} of a graph $G\/$ we mean a set of edge-disjoint subgraphs of $G\/$. By an \emph{edge partition} of $G\/$ we mean an edge packing of $G\/$ with no edge left over, that is, the union of all subgraphs in the packing is equal to $G\/$. Dor and Tarsi \cite{dt:refer} proved that the problem of partitioning a given graph $G\/$ is NP-complete.\ It is often useful to restrict the subgraphs of $G\/$ to a certain class or property. Among all subgraphs of $K_n\/$, plane spanning trees, plane Hamiltonian cycles or paths, and plane perfect matchings are of particular interest \cite{aam:refer, abd:refer, aic:refer, bhrw:refer, nm:refer}; i.e., one may look for the maximum number of these subgraphs that can be packed into $K_n\/$. For instance, a long-standing open question is to determine whether the edges of $K_n\/$, where $n\/$ is even, can be partitioned into $\frac{n}{2}$ plane spanning trees. Bernhart and Kainen \cite{bk:refer} gave an affirmative answer for the problem when the points are in convex position. Bose et al. \cite{bhrw:refer} proved that every complete geometric graph $K_n\/$ can be partitioned into at most $n-\sqrt{\frac{n}{12}}\/$ plane trees. Aichholzer et al. \cite{aic:refer} showed that $\Omega(\sqrt{n})$ plane spanning trees can be packed into $K_n\/$. Recently, Biniaz et al. \cite{bbms:refer} showed that at least $\lceil \log_2 n\rceil -1\/$ plane perfect matchings can be packed into $K_n\/$. \ A \textit{Hamiltonian cycle} is a cycle in a graph that passes through every vertex exactly once, except for the vertex that is both the beginning and the end, which is visited twice. Finding a Hamiltonian cycle in a graph is {\bf NP}-complete even if the graph is known to be planar \cite{gjt:refer}.
Moreover, counting the number of Hamiltonian cycles contained in a graph is {\bf \#P}-complete even if the graph is known to be planar \cite{lot:refer}. \ The problem of counting the number of plane Hamiltonian cycles (not necessarily edge-disjoint) on a given point set was considered in \cite{aki:refer, dsst:refer, ssw:refer, sw:refer} and many other papers. In particular, Newborn and Moser \cite{nm:refer} asked for the maximal number of plane Hamiltonian cycles; the authors gave an upper bound of $2\cdot6^{n-2}\lfloor\frac{n}{2}\rfloor!$, and conjectured that it should be of the form $c^n$, for some constant $c$. Motzkin \cite{m:refer} proved that a set of $n\/$ points in the plane has at most $O(86.81^n)$ plane Hamiltonian cycles.\ A relaxation for problems on plane geometric graphs is to allow the geometric graphs to be 1-plane. The problem of finding a 1-plane Hamiltonian alternating cycle or path has been studied in several papers (see \cite{kky:refer}, \cite{cggst:refer}). Claverol et al. \cite{cgh:refer} studied the 1-plane property and showed that one can always obtain a 1-plane Hamiltonian alternating cycle on a point set in convex position and on a double chain.\ Since finding more than one edge-disjoint plane Hamiltonian cycle for a given set of points is not always possible, we relax the constraint on the Hamiltonian cycles from being plane to being 1-plane and study the following problem: For any set of $n\/$ points in the plane, how many 1-plane Hamiltonian cycles can be packed into a complete geometric graph $K_n\/$?\ For simplicity, we write 1-PHC to refer to a 1-plane Hamiltonian cycle.\ \subsection{Results} We study the problem of packing 1-PHCs into the complete geometric graph $K_n\/$ for a given set of $n\/$ points in the plane.
Since the complete graph $K_n\/$ on $n\/$ vertices has $n(n-1)/2\/$ edges and a Hamiltonian cycle has $n\/$ edges, the number of edge-disjoint Hamiltonian cycles in $K_n\/$ cannot exceed $(n-1)/2\/$.\ In Section 2, we show that $\lfloor\frac{n}{3}\rfloor\/$ is a tight bound for the number of 1-PHCs that can be packed into $K_n\/$ for any given set in convex position. In Section 3, we show that for a set of points in regular wheel configuration, $\lfloor\frac{n-1}{3}\rfloor\/$ edge-disjoint 1-PHCs can be packed into $K_n\/$ and this bound is tight. In the latter portion of this paper, point sets in general position are considered. For $n\geq 3\/$, a minimum-weight Hamiltonian cycle in $K_n\/$ is a plane cycle, which gives a trivial lower bound of $1\/$. In Section 4, we present an algorithm (Algorithm $A\/$) to draw a 1-PHC for any set of points in general position in the plane. As the main result of this paper, we prove that at least $k-1\/$ 1-PHCs can be packed into $K_n\/$, where $n=2^{k}+h\/$ and $0\leq h <2^{k}\/$. \vspace {1mm} Throughout this paper, for simplicity, we consider all vertices in counter-clockwise order. \section{1-PHCs for point sets in convex position} In this section, we study the problem of packing 1-PHCs for a well-known restricted position of a point set, the convex position. We will show that for any point set $P\/$ in convex position, there are at most $\lfloor \frac{n}{3}\rfloor$ edge-disjoint 1-PHCs that can be packed into $K_n\/$ and this bound is tight (Theorem \ref{the1}). \vspace {1mm} Suppose $P= \{v_0, v_1, v_2, \ldots, v_{n-1}\}\/$ is a set of $n\/$ points in convex position. Let $G\/$ be a geometric graph on $P\/$. Edges of the form $v_iv_{i+1}\/$, $i=0,1,2, \ldots, n-1\/$ (indices taken modulo $n\/$), are called the {\em boundary edges\/} in $G\/$. A non-boundary edge in $G\/$ is called a {\em diagonal edge\/}.
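Since crossings between chords of a convex polygon depend only on the cyclic order of the endpoints, the 1-plane condition can be tested programmatically. The following sketch (our illustration, not part of the paper) checks whether a Hamiltonian cycle on convex points is a 1-PHC, and brute-forces small cases to find the minimum number of boundary edges over all 1-PHCs (cf. Lemma \ref{lem1} below):

```python
from itertools import permutations

def crosses(e, f, n):
    """Chords e, f of a convex n-gon (vertices 0..n-1 in ccw order)
    cross iff their endpoints interleave in the cyclic order."""
    a, b = e
    c, d = f
    if len({a, b, c, d}) < 4:
        return False                    # shared endpoint: no proper crossing
    inside = lambda x: 0 < (x - a) % n < (b - a) % n
    return inside(c) != inside(d)

def is_1phc(cycle):
    """True iff each edge of the Hamiltonian cycle is crossed at most once."""
    n = len(cycle)
    edges = [(cycle[i], cycle[(i + 1) % n]) for i in range(n)]
    return all(sum(crosses(e, f, n) for f in edges) <= 1 for e in edges)

def boundary_count(cycle):
    """Number of boundary edges v_i v_{i+1} (indices mod n) used by the cycle."""
    n = len(cycle)
    return sum((cycle[i] - cycle[(i + 1) % n]) % n in (1, n - 1)
               for i in range(n))

def min_boundary_edges(n):
    """Minimum number of boundary edges over all 1-PHCs on n convex points."""
    return min(boundary_count((0,) + p)
               for p in permutations(range(1, n)) if is_1phc((0,) + p))
```

For example, the boundary hexagon $(0,1,2,3,4,5)$ is plane and hence a 1-PHC, while the cycle $(0,3,1,4,2,5)$ is not 1-plane, since its edge $v_0v_3$ is crossed three times.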
\ \begin{propo} \label{pro2} Let $P\/$ be a set of $n\/$ points in convex position in the plane, where $n\geq 3\/$. Suppose $C\/$ is a 1-PHC on $P\/$ that has a diagonal edge $v_iv_j\/$ that divides $P\/$ into two parts, both including $v_i\/$ and $v_j\/$. Then the following statements hold:\ (1) If a part has an odd number of vertices, then $C\/$ has at least one boundary edge on this part. (2) If a part has an even number of vertices, then $C\/$ has at least two boundary edges on this part. \end{propo} \noindent {\bf Proof:} Let $C\/$ be a 1-PHC on $P\/$. Assume that $C\/$ contains the diagonal edge $v_iv_j\/$ that divides $P\/$ into two parts $P_1\/$ and $P_2\/$.\ Assume that $P_1=\{v_j,v_{j+1},...,v_i\}\/$ and $|P_1|\/$ is odd. By induction on $|P_1|\/$, if $|P_1|=3\/$, then $P_1=\{v_j,v_{j+1},v_i\}\/$ and either the boundary edge $v_jv_{j+1}\/$ or $v_iv_{j+1}\/$ is in $C\/$; otherwise, there are two crossings, a contradiction.\ Assume that $|P_1|\geq 5\/$ is odd and that the proposition is true whenever $m < |P_1|$ and $m$ is odd.\ We claim that either $v_{i+1}v_{j+1}\/$ or $v_{i-1}v_{j-1}\/$ is an edge in $C\/$. To prove our claim, suppose that neither $v_{i+1}v_{j+1}\/$ nor $v_{i-1}v_{j-1}\/$ is in $C\/$.\ Since $C\/$ is a 1-PHC, $C$ contains an edge $v_lv_k \notin \{v_{i+1}v_{j+1}, v_{i-1}v_{j-1}\}\/$ that crosses $v_iv_j$, where $v_k\in P_1\setminus\{v_i, v_j\}$ and $v_l\in P_2\setminus\{v_i, v_j\}$.\ When $k \notin \{j+1, i-1\}$, there is $p\in\{j+2, j+3, ..., k-1\}$ or $p\in\{k+1, k+2, ..., j-2\}$ such that the edge $v_pv_q$, which is incident to $v_p$ and belongs to $C$, crosses either $v_iv_j$ or $v_lv_k$, a contradiction. When $k \in \{j+1, i-1\}$, without loss of generality assume that $k=j+1$; then $l\neq i+1$ (by assumption).
Thus, there is $p\in\{j+2, j+3, ..., i-1\}$ or $p\in\{i+1, i+2, ..., l-1\}$ such that the edge $v_pv_q$, which belongs to $C$, crosses either $v_iv_j$ or $v_lv_k$, a contradiction.\ Therefore, either $v_{i+1}v_{j+1}\in E(C)\/$ or $v_{i-1}v_{j-1}\in E(C)\/$. Without loss of generality, assume that $v_{i+1}v_{j+1}\in E(C)\/$.\ Let $C'\/$ be the subgraph of $C\/$ induced by $P_1\/$. Since $v_{i+1}v_{j+1}\/$ is not in $C'\/$, we have $d(v_j)=d(v_{j+1})=1\/$ in $C'\/$.\ Let $C_1=C'\cup \{v_jv_{j+1}\}\/$. It is clear that $C_1\/$ is a 1-PHC on $P_1\/$ since $v_jv_{j+1}\/$ is a boundary edge. Recall that $v_iv_j\/$ and $v_jv_{j+1}\/$ are boundary edges in $C_1\/$ but are not boundary edges in $C\/$.\ When the boundary edge $v_iv_{i-1}\in E(C_1)\/$, the claim in the proposition holds.\ Hence, assume that the boundary edge $v_iv_{i-1}\notin E(C_1)\/$. Then there is a diagonal edge $v_iv_k\in E(C_1)\/$ that divides $P_1\/$ into two parts $P_{1,1}=\{v_i, v_{i-1}, ..., v_k\}\/$ and $P_{1,2}=\{v_i, v_{j}, v_{j+1}, ..., v_k\}\/$. \ If $k=j+1\/$, then $C_1\/$ would be a union of disjoint cycles, a contradiction.\ Hence, either $k\in \{j+2,j+4,...,i-2\}\/$, in which case $P_{1,1}=\{v_k, v_{k+1},...,v_{i-1}, v_i\}\/$ and, clearly, $|P_{1,1}|\/$ is odd. By the induction hypothesis, $C_1\/$ has at least one boundary edge on $P_{1,1}\/$. Thus, $C\/$ has at least one boundary edge on $P_1\/$.\ Or, $k\in \{j+3,j+5,...,i-3\}\/$, in which case $P_{1,2}=\{v_i,v_j,v_{j+1},...,v_{k}\}\/$ and, clearly, $|P_{1,2}|\/$ is odd. (*) Assume on the contrary that $C_1\/$ has only the two boundary edges $v_iv_j\/$ and $v_jv_{j+1}\/$ on $P_{1,2}\/$. Note that $\{v_{j+1},...,v_{k-1}\}\/$ has at least two vertices. However, $C_1\/$ must then match all the vertices in $\{v_{j+1},...,v_{k-1}\}\/$ using edges that cross $v_iv_k\/$ at least twice in total, since $C_1\/$ has no boundary edge on $\{v_{j+1},...,v_{k-1}\}\/$; this is a contradiction, since $C_1\/$ is a 1-PHC.
Thus, $C_1\/$ has at least one boundary edge different from $v_iv_j\/$ and $v_jv_{j+1}\/$ on $P_{1,2}\/$. This proves (1).\ Assume now that $|P_1|\/$ is even. By induction on $|P_1|\/$, if $|P_1|=4\/$, then $P_1=\{v_j,v_{j+1},v_{j+2},v_i\}\/$ and either $v_jv_{j+1},v_{j+1}v_{j+2}\in E(C)\/$ or $v_iv_{j+2},v_{j+1}v_{j+2}\in E(C)\/$; otherwise, there is a contradiction, since $C\/$ is a 1-PHC. Assume that $|P_1|\geq 6\/$ is even and that the proposition is true whenever $m < |P_1|$ and $m$ is even.\ By repeating the argument in (1), we conclude that either $v_{i+1}v_{j+1}\in E(C)\/$ or $v_{i-1}v_{j-1}\in E(C)\/$. Without loss of generality, assume that $v_{i+1}v_{j+1}\in E(C)\/$.\ Let $C'\/$ be the subgraph of $C\/$ induced by $P_1\/$. It is clear that $d(v_j)=d(v_{j+1})=1\/$ in $C'\/$. Let $C_1=C'\cup \{v_jv_{j+1}\}\/$. Then $C_1\/$ is a 1-PHC on $P_1\/$ since $v_jv_{j+1}\/$ is a boundary edge. Recall that $v_iv_j\/$ and $v_jv_{j+1}\/$ are boundary edges in $C_1\/$ but are not boundary edges in $C\/$.\ In the case that $C_1\/$ has the boundary edge $v_{i}v_{i-1}$, either $v_{i-1}v_{i-2}\in E(C)$, and then the claim in the proposition is true, or $v_{i-1}v_{i-2}\notin E(C)$, and then the diagonal edge $v_{i-1}v_{k}\in E(C)$ for some $k\in\{j+2, j+3, ..., i-3\}$. Hence, the diagonal $v_{i-1}v_{k}$ divides $P_1\/$ into two parts $P_{1,1}=\{v_{k}, v_{k+1}, ..., v_{i-2}, v_{i-1}\}\/$ and $P_{1,2}=\{v_{i-1}, v_i, v_{j}, v_{j+1}, ..., v_k\}\/$. \ Whether $|P_{1,1}|\/$ is odd or even, $C\/$ has at least one boundary edge on $P_{1,1}\/$, by part (1) or by the induction hypothesis, respectively.\ In the case that $C_1\/$ does not have the boundary edge $v_{i}v_{i-1}$, we have $v_{i}v_{k}\in E(C)$ for some $k\in\{j+2, j+3, ..., i-2\}$. Hence, the diagonal edge $v_{i}v_{k}$ divides $P_1\/$ into two parts $P_{1,1}=\{v_{k}, v_{k+1}, ..., v_{i-1}, v_{i}\}\/$ and $P_{1,2}=\{v_i, v_{j}, v_{j+1}, ..., v_k\}\/$.
Now, either both $|P_{1,1}|\/$ and $|P_{1,2}|\/$ are odd, in which case $C\/$ has at least one boundary edge on $P_{1,1}\/$ by part (1) and at least one boundary edge different from $v_iv_j\/$ and $v_jv_{j+1}\/$ on $P_{1,2}\/$ by argument (*); or both $|P_{1,1}|\/$ and $|P_{1,2}|\/$ are even, in which case, by the induction hypothesis, $C_1\/$ has at least two boundary edges on $P_{1,1}\/$. Thus, $C\/$ has at least two boundary edges on $P_1\/$. This completes the proof. \qed

As a direct consequence of Proposition \ref{pro2}, we have the following lemma.

\begin{support} \label{lem1} Let $P\/$ be a set of $n\/$ points in convex position in the plane where $n\geq 3\/$. Suppose $C\/$ is a 1-PHC on $P\/$. Then the following statements hold: (1) If $n\/$ is even, $C\/$ has at least two boundary edges. (2) If $n\/$ is odd, $C\/$ has at least three boundary edges. \end{support}

\noindent {\bf Proof:} Let $C\/$ be a 1-PHC on a set $P\/$ of $n\/$ points. If all edges of $C\/$ are boundary edges, then the claim in the lemma holds. Thus, assume that $C\/$ contains a diagonal edge $v_iv_j\/$. If $n\/$ is even, $v_iv_j\/$ divides $P\/$ into two parts that both have an odd (respectively, even) number of vertices. Then by Proposition \ref{pro2}, $C\/$ has at least one boundary edge (respectively, two boundary edges) on each part. If $n\/$ is odd, $v_iv_j\/$ divides $P\/$ into two parts, exactly one of which has an odd number of vertices. By Proposition \ref{pro2}, $C\/$ has at least one boundary edge on this part, while $C\/$ has at least two boundary edges on the second part, which has an even number of vertices.\qed \\

Suppose $G\/$ is a geometric graph on a set $P\/$ of points in convex position that has a diagonal edge $v_iv_j\/$. A boundary edge $v_kv_{k+1}\/$ is called {\em on the right side} of $v_iv_j\/$ if $i\leq k< j$ and {\em on the left side} of $v_iv_j\/$ if $j\leq k< i$. A diagonal edge is said to {\em have a boundary edge on each side} if there are two boundary edges, one on each of the left and right sides of the diagonal edge.
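For small instances, these definitions can be checked mechanically. The following Python sketch (the function names are ours, not from the text) labels the points $0,\dots,n-1$ in convex position, so that an edge is a boundary edge exactly when its endpoints are cyclically consecutive, and verifies the 1-plane condition for a Hamiltonian cycle given as a vertex order:

```python
def crosses(e, f, n):
    """Two chords of a convex n-gon cross iff they share no endpoint and
    exactly one endpoint of f lies strictly inside one arc cut off by e."""
    a, b = e
    if set(e) & set(f):
        return False
    arc = {(a + k) % n for k in range(1, (b - a) % n)}  # vertices strictly between a and b
    return len(arc & set(f)) == 1

def is_1phc(order, n):
    """Check that the Hamiltonian cycle visiting `order` is 1-plane,
    i.e. every edge is crossed at most once."""
    edges = [(order[i], order[(i + 1) % n]) for i in range(n)]
    return all(sum(crosses(e, f, n) for f in edges) <= 1 for e in edges)

def boundary_edges(order, n):
    """Edges of the cycle that join cyclically consecutive points of P."""
    edges = [(order[i], order[(i + 1) % n]) for i in range(n)]
    return [e for e in edges if (e[0] - e[1]) % n in (1, n - 1)]
```

For example, the order $0,2,1,3$ on four points is a 1-PHC with one crossing and exactly two boundary edges, in agreement with Lemma \ref{lem1}(1).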
A boundary edge $v_kv_{k+1}\/$ is called a {\em single boundary edge in $G\/$} if the two boundary edges $v_{k-1}v_k\/$ and $v_{k+1}v_{k+2}\/$ are not in $G\/$.

\begin{propo} \label{pro3} Let $P\/$ be a set of $n\/$ points in convex position in the plane where $n\geq 4\/$. Suppose $C\/$ is a 1-PHC on $P\/$. Then the following statements hold: (1) If $C\/$ has only two boundary edges $v_kv_{k+1}$ for $k\in \{r,s\}$, then $C\/$ has the edges $v_kv_{k+2}$ and $v_{k+1}v_{k-1}$. (2) If $C\/$ has only three boundary edges, then $C\/$ has at least one single boundary edge $v_rv_{r+1}$ and the edges $v_rv_{r+2}$ and $v_{r+1}v_{r-1}$. \end{propo}

\noindent {\bf Proof:} Let $C$ be a 1-PHC on $P\/$ containing only two boundary edges $v_kv_{k+1}$ for $k\in \{r,s\}$. Assume on the contrary that at least one of the two edges $\{v_kv_{k+2}, v_{k+1}v_{k-1}\}\/$ is not in $C\/$ for some $k\in \{r,s\}$. Without loss of generality, assume that $v_rv_{r+2}\notin E(C)\/$. Then $v_rv_i, v_{r+1}v_j\in E(C)$ for some $r+3\leq i\leq r-2\/$ and $r+3\leq j\leq r-1\/$. If $v_rv_{r+1}\/$ and $v_sv_{s+1}\/$ are in consecutive order, then all the remaining edges of $C\/$ are diagonal edges, and $v_rv_{r+1}\/$ and $v_sv_{s+1}\/$ lie on the same side of each of them. Hence, the other side of any diagonal edge does not contain any boundary edge, which contradicts Proposition \ref{pro2}. Thus $v_kv_{k+1}$ is a single boundary edge for each $k\in \{r,s\}$, and hence $i\neq r+1\/$ and $j\neq r+2\/$.
If $v_rv_i\/$ and $v_{r+1}v_{j}$ are crossing, then there is a vertex $v_t\/$ where $r+2\leq t<i\/$ such that at least one of the two edges incident to $v_t\/$ in $C\/$ crosses $v_rv_i\/$, which is a contradiction since $C\/$ is a 1-PHC. If $v_rv_i\/$ and $v_{r+1}v_j\/$ are not crossing, then by Proposition \ref{pro2}, $C\/$ has at least one boundary edge on the left side of $v_rv_i\/$ and at least another boundary edge on the right side of $v_{r+1}v_j\/$ (both different from $v_rv_{r+1}\/$), which is a contradiction since $C\/$ has only two boundary edges. This proves (1). Let $C\/$ contain three boundary edges $v_kv_{k+1}\/$ for $k\in \{r,s,t\}\/$. Suppose that there is no single boundary edge in $C\/$; that is, the boundary edges in $C\/$ are in consecutive order. Then all the remaining edges of $C\/$ are diagonal edges, and the edges $v_kv_{k+1}\/$ for $k\in \{r,s,t\}\/$ lie on the same side of each of them. Hence, the other side of any diagonal edge does not contain any boundary edge, which contradicts Proposition \ref{pro2}. Thus $C\/$ has at least one single boundary edge. Assume on the contrary that, for every single boundary edge $v_kv_{k+1}\/$ in $C\/$ with $k\in \{r,s,t\}\/$, either $v_kv_{k+2}\/$ or $v_{k+1}v_{k-1}\/$ or both are not in $C\/$. Without loss of generality, assume that $v_rv_{r+1}\/$ is a single boundary edge in $C\/$ and $v_rv_{r+2}\notin E(C)$.
Then $v_rv_i, v_{r+1}v_j\in E(C)\/$ where $r+3\leq i< r-1\/$ and $r+2< j\leq r-2\/$. If $v_rv_i\/$ and $v_{r+1}v_j\/$ are crossing, then there is a vertex $v_t\/$ where $r+2\leq t<j$ such that at least one of the two edges incident to $v_t\/$ in $C\/$ crosses $v_rv_i\/$, which is a contradiction since $C\/$ is a 1-PHC. If $v_rv_i\/$ and $v_{r+1}v_j\/$ are not crossing, then by Proposition \ref{pro2}, $C\/$ has at least one boundary edge on the set $\{v_{i+1},v_{i+2},...,v_r\}\/$ and at least one boundary edge on the set $\{v_{r+1},v_{r+2},...,v_j\}\/$ (otherwise, we get a contradiction since $C\/$ has only three boundary edges). This implies that there is a single boundary edge $v_sv_{s+1}\/$ where $i\leq s<r-1\/$ such that either $v_sv_{s+2}\/$ or $v_{s+1}v_{s-1}\/$ is not in $C\/$ (by assumption). Without loss of generality, assume that $v_sv_{s+2}\notin E(C)\/$. Let $v_sv_p\/$ and $v_{s+1}v_q\/$ be edges in $C\/$. Suppose that $p\in\{j,j+1,...,s-1\}\/$; otherwise, $v_sv_p\/$ crosses both $v_rv_i\/$ and $v_{r+1}v_j\/$, which is a contradiction since $C\/$ is a 1-PHC. Then by Proposition \ref{pro2}, $C\/$ has at least one boundary edge on the vertex set $\{v_j,v_{j+1},...,v_{s-1}\}\/$, which is a contradiction since $C\/$ has only three boundary edges. This completes the proof. \qed

One can easily see that Proposition \ref{pro3} is still true when $n=3\/$, which is odd with consecutive boundary edges.

\begin{result}\label{the1} Let $P\/$ be a set of $n\/$ points in convex position in the plane where $n\geq 3\/$. Then there exist $k\/$ edge-disjoint 1-PHCs $C_1, C_2, ... , C_k\/$ on $P\/$ that can be packed into $K_n\/$ where $k\leq \lfloor \frac{n}{3}\rfloor \/$. \end{result}

\noindent {\bf Proof:} (1) Let $n=2m\/$, which is even. Suppose $P=\{v_0, v_1, ..., v_{n-1}\}\/$ is a set of $n\/$ points in convex position in the plane. By Lemma \ref{lem1}, every 1-PHC on $P\/$ contains at least two boundary edges.
On the other hand, $P\/$ has $n\/$ boundary edges; that is, the number of 1-PHCs does not exceed $n/2\/$. We claim that if $C_i\/$ and $C_j\/$ are two edge-disjoint 1-PHCs each having only two boundary edges, then no boundary edge of $C_i\/$ can be in consecutive order with a boundary edge of $C_j\/$. To prove our claim, assume on the contrary that $v_rv_{r+1}\in E(C_i)\/$ and $v_{r+1}v_{r+2}\in E(C_j)\/$. By Proposition \ref{pro3}, the edge $v_rv_{r+2}\in E(C_i)\cap E(C_j)\/$, which is a contradiction since $C_i\/$ and $C_j\/$ are edge-disjoint 1-PHCs ($E(C_i)\cap E(C_j)=\emptyset\/$, where $i\neq j\/$). Note that a single boundary edge in $C_i\/$ can be adjacent to any two consecutive boundary edges in $C_j\/$ where $i\neq j\/$; that is, $C_i \cup C_j\/$ can have three boundary edges in consecutive order. Therefore, the number of 1-PHCs that can be packed into $K_n$ is at most $\lfloor \frac{n}{3}\rfloor\/$, and this bound is tight. Now, we show how to pack $\lfloor\frac{n}{3}\rfloor$ 1-PHCs into $K_n\/$. To ensure that the boundary edges of all cycles $\{C_1, C_2, ... , C_k\}\/$ are in consecutive order, let $\{C_1, C_2, ... , C_k\}\/$ be divided into two sets $A\/$ and $B\/$, where each 1-PHC in $A\/$ has only two boundary edges and each 1-PHC in $B\/$ has only four boundary edges (two pairs, each pair consisting of two boundary edges in consecutive order). We arrange the boundary edges with the property that a single boundary edge in $C\in A\/$ is in consecutive order with a pair of boundary edges in $C' \in B\/$. This property is depicted in Figure \ref{con1}(a). For each $i=0,1,...,\lfloor \frac{m}{3}\rfloor-1$ and $j=0,1,...,\lfloor \frac{m}{3}\rfloor-1$, where $m\geq 2\/$, let $C_i = v_{3i} v_{3i+1} v_{3i-1} v_{3i+3} v_{3i-3} ... v_{3i-m} v_{2i+(m+1)},v_{3i-m-1}v_{2i+(m+3)} ... v_{3i-(2m-2)} v_{3i}\/$ and $C_j = v_{3j+2} v_{3j+1}v_{3j+4}v_{3j-1} ... v_{3j+m+1} v_{3j-m+2},v_{3j+m+3} v_{3j-m}, ...
v_{3j+(2m)} v_{3j-(2m-3)}v_{3j+2}\/$. Here the operations on the subscripts are reduced modulo $n-1\/$. (2) Now, let $n=2m+1\/$, which is odd. By Lemma \ref{lem1}, every 1-PHC on $P\/$ contains at least three boundary edges. On the other hand, $P\/$ has $n\/$ boundary edges. Therefore, the number of 1-PHCs that can be packed into $K_n\/$ is at most $\lfloor \frac{n}{3}\rfloor\/$. Now, we show how to pack $\lfloor \frac{n}{3}\rfloor$ 1-PHCs into $K_n\/$. To ensure that the boundary edges of all cycles $\{C_1, C_2, ... , C_k\}\/$ are in consecutive order, let each 1-PHC have two boundary edges in consecutive order and one single boundary edge. We arrange the boundary edges with the property that a single boundary edge in $C\/$ is in consecutive order with two consecutive boundary edges in $C'\/$ and vice versa. This property is depicted in Figure \ref{con1}(b). For each $i=0,1,...,\lfloor \frac{n}{3}\rfloor-1$, let $C_i = v_{(m+2)i}\ v_{(m+2)i+1}\ v_{(m+2)i-1}\ v_{(m+2)i+3}\ v_{(m+2)i-3}\ ... \ v_{(m+2)i-(2m-1)}\ v_{(m+2)i}$. Here the operations on the subscripts are reduced modulo $n-1\/$. \qed

\begin{figure} \caption{\small 1-PHCs on point sets in convex position: (a) $n=12\/$ and (b) $n=13\/$.} \label{con1} \end{figure}

In the next section, we will require the following additional result.

\begin{support} \label{lem2} Let $P\/$ be a set of $n\/$ points in convex position in the plane where $n\geq 3\/$. Suppose $T\/$ is a 1-plane Hamiltonian path on $P\/$ with two pendent vertices $v_i\/$ and $v_j\/$. Then the following statements hold: (1) $T\/$ has at least one boundary edge when $|i-j|=1$, and each diagonal edge in $T\/$ has at least one boundary edge on each side. (2) $T\/$ has at least two boundary edges when $|i-j|>1$, and each diagonal edge in $T\/$ has at least one boundary edge on each side. \end{support}

\noindent {\bf Proof:} Let $T\/$ be a 1-plane Hamiltonian path on $P\/$ with two pendent vertices $v_i\/$ and $v_j\/$.
Assume that $|i-j|=1$, and let $C=T \cup \{v_iv_j\}\/$. Then $C\/$ is a 1-PHC since $v_iv_j\/$ is a boundary edge. By Lemma \ref{lem1}, $C\/$ has at least two boundary edges when $n\/$ is even and three boundary edges when $n\/$ is odd; note that $v_iv_j\/$ is a boundary edge in $C\/$ since $|i-j|=1$. Observe that by Proposition \ref{pro2}, each diagonal edge in $C\/$ has at least one boundary edge on each side. This proves (1). Assume that $|i-j|>1$. We use induction on $n\/$: if $n=4\/$, the statement is trivially true. Assume that $n\geq 5\/$ and that the lemma is true for $m < n\/$. In the case that all the edges in $T\/$ are boundary edges, the statement holds. Hence, assume that there is a diagonal edge $v_rv_s\in E(T)\/$. Let $P_1\/$ and $P_2\/$ be the two sets of points of $P$, one on each side of $v_rv_s\/$, both including $v_r\/$ and $v_s\/$. Let $T_1\/$ and $T_2\/$ be the edges of $T\/$ on $P_1\/$ and $P_2\/$, respectively. It is clear that $T_i\/$ is a 1-plane Hamiltonian path on $P_i\/$ and that $P_i\/$ is in convex position, $i=1,2\/$. By the induction hypothesis, $T_i\/$ has at least two boundary edges. Note that $v_rv_s\/$ is a boundary edge in $T_i\/$, but it is not a boundary edge in $T\/$. This proves (2).\qed \\

\section{Points in wheel configuration}

In this section, we turn to another special configuration. We say a set $P\/$ of $n\/$ points is in {\em regular wheel configuration\/} if $n-1\/$ of its points are regularly spaced on a circle $C(P)\/$ with one point $x\/$ at the center of $C(P)\/$. We call $x\/$ the {\em center} of $P\/$. Note that when $n\/$ is even, $|C(P)|\/$ is odd, and since the points of $C(P)\/$ are regularly spaced on a circle, a line passing through any two points in $C(P)\/$ does not contain $x\/$. On the other hand, when $n\/$ is odd, $|C(P)|\/$ is even, and by regularity of $C(P)\/$, $x\/$ lies on the line that passes through any two points $v_i\/$ and $v_j\/$ in $C(P)\/$ such that $|i-j|=\frac{n-1}{2}\/$. Hence, we assume $n\/$ is even.
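The parity observation above is easy to confirm numerically. The sketch below (ours; the tolerance is arbitrary) places $m = n-1$ points regularly on the unit circle and tests whether the line through some pair of them passes through the center $x$:

```python
import math

def center_on_some_chord(m, tol=1e-9):
    """True iff the line through some pair of the m regularly spaced
    circle points passes through the center (i.e. a diametral pair exists)."""
    pts = [(math.cos(2 * math.pi * k / m), math.sin(2 * math.pi * k / m))
           for k in range(m)]
    for i in range(m):
        for j in range(i + 1, m):
            (x1, y1), (x2, y2) = pts[i], pts[j]
            # distance from the origin to the line through the two points
            d = abs(x1 * y2 - x2 * y1) / math.hypot(x2 - x1, y2 - y1)
            if d < tol:
                return True
    return False
```

With $m=5$ (so $n=6$ is even) no chord passes through the center, while with $m=6$ (so $n=7$ is odd) the diametral pairs do.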
Observe that the vertices in $C(P)\/$ are the convex hull of $P\/$. An edge of the form $xv\/$ is called a {\em radial} edge, and every 1-PHC on $P\/$ contains exactly two radial edges.

\begin{support}\label{lem3} Let $P\/$ be a set of $n\/$ points in regular wheel configuration in the plane where $n\geq 4\/$ is even. Suppose $C\/$ is a 1-PHC on $P\/$. Then $C\/$ has at least two boundary edges, and any diagonal edge in $C\/$ has at least one boundary edge on each side. \end{support}

\noindent {\bf Proof:} The lemma is trivially true when $n=4\/$; hence assume $n\geq 6\/$. Suppose $x\/$ is the center of $P\/$ and $v_0v_1 \cdots v_{n-2}v_0\/$ is the cycle $C(P)\/$. Assume that $C\/$ is a 1-PHC where $xv_i\/$ and $xv_j\/$ are the two radial edges of $C\/$. It is clear that $C-\{v_ix,v_jx\}\/$ is a 1-plane Hamiltonian path on $C(P)\/$. By Lemma \ref{lem2}, $C-\{v_ix,v_jx\}\/$ has at least two boundary edges except in the case $j=i+1\/$, where it possibly has only one boundary edge. Furthermore, each diagonal edge in $C-\{v_ix,v_jx\}\/$ has at least one boundary edge on each side. Assume that $j=i+1\/$, and suppose that $C-\{v_ix,v_jx\}\/$ has only one boundary edge $e\/$. Let $C'=(C-\{v_ix,v_jx\})\cup \{v_iv_j\}\/$. Then $C'\/$ is a 1-PHC on $C(P)\/$ and has only two boundary edges $v_iv_j\/$ and $e\/$. By Proposition \ref{pro3}, $C'\/$ has the crossing edges $v_iv_{i+2}\/$ and $v_{i+1}v_{i-1}\/$; that is, $v_iv_{i+2}\/$ and $v_{i+1}v_{i-1}\/$ are edges in $C\/$. Then the radial edge $xv_i\/$ crosses $v_jv_{i-1}\/$ in $C\/$ and $xv_j\/$ crosses $v_iv_{j+1}\/$ in $C\/$ (since $n\geq 6\/$), which is a contradiction since $C\/$ is a 1-PHC. Thus $C\/$ has at least two boundary edges. \qed

\begin{propo}\label{pro5} Let $P\/$ be a set of $n\/$ points in regular wheel configuration in the plane where $n\geq 8\/$ is even. Suppose $C\/$ is a 1-PHC on $P\/$. If $C\/$ has only two boundary edges $v_kv_{k+1}$ for $k\in \{r,s\}\/$, then $C\/$ has the edges $v_kv_{k+2}\/$ and $v_{k+1}v_{k-1}\/$.
\end{propo} \noindent {\bf Proof:} It is not difficult to verify that the proposition is not true when $n=6\/$. Hence, assume $n\geq 8\/$. Suppose $x\/$ is the center of $P\/$ and $v_0v_1 \cdots v_{n-2}v_0\/$ is the cycle $C(P)\/$. Let $C\/$ be a 1-PHC on $P\/$ that contains only two boundary edges $v_kv_{k+1}\/$ for $k\in \{r,s\}\/$. It is clear that $v_kv_{k+1}\/$ for $k\in \{r,s\}\/$ are single boundary edges; otherwise, by Lemma \ref{lem3}, any diagonal edge in $C\/$ with $v_rv_{r+1}\/$ and $v_sv_{s+1}\/$ on one side has a third boundary edge on the other side, a contradiction. Before proceeding, we shall take note of the following observation. (O1) If $v_pv_q\/$ is any edge in $C\/$, then $r+1\leq p\leq s\/$ and $s+1\leq q\leq r\/$; otherwise, by Lemma \ref{lem3}, $C\/$ has at least three boundary edges, a contradiction. Assume on the contrary that at least one of the two edges $\{v_kv_{k+2}, v_{k+1}v_{k-1}\}\/$ is not in $C\/$ for some $k\in \{r,s\}\/$. Without loss of generality, assume that $v_rv_{r+2}\notin E(C)\/$. Suppose that $v_rv_i, v_{r+1}v_j\in E(C)\/$, where $i\neq j\/$. We consider the following two cases. {\em Case (1):} Suppose that $x\notin \{v_i,v_j\}\/$. By (O1), $r+3\leq i\leq s\/$ and $s+1\leq j\leq r-1\/$. Hence, $v_rv_i\/$ and $v_{r+1}v_j\/$ are crossing. Thus, there is a vertex $v_t\/$ where $r+2\leq t<i\/$ such that at least one of the two edges that match $v_t\/$ in $C\/$ crosses $v_rv_i\/$, a contradiction since $C\/$ is a 1-PHC. {\em Case (2):} Suppose that either $v_i= x\/$ or $v_j= x\/$. Without loss of generality, assume that $v_i= x\/$. Let $v_{r-1}v_l\/$ and $v_{r+1}v_j\/$ be two edges in $C\/$. By (O1), $s+1\leq j\leq r-1\/$ and $r+2\leq l\leq s\/$, where $v_sv_{s+1}\/$ is the second boundary edge.
By regularity of $C(P)\/$, $v_rx\/$ and $v_{r+1}v_j\/$ are crossing. Then $j=r-1\/$; otherwise, there is a vertex $v_t\/$ where $j+1\leq t<r$ such that at least one of the two edges that match $v_t\/$ in $C\/$ crosses $v_{r+1}v_j\/$ a second time, a contradiction since $C\/$ is a 1-PHC with only two boundary edges. Thus $v_{r+1}v_{r-1}\/$ is in $C\/$. Now, if $l>r+3\/$, then the edges of $C\/$ incident to $v_{r+2}\/$ and $v_{r+3}\/$ cross $v_{r-1}v_l\/$, a contradiction since $C\/$ is a 1-PHC. By regularity of $C(P)$ and $n\geq 8\/$, if $l=r+3\/$, then $v_{r-1}v_l\/$ crosses $v_rx\/$, which is a contradiction since $v_{r+1}v_{r-1} \in E(C)\/$ already crosses $v_rx\/$. This completes the proof. \qed

We now present the main result of this section.

\begin{result}\label{the2} Let $P\/$ be a set of $n\/$ points in regular wheel configuration in the plane where $n\geq 10\/$ is even. Then there exist $k\/$ edge-disjoint 1-PHCs $C_1, C_2, ... , C_k\/$ on $P\/$ that can be packed into $K_n\/$ where $k\leq \lfloor \frac{n-1}{3}\rfloor\/$. \end{result}

\noindent {\bf Proof:} Suppose $x\/$ is the center of $P\/$ and $v_0v_1 \cdots v_{n-2}v_0\/$ is the cycle $C(P)\/$. By Lemma \ref{lem3}, every 1-PHC on $P\/$ contains at least two boundary edges. On the other hand, $C(P)\/$ has $n-1\/$ boundary edges; that is, the number of 1-PHCs does not exceed $\frac{n-1}{2}\/$. By Proposition \ref{pro5}, if $C_i\/$ and $C_j\/$ are two edge-disjoint 1-PHCs each having only two boundary edges, then no boundary edge of $C_i\/$ can be in consecutive order with a boundary edge of $C_j\/$. Note that a single boundary edge in $C_i\/$ can be adjacent to two consecutive boundary edges in $C_j\/$ where $i\neq j\/$; that is, $C_i \cup C_j\/$ can have three boundary edges in consecutive order.
Therefore, the number of 1-PHCs that can be packed into $K_n\/$ is at most $\lfloor \frac{n-1}{3}\rfloor\/$, and this bound is tight. Now, we show how to pack $\lfloor\frac{n-1}{3}\rfloor$ 1-PHCs into $K_n\/$. To ensure that the boundary edges of all cycles $\{C_1, C_2, ... , C_k\}\/$ are in consecutive order, each 1-PHC should have three boundary edges (two of them in consecutive order) with the property that a single boundary edge in $C_i\/$ is adjacent to the two consecutive boundary edges in $C_j\/$ and vice versa. This property is depicted in Figure \ref{reg1}. For each $i=0,1,...,\lfloor \frac{n-1}{3}\rfloor-1$ and $r=\lfloor \frac{m+1}{2}\rfloor\/$, let $C_i = v_{(m+1)i} v_{(m+1)i+1} v_{(m+1)i-1} v_{(m+1)i+3} v_{(m+1)i-3} \dots v_{(m+1)i+r} \ x \ v_{(m+1)i+5} v_{(m+1)i-5}\/$ $ \dots v_{(m+1) i-(2m-3)} v_{(m+1)i}\/$. Here the operations on the subscripts are reduced modulo $2n-1\/$. \qed

\begin{figure} \caption{\small 1-PHCs on a set of points in regular wheel configuration, $n=14\/$.} \label{reg1} \end{figure}

\section{1-PHCs for point sets in general position}

In this section, we consider a set $P\/$ of $n\/$ points in general position in the plane, i.e., no three points are collinear. For $n =2^k +h\/$ where $0 \leq h < 2^k\/$, we will show that there are at least $k-1\/$ edge-disjoint 1-PHCs on $P\/$ (Theorem \ref{the3}). For this purpose, we present some ingredients that will be used to prove the main result of this section.

\subsection{Bisect lines for a set of points}

Let $P\/$ be a set of $n\/$ points in general position in the plane. A line $l\/$ is said to \textit{bisect} a set $P\/$ if both open half spaces defined by $l\/$ contain precisely $\frac{n}{2}\/$ points. It is no loss of generality to assume $n\/$ is odd since otherwise any point $v\/$ may be removed, and any line that bisects $P-\{v\}\/$ also bisects $P\/$. Let $P_1\/$ and $P_2\/$ be two point sets in the plane.
If $H_1\/$ and $H_2\/$ are two convex polygons containing $P_1\/$ and $P_2\/$ respectively, then we say that $H_1\/$ and $H_2\/$ are disjoint if there is a line that separates them. Moreover, if $P\/$ is a disjoint union of two point sets $P_1\/$ and $P_2\/$, the ham-sandwich cut theorem guarantees the existence of a line that simultaneously bisects $P_1\/$ and $P_2\/$ (see for example \cite{h:refer}, \cite{cmw:refer}).

\begin{support} \label{lem4} Let $P\/$ be a set of $m\/$ points in general position where $m \geq 3\/$. Suppose there is a line separating a given set $\{v, w\}\subset P\/$ from $P-\{v, w\}\/$. Then there is a line that separates some subset $P_1\/$ from $P-P_1\/$ with $\{v, w\} \subseteq P_1\/$ and $2 \leq |P_1| \leq m-1\/$. \end{support}

\noindent {\bf Proof:} The lemma is trivially true if $m=3\/$ with $P_1 = \{v, w\}\/$. So assume that $m \geq 4\/$. Let $v_1 \in P - \{v, w\}\/$ be such that all points in $P - \{v, w, v_1\}\/$ are on one side of the line $l_1\/$ joining $v_1\/$ and $z_1\/$ for some $z_1 \in \{v, w\}\/$, and let $P_1 = \{v, w, v_1\}\/$. Let $L_1\/$ be a line parallel to $l_1\/$ such that all points in $P - \{v, w, v_1\}\/$ are on one side of $L_1\/$. If $|P_1| = m-1\/$, then the proof is complete. Otherwise, repeat the argument with $v_2 \in P - \{v, w, v_1\}\/$ and $z_2 \in \{v, w, v_1\}\/$ so that all points in $P - \{v, w, v_1, v_2\}\/$ are on one side of the line $l_2\/$ joining $v_2\/$ and $z_2\/$, and let $P_1 = \{v, w, v_1, v_2\}\/$ with the line $L_2\/$ similarly defined. By repeating the argument where necessary, we reach the conclusion of the lemma. \qed

\begin{support} \label{lem5} Let $L\/$ be a line that bisects a set $P\/$ of $m\/$ points in general position into $P_1\/$ and $P_2\/$, where $m \geq 6\/$, and let $l^{\bot}\/$ be a line perpendicular to $L\/$ such that all points in $P\/$ are on one side of $l^{\bot}\/$.
Suppose $\{v, w\}\subset P_1\/$ is a given set such that no more than half of the points of $P_1\/$ are between $l^{\bot}\/$ and a line, perpendicular to $L\/$, passing through any point in $\{v, w\}\/$. Then there is a line that bisects $P_1\/$ and $P_2\/$ into $P_{i,j}\/$, $i=1,2\/$, $j=1,2\/$, with $\{v, w\} \subseteq P_{1,k}\/$ for some $k\in\{1,2\}\/$. \end{support}

\noindent {\bf Proof:} Note that the lemma fails when $m=5\/$ with $P_1 = \{v, w\}\/$; so assume that $m \geq 6\/$. By the ham-sandwich cut theorem, there is a line $l\/$ that bisects $P_1\/$ and $P_2\/$ in the plane into sets $P_{i,j}\/$, $i=1,2\/$, $j=1,2\/$. It is no loss of generality to assume that $P_{1,2}\/$ is between $l\/$ and $l^{\bot}\/$. Assume on the contrary that $\{v, w\} \nsubseteq P_{1,j}\/$ for each $j=1,2\/$. Then all points of $P_{1,2}\/$ are between $l^{\bot}\/$ and the line, perpendicular to $L\/$, passing through the point in $\{v, w\}\cap P_{1,1}\/$, a contradiction. Thus $\{v, w\} \subseteq P_{1,2}\/$. \qed

\subsection{Drawing a 1-PHC on a set of points}

We shall give a description of an algorithm for drawing a 1-PHC on a set of points in general position in the plane. In what follows, let $l(v_1, v_2)$ be the line passing through the two points $v_1$ and $v_2$.

\textbf{Algorithm $A\/$}
\begin{enumerate}
\item Find a line $l\/$ that bisects $P\/$ into $P_1\/$ and $P_2\/$ where either $|P_1| = |P_2|\/$ or $|P_1| = |P_2| +1\/$.
\item Find a line $l^{\bot}\/$ such that $l^{\bot}\/$ is perpendicular to $l\/$ and all points in $P\/$ are on one side of $l^{\bot}\/$.
\item Find $CH(P_i)\/$, the convex hull of $P_i\/$, for each $i=1, 2\/$ and select $v_i \in CH(P_i)\/$ such that all the points in $P_1 \cup P_2 -\{v_1, v_2\}\/$ are between $l^{\bot}\/$ and the line $l(v_1, v_2)$. Let $v_1v_2\/$ be an edge in $C\/$ and let $v^{*}_i=v_i$, for each $i=1,2\/$.
\item If $P_i = \{v^*_i\}\/$ for $i=1, 2$, let $v^{*}_1v^{*}_2\/$ be an edge in $C\/$ and Stop.
If $P_2=\{v^*_2\}\/$ and $P_1 = \{v^*_1, w\}\/$, let $v^{*}_1w\/$ and $v^{*}_2w\/$ be edges in $C\/$ and Stop. Otherwise, let $P_i=P_i-\{v^*_i\}$ for each $i=1, 2$.
\item Find $CH(P_i)\/$, for $i=1, 2\/$, and select $v_i \in CH(P_i)\/$, $i=1, 2\/$, such that all the points in $P_1 \cup P_2 -\{v_1, v_2\}\/$ are between $l^{\bot}\/$ and the line $l(v_1, v_2)$.
\item If no point of $\{v^{*}_1,v^{*}_2\}\/$ is between $l^{\bot}\/$ and the line $l(v_1, v_2)$, let $v^{*}_1v_2\/$ and $v^{*}_2v_1\/$ be edges in $C\/$ and repeat Step (4) with $v_i\/$ taking the place of $v^{*}_i$ for each $i=1,2\/$.
\item For some $i\in\{1,2\}\/$, if $v^{*}_{3-i}\/$ is not between $l^{\bot}\/$ and the line $l(v_i, v^{*}_i)$ and all points in $P_1 \cup P_2 -\{v^{*}_{3-i}\}\/$ are between $l^{\bot}\/$ and the line $l(v_i, v^{*}_i)$, let $v_iv^{*}_{3-i}\/$ be an edge in $C\/$.
\item If $\{v_iv^{*}_{3-i}, v_iv_{3-i}\}\subset E(C)\/$, let $P_i=P_i-\{v_i\}$ and repeat Step (4) with $v_{3-i}\/$ taking the place of $v^{*}_{3-i}$.
\end{enumerate}
The edge $v^{*}_1w\/$ in Step (4) is termed a ``stone'' and shall be denoted by $st(v, w)\/$. A 1-PHC obtained by Algorithm $A\/$ that contains a stone is depicted in Figure \ref{algorithm}.

\begin{figure} \caption{\small Drawing a 1-PHC on a set of $13\/$ points in general position by Algorithm $A\/$; the red edge at $v_{10}\/$ is a stone.} \label{algorithm} \end{figure}

\subsection{A joining between two 1-PHCs}

In this section, we show how to extract a 1-PHC by joining two edge-disjoint 1-PHCs. Let $P(1)\/$ and $P(2)\/$ be two disjoint point sets in general position in the plane. Suppose $C(1)\/$ and $C(2)\/$ are two edge-disjoint 1-PHCs on $P(1)\/$ and $P(2)\/$, respectively. The edges $u_1u_2\in E(C(1))\/$ and $v_1v_2\in E(C(2))\/$ are called \emph{joining edges} of $C(1)\/$ and $C(2)\/$ if the graph resulting from removing them and adding the edges $u_1v_1\/$ and $u_2v_2\/$ (or $u_1v_2\/$ and $u_2v_1\/$) is still a 1-PHC on $P(1)\cup P(2)\/$.
The edges $u_1v_1\/$ and $u_2v_2\/$ (or $u_1v_2\/$ and $u_2v_1\/$) are termed \emph{connection edges} of $C(1)\/$ and $C(2)\/$. Suppose that there are two crossing edges $u_1u_2\/$ and $u_3u_4\/$ in $C(1)\/$ such that the graph resulting from removing them and adding two non-crossing edges $u_1u_3\/$ and $u_2u_4\/$ is still a 1-PHC. Then the edges $u_1u_3\/$ and $u_2u_4\/$ are termed \emph{created} edges in $C(1)\/$. Two joining edges of $C(1)\/$ and $C(2)\/$ are called \emph{created joining} edges if at least one of them is a created edge.

\begin{support} \label{lem6} Let $P\/$ be a set of $n\/$ points in general position in the plane where $n\geq 8\/$, and let $C\/$ be a 1-PHC on $P\/$ which is bisected into two disjoint sets $P_1\/$ and $P_2\/$. Suppose $C(1)\/$ and $C(2)\/$ are two 1-PHCs on $P_1\/$ and $P_2\/$, respectively, where $C(1)\/$ and $C(2)\/$ have no joining edges. Then $C(1)\/$ and $C(2)\/$ can be joined by two created joining edges. \end{support}

\noindent {\bf Proof:} Let $C\/$ be a 1-PHC on $P\/$. In so doing, the set $P\/$ has been split into $P_1\/$ and $P_2\/$. Assume that $C(1)\/$ and $C(2)\/$ are two 1-PHCs on $P_1\/$ and $P_2\/$, respectively, and that $C(1)\/$ and $C(2)\/$ have no joining edges. {\em Case (1):} When $C(i)\/$ has at least one crossing for each $i=1, 2$ (note that $n\geq 8\/$, so $|C(1)|\geq 4\/$ and $|C(2)|\geq 4\/$). Let $\{u_1u_2,u_3u_4\}\/$ and $\{v_1v_2,v_3v_4\}\/$ be pairs of crossing edges in $C(1)\/$ and $C(2)\/$, respectively. By removing the crossing edges in $C(1)\/$ and $C(2)\/$ and adding the non-crossing edges $\{u_1u_4,u_2u_3\}\/$ and $\{v_1v_4,v_2v_3\}\/$ in $C(1)\/$ and $C(2)\/$, respectively, we obtain two created edges in each of $C(1)\/$ and $C(2)\/$. Choose $e_1\in\{u_1u_4,u_2u_3\}$ and $e_2\in\{v_1v_4,v_2v_3\}\/$ such that no point lies between them.
Without loss of generality, assume that no point lies between the two created edges $u_1u_4\/$ and $v_1v_4\/$. Note that at least one edge in the set $A=\{u_1v_1,u_1v_4,u_4v_1,u_4v_4\}\/$ is not in $C\/$, since $A\/$ forms a 4-cycle, $|V(C)|\geq8\/$, and $C\/$ is not a union of two cycles. That means: (1) If $C\/$ has exactly three edges of $A\/$, then there is an edge $u'v' \in A\cap E(C)\/$ such that $d_C(u')=d_C(v')=2\/$. Thus there are two joining edges, one in the original $C(1)\/$ incident to $u'\/$ and another in the original $C(2)\/$ incident to $v'\/$, a contradiction (since $C(1)\/$ and $C(2)\/$ have no joining edges). (2) If $C\/$ has exactly two edges $\{uv,u'v'\}\subset A\/$, then either (i) $uv\/$ and $u'v'\/$ share no vertex, and hence $u_1u_4\/$ and $v_1v_4\/$ are created joining edges since the edges of $A-\{uv,u'v'\}\/$ are connection edges; or (ii) $uv\/$ and $u'v'\/$ share a vertex. It is no loss of generality to assume that $u=u'=u_1\/$, that is, $\{u_1v_1,u_1v_4\}\subset A\/$. Then the created edge $u_1v_1\/$ and the edge incident to $u_1\/$, say $u_1u^*\/$ for some $u^*\notin A\/$, are created joining edges, since the edges of $A-\{u_1v_1,u_1v_4\}\/$ are not in $C\/$ and at most one of the two edges $u^*v_1\/$, $u^*v_4\/$ is not in $C\/$, because $(A-\{u_1v_1,u_1v_4\})\cup \{u^*v_1,u^*v_4\}\/$ forms a 4-cycle. (3) If $C\/$ has only one edge $u'v'\in A\/$, then $u_1u_4\/$ and $v_1v_4\/$ are created joining edges since there are two connection edges in $A-\{u'v'\}\/$. {\em Case (2):} When $C(i)\/$ has at most one crossing for some $i\in\{1, 2\}$. By removing the crossing edges and adding two created edges, we obtain two plane cycles $C(1)\/$ and $C(2)\/$. As in Case (1), assume that no point lies between the two edges $u_1u_2 \in E(C(1))\/$ and $v_1v_2 \in E(C(2))\/$. By repeating the argument of Case (1), we see that $C(1)\/$ and $C(2)\/$ have two created joining edges.
\qed

\subsection{Packing 1-PHCs into a point set}

We conclude this paper with the following main result.

\begin{result} \label{the3} Let $P\/$ be a set of $n\/$ points in general position in the plane where $n=2^k+h\/$, with $0\leq h <2^k\/$. Then there exist at least $k-1\/$ edge-disjoint 1-PHCs $C_1, C_2, ... , C_{k-1}\/$ on $P\/$ that can be packed into $K_n\/$. \end{result}

\noindent {\bf Proof:} First we apply Algorithm $A\/$ to obtain the first 1-PHC $C_1\/$. In so doing, the set $P\/$ has been bisected into $P_1\/$ and $P_2\/$ by $l_1$. Let $P_1$ be on the left of $l_1$ and $P_2$ on the right of $l_1$. If $P_1\/$ has no stone, then by the ham-sandwich cut theorem there is a line $l_2\/$ that simultaneously bisects $P_1\/$ and $P_2\/$ into $P_{i,j}\/$, $i=1,2\/$, $j=1,2\/$, which are labeled in anticlockwise order, and either $|P_{1, 1}| = |P_{1, 2}|\/$ or $|P_{1, 1}| = |P_{1, 2}|+1\/$. If $P_1\/$ has a stone $st(v, w)\/$, then we have two cases. {\em Case (1):} By Lemma \ref{lem5}, there is a line $l_2\/$ that simultaneously bisects $P_1\/$ and $P_2\/$ into $P_{i,j}\/$, $i=1,2\/$, $j=1,2\/$, with $\{v, w\} \subseteq P_{1,2}\/$ and either $|P_{1, 1}| = |P_{1, 2}|\/$ or $|P_{1, 1}| = |P_{1, 2}|+1\/$. {\em Case (2):} By Lemma \ref{lem4}, there is a line $l_2\/$ that bisects $P_1\/$ into $P_{1,1}\/$ and $P_{1,2}\/$ with $\{v, w\} \subseteq P_{1,2}\/$ and either $|P_{1, 1}| = |P_{1, 2}|\/$ or $|P_{1, 1}| = |P_{1, 2}|+1\/$. Furthermore, there is a line $l_2'\/$ that bisects $P_2\/$ into $P_{2,1}\/$ and $P_{2,2}\/$ with either $|P_{2, 1}| = |P_{2, 2}|\/$ or $|P_{2, 1}| = |P_{2, 2}|+1\/$. In all cases, label the parts $P_{i,j}\/$ in anticlockwise order. In both cases, $C(1)\/$ and $C(2)\/$ are two edge-disjoint cycles that can be joined using either joining edges or created joining edges (by Lemma \ref{lem6}). To obtain $C_3\/$, rename the $P_{i,j}$ to be four parts $P_1$, $P_2$, $P_3$, and $P_4$ arranged in anticlockwise order.
Then repeat the above operations with $P_i$ taking the place of $P$ for each $i=1, 2, 3, 4$ to obtain 1-PHCs $C(i)$. Join $C(i)$ with $C(i+1)$ for $i=1, 2, 3$ either by joining edges or by created joining edges. In general, to obtain $C_r\/$ where $1\leq r\leq k-1\/$, repeat the above operations on the parts $P_1, P_2, \ldots ,P_{2^{r-1}}$, arranged in anticlockwise order, with $P_i$ taking the place of $P$ for each $i=1, 2, \ldots , 2^{r-1}$ to obtain 1-PHCs $C(i)$. Join $C(i)$ with $C(i+1)$ for $i=1, 2, \ldots , 2^{r-1}-1$ either by joining edges or by created joining edges. \qed

Theorem \ref{the3} is depicted in Figure \ref{three}.

\begin{figure}\caption{\small An illustration of Theorem \ref{the3}.}\label{three}\end{figure}

\end{document}
\begin{document} \title{\Large\bf\boldmath Inverse coefficients problem for a magnetohydrodynamics system} \renewcommand{\thefootnote}{\fnsymbol{footnote}} \footnotetext{\hspace*{-5mm} \begin{tabular}{@{}r@{}p{13cm}@{}} $^\dag$ & Graduate School of Mathematical Sciences, the University of Tokyo, 3-8-1 Komaba, Meguro-ku, Tokyo 153-8914, Japan. E-mail: [email protected], \end{tabular}} \begin{abstract} \noindent In this article, we consider a magnetohydrodynamics system for incompressible flow in a three-dimensional bounded domain. Firstly, we give the stability results for our inverse coefficients problem. Secondly, we establish and prove two Carleman estimates, one for the direct problem and one for the inverse problem. Finally, we complete the proof of the stability results by means of these Carleman estimates. \end{abstract} \textbf{Keywords:} magnetohydrodynamics, Carleman estimates, inverse coefficients problem, stability inequality \section{Introduction} \label{sec-intro} Magnetohydrodynamics (MHD) is the study of the magnetic properties of electrically conducting fluids such as plasmas, liquid metals and salt water. The set of equations in three dimensions is obtained by combining the Navier-Stokes equations and Maxwell's equations: $$ \left\{ \begin{aligned} & \ \partial_t u + \mathrm{div}(\rho u\otimes u - P(u,p)) - \mu\, \mathrm{rot}\; H\times H = F, \\ & \ \partial_t H -\mathrm{rot}(u\times H) = -\mathrm{rot}\big(\frac{1}{\sigma\mu}\mathrm{rot}\; H\big), \\ & \ \mathrm{div}\, u = \mathrm{div}\, H = 0, \end{aligned} \right. $$ where the notations $\times$ and $\otimes$ denote the cross product and the outer product, defined as follows: for any vectors $A=(A_1,A_2,A_3)^T$ and $B=(B_1,B_2,B_3)^T$, $$ \begin{aligned} &A\times B:= (A_2B_3 - A_3B_2, A_3B_1 - A_1B_3, A_1B_2 - A_2B_1)^T, \\ &A\otimes B:= A\;B^T.
\end{aligned} $$ Here, $u=(u_1,u_2,u_3)^T$ and $H=(H_1,H_2,H_3)^T$ denote the velocity vector and the magnetic field intensity, respectively. $P(u,p)$ denotes the stress tensor, which is determined by the generalized Newton's law as $$ P(u,p) = -pI + 2\nu\mathcal{E}(u), $$ where $p$ denotes the pressure and $\mathcal{E}(u)$ is the strain tensor defined by $$ \mathcal{E}(u) := \frac{1}{2}(\nabla u + (\nabla u)^T). $$ The coefficient $\nu$ is related to the viscosity of the fluid. Furthermore, $\sigma$ and $\mu$ are the electrical conductivity and the magnetic permeability, respectively. For the derivation of the above equations, we refer to Li and Qin \cite{LQ13}. We do not consider the temperature distribution of the fluid and thus neglect the energy equation. There are a number of papers on MHD systems. \cite{DL13,FL15} studied regularity criteria for the incompressible MHD system in three dimensions. In \cite{DL13}, the authors established general sufficient conditions for global regularity of strong solutions to the incompressible three-dimensional MHD system, while \cite{FL15} gave a logarithmic criterion for a generalized MHD system. We should also mention the study of exact controllability for MHD. Hav\^{a}rneanu, Popa and Sritharan \cite{HPS06,HPS07} studied it with local internal controls in both two and three dimensions. In their papers, they established a Carleman estimate for the MHD system in order to solve their controllability problems. However, it is not sufficient for inverse problems, especially inverse source problems; we will clarify this statement later. In this article, our main tool is the Carleman estimate. It is an $L^2$-weighted estimate with large parameter(s) for a solution to a partial differential equation. The idea was first introduced by Carleman \cite{C39} for proving unique continuation for a two-dimensional elliptic equation.
From the 1980s, there has been great interest in the estimate itself as well as in its applications. For remarkable general treatments, we refer to \cite{E86,H85,I90,I98,T96,T81}. The Carleman estimate has since become one of the general techniques for studying unique continuation and stability for inverse problems, and many papers have considered different inverse problems for a variety of partial differential equations. We list some work on well-known equations of mathematical physics. For hyperbolic equations, Bellassoued and Yamamoto \cite{BY06} considered the inverse source problem for the wave equation and gave a stability inequality with observations on a certain sub-boundary. Gaitan and Ouzzane \cite{GO13} proved Lipschitz stability for the inverse problem of reconstructing an absorption coefficient for a transport equation, also with boundary measurements. For the heat (parabolic) equation, Yamamoto \cite{Y09} has given a comprehensive survey summarizing different types of Carleman estimates and methods for applications to inverse problems (see also the references therein). Moreover, Choulli, Imanuvilov, Puel and Yamamoto \cite{CIPY13} worked on the inverse source problem for the linearized Navier-Stokes equations with data in an arbitrary sub-domain. To the authors' best knowledge, there are few papers on Carleman estimates for the MHD system. Recall that in \cite{HPS06,HPS07}, the authors proved a Carleman estimate for the adjoint MHD system in order to prove exact controllability. However, their Carleman estimate requires the observation of the first spatial derivative of the external force $F$, which makes it difficult to consider inverse source problems in the general case, and thus it is not suitable for inverse coefficient problems. In this article, we intend to establish Carleman estimates for the above MHD system and then give stability inequalities for the principal coefficients.
By taking the difference of two states of the MHD system with different coefficients, it suffices to consider an inverse source problem for a linearized MHD system. The main difficulty lies in the first-order partial differential term in the source. We use the idea of \cite{Y09}, in which the author dealt with a similar problem for an equation of parabolic type by giving a Carleman estimate for a first-order partial differential operator. In this article, we modify that Carleman estimate for a first-order partial differential operator to the vector-valued case. Then, together with the Carleman estimate for the MHD system, we prove Lipschitz stability for the inverse coefficients problem and also a conditional stability of H\"{o}lder type under weaker assumptions. This article is organized as follows. In section 2, we introduce some notations and then give the concerned MHD system and precise statements of our inverse coefficients problem. In section 3, we establish Carleman inequalities both for the direct problem and for the inverse problem. For the direct problem, we need a Carleman estimate for the MHD system. On the other hand, we prove the inequality for the inverse problem in terms of a Carleman estimate for a first-order partial differential operator. In section 4, we complete the proof of the main results of section 2 by using the above Carleman inequalities. \section{Notations and stability results} \label{sec-main} Let $\Omega\subset\Bbb R^3$ be a bounded domain with smooth boundary. We set $Q:=\Omega\times(0,T)$ and $\Sigma :=\partial\Omega\times(0,T)$. In this article, we use the following notations. $\cdot^T$ denotes the transpose of matrices or vectors.
Let $\partial_t = \frac{\partial}{\partial t}$, $\partial_j = \frac{\partial}{\partial x_j}$, $j = 1,2,3$, $\Delta = \sum_{j=1}^3 \partial_j^2$, $\nabla = (\partial_1, \partial_2, \partial_3)^T$, $\nabla_{x,t} = (\nabla,\partial_t)^T$ and \begin{equation*} (w \cdot \nabla) v = \left(\sum_{j=1}^3 w_j\partial_j v_1, \sum_{j=1}^3 w_j\partial_j v_2, \sum_{j=1}^3 w_j\partial_j v_3 \right)^T \end{equation*} for $v = (v_1,v_2,v_3)^T$ and $w = (w_1,w_2,w_3)^T$. Henceforth let $n$ be the outward unit normal vector to $\partial\Omega$ and let $\partial_n u:=\frac{\partial u}{\partial n} = \nabla u \cdot n$. Moreover, let $\gamma = (\gamma_1,\gamma_2,\gamma_3) \in (\Bbb N \cup \{0\})^3$, $\partial_x^{\gamma} = \partial_1^{\gamma_1} \partial_2^{\gamma_2} \partial_3^{\gamma_3}$ and $\vert \gamma \vert = \gamma_1 + \gamma_2 +\gamma_3$. Furthermore, we introduce the following spaces: \begin{equation*} \left\{ \begin{aligned} &W^{k,\infty}(D) :=\{w; \; \partial_t^{\gamma} w,\partial_x^{\gamma} w \in L^\infty (D), |\gamma|\le k \}, \ k\in \Bbb{N}, \\ &H^{k,l}(D) :=\{w; \; \partial_t^{\gamma_0} w, \partial_x^{\gamma} w \in L^2(D), \; |\gamma_0|\le l, |\gamma|\le k \}, \ k,l\in \Bbb{N}\cup \{ 0 \}, \end{aligned} \right. \end{equation*} for any sub-domain $D\subset Q$. If there is no confusion, we also denote $(L^2(\Omega))^3$ simply by $L^2(\Omega)$, and likewise $(H^{k,l}(D))^3$ by $H^{k,l}(D)$, $k,l\in \Bbb{N}\cup \{ 0 \}$. In this article, we denote by $\kappa = \sigma^{-1}$ the resistance. For simplicity, we assume the magnetic permeability $\mu$ to be a constant (identically $1$).
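Under the normalization $\mu\equiv 1$, the Lorentz force term $\mathrm{rot}\; H\times H$ appearing in the system of Section \ref{sec-intro} can be rewritten by the standard vector identity
$$
\mathrm{rot}\; H\times H = (H\cdot\nabla) H - \frac{1}{2}\nabla |H|^2,
$$
which follows from the relation $\frac{1}{2}\nabla |H|^2 = (H\cdot\nabla)H + H\times \mathrm{rot}\; H$. This explains why the pressure below appears only in the combination $p - \frac{1}{2}|H|^2$.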
In fact, we consider the following MHD system: \begin{equation} \label{sy:MHD} \left\{ \begin{aligned} & \ \partial_t u - \mathrm{div}(2\nu\mathcal{E}(u)) + (u\cdot\nabla) u - (H\cdot\nabla) H + \nabla (p - \frac{1}{2} |H|^2)= 0, \\ & \ \partial_t H + \mathrm{rot}(\kappa\,\mathrm{rot}\; H) + (u\cdot\nabla) H - (H\cdot\nabla) u= 0, \\ & \ \mathrm{div}\; u = \mathrm{div}\; H = 0. \end{aligned} \right. \end{equation} Here, the viscosity $\nu=\nu(x)$ and the resistance $\kappa=\kappa(x)$ are time-independent coefficients which admit a positive lower bound. Now let $(u_i, p_i, H_i)$ $(i=1,2)$ be two sets of functions satisfying (\ref{sy:MHD}) corresponding to the coefficients $(\nu_i, \kappa_i)$ $(i=1,2)$. That is, \begin{equation} \label{sy:MHD2} \left\{ \begin{aligned} &\partial_t u_i - \mathrm{div}(2\nu_i\mathcal{E}(u_i)) + (u_i\cdot\nabla) u_i - (H_i\cdot\nabla) H_i + \nabla p_i - \nabla H_i^T\!\cdot\! H_i = 0 &\quad in \ Q, \\ &\partial_t H_i + \mathrm{rot}(\kappa_i\,\mathrm{rot}\; H_i) + (u_i\cdot\nabla) H_i - (H_i\cdot\nabla) u_i = 0 &\quad in \ Q, \\ & \mathrm{div}\; u_i = 0, \quad \mathrm{div}\; H_i = 0 &\quad in \ Q. \end{aligned} \right. \end{equation} The sets of functions $(u_i, p_i, H_i, \nu_i,\kappa_i)$ $(i=1,2)$ are supposed to be smooth enough (e.g. $W^{2,\infty}(Q)$). Then we choose a function $d\in C^2(\overline{\Omega})$ such that \begin{equation} \label{con:d} d>0 \ in \ \Omega, \quad |\nabla d| > 0 \ on \ \overline{\Omega}, \quad d=0 \ on \ \partial\Omega\setminus\Gamma \end{equation} for an arbitrary nonempty sub-boundary $\Gamma\subset\partial\Omega$. The existence of such a function was proved in \cite{Y09}.
In fact, we can choose a bounded domain $\Omega_1$ with sufficiently smooth boundary such that \begin{equation} \label{con:choice-Om1} \Omega \subsetneqq \Omega_1, \quad \overline{\Gamma} = \overline{\partial\Omega\cap\Omega_1}, \quad \partial\Omega\setminus\Gamma \subset \partial\Omega_1, \end{equation} so that $\Omega_1\setminus\overline{\Omega}$ contains a non-empty open subset. It is a well-known result (see Imanuvilov, Puel and Yamamoto \cite{IPY09}, Fursikov and Imanuvilov \cite{FI96}) that there exists a function $\eta\in C^2(\overline{\Omega})$ such that for any $\omega\subset\subset \Omega$, $$ \eta |_{\partial\Omega} = 0, \quad \eta >0 \ in \ \Omega, \quad |\nabla \eta|>0 \ on \ \overline{\Omega\setminus \omega}. $$ By choosing $\overline{\omega}\subset \Omega_1\setminus \overline{\Omega}$ and applying the above result in $\Omega_1$, we obtain our function $d$. Unless stated otherwise, we use the function $d$ as above throughout this article. Fix an observation time $t_0\in (0,T)$. Before giving our stability result, we further need the following two assumptions: (A1)$\quad \mathrm{det}\;\mathcal{E}(u_1(x,t_0)) \neq 0 \quad\quad for \ any\ x\in \overline{\Omega}$, (A2)$\quad |\nabla d(x)\times\mathrm{rot}H_1(x,t_0)| \neq 0 \quad\quad for \ any\ x\in \overline{\Omega}$. \noindent Now we are ready to state our main result. $\Gamma\subset \partial\Omega$ is an arbitrarily fixed relatively open sub-boundary.
\begin{thm} \label{thm:stability} Under the assumptions (A1)-(A2) and the conditions \begin{equation} \label{con:bdy} \nu_1(x) = \nu_2(x) \quad on\ \Gamma, \qquad \kappa_1(x) = \kappa_2(x),\ \nabla\kappa_1(x) = \nabla\kappa_2(x) \quad on\ \partial\Omega, \end{equation} there exists a constant $C>0$ such that $$ \|\nu_1-\nu_2\|_{H^1(\Omega)} + \|\kappa_1-\kappa_2\|_{H^1(\Omega)} \le C\mathcal{D} $$ for all $(u_i,p_i,H_i)\in H^{2,3}(Q)\times H^{1,2}(Q)\times H^{2,3}(Q)$ satisfying system (\ref{sy:MHD2}) for $i=1,2$. \end{thm} \noindent Here the measurement $\mathcal{D}$ denotes \begin{align*} &\mathcal{D}= \|(u_1-u_2)(\cdot,t_0)\|_{H^2(\Omega)} + \|(H_1-H_2)(\cdot,t_0)\|_{H^3(\Omega)} + \|\nabla (p_1-p_2)(\cdot,t_0)\|_{L^2(\Omega)} \\ &\hspace{0.8cm} + \|u_1-u_2\|_{H^{0,2}(\Sigma)} + \|\nabla_{x,t} (u_1-u_2)\|_{H^{0,2}(\Sigma)} + \|p_1-p_2\|_{H^{\frac{1}{2},2}(\Sigma)} \\ &\hspace{0.8cm} + \|H_1-H_2\|_{H^{0,2}(\Sigma)} + \|\nabla_{x,t} (H_1-H_2)\|_{H^{0,2}(\Sigma)} , \end{align*} where $H^{k,l}(\Sigma)\equiv H^{k}(0,T;H^{l}(\partial\Omega))$ ($k,l\in\Bbb{N}$). The assumptions (A1)-(A2) are strong because we need them to hold globally. Now consider the following weaker assumptions: (A1$^\prime$)$\quad \mathrm{det}\;\mathcal{E}(u_1(x,t_0)) \neq 0 \quad\quad for \ any\ x\in \overline{\Omega_{3\epsilon}}$, (A2$^\prime$)$\quad |\nabla d(x)\times\mathrm{rot}H_1(x,t_0)| \neq 0 \quad\quad for \ any\ x\in \overline{\Omega_{3\epsilon}}$, \noindent where $\Omega_\epsilon := \{x\in \Omega: d(x)>\epsilon\}$ for any $\epsilon>0$. Then we can derive a local stability result.
\begin{thm} \label{thm:stabilityn} Under the assumptions (A1$^\prime$)-(A2$^\prime$) and the conditions \begin{equation} \label{con:bdyn} \nu_1(x) = \nu_2(x) \quad on\ \Gamma, \qquad \kappa_1(x) = \kappa_2(x),\ \nabla\kappa_1(x) = \nabla\kappa_2(x) \quad on\ \Gamma, \end{equation} there exist constants $C>0$ and $\theta\in (0,1)$ such that \begin{align} \label{eq:loc-stab} \|\nu_1-\nu_2\|_{H^1(\Omega_{5\epsilon})} + \|\kappa_1-\kappa_2\|_{H^1(\Omega_{5\epsilon})} \le C(\mathcal{D} + M^{1-\theta}\mathcal{D}^{\theta}) \end{align} for all $(u_i,p_i,H_i)\in H^{2,3}(Q)\times H^{1,2}(Q)\times H^{2,3}(Q)$ satisfying system (\ref{sy:MHD2}) for $i=1,2$. \end{thm} \noindent Here the a priori bound $M$ and the measurement $\mathcal{D}$ denote \begin{align*} &M = \sum_{j=0}^2\Big(\|\partial_t^j u\|_{H^{1,1}(Q)} + \|\partial_t^j H\|_{H^{1,0}(Q)} + \|\partial_t^j p\|_{L^2(Q)}\Big) + \|\nu\|_{H^1(\Omega_{3\epsilon})} + \|\kappa\|_{H^1(\Omega_{3\epsilon})}, \\ &\mathcal{D} = \|(u_1-u_2)(\cdot,t_0)\|_{H^2(\Omega_{3\epsilon})} + \|(H_1-H_2)(\cdot,t_0)\|_{H^3(\Omega_{3\epsilon})} + \|\nabla (p_1-p_2)(\cdot,t_0)\|_{L^2(\Omega_{3\epsilon})} \\ &\hspace{0.8cm} + \|u_1-u_2\|_{H^{0,2}(\Gamma\times(0,T))} + \|\nabla_{x,t} (u_1-u_2)\|_{H^{0,2}(\Gamma\times(0,T))} + \|p_1-p_2\|_{H^{\frac{1}{2},2}(\Gamma\times(0,T))} \\ &\hspace{0.8cm} + \|H_1-H_2\|_{H^{0,2}(\Gamma\times(0,T))} + \|\nabla_{x,t} (H_1-H_2)\|_{H^{0,2}(\Gamma\times(0,T))} . \end{align*} In order to prove the stability results, we use the technique of Carleman estimates. In the next section, we establish two Carleman inequalities which are the key points of the proof. \section{Carleman estimates} \subsection{Carleman estimates with a singular weight function} First of all, let us fix the weight function. Throughout this article, we use a singular weight function.
Arbitrarily fix $t_0\in (0,T)$ and set $\delta := \min\{t_0, T-t_0 \}$. Let $l\in C^\infty [0,T]$ satisfy: \begin{equation} \label{con:choice-l} \left\{ \begin{aligned} &\ l(t) > 0, \qquad \qquad \quad 0 < t < T, \\ &\ l(t) = \left\{ \begin{aligned} &t, \qquad \qquad 0\le t\le \frac{\delta}{2}, \\ &T - t, \qquad T - \frac{\delta}{2}\le t\le T, \end{aligned} \right. \\ &\ l(t_0) > l(t), \quad \quad \qquad \forall t\in (0,T)\setminus \{ t_0\}. \end{aligned} \right. \end{equation} Then we can choose $e^{2s\alpha}$ as our weight function, where \begin{equation} \label{con:choice-alpha-phi} \varphi(x,t) = \frac{e^{\lambda d(x)}}{l(t)}, \quad \alpha(x,t) = \frac{e^{\lambda d(x)}-e^{2\lambda\|d\|_{C(\overline{\Omega})}}}{l(t)}. \end{equation} This is called a singular weight because $\alpha$ tends to $-\infty$ as $t$ goes to $0$ and $T$. Thus, the weight is close to $0$ near $t=0,T$. Now we establish the two key Carleman inequalities. The first one is for the direct problem. We consider the following linearized MHD system: \begin{equation} \label{sy:MHD4} \left\{ \begin{aligned} &\partial_t u - \nu\Delta u + (B^{(1)}\cdot\nabla) u + (u\cdot\nabla) B^{(2)} + \nabla(B^{(3)}\cdot u) + L_1(H) + \nabla p = F &\quad in \ Q, \\ &\partial_t H - \kappa\Delta H + (D^{(1)}\cdot\nabla) H + (H\cdot\nabla) D^{(2)} + D^{(3)}\times \mathrm{rot}\; H + L_2(u) = G &\quad in \ Q, \\ & \mathrm{div}\; u = 0, \quad \mathrm{div}\; H = 0 &\quad in \ Q. \end{aligned} \right.
\end{equation} Here $$ \begin{aligned} &L_1(H) = (C^{(1)}\cdot\nabla) H + (H\cdot\nabla) C^{(2)} + \nabla (C^{(3)}\cdot H), \\ &L_2(u) = (C^{(4)}\cdot\nabla) u + (u\cdot\nabla) C^{(5)}, \end{aligned} $$ $\nu,\kappa\in W^{1,\infty}(Q)$ admit a positive lower bound, and the coefficients $B^{(k)},C^{(k)},D^{(k)}$, $k\in \Bbb{N}$, are assumed to have enough regularity (e.g. $W^{2,\infty}(Q)$). For simplicity, we define $$ \begin{aligned} \|(u,p,H)\|_{\chi_s(Q)}^2 := \int_Q \bigg\{ &\frac{1}{s^2\varphi^2}\bigg( |\partial_t u|^2 + \sum_{i,j=1}^3 |\partial_i \partial_j u|^2\bigg) + |\nabla u|^2 + s^2\varphi^2|u|^2 + \frac{1}{s\varphi}|\nabla p|^2 + s\varphi|p|^2 \\ & + \frac{1}{s^2\varphi^2}\bigg( |\partial_t H|^2 + \sum_{i,j=1}^3 |\partial_i \partial_j H|^2\bigg) + |\nabla H|^2 + s^2\varphi^2|H|^2 \bigg\}e^{2s\alpha}dxdt. \end{aligned} $$ In the proof, we make the further assumption that \begin{equation} \label{con:wk-div-zero} \mathrm{div}\;\partial_t u = 0, \quad \mathrm{div}\;\Delta u = 0 \quad in \ Q. \end{equation} \noindent Condition (\ref{con:wk-div-zero}) should hold at least in the weak sense. In fact, if we have higher regularity of the source terms $F$ and $G$, then we have improved regularity of the solution $u$; in that case, (\ref{con:wk-div-zero}) follows automatically from the condition $\mathrm{div}\;u = 0 \ in \ Q$. Then the first Carleman estimate can be stated as: \begin{thm} \label{thm:CEDP} Let $d\in C^2(\overline{\Omega})$ satisfy (\ref{con:d}) and $F,G\in L^2(Q)$.
Then for fixed large $\lambda$, there exist constants $s_0>0$ and $C>0$ such that \begin{equation} \begin{aligned} \|&(u,p,H)\|_{\chi_s(Q)}^2 \le \;C\int_Q \big(|F|^2 + |G|^2\big) e^{2s\alpha}dxdt + Ce^{-s}\bigg(\|u\|_{L^2(\Sigma)}^2 + \|\nabla_{x,t} u\|_{L^2(\Sigma)}^2\\ &\hspace{4cm} + \|H\|_{L^2(\Sigma)}^2 + \|\nabla_{x,t} H\|_{L^2(\Sigma)}^2 + \|p\|_{L^2(0,T;H^{\frac{1}{2}}(\partial \Omega))}^2 \bigg) \end{aligned} \end{equation} for all $s\ge s_0$ and all $(u,p,H)\in H^{2,1}(Q)\times H^{1,0}(Q)\times H^{2,1}(Q)$ satisfying the system (\ref{sy:MHD4}). \end{thm} \noindent $\mathbf{Remarks.}$ (\romannumeral1) There is an ambiguity in $\|p\|_{L^2(\Omega)}$ because $p$ is determined only up to an additive constant. Therefore, in this article, we actually mean $\inf_{c\in\Bbb{R}}\|p+c\|_{L^2(\Omega)}$ when we write $\|p\|_{L^2(\Omega)}$. (\romannumeral2) In this article, $C$ usually denotes a generic positive constant which depends on $T,\Omega$ and the coefficients, but is independent of the large parameters $s$ and $\lambda$. However, $\lambda$ plays an important role in the proof of the Carleman estimate, and when the generic constant depends on $\lambda$, we use the notation $C(\lambda)$ to indicate the dependence. We prove Theorem \ref{thm:CEDP} by combining several Carleman estimates. Our key point is the estimate of the pressure $p$. Thanks to the $H^{-1}$-Carleman estimate for elliptic equations (see Imanuvilov and Puel \cite{IP03}), we are able to establish the Carleman estimate with boundary data by a simple extension. \begin{proof}[Proof of Theorem \ref{thm:CEDP}] We divide the proof into three steps. \noindent $\mathbf{First\ step.}$ We prove a Carleman estimate for the pressure $p$ with boundary data. We shall use the following lemma.
\begin{lem} \label{lem:Ce-H1-zero-bdy} Let $d\in C^2(\overline{\Omega})$ be chosen as in (\ref{con:d}) and let $y\in H^1(\Omega)$ satisfy $$ \left\{ \begin{aligned} &\ \Delta y + \sum_{j=1}^3 b_j(x) \partial_j y = f_0 + \sum_{j=1}^3\partial_j f_j &\quad in \ \Omega, \\ &\ y = 0 &\quad on \ \partial\Omega \end{aligned} \right. $$ with $f_0,f_j\in L^2(\Omega)$ and $b_j\in L^{\infty}(\Omega)$, $j=1,2,3$. Then there exist constants $\lambda_0 \ge 1$, $s_0 \ge 1$ and $C>0$ such that \begin{equation} \label{eq:Ce-ellip-zero-bdy} \begin{aligned} \int_\Omega \big( |\nabla y|^2 &+ s^2\lambda^2e^{2\lambda d}|y|^2\big) e^{2se^{\lambda d}}dx \\ \le \; &C\bigg (\int_\Omega \frac{1}{s\lambda^2}e^{-\lambda d}|f_0|^2e^{2se^{\lambda d}}dx + \sum_{j=1}^3 \int_\Omega se^{\lambda d}|f_j|^2e^{2se^{\lambda d}}dx \bigg) \end{aligned} \end{equation} for all $\lambda \ge \lambda_0$ and $s\ge s_0$. \end{lem} \begin{proof}[Proof of Lemma \ref{lem:Ce-H1-zero-bdy}] We use the same technique as in the choice of the function $d$, and apply an $H^{-1}$-Carleman estimate for elliptic equations. We take the zero extensions of $y,f_0,f_j$, $j=1,2,3$, to $\Omega_1$ and denote them by the same letters. Here $\Omega_1$ is chosen as in (\ref{con:choice-Om1}). Thus we have \begin{equation} \label{sy:prf-ellip-zero-bdy} \begin{aligned} \Delta y + \sum_{j=1}^3 b_j(x) \partial_j y = f_0 + \sum_{j=1}^3\partial_j f_j \quad in \ \Omega_1, \qquad y = 0 \quad on \ \partial\Omega_1. \end{aligned} \end{equation} Note that the function $d$ is chosen as in (\ref{con:d}).
We apply an $H^{-1}$-Carleman estimate (see Theorem A.1 of \cite{IP03}) to (\ref{sy:prf-ellip-zero-bdy}) to obtain $$ \begin{aligned} \int_{\Omega_1} \big( |\nabla y|^2 &+ s^2\lambda^2e^{2\lambda d}|y|^2\big) e^{2se^{\lambda d}}dx \\ \le \; &C\bigg (\int_{\Omega_1} \frac{1}{s\lambda^2}e^{-\lambda d}|f_0|^2e^{2se^{\lambda d}}dx + \sum_{j=1}^3 \int_{\Omega_1} se^{\lambda d}|f_j|^2e^{2se^{\lambda d}}dx \bigg) \end{aligned} $$ for all $\lambda\ge \lambda_0$ and $s\ge s_0$. In the $H^{-1}$-Carleman estimate there is a term given by an integral over an interior sub-domain $\omega$. However, we may remove this term in the above inequality because we have chosen $\omega\subset\subset \Omega_1$ such that $\overline{\omega}\subset\Omega_1\setminus \overline{\Omega}$ and $y$ vanishes outside of $\Omega$. Since $f_0,f_j$, $j=1,2,3$, are also zero outside of $\Omega$, (\ref{eq:Ce-ellip-zero-bdy}) is proved. \end{proof} We apply the operator $\mathrm{div}$ to the first equation in (\ref{sy:MHD4}). By condition (\ref{con:wk-div-zero}), $$ \Delta p = \mathrm{div} \big(F - L_1(H) - (B^{(1)}\cdot\nabla)u - (u\cdot\nabla)B^{(2)} - \nabla(B^{(3)}\cdot u)\big) $$ holds at least in the weak sense. By the Sobolev trace theorem, there exists $\widetilde{p}\in H^1(\Omega)$ such that $$ \widetilde{p} = p \quad on \ \partial\Omega $$ and \begin{equation} \label{eq:Stt1} \|\widetilde{p}\|_{H^1(\Omega)} \le C\|\widetilde{p}\|_{H^{\frac{1}{2}}(\partial\Omega)} = C\|p\|_{H^{\frac{1}{2}}(\partial\Omega)}. \end{equation} We then set $$ q = p - \widetilde{p} \quad in \ \Omega. $$ Thus we have \begin{equation} \label{sy:prf-ellip-ext-zero-bdy} \left\{ \begin{aligned} &\ \Delta q = \mathrm{div}(F - L_1(H) - (B^{(1)}\cdot\nabla)u - (u\cdot\nabla)B^{(2)} - \nabla(B^{(3)}\cdot u) - \nabla \widetilde{p}) &\quad in \ \Omega,\ \ \\ &\ q = 0 &\quad on \ \partial\Omega. \end{aligned} \right.
\end{equation} Applying Lemma \ref{lem:Ce-H1-zero-bdy} to (\ref{sy:prf-ellip-ext-zero-bdy}), we obtain $$ \begin{aligned} \int_\Omega &\big( |\nabla q|^2 + s^2\lambda^2e^{2\lambda d}|q|^2\big) e^{2se^{\lambda d}}dx \\ &\le \; C\int_\Omega se^{\lambda d}|F|^2e^{2se^{\lambda d}}dx + C\int_\Omega se^{\lambda d}(|\nabla u|^2 + |u|^2 + |\nabla H|^2 + |H|^2 + |\nabla \widetilde{p}|^2)e^{2se^{\lambda d}}dx \end{aligned} $$ for all $\lambda\ge \lambda_0$ and all $s\ge s_0$. Since $p = q + \widetilde{p}$, we have \begin{equation} \label{eq:prf-ce-1} \begin{aligned} \int_\Omega \big( |\nabla p|^2 &+ s^2\lambda^2e^{2\lambda d}|p|^2\big) e^{2se^{\lambda d}}dx \\ \le \; &2\int_\Omega \big( |\nabla q|^2 + s^2\lambda^2e^{2\lambda d}|q|^2\big) e^{2se^{\lambda d}}dx + 2\int_\Omega \big( |\nabla \widetilde{p}|^2 + s^2\lambda^2e^{2\lambda d}|\widetilde{p}|^2\big) e^{2se^{\lambda d}}dx \\ \le \; &C\int_\Omega se^{\lambda d}|F|^2e^{2se^{\lambda d}}dx + Cs^2\lambda^2e^{2\lambda \|d\|_{C(\overline{\Omega})}}e^{2se^{\lambda \|d\|_{C(\overline{\Omega})}}}\|p\|_{H^{\frac{1}{2}}(\partial\Omega)}^2 \\ &+ C\int_\Omega se^{\lambda d}(|\nabla u|^2 + |u|^2 + |\nabla H|^2 + |H|^2)e^{2se^{\lambda d}}dx \\ \end{aligned} \end{equation} for all $\lambda\ge \lambda_0$ and all $s\ge s_0$. We used (\ref{eq:Stt1}) in the last inequality. Recall the definition of the weight function in (\ref{con:choice-l})-(\ref{con:choice-alpha-phi}). Let $s\ge s_1\equiv s_0l(t_0)$. Then $sl^{-1}(t)\ge s_0$ for all $t\in(0,T)$.
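For clarity, note that replacing $s$ by $sl^{-1}(t)$ in the elliptic weight produces exactly the parabolic weight: by the definition (\ref{con:choice-alpha-phi}),
$$
\frac{s}{l(t)}\, e^{\lambda d(x)} = s\varphi(x,t), \qquad e^{2\frac{s}{l(t)}e^{\lambda d(x)}} = e^{2s\varphi(x,t)},
$$
and since $l(t)\le l(t_0)$ for all $t\in(0,T)$ by (\ref{con:choice-l}), the condition $s\ge s_1 = s_0\, l(t_0)$ indeed guarantees $sl^{-1}(t)\ge s_0$.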
Hence substituting $s$ by $sl^{-1}(t)$ in (\ref{eq:prf-ce-1}) yields $$ \begin{aligned} \int_\Omega \big( |\nabla p|^2 &+ s^2\lambda^2\varphi^2|p|^2\big) e^{2s\varphi}dx \le \; C\int_\Omega s\varphi|F|^2e^{2s\varphi}dx + Cs^2\lambda^2 l^{-2}e^{2\lambda}e^{2sl^{-1}e^{\lambda}}\|p\|_{H^{\frac{1}{2}}(\partial\Omega)}^2 \\ &+ C\int_\Omega s\varphi(|\nabla u|^2 + |u|^2 + |\nabla H|^2 + |H|^2)e^{2s\varphi}dx. \end{aligned} $$ Without loss of generality, we may assume $\|d\|_{C(\overline{\Omega})}=1$ here. Multiplying the above inequality by $s^{-1}l(t)e^{-2sl^{-1}(t)e^{2\lambda}}$ and integrating over $(0,T)$, we obtain \begin{equation} \label{eq:Ce-sin-p} \begin{aligned} &\int_Q \big( \frac{e^{\lambda d}}{s\varphi}|\nabla p|^2 + s\lambda^2\varphi e^{\lambda d}|p|^2\big) e^{2s\alpha}dxdt \le \; C\int_Q e^{\lambda d}|F|^2e^{2s\alpha}dxdt + C(\lambda)e^{-s}\|p\|_{L^2(0,T;H^{\frac{1}{2}}(\partial\Omega))}^2 \\ &\hspace{5cm} + C\int_Q e^{\lambda d}(|\nabla u|^2 + |u|^2 + |\nabla H|^2 + |H|^2)e^{2s\alpha}dxdt \end{aligned} \end{equation} for all $\lambda\ge \lambda_0$ and all $s\ge s_1$. \noindent $\mathbf{Second\ step.}$ We apply a Carleman estimate of parabolic type. We have the following lemma. \begin{lem} \label{lem:Ce-sin-para} Let $\varphi$ be chosen as in (\ref{con:choice-alpha-phi}) and let $y\in H^{2,1}(Q)$ satisfy $$ \begin{aligned} \quad \partial_t y - \nu(x,t)\Delta y + \sum_{j=1}^3 b_j(x,t)\partial_j y + c(x,t)y = f \quad in \; Q \end{aligned} $$ with $\nu,b_j,c\in W^{1,\infty}(Q)$, $\nu\ge c_0>0$ and $f\in L^2(Q)$, $j=1,2,3$.
Then there exist constants $\lambda_0>0$, $s_0>0$ and $C>0$ such that \begin{equation} \begin{aligned} \int_Q \bigg\{ \frac{e^{\lambda d}}{s^2\varphi^2}\bigg( |\partial_t y|^2 + \sum_{i,j=1}^3 &|\partial_i \partial_j y|^2 \bigg) + \lambda^2e^{\lambda d}|\nabla y|^2 + s^2\lambda^4\varphi^2e^{\lambda d} |y|^2\bigg\} e^{2s\alpha}dxdt \\ \le \; &C\int_Q \frac{e^{\lambda d}}{s\varphi}|f|^2e^{2s\alpha}dxdt + C(\lambda)e^{-s}\int_{\Sigma} (|y|^2 + |\nabla_{x,t} y|^2)dSdt \end{aligned} \end{equation} for all $\lambda\ge \lambda_0$ and all $s\ge s_0$. \end{lem} The proof is similar to that in Chae, Imanuvilov and Kim \cite{CIK96}; see also Imanuvilov \cite{I95}. We rewrite the first equation in (\ref{sy:MHD4}) as \begin{equation*} \partial_t u - \nu \Delta u + (B^{(1)}\cdot\nabla)u + (u\cdot\nabla)B^{(2)} + \nabla(B^{(3)}\cdot u)= F - \nabla p - L_1(H). \end{equation*} Applying Lemma \ref{lem:Ce-sin-para} to each component of the above equations, we obtain \begin{equation} \label{eq:Ce-sin-u} \begin{aligned} &\int_Q \bigg\{ \frac{e^{\lambda d}}{s^2\varphi^2}\bigg( |\partial_t u|^2 + \sum_{i,j=1}^3 |\partial_i \partial_j u|^2 \bigg) + \lambda^2e^{\lambda d}|\nabla u|^2 + s^2\lambda^4\varphi^2e^{\lambda d} |u|^2\bigg\} e^{2s\alpha}dxdt \le C\int_Q \frac{e^{\lambda d}}{s\varphi}|F|^2e^{2s\alpha}dxdt \\ &\hspace{1cm} + C\int_Q \frac{e^{\lambda d}}{s\varphi}(|\nabla p|^2 + |\nabla H|^2 + |H|^2)e^{2s\alpha}dxdt + C(\lambda)e^{-s}\big(\|u\|_{L^2(\Sigma)}^2 + \|\nabla_{x,t} u\|_{L^2(\Sigma)}^2\big) \end{aligned} \end{equation} for all $\lambda\ge \lambda_1$ and all $s\ge s_2$.
Next, we apply the Carleman estimate of parabolic type to the second equation of (\ref{sy:MHD4}) and obtain the following estimate: \begin{equation} \label{eq:Ce-sin-H} \begin{aligned} &\int_Q \bigg\{ \frac{e^{\lambda d}}{s^2\varphi^2}\bigg( |\partial_t H|^2 + \sum_{i,j=1}^3 |\partial_i \partial_j H|^2 \bigg) + \lambda^2e^{\lambda d} |\nabla H|^2 + s^2\lambda^4\varphi^2e^{\lambda d} |H|^2\bigg\} e^{2s\alpha}dxdt \le C\int_Q |G|^2e^{2s\alpha}dxdt \\ &\hspace{1cm} + C\int_Q \frac{e^{\lambda d}}{s\varphi}(|\nabla u|^2 + |u|^2)e^{2s\alpha}dxdt + C(\lambda)e^{-s}\big(\|H\|_{L^2(\Sigma)}^2 + \|\nabla_{x,t} H\|_{L^2(\Sigma)}^2\big) \end{aligned} \end{equation} for all $\lambda\ge \lambda_2$ and all $s\ge s_3$. Here we used $s^{-1}\varphi^{-1}e^{\lambda d}\le 1$ in $Q$ for any $s\ge s_1$. \noindent $\mathbf{Third\ step.}$ We combine the estimates for $p,u$ and $H$. Combining (\ref{eq:Ce-sin-p}), (\ref{eq:Ce-sin-u}) and (\ref{eq:Ce-sin-H}), we obtain $$ \begin{aligned} & \int_Q \bigg\{ \frac{e^{\lambda d}}{s^2\varphi^2}\bigg( |\partial_t u|^2 + \sum_{i,j=1}^3 |\partial_i \partial_j u|^2\bigg) + \lambda^2e^{\lambda d}|\nabla u|^2 + s^2\lambda^4\varphi^2e^{\lambda d} |u|^2 + \frac{e^{\lambda d}}{s\varphi}|\nabla p|^2 + s\lambda^2\varphi e^{\lambda d} |p|^2 \\ &\hspace{1cm} + \frac{e^{\lambda d}}{s^2\varphi^2}\bigg( |\partial_t H|^2 + \sum_{i,j=1}^3 |\partial_i \partial_j H|^2\bigg) + \lambda^2e^{\lambda d} |\nabla H|^2 + s^2\lambda^4\varphi^2e^{\lambda d} |H|^2 \bigg\}e^{2s\alpha}dxdt \\ & \le \;C\int_Q e^{\lambda d}( |F|^2 + |G|^2 ) e^{2s\alpha}dxdt + C\int_Q e^{\lambda d}\big( |\nabla u|^2 + |u|^2 + |\nabla H|^2 + |H|^2 \big) e^{2s\alpha}dxdt \\ &\hspace{1cm} + C(\lambda)e^{-s}\bigg(\|u\|_{L^2(\Sigma)}^2 + \|\nabla_{x,t} u\|_{L^2(\Sigma)}^2 + \|H\|_{L^2(\Sigma)}^2 + \|\nabla_{x,t} H\|_{L^2(\Sigma)}^2 + \|p\|_{L^2(0,T;H^{\frac{1}{2}}(\partial\Omega))}^2 \bigg) \end{aligned} $$ for
all $\lambda\ge \lambda_2$ and all $s\ge s_3$. Finally we can fix $\lambda$ large enough to absorb the second term on the right-hand side into the left-hand side. By the relations $e^{\lambda d}\ge 1$, $\lambda \ge 1$, we obtain $$ \begin{aligned} &\|(u,p,H)\|_{\chi_s(Q)}^2 \le \;C(\lambda)\int_Q \big(|F|^2 + |G|^2\big) e^{2s\alpha}dxdt \\ &\hspace{1cm} + C(\lambda)e^{-s}\bigg(\|u\|_{L^2(\Sigma)}^2 + \|\nabla_{x,t} u\|_{L^2(\Sigma)}^2 + \|H\|_{L^2(\Sigma)}^2 + \|\nabla_{x,t} H\|_{L^2(\Sigma)}^2 + \|p\|_{L^2(0,T;H^{\frac{1}{2}}(\partial\Omega))}^2 \bigg) \end{aligned} $$ for fixed $\lambda$ large enough and all $s \ge s_4\equiv \max\{s_1,s_2,s_3\}$. The proof of Theorem \ref{thm:CEDP} is completed. \end{proof} On the other hand, we investigate the following two first-order partial differential operators: ($\romannumeral1)\quad P f := \mathrm{div} (fA)=A \nabla f + f\,\mathrm{div}\, A, \quad f\in H^1(\Omega)$, ($\romannumeral2)\quad Q g := \mathrm{rot} (gb)=\nabla g\times b + g\,\mathrm{rot}\, b, \quad g\in H^1(\Omega)$, \noindent where $A=(A_{ij})_{i,j}$ is a $3\times 3$ matrix and $b=(b_1,b_2,b_3)^T$ is a vector satisfying $A\in W^{1,\infty}(\Omega)$, $b\in W^{2,\infty}(\Omega)$. Recall that the divergence of a matrix is defined as $[\mathrm{div}\, A]_k=\sum_{j=1}^3 \partial_j A_{kj}$. We have the following Carleman inequalities: \begin{thm} \label{thm:CEIP} Let $d$ be chosen as in (\ref{con:d}) and let $\varphi_0:=e^{\lambda d}$. Assume that $$ \mathrm{det}\, A(x) \neq 0 \ and \ |\nabla d(x) \times b(x)|\neq 0 \qquad for\ x\in \overline{\Omega}.
$$
Then there exist constants $\lambda_0\ge 1$, $s_0\ge 1$ and a generic constant $C>0$ such that
\begin{equation} \label{CEIP1} \int_{\Omega} (|\nabla f|^2 + s^2\lambda^2\varphi_0^2 |f|^2)e^{2s\varphi_0} dx \le C\int_{\Omega} |P f|^2 e^{2s\varphi_0}dx + C\int_{\Gamma} s\lambda\varphi_0 |f|^2e^{2s\varphi_0}d\sigma \end{equation}
and
\begin{equation} \label{CEIP2} \begin{aligned} &\int_{\Omega} (|\nabla g|^2 + s^2\lambda^2\varphi_0^2 |g|^2)e^{2s\varphi_0} dx \le C\int_{\Omega} \Big(\frac{1}{s^2\lambda^2\varphi_0^2}|\nabla (Qg)|^2 + |Qg|^2\Big) e^{2s\varphi_0}dx \\ &\hspace{5cm} + C\int_{\partial\Omega} \Big(\frac{1}{s\lambda\varphi_0}|\nabla g|^2 + s\lambda\varphi_0 |g|^2\Big)e^{2s\varphi_0}d\sigma \end{aligned} \end{equation}
for all $\lambda\ge \lambda_0$, $s\ge s_0$ and $f\in H^1(\Omega)$, $g\in H^2(\Omega)$.
\end{thm}

To prove these inequalities, we adapt the idea of Lemma 6.1 in \cite{Y09}.
\begin{proof} We first prove inequality (\ref{CEIP1}). Set $w=fe^{s\varphi_0}$. Then
$$ Pf=P(we^{-s\varphi_0})=e^{-s\varphi_0}(A\nabla w + w\,\mathrm{div}\, A - s\lambda\varphi_0 (A\nabla d) w). $$
We rewrite it in components, that is,
\begin{equation} \label{eq1} [Pf]_ke^{s\varphi_0}=\sum_{j=1}^3 \big( A_{kj}\partial_j w + \partial_j A_{kj} w - s\lambda\varphi_0 (A_{kj}\partial_j d)w\big). \end{equation}
Now choose $a=(a_1,a_2,a_3)^T\in L^\infty (\Omega)$ such that $\sum_{k=1}^3 a_k A_{kj} = \partial_j d$ for any $x\in \overline{\Omega}$. In fact, the existence of such $\{a_k\}_{k=1,2,3}$ follows from the assumption $\mathrm{det}\, A\neq 0$ on $\overline{\Omega}$.
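Indeed, the condition $\sum_{k=1}^3 a_k A_{kj} = \partial_j d$ reads $A^T a = \nabla d$, so $a$ can be written explicitly:
$$ a(x) = \big(A(x)^T\big)^{-1}\nabla d(x), \qquad x\in\overline{\Omega}. $$
Moreover $a\in L^\infty(\Omega)$: by Cramer's rule the entries of $(A^T)^{-1}$ are quotients of bounded cofactors by $\mathrm{det}\, A$, and since $A$ is continuous and $\mathrm{det}\, A\neq 0$ on the compact set $\overline{\Omega}$, $|\mathrm{det}\, A|$ is bounded away from zero there.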
We multiply equation (\ref{eq1}) by $a_k$ and take the summation over $k$:
$$ \sum_{k=1}^3 a_k[Pf]_k e^{s\varphi_0} = \nabla d\cdot \nabla w + \big(\sum_{j,k=1}^3 a_k \partial_j A_{kj}\big) w - s\lambda\varphi_0 |\nabla d|^2 w. $$
Then we estimate
$$ \begin{aligned} &\int_\Omega \Big|\sum_{k=1}^3 a_k[Pf]_k \Big|^2e^{2s\varphi_0}dx = \int_\Omega s^2\lambda^2\varphi_0^2|\nabla d|^4|w|^2 dx + \int_\Omega |\nabla d \cdot \nabla w + (a\cdot \mathrm{div}\, A) w|^2 dx \\ &\hspace{4.3cm} - 2\int_\Omega s\lambda\varphi_0 |\nabla d|^2(\nabla d\cdot \nabla w + (a\cdot\mathrm{div}\,A) w)w\, dx \\ &\hspace{3.9cm}\ge \int_\Omega s^2\lambda^2\varphi_0^2|\nabla d|^4|w|^2 dx - 2\int_\Omega s\lambda\varphi_0 |\nabla d|^2 (a\cdot \mathrm{div}\, A)|w|^2 dx \\ &\hspace{4.3cm} - \int_{\partial\Omega} s\lambda\varphi_0 |\nabla d|^2 \frac{\partial d}{\partial n}|w|^2 d\sigma + \int_\Omega s\lambda\, \mathrm{div} (\varphi_0|\nabla d|^2 \nabla d)|w|^2 dx \\ &\hspace{3.9cm}\ge \int_\Omega s^2\lambda^2\varphi_0^2|\nabla d|^4|w|^2 dx - \int_{\Gamma} s\lambda\varphi_0 |\nabla d|^2 \frac{\partial d}{\partial n}|w|^2 d\sigma \\ &\hspace{4.3cm} + \int_\Omega s\lambda\varphi_0 \big(\lambda|\nabla d|^4 + \nabla |\nabla d|^2 \cdot\nabla d + |\nabla d|^2 (\Delta d - 2(a\cdot\mathrm{div}\, A))\big)|w|^2 dx. \end{aligned} $$
In the last inequality, we used the relation (\ref{con:d}), which gives $\frac{\partial d}{\partial n}<0$ on $\partial\Omega\setminus \Gamma$. By choosing $\lambda$ large, we can absorb the third term on the right-hand side. Thus,
\begin{equation} \label{eq2} \int_\Omega s^2\lambda^2\varphi_0^2|f|^2e^{2s\varphi_0} dx \le C \int_\Omega |Pf|^2e^{2s\varphi_0}dx + C \int_\Gamma s\lambda\varphi_0|f|^2e^{2s\varphi_0} d\sigma \end{equation}
holds for all $\lambda\ge \lambda_1$ and $s\ge 1$.
Furthermore, for $l=1,2,3$, we can also choose $a^{(l)}=(a_1^{(l)},a_2^{(l)},a_3^{(l)})^T\in L^\infty (\Omega)$ such that $\sum_{k=1}^3 a_k^{(l)} A_{kj}=\delta_{lj}$. Multiplying (\ref{eq1}) by $a_k^{(l)}$ and taking the summation over $k$ yield
$$ \sum_{k=1}^3 a_k^{(l)}[Pf]_k e^{s\varphi_0} = \partial_l w + \big(\sum_{j,k=1}^3 a_k^{(l)} \partial_j A_{kj}\big) w - s\lambda\varphi_0 (\partial_l d) w. $$
Again we estimate
$$ \begin{aligned} &\int_\Omega \Big|\sum_{k=1}^3 a_k^{(l)}[Pf]_k \Big|^2e^{2s\varphi_0}dx = \int_\Omega |\partial_l w|^2 dx + \int_\Omega |(a^{(l)}\cdot \mathrm{div}\, A) - s\lambda\varphi_0(\partial_l d)|^2 |w|^2 dx \\ &\hspace{4.3cm} + 2\int_\Omega \big((a^{(l)}\cdot \mathrm{div}\, A) - s\lambda\varphi_0(\partial_l d)\big)w(\partial_l w) dx \\ &\hspace{3.9cm}\ge \int_\Omega |\partial_l w|^2 dx + 2\int_\Omega (a^{(l)}\cdot \mathrm{div}\, A)w(\partial_l w) dx \\ &\hspace{4.3cm} - \int_{\partial\Omega} s\lambda\varphi_0 (\partial_l d)n_l |w|^2 d\sigma + \int_\Omega s\lambda\varphi_0(\lambda|\partial_l d|^2 + \partial_l^2 d)|w|^2 dx.
\end{aligned} $$
Rewriting the above inequality and taking the summation over $l$ on both sides, we get
$$ \begin{aligned} &\int_\Omega |\nabla w|^2 dx \le \int_\Omega \sum_{l=1}^3\Big|\sum_{k=1}^3 a_k^{(l)}[Pf]_k \Big|^2e^{2s\varphi_0}dx + \int_{\partial\Omega} s\lambda\varphi_0 \frac{\partial d}{\partial n} |w|^2 d\sigma \\ &\hspace{2.5cm} - 2\sum_{l=1}^3\int_\Omega (a^{(l)}\cdot \mathrm{div}\, A)w(\partial_l w) dx - \int_\Omega s\lambda\varphi_0(\lambda|\nabla d|^2 + \Delta d)|w|^2 dx \\ &\hspace{1.8cm} \le C\int_\Omega |Pf|^2e^{2s\varphi_0}dx + \int_\Gamma s\lambda\varphi_0\frac{\partial d}{\partial n} |w|^2 d\sigma \\ &\hspace{2.5cm} + \frac{1}{2}\int_\Omega |\nabla w|^2 dx + 2\int_\Omega \sum_{l=1}^3 |a^{(l)}\cdot\mathrm{div}\, A|^2 |w|^2 dx + \int_\Omega s\lambda\varphi_0|w|^2 dx. \end{aligned} $$
This leads to
$$ \int_\Omega |\nabla w|^2 dx \le C\int_\Omega |Pf|^2e^{2s\varphi_0}dx + C\int_\Gamma s\lambda\varphi_0 |w|^2 d\sigma + C\int_\Omega s\lambda\varphi_0|w|^2 dx. $$
Combining this with (\ref{eq2}) and taking $\lambda$ large enough to absorb the last term on the right-hand side, we finally obtain
$$ \int_{\Omega} (|\nabla f|^2 + s^2\lambda^2\varphi_0^2 |f|^2)e^{2s\varphi_0} dx \le C\int_{\Omega} |P f|^2 e^{2s\varphi_0}dx + C\int_{\Gamma} s\lambda\varphi_0 |f|^2e^{2s\varphi_0}d\sigma $$
for all $\lambda\ge \lambda_2$ and $s\ge 1$.

Next we consider the operator $Q$. Set $v=ge^{s\varphi_0}$. Then
$$ Qg=Q(ve^{-s\varphi_0})=e^{-s\varphi_0}(\nabla v\times b + (\mathrm{rot}\, b)v - s\lambda\varphi_0(\nabla d\times b)v). $$
We cannot proceed in the same way as for the operator $P$. In fact, if we denote
$$ B= \begin{pmatrix} 0 & b_3 & -b_2 \\ -b_3 & 0 & b_1 \\ b_2 & -b_1 & 0 \\ \end{pmatrix}, $$
then we can rewrite the above formula as
$$ Qg e^{s\varphi_0} = B\nabla v + (\mathrm{rot}\, b)v - s\lambda\varphi_0(B\nabla d)v.
$$
However, $\mathrm{det}\, B = b_1b_2b_3 + (-b_1b_2b_3)=0$; indeed, any real skew-symmetric matrix of odd order is singular. Thus, we calculate directly:
$$ \begin{aligned} &\int_\Omega |Qg|^2e^{2s\varphi_0} dx = \int_\Omega s^2\lambda^2\varphi_0^2 |B\nabla d|^2 |v|^2 dx + \int_\Omega |B\nabla v + (\mathrm{rot}\, b)v|^2 dx \\ &\hspace{2.7cm} -2\int_\Omega s\lambda\varphi_0 (B\nabla d)\cdot (B\nabla v + (\mathrm{rot}\, b)v)v\, dx \\ &\hspace{2.5cm} \ge \int_\Omega s^2\lambda^2\varphi_0^2 |B\nabla d|^2 |v|^2 dx -2\int_\Omega s\lambda\varphi_0(B\nabla d)\cdot (\mathrm{rot}\, b)|v|^2 dx \\ &\hspace{2.7cm} -\int_{\partial\Omega} s\lambda\varphi_0 (B\nabla d)\cdot (Bn)|v|^2 d\sigma + \int_\Omega s\lambda\varphi_0 \big(\lambda |B\nabla d|^2 + \mathrm{div}(B^T(B\nabla d))\big) |v|^2 dx. \end{aligned} $$
By noting the assumption that $|B\nabla d|=|\nabla d\times b| \neq 0$ in $\overline{\Omega}$, we can take $\lambda$ large to absorb the second and fourth terms on the right-hand side:
\begin{equation} \label{eq3} \int_\Omega s^2\lambda^2\varphi_0^2 |g|^2e^{2s\varphi_0} dx \le C\int_\Omega |Qg|^2e^{2s\varphi_0} dx + C\int_{\partial\Omega}s\lambda\varphi_0 |g|^2e^{2s\varphi_0} d\sigma \end{equation}
for all $\lambda\ge \lambda_3$ and $s\ge 1$.

We take the $k$-th derivative of (\romannumeral2) and denote $g_k=\partial_k g$. Define
$$ Q_k g_k:= \partial_k(Qg) - \nabla g \times \partial_k b - g\,\mathrm{rot}(\partial_k b)= \nabla g_k\times b + g_k(\mathrm{rot}\,b).
$$
By applying the argument above in a similar way to the operator $Q_k$, we have
$$ \begin{aligned} &\int_\Omega |g_k|^2e^{2s\varphi_0} dx \le C\int_\Omega \frac{1}{s^2\lambda^2\varphi_0^2}|Q_k g_k|^2e^{2s\varphi_0} dx + C\int_{\partial\Omega} \frac{1}{s\lambda\varphi_0} |g_k|^2e^{2s\varphi_0} d\sigma \\ &\hspace{0cm} \le C\int_\Omega \frac{1}{s^2\lambda^2\varphi_0^2}|\partial_k (Qg)|^2e^{2s\varphi_0} dx + C\int_\Omega \frac{1}{s^2\lambda^2\varphi_0^2}(|\nabla g|^2 +|g|^2)e^{2s\varphi_0}dx + C\int_{\partial\Omega}\frac{1}{s\lambda\varphi_0} |g_k|^2e^{2s\varphi_0} d\sigma \end{aligned} $$
for all $\lambda \ge \lambda_4$, $s\ge 1$ and $k=1,2,3$. Summing up the estimates over $k$ and absorbing again the lower-order terms by taking $\lambda$ large, we get
\begin{equation} \label{eq4} \begin{aligned} &\int_\Omega |\nabla g|^2e^{2s\varphi_0} dx \le C\int_\Omega \frac{1}{s^2\lambda^2\varphi_0^2}|\nabla (Qg)|^2e^{2s\varphi_0} dx + C\int_\Omega \frac{1}{s^2\lambda^2\varphi_0^2}|g|^2e^{2s\varphi_0}dx \\ &\hspace{2.8cm}+ C\int_{\partial\Omega} \frac{1}{s\lambda\varphi_0} |\nabla g|^2e^{2s\varphi_0} d\sigma \end{aligned} \end{equation}
for all $\lambda \ge \lambda_5$ and all $s\ge 1$. Combining (\ref{eq3}) and (\ref{eq4}), we have proved (\ref{CEIP2}) and hence Theorem \ref{thm:CEIP}, with $\lambda_0=\max\{\lambda_i:\ 1\le i\le 5\}$ and $s_0=1$.
\end{proof}

In \eqref{CEIP1} and \eqref{CEIP2}, we let $s_1 = s_0 l(t_0) = l(t_0)$. Then for all $s\ge s_1$, $sl^{-1}(t_0)\ge s_1l^{-1}(t_0) = s_0$.
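Let us record why this rescaling of $s$ is legitimate: a Carleman estimate valid for all $s\ge s_0$ may be evaluated at any admissible value of the large parameter. Assuming, as is standard for the singular weight used in this part (and consistent with the multiplication by $\exp\{-2se^{2\lambda\|d\|}/l(t_0)\}$ performed afterwards), that
$$ \varphi(x,t)=\frac{e^{\lambda d(x)}}{l(t)}, \qquad \alpha(x,t)=\frac{e^{\lambda d(x)}-e^{2\lambda\|d\|_{C(\overline{\Omega})}}}{l(t)}, $$
replacing $s$ by $sl^{-1}(t_0)$ converts each factor $s\varphi_0=se^{\lambda d}$ into $s\varphi(x,t_0)$, and the subsequent multiplication converts the weight $e^{2s\varphi(x,t_0)}$ into $e^{2s\alpha(x,t_0)}$.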
Substituting $s$ by $sl^{-1}(t_0)$ yields
\begin{align*} \int_{\Omega} (|\nabla f|^2 + s^2\lambda^2\varphi^2(x,t_0) |f|^2)e^{2s\varphi(x,t_0)} dx \le C\int_{\Omega} |P f|^2 e^{2s\varphi(x,t_0)}dx + C\int_{\Gamma} s\lambda\varphi(x,t_0) |f|^2e^{2s\varphi(x,t_0)}d\sigma \end{align*}
and
\begin{align*} &\int_{\Omega} (|\nabla g|^2 + s^2\lambda^2\varphi^2(x,t_0) |g|^2)e^{2s\varphi(x,t_0)} dx \le C\int_{\Omega} \Big(\frac{1}{s^2\lambda^2\varphi^2(x,t_0)}|\nabla (Qg)|^2 + |Qg|^2\Big) e^{2s\varphi(x,t_0)}dx \\ &\hspace{5cm} + C\int_{\partial\Omega} \Big(\frac{1}{s\lambda\varphi(x,t_0)}|\nabla g|^2 + s\lambda\varphi(x,t_0) |g|^2\Big)e^{2s\varphi(x,t_0)}d\sigma \end{align*}
for all $\lambda\ge \lambda_0$ and all $s\ge s_1$. Multiplying both inequalities by $\exp\big\{-2s\frac{e^{2\lambda\|d\|}}{l(t_0)}\big\}$, we derive
\begin{thm} \label{thm:CEIP-sin} Under the assumptions that
$$ \mathrm{det}\, A(x) \neq 0 \ \text{ and }\ |\nabla d(x) \times b(x)|\neq 0 \qquad \text{for } x\in \overline{\Omega}, $$
there exist constants $\lambda_0\ge 1$, $s_0\ge 1$ and a generic constant $C>0$ such that
\begin{align*} \int_{\Omega} (|\nabla f|^2 + s^2\lambda^2\varphi^2(x,t_0) |f|^2)e^{2s\alpha (x,t_0)} dx \le C\int_{\Omega} |P f|^2 e^{2s\alpha (x,t_0)}dx + C\int_{\Gamma} s\lambda\varphi(x,t_0) |f|^2e^{2s\alpha (x,t_0)}d\sigma \end{align*}
and
\begin{align*} &\int_{\Omega} (|\nabla g|^2 + s^2\lambda^2\varphi^2(x,t_0) |g|^2)e^{2s\alpha (x,t_0)} dx \le C\int_{\Omega} \Big(\frac{1}{s^2\lambda^2\varphi^2(x,t_0)}|\nabla (Qg)|^2 + |Qg|^2\Big) e^{2s\alpha (x,t_0)}dx \\ &\hspace{5cm} + C\int_{\partial\Omega} \Big(\frac{1}{s\lambda\varphi(x,t_0)}|\nabla g|^2 + s\lambda\varphi(x,t_0) |g|^2\Big)e^{2s\alpha (x,t_0)}d\sigma \end{align*}
for all $\lambda\ge \lambda_0$, $s\ge s_0$ and $f\in H^1(\Omega)$, $g\in H^2(\Omega)$.
\end{thm}

\subsection{Carleman estimates with a regular weight function}
Throughout this part, we use a regular weight function. Arbitrarily fix $t_0\in (0,T)$ and set $\delta := \min\{t_0, T-t_0 \}$. Then we select our weight function as
\begin{equation} \label{con:choice-phi-psi} \varphi(x,t) = e^{\lambda \psi(x,t)}, \quad \psi(x,t) = d(x) - \beta(t-t_0)^2 + c_0, \end{equation}
where $d$ is the same choice as in \eqref{con:d}, the parameter $\beta>0$ is to be fixed later, and $c_0:=\max\{\beta t_0^2, \beta (T-t_0)^2\}$, so that $\psi$ is always nonnegative in $Q$. Similarly to the last subsection, we intend to establish two key Carleman inequalities with this regular weight: one for the direct problem and the other for the inverse problem.

First, we consider the following linearized MHD system:
\begin{equation} \label{sy:MHD4n} \left\{ \begin{aligned} &\partial_t u - \nu\Delta u + (B^{(1)}\cdot\nabla) u + (u\cdot\nabla) B^{(2)} + \nabla(B^{(3)}\cdot u) + L_1(H) + \nabla p = F &\quad \text{in } Q, \\ &\partial_t H - \kappa\Delta H + (D^{(1)}\cdot\nabla) H + (H\cdot\nabla) D^{(2)} + D^{(3)}\times \mathrm{rot}\, H + L_2(u) = G &\quad \text{in } Q, \\ & \mathrm{div}\, u = h &\quad \text{in } Q, \end{aligned} \right. \end{equation}
which is exactly system \eqref{sy:MHD4}. For simplicity, we define
\begin{align*} \|(u,p,H)\|_{\sigma_s(Q)}^2 := \int_Q \bigg\{ &\frac{1}{s\varphi}\bigg( |\partial_t u|^2 + \sum_{i,j=1}^3 |\partial_i \partial_j u|^2\bigg) + s\varphi|\nabla u|^2 + s^3\varphi^3|u|^2 + |\nabla p|^2 + s^2\varphi^2|p|^2 \\ & + \frac{1}{s\varphi}\bigg( |\partial_t H|^2 + \sum_{i,j=1}^3 |\partial_i \partial_j H|^2\bigg) + s\varphi|\nabla H|^2 + s^3\varphi^3|H|^2 \bigg\}e^{2s\varphi} dxdt. \end{align*}
Then we have the first Carleman estimate:
\begin{thm} \label{thm:CEDPn} Let $d\in C^2(\overline{\Omega})$ satisfy \eqref{con:d} and $F,G\in L^2(Q)$.
Then for large fixed $\lambda$, there exist constants $s_0>0$ and $C>0$ such that
\begin{align*} \|&(u,p,H)\|_{\sigma_s(Q)}^2 \le \;C\int_Q s\varphi\big(|F|^2 + |G|^2\big) e^{2s\varphi}dxdt + C\int_Q s\varphi|\nabla_{x,t} h|^2e^{2s\varphi}dxdt \\ &\hspace{1cm} + Ce^{Cs}\Big(\|u\|_{L^2(\Sigma)}^2 + \|\nabla_{x,t} u\|_{L^2(\Sigma)}^2 + \|H\|_{L^2(\Sigma)}^2 + \|\nabla_{x,t} H\|_{L^2(\Sigma)}^2 + \|p\|_{L^2(0,T;H^{\frac{1}{2}}(\partial \Omega))}^2 \Big) \end{align*}
for all $s\ge s_0$ and all $(u,p,H)$ smooth enough and satisfying the system (\ref{sy:MHD4n}) with the conditions
\begin{align} \label{con:0,T} u(\cdot,0) = u(\cdot,T) = H(\cdot,0) = H(\cdot,T) = 0. \end{align}
\end{thm}

\noindent $\mathbf{Remarks.}$ (\romannumeral1) There is an ambiguity in $\|p\|_{L^2(\Omega)}$ because $p$ is determined only up to an additive constant. Therefore, in this article, we actually mean $\inf_{c\in\Bbb{R}}\|p+c\|_{L^2(\Omega)}$ while we simply write $\|p\|_{L^2(\Omega)}$.

(\romannumeral2) In this article, $C$ usually denotes a generic positive constant which depends on $T$, $\Omega$ and the coefficients, but is independent of the large parameters $s$ and $\lambda$. However, $\lambda$ plays an important role in the proof of the Carleman estimate, and when the generic constant depends on $\lambda$, we use the notation $C(\lambda)$ to indicate this dependence.

We prove Theorem \ref{thm:CEDPn} by combining several Carleman estimates. Our key point is the estimate of the pressure $p$. Thanks to the $H^{-1}$-Carleman estimate for elliptic equations (see Imanuvilov and Puel \cite{IP03}), we are able to establish the Carleman estimate with boundary data by a simple extension.

\begin{proof}[Proof of Theorem \ref{thm:CEDPn}] We divide the proof into three steps.

\noindent $\mathbf{First\ step.}$ We prove a Carleman estimate for the pressure $p$ with boundary data.
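Before carrying out the computation, we note the elementary identities behind this step: since $\mathrm{div}$ commutes with $\partial_t$ and with $\Delta$,
$$ \mathrm{div}(\partial_t u) = \partial_t(\mathrm{div}\, u) = \partial_t h, \qquad \mathrm{div}(\Delta u) = \Delta(\mathrm{div}\, u) = \Delta h, $$
so taking the divergence of the momentum equation turns it into an elliptic equation for $p$, while the variable viscosity $\nu=\nu(x,t)$ produces additional commutator terms involving $\partial_i\nu$ and $\partial_i\partial_j\nu$.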
We apply the operator $\mathrm{div}$ to the first equation in (\ref{sy:MHD4n}). A formal calculation leads to
\begin{align*} &\Delta p = \mathrm{div} \big(F + \nu \nabla h - L_1(H) - (B^{(1)}\cdot\nabla)u - (u\cdot\nabla)B^{(2)} - \nabla(B^{(3)}\cdot u)\big) - \partial_t h \\ &\hspace{2cm} + \sum_{i,j=1}^3\partial_j ((\partial_i\nu) \partial_j u^i) - \sum_{i,j=1}^3 (\partial_i\partial_j\nu) \partial_j u^i. \end{align*}
By the Sobolev trace theorem, there exists $\widetilde{p}\in H^1(\Omega)$ such that
$$ \widetilde{p} = p \quad \text{on } \partial\Omega $$
and
\begin{equation} \label{eq:Stt1n} \|\widetilde{p}\|_{H^1(\Omega)} \le C\|\widetilde{p}\|_{H^{\frac{1}{2}}(\partial\Omega)} = C\|p\|_{H^{\frac{1}{2}}(\partial\Omega)}. \end{equation}
We then set
$$ q = p - \widetilde{p} \quad \text{in } \Omega. $$
Thus we have
\begin{equation} \label{sy:prf-ellip-ext-zero-bdyn} \left\{ \begin{aligned} &\ \Delta q = \Delta p - \mathrm{div}(\nabla \widetilde{p}) &\qquad \text{in } \Omega, \\ &\ q = 0 &\qquad \text{on } \partial\Omega. \end{aligned} \right. \end{equation}
Applying Lemma \ref{lem:Ce-H1-zero-bdy} to (\ref{sy:prf-ellip-ext-zero-bdyn}), we obtain
\begin{align*} \int_\Omega &\big( |\nabla q|^2 + s^2\lambda^2e^{2\lambda d}|q|^2\big) e^{2se^{\lambda d}}dx \le \; C\int_\Omega se^{\lambda d}|F|^2e^{2se^{\lambda d}}dx + C\int_\Omega \frac{1}{s\lambda^2}e^{-\lambda d}(|\partial_t h|^2 + |\nabla u|^2)e^{2se^{\lambda d}}dx \\ &\hspace{2cm} + C\int_\Omega se^{\lambda d}(|\nabla h|^2 + |\nabla u|^2 + |u|^2 + |\nabla H|^2 + |H|^2 + |\nabla \widetilde{p}|^2)e^{2se^{\lambda d}}dx \end{align*}
for all $\lambda\ge \lambda_0$ and all $s\ge s_0$.
Since $p = q + \widetilde{p}$, we have
\begin{equation} \label{eq:prf-ce-1n} \begin{aligned} \int_\Omega \big( |\nabla p|^2 &+ s^2\lambda^2e^{2\lambda d}|p|^2\big) e^{2se^{\lambda d}}dx \\ \le \; &2\int_\Omega \big( |\nabla q|^2 + s^2\lambda^2e^{2\lambda d}|q|^2\big) e^{2se^{\lambda d}}dx + 2\int_\Omega \big( |\nabla \widetilde{p}|^2 + s^2\lambda^2e^{2\lambda d}|\widetilde{p}|^2\big) e^{2se^{\lambda d}}dx \\ \le \; &C\int_\Omega se^{\lambda d}(|F|^2 + |\nabla h|^2)e^{2se^{\lambda d}}dx + Cs^2\lambda^2e^{2\lambda \|d\|_{C(\overline{\Omega})}}e^{2se^{\lambda \|d\|_{C(\overline{\Omega})}}}\|p\|_{H^{\frac{1}{2}}(\partial\Omega)}^2 \\ &+ C\int_\Omega \frac{1}{s\lambda^2}e^{-\lambda d}|\partial_t h|^2e^{2se^{\lambda d}}dx + C\int_\Omega se^{\lambda d}(|\nabla u|^2 + |u|^2 + |\nabla H|^2 + |H|^2)e^{2se^{\lambda d}}dx \end{aligned} \end{equation}
for all $\lambda\ge \lambda_0$ and all $s\ge s_0$. We used (\ref{eq:Stt1n}) in the last inequality. Recall the definition \eqref{con:choice-phi-psi} of the weight function. Since $se^{\lambda(-\beta(t-t_0)^2+c_0)}\ge s$ for all $0\le t\le T$, substituting $s$ by $se^{\lambda(-\beta(t-t_0)^2+c_0)}$ in (\ref{eq:prf-ce-1n}) yields
\begin{align} \nonumber \int_\Omega \big( |\nabla p|^2 &+ s^2\lambda^2\varphi^2|p|^2\big) e^{2s\varphi}dx \le \; C\int_\Omega s\varphi(|F|^2 + |\nabla h|^2)e^{2s\varphi}dx + C\int_\Omega \frac{1}{s\lambda^2\varphi}|\partial_t h|^2e^{2s\varphi}dx \\ \label{eq:Ce-reg-p} &+ C\int_\Omega s\varphi(|\nabla u|^2 + |u|^2 + |\nabla H|^2 + |H|^2)e^{2s\varphi}dx + C(\lambda)s^2 e^{C(\lambda)s}\|p\|_{H^{\frac{1}{2}}(\partial\Omega)}^2 \end{align}
for all $\lambda \ge \lambda_0$ and all $s\ge s_0$.

\noindent $\mathbf{Second\ step.}$ We apply a Carleman estimate of parabolic type. We have the following lemma.
\begin{lem} \label{lem:Ce-reg-para} Let $\varphi$ be chosen as in (\ref{con:choice-phi-psi}) and let $y\in H^{2,1}(Q)$ satisfy
$$ \left\{ \begin{aligned} & \quad \partial_t y - \nu(x,t)\Delta y + \sum_{j=1}^3 b_j(x,t)\partial_j y + c(x,t)y = f &\quad \text{in } Q, \\ & \quad y(\cdot, 0) = y(\cdot, T) = 0 &\quad \text{in } \Omega \end{aligned} \right. $$
with $\nu,b_j,c\in W^{1,\infty}(Q)$, $\nu\ge c_0>0$ and $f\in L^2(Q)$, $j=1,2,3$. Then there exist constants $\lambda_0>0$, $s_0>0$ and $C>0$ such that
\begin{equation} \begin{aligned} \int_Q \bigg\{ \frac{1}{s\varphi}\bigg( |\partial_t y|^2 + \sum_{i,j=1}^3 &|\partial_i \partial_j y|^2 \bigg) + s\lambda^2\varphi|\nabla y|^2 + s^3\lambda^4\varphi^3 |y|^2\bigg\} e^{2s\varphi}dxdt \\ \le \; &C\int_Q |f|^2e^{2s\varphi}dxdt + C(\lambda)e^{C(\lambda)s}\int_{\Sigma} (|y|^2 + |\nabla_{x,t} y|^2)dSdt \end{aligned} \end{equation}
for all $\lambda\ge \lambda_0$ and all $s\ge s_0$.
\end{lem}

The proof is almost the same as that of Theorem 3.2 in Yamamoto \cite{Y09}. We rewrite the first equation in (\ref{sy:MHD4n}) to get
\begin{equation*} \partial_t u - \nu \Delta u + (B^{(1)}\cdot\nabla)u + (u\cdot\nabla)B^{(2)} + \nabla(B^{(3)}\cdot u)= F - \nabla p - L_1(H). \end{equation*}
Applying Lemma \ref{lem:Ce-reg-para} to each component of the above equation, we obtain
\begin{equation} \label{eq:Ce-reg-u} \begin{aligned} &\int_Q \bigg\{ \frac{1}{s\varphi}\bigg( |\partial_t u|^2 + \sum_{i,j=1}^3 |\partial_i \partial_j u|^2 \bigg) + s\lambda^2\varphi|\nabla u|^2 + s^3\lambda^4\varphi^3 |u|^2\bigg\} e^{2s\varphi}dxdt \le C\int_Q |F|^2e^{2s\varphi}dxdt \\ &\hspace{1cm} + C\int_Q (|\nabla p|^2 + |\nabla H|^2 + |H|^2)e^{2s\varphi}dxdt + C(\lambda)e^{C(\lambda)s}\big(\|u\|_{L^2(\Sigma)}^2 + \|\nabla_{x,t} u\|_{L^2(\Sigma)}^2\big) \end{aligned} \end{equation}
for all $\lambda\ge \lambda_1$ and all $s\ge s_1$.
Next, we apply the Carleman estimate of parabolic type to the second equation of (\ref{sy:MHD4n}) and we have the following estimate:
\begin{equation} \label{eq:Ce-reg-H} \begin{aligned} &\int_Q \bigg\{ \frac{1}{s\varphi}\bigg( |\partial_t H|^2 + \sum_{i,j=1}^3 |\partial_i \partial_j H|^2 \bigg) + s\lambda^2\varphi |\nabla H|^2 + s^3\lambda^4\varphi^3 |H|^2\bigg\} e^{2s\varphi}dxdt \le C\int_Q |G|^2e^{2s\varphi}dxdt \\ &\hspace{1cm} + C\int_Q (|\nabla u|^2 + |u|^2)e^{2s\varphi}dxdt + C(\lambda)e^{C(\lambda)s}\big(\|H\|_{L^2(\Sigma)}^2 + \|\nabla_{x,t} H\|_{L^2(\Sigma)}^2\big) \end{aligned} \end{equation}
for all $\lambda\ge \lambda_2$ and all $s\ge s_2$.

\noindent $\mathbf{Third\ step.}$ We combine the estimates for $p$, $u$ and $H$. Combining (\ref{eq:Ce-reg-p}), (\ref{eq:Ce-reg-u}) and (\ref{eq:Ce-reg-H}), we obtain
$$ \begin{aligned} & \int_Q \bigg\{ \frac{1}{s\varphi}\bigg( |\partial_t u|^2 + \sum_{i,j=1}^3 |\partial_i \partial_j u|^2\bigg) + s\lambda^2\varphi|\nabla u|^2 + s^3\lambda^4\varphi^3|u|^2 + |\nabla p|^2 + s^2\lambda^2\varphi^2 |p|^2 \\ &\hspace{1cm} + \frac{1}{s\varphi}\bigg( |\partial_t H|^2 + \sum_{i,j=1}^3 |\partial_i \partial_j H|^2\bigg) + s\lambda^2\varphi |\nabla H|^2 + s^3\lambda^4\varphi^3 |H|^2 \bigg\}e^{2s\varphi}dxdt \\ & \le \;C\int_Q \Big(s\varphi (|F|^2 + |\nabla h|^2)+ |G|^2 + \frac{1}{s\varphi}|\partial_t h|^2\Big) e^{2s\varphi}dxdt + C\int_Q s\varphi\big( |\nabla u|^2 + |u|^2 + |\nabla H|^2 + |H|^2 \big) e^{2s\varphi}dxdt \\ &\hspace{1cm} + C(\lambda)s^2e^{C(\lambda)s}\bigg(\|u\|_{L^2(\Sigma)}^2 + \|\nabla_{x,t} u\|_{L^2(\Sigma)}^2 + \|H\|_{L^2(\Sigma)}^2 + \|\nabla_{x,t} H\|_{L^2(\Sigma)}^2 + \|p\|_{L^2(0,T;H^{\frac{1}{2}}(\partial\Omega))}^2 \bigg) \end{aligned} $$
for all $\lambda\ge \lambda_3:=\max\{\lambda_0,\lambda_1,\lambda_2\}$ and all $s\ge s_3:=\max\{s_0,s_1,s_2\}$.
Finally we can fix $\lambda$ large enough to absorb the second term on the right-hand side into the left-hand side. By the relations $\lambda \ge 1$ and $s^2\le e^{Cs}$ for $s$ large, we obtain
\begin{align*} &\|(u,p,H)\|_{\sigma_s(Q)}^2 \le \;C\int_Q \Big(s\varphi |F|^2 + s\varphi|\nabla h|^2 + |G|^2 + \frac{1}{s\varphi}|\partial_t h|^2\Big) e^{2s\varphi}dxdt \\ &\hspace{1cm} + Ce^{Cs}\bigg(\|u\|_{L^2(\Sigma)}^2 + \|\nabla_{x,t} u\|_{L^2(\Sigma)}^2 + \|H\|_{L^2(\Sigma)}^2 + \|\nabla_{x,t} H\|_{L^2(\Sigma)}^2 + \|p\|_{L^2(0,T;H^{\frac{1}{2}}(\partial\Omega))}^2 \bigg) \end{align*}
for fixed $\lambda$ large enough and all $s \ge s_3$. The proof of Theorem \ref{thm:CEDPn} is completed.
\end{proof}

On the other hand, we investigate the following two first-order partial differential operators:

(\romannumeral1) $\ P f := \mathrm{div} (fA)=A \nabla f + f\,\mathrm{div}\, A, \quad f\in H^1(\Omega)$,

(\romannumeral2) $\ Q g := \mathrm{rot} (gb)=\nabla g\times b + g\,\mathrm{rot}\, b, \quad g\in H^1(\Omega)$,

\noindent where $A=(A_{ij})_{i,j}$ is a $3\times 3$ matrix and $b=(b_1,b_2,b_3)^T$ is a vector satisfying $A\in W^{1,\infty}(\Omega)$, $b\in W^{2,\infty}(\Omega)$. Recall that the divergence of a matrix is defined as $[\mathrm{div}\, A]_k=\sum_{j=1}^3 \partial_j A_{kj}$. In addition, we select an open subset $O\subset \Omega$. Then we have the following Carleman inequalities:
\begin{thm} \label{thm:CEIPn} Let $d$ be chosen as in (\ref{con:d}) and $\varphi_0:=e^{\lambda d}$. Assume that
$$ \mathrm{det}\, A(x) \neq 0 \ \text{ and }\ |\nabla d(x) \times b(x)|\neq 0 \qquad \text{for } x\in \overline{O}.
$$
Then there exist constants $\lambda_0\ge 1$, $s_0\ge 1$ and a generic constant $C>0$ such that
\begin{equation} \label{CEIP1n} \int_{O} (|\nabla f|^2 + s^2\lambda^2\varphi_0^2 |f|^2)e^{2s\varphi_0} dx \le C\int_{O} |P f|^2 e^{2s\varphi_0}dx + C\int_{\partial O} s\lambda\varphi_0 |f|^2e^{2s\varphi_0}d\sigma \end{equation}
and
\begin{equation} \label{CEIP2n} \begin{aligned} &\int_{O} (|\nabla g|^2 + s^2\lambda^2\varphi_0^2 |g|^2)e^{2s\varphi_0} dx \le C\int_{O} \Big(\frac{1}{s^2\lambda^2\varphi_0^2}|\nabla (Qg)|^2 + |Qg|^2\Big) e^{2s\varphi_0}dx \\ &\hspace{5cm} + C\int_{\partial O} \Big(\frac{1}{s\lambda\varphi_0}|\nabla g|^2 + s\lambda\varphi_0 |g|^2\Big)e^{2s\varphi_0}d\sigma \end{aligned} \end{equation}
for all $\lambda\ge \lambda_0$, $s\ge s_0$ and $f\in H^1(\Omega)$, $g\in H^2(\Omega)$.
\end{thm}

To prove these inequalities, we adapt the idea of Lemma 6.1 in \cite{Y09}.
\begin{proof} We first prove inequality (\ref{CEIP1n}). Set $w=fe^{s\varphi_0}$. Then
$$ Pf=P(we^{-s\varphi_0})=e^{-s\varphi_0}(A\nabla w + w\,\mathrm{div}\, A - s\lambda\varphi_0 (A\nabla d) w). $$
We rewrite it in components, that is,
\begin{equation} \label{eq1n} [Pf]_ke^{s\varphi_0}=\sum_{j=1}^3 \big( A_{kj}\partial_j w + \partial_j A_{kj} w - s\lambda\varphi_0 (A_{kj}\partial_j d)w\big). \end{equation}
Now choose $a=(a_1,a_2,a_3)^T\in L^\infty (\Omega)$ such that $\sum_{k=1}^3 a_k A_{kj} = \partial_j d$ for any $x\in \overline{O}$. In fact, the existence of such $\{a_k\}_{k=1,2,3}$ follows from the assumption $\mathrm{det}\, A\neq 0$ on $\overline{O}$. We multiply equation (\ref{eq1n}) by $a_k$ and take the summation over $k$:
$$ \sum_{k=1}^3 a_k[Pf]_k e^{s\varphi_0} = \nabla d\cdot \nabla w + \big(\sum_{j,k=1}^3 a_k \partial_j A_{kj}\big) w - s\lambda\varphi_0 |\nabla d|^2 w \qquad \text{on } \overline{O}.
$$
Then we estimate
\begin{align*} &\int_{O} \Big|\sum_{k=1}^3 a_k[Pf]_k \Big|^2e^{2s\varphi_0}dx = \int_O s^2\lambda^2\varphi_0^2|\nabla d|^4|w|^2 dx + \int_O |\nabla d \cdot \nabla w + (a\cdot \mathrm{div}\, A) w|^2 dx \\ &\hspace{4.3cm} - 2\int_O s\lambda\varphi_0 |\nabla d|^2(\nabla d\cdot \nabla w + (a\cdot\mathrm{div}\,A) w)w\, dx \\ &\hspace{3.9cm}\ge \int_O s^2\lambda^2\varphi_0^2|\nabla d|^4|w|^2 dx - 2\int_O s\lambda\varphi_0 |\nabla d|^2 (a\cdot \mathrm{div}\, A)|w|^2 dx \\ &\hspace{4.3cm} - \int_{\partial O} s\lambda\varphi_0 |\nabla d|^2 \frac{\partial d}{\partial n}|w|^2 d\sigma + \int_O s\lambda\, \mathrm{div} (\varphi_0|\nabla d|^2 \nabla d)|w|^2 dx \\ &\hspace{3.9cm}\ge \int_O s^2\lambda^2\varphi_0^2|\nabla d|^4|w|^2 dx - \int_{\partial O} s\lambda\varphi_0 |\nabla d|^2 \frac{\partial d}{\partial n}|w|^2 d\sigma \\ &\hspace{4.3cm} + \int_O s\lambda\varphi_0 \big(\lambda|\nabla d|^4 + \nabla |\nabla d|^2 \cdot\nabla d + |\nabla d|^2 (\Delta d - 2(a\cdot\mathrm{div}\, A))\big)|w|^2 dx. \end{align*}
By choosing $\lambda$ large, we can absorb the third term on the right-hand side. Thus,
\begin{equation} \label{eq2n} \int_O s^2\lambda^2\varphi_0^2|f|^2e^{2s\varphi_0} dx \le C \int_O |Pf|^2e^{2s\varphi_0}dx + C \int_{\partial O} s\lambda\varphi_0|f|^2e^{2s\varphi_0} d\sigma \end{equation}
holds for all $\lambda\ge \lambda_1$ and $s\ge 1$. Furthermore, for $l=1,2,3$, we can also choose $a^{(l)}=(a_1^{(l)},a_2^{(l)},a_3^{(l)})^T\in L^\infty (\Omega)$ such that $\sum_{k=1}^3 a_k^{(l)} A_{kj}=\delta_{lj}$ on $\overline{O}$. Multiplying (\ref{eq1n}) by $a_k^{(l)}$ and taking the summation over $k$ yield
$$ \sum_{k=1}^3 a_k^{(l)}[Pf]_k e^{s\varphi_0} = \partial_l w + \big(\sum_{j,k=1}^3 a_k^{(l)} \partial_j A_{kj}\big) w - s\lambda\varphi_0 (\partial_l d) w \qquad \text{on } \overline{O}.
$$
Again we estimate
\begin{align*} &\int_O \Big|\sum_{k=1}^3 a_k^{(l)}[Pf]_k \Big|^2e^{2s\varphi_0}dx = \int_O |\partial_l w|^2 dx + \int_O |(a^{(l)}\cdot \mathrm{div}\, A) - s\lambda\varphi_0(\partial_l d)|^2 |w|^2 dx \\ &\hspace{4.3cm} + 2\int_O \big((a^{(l)}\cdot \mathrm{div}\, A) - s\lambda\varphi_0(\partial_l d)\big)w(\partial_l w) dx \\ &\hspace{3.9cm}\ge \int_O |\partial_l w|^2 dx + 2\int_O (a^{(l)}\cdot \mathrm{div}\, A)w(\partial_l w) dx \\ &\hspace{4.3cm} - \int_{\partial O} s\lambda\varphi_0 (\partial_l d)n_l |w|^2 d\sigma + \int_O s\lambda\varphi_0(\lambda|\partial_l d|^2 + \partial_l^2 d)|w|^2 dx. \end{align*}
Rewriting the above inequality and taking the summation over $l$ on both sides, we get
\begin{align*} &\int_O |\nabla w|^2 dx \le \int_O \sum_{l=1}^3\Big|\sum_{k=1}^3 a_k^{(l)}[Pf]_k \Big|^2e^{2s\varphi_0}dx + \int_{\partial O} s\lambda\varphi_0 \frac{\partial d}{\partial n} |w|^2 d\sigma \\ &\hspace{2.5cm} - 2\sum_{l=1}^3\int_O (a^{(l)}\cdot \mathrm{div}\, A)w(\partial_l w) dx - \int_O s\lambda\varphi_0(\lambda|\nabla d|^2 + \Delta d)|w|^2 dx \\ &\hspace{1.8cm} \le C\int_O |Pf|^2e^{2s\varphi_0}dx + \int_{\partial O} s\lambda\varphi_0\frac{\partial d}{\partial n} |w|^2 d\sigma \\ &\hspace{2.5cm} + \frac{1}{2}\int_O |\nabla w|^2 dx + 2\int_O \sum_{l=1}^3 |a^{(l)}\cdot\mathrm{div}\, A|^2 |w|^2 dx + \int_O s\lambda\varphi_0|w|^2 dx. \end{align*}
This leads to
$$ \int_O |\nabla w|^2 dx \le C\int_O |Pf|^2e^{2s\varphi_0}dx + C\int_{\partial O} s\lambda\varphi_0 |w|^2 d\sigma + C\int_O s\lambda\varphi_0|w|^2 dx. $$
We combine this with (\ref{eq2n}) and take $\lambda$ large enough to absorb the last term on the right-hand side.
Finally, we obtain
$$ \int_{O} (|\nabla f|^2 + s^2\lambda^2\varphi_0^2 |f|^2)e^{2s\varphi_0} dx \le C\int_{O} |P f|^2 e^{2s\varphi_0}dx + C\int_{\partial O} s\lambda\varphi_0 |f|^2e^{2s\varphi_0}d\sigma $$
for all $\lambda\ge \lambda_2$ and $s\ge 1$.

Next we consider the operator $Q$. Set $v=ge^{s\varphi_0}$. Then
$$ Qg=Q(ve^{-s\varphi_0})=e^{-s\varphi_0}(\nabla v\times b + (\mathrm{rot}\, b)v - s\lambda\varphi_0(\nabla d\times b)v). $$
By denoting
$$ B= \begin{pmatrix} 0 & b_3 & -b_2 \\ -b_3 & 0 & b_1 \\ b_2 & -b_1 & 0 \\ \end{pmatrix}, $$
we rewrite the above formula as
$$ Qg e^{s\varphi_0} = B\nabla v + (\mathrm{rot}\, b)v - s\lambda\varphi_0(B\nabla d)v. $$
However, $\mathrm{det}\, B = b_1b_2b_3 + (-b_1b_2b_3)=0$, so we cannot argue as for the operator $P$. Thus, we calculate directly:
\begin{align*} &\int_O |Qg|^2e^{2s\varphi_0} dx = \int_O s^2\lambda^2\varphi_0^2 |B\nabla d|^2 |v|^2 dx + \int_O |B\nabla v + (\mathrm{rot}\, b)v|^2 dx \\ &\hspace{2.7cm} -2\int_O s\lambda\varphi_0 (B\nabla d)\cdot (B\nabla v + (\mathrm{rot}\, b)v)v\, dx \\ &\hspace{2.5cm} \ge \int_O s^2\lambda^2\varphi_0^2 |B\nabla d|^2 |v|^2 dx -2\int_O s\lambda\varphi_0(B\nabla d)\cdot (\mathrm{rot}\, b)|v|^2 dx \\ &\hspace{2.7cm} -\int_{\partial O} s\lambda\varphi_0 (B\nabla d)\cdot (Bn)|v|^2 d\sigma + \int_O s\lambda\varphi_0 \big(\lambda |B\nabla d|^2 + \mathrm{div}(B^T(B\nabla d))\big) |v|^2 dx. \end{align*}
By noting the assumption that $|B\nabla d|=|\nabla d\times b| \neq 0$ on $\overline{O}$, we can take $\lambda$ large to absorb the second and fourth terms on the right-hand side:
\begin{equation} \label{eq3n} \int_O s^2\lambda^2\varphi_0^2 |g|^2e^{2s\varphi_0} dx \le C\int_O |Qg|^2e^{2s\varphi_0} dx + C\int_{\partial O}s\lambda\varphi_0 |g|^2e^{2s\varphi_0} d\sigma \end{equation}
for all $\lambda\ge \lambda_3$ and $s\ge 1$. We take the $k$-th derivative of (\romannumeral2) and denote $g_k=\partial_k g$.
Set
$$
Q_k g_k:= \partial_k(Qg) - \nabla g \times \partial_k b - g(\mathrm{rot}(\partial_k b))= \nabla g_k\times b + g_k(\mathrm{rot} b).
$$
Applying a similar argument as above to the operator $Q_k$, we have
\begin{align*}
&\int_O |g_k|^2e^{2s\varphi_0} dx \le C\int_O \frac{1}{s^2\lambda^2\varphi_0^2}|Q_k g_k|^2e^{2s\varphi_0} dx + C\int_{\partial O} \frac{1}{s\lambda\varphi_0} |g_k|^2e^{2s\varphi_0} d\sigma \\
&\hspace{0cm} \le C\int_O \frac{1}{s^2\lambda^2\varphi_0^2}|\partial_k (Qg)|^2e^{2s\varphi_0} dx + C\int_O \frac{1}{s^2\lambda^2\varphi_0^2}(|\nabla g|^2 +|g|^2)e^{2s\varphi_0}dx + C\int_{\partial O}\frac{1}{s\lambda\varphi_0} |g_k|^2e^{2s\varphi_0} d\sigma
\end{align*}
for all $\lambda \ge \lambda_4$, $s\ge 1$ and $k=1,2,3$. Summing the estimates over $k$ and again absorbing the lower-order terms by taking $\lambda$ large, we obtain
\begin{equation} \label{eq4n}
\begin{aligned}
&\int_O |\nabla g|^2e^{2s\varphi_0} dx \le C\int_O \frac{1}{s^2\lambda^2\varphi_0^2}|\nabla (Qg)|^2e^{2s\varphi_0} dx + C\int_O \frac{1}{s^2\lambda^2\varphi_0^2}|g|^2e^{2s\varphi_0}dx \\
&\hspace{2.8cm}+ C\int_{\partial O} \frac{1}{s\lambda\varphi_0} |\nabla g|^2e^{2s\varphi_0} d\sigma
\end{aligned}
\end{equation}
for all $\lambda \ge \lambda_5$ and all $s\ge 1$. Combining (\ref{eq3n}) and (\ref{eq4n}), we have proved (\ref{CEIP2n}) and hence Theorem \ref{thm:CEIPn} with $\lambda_0=\max\{\lambda_i:\ 1\le i\le 5\}$ and $s_0=1$.
\end{proof}
Recall that our regular weight function is defined as
$$
\varphi(x,t) = e^{\lambda\psi(x,t)}, \quad \psi(x,t) = d(x) - \beta(t-t_0)^2 +c_0.
$$
For all $s\ge s_0$, we have $se^{\lambda c_0}\ge s\ge s_0$.
Then substituting $s$ by $se^{\lambda c_0}$ in \eqref{CEIP1n} and \eqref{CEIP2n} leads to
\begin{thm} \label{thm:CEIP-reg}
Under the assumptions that
$$
\det A(x) \neq 0 \ \text{and} \ |\nabla d(x) \times b(x)|\neq 0 \qquad \text{for } x\in \overline{O},
$$
there exist constants $\lambda_0\ge 1$, $s_0\ge 1$ and a generic constant $C>0$ such that
\begin{align*}
\int_{O} (|\nabla f|^2 + s^2\lambda^2\varphi^2(x,t_0) |f|^2)e^{2s\varphi(x,t_0)} dx \le C\int_{O} |P f|^2 e^{2s\varphi(x,t_0)}dx + C\int_{\partial O} s\lambda\varphi(x,t_0) |f|^2e^{2s\varphi(x,t_0)}d\sigma
\end{align*}
and
\begin{align*}
&\int_{O} (|\nabla g|^2 + s^2\lambda^2\varphi^2(x,t_0) |g|^2)e^{2s\varphi(x,t_0)} dx \le C\int_{O} \Big(\frac{1}{s^2\lambda^2\varphi^2(x,t_0)}|\nabla (Qg)|^2 + |Qg|^2\Big) e^{2s\varphi(x,t_0)}dx \\
&\hspace{5cm} + C\int_{\partial O} \Big(\frac{1}{s\lambda\varphi(x,t_0)}|\nabla g|^2 + s\lambda\varphi(x,t_0) |g|^2\Big)e^{2s\varphi(x,t_0)}d\sigma
\end{align*}
for all $\lambda\ge \lambda_0$, $s\ge s_0$ and $f\in H^1(\Omega)$, $g\in H^2(\Omega)$.
\end{thm}

\section{Proof of Theorems \ref{thm:stability} and \ref{thm:stabilityn}}

In this section, we prove our stability results (Theorems \ref{thm:stability} and \ref{thm:stabilityn}) by means of the two types of Carleman inequalities established in the last section. First of all, we reduce our inverse coefficients problem to an inverse source problem. Recall that we have two sets of solutions $(u_i,p_i,H_i)$ $(i=1,2)$ satisfying the following MHD system:
\begin{equation} \label{sy:MHD5}
\left\{
\begin{aligned}
&\partial_t u_i - \mathrm{div}(2\nu_i\mathcal{E}(u_i)) + (u_i\cdot\nabla) u_i - (H_i\cdot\nabla) H_i + \nabla p_i - \nabla H_i^T\!\cdot\!
H_i = 0 &\quad \text{in } Q, \\
&\partial_t H_i + \mathrm{rot}(\kappa_i\mathrm{rot}\; H_i) + (u_i\cdot\nabla) H_i - (H_i\cdot\nabla) u_i = 0 &\quad \text{in } Q, \\
& \mathrm{div}\; u_i = 0, \quad \mathrm{div}\; H_i = 0 &\quad \text{in } Q.
\end{aligned}
\right.
\end{equation}
Take the difference of the two sets of equations in (\ref{sy:MHD5}). By setting $u = u_1 - u_2$, $H = H_1 - H_2$, $p = p_1 - p_2$ and $\nu = \nu_1 - \nu_2$, $\kappa = \kappa_1 - \kappa_2$, we obtain
\begin{equation} \label{sy:MHD6}
\left\{
\begin{aligned}
&\partial_t u - \nu_2\Delta u + (u\!\cdot\!\nabla) u_2 + ((u_1-\!\nabla \nu_2)\!\cdot\!\nabla) u - \nabla u^T\!\cdot\!\nabla\nu_2 + L_1(H,\nabla H) + \nabla p = \mathrm{div}(2\nu\mathcal{E}(u_1)), \\
&\partial_t H - \kappa_2\Delta H - (H\!\cdot\!\nabla) u_2 + (u_1\!\cdot\!\nabla) H + \nabla\kappa_2\times\mathrm{rot}H + L_2(u,\nabla u) = -\mathrm{rot}(\kappa\mathrm{rot}H_1), \\
& \mathrm{div}\; u = 0, \quad \mathrm{div}\; H = 0 \hspace{9cm} \text{in } Q.
\end{aligned}
\right.
\end{equation}
Here
\begin{align*}
&L_1(H,\nabla H) = - (H_1\cdot\nabla) H - (H\cdot\nabla) H_2 - \nabla H^T\cdot H_2 - \nabla H_1^T\cdot H, \\
&L_2(u,\nabla u) = - (H_1\cdot\nabla) u + (u\cdot\nabla) H_2.
\end{align*}

\subsection{Proof of Theorem \ref{thm:stability}}

Note that $t_0\in (0,T)$ is the fixed time for measurements. By the assumptions (A1)-(A2), we can replace the coefficients $A$ and $b$ in Theorem \ref{thm:CEIP-sin} by $2\mathcal{E}(u_1(\cdot,t_0))$ and $\mathrm{rot} H_1(\cdot,t_0)$.
This leads to
\begin{align} \label{eq5}
\int_{\Omega} (|\nabla \nu|^2 + s^2\lambda^2\varphi^2(x,t_0) |\nu|^2)e^{2s\alpha (x, t_0)} dx \le C\int_{\Omega} |\mathrm{div}(2\nu\mathcal{E}(u_1))(x,t_0)|^2 e^{2s\alpha (x,t_0)}dx
\end{align}
and
\begin{align}
\nonumber &\int_{\Omega} (|\nabla \kappa|^2 + s^2\lambda^2\varphi^2(x,t_0) |\kappa|^2)e^{2s\alpha (x,t_0)} dx \le C\int_{\Omega} |\mathrm{rot}(\kappa\mathrm{rot}H_1)(x,t_0)|^2 e^{2s\alpha (x,t_0)}dx \\
\label{eq6} &\hspace{4cm} + C\int_{\Omega} \frac{1}{s^2\lambda^2\varphi^2(x,t_0)}|\nabla (\mathrm{rot}(\kappa\mathrm{rot}H_1))|^2 e^{2s\alpha (x,t_0)}dx
\end{align}
for all $\lambda\ge \lambda_0$ and all $s\ge s_0$. Henceforth, we may omit $t_0$ when there is no confusion. We multiply by $s$ on both sides of \eqref{eq5} and \eqref{eq6} and then take the summation:
\begin{equation} \label{eq7}
\begin{aligned}
&\int_{\Omega} s(|\nabla \nu|^2 + s^2\lambda^2\varphi^2 |\nu|^2 + |\nabla \kappa|^2 + s^2\lambda^2\varphi^2 |\kappa|^2)e^{2s\alpha} dx \\
&\hspace{1cm} \le C\int_{\Omega} s\Big(|\mathrm{div}(2\nu\mathcal{E}(u_1))|^2 + |\mathrm{rot}(\kappa\mathrm{rot}H_1)|^2 + \frac{1}{s^2\lambda^2\varphi^2}|\nabla (\mathrm{rot}(\kappa\mathrm{rot}H_1))|^2\Big) e^{2s\alpha}dx
\end{aligned}
\end{equation}
holds for all $\lambda\ge \lambda_0$, $s\ge s_0$ and $t=t_0$.
By the governing system (\ref{sy:MHD6}), we estimate
\begin{align*}
&[\text{the RHS of } (\ref{eq7})] \le Cs\int_\Omega (|\partial_t u|^2 + |\Delta u|^2 + |\nabla u|^2 + |L_1(H,\nabla H)|^2 +|\nabla p|^2 )e^{2s\alpha}dx \\
&\hspace{2cm} + Cs\int_\Omega \frac{1}{s^2\lambda^2\varphi^2}\Big( |\nabla (\partial_t H)|^2 + |\nabla(\Delta H)|^2 + \sum_{i,j=1}^3|\partial_i\partial_j H|^2 + |\nabla (L_2(u,\nabla u))|^2\Big)e^{2s\alpha}dx \\
&\hspace{2cm} + Cs\int_\Omega (|\partial_t H|^2 + |\Delta H|^2 + |\nabla H|^2 + |L_2(u,\nabla u)|^2 )e^{2s\alpha}dx \\
&\hspace{2.8cm} \le C\int_\Omega s\Big(|\partial_t u|^2 + |\partial_t H|^2 + \frac{1}{s^2\lambda^2\varphi^2}|\nabla (\partial_t H)|^2\Big)e^{2s\alpha}dx + Cs\mathcal{D}_1^2,
\end{align*}
where
$$
\mathcal{D}_1:= \|u(\cdot,t_0)\|_{H^2(\Omega)} + \|H(\cdot,t_0)\|_{H^3(\Omega)} + \|\nabla p(\cdot,t_0)\|_{L^2(\Omega)}.
$$
Next, we use Theorem \ref{thm:CEDP} (Carleman estimate for the direct problem) to estimate the first integral on the right-hand side. Noting that $e^{2s\alpha(x,0)}=0$ for $x\in\Omega$, we calculate
\begin{equation} \label{eq8}
\begin{aligned}
&\int_\Omega s|\partial_t u(x,t_0)|^2e^{2s\alpha(x,t_0)}dx = \int_0^{t_0} \frac{\partial}{\partial t}\bigg( \int_\Omega s|\partial_t u|^2e^{2s\alpha}dx\bigg) dt \\
&\hspace{3.7cm}= \int_\Omega\int_0^{t_0} \bigg( 2s(\partial_t u\cdot\partial_t^2 u) + 2s^2(\partial_t\alpha)|\partial_t u|^2 \bigg) e^{2s\alpha}dxdt \\
&\hspace{3.7cm}\le C(\lambda)\int_Q (|\partial_t^2 u|^2 + s^2\varphi^2|\partial_t u|^2)e^{2s\alpha}dxdt.
\end{aligned}
\end{equation}
Here we used $s\ge 1$ and
$$
\begin{aligned}
&2s^2|\partial_t \alpha| = 2s^2\left|\frac{l^\prime}{l^2}(e^{\lambda\eta}-e^{2\lambda})\right|\le C(\lambda)s^2\varphi^2, \\
&2s |\partial_t u\cdot\partial_t^2 u| \le \frac{1}{\varphi^2}|\partial_t^2 u|^2 + s^2\varphi^2|\partial_t u|^2 \le l^2(t_0)|\partial_t^2 u|^2 + s^2\varphi^2|\partial_t u|^2.
\end{aligned}
$$
Similarly, we have
\begin{equation} \label{eq9}
\begin{aligned}
&\int_\Omega s|\partial_t H(x,t_0)|^2e^{2s\alpha(x,t_0)}dx = \int_0^{t_0} \frac{\partial}{\partial t}\bigg( \int_\Omega s |\partial_t H|^2e^{2s\alpha}dx\bigg) dt \\
&\hspace{3.7cm}= \int_\Omega\int_0^{t_0} \bigg( 2s(\partial_t H\cdot\partial_t^2 H) + 2s^2(\partial_t\alpha)|\partial_t H|^2 \bigg) e^{2s\alpha}dxdt \\
&\hspace{3.7cm}\le C(\lambda)\int_Q (|\partial_t^2 H|^2 + s^2\varphi^2|\partial_t H|^2)e^{2s\alpha}dxdt
\end{aligned}
\end{equation}
and
\begin{equation} \label{eq10}
\begin{aligned}
&s\int_\Omega \frac{1}{s^2\lambda^2\varphi(x,t_0)^2}|\nabla(\partial_t H(x,t_0))|^2e^{2s\alpha(x,t_0)}dx \\
&\hspace{1cm}= \int_0^{t_0} \frac{\partial}{\partial t}\bigg( \int_\Omega \frac{1}{s\lambda^2\varphi^2}|\nabla(\partial_t H)|^2e^{2s\alpha}dx\bigg) dt \\
&\hspace{1cm}= \int_\Omega\int_0^{t_0} \bigg( \frac{2}{s\lambda^2\varphi^2}(\nabla(\partial_t H):\nabla(\partial_t^2 H)) + \frac{2s(\partial_t\alpha)}{s\lambda^2\varphi^2} |\nabla(\partial_t H)|^2 \bigg) e^{2s\alpha} dxdt \\
&\hspace{1cm}\le C(\lambda)\int_Q \Big(\frac{1}{s^2\varphi^2}|\nabla(\partial_t^2 H)|^2 + \frac{1}{\varphi^2}|\nabla(\partial_t H)|^2 + |\nabla(\partial_t H)|^2\Big)e^{2s\alpha} dxdt \\
&\hspace{1cm}\le C(\lambda)\int_Q \Big(\frac{1}{s^2\varphi^2}|\nabla(\partial_t^2 H)|^2 + |\nabla(\partial_t H)|^2\Big)e^{2s\alpha} dxdt.
\end{aligned}
\end{equation}
Set $w_1=\partial_t u$, $w_2=\partial_t^2 u$, $q_1=\partial_t p$, $q_2=\partial_t^2 p$ and $h_1=\partial_t H$, $h_2=\partial_t^2 H$.
Then according to our governing system (\ref{sy:MHD6}), we have
$$
\left\{
\begin{aligned}
&\partial_t u - \nu_2\Delta u + (u\!\cdot\!\nabla) u_2 + ((u_1-\!\nabla \nu_2)\!\cdot\!\nabla) u - \nabla u^T\!\cdot\!\nabla\nu_2 + L_1(H,\nabla H) + \nabla p = \mathrm{div}(2\nu\mathcal{E}(u_1)), \\
&\partial_t H - \kappa_2\Delta H - (H\!\cdot\!\nabla) u_2 + (u_1\!\cdot\!\nabla) H + \nabla\kappa_2\times\mathrm{rot}H + L_2(u,\nabla u) = -\mathrm{rot}(\kappa\mathrm{rot}H_1), \\
& \mathrm{div}\; u = 0, \quad \mathrm{div}\; H = 0
\end{aligned}
\right.
$$
and
$$
\left\{
\begin{aligned}
&\partial_t w_1 - \nu_2\Delta w_1 + (w_1\!\cdot\!\nabla) u_2 + ((u_1-\!\nabla \nu_2)\!\cdot\!\nabla) w_1 - \nabla w_1^T\!\cdot\!\nabla\nu_2 + L_1(h_1,\nabla h_1) + \nabla q_1 \\
&\hspace{4cm} = \mathrm{div}(2\nu\mathcal{E}(\partial_t u_1)) - (u\!\cdot\!\nabla)(\partial_t u_2) - ((\partial_t u_1)\!\cdot\!\nabla) u - L_{1t}(H,\nabla H), \\
&\partial_t h_1 - \kappa_2\Delta h_1 - (h_1\!\cdot\!\nabla) u_2 + (u_1\!\cdot\!\nabla) h_1 + \nabla\kappa_2\times\mathrm{rot}h_1 + L_2(w_1,\nabla w_1) \\
&\hspace{4cm} = -\mathrm{rot}(\kappa\mathrm{rot}(\partial_t H_1)) + (H\!\cdot\!\nabla) (\partial_t u_2) - ((\partial_t u_1)\!\cdot\!\nabla) H - L_{2t}(u,\nabla u), \\
& \mathrm{div}\; w_1 = 0, \quad \mathrm{div}\; h_1 = 0
\end{aligned}
\right.
$$
and
$$
\left\{
\begin{aligned}
&\partial_t w_2 - \nu_2\Delta w_2 + (w_2\!\cdot\!\nabla) u_2 + ((u_1-\!\nabla \nu_2)\!\cdot\!\nabla) w_2 - \nabla w_2^T\!\cdot\!\nabla\nu_2 + L_1(h_2,\nabla h_2) + \nabla q_2 \\
&\hspace{3cm} = \mathrm{div}(2\nu\mathcal{E}(\partial_t^2 u_1)) - 2(w_1\!\cdot\!\nabla)(\partial_t u_2) - 2((\partial_t u_1)\!\cdot\!\nabla) w_1 - 2L_{1t}(h_1,\nabla h_1) \\
&\hspace{3.5cm} - (u\!\cdot\!\nabla)(\partial_t^2 u_2) - ((\partial_t^2 u_1)\!\cdot\!\nabla) u - L_{1tt}(u,\nabla u), \\
&\partial_t h_2 - \kappa_2\Delta h_2 - (h_2\!\cdot\!\nabla) u_2 + (u_1\!\cdot\!\nabla) h_2 + \nabla\kappa_2\times\mathrm{rot}h_2 + L_2(w_2,\nabla w_2) \\
&\hspace{3cm} = -\mathrm{rot}(\kappa\mathrm{rot}(\partial_t^2 H_1)) + 2(h_1\!\cdot\!\nabla) (\partial_t u_2) - 2((\partial_t u_1)\!\cdot\!\nabla) h_1 - 2L_{2t}(w_1,\nabla w_1) \\
&\hspace{3.5cm} + (H\!\cdot\!\nabla) (\partial_t^2 u_2) - ((\partial_t^2 u_1)\!\cdot\!\nabla) H - L_{2tt}(u,\nabla u), \\
& \mathrm{div}\; w_2 = 0, \quad \mathrm{div}\; h_2 = 0.
\end{aligned}
\right.
$$
Applying Theorem \ref{thm:CEDP} to $(u,p,H)$, then to $(w_1,q_1,h_1)$ and then to $(w_2,q_2,h_2)$, we obtain
\begin{equation*}
\begin{aligned}
\|&(u,p,H)\|_{\chi_s(Q)}^2 \le \;C\int_Q (|\nu|^2 + |\nabla \nu|^2 + |\kappa|^2 + |\nabla \kappa|^2 )e^{2s\alpha}dxdt \\
&\hspace{1cm} + Ce^{-s}\bigg(\|u\|_{L^2(\Sigma)}^2 + \|\nabla_{x,t} u\|_{L^2(\Sigma)}^2 + \|H\|_{L^2(\Sigma)}^2 + \|\nabla_{x,t} H\|_{L^2(\Sigma)}^2 + \|p\|_{L^2(0,T;H^{\frac{1}{2}}(\partial \Omega))}^2 \bigg)
\end{aligned}
\end{equation*}
and
\begin{equation*}
\begin{aligned}
&\|(w_1,q_1,h_1)\|_{\chi_s(Q)}^2 \le \;C\int_Q (|\nu|^2 + |\nabla \nu|^2 + |\kappa|^2 + |\nabla \kappa|^2 + |u|^2 + |\nabla u|^2 + |H|^2 + |\nabla H|^2)e^{2s\alpha}dxdt \\
&\hspace{0.5cm} + Ce^{-s}\bigg(\|w_1\|_{L^2(\Sigma)}^2 + \|\nabla_{x,t} w_1\|_{L^2(\Sigma)}^2 + \|h_1\|_{L^2(\Sigma)}^2 + \|\nabla_{x,t} h_1\|_{L^2(\Sigma)}^2 + \|q_1\|_{L^2(0,T;H^{\frac{1}{2}}(\partial \Omega))}^2 \bigg)
\end{aligned}
\end{equation*}
and
\begin{equation*}
\begin{aligned}
&\|(w_2,q_2,h_2)\|_{\chi_s(Q)}^2 \le \;C\int_Q (|\nu|^2 + |\nabla \nu|^2 + |\kappa|^2 + |\nabla \kappa|^2 + |u|^2 + |\nabla u|^2 )e^{2s\alpha}dxdt \\
&\hspace{0.5cm} + C\int_Q (|H|^2 + |\nabla H|^2 + |w_1|^2 + |\nabla w_1|^2 + |h_1|^2 + |\nabla h_1|^2)e^{2s\alpha}dxdt \\
&\hspace{0.5cm} + Ce^{-s}\bigg(\|w_2\|_{L^2(\Sigma)}^2 + \|\nabla_{x,t} w_2\|_{L^2(\Sigma)}^2 + \|h_2\|_{L^2(\Sigma)}^2 + \|\nabla_{x,t} h_2\|_{L^2(\Sigma)}^2 + \|q_2\|_{L^2(0,T;H^{\frac{1}{2}}(\partial \Omega))}^2 \bigg).
\end{aligned}
\end{equation*}
We combine the above three estimates and absorb the lower-order terms on the right-hand side.
Then we have
\begin{equation} \label{eq11}
\begin{aligned}
\sum_{j=0}^2\|(\partial_t^j u,\partial_t^j p, \partial_t^j H)\|_{\chi_s(Q)}^2 \le\; C\int_Q (|\nu|^2 + |\nabla \nu|^2 + |\kappa|^2 + |\nabla \kappa|^2)e^{2s\alpha}dxdt + Ce^{-s}\mathcal{D}_2^2
\end{aligned}
\end{equation}
for fixed $\lambda\ge \hat{\lambda}$ and all $s\ge \hat{s}$. Here
$$
\begin{aligned}
\mathcal{D}_2 = &\|u\|_{H^2(0,T;H^1(\partial\Omega))} + \|u\|_{H^3(0,T;L^2(\partial\Omega))} + \|\partial_n u\|_{H^2(0,T;L^2(\partial\Omega))} + \|H\|_{H^2(0,T;H^1(\partial\Omega))} \\
&+ \|H\|_{H^3(0,T;L^2(\partial\Omega))} + \|\partial_n H\|_{H^2(0,T;L^2(\partial\Omega))} + \|p\|_{H^2(0,T;H^{\frac{1}{2}}(\partial\Omega))} \\
= & \|u\|_{H^{0,2}(\Sigma)} + \|\nabla_{x,t} u\|_{H^{0,2}(\Sigma)} + \|H\|_{H^{0,2}(\Sigma)} + \|\nabla_{x,t} H\|_{H^{0,2}(\Sigma)} + \|p\|_{H^{1/2,2}(\Sigma)}.
\end{aligned}
$$
We fix $\lambda$ large ($\lambda\ge \hat{\lambda}$) in inequalities (\ref{eq8})-(\ref{eq10}) and then sum them up in view of (\ref{eq11}):
$$
\begin{aligned}
&\int_\Omega s|\partial_t u(x,t_0)|^2e^{2s\alpha(x)}dx + \int_\Omega s|\partial_t H(x,t_0)|^2e^{2s\alpha(x)}dx + \int_\Omega s|\nabla(\partial_t H(x,t_0))|^2e^{2s\alpha(x)}dx \\
&\hspace{4cm} \le C\int_Q (|\nu|^2 + |\nabla \nu|^2 + |\kappa|^2 + |\nabla \kappa|^2)e^{2s\alpha}dxdt + C\mathcal{D}_2^2.
\end{aligned}
$$
Thus, (\ref{eq7}) yields
\begin{equation}
\begin{aligned}
&\int_{\Omega} s(|\nabla \nu|^2 + s^2\varphi^2 |\nu|^2 + |\nabla \kappa|^2 + s^2\varphi^2 |\kappa|^2)e^{2s\alpha} dx \\
&\hspace{1cm} \le C\int_Q (|\nu|^2 + |\nabla \nu|^2 + |\kappa|^2 + |\nabla \kappa|^2)e^{2s\alpha}dxdt + Cs\mathcal{D}^2,
\end{aligned}
\end{equation}
where
$$
\mathcal{D}^2\equiv \mathcal{D}_1^2 + \mathcal{D}_2^2.
$$
We can absorb the first integral on the RHS into the LHS for $s$ large:
\begin{align*}
&\int_{\Omega} (|\nabla \nu|^2 + |\nu|^2 + |\nabla \kappa|^2 + |\kappa|^2)e^{2s\alpha} dx \le Cs\mathcal{D}^2.
\end{align*}
Here we used $\alpha(x,t)\le\alpha(x,t_0)$, thanks to the choice of the function $l$. In the end, we fix $s$ sufficiently large so that the weight function $e^{2s\alpha}$ admits a positive lower bound in $\Omega$. This completes the proof of our main result. \qed

\subsection{Proof of Theorem \ref{thm:stabilityn}}

Note that $t_0\in (0,T)$ is the fixed time for measurements. By the assumptions (A1$^\prime$)-(A2$^\prime$), we substitute the coefficients $A$ and $b$ in Theorem \ref{thm:CEIP-reg} by $2\mathcal{E}(u_1(\cdot,t_0))$ and $\mathrm{rot} H_1(\cdot,t_0)$ so that we get Carleman type estimates. However, we cannot apply the theorem directly because we only know the information about $\nu,\kappa$ on the partial boundary $\Gamma$. Therefore we introduce the level sets
\begin{align} \label{con:levelsets1}
\Omega_\epsilon := \{x\in \Omega: d(x)>\epsilon\}\quad \text{for any }\epsilon>0.
\end{align}
Then we select a cut-off function $\chi_1\in C^\infty(\Bbb{R}^3)$ such that $0\le\chi_1\le 1$ and
\begin{equation*}
\chi_1 =\left\{
\begin{aligned}
&1\qquad \text{for }d > 4\epsilon, \\
&0\qquad \text{for }d < 3\epsilon.
\end{aligned}
\right.
\end{equation*}
By setting $\widetilde \nu = \chi_1\nu$ and $\widetilde \kappa = \chi_1\kappa$, we apply Theorem \ref{thm:CEIP-reg} to $\widetilde\nu,\widetilde\kappa$ with $O = \Omega_{3\epsilon}$.
Thanks to the choice of $\Omega_{3\epsilon}$ and $\chi_1$, we have $\partial\Omega_{3\epsilon}\cap\partial\Omega\subset\Gamma$ and $\widetilde\nu=0$ on $\partial\Omega_{3\epsilon}\cap \Omega$, which imply
\begin{equation} \label{eq5n}
\int_{\Omega_{3\epsilon}} (|\nabla \widetilde\nu|^2 + s^2\lambda^2\varphi^2(x,t_0) |\widetilde\nu|^2)e^{2s\varphi(x,t_0)} dx \le C\int_{\Omega_{3\epsilon}} |\mathrm{div}(2\widetilde\nu\mathcal{E}(u_1))(x,t_0)|^2 e^{2s\varphi(x,t_0)}dx.
\end{equation}
Also we derive
\begin{align}
\nonumber &\int_{\Omega_{3\epsilon}} (|\nabla \widetilde\kappa|^2 + s^2\lambda^2\varphi^2(x,t_0) |\widetilde\kappa|^2)e^{2s\varphi(x,t_0)} dx \le C\int_{\Omega_{3\epsilon}} |\mathrm{rot}(\widetilde\kappa\mathrm{rot}H_1)(x,t_0)|^2 e^{2s\varphi(x,t_0)}dx \\
\label{eq6n} &\hspace{4cm} + C\int_{\Omega_{3\epsilon}} \frac{1}{s^2\lambda^2\varphi^2(x,t_0)}|\nabla (\mathrm{rot}(\widetilde\kappa\mathrm{rot}H_1))(x,t_0)|^2 e^{2s\varphi(x,t_0)}dx.
\end{align}
Here the boundary integrals vanish since we have condition \eqref{con:bdyn}.
Direct calculations lead to the identities
$$
\mathrm{div}(2\widetilde\nu\mathcal{E}(u_1)) = \chi_1 \mathrm{div}(2\nu\mathcal{E}(u_1)) + \nu \mathcal{E}(u_1)\nabla \chi_1,\quad \nabla\widetilde\nu = \chi_1\nabla\nu + \nu\nabla\chi_1
$$
and
$$
\mathrm{rot}(\widetilde\kappa\mathrm{rot}H_1) = \chi_1 \mathrm{rot}(\kappa\mathrm{rot}H_1) + \kappa \nabla\chi_1\times\mathrm{rot} H_1,\quad \nabla\widetilde\kappa = \chi_1\nabla\kappa + \kappa\nabla\chi_1,
$$
which together with \eqref{eq5n} and \eqref{eq6n} imply
\begin{align*}
&\int_{\Omega_{4\epsilon}} (|\nabla\nu|^2 + s^2\lambda^2\varphi^2 |\nu|^2)e^{2s\varphi} dx \le C\int_{\Omega_{3\epsilon}} |\mathrm{div}(2\nu\mathcal{E}(u_1))|^2 e^{2s\varphi}dx + C\int_{\Omega_{3\epsilon}\setminus \overline{\Omega_{4\epsilon}}} |\nu|^2 e^{2s\varphi}dx
\end{align*}
and
\begin{align*}
&\int_{\Omega_{4\epsilon}} (|\nabla\kappa|^2 + s^2\lambda^2\varphi^2 |\kappa|^2)e^{2s\varphi} dx \le C\int_{\Omega_{3\epsilon}} \Big( |\mathrm{rot}(\kappa\mathrm{rot}H_1)|^2 + \frac{1}{s^2\lambda^2\varphi^2}|\nabla (\mathrm{rot}(\kappa\mathrm{rot}H_1))|^2 \Big) e^{2s\varphi}dx \\
&\hspace{5cm} + C\int_{\Omega_{3\epsilon}\setminus \overline{\Omega_{4\epsilon}}}\Big(|\kappa|^2 + \frac{1}{s^2\lambda^2\varphi^2}|\nabla\kappa|^2\Big)e^{2s\varphi}dx
\end{align*}
for all $\lambda\ge \lambda_0$, $s\ge s_0$ and $t=t_0$. Here and henceforth we may omit $t_0$ in the estimates, while we mean precisely that the estimates hold for $t=t_0$. The domain of the last integral above reduces to $\Omega_{3\epsilon}\setminus \overline{\Omega_{4\epsilon}}$ since the derivatives of $\chi_1$ vanish both on $\overline{\Omega_{4\epsilon}}$ and in $\Omega\setminus\overline{\Omega_{3\epsilon}}$.
In addition, we have $\varphi(\cdot,t_0)=e^{\lambda(d+c_0)}< e^{\lambda(4\epsilon+c_0)}$ in $\Omega_{3\epsilon}\setminus \overline{\Omega_{4\epsilon}}$. Thus we combine the above two inequalities to obtain
\begin{equation} \label{eq7n}
\begin{aligned}
&\int_{\Omega_{4\epsilon}} \big(|\nabla\nu|^2 + |\nabla\kappa|^2 + s^2\lambda^2\varphi^2 (|\nu|^2 + |\kappa|^2)\big)e^{2s\varphi} dx \le Ce^{2se^{\lambda(4\epsilon+c_0)}} (\|\nu\|_{L^2(\Omega_{3\epsilon})}^2 + \|\kappa\|_{H^1(\Omega_{3\epsilon})}^2) \\
&\hspace{2cm} +C\int_{\Omega_{3\epsilon}} \Big( |\mathrm{div}(2\nu\mathcal{E}(u_1))|^2 + |\mathrm{rot}(\kappa\mathrm{rot}H_1)|^2 + \frac{1}{s^2\lambda^2\varphi^2}|\nabla (\mathrm{rot}(\kappa\mathrm{rot}H_1))|^2 \Big) e^{2s\varphi}dx
\end{aligned}
\end{equation}
for all $\lambda\ge \lambda_0$, $s\ge s_0$ and $t=t_0$. By the governing system (\ref{sy:MHD6}), we estimate
\begin{align}
\nonumber &\text{[the second term on the RHS of \eqref{eq7n}]} \le C\int_{\Omega_{3\epsilon}} (|\partial_t u|^2 + |\Delta u|^2 + |\nabla u|^2 + |L_1(H,\nabla H)|^2 +|\nabla p|^2 )e^{2s\varphi}dx \\
\nonumber &\hspace{2.5cm} + C\int_{\Omega_{3\epsilon}} (|\partial_t H|^2 + |\Delta H|^2 + |\nabla H|^2 + |L_2(u,\nabla u)|^2 )e^{2s\varphi}dx \\
\nonumber &\hspace{2.5cm} + C\int_{\Omega_{3\epsilon}} \frac{1}{s^2\lambda^2\varphi^2}\big( |\nabla (\partial_t H)|^2 + |\nabla(\Delta H)|^2 + \sum_{i,j=1}^3|\partial_i\partial_j H|^2 + |\nabla (L_2(u,\nabla u))|^2\big)e^{2s\varphi}dx \\
\label{eq8n} &\hspace{2.8cm} \le C\int_{\Omega_{3\epsilon}} \Big(|\partial_t u|^2 + |\partial_t H|^2 + \frac{1}{s^2\lambda^2\varphi^2}|\nabla (\partial_t H)|^2\Big)e^{2s\varphi}dx + C\mathcal{D}_1^2,
\end{align}
where
$$
\mathcal{D}_1:= \|u(\cdot,t_0)\|_{H^2(\Omega_{3\epsilon})} + \|H(\cdot,t_0)\|_{H^3(\Omega_{3\epsilon})} + \|\nabla p(\cdot,t_0)\|_{L^2(\Omega_{3\epsilon})}.
$$
Next, we introduce another family of level sets:
\begin{align*}
Q_\epsilon := \{(x,t)\in Q: \psi(x,t) > \epsilon + c_0\}\qquad \text{for any }\epsilon>0.
\end{align*}
Then we have the following relations:
(\romannumeral1) $Q_\epsilon\subset\Omega_{\epsilon}\times (0,T)$, (\romannumeral2) $Q_\epsilon\supset\Omega_{\epsilon}\times \{t_0\}$.

\noindent In fact, if $(x,t)\in Q_\epsilon$, we have $d(x) - \beta(t-t_0)^2 > \epsilon$, i.e. $d(x)>\beta(t-t_0)^2 + \epsilon\ge \epsilon$. This means $x\in\Omega_{\epsilon}$, and (\romannumeral1) is verified. On the other hand, if $x\in\Omega_\epsilon$ and $t=t_0$, then $\psi(x,t_0) = d(x) - \beta(t_0-t_0)^2 + c_0 = d(x) + c_0 > \epsilon + c_0$. That is, $(x,t_0)\in Q_\epsilon$, and (\romannumeral2) is verified. Furthermore, we choose $\beta = \frac{\|d\|_{C(\overline{\Omega_1})}}{\delta^2}$, where $\delta:= \min\{t_0, T- t_0\}$, so that
(\romannumeral3) $\overline{Q_\epsilon}\cap (\Omega\times\{0,T\}) = \emptyset$

\noindent is valid. Indeed, for all $(x,t)\in \Omega\times\{0,T\}$, $\psi(x,t) = d(x) - \beta(t-t_0)^2 + c_0 \le \|d\|_{C(\overline{\Omega_1})} - \beta\delta^2 + c_0 = c_0$. This leads to $(x,t)\notin \overline{Q_\epsilon}$. Relations (\romannumeral1) -- (\romannumeral3) guarantee that $Q_\epsilon$ is a sub-domain of $Q$ and $\partial Q_\epsilon\cap \partial Q\subset \Gamma\times (0,T)$. Moreover, we assert that
(\romannumeral4) $\Omega_{3\epsilon}\times (t_0-\delta_{\epsilon},t_0)\subset Q_{2\epsilon}, \quad \delta_\epsilon := \sqrt{\frac{\epsilon}{\beta}} = \sqrt{\frac{\epsilon}{\|d\|_{C(\overline{\Omega_1})}}}\,\delta$.
\noindent Actually, for any $(x,t)\in \Omega_{3\epsilon}\times (t_0-\delta_{\epsilon},t_0)$, we have
$$
\psi(x,t) = d(x) - \beta(t-t_0)^2 + c_0 > 3\epsilon - \beta\delta_{\epsilon}^2 + c_0 = 2\epsilon + c_0,
$$
which implies $(x,t)\in Q_{2\epsilon}$. Now we construct a function $\eta\in C^2[0,T]$ such that $0\le \eta\le 1$ and
$$
\eta = \left\{
\begin{aligned}
& 1 \quad \text{in } [t_0-\tfrac12\delta_\epsilon, t_0+\tfrac12\delta_\epsilon], \\
& 0 \quad \text{in } [0,t_0-\delta_\epsilon]\cup [t_0+ \delta_\epsilon, T]
\end{aligned}
\right.
$$
for any small $\epsilon<\|d\|_{C(\overline{\Omega_1})}$. Then, noting that $\eta(t_0-\delta_\epsilon) = 0$ and $\eta(t_0) = 1$, we have
\begin{align}
\nonumber &\int_{\Omega_{3\epsilon}} |\partial_t u(x,t_0)|^2e^{2s\varphi(x,t_0)}dx = \int_{t_0-\delta_\epsilon}^{t_0} \frac{\partial}{\partial t}\bigg( \eta \int_{\Omega_{3\epsilon}} |\partial_t u|^2e^{2s\varphi}dx\bigg) dt \\
\nonumber &\hspace{3.7cm}= \int_{\Omega_{3\epsilon}}\int_{t_0-\delta_\epsilon}^{t_0} \bigg( 2\eta(\partial_t u\cdot\partial_t^2 u) + \big(2\eta s(\partial_t\varphi) + \eta^\prime \big) |\partial_t u|^2 \bigg) e^{2s\varphi}dxdt \\
\label{eq9n} &\hspace{3.7cm}\le C(\lambda)s^{-1}\int_{Q_{2\epsilon}} (|\partial_t^2 u|^2 + s^2|\partial_t u|^2)e^{2s\varphi}dxdt.
\end{align}
Here we used $s\ge 1$ and
\begin{align*}
&|\partial_t \varphi| = 2\beta\lambda\varphi |t-t_0| \le C(\lambda), \\
&2|\partial_t u\cdot\partial_t^2 u| \le \frac{1}{s}|\partial_t^2 u|^2 + s|\partial_t u|^2.
\end{align*}
Similarly, we have
\begin{align}
\nonumber &\int_{\Omega_{3\epsilon}} |\partial_t H(x,t_0)|^2e^{2s\varphi(x,t_0)}dx = \int_{t_0-\delta_\epsilon}^{t_0} \frac{\partial}{\partial t}\bigg( \eta \int_{\Omega_{3\epsilon}} |\partial_t H|^2e^{2s\varphi}dx\bigg) dt \\
\nonumber &\hspace{3.7cm}= \int_{\Omega_{3\epsilon}}\int_{t_0-\delta_\epsilon}^{t_0} \bigg( 2\eta (\partial_t H\cdot\partial_t^2 H) + \big(2\eta s(\partial_t\varphi) + \eta^\prime \big) |\partial_t H|^2 \bigg) e^{2s\varphi}dxdt \\
\label{eq10n} &\hspace{3.7cm}\le C(\lambda)s^{-1}\int_{Q_{2\epsilon}} (|\partial_t^2 H|^2 + s^2|\partial_t H|^2)e^{2s\varphi}dxdt
\end{align}
and
\begin{align}
\nonumber &\int_{\Omega_{3\epsilon}} \frac{1}{s^2\lambda^2\varphi(x,t_0)^2}|\nabla(\partial_t H(x,t_0))|^2e^{2s\varphi(x,t_0)}dx = \int_{t_0-\delta_\epsilon}^{t_0} \frac{\partial}{\partial t}\bigg( \eta \int_{\Omega_{3\epsilon}} \frac{1}{s^2\lambda^2\varphi^2}|\nabla(\partial_t H)|^2e^{2s\varphi}dx\bigg) dt \\
\nonumber &\hspace{1cm}= \int_{\Omega_{3\epsilon}}\int_{t_0-\delta_\epsilon}^{t_0} \bigg( \frac{2\eta}{s^2\lambda^2\varphi^2}(\nabla(\partial_t H):\nabla(\partial_t^2 H)) + \frac{2\eta s(\partial_t\varphi) + \eta^\prime}{s^2\lambda^2\varphi^2} |\nabla(\partial_t H)|^2 \bigg) e^{2s\varphi} dxdt \\
\label{eq11n} &\hspace{1cm}\le C(\lambda)s^{-1}\int_{Q_{2\epsilon}} (s^{-2}|\nabla(\partial_t^2 H)|^2 + |\nabla(\partial_t H)|^2)e^{2s\varphi} dxdt.
\end{align}
Set $w_1=\partial_t u$, $w_2=\partial_t^2 u$, $q_1=\partial_t p$, $q_2=\partial_t^2 p$ and $h_1=\partial_t H$, $h_2=\partial_t^2 H$.
Furthermore, we denote
$$
\mathcal{L}_1(u,p,H) := \partial_t u - \nu_2\Delta u + (u\!\cdot\!\nabla) u_2 + ((u_1-\!\nabla \nu_2)\!\cdot\!\nabla) u - \nabla u^T\!\cdot\!\nabla\nu_2 + L_1(H,\nabla H) + \nabla p
$$
and
$$
\mathcal{L}_2(u,H) := \partial_t H - \kappa_2\Delta H - (H\!\cdot\!\nabla) u_2 + (u_1\!\cdot\!\nabla) H + \nabla\kappa_2\times\mathrm{rot}H + L_2(u,\nabla u).
$$
Then according to our governing system (\ref{sy:MHD6}), we have
$$
\left\{
\begin{aligned}
&\ \mathcal{L}_1(u,p,H) = \mathrm{div}(2\nu\mathcal{E}(u_1)), \\
&\ \mathcal{L}_2(u,H) = -\mathrm{rot}(\kappa\mathrm{rot}H_1), \\
&\ \mathrm{div}\; u = 0, \quad \mathrm{div}\; H = 0
\end{aligned}
\right.
$$
and
$$
\left\{
\begin{aligned}
&\mathcal{L}_1(w_1,q_1,h_1) = \mathrm{div}(2\nu\mathcal{E}(\partial_t u_1)) - (u\!\cdot\!\nabla)(\partial_t u_2) - ((\partial_t u_1)\!\cdot\!\nabla) u - L_{1t}(H,\nabla H), \\
&\mathcal{L}_2(w_1,h_1) = -\mathrm{rot}(\kappa\mathrm{rot}(\partial_t H_1)) + (H\!\cdot\!\nabla) (\partial_t u_2) - ((\partial_t u_1)\!\cdot\!\nabla) H - L_{2t}(u,\nabla u), \\
& \mathrm{div}\; w_1 = 0, \quad \mathrm{div}\; h_1 = 0
\end{aligned}
\right.
$$
and
$$
\left\{
\begin{aligned}
&\mathcal{L}_1(w_2,q_2,h_2) = \mathrm{div}(2\nu\mathcal{E}(\partial_t^2 u_1)) - 2(w_1\!\cdot\!\nabla)(\partial_t u_2) - 2((\partial_t u_1)\!\cdot\!\nabla) w_1 - 2L_{1t}(h_1,\nabla h_1) \\
&\hspace{5cm} - (u\!\cdot\!\nabla)(\partial_t^2 u_2) - ((\partial_t^2 u_1)\!\cdot\!\nabla) u - L_{1tt}(u,\nabla u), \\
&\mathcal{L}_2(w_2,h_2) = -\mathrm{rot}(\kappa\mathrm{rot}(\partial_t^2 H_1)) + 2(h_1\!\cdot\!\nabla) (\partial_t u_2) - 2((\partial_t u_1)\!\cdot\!\nabla) h_1 - 2L_{2t}(w_1,\nabla w_1) \\
&\hspace{5cm} + (H\!\cdot\!\nabla) (\partial_t^2 u_2) - ((\partial_t^2 u_1)\!\cdot\!\nabla) H - L_{2tt}(u,\nabla u), \\
& \mathrm{div}\; w_2 = 0, \quad \mathrm{div}\; h_2 = 0.
\end{aligned}
\right.
$$
Choosing a cut-off function $\chi_2\in C^\infty(\Bbb{R}^4)$ which satisfies $0\le\chi_2\le 1$ and
$$
\chi_2 =\left\{
\begin{aligned}
& 1\qquad \text{for }\psi > 2\epsilon + c_0, \\
& 0\qquad \text{for }\psi < \epsilon +c_0,
\end{aligned}
\right.
$$
we rewrite the above three systems as
$$
\left\{
\begin{aligned}
&\ \mathcal{L}_1(\widetilde u,\widetilde p,\widetilde H) = \big(\mathcal{L}_1(\widetilde u,\widetilde p,\widetilde H) - \chi_2\mathcal{L}_1(u,p,H) \big) +\chi_2\mathrm{div}(2\nu\mathcal{E}(u_1)), \\
&\ \mathcal{L}_2(\widetilde u,\widetilde H) = \big(\mathcal{L}_2(\widetilde u,\widetilde H) - \chi_2\mathcal{L}_2(u,H) \big)-\chi_2\mathrm{rot}(\kappa\mathrm{rot}H_1), \\
&\ \mathrm{div}\; \widetilde u = \nabla\chi_2\cdot u
\end{aligned}
\right.
$$
and
$$
\left\{
\begin{aligned}
&\mathcal{L}_1(\widetilde w_1,\widetilde q_1,\widetilde h_1) = \big(\mathcal{L}_1(\widetilde w_1,\widetilde q_1,\widetilde h_1) - \chi_2\mathcal{L}_1(w_1,q_1,h_1)\big) + \chi_2\mathrm{div}(2\nu\mathcal{E}(\partial_t u_1)) - \chi_2(u\!\cdot\!\nabla)(\partial_t u_2) \\
&\hspace{2cm} - \chi_2((\partial_t u_1)\!\cdot\!\nabla) u - \chi_2 L_{1t}(H,\nabla H), \\
&\mathcal{L}_2(\widetilde w_1,\widetilde h_1) = \big(\mathcal{L}_2(\widetilde w_1,\widetilde h_1) - \chi_2\mathcal{L}_2(w_1,h_1)\big) - \chi_2\mathrm{rot}(\kappa\mathrm{rot}(\partial_t H_1)) + \chi_2(H\!\cdot\!\nabla) (\partial_t u_2) \\
&\hspace{2cm} - \chi_2((\partial_t u_1)\!\cdot\!\nabla) H - \chi_2 L_{2t}(u,\nabla u), \\
& \mathrm{div}\; \widetilde w_1 = \nabla\chi_2\cdot w_1
\end{aligned}
\right.
$$ and $$ \left\{ \begin{aligned} &\mathcal{L}_1(\widetilde w_2,\widetilde q_2,\widetilde h_2) = \big(\mathcal{L}_1(\widetilde w_2,\widetilde q_2,\widetilde h_2) -\chi_2\mathcal{L}_1(w_2,q_2,h_2)\big) + \chi_2\mathrm{div}(2\nu\mathcal{E}(\partial_t^2 u_1)) - 2\chi_2(w_1\!\cdot\!\nabla)(\partial_t u_2) \\ &\hspace{0.8cm} - 2\chi_2((\partial_t u_1)\!\cdot\!\nabla) w_1 - 2\chi_2 L_{1t}(h_1,\nabla h_1) - \chi_2(u\!\cdot\!\nabla)(\partial_t^2 u_2) - \chi_2((\partial_t^2 u_1)\!\cdot\!\nabla) u - \chi_2 L_{1tt}(u,\nabla u), \\ &\mathcal{L}_2(\widetilde w_2,\widetilde h_2) = \big(\mathcal{L}_2(\widetilde w_2,\widetilde h_2) - \chi_2\mathcal{L}_2(w_2,h_2)\big) - \chi_2\mathrm{rot}(\kappa\mathrm{rot}(\partial_t^2 H_1)) + 2\chi_2(h_1\!\cdot\!\nabla) (\partial_t u_2) \\ &\hspace{0.8cm} - 2\chi_2((\partial_t u_1)\!\cdot\!\nabla) h_1 - 2\chi_2 L_{2t}(w_1,\nabla w_1) + \chi_2(H\!\cdot\!\nabla) (\partial_t^2 u_2) - \chi_2((\partial_t^2 u_1)\!\cdot\!\nabla) H - \chi_2 L_{2tt}(u,\nabla u), \\ & \mathrm{div}\; \widetilde w_2 = \nabla\chi_2\cdot w_2 \end{aligned} \right. $$ where $\widetilde u = \chi_2 u$, $\widetilde w_1 =\chi_2 w_1$, $\widetilde w_2 = \chi_2 w_2$, etc.
Then we can apply the Carleman estimate (Theorem \ref{thm:CEDPn}) to $(\widetilde u,\widetilde p,\widetilde H)$, $(\widetilde w_1,\widetilde q_1,\widetilde h_1)$ and $(\widetilde w_2,\widetilde q_2,\widetilde h_2)$ respectively and obtain \begin{equation*} \begin{aligned} &\|(\widetilde u,\widetilde p,\widetilde H)\|_{\sigma_s(Q)}^2 \le \;C\int_Q s\varphi\chi_2^2(|\nu|^2 + |\nabla \nu|^2 + |\kappa|^2 + |\nabla \kappa|^2 )e^{2s\varphi}dxdt \\ &\hspace{0cm} + C\int_Q \Big(\sum_{i,j=1}^3|\partial_i\partial_j\chi_2|^2 + |\nabla_{x,t}\chi_2|^2 + |\nabla(\partial_t \chi_2)|^2\Big)(s\varphi(|\nabla_{x,t} u|^2 + |u|^2) + |\nabla H|^2 + |H|^2 + |p|^2)e^{2s\varphi}dxdt \\ &\hspace{0cm} + Ce^{Cs}\bigg(\|\widetilde u\|_{L^2(\Sigma)}^2 + \|\nabla_{x,t} \widetilde u\|_{L^2(\Sigma)}^2 + \|\widetilde H\|_{L^2(\Sigma)}^2 + \|\nabla_{x,t} \widetilde H\|_{L^2(\Sigma)}^2 + \|\widetilde p\|_{L^2(0,T;H^{\frac{1}{2}}(\partial \Omega))}^2 \bigg) \end{aligned} \end{equation*} and \begin{equation*} \begin{aligned} &\|(\widetilde w_1,\widetilde q_1,\widetilde h_1)\|_{\sigma_s(Q)}^2 \le \;C\int_Q s\varphi\chi_2^2(|\nu|^2 + |\nabla \nu|^2 + |\kappa|^2 + |\nabla \kappa|^2 + |u|^2 + |\nabla u|^2 + |H|^2 + |\nabla H|^2)e^{2s\varphi}dxdt \\ &\hspace{0cm} + C\int_Q \Big(\sum_{i,j=1}^3|\partial_i\partial_j\chi_2|^2 + |\nabla_{x,t}\chi_2|^2 + |\nabla(\partial_t \chi_2)|^2\Big)(s\varphi(|\nabla_{x,t} w_1|^2 + |w_1|^2) + |\nabla h_1|^2 + |h_1|^2 + |q_1|^2)e^{2s\varphi}dxdt \\ &\hspace{0cm} + Ce^{Cs}\bigg(\|\widetilde w_1\|_{L^2(\Sigma)}^2 + \|\nabla_{x,t} \widetilde w_1\|_{L^2(\Sigma)}^2 + \|\widetilde h_1\|_{L^2(\Sigma)}^2 + \|\nabla_{x,t} \widetilde h_1\|_{L^2(\Sigma)}^2 + \|\widetilde q_1\|_{L^2(0,T;H^{\frac{1}{2}}(\partial \Omega))}^2 \bigg) \end{aligned} \end{equation*} and \begin{equation*} \begin{aligned} &\|(\widetilde w_2,\widetilde q_2,\widetilde h_2)\|_{\sigma_s(Q)}^2 \le \;C\int_Q s\varphi\chi_2^2(|\nu|^2 +
|\nabla \nu|^2 + |\kappa|^2 + |\nabla \kappa|^2 + |u|^2 + |\nabla u|^2 )e^{2s\varphi}dxdt \\ &\hspace{0cm} + C\int_Q s\varphi\chi_2^2(|H|^2 + |\nabla H|^2 + |w_1|^2 + |\nabla w_1|^2 + |h_1|^2 + |\nabla h_1|^2)e^{2s\varphi}dxdt \\ &\hspace{0cm} + C\int_Q \Big(\sum_{i,j=1}^3|\partial_i\partial_j\chi_2|^2 + |\nabla_{x,t}\chi_2|^2 + |\nabla(\partial_t \chi_2)|^2\Big)(s\varphi(|\nabla_{x,t} w_2|^2 + |w_2|^2) + |\nabla h_2|^2 + |h_2|^2 + |q_2|^2)e^{2s\varphi}dxdt \\ &\hspace{0cm} + Ce^{Cs}\bigg(\|\widetilde w_2\|_{L^2(\Sigma)}^2 + \|\nabla_{x,t} \widetilde w_2\|_{L^2(\Sigma)}^2 + \|\widetilde h_2\|_{L^2(\Sigma)}^2 + \|\nabla_{x,t} \widetilde h_2\|_{L^2(\Sigma)}^2 + \|\widetilde q_2\|_{L^2(0,T;H^{\frac{1}{2}}(\partial \Omega))}^2 \bigg) \end{aligned} \end{equation*} for all large fixed $\lambda$ and all $s\ge \hat{s}$. Combining the above three estimates and absorbing the lower-order terms on the RHS leads to \begin{equation} \label{eq12n} \begin{aligned} \sum_{j=0}^2\|(\partial_t^j u,\partial_t^j p, \partial_t^j H)\|_{\sigma_s(Q_{2\epsilon})}^2 \le\; C\int_{Q_\epsilon} s\varphi(|\nu|^2 + |\nabla \nu|^2 + |\kappa|^2 + |\nabla \kappa|^2)e^{2s\varphi}dxdt + Low + Ce^{Cs}\mathcal{D}_2^2 \end{aligned} \end{equation} for all large fixed $\lambda\ge \lambda_0$ and all $s\ge \hat{s}$.
Here $$ \begin{aligned} &Low := C\int_{Q_\epsilon} \Big(\sum_{i,j=1}^3|\partial_i\partial_j\chi_2|^2 + |\nabla_{x,t}\chi_2|^2 + |\nabla(\partial_t \chi_2)|^2\Big)(s\varphi(|\nabla_{x,t} u|^2 + |u|^2) + |\nabla H|^2 + |H|^2 + |p|^2)e^{2s\varphi}dxdt \\ &\hspace{0cm} +C\int_{Q_\epsilon} \Big(\sum_{i,j=1}^3|\partial_i\partial_j\chi_2|^2 + |\nabla_{x,t}\chi_2|^2 + |\nabla(\partial_t \chi_2)|^2\Big)(s\varphi(|\nabla_{x,t} w_1|^2 + |w_1|^2) + |\nabla h_1|^2 + |h_1|^2 + |q_1|^2)e^{2s\varphi}dxdt \\ &\hspace{0cm} + C\int_{Q_\epsilon} \Big(\sum_{i,j=1}^3|\partial_i\partial_j\chi_2|^2 + |\nabla_{x,t}\chi_2|^2 + |\nabla(\partial_t \chi_2)|^2\Big)(s\varphi(|\nabla_{x,t} w_2|^2 + |w_2|^2) + |\nabla h_2|^2 + |h_2|^2 + |q_2|^2)e^{2s\varphi}dxdt \\ &\mathcal{D}_2 := \sum_{j=0}^2 \Big(\|\partial_t^j u\|_{L^2(\Gamma\times (0,T))} + \|\partial_t^j(\nabla_{x,t}u)\|_{L^2(\Gamma\times (0,T))}\Big) + \sum_{j=0}^2 \Big(\|\partial_t^j H\|_{L^2(\Gamma\times (0,T))} + \|\partial_t^j(\nabla_{x,t}H)\|_{L^2(\Gamma\times (0,T))}\Big) \\ &\hspace{1cm} + \sum_{j=0}^2 \Big(\|\partial_t^j p\|_{L^2(0,T;H^{\frac{1}{2}}(\Gamma))}\Big). \end{aligned} $$ From the choice of $\chi_2$, we see that its derivatives vanish in $Q_{2\epsilon}$. Since $\psi(x,t)$ is bounded above by $2\epsilon + c_0$ when $(x,t)$ lies outside of $Q_{2\epsilon}$, we can simplify $Low$: $$ Low \le Cse^{2se^{\lambda(2\epsilon+c_0)}}\sum_{j=0}^2\Big(\|\partial_t^j u\|_{H^{1,1}(Q_\epsilon)}^2 + \|\partial_t^j H\|_{H^{1,0}(Q_\epsilon)}^2 + \|\partial_t^j p\|_{L^2(Q_\epsilon)}^2\Big) =: Cse^{2se^{\lambda(2\epsilon+c_0)}} M_1^2 $$ where $M$ is defined in Theorem \ref{thm:stabilityn}.
In terms of \eqref{eq12n}, we immediately have \begin{align*} &\int_{Q_{2\epsilon}} \big(s^2|\partial_t u|^2 + s^2|\partial_t^2 u|^2 + s^2|\partial_t H|^2 + s^2|\partial_t^2 H|^2 + |\nabla (\partial_t H)|^2 + |\nabla(\partial_t^2 H)|^2\big) e^{2s\varphi}dxdt \\ &\hspace{2cm} \le C\int_{Q_\epsilon} (|\nu|^2 + |\nabla \nu|^2 + |\kappa|^2 + |\nabla \kappa|^2)e^{2s\varphi}dxdt + Ce^{2se^{\lambda(2\epsilon+c_0)}} M_1^2 + Ce^{Cs}\mathcal{D}_2^2. \end{align*} We then insert the above inequality into \eqref{eq9n}--\eqref{eq11n} and obtain $$ \begin{aligned} &C\int_{\Omega_{3\epsilon}} (|\partial_t u(x,t_0)|^2 + |\partial_t H(x,t_0)|^2 + \frac{1}{s^2\lambda^2\varphi^2(x,t_0)}|\nabla(\partial_t H(x,t_0))|^2)e^{2s\varphi(x,t_0)}dx \\ &\hspace{1cm} \le Cs^{-1}\int_{Q_{\epsilon}} (|\nu|^2 + |\nabla \nu|^2 + |\kappa|^2 + |\nabla \kappa|^2)e^{2s\varphi}dxdt + Ce^{2se^{\lambda(2\epsilon+c_0)}} M_1^2 + Ce^{Cs}\mathcal{D}_2^2 \end{aligned} $$ which along with \eqref{eq7n} and \eqref{eq8n} yields \begin{align} \nonumber &\int_{\Omega_{4\epsilon}} (|\nabla \nu|^2 + s^2\varphi^2 |\nu|^2 + |\nabla \kappa|^2 + s^2\varphi^2 |\kappa|^2)e^{2s\varphi(x,t_0)} dx \le Cs^{-1}\int_{Q_\epsilon} (|\nu|^2 + |\nabla \nu|^2 + |\kappa|^2 + |\nabla \kappa|^2)e^{2s\varphi}dxdt \\ \label{eq13n} &\hspace{2cm} + Ce^{2se^{\lambda(4\epsilon+c_0)}}(\|\nu\|_{L^2(\Omega_{3\epsilon})}^2 + \|\kappa\|_{H^1(\Omega_{3\epsilon})}^2) + Ce^{2se^{\lambda(2\epsilon+c_0)}} M_1^2 + Ce^{Cs}\mathcal{D}^2 \end{align} where \begin{align*} \mathcal{D}^2 = \mathcal{D}_1^2 + \mathcal{D}_2^2.
\end{align*} We carefully calculate the first term on the RHS of \eqref{eq13n}: \begin{align*} &Cs^{-1}\int_{Q_\epsilon} (|\nu|^2 + |\nabla \nu|^2 + |\kappa|^2 + |\nabla \kappa|^2)e^{2s\varphi}dxdt \\ &= Cs^{-1}\int_{Q_{4\epsilon}} (|\nu|^2 + |\nabla \nu|^2 + |\kappa|^2 + |\nabla \kappa|^2)e^{2s\varphi}dxdt + Cs^{-1}\int_{Q_\epsilon\setminus Q_{4\epsilon}} (|\nu|^2 + |\nabla \nu|^2 + |\kappa|^2 + |\nabla \kappa|^2)e^{2s\varphi}dxdt \\ &\le Cs^{-1}\int_0^T\int_{\Omega_{4\epsilon}} (|\nu|^2 + |\nabla \nu|^2 + |\kappa|^2 + |\nabla \kappa|^2)e^{2s\varphi}dxdt + CTe^{2se^{\lambda(4\epsilon+c_0)}}\int_{\Omega_{\epsilon}} (|\nu|^2 + |\nabla \nu|^2 + |\kappa|^2 + |\nabla \kappa|^2)dx \\ &\le Cs^{-1} \int_{\Omega_{4\epsilon}} (|\nu|^2 + |\nabla \nu|^2 + |\kappa|^2 + |\nabla \kappa|^2)e^{2s\varphi(x,t_0)}dx + Ce^{2se^{\lambda(4\epsilon+c_0)}}(\|\nu\|_{H^1(\Omega_{\epsilon})}^2 + \|\kappa\|_{H^1(\Omega_{\epsilon})}^2) \end{align*} where we note that $\varphi(x,t)$ attains its maximum at $t=t_0$ for any $x\in \Omega$. Thus we can absorb the first term on the RHS above by taking $s$ large (e.g. $s\ge s_1$), which gives \begin{align} \label{eq14n} \int_{\Omega_{4\epsilon}} (|\nabla \nu|^2 + s^2\varphi^2 |\nu|^2 + |\nabla \kappa|^2 + s^2\varphi^2 |\kappa|^2)e^{2s\varphi(x,t_0)} dx \le Ce^{2se^{\lambda(4\epsilon+c_0)}} M^2 + Ce^{Cs}\mathcal{D}^2. \end{align} Here $$ M^2 = M_1^2 + \|\nu\|_{H^1(\Omega_{3\epsilon})}^2 + \|\kappa\|_{H^1(\Omega_{3\epsilon})}^2.
$$ On the other hand, the LHS of \eqref{eq14n} can be estimated from below: \begin{align*} &\int_{\Omega_{4\epsilon}} (|\nabla \nu|^2 + s^2\varphi^2 |\nu|^2 + |\nabla \kappa|^2 + s^2\varphi^2 |\kappa|^2)e^{2s\varphi(x,t_0)} dx \ge \int_{\Omega_{5\epsilon}} (|\nabla \nu|^2 + |\nu|^2 + |\nabla \kappa|^2 + |\kappa|^2)e^{2s\varphi(x,t_0)} dx \\ &\hspace{8.2cm} \ge e^{2se^{\lambda(5\epsilon+c_0)}}(\|\nu\|_{H^1(\Omega_{5\epsilon})}^2 + \|\kappa\|_{H^1(\Omega_{5\epsilon})}^2). \end{align*} Therefore \eqref{eq14n} indicates \begin{align} \label{eq15n} &\|\nu\|_{H^1(\Omega_{5\epsilon})}^2 + \|\kappa\|_{H^1(\Omega_{5\epsilon})}^2 \le Ce^{-C_0s}M^2 + Ce^{Cs}\mathcal{D}^2 \end{align} for all $s\ge s_2=\max\{s_0,s_1\}$ with $C_0 = 2e^{\lambda(4\epsilon+c_0)}(e^{\lambda\epsilon} -1)>0$. We can replace $s$ by $s+s_2$ so that \eqref{eq15n} holds for all $s\ge 0$. Finally, we apply a well-known argument to reach the stability inequality of H\"{o}lder type \eqref{eq:loc-stab}. For reference, see the final step of the proof on p.~28 of \cite{Y09}. This completes the proof of our main result. \qed \noindent $\mathbf{Remark.}$ Sometimes the following case is considered. The coefficients $\nu,\kappa$ are given in the more general form $$ \nu(x,t) = \tilde{\nu}(x)r_1(x,t), \quad \kappa(x,t) = \tilde{\kappa}(x)r_2(x,t),\qquad \text{for } (x,t)\in Q $$ where $r_1,r_2$ are two given functions. The above stability inequality for $\tilde{\nu}$ and $\tilde{\kappa}$ still holds if we add some smoothness and nonvanishing assumptions on $r_1$ and $r_2$. The proof is similar, but more attention must be paid to the order of the large parameter $s$. \begin{thebibliography}{99} \bibitem{BY06} M. Bellassoued and M.
Yamamoto, Inverse source problem for the wave equation, Hindawi Publishing Corporation, Proceedings of the Conference on Differential and Difference Equations and Applications, 2006, 149-158. \bibitem{C39} T. Carleman, Sur un probl\`{e}me d'unicit\'{e} pour les syst\`{e}mes d'\'{e}quations aux d\'{e}riv\'{e}es partielles \`{a} deux variables ind\'{e}pendantes, Ark. Mat. Astr. Fys., 2 B (1939), 1-9. \bibitem{CIK96} D. Chae, O. Yu. Imanuvilov and S. M. Kim, Exact controllability for semilinear parabolic equations with Neumann boundary conditions, Journal of Dynamical and Control Systems, Vol. 2, No. 4 (1996), 449-483. \bibitem{CIPY13} M. Choulli, O. Yu. Imanuvilov, J.-P. Puel and M. Yamamoto, Inverse source problem for linearized Navier-Stokes equations with data in arbitrary sub-domain, Appl. Anal. 92 (2013), 2127-2143. \bibitem{E86} Y. V. Egorov, Linear Differential Equations of Principal Type, Consultants Bureau, New York, 1986. \bibitem{FL15} J. Fan and J. Li, A logarithmic regularity criterion for the 3D generalized MHD system, Math. Meth. Appl. Sci., doi: 10.1002/mma.3480, 2015. \bibitem{FI96} A. V. Fursikov and O. Yu. Imanuvilov, Controllability of Evolution Equations, Seoul National University, Korea, 1996. \bibitem{GO13} P. Gaitan and H. Ouzzane, Inverse problem for a free transport equation using Carleman estimates, Applicable Analysis, http://dx.doi.org/10.1080/00036811.2013.816686, 2013. \bibitem{H85} L. H\"{o}rmander, The Analysis of Linear Partial Differential Operators I--IV, Springer, Berlin, 1985. \bibitem{HPS06} T. Hav\^{a}rneanu, C. Popa and S. S. Sritharan, Exact internal controllability for the magnetohydrodynamic equations in multi-connected domains, Adv. Differential Equations, Vol. 11, No. 8 (2006), 893-929. \bibitem{HPS07} T. Hav\^{a}rneanu, C. Popa and S. S. Sritharan, Exact internal controllability for the two-dimensional magnetohydrodynamic equations, SIAM J. Control Optim., Vol.
46, No. 5 (2007), 1802-1830. \bibitem{I95} O. Yu. Imanuvilov, Controllability of parabolic equations, Sbornik Math. 186 (1995), 879-900. \bibitem{IP03} O. Yu. Imanuvilov and J.-P. Puel, Global Carleman estimates for weak solutions of elliptic nonhomogeneous Dirichlet problems, IMRN 16 (2003), 883-913. \bibitem{IPY09} O. Yu. Imanuvilov, J.-P. Puel and M. Yamamoto, Carleman estimates for parabolic equations with nonhomogeneous boundary conditions, Chin. Ann. Math. Ser. B 30 (2009), 333-378. \bibitem{I90} V. Isakov, Inverse Source Problems, American Mathematical Society, Providence, RI, 1990. \bibitem{I98} V. Isakov, Inverse Problems for Partial Differential Equations, Springer, Berlin, 1998. \bibitem{LQ13} T. Li and T. Qin, Physics and Partial Differential Equations, Higher Education Press, Beijing, Vol. 1, 2013. \bibitem{DL13} H. Lin and L. Du, Regularity criteria for incompressible magnetohydrodynamics equations in three dimensions, Nonlinearity, Vol. 26, No. 1 (2013), 219-239. \bibitem{T96} D. Tataru, Carleman estimates and unique continuation for solutions to boundary value problems, J. Math. Pures Appl. 75 (1996), 367-408. \bibitem{T81} M. Taylor, Pseudodifferential Operators, Princeton University Press, Princeton, NJ, 1981. \bibitem{Y09} M. Yamamoto, Carleman estimates for parabolic equations and applications, Inverse Problems 25 (2009), 123013. \end{thebibliography} \end{document}
\begin{document} \title{Universal destabilization and slowing of spin transfer functions by a bath of spins} \author{Daniel Burgarth and Sougato Bose} \affiliation{Department of Physics \& Astronomy, University College London, Gower St., London WC1E 6BT, UK} \begin{abstract} We investigate the effect of a spin bath on the spin transfer functions of a permanently coupled spin system. When each spin is coupled to a separate environment, the effect on the transfer functions in the first excitation sector is amazingly simple: the group velocity is slowed down by a factor of two, and the fidelity is destabilized by a modulation of $\left|\cos Gt\right|,$ where $G$ is the mean square coupling to the environment. \end{abstract} \maketitle \paragraph*{Introduction:---} Recently suggested protocols \cite{Sougato,C1,NJP} offer a new perspective on the physics of strongly coupled spin systems. They demonstrate that the coherent transfer of spin flips can be used to transfer unknown quantum states and entanglement, a task of paramount importance in any quantum information application \cite{Nielsen}. Generally, the relevant quantities determining the performance of the mentioned protocols are the time dependent transition amplitudes of local spin flips in a ferromagnetic ground state. We will refer to these amplitudes as {}``spin transfer functions''. The same functions also occur in the charge and energy transfer dynamics in molecular systems \cite{EXCITON} and in continuous time random walks \cite{OLI}, to which our results equally apply. It is both important and interesting to ask how these transfer functions change if the intended couplings between the spins are accompanied by unwanted couplings to environmental spins which do not take part in the transport. It is well known from the theory of open quantum systems \cite{key-36} that this can lead to dissipation and decoherence, which also means that quantum information is lost.
Here we consider a model where the system is coupled to a spin environment through an exchange interaction, because the same type of coupling is also responsible for the transport of the information through the system. Moreover, this coupling offers the unique opportunity of an analytic solution of our problem without {\em any} approximations regarding the strength of the system-environment coupling (in most treatments of the effect of an environment on the evolution of a quantum system, the system-environment coupling is assumed to be weak) and allows us to include inhomogeneous interactions of the bath spins with the system. For such a coupling, decoherence is possible for mixed (thermal) initial bath states \cite{key-33}. However, if the system and bath are both initially cooled to their ground states, is there still a non-trivial effect of the environment on the spin transfer functions? In this paper we find that there are two important effects: the spin transfer functions are slowed down and destabilized by the environment. This has both positive and negative implications for the use of strongly coupled spin systems as quantum communication channels. \paragraph*{Model:---} We choose to start with a specific spin system, i.e. an open spin chain of arbitrary length $N,$ with a Hamiltonian given by\begin{equation} H_{S}=-\frac{1}{2}\sum_{\ell=1}^{N-1}J_{\ell}\left(X_{\ell}X_{\ell+1}+Y_{\ell}Y_{\ell+1}\right),\end{equation} where $J_{\ell}$ are some arbitrary couplings and $X_{\ell}$ and $Y_{\ell}$ are the Pauli X and Y matrices for the $\ell$th spin. Towards the end of the paper we will however show that our results hold for any system where the number of excitations is conserved during dynamical evolution.
In addition to the chain Hamiltonian, each spin $\ell$ of the chain interacts with an independent bath of $M_{\ell}$ environmental spins (see Fig.~\ref{fig:spinchain}) via an inhomogeneous Hamiltonian,\begin{equation} H_{I}^{(\ell)}=-\frac{1}{2}\sum_{k=1}^{M_{\ell}}g_{k}^{(\ell)}\left(X_{\ell}X_{k}^{(\ell)}+Y_{\ell}Y_{k}^{(\ell)}\right).\end{equation} \begin{figure} \caption{\label{fig:spinchain}} \end{figure} In the above expression, the Pauli matrices $X_{\ell}$ and $Y_{\ell}$ act on the $\ell$th spin of the chain, whereas $X_{k}^{(\ell)}$ and $Y_{k}^{(\ell)}$ act on the $k$th environmental spin attached to the $\ell$th spin of the chain. We denote the total interaction Hamiltonian by\begin{equation} H_{I}\equiv\sum_{\ell=1}^{N}H_{I}^{(\ell)}.\end{equation} The total Hamiltonian is given by $H=H_{S}+H_{I},$ where it is important to note that $\left[H_{S},H_{I}\right]\neq0.$ The ground state of the system is given by the fully polarized state $|0,0\rangle,$ with all chain and bath spins aligned along the z-axis. The above Hamiltonian describes an extremely complex and disordered system with a Hilbert space of dimension $2^{N+\sum_{\ell}M_{\ell}}.$ In the context of state transfer, however, only the dynamics of the first excitation sector is relevant. We proceed by mapping this sector to a much simpler system \cite{Alexandra,Alexandra2}.
For $\ell=1,2,\ldots,N$ we define the states\begin{equation} |\ell,0\rangle\equiv\sigma_{\ell}^{+}|0,0\rangle\end{equation} and\begin{equation} |0,\ell\rangle\equiv\frac{1}{G_{\ell}}\sum_{k=1}^{M_{\ell}}g_{k}^{(\ell)}\sigma_{k}^{+(\ell)}|0,0\rangle\end{equation} with \begin{equation} G_{\ell}=\sqrt{\sum_{k=1}^{M_{\ell}}\left(g_{k}^{(\ell)}\right)^{2}}.\label{eq:effect}\end{equation} It is easily verified that\begin{eqnarray} H_{S}|\ell,0\rangle & = & -J_{\ell-1}(1-\delta_{\ell1})|\ell-1,0\rangle-J_{\ell}(1-\delta_{\ell N})|\ell+1,0\rangle\nonumber \\ H_{S}|0,\ell\rangle & = & 0,\label{eq:hc_action}\end{eqnarray} and\begin{eqnarray} H_{I}|\ell,0\rangle & = & -G_{\ell}|0,\ell\rangle\label{eq:hb_action}\\ H_{I}|0,\ell\rangle & = & -G_{\ell}|\ell,0\rangle.\label{eq:hb_action2}\end{eqnarray} Hence these states define a $2N$-dimensional subspace that is invariant under the action of $H.$ This subspace is equivalent to the first excitation sector of a system of $2N$ spin $1/2$ particles, coupled as shown in Fig.~\ref{fig:spinchain_equiv}. \begin{figure} \caption{\label{fig:spinchain_equiv}} \end{figure} Our main assumption is that the bath couplings are \emph{in effect} the same, i.e. $G_{\ell}=G$ for all $\ell$. Note however that the individual number of bath spins $M_{\ell}$ and bath couplings $g_{k}^{(\ell)}$ may still depend on $\ell$ and $k$ as long as their mean-square average is the same. Also, our analytic solution given in the next paragraph relies on this assumption, but numerics show that our main result {[}Equation~(\ref{eq:scalingformula}){]} remains a good approximation if the $G_{\ell}$ vary slightly and we take $G\equiv\left\langle G_{\ell}\right\rangle .$ \paragraph*{\label{sec:Solving-the-Schroedinger}Results:---} In this paragraph, we solve the Schr\"{o}dinger equation for the model outlined above and discuss the spin transfer functions.
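Before turning to the solution, the collective-state relations above lend themselves to a brute-force check. The sketch below is our own illustration (not part of the paper): it builds the full Hamiltonian for $N=2$ chain spins with $M_{\ell}=2$ bath spins each, with arbitrarily chosen couplings, and verifies Eqs.~(\ref{eq:hc_action})--(\ref{eq:hb_action2}).

```python
import numpy as np
from functools import reduce

# Brute-force check of H_I |l,0> = -G_l |0,l>, H_I |0,l> = -G_l |l,0>
# and H_S |0,l> = 0 for N = 2 chain spins with M_l = 2 bath spins each.
# All sizes and coupling values are arbitrary illustrative choices.

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
sp = np.array([[0, 0], [1, 0]], dtype=complex)   # sigma^+ with |ground> = (1,0)

def op(i, A, n=6):
    """A acting on qubit i of an n-qubit register (qubits 0,1 = chain)."""
    mats = [np.eye(2, dtype=complex)] * n
    mats[i] = A
    return reduce(np.kron, mats)

J = 1.3
g = {1: [0.4, 0.7], 2: [0.2, 0.9]}               # bath couplings g_k^(l)
bath = {1: [2, 3], 2: [4, 5]}                    # register positions of bath spins
G = {l: np.sqrt(sum(gk**2 for gk in g[l])) for l in (1, 2)}

H_S = -0.5 * J * (op(0, X) @ op(1, X) + op(0, Y) @ op(1, Y))
H_I = sum(-0.5 * gk * (op(l - 1, X) @ op(b, X) + op(l - 1, Y) @ op(b, Y))
          for l in (1, 2) for gk, b in zip(g[l], bath[l]))

ground = np.zeros(2**6, dtype=complex)
ground[0] = 1.0                                  # fully polarized state |0,0>
ket_l0 = {l: op(l - 1, sp) @ ground for l in (1, 2)}
ket_0l = {l: sum(gk * (op(b, sp) @ ground) for gk, b in zip(g[l], bath[l])) / G[l]
          for l in (1, 2)}

for l in (1, 2):
    assert np.allclose(H_I @ ket_l0[l], -G[l] * ket_0l[l])
    assert np.allclose(H_I @ ket_0l[l], -G[l] * ket_l0[l])
    assert np.allclose(H_S @ ket_0l[l], 0.0)
assert np.allclose(H_S @ ket_l0[1], -J * ket_l0[2])   # chain hopping for N = 2
```

The last assertion is the $N=2$ instance of Eq.~(\ref{eq:hc_action}), where only the single coupling $J_{1}=J$ appears.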
Firstly, let us denote the orthonormal eigenstates of $H_{S}$ alone by \begin{equation} H_{S}|\psi_{k}\rangle=\epsilon_{k}|\psi_{k}\rangle\quad(k=1,2,\ldots,N)\end{equation} with\begin{equation} |\psi_{k}\rangle=\sum_{\ell=1}^{N}a_{k\ell}|\ell,0\rangle.\label{eq:eigen_h_c}\end{equation} For what follows, it is not important whether analytic expressions for the eigensystem of $H_{S}$ can be found. Our result holds even for models that are not analytically solvable, such as the randomly coupled chains considered in \cite{NJP}. We now make an ansatz for the eigenstates of the full Hamiltonian, motivated by the fact that the states \begin{equation} |\phi_{\ell}^{n}\rangle\equiv\frac{1}{\sqrt{2}}\left(|\ell,0\rangle+\left(-1\right)^{n}|0,\ell\rangle\right)\quad(n=0,1)\end{equation} are eigenstates of $H_{I}^{(\ell)}$ with the corresponding eigenvalues $-\left(-1\right)^{n}G$ {[}this follows directly from Eqs.~(\ref{eq:hb_action}) and (\ref{eq:hb_action2}){]}. Define the vectors\begin{eqnarray} |\Psi_{k}^{n}\rangle & \equiv & \sum_{\ell=1}^{N}a_{k\ell}|\phi_{\ell}^{n}\rangle\label{eq:ansatz} \end{eqnarray} with $k=1,2,\ldots,N$ and $n=0,1.$ The $|\Psi_{k}^{n}\rangle$ form an orthonormal basis in which we express the matrix elements of the Hamiltonian. We can easily see that\begin{equation} H_{I}|\Psi_{k}^{n}\rangle=-\left(-1\right)^{n}G|\Psi_{k}^{n}\rangle\end{equation} and \begin{equation} H_{S}|\Psi_{k}^{n}\rangle=\frac{\epsilon_{k}}{\sqrt{2}}\sum_{x=1}^{N}a_{kx}|x,0\rangle=\frac{\epsilon_{k}}{2}\left(|\Psi_{k}^{0}\rangle+|\Psi_{k}^{1}\rangle\right).\end{equation} Therefore the matrix elements of the full Hamiltonian $H=H_{S}+H_{I}$ are given by \begin{equation} \langle\Psi_{k'}^{n'}|H|\Psi_{k}^{n}\rangle=\delta_{kk'}\left(-\left(-1\right)^{n}G\delta_{nn'}+\frac{\epsilon_{k}}{2}\right).\end{equation} The Hamiltonian is not diagonal in the states of Eq.~(\ref{eq:ansatz}), but $H$ is now block diagonal, consisting of $N$ blocks of size $2$, which can easily be diagonalized analytically.
The orthonormal eigenstates of the Hamiltonian are given by\begin{equation} |E_{k}^{n}\rangle=c_{kn}^{-1}\left\{ \left(\left(-1\right)^{n}\Delta_{k}-2G\right)|\Psi_{k}^{0}\rangle+\epsilon_{k}|\Psi_{k}^{1}\rangle\right\} \label{eq:eigenstates}\end{equation} with the eigenvalues\begin{equation} E_{k}^{n}=\frac{1}{2}\left(\epsilon_{k}+\left(-1\right)^{n}\Delta_{k}\right)\end{equation} and the normalization\begin{equation} c_{kn}\equiv\sqrt{\left(\left(-1\right)^{n}\Delta_{k}-2G\right)^{2}+\epsilon_{k}^{2}},\end{equation} where\begin{equation} \Delta_{k}=\sqrt{4G^{2}+\epsilon_{k}^{2}}.\label{eq:delta}\end{equation} Note that the ansatz of Eq.~(\ref{eq:ansatz}) that put $H$ in block diagonal form did not depend on the details of $H_{S}$ and $H_{I}^{(\ell)}.$ The methods presented here can be applied to a much larger class of systems, including the generalized spin star systems (which include an interaction within the bath) discussed in \cite{Alexandra}. After solving the Schr\"{o}dinger equation, let us now turn to quantum state transfer. The relevant quantity \cite{Sougato,C1,NJP} is given by the transfer function\begin{eqnarray*} f_{N,1}(t) & \equiv & \langle N,0|\exp\left\{ -iHt\right\} |1,0\rangle\\ & = & \sum_{k,n}\exp\left\{ -iE_{k}^{n}t\right\} \langle E_{k}^{n}|1,0\rangle\langle N,0|E_{k}^{n}\rangle.\end{eqnarray*} The modulus of $f_{N,1}(t)$ is between $0$ (no transfer) and $1$ (perfect transfer) and fully determines the fidelity of state transfer. 
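As a quick numerical sanity check (ours, with arbitrary test values): in the basis $(|\Psi_{k}^{0}\rangle,|\Psi_{k}^{1}\rangle)$ the $k$th block of $H$ has diagonal entries $\epsilon_{k}/2\mp G$ and off-diagonal entries $\epsilon_{k}/2$, and its eigenvalues should reproduce $E_{k}^{n}$:

```python
import numpy as np

# Each 2x2 block of H in the basis (Psi_k^0, Psi_k^1) should have the
# eigenvalues E_k^n = (eps_k + (-1)^n Delta_k)/2, Delta_k = sqrt(4G^2 + eps_k^2).
# The (eps, G) pairs below are arbitrary test values.

def block_eigenvalues(eps, G):
    block = np.array([[eps / 2 - G, eps / 2],
                      [eps / 2, eps / 2 + G]])
    return np.sort(np.linalg.eigvalsh(block))

def predicted(eps, G):
    delta = np.sqrt(4 * G**2 + eps**2)
    return np.sort([(eps - delta) / 2, (eps + delta) / 2])

for eps, G in [(0.3, 1.0), (-1.7, 0.5), (2.0, 3.0)]:
    assert np.allclose(block_eigenvalues(eps, G), predicted(eps, G))
```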
Since\begin{eqnarray*} \langle\ell,0|E_{k}^{n}\rangle & = & c_{kn}^{-1}\left\{ \left(\left(-1\right)^{n}\Delta_{k}-2G\right)\langle\ell,0|\Psi_{k}^{0}\rangle+\epsilon_{k}\langle\ell,0|\Psi_{k}^{1}\rangle\right\} \\ & = & \frac{c_{kn}^{-1}}{\sqrt{2}}\left(\left(-1\right)^{n}\Delta_{k}-2G+\epsilon_{k}\right)a_{k\ell}\end{eqnarray*} we get\begin{eqnarray} \lefteqn{f_{N,1}(t)=}\label{eq:transfer}\\ & & \frac{1}{2}\sum_{k,n}e^{\frac{-it}{2}\left(\epsilon_{k}+\left(-1\right)^{n}\Delta_{k}\right)}\frac{\left(\left(-1\right)^{n}\Delta_{k}-2G+\epsilon_{k}\right)^{2}}{\left(\left(-1\right)^{n}\Delta_{k}-2G\right)^{2}+\epsilon_{k}^{2}}a_{k1}a_{kN}^{*}.\nonumber \end{eqnarray} Eq.~(\ref{eq:transfer}) is the main result of this article, fully determining the transfer of quantum information and entanglement in the presence of the environments. In the limit $G\rightarrow0,$ we have $\Delta_{k}\approx|\epsilon_{k}|$ and $f_{N,1}(t)$ approaches the usual result \cite{Sougato,C1,NJP} without an environment,\begin{equation} f_{N,1}^{0}(t)\equiv\sum_{k}\exp\left\{ -it\epsilon_{k}\right\} a_{k1}a_{kN}^{*}.\end{equation} In fact, a series expansion of Eq.~(\ref{eq:transfer}) yields that the first modification of the transfer function is of the order of $G^{2},$\begin{equation} G^{2}\sum_{k}a_{k1}a_{kN}^{*}\left[\exp\left\{ -it\epsilon_{k}\right\} \left(-\frac{1}{\epsilon_{k}^{2}}-\frac{it}{\epsilon_{k}}\right)+\frac{1}{\epsilon_{k}^{2}}\right].\end{equation} Hence the effect is small for very weakly coupled baths. However, as the chains get longer, the lowest lying energy $\epsilon_{1}$ usually approaches zero, so the changes become more significant (scaling as $1/\epsilon_{k}$). For intermediate $G,$ we evaluated Eq.~(\ref{eq:transfer}) numerically and found that the first peak of the transfer function generally becomes slightly lower, and gets shifted to later times (Figures \ref{cap:Example} and \ref{cap:Example2}).
A numerical search in the coupling space $\left\{ J_{\ell},\ell=1,\ldots,N-1\right\} $ however also revealed some rare examples where an environment can also slightly improve the peak of the transfer function (Fig.~\ref{cap:Example3}). \begin{figure} \caption{\label{cap:Example}} \end{figure} \begin{figure} \caption{\label{cap:Example2}} \end{figure} In the strong coupling regime $G\gg\epsilon_{k}/2,$ we can approximate Eq.~(\ref{eq:delta}) by $\Delta_{k}\approx2G.$ Inserting this into Eq.~(\ref{eq:transfer}) gives\begin{eqnarray} f_{N,1}(t) & \approx & \frac{1}{2}e^{-iGt}\sum_{k}\exp\left\{ -\frac{it\epsilon_{k}}{2}\right\} a_{k1}a_{kN}^{*}+\nonumber \\ & & +\frac{1}{2}e^{iGt}\sum_{k}\exp\left\{ -\frac{it\epsilon_{k}}{2}\right\} a_{k1}a_{kN}^{*}\nonumber \\ & = & \cos(Gt)\, f_{N,1}^{0}(\tfrac{t}{2}).\end{eqnarray} This surprisingly simple result consists of the normal transfer function, slowed down by a factor of $1/2,$ and modulated by a quickly oscillating term (Figures \ref{cap:Example} and \ref{cap:Example2}). Our derivation actually did not depend on the indices of $f(t)$, and we get for the transfer from the $n$th to the $m$th spin of the chain that \begin{equation} f_{n,m}(t)\approx\cos(Gt)\, f_{n,m}^{0}(\tfrac{t}{2}).\label{eq:scalingformula}\end{equation} It may look surprising that the matrix $f_{n,m}$ is no longer unitary. This is because we are considering the dynamics of the chain only, which is an open quantum system \cite{key-36}. A heuristic interpretation of Eq.~(\ref{eq:scalingformula}) is that the excitation oscillates back and forth between the chain and the bath (hence the modulation), and spends half of the time trapped in the bath (hence the slowing). If the time of the maximum of the transfer function $|f_{n,m}^0(t)|$ for $G=0$ is a multiple of $\pi/(2G)$ then this maximum is also reached in the presence of the bath.
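The scaling law can also be probed numerically. The sketch below is our own illustration: it takes the uniform chain $J_{\ell}=1$, for which $\epsilon_{k}=-2\cos\left(k\pi/(N+1)\right)$ and $a_{k\ell}=\sqrt{2/(N+1)}\sin\left(k\pi\ell/(N+1)\right)$ are known in closed form, and compares the exact Eq.~(\ref{eq:transfer}) with $\cos(Gt)\,f_{N,1}^{0}(t/2)$ deep in the strong-coupling regime; $N$, $G$ and the tolerance are arbitrary test choices.

```python
import numpy as np

# Compare the exact transfer function (eq:transfer) with the strong-coupling
# approximation cos(G t) f^0(t/2) for a uniform chain J_l = 1.
N, G = 5, 200.0
k = np.arange(1, N + 1)
eps = -2.0 * np.cos(k * np.pi / (N + 1))
a = np.sqrt(2.0 / (N + 1)) * np.sin(np.outer(k, np.arange(1, N + 1)) * np.pi / (N + 1))

def f_exact(t):
    """Exact f_{N,1}(t) from eq:transfer."""
    total = 0.0 + 0.0j
    for ek, a1, aN in zip(eps, a[:, 0], a[:, -1]):
        delta = np.sqrt(4 * G**2 + ek**2)
        for n in (0, 1):
            sgn = (-1) ** n
            num = (sgn * delta - 2 * G + ek) ** 2
            den = (sgn * delta - 2 * G) ** 2 + ek**2
            total += 0.5 * np.exp(-0.5j * t * (ek + sgn * delta)) * num / den * a1 * aN
    return total

def f_free(t):
    """Bath-free transfer function f^0_{N,1}(t)."""
    return np.sum(np.exp(-1j * eps * t) * a[:, 0] * a[:, -1])

for t in np.linspace(0.0, 10.0, 7):
    assert abs(f_exact(t) - np.cos(G * t) * f_free(t / 2)) < 0.1
```

The residual error shrinks as $G$ grows, consistent with the corrections of order $\epsilon_{k}/G$ neglected in $\Delta_{k}\approx2G$.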
Finally, we want to stress that Eq.~(\ref{eq:scalingformula}) is \emph{universal} for any spin Hamiltonian that conserves the number of excitations, i.e. with $\left[H_{S},\sum_{\ell}Z_{\ell}\right]=0$. Thus our restriction to a chain-like topology and exchange couplings for $H_{S}$ is not necessary. In fact the only difference in the whole derivation of Eq.~(\ref{eq:scalingformula}) for a more general Hamiltonian is that Eq.~(\ref{eq:hc_action}) is replaced by\begin{eqnarray} H_{S}|\ell,0\rangle & = & \sum_{\ell'}h_{\ell\ell'}|\ell',0\rangle. \label{eq:hc_action2}\end{eqnarray} The Hamiltonian can still be formally diagonalized in the first excitation sector as in Eq.~(\ref{eq:eigen_h_c}), and the states of Eq.~(\ref{eq:eigenstates}) will still diagonalize the total Hamiltonian $H_{S}+H_{I}.$ Also, rather than considering an exchange Hamiltonian for the interaction with the bath, we could have considered a Heisenberg interaction, but only for the special case where the bath couplings $g_{k}^{(\ell)}$ are all the same \cite{SUBRA}. Up to some irrelevant phases, this leads to the same results as for the exchange interaction. \begin{figure} \caption{\label{cap:Example3}} \end{figure} \paragraph*{Conclusion:---} We found a surprisingly simple and universal scaling law for the spin transfer functions in the presence of spin environments. In the context of quantum state transfer \cite{Sougato,C1,NJP} this result is double-edged: on one hand, it shows that even for very strongly coupled baths quantum state transfer is possible, with the same fidelity and only reasonable slowing. On the other hand, it also shows that the fidelity as a function of time becomes destabilized by a quickly oscillating modulation factor. In practice, this factor will restrict the time window within which one has to read the state out of the system.
This demonstrates that even though a bath coupling need not introduce decoherence or dissipation to the system, it may cause other dynamical processes, such as destabilization, that can be problematic for quantum information processing. \paragraph*{Acknowledgments:---} DB is funded by the UK Engineering and Physical Sciences Research Council, Grant Nr. GR/S62796/01. \end{document}
\begin{document} \title{Boundary Observer for Space and Time Dependent Reaction-Advection-Diffusion Equations} \begin{abstract} This paper presents boundary observer design for space and time dependent reaction-advection-diffusion equations using the backstepping method. The method uses only a single measurement at the boundary of the system. The existence of the observer kernel equation is proved using the method of successive approximations. \end{abstract} \section{Introduction} Many physical phenomena can be described by partial differential equations (PDEs). Some examples include a model of a flexible cable in an overhead crane \cite{Brigit}, automated managed pressure drilling \cite{Agus1,Agus2,Hasan1}, and battery management systems \cite{Gu,Shuxia1}. Stabilization of PDEs with boundary control is considered a challenging topic. Since the introduction of the backstepping method, it has become an emerging area \cite{Krstic}. The backstepping method has been successfully used for estimation and control of many PDEs, such as the Korteweg-de Vries equation \cite{Edu1,A3}, the Benjamin-Bona-Mahony equation \cite{A4}, the Schr\"{o}dinger equation \cite{Kr1}, the Ginzburg-Landau equation \cite{Aam}, and 2$\times$2 linear hyperbolic PDEs \cite{fin,deu}. In engineering, the backstepping method can be found in several applications, such as gas coning control \cite{c8}, flow control in porous media \cite{c9}, slugging control in drilling \cite{c10}, and lost circulation control \cite{c11,c00}. Although infinite-dimensional backstepping is limited to Volterra nonlinearities \cite{Rafa1,Rafa2}, local stabilization of nonlinear PDE systems has shown promising results in recent years. As an example, feedback control design for 2$\times$2 quasilinear hyperbolic PDEs is presented in \cite{Coron}. Early research on control of Burgers' equation may be found in \cite{Krstic2}.
In that paper, the authors derived nonlinear boundary control laws that achieve global asymptotic stability using the Lyapunov method. In \cite{Krstic3}, the unstable shock-like equilibrium profiles of the viscous Burgers equation were stabilized using control at the boundaries. Recent results on control of Burgers' equation can be found in \cite{Balogh,Liu,Smy}. In \cite{Aamo}, a linear coupled hyperbolic PDE-ODE system is studied. Other works on control of coupled PDE-ODE systems include control with Neumann interconnections \cite{Susto}, a coupled ODE-Schrodinger equation \cite{beibei}, and adaptive control of PDE-ODE cascade systems with uncertain harmonic disturbances \cite{zaihua}. Other researchers have studied coupled systems of linear PDE-ODE type \cite{Krstic1,Daa,Hasan} and nonlinear ODE and linear PDE cascade systems \cite{ahmed}. In this paper, we consider boundary observer design for a reaction-advection-diffusion (RAD) equation with coefficients that vary in space and time. The RAD equation can be used to model chemical and biological processes in a flowing medium, e.g., fluid flow through a porous medium. The evolution equation combines three processes and can be derived from mass balance equations. \section{Problem Formulation} We consider the following space and time dependent reaction-advection-diffusion equation with mixed boundary conditions \begin{eqnarray} \frac{\partial u}{\partial t}(r,t) &=& D(r,t) \frac{\partial^2u}{\partial r^2}(r,t)+b(r,t)\frac{\partial u}{\partial r}(r,t)+\phi(r,t)u(r,t)\label{mol1}\\ u(0,t) &=& 0\label{mol2}\\ \frac{\partial u}{\partial r}(1,t) &=& u(1,t)+U(t)\label{mol3} \end{eqnarray} where $u(r,t)$ is the system state, $r\in[0,1]$ is the spatial variable, and $t\in[0,\infty)$ is the time variable.
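The plant \eqref{mol1}-\eqref{mol3} can be simulated directly, which is useful for checking an observer implementation later. The following sketch uses an explicit finite-difference scheme; the coefficient choices for $D$, $b$, $\phi$, the input $U$, and the initial condition are purely illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Explicit finite-difference sketch of the RAD plant (mol1)-(mol3).
# All coefficient/input choices below are hypothetical illustrations.
N = 50                        # number of grid intervals on [0, 1]
dr = 1.0 / N
dt = 0.4 * dr**2              # respects the explicit stability bound dt <= dr^2 / (2 max D)
D   = lambda r, t: 1.0 + 0.1 * r * np.exp(-t)   # diffusion, D > 0
b   = lambda r, t: 0.5 * r                      # advection
phi = lambda r, t: 0.2 * np.cos(t)              # reaction
U   = lambda t: 0.0                             # boundary input

r = np.linspace(0.0, 1.0, N + 1)
u = np.sin(np.pi * r / 2)     # illustrative initial state with u(0, 0) = 0
t = 0.0
for _ in range(200):
    u_rr = (u[2:] - 2 * u[1:-1] + u[:-2]) / dr**2          # second-order centred u_rr
    u_r  = (u[2:] - u[:-2]) / (2 * dr)                     # centred u_r
    u[1:-1] += dt * (D(r[1:-1], t) * u_rr + b(r[1:-1], t) * u_r
                     + phi(r[1:-1], t) * u[1:-1])
    t += dt
    u[0] = 0.0                                             # Dirichlet condition (mol2)
    # Robin condition (mol3) discretized one-sidedly: (u_N - u_{N-1})/dr = u_N + U(t)
    u[-1] = (u[-2] + dr * U(t)) / (1.0 - dr)
```

With these mild coefficients the scheme stays stable over the simulated horizon; for stiffer data an implicit scheme would be the safer choice.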
The coefficients for diffusion, advection, and reaction are denoted by $D(r,t)\in\mathbb{C}^{2,1}\left((0,1)\times(0,\infty)\right)$, $b(r,t)\in\mathbb{C}^{1,1}\left((0,1)\times(0,\infty)\right)$, and $\phi(r,t)\in\mathbb{C}\left((0,1)\times(0,\infty)\right)$, respectively. We assume these coefficients and the input function $U(t)$ are known, and in particular $D(r,t)>0$. To simplify the equation, the advection term $\frac{\partial u}{\partial r}$ can be eliminated using a simple transformation. Let us define a new state variable \begin{eqnarray} c(r,t) = u(r,t)e^{\int_0^{r}\!\frac{b(\tau,t)}{2D(\tau,t)}\,\mathrm{d}\tau}\label{der} \end{eqnarray} Inverting \eqref{der} and differentiating $u$ with respect to $t$ and $r$ gives \begin{eqnarray} \frac{\partial u}{\partial t}(r,t) &=& \frac{\partial c}{\partial t}(r,t)e^{-\int_0^{r}\!\frac{b(\tau,t)}{2D(\tau,t)}\,\mathrm{d}\tau}\nonumber\\ &&-\left(\frac{1}{2}\int_0^{r}\!\frac{\partial}{\partial t}\left(\frac{b(\tau,t)}{D(\tau,t)}\right)\,\mathrm{d}\tau\right)c(r,t)e^{-\int_0^{r}\!\frac{b(\tau,t)}{2D(\tau,t)}\,\mathrm{d}\tau}\\ \frac{\partial u}{\partial r}(r,t) &=& \frac{\partial c}{\partial r}(r,t)e^{-\int_0^{r}\!\frac{b(\tau,t)}{2D(\tau,t)}\,\mathrm{d}\tau}-\frac{b(r,t)}{2D(r,t)}c(r,t)e^{-\int_0^{r}\!\frac{b(\tau,t)}{2D(\tau,t)}\,\mathrm{d}\tau}\\ \frac{\partial^2u}{\partial r^2}(r,t) &=& \frac{\partial^2c}{\partial r^2}(r,t)e^{-\int_0^{r}\!\frac{b(\tau,t)}{2D(\tau,t)}\,\mathrm{d}\tau}-\frac{b(r,t)}{D(r,t)}\frac{\partial c}{\partial r}(r,t)e^{-\int_0^{r}\!\frac{b(\tau,t)}{2D(\tau,t)}\,\mathrm{d}\tau}\nonumber\\ &&+\frac{b(r,t)^2}{4D(r,t)^2}c(r,t)e^{-\int_0^{r}\!\frac{b(\tau,t)}{2D(\tau,t)}\,\mathrm{d}\tau}\nonumber\\ &&-\frac{1}{2}\left(\frac{b_r(r,t)}{D(r,t)}-\frac{b(r,t)D_r(r,t)}{D(r,t)^2}\right)c(r,t)e^{-\int_0^{r}\!\frac{b(\tau,t)}{2D(\tau,t)}\,\mathrm{d}\tau} \end{eqnarray} Substituting these equations into \eqref{mol1}-\eqref{mol3} yields \begin{eqnarray} \frac{\partial c}{\partial t}(r,t) &=&
D(r,t)\frac{\partial^2c}{\partial r^2}(r,t)+\lambda(r,t)c(r,t)\label{sysmain}\\ c(0,t) &=& 0\label{sysmainbc1}\\ \frac{\partial c}{\partial r}(1,t) &=& H(t)c(1,t)+M(t)\label{sysmainbc2} \end{eqnarray} where \begin{eqnarray} \lambda(r,t) &=& \phi(r,t)-\frac{b(r,t)^2}{4D(r,t)}-\frac{b_r(r,t)}{2}+\frac{b(r,t)D_r(r,t)}{2D(r,t)}\nonumber\\ &&+\frac{1}{2}\int_0^{r}\!\frac{\partial}{\partial t}\left(\frac{b(\tau,t)}{D(\tau,t)}\right)\,\mathrm{d}\tau\\ M(t) &=& U(t)e^{\int_0^{1}\!\frac{b(\tau,t)}{2D(\tau,t)}\,\mathrm{d}\tau}\\ H(t) &=& 1+\frac{b(1,t)}{2D(1,t)} \end{eqnarray} The objective is to estimate the state $c(r,t)$ using only one boundary measurement $c(1,t)$. \section{Boundary Observer Design} We design the state observer for \eqref{sysmain}-\eqref{sysmainbc2} as a copy of the plant plus an output injection term as follows \begin{eqnarray} \frac{\partial \hat{c}}{\partial t}(r,t) &=& D(r,t) \frac{\partial^2\hat{c}}{\partial r^2}(r,t)+\lambda(r,t)\hat{c}(r,t)+p_1(r,t)\left(c(1,t)-\hat{c}(1,t)\right)\\ \hat{c}(0,t) &=& 0\\ \frac{\partial \hat{c}}{\partial r}(1,t) &=& H(t)\hat{c}(1,t)+M(t)+p_{10}(t)\left(c(1,t)-\hat{c}(1,t)\right) \end{eqnarray} where $p_1(r,t)$ and $p_{10}(t)$ are observer gains to be determined later. If we define $\tilde{c}(r,t)=c(r,t)-\hat{c}(r,t)$, then the error system is given by \begin{eqnarray} \frac{\partial \tilde{c}}{\partial t}(r,t) &=& D(r,t) \frac{\partial^2\tilde{c}}{\partial r^2}(r,t)+\lambda(r,t)\tilde{c}(r,t)-p_1(r,t)\tilde{c}(1,t)\label{err1}\\ \tilde{c}(0,t) &=& 0\\ \frac{\partial \tilde{c}}{\partial r}(1,t) &=& H(t)\tilde{c}(1,t)-p_{10}(t)\tilde{c}(1,t)\label{err3} \end{eqnarray} We employ a Volterra integral transformation \begin{eqnarray} \tilde{c}(r,t) = \tilde{w}(r,t) - \int_r^1\!
p(r,s,t)\tilde{w}(s,t) \,\mathrm{d}s\label{trans} \end{eqnarray} to transform the error system \eqref{err1}-\eqref{err3} into the following target system \begin{eqnarray} \frac{\partial \tilde{w}}{\partial t}(r,t) &=& D(r,t)\frac{\partial^2\tilde{w}}{\partial r^2}(r,t)+\mu\tilde{w}(r,t)\label{e1}\\ \tilde{w}(0,t) &=& 0\\ \frac{\partial \tilde{w}}{\partial r}(1,t) &=& -\frac{1}{2}\tilde{w}(1,t)\label{e3} \end{eqnarray} where the free parameter $\mu$ can be used to set the desired decay rate. The transformation \eqref{trans} is invertible, and the transformation kernel $p(r,s,t)$ is used to find the observer gains $p_1(r,t)$ and $p_{10}(t)$. \begin{lemma} The target system \eqref{e1}-\eqref{e3} is exponentially stable in the $\mathbb{L}^2(0,1)$-norm under the condition \begin{eqnarray} \mu<-\left\{\frac{\max|D_{rr}(r,t)|}{2}+\frac{D_m^2}{\min|D(r,t)|}\right\}\label{mu} \end{eqnarray} where \begin{eqnarray} D_m=\max\left\{0,-\left(\frac{D(1,t)+D_r(1,t)}{2}\right)\right\}\label{dm} \end{eqnarray} \end{lemma} \begin{proof} Consider the following Lyapunov function \begin{eqnarray} W(t) &=& \frac{1}{2}\int_0^1\! \tilde{w}(r,t)^2\,\mathrm{d}r\label{lya} \end{eqnarray} The derivative of \eqref{lya} with respect to $t$ along \eqref{e1}-\eqref{e3} is \begin{eqnarray} \dot{W}(t) &=& -\left(\frac{D(1,t)+D_r(1,t)}{2}\right)\tilde{w}(1,t)^2-\int_0^1\! D(r,t)\left(\frac{\partial\tilde{w}}{\partial r}(r,t)\right)^2\,\mathrm{d}r\nonumber\\ &&+\int_0^1\! \left(\frac{D_{rr}(r,t)}{2}+\mu\right)\tilde{w}(r,t)^2\,\mathrm{d}r \end{eqnarray} Using Young's and Poincar\'{e}'s inequalities, we have \begin{eqnarray} \dot{W}(t) &\leq& D_m\tilde{w}(1,t)^2-\int_0^1\! D(r,t)\left(\frac{\partial\tilde{w}}{\partial r}(r,t)\right)^2\,\mathrm{d}r\nonumber\\ &&+\int_0^1\! \left(\frac{\max|D_{rr}(r,t)|}{2}+\mu\right)\tilde{w}(r,t)^2\,\mathrm{d}r\\ &\leq& \int_0^1\!
\left(\frac{\max|D_{rr}(r,t)|}{2}+\frac{D_m^2}{\min|D(r,t)|}+\mu\right)\tilde{w}(r,t)^2\,\mathrm{d}r \end{eqnarray} where $D_m$ is given by \eqref{dm}. Choosing $\mu$ as in \eqref{mu} completes the proof. \end{proof} Let us calculate the derivative of \eqref{trans} with respect to $r$ \begin{eqnarray} \frac{\partial \tilde{c}}{\partial r}(r,t) &=& \frac{\partial \tilde{w}}{\partial r}(r,t) + p(r,r,t)\tilde{w}(r,t) - \int_r^1\! p_r(r,s,t)\tilde{w}(s,t) \,\mathrm{d}s\label{x}\\ \frac{\partial^2\tilde{c}}{\partial r^2}(r,t) &=& \frac{\partial^2\tilde{w}}{\partial r^2}(r,t) + \frac{\mathrm{d}}{\mathrm{d}r}p(r,r,t)\tilde{w}(r,t) + p(r,r,t)\frac{\partial \tilde{w}}{\partial r}(r,t)\nonumber\\ && +p_r(r,r,t)\tilde{w}(r,t)- \int_r^1\! p_{rr}(r,s,t)\tilde{w}(s,t) \,\mathrm{d}s\label{xx} \end{eqnarray} Furthermore, we calculate the derivative of \eqref{trans} with respect to $t$ \begin{eqnarray} \frac{\partial \tilde{c}}{\partial t}(r,t) &=& D(r,t)\frac{\partial^2\tilde{w}}{\partial r^2}(r,t)+\mu\tilde{w}(r,t)-\int_r^1\! \mu p(r,s,t)\tilde{w}(s,t) \,\mathrm{d}s\nonumber\\ &&-p(r,1,t)D(1,t)\frac{\partial\tilde{w}}{\partial r}(1,t)+p(r,r,t)D(r,t)\frac{\partial\tilde{w}}{\partial r}(r,t)\nonumber\\ &&+\left(p(r,1,t)D(1,t)\right)_s\tilde{w}(1,t)-\left(p(r,r,t)D(r,t)\right)_s\tilde{w}(r,t)\nonumber\\ &&-\int_r^1\! \left(p(r,s,t)D(s,t)\right)_{ss}\tilde{w}(s,t) \,\mathrm{d}s\nonumber\\ &&-\int_r^1\! 
p_t(r,s,t)\tilde{w}(s,t) \,\mathrm{d}s\label{t} \end{eqnarray} Substituting \eqref{xx} and \eqref{t} into \eqref{err1}-\eqref{err3}, the following system needs to be satisfied \begin{eqnarray} p_t(r,s,t) &=& D(r,t)p_{rr}(r,s,t)-\left(D(s,t)p(r,s,t)\right)_{ss}\nonumber\\ &&-\left(\mu-\lambda(r,t)\right) p(r,s,t)\label{ker1}\\ \frac{\mathrm{d}}{\mathrm{d}r}p(r,r,t)&=&-\frac{D_r(r,t)}{2D(r,t)}p(r,r,t)+\frac{\left(\mu-\lambda(r,t)\right)}{2D(r,t)}\label{ker2}\\ p(0,s,t) &=& 0\label{ker3} \end{eqnarray} Furthermore, the state observer gains are obtained as \begin{eqnarray} p_1(r,t)&=&-\frac{1}{2}p(r,1,t)D(1,t)-\left(p(r,1,t)D(1,t)\right)_s\label{gain1}\\ p_{10}(t) &=& \frac{1}{2}+H(t)-p(1,1,t)\label{gain2} \end{eqnarray} The boundary condition \eqref{ker2} along the diagonal can be simplified further. First, multiplying it by $\sqrt{2D(r,t)}$, we have \begin{eqnarray} \sqrt{2D(r,t)}\frac{\mathrm{d}}{\mathrm{d}r}p(r,r,t)+\frac{D_r(r,t)}{\sqrt{2D(r,t)}}p(r,r,t)&=&\frac{\mu-\lambda(r,t)}{\sqrt{2D(r,t)}} \end{eqnarray} Alternatively, this equation can be written as \begin{eqnarray} \frac{\mathrm{d}}{\mathrm{d}r}\left(\sqrt{2D(r,t)}p(r,r,t)\right)&=&\frac{\mu-\lambda(r,t)}{\sqrt{2D(r,t)}} \end{eqnarray} Integrating the equation above and using the fact that $p(0,0,t)=0$, the kernel equations become \begin{eqnarray} p_t(r,s,t) &=& D(r,t)p_{rr}(r,s,t)-\left(D(s,t)p(r,s,t)\right)_{ss}\nonumber\\ &&-\left(\mu-\lambda(r,t)\right) p(r,s,t)\label{kg1}\\ p(r,r,t) &=& \frac{1}{2\sqrt{D(r,t)}}\int_0^r\! \frac{\mu-\lambda(\tau,t)}{\sqrt{D(\tau,t)}} \,\mathrm{d}\tau\label{kg2}\\ p(0,s,t) &=& 0\label{kg3} \end{eqnarray} The result of this section can be stated as follows. \begin{theorem} Let $p(r,s,t)$ be the solution of system \eqref{kg1}-\eqref{kg3}.
Then, for any initial condition $\tilde{c}_0(r)\in\mathbb{L}^2(0,1)$, system \eqref{err1}-\eqref{err3}, with $p_1(r,t)$ and $p_{10}(t)$ given by \eqref{gain1} and \eqref{gain2}, respectively, has a unique classical solution $\tilde{c}(r,t)\in\mathbb{C}^{2,1}\left((0,1)\times(0,\infty)\right)$. Additionally, the origin $\tilde{c}(r,t)\equiv0$ is exponentially stable in the $\mathbb{L}^2(0,1)$-norm. \end{theorem} \section{Proof of Theorem 1} The existence of a kernel function satisfying \eqref{kg1}-\eqref{kg3} is hard to prove due to the space dependency of the diffusion coefficient, as can be seen from the second term on the right hand side of \eqref{kg1}. In order to handle this problem, we change equation \eqref{kg1} into a standard form. Let us define \begin{eqnarray} \breve{p}(\bar{r},\bar{s},t) &=& D(s,t)p(r,s,t)\\ \bar{r} &=& \phi(r) = \sqrt{D(0,t)}\int_0^r\! \frac{1}{\sqrt{D(\tau,t)}} \,\mathrm{d}\tau\\ \bar{s} &=& \phi(s) = \sqrt{D(0,t)}\int_0^s\! \frac{1}{\sqrt{D(\tau,t)}} \,\mathrm{d}\tau \end{eqnarray} Using these definitions, we compute\footnote{for simplicity $\breve{p}=\breve{p}(\bar{r},\bar{s},t)$ and $p=p(r,s,t)$.} \begin{eqnarray} \breve{p}_t &=& D(s,t)p_t+\frac{D_t(s,t)}{D(s,t)}\breve{p}\label{p1}\\ \breve{p}_{\bar{r}} &=& D(s,t)p_r\frac{\sqrt{D(r,t)}}{\sqrt{D(0,t)}}\label{p2}\\ \breve{p}_{\bar{r}\bar{r}} &=& D(s,t)p_{rr}\frac{D(r,t)}{D(0,t)}+\frac{D_r(r,t)}{2\sqrt{D(r,t)D(0,t)}}\breve{p}_{\bar{r}}\label{p3}\\ \breve{p}_{\bar{s}} &=& D_s(s,t)p\frac{\sqrt{D(s,t)}}{\sqrt{D(0,t)}}+D(s,t)p_s\frac{\sqrt{D(s,t)}}{\sqrt{D(0,t)}}\label{p4}\\ \breve{p}_{\bar{s}\bar{s}} &=& D_{ss}(s,t)p\frac{D(s,t)}{D(0,t)}+2D_s(s,t)p_s\frac{D(s,t)}{D(0,t)}\nonumber\\ &&+D(s,t)p_{ss}\frac{D(s,t)}{D(0,t)}+\frac{D_s(s,t)}{2\sqrt{D(s,t)D(0,t)}}\breve{p}_{\bar{s}}\label{p5} \end{eqnarray} Rearranging \eqref{p1}, \eqref{p3}, and \eqref{p5}, we have \begin{eqnarray} \left(\breve{p}_t-\frac{D_t(s,t)}{D(s,t)}\breve{p}\right)\frac{1}{D(s,t)} &=& p_t\label{pio1}\\
\left(\breve{p}_{\bar{r}\bar{r}}-\frac{D_r(r,t)}{2\sqrt{D(r,t)D(0,t)}}\breve{p}_{\bar{r}}\right)\frac{D(0,t)}{D(s,t)} &=& D(r,t)p_{rr}\label{pio2}\\ \left(\breve{p}_{\bar{s}\bar{s}}-\frac{D_s(s,t)}{2\sqrt{D(s,t)D(0,t)}}\breve{p}_{\bar{s}}\right)\frac{D(0,t)}{D(s,t)} &=& \left(D(s,t)p\right)_{ss}\label{pio3} \end{eqnarray} Plugging \eqref{pio1}-\eqref{pio3} into \eqref{kg1}, we have \begin{eqnarray} \breve{p}_t &=& D(0,t)\left(\breve{p}_{\bar{r}\bar{r}}-\breve{p}_{\bar{s}\bar{s}}\right)+\left(\frac{D_t(s,t)}{D(s,t)}-(\mu-\lambda(r,t))\right)\breve{p}\nonumber\\ &&-\left(\frac{D_r(r,t)}{2}\sqrt{\frac{D(0,t)}{D(r,t)}}\breve{p}_{\bar{r}}-\frac{D_s(s,t)}{2}\sqrt{\frac{D(0,t)}{D(s,t)}}\breve{p}_{\bar{s}}\right)\label{wow} \end{eqnarray} The boundary conditions become \begin{eqnarray} \breve{p}(\bar{r},\bar{r},t) &=& \frac{1}{2}\sqrt{\frac{D(r,t)}{D(0,t)}}\int_0^{\bar{r}}\! \left(\mu-\lambda(\phi^{-1}(\tau),t)\right) \,\mathrm{d}\tau\\ \breve{p}(0,\bar{s},t) &=& 0 \end{eqnarray} The advection term in \eqref{wow} can be eliminated, as in Section 2, using the following transformation \begin{eqnarray} \bar{p}(\bar{r},\bar{s},t) &=& \left(D(r,t)D(s,t)\right)^{-\frac{1}{4}}\breve{p}(\bar{r},\bar{s},t)\label{panjang} \end{eqnarray} Computing the derivatives of \eqref{panjang} with respect to $t$, $\bar{r}$, and $\bar{s}$, we have\footnote{for simplicity $\bar{p}=\bar{p}(\bar{r},\bar{s},t)$.} \begin{eqnarray} \bar{p}_t &=& \left(D(r,t)D(s,t)\right)^{-\frac{1}{4}}\breve{p}_t-\frac{1}{4D(r,t)D(s,t)}\frac{\partial}{\partial t}\left(D(r,t)D(s,t)\right)\bar{p}\label{satu}\\ \bar{p}_{\bar{r}} &=& \left(D(r,t)D(s,t)\right)^{-\frac{1}{4}}\breve{p}_{\bar{r}}-\frac{D_r(r,t)}{4\sqrt{D(r,t)D(0,t)}}\bar{p}\\ \bar{p}_{\bar{r}\bar{r}} &=& \left(D(r,t)D(s,t)\right)^{-\frac{1}{4}}\breve{p}_{\bar{r}\bar{r}}-\frac{D_r(r,t)}{2\sqrt{D(r,t)D(0,t)}}\bar{p}_{\bar{r}}\nonumber\\ &&-\left(\frac{1}{4}\frac{\partial}{\partial r}\left(\frac{D_r(r,t)}{\sqrt{D(r,t)}}\right)\frac{1}{\sqrt{D(0,t)}}+\frac{D_r(r,t)^2}{16D(r,t)D(0,t)}\right)\bar{p}\label{dua}\\ \bar{p}_{\bar{s}} &=& \left(D(r,t)D(s,t)\right)^{-\frac{1}{4}}\breve{p}_{\bar{s}}-\frac{D_s(s,t)}{4\sqrt{D(s,t)D(0,t)}}\bar{p}\\ \bar{p}_{\bar{s}\bar{s}} &=& \left(D(r,t)D(s,t)\right)^{-\frac{1}{4}}\breve{p}_{\bar{s}\bar{s}}-\frac{D_s(s,t)}{2\sqrt{D(s,t)D(0,t)}}\bar{p}_{\bar{s}}\nonumber\\ &&-\left(\frac{1}{4}\frac{\partial}{\partial s}\left(\frac{D_s(s,t)}{\sqrt{D(s,t)}}\right)\frac{1}{\sqrt{D(0,t)}}+\frac{D_s(s,t)^2}{16D(s,t)D(0,t)}\right)\bar{p}\label{tiga} \end{eqnarray} Rearranging \eqref{satu}, \eqref{dua}, and \eqref{tiga}, we have \begin{eqnarray} \breve{p}_t &=& \left(D(r,t)D(s,t)\right)^{\frac{1}{4}}\left(\bar{p}_t+\frac{1}{4D(r,t)D(s,t)}\frac{\partial}{\partial t}\left(D(r,t)D(s,t)\right)\bar{p}\right)\label{lam1}\\ \breve{p}_{\bar{r}\bar{r}} &=& \left(D(r,t)D(s,t)\right)^{\frac{1}{4}}\left(\bar{p}_{\bar{r}\bar{r}}+\frac{D_r(r,t)}{2\sqrt{D(r,t)D(0,t)}}\bar{p}_{\bar{r}}\right)\nonumber\\ &&+L(r,t)\left(D(r,t)D(s,t)\right)^{\frac{1}{4}}\bar{p}\label{lam2}\\ \breve{p}_{\bar{s}\bar{s}} &=& \left(D(r,t)D(s,t)\right)^{\frac{1}{4}}\left(\bar{p}_{\bar{s}\bar{s}}+\frac{D_s(s,t)}{2\sqrt{D(s,t)D(0,t)}}\bar{p}_{\bar{s}}\right)\nonumber\\ &&+L(s,t)\left(D(r,t)D(s,t)\right)^{\frac{1}{4}}\bar{p}\label{lam3} \end{eqnarray} where \begin{eqnarray} L(y,t) &=& \frac{1}{4}\frac{\partial}{\partial y}\left(\frac{D_y(y,t)}{\sqrt{D(y,t)}}\right)\frac{1}{\sqrt{D(0,t)}}+\frac{D_y(y,t)^2}{16D(y,t)D(0,t)} \end{eqnarray} Substituting \eqref{lam1}-\eqref{lam3} into \eqref{wow}, we have \begin{eqnarray} \bar{p}_t(\bar{r},\bar{s},t) &=& D(0,t)\left(\bar{p}_{\bar{r}\bar{r}}(\bar{r},\bar{s},t)-\bar{p}_{\bar{s}\bar{s}}(\bar{r},\bar{s},t)\right)+\bar{\lambda}(r,s,t)\bar{p}(\bar{r},\bar{s},t)\label{psi1} \end{eqnarray} where \begin{eqnarray} \bar{\lambda}(r,s,t) &=& -\frac{D_r(r,t)^2}{8D(r,t)}+\frac{D_s(s,t)^2}{8D(s,t)}+D(0,t)\left(L(r,t)-L(s,t)\right)\nonumber\\
&&-\frac{1}{4D(r,t)D(s,t)}\frac{\partial}{\partial t}\left(D(r,t)D(s,t)\right)+\frac{D_t(s,t)}{D(s,t)}\nonumber\\ &&-(\mu-\lambda(r,t)) \end{eqnarray} The boundary conditions become \begin{eqnarray} \bar{p}(\bar{r},\bar{r},t) &=& \frac{1}{2\sqrt{D(0,t)}}\int_0^{\bar{r}}\! \left(\mu-\lambda(\phi^{-1}(\tau),t)\right) \,\mathrm{d}\tau\label{psi2}\\ \bar{p}(0,\bar{s},t) &=& 0\label{psi3} \end{eqnarray} We prove the existence of a kernel function satisfying \eqref{psi1} with boundary conditions \eqref{psi2}-\eqref{psi3} using the method of successive approximation. First, we convert the differential equation into an integral equation. We introduce the following change of variables \begin{eqnarray} \bar{p}(\bar{r},\bar{s},t) &=& \psi(\xi,\eta,t)\\ \xi &=& \bar{r}+\bar{s}\\ \eta &=& \bar{r}-\bar{s} \end{eqnarray} This transformation gives \begin{eqnarray} \bar{p}_{\bar{r}}(\bar{r},\bar{s},t) &=& \psi_{\xi}(\xi,\eta,t)+\psi_{\eta}(\xi,\eta,t)\\ \bar{p}_{\bar{r}\bar{r}}(\bar{r},\bar{s},t) &=& \psi_{\xi\xi}(\xi,\eta,t)+2\psi_{\xi\eta}(\xi,\eta,t)+\psi_{\eta\eta}(\xi,\eta,t)\\ \bar{p}_{\bar{s}}(\bar{r},\bar{s},t) &=& \psi_{\xi}(\xi,\eta,t)-\psi_{\eta}(\xi,\eta,t)\\ \bar{p}_{\bar{s}\bar{s}}(\bar{r},\bar{s},t) &=& \psi_{\xi\xi}(\xi,\eta,t)-2\psi_{\xi\eta}(\xi,\eta,t)+\psi_{\eta\eta}(\xi,\eta,t) \end{eqnarray} Thus, from \eqref{psi1} and \eqref{psi2}-\eqref{psi3} we have \begin{eqnarray} \psi_{\xi\eta}(\xi,\eta,t) &=& \frac{1}{4D(0,t)}\left(\psi_t(\xi,\eta,t)-\bar{\lambda}(\phi^{-1}(\xi,\eta),t)\psi(\xi,\eta,t)\right)\label{ade}\\ \psi(\xi,0,t) &=& \frac{1}{2\sqrt{D(0,t)}}\int_0^{\frac{\xi}{2}}\! \left(\mu-\lambda(\phi^{-1}(\tau),t)\right) \,\mathrm{d}\tau\\ \psi(\xi,-\xi,t) &=& 0 \end{eqnarray} Integrating \eqref{ade} with respect to $\eta$ from $0$ to $\eta$, we get \begin{eqnarray} \psi_{\xi}(\xi,\eta,t) &=& \frac{1}{4\sqrt{D(0,t)}}\left(\mu-\lambda\left(\phi^{-1}\left(\frac{\xi}{2}\right),t\right)\right)\nonumber\\ &&+\frac{1}{4D(0,t)}\int_0^{\eta}\!\left(\psi_t(\xi,s,t)-\bar{\lambda}(\phi^{-1}(\xi,s),t)\psi(\xi,s,t)\right)\,\mathrm{d}s \end{eqnarray} Next, integrating the above equation with respect to $\xi$ from $-\eta$ to $\xi$ yields \begin{eqnarray} \psi(\xi,\eta,t) &=& \frac{1}{4\sqrt{D(0,t)}}\left(\mu-\lambda\left(\phi^{-1}\left(\frac{\xi}{2}\right),t\right)\right)(\xi+\eta)\nonumber\\ &&+\frac{1}{4D(0,t)}\int_{-\eta}^{\xi}\int_0^{\eta}\!\bar{\psi}_t(\tau,s,t)\,\mathrm{d}s\mathrm{d}\tau\label{integral} \end{eqnarray} where \begin{eqnarray} \bar{\psi}_t(\tau,s,t)= \psi_t(\tau,s,t)-\bar{\lambda}(\phi^{-1}(\tau,s),t)\psi(\tau,s,t) \end{eqnarray} We are ready to prove the existence of a kernel function satisfying \eqref{integral} using the method of successive approximation. Let us set up the recursive formula for \eqref{integral} as follows \begin{eqnarray} \psi^0(\xi,\eta,t) &=& \frac{1}{4\sqrt{D(0,t)}}\left(\mu-\lambda\left(\phi^{-1}\left(\frac{\xi}{2}\right),t\right)\right)(\xi+\eta)\nonumber\\ &\ll& C\left(1-\frac{t}{t_f}\right)^{-1}\left(\xi+\eta\right)\\ \psi^{n+1}(\xi,\eta,t) &=& \frac{1}{4D(0,t)}\int_{-\eta}^{\xi}\int_0^{\eta}\!\bar{\psi}_t^n(\tau,s,t)\,\mathrm{d}s\mathrm{d}\tau\label{ming} \end{eqnarray} for $t<t_f$, $C>0$, and $n\geq0$. The symbol $\ll$ denotes domination.
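The mechanism behind this successive approximation can be illustrated on a toy one-dimensional analogue: for the scalar Volterra equation $\psi(x)=1+\int_0^x\psi(s)\,\mathrm{d}s$, the iterates are $\psi^n(x)=x^n/n!$, and the same factorial decay that dominates $\psi^n(\xi,\eta,t)$ makes the series converge absolutely (here to $e^x$). The equation below is an illustrative stand-in, not the kernel equation itself.

```python
import math

# Successive approximation for the toy Volterra equation
#   psi(x) = 1 + int_0^x psi(s) ds,
# whose iterates psi^n(x) = x^n / n! can be integrated exactly.
def successive_terms(x, n_terms):
    terms = [1.0]                          # psi^0(x) = 1
    for n in range(1, n_terms):
        # psi^{n}(x) = int_0^x psi^{n-1}(s) ds = x^n / n!
        terms.append(x ** n / math.factorial(n))
    return terms

x = 2.0
terms = successive_terms(x, 30)
# The factorial bound makes the series absolutely convergent, to e^x.
assert abs(sum(terms) - math.exp(x)) < 1e-12
```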
If \eqref{ming} converges, the solution can be written as follows \begin{eqnarray} \psi(\xi,\eta,t) = \sum_{n=0}^{\infty} \psi^n(\xi,\eta,t)\label{series} \end{eqnarray} For $n=1$, we have \begin{eqnarray} \psi^1(\xi,\eta,t) &\ll& CC_0\left(1-\frac{t}{t_f}\right)^{-2}\frac{\xi\eta\left(\xi+\eta\right)}{2} \end{eqnarray} where \begin{eqnarray} C_0 = \max_{t_0\leq t\leq t_f}\left|\frac{1}{4D(0,t)}\right| \end{eqnarray} Using induction, we have \begin{eqnarray} \psi^n(\xi,\eta,t) &\ll& CC_0^n\left(1-\frac{t}{t_f}\right)^{-n-1}\frac{\xi^n\eta^n\left(\xi+\eta\right)}{(n+1)!} \end{eqnarray} Therefore, the series \eqref{series} is absolutely and uniformly convergent, which completes the proof. \end{document}
\begin{document} \title{\Large\bf Simple weight modules for Yangian $\operatorname{Y}(\mathfrak{sl}_{2})$} \author{{Yikun Zhou$^{1}$, Yilan Tan$^{1,*}$, Limeng Xia$^{1,2}$}} \date{} \maketitle \begin{abstract} Let $\mathfrak{g}$ be a finite-dimensional simple Lie algebra over $\mathbb{C}$. A $\operatorname{Y}(\mathfrak{g})$-module is said to be weight if it is a weight $\mathfrak{g}$-module. We give a complete classification of simple weight modules for $\operatorname{Y}(\mathfrak{sl}_2)$ which admit a one-dimensional weight space. We prove that there are four classes of such modules: finite-dimensional, highest weight, lowest weight and dense modules. In contrast to the classical $\mathfrak{sl}_{2}$-representation theory, we show that there exists a class of irreducible $\operatorname{Y}(\mathfrak{sl}_{2})$-modules whose weight spaces are uniformly two-dimensional. \end{abstract} {\it Key words: Yangian; Weight module; Simple module; Dense module.} \noindent Author Addresses: 1. School of Mathematical Science, Jiangsu University, Zhenjiang, Jiangsu, 212013, China. 2. Institute of Applied System Analysis, Jiangsu University, Zhenjiang, Jiangsu, 212013, China. \noindent Corresponding Author(*) \noindent Email: [email protected] \noindent The third author of this paper gratefully acknowledges the support of the National Natural Science Foundation of China (Grants No. 11871249, 12171155). \section{Introduction} Let $\mathfrak{g}$ be a finite-dimensional simple Lie algebra over $\mathbb{C}$. The Yangian $\operatorname{Y}(\mathfrak{g})$ and the quantum affine algebra $\operatorname{U_{q}}({\hat{\mathfrak{g}}})$ constitute two remarkable families of the quantum groups of affine type. The Yangian $\operatorname{Y}(\mathfrak{g})$ is a unital associative algebra which is a Hopf algebra deformation of the universal enveloping algebra of the current algebra $\mathfrak{g}[t]$.
In the late 1970s and early 1980s, the Yangian was first used in the work of Ludvig Faddeev and his school concerning the quantum inverse scattering method in statistical mechanics. Later, it appeared as a symmetry in various physical models. Representations of the Yangian can be used to construct rational solutions of the quantum Yang-Baxter equation \cite{ChPr4}. As stated in \cite{NT}, ``the physical data such as mass formula, fusion angle, and the spins of integrals of motion can be extracted from the Yangian highest weight representations.'' The finite-dimensional simple modules and highest weight modules of Yangians have been studied over the last forty years; see \cite{VMFY, ChPr4, ChPr3, KhNaPa, Mo2, TaGu} and the references therein. The representation theory of $\operatorname{Y}(\mathfrak{sl}_{2})$ is of paramount importance for understanding the structure of simple $\operatorname{Y}(\mathfrak{g})$-modules for general $\mathfrak{g}$. It was shown by Chari and Pressley in \cite{ChPr3} that every finite-dimensional simple module for $\operatorname{Y}(\mathfrak{sl}_{2})$ is a tensor product of modules which are simple under $\mathfrak{sl}_{2}$; the irreducibility criterion for $\operatorname{Y}(\mathfrak{g})$ then follows. Another example is the local Weyl modules for $\operatorname{Y}(\mathfrak{g})$, see \cite{TaGu}. A local Weyl module $W(\pi)$ is a highest weight object with a nice property: every finite-dimensional highest weight module associated to $\pi$ is a quotient of $W(\pi)$. The local Weyl module for $\operatorname{Y}(\mathfrak{sl}_{2})$ is isomorphic to an ordered tensor product of fundamental representations of $\operatorname{Y}(\mathfrak{sl}_{2})$. It is crucial to obtain the structure of the local Weyl module $W(\pi)$ for general $\operatorname{Y}(\mathfrak{g})$. The purpose of this paper is to construct and classify a new class of simple modules of the Yangian $\operatorname{Y}(\mathfrak{sl}_{2})$.
Analogous to the weight modules of current algebras \cite{Lau}, a $\operatorname{Y}(\mathfrak{g})$-module is called weight if it is a weight $\mathfrak{g}$-module. Both the finite-dimensional simple modules and the highest weight modules are weight modules, and they have something in common: there exists a one-dimensional weight space. In this paper we study the simple weight modules for $\operatorname{Y}(\mathfrak{sl}_{2})$ which admit a one-dimensional weight space. Due to the existence of the evaluation homomorphism for $\operatorname{Y}(\mathfrak{sl}_{n})$, every irreducible $\mathfrak{sl}_{n}$-module is a simple $\operatorname{Y}(\mathfrak{sl}_{n})$-module. The irreducible dense modules of $\mathfrak{sl}_{2}$ come in as another class of weight modules for $\operatorname{Y}(\mathfrak{sl}_{2})$, which are still called dense in this paper. One of our contributions is to give the structure of dense modules, which are parametrized by three parameters $\mu,\tau,b_{\mu}$; see Section 3 for details. Our main result is \begin{thm}\label{thm1} Let $V$ be a simple weight module for $\operatorname{Y}(\mathfrak{sl}_{2})$ which admits a one-dimensional weight space. Then $V$ is isomorphic to one of the following modules: \begin{enumerate}[label=(\arabic*)] \item A finite-dimensional simple module. \item An infinite-dimensional simple highest weight module. \item An infinite-dimensional simple lowest weight module. \item A simple dense module $V(\mu,\tau,b_{\mu})$. \end{enumerate} \end{thm} It is well known to Lie theorists that every weight space of any simple weight $\mathfrak{sl}_{2}$-module is one-dimensional. However, the condition in Theorem \ref{thm1} that $V$ admits a one-dimensional weight space is essential. In Section 5 we construct a class of simple weight modules all of whose weight spaces are two-dimensional.
Throughout this paper, we denote the set of natural numbers, the set of positive integers and the set of complex numbers by $\mathbb{N}$, $\mathbb{Z}_{>0}$ and $\mathbb{C}$, respectively. \section{Preliminary}\label{sec:1} In this section, we recall the definition of the Yangian $\operatorname{Y}(\mathfrak{sl}_{2})$ and some results concerning its finite-dimensional representations. For the general definition of the Yangian $\operatorname{Y}(\mathfrak{g})$, we refer to the articles \cite{VMFY, ChPr4}. \begin{definition} The Yangian $\operatorname{Y}(\mathfrak{sl}_2)$ is an associative algebra with generators $X_{k}^{\pm}$, $H_{k}$, $k \in \mathbb{N},$ and the following defining relations: \begin{equation*} \left[H_{k}, H_{l}\right]=0, \quad\left[H_{0}, X_{k}^{\pm}\right]=\pm 2 X_{k}^{\pm}, \quad\left[X_{k}^{+}, X_{l}^{-}\right]=H_{k+l}, \end{equation*} \begin{equation}\label{e11} \left[H_{k+1}, X_{l}^{\pm}\right]-\left[H_{k}, X_{l+1}^{\pm}\right]=\pm\left(H_{k} X_{l}^{\pm}+X_{l}^{\pm} H_{k}\right) , \end{equation} \begin{equation}\label{e12} \left[X_{k+1}^{\pm}, X_{l}^{\pm}\right]-\left[X_{k}^{\pm}, X_{l+1}^{\pm}\right]=\pm\left(X_{k}^{\pm} X_{l}^{\pm}+X_{l}^{\pm} X_{k}^{\pm}\right) . \end{equation} \end{definition} The Yangian $\operatorname{Y}(\mathfrak{sl}_{2})$ is a Hopf algebra. However, there is no explicit coproduct formula for the realization used in this paper. Partial information was obtained in \cite{ChPr5}, which is enough for our purposes.
\begin{lemma}[\cite{ChPr5}] The coproduct $\Delta$ of $\operatorname{Y}(\mathfrak{sl}_{2})$ satisfies: \begin{enumerate}[label=(\arabic*)] \item $\Delta\left(H_{0}\right)=H_{0} \otimes 1+1 \otimes H_{0}$, \item $\Delta\left(H_{1}\right)=H_{1} \otimes 1+H_{0} \otimes H_{0}+1 \otimes H_{1}-2 X_{0}^{-} \otimes X_{0}^{+} $, \item $\Delta\left(X_{0}^{+}\right)=X_{0}^{+} \otimes 1+1 \otimes X_{0}^{+}$, \item $\Delta\left(X_{1}^{+}\right)=X_{1}^{+} \otimes 1+1 \otimes X_{1}^{+}+H_{0} \otimes X_{0}^{+}$, \item $\Delta\left(X_{0}^{-}\right)=X_{0}^{-} \otimes 1+1 \otimes X_{0}^{-}$, \item $\Delta\left(X_{1}^{-}\right)=X_{1}^{-} \otimes 1+1 \otimes X_{1}^{-}+X_{0}^{-} \otimes H_{0}.$ \end{enumerate} \end{lemma} \begin{corollary} The generators $X_{0}^{\pm}$ and $H_{0}$ generate a subalgebra which is isomorphic to $\mathfrak{sl}_{2}$. \end{corollary} \begin{lemma}[\cite{Guay,Leve}] \label{lem5} $\operatorname{Y}(\mathfrak{sl}_{2})$ is generated by $X_{0}^{\pm}$, $H_{0}$ and $H_{1}$. \end{lemma} \begin{proof} It follows from the defining relations of $\operatorname{Y}(\mathfrak{sl}_{2})$ that $$[H_0^2, X_k^{\pm}]=H_0[H_0,X_k^{\pm}]+[H_0,X_k^{\pm}]H_0=\pm 2(H_0X_k^{\pm}+X_k^{\pm}H_0).$$ Then we have \begin{eqnarray}\label{e13} \left[H_{1}, X_{l}^{\pm}\right]-\left[H_{0}, X_{l+1}^{\pm}\right] &=& \pm\left(H_{0} X_{l}^{\pm}+X_{l}^{\pm} H_{0}\right)\nonumber\\ \left[H_{1}, X_{l}^{\pm}\right]\mp 2X_{l+1}^{\pm}&=& \frac{1}{2}[H_0^2, X_l^{\pm}]\nonumber\\ \mp 2X_{l+1}^{\pm}&=& -\left(\left[H_{1}, X_{l}^{\pm}\right]- \frac{1}{2}[H_0^2, X_l^{\pm}]\right)\nonumber\\ X_{l+1}^{\pm}&=& \pm \frac{1}{2}\left[H_{1}-\frac{1}{2}H_0^2, X_{l}^{\pm}\right]. \end{eqnarray} Note that $\left[X_{r}^{+}, X_{0}^{-}\right]=H_{r}$. Thus $\operatorname{Y}(\mathfrak{sl}_2)$ is generated by $X_0^{\pm}$, $H_{0}$ and $H_1$. \end{proof} Next, we introduce the representation theory of $\operatorname{Y}(\mathfrak{sl}_{2})$.
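The recursion \eqref{e13} can be checked numerically in a two-dimensional evaluation-type module where, for a parameter $a$, $X_{k}^{\pm}$ act as $a^{k}$ times the standard $\mathfrak{sl}_{2}$ matrices $E$, $F$ and $H_{k}$ acts as $a^{k}H$; the concrete value of $a$ below is an arbitrary illustrative choice.

```python
import numpy as np

# Standard sl2 matrices: raising E, lowering F, Cartan H.
E = np.array([[0.0, 1.0], [0.0, 0.0]])
F = np.array([[0.0, 0.0], [1.0, 0.0]])
H = np.array([[1.0, 0.0], [0.0, -1.0]])

def comm(A, B):
    return A @ B - B @ A

a = 1.7                      # illustrative evaluation parameter
H0, H1 = H, a * H            # H_k acts as a^k H in this module

# Check X_{l+1}^± = ±(1/2)[H_1 - (1/2)H_0^2, X_l^±] for several l.
for l in range(5):
    assert np.allclose(0.5 * comm(H1 - 0.5 * H0 @ H0, a**l * E), a**(l + 1) * E)
    assert np.allclose(-0.5 * comm(H1 - 0.5 * H0 @ H0, a**l * F), a**(l + 1) * F)

# Consistency with the defining relation [X_k^+, X_l^-] = H_{k+l}.
for k in range(3):
    for l in range(3):
        assert np.allclose(comm(a**k * E, a**l * F), a**(k + l) * H)
```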
\begin{definition} A module $V(\mu(u))$ of $\operatorname{Y}(\mathfrak{sl}_{2})$ is said to be highest weight if there exists a vector $v^{+}$ such that \begin{center} $V(\mu(u))=\operatorname{Y}(\mathfrak{sl}_{2}).v^{+}$, $X_{k}^{+}.v^{+}= 0$ and $H_{k}.v^{+} =\mu_{k} v^{+}$, \end{center} where $\mu(u)=1+\mu_{0}u^{-1}+\mu_{1}u^{-2}+\ldots$ is a formal series in $u^{-1}$. \end{definition} The Verma module $M(\mu(u))$ of $\operatorname{Y}(\mathfrak{sl}_{2})$ is defined to be the quotient of $\operatorname{Y}(\mathfrak{sl}_{2})$ by the left ideal generated by the generators $X_{k}^{+}$ and the elements $H_{k}-\mu_{k}1$. $\operatorname{Y}(\mathfrak{sl}_{2})$ acts on $M(\mu(u))$ by left multiplication. A highest weight vector of $M(\mu(u))$ is $1_{\mu(u)}$, the image of the element $1\in \operatorname{Y}(\mathfrak{sl}_{2})$ in the quotient. The highest weight space is one-dimensional. The Verma module $M(\mu(u))$ is a universal highest weight module in the sense that if $V(\mu(u))$ is another highest weight module with a highest weight vector $v$, then the mapping $1_{\mu(u)}\mapsto v$ defines a surjective $\operatorname{Y}(\mathfrak{sl}_{2})$-module homomorphism $M\left(\mu(u)\right)\rightarrow V\left(\mu(u)\right)$. The lowest weight modules are defined similarly, and we omit the details here. Every finite-dimensional simple $\operatorname{Y}(\mathfrak{sl}_{2})$-module is a highest weight module \cite{Dr2,YACLA}. Moreover, a simple $\operatorname{Y}(\mathfrak{sl}_{2})$-module $L(\mu(u))$ is finite-dimensional if and only if there exists a monic polynomial $\pi(u)$ such that $\mu(u)=\frac{\pi(u+1)}{\pi(u)}$, in the sense that the right-hand side is the Laurent expansion of the left-hand side about $u=\infty$.
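The Laurent expansion in this criterion is easy to compute by formal division of polynomials. The sketch below, with the illustrative choice $\pi(u)=u-a$, recovers $\mu(u)=1+\sum_{k\geq0}a^{k}u^{-k-1}$, i.e. $\mu_{k}=a^{k}$.

```python
def laurent_coeffs(p, q, n):
    """First n coefficients c_0, c_1, ... of p(u)/q(u) = sum_i c_i u^{-i}
    about u = infinity; p, q listed highest-degree-first, deg p = deg q.
    Matching coefficients in p = q * (c_0 + c_1 u^{-1} + ...) gives a
    triangular system solved forward."""
    m = len(q) - 1
    p = list(p) + [0.0] * (n - len(p))      # pad with zero coefficients
    c = []
    for i in range(n):
        s = p[i] - sum(q[i - j] * c[j] for j in range(max(0, i - m), i))
        c.append(s / q[0])
    return c

a = 3.0                                     # illustrative parameter
pi_poly = [1.0, -a]                         # pi(u)   = u - a
pi_shift = [1.0, 1.0 - a]                   # pi(u+1) = u + 1 - a
mu = laurent_coeffs(pi_shift, pi_poly, 6)   # mu(u) = 1 + mu_0 u^{-1} + ...
assert mu[0] == 1.0
assert mu[1:] == [a**k for k in range(5)]   # mu_k = a^k for pi(u) = u - a
```

This matches the action $H_{k}.w_{1}=a^{k}w_{1}$ on the one-dimensional highest weight space of $W_{1}(a)$ recalled below.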
In the references \cite{ChPr5, ChPr3}, an explicit realization of finite-dimensional simple modules for $\operatorname{Y}\left(\mathfrak{sl}_2\right)$ is given: every finite-dimensional simple module for $\operatorname{Y}\left(\mathfrak{sl}_2\right)$ is a tensor product of modules which are simple under $\mathfrak{sl}_2$. Let $W_m$ be the simple representation of $\mathfrak{sl}_{2}$ with highest weight $m\in \mathbb{Z}_{\geq 1}$. Pulling it back by the evaluation homomorphism $\rho$, $W_m$ becomes a $\operatorname{Y}(\mathfrak{sl}_{2})$-module. Let $W_m\left(a\right)$ be the simple representation of $\operatorname{Y}(\mathfrak{sl}_{2})$ associated to the Drinfeld polynomial $\pi(u)=(u-a)(u-(a+1))\ldots (u-(a+m-1))$. \begin{lemma}[Proposition 3.5,\cite{ChPr3}]\label{wra} For any $m\geq 1$, $a\in \mathbb{C}$, $W_m\left(a\right)$ has a basis $\{w_0,w_1,\ldots, w_m\}$ on which the action of $\operatorname{Y}(\mathfrak{sl}_{2})$ is given by \begin{center} $ X_k^+.w_s=\left(s+a\right)^k\left(s+1\right)w_{s+1},\ \ X_k^-.w_s=\left(s+a-1\right)^k\left(m-s+1\right)w_{s-1},$\\ $H_k.w_s=\Big(\left(s+a-1\right)^ks\left(m-s+1\right)-\left(s+a\right)^k\left(s+1\right)\left(m-s\right)\Big)w_s$. \end{center} \end{lemma} The special case $m=1$ will be used in this paper. \begin{corollary}\label{slw1a} In $W_1\left(a\right)$, \begin{equation}\label{Cor3.3} H_{k}.w_1=a^kw_1,\quad X_{k}^{-}.w_1=a^kw_0, \quad X_{k}^{+}.w_0=a^kw_{1},\quad H_{k}.w_0=-a^kw_0. \end{equation} \end{corollary} \begin{definition} A $\operatorname{Y}(\mathfrak{sl}_{2})$-module $V$ is called a weight module if $V=\bigoplus V_{\mu}$, where $\mu\in \mathbb{C}$ and $V_{\mu}=\{v \in V \mid H_{0}.v=\mu v\}$. \end{definition} The next lemma follows from the defining relations of the Yangian. \begin{lemma} Let $V=\bigoplus V_{\mu}$ be a weight module. If $v\in V_{\mu}$, then $H_i.v\in V_{\mu}$, $X_{i}^{+}.v\in V_{\mu+2}$ and $X_{i}^{-}.v\in V_{\mu-2}$.
\end{lemma} We close this section by paraphrasing Theorem 3.4.1 in \cite{Ma} about dense modules for $\mathfrak{sl}_{2}$. \begin{lemma}\label{lem3} Let $V$ be a simple weight $\mathfrak{sl}_{2}$-module which is neither a highest nor a lowest weight module. Then $V$ is an infinite-dimensional vector space with a basis $\{\ldots, v_{\mu-4}, v_{\mu-2}, v_{\mu}, v_{\mu+2}, v_{\mu+4}, \ldots\}$. The actions of $X_{0}^{\pm}$ and $H_{0}$ on $V$ are given as follows: $$\begin{aligned} X_0^{-}.\left(v_{\mu+2k}\right) &=v_{\mu+2k-2}, \\ X_0^{+}.\left(v_{\mu+2k}\right) &=a_{\mu+2k} v_{\mu+2k+2},\\ H_0.\left(v_{\mu+2k}\right) &=(\mu+2k) v_{\mu+2k}, \end{aligned}$$ where $a_{\mu+2k}=\frac{1}{4}\left(\tau-(\mu+2k+1)^{2}\right)\neq 0$ for some $\tau\in \mathbb{C}$ and all $k\in \mathbb{Z}$. \end{lemma} \section{Dense Modules for $\operatorname{Y}(\mathfrak{sl}_{2})$} As mentioned in the introduction, a dense module $V$ of $\mathfrak{sl}_{2}$ is a $\operatorname{Y}(\mathfrak{sl}_{2})$-module. By Lemma \ref{lem5}, $\operatorname{Y}(\mathfrak{sl}_{2})$ is generated by $X_{0}^{\pm}$, $H_{0}$ and $H_{1}$. It follows from Lemma \ref{lem3} that the action of $H_{1}$ on the weight vectors $v_{\mu+2k}$ completely determines the structure of the $\operatorname{Y}(\mathfrak{sl}_{2})$-module $V$. Suppose that $H_{1}.v_{\mu+2k}=b_{\mu+2k}v_{\mu+2k}$, where $b_{\mu+2k}\in \mathbb{C}$. \begin{proposition} Suppose that $\mu\in (0,2]$. Then $$b_{\mu+2k} =\frac{\left(\mu+2k\right)\left(a_{\mu}+b_{\mu}\right)}{\mu}+k\left(\mu+2k\right)-a_{\mu+2k},\qquad k\in \mathbb{Z}.$$ \end{proposition} \begin{proof} Let $v_{\mu+2k}$ be a weight vector with $\mu\neq 0$. We first show that \begin{equation}\label{e18} b_{\mu+2k}-2b_{\mu+2k-2}+b_{\mu+2k-4}=6 \end{equation} for all $k\in \mathbb{Z}$.
It follows from (\ref{e13}) that \begin{equation*} \begin{aligned} X_1^{-}.v_{\mu+2k}&=-\frac{1}{2}\left[H_{1}-\frac{1}{2}H_0^2, X_{0}^{-}\right].v_{\mu+2k}=-\frac{1}{2}\left(b_{\mu+2k-2}-b_{\mu+2k}+2\mu+4k-2\right)v_{\mu+2k-2},\\ X_1^{+}.v_{\mu+2k} &=\frac{1}{2}\left[H_{1}-\frac{1}{2}H_0^2, X_{0}^{+}\right].v_{\mu+2k}=\frac{a_{\mu+2k}}{2}\left(b_{\mu+2k+2}-b_{\mu+2k}-2\mu-4k-2\right)v_{\mu+2k+2}. \end{aligned} \end{equation*} By the defining relation (\ref{e12}), $$\left(\left[X_{1}^{-}, X_{0}^{-}\right]-\left[X_{0}^{-}, X_{1}^{-}\right]\right).v_{\mu+2k}=-\left(X_{0}^{-} X_{0}^{-}+X_{0}^{-} X_{0}^{-}\right).v_{\mu+2k},$$ one can easily obtain that \begin{equation*} \begin{aligned} -\frac{1}{2}(b_{\mu+2k-4}-b_{\mu+2k-2}& +2\mu+4k-6).v_{\mu+2k-4}\\ &+\frac{1}{2}\left(b_{\mu+2k-2}-b_{\mu+2k}+2\mu+4k-2\right).v_{\mu+2k-4}=-v_{\mu+2k-4}. \end{aligned} \end{equation*} Comparing the coefficients of $v_{\mu+2k-4}$, we have $b_{\mu+2k}-2b_{\mu+2k-2}+b_{\mu+2k-4}=6$. We next show that $b_{\mu+2k}=-a_{\mu+2k}+\frac{\mu+2k}{2}(b_{\mu+2k}-b_{\mu+2k-2}-2 \mu-4k+2)$. Indeed, \begin{eqnarray*} H_{1} .v_{\mu+2k} &=&\left[X_{1}^{+}, X_{0}^{-}\right] . v_{\mu+2k}, \\ b_{\mu+2k} v_{\mu+2k} &=&X_{1}^{+} X_{0}^{-} .v_{\mu+2k}-X_{0}^{-} X_{1}^{+}. v_{\mu+2k} ,\\ b_{\mu+2k} v_{\mu+2k} &=&\frac{a_{\mu+2k-2}}{2}\left(b_{\mu+2k}-b_{\mu+2k-2}-2 \mu-4k+2\right) v_{\mu+2k}\\ &-&\frac{a_{\mu+2k}}{2}\left(b_{\mu+2k+2}-b_{\mu+2k}-2 \mu-4k-2\right) v_{\mu+2k}. \end{eqnarray*} Recall that $a_{\mu+2k-2}=a_{\mu+2k}+\mu+2k$. Thus \begin{eqnarray*} b_{\mu+2k} &=&\frac{a_{\mu+2k}+\mu+2k}{2}\left(b_{\mu+2k}-b_{\mu+2k-2}-2 \mu-4k+2\right)\\ &&-\frac{a_{\mu+2k}}{2}\left(b_{\mu+2k+2}-b_{\mu+2k}-2 \mu-4k-2\right)\\ &=&\frac{a_{\mu+2k}}{2}\left(-b_{\mu+2k+2}+2b_{\mu+2k}-b_{\mu+2k-2}+4\right)\\ &&+\frac{\mu+2k}{2}\left(b_{\mu+2k}-b_{\mu+2k-2}-2\mu-4k+2\right)\\ &=&-a_{\mu+2k}+\frac{\mu+2k}{2}(b_{\mu+2k}-b_{\mu+2k-2}-2 \mu-4k+2).
\end{eqnarray*} Then $(\mu+2k) b_{\mu+2k-2}=(\mu+2k-2)b_{\mu+2k}-2a_{\mu+2k}-2(\mu+2k)^{2}+2(\mu+2k)$. Adding $(\mu+2k) a_{\mu+2k-2}$ to both sides, we get \begin{equation}\label{e17} (\mu+2k)(b_{\mu+2k-2}+a_{\mu+2k-2})=(\mu+2k-2)(b_{\mu+2k}+a_{\mu+2k}-\mu-2k). \end{equation} We proceed by cases according to the value of $\mu$. \textbf{Case 1:} $\mu\in (0,2)$. Then $\mu+2k\neq 0$ for all $k\in \mathbb{Z}$, thus we may rewrite (\ref{e17}) as $$\frac{b_{\mu+2k-2}+a_{\mu+2k-2}}{\mu+2k-2}=\frac{b_{\mu+2k}+a_{\mu+2k}}{\mu+2k}-1.$$ This recursive relation implies that $\frac{b_{\mu+2k}+a_{\mu+2k}}{\mu+2k}=\frac{b_{\mu}+a_{\mu}}{\mu}+k$ for all $k\in \mathbb{Z}$, that is, \begin{equation}\label{e14} b_{\mu+2 k}=\frac{(\mu+2 k)\left(a_{\mu}+b_{\mu}\right)}{\mu}+k(\mu+2 k)-a_{\mu+2 k}. \end{equation} \textbf{Case 2:} $\mu=2$. Taking $k=0$ in (\ref{e17}) gives $b_{0}=-a_{0}$, and then, by (\ref{e18}), $b_{-2}=6-b_{2}-2a_{0}$. A straightforward computation shows that $\frac{b_{-2}+a_{-2}}{-2}=\frac{b_{2}+a_{2}}{2}-2$. As in Case 1, a tedious but routine computation then shows that the $b_{\mu+2k}$ $(k\in \mathbb{Z})$ satisfy (\ref{e14}). \end{proof} \begin{remark} The dense modules for $\operatorname{Y}(\mathfrak{sl}_{2})$ are parametrized by three parameters: $\mu\in (0,2]$, $\tau\in \mathbb{C}$ and $b_{\mu}$ as in (\ref{e14}). We denote such a module by $V(\mu,\tau,b_{\mu})$. \end{remark} \section{Proof of Main Theorem} Let $V$ be a simple weight module for $\operatorname{Y}(\mathfrak{sl}_{2})$ which admits a one-dimensional weight space. Since every simple finite-dimensional, highest weight, or lowest weight module is such a module, we assume, until the proof of Theorem 1.1, that $V$ is none of these. Assume $\dim(V_{\mu})=1$ and $V_{\mu}=\text{span}\{w\}$. Since $V$ is neither highest weight nor lowest weight, we have both $\dim(V_{\mu-2})>0$ and $\dim(V_{\mu+2})>0$. \begin{lemma}\label{lem1} There exists $u\in V_{\mu-2}$ such that $X_{0}^{+}.u=w$.
\end{lemma} \begin{proof} Recall that $X_{k}^{+}.V_{\mu-2}\subseteq V_{\mu}$. Suppose on the contrary that $X_{0}^{+}.u=0$ for all $u\in V_{\mu-2}$. We use mathematical induction to show that $X_{n}^{+}.u=0$ for all $n\in \mathbb{N}$. For $n=0$, this is clear. Suppose that $X_{n}^{+}.u=0$ for all $n=0,1,2,\ldots, k$, and let $n=k+1$. \begin{equation*} \begin{aligned} X_{k+1}^{+}.u&=\frac{1}{2}[H_0,X_{k+1}^{+}].u\\ &=\frac{1}{2}\left([H_{1},X_{k}^{+}]-H_{0}X_{k}^{+}-X_{k}^{+}H_{0}\right).u\\ &=\frac{1}{2}\left(H_{1}X_{k}^{+}-X_{k}^{+}H_{1}-H_{0}X_{k}^{+}-X_{k}^{+}H_{0}\right).u. \end{aligned} \end{equation*} It follows from $H_{i}V_{\mu-2}\subseteq V_{\mu-2}$ and the induction hypothesis that $X_{k+1}^{+}.u=0$. By mathematical induction, we have $X_{n}^{+}.u=0$ for all $n\in \mathbb{N}$. But then, for any nonzero $u\in V_{\mu-2}$, $W=\operatorname{Y}(\mathfrak{sl}_{2}).u$ is a proper submodule of $V$, since $V_{\mu}\not\subseteq W$, contradicting the fact that $V$ is simple. Therefore there exists $u\in V_{\mu-2}$ such that $X_{0}^{+}.u=w$. \end{proof} Recall that $V_{\mu}$ is one-dimensional, $H_{k}V_{\mu-2}\subseteq V_{\mu-2}$ and $X_{k}^{+}V_{\mu-2}\subseteq V_{\mu}$. The following corollary is obtained easily. \begin{corollary}\label{cor1} Let $u$ be as in Lemma \ref{lem1}. For $k,m\in \mathbb{N}$, $X_{k}^{+}.u=a_k w$ and $X_{0}^{+}H_{m}.u=b_{m}w$, where $a_k, b_{m}\in \mathbb{C}$. \end{corollary} Let $w_1=X_{0}^{+}.w=(X_{0}^{+})^2.u$. Arguing as in Lemma \ref{lem1}, one shows that $w_1\neq 0$. The following diagram illustrates the relations between $u$, $w$ and $w_{1}$.
\begin{center} \begin{tikzpicture}[scale=0.5] \tikzstyle{vertex}=[circle,fill=black!0,minimum size=10pt,inner sep=0pt] \coordinate (A) at (-4,0); \coordinate (B) at (0,0); \coordinate (C) at (4,0); \coordinate (D) at (8,0); \coordinate (E) at (-8,0); \coordinate [label=above:$u$] (1) at ($(A)+(0,0.2)$); \coordinate [label=above:$w$] (2) at ($(B)+(0,0.2)$); \coordinate [label=above:$w_{1}$] (3) at ($(C)+(0,0.2)$); \coordinate [label=below:$V_{\mu-2}$] (4) at ($(A)-(0,0.2)$); \coordinate [label=below:$V_{\mu}$] (5) at ($(B)-(0,0.2)$); \coordinate [label=below:$V_{\mu+2}$] (6) at ($(C)-(0,0.2)$); \draw (A) circle (0.2); \draw (B) circle (0.2); \draw (C) circle (0.2); \draw (D) circle (0.2); \draw (E) circle (0.2); \draw[thick] ($(E)+(0.2,0)$)--($(A)-(0.2,0)$); \draw[thick] ($(A)+(0.2,0)$)--($(B)-(0.2,0)$); \draw[thick] ($(B)+(0.2,0)$)--($(C)-(0.2,0)$); \draw[thick] ($(C)+(0.2,0)$)--($(D)-(0.2,0)$); \draw[thick] ($(-10,0)$)--($(E)-(0.2,0)$); \draw[thick] ($(10,0)$)--($(D)+(0.2,0)$); \fill (-11,0) node [font={\small},scale=1.5, minimum height=3em,vertex] {...}; \fill (11,0) node [font={\small},scale=1.5, minimum height=3em,vertex] {...}; \end{tikzpicture} \end{center} In the next proposition, we will show that $w_{1}$ is a common eigenvector for all $H_{k}$. To prove it, we need the following lemma. \begin{lemma}\label{lem2} $X_{n}^{+}.w=c_{n}w_1$, where $c_{n}\in \mathbb{C}$ and $n\in \mathbb{N}$. \end{lemma} \begin{proof} We prove this lemma by induction. For $n=0$, it is obvious. For $n=1$, \begin{equation*} \begin{aligned} X_{1}^{+}.w&=X_{1}^{+}X_{0}^{+}.u \\ &= [X_{1}^{+}, X_{0}^{+}].u+X_{0}^{+}X_{1}^{+}.u\\ &=X_{0}^{+}X_{0}^{+}.u+a_{1}(X_{0}^{+})^2.u\\ &=c_{1}w_{1}. \end{aligned} \end{equation*} Suppose that the statement is true for all $n=0,1,2,\ldots,k$. Let $n=k+1$. 
\begin{equation*} \begin{aligned} X_{k+1}^{+}.w&=X_{k+1}^{+}.(X_{0}^{+}.u) \\ &= [X_{k+1}^{+}, X_{0}^{+}].u+X_{0}^{+}.(X_{k+1}^{+}.u)\\ &=(X_{k}^{+}X_{1}^{+}-X_{1}^{+}X_{k}^{+}+X_{k}^{+}X_{0}^{+}+X_{0}^{+}X_{k}^{+}).u+a_{k+1}(X_{0}^{+})^2. u. \end{aligned} \end{equation*} It follows from Corollary \ref{cor1} and the induction hypothesis that $X_{k+1}^{+}.w=c_{k+1}w_{1}$. Therefore, by mathematical induction, the statement is true. \end{proof} \begin{proposition}\label{prop1} $H_{n}.w_{1}=d_{n}w_{1}$, where $d_{n}\in \mathbb{C}$ and $n\in \mathbb{N}$. \end{proposition} \begin{proof} We prove this proposition by induction. For $n=0$, $H_{0}.w_{1}=(\mu+2)w_{1}$. Suppose that the statement is true for $n=0,1,2,\ldots,k$, and let $n=k+1$. \begin{equation*} \begin{aligned} H_{k+1}.w_1&=H_{k+1}. (\left(X_{0}^{+}\right)^2.u)\\ &=[H_{k+1}, \left(X_{0}^{+}\right)^2].u+\left(X_{0}^{+}\right)^2H_{k+1}.u\\ &= [H_{k+1},X_{0}^{+}]X_{0}^{+}.u+X_{0}^{+}[H_{k+1},X_{0}^{+}].u+X_{0}^{+}\left(X_{0}^{+}H_{k+1}.u\right)\\ &= \left([H_{k},X_{1}^{+}]+H_{k}X_{0}^{+}+X_{0}^{+}H_{k}\right).(X_{0}^{+}.u)+X_{0}^{+}[H_{k+1},X_{0}^{+}].u+X_{0}^{+}.\left(b_{k+1}w\right)\\ & \equiv \left([H_{k},X_{1}^{+}]+H_{k}X_{0}^{+}\right).(X_{0}^{+}.u) \quad (\operatorname{mod} \mathbb{C}w_{1})\\ & \equiv [H_{k},X_{1}^{+}]X_{0}^{+}.u \quad (\operatorname{mod} \mathbb{C}w_{1})\quad (\text{by the induction hypothesis})\\ & \equiv \left(H_{k}X_{1}^{+}-X_{1}^{+}H_{k}\right).w \quad (\operatorname{mod} \mathbb{C}w_{1})\\ & \equiv c_{1}H_{k}X_{0}^{+}.w \quad (\operatorname{mod} \mathbb{C}w_{1})\\ &\equiv 0 \quad (\operatorname{mod} \mathbb{C}w_{1}). \end{aligned} \end{equation*} By mathematical induction, we have $H_{n}.w_{1}=d_{n}w_{1}$. \end{proof} Arguing as in the discussion from Lemma \ref{lem1} to Proposition \ref{prop1}, we may find a nonzero vector $v\in V_{\mu+2}$ such that $w=X_{0}^{-}.v$, and show that $w_{-1}=X_{0}^{-}.w\neq 0$. The following diagram illustrates the relations between $v$, $w$ and $w_{-1}$.
\begin{center} \begin{tikzpicture}[scale=0.5] \tikzstyle{vertex}=[circle,fill=black!0,minimum size=10pt,inner sep=0pt] \coordinate (A) at (-4,0); \coordinate (B) at (0,0); \coordinate (C) at (4,0); \coordinate (D) at (8,0); \coordinate (E) at (-8,0); \coordinate [label=above:$w_{-1}$] (1) at ($(A)+(0,0.2)$); \coordinate [label=above:$w$] (2) at ($(B)+(0,0.2)$); \coordinate [label=above:$v$] (3) at ($(C)+(0,0.2)$); \coordinate [label=below:$V_{\mu-2}$] (4) at ($(A)-(0,0.2)$); \coordinate [label=below:$V_{\mu}$] (5) at ($(B)-(0,0.2)$); \coordinate [label=below:$V_{\mu+2}$] (6) at ($(C)-(0,0.2)$); \draw (A) circle (0.2); \draw (B) circle (0.2); \draw (C) circle (0.2); \draw (D) circle (0.2); \draw (E) circle (0.2); \draw[thick] ($(E)+(0.2,0)$)--($(A)-(0.2,0)$); \draw[thick] ($(A)+(0.2,0)$)--($(B)-(0.2,0)$); \draw[thick] ($(B)+(0.2,0)$)--($(C)-(0.2,0)$); \draw[thick] ($(C)+(0.2,0)$)--($(D)-(0.2,0)$); \fill (-11,0) node [font={\small},scale=1.5, minimum height=3em,vertex] {...}; \fill (11,0) node [font={\small},scale=1.5, minimum height=3em,vertex] {...}; \draw[thick] ($(-10,0)$)--($(E)-(0.2,0)$); \draw[thick] ($(10,0)$)--($(D)+(0.2,0)$); \end{tikzpicture} \end{center} We summarize some results regarding the three vectors above, whose proofs are analogous to those given earlier, in the next proposition. \begin{proposition}\quad \begin{enumerate}[label=(\arabic*)] \item $X_{n}^{-}.w=f_{n}w_{-1}$, where $f_{n}\in \mathbb{C}$ and $n\in \mathbb{N}$. \item $w_{-1}$ is a common eigenvector for all $H_{k}$, where $k\in \mathbb{N}$. \end{enumerate} \end{proposition} Define $w_{-n}=(X_{0}^{-})^{n}.w$ and $w_{n}=(X_{0}^{+})^{n}.w$, and denote $w$ by $w_{0}$. Let $$W=\text{span}\{\ldots,w_{-2}, w_{-1}, w_{0}, w_{1}, w_{2}, \ldots\}.$$ Replacing $u$ by $w_{0}$ in the proofs of Lemma \ref{lem2} and Proposition \ref{prop1}, we have the following lemma. \begin{lemma} \begin{enumerate}[label=(\arabic*)] \item $X_{n}^{+}.w_{1}=e_{n}w_{2}$, where $e_{n}\in \mathbb{C}$. \item $w_{2}$ is a common eigenvector for $H_{k}$.
\end{enumerate} \end{lemma} Similarly, we have \begin{proposition}\label{prop2} $w_{n}$ is a common eigenvector for $H_{k}$, where $n\in \mathbb{Z}$ and $k\in \mathbb{N}$. \end{proposition} We now prove our main theorem. \begin{proof}[Proof of Theorem 1.1] Let $V$ be a simple weight module for $\operatorname{Y}(\mathfrak{sl}_{2})$ which admits a one-dimensional weight space. It is well known that every simple finite-dimensional module, highest weight module, or lowest weight module has a one-dimensional weight space, so we may assume that $V$ is none of these. It follows from the discussions above in this section that $W\subseteq V$. We claim that $W$ is a $\operatorname{Y}(\mathfrak{sl}_{2})$-submodule. By Lemma \ref{lem5}, it is enough to show that $W$ is stable under the actions of $X_0^{\pm}$, $H_{0}$ and $H_1$. \begin{enumerate}[label=(\arabic*)] \item $H_0$ stabilizes $W$: it is easy to show that $H_0.w_{k}=(\mu+2k)w_k$. \item $X_{0}^{+}$ stabilizes $W$: it is enough to show that $X_{0}^{+}.w_{-n}=p_{n}w_{-n+1}$ for all $n\in \mathbb{Z}_{>0}$. We prove this by mathematical induction. Let $n=1$. \begin{equation*} \begin{aligned} X_{0}^{+}.w_{-1}&=X_{0}^{+}.(X_{0}^{-}.w_{0})\\ &=[X_{0}^{+},X_{0}^{-}].w_{0}+X_{0}^{-}.(X_{0}^{+}.w_{0})\\ &=H_{0}.w_{0}+X_{0}^{-}.w_{1}\\ &= \mu w_{0}+f_{0}w_{0}\\ &=(\mu+f_{0})w_{0}. \end{aligned} \end{equation*} Suppose the claim holds for $n=1,2,\ldots,k$, and let $n=k+1$. \begin{equation*} \begin{aligned} X_{0}^{+}.w_{-k-1}&=X_{0}^{+}.(X_{0}^{-}.w_{-k})\\ &=[X_{0}^{+},X_{0}^{-}].w_{-k}+X_{0}^{-}.(X_{0}^{+}.w_{-k})\\ &=H_{0}.w_{-k}+X_{0}^{-}.(p_{k}w_{-k+1})\\ &= (\mu-2k)w_{-k}+p_{k}w_{-k}\\ &= (\mu-2k+p_{k})w_{-k}. \end{aligned} \end{equation*} By mathematical induction, $X_{0}^{+}.w_{-n}=p_{n}w_{-n+1}$ for all $n\in \mathbb{Z}_{>0}$. \item $X_0^{-}$ stabilizes $W$: similar to Case (2). \item $H_1$ stabilizes $W$: this follows from Proposition \ref{prop2}. \end{enumerate} Since $V$ is simple, $V=W$.
Recall that we assume that $V$ is a simple weight module for $\operatorname{Y}(\mathfrak{sl}_{2})$ which is neither highest nor lowest weight. Thus $w_{n}\neq 0$ for all $n\in \mathbb{Z}$, so $V$ is infinite-dimensional. It is clear that $V$ is an irreducible $\mathfrak{sl}_{2}$-module which is neither highest weight nor lowest weight. It follows from Theorem 3.4.1 in \cite{Ma} that $V$ is a dense module of $\mathfrak{sl}_{2}$. By what we proved in Section 3, $V$ is isomorphic to a dense module $V(\mu,\tau, b_{\mu})$. \end{proof} \section{Modules with two-dimensional weight spaces} Every weight space of a simple weight $ \mathfrak{sl}_{2}$-module is one-dimensional. In this section, we show that this is no longer true for the Yangian $\operatorname{Y}(\mathfrak{sl}_{2})$, by exhibiting a class of simple weight modules all of whose weight spaces are two-dimensional. Throughout this section, let $U=V(\mu,\tau, b_{\mu})\otimes W_{1}(r)$, where $\tau\neq (\mu+2k+1)^{2}$ for all $k\in \mathbb{Z}$, and where $w_{-1}$ and $w_{1}$ denote the basis vectors of $W_{1}(r)$ of weights $-1$ and $1$, respectively. Then $U$ is a weight module for $\operatorname{Y}(\mathfrak{sl}_{2})$, with weight spaces $U_{\mu+2k+1}=\operatorname{span}\{v_{\mu+2k+2} \otimes w_{-1}, v_{\mu+2k} \otimes w_{1}\}$. \begin{lemma}\label{lem4} Let $V$ be a nonzero submodule of $U$. If $U_{\mu+2k+1}\subseteq V$ for some $k\in \mathbb{Z}$, then $V=U$. \end{lemma} \begin{proof} The lemma will be proved once we show that both $U_{\mu+2k-1}\subseteq V$ and $U_{\mu+2k+3}\subseteq V$; these inclusions follow easily from the following computations.
$$ \begin{aligned} X_{0}^{+}.\left(v_{\mu+2k+2}\otimes w_{-1}\right)&=a_{\mu+2k+2} \left(v_{\mu+2k+4} \otimes w_{-1}\right)+\left(v_{\mu+2k+2} \otimes w_{1}\right),\\ X_{0}^{+}.\left(v_{\mu+2k}\otimes w_{1}\right)&=a_{\mu+2k} \left(v_{\mu+2k+2} \otimes w_{1}\right),\\ X_{0}^{-}.\left(v_{\mu+2k+2}\otimes w_{-1}\right)&=v_{\mu+2k} \otimes w_{-1},\\ X_{0}^{-}.\left(v_{\mu+2k}\otimes w_{1}\right)&=v_{\mu+2k} \otimes w_{-1}+v_{\mu+2k-2} \otimes w_{1}.\\ \end{aligned} $$ \end{proof} \begin{thm} $U$ is simple if and only if $b_{\mu}\neq \frac{\mu^{2}+\mu \pm \mu \sqrt{\tau}}{2}-a_{\mu}+\mu(r-1)$. \end{thm} \begin{proof} We first prove sufficiency by contraposition. Suppose on the contrary that $U$ has a proper nonzero submodule $V$. As an $\mathfrak{sl}_{2}$-module, $V$ is a weight module, and $V_{\mu+2k+1}=V\cap U_{\mu+2k+1}$, where $k\in \mathbb{Z}$. It follows from Lemma \ref{lem4} that $\dim(V_{\mu+2k+1})\leq 1$. Moreover, there exists $k\in \mathbb{Z}$ such that $\dim(V_{\mu+2k+1})=1$. Without loss of generality, we may assume that $k=0$. It follows from the coproduct of $\operatorname{Y}(\mathfrak{sl}_{2})$ that \begin{equation*} \begin{aligned} H_{1}.\left(v_{\mu+2} \otimes w_{-1}\right)=&H_{1}.v_{\mu+2} \otimes w_{-1}+H_{0}.v_{\mu+2} \otimes H_{0}.w_{-1}\\ &+v_{\mu+2} \otimes H_{1}.w_{-1}-2X_{0}^{-}.v_{\mu+2} \otimes X_{0}^{+}.w_{-1}\\ =& b_{\mu+2}v_{\mu+2} \otimes w_{-1}-(\mu+2)v_{\mu+2} \otimes w_{-1}\\ &-rv_{\mu+2} \otimes w_{-1}-2v_{\mu}\otimes w_{1}\\ =& \left(b_{\mu+2}-\mu-2-r\right)\left(v_{\mu+2} \otimes w_{-1}\right)-2\left(v_{\mu} \otimes w_{1}\right). \end{aligned} \end{equation*} Similarly, we have $$H_{1}.\left(v_{\mu}\otimes w_{1}\right)=\left(b_{\mu}+\mu+r\right)\left(v_{\mu} \otimes w_{1}\right).$$ Let $v=a\left(v_{\mu+2} \otimes w_{-1}\right)+b\left(v_{\mu} \otimes w_{1}\right)$ be a nonzero element of $V_{\mu+1}$. It is not hard to see from Lemma \ref{lem4} that both $a\neq 0$ and $b\neq 0$.
\begin{equation*}\label{e15} \begin{aligned} H_{1}.(a&\left(v_{\mu+2} \otimes w_{-1}\right)+b\left(v_{\mu} \otimes w_{1}\right))\\ =&a(H_{1}.(v_{\mu+2} \otimes w_{-1}))+b(H_{1}.(v_{\mu} \otimes w_{1}))\\ =&a(\left(b_{\mu+2}-\mu-2-r\right)\left(v_{\mu+2} \otimes w_{-1}\right)-2\left(v_{\mu} \otimes w_{1}\right))+b((b_{\mu}+\mu+r)(v_{\mu} \otimes w_{1}))\\ =&a(b_{\mu+2}-\mu-2-r)v_{\mu+2} \otimes w_{-1}+(b(b_{\mu}+\mu+r)-2a)v_{\mu} \otimes w_{1}. \end{aligned} \end{equation*} Since $V_{\mu+1}$ is one-dimensional, $H_{1}.v$ is a scalar multiple of $v$. We then obtain $$a(b(b_{\mu}+\mu+r)-2a)=ba(b_{\mu+2}-\mu-2-r),$$ which implies that $$\frac{a}{b}=\frac{1}{2}(b_{\mu}-b_{\mu+2}+2\mu+2r+2)=r-1-\frac{b_{\mu}+a_{\mu}}{\mu}.$$ A computation shows $X_{0}^{+}.v=aa_{\mu+2}\left(v_{\mu+4} \otimes w_{-1}\right)+\left(a+ba_{\mu}\right)\left(v_{\mu+2} \otimes w_{1}\right)$, and \begin{equation*}\label{e16} \begin{aligned} H_{1}.(&aa_{\mu+2}\left(v_{\mu+4} \otimes w_{-1}\right)+\left(a+ba_{\mu}\right)\left(v_{\mu+2} \otimes w_{1}\right))\\ =&aa_{\mu+2}\left(b_{\mu+4}-\mu-4-r\right)\left(v_{\mu+4} \otimes w_{-1}\right)\\ &+\left(\left(a+ba_{\mu}\right)\left(b_{\mu+2}+2+\mu+r\right)-2aa_{\mu+2}\right)\left(v_{\mu+2} \otimes w_{1}\right). \end{aligned} \end{equation*} Since $\dim(V_{\mu+3})\leq 1$, arguing as above, we have $$\frac{a}{b}=\frac{-a_{\mu}\left(2-r+\frac{b_{\mu}+a_{\mu}}{\mu}\right)}{a_{\mu+2}+2-r+\frac{b_{\mu}+a_{\mu}}{\mu}},$$ so $r-1-\frac{b_{\mu}+a_{\mu}}{\mu}=\frac{-a_{\mu}\left(2-r+\frac{b_{\mu}+a_{\mu}}{\mu}\right)}{a_{\mu+2}+2-r+\frac{b_{\mu}+a_{\mu}}{\mu}}$. Let $t=\frac{a}{b}=r-1-\frac{b_{\mu}+a_{\mu}}{\mu}$; then $t=\frac{-a_{\mu}\left(1-t\right)}{a_{\mu+2}+1-t}$, and hence $t^{2}-t-a_{\mu+2}t+a_{\mu}t-a_{\mu}=0$. Since $a_{\mu}-a_{\mu+2}=\mu+2$, this simplifies to $t^2+\left(\mu+1\right)t-a_{\mu}=0$. Thus $t=\frac{a}{b}=\frac{-\left(\mu+1\right)\pm\sqrt{\tau}}{2}$, which implies that $b_{\mu}=\frac{\mu^2+\mu\pm\mu\sqrt{\tau}}{2}-a_{\mu}+\mu(r-1)$. This contradicts our assumption.
Therefore, $U$ is simple. We next show necessity; equivalently, we show that $U$ is reducible if $b_{\mu}=\frac{\mu^2+\mu\pm\mu\sqrt{\tau}}{2}-a_{\mu}+\mu(r-1)$. Let $v=a\left(v_{\mu+2} \otimes w_{-1}\right)+\left(v_{\mu} \otimes w_{1}\right)$, where $a=\frac{-\left(\mu+1\right)\pm\sqrt{\tau}}{2}$. It follows from the computations above that $H_{1}.v=c_{1}v$ and $H_{1}.(X_{0}^{+}.v)=c_{2}X_{0}^{+}.v$, where $c_{1}, c_{2}\in \mathbb{C}$. Similarly, one can show that $H_{1}.(X_{0}^{-}.v)=c_{3}X_{0}^{-}.v$ for some $c_{3}\in \mathbb{C}$. Moreover, $$X_{0}^{-}.(X_{0}^{+}.v)=(a+a a_{\mu+2}+a_{\mu})v_{\mu+2}\otimes w_{-1}+(a+a_{\mu})v_{\mu}\otimes w_{1},$$ and a straightforward computation shows that this is a scalar multiple of $v$. It follows from the discussions in Section 2 that $V=\operatorname{span}\{\ldots, (X_{0}^{-})^{2}.v, X_{0}^{-}.v,v,X_{0}^{+}.v,(X_{0}^{+})^{2}.v,\ldots\}$ is a submodule whose weight space of weight $\mu+1$ is one-dimensional. Hence $V$ is a proper submodule of $U$, and $U$ is reducible. \end{proof} \end{document}
\begin{document} \title[Online Monitoring of Spatio-Temporal Properties for Imprecise Signals]{Online Monitoring of Spatio-Temporal Properties for Imprecise Signals} \author{Ennio Visconti} \affiliation{ \institution{TU Wien} \streetaddress{Treitlstraße 3} \city{Vienna} \country{Austria}} \author{Ezio Bartocci} \affiliation{ \institution{TU Wien} \streetaddress{Treitlstraße 3} \city{Vienna} \country{Austria}} \author{Michele Loreti} \affiliation{ \institution{Università di Camerino} \city{Camerino} \country{Italy}} \author{Laura Nenzi} \affiliation{ \institution{TU Wien} \streetaddress{Treitlstraße 3} \city{Vienna} \country{Austria}} \affiliation{ \institution{Università di Trieste} \city{Trieste} \country{Italy}} \renewcommand{\shortauthors}{Visconti, et al.} \begin{abstract} Monitoring the behavior of dynamical systems, from biological to cyber-physical ones, often requires reasoning about complex spatio-temporal properties of physical and/or computational entities that are dynamically interconnected and arranged in a particular spatial configuration. Spatio-Temporal Reach and Escape Logic (STREL) is a recent logic-based formal language designed to specify and reason about spatio-temporal properties. STREL considers each system's entity as a node of a dynamic weighted graph representing their spatial arrangement. Each node generates a set of mixed-analog signals describing the evolution over time of computational and physical quantities characterising the node's behavior. While offline algorithms are available for monitoring STREL specifications over logged simulation traces, here we investigate for the first time an online algorithm enabling runtime verification during the system's execution or simulation.
Our approach extends the original framework by considering imprecise signals and by enhancing the logic's semantics with the possibility to express partial guarantees about the conformance of the system's behavior with its specification. Finally, we demonstrate our approach in a real-world environmental monitoring case study. \end{abstract} \begin{CCSXML} <ccs2012> <concept> <concept_id>10003752.10003790.10002990</concept_id> <concept_desc>Theory of computation~Logic and verification</concept_desc> <concept_significance>500</concept_significance> </concept> <concept> <concept_id>10003752.10003790.10003793</concept_id> <concept_desc>Theory of computation~Modal and temporal logics</concept_desc> <concept_significance>500</concept_significance> </concept> <concept> <concept_id>10011007.10010940.10010971.10011682</concept_id> <concept_desc>Software and its engineering~Abstraction, modeling and modularity</concept_desc> <concept_significance>300</concept_significance> </concept> </ccs2012> \end{CCSXML} \ccsdesc[500]{Theory of computation~Logic and verification} \ccsdesc[500]{Theory of computation~Modal and temporal logics} \ccsdesc[300]{Software and its engineering~Abstraction, modeling and modularity} \keywords{Runtime verification, online monitoring, spatio-temporal logic, imprecise signal, signal temporal logic} \maketitle \section{Introduction}\label{sec:intro} Complex emergent spatio-temporal patterns such as traffic congestion or travelling waves are central to the understanding of networked dynamical systems in which locally interacting entities operate at different temporal and spatial scales. We can observe these patterns both in biological systems~\cite{GrosuSCWEB09,BartocciBMNS15,Bartocci2016TNCS} and in human-engineered artefacts such as Collective Adaptive Systems~\cite{LoretiH16} (CAS) and Cyber-Physical Systems~\cite{RatasichKGGSB19} (CPS).
CAS and CPS consist of a large number of heterogeneous (physical and computational, in the case of CPS) and spatially distributed entities featuring complex interactions among themselves, with humans and with other systems. Examples include bike-sharing systems, the Internet of Things, contact-tracing devices for containing epidemic spread, vehicular networks and smart cities. Many of these systems are also safety-critical~\cite{RatasichKGGSB19}, meaning that a failure could result in loss of life or in catastrophic consequences for the environment. The complex interaction with the physical environment in which these systems are embedded prevents them from being exhaustively verified at design time. A common alternative is testing~\cite{chapter5}: traces generated during their execution/simulation are stored and monitored offline with respect to a formal specification used as an oracle. However, testing may provide limited coverage and does not take into account physical failures that may happen during the execution. Online monitoring is instead preferable when the monitoring verdict requires the immediate action of a policy maker during the system's execution, or when generating and storing the system's execution traces to be monitored offline is computationally expensive~\cite{SelyuninJNRHBNG17}. \begin{figure*}\caption{Locations of the $NO_2$ monitoring stations in the Italian region of Lombardy, and the values reported in the town of Rezzato for the first quarter of 2021.}\label{fig:no2} \end{figure*} \noindent \emph{\bf Motivating example.} As a case study we consider a sensor network for environmental monitoring. Air pollution is a primary cause of the loss of biodiversity, of reduced agricultural productivity and of many diseases affecting the human respiratory and cardiovascular systems. Policy makers constantly monitor the amount of $NO_2$, an air pollutant that forms from the combustion of fossil fuels, in order to activate special policies that mitigate its release when levels grow enough to raise public concern.
For example, in the Italian region of Lombardy, there are over 80 stations distributed throughout the region monitoring the level of $NO_2$ in the air. Figure~\ref{fig:no2} shows the location of these stations, together with the values reported in the town of Rezzato for the first quarter of 2021. The measurements are taken regularly every hour, but sometimes the sensors fail to communicate them, because of meteorological issues or temporary faults, and the actual values are provided only later. Furthermore, the values measured by the sensors are noisy and carry a certain degree of uncertainty. Another way to deal with missing values could be to check that nearby locations (e.g., within 10 km) do not register alarming values of particles in the air, and this is possible only if we consider spatio-temporal properties. In this paper \emph{we address the problem of online monitoring of spatio-temporal properties over systems from which we can observe noisy signals with possibly missing data or out-of-order samples.} \noindent {\bf Spatio-temporal monitoring.} In the last decade there has been a great effort to develop logic-based specification languages and monitoring frameworks for spatio-temporal properties. Examples include SpaTeL~\cite{spatel}, SSTL~\cite{NenziBCLM15}, SaSTL~\cite{MaBLS020,sastl2021} and STREL~\cite{Bartocci17memocode}. For more details on the underlying spatial models and the language expressiveness we refer the reader to~\cite{NenziBBLV20}. In this paper, we consider STREL~\cite{Bartocci17memocode}, a spatio-temporal logic operating over a dynamic weighted graph representing the spatial arrangement of spatially distributed entities. Each node generates a set of mixed-analog signals describing the evolution over time of computational and physical quantities characterising the node's behavior.
STREL extends Signal Temporal Logic (STL)~\cite{MalerN13} with the \emph{reach} and \emph{escape} operators, which generalize the \emph{somewhere}, \emph{everywhere} and \emph{surrounded} spatial modalities and simplify the monitoring, which can be computed locally with respect to each node. However, the original work on STREL~\cite{Bartocci17memocode,BartocciBLNS20} provides only an offline monitoring algorithm. In contrast, we present here the first online monitoring algorithm for STREL and, more generally, for spatio-temporal monitoring. \noindent {\bf Online Monitoring.} To the best of our knowledge, the only online monitoring techniques~\cite{online-fainekos,Deshmukh2017, JaksicBGKNN15,JaksicBGN18,rtamt,Mamouras2020,MamourasCW21} available in the literature handle temporal specification languages such as STL~\cite{MalerN13} and Metric Temporal Logic~\cite{Koymans90} (MTL). One of the main challenges for online monitoring is how and when to decide the satisfaction/violation of a formula with temporal operators reasoning about future, not yet observed, events. In~\cite{online-fainekos} the authors provide, for the first time, a dynamic programming algorithm for the online monitoring of the robustness metric of MTL formulas with bounded future and unbounded past. The past part of the formula is used to reason about the robustness of the actual system observations, while for evaluating the future part they use a predictor to estimate the likely robustness. However, the value forecast by the predictor needs to be trusted, because it is not the real value that the system will provide. Other approaches~\cite{JaksicBGKNN15,JaksicBGN18,rtamt} address the problem of deciding about the future using a technique called \emph{pastification}, which rewrites the future operators as past ones and delays the verdict. Similarly, the works of Mamouras et al.~\cite{Mamouras2020,MamourasCW21} delay the output verdict until some part of the future input has been seen.
In~\cite{Deshmukh2017} the authors present an efficient online algorithm to compute the robust interval semantics for bounded-horizon formulas. All these approaches assume that the data and the events to be observed arrive synchronously and in order. \noindent{\bf Our contribution.} In contrast to these works, we present a novel approach for the online monitoring of imprecise spatio-temporal signals (the signals themselves are interval-valued, not just the robustness), where the samples can also be processed out of order. The notion of an interval is instrumental when representing partial knowledge about a value that is at least known to lie within some boundaries. This may be due to measurement errors, or to other sources of uncertainty in the process of acquiring and processing the values. We define both a Boolean and a quantitative interval semantics for STREL, and we prove the soundness and correctness of the robust interval semantics. We design and implement, as extensions to the Moonlight tool\footnote{Source code available at: \href{https://github.com/MoonLightSuite/MoonLight}{\texttt{github.com/MoonLightSuite/MoonLight}}}, the first online monitoring algorithm for out-of-order sampled signals, and the first online spatio-temporal monitoring tool. Our experiments also demonstrate competitive performance compared with the state-of-the-art tool \textsc{Breach}~\cite{breach} for the online monitoring of temporal properties over in-order sampled signals. \noindent{\bf Paper organization.} The rest of this paper is structured as follows. We present the relevant aspects of interval algebra and our notion of imprecise signals in Section~\ref{sec:background}. In Section~\ref{sec:logic} we introduce the interval extension of the STREL logic and its main results, while in Section~\ref{sec:monitoring} we present our approach for the online monitoring of imprecise signals.
Lastly, we present a realistic use case in Section~\ref{sec:evaluation} and we share our concluding remarks in Section~\ref{sec:future}. \section{Interval Algebra, Signals and Spatial Model} \label{sec:background} In this section, we define the key elements of interval algebra, signals and spatial models, which will be useful to characterize samples of the kind depicted in Figure~\ref{fig:no2}. \begin{definition}[Intervals] Let $\mathcal{I}(\mathbb{R}^\infty)$ be the set of intervals defined over the set $\mathbb{R}^\infty \equiv \mathbb{R}\cup\{+\infty, -\infty\}$. We call \emph{closed interval} (or simply \emph{interval}) any set $I\subseteq \mathbb{R}^\infty$ such that $I \equiv [a, b] := \{x \in \mathbb{R}^\infty: a \leqslant x \leqslant b;\ a, b \in \mathbb{R}^\infty\}$. For any $I \equiv [a, b] \in \mathcal{I}(\mathbb{R}^\infty)$, we will indicate by $\underline{I} \equiv a$ and $\overline{I} \equiv b$ the extremes of the interval. \end{definition} In addition to the classical notion of interval, it is useful to recall some basic operations that can be performed on intervals.
\begin{definition}[Interval Basic Operations]\label{def:interval-operations} Given $I, I_1, I_2 \in \mathcal{I}(\mathbb{R}^\infty)$ and $c \in \mathbb{R}^\infty$, we define the following interval operators: \begin{align*} c + I &:= [\underline{I}+c, \overline{I}+c] & -I &:= [-\overline{I}, -\underline{I}] \\ I_1 + I_2 &:= [\underline{I_1} + \underline{I_2}, \overline{I_1} + \overline{I_2}] & I_1 - I_2 &:= I_1 + (-I_2) \end{align*} \begin{center} $[\max](I_1, I_2) := [\max(\underline{I_1}, \underline{I_2}), \max(\overline{I_1}, \overline{I_2})]$ \\ $[\min](I_1, I_2) := [\min(\underline{I_1}, \underline{I_2}), \min(\overline{I_1}, \overline{I_2})]$ \\ \end{center} We also consider the extensions of the $[\min]$ and $[\max]$ operators to an arbitrary subset $A \subseteq \mathcal{I}(\mathbb{R}^\infty)$, denoted by $\{\}$ instead of $()$ for function arguments.\\ We call the \emph{interval radius} of $I$ the operator $|I|:= [\min(|\underline{I}|, |\overline{I}|), \max(|\underline{I}|, |\overline{I}|)]$. \end{definition} Interval relations are defined as follows: \begin{definition}[Interval Inequalities] Let $I_1, I_2 \in \mathcal{I}(\mathbb{R}^\infty)$; we say that $I_1 < I_2$ when $\overline{I_1} < \underline{I_2}$. Symmetrically, we say that $I_1 > I_2$ when $\underline{I_1} > \overline{I_2}$\footnote{ We will write $I < c$ (respectively $I > c$) in place of $I < [c, c]$ (resp. $I > [c, c]$), for $c \in \mathbb{R}^\infty$.}.
\end{definition} To measure distances between intervals we consider the \emph{Hausdorff distance}. \begin{definition}[Hausdorff Distance]\label{def:hausdorff} Let $X, Y$ be two non-empty subsets of a metric space $\langle M, d \rangle$; we call (Hausdorff) distance the function $d_H : \mathcal{P}(M) \times \mathcal{P}(M) \rightarrow \mathbb{R}_{\geq 0}$ defined as \begin{equation*} d_H(X, Y) := \max\left\{\,\sup_{x \in X} \inf_{y \in Y} d(x,y),\, \sup_{y \in Y} \inf_{x \in X} d(x,y)\,\right\} \end{equation*} In practice, in our context, we can just consider the metric space defined by the Euclidean distance over the real numbers, and thus $d_H$ reduces to computing $\max(|\underline{I_1} - \underline{I_2}|, |\overline{I_1} - \overline{I_2}|)$ for any two $I_1, I_2 \in \mathcal{I}(\mathbb{R}^\infty)$, with the convention that if both $\underline{I_1}, \underline{I_2}$ (or both $\overline{I_1}, \overline{I_2}$) are infinite, then their Hausdorff distance is $0$. \end{definition} Now we have all the tools to introduce the concept of imprecise signals. \begin{definition}[Imprecise Temporal Signal] Let $\mathbb{T} \equiv [0, \infty]$ be a set representing the time domain, and let $\mathcal{F}(\mathbb{T}, D^n)$, with $D \subseteq \mathcal{I}(\mathbb{R})$ for a fixed $n \in \mathbb{N}$, be the family of functions over Cartesian products of real intervals; we call \emph{imprecise temporal signal} any $\mathbf{\sigma} \in \mathcal{F}(\mathbb{T}, D^n)$, i.e. any function $\mathbf{\sigma}: \mathbb{T} \rightarrow D^n$. \end{definition} It is convenient, in some cases, to slice the signals based on the domain $D$ of interest; for that reason, we recall the concept of (signal) projection.
\begin{definition}[Signal Projection]\label{def:projection} Let $\pi_i:D_1 \times \dots \times D_i \times \dots \times D_n \rightarrow D_i$ be the function that takes the $i$-th projection of the set-theoretic Cartesian product $D^n$; we will indicate by $\pi_i(\mathbf{\sigma}(t))$ the projection of $\mathbf{\sigma}(t)$ to the $i$-th 1-dimensional signal $s_i:\mathbb{T} \rightarrow D_i$. \end{definition} To represent a set of signals distributed in space, we introduce the following definition. \begin{definition}[Imprecise Spatio-Temporal Signal]\label{def:signal} Let $\mathcal{F}(\mathbb{L}, \mathbb{T}, D^n)$ be the family of functions of space and time over real intervals, with $\mathbb{L}$ a set of locations; we call \emph{imprecise spatio-temporal signal} -- or just \emph{signal} when there is no risk of ambiguity -- any $s \in \mathcal{F}(\mathbb{L}, \mathbb{T}, D^n)$, i.e. any function $ \mathbf{s}: \mathbb{L} \times \mathbb{T} \rightarrow D^n$. \end{definition} Consider the pollution example, where $\mathbb{L}$ is the set of stations, $\mathbb{T}=[0,10]$ is the time domain corresponding to an interval of 10 days, and $D$ is the possible range for nitrogen-dioxide ($NO_2$) values in the air; then the spatio-temporal signal $\mathbf{s}: \mathbb{L} \times \mathbb{T} \rightarrow D$ returns, at each time and in each location, the $NO_2$ value $\mathbf{s}(\ell, t)$. We can naturally describe the distance between spatio-temporal signals by considering the Hausdorff distance of Definition~\ref{def:hausdorff} over all locations of the space and all time instants.
\begin{definition}[Spatio-Temporal Signal Distance]\label{def:signal-distance} Let $\mathbf{s_1}, \mathbf{s_2} \in \mathcal{F}(\mathbb{L}, \mathbb{T}, D^n)$; we call \emph{signal distance} the largest Hausdorff distance over space and time, defined as: \begin{equation*} ||\mathbf{s_1}-\mathbf{s_2}||_\infty := \max\limits_{i \leq n}\max\limits_{\ell \in \mathbb{L}}\max\limits_{t \in \mathbb{T}}\{d_H(\pi_i(\mathbf{s_1}(\ell, t)), \pi_i(\mathbf{s_2}(\ell, t)))\} \end{equation*} \end{definition} To describe the interplay of signals in different locations, we need to encompass the information related to the spatial distribution of the locations. \begin{definition}[Spatial model] We call \emph{spatial model} the tuple $\mathcal{S} = \langle \mathbb{L}, W \rangle$ \footnote{We focus on real-valued positive labels, to convey the intuitive meaning of distance between two locations. For alternative definitions of $W$, the interested reader might refer to~\cite{Bartocci17memocode}.}, where $\mathbb{L}$ is a set of locations and $W \subseteq \mathbb{L} \times \mathbb{R}_{\geq 0}^{\infty} \times \mathbb{L}$ is a \emph{proximity function} associating at most one label $w \in \mathbb{R}_{\geq 0}^{\infty}$ to each distinct pair $\ell_1, \ell_2 \in \mathbb{L}$. \end{definition} An obvious spatial model for the region of Figure~\ref{fig:no2} is a graph where every location is connected to all the others, and the proximity function is defined by labels corresponding to the minimal aerial distance between each pair of locations. Finally, to consider distances over paths of locations, we introduce the notion of \emph{routes} over the spatial model. \begin{definition}[Routes] A route $\tau$ on $\mathcal{S}$ is a (potentially infinite) sequence $\ell_0, \ell_1, \dots, \ell_k, \dots$, such that for any $\ell_i,\ell_{i+1} \in \tau$, there is a label $(\ell_i, w, \ell_{i+1}) \in W$. We indicate by $\Lambda(\ell)$ the set of routes on $\mathcal{S}$ starting at $\ell$.
Moreover, we will use $\tau[i]$ to denote the $i$-th node of the route, $\tau[i\dots]$ to denote the subroute starting at the $i$-th node, and $\tau(\ell_i)$ to denote the first occurrence of $\ell_i$ in $\tau$. Lastly, we will indicate by $\ell_1<\tau(\ell_2)$ the fact that $\ell_1$ precedes $\ell_2$ in the route $\tau$. \end{definition} Routes have the same intuitive meaning as they have in the physical world, and, similarly to the real world, we can define the concept of route (or travel) distance as the aggregated sum of all the labels traversed by the route. \begin{definition}[Route Distance] For a given $\tau$ on $\mathcal{S}$, the distance $d_\tau[i]$ is: \begin{equation*} d_\tau[i] = \begin{cases} 0, & \mbox{if } i = 0 \\ w + d_{\tau[1\dots]}[i - 1], & \mbox{if } i > 0 \mbox{ and } (\tau[0], w, \tau[1]) \in W \end{cases} \end{equation*} \end{definition} With a slight abuse of notation, we write $d_\tau[\ell]$ for $d_\tau[i]$ when $\tau[i]$ is the first occurrence of $\ell$ in $\tau$. Lastly, routes allow us to conveniently define the distance between any two locations $\ell_1, \ell_2 \in \mathbb{L}$, whatever spatial model is considered. In fact, from the location $\ell_1$ to $\ell_2$, one can consider the minimal distance among all the routes starting at $\ell_1$ and ending in $\ell_2$: \begin{equation*} d_\mathcal{S}[\ell_1, \ell_2] = \min\limits_{\tau \in \Lambda(\ell_1)}\{d_\tau[\ell_2]\} \end{equation*} \section{STREL with Interval Semantics}\label{sec:logic} In this section we present an interval semantics that allows for a conservative analysis considering both the minimum and the maximum values of the intervals. This way, a wide range of usage scenarios fits into this specification language, spanning from traditional offline monitoring of a given specification over imprecise signals to online monitoring with out-of-order updates. All the proofs of theorems and lemmas are reported in the appendix.
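Before moving to the logic, we note that the interval operations of Definition~\ref{def:interval-operations} and the Hausdorff distance of Definition~\ref{def:hausdorff} admit a direct implementation. The following Python sketch is our own illustration (class and function names are not part of the Moonlight tool), using IEEE-754 infinities to represent $\pm\infty$:

```python
import math

class Interval:
    """Closed interval [lo, hi] over the extended reals."""
    def __init__(self, lo, hi):
        assert lo <= hi
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        # I1 + I2 := [lo1 + lo2, hi1 + hi2]
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __neg__(self):
        # -I := [-hi, -lo]
        return Interval(-self.hi, -self.lo)

def imax(a, b):
    # [max](I1, I2) := [max(lo1, lo2), max(hi1, hi2)]
    return Interval(max(a.lo, b.lo), max(a.hi, b.hi))

def imin(a, b):
    return Interval(min(a.lo, b.lo), min(a.hi, b.hi))

def radius(a):
    # interval radius |I| := [min(|lo|, |hi|), max(|lo|, |hi|)]
    return Interval(min(abs(a.lo), abs(a.hi)), max(abs(a.lo), abs(a.hi)))

def lt(a, b):
    # interval inequality: I1 < I2 iff hi1 < lo2
    return a.hi < b.lo

def hausdorff(a, b):
    # reduced Hausdorff distance, with the convention that two
    # equal infinite extremes are at distance 0
    def d(x, y):
        return 0.0 if x == y else abs(x - y)
    return max(d(a.lo, b.lo), d(a.hi, b.hi))
```

For instance, $[1,3]+[2,5]=[3,8]$, and the Hausdorff distance between $[0,+\infty]$ and $[1,+\infty]$ is $1$, following the convention on equal infinite extremes.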
\begin{definition}[STREL Syntax] We consider logical formulae belonging to the language $\mathcal{L}$ generated by the following BNF grammar: \begin{equation*} \varphi := \top~|~\bot~|~p~\circ~c~|~\neg~\varphi~|~\varphi~\lor~\varphi~|~\varphi~\until{I}~\varphi ~|~\varphi~\reach{\leq d}~\varphi~|~\escape{\geq d}~\varphi \end{equation*} where $\circ \in \{>, <\}$, $c \in \mathbb{R}$, and $p$ is associated with a projection function $\pi$ of Definition~\ref{def:projection}, i.e. $p~\circ~c$ expresses inequalities on the variables of the system. $\until{I}$ is the \emph{until} temporal operator, with $I$ a real interval, while $\reach{\leq d}{}$ and $\escape{\geq d}{}$ are the spatial operators \emph{reach} and \emph{escape}, with $d \in \mathbb{R}_{>0}$. In addition, we have the derived Boolean operators \emph{and} ($\land$) and \emph{implies} ($\to$), the derived temporal operators \emph{eventually} ($\ev{I}$) and \emph{globally} ($\glob{I}$), and the derived spatial operators \emph{somewhere} ($\somewhere{\leq d}{}$) and \emph{everywhere} ($\everywhere{\leq d}{}$). \end{definition} Considering again the air pollution case study, current regulation in Lombardy requires taking action when the level of nitrogen dioxide ($NO_2$) exceeds the threshold of $400\mu g/m^3$ for more than three hours. Let $NO_2 < 400$ denote the atomic proposition stating that the level of nitrogen dioxide is lower than $400\mu g/m^3$. A requirement like the previous one can be expressed as in~(\ref{p1}): \begin{equation}\label{p1} \ev{[0, 3hours]} NO_2 < 400 \end{equation} Temporal operators like $\ev{}$ specify properties on the dynamic evolution of the system. In fact, when~(\ref{p1}) is violated, the alerting procedure could be triggered to inform the citizens about the danger. However, since it is known that noise and local faults frequently happen, one could consider alerting the population also when the close neighbourhood (e.g. within 10 km) exhibits a similar phenomenon.
For this aim, a property like~(\ref{p2}) can be monitored. \begin{equation}\label{p2} \somewhere{<10km}{} NO_2 < 400 \end{equation} Spatial operators like $\somewhere{}{}$ instead specify properties related to the spatial configuration; in this context, the exact meaning is that at least one location within a range of less than 10 km must have a level of nitrogen dioxide lower than $400\mu g/m^3$. We will see other examples of the logic language in the next sections. For a more detailed description of the logic, we refer the reader to~\cite{Bartocci17memocode}. We now present the Boolean and quantitative interval semantics for STREL. \begin{definition}[STREL Boolean Semantics]\label{def:boolean-semantics} Let $\chi:\mathcal{F}(\mathbb{L}, \mathbb{T}, D^n)$ $ \times \mathbb{L} \times \mathbb{T} \times \mathcal{L} \rightarrow \{-1, 0,1\}$ be a function defined as follows: \begin{itemize}[leftmargin=-.03in] \setlength\itemsep{.5em} \item[] $\chi(\mathbf{s}, \ell, t, \top) = 1$ \item[] $\chi(\mathbf{s}, \ell, t, \bot) = -1$ \item[] $\chi(\mathbf{s}, \ell, t, p \circ c) = \begin{cases} 1, & \mbox{if } \pi_p(\mathbf{s}(\ell,t)) \circ c \\ -1, & \mbox{if } \pi_p(\mathbf{s}(\ell,t)) \circ^{-1} c \\ 0, & \mbox{otherwise}\footnotemark \end{cases}$ \item[] $\chi(\mathbf{s}, \ell, t, \lnot \varphi) = - \chi(\mathbf{s}, \ell, t, \varphi)$ \item[] $\chi(\mathbf{s}, \ell, t, \varphi_1 \lor \varphi_2) = \max(\chi(\mathbf{s}, \ell, t, \varphi_1), \chi(\mathbf{s}, \ell, t, \varphi_2))$ \item[] $\chi(\mathbf{s}, \ell, t, \varphi_1 \until{I} \varphi_2) = \max\limits_{t'\in t + I}\{\min(\chi(\mathbf{s}, \ell, t', \varphi_2), \min\limits_{t''\in [t, t']}\{\chi(\mathbf{s}, \ell, t'', \varphi_1)\})\}$ \item[] $\chi(\mathbf{s}, \ell, t, \varphi_1 \reach{\leq d}{} \varphi_2) = \max\limits_{\tau \in \Lambda(\ell)}\max\limits_{\ell' \in \tau : d_\tau[\ell'] \leq d}\\[3pt] \null\qquad\qquad\qquad\qquad\qquad~~\{\min(\chi(\mathbf{s}, \ell', t, \varphi_2),\min\limits_{\ell'' <
\tau(\ell')}\{\chi(\mathbf{s}, \ell'', t, \varphi_1)\})\}$ \item[] $\chi(\mathbf{s}, \ell, t, \escape{\geq d}{} \varphi) = \max\limits_{\tau \in \Lambda(\ell)} \max\limits_{\ell' \in \tau : d_\mathcal{S}[\ell,\ell'] \geq d} \min\limits_{\ell'' < \tau(\ell')}\{\chi(\mathbf{s}, \ell'', t, \varphi)\}$ \end{itemize} \end{definition} \footnotetext{Note that `$>$' and `$<$' are used in this context to represent interval inequalities, which do not define a total ordering.} This is a three-valued semantics: $\chi(\mathbf{s}, \ell, t, \varphi)$ is equal to $1$ if the interval signal $\mathbf{s}$ satisfies $\varphi$ at location $\ell$ and time $t$, $-1$ if the formula is not satisfied, and $0$ if no verdict can be given. The semantics is directly derived from the standard Boolean semantics and the interval algebra described in the previous section. For atomic propositions, $\chi(\mathbf{s}, \ell, t, p \circ c)=1$ iff the inequality $\pi_p(\mathbf{s}(\ell,t))\circ c$ is true. This means, e.g., that $\chi(\mathbf{s}, \ell, t, NO_2>400)=1$ iff $\underline{NO_2}$, the left extreme of the projected signal $NO_2$, is greater than 400; $\chi(\mathbf{s}, \ell, t, NO_2>400)=-1$ if the right extreme is less than 400; and $\chi(\mathbf{s}, \ell, t, NO_2>400)=0$ otherwise, i.e. if $400$ belongs to the interval value of $NO_2$. Similar calculations hold for the other combinations of $\circ$ and $c$. The three-valued Boolean semantics can be sufficient in applications where the interest is only in whether or not the specification is satisfied. However, in many complex cases, one might be interested in getting some insight into the degree to which a property is satisfied or violated. In the following, we introduce an extension of the quantitative semantics that provides numerical bounds on the robustness degree of a specification.
\begin{definition}[STREL Robust Interval Semantics]\label{def:interval-semantics} Let $\rho:\mathcal{F}(\mathbb{L}, \mathbb{T}, D^n) \times \mathbb{L} \times \mathbb{T} \times \mathcal{L} \rightarrow \mathcal{I}(\mathbb{R}^\infty)$ be the function mapping signals, locations, time instants, and formulae to intervals, defined as follows: \begin{itemize}[leftmargin=-.1in] \setlength\itemsep{.8em} \item[] $\rho(\mathbf{s}, \ell, t, \top) = [+\infty, +\infty]$ \item[] $\rho(\mathbf{s}, \ell, t, \bot) = [-\infty, -\infty]$ \item[] $\rho(\mathbf{s}, \ell, t, p \circ c) = \begin{cases} \pi_p(\mathbf{s}(\ell,t)) - c, & \mbox{if } \circ \mbox{ is } ` > \text{'}\\ c - \pi_p(\mathbf{s}(\ell,t)), & \mbox{if } \circ \mbox{ is } ` < \text{'} \end{cases}$ \item[] $\rho(\mathbf{s}, \ell, t, \lnot \varphi) = - \rho(\mathbf{s}, \ell, t, \varphi)$ \item[] $\rho(\mathbf{s}, \ell, t, \varphi_1 \lor \varphi_2) = [\max](\rho(\mathbf{s}, \ell, t, \varphi_1), \rho(\mathbf{s}, \ell, t, \varphi_2))$ \item[] $\rho(\mathbf{s}, \ell, t, \varphi_1 \until{I} \varphi_2) = \left[\max\limits_{t'\in t + I}\right] \\[4pt] \null\qquad\qquad~~\left\{[\min]\left(\rho(\mathbf{s}, \ell, t', \varphi_2), \left[\min\limits_{t''\in [t, t']}\right]\{\rho(\mathbf{s}, \ell, t'', \varphi_1)\}\right)\right\}$ \item[] $\rho(\mathbf{s}, \ell, t, \varphi_1 \reach{\leq d}{} \varphi_2) = \left[\max\limits_{\tau \in \Lambda(\ell)}\right] \left[\max\limits_{\ell' \in \tau : d_\tau[\ell'] \leq d}\right]\\[4pt] \null\qquad\qquad~~\left\{[\min](\rho(\mathbf{s}, \ell', t, \varphi_2),\left[\min\limits_{\ell'' < \tau(\ell')}\right]\{\rho(\mathbf{s}, \ell'', t, \varphi_1)\})\right\}$ \item[] $\rho(\mathbf{s}, \ell, t, \escape{\geq d}{} \varphi) = \left[\max\limits_{\tau \in \Lambda(\ell)}\right] \left[\max\limits_{\ell' \in \tau : d_\mathcal{S}[\ell, \ell'] \geq d}\right]\left[\min\limits_{\ell'' < \tau(\ell')}\right] \\[4pt] \null\qquad\qquad\qquad\qquad~~\{\rho(\mathbf{s}, \ell'', t, \varphi)\}$ \end{itemize} \end{definition} We will indicate with
$\rho^{\varphi}_{s}: \mathbb{L} \times \mathbb{T} \rightarrow \mathcal{I}(\mathbb{R}^\infty)$ the \emph{robustness signal}, i.e. the signal generated by the partial application of the $\rho$ function to a given formula $\varphi$ and a given signal $\mathbf{s}$, so that $\rho^{\varphi}_{s}(\ell, t) \equiv \rho(\mathbf{s}, \ell, t, \varphi)$. Note that, without the interval semantics we have defined, missing values would have to be substituted by values that approximate the actual ones (e.g. by linear interpolation), and the resulting satisfaction or robustness of a given property at that specific time point would therefore only be an approximation. Conversely, by exploiting the interval semantics, one obtains upper/lower bounds at those points, which can be sufficient in real-world applications. \begin{theorem}[Soundness of Robust Interval Semantics]\label{th:soundness} The Robust Interval Semantics of Definition~\ref{def:interval-semantics} is sound w.r.t.\ the Boolean Semantics of Definition~\ref{def:boolean-semantics}, i.e. for any $\mathbf{s} \in \mathcal{F}(\mathbb{L}, \mathbb{T}, D^n)$, $\ell \in \mathbb{L}$, $t \in \mathbb{T}$, $\varphi \in \mathcal{L}$: \begin{itemize} \item[] $ \mbox{if\quad} \rho(\mathbf{s},\ell, t, \varphi) > 0 \mbox{\quad then \quad} \chi(\mathbf{s}, \ell, t, \varphi) = 1 $ \item[] $ \mbox{if\quad} \rho(\mathbf{s},\ell, t, \varphi) < 0 \mbox{\quad then \quad} \chi(\mathbf{s}, \ell, t, \varphi) = -1$ \item[] $ \mbox{if\quad} 0 \in \rho(\mathbf{s},\ell, t, \varphi) \mbox{\quad then \quad} \chi(\mathbf{s}, \ell, t, \varphi) = 0$ \end{itemize} \end{theorem} \begin{proof} See the extended version of this article for the proof. \end{proof} To prove the correctness of the interval semantics over imprecise signals, we introduce the following lemma: \begin{lemma}[Metric Lemma]\label{th:metric-lemma} Let $\mathbf{s_1}, \mathbf{s_2} \in \mathcal{F}(\mathbb{L}, \mathbb{T}, D^n)$.
For any $t \in \mathbb{T}$, for any $\ell \in \mathbb{L}$, for any $\varphi \in \mathcal{L}$, for any $\delta > 0$, we have: \begin{equation*} \mbox{if\quad}||\mathbf{s_1}-\mathbf{s_2}||_\infty < \delta \mbox{\quad then \quad} ||\rho_{s_1}^{\varphi}-\rho_{s_2}^{\varphi}||_\infty < \delta \end{equation*} \end{lemma} \begin{proof} See the extended version of this article for the proof. \end{proof} \begin{theorem}[Correctness of Robust Interval Semantics]\label{th:correctness} The Robust Interval Semantics of Definition~\ref{def:interval-semantics} is \emph{correct} w.r.t.\ the Boolean Semantics of Definition~\ref{def:boolean-semantics}, i.e. for any $\mathbf{s_1}, \mathbf{s_2} \in \mathcal{F}(\mathbb{L}, \mathbb{T}, D^n)$, $\ell \in \mathbb{L}$, $t \in \mathbb{T}$, $\varphi \in \mathcal{L}$: \begin{equation*} \mbox{if\hspace{0.5em}} ||\mathbf{s_1}-\mathbf{s_2}||_\infty < | \rho(\mathbf{s}_1,\ell, t,\varphi)| \mbox{\hspace{0.5em}then\hspace{0.5em}} \chi(\mathbf{s}_1, \ell, t,\varphi) = \chi(\mathbf{s}_2, \ell, t,\varphi) \end{equation*} for all $i \leq n$, where $|\cdot|$ is the interval radius of Definition~\ref{def:interval-operations}. \end{theorem} \begin{proof} See the extended version of this article for the proof. \end{proof} \section{Online Monitoring}\label{sec:monitoring} In this section, a novel \emph{online (out-of-order) monitoring algorithm} for STREL is presented. Differently from the standard \emph{offline} approach, where all the data are available at the beginning of the execution, \emph{online} monitoring is performed incrementally, as new pieces of data become available. In this case, the uncertainty related to the absence of information must be taken into account. To that aim, the machinery of imprecise signals can be exploited to represent this uncertainty: the result of the monitoring process, whether it is a satisfaction or a robustness signal, is refined as soon as new updates of the input arrive.
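To make the two semantics concrete, the following Python sketch (our own illustration, not the Moonlight implementation) computes the robust interval semantics for atomic propositions and Boolean connectives over interval-valued samples, and derives the three-valued verdict from the robustness interval, in the spirit of Theorem~\ref{th:soundness}:

```python
def rho_atom(value, c, op):
    """Robust interval semantics for an atom p o c.

    value = (lo, hi) is the interval value of the projected signal
    at the considered location and time instant."""
    lo, hi = value
    if op == ">":          # rho = pi_p(s) - c
        return (lo - c, hi - c)
    elif op == "<":        # rho = c - pi_p(s)
        return (c - hi, c - lo)
    raise ValueError(op)

def rho_not(r):
    # rho(not phi) = -rho(phi): negate and swap the extremes
    lo, hi = r
    return (-hi, -lo)

def rho_or(r1, r2):
    # rho(phi1 or phi2) = [max] of the two robustness intervals
    return (max(r1[0], r2[0]), max(r1[1], r2[1]))

def chi_from_rho(r):
    """Three-valued verdict derived from the robustness interval:
    1 if rho > 0, -1 if rho < 0, 0 if 0 is in rho (no verdict yet)."""
    lo, hi = r
    if lo > 0:
        return 1
    if hi < 0:
        return -1
    return 0
```

E.g., for an $NO_2$ sample known to lie in $[380, 420]$, the robustness of $NO_2 > 400$ is the interval $[-20, 20]$, which contains $0$: no Boolean verdict can be given yet.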
The semantics of STREL is defined for arbitrary signals, but the algorithms, for computational reasons, are provided for piecewise-constant ones, along the lines of~\cite{Deshmukh2017,Bartocci17memocode}. This class of signals is convenient, and frequently chosen as the class of reference, for a number of reasons: (i) it naturally describes digital signals, (ii) it can be stored in memory very efficiently, and processed fast enough to be considered for real-time applications, (iii) it allows expressing the vast majority of real-valued signals of practical use with a limited loss of information. Since the considered signals are Lipschitz-continuous (we consider only inequalities on the variables of the system), we can always bound the resulting error in terms of the minimum time step and the maximum of their individual Lipschitz constants. An \emph{imprecise piecewise-constant signal} $\sigma: \mathbb{T} \rightarrow \mathcal{I}(\mathbb{R}^\infty)$ can be characterized in the following way: \[ \sigma(t) = \begin{cases} I_{0}, & \textrm{for}~t_0 \leq t < t_{1}\\ \vdots \\ I_{i}, & \textrm{for}~t_i \leq t < t_{i+1}\\ \vdots \\ I_{n}, & \textrm{for}~t_{n} \leq t < \infty \end{cases} \] and graphically represented as in Figure~\ref{fig:signal-appearance}. Note that, when monitoring real-time applications, the last part of the signal will frequently be characterized by the widest possible interval, as this denotes the fact that the knowledge collected so far is insufficient to provide any insight about the monitored specification for future values of the signal. Similar infinite intervals can be considered for missing values. \begin{figure}\caption{Graphical representation of an imprecise piecewise-constant signal.}\label{fig:signal-appearance} \end{figure} We consider \emph{space-synchronized} (s.s.) signals, i.e. signals defined on the same time intervals for any location of the space model. More precisely, a p.c. s.s.
signal $s: \mathbb{L} \times \mathbb{T} \rightarrow D^n$ is a signal that can be represented as a sequence of pairs $\{(t_i, \mathbf{V}_i)\}_{i \in \mathbb{N}}$, where each pair $(t_i, \mathbf{V}_i)$ of the sequence represents a piece of the signal, mapping any time instant between $t_i$ and $t_{i+1}$ in $\mathbb{T}$ to the $|\mathbb{L}| \times n$ matrix $\mathbf{V}_i$ that represents the values of the $n$ dimensions of the signal at each location $\ell$ in $\mathbb{L}$. The space-synchronization restriction might appear to be a severe limitation, but this shows one of the conceptual differences between online and offline monitoring: in an offline setting, the space-synchronization hypothesis would likely have detrimental effects on performance, as it would force all the processing to happen at a temporal granularity that is the union of the temporal granularities of the signals at the different locations. In an online setting, on the other hand, the temporal granularity is determined by the time when new information is available, and the space-synchronization hypothesis makes it possible to exploit in future work the Single-Instruction Multiple-Data (SIMD) capabilities of modern processors (see~\cite{Kusswurm2020, hayes2016}), resulting in execution times that are virtually independent of the number of locations, when appropriate hardware is available. In this context, we call \emph{signal update} $\mathbf{u}$ the triplet $(t_a, t_b, \mathbf{V})$, representing a mapping to the value matrix $\mathbf{V}$ for any time instant between $t_a$ (included) and $t_b$ (excluded). Signal updates can be seen as special kinds of signals that we use to represent upcoming partial information from the online behavior of the monitored system. In the context of our analysis, we always assume updates to provide truthful information (the case of hard faults, i.e.
where updates provide wrong information, will be explored in future work), and, for that reason, we can always assume updates to be well-formed, meaning that the interval $\mathbf{V}$ they provide is always included in the previous interval of the signal we stored for that time and location. To express the \emph{online} nature of the computation we want to pursue, we need some way of describing the incremental evaluation of new information. \begin{definition}[Signal refinement] Let $\mathbf{s_1}, \mathbf{s_2} \in \mathcal{F}(\mathbb{L}, \mathbb{T}, D^n)$, we say that $\mathbf{s_1}$ \textit{is refined by} $\mathbf{s_2}$, and we write $\mathbf{s_1} \succ \mathbf{s_2}$, iff for any $\ell \in \mathbb{L}, t \in \mathbb{T}, i \leq n$, $\pi_i(\mathbf{s_2}(\ell, t)) \subseteq \pi_i(\mathbf{s_1}(\ell, t))$, and there is some $\ell' \in \mathbb{L}, t' \in \mathbb{T}, i' \leq n$, such that $\pi_{i'}(\mathbf{s_2}(\ell', t')) \subset \pi_{i'}(\mathbf{s_1}(\ell', t'))$, i.e. each interval of the co-domain of the signal $\mathbf{s_2}$ is contained in the corresponding interval of the signal $\mathbf{s_1}$, and some of them are strictly contained. \end{definition} The \textit{refinement} relation expresses the fact that $\mathbf{s_1}$ and $\mathbf{s_2}$ represent the same information, except that $\mathbf{s_2}$ has a smaller degree of uncertainty. By the notions of signal update and signal refinement, we can easily represent the online evolution of a signal as a chain of signal refinements $\mathbf{s_0} \succ \dots \succ \mathbf{s}_j \succ \dots $, where the signal $\mathbf{s}_{j+1}$ at the step $j+1$ can be computed from $\mathbf{s}_{j}$ and update $\mathbf{u}_{j}$ like in Algorithm~\ref{alg:refinement}. 
\begin{algorithm} \caption{Signal Refinement} \label{alg:refinement} \begin{algorithmic}[1] \Procedure{refine}{\scalebox{0.9}{$\mathbf{s}:\{(t_0,\mathbf{V}_0),\dots,(t_N,\mathbf{V}_N)\}$, $\mathbf{u}:(t_a, t_b, \mathbf{V})$}} \For{$(t_i, \mathbf{V}_i)$ in $\mathbf{s}$} \If{$t_i < t_a <t_{i+1}$ \textbf{and} $\mathbf{V} \subset \mathbf{V}_i$} \State $\mathbf{s}$ := $\mathbf{s} \cup (t_a, \mathbf{V})$ \ElsIf{$t_i = t_a$ \textbf{and} $\mathbf{V} \subset \mathbf{V}_i$} \State $\mathbf{s}$ := $\mathbf{s} \setminus (t_i, \mathbf{V}_i) \cup (t_i, \mathbf{V})$ \ElsIf{$t_a < t_i < t_b$} \State $\mathbf{s}$ := $\mathbf{s} \setminus (t_i, \mathbf{V}_i)$ \EndIf \If{$t_i < t_b < t_{i+1} $} \State $\mathbf{s}$ := $\mathbf{s} \cup (t_b, \mathbf{V}_i)$ \EndIf \EndFor \EndProcedure \end{algorithmic} \end{algorithm} The $\Call{refine}{~}$ procedure takes a signal $\mathbf{s}_j$, represented as a sequence of ordered pairs, and an update as the triplet $(t_a, t_b, \mathbf{V})$. In practice, it removes all the pieces of the signal that start within the interval $[t_a, t_b)$, adds a piece with value $\mathbf{V}$ starting at $t_a$, and, whenever $t_b$ lies strictly between $t_i$ and $t_{i+1}$, restores the previous value $\mathbf{V}_i$ from $t_b$ onward. Clearly, for efficiency reasons, the algorithm can jump to the next pair whenever $t_{i+1} < t_a$, and can terminate as soon as $t_i > t_b$. The updated signal at the end of the execution is the next element of the refinement chain, i.e. $\mathbf{s}_{j+1}$. \noindent{\bf The Monitoring Problem.} When monitoring online a given specification $\varphi$, let $\mathbf{s}_0$ be the signal representing the starting information on which the atoms of the formula $\varphi$ are defined. Let also $(\mathbf{u}_j)_{j \in \mathbb{N}}$ denote a (finite or infinite) sequence of signal updates.
The online monitoring problem can be framed as computing the robustness signal $\rho_{\mathbf{s}_{j+1}}^{\varphi}$, given $\rho_{\mathbf{s}_j}^{\varphi}$ (or, alternatively, the satisfaction signal $\chi_{\mathbf{s}_{j+1}}^{\varphi}$ given $\chi_{\mathbf{s}_j}^{\varphi}$), with $\mathbf{s}_{j} \succ \mathbf{s}_{j+1}$, starting from $\mathbf{s}_0$. A naïve implementation of an online monitor could just ignore the information coming from previous monitoring steps and restart the computation over the whole signal each time new information is available. As already noted in~\cite{online-fainekos,Deshmukh2017}, such an implementation would result in huge amounts of wasted resources when monitoring temporal signals, and it would therefore be even more costly when monitoring spatio-temporal signals. To properly scope the effect that an out-of-order update of the input signal generates on the evaluation of a formula, it is convenient to think of updates as starting from the atoms of the monitored formula, and then propagating their effects up through the syntactic tree, generating a ripple effect where the impacted time span widens based on the operators of the subformulae. Figure~\ref{fig:time-ripple} shows the ripple effect resulting from the propagation of update information through the syntactic tree.
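As a companion to Algorithm~\ref{alg:refinement}, the following Python sketch (a simplified, single-location, single-dimension illustration; names are our own) refines a piecewise-constant signal, stored as a time-sorted list of (time, value) pairs whose last piece extends to infinity:

```python
def refine(s, t_a, t_b, v):
    """Refine a piecewise-constant signal with an update (t_a, t_b, v).

    s is a sorted list of (t_i, v_i) pairs; the piece (t_i, v_i) holds
    on [t_i, t_{i+1}), and the last piece extends to infinity.
    Pieces starting inside [t_a, t_b) are removed, the value v is
    installed on [t_a, t_b), and the previous value is restored from
    t_b onward (well-formedness of the update is assumed, as in the
    paper, and the mono-dimensional case is shown for brevity).
    """
    out = []
    restored = None
    for i, (t_i, v_i) in enumerate(s):
        t_next = s[i + 1][0] if i + 1 < len(s) else float("inf")
        if t_i < t_a:
            out.append((t_i, v_i))      # untouched prefix
        elif t_a <= t_i < t_b:
            restored = v_i              # drop, but remember the last value
            continue
        else:
            out.append((t_i, v_i))      # untouched suffix
        if t_i < t_a < t_next:
            restored = v_i              # update starts inside this piece
    out.append((t_a, v))                # install the updated value at t_a
    if restored is not None and all(t != t_b for t, _ in out):
        out.append((t_b, restored))     # restore the previous value at t_b
    out.sort(key=lambda p: p[0])
    return out
```

For example, refining the signal [(0, A), (5, B), (10, C)] with the update (3, 7, X) yields [(0, A), (3, X), (7, B), (10, C)]: the piece starting in [3, 7) is dropped, X holds on [3, 7), and the previous value B is restored from 7 onward.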
From this intuition we can define the \textit{update ripple} function, which scopes the resulting update's time span based on the provided update and on the operator being computed, in the following way: \begin{figure} \caption{The ripple effect generated by the propagation of an update of the signal $x$ through the syntactic tree of the monitored formula.} \label{fig:time-ripple} \end{figure} \begin{definition}[Update Ripple]\label{def:ripple} Let $\mathbf{OP}_I$ denote any temporal operator on the time interval $I$; given a signal update $\mathbf{u} \equiv (t_a, t_b, \mathbf{V})$, we call \textit{update ripple} the function \begin{equation*} \mathfrak{ur}(\mathbf{u}, \varphi) = \begin{cases} [t_a, t_b) - I & \varphi \equiv \mathbf{OP}_I \psi~\mathrm{or}~\varphi \equiv \psi_1 \mathbf{OP}_I \psi_2\\ [t_a, t_b) & \mathrm{otherwise} \end{cases} \end{equation*} \end{definition} Being able to assess the time boundaries of the effect of an update, we can therefore define an online monitoring procedure that updates the robustness (or satisfaction) signal where needed and keeps the valid parts elsewhere. In general, at step $j+1$ the monitoring function can be evaluated as: \begin{equation*} \rho_{j+1}(\mathbf{s}_{j+1}, \ell, t, \varphi) = \begin{cases} \Call{monitor}{\mathbf{u}_{j}, \varphi}[\ell, t] & t \in \mathfrak{ur}(\mathbf{u}_j, \varphi)\\ \rho_{j}(\mathbf{s}_j, \ell, t, \varphi) & otherwise \end{cases} \end{equation*} Note that in all the cases where the updates overlap, they must be processed sequentially in order to generate correct results. \noindent{\bf Monitoring Procedure.} To compute the monitoring result signal online, it is crucial to be able to exploit the knowledge from the past each time new information is available. The most natural way of doing so is to develop a stateful algorithm that stores the relevant information from previous computations. We will represent by $\mathcal{M}$ the persistent memory (i.e.
the state) that we keep throughout the various iterations of the monitoring process, and by $\mathcal{M}[x]$ the access to the item $x$ from memory. The memory $\mathcal{M}$ is organized around a data structure that represents the set of robustness signals of all the subformulae of the monitored formula $\varphi$, as computed in the last iterations. This set can be encoded as an array indexed on some ordering of the subformulae. We represent as $\mathcal{M}[\rho^\psi]$ the access to the respective robustness signal for some formula $\psi$. This data structure is extremely important to maximize the time performance of the monitoring process, as next iterations will re-compute only the differing fragments based on the update ripple. Before starting the monitoring process, the memory is initialized by storing an \textit{undefined signal} for any subformula $\psi$ in the set of the subformulae of the formula $\varphi$ being monitored. We call \emph{undefined signal} the special signal $s: \mathbb{L}\times \mathbb{T} \rightarrow [-\infty, +\infty]$, which represents the total absence of knowledge about the value, at any possible time instant. Once the memory is initialized, the monitoring can start. We assume that the signal is always received as a sequence of signal updates $\mathbf{u}_j$, starting from $j=0$, where the input signal is considered to be undefined. Algorithm~\ref{alg:monitoring} represents the base routine triggered when receiving an update $\mathbf{u}$ of the input signal. The recursive procedure \Call{monitor}{$\mathcal{M}, \varphi, \mathbf{u}$} is responsible for propagating the input signal update to the subformulae and then fetching the corresponding updates of the robustness signal. We indicate by $\langle \mathcal{M}, \{u^\varphi\} \rangle$ the return value of the algorithm, to mean that it returns an updated version of the memory, and a list of robustness updates of the formula $\varphi$ that might either be used by the caller or discarded. 
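Before detailing the operator-specific procedures, note that the update ripple of Definition~\ref{def:ripple} can be computed directly. A minimal Python sketch of our own follows, clipping negative times at $0$ since $\mathbb{T} \equiv [0, \infty]$ (the clipping is our assumption, not stated in the definition):

```python
def update_ripple(t_a, t_b, interval=None):
    """Time span affected by an update [t_a, t_b) once propagated
    through an operator (the update ripple).

    interval = (i_lo, i_hi) when the operator is a temporal one (OP_I);
    None for Boolean and spatial operators, whose ripple is unchanged.
    """
    if interval is None:
        return (t_a, t_b)
    i_lo, i_hi = interval
    # [t_a, t_b) - I is the Minkowski difference [t_a - i_hi, t_b - i_lo),
    # clipped at 0 because the time domain starts at 0
    return (max(0.0, t_a - i_hi), max(0.0, t_b - i_lo))
```

For instance, an update on [5, 7) propagated through an eventually operator with I = [0, 3] widens the affected span to [2, 7).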
The general procedure of Algorithm~\ref{alg:monitoring} calls the specific procedures of Algorithms~\ref{alg:atoms}-\ref{alg:sliding-window} depending on the operators encountered while traversing the tree of the formula. Note that all of the above exploit the \Call{refine}{~} primitive operation from Algorithm~\ref{alg:refinement}. \begin{algorithm} \caption{Online Monitoring Procedure} \label{alg:monitoring} \begin{algorithmic}[1] \Procedure{monitor}{$\mathcal{M}, \varphi, \mathbf{u}$} \Switch{$\varphi$} \Case{$p \circ c$} \State $\langle \mathcal{M}, \{u^\varphi\} \rangle$ := \Call{atom}{$\mathcal{M}, \varphi, \mathbf{u}$} \EndCase \Case{$\lnot \psi$ \textbf{or} $\escape{\geq d}{} \psi$} \State $\langle \mathcal{M}, \{u^\varphi\} \rangle$ := \Call{unary}{$\mathcal{M}, \varphi, \mathbf{u}$} \EndCase \Case{$\psi_1 \lor \psi_2$ \textbf{or} $\psi_1 \reach{\leq d}{} \psi_2$} \State $\langle \mathcal{M}, \{u^\varphi\} \rangle$ := \Call{binary}{$\mathcal{M}, \varphi, \mathbf{u}$} \EndCase \Case{$\ev{I} \psi$} \State $\langle \mathcal{M}, \{u^\varphi\} \rangle$ := \Call{slidingWindow}{$\mathcal{M}, \varphi, \mathbf{u}$} \EndCase \Case{$\psi_1 \until{} \psi_2$} \State $\langle \mathcal{M}, \{u^\varphi\} \rangle$ := \scalebox{0.95}{\Call{unboundedUntil}{$\mathcal{M}, \varphi, \mathbf{u}$}} \EndCase \EndSwitch \State \Return $\langle \mathcal{M}, \{u^\varphi\} \rangle$ \EndProcedure \end{algorithmic} \end{algorithm} \noindent{\bf Online Monitoring Of Non-temporal Operators} When monitoring formulae containing Boolean or Spatial operators, the online evaluation can be performed very efficiently by simply updating the robustness signal at the times corresponding to the received update. Algorithm~\ref{alg:atoms} shows the algorithm for monitoring atomic formulae, Algorithm~\ref{alg:unary} presents the one for monitoring unary operators (i.e. $\lnot$ and $\escape{d}{}$), and Algorithm~\ref{alg:binary} shows the one for binary operators (i.e. $\lor$ and $\reach{d}{}$). 
We represent by \Call{compute\_op}{$\mathbf{OP}, \mathbf{V}$} (and \Call{compute\_op}{$\mathbf{OP}, \mathbf{V}_1, \mathbf{V}_2$}) the execution of the semantic operation corresponding to the operator $\mathbf{OP}$, along the lines of Definitions~\ref{def:boolean-semantics} and~\ref{def:interval-semantics}, i.e. the direct $[\max]/[\min]$ computation for Boolean operators, and the classical reach/escape routines~\cite{Bartocci17memocode} for spatial operators. A key difference from the offline version of the spatial algorithms, however, is that in our online version the \Call{compute\_op}{} implementation has been crafted to enable spatial parallelization: users with appropriate hardware who need to speed up monitoring over large spaces can opt in to a multi-threaded version of the algorithm, where \Call{compute\_op}{} is executed in parallel for every location of the spatial model. \begin{algorithm} \caption{Atomic Formula Monitoring} \begin{algorithmic}[1] \Function{atom}{$\mathcal{M}, p \circ c, \mathbf{u}$} \State $(t_a, t_b, \mathbf{V}) := \mathbf{u}$ \State $\mathbf{u}^{p \circ c} := (t_a, t_b, \Call{compute\_op}{p \circ c, \mathbf{V}})$ \State \Call{refine}{$\mathcal{M}[\rho^{p \circ c}], \{\mathbf{u}^{p \circ c}\}$} \State \Return $\langle \mathcal{M}, \{\mathbf{u}^{p \circ c}\} \rangle$ \EndFunction \end{algorithmic} \label{alg:atoms} \end{algorithm} \begin{algorithm} \caption{Unary Operator Monitoring} \begin{algorithmic}[1] \Function{unary}{$\mathcal{M}, \mathbf{OP}\psi, \mathbf{u}$} \State $\langle \mathcal{M}, \{\mathbf{u}^\psi\} \rangle$ := \Call{monitor}{$\mathcal{M}, \psi, \mathbf{u}$} \State $\{\mathbf{u}^{\mathbf{OP}\psi}\}$ := $\emptyset$ \For{$(t_a, t_b, \mathbf{V}) \in \{\mathbf{u}^\psi\}$} \State \scalebox{0.9}{$\{\mathbf{u}^{\mathbf{OP}\psi}\}$ := $\{\mathbf{u}^{\mathbf{OP}\psi}\}~\cup (t_a, t_b, \Call{compute\_op}{\mathbf{OP}, \mathbf{V}})$} \EndFor \State \Call{refine}{$\mathcal{M}[\rho^{\mathbf{OP}\psi}], \{\mathbf{u}^{\mathbf{OP}\psi}\}$} \State
\Return $\langle \mathcal{M}, \{\mathbf{u}^{\mathbf{OP}\psi}\} \rangle$ \EndFunction \end{algorithmic} \label{alg:unary} \end{algorithm} The algorithm for monitoring binary operators is slightly more complex, as it requires taking into account the corresponding values of the other subformula when an update is processed. In this context, we indicate by \Call{select}{$\mathbf{s}, t_1, t_2$} the restriction of the signal $\mathbf{s}$ to the time interval that starts at $t_1$ and ends at $t_2$ (excluded). \begin{algorithm} \caption{Binary Operator Monitoring} \begin{algorithmic}[1] \Function{binary}{$\mathcal{M}, \psi_1\mathbf{OP}\psi_2, \mathbf{u}$} \State $\{\mathbf{u}^{\psi_1\mathbf{OP}\psi_2}\}$ := $\emptyset$ \State $\langle \mathcal{M}, \{\mathbf{u}^{\psi_1}\} \rangle$ := \Call{monitor}{$\mathcal{M}, \psi_1, \mathbf{u}$} \For{$(t_a, t_b, \mathbf{V}^{\psi_1}) \in \{\mathbf{u}^{\psi_1}\}$} \State $\mathbf{V}^{\psi_2}$ := \Call{select}{$\mathcal{M}[\rho^{\psi_2}], t_a, t_b$} \State $\{\mathbf{u}^{\psi_1\mathbf{OP}\psi_2}\}$ := $\{\mathbf{u}^{\psi_1\mathbf{OP}\psi_2}\}$ $\cup$ \\\qquad\qquad\qquad\qquad\quad \scalebox{0.9}{$(t_a, t_b, \Call{compute\_op}{\mathbf{OP}, \mathbf{V}^{\psi_1}, \mathbf{V}^{\psi_2}})$} \EndFor \State \textit{Repeat lines 3-8 symmetrically for $\psi_2$...} \State \Call{refine}{$\mathcal{M}[\rho^{\psi_1\mathbf{OP}\psi_2}], \{\mathbf{u}^{\psi_1\mathbf{OP}\psi_2}\}$} \State \Return $\langle \mathcal{M}, \{\mathbf{u}^{\psi_1\mathbf{OP}\psi_2}\} \rangle$ \EndFunction \end{algorithmic} \label{alg:binary} \end{algorithm} \noindent{\bf Online Monitoring Of Temporal Operators} To execute temporal operators quickly enough for online use, on the other hand, we need to store some extra information throughout the process.
Firstly, it is useful to recall that, in general, every temporal operator can be decomposed~\cite{EfficientSTL} into the conjunction of two (efficiently computable) operators: \begin{itemize} \item the bounded eventually $\ev{I}$ (or equivalently the bounded globally $\glob{I}$) \item the unbounded until $\until{}$ \end{itemize} We propose here an enhanced algorithm for monitoring bounded globally/eventually operators with out-of-order updates. To this aim, we slightly adapted the classical sliding window algorithm by Lemire~\cite{Lemire2006StreamingMF} so that it is constrained to the span given by the $\mathfrak{ur}$ function and can deal seamlessly with both numerical and interval values. Algorithm~\ref{alg:sliding-window} presents the primary routine of the sliding window for computing updates of bounded unary temporal operators $\mathbf{OP}_I$. The algorithm exploits an additional data structure $\mathtt{W}$, a deque, such that new elements of the window are added at the end and, when the window is saturated (i.e. the elements inside span more time than the definition interval $I$ of the operator), elements are removed from the left and propagated as updates. The logic of the algorithm is essentially the following: for each update received in input, the sliding window is initialized on the fragment of the robustness signal of the subformula defined by the update ripple function $\mathfrak{ur}$. For each piece of the fragment, the sliding window is updated (line~\ref{lst:line:add}), and each time the new piece makes the data in the window exceed the maximum size, the sliding window slides to the right, removing from the window some elements that can be safely propagated as updates (line~\ref{lst:line:slide}); some edge cases are not covered here to keep the algorithm concise (e.g. the case where the current piece is by itself wider than the window size).
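To make the window mechanics concrete, here is a minimal Python sketch (ours, not Moonlight code) of a Lemire-style streaming maximum — the scalar analogue of what $\mathtt{W}$, \Call{add}{} and \Call{slide}{} implement; interval values and the $\mathfrak{ur}$ constraint are omitted for brevity:

```python
from collections import deque

def sliding_max(samples, width):
    """Streaming maximum over a sliding window (t - width, t], after
    Lemire (2006): the deque keeps candidate (time, value) pairs whose
    values are in decreasing order, so the front is always the maximum."""
    window = deque()
    out = []
    for t, v in samples:
        # a new value dominates every older, smaller candidate
        while window and window[-1][1] <= v:
            window.pop()
        window.append((t, v))
        # evict candidates that slid out of the window
        while window[0][0] <= t - width:
            window.popleft()
        out.append((t, window[0][1]))
    return out
```

Each sample is pushed and popped at most once, so the amortized cost per sample is constant, which is what makes the bounded eventually/globally operators cheap to maintain online.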
The precise behaviour of \Call{slide}{} and \Call{add}{}, which control the mutation of $\mathtt{W}$, can be examined in the extended version of the paper or in the Moonlight implementation. \begin{algorithm} \caption{Sliding Window}\label{alg:sliding-window} \begin{algorithmic}[1] \Procedure{slidingWindow}{$\mathcal{M}, \mathbf{OP}_{[t_a, t_b]}\psi, \mathbf{u}$} \State $\langle \mathcal{M}, \{\mathbf{u}^\psi\} \rangle$ := \Call{monitor}{$\mathcal{M}, \psi, \mathbf{u}$} \State $\{\mathbf{u}^{\mathbf{OP}\psi}\}$ := $\emptyset$ \For{$\mathbf{u}^\psi \in \{\mathbf{u}^\psi\}$} \State $(t_s, t_e) := \mathfrak{ur}(\mathbf{u}^\psi, \mathbf{OP}_{[t_a,t_b]}\psi)$ \For{$(t, \mathbf{V}) \in \Call{select}{\mathcal{M}[\rho^{\psi}], t_s, t_e} $} \If{$t > t_b + \mathtt{W.first.start} $} \State $\{\mathbf{u}^{\mathbf{OP}\psi}\} := \{\mathbf{u}^{\mathbf{OP}\psi}\} \cup \Call{Slide}{t - t_b}$ \label{lst:line:slide} \EndIf \State \Call{add}{$t - t_a, \mathbf{V}$} \label{lst:line:add} \EndFor \EndFor \State \Call{refine}{$\mathcal{M}[\rho^{\mathbf{OP}\psi}], \{\mathbf{u}^{\mathbf{OP}\psi}\}$} \State \Return $\langle \mathcal{M}, \{\mathbf{u}^{\mathbf{OP}\psi}\} \rangle$ \EndProcedure \end{algorithmic} \end{algorithm} The second fundamental temporal algorithm is the one computing the unbounded until. Unfortunately, the operator being unbounded, any update might require recomputing, in the worst case, the whole robustness/satisfaction signal. In our implementation, we consider the algorithm in~\cite{Deshmukh2017}. Note that it requires keeping, as secondary data structures, the minimum value of the preceding computations of $\varphi_1$ and the maximum value of the preceding computations of the whole formula.
One last remark about the implementation: while the algorithms have been developed with the goal of enabling out-of-order execution, all of them have also been implemented in an in-order variant, so that users whose use case guarantees in-order updates do not pay the execution-time penalty of the more general algorithm. \section{Experimental Evaluation}\label{sec:evaluation} The interval semantics we presented in Section~\ref{sec:logic} and the online (in-order and out-of-order) monitoring strategies of Section~\ref{sec:monitoring} have been implemented as extensions to the Moonlight tool. To showcase the kind of applications where they can be exploited, and to compare their performance with other state-of-the-art approaches, we present three different examples: (i) we present and discuss the results of the properties previously introduced, in the context of air quality monitoring; (ii) we compare the performance of our approach for the evaluation of a temporal property on the Abstract Fuel Control Simulink model from the Breach~\cite{breach} tool; (iii) we compare the performance of the online approach versus the offline version of Moonlight on a simulated sensor network adopting the ZigBee protocol. All the computations have been executed on an Intel\textsuperscript{\textregistered} Core\textsuperscript{\texttrademark} i7-5820K CPU @ 3.30GHz, 15M cache, 6 cores (12 threads), with 32GB RAM, running Ubuntu\textsuperscript{\textregistered} 20.04.2 LTS, and Matlab\textsuperscript{\texttrademark} R2021a. \subsection{Use case: Air pollution monitoring}\label{sec:issues} Recalling Properties~\ref{p1} and~\ref{p2} from Section~\ref{sec:logic}, we can see in Figure~\ref{fig:rezzato-props} the results of the monitoring. Note that when both the upper and the lower bound are below the $0$ threshold, the property is certainly violated, while when only the lower bound is below $0$, the property is \emph{potentially} violated.
Property~\ref{p1} gives some important insights on the faults observed in Figure~\ref{fig:no2}. In fact, we can see that, of the six failures observed over the ten-day span of interest, only three last long enough to potentially trigger public concern; these correspond to the spikes to minus infinity in the lower bound of Figure~\ref{fig:rezzato-props} (left). In essence, with the interval semantics we learn that the property could potentially be violated in those time spans, while it is certainly not violated around the other missing values. Property~\ref{p2}, however, tells us something more about the neighbourhood: by combining the observations registered at nearby locations, it is apparent that just one of the failures (the one happening during March 20th) likely corresponds to a violation of the property, since no nearby location exhibits low levels of nitrogen dioxide in Figure~\ref{fig:rezzato-props} (right). \begin{figure*} \caption{The results of monitoring robustness for Property~\ref{p1} (left) and Property~\ref{p2} (right).} \label{fig:rezzato-p1} \label{fig:rezzato-p2} \label{fig:rezzato-props} \end{figure*} \subsection{Online comparison: Abstract Fuel Control} Consider a Simulink\textsuperscript{\textregistered} model that describes a black-box representation of an engine's air-fuel-ratio controller aimed at complying with the emission targets of a vehicle, where the user has direct control over the \textit{engine speed} and the \textit{pedal angle}. Each input and output is represented as a signal that is sampled regularly at a sampling period $T=0.1s$, the outputs being the actual air-to-fuel ratio (AF) and the mean air-to-fuel ratio value for the given input parameters (AF\textsubscript{ref}). For a full description of the model, we refer the reader to~\cite{afc-original}, while~\cite{Deshmukh2017} provides the reference implementation for online monitoring in Breach.
In our experiments, we monitored the following STL property (note that STREL is an extension of STL, and therefore every STL formula is also a STREL formula) \begin{equation*} \small \varphi = \glob{[10,30]}(|AF - AF_{ref}| > 0.1 \rightarrow (\ev{[0,1]}|AF-AF_{ref}| < 0.1)) \end{equation*} for different sample sizes, considering both updates delivered as an ordered chain and updates shuffled at random, to simulate out-of-order retrieval and processing. The result of $|AF-AF_{ref}|$ from the model has been stored in a file and loaded before starting the stopwatch for both monitors, to exclude simulation and loading time from the performance evaluation. The Breach monitor has been measured via its reference implementation as a Simulink model, while Moonlight is implemented as a Java program. Table~\ref{tab:afc} summarizes the performance of the monitors for different sample sizes. \begin{table}[ht] \centering \begin{tabular}{|l | l | l | l |} \hline N. samples & Breach Exec. Time & \multicolumn{2}{c|}{Moonlight Exec. Time} \\ & In-order & In-order& Out-of-order\\ \hline 500 & 7.603 s & 0.004 s & 0.157 s \\ 1000 & 8.143 s & 0.016 s & 0.489 s\\ 5000 & 10.770 s & 0.096 s & 9.790 s\\ 10000 & 13.730 s & 0.113 s & 44.894 s\\ \hline \end{tabular} \caption{Performance of monitoring the property $\varphi$ for different sample sizes. Times are averaged over 100 runs.} \label{tab:afc} \end{table} The interesting insight of the comparison is that, while our in-order implementation is reliably faster (note that the offline version of Moonlight had already shown better performance than Breach in~\cite{BartocciBLNS20}), the penalty of not assuming ordered inputs grows substantially with the input size, as out-of-order updates require longer searches in the output signal to find the spot where each update should be applied.
Nevertheless, the biggest sample size we considered is quite extreme (ten thousand randomly-shuffled samples), yet the execution time (4.489 ms/sample on average) is far smaller than the sampling period (0.1 s), which makes the approach reasonable for most real-time scenarios. \subsection{Moonlight comparison: ZigBee Protocol} Consider a collection of moving devices communicating via the ZigBee (IEEE 802.15.4) protocol. From the protocol description we know that the devices can have three roles: they can be coordinators, routers, or sensor nodes. Each device is equipped with a humidity sensor $h(t)$ that reports at each time instant the observed value of the humidity at the current location. The observed humidity can be described as an MA(0) process, i.e. \begin{equation*} h(t) = c_0 + \varepsilon(t), \tag{$\varepsilon \sim WN(0, \lambda^2)$} \end{equation*} where the observed value comes from the real value $c_0$, perturbed by the zero-mean white noise $\varepsilon$ of variance $\lambda^2$. Each device can communicate directly with the ones that are close enough, but it can also communicate with farther ones, as long as there is some router between them that bridges the communication. Let $X_H$ denote the true value of humidity for a given device at a given time, let $X_S$ denote the role of a given sensor, and let $T$ be some time threshold to warn the observers. We monitored the following properties on the system: \begin{equation*} \varphi_1 = (X_H > 60) \rightarrow \ev{[0, T]} (X_H < 30) \end{equation*} \begin{equation*} \varphi_2 = \everywhere{}{}\somewhere{d < 10}{}(X_S= \mathtt{coordinator}) \end{equation*} Property $\varphi_1$ denotes an alert condition: if the humidity $X_H$ measured by a device goes beyond $60\%$, then it must fall below $30\%$ afterwards, within the time threshold $T$.
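The MA(0) observation model above can be simulated with a few lines of Python. This is an illustrative sketch of ours: the MA(0) model only prescribes zero-mean white noise of variance $\lambda^2$, so the choice of Gaussian noise (and the function name \texttt{humidity\_signal}) is an assumption.

```python
import random

def humidity_signal(c0, lam, n, seed=0):
    """MA(0) observations: h(t) = c0 + eps(t), eps ~ WN(0, lam^2).
    Gaussian noise is an illustrative choice; seeding makes runs
    reproducible for the experiments."""
    rng = random.Random(seed)
    return [c0 + rng.gauss(0.0, lam) for _ in range(n)]
```

A trace generated this way fluctuates around $c_0$, so thresholds such as the $60\%$/$30\%$ bounds of $\varphi_1$ are crossed only by the noise excursions.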
Property $\varphi_2$, on the other hand, defines a reachability criterion among the sensors: it checks whether, from any location, it is possible to reach a coordinator ($X_S = \mathtt{coordinator}$) in less than 10 hops. Similarly to previous versions of this model~\cite{Bartocci17memocode, tutorial}, we consider the spatial model as a graph where the devices are the nodes, and all the edges between the nodes are labeled by $d=1$ to denote the networking hop from one device to another. Table~\ref{tab:sensors} shows the difference in monitoring $\varphi_1$ and $\varphi_2$ both online and offline. It is interesting to see how the different algorithms behave on the same formula and data: the online temporal algorithms are penalized by the added complexity of having to recompute some values. Conversely, the online spatial algorithms benefit from the hypothesis of spatial synchronization of the locations, resulting in slightly more efficient computations in the case we explored. Lastly, it can be seen that the benefit of parallelization is particularly evident when the number of nodes is strictly smaller than the number of cores of the CPU (10 in our case), while the benefit practically vanishes (actually resulting in more overhead) as the number of parallel threads grows significantly beyond the number of available cores. \begin{table}[ht] \centering \begin{tabular}{|l | l | l | l | l | l | l | l |} \hline Time & N.
& \multicolumn{2}{c|}{Offline} & \multicolumn{2}{c|}{Online} & \multicolumn{2}{c|}{Online (Parallel)} \\ samples & nodes & $\varphi_1$ & $\varphi_2$ & $\varphi_1$ & $\varphi_2$ & $\varphi_1$ & $\varphi_2$\\ \hline \multirow{3}{*}{100} & 10 & 9 & 77 & 116 & 29 & 49 & 58 \\ & 50 & 8 & 1028 & 151 & 430 & 84 & 583 \\ & 100 & 15 & 6919 & 197 & 2993 & 137 & 3017 \\ \multirow{3}{*}{500} & 10 & 8 & 200 & 621 & 45 & 461 & 760 \\ & 50 & 17 & 4058 & 1901 & 1783 & 1549 & 2009 \\ & 100 & 25 & 32561 & 3333 & 15641 & 2889 & 15486 \\ \hline \end{tabular} \caption{Execution times registered for monitoring $\varphi_1$ and $\varphi_2$ with different versions of Moonlight. Times in ms, averaged over 100 runs.} \label{tab:sensors} \end{table} \section{Conclusions \& Future Work}\label{sec:future} We extended the traditional definition of signals to also consider imprecise signals defined by intervals of values. We presented an interval semantics for STREL, proved its soundness and correctness, and introduced an online monitoring algorithm for STREL that exploits imprecise signals refinable by updates arriving in any order, and that can monitor updates on different locations in parallel. We implemented the proposed methodology in the Moonlight monitoring tool. We motivated our framework with an air pollution control specification using real data from the region of Lombardy, Italy. Lastly, we compared the new methodology with other state-of-the-art tools and discussed the differences. Many directions of future work can be followed. For example, the \emph{space-synchronization} hypothesis helped us simplify the implementation of the algorithms, but it is not needed from a theoretical point of view. It will be interesting in the future to clearly assess the computational advantages and disadvantages of that hypothesis, and to what extent it can be relaxed.
Another intriguing topic for future development concerns spatial models representing (and interacting as) distributed systems. In that context, multiple directions could be pursued, like considering an ownership model for the atomic formulae, or reasoning on an actor-based communication model among locations. Another interesting idea could be to expand the kind of failures we can monitor: for example, we could consider some form of error correction for the case where some received updates later prove to have carried wrong information (perhaps because of broken sensors). Lastly, different forms of computational optimization could be explored, like stopping when some bounds on the satisfiability/robustness have been reached, as well as intensive parallelization and hardware acceleration of the main algorithms. \begin{acks} The authors would like to acknowledge Davide Prandini for his (unpublished) thesis work, where preliminary work on imprecise signals for STL was conducted, together with many ideas that have been used in developing the proofs of the theorems presented here. This research has been partially supported by the Austrian FWF projects ZK-35 and LogiCS DK W1255-N23, by the Italian MIUR project PRIN 2017FTXR7S IT MATTERS, and by Marche Region in implementation of the financial programme POR MARCHE FESR 2014-2020, project "Miracle". \end{acks} \appendix \section{Proofs} \subsection{Proof of Theorem~\ref{th:soundness}}\label{proof:soundness} \input{proofs/soundness} \subsection{Proof of Lemma~\ref{th:metric-lemma}}\label{proof:metric-lemma} \input{proofs/metriclemma} \subsection{Proof of Theorem~\ref{th:correctness}}\label{proof:correctness} \input{proofs/correctness} \balance \section{Algorithms} Sliding window mutation primitives are described here.
\begin{algorithm} \caption{Slide}\label{alg:slide} \begin{algorithmic}[1] \Function{slide}{$t$} \State $(t_i, \mathbf{V}_i) := \mathtt{W.removeFirst()}$ \While{$\mathtt{W.first.start} < t$} \State $(t_{i+1}, \mathbf{V}_{i+1}) := \mathtt{W.removeFirst()}$ \State $\{\mathbf{u}^{\mathbf{OP}\psi}\}$ := $\{\mathbf{u}^{\mathbf{OP}\psi}\} \cup {(t_i, t_{i+1}, \mathbf{V}_i)}$ \If{$t_{i+1} > t$} \State $\mathtt{W.addFirst}(t, \mathbf{V}_i)$ \State $\mathtt{W.addFirst}(t_{i+1}, \mathbf{V}_{i+1})$ \Else \State $(t_i, \mathbf{V}_i) := (t_{i+1}, \mathbf{V}_{i+1})$ \EndIf \EndWhile \State $\{\mathbf{u}^{\mathbf{OP}\psi}\}$ := $\{\mathbf{u}^{\mathbf{OP}\psi}\} \cup {(t_i, t_{i+1}, \mathbf{V}_i)}$ \State $\mathtt{W.addFirst}(t, \mathbf{V}_i)$ \State \Return $\{\mathbf{u}^{\mathbf{OP}\psi}\}$ \EndFunction \end{algorithmic} \end{algorithm} \begin{algorithm} \caption{Add}\label{alg:do-add} \begin{algorithmic}[1] \Procedure{add}{$\mathbf{OP}, t, \mathbf{V}$} \State $(t_2,\mathbf{V}_2) := \mathtt{W.removeLast()}$ \State $\mathbf{V}_3 := $ \Call{compute\_op}{$\mathbf{OP}, \mathbf{V}, \mathbf{V}_2$} \If{$\mathbf{V}_2 = \mathbf{V}$} \State $\mathtt{W.addLast}(t_2, \mathbf{V}_2)$ \ElsIf{$\mathbf{V}_3 = \mathbf{V}$} \State \Call{add}{$\mathbf{OP}, t_2, \mathbf{V}_3$} \ElsIf{$\mathbf{V}_3 \neq \mathbf{V}_2$} \State \Call{add}{$\mathbf{OP}, t_2, \mathbf{V}_3$} \State $\mathtt{W.addLast}(t, \mathbf{V})$ \Else \State $\mathtt{W.addLast}(t_2, \mathbf{V}_2)$ \State $\mathtt{W.addLast}(t, \mathbf{V})$ \EndIf \EndProcedure \end{algorithmic} \end{algorithm} \end{document}
\begin{document} \footnote{ Partially supported by PRIN 2009: "Moduli, strutture geometriche e loro applicazioni" and by INdAM (GNSAGA). AMS Subject classification: 14H10, 14H40, 13D02. } \author[E. Colombo]{Elisabetta Colombo} \address{Dipartimento di Matematica, Universit\`a di Milano, via Saldini 50, I-20133, Milano, Italy } \email{{\tt [email protected]}} \author[P. Frediani]{Paola Frediani} \address{ Dipartimento di Matematica, Universit\`a di Pavia, via Ferrata 1, I-27100 Pavia, Italy } \email{{\tt [email protected]}} \title{On the Koszul cohomology of canonical and Prym-canonical binary curves} \maketitle \setlength{\parskip}{.1 in} \begin{abstract} In this paper we study Koszul cohomology and the Green and Prym-Green conjectures for canonical and Prym-canonical binary curves. We prove that if property $N_p$ holds for a canonical or a Prym-canonical binary curve of genus $g$ then it holds for a generic canonical or Prym-canonical binary curve of genus $g+1$. We also verify the Green and Prym-Green conjectures for generic canonical and Prym-canonical binary curves of low genus ($6\leq g\leq 15$, $g\neq 8$ for Prym-canonical and $3\leq g\leq 12$ for canonical). \end{abstract} \section{Introduction} Let $C$ be a smooth curve, $L$ a line bundle and $\mathcal{F}$ a coherent sheaf on $C$. We recall that the Koszul cohomology group $K_{p,q}(C,\mathcal{F},L)$ is the middle term cohomology of the complex: \begin{equation} \Lambda^{p+1}H^0(L)\otimes H^0(\mathcal{F}\otimes L^{q-1})\stackrel{d_{p+1,q-1}}\rightarrow \Lambda^{p}H^0(L)\otimes H^0(\mathcal{F}\otimes L^{q})\stackrel{d_{p,q}}\rightarrow \Lambda^{p-1}H^0(L)\otimes H^0(\mathcal{F}\otimes L^{q+1}) \end{equation} where $$d_{p,q}(s_1\wedge...\wedge s_p\otimes u):=\sum_{l=1}^{p}(-1)^l s_1\wedge...\wedge \hat{s_l} \wedge...\wedge s_p\otimes (s_lu).$$ If $\mathcal{F}=\mathcal{O}_C$ the groups $K_{p,q}(C,\mathcal{O}_C,L)$ are denoted by $K_{p,q}(C,L)$.
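For the reader's convenience, we note (a standard verification, not spelled out above) that these maps indeed form a complex. Writing out the composition on a decomposable element gives
\begin{equation*}
d_{p,q}\big(d_{p+1,q-1}(s_1\wedge\dots\wedge s_{p+1}\otimes u)\big)
=\sum_{l\neq m}\varepsilon_{l,m}\; s_1\wedge\dots\wedge\widehat{s_l}\wedge\dots\wedge\widehat{s_m}\wedge\dots\wedge s_{p+1}\otimes (s_l s_m u),
\end{equation*}
where $\varepsilon_{l,m}$ is the sign obtained by removing $s_l$ first and $s_m$ second; one checks that $\varepsilon_{l,m}=-\varepsilon_{m,l}$, so the terms cancel in pairs, $d_{p,q}\circ d_{p+1,q-1}=0$, and the middle cohomology $K_{p,q}(C,\mathcal{F},L)$ is well defined.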
The Koszul cohomology theory has been introduced in \cite{gr} and has been extensively studied, in particular in the case of the canonical bundle. We recall that Green and Lazarsfeld (\cite{gr}) proved that for any smooth curve $C$ of genus $g$ and Clifford index $c$, $K_{g-c-2,1}(C,K_C)\neq 0$. Green's conjecture says that this result is sharp, i.e. $K_{p,1}(C,K_C)=0$ for all $p\geq g-c-1$. The Clifford index for a general curve is $[\frac{g-1}{2}]$, so generic Green's conjecture says that $K_{p,1}(C,K_C)=0$ for all $p\geq[\frac{g}{2}]$, or equivalently, by duality, $K_{p,2}(C,K_C)=0$, i.e. property $N_p$ holds, for all $p\leq[\frac{g-3}{2}]$. Generic Green's conjecture has been proved by Voisin in \cite{v02}, \cite{v05}. Green's conjecture has also been verified for curves of odd genus and maximal Clifford index (\cite{v05}, \cite{hr}), for general curves of given gonality (\cite{v02}, \cite{te}, \cite{sch}), for curves on $K3$-surfaces (\cite{v02}, \cite{v05}, \cite{af}), and in other cases (see \cite{an}). Another interesting case is when the line bundle is Prym-canonical, $L=K_C\otimes A$ where $A$ is a nontrivial 2-torsion line bundle. This case has been studied in \cite{fl}, where the Prym-Green conjecture has been stated. This is an analogue of the Green conjecture for general curves, namely it says that for a general Prym-canonical curve $(C,K_C\otimes A)$, we have $K_{p,2}(C,K_C\otimes A)=0$, i.e. property $N_p$ holds, for all $p\leq[\frac{g}{2}-3]$. Proposition 3.1 of \cite{fl} shows that for any $(C,K_C\otimes A)$ and $p>[\frac{g}{2}-3]$, $K_{p,2}(C,K_C\otimes A) \neq 0$. Debarre in \cite{deba} proved that a generic Prym-canonical curve of genus $g \geq 6$ is projectively normal (property $N_0$) and for $g \geq 9$ its ideal is generated by quadrics (property $N_1$). In \cite{cfes} the Prym-Green conjecture is proved for genus $g=10,12,14$ by degeneration to irreducible nodal curves and computation with Macaulay2.
In a private communication Gavril Farkas told us that they could verify the conjecture also for $g=18,20$. The computations made in \cite{cfes} for genus 8 and 16 suggest that the Prym-Green conjecture may be false for genus a multiple of 8, or perhaps a power of 2. The possible failure of the Prym-Green conjecture in genus 8 is extensively discussed in the last section of \cite{cfes}, where a geometric interpretation of this phenomenon is given. In this paper we study Koszul cohomology and the Green and Prym-Green conjectures for canonical and Prym-canonical binary curves. Recall that a binary curve of genus $g$ is a stable curve consisting of two rational components $C_j$, $j=1,2$ meeting transversally at $g+1$ points. The canonical and Prym-canonical models of binary curves that we analyze are the ones used in \cite{ccm} and \cite{cf} and described in the next section. The main result of the paper (Theorem \eqref{indstep}) says that if property $N_p$ holds for a Prym-canonical binary curve of genus $g$ then it holds for a generic Prym-canonical binary curve of genus $g+1$. In particular, if the Prym-Green conjecture is true for a Prym-canonical binary curve of genus $g = 2k$, then it is true for a general Prym-canonical binary curve of genus $g = 2k+1$. Moreover we verify the conjecture by a direct computation for $g=6,9,10,12,14$ (see Corollary \eqref{corpg}). As a consequence, we show that the generic Prym-canonical curve of genus $g$ satisfies property $N_0$ for $g \geq 6$, property $N_1$ for $g \geq 9$ (already shown by Debarre), property $N_2$ for $g \geq 10$, property $N_3$ for $g \geq 12$ and property $N_4$ for $g \geq 14$ (Corollary \eqref{Np}). For $g =8$ and $g =16$ our computations on Prym-canonical binary curves also suggest that the Prym-Green conjecture might fail: in fact, in our examples we find that $\dim K_{\frac{g}{2}-3,2}(C, K_C \otimes A) =1$ both for $g=8$ and $g=16$ (see Remark \eqref{g8-16}).
A result analogous to Theorem \eqref{indstep} is proven for canonically embedded binary curves (Theorem \eqref{can}), where we show that if property $N_{p}$ holds for a canonical binary curve of genus $g$, then the same property holds for a general canonical binary curve of genus $g+1$. In particular, if the Green conjecture is true for a canonical binary curve of genus $g = 2k-1$, then it is true for a general canonical binary curve of genus $g = 2k$. Theorem \eqref{indstep} and analogous computations with Maple in genus $g =3,5,7,9,11$ imply that for a general canonical binary curve, if $g \geq 3$, then property $N_0$ holds (see also \cite{ccm} section 2), if $g \geq 5$, then property $N_1$ holds, if $g \geq 7$, then property $N_2$ holds, if $g \geq 9$, then property $N_3$ holds, and if $g \geq 11$, then property $N_4$ holds. {\bf Acknowledgments.} We thank Riccardo Murri for kindly carrying out the computer computations for $g=14,16$. \section{Canonical and Prym-canonical binary curves} \subsection{Construction of canonical binary curves} Recall that a binary curve of genus $g$ is a stable curve consisting of two rational components $C_j$, $j=1,2$ meeting transversally at $g+1$ points. Moreover, $H^0(C,\omega_C)$ has dimension $g$ and the restriction of $\omega_C$ to the component $C_j$ is $K_{C_j}(D_j)$ where $D_j$ is the divisor of nodes on $C_j$. Since $K_{C_j}(D_j)\cong {\mathcal O}_{{\proj}^{1}}(g-1)$, we observe that the components are embedded by the complete linear system $|{\mathcal O}_{{\proj}^{1}}(g-1)|$ in ${\proj}^{g-1}$. Following \cite{ccm}, we assume that the first $g$ nodes are $P_i=(0,...,0,1,0,...0)$, with 1 at the $i$-th place, $i=1,...,g$.
Then we can assume that $C_j$ is the image of the map \begin{equation}\label{can} \begin{gathered}\phi_j:{\proj}^1 \rightarrow {\proj}^{g-1}, \ j=1,2\\ \phi_j(t,u):= [\frac{M_j(t,u)}{(t-a_{1,j}u)}, ...,\frac{ M_j(t,u)}{(t-a_{g,j}u)}] \end{gathered} \end{equation} with $M_j(t,u):= \prod_{r=1}^{g} (t-a_{r,j}u)$, $j=1,2$ and $\phi_j([a_{l,j},1]) = P_l$, $l=1,...,g$. We see that the remaining node is the point $P_{g+1}:=[1,...,1]$ and it is the image of $[1,0]$ under the maps $\phi_j$, $j=1,2$. One can easily check that, for generic values of the $a_{i,j}$'s, $C=C_1\cup C_2$ is a canonically embedded binary curve. \subsection{Construction of Prym-canonical binary curves} Let $C$ be a binary curve of genus $g$, and $A\in Pic^0(C)$ a nontrivial line bundle. Then $H^0(C,\omega_C \otimes A)$ has dimension $g-1$ and the restriction of $\omega_C \otimes A$ to the component $C_j$ is $K_{C_j}(D_j)$ where $D_j$ is the divisor of nodes on $C_j$. Since $K_{C_j}(D_j)\cong {\mathcal O}_{{\proj}^{1}}(g-1)$, the components are embedded by a linear subsystem of ${\mathcal O}_{{\proj}^{1}}(g-1)$, hence they are projections from a point of rational normal curves in ${\proj}^{g-1}$. Vice versa, let us take two rational curves embedded in ${\proj}^{g-2}$ by non-complete linear systems of degree $g-1$, intersecting transversally at $g+1$ points. Then their union $C$ is a binary curve of genus $g$ embedded either by a linear subsystem of $\omega_C$ or by a complete linear system $|\omega_C \otimes A|$, where $A\in Pic^0(C)$ is nontrivial (see e.g. \cite{capo}, Lemma 10). In \cite{cf} (Lemma 3.1) we constructed a binary curve $C$ embedded in ${\proj}^{g-2}$ by a linear system $|\omega_C \otimes A|$ with $A^{\otimes 2}\cong {\mathcal O}_C$ and $A$ nontrivial. Let us now recall this construction, and call a binary curve with this embedding a Prym-canonical binary curve.
Assume that the first $g-1$ nodes are $P_i=(0,...,0,1,0,...,0)$ with 1 at the $i$-th place, $i=1,...,g-1$; the remaining two nodes are $P_g:=[t_1,...,t_{g-1}]$ with $t_i=0$ for $i =1,...,[\frac{g}{2}]$, $t_i =1$ for $i = [\frac{g}{2}]+1,...,g-1$, and $P_{g+1}:=[s_1,...,s_{g-1}]$ with $s_i=1$ for $i =1,...,[\frac{g}{2}]$, $s_i =0$ for $i = [\frac{g}{2}]+1,...,g-1$. Then the component $C_j$ is the image of the map \begin{equation}\label{pcan} \begin{gathered}\phi_j:{\proj}^1 \rightarrow {\proj}^{g-2}, \ j=1,2, \ \text{where} \\ \phi_1(t,u):= [\frac{tM_1(t,u)}{(t-a_{1,1}u)},..., \frac{tM_1(t,u)}{(t-a_{k,1}u)}, \frac{-M_1(t,u)d_1 a_{k+1,1}u}{A_1(t-a_{k+1,1}u)},..., \frac{-M_1(t,u)d_1a_{g-1,1}u}{A_1(t-a_{g-1,1}u)}]\\ \phi_2(t,u):= [\frac{tM_2(t,u)}{(t-a_{1,2}u)},..., \frac{tM_2(t,u)}{(t-a_{k,2}u)}, \frac{-M_2(t,u)d_2 a_{k+1,2}u}{A_2(t-a_{k+1,2}u)},..., \frac{-M_2(t,u)d_2a_{g-1,2}u}{A_2(t-a_{g-1,2}u)}]\\ \end{gathered} \end{equation} with $k :=[\frac{g}{2}]$, $M_j(t,u):= \prod_{r=1}^{g-1} (t-a_{r,j}u)$, and $A_j= \prod_{i=1}^{g-1} a_{i,j}$, $j=1,2$, where $d_2$ is a nonzero constant and $d_1 = \frac{-d_2 A_1}{A_2}$. Notice that we have $\phi_j([a_{l,j},1]) = P_l$, $l=1,...,g-1$, $\phi_j([0,1]) = P_g$, $\phi_j([1,0]) = P_{g+1}$, $j=1,2$. In Lemma 3.1 of \cite{cf} we proved that for a general choice of the $a_{i,j}$'s, $C=C_1\cup C_2$ is a binary curve embedded in ${\proj}^{g-2}$ by a linear system $|\omega_C \otimes A|$ with $A^{\otimes 2}\cong {\mathcal O}_C$ and $A$ nontrivial. In fact, recall that $Pic^0(C) \cong {{{\mathbb C}}^*}^g \cong {{{\mathbb C}}^*}^{g+1}/{{\mathbb C}}^*$, where ${{\mathbb C}}^*$ acts diagonally, and in Lemma 3.1 of \cite{cf} it is shown that our line bundle $A$ corresponds to the element $[(h_1,...,h_{g+1})] \in {{{\mathbb C}}^*}^{g+1}/{{\mathbb C}}^*$, where $h_i=1$ for $i< [\frac{g}{2}]+1$, $h_i = -1$ for $i =[\frac{g}{2}]+1,...,g-1$, $h_g=-1$, $h_{g+1} = 1$; in particular, $A$ is of 2-torsion.
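Purely as a sanity check on the parametrizations above, one can verify numerically that the canonical map \eqref{can} sends $[a_{l,j},1]$ to the coordinate point $P_l$: every coordinate $M_j(t,u)/(t-a_{r,j}u)$ with $r \neq l$ retains the factor $(t-a_{l,j}u)$ and hence vanishes at $t = a_{l,j}$, while the $l$-th coordinate does not. The following minimal Python sketch of this check is our own illustration (hypothetical random parameters for a single component), not part of the paper's code:

```python
import random
from math import prod
from fractions import Fraction

random.seed(0)
g = 6                                                       # toy genus, one component
a = [Fraction(x) for x in random.sample(range(1, 50), g)]   # distinct a_1, ..., a_g

def phi(t):
    """r-th coordinate M(t)/(t - a_r), with M(t) = prod_{s=1}^{g}(t - a_s)
    and the factor (t - a_r) already cancelled."""
    return [prod(t - a[s] for s in range(g) if s != r) for r in range(g)]

# phi(a_l) is the coordinate point P_l: exactly the l-th entry is nonzero
for l in range(g):
    coords = phi(a[l])
    assert all((coords[r] != 0) == (r == l) for r in range(g))
```

The exact rational arithmetic of \texttt{Fraction} avoids any floating-point ambiguity in the zero test.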
\section{Property $N_p$ for Prym-canonical binary curves} Let $C \subset {\proj}^{g-2}$ be a Prym-canonical binary curve embedded by $\omega_C \otimes A$, with $A^{\otimes 2} \cong {\mathcal O}_C$, as in (\ref{pcan}). In this section we study the Koszul cohomology for these curves, in particular we investigate property $N_p$, i.e. the vanishing of $K_{p,2}(C, K_C \otimes A)$. Since by duality (\cite{gr}, see also \cite{fr} prop.1.4) we have $K_{p,2}(C, K_C \otimes A) \cong K_{g-3-p,0}(C, K_C,K_C \otimes A)^{\vee} $, this vanishing is equivalent to the injectivity of the Koszul map \begin{equation} \label{koszul} F_{g-3-p}: \Lambda^{g-3-p}H^0(C,\omega_C \otimes A) \otimes H^0(C,\omega_C) \rightarrow \Lambda^{g-4-p}H^0(C,\omega_C \otimes A) \otimes H^0(C,\omega_C^2 \otimes A). \end{equation} Our strategy is to compare this map with analogous Koszul maps for a partial normalization of the curve $C$ at one node and possibly use induction on the genus. To this end, let us introduce some notation: set $k :=[\frac{g}{2}]$ and denote by $\tilde{C}_r$ the partial normalization of $C$ at the node $P_r$ with $r\leq k$ if $g =2k$, $r \geq k+1$ if $g = 2k+1$. This choice of the node is necessary in order to obtain the Prym-canonical model for the curve $\tilde{C}_r$. In fact, observe that in this way, for a general choice of the $a_{i,j}$'s, the projection from $P_r$ sends the curve $C$ to the Prym-canonical model of $\tilde{C}_r$ in ${\proj}^{g-3}$ given by the line bundle $K_{\tilde{C}_r} \otimes A'_r$ where $A'_r$ corresponds to the point $(h'_1,...,h'_{g-1}, 1) \in {{{\mathbb C}}^*}^{g}/{{\mathbb C}}^*$, with $h'_i = 1$ for $i\leq [\frac{g-1}{2}]$, $h'_i = -1$ for $i=[\frac{g-1}{2}]+1,...,g-1$, as described above. 
In fact $(\tilde{C}_r,A'_r)$ is parametrized by $a'_{i,j} = a_{i,j}$ for $i \leq r-1$, $j =1,2$, $a'_{i,j} = a_{i+1,j}$ for $i \geq r$, $j =1,2$. So if we set $d'_{j} := \frac{d_j}{a_{r,j}}$, $j =1,2$, we clearly have a pair $(\tilde{C}_r,A'_r)$ as in \eqref{pcan}. For simplicity let us choose $d_2 =1$, so $d_1 = -\frac{A_1}{A_2}$, hence $d'_{2} := \frac{1}{a_{r,2}}$, $d'_{1} := -\frac{A_1}{A_2 a_{r,1}}$. To simplify the notation, set $T_g:= H^0(C,\omega_C \otimes A)$, $H_g := H^0(C,\omega_C)$, $B_g:= H^0(C,\omega_C^2 \otimes A)$. Denote by $\{t_1,...,t_{g-1}\}$ the basis of $T_g$ given by the coordinate hyperplane sections in $\proj^{g-2} \cong \proj (T_g^{\vee})$ and by $\{s_1,...,s_{g}\}$ the basis of $H_g$ given by the coordinate hyperplane sections in $\proj^{g-1} \cong \proj (H_g^{\vee})$. Similarly, set $T_{g-1,r}:= H^0(\tilde{C}_r,\omega_{\tilde{C}_r} \otimes A'_r)$, $H_{g-1,r} := H^0(\tilde{C}_r,\omega_{\tilde{C}_r})$, $B_{g-1,r}:= H^0(\tilde{C}_r,\omega_{\tilde{C}_r}^2 \otimes A'_r)$. Denote by $\{t'_1,...,t'_{g-2}\}$ the basis of $T_{g-1,r}$ given by the coordinate hyperplane sections in $\proj^{g-3} \cong \proj (T_{g-1,r}^{\vee})$ and by $\{s'_1,...,s'_{g-1}\}$ the basis of $H_{g-1,r}$ given by the coordinate hyperplane sections in $\proj^{g-2} \cong \proj (H_{g-1,r}^{\vee})$. We have the following injections: \begin{equation} T_{g-1,r} \stackrel{I_r}\hookrightarrow T_g, \ t'_i \mapsto t_i \ \text{for} \ i \leq r-1, \ t'_i \mapsto t_{i+1} \ \text{for} \ i \geq r,\end{equation} \begin{equation} \label{h} H_{g-1,r} \stackrel{J_r}\hookrightarrow H_g, \ s'_i \mapsto s_i \ \text{for} \ i \leq r-1, \ s'_{i}\mapsto s_{i+1} \ \text{for} \ i \geq r.\end{equation} Clearly these maps induce an injective map \begin{equation} B_{g-1,r} \stackrel{L_r}\hookrightarrow B_g, \end{equation} which on the set of generators of $B_{g-1,r}$ given by $t'_i s'_j$, $i = 1,...,g-2$, $j = 1,...,g-1$, is given by $t'_i s'_j \mapsto I_r(t'_i)J_r(s'_j)$.
We claim that this map is well defined and injective by the definition of the $t'_i$'s and $s'_j$'s. In fact the restriction of $\sum \alpha_{i,j} t'_i s'_j$ to the two rational components of $\tilde{C}_r$ yields two polynomials $Q_1$ and $Q_2$. On the other hand we have $(\sum \alpha_{i,j} I_r(t'_i) J_r(s'_j))_{|C_{i}} = (t-a_{r,i})^2 Q_i$, hence $L_r$ is well defined and injective. We finally have a map \begin{equation} \Lambda^{l-1} T_{g-1,r} \stackrel{\wedge t_{r}} \longrightarrow \Lambda^{l} T_g, \end{equation} where by $\wedge t_{r}$ we indicate the natural map induced by $I_r$ at the level of the $(l-1)$-th exterior power, $\Lambda^{l-1} T_{g-1,r} \rightarrow \Lambda^{l-1} T_g$, composed with the wedge product with $t_{r}$, $ \Lambda^{l-1} T_g \stackrel{ \wedge t_{r}} \longrightarrow \Lambda^{l} T_g $. As in \eqref{koszul}, denote by $F_l : \Lambda^l T_g \otimes H_g \rightarrow \Lambda^{l-1} T_g \otimes B_g$ the Koszul map. We have the following commutative diagram \begin{equation} \label{diagram1} \xymatrix{ \Lambda^{l} T_{g} \otimes H_{g} \ar[r]^{F_{l}} \ar[r]& \Lambda^{l-1} T_{g} \otimes B_{g} \ar[r]^{ \pi_{r}} & \langle t_{r} \rangle \wedge \Lambda^{l-2} T_g \otimes B_g& \\ \Lambda^{l-1} T_{g-1,r} \otimes H_{g-1,r} \ar[r]^{\tilde{F}_{l-1}} \ar[u]^{\wedge t_{r} \otimes J_r}& \Lambda^{l-2} T_{g-1,r} \otimes B_{g-1,r} \ar[ur]^{\wedge t_{r} \otimes L_r}} \end{equation} From now on, given a multi-index $I=(i_1,...,i_l)$, we write $t_I:=t_{i_1}\wedge ... \wedge t_{i_l}$. To study the injectivity of the maps $F_l$, a preliminary reduction comes from the following \begin{LEM} \label{W} Let $W \subset \Lambda^l T_g \otimes H_g$ be the subspace generated by the elements of the form $t_I \otimes s_j$, where $j \not \in I$. Then the kernel of the Koszul map $F_l : \Lambda^l T_g \otimes H_g \rightarrow \Lambda^{l-1} T_g \otimes B_g$ is contained in $W$.
\end{LEM} \proof Assume that $ v \in \Lambda^l T_g \otimes H_g $, $ v = \sum_{I, |I|=l} \sum_{j=1...g} \lambda^I_j t_I \otimes s_j$, is such that $F_{l}(v) = 0$. Then $F_l(v) = \sum_{J, |J| = l-1} \sum_{ I =J \cup \{m\}} \sum_{j=1...g} \lambda^I_j \epsilon(I,J) t_J \otimes t_m s_j=0$, where $\epsilon(I,J) = \pm 1$, depending on the position of $m$ in the multi-index $I = J \cup \{m\}$. Hence, if we fix a multi-index $J$ with $|J| = l-1$, we must have $ \sum_{ m} \sum_{j=1...g} \lambda^{J \cup \{m\}}_j \epsilon(J \cup \{m\},J) t_J \otimes t_m s_j =0$ and therefore $$ \sigma_J:= \sum_{ m} \sum_{j=1...g} \lambda^{J \cup \{m\}}_j \epsilon(J \cup \{m\},J) t_m s_j =0.$$ So we have ${\sigma_J}_{|C_1} \equiv 0$; namely, if we set $P_1(t) := t \cdot M_1(t,1)$, as in \eqref{pcan}, we have $$\sum_{j=1...g} \sum_{ m \leq k} \lambda^{J \cup \{m\}}_j \epsilon(J \cup \{m\},J) \frac{ P_1(t)}{t-a_{m,1}} \frac{ P_1(t)}{t-a_{j,1}} $$ $$ +\sum_{j=1...g} \sum_{ m \geq k+1} \lambda^{J \cup \{m\}}_j \epsilon(J \cup \{m\},J) \frac{ P_1(t) a_{m,1}}{A_2t(t-a_{m,1})} \frac{ P_1(t)}{t-a_{j,1}} =0$$ If we evaluate at $t = a_{m,1}$, only one term in the sum remains, namely the one with $j =m$, and hence we have $$ \lambda^{J \cup \{m\}}_m \epsilon(J \cup \{m\},J) a^2_{m,1} \cdot \prod_{ r \neq m, r =1...g-1} (a_{m,1} - a_{r,1})^2 =0, \ \text{if } \ m \leq k, $$ $$ \lambda^{J \cup \{m\}}_m \epsilon(J \cup \{m\},J) \frac{a^2_{m,1}}{A_2} \cdot \prod_{ r \neq m, r =1...g-1} (a_{m,1} - a_{r,1})^2 =0, \ \text{if } \ m \geq k+1, $$ hence we have $ \lambda^{J \cup \{m\}}_m =0$ for all $m$. Since this holds for every multi-index $J$ of cardinality $l-1$, we have shown that we can write $v = \sum_{I, |I|=l} \sum_{j=1...g, j \not \in I} \lambda^I_j t_I \otimes s_j$. \qed We can now state and prove our main result. \begin{TEO} \label{indstep} Assume that $g = 2k$ or $g = 2k+1$, and take an integer $p \leq k-3$.
If property $N_p$ holds for a binary curve $\tilde{C}$ of genus $g-1$ embedded in ${\proj}^{g-3}$ by $|\omega_{\tilde{C}} \otimes A'|$ as in \eqref{pcan} for a generic choice of the parameters $a'_{i,j}$, then it holds for all binary curves $C$ of genus $g$ embedded in ${\proj}^{g-2}$ by $|\omega_C \otimes A|$ as in \eqref{pcan} for a generic choice of the $a_{i,j}$'s. \end{TEO} \proof We want to prove that $K_{p,2}(C, K_C \otimes A) =0$ for a binary curve of genus $g$, and we know that $K_{p,2}(\tilde{C}_r, K_{\tilde C_r} \otimes A'_r) =0$ for the curve $\tilde{C}_r$ which is obtained from $C$ by projection from $P_{r}$, with $r \geq k+1$ if $g = 2k+1$, $r \leq k$ if $g = 2k$. By duality, $K_{p,2}(C, K_C \otimes A) \cong K_{g-3-p,0}(C, K_C,K_C \otimes A)^{\vee} $, so the statement is equivalent to the injectivity of the Koszul map $$ F_{g-3-p}: \Lambda^{g-3-p}T_g \otimes H_g \rightarrow \Lambda^{g-4-p}T_g \otimes B_g.$$ By assumption we know the injectivity of the map $$\tilde{F}_{g-4-p}: \Lambda^{g-4-p}T_{g-1,r} \otimes H_{g-1,r} \rightarrow \Lambda^{g-5-p}T_{g-1,r} \otimes B_{g-1,r}.$$ For simplicity let us set $l:=g-3-p$. Assume first of all that $g = 2k+1$ and consider the projection of $C$ from $P_{g-1}$. Recall that by Lemma \eqref{W} we can reduce to proving the injectivity of $F_{l}$ restricted to the subspace $W$ generated by the elements $t_I \otimes s_j$ with $j \not \in I$. Note that we can decompose $W$ as $W=X_{g-1} \oplus Y_{g-1}$, where $X_{g-1}$ is the intersection with $W$ of the image of the map $\wedge t_{g-1} \otimes J_{g-1}$ in diagram \eqref{diagram1} and $Y_{g-1}$ is the subspace of $W$ generated by the elements $t_I \otimes s_j$ with $g-1 \not \in I$ and $j \not \in I$: $$X_{g-1} = \langle t_{g-1} \wedge t_{J} \otimes s_{j} \ | \ j \not \in J \rangle, \ Y_{g-1} = \langle t_I \otimes s_{j} \ | \ j ,g-1 \not \in I \rangle.$$ Assume now that $F_{l}(x_{g-1} + y_{g-1}) =0$, where $x_{g-1} \in X_{g-1}$, $y_{g-1} \in Y_{g-1}$.
Then we have $0=\pi_{g-1} \circ F_{l}(x_{g-1} + y_{g-1}) = \pi_{g-1} \circ F_{l}(x_{g-1})= (\wedge t_{g-1} \otimes L_{g-1}) \circ \tilde{ F}_{l-1}(x_{g-1})$, by the commutativity of diagram \eqref{diagram1}. Hence $x_{g-1}=0$, since by induction we are assuming that $\tilde{ F}_{l-1}$ is injective. So we have reduced to proving the injectivity of $F_{l}$ restricted to $Y_{g-1}$. Now consider the projection of $C$ from the point $P_{g-2}$. Set $$Y'_{g-2} = \langle t'_{J} \otimes s'_{j} \ | \ j, g-2 \not \in J \rangle \subset \Lambda^{l-1} T_{g-1,g-2} \otimes H_{g-1,g-2}$$ Observe that the image $X_{g-2}:= (\wedge t_{g-2} \otimes J_{g-2}) (Y'_{g-2})$ is contained in $Y_{g-1}$ and in fact $$X_{g-2}= \langle t_{g-2} \wedge t_{J} \otimes s_{j} \ | \ j,g-1 \not \in J \rangle.$$ So we have $Y_{g-1} = X_{g-2} \oplus Y_{g-2}$, where $Y_{g-2}$ is the subspace of $Y_{g-1}$ generated by those elements of the form $t_I \otimes s_j$ where $g-2,g-1,j \not \in I$. We have the following commutative diagram \begin{equation} \label{diagram2} \xymatrix{ Y_{g-1} \ar[r]^{F_{l}} \ar[r]& \Lambda^{l-1} T_{g} \otimes B_{g} \ar[r]^{ \pi_{g-2}} & \langle t_{g-2} \rangle \wedge \Lambda^{l-2} T_g \otimes B_g& \\ { \ \ \ \ \ \ \ Y'_{g-2} \ \ \ \ \ \ \ } \ar[r]^{\tilde{F}_{l-1}} \ar[u]^{\wedge t_{g-2} \otimes J_{g-2}}& \ \ \ \ \Lambda^{l-2} T_{g-1,g-2} \otimes B_{g-1,g-2} \ar[ur]^{\wedge t_{g-2} \otimes L_{g-2}}} \end{equation} Assume that $v=x_{g-2} + y_{g-2} \in Y_{g-1} = X_{g-2} \oplus Y_{g-2}$ is such that $F_{l}(v) =0$, then we have $0 = \pi_{g-2} \circ F_{l}(x_{g-2}+y_{g-2}) = \pi_{g-2} \circ F_{l}(x_{g-2})$. So $0= \tilde{F}_{l-1}(x_{g-2})$ by the commutativity of the diagram, and this implies $x_{g-2} = 0$ by induction. Therefore we can assume that $v \in Y_{g-2}$, hence $v$ is a linear combination of vectors of the form $t_I \otimes s_j$ where $g-2,g-1,j \not \in I$. Repeat the procedure, i.e. project from the points $P_r$, $r=g-3,\dots,l$.
This can be done since $l = g-3-p \geq k+1$. In this way we can reduce to proving the injectivity of the restriction of the map $F_{l}$ to the subspace $Y_{l}$ of $W$ generated by the elements of the form $t_I \otimes s_j$ where $l,...,g-1,j \not \in I$. Observe that since $|I| = l$, we have $Y_{l} = 0$, so $F_{l}$ is injective and the theorem is proved. If $g =2k$ the proof is analogous: we successively project from the points $P_1, P_{2},...,P_{g-l}$. As before, note that this can be done since $g-l= p+3 \leq k$. In this way we reduce to proving the injectivity of the restriction of the map $F_{l}$ to the subspace $Y$ of $W$ generated by the elements of the form $t_I \otimes s_j$ where $1,2,3,...,g-l, j \not \in I$; since $|I| = l$, we have $Y= 0$, so $F_{l}$ is injective and the theorem is proved. \qed \begin{COR} If the Prym-Green conjecture is true for a Prym-canonical binary curve of genus $g = 2k$ as in \eqref{pcan}, then it is true for a Prym-canonical binary curve of genus $g = 2k+1$ as in \eqref{pcan} for generic parameters $a_{i,j}$. \end{COR} \proof The conjecture for $g = 2k+1$ says that $K_{k+1,0}(C,K_C,K_C \otimes A) = 0$, or equivalently that property $N_{k-3}$ holds for a generic $C$ embedded with $K_C \otimes A$. Hence the corollary immediately follows from Theorem \eqref{indstep} with $p =k-3$. \qed \begin{COR} \label{corpg} \label{Np}The generic Prym-canonical curve of genus $g$ satisfies property $N_0$ for $g \geq 6$, $N_1$ for $g \geq 9$, $N_2$ for $g \geq 10$, $N_3$ for $g \geq 12$, $N_4$ for $g \geq 14$. \end{COR} \proof With a direct computation one verifies the Prym-Green conjecture for explicit examples of Prym-canonical binary curves as in \eqref{pcan} for $g = 6, 9,10,12,14$, so the proof follows from Theorem \eqref{indstep} for generic Prym-canonical binary curves, and then by semicontinuity for generic Prym-canonical smooth curves.
To do the computations we wrote a very simple Maple script (http://www-dimat.unipv.it/~frediani/prym-can) in which we explicitly give the matrix representing the Koszul map $F_l$: for every multi-index $J$ with $|J| = l-1$, we take the projection of the image of $F_l$ onto $t_J \otimes B_{g}$ and we restrict it to the rational components $C_j$. So we have two polynomials in one variable and we take their coefficients. Once the matrix is constructed, for $g = 6, 9,10,12$, Maple computed its rank modulo $131$, which turned out to be maximal. In the case $g =14$ the order of the matrices was too large, so Riccardo Murri carried out the rank computation using the Linbox (\cite{1}) and Rheinfall (\cite{2}) free software libraries. Two different rank computation algorithms were used: Linbox's ``black box'' implementation of the block Wiedemann method (\cite{4,5}), and Rheinfall's Gaussian elimination code (\cite{6}). The results obtained by the two methods agree. In both cases, the GNU GMP library (\cite{3}) provided the underlying arbitrary-precision representation of rational numbers and exact arithmetic operations. \qed \begin{REM} \label{g8-16} For Prym-canonical curves of genus 8, the Maple computation on specific examples of binary curves gives $\dim K_{1,2}(C, K_C \otimes A)=1$. This result is compatible with the computations in \cite{cfes}. For Prym-canonical binary curves of genus 16, we constructed the matrix representing the Koszul map $F_8$ on examples using Maple, and Riccardo Murri computed its rank as explained in the proof of Corollary \eqref{corpg}. Again it turned out that $\dim K_{5,2}(C, K_C \otimes A)=1$, confirming the computations in \cite{cfes}. \end{REM} \section{Property $N_p$ for canonical binary curves} In analogy with the Prym-canonical case, we study now property $N_p$ for canonical binary curves with the same inductive method, projecting from a node.
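The modular rank checks described above amount to Gaussian elimination over the finite field $\mathbb{F}_{131}$: maximal rank modulo a prime certifies maximal rank over $\mathbb{Q}$, while the converse can fail for unlucky primes, which is why a prime reduction can only confirm injectivity. The following minimal Python sketch of such a computation is our own illustration; the actual computations used the Maple script and the Linbox/Rheinfall libraries cited above.

```python
def rank_mod_p(M, p=131):
    """Rank of an integer matrix M over the finite field F_p,
    by straightforward Gaussian elimination."""
    A = [[x % p for x in row] for row in M]
    rank, rows, cols = 0, len(A), len(A[0]) if A else 0
    for c in range(cols):
        # find a pivot in column c below the current rank
        piv = next((r for r in range(rank, rows) if A[r][c]), None)
        if piv is None:
            continue
        A[rank], A[piv] = A[piv], A[rank]
        inv = pow(A[rank][c], p - 2, p)          # inverse via Fermat's little theorem
        A[rank] = [x * inv % p for x in A[rank]]
        for r in range(rows):
            if r != rank and A[r][c]:
                f = A[r][c]
                A[r] = [(x - f * y) % p for x, y in zip(A[r], A[rank])]
        rank += 1
        if rank == rows:
            break
    return rank
```

For the matrix sizes reported above, dedicated libraries (block Wiedemann, optimized elimination) replace this naive loop, but the underlying arithmetic is the same.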
So, let $C \subset {\proj}^{g-1}$ be a canonical binary curve and denote by $\tilde{C}_r$ the partial normalization of $C$ at the node $P_r$, $1\leq r \leq g$. As above, for a general choice of the $a_{i,j}$'s, the projection from $P_r$ sends the curve $C$ to the canonical model of $\tilde{C}_r$ in ${\proj}^{g-2}$, where $\tilde{C}_r$ is parametrized by $a'_{i,j} = a_{i,j}$ for $i \leq r-1$, $j =1,2$, $a'_{i,j} = a_{i+1,j}$ for $i \geq r$, $j =1,2$. Set $H_g := H^0(C,\omega_C)$, $D_g:= H^0(C,\omega_C^2)$, and denote by $F_l : \Lambda^l H_g \otimes H_g \rightarrow \Lambda^{l-1} H_g\otimes D_g$ the Koszul map. Denote as before by $\{s_1,...,s_{g}\}$ the basis of $H_g$ given by the coordinate hyperplane sections in $\proj^{g-1} \cong \proj (H_g^{\vee})$. Set $H_{g-1,r} := H^0(\tilde{C}_r,\omega_{\tilde{C}_r})$, $D_{g-1,r}:= H^0(\tilde{C}_r,\omega_{\tilde{C}_r}^2)$. Denote by $\{s'_1,...,s'_{g-1}\}$ the basis of $H_{g-1,r}$ given by the coordinate hyperplane sections in $\proj^{g-2} \cong \proj (H_{g-1,r}^{\vee})$. We have the injections $H_{g-1,r} \stackrel{J_r}\hookrightarrow H_g,$ as in \eqref{h} and $D_{g-1,r} \stackrel{L_r}\hookrightarrow D_g,$ which on the set of generators of $D_{g-1,r}$ given by $s'_i s'_j$, $i,j = 1,...,g-1$, is given by $s'_i s'_j \mapsto J_r(s'_i)J_r(s'_j)$. We finally have a map \begin{equation} \Lambda^{l-1} H_{g-1,r} \stackrel{\wedge s_{r}} \longrightarrow \Lambda^{l} H_g, \end{equation} where by $\wedge s_{r}$ we indicate the natural map induced by $J_r$ at the level of the $(l-1)$-th exterior power, $\Lambda^{l-1} H_{g-1,r} \rightarrow \Lambda^{l-1} H_g$, composed with the wedge product with $s_{r}$, $ \Lambda^{l-1} H_g \stackrel{ \wedge s_{r}} \longrightarrow \Lambda^{l} H_g $. We are interested in property $N_p$ for these curves, hence by duality, in the vanishing of $K_{g-2-p,1}(C,K_C)$.
Clearly the vanishing of $K_{l,1}(C,K_C)$ is equivalent to the injectivity of the map \begin{equation} \frac{\Lambda^l H_g \otimes H_g}{ \Lambda^{l+1} H_g}\rightarrow \Lambda^{l-1} H_g\otimes D_g \end{equation} coming from the Koszul complex. Notice that there is an isomorphism between $\frac{\Lambda^l H_g \otimes H_g}{ \Lambda^{l+1} H_g}$ and the subspace $V_{g}$ of $\Lambda^l H_g \otimes H_g$ generated by the elements of the form $s_I \otimes s_j$, where $j \geq i_1$, so the above injectivity is equivalent to the injectivity of the restriction of $F_l$ to $V_{g}$. We have the following commutative diagram \begin{equation} \label{diagram1can} \xymatrix{ V_g \ar[r]^{F_{l} } \ar[r]& \Lambda^{l-1} H_{g} \otimes D_{g} \ar[r]^{ \pi_{r}} & \langle s_{r} \rangle \wedge \Lambda^{l-2} H_g \otimes D_g& \\ \ \ \ \ \ \ \ V_{g-1,r} \ \ \ \ \ar[r]^{ \tilde{F}_{l-1}} \ar[u]^{\wedge s_{r} \otimes J_r}& \ \ \ \ \Lambda^{l-2} H_{g-1,r} \otimes D_{g-1,r} \ar[ur]^{\wedge s_{r} \otimes L_r}} \end{equation} where $V_{g-1,r}$ is the subspace of $\Lambda^{l-1} H_{g-1,r} \otimes H_{g-1,r}$ generated by the elements of the form $s'_J \otimes s'_j$, where $j \geq j_1$. Let $W \subset V_{g}$ be the subspace generated by the elements of the form $s_I \otimes s_j$, where $j \not \in I$ and $j \geq i_1$. \begin{REM} \label{W_i} The map $F_l :V_{g} \rightarrow \Lambda^{l-1} H_g \otimes D_g$ is injective if and only if ${F_l}_{|W}$ is injective. \end{REM} \proof The proof is completely analogous to the proof of Lemma \eqref{W}. \qed \begin{TEO} \label{can} If property $N_{p}$ holds for a canonical binary curve of genus $g-1$ as in \eqref{can}, then the same property holds for a canonical binary curve of genus $g$ as in \eqref{can} for a generic choice of the parameters.
\end{TEO} \proof From the above discussion we know that the statement is equivalent to proving the injectivity of the Koszul map $ F_{l}: V_{g} \rightarrow \Lambda^{l-1}H_g \otimes D_g$ for $l = g-2-p$, while by assumption we know the injectivity of the map $\tilde{F}_{l-1}: V_{g-1,r} \rightarrow \Lambda^{l-2}H_{g-1,r} \otimes D_{g-1,r}.$ We first project from $P_{g}$. By Remark \eqref{W_i} we can reduce to proving the injectivity of $F_{l}$ restricted to the subspace $W$ generated by the elements $s_I \otimes s_j$ with $j \not \in I, j >i_1$. Note that as before we can decompose $W$ as $W=X_{g} \oplus Y_{g}$, where $X_{g}$ is the intersection with $W$ of the image of the map $\wedge s_{g} \otimes J_{g}$ in diagram \eqref{diagram1can} and $Y_{g}$ is the subspace of $W$ generated by the elements $s_I \otimes s_j$ with $g \not \in I$ and $j \not \in I, j >i_1$: $$X_{g} = \langle s_{g} \wedge s_{J} \otimes s_{j} \ | \ j \not \in J, j > j_1\rangle, \ Y_{g} = \langle s_I \otimes s_{j} \ | \ j ,g \not \in I, j>i_1 \rangle$$ If $F_{l}(x_{g} + y_{g}) =0$, where $x_{g} \in X_{g}$, $y_{g} \in Y_{g}$, then $0=\pi_{g} \circ F_{l}(x_{g} + y_{g}) = \pi_{g} \circ F_{l}(x_{g})= (\wedge s_{g} \otimes L_{g}) \circ \tilde{ F}_{l-1}(x_{g})$. Hence $x_{g}=0$, since by induction $\tilde{ F}_{l-1}$ is injective. So we have reduced to proving the injectivity of $F_{l}$ restricted to $Y_{g}$. Repeat the procedure, i.e. project from the points $P_r$, $r=g-1,\dots,l$. In this way we can reduce to proving the injectivity of the restriction of the map $F_{l}$ to the subspace $Y_{l}$ of $W$ generated by the elements of the form $s_I \otimes s_j$ where $l,...,g,j \not \in I, j>i_1$. Observe that since $|I| = l$, we have $Y_{l} = 0$, so $F_{l}$ is injective and the theorem is proved. \qed \begin{REM} Notice that, by the theorem of Green and Lazarsfeld (\cite{gr}), if $p>g-[\frac{g}{2}]-2$, condition $N_p$ does not hold for any curve $\tilde{C}$ of genus $g-1$.
\end{REM} \begin{COR} If the Green conjecture is true for a canonical binary curve of genus $g = 2k-1$ as in \eqref{can}, then it is true for a canonical binary curve of genus $g = 2k$ as in \eqref{can} for a generic choice of the parameters. \end{COR} \proof The conjecture for $g = 2k$ says that $K_{k,1}(C,K_C) = 0$, or equivalently that property $N_{k-2}$ holds for $C$ embedded with $K_C$. By assumption we know that $K_{k-1,1}(\tilde{C},K_{\tilde{C}}) = 0$, namely that property $N_{k-2}$ holds for $\tilde{C}$ embedded with $K_{\tilde{C}}$, so the thesis immediately follows from Theorem \eqref{can}. \qed With Maple (http://www-dimat.unipv.it/~frediani/greenfinal.tar.gz) one verifies the conjecture for $g = 5,7,9,11$, so one can prove with the same method that if $g \geq 3$, then property $N_0$ holds (see also \cite{ccm}, Section 2), if $g \geq 5$, then property $N_1$ holds, if $g \geq 7$, then property $N_2$ holds, if $g \geq 9$, then property $N_3$ holds, and if $g \geq 11$, then property $N_4$ holds. \end{document}
\begin{document} \begin{frontmatter} \title{A multi-functional analyzer uses parameter constraints to improve the efficiency of model-based gene-set analysis} \runtitle{A multi-functional analyzer} \begin{aug} \author[A]{\fnms{Zhishi}~\snm{Wang}\thanksref{m1}\ead[label=e1]{[email protected]}}, \author[A]{\fnms{Qiuling}~\snm{He}\thanksref{m2}\ead[label=e2]{[email protected]}}, \author[B]{\fnms{Bret}~\snm{Larget}\thanksref{m3}\ead[label=e3]{[email protected]}} \and\\ \author[C]{\fnms{Michael A.}~\snm{Newton}\corref{}\thanksref{m4}\ead[label=e4]{[email protected]}\ead[label=u1,url]{http://www.stat.wisc.edu/\textasciitilde newton/}} \runauthor{Wang, He, Larget and Newton} \affiliation{University of Wisconsin, Madison} \address[A]{Z. Wang\\ Q. He \\ Department of Statistics\\ University of Wisconsin, Madison \\ 1300 University Avenue\\ Madison, Wisconsin 53706\\ USA\\ \printead{e1} \\ \phantom{E-mail:\ }\printead*{e2}} \address[B]{B. Larget \\ Departments of Statistics and Botany \\ University of Wisconsin, Madison \\ 1300 University Avenue \\ Madison, Wisconsin 53706 \\ USA\\ \printead{e3}} \address[C]{M. A. Newton \\ Departments of Statistics and Biostatistics\\ \quad and Medical Informatics\\ University of Wisconsin, Madison \\ 1300 University Avenue \\ Madison, Wisconsin 53706\\ USA\\ \printead{e4}\\ \printead{u1}} \end{aug} \thankstext{m1}{Supported in part by a fellowship from the Morgridge Institute of Research.} \thankstext{m2}{Supported in part by a research assistantship from the National Institutes of Health (NIH) (R21 HG006568); currently employed by Novartis Pharmaceuticals.} \thankstext{m3}{Supported in part by NIH R01 GM086887 and NSF DEB 0949121.} \thankstext{m4}{Supported in part by NIH R21 HG006568.} \received{\smonth{10} \syear{2013}} \revised{\smonth{8} \syear{2014}} \begin{abstract} We develop a model-based methodology for integrating gene-set information with an experimentally-derived gene list. 
The methodology uses a previously reported sampling model, but takes advantage of natural constraints in the high-dimensional discrete parameter space in order to work from a more structured prior distribution than is currently available. We show how the natural constraints are expressed in terms of linear inequality constraints within a set of binary latent variables. Further, the currently available prior gives low probability to these constraints in complex systems, such as Gene Ontology (GO), thus reducing the efficiency of statistical inference. We develop two computational advances to enable posterior inference within the constrained parameter space: one using integer linear programming for optimization and one using a penalized Markov chain sampler. Numerical experiments demonstrate the utility of the new methodology for a multivariate integration of genomic data with GO or related information systems. Compared to available methods, the proposed multi-functional analyzer covers more reported genes without mis-covering nonreported genes, as demonstrated on genome-wide data from association studies of type~2 diabetes and from RNA interference studies of influenza. \end{abstract} \begin{keyword} \kwd{Gene-set enrichment} \kwd{Bayesian analysis} \kwd{integer linear programming}. \end{keyword} \end{frontmatter} \section{Introduction} In statistical genomics, the gene list is a recurring data structure. We have in mind situations where experimental results amount to a collection of genes measured to have some property. Examples include the following: RNA expression studies, in which the property might be differential expression of the gene between two cell types; genome-wide RNA knock-down studies, in which the property is significant phenotypic alteration caused by RNA interference; chromatin studies recording genes in the vicinity of transcription factor binding sites or having certain epigenetic marks. 
In all cases, the reported gene list is really the result of inference from more basic experimental data. These more basic data may be available to support subsequent analyses, but we are concerned with the important and relatively common case in which the gene list itself is the primary data set brought forward for analysis. \setcounter{footnote}{4} The statistical question of central importance in the present paper is how to interpret the gene list in the context of preexisting biological knowledge about the functional properties of all genes, as these exogenous data are recorded in database systems, notably Gene Ontology (GO), the Kyoto Encyclopedia (KEGG) and Reactome, among others [\citet{go}; \citet{kegg}; \citet{m08}; \citet{bioc}]. For us, exogenous data form a collection of gene sets, with each set equaling those genes previously determined, by some evidence, to have a specific biological property. Recently, for example, the full GO collection contained 16{,}527 sets (GO terms) annotating 17{,}959 human genes.\footnote{Bioconductor package org.Hs.eg.db, version 2.8.0.} Needless to say, genes are typically annotated to multiple gene sets (e.g., a median of 7 sets per gene among genes annotated to sets containing between 3 and 30 genes), covering all sorts of functional properties. The task of \textit{gene-set analysis} is to efficiently interpret the functional content of an experimentally-derived gene list by somehow integrating these endogenous and exogenous data sources [\citet{k12}]. Our starting point is an exciting development in the methodology of gene-set analysis. Model-based gene-set analysis (MGSA) expresses gene-level indicators of presence on the gene list as Bernoulli trials whose success probabilities are determined in a simple way by latent activity states of binary variables associated with the gene sets [\citet{b10}; \citet{b11}].
Inference seeks to identify the \textit{active} gene sets, as these represent functional drivers of the experimental data. Inference is computationally difficult because the activity state of a given gene set depends not only on experimental data for genes in that set, but also on the unknown activity states of all other gene sets that annotate these same genes. MGSA overcomes the problem through Bayesian inference implemented with an efficient Markov chain Monte Carlo (MCMC) sampler, and thus provides marginal posterior probabilities that each gene set is in the active state. The MGSA methodology is compelling. Because it treats all gene sets in the collection simultaneously, it provides a truly multivariate analysis of the exogenous data source, where most available approaches are univariate (one set at a time). Where set/set overlaps are a nuisance in most gene-set methodologies, MGSA utilizes them directly in modeling and inference. This accounts for pleiotropy, the fact that genes have multiple biological functions; it reduces the risk of spurious associations and leads to cleaner output, whereby a typical list of gene sets inferred to be active is simpler and exhibits less redundancy than in standard univariate analyses [\citet{b10}; \citet{n12}]. Our analysis reveals a feature of MGSA that adversely affects its statistical properties. In ever denser collections of gene sets, the MGSA prior distribution puts more and more mass on logically inconsistent joint activity states. As a result, data need to work ever harder to overcome this misguided prior probability. The effect is tangible; for a given amount of data, fewer truly activated gene sets are inferred to be active, compared to what is achievable with an alternative formulation. We propose a new methodology, the multi-functional analyzer (MFA), which aims to improve the statistical efficiency of MGSA.
It uses two computational advances that enable posterior inference in the high-dimensional constrained space of joint activity states. One is an efficient MCMC sampling scheme constructed by penalizing the log-posterior in the unconstrained space; the other is a discrete optimization scheme that translates the inference problem into an integer linear programming (ILP) problem. We note that inference about gene-set activity states may be interesting from the general perspective of high-dimensional statistics. Typically, dependence among data from different inference units (sets, in this case) is considered a nuisance and testing aims to identify nonnull units (active sets) by some methodology that is robust to dependencies, since these dependencies are often difficult to estimate from available data. In the present context dependencies are complicated but explicit, and inference benefits by using them to advantage. Finally, we also note that the probability model underlying our methodology---the \textit{role model}---has potential utility in other domains of application. It provides a simple way to relate data collected at one level (genes, in this case) to inference units that are unordered collections of the former (gene sets, in this case). \section{Role model} \subsection{Model} Following the description in Newton et al., we have a finite number of \textit{parts} $p$ and a finite number of \textit{wholes} $w$, where each whole is an unordered set of parts. The incidence matrix $I= (I_{p,w})$ is determined from external knowledge, where $I_{p,w}=1$ if and only if $p \in w$. The intended correspondence is that genes are parts and gene sets (i.e., functional categories) are wholes. The matrix $I$ encodes a full collection of gene sets. We will have measured data on the parts and aim to make inference on properties of the wholes.
The experimentally-derived gene list may be viewed as a vector of Bernoulli trials $X=(X_p)$, with $X_p=1$ if and only if part (gene) $p$ is on the list. First proposed in \citet{b10}, the role model describes the joint distribution of $X$ in terms of latent binary $(0/1)$ activity variables $Z= (Z_w)$ and by part-level activities induced by them: $A_p = 1$ if $Z_w = 1$ for any $w$ with $p \in w$ or, equivalently, \begin{equation} \label{eqf1} A_p = \max_{w\dvtx p\in w} Z_w. \end{equation} This conveys the simple assumption that a part is activated if it is in any whole that is activated. For false-positive and true-positive parameters $\alpha, \gamma \in (0,1)$, with $\alpha < \gamma$, the model for $X$ entails mutually independent components (conditionally on latent activities), with \begin{equation} \label{eqmodel} X_p \sim \operatorname{Bernoulli} \cases{ \alpha, &\quad\mbox{if $A_p=0$}, \cr \gamma, &\quad \mbox{if $A_p=1$}.} \end{equation} Simply, activated parts (i.e., those with $A_p=1$) are delivered to the list at a higher rate than inactivated parts. A key feature of the model is that a part (gene) is activated by virtue of any one of its functional \textit{roles}; this implies that a gene may be activated and yet be part of a functional category that is inactivated, which is in contrast to most other gene-set inference methods [e.g., \citet{gb}; \citet{bnr}; \citet{sa09}] and which provides for a fully multivariate analysis of the gene list. In \citet{b10} it is further assumed, for the sake of Bayesian analysis, that uncertainty in whole-level activities is represented with a single rate parameter $\pi \in (0,1)$: \begin{equation} \label{eqprior1} Z_w \sim_{\mathrm{i.i.d.}} \operatorname{Bernoulli} (\pi). \end{equation} Taken together, the model (\ref{eqmodel}) and the prior (\ref{eqprior1}) determine a joint posterior for~$Z$ given $X$. 
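The generative mechanism is simple to simulate. The following sketch draws a gene list from (\ref{eqf1})--(\ref{eqprior1}) for a toy system; the incidence matrix and parameter values are illustrative only, not taken from any real annotation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy system: I[p, w] = 1 iff part (gene) p belongs to whole (gene set) w.
I = np.array([[1, 0],
              [1, 0],
              [0, 1],
              [1, 1]])
alpha, gamma, pi = 0.1, 0.9, 0.25   # false-positive, true-positive, prior rates

Z = rng.binomial(1, pi, size=I.shape[1])   # whole-level activities (i.i.d. prior)
A = (I @ Z > 0).astype(int)                # A_p = max over wholes w containing p of Z_w
X = rng.binomial(1, np.where(A == 1, gamma, alpha))   # observed gene list
```

Activated parts appear on the list at rate $\gamma$, inactivated parts at rate $\alpha$, exactly as in the model above.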
The \texttt{R} package \texttt{MGSA} (model-based gene-set analysis) reports MCMC-computed marginal posterior probabilities $P(Z_w=1|X)$, also integrating uncertainty in the system parameters $\alpha$, $\gamma$ and $\pi$, and thus provides a useful ranking of the wholes [\citet{b11}; \citet{rdct}]. In addition to the system incidence matrix $I$, a useful data structure for computations turns out to be the bipartite graph $\mathcal{G}$, having whole nodes and part nodes, and an edge between $w$ and $p$ if and only if $I_{p,w} = 1$ (i.e., iff $p \in w$).

\subsection{Activation hypothesis}

As defined above, the role model allows that a~whole can be inactive while all of its parts are active. This can happen because of overlap among the wholes. Specifically, if $w$ is contained in the union of other wholes $\{w'\}$, then setting all $Z_{w'}=1$ will force $A_p=1$ for all $p \in w$, regardless of the value of $Z_w$. This rather odd situation calls into question the meaning of \textit{active} and what we might realistically expect can be inferred from data. Indeed, the issue is related to identifiability of the activity vector $Z$, since it shows that distinct $Z$ vectors may produce the same part-level activity vector $A= (A_p )$. (In the case above, switching $Z_w$ from 0 to 1 does not change $A$.) The mapping $Z \longrightarrow A$ given by~(\ref{eqf1}) is not necessarily invertible, depending on the system as defined in $I$. Lack of identifiability would not necessarily create difficulty in a Bayesian analysis; in the present case, however, we are specifically interested in inferring the activity states of the gene sets and prioritizing these sets, and so it stands to reason that we ought to confer a real, if still only model-based, meaning on the activities. When \textit{activity} is defined more fully, there is a simple solution to the problem. The \textit{activation hypothesis} asserts that a set of parts is active if and only if all parts in the set are active.
The following was shown previously (Newton et al.): \begin{thm} \label{thmah} Under the activation hypothesis ($\mbox{AH}$), the mapping $Z \longrightarrow A$ defined by \[ A_p = \max_{w\dvtx p \in w} Z_w \] is invertible, with inverse $A \longrightarrow Z$ given by \[ Z_w = \min_{p\dvtx p \in w} A_p. \] \end{thm} The inverse mapping is simply that a whole is inactive if and only if any of its parts is inactive. So the odd case at the beginning of the section cannot occur under AH; if all parts are active, then $Z_w=1$ must hold. Further, with parameters~$\alpha$ and~$\gamma$ fixed, the $Z$ vector is identifiable under AH, since different $Z$ vectors necessarily give different probability distributions to data $X$. The first contribution of the present work is to show that the activation hypothesis is equivalent to a set of linear inequality constraints on the activity variables. The finding is useful for posterior inference computations. We prove in Section~\ref{sec7} the following: \begin{thm}\label{thm22} $\mbox{AH}$ holds if and only if all of the following hold: \begin{longlist}[3.] \item[1.] $Z_w \leq A_p$ for all $p, w$ with $p \in w$; \item[2.] $A_p \leq \sum_{w\dvtx p \in w} Z_w$ for all $p$; \item[3.] $\sum_{p\dvtx p \in w} ( Z_w - 2A_p + 2 ) \geq 1$ for all $w$. \end{longlist} \end{thm} Evidently, the i.i.d. Bernoulli prior (\ref{eqprior1}) does not respect AH, in the sense that vectors $Z$ which violate AH have positive prior probability. In simple systems such violation may be innocuous. We provide evidence that in complex systems such as GO this violation creates a substantial loss of statistical efficiency. We note first that alternative prior specifications are available that respect AH.
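Both the invertibility criterion of Theorem~\ref{thmah} and the inequality characterization of Theorem~\ref{thm22} can be verified by brute force on small systems. A hypothetical sketch (the incidence matrix is arbitrary): with $A$ induced from $Z$ by (\ref{eqf1}), AH holds exactly when the inverse mapping recovers $Z$, and this agrees with the three inequalities.

```python
import numpy as np
from itertools import product

I = np.array([[1, 0, 1],
              [1, 1, 0],
              [0, 1, 1]])        # toy parts-by-wholes incidence matrix
P, W = I.shape

def forward(Z):                   # A_p = max_{w: p in w} Z_w
    return (I @ Z > 0).astype(int)

def inverse(A):                   # Z_w = min_{p: p in w} A_p
    return np.array([A[np.flatnonzero(I[:, w])].min() for w in range(W)])

def satisfies_ah_inequalities(Z, A):
    ok1 = all(Z[w] <= A[p] for p in range(P) for w in range(W) if I[p, w])
    ok2 = all(A[p] <= Z[np.flatnonzero(I[p])].sum() for p in range(P))
    ok3 = all((Z[w] - 2 * A[np.flatnonzero(I[:, w])] + 2).sum() >= 1
              for w in range(W))
    return ok1 and ok2 and ok3

# AH holds for a state Z iff inverse(forward(Z)) recovers Z; by Theorem 2
# this is equivalent to the three linear inequalities.
for Z in map(np.array, product([0, 1], repeat=W)):
    A = forward(Z)
    ah = (inverse(A) == Z).all()
    assert ah == satisfies_ah_inequalities(Z, A)
```

In this system the state $Z=(1,1,0)$ violates AH, since the first two wholes jointly activate every part of the third.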
A simple one is to condition prior (\ref{eqprior1}) on the AH event, namely, \begin{equation} \label{eqprior2} P ( Z =z ) = \biggl( \frac{1}{c} \biggr) \pi^{\sum_{w} z_w} (1-\pi)^{ \sum_w (1-z_w)} \qquad \mbox{if $z$ satisfies AH}, \end{equation} otherwise $P(Z=z)=0$, where $c$ is the probability, in prior (\ref{eqprior1}), that $Z$ satisfies AH, and $z$ is a vector of binaries representing a possible realization of $Z$. In other words, with subscript ``1'' for the i.i.d. prior (\ref{eqprior1}) and ``2'' for prior (\ref{eqprior2}), we have $ P_2 ( Z=z ) = P_1 ( Z=z | \mbox{AH})$. Upon conditioning, the $(Z_w )$ are not necessarily either mutually independent or identically distributed.

\section{Statistical properties}

The role of the prior distribution in Bayesian analysis has long been the subject of considerable debate. On the one hand, it helps by regularizing inference, especially in high dimensions. On the other hand, data need to work against it to produce inferences that trade off empirical characteristics with prior assumptions. A fact of relevance to the present problem is that gene-list data must work against either prior [(\ref{eqprior1}) or (\ref{eqprior2})] to deliver an inferred list of activated gene sets. For two Bayesian analysts, one using prior (\ref{eqprior1}) and the other using prior (\ref{eqprior2}), the true state is ascribed different prior mass. The ratio of these masses, $\rho$, represents the extra work the data must do to overcome prior (\ref{eqprior1}) compared to prior (\ref{eqprior2}): \begin{equation} \label{eqratio} \rho = \frac{ P_2 ( Z= z_{\mathrm{true}} ) }{ P_1 ( Z= z_{\mathrm{true}})} = \frac{ P_1 ( Z=z_{\mathrm{true}} | {\mbox{AH}})}{ P_1 ( Z= z_{\mathrm{true}})} = \frac{1}{P_1 ( {\mbox{AH}})} \geq 1. \end{equation} Here we have used the particular structure of prior (\ref{eqprior2}) and also the assumption that $z_{\mathrm{true}}$ satisfies AH.
If $z_{\mathrm{true}}$ did not satisfy AH, the target of inference would be beyond the realm of any gene-level data set to estimate, owing to lack of identifiability. Indeed, it is difficult to see what meaning could be ascribed to $z_{\mathrm{true}}$ in that case. The observation to be gained from (\ref{eqratio}) is that the probability of AH under the i.i.d. prior affects the efficiency of inference. In systems where that probability is very small, there is reason to believe that improved inferences are possible. As to the precise effect of ignoring AH, that depends on the particular system $I$, the true activation state, and the system parameters $\alpha$ and $\gamma$. What our initial investigation finds is that a truly activated whole $w$ may tend to have $P_1( Z_w=1 | X)$ smaller than $P_2( Z_w=1 | X)$, and if so the $P_1$ inference is too conservative. Whether or not AH holds for a given state $Z$ may be assessed by calculating the part-level activities $A$ and then checking Theorem~\ref{thm22}. Alternatively, we consider whole-level \textit{violation} variables $(V_w)$. These Bernoulli trials are defined as follows: \begin{equation} \label{eqviolation} V_w = \cases{1, &\quad \mbox{if $Z_w=0$ and if for all $p \in w$ there exists $w'$} \cr & \quad \mbox{with $p\in w'$ and $Z_{w'}=1$,} \cr 0, &\quad \mbox{otherwise}.} \end{equation} The probability, under $P_1$, that $Z$ satisfies AH is equivalent to the probability of no violations, that is, \begin{equation} \label{eqviolate} P_1(\mbox{AH}) = P_1 ( V_w = 0, \forall w ), \end{equation} and so the AH probability might be approachable by considering the violation variables. Except in stylized examples, we do not expect these variables to be mutually independent; indeed, they may have a complicated dependence induced by overlaps of the wholes and, hence, direct calculation of $P_1(\mbox{AH})$ is intractable.
However, the expectations of $V_w$ are readily computable for a given system, either by Monte Carlo or by a more sophisticated algorithm [\citet{wa14}]. Considering the Chen--Stein result for Poisson approximations, we conjecture that $-\log P_1(\mbox{AH})$ is approximately equal to $E_1 ( \sum_w V_w )$, though we have not been able to establish an error bound for this approximation [cf. \citet{ar90}].
\begin{figure}
\caption{Expected number of sets that violate the activation hypothesis (AH) for four recent versions of Gene Ontology (GO), considering sets holding between 5 and 20 genes, taken on the i.i.d. Bernoulli prior. Calculations are done at $\pi = 1/100$. Respectively, these systems contain 3591, 4096, 4449 and 4772 gene sets, and correspond to versions of org.Hs.eg.db.}
\label{figgo}
\end{figure}
Figure~\ref{figgo} charts the expected value $E_1 ( \sum_w V_w )$ over four recent versions of Gene Ontology, for $\pi = 1/100$. For concreteness it focuses on GO terms holding between 5 and 20 genes (for which an exact calculation of the expectation is feasible), though the key finding is not sensitive to that restriction, as evidenced by Monte Carlo computations (not shown). As one might expect from the increasing density and complexity of GO, the expected number of AH violations increases. This may very well reflect the fact that $P_1(\mbox{AH})$ is decreasing over time, which indicates to us that ignoring AH is becoming an ever greater problem for gene-set analysis. In terms of modeling assumptions, there is no additional cost to accounting for AH in the Bayesian analysis; the cost is purely computational, since inference must now deal with the constraints imposed by AH on the space of latent activities.
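As an illustration of the Monte Carlo route, the sketch below estimates $P_1(\mbox{AH})$, the prior-mass ratio $\rho$ of (\ref{eqratio}), and $E_1 ( \sum_w V_w )$ on a toy system; the incidence matrix and the value of $\pi$ are hypothetical choices for the example, not those used in the GO calculations.

```python
import numpy as np

rng = np.random.default_rng(0)
I = np.array([[1, 0, 1],
              [1, 1, 0],
              [0, 1, 1]])        # toy parts-by-wholes incidence matrix
pi, n_draws = 0.5, 20000
P, W = I.shape

n_ah, n_viol = 0, 0.0
for _ in range(n_draws):
    Z = rng.binomial(1, pi, size=W)        # draw from the i.i.d. Bernoulli prior
    A = (I @ Z > 0).astype(int)
    # V_w = 1 iff Z_w = 0 but every part of w is activated by other wholes
    V = np.array([int(Z[w] == 0 and A[np.flatnonzero(I[:, w])].all())
                  for w in range(W)])
    n_viol += V.sum()
    n_ah += int(V.sum() == 0)

p_ah = n_ah / n_draws            # estimate of P_1(AH)
rho = 1.0 / p_ah                 # prior-mass ratio of (5)
ev = n_viol / n_draws            # estimate of E_1(sum_w V_w)
```

For this three-whole system the exact values are $P_1(\mbox{AH}) = 5/8$ and $E_1(\sum_w V_w) = 3/8$, which the simulation recovers to Monte Carlo accuracy.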
The next sections describe two computational advances that address the problem.

\section{Decoding functional signals via constrained optimization}

\subsection{MAP via ILP}

Decoding a discrete signal is frequently accomplished by algorithms that compute the parameter state having the highest posterior mass: the maximum a posteriori (MAP) estimate. Although limited as a posterior summary, the MAP estimate may contain useful multivariate information [e.g., \citet{cl08}]. Our representation of model-based gene-set analysis reveals that under model (\ref{eqmodel}) and prior (\ref{eqprior2}), the log posterior is linear in the joint collection of whole and part activity variables. This log posterior is \begin{eqnarray} l(Z,A) &=& \sum_w \bigl\{ Z_w \log( \pi ) + (1-Z_w) \log( 1- \pi) \bigr\} \nonumber\\ \label{eqobjective} &&{}+\sum_p \bigl\{ A_p \bigl[ x_p \log(\gamma) + (1-x_p) \log(1-\gamma) \bigr] \\ \nonumber &&\qquad\hspace*{9pt}{}+ (1-A_p) \bigl[ x_p \log( \alpha) + (1-x_p) \log (1-\alpha) \bigr] \bigr\}, \end{eqnarray} where $x_p$ is the realized value of the gene-list indicator $X_p$, and $\alpha$, $\gamma$, and $\pi$ are system parameters, which are considered fixed in the present calculation. Considering Theorem~\ref{thm22}, finding the MAP estimate $(\hat{Z}, \hat{A})$ amounts to maximizing a linear function in discrete variables subject to linear inequality constraints. As such, it fits naturally into the domain of \textit{integer linear programming} (ILP), an active subfield of optimization.
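To make the formulation concrete, here is a hypothetical sketch of the MAP computation on a toy system, maximizing (\ref{eqobjective}) subject to the inequalities of Theorem~\ref{thm22}. It uses SciPy's mixed-integer solver in place of the GNU Linear Programming Kit; the incidence matrix, data and parameter values are invented for the example.

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# Toy system: I[p, w] = 1 iff part p is in whole w.
I = np.array([[1, 0],
              [1, 0],
              [0, 1],
              [1, 1]])
x = np.array([1, 1, 0, 1])            # observed gene list
alpha, gamma, pi = 0.1, 0.9, 0.1      # system parameters (fixed)
P, W = I.shape

c1 = np.log(pi) - np.log(1 - pi)
c2 = np.log(1 - gamma) - np.log(1 - alpha)
c3 = np.log(gamma) - np.log(alpha)

# Decision vector v = (Z_1..Z_W, A_1..A_P); milp minimizes, so negate
# the linear part of the log posterior.
obj = -np.concatenate([np.full(W, c1), np.where(x == 1, c3, c2)])

rows, lb, ub = [], [], []
for w in range(W):
    parts = np.flatnonzero(I[:, w])
    for p in parts:                              # (1) Z_w <= A_p
        r = np.zeros(W + P); r[w] = 1.0; r[W + p] = -1.0
        rows.append(r); lb.append(-np.inf); ub.append(0.0)
    # (3) sum_{p in w} (Z_w - 2 A_p + 2) >= 1
    r = np.zeros(W + P); r[w] = len(parts); r[W + parts] = -2.0
    rows.append(r); lb.append(1.0 - 2.0 * len(parts)); ub.append(np.inf)
for p in range(P):                               # (2) A_p <= sum_{w: p in w} Z_w
    r = np.zeros(W + P); r[W + p] = 1.0; r[:W] -= I[p]
    rows.append(r); lb.append(-np.inf); ub.append(0.0)

res = milp(obj, constraints=LinearConstraint(np.array(rows), lb, ub),
           integrality=np.ones(W + P), bounds=Bounds(0, 1))
Z_hat = res.x[:W].round().astype(int)
A_hat = res.x[W:].round().astype(int)
```

Constraints (1) and (2) together force $A$ to equal the induced activities of $Z$, and constraint (3) then enforces AH, so the solver searches exactly over AH-consistent states.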
Our computations take advantage of ILP software available in the GNU Linear Programming Kit through its interface with \texttt{R}.\footnote{See \surl{www.gnu.org/software/glpk} and \surl{cran.R-project.org/package=Rglpk}.} We employed a series of basic code checks to verify that our implementation worked correctly: (1) in simple examples where the MAP estimate is computable by other means; and (2) in limiting situations where $X_p$ was binomial with a large sample size, so that the MAP estimate must converge to the true activity state. The reconstruction $\hat{Z}$ obtained through this optimization provides an estimate of the activated and inactivated gene sets. We refer to the overall method as the \textit{multi-functional analyzer} (MFA), and specifically MFA-ILP to refer to the posterior mode computed by ILP. We note that by invertibility of the mapping $Z \longrightarrow A$ under AH, the log-posterior $l$ could be expressed either as a function of $Z$ only or as a function of $A$ only; however, in neither reduced case would $l$ be linear in the input variables. Moreover, in neither reduced case could the constraints be expressed as linear inequality constraints. By expanding the domain we formulate the constrained optimization as an integer linear program.

\subsection{Numerical experiments}

In each experiment reported below we represented a system with a parts-by-wholes incidence matrix $I$; we fixed the false-positive rate $\alpha = 1/10$ and the true-positive rate $\gamma = 9/10$. We simulated 100 gene lists $X$ from model (\ref{eqmodel}), each time using a simulated activity vector $Z$. For methods comparison, we applied the following: (1) the commonly used Fisher exact test for enrichment of each gene set in the data $X$ [\citet{k12}], (2) MGSA (version 1.7.0), and (3) MFA-ILP. We allowed both model-based methods to know the system parameter settings.
To evaluate performance, we calculated specificity, sensitivity, and precision of the estimated activity vector~$\hat{Z}$ for the true activity vector~$Z$ by averaging over the 100 replicates. \subsubsection*{Experiment 1: Low overlap} Initially, $I$ had size $300$ genes (parts) by $100$ gene sets (wholes). We randomly picked 5 and 10 parts for each whole in columns 1--50 and 51--100, respectively. Then we removed parts not contained by any whole, leaving a $296 \times 100$ incidence matrix. We sampled $Z$ from prior (\ref{eqprior1}) and then projected it onto AH by constructing $A_p=\max_{w\dvtx p \in w} Z_w$ and then updating $Z_w= \min_{p\dvtx p \in w} A_p$. All methods exhibit similar operating characteristics in this case (Table~\ref{tablow}). \begin{table} \caption{Simulation comparison of three gene-set methods, a case of low overlap among gene sets: Tabulated are mean values from 100 simulated data sets. On average 7.3 truly activated sets occur}\label{tablow} \begin{tabular*}{\tablewidth}{@{\extracolsep{\fill}}lcccc@{}} \hline \textbf{Method}& \textbf{Predicted \# active} &\textbf{Sensitivity}&\textbf{Specificity}&\textbf{Precision}\\ \hline MFA-ILP&7.4 &0.963 &0.997 &0.958\\ Fisher (cut-off${}={}$0.05)&5.9&0.790 &0.998 &0.966\\ Fisher (cut-off${}={}$0.1)&6.8 &0.873 &0.996 &0.948\\ MGSA (cut-off${}={}$0.5)&7.2 &0.954 &0.998 &0.968\\ \hline \end{tabular*} \end{table} \subsubsection*{Experiment 2: Higher overlap and parent-child structure} Initially, $I$ had size $300$ parts by $105$ wholes. From column 1 to column 20, each column has 20 parts, of which 15 parts are in common with each other and 5 parts are randomly selected from the other parts; column 21 has 10 parts which are randomly picked from the 15 common parts shared by columns 1--20. Thus, columns 1--20 have a lot of overlaps and column 21 is a child of columns 1--20. Similarly, we built columns 22--42, 43--63, 64--84 and 85--105. The common 15 parts in each column combination are all different. 
Then parts not contained by any whole were removed, which resulted in a $265\times 105$ incidence matrix. We activated wholes by sampling one whole from each of columns 1--20, 22--41, 43--62, 64--83 and 85--104, and projected onto AH as above.
\begin{table}[b]
\caption{Simulation comparison of three gene-set methods, a case of higher overlap among gene sets: Tabulated are mean values from 100 simulated data sets. On average there are 10.1 truly activated sets in this case}\label{tabhigh}
\begin{tabular*}{\tablewidth}{@{\extracolsep{\fill}}lcccc@{}}\hline \textbf{Method}& \textbf{Predicted \# active} &\textbf{Sensitivity}&\textbf{Specificity}&\textbf{Precision}\\ \hline MFA-ILP&\phantom{0}10.2 &0.975 &0.997 &0.993\\ Fisher (cut-off${}={}$0.05)&104.2&0.996 &0.008 &0.096\\ Fisher (cut-off${}={}$0.1)&104.8 &0.996 &0.002 &0.096\\ MGSA (cut-off${}={}$0.5)&\phantom{00}5.5 &0.490 &0.995 &0.920 \\ \hline \end{tabular*}
\end{table}
Table~\ref{tabhigh} exhibits properties of the three methods in the relatively complicated system just defined. The univariate Fisher test tends to select wholes with a high correlation (overlap) with the truly activated wholes, which results in high sensitivity but low specificity (or precision). The extra activation calls correspond to spurious associations that the multivariate, model-based approaches are able to recognize. The MGSA method often fails to discover truly activated wholes, which corresponds to a reduced sensitivity. The small \textit{child} wholes tend to be missed by MGSA in this case. The proposed MFA-ILP method is right on target.

\subsection{ILP for large systems}

Large systems strain unaided ILP computation, but the special structure of the gene-set problem allows for several refinements.
\subsubsection*{Shrinking $I$}

Up to a constant, the objective function in~(\ref{eqobjective}) may be expressed as \[ l(Z,A) = c_1 \sum_{w} Z_w + c_2 \sum_{p \in P^{-}} A_p + c_3 \sum_{p \in P^{+}} A_p, \] where \begin{eqnarray*} c_1 &=& \log \pi - \log(1-\pi), \\ c_2 &=& \log (1-\gamma) - \log( 1-\alpha), \\ c_3 &=& \log \gamma - \log \alpha \end{eqnarray*} and where $P^-$ and $P^+$ denote the observed inactivated and activated parts, respectively. That is, $p \in P^-$ if $x_p=0$ and $p \in P^+$ if $x_p=1$. By assumption~(\ref{eqmodel}), $\alpha < \gamma$ and so $c_2 <0$ and $c_3 > 0$. If we further insist that $\pi < 1/2$, then $c_1 < 0$ also. In some cases we can determine which $\hat{Z}_w$ and $\hat{A}_p$ must equal 0 in the optimal solution, and if so we can remove these variables from the system prior to implementing ILP. For each whole $w$ denote $P^{+}_{w} = w \cap P^+$ and similarly $P^{-}_{w} = w \cap P^-$, and define $W^* = \{ w\dvtx c_1 + c_3 \sum_{p \in P^{+}_{w}} 1 < 0 \}$. Clearly, those wholes containing no reported parts are in $W^*$, but there may be others. We prove in Section~\ref{sec7} that if $W^*$ is not empty, then we may be able to shrink the system prior to solving the constrained optimization problem via ILP. \begin{thm}\label{thm41} Suppose $\pi < 1/2$ and let $w_0$ denote an element of $W^*$. If there exists $p_0 \in w_0$ such that $\{ w\dvtx p_0 \in w \} \subset W^*$, then $\hat Z_{w_0}=\hat A_{p_0}=0$. \end{thm} Letting $W_0$ and $P_0$ denote wholes and parts for which the optimal solution is known (in advance of computation), we may remove these from the incidence matrix $I$, effectively shrinking it. The amount of shrinkage may be dramatic, but it depends on the observed data $x$, the system $I$, and system parameters $\alpha$, $\gamma$ and $\pi$. When $\alpha$ is small and $\gamma$ is large, the effects may be minimal.
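To illustrate, a hypothetical sketch of this screening rule on a toy system (the incidence matrix, data and parameter values are invented for the example):

```python
import numpy as np

I = np.array([[1, 0, 0],
              [1, 1, 0],
              [0, 1, 1],
              [0, 0, 1]])      # parts x wholes
x = np.array([0, 0, 0, 1])     # observed list: only part 3 reported
alpha, gamma, pi = 0.1, 0.9, 0.1

c1 = np.log(pi) - np.log(1 - pi)     # < 0 since pi < 1/2
c3 = np.log(gamma) - np.log(alpha)   # > 0 since alpha < gamma

# W* = wholes whose reported parts cannot pay for the cost of activation.
n_pos = (I * x[:, None]).sum(axis=0)             # |P+_w| for each whole
W_star = set(np.flatnonzero(c1 + c3 * n_pos < 0))

# A whole w0 in W* is removable if some part p0 in w0 has all of its
# containing wholes inside W*; then Z_{w0} = A_{p0} = 0 at the optimum.
removable = set()
for w0 in W_star:
    for p0 in np.flatnonzero(I[:, w0]):
        if set(np.flatnonzero(I[p0])) <= W_star:
            removable.add(w0)
            break
```

Here the first two wholes contain no reported parts, so both land in $W^*$, and each has a part whose containing wholes all lie in $W^*$; both may be dropped before running ILP.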
\subsubsection*{A sequential approach}

In the unlikely event that the system matrix $I$ is separable into blocks of wholes that do not overlap between blocks, ILP may be applied separately to these distinct blocks in order to identify the MAP activities. We do not expect this separability in GO or related systems, but we can take advantage of size variation of the wholes and work sequentially from small ones to larger ones. As an example, let $S_{10}$ denote the sets containing no more than $\mathit{n.up}=10$ genes. In order to obtain the optimal solution for the full problem, we start from the sub-matrix $I.10$ obtained by extracting these sets from $I$. Suppose $Z^*_{10}$ is the MAP solution based on the data for $I.10$, and write $S^*_{10}$ for the active sets in $S_{10}$ as inferred by $Z^*_{10}$. We aim to find the optimal solution $Z^*_{11}$ for $I.11$ using what has already been computed in the smaller system. Denote the newly added sets in $I.11$ by $S_{10}^{11}$ (i.e., the sets containing exactly 11 genes). We need only consider the sets that could possibly be active in the optimal solution on~$I.11$. First, $S^*_{10}$ and $S_{10}^{11}$ should be included, since we have no other prior knowledge about $Z^*_{11}$. Second, by the 3rd AH inequality (Theorem~\ref{thm22}), any set in $S_{10}\setminus S^*_{10}$ which is a subset of some set in $S_{10}^{11}$, denoted by $D$, should also be included. These sets alone are not enough, however: for each set $w_1$ in $S_{10}^{11}$, we need to check whether there exists some set $w_2$ in $S_{10}\setminus (S^*_{10}\cup D)$ satisfying \begin{equation} \label{eqsequential} c_1 + c_2 \sum _{p\in P^{-}\cap P_{w_1}^{w_2}}A_p+ c_3\sum _{p\in P^+\cap P_{w_1}^{w_2}}A_p>0, \end{equation} where $P_{w_1}^{w_2}$ denotes the set of genes contained in $w_2$ but not in $w_1$.
We do this since each set in $S^{11}_{10}$ may be active in the optimal solution $Z^*_{11}$, and we need to check whether some sets in $S_{10}$ should be activated in order to maximize the objective function. We denote the sets in $S_{10}\setminus (S^*_{10}\cup D)$ satisfying the condition~(\ref{eqsequential}) by~$E$. Finally, by the 3rd AH inequality (Theorem~\ref{thm22}), any set in $S_{10}\setminus (S^*_{10}\cup D\cup E)$ which is a subset of some set in $E$, denoted by $F$, should also be included. Thus, we need to run the ILP on the incidence matrix only for $S^*_{10}\cup S^{11}_{10}\cup D \cup E\cup F$, instead of $I.11$. Hence, we obtain a sequential approach to solve the full ILP problem from a sequence of smaller problems. Examples show this is feasible in GO for subsystems holding sets of up to 50 genes, without excessive computational burden.

\section{Posterior sampling}

\subsection{Penalized MCMC}

To obtain a sample from the posterior distribution defined by prior~(\ref{eqprior2}) and model~(\ref{eqmodel}), in which the whole activity variables $Z = (Z_w)$ have positive probability only when $Z$ satisfies AH, we design a Markov chain to run within the unconstrained space according to a penalized posterior: \begin{equation} \label{eqpenalizedMCMC} \tilde{l}(Z) = l(Z,A) - \lambda \sum _w V_w, \end{equation} where $l(Z,A)$ is defined in (\ref{eqobjective}), $V_w$ is the violation indicator~(\ref{eqviolation}), and $\lambda \ge 0$ is a tuning parameter. The desired sample is obtained by discarding any sampled states that do not satisfy AH. Note that there are no violations ($\sum_w V_w = 0$) for $Z$ that satisfy AH, so that $\tilde{l}(Z) = l(Z,A)$ in this case, and the conditional distribution under $\tilde{l}(Z)$ restricted to AH is identical to the target posterior distribution.
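A direct, unoptimized sketch of such a sampler may clarify the scheme, targeting $\tilde{l}$ in (\ref{eqpenalizedMCMC}) and rescoring each proposed state from scratch rather than via incremental bookkeeping; the incidence matrix, data and tuning values are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
I = np.array([[1, 0], [1, 0], [0, 1], [1, 1]])   # toy parts-by-wholes matrix
x = np.array([1, 1, 0, 1])                       # observed gene list
alpha, gamma, pi, lam = 0.1, 0.9, 0.1, 5.0
P, W = I.shape

def activities(Z):
    return (I @ Z > 0).astype(int)               # A_p = max_{w: p in w} Z_w

def n_violations(Z, A):
    # V_w = 1 iff Z_w = 0 but every part of w is activated by other wholes
    return sum(int(Z[w] == 0 and A[np.flatnonzero(I[:, w])].all())
               for w in range(W))

def log_tilde(Z):                                # penalized log posterior
    A = activities(Z)
    l = (Z * np.log(pi) + (1 - Z) * np.log(1 - pi)).sum()
    l += (A * (x * np.log(gamma) + (1 - x) * np.log(1 - gamma))).sum()
    l += ((1 - A) * (x * np.log(alpha) + (1 - x) * np.log(1 - alpha))).sum()
    return l - lam * n_violations(Z, A)

Z = np.zeros(W, dtype=int)
cur = log_tilde(Z)
kept = []
for sweep in range(20000):
    w = rng.integers(W)
    Z_new = Z.copy(); Z_new[w] = 1 - Z_new[w]    # propose a color swap
    new = log_tilde(Z_new)
    if np.log(rng.random()) < new - cur:         # Metropolis accept/reject
        Z, cur = Z_new, new
    if n_violations(Z, activities(Z)) == 0:      # keep only AH states
        kept.append(Z.copy())

post = np.mean(kept, axis=0)                     # estimates P(Z_w = 1 | X)
```

Because the proposal is a symmetric single-flip, the Metropolis ratio reduces to the difference in penalized log posteriors; retained AH states give the constrained marginal activation probabilities.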
Increasing the tuning parameter $\lambda$ increases the probability of AH in the larger state space, which is essential for efficient sampling when this probability is small. We find that penalizing the log posterior within the unconstrained space leads to a conditional sampler that mixes well in the constrained space, where our previous attempts to constrain move types were less successful. It is helpful to visualize the Markov chain as operating by changing colors on the node-colored bipartite graph $\mathcal{G}$ having whole nodes and part nodes, with an edge between a whole node $w$ and part node $p$ if and only if $p \in w$, and where the coloring of the whole nodes $\{w\}$ and part nodes $\{p\}$ match the activities $Z$ and $A$, respectively. It is useful in assessing the state of the Markov chain to associate with each node a count $n(\cdot)$ of its active connected neighbors in $\mathcal G$. $A_p=1$ if and only if $n(p)>0$, and $V_w = 1$ if and only if $Z_w = 0$ and $n(w) = \deg(w)$, the number of part nodes $p \in w$. The Markov chain proceeds by selecting at random a whole node $w$ and proposing a color swap (a change in the status of the activity variable, $Z_w^* = 1 - Z_w$) for this node.\footnote{Efficiency gains may be possible by using a nonuniform sampler, for instance, depending on the set size or the number of reported parts in the whole, though we use a uniform proposal in the present work.} This proposed change can, but need not, affect the activities of parts contained in this whole. When $Z_w^* = 1$, the active neighbor counts $n(p)$ increase by 1 for each $p \in w$. If $A_p$ changes from 0 to 1, then each node $w^\prime$ that contains $p$ (including $w$) gains an additional active neighbor and $n(w^\prime)$ increases by 1. This increase could cause a violation if $p$ were the only remaining inactive neighbor of an inactive $w^\prime$, causing $V_{w^\prime}$ to change from 0 to 1.
If node $w$ were in violation before this proposal, activating it would eliminate the violation. Similarly, when $Z_w^* = 0$, the active neighbor counts $n(p)$ decrease by 1 for each $p \in w$. If this decrease is from 1 to 0, then the activity $A_p$ changes from 1 to 0 as well and all of the whole nodes $w^\prime$ connected to $p$ would lose an active neighbor, $n(w^\prime)$ decreasing by 1. If the whole node $w^\prime$ had been in violation, this change would eliminate the violation with $V_{w^\prime}$ changing from 1 to 0.
\begin{figure}
\caption{Operating characteristics of MFA-MCMC, MGSA, and Fisher's test based on simulating the role model in the \textit{D. melanogaster} genome.}
\label{figsim}
\end{figure}
Careful accounting of the changes to a few key counts allows for quick calculation of the change in $\tilde{l}(Z^*)$ and subsequent acceptance or rejection of the proposal by Metropolis--Hastings. The log posterior $\tilde{l}(Z^*)$ is a function of $\alpha$, $\gamma$, $\pi$, the penalty $\lambda$ and the counts of the numbers of active and inactive whole nodes [$\sum_w Z_w$ and $\sum_w (1-Z_w)$, resp.], the number of whole nodes in violation ($\sum_w V_w$), the numbers of active part nodes with realized values 1 and 0 [$\sum_p A_p x_p$ and $\sum_p A_p (1-x_p)$, resp.], and the numbers of inactive part nodes with realized values~1 and 0 [$\sum_p (1-A_p) x_p$ and $\sum_p (1-A_p) (1-x_p)$, resp.].

\subsection{Numerical experiment}

To assess the performance of MFA-MCMC, we simulated gene-list data according to the role model in the \textit{D. melanogaster} genome, following the scheme presented in \citet{b10}. Briefly, we used 3275 GO terms annotating between 5 and 50 fly genes, according to version 2.14.0 of Bioconductor package org.Dm.eg.db.
In each simulation run, a number of GO terms were activated and then a gene list was constructed from independent Bernoulli trials depending on the activation states and settings of false-positive and false-negative error rates. Figure~\ref{figsim} shows receiver operating characteristic (ROC) curves and precision--recall curves for two parameter settings, based on 100 simulated gene lists in each setting. Selection to the reported set list is based on thresholding the marginal posterior probability (MGSA, MFA-MCMC) or the \mbox{$p$-value} (Fisher). Evidently, MGSA and MFA-MCMC are accurate and show similar behavior when error rates are low, though MFA-MCMC shows improved precision and sensitivity in more difficult settings. In subsequent calculations we deploy both MAP estimation (MFA-ILP) and MCMC sampling on each data set in order to infer wholes that are probably activated. For MCMC, we use $10^7$ sweeps, a burn-in of $10^6$, and $\lambda=5$, which causes about one third of the states to satisfy $\mbox{AH}$. MFA-ILP gives a summary functional decoding of the gene list. Posterior probabilities from the MCMC computation provide a measure of confidence in the inferred sets and also highlight notable non-MAP sets. Fisher's test is the default univariate method for gene-list data; we include it for comparison, even though the hypotheses it tests are different from the activation states assessed by MFA and MGSA.

\section{Examples}

\subsection{Genes implicated in type 2 diabetes (T2D)}

From a large-scale genome-wide association study (GWAS) involving more than 34,000 cases and 114,000 control subjects, 77 human genes have been implicated as affecting T2D disease susceptibility [\citet{morris}, Supplementary Table~15, primary list].
To assess the functional content of this gene list, we applied MFA, MGSA, and simple enrichment via Fisher's exact test, all in the context of 6037 gene ontology terms, each annotating between 5 and 50 genes.\footnote{These 6037 terms annotate a total of 10,626 genes; among the 77 T2D-associated genes, 58 are in this \textit{moderately annotated} class.} Here and in other examples we took advantage of available information on likely false positive ($\alpha$) and false negative $(1-\gamma)$ error rates at the gene level. Using the fitted mixture model from Morris et al., we estimated $\alpha = 0.00019 $ and $\gamma = 0.02279 $ for this large-scale GWAS [details in \citet{wa14}].
\begin{figure}
\caption{GO terms inferred by three methods.}
\label{figt2d}
\end{figure}
\begin{table}
\caption{MFA results in type 2 diabetes (T2D) example: 11 GO terms are inferred to be active using the ILP algorithm to compute the MAP estimate (rows). Basic statistics on these terms are provided (\#~T2D-associated genes/set size). The next two columns give the MCMC-computed marginal posterior activation probabilities for these terms, both using MGSA and MFA, the constrained alternative. The final column holds the Benjamini--Hochberg adjusted Fisher-test $p$-value. All calculations start with 6037 GO terms (those annotating between 5 and 50 human genes) that together annotate 10{,}626 human genes. Of the 77 total T2D genes, 58 have at least one annotation to these 6037 GO terms.
The inferred gene sets cover 26 of these 58 T2D genes}\label{tabT2Dmfa} \begin{tabular*}{\tablewidth}{@{\extracolsep{\fill}}lcccc@{}}\hline \textbf{Gene set (GO term)}&\textbf{Statistics}&\textbf{P.MFA}&\textbf{P.MGSA}&\textbf{Fisher} \\ \hline RNA polymerase II core promoter$\ldots$ & $3/45$ &0.517&0.028&0.161 \\ Positive regulation of insulin secretion& $4/41$ &0.964&0.372&0.016 \\ Positive regulation of peptidyl-serine$\ldots$ & $2/35$ & 0.537&0.096&0.756 \\ Negative regulation of insulin secretion& $4/23$&0.996&0.201&0.003 \\ ER overload response&\phantom{0}$2/9$ & 0.398 & 0.159 & 0.102 \\ Positive regulation of insulin secretion$\ldots$ & \phantom{0}$0/9$&0.964&0.002&1.000 \\ Hepatocyte differentiation&\phantom{0}$2/9$ &0.316&0.016&0.102 \\ Endodermal cell fate specification& \phantom{0}$2/8$& 0.596 & 0.036 & 0.091 \\ Exocrine pancreas development&\phantom{0}$3/8$ & 0.946 & 0.600 & 0.003 \\ Negative regulation of protein$\ldots$ & \phantom{0}$2/5$ & 0.420 & 0.101 & 0.051 \\ Lamin filament& \phantom{0}$2/5$ & 0.790 & 0.400 & 0.051 \\ \hline \end{tabular*} \end{table} Figure~\ref{figt2d} summarizes the application of MFA, MGSA, and Fisher's test to this example. Table~\ref{tabT2Dmfa} reports those gene sets inferred by MFA-ILP to be activated in T2D. Tables S1--S4 provide further information for comparison of MFA with MGSA and Fisher's test [\citet{wa14}]. The example illustrates features we see repeatedly with these methods. Sets identified by Fisher's test tend to overlap substantially, reflecting the univariate nature of the approach; both MGSA and MFA-ILP alleviate this redundancy problem, but MGSA finds fewer sets than MFA-ILP. As expected, each of the 11 sets inferred by the ILP algorithm (i.e., the MAP estimate of the activated sets) has high marginal posterior probability of activation (P.MFA). 
Furthermore, MFA-ILP is able to explain more of the gene-level findings than the other methods, as indicated by the number of genes that are both in the reported gene list (T2D) and in at least one of the gene sets inferred to be activated (\textit{coverage}). It does this without increasing the mis-coverage, which is the number of non-T2D genes within the inferred active sets. Figure~\ref{figt2d} summarizes the sets found by these methods and reports these coverages. An interesting set in this case is the GO term \textit{glucose import} (GO:0046323), for which the proportion $5/41$ of observed T2D genes is very high (small Fisher \mbox{$p$-value}), but there is a very small posterior activation probability according to MFA. That is because the 5 genes are explained more easily as parts of three other terms in the MAP estimate that have yet other genes supporting their activation. A second curious case is GO:0035774, a small term (9 genes) having to do with regulation of insulin secretion. None of these genes was reported to be involved in T2D; however, the set is fully contained in a parent set that is inferred to be activated by MFA-ILP. As the calculation respects AH, all subsets of activated sets are activated. This may be a set-level false-positive call, as none of the contained genes was reported to be T2D associated. MFA favors the explanation that each of the 9 noncalls was a gene-level false negative, as the weight of evidence supports that interpretation. When we recall that the gene-level false-negative rate is almost 98\% (following the mixture calculation from Morris et al.), this assessment seems plausible. We note that for the sake of further simplification of output, it is reasonable to suppress any such subsets from primary tabulations [see trimming algorithm, \citet{wa14}].
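The single-term enrichment calculation underlying the Fisher comparisons reduces to a hypergeometric tail probability. A minimal sketch of the one-sided (enrichment) version, ours rather than the authors' code, using the glucose-import counts quoted above (a term of 41 genes, 5 of them among the 58 annotated T2D genes, in the universe of 10,626 moderately annotated genes):

```python
from math import comb

def fisher_one_sided(N, K, n, k):
    # One-sided (enrichment) Fisher p-value: hypergeometric tail P(X >= k)
    # when n list genes are drawn from N genes, K of which are in the term.
    return sum(comb(K, i) * comb(N - K, n - i)
               for i in range(k, min(K, n) + 1)) / comb(N, n)

# Glucose import (GO:0046323): raw p-value is small, as noted in the text.
p = fisher_one_sided(10626, 41, 58, 5)
```

The Benjamini--Hochberg adjustment reported in the tables is then applied across all 6037 term-level $p$-values.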
\subsection{RNA interference and influenza-virus replication} In a meta-analysis of four genome-wide studies of influenza virus, \citet{metaflu} reported that 984 human genes had been detected by RNA interference as possibly being associated with viral replication. As in the T2D example, we compared MFA with MGSA and Fisher's test on this gene list using 6037 GO terms annotating between 5 and 50 human genes. Among the 984 influenza-involved genes, 683 are annotated to at least one of these terms. To apply the model-based methods, we took advantage of external information on the false positive rate $\alpha$ and the true positive rate $\gamma$ [see \citet{wa14}]. Figure~\ref{figflu} illustrates the sets found by MFA-ILP, MGSA, and Fisher's test, and Tables S5--S8 [\citet{wa14}] contain further details of the comparative analysis. Again, we find that MFA-ILP dominates the other methods in terms of gene coverage, with 245 genes explained as compared to 226 (MGSA) and 90 (Fisher) and with mis-coverages 635 (MFA-ILP), 634 (MGSA), and 206 (Fisher). Furthermore, MFA-ILP detects more sets than MGSA (50 in the trimmed list compared to 30 by MGSA). \begin{figure} \caption{Sets (GO terms) identified by three methods}\label{figflu} \end{figure} To better understand differences between the methods, consider one gene set GO:0032434, \textit{regulation of proteasomal ubiquitin-dependent protein\break catabolic process}, which annotates 48 human genes, 12 of which were in the 984-list of influenza-involved genes (highlighted in Figure~\ref{figflu}). The Benjamini--Hochberg corrected Fisher $p$-value is 0.017, and so the term would be considered enriched in the standard analysis; it is also inferred to be active by MGSA (posterior probability 0.897). But it is not in the MAP estimate by MFA-ILP and its posterior activation probability is 0.000.
Now 10 of these 12 influenza-involved GO:0032434 genes are part of the \textit{child} term GO:0032436, \textit{positive regulation of proteasomal ubiquitin-dependent protein catabolic process}, a set of size 31 genes. Both terms are found by the univariate Fisher procedure, exemplifying the redundancy issue; the more specific term GO:0032436 is identified by MFA-ILP and has high posterior activation probability. The MFA calculation favors the explanation whereby the smaller set is activated; this fails to cover two of the 12 GO:0032434 influenza genes, but it also simplifies the explanation of nonlisted genes in that set. If the larger set (GO:0032434) is activated, then there are many more mis-covered genes, that is, genes in the set but undetected by RNAi [$15 = (48-12) - (31-10)$]. In this example, it may be that the more specific \textit{positive regulation} term better characterizes the experimental gene list. In Hao et al., gene set analysis was used to show that the four separate RNAi studies agreed more substantially than was evident by inspecting overlaps among the four gene lists. It was applied separately to the study-specific gene lists, and then the agreement among these four set lists was measured. For both Fisher's test and MFA-ILP, the among-study set-level agreement was significant according to a simple permutation calibration. Curiously, the agreement by MGSA was not significant by that measure, owing primarily to the fact that very few sets were inferred to be active in the separate studies. The common set-level signal, in conjunction with other forms of meta-analysis, provided evidence that genome-wide RNAi studies have higher false-negative rates than false-positive rates. \subsection{Other issues} The full effects of prior choice in model-based gene-set analysis require further investigation.
As to the practical importance of one choice over another, we do not examine the biological distinctions between the inferences produced by different methods. A close reading of the T2D and RNAi case studies above provides an initial indication of how and why reported set lists can differ, but assessing the biological significance of these differences is beyond the present scope. The procedures have distinct statistical properties and MFA more efficiently captures the functional content of the reported gene list in terms of model fit. We point out that the distinctions present themselves when using the relatively complex GO system. Control calculations show that MGSA and MFA give essentially the same results when applied to the less complex KEGG system [Figures S1 and~S2, \citet{wa14}]. In our comparisons we used MGSA to obtain an estimate of the hyper-parameter~$\pi$, which affects the overall rate of set activity. In order to control the comparison, we used the same numerical value of $\pi$ in the MFA calculations (in both MFA and MGSA we fixed the other parameters $\alpha$ and $\gamma$ at externally derived values). Further improvements of MFA may be possible using alternative estimates of $\pi$. Other model elaborations which could be useful in some applications include extending MFA beyond binary $X_p$ and allowing dependence in gene level measurement errors. Compute times for MFA depend on the size and content of the gene list, the incidence matrix $I$, and the model parameters: MFA-ILP used 2.5 CPU hours for T2D, and 23 CPU hours for RNAi;\footnote{R was running on a 4$\times$ AMD Opteron(TM) Processor 6174 (48 cores) with 128 GB RAM.} less time was required for MFA-MCMC (20 and 45 CPU minutes, resp.). \begin{table} \caption{Version effects: Tabulated are similarity scores comparing set structure and inferred active sets over time-adjacent versions of GO[5:50]. 
The collections of sets annotating between 5 and 50 human genes fluctuate over recent versions of GO. Respectively, in the four most recent fall versions of Bioconductor, the collections contain 4830, 5546, 6037 and 6488 gene sets. The first row shows the Jaccard index (size of intersection over size of union) comparing subsequent versions of these gene-set collections. In addition to the collections changing, the annotations recording which genes are in which sets also change over time. The second row measures similarity of the sets of annotations. Subsequent rows show similarity of reported lists of gene sets in the two main examples. In this comparison, the set is reported if it is in the MAP estimate by MFA-ILP and if its marginal posterior probability exceeds a threshold (50\% or 80\%). Inferred active sets depend to some extent on the GO version in view; setting stronger marginal posterior thresholds reduces the false-discovery rate and reduces the version effect} \label{tabversion} \begin{tabular*}{\tablewidth}{@{\extracolsep{\fill}}lccc@{}}\hline & \textbf{2010--2011} & \textbf{2011--2012} & \textbf{2012--2013} \\ \hline Sets & 0.72 & 0.82 & 0.80 \\ Annotations & 0.54 & 0.70 & 0.66 \\ T2D 50/MAP & 0.33 & 0.60 & 0.30 \\ T2D 80/MAP & 1.00 & 0.75 & 0.60 \\ RNAi 50/MAP & 0.72 & 0.70 & 0.88 \\ RNAi 80/MAP & 0.90 & 0.79 & 0.85 \\ \hline \end{tabular*}\vspace*{6pt} \end{table} We have argued that temporal changes in GO reflect an increase in the complexity of the functional record that justifies the more refined prior distribution used in MFA (Figure~\ref{figgo}). These changes also tell us that the results of a given analysis naturally depend on the GO version, since the sets involved and the annotations of genes to sets continue to evolve. To assess the version effect, we applied MFA to four recent GO versions (2010--2013) in both the T2D and RNAi examples, and using sets annotating between 5 and 50 human genes (GO[5:50]). 
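The Jaccard index used in the first rows of Table~\ref{tabversion} is simply intersection size over union size; a minimal sketch (the GO identifiers below are placeholders, not the actual version contents):

```python
def jaccard(a, b):
    # Jaccard index of two collections: |A & B| / |A | B|.
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

# Hypothetical gene-set collections from two adjacent GO versions.
v_old = {"GO:0046323", "GO:0032434", "GO:0032436", "GO:0035774"}
v_new = {"GO:0046323", "GO:0032436", "GO:0035774", "GO:0000001"}
print(jaccard(v_old, v_new))  # -> 0.6
```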
Table~\ref{tabversion} records how similar time-adjacent versions of GO are, as well as how similar the corresponding MFA results are. The changes in GO reflect new terms and new annotations, as well as sets moving in and out of GO[5:50] as more genes become annotated. Against this substantial evolution of GO we see that MFA results also change, but the changes diminish as the marginal posterior probability cutoff becomes more stringent. A feature of MGSA and MFA is that activation of the set implies activation of all genes in the set. This strict form of nonnull relationship is quite different from many univariate methods, which would claim a set is nonnull if any of its contained genes is nonnull\vadjust{\goodbreak} [e.g., the \textit{self-contained} tests of \citet{gb}]. It is precisely this relationship, however, that enables multivariate (i.e., multi-set) analysis, as the role model offers a straightforward approach to deal with the complex overlaps in the functional record. We note that the role model allows a weaker interpretation; for instance, we could continue to assert that a gene is activated if it is contained in any activated set, while allowing that only a fraction of activated genes are nonnull. The difference would be in the tabulation of errors (e.g., $X_p=0$ might not be a false negative when $A_p=1$) and in the interpretation of $\alpha$ and $\gamma$; the family of joint distributions would be the same. As GO and other repositories record functions of ever more specific gene combinations, it is reasonable to expect that a combination of genes relevant in the cells under study is within the repository. The strict interpretation of activation is parsimonious and is justified when the repository is sufficiently well endowed with relevant sets. We performed a small simulation study, using the T2D data structure, to assess MFA's ability to recover small activated sets.
In each of 100 simulated cases, we fixed the repository (GO[5:50]), we randomized the response vector $X=(X_p)$ by appending to the T2D genes a randomly selected small ($5\mbox{--}10$ genes) set from GO, and we inferred the activated sets using MFA. In 91 cases the appended set was identified as active by MFA-ILP, demonstrating in a limited way the ability of the methodology to recover signals represented in the repository.\vadjust{\goodbreak} \section{Proofs}\label{sec7} \subsection{Proof of Theorem~\texorpdfstring{\protect\ref{thm22}}{2.2}} Relative to all the sets and parts in the system $I$, we say AH holds if and only if $A_p= \max_{w\dvtx p \in w} Z_w$ for all $p$ and $Z_w = \min_{p\dvtx p \in w} A_p$ for all $w$. Recall that all $A_p$ and $Z_w$ are binary, in $\{0,1\}$. The first condition $A_p=\max_{w\dvtx p \in w} Z_w$ implies $A_p \geq Z_w$ for all $w$ with $p \in w$; to ensure that $A_p$ actually achieves this maximum, we must also rule out the possibility that $A_p=1$ when all $Z_w=0$, and this is accomplished by requiring $A_p \leq \sum_{w\dvtx p \in w} Z_w$. Thus, the condition $A_p = \max_{w\dvtx p \in w} Z_w$ is equivalent to the first two constraints in Theorem~\ref{thm22}. To address the second condition, that $Z_w = \min_{p\dvtx p \in w} A_p$ for all $w$, define a~new variable \[ T_w = 1 + \sum_{p\dvtx p \in w} ( A_p - 1 ), \] and notice that $T_w=1$ if and only if $A_p=1$ for all $p \in w$, otherwise $T_w \leq 0$. Observe that, in the presence of the first constraint (which forces $Z_w \leq A_p$ whenever $p \in w$), the second condition is equivalent to \begin{equation} \label{eqcondition} \sum_{p\dvtx p \in w} ( Z_w - A_p + 1 ) - T_w \geq 0 \end{equation} since if all $A_p=1$, for $p \in w$, then $T_w=1$, and $Z_w$ must equal 1 to satisfy (\ref{eqcondition}); otherwise, if at least one of the $A_p$'s equals 0, then $Z_w = 0$ is forced by $Z_w \leq A_p$, while $T_w \leq 0$ and the nonnegative summation in (\ref{eqcondition}) ensure that the inequality holds.
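The equivalence between AH and the three linear constraints of Theorem~\ref{thm22} can also be verified exhaustively on small systems. A minimal sketch (ours; the incidence structure below is hypothetical, chosen only so every part belongs to at least one set):

```python
from itertools import product

# Hypothetical part/set incidence structure (illustrative only).
parts = [0, 1, 2]
sets_ = {"w1": {0, 1}, "w2": {1, 2}, "w3": {2}}

def theorem_constraints(A, Z):
    # The three linear constraints of Theorem 2.2.
    for p in parts:
        containing = [w for w, ps in sets_.items() if p in ps]
        if any(A[p] < Z[w] for w in containing):      # A_p >= Z_w whenever p in w
            return False
        if A[p] > sum(Z[w] for w in containing):      # A_p <= sum of containing Z_w
            return False
    return all(sum(Z[w] - 2 * A[p] + 2 for p in ps) >= 1  # third inequality
               for w, ps in sets_.items())

def annotation_hypothesis(A, Z):
    # AH: A_p is the max over containing sets; Z_w is the min over contained parts.
    return (all(A[p] == max(Z[w] for w, ps in sets_.items() if p in ps)
                for p in parts)
            and all(Z[w] == min(A[p] for p in ps) for w, ps in sets_.items()))

# Check agreement over all binary assignments of A and Z.
agree = all(
    theorem_constraints(A, Z) == annotation_hypothesis(A, Z)
    for a in product((0, 1), repeat=len(parts))
    for z in product((0, 1), repeat=len(sets_))
    for A, Z in [(dict(enumerate(a)), dict(zip(sets_, z)))]
)
print(agree)  # -> True
```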
Next, replacing $T_w$ in (\ref{eqcondition}) with its definition, we obtain the third stated inequality \[ \sum_{p\dvtx p \in w} ( Z_w - 2A_p + 2 ) \geq 1. \] \subsection{Proof of Theorem~\texorpdfstring{\protect\ref{thm41}}{4.1}} Compared to $Z_{w_0} = 0$, the maximal possible value added to the objective function by letting $Z_{w_0}$ be 1 is $c_1 + c_3\sum_{p\in P^+_{w_0}} 1$ (the best case, in which parts in $P_{w_0}^{-}$ are already activated or $w_0$ has no parts in $P^-$); this is negative since $w_0\in W^*$. So $Z_{w_0}=0$ is preferred when maximizing the objective function. Next we need to prove that letting $Z_{w_0}=0$ and $A_{p_0}=0$ will not violate the inequalities in Theorem~\ref{thm22}. Denote by $W_0$ and $P_0$ the sets of $w_0$ and $p_0$ satisfying the conditions in the theorem, respectively. We claim that for each $p_0\in P_0$, $\{w\dvtx I_{p_0,w}=1\}\subset W_0$. If not, then there exists $w^*\in W^*\setminus W_0$ such that $I_{p_0, w^*}=1$; but then $w^*$ would be in $W_0$, a contradiction. Thus, $A_{p_0} = \max_{\{w\dvtx I_{p_0,w}=1\}} Z_w = 0$, so the first two AH inequalities are satisfied. It is readily verified that the third inequality is also satisfied. \section*{Acknowledgments} We thank Christina Kendziorski and anonymous reviewers for comments that helped to guide this research. Both the supplementary material document and a software implementation of MFA are available at \surl{http://www.stat.wisc.edu/\textasciitilde newton/}. An initial version of this manuscript was released as Technical Report \#1174, Department of Statistics, University of Wisconsin, Madison. The current version is dated August 2014.
\begin{supplement}[id=suppA] \sname{Supplement} \stitle{More on role modeling} \slink[doi]{10.1214/14-AOAS777SUPP} \sdatatype{.pdf} \sfilename{aoas777\_supp.pdf} \sdescription{We provide further details on violation probabilities, on estimating false-positive and true-positive error rates, on preparing data for the ILP algorithm, and on further data analysis findings in the T2D and RNAi examples.} \end{supplement} \printaddresses \end{document}
\begin{document} \title{Deformations of Q-Curvature I} \author{Yueh-Ju Lin} \address{(Yueh-Ju Lin) Department of Mathematics, University of Michigan, Ann Arbor, MI 48109, USA} \email{[email protected]} \author{Wei Yuan} \address{(Wei Yuan) School of Mathematics and Computational Science, Sun Yat-sen University, Guangzhou, Guangdong 510275, China} \email{[email protected]} \keywords{deformation of $Q$-curvature, linearized stability, local surjectivity, rigidity} \thanks{Research of the second author supported by NSF grants DMS-1005295 and DMS-1303543.} \begin{abstract} In this article, we investigate deformation problems of $Q$-curvature on closed Riemannian manifolds. One of the most crucial notions we use is the \emph{$Q$-singular space}, which was introduced by Chang-Gursky-Yang during the 1990s. Inspired by the early work of Fischer-Marsden, we derive several results about the geometry related to $Q$-curvature. These include a classification of nonnegative Einstein $Q$-singular spaces, linearized stability of non-$Q$-singular spaces, and a local rigidity result for flat manifolds with nonnegative $Q$-curvature. As for global results, we show that any smooth function can be realized as a $Q$-curvature on generic $Q$-flat manifolds, while, in contrast, a locally conformally flat metric on the $n$-torus with nonnegative $Q$-curvature has to be flat. In particular, there is no metric with nonnegative $Q$-curvature on the $4$-torus unless it is flat. \end{abstract} \maketitle \section{Introduction} In the theory of surfaces, the \emph{Gauss-Bonnet Theorem} is one of the most profound and fundamental results: \begin{align}\label{eqn:Gauss-Bonnet} \int_M K_g dv_g = 2\pi \chi (M), \end{align} for any closed surface $(M^2, g)$ with Euler characteristic $\chi (M)$. \\ On the other hand, the \emph{Uniformization Theorem} assures that any metric on $M^2$ is locally conformally flat.
Thus an interesting question is whether we can find a scalar-type curvature quantity that generalizes the classical \emph{Gauss-Bonnet Theorem} in higher dimensions. \\ For any closed $4$-dimensional Riemannian manifold $(M^4, g)$, we can define the $Q$-curvature as follows \begin{align}\label{Q_4} Q_g = - \frac{1}{6} \Delta_g R_g - \frac{1}{2} |Ric_g|_{g}^2 + \frac{1}{6} R_g^2, \end{align} which satisfies the \emph{Gauss-Bonnet-Chern Formula} \begin{align}\label{Gauss_Bonnet_Chern} \int_{M^4} \left( Q_g + \frac{1}{4} |W_g|^2_g \right) dv_g = 8\pi^2 \chi(M). \end{align} Here $R_g$, $Ric_g$ and $W_g$ are the scalar curvature, Ricci curvature and Weyl tensor of $(M^4, g)$, respectively.\\ In particular, if $W_g = 0$, \emph{i.e.} $(M^4, g)$ is locally conformally flat, we have \begin{equation}\label{total_Q} \int_{M^4} Q_g dv_g = 8\pi^2 \chi(M), \end{equation} which can be viewed as a generalization of (\ref{eqn:Gauss-Bonnet}).\\ Inspired by Paneitz's work (\cite{Paneitz}), Branson (\cite{Branson}) extended (\ref{Q_4}) and defined the $Q$-curvature for arbitrary dimension $n \geq 3$ to be \begin{align} Q_{g} = A_n \Delta_{g} R_{g} + B_n |Ric_{g}|_{g}^2 + C_nR_{g}^2, \end{align} where $A_n = - \frac{1}{2(n-1)}$, $B_n = - \frac{2}{(n-2)^2}$ and $C_n = \frac{n^2(n-4) + 16 (n-1)}{8(n-1)^2(n-2)^2}$.\\ In the study of conformal geometry, there is a $4^{th}$-order differential operator closely related to the $Q$-curvature, called the Paneitz operator, which can be viewed as an analogue of the conformal Laplacian: \begin{align} P_g = \Delta_g^2 - div_g \left[(a_n R_g g + b_n Ric_g) d\right] + \frac{n-4}{2}Q_g, \end{align} where $a_n = \frac{(n-2)^2 + 4}{2(n-1)(n-2)}$ and $b_n = - \frac{4}{n-2}$.\\ For readers who are interested, we leave the discussion of the conformal covariance of the $Q$-curvature and the Paneitz operator to the appendix at the end of the article.\\ The most fundamental motivation in this article is to seek the connection between $Q$-curvature and scalar
curvature both as scalar-type curvature quantities. Intuitively, they should share some common properties, since both of them are generalizations of Gaussian curvature on surfaces. Of course, as objects in conformal geometry, much successful research has revealed their profound connections over the past decades. (See the appendix for a brief discussion.) However, beyond conformal classes there has been very little research on $Q$-curvature from the viewpoint of Riemannian geometry. Motivated by the early work of Fischer and Marsden (\cite{F-M}) on the deformation of scalar curvature, we started to consider generic deformation problems of $Q$-curvature. \\ In order to study deformations of scalar curvature, the central idea of \cite{F-M} is to investigate the kernel of the $L^2$-formal adjoint of the linearization of the scalar curvature. To be precise, regard the scalar curvature $R(g)$ as a second-order nonlinear map on the space of all metrics on $M$, \begin{align*} \notag R: &\mathcal{M} \rightarrow C^{\infty}(M); \ g\mapsto R_{g}. \end{align*} Let $\gamma_g : S_2(M) \rightarrow C^\infty(M)$ be its linearization at $g$ and $\gamma_g^*: C^\infty(M) \rightarrow S_2(M)$ be the $L^2$-formal adjoint of $\gamma_g$, where $S_2(M)$ is the space of symmetric $2$-tensors on $M$. \\ A crucial related concept is the so-called \emph{vacuum static space}, which arises as the spatial slice of a special type of solution to the \emph{vacuum Einstein equations} (see \cite{Q-Y}). A vacuum static space can also be defined as a complete Riemannian manifold with $\ker \gamma_g^* \neq \{0\}$ (see the last section for an explicit definition). Typical examples of vacuum static spaces are space forms. In this sense, we can regard the notion of vacuum static spaces as a generalization of space forms. Of course, there are many other interesting examples of vacuum static spaces besides space forms, such as $S^1\times S^2$, \emph{etc}.
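As a side note (ours, not part of the original argument), the dimensional constants $A_n$, $B_n$ and $C_n$ defined in the introduction can be checked to reduce, at $n=4$, to the coefficients $-\frac{1}{6}$, $-\frac{1}{2}$, $\frac{1}{6}$ appearing in (\ref{Q_4}); a minimal sketch:

```python
from fractions import Fraction

def branson_coefficients(n):
    # Dimensional constants in Branson's Q-curvature, as defined in the text.
    A = Fraction(-1, 2 * (n - 1))
    B = Fraction(-2, (n - 2) ** 2)
    C = Fraction(n**2 * (n - 4) + 16 * (n - 1),
                 8 * (n - 1) ** 2 * (n - 2) ** 2)
    return A, B, C

print(branson_coefficients(4))  # -> (Fraction(-1, 6), Fraction(-1, 2), Fraction(1, 6))
```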
The classification problem is a fundamental question in the study of vacuum static spaces, even in the field of mathematical general relativity. We refer interested readers to the article \cite{Q-Y}.\\ In fact, being vacuum static or not is the criterion that determines whether the scalar curvature is linearized stable at a given metric. It was shown in \cite{F-M} that a closed non-vacuum static manifold is linearized stable and hence any smooth function sufficiently close to the scalar curvature of the background metric can be realized as the scalar curvature of a nearby metric. This result was generalized to non-vacuum static domains by Corvino (\cite{Corvino}). On the other hand, for a vacuum static space, rigidity results are expected. In \cite{F-M}, the authors also showed that there is a local rigidity phenomenon on any torus. Later, Schoen-Yau and Gromov-Lawson (\cite{S-Y_1, S-Y_2, G-L_1, G-L_2}) showed that global rigidity also holds on the torus. Inspired by the \emph{Positive Mass Theorem}, much progress has been made in understanding the rigidity phenomena of vacuum static spaces (c.f. \cite{Min-Oo, A-D, A-C-G, Miao, S-T, B-M}). It was demonstrated that local rigidity is a universal phenomenon in all vacuum static spaces (see \cite{Q-Y_1}). \\ Along the same line, we can also consider $Q$-curvature as a $4^{th}$-order nonlinear map on the space of all metrics on $M$, \begin{align} \notag Q: &\mathcal{M} \rightarrow C^{\infty}(M); g\mapsto Q_g. \end{align} Due to the complexity of $Q$-curvature, this map is extremely difficult to study. However, by linearizing the map we may still expect some interesting results to hold. We denote $$\Gamma_g : S_2(M) \rightarrow C^\infty(M)$$ to be the linearization of $Q$-curvature at the metric $g$. \\ Now we give the following notion of $Q$-singular space introduced by Chang-Gursky-Yang in \cite{C-G-Y}, which plays a crucial role in this article.
\begin{definition}[Chang-Gursky-Yang \cite{C-G-Y}] We say a complete Riemannian manifold $(M, g)$ is $Q$-singular, if $$\ker \Gamma_g^* \neq \{ 0 \},$$ where $\Gamma_g^* : C^\infty(M) \rightarrow S_2(M)$ is the $L^2$-formal adjoint of $\Gamma_g$. We also refer to the triple $(M, g, f)$ as a $Q$-singular space, if $f (\not\equiv 0)$ is in the kernel of $\Gamma_g^*$. \end{definition} By direct calculations, we can obtain the precise expression of $\Gamma_g^*$ (see Proposition \ref{prop:Gamma^*}). Then, following their argument, we observe that the results in \cite{C-G-Y} for dimension $4$ can be extended to all other dimensions: \begin{theorem}[Chang-Gursky-Yang \cite{C-G-Y}]\label{thm:Q_const} A $Q$-singular space $(M^n, g)$ has constant $Q$-curvature and \begin{align} \frac{n+4}{2}Q_g \in Spec(P_g). \end{align} \end{theorem} As in the study of vacuum static spaces, the collection of all $Q$-singular spaces is also expected to be a ``small'' set in some sense, which leads us naturally to classification problems. In fact, when restricted to the class of closed $Q$-singular Einstein manifolds with nonnegative scalar curvature, Ricci flat and spherical metrics are the only possible ones: \begin{theorem}\label{Classificaition_Q_singular_Einstein} Suppose $(M^n,g,f)$ is a closed $Q$-singular Einstein manifold. If the scalar curvature $R_g \geq 0$, then \begin{itemize} \item $f$ is a non-vanishing constant if and only if $(M,g)$ is Ricci flat; \item $f$ is not a constant if and only if $(M,g)$ is isometric to a round sphere with radius $r = \left( \frac{n(n-1)}{R_g}\right)^{\frac{1}{2}}$. \end{itemize} \end{theorem} For non-$Q$-singular spaces, we can show that they are actually linearized stable, which answers the local prescribed $Q$-curvature problem for this class of manifolds: \begin{theorem}\label{Q_stability} Let $(M,\bar{g})$ be a closed Riemannian manifold.
Assume $(M,\bar{g})$ is not $Q$-singular. Then the $Q$-curvature is linearized stable at $\bar g$ in the sense that $Q : \mathcal{M} \rightarrow C^{\infty}(M)$ is a submersion at $\bar{g}$. Thus, there is a neighborhood $U \subset C^{\infty}(M)$ of $Q_{\bar{g}}$ such that for any $\psi \in U$, there exists a metric $g$ on $M$ close to $\bar{g}$ with $Q_g = \psi$. \end{theorem} In particular, with the aid of Theorem \ref{Classificaition_Q_singular_Einstein}, we know that a generic Einstein metric with positive scalar curvature has to be linearized stable: \begin{corollary}\label{cor:stab_pos_Einstein} Let $(M,\bar{g})$ be a closed positive Einstein manifold. Assume $(M,\bar{g})$ is not spherical. Then the $Q$-curvature is linearized stable at $\bar g$. \end{corollary} Theorem \ref{Q_stability} actually provides an answer to the global prescribed $Q$-curvature problem: \begin{corollary}\label{cor:prescribing_zero_Q} Suppose $(M, \bar g)$ is a closed non-$Q$-singular manifold with vanishing $Q$-curvature. Then any smooth function $\varphi$ can be realized as the $Q$-curvature of some metric $g$ on $M$. \end{corollary} As a direct corollary of Theorem \ref{Q_stability}, we can obtain the existence of a negative $Q$-curvature metric as follows: \begin{corollary}\label{cor:pos_Y_negative_Q} Let $M$ be a closed manifold with positive Yamabe invariant $Y(M) > 0$. There is a metric $g$ with $Q$-curvature $Q_g < 0$ on $M$. \end{corollary} We also investigate the stability of Ricci flat metrics, which are in fact $Q$-singular. Due to the special nature of such metrics, we can give a sufficient condition for the prescribed $Q$-curvature problem. \begin{theorem}\label{Ricci_flat_stability} Let $(M,\bar{g})$ be a closed Ricci flat manifold. Denote $$\Phi:=\{\psi \in C^{\infty}(M): \int_M \psi dv_{\bar{g}} = 0\}$$ to be the set of smooth functions with zero average.
Then for any $\psi \in \Phi$, there exists a metric $g$ on $M$ such that $$Q_g = \psi.$$ \end{theorem} On the other hand, we are interested in the question of how much the generic stability fails for flat metrics. Applying perturbation analysis, we discovered that positivity of $Q$-curvature is an obstruction for flat metrics, exactly as it is for scalar curvature. That is, flat metrics are locally rigid for nonnegative $Q$-curvature. As a special case, we proved a local rigidity result for $Q$-curvature on the torus $T^n$ ($n \geq 3$), which is similar to the non-existence result for positive scalar curvature metrics on the torus due to Schoen-Yau and Gromov-Lawson (\cite{S-Y_1, S-Y_2, G-L_1, G-L_2}). \begin{theorem}\label{flat_local_rigidity} For $n \geq 3$, let $(M^n,\bar{g})$ be a closed flat Riemannian manifold and $g$ be a metric on $M$ with $$Q_g \geq 0.$$ If $||g - \bar{g}||_{C^2(M, \bar{g})}$ is sufficiently small, then $g$ is also flat. \end{theorem} As for global rigidity, we have the following interesting result: \begin{theorem}\label{thm:rigidity_tori} If $g$ is a locally conformally flat metric on $T^n$ with $$Q_g \geq 0,$$ then $g$ is flat. In particular, any metric $g$ on $T^4$ with nonnegative $Q$-curvature has to be flat. \end{theorem} It would be interesting to compare the notions of $Q$-singular and vacuum static spaces. In fact, we have the following observation: \begin{theorem}\label{Q-static_R-static} Let $\mathcal{M}_R$ be the space of all closed vacuum static spaces and $\mathcal{M}_Q$ be the space of all closed $Q$-singular spaces. If $(M, g, f) \in \mathcal{M}_R \cap \mathcal{M}_Q$, then $M$ is necessarily Einstein. In particular, it has to be either Ricci flat or isometric to a round sphere. \end{theorem} This article is organized as follows: We explain some general notations and conventions in Section 2.
We then give a characterization of $Q$-singular spaces and proofs of Theorems \ref{thm:Q_const} and \ref{Classificaition_Q_singular_Einstein} in Section 3. In Section 4, we investigate several stability results, including Theorem \ref{Q_stability}, Corollaries \ref{cor:stab_pos_Einstein}, \ref{cor:prescribing_zero_Q} and \ref{cor:pos_Y_negative_Q}, and Theorem \ref{Ricci_flat_stability}. In Section 5, the local rigidity of flat metrics, namely Theorems \ref{flat_local_rigidity} and \ref{thm:rigidity_tori}, will be shown. Some relevant results will also be discussed there. In Section 6, we discuss the relation between $Q$-singular and vacuum static spaces and prove Theorem \ref{Q-static_R-static} at its end. At the end of this article, we provide a brief discussion of the conformal properties of the $Q$-curvature and the Paneitz operator from the viewpoint of conformal geometry.\\ \paragraph{\textbf{Acknowledgement}} The authors would like to express their appreciation to Professor Sun-Yung Alice Chang, Professor Justin Corvino, Professor Matthew Gursky, Professor Fengbo Hang, Professor Jie Qing, Professor Paul Yang, and Dr.~Yi Fang for their interest in this work and inspiring discussions. Especially, we would like to thank Professor Matthew Gursky for pointing out the work in \cite{C-G-Y}. The authors are also grateful to the MSRI/IAS/PCMI Geometric Analysis Summer School 2013, which created the opportunity for initiating this work. Part of the work was done when the second author visited the \emph{Institut Henri Poincar\'e} and we would like to express our appreciation for the hospitality of IHP. \\ \section{Notations and conventions} Throughout this article, we will always assume $(M^n, g)$ to be an $n$-dimensional closed Riemannian manifold ($n \geq 3$) unless otherwise stated. Here by \emph{closed}, we mean compact without boundary. We also take $\{\partial_i\}_{i=1}^n$ to be local coordinates around a given point.
Hence we also use its components to stand for a tensor or vector in this article.\\ For convenience, we use the following notation: $\mathcal{M}$ - the set of all smooth metrics on $M$; $\mathscr{D}(M)$ - the set of all smooth diffeomorphisms $\varphi : M \rightarrow M$; $\mathscr{X}(M)$ - the set of all smooth vector fields on $M$; $S_2(M)$ - the set of all smooth symmetric 2-tensors on $M$.\\ We adopt the following convention for the Ricci curvature tensor, \begin{align*} R_{jk} = R^i_{ijk} = g^{il} R_{ijkl}. \end{align*} As for the Laplacian, we use the common convention \begin{align*} \Delta_g := g^{ij} \nabla_i \nabla_j. \end{align*} For $h, k \in S_2(M)$, we define the following operations: \begin{align*} (h \times k )_{ij} := g^{kl}h_{ik}k_{jl} = h_i^lk_{lj} \end{align*} and \begin{align*} h \cdot k := tr (h \times k) = g^{ij}g^{kl}h_{ik}k_{jl} = h^{jk}k_{jk}. \end{align*} For $X \in \mathscr{X}(M)$ and $h \in S_2(M)$, we use the following notation for the operators \begin{align*} (\overset{\circ}{Rm} \cdot h )_{jk}:= R_{ijkl} h^{il}, \end{align*} and \begin{align*} (\delta_g h)_i := - (div_g h)_i = -\nabla^j h_{ij}, \end{align*} the latter being the $L^2$-formal adjoint (up to a scalar multiple) of the Lie derivative $$\frac{1}{2}(L_g X)_{ij} = \frac{1}{2} ( \nabla_i X_j + \nabla_j X_i).$$\\ \section{Characterizations of Q-singular spaces} Let $(M,g)$ be a Riemannian manifold and $\{g(t)\}_{t \in (-\varepsilon, \varepsilon)}$ be a 1-parameter family of metrics on $M$ with $g(0) =g$ and $g'(0) = h$. It is easy to see that ${g'}^{ij} = - h^{ij}$.\\ We have the following well-known formulae for linearizations of geometric quantities (cf. \cite{C-L-N, F-M, Yuan}). \begin{proposition} The linearization of the Christoffel symbols is \begin{align} {\Gamma'}_{ij}^k = \frac{1}{2} g^{kl} \left( \nabla_i h_{jl} +\nabla_j h_{il} - \nabla_l h_{ij} \right).
\end{align} The linearization of the Ricci tensor is \begin{align}\label{1st_variation_Ricci_scalar} Ric'_{jk} = \left.\frac{d}{dt}\right|_{t=0} Ric(g(t))_{jk} = - \frac{1}{2}\left( \Delta_L h_{jk} + \nabla_j \nabla_k (tr h) + \nabla_j (\delta h)_k + \nabla_k (\delta h)_j \right), \end{align} where the Lichnerowicz Laplacian acting on $h$ is defined to be \begin{align*} \Delta_L h_{jk} = \Delta h_{jk} + 2 (\overset{\circ}{Rm}\cdot h)_{jk} - R_{ji} h^i_k - R_{ki}h^i_j, \end{align*} and the linearization of the scalar curvature is \begin{align}\label{scalar_1st_variation} R' = \left.\frac{d}{dt}\right|_{t=0} R(g(t)) = - \Delta (tr h) + \delta^2 h - Ric \cdot h. \end{align} \end{proposition} For simplicity, we use $'$ to denote differentiation with respect to the parameter $t$, evaluated at $t=0$.\\ \begin{lemma} The linearization of the Laplacian acting on the scalar curvature is \begin{align}\label{laplacian_scalar_variation} (\Delta R)' = \left.\frac{d}{dt}\right|_{t=0} \Delta_{g(t)} R(g(t)) = - \nabla^2 R\cdot h + \Delta R' + \frac{1}{2} dR \cdot (d( tr h ) + 2\delta h). \end{align} \end{lemma} \begin{proof} Calculating in normal coordinates at an arbitrary point, \begin{align*} &\left.\frac{d}{dt}\right|_{t=0} \Delta_{g(t)} R(g(t)) \\ =& \left.\frac{d}{dt} \right|_{t=0}\left({g^{ij}(\partial_i\partial_j R-\Gamma^k_{ij}\partial_kR)}\right)\\ =& -h^{ij}\partial_i\partial_jR+g^{ij}(\partial_i\partial_j R'-(\Gamma^k_{ij})'\partial_k R-\Gamma^k_{ij}\partial_k R')\\ =& -h^{ij}\partial_i\partial_j R - g^{ij}(\Gamma^k_{ij})'\partial_k R+\Delta R'\\ =& -h^{ij}\partial_i\partial_j R - \frac{1}{2}g^{ij}g^{kl}(\nabla_ih_{jl}+\nabla_jh_{il}-\nabla_lh_{ij})\partial_kR+\Delta R'\\ =& -h^{ij}\nabla_i\nabla_jR-\nabla^ih^k_i\nabla_kR+\frac{1}{2}\nabla^k(trh)\nabla_kR+\Delta R'\\ =& - \nabla^2 R\cdot h + \Delta R' + \frac{1}{2} dR \cdot (d( tr h ) + 2\delta h).
\end{align*} \end{proof} Now we can calculate the linearization of $Q$-curvature (1st variation).\\ \begin{proposition}\label{Q_1st_variation} The linearization of $Q$-curvature is \begin{align}\label{Q_linearization} \Gamma_g h :=& DQ_g \cdot h = A_n \left( - \Delta^2 (tr h) + \Delta \delta^2 h - \Delta ( Ric \cdot h ) + \frac{1}{2} dR \cdot (d( tr h ) + 2\delta h) - \nabla^2 R\cdot h\right) \\ \notag & - B_n \left( Ric \cdot \Delta_L h + Ric \cdot \nabla^2(tr h) + 2 Ric \cdot\nabla (\delta h) +2 (Ric\times Ric) \cdot h \right) \\ \notag &+ 2 C_n R \left( - \Delta (tr h) + \delta^2 h - Ric \cdot h \right). \end{align} \end{proposition} \begin{proof} Note that \begin{align*} \left.\frac{d}{dt}\right|_{t=0} |Ric(g(t))|_{g(t)}^2 =&\left.\frac{d}{dt}\right|_{t=0} \left(g^{ik}g^{jl}R_{ij}R_{kl}\right) =2g^{ik}g^{jl}R'_{ij}R_{kl}+2g'^{ik}g^{jl}R_{ij}R_{kl}\\ =&2Ric' \cdot Ric - 2(Ric\times Ric) \cdot h, \end{align*} then we finish the proof by combining it with (\ref{1st_variation_Ricci_scalar}), (\ref{scalar_1st_variation}) and (\ref{laplacian_scalar_variation}). \end{proof} Now we can derive the expression of $\Gamma_g^*$ and hence the $Q$-singular equation $$\Gamma_g^* f = 0.$$ \begin{proposition}\label{prop:Gamma^*} The $L^2$-formal adjoint of $\Gamma_g$ is \begin{align*} \Gamma_g^* f :=& A_n \left( - g \Delta^2 f + \nabla^2 \Delta f - Ric \Delta f + \frac{1}{2} g \delta (f dR) + \nabla ( f dR) - f \nabla^2 R \right)\\ \notag & - B_n \left( \Delta (f Ric) + 2 f \overset{\circ}{Rm}\cdot Ric + g \delta^2 (f Ric) + 2 \nabla \delta (f Ric) \right)\\ \notag &- 2 C_n \left( g\Delta (f R) - \nabla^2 (f R) + f R Ric \right). 
\end{align*} \end{proposition} \begin{proof} For any compactly supported symmetric 2-tensor $h \in S_2(M)$, we have \begin{align*} &\int_M f \left.\frac{d}{dt}\right|_{t=0} {\Delta_{g(t)} R(g(t))}\ dv_g\\ =&\int_M f\left(- \nabla^2 R\cdot h + \frac{1}{2} dR \cdot (d( tr h ) + 2\delta h)+\Delta R'\right)dv_g\\ =&\int_M \left(-h \cdot f\nabla^2R+h \cdot \nabla(fdR) + \frac{1}{2}h \cdot \delta(fdR)g + \Delta f(-\Delta trh + \delta^2h-Ric\cdot h)\right)dv_g\\ =&\int_M \left\langle-f\nabla^2R+\nabla(fdR) + \frac{1}{2}g\delta(fdR) -g\Delta^2f+\nabla^2\Delta f-Ric\Delta f,\ h\right\rangle dv_g. \end{align*}\\ Similarly, \begin{align*} &\int_M f \left.\frac{d}{dt}\right|_{t=0} |Ric(g(t))|^2_{g(t)}\ dv_g\\ =&\int_M f\left(-2h^{il}g^{jk}R_{ij}R_{kl}+2R^{ij}{R'}_{ij}\right)dv_g\\ =&\int_M -2f(Ric\times Ric)\cdot h-fR^{ij}\left(\Delta_Lh_{ij} +\nabla_i\nabla_j(tr h) +\nabla_i(\delta h)_j + \nabla_j(\delta h)_i\right) dv_g\\ =&\int_M \left\langle -\Delta_L (fRic)-2f(Ric\times Ric) - 2\nabla\delta(fRic) - g\delta^2(fRic),\ h \right\rangle dv_g\\ =&\int_M \left\langle-\Delta (fRic)-2f\overset\circ{Rm}\cdot Ric - 2\nabla\delta(fRic)-g\delta^2(fRic),\ h \right\rangle dv_g. \end{align*}\\ And also, \begin{align*} &\int_M f \left.\frac{d}{dt}\right|_{t=0} (R(g(t)))^2 \ dv_g\\ =&\int_M 2 f R \left(-\Delta trh+\delta^2h-Ric\cdot h \right)dv_g\\ =&\int_M 2\left(-\Delta (fR)trh+\nabla^2(fR) \cdot h-(fR)Ric\cdot h\right)dv_g\\ =&\int_M 2 \left\langle -g\Delta (fR)+\nabla^2(fR)-(fR)Ric,\ h \right\rangle dv_g \end{align*} Combining all the equalities above, we have \begin{align*} \int_M f DQ_g \cdot hdv_g=\int_M \langle f,\Gamma_g h\rangle dv_g=\int_M \langle\Gamma_g^*f,h\rangle dv_g. \end{align*} \end{proof} \begin{corollary} \begin{align*} \mathscr{L}_g f := tr_g \Gamma_g^* f = \frac{1}{2} \left( P_g - \frac{n + 4}{2} Q_g \right) f. 
\end{align*} \end{corollary} \begin{proof} Taking the trace of $\Gamma_g^* f$, we have \begin{align*} tr \Gamma_g^* f =& -2 Q_g f - (n-1) A_n \Delta^2 f - \left( \frac{n-4}{2} A_n + \frac{n}{2}B_n + 2(n-1)C_n \right)f \Delta R \\ &- (A_n + B_n + 2(n-1)C_n) R \Delta f - \left( \frac{n-2}{2}A_n + n B_n + 4(n-1) C_n\right) df \cdot dR \\ &- (n-2) B_n Ric \cdot \nabla^2 f\\ =& -2 Q_g f + \frac{1}{2} \Delta^2 f - \frac{(n-2)^2 + 4}{4(n-1)(n-2)} R \Delta f - \frac{n-6}{4(n-1)} df \cdot dR + \frac{2}{n-2} Ric \cdot \nabla^2 f\\ =& -2 Q_g f + \frac{1}{2} \left( \Delta^2 f - a_n R \Delta f - \left(a_n + \frac{1}{2}b_n\right) df \cdot dR - b_n Ric \cdot \nabla^2 f \right)\\ =& \frac{1}{2} \left( \Delta_g^2 f- div_g \left[(a_n R_g g + b_n Ric_g) df \right] \right) - 2 Q_g f\\ = &\frac{1}{2} \left( P_g - \frac{n+4}{2} Q_g \right) f. \end{align*} Here we used the fact that $\frac{n-4}{2} A_n + \frac{n}{2}B_n + 2(n-1)C_n =0$. \end{proof} Combining the above calculations, we can justify that the $Q$-curvature is constant for $Q$-singular spaces, using exactly the same argument as in \cite{C-G-Y}. For the convenience of the reader, we include a sketch of the proof as follows. For more details, please refer to \cite{C-G-Y}. \begin{theorem}[Chang-Gursky-Yang \cite{C-G-Y}] A $Q$-singular space $(M^n, g)$ has constant $Q$-curvature and \begin{align} \frac{n+4}{2}Q_g \in Spec(P_g). \end{align} \end{theorem} \begin{proof} We only need to show $Q_g$ is constant. For any smooth vector field $X \in \mathscr{X}(M)$, we have \begin{align*} \int_M \langle X, \delta_g \Gamma_g^* f \rangle dv_g = \frac{1}{2}\int_M \langle L_X g, \Gamma_g^* f \rangle dv_g = \frac{1}{2}\int_M f\ \Gamma_g (L_X g)\ dv_g = \frac{1}{2}\int_M \langle f dQ_g, X \rangle \ dv_g. \end{align*} Thus $$fdQ_g = 2 \delta_g \Gamma_g^* f = 0$$ on $M$. Suppose there is an $x_0 \in M$ with $dQ_g (x_0) \neq 0$. Then $f$ vanishes on a neighborhood of $x_0$, and in particular $f$ vanishes to infinite order at $x_0$.
\emph{i.e.} $$\nabla^m f(x_0) = 0$$ for any $m \geq 1$. Since $f$ also satisfies $$\mathscr{L}_g f = \frac{1}{2} \left(P_g f - \frac{n+4}{2}Q_g f\right) = 0,$$ by applying the Carleman estimates of Aronszajn (\cite{A}), we can conclude that $f$ vanishes identically on $M$. But this contradicts the fact that $g$ is $Q$-singular. Therefore, $dQ_g$ vanishes on $M$ and thus $Q_g$ is a constant. This proves Theorem \ref{thm:Q_const}. \end{proof} \textbf{Examples of $Q$-singular spaces.} \begin{itemize} \item Ricci flat spaces\\ Take $f$ to be a nonzero constant; then it satisfies the $Q$-singular equation.\\ \item Spheres\\ Take $f$ to be the $(n+1)^{th}$ coordinate function $x_{n+1}$ restricted to $$S^n = \{x\in \mathbb{R}^{n+1}: |x|^2 = 1\}.$$ Then $f$ satisfies the Hessian-type equation $$\nabla^2 f + g f = 0.$$ With the aid of Lemma \ref{Q-static_Einstein} below, we can easily check that $f$ satisfies the $Q$-singular equation.\\ \item Hyperbolic spaces\\ Similarly, if we take $f$ to be the $(n+1)^{th}$ coordinate function $x_{n+1}$ but restricted to $$H^n = \{(x',x_{n+1})\in \mathbb{R}^{n+1}: |x'|^2 - |x_{n+1}|^2 = -1, x' \in \mathbb{R}^n, x_{n+1} > 0\},$$ then $f$ satisfies the analogous Hessian-type equation $$\nabla^2 f - g f = 0,$$ and hence solves the $Q$-singular equation. \end{itemize} Given the complexity of the $Q$-singular equation, it is very difficult to study the geometry of generic $Q$-singular spaces. However, when restricting to some special classes of Riemannian manifolds, we can still obtain some interesting results about $Q$-singular spaces.
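For the reader's convenience, we sketch the verification in the spherical case. The sketch only uses the intermediate identity from the proof of Lemma \ref{Q-static_Einstein} below, namely that for any Einstein metric
\begin{align*}
\Gamma_g^* f = A_n \left[ \nabla^2 \varphi - g \left( \Delta \varphi + \frac{R}{n}\varphi \right) \right], \qquad \varphi := \Delta f + \Lambda_n R f.
\end{align*}
On the unit sphere $S^n$ we have $R = n(n-1)$, and tracing the Hessian equation $\nabla^2 f + g f = 0$ gives $\Delta f = -nf$, so $\varphi = c f$ with the constant $c = \Lambda_n n(n-1) - n$. Therefore
\begin{align*}
\Gamma_g^* f = c A_n \left[ \nabla^2 f - g \left( \Delta f + (n-1) f \right) \right] = c A_n \left( -gf - g(-nf + (n-1)f) \right) = 0.
\end{align*}
The hyperbolic case is verified in the same way, using $R = -n(n-1)$ and $\nabla^2 f - gf = 0$.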
\begin{lemma}\label{Q-static_Einstein} Let $(M,g)$ be an Einstein manifold and let $f \in \ker \Gamma_g^*$. Then \begin{align} \Gamma_g^* f = A_n \left[ \nabla^2 \left(\Delta f + \Lambda_n R f \right) + \frac{R}{n(n-1)}g \left(\Delta f + \Lambda_n R f\right) \right] = 0, \end{align} where $\Lambda_n : = \frac{2}{A_n} (\frac{B_n}{n} + C_n) = - \frac{(n+2)(n-2)}{2n(n-1)} < 0$, for any $n \geq 3$. \end{lemma} \begin{proof} Since $g$ is Einstein, \emph{i.e.} $Ric=\frac{R}{n}g$ (so that $R$ is constant for $n \geq 3$), we get \begin{align*} \Gamma_g^* f=&A_n \left[ -g\Delta^2f+\nabla^2\Delta f-\frac{R}{n}g\Delta f \right] -B_n\left[\frac{R}{n}\Delta (fg)+2f \overset\circ{Rm}\cdot \frac{R}{n}g + 2\frac{R}{n}\nabla\delta(fg) +\frac{R}{n}g\delta^2(fg) \right]\\ & \ \ -2C_n\left[Rg\Delta f-R\nabla^2f+\frac{R^2}{n}fg\right]\\ =&A_n \left[-g\Delta^2f+\nabla^2\Delta f-\frac{R}{n}g\Delta f\right] - 2B_n \left[ \frac{R}{n}g\Delta f+\frac{R^2}{n^2}gf - \frac{R}{n}\nabla^2f \right] \\ & \ \ -2C_n \left[ Rg\Delta f-R\nabla^2f+\frac{R^2}{n}fg \right]\\ =& A_n \left[-g\Delta^2f+\nabla^2\Delta f - \left(\frac{1}{n} + \Lambda_n\right) R g\Delta f + \Lambda_n R \nabla^2 f - \frac{\Lambda_n}{n}R^2 g f \right]\\ =& A_n \left[ \nabla^2 \left(\Delta f + \Lambda_n R f\right) - g \left( \Delta \left(\Delta f + \Lambda_n R f\right) +\frac{R}{n}\left(\Delta f + \Lambda_n R f\right) \right) \right]. \end{align*} Since $f\in \ker \Gamma_g^*$, \emph{i.e.} $\Gamma_g^* f=0$, taking the trace we obtain \begin{align*} A_n \left[ - (n-1) \Delta \left(\Delta f + \Lambda_n R f\right) - R\left(\Delta f + \Lambda_n R f\right) \right] = 0. \end{align*} Thus, \begin{align*} \Delta \left(\Delta f + \Lambda_n R f\right) = - \frac{R}{n-1}\left(\Delta f + \Lambda_n R f\right).
\end{align*} Now substituting this into the expression of $\Gamma_g^* f$, we have $$ \Gamma_g^* f =A_n \left[ \nabla^2 \left(\Delta f + \Lambda_n R f\right) + \frac{R}{n(n-1)}g \left(\Delta f + \Lambda_n R f\right)\right] = 0, $$ where $\Lambda_n= \frac{2}{A_n}\left(\frac{B_n}{n}+C_n\right) = - \frac{(n+2)(n-2)}{2n(n-1)} < 0$, for any $n \geq 3$.\\ \end{proof} Next we show that a complete $Q$-singular Einstein manifold with positive scalar curvature has to be spherical: \begin{proposition} \label{Q-static_sphere} Let $(M^n,g)$ be a complete $Q$-singular Einstein manifold with positive scalar curvature. Then $(M^n,g)$ is isometric to the round sphere $(S^n(r), g_{_{S^n(r)}})$, with radius $r = \left(\frac{n(n-1)}{R_g}\right)^{\frac{1}{2}}$. Moreover, $\ker \Gamma_g^*$ consists of eigenfunctions of $(-\Delta_g)$ associated to $\lambda_1 = \frac{R_g}{n-1}> 0$ on $S^n(r)$, and hence $\dim \ker \Gamma_{g}^* = n+1$. \end{proposition} \begin{proof} Note that $\Lambda_n < 0$ and that $Spec (-\Delta_g)$ consists of nonnegative real numbers, so $$\Lambda_n R_g \not\in Spec ( - \Delta_g).$$ Let $f \in \ker \Gamma_g^*$ with $f\not\equiv 0$, and let $\varphi := \Delta_g f + \Lambda_n R_g f$. Then \begin{align*} \varphi \not\equiv 0, \end{align*} since otherwise $\Lambda_n R_g$ would be an eigenvalue of $-\Delta_g$. Therefore, by Lemma \ref{Q-static_Einstein}, we have the following equation \begin{align}\label{Obata_type_eqn} \nabla^2\varphi=-\frac{R_g}{n(n-1)}\varphi g \end{align} with a nontrivial solution $\varphi$.\\ Taking the trace, we get \begin{align} \Delta_g \varphi = - \frac{R_g}{n-1}\varphi, \end{align} \emph{i.e.} $\frac{R_g}{n-1}$ is an eigenvalue of $-\Delta_g$.\\ On the other hand, by the \emph{Lichnerowicz--Obata Theorem}, the first nonzero eigenvalue of $(- \Delta_g)$ satisfies $$\lambda_1 \geq \frac{R_g}{n-1},$$ with equality if and only if $(M^n,g)$ is isometric to the round sphere $(S^n(r), g_{S^n(r)})$ with radius $r = \left(\frac{n(n-1)}{R_g} \right)^{\frac{1}{2}}$.
Hence the first part of the proposition follows.\\ Since $\Lambda_n R_g \not\in Spec ( - \Delta_g)$, the operator $\Delta_g + \Lambda_n R_g$ is invertible. On the other hand, \begin{align*} \left( \Delta_g + \Lambda_n R_g \right) \left(\Delta_g + \frac{R_g}{n-1}\right) f = \left(\Delta_g + \frac{R_g}{n-1}\right) \left( \Delta_g + \Lambda_n R_g \right) f = \left(\Delta_g + \frac{R_g}{n-1}\right) \varphi=0. \end{align*} Thus $$\left(\Delta_g + \frac{R_g}{n-1}\right) f = 0,$$ \emph{i.e.} $f$ is an eigenfunction associated to $\lambda_1 = \frac{R_g}{n-1}$.\\ Therefore, $\ker \Gamma_g^*$ can be identified with the eigenspace associated to the first nonzero eigenvalue $\lambda_1> 0$ of $(-\Delta_g)$ on $S^n(r)$, and hence $\dim\ker\Gamma_g^* = n+1$. \end{proof} As for Ricci flat manifolds, we have: \begin{theorem}\label{Static_Ricci_flat} Suppose $(M^n,g)$ is a $Q$-singular Riemannian manifold. If $(M,g)$ admits a nonzero constant potential $f \in \ker \Gamma_g^*$, then $(M,g)$ is $Q$-flat, \emph{i.e.} the $Q$-curvature vanishes identically. Furthermore, if $(M,g)$ is a closed $Q$-singular Einstein manifold, then $(M,g)$ is Ricci flat if and only if $\ker \Gamma_g^*$ consists of constant functions. \end{theorem} \begin{proof} Without loss of generality, we can assume that $1 \in \ker \Gamma_g^*$. We have \begin{align*} tr\Gamma^{*}_{g} 1 = \mathscr{L}_g 1 = \frac{1}{2} \left( P_g 1 - \frac{n+4}{2}Q_g\right) = - 2 Q_g = 0, \end{align*} which implies that $Q_g=0$.\\ Now assume $(M,g)$ is a closed $Q$-singular Einstein manifold.\\ We have \begin{align*} \int_M Q_g dv_g &= \int_M \left(A_n \Delta_{g} R + B_n |Ric|^2 + C_n R^2\right) dv_g \\ &= \left(\frac{B_n}{n} + C_n \right) R^2\ Vol_g(M) \\ &= \frac{(n+2)(n-2)}{8n(n-1)^2}R^2\ Vol_g(M).
\end{align*} Here we used that $|Ric|^2 = \frac{R^2}{n}$ for Einstein metrics and that $\int_M \Delta_g R \ dv_g = 0$. Suppose $1 \in \ker \Gamma_g^*$; then $Q_g = 0$ and hence \begin{align*} R = 0. \end{align*} That means $(M, g)$ is Ricci flat.\\ On the other hand, if $(M,g)$ is Ricci flat, \begin{align*} \mathscr{L}_g f = \frac{1}{2}\Delta^2 f = 0. \end{align*} Since $M$ is compact without boundary, $f$ must be constant on $M$. \end{proof} \begin{remark} It was shown in \cite{C-G-Y} that for a closed $4$-dimensional $Q$-singular space $(M, g)$, $1 \in \ker \Gamma_g^*$ if and only if $g$ is Bach flat with vanishing $Q$-curvature. However, this theorem does not generalize to other dimensions automatically. In fact, in generic dimensions a direct calculation shows that $1 \in \ker \Gamma_g^*$ does not imply that $g$ is Bach flat, although $g$ is still $Q$-flat as shown above. \end{remark} Theorem \ref{Classificaition_Q_singular_Einstein} follows from combining Proposition \ref{Q-static_sphere} and Theorem \ref{Static_Ricci_flat}.\\ Einstein manifolds with negative scalar curvature are generically not $Q$-singular: \begin{proposition}\label{Q-hyperbolic} Let $(M, g)$ be a closed Einstein manifold with scalar curvature $R < 0$, and suppose $\Lambda_n R \not\in Spec(-\Delta_g)$. Then $$\ker \Gamma_g^* = \{0\},$$ \emph{i.e.} $(M, g)$ is not $Q$-singular. \end{proposition} \begin{proof} For any smooth function $f\in \ker \Gamma_g^*$, let $\varphi := \Delta f + \Lambda_n R f$. By Lemma \ref{Q-static_Einstein}, we have $$\nabla^2 \varphi + \frac{R}{n(n-1)}g\varphi = 0.$$ Taking the trace, $$\Delta \varphi + \frac{R}{n-1}\varphi = 0.$$ Since $\frac{R}{n-1} < 0$ cannot be an eigenvalue of $-\Delta_g$, we get $$\varphi = \Delta f + \Lambda_n R f = 0$$ identically on $M$. Thus $$\Delta f = - \Lambda_n R f .$$ Since $\Lambda_n R \not\in Spec(-\Delta_g)$ by assumption, we conclude that $f \equiv 0$. That means $\ker \Gamma_g^*$ is trivial. \end{proof} \section{Stability of Q-curvature} In this section, we will discuss the linearized stability of the $Q$-curvature on closed manifolds.
As main tools, we need the following key results, whose proofs can be found in the cited articles. \begin{theorem}[Splitting Theorem (\cite{B-E, F-M})]\label{Splitting_Theorem} Let $(M,g)$ be a closed Riemannian manifold, and let $E$ and $F$ be vector bundles over $M$. Let $D : C^\infty (E) \rightarrow C^\infty(F)$ be a $k^{th}$-order differential operator and $D^* : C^\infty(F) \rightarrow C^\infty(E)$ be its $L^2$-formal adjoint operator. For $k \leq s\leq \infty$ and $1< p < \infty$, let $D_s : W^{s,p}(E) \rightarrow W^{s-k,p}(F)$ and $D_s^* : W^{s,p}(F) \rightarrow W^{s-k,p}(E)$ be the bounded linear operators obtained by extending $D$ and $D^*$ respectively. Assume that $D$ or $D^*$ has an injective principal symbol; then \begin{align} W^{s,p}(F) = \Ima D_{s+k} \oplus \ker D_s^*. \end{align} Moreover, \begin{align} C^\infty(F) = \Ima D \oplus \ker D^*. \end{align} \end{theorem} In particular, taking the vector bundle $F$ to be the bundle $S_2(M)$ of symmetric 2-tensors, we have \begin{corollary}[Canonical decomposition of $S_2$ (\cite{B-E, F-M})]\label{Canonical_decomposition_S_2} Let $(M,g)$ be a closed Riemannian manifold; then the space of symmetric 2-tensors can be decomposed as \begin{align} S_2(M) = \{ L_X g : X \in \mathscr{X}(M)\} \oplus \ker \delta_g. \end{align} \end{corollary} Another result we need is the following. \begin{theorem}[Generalized Inverse Function Theorem (\cite{Gel'man})]\label{Generalized_Inverse_Function_Theorem} Let $X$, $Y$ be Banach spaces and $f : U_{x_0} \rightarrow Y$ be a continuously differentiable map with $f(x_0) = y_0$, where $U_{x_0} \subset X$ is a neighborhood of $x_0$. Suppose the derivative $D f (x_0) : X \rightarrow Y$ is a surjective bounded linear map. Then there exist a neighborhood $V_{y_0} \subset Y$ of $y_0$ and a continuous map $\varphi : V_{y_0} \rightarrow U_{x_0}$ such that \begin{align*} f(\varphi(y)) = y, \ \ \ \forall y\in V_{y_0}; \end{align*} and \begin{align*} \varphi(y_0) = x_0.
\end{align*} \end{theorem} Combining Corollary \ref{Canonical_decomposition_S_2} and Theorem \ref{Generalized_Inverse_Function_Theorem}, we obtain \emph{Ebin's Slice Theorem} (\cite{Ebin}), which we will use later in this article. \begin{theorem}[Slice Theorem (\cite{Ebin, F-M})]\label{Slice_Theorem} Let $(M,\bar{g})$ be a Riemannian manifold. For $p> n$, suppose that $g$ is also a Riemannian metric on $M$ and that $||g - \bar{g}||_{W^{2,p}(M,\bar{g})}$ is sufficiently small. Then there exists a diffeomorphism $\varphi \in \mathscr{D} (M)$ such that $h := \varphi^* g - \bar{g}$ satisfies $\delta_{\bar{g}} h = 0$ and moreover, $$||h||_{W^{2,p}(M,\bar{g})} \leq N ||g - \bar{g}||_{W^{2,p}(M,\bar{g})},$$ where $N$ is a positive constant depending only on $(M, \bar g)$. \end{theorem} \begin{remark} Brendle and Marques (cf. \cite{B-M}) proved an analogous decomposition and slice theorem for compact domains with boundary. \end{remark} Now we can give the proof of the main theorem of this section (Theorem \ref{Q_stability}). \begin{theorem} Let $(M,\bar{g})$ be a closed Riemannian manifold and assume $(M,\bar{g})$ is not $Q$-singular. Then the $Q$-curvature is linearized stable at $\bar g$, in the sense that $Q : \mathcal{M} \rightarrow C^{\infty}(M) $ is a submersion at $\bar{g}$. Thus, there is a neighborhood $U \subset C^{\infty}(M)$ of $Q_{\bar{g}}$ such that for any $\psi \in U$, there exists a metric $g$ on $M$ close to $\bar{g}$ with $Q_g = \psi$. \end{theorem} \begin{proof} The principal symbol of $\Gamma_{\bar{g}}^*$ is \begin{align*} \sigma_{\xi}(\Gamma_{\bar{g}}^*) = - A_n \left( g |\xi|^2 - \xi \otimes\xi \right)|\xi|^2. \end{align*} Taking the trace, we get \begin{align*} tr\ \sigma_{\xi}(\Gamma_{\bar{g}}^*) = - A_n \left( n - 1 \right)|\xi|^4. \end{align*} Thus $\sigma_{\xi}(\Gamma_{\bar{g}}^*) = 0$ implies that $\xi = 0$, \emph{i.e.} $\Gamma^*_{\bar{g}}$ has an injective principal symbol.
By the \emph{Splitting Theorem} (Theorem \ref{Splitting_Theorem}), $$C^\infty(M) = \Ima \Gamma_{\bar{g}} \oplus \ker \Gamma^*_{\bar{g}}.$$ Since we assume that $(M,\bar{g})$ is not $Q$-singular, \emph{i.e.} $\ker \Gamma^*_{\bar{g}} = \{0\}$, this implies that $\Gamma_{\bar{g}}$ is surjective. Therefore, applying the \emph{Generalized Inverse Function Theorem} (Theorem \ref{Generalized_Inverse_Function_Theorem}), $Q$ maps a neighborhood of $\bar g$ onto a neighborhood of $Q_{\bar{g}}$ in $C^\infty(M)$. \end{proof} As a consequence, we can derive the stability of generic positive Einstein manifolds (Corollary \ref{cor:stab_pos_Einstein}). \begin{corollary} Let $(M,\bar{g})$ be a closed positive Einstein manifold and assume $(M,\bar{g})$ is not spherical. Then the $Q$-curvature is linearized stable at $\bar g$. \end{corollary} \begin{proof} Since $M$ is assumed not to be spherical, by Theorem \ref{Classificaition_Q_singular_Einstein}, $(M,\bar{g})$ is not $Q$-singular. Now the stability follows from Theorem \ref{Q_stability}. \\ \end{proof} For a generic $Q$-flat manifold, any smooth function can be realized as the $Q$-curvature of some metric on the manifold (Corollary \ref{cor:prescribing_zero_Q}). \begin{corollary} Suppose $(M, \bar g)$ is a closed non-$Q$-singular manifold with vanishing $Q$-curvature. Then any smooth function $\varphi$ can be realized as the $Q$-curvature of some metric $ g$ on $M$. \end{corollary} \begin{proof} Since $(M, \bar g)$ is non-$Q$-singular, applying Theorem \ref{Q_stability}, the nonlinear map $Q$ maps a neighborhood of the metric $\bar g$ onto a neighborhood of $Q_{\bar g} = 0$ in $C^\infty(M)$. Thus there exists an $\varepsilon_0 > 0$ such that for any smooth function $\psi$ with $||\psi||_{C^\infty(M)}< \varepsilon_0$, we can find a smooth metric $g_\psi$ close to $\bar g$ with $Q_{g_\psi} = \psi$.
Now for any nontrivial $\varphi \in C^\infty(M)$, let $\tilde \varphi:= u_{\varepsilon_0 , \varphi}\varphi$, where $u_{\varepsilon_0 , \varphi} = \frac{\varepsilon_0 }{2 ||\varphi||_{_{C^\infty(M)}}} > 0$. Clearly, $||\tilde \varphi||_{C^\infty(M)} < \varepsilon_0$, and hence there is a metric $g_{\tilde \varphi}$ close to $\bar g$ such that $Q_{g_{\tilde \varphi}} = \tilde \varphi$. Let $$g = u_{\varepsilon_0 , \varphi}^{\frac{1}{2}} g_{\tilde \varphi};$$ then, since $Q_{cg} = c^{-2} Q_g$ for any constant $c > 0$, we have $$Q_g = u_{\varepsilon_0 , \varphi}^{-1} \cdot Q_{g_{\tilde \varphi}} = \varphi.$$ \end{proof} Now we can prove Corollary \ref{cor:pos_Y_negative_Q}. \begin{corollary} Let $M$ be a closed manifold with positive Yamabe invariant $Y(M) > 0$. Then there is a metric $g$ on $M$ with $Q$-curvature $Q_g < 0$. \end{corollary} \begin{proof} By Matsuo's theorem (see Corollary 2 in \cite{Mat14}), on a closed manifold $M$ with dimension $n\geq 3$ and positive Yamabe invariant, there exists a metric $g$ with scalar curvature $R_g = 0$ but $Ric_g \not\equiv 0$ on $M$. Thus the $Q$-curvature satisfies $$Q_g = - \frac{2}{(n-2)^2} |Ric_g|^2 \leq 0.$$ If $|Ric_g|>0$ pointwise, then $Q_g < 0$ on $M$. Otherwise, there is a point $p \in M$ with $|Ric_g(p)|^2 = 0$, so $Q_g$ vanishes at $p$ but is negative somewhere; hence $Q_g$ is not constant on $M$. By Theorem \ref{thm:Q_const}, this implies that the metric $g$ is not $Q$-singular. Therefore, by Theorem \ref{Q_stability}, we can perturb the metric $g$ to obtain a metric with strictly negative $Q$-curvature. This gives the conclusion. \end{proof} For the Ricci flat case, we have a better result, since we can identify $\ker \Gamma_{\bar{g}}^*$ with the constants. \begin{theorem} Let $(M,\bar{g})$ be a closed Ricci flat manifold. Denote by $$\Phi:=\{\psi \in C^{\infty}(M): \int_M \psi dv_{\bar{g}} = 0\}$$ the set of smooth functions with zero average.
Then for any $\psi \in \Phi$, there exists a metric $g$ on $M$ such that $$Q_g = \psi.$$ \end{theorem} \begin{proof} In the proof of Theorem \ref{Q_stability}, we showed that the principal symbol of $\Gamma_{\bar{g}}^*$ is injective and that the decomposition $$C^\infty(M) = \Ima \Gamma_{\bar{g}} \oplus \ker \Gamma^*_{\bar{g}}$$ holds. On the other hand, by Theorem \ref{Static_Ricci_flat}, $\ker \Gamma^*_{\bar{g}}$ consists of constant functions and hence $ \Ima \Gamma_{\bar{g}} = \Phi$. Identifying $\Phi$ with its tangent space, we see that the map $Q$ is a submersion at $\bar{g}$ with respect to $\Phi$. Therefore, by the \emph{Generalized Inverse Function Theorem} (Theorem \ref{Generalized_Inverse_Function_Theorem}), we have local surjectivity. That is, there exist a neighborhood $U_{\bar{g}} \subset \mathcal{M}$ of $\bar{g}$ and a neighborhood $V_0 \subset C^\infty(M)$ of $0$ such that $ V_0 \cap \Phi \subset Q(U_{\bar{g}})$, where $Q : g\mapsto Q_{g}$.\\ Now for any $\psi \in \Phi$, let $r > 0$ be a sufficiently large constant such that $\frac{1}{r^4}\psi \in V_0$. Then there exists a metric $g_r \in U_{\bar{g}}$ such that $Q_{g_r} = \frac{1}{r^4} \psi$. Letting $g := \frac{1}{r^2}g_r$, we have $$Q_g = r^4 \cdot Q_{g_r} = r^4 \cdot \frac{1}{r^4}\psi = \psi.$$ This proves Theorem \ref{Ricci_flat_stability}. \end{proof} Similarly, we can also give an answer to the problem of prescribing $Q$-curvature near the standard spherical metric, by noticing that $$\ker \Gamma_{\bar{g}}^* = E_{\lambda_1}$$ from Proposition \ref{Q-static_sphere}: \begin{theorem}\label{thm:stability_sphere} Let $(S^n, \bar{g})$ be the standard unit sphere and $E_{\lambda_1}$ be the eigenspace of $(-\Delta_g)$ associated to the first nonzero eigenvalue $\lambda_1 = n$.
Then for any $\psi \in E_{\lambda_1}^\perp$ with $||\psi - Q_{\bar{g}}||_{C^\infty(S^n,\bar{g})}$ sufficiently small, there exists a metric $g$ near $\bar{g}$ such that $$Q_g = \psi.$$ \end{theorem} \section{Rigidity Phenomena of Flat Manifolds} Let $(M,g)$ be a closed Riemannian manifold and consider a deformation of $(M,g)$, $g(t)=g+th$, $t \in (-\varepsilon, \varepsilon)$.\\ We have calculated the first variation of the $Q$-curvature (see equation (\ref{Q_linearization})). In order to study the local rigidity of the $Q$-curvature, we are going to calculate its second variation.\\ First, we recall the following well-known second-variation formulae, which can be found in \cite{F-M}. For detailed calculations, we refer to the appendices of \cite{Yuan}. \begin{proposition} We have the following second-variation formulae for the metric, \begin{align} g''_{ij} = \left.\frac{d^2}{dt^2}\right|_{t=0} (g(t))_{ij} = 0, \end{align} and \begin{align} g''^{ij} = \left.\frac{d^2}{dt^2}\right|_{t=0} (g(t))^{ij} = 2 h_k^j h^{ik}. \end{align} Also, for the Christoffel symbols, \begin{align} {\Gamma''}_{ij}^k = \left.\frac{d^2}{dt^2}\right|_{t=0} \Gamma(g(t))^k_{ij} = - h^{kl} \left( \nabla_i h_{jl} + \nabla_j h_{il} - \nabla_l h_{ij} \right).
\end{align} \end{proposition} \begin{lemma} The second variation of $Q$-curvature is \begin{align} \left.\frac{d^2}{dt^2}\right|_{t=0} Q(g(t)) =& A_n [ \Delta_g R'' + 2 \nabla^2 R \cdot ( h \times h ) - 2 h \cdot \nabla^2 R' + ( 2 \delta h + d (tr h) ) \cdot dR' \\ \notag &\ \ \ \ \ \ + h^{ij} ( 2 \nabla_i h_j^k - \nabla^k h_{ij} ) \nabla_k R - h \cdot ((2 \delta h + d(tr h)) \otimes d R ) ]\\ \notag & + B_n [ 4 (Ric \times Ric) \cdot ( h \times h ) + 2 | Ric \times h |^2 - 8 Ric' \cdot ( Ric \times h )\\ \notag &\ \ \ \ \ \ + 2 Ric''\cdot Ric + 2 | Ric' |^2 ]\\ \notag & + C_n [ 2 R R'' + 2(R')^2], \notag \end{align} where $Ric'$, $Ric''$; $R'$, $R''$ are the first and second variations of Ricci tensor and scalar curvature at $g$ respectively. \end{lemma} \begin{proof} Choose normal coordinates at any point $p \in M$, \emph{i.e.} $\Gamma_{jk}^i = 0$ at $p$, then \begin{align*} &\left.\frac{d^2}{dt^2}\right|_{t=0}(\Delta_{g(t)}R(g(t)))\\ =& (g^{ij}(\partial_i\partial_j R - \Gamma_{ij}^k \partial_k R))''\\ =& g''^{ij} \partial_i\partial_j R + 2 g'^{ij}(\partial_i\partial_j R' - {\Gamma'}_{ij}^k \partial_k R) + g^{ij}(\partial_i\partial_j R'' - {\Gamma''}_{ij}^k \partial_k R - 2{\Gamma'}_{ij}^k \partial_k R')\\ =& 2 h_k^j h^{ik} \nabla_i \nabla_j R - 2 h^{ij} (\nabla_i\nabla_j R' - {\Gamma'}_{ij}^k \nabla_k R) + \Delta R'' - g^{ij} {\Gamma''}_{ij}^k \nabla_k R - 2 g^{ij} {\Gamma'}_{ij}^k \nabla_k R'\\ =& \Delta R'' + 2 \nabla^2 R \cdot (h \times h) - 2 h\cdot \nabla^2 R' + ( 2 \delta h + d trh) \cdot dR' + h^{ij} ( 2 \nabla_i h_j^k - \nabla^k h_{ij}) \nabla_k R\\ &- h \cdot ((2 \delta h + d tr h) \otimes d R), \end{align*} by substituting the expression of ${\Gamma'}_{ij}^k$ and ${\Gamma''}_{ij}^k$.\\ And \begin{align*} &\left.\frac{d^2}{dt^2}\right|_{t=0} |Ric(g(t))|^2_{g(t)}\\ =& \left( g^{ik} g^{jl} R_{ij} R_{kl} \right)''\\ =& 2 g''^{ik}R_{ij} R_k^j + 2 g'^{ik} g'^{jl} R_{ij} R_{kl} + 8 g'^{ik} R'_{ij} R_k^j + 2 R''_{ij} R^{ij} + 2 g^{ik} g^{jl} R'_{ij} R'_{kl}\\ 
=& 4 (Ric \times Ric) \cdot ( h \times h) + 2 |Ric \times h|^2 - 8 Ric' \cdot (Ric \times h) + 2 Ric'' \cdot Ric + 2|Ric'|^2. \\ \end{align*} Also, \begin{align*} \left.\frac{d^2}{dt^2}\right|_{t=0} R(g(t))^2 = (R^2)'' = 2 (R')^2 + 2 R R''. \end{align*} We prove the lemma by combining all three parts. \end{proof} Simply by taking $Ric = 0$ and $R = 0$, we get the second variation of the $Q$-curvature at a Ricci flat metric. \begin{corollary}\label{2nd_variation_Ricci_flat} Suppose the metric $g$ is Ricci flat; then \begin{align} D^2 Q_g \cdot ( h, h ) =A_n(\Delta_{g}R'' - 2 h \cdot \nabla^2 R' + ( 2 \delta h + d (tr h)) \cdot dR') + 2 B_n |Ric'|^2 + 2C_n(R')^2. \end{align} \end{corollary} Now we assume $(M,\bar{g}, f)$ is a $Q$-singular space.\\ Consider the functional \begin{align*} \mathscr{F}(g)=\int_M Q_g \cdot f \ dv_{\bar{g}}. \end{align*} Note that here we fix the volume form to be the one associated to the $Q$-singular metric $\bar{g}$.\\ \begin{remark} The analogous functional \begin{align*} \mathscr{G}(g)=\int_M R_g \cdot f \ dv_{\bar{g}} \end{align*} plays a fundamental role in studying rigidity phenomena of vacuum static spaces (cf. \cite{F-M, B-M, Q-Y_1}, \emph{etc.}). \end{remark} \begin{lemma} The metric $\bar{g}$ is a critical point of the functional $\mathscr{F}(g)$. \end{lemma} \begin{proof} For any symmetric 2-tensor $h \in S_2$, let $g(t) = \bar{g} + t h$, $t \in (-\varepsilon, \varepsilon)$ be a family of metrics on $M$. Clearly, $g(0) = \bar{g}$ and $g'(0) = h$. Then \begin{align*} \left. \frac{d}{dt}\right|_{t=0} \mathscr{F}(g(t)) = \int_M (DQ_{\bar{g}} \cdot h)\, f \ dv_{\bar{g}} = \int_M \Gamma_{\bar{g}} h \cdot f \ dv_{\bar{g}} = \int_M h \cdot \Gamma_{\bar{g}}^* f \ dv_{\bar{g}} = 0, \end{align*} \emph{i.e.} $\bar{g}$ is a critical point of the functional $\mathscr{F}(g)$. \end{proof} Furthermore, if we assume $\bar{g}$ is a flat metric, then by Theorem \ref{Static_Ricci_flat} we can take $f$ to be a nonzero constant.
In particular, we can take $f \equiv 1$, since the $Q$-singular equation is linear in $f$.\\ \begin{lemma}\label{2nd_variation_flat} Let $\bar{g}$ be a flat metric and $f \equiv 1$. Suppose $\delta h = 0$; then the second variation of $\mathscr{F}$ at $\bar{g}$ is given by \begin{align} D^2 \mathscr{F}_{\bar{g}} \cdot (h,h) = -2 \alpha_n \int_M|\Delta(tr h)|^2 dv_{\bar{g}} + \frac{1}{2}B_n\int_M|\Delta \overset{\circ}{h}|^2 dv_{\bar{g}}, \end{align} where $\overset{\circ}{h}$ is the traceless part of $h$ and $\alpha_n := - \frac{1}{2}\left(A_n + \frac{n+1}{2n}B_n + 2C_n \right)= \frac{(n^2 - 2) (n^2 - 2n - 2)}{8n(n-1)^2(n-2)^2} > 0$, $B_n=-\frac{2}{(n-2)^2}< 0$, for any $n \geq 3$. \end{lemma} \begin{proof} With the aid of Corollary \ref{2nd_variation_Ricci_flat}, we have \begin{align*} &\left. \frac{d^2}{dt^2}\right|_{t=0} \mathscr{F}(g(t))\\ =& \int_M Q''_{\bar{g}} dv_{\bar{g}}\\ =& \int_M \left[A_n(\Delta R'' - 2 h \cdot \nabla^2 R' + ( 2 \delta h + d (tr h)) \cdot dR') + 2 B_n |Ric'|^2 + 2C_n(R')^2 \right] dv_{\bar{g}}\\ =& \int_M \left[A_n(- 2 \delta h \cdot d R' + ( 2 \delta h + d (tr h)) \cdot dR') + 2 B_n |Ric'|^2 + 2C_n(R')^2 \right] dv_{\bar{g}}\\ =& \int_M \left[A_n( - \Delta (tr h)) \cdot R' + 2 B_n |Ric'|^2 + 2C_n(R')^2 \right] dv_{\bar{g}}\\ =& \int_M \left[A_n( - \Delta (tr h)) \cdot (- \Delta (tr h) + \delta^2 h - Ric \cdot h) + 2 B_n |Ric'|^2 + 2C_n(R')^2 \right] dv_{\bar{g}}\\ =& \int_M \left[A_n( \Delta (tr h))^2 + 2 B_n |Ric'|^2 + 2C_n(R')^2 \right] dv_{\bar{g}}, \end{align*} where the last step is due to the assumption that $\bar{g}$ is a flat metric and to the divergence-free property of $h$.\\ For exactly the same reasons, by (\ref{1st_variation_Ricci_scalar}), we have \begin{align*} |Ric'|^2 = \frac{1}{4} | \Delta h + \nabla^2 (tr h)|^2 = \frac{1}{4} \left( |\Delta h|^2 + 2 \Delta h \cdot \nabla^2 (trh) + |\nabla^2 (tr h)|^2 \right) \end{align*} and \begin{align*} (R')^2 = ( \Delta (tr h))^2.
\end{align*} Thus \begin{align*} \int_M |Ric'|^2 dv_{\bar{g}}=& \frac{1}{4} \int_M\left[\left( |\Delta h|^2 + 2 \Delta h \cdot \nabla^2 (trh) + |\nabla^2 (tr h)|^2 \right)\right] dv_{\bar{g}}\\ =& \frac{1}{4}\int_M \left[ \left( |\Delta h|^2 + 2 \delta\Delta h \cdot d(trh) + \delta\nabla^2 (tr h) \cdot d(tr h) \right)\right] dv_{\bar{g}}\\ =& \frac{1}{4}\int_M \left[\left( |\Delta h|^2 + 2 \Delta \delta h \cdot d(trh) + \delta d (tr h) \cdot \delta d(tr h) \right)\right] dv_{\bar{g}}\\ =& \frac{1}{4}\int_M \left[\left( |\Delta h|^2 + |\Delta (tr h)|^2 \right)\right] dv_{\bar{g}}\\ =& \frac{1}{4}\int_M \left[ |\Delta \overset{\circ}{h}|^2 + \frac{n+1}{n}( \Delta (tr h))^2 \right] dv_{\bar{g}}. \end{align*} Now \begin{align*} \left. \frac{d^2}{dt^2}\right|_{t=0} \mathscr{F}(g(t)) =& \int_M \left[(A_n + \frac{n+1}{2n}B_n + 2 C_n)( \Delta (tr h))^2 +\frac{1}{2}B_n |\Delta \overset{\circ}{h}|^2 \right] dv_{\bar{g}}\\ =& -2 \alpha_n \int_M|\Delta(tr h)|^2 dv_{\bar{g}} + \frac{1}{2}B_n\int_M(|\Delta \overset{\circ}{h}|^2)dv_{\bar{g}}. \end{align*} This gives the equation we claimed. \end{proof} Now we are ready to prove Theorem \ref{flat_local_rigidity}. \begin{theorem} For $n \geq 3$, let $(M^n,\bar{g})$ be a closed flat Riemannian manifold and $g$ be a metric on $M$ with $$Q_g \geq 0.$$ Suppose $||g - \bar{g}||_{C^2(M, \bar{g})}$ is sufficiently small, then $g$ is also flat. 
\end{theorem} \begin{proof} Since $g$ is $C^2$-close to $\bar{g}$, by the \emph{Slice Theorem} (Theorem \ref{Slice_Theorem}), there exists a diffeomorphism $\varphi \in \mathscr{D}( M ) $ such that $$h := \varphi^* g - \bar{g}$$ is divergence free with respect to $\bar{g}$ and $$||h||_{C^2(M,\bar{g})} \leq N ||g-\bar{g}||_{C^2(M,\bar{g})},$$ where $N > 0$ is a constant depending only on $(M,\bar{g})$.\\ Applying Lemma \ref{2nd_variation_flat}, we can expand $\mathscr{F}(\varphi^*g)$ at $\bar{g}$ as follows, \begin{align}\label{eqn:F_expansion} \mathscr{F}(\varphi^*g) &= \mathscr{F}(\bar g) + D\mathscr{F}_{\bar g} \cdot h + \frac{1}{2}D^2 \mathscr{F}_{\bar g} \cdot (h, h) + E_3\\ \notag &= - \alpha_n \int_M|\Delta(tr h)|^2 dv_{\bar{g}} + \frac{1}{4}B_n\int_M |\Delta \overset{\circ}{h}|^2 dv_{\bar{g}} + E_3, \end{align} where $$|E_3| \leq C_0 \int_M |h|\ |\nabla^2 h|^2 \ dv_{\bar{g}}$$ for some constant $C_0=C_0 (n, M, \bar g)> 0$.\\ We know $$\mathscr{F}(\varphi^*g) = \int_M Q_g\circ\varphi\ dv_{\bar{g}} \geq 0$$ since $Q_{g}\geq0$ and $\varphi$ is a diffeomorphism near the identity. Since $$\alpha_n =-\frac{1}{2}\left(A_n + \frac{n+1}{2n}B_n + 2C_n \right) > 0,\ \ B_n = - \frac{2}{(n-2)^2} < 0$$ for $n\geq 3$, we can take $\mu_n > 0$, a sufficiently small constant depending only on the dimension $n$, such that $$\min\{ \alpha_n - \frac{\mu_n}{n} , - \frac{1}{4}B_n- \mu_n\} > 0.$$ Therefore, with the aid of equation (\ref{eqn:F_expansion}), we have \begin{align*} &\mu_n \int_M |\Delta h|^2 dv_{\bar{g}}\\ \leq& \left( \alpha_n - \frac{\mu_n}{n}\right)\int_M|\Delta(tr h)|^2 dv_{\bar{g}} + \left( - \frac{1}{4}B_n- \mu_n \right)\int_M |\Delta \overset{\circ}{h}|^2 dv_{\bar{g}} + \mu_n \int_M |\Delta h|^2 dv_{\bar{g}}\\ =& \alpha_n \int_M|\Delta(tr h)|^2 dv_{\bar{g}} - \frac{1}{4}B_n\int_M |\Delta \overset{\circ}{h}|^2 dv_{\bar{g}}\\ =& - \mathscr{F}(\varphi^*g) + E_3\\ \leq& |E_3|\\ \leq& C_0 \int_M |h|\ |\nabla^2{h}|^2 \ dv_{\bar{g}}.
\end{align*} Suppose $g$ is sufficiently $C^2$-close to $\bar g$, say $||g-\bar{g}||_{C^2(M,\bar{g})} < \frac{\mu_n}{2NC_0}$; then $$||h||_{C^0(M,\bar{g})} \leq ||h||_{C^2(M,\bar{g})} \leq N ||g-\bar{g}||_{C^2(M,\bar{g})} < \frac{\mu_n}{2C_0}$$ and therefore, \begin{align*} \mu_n\int_M |\nabla^2 h|^2dv_{\bar{g}} = \mu_n\int_M |\Delta h|^2dv_{\bar{g}} \leq C_0 \int_M |h|\ |\nabla^2{h}|^2 \ dv_{\bar{g}} \leq \frac{\mu_n}{2}\int_M |\nabla^2 h|^2dv_{\bar{g}}, \end{align*} which implies $\nabla^2 h = 0$ on $M$. \\ Now we have \begin{align*} \int_M |\nabla h |^2 dv_{\bar{g}} = - \int_M h \Delta h\ dv_{\bar{g}} = 0. \end{align*} That is, $\nabla h = 0$.\\ Since $\bar{g}$ is flat, on a neighborhood $U_p$ of any $ p \in M$ we can find local coordinates such that $\bar{g}_{ij} = \delta_{ij}$ and $\partial_k \bar{g}_{ij} = 0$, $i,j,k = 1, \cdots, n$ on $U_p$.\\ Under the same coordinates, the Christoffel symbols of $\varphi^*g$ are \begin{align*} \Gamma^k_{ij}(\varphi^*g) &= \frac{1}{2} (\varphi^*g)^{kl} \left(\partial_i (\bar{g}_{jl} + h_{jl}) + \partial_j (\bar{g}_{il} + h_{il}) - \partial_l (\bar{g}_{ij} + h_{ij}) \right) \\ &= \frac{1}{2} (\varphi^*g)^{kl} (\nabla_i h_{jl} + \nabla_j h_{il} - \nabla_l h_{ij}) = 0 \end{align*} on $U_p$, since $h$ is parallel with respect to $\bar{g}$. \\ Thus the Riemann curvature tensor of $\varphi^* g$ vanishes identically on $U_p$ for any $p \in M$, which implies that the metric $\varphi^*g$ is flat, and so is $g$. \end{proof} \begin{remark} Fischer and Marsden proved an analogous result for the scalar curvature (see \cite{F-M}). \end{remark} As an application, we can get the local rigidity of $Q$-curvature on compact domains of $\mathbb{R}^n$, which can be thought of as an analogue of the rigidity part of the \emph{Positive Mass Theorem}. \begin{corollary}\label{flat_domain_rigidity} Suppose $\Omega \subset \mathbb{R}^n$ is a bounded domain.
Let $\delta$ be the flat metric and $g$ be a metric on $\mathbb{R}^n$ satisfying \begin{itemize} \item $Q_g \geq 0$, \item $supp(g - \delta) \subset \Omega$, \item $||g - \delta||_{C^2(\mathbb{R}^n, \delta)}$ is sufficiently small; \end{itemize} then $g$ is also flat. \end{corollary} \begin{proof} Since $\Omega$ is a bounded domain in $\mathbb{R}^n$, we can choose a rectangular domain $\Omega'$ which strictly contains $\Omega$. Thus, the metric $g$ agrees with the Euclidean metric on $\Omega' - \Omega$. Now we can derive a metric with nonnegative $Q$-curvature on the torus $T^n$ by identifying opposite faces of $\Omega'$. Clearly, this new metric on $T^n$ is $C^2$-close to the flat metric, and hence has to be flat by Theorem \ref{flat_local_rigidity}. Now the claim follows. \end{proof} It would be interesting to ask whether there is a global rigidity result for $Q$-curvature. No such result is known so far to the best of the authors' knowledge, but we observe that global rigidity holds in some special cases. \begin{proposition}\label{prop:conf_Ricci_flat} Let $(M, g)$ be a closed Riemannian manifold with $$Q_g \geq 0$$ pointwise. Suppose that $g$ is conformal to a Ricci flat metric; then $(M, g)$ is Ricci flat. \end{proposition} \begin{proof} For $n = 4$, by the assumptions, there exists a smooth function $u$ such that $g = e^{2u} \bar g$, where $\bar g$ is a Ricci flat metric on $M$. By (\ref{eqn:conf_Q_4}) in the appendix, we have $$Q_g = e^{-4u} ( P_{\bar g} u + Q_{\bar g} )= e^{-4u} \Delta_{\bar g}^2 u \geq 0.$$ Hence $ \Delta_{\bar g}u$ is subharmonic on the closed manifold $M$, and thus constant by the maximum principle; since $\int_M \Delta_{\bar g} u \, dv_{\bar g} = 0$, in fact $\Delta_{\bar g} u = 0$, so $u$ is harmonic and hence constant. Therefore, $g$ is also Ricci flat. \\ Similarly, for $n \neq 4$, there exists a positive function $u > 0$ such that $g = u^{\frac{4}{n-4}}\bar{g}$, where $\bar g$ is a Ricci flat metric on $M$. Then by (\ref{eqn:conf_Q_n}), $$Q_g = \frac{2}{n-4}u^{-\frac{n+4}{n-4}} P_{\bar{g}} u = \frac{2}{n-4} u^{-\frac{n+4}{n-4}} \Delta_{\bar{g}}^2 u.
$$ For $n=3$, we have $$\Delta_{\bar{g}}^2 u \leq 0,$$ and for $n>4$, $$\Delta_{\bar{g}}^2 u \geq 0,$$ which imply, by the same argument as in the four-dimensional case, that $u$ is a constant in both cases, and thus $g$ is Ricci flat. \end{proof} In particular, we can consider tori $T^n$. First, we need a lemma which characterizes the locally conformally flat structures on $T^n$. \begin{lemma}\label{lem:conf_structure_tori} On the torus $T^n$, any locally conformally flat metric has to be conformal to a flat metric. \end{lemma} \begin{proof} Let $g$ be a locally conformally flat Riemannian metric on $T^n$. According to the solution of the Yamabe problem, $g$ is conformal to a metric $\bar g$ whose scalar curvature is a constant. If $R_{\bar g} < 0$, then by Proposition 1.2 in \cite{S-Y_3} the fundamental group of $T^n$ is non-amenable. But this contradicts the fact that $\pi_1(T^n)$ is abelian, hence amenable. Thus $R_{\bar g} \geq 0$, which implies that $\bar g$ is flat by the famous results of Schoen-Yau and Gromov-Lawson (\cite{S-Y_1, S-Y_2, G-L_1, G-L_2}). \end{proof} Now we can derive the rigidity of tori with respect to nonnegative $Q$-curvature (Theorem \ref{thm:rigidity_tori}): \begin{theorem} Suppose $g$ is a locally conformally flat metric on $T^n$ with $$Q_g \geq 0,$$ then $g$ is flat. In particular, any metric $g$ on $T^4$ with nonnegative $Q$-curvature has to be flat. \end{theorem} \begin{proof} By Lemma \ref{lem:conf_structure_tori}, we can see that $g$ is conformal to a flat metric. Applying Proposition \ref{prop:conf_Ricci_flat}, we conclude that $g$ has to be flat. In particular, for $T^4$ we have the \emph{Gauss-Bonnet-Chern formula}, $$\int_{T^4} \left( Q_g + \frac{1}{4}|W_g|^2 \right) dv_{g} = 8\pi^2 \chi(T^4) = 0$$ on $T^4$. Thus the non-negativity of $Q$-curvature automatically implies that the Weyl tensor $W_g$ vanishes identically on $T^4$, which means $g$ is locally conformally flat. Therefore, $g$ is flat by the previous argument. \end{proof} As for dimension 3, we have the following result.
\begin{proposition} The $3$-dimensional torus $T^3$ does not admit a metric with constant scalar curvature and nonnegative $Q$-curvature, unless it is flat. \end{proposition} \begin{proof} Suppose such a metric $g$ exists. Its scalar curvature is necessarily non-positive, and $g$ is flat only if $R_g = 0$ (c.f. \cite{ S-Y_1, S-Y_2, G-L_1, G-L_2}). If it is non-flat, without loss of generality, we can assume $$R_g = -1.$$ Then we have $$Q_g = - \frac{1}{4}\Delta_g R_g - 2 |Ric_g|^2 + \frac{23}{32} R_g^2 = - 2 |Ric_g|^2 + \frac{23}{32} R_g^2 \geq 0.$$ That is, $$|Ric_g|^2 \leq \frac{23}{64}R_g^2 = \frac{23}{64}.$$ At any point $p \in M$, choose an orthonormal basis $\{e_1, e_2, e_3 \}$ for $T_p M$ such that the Ricci tensor at $p$ is diagonal. Let $\lambda_i$, $i = 1,2,3$, be the eigenvalues of $Ric_g (p)$. Then we have $$\lambda_1 + \lambda_2 + \lambda_3 = -1$$ and $$\lambda_1^2 + \lambda_2^2 + \lambda_3^2 \leq \frac{23}{64}.$$ Hence, since $\lambda_k = -1 - (\lambda_i + \lambda_j)$ for $\{i,j,k\} = \{1,2,3\}$, for $i \neq j$ we have \begin{align*} 0 \geq& \lambda_i^2 + \lambda_j^2 + ( 1+ (\lambda_i + \lambda_j) )^2 - \frac{23}{64} \\ =& (\lambda_i + \lambda_j)^2 + 2 (\lambda_i + \lambda_j) + \lambda_i^2 + \lambda_j^2 + \frac{41}{64} \\ \geq &\frac{3}{2}(\lambda_i + \lambda_j)^2 + 2(\lambda_i + \lambda_j) + \frac{41}{64}, \end{align*} where the last step follows from the elementary inequality $$\lambda_i^2 + \lambda_j^2 \geq \frac{(\lambda_i + \lambda_j)^2}{2} .$$ That is, $$(\lambda_i + \lambda_j)^2 + \frac{4}{3}(\lambda_i + \lambda_j) + \frac{41}{96} \leq 0,$$ which implies that $\lambda_i + \lambda_j \leq - \frac{2}{3} + \frac{\sqrt{10}}{24} < -\frac{1}{2}$. Then the sectional curvature of the plane spanned by $e_i$ and $e_j$ satisfies \begin{align*} K_{ij} = R_{ijji} = R_{ii} g_{jj} + R_{jj} g_{ii} - \frac{1}{2} R g_{ii}g_{jj} = \frac{1}{2} + (\lambda_i + \lambda_j) < 0. \end{align*} Thus $g$ has negative sectional curvature.
But the torus does not admit a metric with negative sectional curvature (Corollary 2 in \cite{B-B-E}), which is a contradiction. \end{proof} \begin{remark} From the last inequality in the above argument, we can easily see that the conclusion actually allows a perturbation of the metric. That is, any metric on $T^3$ with scalar curvature sufficiently $C^4$-close to a negative constant cannot have nonnegative $Q$-curvature, unless it is flat. \end{remark} \begin{remark} According to the solution of the Yamabe problem, there is a metric with constant scalar curvature in each conformal class on any closed Riemannian manifold. Thus we can conclude that for any metric $g$ on $T^n$, if $g$ is not conformally flat, then it must be conformal to a metric with constant negative scalar curvature. However, nonnegativity of $Q$-curvature is in general not preserved under conformal changes of the metric. We hope to develop some techniques to resolve this issue in the future. \end{remark} According to the \emph{Bieberbach Theorem} (see Theorem 10.33 in \cite{C-L-N}), any $n$-dimensional flat Riemannian manifold is a finite quotient of the torus $T^n$. So based on Theorems \ref{flat_local_rigidity} and \ref{thm:rigidity_tori} and the above observations, we give the following conjecture: \begin{conjecture}\label{Positive_Q_on_torus} For $n \geq 3$, let $(M^n,\bar{g})$ be a connected closed flat Riemannian manifold; then there is no metric on $M$ with pointwise positive $Q$-curvature. Moreover, if $g$ is a metric on $M$ with $$Q_g\geq 0,$$ then $g$ is flat. \end{conjecture} \begin{remark} This is a higher order analogue of the famous result due to Schoen-Yau and Gromov-Lawson (\cite{S-Y_1, S-Y_2, G-L_1, G-L_2}), which says that any metric of nonnegative scalar curvature on a torus is flat. \end{remark} \section{Relation between vacuum static spaces and $Q$-singular spaces} In this section, we will discuss the relation between $Q$-singular spaces and vacuum static spaces.
\begin{definition} We say that a complete Riemannian manifold $(M,g)$ is \emph{vacuum static} if there is a smooth function $f (\not\equiv 0)$ on $M$ solving the following \emph{vacuum static equation} \begin{align} \gamma_g^* f := \nabla^2 f - \left(Ric_g - \frac{R_g}{n-1} g \right) f = 0. \end{align} We also refer to $(M,g,f)$ as a \emph{vacuum static space} if $f (\not\equiv 0) \in \ker \gamma_g^*$. \end{definition} Now we can prove the main result (Theorem \ref{Q-static_R-static}) in this section. \begin{theorem} Let $\mathcal{M}_R$ be the space of all closed vacuum static spaces and $\mathcal{M}_Q$ be the space of all closed $Q$-singular spaces. Suppose $(M, g, f) \in \mathcal{M}_R \cap \mathcal{M}_Q$; then $M$ is necessarily Einstein. In particular, it has to be either Ricci flat or isometric to a round sphere. \end{theorem} \begin{proof} If $(M,g,f)$ is vacuum static, then $M$ necessarily has constant scalar curvature (c.f. \cite{F-M}). Then the $Q$-singular equation can be reduced to \begin{align}\label{eqQ_and_Rstatic} \Gamma_g^* f =& A_n \left( - g \Delta^2 f + \nabla^2 \Delta f - Ric \Delta f \right)- 2 C_n R \left( g\Delta f - \nabla^2 f + f Ric \right)\\ &\notag - B_n \left( \Delta (f Ric) + 2 f \overset{\circ}{Rm}\cdot Ric + g \delta^2 (f Ric) + 2 \nabla \delta (f Ric) \right)\\ =& \notag 0.
\end{align} By the contracted second Bianchi identity \begin{align*} \delta Ric = - \frac{1}{2} dR = 0, \end{align*} we can simplify (\ref{eqQ_and_Rstatic}) further: \begin{align*} &-\frac{1}{B_n}\Gamma_g^* f \\ =& -\frac{A_n}{B_n} \left( \nabla^2 \Delta f - g \Delta^2 f - Ric \Delta f \right) - \frac{2C_n}{B_n}R \left( \nabla^2 f - g \Delta f - Ric f \right) \\ &+ f (\Delta Ric + 2 \overset\circ{Rm} \cdot Ric) + g (Ric \cdot \nabla^2 f) + Ric \Delta f - 2 \nabla^2 f \times Ric + 2 C \cdot \nabla f\\ =& -\frac{A_n}{B_n}\ \gamma_g^* (\Delta f) - \frac{2C_n}{B_n}R\ \gamma_g^* f + f \Delta_L Ric + g (Ric \cdot \nabla^2 f) + Ric \Delta f +\frac{2 R }{n-1}f Ric + 2 C \cdot \nabla f\\ =& 0, \end{align*} where $\Delta_L$ is the Lichnerowicz Laplacian and $C$ is the Cotton tensor, \begin{align*} C_{ijk} = \left( \nabla_i R_{jk} - \frac{1}{2(n-1)}g_{jk}\nabla_i R\right) - \left( \nabla_j R_{ik} - \frac{1}{2(n-1)}g_{ik} \nabla_j R \right) = \nabla_i R_{jk} - \nabla_j R_{ik} \end{align*} and \begin{align*} (C \cdot \nabla f)_{jk} := C_{ijk} \nabla^i f. \end{align*} Now suppose $g$ is also vacuum static; then $\Delta f = -\frac{R}{n-1}f$ and hence \begin{align*} \gamma_g^* (\Delta f) = -\frac{R}{n-1} \gamma_g^* f = 0. \end{align*} Thus we have \begin{align*} -\frac{1}{B_n}\Gamma_g^* f = f \Delta_L Ric + g (Ric \cdot \nabla^2 f) + \frac{R}{n-1} f Ric + 2 C \cdot \nabla f = 0. \end{align*} Taking the trace and applying the vacuum static equation, \begin{align*} -\frac{1}{B_n}\ tr\ \Gamma_g^* f =& f \Delta R + n Ric \cdot \nabla^2 f + \frac{R^2}{n-1} f\\ =& n Ric \cdot \left( Ric - \frac{R}{n-1}g\right) f + \frac{R^2}{n-1} f\\ =& n \left( |Ric|^2 - \frac{R^2}{n} \right) f \\ =& n \left|Ric - \frac{R}{n} g\right|^2 f\\ =& 0. \end{align*} In a vacuum static space, $df \neq 0$ on $f^{-1}(0)$ (c.f. \cite{F-M}), so $f^{-1}(0)$ is a regular hypersurface in $M$ and hence $f \neq 0$ on a dense subset of $M$, which implies that $$Ric = \frac{R}{n} g,$$ \emph{i.e.} $(M,g)$ is Einstein.
For a closed vacuum static space, its scalar curvature is necessarily nonnegative. Hence the assertion follows easily from Theorem \ref{Classificaition_Q_singular_Einstein}. \end{proof} \appendix \section{Conformal covariance of $Q$-curvature and Paneitz operator} In this appendix, we will give a brief introduction to $Q$-curvature and the Paneitz operator from the viewpoint of conformal geometry. \\ Let $(M^2, g)$ be a Riemannian surface. Consider the conformal metric $\tilde{g} = e^{2u}g$. A simple calculation shows that the \emph{Laplace-Beltrami operator} is in fact conformally covariant: \begin{equation} \Delta_{\tilde{g}} = e^{-2u}\Delta_{g}, \end{equation} and the Gaussian curvature satisfies \begin{equation} K_{\tilde{g}} = e^{-2u}\left(-\Delta_{g}u + K_{g} \right). \end{equation}\\ For $n\geq 3$, a second-order differential operator associated to a Riemannian metric $g$, called the \emph{conformal Laplacian}, is defined as \begin{align} L_{g}:=-\frac{4(n-1)}{(n-2)}\Delta_{g} + R_{g}. \end{align} If we take $\tilde{g}=u^{\frac{4}{n-2}}g$, $u>0$, to be a metric conformal to $g$, then \begin{equation} L_{\tilde{g}}\phi = u^{-\frac{n+2}{n-2}}L_{g}(u\phi) \end{equation} for any $\phi \in C^2(M)$. Hence $L_g$ can be thought of as a higher dimensional analogue of the \emph{Laplace-Beltrami operator} on surfaces. By taking $\phi \equiv 1$, the scalar curvature can be calculated as follows: \begin{equation} R_{\tilde{g}} =u^{-\frac{n+2}{n-2}} L_{g}u. \end{equation}\\ As a generalization, we seek a higher-order operator satisfying a similar transformation law. In \cite{Paneitz}, Paneitz introduced a $4^{th}$-order differential operator $P_g$ (now called the \emph{Paneitz operator}) for any dimension $n \geq 3$.
Shortly after, Branson (\cite{Branson}) realized that the $0^{th}$-order term in the Paneitz operator can be used to define a generalization of $Q$-curvature for dimensions $n\neq 4$: \begin{align} Q_{g} = A_n \Delta_{g} R_{g} + B_n |Ric_{g}|_{g}^2 + C_nR_{g}^2, \end{align} where $A_n = - \frac{1}{2(n-1)}$, $B_n = - \frac{2}{(n-2)^2}$ and $C_n = \frac{n^2(n-4) + 16 (n-1)}{8(n-1)^2(n-2)^2}$. Thus the Paneitz operator can be rewritten as \begin{align} P_g = \Delta_g^2 - div_g \left[(a_n R_g g + b_n Ric_g) d\right] + \frac{n-4}{2}Q_g, \end{align} where $a_n = \frac{(n-2)^2 + 4}{2(n-1)(n-2)}$ and $b_n = - \frac{4}{n-2}$. In fact, when $n=4$, let $\tilde{g} = e^{2u} g$ be a conformal metric; then the Paneitz operator satisfies a transformation law similar to that of the Laplace-Beltrami operator on surfaces: \begin{align} P_{\tilde{g}} = e^{-4u}P_g \end{align} and $Q$-curvature satisfies \begin{align}\label{eqn:conf_Q_4} Q_{\tilde{g}} = e^{-4u} \left(P_g u + Q_g \right). \end{align} As for $n\neq 4$, let $\tilde{g} = u^{\frac{4}{n-4}}g$, $u > 0$; then \begin{align} P_{\tilde{g}} \phi = u^{-\frac{n+4}{n-4}}P_g (u \phi), \end{align} for any $\phi \in C^4 (M)$. In particular, by taking $\phi \equiv 1$, we get the transformation law for $Q$-curvature: \begin{align}\label{eqn:conf_Q_n} Q_{\tilde{g}} = \frac{2}{n-4}u^{-\frac{n+4}{n-4}}P_g u. \end{align} In fact, the Paneitz operator is a higher-order analogue of the conformal Laplacian.\\ Based on the similarity between $Q$-curvature and scalar curvature, one can consider a \emph{Yamabe-type problem} for $Q$-curvature: seeking constant $Q$-curvature metrics in a given conformal class. Many people have contributed to this problem or related problems under different geometric assumptions. In particular, for closed $4$-manifolds, one significant feature of $Q$-curvature is that the total $Q$-curvature is a conformal invariant.
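As an elementary consistency check (ours, not part of the paper), the constants $A_n$, $B_n$, $C_n$ above can be verified with exact rational arithmetic: for $n=4$ they reduce to the classical coefficients of $Q_4 = -\frac{1}{6}\Delta R - \frac{1}{2}|Ric|^2 + \frac{1}{6}R^2$, for $n=3$ they give the coefficients $-\frac{1}{4}$, $-2$, $\frac{23}{32}$ used in the $T^3$ proposition above, and the closed form of $\alpha_n$ in Lemma \ref{2nd_variation_flat} holds. The function name `q_constants` is ours.

```python
from fractions import Fraction as F

def q_constants(n):
    """Dimensional constants A_n, B_n, C_n of Branson's Q-curvature."""
    return (F(-1, 2 * (n - 1)),                 # A_n
            F(-2, (n - 2) ** 2),                # B_n
            F(n ** 2 * (n - 4) + 16 * (n - 1),  # C_n
              8 * (n - 1) ** 2 * (n - 2) ** 2))

# n = 4: the classical Q_4 = -(1/6) Delta R - (1/2)|Ric|^2 + (1/6) R^2
assert q_constants(4) == (F(-1, 6), F(-1, 2), F(1, 6))
# n = 3: the coefficients -1/4, -2, 23/32 used for the torus T^3
assert q_constants(3) == (F(-1, 4), F(-2), F(23, 32))

# closed form of alpha_n = -(A_n + (n+1)/(2n) B_n + 2 C_n)/2
for n in range(3, 100):
    A, B, C = q_constants(n)
    assert -(A + F(n + 1, 2 * n) * B + 2 * C) / 2 == \
        F((n ** 2 - 2) * (n ** 2 - 2 * n - 2),
          8 * n * (n - 1) ** 2 * (n - 2) ** 2)
print("constants verified")
```

The loop checks the rational identity for $\alpha_n$ at the integer points $3 \le n < 100$; since both sides are fixed rational functions of $n$, this amply confirms the algebraic identity stated in Lemma \ref{2nd_variation_flat}.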
Indeed, according to \cite{Gur98, Gur99}, the Paneitz operator $P_{g}$ and the total $Q$-curvature together provide geometric information about the manifold. Thus, under some geometric conditions, the existence of constant $Q$-curvature metrics can be obtained (c.f. \cite{CY95, DM08, LLL12}). As for dimensions other than four, different approaches were applied due to the lack of a maximum principle. One of the approaches is to understand the relation among $Q$-curvature equations, Green's function of the Paneitz operator and certain geometric invariants. We refer interested readers to \cite{QR06, Lin15, GM14, HY14a, HY14b, GHL15}. \end{document}
\begin{document} \title{Continued Fractions, Quadratic Fields, and Factoring: \\ Some Computational Aspects } \author{Michele Elia \thanks{Politecnico di Torino Corso Duca degli Abruzzi 24, I - 10129 Torino -- Italy; ~~ e-mail: [email protected] }} \maketitle \thispagestyle{empty} \begin{abstract} \noindent Legendre discovered that the continued fraction expansion of $\sqrt N$ having odd period leads directly to an explicit representation of $N$ as the sum of two squares. In this vein, it was recently observed that the continued fraction expansion of $\sqrt N$ having even period directly produces a factor of composite $N$. It is shown here that these apparently fortuitous occurrences allow us to propose and apply a variation of Shanks' infrastructure method which significantly reduces the asymptotic computational burden with respect to currently used factoring techniques. \end{abstract} \section{Introduction} In a letter to Pierre de Carcavi, dated August $14^{th}$, $1659$, Pierre de Fermat reported several propositions; in particular, he stated the following theorem: {\it Every prime $p$ of the form $4k+1$ is uniquely expressible as the sum of two squares, i.e.} $ p = X^2+Y^2 ~~ \Leftrightarrow ~~ p \equiv 1 \bmod 4$, ~ whose first known proof was given by Euler using Fermat's {\em infinite descent} method. Many other proofs have been given, some constructive, others non-constructive; in particular, among the latter, Zagier's one-sentence proof deserves to be mentioned for its conciseness \cite{zagier}. Among the numerous constructive proofs, two different proofs by Gauss stand out. The first is direct, and gives $ x= \frac{(2k)!}{2(k!)^2} \bmod p$ and $y= \frac{((2k)!)^2}{2(k!)^2} \bmod p $; the partly incomplete proof was completed a century later by Jacobsthal.
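Gauss' first construction lends itself to direct computation. The sketch below (an illustrative Python script of ours, not part of the works discussed) evaluates the two congruences using modular arithmetic, reducing both residues to least absolute value; the function name is our own.

```python
def gauss_two_squares(p):
    """Write a prime p = 4k+1 as x^2 + y^2 via Gauss' congruences
    x = (2k)!/(2 (k!)^2) mod p and y = ((2k)!)^2/(2 (k!)^2) mod p,
    reducing both residues to absolute value below p/2."""
    assert p % 4 == 1
    k = p // 4
    fact2k = 1
    for i in range(2, 2 * k + 1):               # (2k)! mod p
        fact2k = fact2k * i % p
    factk = 1
    for i in range(2, k + 1):                   # k! mod p
        factk = factk * i % p
    inv = pow(2 * factk * factk % p, p - 2, p)  # inverse of 2(k!)^2 by Fermat
    x = fact2k * inv % p                        # (2k)!/(2(k!)^2) mod p
    y = fact2k * x % p                          # ((2k)!)^2/(2(k!)^2) mod p
    x, y = min(x, p - x), min(y, p - y)         # least absolute residues
    return x, y

print(gauss_two_squares(13))  # (3, 2), since 13 = 3^2 + 2^2
```

For $p=13$ (so $k=3$) the script returns $(3,2)$, and indeed $13 = 3^2 + 2^2$; this is the same decomposition that Legendre's continued fraction method, recalled below, extracts from the expansion of $\sqrt{13}$.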
The second proof is based on quadratic forms, and considers two equivalent forms of discriminant $-4$ in the principal class: $ p X^2+2 b_1XY+ \frac{b_1^2+1}{p}Y^2 ~~\mbox{and}~~~ x^2+y^2$, where $b_1$ is a root of $z^2+1$ modulo $p$. The first form represents $p$ trivially with $X=1$ and $Y=0$; Gauss' reduction then produces the unique reduced form in the class \cite{mathews}, and in the process yields $x$ and $y$. \noindent Jacobsthal's constructive solution (1906) is based on counting the number of points on the elliptic curve $y^2=n(n^2-a)$ in $\mathbb Z_p$. He considers the sum of Legendre symbols $$ S(a) = \sum_{n=1}^{p-1} \left(\frac{n(n^2-a)}{p} \right) \Rightarrow x=\frac{1}{2} S(q_R) ~~,~~ y=\frac{1}{2} S(q_N) $$ where $q_R, q_N \in \mathbb Z_p$ are any quadratic residue and non-residue, respectively \cite{jacobsthal}. \noindent Legendre's proof is reported on pages 59-60 of \cite{legendre}. It is constructive, since it yields $X$ and $Y$ from a complete quotient of the continued fraction expansion of $\sqrt p$. It is well explained in his own words \begin{quotation} {\em ... Donc toutes les fois que l'\'equation $x^2 -Ay^2=-1$ est r\'esoluble (ce qui a lieu entre autres cas lorsque $A$ est un nombre premier $4n+1$) le nombre $A$ peut toujours \^etre d\'ecompos\'e en deux quarr\'es; et cette d\'ecomposition est donn\'ee imm\'ediatement par le quotient-complet $\frac{\sqrt A + I}{D}$ qui r\'epond au second des quotients moyens compris dans la premi\`ere p\'eriode du d\'eveloppement de $\sqrt A$; les nombres $I$ et $D$ \'etant ainsi connus, on aura $A=D^2+I^2$. \vspace{2mm} Cette conclusion renferme un des plus beaux th\'eor\`emes de la science des nombres, savoir, {\em que tout nombre premier $4n+1$ est la somme de deux quarr\'es;} elle donne en m\^eme temps le moyen de faire cette d\'ecomposition d'une mani\`ere directe et sans aucun} {\bf \textcolor{black}{t\^atonnement}}.
\end{quotation} \noindent Thus, Legendre's proof gives the representation of any composite $N$ such that the period of the continued fraction for $\sqrt N$ is odd, or equivalently, such that $x^2-N y^2=-1$ is solvable in integers \cite{legendre,sierp,elia1}. \\ As a counterpart to Legendre's finding, when the period of the continued fraction expansion of $\sqrt N$ is even, we directly obtain, under mild conditions, a factor of a composite $N$. In particular, this is certainly the case when both prime factors of $N=pq$ are congruent to $3$ modulo $4$ \cite{elia1}. Legendre's solution of Fermat's theorem tacitly introduces a connection between continued fractions and the ramified primes of quadratic number fields, a notion that would become available only with Dedekind's work more than a century later. To explain these singular connections, the paper is organized as follows. Section 2 summarizes the properties of the continued fraction expansion of $\sqrt N$. Section 3 discusses the factorization of composite numbers $N$. Lastly, Section 4 draws conclusions. \section{Preliminaries} A regular continued fraction is an expression of the form \begin{equation} \label{cf} a_0+\frac{1}{ a_1+\frac{1}{a_2+\frac{1}{ a_3+ \cdots}}} ~~, \end{equation} where $a_0$, $a_1$, $a_2, \ldots, a_i, \ldots$ is a sequence, possibly infinite, of positive integers. The convergents of a continued fraction are the fractions $\frac{A_m}{B_m}$ obtained by truncating the continued fraction at the $(m+1)$-th term; the fraction $\frac{A_m}{B_m}$ is called the $m$-th convergent \cite{dave,hardy}. A continued fraction is said to be definitively periodic, with period $\tau$, if, starting from a finite position $n_0$, a fixed pattern $a_1'$, $a_2', \ldots, a_{\tau}'$ repeats indefinitely. Lagrange showed that any definitively periodic continued fraction, of period length $\tau$, represents a positive number of the form $a+b\sqrt{N}$, $a,b \in \mathbb Q$, i.e.
an element of $\mathbb Q(\sqrt N)$, and conversely any such positive number is represented by a definitively periodic continued fraction \cite{dave,sierp}. The period of the continued fraction expansion of $\sqrt{N}$ begins immediately after the first term $a_0$, and is written as $ \sqrt{N} = \left[ a_0 , \overline{a_1,a_2, \ldots, a_2, a_1, 2a_0} \right]$, where the over-lined part is the period, which includes a palindromic part formed by the $\tau-1$ terms $~~a_1,a_2, \ldots, a_2, a_1$. In Carr's book \cite[p.70-71]{carr} we find a good collection of properties of the continued fraction expansion of $\sqrt{N}$, which are summarized in the following, along with some properties taken from \cite{dave,sierp}. \begin{enumerate} \item Let $c_n$ and $r_n$ be the elements of two sequences of positive integers defined by the relation $$ \frac{\sqrt{N}+c_n}{r_n}=a_{n+1}+\frac{r_{n+1}}{\sqrt{N}+c_{n+1}} $$ with $c_0= \left\lfloor \sqrt N \right\rfloor$ and $ r_0 =N-a_0^2$; the elements of the sequence $a_1, a_2, \ldots , a_n, \ldots$ are thus obtained as the integer parts of the left-side fraction, which is known as the complete quotient. \item Let $a_0= \lfloor \sqrt{N} \rfloor$ be initially computed, and set $c_0 = a_0$, $r_0 =N-a_0^2$; then the sequences $\{ c_n \}_{n \geq 0}$ and $\{ r_n \}_{n \geq 0}$ are produced by the recursions {\small \begin{equation} \label{contfrac} a_{m+1} = \left\lfloor \frac{a_0+c_m}{r_m} \right\rfloor ~,~ c_{m+1}=a_{m+1} r_m -c_m ~,~ r_{m+1} = \frac{N-c_{m+1}^2}{r_m}. \end{equation} } These recursions allow us to compute the sequence $\{a_m\}_{m\geq 1}$ using only rational arithmetic operations, and the iterations may be stopped when $a_m = 2 a_0$, having completed a period. \item If the period length $\tau$ is odd, set $\ell= \frac{\tau-1}{2}$; Legendre discovered and proved that the complete quotient $\frac{\sqrt{N} +c_\ell}{r_\ell}$ gives a representation of $N=c_\ell^2+r_\ell^2$ as the sum of two squares.
\item The numerator $A_n$ and denominator $B_n$ of the $n$-th convergent to $\sqrt{N}$ can be recursively computed as $A_n = a_n A_{n-1}+ A_{n-2}$ and $B_n=a_n B_{n-1}+ B_{n-2}$, ~$n \geq 1$, respectively, with initial conditions $A_{-1}=1$, $B_{-1}=0$, $A_{0}=a_0$, and $B_{0}=1$. The numerator $A_m$ and the denominator $B_m$ of any convergent are shown to be relatively prime by the relation $ A_mB_{m-1}-A_{m-1}B_m=(-1)^{m-1} $ \cite[p.85]{dave}. \item Using the sequences $\{ A_m \}_{m \geq 0}$ and $\{ B_m \}_{m \geq 0}$, two sequences $\mathbf \Delta = \{\Delta_m=A_m^2-N B_m^2\}_{m \geq 0}$ and $\mathbf \Omega = \{ \Omega_m = A_m A_{m-1}-N B_m B_{m-1} \}_{m \geq 1}$ are introduced. It can easily be checked that $\Omega_m^2 - \Delta_m \Delta_{m-1} =N,~ \forall m \geq 1$. The elements of $\mathbf \Delta$ and $\mathbf \Omega$ satisfy a system of linear recurrences \begin{equation} \label{Deltarecur} \left\{ \begin{array}{ll} \Delta_{m+1} = a_{m+1}^2 \Delta_{m} + 2 a_{m+1} \Omega_{m}+ \Delta_{m-1} \\ \Omega_{m+1} = \Omega_{m} + a_{m+1} \Delta_{m} \\ \end{array} \right. ~~~~~~m \geq 1 \end{equation} with initial conditions $\Delta_0=a_0^2-N$, $\Delta_1=(1+a_0a_1)^2-N a_1^2$ and $\Omega_1=(1+a_0a_1) a_0-N a_1$. By (\ref{Deltarecur}), it is immediate to see that $c_{m}= |\Omega_m|$ and $r_{m}=|\Delta_m|$ for every $m \geq 1$. \item The period of $\mathbf \Delta$ and $\mathbf \Omega$ is $\tau$ or $2\tau$, depending on whether $\tau$ is even or odd. \item The sequence of ratios $ \frac{A_n}{B_n}$ converges to $\sqrt{N}$ as $n$ goes to infinity, by the inequality $ \left| \frac{A_n}{B_n} -\sqrt N \right| \leq \frac{1}{B_n B_{n+1}} ~~, $ since $B_n$ goes to infinity with $n$. Since $\frac{A_n}{B_n} <\sqrt N$ if $n$ is even, and $\frac{A_n}{B_n} >\sqrt N$ if $n$ is odd \cite{hardy}, any convergent of even index is smaller than any convergent of odd index.
This property implies that the terms of the sequence $\mathbf \Delta$ have alternating signs, with $\Delta_1 > 0$. \item The value $c_0=a_0$ is the greatest value that $c_n$ may assume. No $a_n$ or $r_n$ can be greater than $2a_0$. \\ If $r_n=1$ then $a_{n+1}=2a_0$. For all $n$ greater than $0$, we have $a_0-c_n < r_n\leq 2a_0$. The first complete quotient that is repeated is $\frac{\sqrt{N} +c_0}{r_0}$, and $a_1$, $r_0$, and $c_0$ commence each cycle of repeated terms. \item Through the first period, we have the equalities $a_{\tau-j}=a_j$, $r_{\tau-j-2}=r_j$, and $c_{\tau-j-1}=c_j$. \item The period $\tau$ has the tight upper bound $ 0.72 \sqrt{N} \ln N ~,~ \ N > 7$, as was shown by Kraitchik \cite[p.95]{steuding}. However, the period length has irregular behavior as a function of $N$, because it may assume any value from $1$, when $N=M^2+1$, to values close to the order $O( \sqrt{N} \ln N )$ \cite{sierp}. \item Define the sequence of quadratic forms $\mathbf f_m(x,y)=\Delta_m x^2+ 2 \Omega_m x y + \Delta_{m-1}y^2$, $m \geq 1$, which has the same period as $\mathbf \Delta$. Every $\mathbf f_m(x,y)$ is a reduced form of discriminant $4N$. Within the first block, all quadratic forms $ \mathbf f_m(x,y)$, $1 \leq m \leq \tau$, are distinct, and constitute the principal class $\mathbf \Gamma(\mathbf f)$ of reduced forms, with the ordering of the elements inherited from $\mathbf \Delta$. The definition of reduced form used here is slightly different from the classic one: set $\kappa=\min \{ |\Delta_m |, |\Delta_{m-1} | \}$; it is easily checked that $\Omega_m$ is the sole integer such that $\sqrt N- |\Omega_m|<\kappa <\sqrt N+|\Omega_m| $, with the sign of $\Omega_m$ chosen opposite to the sign of $\Delta_m $. Since the sign of $\Delta_{m-1}$ is the same as that of $\Omega_m$, which is opposite to that of $\Delta_m$, in $\mathbf{\Gamma}(f)$ the two triples of signs (signatures) $(-,+,+)$ and $(+,-,-)$ alternate.
\end{enumerate} \noindent The following theorems are taken, without proof, from \cite{elia1}. \begin{theorem} \label{per2} Starting with $m=1$, the sequences $\mathbf \Delta = \{ \Delta_m \}_{m \geq 0}$ and $\mathbf \Omega = \{ \Omega_m \}_{m \geq 0}$ are periodic with the same period $\tau$ or $2\tau$, depending on whether $\tau$ is even or odd. The elements of the blocks $\{ \Delta_m \}_{m=0}^{\tau}$ and $\{ \Omega_m \}_{m=1}^{\tau}$ satisfy the symmetry relations $\Delta_m=(-1)^{\tau}\Delta_{\tau-m-2}, ~~\forall~m \leq \tau-3$ and $\Omega_{\tau-m-1}=(-1)^{\tau+1}\Omega_{m}, ~~\forall~m \leq \tau-2$, respectively. \end{theorem} \noindent If $\tau$ is odd, the ordered set $\{ \Delta_m \}_{m=1}^{\tau}$ has a central term of index $\ell= \frac{\tau-1}{2}$, with $\Delta_\ell =- \Delta_{\ell-1}$ since $\tau-\ell-2=\ell-1$, and the equation $\Omega_\ell^2 - \Delta_\ell \Delta_{\ell-1} =N$ gives a solution of the Diophantine equation $x^2+y^2=N$ with $x=\Delta_\ell$ and $y=\Omega_\ell$, a situation first recognized by Legendre. \noindent If $\tau$ is even, the ordered set $\{ \Delta_m \}_{m=1}^{\tau}$ has no central term; in this case, with $\ell = \frac{\tau-2}{2}$, we have $\Omega_{\ell+1} =- \Omega_\ell$ and $\Delta_{\ell+1} = \Delta_\ell$, hence $\mathbf f_{\ell+1}(x,y)=\mathbf f_\ell(y,-x)$. \begin{theorem} \label{sym3a} Let the period $\tau$ of the continued fraction expansion of $\sqrt{N}$ be even; we have $\Omega_{\tau-1}=-a_0$, $ \Delta_{\tau} = \Delta_{\tau-2}$, and $\Omega_{\tau}=- \Omega_{\tau-1}$. Defining the integer $\gamma \in \mathfrak O_{\mathbb Q(\sqrt N)}$ by the product $$ \gamma= \prod_{m=1}^\tau \left(\sqrt N +(-1)^{m} \Omega_m \right) ~~,$$ and letting $\sigma$ denote the Galois automorphism of $\mathbb Q(\sqrt N)$ (i.e.\ $\sigma(\sqrt N)=-\sqrt N$), then $ \frac{\gamma}{\sigma(\gamma)}=A_{\tau-1}+B_{\tau-1}\sqrt{N}$ is a positive fundamental unit (or the cube of the fundamental unit) of $\mathbb Q(\sqrt N)$.
\end{theorem} \noindent Based on this theorem, we say that the unit $\mathfrak c_{\tau-1}=A_{\tau-1}+B_{\tau-1}\sqrt{N}$ in $\mathbb Q(\sqrt N)$ splits $N$ if $N_1=\gcd \{A_{\tau-1} -1, N \}$ is neither $1$ nor $N$. In that case we have the proper factorization $N=N_1 N_2$. Further, using the following involutory matrix \cite{elia1}, whose square is $(-1)^{\tau} I_2$, $$ M_{\tau-1} = \left[ \begin{array}{cc} - A_{\tau-1} & N B_{\tau-1} \\ - B_{\tau-1} & A_{\tau-1} \end{array} \right] ~~, $$ it is shown that \begin{equation} \label{involutmat} A_{\tau-m-2} =(-1)^{m-1}( A_{\tau-1} A_m - N B_{\tau-1} B_m) ~~1\leq m \leq \tau-2 ~~. \end{equation} As an immediate consequence of this equation, if the unit $\mathfrak c_{\tau-1}$ splits $N$, then any pair $(A_m,A_{\tau-m-2})$ splits $N$: taking $A_{\tau-m-2}$ modulo $N$ we have $A_{\tau-m-2} = (-1)^{m-1} A_m A_{\tau-1} \bmod N$, so $A_{\tau-m-2}$ is certainly different from $A_m$, because $A_{\tau-1} \neq \pm 1 \bmod N$. \begin{theorem} \label{locfactor} If the period $\tau$ of the continued fraction expansion of $\sqrt{N}$ is even, the element $\mathfrak c_{\tau-1}$ in $\mathbb Q(\sqrt N)$ splits $4N$, and a factor of $4N$ is located at positions $\frac{\tau-2}{2}+j\tau$, $j=0,1, \ldots$, in the sequence $\mathbf \Delta=\{\mathfrak c_{m} \sigma(\mathfrak c_{m}) \}_{m \geq 1}$. \end{theorem} \section{Factorization} Gauss recognized the importance, and the great difficulty, of the factoring problem: \begin{quotation} \noindent {\em $\ldots$ Problema, numeros primos a compositis dignoscendi, hosque in factores suos primos resolvendi, ad gravissima ac utilissima totius arithmeticae pertinere, et geometrarum tum veterum tum recentiorum industriam ac sagacitatem occupavisse, tam notum est, ut de hac re copiose loqui superfluum foret. $\ldots$ } {\scriptsize \sc C. F. Gauss [{\em Disquisitiones Arithmeticae} Art.
329]} \end{quotation} In spite of much effort, various different approaches, and the increased importance stemming from the large number of cryptographic applications, no satisfactory factoring method has yet been found. However, approaches to factoring based on continued fractions have led to some of the most efficient factoring algorithms. In the following, a new variant of Shanks' infrastructural method \cite{shanks} is described which exploits a property of the block $\mathbf \Delta_1=\{ \Delta_m \}_{m=1}^{\tau}$, made precise in the following theorem, taken without proof from \cite{elia1}. \begin{theorem} \label{mainfactor} Let $N$ be a positive square-free integer. If the norm of the positive fundamental unit $\mathfrak u \in \mathbb Q(\sqrt N)$ is $1$, and some factor of $N$ is the square of a principal integral ideal in $ \mathbb Q(\sqrt N)$, then $\mathfrak u$ splits $N$. A proper factor of $N$ is found in position $\frac{\tau-2}{2}$ of $\mathbf \Delta_1$. \end{theorem} \noindent It should be noted that $\mathbf \Delta_1$ offers several different ways of factoring a composite number $N$: \begin{enumerate} \item If $\tau$ is even and $2$ is not a quadratic residue modulo $N$, then in position $\frac{\tau-2}{2}$ of the sequence $\mathbf \Delta_1$ we find a factor of $N$. \item If $\tau$ is odd, then by Legendre's results we find a representation $N=X^2+Y^2$, which implies that $s_1= \frac{X}{Y} \bmod N$ is a square root of $-1$. If we are able to find another square root $s_2$ different from $-\frac{X}{Y} \bmod N$ (there are four different square roots of a quadratic residue modulo $N=p q$), then $\gcd\{s_1-s_2,N\}$ yields a proper factor of $N$. \item If some square $d_o^2$ is found in the sequence $\mathbf \Delta_1$, it implies the equation $A_m^2-NB_m^2=d_o^2$, so there is a chance that some proper factor of $N$ divides $(A_m-d_o)$ or $(A_m+d_o)$.
\\ The number of squares in $\mathbf \Delta_1$ is $O(\sqrt \tau)$, and about half of these squares factor $N$. This method was introduced by Shanks. \item If equal terms $\Delta_m=\Delta_n$, $m\neq n$, occur in $\mathbf \Delta_1$, with $m,n <\frac{\tau}{2}$, then $A_m^2-A_n^2=0 \bmod N$ allows us to find two factors of $N$ by computing $\gcd\{A_m-A_n,N \}$ and $\gcd\{A_m+A_n,N \}$. This is an implementation of an old idea of Fermat's. \end{enumerate} \subsection{Computational issues} By Theorem \ref{mainfactor} we know that a factor of $N$ is $\Delta_{\frac{\tau-2}{2}}$, which can be computed directly from the continued fraction of $\sqrt N$ in $\frac{\tau-2}{2}$ steps. Unfortunately, this number of steps is usually prohibitively large. However, if $\tau$ is known, using the baby-step/giant-step artifice the number of steps can be reduced to the order $O(\log_2 \tau)$. To this end, we can move through the principal class $\mathbf \Gamma(\mathbf f)$ of ordered quadratic forms $\mathbf f_{m}(x,y)$ by introducing a notion of distance between pairs of quadratic forms compliant with Gauss' quadratic form composition. The distance between two adjacent quadratic forms $\mathbf f_{m+1}(x,y), \mathbf f_{m}(x,y) \in \mathbf \Gamma(\mathbf f)$ is defined as \begin{equation} \label{defdist} d(\mathbf f_{m+1}, \mathbf f_{m}) =\frac{1}{2} \ln \left( \frac{\sqrt{N}+(-1)^m\Omega_m }{\sqrt{N}-(-1)^m\Omega_m} \right) ~~, \end{equation} and the distance between two quadratic forms $\mathbf f_{m}(x,y)$ and $\mathbf f_n(x,y)$, with $m > n$, is defined as the sum $ d(\mathbf f_{m}, \mathbf f_{n}) = \sum_{j=n}^{m-1} d(\mathbf f_{j+1}, \mathbf f_{j}) $. The distance of $\mathbf f_{m}(x,y)$ from the beginning of $\mathbf \Gamma(\mathbf f)$ is defined with reference to a properly-chosen quadratic form $ \mathbf f_{0}= \Delta_0 x^2-2 \sqrt{N-\Delta_0} x y+y^2$ hypothetically located before $\mathbf f_{1}$.
Thus we have $d(\mathbf f_{m}, \mathbf f_{0}) = \sum_{j=0}^{m-1} d(\mathbf f_{j+1}, \mathbf f_{j})$ if $m \leq \tau$. The notion is extended to indices $k \tau \leq m < (k+1) \tau$ by setting $d(\mathbf f_{m}, \mathbf f_{0}) =d(\mathbf f_{m \bmod \tau}, \mathbf f_{0}) +kR_{\mathbb F}$. The distance $d(\mathbf f_{\tau}, \mathbf f_{0} )$ is exactly equal to $R^*= \ln \mathfrak c_{\tau-1}$, which is the regulator $R_{\mathbb F}$, or three times $R_{\mathbb F}$, and the distance $d(\mathbf f_{\frac{\tau}{2}}, \mathbf f_{0} )$ is exactly equal to $\frac{R^*}{2}$; see \cite{elia1} for a straightforward proof. Now, a celebrated formula of Dirichlet's gives the product \begin{equation} \label{dirichlet} h_{\mathbb F} R_{\mathbb F} = \frac{\sqrt{D}}{2} L(1, \chi) = - \sum_{n=1}^{\lfloor \frac{D-1}{2} \rfloor} \left(\frac{D}{n}\right) \ln\left( \sin \frac{n \pi}{D} \right) \end{equation} where $ h_{\mathbb F}$ is the class number, $ L(1, \chi)$ is a Dirichlet $L$-function, $D=N$ if $N\equiv 1 \bmod 4$ or $D=4N$ otherwise, and the character $\chi$ is the Jacobi symbol in this case. If we know $h_{\mathbb F}$ exactly, then we know $R^*$ exactly, and we can proceed to factorization with complexity $O((\log_2 N)^4)$ \cite{lagarias}, conditioned on the computation of $L(1, \chi)$. The Dirichlet function $L(1,\chi_N)$ can be efficiently evaluated using the following expression for the product $h_{\mathbb F} R_{\mathbb F}$ as a function of $N$: \begin{equation} \label{eqhR} h_{\mathbb F} R_{\mathbb F} = \frac{1}{2} \sum_{x \geq 1} \left(\frac{D}{x}\right) \left(\frac{\sqrt D}{x} \mbox{erfc}\left(x \sqrt{\frac{\pi}{D}}\right)+ E_1\left(\frac{\pi x^2}{D}\right)\right) ~~ , \end{equation} where the complementary error function $\mbox{erfc}(x)$ and the exponential integral function $E_1(x)$ can be quickly evaluated.
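The objects used so far are easy to experiment with numerically. The following Python sketch is our own illustration (the helper names \texttt{cf\_sqrt\_data}, \texttt{kronecker}, and \texttt{dirichlet\_hR} are not from the paper): it computes the first period of the continued fraction of $\sqrt N$ together with the sequences $\mathbf \Delta$ and $\mathbf \Omega$, checks the identity $\Omega_m^2-\Delta_m\Delta_{m-1}=N$, reads off a factor at position $\frac{\tau-2}{2}$ when $\tau$ is even (Theorem \ref{mainfactor}), exhibits Legendre's two-square representation when $\tau$ is odd, and evaluates the finite Dirichlet sum (\ref{dirichlet}).

```python
import math
from math import gcd, isqrt

def cf_sqrt_data(N):
    """First period of the continued fraction of sqrt(N), N not a square.

    Returns (tau, a, Delta, Omega), where a[0..tau] are the partial
    quotients, Delta[m] = A_m**2 - N*B_m**2 and
    Omega[m] = A_m*A_{m-1} - N*B_m*B_{m-1} (Omega[0] is unused).
    """
    a0 = isqrt(N)
    if a0 * a0 == N:
        raise ValueError("N must not be a perfect square")
    c, r = 0, 1                          # complete quotient (sqrt(N)+c)/r
    A1, A, B1, B = 1, a0, 0, 1           # convergents A_{m-1}/B_{m-1}, A_m/B_m
    a, Delta, Omega = [a0], [A * A - N * B * B], [None]
    while True:
        c = r * a[-1] - c
        r = (N - c * c) // r
        q = (a0 + c) // r
        a.append(q)
        A1, A = A, q * A + A1
        B1, B = B, q * B + B1
        Delta.append(A * A - N * B * B)
        Omega.append(A * A1 - N * B * B1)
        if r == 1:                       # period complete: |Delta_{tau-1}| = 1
            return len(a) - 1, a, Delta, Omega

def kronecker(a, n):
    """Kronecker symbol (a|n) for a > 0, n >= 1."""
    result = 1
    while n % 2 == 0:                    # apply (a|2) for each factor 2 of n
        n //= 2
        if a % 2 == 0:
            return 0
        if a % 8 in (3, 5):
            result = -result
    a %= n                               # Jacobi symbol for the odd part of n
    while a:
        while a % 2 == 0:
            a //= 2
            if n % 8 in (3, 5):
                result = -result
        a, n = n, a                      # quadratic reciprocity
        if a % 4 == 3 and n % 4 == 3:
            result = -result
        a %= n
    return result if n == 1 else 0

def dirichlet_hR(D):
    """h*R via the finite sum -sum_{n=1}^{(D-1)//2} (D|n) ln sin(n*pi/D)."""
    return -sum(kronecker(D, n) * math.log(math.sin(n * math.pi / D))
                for n in range(1, (D - 1) // 2 + 1))
```

For example, $N=21$ has $\tau=6$ and $|\Delta_2|=3$, a proper factor at position $\frac{\tau-2}{2}=2$; $N=13$ has $\tau=5$ odd, and the central term gives $13=\Omega_2^2+\Delta_2^2=4+9$; and for $D=8$ the finite sum equals $\ln(1+\sqrt 2)$, i.e.\ $h=1$ times the regulator of $\mathbb Q(\sqrt 2)$.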
Once we know $R^*$, with Shanks' infrastructural method \cite{shanks} or one of its improvements \cite{williams,cohen,schoof0}, we can find $\mathbf f_{\frac{\tau-2}{2}}(x,y)$, and thus a factor of $N$. The goal is to obtain $\mathbf f_{\frac{\tau-2}{2}}(x,y)$ in as few steps as possible. To this end we can perform 1) giant-steps within $\mathbf \Gamma(\mathbf f)$, which are realized by the Gauss composition law of quadratic forms followed by a reduction of the resulting form to $\mathbf \Gamma(\mathbf f)$, and 2) baby-steps, moving from one quadratic form to the next in $\mathbf \Gamma(\mathbf f)$. Two operators $\rho^+$ and $\rho^-$ are further defined \cite[p.259]{cohen} to allow small (baby) steps, precisely: \begin{itemize} \item $\rho^+$ transforms $\mathbf f_m(x,y)$ into $\mathbf f_{m+1}(x,y)$ in $\mathbf \Gamma(\mathbf f)$, and is defined as $ \rho^+([a,2b,c]) = [\frac{b_1^2-N}{a},2b_1, a] ~~, $ where $b_1$ is defined by $2b_1= [2b \bmod (2a)] +2ka$, with $k$ chosen in such a way that $-|a| < b_1 < |a|$. \item $\rho^-$ transforms $\mathbf f_m(x,y)$ into $\mathbf f_{m-1}(x,y)$ in $\mathbf \Gamma(\mathbf f)$, and is defined as $ \rho^-([a,2b,c]) = [c,2b_1, \frac{b_1^2-N}{c}] ~~, $ where $b_1$ is defined by $2b_1=[ -2b \bmod (2c)] +2kc$, with $k$ chosen in such a way that $-|c| < b_1 < |c|$. \end{itemize} \noindent The composed form $\mathbf f_{m}\bullet \mathbf f_{n}$ has distance $d( \mathbf f_{m} \bullet \mathbf f_{n}, \mathbf f_{0}) \approx d(\mathbf f_{m}, \mathbf f_{0})+ d(\mathbf f_{n}, \mathbf f_{0})$. \begin{enumerate} \item Under the law $\bullet$, $\mathbf \Gamma(\mathbf f)$ resembles a cyclic group, with $\mathbf f_{\tau-1}$ playing the role of the identity. \item Since in $\mathbf{\Gamma}(f)$ the two triples of signs (signatures) $(-,+,+)$ and $(+,-,-)$ alternate, the composed form $\mathbf f_{m}(x,y) \bullet \mathbf f_{n}(x,y)$ must have one of these signatures.
\item The composition of a quadratic form with itself is called doubling and denoted $2\bullet \mathbf f_{n}$; thus $s$ iterated doublings are written $2^s\bullet \mathbf f_{n}(x,y)$. The distance is nearly maintained by the composition $\bullet$ (giant-steps): the error affecting this distance estimation is of order $O(\ln N)$, as shown by Schoof in \cite{schoof0}. The distance is rigorously maintained by the one-step moves $\rho^\pm$ (baby-steps). \end{enumerate} \noindent An outline of the procedure is the following, assuming that $R^*$ has been computed beforehand: \begin{enumerate} \item Let $\ell$ be a small integer. Compute an initial quadratic form $\mathbf f_\ell =[\Delta_\ell, 2\Omega_\ell, +\Delta_{\ell-1}]$ and its distance $d_\ell=d(\mathbf f_\ell,\mathbf f_0)$ from the continued fraction expansion of $\sqrt N$ stopped at term $\ell+1$. \item Compute $j_t= \lceil \log_2 \frac{R^*}{d_\ell} \rceil$. \item Starting with $[\mathbf f_\ell, d_\ell]$, iteratively compute and store in a vector $ \mathcal F_{j_t}$ the sequence $[ 2^{j} \bullet \mathbf f_\ell, 2^{j}d_\ell]$ up to $j_t$. The middle term (i.e.\ $\mathbf f_{\frac{\tau-2}{2}}$) of $\mathbf \Gamma(\mathbf f)$ is located between the terms $ 2^{j_t-1} \bullet \mathbf f_\ell$ and $ 2^{j_t}\bullet \mathbf f_\ell$. \item The middle term of $\mathbf \Gamma(\mathbf f)$ can be quickly reached using the elements of $ \mathcal F_{j_t}$: start by computing $\mathbf f_r =(2^{j_t-1}\bullet \mathbf f_\ell) \bullet (2^{j_t-2}\bullet \mathbf f_\ell)$ and checking whether $2^{j_t-1}d_\ell+2^{j_t-2} d_\ell$ is greater or smaller than $\frac{R^*}{2}$; in the first case set $\mathbf f_s=\mathbf f_r$, otherwise set $\mathbf f_s=2^{j_t-1}\bullet \mathbf f_\ell$. Iterate this composition by computing $\mathbf f_r = \mathbf f_s \bullet (2^{i}\bullet \mathbf f_\ell)$ and setting $\mathbf f_s = \mathbf f_r$ for decreasing $i$ down to $0$, and let the final term be $[\mathbf f_s, d_s]$.
\item Iterate the operation $\rho^\pm$ a convenient number, $O(\ln N)$, of times, until a factor of $4N$ is found. \end{enumerate} \section{Conclusions} An iterative algorithm has been described which produces a factor of a composite square-free $N$ in at most $O((\ln N)^4)$ iterations, provided $hR$ is known exactly, $h$ being the class number and $R$ the regulator of $\mathbb Q(\sqrt N)$. The bound $O((\ln N)^4)$ is obtained by multiplying the number of giant-steps, which is $O(\ln N)$, by the number of steps in each reduction completing a giant-step, which is upper bounded by $O((\ln N)^3)$ as shown in \cite{lagarias,schonhage}. It is remarked that, in this bound, the cost of the arithmetic in $\mathbb Z$, i.e.\ multiplications and additions of big integers, is not counted \cite{lagarias}. Furthermore, it is not difficult to modify the algorithm to use a rough approximation of $hR$; the computations become cumbersome, but asymptotically the algorithm remains polynomial, because a sufficient approximation of $hR$ is easily obtained by computing the series in equation (\ref{eqhR}) truncated at a number of terms $O(\ln N)$, since the series converges exponentially \cite[Proposition 5.6.11, p.262--263]{cohen}. It remains to ascertain whether this asymptotically-good factoring algorithm is also practically better than any sub-optimal probabilistic factoring algorithm. \end{document}
\begin{document} \begin{abstract} We analyze the forcing notion $\mathcal P$ of finite matrices whose rows consist of isomorphic countable elementary submodels of a given structure of the form $H_{\theta}$. We show that forcing with this poset adds a Kurepa tree $T$. Moreover, if $\mathcal P_c$ is a suborder of $\mathcal P$ containing only continuous matrices, then the Kurepa tree $T$ is almost Souslin, i.e.\ the level set of any antichain in $T$ is not stationary in $\omega_1$. \end{abstract} \maketitle \section{Introduction} In this paper we analyze the forcing notion mentioned in the remark on page 217 of \cite{pfa}. This is the forcing notion of finite matrices whose rows consist of isomorphic countable elementary submodels of a given structure of the form $H_\theta.$ In \cite{pfa} they were merely meant as side conditions in various proper forcing constructions when one is interested in getting the $\aleph_2$-chain condition that can be iterated. Soon afterwards the second author realized that this variation of the original side conditions is as interesting a forcing notion as the poset of finite chains of countable elementary submodels analyzed briefly in Theorem 6 of \cite{pfa}. For example, he showed that the poset of finite matrices of row-isomorphic countable elementary submodels always forces CH (so, in particular, it preserves CH if CH holds in the ground model). The second author observed at the same time that this forcing notion gives a natural example of a Kurepa tree. Here we shall explore this further and produce a natural variation of this forcing notion that gives us a Kurepa tree with no stationary antichains. This gives us a forcing construction of such a tree quite different from the previous ones, which use countable rather than finite conditions (see \cite{golshani} and \cite{note}). We believe that there will be other natural variations of this forcing notion with interesting applications.
For example, we note that in the recent paper \cite{aspero} Aspero and Mota have used the poset of finite matrices of elementary submodels to control their iteration scheme, which shows that the forcing axiom for the class of all finitely proper posets of size $\omega_1$ is compatible with $2^{\aleph_0}>\aleph_2$. In view of the recent efforts to generalize the side condition method to higher cardinals (see \cite{neeman,neemanslides}) it would be interesting to also explore the possible higher-cardinal analogues of the posets that we analyze here. This could also be asked for the original side-condition poset of finite elementary chains of countable elementary submodels of \cite{pfa} which, as shown in Theorem 6 of \cite{pfa}, gives us a natural forcing notion that collapses a given cardinal $\theta$ to $\omega_1$ preserving all other cardinals\footnote{It should be noted that the hypothesis of Theorem 6 of \cite{pfa} assumes that there is a stationary subset $S$ of $[\theta]^{\aleph_0}$ of cardinality $\theta$, a condition that is satisfied by many cardinals, and in particular by all cardinals of uncountable cofinality if $0^\sharp$ does not exist.}. As far as we know, no higher-cardinal analogue of this poset has been produced. Some related research has recently been produced by Aspero \cite{aspero1}. \subsection*{Acknowledgements:} The authors would like to thank the referee for carefully reading the paper and for many useful suggestions. \section{Preliminaries}\label{matrix} \begin{definition}\label{d1} Let $\theta\ge \omega_2$ be a regular cardinal. By $H_\theta$ we denote the collection of all sets whose transitive closure has cardinality $<\theta.$ We consider it as a model of the form $(H_\theta, \in, <_\theta)$, where $<_\theta$ is some fixed well-ordering of $H_\theta$ that will not be explicitly mentioned.
The partial order $\mathcal P$ is the set of all functions $p:\omega_1\to H_{\theta}$ satisfying: \begin{enumerate} \item $\operatorname{supp}(p)=\set{\alpha<\omega_1:p(\alpha)\neq \emptyset}$ is a finite set; \item $p(\alpha)$ is a finite collection of isomorphic countable elementary submodels of $H_{\theta}$ for every $\alpha\in\operatorname{supp}(p)$; \item for each $\alpha,\beta\in\operatorname{supp}(p)$, if $\alpha<\beta$ then $\forall M\in p(\alpha)\ \exists N\in p(\beta)\ M\in N$. \end{enumerate} \noindent The ordering on $\mathcal P$ is given by: \begin{equation}\label{eq1} p\le q\ \Leftrightarrow\ \forall \alpha<\omega_1\ \ q(\alpha)\subseteq p(\alpha). \end{equation} \end{definition} \noindent The fact that $M$ is a countable elementary submodel of $\seq{H_{\theta},\in}$ will be denoted by $M\prec H_{\theta}$. Also, if $M\prec H_{\theta}$, then $\overline M\in H_{\omega_1}$ denotes the transitive collapse of $M$, with $\pi_M$ being the corresponding isomorphism. For $p\in \mathcal P$ and $\alpha\in \operatorname{supp}(p)$ we denote $\delta_{\alpha}^p=M\cap \omega_1$ where $M$ is some (any) model in $p(\alpha)$. Also, if $M\prec H_{\theta}$, then $\delta_M$ will denote the ordinal $M\cap \omega_1$. We list some standard lemmas concerning countable elementary submodels of $H_{\theta}$ that will be useful throughout the paper. \begin{lemma}\label{t20} Let $F$ be a countable subset of $H_{\theta}$. Then the set of all ordinals of the form $M\cap \omega_1$, such that $M$ is a countable elementary submodel of $H_{\theta}$ with $F\subseteq M$, contains a club. \end{lemma} \begin{lemma}\label{t23} If $M\prec H_{\theta}$ contains some element $X$, then $X$ is countable if and only if $X\subseteq M$.
\end{lemma} \begin{lemma}\label{t24} If $M\prec H_{\theta}$ contains as an element some subset $A$ of $\omega_1$, then $A$ is uncountable if and only if $A\cap M\cap \omega_1$ is unbounded in $M\cap \omega_1$. \end{lemma} \begin{lemma}\label{t25} If $M\prec H_{\theta}$, $X$ is in $H_{\theta}$ and $X$ is definable from parameters in $M$, then $X\in M$. \end{lemma} \begin{lemma}\label{t26} Let $\seq{N_{\xi}:\xi<\omega_1}$ be a continuous $\in$-chain of countable elementary submodels of $H_{\theta}$. Then $\set{\xi<\omega_1:N_{\xi}\cap \omega_1=\xi}$ is a club in $\omega_1$. \end{lemma} \begin{lemma}\label{t9} Let $M_0$ and $M_1$ be isomorphic countable elementary submodels of some $H_{\theta}$. Let $L_0=M_0\cap \omega_2$ and $L_1=M_1\cap \omega_2$. Then $L_0\cap L_1$ is an initial segment of both $L_0$ and $L_1$. \end{lemma} \begin{proof} Without loss of generality, we can assume that both $M_0$ and $M_1$ contain the same family of mappings $\seq{e_{\gamma}:\gamma<\omega_2}$, where each map $e_{\gamma}:\gamma\to \omega_1$ is 1-1. We will prove that if $\beta$ is in $L_0\cap L_1$ and $\alpha<\beta$ is in $L_0$, then $\alpha\in L_1$. Consider the map $e_{\beta}:\beta\to\omega_1$. Then, in $M_0$ there is some $\xi<\omega_1$ such that $e_{\beta}(\alpha)=\xi$, i.e.\ $\alpha=e_{\beta}^{-1}(\xi)$. But $M_1$ knows both $e_{\beta}$ and $\xi$ (this is because $M_0\cong M_1\Rightarrow M_0\cap \omega_1=M_1\cap \omega_1$). Hence, $\alpha\in M_1\cap \omega_2=L_1$. \end{proof} If $\mathcal G\subseteq \mathcal P$ is a filter in $\mathcal P$ generic over $V$, then we define $G:\omega_1\to H_{\theta}$ as the function satisfying \[\textstyle G(\alpha)=\set{M\prec H_{\theta}: \exists p\in \mathcal G\ \mbox{such that}\ M\in p(\alpha)}. \] Note that $G$ is a well defined function because $\mathcal G$ is a filter.
For $\alpha$ in the domain of $G$ we denote $\delta_{\alpha}=M\cap \omega_1$ for some (any) $M$ in $G(\alpha)$. Further, we denote $A_{\gamma}=\bigcup_{M\in G(\gamma)}M\cap \omega_2$ for $\gamma<\omega_1$, and note that if $\gamma<\delta$, then $A_{\gamma}\subseteq A_{\delta}$. Also, we define the function $g:\omega_1\to H_{\omega_1}$ with $g(\alpha)=\overline M$ for some (any) model $M$ from $G(\alpha)$, and for $p\in \mathcal P$ we define $\bar p:\omega_1\to [H_{\omega_1}]^{\omega}$ as the function with the same support as $p$ which maps $\alpha\in\operatorname{supp}(\bar p)$ to the transitive collapse of some model from $p(\alpha)$, while $\bar p(\alpha)=\emptyset$ for $\alpha\in \omega_1\setminus \operatorname{supp}(\bar p)$. \begin{lemma}\label{t14} Let $p,q\in \mathcal P$. If $\bar p=\bar q$, then $p$ and $q$ are compatible conditions. \end{lemma} \begin{proof} First note that if $\bar p=\bar q$, then $\operatorname{supp}(p)=\operatorname{supp}(q)$. Also, if two countable elementary submodels of $H_{\theta}$, say $M_1$ and $M_2$, have the same transitive collapse, then they are isomorphic (the isomorphism is simply $\pi_{M_2}^{-1}\circ\pi_{M_1}$). Now it is clear that the function $r:\omega_1\to H_{\theta}$ defined by $r(\alpha)=p(\alpha)\cup q(\alpha)$ is in $\mathcal P$ and that it satisfies $r\le p,q$. \end{proof} For $p,q\in \mathcal P$ we will define their ``join'' $p\lor q$ as the function from $\omega_1$ to $H_{\theta}$ satisfying $(p\lor q)(\alpha)=p(\alpha)\cup q(\alpha)$ for $\alpha<\omega_1$.
Further, if a condition $q\in \mathcal P$ and $M\prec H_{\kappa}$ ($\delta_M=M\cap \omega_1$) for $\kappa\ge\theta$ are given, it is clear what the intersection $q\cap M$ represents, and we define the restriction of $q$ to $M$ as a function with finite support $q\mid M:\omega_1\to H_{\theta}$ satisfying $\operatorname{supp}(q\mid M)=\operatorname{supp}(q)\cap\delta_M$ and, for $\alpha<\delta_M$: \[\textstyle (q\mid M)(\alpha)=\set{\varphi_{M'}(N): M'\in q(\delta_M),\ \varphi_{M'}:M'\izo M\cap H_{\theta},\ N\in q(\alpha)\cap M'}. \] Note that the function $q\mid M$ is in $\mathcal P$. We will also need the following notion, which we call ``the closure of $p$ below $\delta$''. \begin{definition}\label{d2} Let $p\in \mathcal P$ and $\delta\in \operatorname{supp}(p)$. Then $\operatorname{cl}_{\delta}(p):\omega_1\to H_{\theta}$ is a function such that $\operatorname{supp}(\operatorname{cl}_{\delta}(p))=\operatorname{supp}(p)$ and $\operatorname{cl}_{\delta}(p)(\gamma)=p(\gamma)$ for $\gamma\ge \delta$, while for $\gamma<\delta$ we have \[\textstyle \operatorname{cl}_{\delta}(p)(\gamma)\!=\!\set{\phi_{N_1,N_2}(M):M\!\in\! p(\gamma)\cap N_1\ \!\&\!\ N_1,N_2\!\in\! p(\delta)\ \!\&\!\ \phi_{N_1,N_2}:N_1\izo N_2}. \] \end{definition} We will also need the following standard lemmas later in the paper. \begin{lemma}\label{t17} Suppose that $\theta\ge \omega_2$ is a regular cardinal. If $\mathcal P\in H_{\theta}$, then in $V[\mathcal G]$ we have $H_{\theta}[\mathcal G]=\set{\operatorname{int}_{\mathcal G}(\tau):\tau\in H_{\theta}\ \mbox{is a }\mathcal P\mbox{-name}}=H_{\theta}^{V[\mathcal G]}$. \end{lemma} \begin{proof} $H_{\theta}^{V[\mathcal G]}\subseteq H_{\theta}[\mathcal G]$ follows from the fact that any $x\in H_{\theta}^{V[\mathcal G]}$ is of the form $\operatorname{int}_{\mathcal G}(\tau)$ for some $\tau\in H_{\theta}$ (see \cite[Claim I 5.17]{proper}).
If $\tau\in H_{\theta}$ is a $\mathcal P$-name, then from $\abs{\operatorname{trcl}(\tau)}<\theta$ and $\operatorname{rank}(\operatorname{int}_{\mathcal G}(\tau))\le\operatorname{rank}(\tau)$ we have that $\abs{\operatorname{trcl}(\operatorname{int}_{\mathcal G}(\tau))}<\theta$. \end{proof} \begin{lemma}\label{t18} If $M\prec H_{\theta}$ and $\mathcal P\in M$, then in $V[\mathcal G]$ we have that \[\textstyle M[\mathcal G]=\set{\operatorname{int}_{\mathcal G}(\tau):\tau\in M\ \mbox{is a }\mathcal P\mbox{-name}}\prec H_{\theta}[\mathcal G]=H_{\theta}. \] \end{lemma} \begin{proof} First note that $M[\mathcal G]\subseteq H_{\theta}[\mathcal G]$. Now take any $\tau_1,\dots,\tau_n\in M$ and any formula $\varphi(x,x_1,\dots,x_n)$. Assume that there is some $y\in H_{\theta}[\mathcal G]$ such that $H_{\theta}[\mathcal G]\models \varphi(y,\operatorname{int}_{\mathcal G}(\tau_1),\dots,\operatorname{int}_{\mathcal G}(\tau_n))$. Then there is $p\in \mathcal G$ which forces $\varphi(\tau_y,\tau_1,\dots,\tau_n)$ for a $\mathcal P$-name $\tau_y\!\in\! H_{\theta}$, and $M\!\prec\! H_{\theta}$ implies $p\Vdash\! \varphi(\tau,\tau_1,\dots,\tau_n)$ for a $\mathcal P$-name $\tau\in M$. Hence $H_{\theta}[\mathcal G]\models\varphi(\operatorname{int}_{\mathcal G}(\tau),\operatorname{int}_{\mathcal G}(\tau_1),\dots,\operatorname{int}_{\mathcal G}(\tau_n))$ for a $\mathcal P$-name $\tau\in M$, so $M[\mathcal G]\prec H_{\theta}[\mathcal G]$. \end{proof} \section{Properness} In this section we show that $\mathcal P$ satisfies a condition stronger than properness, namely that $\mathcal P$ is a strongly proper forcing. We are not sure who was the first to define this notion, but Mitchell's paper \cite{mitchell} is a good reference.
\begin{definition}\label{d3} If $P$ is a forcing notion and $X$ is a set, then we say that $p$ is strongly $(X,P)$-generic if for any set $D$ which is dense and open in the poset $P\cap X$, the set $D$ is predense in $P$ below $p$. The poset $P$ is strongly proper if for every large enough regular cardinal $\kappa$, there are club many countable elementary submodels $M$ of $H_{\kappa}$ such that whenever $p\in M\cap P$, there exists a strongly $(M,P)$-generic condition below $p$. \end{definition} \begin{lemma}\label{T1} $\mathcal P$ is strongly proper. \end{lemma} \begin{proof} Let $p$ be a condition in $\mathcal P$ and $M\prec H_{\kappa}$ (for some $\kappa\ge \theta$ which is large enough) such that $p,\mathcal P\in M$. Denote $\delta=M\cap \omega_1$. We will show that the condition $p_M\!=\!p\cup \set{\seq{\delta,M\cap H_{\theta}}}$ is strongly generic (i.e.\ if $q\le p_M$ and $\mathcal D\subseteq\mathcal P\cap M$ is dense open, then there is some $q'\in \mathcal D$ such that $q\not\perp q'$). Let $q\le p_M$ be arbitrary and $\mathcal D\subseteq M\cap \mathcal P$ dense open. Consider the condition $q\mid M\in \mathcal P \cap M$. Because $\mathcal D\subseteq M$ is dense open, there is some $q'\le q\mid M$ which is in $\mathcal D$ (hence in $M$). To finish the proof, we still have to show that $q$ and $q'$ are compatible. First note that $\operatorname{supp}(q\mid M)=\operatorname{supp}(q)\cap \operatorname{supp}(q')$, and consider the following function $r:\omega_1\to H_{\theta}$ defined on the support $\operatorname{supp}(r)=\operatorname{supp}(q)\cup\operatorname{supp}(q')$: \noindent For $\alpha\ge \delta$ define $r(\alpha)=q(\alpha)$, and for $\alpha<\delta$ define \[\textstyle r(\alpha)\!=\!(q\lor q')(\alpha)\cup\set{\varphi^{-1}_{N'}(N):N\!\in\! q'(\alpha), N'\in q(\delta),\ \varphi_{N'}:N'\!\izo\! M\!\cap\! H_{\theta}}.
\] We will prove that $r\le q,q'$ is in $\mathcal P$, which will finish the proof. Properties (1) and (2) from Definition \ref{d1} are clear. To see (3), take any $\alpha,\beta<\omega_1$ with $\alpha<\beta$ and any $N\in r(\alpha)$. If $\alpha\ge \delta$, then $r(\alpha)=q(\alpha)$ and $r(\beta)=q(\beta)$, hence there is clearly some $N'\in r(\beta)=q(\beta)$ such that $N\in N'$. If $\alpha<\delta$ and $\beta\ge \delta$, then $N$ belongs to some $N'\in q(\delta)$ which belongs to some $N_1\in q(\beta)=r(\beta)$, hence $N\in N_1$ and the statement is true in this case as well. Otherwise, note that $\operatorname{supp}(q)\cap\delta\subseteq \operatorname{supp}(q')$ and consider the following two possibilities. The first is that $N\in q'(\alpha)$ and $\beta<\delta$. Then there is a model $N'\in q'(\beta)\subseteq r(\beta)$ such that $N\in N'$. The second case is that $N\in r(\alpha)\setminus q'(\alpha)$ and $\beta<\delta$. If $N=\varphi^{-1}_{N'}(N_1)$ for some $N_1\in q'(\alpha)$, take $N_2\in q'(\beta)$ such that $N_1\in N_2$ and note that $N=\varphi^{-1}_{N'}(N_1)\in \varphi^{-1}_{N'}(N_2)\in r(\beta)$. If not, then $N\in q(\alpha)$, so there is some $N_3\in q'(\beta)$ such that $\varphi_{N'}(N)\in N_3$, hence $N\in \varphi^{-1}_{N'}(N_3)\in r(\beta)$ and the proof is finished. \end{proof} \begin{lemma}\label{T2} Let $\mathcal G$ be a filter generic in $\mathcal P$ over $V$, let $M,M'\prec H_{(2^{\theta})^+}$ and $p,\mathcal P\in M\cap M'$. If $\varphi:M\izo M'$, then for $\delta=M\cap \omega_1=M'\cap \omega_1$ the condition $p_{MM'}=p\cup\set{\seq{\delta,\set{M\cap H_{\theta},M'\cap H_{\theta}}}}$ satisfies: \[\textstyle p_{MM'}\Vdash \check\varphi[\dot{\mathcal G}\cap \check M]=\dot{\mathcal G}\cap \check M'.
\] \end{lemma} \begin{proof} Assume the contrary, that there is a condition $q\le p_{MM'}$ and a set $x$ such that $q\Vdash \check x\in \dot{\mathcal G}\cap \check M\wedge\check \varphi(\check x)\notin \dot{\mathcal G}\cap \check M'$. Then we have $q\Vdash \check x\in \dot{\mathcal G}\cap \check M$ and $q\Vdash \check\varphi(\check x)\notin \dot{\mathcal G}\cap \check M'$. From the fact that $q\Vdash \check x\in \dot{\mathcal G}\cap \check M$, we have that $q\not\perp x$ (this is true because if $q\perp x$, then it is not possible that $q\Vdash \check x\in \dot{\mathcal G}$). Now take some $q'\le q,x$ and assume that for all $r\le q'$ there is some $t\le r$ such that $t\le \varphi(x)$ (which implies that $t\Vdash \check\varphi(\check x)\in \dot{\mathcal G}$). It follows that the set $\set{t\in \mathcal P: t\Vdash \check\varphi(\check x)\in \dot{\mathcal G}}$ is dense below $q'$, which is impossible because it would imply that $q'\Vdash \check\varphi(\check x)\in \dot{\mathcal G}\cap \check M'$, in contradiction with the assumption that $q'\le q$ and that $q\Vdash \check\varphi(\check x)\notin \dot{\mathcal G}\cap \check M'$. Hence, we can pick some $r\le q'\le q,x$ which is incompatible with $\varphi(x)$. Now, consider the condition $r\mid M$. We have the following claim. \begin{claim}\label{t6} Conditions $\varphi(r\mid M)$ and $r$ are compatible. \end{claim} \begin{proof} First note that $\operatorname{supp}(\varphi(r\mid M))\subseteq \operatorname{supp}(r)$. We will prove that the condition $s=\varphi(r\mid M)\lor r$ is in $\mathcal P$, which will prove the claim (then $s$ will be below both $\varphi(r\mid M)$ and $r$). It is clear that conditions (1) and (2) from Definition \ref{d1} are fulfilled, so pick arbitrary $\alpha,\beta\in \operatorname{supp}(s)$ with $\alpha<\beta$.
If $\alpha\ge \delta$ then every $N\in s(\alpha)$ is in $r(\alpha)$, hence there is some $N'\in r(\beta)=s(\beta)$ such that $N\in N'$. If $\beta\le \delta$ then for $N\in s(\alpha)\cap r(\alpha)$ there is some $N'\in r(\beta)\subseteq s(\beta)$ such that $N\in N'$. On the other hand, if $\beta\le \delta$ and $N\in s(\alpha)\cap \varphi(r\mid M)(\alpha)$, then $N=\varphi(N_1)$ for some $N_1\in (r\mid M)(\alpha)$, and $N_1=\pi_{N_2,M}(N_3)$ for $N_2\in r(\delta)$, $N_3\in r(\alpha)\cap N_2$ and an isomorphism $\pi_{N_2,M}:N_2\to M\cap H_{\theta}$. Now, there is some $N_4\in r(\beta)$ such that $N_3\in N_4$. So, clearly we have that $N\in \varphi(\pi_{N_2,M}(N_4))\in\varphi(r\mid M)(\beta)\subseteq s(\beta)$. If $\alpha<\delta$ and $\delta\le \beta$, then for each $N\in s(\alpha)\cap r(\alpha)$ there is some $N'\in r(\beta)=s(\beta)$ such that $N\in N'$. Finally, let $\alpha<\delta$, $\delta\le \beta$ and $N\in s(\alpha)\cap \varphi(r\mid M)(\alpha)$. Then $N\in M\cap H_{\theta}\in s(\delta)\cap r(\delta)$ and clearly there is some $N'\in r(\beta)=s(\beta)$ such that $N\in M\cap H_{\theta}\in N'$, hence $N\in N'$. \end{proof} Now because $x\in M$ and $r\le x$ we have that $r\mid M\le x$, which implies $\varphi(r\mid M)\le \varphi(x)$ (the last implication is true because $\varphi$ is an isomorphism between $M$ and $M'$). Now from $\varphi(r\mid M)\le \varphi(x)$ and $r\perp \varphi(x)$ it follows that $\varphi(r\mid M)$ and $r$ are incompatible, which is not possible according to Claim \ref{t6}. \end{proof} The following lemma will be useful in Section \ref{kurepa} of the paper. \begin{lemma}\label{t13} If $M=M'\cap H_{\theta}$ for some $M'\prec H_{(2^{\theta})^+}$ such that $\mathcal P\in M'$ and if $M\in G(\delta)$ for $\delta=M\cap \omega_1$, then in $V[\mathcal G]$ we have $M[\mathcal G]\cap \omega_1=\delta$.
\end{lemma} \begin{proof} First note that any $p\in \mathcal G$ such that $M\in p(\delta)$ is an $(M',\mathcal P)$-generic condition which forces that $M'\cap \operatorname{Ord}=M'[\mathcal G]\cap \operatorname{Ord}$ (see \cite[Lemma III 2.6]{proper}). Because $\omega_1\subseteq H_{\theta}$ this implies that $M[\mathcal G]\cap \omega_1=M\cap \omega_1=\delta$. \end{proof} \section{Preserving CH} \begin{lemma}[CH]\label{t15} $\mathcal P$ satisfies the $\omega_2$-c.c. \end{lemma} \begin{proof} Assume that CH holds and that there is a sequence $\set{p_{\alpha}:\alpha<\omega_2}$ of pairwise incompatible elements of $\mathcal P$. For each $\alpha<\omega_2$ the function $\bar p_{\alpha}$ (the transitive collapse of $p_{\alpha}$) is a finite subset of $\omega_1\times H_{\omega_1}$. Hence, there are some distinct $\alpha,\beta<\omega_2$ such that $\bar p_{\alpha}=\bar p_{\beta}$ (here we are using CH, which implies that $\abs{H_{\omega_1}}=\omega_1$). But then the conditions $p_{\alpha}$ and $p_{\beta}$ are compatible by Lemma \ref{t14}, which is a contradiction with the choice of the sequence $\set{p_{\alpha}:\alpha<\omega_2}$. \end{proof} \begin{lemma} $\mathcal P$ preserves CH. \end{lemma} \begin{proof} Assume that CH holds in $V$ and that there is a sequence $\set{\tau_{\alpha}:\alpha<\omega_2}$ of $\mathcal P$-names and a condition $p\in \mathcal P$ such that \[\textstyle p\Vdash``\seq{\tau_{\alpha}:\alpha<\omega_2}\ \mbox{is a sequence of pairwise distinct reals}". \] For each $\alpha<\omega_2$ let $M_{\alpha}$ be a countable elementary submodel of $H_{(2^{\theta})^+}$ containing $\mathcal P,\tau_{\alpha},p$. Now, using CH, we conclude that there are $\alpha,\beta<\omega_2$ ($\alpha\neq\beta$) such that there is an isomorphism $\varphi:M_{\alpha}\to M_{\beta}$ which satisfies $\varphi(\tau_{\alpha})=\tau_{\beta}$.
To see this, consider the transitive collapses of the $M_{\alpha}$'s; they are countable submodels of $H_{\omega_1}$, but since we assumed CH (which implies $\abs{H_{\omega_1}}=\omega_1$) there must be two collapses $\overline{M_{\alpha}}$ and $\overline{M_{\beta}}$ which are isomorphic (via an isomorphism $\phi$). Then the isomorphism $\varphi$ is given by $\varphi=\pi^{-1}_{M_{\beta}}\circ \phi\circ \pi_{M_{\alpha}}$. Also, it clearly holds that $\varphi(M_{\alpha}\cap H_{\theta})=M_{\beta}\cap H_{\theta}$, and because $M_{\alpha}$ and $M_{\beta}$ are isomorphic we can denote $\delta=M_{\alpha}\cap \omega_1=M_{\beta}\cap \omega_1$. Now we prove that there is a condition $p_{\alpha\beta}\le p$ such that $p_{\alpha\beta}\Vdash \tau_{\alpha}=\tau_{\beta}$, hence $\tau_{\alpha}$ and $\tau_{\beta}$ cannot be names for distinct reals in $V[\mathcal G]$. First note that, $\varphi$ being an isomorphism, we have \begin{equation}\label{eq2} \forall n<\omega\ \forall p'\in M_{\alpha}\cap \mathcal P\ \forall \epsilon<2\ (p'\Vdash \tau_{\alpha}(\check n)=\check \epsilon\Leftrightarrow \varphi(p')\Vdash \tau_{\beta}(\check n)=\check \epsilon). \end{equation} \begin{claim}\label{t3} If $p_{\alpha\beta}=p\cup\set{\seq{\delta,\set{M_{\alpha}\cap H_{\theta},M_{\beta}\cap H_{\theta}}}}$, then $p_{\alpha\beta}\Vdash \tau_{\alpha}=\tau_{\beta}$. \end{claim} \begin{proof} Assume the contrary, that there is some $q\le p_{\alpha\beta}$ and $n<\omega$ such that $q\Vdash \tau_{\alpha}(\check n)\neq \tau_{\beta}(\check n)$ (say $q\Vdash\tau_{\alpha}(\check n)=\check 0$ and $q\Vdash \tau_{\beta}(\check n)=\check 1$).
Then by elementarity of $M_{\alpha}$ there is some $r\in \mathcal P \cap M_{\alpha}$ such that $r\le q\mid M_{\alpha}$ and $\operatorname{supp}(q\mid M_{\alpha})\sqsubseteq \operatorname{supp}(r)$ (where $a\sqsubseteq b$ denotes that $a$ is an initial segment of $b$) and which satisfies $r\Vdash \tau_{\alpha}(\check n)=\check 0$. Again, because $\varphi$ is an isomorphism we have $\varphi(q\mid M_{\alpha})=q\mid M_{\beta}$, so $q\mid M_{\beta}$ is compatible with $\varphi(r)$. From $q\mid M_{\beta}\not\perp \varphi(r)$ we conclude that $q\not\perp \varphi(r)$. But then $\varphi(r)\Vdash \tau_{\beta}(\check n)=\check 0$ (from (\ref{eq2})), which is in contradiction with the fact that $q\Vdash \tau_{\beta}(\check n)=\check 1$ (simply because $\varphi(r)\not\perp q$). \end{proof} Hence, according to Claim \ref{t3}, $p$ cannot force that $\seq{\tau_{\alpha}:\alpha<\omega_2}$ is a sequence of names for distinct reals in $V[\mathcal G]$, which proves the lemma.\end{proof} \section{Kurepa tree}\label{kurepa} Recall that a Kurepa tree is a tree of height $\omega_1$, with all levels countable, but with at least $\omega_2$ branches. In this section we will show that forcing with $\mathcal P$ adds a Kurepa tree. \begin{theorem}\label{t4} There is a Kurepa tree in $V[\mathcal G]$. \end{theorem} \begin{proof} For each $\alpha<\omega_2$ define the function $f_{\alpha}:\omega_1\to \omega_1$: \begin{equation}\label{eq3} f_{\alpha}(\delta)=\left\{\begin{array}{rl}\xi,&\mbox{if there is }M\in G(\delta)\ \mbox{such that}\ \alpha\in M,\ \pi_{M}(\alpha)=\xi;\\ 0,&\mbox{otherwise.}\end{array}\right. \end{equation} Note that if there are two isomorphic models $M_1,M_2\in G(\delta)$ containing $\alpha<\omega_2$ then Lemma \ref{t9} implies that $\pi_{M_1}(\alpha)=\pi_{M_2}(\alpha)$, so each $f_{\alpha}$ is well defined.
Also, if $\alpha\neq \beta$, then there are some $\delta<\omega_1$ and $M\in G(\delta)$ such that $\set{\alpha,\beta}\in M$, but then $\pi_M(\alpha)\neq\pi_M(\beta)$ (i.e. $f_{\alpha}(\delta)\neq f_{\beta}(\delta)$). So for $\alpha\neq\beta$ we have $f_{\alpha}\neq f_{\beta}$. If we denote the set of functions coding branches in $T$ by $\mathcal F=\set{f_{\alpha}:\alpha<\omega_2}$, then $\mathcal F_{\alpha}=\set{f_{\alpha}\upharpoonright\delta:\delta<\omega_1}$ will be the $\alpha$-th branch and the Kurepa tree will be given by $T=\bigcup_{\delta<\omega_1}T_{\delta}$, where $T_{\delta}=\mathcal F\upharpoonright \delta=\set{f_{\alpha}\upharpoonright\delta:\alpha<\omega_2}$. We will show that for each $\delta$, the level $T_{\delta}$ is countable. This will finish the proof. So assume that there are some $p\in \mathcal G$ and $\delta'<\omega_1$ such that $p\Vdash "\dot{T_{\delta'}}\ \mbox{is uncountable}"$. Take a countable elementary submodel $M$ of $H_{({2^{\theta}})^+}$ such that $p,\mathcal P,\delta'\in M$ and denote $\delta=M\cap \omega_1$. Because we have chosen $M$ so that $\delta'\in M$, we have $\delta'<\delta$. Consider the $(M,\mathcal P)$-generic condition $p_{M}=p\cup\set{\seq{\delta,\set{M\cap H_{\theta}}}}\le p$ (note that $p_M\le p$ because $p\in M$). The following claim shows that the $(M,\mathcal P)$-generic condition forces that the branches passing through the $\delta'$-th level are indexed only by ordinals less than $\omega_2$ which are already in $M$. \begin{claim}\label{t5} Suppose that $p'\in \mathcal P$ is such that $M'\in G(\delta)\cap p'(\delta)$ and $\delta\ge\delta'$. Then $p'\Vdash \dot{T_{\delta'}}=\set{\dot{f_{\alpha}}\upharpoonright\check\delta':\check\alpha\in \check M'\cap \check\omega_2}$. \end{claim} \begin{proof} The inclusion "$\supseteq$" is trivial. To prove the reverse inclusion take some $q\le p'$ and $\alpha'<\omega_2$.
Because $q\le p'$ we have that $q\Vdash \dot{f_{\alpha'}}\upharpoonright\check\delta'\in\dot{T_{\delta'}}$. In order to finish the proof of the claim, we will find $r\le q$ and $\alpha\in M'\cap \omega_2$ such that $r\Vdash \dot{f_{\alpha'}}\upharpoonright\check\delta'=\dot{f_{\alpha}}\upharpoonright\check\delta'$. We will consider two cases. \vskip2mm Case I: $f_{\alpha'}\upharpoonright\delta'$ is constantly $0$. Then for $0\in M'\cap \omega_2$ we have that $f_0\upharpoonright\delta'=f_{\alpha'}\upharpoonright\delta'$. To see this, notice that for every $\gamma<\delta'$ either there is some $N\in G(\gamma)$, and then clearly $0\in N$ and $\pi_N(0)=0$, or $G(\gamma)=\emptyset$ and again $f_0(\gamma)=0$. So we clearly have $q\Vdash \dot{f_{\alpha'}}\upharpoonright\check\delta'=\dot{f_0}\upharpoonright\check\delta'$. \vskip2mm Case II: there is $\gamma<\delta'$ such that $f_{\alpha'}(\gamma)\neq 0$. Let $\gamma_0$ be the minimal such $\gamma$. This means that for some $N_0\in G(\gamma_0)$ we have $\alpha'\in N_0$. Now, there is some $r\le q\le p'$ such that $N_0\in r(\gamma_0)$, hence there is some $N_1\in r(\delta)$ such that $N_0\in N_1$ (note that $r(\delta)$ is not empty because $M'\in p'(\delta)\subseteq r(\delta)$). Now we have that $\alpha'\in N_1$ ($N_0\in N_1$ implies $N_0\subseteq N_1$). Because $M'\in r(\delta)$ there is an isomorphism $\varphi:N_1\izo M'$. Denote $\alpha=\varphi(\alpha')$. We show that the condition $r_1=\operatorname{cl}_{\delta}(r)$ (note $r_1\le r\le q$) forces $"\dot{f_{\alpha'}}\upharpoonright\check\delta'=\dot{f_{\alpha}}\upharpoonright\check\delta'"$. First, if $\gamma'<\gamma_0$ then $r_1\Vdash \dot{f_{\alpha}}(\check\gamma')=0=\dot{f_{\alpha'}}(\check\gamma')$.
This is true because if there were some $N'\in r_1(\gamma')$ such that $\alpha=\varphi(\alpha')\in N'$, then there would be some $\varphi^{-1}(N')\in r_1(\gamma')$ such that $\alpha'\in \varphi^{-1}(N')$, which is impossible by the choice of $\gamma_0$ (it is the minimal ordinal such that $\alpha'$ belongs to some model on its level in the generic filter). If $\gamma\ge\gamma_0$ and $\gamma<\delta'$, let $r_1\Vdash \dot{f_{\alpha'}}(\check\gamma)=\check\xi\neq 0$ (it cannot be $0$ because $\alpha'\in N_0\in r_1(\gamma_0)$, so for every $\gamma\ge \gamma_0\ \exists N\in r_1(\gamma)\ \alpha'\in N_0\in N$, and since $\alpha'\neq 0$ we clearly have $\pi_N(\alpha')\neq 0$) and take $N'\in r_1(\gamma)$ such that $\alpha'\in N'$ and $\pi_{N'}(\alpha')=\xi$. Then $\alpha\in \varphi(N')\in r_1(\gamma)$ and clearly $\pi_{\varphi(N')}(\alpha)=\xi$, which implies $r_1\Vdash \dot{f_{\alpha}}(\check\gamma)=\check\xi$. So $r_1\le q$ forces $"\dot{f_{\alpha'}}\upharpoonright\check\delta'=\dot{f_{\alpha}}\upharpoonright\check\delta'"$ and the claim is proved. \end{proof} Now, according to Claim \ref{t5}, $p_{M}\Vdash \dot{T_{\delta'}}=\set{\dot{f_{\alpha}}\upharpoonright\check\delta':\check\alpha\in \check M\cap \check\omega_2}$, so $p$ cannot force that $T_{\delta'}$ is uncountable (because $M$ is countable and $p_M\le p$), hence $T_{\delta'}$ is countable in $V[\mathcal G]$. \end{proof} \begin{corollary}\label{t19} Every uncountable downward closed set $S\subseteq T$ contains a branch of $T$. \end{corollary} \begin{proof} First recall that we denoted a branch of $T$ by $\mathcal F_{\alpha}=\set{f_{\alpha}\upharpoonright\gamma:\gamma<\omega_1}$. Take a $\mathcal P$-name $\dot S$ for $S$ and $p\in \mathcal G$ such that $p\Vdash "\dot S$ is downward closed". Now pick $M\prec H_{(2^{\theta})^+}$ such that $\dot S,\mathcal P,p\in M$ and denote $\delta=M\cap \omega_1$.
So we have that $S\in M[\mathcal G]\prec H_{(2^{\theta})^+}[\mathcal G]=H_{(2^{\theta})^+}^{V[\mathcal G]}$ (according to Lemma \ref{t18}). We have already shown that the condition $p_M=p\cup\set{\seq{M\cap \omega_1,\set{M\cap H_{\theta}}}}$ is $(M,\mathcal P)$-generic, and by Claim \ref{t5} we have $p_M\Vdash \dot{T_{\delta}}=\set{\dot{f_{\alpha}}\upharpoonright\check\delta:\check\alpha\in \check M\cap \check\omega_2}$. Note also that according to Lemma \ref{t13} $p_M\Vdash \check\delta=\check M[\dot{\mathcal G}]\cap \check\omega_1$. Now, because $S$ is uncountable and each level in $T$ is countable by Theorem \ref{t4}, there is some $\beta>\delta$ such that $S\cap T_{\beta}\neq\emptyset$, and because $S$ is downward closed we also have $S\cap T_{\delta}\neq\emptyset$, i.e. there is some $\alpha\in M\cap \omega_2$ such that $f_{\alpha}\upharpoonright\delta\in S$. Now the fact that $S$ and $\mathcal F_{\alpha}$ are in $M[\mathcal G]$ (for $S$ this is clear, and for $\mathcal F_{\alpha}$ it follows from Lemma \ref{t25} and the fact that $f_{\alpha}$ is defined only from $\mathcal G$ and $\alpha\in M$) implies that $S\cap \mathcal F_{\alpha}\in M[\mathcal G]$. If the set $S\cap \mathcal F_{\alpha}$ is countable, then by elementarity of $M[\mathcal G]$ in $H_{(2^{\theta})^+}^{V[\mathcal G]}$ and Lemma \ref{t23} we have $S\cap \mathcal F_{\alpha}\subseteq M[\mathcal G]$. Now $\delta=M[\mathcal G]\cap\omega_1=M\cap\omega_1$ and Lemma \ref{t24} imply that there is some $\gamma'<\delta$ such that for every $\gamma\ge\gamma'$ we have $f_{\alpha}\upharpoonright\gamma\notin S$, which contradicts the assumption that $f_{\alpha}\upharpoonright\delta\in S$. So $S\cap \mathcal F_{\alpha}$ is uncountable, and if a downward closed set in a tree of height $\omega_1$ intersects a branch at uncountably (hence cofinally) many levels, it clearly contains that branch. Hence, $\mathcal F_{\alpha}\subseteq S$.
\end{proof} \section{Almost Souslin tree}\label{continuousmatrix} In this section we consider a slightly modified version of the poset $\mathcal P$. Namely, let $\mathcal P_c$ be the partial order satisfying all the conditions (1)-(3) from Definition \ref{d1} together with \begin{itemize} \item[(4)] for every $p\in \mathcal P_c$ there is a continuous $\in$-chain $\seq{M_{\xi}:\xi<\omega_1}$ (i.e. if $\beta$ is a limit ordinal, then $M_{\beta}=\bigcup_{\xi<\beta}M_{\xi}$) of countable elementary submodels of $H_{\theta}$ such that $\forall\xi\in\operatorname{supp}(p)\ \ M_{\xi}\in p(\xi)$. \end{itemize} We point out that in this case the generic filter in $\mathcal P_c$ is denoted by $\mathcal G_c$ and that the function $G_c$ analogous to $G$ is a total function from $\omega_1$ to $H_{\theta}$. Moreover, the poset $\mathcal P_c$ is strongly proper; the proof of this fact needs only a slight modification of the proof of Lemma \ref{T1}. Also, the Kurepa tree from the previous section would be obtained in the same way with the poset $\mathcal P_c$. Now we prove that the tree $T$ which we already constructed is an almost Souslin tree in $V[\mathcal G_c]$, i.e. if $X\subseteq T$ is an antichain, then $L(X)=\set{\gamma<\omega_1: X\cap T_{\gamma}\neq\emptyset}$ (the level set of $X$) is not stationary in $\omega_1$. Hence, to show that $T$ is almost Souslin we have to find a club $\Gamma$ in $\omega_1$ such that $\Gamma\cap L(X)=\emptyset$. So let $\tau\in H_{\theta}$ be a $\mathcal P_c$-name and define the set \[\textstyle \Gamma_{\tau}=\set{\gamma<\omega_1: \exists M\in G_c(\gamma)\ \tau\in M\ \&\ M[\mathcal G_c]\cap\omega_1=M\cap\omega_1=\gamma}. \] \begin{lemma}[CH]\label{t27} The set $\Gamma_{\tau}$ is a club in $\omega_1$. \end{lemma} \begin{proof} First we prove that $\Gamma_{\tau}$ is unbounded in $\omega_1$.
Take $\gamma_1<\omega_1$ and assume that there is some $p\in \mathcal G_c$ such that $p\Vdash\forall \check\gamma\in \dot{\Gamma_{\tau}}\ \check\gamma<\check\gamma_1$. Take an elementary submodel $M\prec H_{(2^{\theta})^+}$ such that $p,\mathcal P_c,\gamma_1,\tau\in M$. The condition $p_M\le p$ given by $p_M=p\cup\set{\seq{M\cap\omega_1,\set{M\cap H_{\theta}}}}$ is in $\mathcal P_c$. To see this, note that because $p\in \mathcal P_c$ there is a continuous chain $\seq{M_{\xi}:\xi<\omega_1}$ witnessing that condition (4) is satisfied, so by elementarity of $M$ there is a continuous chain $\seq{N_{\xi}:\xi<\delta_M}$ such that $\forall \alpha\in \operatorname{supp}(p)\ N_{\alpha}\in p(\alpha)$ and $\bigcup_{\xi<\delta_M}N_{\xi}=M$. The chain $\seq{N'_{\xi}:\xi<\omega_1}$ witnessing that $p_M\in \mathcal P_c$ is now recursively given by: $\forall \xi<\delta\ N'_{\xi}=N_{\xi}$, $N'_{\delta}=M\cap H_{\theta}$, for a successor $\alpha+1>\delta$ define $N'_{\alpha+1}\prec H_{\theta}$ as an arbitrary countable submodel containing $N'_{\alpha}$, while for a limit $\alpha>\delta$ define $N'_{\alpha}=\bigcup_{\xi<\alpha}N'_{\xi}$. Also, $p_M$ is an $(M,\mathcal P_c)$-generic condition, so we have that $p_M\Vdash \check M[\mathcal G_c]\cap\check\omega_1=\check M\cap\check\omega_1$ (see \cite[Lemma III 2.6]{proper}). This implies that $p_M\Vdash \check M\cap \check\omega_1\in \dot{\Gamma_{\tau}}$, but from $\gamma_1\in M$ it follows that $p_M\Vdash \check\gamma_1<\check M\cap \check\omega_1$, which is in contradiction with the choice of $p$. So $\Gamma_{\tau}$ is unbounded in $\omega_1$. In order to prove that $\Gamma_{\tau}$ is a club, it is enough to show that for every ordinal $\delta$ such that $\delta=\sup(\Gamma_{\tau}\cap\delta)$ there is some $M\in G_c(\delta)$ which satisfies $M[\mathcal G_c]\cap \omega_1=M\cap\omega_1=\delta$.
Because $\delta=\sup(\Gamma_{\tau}\cap \delta)$ there is an increasing sequence $\gamma_n\in \Gamma_{\tau}\cap \delta$ such that $\sup_{n<\omega}\gamma_n=\delta$. Pick $N_{\gamma_0}\in G_c(\gamma_0)$ such that $\tau\in N_{\gamma_0}$, hence there is some $M\in G_c(\delta)$ such that $\tau\in M$. First we will show that $M\cap \omega_1=\delta$. Inductively pick models $N_{\gamma_n}\in G_c(\gamma_n)$ such that $N_{\gamma_{n-1}}\in N_{\gamma_{n}}$. Note that because $\gamma_n\in \Gamma_{\tau}$, we have $N_{\gamma_n}\cap \omega_1=\gamma_n$. Now, because $\sup_{n<\omega}\gamma_n=\delta$ and $\forall n<\omega\ \gamma_n<M\cap\omega_1$ (from $\forall n<\omega\ \exists N\in G_c(\delta)\ N_{\gamma_n}\in N$) we have that $M\cap \omega_1\ge \delta$. So assume that $M\cap \omega_1=\beta>\delta$. Let $p\in \mathcal G_c$ be any condition such that $M\in p(\delta)$. \begin{claim}\label{t12} The set $D_{\delta}=\set{q\le p: \exists \gamma'<\delta\ \exists N\in q(\gamma')\ N\cap \omega_1>\delta}$ is dense below $p$ in $\mathcal P_c$. \end{claim} \begin{proof} Take arbitrary $p'\le p$ and pick a continuous $\in$-sequence $\seq{M_{\xi}:\xi<\omega_1}$ such that $\forall \xi\in \operatorname{supp}(p')\ M_{\xi}\in p'(\xi)$. Because this chain is continuous, $\delta$ is a limit ordinal and $M\in p'(\delta)$ is such that $M\cap \omega_1=\beta>\delta$, there is some $\xi_0<\delta$ such that $M_{\xi_0}\cap \omega_1>\delta$. Now pick any $\xi_1\in \delta\setminus \max(\delta\cap (\operatorname{supp}(p')\cup \set{\xi_0}))$. Clearly $M_{\xi_1}\cap \omega_1>\delta$. To extend $p'$ to some $q\in D_{\delta}$ first denote the ordinal $\alpha=\max(\operatorname{supp}(p')\cap \delta)$. Further, note that for each $N\in p'(\alpha)$ there is some $M_N\in p'(\delta)$ such that $N\in M_N$.
Let $\pi_{N}:M_N\izo M_{\delta}$ be the isomorphism for each $N\in p'(\alpha)$. Because $\seq{M_{\xi}:\xi<\omega_1}$ is a continuous chain and $\delta$ is a limit ordinal, there is some $\xi_2<\delta$ such that $M_{\xi_1}\in M_{\xi_2}$ and moreover $\forall N\in p'(\alpha)\ \pi_{N}(N)\in M_{\xi_2}$. Now define \[\textstyle q=p'\cup\set{\seq{\xi_2,\set{M_{\xi_2}}\cup\set{\pi^{-1}_N(M_{\xi_2}):N\in p'(\alpha)\cap M_{\delta}}}}. \] It is clear that $q\in \mathcal P$ and the sequence $\seq{M_{\xi}:\xi<\omega_1}$ witnesses that $q$ satisfies property (4), hence $q$ is also in $\mathcal P_c$. \end{proof} Now, according to Claim \ref{t12}, there is some $q\in \mathcal G_c\cap D_{\delta}$ below $p$. But this implies that there is some $N\in G_c(\gamma')$ for $\gamma'<\delta$ satisfying $N\cap \omega_1>\delta$, which is impossible by the choice of the $\gamma_n$ (note that $\seq{\gamma_n}$ is cofinal in $\delta$, so there would be some $\gamma_{n'}>\gamma'$ such that $N_{\gamma_{n'}}\cap\omega_1<N\cap\omega_1$ with $N_{\gamma_{n'}}\in G_c(\gamma_{n'})$ and $N\in G_c(\gamma')$). So $M\cap\omega_1=\delta$. We still have to prove that also $M[\mathcal G_c]\cap\omega_1=\delta$. Let $\sigma\in M$ be a $\mathcal P_c$-name for a countable ordinal. Because CH holds, the poset $\mathcal P_c$ is $\omega_2$-c.c.\ (Lemma \ref{t15}), so we can assume that $\sigma$ is a nice name of cardinality at most $\omega_1$, i.e. $\sigma=\set{\seq{\check\xi,p_{\xi}}:\xi<\omega_1}$ where $\set{p_{\xi}:\xi<\omega_1}$ is a maximal antichain in $\mathcal P_c$. Now let $p\in \mathcal G_c$ be any condition such that $M\in p(\delta)$. Because $p\in\mathcal P_c$, there is a continuous chain $\seq{M_{\xi}:\xi<\omega_1}$ such that $\forall \xi\in\operatorname{supp}(p)\ M_{\xi}\in p(\xi)$.
Now there is an isomorphism $\varphi:M\izo M_{\delta}$, and in the same way as in the proof of Claim \ref{t12} we show that there are some $q\in \mathcal G_c$ and $\xi_1<\delta$ such that $\varphi(\sigma)\in M_{\xi_1}\in q(\xi_1)\subseteq G_c(\xi_1)$. Because $\delta=\sup(\Gamma_{\tau}\cap \delta)$ we can assume that $\xi_1\in \Gamma_{\tau}$. Now, according to Lemma \ref{T2} and the form of $\sigma$ and $\varphi(\sigma)$, we have that $\operatorname{int}_{\mathcal G_c}(\sigma)=\operatorname{int}_{\mathcal G_c}(\varphi(\sigma))<M_{\xi_1}[\mathcal G_c]\cap \omega_1=M_{\xi_1}\cap \omega_1<\delta$. Hence $M[\mathcal G_c]\cap \omega_1=\delta$ and the proof is finished. \end{proof} \begin{theorem}[CH]\label{t28} The tree $T$ is an almost Souslin tree. \end{theorem} \begin{proof} Let $\tau'$ be a $\mathcal P_c$-name for an antichain $X$ in $T$. Because CH holds in $V$, according to Lemma \ref{t15} $\mathcal P_c$ is $\omega_2$-c.c., so there is a $\mathcal P_c$-name $\tau$ for $X$ which is in $H_{\theta}$. To prove the theorem, we will show that $L(X)\cap\Gamma_{\tau}=\emptyset$. So assume that $X\cap T_{\delta}\neq\emptyset$ for some $\delta\in\Gamma_{\tau}$. Because $\delta\in \Gamma_{\tau}$ there is some $M\in G_c(\delta)$ such that $\tau\in M$ and $M[\mathcal G_c]\cap\omega_1=M\cap\omega_1=\delta$, so take any $p\in\mathcal G_c$ such that $M\in p(\delta)$. Now, in the same way as in the proof of Claim \ref{t5}, we know that $p$ forces that $T_{\delta}=\set{f_{\alpha}\upharpoonright\delta:\alpha\in M\cap \omega_2}$, hence there is some $\alpha\in M\cap\omega_2$ such that $f_{\alpha}\upharpoonright\delta\in X$. Consider the branch $\mathcal F_{\alpha}$. It is defined solely from $\alpha\in M$ and $\mathcal G_c$, so $\mathcal F_{\alpha}\in M[\mathcal G_c]$. Also, because $\tau\in M$ we have that $X\in M[\mathcal G_c]$.
Consequently $\mathcal F_{\alpha}\cap X\in M[\mathcal G_c]$, and from the fact that $X$ is an antichain it follows that this intersection is a singleton (namely $\set{f_{\alpha}\upharpoonright\delta}$). But, because $M[\mathcal G_c]\prec H_{\theta}^{V[\mathcal G_c]}$ (see Lemma \ref{t18}) and the height of $f_{\alpha}\upharpoonright\delta$ is less than $\omega_1$, there must exist some element $t\in \mathcal F_{\alpha}\cap X\cap M[\mathcal G_c]$ which is of height less than $\delta=M[\mathcal G_c]\cap\omega_1$. But then $t<f_{\alpha}\upharpoonright\delta$ and both $t,f_{\alpha}\upharpoonright\delta\in X$, which is in contradiction with the fact that $X$ is an antichain. \end{proof} \section{Concluding remarks} We have seen that both versions of the matrix poset force the Continuum Hypothesis. It turns out that a bit more is true: both versions of the matrix poset force the combinatorial principle $\Diamond$, independently of the status of the Continuum Hypothesis in the ground model. In fact, if CH fails in $V$, the fact that $\Diamond$ is forced follows from a slight adaptation of a result of Roslanowski and Shelah \cite{roslanowski} to our matrix posets, which do not have cardinality $2^{\aleph_0}$ but are nevertheless $2^{\aleph_0}$-centered in the canonical way. So we may concentrate on the case that the ground model $V$ satisfies CH. If CH holds in $V$, then the following theorem from \cite{note2} proves a bit more for the continuous version of the matrix forcing (the poset $\mathcal{P}_c$ of Section \ref{continuousmatrix}). \begin{theorem}[CH]\label{t40} $\Diamond^+$ holds in $V[\mathcal G_c]$. \end{theorem} For the convenience of the reader we include a sketch of the proof. \begin{proof} First denote the generic club by $\Delta$, i.e. $\Delta$ is the club contained in the set $\operatorname{Tr}(\mathcal G_c)=\set{M\cap \omega_1:M\in \bigcup_{p\in \mathcal G_c}\operatorname{ran}(p)}$.
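Recall the statement of the principle that will be verified: $\Diamond^+$ asserts the existence of a sequence $\seq{W_{\alpha}:\alpha<\omega_1}$ such that each $W_{\alpha}$ is a countable family of subsets of $\alpha$, and for every $A\subseteq\omega_1$ there is a club $B\subseteq\omega_1$ such that \[\textstyle \forall \alpha\in B\ (A\cap\alpha\in W_{\alpha}\ \&\ B\cap\alpha\in W_{\alpha}). \] The sequence $\seq{W_{\alpha}:\alpha<\omega_1}$ constructed at the end of the proof will be checked against exactly this formulation.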
The key part of the proof is the following claim, which shows that the generic club is almost contained in every club from the ground model. \begin{claim}\label{t41} Let $C\subseteq \omega_1$ be a club in $V$. Then there is a countable ordinal $\delta$ such that for every $\beta\ge \delta$, if $\beta\in\Delta$ then $\beta\in C$. \end{claim} \begin{proof} Because $C\in V$ there is some $\alpha<\omega_1$ such that $\exists M\in G_c(\alpha)\ C\in M$. Denote $\delta=M\cap \omega_1$. By elementarity of $M$ we know that $C\cap\delta$ is unbounded in $\delta$, and since $C$ is closed, $\delta\in C$. Now take arbitrary $\beta>\delta$ such that $\beta\in \Delta$. This means that for some $\gamma<\omega_1$ ($\alpha<\gamma$) there is some $N'\in G_c(\gamma)$ such that $N'\cap \omega_1=\beta$. Also, by the definition of $\mathcal P_c$ there is some $N\in G_c(\gamma)$ such that $C\in N$. Now by elementarity of $N$ we conclude that $C\cap\beta$ is unbounded in $\beta=N\cap \omega_1$, so $\beta\in C$. \end{proof} \begin{claim}\label{t43} There is, in $V[\mathcal G_c]$, a sequence $\seq{S_{\alpha}:\alpha<\omega_1}$ such that for every $\alpha<\omega_1$ we have $S_{\alpha}\in [P(\alpha)]^{\le \omega}$ and \[\textstyle \forall X\in P(\omega_1)\cap V\ \exists \gamma<\omega_1\ \forall \alpha\ge \gamma\ X\cap \alpha\in S_{\alpha}. \]\end{claim} \begin{proof} By CH in $V$ we can find an increasing continuous sequence of countable sets $\seq{D_{\alpha}:\alpha<\omega_1}$ such that $D_{\alpha}\subseteq P(\alpha)$ and $\bigcup_{\alpha<\omega_1}D_{\alpha}=[\omega_1]^{\le\omega}$. Let $f:\omega_1\to \omega_1$ be defined by $f(\alpha)=\min(\Delta\setminus (\alpha+1))$. Finally, for $\alpha<\omega_1$ we let $S_{\alpha}=\set{X\cap \alpha:X\in D_{f(\alpha)}}$. To see that the sets $S_{\alpha}$ ($\alpha<\omega_1$) satisfy the statement of the claim, pick any $X\subseteq \omega_1$ which is in $V$.
As in the proof of Lemma 2.1 in \cite{devlin}, there is a club $E\subseteq \omega_1$ which is in $V$ such that $\forall \alpha\in E\ \forall \beta<\alpha\ X\cap\beta\in D_{\alpha}$. Now according to Claim \ref{t41} there is some $\gamma<\omega_1$ such that $\Delta\setminus \gamma\subseteq E$. For $\gamma\le \alpha<\omega_1$ it holds that $f(\alpha)\in \Delta\setminus\alpha\subseteq\Delta\setminus\gamma\subseteq E$, so as $f(\alpha)>\alpha$, the choice of $E$ ensures that $X\cap \alpha\in D_{f(\alpha)}$. So the claim is proved. \end{proof} Let $\seq{S_{\alpha}:\alpha<\omega_1}$ be the sequence from the previous claim. For $\alpha<\omega_1$ let $W_{\alpha}=P(\alpha)\cap (\bigcup_{X\in S_{\alpha}}L_{\alpha+2}[X,\Delta\cap \alpha])$. Then $\seq{W_{\alpha}:\alpha<\omega_1}$ is a $\Diamond^+$ sequence. To show this, pick arbitrary $A\subseteq \omega_1$. Because $\mathcal P_c$ is $\omega_2$-c.c. there is a name for $A$ which is coded by some $X\subseteq \omega_1$. Hence, $A\in L[X,\Delta]$. By Claim \ref{t43} there is a $\gamma<\omega_1$ such that $\forall \alpha\ge \gamma\ X\cap \alpha\in S_{\alpha}$. By induction we define a normal sequence $\seq{\alpha_{\xi}:\xi<\omega_1}$ in $\omega_1$. Let $\alpha_0>\gamma$ be the least $\alpha$ such that $L_{\alpha}[X\cap \alpha,\Delta\cap \alpha]\prec L_{\omega_1}[X,\Delta]$. If $\alpha_{\xi}$ is defined, let $\alpha_{\xi+1}>\alpha_{\xi}$ be the least ordinal such that $L_{\alpha_{\xi+1}}[X\cap \alpha_{\xi+1},\Delta\cap \alpha_{\xi+1}]\prec L_{\omega_1}[X,\Delta]$. Let $B=\seq{\alpha_{\xi}:\xi<\omega_1}$. Then it is easily checked by the construction that $B$ is a club in $\omega_1$. So pick arbitrary $\alpha\in B$ (we will prove that $A\cap \alpha, B\cap \alpha\in W_{\alpha}$).
Because $\alpha>\gamma$ we have $X\cap \alpha\in S_{\alpha}$, so $P(\alpha)\cap L_{\alpha+2}[X\cap \alpha,\Delta\cap \alpha]\subseteq W_{\alpha}$. Because $L_{\alpha}[X\cap \alpha,\Delta\cap \alpha]\prec L_{\omega_1}[X,\Delta]$ we have that $A\cap \alpha$ is first-order definable over $L_{\alpha}[X\cap \alpha,\Delta\cap \alpha]$, so $A\cap \alpha\in L_{\alpha+1}[X\cap \alpha,\Delta\cap \alpha]\subseteq W_{\alpha}$. Similarly one shows that $B\cap \alpha\in W_{\alpha}$, and the theorem is proved. \end{proof} It is clear that this proof adapts to showing that the original matrix poset (the poset $\mathcal{P}$ of Section \ref{matrix}) also forces $\Diamond$. \footnotesize \begin{thebibliography}{22} \bibitem{aspero} D.\ Aspero, M.\ A.\ Mota, Forcing consequences of PFA together with the continuum large, Trans. Amer. Math. Soc., published electronically, http://dx.doi.org/10.1090/S0002-9947-2015-06205-9 \bibitem{aspero1} D.\ Aspero, A forcing notion collapsing $\aleph_3$ and preserving all other cardinals, preprint (2014). \bibitem{golshani} M.\ Golshani, Almost Souslin Kurepa trees, Proc. Amer. Math. Soc. 141,5 (2013) 1821--1826. \bibitem{devlin} K.\ Devlin, Concerning the consistency of CH+SH, Ann. Math. Logic 19 (1980) 115--125. \bibitem{mitchell} W.\ Mitchell, $I[\omega_2]$ can be the nonstationary ideal on $\operatorname{cof}(\omega_1)$, Trans. Amer. Math. Soc. 361,2 (2009) 561--601. \bibitem{neeman} I.\ Neeman, Forcing with sequences of models of two types, Notre Dame J. Formal Logic 55 (2014) 265--298. \bibitem{neemanslides} I.\ Neeman, Higher analog of the proper forcing axiom, talk at the Fields Institute, Toronto, October 2012. \bibitem{roslanowski} A.\ Roslanowski, S.\ Shelah, More forcing notions imply diamond, Arch. Math. Logic 35 (1996) 299--313. \bibitem{proper} S.\ Shelah, Proper and improper forcing, Springer, 1998.
\bibitem{pfa} S.\ Todorcevic, A note on the proper forcing axiom, in: Axiomatic Set Theory (Boulder, Colorado, 1983), volume 31 of Contemporary Mathematics, pages 209--218, Amer. Math. Soc., 1984, eds. J.\ Baumgartner, D.\ Martin and S.\ Shelah. \bibitem{note} S.\ Todorcevic, Kurepa tree with no stationary antichain, note of September 1987. \bibitem{note2} S.\ Todorcevic, Forcing club with finite conditions, note of December 1982. \end{thebibliography} \end{document}